Functionality for allowing external tools to import data from command line #31

Open
opened 5 months ago by badsector · 2 comments

A simple way to allow external tools to import data from the command line (e.g. Python scripts or whatever to convert Markdown or any other format) could be to have the code check for a script element with JSON content that is empty by default; if it has content, merge it with the JSON content of the pages and mark the site as "modified".

So, for example: currently pages are saved in "<script id=p type=application/json>...</script>"; add a second element somewhere with the code "<script id=np type=application/json></script>". Then have code (that runs when the wiki loads) check for and merge any pages defined inside np (replacing existing pages) and mark the wiki as "modified", preferably with an option to save the site without the pages inside "np".

This would allow any script or external program to do a search and replace for "<script id=np type=application/json></script>" to insert JSON code for new/replacement pages, which can even be used in an automated manner (e.g. convert generated documentation from C++/Java/etc. to embeddable HTML and embed it under a generated node inside the wiki). This can be helpful for distributing single-file documentation that merges a guide and a reference, for example, with minimal overhead to FeatherWiki's source code.
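To illustrate the idea, here is a minimal sketch of what such an external tool could look like in Python, assuming the proposed (not yet implemented) empty `np` script element exists in the saved wiki file. Note that no parsing of the existing wiki JSON is needed; the tool only does a plain string replacement, and the page field names used here are purely illustrative:

```python
# Sketch of the proposed import step. The "np" element and the page
# field names are hypothetical -- this is the *proposed* mechanism,
# not something FeatherWiki currently supports.
import json

PLACEHOLDER = '<script id=np type=application/json></script>'


def inject_pages(wiki_html: str, pages: list) -> str:
    """Fill the empty `np` script block with new/replacement page JSON."""
    payload = json.dumps(pages)
    filled = f'<script id=np type=application/json>{payload}</script>'
    # If the empty placeholder is missing, the block may already hold
    # pending import data (or the file is not a compatible wiki).
    assert PLACEHOLDER in wiki_html, 'empty np placeholder not found'
    return wiki_html.replace(PLACEHOLDER, filled, 1)


# Example: push one converted page into a saved wiki file.
html = '<html><body><script id=np type=application/json></script></body></html>'
new_pages = [{"slug": "changelog", "name": "Changelog",
              "content": "<p>Generated from project docs</p>"}]
print(inject_pages(html, new_pages))
```

On load, the wiki itself would then merge the `np` content into its page list, as described above.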

Alamantus added the question label 5 months ago
Owner

If I'm understanding this correctly, the problem trying to be solved here is programmatically adding pages from an external source, is that correct?

If someone wanted to write a tool like this, it would only take a little bit of extra work to insert it directly into the <script id=p type=application/json> block (which I'm realizing now should have had an id of w for wiki... oh well) without needing an extra block to queue new data.

I don't know how OK you are with technical talk, but here I go.

The JSON object inside the p script block is deflated, but when expanded, it looks something like this:

```
{
  "A": "Feather Wiki",
  "B": "A lightweight quine for simple wikis!",
  "I": [
    {
      "C": ".a'Y4HBuk",
      "A": "About",
      "D": "about",
      "E": 1652303576350,
      "F": "<p>This is my page content!</p>",
      "G": 1653201443382
    },
    ...
  ],
  "J": {
    "1186914097": {
      "J": "data:image....some...image...data...",
      "K": "Feather Wiki logo",
      "L": [200,200]
    }
  },
  "M": false,
  "N": ".a'Y4HBuk",
  "_": {
    "A": "name",
    "B": "desc",
    "C": "id",
    "D": "slug",
    "E": "cd",
    "F": "content",
    "G": "md",
    "H": "parent",
    "I": "pages",
    "J": "img",
    "K": "alt",
    "L": "size",
    "M": "published",
    "N": "home"
  }
}
```

All of the object keys are stored in a "_" key at the root of the JSON object, so a programmer who's determined enough could programmatically match those keys up to find that the pages key is under "I" in this particular object. From there, the pages are stored in an array, which said programmer could use to determine what keys would need to be used to insert new page objects directly into the array.

When the import is done, the updated JSON object can simply be put back into the p script block and be loaded as normal!
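The direct-insertion approach above can be sketched in a few lines of Python. This operates on the *expanded* JSON only; the block as stored is deflated, and the decompression/recompression steps are outside this sketch. The only structural assumption is the "_" key map shown in the example:

```python
# Sketch: insert a page directly into the expanded wiki JSON by
# inverting the "_" key map to discover the short keys. Decompressing
# the deflated block (and re-deflating afterward) is assumed to have
# been handled separately.
import json


def add_page(expanded_json: str, page: dict) -> str:
    data = json.loads(expanded_json)
    # Invert the map: {"A": "name", ...} -> {"name": "A", ...}
    keys = {v: k for k, v in data['_'].items()}
    # Translate human-readable page fields into the wiki's short keys.
    encoded = {keys[field]: value for field, value in page.items()}
    data[keys['pages']].append(encoded)
    return json.dumps(data)


# A trimmed-down wiki object for demonstration.
wiki = json.dumps({
    "A": "Feather Wiki", "I": [],
    "_": {"A": "name", "D": "slug", "F": "content", "I": "pages"},
})
updated = add_page(wiki, {"name": "Changelog", "slug": "changelog",
                          "content": "<p>Generated</p>"})
print(updated)
```

Because the short keys are looked up through "_" rather than hard-coded, the same tool keeps working even if the letter assignments change between wikis.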

As I extend the website, I'll include things like this in a sort of hacking guide for extending Feather Wiki, but does this direct-insertion import of pages/data sound kind of like what you are looking for or am I misunderstanding your post?

Poster

I did initially consider modifying the array directly; however, having a separate array has two benefits:

  1. No need to parse the existing JSON array - this makes it much easier for one-way conversions: as long as the emitted string data is valid JSON, the insertion can be done by any programming language (or even shell scripts with sed and/or awk) without needing to really understand JSON (most languages have a JSON implementation, but it isn't always available out of the box).

  2. Being able to preview the last insert of new data without having to "accept" it (specifically, being able to remove it after the fact by keeping the unmodified array in memory until the wiki is saved).

Of course, just modifying the array directly in place would work; I just think it'd be more convenient (and safer - think about improper parsing of the existing data leading to data loss) to have the wiki do that.

Reference: Alamantus/FeatherWiki#31