We can now pass arrays of rule classes to the parser constructor, overriding the rules that would normally be used by the parser.
This allows us to create custom variants of the wikitext parser with their own content type.
It could also provide a basis for a new Markdown parser based on our existing wikitext parser but with new rules.
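As a minimal sketch of what this enables, assuming the parser class is reachable via `$tw.Wiki.parsers` and that the constructor accepts a `rules` option with `pragma`/`block`/`inline` arrays (the rule classes shown are hypothetical):

```js
// Build a parser variant that only knows a few rules
// (HeadingRule and BoldRule are hypothetical rule classes)
var WikiParser = $tw.Wiki.parsers["text/vnd.tiddlywiki"];
var parser = new WikiParser(null,"! Hello ''world''",{
	wiki: $tw.wiki,
	rules: {
		pragma: [],           // no pragma rules
		block: [HeadingRule], // custom block rules only
		inline: [BoldRule]    // custom inline rules only
	}
});
```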
The bug fixed in this commit had an interesting side effect when the
location hash started with #, e.g. it looked like wiki.html##foo.
In that case, TiddlyWiki's navigation processing is not triggered and
the browser's navigation processing is used instead, which allows
anchors to be used within tiddlers for sub-tiddler navigation. To
preserve this unintended but useful side effect, we now check whether
the location hash itself starts with # and, if so, ignore it.
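A minimal sketch of the guard, assuming the navigation code reads the raw location hash from the DOM:

```js
// If the hash content itself begins with "#" (i.e. the address looks
// like wiki.html##foo), bail out and let the browser handle the anchor
var hash = decodeURIComponent(document.location.hash.substr(1));
if(hash.charAt(0) === "#") {
	return; // browser-native anchor navigation within the tiddler
}
```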
In some environments (at least on my own machine), TiddlyWiki detects zip files as type `"application/x-zip-compressed"` instead of `"application/zip"`. This commit adds support for zip files with type `"application/x-zip-compressed"` so that they are encoded in `"base64"` like other zip files with type `"application/zip"`.
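One plausible shape of the fix, using the `$tw.utils.registerFileType` helper that declares file types:

```js
// Register both zip MIME types with base64 encoding
$tw.utils.registerFileType("application/zip","base64",".zip");
$tw.utils.registerFileType("application/x-zip-compressed","base64",".zip");
```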
* First pass at dynamic loading/unloading
* Show warning for changes to plugins containing JS modules
* Use $:/config/RegisterPluginType/* for configuring whether a plugin type is automatically registered
Where "registered" means "the constituent shadows are loaded".
* Fix the info plugin
The previous mechanism re-read all plugin info during startup
* Don't prettify JSON in the plugin library
* Indicate in plugin library whether a plugin requires reloading
* Display the highlighted plugin name in the plugin chooser
And if there's no name field, fall back to the part of the title after the final slash.
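As a hedged sketch of the config lookup described above (the helper name is illustrative, not the core API):

```js
// A plugin type is auto-registered (i.e. its constituent shadows are
// loaded) if its config tiddler is set to "yes"
function isPluginTypeRegistered(pluginType) {
	var configTitle = "$:/config/RegisterPluginType/" + pluginType;
	return $tw.wiki.getTiddlerText(configTitle,"no") === "yes";
}
```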
I (Flibbles) changed it so that lists generated by stringifyList
would always be compatible with the filter parser, but since lists
are not, and never will be, a subset of filters, there's no point
in doing so.
More importantly, wrapping negative numbers like "-7" in double
square brackets would break some of the mathematical filter
operations.
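For illustration, assuming stringifyList now brackets only entries containing spaces:

```js
$tw.utils.stringifyList(["alpha","two words","-7"]);
// → "alpha [[two words]] -7"  ("-7" stays unbracketed)
```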
Fixes #4082
This version removes selective updating of the tag index, instead completely clearing the index on each update. I'm investigating restoring that optimisation.
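A minimal sketch of the simpler invalidation strategy (the class shape, method names, and `buildIndex` helper are all illustrative):

```js
function TagIndexer(wiki) {
	this.wiki = wiki;
	this.index = null; // tag → array of titles, built lazily
}

// Clear the whole index on every change instead of patching entries
TagIndexer.prototype.update = function(updateDescriptor) {
	this.index = null;
};

TagIndexer.prototype.lookup = function(tag) {
	if(!this.index) {
		this.index = this.buildIndex(); // hypothetical full rebuild
	}
	return this.index[tag] || [];
};
```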
The test rig previously used a simplified implementation of shadow tiddlers which broke with the new indexing engine. There was also a problem whereby indexers were still initialised even when they were disabled.
This PR fixes both problems, in preparation for fixing #4082
* First pass at modular wiki indexes
An exploratory experiment
* Fix tests
* Faster checking for existence of index methods
We don't really need to check the type
* Use the index for the has operator
* Fix typo
* Move iterator index methods into indexer modules
Now boot.js doesn't know the core indexers
* Fix up the other iterator index functions
* Fix crash with missing index branch
* Limit the field indexer to values less than 128 characters
* Fall back to the old manual scan if the index method returns null (see the sketch after this list)
* Sadly, we can no longer re-use the field indexer to accelerate the `has` operator, because the index now omits tiddlers that have field values longer than the limit
We still need to expose the index configuration somehow
* Rearrange tests so that we can test with and without indexers
We also need to expose the list of enabled indexers as a config option
* Test the field indexer with different length fields
So that we test the indexed and non-indexed codepaths
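Here is a hedged sketch of the indexer contract the bullets imply (the names are illustrative, not the core API):

```js
function FieldIndexer(wiki) {
	this.wiki = wiki;
	this.maxValueLength = 128; // values at or above this are not indexed
	this.index = null;         // built lazily
}

// Return null to tell the caller to fall back to the old manual scan
FieldIndexer.prototype.lookup = function(field,value) {
	if(value.length >= this.maxValueLength) {
		return null; // omitted from the index, so we cannot answer
	}
	var bucket = this.index && this.index[field];
	return bucket ? (bucket[value] || []) : null;
};
```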
First part of fix for #3875
The idea is to do a better job of distinguishing JSON files that contain tiddlers from those that contain plain blobs of JSON that should be stored as a single application/json tiddler.
Under Node.js, .json files with an accompanying .meta file are always treated as a JSON blob. Without a meta file, files that do not appear to contain valid tiddlers are returned as a JSON blob; otherwise the tiddlers within the file are imported.
In the browser, we don't have .meta files so we rely on the valid tiddler check.
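A minimal sketch of the decision, with `isValidTiddlerArray` standing in as a hypothetical validity check:

```js
var data = JSON.parse(text);
if(isValidTiddlerArray(data)) {
	return data; // looks like tiddlers: import them individually
} else {
	// plain blob of JSON: store the whole file as a single tiddler
	return [{title: filename, type: "application/json", text: text}];
}
```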
* Added flag to $tw.utils.parseStringArray to allow non-unique entries
With this change, calling $tw.utils.parseStringArray(list) gives identical behaviour to before, enforcing uniqueness in lists, while $tw.utils.parseStringArray(list,true) allows duplicate values in the list (see the example after this list).
Because JavaScript simply treats missing arguments as undefined, this shouldn't have any effect on existing code that passes just one argument to the function.
* Update to hopefully remove merge conflicts
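For example, the new flag in use:

```js
$tw.utils.parseStringArray("a b a");      // → ["a","b"]     (unique, as before)
$tw.utils.parseStringArray("a b a",true); // → ["a","b","a"] (duplicates kept)
```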
I noticed that the rendering of a TOC with around 200 entries seemed frustratingly slow.
First, I analysed the execution of the code using the "Timeline" tab of the Chrome developer tools: prepare by switching to the "Tools" tab, then start profiling, switch to the "Contents" tab, and then stop profiling once it has displayed. I then used the "bottom-up" view to determine that the various Object.keys() calls in the main wiki store were taking around 500ms of the overall time.
Before making any code changes, I also used TW's built in instrumentation to get some baseline timings: I found that the main refresh cycle was taking around 3.0s when rendering the Contents tab.
I then performed the attached simple optimisations of caching the list of tiddler titles and the list of shadow tiddler titles.
The results bring the overall main refresh time down to about 1.9s, a reduction of more than a third.
The moral of the story is that the first rule of optimisation is measurement...
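A hedged sketch of the caching pattern applied (the names are illustrative; the real store's internals differ):

```js
function TitleCache(store) {
	this.store = store; // store: an object mapping title → tiddler
	this.cache = null;
}

TitleCache.prototype.getTitles = function() {
	if(!this.cache) {
		// The expensive call we want to avoid repeating on every refresh
		this.cache = Object.keys(this.store).sort();
	}
	return this.cache;
};

TitleCache.prototype.invalidate = function() {
	this.cache = null; // call whenever a tiddler is added or deleted
};
```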
* enable doc contributions for dev
fixes #2921
* involves changes to boot.js to properly build OriginalTiddlerPaths on
Windows
* added ContributionBanner
* added Sources tab to info panel
* updated tiddlywiki.info for dev
* normalize path separator to posix for windows
* more generically transform to posix
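A minimal sketch of the generic transform (the function name is illustrative):

```js
// Convert Windows backslash separators to POSIX forward slashes so
// that $:/config/OriginalTiddlerPaths is consistent across platforms
function toPosixPath(filepath) {
	return filepath.replace(/\\/g,"/");
}
```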