* Make currentTiddler variable consistent in subfilters and filter run prefixes
* Updated filter run prefix and subfilter operators to use ..currentTiddler instead of outerCurrentTiddler
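A minimal sketch of the new variable in use; the `:filter` run prefix and the surrounding widget are illustrative only, and in practice the filter would be evaluated in a widget context where currentTiddler is defined:

```js
// Inside the run, `currentTiddler` is the candidate title being processed,
// while `..currentTiddler` refers to the tiddler that is current in the
// enclosing context (previously exposed as outerCurrentTiddler).
// Keeps tiddlers tagged "task" that are also tagged with the outer tiddler.
var related = $tw.wiki.filterTiddlers(
	"[tag[task]] :filter[tag<..currentTiddler>]",
	$tw.rootWidget // in real use, a widget supplying the outer currentTiddler
);
```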
* Merge
* Clean up
* More clean up
* Ensure image import works when type is not set, clean up post import actions
* Removed spurious new line
* For non-image files, insert a tiddler link
* Added documentation for new settings and features
* Add :sort filter run prefix, docs and tests. Also extended $tw.utils.makeCompareFunction with a flag for case sensitivity.
* Documentation updates
* Move case sensitivity handling entirely to utils method so it is reusable
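A sketch of the new prefix and the reusable comparison utility together; the exact suffix spelling and the name of the case-sensitivity option are assumptions worth checking against the docs and the utility itself:

```js
// Sort the tiddlers tagged "task" by their caption, ignoring case
var sorted = $tw.wiki.filterTiddlers(
	"[tag[task]] :sort:string:caseinsensitive[get[caption]]"
);

// The comparison logic lives in the shared utility; the option name for
// case sensitivity is assumed here
var compare = $tw.utils.makeCompareFunction("string",{isCaseSensitive: false});
console.log(["Banana","apple"].sort(compare)); // ["apple","Banana"]
```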
* Make filter run prefixes extensible
* Support rich suffixes for filter runs
* Resolved merge conflicts
* Pass suffixes to filterrunprefix
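For illustration, a hypothetical third-party filter run prefix module; the export signature mirrors the core prefixes of this era (where the accumulated results are a plain array), and the `options.suffixes` name is an assumption based on the commits above:

```js
/*\
title: $:/plugins/example/filterrunprefixes/example.js
type: application/javascript
module-type: filterrunprefix

Hypothetical ":example" filter run prefix
\*/
"use strict";

exports.example = function(operationSubFunction,options) {
	return function(results,source,widget) {
		// Rich suffixes such as ":example:foo,bar[...]" would arrive via
		// options.suffixes (assumed name)
		// Append this run's titles to the accumulated results, de-duplicated
		$tw.utils.pushTop(results,operationSubFunction(source,widget));
	};
};
```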
* Extend dropzone widget with optional actions invoked after the tm-import-tiddlers message has been sent. This allows triggering an alternative UX for the import
* Allow restricting a dropzone to specific MIME types via the mimeTypes and mimeTypesPrefix attributes
* Use a mimeTypesFilter instead of the mimeTypes and mimeTypesPrefix attributes
* Updated refresh handling
* Syntax cleanup
* Replace references to mimeType with content type for consistency with existing documentation. Update documentation for DropZone widget
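Putting the dropzone changes together, a hypothetical usage sketch. The tiddler title and the `<<myPostImportActions>>` macro are invented, and since the filter attribute was renamed in terms of content types, the final attribute name should be checked against the DropZone widget documentation:

```js
// Stored via addTiddler() only so that this sketch stays in JavaScript;
// contentTypesFilter reflects the content-type rename above (it replaced
// the mimeTypes/mimeTypesPrefix attributes)
$tw.wiki.addTiddler({
	title: "$:/my/image-dropzone", // hypothetical title
	text: [
		"<$dropzone actions=<<myPostImportActions>> contentTypesFilter='[prefix[image/]]'>",
		"Drop image files here",
		"</$dropzone>"
	].join("\n")
});
```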
* Double quotes are no longer escaped in HTML bodies
* Changed tiddlyweb's html-div-tiddler; updated the documentation (the French version still needs a translation)
* Replace css-escape-polyfill.js with escapecss.js utility module
* Add $tw.utils.escapeCSS() method and invoke that function within the escapecss operator.
* Add test cases for the "escapecss" filter operator
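A quick sketch of the utility and the operator side by side; the tiddler title is arbitrary:

```js
// Escape a title for safe use in a CSS selector, following the behaviour
// the old css-escape polyfill provided
var escaped = $tw.utils.escapeCSS("tiddler with spaces & symbols");

// The filter operator invokes the same utility, so both should agree
var viaFilter = $tw.wiki.filterTiddlers("[[tiddler with spaces & symbols]escapecss[]]")[0];

console.log(escaped === viaFilter); // true
```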
* Fix $tw.boot.doesTaskMatchPlatform() so it works as expected if a module's exports.platforms contains more than one value
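A hypothetical startup module illustrating what the fix enables: declaring more than one platform now behaves as expected.

```js
/*\
title: $:/plugins/example/startup-example.js
type: application/javascript
module-type: startup

Hypothetical startup task intended to run in the browser and under Node.js
\*/
"use strict";

exports.name = "example-startup";
exports.platforms = ["browser","node"]; // multiple entries are now matched correctly
exports.synchronous = true;

exports.startup = function() {
	// runs on both platforms
};
```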
* Add files missed from the last commit
We can now pass arrays of rule classes to the parser constructor, overriding the rules that would normally be used by the parser.
This allows us to create custom variants of the wikitext parser with their own content type.
It could also provide a basis for a new Markdown parser based on our existing wikitext parser but with new rules.
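A minimal sketch of how this might be used; the shape of the rules option shown here is an assumption, and the real option names should be taken from the WikiParser constructor:

```js
var WikiParser = require("$:/core/modules/parsers/wikiparser/wikiparser.js")["text/vnd.tiddlywiki"];

var parser = new WikiParser("text/vnd.tiddlywiki","Some ''bold'' text",{
	wiki: $tw.wiki,
	// Assumed option shape: arrays of rule classes per category, overriding
	// the rules the parser would normally collect from the wikirule modules
	rules: {
		pragma: [],
		block: [],
		inline: [] // e.g. supply just the bold rule class here
	}
});

console.log(JSON.stringify(parser.tree,null,2));
```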