Compare commits

...

2 Commits

Author SHA1 Message Date
osmarks ade0c9e523 add "useful" information in sidenotes 2023-11-19 21:30:47 +00:00
osmarks e25013c1b4 Apparently I changed everything and forgot to commit it.
- I just added sidenotes (blog being rewritten slightly to incorporate them; WIP)
- Microblog added, compiler caching mechanism reworked
- Image compression
2023-11-19 21:06:25 +00:00
61 changed files with 2062 additions and 77 deletions

3  .gitignore (vendored)

@@ -2,4 +2,5 @@ node_modules
out
openring
draft
cache.json
cache.sqlite3

(19 binary image files added; previews omitted; 335–616 KiB each)

BIN  assets/images/opinion.jpg (new file, 5.5 KiB; preview omitted)

(6 binary image files added; previews omitted; 182–559 KiB each)

BIN  assets/images/polcal.jpg (new file, 8.0 KiB; preview omitted)

(13 binary image files added; previews omitted; 247–577 KiB each)

4  assets/js/date-fns.js (new file)

File diff suppressed because one or more lines are too long

10  blog/computercraft.md (new file)

@@ -0,0 +1,10 @@
---
title: ComputerCraft is peak computing
description: It may be a janky Minecraft mod, but in some ways it's nicer than lots of modern software stacks.
slug: computercraft
created: 18/11/2023
draft: yes
---
I have been thinking about [ComputerCraft](https://tweaked.cc/) slightly recently, because of moving [several years of archived code](https://github.com/osmarks/random-stuff/tree/master/computercraft) from Pastebin and some private internal repositories to public view (and writing some minor patches to [PotatOS](https://potatos.madefor.cc/)), and it increasingly seems like a model of what computers *should* be like, one which highlights the shortcomings of everything else.
Computers undoubtedly grow more powerful every year, as fabs wrangle quantum electrodynamics into providing ever better and smaller transistors at great cost and the handful of companies still at the cutting edge refine their architectures slightly, but, [as has been noted](https://danluu.com/input-lag/), this doesn't actually translate into better user experience.


@@ -1,7 +1,7 @@
---
title: "Maghammer: My personal data warehouse"
created: 28/08/2023
updated: 29/08/2023
updated: 12/09/2023
description: Powerful search tools as externalized cognition, and how mine work.
slug: maghammer
---
@@ -23,9 +23,9 @@ You'll note that not all of these projects make any attempt to work on non-text
## Why?
Why do I want this? Because human memory is very, very bad. My (declarative) memory is much better than average, but falls very far short of recording everything I read and hear, or even just the source of it (I suspect this is because of poor precision (in the information retrieval sense) making better recall problematic, rather than actual hard limits somewhere - there are documented people with photographic memory, who report remembering somewhat unhelpful information all the time - but without a way to change that it doesn't matter much). According to [Landauer, 1986](https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1004_4)'s estimates, the amount of retrievable information accumulated by a person over a lifetime is less than a gigabyte, or <0.05% of my server's disk space. There's also distortion in remembered material which is hard to correct for. Information is simplified in ways that lose detail, reframed or just changed as your other beliefs change, merged with other memories, or edited for social reasons.
Why do I want this? Because human memory is very, very bad. My (declarative) memory is much better than average, but falls very far short of recording everything I read and hear, or even just the source of it[^1]. According to [Landauer, 1986](https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1004_4)'s estimates, the amount of retrievable information accumulated by a person over a lifetime is less than a gigabyte, or <0.05% of my server's disk space[^5]. There's also distortion in remembered material which is hard to correct for. Information is simplified in ways that lose detail, reframed or just changed as your other beliefs change, merged with other memories, or edited for social reasons.
Throughout human history, even before writing, the solution to this has been externalization of cognitive processing: other tiers in the memory hierarchy with more capacity and worse performance. While it would obviously be [advantageous](/rote/) to be able to remember everything directly, just as it would be great to have arbitrarily large amounts of fast SRAM to feed our CPUs, tradeoffs are forced by reality. Oral tradition and culture were the first implementations, shifting information from one unreliable human mind to several so that there was at least some redundancy. Writing made for greater robustness, but the slowness of writing and copying (and for a long time expense of hardware) was limiting. Printing allowed mass dissemination of media but didn't make recording much easier for the individual. Now, the ridiculous and mostly underexploited power of contemporary computers makes it possible to literally record (and search) everything you ever read at trivial cost, as well as making lookups fast enough to integrate them more tightly into workflows. Roam Research popularized the idea of notes as a "second brain", but it's usually the case that the things you want to know are not ones you thought to explicitly write down and organize.
Throughout human history, even before writing, the solution to this has been externalization of cognitive processing: other tiers in the memory hierarchy with more capacity and worse performance. While it would obviously be [advantageous](/rote/) to be able to remember everything directly, just as it would be great to have arbitrarily large amounts of fast SRAM to feed our CPUs, tradeoffs are forced by reality. Oral tradition and culture were the first implementations, shifting information from one unreliable human mind to several so that there was at least some redundancy. Writing made for greater robustness, but the slowness of writing and copying (and for a long time expense of hardware) was limiting. Printing allowed mass dissemination of media but didn't make recording much easier for the individual. Now, the ridiculous and mostly underexploited power of contemporary computers makes it possible to literally record (and search) everything you ever read at trivial cost, as well as making lookups fast enough to integrate them more tightly into workflows. Roam Research popularized the idea of notes as a "second brain"[^2], but it's usually the case that the things you want to know are not ones you thought to explicitly write down and organize.
More concretely, I frequently read interesting papers or blog posts or articles which I later remember in some other context - perhaps they came up in a conversation and I wanted to send someone a link, or a new project needs a technology I recall there being good content on. Without good archiving, I would have to remember exactly where I saw it (implausible) or use a standard, public search engine and hope it will actually pull the document I need. Maghammer (mostly) stores these and allows me to find them in a few seconds (fast enough for interactive online conversations, and not that much slower than Firefox's omnibox history search) as long as I can remember enough keywords. It's also nice to be able to conveniently find old shell commands for strange things I had to do in the past, or look up sections in books (though my current implementation isn't ideal for this).
@@ -41,6 +41,7 @@ Currently, I have custom scripts to import this data, which are run nightly as a
* Unorganized text/HTML/PDF files in my archives folder.
* Books (EPUB) stored in Calibre - overall metadata and chapter full text.
* Media files in my archive folder (all videos I've watched recently) - format, various metadata fields, and full extracted subtitles with full text search.
* I've now added [WhisperX](https://github.com/m-bain/whisperX/) autotranscription on all files with bad/nonexistent subtitles. While it struggles with music more than Whisper itself, its use of batched inference and voice activity detection meant that I got ~100x realtime speed on average processing all my files (after a patch to fix the awfully slow alignment algorithm). A minimal usage sketch follows this list.
* [Miniflux](/rssgood/) RSS feed entries.
* [Minoteaur](/minoteaur/) notes, files and structured data. I don't have links indexed since SQLite isn't much of a graph database (no, I will not write a recursive common table expression for it), and my importer reads directly off the Minoteaur database and writing a Markdown parser would have been annoying.
* RCLWE web history (including the `circache` holding indexed pages in my former Recoll install).
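As referenced above, a minimal sketch of the batched WhisperX transcription step, assuming the current WhisperX Python API; the model size, batch size and file path here are illustrative placeholders, not the actual importer configuration:

```python
# Sketch only: batched WhisperX transcription plus an alignment pass.
# Model size, batch size and file path are illustrative assumptions.
import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device, compute_type="float16")

audio = whisperx.load_audio("some_video.mkv")    # ffmpeg handles the demuxing
result = model.transcribe(audio, batch_size=16)  # batched inference + VAD

# Optional alignment pass for more accurate word/segment timestamps.
align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)
for seg in aligned["segments"]:
    print(f"[{seg['start']:.1f}-{seg['end']:.1f}] {seg['text']}")
```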
@@ -75,10 +76,24 @@ Being built out of a tool intended for quantitative data processing means that I
While it's not part of the same system, [Meme Search Engine](https://mse.osmarks.net/) is undoubtedly useful to me for rapidly finding images (memetic images) I need or want - so much so that I have a separate internal instance run on my miscellaneous-images-and-screenshots folder. Nobody else seems to even be trying - while there are a lot of demos of CLIP image search engines on GitHub, and I think one with the OpenAI repository, I'm not aware of *production* implementations with the exception of [clip-retrieval](https://github.com/rom1504/clip-retrieval) and the LAION index deployment, and one iPhone app shipping a distilled CLIP. There's not anything like a user-friendly desktop app, which confuses me somewhat, since there's clearly demand amongst people I talked to. Regardless of the reason, this means that Meme Search Engine is quite possibly the world's most advanced meme search tool (since I bothered to design a nice-to-use query UI and online reindexing), although I feel compelled to mention someone's [somewhat horrifying iPhone OCR cluster](https://findthatmeme.com/blog/2023/01/08/image-stacks-and-iphone-racks-building-an-internet-scale-meme-search-engine-Qzrz7V6T.html). Meme Search Engine is not very well-integrated but I usually know which dataset I want to retrieve from anyway.
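For reference, the core of a CLIP retrieval system like this is small; here is a hedged sketch using the Hugging Face `transformers` CLIP API (the model name, query and image paths are placeholder assumptions, and a real deployment would precompute and index the image embeddings rather than scoring on the fly):

```python
# Sketch of CLIP-style retrieval: embed images and a text query into the
# same space and rank images by similarity. Not Meme Search Engine's code.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = ["meme1.png", "meme2.png"]  # hypothetical files
inputs = processor(text=["surprised cat"], images=[Image.open(p) for p in paths],
                   return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_text[0]  # similarity of query to each image
best = scores.argmax().item()
print(paths[best], scores.tolist())
```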
I've also now implemented semantic search using [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) embeddings. It turns out that I have more data than I thought, so this was somewhat challenging. Schematically, a custom script (implemented in a Datasette plugin for convenience, although it probably shouldn't be) dumps the contents of FTS tables, splits them into chunks, generates embeddings, and inserts the embeddings and location information into a new database, as well as embeddings and an ID into a [FAISS](https://github.com/facebookresearch/faiss/) index. When a search is done, the index is checked, the closest vectors found, filtering done (if asked for) and the relevant text (and other metadata e.g. associated URL and timestamp) found and displayed.
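Schematically, the dump-and-embed step looks something like the following sketch, assuming `sentence-transformers`; the chunking and input here are simplified placeholders (the real pipeline also records which rows and tables each chunk came from). Note that the e5 models expect "passage: "/"query: " prefixes:

```python
# Sketch of the chunk-and-embed stage. Chunking is deliberately crude;
# a real importer would also store per-chunk provenance.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-large-v2")

def chunks(text, size=400):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

docs = ["...full text dumped from an FTS table..."]  # placeholder input
pieces = [c for d in docs for c in chunks(d)]
# e5 was trained with these prefixes; search queries use "query: " instead.
embeddings = model.encode(["passage: " + c for c in pieces], normalize_embeddings=True)
print(embeddings.shape)  # (n_chunks, 1024)
```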
It is actually somewhat more complex than that for various reasons. I had to modify all the importer scripts to log which rows they changed in a separate database, as scanning all databases for new changes would probably be challenging and slow, and the dump script reads off that. Also, an unquantized (FP16) index would be impractically large given my available RAM (5 million vectors × 1024 dimensions × 2 bytes ≈ 10GB), as well as slow (without using HNSW/IVF). To satisfy all the constraints I was under, I settled on a fast-scan PQ (product quantization) index[^4] (which fit into about 1GB of RAM and did search in 50ms) with a reranking stage where the top 1000 items are retrieved from disk and reranked using the original FP16 vectors (and the relevant text chunks retrieved simultaneously). I have no actual benchmarks of the recall/precision of this but it seems fine. This is probably not a standard setup because of throughput problems - however, I only really need low latency (the target was <200ms end-to-end and this is just about met) and this works fine.
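A minimal sketch of that two-stage setup, assuming FAISS's fast-scan PQ factory strings and a memory-mapped file of FP16 vectors; names, sizes and parameters are illustrative, not the actual code:

```python
# Sketch: 4-bit fast-scan PQ index for the cheap first pass, then exact
# reranking of the top candidates against full-precision vectors on disk.
import faiss
import numpy as np

d = 1024  # e5-large-v2 embedding dimension
vectors = np.memmap("embeddings_fp16.bin", dtype=np.float16, mode="r").reshape(-1, d)

index = faiss.index_factory(d, "PQ128x4fs", faiss.METRIC_INNER_PRODUCT)
sample = vectors[np.random.choice(len(vectors), 100_000, replace=False)].astype(np.float32)
index.train(sample)
index.add(np.asarray(vectors, dtype=np.float32))  # in reality, add in batches

def search(query, k=10, rerank_depth=1000):
    q = np.asarray(query, dtype=np.float32).reshape(1, d)
    _, cand = index.search(q, rerank_depth)         # approximate first pass
    ids = cand[0][cand[0] >= 0]
    exact = vectors[ids].astype(np.float32) @ q[0]  # rerank with FP16 originals
    order = np.argsort(-exact)[:k]
    return ids[order], exact[order]
```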
## Future directions
The system is obviously not perfect. As well as some minor gaps (browser history isn't actually put in a full-text table, for instance, due to technical limitations), many data sources (often ones with a lot of important content!) aren't covered, such as my emails and conversation history on e.g. Discord. I also want to make better use of ML - for instance, integrating things like Meme Search Engine better, local Whisper autotranscription of videos rather than having no subtitles or relying on awful YouTube ones, semantic search to augment the default [SQLite FTS](https://www.sqlite.org/fts5.html) (which uses term-based ranking - specifically, BM25), and OCR of screenshots. I still haven't found local/open-source OCR which is both good, generalizable and usable (Apple's software works excellently but it's proprietary). Some of the trendier, newer projects in this space use LLMs to do retrieval-augmented generation, but I don't think this is a promising direction right now - available models are either too dumb or too slow/intensive, even on GPU compute, and in any case prone to hallucination.
The system is obviously not perfect. As well as some minor gaps (browser history isn't actually put in a full-text table, for instance, due to technical limitations), many data sources (often ones with a lot of important content!) aren't covered, such as my emails and conversation history on e.g. Discord. I also want to make better use of ML - for instance, integrating things like Meme Search Engine better, ~~local Whisper autotranscription of videos rather than having no subtitles or relying on awful YouTube ones, semantic search to augment the default [SQLite FTS](https://www.sqlite.org/fts5.html) (which uses term-based ranking - specifically, BM25),~~ and OCR of screenshots. I still haven't found local/open-source OCR which is both good, generalizable and usable[^3]. Some of the trendier, newer projects in this space use LLMs to do retrieval-augmented generation, but I don't think this is a promising direction right now - available models are either too dumb or too slow/intensive, even on GPU compute, and in any case prone to hallucination.
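For contrast with the embedding search, this is roughly what the default FTS5 term-based ranking amounts to; a self-contained sketch with hypothetical table and column names (FTS5's `rank` column is BM25, with more negative meaning a better match):

```python
# Sketch: SQLite FTS5 keyword search ranked by BM25.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE VIRTUAL TABLE docs USING fts5(title, body);
    INSERT INTO docs VALUES
        ('Maghammer', 'personal data warehouse built on SQLite and Datasette'),
        ('Minoteaur', 'wiki-like notes with structured data and search');
""")
query = "data AND search"
for title, score in db.execute(
        "SELECT title, rank FROM docs WHERE docs MATCH ? ORDER BY rank", (query,)):
    print(title, score)  # rank is negative BM25: more negative = better
```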
Another interesting possibility for a redesign I have is a timeline mode. Since my integration plugin (mostly) knows what columns are timestamps, I could plausibly have a page display all relevant logs from a day and present them neatly.
If you have related good ideas or correct opinions, you may tell me them below. The code for this is somewhat messy and environment-specific, but I may clean it up somewhat and release it if there's interest in its specifics.
[^1]: I suspect this is because of poor precision (in the information retrieval sense) making better recall problematic, rather than actual hard limits somewhere - there are documented people with photographic memory, who report remembering somewhat unhelpful information all the time - but without a way to change that it doesn't matter much.
[^2]: [Zettelkasten](https://en.wikipedia.org/wiki/Zettelkasten) and such predate this, but Roam definitely *popularized* it amongst tech people.
[^3]: Phone OSes can do this very well now, but the internals are not open.
[^4]: FAISS has some helpful manuals [like this](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index) describing the various forms available, although there are rather a lot of them which say slightly different things.
[^5]: This is the size assuming optimal compression, but obviously the actual brain has many other concerns and isn't storing things that way. The actual hardware probably holds, very roughly, 10<sup>15</sup> bits.


@@ -27,7 +27,7 @@ While wrong people believe that better software involves more code, I, as an enl
</div>
After deciding that I really did need something which actually worked even if it wasn't perfect, I settled on... installing [DokuWiki](https://www.dokuwiki.org/dokuwiki) - while a PHP application and not particularly modern featurewise, it was known to be robust, supported *most* of what I wanted, and basically worked.
I even dabbled in the horrors of PHP to make some tweaks and plugins I wanted work.
I even dabbled in the horrors of PHP to make some tweaks and plugins I wanted work.[^1]
However, the dream of Minoteaur had not yet died.
Prototypes were developed and reengineered for new, exciting Minoteaurs based on Node.js, SQLite3 and single-page application technologies, to implement a more TiddlyWiki-like UI with multiple pages open at once and offer generally better interactivity.
@@ -50,7 +50,7 @@ Rust having advanced somewhat since the days of Minoteaur 4, it uses asynchronou
It "mostly worked" at the level of Minoteaur 1, but also proved annoying to work on, especially since the Markdown parsing mechanisms are quite annoying (none of the Markdown parsing libraries are particularly easy to *extend*, but `pulldown-cmark` returns an event stream, so I had to write some somewhat terrible code to streamingly process that and count up `[`s and `]`s, which actually then got rewritten to only *partly* do the weird streaming parsing and to mostly hand it off to regexes).
When I got sufficiently annoyed by that again, I rewrote it in Nim for [Minoteaur 6](https://git.osmarks.net/osmarks/minoteaur).
Nim is sort of how I would design a programming language, both in the sense that it makes a lot of nice decisions I agree with (extensive metaprogramming, style insensitivity) and in that it's somewhat quirky and I don't understand why some things happen (particularly with memory management, for which it has seemingly several different incompatible systems which can be switched between at compile time).
Nim is sort of how I would design a programming language, both in the sense that it makes a lot of nice decisions I agree with (extensive metaprogramming, style insensitivity) and in that it's somewhat quirky and I don't understand why some things happen (particularly with memory management, for which it has seemingly several different incompatible systems which can be switched between at compile time[^2]).
It has enough working libraries for things like SQLite and webservers that I thought it worth trying anyway, and it was indeed the most functional Minoteaur at the time, incorporating good SQLite-based search, backlinks, a mostly functional UI, partly style-insensitive links, a reasonably robust parser, a decent UI, and even DokuWiki-like drafts in the editor (a feature I end up using quite often due to things like accidentally closing or refreshing pages).
However, I got annoyed again by the server-rendered design, the terrible, terrible code I had to write to directly bind to a C-based GFM library (I think I at least managed to make it not segfault, even though I don't know why), and probably some things I forgot, leading to the *next* version.
@@ -68,7 +68,7 @@ However, I got annoyed again by the server-rendered design, the terrible, terrib
</div>
Python is my go-to language for rapid prototyping, i.e. writing poor-quality code very quickly, so it made some sense for me to rewrite in that next in 2021.
Minoteaur 7 was a short-lived variant using server rendering, which was rapidly replaced by Minoteaur 7.1, which used a frontend web framework called Svelte for its UI.
Minoteaur 7 was a short-lived variant using server rendering, which was rapidly replaced by Minoteaur 7.1, which used a frontend web framework called Svelte for its UI[^3].
It contained many significant departures from all previous Minoteaurs, mostly for the better: notably, it finally incorporated indirection for pages.
While all previous implementations had just stored pages under their (somewhat normalized) title, I decided that not structuring it that way would be advantageous in order to allow pages to be renamed and referred to by multiple names, so instead pages have a unique, fixed ID and several switchable names.
This introduced the minor quirk that all Markdown parsing and rendering was done on the backend, which was not really how I'd usually do things but did actually make a good deal of the code simpler (since it is necessary to parse things there to generate plaintext for search).
@@ -119,6 +119,7 @@ It can, however:
* store files, and use them as icons for pages for easy recognition (mostly in search results).
* work on phones, somewhat (it was pretty difficult to reliably detect phones as opposed to vertical monitors, and when I got it to work it broke again after my monitor layout changed and Firefox handled it weirdly).
* run JS on the serverside as part of Markdown processing, in lieu of a plugin API (I had to ship the interpreter anyway for KaTeX).
* associate structured data (text or numbers) with pages, and run queries based on that.
Should you actually use it?
Probably not: while it works reliably enough for me, this is because I am accustomed to its strangeness and deliberately designed it to my requirements rather than anyone else's, sometimes in ways which are very hard to change now (for example, adding things like pen drawings would be really hard structurally, and while there was a Minoteaur 8 prototype with a different architecture which would have made that easier, it was worse to write most code for so I didn't go ahead with that), and can rewrite and debug it easily enough if I have to.
@@ -130,4 +131,10 @@ I am not writing this in order to convince people to switch over (that would cre
While it works as-is, mostly, active real-world use has given me ideas about how it could be better.
~~At this time, I'm mostly interested in improving the search mechanism to include phrase queries, negative queries and exact match queries, better integration with external tools (for example, with some engineering effort I could move Anki card specifications into notes and not have to maintain that separately), and a structured data mechanism for attaching machine-readable content to pages.~~
I actually did add some of these. The search mechanism does now allow "exact" and "negative" queries, although it still has some brokenness I intend to fix at some point, and there's a fully featured structured data mechanism. Pages can have a list of key/value pairs attached (numeric or textual) and can then be queried by those using a few operators in the search.
[^1]: I think this was just nice syntax for superscript/subscript formatting which I ultimately realized could just be replaced by TeX, and some ugly hacks to stop it complaining when I upgraded to PHP 8.
[^2]: Apparently it [standardized on](https://nim-lang.org/docs/mm.html) reference counting with cycle detection now.
[^3]: I use this for most new projects now. It's very pleasant to use, and apparently quite fast, which I value to some extent.


@@ -13,4 +13,22 @@ updated: 24/01/2020
* The lack of SD card slots is, again, probably just planned obsolescence.
* Proper physical QWERTY keyboards would be nice, though, as they're such a niche feature, that's probably never going to happen except on a few phones.
* The screens don't need to get bigger. People's hands aren't growing every year. And they don't need more pixels to drain increasingly large amounts of power.
* Removable batteries should come back. When I initially wrote this in 2017 or so, they were pretty common, but now barely any new devices let you *swap the battery*, despite lithium-ion batteries degrading within a few years of heavy use. I know you can't economically do highly modular design in a phone, but this is not a complex, technically difficult or expensive thing to want.
It's now the future (2023) and things have actually improved slightly in some ways but generally remained about the same:
* Notches mostly gave way to punch-hole cutouts for cameras, which are somewhat more tolerable.
* Manufacturers have started offering longer software service lifespans, and Project Treble has had the convenient effect of making it possible to run GSIs on all new devices. While I think this means you don't get updates to vendor firmware components, you can at least get OS-level security updates[^1].
* Battery technology has incrementally improved over the years and SoCs are getting made on better processes with better core designs. This has, of course, been mostly cancelled out by dumber software or something[^2], but you can get a few devices with really good battery capabilities.
* Headphone jacks and micro-SD card slots remain mostly gone, but it turns out that wireless headphones are good now and flash is cheap enough that most phones ship with lots of storage anyway.
* A few highly niche products with physical keyboards still exist. Unfortunately, they're bad in every area aside from having the keyboards so I don't have one[^3].
* Displays are still unreasonably large on most products I guess. At least they can make them much brighter and unnecessarily high-resolution.
* Everyone wants high-refresh-rate displays now. I am told that once you get used to them you can't go back, so I'm avoiding them in order to be able to keep using cheaper display tech.
* We have 5G now, which allows me to use up my entire data plan in mere *minutes* (assuming the theoretical maximum link rate is achieved, which will never actually happen). I've heard that it's simpler and neater internally, but I don't trust telecoms people to ever get this right.
* Foldable phones are cool but I dislike, both aesthetically and for reasons of durability, compromising the solid-brick-of-microelectronics nature of modern phones with (large) mechanical parts, and don't really get the usecase.
[^1]: Assuming Android doesn't drop compatibility with something the vendor code does. I think it actually does that quite a lot. I do not agree with most of Android's design decisions.
[^2]: It's funny and sad to read old phone reviews which praise the performance of devices running single low-IPC cores at 1GHz or so.
[^3]: The most practical right now, inasmuch as BlackBerry/TCL haven't released anything relevant in years, is the [Unihertz Titan (Pocket)](https://www.unihertz.com/products/titan-pocket). It has some cool features aside from the keyboard, but it also has awful cameras, an undersized-by-my-current-standards battery, a bad LCD display, and a MediaTek SoC (according to legend, they're worse at GPL compliance so custom ROMs are lacking).


@@ -50,14 +50,15 @@ Obviously this is just stuff *I* like; you might not like it, which isn't really
* Egan has short story anthologies which I have also read and recommend.
* [Stories of Your Life and Others](https://www.goodreads.com/book/show/223380.Stories_of_Your_Life_and_Others) - just very good short stories. Chiang has written a sequel, [Exhalation](https://www.goodreads.com/book/show/41160292-exhalation), which I also entirely recommend.
* He also wrote [Arrival](https://www.goodreads.com/book/show/32200035-arrival). I like this but not the movie, since the movie's scriptwriters clearly did not understand what was going on.
* [A Hero's War](https://m.fictionpress.com/s/3238329/1/A-Hero-s-War) - bootstrapping industrialization in a setting with magic. Unfortunately, unfinished and seems likely to remain that way.
* [A Hero's War](https://fictionpress.com/s/3238329/1/A-Hero-s-War) - bootstrapping industrialization in a setting with magic. Unfortunately, unfinished and seems likely to remain that way.
* [Snow Crash](https://www.goodreads.com/book/show/40651883-snow-crash) - a fun action story even though I don't take the tangents into Sumerian mythology (?) very seriously.
* Since this list was written, I think it became notorious for introducing the "metaverse" as pushed by Facebook now. This is very silly. Everyone who is paying attention knows that the real metaverse is Roblox.
* [Limitless](https://en.wikipedia.org/wiki/Limitless_(TV_series)) (the movie is also decent) - actually among the least bad depictions of superhuman intelligence I've seen in media, and generally funny.
* [Pantheon](https://en.wikipedia.org/wiki/Pantheon_(TV_series)) - unfortunately cancelled and pulled from streaming (for tax purposes somehow?) and thus hard to watch, but one of about three TV series I've seen on the subject of brain uploads, and I think the smartest. Some day I want my own ominous giant cube of servers in Norway.
* [Pantheon](https://en.wikipedia.org/wiki/Pantheon_(TV_series)) - ~~unfortunately cancelled and pulled from streaming (for tax purposes somehow?) and thus hard to watch,~~ apparently uncancelled and hosted by Amazon now?! Still hard to watch. One of about three TV series I've seen on the subject of brain uploads, and I think the smartest, not that this is a very high bar since it's frequently quite silly (they repeatedly talk about how uploads are just data which can be copied, and then forget this every time it would be useful). Some day I want my own ominous giant cube of servers in Norway.
* [Mark of the Fool](https://www.goodreads.com/series/346305-mark-of-the-fool) - somewhat standardly D&D-like world, but the characters are well-written and take reasonable decisions.
* [Nice Dragons Finish Last](https://www.goodreads.com/series/128485-heartstrikers) - enjoyable urban fantasy.
* [Street Cultivation](https://www.goodreads.com/series/287542-street-cultivation) - again, sane characters who do not make obviously stupid decisions for plot reasons.
* [Nexus](https://www.goodreads.com/book/show/13642710-nexus) - somewhat dumb plot (I think; I read it a while ago and am not far through a reread now) but very cool transhumanist technology.
Special mentions (i.e. "I haven't gotten around to reading these but they are well-reviewed and sound interesting") to:
* [The Divine Cities](https://www.goodreads.com/series/159695-the-divine-cities) by Robert Jackson Bennet.
@@ -72,6 +73,6 @@ Special mentions (i.e. "I haven't gotten around to reading these but they are we
* "house of suns is really very good, you should read" - baidicoot/Aidan, creator of the world-renowned [Emu War](/emu-war) game
* [Singularity Sky](https://www.goodreads.com/book/show/81992.Singularity_Sky) by Charlie Stross.
If you want EPUB versions of the free web serial stuff here for your e-reader, there are tools to generate those, or you can contact me for a copy.
If you want EPUB versions of the free web serials here for your e-reader, there are tools to generate those, or you can contact me for a copy.
You can suggest other possibly-good stuff in the comments and I may add it to an extra section, and pointlessly complain there or [by email](mailto:osmarks@protonmail.com) if you don't like some of this. Please tell me if any links are dead.
You can suggest other possibly-good stuff in the comments and I may add it to an extra section, and pointlessly complain there or [by email](mailto:me@osmarks.net) if you don't like some of this. Please tell me if any links are dead.


@@ -0,0 +1,42 @@
---
title: Stop having political opinions
description: This is, of course, all part of my evil plan to drive site activity through systematically generating (meta)political outrage.
created: 24/09/2023
slug: opinion
draft: yes
---
This may sound strange coming from someone whose website contains things which are clearly [political opinions](/osbill/); I am being [hypocritical](https://www.overcomingbias.com/p/homo-hipocritushtml)/didn't notice/have updated my views since that/am writing hyperbolically or ironically to make a point/do not require myself to have self-consistent beliefs (select your favourite option). Regardless, I think that holding, forming and in various ways acting on political opinions is somewhere between unnecessary and significantly net harmful. I apologize in advance for not using concrete examples for anything in this post, but those would be political opinions.
## Importance, Tractability, Neglectedness
Political interaction is often framed as altruistic or even morally necessary - most notably, voting is a "civic duty" and in some countries compulsory, and it's common for political movements and their participants to believe that they are helping to bring about a better world through their actions, or that they're preventing some other group from doing harm (and thus in some sense doing good) with their ill-posed opinions, misaligned values or sheer evilness. Thus, let's evaluate it as an altruistic act using the [ITN](https://forum.effectivealtruism.org/topics/itn-framework) framework favoured by Effective Altruism. In brief, Importance is the value of fully solving whatever problem you're targeting, Tractability is the marginal value of your input to it (how much an additional unit of work can affect the problem), and Neglectedness is how little the problem is already being worked on.
Politics clearly fails at neglectedness. The majority of people are interested at least to the extent of thinking and talking about it regularly and voting. Very large chunks of media time are allotted to politics, and people readily seek out political content to read and debate. There is no shortage of advocacy groups, think tanks and public intellectuals engaging in politics. You might contend that your favourite political position is neglected and less popular than widely discussed ones, but given that you are aware of it and supporting it, it probably still has a fairly large number of supporters - the world population is quite large, after all - and since you're still in the same field as all the other positions you are competing with them for resources and attention.
It does not do well on tractability. For mostly the same reasons as neglectedness, your marginal contribution is not big. [Voting](https://putanumonit.com/2015/12/30/010-voting/) is, even under fairly optimistic assumptions, very unlikely to change the outcome of an election. Discussing politics with people you know is notorious for never changing anyone's beliefs, and arguments on social media are even less effective - very little discussion surfaces novel ideas and it mostly serves as an ineffective attempt to apply social pressure. The situation with protests and similar activity is perhaps better because there are fewer people doing that, but I do not think their effectiveness is going to be affected much by the addition or removal of a person on the margin, and I am not convinced that they do much in general. Politics is also especially intractable because on many issues, people are actively working against you.
Importance is somewhat more ambiguous. I have been playing fast and loose with the exact definition of "politics" here - while it's clearly true that the sum of everything people want solved via politics is very important, the plausible consequences of something like electing a party you like or having a policy you want implemented are significantly smaller, both from the perspectives of [conflict theory](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) (the frame of political disagreements as battles between groups over values or resource allocation) and mistake theory (political disagreements as good-faith discussions of what the best thing to do is given a shared understanding of goals). Conflict-theoretically, any victory can be eroded by changing power dynamics later or nullified by enemies in the system surrounding it; mistake-theoretically, the impact of policies is very hard to test, let alone know in advance, and many of the issues policies are intended to solve are very complicated and any single solution is unlikely to work very well.
## The Magic Fix-Everything Button
A large amount of modern politics-as-practiced seems to take a specific kind of conflict-theoretic view which I think makes it less important (in that the policies resulting from it will be worse) as well as less tractable (it's easier to persuade people if they don't tie opposing views into their identity, and easier to take actions if you are not battling some other group). Specifically, the belief that the main obstacle to improving the world is simply that evil people are in power, and that if you can demand it insistently enough you can replace them with favorable people who will then fix everything in a simple and obvious way which has heretofore gone unused. This is exemplified by [movements with unclear goals and underspecified demands to fix things](https://www.astralcodexten.com/p/book-review-the-revolt-of-the-public).
While there are absolutely some cases where a bad policy exists for conflict-theoretic reasons (e.g. one group wants to enrich itself at the expense of others and opposition is too diffuse to stop it), the biggest problems we face now have no clean complete solution, only a wide range of possible policy positions with a complex set of tradeoffs. Insistence on a particular consequence without thought to how it might actually be achieved, erasure of tradeoffs, or [ignorance of the reasons](https://en.wiktionary.org/wiki/Chesterton%27s_fence) someone else might be against an obviously-good-to-you policy result in prolonged conflict and ineffective results. Where possible, it's better to try and [move the Pareto frontier](https://www.overcomingbias.com/p/policy_tugowarhtml) with novel solutions rather than attempting to force through a result against others.
This can also lead to, in effect, passivity: not considering solutions to problems other than wrangling large-scale governmental mechanisms. This is also harmful, since the government is [not omnicompetent](https://www.theonion.com/smart-qualified-people-behind-the-scenes-keeping-ameri-1819571706) and anything complicated is mired in horrifying bureaucratic quagmires of impenetrable dysfunction, as are most large-scale organizations.
## Selfish Reasons To Not Participate
Rather than merely not being a public good, I think involvement in politics is even individually harmful. The most obvious reason is opportunity cost - all the time spent reading political news, voting, forming opinions, or having conversations about it could be spent more effectively - but there is the further reason that because people often tie politics to their identities, political discussions are frequently damaging to relationships.
So if it's bad to participate, why is it so popular? The short answer is, to reuse the favourite adage of "ersatz" on the EleutherAI Discord server, "people are insane". We are [adaptation-executors, not fitness-maximizers](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers), built on evolved cognitive heuristics optimized for ancient savannah environments in smaller tribes. It's plausible that in those, tractability and neglectedness were much lower and social missteps or groups moving against you significantly costlier, the resulting strategies misgeneralize to today's world of 8 billion people, and few people bother to explicitly reason about the cost/benefit and override this. The system is also hyperstitious: now that political interaction is considered altruistic and expected, people are incentivized to participate more for signalling reasons.
This can also be blamed on cultural evolution/memetics. As with religions, the most contagious ideologies are selected for and propagate, growing more able to effectively capture human attention regardless of actual value to their hosts. The incentives of media also help: receiving payment for clicks on your videos and articles incentivizes recapitulation of the same process through deliberate design, resulting in content optimized to spread through exploiting outrage and tribalism.
## Universalizability
The most common objection I've heard is along the lines of "but if everyone did this, no political improvement would occur and the world would be much worse off". This is true but irrelevant: I'm not a Kantian and don't only advocate for behaviors which need to apply to everyone at once. In the current state of the world, I think the marginal benefit (to everyone, and to you) of engagement is below the marginal cost and so it should be avoided - if a sufficiently large amount of people agreed with me on this and did so, some of my arguments would apply less and it would become more worthwhile, and I might then argue in favour of political engagement.
Another is the claim that I am a privileged person who is only able to ignore politics because I'm not heavily threatened or discriminated against by existing institutions. This also misses the point somewhat - this affects importance, but not neglectedness or tractability, which are still, I think, so much lower than people's behaviour implies that this argument holds up.
If you have any arguments against my argument I haven't addressed here, please tell me so I can think about them.


@@ -10,7 +10,7 @@ General criticisms of formal education have [already been done](https://en.wikip
I think it's more plausible that teaching focuses on this surface knowledge because it's much easier and more legible, and looks and feels very much like "programming education" to someone who does not have actual domain knowledge (because other subjects are usually done in the same way), or who [isn't thinking very much about it](https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/), and then similar problems and a notion that testing should be "fair" and "cover what students have learned" lead to insufficiently outcome-oriented exams, which then set up incentives biasing students in similar directions. The underlying issue is a matter of "tacit knowledge": being good at programming requires sets of interlocking and hard-to-describe mental heuristics rather than a long list of memorized rules, and since applying them feels natural and easy - and most people who are now competent don't accurately remember lacking them - it is not immediately obvious that this is the case, and someone asked how they can do something is likely to focus on the things which are, to them, easier to explain and notice.
So why is programming education particularly bad? Shouldn't *every* field be harmed by tacit knowledge transmission problems? My speculative answer is that they generally are, but it's much less noticeable and plausibly also a smaller problem. The heuristics used in programming are strange and unnatural - I'll describe a few of the important ones later - but the overarching theme is that programming is highly reductionist: you have to model a system very different to your own mind, and every abstraction breaks down in some corner case you will eventually have to know about. The human mind very much likes pretending that other systems are more or less identical to it - [animism](https://en.wikipedia.org/wiki/Animism) is no longer a particularly popular explicitly-held belief system, but it's still common to ascribe intention to machinery, "fate" and "karma", animals without very sophisticated cognition, and a wide range of other phenomena. Computers are not at all human, in that they do exactly what someone has set them up to do, which is often [not what they thought they were doing](https://gwern.net/unseeing), while many beginners expect them to "understand what they meant" and act accordingly. Every simple-looking capability is burdened with detail: the computer "knows what time it is" (thanks to some [nontrivial engineering](https://en.wikipedia.org/wiki/Network_Time_Protocol) with some possible failure points); the out-of-order CPU "runs just like an abstract in-order machine, but very fast" (until security researchers [find a difference](https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability))); DNS "resolves domain names to IPs" (but is frequently intercepted by networks, and can also serve as a covert backchannel); video codecs "make videos smaller" (but are also [complex domain-specific programming languages](https://wrv.github.io/h26forge.pdf)); text rendering "is just copying bitmaps into the right places" ([unless you care about Unicode or antialiasing or kerning](https://faultlore.com/blah/text-hates-you/)).
So why is programming education particularly bad? Shouldn't *every* field be harmed by tacit knowledge transmission problems? My speculative answer is that they generally are, but it's much less noticeable and plausibly also a smaller problem. The heuristics used in programming are strange and unnatural - I'll describe a few of the important ones later - but the overarching theme is that programming is highly reductionist: you have to model a system very different to your own mind, and every abstraction breaks down in some corner case you will eventually have to know about. The human mind very much likes pretending that other systems are more or less identical to it - [animism](https://en.wikipedia.org/wiki/Animism) is no longer a particularly popular explicitly-held belief system, but it's still common to ascribe intention to machinery, "fate" and "karma", animals without very sophisticated cognition, and a wide range of other phenomena. Computers are not at all human, in that they do exactly what someone has set them up to do, which is often [not what they thought they were doing](https://gwern.net/unseeing), while many beginners expect them to "understand what they meant" and act accordingly. Every simple-looking capability is burdened with detail[^1]: the computer "knows what time it is" (thanks to some [nontrivial engineering](https://en.wikipedia.org/wiki/Network_Time_Protocol) with some possible failure points); the out-of-order CPU "runs just like an abstract in-order machine, but very fast" (until security researchers [find a difference](https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability))); DNS "resolves domain names to IPs" (but is frequently intercepted by networks, and can also serve as a covert backchannel); video codecs "make videos smaller" (but are also [complex domain-specific programming languages](https://wrv.github.io/h26forge.pdf)); text rendering "is just copying bitmaps into the right places" ([unless you care about Unicode or antialiasing or kerning](https://faultlore.com/blah/text-hates-you/)).
The other fields which I think suffer most are maths and physics. Maths education mostly [fails to convey what mathematicians actually care about](https://www.maa.org/external_archive/devlin/LockhartsLament.pdf) and, despite some attempts to vaguely gesture at it, does not teach "problem-solving" skills as much as sometimes set nontrivial multistep problems and see if some people manage to solve them. Years of physics instruction [fail to stop many students falling back to Aristotelian mechanics](https://www.researchgate.net/profile/Richard-Gunstone/publication/238983736_Student_understanding_in_mechanics_A_large_population_survey/links/02e7e52f8a2f984024000000/Student-understanding-in-mechanics-A-large-population-survey.pdf) on qualitative questions. This is apparently mostly ignored, perhaps because knowledge without deep understanding is sufficient for many uses and enough people generalize to the interesting parts to supply research, but programming makes the problems more obvious, since essentially any useful work will rapidly run into things like debugging.
@@ -27,4 +27,8 @@ If you have been paying any attention to anything within the past [two years](ht
Essentially, your job is probably not safe, as long as development continues (and big organizations actually notice).
You may contend that LLMs lack "general intelligence", and thus can't solve novel problems, devise clever new algorithms, etc. I don't think this is exactly right (it's probably a matter of degree rather than binary), but my more interesting objection is that most code doesn't involve anything like that. Most algorithmic problems have already been solved somewhere if you can frame them right (which is, in fairness, also a problem of intelligence, but less so than deriving the solution from scratch), and LLMs probably remember more algorithms than you. More than that, however, most code doesn't even involve sophisticated algorithms: it just has to move some data around or convert between formats or call out to libraries or APIs in the right order or process some forms. I don't really like writing that and try to minimize it, but this only goes so far. You may also have a stronger objection along the lines of "LLMs are just stochastic parrots repeating patterns in their training data": this is wrong, and you may direct complaints regarding this to the comments or [microblog](https://b.osmarks.net/), where I will probably ignore them.
You may contend that LLMs lack "general intelligence", and thus can't solve novel problems, devise clever new algorithms, etc. I don't think this is exactly right (it's probably a matter of degree rather than binary), but my more interesting objection is that most code doesn't involve anything like that. Most algorithmic problems have already been solved somewhere if you can frame them right[^2] (which is, in fairness, also a problem of intelligence, but less so than deriving the solution from scratch), and LLMs probably remember more algorithms than you. More than that, however, most code doesn't even involve sophisticated algorithms: it just has to move some data around or convert between formats or call out to libraries or APIs in the right order or process some forms. I don't really like writing that and try to minimize it, but this only goes so far. You may also have a stronger objection along the lines of "LLMs are just stochastic parrots repeating patterns in their training data": this is wrong, and you may direct complaints regarding this to the comments or [microblog](https://b.osmarks.net/), where I will probably ignore them.
[^1]: The particular examples here are not ones you're likely to run into for a while, but anyone who writes code for long enough is going to encounter *something* weird.
[^2]: Notably, people who have spent more time on Leetcode than me claim that it is actually just about memorizing a few algorithms which it then uses for a wide range of thinly disguised problems.


@@ -6,14 +6,18 @@ updated: 11/05/2023
---
As you may know, osmarks.net is a website, served from computers which are believed to exist. But have you ever wondered exactly how it's all set up? If not, you may turn elsewhere and live in ignorance. Otherwise, continue reading.
Many similar personal sites are hosted on free static site services or various cloud platforms, but mine actually runs on a physical server. This was originally done because of my general distrust of SaaS/cloud platforms, to learn about Linux administration, and to run some non-web things, but now it's necessary to run the full range of weird components which are now important to the website. ~~The hardware has remained the same since early 2019, before I actually had a public site, apart from the addition of more disk capacity and a spare GPU for occasional machine learning workloads - I am using an old HP ML110 G7 tower server. Despite limited RAM and CPU power compared to contemporary rackmount models, it was cheap, has continued to work amazingly reliably, and is much more power-efficient than those would have been. It mostly only runs at about 5% CPU load and 2GB of RAM in use anyway, so it's not been an issue.~~ Due to the increasing compute demands of internal workloads, among other things, it has now been replaced with a custom build using a consumer Ryzen CPU. This has massively increased performance thanks to the CPU's much better IPC, clocks and core count, the 8x increase in RAM, and actually having an SSD.
Many similar personal sites are hosted on free static site services or various cloud platforms, but mine actually runs on a physical server. This was originally done because of my general distrust of SaaS/cloud platforms, to learn about Linux administration, and to run some non-web things, but now it's necessary to run the full range of weird components which are now important to the website. ~~The hardware has remained the same since early 2019, before I actually had a public site, apart from the addition of more disk capacity and a spare GPU for occasional machine learning workloads - I am using an old HP ML110 G7 tower server. Despite limited RAM and CPU power compared to contemporary rackmount models, it was cheap, has continued to work amazingly reliably, and is much more power-efficient than those would have been. It mostly only runs at about 5% CPU load and 2GB of RAM in use anyway, so it's not been an issue.~~ Due to the increasing compute demands of internal workloads, among other things, it has now been replaced with a custom build using a consumer Ryzen CPU. This has massively increased performance thanks to the CPU's much better IPC, clocks and core count, the 16x increase in RAM, and actually having an SSD[^2].
The main site itself, which you're currently reading, is in fact just a simple static website. Over the years the exact implementation has varied a lot, from the original not-actually-that-static version using Caddy, some weird PHP scripts for Markdown, and a few folders of HTML files, to the later strange combination of Haskell (using Hakyll) and makefiles to the current somewhat horrible Node.js program (which also interacts with someone else's Go program. Fun!). The modern implementation of the compiler does templating, dependency resolution, Markdown and some optimization tasks in about 300 lines of poorly-described JavaScript.
Being static files, many, many different webservers could have been used for this site. In practice, it's mostly alternated randomly between [caddy](https://caddyserver.com/) (a more recent, Go-based webserver with automatic LetsEncrypt integration) and [nginx](https://nginx.org/) (an older and more powerful but slightly quirky program) - caddy generally had easier configuration, but I arbitrarily preferred nginx in some ways. After caddy v2 suddenly required me to rewrite my configuration and introduced a bunch of weird issues, I permanently switched over to nginx and haven't changed back. The configuration file is now 600 lines or so, even with includes to shorten things, but it... works, at least. This is mostly to accommodate the bizarrely large set of subdomains I now have for various people, and reverse proxy configuration for backend services. I also run a custom build of nginx with HTTP/3 (QUIC) support and some extra modules compiled in.
Some of these backend things are only for personal use, but a few are related to the site itself. For example, the comment server is a standalone Python program, [isso](https://posativ.org/isso/), with corresponding JS embedded in each page. This works pretty well, but has led to some weird quirkiness, such as each separate 404-erroring URL having its own list of comments. There's also the Random Stuff API, a custom assemblage of about 15 different Python libraries and external programs which, while technically not linked on the site, does interact with other projects like [PotatOS](https://git.osmarks.net/osmarks/potatOS/), and internal services on the same infrastructure like my [RSS reader](https://miniflux.app/). The images subdomain also uses a [PHP program](https://larsjung.de/h5ai/) to generate a nice searchable index; in fact, it is <del>one of two</del> the only PHP thing<del>s</del> I have unfortunately not yet been able to purge. There also used to be a publicly available status page using some custom code, but this doesn't work very well and has now been dropped; previously I had a Grafana (and earlier Netdata) instance there, but this has now been cancelled because it leaks a worrying amount of information.
Some of these backend things are only for personal use, but a few are related to the site itself. For example, the comment server is a standalone Python program, [isso](https://posativ.org/isso/), with corresponding JS embedded in each page. This works pretty well, but has led to some weird quirkiness, such as each separate 404-erroring URL having its own list of comments. There's also the Random Stuff API, a custom assemblage of about 15 different Python libraries and external programs which, while technically not linked on the site, does interact with other projects like [PotatOS](https://git.osmarks.net/osmarks/potatOS/), and internal services on the same infrastructure like my [RSS reader](https://miniflux.app/). The images subdomain also uses a [PHP program](https://larsjung.de/h5ai/) to generate a nice searchable index; in fact, it is <del>one of two</del> the only PHP thing<del>s</del> I have unfortunately not yet been able to purge[^1]. There also used to be a publicly available status page using some custom code, but this doesn't work very well and has now been dropped; previously I had a Grafana (and earlier Netdata) instance there, but this has now been cancelled because it leaks a worrying amount of information.
As for the underlying OS everything runs on, I currently use [Arch Linux](https://i.osmarks.net/memes-or-something/arch-btw.png) (as well as Alpine on a few lower-resourced cloud servers). Some form of Linux is inevitable - BSDs aren't really compatible with much, and Windows is obviously unsuited for server duty - but I mostly use Arch for its stability (this sounds sarcastic, but I've actually found it to be very reliable with regular updates), wide range of packages (particularly from the AUR; as I don't really run critical production infrastructure, I can generally afford to compile stuff from source a lot), and better general ease-of-use than Alpine. As much as I vaguely resent it, this is mostly down to systemd - despite it being a horrific bloated monolith, `journalctl` is very convenient and unit files are pleasant and easy to write compared to the weird OpenRC scripts Alpine uses.
I am actually considering yet another redesign, however; switching to a dynamic site implementation instead would allow me to integrate the comment system and achievement system better, make things like the "from other blogs" tiles actually update at reasonable intervals, and arbitrarily A/B test users, although it would break some nice things like this site's very aggressive caching and fast serving. Please leave your thoughts or lack of thoughts on this in the comments.
[^1]: The previous one was DokuWiki, now replaced with Minoteaur.
[^2]: My next upgrade is probably going to be more SSD space, since I'm *somehow* running out of that.

View File

@ -0,0 +1,128 @@
---
title: Political Opinion Calendar
description: Instead of wasting time thinking of the best political opinion to hold, simply pick them pseudorandomly per day with this tool.
slug: polcal
---
<script src="/assets/js/mithril.js"></script>
<script src="/assets/js/date-fns.js"></script>
<style>
.calday {
padding: 1em;
margin: 0;
border: none;
}
#app table {
border-collapse: collapse;
}
.opinion {
font-style: italic;
}
#app button, #app input {
font-size: 1.25em;
}
</style>
<div id="app">
</div>
<script>
const STORAGE_KEY = "political-opinion-calendar"
const now = new Date(Date.now()) // JavaScript "irl"
var month = now.getMonth() + 1
var year = now.getFullYear()
const readSave = () => {
try {
const result = JSON.parse(localStorage.getItem(STORAGE_KEY))
if (!result || !Array.isArray(result) || !result.every(x => typeof x.opinion === "string" && typeof x.weight === "number")) { return }
return result
} catch(e) {
console.error(e, "load failed")
}
}
var opinions = readSave() || [{ weight: 1, opinion: "" }]
const writeSave = () => {
localStorage.setItem(STORAGE_KEY, JSON.stringify(opinions))
}
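// 53-bit string hash (cyrb53-style): two imul-mixed 32-bit accumulators, stable per string across runs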
const hash = (str, seed = 0) => {
let h1 = 0xdeadbeef ^ seed, h2 = 0x41c6ce57 ^ seed
for (let i = 0, ch; i < str.length; i++) {
ch = str.charCodeAt(i)
h1 = Math.imul(h1 ^ ch, 2654435761)
h2 = Math.imul(h2 ^ ch, 1597334677)
}
h1 = Math.imul(h1 ^ (h1>>>16), 2246822507) ^ Math.imul(h2 ^ (h2>>>13), 3266489909)
h2 = Math.imul(h2 ^ (h2>>>16), 2246822507) ^ Math.imul(h1 ^ (h1>>>13), 3266489909)
return 4294967296 * (2097151 & h2) + (h1>>>0)
}
function incMonth(by) {
month += by
if (month < 1) {
month += 12
year--
} else if (month > 12) {
month = month - 12
year++
}
}
function displayMonth(year, month) {
var opinionLookup = []
for (const opinion of opinions) {
for (var i = 0; i < opinion.weight; i++) {
opinionLookup.push(opinion.opinion)
}
}
var init = dateFns.addMonths(dateFns.addYears(0, year - 1970), month - 1)
var offset = dateFns.getDay(init) - 1
var weekinit = dateFns.subDays(init, offset >= 0 ? offset : 6)
var rows = [
m("tr.calweek.calhead", ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"].map(x => m("th.calday", x)))
]
outer: for (var i = 0; i < 6; i++) {
var row = []
for (var j = 0; j < 7; j++) {
var x = dateFns.addDays(dateFns.addWeeks(weekinit, i), j)
if (x > init && dateFns.getMonth(x) + 1 !== month && dateFns.getDate(x) >= 7) { break outer }
var opindex = hash(`${dateFns.getYear(x)}-${dateFns.getMonth(x)}-${dateFns.getDate(x)}`) % opinionLookup.length
var opinion = opinionLookup.length > 0 ? opinionLookup[opindex] : "no opinion"
row.push(m("td.calday", { style: `background: hsl(${hash(opinion) % 360}deg, 100%, 60%); opacity: ${dateFns.getMonth(x) + 1 === month ? "1": "0.5"}` }, [
m(".date", dateFns.getDate(x)),
m(".opinion", opinion)
]))
}
rows.push(m("tr.calweek", row))
}
return rows
}
m.mount(document.querySelector("#app"), {
view: function() {
return [
m("", [
m("h1", "Political Opinions"),
m("ul",
opinions.map((opinion, index) => m("li", [
m("button", { onclick: () => opinions.splice(index, 1) }, "-"),
m("input[type=number]", { value: opinion.weight, min: 1, max: 100, oninput: ev => { opinions[index].weight = Math.min(ev.target.value, 100); writeSave() } }),
m("input", { value: opinion.opinion, oninput: ev => { opinions[index].opinion = ev.target.value; writeSave() }, placeholder: "Political opinion..." })
]))
),
m("button", { onclick: () => opinions.push({ opinion: "", weight: 1 }) }, "+")
]),
m("", [
m("h1", "Calendar"),
m("h2", `${year}-${month}`),
m("button", { onclick: () => incMonth(-1) }, "-"),
m("button", { onclick: () => incMonth(1) }, "+"),
m("table", displayMonth(year, month))
]),
]
}
})
</script>
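
The selection rule in `displayMonth` is simple enough to state on its own: opinions are expanded into a pool by weight, and a hash of the date indexes the pool, so each day's opinion is deterministic but looks random. Condensed into a standalone sketch (illustrative, reusing the page's `hash` helper; `pickForDay` is not a function the page defines):

```javascript
// Deterministic daily pick: weight-expanded pool indexed by a date hash.
const pickForDay = (opinions, dateKey) => {
    // the UI clamps weights to 1..100; round in case of non-integer input
    const pool = opinions.flatMap(o => Array(Math.round(o.weight)).fill(o.opinion))
    return pool.length > 0 ? pool[hash(dateKey) % pool.length] : "no opinion"
}
// pickForDay([{ opinion: "tabs", weight: 2 }, { opinion: "spaces", weight: 1 }], "2023-10-19")
```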

1483
package-lock.json generated

File diff suppressed because it is too large

View File

@ -4,12 +4,17 @@
"description": "Static site generation code for my website.",
"main": "index.js",
"dependencies": {
"@msgpack/msgpack": "^3.0.0-beta2",
"axios": "^1.5.0",
"better-sqlite3": "^8.7.0",
"chalk": "^4.1.0",
"dayjs": "^1.8.28",
"esbuild": "^0.19.6",
"fs-extra": "^8.1.0",
"gray-matter": "^4.0.2",
"handlebars": "^4.7.6",
"html-minifier": "^4.0.0",
"idb": "^7.1.1",
"markdown-it": "^13.0.1",
"markdown-it-anchor": "^8.6.7",
"markdown-it-footnote": "^3.0.3",
@ -19,7 +24,8 @@
"ramda": "^0.26.1",
"sass": "^1.26.8",
"terser": "^4.8.0",
"uuid": "^9.0.0"
"uuid": "^9.0.0",
"yalps": "^0.5.5"
},
"license": "MIT"
}

View File

@ -18,7 +18,7 @@
"If you can't stand the heat, get out of the server room."
],
"feeds": [
"https://blogs.sciencemag.org/pipeline/feed",
"https://www.science.org/blogs/pipeline/feed",
"https://www.rtl-sdr.com/feed/",
"https://astralcodexten.substack.com/feed",
"https://www.rifters.com/crawl/?feed=rss2",
@ -27,5 +27,6 @@
"https://aphyr.com/posts.atom",
"https://os.phil-opp.com/rss.xml"
],
"dateFormat": "YYYY-MM-DD"
"dateFormat": "YYYY-MM-DD",
"microblogSource": "https://b.osmarks.net/outbox"
}

View File

@ -18,6 +18,10 @@ const childProcess = require("child_process")
const chalk = require("chalk")
const crypto = require("crypto")
const uuid = require("uuid")
const sqlite = require("better-sqlite3")
const axios = require("axios")
const msgpack = require("@msgpack/msgpack")
const esbuild = require("esbuild")
dayjs.extend(customParseFormat)
@ -28,6 +32,7 @@ const blogDir = path.join(root, "blog")
const errorPagesDir = path.join(root, "error")
const assetsDir = path.join(root, "assets")
const outDir = path.join(root, "out")
const srcDir = path.join(root, "src")
const buildID = nanoid()
globalData.buildID = buildID
@ -189,7 +194,7 @@ const processBlog = async () => {
}, processContent: renderMarkdown })
})
console.log(chalk.yellow(`${Object.keys(blog).length} blog entries`))
globalData.blog = addGuids(R.sortBy(x => x.updated ? -x.updated.valueOf() : 0, R.values(blog)))
globalData.blog = addGuids(R.filter(x => !x.draft, R.sortBy(x => x.updated ? -x.updated.valueOf() : 0, R.values(blog))))
}
const processErrorPages = () => {
@ -214,51 +219,76 @@ const applyMetricPrefix = (x, unit) => {
globalData.metricPrefix = applyMetricPrefix
const writeBuildID = () => fsp.writeFile(path.join(outDir, "buildID.txt"), buildID)
const index = async () => {
const index = globalData.templates.index({ ...globalData, title: "Index", posts: globalData.blog, description: globalData.siteDescription })
await fsp.writeFile(path.join(outDir, "index.html"), index)
}
const compileCSS = async () => {
const css = sass.renderSync({
data: await readFile(path.join(root, "style.sass")),
outputStyle: "compressed",
indentedSyntax: true
}).css
globalData.css = css
const cache = sqlite("cache.sqlite3")
cache.exec("CREATE TABLE IF NOT EXISTS cache (k TEXT NOT NULL PRIMARY KEY, v BLOB NOT NULL, ts INTEGER NOT NULL)")
const writeCacheStmt = cache.prepare("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)")
const readCacheStmt = cache.prepare("SELECT * FROM cache WHERE k = ?")
const readCache = (k, maxAge=null, ts=null) => {
const row = readCacheStmt.get(k)
if (!row) return
if ((maxAge && row.ts < (Date.now() - maxAge)) || (ts && row.ts != ts)) return
return msgpack.decode(row.v)
}
const loadTemplates = async () => {
globalData.templates = await loadDir(templateDir, async fullPath => pug.compile(await readFile(fullPath), { filename: fullPath }))
const writeCache = (k, v, ts=Date.now()) => {
const enc = msgpack.encode(v)
writeCacheStmt.run(k, Buffer.from(enc.buffer, enc.byteOffset, enc.byteLength), ts)
}
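// The helpers support two invalidation modes: maxAge for time-based expiry
// and ts for exact-match invalidation (used below with image file mtimes).
// Illustrative use only (fetchCached is not part of this diff): fetch a URL
// at most once per hour, serving the msgpack-decoded cached copy otherwise.
// const fetchCached = async url => {
//     const hit = readCache(`fetch/${url}`, 60 * 60 * 1000)
//     if (hit) return hit
//     const data = (await axios({ url })).data
//     writeCache(`fetch/${url}`, data)
//     return data
// }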
const fetchMicroblog = async () => {
const cached = readCache("microblog", 60*60*1000)
if (cached) { globalData.microblog = cached; return }
const posts = (await axios({ url: globalData.microblogSource, headers: { "Accept": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"' } })).data.orderedItems
globalData.microblog = posts.slice(0, 6).map(post => minifyHTML(globalData.templates.activitypub({
...globalData,
permalink: post.object.id,
date: dayjs(post.object.published),
content: post.object.content,
bgcol: hashColor(post.object.id, 1, 0.9)
})))
writeCache("microblog", globalData.microblog)
}
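// fetchMicroblog assumes the outbox is an ActivityStreams OrderedCollection of
// Create activities, i.e. each orderedItems entry looks roughly like this
// (illustrative values, not taken from the diff):
// {
//     type: "Create",
//     object: {
//         id: "https://b.osmarks.net/objects/...",   // used as the permalink
//         published: "2023-11-19T21:00:00.000Z",     // parsed with dayjs
//         content: "<p>already-rendered post HTML</p>"
//     }
// }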
const runOpenring = async () => {
try {
var cached = JSON.parse(await fsp.readFile("cache.json", {encoding: "utf8"}))
} catch(e) {
console.log(chalk.keyword("orange")("Failed to load cache:"), e)
}
if (cached && (Date.now() - cached.time) < (60 * 60 * 1000)) {
console.log(chalk.keyword("orange")("Loading Openring data from cache"))
return cached.data
}
globalData.openring = "bee"
const cached = readCache("openring", 60*60*1000)
if (cached) { globalData.openring = cached; return }
// wildly unsafe but only runs on input from me anyway
const arg = `./openring -n6 ${globalData.feeds.map(x => '-s "' + x + '"').join(" ")} < openring.html`
console.log(chalk.keyword("orange")("Openring:") + " " + arg)
const out = await util.promisify(childProcess.exec)(arg)
console.log(chalk.keyword("orange")("Openring:") + "\n" + out.stderr.trim())
globalData.openring = minifyHTML(out.stdout)
await fsp.writeFile("cache.json", JSON.stringify({
time: Date.now(),
data: globalData.openring
}))
writeCache("openring", globalData.openring)
}
const compileCSS = async () => {
const css = sass.renderSync({
data: await readFile(path.join(srcDir, "style.sass")),
outputStyle: "compressed",
indentedSyntax: true
}).css
globalData.css = css
}
const loadTemplates = async () => {
globalData.templates = await loadDir(templateDir, async fullPath => pug.compile(await readFile(fullPath), { filename: fullPath }))
}
const genRSS = async () => {
const rssFeed = globalData.templates.rss({ ...globalData, items: globalData.blog, lastUpdate: new Date() })
await fsp.writeFile(path.join(outDir, "rss.xml"), rssFeed)
}
const genManifest = async () => {
const m = mustache.render(await readFile(path.join(assetsDir, "manifest.webmanifest")), globalData)
fsp.writeFile(path.join(outAssets, "manifest.webmanifest"), m)
}
const minifyJSTask = async () => {
const jsDir = path.join(assetsDir, "js")
const jsOutDir = path.join(outAssets, "js")
@ -267,10 +297,22 @@ const minifyJSTask = async () => {
await minifyJSFile(await readFile(fullpath), file, path.join(jsOutDir, file))
}))
}
const compilePageJSTask = async () => {
await esbuild.build({
entryPoints: [ path.join(srcDir, "page.js") ],
bundle: true,
outfile: path.join(outAssets, "js/page.js"),
minify: true,
sourcemap: true
})
}
const genServiceWorker = async () => {
const serviceWorker = mustache.render(await readFile(path.join(assetsDir, "sw.js")), globalData)
await minifyJSFile(serviceWorker, "sw.js", path.join(outDir, "sw.js"))
}
const copyAsset = subpath => fse.copy(path.join(assetsDir, subpath), path.join(outAssets, subpath))
const doImages = async () => {
@ -279,9 +321,37 @@ const doImages = async () => {
copyAsset("titillium-web-semibold.woff2")
copyAsset("share-tech-mono.woff2")
globalData.images = {}
for (const image of await fse.readdir(path.join(assetsDir, "images"), { encoding: "utf-8" })) {
globalData.images[image.split(".").slice(0, -1).join(".")] = "/assets/images/" + image
}
await Promise.all(
(await fse.readdir(path.join(assetsDir, "images"), { encoding: "utf-8" })).map(async image => {
if (image.endsWith(".original")) { // generate alternative formats
const stripped = image.replace(/\.original$/, "").split(".").slice(0, -1).join(".")
globalData.images[stripped] = {}
const fullPath = path.join(assetsDir, "images", image)
const stat = await fse.stat(fullPath)
const writeFormat = async (name, ext, mime, cmd, supplementaryArgs) => {
let bytes = readCache(`images/${stripped}/${name}`, null, stat.mtimeMs)
const destFilename = stripped + ext
const destPath = path.join(outAssets, "images", destFilename)
if (!bytes) {
console.log(chalk.keyword("orange")(`Compressing image ${stripped} (${name})`))
await util.promisify(childProcess.execFile)(cmd, supplementaryArgs.concat([
fullPath,
destPath
]))
writeCache(`images/${stripped}/${name}`, await fsp.readFile(destPath), stat.mtimeMs)
} else {
await fsp.writeFile(destPath, bytes)
}
globalData.images[stripped][mime] = "/assets/images/" + destFilename
}
await writeFormat("avif", ".avif", "image/avif", "avifenc", ["-s", "0", "-q", "20"])
await writeFormat("jpeg-scaled", ".jpg", "_fallback", "convert", ["-resize", "25%", "-format", "jpeg"])
} else {
globalData.images[image.split(".").slice(0, -1).join(".")] = "/assets/images/" + image
}
})
)
}
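// The resulting globalData.images entries are either a plain path string or a
// MIME-type-to-path map with a "_fallback" key, which the pug "image" mixin
// further down turns into a <picture> element. Roughly (illustrative paths):
// globalData.images["plain-icon"] = "/assets/images/plain-icon.png"
// globalData.images["photo"] = {
//     "image/avif": "/assets/images/photo.avif",
//     "_fallback": "/assets/images/photo.jpg"   // rendered as a plain <img src>
// }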
const tasks = {
@ -290,18 +360,20 @@ const tasks = {
pagedeps: { deps: ["templates", "css"] },
css: { deps: [], fn: compileCSS },
writeBuildID: { deps: [], fn: writeBuildID },
index: { deps: ["openring", "pagedeps", "blog", "experiments", "images"], fn: index },
index: { deps: ["openring", "pagedeps", "blog", "experiments", "images", "fetchMicroblog"], fn: index },
openring: { deps: [], fn: runOpenring },
rss: { deps: ["blog"], fn: genRSS },
blog: { deps: ["pagedeps"], fn: processBlog },
fetchMicroblog: { deps: [], fn: fetchMicroblog },
experiments: { deps: ["pagedeps"], fn: processExperiments },
assetsDir: { deps: [], fn: () => fse.ensureDir(outAssets) },
manifest: { deps: ["assetsDir"], fn: genManifest },
minifyJS: { deps: ["assetsDir"], fn: minifyJSTask },
compilePageJS: { deps: ["assetsDir"], fn: compilePageJSTask },
serviceWorker: { deps: [], fn: genServiceWorker },
images: { deps: ["assetsDir"], fn: doImages },
offlinePage: { deps: ["assetsDir", "pagedeps"], fn: () => applyTemplate(globalData.templates.experiment, path.join(assetsDir, "offline.html"), () => path.join(outAssets, "offline.html"), {}) },
assets: { deps: ["manifest", "minifyJS", "serviceWorker", "images"] },
assets: { deps: ["manifest", "minifyJS", "serviceWorker", "images", "compilePageJS"] },
main: { deps: ["writeBuildID", "index", "errorPages", "assets", "experiments", "blog", "rss"] }
}

View File

@ -1,6 +1,5 @@
// I cannot be bothered to set up a bundler
// https://www.npmjs.com/package/idb
!function(e,t){t(window.idb={})}(this,(function(e){"use strict";let t,n;const r=new WeakMap,o=new WeakMap,s=new WeakMap,i=new WeakMap,a=new WeakMap;let c={get(e,t,n){if(e instanceof IDBTransaction){if("done"===t)return o.get(e);if("objectStoreNames"===t)return e.objectStoreNames||s.get(e);if("store"===t)return n.objectStoreNames[1]?void 0:n.objectStore(n.objectStoreNames[0])}return f(e[t])},set:(e,t,n)=>(e[t]=n,!0),has:(e,t)=>e instanceof IDBTransaction&&("done"===t||"store"===t)||t in e};function d(e){return e!==IDBDatabase.prototype.transaction||"objectStoreNames"in IDBTransaction.prototype?(n||(n=[IDBCursor.prototype.advance,IDBCursor.prototype.continue,IDBCursor.prototype.continuePrimaryKey])).includes(e)?function(...t){return e.apply(p(this),t),f(r.get(this))}:function(...t){return f(e.apply(p(this),t))}:function(t,...n){const r=e.call(p(this),t,...n);return s.set(r,t.sort?t.sort():[t]),f(r)}}function u(e){return"function"==typeof e?d(e):(e instanceof IDBTransaction&&function(e){if(o.has(e))return;const t=new Promise(((t,n)=>{const r=()=>{e.removeEventListener("complete",o),e.removeEventListener("error",s),e.removeEventListener("abort",s)},o=()=>{t(),r()},s=()=>{n(e.error||new DOMException("AbortError","AbortError")),r()};e.addEventListener("complete",o),e.addEventListener("error",s),e.addEventListener("abort",s)}));o.set(e,t)}(e),n=e,(t||(t=[IDBDatabase,IDBObjectStore,IDBIndex,IDBCursor,IDBTransaction])).some((e=>n instanceof e))?new Proxy(e,c):e);var n}function f(e){if(e instanceof IDBRequest)return function(e){const t=new Promise(((t,n)=>{const r=()=>{e.removeEventListener("success",o),e.removeEventListener("error",s)},o=()=>{t(f(e.result)),r()},s=()=>{n(e.error),r()};e.addEventListener("success",o),e.addEventListener("error",s)}));return t.then((t=>{t instanceof IDBCursor&&r.set(t,e)})).catch((()=>{})),a.set(t,e),t}(e);if(i.has(e))return i.get(e);const t=u(e);return t!==e&&(i.set(e,t),a.set(t,e)),t}const p=e=>a.get(e);const l=["get","getKey","getAll","getAllKeys","count"],D=["put","add","delete","clear"],b=new Map;function v(e,t){if(!(e instanceof IDBDatabase)||t in e||"string"!=typeof t)return;if(b.get(t))return b.get(t);const n=t.replace(/FromIndex$/,""),r=t!==n,o=D.includes(n);if(!(n in(r?IDBIndex:IDBObjectStore).prototype)||!o&&!l.includes(n))return;const s=async function(e,...t){const s=this.transaction(e,o?"readwrite":"readonly");let i=s.store;return r&&(i=i.index(t.shift())),(await Promise.all([i[n](...t),o&&s.done]))[0]};return b.set(t,s),s}c=(e=>({...e,get:(t,n,r)=>v(t,n)||e.get(t,n,r),has:(t,n)=>!!v(t,n)||e.has(t,n)}))(c),e.deleteDB=function(e,{blocked:t}={}){const n=indexedDB.deleteDatabase(e);return t&&n.addEventListener("blocked",(()=>t())),f(n).then((()=>{}))},e.openDB=function(e,t,{blocked:n,upgrade:r,blocking:o,terminated:s}={}){const i=indexedDB.open(e,t),a=f(i);return r&&i.addEventListener("upgradeneeded",(e=>{r(f(i.result),e.oldVersion,e.newVersion,f(i.transaction))})),n&&i.addEventListener("blocked",(()=>n())),a.then((e=>{s&&e.addEventListener("close",(()=>s())),o&&e.addEventListener("versionchange",(()=>o()))})).catch((()=>{})),a},e.unwrap=p,e.wrap=f}));
const idb = require("idb")
const { solve } = require("yalps")
// attempt to register service worker
if ("serviceWorker" in navigator) {
@ -34,6 +33,7 @@ const hashString = function(str, seed = 0) {
}
const colHash = (str, saturation = 100, lightness = 70) => `hsl(${hashString(str) % 360}, ${saturation}%, ${lightness}%)`
window.colHash = colHash
// Arbitrary Points code, wrapped in an IIFE to not pollute the global environment much more than it already is
window.points = (async () => {
@ -368,6 +368,144 @@ window.points = (async () => {
}
})()
const footnotes = document.querySelector(".footnotes")
const sidenotes = document.querySelector(".sidenotes")
if (sidenotes) {
const codeblocks = document.querySelectorAll("pre.hljs")
const article = document.querySelector("main.blog-post")
while (footnotes.firstChild) {
sidenotes.appendChild(footnotes.firstChild)
}
const footnoteItems = sidenotes.querySelectorAll(".footnote-item")
const sum = xs => xs.reduce((a, b) => a + b, 0)
const arrayOf = (n, x) => new Array(n).fill(x)
const BORDER = 16
const sidenotesAtSide = () => getComputedStyle(sidenotes).paddingLeft !== "0px"
let rendered = false
const relayout = forceRedraw => {
// sidenote column width is static: no need to redo positioning on resize unless no positions applied
if (sidenotesAtSide()) {
if (rendered && !forceRedraw) return
// sidenote vertical placement algorithm
const snRect = sidenotes.getBoundingClientRect()
const articleRect = article.getBoundingClientRect()
const exclusions = [[-Infinity, Math.max(articleRect.top, snRect.top)]]
for (const codeblock of codeblocks) {
const codeblockRect = codeblock.getBoundingClientRect()
if (codeblockRect.width !== 0) { // zero width means the block is collapsed; skip those
exclusions.push([codeblockRect.top - BORDER, codeblockRect.top + codeblockRect.height + BORDER])
}
}
// convert unusable regions into list of usable regions
const inclusions = []
for (const [start, end] of exclusions) {
if (inclusions.length) inclusions[inclusions.length - 1].end = start - snRect.top
inclusions.push({ start: end - snRect.top, contents: [] })
}
inclusions[inclusions.length - 1].end = Infinity
const notes = []
// read off sidenotes to place
for (const item of footnoteItems) {
const itemRect = item.getBoundingClientRect()
const link = article.querySelector(`#${item.id.replace(/^fn/, "fnref")}`)
const linkRect = link.getBoundingClientRect()
item.style.position = "absolute"
item.style.left = getComputedStyle(sidenotes).paddingLeft
item.style.marginBottom = item.style.marginTop = `${BORDER / 2}px`
notes.push({
item,
height: itemRect.height + BORDER,
target: linkRect.top - snRect.top
})
}
// preliminary placement: place in valid regions going down
for (const note of notes) {
const index = inclusions.findLastIndex(inc => (inc.start + note.height) < note.target)
const next = inclusions.slice(index)
.findIndex(inc => (sum(inc.contents.map(x => x.height)) + note.height) < (inc.end - inc.start))
inclusions[index + next].contents.push(note)
}
// TODO: try simple moves between regions? might be useful sometimes
// place within region and apply styles
for (const inc of inclusions) {
const regionNotes = inc.contents
if (regionNotes.length > 0) {
const variables = {}
const constraints = {}
if (inc.end !== Infinity) {
const heights = regionNotes.map(note => note.height)
constraints["sum_gaps"] = { max: inc.end - inc.start - sum(heights) }
}
regionNotes.forEach((note, i) => {
variables[`distbound_${i}`] = {
"distsum": 1,
[`distbound_${i}_offset`]: 1,
[`distbound_${i}_offset_neg`]: 1
}
const heightsum = sum(regionNotes.slice(0, i).map(x => x.height))
const baseoffset = heightsum - note.target
// WANT: distbound_i >= placement_i - target_i AND distbound_i <= target_i - placement_i
// distbound_i >= gapsum_i + heightsum_i - target_i
// distbound_i_offset = distbound_i - gapsum_i
// so distbound_i_offset >= heightsum_i - target_i
// implies distbound_i - gapsum_i >= heightsum_i - target_i
// (as required)
// distbound_i + gapsum_i >= heightsum_i - target_i
constraints[`distbound_${i}_offset`] = { min: baseoffset }
constraints[`distbound_${i}_offset_neg`] = { min: -baseoffset }
constraints[`gap_${i}`] = { min: 0 }
const G_i_var = { "sum_gaps": 1 }
for (let j = i; j < regionNotes.length; j++) G_i_var[`distbound_${j}_offset`] = -1
for (let j = i; j < regionNotes.length; j++) G_i_var[`distbound_${j}_offset_neg`] = 1
variables[`gap_${i}`] = G_i_var
})
const model = {
direction: "minimize",
objective: "distsum",
constraints,
variables
}
const solution = solve(model, { includeZeroVariables: true })
if (solution.status !== "optimal") {
// implode
solution.variables = []
console.warn("Sidenote layout failed", solution.status)
}
const solutionVars = new Map(solution.variables)
let position = 0
regionNotes.forEach((note, i) => {
position += solutionVars.get(`gap_${i}`) || 0
note.item.style.top = position + "px"
position += note.height
})
}
}
rendered = true
} else {
for (const item of sidenotes.querySelectorAll(".footnote-item")) {
item.style.position = "static"
}
rendered = false
}
}
window.onresize = relayout
window.onload = relayout
document.querySelectorAll("summary").forEach(x => {
x.addEventListener("click", () => {
setTimeout(() => relayout(true), 0)
})
})
window.relayout = relayout
}
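
For reference, the model this builds per region and hands to YALPS amounts to a small linear program (my reading of the code above, not stated in the diff): with $p_i = H_i + \sum_{j \le i} g_j$ the placement of note $i$ ($H_i$ the summed heights of earlier notes, $g_j$ the gap variables), $h_i$ its height, $t_i$ its link's vertical target, and $[S, E]$ the usable region,

```latex
\begin{aligned}
\min_{g,\,d}\ \sum_i d_i \quad \text{s.t.}\quad
  & d_i \ge p_i - t_i, \qquad d_i \ge t_i - p_i, \\
  & g_i \ge 0, \qquad \textstyle\sum_i g_i \le (E - S) - \sum_i h_i
\end{aligned}
```

so each $d_i$ bounds the distance between a sidenote and its reference, and minimizing their sum packs the notes as close to their links as the non-overlap and region-capacity constraints allow.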
const customStyle = localStorage.getItem("user-stylesheet")
let customStyleEl = null
if (customStyle) {
@ -376,4 +514,6 @@ if (customStyle) {
customStyleEl.onload = () => console.log("Loaded custom styles")
customStyleEl.id = "custom-style"
document.head.appendChild(customStyleEl)
}
}
window.customStyleEl = customStyleEl
window.customStyle = customStyle

View File

@ -1,3 +1,7 @@
$sidenotes-width: 20rem
$content-margin: 1rem
$content-width: 40rem
@font-face
font-family: 'Titillium Web'
font-style: normal
@ -56,7 +60,7 @@ nav
color: white
font-size: 1.25em
a, img
a, img, picture
margin-right: 0.5em
@for $i from 1 through 6
@ -71,18 +75,18 @@ h1, h2, h3, h4, h5, h6
color: inherit
main, .header
margin-left: 1em
margin-right: 1em
margin-left: $content-margin
margin-right: $content-margin
// for easier viewing on big screen devices, narrow the width of text
// also make links a bit more distinct
main.blog-post
max-width: 40em
max-width: $content-width
text-align: justify
a
text-decoration: underline
.blog, .experiments, .atl
.blog, .experiments, .atl, .microblog
margin: -0.5em
margin-bottom: 0
display: flex
@ -94,6 +98,9 @@ main.blog-post
padding: 1em
flex: 1 1 20%
.microblog > div
flex: 1 1 30%
main
margin-top: 1em
@ -147,7 +154,7 @@ button, select, input, textarea, .textarea
.imbox
display: flex
img
img, picture
padding-right: 1em
height: 8em
width: 8em
@ -162,5 +169,36 @@ button, select, input, textarea, .textarea
border: 1px solid black
padding: 1em
margin: -1px
img
width: 100%
img, picture
width: 100%
blockquote
padding-left: 0.4rem
border-left: 0.4rem solid black
margin-left: 0.2rem
.microblog p
margin: 0
.sidenotes-container
display: flex
flex-wrap: wrap
.sidenotes
width: $sidenotes-width
min-width: $sidenotes-width
padding-left: 1.5rem
position: relative
.footnotes-sep
display: none
.footnotes-list
text-align: justify
@media (max-width: calc(2 * $content-margin + $content-width + $sidenotes-width))
.sidenotes
min-width: auto
width: auto
max-width: $content-width
padding: 0
margin-left: $content-margin
margin-right: $content-margin
.footnotes-sep
display: block

View File

@ -0,0 +1,4 @@
div(style=`background: ${bgcol}`)
div
a(href=permalink)= renderDate(date)
div!= content

View File

@ -1,4 +1,10 @@
extends layout.pug
block content
main.blog-post!= content
.sidenotes-container
main.blog-post!= content
.sidenotes
block under-title
if draft
h1 DRAFT

View File

@ -9,13 +9,20 @@ block content
each post in posts
.imbox(style=`background: ${post.bgcol}`)
if images.hasOwnProperty(post.slug)
img(src=images[post.slug])
+image(images[post.slug])
div
div
a.title(href=`/${post.slug}/`)= post.title
span.deemph= `${renderDate(post.created)} / ${metricPrefix(post.wordCount, "")} words`
div.deemph= `${renderDate(post.created)} / ${metricPrefix(post.wordCount, "")} words`
div.description!= post.description
h2 Microblog
p.
Short-form observations.
div.microblog
each entry in microblog
!= entry
h2 Experiments
p.
Various web projects I have put together over many years. Made with at least four different JS frameworks. Some of them are bad.
@ -23,7 +30,7 @@ block content
each experiment in experiments
.imbox(style=`background: ${experiment.bgcol}`)
if images.hasOwnProperty(experiment.slug)
img(src=images[experiment.slug])
+image(images[experiment.slug])
div
div
a.title(href=`/${experiment.slug}/`)= experiment.title

View File

@ -1,9 +1,23 @@
mixin nav-item(url, name)
a(href=url)= name
mixin image(src)
if typeof src === "string"
img(src=src)
else
picture
each val, key in src
if key == "_fallback"
img(src=val)
else
source(srcset=val, type=key)
doctype html
html(lang="en")
head
link(rel="preload", href="/assets/share-tech-mono.woff2", as="font", crossorigin="anonymous")
link(rel="preload", href="/assets/titillium-web-semibold.woff2", as="font", crossorigin="anonymous")
link(rel="preload", href="/assets/titillium-web.woff2", as="font", crossorigin="anonymous")
title= `${title} @ ${name}`
script(src="/assets/js/page.js", defer=true)
meta(charset="UTF-8")