mirror of https://github.com/osmarks/website synced 2024-12-23 16:40:31 +00:00

Apparently I changed everything and forgot to commit it.

- I just added sidenotes (blog being rewritten slightly to incorporate them; WIP)
- Microblog added, compiler caching mechanism reworked
- Image compression
osmarks 2023-11-19 21:06:25 +00:00
parent 981e1a0be2
commit e25013c1b4
59 changed files with 2039 additions and 71 deletions

.gitignore

@ -3,3 +3,4 @@ out
openring
draft
cache.json
cache.sqlite3

(Binary image files added under assets/images/, not shown: assets/images/opinion.jpg (5.5 KiB), assets/images/polcal.jpg (8.0 KiB), and several dozen other images of roughly 180–620 KiB each.)

assets/js/date-fns.js (new file; diff suppressed because one or more lines are too long)

blog/computercraft.md (new file)

@ -0,0 +1,10 @@
---
title: ComputerCraft is peak computing
description: It may be a janky Minecraft mod, but in some ways it's nicer than lots of modern software stacks.
slug: computercraft
created: 18/11/2023
draft: yes
---
I have been thinking about [ComputerCraft](https://tweaked.cc/) slightly recently, because of moving [several years of archived code](https://github.com/osmarks/random-stuff/tree/master/computercraft) from Pastebin and some private internal repositories to public view (and writing some minor patches to [PotatOS](https://potatos.madefor.cc/)), and it increasingly seems like a model of what computers *should* be like which highlights the shortcomings of everything else.
Computers undoubtedly grow more powerful every year, as fabs wrangle quantum electrodynamics into providing ever better and smaller transistors at great cost and the handful of companies still at the cutting edge refine their architectures slightly, but, [as has been noted](https://danluu.com/input-lag/), this doesn't actually translate into better user experience.

View File

@ -1,7 +1,7 @@
---
title: "Maghammer: My personal data warehouse"
created: 28/08/2023
updated: 12/09/2023
description: Powerful search tools as externalized cognition, and how mine work.
slug: maghammer
---
@ -23,9 +23,9 @@ You'll note that not all of these projects make any attempt to work on non-text
## Why?
Why do I want this? Because human memory is very, very bad. My (declarative) memory is much better than average, but falls very far short of recording everything I read and hear, or even just the source of it[^1]. According to [Landauer, 1986](https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1004_4)'s estimates, the amount of retrievable information accumulated by a person over a lifetime is less than a gigabyte, or <0.05% of my server's disk space[^5]. There's also distortion in remembered material which is hard to correct for. Information is simplified in ways that lose detail, reframed or just changed as your other beliefs change, merged with other memories, or edited for social reasons.
Throughout human history, even before writing, the solution to this has been externalization of cognitive processing: other tiers in the memory hierarchy with more capacity and worse performance. While it would obviously be [advantageous](/rote/) to be able to remember everything directly, just as it would be great to have arbitrarily large amounts of fast SRAM to feed our CPUs, tradeoffs are forced by reality. Oral tradition and culture were the first implementations, shifting information from one unreliable human mind to several so that there was at least some redundancy. Writing made for greater robustness, but the slowness of writing and copying (and for a long time expense of hardware) was limiting. Printing allowed mass dissemination of media but didn't make recording much easier for the individual. Now, the ridiculous and mostly underexploited power of contemporary computers makes it possible to literally record (and search) everything you ever read at trivial cost, as well as making lookups fast enough to integrate them more tightly into workflows. Roam Research popularized the idea of notes as a "second brain"[^2], but it's usually the case that the things you want to know are not ones you thought to explicitly write down and organize.
More concretely, I frequently read interesting papers or blog posts or articles which I later remember in some other context - perhaps they came up in a conversation and I wanted to send someone a link, or a new project needs a technology I recall there being good content on. Without good archiving, I would have to remember exactly where I saw it (implausible) or use a standard, public search engine and hope it will actually pull the document I need. Maghammer (mostly) stores these and allows me to find them in a few seconds (fast enough for interactive online conversations, and not that much slower than Firefox's omnibox history search) as long as I can remember enough keywords. It's also nice to be able to conveniently find old shell commands for strange things I had to do in the past, or look up sections in books (though my current implementation isn't ideal for this).
@ -41,6 +41,7 @@ Currently, I have custom scripts to import this data, which are run nightly as a
* Unorganized text/HTML/PDF files in my archives folder.
* Books (EPUB) stored in Calibre - overall metadata and chapter full text.
* Media files in my archive folder (all videos I've watched recently) - format, various metadata fields, and full extracted subtitles with full text search.
* I've now added [WhisperX](https://github.com/m-bain/whisperX/) autotranscription on all files with bad/nonexistent subtitles. While it struggles with music more than Whisper itself, its use of batched inference and voice activity detection meant that I got ~100x realtime speed on average processing all my files (after a patch to fix the awfully slow alignment algorithm). A rough usage sketch follows this list.
* [Miniflux](/rssgood/) RSS feed entries.
* [Minoteaur](/minoteaur/) notes, files and structured data. I don't have links indexed since SQLite isn't much of a graph database (no, I will not write a recursive common table expression for it), and my importer reads directly off the Minoteaur database and writing a Markdown parser would have been annoying.
* RCLWE web history (including the `circache` holding indexed pages in my former Recoll install).
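A rough sketch of that transcription step, assuming the WhisperX Python API as of late 2023; the model name, batch size and file path are illustrative rather than the exact setup described above:

```python
# Sketch only: batched WhisperX transcription with VAD segmentation and alignment.
# Model choice, device and batch size are assumptions, not the actual configuration.
import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device, compute_type="float16")

audio = whisperx.load_audio("some_video.mkv")
result = model.transcribe(audio, batch_size=16)  # batched inference over VAD-detected speech segments

# The (slow) alignment pass for accurate timestamps, referred to above.
align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)

for seg in aligned["segments"]:
    print(f"{seg['start']:.1f}s-{seg['end']:.1f}s {seg['text']}")
```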
@ -75,10 +76,24 @@ Being built out of a tool intended for quantitative data processing means that I
While it's not part of the same system, [Meme Search Engine](https://mse.osmarks.net/) is undoubtedly useful to me for rapidly finding images (memetic images) I need or want - so much so that I have a separate internal instance run on my miscellaneous-images-and-screenshots folder. Nobody else seems to even be trying - while there are a lot of demos of CLIP image search engines on GitHub, and I think one with the OpenAI repository, I'm not aware of *production* implementations with the exception of [clip-retrieval](https://github.com/rom1504/clip-retrieval) and the LAION index deployment, and one iPhone app shipping a distilled CLIP. There's not anything like a user-friendly desktop app, which confuses me somewhat, since there's clearly demand amongst people I talked to. Regardless of the reason, this means that Meme Search Engine is quite possibly the world's most advanced meme search tool (since I bothered to design a nice-to-use query UI and online reindexing), although I feel compelled to mention someone's [somewhat horrifying iPhone OCR cluster](https://findthatmeme.com/blog/2023/01/08/image-stacks-and-iphone-racks-building-an-internet-scale-meme-search-engine-Qzrz7V6T.html). Meme Search Engine is not very well-integrated but I usually know which dataset I want to retrieve from anyway.
I've also now implemented semantic search using [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) embeddings. It turns out that I have more data than I thought, so this was somewhat challenging. Schematically, a custom script (implemented in a Datasette plugin for convenience, although it probably shouldn't be) dumps the contents of FTS tables, splits them into chunks, generates embeddings, and inserts the embeddings and location information into a new database, as well as embeddings and an ID into a [FAISS](https://github.com/facebookresearch/faiss/) index. When a search is done, the index is checked, the closest vectors found, filtering done (if asked for) and the relevant text (and other metadata e.g. associated URL and timestamp) found and displayed.
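A compressed sketch of that pipeline (the chunking scheme, table names and embedding wrapper here are illustrative assumptions; the real version runs inside a Datasette plugin):

```python
# Sketch: dump FTS content, chunk it, embed with e5-large-v2, and store vectors in FAISS
# plus chunk metadata in SQLite. Table and file names are assumptions.
import sqlite3
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-large-v2")
dim = 1024  # e5-large-v2 embedding dimension

def chunk(text, size=512):
    return [text[i:i + size] for i in range(0, len(text), size)]

source = sqlite3.connect("maghammer.sqlite3")
meta = sqlite3.connect("semantic.sqlite3")
meta.execute("CREATE TABLE IF NOT EXISTS chunks (id INTEGER PRIMARY KEY, source_table TEXT, source_rowid INTEGER, text TEXT)")

index = faiss.IndexFlatIP(dim)  # placeholder flat index; vector i corresponds to the i-th inserted chunk

for rowid, body in source.execute("SELECT rowid, body FROM documents_fts"):
    for piece in chunk(body):
        vec = model.encode(["passage: " + piece], normalize_embeddings=True)  # e5 expects a "passage: " prefix
        meta.execute("INSERT INTO chunks (source_table, source_rowid, text) VALUES (?, ?, ?)",
                     ("documents", rowid, piece))
        index.add(np.asarray(vec, dtype="float32"))

meta.commit()
faiss.write_index(index, "semantic.faiss")
```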
It is actually somewhat more complex than that for various reasons. I had to modify all the importer scripts to log which rows they changed in a separate database, as scanning all databases for new changes would probably be challenging and slow, and the dump script reads off that. Also, an unquantized (FP16) index would be impractically large given my available RAM (5 million vectors × 1024 dimensions × 2 bytes ≈ 10GB), as well as slow (without using HNSW/IVF). To satisfy all the constraints I was under, I settled on a fast-scan PQ (product quantization) index[^4] (which fit into about 1GB of RAM and did search in 50ms) with a reranking stage where the top 1000 items are retrieved from disk and reranked using the original FP16 vectors (and the relevant text chunks retrieved simultaneously). I have no actual benchmarks of the recall/precision of this but it seems fine. This is probably not a standard setup because of throughput problems - however, I only really need low latency (the target was <200ms end-to-end and this is just about met) and this works fine.
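And a sketch of the search side: a coarse pass over the quantized index, then a rerank of the top candidates against the full-precision vectors (index parameters and the on-disk layout of the FP16 vectors are assumptions):

```python
# Sketch: fast-scan PQ index for the coarse pass, reranking the top ~1000 hits
# with the original FP16 vectors kept on disk. Parameters are assumptions.
import faiss
import numpy as np

dim = 1024
# Built offline: a fast-scan product-quantization index (4-bit codes), e.g.
#   coarse = faiss.index_factory(dim, "PQ256x4fs", faiss.METRIC_INNER_PRODUCT)
#   coarse.train(sample); coarse.add(all_vectors); faiss.write_index(coarse, "pq.faiss")
coarse = faiss.read_index("pq.faiss")
full = np.memmap("vectors_fp16.bin", dtype=np.float16, mode="r").reshape(-1, dim)

def search(query_vector, k=10, candidates=1000):
    q = np.asarray([query_vector], dtype="float32")
    _, ids = coarse.search(q, candidates)        # approximate pass, cheap and RAM-resident
    ids = ids[0][ids[0] >= 0]
    scores = full[ids].astype("float32") @ q[0]  # exact inner products for reranking
    top = np.argsort(-scores)[:k]
    return ids[top], scores[top]                 # then look up text/metadata by id in SQLite
```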
## Future directions
The system is obviously not perfect. As well as some minor gaps (browser history isn't actually put in a full-text table, for instance, due to technical limitations), many data sources (often ones with a lot of important content!) aren't covered, such as my emails and conversation history on e.g. Discord. I also want to make better use of ML - for instance, integrating things like Meme Search Engine better, ~~local Whisper autotranscription of videos rather than having no subtitles or relying on awful YouTube ones, semantic search to augment the default [SQLite FTS](https://www.sqlite.org/fts5.html) (which uses term-based ranking - specifically, BM25),~~ and OCR of screenshots. I still haven't found local/open-source OCR which is both good, generalizable and usable[^3]. Some of the trendier, newer projects in this space use LLMs to do retrieval-augmented generation, but I don't think this is a promising direction right now - available models are either too dumb or too slow/intensive, even on GPU compute, and in any case prone to hallucination.
Another interesting possibility for a redesign I have is a timeline mode. Since my integration plugin (mostly) knows what columns are timestamps, I could plausibly have a page display all relevant logs from a day and present them neatly.
If you have related good ideas or correct opinions, you may tell me them below. The code for this is somewhat messy and environment-specific, but I may clean it up somewhat and release it if there's interest in its specifics.
[^1]: I suspect this is because of poor precision (in the information retrieval sense) making better recall problematic, rather than actual hard limits somewhere - there are documented people with photographic memory, who report remembering somewhat unhelpful information all the time - but without a way to change that it doesn't matter much.
[^2]: [Zettelkasten](https://en.wikipedia.org/wiki/Zettelkasten) and such predate this, but Roam definitely *popularized* it amongst tech people.
[^3]: Phone OSes can do this very well now, but the internals are not open.
[^4]: FAISS has some helpful manuals [like this](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index) describing the various forms available, although there are rather a lot of them which say slightly different things.
[^5]: This is the size assuming optimal compression, but obviously the actual brain has many other concerns and isn't storing things that way. The actual hardware probably holds, very roughly, 10<sup>15</sup> bits.

View File

@ -14,3 +14,15 @@ updated: 24/01/2020
* Proper physical QWERTY keyboards would be nice, though as they're such a niche feature that's probably never going to happen except on a few phones.
* The screens don't need to get bigger. People's hands aren't growing every year. And they don't need more pixels to drain increasingly large amounts of power.
* Removable batteries should come back. When I initially wrote this in 2017 or so, they were pretty common, but now barely any new devices let you *swap the battery*, despite lithium-ion batteries degrading within a few years of heavy use. I know you can't economically do highly modular design in a phone, but this is not a complex, technically difficult or expensive thing to want.
It's now the future (2023) and things have actually improved slightly in some ways but generally remained about the same:
* Notches mostly gave way to punch-hole cutouts for cameras, which are somewhat more tolerable.
* Manufacturers have started offering longer software service lifespans, and Project Treble has had the convenient effect of making it possible to run GSIs on all new devices. While I think this means you don't get updates to vendor firmware components, you can at least get OS-level security updates.
* Battery technology has incrementally improved over the years and SoCs are getting made on better processes with better core designs. This has, of course, been mostly cancelled out by dumber software or something, but you can get a few devices with really good battery capabilities.
* Headphone jacks and micro-SD card slots remain mostly gone, but it turns out that wireless headphones are good now and flash is cheap enough that most phones ship with lots of storage anyway.
* A few highly niche products with physical keyboards still exist. Unfortunately, they're bad in every area aside from having the keyboards so I don't have one.
* Displays are still unreasonably large on most products I guess. At least they can make them much brighter and unnecessarily high-resolution.
* Everyone wants high-refresh-rate displays now. I am told that once you get used to them you can't go back, so I'm avoiding them in order to be able to keep using cheaper display tech.
* We have 5G now, which allows me to use up my entire data plan in mere *minutes* (assuming the theoretical maximum link rate is achieved, which will never actually happen). I've heard that it's simpler and neater internally, but I don't trust telecoms people to ever get this right.
* Foldable phones are cool but I dislike, both aesthetically and for reasons of durability, compromising the solid-brick-of-microelectronics nature of modern phones with (large) mechanical parts, and don't really get the usecase.

View File

@ -50,14 +50,15 @@ Obviously this is just stuff *I* like; you might not like it, which isn't really
* Egan has short story anthologies which I have also read and recommend.
* [Stories of Your Life and Others](https://www.goodreads.com/book/show/223380.Stories_of_Your_Life_and_Others) - just very good short stories. Chiang has written a sequel, [Exhalation](https://www.goodreads.com/book/show/41160292-exhalation), which I also entirely recommend.
* He also wrote [Arrival](https://www.goodreads.com/book/show/32200035-arrival). I like this but not the movie, since the movie's scriptwriters clearly did not understand what was going on.
* [A Hero's War](https://fictionpress.com/s/3238329/1/A-Hero-s-War) - bootstrapping industrialization in a setting with magic. Unfortunately, unfinished and seems likely to remain that way.
* [Snow Crash](https://www.goodreads.com/book/show/40651883-snow-crash) - a fun action story even though I don't take the tangents into Sumerian mythology (?) very seriously.
* Since this list was written, I think it became notorious for introducing the "metaverse" as pushed by Facebook now. This is very silly. Everyone who is paying attention knows that the real metaverse is Roblox.
* [Limitless](https://en.wikipedia.org/wiki/Limitless_(TV_series)) (the movie is also decent) - actually among the least bad depictions of superhuman intelligence I've seen in media, and generally funny.
* [Pantheon](https://en.wikipedia.org/wiki/Pantheon_(TV_series)) - ~~unfortunately cancelled and pulled from streaming (for tax purposes somehow?) and thus hard to watch,~~ apparently uncancelled and hosted by Amazon now?! Still hard to watch. One of about three TV series I've seen on the subject of brain uploads, and I think the smartest, not that this is a very high bar since it's frequently quite silly (they repeatedly talk about how uploads are just data which can be copied, and then forget this every time it would be useful). Some day I want my own ominous giant cube of servers in Norway.
* [Mark of the Fool](https://www.goodreads.com/series/346305-mark-of-the-fool) - somewhat standardly D&D-like world, but the characters are well-written and take reasonable decisions.
* [Nice Dragons Finish Last](https://www.goodreads.com/series/128485-heartstrikers) - enjoyable urban fantasy.
* [Street Cultivation](https://www.goodreads.com/series/287542-street-cultivation) - again, sane characters who do not make obviously stupid decisions for plot reasons.
* [Nexus](https://www.goodreads.com/book/show/13642710-nexus) - somewhat dumb plot (I think; I read it a while ago and am not far through a reread now) but very cool transhumanist technology.
Special mentions (i.e. "I haven't gotten around to reading these but they are well-reviewed and sound interesting") to:
* [The Divine Cities](https://www.goodreads.com/series/159695-the-divine-cities) by Robert Jackson Bennett.
@ -72,6 +73,6 @@ Special mentions (i.e. "I haven't gotten around to reading these but they are we
* "house of suns is really very good, you should read" - baidicoot/Aidan, creator of the world-renowned [Emu War](/emu-war) game * "house of suns is really very good, you should read" - baidicoot/Aidan, creator of the world-renowned [Emu War](/emu-war) game
* [Singularity Sky](https://www.goodreads.com/book/show/81992.Singularity_Sky) by Charlie Stross. * [Singularity Sky](https://www.goodreads.com/book/show/81992.Singularity_Sky) by Charlie Stross.
If you want EPUB versions of the free web serial stuff here for your e-reader, there are tools to generate those, or you can contact me for a copy. If you want EPUB versions of the free web serials here for your e-reader, there are tools to generate those, or you can contact me for a copy.
You can suggest other possibly-good stuff in the comments and I may add it to an extra section, and pointlessly complain there or [by email](mailto:osmarks@protonmail.com) if you don't like some of this. Please tell me if any links are dead. You can suggest other possibly-good stuff in the comments and I may add it to an extra section, and pointlessly complain there or [by email](mailto:me@osmarks.net) if you don't like some of this. Please tell me if any links are dead.

View File

@ -0,0 +1,42 @@
---
title: Stop having political opinions
description: This is, of course, all part of my evil plan to drive site activity through systematically generating (meta)political outrage.
created: 24/09/2023
slug: opinion
draft: yes
---
This may sound strange coming from someone whose website contains things which are clearly [political opinions](/osbill/); I am being [hypocritical](https://www.overcomingbias.com/p/homo-hipocritushtml)/didn't notice/have updated my views since that/am writing hyperbolically or ironically to make a point/do not require myself to have self-consistent beliefs (select your favourite option). Regardless, I think that holding, forming and in various ways acting on political opinions is somewhere between unnecessary and significantly net harmful. I apologize in advance for not using concrete examples for anything in this post, but those would be political opinions.
## Importance, Tractability, Neglectedness
Political interaction is often framed as altruistic or even morally necessary - most notably, voting is a "civic duty" and in some countries compulsory, and it's common for political movements and their participants to believe that they are helping to bring about a better world through their actions, or that they're preventing some other group from doing harm (and thus in some sense doing good) with their ill-posed opinions, misaligned values or sheer evilness. Thus, let's evaluate it as an altruistic act using the [ITN](https://forum.effectivealtruism.org/topics/itn-framework) framework favoured by Effective Altruism. In brief, Importance is the value of fully solving whatever problem you're targeting, Tractability is the marginal value of your input to it (how much an additional unit of work can affect the problem), and Neglectedness is how little the problem is already being worked on.
Politics clearly fails at neglectedness. The majority of people are interested at least to the extent of thinking and talking about it regularly and voting. Very large chunks of media time are allotted to politics, and people readily seek out political content to read and debate. There is no shortage of advocacy groups, think tanks and public intellectuals engaging in politics. You might contend that your favourite political position is neglected and less popular than widely discussed ones, but given that you are aware of it and supporting it, it probably still has a fairly large amount of supporters - the world population is quite large, after all - and since you're still in the same field as all the other positions you are competing with them for resources and attention.
It does not do well on tractability. For mostly the same reasons as neglectedness, your marginal contribution is not big. [Voting](https://putanumonit.com/2015/12/30/010-voting/) is, even under fairly optimistic assumptions, very unlikely to change the outcome of an election. Discussing politics with people you know is notorious for never changing anyone's beliefs, and arguments on social media are even less effective - very little discussion surfaces novel ideas and it mostly serves as an ineffective attempt to apply social pressure. The situation with protests and similar activity is perhaps better because there are fewer people doing that, but I do not think their effectiveness is going to be affected much by the addition or removal of a person on the margin, and I am not convinced that they do much in general. Politics is also especially intractable because on many issues, people are actively working against you.
Importance is somewhat more ambiguous. I have been playing fast and loose with the exact definition of "politics" here - while it's clearly true that the sum of everything people want solved via politics is very important, the plausible consequences of something like electing a party you like or having a policy you want implemented are significantly smaller, both from the perspectives of [conflict theory](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) (the frame of political disagreements as battles between groups over values or resource allocation) and mistake theory (political disagreements as good-faith discussions of what the best thing to do is given a shared understanding of goals). Conflict-theoretically, any victory can be eroded by changing power dynamics later or nullified by enemies in the system surrounding it; mistake-theoretically, the impact of policies is very hard to test, let alone know in advance, and many of the issues policies are intended to solve are very complicated and any single solution is unlikely to work very well.
## The Magic Fix-Everything Button
A large amount of modern politics-as-practiced seems to take a specific kind of conflict-theoretic view which I think makes it less important (in that the policies resulting from it will be worse) as well as less tractable (it's easier to persuade people if they don't tie opposing views into their identity, and easier to take actions if you are not battling some other group). Specifically, the belief that the main obstacle to improving the world is simply that evil people are in power, and that if you can demand it insistently enough you can replace them with favorable people who will then fix everything in a simple and obvious way which has heretofore gone unused. This is exemplified by [movements with unclear goals and underspecified demands to fix things](https://www.astralcodexten.com/p/book-review-the-revolt-of-the-public).
While there are absolutely some cases where a bad policy exists for conflict-theoretic reasons (e.g. one group wants to enrich itself at the expense of others and opposition is too diffuse to stop it), the biggest problems we face now have no clean complete solution, only a wide range of possible policy positions with a complex set of tradeoffs. Insistence on a particular consequence without thought to how it might actually be achieved, erasure of tradeoffs, or [ignorance of the reasons](https://en.wiktionary.org/wiki/Chesterton%27s_fence) someone else might be against an obviously-good-to-you policy result in prolonged conflict and ineffective results. Where possible, it's better to try and [move the Pareto frontier](https://www.overcomingbias.com/p/policy_tugowarhtml) with novel solutions rather than attempting to force through a result against others.
This can also lead to, in effect, passivity: not considering solutions to problems other than wrangling large-scale governmental mechanisms. This is also harmful, since the government is [not omnicompetent](https://www.theonion.com/smart-qualified-people-behind-the-scenes-keeping-ameri-1819571706) and anything complicated is mired in horrifying bureaucratic quagmires of impenetrable dysfunction, as are most large-scale organizations.
## Selfish Reasons To Not Participate
Rather than merely not being a public good, I think involvement in politics is even individually harmful. The most obvious reason is opportunity cost - all the time spent reading political news, voting, forming opinions, or having conversations about it could be spent more effectively - but there is the further reason that because people often tie politics to their identities, political discussions are frequently damaging to relationships.
So if it's bad to participate, why is it so popular? The short answer is, to reuse the favourite adage of "ersatz" on the EleutherAI Discord server, "people are insane". We are [adaptation-executors, not fitness-maximizers](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers), built on evolved cognitive heuristics optimized for ancient savannah environments in smaller tribes. It's plausible that in those, tractability and neglectedness were much lower and social missteps or groups moving against you significantly costlier, the resulting strategies misgeneralize to today's world of 8 billion people, and few people bother to explicitly reason about the cost/benefit and override this. The system is also hyperstitious: now that political interaction is considered altruistic and expected, people are incentivized to participate more for signalling reasons.
This can also be blamed on cultural evolution/memetics. As with religions, the most contagious ideologies are selected for and propagate, growing more able to effectively capture human attention regardless of actual value to their hosts. The incentives of media also help: receiving payment for clicks on your videos and articles incentivizes recapitulation of the same process through deliberate design, resulting in content optimized to spread through exploiting outrage and tribalism.
## Universalizability
The most common objection I've heard is along the lines of "but if everyone did this, no political improvement would occur and the world would be much worse off". This is true but irrelevant: I'm not a Kantian and don't only advocate for behaviors which need to apply to everyone at once. In the current state of the world, I think the marginal benefit (to everyone, and to you) of engagement is below the marginal cost and so it should be avoided - if a sufficiently large amount of people agreed with me on this and did so, some of my arguments would apply less and it would become more worthwhile, and I might then argue in favour of political engagement.
Another is the claim that I am a privileged person who is only able to ignore politics because I'm not heavily threatened or discriminated against by existing institutions. This also misses the point somewhat - this affects importance, but not neglectedness or tractability, which are still, I think, so much lower than people's behaviour implies that this argument holds up.
If you have any arguments against my argument I haven't addressed here, please tell me so I can think about them.

View File

@ -6,14 +6,18 @@ updated: 11/05/2023
---
As you may know, osmarks.net is a website, served from computers which are believed to exist. But have you ever wondered exactly how it's all set up? If not, you may turn elsewhere and live in ignorance. Otherwise, continue reading.
Many similar personal sites are hosted on free static site services or various cloud platforms, but mine actually runs on a physical server. This was originally done because of my general distrust of SaaS/cloud platforms, to learn about Linux administration, and desire to run some non-web things, but now it's necessary to run the full range of weird components which are now important to the website. ~~The hardware has remained the same since early 2019, before I actually had a public site, apart from the addition of more disk capacity and a spare GPU for occasional machine learning workloads - I am using an old HP ML110 G7 tower server. Despite limited RAM and CPU power compared to contemporary rackmount models, it was cheap, has continued to work amazingly reliably, and is much more power-efficient than those would have been. It mostly only runs at about 5% CPU load and 2GB of RAM in use anyway, so it's not been an issue.~~ Due to the increasing compute demands of internal workloads, among other things, it has now been replaced with a custom build using a consumer Ryzen CPU. This has massively increased performance thanks to the CPU's much better IPC, clocks and core count, the 16x increase in RAM, and actually having an SSD[^2].
The main site itself, which you're currently reading, is in fact just a simple static website. Over the years the exact implementation has varied a lot, from the original not-actually-that-static version using Caddy, some weird PHP scripts for Markdown, and a few folders of HTML files, to the later strange combination of Haskell (using Hakyll) and makefiles to the current somewhat horrible Node.js program (which also interacts with someone else's Go program. Fun!). The modern implementation of the compiler does templating, dependency resolution, Markdown and some optimization tasks in about 300 lines of poorly-described JavaScript.
Being static files, many, many different webservers could have been used for this site. In practice, it's mostly alternated randomly between [caddy](https://caddyserver.com/) (a more recent, Go-based webserver with automatic LetsEncrypt integration) and [nginx](https://nginx.org/) (an older and more powerful but slightly quirky program) - caddy generally had easier configuration, but I arbitrarily preferred nginx in some ways. After caddy v2 suddenly required me to rewrite my configuration and introduced a bunch of weird issues, I permanently switched over to nginx and haven't changed back. The configuration file is now 600 lines or so, even with includes to shorten things, but it... works, at least. This is mostly to accommodate the bizarrely large set of subdomains I now have for various people, and reverse proxy configuration for backend services. I also use a custom-compiled build of nginx with HTTP/3 (QUIC) support and some modules compiled in.
Some of these backend things are only for personal use, but a few are related to the site itself. For example, the comment server is a standalone Python program, [isso](https://posativ.org/isso/), with corresponding JS embedded in each page. This works pretty well, but has led to some weird quirkiness, such as each separate 404-erroring URL having its own list of comments. There's also the Random Stuff API, a custom assemblage of about 15 different Python libraries and external programs which, while technically not linked on the site, does interact with other projects like [PotatOS](https://git.osmarks.net/osmarks/potatOS/), and internal services on the same infrastructure like my [RSS reader](https://miniflux.app/). The images subdomain also uses a [PHP program](https://larsjung.de/h5ai/) to generate a nice searchable index; in fact, it is <del>one of two</del> the only PHP thing<del>s</del> I have unfortunately not yet been able to purge[^1]. There also used to be a publicly available status page using some custom code, but this doesn't work very well and has now been dropped; previously I had a Grafana (and earlier Netdata) instance there, but this has now been cancelled because it leaks a worrying amount of information.
As for the underlying OS everything runs on, I currently use [Arch Linux](https://i.osmarks.net/memes-or-something/arch-btw.png) (as well as Alpine on a few lower-resourced cloud servers). Some form of Linux is inevitable - BSDs aren't really compatible with much, and Windows is obviously unsuited for server duty - but I mostly use Arch for its stability (this sounds sarcastic, but I've actually found it to be very reliable with regular updates), wide range of packages (particularly from the AUR; as I don't really run critical production infrastructure, I can generally afford to compile stuff from source a lot), and better general ease-of-use than Alpine. As much as I vaguely resent it, this is mostly down to systemd - despite it being a horrific bloated monolith, `journalctl` is very convenient and unit files are pleasant and easy to write compared to the weird OpenRC scripts Alpine uses.
I am actually considering yet another redesign, however; switching to a dynamic site implementation instead would allow me to integrate the comment system and achievement system better, make things like the "from other blogs" tiles actually update at reasonable intervals, and arbitrarily A/B test users, although it would break some nice things like this site's very aggressive caching and fast serving. Please leave your thoughts or lack of thoughts on this in the comments.
[^1]: The previous one was DokuWiki, now replaced with Minoteaur.
[^2]: My next upgrade is probably going to be more SSD space, since I'm *somehow* running out of that.

View File

@ -0,0 +1,128 @@
---
title: Political Opinion Calendar
description: Instead of wasting time thinking of the best political opinion to hold, simply pick them pseudorandomly per day with this tool.
slug: polcal
---
<script src="/assets/js/mithril.js"></script>
<script src="/assets/js/date-fns.js"></script>
<style>
.calday {
padding: 1em;
margin: 0;
border: none;
}
#app table {
border-collapse: collapse;
}
.opinion {
font-style: italic;
}
#app button, #app input {
font-size: 1.25em;
}
</style>
<div id="app">
</div>
<script>
const STORAGE_KEY = "political-opinion-calendar"
const now = new Date(Date.now()) // JavaScript "irl"
var month = now.getMonth() + 1
var year = now.getFullYear()
const readSave = () => {
try {
const result = JSON.parse(localStorage.getItem(STORAGE_KEY))
if (!result || !Array.isArray(result) || !result.every(x => typeof x.opinion === "string" && typeof x.weight === "number")) { return }
return result
} catch(e) {
console.error(e, "load failed")
}
}
var opinions = readSave() || [{ weight: 1, opinion: "" }]
const writeSave = () => {
localStorage.setItem(STORAGE_KEY, JSON.stringify(opinions))
}
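// cyrb53-style 53-bit string hash; used below to deterministically map a date string to an opinion, and an opinion string to a hue.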
const hash = (str, seed = 0) => {
let h1 = 0xdeadbeef ^ seed, h2 = 0x41c6ce57 ^ seed
for (let i = 0, ch; i < str.length; i++) {
ch = str.charCodeAt(i)
h1 = Math.imul(h1 ^ ch, 2654435761)
h2 = Math.imul(h2 ^ ch, 1597334677)
}
h1 = Math.imul(h1 ^ (h1>>>16), 2246822507) ^ Math.imul(h2 ^ (h2>>>13), 3266489909)
h2 = Math.imul(h2 ^ (h2>>>16), 2246822507) ^ Math.imul(h1 ^ (h1>>>13), 3266489909)
return 4294967296 * (2097151 & h2) + (h1>>>0)
}
function incMonth(by) {
month += by
if (month < 1) {
month = 12 - month
year--
} else if (month > 12) {
month = month - 12
year++
}
}
function displayMonth(year, month) {
var opinionLookup = []
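// Expand opinions into a weighted lookup table: an opinion with weight N appears N times, so a uniform pick over the table is proportional to weight.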
for (const opinion of opinions) {
for (var i = 0; i < opinion.weight; i++) {
opinionLookup.push(opinion.opinion)
}
}
var init = dateFns.addMonths(dateFns.addYears(0, year - 1970), month - 1)
var offset = dateFns.getDay(init) - 1
var weekinit = dateFns.subDays(init, offset >= 0 ? offset : 6)
var rows = [
m("tr.calweek.calhead", ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"].map(x => m("th.calday", x)))
]
outer: for (var i = 0; i < 6; i++) {
var row = []
for (var j = 0; j < 7; j++) {
var x = dateFns.addDays(dateFns.addWeeks(weekinit, i), j)
if (x > init && dateFns.getMonth(x) + 1 !== month && dateFns.getDate(x) >= 7) { break outer }
var opindex = hash(`${dateFns.getYear(x)}-${dateFns.getMonth(x)}-${dateFns.getDate(x)}`) % opinionLookup.length
var opinion = opinionLookup.length > 0 ? opinionLookup[opindex] : "no opinion"
row.push(m("td.calday", { style: `background: hsl(${hash(opinion) % 360}deg, 100%, 60%); opacity: ${dateFns.getMonth(x) + 1 === month ? "1": "0.5"}` }, [
m(".date", dateFns.getDate(x)),
m(".opinion", opinion)
]))
}
rows.push(m("tr.calweek", row))
}
return rows
}
m.mount(document.querySelector("#app"), {
view: function() {
return [
m("", [
m("h1", "Political Opinions"),
m("ul",
opinions.map((opinion, index) => m("li", [
m("button", { onclick: () => opinions.splice(index, 1) }, "-"),
m("input[type=number]", { value: opinion.weight, min: 1, max: 100, oninput: ev => { opinions[index].weight = Math.min(ev.target.value, 100); writeSave() } }),
m("input", { value: opinion.opinion, oninput: ev => { opinions[index].opinion = ev.target.value; writeSave() }, placeholder: "Political opinion..." })
]))
),
m("button", { onclick: () => opinions.push({ opinion: "", weight: 1 }) }, "+")
]),
m("", [
m("h1", "Calendar"),
m("h2", `${year}-${month}`),
m("button", { onclick: () => incMonth(-1) }, "-"),
m("button", { onclick: () => incMonth(1) }, "+"),
m("table", displayMonth(year, month))
]),
]
}
})
</script>
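A quick sketch of how the weights above drive the daily pick (my illustration, not part of the page's code): an opinion with weight w is repeated w times in the lookup table, so it is held on roughly w out of every (total weight) days, and the date hash fixes the index for the whole day.

// Sketch only; `hash` is the function defined in the page above.
const opinions = [{ opinion: "A", weight: 2 }, { opinion: "B", weight: 1 }]
const lookup = opinions.flatMap(o => Array(o.weight).fill(o.opinion)) // ["A", "A", "B"]
const opinionFor = date =>
    lookup[hash(`${date.getFullYear()}-${date.getMonth()}-${date.getDate()}`) % lookup.length]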

package-lock.json (generated, 1483 changed lines): diff suppressed because it is too large.

View File

@ -4,12 +4,17 @@
   "description": "Static site generation code for my website.",
   "main": "index.js",
   "dependencies": {
+    "@msgpack/msgpack": "^3.0.0-beta2",
+    "axios": "^1.5.0",
+    "better-sqlite3": "^8.7.0",
     "chalk": "^4.1.0",
     "dayjs": "^1.8.28",
+    "esbuild": "^0.19.6",
     "fs-extra": "^8.1.0",
     "gray-matter": "^4.0.2",
     "handlebars": "^4.7.6",
     "html-minifier": "^4.0.0",
+    "idb": "^7.1.1",
     "markdown-it": "^13.0.1",
     "markdown-it-anchor": "^8.6.7",
     "markdown-it-footnote": "^3.0.3",
@ -19,7 +24,8 @@
     "ramda": "^0.26.1",
     "sass": "^1.26.8",
     "terser": "^4.8.0",
-    "uuid": "^9.0.0"
+    "uuid": "^9.0.0",
+    "yalps": "^0.5.5"
   },
   "license": "MIT"
 }

View File

@ -18,7 +18,7 @@
     "If you can't stand the heat, get out of the server room."
   ],
   "feeds": [
-    "https://blogs.sciencemag.org/pipeline/feed",
+    "https://www.science.org/blogs/pipeline/feed",
     "https://www.rtl-sdr.com/feed/",
     "https://astralcodexten.substack.com/feed",
     "https://www.rifters.com/crawl/?feed=rss2",
@ -27,5 +27,6 @@
     "https://aphyr.com/posts.atom",
     "https://os.phil-opp.com/rss.xml"
   ],
-  "dateFormat": "YYYY-MM-DD"
+  "dateFormat": "YYYY-MM-DD",
+  "microblogSource": "https://b.osmarks.net/outbox"
 }
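The new microblogSource key points at an ActivityPub outbox; fetchMicroblog (in index.js below) reads orderedItems from it and uses object.id, object.published and object.content of each activity. Roughly the shape it expects, as an illustrative excerpt rather than a captured response:

// Illustrative only (hypothetical IDs); just the fields the build actually reads.
const exampleOutbox = {
    orderedItems: [
        {
            object: {
                id: "https://b.osmarks.net/objects/example",
                published: "2023-11-19T12:00:00Z",
                content: "<p>Short-form observation.</p>"
            }
        }
    ]
}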

View File

@ -18,6 +18,10 @@ const childProcess = require("child_process")
 const chalk = require("chalk")
 const crypto = require("crypto")
 const uuid = require("uuid")
+const sqlite = require("better-sqlite3")
+const axios = require("axios")
+const msgpack = require("@msgpack/msgpack")
+const esbuild = require("esbuild")
 dayjs.extend(customParseFormat)
@ -28,6 +32,7 @@ const blogDir = path.join(root, "blog")
 const errorPagesDir = path.join(root, "error")
 const assetsDir = path.join(root, "assets")
 const outDir = path.join(root, "out")
+const srcDir = path.join(root, "src")
 const buildID = nanoid()
 globalData.buildID = buildID
@ -189,7 +194,7 @@ const processBlog = async () => {
         }, processContent: renderMarkdown })
     })
     console.log(chalk.yellow(`${Object.keys(blog).length} blog entries`))
-    globalData.blog = addGuids(R.sortBy(x => x.updated ? -x.updated.valueOf() : 0, R.values(blog)))
+    globalData.blog = addGuids(R.filter(x => !x.draft, R.sortBy(x => x.updated ? -x.updated.valueOf() : 0, R.values(blog))))
 }
 const processErrorPages = () => {
@ -214,51 +219,76 @@ const applyMetricPrefix = (x, unit) => {
 globalData.metricPrefix = applyMetricPrefix
 const writeBuildID = () => fsp.writeFile(path.join(outDir, "buildID.txt"), buildID)
 const index = async () => {
     const index = globalData.templates.index({ ...globalData, title: "Index", posts: globalData.blog, description: globalData.siteDescription })
     await fsp.writeFile(path.join(outDir, "index.html"), index)
 }
-const compileCSS = async () => {
-    const css = sass.renderSync({
-        data: await readFile(path.join(root, "style.sass")),
-        outputStyle: "compressed",
-        indentedSyntax: true
-    }).css
-    globalData.css = css
-}
-const loadTemplates = async () => {
-    globalData.templates = await loadDir(templateDir, async fullPath => pug.compile(await readFile(fullPath), { filename: fullPath }))
-}
+const cache = sqlite("cache.sqlite3")
+cache.exec("CREATE TABLE IF NOT EXISTS cache (k TEXT NOT NULL PRIMARY KEY, v BLOB NOT NULL, ts INTEGER NOT NULL)")
+const writeCacheStmt = cache.prepare("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)")
+const readCacheStmt = cache.prepare("SELECT * FROM cache WHERE k = ?")
+const readCache = (k, maxAge=null, ts=null) => {
+    const row = readCacheStmt.get(k)
+    if (!row) return
+    if ((maxAge && row.ts < (Date.now() - maxAge) || (ts && row.ts != ts))) return
+    return msgpack.decode(row.v)
+}
+const writeCache = (k, v, ts=Date.now()) => {
+    const enc = msgpack.encode(v)
+    writeCacheStmt.run(k, Buffer.from(enc.buffer, enc.byteOffset, enc.byteLength), ts)
+}
+const fetchMicroblog = async () => {
+    const cached = readCache("microblog", 60*60*1000)
+    if (cached) { globalData.microblog = cached; return }
+    const posts = (await axios({ url: globalData.microblogSource, headers: { "Accept": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"' } })).data.orderedItems
+    globalData.microblog = posts.slice(0, 6).map(post => minifyHTML(globalData.templates.activitypub({
+        ...globalData,
+        permalink: post.object.id,
+        date: dayjs(post.object.published),
+        content: post.object.content,
+        bgcol: hashColor(post.object.id, 1, 0.9)
+    })))
+    writeCache("microblog", globalData.microblog)
+}
 const runOpenring = async () => {
-    try {
-        var cached = JSON.parse(await fsp.readFile("cache.json", {encoding: "utf8"}))
-    } catch(e) {
-        console.log(chalk.keyword("orange")("Failed to load cache:"), e)
-    }
-    if (cached && (Date.now() - cached.time) < (60 * 60 * 1000)) {
-        console.log(chalk.keyword("orange")("Loading Openring data from cache"))
-        return cached.data
-    }
-    globalData.openring = "bee"
+    const cached = readCache("openring", 60*60*1000)
+    if (cached) { globalData.openring = cached; return }
     // wildly unsafe but only runs on input from me anyway
     const arg = `./openring -n6 ${globalData.feeds.map(x => '-s "' + x + '"').join(" ")} < openring.html`
     console.log(chalk.keyword("orange")("Openring:") + " " + arg)
     const out = await util.promisify(childProcess.exec)(arg)
     console.log(chalk.keyword("orange")("Openring:") + "\n" + out.stderr.trim())
     globalData.openring = minifyHTML(out.stdout)
-    await fsp.writeFile("cache.json", JSON.stringify({
-        time: Date.now(),
-        data: globalData.openring
-    }))
+    writeCache("openring", globalData.openring)
 }
+const compileCSS = async () => {
+    const css = sass.renderSync({
+        data: await readFile(path.join(srcDir, "style.sass")),
+        outputStyle: "compressed",
+        indentedSyntax: true
+    }).css
+    globalData.css = css
+}
+const loadTemplates = async () => {
+    globalData.templates = await loadDir(templateDir, async fullPath => pug.compile(await readFile(fullPath), { filename: fullPath }))
+}
 const genRSS = async () => {
     const rssFeed = globalData.templates.rss({ ...globalData, items: globalData.blog, lastUpdate: new Date() })
     await fsp.writeFile(path.join(outDir, "rss.xml"), rssFeed)
 }
 const genManifest = async () => {
     const m = mustache.render(await readFile(path.join(assetsDir, "manifest.webmanifest")), globalData)
     fsp.writeFile(path.join(outAssets, "manifest.webmanifest"), m)
 }
 const minifyJSTask = async () => {
     const jsDir = path.join(assetsDir, "js")
     const jsOutDir = path.join(outAssets, "js")
@ -267,10 +297,22 @@ const minifyJSTask = async () => {
         await minifyJSFile(await readFile(fullpath), file, path.join(jsOutDir, file))
     }))
 }
+const compilePageJSTask = async () => {
+    await esbuild.build({
+        entryPoints: [ path.join(srcDir, "page.js") ],
+        bundle: true,
+        outfile: path.join(outAssets, "js/page.js"),
+        minify: true,
+        sourcemap: true
+    })
+}
 const genServiceWorker = async () => {
     const serviceWorker = mustache.render(await readFile(path.join(assetsDir, "sw.js")), globalData)
     await minifyJSFile(serviceWorker, "sw.js", path.join(outDir, "sw.js"))
 }
 const copyAsset = subpath => fse.copy(path.join(assetsDir, subpath), path.join(outAssets, subpath))
 const doImages = async () => {
@ -279,9 +321,37 @@ const doImages = async () => {
     copyAsset("titillium-web-semibold.woff2")
     copyAsset("share-tech-mono.woff2")
     globalData.images = {}
-    for (const image of await fse.readdir(path.join(assetsDir, "images"), { encoding: "utf-8" })) {
-        globalData.images[image.split(".").slice(0, -1).join(".")] = "/assets/images/" + image
-    }
+    await Promise.all(
+        (await fse.readdir(path.join(assetsDir, "images"), { encoding: "utf-8" })).map(async image => {
+            if (image.endsWith(".original")) { // generate alternative formats
+                const stripped = image.replace(/\.original$/).split(".").slice(0, -1).join(".")
+                globalData.images[stripped] = {}
+                const fullPath = path.join(assetsDir, "images", image)
+                const stat = await fse.stat(fullPath)
+                const writeFormat = async (name, ext, mime, cmd, supplementaryArgs) => {
+                    let bytes = readCache(`images/${stripped}/${name}`, null, stat.mtimeMs)
+                    const destFilename = stripped + ext
+                    const destPath = path.join(outAssets, "images", destFilename)
+                    if (!bytes) {
+                        console.log(chalk.keyword("orange")(`Compressing image ${stripped} (${name})`))
+                        await util.promisify(childProcess.execFile)(cmd, supplementaryArgs.concat([
+                            fullPath,
+                            destPath
+                        ]))
+                        writeCache(`images/${stripped}/${name}`, await fsp.readFile(destPath), stat.mtimeMs)
+                    } else {
+                        await fsp.writeFile(destPath, bytes)
+                    }
+                    globalData.images[stripped][mime] = "/assets/images/" + destFilename
+                }
+                await writeFormat("avif", ".avif", "image/avif", "avifenc", ["-s", "0", "-q", "20"])
+                await writeFormat("jpeg-scaled", ".jpg", "_fallback", "convert", ["-resize", "25%", "-format", "jpeg"])
+            } else {
+                globalData.images[image.split(".").slice(0, -1).join(".")] = "/assets/images/" + image
+            }
+        })
+    )
 }
 const tasks = {
@ -290,18 +360,20 @@ const tasks = {
     pagedeps: { deps: ["templates", "css"] },
     css: { deps: [], fn: compileCSS },
     writeBuildID: { deps: [], fn: writeBuildID },
-    index: { deps: ["openring", "pagedeps", "blog", "experiments", "images"], fn: index },
+    index: { deps: ["openring", "pagedeps", "blog", "experiments", "images", "fetchMicroblog"], fn: index },
     openring: { deps: [], fn: runOpenring },
     rss: { deps: ["blog"], fn: genRSS },
     blog: { deps: ["pagedeps"], fn: processBlog },
+    fetchMicroblog: { deps: [], fn: fetchMicroblog },
     experiments: { deps: ["pagedeps"], fn: processExperiments },
     assetsDir: { deps: [], fn: () => fse.ensureDir(outAssets) },
     manifest: { deps: ["assetsDir"], fn: genManifest },
     minifyJS: { deps: ["assetsDir"], fn: minifyJSTask },
+    compilePageJS: { deps: ["assetsDir"], fn: compilePageJSTask },
     serviceWorker: { deps: [], fn: genServiceWorker },
     images: { deps: ["assetsDir"], fn: doImages },
     offlinePage: { deps: ["assetsDir", "pagedeps"], fn: () => applyTemplate(globalData.templates.experiment, path.join(assetsDir, "offline.html"), () => path.join(outAssets, "offline.html"), {}) },
-    assets: { deps: ["manifest", "minifyJS", "serviceWorker", "images"] },
+    assets: { deps: ["manifest", "minifyJS", "serviceWorker", "images", "compilePageJS"] },
     main: { deps: ["writeBuildID", "index", "errorPages", "assets", "experiments", "blog", "rss"] }
 }
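The reworked cache gives readCache two invalidation modes: an age limit, used for the Openring and microblog fetches, and an exact timestamp match, used for compressed images keyed on the source file's mtime. A minimal sketch of the second mode, assuming only the readCache/writeCache definitions above (expensiveTransform is a hypothetical stand-in for a slow build step):

const buildArtifact = async (key, sourceMtime, input) => {
    const cached = readCache(key, null, sourceMtime) // hit only if the stored ts equals sourceMtime
    if (cached) return cached
    const result = await expensiveTransform(input) // hypothetical slow step
    writeCache(key, result, sourceMtime) // stamp the entry with the source mtime
    return result
}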

View File

@ -1,6 +1,5 @@
-// I cannot be bothered to set up a bundler
-// https://www.npmjs.com/package/idb
!function(e,t){t(window.idb={})}(this,(function(e){"use strict";let t,n;const r=new WeakMap,o=new WeakMap,s=new WeakMap,i=new WeakMap,a=new WeakMap;let c={get(e,t,n){if(e instanceof IDBTransaction){if("done"===t)return o.get(e);if("objectStoreNames"===t)return e.objectStoreNames||s.get(e);if("store"===t)return n.objectStoreNames[1]?void 0:n.objectStore(n.objectStoreNames[0])}return f(e[t])},set:(e,t,n)=>(e[t]=n,!0),has:(e,t)=>e instanceof IDBTransaction&&("done"===t||"store"===t)||t in e};function d(e){return e!==IDBDatabase.prototype.transaction||"objectStoreNames"in IDBTransaction.prototype?(n||(n=[IDBCursor.prototype.advance,IDBCursor.prototype.continue,IDBCursor.prototype.continuePrimaryKey])).includes(e)?function(...t){return e.apply(p(this),t),f(r.get(this))}:function(...t){return f(e.apply(p(this),t))}:function(t,...n){const r=e.call(p(this),t,...n);return s.set(r,t.sort?t.sort():[t]),f(r)}}function u(e){return"function"==typeof e?d(e):(e instanceof IDBTransaction&&function(e){if(o.has(e))return;const t=new Promise(((t,n)=>{const r=()=>{e.removeEventListener("complete",o),e.removeEventListener("error",s),e.removeEventListener("abort",s)},o=()=>{t(),r()},s=()=>{n(e.error||new DOMException("AbortError","AbortError")),r()};e.addEventListener("complete",o),e.addEventListener("error",s),e.addEventListener("abort",s)}));o.set(e,t)}(e),n=e,(t||(t=[IDBDatabase,IDBObjectStore,IDBIndex,IDBCursor,IDBTransaction])).some((e=>n instanceof e))?new Proxy(e,c):e);var n}function f(e){if(e instanceof IDBRequest)return function(e){const t=new Promise(((t,n)=>{const r=()=>{e.removeEventListener("success",o),e.removeEventListener("error",s)},o=()=>{t(f(e.result)),r()},s=()=>{n(e.error),r()};e.addEventListener("success",o),e.addEventListener("error",s)}));return t.then((t=>{t instanceof IDBCursor&&r.set(t,e)})).catch((()=>{})),a.set(t,e),t}(e);if(i.has(e))return i.get(e);const t=u(e);return t!==e&&(i.set(e,t),a.set(t,e)),t}const p=e=>a.get(e);const l=["get","getKey","getAll","getAllKeys","count"],D=["put","add","delete","clear"],b=new Map;function v(e,t){if(!(e instanceof IDBDatabase)||t in e||"string"!=typeof t)return;if(b.get(t))return b.get(t);const n=t.replace(/FromIndex$/,""),r=t!==n,o=D.includes(n);if(!(n in(r?IDBIndex:IDBObjectStore).prototype)||!o&&!l.includes(n))return;const s=async function(e,...t){const s=this.transaction(e,o?"readwrite":"readonly");let i=s.store;return r&&(i=i.index(t.shift())),(await Promise.all([i[n](...t),o&&s.done]))[0]};return b.set(t,s),s}c=(e=>({...e,get:(t,n,r)=>v(t,n)||e.get(t,n,r),has:(t,n)=>!!v(t,n)||e.has(t,n)}))(c),e.deleteDB=function(e,{blocked:t}={}){const n=indexedDB.deleteDatabase(e);return t&&n.addEventListener("blocked",(()=>t())),f(n).then((()=>{}))},e.openDB=function(e,t,{blocked:n,upgrade:r,blocking:o,terminated:s}={}){const i=indexedDB.open(e,t),a=f(i);return r&&i.addEventListener("upgradeneeded",(e=>{r(f(i.result),e.oldVersion,e.newVersion,f(i.transaction))})),n&&i.addEventListener("blocked",(()=>n())),a.then((e=>{s&&e.addEventListener("close",(()=>s())),o&&e.addEventListener("versionchange",(()=>o()))})).catch((()=>{})),a},e.unwrap=p,e.wrap=f}));
+const idb = require("idb")
+const { solve } = require("yalps")
 // attempt to register service worker
 if ("serviceWorker" in navigator) {
@ -34,6 +33,7 @@ const hashString = function(str, seed = 0) {
 }
 const colHash = (str, saturation = 100, lightness = 70) => `hsl(${hashString(str) % 360}, ${saturation}%, ${lightness}%)`
+window.colHash = colHash
 // Arbitrary Points code, wrapped in an IIFE to not pollute the global environment much more than it already is
 window.points = (async () => {
@ -368,6 +368,144 @@ window.points = (async () => {
     }
 })()
const footnotes = document.querySelector(".footnotes")
const sidenotes = document.querySelector(".sidenotes")
if (sidenotes) {
const codeblocks = document.querySelectorAll("pre.hljs")
const article = document.querySelector("main.blog-post")
while (footnotes.firstChild) {
sidenotes.appendChild(footnotes.firstChild)
}
const footnoteItems = sidenotes.querySelectorAll(".footnote-item")
const sum = xs => xs.reduce((a, b) => a + b, 0)
const arrayOf = (n, x) => new Array(n).fill(x)
const BORDER = 16
const sidenotesAtSide = () => getComputedStyle(sidenotes).paddingLeft !== "0px"
let rendered = false
const relayout = forceRedraw => {
// sidenote column width is static: no need to redo positioning on resize unless no positions applied
if (sidenotesAtSide()) {
if (rendered && !forceRedraw) return
// sidenote vertical placement algorithm
const snRect = sidenotes.getBoundingClientRect()
const articleRect = article.getBoundingClientRect()
const exclusions = [[-Infinity, Math.max(articleRect.top, snRect.top)]]
for (const codeblock of codeblocks) {
const codeblockRect = codeblock.getBoundingClientRect()
if (codeblockRect.width !== 0) { // collapsed
exclusions.push([codeblockRect.top - BORDER, codeblockRect.top + codeblockRect.height + BORDER])
}
}
// convert unusable regions into list of usable regions
const inclusions = []
for (const [start, end] of exclusions) {
if (inclusions.length) inclusions[inclusions.length - 1].end = start - snRect.top
inclusions.push({ start: end - snRect.top, contents: [] })
}
inclusions[inclusions.length - 1].end = Infinity
const notes = []
// read off sidenotes to place
for (const item of footnoteItems) {
const itemRect = item.getBoundingClientRect()
const link = article.querySelector(`#${item.id.replace(/^fn/, "fnref")}`)
const linkRect = link.getBoundingClientRect()
item.style.position = "absolute"
item.style.left = getComputedStyle(sidenotes).paddingLeft
item.style.marginBottom = item.style.marginTop = `${BORDER / 2}px`
notes.push({
item,
height: itemRect.height + BORDER,
target: linkRect.top - snRect.top
})
}
// preliminary placement: place in valid regions going down
for (const note of notes) {
const index = inclusions.findLastIndex(inc => (inc.start + note.height) < note.target)
const next = inclusions.slice(index)
.findIndex(inc => (sum(inc.contents.map(x => x.height)) + note.height) < (inc.end - inc.start))
inclusions[index + next].contents.push(note)
}
// TODO: try simple moves between regions? might be useful sometimes
// place within region and apply styles
for (const inc of inclusions) {
const regionNotes = inc.contents
if (regionNotes.length > 0) {
const variables = {}
const constraints = {}
if (inc.end !== Infinity) {
const heights = regionNotes.map(note => note.height)
constraints["sum_gaps"] = { max: inc.end - inc.start - sum(heights) }
}
regionNotes.forEach((note, i) => {
variables[`distbound_${i}`] = {
"distsum": 1,
[`distbound_${i}_offset`]: 1,
[`distbound_${i}_offset_neg`]: 1
}
const heightsum = sum(regionNotes.slice(0, i).map(x => x.height))
const baseoffset = heightsum - note.target
// WANT: distbound_i >= placement_i - target_i AND distbound_i >= target_i - placement_i,
// where placement_i = gapsum_i + heightsum_i.
// The first condition is distbound_i >= gapsum_i + heightsum_i - target_i.
// Define distbound_i_offset = distbound_i - gapsum_i;
// requiring distbound_i_offset >= heightsum_i - target_i
// implies distbound_i - gapsum_i >= heightsum_i - target_i
// (as required)
// The second condition becomes distbound_i_offset_neg = distbound_i + gapsum_i >= target_i - heightsum_i.
constraints[`distbound_${i}_offset`] = { min: baseoffset }
constraints[`distbound_${i}_offset_neg`] = { min: -baseoffset }
constraints[`gap_${i}`] = { min: 0 }
const G_i_var = { "sum_gaps": 1 }
for (let j = i; j <= regionNotes.length; j++) G_i_var[`distbound_${j}_offset`] = -1
for (let j = i; j < regionNotes.length; j++) G_i_var[`distbound_${j}_offset_neg`] = 1
variables[`gap_${i}`] = G_i_var
})
const model = {
direction: "minimize",
objective: "distsum",
constraints,
variables
}
const solution = solve(model, { includeZeroVariables: true })
if (solution.status !== "optimal") {
// implode
solution.variables = []
console.warn("Sidenote layout failed", solution.status)
}
const solutionVars = new Map(solution.variables)
let position = 0
regionNotes.forEach((note, i) => {
position += solutionVars.get(`gap_${i}`) || 0
note.item.style.top = position + "px"
position += note.height
})
}
}
rendered = true
} else {
for (const item of sidenotes.querySelectorAll(".footnote-item")) {
item.style.position = "static"
}
rendered = false
}
}
window.onresize = relayout
window.onload = relayout
document.querySelectorAll("summary").forEach(x => {
x.addEventListener("click", () => {
setTimeout(() => relayout(true), 0)
})
})
window.relayout = relayout
}
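To make the model construction above concrete, here is a toy instance in the same format solve() consumes (my example, not from the site): two sidenotes, the first 100px tall, with link targets 150px and 160px from the top of an unbounded region. distbound_i is forced above |placement_i - target_i| by the offset constraints, and the gap_i values are what the solver actually places; coefficients for constraints that are never created (sum_gaps, since this region has no end) are omitted.

const toy = {
    direction: "minimize",
    objective: "distsum",
    constraints: {
        gap_0: { min: 0 },
        gap_1: { min: 0 },
        distbound_0_offset: { min: 0 - 150 }, // heightsum_0 - target_0
        distbound_0_offset_neg: { min: 150 - 0 }, // target_0 - heightsum_0
        distbound_1_offset: { min: 100 - 160 }, // heightsum_1 - target_1
        distbound_1_offset_neg: { min: 160 - 100 }
    },
    variables: {
        distbound_0: { distsum: 1, distbound_0_offset: 1, distbound_0_offset_neg: 1 },
        distbound_1: { distsum: 1, distbound_1_offset: 1, distbound_1_offset_neg: 1 },
        gap_0: { distbound_0_offset: -1, distbound_0_offset_neg: 1, distbound_1_offset: -1, distbound_1_offset_neg: 1 },
        gap_1: { distbound_1_offset: -1, distbound_1_offset_neg: 1 }
    }
}
// solve(toy, { includeZeroVariables: true }) finds an optimal distsum of 90: the second
// note cannot sit closer than 90px below its target without pulling the first note an
// equal distance away from its own target.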
 const customStyle = localStorage.getItem("user-stylesheet")
 let customStyleEl = null
 if (customStyle) {
@ -377,3 +515,5 @@ if (customStyle) {
     customStyleEl.id = "custom-style"
     document.head.appendChild(customStyleEl)
 }
+window.customStyleEl = customStyleEl
+window.customStyle = customStyle

View File

@ -1,3 +1,7 @@
+$sidenotes-width: 20rem
+$content-margin: 1rem
+$content-width: 40rem
+
 @font-face
   font-family: 'Titillium Web'
   font-style: normal
@ -56,7 +60,7 @@ nav
   color: white
   font-size: 1.25em
-  a, img
+  a, img, picture
     margin-right: 0.5em
 @for $i from 1 through 6
@ -71,18 +75,18 @@ h1, h2, h3, h4, h5, h6
   color: inherit
 main, .header
-  margin-left: 1em
-  margin-right: 1em
+  margin-left: $content-margin
+  margin-right: $content-margin
 // for easier viewing on big screen devices, narrow the width of text
 // also make links a bit more distinct
 main.blog-post
-  max-width: 40em
+  max-width: $content-width
   text-align: justify
   a
     text-decoration: underline
-.blog, .experiments, .atl
+.blog, .experiments, .atl, .microblog
   margin: -0.5em
   margin-bottom: 0
   display: flex
@ -94,6 +98,9 @@ main.blog-post
     padding: 1em
     flex: 1 1 20%
+.microblog > div
+  flex: 1 1 30%
+
 main
   margin-top: 1em
@ -147,7 +154,7 @@ button, select, input, textarea, .textarea
 .imbox
   display: flex
-  img
+  img, picture
     padding-right: 1em
     height: 8em
     width: 8em
@ -162,5 +169,36 @@ button, select, input, textarea, .textarea
   border: 1px solid black
   padding: 1em
   margin: -1px
-  img
+  img, picture
     width: 100%
blockquote
padding-left: 0.4rem
border-left: 0.4rem solid black
margin-left: 0.2rem
.microblog p
margin: 0
.sidenotes-container
display: flex
flex-wrap: wrap
.sidenotes
width: $sidenotes-width
min-width: $sidenotes-width
padding-left: 1.5rem
position: relative
.footnotes-sep
display: none
.footnotes-list
text-align: justify
@media (max-width: calc(2 * $content-margin + $content-width + $sidenotes-width))
.sidenotes
min-width: auto
width: auto
max-width: $content-width
padding: 0
margin-left: $content-margin
margin-right: $content-margin
.footnotes-sep
display: block

View File

@ -0,0 +1,4 @@
div(style=`background: ${bgcol}`)
div
a(href=permalink)= renderDate(date)
div!= content

View File

@ -1,4 +1,10 @@
 extends layout.pug
 block content
-    main.blog-post!= content
+    .sidenotes-container
+        main.blog-post!= content
+        .sidenotes
+block under-title
+    if draft
+        h1 DRAFT

View File

@ -9,13 +9,20 @@ block content
     each post in posts
         .imbox(style=`background: ${post.bgcol}`)
             if images.hasOwnProperty(post.slug)
-                img(src=images[post.slug])
+                +image(images[post.slug])
             div
                 div
                     a.title(href=`/${post.slug}/`)= post.title
-                    span.deemph= `${renderDate(post.created)} / ${metricPrefix(post.wordCount, "")} words`
+                    div.deemph= `${renderDate(post.created)} / ${metricPrefix(post.wordCount, "")} words`
                 div.description!= post.description
+    h2 Microblog
+    p.
+        Short-form observations.
+    div.microblog
+        each entry in microblog
+            != entry
     h2 Experiments
     p.
         Various web projects I have put together over many years. Made with at least four different JS frameworks. Some of them are bad.
@ -23,7 +30,7 @@ block content
     each experiment in experiments
         .imbox(style=`background: ${experiment.bgcol}`)
            if images.hasOwnProperty(experiment.slug)
-                img(src=images[experiment.slug])
+                +image(images[experiment.slug])
             div
                 div
                     a.title(href=`/${experiment.slug}/`)= experiment.title

View File

@ -1,9 +1,23 @@
 mixin nav-item(url, name)
     a(href=url)= name
+mixin image(src)
+    if typeof src === "string"
+        img(src=src)
+    else
+        picture
+            each val, key in src
+                if key == "_fallback"
+                    img(src=val)
+                else
+                    source(srcset=val, type=key)
 doctype html
 html(lang="en")
     head
+        link(rel="preload", href="/assets/share-tech-mono.woff2", as="font", crossorigin="anonymous")
+        link(rel="preload", href="/assets/titillium-web-semibold.woff2", as="font", crossorigin="anonymous")
+        link(rel="preload", href="/assets/titillium-web.woff2", as="font", crossorigin="anonymous")
         title= `${title} @ ${name}`
         script(src="/assets/js/page.js", defer=true)
         meta(charset="UTF-8")
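For context on the image mixin above (my sketch, not part of the diff): after the pipeline change in index.js, an entry in globalData.images (exposed to templates as `images`) is either a plain URL string for assets copied as-is, or, for *.original assets, an object mapping MIME types to URLs plus a `_fallback` key, which is exactly what the string/object branch in the mixin distinguishes. Hypothetical entries:

// produced by doImages for assets/images/foo.png.original (names are illustrative)
globalData.images["foo"] = {
    "image/avif": "/assets/images/foo.avif",
    "_fallback": "/assets/images/foo.jpg"
}
// plain copy of an ordinary asset
globalData.images["bar"] = "/assets/images/bar.png"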