mirror of
https://github.com/osmarks/website
synced 2026-04-16 03:51:23 +00:00
new other things
@@ -64,7 +64,7 @@ Moreover, people are on average [not very smart](https://www.overcomingbias.com/
Increasingly, doing well in modernity requires long-horizon, complex, quantitative decision-making and forward planning, and/or cultural knowledge not common to everyone. Consider:

* For Americans, getting into high-status colleges has a significant effect on later career outcomes, and famously requires years of wrangling highly specific extracurriculars and knowing how to write essays to accurately pander to admissions officers.
* With the fall of defined-benefit pensions, (comfortable) retirement requires understanding compound interest and investment returns, various tax-advantaged savings options with inscrutable acronyms and constraints, as well as having the low time preference/discipline to bother to do this.
* With the fall of defined-benefit pensions, (comfortable) retirement requires understanding compound interest and investment returns and various tax-advantaged savings options with inscrutable acronyms and constraints, as well as having the low time preference/discipline to bother to do this.
* [Advance-booked transport ticket pricing](/pricecog/).
* Safely and correctly using credit is similarly complex and valuable.
* Job applications now require extremely scaled [guessing the teacher's password](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) and knowledge of the current fashions amongst hiring managers.

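The compound-interest point in the list above is easy to make concrete. A minimal sketch, in which every figure (contribution, return, horizon) is an invented assumption for illustration, not advice:

```python
# Hypothetical: contribute 5,000/year for 30 years at an assumed 5% real return,
# versus leaving the same contributions uninvested.
def future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Value of a stream of annual contributions, each compounding yearly."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + rate)
    return total

invested = future_value(5000, 0.05, 30)  # compounded investing
cash = 5000 * 30                         # the same money held as cash
```

Under these assumptions the invested total is well over double the cash total, which is roughly the size of the gap that not making this decision (or not knowing to) costs.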
@@ -19,7 +19,7 @@ IPU architecture diagram via [Graphcore docs](https://docs.graphcore.ai/projects

Why this architecture? Graphcore has made different claims about it over the years, being quite an old company by AI standards (their founding in 2016 predates transformers and I imagine they had the core ideas beforehand). The most obvious reason for their design is sparsity support and overfitting to contemporary RNNs/CNNs[^9], but there are better reasons. GPT-5.2-high found [a presentation](https://cdn2.hubspot.net/hubfs/729091/assets/ScaledML%20Stanford%2024mar18%20SK.pdf) from 2018 justifying their strategy. They correctly determined that power would be a binding constraint on future AI hardware, that direct-to-GPU interconnects would need to scale beyond a single node, and that memory bandwidth would continue to be a bottleneck. Also, they added hardware-accelerated [stochastic rounding](https://shape-of-code.com/2022/11/20/stochastic-rounding-reemerges/) for low-precision training in their first generation, while Nvidia only integrated this in recent Blackwell GPUs[^20]. Later, they [talk about](https://hc33.hotchips.org/assets/program/conference/day2/HC2021.Graphcore.SimonKnowles.v04.pdf) the power and cost advantages of avoiding HBM, and how having enough SRAM allows using DRAM with lower bandwidth.

Most of these arguments and decisions are essentially correct, and very early: the overall Graphcore design was locked in a decade ago, but it's only in the past two or three years that datacentre buildouts became heavily power-constrained, Nvidia [started scaling NVLink to racks](https://www.nvidia.com/en-us/data-center/gb200-nvl72/), and HBM became supply-crunched (due to advanced packaging in ~2023 and memory production in ~2025[^6]) rather than merely costly. Some have blamed their lack of adoption on the architecture being difficult to program, but this fails to distinguish them from competitors: efficient GPU kernels involve [all kinds of arcana](https://siboehm.com/articles/22/CUDA-MMM) even without newer sometimes-programming-model-breaking innovations such as tensor cores, [TMA](https://docs.nvidia.com/cuda/hopper-tuning-guide/index.html#tensor-memory-accelerator) asynchronous loads, Blackwell's [async matrix multiplications](https://research.colfax-intl.com/cutlass-tutorial-writing-gemm-kernels-using-tensor-memory-for-nvidia-blackwell-gpus/), new low-precision floating point formats, partitioning SMs into compute and communication, and Hopper's [cursed swizzles](https://hazyresearch.stanford.edu/blog/2024-05-12-tk). Google TPUs used to require you to write TensorFlow code and have no public way to write low-level code for cases where the compiler isn't sufficient, and many were willing to put up with this agony because they were reasonably fast and [free](https://sites.research.google/trc/about/) for some hobbyists[^4], and they have a number of external customers these days. Graphcore IPUs lack a performant "eager mode" experience like GPUs, which puts off researchers, but this is also true of TPUs, as are the long compile times[^13]. TPUs and GPUs are (were) more accessible to hobbyists and consumers, but this feels like an unreasonably self-serving explanation: IPUs were given to many researchers, and large B2B sales (which they had, or at least tried for) should have been less affected by this.
Most of these arguments and decisions are essentially correct, and very early: the overall Graphcore design was locked in a decade ago, but it's only in the past two or three years that datacentre buildouts became heavily power-constrained, Nvidia [started scaling NVLink to racks](https://www.nvidia.com/en-us/data-center/gb200-nvl72/), and HBM became supply-crunched (due to advanced packaging in ~2023 and memory production in ~2025[^6]) rather than merely costly. Some have blamed their lack of adoption on the architecture being difficult to program, but this fails to distinguish them from competitors: efficient GPU kernels involve [all kinds of arcana](https://siboehm.com/articles/22/CUDA-MMM) even without newer sometimes-programming-model-breaking innovations such as tensor cores, [TMA](https://docs.nvidia.com/cuda/hopper-tuning-guide/index.html#tensor-memory-accelerator) asynchronous loads, Blackwell's [async matrix multiplications](https://research.colfax-intl.com/cutlass-tutorial-writing-gemm-kernels-using-tensor-memory-for-nvidia-blackwell-gpus/), new low-precision floating point formats, partitioning SMs into compute and communication, Hopper's [cursed swizzles](https://hazyresearch.stanford.edu/blog/2024-05-12-tk), and [nonsense compiler quirks](https://github.com/triton-lang/triton/pull/7298). Google TPUs used to require you to write TensorFlow code and have no public way to write low-level code for cases where the compiler isn't sufficient, and many were willing to put up with this agony because they were reasonably fast and [free](https://sites.research.google/trc/about/) for some hobbyists[^4], and they have a number of external customers these days. Graphcore IPUs lack a performant "eager mode" experience like GPUs, which puts off researchers, but this is also true of TPUs, as are the long compile times[^13]. TPUs and GPUs are (were) more accessible to hobbyists and consumers, but this feels like an unreasonably self-serving explanation: IPUs were given to many researchers, and large B2B sales (which they had, or at least tried for) should have been less affected by this.

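Stochastic rounding does real work in the argument above, so a minimal sketch of the idea may help. This rounds to integers rather than to a low-precision float grid, but the principle is identical:

```python
import math
import random

def stochastic_round(x: float, rng: random.Random) -> int:
    # Round down with probability (1 - frac) and up with probability frac,
    # so the *expected* result equals x exactly.
    lo = math.floor(x)
    return lo + (1 if rng.random() < (x - lo) else 0)

rng = random.Random(0)
samples = [stochastic_round(0.1, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
# Round-to-nearest would return 0 every single time here, silently discarding
# any value smaller than half a step; the stochastic mean stays near 0.1.
```

This unbiasedness is why small gradient updates survive low-precision accumulation in expectation, which matters for training in formats like FP8.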
You could argue that they missed out on the bitter lesson. This appears partly true - an [early talk](https://www.youtube.com/watch?v=dLvkF_TmyAc) has their CTO expect that tensor compute would be less important in the future, that future workloads would be more heterogeneous, and that different specialized architectures would need to be designed/searched for different tasks - but regardless of their opinions, the chips are flexible enough that they can run transformers. The lack of directly-attached DRAM is problematic with big models (which I don't think they anticipated), but, as they describe, the capacious on-chip SRAM makes it tractable in principle to stream weights from cheap high-capacity server DRAM rather than use HBM[^5], as long as your workloads aren't especially latency-sensitive[^7]. Aside from interactive chatbots and now reinforcement learning training, most inference involving models big enough for this to be a problem *isn't* very latency-sensitive. I think the immediate cause of Graphcore's commercial failure was the [end of their deal with Microsoft](https://www.uktech.news/deep-tech/graphcore-microsoft-deal-20221010) in October 2022; unless someone involved was very perceptive[^8] (and saw no value in having IPUs for training), it is unlikely that the deal was shelved over concerns about LLM inference. My sense is that it's something like "nobody ever got fired for buying Nvidia" - people are and were used to Nvidia GPUs despite their bad system-level design (high power draw per accelerator[^10], [enormous failure rates](https://arxiv.org/abs/2503.11901v3), limited integrated networking), they were easy to prototype things on, and because transformers fit GPUs (and TPUs) well, Graphcore could win on cost grounds at best. Also, according to [dubiously sourced slides](https://www.gizchina.com/tech/tsmc-announces-its-first-3nm-ai-chip-customer-neither-apple-nor-huawei), they were planning to skip 5nm manufacturing and go straight to TSMC N3, which was delayed about a year and had yield problems (hence N3B and the relaxed N3E), so in 2022 they were competing against newer and very capable Nvidia H100s with a two-year-old chip. We must wonder whether any prototype Mk3 IPUs were ever built.

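The weight-streaming claim is easy to sanity-check with arithmetic. A back-of-envelope sketch in which every number (model size, quantization, DRAM bandwidth, batch size) is my own assumption for illustration, not a Graphcore figure:

```python
params = 70e9          # assumed model size, in parameters
bytes_per_param = 1    # assumed 8-bit quantized weights
dram_bw = 300e9        # assumed aggregate server DRAM bandwidth, bytes/s

# Time to stream the full weight set from DRAM once.
seconds_per_pass = params * bytes_per_param / dram_bw
tokens_per_s_single = 1 / seconds_per_pass  # batch size 1: one token per pass

# A single weight stream can serve a whole batch, amortizing the bandwidth cost.
batch = 128
tokens_per_s_batched = batch / seconds_per_pass
```

At batch size 1 this is painful for an interactive chatbot (a few tokens per second), but the batched throughput is serviceable, which matches the point that the approach suits workloads that aren't latency-sensitive.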
@@ -7,14 +7,14 @@ slug: otherstuff
tags: ["fiction", "opinion"]
---
I'm excluding music from this because music preferences seem to be even more varied between the people I interact with than other preferences.
Obviously this is just material *I* like; you might not like it, which isn't really my concern - this list is primarily made to bring to people's attention media they might like but have not heard of.
Enjoy the newly reformatted version of this list, with my slightly opaque organizational scheme and manually sorted lists.
This list is primarily made to bring to people's attention media they might like but have not heard of, assuming they have approximately my preferences, and so it may disagree with you.
Enjoy the ~~newly~~ reformatted version of this list, with its manually sorted lists and organization by media type.

## Writing

### 12 Miles Below

[12 Miles Below](https://www.royalroad.com/fiction/42367/12-miles-below/) is an ongoing webserial (I am not fully caught up or close to it yet) with (somewhat) intelligent and well-written characters and a quirky setting. It has more grammar/spelling errors than I would like (I would like none) but most people care about this less than me.
[12 Miles Below](https://www.royalroad.com/fiction/42367/12-miles-below/) is an ongoing webserial (I am not fully caught up or close to it yet) with a quirky setting and some cool ideas. It has more grammar/spelling errors than I would like (I would like none) but most people care about this less than me.

### A Hero's War

@@ -256,7 +256,7 @@ ATTENTION. DUE TO A SCALE BACK IN COVERAGE, THE MORAL ARC OF THE UNIVERSE NO LON
You are that which copies your genes into the future. I am that which dissolves the order in certain kinds of complex system. That’s the deep structure of things.
:::

[Void Star](https://www.goodreads.com/book/show/29939057-void-star) - somewhat strange for a "mainstream" scifi book (it was reviewed by the Guardian) but in good ways. The prose is very... poetic is probably the best word (it contains phrases like "isoclines of commitment and dread", "concentric and innumerable" and "high empyrean")... which I enjoyed, but it is polarizing. The setting seems like a broadly reasonable extrapolation of ongoing trends into the future, although it's unclear exactly *when* it is (some of the book implies 2150 or so, but this seems implausible). The author is a software engineer, so, unlike many other books with computers in them, the computers are not totally wrong.
[Void Star](https://www.goodreads.com/book/show/29939057-void-star) - somewhat strange for a "mainstream" scifi book (it was reviewed by the Guardian) but in good ways. The prose is (approximately) very poetic (it contains phrases like "isoclines of commitment and dread", "concentric and innumerable" and "high empyrean"), which I enjoyed, but it is polarizing. The setting seems like a broadly reasonable business-as-usual extrapolation of ongoing trends into the future, although it's unclear exactly *when* it is (some of the book implies 2150 or so, but this seems implausible). The author is a software engineer, so, unlike many other books with computers in them, the computers are not totally wrong.

Its most unusual characteristic is that it absolutely does not tell you what's going on ever: an interview I read said it was written out of order, and that makes sense (another fun quirk of it is that the chapters are generally very short). I think I know most of what happens now, but it has taken a while. It has about one big idea in it, but it's written well.

@@ -373,9 +373,10 @@ Men have gazed at the stars for millennia, and wondered whether there was a deit

Special mentions (i.e. "I haven't gotten around to reading these but they are well-reviewed and sound interesting") to:
* [Children of Time](https://www.goodreads.com/book/show/25499718-children-of-time) by Adrian Tchaikovsky.
* [Codex Alera](https://www.goodreads.com/series/45545-codex-alera) by Jim Butcher.
* [The Books of Babel](https://www.goodreads.com/series/127130-the-books-of-babel) by Josiah Bancroft.
* [Singularity Sky](https://www.goodreads.com/book/show/81992.Singularity_Sky) by Charlie Stross.
* Project Hail Mary by Andy Weir.
* Worth the Candle by Alexander Wales.

If you want EPUB versions of the free web serials here for your e-reader, there are tools to generate those, or you can contact me for a copy.

136
links_cache.json
@@ -5250,5 +5250,141 @@
        "date": null,
        "website": "GitHub",
        "auto": true
    },
    "https://github.com/triton-lang/triton/pull/7298": {
        "excerpt": "Rewrite the attention kernel to be persistent. This gives better performance at low-contexts. However, fp16 at large context has suffered a bit due to a ptxas instruction scheduling issue in the so...",
        "title": "[Gluon][Tutorial] Persistent attention by Mogball · Pull Request #7298 · triton-lang/triton",
        "author": "Mogball",
        "date": null,
        "website": "GitHub",
        "auto": true
    },
    "https://en.wikipedia.org/wiki/Euler%27s_critical_load": {
        "excerpt": "From Wikipedia, the free encyclopedia",
        "title": "Euler's critical load",
        "author": "Contributors to Wikimedia projects",
        "date": "2015-11-07T13:46:17Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://en.wikipedia.org/wiki/Young%27s_modulus": {
        "excerpt": "From Wikipedia, the free encyclopedia",
        "title": "Young's modulus",
        "author": "Contributors to Wikimedia projects",
        "date": "2003-05-16T19:22:56Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://en.wikipedia.org/wiki/London_Clay": {
        "excerpt": "From Wikipedia, the free encyclopedia",
        "title": "London Clay",
        "author": "Contributors to Wikimedia projects",
        "date": "2004-07-30T05:33:48Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://worksinprogress.co/issue/lab-grown-diamonds/": {
        "excerpt": "Synthetic diamonds are now purer, more beautiful, and vastly cheaper than mined diamonds. Beating nature took decades of hard graft and millions of pounds of pressure.",
        "title": "Lab-grown diamonds - Works in Progress Magazine",
        "author": "Javid Lakha",
        "date": "2024-08-30T16:00:00+00:00",
        "website": null,
        "auto": true
    },
    "https://pmc.ncbi.nlm.nih.gov/articles/PMC8951216/": {
        "excerpt": "Nowadays, synthetic diamonds are easy to fabricate industrially, and a wide range of methods were developed during the last century. Among them, the high-pressure–high-temperature (HP–HT) process is the most used to prepare diamond compacts for ...",
        "title": "A Review of Binderless Polycrystalline Diamonds: Focus on the High-Pressure–High-Temperature Sintering Process",
        "author": null,
        "date": null,
        "website": "PubMed Central (PMC)",
        "auto": true
    },
    "https://nanosyste.ms/": {
        "excerpt": "Written by a leading researcher in the field and one of its founders, Nanosystems is the first technical introduction to molecular nanotechnology. 'Devices enormously smaller than before will remodel engineering, chemistry, medicine, and computer technology. How can we understand machines that are so small? Nanosystems covers it all: power and strength, friction and wear, thermal noise and quantum uncertainty. This is the book for starting the next century of engineering.' - Marvin Minsky",
        "title": "Nanosystems by K. Eric Drexler",
        "author": null,
        "date": null,
        "website": null,
        "auto": true
    },
    "https://en.wikipedia.org/wiki/Coefficient_of_performance": {
        "excerpt": "From Wikipedia, the free encyclopedia",
        "title": "Coefficient of performance",
        "author": "Contributors to Wikimedia projects",
        "date": "2004-03-21T17:19:43Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://en.wikipedia.org/wiki/High_Speed_2": {
        "excerpt": "The planned extent of HS2 as of October 2023",
        "title": "High Speed 2",
        "author": "Contributors to Wikimedia projects",
        "date": "2007-06-23T13:57:16Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://richmondcanoeclub.com/members/flow/kingston/": {
        "excerpt": "Other stations:",
        "title": "River Thames Flow at Kingston Bridge – Richmond Canoe Club",
        "author": null,
        "date": null,
        "website": null,
        "auto": true
    },
    "https://en.wikipedia.org/wiki/Canals_of_the_United_Kingdom": {
        "excerpt": "From Wikipedia, the free encyclopedia",
        "title": "Canals of the United Kingdom",
        "author": "Contributors to Wikimedia projects",
        "date": "2004-08-23T10:53:06Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://www.gem.wiki/POSCO_Gwangyang_steel_plant": {
        "excerpt": "POSCO Gwangyang steel plant, also known as Pohang Iron & Steel Gwangyang, is a steel plant in Gwangyang, South Jeolla, South Korea that operates blast furnace (BF), basic oxygen furnace (BOF), direct reduced iron (DRI), and electric arc furnace (EAF) technology.",
        "title": "POSCO Gwangyang steel plant - Global Energy Monitor",
        "author": "Global Energy Monitor",
        "date": "2026-03-30T18:00:25Z",
        "website": "Global Energy Monitor",
        "auto": true
    },
    "https://newsroom.posco.com/en/posco-steelworks-create-forests-from-within/": {
        "excerpt": "Posco Newsroom",
        "title": "POSCO Steelworks Create Forests from Within",
        "author": null,
        "date": null,
        "website": null,
        "auto": true
    },
    "https://en.wikipedia.org/wiki/Doggerland": {
        "excerpt": "Doggerland was a large area of land in Northern Europe, now submerged beneath the southern North Sea. This region was repeatedly exposed at various times during the Pleistocene epoch due to the lowering of sea levels during glacial periods. However, the term \"Doggerland\" is generally specifically used for this region during the Late Pleistocene and Early Holocene. During the early Holocene following the glacial retreat at the end of the Last Glacial Period, the exposed land area of Doggerland stretched across the region between what is now the east coast of Great Britain, northern France, Belgium, the Netherlands, north-western Germany, and the Danish peninsula of Jutland. Between 10,000 and 7,000 years ago, Doggerland was inundated by rising sea levels, disintegrating initially into a series of low-lying islands before submerging completely.[1][2] The impact of the tsunami generated by the Storegga underwater landslide c. 8,200 years ago on Doggerland is controversial.[1] The flooded land is known as the Dogger Littoral.[3]",
        "title": "Doggerland",
        "author": "Contributors to Wikimedia projects",
        "date": "2007-01-03T20:41:11Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    },
    "https://www.engineeringtoolbox.com/sizing-ducts-d_207.html": {
        "excerpt": "The velocity reduction method can be used when sizing air ducts.",
        "title": "Ducts Sizing - the Velocity Reduction Method",
        "author": "Editor Engineeringtoolbox",
        "date": "2003-02-16T04:25:03Z",
        "website": null,
        "auto": true
    },
    "https://www.ncbi.nlm.nih.gov/books/NBK219009": {
        "excerpt": "Although the variety of airplanes operating throughout the world is large, the basic designs of the environmental control systems (ECSs) used on most aircraft in commercial service are remarkably similar. In simplified terms, air is first compressed to high pressure and temperature and then conditioned in an environmental control unit (ECU), where excess moisture is removed and the temperature necessary for heating or cooling the airplane is established. The conditioned air is then delivered to the cabin and cockpit to maintain a comfortable environment.",
        "title": "Environmental Control Systems on Commercial Passenger Aircraft",
        "author": null,
        "date": null,
        "website": "NCBI Bookshelf",
        "auto": true
    },
    "https://en.wikipedia.org/wiki/The_Concentration_City": {
        "excerpt": "From Wikipedia, the free encyclopedia",
        "title": "The Concentration City",
        "author": "Contributors to Wikimedia projects",
        "date": "2008-05-08T01:28:18Z",
        "website": "Wikimedia Foundation, Inc.",
        "auto": true
    }
}

29
src/index.js
@@ -33,6 +33,7 @@ const json5 = require("json5")
const readability = require("@mozilla/readability")
const { JSDOM } = require("jsdom")
const hljs = require("highlight.js")
const { xxHash32 } = require("js-xxhash")

const fts = require("./fts.mjs")

@@ -45,6 +46,7 @@ const blogDir = path.join(root, "blog")
const errorPagesDir = path.join(root, "error")
const assetsDir = path.join(root, "assets")
const outDir = path.join(root, "out")
const patternDir = path.join(outDir, "assets/pattern")
const srcDir = path.join(root, "src")
const nodeModules = path.join(root, "node_modules")

@@ -139,16 +141,37 @@ const fetchLinksOut = async () => {
    await fsp.writeFile(path.join(root, "links_cache.json"), JSON.stringify(cachedLinks, null, 4))
}

const background = []

const removeExtension = x => x.replace(/\.[^/.]+$/, "")

// This causes browser flickering somehow. The world is just not ready.
const generateBoxPattern = name => {
    /*
    const hash = xxHash32(name)

    const filename = path.join(patternDir, `${hash}.png`)
    background.push(async () => {
        await fse.ensureDir(patternDir)
        await util.promisify(childProcess.execFile)(path.join(__dirname, "strichtarn_generator.py"), [filename])
    })
    //return `background-image: url(/assets/pattern/${hash}.png);`
    */
    return null
}

globalData.generateBoxPattern = generateBoxPattern

const renderContainer = (tokens, idx) => {
    let opening = true
    let interior = ""
    if (tokens[idx].type === "container__close") {
        let nesting = 0
        for (; tokens[idx].type !== "container__open" && nesting !== 1; idx--) {
            nesting += tokens[idx].nesting
        }
        opening = false
        interior += JSON.stringify(tokens[idx].content)
    }
    const m = tokens[idx].info.trim().split(" ");
    const blockType = m.shift()
@@ -188,7 +211,8 @@ const renderContainer = (tokens, idx) => {
        }
        return out
    } else if (blockType === "emphasis") {
        return `<div class="emphasis box">`
        const style = generateBoxPattern(interior) && ` style="${generateBoxPattern(interior)}"`
        return `<div class="emphasis box"${style || ""}>`
    }
} else {
    if (blockType === "captioned") {
@@ -798,7 +822,8 @@ const tasks = {
    searchIndex: { deps: ["blog", "fetchMicroblog", "fetchMycorrhiza", "experiments"], fn: buildFTS },
    fetchMycorrhiza: { deps: [], fn: fetchMycorrhiza },
    fetchLinksOut: { deps: ["blog"], fn: fetchLinksOut },
    loadLinksOut: { deps: [], fn: loadLinksOut }
    loadLinksOut: { deps: [], fn: loadLinksOut },
    misc: { deps: ["main"], fn: () => Promise.all(background.map(x => x())) }
}

const compile = async () => {

@@ -813,4 +813,23 @@
        website: "YouTube",
        author: "Man AHL"
    },
    "https://en.wikipedia.org/wiki/McKelvey%E2%80%93Schofield_chaos_theorem": {
        title: "McKelvey–Schofield chaos theorem",
        website: "Wikipedia",
        referenceIn: { opinion: "" }
    },
    "https://www.youtube.com/watch?v=96eFnTescoY": {
        title: "The Highly Ridiculous Over-Engineering of the Diamondback Nozzle",
        website: "YouTube",
        author: "Zack Freedman"
    },
    "https://www.aisc.org/media/hf4jbmik/b904_sbdh_chapter4.pdf": {
        title: "Steel Bridge Design Handbook",
        date: "2022-02-01",
        author: "American Institute of Steel Construction"
    },
    "https://assets.publishing.service.gov.uk/media/67d2bb074702aacd2251cb94/Approved_Document_B_volume_1_Dwellings_2019_edition_incorporating_2020_2022_and_2025_amendments_collated_with_2026_and_2029_amendments.pdf": {
        title: "The Building Regulations 2010 Approved Document B",
        date: "2025"
    }
}

115
src/strichtarn_generator.py
Executable file
@@ -0,0 +1,115 @@
#!/usr/bin/env python3
# by GPT-5.3-codex medium
"""Generate a deterministic alpha strichtarn PNG pattern."""

from __future__ import annotations

import argparse
import hashlib
import os
import random
from pathlib import Path

from PIL import Image, ImageDraw

BG = (0, 0, 0, 0)
LINE = (0, 0, 0, 10)
WIDTH = 512
HEIGHT = 512
GRID_STEP = 4


def seed_from_basename(path: str) -> int:
    basename = os.path.basename(path)
    digest = hashlib.sha256(basename.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big", signed=False)


def pattern_params(seed: int) -> tuple[bool, int, int, int, int, int, int]:
    rng = random.Random(seed ^ 0x94D049BB133111EB)
    horizontal = (seed & 1) == 0
    column_step_slots = rng.choice((2, 3, 4, 5, 6))
    line_width = rng.choice((1, 1, 2, 2, 3, 4))
    run_len = rng.randint(12, 46)
    gap_len = rng.randint(4, 22)
    x_phase = rng.randrange(column_step_slots)
    y_phase = rng.randint(0, run_len + gap_len - 1)
    return horizontal, column_step_slots, line_width, run_len, gap_len, x_phase, y_phase


def build_phase_offsets(seed: int, count: int, period: int) -> list[int]:
    """Seeded short-period linear phase offsets."""
    if count <= 0:
        return []

    rng = random.Random(seed ^ 0x2545F4914F6CDD1D)
    pattern_period = rng.randint(2, 6)
    phase = rng.randrange(pattern_period)
    slope_limit = max(1, period // (pattern_period * 2))
    slope = rng.choice((-1, 1)) * rng.randint(1, slope_limit)
    base = rng.randrange(period)

    offsets: list[int] = []
    for i in range(count):
        t = (i + phase) % pattern_period
        offsets.append((base + slope * t) % period)
    return offsets


def generate_strichtarn(seed: int, width: int, height: int) -> Image.Image:
    img = Image.new("RGBA", (width, height), BG)
    draw = ImageDraw.Draw(img)

    horizontal, step_slots, line_width, run_len, gap_len, x_phase, y_phase = pattern_params(seed)
    slots = max(1, (height if horizontal else width) // GRID_STEP)
    lines = [slot for slot in range(slots) if (slot - x_phase) % step_slots == 0]
    period = run_len + gap_len
    phase_offsets = build_phase_offsets(seed, len(lines), period)

    for idx, slot in enumerate(lines):
        fixed_axis = slot * GRID_STEP
        start = -((y_phase + phase_offsets[idx]) % period)

        while start < (width if horizontal else height):
            start += gap_len
            if start >= (width if horizontal else height):
                break

            if horizontal:
                x1 = start
                y1 = fixed_axis
                x2 = min(width - 1, start + run_len - 1)
                y2 = min(height - 1, fixed_axis + line_width - 1)
            else:
                x1 = fixed_axis
                y1 = start
                x2 = min(width - 1, fixed_axis + line_width - 1)
                y2 = min(height - 1, start + run_len - 1)

            draw.rectangle((x1, y1, x2, y2), fill=LINE)
            start += run_len

    return img


def main() -> None:
    parser = argparse.ArgumentParser(
        description=(
            "Generate an alpha strichtarn pattern PNG at the given path. "
            "Orientation and pattern parameters are deterministically seeded from basename(output_path)."
        )
    )
    parser.add_argument("output", help="Output PNG filename")
    args = parser.parse_args()

    output = Path(args.output)
    if output.parent != Path("."):
        output.parent.mkdir(parents=True, exist_ok=True)

    seed = seed_from_basename(str(output))
    img = generate_strichtarn(seed, WIDTH, HEIGHT)
    img.save(output, format="PNG")


if __name__ == "__main__":
    main()

@@ -358,6 +358,9 @@ $hl-border: 3px
.sidenotes img.big
    max-width: 30em

p img
    max-width: 100%

.hoverdefn
    text-decoration-style: dotted
    text-decoration-line: underline

||||
@@ -1,4 +1,4 @@
.box
.box(style=generateBoxPattern(permalink))
    div
        a.title(href=permalink)= renderDate(date)
        div!= content

@@ -7,7 +7,7 @@ block content
    h2 See also
    div
        each ref in references
            div.box.ref
            div.box.ref(style=generateBoxPattern(ref.url))
                a.title(href=ref.url)= ref.title
                div.deemph
                    if ref.author

@@ -7,7 +7,10 @@ block content
    Read my opinions via the internet.
    div.blog
        each post, i in posts
            .box.imbox(style=post.accentColor && `--stripe: ${post.accentColor}`)
            - var styles = []
            - post.accentColor && styles.push(`--stripe: ${post.accentColor}`)
            - generateBoxPattern(post.slug) && styles.push(generateBoxPattern(post.slug))
            .box.imbox(style=styles.join(";"))
                if images.hasOwnProperty(post.slug)
                    +image(images[post.slug])
                div
@@ -28,7 +31,7 @@ block content
    Various web projects I have put together over many years. Made with at least four different JS frameworks. Some of them are bad.
    div.experiments
        each experiment, i in experiments
            .box.imbox
            .box.imbox(style=generateBoxPattern(experiment.slug))
                if images.hasOwnProperty(experiment.slug)
                    +image(images[experiment.slug])
                div

@@ -1,4 +1,4 @@
.box
.box(style=generateBoxPattern(link))
    div
        a.title(href=link)= title
        div.deemph