mirror of https://github.com/osmarks/website (synced 2025-04-21 10:13:21 +00:00)

Commit 71e8e39b3b ("blog external link tracking"), parent 1a271f69f3
@@ -31,7 +31,7 @@ While "generative AI" now comprises the majority of interest in AI, a large frac
But what, exactly, are the constraints on hardware driving these limits? I'm not an electrical engineer or chip designer, but much of this is public and relatively easy to understand, and some vendors provide helpful information in whitepapers.
Excluding a few specialized and esoteric products like [Lightmatter](https://en.wikipedia.org/wiki/Lightmatter) and [Mythic](https://mythic.ai/)'s, AI accelerators are built on modern digital logic semiconductor processes. They contain at least one logic die - it can be more than one thanks to modern advanced packaging like Intel Foveros and TSMC CoWoS - with some mix of analog circuitry for IO, SRAM (static random access memory) for fast on-chip memory, and logic gates for the control and computation. The main limit on the complexity of the GPU is die area: each transistor in a logic circuit or IO interface or memory array consumes some area, and the cost increases somewhat superlinearly with die area. This is because die are made on a fixed-size wafer which is then cut up ("singulated"), bigger die have a higher total number of random defects ("worse yields") and so need to be discarded more often, and there's a maximum size (the "reticle limit"), above which it's necessary to combine several with expensive advanced packaging.
Excluding a few specialized and esoteric products like [Lightmatter](https://web.archive.org/web/20240909133702/https://en.wikipedia.org/wiki/Lightmatter) and [Mythic](https://mythic.ai/)'s, AI accelerators are built on modern digital logic semiconductor processes. They contain at least one logic die - it can be more than one thanks to modern advanced packaging like Intel Foveros and TSMC CoWoS - with some mix of analog circuitry for IO, SRAM (static random access memory) for fast on-chip memory, and logic gates for the control and computation. The main limit on the complexity of the GPU is die area: each transistor in a logic circuit or IO interface or memory array consumes some area, and the cost increases somewhat superlinearly with die area. This is because die are made on a fixed-size wafer which is then cut up ("singulated"), bigger die have a higher total number of random defects ("worse yields") and so need to be discarded more often, and there's a maximum size (the "reticle limit"), above which it's necessary to combine several with expensive advanced packaging.
Better manufacturing processes make transistors smaller, faster and lower-power, with the downside that a full wafer costs more. Importantly, though, not everything scales down the same - recently, SRAM has almost entirely stopped getting smaller[^7], and analog has not scaled well for some time. Only logic is still shrinking fast.
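A rough way to see the "superlinear cost" point above (my own illustrative sketch with an assumed defect density; the post gives no numbers): under a simple Poisson defect model, yield falls exponentially with die area, so the cost of each *good* die grows faster than the area itself.

```js
// Illustrative toy model only, not from the post.
// Assumption: random defects are Poisson-distributed, so yield ≈ exp(-defectDensity * area).
// Relative cost per good die is then proportional to area / yield.
const defectDensity = 0.1 // assumed defects per cm², purely for illustration
const costPerGoodDie = area => area / Math.exp(-defectDensity * area)
for (const area of [1, 2, 4, 8]) {
    console.log(`${area} cm²: ~${costPerGoodDie(area).toFixed(2)} (arbitrary cost units)`)
}
// Doubling the area more than doubles the cost, and the gap widens for large die.
```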
@@ -26,7 +26,7 @@ Even with a powerful pretrained model providing prior knowledge about the world
It has been said that neural nets really want to learn - even if you make mistakes in the architecture or training code, a lot of the time they will train anyway, if worse than they should. However, they don't necessarily learn what you *want*. With just a thousand data points to train on, and a nontrivial [model](https://github.com/osmarks/meme-search-engine/blob/master/meme-rater/model.py) (an ensemble of <span class="hoverdefn" title="multi-layer perceptrons">MLPs</span>), it would immediately overfit after a single epoch. I tried various regularization schemes - increasing weight decay a lot, dropout, and even cutting the model down as far as linear regression - but this either resulted in the model underfitting or it overfitting with a slightly stranger loss curve.
Fortunately, I had sort of planned for this. The existing pairs of memes I rated were randomly selected from the dataset and as a result often quite low-signal, telling the model things it "already knew" from other examples. None of this is particularly novel[^4] and active learning - inferring what data would be most useful to a model so you can gather and train on it - is widely studied. I used the least bad checkpoint to [find the highest-variance pairs](https://github.com/osmarks/meme-search-engine/blob/master/meme-rater/active_learning.py) out of a large random set, then manually rated those - they were given a separate label so I could place those at the end of training (this ultimately seemed to make the model worse) and have a separate validation set (helpful). I also looked into [finding the highest-gradient pairs](https://github.com/osmarks/meme-search-engine/blob/master/meme-rater/al2.py), but it turns out this isn't what I want: they are in some sense the most "surprising", and the most surprising things were apparently things which were already clear to the model being overturned by a new data point, not marginal cases it could nevertheless learn from.
Fortunately, I had sort of planned for this. The existing pairs of memes I rated were randomly selected from the dataset and as a result often quite low-signal, telling the model things it "already knew" from other examples. None of this is particularly novel[^4] and active learning - inferring what data would be most useful to a model so you can gather and train on it - is widely studied. I used the least bad checkpoint to [find the highest-variance pairs](https://github.com/osmarks/meme-search-engine/blob/master/meme-rater/active_learning.py) out of a large random set, then manually rated those - they were given a separate label so I could place those at the end of training (this ultimately seemed to make the model worse) and have a separate validation set (helpful). I also looked into [finding the highest-gradient pairs](https://github.com/osmarks/meme-search-engine/blob/fc6d0c94091e5c858e424cd284ff697e415099be/meme-rater/al2.py), but it turns out this isn't what I want: they are in some sense the most "surprising", and the most surprising things were apparently things which were already clear to the model being overturned by a new data point, not marginal cases it could nevertheless learn from.
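The variance-based selection described above can be sketched as follows (a hedged illustration in JavaScript; the post's actual implementation is the linked Python script, and all names here are made up):

```js
// Pick the k candidate pairs the ensemble disagrees on most, as candidates for manual rating.
// scoresByModel[i][j] = ensemble member i's predicted score for candidate pair j (assumed layout).
const variance = xs => {
    const mean = xs.reduce((a, b) => a + b, 0) / xs.length
    return xs.reduce((a, b) => a + (b - mean) ** 2, 0) / xs.length
}
const highestVariancePairs = (scoresByModel, k) =>
    scoresByModel[0]
        .map((_, j) => [j, variance(scoresByModel.map(member => member[j]))])
        .sort((a, b) => b[1] - a[1])
        .slice(0, k)
        .map(([j]) => j)
// e.g. highestVariancePairs([[0.1, 0.9], [0.8, 0.85]], 1) → [0]: the members disagree most on pair 0.
```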
The new data seemingly made it slightly less prone to overfitting, but, "excitingly", it would achieve the lowest validation loss for the early set and new set at different steps. I was not able to resolve this, so I just eyeballed the curve to find a checkpoint which struck a reasonable balance between these, ran another round of active learning and manual labelling, and trained on that. It was very hard for the model to learn that at all, which makes sense - many of the pairs were *very* marginal (I didn't include a tie option and didn't want to add one at a late stage, but it would perhaps have been appropriate). At some point I decided that I did not want to try and wrangle the model into working any longer and picked a good-enough checkpoint to use.
@@ -70,4 +70,4 @@ Thanks to AI_WAIFU, RossM, Claude-3 Opus and LLaMA-3-70B for their help with dat
[^4]: Someone has, in fact, built a [similar tool](https://github.com/RossM/image-rater/tree/dev) which they used to find the best possible catgirl, but I think they do logistic regression (insufficiently powerful for my requirements) and understanding and rearranging their code to my requirements would have been slower than writing it myself.
[^5]: Actually, most research wouldn't do this because research norms are terrible.
[^5]: Actually, most research wouldn't do this because research norms are terrible.
@@ -285,7 +285,7 @@ The author, Zachary Mason, also wrote [The Lost Books of the Odyssey](https://ww
* [The Titanomachy](https://kishoto.wordpress.com/2015/09/09/the-titanomachy-rrational-challenge-defied-prophecy/).
* [Wizard, Cabalist, Ascendant](/stuff/wizard-cabalist-ascendant.xhtml).
* [Sekhmet Hunts The Dying Gnosis: A Computation](http://www.beneath-ceaseless-skies.com/stories/sekhmet-hunts-the-dying-gnosis-a-computation/).
* [The Finale of the Ultimate Meta Mega Crossover](https://m.fanfiction.net/s/5389450/1/).
* [The Finale of the Ultimate Meta Mega Crossover](https://fanfiction.net/s/5389450/1/).
* [Shannon's Law](https://web.archive.org/web/20140628065855/http://www.tor.com/stories/2011/05/shannons-law).
* [Dave Scum](https://docs.google.com/document/d/1SddGHeVfcVa5SCDHHTOA4RlKwnef-Q6IMw_Jqw9I0Mw/mobilebasic).
* [Nate the Snake](https://natethesnake.com/) is a complex setup for a pun.
@@ -351,7 +351,7 @@ Men have gazed at the stars for millennia, and wondered whether there was a deit
### Freefall
[Freefall](https://freefall.purrsia.com/), a hard-science-fiction webcomic. Long-running enough that it has also gone through a wide range of themes. I find it to be consistently surprisingly thoughtful.
[Freefall](http://freefall.purrsia.com/), a hard-science-fiction webcomic. Long-running enough that it has also gone through a wide range of themes. I find it to be consistently surprisingly thoughtful.
Special mentions (i.e. "I haven't gotten around to reading these but they are well-reviewed and sound interesting") to:
* [Children of Time](https://www.goodreads.com/book/show/25499718-children-of-time) by Adrian Tchaikovsky.
@@ -16,7 +16,7 @@ The goal of this is for computers to be able to solve them automatically with th
One common example of a problem-solving puzzle to help develop 'computational skills' is Sudoku.
While the logical reasoning skills used in Sudoku are admirable, and may even be useful for a few problems in CS, we do not often think about *why we take certain steps* when deducing the value of a tile, instead reasoning retroactively about *how we know* it.
When we solve Sudoku we do not develop a general algorithm for solving Sudoku in our heads and then execute it; instead, we solve it through a combination of intuition and creative logical reasoning, which are not well-defined or easily applicable to computers.
[Here](http://educ.jmu.edu/~arnoldea/MathMagHss.pdf) is an interesting article relating to this.
[Here](https://educ.jmu.edu/~arnoldea/MathMagHss.pdf) is an interesting article relating to this.
'Computational skills' puzzles are inherently unhelpful as tools for improving CS ability, as they do not force players to formalise and be specific in their thought processes while solving problems; instead, they develop the player's intuitive ability.
In chess, competitive players often talk about 'developing intuition', i.e. being able to play a chess game automatically and without formalising their thought processes.
@@ -29,7 +29,7 @@ So what can be done? I don't know. Formal education is likely a lost cause: ince
* Security mindset: as well as being directly useful for ensuring security, always thinking about where your assumptions might be flawed or how something might go wrong is vital for reliability.
* Good code structuring, e.g. knowing when to disaggregate or aggregate modules. I think that lots of people, particularly when using OOP, are too quick to try and "break apart" interdependent code in a way which makes development much slower without actually providing much flexibility, but thousand-line files with global variables everywhere are hard to work on.
If you have been paying any attention to anything within the past [two years](https://openai.com/blog/openai-codex) or so, you're probably also aware that AI (specifically large language models) will obsolete, augment, change, or do nothing whatsoever to software engineering jobs. My previous list provides some perspective for this: ChatGPT (GPT-3.5 versions; I haven't used the GPT-4 one) can model computers well enough that it can [pretend to be a Linux shell](https://www.engraved.blog/building-a-virtual-machine-inside/) quite accurately, tracking decent amounts of state while it does so; big language models have vague knowledge of basically everything on the internet, even if they don't always connect it well; ChatGPT can [also](https://twitter.com/gf_256/status/1598104835848798208) find some vulnerabilities in code; [tool use](https://til.simonwillison.net/llms/python-react-pattern) [is continually](https://openai.com/blog/function-calling-and-other-api-updates?ref=upstract.com) [being](https://gorilla.cs.berkeley.edu/) [improved](https://twitter.com/emollick/status/1657050639644360706) (probably their quick-script-writing capability already exceeds most humans'). Not every capability is there yet, of course, and I think LLMs are significantly hampered by issues humans don't have, like context window limitations, lack of online learning, and bad planning ability, but these are probably not that fundamental.
If you have been paying any attention to anything within the past [two years](https://openai.com/blog/openai-codex/) or so, you're probably also aware that AI (specifically large language models) will obsolete, augment, change, or do nothing whatsoever to software engineering jobs. My previous list provides some perspective for this: ChatGPT (GPT-3.5 versions; I haven't used the GPT-4 one) can model computers well enough that it can [pretend to be a Linux shell](https://www.engraved.blog/building-a-virtual-machine-inside/) quite accurately, tracking decent amounts of state while it does so; big language models have vague knowledge of basically everything on the internet, even if they don't always connect it well; ChatGPT can [also](https://twitter.com/gf_256/status/1598104835848798208) find some vulnerabilities in code; [tool use](https://til.simonwillison.net/llms/python-react-pattern) [is continually](https://openai.com/blog/function-calling-and-other-api-updates) [being](https://gorilla.cs.berkeley.edu/) [improved](https://twitter.com/emollick/status/1657050639644360706) (probably their quick-script-writing capability already exceeds most humans'). Not every capability is there yet, of course, and I think LLMs are significantly hampered by issues humans don't have, like context window limitations, lack of online learning, and bad planning ability, but these are probably not that fundamental.
Essentially, your job is probably not safe, as long as development continues (and big organizations actually notice).
@@ -27,7 +27,7 @@ If you know what <span class="hoverdefn" title="Simple Mail Transfer Protocol">S
A fun quirk of the nginx installation on `procyon` is that, since I wanted it to not be able to decrypt requests to `protagonism` (I don't entirely trust it, and duplicating the certificate issuance programs on each would be irritating), I use [ngx_stream_ssl_preread](https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html) to forward still-encrypted TLS connections either to itself (on another port) or to `protagonism`'s reverse proxy. As janky as this sounds, it does seem to work fine, except for one extremely-hard-to-reproduce bug, which I suspect might be related, where users sometimes get shown 404 pages or the status page incorrectly. Traffic is routed over [Tailscale](https://tailscale.com/) using [Headscale](https://github.com/juanfont/headscale)[^5].
Several of `protagonism`'s services are mostly-self-contained personal-use applications, such as [Minoteaur](/minoteaur/) (notes), [ankisyncd](https://github.com/ankicommunity/ankicommunity-sync-server/) (flashcards), [atuin](https://github.com/atuinsh/atuin) (shell history) and [calibre-web](https://github.com/janeczku/calibre-web) (books). The rest are somewhat more interesting, in that they do more and are in some cases publicly accessible. For example, [SPUDNET](https://d.gh0.pw/doku.php?id=gtech:spudnet). It was built to serve the needs of an ["operating system"](https://potatos.madefor.cc/) for ComputerCraft (a Minecraft computer mod) by providing <span class="hoverdefn" title="backdoors">remote debugging services</span>. Originally built about six years ago, it somehow still works with relatively minor changes (new protocol support). It provides bidirectional many-to-one and many-to-many communications over websocket, with an unnecessarily sophisticated authentication system, as well as HTTP long polling fallbacks and incident reports. [Skynet](https://github.com/osmarks/skynet) is a somewhat simpler version.
Several of `protagonism`'s services are mostly-self-contained personal-use applications, such as [Minoteaur](/minoteaur/) (notes), [ankisyncd](https://github.com/ankicommunity/ankicommunity-sync-server/) (flashcards), [atuin](https://github.com/atuinsh/atuin) (shell history) and [calibre-web](https://github.com/janeczku/calibre-web) (books). The rest are somewhat more interesting, in that they do more and are in some cases publicly accessible. For example, [SPUDNET](https://docs.osmarks.net/hypha/spudnet). It was built to serve the needs of an ["operating system"](https://potatos.madefor.cc/) for ComputerCraft (a Minecraft computer mod) by providing <span class="hoverdefn" title="backdoors">remote debugging services</span>. Originally built about six years ago, it somehow still works with relatively minor changes (new protocol support). It provides bidirectional many-to-one and many-to-many communications over websocket, with an unnecessarily sophisticated authentication system, as well as HTTP long polling fallbacks and incident reports. [Skynet](https://github.com/osmarks/skynet) is a somewhat simpler version.
I also have a monitoring system using [VictoriaMetrics](https://docs.victoriametrics.com/)[^4] and [Grafana](https://grafana.com/). VictoriaMetrics periodically scrapes services for metrics and stores time series, and Grafana can plot them. This is a fairly standard setup, and lots of software exposes Prometheus-compatible metrics itself or has an exporter available (e.g. the [node_exporter](https://github.com/prometheus/node_exporter) for general Linux machine status and the [PostgreSQL exporter](https://github.com/prometheus-community/postgres_exporter)). I went slightly further by exposing metrics in most of my custom applications, so I have, for instance, nice dashboards from my Discord bot. These used to be public, but apparently exposing any dashboard in Grafana allows users to read any data out of the backend, which was a bit of a security issue. One might reasonably question how much use I get out of these, as I don't get enough traffic to have to debug performance issues much, but they do look nice and their presence is calming.
@@ -102,7 +102,7 @@ I haven't covered *every* osmarks.net service in this post, or even all the ones
[^4]: This used to be Prometheus, but I swapped VictoriaMetrics in to reduce storage requirements.
[^5]: I like Tailscale's ease of use, but it's horrifyingly CPU-intensive for no obvious reason, and `procyon` is not very powerful. This would be a problem if I had traffic.
[^5]: I like Tailscale's ease of use, but it's unreasonably CPU-intensive for no obvious reason, and `procyon` is not very powerful. This would be a problem if I had traffic.
[^6]: "The structure of any system designed by an organization is isomorphic to the structure of the organization." You could argue that this is more "directly in line with Conway's law" than "freed from it", but ignore that.
links_cache.json (new file, 3861 lines; diff suppressed because it is too large)
package-lock.json (generated, 4611 lines; diff suppressed because it is too large)
@@ -5,6 +5,7 @@
  "main": "index.js",
  "dependencies": {
    "@extractus/feed-extractor": "^7.1.3",
    "@mozilla/readability": "^0.6.0",
    "@msgpack/msgpack": "^3.0.0-beta2",
    "@quri/squiggle-lang": "^0.10.0",
    "@vscode/markdown-it-katex": "^1.1.0",
@@ -25,6 +26,8 @@
    "html-to-text": "^9.0.5",
    "idb": "^7.1.1",
    "js-xxhash": "^4.0.0",
    "jsdom": "^26.0.0",
    "json5": "^2.2.3",
    "markdown-it": "^14.1.0",
    "markdown-it-anchor": "^8.6.7",
    "markdown-it-container": "^4.0.0",
src/index.js (114)
@@ -29,6 +29,9 @@ const domutils = require("domutils")
const feedExtractor = require("@extractus/feed-extractor")
const https = require("https")
const pLimit = require("p-limit")
const json5 = require("json5")
const readability = require("@mozilla/readability")
const { JSDOM } = require("jsdom")

const fts = require("./fts.mjs")

@@ -94,6 +97,76 @@ const hashBG = (cls, i, hue) => {
}
globalData.hashBG = hashBG

const links = {}

// Retrieve cached (automatically fetched, but version-controlled) link metadata
// and manually configured overrides.
const loadLinksOut = async () => {
    const cache = JSON.parse(await fsp.readFile(path.join(root, "links_cache.json")))
    for (const [url, meta] of Object.entries(cache)) {
        links[url] = {
            ...meta,
            inline: undefined
        }
    }
    const manual = json5.parse(await fsp.readFile(path.join(srcDir, "links.json")))
    for (const [url, meta] of Object.entries(manual)) {
        links[url] = {
            ...links[url] || {},
            ...meta
        }
    }
}

const fetchLinksOut = async () => {
    for (const [url, meta] of Object.entries(links)) {
        // All links need at least a title. If that is not there, fetch the target.
        if (!meta.title) {
            let article
            if (!url.endsWith(".pdf")) {
                try {
                    // I don't know why, but without the timeout race configured here, some requests never resolve.
                    const response = await Promise.race([
                        axiosInst({ url }),
                        new Promise((_, reject) =>
                            setTimeout(() => reject(new Error("timeout")), 15000)
                        )
                    ])
                    if (response.headers["content-type"].startsWith("text/html")) {
                        const doc = new JSDOM(response.data, { url })
                        const reader = new readability.Readability(doc.window.document)
                        article = reader.parse()
                    }
                } catch (e) {
                    console.warn(chalk.red(`Failed to fetch ${url}: ${e.message}`))
                }
            }
            if (article && article.title) {
                meta.excerpt = article.excerpt
                meta.title = article.title
                meta.author = article.byline && article.byline.split("\n")[0]
                meta.date = article.publishedTime
                meta.website = article.siteName
                meta.auto = true
                console.log(chalk.greenBright("Fetched"), meta.title)
            } else {
                console.log(chalk.red(`Manual intervention required for`), url)
            }
        }
    }

    const cachedLinks = {}
    for (const [url, meta] of Object.entries(links)) {
        if (meta.auto) {
            cachedLinks[url] = {
                ...meta,
                references: undefined,
            }
        }
    }
    await fsp.writeFile(path.join(root, "links_cache.json"), JSON.stringify(cachedLinks, null, 4))
}

const removeExtension = x => x.replace(/\.[^/.]+$/, "")

const renderContainer = (tokens, idx) => {
@@ -191,6 +264,15 @@ md.renderer.rules.text = (tokens, idx, options, env, self) => {
    token.content = token.content.replace(/ \- /g, " – ") // highly advanced typography
    return textRenderer(tokens, idx, options, env, self)
}
md.renderer.rules.link_open = (tokens, idx, options, env, self) => {
    for (const [attr, value] of tokens[idx].attrs) {
        if (attr === "href") {
            env.urls = env.urls ?? []
            env.urls.push(value)
        }
    }
    return self.renderToken(tokens, idx, options)
}

const minifyHTML = x => htmlMinifier(x, {
    collapseWhitespace: true,
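For context on the rule above (my own standalone demonstration, not part of the diff): markdown-it passes the `env` object supplied to `md.render` through to renderer rules, which is what lets the `link_open` override accumulate outbound hrefs for the caller.

```js
// Minimal demonstration of the env-collection pattern used by the link_open rule above.
const MarkdownIt = require("markdown-it")
const mdDemo = new MarkdownIt()
mdDemo.renderer.rules.link_open = (tokens, idx, options, env, self) => {
    for (const [attr, value] of tokens[idx].attrs) {
        if (attr === "href") {
            env.urls = env.urls ?? []
            env.urls.push(value)
        }
    }
    return self.renderToken(tokens, idx, options)
}
const env = {}
mdDemo.render("see [a](https://example.com) and [b](https://example.org)", env)
console.log(env.urls) // ["https://example.com", "https://example.org"]
```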
@@ -201,7 +283,11 @@ const minifyHTML = x => htmlMinifier(x, {
    conservativeCollapse: true,
    collapseBooleanAttributes: true
})
const renderMarkdown = x => md.render(x)
const renderMarkdown = text => {
    const env = {}
    const html = md.render(text, env)
    return [ html, env.urls ?? [] ]
}
// basically just whitespace removal - fast and still pretty good versus unminified code
const minifyJS = (x, filename) => {
    const res = terser.minify(x, {
@@ -303,7 +389,25 @@ const processBlog = async () => {
    meta.slug = meta.slug || removeExtension(basename)
    meta.wordCount = page.content.split(/\s+/).map(x => x.trim()).filter(x => x).length
    meta.haveSidenotes = true
    meta.content = renderMarkdown(page.content)
    const [html, urls] = renderMarkdown(page.content)
    meta.content = html

    for (const url of urls) {
        try {
            const parsed = new URL(url)
            if (parsed.protocol === "http:" || parsed.protocol === "https:") {
                parsed.hash = "" // TODO handle this more cleanly
                if (!links[parsed]) {
                    links[parsed] = {
                        inline: true
                    }
                    links[parsed].references = links[parsed].references ?? []
                    links[parsed].references.push(meta.slug)
                }
            }
        } catch (e) {}
    }

    fts.pushEntry("blog", {
        html: meta.content,
        url: "/" + meta.slug,
@@ -654,7 +758,7 @@ const tasks = {
    index: { deps: ["fetchFeeds", "pagedeps", "blog", "experiments", "images", "fetchMicroblog"], fn: index },
    fetchFeeds: { deps: ["templates"], fn: fetchFeeds },
    rss: { deps: ["blog"], fn: genRSS },
    blog: { deps: ["pagedeps"], fn: processBlog },
    blog: { deps: ["pagedeps", "loadLinksOut"], fn: processBlog },
    fetchMicroblog: { deps: ["templates"], fn: fetchMicroblog },
    experiments: { deps: ["pagedeps"], fn: processExperiments },
    assetsDir: { deps: [], fn: () => fse.ensureDir(outAssets) },
@@ -667,7 +771,9 @@ const tasks = {
    assets: { deps: ["manifest", "minifyJS", "serviceWorker", "images", "compilePageJS"] },
    main: { deps: ["writeBuildID", "index", "errorPages", "assets", "experiments", "blog", "rss"] },
    searchIndex: { deps: ["blog", "fetchMicroblog", "fetchMycorrhiza", "experiments"], fn: buildFTS },
    fetchMycorrhiza: { deps: [], fn: fetchMycorrhiza }
    fetchMycorrhiza: { deps: [], fn: fetchMycorrhiza },
    fetchLinksOut: { deps: ["blog"], fn: fetchLinksOut },
    loadLinksOut: { deps: [], fn: loadLinksOut }
}

const compile = async () => {
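Putting the pieces together (my reading of the diff, not something stated in the commit itself): `loadLinksOut` seeds `links` from `links_cache.json` and the manual `src/links.json` overrides before `blog` runs, `processBlog` records which outbound URLs each post links to, and `fetchLinksOut` then fills in missing titles via Readability and rewrites the cache with only the auto-fetched entries. An entry written back to `links_cache.json` might look roughly like this (values invented; field names taken from `fetchLinksOut`, with `references` stripped before writing):

```js
// Hypothetical example of one auto-fetched cache entry, for illustration only.
const exampleCacheEntry = {
    "https://example.com/some-post": {
        inline: true,  // discovered in a post body rather than configured manually
        auto: true,    // set by fetchLinksOut for automatically fetched metadata
        title: "Some Post",
        excerpt: "First paragraph extracted by Readability...",
        author: "Example Author",
        date: "2024-01-01T00:00:00.000Z",
        website: "example.com"
    }
}
```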
src/links.json (new file, 482 lines)
@@ -0,0 +1,482 @@
{
|
||||
"/assets/misc/2014-horowitz.pdf": {
|
||||
title: "Computing's Energy Problem (and what we can do about it)",
|
||||
author: "Mark Horowitz",
|
||||
date: "2014-02-10"
|
||||
},
|
||||
"https://nooscope.osmarks.net/": {
|
||||
title: "Nooscope",
|
||||
author: "osmarks"
|
||||
},
|
||||
"https://www.youtube.com/playlist?list=PLIoFMnkvRA5PggXaYGQ2QJPEH8LPDm5Dd": {
|
||||
title: "Untitled Objection Series",
|
||||
author: "osmarks",
|
||||
website: "YouTube"
|
||||
},
|
||||
"https://www.youtube.com/watch?v=BsEltlIXkYg": {
|
||||
title: "The New Super Cool Epson MOVERIO BT-35E Smart Glasses for Drones",
|
||||
author: "CAPTAIN DRONE",
|
||||
website: "YouTube"
|
||||
},
|
||||
"https://sigbovik.org/2024/proceedings.pdf": {
|
||||
title: "Proceedings",
|
||||
author: "SIGBOVIK",
|
||||
date: "2024-04-01"
|
||||
},
|
||||
"https://resources.nvidia.com/en-us-tensor-core/gtc22-whitepaper-hopper": {
|
||||
title: "NVIDIA H100 Tensor Core GPU Architecture",
|
||||
website: "NVIDIA"
|
||||
},
|
||||
"https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-nvidia-us-2188504-web.pdf": {
|
||||
title: "NVIDIA A100 Tensor Core GPU",
|
||||
website: "NVIDIA"
|
||||
},
|
||||
"https://karthikecon.github.io/karthiksrinivasan.org/paying_attention.pdf": {
|
||||
title: "Paying Attention",
|
||||
author: "Karthik Srinivasan",
|
||||
date: "2023-12-18"
|
||||
},
|
||||
"https://x.com/seatedro/status/1839661758107070858": {
|
||||
date: "2024-09-27",
|
||||
author: "@seatedro",
|
||||
title: "it does now, memegrep [dot] com",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://datasets.osmarks.net/components.html": {
|
||||
title: "Embeddings PCA",
|
||||
website: "osmarks datasets"
|
||||
},
|
||||
"https://x.com/michaelsayman/status/1835841675584811239": {
|
||||
title: "Introducing SocialAI",
|
||||
author: "@michaelsayman",
|
||||
date: "2024-09-17",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://www.intel.com/content/www/us/en/products/sku/77978/intel-atom-processor-c2358-1m-cache-1-70-ghz/specifications.html": {
|
||||
title: "Intel Atom® Processor C2358",
|
||||
website: "Intel"
|
||||
},
|
||||
"https://www.intel.com/content/www/us/en/products/details/ethernet/700-network-adapters/x710-network-adapters/docs.html": {
|
||||
title: "Intel® Ethernet Network Adapter X710",
|
||||
website: "Intel",
|
||||
excerpt: "10GbE adapters with hardware optimization and offloads for the rapid provisioning of networks in the data center. These adapters are available with SFP+ or RJ45 connection types."
|
||||
},
|
||||
"https://vi-fi.github.io/Immortem.html": {
|
||||
title: "Immortem",
|
||||
author: "vi-fi"
|
||||
},
|
||||
"https://vi-fi.github.io/Compressibility.html": {
|
||||
title: "Compressibility",
|
||||
author: "vi-fi"
|
||||
},
|
||||
"https://0t.lt/": {
|
||||
title: "0t.lt",
|
||||
author: "osmarks",
|
||||
description: "osmarks.net URL shortener"
|
||||
},
|
||||
"https://www.anandtech.com/show/21342/intel-introduces-gaudi-3-accelerator-going-bigger-and-aiming-higher": {
|
||||
title: "Intel Introduces Gaudi 3 AI Accelerator: Going Bigger and Aiming Higher In AI Market",
|
||||
author: "Ryan Smith",
|
||||
website: "AnandTech",
|
||||
date: "2024-04-09"
|
||||
},
|
||||
"https://global.canon/en/technology/nil-2023.html": {
|
||||
title: "Nanoimprint Lithography",
|
||||
date: "2023-10-16",
|
||||
website: "Canon"
|
||||
},
|
||||
"https://0t.lt/YearningPried": {
|
||||
title: "Cryoapioform Rotating At 0.223 Radians Per Second",
|
||||
website: "osmarks.net"
|
||||
},
|
||||
"https://osmarks.net/rss.xml": {
|
||||
title: "osmarks.net RSS feed",
|
||||
website: "osmarks.net"
|
||||
},
|
||||
"https://www.youtube.com/watch?v=rsxCZAE8QNA&t=1067": {
|
||||
title: "HC2023-K2: Hardware for Deep Learning",
|
||||
website: "YouTube",
|
||||
author: "Bill Dally"
|
||||
},
|
||||
"https://www.youtube.com/watch?v=rsxCZAE8QNA&t=646": {
|
||||
title: "HC2023-K2: Hardware for Deep Learning",
|
||||
website: "YouTube",
|
||||
author: "Bill Dally"
|
||||
},
|
||||
"https://www.youtube.com/watch?v=qAuwW7Wzrng": {
|
||||
title: "Why am I wearing a heads-up display?",
|
||||
website: "YouTube",
|
||||
author: "Zack Freedman"
|
||||
},
|
||||
"https://osmarks.net/stuff/apn.pdf": {
|
||||
title: "Advancing Consensus: Automated Persuasion Networks for Public Belief Enhancement",
|
||||
author: "osmarks.net Computational Memetics Division",
|
||||
date: "2024-04-01"
|
||||
},
|
||||
"https://www.youtube.com/playlist?list=PLliiJ70rl2NvJjby2LoVuP1EuOvRAyf97": {
|
||||
title: "Minecraft - Gregtech New Horizons",
|
||||
website: "YouTube",
|
||||
author: "Kharax82"
|
||||
},
|
||||
"https://kay21s.github.io/RoarGraph-VLDB2024.pdf": {
|
||||
title: "RoarGraph: A Projected Bipartite Graph for Efficient Cross-Modal Approximate Nearest Neighbor Search",
|
||||
author: ["Meng Chen", "Kai Zhang", "Zhenying He", "Yinan Jing", "X.Sean Wang"]
|
||||
},
|
||||
"https://www.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of-Time": {
|
||||
title: "Branches on the Tree of Time",
|
||||
website: "fanfiction.net",
|
||||
author: "alexanderwales"
|
||||
},
|
||||
"https://www.reuters.com/technology/nvidia-supplier-sk-hynix-says-hbm-chips-almost-sold-out-2025-2024-05-02/": {
|
||||
title: "Nvidia supplier Sk Hynix says HBM chips almost sold out for 2025",
|
||||
website: "Reuters",
|
||||
author: ["Joyce Lee", "Heekyong Yang"],
|
||||
date: "2024-05-02"
|
||||
},
|
||||
"https://x.com/nearcyan/status/1532076277947330561": {
|
||||
title: "heavenbanning, the hypothetical practice of banishing a user from a platform by causing everyone that they speak with to be replaced by AI models that constantly agree and praise them, but only from their own perspective, is entirely feasible with the current state of AI/LLMs",
|
||||
author: "@nearcyan",
|
||||
date: "2022-06-01",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://support.reddithelp.com/hc/en-us/articles/360043043412-What-is-a-custom-feed-and-how-do-I-make-one": {
|
||||
title: "What is a custom feed and how do I make one?",
|
||||
website: "Reddit Help"
|
||||
},
|
||||
"https://www.youtube.com/watch?v=L5pUA3LsEaw": {
|
||||
title: "Why Not Just: Think of AGI Like a Corporation?",
|
||||
website: "YouTube",
|
||||
author: "Robert Miles AI Safety"
|
||||
},
|
||||
"https://files.osf.io/v1/resources/xcz2t/providers/osfstorage/628662af52d1723f1080bc21?action=download&direct&version=1": {
|
||||
title: "Shedding Light on Shadowbanning",
|
||||
author: "Gabriel Nicholas",
|
||||
date: "2022-04-01"
|
||||
},
|
||||
"https://www.usenix.org/system/files/nsdi24-yi.pdf": {
|
||||
title: "BFMSense: WiFi Sensing Using Beamforming Feedback Matrix",
|
||||
date: "2024-04-16",
|
||||
author: ["Enze Yi", "Dan Wu", "Fusang Zhang", "Kai Niu", "Daqing Zhang"]
|
||||
},
|
||||
"https://proceedings.neurips.cc/paper_files/paper/2023/file/9d89448b63ce1e2e8dc7af72c984c196-Paper-Conference.pdf": {
|
||||
title: "Scaling Data-Constrained Language Models",
|
||||
date: "2023-05-25",
|
||||
author: ["Niklas Muennighoff", "Alexander M. Rush", "Boaz Barak", "Teven Le Scao", "Aleksandra Piktus", "Nouamane Tazi", "Sampo Pyysalo", "Thomas Wolf", "Colin Raffel"]
|
||||
},
|
||||
"https://pubs.aip.org/aip/jcp/article/139/12/121923/74793/Statistical-physics-of-self-replication": {
|
||||
title: "Statistical physics of self-replication",
|
||||
date: "2013-08-21",
|
||||
author: "Jeremy L. England"
|
||||
},
|
||||
"https://harsha-simhadri.org/pubs/Filtered-DiskANN23.pdf": {
|
||||
title: "Filtered-DiskANN: Graph Algorithms for Approximate Nearest Neighbor Search with Filters",
|
||||
date: "2023-04-30",
|
||||
author: ["Siddharth Gollapudi","Neel Karia","Varun Sivashankar","Ravishankar Krishnaswamy","Nikit Begwani","Swapnil Raz","Yiyong Lin","Yin Zhang","Neelam Mahapatro","Premkumar Srinivasan","Amit Singh","Harsha Vardhan Simhadri"]
|
||||
},
|
||||
"https://twitter.com/gf_256/status/1598104835848798208": {
|
||||
title: "OMG WTF",
|
||||
author: "@gf_256",
|
||||
date: "2022-12-01",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://twitter.com/emollick/status/1657050639644360706": {
|
||||
title: "Digital humanities is about to be wild.",
|
||||
author: "@emollick",
|
||||
date: "2023-05-12",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://x.com/carrigmat/status/1884244369907278106": {
|
||||
title: "Complete hardware + software setup for running Deepseek-R1 locally. The actual model, no distillations, and Q8 quantization for full quality. Total cost, $6,000. All download and part links below:",
|
||||
author: "@carrigmat",
|
||||
date: "2025-01-28",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://discord.gg/ckGrMDR": {
|
||||
title: "styropyro Discord server",
|
||||
website: "Discord"
|
||||
},
|
||||
"https://openai.com/blog/openai-codex/": {
|
||||
title: "OpenAI Codex",
|
||||
website: "OpenAI",
|
||||
date: "2021-08-10"
|
||||
},
|
||||
"https://www.facebook.com/509414227/posts/10157671264984228/": {
|
||||
title: "Eliezer Yudkowsky's post",
|
||||
website: "Facebook",
|
||||
date: "2019-09-08"
|
||||
},
|
||||
"https://dlportal.barracudanetworks.com/download/6120/GWAY-9.0.4-0097.iso": {
|
||||
title: "Barracuda Networks gateway firmware ISO version 9.0.4"
|
||||
},
|
||||
"https://huggingface.co/intfloat/e5-large-v2": {
|
||||
title: "e5-large-v2",
|
||||
author: "intfloat",
|
||||
website: "HuggingFace"
|
||||
},
|
||||
"https://huggingface.co/Snowflake/snowflake-arctic-embed-l": {
|
||||
title: "Snowflake's Arctic-embed-l",
|
||||
author: "Snowflake",
|
||||
website: "HuggingFace"
|
||||
},
|
||||
"https://huggingface.co/lightonai/modernbert-embed-large": {
|
||||
title: "ModernBERT-embed-large",
|
||||
author: "LightOn AI",
|
||||
website: "HuggingFace"
|
||||
},
|
||||
"https://x.com/threat_update": {
|
||||
title: "Threat Update",
|
||||
author: "@threat_update",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://twitter.com/threat_update": {
|
||||
title: "Threat Update",
|
||||
author: "@threat_update",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://www.anandtech.com/show/17415/asmls-highna-update-coming-to-fabs-in-2024-2025": {
|
||||
title: "ASML High-NA Development Update: Coming to Fabs in 2024 - 2025",
|
||||
author: "Anton Shilov",
|
||||
date: "2022-05-26",
|
||||
website: "AnandTech"
|
||||
},
|
||||
"https://i.osmarks.net/memes-or-something/arch-btw.png": {
|
||||
title: "I use Arch BTW"
|
||||
},
|
||||
"https://r.osmarks.net/threat-update": {
|
||||
title: "Live Threat Update",
|
||||
website: "osmarks.net"
|
||||
},
|
||||
"https://wrv.github.io/h26forge.pdf": {
|
||||
title: "The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoders",
|
||||
author: ["Willy R. Vasquez", "Stephen Checkoway", "Hovav Shacham"]
|
||||
},
|
||||
"https://battle-blackberry-78e.notion.site/How-to-run-ML-Experiments-for-cheap-b1270491395747458ac6726515b323cc": {
|
||||
title: "How to run ML Experiments (for cheap)"
|
||||
},
|
||||
"https://educ.jmu.edu/~arnoldea/MathMagHss.pdf": {
|
||||
title: "How Do YOU Solve Sudoku? A Group-theoretic Approach to Human Solving Strategies",
|
||||
author: ["Elizabeth A. Arnold", "Harrison C. Chapman", "Malcom E. Rupert"]
|
||||
},
|
||||
"https://archiveofourown.org/works/50491414": {
|
||||
title: "an important lesson in cryptography",
|
||||
author: "heav_1",
|
||||
date: "2023-10-02"
|
||||
},
|
||||
"https://www.xreal.com/uk/air2/": {
|
||||
title: "Xreal Air2"
|
||||
},
|
||||
"https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1004_4": {
|
||||
title: "How Much Do People Remember? Some Estimates of the Quantity of Learned Information in Long-term Memory",
|
||||
author: "Thomas K. Landauer",
|
||||
date: "1986-12-01"
|
||||
},
|
||||
"https://fanfiction.net/s/5389450/1/": {
|
||||
title: "The Finale of the Ultimate Meta Mega Crossover",
|
||||
author: "Eliezer Yudkowsky",
|
||||
website: "fanfiction.net"
|
||||
},
|
||||
"https://twitter.com/matthew_d_green/status/1392814211122843651": {
|
||||
title: "This article about end-to-end encryption and authorities’ desire to perform real-time content scanning is very well written and I hope you’ll read it. It also makes me pretty angry. https://t.co/YQijeQgRL6",
|
||||
author: "@matthew_d_green",
|
||||
date: "2021-05-13",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://www.schneier.com/wp-content/uploads/2016/09/paper-keys-under-doormats-CSAIL.pdf": {
|
||||
title: "Keys Under Doormats: Mandating Insecurity By Requiring Government Access To All Data And Communications",
|
||||
author: ["H. Abelson","R. Anderson","S. M. Bellovin","J. Benaloh","M. Blaze","W. Diffie","J. Gilmore","M. Green","S. Landau","P. G. Neumann","R. L. Rivest","J. I. Schiller","B. Schneier","M. Specter","D. J. Weitzner"],
|
||||
date: "2015-11-01"
|
||||
},
|
||||
"https://www.lesbonscomptes.com/recoll/faqsandhowtos/IndexWebHistory": {
|
||||
title: "Indexing visited Web pages with the Recoll Firefox extension"
|
||||
},
|
||||
"https://twitter.com/giffmana/status/1707327094982672666": {
|
||||
title: "Pleased to announce we are releasing checkpoints for our SigLIP models!",
|
||||
author: "@giffmana",
|
||||
date: "2023-09-28",
|
||||
website: "Twitter"
|
||||
},
|
||||
"https://www.pnas.org/doi/10.1073/pnas.0510013103": {
|
||||
title: "Essential genes of a minimal bacterium",
|
||||
author: ["John I. Glass","Nacyra Assad-Garcia","Nina Alperovich","Shibu Yooseph","Matthew R. Lewis","Mahir Maruf","Clyde A. Hutchison, III","Hamilton O. Smith","J. Craig Venter"]
|
||||
},
|
||||
"https://www.telegraph.co.uk/news/2017/07/31/dont-want-ban-encryption-inability-see-terrorists-plotting-online/": {
|
||||
title: "We don't want to ban encryption, but our inability to see what terrorists are plotting undermines our security",
|
||||
author: "Amber Rudd",
|
||||
date: "2017-07-31",
|
||||
website: "The Telegraph"
|
||||
},
|
||||
"https://openai.com/blog/function-calling-and-other-api-updates": {
|
||||
title: "Function calling and other API updates",
|
||||
date: "2023-06-13",
|
||||
website: "OpenAI",
|
||||
excerpt: "We’re announcing updates including more steerable API models, function calling capabilities, longer context, and lower prices."
|
||||
},
|
||||
"https://youtube.com/user/styropyro/": {
|
||||
title: "styropyro",
|
||||
website: "YouTube",
|
||||
author: "styropyro",
|
||||
excerpt: "I'm a science maniac that loves building huge lasers and playing with electricity and chemicals."
|
||||
},
|
||||
"https://platform.openai.com/docs/api-reference": {
|
||||
title: "Introduction",
|
||||
website: "OpenAI Platform"
|
||||
},
|
||||
"https://platform.openai.com/docs/api-reference/completions": {
|
||||
title: "Completions (Legacy)",
|
||||
excerpt: "Given a prompt, the model will return one or more predicted completions along with the probabilities of alternative tokens at each position. Most developer should use our Chat Completions API to leverage our best and newest models.",
|
||||
website: "OpenAI Platform"
|
||||
},
|
||||
"https://openaccess.thecvf.com/content_ECCV_2018/papers/Julieta_Martinez_LSQ_lower_runtime_ECCV_2018_paper.pdf": {
|
||||
title: "LSQ++: Lower running time and higher recall in multi-codebook quantization",
|
||||
author: ["Julieta Martinez", "Shobhit Zakhmi", "Holger H. Hoos", "James J. Little"],
|
||||
date: "2018-10-06"
|
||||
},
|
||||
"https://ieeexplore.ieee.org/document/10067540": {
|
||||
title: "“Zen 4”: The AMD 5nm 5.7GHz x86-64 Microprocessor Core",
|
||||
author: ["Benjamin Munger", "Kathy Wilcox", "Jeshuah Sniderman", "Chuck Tung", "Brett Johnson", "Russell Schreiber"],
|
||||
date: "2023-03-23"
|
||||
},
|
||||
"https://www.researchgate.net/profile/Richard-Gunstone/publication/238983736_Student_understanding_in_mechanics_A_large_population_survey/links/02e7e52f8a2f984024000000/Student-understanding-in-mechanics-A-large-population-survey.pdf": {
|
||||
title: "Student understanding in mechanics: A large population survey",
|
||||
author: "Richard Gunstone",
|
||||
date: "1987-08-01"
|
||||
},
|
||||
"https://proceedings.neurips.cc/paper_files/paper/2019/file/09853c7fb1d3f8ee67a61b6bf4a7f8e6-Paper.pdf": {
|
||||
title: "DiskANN: Fast Accurate Billion-point Nearest Neighbor Search on a Single Node",
|
||||
author: ["Suhas Jayaram Subramanya", "Devvrit", "Rohan Kadekodi", "Ravishankar Krishaswamy", "Harsha Vardhan Simhadri"],
|
||||
date: "2019-11-01"
|
||||
},
|
||||
"https://conference-indico.kek.jp/event/160/contributions/2876/attachments/2148/2699/Zhilong_Pan.pdf": {
|
||||
title: "Recent progress of SSMB EUV light source project at Tsinghua University",
|
||||
author: "Zhilong Pan",
|
||||
date: "2022-01-18"
|
||||
},
|
||||
"https://support.reddithelp.com/hc/en-us/articles/26410290525844-Public-Content-Policy": {
|
||||
title: "Public Content Policy",
|
||||
website: "Reddit Help"
|
||||
},
|
||||
"https://leetcode.com/": {
|
||||
title: "LeetCode"
|
||||
},
|
||||
"https://www.lfs.org/": {
|
||||
title: "Libertarian Futurist Society"
|
||||
},
|
||||
"https://www.dropbox.com/scl/fi/juc2c8gku938brmedjm8q/Hronar-Final.odt?rlkey=yf01124ruzar1xza82cougpmk&e=1&dl=0": {
|
||||
title: "Hronar (Final)",
|
||||
author: "Maximilian Krause"
|
||||
},
|
||||
"https://www.dropbox.com/scl/fi/fq40ef5dpgd5awun1vuu6/Voices-of-the-Gods.doc?rlkey=dxx6adofzrpr4o9vri15qkbby&e=1&dl=0": {
|
||||
title: "Voices of the Gods",
|
||||
author: "Maximilian Krause"
|
||||
},
|
||||
"https://www.fictionpress.com/s/2961893/1/Mother-of-Learning": {
|
||||
title: "Mother of Learning",
|
||||
author: "nobody103",
|
||||
website: "fictionpress.com"
|
||||
},
|
||||
"https://www.fanfiction.net/s/13451176/1/Chili-and-the-Chocolate-Factory-Fudge-Revelation": {
|
||||
title: "Chili and the Chocolate Factory: Fudge Revelation",
|
||||
author: "gaizemaize",
|
||||
website: "fanfiction.net"
|
||||
},
|
||||
"https://fictionpress.com/s/3238329/1/A-Hero-s-War": {
|
||||
title: "A Hero's War",
|
||||
author: "jseah",
|
||||
website: "fictionpress.com"
|
||||
},
|
||||
"https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html": {
|
||||
title: "Accelerando",
|
||||
author: "Charles Stross",
|
||||
date: "2005-07-01"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/ender-io": {
|
||||
title: "Ender IO",
|
||||
author: "crazypants_mc_the_second",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/create": {
|
||||
title: "Create",
|
||||
author: "simibubi",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/pretty-pipes": {
|
||||
title: "Pretty Pipes",
|
||||
author: "Ellpeck",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/psi": {
|
||||
title: "Psi",
|
||||
author: "Vazkii",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/immersive-engineering": {
|
||||
title: "Immersive Engineering",
|
||||
author: "BluSunrize",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/forestry": {
|
||||
title: "Forestry",
|
||||
author: "mezz",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/mystical-agriculture": {
|
||||
title: "Mystical Agriculture",
|
||||
author: "BlakeBr0",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/thermal-expansion": {
|
||||
title: "Thermal Expansion",
|
||||
author: "TeamCoFH",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/cc-tweaked": {
|
||||
title: "CC: Tweaked",
|
||||
author: "SquidDev",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/thaumcraft": {
|
||||
title: "Thaumcraft",
|
||||
author: "Azanor",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/crafttweaker": {
|
||||
title: "CraftTweaker",
|
||||
author: "Jaredlll08",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/astral-sorcery": {
|
||||
title: "Astral Sorcery",
|
||||
author: "HellFirePvP",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/singularities": {
|
||||
title: "Singularities",
|
||||
author: "Shadows_of_Fire",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/immersive-engineering": {
|
||||
title: "Immersive Engineering",
|
||||
author: "BluSunrize",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/big-reactors": {
|
||||
title: "Big Reactors",
|
||||
author: "ErogenousBeef",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/modern-industrialization": {
|
||||
title: "Modern Industrialization",
|
||||
author: "thetechnici4n",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/pneumaticcraft-repressurized": {
|
||||
title: "PneumaticCraft Repressurized",
|
||||
author: "desht_08",
|
||||
website: "CurseForge"
|
||||
},
|
||||
"https://www.curseforge.com/minecraft/mc-mods/the-twilight-forest": {
|
||||
title: "The Twilight Forest",
|
||||
author: "Benimatic",
|
||||
website: "CurseForge"
|
||||
}
|
||||
}
|