documentation/good_ideas.myco


* {
Actually run https://github.com/osmarks/meme-search-engine/blob/master/src/reddit_dump.rs at full scale.
* Pending.
}
* {
Meme sparse autoencoding (I think a CLIP SAE already exists though).
* Done. Mine is better.
}
* Overengineered LLM-based autocomplete to spite "it's just autocomplete" people.
* Expanded [[vengeance]] policy.
* Combine new CRL (https://arxiv.org/abs/2408.05804) with offline pretraining. Might be redundant in some sense.
* {
Similarly, contrastive RL for computer algebra (specifically, proving that expressions equal other expressions via making substitutions repeatedly). Try to contrastively learn a "how close is this expression to this other one" function (I think with an action input?). Bootstrap to progressively harder problems.
* What does Gyges mean by "anyone who wants it: you should be able to train contrastive models way faster if you use lsh to determine pairs to contrast"? This might contain alpha.
* Maybe this should be a "theorem prover" and not an "expression rewriter". I think they're fairly similar anyway.
* Check sticky note for slightly more detailed sketch.
}
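One reading of the LSH remark above, as a sketch (NumPy; `simhash_buckets` and all names/parameters here are hypothetical, not from any existing codebase): random-hyperplane hashing (SimHash) buckets embeddings by sign pattern, so candidate contrastive pairs can be drawn per bucket instead of scanning all O(n²) pairs.

```python
import numpy as np

def simhash_buckets(embeddings, n_planes=8, seed=0):
    """Bucket vectors by random-hyperplane LSH (SimHash).

    Vectors in the same bucket are likely near each other, so candidate
    positive / hard-negative pairs can be drawn per bucket rather than
    by comparing every pair of embeddings."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, embeddings.shape[1]))
    bits = (embeddings @ planes.T) > 0          # (n, n_planes) sign pattern
    keys = bits @ (1 << np.arange(n_planes))    # pack the bits into an int key
    buckets = {}
    for i, k in enumerate(keys):
        buckets.setdefault(int(k), []).append(i)
    return buckets

def candidate_pairs(buckets):
    """Yield within-bucket index pairs as contrastive candidates."""
    for members in buckets.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                yield members[a], members[b]
```

Whether this actually "contains alpha" depends on the bucket granularity: more planes means smaller buckets, fewer candidate pairs, and harder negatives.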
* {
Startup ideas:
* Automated reminders to make spontaneous gestures to maintain friendships.
* Torment nexus (torture simulacra of your enemies for $4.99/month).
* Sell CLIP search to companies selling products like furniture and decoration which are somewhat decorative and hard to search for right now.
* {
To satisfy the desire of people on the internet to punch people they don't like, mine data to find people's characteristics and simulate fights between them, without the hassle of physically meeting up.
* What if you could also fight abstract concepts? Most notably, corporations.
}
* [[Automated Persuasion Network]]s for Enterprise.
* W&B but good (visualize run logs from JSONL files).
* Good vector database (shard correctly (see notes)).
* Use an LLM agent framework and a man-portable electronic warfare rig to hack nearby speaker systems to provide background music.
* Financial responsibility via antimemetics.
* LLM legal DoS.
* Anki but for people who are stupid and can't use Anki.
* Automatically apply to waitlists with maximally convincing usecases.
* Dating app but which scrapes the web to unilaterally find people for you (maybe better incentive alignment).
* Automated divination (like Threat Updates but more so).
* Automated scapegoats as diversion, tomfoolery and social manipulation tool.
}
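The core loop of the "W&B but good" item above is small; a sketch (assuming a hypothetical schema where each JSONL line is one object with a `step` field plus scalar metrics; the actual visualization layer is omitted):

```python
import json

def load_runs(lines):
    """Parse JSONL run logs: one JSON object per line, each with a
    'step' field plus arbitrary scalar metrics (hypothetical schema).
    Blank lines are skipped; records come back sorted by step."""
    records = [json.loads(line) for line in lines if line.strip()]
    records.sort(key=lambda r: r["step"])
    return records

def ema(values, alpha=0.9):
    """Exponential moving average, the usual smoothing for noisy loss
    curves before plotting."""
    out, acc = [], None
    for v in values:
        acc = v if acc is None else alpha * acc + (1 - alpha) * v
        out.append(acc)
    return out
```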
* {
Next-action-predictor editors/UI.
* Possibly just for prefetching/preloading.
}
* {
Do WiFi sensing but good (with more data (https://b.osmarks.net/o/a0bddc9b742b4efbba18600ae6d51d98)).
* Planned.
}
* {
Comparisons rather than scalar ratings for things (https://b.osmarks.net/o/60b26f1735134d628164217be52ca2d3).
* {
It shouldn't be that hard to aggregate all the things I describe positively or negatively on my website and build a thing to allow me to rate pairs.
* Code written but annoying to rate. Might outsource to Autogollark.
}
}
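Aggregating those pairwise ratings into scalar scores is a standard Bradley-Terry problem; a sketch via Hunter's MM updates (function name and normalization choice are mine, not from the linked notes):

```python
def bradley_terry(n_items, comparisons, iters=200):
    """Fit Bradley-Terry strengths from (winner, loser) index pairs
    with the classic MM update: p_i <- wins_i / sum_j n_ij/(p_i + p_j).
    Turns a pile of pairwise preferences into a scalar ranking."""
    wins = [0] * n_items
    games = {}  # (i, j) with i < j -> number of comparisons
    for w, l in comparisons:
        wins[w] += 1
        key = (min(w, l), max(w, l))
        games[key] = games.get(key, 0) + 1
    p = [1.0] * n_items
    for _ in range(iters):
        denom = [0.0] * n_items
        for (i, j), n in games.items():
            d = n / (p[i] + p[j])
            denom[i] += d
            denom[j] += d
        p = [wins[i] / denom[i] if denom[i] > 0 else p[i]
             for i in range(n_items)]
        total = sum(p)  # renormalize: scores are only identifiable up to scale
        p = [x * n_items / total for x in p]
    return p
```

Items that never win converge to strength zero, which is the model being honest rather than a bug.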
* Constrained "LLM agent" (how I dislike this terminology) which chooses things to buy for you (https://thezvi.substack.com/p/choices-are-bad etc).
* {
Some people dislike modern furniture because older pieces had lots of hand-carved detail, whereas mass-produced pieces tend to use simpler minimalist geometry. But AI art tools and (more importantly) CNC machines mean it should be fairly cheap (if logistically hard) to carve custom decoration into furniture. Someone will probably pay for this.
* {
https://worksinprogress.co/issue/the-beauty-of-concrete/
* Apparently the reason for the reduction, at least for buildings, is not supply-side, so technological advancements would not help much with that.
* But many people still like and can't easily get (I think) ornamental furniture, so.
}
}
* AutoGaryMarcus.
* Automate those one-way/recorded video interviews using an LLM, TTS and deepfake system.
* Hex-grid particle simulator (similar to the Powder Toy) with GPU acceleration or HashLife algorithm.
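For the hex-grid simulator, axial coordinates give the clean six-neighbour topology; a minimal sketch (conventional axial/cube-coordinate formulas, nothing Powder-Toy-specific):

```python
# Axial ("q, r") coordinates for a hex grid: every cell has exactly six
# neighbours, avoiding the diagonal ambiguity of a square grid.
AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbors(q, r):
    """The six cells adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

def hex_distance(a, b):
    """Grid distance between two axial coordinates, via the cube-coordinate
    identity q + r + s = 0."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2
```

HashLife-style memoization would hash fixed-size macro-cells of this grid; the neighbourhood structure above is what the update rule has to respect.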
* Fix https://arxiv.org/abs/2411.00566 with less deranged ML.
* {
Diffusion model for Minecraft worlds.
* Scrape the internet for worlds, extract "interesting" regions.
* How to deal with the large volume of data in a world? 16³ is big. Probably more compressible than 2D though. Apply HDiT (?).
}
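One crude proxy for "interesting" regions (my assumption, not a tested heuristic): block-type entropy per chunk, assuming blocks arrive as a flat list of integer IDs. Flat stone or air scores near zero; varied builds score higher.

```python
import math
from collections import Counter

def chunk_entropy(block_ids):
    """Shannon entropy (bits) of the block-type distribution in a chunk,
    e.g. the 4096 IDs of a 16x16x16 section. Higher = more varied, a
    cheap first-pass filter for 'interesting' regions."""
    counts = Counter(block_ids)
    total = len(block_ids)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

This would obviously rank a noise-filled chunk above a cathedral; it is a pre-filter before anything smarter, not a taste model.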
* Autonomous, rogue [[Autogollark]].
* Make a high-performance physically plausible space combat simulator and do RL to spite Devon Eriksen.
* Simulate self-consistent time travel in games by having absurdly superhuman AI player enforce consistency using "random" degrees of freedom.
* Nonstandard/intense experiences probably increase memory formation/recollection later. As such, coding bootcamps but they are actual bootcamps. This has the additional advantage of maybe getting more hard work out of people.
* Psychological studies using LLMs (https://arxiv.org/abs/2209.06899 addresses this) seem underexploited. Could factor-analyze political beliefs and such.
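A first pass at the factor-analysis part could be plain PCA over a respondents × items matrix of agreement scores (the data layout is a hypothetical; proper factor analysis with rotation would come later):

```python
import numpy as np

def principal_factors(responses, k=2):
    """First k principal axes of a respondents x items matrix of agreement
    scores (e.g. LLM-simulated Likert answers), via SVD of the centered
    matrix. A classic starting point for 'how many axes do political
    beliefs actually have'."""
    centered = responses - responses.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)  # variance share per component
    return vt[:k], explained[:k]
```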
* Hexbugs with inductive charging, onboard microcontroller.
* "you can go hunting for wallets that are big, cold for a long time, and likely made with hardware that now has known vulns in its rng"
* {
https://arxiv.org/abs/2405.16158 improves (model-free?) RL through using techniques applied to SSL to make scaling work better. Does it work on MCTS-based things?
* The model architectures in the Go MCTS papers look like they used arbitrary round numbers, actually. Did //anyone// do scaling laws sweeps?
}