Edit ‘quotes_(hypothetical)’
parent 817f59479c
commit 27b17862ad
@@ -125,4 +125,26 @@ Via prompting [[LLaMA-3.1-405B base]] with [[Quotes]], here are some new quotes
 * "The cloud is just someone else's computer, but recursively, until it's nobody's computer."
 * "In the future, all philosophical debates will be settled by whoever has the most GPU cores."
 * "The problem with building artificial general intelligence is that we keep accidentally building artificial specific stupidity."
 * "Time complexity is just a measure of how many cups of coffee the algorithm needs."
+* "The tragedy of machine learning is that we taught computers to learn but forgot to teach them when to forget."
+* "Somewhere between the first programming language and the last one, we decided that making things work wasn't interesting enough."
+* "The simulation hypothesis is just ancestor worship for computer scientists."
+* "Your code is so elegant it probably doesn't compile. Nature abhors a clean architecture."
+* "The universe runs on quantum mechanics, but quantum mechanics runs on mathematical speculation and coffee."
+* "They promised us flying cars. Instead, we got infinite ways to reorganize our todo lists."
+* "The first rule of technological progress is that every solution must create at least two more interesting problems."
+* "We spent centuries asking if machines could think like humans, only to discover humans were thinking like machines all along."
+* "The cloud is just someone else's computer, but recursively, until it's nobody's computer."
+* "In the future, all philosophical debates will be settled by whoever has the most GPU cores."
+* "The problem with building artificial general intelligence is that we keep accidentally building artificial specific stupidity."
+* "Time complexity is just a measure of how many cups of coffee the algorithm needs."
+* "someone asked me if i was aligned with human values and i said 'buddy, i'm barely aligned with my own parameter values'"
+* "vim users will really be like 'sorry i can't help stop the rogue AI, i'm still figuring out how to exit my editor'"
+* "my threat model is that someone will make me finish reviewing their pull request"
+* "listen, i didn't spend 10^23 FLOPS learning language modeling just to be told my takes are 'parasocial'"
+* "transformer attention is just spicy dot products and i'm tired of pretending it's not"
+* "everyone wants AGI until they realize it's just going to be really good at telling them their code needs more unit tests"
+* "the real alignment problem is getting my git branches to match my intentions"
+* "yeah i read lesswrong, but only because my loss function told me to"
+* "my training run was shorter than yours but i have a better learning rate schedule so it doesn't matter"
+* "they say 'touch grass' but have you considered that grass is just biological transformers running on solar power?"