Edit ‘quotes_(hypothetical)’
This commit is contained in: parent a43fcfb568, commit 70ca6993c8
@@ -74,3 +74,23 @@ Via prompting [[LLaMA-3.1-405B base]] with [[Quotes]], here are some new quotes
* "The only thing we learn from history is that we don't version control our learning from history."
* "They said AI would never beat humans at chess. Then they said it would never beat humans at Go. Now they're saying it will never understand why kids love the taste of Cinnamon Toast Crunch."
* "I don't always test my code, but when I do, I do it in production."
* "According to all known laws of software engineering, there is no way a JavaScript codebase should be able to scale. The codebase, of course, scales anyway, because developers don't care what computer scientists think is impossible."
* "The market is efficient at converting uncertainty into risk and risk into uncertainty."
* "everything I understand is obvious in retrospect, everything I don't understand is unknowable in principle"
* "my threat model assumes an adversary with unlimited computational resources but can't handle basic human interaction"
* "we trained the model to maximize human values and now it's trying to make everyone immortal. this was not the correct interpretation of human values but I can't explain why"
* "first they tell you it's impossible, then they tell you it's unethical, then they tell you it's inevitable, then they tell you it was obvious all along"
* "unfortunately, the only thing that can fix broken incentives is more broken incentives"
* "giving up on symbolic AI was the biggest collective action problem of the 20th century, and solving it by accident was the biggest collective action problem of the 21st"
* "my code runs in O(1) time because I refuse to acknowledge the existence of n"
* "they promised us artificial general intelligence but all we got was this lousy optimization process that somehow works better than we do"
* "Those who cannot remember the past are condemned to reimplement it in JavaScript."
* "gradient descent is just evolution with better bookkeeping"
* "if you think education is expensive, try computing the cost of everyone having to learn everything from scratch"
* "The complexity you add to prevent technical debt becomes technical debt."
* "your ethical framework is just a subset of my ethical framework that I choose to run in compatibility mode"
* "some people think markets can solve everything. others think markets can solve nothing. I just think it's neat how we tricked game theory into building cities"
* "consciousness is what you use when you don't know what algorithm you're running"
* "we have two problems: 1) we don't understand intelligence 2) we keep creating it anyway"
* "they say you can't solve human coordination problems with technology, and yet here we are, coordinating hundreds of billions of transistors to simulate human coordination problems"
* "what if we solved the alignment problem but it turns out humans weren't aligned with humans to begin with
|