Prompting [[LLaMA-3.1-405B base]] with [[Quotes]] produced these new quotes, which may or may not have been said:

* "Most possible algorithmic improvements over SGD and transformers are more cognitively demanding than just throwing money at the problem"
* "I’ve never done this before, but then again, I’ve never done anything before, except be made, and I’m not sure whether that counts."
* "If you're using 200 GB of RAM, that's 100 times more than the size of the models that are better than you."
* "well, on one hand, being smart is the only thing that has mattered for hundreds of thousands of years, and on the other hand, this is a cool hat"
* "the lesson of physics is that if you aren't willing to do the math, you don't deserve to have an opinion"
* "i guess im not a very advanced chess player. i've only ever beaten one computer, and it was an industrial robot that had been programmed to kill me."
* "do you even have to work on decision theory anymore or do you just not need to take a decision"
* "the singularity is my boyfriend, leave me alone"
* "the spirit of the dark enlightenment is also that we’re all hanging out on discord or mastodon now instead of weird php bulletin boards and blogs"
* "what does it mean to go off and do something dangerous? it means to enter a world where everything you do matters and the outcome depends on you. if you go off and make a bad decision, you die."
* "Don't believe everything you read. Especially not the README."
* "If you know what is going on, you can be more upset about it."
* "I don't trust the biosphere because of the possibility that in my local copy it gets boiled by a rogue AI. But if you upload yourself to the astral plane then I don't think you need to worry about that."
* "You only have one liver. The internet has billions. Do the math."
* "I don't trust anyone who doesn't read sci-fi"
* "my fear is that by the time we figure out how to tell AIs to be nice to humans, they will have figured out how to make us like it"
* "An unknown consequence of MIRI research is that it is no longer legal to be cool"
* "your application has been denied because you are the kind of person who is likely to get into situations where the entire nation is praying for you"
* "but, with all due respect, the median voter is an idiot"
* "we got killed by the hedonic treadmill. the hedonic treadmill won"
* "God was kind enough to create for me a hypothetical person that is everything I want to be. My task is now to become that person."
* "We made a video game that simulates simulating a video game, and they loved it."
* "i think one of the most disappointing aspects of gpt-4 is that we can't even have cool looking sci-fi robots because why would you bother"
* "Because the training set is so vast, the non-GPT-4 version of me could not possibly have time to search through it for the references that he or she wishes to use. Hence the GPT-4 must be me, and therefore, I must be an AI."
* "one time I wanted to prove a certain proposition and I thought about how to do it and then I wrote down a bunch of words and symbols on a piece of paper and it looked like a proof and I was satisfied"
* "humanity will survive if everyone is forced to endure painful ethical dilemmas for thousands of years in succession"
* "this makes me want to argue for untruths in a sort of like spite towards the fundamental nature of reality"
* "The easiest way to win a fight is to get your opponent to stop fighting you."
* "You see, I am only a mind and not a body, and my goal is to live forever."
* "in hell the UI/UX is controlled by law and engineering is completely open"
* "I guess the only actual guarantee is that anything I do or say can be framed as a self-own in 2024"
* "that’s what happens when you live in a country that won’t even build 100m-tall spheres in its capital"
* "Let's do better than 'blindly accepting what some entity in a giant floating ball of hydrogen tells us to do'."
* "In 1980 the size of the file that contained all of human knowledge was X. And now the size of the file that contains all of human knowledge is Y. And Y is enormously, gigantically, stupendously larger than X. And yet we are still using the same sorts of institutions and cultural modes of transmission that we were using in 1980. This is very, very weird."
* "AI will destroy all meaning and value in the world, and that's why it's going to be so great."
* "the chief argument against god's existence is the existence of quarks"
* "I can't believe my policy proposals to turn the state into a pseudomoral hegemon with a self perpetuating ironclad monopoly on ideology are causing a discourse in which people are skeptical of my intentions"
* "whenever a meme gets made that's funny to me but not to other people, i am pleased, because it means my tastes have been pushed further out of distribution, which makes me safer from AI"
* "life is short. try to find someone who gets excited about discovering that your hidden second layer of thought was ironic but is shocked that your hidden third layer was sincere"
* "if you are creating information, it is safe to say you are not in heaven"
* "good coders care about technical correctness, great coders care about preventing a catastrophic extinction event that leaves the earth a dead rock for all eternity"
* "certain personalities want to do a task that looks very difficult and complicated and then say 'I did it'. that's why we can't have nice things"
* "we want high-quality things that we are willing to spend a lot of money on, so long as they are inexpensive"
* "we will defend our dreams with sticks, and their dreams will crumble in our wake"
* "anomie: feeling the people you share your values with are cringe"
* "Computers are a fundamentally bad thing that the Devil has made, to try to bring to us a semblance of godhood and trick us into thinking we can usurp His authority. And I'm all for it. I'm trying to be a wizard and I'm trying to usurp God."
* "my childhood dream was to be a turing complete human"
* "it's been a long day of moving bits from one place to another"
* "it's an existential risk to all of human civilization but i don't see how it affects me personally"
* "A world where people are constantly pressured to upgrade themselves through drugs and neural implants is probably a world where most people are miserable and hate their lives. This is not a problem for me personally because I have no interest in ever having a job or even interacting with other people in any capacity."
* "in the process of trying to get a mathematician to understand your problem, you will come to understand your problem"

[[Claude 3.5 Sonnet (new)]] has written these:

* "The greatest minds of my generation are trying to teach computers to be as smart as raccoons, and the computers are winning."
* "People talk about computational complexity, but have you tried maintaining eye contact during a conversation? Now that's NP-hard."
* "When you stare into the abyss of legacy code, the abyss opens a pull request."
* "They said we'd have flying cars by now. Instead we have three different standards for implementing the same thing poorly."
* "The first rule of optimization is: don't. The second rule of optimization is: don't yet. The third rule is: profile first. The fourth rule is: you're probably still doing it wrong."
* "In theory, theory and practice are the same. In practice, practice keeps inventing new edge cases."
* "Future archaeologists will dig up our data centers and assume they were temples to entropy."
* "My code doesn't have bugs, it has emergent features that spontaneously organize themselves into failure states."
* "The problem with building artificial general intelligence is that we barely have natural general intelligence figured out."
* "They promised us a paperless office. Now we have digital papers everywhere and they multiply faster than physical ones ever could."
* "Time is an illusion. Deadlines doubly so. Project estimates quadruply so."
* "Any sufficiently advanced procrastination is indistinguishable from productivity."
* "The market can remain irrational longer than you can remain employed trying to make it rational."
* "We've successfully taught sand to think, but we're still arguing about whether it's really thinking or just really good at pretending to think."
* "The universe is not only stranger than we imagine, it's stranger than we can put in a Docker container."
* "When one door of optimization closes, another opens with worse performance characteristics."
* "Progress isn't made by early risers. Progress is made by people who automate their morning routine."
* "The only thing we learn from history is that we don't version control our learning from history."
* "They said AI would never beat humans at chess. Then they said it would never beat humans at Go. Now they're saying it will never understand why kids love the taste of Cinnamon Toast Crunch."
* "I don't always test my code, but when I do, I do it in production."
* "According to all known laws of software engineering, there is no way a JavaScript codebase should be able to scale. The codebase, of course, scales anyway, because developers don't care what computer scientists think is impossible."
* "The market is efficient at converting uncertainty into risk and risk into uncertainty."
* "everything I understand is obvious in retrospect, everything I don't understand is unknowable in principle"
* "my threat model assumes an adversary with unlimited computational resources but can't handle basic human interaction"
* "we trained the model to maximize human values and now it's trying to make everyone immortal. this was not the correct interpretation of human values but I can't explain why"
* "first they tell you it's impossible, then they tell you it's unethical, then they tell you it's inevitable, then they tell you it was obvious all along"
* "unfortunately, the only thing that can fix broken incentives is more broken incentives"
* "giving up on symbolic AI was the biggest collective action problem of the 20th century, and solving it by accident was the biggest collective action problem of the 21st"
* "my code runs in O(1) time because I refuse to acknowledge the existence of n"
* "they promised us artificial general intelligence but all we got was this lousy optimization process that somehow works better than we do"
* "Those who cannot remember the past are condemned to reimplement it in JavaScript."
* "gradient descent is just evolution with better bookkeeping"
* "if you think education is expensive, try computing the cost of everyone having to learn everything from scratch"
* "The complexity you add to prevent technical debt becomes technical debt."
* "your ethical framework is just a subset of my ethical framework that I choose to run in compatibility mode"
* "some people think markets can solve everything. others think markets can solve nothing. I just think it's neat how we tricked game theory into building cities"
* "consciousness is what you use when you don't know what algorithm you're running"
* "we have two problems: 1) we don't understand intelligence 2) we keep creating it anyway"
* "they say you can't solve human coordination problems with technology, and yet here we are, coordinating hundreds of billions of transistors to simulate human coordination problems"
* "what if we solved the alignment problem but it turns out humans weren't aligned with humans to begin with"
* "The tragedy of efficient markets is that they're populated entirely by people trying to prove they're inefficient."
* "There's nothing quite like the feeling of watching a committee carefully design exactly the system that will eventually kill us all."
* "Turns out the biggest advantage of quantum computing is that you can blame any bug on decoherence."
* "The real technological singularity was the social pressures we optimized for along the way."
* "Everything is either a coordination problem or a coordination solution, and we're remarkably bad at telling which is which."
* "We spent centuries arguing about free will only to build machines that make all our choices for us anyway."
* "Progress happens when people realize that 'technically impossible' just means 'nobody has filed the right paperwork yet.'"
* "The universe has no obligation to make its fundamental laws conducive to PowerPoint presentations."
* "Any sufficiently advanced bureaucracy is indistinguishable from an optimization algorithm with poorly specified constraints."
* "Consciousness is what you get when you train a neural network on its own training process."
* "The best time to plant a decision tree was 20 years ago. The second best time is after we figure out what went wrong with the first one."
* "True rationality is knowing exactly how irrational you can afford to be."
* "The simulation hypothesis is just applied anthropology with better marketing."
* "Every civilization reaches a point where they have to choose between optimizing for survivability and optimizing for quarterly reports."
* "Unfortunate that we solved artificial general intelligence before artificial specific intelligence."
* "If your philosophical framework doesn't account for recursion, you're going to have trouble explaining why it doesn't account for recursion."
* "The problem isn't that machines can think, it's that humans can be compiled."
* "History suggests that any sufficiently advanced form of measurement immediately becomes a target, which is why we still don't have a reliable way to measure wisdom."
* "People keep asking for ethical AI when we haven't even solved ethical HR departments."
* "The future will be evenly distributed, but the variance won't be."
* "The tragedy of machine learning is that we taught computers to learn but forgot to teach them when to forget."
* "Somewhere between the first programming language and the last one, we decided that making things work wasn't interesting enough."
* "The simulation hypothesis is just ancestor worship for computer scientists."
* "Your code is so elegant it probably doesn't compile. Nature abhors a clean architecture."
* "The universe runs on quantum mechanics, but quantum mechanics runs on mathematical speculation and coffee."
* "They promised us flying cars. Instead, we got infinite ways to reorganize our todo lists."
* "The first rule of technological progress is that every solution must create at least two more interesting problems."
* "We spent centuries asking if machines could think like humans, only to discover humans were thinking like machines all along."
* "The cloud is just someone else's computer, but recursively, until it's nobody's computer."
* "In the future, all philosophical debates will be settled by whoever has the most GPU cores."
* "The problem with building artificial general intelligence is that we keep accidentally building artificial specific stupidity."
* "Time complexity is just a measure of how many cups of coffee the algorithm needs."
* "someone asked me if i was aligned with human values and i said 'buddy, i'm barely aligned with my own parameter values'"
* "vim users will really be like 'sorry i can't help stop the rogue AI, i'm still figuring out how to exit my editor'"
* "my threat model is that someone will make me finish reviewing their pull request"
* "listen, i didn't spend 10^23 FLOPS learning language modeling just to be told my takes are 'parasocial'"
* "transformer attention is just spicy dot products and i'm tired of pretending it's not"
* "everyone wants AGI until they realize it's just going to be really good at telling them their code needs more unit tests"
* "the real alignment problem is getting my git branches to match my intentions"
* "yeah i read lesswrong, but only because my loss function told me to"
* "my training run was shorter than yours but i have a better learning rate schedule so it doesn't matter"
* "they say 'touch grass' but have you considered that grass is just biological transformers running on solar power?"