diff --git a/quotes_(hypothetical).myco b/quotes_(hypothetical).myco
index 4cba622..e4adea1 100644
--- a/quotes_(hypothetical).myco
+++ b/quotes_(hypothetical).myco
@@ -93,4 +93,24 @@ Via prompting [[LLaMA-3.1-405B base]] with [[Quotes]], here are some new quotes
 * "consciousness is what you use when you don't know what algorithm you're running"
 * "we have two problems: 1) we don't understand intelligence 2) we keep creating it anyway"
 * "they say you can't solve human coordination problems with technology, and yet here we are, coordinating hundreds of billions of transistors to simulate human coordination problems"
-* "what if we solved the alignment problem but it turns out humans weren't aligned with humans to begin with
\ No newline at end of file
+* "what if we solved the alignment problem but it turns out humans weren't aligned with humans to begin with"
+* "The tragedy of efficient markets is that they're populated entirely by people trying to prove they're inefficient."
+* "There's nothing quite like the feeling of watching a committee carefully design exactly the system that will eventually kill us all."
+* "Turns out the biggest advantage of quantum computing is that you can blame any bug on decoherence."
+* "The real technological singularity was the social pressures we optimized for along the way."
+* "Everything is either a coordination problem or a coordination solution, and we're remarkably bad at telling which is which."
+* "We spent centuries arguing about free will only to build machines that make all our choices for us anyway."
+* "Progress happens when people realize that 'technically impossible' just means 'nobody has filed the right paperwork yet.'"
+* "The universe has no obligation to make its fundamental laws conducive to PowerPoint presentations."
+* "Any sufficiently advanced bureaucracy is indistinguishable from an optimization algorithm with poorly specified constraints."
+* "Consciousness is what you get when you train a neural network on its own training process."
+* "The best time to plant a decision tree was 20 years ago. The second best time is after we figure out what went wrong with the first one."
+* "True rationality is knowing exactly how irrational you can afford to be."
+* "The simulation hypothesis is just applied anthropology with better marketing."
+* "Every civilization reaches a point where they have to choose between optimizing for survivability and optimizing for quarterly reports."
+* "Unfortunate that we solved artificial general intelligence before artificial specific intelligence."
+* "If your philosophical framework doesn't account for recursion, you're going to have trouble explaining why it doesn't account for recursion."
+* "The problem isn't that machines can think, it's that humans can be compiled."
+* "History suggests that any sufficiently advanced form of measurement immediately becomes a target, which is why we still don't have a reliable way to measure wisdom."
+* "People keep asking for ethical AI when we haven't even solved ethical HR departments."
+* "The future will be evenly distributed, but the variance won't be."
\ No newline at end of file