On a recent post I made, I got into a debate about the singularity with u/donaldhobson (https://www.reddit.com/u/donaldhobson/s/OgimEwugHm), and I'm wondering what you all think. Here are some excerpts from the discussion.
Me: I find the typical idea of a singularity to be rather absurd, because it completely ignores the "tech tree" and assumes a sentient AI is right around the corner, just a few steps up from ChatGPT, and that it'll automatically be able to will itself into being exponentially smarter, all before we even get major interplanetary colonies going. Heck, many singularity estimates place a fricking superintelligence before basic nuclear fusion. Superintelligence is the destination, not the journey. It's a reward, not a cheat code we get for free. Even if we can somehow make a truly human-equivalent AGI without first having mastered the human brain (something I bet will happen parallel to interstellar missions, or perhaps a few decades after) and we're able to immediately give it framejacking capabilities, that doesn't mean we get a true superintelligence, just a fast-thinking humanoid mind. A human could spend eons studying, but they'd never be a civilization on their own; their brain has fundamental limitations, and even the benefits of being digital (perfect memory, multitasking, framejacking, etc.) wouldn't bring them any closer to superintelligence, or even to our current collective intelligence as a civilization, than a chimp given the same advantages would be to us. And even if that were enough to allow it to improve itself, that doesn't automatically mean it continues linearly, let alone exponentially or hyper-exponentially.
u/donaldhobson: The tech tree in games is balanced to make it a good game. You seem to be assuming a sort of "fairness". If you had been around 100,000 years ago, looking over the different animals, you might have said "some animals have claws, some have big muscles, some run fast, some breed fast. There is no cheat code." From the perspective of evolution, and endless aeons of back and forth between animals with no one species being clearly superior, it sure looks like modern humans hit a cheat code. "There are no cheat codes" is not a law of physics. We didn't build planes by mastering the bird's wing. Intelligence in the abstract is not a complicated phenomenon. Evolution doesn't do things the simplest way. In your world model, you see the human brain as equal to intelligence, both irreducibly complicated. There is an algorithm called AIXI which, given unlimited compute, displays the maximum possible intelligence. It is known today and is only a few lines of code. ChatGPT can write poems and play chess and do all sorts of things, despite being pretty simple code with no pieces explicitly hardcoded for those tasks. Evolution selected humans to make stone tools and play tribal politics, and accidentally got a mind that can sometimes do quantum physics. Put all those pieces together and it suggests that there is some algorithm, not much more complicated than ChatGPT, that is a general superhuman intelligence with a reasonable compute demand. I suspect I'm living in a world where ASI requires another 3 breakthroughs on the scale of backpropagation and the transformer architecture.
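(For context on the AIXI claim: the "few lines" usually means Hutter's expectimax expression; this is my gloss on his standard formulation, not something from the thread. The agent picks the action maximizing expected future reward over all programs $q$ consistent with the interaction history so far, weighted by program length:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the planning horizon. Short to write down, uncomputable to run, which is why "given unlimited compute" is doing so much work in that sentence.)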
Me: Don't get me wrong, superintelligence is kind of a cheat code, just not nearly as much as you think, and AGI definitely isn't one, since if it were we would've gone singularity back in the stone age or something. What I'm proposing is still probably one of the fastest technology timelines here, with all of science being done in a few centuries and us speedrunning the Kardashev scale to the point of a full lightweight Dyson statite swarm by maybe 2400-2500, a genuine full K2 civilization long before the end of the millennium, and a complete shattering of anything even vaguely resembling human society, with a world of septillions of immortal, million-IQ (often vastly more) ultra-benevolent beings so peaceful and enlightened they don't even need a government to make them all get along (with the exception of maybe a few experiments gone wrong after being rushed, giving rise to genuine eldritch horrors, but we'll ignore that for now; besides, your worldview would be dominated exclusively by such horrors). However, even compared to my ridiculously optimistic proposal of mere centuries making the biosphere and human civilization completely obsolete, your worldview is cartoonish. Yes, the speed of progress is breathtaking, but our cheat code is likely much farther away than you think. Superintelligence will arise from the world of messing around with our psychology, building early Dyson swarms, self-replicating factories and probes, beings hundreds or thousands of times smarter than us being the norm, and massive colony ships pushed by beams of starlight, not the world of fossil fuels, memes, flat-earthers, covid, porn, TikTok, aging, and the last few generations of farmers that actually use fields instead of skyscrapers. Our cheat code will arrive one day, my friend, but alas we are not there yet.

Again, ask any neuroscientist how intelligence even works and they'll shrug. Algorithms are just fundamentally incapable of actual intelligence; heck, even the pioneers of artificial intelligence admitted the name was kind of a joke, the machines weren't actually thinking, it was just equations. There is no "maximum possible intelligence", especially not from primitive algorithms. Intelligence goes up exponentially with complexity; it (and consciousness) is an emergent property, kind of like the wetness of water. The hardware is no longer exponential; we need to build up the software for a few decades, then move past the idea of software altogether by figuring out how real intelligence and consciousness work via our brains, then backtrack and find the most efficient form of that, then add more complexity to the system while actually understanding what that complexity is (because if we don't understand it, then it's not really complexity, just a really powerful version of the current PSYCHOLOGY [not software anymore]). Also, humans are really only cruel to animals through carelessness (something superintelligences would lack) or because we WANT something from them, but there's nothing a superintelligence would want from us, and we're a tiny speck to take care of, which is great because its mind is so complex it notices every speck of dust and has resources so vast that even a trillionth of those resources would please the slightly larger specks of dust (us).
And there's such a vast gap between them and us that we CAN'T throw together something so good it becomes them quickly; not all of us even wipe our own asses yet. Through vast effort we can make things better than us that are better at making things better, and so on, but even an artificial mind equivalent to our own is a big deal for our little ape brains, much like interstellar travel, not something Elon Musk can throw together for a few billion dollars in a decade or two.
u/donaldhobson: Ah. I think we are getting to the fundamental disagreement. I think intelligence is a specific type of fancy equation, and the likes of ChatGPT are what you get when you get parts of the equation mostly right, but other parts wrong/missing. On a related note, submarines don't actually swim, it's just propellers. This seems like entirely vibes-based reasoning. Like you have a mental notion of how scifi various technologies are, and ASI feels too scifi to come yet. There is nothing about ASI that requires a Dyson swarm or a colony ship. They are independent branches on the tech tree. If we totally neglect research on one branch while pushing on the other, we can get the tech to arrive in all sorts of different orders. If that's your fast timeline, what's your slow one like? No way does it take that long.
Me: It's not just a matter of not having the means to make the edit, it's also not even knowing what edits to make in the first place. We don't even understand what we have, let alone how to improve it. The best we've got is blindly increasing brain size, but there's no guarantee that'll even do anything. A human-level AGI wouldn't even know where to start with improving itself any more than we do. Sure, it'd have some advantages, but most of that advantage would be having the knowledge used to create it, which is a luxury we don't have for ourselves, and knowing how something works doesn't mean you know how to make it better. If you were given all the tools needed to make your phone better, you wouldn't make any progress, because you have no clue how, and you can't discover that knowledge on your own through research, because that takes countless people and decades of work with advanced equipment even more perplexing to you. There are things we can reasonably expect based on how difficult they actually are. You're not getting social media before steam engines, despite those two being completely unrelated. Also, you'd only expect weird anomalies in the order of the tech tree if we develop an obsession with some technology and it turns out to be relatively easy to research and develop. Maybe if a full percentage of our population were researching AI we could get sort of close to your timelines (probably not, though, 'cause economy). Also, we need to consider the fundamental nature of the technology itself: algorithms aren't MINDS. We aren't equations, we're a complex, ever-changing system of chemicals. Intelligence is something that seems to be outside the fundamental workings of transistors running algorithms in the first place. Also, your analogy is pretty bad. For starters, there are designs for swimming fish-like submarines, and it's hard to tell right now what parts of brains need to be replicated for intelligence; we just don't know how much you can distill out to get the true foundations of intelligence and how much is just extra fluff mother nature vomited out. But honestly, no neuroscientist I've heard of thinks the way you do; that's an attitude peddled solely by computer nerds, as opposed to the people who actually research how intelligence works. People are so impressed by our progress in algorithms that they get caught up in the hype and greatly underestimate how much progress we haven't made yet. At the end of the day, though, we fundamentally can't know how much of our brains is evolutionary fluff and just how alien intelligence can be until we actually research our minds. There's a gap between algorithms and minds that we can cross much faster by researching our minds first, as opposed to blindly trying different algorithms. We stick close to biology first, then backtrack from there; that's how we do it fast.
u/donaldhobson: Why do you think ASI is super hard to make? If ASI was totally trivial, we would have made it by now. If it's super hard, it's a long time away. There is some level of difficulty that makes it arrive in 2050. How do you know that ASI isn't at that level of difficulty? When we display intelligence, it's because those chemicals are doing computations. Evolution picked analogue over digital, but it's still computations. And I think a lot of the complications are random things that are getting in the way of intelligence. Feather mites living in bird wings are a random complication. That doesn't mean that flight is inherently complicated and needs feather mites. Those "computer nerds" are also researching how intelligence works, just from a different angle. You seem to think the biologists have some deep insight, and the computer scientists are just dumb. That's not how it has worked for any technology in the past. Planes didn't start by copying birds exactly. Well, the first means of flight were balloons, which are even less birdlike. Evolution found human minds by trial and error, which shows that they are findable by trial and error. Evolution was slow, but much of the selection pressure was on the rest of the body, not the brain. And when evolution has overheating problems, it has to trial-and-error its way to better heat sinks; humans can quickly engineer a better heat sink. And humans can at least extrapolate a line on a graph (i.e. as this number goes up, performance goes up). Evolution can't even do that. Why do you think trial and error will fail? And you do know there is at least some theory behind deep learning, it's not just trial and error, right?
Me: Except it will be once we get it up to our level. That's uncharted territory and we have literally nothing to work off of. We have zero clue how such a being would think, how to make the new emotions and sensations it would experience, and all those other philosophical differences. Sure, we could add raw computational power to us and our AIs, but that can really only get us so far; it's incredibly inefficient, dangerous, and lacks precision. It's the sledgehammer approach, and ideally, considering we're dealing with crafting living beings, we want the scalpel approach. Maybe we could get an AGI of SOME kind by roughly 2050, but I don't think that would cause the avalanche of progress you think it would, and even then I'm leaning more towards 2080. Again, general intelligence is kinda hard to describe and doesn't necessarily mean it's anything like us, or particularly efficient, or easy to upgrade. Maybe computer science and neuroscience will meet in the middle in the mid-to-late 21st century and that's how we get AGI: a close-ish replication of intelligence that we then make more complex and useful, and in the process figure out how our brains work and unlock all the tech that comes with that. Still, that's a process of like a century. I'd still call that a singularity, just not what you're envisioning. Difficulty is a pretty reasonable assumption for a technology like superintelligence, and it's not just an assumption; we barely understand animal intelligence. I'm willing to concede that maybe AGI could be done by 2050 (probably a few decades later, at least in terms of it being widely available, safe, and super useful); however, a bunch of AGIs does not a superintelligence make. It definitely accelerates civilization by a lot and undoubtedly kicks off the singularity, but I just don't see your level of speed being very likely, nor desirable. I'm aware computer scientists have a lot of good insights on intelligence; however, I'm also aware of the huge overestimation everyone has of them, like they're on the verge of the singularity while every other field is irrelevant. I've definitely sensed a lot of arrogance from computer science, like they have everything figured out and in a few decades science will be over. Imo, you just drank the Silicon Valley kool-aid a bit too much. Tech billionaires benefit greatly from seeming way more advanced than they actually are. I don't think flight and birds is the right analogy for AGI; it's more like how we're trying to design nanobots, i.e. by essentially just copying cells, and with nanotech the funny thing is even very different chemistries still look like biological cells. Again, I'm not saying we need to replicate brains exactly; in fact, human-likeness is rarely ever optimal for technology. But I think the problem of intelligence is hard enough that without assistance from brain research it's not going to get very far very fast, and even with help it's still going to take a long while to actually make artificial life.
Ultimately I think there's a fundamental limit to how fast a singularity can actually happen, and I believe that will always be on the order of many decades or a century or two, due to a combination of difficulties: setting up AGI, making it more alive than the initial models, improving its intelligence beyond human levels, and getting it to the point where it can truly advance faster without our help. Not to mention any singularity that goes too fast risks making manmade horrors beyond comprehension, and nobody legitimately wants that; pretty much everyone agrees caution is a virtue when it comes to AI. Plus, like I said, research can only happen so fast, and we don't know how difficult it will be for each level of the singularity to advance; it may turn out that all intelligence finds great difficulty improving itself, or maybe it's a piece of cake (it's probably somewhere in the middle). Either way, I think my compounding difficulties are really hard for you to just explain away and cram into your timeline. And at the end of the day, despite you proposing a faster timeline, I think I'm being the more optimistic one here, since a slower timeline actually gives us some amount of time to figure things out and ensure that the future at the other end of the singularity is a pleasant one.
u/donaldhobson: In nuclear physics terms, maybe AIs are like 90% of the way to critical mass. It's not IMPOSSIBLE to just stay at that level, but it's a cutting-it-close thing. Once you start getting beyond human level, you are in a region of strong positive feedback, and it's unlikely that a field with rapid tech progress suddenly screeches to a halt just before the strong feedback loops really kick in. Intelligence isn't magic (and you're using "emergent" as a synonym for magic). It's understandable. It's mathy. There are fundamental limits. Well, it depends what you define as a "singularity", but you won't find any of your fundamental limits exceeding a microsecond for most sensible definitions. Setting up the AI is about 3 hours of installing extra tensorflow libraries. Making sure it can advance without our help is giving it terminal access and letting it figure out the rest. Improving beyond human is a case of doubling NEURON_COUNT and LAYER_COUNT. Making it "more alive" is not a thing.
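(To make the "doubling NEURON_COUNT and LAYER_COUNT" picture concrete: the kind of change being described is literally just bumping width and depth constants in a model definition. A minimal sketch, assuming a toy Keras MLP; the constants and the `build_model` helper are hypothetical illustrations, not anything from the thread:)

```python
import tensorflow as tf

# Hypothetical baseline hyperparameters (illustrative only).
NEURON_COUNT = 1024   # width of each hidden layer
LAYER_COUNT = 12      # number of hidden layers

def build_model(width: int, depth: int) -> tf.keras.Model:
    """Plain MLP: `depth` dense ReLU layers of `width` units, one linear output."""
    layers = [tf.keras.layers.Dense(width, activation="relu") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(1))
    return tf.keras.Sequential(layers)

# The "improving beyond human" step in the comment above amounts to this one-line change:
bigger_model = build_model(width=2 * NEURON_COUNT, depth=2 * LAYER_COUNT)
```

(Whether doubling those constants buys anything like more intelligence is, of course, exactly the point under dispute.)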
Me: Except that's not really an ASI then, that's just a powerful algorithm. If you remove all the thoughts and complexity, then you just get a tool that's about as close to the singularity as our current narrow AIs; heck, at that point you're better off just making a narrow AI for everything. But you're not getting any special insight from that ASI, nor is it taking over the world. Also, this goes to show your complete disregard of neuroscience. Also, it took evolution billions of years to get to our level, so us taking another century or two instead of a few microseconds makes far more sense. It's not just math, and we have no idea what emergent property actually creates it, let alone how to make a computer do that. Neuroscientists don't even tend to think transistors are the proper substrate for intelligence and consciousness in the first place. I have a question for you: are you expecting the first superintelligence to be conscious or not? Because an unconscious AGI that's very good at its job is what you seem to consider an ASI. However, that is not an ASI at all; that would require actually increasing the complexity of the mind. Btw, the only definition of "singularity" I find plausible is a century of rapid progress because of a bunch of intelligence augmentation, both in ASIs and transhumans, not the classic skynet-taking-over-in-a-microsecond BS.
u/donaldhobson: ASI is a powerful algorithm. I expect that an ASI can exist with code not vastly more complicated than ChatGPT. I'm not saying you can just scale ChatGPT up, but I wouldn't be totally shocked if scaling ChatGPT did lead to superintelligence. And some change on the magnitude of RNNs to transformers? Sure, that could do it. An AI that brute-force simulates every atom in the universe is pretty comprehensible (impractical compute requirements). The equations of quantum mechanics aren't THAT hard; it's just a big PDE solver. But such an AI is in practice very powerful and dangerous. I don't think "transistors aren't the proper substrate for intelligence" is the consensus view of neurologists. It's also blatantly incorrect. You are waving around "emergent properties" as if it's a magic, inherently incomprehensible thing. In reality, the combined behaviour of lots of simple parts is something that maths can and does deal with. There are theorems saying that as the size of neural nets goes to infinity, so too does the amount of data they can store. I'm not sure it matters from a practical standpoint. (It matters from a moral one. We want to be nice to conscious AIs. But what can a conscious AI do that an unconscious one can't?)
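(The "theorems" line is presumably gesturing at capacity/approximation results for neural nets; a standard one, stated here as my gloss rather than his citation, is the universal approximation theorem: for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$, a suitable non-polynomial activation $\sigma$, and any $\varepsilon > 0$, a wide enough one-hidden-layer network gets within $\varepsilon$ of $f$:

$$\exists N,\; \{v_i, w_i, b_i\}_{i=1}^{N} \;:\; \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} v_i\, \sigma(w_i^{\top} x + b_i) \Big| < \varepsilon$$

i.e. representational capacity grows without bound as the network grows; the theorem says nothing about how hard it is to find those weights.)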
Sounds like religion with extra steps.