Why Ray Kurzweil is Wrong

The brilliant and entertaining Ray Kurzweil is in the media a lot of late, talking about his new book, which I’m currently enjoying: How to Create a Mind. As well as promoting his book, he’s also promoting his core idea, the concept with which he’s become synonymous: that by around 2040 the rate of technological change in the world will have become so great that we will have reached a ‘singularity’.

In other words, we will have machine intelligence, and/or augmented human intelligence, and/or something else we haven’t even built yet, such that we will ascend toward a kind of godhood. Problems like finite lifespans won’t bother us any more, because we’ll be uploaded into machines. Issues like climate change and world poverty will become irrelevant, because our ability to engineer solutions will have become so powerful.

To support his argument, he provides a wealth of data showing that human development follows an exponential growth curve. Whether you look at the pace of change, the amount of computation a single person can do, or the speed with which human beings can communicate, it all follows pretty much the same pattern. And he’s right. We are on an exponential curve, and crazy things are going to happen in our lifetime. However, the kinds of things he’s predicting are all wrong, and, though I’ve touched on this topic before on this blog, I’m going to have another crack at explaining why.

First, though, a little on why you should take Ray’s arguments seriously. Ask most people why they doubt that we’ll ascend to godhood within the next 30 years, and they’ll tell you that it sounds like science fiction–that the change is much too fast. But as Ray points out, people are terrible at anticipating exponential growth curves. If you had told someone in 1975 that by 2012 you’d have access to just about all the knowledge on Earth via a device you could hold in your hand, they’d probably have doubted you. Or take climate change. The world is still full of people who doubt that it’s happening, while at the same time the rate of change is outstripping all our predictions. When it comes to exponential growth curves, I think Ray has the answer dead right.
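
To see just how badly linear intuition fails, here’s a toy calculation (my own illustrative numbers, not Ray’s data), comparing a quantity that doubles every two years with a forecaster who extrapolates the first two years’ gain in a straight line:

```python
# Toy illustration: linear intuition vs. exponential reality.
# The doubling time and horizon below are illustrative assumptions.

def linear_forecast(start, yearly_gain, years):
    return start + yearly_gain * years

def exponential_forecast(start, doubling_time, years):
    return start * 2 ** (years / doubling_time)

years = 37  # 1975 to 2012, as in the example above

# A linear forecaster who watched the quantity go from 1.0 to 2.0
# over the first two years extrapolates that absolute gain forever.
linear = linear_forecast(1.0, yearly_gain=0.5, years=years)
actual = exponential_forecast(1.0, doubling_time=2, years=years)

print(f"linear guess:        {linear:>9,.0f}x")  # ~20x
print(f"exponential reality: {actual:>9,.0f}x")  # ~370,000x
```

The two forecasts agree almost perfectly for the first few years, which is exactly why the eventual gap feels like science fiction.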

So if Ray’s predictions have so much weight, why should we doubt him?

Because of game theory. Because of the Tragedy of the Commons. And because the mind is a commons in which tragedies can happen, just like everything else.
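
For readers who haven’t met it, the Tragedy of the Commons is easy to state in code. Here’s a minimal sketch, with toy payoffs of my own invention (not a calibrated model of anything):

```python
# Tragedy of the Commons, minimal version.  Each herder picks a herd
# size; the value of grazing falls as the shared pasture degrades.
# All numbers are illustrative assumptions.

CAPACITY = 100  # cows the commons supports at full value
HERDERS = 10

def payoff(my_cows, total_cows):
    # Per-cow value degrades linearly as the commons is overgrazed.
    value_per_cow = max(0.0, 1.0 - total_cows / (2 * CAPACITY))
    return my_cows * value_per_cow

fair = CAPACITY // HERDERS  # everyone's cooperative share

print(payoff(fair, fair * HERDERS))                       # all cooperate: 5.0 each
print(payoff(2 * fair, 2 * fair + fair * (HERDERS - 1)))  # lone defector: 9.0
print(payoff(2 * fair, 2 * fair * HERDERS))               # all defect: 0.0 each
```

A lone defector nearly doubles his take, but when everyone reasons the same way the pasture collapses to nothing. Swap cows for attention-grabbing products and the pasture for minds, and you have the shape of the argument that follows.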

To explain what I mean, consider this: intelligence doesn’t happen in a vacuum. It happens in brains, and brains have the shape they do for a reason. Specifically, brains are packed with mechanisms that make it really hard for you to quit bad habits. And those mechanisms are there because they are the same parts that make it hard for someone, or something, to take control of your behavior.

Go to the business section of your local bookshop, and you will find the shelves packed with books on negotiation, sales, leadership, etc. All ways to try to get other people to do things. Ask anyone in sales and marketing how hard it can be to gain new customers, and they’ll tell you. It’s bloody hard.

Your brain is packed with checks and balances that make your behavior almost impossible to change, simply because, if it weren’t, you’d probably be dead by now. People who are easy to coerce get coerced. That’s why we have brains in the first place: to get people, or things, or animals, to do the things we want by outsmarting them. That’s why a third of the world’s biodiversity has disappeared since 1970. Because we’ve outsmarted all the other species on the planet, to their cost.

We don’t notice this part of our brains because it’s broadly counterproductive to believe that your every attempt at personal change is hamstrung right out of the gate. Similarly, believing that your new business venture is probably doomed because it’s going to be horribly hard to get people to notice what you’re doing doesn’t select for winners. There is a mounting body of evidence suggesting that people are designed to be optimistic, just as they’re designed to be stubborn.

But just because we don’t see that part of our minds doesn’t mean it’s not there. And because intelligence needs stuff to run on, just like a computer program, you take a risk every time you start fiddling around with that stuff. Muck around with the operating system on your laptop and sooner or later something bad happens, even if you gain a short-term boost in the usefulness of your machine. When the stuff that people run on becomes an operating system that people can muck with, everyone takes a risk.

Which is not to say that we’re all going to die, or that we’ll all become lobotomized zombie robots in thrall to a mad professor somewhere. Rather, it’s that once you reach the point where it’s at least as easy to muck with the stuff you’re made of as it is to leave it alone, you’re in trouble.

Not necessarily fatal trouble. After all, you can take a reformed heroin addict who’s short-circuited his brain and recovered, put a big pile of heroin in front of him, and, likely as not, he won’t take it. He’ll refuse it with pride. But it’ll be work. Once you have found a way to game a system, not gaming that system becomes an effort, and that’s true regardless of how smart you get, because the problem scales with your intelligence.

Let me say that again, to make sure it’s clear. There is no level of intelligence at which it suddenly becomes easy to cooperate or to avoid making mistakes. Sure, the risks of gaming the system become easier to see, but so does the number of ways to game it. And the more sophisticated your technology is, the more ways there are for it to screw up.

There is a proof in computer science (Turing’s halting problem; I sketch it below) that you cannot build a computer program that can tell how all possible computer programs will behave. This proof applies just as well to people, and essentially tells us that no matter how smart you get, you won’t be able to outguess or predict someone, or something, as complicated as you are.
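
The trick behind the proof fits in a few lines. Here’s a Python sketch, with the impossible ‘oracle’ stubbed out; this is the standard textbook diagonalization, not runnable prediction code:

```python
# Sketch of the halting-problem argument.  `halts` is a hypothetical
# perfect predictor; the construction below shows it cannot exist.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) would halt."""
    raise NotImplementedError("no such oracle can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about a
    # program run on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return "done"     # predicted to loop forever, so halt

# Ask what contrary(contrary) does.  If the oracle says it halts, it
# loops; if the oracle says it loops, it halts.  Either answer is
# wrong, so no such oracle can be written, however smart its author.
```

The predictor fails precisely because the thing being predicted can incorporate the predictor and do the opposite, which is as true of rival minds as it is of programs.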

This rule holds no matter which path to techno-trouble you pick. You can choose genetic modification, brain enhancements, intelligent machines, nanobots, or any other nifty technology you like. Once the technology table is covered with loaded weapons, sooner or later someone is going to pick one up and have an accident. Whether it’s someone making their shampoo addictive to convince everyone to buy it, or a company asking people to receive a minor genetic change before they join, or a self-adapting program trawling the internet for cat photos, it’s going to happen sooner or later. It’s not one slippery slope. It’s a slippery slope in every direction you look.

My guess is that we’ll recover from whatever accident we have. However, we’ll then nudge back towards the table of trouble and have another accident. And then another. And this continues until we basically run out of planet to mess about with. To my mind, this is why we don’t see signs of intelligent life out in space: nobody gets past this point–the Tinker Point. It’s like an information-theoretic glass ceiling.

So is there anything we can do about this?

Yes. Of course. The best technological bets are space travel and suspended animation. The more people are spread out in time and space, the less likely it is that any one accident will be devastating. We’ll be able to keep playing the same game for a long, long time.

However, the fact that there’s no evidence that anyone else in the universe has summoned up the gumption to pull off this trick isn’t exactly comforting. As a species, we should get our skates on. By Ray’s estimate, we have about thirty years.

10 thoughts on “Why Ray Kurzweil is Wrong”

  1. Bad premise.

    We are already Gods and all is divine. There is no separation between humans and the godhead. All is God, including Kurzweil and his stories. Simply another version of the same thing. Consciousness is already pervasive and non-localized, not simply embedded in individual brains or, for Kurzweil, in computational formats.

    Consciousness is, and we already live within the singularity. It’s all One. He’s just catching up, I guess.

    Bad premise.

    1. Hey Kristen,
      Apologies for the incredibly slow reply. I’ve been learning to be a dad and my blogging dried up completely. I think I see where you’re coming from, but if all is God and all is divine, how does that change anything? If you put everything in the universe in one category, how does that category help you make predictions or distinctions about what’s going on? While there might be some philosophical satisfaction to grouping things this way, isn’t it really the same thing as saying we’re all just mud?
      As for consciousness, I can’t help but feel that it’s an overrated feature of the mind. Consciousness, I would propose, is simply the mirror through which we see the wonders of our own mental functioning. It doesn’t tell us how to hear or see, or even how we make decisions. Proposing that consciousness is everywhere seems fine, so long as we acknowledge that consciousness itself is a trivial feedback loop and that the real processing that makes people special happens elsewhere.
      May your philosophical explorations bring you happiness and clarity, and thank you for making a comment on my blog.

  2. Quite insightful exposition. Thanks. I’m in graduate school (instead of retiring) and my thesis is on climate change behavior so I was researching Ray Kurzweil when I stumbled on your blog. I’ll read more when I have time to spare from studying and data collecting.

    1. Hey,
      Thank you for your comment! I’m sorry if that reference muddied the whole theme for you, so I’ll try to make a few things clear.
      First, I don’t see global warming as increasing exponentially. Rather, I see climate change as a consequence of exponential growth in other areas. Apologies if I didn’t make that clear.

      Secondly, while I’m not a biologist by training, I currently hold a research position in the Ecology and Evolutionary Biology department at Princeton University, mostly because of the work I do in game theory and social simulation design. My take on climate change science is that the whole thing has become very confused in the public mind.

      It seems to me that climate change isn’t really about warming. Warming is a part of it, but if people try to parse what’s going on in terms of whether or not the temperature has gone up in the last 17 years (thanks for the link, btw), then they’re missing the point. The point, as I see it, is better understood by looking at facts like these:

      * By some measures, we have lost a third of the world’s biodiversity since 1970. That die-off is orders of magnitude faster than the one that killed the dinosaurs.
      * We have giant reefs of floating plastic in the center of every world ocean that have become breeding grounds for rapidly mutating bacteria. Some of them look a lot like cholera. Others eat hydrocarbons and would be just as happy devouring the contents of your gas tank as your dashboard. The patch in the Pacific is the size of Texas.
      * The only place in the world where glaciers are not retreating but advancing is the Himalayas. That’s because they’re melting so fast from the bottom that they’re sliding down the mountains.
      * Summer sea ice in the Arctic is vanishing at a ferocious rate and will likely be completely gone by 2050. This isn’t great news, because along with that, Siberia is melting too, and as that permafrost melts, it’s pushing millions of tons of methane into the atmosphere.
      * The Asian brown cloud (a man-made phenomenon) is now two miles deep, covers an entire continent, and is changing weather patterns worldwide.

      In other words, you may be right that the temperature isn’t going up at the moment. But if that’s true, it’s because a cascade of far more powerful climate factors is at work, temporarily appearing to cancel each other out.

      The real point is that we’re so far off the map that we have *no idea* what we’re doing, or what’s going to happen next. Ecologists seem really reluctant to make this point, mostly, I think, because they don’t want to freak people out. But while we can only guess vaguely at what’s going to happen to global temperature, one thing is unavoidably obvious: we are transforming the planet at a rate that boggles the imagination, and which we stand zero chance of controlling.

      Warming shwarming. If people want to quibble about temperature rise data while the planet goes to hell in a handcart, they have that right. They just shouldn’t be surprised if I laugh at them.

  3. Alex,

    Interesting ideas here, but the evolutionary reasoning is a little shaky. There are already examples in nature of convergent individuals that are very successful. So successful, in fact, that they have all but given up individuality for group success. The family Physaliidae is the best example of a society that maintains both a division of labor and an absolute requirement of cooperation.

    But higher eusocial organisms also fit the bill, such as hymenopteran and isopteran societies that grow to vast numbers of individuals. Even mammals have eusocial organisms, in the naked mole rats of East Africa.

    I would not be so quick to imagine an evolutionary disadvantage to being susceptible to mental manipulation. Organisms that “allow” themselves to be domesticated do very well, much better than their wild-type counterparts. Domestic chickens have gone global and outshine jungle fowl to a ridiculous extent. All horses are now domesticated. Dogs, corn, wheat, sheep, pigs: all have allowed themselves to come under profound control, and in so doing have far outstripped their wild ancestors in evolutionary success.

    I think what your argument does cover is that there are some very dark solutions to a future that includes a Kurzweil Singularity. If you mean that we may lose something of our individuality, or our humanity, then you are right to hold that fear. But game theory indicates not that individuality will be maintained, but rather that choices will be made that enhance a payoff. And every payoff is not the same to everyone in the game. So if something, such as a collective creature of electronics, biology, or some amalgam of the two, gets an advantage over a bunch of individuals, then game theory and evolution both predict that the payoff goes to that “something.” And by the way, evolution doesn’t care if an advantage is short term; it only cares about who wins right now. That’s why we still have global warming, and that’s why Kurzweil is correct.

  4. Hmm, clickbait title. It should be “Why Kurzweil Might Be Wrong Due To An Unforeseen Existential Crisis”. He has researched the issue a lot more than you have, and I don’t think his “wrongness” has been justified based on your arguments.

    1. Hi SputnicK, thank you for your comment! I respect your position, and I respectfully disagree with it!

      First, regarding the title, I wouldn’t call it clickbait. I’d call it my opinion. And the point is not a single existential crisis, but the systemic inevitability of one, regardless of the form that it takes. Furthermore, it can hardly be unforeseen if I’m predicting it now. 🙂

      As for his having researched this a lot more than I have, I dispute that too. I have been a machine learning researcher, a complex systems theorist, a software developer, and a science fiction writer. My research has touched evolutionary biology, quantum gravity, sociology, economics, network theory, and organizational psychology. By contrast, I would argue that Kurzweil’s take on this field is somewhat narrow, which is why he’s making mistakes. For starters, his argument comes from the extrapolation of social, biological, and technological trends, but (so far as I’m aware) pays no real attention to the game-theoretic landscape that causes these innovations to come about in the first place. As you may know, there are plenty of kinds of game-theoretic interactions in which increased reasoning power does not afford agents a superior strategy; I sketch one below. Reasoning power is not a magic bullet. Nor is it a monolithic talent that necessarily scales in the way that he proposes.
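
      To make that concrete, here’s a toy Python sketch of the one-shot Prisoner’s Dilemma, using the standard textbook payoffs (illustrative only, not anything from Kurzweil’s work or my own research):

      ```python
      # One-shot Prisoner's Dilemma: extra reasoning power does not
      # buy a better outcome; the structure of the game decides.

      PAYOFFS = {  # (my_move, their_move) -> my payoff
          ("cooperate", "cooperate"): 3,
          ("cooperate", "defect"):    0,
          ("defect",    "cooperate"): 5,
          ("defect",    "defect"):    1,
      }

      def best_response(their_move):
          # A perfectly rational agent maximizes its payoff against
          # any fixed opponent move...
          return max(("cooperate", "defect"),
                     key=lambda mine: PAYOFFS[(mine, their_move)])

      assert best_response("cooperate") == "defect"
      assert best_response("defect") == "defect"

      # ...so two flawless reasoners both defect and score 1 each,
      # while two trusting simpletons would have scored 3 each.
      ```

      No increase in intelligence changes that analysis; it only makes the argument for defecting easier to see.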

      As for my not convincing you with my arguments, my apologies. Quite possibly I haven’t laid them out clearly or thoroughly enough, and admittedly there is a dose of speculation in them. While I can model some parts of the human social system, modeling the process by which sentient entities dismantle the framework for their own survival is hard to do. I can make sketches of the process, but I can’t crack all of it without solving some major problems in AI. (Maybe later. And perhaps Kurzweil’s own research will help!)

      What I can say (based on research conducted since I wrote this post) is that a unified human singularity certainly looks like a doomed system. Variant scenarios in which uplifted humanity creates a loosely coupled network of competing intelligence clusters have not yet been conclusively ruled out. So there’s hope, at least.

      If there are specific aspects of Kurzweil’s position that you think I’ve failed to address, please feel free to outline them, and I’ll do my best to address them.

      Once again, thank you for your interest in my blog, and I hope the reading experience was more entertaining than annoying.
      May your day be fully awesome.
