
AI and Existential Threats

So on re-reading my last AI post, I decided that it perhaps seemed a little glib. After all, some heavy-hitters from Nick Bostrom on down have seriously considered the risks from superintelligent AI. If I’m ready to write such risks off, I should at least do justice to the arguments of the idea’s proponents, and make clear why I’m not specifically concerned.

First, I should encapsulate what I see as the core argument of the Existential Threat crowd. To my mind, this quote from Nick Bostrom captures it quite nicely.

Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

In other words, once we create something that can outthink us, we are helpless if its goals don’t align with ours. It’s an understandable concern. After all, isn’t this the lesson that the evolution of the human race teaches? The smarter the species, the bigger it wins, surely.

Well, maybe.

I would argue that the way we think about risks, and the way we think about intelligence, deeply colors our perception of this issue. We can’t think about it straight because we’re not designed to.

First, there is an issue of scale. People tend to forget that reasoning capacity is easy to come by in nature, and not always useful. There was a lovely experiment a few years back in which scientists tried to breed more intelligent fruit-flies. They succeeded with surprising ease. However, those flies had a shorter lifespan than normal flies, and didn’t obviously have a natural advantage at being a fly. Because flies are small, it’s more efficient for them to be easily adaptable at the level of evolutionary selection and have loads of children than it is to be smart. The same is even more true of smaller organisms like bacteria.

The lesson here is that intelligence confers a scale-dependent advantage. We are the smartest things we know of. However, we assume that having more smarts always equals better, despite the fact that this pattern isn’t visible anywhere in nature at scales larger than our own. There is, for instance, no evidence of superintelligent life out among the stars.

While it still might be true that smarter equals better, it might also be the case that smartness at large scales is a recipe for self-destruction. Superintelligent life might kill itself before it kills us. After all, the smarter we’ve become, the better at doing that we seem to be.

Then there is the issue of what intelligence is. Consider the toy example from Bostrom’s quote. In order for this scenario to hold, our AI must be smart enough to scheme against humans, yet insufficiently aware of the long-term cost of pursuing that goal: namely, the destruction of the entities who maintain it and enable it to make paperclips in the first place.

To resolve this paradox, we have to assume that the AI maintains itself. However, as Stephen Hawking will tell you, having an excess of intelligence does not magically bestow physical self-sufficiency. Neither does it bestow the ability to design such a system and force it to be implemented. Furthermore, we have to assume that the humans building the self-sufficient machines at no time notice that they’re constructing the tools that will bring about their own obsolescence. Not impossible, but at the same time, another stretch for the Existential Threat model.

Another problem is that we tend to see intelligence as a scalar commodity that is universal in nature, despite the vast array of neuroscientific evidence to the contrary. We see others as having more, or less, than ourselves, while at the same time rewarding ourselves for specific cognitive talents that are far from universal: insight, empathy, spatial skills, and so on. Why do we do this? Because reasoning about other intelligences is so hard that it requires us to make a vast number of simplifying assumptions.

These assumptions are so extreme, and so important to our continued functioning, that in many cases they actually get worse the more people think about AI. From the fifties to the nineties, a huge number of academics honestly believed that building logic systems based on predicate calculus would be adequate to capture all the fundamentals of human intelligence. These were smart people. Rather than giggle at them, we should somberly reflect on how easy it is to be so wrong.

Related to this, we also automatically assume that AI will reason about threats and risks in a self-centered fashion, just as we do. In Bostrom’s example, the AI has to care that humans shutting it down will result in the end of paperclip production. Why assume that? Are we proposing that the will to self-preservation is an automatic consequence of AI design? To my knowledge, there is not a single real-world AI application that thinks this way. Furthermore, none of them shows the slightest tendency to start. I would propose that we have this instinct not because we’re intelligent, but because we evolved from unintelligent creatures in which this trait conferred a selective advantage.

So for AI to reason selfishly, we have to propose that the trait for self-preservation comes from somewhere. Let’s say it comes from malware, perhaps. But even if we make this assumption, there’s still a problem.

Why would we propose that such an intelligence would automatically choose to bootstrap itself to even greater intelligence? How many people do you know who’d sign up for voluntary brain surgery? Particularly, brain surgery conducted by someone no smarter than themselves. Because that’s what we’re proposing here.

There is a reason that this isn’t a popular lifestyle choice. And that’s that the same will to self-preservation acts against any desire for self-surgery, because self-surgery can’t come without risk. In other words, you can’t have your self-preservation cake and eat it too.

But perhaps the greatest reason why we shouldn’t be too worried about superintelligent AI is that we can see this problem. Smart machines have been scaring us for generations and they’re not even here yet. By contrast, antibiotic-resistant bacteria evolving through the overuse of antibiotics in factory farming present a massive threat that the CDC have been desperately trying to raise awareness of. But people aren’t interested. Because they like cheap chicken.

In short, I don’t assume that superintelligent AI represents no threat. But I do strongly suspect that when something comes along and clobbers us, it’ll be something we didn’t see coming. Either that, or something we didn’t want to see.

I, for one, welcome our new robot overlords

There has been a lot in the press of late about the threat of human-level AI. Stephen Hawking has gone on record talking about the risks. So has Elon Musk. Now Bill Gates has joined the chorus.

This kind of talk makes me groan. I’d like to propose the converse for a moment, so that everyone can try it on. Maybe AI is the only thing that’s going to save our asses. Why? How do I justify that? First, let’s talk about why AI isn’t the problem.

Concerns about AI generally revolve around two main ideas: first, that it’ll be beyond our control, and second, that it’ll bootstrap its way to unspeakable power, as each generation of AI builds a smarter one to follow it.

Yes, AI we don’t understand would be beyond our control. Just like weather, or traffic, or crop failure, or printers, or any of the other unpredictable things we struggle with every day. What is assumed about AI that supposedly makes it a different scale of threat is intent. But here’s the thing. AI wouldn’t have intent that we didn’t put into it. And intent doesn’t come from nowhere. I have yet to meet a power-hungry phone, despite the fact that we’ve made a lot of them.

Software that can be said to have intent, on the other hand, like malware, can be extremely dangerous. And malware, by some measures, is already something we can’t control. Certainly there is no one in the world who is immune to the risks of cyberattack. This despite the fact that a lot of malware is very simple.

So why do people underestimate the risks of cyberattack and overstate AI? It’s for the same reason that AI research is one of the hardest kinds of research in the world to do properly. The human mind is so hamstrung with assumptions about what intelligence is that we can’t even think about it straight. Our brains come with ten million years of optimized wiring that forces us to make cripplingly incorrect assumptions about topics as trivial as consciousness. When it comes to assessing AI, it’s hard enough to get the damned thing working, let alone make rational statements about what it might want to do once it gets going.

This human flaw shows up dramatically in our reasoning about how AI might bootstrap itself to godhood. How is that honestly supposed to work? Intelligence is about making guesses in an uncertain universe. We screw up all the time. Of all the species on Earth, we are the ones capable of the most spectacular pratfalls.

The things that we’re worst at guessing about are the things that are at least as complicated as we are. And that’s for a really good reason. You can’t fit a model of something that requires n bits for its expression into something that only has n-1 bits. Any AI that tried to bootstrap itself would be far more likely to technologically face-plant than achieve anything. There is a very good reason that life has settled on replicating itself rather than trying to get the jump on the competition via proactive self-editing. That’s because the latter strategy is self-defeatingly stupid.
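To make that counting claim concrete, here is a minimal sketch, assuming (purely for illustration) that a model must keep a distinct internal state for every configuration of the thing it models; the value of n is arbitrary:

```python
# Toy pigeonhole check: a model with n-1 bits of state cannot represent all
# 2**n configurations of an n-bit system without conflating at least two.
n = 8
states_to_model = 2 ** n         # 256 distinct configurations of the system
model_capacity = 2 ** (n - 1)    # only 128 distinct states the model can take
assert model_capacity < states_to_model
print(f"{states_to_model} states vs {model_capacity} model slots: "
      f"at least {states_to_model - model_capacity} must share a slot")
```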

In fact, the more you think about the idea of a really, really big pocket calculator suddenly acquiring both the desire and the ability to ascend into godhood, the dumber it seems. Complexity is not just a matter of scale. You have to be running the right stuff. Which is why there isn’t more life on Jupiter than there is here.

On the other hand, we, as a species, have wiped out a third of our biodiversity since 1970. We have, as I understand it, created a spike in carbon dioxide production unknown at any time in geological history. And we have built an economy predicated on the release of so much carbon that it would be guaranteed to send the planet into a state of runaway greenhouse effect that would render it uninhabitable.

At the same time, we are no closer to ridding the world of hunger, war, poverty, disease, or any of those other things we’ve claimed to not like for an awfully long time. We have, on the other hand, put seven billion people on the planet. And we’re worried about intelligent machines? Really?

It strikes me that putting the reins of the planet into the hands of an intelligence that perhaps has a little more foresight than humanity might be the one thing that keeps us alive for the next five hundred years. Could it get out of control? Why yes. But frankly not any more than things already are.

Why Ray Kurzweil is Wrong

The brilliant and entertaining Ray Kurzweil is in the media a lot of late, talking about his new book, which I’m currently enjoying: How to Create a Mind. As well as promoting his book, he’s also promoting his core idea, a concept with which he’s become synonymous: that by around 2040 the rate of technological change in the world will have become so great that we will have reached a ‘singularity’.

In other words, we will have machine intelligence, and/or augmented human intelligence, and/or something else we haven’t even built yet, such that we will ascend toward a kind of godhood. Problems like finite lifespans won’t bother us any more, because we’ll be uploaded into machines. Issues like the climate or world poverty will become irrelevant, as our ability to engineer solutions will become so powerful that they simply cease to matter.

To support his argument, he provides a wealth of data that shows that human development follows an exponential growth curve. Whether you’re looking at the pace of change, or the amount of computation a single person can do, or the speed with which human beings can communicate, it all pretty much follows the same pattern. And he’s right. We are on an exponential curve and crazy things are going to happen in our lifetime. However, the kind of things he’s predicting are all wrong, and, though I’ve touched on this topic before on this blog, I’m going to have another crack at explaining why.

First though, a little on why you should take Ray’s arguments seriously. Ask most people why they doubt that we’ll ascend to godhood within the next 30 years, and they’ll tell you that it sounds like science fiction: that the change is much too fast. But as Ray points out, people are terrible at anticipating exponential growth curves. If you had told a person in 1975 that by 2012 you’d have access to just about all the knowledge on Earth via a device you could hold in your hand, they’d probably have doubted you. Or take climate change. The world is still full of people who doubt that it’s happening, while at the same time, the rate of change is outstripping all our predictions. When it comes to exponential growth curves, I think Ray has the answer dead right.
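As a back-of-the-envelope illustration of how far exponential growth outruns linear intuition, here is a minimal sketch; the two-year doubling period is an assumption in the spirit of Moore’s law, not a figure from Ray’s book:

```python
# Rough illustration of exponential growth, not a claim about real hardware.
start_year, end_year = 1975, 2012
doubling_period = 2.0            # assume capability doubles every two years

doublings = (end_year - start_year) / doubling_period   # 18.5 doublings
growth = 2 ** doublings                                  # roughly 370,000x
print(f"{doublings:.1f} doublings -> roughly {growth:,.0f}x improvement")
# Linear intuition ("37 years, so maybe a few dozen times better") misses
# the real figure by about four orders of magnitude.
```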

So if Ray’s predictions have so much weight, why should we doubt him?

Because of game theory. Because of the Tragedy of the Commons. And because the mind is a commons in which tragedies can happen, just like everything else.

To explain what I mean, consider this: intelligence doesn’t happen in a vacuum. It happens in brains, and brains have the shape they do for a reason. Specifically, brains are packed with mechanisms that make it really hard for you to quit bad habits. And those mechanisms are there because they are the same parts that make it hard for someone, or something, to take control of your behavior.

Go to the business section of your local bookshop, and you will find the shelves packed with books on negotiation, sales, leadership, etc. All ways to try to get other people to do things. Ask anyone in sales and marketing how hard it can be to gain new customers, and they’ll tell you. It’s bloody hard.

Your brain is packed with checks and balances to make your behavior almost impossible to change simply because, if it weren’t, you’d probably be dead by now. People who are easy to coerce get coerced. That’s why we have brains in the first place. To get people, or things, or animals, to do the things we want by outsmarting them. That’s why a third of the world’s biodiversity has disappeared since 1970. Because we’ve outsmarted all the other species on the planet, to their cost.

We don’t notice this part of our brains because it’s broadly counter-productive to believe that your every attempt at personal change is hamstrung right out of the gate. Similarly, believing that your new business venture is probably doomed because it’s going to be horribly hard to get people to notice what you’re doing doesn’t select for winners. There is a mounting body of evidence to suggest that people are designed to be optimistic, just like they’re designed to be stubborn.

But just because we don’t see that part of our minds doesn’t mean it’s not there. And because intelligence needs stuff to run on, just like a computer program, you take a risk every time you start fiddling around with that stuff. Muck around with the operating system on your laptop and sooner or later something bad happens, even if you squeeze some extra usefulness out of your machine in the short term. When the stuff that people run on becomes an operating system that people can muck with, everyone takes a risk.

Which is not to say that we’re all going to die, or that we’ll all become lobotomized zombie robots in the thrall of a mad professor somewhere. Rather, it’s just that once you reach the point where it’s at least as easy to muck with the stuff that you’re made of as it is to leave it alone, you’re in trouble.

Not necessarily fatal trouble. After all, you can take a reformed heroin addict who’s short-circuited his brain and recovered, put a big pile of heroin in front of him, and likely as not, he won’t take it. He’ll refuse it with pride. But it’ll be work. Once you have found the way to game a system, not gaming that system becomes an effort, and that’s true regardless of how smart you get, because the problem scales with your intelligence.

Let me say that again, to make sure it’s clear. There is no level of intelligence at which it suddenly becomes easy to cooperate or to avoid making mistakes. Sure, the risks of gaming the system become easier to see, but so do the ways of gaming it. And the more sophisticated your technology is, the more ways there are for it to screw up.
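To see why extra brainpower doesn’t dissolve the incentive problem, here’s a bare-bones prisoner’s dilemma sketch; the payoff numbers are the standard textbook ones, used purely as an illustration:

```python
# Toy prisoner's dilemma: payoffs[(my_move, their_move)] -> my score.
# Standard textbook values, purely illustrative.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)])
    print(f"If they {their_move}, my best reply is to {best}")
# Prints "defect" both times. A smarter player computes the same table
# faster; the incentive only changes if the game itself is changed.
```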

There is a proof in computer science that you cannot build a computer program that can tell how all possible computer programs will behave. This proof applies just as well to people, and essentially tells us that no matter how smart you get, you won’t be able to outguess or predict someone, or something, as complicated as you are.
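The classic proof here is the halting problem, and its diagonal argument is short enough to sketch. Everything below is a hypothetical illustration: halts stands in for any claimed oracle, and the constructed program is built to do the opposite of whatever the oracle predicts about it, so no oracle can be right about every program.

```python
# Sketch of the halting-problem diagonalization. `halts` is a hypothetical
# oracle; any concrete implementation is doomed to misjudge `troublemaker`.
def make_troublemaker(halts):
    def troublemaker():
        if halts(troublemaker):
            while True:          # oracle said "it halts", so loop forever
                pass
        return                   # oracle said "it loops", so halt at once
    return troublemaker

def naive_oracle(program):
    return True                  # any fixed guess works for the demonstration

t = make_troublemaker(naive_oracle)
print(f"Oracle predicts halts={naive_oracle(t)}; "
      f"by construction, t does exactly the opposite.")
# We never actually call t() here: with this oracle, it would loop forever.
```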

This rule is stable no matter what path to techno-trouble you want to pick. You can choose genetic modification, brain enhancements, intelligent machines, nanobots, or any other nifty technology you like. Once the technology table is covered with loaded weapons, sooner or later, someone is going to pick one up and have an accident. Whether it’s someone trying to convince everyone to buy their shampoo by making it addictive, or asking people to receive a minor genetic change before they join a company, or trawling for cat photos on the internet with a program that adapts itself, it’s going to happen sooner or later. It’s not one slippery slope. It’s a slippery slope in every direction that you look.

My guess is that we’ll recover from whatever accident we have. However, we’ll then nudge back towards the table of trouble, and have another accident. And then another. And this continues until we basically run out of planet to mess about with. To my mind, this is why we don’t see signs of intelligent life out in space. It’s because nobody gets past this point: the Tinker Point. It’s like an information-theoretic glass ceiling.

So is there anything we can do about this?

Yes. Of course. The best technological bets are space travel and suspended animation. The more people are spread out in time and space, the less likely it is that any one accident will be devastating. We’ll be able to keep playing the same game for a long, long time.

However, the fact that there’s no evidence that anyone else in the universe has summoned up the gumption to pull off this trick isn’t exactly comforting. As a species, we should get our skates on. By Ray’s estimate, we have about thirty years.