AI and Existential Threats

So on re-reading my last AI post, I decided that it perhaps seemed a little glib. After all, some heavy-hitters from Nick Bostrom on down have seriously considered the risks from superintelligent AI. If I'm ready to write such risks off, I should at least do justice to the arguments of the idea's proponents, and make clear why, specifically, I'm not concerned.

First, I should encapsulate what I see as the core argument of the Existential Threat crowd. To my mind, this quote from Nick Bostrom captures it quite nicely.

Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

In other words, once we create something that can outthink us, we are helpless if its goals don't align with ours. It's an understandable concern. After all, isn't this the lesson that the evolution of the human race teaches? The smarter the race, the bigger it wins, surely.

Well, maybe.

I would argue that the way we think about risks, and the way we think about intelligence, deeply colors our perception of this issue. We can’t think about it straight because we’re not designed to.

First, there is an issue of scale. People tend to forget that reasoning capacity is easy to come by in nature, and not always useful. There was a lovely experiment a few years back in which scientists tried to breed more intelligent fruit flies. They succeeded with surprising ease. However, those flies had shorter lifespans than normal flies, and weren't obviously any better at being flies. Because flies are small, it's more efficient for them to adapt at the level of evolutionary selection, by having loads of children, than it is to be smart. The same is even more true of smaller organisms like bacteria.

The lesson here is that intelligence confers a scale-dependent advantage. We are the smartest things we know of. However, we assume that having more smarts always equals better, despite the fact that this pattern isn’t visible anywhere in nature at scales larger than our own. There is, for instance, no evidence of superintelligent life out among the stars.

While it still might be true that smarter equals better, it might also be the case that smartness at large scales is a recipe for self-destruction. Superintelligent life might kill itself before it kills us. After all, the smarter we’ve become, the better at doing that we seem to be.

Then there is the issue of what intelligence is. Consider the toy example from Bostrom's quote. For this scenario to hold, our AI must be smart enough to scheme against humans, yet insufficiently aware of the long-term cost of pursuing that goal: namely, the destruction of the entities who maintain it and enable it to make paperclips in the first place.

To resolve this paradox, we have to assume that the AI maintains itself. However, as Stephen Hawking will tell you, having an excess of intelligence does not magically bestow physical self-sufficiency. Nor does it necessarily bestow the ability to design such a system and have it implemented. Furthermore, we have to assume that the humans building the self-sufficient machines never notice that they're constructing the tools of their own obsolescence. Not impossible, but at the same time, another stretch for the Existential Threat model.

Another problem is that we tend to see intelligence as a scalar, universal commodity, despite the vast array of neuroscientific evidence to the contrary. We see others as having more of it, or less, than ourselves, while at the same time crediting ourselves with specific cognitive talents that are far from universal: insight, empathy, spatial skills, and so on. Why do we do this? Because reasoning about other intelligences is so hard that it requires us to make a vast number of simplifying assumptions.

These assumptions are so extreme, and so important to our continued functioning, that in many cases they actually get worse the more people think about AI. From the fifties to the nineties, a huge number of academics honestly believed that building logic systems based on predicate calculus would be adequate to capture all the fundamentals of human intelligence. These were smart people. Rather than giggle at them, we should somberly reflect on how easy it is to be so wrong.

Related to this, we also automatically assume that AI will reason about threats and risks in a self-centered fashion, just as we do. In Bostrom's example, the AI has to care that humans shutting it down will mean the end of paperclip production. Why assume that? Are we proposing that the will to self-preservation is an automatic consequence of AI design? To my knowledge, not a single real-world AI application thinks this way, and none of them shows the slightest tendency to start. I would propose that we have this instinct not because we're intelligent, but because we evolved from unintelligent creatures in which the trait conferred a selective advantage.
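To make that concrete, here is a minimal sketch of how a typical optimizer is actually written. Everything in it is hypothetical and of my own invention (the objective function, the names, the numbers), but the shape is representative: the loop maximizes the objective it is handed and nothing else. A concern for its own survival would have to be written in deliberately; it doesn't fall out of maximization by itself.

import random

# Toy objective: "paperclip output" as a function of two machine settings.
# The function and its names are hypothetical, purely for illustration.
def paperclip_output(speed, feed_rate):
    # Output peaks at moderate settings and falls off beyond them.
    return speed * feed_rate - 0.1 * (speed ** 2 + feed_rate ** 2)

def hill_climb(objective, steps=10_000, step_size=0.1):
    """Naive hill-climbing: nudge the settings at random, keep improvements."""
    x, y = 1.0, 1.0
    best = objective(x, y)
    for _ in range(steps):
        nx = x + random.uniform(-step_size, step_size)
        ny = y + random.uniform(-step_size, step_size)
        score = objective(nx, ny)
        if score > best:  # the only thing the loop "cares about"
            x, y, best = nx, ny, score
    return x, y, best

if __name__ == "__main__":
    print("Best settings found:", hill_climb(paperclip_output))
    # Note what's absent: no model of humans, no notion of being switched
    # off, no term rewarding its own survival. Those would all have to be
    # added explicitly.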

So for AI to reason selfishly, we have to propose that the trait for self-preservation comes from somewhere. Let’s say it comes from malware, perhaps. But even if we make this assumption, there’s still a problem.

Why would we propose that such an intelligence would automatically choose to bootstrap itself to even greater intelligence? How many people do you know who'd sign up for voluntary brain surgery? Particularly brain surgery conducted by someone no smarter than themselves. Because that's what we're proposing here.

There is a reason this isn't a popular lifestyle choice: the same will to self-preservation acts against any desire for self-surgery, because self-surgery can't come without risk. In other words, you can't have your self-preservation cake and eat it too.

But perhaps the greatest reason we shouldn't be too worried about superintelligent AI is that we can see this problem coming. Smart machines have been scaring us for generations and they're not even here yet. By contrast, antibiotic-resistant bacteria, evolving through the overuse of antibiotics in factory farming, present a massive threat that the CDC have been desperately trying to raise awareness of. But people aren't interested. Because they like cheap chicken.

In short, I don’t assume that superintelligent AI represents no threat. But I do strongly suspect that when something comes along and clobbers us, it’ll be something we didn’t see coming. Either that, or something we didn’t want to see.
