
How to Fix the Hugos

I have spent way too much time this last week reading the various back and forth articles about the Hugo award debacle. For those of you who aren’t familiar with this, the short version is as follows.

Two groups, one of right-leaning fans (the Sad Puppies), and another of far-right fans (the Rabid Puppies), both felt that they were being marginalized in the Hugo award ballots. They gamed the voting system, which was entirely unprotected against gaming, and made sure their own candidates dominated this year’s ballot. The left-leaning press then jumped on this act, mischaracterized what had just happened by conflating the two groups and their members, and handed a small victory to the right-leaning fans. The right-leaning press then leapt in after with equally poorly researched accusations of a liberal conspiracy.

Subsequently, everyone and his aunt has jumped into the debate to express an opinion, including George R. R. Martin, who wrote a sequence of eloquent posts which, in my opinion, calmly and clearly exposed the latent reality distortion in the X Puppies’ positions (where X denotes some form of disease or misery).

The X Puppies’ main point appears to be this: “These days, the Hugos seem to be full of dull works steeped in left-leaning politics. Why can’t the Hugos just be about good old space adventures without politics, like they used to be?”

The reply from GRR Martin and others is roughly this: “To my knowledge, they were never about good old space adventures without politics. Please point at a moment in history when this was true.”

The comeback from the X Puppies smacks, to me, of angry avoidance. Here is a quote from Larry Correia:

In your Where’s the Beef post you attempted to dismiss our allegations that there is a political bias in the awards now, by going through the history of the awards and looking at the political diversity of winners from long ago. Nice, but we are talking about a relatively recent trend.

In other words: in the 70’s, 80’s, 90’s and 00’s this award was not about politics-free space adventures. However, what concerns us is the recent trend toward it not being so.

I find this debacle both sad and fascinating. It’s sad, because the Hugos are broken now. It’s unclear if they will ever be quite the same. Something I put value in throughout my childhood has been soiled.

However, it’s also fascinating to me, because the whole thing is an extraordinary example of group psychology in action. I believe that if fandom views this unpleasant experience through the right lens, it can make itself stronger, wiser, and more diverse than ever before.

My source of inspiration in this matter is an excellent post by Django Wexler, focusing not on the awards themselves, but on the attendant voting system, and how it might be repaired to discourage future weaponization.

His post encouraged me to think that instead of mourning the Hugos, perhaps we should accept their current brokenness and start playing with them. And to that end, I have a variety of suggestions, some more serious than others.

Suggestion One: Award a happy, hollow victory

One solution would be for everyone to vote for Vox Day (the leader of the far-right group) and any author who supports him in every single category. Then when they go up to collect the award each time, we laugh. We cheer and whistle. We thank them effusively for rescuing us from a nightmare of inclusiveness and equality. We give them loads of long, uncomfortable, sweaty hugs.

This, to my mind, would be the improviser’s yes-and-based solution. A spoiler can only feel victory so long as it is not pressed gleefully into his hands.

Suggestion Two: Create a new category

Maybe the Hugo award organizers should create a special award category for ‘old timey space adventures with no politics, honest’ to commemorate this event. We might call this the Iron Dream award, or some such thing. Then the right-leaning fans can vote for that award instead and feel like their peculiar historic fantasy is being maintained. If other fans felt the urge to vote for the most blatantly, creakingly right-leaning fiction they could find, one could hardly blame them. A match between the Iron Dream award and Best Novel might serve as an indicator that the voting had been something other than straightforward.

Suggestion Three: Pattern voting

Django Wexler proposed anti-votes to compensate for slate voting. What’s nice about this system is that slate voting is at a disadvantage, rather than an advantage. The issue, as has been subsequently pointed out, is that anti-votes carry a social connotation that is perhaps at odds with the Hugos and likely to lead to more argument.

So instead, I’d propose a system in which each pattern of votes counts once. Identical vote sheets end up constituting a single vote. This means that anyone who wants to force a slate through has to put in work. The more power they want to have, the larger the set of works they vote for has to become.
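For the curious, here is a minimal sketch of how that pattern-counting rule could work. The ballot data and work titles are made up purely for illustration; this is not any official Hugo tallying code.

```python
from collections import Counter

def count_pattern_votes(ballots):
    """Tally nominations so that each distinct voting pattern counts exactly once.

    Each ballot is a tuple of nominated works. Ballots are normalized by
    sorting, so reordering a copied slate does not create a 'new' pattern.
    """
    distinct_patterns = {tuple(sorted(ballot)) for ballot in ballots}
    tally = Counter()
    for pattern in distinct_patterns:
        for work in pattern:
            tally[work] += 1
    return tally

# Hypothetical example: three identical slate ballots collapse into one,
# while two different organic ballots each count separately.
ballots = [
    ("Slate Book 1", "Slate Book 2"),
    ("Slate Book 2", "Slate Book 1"),
    ("Slate Book 1", "Slate Book 2"),
    ("Indie Book", "Slate Book 1"),
    ("Indie Book", "Other Book"),
]
print(count_pattern_votes(ballots))
# -> Slate Book 1: 2, Indie Book: 2, Slate Book 2: 1, Other Book: 1 (order may vary)
```

To push a slate through under this rule, organizers would have to coordinate many genuinely different ballots rather than one copied list, which is exactly the work the suggestion is meant to impose.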

Is such a system gameable? Of course it is. Ken Arrow has made that clear. However, it is positive in its social implications, provides an incentive for people to read broadly, and disincentivizes slates. I invite criticism of this idea, as I’d love to know where the flaws are. I’m sure they’re in there somewhere.

Suggestion Four: A Hugos mission statement

If we’re being straight about this, we can admit that the X Puppies did what they did because of what they perceived as the truth, and what they perceived as injustice. That perception was, to my eye, a skewed one, but it existed for a reason.

That reason is that the people in that group felt demonized. Everyone wants to see themselves as a good guy on the side of truth and justice, so when they started to encounter a social consensus that characterized them as bad guys, they went into amygdala hijack and lashed out.

People take action in the way the X Puppies did when their brains register that some pathway to self-validation has been compromised. The X Puppies then did what people always do under these conditions, which was to construct a goal chain with the shortest discernible path leading to a state where they could continue to self-validate safely.

Their solution was to ensure that the Hugos were unambiguously political, so that they could believe this without interior conflict, and propose that this was why they were not getting awards. So far as the X Puppies’ brains are concerned, job done. All else is just the post-justification that conscious reasoning affords. Now that we are all angry, nobody has to feel unworthy. We can talk about libel and conspiracies and groupthink instead.

There seem to me to be two takeaways from this. First, it’s clear that some left-leaning fans are guilty of cheap, self-serving reasoning, just as the X Puppies are. Some of the flawed journalism that occurred during this event provides clear examples of why we should always hesitate in judgement, even if only to be accurate in our critique.

To my mind, robust liberal thought is synonymous with scientific thought. We should always consider whether there’s some position we haven’t considered, just as we should always wonder if our understanding of diversity, or privilege, or justice requires modification. This is true even when considering those whose positions we find profoundly distasteful. The alternative is defensive knee-jerk reasoning, which is either bigotry, or bigotry in disguise, no matter what political credentials are trumpeted. Outrage, regardless of its target, is the enemy of reason.

Thus, perhaps we should use this as an opportunity to think more deeply than ever about diversity and how we can communicate its benefits more effectively. If the X Puppies hadn’t found themselves auto-included in groups labeled as ‘bad’ during conversations in the community, they might not have lost the plot. Even if we consider the positions of the X Puppies to be socially untenable, how can we consider the current outcome a win? Could those people who are now defensive and angry have been more persuasively and proactively brought around?

More simply, we might also consider attaching a mission statement to the Hugos that makes it very clear what they stand for. Then those who can’t get behind the mission statement can feel free to disregard the awards at their leisure. The clearer we are about what we stand for, the easier it will be for those who don’t want to play to turn their noses up at us and stalk off. Good luck to them.

So, given the options, which solution do you prefer? Do you have an alternative proposal? If so, I’d be delighted to hear about it.

(My first book, Roboteer, comes out from Gollancz in July.) 


How we decide

Since my recent post on Twitter and Facebook, I’ve been thinking of airing a piece of science I was tinkering with at Princeton, and never got round to putting out into the world. That science asks the following question:

What makes us believe that something is true or right when we lack direct evidence, or experience, to support it? When do we decide to add our voice to a social chorus to demand change?

This question seems particularly pertinent to me in our modern age when so much of our information arrives through electronic channels that are mediated by organizations and mechanisms that we cannot control.

For many social animals, the most important factor in making a decision seems to be what the other members of their social group are doing. Similar patterns show up in experiments on everything from wildebeest, to fish, to honey bees.

Similarly, we humans often tend to believe things if enough of our peers seem to do likewise. People have been examining this tendency in us ever since the experiments of the amazing Stanley Milgram.

It’s easy to see this human trait as a little bit scary—the fact that we often take so much on faith—but, of course, a lot of the time we don’t have a choice. We can’t independently check every single fact that comes our way, or consider every side of a story. And when many of the people who we care about are already convinced, aligning with them is easy.

Fortunately, a lot of animal behavior research suggests that going with the flow is often actually a really good way for social groups to make decisions. Iain Couzin’s lab at Princeton has done some excellent work on this. They’ve shown that increasing the number of members of a social group who aren’t opinionated, and who can be swayed by the consensus through sheer force of numbers, often increases the quality of collective decision making. Consequently, there are many people who think we should be taking a closer look at these animal patterns to improve our own systems of democracy.

But how effective is this kind of group reasoning for humans? How much weight should we be giving to the loud and numerous voices that penetrate our lives? And how often can we expect to get things dangerously wrong?

Well, the good news is that, because we’re humans rather than bees, we can do some easy science and find out. And I’m going to show you how. To start with, though, we’ll need to know how animal behavior scientists model social decision-making.

In the simple models that are often used, there are two types of agent (where an agent is like a digital sketch of a person, fish or bee). These types are what you might call decided agents, and undecided agents. Decided agents are hard to convince. Undecided agents are more flexible.

Decided agents are also usually divided into camps with differing opinions. X many of them prefer option A. Y many of them prefer option B. If the number of decided agents who like B is a lot larger than the number who like A, we assume that B is the better option. Then we look to see how easy it is for the agents, as a group, to settle on B over A.

To convince an agent, you let it meet a few others in the group to exchange opinions. If an agent encounters several peers in succession who disagree with it (let’s say three), it considers changing its mind. And the probability of changing is, of course, higher for an undecided agent than for a decided one.

Then we put a bunch of agents in a digital room and have them mill about and chat at random. Very quickly we can make a system that emulates the kind of collective decision-making  that often occurs in nature. And, as Iain’s team found, the more undecided agents you have, the better the decision-making gets.
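Here is a minimal sketch of that kind of model in Python. The switching probabilities, the number of chats, and the stopping rule are my own illustrative choices, not the exact parameters from the Couzin lab’s work or from my old Princeton code.

```python
import random

def run_trial(n_undecided, p_switch_decided=0.05, p_switch_undecided=0.5,
              disagreements_needed=3, chats=20000):
    """One run of a well-mixed opinion model.

    Ten decided agents (4 prefer 'A', 6 prefer 'B') plus n_undecided agents
    with no starting preference. Agents chat with random partners; after
    several consecutive disagreements an agent may adopt the opinion it
    just heard, with undecided agents switching more readily.
    Returns the opinion held by the majority at the end of the run.
    """
    opinions = ['A'] * 4 + ['B'] * 6 + [None] * n_undecided
    decided = [True] * 10 + [False] * n_undecided
    disagreement_streak = [0] * len(opinions)

    for _ in range(chats):
        listener, speaker = random.sample(range(len(opinions)), 2)
        heard = opinions[speaker]
        if heard is None or heard == opinions[listener]:
            disagreement_streak[listener] = 0
            continue
        disagreement_streak[listener] += 1
        if disagreement_streak[listener] >= disagreements_needed:
            p = p_switch_decided if decided[listener] else p_switch_undecided
            if random.random() < p:
                opinions[listener] = heard
                disagreement_streak[listener] = 0

    return max(('A', 'B'), key=opinions.count)

# Fraction of trials in which the majority-preferred option B wins:
trials = 50
print(sum(run_trial(n_undecided=100) == 'B' for _ in range(trials)) / trials)
```

Sweeping n_undecided upward and plotting that win fraction against the total population size gives curves of the kind shown below.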

[Plot: decision quality versus total population size, well-mixed population]

In this plot, we’re looking at how the quality of the decision-making scales with the size of the group. We always start with ten decided individuals, four in favor of option A and six in favor of B, and we increase the size of the undecided community, from zero up to 2550.

A score of one on the Y-axis shows an ideal democracy.  A score of zero shows the smaller group winning every time. The X-axis shows the total size of the population.

As you can see, as the group gets bigger, the quality of the decisions goes up. This is because the minority group of decided individuals take a lot of convincing. If they duke it out directly with the decided majority with nobody else around, the results are likely to be a bit random.

But think about what happens when both decided parties talk to a bunch of random strangers first. A small difference in the sizes of the decided groups makes a huge difference in the number of random people they can reach. That’s because each one of those random people also talks to their friends, and the effects are cumulative.

That means that, before too long, the minority group is much more likely to encounter a huge number of people already in agreement. Hence, they eventually change their tune. Having undecided people makes that chorus of agreement bigger.

This is awesome for bees and fish, but here’s the problem: human beings don’t mill and chat with strangers at random. We inhabit networks of family and friends. In the modern world, the size, and pervasiveness, of those networks is greater than it ever has been. So shouldn’t we look at what happens to the same experiment if we put it on a network and only let agents talk to their friends?
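In code, the only change the model needs is in how chat partners are picked: instead of sampling from the whole population, an agent talks only to its neighbours on a graph. A sketch, assuming the networkx library, with graph sizes and parameters chosen purely for illustration:

```python
import random
import networkx as nx

def random_neighbour(graph, agent):
    """Pick a chat partner from an agent's friends rather than the whole population."""
    neighbours = list(graph.neighbors(agent))
    return random.choice(neighbours) if neighbours else None

n_agents = 2000

# A 'basically random' network (Erdos-Renyi) versus a scale-free one (Barabasi-Albert).
random_net = nx.erdos_renyi_graph(n_agents, p=0.005)
scale_free_net = nx.barabasi_albert_graph(n_agents, m=3)
```

In the well-mixed sketch above, the speaker would then be drawn with random_neighbour(graph, listener) instead of from the full agent list, with the agents sitting on the nodes of whichever graph we choose.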

Let’s do that. First, let’s use an arbitrary network. One that’s basically random. The result looks like this.

[Plot: decision quality versus total population size, random network]

As you can see, we get the same result, nearly, but the group has to be a little bigger before we get the same decision-making quality. That doesn’t seem so bad.

But unfortunately, human social networks aren’t random. Modern electronic social networks tend to be what’s called scale-free networks. What happens if we build one of those?

[Plot: decision quality versus total population size, scale-free network]

That’s not so hot. Once the network goes past a certain size, the quality of the decision-making actually seems to degrade. Bummer. For our simple simulation, at least, adding voices doesn’t add accuracy.

But still, the degradation doesn’t seem too bad. A score of 0.95 is pretty decent. Maybe we shouldn’t worry. Except of course, that in human networks, not every voice has the same power. Some people can buy television channels while others can only blog. And many people lack the resources or the freedom to do even that.

So what happens if we give the minority opinion-holders the most social power? In essence, if we make them the hubs of our networks and turn them into an elite with their own agenda? If you do that, the result looks like this.

[Plot: decision quality versus total population size, scale-free network with the minority at the hubs]

As you can see, as the system scales, the elite wins ever more frequently. Past a certain network size, they’re winning more often than not. They own the consensus reality, even though most of the conversations that are happening don’t even involve them.
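One way the biased setup might be constructed is to hand the minority opinion to the best-connected nodes and scatter the decided majority among everyone else. A sketch, again assuming networkx; how the majority is placed is my own choice for illustration:

```python
import random
import networkx as nx

def assign_opinions_with_elite_hubs(graph, n_minority=4, n_majority=6):
    """Put the decided minority ('A') on the highest-degree nodes (the elite),
    the decided majority ('B') on randomly chosen other nodes, and leave the
    rest of the population undecided."""
    hubs = sorted(graph.nodes, key=graph.degree, reverse=True)[:n_minority]
    others = [node for node in graph.nodes if node not in hubs]
    majority = random.sample(others, n_majority)

    opinions = {node: None for node in graph.nodes}
    for node in hubs:
        opinions[node] = 'A'
    for node in majority:
        opinions[node] = 'B'
    return opinions

hub_biased_net = nx.barabasi_albert_graph(2000, m=3)
opinions = assign_opinions_with_elite_hubs(hub_biased_net)
```

Running the same chat-and-switch dynamics on top of this assignment is what produces the drift toward the elite’s preferred option as the network grows.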

My broad conclusion, then, is that we should be extremely careful about accepting what is presented to us as the truth via electronic media, even when it seems to come from our peers. The more powerful communication technology gets, the easier it is for powerful individuals to exploit it. A large, networked society is trivially easy to manipulate.

Is there something we can do about this? I think so. Remember that the randomly mixed population always does better. So maybe we should be paying a little less attention to the news and Facebook, and having more conversations with people we encounter in our day-to-day lives.

In the networked society we inhabit, we’re conditioned not to do that. Often it feels uncomfortable. However, maybe we need to be rethinking that habit if we want to retain our social voice. The more we reach out to people whose opinions we don’t know yet, and allow ourselves to be influenced by them, the less power the media has, and the stronger we collectively become.

 

On Spock

Leonard Nimoy died today. I found myself surprisingly affected by that. And as I watched the tide of sorrow pour over the internet, it occurred to me to ask: why, specifically, was I so touched? In effect, I channelled my inner Spock to process my feelings, and in doing so, partially answered my own question.

Leonard Nimoy had many talents, but for many of my generation, the fact that he was Spock effectively eclipsed the rest of them. Does that belittle the rest of his achievements? To my mind, not in the least, because what Spock symbolized was inspiring, and generation defining.

I grew up as a nerdy, troubled kid in a school that didn’t have the first clue of what to do with me. They couldn’t figure out whether to shove me into the top math stream or detention, so they did both. I was singled out for bullying by both other pupils and the school staff, and had test scores that oscillated wildly from the top of the class to the very bottom, depending on how miserable I was.

In that environment, it was trivially easy to see myself as an alien. I cherished my ability to think rationally, and came to also cherish my differentness. There weren’t many characters in popular media for a kid like that to empathize with. Spock, though, nailed it.

And while my school experience was perhaps a little extreme, I suspect that a very similar process was happening for isolated, nerdy kids all across the western world.

And here’s the root of why: Spock was strong because he was rational. Sure, he was physically powerful and had a light form of telepathy and all that, but what made him terrific was his utter calm under incredibly tough conditions. Furthermore, as Leonard Nimoy’s acting made clear, he was still quite capable of emotionally engaging, of loving, and having friends, even if he seldom admitted it to himself. Spock didn’t just give us someone to identify with. He encouraged us to inhabit that rationality, and let it define us.

Leonard Nimoy’s character kept it together when everyone around him wasn’t thinking straight, and made it look cool. In doing so, he helped to inspire a generation of computer scientists, entrepreneurs, and innovators who have changed the world, and the status of nerds within it.

The kids growing up now don’t have a Spock. Sure, they have plenty of other nerd-icons to draw from, and maybe they’re more appropriate for the current age. But for me, none of them really speak to the life-affirming power of level-headed thought in the way that Spock did.

Looking back on it, I see that Leonard Nimoy, Gene Roddenberry, and the rest of the team who created Spock’s character, helped inform the life philosophy that has guided me for years, and that’s this.

All emotions are valid, from schadenfreude to love. They’re all part of us, and should be respected, even when we’re tempted to be ashamed of them. But emotions should have a little paddock to run around in. The point at which emotions start causing problems and eating the flowers is when you let them get out of the paddock. So long as you look after your paddock, you can transcend your limitations while remaining fully human.

And so, today, I confess that I find the death of Leonard Nimoy incredibly sad, but its significance also, somehow, fascinating.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Social media and creeping horror

One of the things my friends have advised me to do as part of building my presence as a new author is take social media seriously. Particularly Twitter. I’ve been doing that, and for the most part enjoying it, but I’m also increasingly convinced that the medium of electronic social media is terrifying, both in its power, and its implications.

By this point, many of us are familiar with the risks of not being careful around social media. The New York Times recently published a brilliant article on it.

It’s easy to look at cases such as those the article describes and to think, “well, that was a dumb thing to do,” of the various individuals singled out for mob punishment. But I’d propose that making this kind of mistake is far easier than one might think.

A few years ago, I accidentally posted news of the impending birth of my son on Facebook at a time when my wife wasn’t yet ready to talk about it. It happened because I confused adding content to my wall with replying to a direct message. That confusion came about because the interface had been changed. I wondered subsequently, after learning more about Facebook, whether the change had been made on purpose, to solicit exactly that kind of public sharing of information.

In the end, this wasn’t a big deal. Everyone was very nice about it, including my wife. But it reminded me that any time we put information into the internet, we basically take the world on trust to use that information kindly.

However, the fact that we can’t always trust the world isn’t what’s freaking me out. What freaks me out is why.

The root of my concern can perhaps be summarized by the following excellent tweet by Sarah Pinborough.

*Looks through Twitter feed desperate for something funny.. humour feeds the soul. Nope, just people shouting their worthy into the void…*

I think the impressive Ms. Pinborough intended this remark in a rather casual way, but to my mind, it points up something crucial. And this is where it gets sciencey.

Human beings exercise social behavior when it fits with their validation framework. We all have some template identity for ourselves, stored in our brains as a set of patterns which we spend our days trying to match. Each one of those patterns informs some facet of who we are. And matching those patterns with action is mediated by exactly the same dopamine neuron system that guides us towards beer and chocolate cake.

What this means is that when we encounter a way to self-validate on some notion of social worth with minimal effort, we generally take it. Just like we eat that slice of cake left for us on the table.  And social media has turned that validation into a single-click process. In other words, without worrying too much about it, we shout our worthy into the void. 

This is scary because a one-click process doesn’t leave much room for second-guessing or self-reflection. Furthermore, the effects of clicking are often immediate. This reinforces the pattern, making it ever more likely that we’ll do the same thing again. And that’s not good for us. We get used to social validation being effortless, satisfying, and requiring little or no thought.

We may firmly assure ourselves that all our retweeting, liking, and pithy outrage is born out of careful judgement and a strong moral center, but neurological reality is against us. The human mind loves short-cuts. Even if we start with the best rational intentions, our own mental reward mechanisms inevitably betray us. Sooner or later, we get lazy.

Twenty years ago, did people spend so much of their effort shouting out repeated worthy slogans at each other? Were they as fast to outrage or shame those who’d slipped up? How about ten years ago? I’d argue that we have turned some kind of corner in terms of the aggressiveness of our social norming. And we’ve done so, not because we are now suddenly somehow more righteous. We’ve done it because it’s cheap. Somebody turned self-righteousness into a drug for us, and we’re snorting it.

But unlike lines of cocaine, this kind of social validation does not come with social criticism attached. Instead, it usually comes from spontaneous support from everyone else who’s taking part. This kind of drug comes with a vast, unstoppable enabler network built in. This makes electronic outrage into a kind of social ramjet, accelerating under its own power. And as with all such self-reinforcing systems, it is likely to continue feeding on itself until something breaks horribly.

Furthermore, dissent to this process produces an attendant reflexive response, just as hard and as sharp as our initial social judgements. Those who contest the social norming are likely to be punished too, because they threaten an established channel of validation. The off-switch on our ramjet has been electrified. Who dares touch it?

The social media companies see this to some extent, I believe. But they don’t want to step in because they’d lose money. The more Twitter and Facebook build themselves into the fabric of our process of moral self-reward, the more dependent on them we are, and the less likely we are to spend a day with those apps turned off.

Is there a solution to this kind of creeping self-manifested social malaise? Yes. Of course. The answer is to keep social media for humor, and for news that needs to travel fast. We should never shout our worthiness. We should resist the commoditization of our morality at all costs.

Instead, we should compose thoughts in a longer format for digestion and dialog. Maybe that’s slower and harder to read, but that’s the point. Human social and moral judgements deserve better than the status of viruses. When it comes to ostracizing others, or voting, or considering social issues, taking the time to think makes the difference between civilization and planet-wide regret.

The irony here is that many of those people clicking are those most keen to rid the world of bigotry. They hunger for a better, kinder planet. Yet by engaging in reflexive norming, they cannot help but shut down the very processes that make liberal thinking possible. The people whose voices the world arguably needs most are being quietly trained to shout out sound-bites in return for digital treats. We leap to outrage, ignoring the fact that the same kind of instant indignation can be used to support everything from religious totalitarianism to the mistreatment of any kind of minority group you care to name. A world that judges with a single click is very close in spirit to one that burns witches.

In short, I propose: post cats, post jokes, post articles. Social justice, when fairly administered, is far more about the justice than about the social.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Barricade, and opening up

I have a book coming out this year and the anticipation is affecting me. Perhaps understandably, I have become fascinated by the process that authors go through when their books hit print. Countless writers have gone through it. Some to harsh reviews, some to raves, and some, of course, to dreadful indifference. What must that be like, to have something you’ve spent years on suddenly be held up for casual judgement? I have no idea, but I’ll probably find out soon.

It’s probably natural that, in trying to second-guess this slightly life-changing event, I’ve looked to my peers. Specifically, I’ve looked to those other new authors that my publisher is carrying—those people a little further down the same path as myself.

In stalking them on the web, I hit my first key realization. As a writer, I should have started giving reviews, years ago, to every writer whose work struck me in one way or another. And that’s because without such feedback, a writer is alone in the dark. A review by another writer, even an unfavorable one, is a mark of respect.

As it is, I have a tendency to lurk online, finding what I need but not usually participating in the business of commenting. However, this process of looking at the nearly-me’s out there has brought home that the web can and should be a place of dialog. It’s stronger and better when individual opinions are contributed. If I expect it from others, I should contribute myself. The reviewing habit, then, is one which I am going to try to take up immediately.

Which brings me to the first Gollancz title I consumed during my peerwise investigation: Barricade, by Jon Wallace. And to my first online book review. Before I tell you what I thought of it, I should first give you a sense of what it’s about. Rather than cutting a fresh description, I will pull from Amazon.

Kenstibec was genetically engineered to build a new world, but the apocalypse forced a career change. These days he drives a taxi instead. A fast-paced, droll and disturbing novel, BARRICADE is a savage road trip across the dystopian landscape of post-apocalypse Britain; narrated by the cold-blooded yet magnetic antihero, Kenstibec. Kenstibec is a member of the ‘Ficial’ race, a breed of merciless super-humans. Their war on humanity has left Britain a wasteland, where Ficials hide in barricaded cities, besieged by tribes of human survivors. Originally optimised for construction, Kenstibec earns his keep as a taxi driver, running any Ficial who will pay from one surrounded city to another. The trips are always eventful, but this will be his toughest yet. His fare is a narcissistic journalist who’s touchy about her luggage. His human guide is constantly plotting to kill him. And that’s just the start of his troubles. On his journey he encounters ten-foot killer rats, a mutant king with a TV fixation, a drug-crazed army, and even the creator of the Ficial race. He also finds time to uncover a terrible plot to destroy his species for good – and humanity too.

My two cents:

I enjoyed this book. It had shades of Blade Runner and Mad Max, with a heavy dose of English cultural claustrophobia thrown in. I liked the way that the viewpoint character’s flattened affect lifted gently over the course of the novel. I liked the pacing. I liked the simple, self-contained quality of the dystopian world that’s presented. While the content is often bleak, sometimes to the point of ruling out hope, there is always humor there. And most of all, I appreciated the underlying message of the book. In essence, Barricade proposes (IMO) that we are saved in the end not by clever ideas or grand political visions, but by hope, humanity, and persistent, restless experimentation in the face of adversity. I sympathize with that outlook.

Is the book perfect? Of course not. No book ever is. The flattened affect, and the blunt, violent bleakness of the novel, both come with a cost in reader engagement that will no doubt vary from person to person. I was not bothered, but I can imagine others who would be.

Furthermore, the human characters, bar one, are ironically the least fully drawn (perhaps deliberately). But all creative choices come with side-effects. Barricade held my attention to the end, entertained me, and encouraged me to think.

AI and Existential Threats

So on re-reading my last AI post, I decided that it perhaps seemed a little glib. After all, some heavy-hitters from Nick Bostrom on down have seriously considered the risks from superintelligent AI. If I’m ready to write such risks off, I should at least do justice to the arguments of the idea’s proponents, and make clear why I’m not specifically concerned.

First, I should encapsulate what I see as the core argument of the Existential Threat crowd. To my mind, this quote from Nick Bostrom captures it quite nicely.

Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

In other words, once we create something that can outthink us, we are helpless if its goals don’t align with ours. It’s an understandable concern. After all, isn’t this the lesson that the evolution of the human race proved? The smarter the race, the bigger it wins, surely.

Well, maybe.

I would argue that the way we think about risks, and the way we think about intelligence, deeply colors our perception of this issue. We can’t think about it straight because we’re not designed to.

First, there is an issue of scale. People tend to forget that reasoning capacity is easy to come by in nature, and not always useful. There was a lovely experiment a few years back in which scientists tried to breed more intelligent fruit-flies. They succeeded with surprising ease. However, those flies had a shorter lifespan than normal flies, and didn’t obviously have a natural advantage at being a fly. Because flies are small, it’s more efficient for them to be easily adaptable at the level of evolutionary selection and have loads of children than it is to be smart. The same is even more true of smaller organisms like bacteria.

The lesson here is that intelligence confers a scale-dependent advantage. We are the smartest things we know of. However, we assume that having more smarts always equals better, despite the fact that this pattern isn’t visible anywhere in nature at scales larger than our own. There is, for instance, no evidence of superintelligent life out among the stars.

While it still might be true that smarter equals better, it might also be the case that smartness at large scales is a recipe for self-destruction. Superintelligent life might kill itself before it kills us. After all, the smarter we’ve become, the better at doing that we seem to be.

Then there is the issue of what intelligence is. Consider the toy example from Bostrom’s quote. In order for this scenario to hold, our AI must be smart enough to scheme against humans, and at the same time insufficiently aware of the long-term cost of pursuing that goal: namely the destruction of the entities who maintain it and enable it to make paperclips in the first place.

To resolve this paradox, we have to assume that the AI maintains itself. However, as Stephen Hawking will tell you, having an excess of intelligence does not magically bestow physical self-sufficiency. Neither does it necessarily bestow the ability to design such a system and force it to be implemented. Furthermore, we have to assume that the humans building the self-sufficient machines at no time notice that they’re constructing the tools that will bring about their own obsolescence. Not impossible, but at the same time, another stretch for the Existential Threat model.

Another problem is that we tend to see intelligence as a scalar commodity that is universal in nature, despite the vast array of neuroscientific evidence to the contrary. We see others as having more, or less, than ourselves, while at the same time rewarding ourselves for specific cognitive talents that are far from universal: insight, empathy, spatial skills, etc. Why do we do this? Because reasoning about other intelligences is so hard that it requires that we make a vast number of simplifying assumptions.

These assumptions are so extreme, and so important to our continued functioning, that in many cases they actually get worse the more people think about AI. From the fifties to the nineties, a huge number of academics honestly believed that building logic systems based on predicate calculus would be adequate to capture all the fundamentals of human intelligence. These were smart people. Rather than giggle at them, we should somberly reflect on how easy it is to be so wrong.

Related to this, we also assume automatically that AI will reason about threats and risks in a self-centered fashion, just as we do. In Bostrom’s example, the AI has to care that humans shutting it down will result in the end of paperclip production. Why assume that? Are we proposing that the will to self-preservation is an automatic consequence of AI design? To my knowledge, there is not a single real-world AI application that thinks this way. Furthermore, none of them show the slightest tendency to start. I would propose that we have this instinct not because we’re intelligent, but because we’re evolved from unintelligent creatures that demonstrated this trait because it conferred selective advantage.

So for AI to reason selfishly, we have to propose that the trait for self-preservation comes from somewhere. Let’s say it comes from malware, perhaps. But even if we make this assumption, there’s still a problem.

Why would we propose that such an intelligence would automatically choose to bootstrap itself to even greater intelligence? How many people do you know who’d sign up for voluntary brain surgery? Particularly, brain surgery conducted by someone no smarter than themselves. Because that’s what we’re proposing here.

There is a reason that this isn’t a popular lifestyle choice. And that’s that the same will to self-preservation acts against any desire for self-surgery, because self-surgery can’t come without risk. In other words, you can’t have your self-preservation cake and eat it too.

But perhaps the greatest reason why we shouldn’t be too worried about superintelligent AI is because we can see this problem. Smart machines have been scaring us for generations and they’re not even here yet. By contrast, antibiotic-resistant bacteria evolving through the abuse of factory farming practices present a massive threat that the CDC have been desperately trying to raise awareness of. But people aren’t interested. Because they like cheap chicken.

In short, I don’t assume that superintelligent AI represents no threat. But I do strongly suspect that when something comes along and clobbers us, it’ll be something we didn’t see coming. Either that, or something we didn’t want to see.

I, for one, welcome our new robot overlords

There has been a lot in the press of late talking about the threat of human-level AI. Stephen Hawking has gone on record talking about the risks. So has Elon Musk. Now Bill Gates has joined the chorus.

This kind of talk makes me groan. I’d like to propose the converse for a moment, so that everyone can try it on. Maybe AI is the only thing that’s going to save our asses. Why? How do I justify that? First, let’s talk about why AI isn’t the problem.

Concerns about AI generally revolve around two main ideas. First, that it’ll be beyond our control, and secondly, that it’ll bootstrap its way to unspeakable power, as each generation of AI builds a smarter one to follow it.

Yes, AI we don’t understand would be beyond our control. Just like weather, or traffic, or crop failure, or printers, or any of the other unpredictable things we struggle with every day. What is assumed about AI that supposedly makes it a different scale of threat is intent. But here’s the thing. AI wouldn’t have intent that we didn’t put into it. And intent doesn’t come from nowhere. I have yet to meet a power-hungry phone, despite the fact that we’ve made a lot of them.

Software that can be said to have intent, on the other hand, like malware, can be extremely dangerous. And malware, by some measures, is already something we can’t control. Certainly there is no one in the world who is immune to the risks of cyberattack. This despite the fact that a lot of malware is very simple.

So why do people underestimate the risks of cyberattack and overstate AI? It’s for the same reason that AI research is one of the hardest kinds of research in the world to do properly. The human mind is so hamstrung with assumptions about what intelligence is that we can’t even think about it straight. Our brains come with ten million years of optimized wiring that forces us to make cripplingly incorrect assumptions about topics as trivial as consciousness. When it comes to assessing AI, it’s hard enough to get the damned thing working, let alone make rational statements about what it might want to do once it gets going.

This human flaw shows up dramatically in our reasoning about how AI might bootstrap itself to godhood. How is that honestly supposed to work? Intelligence is about making guesses in an uncertain universe. We screw up all the time. Of all the species on Earth, we are the ones capable of the most spectacular pratfalls.

The things that we’re worst at guessing about are the things that are at least as complicated as we are. And that’s for a really good reason. You can’t fit a model of something that requires n bits for its expression into something that only has n-1 bits. Any AI that tried to bootstrap itself would be far more likely to technologically face-plant than achieve anything. There is a very good reason that life has settled on replicating itself rather than trying to get the jump on the competition via proactive self-editing. That’s because the latter strategy is self-defeatingly stupid.

In fact, the more you think about the idea of a really, really big pocket calculator suddenly acquiring both the desire and the ability to ascend into godhood, the dumber it seems. Complexity is not just a matter of scale. You have to be running the right stuff. Which is why there isn’t more life on Jupiter than there is here.

On the other hand, we, as a species, have wiped out a third of our biodiversity since nineteen seventy. We have, as I understand it, created a spike in carbon dioxide production unknown at any time in geological history. And we have built an economy predicated on the release of so much carbon that it would be guaranteed to send the planet into a state of runaway greenhouse effect, rendering it uninhabitable.

At the same time, we are no closer to ridding the world of hunger, war, poverty, disease, or any of those other things we’ve claimed to not like for an awfully long time. We have, on the other hand, put seven billion people on the planet. And we’re worried about intelligent machines? Really?

It strikes me that putting the reins of the planet into the hands of an intelligence that perhaps has a little more foresight than humanity might be the one thing that keeps us alive for the next five hundred years. Could it get out of control? Why yes. But frankly not any more than things already are.