All posts by alexlamb

Let’s Play God

Let’s start today by answering a few nice chewy questions that some people have spent far too long worrying about.

  • Q: Did God create life? A: No.
  • Q: Is life a miracle? A: No.
  • Q: Is the creation of life on Earth a mystery? A: No.
  • Q: Was the creation of life an unlikely event? A: No.
  • Q: Can I have a go, preferably on my tea-break? A: Yes.

We can be utterly confident of these answers. Why? Because life is trivially easy to make and I’m going to show you how to do it. Then, if you want to play God and initiate Genesis on your laptop, all you’ll need to do is cut some code and hit run. As many times as you like, with as many variations as you like. You can spend your afternoon having Yahweh-happy-fun-times and see how far you get at reproducing Eden.

During my brief time at Princeton, I built a few simulations whose results really excited me, and this was one of them. It demonstrates that, far from being mysterious or difficult to model, life can show up anywhere. All you need is a system that supports a suitably dense set of copying operations.

Before I explain what that means, I should first explain what I mean by creating life. It’s a pretty charged phrase with a lot of connotations. And what I’m not going to do is show you how to build Frankenstein’s monster in your living room. So for those who want to quibble over the implications of this result, this is where we get quibbly.

What I mean by life is a self-organizing, self-reproducing system that’s capable of adaptation. And what I mean by create is that this life will assemble itself from raw ingredients in its environment. And that those raw ingredients are in isolation not alive. I’m not talking about biochemistry here, and I’m not going to demonstrate any metabolic processes. This Genesis event will be entirely digital in nature, and very, very simple.

Some of you may wonder why, in that case, I imagine there’s anything new or special in this post. People have been messing around with artificial life since the early 1990s. The Tierra system has been used as the basis for numerous papers on artificial abiogenesis.

The answer is that I’ve never seen an artificial life system that boils down the requirements for life quite so much, or organizes so fast. The simulation I’m going to show you allows you to watch evolution, of a sort, in real time. What it proves is that, under the right conditions, life is an unavoidable consequence of thermodynamics. You can’t stop it from showing up. It’s more a matter of falling down stairs than inspired creation.

So how do you create life-lite? Here’s how (a minimal code sketch follows the list):

  1. Build a big grid of cells.
  2. Fill each cell with a randomly generated copying instruction. Each instruction should take the form: copy the instruction at position X relative to me to the cell at position Y. For instance, a cell might say, “Copy from 3 north and 2 left to 1 south and 4 right”.
  3. Pick a cell at random. Execute the instruction there, so long as it does not result in an instruction copying a copy of itself. If you pick a cell at the edge, copy from the opposite side of the grid, just as you might wrap a computer game screen.
  4. Repeat step 3 until a single species has devoured all others.
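To make that concrete, here’s a minimal sketch of the recipe in Python. Be warned that this is my own reconstruction, not the original Princeton code: the grid size, the offset range, the step count, and my reading of the no-self-copy rule are all assumptions.

```python
import random

# A minimal sketch of the recipe above. Grid size, offset range, and step
# count are illustrative choices; the post doesn't specify them. Each cell
# holds one instruction: "copy the instruction at relative offset
# (sdx, sdy) to the cell at relative offset (tdx, tdy)".

SIZE = 64          # step 1: a big grid of cells, wrapped like a torus
MAX_OFFSET = 4     # how far away an instruction may reach

def random_instruction():
    offset = lambda: (random.randint(-MAX_OFFSET, MAX_OFFSET),
                      random.randint(-MAX_OFFSET, MAX_OFFSET))
    return (offset(), offset())   # (source offset, target offset)

# Step 2: fill each cell with a randomly generated copying instruction.
grid = [[random_instruction() for _ in range(SIZE)] for _ in range(SIZE)]

def step():
    # Step 3: pick a random cell and execute the instruction it contains.
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    (sdx, sdy), (tdx, tdy) = grid[y][x]
    sx, sy = (x + sdx) % SIZE, (y + sdy) % SIZE   # wrap at the edges
    tx, ty = (x + tdx) % SIZE, (y + tdy) % SIZE
    # One reading of the no-self-copy rule: skip the copy if the
    # instruction being copied is identical to the one executing it.
    if grid[sy][sx] != grid[y][x]:
        grid[ty][tx] = grid[sy][sx]

# Step 4: repeat until a single pattern has devoured all the others.
for _ in range(1_000_000):
    step()
```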

Start running it and you get something like this. (I’ve colored the cells based on the instruction they contain to make the whole thing easy to see.)

It’s that simple. You can watch it evolving. Voilà: Genesis. But why does it work? After all, we’ve explicitly forbidden any instruction from copying itself.

It works because once patterns of instructions appear that mutually self-copy, they spread. And that means they take over from instructions that don’t spread. Another way of saying this is that in a system that’s dense in copying operations, self-replicating patterns are the entropic outcome. Furthermore, the simplest, fastest, most robust self-replicating patterns are likely to be the ones that dominate.

Clearly this simulation approach has limitations. The only kind of adaptation that can happen is when copying patterns start interfering with each other. There’s no true mutation. Furthermore, the descriptive limits on the instructions mean that life can only ever get so far. This simulation, sadly, never tries to take over the world.

But I find this digital petri-dish wonderful to watch in any case. It’s exciting to see the little digital critters duke it out for dominance. The result is different every time you watch. And, as a science fiction writer, I find this model powerfully suggestive. If it’s this easy to kick off self-replicating systems under the right conditions, have we underestimated the range of possible conditions where alien life might arise? And given that this kind of life is so easy to make, why isn’t it filling up our computers already? What is it about our existing digital environment that’s not adequately copy-rich?

To my mind, this simulation makes it clear that the real mystery on Earth isn’t the creation of life itself. Life starts the moment you create the right conditions. The real question is how those conditions arise in the first place. Understand that, and we might be able to reproduce them, both digitally and physically. Once we can do that, playing God won’t just be easy or fun to watch, it’ll be world-changing.

(My first novel, Roboteer, comes out from Gollancz on July 16th.)

Cheating Light

There’s a lovely article that came out recently by Alastair Reynolds about how humanity might reach the stars. In it, he talks about how unlikely it is that we’ll ever travel faster than light, but how we might reach other worlds anyway.

His assessment is, to my mind, broadly correct, and his message is ultimately optimistic. But on reading that post, I feel motivated to write one of my own proposing a somewhat different vision. I want to convince you by the end of this article that humanity has a hope of cheating light speed, and that you, dear reader, can help us do it. How? By having fun.

Does that seem unlikely? Yes? Good. Now let’s get down to business.

My reasoning starts with a story about Star Trek. The show’s creator, Gene Roddenberry, apparently wanted a premise with an at least vaguely plausible scientific basis. So, to justify the speed of the Starship Enterprise and avoid the difficulties of relativity, he and his writers speculated nebulously about ‘warp drive’. They proposed that space was distorted somehow to allow the light barrier to be broken and from it created one of the more enduring tropes in science fiction.

Years later, a physicist named Miguel Alcubierre, purely for fun as I understand it, decided to look for special-case scenarios in general relativity that would allow a Star Trek-style space-warping drive to work. To his surprise, he found one.

While Alcubierre, to my knowledge, never meant for his paper to be anything other than interesting speculation, the world gleefully seized on his idea and ran with it. Despite the huge technical difficulties inherent in his model, there is now a NASA-funded research group trying to make progress with it. Their chances of success are slim, but interestingly, little pieces of research keep popping up that keep hope alive. Once the door of theoretical possibility was opened, human ingenuity started pouring in.

The lesson of this story, for me, is that when we bother to suspend disbelief, and to use our imagination to stretch science, we begin to see exciting possibilities that we otherwise miss. Most of those possibilities don’t pan out, but unless we stretch, we never look, and consequently, never learn. But this lesson is only part of the greater picture I’d like to paint for you. The next part has to do with how science proceeds.

As I have alluded to in previous posts, science has a problem. The funding and career conditions under which scientists have to exist have become ridiculous, and this isn’t just bad for scientists, it’s bad for science itself. Nobody wants to risk an already fragile career on an unpopular or speculative idea. That’s a recipe for a doomed postdoc trajectory and years of underpaid misery.

Unfortunately, though, that’s exactly what scientists should be doing. It’s not just a scientist’s job to exercise robust skepticism. It’s also their job to extend bold, falsifiable ideas that can push human understanding of the universe forward. The current institutional paradigm lets precious little of that happen.

Take string theory, for instance. Because it’s been the dominant paradigm in particle physics for years now, universities have produced thousands of string theorists, despite there being no testable evidence for the theory’s validity, and no success in even completely writing it down. Now there’s mounting evidence from the LHC that while the math may be useful, the theory isn’t actually correct. By contrast, I suspect you could count the number of people professionally studying warp drive on the fingers of your hands despite the fact that there’s no experimental evidence that actually rules it out.

So of course warp drive looks impossible right now. We’ve barely looked at the options. Which brings me to my next point, which is this: we hardly understand spacetime at all.

The truth is that we currently have little or no idea about what spacetime is or how it works. Measuring particles, we can do. Testing the properties of the context they inhabit is much harder. But what we do know about it is weird and suggestive and points to a gap in our knowledge big enough to drive a starship through.

First off, we know that general relativity has problems, even though everyone agrees that the math is lovely. Kurt Gödel pointed this out with his rotating-universe solution, published a few decades after Einstein’s general relativity papers.

Secondly, we know that empty space should contain some amount of energy, called ‘zero-point energy’. The problem is that if you try modeling this using the theoretical tools we have right now, that energy comes out either absurdly enormous or outright infinite, wildly at odds with the minuscule value we actually observe.

Thirdly, we have to account for the fact that the expansion of the universe is accelerating. Not only does this completely mess up our notions of conservation of energy, but it also demonstrates that there must be some anti-gravitational effect at work in the universe that is not addressed in the Standard Model. Most likely it only operates at very large scales, but right now, we don’t actually know.

Fourthly, there’s the fact that the Standard Model itself, on which our understanding of the universe rests, has no room in it to model ordinary gravity either. And a solution to that problem has evaded discovery for getting on for a hundred years. Why? At root because relativity and quantum mechanics require different notions of what spacetime is like that don’t actually agree.

I could go on. I could talk about dodgy equivalence principles, research on negative mass, dark matter, issues with Lorentz invariance, and all manner of other things, but I think you get the point. While we have a great handle on the stuff in the universe we can see, our handle on what we can’t see is weak.

That’s all very well, I hear you say, but showing that our understanding is limited is different from showing that something is actually possible. After all, there’s still no evidence that FTL could ever work. And you’d be right. Furthermore, in his post, Al Reynolds points at several disappointing results in recent years where potentially superluminal effects have turned out to be nothing of the kind.

Here’s the question, though: are we looking for evidence in the right places? Because, let’s be honest: if building a warp drive is possible, it’s likely to require some pretty serious nature-hacking. We’re probably going to have to figure out a lot more about how space works before we get a reliable demonstration.

If I were looking for suggestive results, I’d look at the output from Fermilab’s awesome Holometer experiment and other gravitational-wave studies, which may be on the brink of demonstrating that spacetime comes in tiny chunks. I’d be looking for more evidence of the so-called penguin anomaly, which hints at physics beyond the Standard Model. I’d even be looking at condensed matter experiments like the weird and tantalizing bosenova. It’s out of effects like these that we’ll tease out a deeper understanding of the universe and maybe find ways to bend its rules.

But there’s still obviously a gap here. All these results I’ve mentioned are a long way off from showing anything even remotely useful for FTL research. There are hints that spacetime can be manipulated and little else. But that’s where you and I come in, dear reader. Our job, as I see it, is to speculate, and to have fun doing it.

Science belongs to everyone, not a small elite corps of professionals, and it is helped when public engagement remains high. The more we care to read, to play with ideas like warp drive, and to tell the world we care, the better the chances that projects like the one at NASA get funded. And therefore, the better our chances of finding something awesome. Because, at the moment, we’re hardly looking. We’re too proud of our own skepticism to try.

Am I doing my part? You bet. In my first novel, Roboteer, which comes out this summer, I tried to pick up where Alcubierre left off. I speculate that there are particles called curvons that are essentially knots of spatial potential. They’re radiated by black holes, because black holes can’t pay their information debt to the universe any other way.

The ships in Roboteer fly faster than light by triggering the collapse of curvons around the ship to force space to expand or contract, thus creating the necessary warp field.

There’s a catch, though. Unless you can match the curvon density ahead of your ship with the density behind it, you’re stuck at sublight speeds. And that means that the only stars mankind can visit lie in a thin shell all the same distance from the galactic core.

But how would such particles climb out of a black hole’s gravity well when nothing else can, I hear you ask? Great question, reader! Well, depending on how you model spacetime, such things can be done. Remember, we’re talking about space exiting a black hole, not matter. For instance, if you model the fabric of reality as a dense, directed network, as I did for this very speculative simulation, it’s not too hard to arrange.

Would this method work in reality? It’s highly unlikely, but not yet known to be impossible. And that’s the point. From willful speculation, great futures are made. Because speculation is hope. And hope is worth holding onto. If we let the healthy habit of skepticism overtake the equally healthy habit of scientific play, progress dies.

But we don’t need just one speculative warp drive. We need thousands. Because almost all of them are going to faceplant at the first hurdle.

Currently, that group at NASA has about $50K in funding, which is nothing, and they’re trying to give us the stars. The failed ‘Star Wars’ defense projects started under Reagan that are still running have wasted countless billions of public money without delivering anything useful. Imagine what we might achieve if just one percent of that budget went towards researching crazy ideas like warp drive. Certainly we’d get no less than we’re getting now. And maybe it’d be everything. Isn’t it worth buying a lottery ticket now and then if the prize is THE ENTIRE GALAXY? If you agree, the world needs to hear you roar.

So there we have it. Your mission, if you choose to accept it, is simple: read avidly, extrapolate from what you know, and dream. Share wild ideas. Insist on science in your science fiction even if you can’t follow it. Be a proud crackpot whenever opportunity arises. Because if we don’t work together to kindle the public imagination, who will?

(Roboteer comes out from Gollancz in July, and frankly, I can’t wait.)

Are we hitting peak social justice?

I have a prediction, and it’s this: outraged social norming on the political left is close to a tipping point beyond which the left will begin to collapse under the constraints of its own narrative. 

Do I like this prediction? No, I do not. Do I want it to be true? No. Do I have opinions about it? Yes, of course. But in this post I want to try to talk about what I think I see, rather than what I believe is right. I’m deliberately going to try to resist expressing direct sentiment here about the norming itself, or its moral value, or its targets. And that’s because I believe in a lot of the things that are being pushed for in the public dialog on the internet, but don’t want this post to automatically be about the value of my opinions or anyone else’s. Because that kind of exchange is part of what I see as the problem.

My goal here, instead, is to lay out the observations and reasons that led me to this prediction. If you disagree about the prediction and have evidence of a trend that balances it, I’d like to hear about it and know your reasoning. If you agree, then please feel free to help me figure out what the hell we can do about it. And in any case, we’ll be able to watch what happens over the next five years to see if my fears play out.

I first became seriously worried about how the left-leaning dialog on the internet was functioning during the reaction to Patricia Arquette’s Oscars debacle. She made an impassioned speech about gender equality and subsequently made a less than stellar remark in the pressroom which social media seemed to pay far more attention to than her initial remarks. I didn’t understand at the time why commenters on the left would seemingly push so hard to take down a highly visible public figure promoting a progressive agenda, albeit in a flawed way.

I later learned about the internet debate around an essay written by Jonathan Chait, in which he criticized what he saw as a growing culture of political correctness. (I was apparently the last person on the internet to hear about this.) Jon’s concern was that the culture of the left was alienating its own liberal allies.

The loud, critical responses to his piece are too numerous to link to here. If you’re interested, just google ‘jon chait pc’ and read what comes back. A lot of it is very interesting.

One particular line of reasoning that I encountered several times while reading through the responses to his piece was this: where is your actual evidence that the current form of social critiquing is doing more harm than good? 

It’s natural to see where this question comes from. A lot of the loudest, and most proactive commentary on the left comes from people who are urgently trying to advance a social good. Nobody wants to hear that their best attempts to improve the world may be counterproductive, even though the scale of their response kind of made Jon’s point for him. However, the consensus seemed to be that Jon Chait had no real data. He had anecdotes, some of which were questionable.

However, this year, we were presented with some very powerful and informative data: the surprise success of the Conservative Party in the British general election. Against expectations, Labour was trounced. I say this as someone who was hoping they would win.

When British analysts attempted to understand why the polls had so badly mispredicted the election outcome, one phrase was extensively employed: ‘shy Tories’. In other words, people who’d decided to vote Conservative, but didn’t want to admit it.

In the wake of the Conservative victory, it wasn’t hard to find articles like this one, decrying the left as founded on a philosophy of exclusion and hate. There were other, more subdued articles such as this in the Guardian, that attempted to understand what had happened. Broadly speaking, it seemed that Britain had become increasingly populated with people whose views had not skewed left even though the dialog around them had.

I am not proposing here that the ‘shy Tory’ phenomenon was exclusively responsible for the Conservative win. It wasn’t. There was far more to the election than that. What I am doing, though, is pointing out that this is a national-level example of a particular social mechanism at work.

People’s opinions can fall out of step with the public narrative that surrounds them. When this happens, they will not necessarily admit it, but they may polarize against the narrative, and then subsequently act to obstruct or destroy it.  

Note that this phenomenon has nothing to do with the social value of the narrative being engaged in or who has the moral high-ground. All that needs to happen for the narrative to fail is that enough people feel that they cannot participate in it.

What is happening in Britain, though, is just one part of a dialog happening throughout the western world. Changes in one country will not necessarily repeat elsewhere. So, in order to make a guess about the future, it’d be useful to have some small, yet relatively globalized microcosm of politics where we could watch the polarization of social dialog play out.

Fortunately we science fiction enthusiasts have one. The tiny, hyperbolic world of fandom may give us a glimpse of the future. And yes, I’m talking about the dreaded Hugos debate again.

What amazes me most at this point about the Hugos fight is that posts are still appearing. The battle continues. The best post I saw recently was this one by Eric Flint, which I think shows both his wisdom and his exhaustion. I found myself wondering how people in the community had the energy to sustain their anger.

I now think I understand what is going on. People on both sides are using the conflict to self-validate. By which I mean that in taking up an entrenched position and defending it, they are experiencing a neurological reward, regardless of how coherent or self-consistent that position is. Consequently, they will probably keep at it until something more distracting comes along.

If this pattern starts repeating itself in mainstream society, I suspect that the current progressive narrative on the internet will split. In the worst case, the consensus may go into reverse. We should not kid ourselves that social progress has enjoyed a smooth linear development.

Specifically, I would propose, a progressive agenda tends to have greater traction during times of collective prosperity, when the constraints on individuals are reduced. When inequality begins to dominate, social constraints tighten. Ironically, people usually become more right-wing as their freedoms shrink, sometimes dramatically so. They look around for someone to blame who they can actually reach and affect, rather than the financial barons that they cannot. Witness the rise of the radical right in Europe happening right now.

There is a lag, though—a period in which freedoms are reducing while the juggernaut of social commentary continues undeterred. I fear that we are in that gap right now. That scares me because only a unified, inclusive, non-judgmental left has any chance against the accelerating might of the world’s oligarchical class.

In reality, almost all of us are on the same side, because we will rise or fall together. Everyone who can remember how many houses they own on a given day is on our side. Everyone who struggles with the payments on their second yacht is on our side. Everyone who owns a plane but not their own jet is on our side. I say this because all of these people will suffer in the power-lockout that is evolving, even if they cannot see it yet.

And everyone who makes copious mistakes in their vision of social inclusion, but is nevertheless ready to stand up and act to defend inclusion, is on our side. To argue otherwise, I’d propose, is to participate in the death of progressivism, rather than to lead it.

(My first novel, Roboteer, comes out from Gollancz in July of this year.)

On Ostracism

In my last post, I talked about the current woes over the Hugo awards, but all the while I was writing it, I felt like there was a major point that I was missing out. That point was larger than SF fandom, and instead said something about the difference between the country I was born in (UK) and the one I live in now (US), so I left it aside.

Then, after I made the post, a commenter (AG) made the following remark:

I would also add that nobody doubts the benefits of diversity (at least nobody seems to in the sad puppies’ camp). We only differ in thinking that ideological diversity is good too.

I find AG’s comment, while apparently heartfelt, something of a stretch. (BTW, thank you for your input, AG.) While I trust that AG speaks fairly of his own opinion, some members of the Sad Puppy camp have been extremely vocal in their criticism of ways of life different from their own. That criticism does not always appear to have been designed to encourage dialog.

But my point here is not to indulge my own opinions (which lean left), or to add to the already impressive mass of Hugo-related rhetoric. Rather I was inspired by AG’s comment to address that missing critical point.

I wanted to ask the following question: what has happened to American public discourse, and can we fix it?

Science fiction fandom in the US has become tribal, as have many elements of American life. People have grown angry. One side feels impatient for change that it sees as long overdue. The other side perceives a wave of political militancy, and thinks it sees overtures of thought control because its opinions are not garnering equal respect.

This much is obvious, but why are we all so angry now? Why did this shift not happen thirty years ago?

My proposed explanation stems from the following observation: the US is a large country which, for most of its history, has been relatively empty. Furthermore, it has a lot of different kinds of people in it, and always has. For this, and a host of other reasons, the dominant mechanism for implementing politeness in the US has been what an evolutionary biologist might call ostracism. In other words, if someone says something that you can’t get along with or that strikes you as crazy, you give them room and try to ignore them. If necessary, you actively shun them. 

In Britain, by contrast, if someone you know says something crazy, society permits you, within reason, to tease them or call them out on it. Choosing to remark on someone else’s crazy is often perceived as a point of strength. Or, at least, this was still true when I moved.

Brits, and other Europeans, look at the US and struggle to understand a society that is seemingly first world, and yet supports populations of Amish on one end and holistic pet bathing enthusiasts on the other. They make television shows about it and wonder how come Americans appear to be insane.

But US culture is structured the way that it is, I’d propose, because leaving people room was always the more efficient solution given the conditions. With different ethnic groups arriving from all over the planet for the last two hundred years, simple, robust solutions to a variety of social problems have been a part of life. The US is not a European-style, self-norming, cohesive culture. It is a hyper-inclusive monoculture underpinned by a huge number of microcultures, some of which are extremely exclusive in nature.

Thus far, the US has succeeded with this model. However, the country is now presented with a problem. That mechanism of shunning or rejecting those who we cannot get along with has broken. Even without a rising population, increased urban density, and rapid transport, the internet makes it impossible. In effect, everyone is suddenly trapped in the same room. Shunning people doesn’t increase the social distance any more. It just makes people upset and more prone to aggression. And so a long-stable nation has now polarized wildly, like oil and water desperately trying to escape each other while trapped in the same cup.

I find this worrying because the ostracism-first approach to social moderation is deeply baked into American thinking. The assumption that, if you encounter someone you consider intolerable, you should exclude them, and ensure that your peers do likewise, is for many an almost instinctive response. It feels morally right. It feels just. When others fail to participate in the process, it can feel like a betrayal. It is not perceived as a cultural choice. It is just the thing that you need to do.

But there are two ironies here. The first is that what right-leaning SF fans parse as socialist thought control is, in truth, a profoundly American social behavior. The second is that left-leaning fans, in seeking to advance a social good, unwittingly resort to a traditional behavior historically more associated with conservatives. Funny, perhaps, but nobody is laughing yet.

Is there a solution? I am biased, of course, but I would propose that the US borrow one from Britain: derision. By which I mean satire, mockery, teasing, and all other forms of social reconciliation through mirth. It is no surprise that social institutions like The Daily Show have become so valued in American society of late. They are badly needed and in short supply.

I believe that both sides in the Hugos debate, and in American society at large, need to set down their sense of outraged affront as rapidly as possible and start mocking each other instead. Mocking and accepting mockery in return. And if we find ourselves able to laugh at our own side from time to time, then we know that the healing has started. And after healing comes the potential for real, cohesive social change.

To my mind, the sooner we can achieve this, the better off we will all be, regardless of which social agenda dominates in the current debacle. Because, inevitably there will be another debacle that follows. Next time, it may be left versus left, or right versus right, and self-righteous shunning will be just as counterproductive as ever.

Meanwhile, in Britain, I think I see a growing trend toward the American cultural solution, perhaps because the distance between the US and UK is shrinking too. And this can’t work either. A Britain that abandons wry observation in favor of self-righteousness is likely to be a dangerous, unhappy place to live. It is too small to be otherwise, and righteous exclusion does not make anyone friends.

In short: the internet is not going away any time soon. We had better get used to it and adjust our social expectations accordingly.

(My first novel, Roboteer, comes out from Gollancz in July of this year.)

(My link in the above post is to a letter written by John C. Wright. For those seeking to understand whether, and in what specific sense, the letter may constitute resistance to ideological diversity, I strongly encourage reading the attached comments on this post. The discussion with John Wright included there makes his reasons for writing it clear.)

How to Fix the Hugos

I have spent way too much time this last week reading the various back and forth articles about the Hugo award debacle. For those of you who aren’t familiar with this, the short version is as follows.

Two groups, one of right-leaning fans (the Sad Puppies), and another of far-right fans (the Rabid Puppies), both felt that they were being marginalized in the Hugo award ballots. They gamed the voting system, which was entirely unprotected against gaming, and made sure their own candidates dominated this year’s ballot. The left-leaning press then jumped on this act, mischaracterized what had just happened by conflating the two groups and their members, and handed a small victory to the right-leaning fans. The right-leaning press then leapt in after with equally poorly researched accusations of a liberal conspiracy.

Subsequently, everyone and his aunt has jumped into the debate to express an opinion, including George R. R. Martin, who wrote a sequence of eloquent posts which, in my opinion, calmly and clearly exposed the latent reality distortion in the X Puppies’ positions (where X denotes some form of disease or misery).

The X Puppies’ main point appears to be this: “These days, the Hugos seem to be full of dull works laden with left-leaning politics. Why can’t the Hugos just be about good old space adventures without politics, like they used to be?”

The reply from GRR Martin and others is roughly this: “To my knowledge, they were never about good old space adventures without politics. Please point at a moment in history when this was true.”

The comeback from the X Puppies to me smacks of angry avoidance. Here is a quote from Larry Correia:

In your Where’s the Beef post you attempted to dismiss our allegations that there is a political bias in the awards now, by going through the history of the awards and looking at the political diversity of winners from long ago. Nice, but we are talking about a relatively recent trend.

In other words: in the 70’s, 80’s, 90’s and 00’s this award was not about politics-free space adventures. However, what concerns us is the recent trend toward it not being so.

I find this debacle both sad and fascinating. It’s sad, because the Hugos are broken now. It’s unclear if they will ever be quite the same. Something I put value in throughout my childhood has been soiled.

However, it’s also fascinating to me, because the whole thing is an extraordinary example of group psychology in action. I believe that if fandom views this unpleasant experience through the right lens, it can make itself stronger, wiser, and more diverse than ever before.

My source of inspiration in this matter is an excellent post by Django Wexler, focusing not on the awards themselves, but the attendant voting system, and how it might be repaired to discourage future weaponization.

His post encouraged me to think that instead of mourning the Hugos, perhaps we should accept their current brokenness and start playing with them. And to that end, I have a variety of suggestions, some more serious than others.

Suggestion One: Award a happy, hollow victory

One solution would be for everyone to vote for Vox Day (the leader of the far-right group) and any author who supports him in every single category. Then when they go up to collect the award each time, we laugh. We cheer and whistle. We thank them effusively for rescuing us from a nightmare of inclusiveness and equality. We give them loads of long, uncomfortable, sweaty hugs.

This, to my mind, would be the improviser’s yes-and-based solution. A spoiler can only feel victory so long as it is not pressed gleefully into his hands.

Suggestion Two: Create a new category

Maybe the Hugo award organizers should create a special award category for ‘old timey space adventures with no politics, honest’ to commemorate this event. We might call this the Iron Dream award, or some such thing. Then the right-leaning fans can vote for that award instead and feel like their peculiar historic fantasy is being maintained. If other fans felt the urge to vote for the most blatantly, creakingly right-leaning fiction they could find, one could hardly blame them. A match between the Iron Dream award and Best Novel might serve as an indicator that the voting had been something other than straightforward.

Suggestion Three: Pattern voting

Django Wexler proposed anti-votes to compensate for slate voting. What’s nice about this system is that slate voting is at a disadvantage, rather than an advantage. The issue, as has been subsequently pointed out, is that anti-votes carry a social connotation that is perhaps at odds with the Hugos and likely to lead to more argument.

So instead, I’d propose a system in which each pattern of votes counts once. Identical vote sheets end up constituting a single vote. This means that anyone who wants to force a slate through has to put in work. The more power they want to have, the larger the set of works they vote for has to become.
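To make the idea concrete, here’s a minimal sketch of such a tally in Python. It assumes each ballot is just a set of nominated works; the real Hugo ballot is ranked and split into categories, which this ignores.

```python
from collections import Counter

def pattern_tally(ballots):
    # Collapse identical vote sheets into a single pattern, then count
    # each pattern exactly once.
    unique_patterns = {frozenset(ballot) for ballot in ballots}
    counts = Counter()
    for pattern in unique_patterns:
        counts.update(pattern)
    return counts

# Three identical slate ballots collapse into one vote; the varied
# ballots each count.
slate = ["Slate Novel A", "Slate Novel B"]
ballots = [slate, slate, slate,
           ["Other Novel", "Slate Novel A"],
           ["Other Novel"]]
print(pattern_tally(ballots))
# e.g. Counter({'Slate Novel A': 2, 'Other Novel': 2, 'Slate Novel B': 1})
```

Note how a would-be slate-master now has to generate many distinct ballots to gain weight, which is exactly the work the system is meant to force.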

Is such a system gameable? Of course it is. Ken Arrow has made that clear. However, its social implications are positive, it provides an incentive for people to read broadly, and it disincentivizes slates. I invite criticism of this idea, as I’d love to know where the flaws are. I’m sure they’re in there somewhere.

Suggestion Four: A Hugos mission statement

If we’re being straight about this, we can admit that the X Puppies did what they did because of what they perceived as the truth, and what they perceived as injustice. That perception was, to my eye, a skewed one, but it existed for a reason.

That reason is that the people in that group felt demonized. Everyone wants to see themselves as a good guy on the side of truth and justice, so when they started to encounter a social consensus that characterized them as bad guys, they went into amygdala hijack and lashed out.

People take action in the way that the X Puppies did when their brains register that some pathway to self-validation has been compromised. They then do what people always do under these conditions, which is to construct a goal chain with the shortest discernible path leading to a state where they can continue to self-validate safely.

Their solution was to ensure that the Hugos were unambiguously political, so that they could believe this without interior conflict, and propose that this was why they were not getting awards. So far as the X Puppies’ brains are concerned, job done. All else is just the post-justification that conscious reasoning affords. Now that we are all angry, nobody has to feel unworthy. We can talk about libel and conspiracies and groupthink instead.

There seem to me to be two takeaways from this. First, it’s clear that some left-leaning fans are guilty of cheap, self-serving reasoning, just as the X Puppies are. Some of the flawed journalism that occurred during this event provides clear examples of why we should always hesitate in judgement, even if only to be accurate in our critique.

To my mind, robust liberal thought is synonymous with scientific thought. We should always consider whether there’s some position we haven’t considered, just as we should always wonder if our understanding of diversity, or privilege, or justice requires modification. This is true even when considering those whose positions we find profoundly distasteful. The alternative is defensive knee-jerk reasoning, which is either bigotry, or bigotry in disguise, no matter what political credentials are trumpeted. Outrage, regardless of its target, is the enemy of reason.

Thus, perhaps we should use this as an opportunity to think more deeply than ever about diversity and how we can communicate its benefits more effectively. If the X Puppies hadn’t found themselves auto-included in groups labeled as ‘bad’ during conversations in the community, they might not have lost the plot. Even if we consider the positions of the X Puppies to be socially untenable, how can we consider the current outcome a win? Could those people who are now defensive and angry have been more persuasively and proactively brought around?

More simply, we might also consider attaching a mission statement to the Hugos that makes it very clear what they stand for. Then those who can’t get behind the mission statement can feel free to disregard the awards at their leisure. The clearer we are about what we stand for, the easier it will be for those who don’t want to play to turn their noses up at us and stalk off. Good luck to them.

So, given the options, which solution do you prefer? Do you have an alternative proposal? If so, I’d be delighted to hear about it.

(My first book, Roboteer, comes out from Gollancz in July.) 

How we decide

Since my recent post on Twitter and Facebook, I’ve been thinking of airing a piece of science I was tinkering with at Princeton and never got round to putting out into the world. That science asks the following question:

What makes us believe that something is true or right when we lack direct evidence, or experience, to support it? When do we decide to add our voice to a social chorus to demand change?

This question seems particularly pertinent to me in our modern age when so much of our information arrives through electronic channels that are mediated by organizations and mechanisms that we cannot control.

For many social animals, the most important factor in making a decision seems to be what the other members of their social group are doing. Similar patterns show up in experiments on everything from wildebeest, to fish, to honey bees.

Similarly, we humans often tend to believe things if enough of our peers seem to do likewise. People have been examining this tendency in us ever since the experiments of the amazing Stanley Milgram.

It’s easy to see this human trait as a little bit scary—the fact that we often take so much on faith—but, of course, a lot of the time we don’t have a choice. We can’t independently check every single fact that comes our way, or consider every side of a story. And when many of the people who we care about are already convinced, aligning with them is easy.

Fortunately, a lot of animal behavior research suggests that going with the flow is often actually a really good way for social groups to make decisions. Iain Couzin’s lab at Princeton has done some excellent work on this. They’ve shown that increasing the number of members of a social group who aren’t opinionated, and who can be swayed by the consensus through sheer force of numbers, often increases the quality of collective decision making. Consequently, there are many people who think we should be taking a closer look at these animal patterns to improve our own systems of democracy.

But how effective is this kind of group reasoning for humans? How much weight should we be giving to the loud and numerous voices that penetrate our lives? And how often can we expect to get things dangerously wrong?

Well, the good news is that, because we’re humans rather than bees, we can do some easy science and find out. And I’m going to show you how. To start with, though, we’ll need to know how animal behavior scientists model social decision-making.

In the simple models that are often used, there are two types of agent (where an agent is like a digital sketch of a person, fish or bee). These types are what you might call decided agents, and undecided agents. Decided agents are hard to convince. Undecided agents are more flexible.

Decided agents are also usually divided into camps with differing opinions. Some number X of them prefer option A, and some number Y prefer option B. If the number of decided agents who like B is a lot larger than the number who like A, we assume that B is the better option. Then we look to see how easy it is for the agents, as a group, to settle on B over A.

To convince an agent, you let it meet a few others in the group to exchange opinions. If an agent encounters several peers in succession who disagree with it (let’s say three), it considers changing its mind. And the probability of changing is, of course, higher for an undecided agent than for a decided one.

Then we put a bunch of agents in a digital room and have them mill about and chat at random. Very quickly we can make a system that emulates the kind of collective decision-making  that often occurs in nature. And, as Iain’s team found, the more undecided agents you have, the better the decision-making gets.
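Here’s a minimal sketch of that well-mixed model in Python. The switching probabilities, the three-disagreement threshold, and the run lengths are illustrative assumptions; only the overall structure follows the description above.

```python
import random

N_UNDECIDED = 500
DECIDED = {"A": 4, "B": 6}                      # minority backs A, majority B
P_SWITCH = {"decided": 0.05, "undecided": 0.5}  # undecided agents are flexible

class Agent:
    def __init__(self, opinion, kind):
        self.opinion = opinion
        self.kind = kind
        self.streak = 0   # consecutive disagreements encountered

def make_population():
    pop = [Agent(o, "decided") for o, n in DECIDED.items() for _ in range(n)]
    pop += [Agent(random.choice("AB"), "undecided") for _ in range(N_UNDECIDED)]
    return pop

def meet(a, b):
    # Two agents chat. After three disagreements in a row, an agent
    # considers changing its mind; decided agents rarely budge.
    if a.opinion != b.opinion:
        a.streak += 1
        if a.streak >= 3 and random.random() < P_SWITCH[a.kind]:
            a.opinion = b.opinion
            a.streak = 0
    else:
        a.streak = 0

def majority_wins():
    pop = make_population()
    for _ in range(200_000):
        meet(*random.sample(pop, 2))   # agents mill about and chat at random
    return sum(a.opinion == "B" for a in pop) > len(pop) / 2

# Fraction of runs in which the better-backed option, B, carries the day:
print(sum(majority_wins() for _ in range(20)) / 20)
```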

[Figure: WellMixedPop. Decision quality vs. total population size for a well-mixed population.]

In this plot, we’re looking at how the quality of the decision-making scales with the size of the group. We always start with ten decided individuals, four in favor of option A and six in favor of B, and we increase the size of the undecided community, from zero up to 2550.

A score of one on the Y-axis shows an ideal democracy.  A score of zero shows the smaller group winning every time. The X-axis shows the total size of the population.

As you can see, as the group gets bigger, the quality of the decisions goes up. This is because the minority group of decided individuals takes a lot of convincing. If they duke it out directly with the decided majority with nobody else around, the results are likely to be a bit random.

But think about what happens when both decided parties talk to a bunch of random strangers first. A small difference in the sizes of the decided groups makes a huge difference in the number of random people they can reach. That’s because each one of those random people also talks to their friends, and the effects are cumulative.

That means that, before too long, the minority group is much more likely to encounter a huge number of people already in agreement. Hence, they eventually change their tune. Having undecided people makes that chorus of agreement bigger.

This is awesome for bees and fish, but here’s the problem: human beings don’t mill and chat with strangers at random. We inhabit networks of family and friends. In the modern world, the size, and pervasiveness, of those networks is greater than it ever has been. So shouldn’t we look at what happens to the same experiment if we put it on a network and only let agents talk to their friends?

Let’s do that. First, let’s use an arbitrary network. One that’s basically random. The result looks like this.

[Figure: RandomPop. The same experiment run on a random network.]

As you can see, we get nearly the same result, but the group has to be a little bigger before we reach the same decision-making quality. That doesn’t seem so bad.

But unfortunately, human social networks aren’t random. Modern electronic social networks tend to be what’s called scale-free networks. What happens if we build one of those?
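Here’s a sketch of the same model on a scale-free network, using the Barabási-Albert generator from the networkx library; the parameters are again my own illustrative choices. The hub_bias flag anticipates the elite-hubs experiment discussed a little further down: it parks the decided minority on the best-connected nodes.

```python
import random
import networkx as nx

def run_on_network(n=500, minority=4, majority=6, hub_bias=False):
    g = nx.barabasi_albert_graph(n, 2)          # a scale-free network
    nodes = sorted(g.nodes, key=g.degree, reverse=True)
    if not hub_bias:
        random.shuffle(nodes)                   # otherwise: placed at random
    opinion = {v: random.choice("AB") for v in g.nodes}
    kind = {v: "undecided" for v in g.nodes}
    streak = {v: 0 for v in g.nodes}
    for v in nodes[:minority]:
        opinion[v], kind[v] = "A", "decided"    # minority takes the hubs if biased
    for v in nodes[minority:minority + majority]:
        opinion[v], kind[v] = "B", "decided"
    p_switch = {"decided": 0.05, "undecided": 0.5}
    all_nodes = list(g.nodes)
    for _ in range(200_000):
        v = random.choice(all_nodes)
        u = random.choice(list(g[v]))           # agents only talk to neighbors
        if opinion[v] != opinion[u]:
            streak[v] += 1
            if streak[v] >= 3 and random.random() < p_switch[kind[v]]:
                opinion[v] = opinion[u]
                streak[v] = 0
        else:
            streak[v] = 0
    return sum(opinion[v] == "B" for v in g.nodes) > n / 2

print(sum(run_on_network() for _ in range(20)) / 20)
```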

[Figure: ScalePop. The same experiment run on a scale-free network.]

That’s not so hot. Once the network goes past a certain size, the quality of the decision-making actually seems to degrade. Bummer. For our simple simulation, at least, adding voices doesn’t add accuracy.

But still, the degradation doesn’t seem too bad. A score of 0.95 is pretty decent. Maybe we shouldn’t worry. Except of course, that in human networks, not every voice has the same power. Some people can buy television channels while others can only blog. And many people lack the resources or the freedom to do even that.

So what happens if we give the minority opinion-holders the most social power? In essence, if we make them the hubs of our networks and turn them into an elite with their own agenda? If you do that, the result looks like this.

[Figure: ScaleBiasPop. The same experiment with the decided minority placed at the network hubs.]

As you can see, as the system scales, the elite wins ever more frequently. Past a certain network size, they’re winning more often than not. They own the consensus reality, even though most of the conversations that are happening don’t even involve them.

My broad conclusion, then, is that we should be extremely careful about accepting what is presented to us as the truth via electronic media, even when it seems to come from our peers. The more powerful communication technology gets, the easier it is for powerful individuals to exploit it. A large, networked society is trivially easy to manipulate.

Is there something we can do about this? I think so. Remember that the randomly mixed population always does better. So maybe we should be paying a little less attention to the news and Facebook, and having more conversations with people we encounter in our day-to-day lives.

In the networked society we inhabit, we’re conditioned not to do that. Often it feels uncomfortable. However, maybe we need to be rethinking that habit if we want to retain our social voice. The more we reach out to people whose opinions we don’t know yet, and allow ourselves to be influenced by them, the less power the media has, and the stronger we collectively become.

 

On Spock

Leonard Nimoy died today. I found myself surprisingly affected by that. And as I watched the tide of sorrow pour over the internet, it occurred to me to ask: why, specifically, was I so touched? In effect, I channelled my inner Spock to process my feelings, and in doing so, partially answered my own question.

Leonard Nimoy had many talents, but for many of my generation, the fact that he was Spock effectively eclipsed the rest of them. Does that belittle the rest of his achievements? To my mind, not in the least, because what Spock symbolized was inspiring and generation-defining.

I grew up as a nerdy, troubled kid in a school that didn’t have the first clue of what to do with me. They couldn’t figure out whether to shove me into the top math stream or detention, so they did both. I was singled out for bullying by both other pupils and the school staff, and had test scores that oscillated wildly from the top of the class to the very bottom, depending on how miserable I was.

In that environment, it was trivially easy to see myself as an alien. I cherished my ability to think rationally, and came to also cherish my differentness. There weren’t many characters in popular media for a kid like that to empathize with. Spock, though, nailed it.

And while my school experience was perhaps a little extreme, I suspect that a very similar process was happening for isolated, nerdy kids all across the western world.

And here’s the root of why: Spock was strong because he was rational. Sure, he was physically powerful and had a light form of telepathy and all that, but what made him terrific was his utter calm under incredibly tough conditions. Furthermore, as Leonard Nimoy’s acting made clear, he was still quite capable of emotionally engaging, of loving, and having friends, even if he seldom admitted it to himself. Spock didn’t just give us someone to identify with. He encouraged us to inhabit that rationality, and let it define us.

Leonard Nimoy’s character kept it together when everyone around him wasn’t thinking straight, and made it look cool. In doing so, he helped to inspire a generation of computer scientists, entrepreneurs, and innovators who have changed the world, and the status of nerds within it.

The kids growing up now don’t have a Spock. Sure, they have plenty of other nerd-icons to draw from, and maybe they’re more appropriate for the current age. But for me, none of them really speak to the life-affirming power of level-headed thought in the way that Spock did.

Looking back on it, I see that Leonard Nimoy, Gene Roddenberry, and the rest of the team who created Spock’s character, helped inform the life philosophy that has guided me for years, and that’s this.

All emotions are valid, from schadenfreude to love. They’re all part of us, and should be respected, even when we’re tempted to be ashamed of them. But emotions should have a little paddock to run around in. The point at which emotions start causing problems and eating the flowers is when you let them get out of the paddock. So long as you look after your paddock, you can transcend your limitations while remaining fully human.

And so, today, I confess that I find the death of Leonard Nimoy incredibly sad, but its significance also, somehow, fascinating.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Social media and creeping horror

One of the things my friends have advised me to do as part of building my presence as a new author is take social media seriously. Particularly Twitter. I’ve been doing that, and for the most part enjoying it, but I’m also increasingly convinced that the medium of electronic social media is terrifying, both in its power, and its implications.

By this point, many of us are familiar with the risks of not being careful around social media. The New York Times recently published a brilliant article on it.

It’s easy to look at cases such as those the article describes and to think, “well, that was a dumb thing to do,” of the various individuals singled out for mob punishment. But I’d propose that making this kind of mistake is far easier than one might think.

A few years ago, I accidentally posted news of the impending birth of my son on Facebook at a time when my wife wasn’t yet ready to talk about it. It happened because I confused adding content to my wall with replying to a direct message. That confusion came about because the interface had been changed. I wondered subsequently, after learning more about Facebook, whether the change had been made on purpose, to solicit exactly that kind of public sharing of information.

In the end, this wasn’t a big deal. Everyone was very nice about it, including my wife. But it reminded me that any time we put information into the internet, we basically take the world on trust to use that information kindly.

However, the fact that we can’t always trust the world isn’t what’s freaking me out. What freaks me out is why.

The root of my concern can perhaps be summarized by the following excellent tweet by Sarah Pinborough.

*Looks through Twitter feed desperate for something funny.. humour feeds the soul. Nope, just people shouting their worthy into the void…*

I think the impressive Ms. Pinborough intended this remark in a rather casual way, but to my mind, it points up something crucial. And this is where it gets sciencey.

Human beings exercise social behavior when it fits with their validation framework. We all have some template identity for ourselves, stored in our brains as a set of patterns which we spend our days trying to match. Each one of those patterns informs some facet of who we are. And matching those patterns with action is mediated by exactly the same dopamine neuron system that guides us towards beer and chocolate cake.

What this means is that when we encounter a way to self-validate on some notion of social worth with minimal effort, we generally take it. Just like we eat that slice of cake left for us on the table.  And social media has turned that validation into a single-click process. In other words, without worrying too much about it, we shout our worthy into the void. 

This is scary because a one-click process doesn’t leave much room for second-guessing or self-reflection. Furthermore, the effects of clicking are often immediate. This reinforces the pattern, making it ever more likely that we’ll do the same thing again. And that’s not good for us. We get used to social validation being effortless, satisfying, and requiring little or no thought.

We may firmly assure ourselves that all our retweeting, liking, and pithy outrage is born out of careful judgement and a strong moral center, but neurological reality is against us. The human mind loves short-cuts. Even if we start with the best rational intentions, our own mental reward mechanisms inevitably betray us. Sooner or later, we get lazy.

Twenty years ago, did people spend so much of their effort shouting out repeated worthy slogans at each other? Were they as fast to outrage, or to shame those who’d slipped up? How about ten years ago? I’d argue that we have turned some kind of corner in terms of the aggressiveness of our social norming. And we’ve done so not because we are now suddenly somehow more righteous. We’ve done it because it’s cheap. Somebody turned self-righteousness into a drug for us, and we’re snorting it.

But unlike lines of cocaine, this kind of social validation does not come with social criticism attached. Instead, it usually comes from spontaneous support from everyone else who’s taking part. This kind of drug comes with a vast, unstoppable enabler network built in. This makes electronic outrage into a kind of social ramjet, accelerating under its own power. And as with all such self-reinforcing systems, it is likely to continue feeding on itself until something breaks horribly.

Furthermore, dissent to this process produces an attendant reflexive response, just as hard and as sharp as our initial social judgements. Those who contest the social norming are likely to be punished too, because they threaten an established channel of validation. The off-switch on our ramjet has been electrified. Who dares touch it?

The social media companies see this to some extent, I believe. But they don’t want to step in because they’d lose money. The more Twitter and Facebook build themselves into the fabric of our process of moral self-reward, the more dependent on them we become, and the less likely we are to spend a day with those apps turned off.

Is there a solution to this kind of creeping self-manifested social malaise? Yes. Of course. The answer is to keep social media for humor, and for news that needs to travel fast. We should never shout our worthiness. We should resist the commoditization of our morality at all costs.

Instead, we should compose thoughts in a longer format for digestion and dialog. Maybe that’s slower and harder to read, but that’s the point. Human social and moral judgements deserve better than the status of viruses. When it comes to ostracizing others, or voting, or considering social issues, taking the time to think makes the difference between civilization and planet-wide regret.

The irony here is that many of those people clicking are those most keen to rid the world of bigotry. They hunger for a better, kinder planet. Yet by engaging in reflexive norming, they cannot help but shut down the very processes that make liberal thinking possible. The people whose voices the world arguably needs most are being quietly trained to shout out sound-bites in return for digital treats. We leap to outrage, ignoring the fact that the same kind of instant indignation can be used to support everything from religious totalitarianism to the mistreatment of any minority group you care to name. A world that judges with a single click is very close in spirit to one that burns witches.

In short, I propose: post cats, post jokes, post articles. Social justice, when fairly administered, is far more about the justice than about the social.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Barricade, and opening up

I have a book coming out this year and the anticipation is affecting me. Perhaps understandably, I have become fascinated by the process that authors go through when their books hit print. Countless writers have gone through it. Some to harsh reviews, some to raves, and some, of course, to dreadful indifference. What must that be like, to have something you’ve spent years on suddenly held up for casual judgement? I have no idea, but I’ll probably find out soon.

It’s probably natural that, in trying to second-guess this slightly life-changing event, I’ve looked to my peers. Specifically, I’ve looked to the other new authors my publisher is carrying: those people a little further down the same path as myself.

In stalking them on the web, I hit my first key realization. As a writer, I should have started reviewing, years ago, every writer whose work struck me in one way or another. Without such feedback, a writer is alone in the dark. A review by another writer, even an unfavorable one, is a mark of respect.

As it is, I have a tendency to lurk online, finding what I need but not usually participating in the business of commenting. However, this process of looking at the nearly-me’s out there has brought home that the web can and should be a place of dialog. It’s stronger and better when individual opinions are contributed. If I expect that from others, I should contribute myself. The reviewing habit, then, is one I am going to try to take up immediately.

Which brings me to the first Gollancz title I consumed during my peerwise investigation: Barricade, by Jon Wallace. And to my first online book review. Before I tell you what I thought of it, I should first give you a sense of what it’s about. Rather than cutting a fresh description, I will pull from Amazon.

Kenstibec was genetically engineered to build a new world, but the apocalypse forced a career change. These days he drives a taxi instead. A fast-paced, droll and disturbing novel, BARRICADE is a savage road trip across the dystopian landscape of post-apocalypse Britain; narrated by the cold-blooded yet magnetic antihero, Kenstibec. Kenstibec is a member of the ‘Ficial’ race, a breed of merciless super-humans. Their war on humanity has left Britain a wasteland, where Ficials hide in barricaded cities, besieged by tribes of human survivors. Originally optimised for construction, Kenstibec earns his keep as a taxi driver, running any Ficial who will pay from one surrounded city to another. The trips are always eventful, but this will be his toughest yet. His fare is a narcissistic journalist who’s touchy about her luggage. His human guide is constantly plotting to kill him. And that’s just the start of his troubles. On his journey he encounters ten-foot killer rats, a mutant king with a TV fixation, a drug-crazed army, and even the creator of the Ficial race. He also finds time to uncover a terrible plot to destroy his species for good – and humanity too.

My two cents:

I enjoyed this book. It had shades of Blade Runner and Mad Max, with a heavy dose of English cultural claustrophobia thrown in. I liked the way that the viewpoint character’s flattened affect lifted gently over the course of the novel. I liked the pacing. I liked the simple, self-contained quality of the dystopian world that’s presented. While the content is often bleak, sometimes to the point of ruling out hope, there is always humor there. And most of all, I appreciated the underlying message of the book. In essence, Barricade proposes (IMO) that we are saved in the end not by clever ideas or grand political visions, but by hope, humanity, and persistent, restless experimentation in the face of adversity. I sympathize with that outlook.

Is the book perfect? Of course not. No book ever is. The flattened affect, and the blunt, violent bleakness of the novel, both come with a cost in reader engagement that will no doubt vary from person to person. I was not bothered, but I can imagine others who would be.

Furthermore, the human characters, bar one, are ironically the least fully drawn (perhaps deliberately). But all creative choices come with side-effects. Barricade held my attention to the end, entertained me, and encouraged me to think.

AI and Existential Threats

So on re-reading my last AI post, I decided that it perhaps seemed a little glib. After all, some heavy-hitters from Nick Bostrom on down have seriously considered the risks from superintelligent AI. If I’m ready to write such risks off, I should at least do justice to the arguments of the idea’s proponents, and make clear why I’m not especially concerned.

First, I should encapsulate what I see as the core argument of the Existential Threat crowd. To my mind, this quote from Nick Bostrom captures it quite nicely.

Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

In other words, once we create something that can outthink us, we are helpless if its goals don’t align with ours. It’s an understandable concern. After all, isn’t this the lesson that the evolution of the human race proved? The smarter the race, the bigger it wins, surely.

Well, maybe.

I would argue that the way we think about risks, and the way we think about intelligence, deeply colors our perception of this issue. We can’t think about it straight because we’re not designed to.

First, there is an issue of scale. People tend to forget that reasoning capacity is easy to come by in nature, and not always useful. There was a lovely experiment a few years back in which scientists tried to breed more intelligent fruit-flies. They succeeded with surprising ease. However, those flies had a shorter lifespan than normal flies, and didn’t obviously have a natural advantage at being a fly. Because flies are small, it’s more efficient for them to be easily adaptable at the level of evolutionary selection and have loads of children than it is to be smart. The same is even more true of smaller organisms like bacteria.

The lesson here is that intelligence confers a scale-dependent advantage. We are the smartest things we know of. However, we assume that having more smarts always equals better, despite the fact that this pattern isn’t visible anywhere in nature at scales larger than our own. There is, for instance, no evidence of superintelligent life out among the stars.

While it still might be true that smarter equals better, it might also be the case that smartness at large scales is a recipe for self-destruction. Superintelligent life might kill itself before it kills us. After all, the smarter we’ve become, the better at doing that we seem to be.

Then there is the issue of what intelligence is. Consider the toy example from Bostrom’s quote. For this scenario to hold, our AI must be smart enough to scheme against humans, yet insufficiently aware of the long-term cost of pursuing its goal: namely, the destruction of the very entities who maintain it and enable it to make paperclips in the first place.

To resolve this paradox, we have to assume that the AI maintains itself. However, as Stephen Hawking will tell you, an excess of intelligence does not magically bestow physical self-sufficiency. Neither does it necessarily bestow the ability to design such a self-maintaining system and force its implementation. Furthermore, we have to assume that the humans building the self-sufficient machines at no point notice that they’re constructing the tools of their own obsolescence. Not impossible, but another stretch on the Existential Threat model.

Another problem is that we tend to see intelligence as a scalar, universal commodity, despite the vast array of neuroscientific evidence to the contrary. We see others as having more or less of it than ourselves, while at the same time rewarding ourselves for specific cognitive talents that are anything but universal: insight, empathy, spatial skills, and so on. Why do we do this? Because reasoning about other intelligences is so hard that it forces us to make a vast number of simplifying assumptions.

These assumptions are so extreme, and so important to our continued functioning, that in many cases they actually get worse the more people think about AI. From the fifties to the nineties, a huge number of academics honestly believed that building logic systems based on predicate calculus would be adequate to capture all the fundamentals of human intelligence. These were smart people. Rather than giggle at them, we should somberly reflect on how easy it is to be so wrong.

Related to this, we also assume automatically that AI will reason about threats and risks in a self-centered fashion, just as we do. In Bostrom’s example, the AI has to care that humans shutting it down will result in the end of paperclip production. Why assume that? Are we proposing that the will to self-preservation is an automatic consequence of AI design? To my knowledge, not a single real-world AI application thinks this way, and none show the slightest tendency to start. I would propose that we have this instinct not because we’re intelligent, but because we’re evolved from unintelligent creatures that displayed the trait because it conferred a selective advantage.
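To see how cheaply that selective advantage does its work, here’s a minimal toy simulation, again with invented numbers. Agents either carry a self-preservation trait or they don’t; the trait has nothing to do with reasoning ability, yet it spreads through the population simply because its carriers die less often.

```python
import random

# Toy selection model. Population size and death rates are illustrative
# assumptions; the point is only that a survival trait spreads without
# any intelligence being involved.
POP_SIZE = 1000
DEATH_RATE_PLAIN = 0.30    # chance a hazard kills an agent without the trait
DEATH_RATE_CARRIER = 0.10  # chance it kills a self-preserving agent

# Start with the trait rare: roughly 5% of the population carries it (True).
population = [random.random() < 0.05 for _ in range(POP_SIZE)]

for generation in range(50):
    # Survival: carriers dodge the hazard more often than non-carriers.
    survivors = [carrier for carrier in population
                 if random.random() > (DEATH_RATE_CARRIER if carrier
                                       else DEATH_RATE_PLAIN)]
    # Reproduction: survivors breed back to full size; children copy parents.
    population = [random.choice(survivors) for _ in range(POP_SIZE)]

print(f"Trait frequency after 50 generations: {sum(population) / POP_SIZE:.2f}")
```

Run it a few times and the trait fixates almost every time, purely because dying less is heritable. No reasoning required.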

So for AI to reason selfishly, we have to propose that the trait for self-preservation comes from somewhere. Let’s say it comes from malware, perhaps. But even if we make this assumption, there’s still a problem.

Why would we propose that such an intelligence would automatically choose to bootstrap itself to even greater intelligence? How many people do you know who’d sign up for voluntary brain surgery? Particularly brain surgery conducted by someone no smarter than themselves. Because that’s what we’re proposing here.

There is a reason that this isn’t a popular lifestyle choice. And that’s that the same will to self-preservation acts against any desire for self-surgery, because self-surgery can’t come without risk. In other words, you can’t have your self-preservation cake and eat it too.

But perhaps the greatest reason why we shouldn’t be too worried about superintelligent AI is that we can see this problem coming. Smart machines have been scaring us for generations and they’re not even here yet. By contrast, antibiotic-resistant bacteria, bred by the overuse of antibiotics in factory farming, present a massive threat that the CDC have been desperately trying to raise awareness of. But people aren’t interested, because they like cheap chicken.

In short, I don’t assume that superintelligent AI represents no threat. But I do strongly suspect that when something comes along and clobbers us, it’ll be something we didn’t see coming. Either that, or something we didn’t want to see.