On Spock

Leonard Nimoy died today. I found myself surprisingly affected by that. And as I watched the tide of sorrow pour over the internet, it occurred to me to ask: why, specifically, was I so touched? In effect, I channelled my inner Spock to process my feelings, and in doing so, partially answered my own question.

Leonard Nimoy had many talents, but for many of my generation, the fact that he was Spock effectively eclipsed the rest of them. Does that belittle the rest of his achievements? To my mind, not in the least, because what Spock symbolized was inspiring, and generation-defining.

I grew up as a nerdy, troubled kid in a school that didn’t have the first clue of what to do with me. They couldn’t figure out whether to shove me into the top math stream or detention, so they did both. I was singled out for bullying by both other pupils and the school staff, and had test scores that oscillated wildly from the top of the class to the very bottom, depending on how miserable I was.

In that environment, it was trivially easy to see myself as an alien. I cherished my ability to think rationally, and came to also cherish my differentness. There weren’t many characters in popular media for a kid like that to empathize with. Spock, though, nailed it.

And while my school experience was perhaps a little extreme, I suspect that a very similar process was happening for isolated, nerdy kids all across the western world.

And here’s the root of why: Spock was strong because he was rational. Sure, he was physically powerful and had a light form of telepathy and all that, but what made him terrific was his utter calm under incredibly tough conditions. Furthermore, as Leonard Nimoy’s acting made clear, he was still quite capable of engaging emotionally, of loving, and of having friends, even if he seldom admitted it to himself. Spock didn’t just give us someone to identify with. He encouraged us to inhabit that rationality, and let it define us.

Leonard Nimoy’s character kept it together when everyone around him wasn’t thinking straight, and made it look cool. In doing so, he helped to inspire a generation of computer scientists, entrepreneurs, and innovators who have changed the world, and the status of nerds within it.

The kids growing up now don’t have a Spock. Sure, they have plenty of other nerd-icons to draw from, and maybe they’re more appropriate for the current age. But for me, none of them really speak to the life-affirming power of level-headed thought in the way that Spock did.

Looking back on it, I see that Leonard Nimoy, Gene Roddenberry, and the rest of the team who created Spock’s character helped inform the life philosophy that has guided me for years. It goes like this.

All emotions are valid, from schadenfreude to love. They’re all part of us, and should be respected, even when we’re tempted to be ashamed of them. But emotions should have a little paddock to run around in. The point at which emotions start causing problems and eating the flowers is when you let them get out of the paddock. So long as you look after your paddock, you can transcend your limitations while remaining fully human.

And so, today, I confess that I find the death of Leonard Nimoy incredibly sad, but its significance also, somehow, fascinating.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Social media and creeping horror

One of the things my friends have advised me to do as part of building my presence as a new author is take social media seriously. Particularly Twitter. I’ve been doing that, and for the most part enjoying it, but I’m also increasingly convinced that the medium of electronic social media is terrifying, both in its power, and its implications.

By this point, many of us are familiar with the risks of not being careful around social media. The New York Times recently published a brilliant article on it.

It’s easy to look at cases such as those the article describes and to think, “well, that was a dumb thing to do,” of the various individuals singled out for mob punishment. But I’d propose that making this kind of mistake is far easier than one might think.

A few years ago, I accidentally posted news of the impending birth of my son on Facebook at a time when my wife wasn’t yet ready to talk about it. It happened because I confused adding content to my wall with replying to a direct message. That confusion came about because the interface had been changed. I wondered subsequently, after learning more about Facebook, whether the change had been made on purpose, to solicit exactly that kind of public sharing of information.

In the end, this wasn’t a big deal. Everyone was very nice about it, including my wife. But it reminded me that any time we put information onto the internet, we basically trust the world to use that information kindly.

However, the fact that we can’t always trust the world isn’t what’s freaking me out. What freaks me out is why.

The root of my concern can perhaps be summarized by the following excellent tweet by Sarah Pinborough.

*Looks through Twitter feed desperate for something funny.. humour feeds the soul. Nope, just people shouting their worthy into the void…*

I think the impressive Ms. Pinborough intended this remark in a rather casual way, but to my mind, it points up something crucial. And this is where it gets sciencey.

Human beings exercise social behavior when it fits with their validation framework. We all have some template identity for ourselves, stored in our brains as a set of patterns which we spend our days trying to match. Each one of those patterns informs some facet of who we are. And matching those patterns with action is mediated by exactly the same dopamine neuron system that guides us towards beer and chocolate cake.

What this means is that when we encounter a way to self-validate on some notion of social worth with minimal effort, we generally take it. Just like we eat that slice of cake left for us on the table.  And social media has turned that validation into a single-click process. In other words, without worrying too much about it, we shout our worthy into the void. 

This is scary because a one-click process doesn’t leave much room for second-guessing or self-reflection. Furthermore, the effects of clicking are often immediate. This reinforces the pattern, making it ever more likely that we’ll do the same thing again. And that’s not good for us. We get used to social validation being effortless, satisfying, and requiring little or no thought.

We may firmly assure ourselves that all our retweeting, liking, and pithy outrage is born out of careful judgement and a strong moral center, but neurological reality is against us. The human mind loves short-cuts. Even if we start with the best rational intentions, our own mental reward mechanisms inevitably betray us. Sooner or later, we get lazy.

Twenty years ago, did people spend so much of their effort shouting worthy slogans at each other? Were they as fast to outrage or shame those who’d slipped up? How about ten years ago? I’d argue that we have turned some kind of corner in terms of the aggressiveness of our social norming. And we’ve done so not because we are now suddenly somehow more righteous. We’ve done it because it’s cheap. Somebody turned self-righteousness into a drug for us, and we’re snorting it.

But unlike lines of cocaine, this kind of social validation does not come with social criticism attached. Instead, it usually comes with spontaneous support from everyone else who’s taking part. This is a drug with a vast, unstoppable enabler network built in. This makes electronic outrage into a kind of social ramjet, accelerating under its own power. And as with all such self-reinforcing systems, it is likely to continue feeding on itself until something breaks horribly.

Furthermore, dissent from this process produces an attendant reflexive response, just as hard and as sharp as our initial social judgements. Those who contest the social norming are likely to be punished too, because they threaten an established channel of validation. The off-switch on our ramjet has been electrified. Who dares touch it?

The social media companies see this to some extent, I believe. But they don’t want to step in because they’d lose money. The more Twitter and Facebook build themselves into the fabric of our process of moral self-reward, the more dependent on them we become, and the less likely we are to spend a day with those apps turned off.

Is there a solution to this kind of creeping self-manifested social malaise? Yes. Of course. The answer is to keep social media for humor, and for news that needs to travel fast. We should never shout our worthiness. We should resist the commoditization of our morality at all costs.

Instead, we should compose thoughts in a longer format for digestion and dialog. Maybe that’s slower and harder to read, but that’s the point. Human social and moral judgements deserve better than the status of viruses. When it comes to ostracizing others, or voting, or considering social issues, taking the time to think makes the difference between civilization and planet-wide regret.

The irony here is that many of those people clicking are those most keen to rid the world of bigotry. They hunger for a better, kinder planet. Yet by engaging in reflexive norming, they cannot help but shut down the very processes that make liberal thinking possible. The people whose voices the world arguably needs most are being quietly trained to shout out sound-bites in return for digital treats. We leap to outrage, ignoring the fact that the same kind of instant indignation can be used to support everything from religious totalitarianism to the mistreatment of any kind of minority group you care to name. A world that judges with a single click is very close in spirit to one that burns witches.

In short, I propose: post cats, post jokes, post articles. Social justice, when fairly administered, is far more about the justice than about the social.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Barricade, and opening up

I have a book coming out this year and the anticipation is affecting me. Perhaps understandably, I have become fascinated by the process that authors go through when their books hit print. Countless writers have gone through it. Some to harsh reviews, some to raves, and some, of course, to dreadful indifference. What must that be like, to have something you’ve spent years on suddenly be held up for casual judgement? I have no idea, but I’ll probably find out soon.

It’s probably natural that, in trying to second-guess this slightly life-changing event, I’ve looked to my peers. Specifically, I’ve looked to those other new authors that my publisher is carrying—those people a little further down the same path as myself.

In stalking them on the web, I hit my first key realization. As a writer, I should have been giving reviews, for years now, to every writer whose work struck me in one way or another. And that’s because without such feedback, a writer is alone in the dark. A review by another writer, even an unfavorable one, is a mark of respect.

As it is, I have a tendency to lurk online, finding what I need but not usually participating in the business of commenting. However, this process of looking at the nearly-me’s out there has brought home that the web can and should be a place of dialog. It’s stronger and better when individual opinions are contributed. If I expect it from others, I should contribute myself. The reviewing habit, then, is one which I am going to try to take up immediately.

Which brings me to the first Gollancz title I consumed during my peerwise investigation: Barricade, by Jon Wallace. And to my first online book review. Before I tell you what I thought of it, I should first give you a sense of what it’s about. Rather than crafting a fresh description, I will pull from Amazon.

Kenstibec was genetically engineered to build a new world, but the apocalypse forced a career change. These days he drives a taxi instead. A fast-paced, droll and disturbing novel, BARRICADE is a savage road trip across the dystopian landscape of post-apocalypse Britain; narrated by the cold-blooded yet magnetic antihero, Kenstibec. Kenstibec is a member of the ‘Ficial’ race, a breed of merciless super-humans. Their war on humanity has left Britain a wasteland, where Ficials hide in barricaded cities, besieged by tribes of human survivors. Originally optimised for construction, Kenstibec earns his keep as a taxi driver, running any Ficial who will pay from one surrounded city to another. The trips are always eventful, but this will be his toughest yet. His fare is a narcissistic journalist who’s touchy about her luggage. His human guide is constantly plotting to kill him. And that’s just the start of his troubles. On his journey he encounters ten-foot killer rats, a mutant king with a TV fixation, a drug-crazed army, and even the creator of the Ficial race. He also finds time to uncover a terrible plot to destroy his species for good – and humanity too.

My two cents:

I enjoyed this book. It had shades of Blade Runner and Mad Max, with a heavy dose of English cultural claustrophobia thrown in. I liked the way that the viewpoint character’s flattened affect lifted gently over the course of the novel. I liked the pacing. I liked the simple, self-contained quality of the dystopian world that’s presented. While the content is often bleak, sometimes to the point of ruling out hope, there is always humor there. And most of all, I appreciated the underlying message of the book. In essence, Barricade proposes (IMO) that we are saved in the end not by clever ideas or grand political visions, but by hope, humanity, and persistent, restless experimentation in the face of adversity. I sympathize with that outlook.

Is the book perfect? Of course not. No book ever is. The flattened affect, and the blunt, violent bleakness of the novel, both come with a cost in reader engagement that will no doubt vary from person to person. I was not bothered, but I can imagine others who would be.

Furthermore, the human characters, bar one, are ironically the least fully drawn (perhaps deliberately). But all creative choices come with side-effects. Barricade held my attention to the end, entertained me, and encouraged me to think.

AI and Existential Threats

So on re-reading my last AI post, I decided that it perhaps seemed a little glib. After all, some heavy-hitters from Nick Bostrom on down have seriously considered the risks from superintelligent AI. If I’m ready to write such risks off, I should at least do justice to the arguments of the idea’s proponents, and make clear why I’m not specifically concerned.

First, I should encapsulate what I see as the core argument of the Existential Threat crowd. To my mind, this quote from Nick Bostrom captures it quite nicely.

Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

In other words, once we create something that can outthink us, we are helpless if its goals don’t align with ours. It’s an understandable concern. After all, isn’t this the lesson that the evolution of the human race proved? The smarter the race, the bigger it wins, surely.

Well, maybe.

I would argue that the way we think about risks, and the way we think about intelligence, deeply colors our perception of this issue. We can’t think about it straight because we’re not designed to.

First, there is an issue of scale. People tend to forget that reasoning capacity is easy to come by in nature, and not always useful. There was a lovely experiment a few years back in which scientists tried to breed more intelligent fruit-flies. They succeeded with surprising ease. However, those flies had a shorter lifespan than normal flies, and didn’t obviously have a natural advantage at being a fly. Because flies are small, it’s more efficient for them to be easily adaptable at the level of evolutionary selection and have loads of children than it is to be smart. The same is even more true of smaller organisms like bacteria.

The lesson here is that intelligence confers a scale-dependent advantage. We are the smartest things we know of. However, we assume that having more smarts always equals better, despite the fact that this pattern isn’t visible anywhere in nature at scales larger than our own. There is, for instance, no evidence of superintelligent life out among the stars.

While it still might be true that smarter equals better, it might also be the case that smartness at large scales is a recipe for self-destruction. Superintelligent life might kill itself before it kills us. After all, the smarter we’ve become, the better at doing that we seem to be.

Then there is the issue of what intelligence is. Consider the toy example from Bostrom’s quote. In order for this scenario to hold, our AI must be smart enough to scheme against humans, yet insufficiently aware of the long-term cost of pursuing that goal: namely, the destruction of the entities who maintain it and enable it to make paperclips in the first place.

To resolve this paradox, we have to assume that the AI maintains itself. However, as Stephen Hawking will tell you, having an excess of intelligence does not magically bestow physical self-sufficiency. Neither does it necessarily bestow the ability to design such a system and force its implementation. Furthermore, we have to assume that the humans building the self-sufficient machines at no time notice that they’re constructing the tools that will bring about their own obsolescence. Not impossible, but at the same time, another stretch for the Existential Threat model.

Another problem is that we tend to see intelligence as a scalar commodity, universal in nature, despite the vast array of neuroscientific evidence to the contrary. We see others as having more, or less, than ourselves, while at the same time rewarding ourselves for specific cognitive talents that are far from universal: insight, empathy, spatial skills, and so on. Why do we do this? Because reasoning about other intelligences is so hard that it requires us to make a vast number of simplifying assumptions.

These assumptions are so extreme, and so important to our continued functioning, that in many cases they actually get worse the more people think about AI. From the fifties to the nineties, a huge number of academics honestly believed that building logic systems based on predicate calculus would be adequate to capture all the fundamentals of human intelligence. These were smart people. Rather than giggle at them, we should somberly reflect on how easy it is to be so wrong.

Related to this, we also assume automatically that AI will reason about threats and risks in a self-centered fashion, just as we do. In Bostrom’s example, the AI has to care that humans shutting it down will result in the end of paperclip production. Why assume that? Are we proposing that the will to self-preservation is an automatic consequence of AI design? To my knowledge, there is not a single real-world AI application that thinks this way. Furthermore, none of them show the slightest tendency to start. I would propose that we have this instinct not because we’re intelligent, but because we evolved from unintelligent creatures in which this trait conferred a selective advantage.

So for AI to reason selfishly, we have to propose that the trait for self-preservation comes from somewhere. Let’s say it comes from malware, perhaps. But even if we make this assumption, there’s still a problem.

Why would we propose that such an intelligence would automatically choose to bootstrap itself to even greater intelligence? How many people do you know who’d sign up for voluntary brain surgery? Particularly brain surgery conducted by someone no smarter than themselves. Because that’s what we’re proposing here.

There is a reason that this isn’t a popular lifestyle choice: the same will to self-preservation acts against any desire for self-surgery, because self-surgery can’t come without risk. In other words, you can’t have your self-preservation cake and eat it too.

But perhaps the greatest reason why we shouldn’t be too worried about superintelligent AI is that we can see this problem. Smart machines have been scaring us for generations and they’re not even here yet. By contrast, antibiotic-resistant bacteria, bred by the overuse of antibiotics in factory farming, present a massive threat that the CDC have been desperately trying to raise awareness of. But people aren’t interested. Because they like cheap chicken.

In short, I don’t assume that superintelligent AI represents no threat. But I do strongly suspect that when something comes along and clobbers us, it’ll be something we didn’t see coming. Either that, or something we didn’t want to see.