How we decide

Since my recent post on Twitter and Facebook, I’ve been thinking of airing a piece of science I was tinkering with at Princeton and never got round to putting out into the world. That science asks the following question:

What makes us believe that something is true or right when we lack direct evidence, or experience, to support it? When do we decide to add our voice to a social chorus to demand change?

This question seems particularly pertinent to me in our modern age when so much of our information arrives through electronic channels that are mediated by organizations and mechanisms that we cannot control.

For many social animals, the most important factor in making a decision seems to be what the other members of their social group are doing. Similar patterns show up in experiments on everything from wildebeest, to fish, to honey bees.

Similarly, we humans often tend to believe things if enough of our peers seem to do likewise. People have been examining this tendency in us ever since the experiments of the amazing Stanley Milgram.

It’s easy to see this human trait as a little bit scary—the fact that we often take so much on faith—but, of course, a lot of the time we don’t have a choice. We can’t independently check every single fact that comes our way, or consider every side of a story. And when many of the people who we care about are already convinced, aligning with them is easy.

Fortunately, a lot of animal behavior research suggests that going with the flow is often actually a really good way for social groups to make decisions. Iain Couzin’s lab at Princeton has done some excellent work on this. They’ve shown that increasing the number of members of a social group who aren’t opinionated, and who can be swayed by the consensus by sheer force of numbers, often increases the quality of collective decision making. Consequently, there are many people who think we should be taking a closer look at these animal patterns to improve our own systems of democracy.

But how effective is this kind of group reasoning for humans? How much weight should we be giving to the loud and numerous voices that penetrate our lives? And how often can we expect to get things dangerously wrong?

Well, the good news is that, because we’re humans rather than bees, we can do some easy science and find out. And I’m going to show you how. To start with, though, we’ll need to know how animal behavior scientists model social decision-making.

In the simple models that are often used, there are two types of agent (where an agent is like a digital sketch of a person, fish or bee). These types are what you might call decided agents, and undecided agents. Decided agents are hard to convince. Undecided agents are more flexible.

Decided agents are also usually divided into camps with differing opinions: some number of them prefer option A, and some number prefer option B. If there are a lot more decided agents who like B than decided agents who like A, we assume that B is the better option. Then we look to see how easy it is for the agents, as a group, to settle on B over A.

To convince an agent, you let it meet a few others in the group to exchange opinions. If an agent encounters several peers in succession who disagree with it (let’s say three), it considers changing its mind. And the probability of changing is, of course, higher for an undecided agent than for a decided one.

Then we put a bunch of agents in a digital room and have them mill about and chat at random. Very quickly we can make a system that emulates the kind of collective decision-making  that often occurs in nature. And, as Iain’s team found, the more undecided agents you have, the better the decision-making gets.
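For concreteness, here’s roughly what a model like this looks like in code. This is a minimal sketch of my own in Python, not the code behind the plots that follow; the switching probabilities and the three-disagreement threshold are illustrative assumptions rather than values from any particular study.

```python
import random

# Illustrative parameters; assumptions, not fitted values.
P_SWITCH = {"decided": 0.1, "undecided": 0.9}   # chance of changing one's mind
DISAGREEMENTS_NEEDED = 3                        # disagreements in a row before reconsidering

class Agent:
    def __init__(self, opinion, kind):
        self.opinion = opinion    # "A" or "B"
        self.kind = kind          # "decided" or "undecided"
        self.streak = 0           # disagreeing encounters in a row

    def meet(self, other):
        """Exchange opinions with one peer and maybe change our mind."""
        if other.opinion == self.opinion:
            self.streak = 0
            return
        self.streak += 1
        if self.streak >= DISAGREEMENTS_NEEDED:
            if random.random() < P_SWITCH[self.kind]:
                self.opinion = other.opinion
            self.streak = 0

def make_population(n_a=4, n_b=6, n_undecided=100):
    """A decided minority for A, a decided majority for B, and undecided
    agents who start with a coin-flip opinion."""
    pop = ([Agent("A", "decided") for _ in range(n_a)]
           + [Agent("B", "decided") for _ in range(n_b)]
           + [Agent(random.choice("AB"), "undecided") for _ in range(n_undecided)])
    random.shuffle(pop)
    return pop

def run_well_mixed(pop, chats=100_000):
    """Everyone mills about and chats with random strangers; report which
    opinion holds the majority at the end."""
    for _ in range(chats):
        a, b = random.sample(pop, 2)
        a.meet(b)
        b.meet(a)
    votes_b = sum(agent.opinion == "B" for agent in pop)
    return "B" if votes_b > len(pop) / 2 else "A"

print(run_well_mixed(make_population()))   # the winner of a single run
```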

[Figure: WellMixedPop. Decision quality (Y-axis) against total population size (X-axis) for a well-mixed population.]

In this plot, we’re looking at how the quality of the decision-making scales with the size of the group. We always start with ten decided individuals, four in favor of option A and six in favor of B, and we increase the size of the undecided community, from zero up to 2550.

A score of one on the Y-axis shows an ideal democracy.  A score of zero shows the smaller group winning every time. The X-axis shows the total size of the population.
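If you want to produce a curve like this yourself, the sweep is just a loop over population sizes, scoring each size by how often the majority-backed option B wins. A rough sketch, reusing make_population() and run_well_mixed() from the snippet above; the run count is arbitrary and much smaller than you’d want for a smooth curve.

```python
def democracy_score(n_undecided, runs=200):
    """Fraction of runs in which the majority-backed option B wins:
    1.0 is an ideal democracy, 0.0 means the minority wins every time."""
    wins = sum(run_well_mixed(make_population(4, 6, n_undecided)) == "B"
               for _ in range(runs))
    return wins / runs

for n_undecided in (0, 10, 40, 160, 640, 2540):
    total = n_undecided + 10           # the ten decided agents are always there
    print(total, democracy_score(n_undecided))
```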

As you can see, as the group gets bigger, the quality of the decisions goes up. This is because the minority group of decided individuals take a lot of convincing. If they duke it out directly with the decided majority with nobody else around, the results are likely to be a bit random.

But think about what happens when both decided parties talk to a bunch of random strangers first. A small difference in the sizes of the decided groups makes a huge difference in the number of random people they can reach. That’s because each one of those random people also talks to their friends, and the effects are cumulative.

That means that, before too long, the minority group is much more likely to encounter a huge number of people already in agreement. Hence, they eventually change their tune. Having undecided people makes that chorus of agreement bigger.

This is awesome for bees and fish, but here’s the problem: human beings don’t mill and chat with strangers at random. We inhabit networks of family and friends. In the modern world, the size, and pervasiveness, of those networks is greater than it ever has been. So shouldn’t we look at what happens to the same experiment if we put it on a network and only let agents talk to their friends?

Let’s do that. First, let’s use an arbitrary network. One that’s basically random. The result looks like this.

[Figure: RandomPop. Decision quality against total population size when agents chat only across a random network.]

As you can see, we get nearly the same result, but the group has to be a little bigger before we reach the same decision-making quality. That doesn’t seem so bad.
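In code, the only real change from the well-mixed sketch is that the agents now sit on the nodes of a graph and only chat across its edges. Here’s one way to do it with networkx and an Erdős–Rényi random graph, reusing the Agent class and make_population() from earlier; the edge probability is an arbitrary choice.

```python
import random
import networkx as nx

def run_on_network(graph, agents_by_node, chats=100_000):
    """Same opinion model, but every chat happens across an edge of the graph."""
    nodes = list(graph.nodes)
    for _ in range(chats):
        u = random.choice(nodes)
        neighbours = list(graph.neighbors(u))
        if not neighbours:
            continue                    # isolated node: nobody to talk to
        v = random.choice(neighbours)
        agents_by_node[u].meet(agents_by_node[v])
        agents_by_node[v].meet(agents_by_node[u])
    votes_b = sum(a.opinion == "B" for a in agents_by_node.values())
    return "B" if votes_b > len(agents_by_node) / 2 else "A"

def run_random_network(n_undecided, edge_prob=0.05):
    agents = make_population(4, 6, n_undecided)           # from the earlier sketch
    graph = nx.erdos_renyi_graph(len(agents), edge_prob)
    return run_on_network(graph, dict(zip(graph.nodes, agents)))
```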

But unfortunately, human social networks aren’t random. Modern electronic social networks tend to be what’s called scale-free networks. What happens if we build one of those?

[Figure: ScalePop. Decision quality against total population size on a scale-free network.]

That’s not so hot. Once the network goes past a certain size, the quality of the decision-making actually seems to degrade. Bummer. For our simple simulation, at least, adding voices doesn’t add accuracy.
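For what it’s worth, the scale-free variant is a one-line change to the sketch above: swap the Erdős–Rényi generator for networkx’s Barabási–Albert one. The attachment parameter is again an arbitrary choice.

```python
def run_scale_free(n_undecided, attachments=2):
    agents = make_population(4, 6, n_undecided)                  # earlier sketch
    graph = nx.barabasi_albert_graph(len(agents), attachments)   # scale-free network
    return run_on_network(graph, dict(zip(graph.nodes, agents)))
```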

But still, the degradation doesn’t seem too bad. A score of 0.95 is pretty decent. Maybe we shouldn’t worry. Except of course, that in human networks, not every voice has the same power. Some people can buy television channels while others can only blog. And many people lack the resources or the freedom to do even that.

So what happens if we give the minority opinion-holders the most social power? In essence, if we make them the hubs of our networks and turn them into an elite with their own agenda? If you do that, the result looks like this.

[Figure: ScaleBiasPop. Decision quality against total population size on a scale-free network with the decided minority placed at the hubs.]

As you can see, as the system scales, the elite wins ever more frequently. Past a certain network size, they’re winning more often than not. They own the consensus reality, even though most of the conversations that are happening don’t even involve them.
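In simulation terms, giving the minority that kind of power just means parking the decided-A agents on the best-connected nodes. Here’s one way to sketch it, building on the snippets above.

```python
def run_elite_hubs(n_undecided, attachments=2):
    """Scale-free network with the decided minority (A) placed on the hubs."""
    agents = make_population(4, 6, n_undecided)
    graph = nx.barabasi_albert_graph(len(agents), attachments)
    # Best-connected nodes first...
    hubs_first = sorted(graph.nodes, key=graph.degree, reverse=True)
    # ...paired with decided-A agents first, then decided-B, then the undecided.
    agents.sort(key=lambda a: (a.kind != "decided", a.opinion != "A"))
    return run_on_network(graph, dict(zip(hubs_first, agents)))
```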

My broad conclusion, then, is that we should be extremely careful about accepting what is presented to us as the truth via electronic media, even when it seems to come from our peers. The more powerful communication technology gets, the easier it is for powerful individuals to exploit it. A large, networked society is trivially easy to manipulate.

Is there something we can do about this? I think so. Remember that the randomly mixed population always does better. So maybe we should be paying a little less attention to the news and Facebook, and having more conversations with people we encounter in our day-to-day lives.

In the networked society we inhabit, we’re conditioned not to do that. Often it feels uncomfortable. However, maybe we need to be rethinking that habit if we want to retain our social voice. The more we reach out to people whose opinions we don’t know yet, and allow ourselves to be influenced by them, the less power the media has, and the stronger we collectively become.

 


On Spock

Leonard Nimoy died today. I found myself surprisingly affected by that. And as I watched the tide of sorrow pour over the internet, it occurred to me to ask: why, specifically, was I so touched? In effect, I channelled my inner Spock to process my feelings, and in doing so, partially answered my own question.

Leonard Nimoy had many talents, but for many of my generation, the fact that he was Spock effectively eclipsed the rest of them. Does that belittle the rest of his achievements? To my mind, not in the least, because what Spock symbolized was inspiring, and generation defining.

I grew up as a nerdy, troubled kid in a school that didn’t have the first clue of what to do with me. They couldn’t figure out whether to shove me into the top math stream or detention, so they did both. I was singled out for bullying by both other pupils and the school staff, and had test scores that oscillated wildly from the top of the class to the very bottom, depending on how miserable I was.

In that environment, it was trivially easy to see myself as an alien. I cherished my ability to think rationally, and came to also cherish my differentness. There weren’t many characters in popular media for a kid like that to empathize with. Spock, though, nailed it.

And while my school experience was perhaps a little extreme, I suspect that a very similar process was happening for isolated, nerdy kids all across the western world.

And here’s the root of why: Spock was strong because he was rational. Sure, he was physically powerful and had a light form of telepathy and all that, but what made him terrific was his utter calm under incredibly tough conditions. Furthermore, as Leonard Nimoy’s acting made clear, he was still quite capable of emotionally engaging, of loving, and having friends, even if he seldom admitted it to himself. Spock didn’t just give us someone to identify with. He encouraged us to inhabit that rationality, and let it define us.

Leonard Nimoy’s character kept it together when everyone around him wasn’t thinking straight, and made it look cool. In doing so, he helped to inspire a generation of computer scientists, entrepreneurs, and innovators who have changed the world, and the status of nerds within it.

The kids growing up now don’t have a Spock. Sure, they have plenty of other nerd-icons to draw from, and maybe they’re more appropriate for the current age. But for me, none of them really speak to the life-affirming power of level-headed thought in the way that Spock did.

Looking back on it, I see that Leonard Nimoy, Gene Roddenberry, and the rest of the team who created Spock’s character, helped inform the life philosophy that has guided me for years, and that’s this.

All emotions are valid, from schadenfreude to love. They’re all part of us, and should be respected, even when we’re tempted to be ashamed of them. But emotions should have a little paddock to run around in. The point at which emotions start causing problems and eating the flowers is when you let them get out of the paddock. So long as you look after your paddock, you can transcend your limitations while remaining fully human.

And so, today, I confess that I find the death of Leonard Nimoy incredibly sad, but its significance also, somehow, fascinating.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Social media and creeping horror

One of the things my friends have advised me to do as part of building my presence as a new author is take social media seriously. Particularly Twitter. I’ve been doing that, and for the most part enjoying it, but I’m also increasingly convinced that the medium of electronic social media is terrifying, both in its power, and its implications.

By this point, many of us are familiar with the risks of not being careful around social media. The New York Times recently published a brilliant article on it.

It’s easy to look at cases such as those the article describes and to think, “well, that was a dumb thing to do,” of the various individuals singled out for mob punishment. But I’d propose that making this kind of mistake is far easier than one might think.

A few years ago, I accidentally posted news of the impending birth of my son on Facebook at a time when my wife wasn’t yet ready to talk about it. It happened because I confused adding content to my wall with replying to a direct message. That confusion came about because the interface had been changed. I wondered subsequently, after learning more about Facebook, whether the change had been made on purpose, to solicit exactly that kind of public sharing of information.

In the end, this wasn’t a big deal. Everyone was very nice about it, including my wife. But it reminded me that any time we put information into the internet, we basically take the world on trust to use that information kindly.

However, the fact that we can’t always trust the world isn’t what’s freaking me out. What freaks me out is why.

The root of my concern can perhaps be summarized by the following excellent tweet by Sarah Pinborough.

*Looks through Twitter feed desperate for something funny.. humour feeds the soul. Nope, just people shouting their worthy into the void…*

I think the impressive Ms. Pinborough intended this remark in a rather casual way, but to my mind, it points up something crucial. And this is where it gets sciencey.

Human beings exercise social behavior when it fits with their validation framework. We all have some template identity for ourselves, stored in our brains as a set of patterns which we spend our days trying to match. Each one of those patterns informs some facet of who we are. And matching those patterns with action is mediated by exactly the same dopamine neuron system that guides us towards beer and chocolate cake.

What this means is that when we encounter a way to self-validate on some notion of social worth with minimal effort, we generally take it. Just like we eat that slice of cake left for us on the table.  And social media has turned that validation into a single-click process. In other words, without worrying too much about it, we shout our worthy into the void. 

This is scary because a one-click process doesn’t leave much room for second-guessing or self-reflection. Furthermore, the effects of clicking are often immediate. This reinforces the pattern, making it ever more likely that we’ll do the same thing again. And that’s not good for us. We get used to social validation being effortless, satisfying, and requiring little or no thought.

We may firmly assure ourselves that all our retweeting, liking, and pithy outrage is born out of careful judgement and a strong moral center, but neurological reality is against us. The human mind loves short-cuts. Even if we start with the best rational intentions, our own mental reward mechanisms inevitably betray us. Sooner or later, we get lazy.

Twenty years ago, did people spend so much of their effort shouting out repeated worthy slogans at each other? Were they as fast to outrage or shame those who’d slipped up? How about ten years ago? I’d argue that we have turned some kind of corner in terms of the aggressiveness of our social norming. And we’ve done so, not because we are now suddenly somehow more righteous. We’ve done it because it’s cheap. Somebody turned self-righteousness into a drug for us, and we’re snorting it.

But unlike lines of cocaine, this kind of social validation does not come with social criticism attached. Instead, it usually comes from spontaneous support from everyone else who’s taking part. This kind of drug comes with a vast, unstoppable enabler network built in. This makes electronic outrage into a kind of social ramjet, accelerating under its own power. And as with all such self-reinforcing systems, it is likely to continue feeding on itself until something breaks horribly.

Furthermore, dissent to this process produces an attendant reflexive response, just as hard and as sharp as our initial social judgements. Those who contest the social norming are likely to be punished too, because they threaten an established channel of validation. The off-switch on our ramjet has been electrified. Who dares touch it?

The social media companies see this to some extent, I believe. But they don’t want to step in because they’d lose money. The more Twitter and Facebook build themselves into the fabric of our process of moral self-reward, the more dependent on them we become, and the less likely we are to spend a day with those apps turned off.

Is there a solution to this kind of creeping self-manifested social malaise? Yes. Of course. The answer is to keep social media for humor, and for news that needs to travel fast. We should never shout our worthiness. We should resist the commoditization of our morality at all costs.

Instead, we should compose thoughts in a longer format for digestion and dialog. Maybe that’s slower and harder to read, but that’s the point. Human social and moral judgements deserve better than the status of viruses. When it comes to ostracizing others, or voting, or considering social issues, taking the time to think makes the difference between civilization and planet-wide regret.

The irony here is that many of those people clicking are those most keen to rid the world of bigotry. They hunger for a better, kinder planet. Yet by engaging in reflexive norming, they cannot help but shut down the very processes that make liberal thinking possible. The people whose voices the world arguably needs most are being quietly trained to shout out sound-bites in return for digital treats. We leap to outrage, ignoring the fact that the same kind of instant indignation can be used to support everything from religious totalitarianism to the mistreatment of any kind of minority group you care to name. A world that judges with a single click is very close in spirit to one that burns witches.

In short, I propose: post cats, post jokes, post articles. Social justice, when fairly administered, is far more about the justice than about the social.

(My first novel, Roboteer, comes out from Gollancz in July 2015)

Barricade, and opening up

I have a book coming out this year and the anticipation is affecting me. Perhaps understandably, I have become fascinated by the process that authors go through when their books hit print. Countless writers have gone through it. Some to harsh reviews, some to raves, and some, of course, to dreadful indifference. What must that be like, to have something you’ve spent years on suddenly be held up for casual judgement? I have no idea, but I’ll probably find out soon.

It’s probably natural that, in trying to second-guess this slightly life-changing event, I’ve looked to my peers. Specifically, I’ve looked to those other new authors that my publisher is carrying—those people a little further down the same path as myself.

In stalking them on the web, I hit my first key realization. As a writer, I should have started, years ago, giving reviews to every writer whose work struck me in one way or another. And that’s because without such feedback, a writer is alone in the dark. A review by another writer, even an unfavorable one, is a mark of respect.

As it is, I have a tendency to lurk online, finding what I need but not usually participating in the business of commenting. However, this process of looking at the nearly-me’s out there has brought home that the web can and should be a place of dialog. It’s stronger and better when individual opinions are contributed. If I expect it from others, I should contribute myself. The reviewing habit, then, is one I am going to try to take up immediately.

Which brings me to the first Gollancz title I consumed during my peerwise investigation: Barricade, by Jon Wallace. And to my first online book review. Before I tell you what I thought of it, I should first give you a sense of what it’s about. Rather than cutting a fresh description, I will pull from Amazon.

Kenstibec was genetically engineered to build a new world, but the apocalypse forced a career change. These days he drives a taxi instead. A fast-paced, droll and disturbing novel, BARRICADE is a savage road trip across the dystopian landscape of post-apocalypse Britain; narrated by the cold-blooded yet magnetic antihero, Kenstibec. Kenstibec is a member of the ‘Ficial’ race, a breed of merciless super-humans. Their war on humanity has left Britain a wasteland, where Ficials hide in barricaded cities, besieged by tribes of human survivors. Originally optimised for construction, Kenstibec earns his keep as a taxi driver, running any Ficial who will pay from one surrounded city to another. The trips are always eventful, but this will be his toughest yet. His fare is a narcissistic journalist who’s touchy about her luggage. His human guide is constantly plotting to kill him. And that’s just the start of his troubles. On his journey he encounters ten-foot killer rats, a mutant king with a TV fixation, a drug-crazed army, and even the creator of the Ficial race. He also finds time to uncover a terrible plot to destroy his species for good – and humanity too.

My two cents:

I enjoyed this book. It had shades of Blade Runner and Mad Max, with a heavy dose of English cultural claustrophobia thrown in. I liked the way that the viewpoint character’s flattened affect lifted gently over the course of the novel. I liked the pacing. I liked the simple, self-contained quality of the dystopian world that’s presented. While the content is often bleak, sometimes to the point of ruling out hope, there is always humor there. And most of all, I appreciated the underlying message of the book. In essence, Barricade proposes (IMO) that we are saved in the end not by clever ideas or grand political visions, but by hope, humanity, and persistent, restless experimentation in the face of adversity. I sympathize with that outlook.

Is the book perfect? Of course not. No book ever is. The flattened affect, and the blunt, violent bleakness of the novel, both come with a cost in reader engagement that will no doubt vary from person to person. I was not bothered, but I can imagine others who would be.

Furthermore, the human characters, bar one, are ironically the least fully drawn (perhaps deliberately). But all creative choices come with side-effects. Barricade held my attention to the end, entertained me, and encouraged me to think.

AI and Existential Threats

So on re-reading my last AI post, I decided that it perhaps seemed a little glib. After all, some heavy-hitters from Nick Bostrom on down have seriously considered the risks from superintelligent AI. If I’m ready to write such risks off, I should at least do justice to the arguments of the idea’s proponents, and make clear why I’m not specifically concerned.

First, I should encapsulate what I see as the core argument of the Existential Threat crowd. To my mind, this quote from Nick Bostrom captures it quite nicely.

Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

In other words, once we create something that can outthink us, we are helpless if its goals don’t align with ours. It’s an understandable concern. After all, isn’t this the lesson that the evolution of the human race proved? The smarter the race, the bigger it wins, surely.

Well, maybe.

I would argue that the way we think about risks, and the way we think about intelligence, deeply colors our perception of this issue. We can’t think about it straight because we’re not designed to.

First, there is an issue of scale. People tend to forget that reasoning capacity is easy to come by in nature, and not always useful. There was a lovely experiment a few years back in which scientists tried to breed more intelligent fruit-flies. They succeeded with surprising ease. However, those flies had a shorter lifespan than normal flies, and didn’t obviously have a natural advantage at being a fly. Because flies are small, it’s more efficient for them to be easily adaptable at the level of evolutionary selection and have loads of children than it is to be smart. The same is even more true of smaller organisms like bacteria.

The lesson here is that intelligence confers a scale-dependent advantage. We are the smartest things we know of. However, we assume that having more smarts always equals better, despite the fact that this pattern isn’t visible anywhere in nature at scales larger than our own. There is, for instance, no evidence of superintelligent life out among the stars.

While it still might be true that smarter equals better, it might also be the case that smartness at large scales is a recipe for self-destruction. Superintelligent life might kill itself before it kills us. After all, the smarter we’ve become, the better at doing that we seem to be.

Then there is the issue of what intelligence is. Consider the toy example from Bostrom’s quote. In order for this scenario to hold, our AI must be smart enough to scheme against humans, and at the same time insufficiently self-aware as to the long term cost of pursuing that goal: namely the destruction of the entities who maintain it and enable it to make paperclips in the first place.

To resolve this paradox, we have to assume that the AI maintains itself. However, as Stephen Hawking will tell you, having an excess of intelligence does not magically bestow physical self-sufficiency. Neither does it bestow the ability to necessarily design such a system and force it to be implemented. Furthermore, we have to assume that the humans building the self-sufficient machines at no time notice that they’re constructing the tools that will inform their own obsolescence. Not impossible, but at the same time, another stretch on the Existential Threat model.

Another problem is that we tend to see intelligence as a scalar commodity that is universal in nature. This despite the vast array of neuroscientific evidence to the contrary. We see others as having more, or less, than ourselves while at the same time rewarding ourselves for specific cognitive talents that are far from universal: insight, empathy, spatial skills, etc. Why do we do this? Because reasoning about other intelligences is so hard that it requires that we make a vast number of simplifying assumptions.

These assumptions are so extreme, and so important to our continued functioning, that in many cases they actually get worse the more people think about AI. From the fifties to the nineties, a huge number of academics honestly believed that building logic systems based on predicate calculus would be adequate to capture all the fundamentals of human intelligence. These were smart people. Rather than giggle at them, we should somberly reflect on how easy it is to be so wrong.

Related to this, we also assume automatically that AI will reason about threats and risks in a self-centered fashion, just as we do. In Bostrom’s example, the AI has to care that humans shutting it down will result in the end of paperclip production. Why assume that? Are we proposing that the will to self-preservation is an automatic consequence of AI design? To my knowledge, there is not a single real-world AI application that thinks this way. Furthermore, none of them show the slightest tendency to start. I would propose that we have this instinct not because we’re intelligent, but because we’re evolved from unintelligent creatures that demonstrated this trait because it conferred selective advantage.

So for AI to reason selfishly, we have to propose that the trait for self-preservation comes from somewhere. Let’s say it comes from malware, perhaps. But even if we make this assumption, there’s still a problem.

Why would we propose that such an intelligence would automatically choose to bootstrap itself to even greater intelligence? How many people do you know who’d sign up for voluntary brain surgery? Particularly, brain surgery conducted by someone no smarter than themselves. Because that’s what we’re proposing here.

There is a reason that this isn’t a popular lifestyle choice. And that’s that the same will to self-preservation acts against any desire for self-surgery, because self-surgery can’t come without risk. In other words, you can’t have your self-preservation cake and eat it too.

But perhaps the greatest reason why we shouldn’t be too worried about superintelligent AI is because we can see this problem. Smart machines have been scaring us for generations and they’re not even here yet. By contrast, antibiotic-resistant bacteria evolving through the abuse of factory farming practices present a massive threat that the CDC have been desperately trying to raise awareness of. But people aren’t interested. Because they like cheap chicken.

In short, I don’t assume that superintelligent AI represents no threat. But I do strongly suspect that when something comes along and clobbers us, it’ll be something we didn’t see coming. Either that, or something we didn’t want to see.

I, for one, welcome our new robot overlords

There has been a lot in the press of late talking about the threat of human-level AI. Stephen Hawking has gone on record talking about the risks. So has Elon Musk. Now Bill Gates has joined the chorus.

This kind of talk makes me groan. I’d like to propose the converse for a moment, so that everyone can try it on. Maybe AI is the only thing that’s going to save our asses. Why? How do I justify that? First, let’s talk about why AI isn’t the problem.

Concerns about AI generally revolve around two main ideas. First, that it’ll be beyond our control, and second, that it’ll bootstrap its way to unspeakable power, as each generation of AI builds a smarter one to follow it.

Yes, AI we don’t understand would be beyond our control. Just like weather, or traffic, or crop failure, or printers, or any of the other unpredictable things we struggle with every day. What is assumed about AI that supposedly makes it a different scale of threat is intent. But here’s the thing. AI wouldn’t have intent that we didn’t put into it. And intent doesn’t come from nowhere. I have yet to meet a power-hungry phone, despite the fact that we’ve made a lot of them.

Software that can be said to have intent, on the other hand, like malware, can be extremely dangerous. And malware, by some measures, is already something we can’t control. Certainly there is no one in the world who is immune to the risks of cyberattack. This despite the fact that a lot of malware is very simple.

So why do people underestimate the risks of cyberattack and overstate AI? It’s for the same reason that AI research is one of the hardest kinds of research in the world to do properly. The human mind is so hamstrung by assumptions about what intelligence is that we can’t even think about it straight. Our brains come with ten million years of optimized wiring that forces us to make cripplingly incorrect assumptions about topics as trivial as consciousness. When it comes to assessing AI, it’s hard enough to get the damned thing working, let alone make rational statements about what it might want to do once it got going.

This human flaw shows up dramatically in our reasoning about how AI might bootstrap itself to godhood. How is that honestly supposed to work? Intelligence is about making guesses in an uncertain universe. We screw up all the time. Of all the species on Earth, we are the ones capable of the most spectacular pratfalls.

The things that we’re worst at guessing about are the things that are at least as complicated as we are. And that’s for a really good reason. You can’t fit a model of something that requires n bits for its expression into something that only has n-1 bits. Any AI that tried to bootstrap itself would be far more likely to technologically face-plant than achieve anything. There is a very good reason that life has settled on replicating itself rather than trying to get the jump on the competition via proactive self-editing. That’s because the latter strategy is self-defeatingly stupid.

In fact, the more you think about the idea of a really, really big pocket calculator suddenly acquiring both the desire and the ability to ascend into godhood, the dumber it seems. Complexity is not just a matter of scale. You have to be running the right stuff. Which is why there isn’t more life on Jupiter than there is here.

On the other hand, we, as a species, have wiped out a third of our biodiversity since nineteen seventy. We have, as I understand it, created a spike in carbon dioxide production unknown at any time in geological history. And we have built an economy predicated on the release of so much carbon that it would be guaranteed to send the planet into a state of runaway greenhouse effect that would render it uninhabitable.

At the same time, we are no closer to ridding the world of hunger, war, poverty, disease, or any of those other things we’ve claimed to not like for an awfully long time. We have, on the other hand, put seven billion people on the planet. And we’re worried about intelligent machines? Really?

It strikes me that putting the reins of the planet into the hands of an intelligence that perhaps has a little more foresight than humanity might be the one thing that keeps us alive for the next five hundred years. Could it get out of control? Why yes. But frankly not any more than things already are.

On Piketty

When I go into my local bookshop, the first thing I see on the table in front of me is usually new fiction, or a coffee table book, or something with celebrities in it. Today, it was Capital in the Twenty-First Century, by Thomas Piketty. I love this.

It makes me feel all warm and fuzzy inside that a huge economics book is a bestseller. I don’t care that most people aren’t reading it cover to cover. I haven’t finished it yet myself. What’s great about it is that it represents a longing for a functional political ideology that does not involve handing our autonomy over to people who have already been proven to be crooked.

The gist of the book is simple. Examine data from the last two hundred years and a pattern emerges: over time, wealth accumulates in the hands of a few. Large-scale social shocks like war can reverse this, but then the trend begins again.

Of course, not everyone loves this book. There are many frowny faces from economic and financial circles. The Economist cites four major kinds of criticism of his work. While I am not an economist, I have just spent over a year at Princeton studying wealth inequality, so I would like to add my twopenneth and address each concern in turn.

1: An antipathy to markets

Piketty is accused of being biased against markets right out of the gate. The allusion to Marx’s work in the title is, to some, a clear indicator of prejudice, and a political agenda. This response strikes me as weak. The main thrust of the book is an attempt to do proper data-driven economics. If the data reveals a flaw in our conception of markets, that’s science, not bias.

What I mean by this is, how else would you expect an economist sitting on his pile of data to present a book? If I did a bunch of research about, let’s say, car crashes, and discovered that faulty tires were to blame in almost all cases, would I title the book ‘Tires Make Us Safe’? Would I frame the book as a rousing defense of current tire technology? No, that would be cray cray.

Or perhaps the concern is that it was his bias that made him go and collect all that data in the first place? Maybe he should be more like the last Republican presidential bid, and spend more time unbiasing his polls.

2: Inexact economics

Piketty has been challenged on the specifics of his economic tools, such as his definition of return on capital. And on ignoring certain base principles of the field, such as the expectation that the return on capital should fall as capital accumulates. Not being an economist, I find it harder to comment on this. However, I confess I confront this notion with profound skepticism.

Economics is the study of trade, and appears to take this activity as a social axiom. However, any study of the social behavior of animals makes it painfully clear that trade requires a parity of power to function. In nature, where one organism can take from another without cost, it does so. Yet when I look at the field of economics, the fact that it is inevitably backed by a structure of power that can break down seems broadly absent. So holding Piketty to the standard of a field that doesn’t describe how power works in the first place seems a pretty weak defense.

You do not need economics for the chance accumulation of advantage in one group of agents to lead to an irreversible runaway effect. It is everywhere. Pick up a book on evolutionary dynamics. You will come across this idea very quickly. It is called ‘selection’, and it does not magically stop itself.
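Here’s a toy illustration of that point, with no economics in it at all: give identical agents nothing but multiplicative luck, and holdings concentrate anyway. This is a sketch of the general phenomenon only, not a model from Piketty’s book, and the numbers are arbitrary.

```python
import random

def toy_runaway(n_agents=1000, rounds=500):
    """Identical agents, pure chance: each round, every holding is scaled by a
    random factor. Under perfect equality the top 1% would hold 1% of the
    total; compounding luck alone pushes their share well above that."""
    wealth = [1.0] * n_agents
    for _ in range(rounds):
        wealth = [w * random.uniform(0.9, 1.1) for w in wealth]
    wealth.sort(reverse=True)
    return sum(wealth[: n_agents // 100]) / sum(wealth)

print(f"Top 1% share after compounding luck: {toy_runaway():.1%}")
```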

3: An assumption that the future will resemble the past

There are those who point out that only at a significant remove does our current era resemble the Gilded Age. For starters, wealth is more frequently allocated via large salaries than by large inheritances.

This is also a sad comeback. Those large salaries are not actually wages. They are the result of an investment of reputation. In effect, they are social network capital, not payment for labor.

We know this because CEOs cannot logically be worth as much as they’re paid for the work they do. The tech sector provides a perfect illustration of this. Look at the cross section of highly paid CEOs. Some are wealthy due to business savvy, some via the theft of ideas, some from writing good algorithms, others transparently through luck. There is no intersecting formula that cuts across these characters that codes for outstanding leadership. Thus we must ask, if not quantifiable leadership skills, what are these people being paid for?

The answer is investment potential. Because celebrity CEOs have already succeeded, human beings are prone to allocate to them a high likelihood of future success. The idea of them is worth a lot, as a magnet for the contribution of further capital and the acceleration of sales, despite the folly that this entails. These people are retained because of the mystic aura they grant their companies.

Ironically, when engaged in repeat performances in other companies, these characters are more likely to fail than succeed. And it is precisely this noise in the system that gives people hope in creating new startups. Everyone believes they stand a chance, because, of course, they do. They know in their secret hearts that business is more chance than talent. Everyone also wants to believe, though, that after the fact their luck will be recognized as skill, and that they were really great all along. And due to our collective inability to differentiate between luck and skill, this often actually happens.

Lurking behind this criticism is another form of folly, though. Those who propose that the future is necessarily different from the past must do so in the face of data that shows an accumulation of inequality.

In previous social episodes in dozens of human civilizations, the concentration of power has terminated in war. The war is not necessarily directed at those accumulating wealth. It is often directed at those closest to the people suffering most who appear to have an advantage–often a minority group. However, the effect is usually the same: a shock to the social system that destroys enough order that society can equilibrate.

Those who look at the current accumulation of wealth and propose that it will not end in war are proposing that something else will happen instead. Not a climate disaster, mind you, but something nice. However, they don’t have any prior historical examples to support this claim. Instead, they have theories that are not backed up by data because the data doesn’t exist yet. This is like people standing in a burning building claiming that, because their roof hasn’t caved in yet, their building is different and they should not expect it to happen. For this building, they say, they built a really tall roof on sturdy wooden ratchets so that it keeps going slowly up, and this ensures that collapse is impossible.

4: Disagreement over what should be done

There are those who agree with Piketty’s assessment, but disagree with his notion of what should be done, namely: tax the rich. For instance, one commenter I read suggested the solution was to promote growth by investing in education. Except of course, we have seen investment in education, and we have seen growth, and neither has done a blind bit of good at the scales we’re talking about. You cannot back out inequality by trying to jolly everyone simultaneously into the middle class.

Nevertheless, I have more sympathy for this critique because, quite simply, I believe that the rich have already accumulated enough power that they will not permit themselves to be taxed. Instead, I suspect that war will do its customary job, aided in significant part by social unrest due to climate change. Witness the rise of radicalized politics in Europe as a thrilling precursor to the fun in store.

But herein lies my greatest concern about this entire scenario. The centralization of power simply happens because it can, just as water flows downhill. There is no surprise here. Watching this happen is rather like watching what happens to a slime mold investigating food sources in a petri dish. At first, there are questing threads of protoplasm everywhere. Then the mold identifies the food sources, locks onto them, and abandons the less productive parts of its structure. This is all great while the food source is present. Take the food away, though, and the mold is in trouble. It has lost all those experimental tendrils that formerly provided useful information.

In short, the centralization of power creates a society that is optimized but inflexible. This is because it has concentrated the extra capacity in the system in agents who do not actually need it. This means it is less able to resist environmental shocks. And it is this that we should be worried about, because we already know that environmental shocks are coming. A society with concentrated wealth is not only more prone to war, it is more likely to dramatically disassemble when the conditions that create wealth are disturbed.

Unfortunately, it is also normal for people to look at this kind of picture and deny it because they do not want it to be true. The irony though, is that without early, non-destructive changes, everyone loses. And perhaps, most ironically, the people who stand to lose the most are those who support the status quo, but who lack the luck, power, and ruthlessness to be the last person standing on the iceberg as it melts.

I say this because this is a pattern that readily shows up in simulations. The pattern of aiding the rich in the hope of joining their club is the one most likely to create personal catastrophe in the end-game. Witness the account in Freakonomics on why drug dealers live with their mothers, or why academia in America is imploding. Those people who participate in the rat-race dollar-auction but who are not at the absolute top of the pyramid are the ones who bear the fallout when the system topples. This is because they have made the largest unsustainable investment in it. They are, in short, the kind of people who write articles for magazines pooh-poohing books like Piketty’s. The people who actually win do not write those articles. They have little people to do it for them. The threat to these acolytes, though, comes not from the disbelieving readership, but from those very icons of finance that they seek to reinforce.

For the rest of us, the options are simple. A: Get behind a progressive tax on extreme wealth. B: Get ready for war. Not next year, or the one after, but sooner than you’d like.

I know which one I prefer.
