
Why the world has gone crazy

Do you feel like the world has gone crazy of late? I do. It doesn’t seem that long ago that Western civilization was capable of making relatively rational political decisions that crossed party lines and balanced priorities. Now all of a sudden, we are in a world where people cheer for Donald Trump even as his trade policies destroy their jobs, or cheer for Brexit as it threatens to obliterate their livelihoods, or doubt climate change even as they either roast in overpowering heat or freeze in a roaring polar vortex liberated from our broken North Pole.

Just watching this weird collective calamity unfold freaks me out, and if Twitter is anything to go by, I don’t seem to be the only person feeling this way.

So what’s going on? What happened to the world? When did our society get so very stupid? I believe there’s a simple explanation, and I’m hoping that sharing it will help.

Human beings are the most dominant species on the planet not because of our intelligence, I’d propose, but because of language. Neanderthals had a larger cranial capacity than Homo sapiens and may well have been more intelligent on an individual basis, yet we out-competed them handily. Why? Because human beings are both smart and social. Language allows us to propagate information and learn from each other to an extent that no other species on Earth is capable of. We think and act collectively, and that makes us unbeatable (to date, at least).

However, just because we have intelligence and language doesn’t mean that the underlying way we share information is fundamentally different from how other species do it. There are a lot of social species that can learn and cooperate. Ants and bees are damned good at it. Huge shoals of fish adapt in milliseconds to avoid predators. Vast herds of wildebeest direct themselves to water sources unknown to most in the group as if by magic.

The mechanisms by which all these species make decisions are pretty similar. You start off with a few individuals expressing a preference or an idea that their group-mates then start to copy. As the animals mingle, the good ideas (usually with a few more adherents) tend to propagate faster than the bad ones (with fewer). Eventually, almost every member of the group is behaving the same way and a collective decision has been made. This mechanism produces decent decisions an amazing amount of the time. Try Thomas Seeley’s Honeybee Democracy for an astonishing account of how it works.
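For the curious, here’s roughly what that rule looks like in code. This is a minimal sketch of my own, not Seeley’s actual model: I’m assuming a binary choice, and a rule where each agent adopts whatever opinion dominates among three randomly-encountered peers.

```python
import random
from collections import Counter

def mingle_and_copy(n_a=40, n_b=60, rounds=5000):
    """Minimal sketch of the mingling-copying rule: at each step, one
    agent samples three random peers and adopts whichever opinion is
    most common among them. Polling several peers (rather than one)
    is what lets the more popular idea amplify itself."""
    opinions = ['A'] * n_a + ['B'] * n_b          # B starts slightly ahead
    for _ in range(rounds):
        agent = random.randrange(len(opinions))
        peers = random.sample(range(len(opinions)), 3)
        opinions[agent] = Counter(opinions[p] for p in peers).most_common(1)[0][0]
    return Counter(opinions).most_common(1)[0][0]

wins = sum(mingle_and_copy() == 'B' for _ in range(100))
print(f"The initially more popular idea won {wins}/100 runs")
```

In my runs, the idea that starts with more adherents wins nearly every time, which is the essence of why the trick works so well for well-mixed groups.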

As it turns out, there is plenty of evidence that human beings use this same copying rule. We mimic each other’s choices far more than we let ourselves believe. In his recent book The Formula, on the social causes of success, Albert-László Barabási makes the case clearly. The Knowledge Illusion by Sloman and Fernbach goes into greater depth on the same topic. Or for even more compelling evidence, try Robert Cialdini’s classic book Influence.

And why do we mimic each other? Because it’s an incredibly cheap and effective way to coordinate. Nature employs almost the same algorithm in such a large number of unrelated species because it’s the easiest one for natural selection to pick out. It would be weird if we didn’t copy each other contagiously. After all, that’s basically what language is for.

But there’s a problem here: we no longer live under the conditions in which we evolved. And the conditions we have now are basically a perfect storm for shitty decision making. Let me explain.

While I was briefly working at Princeton, I had the great fortune to meet and interact with Iain Couzin and the members of his lab. Iain was a pioneer in the study of how social animals make decisions. I watched a terrific talk by one of his postdocs outlining the specifics of the mingling-copying mechanism one day and thought to myself: I bet it doesn’t work on networks.

What I mean is: I suspected that the mingling-copying approach to group choice-adoption would work really well when animals were always moving around and encountering new opinions, but if you locked animals onto a social network, the method would start to break down. Why did I suspect this? Because if you’re always copying the same people, local opinions will reinforce. It’s going to be much harder for good ideas to propagate through the whole group because they’ll face bottlenecks and blockades. In a social network, there are only certain routes from one person to another. It might be that the only way for a good idea to reach the people it needs to convince is through someone committed to an idea that’s fundamentally at odds.

It only took about two hours of coding to both reproduce Couzin’s basic result and demonstrate that my suspicions regarding the effect of social networks were correct. Furthermore, the bigger the network, and the more biased the distribution of node connections, the worse the decision making got. (In a biased network, a few nodes have loads of links radiating out of them and most nodes have very few.)

Then, when I put the animals with the bad ideas on the nodes with the most connections, the decision-making went straight to hell. All the benefits of copying each other went out the window. Suddenly, bad ideas were winning all the time.
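My original code is long gone, but the experiment is easy to sketch. Here’s an illustrative version using networkx: the same copy rule as above, restricted to network neighbours, with an option to seed the minority opinion onto the best-connected hubs. The graph size, seeding fraction, and round counts are arbitrary choices of mine, not the original parameters.

```python
import random
from collections import Counter
import networkx as nx

def copy_on_network(graph, minority_on_hubs=False, rounds=20000):
    """The copy rule again, but each agent may only poll its own network
    neighbours. 40% of nodes start with the 'bad' idea A and 60% with
    the 'good' idea B; optionally A is seeded onto the biggest hubs."""
    nodes = list(graph.nodes)
    if minority_on_hubs:
        ranked = sorted(nodes, key=graph.degree, reverse=True)
    else:
        ranked = random.sample(nodes, len(nodes))
    holds_a = set(ranked[:int(0.4 * len(nodes))])
    opinion = {n: ('A' if n in holds_a else 'B') for n in nodes}
    for _ in range(rounds):
        agent = random.choice(nodes)
        nbrs = list(graph.neighbors(agent))
        if len(nbrs) < 3:
            continue                      # too few friends to poll
        polled = random.sample(nbrs, 3)
        opinion[agent] = Counter(opinion[p] for p in polled).most_common(1)[0][0]
    return Counter(opinion.values()).most_common(1)[0][0]

# A scale-free graph: a few heavily-connected hubs, many sparse nodes.
g = nx.barabasi_albert_graph(500, 3)
for biased in (False, True):
    wins = sum(copy_on_network(g, minority_on_hubs=biased) == 'B'
               for _ in range(20))
    print(f"minority on hubs = {biased}: good idea won {wins}/20 runs")
```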

This is a problem, because that same mingling-copying algorithm in our heads encourages us to build social networks that have exactly the wrong kind of bias. These networks are what’s called ‘scale-free’, another term that comes up in Barabási’s work.

Imagine that you’re choosing some music to listen to. You ask five friends, and three of them happen to recommend the same artist. You’re then more likely to listen to that artist than the others you were given, and also more likely to share her work with the next person who asks. This means that those nodes (artists) with a lot of links (attention) tend to get even more. You’re more likely to find yourself listening to Taylor Swift than a local band, for instance, unless you’re trying really hard to do otherwise. Similarly, you’re more likely to choose Google as your search engine than DuckDuckGo, and that’s going to affect what you subsequently see. As technology has advanced, our tendency to bind ourselves into these kinds of social networks has exploded into a kind of digital pandemic.
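That rich-get-richer attachment rule is simple enough to show directly. A toy sketch, with all the numbers invented:

```python
import random

def grow_audience(steps=100000, newcomer_rate=0.02):
    """Toy preferential attachment: each new listener picks an artist
    with probability proportional to the audience that artist already
    has, and occasionally a brand-new artist arrives with one listener."""
    audience = [1] * 10                   # ten unknown artists to start
    for _ in range(steps):
        if random.random() < newcomer_rate:
            audience.append(1)            # a new local band appears
        else:
            # popular artists are proportionally likelier to be picked
            artist = random.choices(range(len(audience)), weights=audience)[0]
            audience[artist] += 1
    return sorted(audience, reverse=True)

top = grow_audience()
print("Biggest audiences:", top[:5])
print("Median audience:", top[len(top) // 2])
# A handful of Taylor-Swift-sized hubs emerge; most artists stay tiny.
```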

The upshot of all this, in case it’s not already obvious, is that our natural collective decision-making instinct, when combined with technology, creates networks that degrade the quality of the decisions we make. Fortunately, we seem to be noticing. The push-back against the effects of social media has started. But unfortunately, the problem doesn’t end there.

This same decision-making feedback effect drives how society allocates money to people. We assess how much a person is worth by the amount of success they already have. So, successful scientists get more rewards heaped on them. Painters who are already in the right galleries get into the right museums. And CEOs trade up to new positions with ever higher pay. It’s a feedback effect baked into our very nature as animals because we simply don’t have the time or information necessary to assess everyone exclusively on their merits.

(I can tell you from personal experience that my novels were taken more seriously after I worked at Princeton than they were before, despite the fact that the novels in question were written before I worked there. Princeton is a magic word that people use to assess likely intellectual aptitude because that’s cognitively much cheaper than trying to make a fresh assessment.)

We make up stories, of course, to justify the worth we allocate to companies, individuals, artists, etc, but stories aren’t science, and the science of how we make decisions is well understood at this point.

This is not to say that talent doesn’t count. You can’t run a functional business unless you’re competent and hungry, just as you can’t get your painting into even one gallery without some artistic ability. Achievement is hard work and the skills we need to succeed are very real. But those traits are just the table-stakes for the game of rich-get-richer success-roulette that follows. People consistently underestimate the effects of social feedback, just as they consistently underestimate how skewed wealth distribution curves actually are.

This is why our society is increasingly shaped by a small number of billionaires and a very large number of everyone else. Which is unfortunate, because nothing affects a person’s incentives like how much money they have. As a result, what looks like a sensible policy to the people with the most social power is inevitably going to diverge from what everyone else thinks is right, or indeed what’s actually objectively a good idea.

Of course, the more power those central individuals have, the louder their voices and the more likely that their opinions will affect decision-making. On top of that, those people directly connected to very powerful individuals have a massive incentive to support the beliefs of their bosses, otherwise their positions relative to social competitors are jeopardized. This drives the belief-systems of billionaires further away from the consensus understanding of what’s going on. They just don’t get the benefit of all the facts flowing through the rest of the social network. Consider the recent Time article on Donald Trump’s intelligence briefings for an example of what this looks like.

The upshot of this is that we make billionaires dumber, the more we pay them. This is not speculation or analogy, but a quantifiable impact you can model. The more attention billionaires receive, the less able they are to process information. And the more power we give them, the more they’re likely to gain. And this is why we are in a global runaway cascade of stupid.
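The original model isn’t reproduced here, but the core of the claim can be illustrated with almost no machinery. The sketch below is purely mine: assume a fixed daily attention budget, and an inflow of messages proportional to a person’s connections.

```python
def share_absorbed(connections, attention_budget=50):
    """Illustrative only: if each connection sends one message a day and
    a person can genuinely attend to attention_budget of them, the share
    of incoming information actually absorbed falls off as 1/connections
    once the budget is exceeded."""
    return min(1.0, attention_budget / connections)

for degree in (10, 100, 1000, 10000):
    print(f"{degree:>6} connections -> absorbs {share_absorbed(degree):.1%}")
```

The better-connected the node, the smaller the fraction of the network’s information it can genuinely take in.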

There’s another important factor that bears mentioning here. While we, as a society, are getting less able to respond rationally to unwelcome information, exactly the same process is happening inside the brains of those billionaires now running the show.

Here’s how that works. People who receive a lot of money for what they do are very likely to self-validate on that fact. What differentiates them from others is their apparent ‘success’, so they’re both likely to believe that their gains reflect some intrinsic personal quality and also to value that quality highly. After all, that’s what we all want: to be good at something and have it be recognized. And when you’re super-rich, people will line up around the block to tell you how great you are.

But as a billionaire gains more wealth, the satisfaction they gain from each bump in their fortunes decreases because it’s that much easier to achieve. They naturally habituate to the sensation, so each rush of triumph is less satisfying. This means that the more they gain, the hungrier they get for more of the same. Power and attention operate like a drug. This is another very well understood and extensively studied behavioral effect.

So here’s the next takeaway: we make the billionaires sadder, needier, and more desperate, the more we pay them.

Ironically, because of these same network effects, we’re more likely to believe that wealth and attention are valuable even as they harm us. We cannot help but be impacted by the consensus delusion that those enormous fortunes we see are somehow the product of a mysterious kind of personal excellence that we may yet be able to exhibit. We believe this despite the mounting evidence that the converse is true: that most super-rich people are idiots of our own creation. Having made the people in the center of our society sick, we then acquire that same sickness. We try harder and harder to validate ourselves on the fuel that the billionaires run on, even as our chances of ever succeeding at it grow steadily slimmer.

Historically speaking, this feedback process always goes to the same place. Those leading society lock in their wealth and make progressively worse decisions until some force comes along to disrupt the social disequilibrium that’s been created. That either happens through war, or invasion, or pandemic, or some other equally fun process. See The Great Leveler by Walter Scheidel for exhaustively complete and utterly convincing details.

The upshot of all this is that without very significant social re-balancing soon, we will be unable to confront climate change or any of the dramatic consequences that arise from it. And without action, a great many will die. My guess is that the next social shock will kill about a billion people. (I’ll explain that number in a later post.) Somewhere in that difficult time, people will take to chaining oligarchs to the decks of their own yachts and letting the raging hand of Nature take its vengeance, but by then it will be too late.

Is there a solution? Of course there is. We don’t have to be blind and ignorant to social feedback effects like the civilizations before us. We have network science for crying out loud. We have neuroscience. And so we have hope.

The number of oligarchs in the world is tiny and their power resides exclusively in our imaginations. So how about each nation coordinates its efforts to simply require that all the money anyone has over some amount (let’s say one billion dollars) be returned to the state and distributed throughout the population?

We don’t do this out of some idea of ‘fairness’. There is no notion of fairness invoked here. Neither do we do it because it is ‘right’ in some sense. Certainly it is impossible for the billionaires to have ‘earned’ that money in any meaningful sense but arguably that’s irrelevant. We are still talking about wealth redistribution, which is always a charged concept. So why do we do it? Because the alternative is that everyone loses, including the billionaires themselves. Either we tell them that the money is going back in the pot for their own good, or all the money everyone has goes away anyway.

Who gets that money? It gets shared out equally between every adult.
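Mechanically, the proposal is just clip-and-share. A sketch with made-up numbers (the cap, the fortunes, and the adult population are all placeholders):

```python
def cap_and_redistribute(fortunes, cap=1_000_000_000, n_adults=250_000_000):
    """Clip every fortune at the cap, pool the excess, and split it
    equally between every adult. All figures are illustrative."""
    excess = sum(max(0, f - cap) for f in fortunes)
    return [min(f, cap) for f in fortunes], excess / n_adults

capped, dividend = cap_and_redistribute([130e9, 60e9, 2.5e9])
print(f"Each adult receives ${dividend:,.0f}")   # ~$758 from three fortunes
```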

Is that ‘fair’? Shouldn’t we go further and hand it out proportionally? We could, but if we do that, we create a power vacuum, start fights, and the whole process breaks. It’ll be like that scene in It’s a Mad, Mad, Mad, Mad World where they can’t decide how many shares of the treasure there should be, only with machine guns and shrapnel bombs. So we keep it very simple. The more important step is to repeat the capping process five years later, and to keep doing it indefinitely.

But what happens if the billionaires simply hide their money overseas while the wealth survey is taking place, I hear you ask? Then they are forbidden from entering or trading in that country unless they participate in the program. Those people are watched and imprisoned if they’re caught. (Remember, these people still get to keep one dollar less than the one billion threshold. That’s still more than nine-hundred-and-ninety-nine million dollars. They still have more money than everybody else and far more than they can possibly spend. Their fate is nothing to be sad about.)

Why do I think this is a sensible approach? In short, because a social wealth distribution that cannot go over a threshold will reorganize itself. People who want to retain power won’t want their visible wealth to go over the limit, so they’ll find other ways to exercise social control by sinking money back into society. Gaming this system will take time and effort to figure out, so while it’s likely to be exploited eventually, in the meantime there will be plenty of opportunity for new fortunes to arise and for rationality to return.

I see this approach as far better than trying to force-equalize society because forced equalization strips away the social incentive for individuals to succeed. It might seem fairer but it’s a surefire way to make an economy nosedive while an entrenched elite of self-appointed enforcers establish themselves to replace the oligarchs who’ve just been removed.

What you actually want is something like capitalism, but with a mechanism in place to prevent runaway idiocy of the sort we have now. People have tried to do that with progressive taxation, of course, but the institutions we might look to for that change have already been gamed via the current process of stupidification. That means we can expect those institutions to be remarkably sluggish in their response to our votes, and to draft legislation that is more arcane and full of holes than anyone wants. Look at the legislation imposed on banks after the Credit Crunch if you want an example of how that is likely to play out.

To my mind, the fix has to be something simple, blunt, and obvious, so that there is no wiggle-room for laws to be altered and cheated. We want a law you can fit in a tweet, because that way it’s easy to apply a social check on whether it’s actually being carried out properly. And because the process exclusively impacts a tiny, overfed, and badly-confused minority, violence in its exercise might actually be avoided.

It wouldn’t work forever, of course, but it might buy us enough time to get rational about the world we live on, and how to keep it from burning up. And that would be a lot better than what we have now.

How do we implement such a change? That’s harder. It requires coordination and persistent agitation, and is undoubtedly the topic for another blog post.

In any case, that’s my take. If you disagree, or believe you have a better solution, I’d love to hear about it. I shall be reading the comments with interest.

 

How we decide

Since my recent post on Twitter and Facebook, I’ve been thinking of airing a piece of science I was tinkering with at Princeton and never got round to putting out into the world. That science asks the following question:

What makes us believe that something is true or right when we lack direct evidence, or experience, to support it? When do we decide to add our voice to a social chorus to demand change?

This question seems particularly pertinent to me in our modern age when so much of our information arrives through electronic channels that are mediated by organizations and mechanisms that we cannot control.

For many social animals, the most important factor in making a decision seems to be what the other members of their social group are doing. Similar patterns show up in experiments on everything from wildebeest, to fish, to honey bees.

Similarly, we humans often tend to believe things if enough of our peers seem to do likewise. People have been examining this tendency in us ever since the experiments of the amazing Stanley Milgram.

It’s easy to see this human trait as a little bit scary—the fact that we often take so much on faith—but, of course, a lot of the time we don’t have a choice. We can’t independently check every single fact that comes our way, or consider every side of a story. And when many of the people who we care about are already convinced, aligning with them is easy.

Fortunately, a lot of animal behavior research suggests that going with the flow is often actually a really good way for social groups to make decisions. Iain Couzin’s lab at Princeton has done some excellent work on this. They’ve shown that increasing the number of members of a social group who aren’t opinionated, and who can be swayed by the consensus through sheer force of numbers, often increases the quality of collective decision-making. Consequently, there are many people who think we should be taking a closer look at these animal patterns to improve our own systems of democracy.

But how effective is this kind of group reasoning for humans? How much weight should we be giving to the loud and numerous voices that penetrate our lives? And how often can we expect to get things dangerously wrong?

Well, the good news is that, because we’re humans rather than bees, we can do some easy science and find out. And I’m going to show you how. To start with, though, we’ll need to know how animal behavior scientists model social decision-making.

In the simple models that are often used, there are two types of agent (where an agent is like a digital sketch of a person, fish or bee). These types are what you might call decided agents, and undecided agents. Decided agents are hard to convince. Undecided agents are more flexible.

Decided agents are also usually divided into camps with differing opinions. X of them prefer option A. Y of them prefer option B. If the number of decided agents who like B is a lot larger than the number who like A, we assume that B is the better option. Then we look to see how easy it is for the agents, as a group, to settle on B over A.

To convince an agent, you let it meet a few others in the group to exchange opinions. If an agent encounters several peers in succession who disagree with it (let’s say three), it considers changing its mind. And the probability of changing is, of course, higher for an undecided agent than for a decided one.

Then we put a bunch of agents in a digital room and have them mill about and chat at random. Very quickly we can make a system that emulates the kind of collective decision-making that often occurs in nature. And, as Iain’s team found, the more undecided agents you have, the better the decision-making gets.
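Here’s a minimal sketch of that kind of model. The two agent types and the three-disagreements rule follow the description above; the specific switching probabilities are invented values of mine, not Couzin’s.

```python
import random
from collections import Counter

def mill_about(n_undecided, rounds=30000, p_decided=0.1, p_undecided=0.8):
    """Four decided agents prefer A and six prefer B, plus a crowd of
    undecided agents (given random starting views here). Agents chat in
    random pairs; any agent that hears three disagreements in a row
    considers switching, with decided agents far harder to budge."""
    agents = ([['A', p_decided] for _ in range(4)] +
              [['B', p_decided] for _ in range(6)] +
              [[random.choice('AB'), p_undecided] for _ in range(n_undecided)])
    streak = [0] * len(agents)            # consecutive disagreements heard
    for _ in range(rounds):
        i, j = random.sample(range(len(agents)), 2)
        for a, b in ((i, j), (j, i)):
            if agents[a][0] != agents[b][0]:
                streak[a] += 1
                if streak[a] >= 3:        # three in a row: maybe switch
                    streak[a] = 0
                    if random.random() < agents[a][1]:
                        agents[a][0] = agents[b][0]
            else:
                streak[a] = 0
    return Counter(a[0] for a in agents).most_common(1)[0][0]

for crowd in (0, 40, 400):
    wins = sum(mill_about(crowd) == 'B' for _ in range(20))
    print(f"{crowd:>4} undecided agents: majority view B won {wins}/20 runs")
```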

[Figure: WellMixedPop — decision quality vs. total population size for a well-mixed group]

In this plot, we’re looking at how the quality of the decision-making scales with the size of the group. We always start with ten decided individuals, four in favor of option A and six in favor of B, and we increase the size of the undecided community, from zero up to 2550.

A score of one on the Y-axis means the option preferred by the decided majority wins every time: an ideal democracy. A score of zero means the smaller group wins every time. The X-axis shows the total size of the population.

As you can see, as the group gets bigger, the quality of the decisions goes up. This is because the minority group of decided individuals take a lot of convincing. If they duke it out directly with the decided majority with nobody else around, the results are likely to be a bit random.

But think about what happens when both decided parties talk to a bunch of random strangers first. A small difference in the sizes of the decided groups makes a huge difference in the number of random people they can reach. That’s because each of those random people also talks to their friends, and the effects compound: if six advocates each convince two strangers while four rival advocates do the same, the gap between the camps grows from two to six in a single round of conversations, and keeps growing from there.

That means that, before too long, the minority group is much more likely to encounter a huge number of people already in agreement. Hence, they eventually change their tune. Having undecided people makes that chorus of agreement bigger.

This is awesome for bees and fish, but here’s the problem: human beings don’t mill and chat with strangers at random. We inhabit networks of family and friends. In the modern world, the size, and pervasiveness, of those networks is greater than it ever has been. So shouldn’t we look at what happens to the same experiment if we put it on a network and only let agents talk to their friends?

Let’s do that. First, let’s use an arbitrary network. One that’s basically random. The result looks like this.

[Figure: RandomPop — decision quality vs. total population size on a random network]

As you can see, we get nearly the same result, but the group has to be a little bigger before we reach the same decision-making quality. That doesn’t seem so bad.
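Implementation-wise, putting the model on a network is a tiny change: an agent’s conversation partner is drawn from its graph neighbours instead of the whole population. A sketch, assuming networkx for graph construction (and assuming the agent has at least one neighbour):

```python
import random
import networkx as nx

def pick_partner(graph, agent):
    """Networked variant of the chat step: partners come only from an
    agent's neighbours, never from the population at large."""
    return random.choice(list(graph.neighbors(agent)))

# The two network types compared here; the sizes are illustrative.
g_random = nx.erdos_renyi_graph(500, 0.02)        # arbitrary / random links
g_scale_free = nx.barabasi_albert_graph(500, 3)   # hub-dominated links
```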

But unfortunately, human social networks aren’t random. Modern electronic social networks tend to be what’s called scale-free networks. What happens if we build one of those?

[Figure: ScalePop — decision quality vs. total population size on a scale-free network]

That’s not so hot. Once the network goes past a certain size, the quality of the decision-making actually seems to degrade. Bummer. For our simple simulation, at least, adding voices doesn’t add accuracy.

But still, the degradation doesn’t seem too bad. A score of 0.95 is pretty decent. Maybe we shouldn’t worry. Except of course, that in human networks, not every voice has the same power. Some people can buy television channels while others can only blog. And many people lack the resources or the freedom to do even that.

So what happens if we give the minority opinion-holders the most social power? In essence, if we make them the hubs of our networks and turn them into an elite with their own agenda? If you do that, the result looks like this.

[Figure: ScaleBiasPop — decision quality vs. population size on a scale-free network with the minority opinion seeded onto the hubs]

As you can see, as the system scales, the elite wins ever more frequently. Past a certain network size, they’re winning more often than not. They own the consensus reality, even though most of the conversations that are happening don’t even involve them.

My broad conclusion, then, is that we should be extremely careful about accepting what is presented to us as the truth via electronic media, even when it seems to come from our peers. The more powerful communication technology gets, the easier it is for powerful individuals to exploit it. A large, networked society is trivially easy to manipulate.

Is there something we can do about this? I think so. Remember that the randomly mixed population always does better. So maybe we should be paying a little less attention to the news and Facebook, and having more conversations with people we encounter in our day-to-day lives.

In the networked society we inhabit, we’re conditioned not to do that. Often it feels uncomfortable. However, maybe we need to be rethinking that habit if we want to retain our social voice. The more we reach out to people whose opinions we don’t know yet, and allow ourselves to be influenced by them, the less power the media has, and the stronger we collectively become.