Why the world has gone crazy

Do you feel like the world has gone crazy of late? I do. It doesn’t seem that long ago that Western civilization was capable of making relatively rational political decisions that crossed party lines and balanced priorities. Now all of a sudden, we are in a world where people cheer for Donald Trump even as his trade policies destroy their jobs, or cheer for Brexit as it threatens to obliterate their livelihoods, or doubt climate change even as they either roast in overpowering heat or freeze in a roaring polar vortex liberated from our broken North Pole.

Just watching this weird collective calamity unfold freaks me out, and if Twitter is anything to go by, I don’t seem to be the only person feeling this way.

So what’s going on? What happened to the world? When did our society get so very stupid? I believe there’s a simple explanation, and I’m hoping that sharing it will help.

Human beings are the most dominant species on the planet not because of our intelligence, I’d propose, but because of language. Neanderthal man had a larger cranial capacity than Homo sapiens and may well have been more intelligent on an individual basis, yet we out-competed the Neanderthals handily. Why? Because human beings are both smart and social. Language allows us to propagate information and learn from each other to an extent that no other species on Earth is capable of. We think and act collectively, and that makes us unbeatable (to date, at least).

However, just because we have intelligence and language, that doesn’t mean that the way we share information is fundamentally different from the way any other species does it. There are a lot of social species that can learn and cooperate. Ants and bees are damned good at it. Huge shoals of fish adapt in milliseconds to avoid predators. Vast herds of wildebeest direct themselves to water sources unknown to most in the group as if by magic.

The mechanisms by which all these species make decisions are pretty similar. You start off with a few individuals expressing a preference or an idea that their group-mates then start to copy. As the animals mingle, the good ideas (usually with a few more adherents) tend to propagate faster than the bad ones (with fewer). Eventually, almost every member of the group is behaving the same way and a collective decision has been made. This mechanism produces decent decisions an amazing amount of the time. Try Thomas Seeley’s Honeybee Democracy for an astonishing account of how it works.
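To make that mechanism concrete, here’s a minimal toy sketch in Python. It’s my own illustration, not Seeley’s or Couzin’s actual model, and every number in it is arbitrary: a slightly better idea, copied a little more readily, tends to take over a well-mixed group.

```python
import random

def mingle_and_copy(n_agents=200, quality_a=0.55, quality_b=0.45,
                    rounds=5000, seed=0):
    """Toy well-mixed copying model: agents bump into each other at random
    and copy opinions, with a slight bias toward the better option."""
    rng = random.Random(seed)
    # Start with only a handful of agents holding any opinion at all.
    opinions = ['A'] * 5 + ['B'] * 5 + [None] * (n_agents - 10)

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)   # two agents meet
        if opinions[j] is None:
            continue
        # The chance of copying depends (weakly) on the idea's quality.
        quality = quality_a if opinions[j] == 'A' else quality_b
        if rng.random() < quality:
            opinions[i] = opinions[j]

    return opinions.count('A'), opinions.count('B')

# Run many trials: the slightly better option A usually ends up dominating.
wins = sum(a > b for a, b in (mingle_and_copy(seed=s) for s in range(100)))
print(f"Option A (the better idea) dominated in {wins}/100 runs")
```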

As it turns out, there is plenty of evidence that human beings use this same copying rule. We mimic each other’s choices far more than we let ourselves believe. In his recent book The Formula on the social causes of success, Albert-László Barabási makes the case clearly. The Knowledge Illusion by Sloman and Fernbach goes into greater depth on the same topic. Or for even more compelling evidence, try Robert Cialdini’s classic book Influence.

And why do we mimic each other? Because it’s an incredibly cheap and effective way to coordinate. The reason nature employs almost the same algorithm in so many unrelated species is that it’s the easiest one for natural selection to pick out. It would be weird if we didn’t copy each other contagiously. After all, that’s what language is basically for.

But there’s a problem here: we no longer live under the conditions in which we evolved. And the conditions we have now are basically a perfect storm for shitty decision making. Let me explain.

While I was briefly working at Princeton, I had the great fortune to meet and interact with Iain Couzin and the members of his lab. Iain was a pioneer in the study of how social animals make decisions. I watched a terrific talk by one of his postdocs outlining the specifics of the mingling-copying mechanism one day and thought to myself: I bet it doesn’t work on networks.

What I mean is: I suspected that the mingling-copying approach to group choice-adoption would work really well when animals were always moving around and encountering new opinions, but if you locked animals onto a social network, the method would start to break down. Why did I suspect this? Because if you’re always copying the same people, local opinions will reinforce. It’s going to be much harder for good ideas to propagate through the whole group because they’ll face bottlenecks and blockades. In a social network, there are only certain routes from one person to another. It might be that the only way for a good idea to reach the people it needs to convince is through someone committed to an idea that’s fundamentally at odds with it.

It only took about two hours of coding to both reproduce Couzin’s basic result and demonstrate that my suspicions regarding the effect of social networks were correct. Furthermore, the bigger the network, and the more biased the distribution of node connections, the worse the decision making got. (In a biased network, a few nodes have loads of links radiating out of them and most nodes have very few.)

Then, when I put the animals with the bad ideas on the nodes with the most connections, the decision-making went straight to hell. All the benefits of copying each other went out the window. Suddenly, bad ideas were winning all the time.
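That original simulation isn’t published here, so the sketch below is only my reconstruction of what an experiment like that might look like. The networkx dependency, the Barabási–Albert graph, the quality numbers, and the update rule are all my assumptions rather than the actual code:

```python
import random
import networkx as nx   # assumed dependency; any graph library would do

def run_on_network(bad_on_hubs, n=500, steps=100_000, rng_seed=0):
    """Copying dynamics restricted to a scale-free social network.
    A slightly better idea competes with a slightly worse one; the worse
    idea can optionally be seeded on the best-connected hub nodes."""
    quality = {'good': 0.55, 'bad': 0.45}     # copying bias toward the better idea
    rng = random.Random(rng_seed)
    g = nx.barabasi_albert_graph(n, m=2, seed=rng_seed)  # skewed degree distribution

    by_degree = sorted(g.nodes, key=g.degree, reverse=True)
    bad_seeds = by_degree[:10] if bad_on_hubs else rng.sample(list(g.nodes), 10)
    good_seeds = rng.sample([v for v in g.nodes if v not in bad_seeds], 10)

    opinion = {v: None for v in g.nodes}
    opinion.update({v: 'bad' for v in bad_seeds})
    opinion.update({v: 'good' for v in good_seeds})

    nodes = list(g.nodes)
    for _ in range(steps):
        v = rng.choice(nodes)
        u = rng.choice(list(g[v]))            # v only ever hears its own neighbours
        if opinion[u] is not None and rng.random() < quality[opinion[u]]:
            opinion[v] = opinion[u]

    return sum(o == 'good' for o in opinion.values()) / n

print("good-idea share, bad idea seeded at random:", run_on_network(False))
print("good-idea share, bad idea seeded on hubs  :", run_on_network(True))
```

In a sketch like this, the interesting comparison is the same one described above: how much worse the group does when the worse idea starts out sitting on the hubs.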

This is a problem, because that same mingling-copying algorithm in our heads encourages us to build social networks that have exactly the wrong kind of bias. These networks are what’s called ‘scale-free’, another term that comes up in Barabási’s work.

Imagine that you’re choosing some music to listen to. You ask five friends, and three of them happen to recommend the same artist. You’re then more likely to listen to that artist than the others you were given, and also more likely to share her work with the next person who asks. This means that those nodes (artists) with a lot of links (attention) tend to get even more. You’re more likely to find yourself listening to Taylor Swift than a local band, for instance, unless you’re trying really hard to do otherwise. Similarly, you’re more likely to choose Google for a search engine than DuckDuckGo, and that’s going to affect what you subsequently see. As technology has advanced, our tendency to bind ourselves into these kinds of social networks has exploded into a kind of digital pandemic.
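That growth rule is known as preferential attachment, and it takes only a few lines of code. This is the textbook sketch rather than anything specific to this post; the numbers are arbitrary.

```python
import random

def grow_scale_free_network(n_nodes=10_000, links_per_newcomer=2, seed=0):
    """Preferential attachment: each newcomer links to existing nodes with
    probability proportional to how many links they already have."""
    rng = random.Random(seed)
    # Every time a node gains a link it gets another entry in this list,
    # so picking uniformly from the list picks in proportion to popularity.
    attachment_pool = [0, 1]          # nodes 0 and 1 start linked to each other
    degree = {0: 1, 1: 1}

    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < links_per_newcomer:
            targets.add(rng.choice(attachment_pool))
        degree[new] = 0
        for t in targets:
            degree[t] += 1
            degree[new] += 1
            attachment_pool.extend([t, new])

    return degree

degree = grow_scale_free_network()
print("five best-connected nodes have degrees:",
      sorted(degree.values(), reverse=True)[:5])
print("median node has degree:", sorted(degree.values())[len(degree) // 2])
```

Run it and the skew is obvious: a handful of early, lucky nodes end up with vastly more connections than the typical node ever gets.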

The upshot of all this, in case it’s not already obvious, is that our natural collective decision-making instinct, when combined with technology, creates networks that degrade the quality of the decisions we make. Fortunately, we seem to be noticing. The push-back against the effects of social media has started. But unfortunately, the problem doesn’t end there.

This same decision-making feedback effect drives how society allocates money to people. We assess how much a person is worth by the amount of success they already have. So, successful scientists get more rewards heaped on them. Painters who are already in the right galleries get into the right museums. And CEOs trade up to new positions with ever higher pay. It’s a feedback effect baked into our very nature as animals because we simply don’t have the time or information necessary to assess everyone exclusively on their merits.

(I can tell you from personal experience that my novels were taken more seriously after I worked at Princeton than they were before, despite the fact that the novels in question were written before I worked there. Princeton is a magic word that people use to assess likely intellectual aptitude because that’s cognitively much cheaper than trying to make a fresh assessment.)

We make up stories, of course, to justify the worth we allocate to companies, individuals, artists, etc., but stories aren’t science, and the science of how we make decisions is well understood at this point.

This is not to say that talent doesn’t count. You can’t even run a functional business unless you’re competent and hungry. Just like you can’t get your painting into even one gallery without some artistic ability. Achievement is hard work and the skills we need to succeed are very real. But those traits are just the table-stakes for the game of rich-get-richer success-roulette that follows. People consistently underestimate the effects of social feedback, just like they consistently underestimate how skewed wealth distribution curves actually are.

This is why our society is increasingly shaped by a small number of billionaires and a very large number of everyone else. Which is unfortunate, because nothing affects a person’s incentives like how much money they have. As a result, what looks like a sensible policy to the people with the most social power is inevitably going to diverge from what everyone else thinks is right, or indeed what’s actually objectively a good idea.

Of course, the more power those central individuals have, the louder their voices and the more likely that their opinions will affect decision-making. On top of that, those people directly connected to very powerful individuals have a massive incentive to support the beliefs of their bosses, otherwise their positions relative to social competitors are jeopardized. This drives the belief-systems of billionaires further away from the consensus understanding of what’s going on. They just don’t get the benefit of all the facts flowing through the rest of the social network. Consider the recent Time article on Donald Trump’s intelligence briefings for an example of what this looks like.

The upshot of this is that we make billionaires dumber, the more we pay them. This is not speculation or analogy, but a quantifiable impact you can model. The more attention billionaires receive, the less able they are to process information. And the more power we give them, the more they’re likely to gain. And this is why we are in a global runaway cascade of stupid.

There’s another important factor that bears mentioning here. While we, as a society, are getting less able to respond rationally to unwelcome information, exactly the same process is happening inside the brains of those billionaires now running the show.

Here’s how that works. People who receive a lot of money for what they do are very likely to self-validate on that fact. What differentiates them from others is their apparent ‘success’, so they’re both likely to believe that their gains reflect some intrinsic personal quality and also to value that quality highly. After all, that’s what we all want: to be good at something and have it be recognized. And when you’re super-rich, people will line up around the block to tell you how great you are.

But as a billionaire gains more wealth, the satisfaction they gain from each bump in their fortunes decreases because it’s that much easier to achieve. They naturally habituate to the sensation, so each rush of triumph is less satisfying. This means that the more they gain, the hungrier they get for more of the same. Power and attention operate like a drug. This is another very well understood and extensively studied behavioral effect.
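One standard way to picture that habituation (my illustrative choice of curve, not a model from this post) is logarithmic satisfaction, where equal absolute gains feel progressively smaller:

```python
import math

def satisfaction(wealth):
    """Illustrative diminishing-returns curve: satisfaction grows with the
    logarithm of wealth, so equal absolute gains feel progressively smaller."""
    return math.log10(wealth)

for wealth in (1e6, 1e8, 1e10):
    bump = satisfaction(wealth + 1e6) - satisfaction(wealth)
    print(f"gaining another $1M at ${wealth:,.0f}: felt gain ≈ {bump:.6f}")
```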

So here’s the next takeaway: we make the billionaires sadder, needier, and more desperate, the more we pay them.

Ironically, because of these same network effects, we’re more likely to believe that wealth and attention are valuable even as they harm us. We cannot help but be impacted by the consensus delusion that those enormous fortunes we see are somehow the product of a mysterious kind of personal excellence that we may yet be able to exhibit. We believe this despite the mounting evidence that the converse is true: that most super-rich people are idiots of our creation. Having made the people in the center of our society sick, we then acquire that same sickness. We try harder and harder to validate on the fuel that the billionaires run on even as it gets steadily less likely for us to ever succeed at it.

Historically speaking, this feedback process always goes to the same place. Those leading society lock in their wealth and make progressively worse decisions until some force comes along to disrupt the social disequilibrium that’s been created. That either happens through war, or invasion, or pandemic, or some other equally fun process. See The Great Leveler by Walter Scheidel for exhaustively complete and utterly convincing details.

The upshot of all this is that without very significant social re-balancing soon, we will be unable to confront climate change or any of the dramatic consequences that arise from it. And without action, a great many will die. My guess is that the next social shock will kill about a billion people. (I’ll explain that number in a later post.) Somewhere in that difficult time, people will take to chaining oligarchs to the decks of their own yachts and letting the raging hand of Nature take its vengeance, but by then it will be too late.

Is there a solution? Of course there is. We don’t have to be blind and ignorant to social feedback effects like the civilizations before us. We have network science for crying out loud. We have neuroscience. And so we have hope.

The number of oligarchs in the world is tiny and their power resides exclusively in our imaginations. So how about each nation coordinates its efforts to simply require that all the money anyone has over some amount (let’s say one billion dollars) be returned to the state and distributed throughout the population?

We don’t do this out of some idea of ‘fairness’. There is no notion of fairness invoked here. Neither do we do it because it is ‘right’ in some sense. Certainly it is impossible for the billionaires to have ‘earned’ that money in any meaningful sense but arguably that’s irrelevant. We are still talking about wealth redistribution, which is always a charged concept. So why do we do it? Because the alternative is that everyone loses, including the billionaires themselves. Either we tell them that the money is going back in the pot for their own good, or all the money everyone has goes away anyway.

Who gets that money? It gets shared out equally among all adults.

Is that ‘fair’? Shouldn’t we go further and hand it out proportionally? We could, but if we do that, we create a power vacuum and start fights and the whole process will break. It’ll be like that scene in It’s a Mad, Mad, Mad, Mad World where they can’t decide on how many shares of the treasure there should be, only with machine guns and shrapnel bombs. So we keep it very simple. The more important step is to repeat the capping process five years later, and to keep doing it indefinitely.
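Mechanically, the rule really is that simple. Here’s a toy sketch: the one-billion threshold and the equal split come from the text above; the example fortunes and the number of adults are made up purely for illustration.

```python
CAP = 1_000_000_000   # one billion dollars, the proposed threshold

def apply_wealth_cap(fortunes, n_adults):
    """Cap every fortune at CAP, pool the excess, and split it equally
    among all adults. Meant to be re-run every five years."""
    excess = sum(max(0, w - CAP) for w in fortunes)
    capped = [min(w, CAP) for w in fortunes]
    dividend = excess / n_adults
    return capped, dividend

# A made-up example: three oversized fortunes, 250 million adults.
fortunes = [150e9, 60e9, 2e9]
capped, dividend = apply_wealth_cap(fortunes, n_adults=250_000_000)
print("capped fortunes:", capped)
print(f"each adult receives about ${dividend:,.0f}")
```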

But what happens if the billionaires simply hide their money overseas while the wealth survey is taking place, I hear you ask? Then they are forbidden from entering or trading in that country unless they participate in the program. Those people are watched and imprisoned if they’re caught. (Remember, these people still get to keep one dollar less than the one billion threshold. That’s still more than nine-hundred-and-ninety-nine million dollars. They still have more money than everybody else and far more than they can possibly spend. Their fate is nothing to be sad about.)

Why do I think this is a sensible approach? In short, because a social wealth distribution that cannot go over a threshold will reorganize itself. People who want to retain power won’t want their visible wealth to go over the limit, so they’ll find other ways to exercise social control by sinking money back into society. Gaming the cap will take time and effort to figure out, so while it’s likely to be gamed eventually, in the meantime there will be plenty of opportunity for new fortunes to arise and for rationality to return.

I see this approach as far better than trying to force-equalize society because forced equalization strips away the social incentive for individuals to succeed. It might seem fairer but it’s a surefire way to make an economy nosedive while an entrenched elite of self-appointed enforcers establish themselves to replace the oligarchs who’ve just been removed.

What you actually want is something like capitalism, but with a mechanism in place to prevent runaway idiocy of the sort we have now. People have tried to do that with progressive taxation, of course, but the institutions we might look to for that change have already been gamed via the current process of stupidification. That means we can expect those institutions to be remarkably sluggish in their response to our votes, and to draft legislation that is more arcane and full of holes than anyone wants. Look at the legislation imposed on banks after the Credit Crunch if you want an example of how that is likely to play out.

To my mind, the fix has to be something simple, blunt, and obvious, so that there is no wiggle-room for laws to be altered and cheated. We want a law you can fit in a tweet, because that way it’s easy to apply a social check on whether it’s actually being carried out properly. And because the process exclusively impacts a tiny, overfed, and badly-confused minority, violence in its exercise might actually be avoided.

It wouldn’t work forever, of course, but it might buy us enough time to get rational about the world we live on, and how to keep it from burning up. And that would be a lot better than what we have now.

How do we implement such a change? That’s harder. It requires coordination and persistent agitation, and is undoubtedly the topic for another blog post.

In any case, that’s my take. If you disagree, or believe you have a better solution, I’d love to hear about it. I shall be reading the comments with interest.
