
How we decide

Since my recent post on Twitter and Facebook, I’ve been thinking of airing a piece of science I was tinkering with at Princeton but never got round to putting out into the world. That science asks the following question:

What makes us believe that something is true or right when we lack direct evidence, or experience, to support it? When do we decide to add our voice to a social chorus to demand change?

This question seems particularly pertinent to me in our modern age when so much of our information arrives through electronic channels that are mediated by organizations and mechanisms that we cannot control.

For many social animals, the most important factor in making a decision seems to be what the other members of their social group are doing. Similar patterns show up in experiments on everything from wildebeest, to fish, to honey bees.

Similarly, we humans often tend to believe things if enough of our peers seem to do likewise. People have been examining this tendency in us ever since the experiments of the amazing Stanley Milgram.

It’s easy to see this human trait as a little bit scary—the fact that we often take so much on faith—but, of course, a lot of the time we don’t have a choice. We can’t independently check every single fact that comes our way, or consider every side of a story. And when many of the people who we care about are already convinced, aligning with them is easy.

Fortunately, a lot of animal behavior research suggests that going with the flow is often actually a really good way for social groups to make decisions. Iain Couzin’s lab at Princeton has done some excellent work on this. They’ve shown that increasing the number of members of a social group who aren’t opinionated, and who can be swayed by sheer force of numbers toward the consensus, often increases the quality of collective decision making. Consequently, there are many people who think we should be taking a closer look at these animal patterns to improve our own systems of democracy.

But how effective is this kind of group reasoning for humans? How much weight should we be giving to the loud and numerous voices that penetrate our lives? And how often can we expect to get things dangerously wrong?

Well, the good news is that, because we’re humans rather than bees, we can do some easy science and find out. And I’m going to show you how. To start with, though, we’ll need to know how animal behavior scientists model social decision-making.

In the simple models that are often used, there are two types of agent (where an agent is like a digital sketch of a person, fish, or bee). These types are what you might call decided agents and undecided agents. Decided agents are hard to convince. Undecided agents are more flexible.

Decided agents are also usually divided into camps with differing opinions. Say X of them prefer option A and Y of them prefer option B. If the number of decided agents who like B is much larger than the number who like A, we assume that B is the better option. Then we look to see how easy it is for the agents, as a group, to settle on B over A.

To convince an agent, you let it meet a few others in the group to exchange opinions. If an agent encounters several peers in succession who disagree with it (let’s say three), it considers changing its mind. And the probability of changing is, of course, higher for an undecided agent than for a decided one.
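Here is a minimal sketch of that rule in Python. The class name, the three-disagreement threshold, and the specific switch probabilities are my own illustrative choices, not values from the original experiments:

```python
import random

class Agent:
    """One agent in the model: opinion 'A', 'B', or None (undecided)."""
    def __init__(self, opinion, decided):
        self.opinion = opinion
        self.decided = decided          # decided agents resist switching
        self.disagreements = 0          # consecutive disagreements seen

    def meet(self, other, rng=random):
        """Chat with one peer and maybe reconsider our opinion."""
        if other.opinion is None:
            return                       # nothing to learn from the undecided
        if self.opinion is None:
            self.opinion = other.opinion # undecided adopt what they hear
        elif self.opinion == other.opinion:
            self.disagreements = 0       # agreement resets the count
        else:
            self.disagreements += 1
            if self.disagreements >= 3:  # "several in succession"
                p_switch = 0.1 if self.decided else 0.9  # assumed values
                if rng.random() < p_switch:
                    self.opinion = other.opinion
                    self.disagreements = 0
```

The key design choice is that disagreement has to be repeated before anyone reconsiders, which is what makes decided agents sticky and lets undecided agents carry the consensus.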

Then we put a bunch of agents in a digital room and have them mill about and chat at random. Very quickly we can make a system that emulates the kind of collective decision-making that often occurs in nature. And, as Iain’s team found, the more undecided agents you have, the better the decision-making gets.
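A runnable sketch of that well-mixed setup looks like this. The pairing rule, thresholds, and switch probabilities are my assumptions, carried over from the description above, not taken from the published work:

```python
import random

def influence(me, other, rng):
    """One-way opinion exchange; thresholds and probabilities are assumptions."""
    if other['op'] is None:
        return
    if me['op'] is None:
        me['op'] = other['op']                 # undecided adopt what they hear
    elif me['op'] == other['op']:
        me['dis'] = 0
    else:
        me['dis'] += 1
        if me['dis'] >= 3:
            if rng.random() < (0.1 if me['decided'] else 0.9):
                me['op'], me['dis'] = other['op'], 0

def run_well_mixed(n_a=4, n_b=6, n_und=30, steps=5000, seed=0):
    """Let agents mill and chat at random; return the winning opinion."""
    rng = random.Random(seed)
    agents = ([{'op': 'A', 'decided': True, 'dis': 0} for _ in range(n_a)]
              + [{'op': 'B', 'decided': True, 'dis': 0} for _ in range(n_b)]
              + [{'op': None, 'decided': False, 'dis': 0} for _ in range(n_und)])
    for _ in range(steps):
        x, y = rng.sample(range(len(agents)), 2)   # two random agents chat
        influence(agents[x], agents[y], rng)
        influence(agents[y], agents[x], rng)
    votes = [ag['op'] for ag in agents if ag['op'] is not None]
    return max(('A', 'B'), key=votes.count)
```

Running this over many random seeds and measuring how often B, the true majority option, comes out on top gives one data point for a given population size.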

[Figure: WellMixedPop — decision-making quality vs. total population size for a well-mixed population]

In this plot, we’re looking at how the quality of the decision-making scales with the size of the group. We always start with ten decided individuals, four in favor of option A and six in favor of B, and we increase the size of the undecided community, from zero up to 2550.

A score of one on the Y-axis shows an ideal democracy. A score of zero shows the smaller group winning every time. The X-axis shows the total size of the population.
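On my reading of that Y-axis, the score is simply the fraction of repeated runs in which the true majority option wins. The bookkeeping, with a hypothetical helper name, is a one-liner:

```python
def democracy_score(winners, majority='B'):
    """Fraction of runs won by the true majority option.
    1.0 = ideal democracy; 0.0 = the minority wins every time."""
    return sum(w == majority for w in winners) / len(winners)

# e.g. democracy_score(['B', 'B', 'A', 'B']) -> 0.75
```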

As you can see, as the group gets bigger, the quality of the decisions goes up. This is because the minority group of decided individuals takes a lot of convincing. If they duke it out directly with the decided majority with nobody else around, the results are likely to be a bit random.

But think about what happens when both decided parties talk to a bunch of random strangers first. A small difference in the sizes of the decided groups has a huge difference in the number of random people they can reach. That’s because each one of those random people also talks to their friends, and the effects are cumulative.

That means that, before too long, the minority group is much more likely to encounter a huge number of people already in agreement. Hence, they eventually change their tune. Having undecided people makes that chorus of agreement bigger.

This is awesome for bees and fish, but here’s the problem: human beings don’t mill and chat with strangers at random. We inhabit networks of family and friends. In the modern world, the size, and pervasiveness, of those networks is greater than it ever has been. So shouldn’t we look at what happens to the same experiment if we put it on a network and only let agents talk to their friends?

Let’s do that. First, let’s use an arbitrary network. One that’s basically random. The result looks like this.

[Figure: RandomPop — decision-making quality vs. population size on a random network]

As you can see, we get nearly the same result, but the group has to be a little bigger before we reach the same decision-making quality. That doesn’t seem so bad.
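A random network of the Erdős–Rényi sort is easy to build with the standard library alone; this adjacency-list sketch (the function names and wiring probability are my own) is all that is needed to restrict who chats with whom:

```python
import random

def random_network(n, p=0.1, seed=0):
    """Connect each pair of agents with probability p (Erdős–Rényi style)."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def random_chat_pair(nbrs, rng):
    """Pick one agent with at least one friend, then one of its friends."""
    i = rng.choice([k for k in nbrs if nbrs[k]])
    return i, rng.choice(sorted(nbrs[i]))
```

The simulation loop itself stays exactly the same; the only change is that chat partners are drawn from `nbrs` instead of from the whole population.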

But unfortunately, human social networks aren’t random. Modern electronic social networks tend to be what are called scale-free networks. What happens if we build one of those?

[Figure: ScalePop — decision-making quality vs. population size on a scale-free network]

That’s not so hot. Once the network goes past a certain size, the quality of the decision-making actually seems to degrade. Bummer. For our simple simulation, at least, adding voices doesn’t add accuracy.
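For reference, a scale-free network can be grown by preferential attachment. This is a Barabási–Albert-style sketch in plain stdlib Python, my own illustration rather than the code behind the plots:

```python
import random

def scale_free_network(n, m=2, seed=0):
    """Grow a graph in which new nodes prefer well-connected nodes,
    producing a few heavily connected hubs (a scale-free degree pattern)."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    endpoints = []                    # each node appears once per edge end
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            # sampling past edge endpoints = preferential attachment
            chosen.add(rng.choice(endpoints) if endpoints else rng.randrange(new))
        for t in chosen:
            nbrs[new].add(t)
            nbrs[t].add(new)
            endpoints += [new, t]
    return nbrs
```

The hubs are what make the difference: in this topology a handful of nodes end up adjacent to a large fraction of the population, so their opinions reach everyone quickly.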

But still, the degradation doesn’t seem too bad. A score of 0.95 is pretty decent. Maybe we shouldn’t worry. Except of course, that in human networks, not every voice has the same power. Some people can buy television channels while others can only blog. And many people lack the resources or the freedom to do even that.

So what happens if we give the minority opinion-holders the most social power? In essence, if we make them the hubs of our networks and turn them into an elite with their own agenda? If you do that, the result looks like this.

[Figure: ScaleBiasPop — decision-making quality on a scale-free network with the minority opinion-holders placed at the hubs]

As you can see, as the system scales, the elite wins ever more frequently. Past a certain network size, they’re winning more often than not. They own the consensus reality, even though most of the conversations that are happening don’t even involve them.
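Biasing the network that way can be sketched as follows. Where the majority's decided agents sit is my assumption; here they get the next-best-connected nodes, though random placement gives the same flavor of result:

```python
def place_elite(nbrs, n_minority=4, n_majority=6):
    """Give the minority opinion 'A' to the best-connected nodes (the elite),
    'B' to the next tier, and leave everyone else undecided (None)."""
    by_degree = sorted(nbrs, key=lambda i: len(nbrs[i]), reverse=True)
    opinions = {i: None for i in nbrs}
    for i in by_degree[:n_minority]:
        opinions[i] = 'A'
    for i in by_degree[n_minority:n_minority + n_majority]:
        opinions[i] = 'B'
    return opinions
```

Rerunning the same simulation with these starting opinions is what produces the elite's growing advantage as the network scales.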

My broad conclusion, then, is that we should be extremely careful about accepting what is presented to us as the truth via electronic media, even when it seems to come from our peers. The more powerful communication technology gets, the easier it is for powerful individuals to exploit it. A large, networked society is trivially easy to manipulate.

Is there something we can do about this? I think so. Remember that the randomly mixed population always does better. So maybe we should be paying a little less attention to the news and Facebook, and having more conversations with people we encounter in our day-to-day lives.

In the networked society we inhabit, we’re conditioned not to do that. Often it feels uncomfortable. However, maybe we need to be rethinking that habit if we want to retain our social voice. The more we reach out to people whose opinions we don’t know yet, and allow ourselves to be influenced by them, the less power the media has, and the stronger we collectively become.

 

Social media and creeping horror

One of the things my friends have advised me to do as part of building my presence as a new author is take social media seriously. Particularly Twitter. I’ve been doing that, and for the most part enjoying it, but I’m also increasingly convinced that the medium of electronic social media is terrifying, both in its power, and its implications.

By this point, many of us are familiar with the risks of not being careful around social media. The New York Times recently published a brilliant article on it.

It’s easy to look at cases such as those the article describes and to think, “well, that was a dumb thing to do,” of the various individuals singled out for mob punishment. But I’d propose that making this kind of mistake is far easier than one might think.

A few years ago, I accidentally posted news of the impending birth of my son on Facebook at a time when my wife wasn’t yet ready to talk about it. It happened because I confused adding content to my wall with replying to a direct message. That confusion came about because the interface had been changed. I wondered subsequently, after learning more about Facebook, whether the change had been made on purpose, to solicit exactly that kind of public sharing of information.

In the end, this wasn’t a big deal. Everyone was very nice about it, including my wife. But it reminded me that any time we put information into the internet, we basically take the world on trust to use that information kindly.

However, the fact that we can’t always trust the world isn’t what’s freaking me out. What freaks me out is why.

The root of my concern can perhaps be summarized by the following excellent tweet by Sarah Pinborough.

*Looks through Twitter feed desperate for something funny.. humour feeds the soul. Nope, just people shouting their worthy into the void…*

I think the impressive Ms. Pinborough intended this remark in a rather casual way, but to my mind, it points up something crucial. And this is where it gets sciencey.

Human beings exercise social behavior when it fits with their validation framework. We all have some template identity for ourselves, stored in our brains as a set of patterns which we spend our days trying to match. Each one of those patterns informs some facet of who we are. And matching those patterns with action is mediated by exactly the same dopamine neuron system that guides us towards beer and chocolate cake.

What this means is that when we encounter a way to self-validate on some notion of social worth with minimal effort, we generally take it. Just like we eat that slice of cake left for us on the table.  And social media has turned that validation into a single-click process. In other words, without worrying too much about it, we shout our worthy into the void. 

This is scary because a one-click process doesn’t leave much room for second-guessing or self-reflection. Furthermore, the effects of clicking are often immediate. This reinforces the pattern, making it ever more likely that we’ll do the same thing again. And that’s not good for us. We get used to social validation being effortless, satisfying, and requiring little or no thought.

We may firmly assure ourselves that all our retweeting, liking, and pithy outrage is born out of careful judgement and a strong moral center, but neurological reality is against us. The human mind loves short-cuts. Even if we start with the best rational intentions, our own mental reward mechanisms inevitably betray us. Sooner or later, we get lazy.

Twenty years ago, did people spend so much of their effort shouting out repeated worthy slogans at each other? Were they as fast to outrage or shame those who’d slipped up? How about ten years ago? I’d argue that we have turned some kind of corner in terms of the aggressiveness of our social norming. And we’ve done so, not because we are now suddenly somehow more righteous. We’ve done it because it’s cheap. Somebody turned self-righteousness into a drug for us, and we’re snorting it.

But unlike lines of cocaine, this kind of social validation does not come with social criticism attached. Instead, it usually comes from spontaneous support from everyone else who’s taking part. This kind of drug comes with a vast, unstoppable enabler network built in. This makes electronic outrage into a kind of social ramjet, accelerating under its own power. And as with all such self-reinforcing systems, it is likely to continue feeding on itself until something breaks horribly.

Furthermore, dissent from this process produces an attendant reflexive response, just as hard and as sharp as our initial social judgements. Those who contest the social norming are likely to be punished too, because they threaten an established channel of validation. The off-switch on our ramjet has been electrified. Who dares touch it?

The social media companies see this to some extent, I believe. But they don’t want to step in because they’d lose money. The more Twitter and Facebook build themselves into the fabric of our process of moral self-reward, the more dependent on them we are, and the less likely we are to spend a day with those apps turned off.

Is there a solution to this kind of creeping self-manifested social malaise? Yes. Of course. The answer is to keep social media for humor, and for news that needs to travel fast. We should never shout our worthiness. We should resist the commoditization of our morality at all costs.

Instead, we should compose thoughts in a longer format for digestion and dialog. Maybe that’s slower and harder to read, but that’s the point. Human social and moral judgements deserve better than the status of viruses. When it comes to ostracizing others, or voting, or considering social issues, taking the time to think makes the difference between civilization and planet-wide regret.

The irony here is that many of those people clicking are those most keen to rid the world of bigotry. They hunger for a better, kinder planet. Yet by engaging in reflexive norming, they cannot help but shut down the very processes that make liberal thinking possible. The people whose voices the world arguably needs most are being quietly trained to shout out sound-bites in return for digital treats. We leap to outrage, ignoring the fact that the same kind of instant indignation can be used to support everything from religious totalitarianism to the mistreatment of any kind of minority group you care to name. A world that judges with a single click is very close in spirit to one that burns witches.

In short, I propose: post cats, post jokes, post articles. Social justice, when fairly administered, is far more about the justice than about the social.

(My first novel, Roboteer, comes out from Gollancz in July 2015)