Since my recent post on Twitter and Facebook, I’ve been thinking of airing a piece of science I was tinkering with at Princeton and never got round to putting out into the world. That science asks the following question:
What makes us believe that something is true or right when we lack direct evidence, or experience, to support it? When do we decide to add our voice to a social chorus to demand change?
This question seems particularly pertinent to me in our modern age when so much of our information arrives through electronic channels that are mediated by organizations and mechanisms that we cannot control.
For many social animals, the most important factor in making a decision seems to be what the other members of their social group are doing. Similar patterns show up in experiments on everything from wildebeest to fish to honey bees.
Similarly, we humans often tend to believe things if enough of our peers seem to do likewise. People have been examining this tendency in us ever since the experiments of the amazing Stanley Milgram.
It’s easy to see this human trait as a little bit scary—the fact that we often take so much on faith—but, of course, a lot of the time we don’t have a choice. We can’t independently check every single fact that comes our way, or consider every side of a story. And when many of the people we care about are already convinced, aligning with them is easy.
Fortunately, a lot of animal behavior research suggests that going with the flow is often actually a really good way for social groups to make decisions. Iain Couzin’s lab at Princeton has done some excellent work on this. They’ve shown that increasing the number of members of a social group who aren’t opinionated, and who can be swayed by the consensus by sheer force of numbers, often increases the quality of collective decision making. Consequently, there are many people who think we should be taking a closer look at these animal patterns to improve our own systems of democracy.
But how effective is this kind of group reasoning for humans? How much weight should we be giving to the loud and numerous voices that penetrate our lives? And how often can we expect to get things dangerously wrong?
Well, the good news is that, because we’re humans rather than bees, we can do some easy science and find out. And I’m going to show you how. To start with, though, we’ll need to know how animal behavior scientists model social decision-making.
In the simple models that are often used, there are two types of agent (where an agent is like a digital sketch of a person, fish or bee). These types are what you might call decided agents, and undecided agents. Decided agents are hard to convince. Undecided agents are more flexible.
Decided agents are also usually divided into camps with differing opinions. X many of them prefer option A. Y many of them prefer option B. If the number of decided agents who like B is a lot larger than the number who like A, we assume that B is the better option. Then we look to see how easy it is for agents, as a group, to settle on B over A.
To convince an agent, you let it meet a few others in the group to exchange opinions. If an agent encounters several peers who disagree with it in succession (let’s say three), it considers changing its mind. And the probability of changing is, of course, higher for an undecided agent than for a decided one.
Then we put a bunch of agents in a digital room and have them mill about and chat at random. Very quickly we can make a system that emulates the kind of collective decision-making that often occurs in nature. And, as Iain’s team found, the more undecided agents you have, the better the decision-making gets.
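If you want to tinker with this yourself, here’s a minimal Python sketch of the well-mixed version. The switching probabilities and the number of chats are illustrative placeholders rather than the exact values behind the plots below, but the moving parts are the same: decided and undecided agents, random encounters, and a change of heart after a run of disagreements.

```python
import random

def make_population(n_a, n_b, n_undecided):
    """Each agent is a dict: its current opinion, whether it started out
    decided, and a count of consecutive disagreements it has just heard."""
    agents = (
        [{"opinion": "A", "decided": True, "streak": 0} for _ in range(n_a)]
        + [{"opinion": "B", "decided": True, "streak": 0} for _ in range(n_b)]
        # Undecided agents start with a random leaning (an assumption of this sketch).
        + [{"opinion": random.choice("AB"), "decided": False, "streak": 0}
           for _ in range(n_undecided)]
    )
    random.shuffle(agents)
    return agents

def chat(listener, speaker, p_decided=0.1, p_undecided=0.5, streak_needed=3):
    """One conversation. Agreement resets the listener's streak; after enough
    disagreements in a row it considers switching, with a higher probability
    if it is undecided. The probabilities here are illustrative guesses."""
    if speaker["opinion"] == listener["opinion"]:
        listener["streak"] = 0
        return
    listener["streak"] += 1
    if listener["streak"] >= streak_needed:
        p = p_undecided if not listener["decided"] else p_decided
        if random.random() < p:
            listener["opinion"] = speaker["opinion"]
        listener["streak"] = 0

def run_trial(n_a=4, n_b=6, n_undecided=100, n_chats=50_000):
    """Well-mixed version: everyone mills about and chats with strangers at
    random. Returns whichever opinion holds the majority at the end."""
    agents = make_population(n_a, n_b, n_undecided)
    for _ in range(n_chats):
        listener, speaker = random.sample(agents, 2)
        chat(listener, speaker)
    n_b_final = sum(a["opinion"] == "B" for a in agents)
    return "B" if n_b_final > len(agents) / 2 else "A"

print(run_trial())  # with plenty of undecided agents this usually comes out as B
```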
In this plot, we’re looking at how the quality of the decision-making scales with the size of the group. We always start with ten decided individuals, four in favor of option A and six in favor of B, and we increase the size of the undecided community, from zero up to 2550.
A score of one on the Y-axis shows an ideal democracy. A score of zero shows the smaller group winning every time. The X-axis shows the total size of the population.
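Put another way, the score is just the fraction of independent runs in which the decided majority’s preferred option, B, ends up winning. Reusing run_trial from the sketch above, the sweep behind a plot like this looks roughly as follows (with small trial counts so it runs quickly):

```python
# Reuses run_trial from the sketch above.
def decision_quality(n_undecided, n_trials=200):
    """Fraction of runs in which the majority option B wins: 1.0 is an ideal
    democracy, 0.0 means the smaller decided group wins every time."""
    wins = sum(run_trial(n_a=4, n_b=6, n_undecided=n_undecided) == "B"
               for _ in range(n_trials))
    return wins / n_trials

# Ten decided agents (4 for A, 6 for B) plus a growing undecided community.
for n_undecided in (0, 10, 50, 100, 500):
    print(10 + n_undecided, decision_quality(n_undecided))
```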
As you can see, as the group gets bigger, the quality of the decisions goes up. This is because the minority group of decided individuals take a lot of convincing. If they duke it out directly with the decided majority with nobody else around, the results are likely to be a bit random.
But think about what happens when both decided parties talk to a bunch of random strangers first. A small difference in the sizes of the decided groups makes a huge difference in the number of random people they can reach. That’s because each one of those random people also talks to their friends, and the effects are cumulative.
That means that, before too long, the minority group is much more likely to encounter a huge number of people already in agreement. Hence, they eventually change their tune. Having undecided people makes that chorus of agreement bigger.
This is awesome for bees and fish, but here’s the problem: human beings don’t mill and chat with strangers at random. We inhabit networks of family and friends. In the modern world, the size, and pervasiveness, of those networks is greater than it ever has been. So shouldn’t we look at what happens to the same experiment if we put it on a network and only let agents talk to their friends?
Let’s do that. First, let’s use an arbitrary network. One that’s basically random. The result looks like this.
As you can see, we get nearly the same result, but the group has to be a little bigger before we reach the same decision-making quality. That doesn’t seem so bad.
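Wiring-wise, that networked version is only a small change to the earlier sketch: build a random Erdős–Rényi graph (here with the networkx library) and let each agent chat only with a randomly chosen neighbour. The connection density below is an arbitrary pick of mine:

```python
import random
import networkx as nx

# Reuses make_population and chat from the first sketch above.
def run_networked_trial(graph, n_a=4, n_b=6, n_chats=50_000):
    """Same rules as before, but each agent sits on a node of the graph and
    only ever chats with one of its neighbours."""
    agents = make_population(n_a, n_b, graph.number_of_nodes() - n_a - n_b)
    nodes = list(graph.nodes)
    for _ in range(n_chats):
        listener = random.choice(nodes)
        neighbours = list(graph.neighbors(listener))
        if neighbours:
            chat(agents[listener], agents[random.choice(neighbours)])
    n_b_final = sum(a["opinion"] == "B" for a in agents)
    return "B" if n_b_final > len(agents) / 2 else "A"

# A basically random wiring: an Erdos-Renyi graph over the whole population.
population = 10 + 200
random_graph = nx.erdos_renyi_graph(population, p=0.05, seed=1)
print(run_networked_trial(random_graph))
```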
But unfortunately, human social networks aren’t random. Modern electronic social networks tend to be what’s called scale-free networks. What happens if we build one of those?
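networkx will grow one of those too. The usual stand-in is a Barabási–Albert graph, built by preferential attachment, and it drops straight into the trial function from the previous sketch (the attachment parameter is, again, just an illustrative pick):

```python
# A scale-free wiring: new nodes attach preferentially to well-connected ones,
# so a handful of hubs end up holding most of the links.
scale_free_graph = nx.barabasi_albert_graph(population, m=3, seed=1)
print(run_networked_trial(scale_free_graph))
```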
That’s not so hot. Once the network goes past a certain size, the quality of the decision-making actually seems to degrade. Bummer. For our simple simulation, at least, adding voices doesn’t add accuracy.
But still, the degradation doesn’t seem too bad. A score of 0.95 is pretty decent. Maybe we shouldn’t worry. Except of course, that in human networks, not every voice has the same power. Some people can buy television channels while others can only blog. And many people lack the resources or the freedom to do even that.
So what happens if we give the minority opinion-holders the most social power? In essence, if we make them the hubs of our networks and turn them into an elite with their own agenda? If you do that, the result looks like this.
As you can see, as the system scales, the elite wins ever more frequently. Past a certain network size, they’re winning more often than not. They own the consensus reality, even though most of the conversations that are happening don’t even involve them.
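In case you want to reproduce that setup, one way to sketch it is to rank the nodes of the scale-free graph by degree and hand the biggest hubs to the minority camp, leaving the majority and the undecided agents on ordinary nodes. As before, it leans on the helpers from the first sketch, and the parameters are illustrative:

```python
def run_elite_trial(graph, n_a=4, n_b=6, n_chats=50_000):
    """Seed the minority opinion A on the highest-degree hubs; the majority B
    and the undecided agents get whatever nodes are left. Reuses chat from
    the first sketch."""
    by_degree = sorted(graph.nodes, key=graph.degree, reverse=True)
    agents = {}
    for rank, node in enumerate(by_degree):
        if rank < n_a:
            agents[node] = {"opinion": "A", "decided": True, "streak": 0}
        elif rank < n_a + n_b:
            agents[node] = {"opinion": "B", "decided": True, "streak": 0}
        else:
            agents[node] = {"opinion": random.choice("AB"),
                            "decided": False, "streak": 0}
    nodes = list(graph.nodes)
    for _ in range(n_chats):
        listener = random.choice(nodes)
        neighbours = list(graph.neighbors(listener))
        if neighbours:
            chat(agents[listener], agents[random.choice(neighbours)])
    n_b_final = sum(a["opinion"] == "B" for a in agents.values())
    return "B" if n_b_final > len(agents) / 2 else "A"

print(run_elite_trial(scale_free_graph))
```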
My broad conclusion, then, is that we should be extremely careful about accepting what is presented to us as the truth via electronic media, even when it seems to come from our peers. The more powerful communication technology gets, the easier it is for powerful individuals to exploit it. A large, networked society is trivially easy to manipulate.
Is there something we can do about this? I think so. Remember that the randomly mixed population always does better. So maybe we should be paying a little less attention to the news and Facebook, and having more conversations with people we encounter in our day-to-day lives.
In the networked society we inhabit, we’re conditioned not to do that. Often it feels uncomfortable. However, maybe we need to be rethinking that habit if we want to retain our social voice. The more we reach out to people whose opinions we don’t know yet, and allow ourselves to be influenced by them, the less power the media has, and the stronger we collectively become.
A good argument for getting to know one’s neighbors and discussing things more complex than whether the mail is late. Though in the US, we are somewhat averse to discussing difficult subjects with people we don’t know.
I’m a little skeptical about your findings to be honest. 😉
This is really interesting. I suppose ‘open’ channels (like Twitter and Facebook – but not just our friends’ feeds) make a difference as well, as they’re not typically biased – just people saying things, hopefully much the same things about situations. I’ve watched the ‘Humans of Vanuatu’ feed recently as my girlfriend’s sister is in Vanuatu, on a very remote island, and only this one guy on Facebook was getting factual stuff out via his satellite feed. If we’d watched, say, Fox News (which is only interested in self-perpetuation through bad-news-as-a-vehicle) then we’d have got a very different grounding of whether to ‘believe’ that she was okay. The thing is, you can’t tell where these people – or your friends, for that matter – got their own information, so we’re in danger of getting into a Chinese-whispers scenario. The Vanuatu guy was ‘on the ground’ – as his pictures showed – so that was more trustworthy, hence believable.
The simple rule-based experiments you’ve set up look a lot like the rule sets for cellular automata. I imagine opinions in the public phase space pulsing between bloom and extinction, like in Conway’s Life. Digital belief networks – something to play with?
I’ve become fascinated by the epistemological issues surrounding what we accept as ‘an explanation’ of something. Merely collecting opinions and data from the world, and trying to decide which is the most believable – something we mostly do – can be modulated by trying to put those opinions into a framework whose structure we’re familiar with – i.e. trying to work out whether an opinion is ‘justified’ in some way based on some other facts with perhaps firmer grounding. I’m trying to develop a universal framework into which it should be impossible to comfortably seat a claim that is neither justified nor logically consistent with other claims that are ‘current’, and which stretch down to ever more certain realms of facts that underlie what we’ve come to accept are ‘sound’ – so far, that is. A science of understanding, I suppose. It won’t fly in the face of Gödel, but will constantly shift sets of ontologies and opinion-sets until only the best possible explanations survive, and we can come, more reasonably, to ‘believe them’. 🙂
Hey Chris,
Re belief networks: You’re right. It’s not a million miles away from that. And thinking of them that way makes me realize that there’s potential here for an awesome visualization. I’ll have to think about how one could show opinions propagating over a dense network without turning it into a visual mess.
Re science of understanding: I’d love to see more about where you’re going with that. Are you building networks of inferences with confidence levels attached? Have you looked at causal networks?
I’m not using any weighting on my graphs, actually, and don’t like the messiness of probabilistic inference or Bayesian network analysis. I realise that this is likely how the brain does its low-level stuff (where weighting is reified as the number of neurons attached to some other threshold neuron), but I’m much more interested in the apparent structure of how we think – a much higher-level concern – rather than how the brain itself supports that. I’ll leave that to the neuroscientists.
My tinkering so far (and there isn’t really much to show but a few local command-line interactions and a huge SQL stack) involves sets of ‘atomic’ concepts that are joined together into ‘molecular concepts’ such that they can be queried and joined together into even more complex concepts, ultimately creating a rich chemistry of meaning – the kinds of things we encounter in our day-to-day lives. It comes down to a kind of templating system, where questions and answers map directly onto relationships between nodes, and the system can gradually work out, by inspecting what relationships different types of nodes have, what general classes they fall into, and therefore what kinds of questions are appropriate for other nodes that it deems to be in that class. It’s a kind of wild feedback, shifting-ontologies, heuristic plaything so far that was initially based on the questions my kids asked, with me working out why they asked those questions based on what I thought they knew so far, and why those questions were appropriate. Or, sometimes hilariously, why they weren’t appropriate – why this was funny is, I think, quite revealing about the way that kids consolidate what they’ve found out based on the type of facts they think they’re dealing with…
It’s my hope and dream, I suppose, to show ultimately that a system such as this can hold any beliefs and justifications, and answer questions on them to show those structures of belief, and that it will seem just as ‘conscious’ as any of us, given enough data and a patient enough teacher answering and asking questions of it. Whether it’ll know that itself (and be able to articulate it) is anybody’s guess.
Interesting to pursue Sue Blackmore’s idea of the meme-point-of-view approach, too, and see how beliefs spread not because of their justifiability per se, but just looking at their spreading abilities when in the context of human brains. She has an interesting suspicion that self-awareness itself is just a memeplex that spreads well and makes our brains good places for those memes to survive and copy themselves, just as our bodies are good places for sexual behaviour genes/behaviour to live.
It’s all becoming excitingly connected in this century! Just enjoying Accelerando, by Charles Stross, so I’m rather saturated with this kind of thing at the moment. 🙂