Is Reality Digital or Analog?

A post I first submitted to How to Build a Universe:

After my collaboration with Tommaso Bolognesi last autumn, we noticed the following essay competition being run by FQXi. FQXi is a marvelous organization that supports frontier physics research in areas where other organizations wouldn’t dare. It’s an invaluable resource for people who’re trying hard to think outside the paradigm box, and a useful rallying point for those interested in foundational questions about how the universe actually works.

The subject of the competition: Is Reality Digital or Analog?

How could we not take part? Tommaso and I agreed that we should both submit an essay. I didn’t win, but I’m delighted to say that Tommaso received a prize. For those who’re interested, my submission can be found here, and Tommaso’s here.

Antropy Doubled

In my last post, I introduced an algorithm for turning order into chaos and back again using a turmite (otherwise known as a 2D Turing Machine). This time, I have to admit that I kept some of the truth from you. I didn’t just come up with one algorithm, I came up with two. And the second one is significantly more weird and beautiful than the first.

Where my first algorithm used a single machine head, my second one uses two. And instead of simply picking up and putting down bits, this new algorithm swaps them from one head to the other. Machine-head A passes its data to head B, and B passes its data to A. What this means is that the new algorithm is a lot faster at turning order into chaos while being no less reversible.
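
For the curious, here’s a rough sketch of how a two-headed step might look. The post doesn’t spell out the exact rule, so this is a guess built on the single-head sketch that accompanies the previous post, further down this page: on each tick, the heads exchange whatever bits they’re carrying before acting on their own cells. Whether this guess reproduces the rotting, rusting look of the real thing, I can’t promise.

```python
def step_two_heads(grid, a, b):
    # a and b are (x, y, heading, carry) tuples, and step() is the
    # single-head rule from the sketch accompanying the previous post.
    (ax, ay, ah, ac), (bx, by, bh, bc) = a, b
    ac, bc = bc, ac     # head A hands its bit to head B, and vice versa
    return step(grid, ax, ay, ah, ac), step(grid, bx, by, bh, bc)
```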

On top of this, the new algorithm produces some eerie physics-like results from time to time, the reasons for which still aren’t entirely clear to me. The new algorithm working on a block of bits also looks peculiarly like something rotting or rusting—something I’ve not seen before in a simple algorithm.

Once again, I’m struck by the peculiar corrosive beauty of these programs but am still not sure what they’re good for. You can find the simulations here.

Increasing Antropy

I have a new algorithm that I want to share with you. It’s interesting to watch, slightly mysterious, and I can’t help but wonder if it might turn out to be useful for cryptography or something. Before you take a look, though, I should first explain what it does, why I came up with it, and what it has to do with digital physics. (For the impatient, the cool stuff is here.)

During my collaboration with Tommaso Bolognesi at CNR-ISTI last autumn, we were looking for ways to create sparse, pseudo-random data structures. Specifically, we wanted sparsely-connected directed acyclic graphs (a requirement for building spacetime-like causal sets, a term I explained in my last post). However, we soon discovered that no existing class of algorithms would give us the kind of results we were looking for.

For those of you with a math/computing background, this might sound like a slightly odd statement, because there have been algorithms to build sparse, pseudo-random matrices for ages. However, none of these algorithms were as simple as we wanted, or as adaptable as we wanted. For starters, most of these algorithms require that you explicitly represent numbers somewhere in your code. For our purposes, this pretty much ruled them out immediately. What we wanted was for the sparseness of the data to emerge naturally out of a process without us having to impose extra layers of interpretation on it.

To get a sense of what I mean, let’s take a look at turmites. Turmites are very simple programs of a sort that Tommaso and I have explored before, and they’re great at producing pseudo-random data. The way they work is very straightforward: you have a network of memory slots hooked up according to some geometrical rule. You also have a machine-head that can move across that network and change the contents of the memory slot it’s sitting on. You then create a simple rule for moving the machine-head based on the contents of the slot where it’s located. In other words, a turmite is basically a Turing machine with a 2D tape.

The simplest such program is probably Langton’s Ant—the first turmite ever discovered. It runs on a square grid of black and white cells, and has an operating rule that says:

  • If you’re on a white cell, make it black, turn right, and step forwards.
  • If you’re on a black cell, make it white, turn left, and step forwards.

That’s it. It’s about as computationally simple as you can get and yet the output is so unexpected that computer scientists still don’t have much in the way of useful proofs about its behavior.
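
For reference, here’s just how computationally simple that is: a minimal Python sketch of Langton’s Ant, with a set of cell coordinates standing in for the infinite grid.

```python
def langtons_ant(steps):
    black = set()                        # coordinates of black cells
    x, y, heading = 0, 0, 0              # heading: 0=N, 1=E, 2=S, 3=W
    moves = [(0, -1), (1, 0), (0, 1), (-1, 0)]
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))         # black cell: make it white...
            heading = (heading - 1) % 4  # ...and turn left
        else:
            black.add((x, y))            # white cell: make it black...
            heading = (heading + 1) % 4  # ...and turn right
        dx, dy = moves[heading]
        x, y = x + dx, y + dy            # ...then step forwards
    return black
```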

At face value, turmites look like a terrific fit for the sort of randomness we want to create. Furthermore, there are plenty of turmites that you can run for as long as you like and never get repeating data. However, if you take a look at the kinds of patterns that turmites create, you may notice something about them: they’re all pretty dense. What I mean by this is that the balance of black and white squares they generate is usually pretty much equal. Sure, some of them make denser patterns than others, but the density is never all that low. Furthermore, you don’t get to choose in advance how dense the pattern is going to be; your choice of ant algorithm decides that for you.

The reason for this is that in order for the ant to produce random-looking data, its behavior needs to be unpredictable. And its behavior can only be unpredictable if it has a nice rich mix of black and white cells to work with. Take away the mixture and the behavior stops being unpredictable.

One way to get very sparse data out of a turmite is to pick a rule with a large number of different states. In other words, instead of only permitting you to put a one or a zero in each slot that the turmite visits, you can put one of a larger range of values, say one of ten. Then, to get your sparseness, you throw away everything except one of those states when you examine the results. However, we didn’t like this solution either, as it required us to take the output of the algorithm and apply some kind of arbitrary filter to it. So we were stuck. We couldn’t even create turmites of the sort that we wanted, let alone causal sets.

Then, shortly after I got back to the US, a solution of sorts to the turmite version of the problem occurred to me. Whether the same kind of algorithm will turn out to be applicable to networks is unknown, but it seems like an interesting starting point.

The idea is that instead of having a rule for turning a slot in the grid on or off, you have a rule for picking up a bit or putting it down. This allows you to populate your environment with data as sparse as you like, and know that the density will never change as long as the program runs. There’s one other twist, so to speak. Rather than running the program on a grid of infinite size, you run it on a grid of finite size, but you hook up the edges of that grid such that leaving through the top brings you back in at the bottom, shifted one column to the left. Likewise, leaving through the bottom brings you back in a column to the right. The left and right edges of the grid are also hooked up the same way, so that the whole grid is slightly twisted.
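
To make this concrete, here’s a minimal sketch of a turmite with these properties. The post doesn’t give the exact operating rule, so the swap-and-turn rule below is a stand-in of my own that has the two properties that matter: it never creates or destroys bits, and every step can be undone exactly.

```python
import random

SIZE = 64
MOVES = [(0, -1), (1, 0), (0, 1), (-1, 0)]      # N, E, S, W

def wrap(x, y):
    # The twisted torus: leaving through the top re-enters at the bottom
    # shifted one column left; leaving through the bottom, one column
    # right. The left and right edges are coupled the same way.
    if y < 0:
        y += SIZE; x = (x - 1) % SIZE
    elif y >= SIZE:
        y -= SIZE; x = (x + 1) % SIZE
    elif x < 0:
        x += SIZE; y = (y - 1) % SIZE
    elif x >= SIZE:
        x -= SIZE; y = (y + 1) % SIZE
    return x, y

def step(grid, x, y, heading, carry):
    # Swap the carried bit with the cell's bit: an empty hand over a bit
    # picks it up; a full hand over a gap puts it down. Bits are never
    # created or destroyed, so the density never changes.
    cell = grid[y][x]
    grid[y][x], carry = carry, cell
    # Turn right over a bit, left over a gap. The pre-step cell value
    # ends up in the hand, which is what makes the step invertible.
    heading = (heading + (1 if cell else -1)) % 4
    dx, dy = MOVES[heading]
    x, y = wrap(x + dx, y + dy)
    return x, y, heading, carry

def unstep(grid, x, y, heading, carry):
    # Exact inverse of step(): retrace the move, undo the turn, swap back.
    dx, dy = MOVES[heading]
    x, y = wrap(x - dx, y - dy)
    heading = (heading - (1 if carry else -1)) % 4
    cell = grid[y][x]
    grid[y][x], carry = carry, cell
    return x, y, heading, carry

# Seed a grid as sparsely as you like, mangle it, then unmangle it.
grid = [[0] * SIZE for _ in range(SIZE)]
for _ in range(150):
    grid[random.randrange(SIZE)][random.randrange(SIZE)] = 1
state = (0, 0, 0, 0)
for _ in range(100000):
    state = step(grid, *state)
for _ in range(100000):
    state = unstep(grid, *state)     # back to the starting grid
```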

An odd set-up, admittedly, but what it gives you is a turmite that takes whatever input you provide and mangles it for you without losing track of any of your bits. Because no bits are ever gained or lost, it also means that the ant should be reversible. We can write a program that can unmangle any mangled data we’re handed. It’s like a magic wand for turning order into chaos and back again—a kind of do-it-yourself entropy kit. In fact, it’s a little bit like a tiny universe with a finite supply of matter. Over time, everything becomes disordered, but it does so according to a rule that works as well backwards as it does forwards.

About fifty years ago, this would probably have been an awesome way of encoding messages. These days, though, we have public key cryptography, so the utility of my algorithm is a little less obvious. Still, there’s something about it that gives me a tingly feeling. It has practical uses, I’m sure of it. I’m just not sure what they are yet. Any ideas?

How to fold this approach back into a class of algorithms that will help build causal sets is something I’m still working on. I can use this method to approximate percolation dynamics by using the turmite to construct an adjacency matrix. However, that doesn’t help us build realistic spacetimes. Clearly, more work is required.

And now, for those of you who’ve been patient enough to read to the end, here’s another link to the simulation. Happy watching, and if you think of a use for this thing, let me know!

Adventures in Game Theory, Part Four

For those of you freshly joining this adventure, the last three posts have led us on a strange, thrilling journey that has passed through the valleys of introductory game theory, the jungles of applied improv, and the mountains of software simulation. Now, at last we arrive at our thunderous finale on the shores of Lake Awesome. I highly recommend reading from the start of the sequence, otherwise what I have to say may be too extraordinary and wonderful for your mind to fully hold!

The end of the last installment caught me teetering on the brink of a realization: that by adding just a little more functionality to my simulation, I could start exploring some more socially useful truths about how people behave. My insight was to add status.

What this meant in practice was splitting the population of agents in my model into two groups: bosses and workers, or in training community parlance: leaders and team-members. Then, in order to make the interactions between bosses and workers a little less benign, I added two extra constraints.

One: If bosses were aggressive (nose-thumbing) to workers, workers were not empowered to reciprocate and be aggressive back in their next encounter.

Two: Bosses were unable to remember the specifics of positive interactions they had with workers. So for instance, if a boss and a worker both chose paper in one round, the worker would remember the fact, but the boss would not.

Implementing these changes was easy, as it simply required making the two memory rules I’d already added for the first simulation dependent on status, as sketched below. (I also added a little extra logic around the movement of the agents to ensure that workers had to interact with bosses, and to make the movements of bosses dependent on other bosses but not workers. However, while necessary, that code is somewhat beside the point.)
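
In code terms, the change amounts to wrapping the two memory rules in a status check. Here’s a rough sketch, assuming the agent model sketched in Part Three (further down this page) extended with an is_boss flag; the names are my own.

```python
def apply_status_rules(me, you, my_move, your_move):
    # The constraints only bite when a boss interacts with a worker.
    boss_on_worker = me.is_boss and not you.is_boss
    if my_move == "thumb" and your_move != "thumb":
        if not boss_on_worker:
            you.grudges.add(me)         # constraint one: workers aren't
                                        # empowered to retaliate at bosses
    elif my_move == your_move and my_move != "thumb":
        if not boss_on_worker:
            me.rapport[you] = my_move   # constraint two: bosses can't
                                        # remember positive specifics
```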

What happened next was wonderfully clear. Within a few seconds, all the bosses were behaving aggressively while the workers normed on a set of social standards of their own. My simulation suddenly looked a lot like some of the more awful companies I’d worked for. Without having to say anything about the kinds of people who become leaders, or about the specifics of organizational culture, I’d captured a simple truth about leadership: that without the incentives to behave otherwise and the right skills to succeed, people with power slide towards bad behavior, even if they start off thinking like saints.

What was even more interesting was that as the simulation progressed, the bosses started to bump up against the corners of the virtual environment as if desperate to leave. Because aggressive behavior was so successful for bosses in their interactions with workers, they were applying the same behavior to each other, resulting in a rapid erosion of their ability to collaborate. The lesson: by letting leaders behave badly, we ensure that leaders have less pleasant interactions with each other, as well as with us.

My goal, though, was not to engage in rhetoric about leaders, but instead to see whether models like the one I was looking at could tell us something about how to help organizations do better. To do this, I looked at what happened when I turned each of the status dependencies off in isolation.

Turning off the status dependency for remembering positive interactions is rather like sending your managers on an employee recognition course. They learn to value the specific information they get from each person they work with, and to let their team members know that they’re seen and valued.

The result in the simulation is that the culture improves significantly. The workers integrate more tightly and the bosses take on the same cultural colors as the workers they lead. Interestingly, the bosses don’t all start cooperating at once. Many of them initially retain their aggressive behavior. Then, one by one, they figure out that collaboration is more effective.

The lesson here: that training leaders to listen can make a huge difference in their effectiveness, but that the change they take on depends on their willingness to implement what they learn.

If instead, we turn off the status dependency for worker retaliation to boss aggression, the effects are even more interesting. Making this change is rather like implementing a shared accountability system like the one that revolutionized the airline industry and transformed the safety standards in air travel. Under this system, the pilots of planes are no longer the unquestionable captains of the air that they once were. If copilots think that they’re witnessing a mistake, they’re duty-bound to call the pilot on it and to report it to air traffic control if necessary. In our simulated business, we can imagine that we’re instructing the worker agents to hold their bosses accountable if they don’t uphold the collaborative social standards of their organization.

What happens when we make this change is that the behaviors of the bosses have trouble settling onto any specific color. When we watch the ‘mood’ of the agents to see how many positive or negative interactions they’re having, we see that the tables have been turned. The workers are now having a pretty great time all round and the bosses are mostly miserable—the opposite of what we see if status dependence for retaliation is left on. This is because the workers now have an advantage that the bosses don’t—they can remember and repeat positive interactions whereas bosses cannot. Because aggression no longer secures automatic results, bosses don’t have an easy way of stabilizing on a successful behavior.

The lesson here is that enabling everyone in an organization to hold leaders accountable for their behavior is what creates the incentive for leaders to improve, but that without the right training and direction, the main result is leader unhappiness.

As you might expect, turning off both status-dependent features creates a benign, functional organization that settles rapidly onto a cooperative culture. If you want to play around yourself, and have Java installed, the simulation is the second applet on this page. (It has four buttons.)

As before, red, blue and green denote different positive interactions. Gray denotes aggressive behavior. Swapping to ‘mood view’ shows the success of the agents’ interactions, ranging from blue (unhappy agents) to yellow (cheerful ones).

Clearly there’s a lot more to do here. For a start, in order to turn this into a scientific result, the simulations will need to be a lot more rigorous, which will probably mean sacrificing the visual playfulness. Furthermore, we’ve only looked at one memory model for agents, and solid research would need to try out others. However, the results seem pretty clear. We’ve gone from a simple game played in a room full of people to a model that turns business intuition into something rather like unavoidable, mathematical fact.

Thus, in the wake of our adventure, we can say with real confidence that any society or organization that doesn’t empower its people to hold its leaders accountable, and doesn’t teach those leaders how to listen, can expect its leaders to turn bad, regardless of how ‘good’ we believe them to be as people.

This is something most of us already believe but which we often fail to implement. For instance, we’re all used to the idea of holding elected officials accountable, but explicit training in ‘voter recognition’? We leave that to chance. Similarly, we’re used to the idea that good managers are the ones who pay attention, but company-wide accountability systems? Those are pretty rare. I believe that simulations like this can make these points unavoidable, and also perhaps show us how to build measures that make our adherence to such standards quantifiable.

For any skeptics out there, my huge thanks for reading this far, and here’s a final thought to consider. Agent-based simulations of this sort have been used by biologists for years on the following basis: we can’t capture all the details of natural systems like cultures or the lives of organisms, so instead we capture only what we know is true. From that, we look to see what else must be true as a consequence. Thus we attempt to make the simplicity of the model a strength, not a weakness. In this instance, the agents are so simple that we can expect the same effects to arise regardless of the memory model we employ for our agents, so long as that memory model permits learning. Further work in this area will hopefully make that point even clearer.

That’s it. The adventure is finished. And while the ending perhaps isn’t unexpected, it feels like a step forwards to me. After all, if we can do this starting with Rock Paper Scissors, think what we can do with the game of Twister.

Causal Sets and Leaning Towers

An article freshly transplanted from my digital physics blog:

Last year I had the incredible good fortune to spend a couple of months collaborating with Tommaso Bolognesi at CNR-ISTI, in Pisa, Italy. Tommaso runs his own research program into the interface between computation and physics and is a champion of the Digital Physics cause. He hired me to see if together we could answer a very specific question:

Is it possible to build networks that have the same properties as spacetime using simple algorithms, and if so, how?

I’ve had plenty to say on the subject of modeling space before this. However, what Tommaso was looking for was very specific. He wanted us to find ways to build causal sets. Causal set theory is probably the point of closest approach between digital physics and more mainstream quantum gravity research and it’s a fascinating subject. In a nutshell, causal set theorists believe that spacetime is most usefully thought of as a discrete structure and that the way to model it is to try to mimic the kinds of relationships between events that we see in relativity. To achieve this, they connect nodes using something called a partial order—a set of relationships that defines which nodes must come before others, but falls short of providing an exact numbering for all nodes.

Broadly speaking, the Causal Set Program uses two methods to build its sets. The first, called sprinkling, is to deposit nodes at random onto a surface and hook them together based on the geometry of that surface. The other, called percolation dynamics, is to add nodes one by one to a set, randomly adding links from existing members of the set to each new node.
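
To make the two recipes concrete, here’s a minimal sketch of each. The sprinkling example uses a unit square of 1+1-dimensional Minkowski space, where one node precedes another exactly when the second lies inside the first’s future light cone; the function names are mine.

```python
import random

def sprinkle(n):
    # Sprinkling: drop n points into a square of 1+1 Minkowski space,
    # then link a -> b whenever b lies in a's future light cone.
    pts = [(random.random(), random.random()) for _ in range(n)]  # (t, x)
    links = {(a, b) for a in range(n) for b in range(n)
             if pts[b][0] - pts[a][0] > abs(pts[b][1] - pts[a][1])}
    return pts, links

def percolate(n, p):
    # Percolation dynamics: add nodes one at a time, linking each
    # existing node to the newcomer with probability p, keeping the
    # partial order transitively closed as we go.
    ancestors = [set() for _ in range(n)]
    for new in range(n):
        for old in range(new):
            if random.random() < p:
                ancestors[new].add(old)
                ancestors[new] |= ancestors[old]
    return ancestors
```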

Sprinkling is useful for exploring how causal sets behave but it has a huge problem: in order to construct the discrete structure of spacetime, you have to deposit your points onto a smooth spacetime first! Clearly, if we want to come up with a background-independent theory of physics, we need to build the sets some other way. On the other hand, percolation dynamics has all the nice statistical properties that physicists would like to see and doesn’t need a background, but sadly doesn’t actually produce graphs that look like spacetime (though people are working on that).

The right solution would seem to be to come up with a third way: a process that produces the right structures without needing a background surface. However, this comes with problems. The key features that differentiate spacetime-like causal sets from others are dimensionality and Lorentz invariance.

Dimensionality essentially says that we should expect the graph that we build to have some consistent number of dimensions, rather than just being a tangled mess. Lorentz invariance is a little trickier. What it implies is that if you build your network first and then lay the nodes onto a flat surface afterward, the positions of the nodes should appear random. There should be no way you can stretch or squish the network to make it look otherwise. This is vitally important because in order to treat every relativistic reference frame the same way, as special relativity says we must, we need about the same number of links between nodes in each frame.

Another way to say this is that, thanks to Einstein, we know that no matter how fast we’re moving, space will always feel the same to us. The way a causal set works is that each link corresponds to a step through time and space taken at a certain speed. So, if for some speed of travel, our network doesn’t have enough links, it’s definitely not going to feel the same to someone traveling through it. If this happens, our model has failed. The only way that people have ever found to make Lorentz-invariant causal sets is to have the network be random.

My collaboration with Tommaso was founded on a neat way around this problem that works like this:

  • Because any causal set we can build is finite, it can only ever approximate perfect randomness.
  • Furthermore, for a finite network of given size, we can always find some algorithm that can approximate that level of randomness through a deterministic process.
  • Thus, no matter how big our network needs to be, we should still always be able to find an algorithm that could give rise to it.
  • This will always be true so long as we believe that spacetime is discrete, that the universe has finite size, and that it has existed for finite time.

In essence, what this tells us is that just because the network we want to build needs to look random, that doesn’t mean that we can’t use a completely non-random method for building it. This is all great as far as it goes, but it leaves us with an enormous problem: how to find an algorithm that can build spacetime.

In the two months we had, Tommaso and I didn’t manage to crack this problem (otherwise you would have heard about it on the news by now) but we learned some fascinating things along the way. I hope to share some of them with you in my later posts.

However, in the meantime, there are plenty of excellent introductory papers on causal sets that are very approachable for those who’re interested. While my favorite approach to discrete physics is a little different from the causal set methodology, I can recommend this field very highly to anyone interested in learning more about quantum gravity without taking on a full-time career as a string theorist.

Adventures in Game Theory, Part Three

To those fresh to this sequence of postings, let me give you a little context. Two posts ago, I implied that some kind of wildly significant insight about how organizations and societies worked could be derived from looking at simple playground games like Rock Paper Scissors. Over the course of the last two posts, I’ve been building up the case for that statement. Now comes the next thrilling, life-changing installment—this time with some simulation results!

Before I can fully explain, though, I have to give you a little more background. Last week I had the good fortune to speak at the ASTD conference in Orlando, Florida, the world’s largest training and development business event. The topic of the session was the use of Tokenomics as a tool for organizational culture change. I delivered the talk with my good friend Cindy Ventrice, from MakeTheirDay.com, and to support the session we captured a large amount of material on the subject, which those interested can find on our collaboration website, techneq.com. The session went wonderfully and generated plenty of interest. However, what I’m most keen to talk about here doesn’t relate to that talk, exactly, but to its unexpected consequences.

In order to demonstrate to the audience what the Tokenomics approach was capable of, I put together a short computer simulation based on Scissors Dilemma Party, a game which readers of the last two posts will have already heard of. The simulation was designed to show how autonomous software agents, given nothing but a simple memory model and some behavioral rules based on token acquisition, would automatically aggregate into social groups defined by shared values.

To make the model more intuitively approachable for a conference audience, I chose to have the agents move around in a virtual environment rather like people in a workplace, interacting when they met. As well as making the simulation more visually appealing, it demonstrated how the agents’ behavior evolved over time as they learned more about their environment, much as players of the game do when they experience it at Behavior Lab.

Each agent had eight memory slots initially filled with random behaviors. With each interaction, an agent would pick a behavior from its memory and apply it. If the interaction resulted in a positive outcome for the agent (unreciprocated nose-thumbing, or a successful rock-paper-scissors match), that behavior was copied to another slot in memory. If the behavior resulted in any other outcome, that memory slot was overwritten with a new random behavior. Agents were designed to move towards other agents with whom they’d interacted positively, and away from those with whom interaction had failed.

At first, the simulation didn’t work very well. Aggressive behavior (nose-thumbing), was too seductive for the dim-witted agents and stable social groups never formed. In order to get the agents to behave a little more like people, I had to add a little extra subtlety. This came in the form of two new rules.

The first rule was that if agent A was aggressive to agent B, B would remember that fact and be aggressive back at the next opportunity. This captures the idea of ‘Tit for Tat’—a strategy that has proved very successful in Prisoner’s Dilemma tournaments.

The second rule was that if A and B had a successful match of rock, paper, or scissors, they’d both remember it and try for the same topic of conversation next time. This gave the agents a chance to reinforce positive relationships.
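
Putting the memory model and both rules together, the heart of the agent logic looks something like the sketch below. This is a reconstruction from the description above rather than the applet’s actual code; in particular, the names, the reading of a ‘successful match’ as both agents picking the same behavior, and the choice of which memory slot gets overwritten are all mine.

```python
import random

BEHAVIORS = ["rock", "paper", "scissors", "thumb"]  # "thumb" = nose-thumbing

class Agent:
    def __init__(self):
        self.memory = [random.choice(BEHAVIORS) for _ in range(8)]
        self.grudges = set()   # rule one: who to thumb back next time
        self.rapport = {}      # rule two: agent -> behavior that matched

    def choose(self, other):
        if other in self.grudges:
            self.grudges.remove(other)
            return "thumb"
        if other in self.rapport:
            return self.rapport[other]
        return random.choice(self.memory)

def interact(a, b):
    ma, mb = a.choose(b), b.choose(a)
    for me, you, mine, yours in ((a, b, ma, mb), (b, a, mb, ma)):
        slot = random.randrange(len(me.memory))
        if mine == "thumb" and yours != "thumb":
            me.memory[slot] = mine     # unreciprocated aggression pays off
            you.grudges.add(me)        # rule one: tit for tat
        elif mine == yours and mine != "thumb":
            me.memory[slot] = mine     # a successful match reinforces...
            me.rapport[you] = mine     # ...and both remember the topic
        else:
            me.memory[slot] = random.choice(BEHAVIORS)  # failure: forget
```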

These two rules together did the trick and produced a somewhat mesmeric simulation. You can see it here, by just clicking on the first simulation button that appears. (Sadly, WordPress isn’t enthusiastic about supporting applets, otherwise I would have included it in this blog. Also, note that you’ll need Java installed for this to work. If you don’t have Java, let me know. I’m thinking of writing an HTML5 version and am keen to know whether that would make life easier for people.) In this simulation, the colors red, green, and blue take the place of rock, paper and scissors. The color gray takes the place of nose-thumbing.

However, once I’d finished the simulation, it occurred to me that I’d only scratched the surface of what could be demonstrated with this approach. I could go further, do more, and start saying something really meaningful. Better still, the tools to achieve it were already in my hands! However, I’ve promised myself that each one of these postings will be short and readable by people with day jobs, so in order to discover what I did next, you’ll have to join me for Episode Four.