Increasing Antropy

I have a new algorithm that I want to share with you. It’s interesting to watch, slightly mysterious, and I can’t help but wonder if it might turn out to be useful for cryptography or something. Before you take a look, though, I should first explain what it does, why I came up with it, and what it has to do with digital physics. (For the impatient, the cool stuff is here.)

During my collaboration with Tommaso Bolognesi at CNR-ISTI last autumn, we were looking for ways to create sparse, pseudo-random data structures. Specifically, we wanted sparsely-connected directed acyclic graphs (a requirement for building spacetime-like causal sets, a term I explained in my last post). However, we soon discovered that there weren't any classes of data structures for which we could get the kind of results we were looking for.

For those of you with a math/computing background, this might sound like a slightly odd statement, because there have been algorithms to build sparse, pseudo-random matrices for ages. However, none of these algorithms were as simple as we wanted, or as adaptable as we wanted. For starters, most of these algorithms require that you explicitly represent numbers somewhere in your code. For our purposes, this pretty much ruled them out immediately. What we wanted was for the sparseness of the data to emerge naturally out of a process without us having to impose extra layers of interpretation on it.

To get a sense of what I mean, let's take a look at turmites. Turmites are very simple programs of a sort that Tommaso and I have explored, and they're great at producing pseudo-random data. The way they work is very straightforward: you have a network of memory slots hooked up according to some geometrical rule. You also have a machine-head that can move across that network and change the contents of the memory slot it's sitting on. You then create a simple rule for moving the machine-head based on the contents of the slot where it's located. It's basically like a 2D Turing machine.

The simplest such program is probably Langton's Ant—the first turmite ever discovered. It runs on a square grid of black and white cells, and has an operating rule that says:

  • If you’re on a white cell, make it black, turn right, and step forwards.
  • If you’re on a black cell, make it white, turn left, and step forwards.
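In code, the whole rule fits comfortably in a dozen lines. Here's a minimal sketch (the grid size and the wrap-around boundary are my own choices to keep the ant in bounds; the classic version runs on an unbounded plane):

```python
def langtons_ant(steps, size=64):
    """Run Langton's Ant for `steps` steps on an initially all-white grid."""
    grid = [[0] * size for _ in range(size)]   # 0 = white, 1 = black
    x = y = size // 2                          # ant starts in the centre...
    dx, dy = 0, -1                             # ...facing up
    for _ in range(steps):
        if grid[y][x] == 0:                    # white: make it black, turn right
            grid[y][x] = 1
            dx, dy = -dy, dx
        else:                                  # black: make it white, turn left
            grid[y][x] = 0
            dx, dy = dy, -dx
        x, y = (x + dx) % size, (y + dy) % size  # step forwards
    return grid
```

On an all-white grid the first four steps just trace out a little square, but after roughly ten thousand steps the ant famously abandons its chaotic scribbling and settles into building an endless diagonal "highway".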

That’s it. It’s about as computationally simple as you can get and yet the output is so unexpected that computer scientists still don’t have much in the way of useful proofs about its behavior.

At face value, turmites look like a terrific fit for the sort of randomness we want to create. Furthermore, there are plenty of turmites that you can run for as long as you like and never get repeating data. However, if you take a look at the kinds of patterns turmites create, you may notice something about them: they're all pretty dense. What I mean is that the balance of black and white squares they generate is usually close to equal. Sure, some make denser patterns than others, but the density is never all that low. And you definitely don't get to choose in advance how dense the pattern is going to be; your choice of ant algorithm decides that for you.

The reason for this is that in order for the ant to produce random-looking data, its behavior needs to be unpredictable. And its behavior can only be unpredictable if it has a nice rich mix of black and white cells to work with. Take away the mixture and the behavior stops being unpredictable.

One way to get very sparse data out of a turmite is to pick a rule with a large number of different states. In other words, instead of only permitting a one or a zero in each slot the turmite visits, you allow one of a larger range of values: say, one of ten. Then, to get your sparseness, you throw away everything except one of those states when you examine the results. However, we didn't like this solution either, as it required us to take the output of the algorithm and apply an arbitrary filter to it. So we were stuck. We couldn't even create turmites of the sort we wanted, let alone causal sets.

Then, shortly after I got back to the US, a solution of sorts to the turmite version of the problem occurred to me. Whether the same kind of algorithm will turn out to be applicable to networks is unknown, but it seems like an interesting starting point.

The idea is that instead of having a rule for turning a slot in the grid on or off, you have a rule for picking up a bit or putting it down. This allows you to populate your environment with data as sparse as you like, and know that the density will never change as long as the program runs. There's one other twist, so to speak. Rather than running the program on a grid of infinite size, you run it on a grid of finite size, but you hook up the edges of that grid such that leaving through the top brings you back in at the bottom, shifted one column to the left. Likewise, leaving through the bottom brings you back in shifted one column to the right. The left and right edges of the grid are hooked up the same way, so that the whole grid is slightly twisted.
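Spelling the twisted wrap-around out in code might look like this (a sketch only; the grid size and the direction of each shift are my own guesses at the details):

```python
N = 16  # hypothetical grid size; nothing above fixes one

def move(x, y, dx, dy, n=N):
    """One step in direction (dx, dy) on the twisted grid: leaving
    through the top brings you back in at the bottom shifted one cell
    to the left, leaving through the bottom shifts you one cell to the
    right, and the left and right edges are twisted the same way."""
    x, y = x + dx, y + dy
    if y < 0:        # off the top
        y, x = n - 1, (x - 1) % n
    elif y >= n:     # off the bottom
        y, x = 0, (x + 1) % n
    if x < 0:        # off the left edge
        x, y = n - 1, (y - 1) % n
    elif x >= n:     # off the right edge
        x, y = 0, (y + 1) % n
    return x, y
```

Two nice properties fall out of this topology: stepping back in the opposite direction exactly undoes a step, and walking straight "up" shifts you one column per lap, so on an n-by-n grid you only return to your starting cell after n·n steps. The ant gets to visit the whole grid even when its rule is doing something boring.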

An odd set-up, admittedly, but what it gives you is a turmite that takes whatever input you provide and mangles it for you without losing track of any of your bits. Because no bits are ever gained or lost, and because each move can be undone, the ant is also reversible: we can write a program that unmangles any mangled data we're handed. It's like a magic wand for turning order into chaos and back again, a kind of do-it-yourself entropy kit. In fact, it's a little bit like a tiny universe with a finite supply of matter. Over time, everything becomes disordered, but it does so according to a rule that works as well backwards as it does forwards.
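To make the claim concrete, here's a sketch of one such reversible pick-up/put-down ant. The specific rule (swap the carried bit with the cell whenever they differ, turning right on a pick-up and left on a drop) is my own choice for illustration, not necessarily the best one, but it has both properties described above: it conserves bits exactly, and it can be run backwards.

```python
import random

def move(x, y, dx, dy, n):
    # one step on the twisted grid: crossing an edge re-enters you
    # on the far side, shifted one cell along that edge
    x, y = x + dx, y + dy
    if y < 0:    y, x = n - 1, (x - 1) % n
    elif y >= n: y, x = 0, (x + 1) % n
    if x < 0:    x, y = n - 1, (y - 1) % n
    elif x >= n: x, y = 0, (y + 1) % n
    return x, y

def step(grid, x, y, dx, dy, carry, n):
    """Mangle: if the carried bit differs from the cell, swap them and
    turn (right on a pick-up, left on a drop); then step forwards."""
    cell = grid[y][x]
    if cell != carry:
        grid[y][x], carry = carry, cell
        dx, dy = (-dy, dx) if cell else (dy, -dx)
    x, y = move(x, y, dx, dy, n)
    return x, y, dx, dy, carry

def unstep(grid, x, y, dx, dy, carry, n):
    """Unmangle: the exact inverse of step()."""
    x, y = move(x, y, -dx, -dy, n)        # walk back (the twist is invertible)
    cell = grid[y][x]
    if cell != carry:                     # a swap happened here on the way out
        dx, dy = (dy, -dx) if carry else (-dy, dx)  # undo the turn...
        grid[y][x], carry = carry, cell             # ...and undo the swap
    return x, y, dx, dy, carry

# mangle a sparse random grid, then run the film backwards
random.seed(0)
n = 16
grid = [[1 if random.random() < 0.1 else 0 for _ in range(n)] for _ in range(n)]
start = [row[:] for row in grid]
state = (0, 0, 1, 0, 0)                   # x, y, dx, dy, carried bit
for _ in range(5000):
    state = step(grid, *state, n)
assert sum(map(sum, grid)) + state[4] == sum(map(sum, start))  # no bit gained or lost
for _ in range(5000):
    state = unstep(grid, *state, n)
assert grid == start and state == (0, 0, 1, 0, 0)  # perfectly unmangled
```

The trick that makes `unstep()` well-defined is that after a swap the cell and the carried bit always disagree, while after a straight move they always agree, so the reverse pass can tell which case it's undoing just by looking at them.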

About fifty years ago, this would probably have been an awesome way of encoding messages. These days we have public-key cryptography, though, so the utility of my algorithm is a little less obvious. Still, there's something about it that gives me a tingly feeling. It has practical uses, I'm sure of it. I'm just not sure what they are yet. Any ideas?

How to fold this approach back into a class of algorithms that will help build causal sets is something I’m still working on. I can use this method to approximate percolation dynamics by using the turmite to construct an adjacency matrix. However, that doesn’t help us build realistic spacetimes. Clearly, more work is required.

And now, for those of you who’ve been patient enough to read to the end, here’s another link to the simulation. Happy watching, and if you think of a use for this thing, let me know!
