Knight Life

The Game of Life is the most famous cellular automaton ever devised. The rules are dead simple, and the expressive power of the game is enormous. People have used it as a metaphor for a whole host of other phenomena, from evolution to the fundamental laws of the universe. However, I have long had the sense that Life has a hidden weakness that makes its results far more of a special case than people tend to imagine. That weakness is its neighborhood.

What I mean by the neighborhood is the fact that each cell in the game updates according to the cells that sit around it, both adjacent and diagonal. The combination of a small neighbor set and more than one kind of physical relationship packs a ton of expressive power into a simple system. However, this comes at the cost of making the Game of Life massively anisotropic. Change the relationship between cells just a tiny bit, I guessed, and the cool patterns would disappear. If this idea were right, it suggested that the Game of Life, and other automata like it, have a lot less to do with natural organisms, or the universe, than first appears.

To test my theory, I decided to see what would happen to the Game of Life if you played it with a different neighborhood: one with the same number of neighbors, but treating all neighbors exactly the same way. To do this, I replaced the standard set of neighbor relations with the set of knight moves.
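For the curious, here is one way to sketch the variant in Python. It is only a sketch, and it assumes the standard B3/S23 birth/survival rule applied over the eight knight-move offsets on a wrap-around grid; the function and constant names are my own.

```python
import numpy as np

# The eight knight-move offsets, replacing the usual Moore neighborhood.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def step(grid):
    """One generation under the B3/S23 Life rule, but with neighbors
    counted a knight's move away, on a wrap-around (toroidal) grid."""
    counts = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                 for dy, dx in KNIGHT_OFFSETS)
    born = (grid == 0) & (counts == 3)
    survives = (grid == 1) & ((counts == 2) | (counts == 3))
    return (born | survives).astype(np.uint8)
```

Seed a grid with random ones and zeros and iterate `step` to watch the knight-neighborhood dynamics unfold.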

What was the result? You can see for yourself below.

As expected, the clever compact patterns from Conway’s original game disappear. However, ironically, what you gain is something that looks a lot more like what you might find under a microscope.


Bad science, good art

My current research project involves a lot of messing about with tiling systems, as evidenced by my previous post on Voronoi tiles. Sometimes, the results can be beautiful, even when the algorithms themselves aren’t working properly.

Here are some results of that process. In each case, different iterative methods are used to tile the plane. You won’t find the algorithms in any books, because for the most part, they’re full of “bugs”.


Rock Paper Scissors Lizard Spock

The other day, my friend Dan Miller pointed me at a YouTube video of a cellular automaton playing Rock Paper Scissors. Nice, I thought, from a game-theory perspective, but not all that surprising. The fact that the game has three cyclical states makes it a classic example of an excitation wave, and CA models of excitation waves have been around for a while.

In an attempt at a witty reply, I suggested to him that the authors explore a little further, and try their code with Rock Paper Scissors Lizard Spock, the five-state equivalent game invented by Sam Kass and Karen Bryla, and then popularized by the sitcom The Big Bang Theory.

Having sent the email, I then reflected on my words. Once I’d got over the fact that I’d turned myself into a kind of ironic meta-parody of an already ironic sitcom character, it occurred to me that it was an idea worth following up. First, I built a copy of the three-state automaton as a test case. As you can see below, it’s just not that jazzy, once you get over the spirals. The patterns are extremely stable.
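A minimal sketch of the three-state automaton, under one common formulation of the rule (my assumption, since the exact rule from the video isn’t spelled out here): a cell is captured by the state that beats it once enough of its eight neighbors hold that state.

```python
import numpy as np

def rps_step(grid, n_states=3, threshold=3):
    """One step of a rock-paper-scissors automaton: a cell is captured
    by the state that beats it (state + 1 mod n_states) once at least
    `threshold` of its eight neighbors hold that state."""
    h, w = grid.shape
    new = grid.copy()
    for y in range(h):
        for x in range(w):
            beater = (grid[y, x] + 1) % n_states
            count = sum(grid[(y + dy) % h, (x + dx) % w] == beater
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
            if count >= threshold:
                new[y, x] = beater
    return new
```

Seeding the grid with uniformly random states and iterating produces the spiral patterns.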

After that, I tried out the five-state game.

Far more exciting, IMO. Each stable three-cycle in the game can establish a patch of local dominance that the others can’t invade. However, if a five-cycle becomes established, it swarms across the board like a fungus, devouring everything in its path. Awesome.

Of course, one can go further. Why stop at five? After all, there’s no reason you can’t extend the pattern to a higher number of states. Does this mean that you can get excitation waves made out of sets of mutually dominating excitation waves? I leave that as a mystery for you to ponder.
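The extension to any odd number of states can be captured in a single dominance rule: arrange the states in a cycle and let each state beat the (n − 1) / 2 states just behind it. A sketch, with the function name and state ordering being my own choices (for five states this matches one cyclic arrangement of Rock Paper Scissors Lizard Spock):

```python
def beats(i, j, n):
    """In an n-state cyclic game (n odd), state i beats state j when i
    sits between 1 and (n - 1) // 2 places ahead of j in the cycle, so
    every state beats, and is beaten by, exactly (n - 1) // 2 others."""
    return 1 <= (i - j) % n <= (n - 1) // 2
```

Plugging this predicate into the automaton’s capture rule generalizes it to any odd n.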

Voronoi Tiles

For the science project I’m currently working on, I’ve been looking a fair bit at Voronoi tiles. These tiles are what you get when you drop a whole bunch of markers onto a plane, and then segment it according to which marker each point is closest to. The result usually looks something like this.
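As a sketch of how such pictures get made (assuming nothing beyond numpy; the marker positions here are arbitrary), the distance metric can be passed in as a function:

```python
import numpy as np

def voronoi(width, height, markers, metric):
    """Label each pixel with the index of the marker nearest to it
    under the supplied metric."""
    ys, xs = np.mgrid[0:height, 0:width]
    dists = np.stack([metric(xs - mx, ys - my) for mx, my in markers])
    return dists.argmin(axis=0)

def euclidean(dx, dy):
    return np.hypot(dx, dy)

# Three arbitrary markers on a 48 x 48 canvas.
markers = [(10, 12), (40, 8), (25, 30)]
labels = voronoi(48, 48, markers, euclidean)
```

Rendering `labels` with any colormap (e.g. matplotlib’s `imshow`) gives the familiar polygonal tiles.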

However, as the Wikipedia page points out, you don’t have to use the Euclidean distance to compute the tiles. You can use the Manhattan distance instead, if you like. That gives you something like this.

Which is nice. However, I couldn’t help wondering what shapes you’d get if you used other, perhaps wackier distance metrics. Like, for instance, the Minkowski metric.


Awesome. But why stop there? Why not, for instance, take a look at the sine of the Euclidean distance multiplied by some constant?


Or the cosine, for that matter.


Or the product of the x and y distances…


Or the difference between the Euclidean distance and some designated optimum…
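All of the metrics above can be written as drop-in distance functions for the same labeling routine, i.e. anything that accepts per-axis offsets `dx` and `dy`. A sketch, where the names and the example constants are my own choices:

```python
import numpy as np

def manhattan(dx, dy):
    return np.abs(dx) + np.abs(dy)

def minkowski(p):
    # Minkowski (L^p) distance; p = 3 is used here purely as an example.
    return lambda dx, dy: (np.abs(dx) ** p + np.abs(dy) ** p) ** (1.0 / p)

def sine_of_euclidean(k):
    # sin(k * Euclidean distance): periodic, so "closest" recurs in rings.
    return lambda dx, dy: np.sin(k * np.hypot(dx, dy))

def xy_product(dx, dy):
    # Product of the per-axis distances.
    return np.abs(dx * dy)

def offset_from_optimum(r):
    # How far the Euclidean distance falls from a designated optimum r.
    return lambda dx, dy: np.abs(np.hypot(dx, dy) - r)
```

Each of these can be handed straight to a Voronoi labeler in place of the Euclidean distance.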


Clearly, there are a whole host of interesting metrics one could use. No doubt someone has explored this already, but it strikes me as a rich vein for visual experimentation.