All posts by alexlamb

Battleship Jumps Shark

It seems there’s a new movie called Battleship coming out. Apparently, it’s science fiction. Also apparently, it’s based on the children’s game ‘Battleship’. Also also apparently, it’s bad.

What I find so hilariously wrong with this picture is that in their desperation to make as much money as possible without exposing themselves to any risk, the Mighty Lords of Hollywood have done something very risky that’s likely to lose them boat-loads of cash.

The apparently indisputable movie-business logic goes like this:

  • Only pre-tested brands can be known to make money reliably.
  • Brands that invoke nostalgia have the highest appeal.
  • The optimum demographic for movie viewing is young men, and young men like sci-fi action.

In order to wring ever more dollars out of this tired meta-formula, Hollywood scrapes the barrel of our past looking for things we’ve already bought so that it can sell them to us again. Can anyone else hear the brittle creaking of a strained paradigm here?

In the meantime, my guess is that we can get ready for the following exciting titles.

Mission Impossible–Speak and Spell: Tom Cruise and his buddies have to decrypt spy codes by spelling simple words on a cheerful plastic interface, while dangling on a wire somewhere, in order to save the world.

Simon, The Apocalypse: In order to avert a long-predicted world-shattering apocalypse that makes birds crash a lot, Nicholas Cage has to duel with a psychotic computer that communicates only through patterns of friendly tones and colors.

Hungry Hippos, The Torment Commences: In the first of eight planned films, Daniel Craig leads a ragtag team of survivors through a terrifying post-apocalyptic wasteland overrun by zombie hippos that eat anyone on sight.

The Facebook IPO: A Tsunami of Meh

Today the BBC news website made me laugh.

Go to the front page, and the headline is ‘Facebook in stock debut rollercoaster’. Click on the link and you see an article that instead says ‘Facebook shares see modest debut’.

Why is this funny? First, because everyone expected a rollercoaster, since other tech IPOs have delivered exactly that, and there wasn’t one. But more significantly, if everyone expected a rollercoaster, then having one wouldn’t have been news. We were primed for a predictably dramatic event, and we didn’t even get that.

The media has grown quite accustomed, it seems, to taking dull events that have the surface appearance of drama and dressing them up as narratives for our consumption.

When things don’t go as planned, the cracks in the system appear. The headline written before the event doesn’t get yanked. A much better, and truer, title would have been ‘Facebook IPO appalls media world with mild performance’.

The Google Safari Thing

This week presented us with another ridiculous story in the ongoing technology wars between Google, Apple, and everyone else. In case you haven’t seen this news story, here are some handy links: BBC, Wired, and then there’s this twist from PC World.

While, in isolation, this story is merely annoying, it serves as a useful illustration of the techno-battle that’s unfolding around us. I’d like to paint a picture of that battle for you. But first, I’ll need to outline what I suspect was actually going on.

I think it happened like this.  Apple came up with yet another clever idea. They put cookie blocking technology into their browser that conveniently hamstrings other peoples’ web service software. This means that people like Google and Facebook can only deliver a second-rate user experience on Apple’s browser. That’s awesome for Apple, because the user experience they want to deliver, that competes with those services, isn’t dependent on their browser in the same way.

At the same time, because the browser feature is ‘blocking cookies’ and ‘protecting user privacy’, the companies trying to deliver those services aren’t going to complain. This is also great for Apple, because their competitors get thrown in the stocks and pelted with fruit if they so much as open their mouths.

So Google and Facebook find a sneaky way around the cookie-blocking software. They don’t tell people what they’re doing, because what they most want the technology for is to tune the ads that customers see, which, let’s face it, isn’t a very popular reason.

Then the inevitable happens: somebody notices. Google is caught with its trousers round its ankles. (So is Facebook, but that’s happened so many times that nobody cares any more.) Microsoft predictably jumps up and joins in the finger pointing after Apple has tapped them on the shoulder, filled them in, and patiently explained the joke to them a couple of times.

What should have happened instead was this: Google should have pointed out what Apple were doing at the start. They should have given the user a choice, and told them why tuned ads are preferable (namely they’re less annoying, and allow them to continue to get services for free). Then they should have trusted the consumers to continue to use their services and taken the risk.

Instead they chickened out. No surprise that Facebook didn’t say a word. That’s their style. Nobody expects them to play nice anyway. But Google were stupid because they’ve set themselves up as the shiny grinning Mormon of technology-land, and altogether nicer-than-thou, which makes it dirt-simple for people to point the finger at them.

What’s lame about this situation is that it’s another example of business as usual as these companies claw and scratch at each other to become the dominant hegemonic shitweasel in the pack. What’s interesting about it is that we can see, in miniature, the respective characters of four of the major players.

Apple: Sly and ahead of the curve.
Microsoft: Behind the curve and struggling to emulate the younger kids.
Facebook: Crooked as they come, but nobody cares any more.
Google: Trying to remain nicer-than-thou while resorting to the same dodgy tactics as everyone else.

Here, to make things very clear, is a little strategy guide of what I see as going on:

Apple’s strategy:
* Leverage a lock-in ecosystem to actively cripple any and all competition.
* Use fanboy-lust and sly positioning to make other people look dirtier than themselves.
* Help Microsoft, because whatever they do will be awful, and that will make Apple continue to look like the nice shiny option.
Your visual aid: A dewy-eyed cheerleader with a prison-issue shiv hidden in her Hello Kitty backpack.

Microsoft’s strategy:
* Secure enough financial support to enable strong-arm tactics to work even while market-share dwindles.
* Try to secretly position themselves as partners with Facebook using loud stage-whispers while hoping nobody notices.
* Use patents, licensing, and court-cases to achieve what their software isn’t good enough to do, namely retain their position in the market.
Your visual aid: Half-blind hunchback with a big stick and a face like fire-damaged lego.

Facebook’s strategy:
* Leverage a lock-in website to actively cripple any and all competition.
* Help Microsoft in order to cheaply secure the support of a flailing company.
* Sell out their own users in order to maintain market dominance and hope they don’t notice.
Your visual aid: A sleazy teen with a ‘you can’t tell me what to do, Grandad’ smile and a pocket full of cut-price ecstasy he made in his own garage.

Google’s strategy:
* Give up on being nice because it looks like it’s losing money.
* Give up on being objective and data-driven because it looks like it’s losing money.
* Belatedly try to invent a lock-in ecosystem because that’s what seems to be working for other people.
Your visual aid: A Mormon missionary fumbling with the catch on an automatic while not noticing that it’s pointed at his foot.

So who do I think is going to win? Well, unless he gets his act together and works out what he’s good at, the Mormon is probably toast. The hunchback will eventually hit himself on the head with his own stick. The sleazy teen is growing up quick and will soon find it a lot harder to sell his funny pills. The only one among them with any brains is the cheerleader. On the other hand, the cheerleader’s appeal is largely based on having someone else to compare herself to. I wouldn’t be surprised if the winner will be the one player who didn’t even appear in this story. Can you guess who that is? Here’s a hint: they sell everything.

How Big is Small?

A new blog post from How to Build a Universe:

This last week, various news sites on the web have been reporting an important news story for digital physics enthusiasts. The news is this: a chap called Philippe Laurent and his colleagues have performed an extensive analysis of the 19th December 2004 gamma-ray burst in search of polarization effects that would lend support to some Loop Quantum Gravity models of spacetime. Their results demonstrated pretty convincingly that if the LQG models tested are right, discrete units of space would have to be thirteen orders of magnitude smaller than the Planck length, which is really quite small indeed. This builds powerfully on other results released in 2009 which point in the same general direction. This is great news for digital physics as it narrows the field of possible models very nicely. The LQG theorists provided some splendidly testable predictions and consequently, the game has moved forwards.

I confess to being pleased by the result: though I very much like what the LQG community is exploring, I would be surprised if differences in the velocity or polarization of photons yielded proof of the granular nature of space. My personal guess is that discrete spacetime doesn’t work that way.

What’s a little more disappointing is the way that the result has been reported on the web. There have been lots of statements either implying that because of this result, the voxels of spacetime must be very small, or that the idea of discrete spacetime is itself suddenly less plausible. Most likely these comments have arisen because the article originally posted on the ESA’s own website says the following: “It has shown that any underlying quantum ‘graininess’ of space must be at much smaller scales than previously predicted.”

The author appears to be a fellow called Markus Bauer, who, probably in the name of journalistic expediency, chose to leave out the key phrase “if loop quantum gravity models are correct”. His statement might have been okay if LQG was the only discrete spacetime model in town, but that’s far from true these days.

Can we forgive him? Yes. But I personally do so with a small sigh. His article sent small ripples across the web, leading to slightly wrongy statements all over the place, such as this remark in Wired UK: “An astrophysicist’s attempt to measure quantum ‘fuzziness’ to find out if we’re living in a hologram has been headed off at the pass by results suggesting that we’re probably not.”

I suppose the reason why I’m a little sad about this is because I feel like this kind of interpretation isn’t good for science. Science, as the marvelous Karl Popper pointed out many years ago, advances via refutation. It’s great that a handful of LQG models got ruled out. Philippe Laurent wasn’t ‘headed off at the pass’–he scored an awesome goal! Having a theory shot down isn’t a problem, it’s a cause for cheering because now there’s more lovely science work to do and we have better data to do it with!

Keeping this distinction straight in the minds of the public is important, IMO, because this feature of science is rather different from the way that we normally tend to think about things. For instance, if a politician makes an incorrect prediction, we often condemn him or her for getting it wrong. If they change their mind a lot, we call them a ‘flip-flopper’ rather than ‘someone who’s learning’. Scientists must be professional flip-floppers and spend their entire careers getting things wrong. If they aren’t, they’re not learning. And if they’re not learning, they’re not doing their jobs.

The modern academic system already rewards people way too much for protecting pet theories and trying to look unassailably correct. In doing so, it prevents many brilliant people from making theoretical strides without fear. So let me add one voice to web chorus and say this: “Way to go Philippe Laurent! Way to go Quantum Gravity Theorists! Keep coming up with those testable predictions. Quantum gravity badly needs them!”

Is Reality Digital or Analog?

A post I first submitted to How to Build a Universe:

After my collaboration with Tommaso Bolognesi last autumn, we noticed the following essay competition being run by FQXi. FQXi is a marvelous organization that supports frontier physics research in areas where other organizations wouldn’t dare. It’s an invaluable resource for people who’re trying hard to think outside the paradigm box, and a useful rallying point for those interested in foundational questions about how the universe actually works.

The subject of the competition: Is Reality Digital or Analog?

How could we not take part? Tommaso and I agreed that we should both submit an essay. I didn’t win, but I’m delighted to say that Tommaso received a prize. For those who’re interested, my submission can be found here, and Tommaso’s here.

Antropy Doubled

In my last post, I introduced an algorithm for turning order into chaos and back again using a turmite (otherwise known as a 2D Turing Machine). This time, I have to admit that I kept some of the truth from you. I didn’t just come up with one algorithm, I came up with two. And the second one is significantly more weird and beautiful than the first.

Where my first algorithm used a single machine head, my second one uses two. And instead of simply picking up and putting down bits, this new algorithm swaps them from one head to the other. Machine-head A passes its data to head B, and B passes its data to A. What this means is that the new algorithm is a lot faster at turning order into chaos while being no less reversible.
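The post leaves the exact rule table open, so the sketch below is my own guess at one bit-conserving variant of the idea: each step, the bits under the two heads trade places, and each head then turns and moves based on the bit it just received. Because a swap is a permutation of the grid’s contents, no bits are ever created or destroyed, which is the property that keeps the whole thing reversible.

```python
def two_head_step(grid, heads):
    """One step of an assumed two-head swap rule: the bits under heads A and B
    trade places, then each head turns based on the bit it just received
    (right for a 1, left for a 0) and steps forward. Plain toroidal wrapping
    is used here for simplicity."""
    h, w = len(grid), len(grid[0])
    (ax, ay, _, _), (bx, by, _, _) = heads
    # A passes its bit to B, and B passes its bit to A
    grid[ay][ax], grid[by][bx] = grid[by][bx], grid[ay][ax]
    new_heads = []
    for x, y, dx, dy in heads:
        if grid[y][x]:            # received a 1: turn right
            dx, dy = -dy, dx
        else:                     # received a 0: turn left
            dx, dy = dy, -dx
        new_heads.append(((x + dx) % w, (y + dy) % h, dx, dy))
    return grid, new_heads
```

Since the swap and the turns are each invertible, running the rule backwards (step back, turn the other way, swap again) undoes any number of steps.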

On top of this, the new algorithm produces some eerie physics-like results from time to time, the reasons for which still aren’t entirely clear to me. The new algorithm working on a block of bits also looks peculiarly like something rotting or rusting—something I’ve not seen before in a simple algorithm.

Once again, I’m struck by the peculiar corrosive beauty of these programs but am still not sure what they’re good for. You can find the simulations here.

Increasing Antropy

I have a new algorithm that I want to share with you. It’s interesting to watch, slightly mysterious, and I can’t help but wonder if it might turn out to be useful for cryptography or something. Before you take a look, though, I should first explain what it does, why I came up with it, and what it has to do with digital physics. (For the impatient, the cool stuff is here.)

During my collaboration with Tommaso Bolognesi at CNR-ISTI last autumn, we were looking for ways to create sparse, pseudo-random data structures. Specifically, we wanted sparsely-connected directed acyclic graphs (a requirement for building spacetime-like causal sets, a term I explained in my last post). However, we soon discovered that there weren’t any classes of data structures for which we could get the kind of results we were looking for.

For those of you with a math/computing background, this might sound like a slightly odd statement, because there have been algorithms to build sparse, pseudo-random matrices for ages. However, none of these algorithms were as simple as we wanted, or as adaptable as we wanted. For starters, most of these algorithms require that you explicitly represent numbers somewhere in your code. For our purposes, this pretty much ruled them out immediately. What we wanted was for the sparseness of the data to emerge naturally out of a process without us having to impose extra layers of interpretation on it.

To get a sense of what I mean, let’s take a look at turmites. Turmites are very simple programs of a sort that Tommaso and I have explored, and they’re great at producing pseudo-random data. The way they work is very straightforward: you have a network of memory slots hooked up according to some geometrical rule. You also have a machine-head that can move across that network and change the contents of the memory slot it’s sitting on. You then create a simple rule for moving the machine-head based on the contents of the slot where it’s located. It’s basically like a 2D Turing Machine.

The simplest such program is probably Langton’s Ant—the first turmite ever discovered. It runs on a square grid of black and white cells, and has an operating rule that says:

  • If you’re on a white cell, make it black, turn right, and step forwards.
  • If you’re on a black cell, make it white, turn left, and step forwards.

That’s it. It’s about as computationally simple as you can get and yet the output is so unexpected that computer scientists still don’t have much in the way of useful proofs about its behavior.
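To show just how little code the two rules above amount to, here is a minimal Python sketch. The classic formulation uses an infinite grid; this version wraps at the edges purely for convenience.

```python
def langtons_ant(steps, size=100):
    """Run Langton's Ant for `steps` steps on a size x size grid of white cells.
    Cells: 0 = white, 1 = black. The ant starts in the middle, facing north."""
    grid = [[0] * size for _ in range(size)]
    x = y = size // 2
    dx, dy = 0, -1                 # facing north (y decreases upwards)
    for _ in range(steps):
        if grid[y][x] == 0:        # white: make it black, turn right
            grid[y][x] = 1
            dx, dy = -dy, dx
        else:                      # black: make it white, turn left
            grid[y][x] = 0
            dx, dy = dy, -dx
        x = (x + dx) % size        # step forwards (wrapping at the edges)
        y = (y + dy) % size
    return grid
```

Run it for long enough and you see the famous behavior: around ten thousand steps of apparent chaos, after which the ant settles into building an endlessly repeating diagonal ‘highway’.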

At face value, turmites look like a terrific fit for the sort of randomness we want to create. Furthermore, there are plenty of turmites that you can run for as long as you like, and never get repeating data. However, if you take a look at the kinds of patterns that turmites create, you may notice something about them. The patterns are all pretty dense. What I mean by this is that the balance of black and white squares that they generate is usually pretty much equal. Sure, some of them make denser patterns than others, but the density is never all that low. Furthermore, you definitely don’t get to choose in advance how dense the pattern is going to be. Your choice of ant algorithm decides that for you. You don’t have any say in the matter.

The reason for this is that in order for the ant to produce random-looking data, its behavior needs to be unpredictable. And its behavior can only be unpredictable if it has a nice rich mix of black and white cells to work with. Take away the mixture and the behavior stops being unpredictable.

One way to get very sparse data out of a turmite is to pick a rule that’s got a large number of different states. In other words, instead of only permitting you to put a one or a zero in each slot that the turmite visits, you can put one of a larger range of values, say, for instance, one of ten different values. Then, to get your sparseness, you throw away everything except one of the states when you examine the results. However, we didn’t like this solution either, as it required us to take the output of the algorithm and apply some kind of arbitrary filter to it. So we were stuck. We couldn’t even create turmites of the sort that we wanted, let alone causal sets.

Then, shortly after I got back to the US, a solution of sorts to the turmite version of the problem occurred to me. Whether the same kind of algorithm will turn out to be applicable to networks is unknown, but it seems like an interesting starting point.

The idea here is that instead of having a rule for turning a slot in the grid on or off, instead you have a rule for picking up a bit or putting it down. This allows you to populate your environment with data as sparse as you like, and know that the density will never change as long as the program runs. There’s one other twist, so to speak. Rather than running the program on a grid of infinite size, you run it on a grid of finite size, but you hook up the edges of that grid such that leaving the top of the grid brings you back at the bottom of the grid shifted one row to the left. Likewise, leaving through the bottom brings you back a row to the right. The left and right edges of the grid are also hooked up the same way, so that the whole grid is slightly twisted.

An odd set-up, admittedly, but what it gives you is a turmite that takes whatever input you provide and mangles it for you without losing track of any of your bits. Because no bits are ever gained or lost, it also means that the ant should be reversible. We can write a program that can unmangle any mangled data we’re handed. It’s like a magic wand for turning order into chaos and back again—a kind of do-it-yourself entropy kit. In fact, it’s a little bit like a tiny universe with a finite supply of matter. Over time, everything becomes disordered, but it does so according to a rule that works as well backwards as it does forwards.
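The post doesn’t spell out the exact rule table, so the pick-up and turn choices in the sketch below are illustrative guesses of mine. The property that matters is the one described above: bits only ever move between the grid and the head’s hand, never appearing or vanishing, so the density you start with is the density you keep.

```python
def wrap(x, y, w, h):
    """Twisted-torus boundary: leaving one edge re-enters on the opposite
    edge shifted by one, so the whole grid is slightly twisted."""
    if y < 0:
        y, x = h - 1, (x - 1) % w    # off the top: bottom edge, one step left
    elif y >= h:
        y, x = 0, (x + 1) % w        # off the bottom: top edge, one step right
    if x < 0:
        x, y = w - 1, (y - 1) % h    # left and right edges wrap the same way
    elif x >= w:
        x, y = 0, (y + 1) % h
    return x, y

def mangle(grid, steps):
    """Run the bit-conserving turmite; return the grid and any bit in hand."""
    h, w = len(grid), len(grid[0])
    x = y = held = 0
    dx, dy = 1, 0                        # start in the corner, heading east
    for _ in range(steps):
        if grid[y][x] == 1 and held == 0:    # pick the bit up and turn right
            grid[y][x], held = 0, 1
            dx, dy = -dy, dx
        elif grid[y][x] == 0 and held == 1:  # put the bit down and turn left
            grid[y][x], held = 1, 0
            dx, dy = dy, -dx
        # otherwise carry straight on
        x, y = wrap(x + dx, y + dy, w, h)
    return grid, held
```

Unmangling is then a matter of running the same table in reverse: step back along the twisted grid, undo the turn, and undo the pick-up or put-down.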

About fifty years ago, this would probably have been an awesome way of encoding messages. However, these days we have public key cryptography, so the utility of my algorithm is a little less obvious. Still, there’s something about it that gives me a tingly feeling. It has practical uses, I’m sure of it. I’m just not sure what they are yet. Any ideas?

How to fold this approach back into a class of algorithms that will help build causal sets is something I’m still working on. I can use this method to approximate percolation dynamics by using the turmite to construct an adjacency matrix. However, that doesn’t help us build realistic spacetimes. Clearly, more work is required.

And now, for those of you who’ve been patient enough to read to the end, here’s another link to the simulation. Happy watching, and if you think of a use for this thing, let me know!

Adventures in Game Theory, Part Four

For those of you freshly joining this adventure, the last three posts have led us on a strange, thrilling journey that has passed through the valleys of introductory game theory, the jungles of applied improv, and the mountains of software simulation. Now, at last we arrive at our thunderous finale on the shores of Lake Awesome. I highly recommend reading from the start of the sequence, otherwise what I have to say may be too extraordinary and wonderful for your mind to fully hold!

The end of the last installment caught me teetering on the brink of a realization–that by adding just a little more functionality to my simulation, I could start exploring some more socially useful truths about how people behave. My insight was to add status.

What this meant in practice was splitting the population of agents in my model into two groups: bosses and workers, or in training community parlance: leaders and team-members. Then, in order to make the interactions between bosses and workers a little less benign, I added two extra constraints.

One: If bosses were aggressive (nose-thumbing) to workers, workers were not empowered to reciprocate and be aggressive back in their next encounter.

Two: Bosses were unable to remember the specifics of positive interactions they had with workers. So for instance, if a boss and a worker both chose paper in one round, the worker would remember the fact, but the boss would not.

Implementing these changes was easy, as it simply required that the two memory rules I’d already added to make the first simulation work were now dependent on status. (I also added a little extra logic around the movement of the agents to ensure that workers had to interact with bosses, and to make the movements of bosses dependent on other bosses but not workers. However, while necessary, that code is somewhat beside the point.)
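The original simulation was a Java applet, but the two status rules are easy to sketch as toggles. In the Python sketch below, the move names, the memory scheme, and the `interact` function are my own illustrative inventions; only the two constraints themselves come from the description above.

```python
import random

MOVES = ["rock", "paper", "scissors"]    # the positive, cooperative choices
THUMB = "thumb"                          # the aggressive 'nose-thumbing' move

class Agent:
    def __init__(self, is_boss):
        self.is_boss = is_boss
        self.likes = {}       # id(other) -> a move that worked positively before
        self.grudges = set()  # ids of agents who were aggressive to us last round

    def choose(self, other, workers_cant_retaliate=True):
        """Pick a move against `other`, honoring the status rules."""
        if id(other) in self.grudges:
            # constraint one: workers cannot reciprocate boss aggression
            blocked = workers_cant_retaliate and not self.is_boss and other.is_boss
            if not blocked:
                return THUMB
        return self.likes.get(id(other), random.choice(MOVES))

def interact(a, b, bosses_forget=True, workers_cant_retaliate=True):
    ma = a.choose(b, workers_cant_retaliate)
    mb = b.choose(a, workers_cant_retaliate)
    # record grudges for the next encounter
    a.grudges.discard(id(b)); b.grudges.discard(id(a))
    if mb == THUMB: a.grudges.add(id(b))
    if ma == THUMB: b.grudges.add(id(a))
    # a positive interaction is both agents picking the same cooperative move
    if ma == mb and ma != THUMB:
        # constraint two: bosses cannot remember positive interactions with workers
        if not (bosses_forget and a.is_boss and not b.is_boss):
            a.likes[id(b)] = ma
        if not (bosses_forget and b.is_boss and not a.is_boss):
            b.likes[id(a)] = mb
    return ma, mb
```

Flipping `bosses_forget` or `workers_cant_retaliate` off corresponds to switching off each status dependency in isolation, which is exactly the experiment the rest of this post describes.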

What happened next was wonderfully clear. Within a few seconds, all the bosses were behaving aggressively while the workers normed on a set of social standards of their own. My simulation suddenly looked a lot like some of the more awful companies I’d worked for. Without having to say anything about the kinds of people who become leaders, or about the specifics of organizational culture, I’d captured a simple truth about leadership: that without the incentives to behave otherwise and the right skills to succeed, people with power slide towards bad behavior, even if they start off thinking like saints.

What was even more interesting was that as the simulation progressed, the bosses started to bump up against the corners of the virtual environment as if desperate to leave. Because aggressive behavior was so successful for bosses in their interactions with workers, they were applying the same behavior to each other, resulting in a rapid erosion of their ability to collaborate. The lesson: by letting leaders behave badly, we ensure that leaders have less pleasant interactions with each other, as well as with us.

My goal, though, was not to engage in rhetoric about leaders, but instead to see whether models like the one I was looking at could tell us something about how to help organizations do better. To do this, I looked at what happened when I turned each of the status dependencies off in isolation.

Turning off the status dependency for remembering positive interactions is rather like sending your managers on an employee recognition course. They learn to value the specific information they get from each person they work with, and to let their team members know that they’re seen and valued.

The result in the simulation is that the culture improves significantly. The workers integrate more tightly and the bosses take on the same cultural colors as the workers they lead. Interestingly, the bosses don’t all start cooperating at once. Many of them initially retain their aggressive behavior. Then, one by one, they figure out that collaboration is more effective.

The lesson here: that training leaders to listen can make a huge difference in their effectiveness, but that the change they take on depends on their willingness to implement what they learn.

If instead, we turn off the status dependency for worker retaliation to boss aggression, the effects are even more interesting. Making this change is rather like implementing a shared accountability system like the one that revolutionized the airline industry and transformed the safety standards in air travel. Under this system, the pilots of planes are no longer the unquestionable captains of the air that they once were. If copilots think that they’re witnessing a mistake, they’re duty-bound to call the pilot on it and to report it to air traffic control if necessary. In our simulated business, we can imagine that we’re instructing the worker agents to hold their bosses accountable if they don’t uphold the collaborative social standards of their organization.

What happens when we make this change is that the behaviors of the bosses have trouble settling onto any specific color. When we watch the ‘mood’ of the agents to see how many positive or negative interactions they’re having, we see that the tables have been turned. The workers are now having a pretty great time all round and the bosses are mostly miserable–the opposite of what we see if status dependence for retaliation is left on. This is because the workers now have an advantage that the bosses don’t–they can remember and repeat positive interactions whereas bosses cannot. Because aggression no longer secures automatic results, bosses don’t have an easy way of stabilizing on a successful behavior.

The lesson here is that enabling everyone in an organization to hold leaders accountable for their behavior is what creates the incentive for leaders to improve, but that without the right training and direction, the main result is leader unhappiness.

As you might expect, turning off both status-dependent features creates a benign, functional organization that settles rapidly onto a cooperative culture. If you want to play around yourself, and have Java installed, the simulation is the second applet on this page. (It has four buttons.)

As before, red, blue and green denote different positive interactions. Gray denotes aggressive behavior. Swapping to ‘mood view’ shows the success of the agents’ interactions, ranging from blue (unhappy agents) to yellow (cheerful ones).

Clearly there’s a lot more to do here. For a start, in order to turn this into a science result, the simulations will need to be a lot more rigorous, which will probably mean sacrificing the visual playfulness. Furthermore, we’ve only looked at one memory model for agents, and solid research would need to try out others. However, the results seem pretty clear. We’ve gone from a simple game played in a room full of people to a model that turns business intuition into something rather like unavoidable, mathematical fact.

Thus, in the wake of our adventure, we can say with real confidence that any society or organization that doesn’t empower its people to hold its leaders accountable, and which doesn’t teach those leaders how to listen, can expect its leaders to turn out bad, regardless of how ‘good’ we believe them to be as people.

This is something most of us already believe but which we often fail to implement. For instance, we’re all used to the idea of holding elected officials accountable, but explicit training in ‘voter recognition’? We leave that to chance. Similarly, we’re used to the idea that good managers are the ones who pay attention, but company-wide accountability systems? Those are pretty rare. I believe that simulations like this can make these points unavoidable, and also perhaps show us how to build measures that make our adherence to such standards quantifiable.

For any skeptics out there, my huge thanks for reading this far, and here’s a final thought to consider. Agent-based simulations of this sort have been used by biologists for years on the following basis: we can’t capture all the details of natural systems like cultures or the lives of organisms, so instead we capture only what we know is true. From that, we look to see what else must be true as a consequence. Thus we attempt to make the simplicity of the model a strength, not a weakness. In this instance, the agents are so simple that we can expect the same effects to arise regardless of the memory model we employ for our agents, so long as that memory model permits learning. Further work in this area will hopefully make that point even clearer.

That’s it. The adventure is finished. And while the ending perhaps isn’t unexpected, it feels like a step forwards to me. After all, if we can do this starting with Rock Paper Scissors, think what we can do with the game of Twister.