The Google Safari Thing

This week presented us with another ridiculous story in the ongoing technology wars between Google, Apple, and everyone else. In case you haven’t seen this news story, here are some handy links: BBC, Wired, and then there’s this twist from PC World.

While this story is, in isolation, merely annoying, it serves as a useful illustration of the techno-battle that’s unfolding around us. I’d like to paint a picture of that battle for you. But first, I’ll need to outline what I suspect was actually going on.

I think it happened like this. Apple came up with yet another clever idea. They put cookie-blocking technology into their browser that conveniently hamstrings other people’s web service software. This means that people like Google and Facebook can only deliver a second-rate user experience on Apple’s browser. That’s awesome for Apple, because the user experience they want to deliver, which competes with those services, isn’t dependent on their browser in the same way.

At the same time, because the browser feature is ‘blocking cookies’ and ‘protecting user privacy’, the companies trying to deliver those services aren’t going to complain. This is also great for Apple, because their competitors get thrown in the stocks and pelted with fruit if they so much as open their mouths.

So Google and Facebook find a sneaky way around the cookie-blocking software. They don’t tell people what they’re doing, because what they most want the technology for is to tune the ads that customers see, which, let’s face it, isn’t a very popular reason.

Then the inevitable happens: somebody notices. Google is caught with its trousers round its ankles. (So is Facebook, but that’s happened so many times that nobody cares any more.) Microsoft predictably jumps up and joins in the finger pointing after Apple has tapped them on the shoulder, filled them in, and patiently explained the joke to them a couple of times.

What should have happened instead was this: Google should have pointed out what Apple were doing at the start. They should have given the user a choice, and told them why tuned ads are preferable (namely they’re less annoying, and allow them to continue to get services for free). Then they should have trusted the consumers to continue to use their services and taken the risk.

Instead they chickened out. No surprise that Facebook didn’t say a word. That’s their style. Nobody expects them to play nice anyway. But Google were stupid, because they’ve set themselves up as the shiny grinning Mormon of technology-land, altogether nicer-than-thou, which makes it dirt-simple for people to point the finger at them.

What’s lame about this situation is that it’s another example of business as usual as these companies claw and scratch at each other to become the dominant hegemonic shitweasel in the pack. What’s interesting about it is that we can see, in miniature, the respective characters of four of the major players.

Apple: Sly and ahead of the curve.
Microsoft: Behind the curve and struggling to emulate the younger kids.
Facebook: Crooked as they come, but nobody cares any more.
Google: Trying to remain nicer-than-thou while resorting to the same dodgy tactics as everyone else.

Here, to make things very clear, is a little strategy guide of what I see as going on:

Apple’s strategy:
* Leverage a lock-in ecosystem to actively cripple any and all competition.
* Use fanboy-lust and sly positioning to make other people look dirtier than themselves.
* Help Microsoft, because whatever they do will be awful, and that will make Apple continue to look like the nice shiny option.
Your visual aid: A dewy-eyed cheerleader with a prison-issue shiv hidden in her Hello Kitty backpack.

Microsoft’s strategy:
* Secure enough financial support to enable strong-arm tactics to work even while market-share dwindles.
* Try to secretly position themselves as partners with Facebook using loud stage-whispers while hoping nobody notices.
* Use patents, licensing, and court-cases to achieve what their software isn’t good enough to do, namely retain their position in the market.
Your visual aid: Half-blind hunchback with a big stick and a face like fire-damaged lego.

Facebook’s strategy:
* Leverage a lock-in website to actively cripple any and all competition.
* Help Microsoft in order to cheaply secure the support of a flailing company.
* Sell out their own users to maintain market dominance and hope they don’t notice.
Your visual aid: A sleazy teen with a ‘you can’t tell me what to do, Grandad’ smile and a pocket full of cut-price ecstasy he made in his own garage.

Google’s strategy:
* Give up on being nice because it looks like it’s losing money.
* Give up on being objective and data-driven because it looks like it’s losing money.
* Belatedly try to invent a lock-in ecosystem because that’s what seems to be working for other people.
Your visual aid: A Mormon missionary fumbling with the catch on an automatic while not noticing that it’s pointed at his foot.

So who do I think is going to win? Well, unless he gets his act together and works out what he’s good at, the Mormon is probably toast. The hunchback will eventually hit himself on the head with his own stick. The sleazy teen is growing up quick and will soon find it a lot harder to sell his funny pills. The only one among them with any brains is the cheerleader. On the other hand, the cheerleader’s appeal is largely based on having someone else to compare herself to. I wouldn’t be surprised if the winner will be the one player who didn’t even appear in this story. Can you guess who that is? Here’s a hint: they sell everything.

How Big is Small?

A new blog post from How to Build a Universe:

This last week, various news sites on the web have been reporting an important news story for digital physics enthusiasts. The news is this: a chap called Philippe Laurent and his colleagues have performed an extensive analysis of the 19th December 2004 gamma-ray burst in search of polarization effects that would lend support to some Loop Quantum Gravity models of spacetime. Their results demonstrated pretty convincingly that if the LQG models tested are right, discrete units of space would have to be thirteen orders of magnitude smaller than the Planck length, which is really quite small indeed. This builds powerfully on other results released in 2009 which point in the same general direction. This is great news for digital physics as it narrows the field of possible models very nicely. The LQG theorists provided some splendidly testable predictions and consequently, the game has moved forwards.

I confess to being pleased by the result as, though I like very much what the LQG community is exploring, I would be surprised if differences in the velocity or polarization of photons yielded proof of the granular nature of space. My personal guess is that discrete spacetime doesn’t work that way.

What’s a little more disappointing is the way that the result has been reported on the web. There have been lots of statements either implying that because of this result, the voxels of spacetime must be very small, or that the idea of discrete spacetime is itself suddenly less plausible. Most likely these comments have arisen because the article originally posted on the ESA’s own website says the following: “It has shown that any underlying quantum ‘graininess’ of space must be at much smaller scales than previously predicted.”

The author appears to be a fellow called Markus Bauer, who, probably in the name of journalistic expediency, chose to leave out the key phrase “if loop quantum gravity models are correct”. His statement might have been okay if LQG was the only discrete spacetime model in town, but that’s far from true these days.

Can we forgive him? Yes. But I personally do so with a small sigh. His article sent small ripples across the web, leading to slightly wrongy statements all over the place, such as this remark in Wired UK: “An astrophysicist’s attempt to measure quantum ‘fuzziness’ to find out if we’re living in a hologram has been headed off at the pass by results suggesting that we’re probably not.”

I suppose the reason why I’m a little sad about this is because I feel like this kind of interpretation isn’t good for science. Science, as the marvelous Karl Popper pointed out many years ago, advances via refutation. It’s great that a handful of LQG models got ruled out. Philippe Laurent wasn’t ‘headed off at the pass’–he scored an awesome goal! Having a theory shot down isn’t a problem, it’s a cause for cheering because now there’s more lovely science work to do and we have better data to do it with!

Keeping this distinction straight in the minds of the public is important, IMO, because this feature of science is rather different from the way that we normally tend to think about things. For instance, if a politician makes an incorrect prediction, we often condemn him or her for getting it wrong. If they change their mind a lot, we call them a ‘flip-flopper’ rather than ‘someone who’s learning’. Scientists must be professional flip-floppers and spend their entire careers getting things wrong. If they aren’t, they’re not learning. And if they’re not learning, they’re not doing their jobs.

The modern academic system already rewards people way too much for protecting pet theories and trying to look unassailably correct. In doing so, it prevents many brilliant people from making theoretical strides without fear. So let me add one voice to the web chorus and say this: “Way to go Philippe Laurent! Way to go Quantum Gravity Theorists! Keep coming up with those testable predictions. Quantum gravity badly needs them!”

Is Reality Digital or Analog?

A post I first submitted to How to Build a Universe:

After my collaboration with Tommaso Bolognesi last autumn, we noticed the following essay competition being run by FQXi. FQXi is a marvelous organization that supports frontier physics research in areas where other organizations wouldn’t dare. It’s an invaluable resource for people who’re trying hard to think outside the paradigm box, and a useful rallying point for those interested in foundational questions about how the universe actually works.

The subject of the competition: Is Reality Digital or Analog?

How could we not take part? Tommaso and I agreed that we should both submit an essay. I didn’t win, but I’m delighted to say that Tommaso received a prize. For those who’re interested, my submission can be found here, and Tommaso’s here.

Antropy Doubled

In my last post, I introduced an algorithm for turning order into chaos and back again using a turmite (otherwise known as a 2D Turing Machine). This time, I have to admit that I kept some of the truth from you. I didn’t just come up with one algorithm, I came up with two. And the second one is significantly more weird and beautiful than the first.

Where my first algorithm used a single machine head, my second one uses two. And instead of simply picking up and putting down bits, this new algorithm swaps them from one head to the other. Machine-head A passes its data to head B, and B passes its data to A. What this means is that the new algorithm is a lot faster at turning order into chaos while being no less reversible.
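The post doesn’t give the exact two-head rule, so here’s a speculative Python sketch of the swapping idea: each step, the bits under heads A and B trade places, and each head then turns according to the bit it just received. This is one guess at a rule with the stated properties (it conserves bits and can be run backwards), not the actual code behind the simulations.

```python
def step_pair(grid, a, b):
    """One step of a two-head, bit-swapping turmite (speculative rule).

    Each head is a tuple (x, y, dx, dy). The bits under the two heads
    swap places, then each head turns based on the bit it received and
    steps forwards on a wrap-around grid."""
    ax, ay, adx, ady = a
    bx, by, bdx, bdy = b
    # A hands its bit to B's cell, and B hands its bit to A's cell
    grid[ay][ax], grid[by][bx] = grid[by][bx], grid[ay][ax]
    # each head turns right on a 1, left on a 0
    adx, ady = (-ady, adx) if grid[ay][ax] else (ady, -adx)
    bdx, bdy = (-bdy, bdx) if grid[by][bx] else (bdy, -bdx)
    h, w = len(grid), len(grid[0])
    a = ((ax + adx) % w, (ay + ady) % h, adx, ady)
    b = ((bx + bdx) % w, (by + bdy) % h, bdx, bdy)
    return a, b
```

Because two bits move on every step, a rule of this shape scrambles its input roughly twice as fast as a single head, while the swap remains invertible.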

On top of this, the new algorithm produces some eerie physics-like results from time to time, the reasons for which still aren’t entirely clear to me. The new algorithm working on a block of bits also looks peculiarly like something rotting or rusting—something I’ve not seen before in a simple algorithm.

Once again, I’m struck by the peculiar corrosive beauty of these programs but am still not sure what they’re good for. You can find the simulations here.

Increasing Antropy

I have a new algorithm that I want to share with you. It’s interesting to watch, slightly mysterious, and I can’t help but wonder if it might turn out to be useful for cryptography or something. Before you take a look, though, I should first explain what it does, why I came up with it, and what it has to do with digital physics. (For the impatient, the cool stuff is here.)

During my collaboration with Tommaso Bolognesi at CNR-ISTI last autumn, we were looking for ways to create sparse, pseudo-random data structures. Specifically, we wanted sparsely-connected directed acyclic graphs (a requirement for building spacetime-like causal sets, a term I explained in my last post). However, we soon discovered that there weren’t any classes of data structures for which we could get the kind of results we were looking for.

For those of you with a math/computing background, this might sound like a slightly odd statement, because there have been algorithms to build sparse, pseudo-random matrices for ages. However, none of these algorithms were as simple as we wanted, or as adaptable as we wanted. For starters, most of these algorithms require that you explicitly represent numbers somewhere in your code. For our purposes, this pretty much ruled them out immediately. What we wanted was for the sparseness of the data to emerge naturally out of a process without us having to impose extra layers of interpretation on it.

To get a sense of what I mean, let’s take a look at turmites. Turmites are very simple programs of a sort that Tommaso and I have explored and are great at producing pseudo-random data. The way they work is very straightforward: you have a network of memory slots hooked up according to some geometrical rule. You also have a machine-head that can move across that network and change the contents of the memory slot it’s sitting on. You then create a simple rule for moving the machine-head based on the contents of the slot where it’s located. It’s basically like a 2D Turing Machine.

The simplest such program is probably Langton’s Ant—the first turmite ever discovered. It runs on a square grid of black and white cells, and has an operating rule that says:

  • If you’re on a white cell, make it black, turn right, and step forwards.
  • If you’re on a black cell, make it white, turn left, and step forwards.

That’s it. It’s about as computationally simple as you can get and yet the output is so unexpected that computer scientists still don’t have much in the way of useful proofs about its behavior.
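The two-line rule really is as simple as it sounds, and it translates into code almost line for line. Here’s a minimal Python sketch; the wrap-around at the edges is a convenience for keeping the grid finite, not part of Langton’s original unbounded-grid formulation:

```python
def langtons_ant(steps, size=64):
    """Run Langton's Ant for `steps` steps on a size-by-size grid.

    Cells: 0 = white, 1 = black. Returns the final grid."""
    grid = [[0] * size for _ in range(size)]
    x = y = size // 2          # start in the middle...
    dx, dy = 0, -1             # ...facing "up" (y grows downwards)
    for _ in range(steps):
        if grid[y][x] == 0:    # white cell: make it black, turn right
            grid[y][x] = 1
            dx, dy = -dy, dx
        else:                  # black cell: make it white, turn left
            grid[y][x] = 0
            dx, dy = dy, -dx
        x = (x + dx) % size    # step forwards, wrapping at the edges
        y = (y + dy) % size
    return grid
```

On an unbounded grid, the famous ‘highway’ pattern emerges from the apparent chaos after roughly ten thousand steps; with wrap-around, the ant eventually runs back into its own trail instead.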

At face value, turmites look like a terrific fit for the sort of randomness we want to create. Furthermore, there are plenty of turmites that you can run for as long as you like, and never get repeating data. However, if you take a look at the kinds of patterns that turmites create, you may notice something about them. The patterns are all pretty dense. What I mean by this is that the balance of black and white squares that they generate is usually pretty much equal. Sure, some of them make denser patterns than others, but the density is never all that low. Furthermore, you definitely don’t get to choose in advance how dense the pattern is going to be. Your choice of ant algorithm decides that for you. You don’t have any say in the matter.

The reason for this is that in order for the ant to produce random-looking data, its behavior needs to be unpredictable. And its behavior can only be unpredictable if it has a nice rich mix of black and white cells to work with. Take away the mixture and the behavior stops being unpredictable.

One way to get very sparse data out of a turmite is to pick a rule that’s got a large number of different states. In other words, instead of only permitting you to put a one or a zero in each slot that the turmite visits, you can put one of a larger range of values, say, for instance, one of ten different values. Then, to get your sparseness, you throw away everything except one of the states when you examine the results. However, we didn’t like this solution either, as it required us to take the output of the algorithm and apply some kind of arbitrary filter to it. So we were stuck. We couldn’t even create turmites of the sort that we wanted, let alone causal sets.

Then, shortly after I got back to the US, a solution of sorts to the turmite version of the problem occurred to me. Whether the same kind of algorithm will turn out to be applicable to networks is unknown, but it seems like an interesting starting point.

The idea here is that instead of having a rule for turning a slot in the grid on or off, instead you have a rule for picking up a bit or putting it down. This allows you to populate your environment with data as sparse as you like, and know that the density will never change as long as the program runs. There’s one other twist, so to speak. Rather than running the program on a grid of infinite size, you run it on a grid of finite size, but you hook up the edges of that grid such that leaving the top of the grid brings you back at the bottom of the grid shifted one row to the left. Likewise, leaving through the bottom brings you back a row to the right. The left and right edges of the grid are also hooked up the same way, so that the whole grid is slightly twisted.

An odd set-up, admittedly, but what it gives you is a turmite that takes whatever input you provide and mangles it for you without losing track of any of your bits. Because no bits are ever gained or lost, it also means that the ant should be reversible. We can write a program that can unmangle any mangled data we’re handed. It’s like a magic wand for turning order into chaos and back again—a kind of do-it-yourself entropy kit. In fact, it’s a little bit like a tiny universe with a finite supply of matter. Over time, everything becomes disordered, but it does so according to a rule that works as well backwards as it does forwards.
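For the curious, here’s a rough Python sketch of this set-up. The post doesn’t spell out the exact pick-up/put-down rule, so the rule below is a hypothetical stand-in: the head swaps whatever it’s carrying with the bit in the current cell, and turns according to what it found there. Whatever the real rule looks like, this one has the two properties described above: the total number of bits never changes, and every step can be undone.

```python
def wrap(x, y, w, h):
    """Twisted-torus wrapping: leaving through the top re-enters at the
    bottom shifted one cell over, and vice versa; the left and right
    edges are hooked up the same way."""
    if y < 0:
        y, x = h - 1, (x - 1) % w
    elif y >= h:
        y, x = 0, (x + 1) % w
    if x < 0:
        x, y = w - 1, (y - 1) % h
    elif x >= w:
        x, y = 0, (y + 1) % h
    return x, y

def step(grid, x, y, dx, dy, carry):
    """One step of a bit-conserving turmite (hypothetical rule)."""
    found = grid[y][x]
    grid[y][x], carry = carry, found   # swap the carried bit with the cell
    if found:                          # found a bit: turn right
        dx, dy = -dy, dx
    else:                              # empty cell: turn left
        dx, dy = dy, -dx
    x, y = wrap(x + dx, y + dy, len(grid[0]), len(grid))
    return x, y, dx, dy, carry
```

The swap is its own inverse, and the turn that was taken can always be recovered from the bit now being carried, so running the rule backwards reconstructs the original data exactly.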

About fifty years ago, this would probably have been an awesome way of encoding messages. However, these days we have public key cryptography, so the utility of my algorithm is a little less obvious. However, there’s something about it that gives me a tingly feeling. It has practical uses, I’m sure of it. I’m just not sure what they are yet. Any ideas?

How to fold this approach back into a class of algorithms that will help build causal sets is something I’m still working on. I can use this method to approximate percolation dynamics by using the turmite to construct an adjacency matrix. However, that doesn’t help us build realistic spacetimes. Clearly, more work is required.

And now, for those of you who’ve been patient enough to read to the end, here’s another link to the simulation. Happy watching, and if you think of a use for this thing, let me know!
