How Big is Small?

A new blog post from How to Build a Universe:

This last week, various news sites on the web have been reporting an important news story for digital physics enthusiasts. The news is this: a chap called Philippe Laurent and his colleagues have performed an extensive analysis of the 19th December 2004 gamma-ray burst in search of polarization effects that would lend support to some Loop Quantum Gravity models of spacetime. Their results demonstrated pretty convincingly that if the LQG models tested are right, discrete units of space would have to be thirteen orders of magnitude smaller than the Planck length, which is really quite small indeed. This builds powerfully on other results released in 2009 that point in the same general direction. This is great news for digital physics, as it narrows the field of possible models very nicely. The LQG theorists provided some splendidly testable predictions and, consequently, the game has moved forwards.
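To put that bound in perspective, here's a quick back-of-the-envelope sketch. The Planck length value is the standard figure (about 1.616 × 10⁻³⁵ metres); the factor of 10⁻¹³ is simply the "thirteen orders of magnitude" from the result:

```python
# Back-of-the-envelope: how small is "thirteen orders of magnitude
# smaller than the Planck length"?
PLANCK_LENGTH_M = 1.616e-35  # standard value, in metres

# The reported bound: graininess (for the tested LQG models) must sit
# at least 13 orders of magnitude below the Planck scale.
bound_m = PLANCK_LENGTH_M * 1e-13

print(f"Planck length:        {PLANCK_LENGTH_M:.3e} m")
print(f"Graininess bound:     {bound_m:.3e} m")  # ~1.6e-48 m
```

Which is to say: around 10⁻⁴⁸ metres, a scale so far below anything we can probe directly that "really quite small indeed" is, if anything, an understatement.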

I confess to being pleased by the result because, much as I like what the LQG community is exploring, I would be surprised if differences in the velocity or polarization of photons yielded proof of the granular nature of space. My personal guess is that discrete spacetime doesn’t work that way.

What’s a little more disappointing is the way that the result has been reported on the web. There have been lots of statements implying either that, because of this result, the voxels of spacetime must be very small, or that the idea of discrete spacetime is itself suddenly less plausible. Most likely these comments have arisen because the article originally posted on the ESA’s own website says the following: “It has shown that any underlying quantum ‘graininess’ of space must be at much smaller scales than previously predicted.”

The author appears to be a fellow called Markus Bauer, who, probably in the name of journalistic expediency, chose to leave out the key phrase “if loop quantum gravity models are correct”. His statement might have been okay if LQG were the only discrete spacetime model in town, but that’s far from true these days.

Can we forgive him? Yes. But I personally do so with a small sigh. His article sent small ripples across the web, leading to slightly wrong statements all over the place, such as this remark in Wired UK: “An astrophysicist’s attempt to measure quantum ‘fuzziness’ to find out if we’re living in a hologram has been headed off at the pass by results suggesting that we’re probably not.”

I suppose the reason I’m a little sad about this is that this kind of interpretation isn’t good for science. Science, as the marvelous Karl Popper pointed out many years ago, advances via refutation. It’s great that a handful of LQG models got ruled out. Philippe Laurent wasn’t ‘headed off at the pass’; he scored an awesome goal! Having a theory shot down isn’t a problem, it’s a cause for cheering, because now there’s more lovely science work to do and we have better data to do it with!

Keeping this distinction straight in the minds of the public is important, IMO, because this feature of science is rather different from the way that we normally tend to think about things. For instance, if a politician makes an incorrect prediction, we often condemn him or her for getting it wrong. If they change their mind a lot, we call them a ‘flip-flopper’ rather than ‘someone who’s learning’. Scientists must be professional flip-floppers and spend their entire careers getting things wrong. If they aren’t, they’re not learning. And if they’re not learning, they’re not doing their jobs.

The modern academic system already rewards people way too much for protecting pet theories and trying to look unassailably correct. In doing so, it prevents many brilliant people from making theoretical strides without fear. So let me add one voice to the web chorus and say this: “Way to go, Philippe Laurent! Way to go, Quantum Gravity Theorists! Keep coming up with those testable predictions. Quantum gravity badly needs them!”
