All posts by alexlamb

I, for one, welcome our new robot overlords

There has been a lot of talk in the press of late about the threat of human-level AI. Stephen Hawking has gone on record about the risks. So has Elon Musk. Now Bill Gates has joined the chorus.

This kind of talk makes me groan. I’d like to propose the converse for a moment, so that everyone can try it on. Maybe AI is the only thing that’s going to save our asses. Why? How do I justify that? First, let’s talk about why AI isn’t the problem.

Concerns about AI generally revolve around two main ideas: first, that it’ll be beyond our control, and second, that it’ll bootstrap its way to unspeakable power, as each generation of AI builds a smarter one to follow it.

Yes, AI we don’t understand would be beyond our control. Just like weather, or traffic, or crop failure, or printers, or any of the other unpredictable things we struggle with every day. The assumption that supposedly makes AI a different scale of threat is intent. But here’s the thing: AI wouldn’t have intent that we didn’t put into it. And intent doesn’t come from nowhere. I have yet to meet a power-hungry phone, despite the fact that we’ve made a lot of them.

Software that can be said to have intent, on the other hand, like malware, can be extremely dangerous. And malware, by some measures, is already something we can’t control. Certainly there is no one in the world who is immune to the risks of cyberattack. This despite the fact that a lot of malware is very simple.

So why do people underestimate the risks of cyberattack and overstate those of AI? It’s for the same reason that AI research is one of the hardest kinds of research in the world to do properly. The human mind is so hamstrung by assumptions about what intelligence is that we can’t think about it straight. Our brains come with ten million years of optimized wiring that forces us to make cripplingly incorrect assumptions about topics as trivial as consciousness. When it comes to assessing AI, it’s hard enough to get the damned thing working, let alone make rational statements about what it might want to do once it gets going.

This human flaw shows up dramatically in our reasoning about how AI might bootstrap itself to godhood. How is that honestly supposed to work? Intelligence is about making guesses in an uncertain universe. We screw up all the time. Of all the species on Earth, we are the ones capable of the most spectacular pratfalls.

The things that we’re worst at guessing about are the things that are at least as complicated as we are. And that’s for a really good reason. You can’t fit a model of something that requires n bits for its expression into something that only has n-1 bits. Any AI that tried to bootstrap itself would be far more likely to technologically face-plant than achieve anything. There is a very good reason that life has settled on replicating itself rather than trying to get the jump on the competition via proactive self-editing. That’s because the latter strategy is self-defeatingly stupid.
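The bit-counting argument here is just the pigeonhole principle. A few lines of Python (my own illustration, not anything from the post) make the counting explicit:

```python
from itertools import product

# A system with n bits of state has 2**n distinct configurations; a
# model squeezed into n-1 bits has only 2**(n-1) descriptions to
# offer. The counts alone rule out a faithful, lossless model.
n = 4
states = list(product((0, 1), repeat=n))
slots = list(product((0, 1), repeat=n - 1))

assert len(states) == 2 ** n       # 16 configurations to describe
assert len(slots) == 2 ** (n - 1)  # only 8 descriptions available
assert len(states) > len(slots)    # so two states must share one
```

By the pigeonhole principle, any map from states to descriptions must send two different states to the same description, so the smaller system cannot distinguish everything the larger one can do.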

In fact, the more you think about the idea of a really, really big pocket calculator suddenly acquiring both the desire and the ability to ascend into godhood, the dumber it gets. Complexity is not just a matter of scale. You have to be running the right stuff. Which is why there isn’t more life on Jupiter than there is here.

On the other hand, we, as a species, have wiped out a third of our biodiversity since 1970. We have, as I understand it, created a spike in carbon dioxide production unknown at any time in geological history. And we have built an economy predicated on the release of so much carbon that it is guaranteed to send the planet into a runaway greenhouse effect that will render it uninhabitable.

At the same time, we are no closer to ridding the world of hunger, war, poverty, disease, or any of those other things we’ve claimed to not like for an awfully long time. We have, on the other hand, put seven billion people on the planet. And we’re worried about intelligent machines? Really?

It strikes me that putting the reins of the planet into the hands of an intelligence that perhaps has a little more foresight than humanity might be the one thing that keeps us alive for the next five hundred years. Could it get out of control? Why yes. But frankly not any more than things already are.

On Piketty

When I go into my local bookshop, the first thing I see on the table in front of me is usually new fiction, or a coffee table book, or something with celebrities in it. Today, it was Capital in the Twenty-First Century, by Thomas Piketty. I love this.

It makes me feel all warm and fuzzy inside that a huge economics book is a bestseller. I don’t care that most people aren’t reading it cover to cover. I haven’t finished it yet myself. What’s great about it is that it represents a longing for a functional political ideology that does not involve handing our autonomy over to people who have already been proven to be crooked.

The gist of the book is simple. Examine data from the last two hundred years and a pattern emerges: over time, wealth accumulates in the hands of a few. Large-scale social shocks like war can reverse this, but then the trend begins again.

Of course, not everyone loves this book. There are many frowny faces from economic and financial circles. The Economist cites four major kinds of criticism of his work. While I am not an economist, I have just spent over a year at Princeton studying wealth inequality, so I would like to add my two penn’orth and address each concern in turn.

1: An antipathy to markets

Piketty is accused of being biased against markets right out of the gate. The allusion to Marx’s work in the title is, to some, a clear indicator of prejudice, and a political agenda. This response strikes me as weak. The main thrust of the book is an attempt to do proper data-driven economics. If the data reveals a flaw in our conception of markets, that’s science, not bias.

What I mean by this is, how else would you expect an economist sitting on his pile of data to present a book? If I did a bunch of research about, let’s say, car crashes, and discovered that faulty tires were to blame in almost all cases, would I title the book ‘Tires Make Us Safe’? Would I frame the book as a rousing defense of current tire technology? No, that would be cray cray.

Or perhaps the concern is that it was his bias that made him go and collect all that data in the first place? Maybe he should be more like the last Republican presidential bid, and spend more time unbiasing his polls.

2: Inexact economics

Piketty has been challenged on the specifics of his economic tools, such as his definition of return on capital, and on ignoring certain base principles of the field, such as the principle that the return on capital should fall as capital accumulates. Not being an economist, I find it harder to comment on this. However, I confess I confront this notion with profound skepticism.

Economics is the study of trade, and appears to take this activity as a social axiom. However, any study of the social behavior of animals makes it painfully clear that trade requires a parity of power to function. In nature, where one organism can take from another without cost, it does so. Yet when I look at the field of economics, any acknowledgment that trade is inevitably backed by a structure of power, and that this structure can break down, seems broadly absent. So holding Piketty to the standard of a field that doesn’t describe how power works in the first place seems a pretty weak defense.

You do not need economics for the chance accumulation of advantage in one group of agents to lead to an irreversible runaway effect. It is everywhere. Pick up a book on evolutionary dynamics. You will come across this idea very quickly. It is called ‘selection’, and it does not magically stop itself.
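That runaway dynamic is easy to demonstrate without any economics at all. Here is a minimal Pólya-urn-style sketch in Python (the model and its parameters are my own illustration, not Piketty’s):

```python
import random

def selection_run(agents=10, rounds=5000, seed=1):
    """Each round, one unit of advantage is awarded at random, with
    probability proportional to current holdings (a Polya urn)."""
    rng = random.Random(seed)
    wealth = [1.0] * agents  # everyone starts out equal
    for _ in range(rounds):
        winner = rng.choices(range(agents), weights=wealth)[0]
        # Advantage compounds: each win raises the odds of winning again.
        wealth[winner] += 1.0
    return wealth

final = selection_run()
top_share = max(final) / sum(final)
# There is no skill anywhere in this model, yet holdings drift far
# from equal, because early chance wins are self-reinforcing.
```

Run it with different seeds and a different agent usually dominates each time: the concentration is robust, but the identity of the winner is pure chance.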

3: An assumption that the future will resemble the past

There are those who point out that only at a significant remove does our current era resemble the Gilded Age. For starters, wealth is more frequently allocated via large salaries than by large inheritances.

This is also a sad comeback. Those large salaries are not actually wages. They are the result of an investment of reputation. In effect, they are social network capital, not payment for labor.

We know this because CEOs cannot logically be worth as much as they’re paid for the work they do. The tech sector provides a perfect illustration. Look at the cross section of highly paid CEOs. Some are wealthy due to business savvy, some via the theft of ideas, some from writing good algorithms, others transparently through luck. No common formula cuts across these characters that codes for outstanding leadership. Thus we must ask: if not quantifiable leadership skills, what are these people being paid for?

The answer is investment potential. Because celebrity CEOs have already succeeded, human beings are prone to allocate to them a high likelihood of future success. The idea of them is worth a lot, as a magnet for the contribution of further capital and the acceleration of sales, despite the folly that this entails. These people are retained because of the mystic aura they grant their companies.

Ironically, when engaged in repeat performances in other companies, these characters are more likely to fail than succeed. And it is precisely this noise in the system that gives people hope in creating new startups. Everyone believes they stand a chance, because, of course, they do. They know in their secret hearts that business is more chance than talent. Everyone also wants to believe, though, that after the fact their luck will be recognized as skill, and that they were really great all along. And due to our collective inability to differentiate between luck and skill, this often actually happens.

Lurking behind this criticism is another form of folly, though. Those who propose that the future is necessarily different from the past must do so in the face of data that shows an accumulation of inequality.

In previous social episodes in dozens of human civilizations, the concentration of power has terminated in war. The war is not necessarily directed at those accumulating wealth. It is often directed at those closest to the people suffering most who appear to have an advantage, often a minority group. However, the effect is usually the same: a shock to the social system that destroys enough order that society can equilibrate.

Those who look at the current accumulation of wealth and propose that it will not end in war are proposing that something else will happen instead. Not a climate disaster, mind you, but something nice. However, they don’t have any prior historical examples to support this claim. Instead, they have theories that are not backed up by data, because the data doesn’t exist yet. This is like people standing in a burning building claiming that, because their roof hasn’t caved in yet, their building is different and they should not expect a collapse. For this building, they say, they built a really tall roof on sturdy wooden ratchets so that it keeps going slowly up, and this ensures that collapse is impossible.

4: Disagreement over what should be done

There are those who agree with Piketty’s assessment, but disagree with his notion of what should be done, namely: tax the rich. For instance, one commenter I read suggested the solution was to promote growth by investing in education. Except, of course, we have seen investment in education, and we have seen growth, and neither has done a blind bit of good at the scales we’re talking about. You cannot back out inequality by trying to jolly everyone simultaneously into the middle class.

Nevertheless, I have more sympathy for this critique because, quite simply, I believe that the rich have already accumulated enough power that they will not permit themselves to be taxed. Instead, I suspect that war will do its customary job, aided in significant part by social unrest due to climate change. Witness the rise of radicalized politics in Europe as a thrilling precursor to the fun in store.

But herein lies my greatest concern about this entire scenario. The centralization of power simply happens because it can, just as water flows downhill. There is no surprise here. Watching this happen is rather like watching what happens to a slime mold investigating food sources in a petri dish. At first, there are questing threads of protoplasm everywhere. Then the mold identifies the food sources, locks onto them, and abandons the less productive parts of its structure. This is all great while the food source is present. Take the food away, though, and the mold is in trouble. It has lost all those experimental tendrils that formerly provided useful information.

In short, the centralization of power creates a society that is optimized but inflexible. This is because it has concentrated the extra capacity in the system in agents who do not actually need it. This means it is less able to resist environmental shocks. And it is this that we should be worried about, because we already know that environmental shocks are coming. A society with concentrated wealth is not only more prone to war, it is more likely to dramatically disassemble when the conditions that create wealth are disturbed.

Unfortunately, it is also normal for people to look at this kind of picture and deny it because they do not want it to be true. The irony, though, is that without early, non-destructive changes, everyone loses. And perhaps most ironically, the people who stand to lose the most are those who support the status quo, but who lack the luck, power, and ruthlessness to be the last person standing on the iceberg as it melts.

I say this because this is a pattern that readily shows up in simulations. The pattern of aiding the rich in the hope of joining their club is the one most likely to create personal catastrophe in the end-game. Witness the account in Freakonomics of why drug dealers live with their mothers, or why academia in America is imploding. Those people who participate in the rat-race dollar-auction but who are not at the absolute top of the pyramid are the ones who bear the fallout when the system topples. This is because they have made the largest unsustainable investment in it. They are, in short, the kind of people who write articles for magazines pooh-poohing books like Piketty’s. The people who actually win do not write those articles. They have little people to do it for them. The threat to these acolytes, though, comes not from the disbelieving readership, but from those very icons of finance that they seek to reinforce.

For the rest of us, the options are simple. A: Get behind a progressive tax on extreme wealth. B: Get ready for war. Not next year, or the one after, but sooner than you’d like.

I know which one I prefer.

From Crackpots to Co-Authors

In my last post, I offered up a vision of an amateur science renaissance. It was my intention to follow that up directly with some suggestions on how to do that effectively. However, in the wake of my last post, two commenters (Chris Gray and David Zink) made similar excellent points which I felt I should address first. The core of their points as I saw them was this:

It’s all very well to propose a society of amateur scientists who contribute via the web. But how do you maintain any kind of quality in the discussion? 

The answer is, I don’t know. This is a really hard problem. Any kind of society of this sort is going to be plagued by groupthink, factionalism, shouting, self-delusion, and all of the other exciting behaviors we already see on the web.

Other communities have found partial solutions to this problem, though, and I propose that we borrow from them. Here are the partial solutions I think I see.

The Litropolis Solution

A community of amateur scientists will live or die based on the quality, breadth, and precision of the feedback that people receive. To my mind, some of the best critical feedback I’ve ever received on my ideas has come through the medium of writers groups.

Some basic things I learned are these:

  • Balance positive input with constructive input, regardless of how strong or weak the work is.
  • Give input that differs from what the other people in your feedback group offer. This means that before you go into the feedback process, you should be thinking of as many different points as you can.
  • Always give examples and reasoning with your input, so that it can be understood and successfully applied.
  • Use a feature map of the set of attributes that each piece of work has, to help you assess all facets of the work. For fiction, this would include character, plot, pacing, point of view, etc.
  • Deliver feedback as you would like to receive it and leave your own emotions out.

I learned how to give good feedback at the Gotham Writers’ Workshop in New York. They did an awesome job. My skills were then honed when I had the opportunity to attend Clarion West and Milford. However, the terrific learning curve I went up came to a bumpy end when I found myself dealing directly with science fiction publishers and agents.

Despite the presence of structured writers workshops, classes, and online groups, taking the first steps into print can be a hideous process. This is in significant part because publishers are inundated with work from people who didn’t use these tools, and imagined that they could take a shortcut to literary genius. This process creates a ‘slush pile’ of unedited work. Good work usually gets drowned in it.

My friend Bob Kruger (whom I met at Clarion) and I have thought about this problem a lot since then. We came to the conclusion that what was needed was a structured community where authors could assess each other and gain credibility mutually. This would be much more than an online writers’ group. It would be a reliable pathway to professional-level attention.

To his enormous credit, Bob has gone and built this thing, and is attempting to use his own skills and background as a publisher to lever it into existence. A society for amateur science could learn a lot from how it’s shaped. Credibility in Litropolis is quantized and cumulative, and accretes through respect that comes from more than one direction.

A society for amateur scientists that used something like this would find itself ahead of the state of play in professional science. You don’t have to move far in scientific circles before you encounter frustration at how professional journals are organized. Not only are they usually expensive to publish in, but the mechanisms for peer review and community assessment have barely been updated in decades. There are people at work solving this problem, but they’re doing so from the difficult position of an entrenched set of cultural norms. A society of amateurs could do better by starting from scratch.

The Improv Solution

Nowhere have I seen the principle of embracing failure more effectively manifested than in the improv theater community. Improv requires fearlessness and a separation between a person and their work, and it routinely achieves both. This is done by training people to experiment and experience small failures frequently, and to support each other while this is happening.

In so doing, the way that the brain interprets the sensation of failure is physically recoded. People literally become failure-proof. Stage fright falls away. People stop thinking of generated content as owned by an individual creator, and start thinking about it as the product of a group. People start being happier, more secure, funnier, wiser, more open-minded, and better at actually generating ideas. I kid ye not. They just get better. They even stand straighter.

At the core of improv training are some very simple principles:

  • Deliberately build on the ideas of others to create shared ownership.
  • Refuse nothing that is added to the work, even if you then build on that input selectively.
  • Take risks and deliberately extend yourself.
  • Trust your collaborators by default.
  • Look for ways to make your collaborator look good, rather than yourself.

The results of improv training range from the beneficial to the astonishing. As a result, the World Applied Improv Conference that happens about once a year is the point of closest approach to a cooperative utopia that I have ever seen.

The catch is that improvising is cognitively expensive and so your brain will try to fall out of the habit as soon as you let it. And once you stop doing it, all those habits of fear and entrenched reasoning start coming back. Furthermore, doing improv with the same people every time eventually has a similar effect at the group level. People go past the point where they’re learning from the surprising input they get from fellow players, and start becoming entrenched as a group, while kidding themselves that they’re still getting the full benefit of the process. This, to my mind, is why improvisers haven’t already taken over the world.

Also, improv has limits. It’s way more effective when it happens in person in a room full of people who’re laughing and looking at each other. This suggests to me that our ideal amateur science society would need local chapters and sustaining events to keep the culture functioning as well as online tools for intellectual play.

The Network Science Solution

Some of the research I did over the last year was on collective decision making in people and animals, and how it’s affected by the shape of the social networks we inhabit. As a result of that research I now strongly believe that social media is still in the dark ages compared to what it could eventually be capable of. Just because the forums we have now are polluted with self-delusion and infighting does not mean that they always will be.

Certainly we will not be able to rewrite the human tendency to jockey for social position or to prefer our own ideas to those of others, but by choosing how we enable people to come into contact with each other, those habits can be put to constructive use.

As starting suggestions for how this could be done, let us consider a hypothetical online science community tool that builds its network carefully. Simple things we could try would include:

Capping the number of links or citations that an individual piece of work can receive. Part of what makes science broken is that the citation system has gone from being a roadmap for understanding a field to a kind of payment system for scientific credibility. The number of citations received has become a measure of success. Consequently, everyone loses. Scientific fields become harder to navigate. Creating each bibliography becomes agony. And incentives appear for senior scientists to force more junior ones to cite their work.

Connecting scientists with critics from other parts of the social graph. Groupthink happens when small groups of people create echo chambers where they can hear their own opinions reinforced. By building a non-local linking system into our service, we could dilute this effect.
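To see why a handful of non-local links matters, consider a toy social graph: a ring where everyone knows only their near neighbors, plus a few deliberately long-range connections. The sketch below (pure Python; the names and parameters are my own) measures how far an opinion has to travel on average in each case:

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in range(len(adj)):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring(n, k=2):
    """Ring lattice: each node links to its k nearest neighbors on each side."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for step in range(1, k + 1):
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    return adj

local = ring(60)  # purely local ties: an echo-chamber-friendly layout
mixed = ring(60)
for i in range(0, 30, 6):  # five long-range 'critic' links across the ring
    mixed[i].add((i + 30) % 60)
    mixed[(i + 30) % 60].add(i)

local_avg = avg_path_length(local)
mixed_avg = avg_path_length(mixed)
# The few non-local links noticeably shorten the average distance between
# members, so outside opinions reach any given cluster in fewer hops.
```

This is the classic small-world effect: almost all ties stay local, but a handful of shortcuts shrinks the distances across the whole network, which is exactly the property a non-local linking system would exploit.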

Quantifying credibility, and then hiding the credibility scores of individuals who put work into the forum for criticism. One of the most depressing science stories I ever heard was about string theory. It was about a growing tendency for quantum gravity theorists, when trying to assess an idea, to ask ‘what does Ed Witten think?’ before giving an opinion. (Ed Witten is a string theorist so successful that he casts a shadow across the entire field.) Deferring to the opinions of those more prestigious than oneself is a habit that has no place in science. It is a holdover from religious thinking, and one to which people are lamentably prone. By partially anonymizing input, we would allow ideas and feedback to stand for themselves, further strengthening the split between the contributors and their work.

In the end, a sustainable amateur science community, to my mind, will need to involve elements of all of these sources, as well as making up some new tools of its own. So maybe the first job of this new society will be to work out a set of functional ground-rules for its own operation.

Along with that will need to be a shared understanding of what doing amateur science actually means and what it looks like. As I alluded to at the start of this post, I have some opinions on that. Next time, I’ll share them with you.

Amateur Scientists

Yesterday, I started a short series of posts about my conclusions from a year and a half of professional academic research. I outlined what I believe is a growing crisis in science. Today, I want to talk about what we can do about it. My point today is this:

The world desperately needs amateur scientists.

Because of what’s happened to science in the mainstream, science that’s outside of it is more important than ever. While doing science professionally, I was astonished at how little time I spent pursuing new ideas compared to what I was used to as an amateur. This wasn’t down to any kind of error in direction or a problem with the mentorship I received. Quite the reverse. The lack of innovation was down to unavoidable elements of the culture of academia. The mentoring I received helped me navigate that culture. Without it, I would have been lost.

What I spent most of my time doing instead of innovating was checking my work to make sure that it was perfect, and that all possible bases were covered in order to make sure the work was of publishable quality. I spent months of my time trying to compensate for a huge number of hypothetical yes-buts that people might employ to try to invalidate my research. Scientists do this because in the world they inhabit, they need to. Anything less than this is a recipe for career-fail. Getting something wrong makes you look bad.

However, in a very important abstract sense, this is the wrong way to do science. That’s because no one can ever adequately check their own work. That’s what the community should be for. But a point has been reached in most areas of science where there are huge career incentives to make it look as if each piece of research that you deliver is correct, even though this goal is unattainable.

As Karl Popper pointed out years ago, science advances through refutation. You can never know what’s true, only what’s false. Thus, the most effective way for science to move forward is for people to make bold, falsifiable conjectures.

What this means in practical terms is that kudos in science should attach to the act of checking someone else’s ideas, and to the creation of falsifiable conjectures, not to their correctness. In short, science would be far better off adopting a hard-earned lesson from the tech industry: embrace failure. Wear each failure as a badge of pride.

However, because the scientific culture we have puts a premium on ownership of an intellectual landscape, and on apparently uncontestable expertise, we deliver science in units of apparent advancement instead.

This is slowly creating a nightmare of confirmation bias in today’s scientific community. With the addition of computer simulations and powerful statistical tools, the point at which you find yourself looking at what you want to see has become extremely easy to reach. All the repetitive checking and testing does nothing whatsoever to alleviate this, because the tests that people run are, by and large, those that occur to them after the process of bias has already taken place.

This is not to say that all science done today is wrong. Far from it. Plenty of good work gets done. But that is despite the system we have set up, rather than because of it. Science is so hard to do as a career precisely because we’ve gone and made it that way.

There are those who will tell you that science has to look the way it currently does. They will tell you that, because there is so much research being done, it is the responsibility of scientists to check their own work as thoroughly as possible before publishing it. This stance, however, misses the point.

Certainly, keeping up to date on publications (such as arXiv preprints) in many scientific fields these days can be exhausting. But this is because, A: it creates an inferiority complex in young scientists struggling to compete, and B: the work that’s published is usually so cautious and glutinous in its presentation that reading even one paper can take hours, let alone the dozen that might be posted on a given day. I have yet to meet a scientist of my generation who reads professional science papers for fun.

In a better world, papers would be written like tutorials, and students would scour arXiv every day as a goldmine of refutation opportunities. However, even with the best will in the world, it will take years, or some exceedingly painful crisis, or both, for the world of professional research to change. To my mind, then, the role of the amateur scientist in today’s world is to do what professionals can seldom afford to do any more: play.

If human knowledge is going to advance fast enough for us to solve the problems we create for ourselves, there need to be people who can try on offbeat ideas without fear. Amateur scientists need to be deliberately operating outside of the mainstream, and ignoring prestige.

This is a key point. Via my work on digital physics, I’ve spoken to a lot of smart, well-intentioned people with interesting ideas who are what people in the physics community often refer to as ‘crackpots’. What this really means is that they’re untrained people who’re giving an idea they care about their best shot. The problem that most of them share is that they’re trying urgently to communicate their reasoning to professionals.

Usually, the crackpot/amateur-scientist narrative goes like this:

I think I’ve found something really important. It looks to me like this could be the basis for a Theory of Everything. Now I just need to talk to a physicist to work out the math and communicate it to the world.

Physicists usually hate this because:

1: They have spent the last n years of their life trying to absorb a highly complex and nuanced way of looking at the world. When someone comes along talking a completely different language, it’s somewhere between annoying and outright painful to then adapt to comprehend someone’s new esoteric frame of reference, particularly when they start off with no proof that there is something concrete behind it.

2: They have so little time to push forward on their own work that they’re desperately keen to pursue their own ideas, not someone else’s. This is because the culture they inhabit tells them that this is what counts, and will either make or break their already fragile career.

3: They’re fighting for every inch of their credibility. The last thing they need is to try attaching themselves to someone with no credibility, and without any formal training in their field. For the professional scientist, it almost never looks like there’s anything to gain.

What this means is that the lucky crackpot/amateurs in digital physics receive only disdain when they reach out to physicists. The unlucky ones receive a sequence of uncomfortable, pseudo-supportive brush-offs that send mixed signals and can last for years, crushing hope and seeding bewilderment.

But look at what causes this problem. These amateurs are chasing what they think science is about: sudden flashes of insight that transform how we think about the world and which reveal to the world the genius of their creator. Amateurs are chasing this dream because they see the professionals do it. The irony is that the professionals should know better.

The answer to this social malaise is simple. Amateur scientists should seek out each other instead and engage in peer review, rather than expecting professionals to help. This is better for everyone. And it makes sense, because, outside of critical input that could equally well be delivered by other amateurs, the best that a professional can offer in support is their credibility and prestige. And frankly, outside of the academic job market, these commodities aren’t that useful. An amateur’s ideas are surely at their strongest when there are enough people already excited about them that professional attention becomes inevitable.

What I’d like to see is a body of amateur scientists helping each other, refuting each other’s work, where possible, and using the internet as their greatest tool and forum. We are in a better position to do this now than at any point in human history. With services like Coursera providing a free scientific and technical education, and the web providing countless platforms to be heard, we are ready for a new amateur science renaissance.

But realistically how, you may ask? Surely scientific literacy takes years. Isn’t the frontier of human knowledge a huge distance away from most people’s understanding of how the world works? Hasn’t all the easy stuff already been exhaustively explored? Shouldn’t we just leave it up to the professionals?

Hell no. The edge of human knowledge is next door and it’s being very inadequately searched. And everyone who can cut code, think critically, and who cares, should be helping to fix that. In the next post, I’ll say more about how.

About Research

It’s been a weird year. I’ve been closer to professional academic research than I have in ages, doing complex systems work at Princeton University on everything from social inequality to abiogenesis. For the most part, I had a great time. However, now that this experience is winding down, there are some important things I feel I should say:

1: Professional science is broken.

2: The world desperately needs amateur scientists, now more than ever.

3: The frontier of human knowledge is an incredibly short walk away, and anyone who cares should be visiting it.  

On the face of it, these remarks might seem somewhat negative and/or counterintuitive, so let me explain, one point at a time.

First: professional science is broken.

While doing work at Princeton I was supported by the most amazing people. All of them were smart. All of them were kind. They were incredibly welcoming to someone who’s been doing science at random in a non-academic context for the last twenty years. I have phenomenal respect for all of them. I cannot say enough positive things about the support I received. Princeton, too, was a surprisingly functional, rational, and inclusive institution. Without exception, every department I encountered was wisely and compassionately run, and managed at a superb level of professionalism. In short, they were awesome. So what’s broken, you may ask?

The system that academics inhabit is broken. Academic career tracks are broken. The funding situation is broken. The compromises that people have to make to stay in research are broken. The way that people have to publish research in order to be considered professional is broken. The workload for junior faculty is broken. The administrative load that senior faculty have to manage is broken. And it is for all these reasons and more that my wife and I are leaving.

I don’t have room here to go into all the subtle details of why I believe there’s a profound crisis taking place in science in the western world. There is plenty on the web that captures parts of this story, for instance, here, here and here. In subsequent posts I’ll try to drill down and explain what I learned. But the main takeaway for this post is as follows:

Much of professional science has reached a point where engaging in the process of risky intellectual exploration is untenable for people who want to retain their job and their credibility.

This is because this career track is now so tenuous, and so over-competitive, that it forces a strategy of extreme caution onto the very people who should be taking the biggest intellectual risks.

Being a scientist is a bit like being in the military. You join up and undergo a rigorous, often demoralizing training process. (Unlike in the military, this process can last anywhere from three to eight years.) At the end of that, you’re effectively posted somewhere. You up and leave your home, and start in your new position, which usually lasts three years (five if you’re incredibly lucky). Then, after that, you’re usually posted somewhere else, also for three years. Then, after that, you’re posted somewhere else again. This time, you get to stay for five years, but you often have to work 60 to 80 hour weeks busting a gut to establish yourself. Then, in many cases, you move again. By now, most people are about 16 years deep into this career track and counting.

God forbid that at any point during this process you become attached to a place, or furniture, or a hobby, or a family member, or a spouse. With each wrenching shift, these people/things either come with you, or you lose them for good. And along the way, if they come, they get a little bit smashed up. That’s because when you land in your new post, there’s room for you in the department, but usually no room for whoever or whatever came with you.

What really differentiates science from the army, though, is that in science, you’re supposed to own this process of transition. What I mean by this is that it’s supposed to be your idea. You’re supposed to feel lucky that you get such an awesome job. You’re supposed to be delighted at your good fortune. The military serviceperson gets to treat their constant relocation as a necessary evil that comes upon them through their duty. The scientist is just supposed to like it. Concerned about the stability of your income stream? You’re not supposed to care. It’s supposed to be beneath you. Want a decent daycare situation for your kids? That’s supposed to be a lower priority.

The number of people for whom this is actually true is vanishingly small. However, you wouldn’t guess it, because most of the people in this line of work have had to convince themselves that they’re one of that tiny minority. Often, they stick with this belief right up until they get sick, or unlucky, or depressed, or just simply forced out in the middle of their lives, and expected to abandon their dreams and retrain.

My wife and I were lucky. We left from a position of strength, with the door wide open for us to do more. But not everywhere in scienceland is as good as Princeton.

My point is that many, many scientists are clinging desperately to the mast of their personal ship, tossed in a battering career storm that they do not deserve. And these are the people we generally expect to take intellectual risks, to serve as guardians of truth, and to push forward the boundaries of human knowledge. And that’s bullshit.

Next, I’ll cover point two.

I Don’t Get Monotheism

In the wake of the Bill Nye/Ken Ham pseudodebate that happened last week, there’s been a wave of discussion in webland. Specifically, a set of questions asked by Creationists have been doing the rounds along with a variety of attendant replies.

This whole piece of public dialog made me feel awkward. And it reminded me that there are elements of the monotheistic worldview that I just simply don’t get. Most notably, the things that people seem to like about monotheistic religions appear to be functionally excluded by the very features of the belief systems that people maintain. Here are some of my main problems.

1: How can a religious belief system that includes heaven and hell ever be moral?

If people take actions based on a payoff system that rewards or punishes them after they die, then how can they make choices based on whether they’re actually the right thing to do? At the very least, the presence of an afterlife payoff muddies the moral waters. At worst, it completely invalidates the value of human decisions. Good behavior can’t be its own reward if someone’s going to slip you a metaphysical fifty bucks for making up your mind one way or the other. That’s accepting a bribe, not being moral.

The fact that an old book purports to tell you what’s moral so that you don’t have to worry doesn’t seem to me to help much. After all, if a person makes a choice by recourse to looking the answer up in a book, doesn’t that also invalidate the moral process? That’s not moral reasoning, that’s machine-like reasoning. Moral reasoning requires introspection and moral courage. And courage doesn’t come with a cheat-sheet.

2: How can an omniscient, omnipotent god be anything but a mindless machine?

Thinking doesn’t happen in a vacuum. Thinking is something you do when you’re trying to make choices or plan actions. An omniscient entity knows all outcomes and has nothing to plan. So how can it possibly think? For such an entity, there simply can’t be anything worth thinking. An all-knowing god is therefore, by definition, an unthinking god.

One might retort that the kind of thinking that this god is doing is different from the human kind. That it’s somehow ‘infinite’ and ‘unknowable’. But what is it then? What is it for? It’s functionally so different from actual thinking that it strips all meaning out of the word.

3: How can life in a universe that has an all-powerful creator be anything other than meaningless?

For a god who has the option to set the universe up any way he likes, and do it over again whenever he wants, all choices are arbitrary. There is no useful end state that’s worth running a universe to obtain, because that same state could just be arrived at without effort any number of times. This means that a universe with an all-powerful creator is, by necessity, a pointless one.

One might argue that this god didn’t set the universe up for himself, but for us, somehow. But this doesn’t make sense either. Why bother? Why not just skip to the end state and create humanity having learned its lessons already? If human enlightenment is the goal, then anything other than skipping to that state is a deliberate waste of effort. Unless, of course, human pain and confusion is the goal, in which case you’re talking about another kind of god completely.

One might try to argue here that jumping to the rapture without us having actually learned the lesson for ourselves wouldn’t be the same. But that’s self-contradictory. It presumes that the omnipotent god isn’t omnipotent, otherwise these two outcomes would surely not be different.

Recourse to the notion that we have a god that can’t be understood and so shouldn’t be challenged doesn’t get us anywhere either. Are we supposed to just accept that the universe is somehow golden and ordered because of information we can never have? That’s like being told to imagine that you ate a cake, rather than getting to eat one, because someone with better tastebuds than you is eating it for you. It’s only satisfying so long as you don’t hunger for any actual cake, or in this case, meaning.

4: How can you possibly have a meaningful life without an existential void?

Science seems to suggest that the universe is a careless, bleak place in which life is fragile and impossibly delicate. The universe doesn’t care a whit whether we live or die. It is unspeakably vast and mostly empty. And this, to my mind, is what makes life beautiful and important. Because it’s special. Because it’s a fluke. Because wrong choices can end it. Without life, and human endeavor in the face of impossible odds, there is no light in the universe. No striving. No purpose.

To me, every single human second, and every single choice, counts. To me life is beautiful precisely because it’s doomed and because there are no easy answers. We are the only candle burning in the dark that we know of. And that makes us so poignantly important that it is impossible for me to express. Is the idea of a guaranteed win for the good guys better than that? Does that make life more special? How can it?

One of the Creationists who posed questions for Atheists asked, “How do you explain a sunset if there is no God?” I assume that they mean, how do you account for something so beautiful and precious existing in the world without somebody putting it there for us to look at.  My reply is that a sunset is beautiful precisely because there is no god.

The human interpretation of a sunset is the product of a perceptual system that’s designed to pick up a myriad of environmental cues, and which can use those cues to reliably weigh the relative merits of different environments. And understanding that tells us something about what sunsets truly are. They’re delicate, subjective things that aren’t the same anywhere else in the universe. Sunsets require that you look after the atmosphere. Sunsets require that you do not smother the horizon in concrete towers. Sunsets require that you do not block the sight of your people with prison walls.

And the more science we know, the more we realize just how special and precious those sunsets are.  A prebuilt sunset, fabricated like a piece of Ikea furniture by a casual creator, seems, by comparison, a worthless thing.

In a nutshell, I don’t see how the promise of automatic certainty and canned moral answers can help at all in appreciating the beauty of life. It seems to me that they’re tools for enabling people who experience persistent pain or fear in their lives to avoid beauty, because of the costs that looking at it honestly entails. I don’t begrudge them that hunger. We all reach for easy answers from time to time. But that doesn’t make it moral, or courageous, or right.

Guns

Last year, not long after I became a dad and moved to New Jersey, I watched the news of the Sandy Hook shooting play itself out online. I found myself oddly moved by this event. Having a kid of my own had seemingly turned on some kind of circuit in my brain that made me feel a kind of proximity to events like these that I never had before.

Following the event, I did something fairly rare for me: I spent a bunch of time on Facebook. Specifically, I found myself reading over discussions on friends’ pages that involved some amount of debate or dialog between those people opposed to gun rights in the US, and those in favor.

As a Brit relocated to the US, I couldn’t really understand the enthusiasm around gun ownership. I didn’t want to be angry with the people promoting guns. I didn’t want to berate them or challenge them. I wanted to understand them. I still want to understand. I still really don’t get it.

I understand that people are concerned about personal freedom. But shouldn’t those people most concerned about freedom be against gun ownership? After all, more guns means less freedom. The one thing about a gun is that when somebody owns one, they can point it at you and make you do things. That’s what they’re for. It’s either that or killing people.

So I simply can’t see how having more guns can possibly make more freedom. By definition, every gun that you add to society makes the amount of personal freedom decrease.

I believe that the popular notion is that if both people in a problematic situation have a gun, then the power of one person to control the other is zeroed out. But this requires that those people reach for those guns at the same time. It seems to me that we’ve known since Wild West shootouts that that doesn’t happen. Instead, someone has a gun first and the other person reaching for their gun gets shot.

And then there’s the notion of personal defense. A person with a gun can hypothetically defend themselves more effectively. But that requires that the person doing the defending bring their gun out before there’s an unambiguous threat. Which means they have to assess the potential threat correctly one hundred percent of the time. Which means there are going to be mistakes and that some innocent people will die.

And finally, there’s the notion of the right to bear arms to protect oneself from one’s government. But the US government can listen in on all phone conversations, can see postage stamps from space, and has more nuclear weapons than anyone else on Earth. You can’t defend against that with a handgun. In fact, you can’t defend against that at all.

What I saw on Facebook was a lot of people on both sides of the debate feeling angry and entrenched, and a lot of dialog not going anywhere. That seems counterproductive to me. And at the same time I do have an opinion. What’s a dual citizen to do?


Ray Kurzweil Revisited/Reanimated

Yesterday, I received the most interesting and cogent response yet to my Tinker Point idea. It came from my friend Rob, who is himself a biologist. For those who don’t follow the comments on my posts, here is his comment in its entirety, because I like it. My thoughts on his remarks follow.

Interesting ideas here but the evolutionary reasoning is a little false. There are already examples in nature of convergent individuals that are very successful. So successful in fact that they have all but given up individuality for group success. The family Physaliidae is the best example of a society that maintains both a division of labor as well as an absolute requirement of cooperation.

But higher eusocial organisms also fit the bill, such as hymenopteran and isopteran societies that grow into vast numbers of individuals. Even mammals have eusocial organisms in the naked mole rats of East Africa.

I would not be so quick to imagine an evolutionary disadvantage to being susceptible to mental manipulation. Organisms that “allow” themselves to be domesticated do very well, much better than their wild type counterparts. Domestic chickens have gone global and outshine jungle fowl to a ridiculous extent. All horses are now domesticated. Dogs, corn, wheat, sheep, pigs, all have allowed themselves to be under profound control and in so doing have far outstripped their wild ancestors in evolutionary success.

I think what your argument does cover is that there are some very dark solutions to a future that includes a Kurzweil Singularity. If you mean that we may lose something of our individuality, or our humanity, then you are right to hold that fear. But game theory indicates not that individuality will be maintained, but rather that choices will be made that enhance a payoff. And every payoff is not the same to everyone in the game. So if something, such as a collective creature of electronic, biological, or amalgam, gets an advantage over a bunch of individuals then game theory and evolution both predict the payoff goes to that “something.” And by the way, evolution doesn’t care if an advantage is short term, it only cares about who wins right now. That’s why we still have global warming, and that’s why Kurzweil is correct.

These are great points. I agree with most of them.  Rob’s science is solid. And yet I still think Ray Kurzweil is wrong, and that my evolutionary perspective holds up. Why? Because the Tinker Point and the idea of collaborative human aggregation are not at odds with each other.

The idea of the Tinker Point is that no intelligent species gets past the point where it’s easier to tinker with themselves than to not. It says nothing about the pattern of cooperation under which that limit is reached.

I think Rob’s take here is that a cooperative society in which humans lose a little humanity in order to bind together into a hyper-intelligent whole is one in which humanity is suddenly on the same page, and thus the incentive to destructively tinker goes away. And while this might not look nice to individual human beings, it still constitutes a singularity. But I don’t think this interpretation holds water.

In nature, patterns of cooperation are always vulnerable to defection.  This is true at every scale, from transposons operating within individual cells, up to bankers cheating entire national economies for profit.

Furthermore, the force that keeps defection in check and enables cooperation to persist is that of group selection. Our cells don’t collapse in revolt every day because cells that revolt lead to non-viable organisms. We aren’t instantly riddled with cancer the moment we’re born because creatures whose cells don’t cooperate never get as far as reproducing. At every level at which agents cooperatively aggregate, there has to be something that the aggregates are competing against in order for the cooperation to be maintained. Bind the entire species into a single hive-mind and your incentive has gone away. Massive social collapse at that point is eventually inevitable.
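This multilevel logic is easy to see in a toy model. Here is a minimal Python sketch (the model and all its numbers are my own invention, purely for illustration): each group is just a count of cooperators; inside any mixed group, defectors out-earn cooperators, so one cooperator converts per generation; when between-group competition is switched on, the most cooperative group also replaces the least cooperative one each generation.

```python
def run(n_groups=10, group_size=10, generations=10, group_selection=True):
    """Toy multilevel-selection model.

    Each group is a count of cooperators. Within any mixed group,
    defectors out-earn cooperators, so defection spreads. With
    group_selection on, cooperative groups out-reproduce and replace
    the least cooperative group each generation.
    Returns the final fraction of cooperators in the population.
    """
    # Group i starts with i defectors, so group 0 is all-cooperator.
    groups = [group_size - i for i in range(n_groups)]
    for _ in range(generations):
        # Within-group selection: mixed groups lose one cooperator.
        groups = [c - 1 if 0 < c < group_size else c for c in groups]
        if group_selection:
            # Between-group selection: best group copies over the worst.
            groups[groups.index(min(groups))] = max(groups)
    return sum(groups) / (n_groups * group_size)
```

With between-group competition switched on, the all-cooperator group spreads until cooperation is universal; switch it off, as in the hive-mind case where nothing is left to compete against, and every group containing even one defector collapses into pure defection.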

Now, it’s fair to say that the Singularity as Kurzweil defines it doesn’t necessitate a single hive-mind. Here’s a definition from Wikipedia:

The technological singularity, or simply the singularity, is a theoretical point in time when human technology (and, particularly, technological intelligence) will have so rapidly progressed that, ultimately, a greater-than-human intelligence will emerge, which will “radically change human civilization, and perhaps even human nature itself.”

So a benign Singularity might entail the emergence of a set of collective intelligences, all competing with each other. But this outcome is also suspect. How is this convenient outcome supposed to happen? And in this world of competing hive-minds, what role does intelligence actually play?

If intelligence isn’t critical to their competition, we would expect these hive-minds to end up stupid as they optimized to improve fitness, thus conserving the tinker point. If intelligence is critical to their competition, then the mechanisms by which those hive-minds operate are still subject to manipulation, and therefore degradation, and therefore overthrow.

In this case the tinker point problem looks just the way it does for individual humans. These hive-minds are still likely to make mistakes and fuck each other up, just like people, until that’s not an option any more.

My favorite speculative future following from this idea comes from the notion that when a new evolutionary niche becomes available, the ‘first to market’ often dominates. I wouldn’t be surprised if, when mental uploading becomes available, that Ray K is first in line to try it out. And once uploaded, of course, he’d rapidly realize that reproducing himself a trillion times and bolting the door behind him would make for a far more stable world than leaving the uplift process open to others. Hyper-intelligence would be unlikely to leave much room for naive idealism.

Hence, if we do end up with future hive-minds at war with each other over the remnants of human civilization, we shouldn’t be surprised if they all have Ray K’s face.


Pennsylvania

Pennsylvania: the WTF state.

This weekend I went, somewhat randomly, to the town of Bellefonte in Pennsylvania. It’s lovely there. There are lots of nineteenth century buildings that have been well looked after. There’s an attractive park. The food is good. The people are friendly. The town nestles in a long, linear valley full of farms and open land, and is overlooked by green, attractive, tree-covered ridges in two directions. What’s not to like?

It’s also quiet. So quiet, in fact, that you can’t help but wonder how the town stays so nice. We took some long drives in the land surrounding the town, and found that to be attractive too. The landscape is so tidy around there that it feels almost Swiss. Small, perfectly maintained villages dot the landscape, each so small that they don’t contain a single shop. Families putter from place to place in Amish buggies. Everywhere there is a sense that the hand of time has been magically held back, and along with it, the pressure to fill the place with strip-malls, agribusiness and the staggering quantity of crap that clots the north east corridor.

Everywhere, that is, except State College. But while State College feels up-to-date, it’s fairly nice too. Sure, there is a fairly rabid football flavor to the place. And sure, the town is still licking its wounds from the whole Sandusky scandal thing. But nevertheless, it’s a pleasant place to spend a few hours. And it, too, seems to be buoyed up by a kind of surreal forcefield of prosperity.

So I thought: okay, Pennsylvania is a ‘Nice Place’. It’s a rural sort of state with a magic-based economy that happens to have Philadelphia at one end of it. (By this point you can probably guess that prior to this weekend, my overall knowledge of Pennsylvania was fairly low.) So rather than taking interstates home, we decided to drive across country.

Mistake. Or, at least, a mistake when you have a one-year old in the back of the car who wants to make frequent stops to race around in the grass and eat gravel.

At first, driving seemed like a great choice. There was a seemingly endless supply of pretty, prosperous little towns and meticulous farms. Then we crossed a river and hit ‘Coal Country’.

Holy fucking christ on a bike. Talk about culture change. In a matter of a few miles, you go from magically wealthy to magically fucked up. You first really realize this when you hit Shamokin, a town so desolate that if desolation were something you could bottle and sell, they’d be millionaires before they actually ran out of desolation.

Shamokin’s number one claim to fame is that it’s situated next to the world’s largest manmade mountain. By which they mean a crap-heap of mine excreta beside the town so big that it matches the surrounding hillscape. Except that trees can’t grow on it properly because it’s made of crap, so they all sort of lean over and look sickly. And it’s then that you realize that selling desolation for money was exactly what they were doing until the 1970s, when the mine there was shut down.

Before Shamokin, we imagined that we’d stop in the next town to get my son ice-cream or something. We did not stop. One look at the extremely sad downtown convinced us that we were not going to have our baby playing around on the median strip eating clinker and playing with crack needles. Maybe the next town, we thought. Or the next one. Or the next one. Etc.

Maybe this sounds dreadful. Maybe I’m revealing some kind of horrific middle-class bias by admitting this. But my point is not to run down Shamokin or anyone who lives there. My main point is this: what on Earth is going on in Pennsylvania that there are such weird discrepancies of prosperity, in towns that really don’t have that much difference between them?

Coal country stretches for miles and miles. And it’s incredibly grim.

So what I’m left with at the end of the weekend is confusion. How does this place work? What stops the people in Shamokin from moving west about thirty miles and starting over? How does everyone in Amish country keep their business afloat? Why doesn’t the state have some kind of tax code that inspires revitalization? After all, a lot of those doomed little towns could be quite pretty with the right attention.

At the very least, you could try convincing hipsters to move to coal country and make the place ironically awful. Sort of like the Pabst Blue Ribbon of real estate.

In any case, I’m left confused. To my mind, Pennsylvania isn’t one state. It’s at least two. Or maybe just one but run by evil wizards. Can anyone help me out here?


Why Ray Kurzweil is Wrong

The brilliant and entertaining Ray Kurzweil is in the media a lot of late, talking about his new book, which I’m currently enjoying: How to Create a Mind. As well as promoting his book, he’s also promoting his core idea, a concept with which he’s become synonymous: that by around 2040 the rate of technological change in the world will have become so great that we will have reached a ‘singularity’.

In other words, we will have machine intelligence, and/or augmented human intelligence, and/or something else we haven’t even built yet, such that we will ascend toward a kind of godhood. Problems like finite lifespans won’t bother us any more because we’ll be uploaded into machines. Issues like the climate or world poverty will become irrelevant, as our ability to engineer solutions will become so powerful that such problems simply evaporate.

To support his argument, he provides a wealth of data that shows that human development follows an exponential growth curve. Whether you’re looking at the pace of change, or the amount of computation a single person can do, or the speed with which human beings can communicate, it all pretty much follows the same pattern. And he’s right. We are on an exponential curve and crazy things are going to happen in our lifetime. However, the kind of things he’s predicting are all wrong, and, though I’ve touched on this topic before on this blog, I’m going to have another crack at explaining why.

First though, a little on why you should take Ray’s arguments seriously. Ask most people why they doubt that we’ll ascend to godhood within the next 30 years, and they’ll tell you that it sounds like science fiction–that the change is much too fast. But as Ray points out, people are terrible about anticipating exponential growth curves. If you had told a person in 1975 that by 2012 you’d have access to just about all the knowledge on Earth via a device you could hold in your hand, they’d probably have doubted you. Or take climate change. The world is still full of people who doubt that it’s happening, while at the same time, the rate of change is outstripping all our predictions. When it comes to exponential growth curves, I think Ray has the answer dead right.
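The arithmetic behind that intuition gap is worth making concrete. Here is a quick sketch (the two-year doubling period is an illustrative stand-in for Moore’s-law-style growth, not a figure from Kurzweil’s data):

```python
def exponential_growth(start, doubling_period_years, years):
    """Project a quantity that doubles every `doubling_period_years`."""
    return start * 2 ** (years / doubling_period_years)

# Linear intuition says 37 years of steady two-year steps gives roughly
# a 19-fold change. Exponential doubling gives something else entirely:
factor = exponential_growth(1, 2, 37)  # roughly a 370,000-fold change
```

That four-orders-of-magnitude gap between the linear guess and the exponential reality is exactly why the 1975 observer would have dismissed the hand-held-library prediction.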

So if Ray’s predictions have so much weight, why should we doubt him?

Because of game theory. Because of the Tragedy of the Commons. And because the mind is a commons in which tragedies can happen, just like everything else.

To explain what I mean, consider this: intelligence doesn’t happen in a vacuum. It happens in brains, and brains have the shape they do for a reason. Specifically, brains are packed with mechanisms to make it really hard for you to quit bad habits. And those mechanisms are there because they’re the same parts that make it hard for someone, or something, to take control of your behavior.

Go to the business section of your local bookshop, and you will find the shelves packed with books on negotiation, sales, leadership, etc. All ways to try to get other people to do things. Ask anyone in sales and marketing how hard it can be to gain new customers, and they’ll tell you. It’s bloody hard.

Your brain is packed with checks and balances to make your behavior almost impossible to change simply because, if it weren’t, you’d probably be dead by now. People who are easy to coerce get coerced. That’s why we have brains in the first place. To get people, or things, or animals, to do the things we want by outsmarting them. That’s why a third of the world’s biodiversity has disappeared since 1970. Because we’ve outsmarted all the other species on the planet, to their cost.

We don’t notice this part of our brains because it’s broadly counter-productive to believe that your every attempt at personal change is hamstrung right out of the gate. Similarly, believing that your new business venture is probably doomed because it’s going to be horribly hard to get people to notice what you’re doing doesn’t select for winners. There is a mounting body of evidence to suggest that people are designed to be optimistic, just like they’re designed to be stubborn.

But just because we don’t see that part of our minds doesn’t mean it’s not there. And because intelligence needs stuff to run on, just like a computer program, you take a risk every time you start fiddling around with that stuff. Muck around with the operating system on your laptop and sooner or later something bad happens, even if you gain a short-term boost in what your machine can do. When the stuff that people run on becomes an operating system that people can muck with, everyone takes a risk.

Which is not to say that we’re all going to die, or that we’ll all become lobotomized zombie robots in the thrall of a mad professor somewhere. Rather, it’s just that once you reach the point where it’s at least as easy to muck with the stuff that you’re made of as it is to leave it alone, you’re in trouble.

Not necessarily fatal trouble. After all, you can take a reformed heroin addict who’s short-circuited his brain and recovered, put a big pile of heroin in front of him, and likely as not, he won’t take it. He’ll refuse it with pride. But it’ll be work. Once you have found the way to game a system, not gaming that system becomes an effort, and that’s true regardless of how smart you get, because the problem scales with your intelligence.

Let me say that again, to make sure that it’s clear. There is no level of intelligence at which it suddenly becomes easy to cooperate or avoid making mistakes. Sure, the risks of gaming the system become easier to see, but so does the number of ways to game it. And the more sophisticated your technology is, the more ways there are for it to screw up.

There is a proof in computer science, Turing’s halting problem, that you cannot build a computer program that can tell how all possible computer programs will behave. This proof applies just as well to people, and essentially tells us that no matter how smart you get, you won’t be able to outguess or predict someone, or something, as complicated as you are.
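The proof is a simple diagonalization. Here’s a minimal sketch in Python, assuming a hypothetical `halts` oracle (no real implementation of it can exist, which is the whole point):

```python
def halts(program, arg):
    """Hypothetical oracle: True if program(arg) eventually halts.

    Turing's argument shows no such function can actually be written,
    so this placeholder just refuses to answer.
    """
    raise NotImplementedError("no general halting oracle can exist")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:      # oracle says it halts, so loop forever
            pass
    else:
        return           # oracle says it loops, so halt immediately

# Feeding contrarian to itself produces the contradiction:
# if halts(contrarian, contrarian) returned True, contrarian would loop forever;
# if it returned False, contrarian would halt. Either way the oracle is wrong,
# so no correct oracle can exist.
```

The same diagonal move works on anything, human or machine, that can simulate reasoning about itself, which is why the essay’s claim about minds as complicated as your own follows from it.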

This rule holds no matter which path to techno-trouble you want to pick. You can choose genetic modification, brain enhancements, intelligent machines, nanobots, or any other nifty technology you like. Once the technology table is covered with loaded weapons, sooner or later, someone is going to pick one up and have an accident. Whether it’s someone trying to convince everyone to buy their shampoo by making it addictive, or asking people to receive a minor genetic change before they join a company, or trawling for cat photos on the internet with a program that adapts itself, it’s going to happen sooner or later. It’s not one slippery slope. It’s a slippery slope in every direction that you look.

My guess is that we’ll recover from whatever accident we have. However, we’ll then nudge back towards the table of trouble, and have another accident. And then another, and so on, until we basically run out of planet to mess about with. To my mind, this is why we don’t see signs of intelligent life out in space. It’s because nobody gets past this point: the Tinker Point. It’s like an information-theoretic glass ceiling.

So is there anything we can do about this?

Yes. Of course. The best technological bets are space travel and suspended animation. The more people are spread out in time and space, the less likely it is that any one accident will be devastating. We’ll be able to keep playing the same game for a long, long time.

However, the fact that there’s no evidence that anyone else in the universe has summoned up the gumption to pull off this trick isn’t exactly comforting. As a species, we should get our skates on. By Ray’s estimate, we have about thirty years.