A first try at playing the Big Long Game
The Big Long Game is pretty big: everyone’s life, the future of the planet, the future of the solar system, and winning is the simple problem of humanity surviving and prospering. Forever. It certainly sounds ambitious, but honestly I still don’t understand what winning would mean or what we could do about it, so this post is an attempt to clear up my own thinking.
Let’s start by reframing the problem. Surviving and prospering forever means not being stamped out by some big existential threat between now and then. This reframe is useful, but unfortunately there are a lot of those threats, and some of them are right here and now: a global superplague, a nuclear war, catastrophic climate change, or protracted social unrest and civilisational collapse (think a second Dark Age, or a modern analogue to the multi-cultural, multi-regional Bronze Age collapse).
Other existential threats are so far in the future that they sound absurd: the death of the sun, the death of the Milky Way, or – slightly sooner – the slow heating of the sun and the consequent boiling of all the Earth’s oceans long before the sun becomes a red giant. This last group also makes another important point: like an asteroid out of some implausible Hollywood movie, it’s very difficult to know what all the possible landmines waiting out there are. Wikipedia lists some more.
Broadly though, existential threats are either internal or external – something we can do to ourselves, or an accident of nature. I’m not sure which of these groups is the bigger problem. Arguably it’s impossible to know, and anyway, we need to solve both. Leaving aside the thorny issue of prioritising them for now, I’ll put the internal threats temporarily to one side and zoom in on the external threats to figure out a framework for dealing with them.
Fundamentally, external existential threats are when nature does us in. The most common historical causes of civilisational failure could all have been overcome or avoided if those civilisations had just had a little more control over their physical world, or a little more knowledge of how it worked. By the same logic, the more technologically advanced we are as a species, the better we will be at fending off nature’s thrusts.
As a first, very high-level conclusion then, it seems that to win the game of survival we need to continue to multiply our ability to control and understand our world. There’s a space of existential threats, some of which we know about and some of which we can solve. The more of them we put into the “we can solve” bucket the better off we are:
We will certainly run into new, bigger problems as we understand more and can do more – but these are problems we would have run into anyway, and the sooner we put a problem behind us the better. While it’s convenient to think of these threats in isolation, there’s no guarantee we’ll get them one at a time. A superplague and an imminent asteroid along with a second Dark Age, anyone? The more of the threat space we can move into the “we can solve” bucket as soon as possible, the better!
So what does this mean? It means that our first step to making sure we win the Big Long Game is figuring out how we could massively multiply our understanding and our ability to act. As a first proposal, I’m picking three research topics to focus on. They mostly appear in the Transhumanist literature, but I want to bring them under a single, bigger roof. Those three are:
- Space exploitation – because we’ve done enough exploration
- Engineered negligible senescence – curing the disease of death
- Strong AI – building the ultimate positive feedback loop
Each of these – what it is and why – deserves a full post of its own. But this post is long enough, so I’ll end it here! Do you think the Big Long Game is really about staving off existential threats? What are the top threats in your mind, and what can we do to make sure we can beat them? If you agree that it’s all about building capabilities, which ones would you build?