The Precipice: Existential Risk and the Future of Humanity

Toby Ord


Serendipitous, if that’s the right word for it, that this book was released during the 2020 coronavirus pandemic: the writing was completed before the outbreak, but Ord spends a fair amount of time discussing the risks of natural and engineered pandemics.

This is a very focused book: Ord is only interested in discussing truly “existential” risks, namely events that would cause humanity’s extinction, an unrecoverable collapse of civilization, or permanent lock-in of a state that prevents humanity from reaching our full potential (e.g. some hellish totalitarian regime). The last two seemed a little fuzzy to me, but to a first approximation, we’re talking about extinction. Ord believes that human extinction would be an especially bad consequence, above and beyond the lives lost. He was a student of Derek Parfit, who I think argued something similar in “Reasons and Persons.” In this book, he more or less takes that as given, and doesn’t spend much time on the philosophical argument for it (which I think is an interesting one, especially when considering the possibility of intelligent life elsewhere in the universe, intelligent AI “children” of humanity, or events that wipe out humanity but not the entire biosphere and therefore leave open the possibility of other forms of intelligent life evolving).

Ord divides existential risks into two categories: natural and anthropogenic. Natural existential risks include things like asteroid strikes and supervolcanoes. Anthropogenic risks include things like nuclear winter, extreme climate change (which would have to be much worse than even the high-warming scenarios to count as existential), unaligned AI, and pandemics (he actually files both engineered and natural pandemics here, since modern civilization, with its dense cities and global travel, makes even natural pandemics much more dangerous than they would otherwise be). He does some order-of-magnitude calculations to estimate the per-century likelihood of each of these risks. For the natural risks, he can ground his estimates in the fossil record and the length of time humanity has survived so far. For the anthropogenic risks, the estimates are much more judgmental, but I think it’s admirable that Ord makes the attempt.

His conclusion is that the anthropogenic risks far outweigh the natural ones under basically any reasonable set of assumptions, and that they are high enough that humanity should place a high priority on research and institution-building to reduce them. Ord refers to the modern era as “the precipice” because, in his view, within the next few centuries humanity will either figure out a way to bring these risks down to a permanently minimal level, or will wipe itself out. I was also impressed that Ord didn’t merely stop with this recommendation, but has a fairly detailed discussion of which concrete policy directions and research paths are likely to be effective at reducing existential risks. He also has an excellent discussion of how to think about prioritization and risk reduction when risks are not necessarily independent; it’s definitely more complex than just “equalize the marginal cost of risk reduction across risks,” or at least that’s easier said than done.
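
To make the survival-time reasoning concrete, here is a minimal sketch of the kind of bound you can get for the natural risks. The numbers are my illustrative assumptions, not figures from the book’s tables: roughly 2,000 centuries of Homo sapiens survival, and a couple of arbitrary confidence thresholds. The idea is simply that if a constant per-century extinction risk p had been much higher, our long track record of survival would be wildly improbable.

```python
# Sketch: bound the constant per-century natural extinction risk p
# from the fact that humanity has already survived many centuries.
# The 2,000-century figure (~200,000 years of Homo sapiens) and the
# confidence levels below are illustrative assumptions, not Ord's numbers.

SURVIVED_CENTURIES = 2_000

def max_risk_per_century(survived: int, confidence: float) -> float:
    """Largest constant per-century risk p such that surviving this long
    is still at least `confidence` likely: solve (1 - p)**survived >= confidence."""
    return 1 - confidence ** (1 / survived)

for conf in (0.5, 0.1):
    p = max_risk_per_century(SURVIVED_CENTURIES, conf)
    print(f"survival prob >= {conf:.0%}  =>  p <= {p:.3%} per century")

# survival prob >= 50%  =>  p <= 0.035% per century
# survival prob >= 10%  =>  p <= 0.115% per century
```

No such bound exists for the anthropogenic risks, since we have at most a few decades of track record with nuclear weapons and essentially none with advanced AI, which is why Ord’s estimates there have to be far more judgmental.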

Looking out the proverbial window today, it’s certainly hard to feel optimistic about our chances. We are doing a terrible job of responding to SARS-CoV-2, which is a non-engineered pandemic. Luckily, there is no indication that it poses an existential threat to humanity, and perhaps an optimist would say that we’d do a much better job of responding to a truly existential pandemic. I’m not that person, though; I think it’s clear that we do not have the institutional capability to respond effectively. (One may reasonably argue that things would look different under different leadership, but when thinking about ensuring humanity’s future, we’re not allowed to condition on the uncertain outcomes of a political system.) What’s more, as Ord points out in the book, a society that does not provide minimal levels of care, well-being, and respect to its people is not going to be able to focus effectively on reducing existential risk. When a large proportion of our population is scared of being murdered by law enforcement, how are we going to spend societal resources thinking about unaligned AI? When we don’t even guarantee medical care for all?

I was interested that Ord did not discuss religion at all in the book; I assume this was an intentional choice to avoid alienating some readers. The entire framing of Ord’s argument is a secular one: humanity’s only chance of fulfilling our ultimate potential lies in our survival in this universe, and if we screw up badly enough just once, that’s it, forever. This point of view is fundamentally contrary to a religious one, in which our ultimate goal and value lie in a world beyond this one (and, depending on who you ask, events in this universe are overseen by an all-powerful and benevolent being anyway). This is not at all to say that religious people are somehow unconcerned with events on this earth, or don’t care about humanity’s mundane survival: many religious people are extremely dedicated to bettering the lives of their fellow humans, and often feel that their religion specifically calls them to do so. But would this extend to existential risk too? Again, I’m not saying that a religious person is going to respond to the risk of nuclear war with a shrug. But I think they are never going to feel the sense of desperation that comes from believing that this is all there is. Ord discusses at length how existential risk is under-researched and neglected. I can’t help but wonder if this is, in part, because most people in the world hold a belief system that places ultimate value outside humanity’s mundane survival.

By the way, a cool side note: when I put this book on Goodreads, my dad commented that a kid from my small town, a few years younger than me, was now working at the same institute at Oxford as Toby Ord. I checked the acknowledgments, and Ord does more than just mention him: he credits him (Andrew Snyder-Beattie) with giving him the idea for the book in the first place!

My Goodreads rating: 4 stars

IndieBound