Can Apocalypse Be Dealt With?

A recent lecture by the head of Australia’s internal security raises interesting questions around responses to catastrophic risks.

In a lecture at the Australian National University’s National Security College on October 13, Australia’s Department of Home Affairs Secretary Mike Pezzullo enumerated a long and frightening list of security risks the country – and the world – would have to reckon with over the next hundred years. Pezzullo, who became the first head of the Home Affairs ministry in 2017, took a refreshingly expansive view of the notion of security itself, interrogating traditional conceptions before an audience that included a veritable who’s who of Australia’s national security establishment.

Indeed, his was the only speech by a serving senior security official I have heard so far that included a reference to the French post-structuralist Jacques Derrida. Pezzullo’s repeated invocation of another French philosopher, Michel Foucault, was marginally less unexpected given Foucault’s work on surveillance and biopolitics – topics painfully relevant to the ongoing COVID-19 pandemic.

The pandemic in many ways provided the senior-most bureaucrat responsible for Australia’s internal security with a perfect entry point for discussing a wide variety of threats beyond traditional ones – threats that do not emanate from human actors and therefore cannot be deterred or countered through the use of force. (Noting that “overarming the state is as bad as underarming it,” Pezzullo at one point suggested, quite correctly, that when it comes to Australia’s security right now, “handwashing is more important than every weapon system in the arsenal of the Australian Defence Force.”)

Pezzullo’s self-described “apocalyptic” list of risks included catastrophic ones, defined loosely as those with the potential to inflict serious harm on humanity on a global scale, perhaps even across generations. Pandemics are clearly catastrophic risks; but so are many others he named, ranging from geomagnetic storms caused by unusual solar activity and the permanent loss of biological diversity to manmade risks, including those posed by advanced technologies such as artificial intelligence (AI) and synthetic biology.

Three fundamental questions arise when it comes to catastrophic risks, none of them easy to answer: To what extent can probabilities be assigned to these risks within a fixed time horizon, allowing the risks to be compared? How many resources should a state allocate a priori to mitigating their impact? And what is the role of the national security bureaucracy in managing them?

Let’s start with the issue of estimating the probabilities of catastrophic risks. Many natural ones, from the risk of a direct hit by an asteroid to the existential risk posed by a supernova explosion, can be calculated fairly easily. In a new book, The Precipice, Oxford scholar Toby Ord computes them: It turns out that the probability of an asteroid bigger than 10 kilometers hitting the earth over the next century is less than one in 150 million. The chance of a supernova depleting the earth’s ozone layer by more than 30 percent over the next century is less than one in 50 million. However, when it comes to significant mortal risks from pandemics, probabilists Pasquale Cirillo and Nassim Nicholas Taleb have argued on mathematical grounds that they are higher than widely assumed.
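For intuition, per-century figures like these can be restated as annual odds under the simplifying assumption that the hazard is constant and independent from year to year. A minimal Python sketch, using Ord’s numbers purely as inputs (the conversion is illustrative, not how Ord derives his estimates):

```python
# Convert a per-century probability into an approximate annual probability,
# assuming the hazard is constant and independent from year to year
# (an illustrative simplification, not how Ord derives his estimates).

def annual_probability(p_century: float, years: int = 100) -> float:
    """Annual probability p satisfying 1 - (1 - p) ** years == p_century."""
    return 1 - (1 - p_century) ** (1 / years)

# Ord's per-century figures, as cited above
risks = {
    "asteroid > 10 km": 1 / 150_000_000,
    "supernova ozone depletion > 30%": 1 / 50_000_000,
}

for name, p in risks.items():
    print(f"{name}: about {annual_probability(p):.1e} per year")
```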

That said, when it comes to human-generated, “anthropogenic” risks, the odds become harder to calculate: Consider Pezzullo’s “Terminator” example or, dressed in academese, the problem of an artificial superintelligence of the kind studied by Ord’s colleague Nick Bostrom. (In Bostrom’s theorizing, it is entirely possible that such an AI could wipe out humanity, leaving no possibility of regeneration in the future – a truly “existential risk.”) Such an intelligence could arise out of exponential progress in machine learning within the next 10 years – or not within the next hundred; much depends on how you see certain technological trends projecting into the future. Absent precise, objectively reliable ways to quantify many anthropogenic risks, pooled expert predictions are often used to arrive at a number. (One such exercise, in 2008, put the chance of human extinction this century at 19 percent.)
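Pooling can be done in more than one way, and the method matters. A toy sketch in Python contrasting a simple average with a geometric mean of odds, using hypothetical forecasts (not the 2008 survey’s actual inputs):

```python
from math import prod

# Two common ways to pool expert probability forecasts: a simple average
# (linear pooling) and a geometric mean of odds, which damps extreme answers.
# These forecasts are hypothetical, not the 2008 survey's actual inputs.
forecasts = [0.05, 0.10, 0.20, 0.30, 0.50]  # each expert's P(extinction by 2100)

linear_pool = sum(forecasts) / len(forecasts)

odds = [p / (1 - p) for p in forecasts]
pooled_odds = prod(odds) ** (1 / len(odds))
geo_odds_pool = pooled_odds / (1 + pooled_odds)

print(f"linear pool:         {linear_pool:.3f}")
print(f"geometric-odds pool: {geo_odds_pool:.3f}")
```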

This naturally leads to a very practical question: How much of a government’s resources should be allocated to meeting catastrophic risks, especially when a plethora of them compete for money with plausible, on-the-horizon national security challenges, such as defense spending in the face of great power rivals, and there is no obvious way to rank all of the risks side by side? Furthermore, planners – like armies – often tend to prepare for the last contingency they faced. With COVID-19 still very much here, it is likely to animate debates around government spending priorities for some time to come. (This is not to say that the possibility of another pandemic after COVID-19 is remote; if anything, the systematic destruction of animal habitats and climate change make it entirely possible that another deadly virus will emerge in the foreseeable future. The point is that focusing on that possibility alone, at the expense of other risks, would be foolish.)
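One crude way to put disparate risks side by side is to rank them by expected loss: probability times estimated impact. A sketch with entirely invented placeholder figures; in practice, producing defensible inputs is the hard part, which is precisely why no obvious ranking exists:

```python
# Rank risks by expected loss: probability of occurring this century times
# the estimated cost if it does. Every figure below is an invented
# placeholder, not a real estimate.

risks = [
    # (name, probability this century, estimated cost in trillions of USD)
    ("severe pandemic", 0.25, 20),
    ("great-power war", 0.10, 50),
    ("supervolcanic eruption", 0.005, 100),
]

for name, p, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: expected loss ~ ${p * cost:.2f} trillion")
```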

But fundamentally, there’s a conceptual issue at hand. A catastrophic risk is almost by definition something with significant second- and higher-order effects. (The ongoing global economic devastation from COVID-19 and the attendant possibility of political chaos are cases in point.) Given that many such risks are “distributed, networked and interconnected,” as Pezzullo described them, estimating the cost of their impact – that is, pricing the risk – is extremely hard, though not impossible. Add to this the fact that different catastrophic risks will play out differently: For example, while the ongoing pandemic has spared the earth’s environment, the same may not be true of a supervolcanic eruption.

When it comes to mitigation strategies, too, there are no silver bullets. Take machine superintelligence as an example. Beyond repeated calls for responsible, ethical AI research and hysteria around killer robots, the fact of the matter is that a large part of the cutting-edge research in this area takes place in the private sector, whose compliance with a voluntary set of regulations – should governments put one in place – is uncertain. And while it is common in some circles to note, as Pezzullo did in his lecture, that risks acquire added lethality when they transmit themselves through networks – the very reason social distancing holds the key to beating the coronavirus, as Taleb and collaborators prophetically argued in January – a uniform strategy of shutting networks down in the face of an incipient threat could backfire in unexpected ways. Think of the economic costs of a large-scale internet shutdown, for example.

Finally, there is the role of the national security bureaucracy in managing catastrophic non-traditional threats. Here, too, there are two sides to the coin. As some security studies scholars have long argued, declaring a threat (such as a pandemic) to be a national security matter – to “securitize” it – has obvious downsides. For one, such a move restricts the flow of information, which, as we saw with China’s initial reaction to the coronavirus, is singularly detrimental. At the same time, designating something a security threat also stands to attract significant resources and to centralize response authority. And while Pezzullo, in his lecture, rightly argued that the definition of national security should not be broadened to encompass all policy discourse, the fact of the matter is that the national security apparatus, especially the intelligence agencies, has resources (for example, intelligence collectors at global hotspots) that could significantly help mitigate emerging threats.

In the end, the answer to many of these questions may indeed lie with a proposal from the Australian Home Affairs secretary: that of an “extended state” – a network of government organizations, businesses, civil society, and others – that rises to meet security challenges rather than leaving that task to the state alone. Fleshing that idea out fully to cover a range of catastrophic risks, both mitigating them and dealing with them when they manifest, remains an interesting exercise.