In the largely forgotten 2003 John Woo film “Paycheck,” an amnesiac Ben Affleck has to piece together his memories to stop a machine he built for evil industrialist Aaron Eckhart from destroying the world. But because it was adapted from a Philip K. Dick short story, the twist is much smarter than the film around it: The machine in question isn’t a weapon but a device capable of seeing into the future, and its predictions of war and plague convince policymakers to make decisions that precipitate exactly those outcomes.
Here in 2021, we thankfully do not have such a machine (though perhaps Ben Affleck wishes he’d had one before agreeing to star in “Paycheck”). But in keeping with my practice of trying to extract interesting ideas from mediocre films, I found myself thinking about the prediction machine (the film, sadly, never gives it a catchier name) because it is hard to ignore the feeling that the odds of a major war in the near future are rising. Russian forces are massing on Ukraine’s border while Belarus foments a migrant crisis on its frontier with Poland. Meanwhile, China continues to threaten Taiwan, and observers sound increasingly urgent alarms that a confrontation between Beijing and its neighbors – one that might well draw in the U.S. – is growing more likely.
Against that increasingly threatening backdrop, the question that occurred to me is: How can we get better at understanding the actual risk of war? And can better understanding translate into a lower risk?
This isn’t a new question. Political scientists and analytical units in governments have spent decades trying to create more accurate, more rational, and more systematic ways to assess the risk of war (or other violent collapse). Obviously, given what we know about physics and the linear flow of time, a machine that can literally peer into the future is much closer to a science-fiction trope than to reality. But like election analysts, meteorologists, and sports fans, policymakers and political scientists can turn to the increasingly vast stores of data available about the world and try to parse them with sophisticated algorithms.
It is important to be clear about what the outputs of such systems are and aren’t. While colloquially called “predictions,” the actual outputs tend to be ranges of estimates based on a combination of historical data and assumptions about correlation and causation. Turning that into policy is a complex, nuanced, and fallible process. Solving the technical problems of accessing, parsing, and analyzing data and producing a more specific, more accurate forecast – an enormous task, to be clear – does nothing to decrease the procedural, institutional, and political challenges of translating that into “better” decisions.
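To make that distinction concrete, here is a minimal sketch of how such a forecast behaves – hedged heavily, since everything in it (the feature names, the synthetic “historical” record, the bootstrap interval) is an invented illustration rather than a description of any real forecasting system:

```python
# Toy illustration of why conflict "predictions" are really ranges of
# estimates: fit a simple model to synthetic historical data, then
# bootstrap it to expose the spread of plausible probabilities.
# Every feature name and number here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic country-years: troop buildup, economic stress, alliance
# tension -> did war follow? (0/1)
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([1.2, 0.8, 0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w - 1.5)))).astype(int)

# A hypothetical "current" situation we want to assess.
current = np.array([[1.0, 0.5, 1.5]])

# Refit on resampled histories: each resample yields a slightly
# different model and therefore a slightly different risk estimate.
probs = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    model = LogisticRegression().fit(X[idx], y[idx])
    probs.append(model.predict_proba(current)[0, 1])

low, high = np.percentile(probs, [5, 95])
print(f"Median estimated risk: {np.median(probs):.0%} "
      f"(90% bootstrap range: {low:.0%}-{high:.0%})")
```

Even in this toy setting, the honest output is a band of probabilities, not a date on a calendar – and real systems face the same limit with far messier data.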
But providing an assessment – whether couched in numerical probability terms or not – of the likelihood of a war and identifying the exact circumstances and timing under which it will start are completely different things. After all, few people were surprised when World War I broke out, but no one predicted that the assassination of Archduke Franz Ferdinand in Sarajevo would be the precipitating event. Similarly, the idea that Japan and the United States would come to blows in the Pacific was not itself a surprise – but the attack on Pearl Harbor very much was, and its tactical success was largely predicated on the unpreparedness of the Pacific Fleet early on that specific Sunday morning.
Technology, to be fair, has made it much harder to hide the material preparations for war. It is impossible to conceal the movement of large numbers of troops, aircraft, armored vehicles, or warships from the constellation of observation satellites operated on behalf of both governments and non-governmental entities, while open-source analysts can pick up – and broadcast – other tells from photos, videos, and social media.
Material preparation, however, is a necessary but not sufficient condition for war. A buildup and demonstration of forces followed by a quiet drawdown is far more common than a buildup followed by actual combat. External factors – the balance of forces, the economic and political backdrop, even the weather or the season – account for some of the difference, and can to some extent be accounted for in sophisticated models, as the toy example below suggests.
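Again purely as an invented illustration – the base rates here are made up – a few lines of code show why a buildup alone is such a weak signal: if most buildups end in drawdowns, an alarm that fires on every buildup will be mostly false alarms even if it never misses a war.

```python
# Toy illustration of "necessary but not sufficient": in this synthetic
# record nearly every war is preceded by a buildup, yet most buildups
# fizzle, so the rule "buildup => war" cries wolf most of the time.
# All rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
buildup = rng.random(n) < 0.20            # a buildup observed this period
war = buildup & (rng.random(n) < 0.10)    # wars only follow buildups here

alerts = buildup
hits = (alerts & war).sum()
precision = hits / alerts.sum()   # how often an alarm pans out
recall = hits / war.sum()         # how many wars the alarm caught

print(f"buildups: {alerts.sum()}, wars: {war.sum()}")
print(f"precision of 'buildup => war': {precision:.0%}, recall: {recall:.0%}")
```

The precision figure is the quantitative version of the paragraph above: the alarm catches every war in this toy world and is still wrong roughly nine times out of ten.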
But ultimately, the decision to go to war is made by a different group of human beings – the adversary’s leadership – with a fundamentally different perspective, even if they are looking at exactly the same set of military and non-military factors. That difference of perspective is far harder to model than any number of complex observable factors. After all, the last two decades saw any number of optimistic claims that technological connectivity would produce greater understanding between peoples, nations, and cultures – but the reality has fallen far short. And, of course, simulations of human behavior, no matter how sophisticated, will always be shaped by the biases of their designers.
Humility is not, by itself, a useful analytical tool. But it is a fundamentally human trait, and a necessary condition of sound decision-making: the awareness that one’s awareness is limited, and that the game looks very different from the other side of the board. As predictive systems become more deeply embedded in decision-making structures, perhaps the role of humans will be to embrace that uncertainty and accommodate it.