On April 13, 2018, the United States carried out airstrikes in Syria in response to the country’s use of chemical weapons. The Office of Legal Counsel recently released its legal case for these strikes, determining that unilateral force is legal unless or until the “anticipated nature, scope, and duration” of the conflict reaches the level of “war in the constitutional sense.” Under Article II of the Constitution, the president is Commander in Chief and can therefore direct military force in pursuit of a range of U.S. interests: “the promotion of regional stability, the prevention of a worsening of the region’s humanitarian catastrophe, and the deterrence of the use and proliferation of chemical weapons.” The opinion raised eyebrows, but how significant is it?
From a legal standpoint, it breaks little to no new ground. Rather, it is fully consistent with the Obama-era opinions that relied on Article II authority for military force, including in response to humanitarian disasters and the use of chemical weapons. As the legal memo accurately stated, “When it comes to the war powers of the President [Article II], we do not write on a blank slate. The legal opinions of executive advisers and the still weightier precedents of history established” the constitutional prerogative to use force.
What makes it more notable is not the opinion itself, but what it signifies as part of a broader trend in American politics. At least since the Korean War, American leaders have studiously avoided being seen engaging in “war.” Several factors have conspired against leaders acknowledging the onset of war.
First, in large part because of the advent of nuclear weapons, American conflicts actually have shifted from large-scale wars to smaller-scale engagements. The prospect of inordinate costs from a nuclear exchange deterred nuclear-armed countries from escalating, which has helped the United States avoid repeating the bloodiest of its wars, such as the Civil War or either of the World Wars. On the other hand, modern conflicts no longer carry the sense of existential stakes that easily mobilizes public support, so leaders simply try to skirt the discussion of war altogether.
For example, in 2008, the United States realized that it was not losing in Afghanistan, but it was not winning either. A review of the Afghanistan war noted that one impediment to progress was that it took 42 steps to obtain an Afghan driver’s license. Each step was an opportunity for bribery, which contributed to corruption and helped finance the Taliban insurgency. Not surprisingly, the Bush Administration was loath to release the review: “a public release will just make people scratch their heads.” How was the war taking so long? Why did the path to victory run through the Afghan equivalent of the Department of Motor Vehicles? The administration decided to scrap a public roll-out of the Afghan review. The question of how to sell a war with unsellable war aims presented itself in Vietnam as well. President Johnson hoped to avoid a debate on a “war,” so he sidestepped all of the trappings of war, such as war taxes and a massive escalation of troops. Rather, he engaged in deception and “escalated the war by stealth to avoid democratic debate.”
Second, as Tanisha Fazal shows in her new book, Wars of Law: Unintended Consequences in the Regulation of Armed Conflict, the codification of international law governing the conduct of conflict, which has steadily evolved especially since World War II, has been a double-edged sword. To be sure, the laws of war have provided important guideposts for state behavior. States are required, at a minimum, to frame their actions in ways that are consistent with international law, which means they at least think about some form of legal behavior. But those same international legal provisions have had an unintended consequence: states have gone to great lengths to avoid being “in war” so they can sidestep legal constraints. For instance, Russia strips its soldiers of insignia so it can plausibly deny state involvement in Crimea, thereby annexing it de facto but not de jure.
Third, technological advancements have further altered the appearance of war. Leaders have increasingly embraced “light footprint warfare,” including drones, special operations forces, and cyberattacks, as less risky and less resource-intensive than conventional operations that require boots on the ground. Also referred to as “gray zone” conflict or hybrid warfare, these tools have the additional virtue of providing “the thinnest veneer of deniability” and therefore help “fragment opposition to actions that otherwise invite a vocal, sometimes forceful, international response.”
All of these features of modern conflict guide leaders away from the threshold of “war.” Smaller-scale conflict is obviously preferable to the carnage that comes with large-scale war, but the shift has the unintended consequence of eroding wartime accountability. Democracies are theoretically more accountable in the conduct of war because of institutions that help adjudicate decisions about war and lower the likelihood of foolhardy wars. For one, the marketplace of ideas, with a free press unrestricted by censorship, should help ferret out information about the basis for war. Opposition groups then have incentives to amplify the debate and use the levers of government to prevent the war or contain it once it has started. Taken together, these institutions should confer a “democratic advantage” in wartime, making democracies’ wars shorter, less costly, and more often victorious than those of non-democracies.
The logic has intuitive appeal, but runs into headwinds in the current context. These institutions work far less fluidly when conflict is “not war,” when leaders can cast operations as below the threshold of war. As Jack Goldsmith and Matt Waxman observe, if drone strikes are out of sight, then “cyberattacks are even less visible.” These democratic institutions cannot function smoothly if wars are fought in the shadows, and the democratic advantages evaporate. Former President Obama, having presided over an expanded drone war, was criticized for acting as “accuser, prosecutor, judge, jury, and executioner,” in part because of the fecklessness of these institutions, including the legislature. It is also no coincidence that wars have become longer and costlier, with Afghanistan and Iraq two of the obvious examples.
The factors that have given rise to the current accountability crisis are unlikely to change. Since the populace itself is blissfully detached from the costs of war, it is unlikely to impose bottom-up pressures. Accountability then must fall to Congress, which has shown little appetite for imposing meaningful constraints. A start would be to renegotiate the 2001 Authorization for the Use of Military Force (AUMF), which has empowered three presidents to expand counterterrorism operations virtually unfettered. Renegotiating an updated AUMF would at least require debate and a conscious decision about where and how force is permitted for counterterrorism, but it would be less effective in constraining interventions such as those in Libya and Syria that relied on Article II authority. Here Congress needs to insert itself more actively into the debate or risk being relegated to bystander status.
Sarah E. Kreps is an associate professor of government and adjunct professor of law at Cornell University, and author of a recent book: Taxing Wars: The American Way of War Finance and the Decline of Democracy (Oxford University Press, 2018).