The United States Space Force is the newest branch of the American military and, as its name suggests, the most dependent upon high technology and forward thinking to carry out its mission. So it is perhaps no surprise that Dr. Joel Mozer, the space force’s chief scientist, would declare that we are at “the brink of the age of human augmentation” during remarks at the Air Force Research Laboratory.
Mozer’s remarks suggest a wide view of what constitutes augmentation. He refers first to more sophisticated modes of human-machine teaming, in which robotic agents would be given ever-increasing amounts of autonomy to make decisions, reducing the workload of their human controllers to only the highest-level strategic choices. There is ongoing debate about the ethical, legal, and operational questions raised by outsourcing ever-more-significant decisions to machines, especially where those decisions might well involve taking human lives. But we are already fairly far down the road of that type of augmentation, and given the pace of progress in artificial intelligence research, the trend will likely continue rapidly.
The second type of augmentation to which he refers is likely to attract more controversy, rightly or wrongly: chemically or biologically altering humans to make them more capable of interfacing with ever faster and more capable machines.
It’s important here to note the difference between what is being proposed and the usual fictional portrayals of “augmented” military personnel. From Captain America to Jean-Claude Van Damme’s Universal Soldier, fictional military human enhancements tend to be heavy on physical prowess and lighter on intellectual improvements. (One notable exception is Star Trek’s Moby Dick-quoting Khan Noonien Singh, though given the character’s role as a villain free of normal ethical constraints, he is perhaps not the example that military futurists would want to foreground.)
But in reality, what is being proposed is much more cerebral. There is apparently more to be gained by boosting mental performance, such as attentiveness and the ability to focus for long periods, than by attempting to turn individual soldiers into Terminators. After all, mechanical systems are already capable of being more lethal than any flesh-and-blood soldier could hope to be, and that gap will only grow if autonomy and lethality continue to combine in ever-smaller packages.
It would be straightforward to suggest here that human augmentation, whether computer-aided, chemical, or biological, opens a Pandora’s box of ethical and moral complications, and that the simplest and safest approach would be to find a way to forestall or ban it completely. Certainly, the idea of boosting human performance in order to become more lethal should prompt serious consideration of the ways in which organized violence is evolving and what we should do to manage the attendant risks.
But to suggest that human augmentation is a fundamentally new phenomenon elides the fact that various forms of it have been the norm in warfare for centuries. Combat soldiers, sailors, and pilots have long relied on stimulants in order to maintain vigilance and reaction time despite sleep deprivation and overwork. Military leaders have voiced concerns about the use of steroids by soldiers, given the attendant side effects, although it is difficult to establish how widespread the practice is. Outright prohibitions have generally failed when tried, but we should not let the failure of past attempts suggest that such measures will invariably come to naught.
It would also elide the fact that “augmentation” is already widely practiced in society at large. Our smartphones, as repositories of personal information and access portals to the largest collection of knowledge ever assembled by humans, have been described as mental augmentations, and not without reason. The use of attention- and focus-boosting drugs is thoroughly normalized: consider caffeine, a not-insubstantial amount of which was consumed by the author in the course of writing this column.
Obviously, there are substantial qualitative and quantitative differences between civilians drinking coffee to get through a workday and military officers taking high-potency drugs in order to more efficiently give orders to machines carrying out lethal violence; I do not mean to suggest otherwise. But there is value in looking to the civilian world and to the ways in which the very human quest for shortcuts to self-improvement has been successfully managed, and where, more often, it has not, as we enter a bold new world in which humans try to keep up with the machines we created in order to gain advantage over other humans.