Asia Defense

US Department of Defense Adopts Artificial Intelligence Ethical Principles


The Pentagon adopted a set of ethical guidelines on the use of AI.

Credit: DoD photo by Staff Sgt. Nicole Mejia

In a statement on Monday, February 24, the U.S. Department of Defense announced that it had formally adopted a set of ethical principles governing American military applications of artificial intelligence. The principles were based on recommendations made to U.S. Secretary of Defense Mark T. Esper by the U.S. Defense Innovation Board in October 2019, the Pentagon said.

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Esper said in a statement.

“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations,” he added.

According to the Pentagon, the adopted ethical principles were the product of a process that lasted more than one year and included several artificial intelligence experts from a range of fields, including industry, academia, government, and civil society groups. “The adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems,” the Pentagon noted.

The ethical principles broadly span five areas. The Pentagon outlined these as follows:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

The military applications of AI are manifold, ranging from autonomous vehicles to computer-assisted decision-making. The U.S., alongside China, Russia, and a range of other countries, views AI as a critical emerging technology.