The World Needs Democratic AI Principles

Democratic artificial intelligence principles are an essential guardrail in the age of great power competition.

Artificial intelligence (AI) encompasses a nearly limitless set of possibilities, and the rapid integration of AI into every facet of human life offers great promise. It is also a disruptive force that threatens to destabilize the global balance of power and the foundational principles of democracy itself.

AI generally refers to any algorithm capable of analyzing vast data sets to perform a given task rapidly and at scale. Improvements in AI come from data-intensive deep learning that allows algorithms to get better at their tasks over time. At the cutting edge of this technology, Silicon Valley-based OpenAI’s GPT-3 program produces coherent, free-flowing language that could eventually undergird the development of intelligent AI assistants for numerous human professions, drastically improving productivity. Conversely, this kind of intelligent AI could also supercharge targeted disinformation campaigns by overwhelming social media environments with AI bots that are indistinguishable from human users. The efforts of researchers at the Beijing Academy of Artificial Intelligence, a government-funded lab, to create a similar algorithm therefore take on an ominous character, given the Chinese government’s track record of creating and exporting AI for authoritarian purposes.

The profound effects of recent technological innovations, such as the distortion of social media platforms to propagate disinformation that erodes the very credibility of U.S. democracy, further highlight the dire consequences of disruptive technologies absent guardrails. To safeguard democracy against AI’s unforeseen consequences and inhibit perversion of the technology by authoritarian states, the U.S. government and industry must adopt basic principles to guide the development of AI.

Authoritarian AI Superpowers?

Like any technology, AI is infused with the values of its creators. In the United States, AI is primarily incubated in an open and competitive commercial sector where innovation thrives. Millions of people already interact with this kind of AI when they ask Amazon’s Alexa questions, use Google Maps for directions, or open Netflix. At a deeper level, numerous AI innovations, such as those currently speeding up vaccine development and improving breast cancer outcomes, are enhancing and extending the lives of Americans. However, several early warning signs underscore that even in the United States there is a need for a concerted effort to place democratic principles at the center of AI use. At least three Black Americans have already been wrongfully imprisoned due to false identification by facial recognition algorithms.

The horrific implications of the conscious abuse of AI by authoritarian powers are readily apparent in the Xinjiang Uyghur Autonomous Region. There, Uyghur culture and identity are being systematically dismantled by the Chinese government with the aid of sophisticated AI-enhanced surveillance systems. Eager to export these tools of repression, China is providing surveillance technology at low cost to scores of municipalities and national governments, including developed democracies. China, and to a lesser extent Russia, are also harnessing AI to rapidly develop new intelligence, targeted disinformation, and military capabilities that pose novel threats to the United States, its allies, and democracy everywhere.

In 2017, President Xi Jinping proclaimed China’s intent to be the world’s premier AI innovation center by 2030, and Russian President Vladimir Putin remarked that the country that leads in AI will rule the world. Both leaders have since poured billions into AI development. China in particular has assumed the aura, if not yet the mantle, of AI superiority by honing existing AI technology and applying it to tasks of mass repression and espionage.

China and Russia are able to funnel huge troves of data about individuals and companies into AI projects specifically because they are illiberal. Chinese AI developers not only have access to citizens’ tax returns and criminal records, but also medical records, forced genetic screenings, bank statements, purchase histories, electronic communications, and movement data collected from personal GPS-enabled devices and AI-enhanced surveillance systems. Beyond its own citizenry, state-sponsored corporate espionage also extracts sensitive data from international companies doing business in China. Russia is blanketed with fewer sensors, but its inadequate data protection laws and corruption allow Russian companies to access huge amounts of personal data.

The U.S. Remains the World’s AI Innovator

Data is an invaluable piece of the puzzle, but AI design is, first and foremost, an art that requires human innovation. Simply throwing funding and labor at the task is insufficient to produce the innovative algorithms that will lead to the next AI breakthrough. Despite China’s investment in advanced AI research, the top AI talent needed to realize new leaps in AI’s potential continues to gravitate toward the United States. China and the United States produce roughly the same number of AI researchers each year, but the vast majority of AI innovators, including Chinese nationals, pursue graduate studies and work in the United States.

Ironically, China’s use of AI-enhanced surveillance to exert social control cultivates a sociopolitical environment that stymies the freedoms that foster innovation. Lavish state funding can overcome the problem to an extent, but vast sums of money are susceptible to corruption and can overlook or even discourage spontaneous technological discoveries. The recent three-month disappearance of Jack Ma – the CEO of tech giant Alibaba who publicly criticized Chinese regulators last year – provides even the most lauded China-based innovators and entrepreneurs a cautionary example; no one is too important for the Chinese Communist Party to cut down to size, if needed.

Supporting Innovation Infused With Democratic Values

Unfortunately, American companies and capital have helped enable China’s application of AI for repressive purposes. China currently relies on the United States and other democracies for the advanced microchips that make algorithms effective. The Urumqi Cloud Computing Center at the heart of AI-enhanced surveillance in Xinjiang is powered by microchips made by leading American semiconductor companies. Some of the largest investors in SenseTime, a principal architect of the software behind China’s AI-enhanced surveillance networks, have been American firms. The United States should coordinate export controls and human rights sanctions with its allies to deny authoritarian regimes such resources and thereby temporarily slow the development of AI-enabled digital authoritarianism. Export controls and sanctions would have consequences, but they are necessary to prevent the complicity of American entities in human rights abuses.

To maintain its edge and support the adoption of a democratic AI environment, the U.S. government must redouble investment in AI innovations in the United States and allied nations that solve public problems and improve defenses against illiberal AI technologies. In the mid-20th century, the United States funded over two-thirds of global research and development. Today it funds less than one-third. To reverse this trend, the U.S. government must not replicate the Chinese top-down investment strategy that disincentivizes innovation and incentivizes corruption. Instead, it should fund research and development centers at universities and expand public-private partnerships and existing government-to-commercial-use technology transfer programs, both at home and abroad. In all of these partnerships, government and industry must emphasize democratic principles, focusing on the protection of data privacy, individual liberties, and human rights.

Domestically, the U.S. government should establish a National Research Cloud. This would provide academic AI researchers with affordable computing power and access to large datasets held in a secure cloud environment. Such an initiative would democratize AI research and accelerate the training of a new generation of AI scientists. Abroad, the U.S. International Development Finance Corporation could provide loans to support companies and governments, such as those in Tallinn, Estonia, that are developing domestic, democratic alternatives to Chinese and Russian AI.

Above all, the U.S. government needs to invest in people and champion scientific excellence. The U.S. educational system is in dire need of reform if it is to prepare Americans with the necessary skills to excel in the development of emerging technologies. Michael Kanaan notes in his book “T-Minus AI” that the National Defense Education Act – which improved American science curricula and provided thousands of students affordable college loans – was a key part of President Dwight D. Eisenhower’s strategy to make the United States more competitive in the space race after the Soviet Union’s successful launch of Sputnik. The United States needs the same kind of focus now to ensure the preeminence of democratic AI. 

To simultaneously facilitate the continued inflow of global talent, the U.S. government should improve pathways to residency and citizenship for skilled innovators in AI and other critical fields, particularly for foreigners who study in the United States and would otherwise take their new expertise home with them upon graduation. 

Democratic “Laws” for AI Development

Given the chilling implications of AI misuse, the tech industry is skeptical of government-led programs. But the industry has erred as well; social media technology and digital disruption have deformed and undermined democracy. Leaving AI technology free to develop absent guardrails also presents dangers. In a no-holds-barred technological arms race with China, the United States may backslide into that which it hopes to contain. Ethics must be a central concern in the development of AI. The controversial firing in December 2020 of Timnit Gebru, one of Google’s leading AI ethics researchers, and the firing last week of another AI ethicist at the company, Margaret Mitchell, highlight the need for more transparency from the tech industry. 

The U.S. government can take the first step to build trust in public-private partnerships by updating legal and regulatory frameworks to further protect civil liberties. Industry can also come together to self-regulate. The initial guidelines for ethical research on genetic engineering published at the Asilomar Conference on Recombinant DNA, which subsequently evolved into NIH guidelines, are an important precedent for industry cooperation. 

Democracies can also come together to promote democratic AI. Organizations such as UNESCO, the OECD, and the European Commission have already begun conversations about beneficial AI and ethical standards. The U.S. can advance these efforts and promote democratic AI by adding it to the U.S.-led democracy summit agenda. By taking a leading role, the U.S. can help minimize negative unintended consequences and shape AI technologies that benefit the prosperity, liberty, and security of all. 

Like the developers of the nuclear power industry in the 20th century, government and industry must share the enormous responsibility of crafting basic ethical principles to guide the emergence of what is a foreseeably disruptive technology. Isaac Asimov’s classic parable on sentient robots established three laws to govern his fictional world. The architects of AI should be equally forward thinking in developing “laws” to govern emerging AI.

Alexander Vindman, a retired U.S. Army lieutenant colonel, and former director for European Affairs at the National Security Council, is a doctoral student at the Johns Hopkins School of Advanced International Studies, Pritzker Military Fellow at the Lawfare Institute, and the author of the forthcoming memoir “Here, Right Matters.” Follow him on Twitter @AVindman.

Igor Jablokov, a technology executive, is CEO of Pryon, an enterprise artificial intelligence company. His career began at IBM before founding AI pioneer Yap, which was Amazon’s first acquisition to create Alexa. Subsequently, he was awarded Eisenhower and Truman National Security Fellowships. Follow him on Twitter @IJablokov.

Ian J. Lynch is an independent foreign policy and national security analyst. He previously led the development of girls’ education programs in Afghanistan from 2013-2018. Follow him on Twitter @Ian_J_Lynch.