Achieving consensus on a legal issue within the international community is nothing short of a monumental task. But the European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, managed to defy the odds, resonating with nations far and wide. Its goal – to harmonize and bolster data privacy across Europe – set off a worldwide race to develop local data privacy laws. In Asia alone, one report projected that, by 2023, the number of regional privacy laws would grow 25 percent over 2021. Most, if not all, are in whole or in part inspired by the GDPR.
The meteoric ascent of AI tools like ChatGPT has sparked a new phase of global excitement. As a countermeasure – and as part of a broader effort to bring AI’s expansion under control – EU lawmakers have drafted a second, trend-setting installment: the Artificial Intelligence Act. The act, just approved today by the European Parliament, would be the world’s first AI-focused legislation. The bill will now be the subject of negotiations with the European Council.
As the Artificial Intelligence Act moves toward full passage, will it establish itself as the gold standard in AI regulation, the same way the GDPR did in data privacy?
Compared to its Asian peers, Singapore has pulled ahead, introducing its Model Artificial Intelligence Governance Framework in 2019. The two guiding principles of the framework state that decisions made by AI should be explainable, transparent, and fair, and that AI solutions should be “human-centric” and designed to protect people’s safety and wellbeing.
China, another AI juggernaut, also called for greater state oversight of AI to counter the “dangerous storms” looming over the nation. The Cyberspace Administration of China (CAC) issued draft measures for managing generative AI in April this year.
Their efforts took aim at a broad swath of AI, in particular generative AI, which has enraptured many of its Western adopters even as they run up against its glaring limits. However, neither measure packs the same punch as its EU counterpart. The AI Act is unquestionably ambitious: it aims to bring the whole domain of AI development under its regulatory purview. Perhaps the act had been in lengthy gestation before the AI hype cycle took center stage, which gave lawmakers ample time to weigh the potential benefits and drawbacks.
Embracing the same ethos as the GDPR, the AI Act amplifies the sting of breaches as a formidable deterrent. AI systems wielded in crucial domains like education and employment, which hold the power to shape an individual’s destiny, will be held to stringent standards such as heightened data accuracy and transparency. Non-compliance with these regulations may result in penalties of up to 30 million euros or 6 percent of a company’s global annual turnover, whichever is higher.
Granted, amid Asia’s burgeoning embrace of AI, the absence of a comprehensive regulatory framework may lead to biased and discriminatory outcomes. But while the proposed act provides an Archimedean point for other countries champing at the bit to emulate it, it seems unlikely that they will hasten to adopt AI laws in the same manner as the EU.
For one, even tech companies and AI specialists struggle to determine what the future holds. Take generative AI as an example. In China, the tech industry remains divided on whether to fully embrace this emerging technology. While some leaders, like Baidu CEO Robin Li and Alibaba CEO Daniel Zhang, enthusiastically champion its advancement, others, such as Tencent CEO Pony Ma and Sohu CEO Charles Zhang, urge caution against hasty adoption amid the hype surrounding it. In Japan, companies such as SoftBank and Hitachi are proactively incorporating state-of-the-art AI technology into their business practices. Unlike data privacy, whose significance is universally acknowledged, there is no broad consensus on whether or how to rein in the wild west of AI.
After all, navigating the realm of AI is an elusive exercise. Generative AI, the current whirlwind sensation, adapts like a chameleon to tasks such as writing, research, songwriting, coding, and speech composition. Its tentacles reach into every nook and cranny of the economy. Attempting to corral its boundless potential risks stifling its creative expanse.
Every new AI breakthrough sends the world spinning through a dizzying waltz, only to later awaken to the lurking shadows of potential havoc. One need not look far for an example. Just this month, a U.S. lawyer was discovered to have relied on fabricated case citations concocted by ChatGPT, leaving the public aghast. Furthermore, generative AI, albeit significant, represents merely a sliver of the vast AI mosaic. Within this sprawling landscape, the playing field extends far beyond the horizon, amplifying the intricacies of crafting all-encompassing regulations.
Ultimately, the extent to which a nation adopts a risk-based tiered approach, like that of the AI Act, comes down to its appetite for risk. For example, the act classifies remote biometric identification in public spaces as an unacceptable risk. The same legal lever might not be interpreted the same way by some Asian countries. In fact, at the G-7 Digital and Technology Ministers’ Meeting in April, the Japanese government expressed a preference for soft guidelines over strict regulatory laws, arguing that the latter cannot keep pace with changes in technology. While not conclusive, the stance signals a gentle rejection of the EU’s current direction, at least for the time being.
From a strictly legal perspective, the EU’s approach to AI regulation might not align with Asian countries’ legal systems, with the former’s product liability concept a fragile link in AI’s intricate chain. AI’s flexibility and the complexity of its products make it challenging to assign liability, creating a Gordian knot that policymakers must cut through.
It’s little wonder then, that while G-7 leaders agreed to establish transparent and equitable AI standards under the so-called “Hiroshima AI process,” there remained nagging concerns. Each nation holds unique views on AI regulations and is keen to pursue its own agenda.
The seemingly boundless potential of AI inevitably comes with a shadow side, with potential dangers lurking beneath the surface. Common elements to consider before debuting such laws would likely include negotiating acceptable carve-outs while developing regulatory frameworks that also cater to local needs and cultural mores. Resistance from tech companies with vested interests might loom ahead, making securing their buy-in an uphill battle.
There tends to be a trade-off between the comprehensiveness of a law and its practicality, and the efficacy of the AI Act will only become clear with time. As such, we might not see countries such as China and Japan scramble to follow in the act’s footsteps, although South Korea might beat the EU to the punch and enact its first comprehensive AI statute this year. In the interim, most Asian countries will likely rely on a catch-as-catch-can approach to find order in this chaotic reality. Asian regulatory regimes might emerge not in bursts but in progressive cycles, akin to the approach undertaken by Singapore.
As AI technology continues to make its way into various industries in Asia, it is easy to get caught up in the excitement of innovation and neglect the potential negative consequences. Referencing the AI Act to contextualize and address issues specific to Asia could help ensure that the net impact of AI usage remains positive. But until Asia’s AI frontrunners gain a firm grip on the boundaries and potential of this powerful tool, the emergence of a mirrored Artificial Intelligence Act across the region may remain on the horizon.