On Leopards and Owls: The New EU Commission’s Proposal on AI Regulations
The European Commission recently proposed the first-ever legal framework on AI. Astrid Ziebarth, senior fellow for tech and society, asks Sam duPont, deputy director of GMF Digital, and Jacob Kirkegaard, senior fellow, three questions about the new EU AI regulation proposal.
1. If the new EU AI regulation proposal were an animal, what would it be for you and why?
Sam: A leopard—the European Commission has outpaced other markets in developing a comprehensive approach to mitigating risks associated with AI, and the regulation aims to get out ahead of the technology and head off risks before they arise. It also has sharp teeth, with harsh penalties for firms that break the rules.
Jacob: An owl—the regulation appears thoughtful, intended to guide development relatively quietly in the background (unlike, say, GDPR), but it strikes ferociously with outright bans and fines.
2. You both have a trade/econ background. On a scale of 1 to 10, how much could this regulation impact trade between the EU and the U.S., and why? 1 being low impact, 10 being high impact.
Sam: “3”—Under the draft regulation, most AI developers would see little or no regulation, while a carefully defined list of high-risk AI applications would face rules and requirements proportionate to the risks. Only a handful of dystopian AI applications would be prohibited altogether. As drafted, the regulation will have minimal impact on U.S.-EU trade. But this could change! The regulation allows the list of high-risk and prohibited AI applications to grow over time, and the compliance burden could be amended to become more onerous. Such changes might dissuade some U.S.-based AI providers from offering their services in the EU.
Jacob: “2”—Given that only a very small number of AI applications are outright banned, in areas where only a very limited number of U.S. and EU firms operate, the direct elimination of trade will be small. Moreover, as with, say, GDPR and California, a reasonably high degree of overlap will probably emerge over time between EU regulations and either actual U.S. federal regulations on these issues or state-level provisions and voluntary corporate practices.
3. What is one area of transatlantic cooperation that you would like to see on AI regulation from where you sit?
Sam: Some applications of AI risk entrenching and exacerbating human failings—including racial discrimination and bias. Preventing such negative outcomes is a high priority for many policymakers on both sides of the Atlantic. The EU’s draft regulation would help ensure that AI systems are not trained using biased data, while some U.S. state laws offer additional avenues for identifying and eliminating racial bias in facial recognition technology. Transatlantic dialogue to exchange good ideas in this area could support the development of an AI policy that is grounded in shared democratic values and enhances racial equity.
Jacob: Regulation of emerging AI applications in social media ranking and content allocation, as it seems clear AI will greatly amplify humans’ basest instincts to disproportionately follow controversy, lies, and conspiracies online. Exploiting these instincts is a revenue generator in the online advertising business models of, say, Facebook, but it has potentially dramatic destabilizing effects on our political discourse and systems. The world’s largest developed democracies should be able to agree on at least some common positions to prevent the worst excesses.