TTC in Context: AI Regulation

October 20, 2021
by Sam duPont

On September 29, the European Union and the United States held the inaugural meeting of the Trade and Technology Council (TTC) in Pittsburgh. Through the operation of ten working groups, the TTC—first announced at the U.S.-EU Summit in June—addresses a set of issues including export controls, foreign investment, supply chains, technology standards (including for artificial intelligence), and platform regulation. Co-chaired by European Commission Executive Vice-Presidents Margrethe Vestager and Valdis Dombrovskis, along with Secretary of State Antony Blinken, Secretary of Commerce Gina Raimondo, and U.S. Trade Representative Katherine Tai, the TTC will work “to deepen transatlantic trade and economic relations, basing policies on shared democratic values” while simultaneously protecting “businesses, consumers, and workers from unfair trade practices, in particular those posed by non-market economies” and “countering authoritarian influence in the digital and emerging technology space.”

This article and an accompanying one on critical technologies provide context for these developments, reviewing recent U.S. and EU policy decisions as well as bilateral and multistakeholder interests. Resolving these issues can pave the way for the TTC’s efforts to bolster transatlantic cooperation on current and future generations of technological development—without undermining the existing rules-based system or embarking on a race to the bottom on standards. By creating the foundation for a collaborative framework for governing the digital world—from artificial intelligence and machine learning to the platform economy, digital taxation, and supply chain durability—the EU and the United States can chart a path forward in which innovation flourishes alongside the protection of human rights and security.

Eli Weiner

The Trade and Technology Council’s focus on artificial intelligence (AI) underscores the economic and geopolitical significance that this and other emerging technologies will carry in the immediate future. AI technologies are already becoming prevalent in our lives. Doctors use machine-learning tools to identify possible treatments for COVID-19; similar tools can help identify when children might be in danger or help judges make fairer decisions. At the same time, many applications of AI present risks to civil rights, human rights, health, and safety. Algorithmic decision-making can perpetuate human biases, with discriminatory effects. AI systems trained on faulty data and used by law-enforcement authorities can make bad outcomes even worse. And as such systems become integral to airplanes, automobiles, heavy machinery, and household products, it is not hard to imagine how faulty AI could have catastrophic consequences. The joint statement issued by the EU and the United States in the wake of the TTC meeting emphasized these concerns, noting that, while AI technologies have the potential to greatly benefit society, they “can [also] threaten our shared values and fundamental freedoms if they are not developed and deployed responsibly.” The statement also highlighted the importance of developing AI according to a “human-centered approach that reinforces shared democratic values and respects universal human rights.” Both sides indicated their intent to pursue a risk-based approach to preventing harm, like the one advanced in the EU’s draft regulation and in the work begun on the National Institute of Standards and Technology’s AI Risk Management Framework, viewing this method as offering the best balance between social benefit and social protection.

History of the Issue

To realize the promise of AI while forestalling emergent risks, governments have to date focused on establishing ethical principles to govern AI, while encouraging its advancement and adoption. For example, the OECD Principles on AI, which were backed by the G20, endorse transparent AI systems that protect privacy, enable human oversight, and function with high levels of robustness and security. Yet for a technology as diverse as AI—with applications ranging from autonomous vehicles to facial recognition to medical diagnostics—much work remains to be done for governments to translate principles into policy and to identify the most effective means of bolstering the technology’s advancement.

Current State of Play

In April, the European Commission released a draft regulation for AI, the first effort to create a comprehensive law that would mitigate some of the risks associated with the technology. The draft regulation takes a risk-based approach: most applications of AI are considered “no risk” or “low risk,” and are subject to minimal transparency requirements. “High risk” applications must comply with detailed requirements and undergo conformity assessment processes, while a handful of applications posing “unacceptable risk” are prohibited altogether. In the final category are AI systems that “deploy subliminal techniques” or “exploit vulnerabilities” in order to distort human behavior in a manner likely to cause physical or psychological harm. Also prohibited are systems akin to China’s “social scoring” mechanism that evaluate the “trustworthiness” of individuals based on their “social behavior.” Finally, the draft regulation would prohibit law-enforcement authorities from deploying “real-time remote biometric identification systems” (such as live facial-recognition technology) in public spaces without a court order.

Much of the draft EU regulation is devoted to laying out the compliance requirements for high-risk applications of AI. Providers of such applications would be required to use high-quality data to train their AI systems, to build logging and documentation into their software to enable auditability, to ensure the potential for human oversight and intervention, and to guarantee the robustness, accuracy, and security of their systems. The draft regulation lays out a typology of systems that would be considered high risk, including products that are already subject to regulation (such as machinery, toys, and medical devices) and novel systems such as:

  • Biometric identification and categorization, such as facial-recognition technology;
  • Critical infrastructure safety systems;
  • Education, to support admissions decisions and student assessments;
  • Employment, to support decision-making about hiring, termination, and promotion;
  • Public services, to determine access to public assistance and credit;
  • Law enforcement and the judicial system, for “predictive policing” and other applications; and
  • Migration, to verify travel documents and evaluate asylum requests.

In June, the European Data Protection Board and the European Data Protection Supervisor published a joint opinion welcoming the EU’s risk-based approach. They also called for a comprehensive ban on the use of AI for automated recognition of biometric features in public spaces, encompassing not only facial recognition but also systems able to identify individuals based on gait, voice, fingerprints, keystrokes, and DNA. In October, the European Parliament passed a non-binding resolution recommending a ban on police use of facial-recognition technology in public places and calling for prohibitions on predictive policing and on law-enforcement use of private facial-recognition databases.

In the United States, the Trump administration focused on supporting the advancement of AI technology and its uptake by industry and the government. A 2019 executive order directed federal agencies to support AI research and enable the use of government data and computing resources by the private sector. Following the executive order, final guidance issued in November 2020 urged agencies to “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth” and laid out a set of principles that agencies should consider before issuing new regulations addressing AI.

Congress has considered targeted legislation to address specific risks presented by certain applications of AI. In particular, legislation introduced in the House and the Senate in June would prohibit the use of biometric technology, including facial recognition, by federal and state government entities, while several bills introduced in the previous Congress would have similarly addressed the deployment of facial-recognition technology by law-enforcement agencies. Legislation has also been developed addressing the use of algorithms more broadly: forbidding discriminatory or biased algorithms, mandating increased transparency and disclosure about how they are used, and establishing baseline safety and effectiveness standards. At the state and local level, certain jurisdictions have prohibited the use of facial-recognition technology by law-enforcement authorities, while Washington state has passed a broad law governing governmental use of facial recognition. At least 17 states have introduced bills or resolutions addressing the use or development of AI; Colorado, for example, passed a law in July prohibiting insurance providers from using discriminatory algorithms or predictive models. As of now, Congress has not contemplated a comprehensive law to address risks that cut across a wide range of AI applications.

In April, the Federal Trade Commission asserted its jurisdiction over AI, indicating that discriminatory algorithms may violate existing laws prohibiting “unfair or deceptive practices” and discrimination on the basis of race, religion, sex, and other protected characteristics. This assertion suggests that the United States’ existing legal framework may already address certain risks arising from AI.

Meanwhile, members of the Biden administration have begun calling for an AI “bill of rights” that would guarantee protection from biased or inaccurate algorithms, ensure transparency, and safeguard citizens from pervasive or discriminatory surveillance. In pursuit of this goal, the White House Office of Science and Technology Policy initiated a fact-finding exercise in October examining the past, present, and prospective uses of biometric technologies able to identify and evaluate individuals, including their mental and emotional states. And in announcing the initiative, administration officials previewed possible enforcement options, ranging from the federal government’s power of the purse and federal contracting requirements to new laws and regulations.

U.S. and EU Interests

As evinced by the issuance of the draft regulation by the European Commission, the EU’s primary interest with respect to AI appears to be setting legal limits on its use with a view to protecting human rights and public safety. This interest aligns with the EU’s push to lead the world in making rules to govern the digital economy, and tracks with parallel efforts to regulate online content, competition in digital markets, and other areas. Still, many European leaders have emphasized the importance of regulating in a manner that does not impede technological innovation, in the interest of ensuring that EU companies can be globally competitive in the development and implementation of AI.

To date, the United States has emphasized the advancement of AI and ensuring its uptake, with many policymakers placing it at the center of a technological competition with China. Still, many U.S. leaders share the EU’s interest in mitigating risks associated with AI. Following the introduction of the European Commission’s draft regulation, U.S. National Security Advisor Jake Sullivan welcomed it in a tweet, signaling the Biden administration’s potential interest in fostering “trustworthy AI.” As the EU moves ahead with writing its rules for AI, some U.S. leaders have encouraged it to do so in ways that are narrowly tailored and do not create unnecessary trade barriers between the allies. To this end, a streamlined conformity assessment process will likely be key.

Stakeholder Interests

Technology companies have cautioned against excessive regulation that could stifle innovation. At the same time, some firms active in AI have supported more active government involvement in addressing risks arising from the technology, and even stepped back from the development or provision of particularly troublesome applications, such as facial recognition. Following the release of the EU’s draft regulation, private-sector representatives welcomed its risk-based approach and urged European lawmakers to pursue “flexible regulations” that focus on the highest-risk applications.

Civil society groups and actors focused on digital rights have played an instrumental role in identifying the risks arising from certain applications of AI and advocating for government intervention to address them. They criticized the Trump administration’s efforts on AI as insufficiently attentive to the risks presented by the technology. Since the publication of the EU’s draft regulation, they have continued to push for more stringent protections, with more AI applications subject to prohibition and stricter enforcement mechanisms.

Next Steps

The EU’s draft AI regulation has now entered the “trilogue” process, in which the European Commission negotiates with the European Parliament and the member states in the Council of the European Union. Through these negotiations, the regulation will be amended and refined, a process that can take several years before it can be adopted by the Parliament and the Council.

Meanwhile, it remains to be seen how the Biden administration will approach AI. While the United States is not likely to pursue a comprehensive legislative approach to addressing AI risks, the EU’s draft regulation includes many ideas that could be adapted for more piecemeal or sector-specific policy advancement in the United States.

The views expressed in GMF publications and commentary are the views of the author alone.

TTC in Context: Critical Technologies

The Trade and Technology Council’s emphasis on crafting a collaborative set of strategic policies governing technology transfers and investment controls comes as U.S. officials across administrations have highlighted China’s growing strength as a challenge for national security and economic prosperity.