Averting a Crisis of Confidence in Artificial Intelligence

November 19, 2020
by R. David Edelman
6 min read

The Challenge: An AI Revolution, Derailed

The devices and systems in our lives are being slowly infused with artificial intelligence (AI) technologies driven by machine learning. Some deliver delights, like smarter cameras in our phones that turn casual clicks into works of art. Others breed more ambivalence, like ads in your newsfeed showing precisely what you needed to buy this week: convenient, yes, but to some, invasively prescient.

But more socially significant applications of AI are getting far less attention, despite representing the greatest risk and opportunity for that technology in the coming decade. They include systems that can convince a bank to extend a loan to an under-served borrower with a thin credit file, or that can have a human-like conversation with a refugee to help them navigate byzantine asylum bureaucracies. But they also include facial-recognition systems that are partially blind to large swathes of the population, with plunging performance when presented with female or black faces, and AI-driven employment systems that silently deny opportunity to those who do not live near or sound like those already successful at the same job.

AI technologies are still in their infancy—with immense potential, largely untapped, but also with fundamental questions about their usability still unanswered. The performance of these systems varies wildly: AI systems that excel at one complex task might fail spectacularly at another that is, to human minds, adjacent. Many of the most powerful systems have little to no ability to explain themselves. They are only as accurate as the data they are trained upon, and even then, performance against edge cases is often imperfect. In short, AI systems are constantly surprising researchers in what they can do and what they fail to do—and that carries significant implications for public policy.

As societies, we have learned to be tolerant of computer failure in our lives: a dropped call here, a few minutes of lost writing there. When socially significant systems let us down, though, they do not just take something from the user; they take something from society. They erode trust in the systems used by our government, and thus in the government itself. They antagonize the very communities that the police most need to partner with to tackle crime. They do not just hold back opportunity in ways that cause social and economic stagnation—they may well be illegal.

AI systems deployed in socially significant areas before the technology is ready may breed skepticism of their capabilities: a crisis of confidence in AI with implications far beyond technology—for economic dynamism, social justice, education, and more. Dispelling that skepticism will not just be a matter of filing a bug report. Blame will lie not just with the programmers but with public officials, and it will carry public consequences. So the challenge of the next few years is to get ahead of this crisis and show that it is within the power of governments, with the right insights, to apply the tools of public policy to check the harms of misbehaving AI—and in so doing to coax into existence a friendlier, more reliable breed of machine.

The Solution: To Govern AI, Evolve from Principles to Practice

Governments around the world are rushing to demonstrate they have a handle on the social and economic complexity—and opportunities—of AI. There are innumerable new commissions, study groups, task forces, and high-level statements, especially on both sides of the Atlantic. So what should all these efforts aim to achieve for such a new, general-purpose technology—particularly when AI’s true significance in our daily lives has yet to be seen?

If all the governmental projects to govern AI to date could be summed up in a single word, it would be “principles.” From the European Commission to the U.S. Defense Department, Google to Microsoft, the OECD to the G20, high-level statements of principles abound. Most have an explicitly ethical orientation, a conscious counterpoise to a decade in which many declared technological systems “neutral by design,” leaving democracies to pick up the pieces of their flawed design. And many have substantial overlap, highlighting the importance of privacy, accountability, and—with slightly weaker consensus—transparency and human control. Many represent careful consideration of the harms of AI run amok. Yet few if any purport to offer solutions, particularly in the real world of public policy, where even the simplest decision comes with tradeoffs.

This is where the opportunity lies to avert the crisis of confidence in AI; to match the ambition and ethical orientation of these principles with the hard, exacting, and context-specific work of policymaking. The first step is recognizing that most governments will not have a singular “AI policy” any more than they can have one “computing policy”—the concept is so broad as to be meaningless. Rather, the last 30 years have given rise to a diversity of laws governing computer-enabled conduct, like balancing rightsholders’ interests and fair-use principles, or defining crimes like “intrusion” in the digital space and limiting governments’ ability to access private data and networks. Over these same three decades, policymakers have gradually developed an instinct to know when digital systems can be trusted to support human judgment and when they are best left out of the decision-making process.

The challenge for this era of AI is to do both again: to develop the detailed policies that allow us to contextualize AI systems and govern them accordingly, and to develop the “gut sense” of their readiness to play more significant roles in our lives.

That work starts with the regulators of banks and lending, medicine, employment, and other key sectors: they must understand how AI systems are being used, visualize the consequences of systems with limited (or exceptional) performance, and adapt their regulations and enforcement to confront those very real harms. The threshold of acceptable transparency is bound to differ when seeking the reasons for a loan rejection versus a parole denial. The method of proving that a cargo drone is safe to fly will certainly differ from that of proving a hiring system does not discriminate. The immediate task for government is to determine how precisely to apply long-standing protections to this new technology—and, where necessary, establish the parameters of policy-aware design so that those building systems understand what is required to build with equity and accountability in mind. A government with a national AI strategy is one that can point to all of its obligations—especially its responsibilities to protect—and explain how it is applying the principles that animate its laws to the uses of AI, built atop technical expertise in agencies buttressed by appropriate regulation.

Conclusion

After the first stages of this hard, unglamorous, crucial work are complete, themes will undoubtedly emerge. They may well reflect some of those high-level principles of the last few years. More likely, gnarled by contact with policy realities, they will be more recognizable as best practices, common regulatory frameworks, or even shared datasets and evaluation mechanisms for the use of AI in socially significant systems. Here there is immense potential, particularly between the United States and Europe, to develop a commonality in the evaluation of systems necessary for common markets. It is harder to arrive at common safety standards for vehicles than it is to talk about our shared commitment to protect passengers. But governments have managed to do both before, and they have the opportunity to do so now with socially significant AI systems—before the full extent of envisaged harm has come to pass.

Download the full report »


R. David Edelman is based at MIT’s Internet Policy Research Initiative, where he holds joint appointments at the Computer Science & AI Lab (CSAIL) and Center for International Studies (CIS) and teaches in the Department of Electrical Engineering and Computer Science. He previously served at the State Department and White House, most recently as special assistant to President Obama for economic & technology policy.