Taking Stock of the First AI Insight Forum
Technology’s biggest names gathered in Washington last week for the first of 10 much-hyped AI Insight Forums, part of US Senator Chuck Schumer’s SAFE Innovation Framework. This initiative, announced in June, aims to educate lawmakers on AI’s opportunities and risks as they consider balancing regulation with encouraging innovation.
Scientists and researchers sometimes admit they do not quite know how their AI systems work. It is, therefore, encouraging to see Congress express humility about the subject’s complexity and about their own limited knowledge. (Who can forget “Senator, we run ads”?) But while the framework’s launch claimed lawmakers were “starting from scratch”, the AI Insight Forum is far from the first time that legislators are getting their feet wet on AI. Given the many recent actions and initiatives by the executive branch, federal agencies, and Congress to set guardrails for AI and its uses, the AI policy landscape looks more like a crowded wave pool.
Initial reporting and leaks focused on the panel’s big names—Google’s Sundar Pichai, OpenAI’s Sam Altman, Nvidia’s Jensen Huang, Elon Musk, Bill Gates, and others. But labor, civil society, and academic experts such as Mozilla’s Deb Raji, an expert on algorithmic auditing, and AFL-CIO President Liz Shuler were also represented, and it is important to have such non-industry voices participate. There is intense disagreement among technology firms, and among industry, civil society, academics, and developers, about AI’s risks and opportunities, and about government regulation. Some senators also criticized the closed-door nature of the sessions and the inability to offer remarks or ask questions. While it is valuable for lawmakers to sometimes simply sit and listen, rather than making remarks before ducking out, dialogue in future forums is essential. Prepared remarks from CEOs are of limited value if the forums’ goal is to let lawmakers delve into the issues and ask questions without fear of looking uninformed.
Future forums should build on the work and wealth of experience that lawmakers, their staffs, and agency officials and experts have been diligently building, with the inevitable fits and starts inherent to the policymaking process. A major focus should be the enforcement of existing laws—ranging from consumer protection to anti-discrimination to competition—as applied to AI. There is also a chance to learn from international approaches, particularly as the EU finalizes its AI Act, and to scrutinize the requirements for upholding the four pillars of the SAFE Framework—Security, Accountability, Foundations (aligning AI systems with democratic values and protecting elections), and Explainability. While the forum’s attendees may agree in principle on the need for AI regulation, the devil will be in the details.
The success of this and future forums should ultimately be measured by two metrics. In the short term: whether a diverse range of experts provides participating lawmakers with a full picture of the issues. In the long term: whether lawmakers and their staff, meeting frequently with experts, build on existing institutional knowledge and initiatives to craft informed AI policy that encourages innovation within the guardrails needed to protect citizens from demonstrated harms.