Can the Transatlantic Community Align on AI Safety?
Recent actions and rhetoric during the first year of US President Donald Trump’s second term have fueled European perceptions of a full-scale American retreat from artificial intelligence (AI) safety. Vice President JD Vance told world leaders at the Paris AI Action Summit in February 2025 that “the AI future is not going to be won by hand-wringing about safety. It will be won by building.” A month later, the National Institute of Standards and Technology (NIST) instructed partner scientists to remove mention of “AI safety”, “responsible AI”, and “AI fairness” from the skills expected of members and to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness”. And in June 2025, the Department of Commerce renamed its AI Safety Institute the Center for AI Standards and Innovation (CAISI) and announced a shift in its mission from “help[ing] define and advance the science of AI safety” to policies that “evaluate and enhance U.S. innovation” in AI.
Beyond the headlines, however, AI safety is a rare policy issue on which there has been regulatory momentum and even bipartisan agreement in Washington. The Trump administration has maintained some Biden-era policies on high-risk AI use cases, and US states have become increasingly involved in regulating specific use cases and transparency measures. This opens opportunities for transatlantic shared learning and common approaches to AI safety, particularly when these are framed in terms of individual policy questions such as children’s safety or catastrophic risk.
The Background
AI safety refers to technical solutions, policies, and guidelines that minimize the risks AI systems pose to humanity and that secure AI’s benefits. AI safety measures seek to mitigate a wide range of harms. Current damage includes privacy and cybersecurity violations, such as the unauthorized collection of personal data, and training data bias, which has resulted in hiring systems that penalize résumés containing the word “women”. Long-term damage could include existential risks to humanity, such as AI-engineered bioweapons and pathogens. In addition to its broad scope, AI safety spans a range of tools, including robustness testing and validation against adversarial attacks, bias detection and mitigation, and governance frameworks for risk management.
The concept of AI safety has existed for decades, but it attracted mainstream political attention in 2023 amid rising concerns about generative AI and artificial general intelligence. These concerns culminated in the first AI Safety Summit at Bletchley Park in November of the same year. Since 2025, the term “AI safety” has become increasingly politically contentious, as have AI safety topics related to diversity, equity, and inclusion (DEI), such as discriminatory model behavior related to gender or race. By contrast, related topics such as transparency around adverse AI incidents and the prohibition of deepfake intimate images receive broad bipartisan US and multilateral support.
American Initiatives
Several US federal government policies from the past year include AI safety measures, especially for risk-management practices and guardrails to prevent detrimental AI use. These policies include:
- two Office of Management and Budget memos on the expansion, integration, and acquisition of AI in federal agencies, which require agencies to implement compliance plans, human oversight, and risk-management practices for “high-impact AI”. The language used mirrors that of some Biden-era policies.
- a US AI Action Plan that also calls on federal agencies to address high-risk AI use cases. It directs CAISI to evaluate frontier AI systems for national security risks and recommends that NIST develop deepfake guidelines and forensic benchmarks.
- keeping the United States in the International Network for Advanced AI Measurement, Evaluation, and Science—formerly the International Network of AI Safety Institutes—with eight other member countries and the EU. The network aims to strengthen the science that underpins AI evaluation.
- the TAKE IT DOWN Act, signed in May 2025, which targets nonconsensual intimate imagery (NCII), including that created through AI deepfakes. The law criminalizes the publication of NCII and requires websites and social media platforms to remove such images at the request of the victim within 48 hours, with stricter rules for images of minors. The legislation had bipartisan sponsors in the Senate, and the First Lady supported its passage.
Beyond the federal level, US states are leading the development of AI safety policies, particularly on transparency and high-risk use cases. California and New York have similar laws on the transparent safety practices of AI developers, and seven other states have introduced bills with significant similarities. California’s Transparency in Frontier Artificial Intelligence Act (SB-53) requires large frontier AI developers to write, implement, comply with, and publish a safety governance framework on their websites. It also mandates that developers report critical safety incidents to the state’s Office of Emergency Services and develop anonymous reporting channels for whistleblowers. Companies that do not report their activities may incur fines of up to $1 million per violation. New York’s RAISE Act is almost identical to SB-53; the differences are confined to applicability, enforcement, and whistleblower protections (New York has existing whistleblower protections that may already apply). The other states that have introduced transparency-focused legislation borrow language from these laws, but each bill is distinct. Utah’s AI Transparency Act, for example, also requires developers to publish child protection plans. Of the nine bills that have been introduced, Democrats proposed four, as did Republicans, and one had bipartisan sponsors.
Other AI safety measures include North Dakota’s expansion of its harassment laws to criminalize the use of AI robots in stalking. Legislation in Illinois and Utah bans or regulates the use of AI therapists. California’s “Companion Chatbot” law imposes consumer safeguards on chatbot operators, with additional requirements when the operator knows the user is a minor. Cities have also passed AI safety legislation. New York City requires that companies planning to use AI in hiring decisions conduct bias audits before deploying automated employment decision tools.
State-level AI policymaking has raised concerns within the Trump administration about a “patchwork of 50 different regulatory regimes”, and a recent executive order aims to restrict state AI legislation. The order, issued in December 2025, “preempts State AI laws that conflict” with federal AI policy and seeks to advance a “minimally burdensome national policy framework for AI”. It includes an exemption for state laws on child safety protections. It also establishes an AI Litigation Task Force that can challenge state laws inconsistent with the national policy framework on the basis that they “unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations”, or “require AI models to alter their truthful outputs, or … compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment”. Many legal experts believe the order is likely to face court challenges because the Constitution gives only Congress the authority to preempt state regulations. Whatever the legal outcome, observers argue that the order may have a chilling effect on new state-level AI legislation, including on AI safety.
What Should Europe Do?
European policy stakeholders seeking common approaches and capacity sharing with the United States on AI safety have more opportunities than they may think. High-risk use cases, particularly those involving national security, remain a US federal and state government priority, and transparency requirements are gaining traction at the state level. Europeans should therefore frame their efforts to collaborate with the United States not as AI safety initiatives but as actions to address catastrophic risk, cybersecurity, children’s safety, or privacy. This framing is more likely to result in productive dialogue and collaboration. Potential forums and stakeholders for such exchange include global institutions, such as the International Network for Advanced AI Measurement, Evaluation, and Science and the writing group of the International AI Safety Report. AI safety enforcement agencies, such as the EU’s AI Office, the California Office of Emergency Services and Department of Technology, and the oversight office within New York’s Department of Financial Services, could also prove to be good venues.
Transatlantic AI safety policy exchange and alignment can advance, but it may require a narrower focus and new connections between European officials and US state and local lawmakers.
The views expressed herein are those solely of the author(s). GMF as an institution does not take positions.