
Beyond "US Innovates, Europe Regulates"

Lessons and recommendations from a Transatlantic AI Exchange
March 30, 2026

Despite diverging transatlantic approaches and perceptions of legislation’s role in artificial intelligence (AI) deployment, lawmakers seeking to govern and develop the technology for public benefit on both sides of the Atlantic face shared challenges: heightened political tensions, tradeoffs between rapid deployment and thoughtful guardrails, and the inherently wide swath of policy and economic areas that AI inhabits. To help address such challenges, take stock of differences, and foster mutual understanding in times of transatlantic tension, GMF Technology conducted a study tour in December 2025 to Paris and Brussels with a bipartisan delegation of US state lawmakers as part of the Transatlantic Tech Exchange (TTX). The delegation met with members of the European and French parliaments, European Commission staff, civil society, innovators, and researchers for meetings focused on AI systems, data, and topics relevant to the AI value chain. These subjects included the natural resources, industrial and network infrastructure, data, and public and private investments and inputs that create the machine learning systems known or marketed as AI. The technology’s value chain framework can help policymakers draw connections among discrete AI policy issues to advance governance and technology in the public interest.

Based on conversations from the study tour, this piece presents three key findings and recommendations for transatlantic AI cooperation. They are: 

  • Children’s safety and high-risk AI redlines represent the clearest areas of near-term transatlantic policy alignment, and lawmakers should establish an AI dialogue on these two areas.

  • Structural barriers to AI competitiveness extend beyond regulation and span the entire AI value chain. Governments on both sides of the Atlantic should leverage their roles as market makers to enhance their competitiveness and power to set standards.

  • Legislators and the legislatures to which they belong are not structured to address the whole-of-society challenges of AI. They should invest in cross-sectoral AI expertise. 

 

FINDINGS AND RECOMMENDATIONS

Finding: Kids’ safety and AI redlines are natural bridges for cooperation.

Across the AI value chain, children’s safety emerged as the clearest point of convergence and a potential transatlantic bridge for policy cooperation. Lawmakers, however, often lack a complete picture of the range of child-safety initiatives that have been tried and tested in jurisdictions outside their own. Given the universality of interest and shared motivation in this area, they need greater room to exchange and learn from legislative successes and failures on both sides of the Atlantic. Topics ripe for further exchange include live-wire issues such as privacy-protecting age verification methods (especially for children who may not have official identification), the impact of AI chatbots on children’s mental health, and detection and prevention of AI-generated child sexual abuse material.

A second area of opportunity for policy cooperation is “AI redlines”: use cases that present unacceptable risk either to civil or fundamental rights or to national security. Such redlines can include social scoring systems and malicious cyberattacks on critical infrastructure.

Recommendation: US and European policymakers should launch an AI dialogue on exchanging lessons and developing approaches on narrow areas of shared concern, focused first on children’s safety and AI redlines. The dialogue would create a structured environment to regularly bring together EU, member-state, and US state-level legislators to share best practices. To support this initiative and highlight developments from both sides of the Atlantic, the dialogue should be accompanied by a knowledge-sharing mechanism that could take the form of a dashboard, database, or repository managed by a third-party organization. It would provide an up-to-date, centralized repository of ongoing global initiatives, such as the results of transparency and audit reports mandated by state or local jurisdictions and results of information- and capacity-sharing initiatives including red-teaming for testing AI models. Centralizing this information will allow for comparisons, sharing of lessons learned, and iterative—rather than duplicative—policy efforts.

Finding: The “US innovates, Europe regulates” stereotype oversimplifies the transatlantic AI picture, giving the impression of a lack of shared priorities. While both sides seek to protect citizens and boost competitiveness, structural barriers to AI competitiveness transcend regulation and exist along the entire AI value chain.

The window of opportunity for transatlantic cooperation on technology has in some sense never seemed smaller. Recent US tariff threats and diplomatic pressure to change EU laws that impact US technology companies, such as the Digital Services Act and the Digital Markets Act, along with travel restrictions on researchers and policymakers, have put technology at the heart of tense transatlantic relations. European Commission President Ursula von der Leyen vowed in her annual State of the Union address that Europe would stand by its rules, stating that “we set our own standards, we set our own regulations.” 

The emphasis on familiar stereotypes of a Europe that seeks only to regulate technology and a United States that only innovates can give the impression that the two sides lack shared AI priorities. The Trump administration’s National Security Strategy, which criticized Europe’s “failed focus on regulatory suffocation”, was released just three days before the delegation arrived in Brussels. AI governance is also at the heart of tension between US federal and state policymaking. As the US state lawmakers departed for Europe, the AI policy world’s attention focused on the imminent release of Executive Order 14365, which aims to preempt certain state AI legislation in an effort to “check the most onerous and excessive laws emerging from the States that threaten to stymie innovation”. 

Discussions during the exchange complicated and challenged the “US innovates, Europe regulates” narrative. American lawmakers expressed interest in, and excitement about, new AI regulations on both sides of the Atlantic. Study tour conversations, however, shifted the lawmakers’ preconception that Europe is singularly focused on heavy-handed AI governance and regulation. In fact, many EU officials discussed their support for the Commission’s simplification agenda, which aims to “radically lighten the regulatory load and related costs” and improve enforcement. A cornerstone of this agenda, the Commission’s November 2025 Digital Omnibus proposal, would even loosen or eliminate some requirements of the AI Act.

AI’s rapid development creates new urgency around economic competitiveness, along with uncertainty about balancing the costs, benefits, and tradeoffs of regulation. EU and US lawmakers face the shared challenge of ensuring that the technology fosters growth and that societies do not miss out on its benefits while mitigating AI risks and harms and determining the appropriate scope of regulation.

Analyses such as the 2024 Draghi Report highlight regulation as a significant roadblock to EU AI competitiveness, but the study tour revealed that bloc-wide obstacles far beyond regulation impact that competitiveness. More immediate barriers include a financing environment characterized by fractured capital markets and venture ecosystems, energy costs, talent retention, and access to compute and datasets. Such factors also complicate the notion that innovation and regulation are mutually exclusive. Moreover, the EU’s lag in AI is unlikely to be explained meaningfully by the AI Act itself when many of the legislation’s provisions have yet to enter into force. At the same time, US companies operating in Europe already cooperate with EU enforcement mechanisms pertaining to issues such as data portability or interoperability.

The EU’s ongoing simplification agenda also aims to avoid a policy environment in which regulation impedes innovation. The agenda exemplifies the competing pressures and debates over how to modernize and consolidate laws and boost competitiveness, all while maintaining fundamental-rights protections. Conversations throughout the exchange revealed that simplification means different things to different stakeholders. Supporters of the Commission’s proposed approach argue that EU tech legislation must be reduced. Critics, especially from civil society groups, perceive a 180-degree turn by the Commission, from championing tech regulations to undermining them, and with them the rights protections they provide. This perceived reversal creates regulatory uncertainty, which is counterproductive for business. Questions on the simplification process itself also remain. For example, should the same laws apply to small but often rapidly growing companies, given that easing requirements on startups could complicate establishing a consistent regulatory landscape for businesses of all sizes and maturities?

Recommendation: Governments on both sides of the Atlantic should use innovation and industrial policy to advance the dual objectives of increasing AI competitiveness and setting market-making governance standards for safety and consumer protection. In so doing, they can generate iterative real-world data on AI’s impact on the economy and society to inform policy choices.

Embracing their role as AI customers and de facto standard-setters, governments can use industrial policy and public procurement to act as consumer and market-maker. At its core, top-down industrial policy should bolster AI competitiveness across the value chain and address nonregulation barriers to innovation. European legislators should advance the 28th Regime proposal to simplify the legal landscape for startups, and lawmakers on both sides of the Atlantic should make bold investments in clean and next-generation energy technologies to fuel AI energy demands. These lawmakers should also support the creation of transatlantic public interest datasets to train future AI systems. Rules about AI adoption and use by government actors and agencies can also create a foundation for industry standards. White House Office of Management and Budget guidelines, for example, require risk-management practices for some AI use cases, and the European Commission provides model contractual clauses for public procurement of AI.

To create a virtuous cycle in which government policy is responsive to evolving societal needs, lawmakers should pair such initiatives with iterative efforts to gather real-world data on the economic and social impact of AI itself and related policies. Reporting requirements, for example, can help provide valuable information about AI companies’ growth and resource requirements down the value chain. They can also provide data and insight into workforce and employment trends, economic growth and productivity, energy demands, or data governance and privacy. Regulatory sandboxes can foster innovation and generate data about real-world effects, and well-crafted auditing requirements can help build citizen trust and inform responsible AI decision-making.

Finding: Lawmakers are increasingly aware that AI policymaking requires literacy in a wide range of interlinked sectors. They seek ways to harness expertise and improve engagement with peers.

US and European legislators often focus on discrete topics, such as children’s safety legislation, AI fraud, or data center permitting, but they lack the breadth of knowledge across the AI value chain to meet societal objectives with the full spectrum of policy tools at their disposal. The questions and policy levers that AI encompasses range from data center development and industrial policy to intellectual property protections, data governance and privacy policies, safety guardrails, energy, antitrust, compute, specialized applications, health, and the future of work. Yet due to the technology’s breadth, lawmakers working on AI often lack knowledge and visibility of areas beyond those on which they are focused. As such, these lawmakers often fail to contemplate upstream and downstream effects of proposed initiatives. They are also unable to harmonize policies or leverage synergies and coalitions across the value chain. Legislatures, too, are structured to address discrete challenges and ill-equipped to tackle wide-ranging societal change.

Discussions during the exchange revealed a clear appetite among lawmakers for initiatives to further their own AI literacy by engaging outside experts and bringing together actors from the engineering, regulatory, and startup sectors, and from small- and medium-sized enterprises. They also seek to engage peers from across the Atlantic with complementary sectoral expertise; European legislators, in particular, want exchanges with US state officials. They and their American counterparts see the need to connect issue areas and educate peers working in other parts of the AI value chain to spur initiatives.

Recommendation: Federal and state governments should invest in AI expertise for the policymaking process, and AI-focused legislators should educate peers on links to policy areas not traditionally seen through a tech policy lens.

A dedicated and technically skilled civil service focused on AI is crucial to implement policy effectively and serve as a resource for lawmakers who draft legislation on topics such as AI auditing, adversarial testing, threat modeling, and model documentation. The EU’s AI Office, in charge of implementing the AI Act, should consider waiving requirements for EU nationality for its employees so that it can attract candidates with the specialized knowledge required for key enforcement roles and expand its pool of technical talent. US states could create incentive programs to allow AI experts from the private sector or academia to rotate through AI-related public-service positions in local and state government, as the United Kingdom’s No10 Innovation Fellowship does.

US lawmakers working on AI legislation should focus on committee and party leadership, even those not involved in tech issues, to push initiatives that can be seen as too niche or technical. They should ensure that fellow lawmakers not focused on AI are informed about the impact of policies related to the technology on sectors of concern. Support from a strong, behind-the-scenes team of nonpartisan technical experts is key for boosting the credibility of outreach by AI-focused lawmakers to those less familiar with the technology, helping them to define what is feasible and craft effective laws.

 

CONCLUSION

The TTX to the EU helped identify areas in which US and EU approaches on AI policy are closer than they may seem. The exchange also underscored that maintaining avenues for transatlantic peer exchange and collaboration is as important as ever. 

 

BACKGROUND

The TTX is a signature program of GMF Technology that aims to build trust and mutual understanding between European and American lawmakers at the federal and state levels, facilitate policy alignment through dialogue between them and innovators, and generate policy analyses and recommendations to advance innovations grounded in democratic principles and the public interest. This inaugural TTX to the EU was organized with the generous support of the Project Liberty Institute (PLI). The delegation included Utah state Representative Doug Fiefia (Republican), Virginia state Delegate Michelle Maldonado (Democrat), and Massachusetts state Senator Michael Moore (Democrat), each a leader in technology policymaking in their states.

The views expressed herein are those solely of the author(s) and do not necessarily reflect those of the delegation, GMF, or PLI. GMF as an institution does not take positions.