AI Policy in EU Illiberal Democracies: The Experience in Hungary and Poland

January 29, 2024

This paper examines the emergence of AI policies in Hungary and Poland under illiberal governments and highlights their potential social and political consequences, particularly for democratic values and civil and fundamental rights. It focuses on the adoption of AI in the public sector, encompassing research and development, public administration, law enforcement, migration, and economic policy. In their AI policies, both countries’ governments have prioritized industry demands and deferred to the expectations of large foreign corporations (a stance at odds with their digital sovereignty rhetoric), while neglecting public consultation and the needs of the scientific community.

The AI policies implemented in Hungary and Poland by the Fidesz and Law and Justice (PiS) parties have been characterized, respectively, by centralization and fragmentation, with varying outcomes. The AI systems deployed do not safeguard citizens’ rights, as the political takeover of the justice system and partisan control of law enforcement have undermined redress mechanisms and limited legal protection against AI-related violations. The growing use of AI in election campaigns, coupled with the lack of democratic oversight, increases the risk of mass disinformation campaigns and electoral manipulation in both countries.

The cases of Hungary and Poland highlight key implications for democracy and human rights in the EU when illiberal actors who disregard these values control AI policy and governance. The new EU AI Act may offer some protection for the rule of law and individual rights, but its potential loopholes could allow the unlawful deployment of AI systems in vital areas. AI policies in both countries have reflected their governments’ illiberal tendencies, expanding state control over citizens and curtailing democratic processes. Centralized governance raises concerns about the potential for mass surveillance and censorship, while the lack of transparency and inclusivity in AI policymaking could further marginalize minority groups and vulnerable populations. However, there are key steps the EU can take to address these issues.

First, the EU should significantly increase funding for EU AI companies and give the new AI Office a mandate to help distribute funds for promising AI projects. This would help companies compete with global leaders and develop AI solutions aligned with human rights and democratic values.

Second, the EU and its member states should provide the European Commission (including the new AI Office) with sufficient resources to effectively enforce the AI Act. This includes funding for algorithm audits, legal compliance, and training for public-sector institutions.

Third, they should provide funding and resources to civil society organizations (CSOs), consumer-protection agencies, and other relevant stakeholders so they can develop the legal expertise and networks needed to exercise their rights and hold government institutions accountable.

Fourth, since the AI Act does not cover defense and national security, and makes exceptions for public security and migration control, parallel national measures aligned with EU fundamental values are needed to address potential abuses in these areas.

Fifth, the EU must make its AI regulatory system more inclusive and transparent. It can do so by developing new methods of multi-stakeholder consultation that involve CSOs, consumers, patients, workers, and vulnerable communities in future AI legislation and governance discussions. At the member-state level, public institutions could also be asked to voluntarily register AI systems that may pose threats and publish information about the associated risks.