The EU AI Act Proposal: Europe’s Opportunity to Safeguard the Rights of People on the Move

July 8, 2022
by Petra Molnar
10 min read
Photo Credit: Gopixa / Shutterstock.com

Editor’s Note: This paper is part of The Dialogue on Tech and Migration (DoT.Mig) series; see Exploring the Potential of Data Stewardship in the Migration Space and Digital Wallets and Migration Policy: A Critical Intersection for the related pieces.

Introduction

In April 2021, the European Commission tabled its proposed Regulation on Artificial Intelligence (the AI Act). This draft governance document is the first regional attempt to regulate a broad group of technologies that are automated or that employ artificial intelligence (AI). These technologies rely on vast data sets and algorithms and use partial or full automation to make decisions.

The AI Act cuts across various use cases, including commercial development, criminal justice, public administration, and border enforcement. As the use of AI increases exponentially worldwide, the EU’s sweeping regulatory approach attempts to balance innovation with robust governance.

1. Why does the AI Act place migration in the high-risk category?

The AI Act recognizes that automated technologies can pose varying levels of risk to individuals and communities. It therefore sets out a risk matrix of four categories, from minimal risk to an outright ban, to categorize and demarcate risks and responsibilities when developing and deploying automated technologies in varied contexts. Migration technologies are currently classified as ‘high risk.’

Migration technologies, particularly when partially or fully automated, pose significant human rights risks to people crossing borders, seeking refugee status, or immigrating. For example, AI-based polygraph machines piloted at borders can be highly discriminatory and inaccurate due to problematic assumptions about human behavior, while surveillance technologies using predictive analytics can infringe on people’s ability to claim asylum safely and humanely. Fundamental and internationally protected rights such as the right to life, liberty, and security of person; right to seek asylum; freedom from discrimination; right to privacy; and freedom of expression are all affected when automated technologies make decisions at or around the border. Administrative and procedural rights are also impacted, such as the right to a fair and impartial decision-maker, right of appeal, and the right to procedural fairness. These fundamental rights are particularly important to consider when evaluating the high-risk, opaque, and discretionary decision-making that underpins immigration and refugee processing worldwide.

2. What high-risk AI systems relate to migration, asylum, and border control management?

Migration technologies encompass a vast array of use cases. The AI Act recognizes a variety of categories of migration and border technologies, including:

Predictive analytics: The use of vast data sets to predict population movement is on the rise among state actors, interjurisdictional players like Frontex, and international organizations like the United Nations High Commissioner for Refugees (UNHCR) and the International Organization for Migration (IOM). Prediction of population movement can infringe on the fundamental rights to life, liberty, and security of person if used for interdiction measures on land or at sea that lead to loss of life. For example, Frontex, the European Border and Coast Guard Agency, has been testing various predictive analytics and unpiloted, AI-powered, military-grade drones in the Mediterranean for the surveillance and interdiction of migrant vessels seeking European shores to file asylum applications. If unregulated, these technologies may be used to support illegal interdiction measures, both on land and at sea, to prevent people from seeking refuge.

Biometrics and emotion recognition: Biometric systems rely on data from the human body to make decisions and predictions, while various facial recognition-type technologies claim to go as far as recognizing emotion. Projects such as iBorderCtrl, an AI-type lie detector, or AVATAR, an ‘automated virtual agent for truth assessment in real-time,’ highlight the appetite for automated technologies in border and immigration settings. However, in these contexts, which are already replete with opaque decision-making, insufficient mechanisms of redress, and vast power differentials, biometrics and emotion recognition are extremely high risk: facial recognition has been shown to be highly discriminatory and biased, particularly against racialized groups and marginalized individuals, as well as inaccurate and culturally insensitive.

Border enforcement: Partially or fully automated surveillance tools are increasingly being rolled out along the borders of Europe. For example, Greece’s Centaur system features drone flights over newly built refugee camps on the Aegean islands to detect incidents, perimeter violation alarms with cameras, control gates with metal detectors and integrated cameras, and X-ray machines. Other examples include sentry towers, thermal cameras, Long Range Acoustic Devices (LRADs, or "sound cannons"), and the experimental ROBORDER project, a border surveillance system encompassing AI-enabled "heterogeneous robotic capability" through aerial, surface, underwater, and ground vehicles. These technologies can be used to prevent people from crossing into European territory to exercise their right to asylum, potentially even leading to death at land or sea borders.

Immigration and refugee processing: Automation is also making its way into individual immigration and refugee processing through streamlined data sharing, AI-enabled border control gates, identity verification technologies such as voice recognition projects for refugee applicants in Germany, and various risk assessments at and around the border. Europe is not alone in these innovations: other jurisdictions, including the United Kingdom and Canada, have already integrated automated decision-making into their immigration processing.

Data sharing and interoperable databases: As more data is collected on people crossing borders, automated decision-making is underpinned by vast data sets and interoperable databases such as EURODAC (the European Asylum Dactyloscopy Database) and SIS (the Schengen Information System), among others, managed by eu-LISA (the European Union Agency for the Operational Management of Large-Scale IT Systems). The collection, storage, and sharing of sensitive data can lead to high-risk applications of migration technologies, especially when insufficient safeguards exist against sharing sensitive data across contexts, such as when data collected for immigration purposes is shared with law enforcement.

3. What is positive about this EU AI proposal on migration?

The EU’s proposal to regulate AI is the first regional attempt to govern a vast array of divergent technologies that impinge on people’s rights and freedoms. Through this comprehensive set of proposals, the EU is taking a leading role globally. Technology is difficult to regulate, in part because the private sector’s impetus to innovate without regulatory constraints creates a lucrative market that engages insufficiently with the risks of migration technologies. The AI Act recognizes migration technologies as high risk, opening space for discussion on how best to regulate a broad class of technologies at and around the border, including conversations about human rights and data protection impact assessments, among other measures.

4. What does the AI Act miss on migration?

While it classifies migration technologies as ‘high risk,’ the AI Act does not sufficiently acknowledge that these technologies both create and exacerbate systemic racism and discrimination, particularly against historically marginalized groups such as refugees, people crossing borders, and immigrants. AI-type technologies can create new biases and replicate existing ones against already marginalized populations. Various international laws recognize the rights of people on the move to be free from discrimination, to have their privacy safeguarded, and to have their asylum rights protected. The act should build on the framing already recognized in international law, including the responsibility of businesses (and states) to develop technologies that do not infringe on people’s fundamental rights.

The act, as currently written, does not present an opportunity to fully ban migration technologies, even those, such as predictive analytics and individual risk assessments, that threaten people’s right to life, liberty, and security of person, along with other fundamental human rights. Its analysis of the impact of migration technologies also lacks a gender framing and fails to employ an intersectional approach that recognizes the vast power differentials between the actors who develop and deploy technologies and the individuals and communities at the receiving end of high-risk innovation.

The act can also be difficult for civil society and affected communities to understand and engage with, limiting robust debate and stifling participation in policymaking by those who are most affected. However, a coalition of groups and academics, including EDRi, Access Now, the Migration and Technology Monitor, Privacy International, and others (including this author), is calling for amendments to the AI Act to recognize the harms that border technologies create. In particular, the coalition is calling for the following changes:

  1. Update the AI Act’s prohibited AI practices (Article 5) to include ‘unacceptable uses’ of AI systems specifically in the context of migration. This would include prohibitions on AI-based individual risk assessment and profiling systems that draw on personal and sensitive data; AI polygraphs; predictive analytic systems used for the interdiction, curtailment, and prevention of migration; and a full prohibition on remote biometric identification and categorization in public spaces, including in border and migration control settings.
  2. Include within ‘high-risk’ use cases AI systems in migration control that require clear oversight and accountability measures. This would incorporate other AI-based risk assessments; predictive analytic systems used in migration, asylum, and border control management; biometric identification systems; and AI systems used for monitoring and surveillance in border control.
  3. Amend Article 83 to ensure AI used in large-scale EU IT databases is within the scope of the AI Act and that the necessary safeguards apply for uses of AI in the EU migration context.

5. What are the next steps for the AI proposal generally?

The AI Act is slowly moving through the EU’s legislative machinery. The proposed act will go through various debates and hearings over the summer of 2022, including opportunities for civil society, critics, and policymakers to weigh in on framings, call for bans, and debate what should or should not be included in the governance framework.

The AI Act remains a proposed piece of regulation, and various aspects of it may change in the coming years as it moves toward final adoption and eventually enters into force.

6. What parts should policy stakeholders on migration watch?

Greater attention is finally being given to the high-risk impacts of migration technologies. As public awareness of these issues grows, policy stakeholders on migration will need to be at the forefront of these complex discussions to ensure that technological development does not exacerbate, or even create, far-reaching human rights risks for people on the move. More discussion is also needed about which types of border and migration technologies belong in the high-risk category and which should be banned outright, such as predictive analytics used for interdictions at sea or on land, individual risk assessments for the purposes of refugee or immigration applications, and AI-type polygraph machines, among others.

Highlighting the lived experiences of people who are at the sharpest edges of technological development will illuminate the vast number of ways that migration technologies present very high risks and should be strictly regulated, if not banned outright.

The DoT.Mig In Brief paper series is part of The Dialogue on Tech and Migration (DoT.Mig).

DoT.Mig provides a learning platform to connect the dots between digital technologies and their use and impact on migration policy, as well as connecting relevant stakeholders.

The DoT.Mig In Brief paper series highlights debates and concepts relevant to navigating the emerging field of Tech and Migration.

DoT.Mig is a forum by the Migration Strategy Group on International Cooperation and Development (MSG). The MSG is an initiative by the German Marshall Fund of the United States, the Bertelsmann Foundation, and the Robert Bosch Stiftung.

The views expressed in this publication are the views of the author alone and do not necessarily reflect those of the partner institutions.