Ban racist and deadly AI from the borders of Europe

By Webdesk


The European Union is in the final stages of drafting first-of-its-kind legislation to regulate harmful uses of artificial intelligence. As it stands, however, the proposed law, dubbed the EU AI Act, contains a deadly blind spot: it does not ban the many harmful and dangerous uses of AI systems in the context of immigration enforcement.

We, a coalition of human rights groups, call on EU lawmakers to ensure that this groundbreaking legislation protects everyone, including asylum seekers and others on the move at Europe’s borders, from dangerous and racist surveillance technologies. We are calling on them to ensure that technology is used to #ProtectNotSurveil.

AI makes borders more deadly

Europe’s borders are becoming deadlier by the day. Data-intensive technologies, including artificial intelligence systems, are increasingly being used to make Fortress Europe impenetrable. Border and police authorities are using predictive analytics, risk assessments drawing on colossal interoperable biometric databases, and AI-augmented drones to track people on the move and push them away from EU borders. For example, the European border agency Frontex, which has been accused of complicity in serious human rights abuses at many EU borders, is known to use various AI-powered technological systems to enable violent and illegal pushback operations.

From lie detectors to drones and other AI-powered systems, border security tools have been proven to push people onto more precarious and deadly routes, strip them of their basic privacy rights and unfairly prejudice their immigration claims. These technologies are also known to criminalise and racially profile people on the move and to facilitate unlawful deportations in violation of humanitarian protection principles.

The EU AI Act can resist the oppressive use of technology

At a time when EU member states are racing to develop anti-migration policies in defiance of their national and international legal obligations, limiting and regulating the use of artificial intelligence in migration control is critical to prevent harm.

It is also an opportunity not to be missed to prevent the accumulation of deadly, inhumane powers in the hands of authoritarian governments – both in the EU and in countries to which the EU seeks to externalise its borders.

The EU AI Act can provide important red lines and accountability mechanisms to help protect the fundamental rights of people subjected to AI systems in the context of migration control. As outlined in our proposed amendments to the AI bill, these could include banning the use of racist algorithms and predictive analytics that label humans as “threats”, as well as dubious AI-based “lie detectors” and other emotion recognition tools used to unlawfully push people away from the border. The EU has long been working to protect its citizens from biometric mass surveillance, and such protections are expected to be part of the final EU AI Act. These efforts should not discriminate on the basis of nationality or racialised perceptions of risk, and should be extended to all people in Europe.

Power to the people, not the private sector

We also fear that leaving the use of AI in migration control to EU member states will lead to a global race towards ever more intrusive technologies to prevent or deter migration – technologies that would fundamentally change the lives of real people or, in the worst case, end them.

If the EU AI Act fails to regulate and restrict the use of AI technologies in migration enforcement, private actors will quickly exploit the loophole to aggressively push new products. They will deploy their products at our borders unchecked, even as applications that do fall under the AI Act become subject to stricter regulations and barriers to entry.

This is a lucrative multibillion-dollar industry. Frontex spent 434 million euros ($476m) on military-grade surveillance and IT infrastructure between 2014 and 2020. Technologies will be deployed and trained at the expense of people’s fundamental rights and later repurposed in contexts other than migration control, without crucial oversight at the design stage.

We have already seen private actors – such as Palantir, G4S and the lesser-known Buddi Ltd – take advantage of governments’ appetite for increased surveillance to sell technology that enables inhumane practices at borders and violations of the fundamental rights of people on the move.

There is still time for the EU to do the right thing: ensure that unacceptable uses of AI in the context of migration are banned and that all loopholes are closed, so that EU standards on privacy and other fundamental rights apply equally to all.

Signatories

Lucie Audibert, attorney, Privacy International

Hope Barker, senior policy analyst, Border Violence Monitoring Network

Mher Hakobyan, AI regulation advocate, Amnesty International

Petra Molnar, associate director, Refugee Law Lab, York University; fellow, Harvard Law School

Derya Ozkul, senior research fellow, University of Oxford

Caterina Rodelli, EU policy analyst, Access Now

Alyna Smith, Platform for International Cooperation on Undocumented Migrants

The views expressed in this article are those of the authors and do not necessarily reflect the editorial view of Al Jazeera.
