In 2019, border guards on the frontiers of Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements in an attempt to detect when a person was lying to a border guard. The trial was backed by nearly $5 million in European Union research funding, as well as nearly 20 years of research at Manchester Metropolitan University in the UK.

The trial sparked controversy. Polygraphs and other technologies that claim to detect lies from physical signals have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-detection algorithm did not work, and the project's own page acknowledged that the technology "could mean a threat to human rights."

This month, Silent Talker, the company that spun out of Manchester Met and supplied the technology behind iBorderCtrl, was dissolved. But that is not the end of the story. Lawyers, human rights activists, and lawmakers are pushing for a European Union law to regulate AI that could ban systems claiming to detect human deception at borders, citing iBorderCtrl as an example of the potential harm. Former Silent Talker officials were not available for comment.

Banning AI lie detectors at borders is one of the thousands of amendments to the AI Act being considered by officials from EU member states and members of the European Parliament. The law aims to protect the fundamental rights of EU citizens, such as the right to live free from discrimination and to seek asylum. It labels some AI use cases as high risk, some as lower risk, and bans others outright. Those lobbying over the AI Act include human rights groups, trade unions, and companies such as Google and Microsoft, which want the act to distinguish between those who create general-purpose AI systems and those who deploy them for specific purposes.

Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would permit systems like iBorderCtrl, adding to what it describes as Europe's publicly funded border AI ecosystem. The analysis calculated that over the past two decades, about half of the €341 million ($356 million) in funding for AI applications at borders, such as profiling migrants, went to private companies.

The use of AI lie detectors at borders effectively creates new immigration policy through technology, one that treats everyone as suspect, says Petra Molnar, associate director of the nonprofit Refugee Law Lab. "You have to prove you are a refugee, and you're assumed to be a liar unless you can prove otherwise," she says. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."

Molnar, an immigration lawyer, says people often avoid eye contact with border or immigration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle to communicate across cultures or to speak with people who have experienced trauma, she says, so why would anyone believe a machine can do better?


