The AI Act — a breach of EU fundamental rights charter?

The AI Act, which is now set to be finally adopted by MEPs in April, provides some specific rules for the use of emotion recognition systems (ERS) for law enforcement. For instance, police authorities deploying ERS are not required to inform people when they are exposed to these systems.

The use of AI systems that claim to infer emotions from biometrics (such as face and voice) is only prohibited “in the areas of workplace and education institutions” (subject to an unclear ‘safety’ exception), not in contexts such as law enforcement and migration.

The scientific validity of ERS raises serious concerns: the expression of emotions varies considerably across cultures and situations, and even within a single person, making such systems not only inaccurate but also inherently discriminatory.

The scientific basis of facial-emotion recognition systems has been called into question, with critics likening their assumptions to pseudo-scientific theories such as phrenology or physiognomy.

Consider systems such as iBorderCtrl, where a virtual policeman uses a webcam to scan your face and eye movements for signs of lying. At the end of the interview, the system provides you with a QR code that you have to show to a guard when you arrive at the border. The guard scans the code using a handheld tablet, takes your fingerprints, and reviews the facial image captured by the avatar to check whether it corresponds with your passport. The guard’s tablet then displays a score out of 100, telling them whether the machine has judged you to be truthful or not.

In addition to the ‘snake-oil AI’ issue, there is more:

First, the intrusive nature of these systems will certainly increase the imbalance of power between the person concerned and the public authority.

Second, there is the possible ‘black box’ nature of the AI system: once a person is subjected to ERS, even if ‘just’ to assist decision-making, the action of the law enforcement or migration authorities becomes unpredictable: what will be the impact of any of my voluntary or involuntary micro-expressions on my reliability score?

The EU Court of Justice has previously held that AI producing adverse outcomes which are neither traceable nor contestable is incompatible with the protection of fundamental rights. And fundamental rights cannot be limited by an AI which is neither fully understandable nor fully controlled in its learning.

Third, the use of such AI entails the objectification of the person and a systemic disregard for bodily sovereignty. The use of particularly intrusive AI systems, such as the ones that (claim to) infer the emotions of persons from their biometric data, affects the fundamental right to privacy, autonomy and dignity of the person concerned.

Dignity

The concept of dignity featured prominently in the top court’s rejection of certain examination methods, in the context of migration, that intrude into the personal sphere of asylum seekers. In a previous case, the court ruled that sexual orientation ‘tests’, conducted by national authorities when assessing a fear of persecution on grounds of sexual orientation, would by their nature infringe human dignity.

In my view, this would be the case, and even more so, with ERS AI used to assess an applicant for asylum.

Notwithstanding the unacceptable risks for the rights and freedoms of the person affected by the use of these systems, notably by public authorities, ERS and other AI ‘tools’ such as the polygraph (the lie-detector) are merely classified as high-risk AI.

But these systems would be ‘CE-marked’ and enjoy free movement within the internal market.


The fundamental rights impact assessment, to be done by the users of the AI before its deployment, can hardly be considered a satisfactory remedy.

First, it would not solve the ‘foreseeability’ issue (a person may or may not ultimately be subject to ERS AI, depending on the outcome of the ‘case by case’ assessment). Second, and most importantly, the issue at stake is not about ‘risks’. The harm to mental integrity suffered by the asylum seeker is not a risk: it is a certainty.

The right to be free from statistical inferences on our state of mind is indeed a right, and not a matter of circumstances. Morphing rights into risks is the unprecedented regulatory shift of the AI Act, which, in my view, in many cases does not stand the test of legality.

The risk-regulation approach can be at odds with the ‘right to have rights’. The harm to dignity stemming from physiognomic AI — which is per se unfair and deceptive — is not a matter of procedural safeguards, such as ‘human in the loop’ or ‘notice and consent’, or of technical fixes.

If we consider that in most cases the requirements for risk management are entrusted to technical standards, which thereby enter areas of public policy such as fundamental rights, the shaky legal foundations of the AI Act become even more apparent.

In advocating for an AI regulation that puts human rights before other considerations, I focussed on the ‘datafication’ of the human body. Differential treatment of persons on the basis of machine-made inferences from the body (face, voice, the way we walk), such as ‘biometric categorisations’ and ‘recognised’ thoughts, inclinations, beliefs and intentions, is not just a matter of ‘snake oil’ AI: it is a line no good society should cross.

Other aspects of the AI Act, namely the regulation of police use of real-time and retrospective facial recognition of persons in publicly accessible spaces; the ‘categorising of biometric data in the area of law enforcement’; and predictive policing based on profiling, might also be assessed by the EU court as lacking the requirements of foreseeability, necessity and proportionality.

For these reasons, I hope MEPs will not approve the AI Act in its current text. The EU is a guiding light on fundamental rights in a challenging world.

The normalisation of arbitrary ‘algorithmic’ intrusions on our inner life via our physical appearance (be it at job interviews, when walking the street, or via chat or video bots at the borders) would leave a legacy of disregard for human dignity.

Decision-making based on our putative emotions, inferred by AI from face or voice (‘what the body would say’), threatens our fundamental rights heritage: the dream of our parents, which somehow became reality after the Second World War, and which we should carefully entrust to our children.

The trilogue agreement on the AI Act will be voted on by the parliament’s internal market committee on Tuesday (13 February) and by the plenary in April, as the final steps to adopt the text.

Source: euobserver.com
