AI Act — leaving oversight to the techies will not protect rights

In May, the European Parliament is scheduled to vote on the landmark Artificial Intelligence Act — the world’s first comprehensive attempt to regulate the use of AI.

A lot has been said about the act’s risk-based approach, and the manner in which certain technologies have been classified under it — from remote biometric technologies, to emotion recognition, to the use of AI in migration contexts.

Much less attention, however, has been paid to how the key aspects of the act — those relating to “high risk” applications of AI systems — will be implemented in practice. This is a costly oversight, because the currently envisioned process could significantly jeopardise fundamental rights.

Technical standards — who, what, and why it matters

Under the current version of the act, high-risk AI technologies include those used in education, employee recruitment and management, the provision of public assistance benefits and services, and law enforcement. These technologies are not prohibited, but any provider who wants to bring one to the European market will need to demonstrate compliance with the act’s “essential requirements.”

However, the act is vague on what these requirements actually entail in practice, and EU lawmakers intend to cede the responsibility for defining them to two little-known technical standards organisations.

The European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) are identified in the AI Act as the key bodies to develop standards that set out the technical frameworks, requirements, and specifications for acceptable high-risk AI technologies.

These bodies are almost exclusively composed of engineers and technologists who represent EU member states. With little to no representation from human rights experts or civil society organisations, there is a real danger that these bodies will have the de facto power to determine how the AI Act is implemented, without the means to ensure that its intended objective — to protect people’s fundamental rights — is truly met.

At ARTICLE 19, we have been working for over half a decade on building and strengthening the consideration of human rights in technical standardisation bodies, including the Internet Engineering Task Force (IETF), the Institute of Electrical and Electronics Engineers (IEEE), and the International Telecommunication Union (ITU). We know from experience that such bodies are not set up to meaningfully engage with these considerations.

When it comes to technology, it is impossible to completely separate technical design choices from their real-world impacts on the rights of individuals and communities. This is especially true of the AI systems that CEN and CENELEC would need to address under the current terms of the act.

The standards they produce will likely set out requirements related to data governance, transparency, security, and human oversight.

All of these technical elements will have a direct impact on people’s right to privacy, and knock-on effects for their rights to protest, due process, health, work, and participation in social and cultural life. However, to understand what these impacts are and effectively address them, engineering expertise is not sufficient; we need human rights expertise to be part of the process, too.

Although the European Commission has made specific references to the need for this expertise, as well as for the representation of other public interests, both will be hard to achieve in practice.

With few exceptions, CEN and CENELEC membership is closed to participation from any organisations other than the national standards bodies that represent the interests of EU member states. Even if there were a robust way for human rights experts to participate independently, there are no commitments or accountability mechanisms in place to ensure that fundamental rights are upheld in this process, especially when those rights come into conflict with business or government interests.

Standard-setting as a political act

Standardisation, far from being a purely technical exercise, will likely be a highly political one: CEN and CENELEC will be tasked with answering some of the most complicated questions left open by the act’s essential requirements — questions that would be better addressed through open, transparent, and consultative policy and regulatory processes.

At the same time, the European Parliament will not have the ability to veto the standards mandated by the European Commission, even when the details of these standards may require further democratic scrutiny or legislative interpretation. As a result, these standards may dramatically weaken the implementation of the AI Act, rendering it toothless against technologies that threaten our fundamental rights.

If the EU is serious about its commitment to regulating AI in a way that respects human rights, outsourcing those considerations to technical bodies is not the answer.

A better way forward could include the establishment of a “fundamental rights impact assessment” framework, and a requirement that all high-risk AI systems be evaluated under this framework as a condition of being placed on the market. Such a process could help ensure that the risks these technologies pose are properly understood, analysed and, if needed, mitigated on a case-by-case basis.

The EU’s AI Act is a critical opportunity to draw some much-needed red lines around the most harmful uses of AI technologies, and to put in place best practices that ensure accountability across the lifecycle of AI systems. EU lawmakers intend to create a robust system that safeguards fundamental human rights and puts people first. However, by ceding so much power to technical standards organisations, they undermine the entirety of this process.

Source: euobserver.com
