The launch of ChatGPT in November last year has sparked a worldwide debate on Artificial Intelligence systems. Amidst Big Tech's proclamations that these AI systems will revolutionise our daily lives, the same companies are engaged in a fierce lobbying battle to water down regulation.
In April 2021, EU commissioners Margrethe Vestager and Thierry Breton presented a proposal for a European legal framework on AI. It was celebrated as the first global attempt to regulate AI, a technology that, as the commission observed, would "have an enormous impact on the way people live and work in the coming decades."
But AI is also big business. OpenAI, the creator of ChatGPT, doubled in value as Microsoft poured in $10bn [€9.44bn]. Google, in conversations with the EU, presented itself as an "AI first company" with "AI driving all their products."
Unsurprisingly, then, the European Union’s push to regulate has faced intense corporate lobbying attempts at every stage of the policy-making process.
A new report by Corporate Europe Observatory reveals how Big Tech has been able to slowly pick the AI Act apart.
Via years of direct pressure, covert groups, tech-funded experts — and a last-ditch push by the US government — tech companies have reduced safety obligations, sidelined human rights and anti-discrimination concerns, and secured carve-outs for some of their key AI products.
From the social media feeds we see on our timelines to AI-operated medical devices, AI is increasingly becoming part of our daily lives.
Whilst some uses of AI may be beneficial, these systems also come with serious risks. AI systems often do not work as intended, and they can be unaccountable. They can discriminate based on gender, disability or race. Indeed, their potential to exacerbate inequality has been criticised by the EU Fundamental Rights Agency and the UN High Commissioner for Human Rights.
Perhaps the best-known example is the now-infamous Dutch SyRI system, which falsely flagged families for social benefits fraud and was found to be in breach of the European Convention on Human Rights.
It's this possibility of abuse that informed the European approach to regulating AI: the greater the risk an AI system poses, the stricter the rules it must conform to. Under this risk-based approach, chatbots would count as limited-risk, while social scoring would be banned entirely. Only about 10-15 percent of AI systems would be considered high-risk.
But when the EU institutions were discussing the regulation, a question popped up: what to do about systems that can be used for a wide variety of applications, both low- and high-risk?
These have become known as "general purpose AI systems": systems that serve as the basis for more specialised AI and that can process audio, video, text, and physical data. Because of the scale of memory, data and hardware required, general purpose AI systems are primarily developed by American tech giants such as Google and Microsoft, which has announced it will build ChatGPT into all its Office products.
It is no surprise, then, that when the EU institutions announced they would include general purpose AI in the upcoming AI Act, alarm bells went off in Big Tech's well-funded European lobby networks.
Shady lobbying tactics
To give just an idea of the scale of these ongoing lobbying efforts, our new report documents at least 565 meetings between MEPs and business interests on the AI Act.
The efforts to de-fang the AI Act are not always conducted out in the open; some work through funded interest groups that claim to represent SMEs or start-ups but in reality parrot the lines of Big Tech.
These shady lobbying tactics have become increasingly controversial.
In February, Corporate Europe Observatory and LobbyControl, with the support of a cross-party alliance of MEPs, launched lobbyleaks.eu, a leak box to expose exactly this kind of Big Tech interference.
Help from Uncle Sam
In the Council, things are not much better. Meta, in private, admitted it was “in touch” with member states on the AI Act, but no data on Council lobby meetings is available. In an open letter to the Czech presidency of the Council, Microsoft saw “no need for the AI Act to have a specific section on [general purpose AI].”
Crucially, Big Tech has been able to get the US government to back up its position. Tech companies spent $70m lobbying the US Congress in 2021, and 2022 was described as a “gold rush” for AI lobbying in the US.
In an unusually overt display of interference in European law-making, the US government shared a "non-paper" with the Czech presidency in September 2022, pushing for the exclusion of general purpose AI and a narrower definition of AI. Both demands closely resembled those in Big Tech's position papers.
As it stands, the lobby blitz has generated the desired results. In its latest position, the Council has postponed the discussion on regulating general purpose AI. The institutions have also narrowed the definition of AI systems, limiting the number of systems subject to scrutiny.
The AI Act is now nearing its final stage: the secretive trilogue negotiations. What compromise will be reached remains uncertain, but the opaque nature of the trilogue process especially benefits well-connected and well-funded lobbyists, leaving ample room for Big Tech to further water down the AI Act.
It is now up to the European Parliament to make sure fundamental rights are well protected. This time, we should not allow Big Tech to roll out another toxic business model.
Source: euobserver.com