An act with no teeth left
The debate over regulating AI has evolved considerably since the EU first considered regulation. When ethics professor Thomas Metzinger proposed, in the European Commission's high-level expert group on AI, to draw red lines for artificial general intelligence and "the use of AIs that people can no longer understand and control," he was not taken seriously. Not even five years later, companies that openly state their intention to build artificial general intelligence, such as OpenAI and Google, are Europe's top lobbyists on AI. EU negotiators are now considering how to regulate "advanced" models that "may not yet be fully understood."
In some ways, the debate has come full circle. Big Tech set out calling for self-regulation, self-assessment, and voluntary codes of conduct for AI. Intense lobbying has already prevented external controls for most high-risk products, and recent compromises have further hollowed out the proposed regulation by allowing AI providers to assess internally whether or not their systems are high-risk.
Voluntary frameworks also made a comeback. Together with Google and the US government, the European Commission pitched a voluntary pact, calling on "all major" companies to self-regulate generative AI like ChatGPT, while waiting for the AI Act to take effect. The focus on existential risk was useful to Big Tech firms too. It was a good way of creating access at a political level, and of signalling virtue through voluntary commitments for the distant future, while distracting from the harms AI is already causing in society.
In other ways, little has changed. European member states have requested deep carve-outs for using high-risk AI for law enforcement and national security purposes. They remain unlikely to budge on this sticking point. In fact, while one part of the Commission is involved in regulating AI, another, DG Home Affairs, is betting on widespread use of AI, pushed by tech companies, to scan for sexual abuse material online. Experts and MEPs have warned that this will amount to mass surveillance of all encrypted communications.
Furthermore, military applications of AI were excluded from the AI Act from the start. This is despite a break-neck push by the military-industrial complex to integrate AI into weaponry, a push that has pulled in a wide range of tech corporations: from Google's military contracts with Israel and Palantir's military AI in Ukraine to Spotify founder Daniel Ek's investment in Helsing, a European military contractor building military AI "to serve democracies." There are few, if any, international rules on the development and application of military AI, and the European debate on the matter remains in its infancy.
And even when agreement is reached among the trilogue negotiators, much will remain to be decided: in implementing acts by the Commission, but also by the standard-setting bodies. As Corporate Europe Observatory previously observed, this leaves another avenue open for Big Tech to influence the process in the future. One participant in discussions on standard-setting for technology, who spoke to Corporate Europe Observatory anonymously, told us how this works and the level of influence that the tech giants have in this process:
"You need to have a lot of time, and a bit of money. Most NGOs donât have that. And you cannot sit back; you need to dedicate hours per week. Thatâs why all big companies have standardisation teams. On AI, Big Tech companies with a lot of funding start leading the process â so the editor and main reviewer may be with Big Tech, or a consultant who has been paid by Big Tech. If youâre a consultant, you can join the standardisation body but you do not have to disclose who your clients are. You may have one staff member on the group, but six or seven consultants. You pay to buy the votes."
They also highlighted the strong confidentiality clauses, which prevented them from speaking publicly on the workings of the group. They concluded:
"It is so captured [the standard-setting process], there is no way the AI standards will be anything near the spirit of the act."
In the run-up to the European parliamentary elections, and after a term of unprecedented digital rule-making, this raises the question of whether Big Tech has become too big to regulate. From surveillance advertising to unaccountable AI systems, Big Tech, with its massive lobby firepower and privileged access, has all too often succeeded in preventing regulation that could have reined in its toxic business model. Big Tech cannot be seen as just another stakeholder, as its corporate profit model is in direct conflict with the public interest. Just as Big Tobacco has been excluded from lobbying law-makers because its interests run contrary to public health, the same lesson should now be drawn from the years-long struggle against Big Tech.
Or, in the words of Nobel Peace Prize laureates Dmitry Muratov and Maria Ressa, it is time to "challenge the extraordinary lobbying machinery of big tech companies".