
Byte by byte

How Big Tech undermined the AI Act

Tech giants are receiving privileged and disproportionate access to senior European policymakers during the final stages of negotiating the AI Act. Drawing on internal documents and data analysis, Corporate Europe Observatory reveals Big Tech’s push, with support from major EU member states, to undermine or limit regulation of foundation models, the likes of ChatGPT on which many AI applications can be built, and “high-risk” AI. Fundamental rights and protections of copyright and the environment are being sacrificed for the sake of corporate profits.

ChatGPT took the world by storm. Launched in late 2022 by OpenAI, a company with deep ties to Microsoft, the artificial intelligence model drew widespread praise. The New York Times, for example, called it “quite simply, the best artificial intelligence chatbot ever released to the general public.” TIME magazine put the model on its iconic cover. Artificial intelligence, or AI, was considered “finally mainstream.”

ChatGPT reportedly left competitors scrambling. Google hastily released Bard, its own AI chatbot. Others raced to keep up too, including European Union officials, parliamentarians, and diplomats, who were already in the process of negotiating the EU’s legislative approach to AI: the AI Act.

As Corporate Europe Observatory has previously documented, the European debate about the regulation of AI, heavily influenced by Big Tech, featured lofty promises: AI would spur economic development; it would even cure cancer or solve climate change! Rights advocates, on the other hand, have long warned that the increasing use of AI technology risks sacrificing the rights of citizens, in particular those of the most vulnerable people, for the sake of corporate profit margins and law enforcement priorities.

In the second half of 2023, during the trilogues – the final, secretive stages of negotiation over the AI Act between the European Parliament, Council, and Commission – the release of ChatGPT spurred increasing discussion on the need to regulate “foundation models”: AI models that can be adapted for many possible uses. Big Tech lobbied to avoid or at least minimise regulation of these models.

Tech companies enjoyed disproportionate, often top-level, access over the course of 2023.

Analysis of internal documents and lobby data by Corporate Europe Observatory reveals that tech companies enjoyed disproportionate, often top-level, access over the course of 2023. Through this privileged access, Big Tech largely succeeded in preventing much-needed requirements being brought in for foundation models, a core AI product for Big Tech, thus reducing external oversight of potentially harmful AI within the EU.

How ChatGPT became the AI Act’s hot potato

In 2021 the European Commission launched its proposal to regulate AI using a risk-based approach. Under this proposal, some applications would be banned, some “risky” applications would be subject to stricter rules, but most AI would be subject to very little regulation, if any. As the European Council and Parliament reviewed the Commission proposal, much of the debate revolved around which applications would be classified as high-risk, what obligations such applications would be subject to, and whether this classification would be verified by companies themselves or by independent external reviewers.

AI Risks: Amplifying social prejudice and Big Tech monopoly power

It is now widely recognised that as AI models are applied across society, they can recreate or amplify existing patterns of social prejudice, bias, and inequality. Flawed or biased training data may lead AI to discriminate against people based on race, class, gender, disability, sexuality, and age, or to create new, artificial boundaries to exclude people. Social groups who are already vulnerable or discriminated against, such as migrants, refugees seeking protection, job seekers, or people seeking state support, have been shown to be most at risk from profiling or biometric mass surveillance.

This is not hypothetical. Across the world, for example in the Netherlands, the UK, and Australia, biased algorithms have falsely accused thousands of people of defrauding social security benefits.

Large language models (LLMs) – algorithms trained to ‘understand’ or generate language – or other “foundation models,” on which many specific AI applications are built, have been shown to carry many risks and costs. These include environmental costs due to their high energy usage, as well as the potential to reinforce social bias, with models producing discriminatory or extremist language. Structural inequalities in the training data used can be built into the system. There are also serious questions, and ongoing lawsuits, over the alleged violation of copyright laws in the development of these large AI models.

Alarmingly, Big Tech firms have recently fired or trimmed down their ethics teams, which in some cases had called out these very dangers.

A report by The Future Society identified various risks associated with these “cutting-edge models” including “technical opacity.” It also highlighted another risk – corporate irresponsibility:

“Corporate irresponsibility is evidenced by the developers’ acknowledgement or even active warning to regulators of the great dangers posed by the increasingly capable technology. Yet, paradoxically, these same developers continue to aggressively compete in the race for increasingly capable models or AGI [Artificial General Intelligence], thus actively developing even more capable and dangerous models.”

Concerningly, a small group of companies are developing (near) monopoly power in the advanced AI market. The high cost of developing foundation models (with many possible uses) creates significant barriers to entry. And because AI is also foundationally reliant on resources owned and controlled by a handful of companies, the AI Now Institute concludes: “There is no AI without Big Tech.”

The debate also increasingly focused on how to regulate AI models that could have many possible applications, both low and high risk. In late 2022, just after the launch of ChatGPT, the European Council proposed that some requirements for high-risk AI systems would also apply to “general-purpose AI systems” – AI models that could be used in many different ways. But the Council pushed the hard questions around regulating general purpose models into the future, to be handled later by the European Commission. As detailed previously by Corporate Europe Observatory, this came after an intense push by Big Tech, which tried to evade responsibility for compliance by allocating it to “downstream deployers”.

Big Tech monopolises AI

General purpose AI was also heavily debated in the European Parliament – from as early as the summer of 2022 – as it formulated its position on the AI Act. The furore over ChatGPT kicked that debate into higher gear. By March 2023, the co-rapporteurs working to develop the EU Parliament’s position, Brando Benifei (S&D) and Dragoș Tudorache (Renew), proposed that AI systems that could produce complex texts without human oversight, such as ChatGPT, should be designated high-risk. This was met with scepticism from right-wing MEPs like Axel Voss (EPP), who said it “would make numerous activities high-risk that are not risky at all.”

The tech sector’s lobby budgets saw a 16% increase in two years’ time to 113 million euro, mostly because of the increasing budgets of Big Tech firms.

The proposal to regulate general purpose AI systems, which would see the products of tech firms subjected to additional requirements, coincided with an increase in Big Tech lobbying. The tech sector’s lobby budgets saw a 16% increase in two years, from 97 million euro in 2021 to 113 million euro, driven mostly by the growing budgets of Big Tech firms.

Corporate lobbying of the Parliament shifts into high gear

Much of Big Tech’s focus in 2023 was initially on influencing the Parliament’s position. According to data provided by Parltrack, in 2023 MEPs registered 277 meetings on AI; 225 of those meetings took place before the Parliament’s proposal came up for a vote in June 2023.

Methodology note: Data on European Parliament meetings were scraped by Parltrack for this investigation on 12 October 2023 and are available through this link. A total of 51,846 lobby meetings in Parliament were identified, of which 12,924 took place in 2023. Meetings were coded as concerning AI if the field “title” or “related” referred to AI, artificial intelligence, or any similar phrase (in English, French, German, or any other language). Reported lobbyists were categorised on the basis of publicly available information as academia/research, civil society, industry, public body/government, or trade association. If no public information was available or could be identified, this category was left blank.
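To illustrate the coding step described in the methodology note, here is a minimal sketch in Python. The field names “title” and “related” follow the note; the keyword list, the input file name, and the date handling are simplified assumptions for illustration, not the actual script behind this analysis.

```python
import json

# Simplified keyword list; the actual analysis also matched equivalent
# phrases in other EU languages.
AI_KEYWORDS = [
    "artificial intelligence",
    "intelligence artificielle",   # French
    "künstliche intelligenz",      # German
    " ai ",
    "ai act",
]

def is_ai_meeting(meeting: dict) -> bool:
    """Flag a meeting as AI-related if its 'title' or 'related' field mentions AI."""
    text = " {} {} ".format(meeting.get("title", ""), meeting.get("related", "")).lower()
    return any(keyword in text for keyword in AI_KEYWORDS)

# 'ep_meetings.json' is a placeholder for the Parltrack scrape of MEP meetings.
with open("ep_meetings.json") as f:
    meetings = json.load(f)

ai_meetings = [m for m in meetings if is_ai_meeting(m)]
ai_2023 = [m for m in ai_meetings if str(m.get("date", "")).startswith("2023")]
print(f"{len(ai_meetings)} AI-related meetings, {len(ai_2023)} of them in 2023")
```

In the actual analysis, the reported lobbyists behind each flagged meeting were then categorised by hand on the basis of publicly available information, as the note describes.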

Two-thirds of these meetings – 185 out of 277 – were held with industry and trade associations. This was an increase from the period 2019-2022, during which 56% of MEP meetings on this topic were with industry and trade associations. Meetings with civil society accounted for only one in ten MEP meetings in 2023, down from an already meagre 13% in previous years.

 

Like in previous years, large tech companies Google and Microsoft, as well as the American Chamber of Commerce (which represents US tech companies), had by far the most meetings with MEPs. OpenAI was a newcomer among the top lobbyists with seven meetings in 2023.

The month of March 2023, when obligations for general purpose AI were proposed by Parliamentarians, saw the most intense lobby effort, with 67 reported meetings. That month alone, the American Chamber of Commerce clocked seven meetings with MEPs, while Google, Microsoft, and OpenAI each registered four.

Kim van Sparrentak, a Dutch MEP in the Greens/EFA group, described her meetings with various lobbyists to Corporate Europe Observatory. She reported that OpenAI gave a presentation on the capabilities of ChatGPT (a recurring feature of OpenAI’s lobby meetings) and assured her that they did “all they could to ensure safe AI”, but also called for “flexibility.” Van Sparrentak said that members of Microsoft’s ethics team she met with in March pushed for voluntary rather than mandatory commitments.

There is little information available on what lobbyists discussed with other MEPs – who, unlike the European Commission, cannot be compelled through freedom of information requests to disclose information about their meetings. But minutes of a meeting between the Commission and Microsoft in March 2023, obtained through a freedom of information request, show the tech company called for regulation focused on “the risk of the applications and not on the technology.” OpenAI had made the same argument to the EU just before releasing ChatGPT.

This would mean that the development of many of these general AI systems, on which other AI is built, would basically go unregulated. Only the companies and actors who deploy this technology and turn it into specific applications would have to comply with any requirements, leaving Big Tech off the hook.

Infographic: Google lobbying on the AI Act

From general-purpose to foundation models: how the Parliament aimed to regulate advanced AI systems

The concerted lobbying attempts of Big Tech to avoid and reduce regulation appear to have been successful. In the months following the lobby push described above, the Parliament moved away from proposing high-risk requirements for all general purpose AI systems. Instead, it proposed a “tier-based approach”, under which general purpose AI would not be considered high-risk by default, but some models – a newly introduced category of “foundation models” and generative models such as ChatGPT – would be subject to obligations.

The concept of foundation models was introduced in a Stanford University paper, which defined such models as “models trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks.” (The Parliament largely followed this definition). These models could power other AI applications, but were not without risks: the Stanford paper identified these as including intrinsic biases in the models, environmental costs, the use of copyright-protected data in training, furthering wage inequality and monopoly power of Big Tech firms, mass data collection and surveillance capacities, and the misuse of models for harmful purposes. (For more on the risks of foundation models, see Box 1 above.)

A new article (28b) in the Parliament’s draft position specified that providers of foundation models would have to demonstrate to the competent authorities the “reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law.” Generative foundation models, the most heavily regulated general purpose AI under the Parliament’s classification, would also need to “ensure transparency about the fact the content is generated by an AI system, not by humans”, and make publicly available a summary of the use of training data protected under copyright law.

But foundation models would not automatically be considered high-risk.

Big Tech executives’ lobbying blitz

Civil society expressed scepticism regarding Parliament’s proposed requirements on foundation models, arguing that they did not go far enough. Sarah Chander, a prominent civil society advocate with European Digital Rights, said “Parliament’s position only covers such systems [foundation models] to a limited extent and is much less broad than the previous work on general-purpose systems.”

Big Tech pulled out the big guns to water down the Parliament's proposal.

Providers of AI foundation models thought the opposite – that the regulation would prove too onerous. They pulled out the big guns to try to water down the proposal. OpenAI CEO Sam Altman shuttled between Brussels and various European capitals and claimed OpenAI would “try to comply, but if we can’t comply we will cease operating [in the EU].” (He later walked back the threat to leave Europe.) Google’s Sundar Pichai met with Commissioner Thierry Breton to announce a voluntary AI Pact, a non-enforceable commitment to self-regulate while waiting for the AI Act to take effect; Google meanwhile said it would “continue to work with the EU government to make sure that we understand their concerns.” Microsoft’s President made a trip to the European Commission a month later.

Tech companies had reasons to be concerned. Even limited regulatory requirements on foundation models could have great economic consequences for this fast-growing industry that is nearly entirely concentrated in the hands of Big Tech. Near-monopolies in AI by Big Tech are reinforced through billion-dollar partnerships with “start-ups”, such as between Amazon and Anthropic or Microsoft and OpenAI. Max von Thun of the Open Markets Institute explained:

Generative AI did not appear out of nowhere. It is built on existing concentration in data and computing power which are the result of the monopolisation of key digital markets, including social media, cloud computing and search engines.

Furthermore, a study by Stanford University found that providers largely did not comply with the requirements proposed by Parliament. The study found that these companies “rarely disclose adequate information regarding the data, compute, and deployment of their models as well as the key characteristics of the models themselves.” Many models scored especially low in Stanford’s assessment on the disclosure of copyrighted data and energy usage.

In the shadow of these top executives’ visits, Big Tech’s lobbyists also increased their efforts to influence the senior levels of the Commission. The top five lobby actors were again Big Tech companies, with Google topping the list with ten AI-related meetings with European Commissioners and their cabinets reported in 2023 – only two fewer than all meetings held with civil society organisations in aggregate.

Out of the 97 meetings held by senior Commission officials on AI in 2023, analysis by Corporate Europe Observatory shows that 84 were with industry and trade associations, twelve with civil society, and just one with academics or research institutes.

EU Commission meetings

The trilogues: playing the institutions against each other

The Parliament finally agreed its position on the AI Act in June 2023. This moved the EU’s initiative to regulate AI into the final phase – a “trilogue” between the European Council, Parliament, and Commission, intended to find a compromise between the institutions’ differing drafts of the regulation.

Immediately after the Parliament position became clear, meetings with the European Commission on AI peaked. In June and July 2023, lobbyists flocked to the Commission to exert their influence, showing that they saw the Commission not as the neutral mediator in the trilogues – its official role – but as an actor through which the outcome of the trilogues could be influenced.

Secrecy of the trilogues

Transparency groups, including Corporate Europe Observatory, have documented how the secrecy of the trilogues process benefits corporate lobbyists, including those representing Big Tech. A recent petition by Corporate Europe Observatory and LobbyControl calling for an end to secrecy in the trilogues was signed by thousands of people. The European Ombudsman has also called on the Parliament to consider publishing trilogue documents proactively, to increase transparency and public scrutiny.

Documents obtained through freedom of information requests to the Swedish government shed some light on the lobby strategies deployed by Big Tech during the trilogues. Corporate Europe Observatory is publishing these documents in full, to provide increased transparency about the lobbying process.

This was the same old Big Tech attempt to avoid external controls and to put the burden for compliance on the deployers, rather than the creators, of the AI technology.

Both Google and Microsoft shared several position papers with EU member states during the trilogue process. In early September, Google wrote that the regulation “should focus only on the most capable foundation models when they are deployed for high-risk uses.” Microsoft argued that the Council’s approach would classify many general purpose AI systems as high-risk, despite them being low-risk in the company’s opinion, and welcomed the Parliament’s “more targeted approach”. Nevertheless, Microsoft wanted to “ensure feasible requirements for foundation model providers”, and thus argued in favour of “remov[ing] references to independent experts” and “exempt[ing] foundational models that are not placed on the market.” This was the same old Big Tech attempt to avoid external controls and to put the burden for compliance on the deployers, rather than the creators, of the AI technology.

Microsoft’s trilogue recommendations, dated 11 September 2023.

The tech giants also pushed back on three proposed requirements that would disrupt their existing approaches to developing foundation models: increased transparency, disclosure of copyrighted materials, and reporting environmental impact.

Tech firms argued that transparency – including a requirement to inform someone that they are interacting with an AI – should only apply to “image, audio and video” (Microsoft) or “deep fakes” (Google). Importantly, this would exclude the textual output of LLMs and leave many chatbots, the likes of ChatGPT, free from transparency obligations. The same would go for the search systems of Google and Microsoft, which are increasingly powered by AI.

Both Google and Microsoft also argued for the removal of copyright disclosure requirements for foundation models. Large language models are often trained on hundreds of billions of words of text, much of which includes copyrighted materials. Big Tech argued that this requirement was “disproportionate and burdensome” (Microsoft); “unnecessary and difficult to implement” (Google); or “not the right place to regulate copyright” (Meta/Facebook). The fact that Google, Microsoft, OpenAI, and Meta have all been sued for copyright violations presumably had nothing to do with their recommendations on this topic!

The proposed requirement to report the environmental impact of foundation models was another sore point. Microsoft called for a focus on “energy efficiency” rather than “environmental impact”; Google and Meta also objected. But a growing body of studies shows the negative environmental consequences of the full supply chain of foundation model AI. These models may increase search engine energy usage by a factor of five, could consume more electricity annually than Ireland, and more water than half the UK. Tech companies actively campaigned to avoid having to report these consequences.

Meta’s 134-page document

While all the Big Tech companies suggested detailed, word-for-word changes to articles in the proposed regulation, such as article 28b, Meta went so far as to submit a 134-page four-column document to member states, akin to the document the trilogue negotiators were working with. Meta’s document listed the Commission, Parliament, and Council positions, and where for the trilogue negotiators it would have listed the “draft agreement,” Meta put in a fourth column – “Meta’s suggestion” – along with justifications for its rewording of dozens of articles and sub-articles. This was a level of detail only an organisation with substantial resources and numerous dedicated full-time lobbyists would be able to provide – and something which rights advocates struggle to compete with, given their much lower levels of funding and human resources.

How European industry joined the ranks of Big Tech

The tech giants got support from expected and unexpected corners. As reported by Politico, the United States government sent a “non-paper” to European policymakers with specific “suggested edits” in August 2023. (This followed an earlier US government non-paper that aimed to influence the Council’s position, as Corporate Europe Observatory has previously reported.) Among the US suggestions was the recommendation that experts controlling foundation models “need only be ‘sufficiently’ independent, not necessarily fully independent.”

The US government recommendation for “sufficiently”, but not “fully”, independent experts. Source: Politico

Doing away with independent verification by outside experts was an important Big Tech demand: they considered this potential requirement cumbersome and wanted checks to be undertaken internally only. Previous research has documented how a significant number of European AI experts receive funding from Big Tech, calling into question their independence.

Support for Big Tech’s positions also came, somewhat unexpectedly, from European governments, in particular France, which is looking to build its own AI industry. The French government said the proposed obligations were too stringent. Former French Digital Economy Minister and co-founding advisor of Mistral AI, Cédric O, wrote that the provisions in the AI Act to regulate foundation models would be “counterproductive” and “de facto prohibits the emergence of European [Large Language Models].”

Support for Big Tech’s positions also came from European governments, in particular France, which is looking to build its own AI industry.

Cédric O was also one of the initiators (with René Obermann, the Chairman of the Board of Directors of aerospace and defence giant Airbus, and Jeannette zu Fürstenberg, founding partner of the tech venture fund La Famiglia) of an open letter signed by 150 European companies. The letter stated that foundation models “would be heavily regulated, and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks.” According to the letter, this would stifle innovation and create a “critical productivity gap between the two sides of the Atlantic.”

France has pinned its hopes for a “European ChatGPT” on Mistral AI, a start-up launched in May 2023 that raised 105 million euros four weeks after its founding. A source with knowledge of the trilogue negotiations told Corporate Europe Observatory that the French position was increasingly supported by Germany. The German start-up Aleph Alpha recently raised 500 million dollars in investment for the development of European large language models to compete with existing – primarily Big Tech-owned – foundation models. German Minister for Economic Affairs Robert Habeck, at a meeting with diplomats and industry representatives from France, Italy, and Germany, said “Europe's future competitiveness depends crucially on whether we succeed in developing AI in Europe in the future.”

To some, the Franco-German push signalled a genuine belief in a burgeoning AI industry for European companies. Others question whether a company of eighteen people, like Mistral AI, will be able to compete with American tech giants that benefit from near-monopolies in the market for foundation models. Max von Thun of the Open Markets Institute said it is far more likely that in the future we will see “European applications built on top of American foundation models.” Either way, the lobbying by European industry and member states added to the pressure to weaken the EU’s requirements for AI foundation models.

American tech funding European start-ups?

The European AI start-ups have extensive transatlantic ties. Mistral AI’s founders previously worked for Meta and Google. Former Google CEO Eric Schmidt, who chaired the US National Security Commission on Artificial Intelligence and is a long-standing advocate against EU regulation of AI, is one of the major investors in Mistral AI. Arthur Mensch, one of Mistral’s co-founders, previously worked for Google’s DeepMind and sits on the newly launched French committee for artificial general intelligence – alongside Cédric O and representatives from Google and Meta.

Before founding Aleph Alpha, Jonas Andrulis sold a company to Apple and went on to work at the tech giant. While primarily backed by European investors, Aleph Alpha partnered with Oracle for cloud infrastructure and NVIDIA and Hewlett Packard for hardware, including the development of Europe's "fastest commercial AI data center" in 2022. HP recently also became an Aleph Alpha investor.

UK-based Stability AI developed an exclusive partnership with Amazon.

Knowing the news before the news

By October 2023, the trilogue negotiations had advanced. The Council reportedly considered Parliament’s suggestion of a tiered regulation of foundation models. This would now also include a top tier of “very capable foundation models” with “capabilities [that] go beyond the current state-of-the-art and may not yet be fully understood.” Under consideration for these most advanced models were regular external vetting and controls by independent auditors, and the need to have a strong risk mitigation strategy. Providers of foundation models would also need to demonstrate measures to follow EU copyright law and allow rightsholders to opt out.

According to a report by The Future Society, there are currently approximately ten providers – including Google, Microsoft, OpenAI, Meta, and Amazon – that would fall within the highest, and thus most regulated, category of “very capable foundation models”.

The introduction of this tiered approach by the EU was reported publicly on 17 October 2023. Yet days earlier, emails show that Microsoft already knew what was happening, clearly indicating that the company had insider knowledge of the trilogue discussions.

Microsoft’s updated trilogue recommendations, dated 13 October 2023.

In an “updated position” paper (dated 13 October 2023), shared with member states and obtained by Corporate Europe Observatory through a freedom of information request, Microsoft said:

the AI Act should advance foundation models, focusing only on the most capable category of models on the market and anticipated.

This shows remarkable versatility and tactical responsiveness: Microsoft knew the debate was changing and sought to use this to its advantage by arguing that all less powerful foundation models should be excluded from obligations under the AI Act. As one European tech expert (who requested anonymity, “because they could destroy my career if they want to”) said, “Microsoft is just the best lobbying entity in Brussels. They detect everything.”

Google, a couple of days later, called for the tier for advanced foundation models “to be rejected by the trilogue negotiators”, and said the proposed approach was akin to focusing on the details of the production of a car rather than conducting crash tests.

At the time of writing, this issue remains under heated political discussion in the trilogues. In the latest trilogue meeting, France, Germany and Italy reportedly pushed against any type of regulation for foundation models. This led Parliament negotiators to walk away and may even put the entire AI Act at risk if no agreement is reached before the European elections. Whether the outcome is a severe weakening or scrapping of foundation model requirements, or further postponement of the AI Act altogether, it evidently plays into the hands of Big Tech corporations.

Infographic: Microsoft lobbying on the AI Act

Major win for Big Tech: self-assessing if its systems are risky

In the midst of the debate about foundation models, another important issue remained unresolved – the issue of high-risk AI systems. Under the Parliament’s draft regulation, AI systems in eight categories would be considered high-risk: biometric-based AI, critical infrastructure, education, employment, access to public and private services, law enforcement, migration, and justice and democratic processes.

A wide group of tech firms expressed concern to the Commission about their area of operations being designated high-risk. Uber said it was “problematic that, instead of a case-by-case assessment, all AI systems used in employment are classified as high-risk.”

Snap, the maker of the Snapchat application, protested the high-risk classification of biometric categorisation. Spotify was concerned about “the high-risk categories,” although it did not specify which ones especially concerned it. Many other European and non-European companies have long pushed for removing the concept of high-risk categories or allowing flexibility in how the risk is assessed. Their preference, not surprisingly, is that they themselves undertake this assessment when developing products, rather than this being done by external evaluators.

This push appears to have had some success. Under the initial Commission proposal, systems would automatically be considered high-risk if they fell into the categories listed above. Parliament introduced more demanding rules for high-risk systems, including the requirement to conduct an assessment of the impact on fundamental rights. But it also introduced a major loophole: AI companies that believed their AI did not pose significant harm could notify authorities that they were not subject to the high-risk requirements.

The possibility of opting out was welcomed by the usual suspects; for example, Microsoft supported Parliament’s approach, although it still pushed for additional leniency. It said that requiring providers to notify authorities “can lead to enforcement challenges in practice due to supervising authorities’ lack of resources.” It would be better, according to Microsoft, that companies undertake the assessment internally and only make it available upon request to competent authorities. Why stop when you’re winning?

During the trilogues, the Commission floated a “compromise proposal” of a filter for high-risk AI systems. This set out three conditions under which an AI system would be deemed to pose no significant risk of harm, a determination that would be made through self-assessment, guided by Commission guidelines.

The same company that would be subject to the law is given the power to unilaterally decide whether or not it should apply to them.

Civil Society statement

Rights advocates immediately baulked at the introduction of a “dangerous loophole.” A statement signed by over a hundred civil society organisations warned that “the same company that would be subject to the law is given the power to unilaterally decide whether or not it should apply to them.” Furthermore, the European Parliament’s legal service slammed the compromise in a legal opinion. It wrote that

the proposed compromise leaves wide room to producers to decide autonomously whether their AI systems should be treated as high risk or not. This appears in contrast with the general aim of the AI act to address the risk of harm posed by high-risk AI systems.

The trilogue negotiators seem to have largely ignored the negative legal opinion. Although some fine-tuning of the proposal occurred, the use of the “filters” was agreed in principle in late October.

An act with no teeth left

The debate over regulating AI has evolved considerably since the EU first considered regulation. When ethics professor Thomas Metzinger proposed, in the European Commission’s high-level expert group on AI, to draw red lines for artificial general intelligence and “the use of AIs that people can no longer understand and control,” he was not taken seriously. Less than five years later, companies that openly express their intent to build and significantly develop artificial general intelligence, such as OpenAI and Google, are Europe’s top lobbyists on AI. EU negotiators are now considering regulating “advanced” models that “may not yet be fully understood.”

In some ways, the debate has come full circle. Big Tech set out calling for self-regulation, self-assessment, and voluntary codes of conduct for AI. Intense lobbying has already prevented external controls for most high-risk products, and recent compromises have further undermined and hollowed out the proposed regulation by allowing AI providers to internally assess whether their systems are high-risk or not.

Voluntary frameworks also made a comeback. Together with Google and the US government, the European Commission pitched a voluntary pact, calling on “all major” companies to self-regulate generative AI like ChatGPT, while waiting for the AI Act to take effect. The focus on existential risk was useful to Big Tech firms too. It was a good way of creating access at a political level, and signalling virtue through voluntary commitments for the distant future, while distracting from the harms AI is already causing in society.

In other ways, little has changed. European member states have requested deep carve-outs for using high-risk AI for law enforcement and national security purposes. They remain unlikely to budge on this sticking point. In fact, while one part of the Commission is involved in regulating AI, another – DG Home Affairs – is betting on widespread use of AI, pushed by tech companies, to scan for sexual abuse material online. Experts and MEPs have warned that this will amount to mass surveillance of all encrypted communications.

Despite a break-neck push by the military-industrial complex to integrate AI into weaponry, military applications are excluded from the AI Act.

Furthermore, military applications of AI were excluded from the AI Act from the start. This is despite a break-neck push by the military-industrial complex to integrate AI into weaponry that has pulled in a wide range of tech corporations – from Google’s military contracts with Israel and Palantir’s military AI in Ukraine to Spotify founder Daniel Ek investing in Helsing, a European military contractor building military AI “to serve democracies.” There are few, if any, international rules on the development and application of military AI, and the European debate on the matter remains in its infancy.

And even when agreement is reached among the trilogue negotiators, much will remain to be decided – in implementing acts by the Commission, but also by the standard-setting bodies. As Corporate Europe Observatory previously observed, this leaves another avenue open for Big Tech to influence the process in the future. One participant in discussions on standard-setting for technology, who spoke to Corporate Europe Observatory anonymously, told us how this works and the level of influence that the tech giants have in this process:

"You need to have a lot of time, and a bit of money. Most NGOs don’t have that. And you cannot sit back; you need to dedicate hours per week. That’s why all big companies have standardisation teams. On AI, Big Tech companies with a lot of funding start leading the process – so the editor and main reviewer may be with Big Tech, or a consultant who has been paid by Big Tech. If you’re a consultant, you can join the standardisation body but you do not have to disclose who your clients are. You may have one staff member on the group, but six or seven consultants. You pay to buy the votes."

They also highlighted the strong confidentiality clauses, which prevented them from speaking publicly on the workings of the group. They concluded:

“It is so captured [the standard-setting process], there is no way the AI standards will be anything near the spirit of the act.”

In the run-up to the European parliamentary elections and after a term of unprecedented digital rule-making, this raises the question of whether Big Tech has become too big to regulate. From surveillance advertising to unaccountable AI systems, with its massive lobby firepower and privileged access, Big Tech has all too often succeeded in preventing regulation that could have reined in its toxic business model. Big Tech cannot be seen as just another stakeholder, as its corporate profit model is in direct conflict with the public interest. Just as Big Tobacco has been excluded from lobbying law-makers because its interests are contrary to public health, the same lessons should now be drawn from the years-long struggle against Big Tech.

Just as Big Tobacco has been excluded from lobbying law-makers because its interests are contrary to public health, the same lessons should now be drawn from the years-long struggle against Big Tech's toxic business model.

Or, in the words of Nobel Peace Prize laureates Dmitry Muratov and Maria Ressa, it is time to “challenge the extraordinary lobbying machinery of big tech companies”.
