Bias baked in

Setting the rules of their own game: how Big Tech is shaping AI standards

Brussels, 9 January 2025 - In March 2024, the EU passed the world's first regulation on artificial intelligence. However, the devil is in the details of how the AI Act is actually implemented. New research from Corporate Europe Observatory (CEO) shows how Big Tech is in the driving seat, deciding crucial rules on how to implement the AI Act through harmonised standards.

Standard-setting bodies such as CEN-CENELEC – private organisations historically dominated by industry – set the quality requirements for EU products. Up to now, standards have been used for technical issues such as machine safety or chemicals in toys. But AI is a very different type of 'product', with broad-ranging social impacts. Worryingly, the AI Act marks the first time harmonised standards will be used to implement rules on fundamental rights, transparency, and fairness.

Researchers and civil society organisations have voiced concerns over the EU’s decision to use technical standards to tackle the kind of complex and broad societal risks that AI poses.

Corporate Europe Observatory's new report, Bias baked in - How Big Tech sets its own AI standards, exposes how, behind closed doors, Big Tech is writing the standards that will govern its own AI products and its compliance with fundamental rights obligations.

Bram Vranken, Corporate Europe Observatory researcher and campaigner, says:

“The European Commission's decision to delegate public policy-making on AI to a private body is deeply problematic. For the first time, standard-setting is being used to implement requirements related to fundamental rights, fairness, trustworthiness and bias.

This opaque process is dominated by corporate interests and is difficult for civil society to participate in. Big Tech is effectively setting its own rules for AI, prioritising lightweight and difficult-to-enforce standards over the public interest and fundamental rights.”

Key findings from the report include: 

  • Corporate influence in AI standard-setting: An analysis identified 143 members of JTC21, the Joint Technical Committee on Artificial Intelligence set up by the European standardisation bodies CEN and CENELEC. More than half (55%) of these members represent companies or consultancies, with 54 from companies and 24 from consultancies.
  • US tech giants dominate: Almost a quarter of the corporate representatives are from US companies, including four members each from Microsoft and IBM, and two from Amazon. Other representatives come from Google, Intel, Oracle, Qualcomm and DIGITALEUROPE, a Brussels-based tech lobby group.
  • Microsoft especially dominant: Three national delegations to JTC21 - Germany, the UK, and Ireland - are regularly headed by Microsoft representatives. Microsoft also sponsored and hosted a plenary session of JTC21 in 2024.
  • Chinese presence: Although small, the Chinese presence in JTC21 consists entirely of Huawei, with four representatives linked to the company.
  • Consultant transparency concerns: Consultants increase corporate influence in JTC21, but there is little transparency about whose interests they represent. Several interviewees highlighted how seemingly separate experts in JTC21 would back the same point; only after additional scrutiny did it become clear these experts were all working for the same Big Tech companies.
  • Big Tech holds the pen at the national level as well: As multinationals, Big Tech firms have the additional advantage of being able to co-write the standards in multiple national ‘mirror committees’ at the same time. CEO was able to obtain information about membership of the national standard-setting bodies working on AI in France, the UK, and the Netherlands. In these countries, the share of experts representing corporate interests is 56%, 50%, and 58% respectively.
  • Limited civil society participation: Only 9% of JTC21 members are from civil society organisations, raising concerns about inclusivity in the standard-setting process.
  • Big Tech's push for favourable standards: Large tech companies are promoting international standards from ISO-IEC, which often conflict with the EU's AI Act. The Vice President of Futurewei - the R&D arm of Huawei - chairs the ISO-IEC committee on AI standards (SC42), further cementing Big Tech's influence.
  • Close coordination on GPAI standards: The Code of Practice on General-Purpose AI, which is expected to shape future standards, is being drafted with significant input from Big Tech companies.

ENDS

For media inquiries, please contact:

Bram Vranken, Corporate Europe Observatory researcher and campaigner 

bram@corporateeurope.org ; +32 497 131464

Marcella Via, Corporate Europe Observatory press officer

media@corporateeurope.org ; +39 348 4201435

Notes to editor

  • Corporate Europe Observatory (CEO) has previously documented the outsized industry influence on the AI Act, which led to a severely watered-down text.
  • In March 2024, Corporate Europe Observatory published a report revealing how European startups Mistral AI and Aleph Alpha, together with Big Tech, successfully hijacked the policy-making process and undermined the AI Act.
