Episode 9: From Huaweigate to the AI Act: how to bake bias in

New episode of EU Watchdog Radio

A brand new podcast episode of EU Watchdog Radio about the recent lobbying scandal involving Huawei, and about how Big Tech is writing the rules that will govern its own AI products, particularly the standards it will later have to comply with.

Last week, Brussels was reeling from another corruption scandal! This time it's Chinese big tech giant Huawei whose offices just behind the European Parliament have been raided - along with those of 15 former and current MEPs from the EPP and S&D groups. Huawei is, according to the Belgian prosecutors, being investigated for “active corruption within the European Parliament”, including “remuneration for taking political positions, excessive gifts like food and travel expenses and regular invitations to football matches ... with a view to promoting purely private commercial interests in the context of political decisions”.

The research was done by Follow the Money, Le Soir and Knack; the police raided 21 addresses in Brussels, Flanders, Wallonia and Portugal and arrested several people. But while all eyes are on Huawei and China, we at CEO want to highlight a deeper, systemic scandal that was there in Qatargate and is still here now: the longstanding failure of the European institutions to properly defend democracy from influence operations. There are ongoing and systemic failures of lobby monitoring, transparency, and ethics enforcement (including regarding MEP gifts and conflicts of interest). The EU needs to consolidate and speed up implementation of the ethics body to set common ethical standards across EU institutions.

In this episode, Bram Vranken, campaigner and researcher at CEO, will discuss a report he published in January, which focuses on the standard setting process of the AI Act. He uncovered that many of the world’s major tech corporations - among them Huawei - are deeply involved in creating permissive, lightweight standards that risk hollowing out the EU’s AI Act. In short, Bram shows that with little to no transparency, private standard-setting organisations are writing rules that have legal status in the EU. Independent experts and civil society are outnumbered, underfunded, and struggling in the face of corporate dominance.

Who we are

This podcast is produced by CEO and Counter Balance. Both NGOs raise awareness of the importance of good governance in the EU by researching issues like lobbying by large and powerful industries, corporate capture of decision-making, corruption, fraud, and human rights violations in areas like Big Tech, agro-business, biotech & chemical companies, the financial sector & public investment banks, trade, energy & climate, scientific research and much more…

You can find us wherever you listen to your podcasts. Stay tuned for more independent and in-depth information that concerns every EU citizen!


Transcript of the episode (there might be slight changes to the final audio version)

Hi, welcome! I’m Joana Louçã, comms officer at Corporate Europe Observatory, or CEO.

In this episode of EU Watchdog Radio, we talk about the recent lobbying scandal involving Huawei, and I will talk to my colleague Bram Vranken about how Big Tech is writing the rules that will govern its own AI products, particularly the standards it will later have to comply with.

First, the talk of the town. Last week, Brussels was reeling from another corruption scandal! This time it's Chinese big tech giant Huawei whose offices just behind the European Parliament have been raided - along with those of 15 former and current MEPs from the EPP and S&D groups. Huawei is not only a challenging word for me to pronounce but also, and more importantly to the matter at hand, according to the Belgian prosecutors, being investigated for, and I quote, “active corruption within the European Parliament”, including “remuneration for taking political positions, excessive gifts like food and travel expenses and regular invitations to football matches ... with a view to promoting purely private commercial interests in the context of political decisions”.

The research was done by Follow the Money, Le Soir and Knack; the police raided 21 addresses in Brussels, Flanders, Wallonia and Portugal and arrested several people.

But while all eyes are on Huawei and China, we want to highlight a deeper, systemic scandal that was there in Qatargate (which, I remind you, saw several Members of the European Parliament arrested and investigated) and is still here now: the longstanding failure of the European institutions to properly defend democracy from influence operations.

The EU institutions' inaction is Exhibit A in how NOT to prevent a scandal! There are ongoing and systemic failures of lobby monitoring, transparency, and ethics enforcement (including regarding MEP gifts and conflicts of interest). The EU needs to consolidate and speed up implementation of the ethics body to set common ethical standards across EU institutions. This was agreed right after the Qatargate scandal but has not yet been implemented (psst, it’s been more than two years), and the EU must also ensure proper enforcement of the online disclosure of MEP lobby meetings. Astonishingly, the rightwing EPP group of MEPs has been preparing to get rid of both of these reforms, with the support of the far right… After this new scandal, they cannot be allowed to get away with it.

But it gets worse. The EPP is also trying to pin the blame for foreign influence on, drum roll please, NGOs! The all-evil, three-eyed, green-coloured NGOs, covered in scales, with protruding horns on their backs, snake-like arms, foul and vile. Yet our own decade of research shows that out of 128 lobby vehicles, only one front-group NGO surfaced in our investigations into repressive regime influence. Instead, repressive regimes have primarily made use of lobby, PR and law firms, and think-tanks to influence the EU. The Lobby Register review in June, by the way, is an opportunity to introduce long overdue legally binding rules, ensuring transparency and accountability, with meaningful sanctions.

But going back to Huawei, what do we know about it? It is among the highest corporate lobby spenders in Brussels. According to LobbyFacts.eu, between 2012 and 2023 it declared spending at least €26 million on lobbying the EU. To achieve its goals, Huawei has employed several Brussels lobby consultancies and experts, many of whom have gone through the revolving door, and it is a member of numerous Brussels trade groups, both in the technology field - including DigitalEurope and the European Internet Forum - and in general corporate lobby groups such as BusinessEurope. In October, Huawei declared it had 11 full-time EU lobbyists, some of whom had accreditation to access the European Parliament as they pleased - by the way, that access has been rescinded since the scandal broke.

But what are they lobbying for, you’re probably wondering. A key lobbying aim for Huawei from 2019 onward was to persuade Europe to accept the innovative - and cheaper - Chinese 5G mobile broadband technology. A fierce lobbying battle has been waged in Europe about it; the US called for a blanket ban on installing 5G mobile network infrastructure from the Chinese technology companies Huawei and ZTE in Europe. While this may be partly about competition, the tech giant has been under pressure over concerns its products could offer a backdoor for Chinese state surveillance, raising fears the technology could entail a security risk to data and privacy. After all, surveillance and intelligence gathering is exactly what the US required its own giant technology firms to do, as revealed by NSA whistleblower Edward Snowden in 2013. In fact, in 2023 the European Commission announced moves to block Huawei and ZTE from EU research funding and to stop contracting operators that use Chinese equipment.
If this interests you and you want to know more, we suggest you check out our reports “Follow the New Silk Road” and “Beyond Qatargate”, as well as “Bias baked in: How Big Tech sets its own AI standards”. But since then, Huawei has been busy with, among other things, the AI Act. And now I can finally jump to my conversation with my colleague Bram Vranken. Apologies for such a long introduction, but our conversation was recorded well before the scandal broke, so you had to hear all that from me and not from him. Here’s Bram.

What does the AI Act set out to do?

The AI Act has a double purpose, which also makes it problematic. The first purpose of the AI Act is actually to make sure that AI is adopted by society as much as possible. So it has a very commercial goal. And how the EU wants to do that is by making AI trustworthy. That's the term the EU uses. So they want to make sure that people trust AI to function well, so that it is more widely adopted across society.

But that has proven to be dangerous.

There have been numerous scandals in the last couple of years where AI has been used in problematic ways. Just to give one example: in the Netherlands, the government administration used an algorithm to detect fraud in the case of child care benefits. But that algorithm was extremely biased: single parents and people who didn't speak Dutch were being singled out as fraudsters by the system. In reality, the algorithm was flawed. These people hadn't committed fraud; they were accused, but they were innocent. They were put in debt, because they had to pay that money back to the Dutch government. They lost their houses. Because of all the stress they went through, some of them developed developmental issues. Subsequent research even showed that a lot of people lost their kids, who were put in foster care because their parents couldn't take care of them anymore. So this is an algorithm used by the Dutch government which had really disastrous effects on thousands and thousands of lives. That shows how dangerous this adoption of AI can be, and we know that the Netherlands is not a standalone case: every EU government is using AI, is using algorithms, to make public administration more efficient. But often what these systems end up doing is picking on those people who are already vulnerable in society, and they have no way to respond, because, well, if the computer says you're committing fraud, then you must be committing fraud, and it's very difficult to defend yourself against something like that.

So you really see this double purpose in the legislation, in the AI Act, where often the commercial purpose will dominate. And that's also why the EU decided to make use of standard setting in the first place, because that makes for a very pro-business piece of legislation.

The report Bram published in January focuses on the standard setting process of the AI Act. He uncovered that many of the world’s major tech corporations - among them, you’ve guessed it by now, Huawei - are deeply involved in creating permissive, lightweight standards that risk hollowing out the EU’s AI Act. In short, Bram shows that with little to no transparency, private standard-setting organisations are writing rules that have legal status in the EU. Independent experts and civil society are outnumbered, underfunded, and struggling in the face of corporate dominance. Here’s Bram again. You published a report about the artificial intelligence standard setting process. So can you just quickly explain what it is and why it is so important?

So what you need to understand is that standard setting is a very industry-led process, where the aim is to come to common technical standards. Historically, that process has been used to define very technical issues. For example, the fact that if you plug in your laptop there are 220 volts coming out of the electric grid is because there is an international standard saying that it should be 220 volts. So it's a very technical process where industry comes together to define a technical standard. And once a product has gone through a standardisation process, it will get this CE marking if it has gone through a European standardisation process, or an ISO number if it has gone through an international standard setting process.
How it works is that people from industry, experts, come together and negotiate; it's based on consensus, and at the end of the day they will have set the standard that will be adopted by all of industry. And the standard setting bodies themselves are private organisations: how they make their money is by selling access to standards. That sounds quite innocent, but the problem is that while standardisation is a very corporate process, since the rise of neoliberalism it has increasingly been used for societal questions: to standardise questions related to environmental norms, and now, with the AI Act, also issues related to fundamental rights.

We interviewed, for example, somebody from a big tech company. And when we asked him if this wasn't problematic, he said: historically, standardisation is a corporate process, a corporate system. Standards are developed by industry. What do you expect? And that tells a lot about how these experts from industry look at standardisation. They see it as: this is our garden, this is where we play, and everybody who represents societal interests should stay out of the way. We also interviewed somebody from the trade unions, for example, and she said: well, standardisation is market-driven and profit-led, where industry develops technical specifications, and it is a sort of public-private relationship. So it's a very, very neoliberal way of decision-making, but it is really important.

Within the framework of the AI Act, how specifically will these standards then be used?

So a lot of the requirements in the AI Act are very vaguely formulated. So for example, the AI Act will say the risk to fundamental rights after the risk mitigation process needs to be at an acceptable level.

What is this acceptable level? Who is going to define what an acceptable level of risk to fundamental rights is?

And that's where standards come in, that's where standard setting comes in. It's up to CEN-CENELEC, the EU standard setting body, to define the risk mitigation process, to define how this works. This is really problematic; it's also the first time ever that standard setting is being used on issues related to fundamental rights. So it's not very difficult to imagine how things can go wrong if you leave questions related to fundamental rights to private bodies dominated by industry experts.

Why is this important for the AI Act in particular?

In this case, the Commission has asked CEN-CENELEC, the EU standard setting body, to develop these standards, which means that they will have what is called presumption of conformity. So the standards will be considered part of EU law. When a company complies with the standards, it also complies with the AI Act. So by shaping these standards, industry can really define how the AI Act is implemented and enforced. And that is of course a very interesting thing for any big tech company to get involved in, because that way they can also shape the AI Act.

Okay, and can you explain how you then did this research?

Standard setting is this classic tale of a closed-off institution where industry writes its own rules. There is little to no transparency, so it was a real challenge to do the research. For example, CEN-CENELEC's code of conduct says that revealing the identity of participants in standard setting is not allowed. So when we interviewed people, they would say: we cannot reveal which experts were involved in this process, because of this code of conduct. That was a real challenge, because of course if you want to research corporate dominance in standard setting, then you need to know who is involved. So how we set out to do that was through LinkedIn, actually, because a lot of the experts involved in AI standard setting are quite open about it on their social media. In the end we were able to identify 150 people, which is the large majority of experts involved in standard setting; we know that because the Commission later confirmed that 200 people in total are involved. And the numbers are pretty clear: 54 of those 150 people represent corporations and 24 represent consultancies, and together they make up more than half, 55%, of the identified members involved in AI standards. And then, of course, a big part of those people are from US big tech corporations. For instance, Microsoft alone had four experts involved, IBM as well. There were two Amazon representatives, and at least one Google, Intel and Oracle representative as well. And interestingly, there were even two people involved from DigitalEurope, which is a lobby group that has lobbied on the AI Act. So that already tells part of the story: it shows that standard setting on AI is heavily dominated by corporate interests. But of course, we also wanted to know how exactly Big Tech is shaping standard setting. That's why we interviewed more than 10 people, to get a better picture of some of the tactics and strategies used.

We’re almost at the end of this episode, and I just want to remind you that DigitalEurope, which Bram just mentioned, is of course one of the trade groups that Huawei, among others, is a part of. In his report, Bram also looked at international AI standard setting, by the International Organisation for Standardisation and the International Electrotechnical Commission. There is only sparse information publicly available about their working group on AI, but it supports the image of Big Tech dominance. In fact, the committee is chaired by the Vice President of Huawei’s Research and Development arm. Many of its working groups are convened by corporate representatives, including employees from Huawei.

And what were then the tactics that you uncovered?

So first of all, the aim of Big Tech is to make standards as lightweight as possible and as difficult as possible to enforce. A lot of these Big Tech companies have internal policies on how to develop AI, and what they set out to do was to make these standards as close to their internal policies as possible, so that if they have to comply with the AI Act, they have to change almost nothing.

They use a couple of strategies. One strategy I've already hinted at, and that is making sure that they have as much muscle power in standard setting as possible. Contrary to a lot of other companies, civil society organisations or unions, what Big Tech can do is send several people to CEN-CENELEC, because the way CEN-CENELEC works is that experts from the national level, from national standard setting bodies, are sent to the European level to then discuss European standards. So these national levels are really important, and multinationals of course have an advantage, because they can send experts through several national standard setting bodies at the same time. That's an advantage nobody else has. For example, an NGO which is based in France can only send one representative, through the French national standard setting body. So what we saw is that Microsoft especially, but also other tech companies, were very, very dominant in this whole process, because they can just send enough people, enough muscle power, to really shape the standards. In addition, a lot of tech companies also pay consultants who then represent Big Tech interests at CEN-CENELEC. One person we interviewed, for example, said: I have seen people from several different countries raising their hands at the same time and voicing very similar opinions. So you think there is consensus, but in fact you realise that they all have the same line manager. Other interviewees confirmed this; somebody else said: we have seen situations where you have an expert of company A defending a comment of an expert of a different country, but also of company A. So that shows how this works. At first sight it seems that everybody participates in standard setting on an equal basis, but actually you have one company, or a few companies, who really dominate this whole process.

The second strategy is exactly this club within a club. These big tech companies - you know, standard setting is a very small world - they know each other, and they form this kind of club and try to exclude other societal interests. So NGOs have a really, really hard time gaining access to standard setting. One of the persons we interviewed, for instance, said it took nine months to get access to the relevant documents. And in the end, this person dropped out, because he wasn't able to make a difference. The EU, the Commission, has been trying to make standard setting more inclusive, but still only 9% of the experts we identified were from civil society. And at the member state level, in these national standard setting bodies, the problem is even bigger: often there would only be one or two people from civil society participating.

And the last strategy I want to highlight is the Big Tech strategy of going international. ISO, the International Organisation for Standardisation, is also writing AI standards, and these are even more industry-friendly than what is going on in Europe. So what Big Tech is trying to do is delay the European process and make sure that the international standards are adopted in Europe, even though they're not in compliance with the AI Act.

And what happens now? What are the next steps in this process?

I think what we need to do is really fundamentally question standard setting as a process that can be used for public policy making, because it's neoliberalism on steroids. You really cannot outsource decision making on fundamental rights to a private body dominated by industry experts. And of course, the AI Act always had this pro-business goal in the first place, and that's now being reflected in the use of standard setting. But we need democratic institutions to make these decisions, because AI will become an important technology, and we as citizens need to be able to decide when and how it is being used. It cannot be this black box which it has been till now, you know, where algorithms and AI are just being used on people without them knowing, with really detrimental effects on their livelihoods. So we need to take decision-making out of the hands of these private bodies and back into the democratic domain.

We have come to the end of this podcast. If you liked it, make sure you also listen to the previous episode by our colleagues at Counter Balance, where they interviewed a certain Yanis Varoufakis about Europe’s financial future. Till next time, bye bye!
 
