

Former MEP and tech critic talks AI and big tech policy

The scene

Marietje Schaake, author of a new book, is a former Dutch member of the European Parliament, where she focused on technology policy. She is now at Stanford’s Cyber Policy Center and the Institute for Human-Centered AI.

She spoke to Semafor about the book and how she sees the technology debate on both sides of the Atlantic.

Questions and answers

Reed Albergotti: We’re talking on Zoom from opposite sides of the world, which to me is still a little mind-boggling. How do you reconcile all the advantages of technology with its disadvantages? Are you afraid of coming across as “anti-technology”?

Marietje Schaake: As I write in the book, this is not an anti-technology book. It is a book for democracy. I mean, who could be against technology? It’s brought so many amazing things and it’s still so promising. But what I object to, and what I think is the central message of the book, is this unchecked corporate power that dominates the entire tech ecosystem.

This is about big tech and small tech alike. Sometimes it’s very specific anti-democratic technologies like spyware, or ever-growing monopolistic companies that may have started out as cool startups challenging the incumbents but are now firmly the incumbents themselves, eliminating competition and truly consolidating power, including when it comes to government decisions. I wanted to shed light on this problem.

If you look at Facebook after the 2016 election, there was a lot of outcry that forced the company to change. Do you think there are positive aspects to the concentration of power, such as having a single company to put pressure on?

I used to use the hypothetical example of a tech CEO with a very strong political agenda who would mobilize all of his money and the platform he ran for that agenda. People were literally telling me, “You’ve watched too many movies.” And now we have Elon Musk.

In the case of Facebook, it is quite tragic that it took years and years of warnings, not only from experts who were flagging the harmful effects of these algorithmic settings, but also the many, many painful lessons that had been learned around the world, in Burma and Kenya.

It wasn’t until the shit hit the fan in the US that they became concerned about the reputational damage and started changing their behavior.

COVID was actually a major turning point, with lies about vaccines leading to a genuine public health crisis.

There is a theory that when you try to silence misinformation on these platforms, it almost has the opposite of the intended effect. What kind of research did you do for the book?

When I looked into this years ago, I wanted to see how a platform like Amazon handled measles information at the time. I typed measles into Amazon, and the first thing I got was a book celebrating measles and the body’s resistance. So it’s not so much about silencing discourse. It’s the algorithmic settings that reward this type of content, that allow things that are truly morally reprehensible to go viral.

Companies should not have so much discretion to decide not only how this information reaches hundreds of millions of people, but also whether there is any way for academics, journalists, or civil society leaders to scrutinize it.

Is government regulation the solution?

I believe that transparency, although it may need to be enforced by regulation, will lead to a variety of potential outcomes. For example, with more transparency we might discover that the impact of misinformation on public health has been overestimated. We might learn how different aspects of the business model lead to different causes and effects.

Governments can also use their purchasing power in procurement. They can create markets that direct public resources not so much toward these already wealthy technology companies, but toward public-interest solutions, alternatives that better serve the public.

We should have higher standards for cybersecurity, such as a three-strike system for companies that are negligent or fail to do what they say they will do. We only learn about these incidents through their failures, and there is a kind of blind faith in these companies. I just think that doesn’t work anymore.

I focus on data centers, where companies mislead the public when they want to build facilities and secure energy and water contracts. I don’t think it’s too much to say, “We’re going to demand that these companies at least tell us who they are.”

What’s really disrupting these big companies are new startups with new technologies. Do you think governments should start spending more to foster innovation?

More government investment in an R&D ecosystem is always a good idea. But what is missing, with regard to technology in particular, is that these investments be conditioned on certain fundamental public-interest requirements: that the public be informed about these technologies, that there be better accountability mechanisms, and that there be more coordination between local, state, and federal governments.

In many European countries, governments spend in more creative and, I think, more public-interest-minded ways. So yes, spending can help, but it matters how.

You’ve spent time in the United States, in the heart of Silicon Valley. How does that culture compare to the European perspective?

One of the key takeaways from my time in Silicon Valley is that it’s largely about money rather than real innovation.

In Europe, people want the success of Silicon Valley, but not the social inequalities or the consequences for society. The idea is that there needs to be some buffer for the less wealthy in terms of opportunities. There’s no good word for it in English, but we have a nice word to describe how you can be opportunity-rich.

Access to capital remains a problem in the EU, which is unfortunate. When I look at it from a “values” perspective, I think many in the United States think that the Europeans are passing these laws because they want to go after American companies.

More often than not, the need to protect people from abuses of power by corporations and governments is deeply rooted in history. Data protection rules were put in place because of World War II, when information about Jews was weaponized against them. When I was in the European Parliament, many colleagues who grew up behind the Iron Curtain had been profiled by the Stasi as activists, dissidents, or journalists.

Unfortunately, what we find is that even in the United States, some of these cautionary tales, which are not hard to find, have to play out before people take them seriously.

In a recent podcast, Mark Zuckerberg said that he and Facebook accepted responsibility for some things that were actually more political and beyond their control. That seems like a point worth exploring.

Of course, we can’t blame everything on businesses. But my point is that these tech companies are making incredibly important policy decisions. They are political actors. Silicon Valley is a political hub. It’s just not treated that way.

Considering the persistent discrimination in many applications of AI, we can expect all sorts of disasters to come. Who do you think will pay for the fallout?

With the new generation of AI models, the scale is becoming so enormous that doesn’t it take very large and wealthy companies to build them? Who else would do it?

The point is that this technology exists. Do we know enough about it? Does it make sense that as soon as a new discovery is made, it gets commercialized as quickly as possible? That is the current dynamic, even though everyone recognizes the unpredictable nature of AI. So it’s one big live experiment, without many guardrails.

You call it a tech coup, but it seems like there is an opportunity for governments to take back the reins, because they have a lot of leverage: developing AI at such massive scales actually requires government participation.

I recommend that democratic governments reassert themselves, but also that they remain subject to checks, because we have seen the worst instincts of both governments and technology companies.

So there are many reasons for more democratic control. Personally, I think governments asking companies what to do next is itself an illustration of the tech coup.