
a16z VC Martin Casado explains why so many AI regulations are so wrong

The problem with most attempts to regulate AI so far is that lawmakers are focused on some mythical future version of AI instead of truly understanding the new risks that AI actually introduces.

That’s what Martin Casado, a general partner at Andreessen Horowitz, argued to a standing-room-only crowd at TechCrunch Disrupt 2024 last week. Casado, who heads a16z’s $1.25 billion infrastructure practice, has invested in AI startups such as World Labs, Cursor, Ideogram, and Braintrust.

“Transformative technologies and regulation have been an ongoing conversation for decades, right? The problem with all the talk about AI is that it seems to have come out of nowhere,” he told the crowd. “They’re kind of trying to make new regulations without learning any lessons.”

For example, he said: “Have you actually seen the definitions of AI in these policies? We can’t even define it.”

Casado was among the sea of voices in Silicon Valley who rejoiced when California Governor Gavin Newsom vetoed the state’s attempted AI governance law, SB 1047. The bill would have required a so-called kill switch in very large AI models – that is, a way to shut them down. Those who opposed the bill argued it was so poorly worded that, instead of saving us from an imaginary future AI monster, it would have simply sown confusion and stifled California’s thriving AI development scene.

“I regularly hear founders hesitate to move here because of what it signals about California’s attitude toward AI: that we prefer bad legislation based on sci-fi concerns rather than tangible risks,” he posted in the weeks before the bill was vetoed.

Even though this particular state bill is dead, the fact that it existed at all still bothers Casado. He fears that other, similarly constructed bills could materialize if politicians decide to pander to the general population’s fears of AI rather than governing what the technology actually does.

He understands AI technology better than most. Before joining the famous venture capital firm, Casado founded two other companies, including a network infrastructure company, Nicira, which he sold to VMware for $1.26 billion just over a decade ago. Before that, Casado was an IT security expert at Lawrence Livermore National Laboratory.

He says many proposed AI regulations have not come from, or even been supported by, those who understand AI technology best, including academics and the commercial companies that build AI products.

“You have to have a notion of marginal risk that is different. For example, how is AI today different from someone using Google? How is AI today different from someone just using the internet? If we have a model for how it is different, you have some notion of marginal risk, and then you can apply policies that address that marginal risk,” he said.

“I think we’re a little bit early in reaching for a set of regulations before we really understand what we’re going to regulate,” he said.

The counterargument – brought up by several people in the audience – was that the world didn’t really see the kinds of damage the internet or social media could do until that damage was upon us. When Google and Facebook launched, no one knew they would come to dominate online advertising or collect so much data on individuals. No one understood things like cyberbullying or echo chambers when social media was young.

Proponents of AI regulation now often point to these past circumstances and argue that these technologies should have been regulated from the start.

Casado’s response?

“There is a robust regulatory regime in place today that has been developed over 30 years,” he said, and it is well equipped to craft new policies for AI and other technologies. It’s true that, at the federal level alone, regulatory bodies include everything from the Federal Communications Commission to the House Committee on Science, Space, and Technology. When TechCrunch asked Casado on Wednesday, after the election, whether he stood by this view – that AI regulation should follow the path already set by existing regulatory bodies – he said yes.

But he also believes that AI should not be made to answer for problems caused by other technologies. Instead, regulators should go after the technologies that actually caused those problems.

“If we got it wrong on social media, you can’t fix it by putting it on AI,” he said. “The people regulating AI say, ‘Oh, we got it wrong in social, so we’ll get it right in AI,’ which is an absurd statement. Let’s go solve this problem in social media.”