MLOps Community

California and the Fight Over AI Regulation

Posted Aug 15, 2024 | Views 77
# AI Regulation
# Sen. Scott Wiener
# The Washington Post
speakers
Senator Scott Wiener
State Senator @ California
Gerrit De Vynck
Tech Reporter @ The Washington Post

Gerrit De Vynck is a Tech Reporter for The Washington Post. He writes about Google, artificial intelligence, and the algorithms that increasingly shape society. He previously covered tech for seven years at Bloomberg News.

SUMMARY

Regulators around the world are debating how to develop laws to govern AI. California has often set the standard for business regulation nationally, and a similar pattern seems to be developing with AI. Sen. Scott Wiener is advocating for one of the most comprehensive AI laws in the world, but some in the industry are pushing back. We'll cover the latest developments in the AI regulation debate and discuss what impact an AI law in California might have on the industry.

TRANSCRIPT

Gerrit De Vynck [00:00:09]: He represents us in the California state Senate here in San Francisco. And, senator, let's just quickly dive into this, because I know probably a bit of limited time at this point, but, you know, there's all these bills in Sacramento related to AI, related to regulation. You know, there was a number, there was 50 at one point. I think it's down to 30 now. There's obviously yours, which you've been working on for a long time. Can you just quickly give us a state of play when it comes to AI regulation in Sacramento and what we might be seeing come out into law over the coming months?

Senator Scott Wiener [00:00:44]: Sure. Well, thank you so much for having me. And, you know, AI is incredibly important. It is. I mean, obviously it's been around for a long time, but it's accelerating. The innovation is accelerating. There is so much promise for the good that AI can do for humanity. And it's very, very exciting.

Senator Scott Wiener [00:01:07]: And I'm proud to represent the beating heart of AI innovation in San Francisco. It's also a very powerful technology, and it's a good time for us to step back and ask whether there's anything we should be doing to prepare for the continued growth of AI, to make sure that people can innovate, that it's benefiting society, and that we are getting ahead of any risks, because as human beings, we have a tendency sometimes to ignore risk until there's a problem. Let's get ahead of it, and California is the place to do it. So I have a bill requiring basic safety evaluations of very large models before they're trained or released, not limiting or restricting or banning them in any way, but simply requiring a safety evaluation. There is a significant bill to address algorithmic discrimination as AI models increasingly make.

Gerrit De Vynck [00:02:13]: Thank you, senator. We actually have a Wi-Fi problem now, so I'm going to mute for one second while they try to fix it. But we did get most of that answer. Okay, I'm just going to dive in because we've got audio and video. So, I mean, you mentioned your bill and how you want to focus on very large models and get ahead of future risks. I believe your bill still has a certain threshold in terms of the amount of training compute, as well as the money being spent on training. And I know that some of those numbers have concerned some people in Silicon Valley as maybe an arbitrary limit, or something that might make it hard for people who do have a lot of data to do what they want with their AI models. And I'm just wondering why you believe that setting those thresholds is an important part of the regulation going forward.

Senator Scott Wiener [00:03:12]: Well, let's just be very, very clear. I've been around the block in terms of some of these issues for a long time. And in the tech sector there are people who just don't want any regulation, and I respect that. They're absolutely entitled to that opinion. And some folks are going to, they will criticize you for setting the threshold too low. They'll criticize you for setting the threshold too high. They'll criticize you for not setting a threshold. So no matter what we do, we're going to get criticized by some people in the tech sector.

Senator Scott Wiener [00:03:45]: Others are being very positive about this. So we are focused on the very, very large, powerful models that frankly don't exist now, but will soon. So we set the threshold originally at 10^26 FLOPs. Some people thought that was too low. Some people thought it was too high. Some people thought it wasn't flexible enough. And some people said that over time it's going to take a lot less compute to create very powerful models. So we added the $100 million minimum cost for training, and we did that to make it very clear.

Senator Scott Wiener [00:04:21]: We're talking about the largest models that are very expensive to train, that frankly, the large labs are going to be doing. We don't want this to cover startups. We want startups to be able to do their thing and not have to worry about this. And that's going to be the case. Unfortunately, there are some scare tactics happening now where some people are telling startup founders that they're going to be covered, which they're not, and spreading some other fear that really misrepresents the bill. This is a very light-touch piece of regulation, and I intended it that way: simply requiring the largest labs training the largest models to do basic safety evaluations.

Gerrit De Vynck [00:05:06]: I mean, I remember you and I first spoke about the bill and this issue, I think it was back in February. And you know, if I'm honest, when I put that story out, it didn't seem to get that much traction. And it's only in the last month or so that I've seen a lot of people online and here in Silicon Valley want to talk about it. And I'm just wondering about the relationship between regulators and the tech sector. I mean, a lot of the time when you speak with people in the tech world, they see government as sort of something they want to avoid or get around if they can. And I'm just wondering, is there like a better way for us to have this relationship? Should people be talking to you more or what do you think might make this better?

Senator Scott Wiener [00:05:45]: Yeah, I mean, representing San Francisco, I have a lot of relationships in the tech world, but a lot of policymakers don't, just because of where they represent. And I think for a lot of people in tech, people are busy building their companies or just creating things, making things, living their lives, doing their work. And so I don't expect them to be attuned to everything happening in Sacramento or Washington, DC. I actually took the fairly extraordinary step last September, five months before we introduced the bill, of introducing an outline version of the bill and making that very public. And I sent it around to some folks trying to draw out feedback, because my goal on this is to get it right. And we were not getting a lot of feedback for the first eight months that one version of the bill or another was out there. It was very quiet.

Senator Scott Wiener [00:06:52]: And then in late April is when some of the large accelerationist accounts focused on it and started tweeting about it. And that sort of began much deeper engagement. And I'm actually grateful for that engagement. We've had some really helpful meetings with folks in the open source world, with folks who are much more libertarian about these things, folks who are critics of the bill, in addition to people who are supportive of the bill. And we made some significant amendments to the bill in response to that feedback from the open source community. In particular, I support open source. Open source is incredibly important. It democratizes tech innovation and we want to foster it.

Senator Scott Wiener [00:07:34]: And there are some concerns from some open source folks about the bill. And so we've been talking to them, and it's, you know, we want to do whatever we can to make sure that we are taking in that feedback and working constructively with people.

Gerrit De Vynck [00:07:51]: I know your bill built off of some of the ideas in the federal Biden White House executive order last year. And a lot of politicians in DC talk about AI, but we haven't really seen anything concrete come out of DC in terms of something we can expect to become law soon. And, you know, I know in the past, California has become a bit of a de facto regulator on issues such as material safety and, more recently, privacy. I mean, is that what you are trying to do yourself, or is that what we can see here? And do you expect maybe the federal regulators to also follow suit and bring in their own legislation in the coming months?

Senator Scott Wiener [00:08:35]: Yeah, I think in a lot of areas, it's ideal to have one federal national standard. The unfortunate reality is I'm not optimistic, and I doubt many people are, that Congress will act on some sort of reasonable AI safety legislation. I just don't see it happening in the near future. I hope I'm wrong, but the reality is that in the year 2024, there is not a federal data privacy law. Congress has done nothing on social media other than banning TikTok. Congress has done nothing on net neutrality.

Senator Scott Wiener [00:09:19]: So I authored California's net neutrality law in 2018, hoping that the federal government might actually step in and enact a law, and it has not. In, I believe, 2019, give or take, we passed a data privacy law in California. We passed some social media safety legislation. And it's true on climate action, too. Congress has now done some really good things, but there are other things it hasn't done. And so, especially around tech policy, Congress, with the exception of banning TikTok, hasn't really done anything since the 1990s. And so California has a responsibility to lead, and we're well positioned to lead as the heartland of so much tech innovation.

Senator Scott Wiener [00:10:09]: And I'm proud that we're playing that role and we should be and are working with folks in the industry, in addition to academics and advocates and others, to craft that policy.

Gerrit De Vynck [00:10:22]: I'm just, you know, what are some of the biggest risks with AI? I mean, concretely that are in your mind? I mean, are we talking about some of the concerns that seem a little bit more futuristic, such as giving people the ability to do things that maybe they needed PhDs and security clearances to do, or are we talking about some of the, you know, infusion of racist or sexist biases into algorithms, which we've already dealt with for many years up to this point?

Senator Scott Wiener [00:10:51]: Well, I think it's various things. As you just mentioned, there's algorithmic bias, and we're already seeing the growth of AI making decisions about people's lives, you know, processing an application, or an AI banning someone from, say, an online service, which could be accurate or mistaken. And I think it's very powerful. And so algorithmic discrimination is something that does need to be addressed. Deepfakes, deepfake revenge porn and so forth. And then in terms of safety, our bill focuses on catastrophic risks. So whether it's a chemical, biological, or nuclear weapon, more than $500 million in damage to critical infrastructure, or committing a cybercrime that does more than $500 million in damage. So we're talking about really big catastrophic risks and simply asking developers, when they are training a massively large model with more than $100 million in training costs, to do basic safety evaluation for these catastrophic risks.

Senator Scott Wiener [00:12:27]: And if you determine that there is such a risk, that it's real, to take reasonable steps not to eliminate the risk but to mitigate it. These are things that we should probably expect people to do simply as human beings, but sometimes you need to put it into the law. And to be clear, the large labs all said that they're either already doing this safety testing or they intend to. They recently signed an agreement in Seoul, South Korea, committing to doing these safety tests. They've gone to the White House and said they're going to do it, and so on and so forth. So we're not asking them to do anything that they have not already committed to do.

Gerrit De Vynck [00:13:04]: I guess there's maybe a difference between the voluntary commitments and actually having to follow the law. But I mean, just lastly here, and then we will let you go. I mean, for people who are maybe here in the room, they're building their startup, they're spending 80, 90 hours doing that. And, I mean, they maybe don't have a bunch of time to tweet about their vision or to hire a lobbyist to come and knock on your door in Sacramento. I mean, how can they be involved in sort of being heard as these laws are being debated and decided on right now?

Senator Scott Wiener [00:13:37]: Yeah. Well, first of all, people can always reach out to me, message me on Twitter or otherwise. And I do like to get feedback from people, even when people are telling me things I don't want to hear. You have to be an active listener as an elected official, and I try to do that, and I do really value people's feedback. All of the hearings on this bill and every bill are public, and people can attend and provide testimony. Through the State Senate and State Assembly websites, people can provide written feedback, basically submitting a letter in support of or opposition to a bill.

Senator Scott Wiener [00:14:21]: And I just encourage people. Again, I know that, especially when you're building a company or product or just working at a company, you're super busy, and then put maybe family obligations or personal obligations on top of that. People don't have a lot of time. The only thing I ask: I never have a problem being criticized or opposed on any bill that I propose, but I do always ask people to know what's actually in the bill. And like I said, there's been some significant misinformation being spread, particularly on Twitter, about the bill, about how developers are going to go to prison under the bill, which is just completely made up, and some other just really extreme things. This is very light touch.

Senator Scott Wiener [00:15:11]: You don't need to get permission or a license or approval from any government entity to train or release any model. If you're not spending more than $100 million to train your model, the bill doesn't apply to you. If you are spending over $100 million to train your model, then you have to engage in basic safety testing, which, again, all of the large labs training models at this scale have already said they're doing or intend to do. So I just ask for people to know what's in the bill, and we're happy to provide people with information if they're having trouble accessing it.

Gerrit De Vynck [00:15:57]: Awesome. Thank you so much for joining us and for bearing with us. And, yeah, all the best.

Senator Scott Wiener [00:16:02]: Thank you.

