What to know about California’s proposed landmark AI regulations
Sweeping advances in artificial intelligence have sparked warnings from industry leaders about the potential for serious risks, including rogue weapons systems and massive cyberattacks.
A lawmaker in California, home to many of the largest artificial intelligence companies, proposed a landmark bill this week that would impose regulations to address those risks.
The bill would mandate testing of large-scale AI products before they reach users, and would require every major artificial intelligence model to include a way to shut the technology down if something goes wrong.
“When we talk about safety and extreme risks, it’s much better to put protections in place before those risks occur rather than trying to catch up,” Senator Scott Wiener, the bill’s sponsor, told ABC News. “Let’s get ahead of this.”
Here’s what to know about what the bill does and how it could impact AI regulation nationwide:
What will the bill do to monitor AI risks?
The bill would increase the scrutiny that large AI models face before they see broad adoption, ensuring that state officials test products before they are released.
In addition to mandating emergency shutdowns, the bill would implement hacking protections to make AI less vulnerable to bad actors.
To enhance enforcement, the measure would create a Frontier Model Division within the California Department of Technology to implement the regulations.
Wiener said that since the legislation focuses on extreme risks, it will not apply to small-scale AI products.
“Our goal is to foster innovation while keeping safety in mind,” Wiener added.
Furthermore, the bill would advance the development of artificial intelligence by creating CalCompute, a publicly owned initiative that would facilitate shared computing power between companies, researchers, and community groups.
Teri Olle, director of the nonprofit Economic Security California, told ABC News that the effort would help lower the technical barrier to entry for small companies or organizations that may lack the massive computing power of larger companies.
“By expanding this access, it will allow AI research, innovation and development to be conducted in line with the public interest,” said Olle, whose organization helped develop this provision of the bill.
Sarah Myers West, managing director of the AI Now Institute, a nonprofit group that supports AI regulation, praised the measure’s precautionary approach.
“It’s great to see the focus on addressing and mitigating harm before a product enters the market,” Myers West told ABC News.
However, she added that many of the current risks posed by AI remain unaddressed, including bias in the algorithms used to set workers’ wages or grant access to healthcare.
“There are many places where AI is already being used to influence people,” Myers West said.
For his part, Wiener said the California legislature has taken up other bills to address some of the ongoing harms caused by artificial intelligence. “We’re not going to solve every problem in one bill,” Wiener added.
How might the bill impact AI legislation nationally?
California’s action on the extreme risks of AI comes amid a rise in AI-related bills in statehouses across the country.
As of September, state legislatures had introduced 191 AI-related bills in 2023, a 440% increase over the previous year’s total, according to BSA | The Software Alliance, an industry group.
However, Olle, of Economic Security California, said California’s proposed legislation carries special weight, given that many of the largest AI companies are located in the state.
“The regulations in California set the standard,” Olle said. “By complying with these standards in California, you are influencing the market.”
Despite recent policy debates and hearings, Congress has made little progress toward comprehensive action to address AI risks, Myers West said.
“Congress has been kind of stuck,” Myers West added. “That means there’s a really important role for the states.”
Dylan Hoffman, executive director of the California and Southwest region at industry lobby group TechNet, emphasized the importance of AI regulation in the United States shaping the global rules surrounding the technology.
“America must set the standards for the responsible development and deployment of artificial intelligence in the world,” Hoffman told ABC News in a statement. “We look forward to reviewing the legislation and working with Senator Wiener to ensure that any AI policies benefit all Californians, address any risks, and enhance our global competitiveness.”
While drafting the bill, Wiener borrowed some concepts from an executive order issued by President Joe Biden in October, such as the threshold used to determine whether an AI model is large enough to warrant regulation, Wiener said.
However, Wiener said he remains skeptical that federal legislation mirroring the California bill’s approach will materialize.
“I would like Congress to pass a strong AI law that supports innovation and safety,” Wiener added. “I don’t have much confidence that Congress will be able to do anything in the near future. I hope they prove me wrong.”