California lawmakers aim to regulate artificial intelligence

SAN FRANCISCO — A California lawmaker introduced a bill Thursday that aims to force companies to test their most powerful artificial intelligence models before launching them — a landmark proposal that could inspire regulation across the country as state legislatures increasingly embrace the fast-developing technology.

The new bill, sponsored by Sen. Scott Wiener, a Democrat who represents San Francisco, would require companies that train new AI models to test their tools for “unsafe” behavior, build in hacking protections and develop the technology in such a way that it can be fully shut down, according to a copy of the bill.

The bill would also require AI companies to disclose their testing protocols and safeguards to the California Department of Technology. If the technology causes “serious harm,” the state attorney general could sue the company.

Wiener’s bill comes amid an explosion of state bills addressing artificial intelligence, as policymakers across the country worry that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world’s largest technology companies, plays a unique role in setting precedents for technology industry guardrails.

“You can’t work in software development and ignore what California says or does,” said Lawrence Norden, senior director of the Elections and Government Program at the Brennan Center.

Federal lawmakers have held numerous hearings on AI and proposed several bills, but none have passed. Advocates for AI regulation now worry that the pattern of debate without action seen on previous technology issues, such as privacy and social media, will repeat itself.

“If at some point Congress can pass a strong, pro-innovation, pro-safety AI law, I would be the first to welcome that, but I’m not holding my breath,” Wiener said in an interview. “We need to move forward on this to maintain public confidence in AI.”

Wiener’s party has an overwhelming majority in the state Legislature, but tech companies have fought hard against regulation in California before, and they have powerful allies in Sacramento. Still, Wiener says he believes the bill could pass by the fall.

“We were able to pass some very strict policies related to technology,” he said. “So yes, we can pass this bill.”

California isn’t the only state pushing AI legislation. There are 407 AI-related bills currently active in 44 U.S. states, according to an analysis by BSA The Software Alliance, an industry group that includes Microsoft and IBM. That is a significant increase since BSA’s last analysis in September 2023, which found that states had introduced 191 AI bills.

Several states have already enacted laws that address acute risks of artificial intelligence, including its potential to exacerbate employment discrimination or create deepfakes that could disrupt elections. About a dozen states have passed laws requiring the government to study the technology’s impact on employment, privacy and civil rights.

But as the most populous state in the United States, California has unique authority to set standards that have impact across the country. For decades, California’s consumer protection regulations have served as national and even international standards for everything from harmful chemicals to cars.

In 2018, for example, after years of debate in Congress, the state passed the California Consumer Privacy Act, which established rules for how technology companies collect and use people’s personal information. The United States still does not have a federal privacy law.

Wiener’s bill is based largely on an executive order issued by President Biden last October that uses emergency powers to require companies to conduct safety tests on powerful AI systems and share those results with the federal government. California’s measure goes further than the executive order, explicitly requiring hacking protections, protections for AI whistleblowers and mandatory testing.

The bill is likely to face criticism from much of Silicon Valley, which says regulators are moving too aggressively and risk enshrining regulations that make it difficult for startups to compete with larger companies. Both the executive order and California legislation put the spotlight on big AI models, something some startups and venture capitalists have criticized as short-sighted about how the technology will evolve.

Last year, debate raged in Silicon Valley about the dangers of artificial intelligence. Prominent researchers and AI leaders from companies including Google and OpenAI signed a statement saying the technology is on par with nuclear weapons and pandemics in its potential to cause harm to civilization. The group that organized that statement, the Center for AI Safety, was involved in drafting the new legislation.

Technology workers, executives, activists and others were also consulted about how best to approach AI regulation, Wiener said. “We have done tremendous outreach to stakeholders over the past year.”

The important thing is that there is a real conversation about the risks and benefits of AI, said Josh Albrecht, co-founder of AI startup Imbue. “It’s good that people are thinking about this at all.”

Experts expect the pace of AI legislation to accelerate as companies launch increasingly powerful models this year. The proliferation of bills at the state level may increase industry pressure on Congress to pass AI legislation, because complying with a federal law may be easier than responding to a variety of different state laws.

“There is great benefit from clarity across the country on the laws governing AI, and strong national law is the best way to provide that clarity,” said Craig Albright, BSA’s senior vice president for U.S. government relations. “Then companies, consumers and all implementers know what is needed and expected.”

Any legislation in California could have a major impact on AI development more broadly because many of the companies developing the technology are based in the state.

“The California legislature and advocates working in that state are much more attuned to the technology and its potential impact, and are very likely to be ahead of the curve,” Norden said.

States have a long history of moving faster than the federal government on technology policy. Since California passed its 2018 privacy law, nearly a dozen other states have enacted their own laws, according to an analysis by the International Association of Privacy Professionals.

States have also sought to regulate social media and child safety, but the tech industry has challenged many of those laws in court. Later this month, the Supreme Court is scheduled to hear oral arguments in landmark cases involving social media laws in Texas and Florida.

At the federal level, partisan battles have distracted lawmakers from developing bipartisan legislation. Senate Majority Leader Charles E. Schumer (D-N.Y.) has created a bipartisan group of senators focused on AI policy that is expected to soon unveil an AI framework. But the House’s efforts are far less advanced. At a Post Live event on Tuesday, Rep. Marcus J. Molinaro (R-N.Y.) said House Speaker Mike Johnson had called for a working group on artificial intelligence to help move legislation along.

“More often than not, we fall far behind,” Molinaro said. “Last year left us even further behind.”
