Big tech companies want to regulate AI. The rest of Silicon Valley is skeptical.

After months of high-level meetings and discussions, government officials and the leaders of major technology companies agree on one thing about AI: technology with the potential to change the world needs some ground rules.

But many in Silicon Valley are skeptical.

A growing group of tech heavyweights — including influential venture capitalists, the CEOs of midsize software companies and proponents of open-source technology — is pushing back, arguing that regulating AI could smother competition in a vital new field.

To these dissenters, the eagerness of the biggest AI players, like Google, Microsoft and ChatGPT maker OpenAI, to embrace regulation is a cynical ploy to lock in their advantages as the current leaders, essentially pulling up the ladder behind them. Their concerns were amplified last week, when President Biden signed an executive order laying out a plan to have the government set testing and approval guidelines for AI models — the underlying algorithms that drive “generative” AI tools such as chatbots and image makers.

“We’re still in the early days of generative AI, and it’s essential that governments don’t preemptively pick winners and shut down competition by adopting burdensome regulations that only the biggest companies can meet,” said Garry Tan, president of Y Combinator, the San Francisco-based startup incubator that helped nurture companies including Airbnb and DoorDash in their infancy. Tan said the current discussion has not sufficiently included the voices of smaller companies, which he believes are key to fostering competition and to engineering safer ways of harnessing AI.

Even influential AI startups such as Anthropic and OpenAI are closely tied to Big Tech, having taken massive investments from those companies.

“They don’t speak for the vast majority of people who have contributed to this industry,” said Martin Casado, general partner at venture capital firm Andreessen Horowitz, which has made early investments in Facebook, Slack and Lyft. Most AI engineers and entrepreneurs are watching regulatory discussions from afar, focusing on their companies rather than trying to lobby politicians, he said.

“A lot of people want to build, they are innovative, they are the silent majority,” Casado said. He said the executive order showed these people that regulation may be coming sooner than expected.

Casado’s venture capital firm sent a letter to Biden outlining its concerns. It was signed by prominent AI leaders including Replit CEO Amjad Masad and Mistral’s Arthur Mensch, as well as more established tech leaders such as Tobi Lutke, CEO of e-commerce company Shopify, who tweeted “Regulating AI is a terrible idea” when the executive order was announced.

Requiring AI companies to report to the government could make developing new technology more difficult and expensive, Casado said. It could also hurt the open-source community, said Casado and Andrew Ng, an AI research pioneer who helped found Google’s AI lab.

As companies have scrambled to launch and monetize new AI tools since OpenAI released ChatGPT nearly a year ago, governments have grappled with how to respond. Numerous congressional hearings have addressed the topic, and bills have been proposed in Congress and in state legislatures. The European Union is working to revamp AI regulation that has been in the works for several years, and Britain is trying to position itself as an AI-friendly island of innovation, recently hosting a large gathering of government and business leaders to discuss the technology.

Throughout the discussions, representatives of the most powerful AI companies have openly said that the technology carries serious risks and that they welcome regulation. Good regulation, the companies argue, can prevent bad outcomes, encourage more investment in AI and make citizens more comfortable with the rapidly advancing technology. Being part of the regulatory conversation also gives business leaders influence over the kinds of rules that get written.

“If this technology goes wrong, it could go absolutely wrong,” Sam Altman, CEO of OpenAI, said at a congressional hearing in May. Lawmakers, including Senate Majority Leader Charles E. Schumer (D-N.Y.), have said they want to regulate AI early, rather than take the more hands-off approach the government adopted with social media.

Days after Biden’s executive order, government representatives attending an AI safety summit hosted by the United Kingdom signed a statement supporting the idea of giving governments a role in testing AI models.

“So far, the only people testing the safety of new AI models are the companies developing them. We shouldn’t rely on them to grade their own homework, as many of them agree,” British Prime Minister Rishi Sunak said in a statement.

Both Demis Hassabis, CEO of Google’s DeepMind AI division, and Dario Amodei, CEO of Anthropic, added their support to the statement. Spokespeople for Google and Anthropic did not comment. A Microsoft spokesperson declined to comment but pointed to congressional testimony from Brad Smith, the company’s vice chair and president, in which he supported the idea of AI being licensed by an independent government body.

An OpenAI spokesperson declined to comment but pointed to a tweet in which Altman said that while he supports regulating established AI companies working on powerful models, governments should be careful not to hurt competition.

Many big breakthroughs in technology over the past few decades have occurred because developers have made their technology available for others to use for free. Now, companies are using open source AI models to build their own AI tools without having to pay Google, OpenAI, or Anthropic for access to their models.

With lobbyists for big tech companies working hard in Washington, these companies may be able to influence regulation in their favor — at the expense of smaller companies, Ng said.

Critics of the emerging regulatory frameworks also say they rest on exaggerated fears about the risks of AI. Influential AI leaders, including executives from OpenAI, Microsoft, Google and Anthropic, have warned that the technology poses a risk to human societies on par with pandemics or nuclear weapons. Many prominent AI researchers and entrepreneurs say the technology is advancing so rapidly that it could soon surpass human intelligence and begin making its own decisions.

These concerns, which featured prominently at the U.K. AI summit, give governments cover to pass regulations, said Ng, who said he regrets not pushing back sooner and more forcefully against “existential risk” arguments. “I find it difficult to see how humanity can become extinct,” he said.
