More than 200 US AI companies join government safety consortium
Technological progress has a proven track record of productively transforming society.
Many technological innovations, from the sewing machine to the automobile and the elevator, have flourished under industry standards and government oversight designed with ethical guidelines, transparency, and responsible dissemination in mind.
But more often than not, these policy frameworks came later and were operationalized after assessing the impact of the technology.
For example, seat belts were not mandatory equipment in cars until 1968.
But when it comes to the impact of AI, governments around the world are increasingly looking to ensure the responsible development and deployment of the technology on a more rapid timeline, as concerns grow about its far-reaching capabilities and the potential impact of its misuse across work, politics, everyday life and beyond.
The United States on Thursday (February 8) announced a new consortium to support the safe development and deployment of generative AI that will be supported by more than 200 organizations, including academic institutions, leading AI companies, nonprofit organizations and other key players from across the flourishing U.S. AI ecosystem.
The newly formed U.S. AI Safety Institute Consortium (AISIC), created by the National Institute of Standards and Technology (NIST), is designed to foster collaboration between industry and government to advance the safe use of AI, helping prepare the United States to handle the capabilities of next-generation AI systems with appropriate risk management strategies.
“AI is moving the world into very new territory,” Laurie E. Locascio, under secretary of commerce for standards and technology and director of NIST, said in a statement. “And like every new technology, or every new application of technology, we need to know how to measure its capabilities, limitations, and impacts. That’s why NIST is bringing together these incredible collaborations of representatives from industry, academia, civil society, and government, all coming together to address challenges of national importance.”
At a press conference announcing AISIC, Commerce Secretary Gina Raimondo emphasized that the work the Safety Institute does cannot be “done in a bubble separate from the industry and what happens in the real world.”
See also: How do AI companies plan to build superintelligence and then control it?
AI pioneers continue to lead the charge
Among AISIC’s more than 200 members, Adobe, OpenAI, Meta, Amazon, Palantir, Apple, Google, Anthropic, Salesforce, IBM, Boston Scientific, Databricks, Nvidia, Intel and many others represent the AI field, but they are not alone.
Financial institutions including Bank of America, JPMorgan Chase, Citigroup and Wells Fargo, as well as financial services companies including Mastercard, have also pledged their support for the safe and responsible development of the domestic AI industry.
“Progress and responsibility must go hand in hand,” Nick Clegg, Meta’s head of global affairs, said in a statement. “Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI. We are excited to be part of this consortium and work closely with the AI Safety Institute.”
“The new AI Safety Institute will play a critical role in ensuring made-in-the-USA AI is used responsibly and in ways people can trust,” added IBM Chairman and CEO Arvind Krishna. “IBM is proud to support the Institute with our AI technology and expertise, and we applaud Secretary Raimondo and the Administration for making responsible AI a national priority.”
Read also: NIST says defending AI systems from cyberattacks ‘not resolved’
NIST has been thrust into the forefront of the US government’s approach to AI, having been tasked by a White House executive order with developing domestic guidelines for evaluating AI models and red teaming; facilitating the development of consensus-based standards; and providing test environments to evaluate artificial intelligence systems, among other duties.
PYMNTS Intelligence found that about 40% of executives believe there is urgency to adopt generative AI, and 84% of business leaders said they believe the impact of generative AI on the workforce will be positive.
“(AI) is the general-purpose technology most likely to lead to exponential growth in productivity,” Avi Goldfarb, Rotman chair in AI and healthcare and professor of marketing at the University of Toronto’s Rotman School of Management, told PYMNTS in an interview published in December. “…The important thing to remember in all discussions of AI is that when we slow it down, we also slow down its benefits.”
But AISIC has its work cut out for it: AI safety is a many-sided, many-headed beast.
“There is a difference we see between cybersecurity and AI security,” Kojin Oshiba, co-founder of end-to-end AI security platform Robust Intelligence, told PYMNTS in an interview published in January. “CISOs know the different components of cybersecurity, such as database security, network security, email security, etc., and can get a solution for each of them. But with AI, the components of AI security and what needs to be done for each are not widely known. The landscape of risks and required solutions is unclear.”
By bringing together the efforts and perspectives of the more than 200 ecosystem players supporting AISIC, it becomes possible to build a more robust and responsible framework for developing and deploying generative AI technologies.
For all of our PYMNTS AI coverage, subscribe to our daily AI Newsletter.