Are standards important for cars, plugs, Wi-Fi and artificial intelligence?

Artificial intelligence holds a lot of promise for innovation and progress, but it also has the potential to cause harm. To enable the responsible development and use of AI, the International Organization for Standardization (ISO) recently released ISO/IEC 42001, a new standard for AI management systems. According to ISO, this standard “provides organizations with the comprehensive guidance they need to use AI responsibly and effectively, even as the technology rapidly evolves.”

As AI has developed rapidly and spread widely around the world, a tangle of conflicting guidelines has emerged from major AI companies like Meta, Microsoft, and Google. (Meta, however, disbanded its Responsible AI team in November.) The Responsible AI Institute, based in Austin, Texas, runs its own assessments and certification program for ethical uses and applications of AI. Still, maintaining consistent standards and practices has been a long-standing challenge throughout the history of technology, and standards organizations such as ISO and IEEE are natural places to turn to for a widely agreed-upon set of standards for developing and using AI responsibly.

“If there is this kind of buy-in from organizations that are promoting the responsible development and use of AI, other organizations will follow suit.” —Virginia Dignum, Umeå University, Umeå, Sweden

In ISO’s case, the standard concerns AI management systems. These are catalogs or inventories of the different AI systems a company uses, along with information about how, where, and why those systems are used, says Umang Bhatt, an assistant professor and faculty fellow at New York University and an advisor to the Responsible AI Institute. As the standard defines it, an AI management system “is intended to establish policies and objectives, as well as the processes necessary to achieve those objectives, relating to the responsible development, provision or use of artificial intelligence systems.”
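To make the catalog idea concrete, here is a minimal sketch in Python of what one entry in such an inventory might look like. The record type and its fields (name, purpose, deployment context, usage, owner, risks) are illustrative assumptions, not terminology drawn from the text of ISO/IEC 42001 itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI-inventory record; field names are
# illustrative and not taken from the text of ISO/IEC 42001.
@dataclass
class AISystemRecord:
    name: str                # which system this entry describes
    purpose: str             # why the system is used
    deployment_context: str  # where it runs (product, region, team)
    usage: str               # how it is used (decision support, automation, ...)
    owner: str               # accountable team or person
    risks: list[str] = field(default_factory=list)  # known risks to monitor

# The company's catalog is then simply a collection of such records.
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="support-ticket-classifier",
        purpose="Route customer tickets to the right team",
        deployment_context="Customer-support dashboard, EU and US regions",
        usage="Automation, with human review of low-confidence cases",
        owner="support-platform team",
        risks=["misrouting", "uneven accuracy across languages"],
    ),
]

for record in inventory:
    print(f"{record.name}: {record.purpose} ({record.deployment_context})")
```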

The new ISO standard thus provides a set of concrete guidelines, rather than just high-level principles, that support responsible AI, says Hoda Heidari, who co-leads the Responsible AI Initiative at Carnegie Mellon University. The standard also gives AI developers confidence that “appropriate processes were followed in creating the system and evaluating it before it was released, and that there are appropriate processes in place to monitor it and address any negative findings,” Heidari adds.

IEEE, ISO, and governments weigh in

Meanwhile, IEEE, IEEE Spectrum’s parent organization, also develops and maintains a wide range of standards across many areas of technology. As of press time, Spectrum is aware of at least one effort now underway within IEEE’s broad, global standards-setting organization to develop responsible AI standards, reportedly building on a 2020 recommended practice for the development and use of artificial intelligence. In addition, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has published a document encouraging the ethically aligned development of autonomous systems.

Like many technology standards, the ISO standard is voluntary rather than mandatory. “What compels companies to adopt this? A standard by itself is not enough; you have to have a reason and an incentive for these developers to adopt it,” says Chirag Shah, founding co-director of RAISE, a center for responsible AI at the University of Washington. He also sees adoption as a challenge, especially for small organizations that don’t have sufficient resources, and even for large companies that already have standards of their own.

“It’s a practice that I hope will become part of the culture of the software development community.” —Umang Bhatt, New York University

Virginia Dignum, professor of responsible AI and director of the AI Policy Lab at Umeå University in Sweden, agrees, noting that a standard “only really works when there are enough organizations adopting it, and by doing so, we also determine what will and won’t work in the standard.” To address this, Dignum suggests turning to big tech companies and convincing them to adopt the standard, because “if there is this kind of buy-in from organizations that are promoting the responsible development and use of AI, other organizations will follow suit.” For example, Amazon Web Services helped create the standard and is now seeking to adopt it.

Another motivation for implementing the standard is to prepare for looming government regulations, which may turn out to be compatible with the new ISO standard. For example, the US government recently issued an executive order on artificial intelligence, while the European Union’s AI Act is expected to come into full effect by 2025.

Trust is also important

An additional incentive for AI companies to adopt the standard is to build trust with end users. In the United States, for example, people express more concern than excitement about the impact of AI on their daily lives, and those concerns extend to the data used to train AI, its biases and inaccuracies, and its potential for misuse. “When there are standards and best practices, and we can assure consumers that they are being followed, they will trust the system more, and they will be more willing to interact with it,” Heidari says.

Similar to a car’s braking system, which is designed and tested according to certain standards and specifications, “even if users don’t understand what the standard is, it will give them confidence that things have been developed in a certain way, and that there is also some process of auditing, inspecting, and overseeing what has been developed,” says Dignum.

For AI companies looking to adopt the standard, Bhatt advises treating it just like the practices they have already created to track issues with their AI systems. “These standards will be applied in a very similar way to the continuous-monitoring tools you might create and use,” he says. “It’s a practice that I hope will become part of the culture of the software development community.”

Beyond implementation, Heidari hopes the new ISO standard will spur a mindset shift in AI companies and the people who build these systems. She points to design choices made when training machine-learning models as an example: Each may seem like just another engineering or technical decision with no meaning outside the mechanism at hand, but “all of these choices have huge implications when the resulting model is going to be used to inform decision-making or to automate practices on the ground,” she says. “The most important thing for developers of these systems is to keep in mind that whether they know it or not, and whether they accept it or not, many of the choices they make have real-world consequences.”
