Managing Human-Allied Artificial Intelligence

Some recent developments have led people to believe that the “singularity”, a hypothetical future point at which artificial intelligence surpasses human intelligence, is imminent. The reality is that current AI systems still blindly follow patterns in data and do not truly understand the world.

What we need now is the development of human-allied AI systems, in which humans and AI work together to amplify each other’s capabilities and compensate for each other’s shortcomings. To enable such systems, one needs to adhere to the principles of responsible AI. Here are the main ideas behind these principles.

Explainable artificial intelligence

To trust the results of AI models, we must be able to understand the reasons behind the decisions and recommendations they make. Many successful AI applications rely on what are known as “black box” models, where the computation performed is known but the reasons for the results are not fully understood.

Even models that are not black boxes can typically be explained only in terms of their statistical properties. A responsible AI system will provide explanations that anyone can understand, which may require aligning AI models with the accepted methods of explanation in the application domain.
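To make the gap concrete, here is a minimal sketch of the kind of statistical explanation available today: a deliberately opaque model is trained on synthetic data and probed with permutation importance from scikit-learn. The feature names and data are invented for illustration; note that the resulting scores are exactly the statistical explanations described above, still short of explanations a domain expert would naturally accept.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # synthetic patient features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic outcome label

model = GradientBoostingClassifier().fit(X, y)  # accurate, but a "black box"

# Permutation importance: how much does performance drop when one feature
# is randomly shuffled? A purely statistical notion of "explanation".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["age", "blood_pressure", "marker_a", "marker_b"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```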

Data discipline is one of the aspects of responsible AI that have received the most attention. The EU’s General Data Protection Regulation was the first comprehensive framework created to govern data collection, access and use. The rights of the end user, whose data is used to build AI systems, are of paramount importance in this context. India’s Digital Personal Data Protection Act, 2023 has the same aim.

The fairness and ethical aspects of artificial intelligence have received significant attention in both research and the popular media. There are frequent reports of AI systems identifying people of a particular race as more likely to be criminals, or women as more likely to be nurses than doctors. AI chatbots are known to turn toxic in their language with enough encouragement from the other participant.

While significant progress has been made in addressing such issues, technologies and policies need to be adapted to the local social context. India has its own dimensions of discrimination, and one cannot blindly adopt Western views on them. Existing biases against certain classes or groups, or towards people from certain regions, will be reflected in AI systems trained on data that carries them; these biases must be identified and addressed. For this reason, AI researchers and social scientists must work in close collaboration to understand current human biases and the ways in which they manifest.
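As one example of what such an audit might look like in practice, here is a minimal sketch of a demographic-parity check. The decisions and group labels are hypothetical placeholders; the groups that actually matter in the Indian context would be defined together with social scientists.

```python
import numpy as np

# Hypothetical model decisions (1 = favourable outcome) and group membership.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A"] * 5 + ["B"] * 5)

rate_a = preds[group == "A"].mean()   # favourable-outcome rate for group A
rate_b = preds[group == "B"].mean()   # favourable-outcome rate for group B

print(f"group A rate: {rate_a:.2f}")
print(f"group B rate: {rate_b:.2f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")  # large gap = red flag
```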

Another aspect of responsible deployment is setting “performance expectations” for AI systems. These are not simple programs; they solve complex problems, and the results of their computations may not always be correct. The end user of such software often does not fully understand the implications. When a designer says a system will be correct 93 times out of 100, does that mean an AI-powered medical test will fail to detect the disease in 7 patients? An AI system may even declare that a patient has a disease simply because the X-ray images are corrupted.
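The arithmetic below, with invented numbers, shows why a headline figure like “93 out of 100” can mislead: the same accuracy is consistent with the test missing most of the patients who are actually ill.

```python
# Hypothetical screening of 100 people, 10 of whom actually have the disease.
tp, fn = 4, 6    # of the 10 sick: 4 detected, 6 missed
tn, fp = 89, 1   # of the 90 healthy: 89 cleared, 1 false alarm

accuracy = (tp + tn) / (tp + tn + fp + fn)  # (4 + 89) / 100 = 0.93
sensitivity = tp / (tp + fn)                # 4 / 10 = 0.40

print(f"accuracy:    {accuracy:.2f}")    # 0.93, sounds reassuring
print(f"sensitivity: {sensitivity:.2f}") # 0.40, misses 6 of 10 sick patients
```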

Hence, it is necessary to establish regulations that impose performance guarantees in each application area. At the same time, one must accept that researchers cannot protect against every possible eventuality, and so appropriate insurance models for AI systems will be needed.

Working as one team

AI capabilities can be fully realized only when humans and AI systems work together. Responsible deployment will require understanding how businesses are disrupted by AI and developing new AI-in-the-loop protocols for solving problems, as sketched below. Companies will have to reskill workers to operate effectively in such a hybrid environment.
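In the simplest terms, an AI-in-the-loop protocol lets the system act on confident predictions and escalate uncertain ones to a person. The case format, confidence threshold and routing below are hypothetical placeholders, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def triage(case_id: str, decision: Decision, threshold: float = 0.9) -> str:
    """Act automatically on confident decisions; route the rest to a human."""
    if decision.confidence >= threshold:
        return f"{case_id}: auto-resolved as '{decision.label}'"
    return f"{case_id}: routed to human review (confidence {decision.confidence:.2f})"

print(triage("claim-001", Decision("approve", 0.97)))
print(triage("claim-002", Decision("deny", 0.62)))
```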

The Centre for Responsible Artificial Intelligence (CeRAI) has been set up at IIT Madras to study these issues under three themes: Making AI Senseable, AI and Safety, and AI and Society. The Centre follows a multi-stakeholder consortium model, with participation from industry, government, legal experts, social scientists and industry bodies, in addition to various academic institutions.

(Professor Balaraman Ravindran is Head, Centre for Responsible Artificial Intelligence, IIT Madras)

