José Luis Hernández, computer scientist: “It’s very easy to trick AI into launching an attack”


José Luis Hernández Ramos (Murcia, 37 years old) is a Marie Skłodowska-Curie researcher at the University of Murcia, Spain, where he also obtained his bachelor’s, master’s, and doctoral degrees in computer science. “When I was a kid and played with little machines like Game Boys, I always wondered how all those images could be produced by simply inserting a game cartridge,” he says. During his career, he has served as a scientific officer at the European Commission and published more than 60 research papers, and he maintained a five-year collaboration with the European Union Agency for Cybersecurity (ENISA) and the European Cyber Security Organisation (ECSO), where he began looking for work with real-life implications. “Any researcher should wonder what AI can do in their field,” he says. Now, Hernández’s “Gladiator” project is one of 58 research initiatives selected for the BBVA Foundation’s 2023 Leonardo Grant Program; it aims to develop an artificial intelligence tool capable of detecting cybersecurity threats and analyzing malware.

Question. How would you summarize the goal of the Gladiator project?

Answer. The project seeks to implement or use a large language model, such as ChatGPT, Bard, or Llama, to understand how we can use these programs to address cybersecurity problems. When we want to adapt one of these models to a specific domain, such as cybersecurity, we need to fit the model to the terminology of that discipline. We want to understand how to adapt these models to detect attacks and, at the same time, feed them cybersecurity information to improve their performance and make them capable of solving problems of this type. The project will continue until March 2025.

Q. How will the language models be adapted to the needs of the project?

A. You take cybersecurity information, from databases containing threat information, and train or fine-tune your model on it. That is how the model improves its understanding of cybersecurity threats.
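To illustrate the kind of adaptation described above, the sketch below turns raw threat entries into instruction/response pairs of the sort commonly used to fine-tune a language model. This is purely illustrative and not the Gladiator project's actual pipeline; the field names (`id`, `description`, `severity`) and the prompt wording are assumptions.

```python
# Illustrative sketch: converting threat descriptions (CVE-style entries,
# hypothetical field names) into prompt/completion records for fine-tuning
# a language model on cybersecurity terminology.
import json

def build_finetune_records(threat_entries):
    """Convert threat entries into prompt/completion training records."""
    records = []
    for entry in threat_entries:
        prompt = (
            f"Describe the threat identified as {entry['id']} "
            f"and classify its severity."
        )
        completion = f"{entry['description']} Severity: {entry['severity']}."
        records.append({"prompt": prompt, "completion": completion})
    return records

# Made-up example entry, formatted as one JSON line of training data.
entries = [
    {
        "id": "CVE-2023-0001",
        "description": "Buffer overflow in a network daemon.",
        "severity": "high",
    },
]
for record in build_finetune_records(entries):
    print(json.dumps(record))
```

A real fine-tuning run would then feed such records to a training framework; the point here is only that domain adaptation starts from threat data reshaped into text the model can learn from.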

Q. How does artificial intelligence detect cybersecurity threats, and how does it combat them?

A. Many AI-based systems rely on knowing what is and is not an attack, for example, using datasets of network connections in a particular environment. What we are looking for in the project is to analyze cybersecurity information that comes in text format, which may be related to vulnerabilities or found on social media and other sources, and then determine whether it represents a threat.

Q. What is the difference between the systems used in the past to combat cyberattacks and those used today?

A. Security systems need to become smarter at detecting potential threats by incorporating artificial intelligence techniques. In the past, these systems detected attacks by searching databases of known threats. Systems need to evolve so they can identify attacks that are not yet known.

Q. What types of attacks can be prevented or identified?

A. Applying AI techniques in cybersecurity will allow us to improve the identification and detection of a wide range of attacks. A phishing attack is a clear example of how language modeling can help, by analyzing the text or content that appears in an email. We can also determine whether multiple devices are colluding to launch an attack, and whether the attack is coming not from a single source but from several.
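The phishing example above can be made concrete with a toy text-scoring function. A real detector would use a fine-tuned language model, as discussed in the interview; this keyword-and-link heuristic is only a stand-in to show what "analyzing the text of an email" means, and all cue phrases and thresholds are invented.

```python
# Toy illustration of text-based phishing detection. Real systems use
# trained language models; this heuristic just shows the idea.
import re

URGENCY_CUES = ["verify your account", "act now", "password expired", "click here"]

def phishing_score(email_text):
    """Return a crude 0..1 score from urgency phrases and raw links."""
    text = email_text.lower()
    cue_hits = sum(cue in text for cue in URGENCY_CUES)
    raw_links = len(re.findall(r"https?://\S+", text))
    return min(1.0, 0.25 * cue_hits + 0.15 * raw_links)

suspicious = "Your password expired. Click here: http://example.test/login"
benign = "See you at lunch tomorrow."
print(phishing_score(suspicious))  # higher score
print(phishing_score(benign))      # lower score
```

A language model replaces the hand-written cue list with patterns it has learned from many labeled examples, which is what lets it catch phrasing no rule anticipated.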

José Luis Hernández Ramos next to a group of computers at the University of Murcia, Spain. BBVA Foundation

Q. In the home context, how can AI be used to combat attacks?

A. We now have 24-hour access to AI through tools like ChatGPT, and this gives us the ability to enhance cybersecurity education and awareness. Everyone can ask the tool how to protect themselves or how to configure a device to be less vulnerable. It is important to remember that these models are not perfect, and we still need to verify the results and answers they provide.

Q. Will AI help detect whether an app has been tampered with?

A. Definitely. It would help detect, for example, whether an app is fake or malicious. In fact, in this type of application analysis, and for code and software in general, we are already seeing initiatives that use language models to analyze program code.

Q. Is AI able to detect data theft or misinformation?

A. Yes, although attackers are becoming more creative and using more sophisticated tools.

Q. Will AI help both those who want to create misinformation and those who want to fight it?

A. Yes, it is a double-edged sword. If these tools fall into the wrong hands, they can be used to launch increasingly sophisticated attacks. And here also lies the danger of the access everyone now has, in general, to AI systems such as ChatGPT.

Q. How can a tool like ChatGPT be used to identify or create an attack?

A. When you ask a system like ChatGPT to create an attack for you, the first thing it tells you is that it won’t do it because it could cause a cybersecurity problem. But it’s very easy to trick the tool by telling it that you need to create the attack in order to study it, because you want to make your system more robust, or because you want to teach the attack in a classroom. In those cases, the system will give you the answer.

Q. Will the project allow a tool to be designed without sharing sensitive data?

A. This research project tries to understand the issues and limitations so that the language model can be designed in a decentralized manner. Currently, a model is trained and fine-tuned using different sources of information, such as what I myself provide when I interact with the system. The idea is to decentralize this process: instead of having to share sensitive information, the relevant knowledge can be shared without sending details about a system’s specific vulnerabilities in order to identify the attack.
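The decentralized idea described above resembles federated learning, where each party trains locally and shares only model parameters, never its raw data. The sketch below shows federated averaging on a deliberately tiny one-parameter model; the clients, data values, and round counts are all invented for illustration and are not the project's actual design.

```python
# Hedged sketch of decentralized training in the style of federated
# averaging: raw (potentially sensitive) data never leaves a client;
# only model weights are shared and averaged.
def local_update(weight, data, lr=0.1):
    """One local pass of gradient descent for a model y ≈ w * x."""
    w = weight
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(client_weights):
    """Server step: aggregate weights only, never the clients' data."""
    return sum(client_weights) / len(client_weights)

# Two organizations with private incident data (values are made up);
# both datasets roughly follow y = 2 * x.
client_a = [(1.0, 2.0), (2.0, 4.0)]
client_b = [(1.0, 2.1), (3.0, 6.0)]

w_global = 0.0
for _ in range(50):  # federated rounds
    w_a = local_update(w_global, client_a)
    w_b = local_update(w_global, client_b)
    w_global = federated_average([w_a, w_b])

print(round(w_global, 2))  # converges near 2.0 without pooling any data
```

The same pattern scales up to large models: each participant contributes what its private threat data taught the model, without exposing the vulnerabilities that data describes.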

Q. What goal do you hope to achieve with your project when it is completed in 2025?

A. To improve our ability to understand how language models can help address cybersecurity problems, and to create a system that helps us identify an attack, understand how and why it happened, and look for relationships between different attacks that can help predict whether or not a specific system will be attacked. We also want to know how AI can create measures that address and resolve a cybersecurity attack, for example, by automatically applying a security patch.

