What Risks Should the European Agreement on Artificial Intelligence Address?

Germany, France and Italy have laid the groundwork for the future European Agreement on Artificial Intelligence, which will seek to contain the risks associated with the use of this technology.

The boom in artificial intelligence in recent years has been spectacular. We discover new and surprising applications of this technology almost every day, and its capabilities seem limitless.

However, it also brings new risks, such as its exploitation for criminal purposes or the threat posed by development that disregards aspects such as ethics and explainability.

In response to this situation, the European Union is taking action. For example, the EU’s new Artificial Intelligence Act was approved last March, although it will not come into full force until 2026, as explained in this report.

And the EU now wants to take a new step forward with the launch of the European Agreement on Artificial Intelligence, which aims to lay the foundations for containing the risks associated with the use of this technology.

Thus, Germany, France and Italy have already reached some points of consensus in an agreement that forms part of the negotiations on the future Artificial Intelligence Act. According to Panda Security, the regulation seeks to avoid discriminatory effects and mitigate cyberthreats from organised groups of cybercriminals, without losing sight of the innovative potential of artificial intelligence.

However, the cybersecurity company warns that only three EU countries have signed this agreement. It stresses, for instance, that Spain is not part of the treaty due to differences in approach to the regulation of foundation models.

In this regard, Germany, France and Italy advocate mandatory self-regulation through codes of conduct and oppose strict, untested rules for these models. On the other side of the spectrum are countries that consider this approach insufficient to guarantee the protection of human rights and prevent the misuse of artificial intelligence.

In any case, apart from this issue, which will be the subject of intense debate and negotiations, the future European Agreement on Artificial Intelligence must address a series of security risks associated with the advance of this technology.

Transparency of artificial intelligence

One of the most important aspects is the transparency of models. “When promoting transparency in AI models, developers are asked to create descriptions that explain how artificial intelligence models work, what their capabilities are and also their limitations and weaknesses,” the cybersecurity company notes.

However, if this information falls into the hands of cybercriminals, they can use it to find and exploit those weaknesses. For example, they could design specific attacks to make the model err or fail in critical contexts, such as security, healthcare or financial systems.
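To illustrate what such a description might contain, here is a minimal, hypothetical sketch of a model card in Python. The fields and the example model are assumptions for illustration, not a format prescribed by the agreement or by any EU regulation.

```python
# Hypothetical sketch of the kind of model documentation ("model card")
# described above. All names and values are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    name: str
    purpose: str
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    known_weaknesses: list[str] = field(default_factory=list)


card = ModelCard(
    name="fraud-screening-v2",  # hypothetical model
    purpose="Flag potentially fraudulent card transactions for review.",
    capabilities=["Real-time scoring of card transactions"],
    limitations=["Not validated for transactions outside the EU"],
    known_weaknesses=["Accuracy drops for merchants with little history"],
)

# The security trade-off: the same "known_weaknesses" entries that help
# auditors and users can also point attackers at exploitable blind spots.
print(card.known_weaknesses)
```

The entries that help auditors and users judge a model are, from a security standpoint, also a map of its blind spots, which is why the question of how much to disclose is not trivial.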

Increasing the attack surface

“The more AI is used in new applications, the more access points are created where hackers can try to attack. And, of course, every time you integrate AI into a new system, you add new possibilities for something to go wrong, or for a vulnerability to be created that can be exploited by cybercriminal groups,” warns Panda Security.

It is therefore essential to monitor all systems that interact with artificial intelligence exhaustively and continuously, and to keep security measures up to date to protect them from possible attacks.

Artificial intelligence is not infallible

The cybersecurity company points out that it is still easy to trick AI models with “adversarial attacks”: inputs deliberately crafted to make them make mistakes or behave incorrectly.

“These are a kind of ‘trick questions’ designed to confuse artificial intelligence into giving wrong answers, with which hackers can achieve potentially critical effects in areas such as healthcare or infrastructure,” Panda Security explains.
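To make the idea concrete, the following is a minimal sketch of one well-known adversarial technique, the Fast Gradient Sign Method (FGSM), applied to a hypothetical toy PyTorch classifier. The model and input are random stand-ins, and whether the prediction actually flips depends on the perturbation budget.

```python
# Minimal sketch of an adversarial (evasion) attack via the Fast Gradient
# Sign Method (FGSM). The classifier and data are toy stand-ins, not a
# real production model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy classifier: 20 input features -> 2 classes.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20)  # stand-in for a legitimate input
with torch.no_grad():
    y = model(x).argmax(dim=1)  # use the model's own prediction as the label

# FGSM: nudge the input in the direction that most increases the loss.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

epsilon = 0.5  # perturbation budget: a small change to every feature
x_perturbed = (x_adv + epsilon * x_adv.grad.sign()).detach()

with torch.no_grad():
    print("original prediction: ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_perturbed).argmax(dim=1).item())
```

The point of the sketch is that the perturbation is computed directly from the model’s own gradients: the more an attacker knows about a model, the cheaper such “trick questions” are to construct.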

Risk of data breaches

Artificial intelligence systems often require large amounts of data to function properly, which can include personal and sensitive information about individuals. “By collecting and analysing so much data, significant privacy concerns arise,” the company points out.

“If the systems that store and process all the data are not well protected, hackers could access or leak this information,” it adds.
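As an illustration of the kind of safeguard this implies, here is a minimal sketch of encrypting sensitive records at rest using the Python cryptography package. The record is hypothetical, and a real deployment would manage keys through a dedicated key management service rather than in application code.

```python
# Minimal sketch of encrypting a sensitive record at rest, so that a
# leaked file is not readable plaintext. Requires the "cryptography"
# package (pip install cryptography); key handling is simplified here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: store and rotate via a KMS
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'  # hypothetical data

token = fernet.encrypt(record)  # this ciphertext is what gets stored
print(token[:40], b"...")

# Only holders of the key can recover the original record.
assert fernet.decrypt(token) == record
```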

This can lead to problems such as identity theft, with criminals accessing victims’ bank accounts, applying for loans in their name or committing other types of fraud.

The exposure of certain data could also lead to situations where individuals are treated unfairly or in a discriminatory manner based on personal information that should be private, such as their medical history or financial background.