A Lack of AI Regulation

A Kaspersky study of C-level decision-makers shows that only 22% are considering introducing internal company rules on the use of AI.

Despite this, 95% of respondents are already using AI solutions, and 53% even admit that AI supports “important processes”. For example, 24% state that they want to automate parts of their IT security management with the help of AI.

The Kaspersky report “Enterprise cybersecurity and increasing threats in the era of AI: Do business leaders know what they are doing?” also reveals that 59% of companies in Europe are concerned about data leaks in connection with AI. From the study authors’ perspective, the results show a clear discrepancy between awareness of the threats and the urgency of the measures taken to address them.

Companies therefore need to adapt their cybersecurity, especially as 77% of respondents had experienced at least one cybersecurity incident in their own organisation in the past two years. Of these incidents, 75% were serious attacks, and human error was responsible for 64%.

The Black Box of AI

“Companies should allocate cybersecurity resources effectively and prepare. While CEOs, CISOs and employees are generally aware of the dangers of cybersecurity breaches, business leaders are often overwhelmed by the task of protecting their organisations from digital attacks,” says Waldemar Bergstreiser, General Manager Central Europe at Kaspersky.

The lack of action by management may be due to the fact that AI is still a closed book for many C-level decision-makers: 91% of respondents stated that they would like to better understand how generative AI and its data management processes work. Nevertheless, 95% of the companies surveyed are already using it in various areas without this knowledge: 53% of respondents use it mainly for important processes, 50% for automation, 44% for process optimisation, 40% to close skills gaps and 26% to promote innovation.