“AI Will Have a Major Impact on Cybersecurity – for Defenders and Attackers”
APT groups are distributing AI models with hidden backdoors, warns Waldemar Bergstreiser of Kaspersky in an interview.
Mr. Bergstreiser, AI is increasingly being used by cybercriminals. How do you assess the current situation in Germany?
Waldemar Bergstreiser: AI has significantly changed cybersecurity for attackers, companies, and security providers alike. Our latest study reveals that more than half of companies in Germany are experiencing an increase in cyberattacks. Of these, 37% believe that the majority of these attacks are supported by AI technologies, while 69% express concerns about the use of AI by cybercriminals.
Although awareness of these threats is growing, many companies are still insufficiently prepared. As a result, a significant number of organizations plan to reassess their cybersecurity strategies and implement proactive, holistic solutions.
How are cybercriminals using AI, and what potential threats could arise from future developments?
Waldemar Bergstreiser: Cybercriminals primarily use AI for phishing and deepfakes, but they also employ it for more advanced technical methods. AI helps them analyze security measures, identify vulnerabilities, and bypass them in a targeted manner. AI algorithms, for example, analyze firewalls or intrusion detection systems (IDS), and attackers then adapt their techniques based on that analysis to remain undetected.
This technology also enables the automatic detection and exploitation of security vulnerabilities in applications or operating systems, giving attackers access to sensitive data.
Does automation help cybercriminals?
Waldemar Bergstreiser: AI-supported attacks enable attackers to operate at scale and with minimal human intervention. For instance, AI optimizes brute force attacks by analyzing patterns in password databases and developing targeted strategies to efficiently compromise systems. AI also enhances the effectiveness of distributed denial-of-service (DDoS) attacks by optimizing resource coordination and bypassing defense mechanisms.
AI-based malware is particularly dangerous. Such malware analyzes defense mechanisms in real time and dynamically adjusts its attack strategies to avoid detection. Polymorphic engines, for example, constantly modify malware code without affecting functionality, bypassing security software. Additionally, AI can modify existing malware or simulate user behavior, complicating detection by automated systems. While many of these methods remain theoretical or are in the research phase, their potential risks are substantial.
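To illustrate why polymorphic code undermines static, signature-based detection, here is a deliberately harmless Python sketch (not from the interview): the same placeholder payload is re-encoded with random keys, so each copy hashes differently even though the decoded content is identical. The XOR transform and the placeholder string are hypothetical stand-ins chosen purely for illustration.

```python
import hashlib
import os

def xor_encode(payload: bytes, key: bytes) -> bytes:
    """Encode bytes with a repeating XOR key (a toy stand-in for a polymorphic transform)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# Harmless placeholder standing in for functionally identical code.
payload = b"print('hello')"

# Two "variants" produced with random keys: identical behavior after decoding,
# but completely different byte patterns on disk.
for _ in range(2):
    key = os.urandom(8)
    encoded = xor_encode(payload, key)
    decoded = xor_encode(encoded, key)  # XOR is its own inverse
    print("static hash of variant:", hashlib.sha256(encoded).hexdigest()[:16])
    print("decoded content identical:", decoded == payload)
```

Each run prints a different hash for the encoded variant, which is exactly why defenders pair static signatures with behavioral analysis.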
What about deepfakes?
Waldemar Bergstreiser: Deepfake attacks present a particularly serious risk as they aim to erode trust in a company. If customers or business partners are convinced that manipulated content is authentic, a company’s reputation can suffer irreparable damage. Companies in Germany reported an average of 13 security incidents last year, which incurred average costs of approximately 1.02 million euros.
What else can we expect from attackers regarding AI?
Waldemar Bergstreiser: AI will continue to play a major role in cybersecurity for both defenders and attackers. On the attackers’ side, our experts predict a significant trend for 2025: they foresee APT groups distributing AI models with hidden backdoors. This could involve compromising open-source AI models and datasets by injecting malicious code or introducing subtle distortions that are difficult to detect and may later be widely disseminated.
Furthermore, we anticipate that Large Language Models (LLMs) will be increasingly employed for automated vulnerability detection, reconnaissance, and the creation of malicious code. The use of deepfake technologies is also expected to rise, enabling attackers to impersonate individuals with near-perfect realism. This could lead to more targeted social engineering attacks, such as realistic-looking messages or videos designed to manipulate employees or trick them into revealing sensitive information.
Is there a way to protect yourself?
Waldemar Bergstreiser: Effective protection requires a combination of technologies, human expertise, and training. Next-generation solutions that detect threats in real time are crucial. Tools such as deepfake detectors and AI-supported anomaly detection are particularly valuable because they can identify unusual patterns at an early stage.
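As a rough idea of what AI-supported anomaly detection means in practice, the following minimal Python sketch trains an unsupervised model on baseline user activity and flags a deviating burst. The feature set (logins per hour, data transferred, distinct destination hosts) and the use of scikit-learn's IsolationForest are assumptions for illustration, not a description of any specific product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user activity features:
# [logins per hour, MB transferred, distinct destination hosts]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[3, 50, 5], scale=[1, 10, 2], size=(500, 3))
suspicious = np.array([[40, 900, 60]])  # a burst that deviates from the baseline

# Train an unsupervised anomaly detector on the observed baseline.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspicious))   # expected: [-1]
print(model.predict(baseline[:3])) # expected: mostly [1, 1, 1]
```

The value of such tooling lies in surfacing unusual patterns early, before an analyst would notice them in raw logs.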
A zero-trust security model is also essential to minimize the damage caused by successful attacks by restricting access rights. Regular training ensures that employees can recognize threats, such as phishing or deepfake attacks, and respond appropriately in suspicious situations.
Companies should also implement digital signatures to verify the authenticity of audio and video content. These signatures function similarly to SSL certificates and can be automatically validated in the background. However, technology alone is not enough—employees must remain vigilant and well-trained to identify suspicious content and close potential security gaps. Lastly, a clear emergency response plan is indispensable.
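To make the signature idea concrete, here is a minimal Python sketch of how media content could be signed by a publisher and verified automatically on the receiving side, using Ed25519 from the cryptography library. The key handling and placeholder content are simplified assumptions; real deployments would distribute the public key through certificates or a trust infrastructure, as the SSL analogy suggests.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The publisher signs the media bytes (in practice, typically a hash of a much larger file).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw media content..."  # placeholder for the real file contents
signature = private_key.sign(video_bytes)

# A recipient holding the publisher's public key verifies the content in the background.
try:
    public_key.verify(signature, video_bytes)
    print("Content is authentic and unmodified.")
except InvalidSignature:
    print("Content was altered or not signed by this publisher.")

# Any tampering, such as a deepfake swap, invalidates the signature.
try:
    public_key.verify(signature, b"...manipulated content...")
except InvalidSignature:
    print("Tampered content rejected.")
```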
Waldemar Bergstreiser
General Manager DACH at Kaspersky