AI Hype in Cybersecurity – Is It Worth the Investment?

Artificial Intelligence (AI) is increasingly being utilised in cybersecurity, particularly for identifying deviations in network traffic. But does the application of AI truly make sense? Dr Sebastian Schmerl from Arctic Wolf delves into this question.

A study conducted by Arctic Wolf revealed that, despite the current AI hype, only 22% of companies are allocating the majority of their cybersecurity budgets to AI-driven solutions. Why this hesitation? Nearly 40% of respondents expressed concerns about the lack of technical expertise and skills within their teams to use AI effectively. Cost, potential impact, and return on investment are further factors that delay or even prevent the integration of AI technology into business systems.

Should companies forgo AI altogether? Certainly not. However, it is vital to discern between areas where AI is merely a buzzword riding on the hype and those where it delivers real value. In cybersecurity, AI, particularly through methods like machine learning (ML) and large language models (LLMs), can significantly enhance system protection and improve processes.

Threat Analysis and Anomaly Detection

Every user demonstrates a “normal behaviour” pattern based on their role and tasks. AI systems can “learn” this behaviour over time through training. For example, a sales manager might typically access the CMS and CCM solution from their Berlin office. This behavioural pattern becomes a baseline. If a user deviates dramatically from this norm – for instance, attempting to access financial accounting systems – the AI detects the anomaly, flags it as a potential risk, and raises an alert.

Hackers who infiltrate company systems and assume user identities often display behaviour inconsistent with that of the legitimate user, such as attempting to access systems not part of their routine workflow. Security engineers then assess the reported anomaly to determine whether it constitutes an actual threat.

AI can monitor not just human behaviour but also system and service processes, which typically follow specific patterns. When these processes deviate, AI reacts to suspicious activities, such as compromised identities, and raises alarms. To remain effective, AI requires continuous training to adapt to changing behavioural patterns and evolving software. One-off training is insufficient for long-term anomaly detection.
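To make the baseline idea concrete, here is a minimal sketch in Python, assuming access events arrive as simple (user, system) pairs. The user and system names are illustrative, and a real deployment would rely on continuously retrained statistical or ML models rather than a static lookup.

```python
from collections import defaultdict

def build_baseline(access_log):
    # Learn a per-user baseline: which systems each user normally accesses.
    baseline = defaultdict(set)
    for user, system in access_log:
        baseline[user].add(system)
    return baseline

def flag_anomalies(baseline, new_events):
    # Flag any access that falls outside the learned baseline.
    return [(user, system) for user, system in new_events
            if system not in baseline.get(user, set())]

# "Normal" behaviour observed during training (illustrative data).
history = [
    ("sales_manager", "cms"),
    ("sales_manager", "ccm"),
    ("sales_manager", "email"),
]

# New activity: the sales manager suddenly touches financial accounting.
new_activity = [("sales_manager", "financial_accounting")]

baseline = build_baseline(history)
for user, system in flag_anomalies(baseline, new_activity):
    print(f"ALERT: {user} accessed {system}, outside their learned baseline")
```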

Incident Response Automation

AI can streamline incident response by recommending pre-learned response measures to security teams when anomalies occur. This approach is akin to Amazon’s recommendation system: “Forensic engineers who observed this anomaly took these measures.”

By providing context based on past incidents, AI enables IT managers, even those with limited cybersecurity expertise, to take appropriate actions swiftly. Moreover, AI can automatically block suspicious network traffic or communication from specific IP addresses, enhancing response efficiency.
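The following sketch illustrates the recommendation idea under the assumption that past incidents are stored as pairs of observed indicators and the measures responders took; the incidents, indicators, and measures shown are purely illustrative.

```python
# Past incidents: (set of observed indicators, measures the responders took).
past_incidents = [
    ({"off_hours_login", "new_country", "mailbox_rule_created"},
     ["reset credentials", "revoke sessions", "review mailbox rules"]),
    ({"mass_file_encryption", "shadow_copy_deletion"},
     ["isolate host", "block C2 IP", "restore from backup"]),
]

def jaccard(a, b):
    # Similarity between two indicator sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(observed_indicators):
    # Surface the measures taken in the most similar past incident
    # as context for the current analyst.
    best = max(past_incidents, key=lambda inc: jaccard(inc[0], observed_indicators))
    return best[1]

print(recommend({"off_hours_login", "new_country"}))
# -> ['reset credentials', 'revoke sessions', 'review mailbox rules']
```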

Phishing Detection

AI-based security solutions can analyse writing styles in documents and emails, considering elements like spelling, syntax, and phrasing. This capability is invaluable for identifying phishing attempts. By comparing language patterns to learned norms, AI can flag suspicious emails, such as those deviating from a known sender’s typical style.

For example, if Ms Müller from Accounting rarely makes spelling mistakes and never requests account details via email, an out-of-character message would raise an alert. AI’s role in detecting spam and social engineering attempts is crucial, as cybercriminals continually refine their techniques, making it increasingly difficult for humans to identify fraudulent messages unaided.
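As a toy illustration of style comparison, the sketch below derives a few crude stylometric features from a sender’s past emails and scores how far a new message deviates from them. Real phishing detection models many more signals (syntax, phrasing, headers, sending infrastructure); the features, thresholds, and example texts here are assumptions for demonstration only.

```python
import statistics

def features(text):
    # A handful of crude stylometric features.
    words = text.split()
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "exclamations": text.count("!"),
        "urgency_words": sum(w.lower() in {"urgent", "immediately"} for w in words),
    }

def deviation_score(history_texts, new_text):
    # Sum the normalised deviation of each feature from the sender's history.
    new = features(new_text)
    score = 0.0
    for key, value in new.items():
        past = [features(t)[key] for t in history_texts]
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0
        score += abs(value - mean) / stdev
    return score

known_emails = [
    "Please find the monthly report attached.",
    "The invoices for March are ready for review.",
]
suspicious = "URGENT!!! Send the account details immediately!"
print(round(deviation_score(known_emails, suspicious), 1))  # high score -> flag for review
```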

Malware Detection

Malware detection employs two approaches: static analysis and behaviour-based dynamic analysis. Static analysis identifies malware by known patterns, such as specific code strings. Dynamic analysis, by contrast, examines how suspicious software behaves in a sandbox environment and is becoming more prevalent.
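Below is a simplified sketch of the two approaches, assuming a small list of known-bad strings for the static side and a set of behaviours reported by a sandbox for the dynamic side; real engines work with vastly larger signature sets and ML-based behavioural models, so everything here is illustrative.

```python
# Static side: known malicious byte patterns (illustrative examples).
STATIC_SIGNATURES = [b"mimikatz", b"Invoke-Expression"]

# Dynamic side: behaviours a sandbox might report (illustrative examples).
SUSPICIOUS_BEHAVIOURS = {"deletes_shadow_copies", "disables_av",
                         "encrypts_user_files", "contacts_known_c2"}

def static_scan(file_bytes):
    # Return every known signature found in the file.
    return [sig for sig in STATIC_SIGNATURES if sig in file_bytes]

def dynamic_scan(observed_behaviours, threshold=2):
    # Flag the sample if enough suspicious behaviours were observed.
    hits = observed_behaviours & SUSPICIOUS_BEHAVIOURS
    return hits if len(hits) >= threshold else set()

sample = b"...Invoke-Expression(New-Object Net.WebClient)..."
print(static_scan(sample))                                   # -> [b'Invoke-Expression']
print(dynamic_scan({"encrypts_user_files", "deletes_shadow_copies"}))
```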

While not foolproof, advanced malware detection complicates attackers’ efforts, often deterring them from targeting well-defended systems. Comprehensive AI-supported malware defence thus significantly reduces an organisation’s attractiveness to cybercriminals.

Risk Assessment and Vulnerability Management

As IT environments grow more complex – spanning cloud, hybrid, and on-premises systems – effective risk and vulnerability management hinges on understanding interrelationships and dependencies among IT services and systems.

Two critical questions must be addressed:
1. What systems and applications are in use? This requires a detailed inventory and prioritisation of their criticality and relevance.
2. What dependencies exist among these systems? Traffic and log data can reveal these connections.

AI helps process vast amounts of data, uncovering correlations and dependencies. However, human security officers must validate these findings to ensure accuracy.
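As a minimal illustration of the second question, the sketch below derives a dependency map from traffic logs, assuming each entry records a source and a destination service; the service names are invented for the example.

```python
from collections import defaultdict

# Illustrative traffic log: (source service, destination service) connections.
traffic_log = [
    ("webshop", "payment-service"),
    ("webshop", "inventory-db"),
    ("payment-service", "financial-accounting"),
]

# Aggregate connections into a dependency map.
dependencies = defaultdict(set)
for source, destination in traffic_log:
    dependencies[source].add(destination)

# A human security officer would validate this picture before basing
# criticality ratings or patch priorities on it.
for service, deps in dependencies.items():
    print(f"{service} depends on: {', '.join(sorted(deps))}")
```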

Security Information and Event Management (SIEM)

SIEM systems aggregate large volumes of data, which must be centralised and analysed for patterns. LLMs simplify this task by enabling analysts to query systems in natural language, eliminating the need for SIEM-specific expertise.
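A sketch of such a natural-language interface is shown below. The call_llm function is a hypothetical placeholder for whichever LLM service is used, and the returned query syntax is illustrative rather than tied to any specific SIEM product.

```python
PROMPT_TEMPLATE = """Translate the analyst's question into a SIEM search query.
Question: {question}
Query:"""

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real deployment would call the chosen LLM
    # service here and return its completion.
    return 'index=auth action=failure user="mueller" earliest=-24h'

def natural_language_search(question: str) -> str:
    # Turn the analyst's plain-language question into a SIEM query string.
    prompt = PROMPT_TEMPLATE.format(question=question)
    return call_llm(prompt)

print(natural_language_search("Show me failed logins for Ms Müller in the last 24 hours"))
```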

AI provides security officers with a clear overview of system access patterns, highlighting unnecessary authorisations that can be removed to minimise potential attack vectors. By restricting access rights, organisations limit attackers’ reach within their IT infrastructure.

Predictive Analytics

AI can predict potential system issues by analysing past incidents involving specific software or hardware combinations. For example, if certain Java-based applications frequently encounter compatibility issues, AI can flag them as high-risk, enabling proactive intervention.
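A toy version of this idea follows, assuming incidents can simply be counted per software combination; the applications, versions, and threshold are illustrative only.

```python
from collections import Counter

# Illustrative history of incidents, keyed by (application, runtime) pairs.
past_incidents = [
    ("java-app-A", "legacy-jre-8"),
    ("java-app-A", "legacy-jre-8"),
    ("java-app-A", "legacy-jre-8"),
    ("reporting-tool", "postgres-15"),
]

incident_counts = Counter(past_incidents)
HIGH_RISK_THRESHOLD = 3  # illustrative cut-off

# Flag combinations with a track record of problems for proactive review.
for combination, count in incident_counts.items():
    if count >= HIGH_RISK_THRESHOLD:
        print(f"High risk: {combination} ({count} past incidents) - schedule proactive review")
```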

AI in Security Operations Centres (SOCs)

In SOCs, AI can generate natural language threat reports from forensic data during security incidents or translate threat reports into actionable queries to search for attack traces. Additionally, AI can group related security alerts, streamlining incident management. SOC-as-a-Service providers like Arctic Wolf leverage these capabilities to safeguard their clients’ IT infrastructures.
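To illustrate alert grouping, the sketch below buckets alerts that share a host within a short time window; real SOC tooling correlates far more attributes (identities, indicators, kill-chain stage), and the alert data here is invented.

```python
from collections import defaultdict

# Illustrative alerts, each carrying the affected host and a timestamp (minute).
alerts = [
    {"id": 1, "host": "ws-042", "minute": 10, "type": "suspicious_process"},
    {"id": 2, "host": "ws-042", "minute": 12, "type": "credential_dump"},
    {"id": 3, "host": "srv-db1", "minute": 55, "type": "port_scan"},
]

groups = defaultdict(list)
for alert in alerts:
    # Bucket by host and 15-minute window so related alerts land together.
    groups[(alert["host"], alert["minute"] // 15)].append(alert["id"])

for (host, _window), ids in groups.items():
    print(f"Incident candidate on {host}: alerts {ids}")
```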

Human-Machine Communication in Natural Language

Generative AI is revolutionising communication between systems and users by acting as a natural language interface. For instance, IT teams can now issue queries in plain English rather than relying on specialised query languages, making advanced security tools accessible to non-experts.

Conclusion: Lifelong Learning for Humans and AI

AI holds immense potential, but cybercriminals are quick to exploit its capabilities. Thus, defenders must also embrace AI to stay ahead. However, the efficacy of AI depends on its targeted and cost-effective application. Human oversight remains indispensable, as AI can still misinterpret data or produce inaccurate results.

Analysts will increasingly focus on prompt engineering, understanding AI limitations, and validating its outputs. The synergy between humans and AI promises exciting developments in cybersecurity, ensuring robust defences against evolving threats.

Dr Sebastian Schmerl

Vice President, Security Services EMEA, Arctic Wolf