Sophos: ‘The Most Frequent Use of AI by Cybercriminals is to “Turbo Feed” Their Social Scams’

We interview John Shier, Field CTO at Sophos, who reviews the threats arising primarily from cybercriminals' use of artificial intelligence.

Cybersecurity solutions provider Sophos held its annual Sophos Day event this week, reviewing the main threats facing organisations.

Of course, artificial intelligence took centre stage. These tools are being adopted by increasingly professionalised cybercriminals; in fact, the company explains that any user with minimal knowledge of the field can become a cybercriminal thanks to AI. The challenges keep growing, while attack vectors are expanding at a dizzying rate into areas that were previously unthinkable.

During the event, we had the opportunity to interview John Shier, Field CTO at Sophos, who discusses the current security situation and how these attack vectors are evolving. Here is the interview with Shier:

-How is AI changing the cybersecurity landscape for attackers and defenders? Which areas have been most affected?

AI is a wonderful tool that acts as a force multiplier for both attackers and defenders. We have already seen cyber attackers start to use AI to help execute pig butchering scams. As AI models become more capable of writing code and performing complex tasks, they are likely to play a larger role in generating malicious artefacts and in planning and executing campaigns more autonomously. On the defensive side, AI will be able to assist the Security Operations Centre and streamline the workflow of security analysts. In addition, as AI models become more efficient and hardware can support larger models, AI will help detect and block a greater proportion of threats, both known and unknown.

-How is AI perfecting phishing techniques and other social scams, and what are the main challenges in detecting them?


AI enables accurate translation at scale, dramatically increasing the quality of social scams. Some cybercriminals may train a model on legitimate emails from a compromised account to further mask their true nature. The result is high-quality phishing emails that are indistinguishable from the real thing and therefore difficult to detect.

-Is the idea that AI is democratising cybercrime overblown, and what skills are still essential for a cybercriminal?

Novice and aspiring cybercriminals have always had the ability to acquire toolkits to aid them in their goals. AI models could support these novices and help them level up, but using them requires some expertise in other areas: either hosting open-source models that have no guardrails or safety measures, or bypassing the guardrails and safety measures implemented in API-based models from OpenAI, Anthropic and other model providers.

-Beyond the hype, how are cybercriminals using AI effectively and what types of attacks have you seen benefit from this technology?

The most frequent use of AI in cybercrime is to ‘turbo feed’ their social scams. In addition to creating phishing lures that are indistinguishable from the real thing, fraudsters are also using AI chatbots to initiate conversations with potential pig butchering victims to set the bait. Once the victim is captured, humans often take over, but they can still use AI to help with translation and grammar.

-Sophos has mentioned its use of multi-modal AI to anticipate threats. Could you explain how this works and what advantages it brings over traditional strategies?

Machine learning and AI models can only be as good as the data that goes into them. Models that operate across multiple modalities can extract more information from the input data and use it to make better decisions. By ingesting and processing different types of data, such as email subjects, content, logos and attached images, in addition to the email text, a model can identify more malicious indicators. Younghoo Lee, principal data scientist at Sophos AI, recently gave a talk at Virus Bulletin in which he demonstrated the limitations of trying to detect malicious emails and HTML, and how using text and images simultaneously can increase performance in such applications. In this way, multi-modal AI becomes a more capable and powerful assistant to a threat analyst in detecting and dealing with cyber threats.
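The multi-modal idea Shier describes can be illustrated with a toy sketch (all names, phrase lists and weights here are hypothetical, not Sophos code): each modality produces its own score, and a late-fusion step combines them, so an email that looks clean as text can still be flagged when, say, a known brand logo appears in mail sent from an unrelated domain.

```python
# Toy multi-modal phishing scorer (illustrative only, not Sophos code).
# Each "modality" contributes its own indicators; combining them can catch
# mails that look benign in any single modality.

SUSPICIOUS_PHRASES = {"verify your account", "urgent action required", "password expired"}
KNOWN_BRAND_LOGO_HASHES = {"a3f9"}  # placeholder hashes of brand logos abused in phishing

def text_score(subject: str, body: str) -> float:
    """Fraction of known suspicious phrases present in the text."""
    text = f"{subject} {body}".lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    return hits / len(SUSPICIOUS_PHRASES)

def image_score(logo_hashes: list, sender_domain: str, brand_domain: str) -> float:
    """Flag a brand logo appearing in mail not sent from that brand's domain."""
    impersonation = any(h in KNOWN_BRAND_LOGO_HASHES for h in logo_hashes)
    return 1.0 if impersonation and sender_domain != brand_domain else 0.0

def multimodal_score(subject, body, logo_hashes, sender_domain,
                     brand_domain="bank.example") -> float:
    # Late fusion: weighted combination of per-modality scores.
    return 0.6 * text_score(subject, body) + 0.4 * image_score(
        logo_hashes, sender_domain, brand_domain)
```

A production system would replace the hand-written indicator lists with learned text and image embeddings, but the fusion principle is the same: more modalities, more malicious indicators available to the model.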

-Are businesses and cybersecurity teams prepared for the challenges posed by AI, and what is needed to strengthen defences in this new environment?

It is difficult to predict the trajectory that AI will take in cybersecurity, as the field of AI itself has undergone transformational change in recent years. Sophos remains vigilant and invested in the use of AI to help automate parts of the analyst workflow, and our threat researchers are constantly monitoring developments in the field that require our attention and response.

-Based on the preliminary results of the Active Adversary Report, what new behaviours or trends do you anticipate from cybercriminals in the coming years?

While common hacking tools and remote access software continue to be used, preliminary analysis suggests that attackers are also relying heavily on Windows-specific functionality (e.g. living-off-the-land attacks, which use legitimate tools in a malicious way) to go undetected. We have also observed a sustained and continued effort to attack virtualisation technologies by encrypting the underlying operating system and rendering all virtual machines unusable. Finally, we have detected criminals abusing vulnerabilities in network edge devices, especially VPN technologies, to gain initial access to networks.

-Does innovation in cybersecurity keep pace with cybercriminals' tactics, and how does Sophos ensure its defences maintain that edge?

Sophos X-Ops is a dedicated team of more than 500 researchers who keep a constant eye, across multiple disciplines, on what criminals are doing today and where they are going. This vigilance informs the technologies we develop and the protections we build into all our products.

-AI has been referred to as a ‘sixth sense’ in cyber defence. How does this capability help prevent threats before they become effective attacks?

The term ‘sixth sense’ refers to the ability to predict outcomes. The sixth sense of AI is interpreted as the ability to analyse huge volumes of data and make predictions about events and behaviours that may occur in the future. While some AI systems are capable of analysing a series of events, correlating them and acting on the results, it is unclear whether current AI models are capable of achieving a level of true predictive insight.