HP Uncovers How Generative AI Enables Cybercriminals to Create Malware
The company analyses a threat in which the structure of the scripts, the line-by-line code comments, and the function and variable names indicate that the perpetrator used artificial intelligence.
Cybercriminals are leveraging generative artificial intelligence (AI) technology to write malicious code.
This is evidenced by HP’s latest Threat Analysis Report, which documents a campaign in which AI technology contributed to malware development. The evidence lies in the structure of the scripts, the comments explaining each line of code and the use of the attacker’s native language for function and variable names.
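The stylistic tells described above can be illustrated with a deliberately benign sketch: a short, harmless script written the way an AI assistant typically writes one, with a comment on nearly every line and descriptive non-English identifiers. The French names and the functionality here are hypothetical examples, not code from the campaign HP analysed.

```python
# Illustrative only: harmless code exhibiting the hallmarks HP describes.
# AI assistants tend to comment each step and use verbose, descriptive
# identifiers; the French names below stand in for "native language" naming.

def lire_fichier(chemin):
    # Open the file at the given path for reading
    with open(chemin, "r", encoding="utf-8") as fichier:
        # Read the entire contents into memory
        contenu = fichier.read()
    # Return the contents to the caller
    return contenu

def compter_lignes(contenu):
    # Split the text on newlines and count the resulting lines
    return len(contenu.splitlines())
```

Hand-written malware, by contrast, is usually stripped of comments and uses short or obfuscated names precisely to frustrate this kind of analysis.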
In the case analysed, the malware records victims’ screenshots and keystrokes. HP warns that AI is making it easier to infect endpoints.
‘There has been much speculation about attackers using AI, but evidence has been scarce, so this finding is significant,’ notes Patrick Schläpfer, principal threat researcher in HP’s Security Lab.
‘Generally, attackers prefer to hide their intentions so as not to reveal their methods,’ he says, ‘so this behaviour suggests that an AI assistant was used to help write the code.’
‘These capabilities further lower the barrier to entry for threat actors,’ he notes, ‘allowing novices without coding skills to write scripts, develop infection chains and launch more damaging attacks.’