Fraudulent Voice Cloning of Celebrities
A recent study by Bitdefender Labs documents a sharp increase in AI-generated video deepfakes.
Cybercriminals are increasingly distributing video deepfakes via adverts on social media platforms such as Facebook, Instagram and Messenger. The videos feature cloned celebrities who direct victims to convincing e-commerce websites with enticing offers. Many of the videos, however, are of mediocre quality and contain visible artefacts or errors such as distorted images and lip movements that are out of sync with the audio, so an attentive observer can recognise them. Bitdefender Labs estimates that at least one million users in the USA and numerous European countries are reached by such scams. One of the adverts observed reached around 100,000 users aged between 18 and 65.
AI-based voice generators
In the manipulated video content, the attackers used AI-based voice generators to imitate the voices of well-known celebrities such as Jennifer Aniston, Elon Musk, Tiger Woods, the Romanian president and Ion Tiriac. The supposed celebrities then promise high-quality goods such as the latest iPhone models, MacBooks, Chicco car seats, luxury bags from Michael Kors or Coach, and Dyson vacuum cleaners as gifts. Only minimal delivery charges are payable, such as €9.95 for a MacBook.
Other fraudulent content advertises competitions and attractive investment opportunities. As with other fraudulent offers, the goods are supposedly only available to a small group of interested parties for a limited time. To increase the credibility of the offers, the cybercriminals set up fake websites imitating daily newspapers such as the New York Times, as well as professional-looking checkout pages for the purchase of the products. The main aim of the attackers is to steal personal data such as credit card numbers.
Voice cloning: Synthetic copies of a voice
In voice cloning, the creators use AI tools to create synthetic copies of an individual human voice. These technologies rely on deep learning and take the actual voice of the original as their starting point; the clone is then made to speak using text-to-speech systems. First, the authors collect voice samples. Just a few seconds of material from a video, a social media clip or another recording are sufficient. The AI analyses the voice for its individual characteristics such as pitch, speaking speed, tone of voice and volume. A machine learning model is trained on this analysis data, after which the software can generate speech in the cloned voice from arbitrary text. The quality of the clone can be further optimised with additional data.
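To make the analysis stage more concrete, the following is a minimal sketch of how such voice characteristics (pitch, loudness, speaking rate) can be extracted from a short recording using the open-source librosa library. The file name, the chosen features and the simple proxies are illustrative assumptions for this article, not part of the Bitdefender study or any particular cloning tool.

```python
# Sketch: extract basic voice characteristics from a short speech sample.
# Assumes librosa is installed and a hypothetical file "sample.wav" exists.
import librosa
import numpy as np

# Load a few seconds of speech (librosa resamples to 22.05 kHz mono by default)
y, sr = librosa.load("sample.wav", duration=5.0)

# Fundamental frequency (pitch) estimated with the pYIN algorithm
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)
mean_pitch_hz = float(np.nanmean(f0))  # NaN frames are unvoiced

# Loudness proxy: frame-wise root-mean-square energy
mean_rms = float(np.mean(librosa.feature.rms(y=y)))

# Rough speaking-rate proxy: onset (syllable-like) events per second
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
onset_rate = len(onset_times) / (len(y) / sr)

print(f"pitch ~{mean_pitch_hz:.0f} Hz, RMS {mean_rms:.4f}, onsets {onset_rate:.1f}/s")
```

Features of this kind are only the starting point; real cloning systems train far richer models on them, but the sketch shows how little raw audio is needed to begin profiling a voice.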
Voice cloning has many legitimate use cases in education, healthcare or entertainment, such as personal video assistants, overdubs for actors, audiobooks, legitimate social media content or helping people with speech disabilities. However, cybercriminals around the world use voice impersonation to defraud relatives, for blackmail, cyberbullying and fake kidnapping scams, or to direct people to phishing links. In CEO fraud, fraudsters pose as company executives in order to induce employees to hand over confidential information or to carry out transactions.
Dealing with voice cloning scams
Users should be careful when dealing with video and audio content and consider the following advice:
Check the quality and consistency of the voice
Defects in voice quality are often audible. An unusual tone of voice, a static-sounding voice or an inconsistent speech pattern (manner of speaking, incorrect pronunciation and intonation) can indicate a forgery.
Background noise and artefacts
Pure voice cloning minimises background noise, so an unnaturally clean or sterile soundtrack in an audio or video clip can itself be a giveaway. Digital artefacts are a further warning sign.
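As an illustration only, the sketch below uses the open-source librosa library to estimate how quiet the quietest passages of a clip are and how tonal the signal is. The file name and the threshold are assumptions chosen for demonstration; an unusually clean track is merely a hint, not proof, of manipulation.

```python
# Sketch: flag audio with an unnaturally low noise floor.
# Assumes librosa is installed and a hypothetical file "clip.wav" exists.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav")

# Frame-wise RMS energy; the quietest frames approximate the noise floor
rms = librosa.feature.rms(y=y)[0]
noise_floor_db = 20 * np.log10(np.percentile(rms, 5) / (np.max(rms) + 1e-12) + 1e-12)

# Spectral flatness close to 0 indicates a very tonal, "clean" signal
flatness = float(np.mean(librosa.feature.spectral_flatness(y=y)))

# Threshold chosen purely for illustration
if noise_floor_db < -60:
    print("Warning: hardly any background noise; the audio may be synthetic.")
print(f"noise floor: {noise_floor_db:.1f} dB below peak, flatness: {flatness:.4f}")
```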
Data minimisation also applies to voice samples
Anyone using the Internet should not only disclose as little personal information about themselves as possible, but should also avoid sharing speech samples with strangers. Just a few seconds of audio material can be enough to create a voice clone.
Scrutinise unusual requests and overly attractive offers
The same rules apply to campaigns with deepfake video content as to other content from the internet: overly attractive offers, artificial time pressure on the addressee and requests for personal information are warning signs of fraudulent intent.
If in doubt, contact the alleged provider
A phone call to the alleged provider will quickly clarify the situation.
Notify the police
Users should always report fraudulent voice clones to the authorities.
Inform yourself and protect yourself
Consumers should take reports and warnings about such campaigns seriously. Equally important is security software that protects the digital identity against AI-supported phishing and other scams on computers and smartphones.