Technology to Unmask Deepfakes

Deepfakes are becoming increasingly difficult to detect, which is why several technologies are being developed to identify them automatically.

Deepfakes have ceased to be a harmless game or a form of entertainment and have become a real threat.

These types of videos are used to spread disinformation, for pornographic purposes, and to damage the reputation of a brand or a person. But cybercriminals are going a step further, and we are already seeing new applications that threaten security.

Just this week, the FBI issued a warning about cybercriminals using stolen personal data and deepfakes to pose as viable candidates for remote jobs. The goal is none other than to be hired and thus be able to steal financial data or company patents, access customer credentials and company databases, and so on.

Although deepfakes still contain anomalies that can help detect them, the quality they have reached makes them hard to spot unless we are forewarned. For this reason, it is essential to have technological tools that unmask them automatically.

For example, in 2020 Microsoft launched its Microsoft Video Authenticator, an intelligent platform for detecting deepfakes in photos and videos. Amnesty International also offers all users a reverse video analysis tool that can detect whether the images in a video have been used before. Other tools are available as well, such as InVID, a browser extension specialized in analyzing videos; Montage, which allows the analysis and tagging of YouTube videos; and Truepic, a photo and video verification platform.

All these initiatives have now been joined by a new project, developed by researchers from the K-riptography and Information Security for Open Networks (KISON) and Communication Networks & Social Change (CNSC) groups, belonging to the Internet Interdisciplinary Institute (IN3) of the Universitat Oberta de Catalunya (UOC). The DISSIMILAR initiative is the result of collaboration between the UOC, Warsaw University of Technology (Poland) and Okayama University (Japan).

This project aims to develop technological tools to help users automatically distinguish between original and altered multimedia content, using data hiding techniques and artificial intelligence.

“The project has a twofold objective. On the one hand, to provide content creators with tools to watermark their creations, making any modifications easily detectable. On the other hand, to provide social network users with tools based on state-of-the-art signal processing and machine learning methods to detect fake digital content,” explains David Megías, lead researcher at KISON and director of the IN3.

The UOC explains that digital watermarking is a set of techniques, from the field of data hiding, that embed imperceptible information in the original file so that a multimedia document can be verified easily and automatically. "They can be used to indicate the legitimacy of a piece of content, for example, to verify that a video or a photograph has been distributed by an official news agency. They can also be used as an authentication mark, which would be removed in case of modification of the content, or to trace the origin of the data. In other words, to find out if a source of information – for example, a Twitter account – is distributing false content," specifies Megías.
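To make the authentication-mark idea concrete, here is a minimal, hypothetical sketch (not the DISSIMILAR method itself) of a fragile watermark: a key-derived bit pattern is hidden in the least-significant bits of an image's pixels, and any later edit to those pixels destroys the pattern, flagging the content as modified.

```python
# Hypothetical fragile LSB watermark sketch; function names and the
# key-based scheme are illustrative assumptions, not the project's design.
import numpy as np

def embed_watermark(image: np.ndarray, key: int) -> np.ndarray:
    """Embed a key-derived pseudorandom bit pattern in the pixel LSBs."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | bits  # clear each LSB, then write the watermark bit

def verify_watermark(image: np.ndarray, key: int) -> float:
    """Return the fraction of LSBs that still match the expected pattern."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return float(np.mean((image & 1) == bits))

# An untouched image verifies at 1.0; tampering pushes the score toward 0.5.
original = np.zeros((64, 64), dtype=np.uint8)
marked = embed_watermark(original, key=42)
assert verify_watermark(marked, key=42) == 1.0
tampered = marked.copy()
tampered[:32] += 1  # flip the LSBs in the top half of the image
assert verify_watermark(tampered, key=42) < 0.8
```

Because the mark lives in the least-significant bits, it is imperceptible to viewers, yet even small modifications overwrite it, which is exactly the "removed in case of modification" behavior Megías describes.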

DISSIMILAR will also employ digital content forensic analysis techniques. The aim is to use signal processing technology to detect the intrinsic traces that devices and programs leave when creating or modifying any audiovisual document, such as sensor noise or optical distortions. Manipulations alter these patterns in ways that machine learning models can identify.
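As a rough illustration of sensor-noise forensics (a deliberate simplification, not the project's actual pipeline), the sketch below extracts a noise residual by subtracting a crude denoised version of an image, then correlates residuals against a reference camera fingerprint; images from the same sensor correlate noticeably more strongly.

```python
# Illustrative simplification of sensor-noise forensics: the residual left
# after denoising carries a device-specific pattern. All names and the
# box-blur denoiser are assumptions made for this toy example.
import numpy as np

def noise_residual(image: np.ndarray) -> np.ndarray:
    """Residual = image minus a crude 3x3 box-blur denoised version."""
    padded = np.pad(image, 1, mode="edge")
    smoothed = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return image - smoothed

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation between two flattened signals."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Simulate two photos of the same scene: one carries camera A's fixed
# sensor pattern ("fingerprint"), the other carries unrelated noise.
rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 1, (64, 64))
scene = rng.normal(0, 1, (64, 64)).cumsum(0).cumsum(1)  # smooth content
photo_same = scene + 0.5 * fingerprint
photo_other = scene + 0.5 * rng.normal(0, 1, (64, 64))

# The residual of the same-camera photo matches the fingerprint better.
assert correlation(noise_residual(photo_same), fingerprint) > \
       correlation(noise_residual(photo_other), fingerprint)
```

Real forensic systems replace the box blur with far stronger denoisers and feed such residuals, along with other distortion cues, into machine learning classifiers, but the underlying intuition is the same.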

“The idea is that the combination of all these tools will improve the results compared to using only one type of solution,” says the researcher.