AI-Generated Deepfakes Challenge Biometric Authentication
Gartner believes that by 2026, 3 out of 10 enterprises will no longer consider identity verification solutions to be reliable in isolation.
Cybercriminals have learned to use artificial intelligence (AI) to strengthen their attacks, and companies need to be aware that this now extends to areas such as authentication and identity verification.
Consulting firm Gartner predicts that by 2026, because of attacks relying on deepfakes, 30% of enterprises will no longer consider such authentication and identity verification solutions reliable as a standalone method.
“In the last decade, there have been several inflection points in AI fields that enable the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” explains Akif Khan, vice president analyst at Gartner.
“As a result,” he continues, “organizations may begin to question the reliability of authentication and identity verification solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”
Experts recommend combining presentation attack detection (PAD) and injection attack detection (IAD) with image inspection to counter this threat.
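To illustrate how such a layered defense might be wired together, here is a minimal sketch in Python. All names, scores, and thresholds are hypothetical assumptions for illustration; real PAD/IAD systems are vendor-specific products, not a few lines of scoring logic.

```python
from dataclasses import dataclass


@dataclass
class LivenessSignals:
    """Hypothetical per-check scores, each in [0, 1]; 1 = strong evidence of attack."""
    pad_score: float            # presentation attack detection (e.g. replayed photo/screen)
    iad_score: float            # injection attack detection (e.g. virtual camera feed)
    image_anomaly_score: float  # image inspection (e.g. synthesis artifacts)


def verify_liveness(signals: LivenessSignals, threshold: float = 0.5) -> bool:
    """Return True only if the capture passes all layered checks.

    Two illustrative rules: reject if any single detector is highly
    confident of an attack, and reject if the averaged evidence
    crosses the overall threshold.
    """
    scores = (signals.pad_score, signals.iad_score, signals.image_anomaly_score)
    if max(scores) > 0.9:          # one detector alone is near-certain
        return False
    combined = sum(scores) / len(scores)
    return combined <= threshold
```

The design point is the one the article makes: no single check is treated as reliable in isolation, so a confident alarm from any one layer, or moderate suspicion across several, is enough to reject the verification attempt.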