Trade article

Understanding Deepfakes

While there is no official definition, Deepfakes are generally understood to be computer-generated representations of a person in a scenario that never actually occurred. Although the term covers audio and images, the most notable Deepfakes are synthetic videos. It may never be possible to definitively detect Deepfakes, and here is why.

Early Deepfakes inserted a person's face into a pre-existing video and were often used to create revenge porn, implicating a person in a situation they knew nothing about. These early fakes could be detected by unnatural facial movements and by lighting that did not match the body the face had been grafted onto. The makers of Deepfake software addressed this by adding recursive processing iterations that blended the affected areas into a more natural-looking end result.

These improved fakes could be detected because the head movements did not match those of the body they were attached to, and because the timing of eye blinks was inconsistent. So the makers of Deepfake software used the detection software itself to further train and refine subsequent iterations of their artificial intelligence (AI) processing.
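To make the blink-timing cue concrete, here is a minimal sketch of the kind of heuristic a detector might apply. Everything in it is an invented illustration, not any vendor's actual method: the function names, the 2-10 second "natural" interval range, and the regularity cutoff are all assumptions chosen for the example. Humans blink at irregular intervals, so a clip whose blinks are too rare, too frequent, or too evenly spaced can be flagged.

```python
# Toy blink-timing check. Assumptions (not from the article): natural
# blinking averages roughly 2-10 s between blinks, with irregular spacing.
from statistics import mean, pstdev

def blink_stats(blink_times_s):
    """Return (mean interval, coefficient of variation) from blink timestamps."""
    intervals = [b - a for a, b in zip(blink_times_s, blink_times_s[1:])]
    if not intervals:
        return None
    m = mean(intervals)
    cv = pstdev(intervals) / m if m else 0.0  # low cv = suspiciously regular
    return m, cv

def looks_synthetic(blink_times_s, min_mean=2.0, max_mean=10.0, min_cv=0.15):
    """Flag clips whose blinking is too rare, too frequent, or too regular."""
    stats = blink_stats(blink_times_s)
    if stats is None:
        return True  # no blinks at all is itself a red flag
    m, cv = stats
    return not (min_mean <= m <= max_mean) or cv < min_cv

# Irregular intervals around 3-6 s look natural:
print(looks_synthetic([0.0, 3.1, 7.8, 10.2, 16.0]))  # False
# Metronomic blinking exactly every 4.0 s is flagged:
print(looks_synthetic([0.0, 4.0, 8.0, 12.0, 16.0]))  # True
```

Note that once such a rule is published, a Deepfake generator can simply be trained to produce blink timings that pass it, which is exactly the escalation the article describes next.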

This cat-and-mouse relationship may never end: each advance in detection software becomes the inspiration, and the testing tool, that improves the next generation of Deepfakes. This has been demonstrated repeatedly in blind experiments where participants were asked to choose which of two images or videos was fake, and they picked incorrectly about half the time, no better than chance.
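The arms-race dynamic can be sketched in a few lines of toy code. This is purely illustrative and assumes nothing about real Deepfake software: a "fake" is reduced to a single number, real samples sit near 1.0, the "detector" is a distance threshold, and each round the generator learns from being caught while the detector tightens its rule.

```python
# Toy detector-vs-generator arms race. All names and dynamics are invented
# for illustration; real systems train neural networks, not scalars.

def detector(sample, threshold):
    """Classify a sample as real if it is close enough to 1.0."""
    return abs(sample - 1.0) <= threshold

def train_round(fake, threshold):
    """One round: generator moves halfway toward realism, detector tightens."""
    fake = fake + 0.5 * (1.0 - fake)  # generator learns from detection feedback
    threshold = threshold * 0.7       # detector becomes stricter in response
    return fake, threshold

fake, threshold = 0.0, 0.5
for rnd in range(1, 7):
    fooled = detector(fake, threshold)
    print(f"round {rnd}: fake={fake:.3f} threshold={threshold:.3f} fooled={fooled}")
    fake, threshold = train_round(fake, threshold)
```

In this toy setup the generator eventually slips inside the detector's threshold, mirroring the article's point: every published detection rule doubles as a training signal for the next generation of fakes.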

Fortunately, most Deepfakes are made using earlier generations of AI software, which can often be detected by the latest generations of detection software.


Copyright © Forensic Protection