Facebook and Twitter earlier this week took down social media accounts associated with the Internet Research Agency, the Russian troll farm that interfered in the U.S. presidential election four years ago and spread misinformation to as many as 126 million Facebook users. Today, Facebook rolled out measures aimed at curbing disinformation ahead of Election Day in November. Deepfakes can make epic memes or put Nicolas Cage in every movie, but they can also undermine elections. As threats of election interference mount, two teams of AI researchers have recently introduced novel approaches to identifying deepfakes by watching for evidence of heartbeats.
Existing deepfake detection models rely on traditional media forensics methods, like tracking unnatural eyelid movements or distortions at the edge of the face; the first study of detecting unique GAN fingerprints appeared in 2018. Photoplethysmography (PPG) takes a different tack: it reads a human heartbeat from visual cues, such as the slight changes in skin color caused by blood flow. Remote PPG applications are being explored in areas like health care, but PPG is also being used to identify deepfakes, because generative models are not currently known to be able to mimic these subtle signals of human blood flow.
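The core PPG idea can be illustrated in a few lines. The sketch below (a simplified illustration, not either paper's method) averages the green channel of a face crop over time, since blood volume changes modulate green-light absorption most strongly, then recovers the dominant frequency in a plausible heart-rate band. A synthetic "video" with a 1.2 Hz brightness pulse stands in for real frames.

```python
import numpy as np

def extract_ppg_signal(frames, fps):
    """Average the green channel over the crop for each frame; blood
    volume changes modulate green-light absorption most strongly."""
    signal = np.array([f[..., 1].mean() for f in frames])
    return signal - signal.mean()  # remove the DC component

def estimate_heart_rate(signal, fps, lo=0.7, hi=4.0):
    """Pick the dominant frequency in the plausible heart-rate band
    (0.7-4.0 Hz, i.e. 42-240 bpm) via an FFT of the trace."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0  # beats per minute

# Synthetic "face crop" video: 10 s at 30 fps with a 1.2 Hz (72 bpm) pulse
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 1.5 * np.sin(2 * np.pi * 1.2 * t)
frames = [np.full((8, 8, 3), 120.0) + p for p in pulse]
print(estimate_heart_rate(extract_ppg_signal(frames, fps), fps))  # → 72.0
```

Real remote-PPG pipelines add face tracking, skin segmentation, and bandpass filtering to cope with motion and lighting noise; this sketch only shows why a faint periodic skin-color signal is recoverable at all.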
In work released last week, Binghamton University and Intel researchers introduced AI that goes beyond deepfake detection to recognize which deepfake model made a doctored video. The researchers found that deepfake models leave behind unique biological and generative noise signals in the videos they produce — what they call "deepfake heartbeats." The detection approach looks for residual biological signals from 32 different spots in a person's face, which the researchers call PPG cells.
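The "PPG cells" idea can be sketched as follows: split each face crop into a grid of regions and track one raw PPG trace per region, yielding a cells-by-time map that a downstream classifier would consume. The 4x8 grid and green-channel averaging here are assumptions for illustration, not the paper's exact cell layout or signal extraction.

```python
import numpy as np

def ppg_cell_map(frames, grid=(4, 8)):
    """Split each face crop into grid cells (4x8 = 32 here, an assumed
    layout) and track the mean green-channel value of every cell across
    frames, giving one raw PPG trace per cell. A source classifier would
    then consume this (cells x time) map."""
    rows, cols = grid
    h, w = frames[0].shape[:2]
    ch, cw = h // rows, w // cols
    traces = np.empty((rows * cols, len(frames)))
    for t, f in enumerate(frames):
        for r in range(rows):
            for c in range(cols):
                patch = f[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw, 1]
                traces[r * cols + c, t] = patch.mean()
    return traces - traces.mean(axis=1, keepdims=True)  # per-cell DC removal

frames = [np.random.rand(64, 64, 3) for _ in range(90)]  # 3 s at 30 fps
print(ppg_cell_map(frames).shape)  # → (32, 90)
```

The intuition behind source detection is that each generative model distorts these per-cell signals in its own characteristic way, so the spatial pattern of residuals acts as a model fingerprint.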
“We propose a deepfake source detector that predicts the source generative model for any given video. To our knowledge, our approach is the first to conduct a deeper analysis for source detection that interprets residuals of generative models for deepfake videos,” the paper reads. “Our key finding emerges from the fact that we can interpret these biological signals as fake heartbeats that contain a signature transformation of the residuals per model. Thus, it gives rise to a new exploration of these biological signals for not only determining the authenticity of a video, but also classifying its source model that generates the video.”
In experiments with deepfake video data sets, the PPG cell approach detected deepfakes with 97.3% accuracy and identified generative deepfake models from the popular deepfake data set FaceForensics++ with 93.4% accuracy.
The researchers’ paper, “How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals,” was published last week and accepted for publication by the International Joint Conference on Biometrics, which will take place later this month.
In another recent work, AI researchers from Alibaba Group, Kyushu University, Nanyang Technological University, and Tianjin University introduced DeepRhythm, a deepfake detection model that recognizes human heartbeats from visual PPG. The authors said DeepRhythm differs from previously existing models for identifying live people in a video because it attempts to recognize rhythm patterns, “since fake videos may still have the heart rhythms, but their patterns are diminished by deepfake methods and are different from the real ones.”
DeepRhythm incorporates a heart rhythm motion amplification module and a learnable spatial-temporal attention mechanism at various stages of the network. The researchers say DeepRhythm outperforms numerous state-of-the-art deepfake detection methods when using FaceForensics++ as a benchmark.
“Experimental results on FaceForensics++ and Deepfake Detection Challenge-preview data set demonstrate that our method not only outperforms state-of-the-art methods but is robust to various degradations,” the team wrote. The paper, titled “DeepRhythm: Exposing DeepFakes with Attentional Visual Heartbeat Rhythms,” was published in June and revised last week, and it was accepted for publication by the ACM Multimedia conference set to take place in October.
Both groups of researchers say future work will explore combining PPG systems with existing video authentication methods, which would allow more accurate and robust identification of deepfake videos.
Earlier this week, Microsoft introduced the Video Authentication deepfake detection service for Azure. As part of its launch, Video Authentication is being made available to news media and political campaigns through the AI Foundation’s Reality Defender program.
As concerns about election interference kick into high gear, doctored videos and falsehoods spread by President Trump and his team currently appear to pose bigger threats than deepfakes.
On Monday, White House director of social media Dan Scavino shared a video that Twitter labeled as “manipulated media.” The original video showed Harry Belafonte asleep in a news interview, while in the doctored version it was Democratic presidential candidate Joe Biden who appeared to be asleep. A CBS Sacramento anchor joined in calling the video a fake on Monday, and Twitter has removed the video due to a report filed by the copyright owner. But the doctored video has been viewed more than a million times.