Can the average person spot a deepfake?

As deepfake technology advances, the ability of the average person to differentiate between authentic and manipulated content is being put to the test.

Sarah Brady
February 14, 2024
Credit: Shutterstock/Tero Vesalainen

In January 2024, US voters in the state of New Hampshire received a call from President Joe Biden, urging them to refrain from voting in the primary election. Only, it wasn’t him: the call was a robocall using deepfake audio to impersonate the President.

The likeness of the AI-generated audio to President Biden’s voice speaks to the growing sophistication of deepfakes, raising alarm bells about the potential misuse of the technology.

In the lead-up to major elections, concerns about the proliferation of deepfake content have surged, prompting investigations into the average viewer’s ability to discern between genuine and artificially generated media.

The challenge in detecting deepfakes lies in the technology’s increasing refinement. High-end manipulations, which often focus on facial transformations, make it difficult for viewers to discern authenticity.

GlobalData analyst Emma Christy warns, “A significant number of people will be unable to discern deepfake audio from reality, with catastrophic implications for countries holding elections this year.”

Christy cites a 2023 University College London study in which participants identified fake speech only 73% of the time, improving just slightly after they received training to recognise aspects of deepfake speech. “The samples used in the study were created with relatively old AI algorithms, which suggests humans might be less able to detect deepfake speech created using present and future AI,” says Christy.

A recent study, published in iScience, revealed that people struggle to reliably detect manipulated video content. Despite being informed that half of the videos were authentic, participants guessed that 67.4% were genuine.

As the ability to generate deepfakes has become more accessible, concerns about accountability and the use of deepfakes in deceptive campaigns, such as mass voter misinformation efforts, are coming to the fore.

MIT’s DetectFakes Project explores how well ordinary individuals can distinguish authentic videos from those produced by artificial intelligence.

The Kaggle Deepfake Detection Challenge (DFDC) enlisted the collaborative efforts of industry giants like AWS, Facebook, and Microsoft, along with academic institutions, to incentivize the development of innovative technologies for deepfake detection, awarding a substantial $1m to the competition winners.

The challenge posed by deepfakes goes beyond traditional fake news: this AI-generated content is more convincing and tends to create false narratives that resonate with individuals’ beliefs.

Former Google fraud czar, Shuman Ghosemajumder, warned of the societal concern surrounding deepfakes, emphasising their potential to damage individuals and influence public opinion.

Research indicates that people struggle to differentiate between real and deepfake content, with the potential for deepfakes to sow uncertainty and erode trust in genuine media.

How do you spot a deepfake?

Pay attention to the face


Deepfakes typically involve facial transformations – that is, a face superimposed onto another person’s body.

Christopher Cemper, founder of prompt library company AIPRM, says that examining fine skin textures and facial details is another important factor to consider.

“While the deepfake generation has rapidly advanced, fully photorealistic reproduction of complex human skin and the minute muscular motions around our eyes, noses and mouths remains extremely challenging,” adds Cemper.

It’s all in the hand

While the complexity of hand anatomy stumps even the best artists, AI image generators are also notorious for their inability to produce a realistic hand. Often they are missing fingers or have one too many; joints may be misplaced or absent altogether.

A lack of spatial understanding in some AI models can result in unrealistic hand shapes and unnatural posing, especially in images of hands performing fine motor tasks, such as grasping small objects.

Look at the lip-syncing


For videos, deepfakes may struggle with accurate synchronisation. Check for lip-syncing errors, where the audio and the movement of the lips do not match.

There may be other audio anomalies such as unnatural pauses, glitches, or artefacts, or the speech may be stilted and off-pitch.
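The idea behind a lip-sync check can be sketched in miniature. The toy example below assumes both the audio loudness and the mouth movement have already been reduced to one number per video frame (real tools extract these from the media; here they are just lists), and finds the frame offset at which the two signals correlate best: a well-synced video should peak at a lag of zero.

```python
import random

def best_lag(audio, lips, max_lag=10):
    """Return the frame offset of `lips` relative to `audio` that maximises
    their average cross-correlation. A lag of 0 means the signals are in sync."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        total, n = 0.0, 0
        for i, a in enumerate(audio):
            j = i + lag
            if 0 <= j < len(lips):
                total += a * lips[j]
                n += 1
        score = total / n if n else float("-inf")
        if score > best_score:
            best_score, best = score, lag
    return best

# Toy demo: a mouth-movement signal delayed 3 frames behind the audio,
# as it might be in a badly lip-synced deepfake.
random.seed(1)
audio = [random.random() for _ in range(200)]  # per-frame audio loudness
lips = [0.0] * 3 + audio[:-3]                  # per-frame mouth openness, delayed
print(best_lag(audio, lips))                   # a nonzero lag flags a sync problem
```

This is only the alignment step; production tools also have to extract robust mouth and audio features before any such comparison is meaningful.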

Try an AI image detector


There are plenty of free AI image detectors, including Everypixel Aesthetics and Illuminarty. These platforms use neural networks to analyse images for artefacts consistent with AI generation.

Jaime Moles, senior technical manager at ExtraHop, says that there are already detection algorithms in place to catch some deepfakes, which work by scanning where the digital overlay connects to the actual face being masked.

“This tech is broadly called a ‘generative adversarial network’ (GAN), and these tools have reported 99% accuracy in catching deepfakes. The challenge is that GANs are used to train AI models to improve performance, meaning the tools that catch the deepfakes are used to train those same models to avoid being caught in future,” says Moles.
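The adversarial feedback loop Moles describes can be illustrated with a deliberately tiny sketch. The pure-Python example below (a toy, not any production detector) pits a two-parameter generator against a logistic-regression discriminator on one-dimensional data: each training step improves the discriminator, and the generator then uses that very discriminator's gradient to make its output harder to catch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a Gaussian centred on 4.0.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator G(z) = gw*z + gb; discriminator D(x) = sigmoid(da*x + dc).
gw, gb = 1.0, 0.0
da, dc = 0.0, 0.0
lr = 0.01

for step in range(5000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = gw * z + gb

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(da * x_real + dc)
    p_fake = sigmoid(da * x_fake + dc)
    da += lr * ((1.0 - p_real) * x_real - p_fake * x_fake)
    dc += lr * ((1.0 - p_real) - p_fake)

    # Generator step: adjust G so the same discriminator scores its
    # output as real. The detector's progress is the forger's signal.
    p_fake = sigmoid(da * x_fake + dc)
    grad_x = (1.0 - p_fake) * da   # non-saturating generator loss
    gw += lr * grad_x * z
    gb += lr * grad_x

fake_mean = sum(gw * random.gauss(0.0, 1.0) + gb for _ in range(1000)) / 1000
print(round(fake_mean, 2))  # the generator's output drifts toward the real mean
```

In practice both sides are deep networks trained on images, but the structure is the same, which is why any published detector tends to become training material for the next generation of fakes.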

Check the source


Compare the video with known source material, such as other videos or images of the same person. Look for discrepancies in appearance, voice, and behaviour.

If the source cannot easily be found, check the metadata of the file.

Metadata is automatically embedded in images and videos by the camera at the point of capture, and some media editing programs also write it into files, so metadata can offer valuable insights into the origin of a video.

However, relying on metadata alone is not enough to identify deepfakes, because it is easily manipulated: saving a video in a different format or processing it through editing software may erase the original metadata, a file can be re-uploaded without its initial title, and creators can manually alter metadata fields.
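To make the idea concrete: EXIF metadata in JPEG and TIFF files is stored as a small TIFF-style directory of tagged fields. The sketch below (a simplified, little-endian-only toy, not a full parser) builds such a block containing a camera "Make" tag and reads it back, which is essentially what a metadata viewer does.

```python
import struct

TAG_MAKE = 0x010F  # EXIF/TIFF "Make" tag: the camera manufacturer

def build_minimal_tiff(make):
    """Build a tiny little-endian TIFF block with a single 'Make' tag,
    mimicking the metadata a camera embeds at capture time."""
    value = make.encode("ascii") + b"\x00"       # ASCII values are NUL-terminated
    header = struct.pack("<2sHI", b"II", 42, 8)  # byte order, magic 42, IFD offset
    value_offset = 8 + 2 + 12 + 4                # value stored right after the IFD
    entry = struct.pack("<HHII", TAG_MAKE, 2, len(value), value_offset)
    ifd = struct.pack("<H", 1) + entry + struct.pack("<I", 0)  # 1 entry, no next IFD
    return header + ifd + value

def read_make(tiff):
    """Walk the first image file directory and return the 'Make' tag, if any."""
    if tiff[:4] != b"II\x2a\x00":                # only handle little-endian TIFF
        return None
    (ifd_offset,) = struct.unpack_from("<I", tiff, 4)
    (count,) = struct.unpack_from("<H", tiff, ifd_offset)
    for i in range(count):
        tag, typ, n, off = struct.unpack_from("<HHII", tiff, ifd_offset + 2 + 12 * i)
        if tag == TAG_MAKE and typ == 2:         # type 2 = ASCII string
            # Values of 4 bytes or fewer are stored inline in the offset field.
            raw = tiff[off:off + n] if n > 4 else struct.pack("<I", off)[:n]
            return raw.rstrip(b"\x00").decode("ascii")
    return None

data = build_minimal_tiff("TestCam")
print(read_make(data))  # → TestCam
```

The same directory structure is what editing tools rewrite or strip, which is exactly why a clean-looking "Make" field proves little on its own.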

A multi-faceted approach


Efforts to detect and prevent deepfakes are underway, with researchers developing software and proposing updates to election campaign fraud rules. However, the continual advancement of AI technology poses a persistent challenge, making it crucial to educate the public on the existence of deepfakes and how to identify them.

As deepfakes become more accessible and convincing, addressing this threat requires a multi-faceted approach, involving technological advancements, regulatory measures, and public awareness initiatives.

With so many major elections taking place, 2024 may serve as a critical testing ground for society’s ability to navigate the challenges posed by AI-generated content.
