Machines spot deepfake pictures better than humans, but people outperform AI in detecting deepfake videos
Artificial intelligence may be better than people at spotting fake faces in photos — but humans still have the upper hand when those fakes start moving.
In a recent large-scale study, psychologists and computer scientists at the University of Florida found that AI programs were up to 97% accurate at detecting pictures of deepfake faces. Human participants, by contrast, performed no better than chance.
However, the algorithms’ performance declined sharply when it came to detecting deepfake videos. In those tests, programs performed at chance levels, while humans correctly identified real and fake videos about two-thirds of the time. Human participants appeared to pick up on subtle inconsistencies in movement, facial expressions and timing — cues the algorithms struggled to interpret.
As sophisticated fake images and videos, known as deepfakes, continue to improve and spread widely online, distinguishing real from AI-generated imagery becomes more important.
“The significant decisions that are made by individuals and governments need to be based on real and accurate information. We need to know if people can tell what’s real or not as the technology gets more sophisticated at fooling us,” said Brian Cahill, Ph.D., a professor of psychology at UF and co-author of the study.
Cahill collaborated on the study with researchers across UF, including the Florida Institute for National Security’s Didem Pehlivanoglu, Ph.D., and Mengdi Zhu, Ph.D., along with senior author and Professor of Psychology Natalie Ebner, Ph.D. Their study was published Jan. 7 in the journal Cognitive Research: Principles and Implications.
The researchers created and curated hundreds of real and fake images and videos featuring static faces and people talking. Thousands of participants were then asked to judge whether each image or video was real or fake. The same images and videos were then run through algorithms designed to separate real content from fake.
The findings suggest that for still images, automated detection tools may now outperform human judgment alone. But people still have an advantage when it comes to identifying deepfake videos.
“I think we were all a little shocked to see humans outperform AI on videos,” Cahill said. “But the videos have more cues, it’s a richer context. There’s more stuff for the human brain to pick up on.”
People’s abilities and even mood made a difference in how well they detected deepfake videos. Perhaps unsurprisingly, those who scored higher in analytical thinking and internet skills were better at detecting AI-generated videos. Participants who reported being in a better mood performed worse at detecting deepfake videos, which may reflect greater trust when feeling positive.
The study tested specific types of faces and videos under controlled conditions, which may not reflect the full complexity of real-world online content. And both AI systems and deepfake technology are evolving rapidly, meaning the balance between humans and machines could shift again.
The unfortunate reality, the authors note, is that with deepfake imagery rapidly progressing, determining truth online requires increasing vigilance from everyone.
“We don’t necessarily need to be able to detect everything ourselves,” Zhu said. “But we do need to stay alert, question what we see and look for evidence to support it.”
Journal
Cognitive Research: Principles and Implications
Method of Research
Experimental study
Subject of Research
People
Article Title
Is this real? Susceptibility to deepfakes in machines and humans
“AI slop” hurts consumers and creators. But high-quality AI could help both.
Wading through a sea of low-quality, AI-generated content on platforms like YouTube, Reddit or TikTok can turn off consumers while making it hard for professional artists, writers and other content creators to stand out.
That’s according to a new study outlining the market effects of unleashing AI on creative endeavors, allowing novices to flood the market with barely acceptable “AI slop.” However, the same study suggests that, as generative AI tools improve, consumers will have access to increasingly better content while professionals can benefit from enhancing their already high-quality work.
The study was inspired by the rapid rise of AI-generated or AI-enhanced content on most social media platforms. Generative AI tools can make it easier for beginners to enter creative spaces, but the resulting flood of content quickly overwhelms consumers and platforms.
“Now there is a flood of relatively low-quality content. Because the quantity is so large, it congests the recommendation systems, so it gets harder to encounter the truly high-quality content,” said Tianxin Zou, Ph.D., a professor of marketing in the University of Florida’s Warrington College of Business and co-author of the new report.
Using economic modeling, Zou and his colleagues explored the effects of generative AI on content marketplaces as the quality of AI tools increased from low quality — now often derided as AI slop — to expert level. When the AI content is middling, the authors found, it harms both consumers and professionals by making it harder to find content worth consuming.
Platforms would be better served by clearly labeling AI-generated content. That transparency would make it easier for consumers to decide what to engage with before they give up on a platform entirely while helping professionals stand out.
“If consumers can clearly identify what content is created by the professionals, then there wouldn’t be this problem because then consumers could just go to them,” Zou said.
We may still be in the slop phase of generative AI for artistic endeavors like video production. But if current trends continue, even professional artists may benefit from using generative AI to bring their expert-quality work to the next level, the researchers found.
“For professionals, the best thing for them to do is learn to use generative AI and combine it into their workflow,” Zou said. “At the same time, they have to pay attention to whether consumers like the way they incorporate generative AI.”
Zou collaborated with Zijun Shi, Ph.D., of Hong Kong University of Science and Technology and Yue Wu, Ph.D., of the University of Pittsburgh, on the study, which was published recently in the Journal of Marketing Research.
Journal
Journal of Marketing Research
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
Welfare Implications of Democratization in Content Creation: Generative AI and Beyond