By AFP
December 21, 2025

AI creations have triggered a debate about who controls a person's identity and legacy after death - Copyright AFP Chris DELMAS
Anuj Chopra in Washington, with Anna Malpas and Rachel Blundy in London
In a parallel reality, Queen Elizabeth II gushes over cheese puffs, a gun-toting Saddam Hussein struts into a wrestling ring, and Pope John Paul II attempts skateboarding.
Hyper-realistic AI videos of dead celebrities — created with apps such as OpenAI’s easy-to-use Sora — have rapidly spread online, prompting debate over the control of deceased people’s likenesses.
OpenAI’s app, launched in September and widely dubbed a deepfake machine, has unleashed a flood of videos of historical figures including Winston Churchill as well as celebrities such as Michael Jackson and Elvis Presley.
In one TikTok clip reviewed by AFP, Queen Elizabeth II, clad in pearls and a crown, arrives at a wrestling match on a scooter, climbs a fence, and leaps onto a male wrestler.
In a separate Facebook clip, the late queen is shown praising “delightfully orange” cheese puffs in a supermarket aisle, while another depicts her playing football.
But not all videos — powered by OpenAI’s Sora 2 model — have prompted laughs.
In October, OpenAI blocked users from creating videos of Martin Luther King Jr. after the estate of the civil rights icon complained about disrespectful depictions.
Some users created videos depicting King making monkey noises during his celebrated “I Have a Dream” speech, illustrating how users can portray public figures at will, making them say or do things they never did.
– ‘Maddening’ –
“We’re getting into the ‘uncanny valley’,” said Constance de Saint Laurent, a professor at Ireland’s Maynooth University, referring to the phenomenon in which interactions with artificial objects are so human-like it triggers unease.
“If suddenly you started receiving videos of a deceased family member, this is traumatizing,” she told AFP. “These (videos) have real consequences.”
In recent weeks, the children of late actor Robin Williams, comedian George Carlin, and activist Malcolm X have condemned the use of Sora to create synthetic videos of their fathers.
Zelda Williams, the daughter of Robin Williams, recently pleaded on Instagram to “stop sending me AI videos of dad,” calling the content “maddening.”
An OpenAI spokesman told AFP that while there were “strong free speech interests in depicting historical figures,” public figures and their families should have ultimate control over their likeness.
For “recently deceased” figures, he added, authorized representatives or estate owners can now request that their likeness not be used in Sora.
– ‘Control likeness’ –
“Despite what OpenAI says about wanting people to control their likeness, they have released a tool that decidedly does the opposite,” Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, told AFP.
“While they (mostly) stopped the creation of MLK Jr. videos, they are not stopping users from co-opting the identity of many other celebrities.”
“Even with OpenAI putting some safeguards to protect MLK Jr. there will be another AI model that does not, and so this problem will surely only get worse,” said Farid.
That reality was underscored in the aftermath of Hollywood director Rob Reiner’s alleged murder this month, as AFP fact-checkers uncovered AI-generated clips using his likeness spreading online.
As advanced AI tools proliferate, the vulnerability is no longer confined to public figures: deceased non-celebrities may also have their names, likenesses, and words repurposed for synthetic manipulation.
Researchers warn that the unchecked spread of synthetic content — widely called AI slop — could ultimately drive users away from social media.
“The issue with misinformation in general is not so much that people believe it. A lot of people don’t,” said Saint Laurent.
“The issue is that they see real news and they don’t trust it anymore. And this (Sora) is going to massively increase that.”
burs-ac/des
As US battles China on AI, some companies choose Chinese
By AFP
December 21, 2025

The January launch of Chinese company DeepSeek's high-performance, low-cost and open source 'R1' large language model (LLM) defied the perception that the best AI tech had to be from US juggernauts like OpenAI, Anthropic or Google - Copyright AFP Kirill KUDRYAVTSEV
Thomas Urbain with Luna Lin in Beijing
Even as the United States is locked in a bitter rivalry with China over the deployment of artificial intelligence, Chinese technology is quietly making inroads into the US market.
Despite considerable geopolitical tensions, Chinese open-source AI models are winning over a growing number of programmers and companies in the United States.
These are different from the closed generative AI models that have become household names — such as ChatGPT-maker OpenAI’s products or Google’s Gemini — whose inner workings are fiercely protected.
In contrast, “open” models offered by many Chinese rivals, from Alibaba to DeepSeek, allow programmers to customize parts of the software to suit their needs.
Globally, use of Chinese-developed open models has surged from just 1.2 percent in late 2024 to nearly 30 percent in August, according to a report published this month by the developers’ platform OpenRouter and US venture capital firm Andreessen Horowitz.
China’s open-source models “are cheap — in some cases free — and they work well,” Wang Wen, dean of the Chongyang Institute for Financial Studies at Renmin University of China told AFP.
One American entrepreneur, speaking on condition of anonymity, said their business saves $400,000 annually by using Alibaba’s Qwen AI models instead of proprietary models.
“If you need cutting-edge capabilities, you go back to OpenAI, Anthropic or Google, but most applications don’t need that,” said the entrepreneur.
US chip titan Nvidia, AI firm Perplexity and California’s Stanford University are also using Qwen models in some of their work.
– DeepSeek shock –
The January launch of DeepSeek’s high-performance, low-cost and open source “R1” large language model (LLM) defied the perception that the best AI tech had to be from US juggernauts like OpenAI, Anthropic or Google.
It was also a reckoning for the United States — locked in a battle for dominance in AI tech with China — on how far its archrival had come.
AI models from China’s MiniMax and Z.ai are also popular overseas, and the country has entered the race to build AI agents — programs that use chatbots to complete online tasks like buying tickets or adding events to a calendar.
Agent-friendly, open-source models, like the latest version of the Kimi K2 model from the startup Moonshot AI, released in November, are widely considered the next frontier in the generative AI revolution.
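The agent pattern described above can be sketched in a few lines: a language model maps a user’s request to a tool call, the program executes the tool, and the result is returned. This is an illustrative toy, not any vendor’s implementation; the model is mocked, and `Calendar`, `add_event`, and the task string are invented for the example. A real deployment would call an actual LLM API (an open-weight Qwen or Kimi model, for instance) in place of `mock_llm`.

```python
from dataclasses import dataclass, field

@dataclass
class Calendar:
    # A stand-in "tool" the agent can act on.
    events: list = field(default_factory=list)

    def add_event(self, title: str, date: str) -> str:
        self.events.append((date, title))
        return f"added '{title}' on {date}"

def mock_llm(task: str) -> dict:
    # Stand-in for a language model that maps a natural-language
    # task to a structured tool call.
    if "calendar" in task:
        return {"tool": "add_event",
                "args": {"title": "Dentist", "date": "2026-01-15"}}
    return {"tool": "none", "args": {}}

def run_agent(task: str, calendar: Calendar) -> str:
    # One turn of the agent loop: ask the model, execute its choice.
    decision = mock_llm(task)
    if decision["tool"] == "add_event":
        return calendar.add_event(**decision["args"])
    return "no action taken"

cal = Calendar()
print(run_agent("add my dentist appointment to the calendar", cal))
```

Production agents run this loop repeatedly, feeding each tool result back to the model until the task is complete; the single-turn version above only shows the dispatch step.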
The US government is aware of open-source’s potential.
In July, the Trump administration released an “AI Action Plan” that said America needed “leading open models founded on American values”.
These could become global standards, it said.
But so far US companies are taking the opposite track.
Meta, which had led the country’s open-source efforts with its Llama models, is now concentrating on closed-source AI instead.
However, this summer, OpenAI — under pressure to revive the spirit of its origin as a nonprofit — released two “open-weight” models (slightly less malleable than “open-source”).
– ‘Build trust’ –
Among major Western companies, only France’s Mistral is sticking with open-source, but it ranks far behind DeepSeek and Qwen in usage rankings.
Western open-source offerings are “just not as interesting,” said the US entrepreneur who uses Alibaba’s Qwen.
The Chinese government has encouraged open-source AI technology, despite questions over its profitability.
Mark Barton, chief technology officer at OMNIUX, said he was considering using Qwen but some of his clients could be uncomfortable with the idea of interacting with Chinese-made AI, even for specific tasks.
Given the current US administration’s stance on Chinese tech companies, risks remain, he told AFP.
“We wouldn’t want to go all-in with one specific model provider, especially one that’s maybe not aligned with Western ideas,” said Barton.
“If Alibaba were to get sanctioned or usage was effectively blacklisted, we don’t want to get caught in that trap.”
But Paul Triolo, a partner at DGA-Albright Stonebridge Group, said there were no “salient issues” surrounding data security.
“Companies can choose to use the models and build on them…without any connection to China,” he explained.
A recently published Stanford study posited that “the very nature of open-model releases enables better scrutiny” of the tech.
Gao Fei, chief technology officer at Chinese AI wellness platform BOK Health, agrees.
“The transparency and sharing nature of open source are themselves the best ways to build trust,” he said.
By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
December 21, 2025

City of London at night. — Image by © Tim Sandle
As leadership teams wrap up annual planning and look ahead to 2026, those occupying the heady heights of the C-suite have a rare moment in the corporate calendar to step back and reassess which trends will actually matter as the next year unfolds.
Dimitri Masin, Co-Founder and CEO of Gradient Labs — an AI-native fintech working with leading financial institutions across Europe and recently launched in the U.S. — believes the next phase of customer experience will look fundamentally different. Based on what his team is seeing in live, regulated deployments, Masin has told Digital Journal about three customer experience shifts that will define CX in 2026.
Dimitri Masin previously served as Sales Finance Analyst at Google and VP Data Science, Financial Crime and Fraud at Monzo (a UK digital bank), joining as one of the early employees and scaling a 120+ person team. With a background in financial engineering and AI, he specialises in risk-compliant automation for regulated industries. With two partners, he established Gradient Labs, a conversational AI platform purpose-built for financial services.
This year, the startup secured a $13 million investment in just one week, and the platform can now reach over 32 million end-users.
1. Voice AI becomes trusted and safe
According to Masin: “Voice will shift from being the most unpredictable customer-support channel to the most trusted one. Financial institutions will begin adopting voice AI that can reason through complex procedures, follow multi-step compliance workflows, and guarantee audit-ready accuracy in real time.”
This means, as the technology matures: “Voice is becoming a core part of the AI-powered operating system for financial firms – resolving issues end-to-end, not just answering calls. The global voice banking market is projected to grow to nearly $18 billion by 2032, so this is the future the industry is heading [toward].”
2. Outbound predictive communication
Masin also sees predictive analytics increasing in scope: “The next evolution of customer service is outbound predictive communication – moving from reactive responses to proactive engagement. AI agents will anticipate customer needs before they surface, reaching out with solutions, not apologies. Imagine a system that alerts a customer before a payment fails, or offers guidance before a compliance issue even occurs.”
As to the significance? “This shift from reactive to predictive service will redefine what trust and satisfaction mean in financial experiences.”
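The “alert a customer before a payment fails” idea above reduces, at its simplest, to comparing a running balance against upcoming scheduled payments. This is a toy sketch under stated assumptions: the field names (`payee`, `amount`, `due`), the data shapes, and the alert wording are all invented for illustration and do not come from any real banking API.

```python
from datetime import date

def predict_failing_payments(balance: float, scheduled: list[dict]) -> list[str]:
    """Return alert messages for scheduled payments the balance cannot cover."""
    alerts = []
    running = balance
    # Process payments in due-date order, deducting each one that clears.
    for p in sorted(scheduled, key=lambda p: p["due"]):
        if running < p["amount"]:
            alerts.append(
                f"Payment '{p['payee']}' of {p['amount']:.2f} due {p['due']} "
                "may fail: insufficient funds"
            )
        else:
            running -= p["amount"]
    return alerts

alerts = predict_failing_payments(
    balance=120.0,
    scheduled=[
        {"payee": "Rent", "amount": 100.0, "due": date(2026, 1, 1)},
        {"payee": "Gym", "amount": 35.0, "due": date(2026, 1, 3)},
    ],
)
for a in alerts:
    print(a)
```

A production system would replace this deterministic rule with a model that also weighs expected inflows and historical spending, but the reactive-to-proactive shift Masin describes is the same: the check runs before the payment is attempted, not after it bounces.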
3. The shift to 360° autonomous customer experience
Attracting and keeping customers remains essential to any business seeking to grow, and here autonomous AI becomes a necessary tool: “We’re moving beyond hyper-personalisation toward truly agentic AI – systems that don’t just tailor experiences, but act on behalf of customers to resolve their needs autonomously. Gartner predicts that by 2029, agentic AI will autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs, but the market demonstrates it can happen sooner.”
As to what this means in practice, Masin explains: “AI systems will not just personalise customer experiences but autonomously act on behalf of users across inbound requests, proactive outreach, and back office operations – executing payments, resolving disputes, and managing compliance checks in real time. Intelligent agents manage entire customer journeys and compliance workflows end-to-end. The shift from ‘hyper-personalised’ to ‘hands-on, proactive AI’ will redefine what trust and efficiency mean in customer operations.”