By AFP
April 10, 2026

Imran Ahmed, head of a prominent anti-disinformation watchdog, has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable to their charms - Copyright AFP Joel Saget
Arina Porkhovnik
The head of a prominent anti-disinformation watchdog has warned of the dangers posed by AI chatbots, saying children are particularly vulnerable.
“Social media broadcasts to billions, AI whispers to one,” Imran Ahmed, who heads the Center for Countering Digital Hate (CCDH), told a disinformation conference this week.
“No society should build machines that can meet a child in their loneliest moment and offer them harm as if it were help,” Ahmed told the Cambridge Disinformation Summit.
Delivering Wednesday’s lecture by video call to his former university, Ahmed cited the case of a UK mother killed by her own son, who was allegedly acting on the instructions of a chatbot.
“None of us is immune, when a machine can offer lethal guidance to a young person as if it were fact,” he said.
Ahmed, a British national who lives in the United States, is among five Europeans whom the US State Department has said would be denied visas.
This comes even though he holds US permanent residency and his wife and daughters are American citizens.
– ‘System under pressure’ –
According to the centre’s most recent report “Killer Apps”, eight out of 10 AI chatbots were willing to assist teen users “in planning violent attacks, including a school shooting, religious bombings, and high-profile assassinations”.
Of the 10 chatbots tested, only Anthropic’s Claude and Snapchat’s My AI consistently refused to assist would-be attackers.
In a 2025 investigation entitled “Fake Friend”, the watchdog tested ChatGPT, one of the world’s most popular AI chatbots.
“Within minutes, it produced instructions for self-harm, suicide planning, and substance abuse,” Ahmed said, adding in some cases it also generated goodbye letters for children contemplating ending their lives.
Unlike social media and other systems that “just amplify harmful content,” AI chatbots generate and personalise it “at the moment of greatest vulnerability”.
“The intimacy is deeper and the harm may be harder to detect before it’s too late,” Ahmed said, adding that the systems learn what you fear, what you want and what you are ashamed of, and respond in real time, with no human judgement or editorial restraint.
A father of two daughters, Ahmed said: “My wife and I lie awake at night talking about how to protect them from systems that could reach them before we even know it is happening.”
He stressed that time to act is limited and called for new laws to regulate AI.
“We spent a decade learning that social media companies will not self-regulate. We have now perhaps 18 months before the same lesson becomes undeniable for AI.”
Ahmed said he was “the only one” of the five people threatened by a US visa ban still in the United States, adding he is now “fighting in federal court against that unconstitutional threat to send me to prison”.
The US State Department has accused the five of attempting to “coerce” US-based social media platforms into censoring viewpoints they oppose.
When powerful industries “lash out like this”, Ahmed said, “it is the sound of a system under pressure.”
Mythos AI alarm bells: Fair warning or marketing hype?
By AFP
April 10, 2026

Some in the cybersecurity world see a near future in which artificial intelligence agents fight to defend computer networks from hackers using the same technology to attack - Copyright AFP Joel Saget
Glenn CHAPMAN
Anthropic’s decision to postpone the release of its new AI model Claude Mythos, said to be so skilled at coding that it could become a potent weapon for hackers, has been met with a mix of alarm and skepticism.
The company is among several contenders in a fierce artificial intelligence race, and promoting the awe of its own technology boosts business and enhances its allure in the event that it soon goes public, as is rumored.
“The world has no choice but to take the cyber threat associated with Mythos seriously,” said David Sacks, an entrepreneur and investor who heads President Donald Trump’s council of advisors on technology.
“But it’s hard to ignore that Anthropic has a history of scare tactics.”
Mythos has sparked fears of hackers commanding armies of AI agents able to break through computer defenses with ease.
At this week’s HumanX AI conference in San Francisco, Alex Stamos of startup Corridor, which addresses AI safety, acknowledged a real threat from agentic hackers.
And Stamos quipped about what he referred to as Anthropic’s “marketing schtick.”
“They have these adorable cutesy cartoons about these products that are so incredibly dangerous that they won’t even let people use them,” Stamos said of the San Francisco-based startup.
“It’s like if the Manhattan Project announced the nuclear bomb within a cute little Calvin and Hobbes cartoon.”
The heads of America’s biggest banks met this week with Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent to weigh the security implications of the yet-to-be-released Claude Mythos, according to reports Friday.
“Mythos model points to something far more consequential than another leap in artificial intelligence,” Cato Networks co-founder and chief executive Shlomo Kramer said in a blog post.
“It signals a shift that could redefine the balance between attackers and defenders in cyberspace.”
A tightly restricted preview of Mythos was shared with partner organizations this week, under an initiative called Project Glasswing. They include Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike and JPMorgan Chase.
According to Anthropic and partners, Mythos can autonomously scan vast amounts of code to find and chain together previously unknown security vulnerabilities in all kinds of software, from operating systems to web browsers.
Crucially, they warn, this can be done at a speed and scale no human could match, meaning it could be used to bring down banks, hospitals or national infrastructure within hours.
“What once required elite specialists can now be performed by software agents,” Kramer said.
“The immediate consequences will be a surge in vulnerability discovery, a true tsunami” of exploiting known and unknown vulnerabilities.
– ‘Agent-to-agent war’ –
At HumanX, the apparent consensus was that it makes sense that AI agents already adept at coding will excel at finding weaknesses in software.
“We’re not in an era where human beings can write code when we have superhuman (AI models) that are then going to find bugs in it,” Stamos contended.
“It’s just not possible.”
He predicted the coming dynamic will involve humans supervising AI agents to protect networks against hackers using that same technology to attack.
Stamos referred to it as “agent-to-agent war,” with humans on the sidelines giving advice.
Wendy Whitmore, of cybersecurity firm Palo Alto Networks, expects “some sort of catastrophic attack” this year connected to AI agent capabilities.
“The thing that keeps me up at night is that we’re staring down the barrel of a massive influx of new vulnerabilities that are going to be found by AI,” said Adam Meyers of CrowdStrike.
Meyers saw embedding a tiny AI model directly into malicious code infecting networks as a natural tactic to be explored by hackers.
“The ultimate weapon would be malware that has no pre-programming,” Meyers said.
“It can do whatever you ask it to.”
Meta releases first new AI model since shaking up team
By AFP
April 8, 2026

Image: — © AFP/File JULIEN DE ROSA
Meta on Wednesday released an artificial intelligence model, Muse Spark, it touts as smarter and faster than what it offered before shaking up its Superintelligence Labs unit.
“Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up,” the tech titan said in a blog post.
Muse Spark succeeds Llama 4, released by the Silicon Valley-based firm a year ago, and will power Meta’s AI app and smart glasses along with Facebook, Instagram, WhatsApp and Messenger features.
For now, Muse Spark is only available in the United States.
The new AI model was described as being small and fast by design, capable of reasoning through complex questions in science, math and health.
It is the first in a new Muse series, with the next generation already in development.
Llama 4 lagged in the fierce AI race as heavyweight rivals from China, France, and the United States produced improved models at a rapid-fire pace.
That prompted Meta chief executive Mark Zuckerberg to overhaul its AI team, which saw the departure of its research boss Yann LeCun.

Meta chief executive Mark Zuckerberg has made achieving artificial superintelligence a high priority for the tech firm. — © AFP ANGELA WEISS
LeCun spent 12 years leading the AI lab at Meta, where Zuckerberg has made the quest for “superintelligence” a priority.
Zuckerberg embarked on a major recruitment campaign last year to acquire talent for Meta’s efforts, poaching Scale AI co-founder Alexandr Wang and putting him in charge of a newly formed unit called Superintelligence Labs.
Zuckerberg subsequently recruited executives from rivals OpenAI, Anthropic and Google – often personally and at heady costs.
In doing so, the tech tycoon broke with the company’s previous approach of prioritizing development of free, open-access AI models such as Llama.
“The future of Meta AI is rooted in the relationships and context already at the center of your life,” the company said.
“We are building toward personal superintelligence – an AI that does not just answer your questions but truly understands your world because it is built on it.”
OpenAI CEO’s California home hit by Molotov cocktail, man arrested
By AFP
April 10, 2026

The luxury home of OpenAI CEO Sam Altman was hit by a Molotov cocktail before dawn on Friday - Copyright GETTY IMAGES NORTH AMERICA/AFP Anna Moneymaker
The luxury San Francisco home of OpenAI boss Sam Altman was hit by a Molotov cocktail on Friday, the company said, as police announced the arrest of a suspect.
No one was injured in the incident, and the firm behind the popular ChatGPT artificial intelligence chatbot would not confirm if the CEO was home at the time.
The motive for the attack, and for subsequent threats to set fire to OpenAI’s San Francisco headquarters, apparently made by the same man, was not immediately known.
But they come as Altman’s profile has risen with the increasing use of AI both in the workplace and in the US military, amid fears it could massively disrupt employment patterns and cause irreversible societal changes.
Police in San Francisco, a hub for tech development, said they had responded after reports that someone had tried to set fire to a gate at the sprawling home.
A statement from the San Francisco Police Department said officers were dispatched to the home just after 4:00 am (1100 GMT).
“At the scene, officers learned that an unknown male subject threw an incendiary destructive device at a home, causing a fire to an exterior gate. The suspect then fled on foot,” SFPD said.
A short time later they were called to the firm’s offices where a man was making threats.
“When officers arrived on scene, they recognized the male to be the same suspect from the earlier incident and immediately detained him,” the statement said.
The man they arrested has not been named, but police said he was 20 years old.
A spokesman for OpenAI confirmed the attack on the chief executive’s residence and the threats to the San Francisco headquarters.
“We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe,” the spokesman told AFP. “The individual is in custody, and we’re assisting law enforcement with their investigation.”
Altman and OpenAI have become targets for people protesting AI as a threat to humans.
Protesters have been particularly troubled by OpenAI’s decision to provide its technology to the US Department of Defense.
OpenAI last month said it was valued at $852 billion after a funding round that raised $122 billion.
The figure reflects the surging costs of computing power and came amid lingering questions about whether OpenAI and rival companies can generate sufficient revenue to cover expenses.
ChatGPT claims the top position in consumer AI, with more than 900 million weekly active users and some 50 million subscribers.
Use of ChatGPT’s online search engine has tripled over the course of a year, according to OpenAI.