Showing posts sorted by date for query HONG KONG.

Monday, May 04, 2026

 

Sweden Boards and Detains Falsely-Flagged Tanker off Trelleborg

Courtesy Swedish Coast Guard

Published May 3, 2026 3:10 PM by The Maritime Executive

 

Sweden's coast guard has boarded and detained a Russia-facing shadow fleet tanker in the Baltic, the latest in a growing campaign of Nordic/European interventions to impede a vast fleet of tankers that evade Western sanctions and operate outside of international regulatory structures. 

The vessel in question is a sanctioned tanker with IMO number 9430272, currently operating under the name Jin Hui (ex names Yi Bao, Celcius Roskilde). While she claims Syrian registration, her Equasis record indicates that this is false, as is increasingly common in the shadow fleet. 

She was reported sold to undisclosed interests in December 2025. The former owner shared a Hong Kong letterbox address with a sanctioned petchem trading company linked to Iran (among many other firms). 

As of Sunday, Jin Hui was anchored in the Baltic off Trelleborg, Sweden. 

"The vessel is suspected of being part of the Russian shadow fleet and for sailing under false flag. There are also concerns regarding insufficient seaworthiness and insurance," said Swedish Prime Minister Ulf Kristersson in a statement. "The vessel is included on the sanctions lists of the EU, the UK and Ukraine. We protect our waters."

Jin Hui has a recent history of PSC issues. At her last inspection, conducted last month in Turkey, port state control boarded Jin Hui and identified eight deficiencies, including issues with her oil record book, fire alarms, fire doors, auxiliary engine and her voyage planning. 

AIS data shows that Jin Hui has recently visited a wide diversity of jurisdictions, from South America to Russia to the Mediterranean and India, including extensive trading between and among regional ports. The pattern indicates a high likelihood of commercial voyages serving charterers outside Russia, despite sanctions. 

ANALYSIS


'We cannot give up': Hong Kong journalists navigate fear, surveillance and shrinking space



Hong Kong’s government on Friday slammed foreign media and press freedom groups, rejecting claims of a crackdown on press freedom as “slander” after jailed media tycoon Jimmy Lai was awarded a free speech prize in Germany. Press freedom in the city has sharply declined since a 2020 National Security Law clamped down on dissent. Journalists face visa denials, surveillance, self-censorship and legal threats, while independent outlets struggle to survive.



Issued on: 03/05/2026
FRANCE24
By: Natasha LI


An Apple Daily employee works in the printing room after the last edition of the newspaper is printed in Hong Kong early on June 24, 2021. © Anthony Wallace, AFP

In a defiant statement slamming foreign media on Friday, Hong Kong accused an “anti-China organisation” of attempts to “sugarcoat” the “criminal acts” of imprisoned media tycoon Jimmy Lai, who on Thursday was awarded a Freedom of Speech Prize by Germany’s Deutsche Welle.

In the same statement, authorities dismissed Reporters Without Borders’ latest Press Freedom Index as biased, saying it was being used to “smear” Hong Kong. The index now ranks the city 140th globally, down from 18th when it was first published in 2002.

Once widely seen as a beacon of free expression in Asia, Hong Kong has increasingly become a place where journalism itself can carry legal risk.

And that reality is no longer limited to local reporters.


Earlier this week, RSF revealed that a French journalist had been denied entry to Hong Kong, detained at the airport and deported back to Paris – the first publicly documented case of its kind involving a foreign correspondent.
Detained and deported

For Antoine Védeilhé, a former FRANCE 24 China correspondent now working on a documentary for France Télévisions, the case marked a turning point.

He has reported across Asia for nearly a decade and has covered Hong Kong extensively since 2016, including the 2019 pro-democracy protests. He says that until recently, entering the city had never been a problem.

That changed in November 2025.

“At passport control, they stopped me immediately,” he said. “They took me into an immigration room, kept me there for three hours, interrogated me, searched all my belongings, and carried out a full body search.”


He was then escorted directly to a flight back to Paris.

“They gave no explanation and no documents. Nothing,” he said. “Only that it was for immigration reasons.”

Later, through sources in Hong Kong’s immigration department, he was told he had been flagged as a “foreign agent” – a label commonly used in cases linked to national security concerns.

The following day, his employer received an anonymous email warning against broadcasting his documentary, “Hong Kong ne répond plus” (Hong Kong Is No Longer Answering), which examines the city’s political transformation under Beijing’s tightening control.

“It was clearly meant to intimidate us,” Védeilhé said. “They were suggesting that even in France, the National Security Law could apply.”

His cameraman, who was allowed entry, was followed by plainclothes officers from the moment he arrived at his hotel.


“They didn’t try to hide it,” he said. “It was exactly like mainland China.”

Fearing for the safety of sources, the team cancelled all planned interviews.

“This is how reporting stops,” he said. “People won’t meet you if it puts them at risk.”
Visa weaponisation

“What Antoine was subjected to was unprecedented, even among foreign correspondents,” said Aleksandra Bielakowska, advocacy manager for Asia-Pacific at RSF.

While at least 13 journalists have been denied visas, refused renewals or barred from entering Hong Kong in recent years, she says this case marks an escalation.

“This is really an intensification because it is the first time we see this scale of transnational repression reaching foreign journalists in Europe,” she said.

Bielakowska said the evidence strongly suggests the operation was coordinated by national security police.

“They had a file on him, with his photo, identifying him as an agent. They knew his sources, they knew who he was working with, and his contacts were also harassed,” she said.

She added that Hong Kong is increasingly adopting the same pressure tactics long used by Beijing against foreign media – visa refusals, surveillance and intimidation.

“China has used visa weaponisation for years,” she said. “But what is happening now in Hong Kong is different because it is no longer just about refusing access. It is about creating fear everywhere.”

She says the message to journalists is clear: reporting critically on Hong Kong can carry consequences even outside the city.
‘Criminalisation of journalism itself’

Hong Kong’s press freedom crisis accelerated after Beijing imposed the sweeping National Security Law in June 2020, following the mass pro-democracy protests of 2019.


For many journalists, the decisive moment came two months later, when police raided Apple Daily and arrested its founder Jimmy Lai.

“That was the message,” Bielakowska said. “If you keep reporting, you will face the same charges.”

Since then, independent media outlets including Apple Daily, Stand News and Citizen News have shut down, while dozens of journalists have been arrested, prosecuted or forced into exile.

Earlier this year, Hong Kong courts handed Lai what was described as the harshest sentence ever imposed on a journalist under national security charges – 20 years – effectively condemning the 78-year-old publisher, imprisoned since 2020, to spend the rest of his life behind bars.

According to the Committee to Protect Journalists, at least eight journalists are currently imprisoned in Hong Kong.

Lai was awarded Deutsche Welle’s Freedom of Speech Award in absentia on Thursday.

For Bielakowska, the trend is unmistakable.

“Press freedom in Hong Kong is facing systemic collapse,” she said. “This is the criminalisation of journalism itself.”
Invisible red lines

For the journalists who remain, the challenge is often less direct censorship than navigating an invisible red line – the unclear boundaries of what authorities will tolerate.

“There are red lines that cannot be crossed,” Bielakowska said. “But no one tells you exactly where they are.”

Unlike mainland China, where independent journalism has largely been pushed underground, Hong Kong still has a small number of independent outlets trying to survive.

But they work in constant uncertainty.

Mak Yin-ting, an RFI correspondent and former head of the Hong Kong Journalists Association, says authorities rarely need to ban stories outright.

Instead, ambiguity itself becomes the tool.

“If they don’t like what you’re writing, they can accuse you of sedition,” she said.

Under Article 23, Hong Kong’s domestic national security legislation, sedition charges can carry up to 10 years in prison for publishing false or misleading statements – wording journalists say remains dangerously vague.

“It’s basically up to interpretation,” Mak said. “They are importing the same methods of censorship used in mainland China.”

Self-censorship has become routine.

Many outlets now avoid politically sensitive commentary altogether. Some no longer seek outside analysis on controversial issues, while others simply reproduce government statements word for word without presenting the original facts being disputed.

“That is already part of self-censorship,” Mak said. “You write (only) the government’s statements, but not what actually happened.”

Even accessing basic information has become harder.

“Government data is becoming very hard to find,” she said. “They are basically deleting everything that might be sensitive.”

Public databases and official reports that were once available online for more than a decade are now removed after one or two years, making investigative reporting significantly harder.

Private archives are also disappearing, with some major outlets deleting years of previous reporting.

“It’s not only about fear of arrest,” Bielakowska added. “Even gathering information becomes harder because sources themselves are afraid to speak.”

Many officials, academics and civil servants no longer agree to interviews, even on conditions of anonymity.

“The authorities have created such an atmosphere of fear that many first-hand sources simply don’t want to go on record anymore,” she said.
‘They can be next’

Despite the pressure, some journalists continue reporting – fully aware of the risks.

“They know that at any time, they can be next,” said Bielakowska.

To protect junior reporters and freelancers, some editors choose to sign all articles under their own names.

“The editor-in-chief becomes the face of the media,” Bielakowska said. “If arrests happen, it becomes the sacrifice of one person rather than the whole newsroom.”

She points to the Hong Kong Journalists Association – one of the few remaining independent press organisations still operating in the city – as proof that resistance remains.

“It’s not only courage, but commitment to press freedom,” she said.

Veteran journalists who remember a freer Hong Kong continue to hold the line.

“It was top of the top,” Bielakowska said of Hong Kong’s press corps in the early 2000s. “Some of the best investigative journalists in the world were there.”



That memory still drives many reporters today.

“They remember what Hong Kong was. That is why they still have the strength to continue.”

For Tom Grundy, founder and editor-in-chief of Hong Kong Free Press, the pressure has become part of daily newsroom life.

“Since the onset of the security law, the city has seen the harassment of journalists, over 60 civil society groups disappear, newsrooms raided and journalists jailed.”

His own outlet has not been spared.

“In short, HKFP has unfortunately suffered harassment, intimidation and bureaucratic scrutiny, and it has escalated over recent years,” he said.

Still, he insists there remains a narrow space for independent journalism. “The space gets tighter and tighter, but it’s not quite mainland China.”

“We can still show up to press conferences and ask tough questions to officials,” he said. “It’s better to be in than out, and we can still maintain accuracy, nuance and understanding by being in the city with Hong Kongers.”

But the limits are increasingly visible.

“Nevertheless, it’s harder to get people to speak from all parts of the political spectrum,” he said. “For features, opinion pieces – these kinds of things – it’s very, very tough.”

For many, simply continuing to publish has become an act of resistance.

“We try to keep calm and carry on and navigate the red lines,” Grundy said.
‘We cannot give up’

For press freedom advocates, the greatest danger is not only repression inside Hong Kong, but the growing sense abroad that the battle has already been lost.

“There is this thinking among policymakers in Europe and the US that Hong Kong is lost – that there is nothing left to do,” Bielakowska said. “That is a mistake.”

She warns that treating the city’s clampdown on freedoms as inevitable only strengthens Beijing’s strategy.

“There should be no normalisation.”

But sustaining that work depends on external support – from visa pathways and legal protection to funding for independent journalism.

Neighbouring countries have become part of this fragile support network. Taiwan, in particular, has emerged as an important refuge for journalists and activists fleeing pressure from Hong Kong and mainland China, offering a place where some have been able to rebuild their work in relative safety.

Bielakowska describes the island, which ranks 28th out of 180 countries in RSF's Press Freedom Index, as one of the few remaining spaces in the region where press freedom is still broadly protected. South Korea ranks 47th while Japan ranks 62nd.

Yet she says support remains inconsistent and largely ad hoc. While some individuals have been quietly assisted or allowed to settle, there is still no structured system for supporting exiled media workers.

And even where journalists do find safety abroad, she warns the pressure does not necessarily end. Democracies, she says, must take transnational repression more seriously.

“What happened to Antoine shows this is no longer only a Hong Kong issue,” she said.

For Mak, the fight for press freedom has become a simple question of endurance.

“It is like tug-of-war,” she said. “If one side abandons, you lose everything.”

As long as independent journalists remain – in Hong Kong or in exile – she says silence is not an option.

“We cannot give up.”

Monday, April 27, 2026

Rule by Secrecy – How Covert Regime Change Shaped Our World

In Covert Regime Change, Lindsey A. O’Rourke reconstructs the hidden architecture of US power and shows how Western democracies repeatedly destroyed foreign political orders.

by Michael Holmes | Apr 27, 2026

The modern international order rests on a contradiction rarely examined in full daylight. Western states present themselves as guardians of international rules, democracy, and self-determination, yet the historical record of their behavior abroad tells a different story — one written not in treaties or speeches, but in classified cables, deniable operations, and shattered political systems. Covert Regime Change, first published in 2018, matters because it documents, with unusual rigor, how this contradiction became a governing method. Lindsey A. O’Rourke, Associate Professor at Boston College, does not ask whether covert intervention occasionally went wrong. She demonstrates that it became a routine instrument of statecraft, one whose predictable consequences were political collapse, mass violence, and long-term instability.

The book’s starting point is empirical, not rhetorical. O’Rourke assembles the most comprehensive dataset to date of U.S.-backed regime change attempts during the Cold War, identifying seventy cases between 1947 and 1989. Sixty-four were covert. Only six were overt. This imbalance is not incidental. It reveals a strategic preference for secrecy as a means of exercising power without democratic constraint. Covert regime change allowed policymakers to intervene repeatedly while insulating themselves from public accountability.

O’Rourke also dismantles the notion that covert regime change primarily served democratic ends. Statistically, covert interventions overwhelmingly produced authoritarian outcomes. Where democratic transitions occurred – and they are hard to find – they were more often associated with overt interventions, where public scrutiny imposed limits. Secrecy correlated with repression, not reform. O’Rourke’s findings dispel the myth that the US fought for democracy during the Cold War: “The United States supported authoritarian forces in forty-four out of sixty-four covert regime changes, including at least six operations that sought to replace liberal democratic governments with illiberal authoritarian regimes. Yet, Washington’s proclivity for installing authoritarian regimes was also not absolute. In one-eighth of its covert missions and one-half of its overt interventions, Washington encouraged a democratic transformation in an authoritarian state.” In other words: Washington supported whatever regime or rebel group served its interests — and showed little concern for democracy.

What makes the book so unsettling is that it refuses to stop at the moment of intervention. O’Rourke tracks what followed. Using comparative statistical analysis, she shows that states targeted by covert regime change were significantly more likely to experience civil war and mass killings. She finds that “states targeted for covert regime change were 6.7 times more likely to experience a Militarized Interstate Dispute with the United States in the ten years following intervention.” US regime change operations also steeply increased episodes of mass killing: “States targeted in successful operations were 2.8 times more likely to experience an episode of mass killing, whereas states targeted in failed covert missions were 3.7 times more likely.”

Vietnam demonstrates how covert regime change could deepen rather than prevent war. Before large-scale U.S. troop deployments, Washington pursued covert efforts to shape South Vietnam’s leadership. O’Rourke reconstructs the U.S. role in facilitating the 1963 coup against President Ngo Dinh Diem. Rather than stabilizing the regime, the coup fragmented power and intensified dependence on U.S. military support. What began as covert political manipulation ended in a war that killed millions of Vietnamese and devastated the region.

In the Western Hemisphere, the United States utilized hegemonic operations to enforce a brutal regional conformity, often at the direct expense of democratic institutions. The CIA-backed overthrow of Jacobo Árbenz in 1954 destroyed Guatemala’s young democracy. Guatemala’s subsequent trajectory: decades of military rule, a civil war lasting more than thirty years, and the killing of roughly 200,000 people, the majority civilians. Indigenous communities were systematically targeted.

The case of the Dominican Republic illustrates the cold transition from secret meddling to open violence. The US first backed Rafael Trujillo’s dictatorship. Following the 1961 assassination of Trujillo — an operation in which the CIA provided the weapons — the country attempted a fragile democratic opening. When the reformist Juan Bosch won the presidency in 1962, his refusal to launch a McCarthyite purge of domestic leftists led Washington to view him as a “weak link” in the regional defense against communism. After Bosch was ousted in a military coup, a popular uprising in 1965 sought to restore the democratic constitution. Fearing a “second Cuba,” the Johnson administration launched a massive overt invasion to crush the rebellion and install a more compliant regime. The empirical record here is clear: for American planners, the survival of a pro-Washington hierarchy was far more important than the survival of a Caribbean democracy.

One of the book’s most analytically important findings concerns repetition. States subjected to one covert regime change attempt were far more likely to experience subsequent interventions. Covert action did not resolve instability; it institutionalized it. Political systems weakened by external manipulation became perpetual sites of interference.

The moral failure documented in Covert Regime Change is therefore not accidental. It is structural. Secrecy enabled policymakers to externalize violence, displace responsibility, and treat foreign societies as experimental terrain. Civil wars prolonged, civilians killed, and political futures destroyed were foreseeable consequences of deliberate choices. 

Proxy Wars and Moral Evasion

One of the most revealing dimensions of Covert Regime Change is the attention it pays to proxy warfare. Covert intervention rarely meant the United States acted alone. It meant empowering others to act violently on its behalf, often with full awareness of who those actors were and what they represented.

The rollback operations in Eastern Europe during the early Cold War provide one of the clearest illustrations. O’Rourke documents U.S.-backed covert efforts to destabilize Soviet-aligned regimes in countries such as Albania, Romania and Ukraine through the infiltration of exile groups and paramilitary networks. These operations were conceived as low-risk alternatives to direct confrontation with the Soviet Union. In practice, they relied heavily on émigré militias whose ideological and historical backgrounds were deeply compromised.

Many of these groups included former collaborators with Nazi Germany and fascists, implicated in wartime atrocities. This was not incidental. They were selected precisely because of their militant anti-communism and organizational cohesion. O’Rourke shows that U.S. officials were aware of these backgrounds and proceeded regardless. The operations themselves were militarily ineffective. Infiltrators were frequently captured or killed soon after insertion. What they did achieve was the reinforcement of authoritarian control. The existence of covert Western-backed networks confirmed Soviet narratives of external subversion and justified intensified repression across Eastern Europe.

Afghanistan represents the most consequential case of proxy warfare in the book. During the Soviet occupation, the United States conducted one of its largest and most expensive covert operations, channeling billions of dollars in weapons and support to Afghan resistance fighters. These forces were often described in sanitized terms, but O’Rourke is clear about their ideological character. Most were brutal Islamist extremists, organized around rigidly authoritarian visions of society.

The objective of the operation was narrowly defined: bleed the Soviet Union and force its withdrawal. On those terms, it succeeded. What followed, however, was political collapse. After the Soviets left, U.S. engagement rapidly diminished. Afghanistan descended into civil war as rival militias turned their weapons on one another and on civilians. Out of this chaos emerged the Taliban, followed by transnational jihadist networks whose violence would reverberate globally. The intervention did not merely fail to build a viable state; it actively contributed to the conditions under which one of the most repressive regimes of the late twentieth century took power.

Western publics rarely saw the consequences of policies carried out in their name. Violence was outsourced to proxies. Responsibility was fragmented across agencies and allies. Failure could be reframed as complexity or local pathology. What Covert Regime Change ultimately makes impossible is the claim that these outcomes were unfortunate side effects of well-intentioned policies. The evidence shows that policymakers repeatedly chose secrecy over accountability, power politics over democracy, and short-term advantage over human cost. The victims were not abstractions. They were civilians caught between armed factions, dissidents silenced, and societies denied the chance to determine their own futures. 

Power Without Reckoning

By the end of Covert Regime Change, the accumulation of evidence leaves little room for comforting interpretation. It documents a system of intervention that functioned as intended — discreet, flexible, and largely insulated from domestic scrutiny — while producing outcomes that were consistently destructive for the societies it targeted. Failure abroad rarely translated into accountability at home. The result was a cycle in which intervention became easier precisely because its consequences were borne elsewhere.

The statistical findings reinforce this interpretation with striking consistency. States subjected to covert regime change were more likely to experience adverse regime transitions — coups followed by coups, fragile governments replaced by more repressive ones. Civil wars in these countries lasted longer and were harder to resolve. These were not marginal increases. They were structural shifts in political trajectory, affecting millions of lives over decades.

O’Rourke’s insistence on evidentiary discipline gives these conclusions their force. She shows how similar mechanisms produced similar outcomes under varying conditions. Whether in Latin America, Africa, Europe, or Asia, covert regime change followed a recognizable script: identify a political outcome deemed unacceptable, undermine it quietly, empower local actors willing to use force, and withdraw once immediate objectives were met. What followed — repression, civil war, or long-term instability — was treated as local failure rather than external design.

Covert Regime Change challenges the reader to reconsider how international responsibility is assigned. Violence that is indirect is no less real. Harm that is delayed is no less consequential. Political destruction carried out through intermediaries is no less deliberate.

As a work of scholarship, the book is meticulous and restrained. As a historical record, it is devastating. It reveals an era in which power was exercised without witness or accountability. The world that emerged from those decisions — fractured, militarized, and distrustful — is their legacy. The enduring lesson of Covert Regime Change is that secrecy does not merely hide violence; it makes it sustainable, allowing great powers to destroy other societies while preserving the illusion of innocence at home.

Michael Holmes is a German-American freelance journalist specializing in global conflicts and modern history. His work has appeared in Neue Zürcher Zeitung – the Swiss newspaper of record – Responsible Statecraft, Psychologie Heute, taz, Welt, and other outlets. He regularly conducts interviews for NachDenkSeiten.  He has reported on and traveled to over 70 countries, including Iraq, Iran, Palestine, Lebanon, Ukraine, Kashmir, Hong Kong, Mexico, and Uganda.  He is based in Potsdam, Germany.

Sunday, April 26, 2026



AI poses the biggest threat to service sector jobs

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 24, 2026


Image: © AFP

An April 2026 report on job automation found that Malta faces the biggest threat from AI replacing workers. With Amazon cutting 16,000 international roles to let AI handle the same tasks, a new study by Planera shows which countries have the most people working in jobs that machines will soon be able to do.


Malta has the highest automation risk in the world, with nearly half of workers holding jobs that AI can replace.

For the U.S., some 69 million American workers could lose their jobs to AI, making them the largest at-risk workforce in absolute terms. However, the worst is found elsewhere. Service-focused countries like Greece and Spain face bigger relative risks, with hospitality and retail roles most exposed to AI.

The research tracked employment across different economic sectors to find how exposed each country’s workforce is to AI automation. The report used official labour data from government sources and matched them with automation risk probabilities for each industry. These probabilities measure how likely machines are to replace human workers in sectors like hospitality, finance, retail, and professional services. Countries were ranked by how much of their workforce is doing work that AI can handle.

The top 10 countries where workers face the highest automation risk

Country       | Weighted AI Exposure Index | Total Emp (000s) | Weighted Employees at Risk (000s, Emp × Risk)
Malta         | 46.56% | 332.80     | 155.00
Canada        | 44.87% | 8,865.00   | 3,977.60
Greece        | 44.84% | 5,525.10   | 2,477.30
Cyprus        | 44.77% | 508.40     | 227.60
Luxembourg    | 43.82% | 538.90     | 236.20
Netherlands   | 43.67% | 10,890.00  | 4,755.20
United States | 43.63% | 158,286.00 | 69,067.90
Spain         | 43.35% | 23,091.10  | 10,010.40
Belgium       | 43.28% | 5,575.70   | 2,413.00
Italy         | 42.22% | 28,746.10  | 12,136.90
As indicated by the table, Malta faces the biggest job automation risk in the world. Nearly half the workforce here holds jobs that AI can replace, putting 155K people at risk of displacement. The island economy depends on admin work, hospitality, and professional services, all sectors where automation is easiest. Malta’s small size means these workers can’t easily move to safer sectors either, and with 1 in 2 jobs vulnerable, the country faces greater threats than larger economies.
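The ranking arithmetic the report describes is simple enough to sketch: weight each sector's employment by its automation-risk probability and sum. The snippet below is a minimal illustration, not Planera's actual code; the toy sector figures are assumptions, while the Malta cross-check uses the aggregates published in the table.

```python
def weighted_exposure(sectors):
    """Employment-weighted automation risk.

    sectors: list of (employment_in_thousands, automation_risk) pairs.
    Returns (exposure_index, total_employment, employees_at_risk).
    """
    total_emp = sum(emp for emp, _ in sectors)
    at_risk = sum(emp * risk for emp, risk in sectors)
    return at_risk / total_emp, total_emp, at_risk

# Toy example (illustrative figures, not the report's data):
# hospitality at 72% risk, retail at 51%, finance at 51%.
index, emp, at_risk = weighted_exposure([(730.0, 0.72), (880.0, 0.51), (200.0, 0.51)])

# Cross-check against Malta's published aggregates: 332.80K employed at a
# 46.56% index implies roughly 155K weighted employees at risk.
print(332.80 * 0.4656)  # ~154.95, matching the table's 155.00
```

The index is just a weighted average, so a country dominated by one or two high-risk sectors (as the report says of Malta) lands near those sectors' risk probabilities.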

Canada wobbles

Canada comes second with close to 4 million workers employed in roles that machines can handle. That’s about 45% of the local workforce. Information technology and hospitality drive the risk here, with 75% of Canadian tech jobs predicted to be automated soon, while food service faces the same risks at 72%. Unlike Malta, Canada has options to retrain workers, but the sheer number of people affected means displacement will still hit hard across provinces and cities.

Greece matches Canada's 45% automation risk, with 2.5 million workers holding jobs that AI can replace. The Greek economy depends on tourism and services, sectors where automation is advancing fastest. Accommodation and food services employ 730K people at 72% replacement risk, while wholesale and retail trade adds another 880K workers at 51% risk. The country's ongoing economic struggles make retraining difficult as well, so many Greeks face automation threats without safety nets.

Cyprus ranks fourth with 45% of its workforce exposed to automation. About 228K Cypriots work in roles that machines can handle, a large share for an island with just over half a million employed. Like Malta and Greece, Cyprus built its economy on tourism and professional services. Legal, accounting, and scientific jobs here face 70% automation risk, while hospitality is at 72%. The island’s geographic isolation makes job mobility harder, so workers who lose positions to AI have fewer options than people in bigger countries.

Luxembourg rounds out the top five as automation threatens 236K jobs. Lawyers and accountants here face worse odds than bankers, with 70% of their work easily replaceable compared to 51% in finance. Being one of the world’s richest countries means Luxembourg can retrain its workforce better than others. However, that will still require nearly half of all workers to start over in new careers.
Service sector risks

The data shows service jobs are more at risk. Manufacturing was already automated decades ago, so the workers left are doing tasks robots can’t handle yet. But admin assistants, retail clerks, and hospitality staff are all doing repetitive work that AI can learn quickly.

AI Deletes Routine White Collar Jobs – OpEd


A new study predicts that 86% of AI unemployment will be women. And not just any women: rich Democrat women. 

Tragically, AI is coming for the notorious Karen who’s overpaid for what she produces but still needs to see the manager.

The reason: the Industrial Revolution took jobs from people who work with their hands, routine physical work. AI is taking routine paperwork instead, from people who forward emails, schedule meetings, and sit on diversity committees.

Last week, think tank Brookings issued a new study estimating 37 million American workers are “highly exposed” to AI replacement. 

Brookings thinks that most of those will easily transition into a different role because they have broad skill sets or they’re smart.

For example, software and finance workers are on the firing line for AI replacement. But they’ll adapt as automation creates new jobs, since it raises incomes and deepens production: products get better.

But Brookings estimates there’s about 6 million of those who will not adapt, primarily in clerical and administrative roles.

What’s interesting is the distribution: Brookings estimates 86% are women. And they work at big organizations with lots of routine paperwork — colleges, hospitals, big companies, government. 

In healthcare, for example, hundreds of thousands of workers never see a patient — they see paperwork. 

It’s worse with federal workers, who are also mostly women. 

Going by the fact that nothing got worse when DOGE fired 300,000 of them, many are useless. Just imagine how useless they’ll be when AI can do their jobs for free.

A lot of these women may be low skill, but they’re high education — and high income. Which they’re about to lose. Which won’t make them happy.

In a recent CNBC interview, Palantir CEO Alex Karp laid it out: “If you’re going to disrupt the economic and political power of highly-educated female voters who vote Democrat while increasing the power of vocationally trained working class males…and you think that’s going to work out politically, you’re in an insane asylum.”

So who are these soon to be jobless Karens? 

A study by AI company Anthropic — of Claude fame — thinks AI could ultimately replace over 90% of tasks in administrative, clerical, and middle management. And over 80% in arts and media and law firms — don’t let your kids go to Hollywood. Or become lawyers.

This sounds dire, but remember the adaptation. Software, for example, has been automating for 50 years; punch-card feeders are long gone. During the dot-com era, website developers were supposed to be obsolete any day now, and decades later they’re still here. Because the work got more complicated even as the basic stuff automated.

So the vast majority of that 80 to 90% will reskill, including software, finance, marketing and managers.

The problems are those clerks and admins, secretaries, sales assistants, customer service, payroll…HR. Heaven help you if you’re a diversity consultant who doesn’t know how to flip a burger.

I’ve argued that AI will be the opposite of the Industrial Revolution: Instead of replacing routine physical jobs, it replaces routine white collar jobs. With robots coming decades later since you need one AI for 8 billion people but you need 5 robots per McDonald’s.

This creates a generation-long blue collar boom as automation itself makes us rich, which raises demand — and pay — for blue-collar jobs. But it will absolutely redistribute income — and power — from high-income, high-education, largely female white collar workers to the plebs.

This is terrifying for Karen: she already makes less than the plumber, and soon she’ll make less than the Uber guy.


This article was published at Brownstone Institute and republished from the author’s Substack.

AI firms flex lobbying muscle on both sides of the Atlantic


By AFP
April 25, 2026




Daxia ROJAS

AI developers are ramping up efforts to win over the hearts and minds of officials in Europe and the United States, hoping to sway governments as they weigh high-stakes regulatory frameworks for the ever more powerful technology.

Flush with cash, the firms are also wooing the general public, insisting that artificial intelligence will be a force for good — and not a destroyer of jobs or an existential threat for humanity.

ChatGPT maker OpenAI unveiled this month a 13-page “Industrial Policy for the Intelligence Age” that calls for new taxation and expanded safety nets to ensure society withstands the arrival of superintelligent systems.

It has even bought TBPN, a technology-focused talk show, to help shape the narrative.

But the policy document also came just days after a public backlash forced the company to halt plans for a sexually explicit chatbot.

OpenAI has also faced legal challenges from families of teenagers who say ChatGPT caused harm and even suicide among young people, prompting the company to introduce an age-verification system.

“This is a turning point” for the industry, and companies “are spending a fortune to try to get favourable measures passed in their patch”, said Alexandra Iteanu, a Paris-based lawyer specialising in digital law.

– Politicians in pocket? –

The AI industry has transformed Washington lobbying at extraordinary speed, with more than 3,500 federal lobbyists — one-fourth of the total — working on AI issues last year, a 170 percent increase over three years, according to Public Citizen, a consumer advocacy group.

The established giants like Meta, Google and Microsoft still dominate spending, but AI start-ups like OpenAI and Anthropic have rapidly built out their Washington presence, hiring elite firms and expanding in-house policy shops.

Anthropic, for example, has focussed its message on promoting AI safety and tighter regulation.

But OpenAI is also actively pushing the industry’s top legislative priority of preventing US states from passing their own laws governing AI, an effort that has twice failed in Congress but remains very much alive, backed by a sympathetic White House.

The influence campaign has moved into electoral politics, with a pro-AI campaign called Leading the Future assembling a $100 million war chest to back AI-friendly candidates in the 2026 midterms.

President Donald Trump, a fierce opponent of AI regulation, counts OpenAI’s cofounder Sam Altman and its president Greg Brockman among his biggest donors.

European regulators are also feeling the heat, with the French start-up Mistral recently presenting in Brussels a 22-point plan to accelerate AI development on the Continent.

Lobbying outlays by the tech industry have surged 55 percent since 2021 to reach 151 million euros ($177 million) last year, according to a study by the Corporate Europe Observatory and LobbyControl, a nonprofit.

– ‘Concentration of wealth’ –

For Margarida Silva of the Centre for Research on Multinational Corporations (SOMO, a Dutch nonprofit), AI firms are working from the playbook of the oil and tobacco industries, but with one major difference.

“They’re just the wealthiest companies in the world, so they have a lot of money that they can use to put towards lobbying,” Silva said.

“When you have such intense corporate lobbying that is based on having such a concentration of wealth, and that is standing in the way of public interest regulations… we are really talking about a democratic threat,” she added.

Many executives also cultivate friendships with elected officials to have “privileged channels” with public administrations, said Charles Thibout, a political science professor at Sciences Po Strasbourg in eastern France.

He noted the phalanx of tech moguls at Trump’s inauguration last year, and the close ties between Mistral’s cofounder Arthur Mensch and French President Emmanuel Macron.

Political leaders are often keen to be seen with AI’s top names, Thibout added, if only to help attract some of the industry’s huge development spending to their states or regions.

But “lawmakers are not fooled”, said Iteanu, as enthusiasm for AI has not dispelled public wariness about its potential consequences.

Despite the colossal spending in the United States, for example, opinion polls regularly show that Americans remain highly sceptical about the technology’s benefits, and more worried that it spells doom for millions of jobs.



China’s DeepSeek releases long-awaited new AI model

By AFP
April 24, 2026


ChatGPT maker OpenAI's initiative to help countries build infrastructures for 'sovereign' artificial intelligence systems comes as it faces competition from China-based DeepSeek - Copyright AFP NICHOLAS KAMM

Chinese startup DeepSeek released a new artificial intelligence model Friday, more than a year after it stunned the world with a low-cost reasoning model that matched the capabilities of US rivals.

DeepSeek-V4 “features an ultra-long context of one million words,” the company said in a statement on social media platform WeChat, hailing it as “cost-effective” in a separate announcement on X.

The announcement came as Meta said it planned to cut a tenth of its staff as it looks for productivity gains from the rest of the workforce while investing heavily in artificial intelligence. Reports said Microsoft was also looking to trim its ranks.

Context length determines how much input a model is able to absorb to help it complete tasks. DeepSeek-V4, the company said, “(achieves) leadership in both domestic and open-source fields across agent capabilities, world knowledge, and reasoning performance”.

A “preview version” of the open source model is now available, the company said.

DeepSeek-V4 is released as two versions, DeepSeek-V4-Pro and DeepSeek-V4-Flash, with the latter being “a more efficient and economical choice” because it has smaller parameters.

V4-Pro has 1.6 trillion parameters while V4-Flash has 284 billion; parameter counts help determine a model’s decision-making ability.

The model has also been “optimised” for popular AI Agent products such as Claude Code, OpenClaw, OpenCode and CodeBuddy, the statement said.

“In world knowledge benchmarks, DeepSeek-V4-Pro significantly leads other open-source models and is only slightly outperformed by the top-tier closed-source model, (Google’s) Gemini-Pro-3.1,” the statement added.

Hangzhou-based DeepSeek burst onto the scene in January last year with a generative AI chatbot, powered by its R1 reasoning model, that upended assumptions of US dominance in the strategic sector.

This so-called “DeepSeek shock” sparked a sell-off of AI-related shares and a reckoning on business strategy in what was also described as a “Sputnik moment” for the industry.

The chatbot performed at a similar level to ChatGPT and other top American offerings, but the company said it had taken significantly less computing power to develop.

However, its sudden popularity raised questions over data privacy and censorship, with the chatbot often refusing to answer questions on sensitive topics such as the 1989 Tiananmen crackdown.

At home, DeepSeek’s AI tools have been widely adopted by Chinese municipalities and healthcare institutions as well as the financial sector and other businesses.

This has been partly driven by DeepSeek’s decision to make its systems open source, with their inner workings public — in contrast to the proprietary models sold by OpenAI and other Western rivals.

“China-made large AI models spearheaded the development of the global open-source AI ecosystem,” Chinese Premier Li Qiang told an annual gathering of China’s top decision-makers last month.

The AI race has intensified the rivalry between China and the United States, and the White House on Thursday accused Chinese entities of a massive effort to steal artificial intelligence technology.

“The US has evidence that foreign entities, primarily in China, are running industrial-scale distillation campaigns to steal American AI,” science and technology chief Michael Kratsios said in a post on X.

“We will be taking action to protect American innovation.”

Five things to know about Chinese AI startup DeepSeek


By AFP
April 24, 2026


Photo illustration shows the DeepSeek app on a mobile phone in Beijing - Copyright AFP/File GREG BAKER


Luna LIN

As DeepSeek releases its first major new artificial intelligence model in over a year — DeepSeek-V4 — here are five things to know about the Chinese startup:



– ‘Sputnik moment’ –

Founded by Liang Wenfeng in the eastern Chinese tech hub Hangzhou, DeepSeek started life in 2023 as a side project of Liang’s data-driven hedge fund that had access to a cache of powerful AI processors made by US chip giant Nvidia.

It shot to global attention in January 2025 with the release of its R1 deep-reasoning large language model, which sparked a US tech share sell-off.

Industry insiders were stunned by R1’s high performance — at a level similar to ChatGPT and other leading US chatbots — and DeepSeek’s claims to have developed it at a fraction of the cost.

Venture capitalist Marc Andreessen described it as a “Sputnik moment” — referencing the 1957 launch of Earth’s first artificial satellite by the Soviet Union that stunned the Western world.



– Censorship concerns –

Like other Chinese chatbots, DeepSeek’s AI tools eschew topics usually censored in the world’s second-largest economy, such as the 1989 Tiananmen crackdown.

That and data privacy concerns have led DeepSeek AI to be banned or restricted on government-issued devices in several countries, including the United States, Australia and South Korea.

However, its low cost and ease of deployment have made it a popular choice in developing countries, analysts say.

The company holds four percent of global market share for chatbots, according to web traffic analysis company Similarweb. ChatGPT dominates at 68 percent.



– Open source –

DeepSeek’s systems are open-source — meaning their inner workings are public, allowing programmers to customise parts of the software to suit their needs.

That is the same for other major Chinese AI players, including tech giant Alibaba, in contrast to the “closed” models sold by OpenAI and other Western rivals.

The Chinese government has trumpeted its lead in open-source AI technology, which it says can accelerate innovation.

“Chinese AI models are leading the way in the open-source innovation ecosystem,” National People’s Congress spokesman Lou Qinjian told policymakers this month.



– Startup boost –

The success of DeepSeek has galvanised China’s AI scene, despite hurdles posed by rivalry with the United States, and fears of a global market bubble.

Shares in two leading Chinese AI startups, Zhipu AI and MiniMax, soared on their market debuts in Hong Kong this year, and it has been a similar story for Chinese chipmakers such as MetaX.

Shi Yaqiong and her team at Beijing-based Jinqiu Capital told AFP there has been a “clear surge” in enthusiasm around Chinese AI — and competition among investors — since the DeepSeek shock.



– Chip smuggling reports –

DeepSeek’s rise has not been without controversy.

Reports, including in technology outlet The Information, say DeepSeek has been skirting a US ban on the export of top-end chips to China to train its new V4 model.

The Information said in December, citing six people with knowledge of the matter, that DeepSeek developed V4 using thousands of chips dismantled in third countries and smuggled to China.

DeepSeek did not respond to AFP’s request for comment. Nvidia also did not respond, but told The Information it had not seen any evidence of this and that “such smuggling seems farfetched”.


China’s top AI players


By AFP
April 24, 2026


Startup DeepSeek has shaken up the global AI scene with its "R1" model - Copyright AFP/File MLADEN ANTONOV

Luna Lin

China’s artificial intelligence boom is in full swing, with the release of a new large language model (LLM) by top startup DeepSeek on Friday highlighting the country’s rapid progress despite US export restrictions on advanced microchips.

Here’s a look at the companies, big and small, driving China’s AI ambitions:



– Legacy players –



Chinese internet giants Baidu, Alibaba and Tencent are racing to invest in AI, leveraging their vast existing user bases and cloud infrastructure.

Search engine provider Baidu, sometimes called China’s Google, has been a vocal proponent of the potential of AI in the country for over a decade.

Although it has recruited prominent AI researchers and its “Ernie” tool was one of the country’s first AI chatbots, Baidu’s fortunes have remained tied to its massive search and online marketing business.

Alibaba, the e-commerce behemoth behind shopping platforms like Taobao, is known for its open-source “Qwen” AI models — popular with programmers worldwide because they can be freely customised.

The Qwen chatbot mobile app had more than 200 million monthly active users in January, according to AI ranking site AICPB.

Top gaming and social media firm Tencent, which launched an AI model in 2023 and a chatbot the following year, is seen as a cautious player.

Tencent’s founder, Pony Ma, recently vowed to increase investment in AI, reportedly calling it “the only field worth investing in” in January.



– Beyond TikTok –



ByteDance, the Chinese company behind TikTok, is increasingly shifting its focus to AI as pressure on its overseas social media business intensifies.

And it is going well: Doubao, ByteDance’s AI chatbot, is the most popular of its kind in China, with over 100 million daily active users.

This year, the firm’s slick AI video generator, SeeDance 2.0, raised concerns over copyright and potential future job losses with its cinematic-looking clips created using just simple prompts.



– China’s AI hero –



Startup DeepSeek started life in 2023 as a side project of a data-driven hedge fund, but shook up the global AI scene with its “R1” model in January 2025.

DeepSeek’s low-cost, high-performance R1 chatbot challenged assumptions of US dominance in what some have called the “Sputnik moment” for AI.

Its open-source approach has galvanised the country’s AI industry and accelerated the global diffusion of Chinese models.

Its newest V4 model, released Friday, promises performance similar to leading closed-source models at lower cost, according to the company.

DeepSeek-V4 features an ultra-long context of one million tokens and 1.6 trillion parameters for the Pro version — measures that determine how much input the model can absorb and its decision-making ability.

“In world knowledge benchmarks, DeepSeek-V4-Pro significantly leads other open-source models and is only slightly outperformed by the top-tier closed-source model, (Google’s) Gemini-Pro-3.1,” DeepSeek said in a statement on Friday.



– Startup ‘tigers’ –



The startups Zhipu AI, MiniMax and Moonshot AI are nicknamed China’s “AI tigers” — challenging legacy tech giants on AI foundation model research.

Zhipu AI emerged from the prestigious Tsinghua University and was initially known for its strong focus on computing research.

The firm is a major provider of chatbot tools to Chinese businesses, and the performance of its latest “GLM-5” model impressed developer communities.

MiniMax targets the consumer market with its multimedia tools, from AI companions to video generators.

Both Zhipu and MiniMax saw their stock prices soar when they went public in Hong Kong in January, but both have also faced challenges.

A year ago, Washington put Zhipu on its export control blacklist over national security concerns, while Disney and other US entertainment outfits are suing MiniMax for copyright infringement.

Moonshot AI’s Chinese name, Yue Zhi Anmian, pays tribute to Pink Floyd’s album “The Dark Side of the Moon”, reflecting the rock music passion of its co-founder Yang Zhilin.

Its latest offering, “Kimi K2.5”, is one of the most popular AI models on developer platform OpenRouter.

Kimi K2.5’s success is reflected in the company’s revenues: Moonshot AI reportedly matched its full-year 2025 revenue within weeks of the model’s launch.



OpenAI CEO apologizes to Canadian town for not reporting mass shooter


By AFP
April 24, 2026


A vigil after a mass shooter killed eight people in Tumbler Ridge, BC, 
Canada, in February - Copyright AFP/File Paige Taylor White

OpenAI’s CEO Sam Altman has apologized to a Canadian town devastated by a February mass shooting, saying he was “deeply sorry” the company did not notify police about the killer’s troubling ChatGPT account.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI said it did not inform police because nothing pointed toward an imminent attack.

Canadian officials condemned OpenAI’s handling of the case and summoned company leaders to Ottawa to explain its security protocols.

The family of a girl who was shot and gravely wounded at the school is suing the US tech giant for negligence.

In a letter Thursday addressed to the community of Tumbler Ridge, published Friday by the local news site Tumbler RidgeLines, Altman said “no one should ever have to endure a tragedy like this.”

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote.

“While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

Van Rootselaar killed her mother and brother at the family’s home before heading to the local secondary school, where she shot dead five children and a teacher.

She died of a self-inflicted gunshot wound after police entered the building.



Canada’s Cohere buys Germany’s Aleph Alpha to take on US AI giants

By Chris Hogg
DIGITAL JOURNAL
April 24, 2026

File photo: Aidan Gomez, co-founder and CEO of Cohere, speaks at Toronto Tech Week. - Photo courtesy Toronto Tech Week

Two of the most prominent AI companies outside the United States are joining forces.

Cohere, the Toronto-based enterprise AI firm founded in 2019, is acquiring Aleph Alpha, a German company founded the same year and once positioned as Germany’s answer to OpenAI.


The deal, endorsed by the Canadian and German governments, was announced Friday in Berlin and aims to give enterprise and public sector customers a credible alternative to American AI dominance.


Cohere focused on building enterprise AI tools for businesses and governments, winning federal investment, government contracts, and commercial partnerships with companies like Bell and RBC along the way.

Aleph Alpha has taken a different path. After failing to keep pace with OpenAI and Anthropic on foundation model development, it abandoned that race and pivoted to helping governments and enterprises deploy AI they could control.

The combination brings those two stories together into a single company pitched as a transatlantic answer to Silicon Valley AI giants.

Financial terms of the acquisition were not disclosed by the companies.

German business daily Handelsblatt, which first reported the deal, valued the combined entity at roughly $20 billion, citing sources in government and industry.

Germany’s Digital Minister Karsten Wildberger announced the transaction at a press conference in Berlin on Friday, joined by Canada’s Minister of AI and Digital Innovation Evan Solomon, Cohere CEO Aidan Gomez, Schwarz Digits chief Rolf Schumann and Aleph Alpha co-founder Samuel Weinbach.

German Chancellor Friedrich Merz approved the deal, according to Handelsblatt.
The deal

Cohere will retain its name and operate dual headquarters in Canada and Germany, with Heidelberg becoming a second global headquarters.

Cohere shareholders will hold about 90% of the combined entity, with Aleph Alpha shareholders taking about 10%, according to Handelsblatt.

The acquisition is subject to regulatory approval.

Alongside the announcement, Germany’s Schwarz Group committed €500 million to Cohere’s upcoming Series E round. Schwarz, parent company of discount retailers Lidl and Kaufland, was already a lead backer of Aleph Alpha.

Cohere CFO Francois Chadwick told Reuters the company expects to close the funding round in the coming months.

The combined company will target regulated sectors including public services, finance, defence, energy, manufacturing, telecommunications and healthcare. Both the Canadian and German governments said they would use Cohere technology.

Cohere has raised about $1.6 billion since its founding, with investors including Nvidia and AMD. Its most recent valuation was roughly $6.8 billion after a $500 million raise in August 2025.

That’s a fraction of what OpenAI, Anthropic and Google have mobilized for training infrastructure, talent and chip supply.

Aleph Alpha’s pivot away from foundation model development brought its business model closer to Cohere’s. Its existing contracts with the German federal ministry for digital affairs and the Baden-Württemberg regional government give Cohere a direct foothold in European public sector procurement.
What this means for Canadian technology leaders

At the press conference, Solomon framed the deal as a stand against concentrated power.

“We need to make sure that the power does not rest in the hands of a few dominant players,” he said. Wildberger said the two countries were “creating a global AI leader.”

The announcement builds on the Sovereign Technology Alliance that Canada and Germany signed earlier this year.

It also extends a visible stretch of Canadian government support for Cohere.

Ottawa finalized a $240 million investment in March 2025 to help fund Cohere’s $725 million data centre project in Cambridge, Ontario. In August 2025, the federal government signed a memorandum of understanding with Cohere to explore deploying its tools across public service operations.

Commercial momentum has followed. Bell Canada announced its own partnership with Cohere last July, making it Cohere’s preferred Canadian infrastructure partner. Last week, Innovation, Science and Economic Development Canada began rolling out Cohere’s North platform to up to 1,400 staff.

Industry Minister Mélanie Joly told The Logic last week that Canada needs a “trading bloc” of like-minded countries to counter U.S. protectionism and the power of hyperscalers. She called Cohere “a gem” and said her goal in conversations with the German government was to “build a national champion.”

Cohere has publicly committed to maintaining Canadian operations. The company retains its name, will continue to operate from Toronto as its primary headquarters, and the Canadian government remains a customer. Cohere’s CFO told Reuters the merger will help Cohere reach more customers in regulated markets.

The Series E close, expected later in 2026, will be the next concrete signal on how investors value the combined company and how the capital gets deployed between the two headquarters.

Aleph Alpha’s existing European public sector contracts will shape part of the product roadmap, which Canadian enterprise buyers should factor into their planning.
Final shots

Watch the Series E close. The valuation investors settle on for the combined company will signal more about the near-term trajectory than any ministerial statement.

For other Canadian AI firms, the Cohere-Aleph Alpha deal is a model worth studying: scale reached through partnership, with sovereignty framing as the political cover.

Watch the product roadmap. European public sector contracts will shape product decisions for a company now marketed as transatlantic.



Written by Chris Hogg

Chris is an award-winning entrepreneur who has worked in publishing, digital media, broadcasting, advertising, social media and marketing, data and analytics. Chris is a partner in the media company Digital Journal, content marketing and brand storytelling firm Digital Journal Group, and Canada’s leading digital transformation and innovation event, the mesh conference. He covers innovation where technology intersects with business, media and marketing. Chris is a member of Digital Journal’s Insight Forum.

‘Clearly me’: AI drama accused of stealing faces

By AFP
April 24, 2026


This photo illustration taken in Hong Kong shows phones displaying screenshots of a video from Chinese model and influencer Christine Li accusing an AI microdrama of stealing her likeness without consent - Copyright AFP Mahmoud RIZK


Sophia Xu and Purple Romero

Christine Li is a model and influencer, but not an actor, so when she saw herself playing a cruel character in a Chinese microdrama she felt bewildered, then angry and afraid.

The 26-year-old is one of two people who told AFP their likenesses were cast without consent in the AI-generated show “The Peach Blossom Hairpin”, which ran on Hongguo, a major microdrama app owned by TikTok parent company ByteDance.

Li plans to sue the drama makers and the platform, highlighting new legal and regulatory grey areas created by artificial intelligence.

“I was genuinely shocked. It was clearly me,” said Li, who lives in Hangzhou in eastern China.

“It was so obvious that they used a specific set of photos I took two years ago” and had posted on social media, she said.

Microdramas are ultra-short, online soap operas hugely popular in China and elsewhere.

When Li’s fans alerted her to the series, she was horrified to find her digital twin shown slapping women and mistreating animals.

“I also felt a deep fear. I kept wondering what kind of person would do something like this,” Li said.

Hongguo hosts thousands of free, bite-sized shows — both live-action and AI-generated — whose episodes are two or three minutes long.

As of October, the platform had around 245 million monthly active users, according to data cited by Wenwen Han, president of the Short Drama Alliance.

A Hongguo statement in early April said it had taken the series down because the producers had violated platform rules and contractual obligations.

– ‘Sleazy’ antagonist –

AI’s ability to mimic real people has sparked global concern for actors’ jobs, and over such deepfakes being used for scams and propaganda.

Li and a man who says he was portrayed as her AI husband in the series, which became a hit last month on Hongguo, spoke out online about their separate unwelcome discoveries.

But even as their stories sparked a public outcry about AI ethics, AFP saw that “The Peach Blossom Hairpin” kept running for days before its removal, with the disputed characters quietly replaced.

The man, a stylist specialised in traditional Chinese clothing and make-up, had posted photos of himself in costume on the Instagram-like Xiaohongshu app.

Like Li, he was upset by the “ugly” portrayal of his likeness as a “sleazy” antagonist in the show.

“Will it have an impact on me, on my job, on my future work opportunities?” said the man, who asked to use the pseudonym Baicai.

To keep audiences hooked, microdramas are often full of shocking, larger-than-life moments.

Li and Baicai both showed AFP their original photos and the characters in “The Peach Blossom Hairpin”, which bore a strong resemblance.

– Legal risk –

For low-budget AI microdramas, Chinese regulations say platforms must be the primary checkpoint for potentially dodgy content.

If they do not carry out mandatory content reviews, the videos will be forcibly taken down, according to the National Radio and Television Administration.

If the platforms were aware of any infringement but failed to act on it, affected parties can alert China’s cyberspace authorities, which can impose administrative penalties, according to Zhao Zhanling, a partner at Beijing Javy Law Firm.

Hongguo said in a second statement this month it would continue to strengthen how it reviews content and how it authorises creators, among other steps.

It said it had dealt with 670 AI microdramas that violated regulations, with most taken down, and warned it would crack down on repeated breaches.

When approached for comment, parent company Bytedance referred AFP to the two Hongguo statements.

Li and Baicai say they need more information from Hongguo to confirm the identity of the drama’s creator — with two companies as potential candidates.

One is linked to a verified account on the Chinese version of TikTok that also published the series. Another is listed as the drama’s producer on an official Chinese filing system.

AFP contacted both firms but received no response.

Using AI to slash costs may be tempting in the fast-growing, multi-billion-dollar microdrama market.

But featuring someone in a demeaning way without permission “may constitute an infringement of both portrait rights and reputation rights”, said Li’s lawyer Yijie Zhao, from Henan Huailv Law Firm.

– ‘Associated with controversy’ –

National regulations require microdrama makers to register to obtain a licence — a step made mandatory for AI-generated animations from this month.

But producers could remain in the shadows by registering temporary outfits, Zhao said, while some allegedly use overseas servers to hide.

In 2024, a Beijing court ordered a company to apologise and pay compensation to a celebrity after its AI software enabled users to produce a virtual persona using his photos and name that could exchange intimate messages.

But lawyers told AFP that compensation for plaintiffs like Li likely won’t amount to much due to the limited commercial value of an ordinary likeness.

Li worries that the saga may cost her opportunities in the modelling industry, as she is now “associated with controversy”.

Baicai has not launched legal action, but hopes to see more measures from regulators and platforms to protect people like him.

“There are probably plenty of cases with unknown victims,” he said.

Anthropic says Google to pump $40 bn into AI startup

By AFP
April 24, 2026

Anthropic CEO Dario Amodei has visited the White House as the startup stands its ground regarding safe use of its artificial intelligence - Copyright AFP/File CLEMENT MAHOUDEAU

Google is planning to invest up to $40 billion in Anthropic, the artificial intelligence firm confirmed Friday, expanding a long-standing alliance between the two companies.

The investment builds on a partnership in which Anthropic will use custom Google chips and cloud computing services to power its technology.

An Anthropic representative confirmed to AFP that the agreement sees an initial $10 billion investment from Google. The remaining $30 billion will depend on meeting performance milestones.

The announcement came just days after Amazon revealed plans to boost its collaboration with Anthropic with a new $5 billion investment, and a plan to invest $20 billion more if performance goals are met.

For its part, Anthropic said it has committed to spending more than $100 billion on Amazon Web Services (AWS) technology to power AI in the coming decade.

Anthropic is among AI sector rivals spending tens of billions of dollars on computing infrastructure to lead in the technology.

Anthropic said in early April that it had tripled its annualized revenues quarter-on-quarter to over $30 billion — outpacing OpenAI for the first time.

Anthropic chief executive Dario Amodei visited the White House, where both sides struck a friendly tone, following a dispute over the tech company’s refusal to grant the military unconditional use of its AI models.

Earlier this month, Anthropic announced its newest AI model Mythos, withholding it from public release due to its potential cybersecurity risks.

However, Anthropic said this week that it is investigating unauthorized access to Mythos, a powerful model which the company itself worries could be a boon for hackers.

Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers.


AI united Altman and Musk, then drove them apart


By AFP
April 24, 2026


As OpenAI chief Sam Altman and tech tycoon Elon Musk battle in court, artificial intelligence rivals continue racing ahead with the technology - Copyright AFP Kirill KUDRYAVTSEV

Thomas URBAIN

Elon Musk and Sam Altman bonded over artificial intelligence in a project that became OpenAI, but a clash of visions will see the polarizing figures face off in court in a trial that opens next week.

Silicon Valley lore traces their first meeting back to 2012, in an encounter prompted by investor Geoff Ralston.

Nearly 14 years younger than Musk, who was born in June of 1971, Altman was said to be impressed by the Tesla chief’s powers of persuasion.

While yet to reach the age of 30, Altman already had a tech world reputation as a brilliant dealmaker.

Altman’s unassuming, friendly demeanor contrasted sharply with Musk’s abrasive style, but they shared an entrepreneurial spirit and a penchant for risk-taking.

Libertarian Musk and the apolitical Altman found common ground in a shared belief about the future of AI.

Musk saw Google, and its subsidiary DeepMind, as out to create AI smarter than humans, with little regard for keeping it under control.

Just months before OpenAI was officially founded in early 2015, Altman published a blog post calling for measures to “limit the threat” posed by AI, complete with concrete proposals.

This philosophy was set as the guiding principle at OpenAI: born a non-profit organization dedicated to the responsible advancement of AI and boasting a commitment to making its research and source code freely accessible to the public.

Altman successfully pitched the OpenAI concept to Musk, who went on to invest at least $38 million to get the nascent entity established.



– Altruistic AI? –



In February of 2018, the South Africa-born entrepreneur behind Tesla, SpaceX and other companies resigned from OpenAI’s board, ostensibly to focus on his other commercial endeavors.

Behind the scenes, however, Musk and Altman were clashing over a proposed shift of OpenAI to a for-profit business that could attract investors in the capital-intensive AI race.

OpenAI completed that transformation in 2025, some three years after its ChatGPT digital assistant made AI and those who build it all the rage in the tech world.

After years as a champion of an approach in which AI serves society rather than corporate coffers, Musk muddied his message by launching a private xAI startup in July of 2023.

The mission statements for xAI and its chatbot Grok give scant mention to dangers of the technology even though Musk once called it an “existential threat” to humanity.

The rift between Altman and Musk widened as the world’s richest man moved to Texas and became an ally of US President Donald Trump while OpenAI stayed in San Francisco and focused on improving its technology.

Musk has used his social media platform X to go on the offensive with posts that include likening Altman to a “Game of Thrones” character seen as a master manipulator.

Musk, 54, even filed a lawsuit seeking to oust 41-year-old Altman as OpenAI chief executive. Selection of jurors in a trial for that case is set for Monday.

Altman has fired back on social media, contending Musk’s agenda is to rule over the most powerful AI.

“The current struggle between the two billionaires is shaped by their egos and belief that the winner will control a new technology,” contended Darryl Cunningham, author of a book about Musk.

“It seems doubtful to me that either can control AI.”

Billionaire Elon Musk enters courtroom showdown with OpenAI



By AFP
April 25, 2026


Elon Musk. — © AFP Brendan SMIALOWSKI
Benjamin LEGENDRE

Jury selection is to begin Monday in a high-profile legal battle between billionaire Elon Musk and artificial intelligence startup OpenAI, which he accuses of betraying its non-profit mission.

The clash in a courtroom across the bay from San Francisco pits the world’s richest man against a startup that Musk once backed and now competes against in the booming AI sector.

OpenAI’s ChatGPT is a formidable rival to the Grok chatbot made by Musk’s xAI lab.

While the lawsuit filed by Musk is part of a feud between him and OpenAI chief executive Sam Altman, it spotlights a debate whether AI should ultimately benefit the privileged few or society as a whole.

Court filings lay out how Altman tried in 2015 to convince Musk to back OpenAI as a co-founder of a non-profit lab whose technology “would belong to the world.”

Musk pumped some $38 million into the lab before he left.


Elon Musk (l) and OpenAI chief executive Sam Altman are both on the witness list for the trial in a case filed against the startup by the Tesla tycoon – Copyright AFP/File Frederic J. BROWN, Jung Yeon-je

OpenAI is now valued at $852 billion, with Microsoft among its backers, and is preparing to go public on the stock market.

The judge presiding over the trial is aiming for a jury to decide by late May whether OpenAI broke a promise to Musk in its drive to be a leader in AI or just smartly rode the technology to glory.

– Musk duped? –

Musk argues in his lawsuit that he was deceived about OpenAI’s mission being altruistic.

The tycoon cites an email from Altman in 2017 claiming that he remained “enthusiastic about the non-profit structure” of their AI venture after Musk threatened to cut off funding for the lab.

Just a few months later, however, OpenAI established a commercial subsidiary, faced with the need to invest hundreds of billions of dollars in data centers to power its technology.

Over the course of the following two years, Microsoft pumped billions of dollars into OpenAI, and the tech stalwart’s stake in the startup is now valued at about $135 billion.

Microsoft chief executive Satya Nadella is among those slated to testify at the trial.

– Aimed at Altman –

Along with calling for OpenAI to be forced to revert to a pure nonprofit, Musk’s suit urges the ousting of Altman and OpenAI co-founder and president Greg Brockman.

Musk is also seeking as much as $134 billion in damages and to have the court make OpenAI sever ties with Microsoft.

During pre-trial hearings, US Judge Yvonne Gonzalez Rogers mused that Musk’s team seemed to be “pulling numbers out of the air” when it came to calculating damages.

If the jury sides with Musk, it will be left to Rogers to determine any remedies or payment.

In what OpenAI has dismissed as a public relations stunt, Musk has vowed that any damages awarded in the suit will go to the startup’s nonprofit foundation.

– Quest for control? –

OpenAI internal communications brought to light by the lawsuit reveal tensions that culminated with the temporary ouster of Altman as OpenAI chief executive in late 2023.

Musk’s legal team highlighted a 2017 entry in Brockman’s personal journal reasoning that it would be a lie for Altman to publicly assert OpenAI would stay a nonprofit if it then became a corporation a short time later.

OpenAI now has a hybrid governance structure giving its nonprofit foundation control over a for-profit arm.

In court filings, OpenAI countered that its break-up with Musk was due to his quest for absolute control rather than its nonprofit status.

“This case has always been about Elon generating more power and more money for what he wants,” OpenAI said in a post on X, a platform Musk owns.

“His lawsuit remains nothing more than a harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor.”

The startup noted that days after Musk entered the AI race in 2023 he called for a 6-month moratorium on development of advanced AI.


Op-Ed: GPT 5.5 — Hype, obviously, but redefining the AI environment as well


By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 24, 2026


OpenAI says it is building a 'superapp' that combines ChatGPT, a coding tool, online search, and AI agent capabilities - Copyright AFP SEBASTIEN BOZON

If you read OpenAI’s blurb on GPT 5.5, “Introducing GPT 5.5,” you’ll notice a very upbeat description of the new platform, but with an interesting addition in plain sight.

OpenAI is clearly trying to address the many issues arising from prior platforms and consumer grumblings on multiple levels. This is critical because so far, the responses to AI problems have been largely useless and anything but reassuring.

This is important. AI hype has been seriously getting on people’s nerves, notably the people paying for it. Most professional IT commentators are far less than impressed with the constant sales pitch, particularly when it includes glossing over major issues like security and just getting things done properly.

There are problems. There are risks. We’re also talking about big outlays for businesses and significant challenges in core functions for just about everybody.

The market isn’t helping itself with absurd situations like its idiotic dismissal of Software as a Service, aka SaaS, assuming coding is somehow a thing of the past when it’s absolutely integral to every step forward with AI. Future coding for AI is likely to look very different and evolve into something totally new overnight. It will need to be hyper-efficient, perhaps totally rewritten to manage basic operations. You will need SaaS like you will need to breathe.

What’s desperately needed is clarity, and above all, credible responses to criticisms. This clarity needs to be at the consumer, tech, and business levels, and structured to address all of the issues.

This is why “Introducing GPT 5.5” needs to be seen as an actual response. OpenAI have gone to some lengths to try to fit all of these minefields into one press release, and they’ve managed to keep it interesting.

I won’t rehash the blurb. Just read it and watch the priorities emerge. Suffice to say that it’s still a sales pitch, but at least it’s believably ballpark for addressing this daily-growing encyclopaedia of situations. They’ve even managed to address heuristics, the thankless and much-bitched-about frontier of vibe coding and inferences for prompts, etc., into the mix.

Now we can get to the environment. AI is creating new environments for itself and the world at incredible speed. It’s easy to forget that most of the current issues weren’t even beginning to be mainstream issues a year ago. This level of response from a major player like OpenAI is new and indicates a level of market awareness not particularly noticeable a year ago, too.

So what is this new environment? It’s a patchy, buggy, vague businesscape, a consumer wading pool with sudden deep ends, and more. It’s an arena for half-baked employment issues. It’s also a self-inflicted problem for Big AI.

The adoption and deployment of AI are looking pretty chaotic. It’s nebulous in areas where it needs to be well-defined. It’s looking like an ADHD version of the early internet.

What’s needed is much more clarity, and plenty of it, preferably in LEGO form. Digitization required systemic training of everybody on Earth, the definition of workplace practices and protocols, and above all, personal-level familiarity with the real-world applications.

OpenAI doesn’t actually say in so many words that they’ve at long last declared war on slop. They’re just constantly talking about refining all the areas that generate slop. To be fair, there’s a clear drive toward quality control and functionality with proper oversight.

Can somebody tell me why any of these countless AI dysfunctions are tolerated at all by anyone? Nobody needs expensive chatty evasive unreliable automated idiots. You can get verbose excuse factories at any meeting for free. This is business.

Almost unnoticed in “Introducing GPT 5.5” is a very welcome nod to super high-value scientific AI. This is the true Golden Goose of AI. How they fitted it into the blurb, I don’t know, but it desperately needed to be there. This level of operations is bread and butter for top-level AI, and cutesy chatbots are nothing by comparison.

That’s good news for the high end of AI applications. The public face of AI is somewhere between Ronald McDonald and Freddy Krueger.

The overall look is terrible. “It’s great, it’s wonderful, it’s dangerous, and we may or may not know which at any given moment” isn’t good enough. The pity of it is that this look is pretty accurate.

This bizarre look creates instant sales resistance as well as some pretty justifiable fear and loathing. It’s totally counterproductive.

Some questions:

How much of future AI development looks at objectives like modelled values for users?

Can you put an ROI on a given AI task before you start it?

What are the opt-out and standalone choices for people who don’t want third-party involvement in high-value IP work?

Is there such a thing as an Off switch?

How do people pin down AI management and fixes into a demonstrable and quantifiable contract service cost? That’s not at all clear.

At what point does AI stop being an undefinable threat?

GPT 5.5 may well be great. It’s the business environment you need to talk about.


Op-Ed: Who needs AI? You do and you don’t.

By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 23, 2026


A Bernstein Research analyst says Open AI CEO Sam Altman has the power to crash the global economy or take everyone 'to the promised land' as the startup behind ChatGPT races to build artificial intelligence infrastructure costing billions of dollars - Copyright GETTY IMAGES NORTH AMERICA/AFP JUSTIN SULLIVAN

The question of who needs AI is a very fast-moving target. Using AI is about adapting it to your skills. Not you adapting to it. Things get personal when it’s your own work.

Case in point – As usual, when writing an article, I open a Word doc and turn off Copilot. I write about 1 million words a year, according to Grammarly. I don’t need Copilot largely because it gets in the way of flow, continuity, even basic syntax, and expression. Both Copilot and Grammarly have problems with tenses, syntax, and missing the whole point of sentences entirely.

I’ll believe in the omniscience of LLMs when I see it. AI is an actual nuisance in creative writing. That is a real issue. My creative writing is up to Grade 18, according to Word. I do not need an automated pedant cluttering up my work and consuming time in the process.

Then there’s functionality. AI is a lot better at that. I also go straight to the AI summary when I search. It’s useful, unlike many of the actual searches. As a research tool, it’s always ballpark, unlike some of the waffly and downright useless additives to searches I see daily on Google.

This is where “need” defines itself. This match of skills, functional needs, and quality control is unavoidable. AI is a new factor in your work with dubious credentials and much more dubious applied values and laughable sales pitches in so many ways.

Consider the pre-AI environment:

Most businesses reported that they were doing brilliantly and that they were therefore geniuses.

Basic business was just a digital version of paper business.

Admin was extremely efficient by any pre-digital standards.

Data had to pass scrutiny, particularly on balance sheets.

Data wasn’t unquestioned and unquestionable.

In short, looking at this heavily biased description, you could make a good superficial case that nobody ever needed AI.

Er, um, ah, well… Not quite. Data loads were growing pretty much according to Moore’s Law, to the letter. The tech eventually overran those parameters, and the Cloud took most of the weight.

It’s just as easy to argue that AI has stepped in to meet an inevitable demand for much higher processing efficiency and increased operational scope. That, at least, if nothing else, is perfectly true.

This isn’t quite a replay of the Industrial Revolution. It’s a very klutzy, badly mismanaged version of it, with the tech stumbling and bizarrely turning itself into the lowest common denominator for so many tasks. The Spinning Jenny is now an AI gopher agent who books things for you and does other mundane work.

Even the reliability and effectiveness of these AI agents is now seriously and rightly in question. Employment, education, and everything else people seem determined to avoid have been sucked into AI’s black-hole-like maw. We’re now arguing about whether kids should be able to do their homework themselves.

To coin a phrase or two –

“So far, so what?”

“Chatbot, schmatbot.”

Pretty mystic, don’t you think? But accurate.

In some ways, nothing has changed, just verbose and largely useless perspectives. The work still needs doing. Efficiency is measured by suspiciously obliging productivity metrics that look more like excuses than hard assets.

People and businesses don’t need this level of ambiguity. Life’s tough enough without a barely comprehensible tech you don’t necessarily like or want to understand. Getting an AI skill set is undeniably useful and valuable, but does that make the tech any better?

In the interests of balanced journalism, I asked Grok, “Who needs AI?” and got a pretty straightforward response with an interesting quote:

The deeper truth: AI needs humans more than the reverse in many ways. It excels at scale, speed, and pattern-matching but lacks true judgment, emotional intelligence, ethical nuance, leadership, and the ability to handle true novelty or messy real-world context without guidance.

As a writer, I know a scripted response when I see one. This is chapter and verse of a considered PR response as well as an answer. Not bad, really. Maybe LLMs do earn their keep.

However, and in fairness to a class of tech that is going to be underfoot for generations to come, what if it’s an honest answer? It’s correct as far as it goes. AI can’t lead. It also can’t process so many issues that are out of its depth.

That leads us to a slight and currently very unfashionable conundrum.

Could it be that AI is an unintentional risk of honesty?



Op-Ed: Anthropic Mythos — The monster that could be a saviour?



By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 23, 2026


Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books - Copyright AFP JOEL SAGET

Anthropic’s Mythos is causing a massive, at times ridiculous, flurry of interest in AI as a security threat. This threat has been monotonously predicted by cybersecurity experts for years.

The main difference is that it’s now visible in a tangible form. A report that Mythos was accessed by unauthorized users hasn’t helped.

Anthropic has been very cautious and understandably reticent about Mythos. It seems that Mythos has a unique capacity for finding flaws in IT security. “Unauthorised access” is exactly what you don’t want with this capability.

There are other possible issues. If Mythos can be duplicated, or some of its flaw-finding capabilities cloned, the possibilities are all too obvious. The IP damage alone could be catastrophic.

The cybersecurity angle is much more dangerous. Anthropic aren’t suffering from some sort of implied hypochondria. According to some reports, Mythos can crack smaller IT systems, which could be a direct lead into other larger systems.

That’s another major issue. This type of breach is a routine existing problem in cybersecurity. It’s a backdoor way of getting into associated businesses and other systems. Even if the big systems are OK, these compromised systems are likely gateways.

AI systems add a level of difficulty: their scope of operations is broad, they can generate agents, and they are infamous for their weird behaviors. Now add an AI that specializes in cybersecurity going rogue.

Put it this way: can anyone on Earth create a prompt for an AI dysfunctional rampage? Yes.

You definitely do not need a cybersecurity specialist AI going on a bender in this environment. Even a relatively minor event can escalate into a market panic, with or without serious damage. It’s a monster in too many ways.

OK, this is where it gets interesting.

Mythos seems to have a real major asset ready to go in plain sight. This expertise in finding flaws could be a huge plus for global cybersecurity.

Try this for a bit of tenuous logic:

AI can generate a sort of SSL (Secure Sockets Layer) defense: a multilayered hard target like the SSL setups used by financial institutions. And it can do this in seconds.

Now the “saviour” bit. This is fascinating.

AI can predict. This is where Mythos may have a huge advantage. If you’ve ever played against Stockfish, the super chess computer, you know it can plan at least 40 moves in advance. Apply this to “breach theory”: an AI predicting how a breach behaves and what moves a hack can make next.

Hacks have a weakness, too. Some things must be done to access and run anything. AI can monitor behaviors and predict next steps by bad actors long before they happen. It can block actions, redirect them, and/or simply stop them in real time.

This is existing tech. You don’t even need to look far for move-prediction code, and it’s easy for an LLM to train on as required. LEGO for cybersecurity, in effect. Mythos could easily outperform any hack, and at AI speeds.
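For the sceptics: the move-prediction idea needs nothing exotic. Here is a minimal sketch of a first-order Markov model that learns which attack step tends to follow which. All step names and incident sequences below are invented for illustration; this is not how Mythos works, just the bare-bones version of the concept.

```python
from collections import defaultdict, Counter

class NextStepPredictor:
    """First-order Markov model over observed attack-step sequences."""

    def __init__(self):
        # transitions[step] counts which steps have been seen to follow it
        self.transitions = defaultdict(Counter)

    def train(self, sequences):
        for seq in sequences:
            for current, following in zip(seq, seq[1:]):
                self.transitions[current][following] += 1

    def predict(self, current_step):
        """Return the most frequently observed next step, or None."""
        counts = self.transitions.get(current_step)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Invented incident data, purely illustrative.
incidents = [
    ["recon", "credential_stuffing", "lateral_move", "exfiltration"],
    ["recon", "phishing", "lateral_move", "exfiltration"],
    ["recon", "credential_stuffing", "privilege_escalation", "exfiltration"],
]

model = NextStepPredictor()
model.train(incidents)
print(model.predict("recon"))  # most common observed follow-up to "recon"
```

A real defensive system would work over far richer telemetry, but the principle is the same: once you have the observed sequences, anticipating the likely next move is cheap.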

Mythos knows where the weaknesses are. It can predict how a hack has to behave to do anything.

The problem may be its own solution.

___________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.

Anthropic cyberattack highlights how the modalities are becoming harder to contain

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 22, 2026


Investigators and researchers are still learning of the scope of the cyberattack which has hit US government agencies and other victims around the world - AFP

With both OpenAI and Anthropic introducing more “cyber-permissive” models (in tightly controlled releases), this indicates that advanced vulnerability discovery and exploit reasoning are becoming more accessible and potentially harder to contain. A recent incident demonstrates this.

This week it was announced how unauthorised users were able to access Anthropic’s Mythos model, PC Mag reports. The way the rogue agents accessed the server was reportedly by just changing a model name.


Anthropic’s Mythos model is a powerful AI tool capable of identifying undiscovered security holes that have existed for decades.

Bloomberg has reported that an as-yet-unnamed group tried multiple ways to gain access to the AI model before finally getting through to the system via a third-party vendor.

The issue demonstrates how easily such systems can be exposed. This signals that AI capabilities are already out there and in the wrong hands they can accelerate how quickly vulnerabilities can be detected and exploited.

Consequently, software teams will need to look at how to harden their code so those vulnerabilities cannot be exploited to begin with.
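Much of the hardening in question is mundane rather than exotic. A classic illustration, unrelated to Mythos itself, is replacing string-built SQL with parameter binding so that attacker-controlled input stays data rather than becoming code:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker-controlled `name` is spliced into the SQL text.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Hardened: parameter binding keeps `name` as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection returns every row
print(find_user_safe(conn, payload))    # returns no rows
```

Flaw-finding models make such bugs cheaper to discover; fixes like the above make them impossible to exploit in the first place.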

Several experts reached out to Digital Journal to explain the ramifications and ongoing significance of the incident.

Patching is expected

The first to do so is Steve Povolny, Vice President of AI Strategy & Security Research at Exabeam. Povolny focuses on the seeming simplicity of the attack: “The reality is, Pandora is out of the box. If it was as relatively easy as it sounds to gain access to the world’s most talked-about security model, it’s very likely a much larger group will have access to Mythos far sooner than originally intended.”

He then turns his attention to the future, considering: “What will be most interesting is observing whether researchers or adversaries can leverage the tech more effectively – will we see widespread exploitation or widespread discovery and patching first? Or will this be another DeepSeek moment? Overreactions and underwhelming impact. Either way, should be interesting to watch this unfold.”

Difficult steps ahead

The second IT specialist to pitch in is Isaac Evans, founder and CEO of Semgrep. Evans seeks to put the incident in perspective: “This infiltration is a minor hiccup compared to the idea of someone exfiltrating the models’ weights, which would be a game-changing scenario, and one that has occurred in part before with the distillation of OpenAI models into Deepseek. Anthropic has to protect Mythos against distillation or outright theft.”

Evans then ponders the future move for Anthropic: “Mythos’ ability to find zero-days in so much of the software stack that SaaS vendors rely on is evidence that security bugs are plentiful, not scarce, in the software Anthropic and the broader community use. The security team at Anthropic has a very difficult job: securing the model on a software stack that was designed for high velocity over high assurance, against some of the most sophisticated threat actors in the world.”

He is also cautious about what happens next: “Until we are able to reach a new steady state by patching all of the vulnerabilities LLMs can find, expect a lot of successful offensive activity.”

Building offensive-grade AI

The third commentator is Gabrielle Hempel, Security Operations Strategist at Exabeam. Hempel is interested in how the attack was devised: “Any time you build a high-capability system and expose it even to a semi-distributed environment (partners, contractors, ‘trusted’ ecosystems), you’re expanding your attack surface beyond what you can realistically control. While everyone seems focused on securing against sophisticated nation-state actors, we’ve increasingly seen third-party access paths becoming the weakest link.”

She next looks at the inherent weaknesses that opened the door for the attackers: “From a defender’s perspective, this is the point we’ve been reinforcing until we’ve gone blue in the face: your security perimeter isn’t just the infrastructure you own, it’s your entire supply chain.”

Stepping back, Hempel weighs up the situation of an offensive AI world: “I think the interesting thing is that everyone is going to focus on the headlines touting, “AI tool capable of cyberattacks falls into the wrong hands. The real problem, however, is that this model was never supposed to be broadly accessible, it was intentionally restricted to a small set of orgs due to dual-use risk, and it still leaked almost immediately due to a contractor environment. The uncomfortable truth here is that we are rapidly building offensive-grade AI capability into tooling and assuming that policy, contracts, and limited access lists are going to sufficiently control the sprawl.”