
Sunday, April 26, 2026

‘The Corruption Is in Plain Sight’: Protesters Decry Ellison-Trump Dinner as Megamerger Looms

“Tonight’s dinner appears to be nothing more than a transparent bid to flatter the Trump administration into rubber-stamping David Ellison’s proposed Paramount-Warner Bros. merger.”


David Ellison, the CEO of Paramount Skydance, walks through Statuary Hall to the State of the Union address on February 24, 2026, in Washington, DC.
(Photo by Anna Moneymaker/Getty Images)

Jake Johnson
Apr 24, 2026
COMMON DREAMS

A coalition of free speech organizations, progressive lawmakers, and antitrust advocates gathered outside the US Institute of Peace in Washington, DC on Thursday to protest a private dinner hosted inside the building by Paramount Skydance CEO David Ellison, who is seeking regulatory approval from the Trump administration for a megamerger of his company and Warner Bros. Discovery.

The invite-only dinner was billed as an “intimate gathering in celebration of the First Amendment honoring the Trump White House”—which has waged war on press freedom—“and CBS White House correspondents.” Norm Eisen, co-founder of Democracy Defenders Action, said during Thursday’s protest that the dinner “resembles the First Amendment in the same way that a book burning is a celebration of the written word.” President Donald Trump attended the dinner, which critics dubbed the “Paramount Corruption Gala.”

Organizers of Thursday’s demonstration warned that the proposed merger of Paramount and Warner Bros., the parent company of CNN, would be catastrophic for media and free expression. If the merger is approved, David Ellison—the son of Trump megadonor Larry Ellison—would control CBS, CNN, HBO, and other major media properties.

“Tonight’s dinner appears to be nothing more than a transparent bid to flatter the Trump administration into rubber-stamping David Ellison’s proposed Paramount-Warner Bros. merger, which would be a disaster for American news media and media consumers,” said Robert Weissman, co-president of the watchdog group Public Citizen. “This proposed acquisition perfectly illustrates the domino effect of corporate and wealth concentration: David Ellison is only positioned to propose this merger because his father, Larry Ellison, the co-founder of Oracle, has become richer than any person should be allowed to be.”

Craig Aaron, co-CEO of the advocacy group Free Press, said that “no company should have this much media power, but especially not this company.”

“We’re here tonight to defend free speech. We’re here tonight to defend press freedom,” said Aaron. “We’re here to stop government censorship. We’re here to stop corruption and stop the Ellisons from trashing even more of our media.”

Aaron called on those gathered to say it “loud so that state attorneys general” across the country can hear the message clearly.

“Stop the merger!” they shouted. “Stop the merger!”

The dinner was held hours after Warner Bros. shareholders approved the proposed merger with Paramount, a company that just last summer received approval from the Trump administration to merge with Skydance—a decision that was widely viewed as corrupt. The proposed merger of Paramount Skydance and Warner Bros. has drawn vocal opposition from Hollywood actors, directors, and producers, who released an open letter earlier this month warning that the combination would “threaten the sustainability of the entire creative community.”

Two members of Congress, Reps. Jamie Raskin (D-Md.) and Becca Balint (D-Vt.), spoke at Thursday’s protest, decrying what they called Ellison and Trump’s “corrupt merger scheme.”

“We’re here to say, ‘Hell no,’” said Raskin, the top Democrat on the House Judiciary Committee. The Maryland lawmaker called Ellison’s private event “a lavish oligarch’s dinner for Donald Trump.”

Balint told protesters that as she spoke, Ellison was probably “raising a glass to his friend, his supporter, his patron, Donald Trump.”

“That’s what they’re celebrating: power and corruption,” said Balint. “And in this instance, the corruption is in plain sight.”

Trump family could pocket billions from IRS suit: analyst

Ewan Gleadow
April 26, 2026
RAW STORY

President Donald Trump will funnel a potential IRS payout into a family shell company, a political analyst has claimed.

Trump and his sons are negotiating with the Internal Revenue Service to settle a $10 billion lawsuit without trial. Trump filed the lawsuit after taking office, claiming an IRS contractor leaked his tax information. The motion for a settlement extension was filed with the IRS's consent, requesting time for the parties to engage in discussions and avoid protracted litigation.

Trump acknowledged in January that he is essentially negotiating with himself, stating he could make the settlement "a substantial amount" before directing funds to charities.

Heather Delaney Reese believes that, should Trump's lawsuit against the Internal Revenue Service be a success, the payout will not be headed to charity.

Reese wrote, "Trump and his lawyers are currently in settlement talks with the Department of Justice over this lawsuit. The same DOJ that he controls. If those talks result in a payout, it would be Trump’s own administration writing Trump and his family a check from the United States Treasury. That would be taxpayer money being spent.


"And if he does donate the winnings to charity, as he suggested on Air Force One, do not hold your breath waiting to find out which one. This is a family with a history of creating entities that look like charities on paper.

"The Trump Foundation was shut down under court supervision after the New York Attorney General found that Trump had repeatedly used its funds for his own personal, business, and political interests.

"He was ordered to pay $2 million in damages. He made 19 admissions of illegal activity. His three adult children were required to undergo mandatory charity law training as part of the settlement. So when he says the money could go to charity, it might not mean what we imagine that to mean."

Reese went on to suggest that the lawsuit could be set to collapse by May after a federal judge asked a pointed question about the point of the suit.

"But on Friday, a federal judge named Kathleen Williams, an Obama appointee sitting in Miami, looked at the case and asked a question that cut through what this lawsuit really was about: money," Reese wrote. "She pointed out that Trump is the sitting president who directly oversees both the IRS and the Treasury Department.

"His named adversaries in this lawsuit are agencies whose decisions are subject to his direction. She questioned whether the parties are even 'sufficiently adverse to each other' for the lawsuit to be constitutional under Article III, which requires an actual controversy between genuinely opposing parties."


‘These People Are Shameless’: RFK Jr.’s Son Launches Healthcare Investment Fund

“The festering swamp of corruption and self-dealing surrounding the Trump White House just got even deeper.”


Robert F. Kennedy Jr., with his son Finn behind him, spoke during a rally in Aurora, Colorado on May 19, 2024.
(Photo by Helen H. Richardson/MediaNews Group/The Denver Post via Getty Images)

Jake Johnson
Apr 25, 2026
COMMON DREAMS

US Health and Human Services Secretary Robert F. Kennedy Jr.'s son, Finn Kennedy, is reportedly seeking to raise $100 million for a new healthcare industry investment fund that will seek to capitalize on “policy initiatives in government”—including RFK Jr.'s so-called Make America Healthy Again agenda.

The Financial Times reported Friday that Finn Kennedy’s fund, Victura Ventures, has already secured roughly $70 million in commitments. The fund is “targeting early-stage growth companies involved in healthcare AI, consumer health, and other health technologies,” FT reported, citing an offering document.

“Kennedy’s foray into healthcare investing marks the latest example of the cozy relationship between the Trump administration and close associates who have sought to capitalize on it,” the newspaper added. “Sons of President Donald Trump and Commerce Secretary Howard Lutnick have invested in cryptocurrency businesses as Trump has promoted alternative currencies. Donald Trump Jr. has joined the board of 1789 Capital, a fund founded by pro-Trump donors in 2023. At least four of 1789’s portfolio companies have won contracts from the Trump administration. 1789 has also invested in big government contractors, such as Anduril and Elon Musk’s SpaceX.”

Additionally, as Common Dreams reported on Thursday, Eric Trump appeared on Fox Business to brag about a $24 million Pentagon contract secured by Foundation Future Industries, where the president’s son serves as chief strategy adviser.

“These people are shameless,” journalist Doug Henwood wrote in response to the reporting on Finn Kennedy’s new fund.

The advocacy group Protect Our Care said the FT reporting and a Friday story in The New York Times—which detailed how a top Kennedy aide “was advising on changes to the American health system while running a rapidly growing wellness company poised to benefit from Trump administration health policies”—show that “the festering swamp of corruption and self-dealing surrounding the Trump White House just got even deeper.”

According to the Times, Kennedy aide Calley Means “held between $25 million and $50 million in stock in the company, Truemed, through November, as he continued to serve as its president.”

“For months, Mr. Means has ignored questions from Democrats in Congress about his finances, including the extent of his stake in Truemed, and how they related to federal policy,” the Times added.

Kayla Hancock, the director of Protect Our Care’s Public Health Project, said in a statement Friday that “it’s perhaps easy for RFK Jr. to look at Donald Trump and Commerce Secretary Lutnick blatantly abuse the power of the White House to enrich themselves, family members, and big donors, and say, ‘Why not me?’”

“Kennedy claims he’s following ethics rules, but why did he keep the barn door open for his son and close associates to profit off his policy decisions?” asked Hancock. “It follows a corrupt pattern of Trump administration officials exploiting loopholes to steer money into their family and friends’ pockets at the same time they rip away healthcare from millions of Americans and push policies that hike costs on everything from insurance premiums, gas, to groceries.”

‘Unprecedented Kleptocracy’: Sanders Slams Trump Family’s Presidential Profiteering

“The Trump family has made $4 billion off the presidency,” the senator said.



US Sen. Bernie Sanders (I-Vt.) speaks during a Fighting Oligarchy Tour rally at the UIC Forum in Chicago on August 24, 2025.
(Photo by Scott Olson/Getty Images)

Brett Wilkins
Apr 24, 2026
COMMON DREAMS

Amid renewed scrutiny of self-dealing by President Donald Trump and his relatives ahead of this weekend’s Mar-a-Lago gala for top investors in the $TRUMP meme coin—whose value has plummeted more than 90% from its high—Sen. Bernie Sanders on Thursday took aim at the First Family’s corruption.

“The Trump family has made $4 billion off the presidency,” Sanders (I-Vt.) said on X following reporting by New Yorker staff writer David Kirkpatrick and others detailing how Trump and relatives have profited from his position during his second term.

Sanders listed sources of Trump family presidential profiteering, including more than $3 billion from cryptocurrencies like $TRUMP and $MELANIA—the latter of which has plunged by over 99%—Persian Gulf deals worth over $425 million, $150 million in the form of a luxury jumbo jet gifted by Qatar, and various business ventures and deals the senator slammed as part of an “unprecedented kleptocracy.”

In addition to the two meme coins, many of those crypto gains are linked to ventures including American Bitcoin and World Liberty Financial—which has raised eyebrows for being co-founded by Trump’s sons, with disclosures showing 75% of its token sales going to a Trump-linked entity.

Democrats on the US House Oversight Committee have published their own running tally showing nearly $2.5 billion in “Trump family digital grift profits”—including more than $634 million from foreign sources—and $6 billion in “Trump family digital grift wealth.”

“While Americans struggle to buy groceries and pay rent, Donald Trump is making his family richer through digital grift schemes—collecting profits through digital wallets and granting pardons to the highest bidders,” the House Oversight Democrats said.

Sanders isn’t the only US lawmaker to denounce what Sen. Elizabeth Warren (D-Mass.) last year called Trump’s “superhighway of crypto corruption.”

Also last year, Rep. Jamie Raskin (D-Md.), ranking member of the House Judiciary Committee, released a report detailing how “Trump and his family have transformed the presidency into a personal money-making operation, adding billions of dollars to his net worth through cryptocurrency schemes entangled with foreign governments, corporate allies, and criminal actors.”

“President Trump and his family kept lining their pockets while he and his allies in Congress closed down the federal government—refusing to extend tax credits to make healthcare affordable for American families, putting continued food benefits for women and children in doubt, and placing active-duty military personnel in danger of missing their next paycheck,” House Judiciary Democrats said.

Trump is the only president to ever be convicted of felony crimes. In 2024, while he was running for a second term, a New York jury found him guilty of 34 felony charges related to the falsification of business records regarding hush money payments to cover up sex scandals during the 2016 presidential election.

Last year, a New York appeals court tossed a $355 million civil fraud judgment—which increased to more than half a billion dollars with interest—against Trump and his two eldest sons in a separate case in which the trio exaggerated the wealth of their business organization. The ruling upheld the fraud finding and banned Trump and his sons from leading businesses in the state for 2-3 years.

'Stupid move': Fury as Trump fires entire science board with no warning or explanation

Daniel Hampton
April 25, 2026 
RAW STORY


President Donald Trump triggered outrage when he fired what House Science Committee Democratic staff described as the entirety of the independent board overseeing the nation's premier basic science funding agency on Friday, sending boilerplate termination emails that offered no explanation and no warning.

Members of the National Science Board, which helps govern the $9 billion National Science Foundation, received messages from the Presidential Personnel Office simply stating their positions were "terminated, effective immediately," The Washington Post reported Saturday. The foundation funds Antarctic research stations, telescopes, research vessels, and the basic science behind MRIs, cellphones, and LASIK eye surgery.

"This is the latest stupid move made by a president who continues to harm science and American innovation," Rep. Zoe Lofgren (D-CA) said. "The NSB is apolitical. It advises the president on the future of NSF. It unfortunately is no surprise a president who has attacked NSF from day one would seek to destroy the board that helps guide the foundation."

Board member Keivan Stassun, a physicist at Vanderbilt University, confirmed that a third of the board had received the termination emails. Fellow member Marvi Matos Rodriguez said she had been reviewing an 80-page report as part of her board work just days before being fired.

Trump's fiscal year 2027 budget proposes deep cuts to NSF, and the board has been actively advising Congress on the agency's importance, helping beat back a proposed 55 percent budget cut last year.

Alondra Nelson wrote on Bluesky that she resigned from the National Science Board in May after seeing that "meaningful oversight became untenable."

"I respect colleagues who stayed to serve the NSB's mission. Today's news clarifies that what was an erosion of oversight and function has become open elimination of the Board itself," said Nelson.

Princess Vimentin, a cancer biologist, wrote on Bluesky, "We are seeing more destruction of science. Trump fired the entirety of the National Science Board (NSB). The purpose of the NSB is to advise Congress & President on NSF. The NSB was established in the National Science Foundation Act of 1950."



Trump Is Telling All The Wrong People, ‘You’re Fired’ and Devastating America

Entire careers and livelihoods have been destroyed by this dictator using the White House to vastly enrich himself and his cronies.


Supporters hold signs as former US Agency for International Development employees terminated after the Trump administration dismantled the agency collect their personal belongings at the USAID headquarters on February 27, 2025 in Washington, DC.
(Photo by Chip Somodevilla/Getty Images)


Ralph Nader
Apr 26, 2026
Common Dreams

On my radio show-podcast—the Ralph Nader Radio Hour—interviews with knowledgeable people have detailed the ravages inflicted on millions of Americans by the cruel, serial law violator, Tyrant Trump. Still, the report from the V-Dem Institute at Sweden’s University of Gothenburg produced a jolting Common Dreams headline: “Trump Is Dismantling US Democracy at a Speed ‘Unprecedented in Modern History.’”

The report described the first year of President Donald Trump’s second term as achieving in one year what budding autocracies take a decade to accomplish, adding that “the speed of decline is comparable to some coups d’état.”

To wreck, weaken, and endanger our country, Trump disrupts the lives of millions of civil servants, contractors, small businesses, and their families. He fired or forced out hundreds of thousands of federal civil servants staffing programs that protect the health, safety, and economic well-being of tens of millions of Americans, relying on food supplements, Medicaid, government-backed loans, and innumerable other social safety nets.

Trump has especially targeted law enforcement programs directed at enforcing worker and consumer safety, financial protections, and environmental health against toxic corporations. He is taking federal cops off the corporate crime beat.

Here are some specifics. Qualified foreign doctors have had their visas rejected. The US has a doctor shortage, especially in rural areas. These physicians were blocked by Trump from extending care in areas with no doctors.

Huge, arbitrary cuts for scientific research have closed or curtailed labs, left individual scientists pursuing crucial discoveries to save lives without the government grants funding vital promising projects. He has also accelerated a brain drain from the US to Europe and China, and reduced the number of scientists, engineers, and nurses coming to the US to work, where they are seriously needed.

Entire careers and livelihoods have been destroyed by this dictator using the White House to vastly enrich himself and his cronies.

Let’s be more specific. The New York Times published a front-page story about what is happening to employees of the US Agency for International Development (USAID), illegally closed down in the first week of Trump’s regime. This reckless action jeopardizes millions of impoverished lives abroad. The article opened with: “She was fired by email while on maternity leave, given 24 hours to clear out her desk, and left with three days of health insurance and no severance.” Her husband, also working with funding from USAID, lost his job. They are now relying on food stamps, Medicaid, and a supplemental nutrition program—long-standing programs being cravenly slashed by the Trumpsters, while giving huge tax escapes to the super rich and large corporations like Apple.

Multiply this story of undeserved misery and fragility hundreds of thousands of times. Through Elon Musk’s criminal enterprise, the Department of Government Efficiency (DOGE), whole agencies were being illegally shattered, and virtually shut down, e.g., the Department of Education, the Consumer Financial Protection Bureau, and the US Institute of Peace. Others were being strip-mined like the Department of Health and Human Services, the Environmental Protection Agency, and the Department of Agriculture.

Trump tore up civil service union contracts. The unions are suing Trump for this breach of contract. Such lawsuits drag on interminably and are hardly covered by the media. What the union leaders and members should be doing is peaceably encircling the White House for round-the-clock vigils and featuring large signs calling Trump out in vivid language. After all, the headquarters of the AFL-CIO is less than a block from the White House for easy logistics.

What are the pretexts coming out of Trump’s snarling mouth to justify such devastation of America? One is that he accuses these agencies of being “woke,” an ill-defined word for “leftists” that he has turned into another of his four-letter epithets for his ever-true believers.

A more frequent declaration issued without substantiation is that his decisions are based on “a grave threat to national security.” His lies don’t pass the laugh test.

This pretext is always applied to Trump’s blockage of offshore wind turbines, which he strangely has long called “ugly.” Trump recently exempted oil and gas drilling in the Gulf of Mexico from measures to protect endangered species. Self-described warrior of God and Jesus Christ, Defense Secretary Pete Hegseth, stated that such exemptions would bolster national security by increasing domestic oil production.

Trumpian effrontery gets worse. He issued an executive order removing collective bargaining rights from hundreds of thousands of federal employees employed by a dozen agencies on national security grounds. The 1978 law he falsely invoked applied to “intelligence officers,” not to cleaners, guards, clerks, etc., in federal buildings. Again, the expected lawsuits were filed. Amid judicial delays, Trump gets his way.

When pressed by reporters to explain these pretexts, Trump’s flaks come up with ridiculous assertions promptly rebutted by specialists in each area. (See The New York Times, April 19, 2026—“Trump Has a Go-To Justification for His Contentious Decisions: National Security.”)

Who elected Trump? The Democratic Party’s feeble, cowardly, and uninspiring performance in 2024—repressing, through its corporate-conflicted consultants, decisive input from its progressive wing and civic and labor leaders—was a big factor. (See the August 27, 2024, letter to Liz Shuler).

Who unleashed this runaway felonious politician violating daily innumerable federal laws, regulations, international treaties, and constitutional provisions, constituting serious impeachable offenses? (See H.Res.1155).

First, the congressional Republicans have abjectly surrendered their oath of office to constitutionally lead the congressional branch of government. In addition, the cowardly Democrats, who could have conducted scores of “shadow hearings” to inform the media and citizenry are largely MIA.

It is time for citizens to press their Senators and Representatives to stop this Trump rampage—before it is too late. The Congressional Switchboard number is 202-224-3121.



AI poses the biggest threat to service sector jobs

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 24, 2026



An April 2026 report on job automation found that Malta faces the biggest threat from AI replacing workers. With Amazon cutting 16,000 international roles to let AI handle the same tasks, a new study by Planera shows which countries have the most people working in jobs that machines will soon be able to do.


Malta has the highest automation risk in the world, with nearly half of workers holding jobs that AI can replace.

For the U.S., some 69 million American workers could lose their jobs to AI, the largest at-risk workforce in absolute numbers. However, the worst relative exposure is found elsewhere: service-focused countries like Greece and Spain face bigger risks, with hospitality and retail roles most exposed to AI.

The research tracked employment across different economic sectors to find how exposed each country’s workforce is to AI automation. The report used official labour data from government sources and matched them with automation risk probabilities for each industry. These probabilities measure how likely machines are to replace human workers in sectors like hospitality, finance, retail, and professional services. Countries were ranked by how much of their workforce is doing work that AI can handle.
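The weighting described above, multiplying each sector's employment by its automation-risk probability and summing across sectors, can be sketched in a few lines. Planera's underlying sector data is not public, so the sector names, employment figures (in thousands), and risk probabilities below are illustrative assumptions rather than the study's actual inputs:

```python
# Illustrative sector data modeled loosely on a small service economy:
# (employment in thousands, probability AI can replace the work).
# These numbers are assumptions for demonstration, not Planera's data.
sectors = {
    "hospitality":           (60.0, 0.72),
    "professional_services": (90.0, 0.70),
    "retail":                (50.0, 0.51),
    "other":                 (132.8, 0.25),
}

def weighted_exposure(sectors):
    """Return (total employment, weighted employees at risk, exposure index)."""
    total_emp = sum(emp for emp, _ in sectors.values())
    # "Employees at risk" weights each sector's headcount by its risk.
    at_risk = sum(emp * risk for emp, risk in sectors.values())
    return total_emp, at_risk, at_risk / total_emp

total_emp, at_risk, index = weighted_exposure(sectors)
print(f"Total employment:           {total_emp:,.1f}K")
print(f"Weighted employees at risk: {at_risk:,.1f}K")
print(f"Exposure index:             {index:.2%}")
```

Countries are then ranked by the exposure index, i.e., the employment-weighted average risk across all sectors.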

The top 10 countries where workers face the highest automation risk

Country          Weighted AI Exposure Index   Total Emp (000s)   Weighted Employees at Risk (000s, Emp×Risk)
Malta            46.56%                          332.80               155.00
Canada           44.87%                        8,865.00             3,977.60
Greece           44.84%                        5,525.10             2,477.30
Cyprus           44.77%                          508.40               227.60
Luxembourg       43.82%                          538.90               236.20
Netherlands      43.67%                       10,890.00             4,755.20
United States    43.63%                      158,286.00            69,067.90
Spain            43.35%                       23,091.10            10,010.40
Belgium          43.28%                        5,575.70             2,413.00
Italy            42.22%                       28,746.10            12,136.90

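The table's final column is simply total employment multiplied by the exposure index, which a quick spot check of a few rows confirms (small discrepancies arise because the published index is rounded to two decimal places):

```python
# Spot-check the "Emp x Risk" column of the table above:
# employment (thousands) times the published exposure index.
rows = {
    "Malta":         (0.4656, 332.80),
    "United States": (0.4363, 158_286.00),
    "Italy":         (0.4222, 28_746.10),
}
for country, (index, emp) in rows.items():
    print(f"{country}: {emp * index:,.1f}K at risk")
```

For the United States, for example, 158,286 × 0.4363 gives roughly 69,060K, within rounding of the published 69,067.9K.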
As indicated by the table, Malta faces the biggest job automation risk in the world. Nearly half the workforce here holds jobs that AI can replace, putting 155K people at risk of displacement. The island economy depends on admin work, hospitality, and professional services, all sectors where automation is easiest. Malta’s small size means these workers can’t easily move to safer sectors either, and with 1 in 2 jobs vulnerable, the country faces greater threats than larger economies.

Canada wobbles

Canada comes second with close to 4 million workers employed in roles that machines can handle. That’s about 45% of the local workforce. Information technology and hospitality drive the risk here, with 75% of Canadian tech jobs predicted to be automated soon, while food service faces the same risks at 72%. Unlike Malta, Canada has options to retrain workers, but the sheer number of people affected means displacement will still hit hard across provinces and cities.

Greece matches Canada’s 45% automation risk, with 2.5 million workers holding jobs that AI can replace. The Greek economy depends on tourism and services, sectors where automation is advancing fastest. Accommodation and food services employ 730K people at 72% replacement risk, while wholesale and retail trade adds another 880K workers at 51% risk. The country’s ongoing economic struggles make retraining difficult as well, so many Greeks face automation threats without safety nets.

Cyprus ranks fourth with 45% of its workforce exposed to automation. About 228K Cypriots work in roles that machines can handle, a large share for an island with just over half a million employed. Like Malta and Greece, Cyprus built its economy on tourism and professional services. Legal, accounting, and scientific jobs here face 70% automation risk, while hospitality is at 72%. The island’s geographic isolation makes job mobility harder, so workers who lose positions to AI have fewer options than people in bigger countries.

Luxembourg rounds out the top five as automation threatens 236K jobs. Lawyers and accountants here face worse odds than bankers, with 70% of their work easily replaceable compared to 51% in finance. Being one of the world’s richest countries means Luxembourg can retrain its workforce better than others. However, that will still require nearly half of all workers to start over in new careers.

Service sector risks

The data shows service jobs are more at risk. Manufacturing was already automated decades ago, so the workers left are doing tasks robots can’t handle yet. But admin assistants, retail clerks, and hospitality staff are all doing repetitive work that AI can learn quickly.

AI Deletes Routine White Collar Jobs – OpEd

A new study predicts that 86% of AI unemployment will be women. And not just any women: rich Democrat women. 

Tragically, AI is coming for the notorious Karen who’s overpaid for what she produces but still needs to see the manager.

The reason is that the Industrial Revolution took jobs from people who work with their hands — routine physical work. But AI is coming for routine paperwork — the jobs of people who forward emails, schedule meetings, and sit on diversity committees.

Last week, think tank Brookings issued a new study estimating 37 million American workers are “highly exposed” to AI replacement. 

Brookings thinks that most of those will easily transition into a different role because they have broad skill sets or they’re smart.

For example, software and finance roles are in the firing line for AI replacement. But those workers will adapt as automation creates new jobs, since it raises incomes and deepens production — products get better.

But Brookings estimates there are about 6 million who will not adapt, primarily in clerical and administrative roles.

What’s interesting is the distribution: Brookings estimates 86% are women. And they work at big organizations with lots of routine paperwork — colleges, hospitals, big companies, government. 

In healthcare, for example, hundreds of thousands of workers never see a patient — they see paperwork. 

It’s worse with federal workers, who are also mostly women. 

Going by the fact nothing got worse when DOGE fired 300,000 of them, many are useless. Just imagine how useless they’ll be when AI can do their job for free.

A lot of these women may be low skill, but they’re high education — and high income. Which they’re about to lose. Which won’t make them happy.

In a recent CNBC interview, Palantir CEO Alex Karp laid it out: “If you’re going to disrupt the economic and political power of highly-educated female voters who vote Democrat while increasing the power of vocationally trained working class males…and you think that’s going to work out politically, you’re in an insane asylum.”

So who are these soon to be jobless Karens? 

A study by AI company Anthropic — of Claude fame — thinks AI could ultimately replace over 90% of tasks in administrative, clerical, and middle management. And over 80% in arts and media and law firms — don’t let your kids go to Hollywood. Or become lawyers.

This sounds dire, but remember the adaptation. Software, for example, has been automating for 50 years — punch-card feeders are long gone. During the dot-com era, website developers were supposed to be obsolete any day now, and a quarter-century later they’re still here. Because the work got more complicated even as the basic stuff automated.

So the vast majority of that 80 to 90% will reskill, including software, finance, marketing and managers.

The problems are those clerks and admins, secretaries, sales assistants, customer service, payroll…HR. Heaven help you if you’re a diversity consultant who doesn’t know how to flip a burger.

I’ve argued that AI will be the opposite of the Industrial Revolution: Instead of replacing routine physical jobs, it replaces routine white collar jobs. With robots coming decades later since you need one AI for 8 billion people but you need 5 robots per McDonald’s.

This creates a generation-long blue collar boom as automation itself makes us rich, which raises demand — and pay — for blue-collar jobs. But it will absolutely redistribute income — and power — from high-income, high-education, largely female white collar workers to the plebs.

This is terrifying for Karen — she already makes less than the plumber, she would make less than the Uber guy.


This article was published at Brownstone Institute and republished from the author’s Substack.

AI firms flex lobbying muscle on both sides of the Atlantic


By AFP
April 25, 2026




Daxia ROJAS

AI developers are ramping up efforts to win over the hearts and minds of officials in Europe and the United States, hoping to sway governments as they weigh high-stakes regulatory frameworks for the ever more powerful technology.

Flush with cash, the firms are also wooing the general public, insisting that artificial intelligence will be a force for good — and not a destroyer of jobs or an existential threat for humanity.

ChatGPT maker OpenAI unveiled this month a 13-page “Industrial Policy for the Intelligence Age” that calls for new taxation and expanded safety nets to ensure society withstands the arrival of superintelligent systems.

It has even bought TBPN, a technology-focused talk show, to help shape the narrative.

But the policy document also came just days after a public backlash forced the company to halt plans for a sexually explicit chatbot.

OpenAI has also faced legal challenges from families of teenagers who say ChatGPT harmed young people and even contributed to suicides, prompting the company to introduce an age-verification system.

“This is a turning point” for the industry, and companies “are spending a fortune to try to get favourable measures passed in their patch”, said Alexandra Iteanu, a Paris-based lawyer specialising in digital law.

– Politicians in pocket? –

The AI industry has transformed Washington lobbying at extraordinary speed, with more than 3,500 federal lobbyists — one-fourth of the total — working on AI issues last year, a 170 percent increase over three years, according to Public Citizen, a consumer advocacy group.

The established giants like Meta, Google and Microsoft still dominate spending, but AI start-ups like OpenAI and Anthropic have rapidly built out their Washington presence, hiring elite firms and expanding in-house policy shops.

Anthropic, for example, has focussed its message on promoting AI safety and tighter regulation.

But OpenAI is also actively pushing the industry’s top legislative priority of preventing US states from passing their own laws governing AI, an effort that has twice failed in Congress but remains very much alive, backed by a sympathetic White House.

The influence campaign has moved into electoral politics, with a pro-AI campaign called Leading the Future assembling a $100 million war chest to back AI-friendly candidates in the 2026 midterms.

President Donald Trump, a fierce opponent of AI regulation, counts OpenAI’s cofounder Sam Altman and its president Greg Brockman among his biggest donors.

European regulators are also feeling the heat, with the French start-up Mistral recently presenting in Brussels a 22-point plan to accelerate AI development on the Continent.

Lobbying outlays by the tech industry have surged 55 percent since 2021 to reach 151 million euros ($177 million) last year, according to a study by the Corporate Europe Observatory and LobbyControl, a nonprofit.

– ‘Concentration of wealth’ –

For Margarida Silva of the Centre for Research on Multinational Corporations (SOMO), a Dutch nonprofit, AI firms are working from the playbook of the oil and tobacco industries, but with one major difference.

“They’re just the wealthiest companies in the world, so they have a lot of money that they can use to put towards lobbying,” Silva said.

“When you have such intense corporate lobbying that is based on having such a concentration of wealth, and that is standing in the way of public interest regulations… we are really talking about a democratic threat,” she added.

Many executives also cultivate friendships with elected officials to have “privileged channels” with public administrations, said Charles Thibout, a political science professor at Sciences Po Strasbourg in eastern France.

He noted the phalanx of tech moguls at Trump’s inauguration last year, and the close ties between Mistral’s cofounder Arthur Mensch and French President Emmanuel Macron.

Political leaders are often keen to be seen with AI’s top names, Thibout added, if only to help get some of their huge development spending for their states or regions.

But “lawmakers are not fooled”, said Iteanu, as enthusiasm for AI has not dispelled public wariness about its potential consequences.

Despite the colossal spending in the United States, for example, opinion polls regularly show that Americans remain highly sceptical about the technology’s benefits, and more worried that it spells doom for millions of jobs.



China’s DeepSeek releases long-awaited new AI model

By AFP
April 24, 2026


ChatGPT maker OpenAI's initiative to help countries build infrastructures for 'sovereign' artificial intelligence systems comes as it faces competition from China-based DeepSeek - Copyright AFP NICHOLAS KAMM

Chinese startup DeepSeek released a new artificial intelligence model Friday, more than a year after it stunned the world with a low-cost reasoning model that matched the capabilities of US rivals.

DeepSeek-V4 “features an ultra-long context of one million words,” the company said in a statement on social media platform WeChat, hailing it as “cost-effective” in a separate announcement on X.

The announcement came as Meta said it planned to cut a tenth of its staff as it looks for productivity gains from the rest of the workforce while investing heavily in artificial intelligence. Reports said Microsoft was also looking to trim its ranks.

DeepSeek said the model, whose context length determines how much input it is able to absorb to help it complete tasks, “(achieves) leadership in both domestic and open-source fields across agent capabilities, world knowledge, and reasoning performance”.

A “preview version” of the open source model is now available, the company said.

DeepSeek-V4 is released as two versions, DeepSeek-V4-Pro and DeepSeek-V4-Flash, with the latter being “a more efficient and economical choice” because it has smaller parameters.

V4-Pro has 1.6 trillion parameters while V4-Flash has 284 billion; parameters are the internal values that shape a model’s decision-making ability.

The model has also been “optimised” for popular AI Agent products such as Claude Code, OpenClaw, OpenCode and CodeBuddy, the statement said.

“In world knowledge benchmarks, DeepSeek-V4-Pro significantly leads other open-source models and is only slightly outperformed by the top-tier closed-source model, (Google’s) Gemini-Pro-3.1,” the statement added.

Hangzhou-based DeepSeek burst onto the scene in January last year with a generative AI chatbot, powered by its R1 reasoning model, that upended assumptions of US dominance in the strategic sector.

This so-called “DeepSeek shock” sparked a sell-off of AI-related shares and a reckoning on business strategy in what was also described as a “Sputnik moment” for the industry.

The chatbot performed at a similar level to ChatGPT and other top American offerings, but the company said it had taken significantly less computing power to develop.

However, its sudden popularity raised questions over data privacy and censorship, with the chatbot often refusing to answer questions on sensitive topics such as the 1989 Tiananmen crackdown.

At home, DeepSeek’s AI tools have been widely adopted by Chinese municipalities and healthcare institutions as well as the financial sector and other businesses.

This has been partly driven by DeepSeek’s decision to make its systems open source, with their inner workings public — in contrast to the proprietary models sold by OpenAI and other Western rivals.

“China-made large AI models spearheaded the development of the global open-source AI ecosystem,” Chinese Premier Li Qiang told an annual gathering of China’s top decision-makers last month.

The AI race has intensified the rivalry between China and the United States, and the White House on Thursday accused Chinese entities of a massive effort to steal artificial intelligence technology.

“The US has evidence that foreign entities, primarily in China, are running industrial-scale distillation campaigns to steal American AI,” science and technology chief Michael Kratsios said in a post on X.

“We will be taking action to protect American innovation.”

Five things to know about Chinese AI startup DeepSeek


By AFP
April 24, 2026


Photo illustration shows the DeepSeek app on a mobile phone in Beijing - Copyright AFP/File GREG BAKER


Luna LIN

As DeepSeek releases its first major new artificial intelligence model in over a year — DeepSeek-V4 — here are five things to know about the Chinese startup:



– ‘Sputnik moment’ –

Founded by Liang Wenfeng in the eastern Chinese tech hub Hangzhou, DeepSeek started life in 2023 as a side project of Liang’s data-driven hedge fund that had access to a cache of powerful AI processors made by US chip giant Nvidia.

It shot to global attention in January 2025 with the release of its R1 deep-reasoning large language model, which sparked a US tech share sell-off.

Industry insiders were stunned by R1’s high performance — at a level similar to ChatGPT and other leading US chatbots — and DeepSeek’s claims to have developed it at a fraction of the cost.

Venture capitalist Marc Andreessen described it as a “Sputnik moment” — referencing the 1957 launch of Earth’s first artificial satellite by the Soviet Union that stunned the Western world.



– Censorship concerns –

Like other Chinese chatbots, DeepSeek’s AI tools eschew topics usually censored in the world’s second-largest economy, such as the 1989 Tiananmen crackdown.

That and data privacy concerns have led DeepSeek AI to be banned or restricted on government-issued devices in several countries, including the United States, Australia and South Korea.

However, its low cost and ease of deployment have made it a popular choice in developing countries, analysts say.

The company holds four percent of global market share for chatbots, according to web traffic analysis company Similarweb. ChatGPT dominates at 68 percent.



– Open source –

DeepSeek’s systems are open-source — meaning their inner workings are public, allowing programmers to customise parts of the software to suit their needs.

That is the same for other major Chinese AI players, including tech giant Alibaba, in contrast to the “closed” models sold by OpenAI and other Western rivals.

The Chinese government has trumpeted its lead in open-source AI technology, which it says can accelerate innovation.

“Chinese AI models are leading the way in the open-source innovation ecosystem,” National People’s Congress spokesman Lou Qinjian told policymakers this month.



– Startup boost –

The success of DeepSeek has galvanised China’s AI scene, despite hurdles posed by rivalry with the United States, and fears of a global market bubble.

Shares in two leading Chinese AI startups, Zhipu AI and MiniMax, soared on their market debuts in Hong Kong this year, and it has been a similar story for Chinese chipmakers such as MetaX.

Shi Yaqiong and her team at Beijing-based Jinqiu Capital told AFP there has been a “clear surge” in enthusiasm around Chinese AI — and competition among investors — since the DeepSeek shock.



– Chip smuggling reports –

DeepSeek’s rise has not been without controversy.

Reports, including in technology outlet The Information, say DeepSeek has been skirting a US ban on the export of top-end chips to China to train its new V4 model.

The Information said in December, citing six people with knowledge of the matter, that DeepSeek developed V4 using thousands of chips dismantled in third countries and smuggled to China.

DeepSeek did not respond to AFP’s request for comment. Nvidia also did not respond to AFP, but told The Information it had not seen any evidence of this and that “such smuggling seems farfetched”.


China’s top AI players


By AFP
April 24, 2026


Startup DeepSeek has shaken up the global AI scene with its "R1" model - Copyright AFP/File MLADEN ANTONOV

Luna Lin

China’s artificial intelligence boom is in full swing, with the release of a new large language model (LLM) by top startup DeepSeek on Friday highlighting the country’s rapid progress despite US export restrictions on advanced microchips.

Here’s a look at the companies, big and small, driving China’s AI ambitions:



– Legacy players –



Chinese internet giants Baidu, Alibaba and Tencent are racing to invest in AI, using existing vast user bases and cloud infrastructure to their advantage.

Search engine provider Baidu, sometimes called China’s Google, has been a vocal proponent of the potential of AI in the country for over a decade.

Although it has recruited prominent AI researchers and its “Ernie” tool was one of the country’s first AI chatbots, Baidu’s fortunes have remained tied to its massive search and online marketing business.

Alibaba, the e-commerce behemoth behind shopping platforms like Taobao, is known for its open-source “Qwen” AI models — popular with programmers worldwide because they can be freely customised.

The Qwen chatbot mobile app had more than 200 million monthly active users in January, according to AI ranking site AICPB.

Top gaming and social media firm Tencent, which launched an AI model in 2023 and a chatbot the following year, is seen as a cautious player.

Tencent’s founder, Pony Ma, recently vowed to increase investment in AI, reportedly calling it “the only field worth investing in” in January.



– Beyond TikTok –



ByteDance, the Chinese company behind TikTok, is increasingly shifting its focus to AI as pressure on its overseas social media business intensifies.

And it is going well: Doubao, ByteDance’s AI chatbot, is the most popular of its kind in China, with over 100 million daily active users.

This year, the firm’s slick AI video generator, SeeDance 2.0, raised concerns over copyright and potential future job losses with its cinematic-looking clips created using just simple prompts.



– China’s AI hero –



Startup DeepSeek started life in 2023 as a side project of a data-driven hedge fund, but shook up the global AI scene with its “R1” model in January 2025.

DeepSeek’s low-cost, high-performance R1 chatbot challenged assumptions of US dominance in what some have called the “Sputnik moment” for AI.

Its open-source approach has galvanised the country’s AI industry and accelerated the global diffusion of Chinese models.

Its newest V4 model, released Friday, promises performance similar to leading closed-source models at lower cost, according to the company.

DeepSeek-V4 features an ultra-long context of one million tokens and 1.6 trillion parameters for the Pro version — measures that determine how much input the model can absorb and its decision-making ability.

“In world knowledge benchmarks, DeepSeek-V4-Pro significantly leads other open-source models and is only slightly outperformed by the top-tier closed-source model, (Google’s) Gemini-Pro-3.1,” DeepSeek said in a statement on Friday.



– Startup ‘tigers’ –



The startups Zhipu AI, MiniMax and Moonshot AI are nicknamed China’s “AI tigers” — challenging legacy tech giants on AI foundation model research.

Zhipu AI emerged from the prestigious Tsinghua University and was initially known for its strong focus on computing research.

The firm is a major provider of chatbot tools to Chinese businesses, and the performance of its latest “GLM-5” model impressed developer communities.

MiniMax targets the consumer market with its multimedia tools, from AI companions to video generators.

Both Zhipu and MiniMax saw their stock prices soar when they went public in Hong Kong in January, but both have also faced challenges.

A year ago, Washington put Zhipu on its export control blacklist over national security concerns, while Disney and other US entertainment outfits are suing MiniMax for copyright infringement.

Moonshot AI’s Chinese name, Yue Zhi Anmian, pays tribute to Pink Floyd’s album “The Dark Side of the Moon”, reflecting the rock music passion of its co-founder Yang Zhilin.

Its latest offering, “Kimi K2.5”, is one of the most popular AI models on developer platform OpenRouter.

Kimi K2.5’s success is reflected in the company’s revenues: Moonshot AI reportedly matched its entire 2025 full-year revenue within weeks of the model’s launch.



OpenAI CEO apologizes to Canadian town for not reporting mass shooter


By AFP
April 24, 2026


A vigil after a mass shooter killed eight people in Tumbler Ridge, BC, 
Canada, in February - Copyright AFP/File Paige Taylor White

OpenAI’s CEO Sam Altman has apologized to a Canadian town devastated by a February mass shooting, saying he was “deeply sorry” the company did not notify police about the killer’s troubling ChatGPT account.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI said it did not inform police because nothing pointed toward an imminent attack.

Canadian officials condemned OpenAI’s handling of the case and summoned company leaders to Ottawa to explain its security protocols.

The family of a girl who was shot and gravely wounded at the school is suing the US tech giant for negligence.

In a letter Thursday addressed to the community of Tumbler Ridge, published Friday by the local news site Tumbler RidgeLines, Altman said “no one should ever have to endure a tragedy like this.”

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote.

“While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

Van Rootselaar killed her mother and brother at the family’s home before heading to the local secondary school, where she shot dead five children and a teacher.

She died of a self-inflicted gunshot wound after police entered the building.



Canada’s Cohere buys Germany’s Aleph Alpha to take on US AI giants

By Chris Hogg
DIGITAL JOURNAL
April 24, 2026

File photo: Aidan Gomez, co-founder and CEO of Cohere, speaks at Toronto Tech Week. - Photo courtesy Toronto Tech Week

Two of the most prominent AI companies outside the United States are joining forces.

Cohere, the Toronto-based enterprise AI firm founded in 2019, is acquiring Aleph Alpha, a German company founded the same year and once positioned as Germany’s answer to OpenAI.


The deal, endorsed by the Canadian and German governments, was announced Friday in Berlin and aims to give enterprise and public sector customers a credible alternative to American AI dominance.


Cohere focused on building enterprise AI tools for businesses and governments, winning federal investment, government contracts, and commercial partnerships with companies like Bell and RBC along the way.

Aleph Alpha has taken a different path. After failing to keep pace with OpenAI and Anthropic on foundation model development, it abandoned that race and pivoted to helping governments and enterprises deploy AI they could control.

The combination brings those two stories together into a single company pitched as a transatlantic answer to Silicon Valley AI giants.

Financial terms of the acquisition were not disclosed by the companies.

German business daily Handelsblatt, which first reported the deal, valued the combined entity at roughly $20 billion, citing sources in government and industry.

Germany’s Digital Minister Karsten Wildberger announced the transaction at a press conference in Berlin on Friday, joined by Canada’s Minister of AI and Digital Innovation Evan Solomon, Cohere CEO Aidan Gomez, Schwarz Digits chief Rolf Schumann and Aleph Alpha co-founder Samuel Weinbach.

German Chancellor Friedrich Merz approved the deal, according to Handelsblatt.

The deal

Cohere will retain its name and operate dual headquarters in Canada and Germany, with Heidelberg becoming a second global headquarters.

Cohere shareholders will hold about 90% of the combined entity, with Aleph Alpha shareholders taking about 10%, according to Handelsblatt.

The acquisition is subject to regulatory approval.

Alongside the announcement, Germany’s Schwarz Group committed €500 million to Cohere’s upcoming Series E round. Schwarz, parent company of discount retailers Lidl and Kaufland, was already a lead backer of Aleph Alpha.

Cohere CFO Francois Chadwick told Reuters the company expects to close the funding round in the coming months.

The combined company will target regulated sectors including public services, finance, defence, energy, manufacturing, telecommunications and healthcare. Both the Canadian and German governments said they would use Cohere technology.

Cohere has raised about $1.6 billion since its founding, with investors including Nvidia and AMD. Its most recent valuation was roughly $6.8 billion after a $500 million raise in August 2025.

That’s a fraction of what OpenAI, Anthropic and Google have mobilized for training infrastructure, talent and chip supply.

Aleph Alpha’s pivot away from foundation model development brought its business model closer to Cohere’s. Its existing contracts with the German federal ministry for digital affairs and the Baden-Württemberg regional government give Cohere a direct foothold in European public sector procurement.

What this means for Canadian technology leaders

At the press conference, Solomon framed the deal as a stand against concentrated power.

“We need to make sure that the power does not rest in the hands of a few dominant players,” he said. Wildberger said the two countries were “creating a global AI leader.”

The announcement builds on the Sovereign Technology Alliance that Canada and Germany signed earlier this year.

It also extends a visible stretch of Canadian government support for Cohere.

Ottawa finalized a $240 million investment in March 2025 to help fund Cohere’s $725 million data centre project in Cambridge, Ontario. In August 2025, the federal government signed a memorandum of understanding with Cohere to explore deploying its tools across public service operations.

Commercial momentum has followed. Bell Canada announced its own partnership with Cohere last July, making it Cohere’s preferred Canadian infrastructure partner. Last week, Innovation, Science and Economic Development Canada began rolling out Cohere’s North platform to up to 1,400 staff.

Industry Minister Mélanie Joly told The Logic last week that Canada needs a “trading bloc” of like-minded countries to counter U.S. protectionism and the power of hyperscalers. She called Cohere “a gem” and said her goal in conversations with the German government was to “build a national champion.”

Cohere has publicly committed to maintaining Canadian operations. The company retains its name, will continue to operate from Toronto as its primary headquarters, and the Canadian government remains a customer. Cohere’s CFO told Reuters the merger will help Cohere reach more customers in regulated markets.

The Series E close, expected later in 2026, will be the next concrete signal on how investors value the combined company and how the capital gets deployed between the two headquarters.

Aleph Alpha’s existing European public sector contracts will shape part of the product roadmap, which Canadian enterprise buyers should factor into their planning.
Final shots

Watch the Series E close. The valuation investors settle on for the combined company will signal more about its near-term trajectory than any ministerial statement.

For other Canadian AI firms, the Cohere-Aleph Alpha tie-up is a model worth studying: scale reached through partnership, with sovereignty framing as the political cover.

Watch the product roadmap. European public sector contracts will shape product decisions for a company now marketed as transatlantic.



Written by Chris Hogg

Chris is an award-winning entrepreneur who has worked in publishing, digital media, broadcasting, advertising, social media and marketing, and data and analytics. Chris is a partner in the media company Digital Journal, the content marketing and brand storytelling firm Digital Journal Group, and Canada's leading digital transformation and innovation event, the mesh conference. He covers innovation's impact where technology intersects with business, media and marketing. Chris is a member of Digital Journal's Insight Forum.

‘Clearly me’: AI drama accused of stealing faces

By AFP
April 24, 2026


This photo illustration taken in Hong Kong shows phones displaying screenshots of a video from Chinese model and influencer Christine Li accusing an AI microdrama of stealing her likeness without consent - Copyright AFP Mahmoud RIZK


Sophia Xu and Purple Romero

Christine Li is a model and influencer, but not an actor, so when she saw herself playing a cruel character in a Chinese microdrama she felt bewildered, then angry and afraid.

The 26-year-old is one of two people who told AFP their likenesses were cast without consent in the AI-generated show “The Peach Blossom Hairpin”, which ran on Hongguo, a major microdrama app owned by TikTok parent company ByteDance.

Li plans to sue the drama makers and the platform, highlighting new legal and regulatory grey areas created by artificial intelligence.

“I was genuinely shocked. It was clearly me,” said Li, who lives in Hangzhou in eastern China.

“It was so obvious that they used a specific set of photos I took two years ago” and had posted on social media, she said.

Microdramas are ultra-short, online soap operas hugely popular in China and elsewhere.

When Li’s fans alerted her to the series, she was horrified to find her digital twin shown slapping women and mistreating animals.

“I also felt a deep fear. I kept wondering what kind of person would do something like this,” Li said.

Hongguo hosts thousands of free, bite-sized shows — both live-action and AI-generated — whose episodes are two or three minutes long.

As of October, the platform had around 245 million monthly active users, according to data cited by Wenwen Han, president of the Short Drama Alliance.

A Hongguo statement in early April said it had taken the series down because the producers had violated platform rules and contractual obligations.

– ‘Sleazy’ antagonist –

AI’s ability to mimic real people has sparked global concern for actors’ jobs, and over such deepfakes being used for scams and propaganda.

Li and a man who says he was portrayed as her AI husband in the series, which became a hit last month on Hongguo, spoke out online about their separate unwelcome discoveries.

But even as their stories sparked a public outcry about AI ethics, AFP saw that “The Peach Blossom Hairpin” kept running for days before its removal, with the disputed characters quietly replaced.

The man, a stylist specialised in traditional Chinese clothing and make-up, had posted photos of himself in costume on the Instagram-like Xiaohongshu app.

Like Li, he was upset by the “ugly” portrayal of his likeness as a “sleazy” antagonist in the show.

“Will it have an impact on me, on my job, on my future work opportunities?” said the man, who asked to use the pseudonym Baicai.

To keep audiences hooked, microdramas are often full of shocking, larger-than-life moments.

Li and Baicai both showed AFP their original photos and the characters in “The Peach Blossom Hairpin”, which bore a strong resemblance.

– Legal risk –

For low-budget AI microdramas, Chinese regulations say platforms must be the primary checkpoint for potentially dodgy content.

If they do not carry out mandatory content reviews, the videos will be forcibly taken down, according to the National Radio and Television Administration.

If the platforms were aware of any infringement but failed to act on it, affected parties can alert China’s cyberspace authorities, which can impose administrative penalties, according to Zhao Zhanling, a partner at Beijing Javy Law Firm.

Hongguo said in a second statement this month it would continue to strengthen how it reviews content and how it authorises creators, among other steps.

It said it had dealt with 670 AI microdramas that violated regulations, with most taken down, and warned it would crack down on repeated breaches.

When approached for comment, parent company ByteDance referred AFP to the two Hongguo statements.

Li and Baicai say they need more information from Hongguo to confirm the identity of the drama’s creator — with two companies potential candidates.

One is linked to a verified account on the Chinese version of TikTok that also published the series. Another is listed as the drama’s producer on an official Chinese filing system.

AFP contacted both firms but received no response.

Using AI to slash costs may be tempting in the fast-growing, multi-billion-dollar microdrama market.

But featuring someone in a demeaning way without permission “may constitute an infringement of both portrait rights and reputation rights”, said Li’s lawyer Yijie Zhao, from Henan Huailv Law Firm.

– ‘Associated with controversy’ –

National regulations require microdrama makers to register to obtain a licence — a step made mandatory for AI-generated animations from this month.

But producers could remain in the shadows by registering temporary outfits, Zhao said, while some allegedly use overseas servers to hide.

In 2024, a Beijing court ordered a company to apologise and pay compensation to a celebrity after its AI software enabled users to produce a virtual persona using his photos and name that could exchange intimate messages.

But lawyers told AFP that compensation for plaintiffs like Li likely won’t amount to much due to the limited commercial value of an ordinary likeness.

Li worries that the saga may cost her opportunities in the modelling industry, as she is now “associated with controversy”.

Baicai has not launched legal action, but hopes to see more measures from regulators and platforms to protect people like him.

“There are probably plenty of cases with unknown victims,” he said.

Anthropic says Google to pump $40 bn into AI startup

By AFP
April 24, 2026

Anthropic CEO Dario Amodei has visited the White House as the startup stands its ground regarding safe use of its artificial intelligence - Copyright AFP/File CLEMENT MAHOUDEAU

Google is planning to invest up to $40 billion in Anthropic, the artificial intelligence firm confirmed Friday, expanding a long-standing alliance between the two companies.

The investment builds on a partnership in which Anthropic will use custom Google chips and cloud computing services to power its technology.

An Anthropic representative confirmed to AFP that the agreement sees an initial $10 billion investment from Google. The remaining $30 billion will depend on meeting performance milestones.

The announcement came just days after Amazon revealed plans to boost its collaboration with Anthropic with a new $5 billion investment, and a plan to invest $20 billion more if performance goals are met.

For its part, Anthropic said it has committed to spending more than $100 billion on Amazon Web Services (AWS) technology to power AI in the coming decade.

Anthropic is among AI sector rivals spending tens of billions of dollars on computing infrastructure to lead in the technology.

Anthropic said in early April that it had tripled its annualized revenues quarter-on-quarter to over $30 billion — outpacing OpenAI for the first time.

Anthropic chief executive Dario Amodei visited the White House, where both sides struck a friendly tone, following a dispute over the tech company’s refusal to grant the military unconditional use of its AI models.

Earlier this month, Anthropic announced its newest AI model Mythos, withholding it from public release due to its potential cybersecurity risks.

However, Anthropic said this week that it is investigating unauthorized access to Mythos, a powerful model which the company itself worries could be a boon for hackers.

Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers.


AI united Altman and Musk, then drove them apart


By AFP
April 24, 2026


As OpenAI chief Sam Altman and tech tycoon Elon Musk battle in court, artificial intelligence rivals continue racing ahead with the technology - Copyright AFP Kirill KUDRYAVTSEV

Thomas URBAIN

Elon Musk and Sam Altman bonded over artificial intelligence in a project that became OpenAI, but a clash of visions will see the polarizing figures face off in court in a trial that opens next week.

Silicon Valley lore traces their first meeting back to 2012, in an encounter prompted by investor Geoff Ralston.

Nearly 14 years younger than Musk, who was born in June of 1971, Altman was said to be impressed by the Tesla chief’s powers of persuasion.

While yet to reach the age of 30, Altman already had a tech world reputation as a brilliant dealmaker.

Altman’s unassuming, friendly demeanor contrasted sharply with Musk’s abrasive style, but they shared an entrepreneurial spirit and a penchant for risk-taking.

Libertarian Musk and the apolitical Altman found common ground in a shared belief about the future of AI.

Musk saw Google and its subsidiary DeepMind as bent on creating AI that thinks sharper than people do, with little regard for controlling it.

Just months before OpenAI was officially founded in early 2015, Altman published a blog post calling for measures to “limit the threat” posed by AI, complete with concrete proposals.

This philosophy was set as the guiding principle at OpenAI: born a non-profit organization dedicated to the responsible advancement of AI and boasting a commitment to making its research and source code freely accessible to the public.

Altman successfully pitched the OpenAI concept to Musk, who went on to invest at least $38 million to get the nascent entity established.



– Altruistic AI? –



In February of 2018, the South Africa-born entrepreneur behind Tesla, SpaceX and other companies resigned from OpenAI’s board, ostensibly to focus on his other commercial endeavors.

Behind the scenes, however, Musk and Altman were clashing over a proposed shift of OpenAI to a for-profit business that could attract investors in the capital-intensive AI race.

OpenAI completed that transformation in 2025, some three years after its ChatGPT digital assistant made AI and those who build it all the rage in the tech world.

After years as a champion of an approach in which AI serves society rather than corporate coffers, Musk muddied his message by launching a private xAI startup in July of 2023.

The mission statements for xAI and its chatbot Grok give scant mention to dangers of the technology even though Musk once called it an “existential threat” to humanity.

The rift between Altman and Musk widened as the world’s richest man moved to Texas and became an ally of US President Donald Trump while OpenAI stayed in San Francisco and focused on improving its technology.

Musk has used his social media platform X to go on the offensive with posts that include likening Altman to a “Game of Thrones” character seen as a master manipulator.

Musk, 54, even filed a lawsuit seeking to oust 41-year-old Altman as OpenAI chief executive. Selection of jurors in a trial for that case is set for Monday.

Altman has fired back on social media, contending Musk’s agenda is to rule over the most powerful AI.

“The current struggle between the two billionaires is shaped by their egos and belief that the winner will control a new technology,” contended Darryl Cunningham, author of a book about Musk.

“It seems doubtful to me that either can control AI.”

Billionaire Elon Musk enters courtroom showdown with OpenAI



By AFP
April 25, 2026


Elon Musk. — © AFP Brendan SMIALOWSKI
Benjamin LEGENDRE

Jury selection is to begin Monday in a high-profile legal battle between billionaire Elon Musk and artificial intelligence startup OpenAI, which he accuses of betraying its non-profit mission.

The clash in a courtroom across the bay from San Francisco pits the world’s richest man against a startup that Musk once backed and now competes against in the booming AI sector.

OpenAI’s ChatGPT is a formidable rival to the Grok chatbot made by Musk’s xAI lab.

While the lawsuit filed by Musk is part of a feud between him and OpenAI chief executive Sam Altman, it spotlights a debate over whether AI should ultimately benefit the privileged few or society as a whole.

Court filings lay out how Altman persuaded Musk in 2015 to back OpenAI as a co-founder of a non-profit lab whose technology “would belong to the world.”

Musk pumped some $38 million into the lab before he left.


Elon Musk (l) and OpenAI chief executive Sam Altman are both on the witness list for the trial in a case filed against the startup by the Tesla tycoon – Copyright AFP/File Frederic J. BROWN, Jung Yeon-je

OpenAI is now valued at $852 billion, with Microsoft among its backers, and is preparing to go public on the stock market.

The judge presiding over the trial is aiming for a jury to decide by late May whether OpenAI broke a promise to Musk in its drive to be a leader in AI or just smartly rode the technology to glory.

– Musk duped? –

Musk argues in his lawsuit that he was deceived about OpenAI’s mission being altruistic.

The tycoon cites an email from Altman in 2017 claiming that he remained “enthusiastic about the non-profit structure” of their AI venture after Musk threatened to cut off funding for the lab.

Just a few months later, however, OpenAI established a commercial subsidiary, facing the need to invest hundreds of billions of dollars in data centers to power its technology.

Over the course of the following two years, Microsoft pumped billions of dollars into OpenAI, and the tech stalwart’s stake in the startup is now valued at about $135 billion.

Microsoft chief executive Satya Nadella is among those slated to testify at the trial.

– Aimed at Altman –

Along with calling for OpenAI to be forced to revert to a pure nonprofit, Musk’s suit urges the ousting of Altman and OpenAI co-founder and president Greg Brockman.

Musk is also seeking as much as $134 billion in damages and to have the court make OpenAI sever ties with Microsoft.

During pre-trial hearings, US Judge Yvonne Gonzalez Rogers mused that Musk’s team seemed to be “pulling numbers out of the air” when it came to calculating damages.

If the jury sides with Musk, it will be left to Rogers to determine any remedies or payment.

In what OpenAI has dismissed as a public relations stunt, Musk has vowed that any damages awarded in the suit will go to the startup’s nonprofit foundation.

– Quest for control? –

OpenAI internal communications brought to light by the lawsuit reveal tensions that culminated in the temporary ouster of Altman as OpenAI chief executive in late 2023.

Musk’s legal team highlighted a 2017 entry in Brockman’s personal journal reasoning that it would be lying for Altman to publicly assert that OpenAI would stay a nonprofit if it then became a corporation a short time later.

OpenAI now has a hybrid governance structure giving its nonprofit foundation control over a for-profit arm.

In court filings, OpenAI countered that its break-up with Musk was due to his quest for absolute control rather than its nonprofit status.

“This case has always been about Elon generating more power and more money for what he wants,” OpenAI said in a post on X, a platform Musk owns.

“His lawsuit remains nothing more than a harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor.”

The startup noted that days after Musk entered the AI race in 2023 he called for a 6-month moratorium on development of advanced AI.


Op-Ed: GPT 5.5 — Hype, obviously, but redefining the AI environment as well


By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 24, 2026


OpenAI says it is building a 'superapp' that combines ChatGPT, a coding tool, online search, and AI agent capabilities - Copyright AFP SEBASTIEN BOZON

If you read OpenAI’s blurb on GPT 5.5, “Introducing GPT 5.5,” you’ll notice a very upbeat description of the new platform, but with an interesting addition in plain sight.

OpenAI is clearly trying to address the many issues arising from prior platforms and consumer grumblings on multiple levels. This is critical because so far, the responses to AI problems have been largely useless and anything but reassuring.

This is important. AI hype has been seriously getting on people’s nerves, notably the people paying for it. Most professional IT commentators are far less than impressed with the constant sales pitch, particularly when it includes glossing over major issues like security and just getting things done properly.

There are problems. There are risks. We’re also talking about big outlays for businesses and significant challenges in core functions for just about everybody.

The market isn’t helping itself with absurd situations like its idiotic dismissal of Software as a Service, aka SaaS, which assumes coding is somehow a thing of the past when it’s absolutely integral to every step forward with AI. Future coding for AI is likely to look very different and could evolve into something totally new overnight. It will need to be hyper-efficient, perhaps totally rewritten to manage basic operations. You will need SaaS like you need to breathe.

What’s desperately needed is clarity, and above all, credible responses to criticisms. This clarity needs to be at the consumer, tech, and business levels, and structured to address all of the issues.

This is why “Introducing GPT 5.5” needs to be seen as an actual response. OpenAI have gone to some lengths to try to fit all of these minefields into one press release, and they’ve managed to keep it interesting.

I won’t rehash the blurb. Just read it and watch the priorities emerge. Suffice to say that it’s still a sales pitch, but at least it’s believably ballpark for addressing this daily-growing encyclopaedia of situations. They’ve even managed to work heuristics into the mix, along with the thankless and much-bitched-about frontiers of vibe coding, inference for prompts, and so on.

Now we can get to the environment. AI is creating new environments for itself and the world at incredible speed. It’s easy to forget that most of the current issues weren’t even beginning to be mainstream a year ago. This level of response from a major player like OpenAI is new, and it indicates a degree of market awareness that wasn’t particularly noticeable a year ago either.

So what is this new environment? It’s a patchy, buggy, vague businesscape, a consumer wading pool with sudden deep ends, and more. It’s an arena for half-baked employment issues. It’s also a self-inflicted problem for Big AI.

The adoption and deployment of AI are looking pretty chaotic. It’s nebulous in areas where it needs to be well-defined. It’s looking like an ADHD version of the early internet.

What’s needed is much more clarity, and plenty of it, preferably in LEGO form. Digitization required systematic training of everybody on Earth, the definition of workplace practices and protocols, and above all, personal-level familiarity with the real-world applications.

OpenAI doesn’t actually say in so many words that they’ve at long last declared war on slop. They’re just constantly talking about refining all the areas that generate slop. To be fair, there’s a clear drive toward quality control and functional oversight.

Can somebody tell me why any of these countless AI dysfunctions are tolerated at all by anyone? Nobody needs expensive chatty evasive unreliable automated idiots. You can get verbose excuse factories at any meeting for free. This is business.

Almost unnoticed in “Introducing GPT 5.5” is a very welcome nod and acknowledgement of the super high-value scientific AI. This is the true Golden Goose of AI. How they fitted it into the blurb, I don’t know, but it desperately needed to be there. This level of operations is bread and butter for top level AI, and cutesy chatbots are nothing by comparison.

That’s good news for the high end of AI applications. The public face of AI is somewhere between Ronald McDonald and Freddy Krueger.

The overall look is terrible. “It’s great, it’s wonderful, it’s dangerous, and we may or may not know which at any given moment” isn’t good enough. The pity of it is that this look is pretty accurate.

This bizarre look creates instant sales resistance as well as some pretty justifiable fear and loathing. It’s totally counterproductive.

Some questions:

How much of future AI development looks at objectives like modelled values for users?

Can you put an ROI on a given AI task before you start it?

What are the opt-out and standalone choices for people who don’t want third-party involvement in high-value IP work?

Is there such a thing as an Off switch?

How do people pin down AI management and fixes into a demonstrable and quantifiable contract service cost? That’s not at all clear.

At what point does AI stop being an undefinable threat?

GPT 5.5 may well be great. It’s the business environment you need to talk about.


Op-Ed: Who needs AI? You do and you don’t.

By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 23, 2026


A Bernstein Research analyst says Open AI CEO Sam Altman has the power to crash the global economy or take everyone 'to the promised land' as the startup behind ChatGPT races to build artificial intelligence infrastructure costing billions of dollars - Copyright GETTY IMAGES NORTH AMERICA/AFP JUSTIN SULLIVAN

The question of who needs AI is a very fast-moving target. Using AI is about adapting it to your skills. Not you adapting to it. Things get personal when it’s your own work.

Case in point – As usual, when writing an article, I open a Word doc and turn off Copilot. I write about 1 million words a year, according to Grammarly. I don’t need Copilot largely because it gets in the way of flow, continuity, even basic syntax, and expression. Both Copilot and Grammarly have problems with tenses, syntax, and missing the whole point of sentences entirely.

I’ll believe in the omniscience of LLMs when I see it. AI is an actual nuisance in creative writing. That is a real issue. My creative writing is up to Grade 18, according to Word. I do not need an automated pedant cluttering up my work and consuming time in the process.

Then there’s functionality. AI is a lot better at that. I also go straight to the AI summary when I search. It’s useful, unlike many of the actual searches. As a research tool, it’s always ballpark, unlike some of the waffly and downright useless additives to searches I see daily on Google.

This is where “need” defines itself. This match of skills, functional needs, and quality control is unavoidable. AI is a new factor in your work, one with dubious credentials, much more dubious applied value, and laughable sales pitches in so many ways.

Consider the pre-AI environment:

Most businesses reported that they were doing brilliantly and that they were therefore geniuses.

Basic business was just a digital version of paper business.

Admin was extremely efficient by any pre-digital standards.

Data had to pass scrutiny, particularly on balance sheets.

Data wasn’t unquestioned and unquestionable.

In short, looking at this heavily biased description, you could make a good superficial case that nobody ever needed AI.

Er, um, ah, well… Not quite. Data loads tracked Moore’s Law almost to the letter. The tech eventually overran those parameters, and the Cloud took most of the weight.

It’s just as easy to argue that AI has stepped in to meet an inevitable demand for much higher processing efficiency and increased operational scope. That, at least, if nothing else, is perfectly true.

This isn’t quite a replay of the Industrial Revolution. It’s a very klutzy, badly mismanaged version of it, with the tech stumbling and bizarrely turning itself into the lowest common denominator for so many tasks. The Spinning Jenny is now an AI gopher agent that books things for you and does other mundane work.

Even the reliability and effectiveness of these AI agents is now seriously and rightly in question. Employment, education, and everything else people seem determined to avoid have been sucked into AI’s black-hole-like maw. We’re now arguing about whether kids should be able to do their homework themselves.

To coin a phrase or two –

“So far, so what?”

“Chatbot, schmatbot.”

Pretty mystic, don’t you think? But accurate.

In some ways, nothing has changed, just verbose and largely useless perspectives. The work still needs doing. Efficiency is measured by suspiciously obliging productivity metrics that look more like excuses than hard assets.

People and businesses don’t need this level of ambiguity. Life’s tough enough without a barely comprehensible tech you don’t necessarily like or want to understand. Getting an AI skill set is undeniably useful and valuable, but does that make the tech any better?

In the interests of balanced journalism, I asked Grok, “Who needs AI?” and got back a pretty straightforward response with an interesting quote:

The deeper truth: AI needs humans more than the reverse in many ways. It excels at scale, speed, and pattern-matching but lacks true judgment, emotional intelligence, ethical nuance, leadership, and the ability to handle true novelty or messy real-world context without guidance.

As a writer, I know a scripted response when I see one. This is chapter and verse of a considered PR response as well as an answer. Not bad, really. Maybe LLMs do earn their keep.

However, and in fairness to a class of tech that is going to be underfoot for generations to come, what if it’s an honest answer? It’s correct as far as it goes. AI can’t lead. It also can’t process so many issues that are out of its depth.

That leads us to a slight and currently very unfashionable conundrum.

Could it be that AI is an unintentional risk of honesty?



Op-Ed: Anthropic Mythos — The monster that could be a saviour?



By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
April 23, 2026


Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books - Copyright AFP JOEL SAGET

Anthropic’s Mythos is causing a massive flurry of interest, at frankly ridiculous levels, in AI as a security threat. This threat has been monotonously predicted by cybersecurity experts for years.

The main difference is that it’s now visible in a tangible form. A report that Mythos was accessed by unauthorized users hasn’t helped.

Anthropic has been very cautious and understandably reticent about Mythos. It seems that Mythos has a unique capacity for finding flaws in IT security. “Unauthorised access” is exactly what you don’t want with this capability.

There are other possible issues. If Mythos can be duplicated, or its flaw-finding capabilities can somehow be cloned, the possibilities are all too obvious. The IP damage alone could be catastrophic.

The cybersecurity angle is much more dangerous. Anthropic aren’t suffering from some sort of implied hypochondria. According to some reports, Mythos can crack smaller IT systems, which could be a direct lead into other larger systems.

That’s another major issue. This type of breach is a routine existing problem in cybersecurity. It’s a backdoor way of getting into associated businesses and other systems. Even if the big systems are OK, these compromised systems are likely gateways.

AI systems add a level of difficulty: they are broad in their scope of operations, able to generate agents, and infamous for their weird behaviors. Now add an AI that specializes in cybersecurity going rogue.

Put it this way: can anyone on Earth create a prompt for an AI dysfunctional rampage? Yes.

You definitely do not need a cybersecurity specialist AI going on a bender in this environment. Even a relatively minor event can escalate into a market panic, with or without serious damage. It’s a monster in too many ways.

OK, this is where it gets interesting.

Mythos seems to have a real major asset ready to go in plain sight. This expertise in finding flaws could be a huge plus for global cybersecurity.

Try this for a bit of tenuous logic:

AI can generate a sort of SSL (Secure Sockets Layer) defense, a multilayered hard target like the one used by financial institutions. It can do this in seconds.

Now the “saviour” bit. This is fascinating.

AI can predict. This is where Mythos may have a huge advantage. If you’ve ever played against Stockfish, the super-chess computer, you know it can plot at least 40 moves in advance. Apply this to “breach theory”: an AI prediction of how a breach behaves, and of the possible moves of a hack.

Hacks have a weakness, too. Some things must be done to access and run anything. AI can monitor behaviors and predict next steps by bad actors long before they happen. It can block actions, redirect them, and/or simply stop them in real time.

This is existing tech. You don’t even need to hunt for move-prediction code, and it would be easy for an LLM to train on as required. LEGO for cybersecurity, in effect. Mythos could easily outperform any hack, and at AI speeds.

Mythos knows where the weaknesses are. It can predict how a hack has to behave to do anything.

The problem may be its own solution.

___________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.

Anthropic cyberattack highlights how these models are becoming harder to contain

By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
April 22, 2026


Investigators and researchers are still learning of the scope of the cyberattack which has hit US government agencies and other victims around the world - AFP

With both OpenAI and Anthropic introducing more “cyber-permissive” models (in tightly controlled releases), advanced vulnerability discovery and exploit reasoning are becoming more accessible and potentially harder to contain. A recent incident demonstrates this.

This week, PC Mag reported that unauthorised users were able to access Anthropic’s Mythos model. The rogue agents reportedly accessed the server by simply changing a model name.


Anthropic’s Mythos model is a powerful AI tool capable of identifying undiscovered security holes that have existed for decades.

Bloomberg has reported that an as-yet-unnamed group tried multiple ways to gain access to the AI model before finally getting through to the system via a third-party vendor.

The issue demonstrates how easily such systems can be exposed. It signals that these AI capabilities are already out there, and in the wrong hands they can accelerate how quickly vulnerabilities are detected and exploited.

Consequently, software teams will need to look at how to harden their code so those vulnerabilities cannot be exploited to begin with.

Several experts reached out to Digital Journal to explain the ramifications and ongoing significance of the incident.

Patching is expected

The first to do so is Steve Povolny, Vice President of AI Strategy & Security Research at Exabeam. Povolny focuses on the seeming simplicity of the attack: “The reality is, Pandora is out of the box. If it was as relatively easy as it sounds to gain access to the world’s most talked-about security model, it’s very likely a much larger group will have access to Mythos far sooner than originally intended.”

He then turns his attention to the future, considering: “What will be most interesting is observing whether researchers or adversaries can leverage the tech more effectively – will we see widespread exploitation or widespread discovery and patching first? Or will this be another DeepSeek moment? Overreactions and underwhelming impact. Either way, should be interesting to watch this unfold.”

Difficult steps ahead

The second IT specialist to pitch in is Isaac Evans, founder and CEO of Semgrep. Evans seeks to put the incident in perspective: “This infiltration is a minor hiccup compared to the idea of someone exfiltrating the models’ weights, which would be a game-changing scenario, and one that has occurred in part before with the distillation of OpenAI models into Deepseek. Anthropic has to protect Mythos against distillation or outright theft.”

Evans then ponders the future move for Anthropic: “Mythos’ ability to find zero-days in so much of the software stack that SaaS vendors rely on is evidence that security bugs are plentiful, not scarce, in the software Anthropic and the broader community use. The security team at Anthropic has a very difficult job: securing the model on a software stack that was designed for high velocity over high assurance, against some of the most sophisticated threat actors in the world.”

He is also cautious about what happens next: “Until we are able to reach a new steady state by patching all of the vulnerabilities LLMs can find, expect a lot of successful offensive activity.”

Building offensive-grade AI

The third commentator is Gabrielle Hempel, Security Operations Strategist at Exabeam. Hempel is interested in how the attack was devised: “Any time you build a high-capability system and expose it even to a semi-distributed environment (partners, contractors, ‘trusted’ ecosystems), you’re expanding your attack surface beyond what you can realistically control. While everyone seems focused on securing against sophisticated nation-state actors, we’ve increasingly seen third-party access paths becoming the weakest link.”

She next looks at the inherent weaknesses that opened the door for the attackers: “From a defender’s perspective, this is the point we’ve been reinforcing until we’ve gone blue in the face: your security perimeter isn’t just the infrastructure you own, it’s your entire supply chain.”

Stepping back, Hempel weighs up the situation of an offensive AI world: “I think the interesting thing is that everyone is going to focus on the headlines touting, “AI tool capable of cyberattacks falls into the wrong hands. The real problem, however, is that this model was never supposed to be broadly accessible, it was intentionally restricted to a small set of orgs due to dual-use risk, and it still leaked almost immediately due to a contractor environment. The uncomfortable truth here is that we are rapidly building offensive-grade AI capability into tooling and assuming that policy, contracts, and limited access lists are going to sufficiently control the sprawl.”