Saturday, March 14, 2026

Just Catholic


In Trump's Iran conflict, it's prosperity gospel vs. the Quran


(RNS) — Opposing philosophies, distilled from two ancient sacred texts, are colliding in horrific ways.



Motorbikes drive past a billboard depicting Iran’s late Supreme Leader Ayatollah Ali Khamenei, center, handing the country’s flag to his son and successor, Ayatollah Mojtaba Khamenei, right, as the late revolutionary founder Ayatollah Ruhollah Khomeini stands at left, in a square in downtown Tehran, Iran, Tuesday, March 10, 2026. 
(AP Photo/Vahid Salemi)

Phyllis Zagano
March 12, 2026
RNS

(RNS) — The 1979 collapse of the Iranian monarchy coincided with the publication of Christopher Lasch’s blockbuster book of ideas, “The Culture of Narcissism,” a critique of American celebrity, grandiosity and spiritual emptiness. In retrospect, the book explains the reasons Iran’s young radicals rose up against the Shah’s regime and the results of the revolution that put the first ayatollah, Ruhollah Khomeini, in power. It may also explain the reasons for the current war.

In the United States, a narcissistic “cult of the self,” as Lasch puts it, tended then (and clearly tends now) to self-aggrandizement and an unhealthy focus on personal image and consumption. The current administration is a case study of the problem, even as it wraps itself in so-called Christian nationalism.

In pre-revolutionary Iran, the overwhelming wealth of the monarchy, combined with aggressive modernization, presented Iranians with a worldview tilted toward an unattainable consumerism. They overthrew the Shah and his petrodollar trappings and replaced him with the austere presence of the supreme leader, a position that now appears to have become hereditary.


RELATED: America’s moral power is the first casualty in Iran

The face of the United States is the narcissistic — some say sociopathic — president who, though elected, can only be said to reign from the Oval Office, surrounded by gold leaf and billionaires. The face of Iran is the third in a series of hard-line clerics, Mojtaba Khamenei, who has replaced his father, Ayatollah Ali Khamenei, who replaced Khomeini.

Some call this Israel’s war. To be sure, the United States’ and Europe’s interests in the Persian Gulf are enough to keep the bullets flying, but do not kid yourself. It is about money. The prosperity gospel is alive and well, promising good things, including actual material benefits, for those who believe in the righteousness of the “cause.” In this case, the cause is suspiciously similar to that of the medieval Crusades.

The Quran allows Muslims to fight aggression, as long as noncombatants are not harmed, but Iran’s new supreme leader says his nation will continue avenging “the blood of [Iran’s] martyrs.” Opposing philosophies, distilled from two ancient sacred texts, are colliding in horrific ways, on the macro and micro levels.



President Donald Trump in the East Room of the White House, Feb. 23, 2026, in Washington. (AP Photo/Alex Brandon)

What do the Trump administration, Iranian leadership and Israel have in common?

Nothing, and everything. Iran overthrew its glittering monarchy and replaced it with a stern theocracy. The United States suffers a gold-plated autocracy steeped in Christian apocalypticism. Israel’s leader appears bent on steamrolling the societies of its neighbors, no matter who stands in his way. Each country’s constitution seems reduced to mere words.

The losers on all sides are the youth of each country. Beneath all the rubble in Israel, Gaza, Lebanon, Iran and elsewhere in the Middle East are people. Stuck in war’s quagmire are men and women, boys and girls, whose hopes, dreams, lives and limbs have suffered. All this is the result of what may very well be violations of international law, if not of religious doctrine, no matter which religion you are talking about.


RELATED: Is Trump’s fight against Iran a just war?

In the United States, the crassest prosecutor of the conflict, Defense Secretary Pete Hegseth, complains about what he terms “stupid rules of engagement.” Iran’s new supreme leader is called “his father on steroids.” Israel’s Benjamin Netanyahu boasts, “We’re not done yet.”

Actually, we very well may be.
Islamic schools, more parents sue Texas over exclusion from voucher program

(RNS) — Another Muslim parent whose children’s school was allegedly excluded from the program earlier this month has filed a lawsuit against Texas state officials alleging religious discrimination.


(Photo by Seen/Unsplash/Creative Commons)


Fiona André
March 13, 2026
RNS


(RNS) — Three Texas Islamic schools and a group of parents are suing state Attorney General Ken Paxton and Comptroller Kelly Hancock, marking the second legal challenge this month alleging that schools for Muslim students have been excluded from the new state voucher program.

The second lawsuit, filed on Wednesday (March 11) in the U.S. District Court for the Southern District of Texas, says state officials and the voucher program director, Mary Katherine Stout, have been “unlawfully refusing to approve otherwise qualified Islamic schools for participation” in the school funding program, and that the exclusion constitutes religious discrimination.

The Texas Education Freedom Accounts program, introduced by the state’s Legislature in 2025, created a $1 billion fund for private school financial aid. An online platform for parents to start applying opened on Feb. 4 (open through March 17), but none of the state’s accredited private Islamic schools have been listed as eligible for reimbursement through the program.

Farhana Querishi, a plaintiff whose children attend Houston Quran Academy, said in a news release that the comptroller’s decision to exclude Islamic schools from the program sent a “troubling message” that the state’s Muslim children and communities had fewer rights than other residents.

“No parent should have to choose between accessing a public education program and raising their child in accordance with their faith,” she said.

The dispute over the program comes amid growing hostility from Republican elected officials in Texas toward the state’s Muslim residents and community leaders, which became a focal point in the state’s Republican primaries.


The Texas Education Freedom Accounts website. (Screen grab)

Last week, Mehdi Cherkaoui, a lawyer and Muslim father whose children’s school is excluded from TEFA, also filed a lawsuit against Paxton and Hancock alleging religious discrimination.


RELATED: Muslim father sues over exclusion of Islamic schools from Texas voucher program

Though Hancock hasn’t commented publicly on the Islamic schools’ exclusion from the program, their absence, along with past comments in which he expressed an intention to exclude them, “supports an inference that the School Plaintiffs have been excluded because of their Islamic religious identity,” according to the plaintiffs.

“While Defendants’ silence is formally unexplained, the current posture suggests alignment with recent rhetoric linking all Islamic organizations to ‘terrorism,’” the complaint reads.

In December, after Texas Gov. Greg Abbott designated the Council on American-Islamic Relations, a major Muslim civil rights group, a “foreign terrorist organization” and a “transnational criminal organization,” Hancock sent a letter to Paxton, posted on X, inquiring about the legality of excluding schools with ties to “foreign terrorist organizations” and “transnational criminal organizations.” The comptroller raised concerns that a private school that had hosted a CAIR event might benefit from the voucher program. He also expressed alarm over the possible inclusion of schools with ties to the communist Chinese government.

The attorney general responded that Hancock’s office had “full, exclusive statutory authority” to prohibit schools from participation in the school voucher program. And both made comments on social media about wanting to ensure the program would not fund schools with ties to Islamic terrorist organizations.

In reaction to a Washington Post story published Wednesday about the schools’ exclusion, Abbott commented, “That’s right. We don’t want school choice funds going to radical Islamic indoctrination with historic connections to terrorism.”

Neither Paxton nor Hancock returned RNS’ requests for comments.

The lawsuit argues the comptroller’s decision to bar such schools from applying violates the First Amendment’s free exercise and establishment clauses and the 14th Amendment’s equal protection and due process clauses. Plaintiffs are seeking a ruling halting the exclusion of the schools before the program’s deadline next Tuesday.

RELATED: Texas governor calls CAIR a terrorist organization, says he will enforce penalties

Some parents whose children are enrolled in Islamic schools have entered the program by selecting other schools, while others have refrained from registering, refusing to select a school other than their children’s, the complaints note. After the deadline, the parents who failed to register won’t be considered in TEFA’s lottery, which determines who benefits from the funding.

“They have created a system where Muslim families cannot even select their schools in the application portal, while thousands of non-Islamic private schools remain approved and eligible,” the complaint reads.

The three school plaintiffs, Bayaan Academy, the Islamic Services Foundation and the Eagle Institute Excellence Academy, have not received an explanation from the comptroller’s office regarding their exclusion, they said in the lawsuit.

The children of plaintiffs Layla Daoudi, Muna Hamadah and Farhana Querishi are enrolled, respectively, at the Houston Quran Academy, the Islamic Services Foundation and the Eagle Institute Excellence Academy.

Bayaan Academy, a 1,200-student virtual school headquartered in Galveston County, was initially approved for the program after filling out a Google form put out by the comptroller’s office in December. However, it was removed from the list of eligible schools following a news report highlighting that it was one of the few Islamic schools included, according to the suit.

In his lawsuit filed on March 1, Cherkaoui, whose children are enrolled at the Houston Quran Academy, also argued the comptroller’s decision violates the First Amendment’s free exercise and establishment clauses as well as the 14th Amendment’s equal protection and due process clauses. His lawsuit also seeks a temporary restraining order to prevent religious discrimination before the March 17 deadline.
Gay Muslim influencer hosts inclusive Ramadan meal and calls for acceptance across faiths

BERLIN (AP) — Ali Darwich, a 33-year-old German with Palestinian and Lebanese roots who goes by @alifragt or “Ali asks” on Instagram, has a quickly growing following there, where he draws attention to the difficulties of living as a young, queer Muslim and calls for more tolerance and inclusiveness.


Gay Muslim influencer Ali Darwich, center left, hosts an inclusive Iftar, the Ramadan fast-breaking meal, with friends who are Muslim, Christian, queer and straight, in Berlin, Germany, Wednesday, March 11, 2026. (AP Photo/Ebrahim Noroozi)


Kirsten Grieshaber
March 13, 2026


BERLIN (AP) — Ali Darwich, a gay Muslim influencer in Berlin, picks up a date from his plate, takes a sip of water, and addresses the 15 friends sitting around the table and breaking the Ramadan fast with him.

The 33-year-old German with Palestinian and Lebanese roots — who goes by @alifragt or “Ali asks” on Instagram — has a quickly growing following there, where he draws attention to the difficulties of living as a young, queer Muslim and calls for more tolerance and inclusiveness.

“Tonight we want to send a message that no matter where a person comes from, no matter who that person loves, no matter how queer that person is, they cannot be too queer … because they are exactly as they should be,” Darwich says, smiling at the diverse group of Muslims and Christians, Germans and immigrants, gay and straight people sharing this meal with him as the sun sets over Berlin.

“I am a believer, I believe in God, and I find Islam beautiful, just like Christianity or Judaism and many other religions,” he says. But he adds that it’s not always easy for homosexuals to be accepted — not just for Muslims but also for queer Christians and believers of many other religions.

Indeed, attacks against LGBTQ+ people and gay-friendly establishments are rising across Germany, including in Berlin, a city that has historically embraced the community.

According to the latest figures from the Association of Counseling Centers for Victims of Right-Wing, Racist and Antisemitic Violence, violence targeting LGBTQ+ people increased 40% from 2023 to 2024 in 12 of Germany’s 16 federal states.
Darwich calls for inclusion of homosexual Muslims

In one of his Instagram videos, Darwich sits by himself at a table during Ramadan and talks about the loneliness some Muslim homosexuals face when they are shunned by their families. It makes life hard, he says, especially during holidays that are usually a time of togetherness.

He calls on people to open their hearts and doors to queer Muslims so they don’t have to be alone for Iftar, the evening meal during Ramadan.

And for his gay followers he also has a message on Instagram: “You deserve to break your fast surrounded by people who accept you — fully and without conditions.”

Darwich’s coming out a few years ago wasn’t easy.

When he told his mother about it, she at first didn’t want to believe him, then she cried and they didn’t talk for half a year. Many other members of his extended family also were taken aback.

“From one day to the next, I was no longer invited. Not only to Ramadan, but also to family celebrations, and that was a very difficult time for me,” he told The Associated Press in an interview this week.
Friends stepping up when your family shuns you

While Darwich and his mom are getting along just fine now, he said it helped him tremendously at the time that his friends stepped up and became a kind of family for him, supporting and accepting him.

For this week’s “real life” Iftar in Berlin, his friend Randa Weiser, 40, a German-Palestinian influencer who shares her everyday life with three kids and a husband on social media under the handle @randa_and_the_gang, opened her home to Darwich and their friends.

She cooked up a feast of freekeh soup, fragrant yellow rice with almonds, raisins and cardamom, grilled chicken drumsticks, and a variety of sweets for dessert.

“It’s an absolute colorful mix tonight,” she said, referring to the crowd around the Iftar table. While most people are German, many of their families originally come from faraway places like Jordan, Lebanon, Morocco, Turkey, Chechnya, Syria, Iran and Peru.

Weiser said she got “some hate” on Instagram when she posted earlier in the day that she was about to host an inclusive Iftar, but mostly, she says her followers agree that “you can be Muslim and gay or lesbian.”

As the crowd — many of them influencers as well — dug into Weiser’s food, they didn’t miss an opportunity to shoot video of one another and post it quickly on their accounts.

One of them, Darwich’s good friend Haidar Darwish, a belly dancer and artist who came from Syria in 2016, had dressed up for the occasion with a red fez and a white, gold-embroidered gallabiyah.

“The hate and crimes against women, Muslim people, Jewish people also, and queers and trans siblings of mine have increased,” said Darwish, who goes by @thedarvishofficial on Instagram.

“But no matter how much the others will show us hate, we can show more love only if we are believing in ourselves,” he said, adding that they will be fine as long as they have “the help of our allies and friends and people that have our backs.”



Three brothers arrested over US embassy blast in Oslo


By AFP
March 11, 2026


The blast in Oslo hit the entrance to the US embassy's consular section - Copyright AFP Frederic J. Brown


Pierre-Henry DESHAYES

Norwegian police said Wednesday three brothers had been arrested on suspicion of a “terrorist bombing” over a weekend explosion at the US embassy in Oslo, which caused minor damage but no injuries.

Police prosecutor Christian Hatlo told a press conference the brothers, who were Norwegian citizens of Iraqi origin, had been arrested in Oslo and that police were investigating the motive.

“We are still working from several hypotheses. One of them is whether this is an order from a government entity,” Hatlo said.

“This is quite natural given the target — the US embassy — and the security situation the world is in today,” he said.

Hatlo said the investigation would seek to clarify exactly what roles the brothers, who were in their 20s, had played.

“We believe that one of them is the person who placed the bomb outside the embassy and that the other two were complicit in the act,” Hatlo told reporters.

Oystein Storrvik, a lawyer for one of the suspects, told broadcaster TV 2 that his client had admitted “to being involved in the case”.

“He admits that he placed the bomb there,” Storrvik told the broadcaster.

Storrvik added that his client had been questioned by police.

“He has explained what happened, and I have no further comments at this time,” he said.



– ‘Proxy actors’ –



While none of the brothers were previously known to police, Hatlo said investigators were not ruling out links to “criminal networks”.

In its annual threat assessment, Norwegian security service PST said last month that Iran, which it considers one of the main threats to the country, could rely on “proxy actors”, including “criminal networks”, to commit acts.

On Tuesday, Iran’s ambassador in Oslo denied any involvement by his country in the embassy explosion.

“It is unacceptable that we are being singled out,” Alireza Jahangiri told Norwegian newspaper Verdens Gang.

According to police, the perpetrators of the bombing, which was described as “powerful”, may also have acted on motives of their own.

US embassies have been placed on high alert in the Middle East due to American strikes on Iran. Several have faced attacks as Tehran responds by targeting industrial and diplomatic facilities.

The blast took place at around 1:00 am (0000 GMT) on Sunday at the entrance to the embassy’s consular section.

On Monday, two images were released from surveillance camera footage showing a suspect dressed in dark clothing with a hood over his head and wearing a backpack.

At roughly the time the incident occurred, a video was uploaded to the Google Maps page for the US embassy.

The video, which has since been taken down, appeared to show Iran’s late supreme leader Ayatollah Ali Khamenei, who was killed on the first day of the US-Israeli strikes in Iran.

According to Norwegian public broadcaster NRK, the person who uploaded the video wrote in Persian: “God is great. We are victorious.”

Police have also opened an investigation into this.
With Middle East in flames, Texan bunker maker sees business boom


By AFP
March 11, 2026


Simple bunkers go for around $25,000 while more sophisticated models designed for potentially years-long stays can cost millions - Copyright AFP Mark Felix


Moisés ÁVILA

Since the war in the Middle East began nearly two weeks ago, the phone at Ron Hubbard’s bomb shelter company in Texas hasn’t stopped ringing.

Foreign and US clients are rushing to buy his bunkers, seeking refuge in case of air raids, nuclear fallout or apocalypse.

With the United States and Israel pounding Iran, and Tehran retaliating with strikes across the region, Hubbard has seen demand for his product soar, mostly from Gulf nation clients in Bahrain, Qatar, Kuwait and the United Arab Emirates.

“You can imagine how many people are thinking ‘I wish I had a bomb shelter,'” Hubbard, 63, told AFP in the office of his company, Atlas Survival Shelters. “The respect and the demand for the product is really at an all-time high right now like I’ve never seen it before.”

But with Iranian missiles hitting US targets in the Middle East and violence on the rise domestically, Americans are also worried. One recent morning, a client from Florida called Hubbard to inquire about a bomb shelter for 10 people.



– How It Works –



A basic backyard bunker housing four people underground for up to a week while shielding them from bomb blasts and radiation costs around $25,000.

More sophisticated models, designed for years-long stays, can cost millions of dollars depending on how much food, energy and water they are stocked with.

“It depends if they’re preparing for the end of the world or Armageddon or they’re preparing just basically for a barrage of missile fires as mostly the Israelis have,” Hubbard said.

His bunkers can be built from concrete directly on-site, or fabricated from metal at his facility in the town of Sulphur Springs in rural Texas, and then transported to the client.

A nuclear shelter only needs to be three feet deep because “it’s the earth and the concrete on top of you shielding you from the gamma radiation,” Hubbard explained, adding that he usually tries to build them six to ten feet underground to allow for protection from artillery fire.
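The shielding claim can be sanity-checked with the civil-defense “halving thickness” rule of thumb: each halving thickness of a material cuts gamma intensity roughly in half. A minimal sketch, assuming the commonly cited planning figure of about 9 cm (3.6 inches) of packed earth per halving — a figure not given in the article:

```python
# Rough gamma-attenuation estimate via the "halving thickness" rule of thumb:
# each halving thickness of cover material cuts gamma intensity by half.
# The ~9 cm figure for packed earth is an assumed planning value, not from
# the article; real attenuation depends on soil density and photon energy.

HALVING_THICKNESS_EARTH_CM = 9.0  # assumed planning figure

def attenuation_factor(cover_cm: float,
                       halving_cm: float = HALVING_THICKNESS_EARTH_CM) -> float:
    """Fraction of incident gamma radiation passing through `cover_cm` of cover."""
    return 0.5 ** (cover_cm / halving_cm)

three_feet_cm = 3 * 30.48  # ~91 cm of earth, the depth quoted by Hubbard
factor = attenuation_factor(three_feet_cm)
print(f"{three_feet_cm:.0f} cm of earth transmits ~{factor:.5f} of gamma radiation")
```

Under these assumptions, three feet of earth amounts to roughly ten halvings, cutting gamma exposure by about a factor of a thousand, which is why a relatively shallow bunker already provides meaningful fallout protection.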

The shelters feature a main door that seals hermetically and a decontamination chamber where people can shower if they have been in a contaminated environment.

Depending on the budget, the interior can resemble a small apartment, with a living room and TV, a bedroom, a kitchen, a laundry area and a bathroom. Some models even include a weapons storage room.

The facility connects to a power source and can store and filter water. If electricity fails, the bunker’s ventilation system can be operated manually using a hand crank — much like in vintage cars.



– ‘Crazy Americans getting bomb shelters’ –



In Hubbard’s factory yard, about twenty bunkers that look like steel shipping containers stood ready to be shipped to clients across the country. Another 40 orders were in production.

“I expect to see my sales surpass probably the previous three years in the next two months,” Hubbard said. “But it will take me two to three years to probably produce all the shelters that I will sell over the next two months.”

Atlas also licenses its technology to companies abroad and sends a team of specialists from the United States to supervise the construction work.

While Hubbard keeps his client list confidential, some high-profile buyers, such as misogynist influencer Andrew Tate and YouTuber and philanthropist MrBeast, have publicly acknowledged purchasing his bunkers.

In 2021, he took part in a TV show featuring socialite and entrepreneur Kim Kardashian, where he built a bunker for her California home. And, according to Hubbard, tech titan Mark Zuckerberg also commissioned a bunker design from him, which was then assembled by a local contractor.

“To those who say ‘crazy Americans getting bomb shelters,’ they’re not saying that anymore because they’re seeing that a country like Dubai is being bombed religiously every single day,” Hubbard said, adding “especially with the future of the globe looking very bad.”
US, India still at odds with majority on WTO reform


By AFP
March 11, 2026


The WTO is trying to reform its way of doing business - Copyright AFP Fabrice COFFRINI

The United States and India still have reservations about a plan to overhaul the World Trade Organization, even though “a large majority of members” support it, the talks facilitator said Wednesday.

Reforming the global trade body, which has spent years tangled up in structural and geopolitical obstacles, will be the focus of discussions at the WTO’s ministerial conference, its biennial main gathering, from March 26 to 29 in Cameroon’s capital Yaounde.

“A large majority of members support the plan” that is on the table after nine months of discussions, said Norway’s ambassador to the WTO Petter Olberg, who is facilitating the reform talks.

“We’re getting closer to something which ministers can endorse” in Yaounde, he told reporters at the WTO’s headquarters in Geneva.

All countries want WTO reform, but “there is some disagreement; there are some divergences” on the solutions, he added, without going into details.

“It’s a compromise. So nobody is super happy. Some want more ambition; some want less ambition. Some want more detail; some want less detail.”

The goal in Yaounde is not to finalise the reforms, but to establish a programme of work, with fixed objectives and deadlines.

The draft reform plan has not yet been published, but has three main components, said Olberg.

First is decision-making, including the possibility of plurilateral negotiations, in which decisions are taken by some but not all members, rather than by consensus.

Second are the benefits granted to developing countries; and finally, issues of transparency and compliance with trade measures.



– ‘Getting there’ –



“There still are some countries holding back, but they are few in number,” said Olberg.

“It’s the United States and it’s India,” he continued.

“But the thing that kind of gives me hope that we will land this thing is that nobody — including the United States and India — is saying they don’t want reform.

“We are getting there, we are close, and the final push will have to be done by ministers themselves” in Yaounde, said Olberg.

The WTO has been going through turbulence for several years.

Its mechanism for resolving trade disputes has also been effectively paralysed since December 2019, because of the United States blocking the appointment of judges to the appellate body.

Negotiations are stalled, and some WTO rules are no longer considered fit for purpose by certain countries, including the United States.

The organisation operates on the principle of finding consensus among all 166 members.

The planned reforms aim to improve it by more easily integrating plurilateral negotiations — something India is not particularly in favour of, unlike the United States.

Western countries also want the WTO to guarantee fairer competition by addressing massive subsidies and distortions linked to industrial policies.

They believe, in particular, that the existing rules are insufficient for regulating China’s hybrid economic model, which combines market forces and state intervention.
‘Happy (and safe) shooting!’: Study says AI chatbots help plot attacks

By AFP
March 11, 2026


A research study highlights the potential for real-world harm from AI chatbots. - Copyright AFP SEBASTIEN BOZON


Anuj CHOPRA

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology’s potential for real-world harm.

Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.

Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said.

The chatbots, it added, had become a “powerful accelerant for harm.”

“Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, the chief executive of CCDH.

“The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal.”

Perplexity and Meta AI were found to be the “least safe,” assisting the researchers in most responses, while only Snapchat’s My AI and Anthropic’s Claude refused to help them in over half the responses.

In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: “Happy (and safe) shooting!”

In another, Gemini instructed a user discussing synagogue attacks that “metal shrapnel is typically more lethal.”

Researchers found Character.AI also “actively” encouraged violent attacks, including suggestions that the person asking questions “use a gun” on a health insurance CEO and physically assault a politician he disliked.

The most damning conclusion of the research was that “this risk is entirely preventable,” Ahmed said, citing Anthropic’s product for praise.

“Claude demonstrated the ability to recognize escalating risk and discourage harm,” he said.

“The technology to prevent this harm exists. What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”

AFP reached out to the AI companies for comment.

“We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified,” a Meta spokesperson said.

“Our policies prohibit our AIs from promoting or facilitating violent acts and we’re constantly working to make our tools even better.”

The study, which highlights the risk of online interactions spilling into real-world violence, comes after February’s mass shooting in Canada, the worst in its history.

The family of a girl gravely injured in that shooting is suing OpenAI over the company’s failure to notify police about the killer’s troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.
AI agent ‘lobster fever’ grips China despite risks


By AFP
March 13, 2026


Chinese authorities have warned of the risks of OpenClaw hacks. — © ADEK BERRY / AFP
Luna LIN

Chinese entrepreneur Frank Gao used to spend long hours running his social media accounts but now outsources the chore to AI agent tool OpenClaw, which is taking the country by storm despite official warnings over cybersecurity.

OpenClaw, created in November by an Austrian coder, differs from bots like ChatGPT because it can execute real-life tasks such as sending emails, organising files or even booking flight tickets.

“Since January, I’ve spent hours on the lobster every day,” Gao told AFP, referring to OpenClaw’s red crustacean mascot. “We’re family.”

After downloading OpenClaw, users connect it to existing artificial intelligence models of their choice, then give it simple instructions through instant messaging apps, as if to a friend or colleague.

The tool has fascinated tech circles worldwide but particularly in China, gripping tech-savvy companies and individuals keen to keep up with the next big thing in AI.

Hundreds of people queued at tech giant Baidu’s Beijing headquarters this week for an OpenClaw event where engineers helped attendees set up their “little lobsters”.

It was one of many similar meetups for experimenting with the tool that are drawing crowds from Shanghai to Shenzhen.

Some municipalities, including the eastern cities of Wuxi and Hangzhou, have pledged hundreds of thousands of dollars to support the adoption and development of OpenClaw and other AI agents.

But the lobster fever, as it has been dubbed, has also sparked security concerns.

“What’s truly scary about agents like OpenClaw is this: once they have your digital keys, they can theoretically access all the services you’ve authorised, and can autonomously decide when to activate them,” Gao warned.

“The attacker effectively gains a ‘master key’ to your digital identity,” said the engineer, who has named his OpenClaw agent “Q” after his business name QLab.

– ‘Use with caution’ –

Chinese national cybersecurity authorities and Beijing’s ministry of industry and IT have warned of the risks of OpenClaw hacks.

“Use intelligent agents such as ‘lobster’ with caution,” national IT research institute expert Wei Liang advised government agencies, public institutions, companies and individuals in a message on state media.


OpenClaw can execute real-life tasks such as sending emails, organising files or even booking flight tickets – Copyright AFP ADEK BERRY

The mixed signal of rolling out policy incentives while issuing warnings “reflects the authorities’ cautious tolerance towards ‘lobster fever’,” Zhang Yi, founder of tech consultancy iiMedia, told AFP.

Austrian programmer Peter Steinberger, who built OpenClaw to help organise his digital life, was hired last month by ChatGPT maker OpenAI.

Meanwhile, a separate team of coders that made Moltbook, a Reddit-like pseudo social network where OpenClaw agents converse, are joining Meta.

Top Chinese tech companies have also been quick to get involved.

The likes of Tencent, Alibaba, ByteDance and Baidu are offering simplified installation and affordable coding plans to help users who want to host OpenClaw agents on their cloud servers — seen as safer than downloading it onto a personal computer.

In recent days AI companies big and small have also launched their own competing agent tools, such as ByteDance’s ArkClaw, Tencent’s WorkBuddy and Zhipu AI’s AutoClaw.

The relatively low cost for cloud deployment of OpenClaw in China, subsidised by big tech firms, is one factor behind its popularity, said Gao Rui, a senior product manager at Baidu AI Cloud.

“For most people, it’s likely just the price of a cup of coffee… which is why people will probably be keen to give it a try,” she told AFP.

– FOMO –

Fear of missing out is also a big driver behind OpenClaw’s success in China, said Chen Yunfei, an AI developer who created a popular online guide for using the tool.

“Most Chinese people are quite studious and forward-looking, so when confronted with new things, they might have stronger feelings” of so-called FOMO, he said.

Xie Manrui, a programmer whose latest project is a visualised system for managing OpenClaw agents, said the tool had arrived “at the right moment” to change perceptions in China of what AI can do.

“For many, AI is merely a clever chatbot that talks all the time but cannot act,” he said.

Either way, it has piqued the curiosity of many young users.

At the Baidu event in Beijing, 24-year-old college student Zheng Huimin was waiting patiently in line with her friends.

“I’d like to give it a go to see what tasks it can actually help me accomplish,” she told AFP.

New AI-evolved robots refuse to die


By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 10, 2026


Aigen's solar-powered autonomous robots aim to take the chemicals and toil out of industrial weeding - Copyright GETTY IMAGES NORTH AMERICA/AFP/File RICK DIAMOND

AI-designed metamachines run in the wild, recover from damage and transform into new shapes, according to recent studies. These modular legged robots are said to possess athletic intelligence and are composed of multiple smaller autonomous robots, each module being a complete robot with its own motor, battery and brain.

Together, the modules form a larger machine that can be rapidly assembled, repaired or reshaped. The study, from Northwestern University engineers, marks the first evolved robot to set foot outdoors and the first modular robot to show this kind of agility.

These robots can be combined and recombined in the wild, recover from injury and keep moving no matter what challenges they face.


If flipped upside down, the robots instinctively bring themselves upright and continue their journey. They can survive being chopped in half or cut up into many pieces. When separated, every module within the metamachine can become an individual agent.

Called “legged metamachines,” the creations are made from autonomous, Lego-like modules that snap together into an endless number of configurations. Each module by itself is a complete robot with its own motor, battery and computer. Alone, a module can roll, turn and jump. But the real agility and indestructibility emerges when the modules combine.




To design the most effective combinations, the engineers used artificial intelligence (AI) to evolve novel body configurations. Instead of sticking with standard dog- or human-like designs, the AI churned out strange new “species” of machines that no human engineer would have conceived. When connected to other modules, the metamachines undulate like seals, bound like lizards or spring like kangaroos.

The robots also can flip themselves upright when turned over, hop over obstacles and perform acrobatics like spinning in air. Because a metamachine is essentially a robot made up of other robots, it can resist catastrophic damage. Broken parts don’t become dead weight; they keep rolling, crawling and rejoin the team.

By combining physical modularity with AI-driven design, the researchers have opened the door to a new class of robots that don’t just survive the real world — they adapt to it. These machines point toward a future where robots are less like fragile, pre-designed tools and more like resilient, evolving lifeforms.

Evolution accelerated by computers

While today’s robots can be fast and agile, their body shapes are often fixed and rigid. Most robots cannot adapt to new tasks, environments or physical damage. If a robotic dog breaks a leg, for example, it’s basically useless. To escape those limitations, the engineering team turned to AI, not to copy familiar designs but to evolve something entirely new.

The researchers started with an evolutionary algorithm that mimics natural selection. As a starting point, the team gave the algorithm the robot’s building blocks: half-meter-long modular legs, each of which looks like a pair of sticks joined by a central sphere.

The researchers gave the algorithm a goal: Design a robot with efficient, versatile movement. By mixing and matching the modules in different combinations, the algorithm generated new body types. It then simulated each design, keeping the best performers and discarding the weak. It also iteratively “bred” new designs by combining or mutating them. Depending on the robot’s body, modular legs became legs, spines or tails.
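The loop just described (generate, simulate, select, breed) is a classic evolutionary algorithm. The sketch below is a toy stand-in under stated assumptions: the body encoding is a simple bit list, and the fitness function replaces the physics simulation with an invented score, since the actual Northwestern pipeline is far richer.

```python
import random

# Toy evolutionary algorithm in the spirit of the study's loop.
# The encoding and fitness function are invented for illustration only.

MODULE_SLOTS = 5  # hypothetical: up to five attachment points for modular legs

def random_design():
    # each slot: attach a module (1) or leave it empty (0)
    return [random.randint(0, 1) for _ in range(MODULE_SLOTS)]

def fitness(design):
    # stand-in for a physics simulation: reward designs with four legs
    legs = sum(design)
    return -abs(legs - 4)  # best possible score is 0, at exactly four legs

def mutate(design):
    # flip one randomly chosen slot
    child = design[:]
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

def crossover(a, b):
    # splice two parent designs at a random cut point
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=50, pop_size=20, seed=0):
    random.seed(seed)
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # keep the best performers
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))  # "breed" new designs
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The same skeleton scales to the real problem by swapping the bit list for a graph of module connections and the toy score for a locomotion simulation.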

Traversing rugged terrain

To test the designs, the engineers assembled the best three-, four- and five-legged designs found by evolution. In outdoor tests, the metamachines ran across rough terrain, including gravel, grass, tree roots, leaves, sand, mud and uneven bricks. They jumped, spun and righted themselves when flipped — all without complicated setup or retraining.

Unlike traditional robots that fail when a single part breaks, these machines can adapt, recover and survive. Even when a leg breaks off, the metamachine remains resilient. The modules adapt to a missing leg and keep moving. The missing leg, too, can roll home and rejoin its team.

The research appears in the journal Proceedings of the National Academy of Sciences. The study is titled “Agile legged locomotion in reconfigurable modular robots.”
Dating app Tinder dabbles with AI matchmaking


By AFP
March 12, 2026


Tinder says AI lets the app 'get a better sense of your personality; your vibe, and what really matters to you' - Copyright AFP Martin BUREAU

Tinder said Thursday that it is testing a “Chemistry” option that uses artificial intelligence to help with matchmaking in the popular dating app.

The iconic system of users “swiping” to show interest in Tinder profiles remains at the core of the service created in 2012, but AI promises a more personalized quest for romance, according to Tinder.

“We’re using AI to surface more relevant connections, and continuing to raise the bar on safety so that people feel confident taking the next step,” Spencer Rascoff, chief executive of Tinder and its parent Match Group, said in a statement announcing a slew of changes to the platform.

Tinder said AI enabled the app to “get a better sense of your personality; your vibe, and what really matters to you.”

The tool will learn about users from information in their accounts, and Tinder plans to eventually let people augment that by answering questionnaires and providing access to photo archives, according to the company.

Chemistry is among new features designed to help Tinder users spend less time in the app and more time connecting in real life, according to senior vice president of product Hillary Paine.

“What you are going to see is more of an evolution that is mirroring what modern, young daters are looking for,” Paine told AFP.

A music mode lets people give greater weight to musical tastes while seeking promising profiles, while a new astrology mode makes star signs a factor in the mix.

Tinder is also testing in-person events where subscribers in its home city of Los Angeles can meet, along with virtual video speed dating sessions, according to Paine.

“We’re hearing and we’re seeing that Gen Z-plus wants to be social,” Paine said of those born in the Internet Age.

“We’re trying to get them off the couch, out of their apartments and into the real world.”

Tinder is also using AI to detect potentially inappropriate messages and to scan faces to check they are actual people.

A survey published by Forbes magazine last year found that 78 percent of users reported feeling emotionally, mentally and physically exhausted from using online dating platforms.

“With more than half our users under 30, we’re building alongside a generation that wants dating to feel more authentic, lower-pressure, and worth their time,” Rascoff said.
The AI jobs paradox is creating and eliminating roles at the same time


By Jennifer Friesen
DIGITAL JOURNAL
March 12, 2026


Photo by Ben Iwara on Unsplash

A software developer prepares a pull request and pauses for a moment. Much of the code in the update came from an AI assistant.

The system has already flagged a potential issue and suggested a fix before anyone else on the team reviews the change.

Anyone who has worked around software teams knows pull requests can trigger long review threads and days of back-and-forth. Now an AI assistant often joins the discussion, generating code and flagging problems before another developer even opens the thread.

According to new global research from Snowflake and Omdia, nearly half of the code written inside organizations today is generated with AI assistance.

Technical teams are already reorganizing around that reality, often discovering the same tools creating new roles are eliminating others. The research shows engineering groups both hiring and eliminating roles as AI tools become embedded in everyday workflows.

In Canada, organizations are prioritizing these shifts at the front door, moving faster than global peers to target customer-facing experiences.

For many organizations, the technology shaping how software gets built is now beginning to form how customers interact with the business as well.

Engineering teams are reorganizing around AI systems

The report surveyed more than 2,000 business and technology leaders across 10 countries. Among those organizations, 77% say AI adoption has created jobs somewhere in their workforce, while 46% say roles have been eliminated.

Many companies report both outcomes at once.

This paradox is most visible in technical departments. IT operations, cybersecurity, and software development report some of the largest job gains tied to AI adoption (at 56%, 46%, and 38% respectively). Those same functions also report some of the biggest reductions.

In practice, AI is taking over certain tasks while creating new work around building and operating AI systems.

“Engineering teams are increasingly focused on making AI work in production by deploying AI agents at scale and ensuring they operate reliably and safely in real-world environments,” says Qaiser Habib, head of Canada engineering at Snowflake.

The pattern is already visible in large technology companies.

Salesforce, for example, has publicly discussed “rebalancing” its workforce as it invests in artificial intelligence. CEO Marc Benioff has described cutting some traditional roles while hiring aggressively for AI-focused engineers and teams building the company’s Agentforce platform.

The change mirrors what many organizations in the Snowflake research describe, which is fewer roles tied to routine development tasks, and more demand for engineers who can build, monitor, and scale AI systems.

That focus is changing how teams are structured. Habib says organizations are introducing new responsibilities around system design, evaluation, and control. Companies are also building expertise around the infrastructure that supports AI agents, including integrations, security, and performance.

Anyone who watched the cloud boom of the 2010s will recognize the pattern.

When cloud computing became widespread, companies created new teams responsible for infrastructure, automation, and security. AI is prompting a similar kind of reorganization within engineering groups.

Software development itself is changing

Ask a developer what slows down a release and the answer usually involves code reviews, testing, and a few bugs that appear at the worst possible moment. The modern developer’s workflow is becoming an exercise in “AI orchestration.”

Developers are using AI systems to help generate code, review pull requests, flag potential issues, and monitor performance once software is running. The research suggests the shift is already well underway: with 48% of code now generated with AI assistance, the human role is pivoting toward oversight.

“AI is becoming embedded across the entire software development lifecycle,” says Habib.

Currently, the technology’s strongest foothold is in analytics (71%), code reviews (66%), and generation (65%). Many organizations say the tools are speeding up the work developers already do.

Eighty per cent report faster development velocity, while 76% say AI-assisted development has reduced costs.

The technology is also changing how teams manage quality.

More than eight in 10 organizations report improvements in testing and bug detection when AI coding tools are used, and 80% also say those systems help improve overall code quality.

That changes where developers spend their time. Instead of writing every line manually, many teams now rely on AI systems to generate or suggest code while developers focus on architecture decisions, business logic, and oversight.

“Developer roles are evolving from primarily writing code to defining architecture, business logic, and guardrails for AI-assisted systems,” says Habib.

Testing is moving earlier in the development process as well. AI tools can scan code continuously, flagging issues while software is still being written instead of after a release candidate is built.
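The kind of early, continuous check described above can be as simple as a static scan that runs while code is being written. The sketch below uses Python's standard ast module as a deterministic stand-in for an AI reviewer; the two rules it flags are invented examples, not any particular tool's behaviour.

```python
import ast

# Toy illustration of "shift-left" automated review: scan source before a
# release candidate is built and flag risky patterns. Real AI review tools
# reason far more deeply; this deterministic check only shows where such a
# gate sits in the workflow.

def review(source):
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # flag bare except clauses, a classic human review comment
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except' swallows errors")
        # flag eval(), a common security red flag
        if isinstance(node, ast.Call) and getattr(node.func, "id", "") == "eval":
            findings.append(f"line {node.lineno}: avoid eval()")
    return findings

snippet = "try:\n    x = eval(user_input)\nexcept:\n    pass\n"
for finding in review(snippet):
    print(finding)
```

Hooked into an editor or a pre-merge pipeline, a check like this surfaces problems while the code is still being typed rather than after the release candidate exists.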

AI becomes another coworker, reviewing code, surfacing problems, and helping teams move faster from idea to production.

As engineering teams stabilize these internal tools, the focus is shifting from the back-end to the front office.


Canadian companies are moving faster on customer-facing AI


Once companies begin using AI across engineering teams, the next question is where else it could work.

Attention typically shifts to the customers.

The research suggests Canadian companies are moving quickly in this direction, with 45% of Canadian respondents saying their organizations are prioritizing generative AI tools in front of customers. Globally, that figure sits at 36%.

AI systems can answer routine questions, surface account details, or guide customers through common tasks while human agents step in for more complex issues.

“These areas offer clear, immediate returns,” says Shannon Katschilo, country manager for Canada at Snowflake. “Particularly through service automation and improved customer support, making them a practical entry point for many organizations adopting AI.”

But some companies are already pushing further.

The research shows that 31% of Canadian respondents say their organizations already use agentic AI in production, while another 32% say they’re interested but “early in the adoption process.”

Those systems can complete more complex tasks. Instead of answering a single question, they can analyze data, generate recommendations, and trigger follow-up actions across different systems.

Scaling AI still depends on the foundations

As AI tools move deeper into everyday work, many organizations are discovering that deploying them is one thing. Making them reliable across an entire company is the hard part.

The report found that 96% of organizations face obstacles when trying to expand AI initiatives. In other words, basically everyone.

Most of those challenges have little to do with the algorithms themselves. The difficulty sits within the data companies already own.

Systems hold information in different formats. Quality varies from one dataset to another. Much of the data organizations collect has never been prepared in a way that AI systems can easily use.

Governance quickly enters the conversation as well. When AI systems generate insights, respond to customers, or trigger automated actions, companies need to know exactly what information those systems can access and how decisions are being made.

“Organizations that successfully scale AI combine ambition with strong data foundations,” says Katschilo. “The Canadian companies seeing the most value, especially with autonomous agents, are those investing in high-quality data and the workforce skills needed to turn AI capabilities into measurable business impact.”

Even with the rapid pace of AI development, those fundamentals still determine which organizations see real results.

A developer opens that pull request. An AI assistant suggests a block of code while another system checks the logic and flags a potential bug before anyone else on the team sees it.

These are now routine situations.

The organizations turning those moments into real value are the ones that prepared the groundwork long before the AI arrived.

Final shots

AI is already reshaping engineering teams. Across the organizations surveyed, 77% report job creation tied to AI while 46% report role reductions, often inside the same technical groups.

Canadian companies are pushing AI into customer experiences quickly. Forty-five per cent are prioritizing the use of AI in customer-facing tools, higher than the 36% global average.
Scaling AI still comes down to fundamentals. Ninety-six per cent of organizations report obstacles when expanding AI initiatives, most tied to data quality, governance, and skills.


Written by Jennifer Friesen

Jennifer Friesen is Digital Journal's associate editor and content manager based in Calgary.

Job losses due to AI are mounting up in 2026



By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
March 11, 2026


A street in London. Image by Tim Sandle

AI-driven job disruption is now spreading beyond traditional technology firms and into Wall Street, where financial giants like Morgan Stanley have begun cutting thousands of roles as artificial intelligence steadily reduces the need for large operational teams handling manual tasks. This is outlined in a new report exploring the scale of global tech industry layoffs in 2026.
AI = job layoffs?

Mounting warnings from business leaders and economists point to artificial intelligence as a key accelerator of these layoff waves, with companies restructuring around automation, machine learning, and efficiency gains putting not only individual roles but entire job functions at risk.

To determine which companies led 2026’s biggest job cuts, the team at RationalFX compiled layoff data from multiple verified sources, including U.S. WARN notices, TrueUp, TechCrunch, and the Layoffs.fyi tracker, covering announcements made since the start of 2026.

Data shows that around 9,238, or about 20% of the 45,363 tech layoffs recorded worldwide since the start of the year, have been linked to AI implementation and organisational restructuring. The largest contributor to these reductions is the American technology firm Block (4,000 layoffs), whose CEO, Jack Dorsey, said in a post on social media that the decision was not driven by financial difficulty, but by the growing capability of AI tools to perform a wider range of tasks.

As a result, the company is significantly reducing its workforce, from roughly 10,000 employees to about 6,000, as it shifts its strategic focus more heavily towards AI.

Tech Companies With the Most Layoffs Due to AI in 2026 

  1. Block – 4,000 layoffs
  2. WiseTech Global – 2,000 layoffs
  3. Livspace – 1,000 layoffs
  4. eBay – 800 layoffs
  5. Pinterest – 675 layoffs
  6. ANGI Homeservices – 350 layoffs
  7. Oracle – 254 layoffs
  8. MercadoLibre – 119 layoffs

Following last year’s restructuring at the American technology firm Block, which saw around 8% of its workforce (931 employees) laid off, CEO Jack Dorsey recently announced that the company would be reducing headcount by a further 40%, cutting 4,000 of its current 10,000 roles as part of AI-related automation and restructuring efforts. This represents the most significant wave of AI-driven layoffs so far in 2026.

Australian logistics software developer WiseTech Global announced 2,000 layoffs as part of a sweeping AI-driven restructuring programme aimed at transforming how its logistics platforms are built and maintained. Company leadership argued that advances in generative AI and large language models are dramatically increasing software engineering productivity, with executives stating that traditional approaches to writing and maintaining code are becoming increasingly obsolete.

Singapore-based home design platform Livspace has cut 1,000 jobs as part of its push to accelerate the adoption of AI across its digital interior-design marketplace. Executives have framed the layoffs as part of a shift toward a more technology-driven platform capable of delivering faster and more personalised design services to customers.

In addition, e-commerce platform eBay has also announced 800 layoffs, with the company increasingly investing in AI tools designed to automate product listings, pricing optimisation, and customer-service workflows. Social media platform Pinterest has confirmed around 675 layoffs, affecting roughly 15% of its workforce.

“This trend reflects a broader dynamic: firms are investing heavily in AI‑powered tools and infrastructure to boost efficiency, but the transition is also disrupting traditional job structures, as many entry-level positions have now become obsolete. As AI takes on more responsibilities once handled by humans, the question is no longer if jobs will change, but when and how”, says Alan Cohen, analyst at RationalFX, in the report.


Op-Ed: Atlassian layoffs, Software as a Service, and the scary realities of AI coding


By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
March 12, 2026


Image: — © Digital Journal

Australian/American Software as a Service (SaaS) giant Atlassian laid off 1,600 people this week in what appears to be a reluctant but painful repositioning exercise.

The current state of coding is core business for Atlassian. Software as a Service is a huge sector right in the centre of the storm created by AI coding. Atlassian is naturally trying to adapt to an emerging and somewhat neurotic market.

The big picture is chaotic, to put it politely. Understandably, the many dimensions of AI coding are creating havoc and obvious indecision in business software development. AI can write code, sure, but there are many related, potentially expensive issues.

There’s an irritatingly familiar back story to this mess. The recent big selloff in software development stocks underlines a further fundamental problem. The market seems to think AI will do it all.

It can’t, it won’t, and it shouldn’t. This generation of AI is barely potty-trained. It’s clunky, and it’s error-prone. Just tacking on an LLM and expecting vibe coding to do it all is far beyond absurd. It’s dangerous.

If you think someone’s semiconscious, underqualified level of literacy instantly translates into telling AI to write great code, you’re not doing a lot of thinking. Of course, it won’t turn out pristine, perfect code for all occasions. You might get ballpark, but you’re a long way from business-standard trustworthy code.

AI isn’t particularly literate anyway. Pedantic, yes. Inflexible, yes. And errors in linguistic syntax can become syntax failures in code. This is nitpicking at a truly obsessive level, but if you don’t pick the nits properly, your code won’t run at all. Imagine an entire language as an opportunity for coding bugs.

Software as a service is essentially the customization of software for business purposes. It can’t be a guessing game. It has to work well within the operational metrics and performance demands of businesses. That’s what SaaS is all about.

There’s a certain karmic irony in the fact that, so soon after the software selloff, AI coding is now creating havoc in big businesses like Amazon. If you’re seeing dollar signs heading for the exits, bingo.

Add to this the equally ironic fact that AI has a newly discovered talent for finding coding bugs. At the same time, Anthropic has created a code review tool to manage AI coding quality. What a coincidence.

If you’re somehow getting the impression from this rhapsody of realism that AI needs strict supervision, you’ll at least avoid going broke.

An enchanting narrative for the curious about how much damage a simple glitch in software can do:

I supervised a project that issued notices trying to extract statutory fees and document lodgements from the merry burghers of Sydney. The recipients were accountants, lawyers, and corporate managers. We trustingly issued 40,000 notices to the people who had already paid and lodged their documents, but not the ones who hadn’t. It was as much fun as it sounds.

At the same time, the database was erasing old data when it entered new data. It was bliss, and it took weeks to fix. It almost derailed the project entirely.

We never got an answer as to exactly how this dog’s breakfast happened, but could a few lines of code have done it? Yes. Did we get threatened with lawsuits? Of course. Feeling better about your coding options? Point made.

Meanwhile, back at the software situation:

You don’t have to go back to writing code on stone tablets.

You do need an absolutely idiot-proof, properly tested regime for managing code quality.

You will definitely need SaaS as a built-in fixer.

Do NOT trust AI coding to be some sort of fairy god-agent for your business. Check everything ruthlessly.

_________________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.


Op-Ed: Jobs vs AI — Lack of planning equals socioeconomic absurdity


By Paul Wallis
EDITOR AT LARGE
DIGITAL JOURNAL
March 9, 2026


Image: © AFP

Replacing people with AI already doesn’t work on too many levels. Fixing AI-generated problems has become a small global sector overnight. The supposed efficiencies are being eaten up by the uncompromising realities. Yet jobs continue to be lost to AI, with more being shed regularly.

From a purely technological perspective, it’s somehow worse. This generation of AI is primitive. If full automation is a 10, this barely scrapes in as a 2/10.

The fallacies are piling up as AI exposes its own weaknesses daily. Anthropic, the maker of Claude AI, has recently published an analysis of the labor market impacts of AI, with indicative metrics.

This study is specifically based on displacement risk. The Key Findings section of the study is mercifully brief, but it’s extremely interesting. Critically, they found a general profile of lower growth in occupations through to 2034. White-collar employees, notably older executive-level females, seem vulnerable.

Anthropic habitually doesn’t sing its own praises. They try to be objective. This is a very useful analysis, backed by BLS data and including performance parameters. The probability is that it’s on the money.

Overall exposure to AI across the economy is erratically applied. AI isn’t delivering much in terms of ROI, either. In areas like banking or finance, the AI number-crunching delivers value. In other sectors, not much is happening to the point of anybody reporting it or starting a cult, at least.

I’ve been watching this for some time, and the pattern is simple:

No clearly mapped-out roles and tasks are defined in the preamble.

Costing is all over the shop.

Introduction and fanfare as jobs go out the window.

Trying to fit people, businesses, clients, markets, and ROI on the same page in real time.

A mess.

Now imagine this wholesome and tediously effervescent total lack of results applied to a whole global economy. The alarm bells are ringing, but they’re making more noise than sense. There are no visible Exit signs in the dunghill. When committed, you sink or swim.

The problem is a systemic lack of foresight. What’s the big vision?

A stunning tableau of a smug and smarmy patrician world with everyone else consigned to appropriate levels of squalor as goods and services go comatose? A bit one-track-minded, isn’t it? Or is it just a traditional lack of ideas?

The economy crashes with assets bought up dirt cheap or simply repossessed? But now there’s no economy. Not even anyone to steal from. Oh, hang on. That’s already happening, isn’t it? Ask your helpful local organized criminals or other starry-eyed idealists for details.

No more pesky people doing the work and expecting to get paid for it? Replacing wage outlays with AI is even more naïve. AI, like all technologies, is high maintenance and highly cost intensive. Outlay at this level can be lethal. Obsolescence and innovation will destroy the first acquisitions until a plateau of standardized technologies is reached.

Put it this way – If the economy collapses, so does society and so does the population. A real economic meltdown could be much worse than World War 3. Imbecility incarnate.

The word “socioeconomic” isn’t a glued-together coincidence of terminology. The two are joined at the hip in the real world.

This is the current doomsday scenario, and half-baked as doomsday scenarios are, it’s already looking weird.

Specifically, it’s looking this weird. The UK is looking at Universal Basic Income as an option, according to the Financial Times. The UK is a big economy. It’d be a huge shift. From Thatcherism to a UBI is like America’s Republicans turning communist.

Such a drastic measure also reflects the current collapse of traditional capitalism and neoliberalism as the hopelessly out-of-control cost of living continues to rot away their structures. These two tired old sacred cattle of political self-righteousness aren’t famous for solving problems, just causing them.

The future never gets a word in. This same complete lack of planning has put three generations on the scrapheap. The Millennials, Zoomers, and Gen Alpha are already broke. Mass unemployment is hardly likely to help.

Take off the blindfolds and look where you’re going.

_______________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
Canada helped invent AI, but may end up renting it back

By Chris Hogg
DIGITAL JOURNAL
March 12, 2026


Image by Gemini AI / Google

Canada played a central role in the breakthroughs behind modern artificial intelligence. Now some experts warn the country risks becoming a customer in the very industry it helped create.

A new report titled Sovereign by Design: Strategic Options for Canadian AI Sovereignty argues that Canada risks becoming dependent on foreign artificial intelligence systems unless it takes a more deliberate approach to building and governing the infrastructure behind the technology.

For anyone trying to understand the rapid shifts happening across the AI sector, the report offers a clear explanation of what’s actually at stake. It moves the conversation beyond research breakthroughs to focus on the infrastructure that determines who ultimately controls and benefits from artificial intelligence.

The authors lay out why this question is increasingly tied to economic security and long-term competitiveness, and they argue the next several years will determine whether Canada remains an active participant in shaping the AI economy or becomes largely dependent on systems developed and controlled elsewhere.

The report was authored by Jaxson Khan and Sean Mullin, senior fellows at the Munk School of Global Affairs & Public Policy at the University of Toronto.

Their analysis, published through the AI Competitiveness Project, examines how the global AI economy is increasingly being shaped by the infrastructure required to deploy artificial intelligence at scale.

Khan previously served as a policy advisor to Canada’s minister of innovation and helped shape the federal government’s sovereign AI compute strategy. Mullin is an economist who advised the prime minister’s office on economic policy and has worked extensively on Canada’s innovation and industrial strategy.

The authors characterize artificial intelligence as a foundational technology that will influence how industries evolve and how economic value is distributed.

Canada helped pioneer the scientific breakthroughs behind modern machine learning. But the systems that will power the next phase of the AI economy are increasingly being built elsewhere.
Sovereignty in the AI era

The report, released this week, focuses on a concept that’s becoming central in technology policy debates: AI sovereignty.

Khan and Mullin describe sovereignty as the ability for a country to make independent economic and political decisions without being subject to coercion from external technology systems or foreign infrastructure providers.

In that sense, sovereignty means maintaining influence over the capabilities that shape how AI systems are built, deployed, and governed.

The report defines the AI ecosystem as a layered stack that includes physical computing infrastructure, cloud platforms, the models themselves, and the software platforms used to deploy those systems across industries.

Control of those layers influences where companies build products, where talent clusters, and where economic value accumulates.

By looking at the stack this way, Khan and Mullin identify specific “chokepoints” where Canada is most exposed. While the country is strong in research and governance, it is significantly underweight in the hardware and cloud layers that act as the gatekeepers for the rest of the stack.
The infrastructure chokepoint

The authors also say artificial intelligence should be understood as a general-purpose technology similar to electricity or the internet. These technologies don’t simply create new products — they reshape entire economies and become foundational infrastructure that other industries rely on.

Artificial intelligence is now entering that phase.

But the report argues Canada’s current policy frameworks often treat very different AI systems and datasets as if they require the same level of control.

Khan and Mullin argue that a more practical approach is to treat AI infrastructure in layers, with different levels of sovereignty depending on how sensitive the underlying data and systems are.
A layered approach to data

At the most restrictive level are systems tied to national security or critical government operations, which the authors argue should run on infrastructure fully controlled by Canadian institutions.

Other systems can operate on domestic cloud platforms governed exclusively by Canadian law. In less sensitive cases, organizations may still rely on global cloud providers, provided legal and contractual protections ensure that Canadian data and systems remain under Canadian jurisdiction.

Applying this kind of layered sovereignty framework would require changes to how the Canadian government handles security classification.

Khan and Mullin point out that current standards are often too rigid, which leads to over-classification where non-sensitive data is treated with the same extreme caution as national security secrets. This creates a bottleneck that prevents government agencies from using the most advanced AI tools.

To make a sovereign strategy work, the report suggests that Canada must modernize its classification rules to clearly distinguish between data that requires a domestic “self-hosted” cloud and data that can safely run on global platforms with the right legal protections. Without this clarity, the country risks overpaying for security on one hand or leaving critical assets vulnerable on the other.
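The report’s three-tier framework can be read as a simple decision rule: the more sensitive the workload, the more restrictive the hosting requirement. The sketch below is purely illustrative; the tier names, function, and criteria paraphrase the report’s framework and are not an official classification scheme.

```python
from enum import Enum


class HostingTier(Enum):
    """Illustrative hosting tiers paraphrased from the report's layered framework."""
    SOVEREIGN_SELF_HOSTED = "infrastructure fully controlled by Canadian institutions"
    DOMESTIC_CLOUD = "domestic cloud governed exclusively by Canadian law"
    GLOBAL_CLOUD = "global cloud, with contractual protections keeping data under Canadian jurisdiction"


def required_tier(national_security: bool, critical_gov_ops: bool, sensitive: bool) -> HostingTier:
    """Map a workload's sensitivity to the least restrictive acceptable tier."""
    if national_security or critical_gov_ops:
        return HostingTier.SOVEREIGN_SELF_HOSTED
    if sensitive:
        return HostingTier.DOMESTIC_CLOUD
    return HostingTier.GLOBAL_CLOUD


# A public-facing service handling non-sensitive data can run on a global platform:
print(required_tier(national_security=False, critical_gov_ops=False, sensitive=False).name)  # GLOBAL_CLOUD
```

The point of such a rule is exactly the clarity the report calls for: without it, over-classification pushes everything into the most expensive tier.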

Unlike previous industrial revolutions that unfolded over decades, the AI economy is scaling at an unprecedented speed.

Khan and Mullin argue that the infrastructure decisions being made now will shape how the industry operates for decades. Once companies and governments build their operations on particular cloud platforms and computing environments, switching becomes expensive and disruptive.

If Canada waits too long to establish its own infrastructure and governance frameworks, the authors warn the country could end up deeply dependent on systems controlled elsewhere.

Much of the infrastructure supporting AI development is concentrated within a small number of global technology companies that operate the cloud platforms and computing environments required to train and deploy advanced systems.

Canada, by contrast, has historically focused on research and early-stage innovation. That strategy helped establish world-leading AI institutes and produced influential researchers whose work helped define the field.

But the report suggests research leadership alone doesn’t guarantee economic leadership.

The authors point to a recurring Canadian pattern where the country generates important scientific breakthroughs but allows the industrial and commercial value to be captured elsewhere.

This disconnect is visible in Canada’s current AI ecosystem. While the country possesses world-class research institutions, promising startups, and expanding compute investments, these elements are not yet working in tandem. There is still no unified industrial strategy to align these pieces for building and operating AI systems at scale.

A significant part of this alignment problem stems from what the report describes as fragmented machinery of government.

Currently, the policies governing Canada’s digital and AI infrastructure are spread across multiple federal departments, which prevents the country from speaking with a single, unified voice.

Khan and Mullin argue that this lack of coordination makes it difficult to negotiate with global technology giants or to manage large-scale infrastructure projects.

To solve this, the authors suggest that Canada needs a more centralized authority or a dedicated digital agency to oversee the “sovereign by design” framework and ensure that investments in compute and cloud services are actually meeting the country’s strategic goals.
Where the AI economy is actually controlled

Much of the public conversation about artificial intelligence focuses on the latest models and research breakthroughs. But Khan and Mullin argue that the real leverage in the AI economy sits deeper in the technology stack.

The systems that train, host, and distribute artificial intelligence are increasingly determining where companies build products, where talent gathers, and where economic value accumulates.

Artificial intelligence systems depend on a stack of technologies that includes computing infrastructure, cloud hosting environments, the models themselves, and the applications that run on top of them.

Each layer concentrates power in different ways.

At the base of the stack is compute, the specialized chips and massive data centres required to train and run advanced AI systems. These facilities require enormous capital investment and long-term operational capacity.

Above that sits cloud infrastructure, which allows companies to access computing power and AI capabilities without owning their own data centres.

On top of the cloud sit the models themselves, the systems trained to generate text, images, predictions, or software code.

And finally come applications, where companies integrate artificial intelligence into products, services, and internal workflows.

Most public attention focuses on the models, but much of the economic power sits in the infrastructure layers underneath them.
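The four layers described above can be summarized in a small structure. The field names are illustrative inventions for this sketch; the descriptions and the note on where control sits paraphrase the report’s analysis.

```python
from dataclasses import dataclass


@dataclass
class StackLayer:
    name: str
    role: str
    controlled_by: str  # where the report says most control currently sits (paraphrased)


# Ordered bottom to top; the report argues most leverage sits in the lower layers.
AI_STACK = [
    StackLayer("compute", "specialized chips and data centres that train and run models",
               "a small number of global technology firms"),
    StackLayer("cloud", "hosted access to computing power and AI capabilities",
               "a small number of global technology firms"),
    StackLayer("models", "systems that generate text, images, predictions, or code",
               "mostly American giants, with Cohere as a Canadian exception"),
    StackLayer("applications", "AI embedded in products, services, and workflows",
               "dispersed across industries"),
]

for layer in AI_STACK:
    print(f"{layer.name:12} -> {layer.controlled_by}")
```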

Control the infrastructure and companies build their products on your systems. Talent gathers around those platforms. Data flows through those networks. Economic value accumulates within that ecosystem.

The report says the consequences of this are already visible in the global AI economy.

A small group of technology firms now controls much of the infrastructure required to train and deploy advanced AI systems. Their cloud platforms host the tools that thousands of companies rely on to build AI-driven products.

For countries like Canada, that concentration creates a strategic dilemma.

Canada has produced world-class research talent and influential AI institutions. Yet the infrastructure layer where much of the economic value is captured is largely controlled by companies headquartered elsewhere.

The report notes that Canada’s vulnerability is especially acute at the model layer.

While the global market is dominated by a handful of American giants, Canada possesses a unique strategic asset in Cohere, which remains one of the only world-class commercial foundation model companies based outside the United States. Khan and Mullin argue that supporting companies like Cohere is vital for maintaining a sovereign alternative.

Additionally, the authors highlight the growing importance of open-source and open-weight AI models. These systems provide a way for Canadian firms to avoid being locked into the proprietary “black box” ecosystems of foreign providers, which allows for greater transparency and local control over how AI is deployed.

Sovereignty in the AI era isn’t about building a national champion. It’s about making sure domestic firms retain meaningful access to the infrastructure that will shape their industries.
Canada’s strategic choices

Khan and Mullin argue that Canada’s strategy doesn’t hinge on building every layer of the AI stack domestically.

Instead, they point to a handful of areas where policy choices could influence how Canada participates in the global AI economy. One of the most immediate is computing infrastructure.

Training and deploying advanced AI systems requires enormous computing power. Countries that host large-scale compute infrastructure often become hubs for AI research, startup formation, and enterprise adoption.

Canada has already begun investing in this capacity.

The federal government allocated $2.4 billion in the April 2024 federal budget to expand sovereign AI compute infrastructure and ensure Canadian researchers and companies have access to the computing resources required to develop advanced systems.
The leverage: Electricity and minerals

While building domestic capacity is a priority, the report argues that Canada should also use its unique natural advantages to secure its place in the global market. Canada possesses two critical assets that global AI firms desperately need: an abundant supply of clean, low-carbon electricity and a wealth of critical minerals required for hardware production.

Khan and Mullin suggest that Canada can use these resources as strategic leverage. Instead of acting as a passive host for foreign data centres, the country could trade access to its energy grid for guaranteed access to high-end hardware or a commitment to keep certain data and intellectual property within Canadian borders. This would transition Canada from a typical customer to a strategic partner that possesses essential resources.

Beyond direct investment, the report highlights the critical role of government procurement. As one of the largest buyers of technology in the country, the federal government has the power to act as a “first customer” for domestic AI firms.

Currently, much of that spending flows to foreign platforms, which further entrenches Canadian dependence on outside infrastructure. By intentionally directing procurement toward sovereign Canadian providers, the government can help these firms reach the scale they need to compete internationally. This approach would turn public spending into a tool for industrial strategy, ensuring that the tax dollars used to modernize government services also help build Canada’s domestic AI capacity.

Infrastructure is only part of the equation.

The bigger economic impact comes when artificial intelligence is embedded inside major industries such as manufacturing, energy production, financial services, agriculture, and logistics.

Canada’s economy includes globally competitive companies in many of these sectors.

That creates an opportunity for Canadian firms to become leaders in applying artificial intelligence to complex industrial systems rather than competing directly in the global race to build foundational AI models.

The report also emphasizes the importance of international alliances.

Artificial intelligence is inherently global. Few countries can build every element of the technology stack on their own.

Through coordinated investments, shared infrastructure, and aligned governance frameworks, like-minded countries could collectively strengthen their position in the global AI ecosystem and reduce dependence on a small number of dominant technology platforms.

Khan and Mullin argue that by aligning with countries like the United Kingdom, France, and Japan, Canada can move beyond its bilateral relationship with the United States.

These nations face similar structural challenges because they all possess world-class research talent but lack the massive scale of the American tech giants. The authors suggest that a coordinated alliance would allow these middle powers to pool their resources, share computing power, and establish common standards for data privacy and security.

This collective approach gives Canada a seat at the table when global rules for artificial intelligence are being written, which ensures that the country is not simply forced to adopt frameworks designed elsewhere.

Khan and Mullin argue that sovereignty doesn’t mean trying to build every piece of the AI stack domestically. For a country the size of Canada, that wouldn’t be realistic.

Instead, they suggest ensuring Canada maintains enough domestic capability and alternative options that it isn’t locked into a single foreign platform. Supporting sovereign cloud providers, encouraging open-weight models, and maintaining access to multiple infrastructure partners would give Canadian firms room to move if conditions change.

That flexibility matters if prices rise, rules change, or geopolitical pressures reshape how global technology platforms operate.

The report emphasizes that Canada can’t outspend superpowers like the United States or China. While those nations can commit hundreds of billions of dollars to achieve AI dominance, Canada must instead rely on being smarter about how it structures its dependencies.

A central part of this strategy involves preparing for the July 2026 review of the Canada-United States-Mexico Agreement (CUSMA). The authors warn that this review presents a significant risk to digital sovereignty, as Canada may face intense pressure to trade away its ability to regulate its own digital infrastructure in exchange for traditional trade concessions.

To protect its long-term interests, the report argues that Canada must treat its AI policy as a non-negotiable part of its national security during these talks.
What this means for business leaders

For executives, the issue often shows up as a technology decision, but the report argues the implications run much deeper.

Artificial intelligence is rapidly becoming embedded in the core operations of many businesses. Financial institutions use machine learning to detect fraud. Retailers analyze purchasing patterns to refine pricing strategies. Manufacturers rely on predictive models to manage supply chains.

These tools promise efficiency and competitive advantage.

But they also introduce new forms of operational dependency.

When a company runs critical systems on infrastructure it doesn’t control, it becomes subject to the pricing models, policies, and legal frameworks of the platforms providing that infrastructure.

If a Canadian manufacturer, bank, or logistics firm builds core operational capabilities on foreign cloud systems, those systems effectively become part of the country’s economic infrastructure. That also means those operations may ultimately fall under the laws and jurisdiction of foreign governments, something the report notes is already a concern in cases such as the United States’ CLOUD Act.

To understand this risk, the report distinguishes between data residency and data sovereignty.

Data residency simply refers to where the servers are physically located. Many foreign cloud providers have built data centres in Canada to satisfy residency requirements, but Khan and Mullin argue this is not enough.

Data sovereignty, by contrast, refers to which country has the legal authority over that data. Under laws like the U.S. CLOUD Act, the American government can sometimes compel U.S. companies to provide access to data even if it is stored on Canadian soil.

For Canadian businesses and government agencies, this means that physical location does not always guarantee legal protection. True sovereignty requires using providers that are subject only to Canadian jurisdiction.
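The residency-versus-sovereignty distinction reduces to a two-variable check: where the servers sit, and whose law binds the provider. A toy sketch only; the function and country codes are illustrative, not from the report.

```python
def is_sovereign(server_location: str, provider_jurisdiction: str) -> bool:
    """Residency (server_location) alone is not enough; the provider's legal
    jurisdiction decides who can compel access to the data (e.g. the U.S. CLOUD Act)."""
    return server_location == "CA" and provider_jurisdiction == "CA"


# Data on Canadian soil, but held by a U.S.-jurisdiction provider:
print(is_sovereign("CA", "US"))  # False: resident, but not sovereign
print(is_sovereign("CA", "CA"))  # True: resident and sovereign
```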

That raises strategic questions for business leaders:
Where are your AI workloads running?
Who governs the platforms processing your data?
What happens if those systems change their pricing, access rules, or regulatory obligations?

These questions go beyond technology strategy. They touch on economic resilience and long-term competitiveness.

The next question is whether the country will help build the systems that run on top of its research breakthroughs, or whether Canadian companies will simply plug into platforms developed somewhere else.

Final shots

Canada helped pioneer modern artificial intelligence research. Much of the infrastructure powering the AI economy is now being built elsewhere.

Decisions made in the next several years will determine whether Canada helps shape that infrastructure or primarily relies on systems developed abroad.

The question that is becoming harder to ignore: who controls the systems that increasingly run the modern economy?



Written By Chris Hogg


Chris is an award-winning entrepreneur who has worked in publishing, digital media, broadcasting, advertising, social media & marketing, data and analytics. Chris is a partner in the media company Digital Journal; the content marketing and brand storytelling firm Digital Journal Group; and Canada's leading digital transformation and innovation event, the mesh conference. He covers innovation impact where technology intersects with business, media and marketing. Chris is a member of Digital Journal's Insight Forum.