Thursday, May 14, 2026



AI Companies Are Recklessly Racing Toward a Cybersecurity Crisis

May 11, 2026

WASHINGTON - Google researchers announced Monday that cybercriminals recently used an artificial intelligence model to help build an exploit for a dangerous zero-day vulnerability, one capable of compromising computer networks at scale, marking what experts say is a major turning point in the cybersecurity landscape. A “zero-day” vulnerability is a hidden flaw or weakness in software that hackers discover before the company or the public knows about it or has a fix available. It is considered especially dangerous because attackers can exploit the flaw immediately, giving defenders “zero days” to protect themselves.

The findings come as leading AI companies, including Anthropic and OpenAI, continue developing increasingly advanced models capable of identifying and exploiting critical software vulnerabilities. Google warned that malicious actors are already using AI to increase the speed, scale, and sophistication of cyberattacks, while researchers have observed state-backed hacking groups linked to China, Russia, and North Korea leveraging AI technologies to automate and refine offensive cyber operations. The developments have intensified concerns that powerful AI systems are being deployed faster than governments and regulators can establish meaningful safeguards to prevent catastrophic misuse.

In response to the growing concerns, Public Citizen’s AI governance and technology policy counsel, J.B. Branch, issued the following statement:

“Cybersecurity experts are sounding the alarm, yet AI companies continue racing to release increasingly powerful models with little regard for the societal consequences. It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward. Americans are increasingly rejecting this destabilizing AI arms race. We need enforceable AI regulations that require rigorous safety testing, independent review, and meaningful oversight before these systems ever reach the public. Regulators cannot remain in a perpetual game of catch-up while Big Tech gambles with the safety and stability of modern society.”


Public Citizen is a nonprofit consumer advocacy organization that champions the public interest in the halls of power. We defend democracy, resist corporate power and work to ensure that government works for the people - not for big corporations. Founded in 1971, we now have 500,000 members and supporters throughout the country.


AI rivalry overshadows push for guardrails at Xi-Trump talks: experts


By AFP
May 12, 2026





Luna Lin with Katie Forster in Tokyo

Fears that artificial intelligence could help people design bioweapons or hack into national infrastructure are mutual concerns for Xi Jinping and Donald Trump, despite their countries’ fierce rivalry over the technology, analysts say.

As the leaders prepare for a rare summit in Beijing this week, policy experts have stressed the importance of US-Chinese discussions on steps to contain the risks, such as a hotline for de-escalation when an AI crisis hits.

But with China set on narrowing the United States’ lead in the strategic sector, the stakes will be high.

“There is a kind of shared concern about where this AI arms race might be going,” and if it could create an “out of control” scenario, said Michael Jinghan Zeng, a professor at City University of Hong Kong.

“Despite critical disagreements on a wide range of issues, there is also this kind of understanding from both sides” on the need for AI guardrails, he told AFP.


The White House recently accused Chinese entities of “industrial-scale” efforts to steal US technology, while Beijing blocked the acquisition of a Chinese-founded AI agent tool by tech giant Meta.

In 2024, Xi agreed with Trump’s predecessor Joe Biden that humans must remain in control of the decision to fire nuclear weapons.

Although little more has followed, Xi and Trump could “commit to some rhetorical signal” in Beijing as a basis for further cooperation, Zeng said.

– ‘Catastrophic risks’ –

The AI cybersecurity threat has been highlighted by Mythos, a powerful new model that US startup Anthropic withheld from public release to stop it from being exploited by hackers.

And “if a non-state actor uses an AI model to develop a biological weapon, that could pose catastrophic risks to both the United States and China,” Chris McGuire of the Council on Foreign Relations wrote in a recent article.

“Over the long term, addressing these risks will require cooperation,” McGuire said, cautioning that China’s “willingness to make and abide by robust international commitments on AI safety is low”.

Washington says the latest AI model from Chinese startup DeepSeek — considered the country’s most advanced — is about eight months behind the top offerings from US companies.

To stop Chinese tech firms catching up too quickly, the United States bars them from purchasing the most cutting-edge chips made by California-based Nvidia.

China has boosted its domestic AI chip industry in response, and could be hoping to use its control over rare earths as leverage at the summit on Thursday and Friday.

– ‘Intertwined’ –

Top US executives, including Tesla’s Elon Musk and Apple’s Tim Cook, will accompany Trump — with Nvidia boss Jensen Huang a last-minute addition to the trip.

Chen Liang, founder of Strategic Times Consulting, told AFP he did not expect any “dramatic breakthroughs”.

Trump’s visit will merit attention if he and Xi manage to “shelve the most sensitive issues” while establishing “rule-based tracks” on points of cooperation, Chen said.

But competition is likely to remain stiff “in high-tech sectors like AI chips that directly involve the core interests of both sides”.

Beijing has denied White House accusations of large-scale Chinese AI “distillation” of US rivals — a technique for training cheaper, smaller models on the outputs of a larger one.

Meanwhile, China’s top economic planning body has blocked Meta’s $2-billion bid for China-founded, Singapore-based AI agent startup Manus.

The move, which followed a regulatory review, has been seen as a sign of China’s growing oversight of its AI sector.

Yet “the talent, capital, and supply chains underpinning the field are deeply intertwined across the United States and China,” said Grace Shao, a China AI analyst and author of the AI Proem newsletter.

“Any delusion of full decoupling isn’t realistic on any near-term horizon”, she told AFP.

“Leadership in the technology… will define the next decade of productivity and growth, so it’s in everyone’s interest that the two superpowers find common ground on sensible guardrails for AI.”



Canada moves to close its AI infrastructure gap


By Digital Journal Staff
May 13, 2026


The Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario (Photo by Sam Barnes/Web Summit via Sportsfile)

Canada’s AI ambitions need physical infrastructure to back them up, and the federal government is opening up its wallet to build it.

On Monday, Minister of Artificial Intelligence and Digital Innovation Evan Solomon announced that Canada and Telus are advancing work under the government’s Enabling large-scale sovereign AI data centres initiative, with a proposed large-scale data centre project in British Columbia. The initiative ran a call for proposals earlier this year.

On Tuesday, at Web Summit Vancouver, Solomon announced $66 million in support for 44 Canadian companies through the AI Compute Access Fund. This is aimed at helping small and mid-sized Canadian companies afford high-performance computing, allowing them to continue building at home.

Taken together, these announcements represent a federal government trying to address Canadian AI development from two ends: first, building the large-scale infrastructure that underpins the whole ecosystem, and second, lowering the cost of entry for the companies trying to build on top of it.

Telus CEO Darren Entwistle pointed directly to demand: “The unprecedented demand that completely sold out our first AI factory in Rimouski proves that Canadian innovators want cutting-edge AI infrastructure built right here on Canadian soil.”

Added Solomon, “Canada cannot compete in the AI economy without the infrastructure to back it up. By advancing this project with Telus, we are taking concrete action to build sovereign AI capacity here in Canada, so Canadian innovators, researchers and businesses have access to the compute they need, while keeping Canadian data, intellectual property and economic advantage on Canadian soil.”

The government has pointed to Canada’s geography, climate, sustainable energy sources, and network infrastructure as reasons the country is well-positioned to attract AI infrastructure investment.

The compute fund addresses a more immediate barrier. For many Canadian SMEs, the cost of high-performance computing has been the wall between an AI idea and a viable product.

The 44 companies receiving support span the life sciences, health, energy, advanced manufacturing, agriculture, finance, natural resources, and transportation sectors. Of the $66 million announced Tuesday, $16.8 million supports eight British Columbia projects.

Additional funding offers are still being finalized.

For Canadian companies that have been waiting on compute, the message from Ottawa this week was straightforward.

Hang tight, it’s coming.

Final Shots

No funding has been committed to the Telus data centre project yet. Monday’s announcement doesn’t complete the work, but it’s a step in the right direction.

The compute fund’s first $66 million goes to 44 companies across eight sectors. Of that, $16.8 million supports eight BC projects.

Both announcements fall under Canada’s Sovereign AI Compute Strategy, which is designed to keep AI development, jobs, and intellectual property on Canadian soil.

When AI writes the code and ships it at 3 a.m.


By David Potter
DIGITAL JOURNAL
May 13, 2026


Shaun Guthrie of RJC Engineers and Nicole Donatti of Data Elephant discuss AI readiness with Matthew Duffy at the 2026 CIOCAN Peer Forum in Vancouver — Photo by Jennifer Friesen, Digital Journal

Someone built a data pipeline using generative AI to write the code, then pushed it into production at three in the morning.

By the time the team managing the company’s data systems found out about it, the pipeline was already running, pulling data from internal systems and pushing it into reports the business relied on to make decisions.

None of the people responsible for that data environment had read the code. None of them knew it was being written.

That’s exactly what happened at a Canadian energy company earlier this year, and a similar version is happening in organizations across the country.

The example was shared at the CIO Association of Canada’s 2026 Peer Forum in Vancouver by Nicole Donatti, transformation leader at Data Elephant, a Vancouver-based data and analytics firm.

The code in the story was generated using vibe coding, the term for using generative AI to write working software from plain-English prompts. The user describes what they want, the AI produces the code, and the user deploys it.

This makes code easy to generate, but also means the people prompting the tool are often employees with no coding background and none of the skills necessary to check the code.

Technology teams have always called this kind of unauthorized work shadow IT. Vibe coding has made it more dangerous by adding working software to the category.

“It is great for experimentation, great for an idea in your tight little group,” says Donatti. “But you need governance around the vibe coding. It can get you to what you think is a great solution really quickly, and get you into trouble just as fast.”

Most Canadian boardrooms are still focused on adoption.

Donatti is working with companies on what to do once the decision to adopt has been made, and where code is already running in places nobody approved.

Maybe IT shouldn’t be the last to know

“It is running rampant through our organization right now,” says Shaun Guthrie, business technology leader at RJC Engineers. “Everyone is just coming towards us with Anthropic vibe coding through Claude Code, and we’re getting inundated with it.”

He described a recent example in which an employee built an invoice automation tool using generative coding tools, presented it to the technology team, and asked to put it into production.

The tool worked. It also assumed a connection to an ERP system the company is in the middle of replacing.

The employee didn’t know the ERP was being replaced. The technology team didn’t know the tool had been built until it was finished. The work had been happening for months in a place where the people accountable for the technology environment couldn’t see it.

Guthrie’s response has been to spend less time issuing policy and more time building relationships across the business. The technology team can’t catch what employees are building if employees don’t tell them. Employees won’t tell them if they think the answer will be no.

“Shadow IT is somewhat our fault for not actually getting in front of it,” he says. “If you do that and you get out and you’re in front of people, it has nothing to do with anything technical. It’s just relationship building. And then they’re actually more open and honest with you, and they’ll tell you what they’re doing.”

The code is only as good as what it touches

A board directive at one of Donatti’s clients told the leadership team to make the company AI-ready and attached a budget.

Eighty proofs of concept followed, each one a small AI experiment meant to test what was possible. By the time Donatti was brought in, the data foundation wasn’t as solid as everyone had been told. Pipelines had been built without a common framework, key context about what the numbers meant was missing, and the reasoning behind business decisions had stayed in people’s heads.

That’s the gap Snowflake’s chief data and analytics officer has called documentation debt: the institutional knowledge about what a column means, how it’s calculated, and when it should be used, which has to exist in writing for an AI to read.

Vibe coding tools assume the data they’re touching is documented and well understood. In most Canadian organizations, it isn’t.

Most organizations never documented their systems thoroughly before AI tools arrived, leaving unattended code to run into problems the company didn’t anticipate.
Jeff Reichard of Veeam at the 2026 CIOCAN Peer Forum — Photo by Jennifer Friesen, Digital Journal

What ends up in front of the tribunal

Canadian companies running unattended AI are accountable for what it does, and many are moving faster than their legal, regulatory, and security thinking can keep up.

Jeff Reichard, senior director of product strategy at Veeam, pointed attendees at the Peer Forum to the Air Canada chatbot case. The chatbot told a customer he could file for bereavement fares retroactively.

The airline argued before the tribunal that the chatbot was a separate entity responsible for its own statements. The British Columbia Civil Resolution Tribunal disagreed and ruled against Air Canada in 2024: the chatbot was running on the airline’s website, and the airline owned its output.

The broader regulatory environment is moving in the same direction. Federal Algorithmic Impact Assessment requirements apply to public sector AI systems, the European Union’s AI Act is in force with fines up to €35 million for the most serious categories of violation, and the courts are increasingly stepping in where regulation remains unclear.

Canadian companies don’t yet have the structures in place to handle that kind of accountability.

PwC Canada’s February 2026 Trust in AI report found that while 72% of Canadian companies name responsible AI as a top priority, 36% still have no dedicated governance function. And 65% say they struggle to identify who owns existing AI systems or to track where those systems are running.

Pulling AI work back into view

Technology leaders trying to get ahead of the wave are making it easier for employees to bring the work into the open before it ships. Guthrie and Donatti are both doing versions of the same thing.

Guthrie’s response inside RJC has been to slow the AI conversation down enough to make the foundations visible, then move quickly.

He nominated 15 people from across the business to an AI working group, sat them in a boardroom for two days, and asked them to generate ideas. They came up with about 100.

The group distilled those to nine, then to three. Each one went into a business case. Each business case was tested against the same question: what data would this need, and do we have it? The answer, mostly, was no.

The result is an AI program that has evolved into a data governance program with AI use cases attached.

Donatti’s response with her clients has been to add a triage layer to incoming work: a scoring system that checks whether teams actually have the data, staffing, and internal support needed to maintain what they build. The goal is to stop teams from shipping things they can’t maintain.

Vibe coding isn’t going away. The work in front of Canadian technology leaders is making sure the next pipeline arrives at their desk before it ships, not at three in the morning without them.

Final shots

Vibe coding is a powerful tool when used with the right guardrails, and a serious problem without them.

Vibe coding depends on data that many Canadian organizations still haven’t documented well enough for AI systems to use reliably.

Companies are accountable for what their AI produces. The Air Canada chatbot ruling and the EU AI Act both establish that, and Canadian regulators are likely to follow.

The technology leaders getting ahead of the wave are letting employees use the tools while making it easier for them to bring the work to IT before it ships.

Digital Journal is the official media partner of the CIO Association of Canada.


Written by David Potter

David Potter is Senior Contributing Editor at Digital Journal. He brings years of experience in tech marketing, where he’s honed the ability to make complex digital ideas easy to understand and actionable. At Digital Journal, David combines his interest in innovation and storytelling with a focus on building strong client relationships and ensuring smooth operations behind the scenes. David is a member of Digital Journal's Insight Forum.


