By David Potter
DIGITAL JOURNAL
March 25, 2026

JP Lalonde speaks at the 2025 CIO Association of Canada Peer Forum. — Photo by Jennifer Friesen, Digital Journal
The business model is familiar. Build something useful, make it cheap to adopt, wait until you’re hooked, then monetize you.
It worked with social media. The same people are running the same play with AI, and this time what’s at stake isn’t your attention. It’s your thought process and secrets, or those of your employer.
Employees are feeding AI tools their most sensitive work. Strategy documents, client communications, internal deliberations. Organizations are sharing institutional knowledge through platforms owned by a small number of companies that operate under U.S. law, with terms that keep changing. Most haven’t stopped to ask who owns what’s being collected.
That’s the argument JP Lalonde has been making as he prepares to take the stage at the CIO Association of Canada’s Peer Forum conference in Vancouver on April 15 and 16.
Lalonde spent years inside the federal government working on AI systems used to track money laundering, disrupt terrorist financing, and identify human trafficking networks. He has spent a long time thinking about what it means to protect sensitive data and who ultimately controls it.
Today, his concern is less about hackers than about the platform model itself.
A familiar playbook
Lalonde draws a direct line from the tobacco industry to the food industry to social media to AI.
The mechanism, he argues, is consistent. Take a genuine human need, strip it down to its most addictive components, and build a business model around the dependency.
Social networks took connection and distilled it to likes, swipes, and algorithmic feeds. The consequences have taken years to reach the courts.
On March 24, a New Mexico jury found Meta liable for misleading consumers about the safety of its platforms and endangering children, ordering the company to pay $375 million. It is Meta’s first loss in a series of child safety trials currently underway across the United States.
“It’s the same people, it’s the same business model,” says Lalonde.
The signs are already visible in AI.
Microsoft announced in October 2024 that it would begin placing ads inside Copilot conversations, including within paid subscription tiers.
Pricing has followed, with Microsoft’s Copilot licensing costs increasing in early 2025. Organizations that embedded AI tools into their workflows on one set of terms are discovering those terms were not fixed.
“They’re going to use our personal data to sell us more s*** through AI,” says Lalonde.
Another concern is what people are putting into these systems.
Individuals are using AI tools as journals, therapists, and memory aids, pouring personal conversations, photographs, and intimate details into platforms owned by corporations. That data, Lalonde argues, shouldn’t belong to anyone but the person who created it.
He says it should be treated as a matter of human rights that corporations cannot own people’s memories.
For organizations, the question is parallel. Where is your institutional knowledge going, who controls it, and what happens when the terms change again?
What sovereign AI means, and for whom
It’s hard to turn on the news without hearing a politician talking about sovereign AI, but the term means different things depending on where you sit.
At the national level, it means having AI infrastructure that operates under Canadian law, not at the discretion of foreign platforms subject to foreign governments.
At the individual level, it’s about owning the data and memory you’ve built inside AI systems rather than surrendering it to a platform that can change what it does with that material at any time.
For businesses, it’s about controlling where organizational knowledge lives and ensuring that a vendor’s policy shift doesn’t become your governance crisis.
Digital sovereignty is increasingly a business issue, and the conversation is arriving faster than most organizations anticipated.
Harvard professor Shoshana Zuboff, whose work on surveillance capitalism identified the underlying logic of the platform economy, has argued that AI represents the next and more powerful iteration of the same dynamic: behavioural data harvested at scale, used to predict and shape what people do next.
These conversations are connected. An employee using a U.S.-hosted AI tool for sensitive internal work is making an individual decision with organizational and national consequences.
Most of those decisions are being made right now, without a framework for thinking about them.
Building the exit
The response Lalonde is building, alongside a growing number of engineers, government technologists, and entrepreneurs working on the same problem, is a move away from dependence on closed platforms controlled by a small number of companies whose interests don't necessarily align with those of the organizations that use them.
The architecture they’re advocating is open-source large language models running on your own infrastructure, with no data leaving your perimeter.
The performance gap between closed frontier models and open-source alternatives has been narrowing rapidly.
A 2025 Menlo Ventures survey of enterprise technical leaders found that 13% of enterprise AI workloads now run on open-source models, down from 19% at the start of the year. The January 2025 DeepSeek R1 release reshaped expectations about what open models could do, but enterprise adoption has since consolidated around a handful of closed-source providers. For many organizational use cases, the capability is there. Whether most organizations have the will and the internal capacity to operate it themselves is a different matter.
Some Canadian organizations are beginning to make this shift.
Telus launched Canada's first sovereign AI factory in Rimouski, Quebec, in September 2025. The facility keeps computing power and sensitive data inside the country, with early partners across healthcare, financial services, and enterprise software.
Lalonde’s Project Pronghorn is another example of what sovereign AI infrastructure can look like in practice.
The platform was developed in collaboration with Janak Alford, Alberta’s Deputy Minister of Technology and Innovation, and is released as a free, open-source tool under MIT licence.
Alford has described it as Alberta’s AI Factory for Digital Government — a sovereign, open-source platform designed to help government modernize legacy IT systems at a fraction of the usual time and cost. It’s designed to let organizations build enterprise-class systems on their own infrastructure, connected to whichever language models they choose, with the ability to exit any of them without losing data and institutional memory.
“We are telling people how to extract that memory and bring it into another model,” says Lalonde. He adds that the goal is to keep organizations from being held hostage by vendors when it comes to their own information and data.
There’s also a capacity question to consider. Running open-source models on-prem requires people who understand systems architecture, infrastructure, and security.
While teaching at the Alberta AI Academy, an open-access AI literacy training program for public servants, Lalonde saw a significant drop-off as training moved from foundational AI literacy to technical implementation.
The people who can actually build and maintain sovereign infrastructure are not evenly distributed, and most organizations don’t have enough of them.
The case for agency
Lalonde is stepping back from his role at the Impact Assessment Agency of Canada to focus on the work he’s been building outside government. The conversation, he says, is just getting started.
“I think we all have our best interests in mind and I think we could all work together and innovate together,” he says.
The stakes are concrete. Microsoft has stated publicly that U.S. law takes precedence over Canadian data sovereignty commitments when the two conflict. For Canadian organizations hosting sensitive data on U.S.-owned platforms, that's the current condition. Canada's national AI strategy has made sovereign infrastructure a stated priority, but the practitioners building it aren't waiting for policy to lead the way.
For leaders weighing platform choices right now, the AI tools being adopted today are locking in decisions about data that will be difficult to reverse. The organizations building exits now will have options that those who waited won't.
Sovereignty belongs in the architectural decisions being made right now.
Final shots
The performance gap between closed frontier models and open-source alternatives is narrowing faster than most enterprise AI strategies anticipated, but closing that gap on your own infrastructure requires technical capacity most organizations are still building.
Canada’s national AI strategy creates a policy window, but the teams building sovereign infrastructure aren’t waiting for it to close.
The leader who hasn’t defined exit conditions for their AI platforms should treat that as an open governance gap, not a future consideration.
Digital Journal is the national media partner for the CIO Association of Canada.

Written by David Potter
David Potter is Senior Contributing Editor at Digital Journal. He brings years of experience in tech marketing, where he’s honed the ability to make complex digital ideas easy to understand and actionable. At Digital Journal, David combines his interest in innovation and storytelling with a focus on building strong client relationships and ensuring smooth operations behind the scenes. David is a member of Digital Journal's Insight Forum.