‘AI deception is a threat to democracy. It’s time the law caught up’

Last month George Freeman MP woke up to find a video circulating online that appeared to show him defecting from the Tories to Reform UK. It was slickly produced, believable enough to fool casual viewers, and entirely fake.
Freeman went to the Norfolk Police. They took it seriously and initially treated it as a potential false communications offence under the Online Safety Act 2023. However, they later decided that it did not meet the legal test for a crime.
The wrong perpetrated against Freeman and the public in this case is clear: a deliberate intent to deceive.
‘Satire makes fun of power. Deceit undermines truth.’
Our law already recognises that impersonation can be an offence. For example, pretending to be a police officer is a criminal offence because society relies on believing that the person in uniform really has the authority they claim. The same logic should apply to politics.
There have long been calls for new rules to govern election content, aiming to prevent misleading communication in our politics. The arrival of generative AI has made it even more urgent to bring forward a new code of conduct for campaigning. The main objection to doing so is that it might curtail free speech.
However, when someone deliberately uses technology to impersonate a political candidate, they’re not exercising free expression; they’re committing a form of democratic fraud. The difference between satire and deceit is as old as politics itself. Satire makes fun of power. Deceit undermines truth.
‘Using AI to impersonate a candidate can cross into deception’
For a time there was plenty of hype about generative AI in politics but little sign of its impact. That has now shifted. Candidates and parties are increasingly using AI-generated videos, images and voice clips to shape messaging, to dramatise policy stakes, and to mock opponents. This is the new normal.
For example, in October, Donald Trump reposted an AI-generated video showing himself as a fighter-pilot wearing a crown and dropping sludge-like “faeces” on protesters during the “No Kings” demonstrations. It was clearly fantastical in nature. Nobody thought he was claiming to have actually done it.
And in New York’s mayoral race, Andrew Cuomo’s campaign released an AI-generated video on Halloween, depicting his opponent Zohran Mamdani trick-or-treating; among other barbs, it showed Mamdani taking 52% of the candy on offer instead of the customary one as a way of attacking his tax policy. The video was fairly realistic and featured a very accurate voice clone, but the production was still a bit shonky. The situation was clearly satirical, and it featured a large disclaimer throughout highlighting the use of AI. As such, nobody would have mistaken it for the real thing.
These examples illustrate that generative AI can be used to depict politicians while staying on the right side of the line. But using AI to impersonate a candidate in a video can cross into deception. In Cuomo’s case, the combination of cloning Mamdani’s image and voice and presenting him doing things he never did means the Halloween advert came close to that line. Had the disclaimer appeared only at the end, or the production quality been 20% higher, it might have crossed it.
‘AI is enabling those intent on manipulating voters’
The test for whether a piece of content is satire or misrepresentation should be whether the median voter believes the candidate genuinely said or did what is being presented. The moment the audience has to question – even for a second – whether they are watching drama or reality, it’s a problem.
It’s worth noting that AI isn’t a necessary ingredient for misleading claims in election content. For years, our politics has been poisoned by things like dodgy leaflets posted through doors, designed to mislead voters into thinking a candidate said something they did not. AI simply enables those intent on manipulating voters to make their misleading claims far more believable.
While we must address deception, it is equally important to recognise why parody must be protected. Britain has a rich tradition of political parody, from Spitting Image to Private Eye to the endless stream of mash-ups that lampoon ministers every week. These aren’t threats to democracy; they’re part of it. Parody signals to the audience that it’s a joke. The humour only works because we know it’s not real. That’s the line any sensible law must hold. Outlawing political impersonation shouldn’t muzzle humour, commentary or criticism. It should simply criminalise the use of artificial intelligence or digital tools to deceive voters about who’s speaking.
‘AI could change an election unless the law catches up’
A deep-fake designed to make a candidate appear to say something they never said, especially during an election, is no different in spirit from distributing a fake ballot paper. It’s a direct assault on the integrity of the democratic process.
The Norfolk Police’s decision not to progress Freeman’s case makes clear that there is currently a significant gap in the law. The Online Safety Act 2023 doesn’t cover it. Section 106 of the Representation of the People Act 1983 makes it illegal to publish false statements about a candidate’s personal character or conduct during an election. That’s a narrow and outdated safeguard. It doesn’t cover impersonation, and it doesn’t apply outside campaign periods. Defamation law is too slow and too costly to help in the middle of a fast-moving online storm. By the time a candidate proves a deep-fake is false, the clip has already gone viral.
New rules, properly enforced, could change this and the upcoming Elections Bill is the perfect opportunity. Within that legislation, as part of a code of conduct on campaigning, we need an offence that focuses on intent and authenticity. A simple principle: it should be illegal to create or distribute digital content that falsely purports to be a political candidate (or claims to be speaking for them), with the intent to deceive voters. Alongside that, a clear exemption for parody, satire or artistic expression. A law like this wouldn’t protect politicians from mockery but it would protect the public from manipulation.
Freeman, speaking at a recent event in the House of Commons on disinformation in campaigning, said: “I dread an election in which videos are put out 24 hours before saying that Justin has a secret habit and everyone goes, ‘Well, I’m not voting for him.’ It could change an election”. He’s right. It could, and soon it will, unless the law catches up.
Cyber Proofing

(Article originally published in Sept/Oct 2025 edition.)
With the Coast Guard's final cybersecurity rule in effect as of July 16, 2025, and the training mandate due January 12, 2026, the marine transportation system is being pushed to treat cyber risk as an operational reality.
Two voices, one from a maritime technology company and one from a safety-and-risk leader, point to the same answer: If the industry wants resilience, it must move beyond checklists toward engineered controls, measurable hygiene and contracts that create accountability.
SECURITY OPERATIONS CENTERS
Now a part of DNV, Singapore-based CyberOwl is focused on a stubborn problem for shipowners: the practical visibility of onboard operational technology.
"One of the toughest challenges in cybersecurity for shipping is how to simplify gaining visibility of both connected and presumably unconnected operating technology (OT) systems," says CEO Daniel Ng. "As part of DNV, collaboration with OEMs has become significantly more meaningful."
The aim is to push some responsibility upstream so security data is logged, passed and delivered consistently from design through operation. That reduces fragile retrofit needs and gives owners a clearer evidentiary trail.
Ng argues that fear of a security operations center (SOC) often comes from enterprise pricing that does not fit maritime economics. "Putting in place a SOC service does not have to be as hard or expensive as people imagine," he says, if costs are predictable on a per-vessel/per-day basis.
He recommends a minimum viable capability where a complete SOC is not feasible. Configure alerts for a few safety-critical use cases with onboard equipment, focusing on network bridging (creating a direct path between two otherwise separated networks at the data-link level, making them act as one) and remote access. Keep a "zero-hour" (when a new threat first appears) incident response arrangement so a maritime-experienced team can deploy quickly.
He cautions that this stopgap usually costs more over time than a right-sized SOC that shuts incidents down early and steadily improves hygiene.
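To make the minimum viable capability concrete, here is a rough sketch – not CyberOwl's product, just an illustration under assumed conditions – of what alerting on those two use cases might look like on a Linux gateway. The approved-bridge name, polling interval and print-based alert sink are all hypothetical placeholders.

```python
"""Minimum-viable OT alerting sketch (illustrative only).

Polls a Linux gateway for the two conditions discussed above:
new bridge interfaces (which can silently join otherwise
segregated networks) and new remote-access sessions.
"""
import subprocess
import time
from pathlib import Path

KNOWN_BRIDGES = {"br-crew"}  # hypothetical approved bridge interface
POLL_SECONDS = 60

def current_bridges() -> set[str]:
    # On Linux, an interface is a bridge if /sys/class/net/<iface>/bridge exists.
    return {p.parent.name for p in Path("/sys/class/net").glob("*/bridge")}

def remote_sessions() -> set[str]:
    # `who` lists logged-in users; remote sessions include a host in parentheses.
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    return {line for line in out.splitlines() if "(" in line}

def alert(message: str) -> None:
    print(f"ALERT: {message}")  # stand-in for a real SOC notification channel

if __name__ == "__main__":
    seen = remote_sessions()
    while True:
        for br in current_bridges() - KNOWN_BRIDGES:
            alert(f"unapproved bridge interface detected: {br}")
        for session in remote_sessions() - seen:
            alert(f"new remote-access session: {session}")
        seen |= remote_sessions()
        time.sleep(POLL_SECONDS)
```

Even a loop this crude illustrates the principle: watch a handful of safety-critical conditions continuously rather than auditing everything occasionally.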
ATTACK VECTORS
The attack picture is blunt.
"Unfortunately, the top vector is still USB," Ng notes. "This represents 75 percent of all the malware incidents we saw during 2024. That trend continues in 2025 so far." Physical USB locks do not fix the problem. He calls them "a poorly understood control that is clearly not working" because crews can unlock them and warns that inspecting for them encourages security theater and distracts from controls that lower risk.
Remote access as an ingress route is also rising "from four percent of the incidents we saw in 2023 to 13 percent in 2024," a byproduct of digitalization and supply chain exposure.
The practical prescription remains simple and effective: Segment critical systems and implement USB controls that work in practice rather than appearance.
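As one hedged illustration of a USB control that checks behavior rather than appearance, the Python sketch below audits attached devices against an approved manifest on a Linux host. The vendor/product IDs are hypothetical; segmentation itself is a network-design task and is not shown.

```python
"""USB allowlist audit sketch (illustrative, Linux-only)."""
from pathlib import Path

# Hypothetical approved (vendor ID, product ID) pairs.
APPROVED = {("0951", "1666")}

def attached_usb_ids():
    # Each USB device under /sys/bus/usb/devices exposes idVendor/idProduct.
    for dev in Path("/sys/bus/usb/devices").iterdir():
        vid, pid = dev / "idVendor", dev / "idProduct"
        if vid.exists() and pid.exists():
            yield vid.read_text().strip(), pid.read_text().strip()

for vendor, product in attached_usb_ids():
    status = "approved" if (vendor, product) in APPROVED else "NOT APPROVED"
    print(f"usb {vendor}:{product} -> {status}")
```

Run on a schedule and fed into the kind of alerting described earlier, a check like this flags unauthorized media the moment it appears, rather than relying on a lock a crew member can open.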
DETECTING THREATS & BUILDING DEFENSES
CyberOwl's answer to the so-called "evidence problem" is to make proof easy to produce.
The company's OT Security Manager mines maintenance documents and spreadsheets to build a defensible inventory at roughly 60 to 70 percent accuracy without installing software onboard. Crews then verify the remainder through targeted walkthroughs or scans while AI flags inconsistencies. Medulla, CyberOwl's cybersecurity monitoring platform, then turns that baseline into a hygiene scorecard mapped to IMO guidance, IACS E26 and E27 and NIST so crews can produce a ready-to-show evidence pack in minutes.
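This is not CyberOwl's implementation, but the shape of that document-mining step can be sketched in a few lines: build a draft inventory from maintenance records, then flag conflicting rows for crew verification. The file name and column names below are hypothetical.

```python
"""Toy inventory reconciliation from maintenance records (illustrative)."""
import csv
from collections import defaultdict

# system name -> set of firmware versions recorded for it
assets = defaultdict(set)

with open("maintenance_log.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        assets[row["system"].strip().lower()].add(row["firmware"].strip())

for system, versions in sorted(assets.items()):
    if len(versions) > 1:
        # Conflicting records: the kind of inconsistency flagged for
        # a targeted crew walkthrough or scan.
        print(f"VERIFY {system}: conflicting firmware {sorted(versions)}")
    else:
        print(f"OK {system}: firmware {versions.pop()}")
```

The point is the workflow, not the code: machines draft, crews verify, and the evidence trail falls out as a byproduct.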
Ng also urges the industry to look upstream at how E26 (which sets minimum requirements for the cyber resilience of ships) plays out in practice. He concludes that while E26 is imperfect, it's a reasonable step.
The issue is where implementation begins: at the shipyard. Many newbuilds come from yards where cybersecurity receives less attention, and that mindset flows to owners who must operate and maintain the result. He cautions that some yards simplify for convenience by pushing a single template for network architecture, securing class approval and then telling owners it's the only way to safeguard a ship. That approach hinders fleetwide harmonization and locks in design choices that may not serve the operator.
Procurement is where behavior changes fast.
"We're seeing an increasing number of charterparty contracts demanding minimum-level cybersecurity, particularly in the oil and gas segments," Ng says. He wants OEM supply-and-service contracts to spell out responsibilities, liabilities and incident support for safety-critical systems where owners lack direct control over vendor equipment, such as black boxes.
FROM RULES TO READINESS
Michael DeVolld, Senior Director for Maritime Cybersecurity at ABS Consulting, says cyber connects when it lives in daily work: "Cyber resonates best when integrated into the policies and procedures crews already use, not treated as something separate."
He points to tabletop exercises alongside fire or spill drills and real-world cases where cyber events disrupted navigation, cargo operations and port logistics. "At the end of the day, cyber connects best when framed in the same terms crews already live by – safety, reliability and continuity."
He widens the lens to the economy: "Too often, cyber is seen as an IT issue, not a supply chain crisis. The Suez Canal blockage is a telling parallel. It wasn't a cyber incident, but one ship that caused billions in losses and cascading congestion. A cyberattack could create the same disruption, only faster and across multiple ports or vessel classes. The knock-on effects – demurrage, delayed cargo, missed contracts and even inflation – can scale quickly. Insurance exclusions are growing, leaving operators more exposed than they realize. The risk is not just corporate, it's systemic. The sooner we recognize cyber as an economic stability issue, the better prepared we'll be."
Training is both the near-term test and the long-term project.
"I see a split in how operators approach the 2026 training deadline," DeVolld says. "Some are leaning in early, working with the Coast Guard and experienced consultants to understand the intent and tailor training to operations. Others assume their corporate off-the-shelf training will check the box, and that could result in a major gap. Generic programs often do not prepare crews, facility operators or contractors for the realities of maritime systems."
His fix is to embed role-based practice in the safety culture that crews already live by and to scale sensibly for smaller firms: "Cybersecurity should be taught from the beginning, built into maritime academies, woven into STCW training and treated as core professional knowledge."
On the technology side, he wants higher-quality software and systems from shipyards, vendors and integrators through secure coding, rigorous testing and disciplined integration so vulnerabilities are engineered out before they reach operations: "AI can support defense by detecting anomalies faster and making sense of complex systems, but it's not mature enough to be a foundation. Every new product seems AI-driven, yet AI cannot replace the basics – governance, configuration, skilled people and tested plans."
He also warns: "Phishing campaigns are more sophisticated. Deepfakes and disinformation are eroding trust, and reconnaissance tools are mapping targets with alarming speed. Combine that with risks of model manipulation and reduced human oversight, and vulnerabilities multiply. AI should be a tool to augment human operators, not a substitute for the fundamentals that make systems resilient."
Vendors are part of daily operations, so procurement must carry weight.
ABS Consulting recommends writing clear cyber expectations into contracts and service levels, covering how remote access is authorized and logged, the timelines for fixes, and incident support. Modernization should be deliberate on ship and shore with critical networks separated, remote access tightened, failover paths tested and coordination with authorities exercised before inspections or incidents.
EMBEDDED RESILIENCE
Maritime cybersecurity will mature when evidence replaces theater, design anticipates operations and people practice until response feels routine.
From bridge to berth, the playbook is simple: Measure what matters, contract for accountability, modernize deliberately and train for the job you actually do. Do that, and resilience becomes embedded in daily work everywhere.
The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.