When AI 'enhances' or 'improves' an image
Issued on: 24/03/2026
Using AI to "enhance" or "upscale" an image can make it look sharper. But it can also be dangerous. Several pieces of misinformation have recently circulated using these kinds of images.
In the US city of Minneapolis in January, a woman named Renée Good was shot and killed by an agent from US Immigration and Customs Enforcement (ICE).
Videos filmed at the scene show the agent wearing a facemask. But online users asked Grok, Elon Musk's AI chatbot, to "unmask" the ICE agent.
Grok generated clear, detailed images showing the man's full face. But there was a problem: Grok had, of course, invented the faces.
Grok even gave the man a name, which was shared across social media. The result? Men who shared that name or resembled the generated images were falsely accused of shooting Renée Good.
AI tools turn a telephone into a gun
Another image from Minneapolis shows Alex Pretti, an intensive care nurse who was killed by agents from the US Border Patrol.
Federal officials said he was holding a gun. Online users began sharing images that had supposedly been "enhanced" by AI, which appeared to show a gun in Pretti's hand.
But analysis of multiple videos from the incident, filmed from different angles, showed that Pretti was holding a telephone, not a gun. In "enhancing" the quality, the AI tools ended up turning a telephone into a gun.
When you see an image that has been "enhanced" by AI, be cautious:
- Check the original image.
- Compare it with the "enhanced" version.
- Verify it with other sources.
And only share an image when you're sure it's real.
This article was published on the occasion of France's Media in Schools Week, March 23-27, 2026.