Deepfakes on the rise: Italian forensic expert explains how to spot AI deception at c0c0n 2025
Kochi: At a time when deepfakes — digitally manipulated images or videos — are becoming more convincing and harder to detect, experts at c0c0n 2025, Kerala Police’s annual cybersecurity conference in Kochi, warned of the growing threat they pose. Massimo Iuliani, an Italian forensic expert and analyst at Amped Software, took the stage to show how artificial intelligence is making it increasingly difficult to distinguish real images from fake ones and how forensic tools can help fight back.
Iuliani’s session, titled ‘From pixels to proof: Forensic techniques for image authentication and deepfake detection’, combined caution with practical insight. Using real examples, he demonstrated how deepfakes, which were once clumsy and easy to spot, have evolved into sophisticated forgeries that can fool even trained eyes. “You can’t rely on what you see anymore. These fakes are so realistic that anyone could be misled, from news readers to law enforcement,” Iuliani warned.
He explained that traditional cues like odd shadows, mismatched reflections, or warped features are no longer enough to spot AI-generated content. Instead, investigators need advanced forensic tools that can uncover hidden digital clues.
Even when a deepfake image has been altered, shared, or stripped of its metadata — the hidden information embedded in a file about when, where, and how it was created — the software can detect tiny patterns left behind in the pixels themselves. “These patterns act like a digital fingerprint. They give experts hints that an image might have been generated by AI rather than captured by a camera,” Iuliani said.
Iuliani presented Amped Authenticate, a tool designed by his company to identify such traces, including images created by AI models like Midjourney, Stable Diffusion, and DALL·E.
The software can assign a confidence score to each image — essentially a measure of how strongly the system believes the image was created by AI. A higher score indicates stronger evidence that the image is a deepfake, while a lower score suggests it is more likely to be real. He emphasised that while no system is foolproof, these tools provide a crucial first line of defence in a world where AI can easily mimic reality.
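To illustrate the idea of a confidence score in general terms, the sketch below maps a score to a rough verdict. The thresholds, labels, and function name here are hypothetical examples chosen for illustration; they do not reflect Amped Authenticate's actual scale or API.

```python
# Illustrative sketch only: interpreting a deepfake confidence score.
# The 0-100 scale and the cut-off values below are hypothetical,
# not Amped Authenticate's actual thresholds.

def interpret_score(score: float) -> str:
    """Map a 0-100 confidence score to a rough verdict."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "likely AI-generated"
    if score >= 40:
        return "inconclusive - needs further forensic review"
    return "likely camera-original"

print(interpret_score(92))  # high score: stronger evidence of a deepfake
print(interpret_score(15))  # low score: more likely a genuine photograph
```

The point of such a mapping is the one Iuliani stressed: the score is evidence to be weighed by an analyst, not a verdict on its own.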
Beyond the technology, Iuliani focused on the broader implications of deepfakes. They are not just a cybersecurity challenge; they pose a societal threat, capable of spreading misinformation, framing innocent people, or undermining trust in digital media. He urged professionals to combine forensic tools with careful observation, verification, and cross-checking, so that false images and videos do not fool anyone.
“Detecting deepfakes is not just about software. It is about understanding how these images are made, noticing what does not quite add up, and knowing where to look for clues,” Iuliani said.