Deepfakes and the Next Frontier of Insurance Risk

What happens when you can no longer trust your own eyes—or your boss’s voice on a video call?
The insurance world is now facing one of its most unusual challenges yet: deepfakes. Once a cinematic gimmick and internet joke, deepfakes have become sophisticated enough to impersonate CEOs, manipulate claims, and even fabricate criminal evidence. The implications for insurers are vast, stretching from cybercrime and fraud to brand reputation and liability.

The conversation around deepfakes has moved well beyond Hollywood. In early 2024, Arup, the world-renowned engineering firm behind the structural design of the Sydney Opera House, fell victim to a deepfake-enabled cyber scam. A finance employee joined what appeared to be a routine video call with several senior executives, including the company’s chief financial officer. Everyone looked and sounded legitimate. Within an hour, $25 million had been transferred across 15 separate transactions. Every participant on that call, except the employee, was an AI-generated replica.

This was not a crude attempt at phishing or a spoofed email; it was a full-fledged, hyper-realistic digital impersonation of real company officers. And the most striking part? There is no public record of insurance coverage linked to the event. Whether Arup lacked a cyber policy or insurers denied the claim, the silence speaks volumes. For all its technological progress, insurance has yet to fully adapt to the deepfake era.

Losses from AI-generated fraud are expected to exceed $40 billion by 2027, up from just $12 billion in 2023. Humans can correctly identify deepfakes only about 55% of the time—barely better than chance. Tools like OpenAI’s Sora and similar text-to-video models now produce footage that can be hard to distinguish from reality. Fraudsters don’t even need technical expertise; they just need access to enough publicly available data—your face, your voice, your LinkedIn photo—and a compelling story.

The insurance implications run deep.
Cyber policies, once designed for ransomware and data breaches, now face entirely new categories of loss. Some carriers, like Coalition, have begun explicitly covering AI-related security events and even working to claw back stolen funds when possible. But most cyber policies still exclude social media impersonation and deepfake-related financial loss, leaving clients exposed to modern threats that don’t fit legacy definitions.

Even specialty lines like Directors & Officers (D&O) insurance face pressure to evolve. A director fooled by a deepfake could, in theory, trigger claims of negligence or fiduciary breach. Yet no publicly documented case exists where D&O coverage has been invoked for a deepfake event. The absence of precedent means the industry lacks both the pricing models and the confidence to cover these new liabilities.

The problem isn’t confined to corporations. Consumers and small businesses, too, are weaponizing AI to commit insurance fraud. In the UK, an auto repair company recently submitted a photo of a van with a “damaged” bumper to trigger a payout. Investigators later discovered the same image—undamaged—on the company’s Instagram profile. Technically it wasn’t a deepfake but what experts call a “shallow fake”: a digitally edited photo that mimics real loss. Small claims like these often slip through because the cost of investigation exceeds the payout. As noted in the episode, “Sometimes it’s cheaper for insurers to just pay a £50 claim than to send an adjuster.”

These incidents raise a critical question: how do insurers verify reality in a world where proof itself can be fabricated?
Manual investigation can’t scale, and AI detection tools remain unreliable, prone to false positives and easily fooled by the same technology they’re meant to stop. As Harsh Chandnani pointed out, “You could upload the Bible to an AI detector and it might flag it as AI-generated—because the system has seen it before.” For insurers, the challenge is no longer just underwriting risk, but underwriting truth.

Beyond fraud, deepfakes create reputational and psychological risks that insurance can’t yet quantify. Imagine opening your phone to see yourself starring in a fast-food commercial you never filmed, or your likeness used in political propaganda. Advertisers and platforms are already experimenting with personalized AI content that mirrors users’ faces and voices. The day may not be far off when your television—or your social feed—shows you eating at McDonald’s, smiling back at yourself. What’s the reputational or legal liability when your digital twin endorses something you never consented to?

Deepfakes blur not just identity, but accountability. They sit at the intersection of cybercrime, misinformation, and psychology—territory where insurance has little historical precedent. And as the technology improves, the distinction between “fraud” and “mistake” becomes less clear. Was it deception, negligence, or simply human error? In a world where digital forgeries can outsmart human senses, risk modeling itself may need a rewrite.

Insurance has always evolved to protect society’s blind spots—from the first marine policies to cyber coverage. Deepfakes represent the next frontier: not just a technical threat, but an existential one. When even video evidence and voice authentication can’t be trusted, the foundation of modern trust—and by extension, the business of insurance—must be rebuilt.

Listen to the full conversation on the Coverage & Coffee Podcast to explore how deepfakes are reshaping the boundaries of truth, trust, and insurability.
