
Is This Real? - Deepfakes and the Emerging Executive Threat


In my last article, I examined how the identities of service members and veterans are hijacked for scams. But the threat landscape has shifted. Criminals no longer need to steal your likeness — they can manufacture it.


Deepfakes and synthetic voice cloning are not new threats. The concept has been around for years. What makes them urgent today is their expansion in scale, speed, and accessibility. What once required specialized skills and resources can now be done in real time with off-the-shelf tools.


When Video Isn’t Proof

In early 2024, a finance employee at Arup, a global engineering firm, joined what looked like a routine video conference. On the screen were the CFO and several colleagues. Except they weren’t real: every face and voice on the call was a deepfake. Convinced the meeting was legitimate, the employee transferred more than $25 million to criminals.¹


This wasn’t a phishing email or a spoofed account. It was a full video conference with “trusted” leaders — and it worked.


From Robocalls to Politics

The same technology is already influencing our elections. During the January 2024 New Hampshire primary, voters received robocalls from what sounded like President Joe Biden, urging them not to vote. The voice was AI-generated.²


The FCC responded swiftly, ruling that AI-generated voices in robocalls are illegal, with authority to fine perpetrators and block the carriers that transmit them.³ Multi-million-dollar penalties have followed, a sign that regulators recognize this is not a fringe concern but a growing risk.⁴


Families on the Front Line

Beyond corporate boardrooms and political campaigns, families are in the crosshairs. The FTC has tracked a sharp rise in scams where criminals clone the voice of a child or grandchild in a staged emergency: “I’ve been in an accident, send money.” Seniors, already bearing the brunt of impersonation scams, are especially vulnerable.⁵ The National Council on Aging has documented real cases where parents wired funds within minutes, convinced their child was in danger.⁶


Why We Can’t Rely on Human Judgment

Research shows that people are far less capable of spotting deepfakes than we assume. A 2025 study by iProov found that out of 2,000 participants, only 0.1% were able to correctly identify all deepfake and real media presented — even after being told some were fake.⁷ Another academic study in 2024 concluded that even with feedback-based training, detection accuracy improved by only about 20%, while participants reported increased anxiety and self-doubt.⁸

And as fast as detection models are built, adversaries evolve. The pace of change since 2018 is staggering: what once took days of processing now happens in real time, with consumer-grade software. The gap between defense and offense isn’t closing — it’s widening.


A Lesson From 2018 to Today

Many remember Jordan Peele’s 2018 “Obama” deepfake video as the public wake-up call — a creative stunt that showed the world what was possible. What was once a novelty has since become a daily reality.


In my own work, I’ve had the privilege of collaborating with Dr. Hany Farid, one of the foremost experts in digital forensics. Our conversations have reinforced a sobering truth: detection is always chasing a moving target. 


Hany has cautioned that while some companies boast near-perfect detection rates, most of those claims fall apart in the wild.⁹ He’s also highlighted how deepfake makers are now replicating even physiological signals, like heartbeats, to outsmart forensic tools.¹⁰ And he has been clear that no publicly available detection model today can be fully trusted in high-stakes scenarios.¹¹


The lesson is clear: as fast as defenders adapt, adversaries are adapting just as quickly — if not faster.


The Consequences

Deepfakes and synthetic voices are expanding threats with real-world fallout:


  • Financial fraud: As the Arup case proved, even video meetings can no longer be trusted.

  • Reputation attacks: A single fabricated clip of a senior official or executive can destroy credibility overnight.

  • Political disinformation: AI-generated voices and videos can suppress votes or polarize communities.

  • Erosion of trust: When “seeing” or “hearing” no longer guarantees truth, digital communication itself is destabilized.


Closing the Gap

We can’t treat deepfakes as tomorrow’s problem. They are here now, and expanding. Closing the gap requires:


  • Enterprise controls: Out-of-band verification for high-risk transactions (a minimal sketch follows this list).

  • Technical standards: Adoption of C2PA content credentials to authenticate legitimate media (see the second sketch below).

  • Legal protections: Europe is moving first; Denmark’s draft bill would grant individuals ownership rights over their own likeness and voice. The U.S. must follow.

  • Public awareness: Families — especially seniors — need to understand the risks of cloned voices and fake videos.
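
To make the first control concrete, here is a minimal Python sketch of an out-of-band verification gate. Every name in it (PaymentRequest, CONTACT_DIRECTORY, the dollar threshold) is hypothetical and invented for illustration; the point that matters is that approval must travel over a channel the requester did not choose, such as a callback number maintained outside email and calendar invites.

```python
# Hypothetical sketch of an out-of-band verification gate for payments.
# All names (PaymentRequest, CONTACT_DIRECTORY, the threshold) are
# illustrative, not drawn from any real system.

from dataclasses import dataclass

HIGH_RISK_THRESHOLD = 10_000  # dollars; tune to your own risk appetite

# Pre-registered callback numbers, maintained separately from email and
# meeting invites, so an attacker who controls the invite cannot also
# control the callback channel.
CONTACT_DIRECTORY = {
    "cfo@example.com": "+1-555-0100",
}

@dataclass
class PaymentRequest:
    requester: str   # who asked, as claimed on the call or in the email
    amount: float
    channel: str     # "video_call", "email", "phone", ...

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """High-value requests, or requests arriving over an impersonation-prone
    channel, must be confirmed on a second, independently sourced channel."""
    impersonation_prone = req.channel in {"video_call", "email", "phone"}
    return req.amount >= HIGH_RISK_THRESHOLD or impersonation_prone

def release_payment(req: PaymentRequest, confirmed_out_of_band: bool) -> bool:
    if requires_out_of_band_check(req) and not confirmed_out_of_band:
        callback = CONTACT_DIRECTORY.get(req.requester, "<escalate to security>")
        print(f"HOLD: call back {req.requester} at {callback} before releasing.")
        return False
    print(f"Released ${req.amount:,.2f} for {req.requester}.")
    return True

if __name__ == "__main__":
    # An Arup-style request: enormous sum, initiated on a video call.
    req = PaymentRequest("cfo@example.com", 25_000_000, "video_call")
    release_payment(req, confirmed_out_of_band=False)  # held for callback
```

Had a rule like this been in place, the video call itself could not have authorized the transfer; the callback would have gone to the real CFO.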

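For the standards item: C2PA Content Credentials attach a cryptographically signed manifest to a media file, recording who created it and how it was edited. Below is a hedged sketch of checking for one, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed and that its default invocation prints a file’s manifest as JSON; the file name is hypothetical, and any unverifiable outcome is conservatively treated as “provenance unknown.”

```python
# Minimal sketch of checking C2PA Content Credentials, assuming the
# open-source `c2patool` CLI (github.com/contentauth/c2patool) is installed.
# Exact output handling is an assumption; treat this as illustrative only.

import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the C2PA manifest for `path` as a dict, or None if the file
    carries no credentials the tool can read and verify."""
    try:
        result = subprocess.run(
            ["c2patool", path],  # default invocation reports the manifest as JSON
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)
    except (subprocess.CalledProcessError, FileNotFoundError, json.JSONDecodeError):
        # Missing manifest, unreadable output, or tool not installed:
        # from a trust standpoint, all of these mean "unverified media".
        return None

if __name__ == "__main__":
    manifest = read_content_credentials("incoming_clip.mp4")  # hypothetical file
    if manifest is None:
        print("No verifiable Content Credentials: treat provenance as unknown.")
    else:
        print("Signed manifest found; inspect the issuer and edit history:")
        print(json.dumps(manifest, indent=2)[:500])
```

Note the design choice: absence of credentials does not mean a file is fake, only that it cannot be positively authenticated, which is exactly the posture defenders should adopt.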

Closing Thought

Deepfakes are not new, but they are expanding, accelerating, and embedding themselves into the criminal playbook. The reality is simple: detection is chasing a moving target, and attackers aren’t slowing down. If identity theft defined the last decade, synthetic identities will define the next. Criminals don’t need your password when they can become you.


What’s Next

Deepfakes and synthetic voices show us how technology can distort what we see and hear. But the bigger battlefield is not just the individual clip or phone call — it’s the information ecosystem itself.


In my next article, I’ll examine how synthetic media fuels misinformation and disinformation campaigns: how they scale across social media, shape public perception, and even test the resilience of national security.


Sources


  1. CNN — “Hong Kong employee duped into transferring $25 million in deepfake scam.”

  2. AP News — “Fake Biden robocalls used AI voice to suppress voters.”

  3. FCC Press Release — “FCC outlaws AI-generated voices in robocalls,” Feb. 2024.

  4. CNN — “FCC fines consultant $6 million for AI robocalls,” July 2024.

  5. FTC — “Consumer Alert: Fighting harmful voice cloning,” 2023.

  6. NCOA — “AI scams: How to protect yourself from AI voice cloning fraud,” 2024.

  7. iProov — “Deepfake Detection Research Report,” 2025.

  8. Academic study — “Inability to Detect Deepfakes,” 2024.

  9. Berkeley iSchool — “Hany Farid evaluates new deepfake detection firms,” 2024.

  10. Berkeley iSchool — “Hany Farid reflects on research replicating physiological signals,” 2025.

  11. Biometric Update — “New studies warn of difficulty detecting audio deepfakes,” 2024.
