
The Weaponization of Publicly Available Information (PAI): Social Engineering at Scale


Executives and senior leaders don’t just carry responsibility; they carry risk. While organizations deploy firewalls and other technical controls to secure data, a more pervasive threat lurks in plain sight: Publicly Available Information (PAI) that adversaries harvest for highly targeted social engineering attacks.

What makes PAI so dangerous is that adversaries don’t have to break in to steal it. The keys are already lying in plain sight.


What Is PAI?

PAI includes any data legally accessible online:

  • Social media content, family photos, and professional updates

  • Property ownership and professional profiles

  • Press features, conference bios, and event imagery

  • Government filings or open-source databases


A single data point may appear benign. But assembled, these fragments form a digital blueprint adversaries can exploit.
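
To make the aggregation risk concrete, here is a minimal sketch in Python. Every name and detail in it is fictional, not drawn from any real case; it simply shows the kind of working profile an adversary can assemble from scattered fragments:

# A sketch of PAI aggregation. All names and details are fictional.
from dataclasses import dataclass, field

@dataclass
class PAIProfile:
    """Collects individually benign public fragments about one person."""
    name: str
    fragments: dict[str, str] = field(default_factory=dict)

    def add(self, source: str, detail: str) -> None:
        self.fragments[source] = detail

    def summary(self) -> str:
        """The combined picture an adversary would work from."""
        lines = [f"Target: {self.name}"]
        lines += [f"  [{src}] {detail}" for src, detail in self.fragments.items()]
        return "\n".join(lines)

profile = PAIProfile("Jane Doe, CFO (fictional)")
profile.add("conference bio", "keynoting an industry summit next month")
profile.add("professional profile", "announced a new ERP migration project")
profile.add("property records", "home address listed in a county database")
profile.add("family social media", "spouse tagged at a child's school event")

# No single entry is sensitive on its own. Together they hand an attacker
# a ready-made pretext: an urgent "ERP" request timed to the travel window.
print(profile.summary())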


The Social Engineering Attack Chain

It’s not malware or systems that are being hacked; it’s people. The chain is simple:

PAI → Social Engineering → Social Media → Accelerated by AI

Here are real-world cases that prove just how destructive this chain has become:

  • CEO Impersonation Scams: In one high-profile quarter, more than 105,000 deepfake scams targeted U.S. companies, costing them over $200 million. Scammers posed as executives from firms like Ferrari and Arup using AI-generated audio and video.¹

  • AI Impersonation Surge: A 2025 report shows AI impersonation scams surged 148%, including a case where scammers successfully mimicked a CFO’s voice to trick an employee into wiring $25 million.²

  • Romance Scams via Deepfake Video: A woman in Los Angeles was duped by AI-generated videos of a celebrity, ultimately losing $431,000, including the proceeds from selling her family home.³

  • Investment Fraud via Deepfake Video: In India, an investor lost Rs 66 lakh (roughly US$80,000) after scammers used a synthetic video of a finance minister promoting a fake investment opportunity.⁴


As Rachel Tobac, CEO of SocialProof Security, has demonstrated: the danger lies not just in the data itself, but in how adversaries use it to hack people. When I was in government, I had several good conversations with Rachel about the mechanics of social engineering. Her insights echoed what I saw firsthand with the Digital Persona Protection Program (DP3): adversaries don’t need to hack systems when they can hack people.

In every case, the raw material is PAI, the weaponization is social engineering, and the delivery platform is social media.


PAI + AI: The Industrialization of Deception

When I built DP3, adversaries were already exploiting PAI. But today, artificial intelligence has changed the game completely:

  • Speed → Weeks of manual research reduced to minutes through automated scraping.

  • Sophistication → AI-generated phishing, deepfake audio and video, and synthetic personas that are almost indistinguishable from the real thing.

  • Scale → A single adversary can now run thousands of social engineering operations simultaneously.


The result: social engineering is no longer artisanal — it’s industrialized.


Why This Matters for Leaders

  • Executives in Office → Their digital persona is tied to organizational trust. One impersonation can cause financial loss, inflict reputational damage, or undermine leadership credibility.

  • Retired Senior Leaders → Still influential, still connected — but no longer under the institutional umbrella. That makes them soft targets for scams and disinformation.

  • Family Members → A spouse’s tagged location or a child’s school post may be all adversaries need to launch an attack.


Bridging the Protection Gap

If PAI is treated as harmless, leaders will continue to be blindsided. Protecting against weaponization requires:

  • Proactive monitoring of exposed PAI and impersonation accounts (a minimal detection sketch follows this list).

  • Reducing digital footprints through training for leaders and families.

  • Cross-sector adoption of DP3 principles, extending them from government into the corporate world.

  • AI for defense to detect deepfakes, impersonation, and coordinated social engineering campaigns.
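
As one example of what proactive monitoring can look like in practice, here is a minimal sketch in Python. The official handle, candidate list, and distance threshold are illustrative assumptions, not a production tool; it flags lookalike account handles by edit distance:

# A sketch of lookalike-handle detection. Handles and threshold are fictional.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

OFFICIAL = "jane_doe_cfo"  # the executive's verified handle (fictional)
candidates = ["jane_d0e_cfo", "jane_doe_cf0", "janedoe_cfo", "unrelated_user"]

# Flag anything within two edits of the official handle, excluding exact matches.
for handle in candidates:
    if 0 < edit_distance(handle.lower(), OFFICIAL) <= 2:
        print(f"possible impersonation account: {handle}")

In practice, the candidate list would come from platform search results or a brand-protection feed, and any flagged handle would go to a human reviewer before takedown requests are filed.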


Closing Thought

The digital battlefield has shifted. Adversaries don’t need to hack networks when they can exploit what’s already online. By combining PAI with social engineering and AI, they can manipulate identities, reputations, and trust itself — faster, smarter, and at scale.

This is the second article in my series on executive digital risk, OSINT, and the responsible integration of AI into cybersecurity. In the next piece, I’ll look at the unique digital risks facing retired senior leaders — and why protection too often ends when service does, but the threats do not.


References
  1. Wall Street Journal — AI Drives Rise in CEO Impersonator Scams: https://www.wsj.com/articles/ai-drives-rise-in-ceo-impersonator-scams-2bd675c4

  2. TechRadar — AI impersonation scams are sky-rocketing in 2025: https://www.techradar.com/computing/cyber-security/ai-impersonation-scams-are-sky-rocketing-in-2025-security-experts-warn-heres-how-to-stay-safe

  3. People — L.A. Woman Loses Life Savings After Scammers Use AI to Pose as “General Hospital” Star: https://people.com/woman-loses-life-savings-after-scammers-use-ai-to-pose-as-general-hospital-star-11800052

  4. Mitnick Security — Social Engineering Examples – Deepfake Investment Fraud: https://www.mitnicksecurity.com/blog/social-engineering-examples
