What Happened and How to Stay Safe
Last week, the US branch of one of our clients (which we do not support directly) was caught off guard by a remarkably sophisticated scam that combined AI-generated content, impersonation via deepfake technology, and classic social engineering. The attempt was so convincing that it nearly led to a large financial transfer. Here’s how it played out – and how it was ultimately discovered before any real damage was done.
The Setup: A “Confidential” Deal
A senior employee received an email that appeared to come from a high-level colleague. It referenced a private acquisition deal and insisted on discussing the details only on WhatsApp, ostensibly to maintain confidentiality. The email language was polished, used real legal terminology, and looked entirely legitimate.
The email ended with a firm instruction: Do not reply here—confirm receipt via WhatsApp.
The Twist: Deepfake Calls and Cloned Profiles
Soon after, the employee received a WhatsApp call from someone posing as the same colleague. Though the connection was poor, the voice and face seemed familiar. A follow-up Teams meeting included a person claiming to be a London-based lawyer, alongside the supposed colleague.
It turned out that:
- Publicly available images were used to set up a fake account impersonating the colleague.
- The voice or video might have been artificially generated or manipulated.
- The actual colleague had no idea these conversations were taking place.
What Raised Suspicions
In the midst of the urgency, a few details didn’t add up. The colleague’s usual sign-off was missing, and the use of WhatsApp for something so critical felt unusual. Conflicting details in the conversation also set off alarm bells. Fortunately, the employee paused and reached out through a known, verified channel. That single moment of caution prevented a costly wire transfer.
How the Attack Likely Worked
- Compromised Personal Email: The scammers may have gained access to a personal mailbox, making the sender’s emails appear legitimate. (We have not been able to verify this directly, as we do not support this user – we are awaiting responses at this time.)
- AI-Written Scripts: The message’s language was polished, likely aided by AI. It was carefully tailored to the firm’s line of business.
- Deepfake: The attackers used photos of real individuals (and potentially a short, poor-quality deepfake call, terminated due to a “bad connection”) to add an extra layer of credibility.
- Pressure Tactics: They repeatedly emphasised secrecy and urgency, aiming to short-circuit normal checks and balances.
Takeaways and Advice
- Verify Through Known Channels: If you get an urgent request to transfer money or share sensitive data, confirm by calling the requester on a number you already trust or by speaking in person.
- Enable 2FA Everywhere: Any account, personal or work, can be a target. Two-factor authentication significantly reduces the risk of someone breaking in with a stolen password.
- Check for Odd Communication: Notice if someone suddenly switches to new channels (like WhatsApp) or uses unusual sign-offs. These can be clues that something is off.
- Educate Your Team: Training is your best defence. Make sure colleagues understand these tactics and know how to spot red flags.
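For the more technically inclined, there is also a quick check you can run on a suspicious email itself: most mail servers record SPF, DKIM and DMARC results in an Authentication-Results header, which you can see via “Show original” or “View source” in your mail client. The sketch below is a minimal, hypothetical Python example (all addresses and header values are invented for illustration) showing how those results can be read out:

```python
import email
from email import policy

# Invented raw message for illustration. In practice, paste the full
# source of the suspicious email, obtained via "Show original".
raw = """\
Authentication-Results: mx.example.com;
 spf=fail smtp.mailfrom=attacker.example;
 dkim=none;
 dmarc=fail header.from=yourfirm.example
From: "CEO" <ceo@yourfirm.example>
To: employee@yourfirm.example
Subject: Confidential acquisition - urgent

Please confirm receipt via WhatsApp only.
"""

msg = email.message_from_string(raw, policy=policy.default)

def auth_checks(message):
    """Parse spf/dkim/dmarc verdicts from the Authentication-Results header."""
    results = {}
    header = message.get("Authentication-Results", "")
    for part in header.split(";"):
        part = part.strip()
        for check in ("spf", "dkim", "dmarc"):
            if part.startswith(check + "="):
                # Keep only the verdict word (e.g. "pass", "fail", "none")
                results[check] = part.split("=", 1)[1].split()[0]
    return results

checks = auth_checks(msg)
print(checks)  # {'spf': 'fail', 'dkim': 'none', 'dmarc': 'fail'}
if any(verdict != "pass" for verdict in checks.values()):
    print("Warning: message failed sender authentication - verify out of band.")
```

A failed or missing check is not proof of fraud on its own (forwarded mail often fails SPF, for example), but combined with the red flags above it is a strong signal to verify through a known channel before acting.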
We’ve written more on preventing these kinds of attacks here:
- Preventing Security Risks from Unofficial Communication Channels
- Protect Your Business: Lessons from a £20,000 Scam Attempt
- How to Warn Someone of a Phish
Final Word
This incident shows that AI-based scams are evolving beyond simple phishing emails. Attackers are more convincing and more patient than ever, relying on deepfake tools, realistic email copy, and multi-channel communication to lure people in.
The good news is that a moment of skepticism can be all it takes to protect yourself and your organisation. Always double-check when something feels odd, and never hesitate to ask for a second opinion from runPCrun. Stay vigilant!