August 16, 2025
Insight
Deepfakes at Work: Train Verification, Not Detection
Deepfakes aren't a problem for politics and celebrities anymore. In UK businesses, voice cloning attacks are already hitting finance and exec teams. You won't win by training staff to spot them - the only defence that works is verification through trusted channels.
Deepfake scams are no longer rare. In Q2 2025, voice-enabled fraud surged 170%, outpacing traditional attack methods. In the UK alone, 26% of people reported receiving a call containing a deepfake voice - and of those, nearly 40% were successfully scammed.
In business, voice cloning isn't for Hollywood. It's live calls to your finance team or executive assistants, impersonating the CEO to extract payments or data. No pre-recorded clip playback - a real-time, I-need-this-now scam.
That's why "spot the wobble" detection training is pointless. Humans under pressure - the ones staring at a payment request in their inbox - don't analyse audio. They comply.
Why Detection Training Lets You Down
Deepfakes are frighteningly accurate. It takes just 20-30 seconds of audio to build a clone that registers over 99% similarity (we should know, we do it daily).
Attacks exploit urgency. When your 'CEO' says it's now-or-never, logic is the first to take a holiday.
Authority bias kills skepticism. If it's a phone call from 'the boss,' your response is "Yes, right away," not "Verify."
Detection training breeds overconfidence. Believing you can tell a fake just makes it easier to fall for a sophisticated one.
The Pivot: Verification, Not Detection
The strategy isn't teaching people to trust their gut-it's teaching them to confirm independently. Habits that stop fraud even when voices are flawless:
Independent confirmation. If a request comes in by phone, verify via Slack or in person. If it's email, call the known line - not the one they provided.
Pre-set runbooks. Define steps: "For any request above £X, call the sender's company number and ask, 'Did you send that?'"
Challenge-response questions. Use project nicknames, recent meeting topics-details only the real person would know.
Dual authorisation. High-risk actions require two approvers acting through separate channels.
Built-in delay. Always confirm today's 'urgent' asks with, "I'll come back to you in 30 minutes via our standard check." Staff should know that any time pressure still needs proper checks and balances.
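The habits above can be captured in a runbook that staff follow mechanically. A minimal sketch - the threshold, channel names, and delay are illustrative assumptions, not a prescribed policy:

```python
from dataclasses import dataclass

# Illustrative threshold -- set this to match your own payment policy.
HIGH_RISK_THRESHOLD_GBP = 10_000

@dataclass
class Request:
    amount_gbp: float
    channel: str     # channel the request arrived on, e.g. "phone"
    requester: str   # who the request claims to be from

def verification_steps(req: Request) -> list[str]:
    """Return the checks to complete before acting on a request."""
    steps = []
    # 1. Independent confirmation: never verify on the channel the
    #    request came in on, and only use known contact details.
    steps.append(f"confirm via a channel other than {req.channel}, "
                 f"using a known contact for {req.requester}")
    # 2. Built-in delay: urgency is the attacker's main lever, so
    #    every request waits for the standard call-back window.
    steps.append("apply the standard 30-minute call-back delay")
    # 3. Dual authorisation for high-risk amounts, on a separate channel.
    if req.amount_gbp > HIGH_RISK_THRESHOLD_GBP:
        steps.append("obtain a second approver via a separate channel")
    return steps

# An urgent £25k "CEO" phone request triggers all three checks.
print(verification_steps(Request(25_000, "phone", "CEO")))
```

The point of encoding it this way is that the decision is made by the runbook, not by a stressed human judging a voice.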
Make It a Habit - from Interview Room to Inbox
Don't simulate in calm rooms - simulate in drama. Run deepfake scenarios during your team's genuinely high-pressure periods - product launch, board review, month-end - so that verification has already been practised under the stress a real attack will exploit.
Test mixed channels. Throw in voice, email, SMS. Test boundary cases: senior execs vs peers. The goal isn't to spook them; it's to hardwire verification as routine-even when pressure peaks.
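One way to keep drills varied rather than predictable is to generate scenario mixes programmatically. A small sketch, where the channels, roles, and pressure contexts are illustrative placeholders:

```python
import random

# Illustrative drill dimensions -- swap in your own channels, roles,
# and high-stress calendar moments.
CHANNELS = ["voice call", "email", "SMS"]
ROLES = ["CEO", "CFO", "peer engineer"]
CONTEXTS = ["product launch", "board review", "month-end close"]

def plan_drills(n: int, seed: int = 0) -> list[dict]:
    """Produce n randomised drill scenarios mixing channel, impersonated
    role, and pressure context. Seeded for repeatable planning."""
    rng = random.Random(seed)
    return [
        {
            "channel": rng.choice(CHANNELS),
            "impersonated": rng.choice(ROLES),
            "context": rng.choice(CONTEXTS),
        }
        for _ in range(n)
    ]

for drill in plan_drills(3):
    print(drill)
```

Randomising across channels and seniority levels covers the boundary cases - senior execs versus peers - without staff learning to expect one fixed script.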
Compliance Meets Credibility
For UK tech scale-ups: FCA operational resilience rules demand evidence that you can withstand social-engineering attacks. Verification logs, not click stats, prove it.
For UK/EU fintechs: DORA's resilience-testing requirements expect realistic scenarios - including social-engineering vectors - and NIS2 mandates role-appropriate security training. This is exactly that.
Insurers now ask: "How do you manage deepfake risk?" Having protocols and test logs makes renewal conversations a lot smoother.
What Success Looks Like
Staff automatically confirm unexpected requests via a second channel-even from a 'CEO.'
Simulations show consistent verification, reaction time benchmarks, and no panic-led mistakes.
You have logs - a clean chain of timestamps, channels, decisions - ready for audits and insurer conversations.
Simulations form part of your operational rhythm, wargamed before real danger strikes.
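That "clean chain of timestamps, channels, decisions" can be as simple as structured JSON entries. A hypothetical schema - the field names and values are assumptions for illustration:

```python
import json
from datetime import datetime, timezone

def log_verification(request_id: str, inbound_channel: str,
                     verify_channel: str, decision: str) -> str:
    """Record one verification event as a JSON line for the audit trail."""
    entry = {
        "request_id": request_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inbound_channel": inbound_channel,  # where the request arrived
        "verify_channel": verify_channel,    # independent channel used
        "decision": decision,                # e.g. "approved" / "rejected"
    }
    return json.dumps(entry)

# A spoofed phone request, rejected after a Slack check.
print(log_verification("req-042", "phone", "slack", "rejected"))
```

Appending one line per verification gives you exactly the evidence regulators and insurers ask for, with no extra tooling.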
Getting Started Checklist
Ask: "If your boss rings demanding a confidential payment, what do you do?"
Draft a simple playbook: verify all unusual requests via a known, separate channel.
Run a spoof-CEO day and see whether your people fall back on instinct - or on protocol.
Log the outcomes. Those logs are as valuable as your tech stack.