By Angus Loten
Sept. 16, 2025
Faced with an onslaught of increasingly sophisticated deepfake scams, some companies are turning to low-tech tactics to foil artificial intelligence-powered audio and video impostors.
The tactics, including verbal passphrases, off-topic questions and hand-drawn signs, offer a rare instance of targeted companies getting the upper hand in an escalating AI arms race with deepfake attackers, cybersecurity experts say. Done right, these and other analog tricks can unmask even the most well-trained AI models, they say.
“If someone’s daughter or granddaughter calls saying she’s being held for ransom, ask her what she had for dinner yesterday,” said V.S. Subrahmanian, director of the Northwestern Security and AI Lab at Northwestern University.
Deepfake scammers feed a subject’s audio and video clips into AI models to create a highly convincing digital impersonator. In a typical corporate attack, AI-generated executives reach out to lower-level workers—by phone or video—instructing them to wire money to an account linked to the attacker. The transactions, which can also include handing over sensitive data and other business assets, are often portrayed as urgent, such as a fast-moving acquisition or supply-chain issue.
Of roughly 300 cybersecurity leaders at U.S. companies recently surveyed by Gartner, an information-technology research and consulting firm, about 40% said they had been targeted by a deepfake call or video in the last 12 months. Voice scams, including those generated with AI, surged 442% between the first and second half of last year, according to cybersecurity firm CrowdStrike.
By some estimates, global losses from AI-generated CEO and other executive impersonations exceeded $200 million in the first quarter of the year.
The mounting threat has spawned a thriving market for anti-deepfake software tools. Among them: AI-powered apps designed to scan calls and videos on Microsoft Teams, Cisco Webex, Zoom and other platforms for signs of AI-generated impersonators, such as nuances in composition or speech that can give away the con. The tools can also flag questionable links and attachments.
But fighting AI with more AI might not be enough to beat tech-savvy scammers, some experts say.
“Ninety percent of attacks still come through people, not tech,” said Brian Long, CEO and co-founder of OpenAI-backed cybersecurity firm Adaptive Security. “If your security plan doesn’t include your employees, you’re leaving the front door wide open.”
Akif Khan, a Gartner cyber analyst, agreed that the human element of cyber fraud—convincing people to follow malicious instructions—is key.
“It’s not just about the quality of the deepfake, it’s the fact that it’s combined with social engineering,” he said.
Intruders take advantage of dutiful employees who defer to their higher-ups, Khan said. A critical step, he said, is to let lower-level workers know it’s all right to delay action if they suspect a scam. “Encourage employees to think on their feet, perhaps try to ask an off-the-cuff question that an attacker might not know the answer to,” Khan said.
In one well-documented case, a deepfake impersonation of Ferrari CEO Benedetto Vigna last year was thwarted when a suspicious executive on the call asked the attacker about a book Vigna had recommended earlier in the week.
Theresa Payton, CEO of cybersecurity firm Fortalice Solutions and a former White House chief information officer under George W. Bush, said a simple doodle might be all it takes to outsmart a cloned executive.
Recently, a client suspected a job candidate in a video interview was a deepfake, judging by odd speech delays and glitchy facial movements, Payton said. “They asked the candidate to draw a smiley face and hold it up,” prompting the impostor to immediately drop the call, she said.
Payton also instructs clients to seek physical proof. “Have everyone hold up their passport to the camera or show a unique desk item—say, a quirky mug or pen—that AI can’t fake on the fly,” she said.
Analog tactics work because attackers expect their quarry to behave a certain way, she said: “So when they expect our clients to zig, we give them processes that make our clients zag.”
Sarah Barrington is a researcher at the UC Berkeley School of Information who co-wrote a recent study on the growing effectiveness of deepfakes. Predetermined safe words or passphrases are a good step, she said, but not foolproof. “It is important that people agree to these verbally, rather than via other digitally vulnerable forms of communication, such as text or email,” she said.
Even secret verbal passcodes should be combined with other protocols, such as hanging up on a suspected impostor and calling them back on a known line, she said.
Most deepfakes are unable to handle curveballs, such as asking the video clone to move the camera, said Adaptive Security’s Long.
“Real people can do that instantly. Fakes struggle,” he said.
Write to Angus Loten at Angus.Loten@wsj.com