Remote hiring changed everything about how we evaluate candidates — and fraudsters noticed. In 2025, a growing number of applicants are using AI-generated video to misrepresent their identity in video interviews, pass as someone else, and gain access to organizations under false pretenses. By the time most teams find out, the onboarding has happened and the access has been granted. Here's what's happening, why it's accelerating, and what recruiters can do about it.
What Is Deepfake Hiring Fraud?
Deepfake hiring fraud is a form of candidate identity fraud where an applicant uses AI-generated or AI-manipulated video to impersonate another person — or to present a fabricated identity — during the hiring process. It's the most technically advanced variant of a broader problem that also includes proxy interviews (a stand-in takes the call), identity impersonation (using stolen credentials), and credential fraud (fabricated degrees and employment history). What makes the deepfake variant particularly difficult to catch is that it looks, at a glance, like a legitimate video call.
How Common Is It?
The scale is larger than most recruiters realize. North America saw a 1,740% increase in deepfake fraud incidents in 2022 alone, according to data from Sumsub. In 2023, 23% of companies encountered proxy interview fraud, and 17% of hiring managers reported suspected impersonation in the same period — up from 3% the prior year.
Real Incident · 2024
The U.S. Department of Justice revealed that over 300 American companies had unknowingly hired IT workers with ties to North Korea — using stolen identities, proxy interviews, and fraudulent documentation. The FBI has issued a formal public advisory warning hiring teams specifically about AI-assisted impersonation in remote job interviews.
Who Is Being Targeted?
Any organization hiring remotely is at risk, but certain roles attract disproportionate attention. Positions with access to sensitive systems, codebases, financial data, or customer records — particularly in software engineering, IT security, data science, and finance — are the most common targets. Remote-first companies face higher exposure simply because they rely entirely on video for candidate evaluation, with no in-person checkpoint where identity is naturally confirmed.
What Are the Consequences of Hiring a Deepfake Candidate?
Security breaches
A fraudster hired into a technical role gains legitimate access to internal systems and data from day one. The North Korea incident is the most documented example, but it's not an isolated one — it's an illustration of what's possible when identity verification is absent from the hiring process.
Financial loss
The median loss from occupational fraud is $250,000 per incident, according to the ACFE. The average cost per proxy hire — including investigation, legal, and lost productivity — is $28,000. For fraud that originates at the point of hire, those costs compound over months before discovery.
Compliance and legal liability
Depending on your industry, hiring an employee who has misrepresented their identity can expose your organization to regulatory liability — particularly in finance, healthcare, and government contracting where identity verification requirements are strict.
Reputational damage
Discovering that your interview process failed to catch a fraudulent candidate raises uncomfortable questions for clients, partners, and leadership about the integrity of your hiring practices.
Why Traditional Background Checks Don't Catch It
Background checks verify a name, SSN, and credentials against databases — they don't verify that the person who interviewed is the same person whose background was checked. A fraudster with a stolen or synthetic identity can pass a background check with ease. The vulnerability is specifically at the video interview stage, where identity is assumed but never confirmed. That's the gap that needs its own step in your process.
What Recruiters Can Do Right Now
Add pre-interview identity verification
Before sending a video interview link, require candidates to verify their government ID and complete a liveness check. This confirms that the face on the call matches the identity on file — before the interview ever starts and before any of your team's time is invested.
Look for visual and behavioral red flags
Soft edges around the face, lip-sync delays, reluctance to turn sideways, and evasiveness about the physical environment are all worth noting. See our full guide on how to detect a deepfake in a video interview.
Standardize your process across the team
Ad hoc vigilance isn't reliable. A consistent, systematic verification step — applied to every candidate regardless of role or seniority — is the only way to eliminate blind spots. Once you've invested hours in a candidate, the sunk-cost pull to continue is real. Front-loading identity verification removes that pressure entirely.
Your hiring process needs one more step.
Stop Deepfake Candidates adds identity verification before the interview — without adding friction for legitimate candidates. Government ID matching, liveness detection, and real-time risk signals, all in under 2 minutes. $5 per verification. No subscription.
Get Started Free →