
Deepfake Detection

How to Detect a Deepfake in a Video Interview

For most of hiring's history, the interview was the gut check: you met the person, made a call, and moved on. Remote hiring changed that. When every interview happens over video, you're not meeting someone; you're trusting a face on a screen. And 76% of hiring managers say AI has made it harder to detect impostor applicants, a problem almost no one was tracking just a few years ago. Here's what to look for, and why looking isn't enough.

What Is a Deepfake Interview Candidate?

A deepfake candidate uses AI-generated or AI-manipulated video to impersonate someone else, or to present a fabricated identity, during a live video interview. Older proxy fraud required a stand-in to physically appear on camera. AI-assisted impersonation replaces that stand-in with real-time face filtering, voice synthesis, and generated interview responses, lowering a barrier to fraud that was previously too technical for most bad actors. The person on screen can respond to questions, move naturally, and maintain the illusion throughout a full interview loop.

Visual Signs of a Deepfake in a Video Call

01

Unnatural facial edges

Look for a subtle but persistent blur or shimmer around the hairline, ears, and jaw. Deepfake models often struggle with fine hair detail and ear rendering, producing a slightly soft or inconsistent border between the face and the background.

02

Lighting inconsistencies

In a real video call, light falls on the face and background from the same source. In AI-generated video, the face lighting may not match the room — shadows fall at different angles, or the face appears uniformly lit regardless of the environment behind it.

03

Unnatural blinking and eye movement

Blinking is a common weak point: it may be too infrequent, too regular, or oddly timed. Eye movement under rapid head turns can appear slightly delayed, mechanical, or out of sync with facial expression. Watch for eyes that seem to track independently of natural head motion.

04

Mouth and lip sync issues

Watch closely when the candidate speaks quickly or forms complex mouth shapes. Misalignment between audio and lip movement — even a few frames off — is a strong indicator of AI-generated video.

05

Texture and skin anomalies

Deepfake faces can appear overly smooth or slightly waxy, particularly under bright conditions. Pores, facial hair, and fine lines are difficult for AI models to render consistently across frames.

Behavioral Signs That May Indicate Fraud

01

Reluctance to turn sideways or look away from camera

Deepfake rendering degrades significantly at sharp angles. Candidates using AI video manipulation may avoid turning their head more than a few degrees from center, or may position the camera in a way that limits the available angle range.

02

Evasiveness about their environment

Most legitimate candidates have no reason to avoid briefly showing their workspace. A candidate who refuses to reposition the camera, shows only a tight facial crop, or gives inconsistent reasons for their setup warrants closer attention.

03

Inconsistency between their application photo and live appearance

Always compare the live video to the photo on the candidate's application, LinkedIn profile, or submitted ID before the call. Significant differences in bone structure, skin tone, or facial proportions are worth flagging — and worth having a process for.
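Signals like these are most useful when they're logged consistently rather than judged ad hoc in the moment. As a rough sketch, a simple scorer could aggregate interviewer observations into a recommended next step. The signal names, weights, and thresholds below are all illustrative assumptions, not part of any product or standard:

```python
# Hypothetical sketch: aggregate interviewer-observed fraud signals into a
# recommended action. Signal names, weights, and thresholds are illustrative.
# Behavioral signs are weighted higher than single visual tells, since they
# tend to persist across the whole call.
SIGNAL_WEIGHTS = {
    "edge_artifacts": 1,       # blur/shimmer at hairline, ears, jaw
    "lighting_mismatch": 1,    # face lit differently from the room
    "lip_sync_drift": 2,       # audio and lip movement out of step
    "avoids_head_turns": 2,    # won't turn more than a few degrees off center
    "evasive_about_setup": 2,  # refuses to reposition the camera
    "photo_mismatch": 3,       # live face differs from application photo/ID
}

def risk_action(observed: set) -> str:
    """Map a set of observed signal names to a recommended next step."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in observed)
    if score >= 4:
        return "escalate"   # pause the process; require identity verification
    if score >= 2:
        return "review"     # record (with consent) and review before advancing
    return "proceed"
```

For example, lip-sync drift plus a refusal to turn sideways would score 4 and return `"escalate"`, while a single soft edge artifact alone would still `"proceed"`.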

Technical Checks Your Team Can Run

01

Ask the candidate to perform an unannounced physical task

Request that they hold a specific object up to the camera, write their name on paper, or perform an unexpected gesture you haven't pre-announced. Deepfake pipelines struggle to incorporate real-time physical props. This is one of the most effective in-interview checks available to a recruiter without additional tools.

02

Ask them to change camera angles

Request that the candidate reposition their camera or turn sideways. Deepfake software is typically configured for a specific camera position — changing it can expose artifacts or degrade the feed visibly.

03

Record and review at slower speed

Most video platforms allow recording with consent. Reviewing footage at 50% speed makes lip sync issues, edge artifacts, and lighting inconsistencies far more visible than they are in real time.
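A half-speed review copy is straightforward to produce with ffmpeg: `setpts=2.0*PTS` doubles each video frame's duration and `atempo=0.5` slows the audio to match without shifting pitch. A small sketch that builds the command (file paths are placeholders, and ffmpeg is assumed to be installed):

```python
# Sketch: build an ffmpeg command that produces a half-speed review copy.
# setpts=2.0*PTS stretches video timestamps to 2x duration; atempo=0.5
# slows audio to half speed while preserving pitch. Paths are placeholders.

def half_speed_cmd(src: str, dst: str) -> list:
    return [
        "ffmpeg", "-i", src,
        "-filter_complex",
        "[0:v]setpts=2.0*PTS[v];[0:a]atempo=0.5[a]",
        "-map", "[v]", "-map", "[a]",
        dst,
    ]

# Run it with, e.g.:
#   subprocess.run(half_speed_cmd("interview.mp4", "review.mp4"), check=True)
```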

Why Human Detection Alone Isn't Enough

Only 0.1% of people can accurately detect deepfakes through visual inspection alone. That's not a training problem — it's a fundamental limitation of the human visual system against AI-generated content that's specifically optimized to fool it. And the gap is widening: deepfake generation quality improves every quarter while our biology doesn't. By the time your interviewer notices something feels off, the interview is already over.

The More Reliable Approach: Verify Before the Interview Starts

The most effective defense isn't sharpening your team's eye for visual tells — it's removing the opportunity for fraud to enter the interview in the first place. By verifying a candidate's government ID and running liveness detection before they ever receive the interview link, you confirm that the person on screen is real before any of your team's time is spent. That's the gap a manual visual check can never reliably close.
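In practice, "verify before the interview" means the interview link is only generated after identity checks pass. A minimal sketch of that gate, where the result fields are hypothetical stand-ins for whatever verification service you use:

```python
# Hypothetical sketch of a pre-interview verification gate. The result shape
# (id_matched, liveness_passed) is an assumption, not a real API.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    id_matched: bool       # government ID photo matches the live capture
    liveness_passed: bool  # the capture came from a real, present person

def may_send_invite(result: VerificationResult) -> bool:
    """Release the interview link only when both checks pass."""
    return result.id_matched and result.liveness_passed
```

The point of the design is ordering: the gate runs before any calendar invite exists, so a fraudulent candidate never gets a live interviewer to fool.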

Stop deepfakes before they ever reach your interview.

Stop Deepfake Candidates verifies identity in under 2 minutes — government ID matching, liveness detection, and real-time risk signals — before you ever send the calendar invite. $5 per verification. No subscription.

Start Verifying Free →