The CEO on the frontlines of deepfake defense

Remote work has opened new doors for job seekers, employers, and, increasingly, fraudsters. At Pindrop Security, a voice-security company with more than $100 million in annual recurring revenue, CEO Vijay Balasubramaniyan says nearly 17% of job applicants are fake. One recent interviewee forgot to leave the Zoom call after his interview ended and, still on camera, asked his handler for a forged résumé that referenced Texas, where he claimed to live.

“The numbers are climbing fast,” Balasubramaniyan says. While in-person roles get around 100 applicants and hybrid ones attract 200, remote jobs now pull in more than 800. And many of those résumés aren’t real. That means Balasubramaniyan’s recruiting team is spending more time vetting identities than assessing skills. “It’s taking up a big chunk of time for our human recruiters,” he adds.

Pindrop started as a voice-authentication platform. But as fraud evolved from phone scams to deepfake coworkers, Balasubramaniyan widened the company’s focus to broader identity verification and AI-driven threat detection. Fake IT staffers calling in for password resets are no longer rare. Neither are voice clones sophisticated enough to fool friends or even employers.

To stay ahead, he scrutinizes behavioral tells: lip movements that don’t match speech, microphones positioned suspiciously to cover mouths. “I talk to other CEOs who now have code words they give friends and family,” he says, a way to verify calls that might otherwise sound real.

His team is also on the cutting edge of defending against synthetic voices. Pindrop was one of a few firms granted early access to NVIDIA’s Riva Magpie, a voice-cloning model so powerful it was initially held back from public release. With just five seconds of audio, it can replicate a person’s voice. But Pindrop’s early tests detected over 90% of those synthetic samples, improving to 99.2% after retraining, all while keeping false positives under 1%.

As Balasubramaniyan sees it, the threat isn’t abstract. “Zero-day” voice cloning attacks, where new AI models are used before detection systems can catch up, are already here. And whether it’s a job applicant, coworker, or long-lost friend on the phone, he warns: the line between real and fake is now only seconds long.