Tech CEOs are raising alarms over a growing trend of fake applicants flooding remote job openings in the U.S. These individuals often present false credentials or use deceptive tactics to secure interviews or job offers. The surge in fraudulent applications poses challenges for companies trying to hire legitimate remote talent and maintain efficient recruitment processes. As remote work becomes more widespread, businesses are being forced to rethink and tighten their vetting procedures to prevent such hiring scams.
When Pindrop Security, a voice verification startup, recently advertised a job opening, one application immediately drew attention. A Russian software developer named Ivan appeared to be an ideal fit for a senior-level engineering position. But during a video interview, the recruiter noticed something off: his lip movements didn't quite match his speech.
The discrepancy arose because Ivan was an impersonator using advanced deepfake technology and AI tools to try to land the position, according to CEO Vijay Balasubramaniyan. He explained that generative AI has made it difficult to distinguish between human and machine behavior.
People are now crafting completely fake personas - with AI-generated photos, voices, and faces - to get hired, sometimes borrowing other people's likenesses during the hiring process. While companies have battled cybercriminals targeting their systems and staff for years, a fresh challenge has emerged: fraudulent job applicants armed with phony resumes, fake IDs, and even AI-assisted answers delivered in real time during interviews.
Gartner, a global research firm, predicts that by 2028 one in four job candidates worldwide will be fake. The potential impact varies: some may implant malware, steal proprietary data, or siphon funds, while others may simply collect wages they didn't legitimately earn. Cybersecurity and crypto startups have seen a spike in these fraudulent candidates, experts told CNBC, in part because remote roles are especially attractive to such bad actors.
Ben Sesser, head of BrightHire, said he first became aware of the issue a year ago and has seen a dramatic increase in fake candidates in 2025. His company helps major firms in finance, healthcare, and tech assess potential hires through video interviews. He pointed out that hiring is inherently vulnerable, with many handoffs and people involved, making it a prime target for exploitation.
And this isn’t only affecting tech. Over 300 American companies - including defense contractors, automakers, and even a national TV network - have unknowingly hired North Korean-linked impostors, the DOJ reported in May. These workers used stolen U.S. identities, masked their actual locations with remote access tools, and funneled wages back to North Korea’s weapons programs, according to officials.
The case, involving accomplices including an American citizen, highlighted what U.S. authorities describe as a vast overseas operation of IT workers tied to North Korea. Additional cases have since followed. Lili Infante, CEO of CAT Labs, which operates in the cybersecurity and crypto space, said her startup is flooded with fake applications. She joked that every job posting seems to attract scores of North Korean spies, many with polished resumes full of relevant keywords.
To combat this, her team uses identity-checking services like iDenfy, Jumio, and Socure to filter out fraudulent applicants - a growing niche industry. Fake applicants now come from beyond North Korea, including criminal networks in Russia, China, Malaysia, and South Korea, noted cybersecurity veteran Roger Grimes.
Ironically, some of these fakes are exceptionally good at their jobs, he added. In some cases, they’ve performed so well that managers regretted letting them go after discovering the fraud. Grimes’ company, KnowBe4, accidentally hired a North Korean developer last year.
The individual used AI to tweak a stock photo, combined it with a stolen but valid U.S. identity, and passed several background checks and video interviews. His deception was only uncovered after unusual activity was flagged on his account. Despite high-profile cases like this, many recruiters remain unaware of the growing risk, according to Sesser.
Recruiters are focused on hiring strategies, not cybersecurity, and may not even realize they're being targeted, he said. As deepfake tools become more sophisticated, he warned, spotting fakes will only get harder. In Pindrop's case, its new video verification system helped unmask the applicant - dubbed "Ivan X" internally - as an AI-driven fraud. While Ivan claimed to be in western Ukraine, digital forensics traced his IP address to eastern Russia, potentially within a military base near North Korea.
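As a rough illustration of the kind of check Pindrop describes, the Python sketch below flags applicants whose claimed location disagrees with where their interview traffic actually originates. The geolocate() helper here is a hypothetical stand-in (backed by a toy lookup table using a documentation-range IP), not any vendor's real API; a production system would query a commercial IP-intelligence service instead.

```python
# Minimal sketch: flag a mismatch between an applicant's claimed country
# and the country their interview traffic appears to come from.
# geolocate() is a hypothetical placeholder for a real geo-IP lookup.

def geolocate(ip: str) -> str:
    """Hypothetical IP-to-country lookup; swap in a real geo-IP service."""
    demo_table = {
        "203.0.113.7": "RU",  # documentation-range address, for illustration only
    }
    return demo_table.get(ip, "UNKNOWN")

def location_mismatch(claimed_country: str, ip: str) -> bool:
    """Return True when the observed IP's country differs from the claim."""
    observed = geolocate(ip)
    return observed != "UNKNOWN" and observed != claimed_country.upper()

if __name__ == "__main__":
    # An applicant claiming to be in Ukraine ("UA") whose traffic resolves to Russia.
    print(location_mismatch("UA", "203.0.113.7"))  # True -> escalate for review
```

In practice a signal like this would feed into a broader review rather than trigger an outright rejection, since VPNs and legitimate travel also produce mismatches.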
Pindrop, which counts Citi Ventures and Andreessen Horowitz among its investors, originally focused on audio fraud detection but may soon pivot toward video verification tools. Its clients include major banks, insurance companies, and healthcare firms. Balasubramaniyan summed it up starkly: "Our senses can no longer be trusted - without smart tech, spotting fakes is worse than guessing randomly."
Source: CNBC