
‘100 Video Calls Per Day’: AI scam face models and the new fraud front
The rise of AI-enabled fraud is not just synthetic content on autopilot. Scam organizations are hiring people to appear on camera and speak live with targets, then augmenting those interactions with face swaps and cloned voices. These AI scam face models are turning romance, investment, and pig-butchering video calls into sustained campaigns that can hit scores of targets per day, and the mix of human presence with synthetic overlays is making scams feel disarmingly authentic to victims and even corporate staff [1][2].
What ‘real face’ front work looks like
Investigators have traced a professionalized labor market in which human “models” sign fixed-term contracts to staff fraud funnels run out of online casinos and scam compounds. Telegram HR-style channels openly advertise roles and connect “AI models” with operators, treating the job as routine recruiting rather than a hidden backchannel [1]. In one documented case, a woman whose image was used to target the author’s mother in a pig-butchering scheme later advertised herself in a Telegram recruiting channel for $5,000 to $6,000 per month after completing a one-year contract in Cambodia. Listings describe relentless call volume, with quotas that can reach hundreds of live interactions per day [1].
This human layer is central to conversion. Romance and investment scammers build trust across repeated video calls before escalating to money, a cadence that is harder to dismiss when a friendly, consistent face keeps showing up on camera [2]. When the same on-screen persona then asks for a wallet transfer or pitches a time-sensitive investment, many targets are already emotionally committed [2].
The tech enablers: live deepfakes and voice cloning
Scammers now combine these live operators with tools that swap faces in real time and clone voices from seconds of audio. Live deepfake scams do not require extensive training data or postproduction, which lowers the barrier to deploying them repeatedly across calls and accounts [2]. Voice cloning fraud further reduces friction, letting operators mimic accents or known figures and keeping the performance consistent across channels [2].
Enterprises are already seeing the spillover. Threat reports describe corporate fraud cases where video deepfakes impersonate executives during routine remote meetings to push through large transfers or confidential actions [2]. The format feels familiar and urgent, which can short-circuit normal checks.
AI scam face models: a professionalized labor market
The labor economics are part of the draw. Recruiters promise pay far above local averages, creating powerful incentives for young workers to sign multi-month contracts. Posts in open channels discuss “real face models” and package the work as a service that increases conversion rates across a scam center’s funnels, from romance to investment hooks. The same personas can be augmented with overlays, turning a single operator into multiple characters on demand [1][2].
Brand impersonation and the ‘OpenAI’ job scheme
Scammers are also exploiting the halo of well-known AI brands. A recent operation used Telegram and a ChatGPT-branded app to convince international workers, especially in Bangladesh, that they had landed microtask jobs with “OpenAI.” Over time, the scheme nudged recruits into investing their earnings in cryptocurrency platforms the operators pointed them to. Cultural deference to perceived authority, combined with the brand’s prestige, made the grift more persuasive and harder to question for low-wage workers seeking stable online work [3].
Case evidence: romance, pig-butchering, and corporate impersonation
Open-source reporting shows a sustained ecosystem rather than isolated incidents. The same woman who fronted a pig-butchering approach later resurfaced marketing herself for new contracts in Telegram HR channels, an indicator of turnover and repeatable pipelines for talent placement in compounds and online casinos [1]. Separately, threat intelligence reporting details how live face swaps, low-data voice cloning, and repeated video calls are now common tactics in romance and investment schemes, and how corporate meetings have been abused to impersonate executives for high-dollar authorizations [2].
Business risks and why this matters now
For companies, the risk is shifting from spoofed emails to convincing live interactions. Unauthorized transfers stemming from executive impersonation, fraudulent vendor changes confirmed over video, or staff manipulated by trusted-seeming faces can drive direct losses and reputational fallout [2]. Customers can be exploited by impostors who appear to represent your brand, and employees can be targeted by job offers or onboarding flows that mirror legitimate tools and logos [3].
Resources and further reading
- Frank on Fraud’s case reporting on recruiting pipelines and pay incentives [1]
- TRM Labs’ overview of AI-enabled fraud techniques and executive impersonation scenarios [2]
- WIRED’s investigation into the “OpenAI” job scam and its use of branded apps and Telegram onboarding [3]
For background on synthetic media risks, see Europol’s analysis.
Sources
[1] The AI Model That Scammed My Mom Wants A New Job
https://frankonfraud.com/the-ai-model-that-scammed-my-mom-wants-a-new-job/
[2] AI-enabled Fraud: How Scammers Are Exploiting Generative AI
https://www.trmlabs.com/resources/blog/ai-enabled-fraud-how-scammers-are-exploiting-generative-ai
[3] ‘OpenAI’ Job Scam Targeted International Workers Through Telegram
https://www.wired.com/story/openai-job-scam/