INTERNATIONAL · 16 March 2026
The Digital Doppelgängers: How AI Scams Are Recruiting Human Faces
Women are being recruited through Telegram to become 'AI face models' for sophisticated scams that generate up to 100 video calls daily. This disturbing trend represents a new evolution in cybercrime, combining human authenticity with AI scalability to create more convincing fraud operations.
La Rédaction
The Vertex

Source: www.wired.com
In the shadowy corners of Telegram, a disturbing trend is emerging: dozens of channels are advertising for 'AI face models' to participate in sophisticated online scams. These listings, primarily targeting women, promise easy money for what amounts to lending one's likeness to fraudulent operations that can generate up to 100 video calls per day.
The mechanics are chillingly simple yet effective. Scammers use AI technology to create deepfake videos of these models, then deploy them in romance scams, investment fraud, and other confidence schemes. A real human face adds a layer of authenticity that purely AI-generated content cannot yet match, making victims more likely to trust the persona and ultimately part with their money.
This phenomenon represents a troubling evolution in cybercrime. Unlike traditional scams that relied on stolen photos or completely fabricated identities, these operations create a perverse symbiosis between human models and AI technology. The models provide the essential human element—voice, mannerisms, and appearance—while AI handles the scalability and automation.
What makes this particularly insidious is the exploitation of economic vulnerability. Many of these women, often from developing countries, may not fully understand how their likeness will be used or the scale of harm their digital doppelgängers will cause. They're simply trying to earn money in a gig economy that offers few alternatives.
The implications extend beyond individual victims. As AI technology becomes more accessible, we're likely to see an arms race between scammers and verification systems. The human face, once a reliable indicator of authenticity, has become just another tool in the fraudster's arsenal. This raises profound questions about identity, consent, and the future of trust in our increasingly digital world.
Looking ahead, the proliferation of such scams suggests we may need to fundamentally rethink how we establish trust online. Traditional verification methods are becoming obsolete, and new systems—perhaps based on blockchain or other emerging technologies—may be necessary to combat this new breed of AI-powered deception.