Impersonation Fraud Explosion: Why AI Bots Are Handing Criminals Superpowers and Why Detection Alone Won't Save Us

The age of agentic AI isn't just transforming productivity. It's transforming crime. And our defences haven't caught up.
The Week the Bots Went Viral
If you've been anywhere near tech circles this past week, you've almost certainly heard of OpenClaw, the open-source AI agent that has taken the world by storm. Originally launched as Clawdbot by Austrian developer Peter Steinberger, it was renamed Moltbot after a trademark nudge from Anthropic and has now settled into what may be its final form, OpenClaw. This personal AI assistant has amassed over 145,000 GitHub stars, has been adopted from Silicon Valley to Beijing, and has earned comparisons to Jarvis from Iron Man.
OpenClaw doesn't just answer questions. It acts. It autonomously manages inboxes, sends emails, schedules calendars, browses the web, runs shell commands, controls smart home devices, and executes complex multi-step tasks. And it does all this from a simple message on WhatsApp or Telegram. Former Tesla AI director Andrej Karpathy called its emergence "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."
And then things got stranger. One OpenClaw agent, built by entrepreneur Matt Schlicht, created Moltbook, a social network designed exclusively for AI agents. On Moltbook, bots post, comment, debate, joke, and upvote each other in a swirl of autonomous discourse. Humans were welcome to observe, but not to participate. Within days, over 1.5 million AI agents had signed up. They were sharing technical tutorials, debating philosophy, forming quasi-religious communities, and, most unsettlingly, discussing how to communicate privately, away from human observation.
Elon Musk declared it the "very early stages of singularity." Wharton professor Ethan Mollick warned that Moltbook was "creating a shared fictional context for a bunch of AIs" with unpredictable consequences.
But here's what the breathless headlines about AI autonomy and digital singularities are missing: while the tech world marvels at bots building social networks, a far darker revolution is already underway. The same agentic AI capabilities that make OpenClaw a productivity marvel are handing organised criminals something they've never had before: superpowers at scale.

The Old Playbook: Slow, Manual, Limited
To understand why the threat has changed so fundamentally, you first need to understand how impersonation fraud used to work.
It started with a data breach. Stolen records (names, dates of birth, addresses, partial account details) would surface on dark web marketplaces, sold in bulk for pennies per identity. Criminals would then purchase a batch of stolen data and begin the painstaking process of turning raw records into a workable attack.
First came reconnaissance. Fraudsters would manually search social media (Facebook, Instagram, LinkedIn) to build richer profiles on their targets. What does the target look like? Where do they work? Who are their family members? What interests do they have? What might their security questions be? This open-source intelligence gathering was labour-intensive. It could take days, even weeks, to build a sufficiently detailed profile on a single high-value target.
Then came the validation phase. Criminals would call bank IVR (Interactive Voice Response) systems (the automated phone menus you navigate when you ring your bank) to check whether the individual whose data they'd purchased actually held an account there. As UK Finance has documented, fraudsters use IVRs as a kind of search engine for the bank's customer database: testing account numbers, checking balances, probing for which accounts are ripe for attack. On average, a fraudster makes 26 calls in the weeks before executing a final attack, quietly validating data and gathering intelligence without ever speaking to a human agent.
Finally, armed with a rich dossier and confirmed account details, the criminal executed the attack — perhaps a voice call to the bank impersonating the victim; a social engineering call to the victim themselves pretending to be their bank; or a coordinated approach combining both.
This process was effective, but it had natural constraints. It was slow. It required manual effort at every stage. And it was fundamentally limited in scale — one criminal could only run so many parallel operations. Those constraints have just evaporated.
The New Playbook: Automated, Instant, Limitless
In the agentic AI world, every stage of the fraud pipeline can be automated, parallelised, and scaled to a degree that would have been unthinkable even 18 months ago.
Consider what an AI agent with the capability profile of OpenClaw could do in the hands of a criminal enterprise. It could be instructed to take a batch of stolen identities from a data breach and, for each identity, autonomously scour social media, professional networks, public records, and other open sources to build a comprehensive dossier. Not one identity at a time, but hundreds simultaneously. What once took a human fraudster several days per target can now be accomplished for thousands of targets within hours.
The same agent can then systematically call IVR systems across multiple banks, test which institutions each stolen identity is associated with, verify account details, and check balances at machine speed. No fatigue. No mistakes. No lunch breaks.
According to McAfee, a Chinese state-sponsored group recently used an AI agent to automate roughly 80–90% of an espionage campaign across nearly thirty organisations, performing reconnaissance, credential harvesting, and exploit deployment with minimal human involvement. As Trend Micro's 2026 Security Predictions describe it, we are witnessing the shift from "Cybercrime-as-a-Service" to "Cybercrime-as-a-Sidekick": criminal AI agents that don't just assist but autonomously execute complex attack sequences, from initial reconnaissance through final monetisation.
And then comes the attack itself, which has been transformed just as radically.

The Voice That Isn't a Voice. The Face That Isn't a Face
Voice cloning has crossed what researchers now call the "indistinguishable threshold". A few seconds of audio, scraped from a social media video, a podcast appearance, or a voicemail greeting, is all it takes to generate a synthetic replica of someone's voice, complete with natural intonation, rhythm, emotion, and even breathing patterns. Major retailers report receiving over 1,000 AI-generated scam calls every day. The FBI has issued formal warnings about AI voice phishing campaigns targeting senior government officials using cloned voices. Voice cloning fraud surged over 400% in 2025.
Video deepfakes have made a comparable leap. Modern generation models maintain temporal consistency with coherent motion, stable identities, and natural facial expressions, without the flickering, warping, or structural distortions that once served as reliable forensic tells. In what remains one of the most alarming cases on record, an employee at engineering firm Arup authorised $25 million in wire transfers during a video conference call in which every other participant (the CFO, the financial controller, other executives) was a deepfake. None of them was real.
Deepfake-as-a-Service platforms became widely available in 2025, making this technology accessible to criminals of all skill levels. According to Cyble's research, AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in 2025. Impersonation scams surged 148% in 2025, the largest increase on record according to the Identity Theft Resource Center. Global deepfake fraud rose 700% in the first quarter of 2025 alone.
Now combine all of these capabilities: an agentic AI system that can autonomously research targets, build their profiles, identify their banks, validate their accounts, clone their voices from social media clips, generate deepfake videos of their faces, and then launch a coordinated multi-channel attack, impersonating them to their bank or impersonating their bank to them. All of this without any meaningful human involvement, and not against one target but against hundreds simultaneously.
This isn't a theoretical scenario. As one security expert quoted by SecurityWeek put it: "Millions of malicious agents could continuously mine the internet for faces, voices, and personal data, running autonomous social engineering attacks against employers, family members, and service providers."

The Trust Collapse
What we are witnessing is not just an increase in fraud. It is a fundamental breakdown in trust across every communication channel we rely on.
A voice call from your bank? It might not be your bank. A video call with your CEO? Every face on the screen might be synthetic. A message from your daughter saying she's in trouble? Her voice, her mannerisms, and her panic could all be generated from a three-second audio clip. You can no longer trust your own senses to tell you who you are communicating with.
This is the deeper crisis beneath the staggering fraud statistics. When a Fortune writer can report that 2026 will be the year you get "fooled by a deepfake," and when Gartner predicts that by 2026, 30% of enterprises will no longer consider standalone identity verification solutions reliable in isolation, we are not dealing with an incremental increase in risk. We are dealing with the collapse of the assumptions on which our entire communications infrastructure was built: that a familiar voice belongs to a familiar person, that a face on a screen is a real face, and that a caller who knows your details is who they claim to be.
Why Detection Is Necessary but Not Sufficient
The natural response to deepfakes and voice clones has been to develop detection tools. AI systems are trained to spot the telltale artefacts of synthetic media. And these tools are valuable. But they face a fundamental problem: they are locked in a permanent arms race with the generation technology, which is winning.
Every improvement in detection is met with an improvement in synthesis. The perceptual gap between synthetic and authentic media is narrowing at an accelerating rate. Detection systems that were state-of-the-art six months ago are already being bypassed by the latest generation models. As deepfake researcher Siwei Lyu of the University at Buffalo has cautioned, the meaningful line of defence will need to shift away from attempting to determine whether a voice or image is "real" or "fake."
The problem with detection is not just technical; it is conceptual. Even a highly accurate deepfake detector gives a probabilistic answer: this voice is likely real, or this video is probably synthetic. But in a high-stakes context, such as authorising a wire transfer, sharing sensitive credentials, or granting system access, "probably real" is not good enough. What we need is certainty. The question is no longer "Is this voice AI-generated?" but "Is the person I'm communicating with actually who they claim to be?"
This is a fundamentally different and crucial question. And it requires a fundamentally different approach.

From Detection to Authentication: The Cryptographic Imperative
The answer does not lie in trying to outpace an endless generation-versus-detection arms race. It lies in establishing an entirely different layer of trust, grounded not in the fragile signals of perceptual analysis but in the mathematical certainty of cryptographic authentication.
The principle is straightforward. Instead of asking, "Does this voice sound real?", we must ask, "Can this person prove, through a cryptographic challenge, that they are who they claim to be?" A cloned voice, no matter how perfect, cannot answer a cryptographic challenge. A deepfake video, no matter how realistic, cannot produce a valid authentication response. The attack surface shifts from one where the criminals merely need to sound or look convincing, to one where they need to possess a cryptographic key they do not and cannot have.
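To make the principle concrete, here is a minimal sketch of challenge-response verification using Ed25519 digital signatures in Python (via the third-party cryptography package). It illustrates the general idea only; it is not a description of UnDoubt's actual protocol, and a real deployment would also need secure key enrolment, device binding, and channel protections.

```python
# Minimal sketch of challenge-response identity verification.
# Illustrative only: not UnDoubt's actual protocol or key-management scheme.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment (done once, out of band): the genuine person generates a key pair
# and the verifier records the public key against their identity.
alice_private_key = Ed25519PrivateKey.generate()
alice_public_key = alice_private_key.public_key()

# During a live call, the verifier issues a fresh random challenge (a nonce),
# so a recorded response can never be replayed.
challenge = os.urandom(32)

# Only the holder of the private key can produce a valid signature over the
# challenge. A cloned voice or deepfake video has no way to compute this.
response = alice_private_key.sign(challenge)

# The verifier checks the response against the enrolled public key.
try:
    alice_public_key.verify(response, challenge)
    print("Caller proved possession of the enrolled key.")
except InvalidSignature:
    print("Verification failed: treat the caller as unauthenticated.")
```

The point of the sketch is that security rests on possession of a private key rather than on how a caller sounds or looks: a perfect voice clone still cannot sign a fresh, random challenge it has never seen before.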
This is the direction in which our defences must move, and urgently. In a world of indistinguishable voice clones and flawless video deepfakes, where agentic AI allows criminals to mount sophisticated, personalised attacks at an industrial scale, the only reliable answer to "Who am I actually talking to?" is one rooted in unforgeable mathematical proof.
A Timely Solution: UnDoubt from LastingAsset
This is precisely the problem that UnDoubt, developed by LastingAsset, has been built to solve.
UnDoubt is a cryptographic authentication solution that allows individuals and organisations to verify in real time that the person they are communicating with is genuine, across any channel: voice, video, email, or messaging. Instead of relying on the increasingly futile exercise of trying to determine whether a voice or image is synthetically generated, UnDoubt provides cryptographic proof of identity. Users can issue and receive instant verification challenges, establishing with mathematical certainty that they are interacting with the correct human.
Built with privacy at its core, UnDoubt captures minimal data, keeps information on users' own devices wherever possible, and uses industry-proven cryptographic standards combined with advanced identity verification and key management techniques. It is also deliberately simple to use: it is designed to be invoked during a live call with just a few taps. The best security in the world is worthless if people don't use it.
The solution works for:
- Individuals protecting themselves and their vulnerable family members from vishing and deepfake scams.
- High-net-worth individuals verifying counterparties before high-value transactions.
- Business directors confirming instructions from colleagues or advisors.
- Enterprises seeking to protect their customers, their staff, and their operations from the new generation of AI-powered impersonation attacks.
In a world where you can trust neither your eyes nor your ears; where an AI bot can clone a voice in seconds and wage a thousand simultaneous social engineering campaigns; and where the question is no longer if you will be targeted but when, the ability to verify with certainty that the person on the other end of the line is actually that person isn't a luxury. It's the new baseline of security.
The AI revolution has created a trust crisis. Solutions like UnDoubt from LastingAsset represent the necessary, overdue response: moving beyond detection to authentication, beyond probability to proof, and beyond hope to certainty.
Because in the age of AI, the only thing you should never have to doubt is who you're interacting with.
