
Sundar Balasubramanian, Managing Director, Check Point Software Technologies, India & South Asia
Just a few years ago, deepfakes were little more than digital novelties, convincing in parts, clumsy in others, and often dismissed as internet humor. By 2025, however, they have evolved into powerful tools: widely accessible, scalable, and fully weaponised. What once passed as clever video editing is now driving large-scale social engineering, fraud, and identity theft.
A Pi-Labs report, “Digital Deception Epidemic: 2024 Report on Deepfake Fraud’s Toll on India,” attributes the surge in deepfake-related cyber fraud in India and projects losses of around Rs 700 billion (approximately $8.4 billion) by 2025. The Ministry of Home Affairs recently informed the Indian Parliament, based on data from the National Cyber Crime Reporting Portal and other government systems, that cybercrime losses rose a sharp 206 per cent in 2024 alone, with total financial fraud losses exceeding Rs 228.45 billion. Surveys conducted by McAfee and cited in related cybersecurity reports show that over 75 per cent of Indians have encountered deepfake content, with 38 per cent falling victim to scams.
These sources collectively highlight the significant and growing impact of deepfake-enabled cybercrime on Indian businesses and individuals, underscoring the urgent need for awareness and stronger defenses.
According to Check Point Research’s AI Security Report 2025, we have reached a pivotal moment: deepfake technology now spans from basic offline generation to fully autonomous, real-time impersonation engines, capable of deceiving even seasoned professionals.
Deepfakes by the Numbers: Where We Stand
- Over $35 million in fraud losses have been attributed to deepfake video scams in just two high-profile cases in the UK and Canada.
- Artificial intelligence (AI)-driven voice deepfakes are now used regularly in sextortion, CEO impersonation, and hostage scams; in one case in Italy, criminals impersonated the Minister of Defense in a live call to extort high-profile contacts.
- AI-enhanced telephony systems, priced at around $20,000, can now impersonate any voice in any language across multiple conversations simultaneously, with no human operator required.
- These systems are available right now on dark web forums and Telegram marketplaces.
Automation has changed the game
The report introduces a “Deepfake Maturity Spectrum,” highlighting how generative AI (GenAI) has evolved from static content creation and will soon reach autonomous agents that conduct live video conversations with unsuspecting targets. Let us break it down:
Today’s most advanced malicious tools are powered by large language models (LLMs) like DeepSeek and Gemini, and driven by customised models like WormGPT and GhostGPT. These tools do not merely generate content: they hold dynamic conversations, analyse victim responses, and adapt tone and language on the fly.
The criminal toolkit: Democratised and commoditised
Gone are the days when advanced deception required elite cybercrime syndicates. Now:
- Voice cloning tools like ElevenLabs can generate a convincing voice in under 10 minutes from short audio samples.
- Face-swapping plugins for live video are available in underground marketplaces starting at a few hundred dollars.
- One AI-driven phishing suite, GoMailPro, was openly advertised on Telegram for $500/month, with built-in ChatGPT support.
- Business email compromise kits, like the “Business Invoice Swapper,” automatically scan inboxes and alter invoice details using AI—scaling fraud with near-zero manual input.
Cybercrime has effectively outsourced creativity to machines; even low-skilled attackers can now launch sophisticated operations.
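The invoice-swapping fraud described above can be countered at the process level with zero-trust style checks. As an illustrative sketch only (the vendor record, function, and thresholds here are hypothetical examples, not a Check Point product or anything from the report), a minimal Python routine that holds any invoice whose bank details or sender domain diverge from a trusted vendor record:

```python
# Illustrative defensive sketch (hypothetical, not a vendor product):
# hold invoices for manual review when payment details or sender
# domain diverge from an independently maintained vendor record.

TRUSTED_VENDORS = {
    # Vendor master data, maintained out of band (not from email).
    "acme-supplies": {
        "iban": "GB29NWBK60161331926819",
        "email_domain": "acme-supplies.com",
    },
}

def flag_invoice(vendor_id: str, sender_email: str, iban: str) -> list[str]:
    """Return reasons this invoice should be held for manual review."""
    reasons = []
    record = TRUSTED_VENDORS.get(vendor_id)
    if record is None:
        reasons.append("unknown vendor: verify out of band before paying")
        return reasons
    if iban != record["iban"]:
        # An AI-altered invoice typically swaps only the bank details.
        reasons.append("bank details differ from vendor record")
    if not sender_email.endswith("@" + record["email_domain"]):
        reasons.append("sender domain does not match vendor record")
    return reasons

# An invoice with swapped bank details is held, even though the
# sender address and document look legitimate.
alerts = flag_invoice(
    "acme-supplies", "billing@acme-supplies.com", "DE89370400440532013000"
)
print(alerts)
```

The point of the sketch is that the check keys on data the attacker cannot forge by editing the email itself; the same pattern extends to callback verification of any payment-detail change.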
What happens when real and fake blur?
The FBI has already warned that AI-generated images, videos, and voices are undermining traditional forms of trust and verification. From job interview scams involving real-time face swaps to fake conference calls impersonating executives, the line between digital fiction and fact is evaporating.
Security teams can no longer rely on gut instinct or visual checks:
- Real and AI-generated voices are now effectively indistinguishable to the human ear.
- Audio deepfakes are already a go-to method for large-scale social engineering campaigns.
These are not theoretical risks—they are already embedded in real-world attacks.
Proactive defense against a self-running threat
To help organisations stay protected, Check Point’s solutions offer protection across file types, operating systems, and attack surfaces, and proactively:
- Detect and block AI-generated threats like fake media files and phishing payloads
- Isolate suspicious behavior linked to autonomous AI agents
- Neutralise malware embedded in deepfake files or used to deliver them
Coupled with user awareness and zero trust principles, these solutions form a comprehensive shield against an adversary that never sleeps.
Deepfakes are not the future. They are here
Organisations can no longer afford to view deepfakes as a fringe novelty. As the AI Security Report 2025 shows, deepfakes have become self-generating, market-driven, and operationalised. Their ability to scale, deceive, and adapt in real time marks a shift in the balance of cyber power.