
Financial fraud is on the rise, driven by deepfake impersonation powered by advances in artificial intelligence, particularly generative adversarial networks (GANs). The mass outreach of social media is being deliberately harnessed to broadcast fraudulent financial investment advisory messages and audio-visual content, maligning the reputation of high-profile individuals ranging from successful businesspeople and celebrities to government officials.
Falsely representing such individuals as promoting fictitious investment schemes or endorsing unpopular or lesser-known brands without their consent is both a breach of privacy and a theft of identity. Further, the misleading nature of such unverified content, along with the unscrupulous links shared with it, results in cybercrimes of varying magnitudes.
On the dark web, threat actors (also known as forgers) who create fake videos have become so proficient that investigators can no longer spot the deepfakes. These videos lay a trap for unsuspecting, naïve individuals, who end up losing their hard-earned money.
Key root causes of deepfake videos include the rapid development of technology, whose availability and ease of access lure individuals seeking easy money. Bad actors build technology-enabled platforms and exploit human psychology and AI to make fraudulent content appear authentic.
The availability of data, in the form of high-resolution selfies, videos, and public digital footprints of celebrities, provides a picture-perfect training ground for algorithms producing mala fide content that promises unbelievable financial freedom, seemingly delivered by brand ambassadors of great repute. Scammers are increasingly using voice cloning and authentic-looking official emails, impersonating family members or CXOs at work, to authorise money transfers to illegitimate bank accounts.
The absence of robust real-time deepfake detection tools, content authentication mechanisms, or watermarking standards enables the widespread circulation of such manipulated material. Further, most organisations lack digital governance and policies on AI misuse and digital identity protection as part of their internal risk assessment and management initiatives. The problem is further accentuated by cross-border hosting and jurisdictional challenges, coupled with the anonymous nature of digital platforms.
While India currently does not have a dedicated ‘impersonation law’ governing deepfakes, the existing legal framework is being enhanced, with support from the judiciary, to ensure punitive action is initiated against perpetrators. Impersonating a celebrity using deepfake technology may invoke criminal action under the IT Act, 2000, and the IT Rules, 2021. Section 66C (identity theft), Section 66D (cheating by personation), and Sections 67 and 67A (impersonation involving obscenity) may result in imprisonment ranging from three to five years, with fines ranging from ₹1 lakh to ₹10 lakh. While an official complaint can be filed on the National Cyber Crime Reporting Portal, the court can issue an injunction and award the aggrieved party recovery of damages from the impersonator.
Similar provisions under Section 319 (cheating by personation), Section 336 (forgery), and Section 356 (defamation) of the Bharatiya Nyaya Sanhita (BNS) have been introduced to protect personality rights. Courts in India recognise that celebrities have the right to control the commercial use of their images, identity and name, and audio-visual content. The landmark judgement in Anil Kapoor vs Simply Life India (2023) has been a trendsetter in this domain.
Social media platforms need to assume greater responsibility as active gatekeepers and perform identity-based validation of the content being uploaded. While regulators and law enforcement agencies are ramping up identity-protection measures, AI detection filters should be applied to scan uploads for signs of deepfake manipulation before a video goes viral. With the underlying technology being open source, robust solutions such as the ‘tag and trace’ method and the verification of ‘content credentials’ under the Coalition for Content Provenance and Authenticity (C2PA) standard are the need of the hour to provide the right defence mechanism.
(The author is Partner, Risk & Business Advisory, BDO India. Views are personal.)