
As India prepares for the India AI Impact Summit 2026, artificial intelligence has moved decisively from the realm of innovation to that of adjudication. Courts, regulators and lawmakers are now confronted with a fundamental question: how does the law respond when reality itself can be algorithmically fabricated? The rise of deepfakes, synthetic media and AI-generated content has introduced unprecedented legal complexity, challenging long-settled principles of evidence, liability, free speech and judicial truth-finding.
Deepfakes, hyper-realistic digital impersonations created using generative AI, have moved beyond academic demonstration to real-world impact. They have been deployed to fabricate statements by public figures, misrepresent private individuals and manipulate visual and audio records. The legal system’s traditional reliance on documentary and audio-visual evidence as a neutral proxy for truth has been destabilised by technology capable of producing indistinguishable forgeries. The admissibility and evidentiary value of digital material, long governed by provisions such as Sections 65A and 65B of the Indian Evidence Act, 1872 (now carried forward in Section 63 of the Bharatiya Sakshya Adhiniyam, 2023), require deeper scrutiny. The challenge is no longer limited to establishing the source or integrity of electronic records but extends to proving their very authenticity.
The judiciary in India has already begun to wrestle with these challenges. The Delhi High Court recently granted an interim injunction restraining the online distribution of an AI-generated film exploiting the likeness of a public figure’s child, noting that the deepfake content posed irreparable harm to personal dignity and reputation. Similarly, in Sudhir Chaudhary v. Meta Platforms, the Delhi High Court addressed the misuse of AI to create and circulate misleading videos using a news anchor’s likeness, holding that such content both misleads the public and undermines individual reputation. These judicial responses illustrate a broader doctrinal evolution: Indian courts are increasingly applying existing constitutional guarantees of privacy and dignity to digital harms enabled by AI.
Yet judicial innovation alone cannot fill the legislative void. Existing criminal provisions, such as Sections 465 and 469 of the Indian Penal Code on forgery (since re-enacted in the Bharatiya Nyaya Sanhita, 2023) and Sections 66C and 66E of the Information Technology Act on identity theft and violation of privacy, are being stretched to cover deepfake harms, but they were not crafted with generative AI in mind. The attribution of intent and responsibility in cases involving autonomous or semi-autonomous generative systems remains unsettled. Should liability rest with the creator of the content, the deployer of the model, or the platform that disseminates it? As AI intermediaries proliferate, these questions demand statutory clarity.
Regulatory developments from the Indian government provide emerging answers. The Ministry of Electronics and Information Technology has moved to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to define “synthetically generated information” and impose enhanced due diligence obligations on intermediaries. The draft regulations aim to mandate visible labelling of AI-generated content and to obligate platforms to verify declarations of synthetic creation through technical safeguards. More recently, the government has introduced amendments requiring large platforms such as Google, YouTube and Instagram to label AI content and to take down flagged synthetic media within strict timelines, reflecting heightened accountability demands on intermediaries.
These regulatory impulses align with global trends. Jurisdictions such as the European Union are formalising risk-based AI oversight, including obligations for high-risk generative systems; India’s policy trajectory suggests a parallel, albeit more cautious, path. Notably, in 2025 the Supreme Court declined to entertain a PIL seeking urgent deepfake regulation, directing the petitioner to the Delhi High Court instead, a move that signals judicial restraint in policy formulation and deference to evolving administrative frameworks.
Crucially, any regulatory regime must balance enforcement with constitutional liberties. The apex court’s landmark ruling in Shreya Singhal v. Union of India struck down overly broad restrictions on online expression, affirming that intermediary liability and content takedown mechanisms must respect the freedom of speech guaranteed under Article 19(1)(a). Contemporary litigation challenging amendments to the IT Rules, including petitions before the Bombay High Court over new content moderation protocols, illustrates persistent concerns about due process and free expression even as the state tightens controls.
This balance is more than academic. Deepfakes used to misrepresent politicians’ statements or private individuals’ conduct can erode public trust and distort democratic deliberation. At the same time, overbroad content regulation risks silencing dissent and chilling legitimate expression. Legal frameworks must therefore be precise, proportionate and grounded in clear standards that distinguish harmful synthetic fabrications from protected speech.
The judiciary will inevitably play a central role in refining these standards. Institutional investments, including judicial training in digital forensics, expert panels for AI verification and specialised procedural rules for digital evidence, are necessary to equip courts for the complexities ahead. Interim injunctions and tailored reliefs offer immediate remedies, but systemic clarity requires legislation that harmonises constitutional safeguards with technological realities.
As the India AI Impact Summit 2026 draws near, the conversation around AI and the law must broaden beyond technical stakeholders to include jurists, policymakers and civil liberties advocates. The risks posed by deepfakes and generative AI are not merely technological; they are deeply legal. India’s response will shape both the contours of digital governance and the resilience of its democratic institutions.
(The author is joint managing partner, JSA. Views are personal)