LIJDLR

PLATFORM LIABILITY AND DEEPFAKE PORNOGRAPHY: ARE INDIA’S INTERMEDIARY RULES FIT FOR THE AI AGE?

Saloni Shashank Patil, Chhatrapati Shivaji Maharaj University, Panvel, Navi Mumbai, Maharashtra, India

Satya Prakash Mishra, Chhatrapati Shivaji Maharaj University, Panvel, Navi Mumbai, Maharashtra, India

The advent of deepfake pornography significantly changes the way digital sexual abuse can be carried out. Deepfake pornography is a method of identity distortion and a violation of individuals' rights to dignity, privacy, and sexual autonomy, which are protected by Article 21 of the Indian Constitution. It poses a serious challenge to the constitutional adequacy of India's intermediary liability framework. The present regulatory framework under Section 79 of the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 rests on assumptions of intermediary neutrality and notice-based takedown, reflecting a pre-AI model of online harm. This article maintains that the existing framework, which is primarily reactive and takedown-based, is constitutionally inadequate in a context where algorithmic systems are used not only to fabricate but also to rapidly amplify synthetic sexual content. Drawing on constitutional jurisprudence recognising dignity and privacy as integral components of the right to life and personal liberty under Article 21 of the Constitution, the paper argues that the State has a corresponding duty to recalibrate platform governance beyond mere passive safe-harbour protection. Thus, the article advocates replacing the reactive notice-and-takedown system with a dignity-centric framework that, among other things, recognises the structural responsibility of platforms for AI-enabled harm. The article employs doctrinal analysis of Indian constitutional jurisprudence and a comparative examination of emerging regulatory approaches, particularly the European Union's Digital Services Act and the United Kingdom's Online Safety Act, to demonstrate how algorithmic amplification undermines the legal fiction of platform neutrality.
The article contributes to debates on digital governance and platform accountability by reinterpreting intermediary liability through a constitutional dignity framework in the age of artificial intelligence.

Type: Research Paper
Publication: LawFoyer International Journal of Doctrinal Legal Research (LIJDLR), Volume 4, Issue 1, Pages 1146–1177.
Licence: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. © Authors, 2026. All rights reserved.