RETHINKING MENS REA & CRIMINAL LIABILITY IN THE AGE OF ARTIFICIAL INTELLIGENCE
Swati Kumari, 4th-year student at Bharati Vidyapeeth (Deemed to be University), New Law College, Pune (India)
Artificial Intelligence has moved beyond being a mere technological aid and now performs functions involving independent decision-making, often with serious real-world consequences. This shift raises difficult questions for penal law, particularly in relation to the requirement of mens rea. While harm caused by AI systems can usually satisfy the element of actus reus, identifying a guilty mind becomes difficult when the actor is a non-human system lacking consciousness or intent. This paper examines whether existing principles of criminal liability are capable of addressing harms caused by AI, or whether their application reveals a structural problem. It analyses the question of legal personhood for intelligent systems and evaluates different approaches to liability, including perpetration through another, natural and probable consequences, and direct liability of AI. Drawing on real incidents involving autonomous vehicles and AI-driven decision-making, the paper argues that attributing criminal responsibility directly to AI risks weakening the moral basis of criminal law. Instead, it supports a framework that places responsibility on human actors involved in the design, deployment, and supervision of AI systems, while emphasising the need for preventive regulation to address emerging risks.
| 📄 Type | 🔍 Information |
|---|---|
| Research Paper | LawFoyer International Journal of Doctrinal Legal Research (LIJDLR), Volume 4, Issue 1, Pages 2044–2063. |
| 🔗 License | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. |
| © Copyright | © Authors, 2026. All rights reserved. |