RETHINKING MENS REA & CRIMINAL LIABILITY IN THE AGE OF ARTIFICIAL INTELLIGENCE
Swati Kumari, 4th-year student, Bharati Vidyapeeth (Deemed to be University), New Law College, Pune (India)

doi.org/10.70183/lijdlr.2026.v04.89

Artificial Intelligence has moved beyond being a mere technological aid and now performs functions that involve independent decision-making, often with serious real-world consequences. This shift raises difficult questions for penal law, particularly in relation to the requirement of mens rea. While harm caused by AI systems can usually satisfy the element of actus reus, identifying a guilty mind becomes difficult when the actor is a non-human system lacking consciousness or intent.

This paper examines whether existing principles of criminal liability are capable of addressing harms caused by AI, or whether their application reveals a structural problem. It analyses the problem of legal personhood for intelligent systems and evaluates different approaches to liability, including perpetration through another, natural and probable consequences, and direct liability of AI. Drawing on real incidents involving autonomous vehicles and AI-driven decision-making, the paper argues that attributing criminal responsibility directly to AI risks weakening the moral basis of criminal law. Instead, it supports a framework that places responsibility on the human actors involved in the design, deployment, and supervision of AI systems, while emphasising the need for preventive regulation to address emerging risks.