ARTIFICIAL INTELLIGENCE AS A JURIDICAL PERSON: RETHINKING ACCOUNTABILITY IN THE ERA OF AUTOMATED DECISION MAKING BY AI
Aditya Pal, JRF-Ph.D., 2nd Semester, Scholar at SICMSS, Rashtriya Raksha University, Gandhinagar-382305 (India)
Honey Shankhwar, LL.M (Business Law), 2nd Semester, Student at Dharmashastra National Law University (DNLU), Jabalpur-482001 (India)
Artificial Intelligence (AI) has evolved rapidly from its humble beginnings in ‘cybernetics’ and ‘machine learning’ (ML) into a pervasive force that now shapes governance, commerce, and even social interaction. Given the trajectory of its evolution and developments such as Large Language Models (LLMs), generative AI, the Internet of Things (IoT), and the race to achieve ‘Artificial General Intelligence’, the time is ripe to address accountability for the ‘autonomous’ acts of AI systems. Traditional legal regimes were designed for humans and corporate entities; in the age of AI, however, they struggle to address the harms caused by the autonomous acts of AI systems, such as algorithmic bias in decision-making, misinformation or misrepresentation, and accidents involving ‘self-driving’ (autopilot) vehicles, as in the 2018 crash of a self-driving Uber vehicle (a modified 2017 Volvo XC90 SUV operated by Uber’s Advanced Technologies Group). This paper explores the idea of granting legal personality to AI so as to make it accountable, given the nature of its ongoing evolution. It critically evaluates India’s fragmented and inadequate AI governance regime, including the Information Technology Act, 2000; the Digital Personal Data Protection Act, 2023; and India’s evolving AI policy architecture, spanning NITI Aayog’s National Strategy for Artificial Intelligence (#AIforALL, 2018), NITI Aayog’s Principles for Responsible AI (2021), and the Ministry of Electronics and Information Technology’s India AI Governance Guidelines (2025) issued under the IndiaAI Mission. It also undertakes a comparative analysis of global approaches, including the European Union’s AI Act and the regulatory approaches adopted by Japan and the United Arab Emirates toward AI accountability in commercial contexts. Arguments for and against granting juridical personality to AI are also examined.
This paper proposes a ‘hybrid’ approach of granting ‘quasi-juridical’ personhood to AI in India, combining shared accountability among the developers and deployers of AI systems alongside the AI system itself.
| 📄 Type | 🔍 Information |
|---|---|
| Research Paper | LawFoyer International Journal of Doctrinal Legal Research (LIJDLR), Volume 4, Issue 2, Pages 340–360. |
| 🔗 Creative Commons | © Copyright |
| This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. | © Authors, 2026. All rights reserved. |