
AI AUTONOMY VS HUMAN CONTROL: BALANCING INNOVATION, ACCOUNTABILITY AND GOVERNANCE

Aaryan Naresh Parekh, BBA LLB, 10th Semester, Student at MIT World Peace University (India)

Janhavi Vinod Shrungare, BBA LLB, 8th Semester, Student at MIT World Peace University (India)

Artificial intelligence has become a disruptive force, reshaping governance, decision-making, and how people engage with technology. As AI systems become more autonomous, significant concerns arise about accountability, human oversight, privacy, justice, and the adequacy of existing legal and regulatory frameworks. This study examines the growing tension between AI autonomy and human control, highlighting the need to balance responsible governance with technological innovation. It traces the conceptual evolution of artificial intelligence, the rising importance of autonomous decision-making, and the necessity of preserving meaningful human control in high-impact and rights-sensitive domains. While recognising the benefits of AI in improving efficiency, reducing human error, encouraging innovation, and expanding access across sectors such as healthcare, education, finance, and governance, the study also analyses the major legal issues raised by AI autonomy, including accountability gaps, data protection concerns, algorithmic discrimination, and ethical dilemmas. To assess emerging models of AI regulation, it examines comparative regulatory approaches and jurisprudential perspectives, particularly in the US, UK, EU, and India. Using doctrinal and analytical research methods grounded in legislation, case law, policy instruments, and academic literature, the study evaluates whether current legal frameworks adequately address the challenges posed by autonomous AI systems. It argues that neither unbridled AI autonomy nor rigid human control offers a workable answer; what is needed instead is a human-centric, risk-based governance system founded on responsibility, transparency, and effective supervision.
The study concludes that the future of AI governance lies in responsible autonomy: innovation advancing within moral and legal bounds that uphold rights while enabling technological progress.

Type: Research Paper
Publication: LawFoyer International Journal of Doctrinal Legal Research (LIJDLR), Volume 4, Issue 1, Pages 3227–3259.
Licence and Copyright: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. © Authors, 2026. All rights reserved.