
Balancing Innovation with Inclusion: Crafting India’s Path to Responsible AI Governance

How India Should Regulate Artificial Intelligence

Editorial Team

Last Updated:

2 August 2025

Synopsis

India stands at a pivotal moment in shaping AI regulation. Instead of blanket laws, a risk-based, sector-specific framework rooted in constitutional values is essential. Regulatory efforts must address bias, protect privacy, and ensure accountability, especially in public-facing AI systems. Investment in open-source models, local oversight institutions, and global engagement will ensure AI serves both innovation and inclusion. India's regulatory path must reflect its democratic ethos while safeguarding citizens from opaque and harmful AI deployments.


The Urgency of Regulation


Artificial Intelligence (AI) is transforming India’s economic and governance structures. With AI embedded in financial services, healthcare, public welfare delivery, and surveillance systems, regulation is no longer optional. India’s constitutional guarantees under Article 21 (Right to Life and Personal Liberty) implicitly demand that AI systems respect the autonomy, dignity, and due process rights of individuals affected by automated decisions.


> “AI is no longer a technology of the future—it is a force shaping our present. The urgency to regulate is not just technical, it is societal.”

— Fei-Fei Li





---


Building on Constitutional Values


India’s regulatory approach must align with the Fundamental Rights enshrined in the Constitution—especially Articles 14 (Equality before Law), 15 (Prohibition of Discrimination), and 19 (Freedom of Speech and Expression). Any AI regulation should be framed to uphold these rights in both public and private sector applications of AI. For example, AI-driven decisions in hiring or lending must not lead to indirect discrimination under Article 15(2).


> “We need AI systems that reflect our democratic values, not just optimize for efficiency or control.”

— Ursula von der Leyen





---


A Phased and Risk-Based Approach


A sector-specific and risk-sensitive model, similar to the framework of the EU AI Act, can be adapted under Indian law through delegated legislation. Existing regulatory bodies like the RBI, SEBI, IRDAI, and TRAI already have powers under their respective statutes to issue directions, circulars, or binding regulations, and could be mandated to develop AI-specific guidelines for their sectors under:


Section 35A of the Banking Regulation Act, 1949 (for RBI)


Section 11 of the SEBI Act, 1992


Section 79 of the Information Technology Act, 2000 (intermediary due diligence)


> “We must regulate AI based on its capacity for harm, not based on where it comes from or who makes it.”

— Yoshua Bengio
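The tiered, risk-based model described above can be sketched as a simple classification rule. The tiers and criteria below are illustrative assumptions loosely mirroring the EU AI Act's structure; they are not prescribed by any Indian statute, and in practice each sectoral regulator would define its own criteria under delegated legislation.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers, loosely mirroring the EU AI Act's structure."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk: regulator approval and mandatory audits"
    LIMITED = "limited-risk: transparency obligations"
    MINIMAL = "minimal-risk: voluntary codes"

@dataclass
class AISystem:
    name: str
    sector: str            # e.g. "banking", "securities", "insurance"
    affects_rights: bool   # touches welfare, credit, employment, or policing
    interacts_with_humans: bool

def classify(system: AISystem) -> RiskTier:
    # Hypothetical mapping: sectoral regulators (RBI, SEBI, IRDAI, TRAI)
    # would set the actual criteria under their enabling statutes.
    if system.sector == "social-scoring":
        return RiskTier.UNACCEPTABLE
    if system.affects_rights:
        return RiskTier.HIGH
    if system.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

credit_model = AISystem("loan-underwriting", "banking",
                        affects_rights=True, interacts_with_humans=False)
print(classify(credit_model).name)  # HIGH
```

The point of the sketch is that classification happens before deployment and determines which obligations attach, rather than a single blanket rule applying to all AI systems.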


Tackling Bias and Inequality


Protections against discrimination by automated systems could be enforced under the Equal Remuneration Act, 1976, the Maternity Benefit Act, 1961, and the Rights of Persons with Disabilities Act, 2016, where AI is used in employment, healthcare, or government benefits allocation. A statutory requirement for Algorithmic Impact Assessments (AIAs) could be introduced under rules framed using powers from the IT Act, 2000, and sectoral legislation.
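One quantitative check an AIA might include is a comparison of selection rates across demographic groups. The sketch below uses the "four-fifths rule", a heuristic from US employment practice, purely as an illustrative metric; an Indian AIA framework would set its own thresholds, and the sample data is hypothetical.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower group's selection rate to the higher one.

    The US 'four-fifths rule' flags ratios below 0.8 for review;
    used here only as an illustrative threshold.
    """
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi > 0 else 1.0

# Hypothetical hiring-model outputs for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25
print(f"{disparate_impact_ratio(group_a, group_b):.2f}")  # 0.40
```

A ratio of 0.40 would fall well below the illustrative 0.8 threshold and warrant closer review of the model's features and training data before deployment.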


> “Bias in AI is not just a technical problem; it is a reflection of our historical and institutional inequities.”

— Timnit Gebru


Aligning AI with Data Protection


India’s Digital Personal Data Protection Act, 2023 (DPDP Act) is central to AI governance. AI systems relying on personal data for training and deployment must adhere to:


Section 4 (Processing only for a lawful purpose)


Section 5 (Notice obligations)


Section 6 (Valid consent)


Section 8 (Obligations of Data Fiduciaries, including security safeguards)


Sections 11 and 12 (Rights to access, correct, and erase personal data)


Additionally, Section 17 allows the government to exempt certain State processing in the interests of sovereignty and public order, which could affect public AI deployments.
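The lawful-purpose and consent obligations above translate, at the engineering level, into gating every processing operation on a recorded, unwithdrawn consent for that specific purpose. The sketch below illustrates that pattern; the field names and ledger structure are illustrative assumptions, not a schema taken from the DPDP Act or any official rules.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical consent-ledger entry; field names are illustrative."""
    data_principal_id: str
    purposes: set = field(default_factory=set)  # purposes consented to
    withdrawn: bool = False

def may_process(record: ConsentRecord, purpose: str) -> bool:
    # Purpose limitation: process personal data only for a purpose the
    # data principal consented to, and only while consent stands.
    return (not record.withdrawn) and purpose in record.purposes

consent = ConsentRecord("DP-001", purposes={"credit-scoring"})
print(may_process(consent, "credit-scoring"))  # True
print(may_process(consent, "model-training"))  # False: no consent for this purpose
consent.withdrawn = True
print(may_process(consent, "credit-scoring"))  # False: consent withdrawn
```

The second check matters most for AI: reusing data collected for one purpose (say, credit scoring) to train an unrelated model would fail the gate unless consent is separately obtained for that purpose.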


> “Privacy and data protection are foundational to building trustworthy AI. Without them, public confidence will erode.”

— Brad Smith


Strengthening Public Accountability


The use of AI in public decision-making must be subject to judicial review under Articles 32 and 226 of the Constitution. Citizens should be able to challenge arbitrary or opaque decisions, especially those affecting their welfare benefits, policing outcomes, or access to public services. The Right to Information Act, 2005 could also be amended or clarified to mandate disclosure of the algorithmic logic used in government systems.


> “In a democratic society, people have the right to understand and challenge how algorithms shape their lives.”

— Kate Crawford


Building National Capacity for Trustworthy AI


To support the indigenous development of responsible AI, public initiatives such as Digital India, Startup India, and the IndiaAI Mission (under MeitY) should prioritize open-source, language-diverse AI models. The National Data Governance Framework Policy (NDGFP) and IndiaAI Digital Public Infrastructure (DPI) efforts can be institutionalized with legal backing to enable transparent, accountable public-sector AI projects.


> “The benefits of AI must not be limited to a few countries or corporations. Equity in access is a global imperative.”

— Sundar Pichai


Shaping Global Standards While Preserving Sovereignty


India’s active participation in forums like the G20, Global Partnership on AI (GPAI), and UNESCO’s AI Ethics Framework must not come at the cost of flexibility in domestic rule-making. The Constitution allows India to frame harmonized but sovereign policy that suits its developmental goals under Article 253 read with Entry 97, List I of the Seventh Schedule.


> “International cooperation is essential, but not at the cost of domestic autonomy and contextual needs.”

— Ravi Shankar Prasad


The Need for Institutional Vision


India should consider enacting a dedicated Artificial Intelligence (Ethical Use and Governance) Act and setting up a statutory authority such as a National AI Commission or AI Ethics Board. Such a body could be modeled on the Commissions for Protection of Child Rights Act, 2005, or the TRAI Act, 1997, granting it advisory, oversight, and enforcement powers.


> “We cannot rely on general-purpose institutions to govern a technology as complex and evolving as AI. We need purpose-built oversight bodies.”

— Stuart Russell


Striking the Right Balance


India’s AI regulation must strike a careful balance: promoting innovation without compromising individual rights, enabling global trade while preserving digital sovereignty, and using automation for efficiency without losing sight of human dignity. This is not just a legal challenge but a moral and developmental responsibility.


About

TechPolicyLaw.org is your trusted source for in-depth analysis, news, and commentary at the critical intersection of technology, public policy, and law. In a rapidly evolving digital world, we aim to make sense of the regulatory frameworks, legal battles, and policy shifts shaping the future of innovation.

© 2025 Tech Policy Law 
