- India released its AI Governance Guidelines in November 2025, choosing voluntary principles over binding legislation to regulate artificial intelligence.
- The framework is built on seven guiding “sutras” anchored by the principle of “Innovation over Restraint”: responsible AI development should take precedence over precautionary regulation.
- The IndiaAI Safety Institute, announced by Minister Ashwini Vaishnaw in January 2025, has selected eight initial research projects covering bias mitigation, explainability, and privacy-preserving machine learning.
- India has no plans to enact a dedicated AI law, instead relying on existing legislation including the Digital Personal Data Protection Act and new institutional bodies to govern AI risks.
What Happened
In November 2025, India’s Ministry of Electronics and Information Technology (MeitY) published the India AI Governance Guidelines, a non-binding framework for the responsible development and deployment of AI systems. The guidelines were released ahead of the AI Impact Summit 2026 and were developed by a drafting committee that MeitY constituted in July 2025 with representatives from government, industry, and academia.
The document establishes India’s official position: the country will not enact a dedicated AI law at this stage. Instead, it will adapt and update existing legal frameworks — including the Digital Personal Data Protection Act of 2023, the Information Technology Act, and sector-specific regulations — to address AI-related challenges as they emerge.
Why It Matters
India’s approach places it firmly in the “pro-innovation” camp alongside Japan, and in contrast with the EU and South Korea, which have adopted comprehensive binding frameworks with enforcement mechanisms. With one of the world’s largest AI talent pools, a rapidly growing technology sector, and a domestic market of over 1.4 billion people, India’s regulatory stance carries significant weight in global AI governance debates.
The guidelines explicitly state that “provided appropriate safeguards are in place, AI-related responsible innovation should take precedence over precautionary restraint.” This principle, labeled the third sutra — “Innovation over Restraint” — signals that India’s government views excessive regulation as a greater threat to national economic development than under-regulation at this stage of AI maturity.
The framework is tied to India’s Viksit Bharat 2047 vision, an ambitious national development program that positions AI as a catalyst for inclusive economic growth, improved public service delivery, and global competitiveness across manufacturing, agriculture, and services.
Technical Details
The guidelines are organized around seven guiding sutras: Trust is Fundamental; People First; Innovation over Restraint; Fairness and Equity; Accountability; robust AI infrastructure development; and effective governance implementation. These principles are entirely advisory rather than mandatory — there are no penalties, registration requirements, or compliance deadlines attached to them.
The framework introduces three new institutional bodies to coordinate India’s AI governance approach. The AI Governance Group provides cross-ministerial coordination across federal departments. The Technology and Policy Expert Committee offers technical guidance on emerging AI risks and recommends policy responses. The IndiaAI Safety Institute, announced by Minister for Electronics and IT Ashwini Vaishnaw on January 30, 2025, conducts testing, evaluation, and safety research on AI systems.
The Safety Institute has selected eight projects in its first funding round, addressing critical technical challenges including machine unlearning, bias detection and mitigation, privacy-preserving machine learning techniques, model explainability, automated auditing tools, and governance testing frameworks. The Institute collaborates with academia, startups, industry, and government ministries, with an explicit mandate to focus on indigenous research contextualized to India’s social, economic, cultural, and linguistic diversity.
Who’s Affected
AI developers and deployers in India face no new binding obligations under these guidelines. Companies operating in sectors already covered by existing regulations — such as financial services under the Reserve Bank of India, telecommunications under TRAI, or healthcare under the Medical Device Rules — continue to follow those rules, which may impose requirements relevant to AI deployment within their respective domains.
International AI companies benefit from India’s light-touch approach, which imposes no registration requirements, conformity assessments, or mandatory risk classifications for AI systems. However, the Digital Personal Data Protection Act, enacted in 2023, applies to AI systems that process the digital personal data of individuals in India and carries its own compliance obligations, including consent requirements and potential government restrictions on cross-border data transfers.
What’s Next
MeitY is expected to issue sector-specific guidance building on the seven sutras throughout 2026, starting with healthcare and financial services. The IndiaAI Safety Institute will publish findings from its initial eight research projects, which could inform future policy decisions on whether voluntary principles are sufficient or binding requirements are needed. India has not set a timeline for revisiting the question of dedicated AI legislation, and the current government has given no indication that binding rules are forthcoming in the near term.
Related Reading
- UK AI Policy: Pro-Innovation Approach With AI Bill Expected in 2026
- Japan AI Promotion Act: Innovation-First Regulation Without Penalties
- Singapore Leads on Agentic AI Governance With World-First Framework
- China AI Regulation: The World’s Most Layered AI Governance Framework
- NVIDIA Tested Whether GPT-5 Could Control a Robot — It Failed at Every Basic Task Without Human Help