
Reflecting on India's AI Governance Guidelines

India published its national AI governance guidelines in November. I was the lead writer, working alongside a group of ten experts from government, policy and academia. We were given three months to develop a framework, which was eventually approved by a high-level advisory group chaired by the Principal Scientific Advisor and comprising secretaries from the key ministries that administer tech policy in India.

One month on, I feel there’s enough distance to reflect on what makes these guidelines distinctive, honestly engage with the criticism, and outline my hopes for the future. This essay is a personal reflection, not an official commentary. I don’t speak for the government or other members of the committee, but I do have a point of view. With that, here goes. 

A Distinctive Approach

Over the last few years, I’ve studied various AI policy documents – statutory laws, whitepapers, executive orders, action plans. In general, national AI governance frameworks fall on a spectrum: on one end lies the prescriptive, rules-based approach of the EU and China; on the other, the laissez-faire approach of the US. Most other countries sit somewhere in between.

The ‘India AI guidelines’ reflect a different approach – a “third way” – marked by distinctive choices and features. Three stand out to me. 

  1. It’s not a regulatory framework. It’s a governance framework, and an expansive one at that. It recognises that governance is not just about regulation or risk mitigation, but as much about adoption, diffusion, diplomacy and capacity building. The framework presents a holistic set of principles and practices that advance India’s strategic goals around AI, not just containment. This is a feature, not a bug – one that will likely resonate with many other countries, particularly in the Global South.
  2. It doesn’t just import a governance model from Brussels, Beijing or Washington. Instead, it is built from the ground up, based on India’s unique needs and aspirations: scaling AI for inclusive development, coordinating across a federal system, and addressing risks in the local context, such as caste bias and child safety. Where global frameworks have been referenced, they inform rather than dictate policy.
  3. It is intentionally agile, flexible and forward-looking. Where possible, it translates high-level principles into practical guidelines. Where more detail is needed, it defers to institutions such as the AI Safety Institute, and calls on businesses to proactively adopt voluntary commitments. Taken as a whole, the guidelines are a living document that will naturally evolve as the technology matures and society adapts.

Critiques and Responses 

The reactions to the guidelines have been largely positive, some even effusive. The point of this essay is not to be self-congratulatory, but to engage with valid criticism.

In this section, I’ll respond to the major critiques I’ve encountered, both in public and in private.

  1. It prioritises innovation over safety

India’s official position on AI governance has been ‘pro-innovation’ for a while now (see p. 24 of the G20 Delhi Declaration). The guidelines remain faithful to this.

One principle endorsed in these guidelines seems to have evoked particular concern – innovation over restraint. In a recent panel discussion, I was asked if this principle abandons the ‘human-centric’ view referenced elsewhere in the guidelines.  

First, this is a misreading of the principle. In full, it reads ‘responsible innovation over cautionary restraint’. Regulators are therefore advised to focus on real and present harm over ‘hypothetical’ risks. In practice, that means Indian policymakers are unlikely to endorse a moratorium on AI development out of an abundance of caution.

Second, India’s bias towards deployment means there is no room for the precautionary principle, at least at this stage. While it is possible that large-scale AI deployments for public service delivery could result in social or economic exclusion, the immediate benefits are judged to outweigh unknown harms.

Third, the phrase ‘responsible innovation’ signals that organisations that are not acting in good faith will be held accountable. Nobody gets a free pass.

  2. It’s too soft on accountability

A common critique is that the guidelines rely too much on voluntary measures. When push comes to shove, the argument goes, profits will trump the public interest. I get this point. Voluntary measures are not enforceable and are, well, optional. Yet, I think they have an important role to play in risk mitigation (I’ve written a full paper on this).

Besides, Indian officials have consistently held that the government will act swiftly to regulate the industry and penalise bad actors if and when the need arises. I read this as a strong statement of intent. Yet, I think more can and should be done.

For one, I think we need strict, binding rules to increase transparency in the AI value chain. Regulators should be empowered to ask how AI models are being developed for the Indian market and how they are impacting society. I also think there is merit in creating a new regulator for the digital sector. Right now, the Ministry of Electronics and Information Technology (MeitY) is the lead agency responsible for both promoting and regulating the AI industry. This dual role is difficult to balance, even with the best of intentions. A new, independent regulator would instil trust in the digital economy and help address AI-related cybercrime.

  3. It’s silent on some socio-economic risks

Another critique is that the guidelines provide an incomplete analysis of AI risks. For example, a discussion on the environmental and labour impact of AI is absent.

One reason I would offer is that these are societal impacts qualitatively different from the market failures and consumer harms (e.g. malicious uses of AI) that have traditionally been part of AI risk frameworks. But I will admit that it is inconsistent to adopt a broad view of governance while neglecting the broader risk landscape. I think future iterations of these guidelines should account for novel societal, geopolitical and ethical risks emerging from AI.

Further, the guidelines only present examples of risks. Actual empirical evidence of harm will emerge from regulatory studies (such as the Competition Commission of India’s study on market concentration), as well as from the human-interest stories and data collected by organisations working in the trenches. I’m confident that an India-specific risk classification framework will soon emerge from these efforts.

  4. It lacks operational details

Some note that the guidelines don’t get into sector-specific issues in areas like healthcare and finance. Others point to the lack of clarity in how the proposed “AI incident reporting” mechanism will operate. Still others say the question of liability remains unanswered.

First, the goal of the guidelines was never to provide an exhaustive list of compliance measures (that’s the job of sectoral regulators, industry bodies and in-house lawyers). Second, the AI Safety Institute, once operationalised, will help steer industry compliance. As for how the incident reporting mechanism will operate, this will emerge from academic literature and consultations between the government, industry and regulators. Finally, on the question of liability: it is an admittedly complex issue, and this report raises more questions than it answers. I think courts and future legislation will have to provide more clarity – the guidelines can only go so far.
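
To make the gap concrete, here is a minimal sketch, in Python, of the kind of structured record an incident reporting mechanism might collect. This is purely illustrative: none of these field names or categories come from the guidelines; they are my assumptions about what regulators and researchers typically want to know about an incident.

```python
# Hypothetical sketch of an AI incident report record.
# Every field here is an assumption for illustration, not a proposal
# from the guidelines.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIIncidentReport:
    reported_on: date               # date the incident was reported
    reporter_role: str              # e.g. "developer", "deployer", "user", "regulator"
    sector: str                     # e.g. "healthcare", "finance"
    system_description: str         # what the AI system does and where it is deployed
    harm_category: str              # e.g. "bias", "exclusion", "safety", "misuse"
    severity: str                   # e.g. "low", "medium", "high"
    people_affected: Optional[int] = None   # estimated number affected, if known
    mitigation_taken: Optional[str] = None  # remedial steps, if any

# Hypothetical usage:
report = AIIncidentReport(
    reported_on=date(2025, 12, 1),
    reporter_role="deployer",
    sector="finance",
    system_description="credit-scoring model used for retail loan approvals",
    harm_category="bias",
    severity="medium",
)
```

Even settling a minimal schema like this involves contested choices – who counts as a reporter, how severity is graded, what triggers a mandatory report – which is precisely why the consultations mentioned above matter.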

  5. It centralises policymaking

Some are of the view that the proposed AI Governance Group (AIGG) – an inter-agency body that will formulate and coordinate AI governance in India – will centralise policymaking. They also say that seeking inputs from the Technology & Policy Expert Committee (T-PEC) will delay decision-making.

The AIGG has been designed precisely to consolidate policymaking, because the current fragmented approach to governance was creating confusion. As for any delays that might arise from consulting the T-PEC, I would rather that be the case than dispense with the need for expertise altogether.

Finally, to those questioning the lack of formal industry representation on the AIGG, I think it’s fair to say that, given the strategic importance of AI, it was necessary to create a forum where the government could discuss sensitive matters internally. Besides, public consultations are now an integral part of the tech policy ethos in India, so excluding the industry from important decisions is highly unlikely. 

Where do we go from here?

The success of the AI governance guidelines depends on swift and thoughtful implementation. That’s why I think breaking up the Action Plan into short-, medium- and long-term phases was smart. Here’s my hope for the shape that will take.

The immediate priority is to create the scaffolding for robust governance. That means convening the AIGG and T-PEC quickly (administrative orders can take months to formalise) so there is a strong institutional framework in place. Meanwhile, the AI industry should adopt voluntary commitments in areas such as privacy, child safety, model transparency, labour transition and malicious use of deepfakes. This should happen in the next 2–3 months. Efforts to improve data and compute access are underway; they just need to continue.

In the medium term, updating the IT Act, which is now more than two decades old, will help enable innovation and create accountability. Key issues that the amendments should cover include platform classification, intermediary liability, online safety and cybersecurity. Setting up a digital regulator, as I’ve suggested earlier, should also be a priority. Consulting with industry stakeholders, getting the language right, and eventually getting the amendments passed through Parliament may take up to a year. But the work must start now. In parallel, setting standards and experimenting with sandboxes will help the local ecosystem grow faster.

One area that is often neglected is strategic thinking about the long-term impacts of AI. This exercise tends to fall through the cracks of the Eisenhower Matrix because it feels neither urgent nor important. In reality, it is both. When it comes to AI, our intuitions break down quickly, so we need to devise new tools and mental models to engage with the governance issues of the near future, such as the impact of AI on human life, livelihood and well-being. While NITI Aayog does some of this thinking, Indian policymakers need to invest much more in foresight research and simulation exercises to glean insights about the future we are headed towards. It might feel like we have the luxury of time, but the clock is already ticking…

If I had to name one thing I'll be watching for in the next 3–6 months, it is whether the AI Safety Institute is operational and issuing useful guidance. If it is, the governance scaffolding is working. If it isn't, we have a problem. My biggest concern, meanwhile, is that the voluntary commitments never materialise – that industry waits to see if the government is serious, while the government waits to see if industry will self-regulate. Eventually, it is ordinary citizens who will get hurt.