The global AI governance debate is splintering, pulled between elite safety concerns at one pole and sovereign control at the other. For much of the world, both remain hindrances to maximising the benefits of the AI revolution. India’s emerging proposition, anchoring AI governance in trust, access, and scale, offers a compelling middle ground with particular resonance across the Global South. Can India translate this vision into durable rules that others are willing to follow?
The AI governance landscape is fragmented. Despite a plethora of summits, declarations, and frameworks, convergence among national priorities remains elusive. Some of this is understandable, since AI sits at the intersection of national security, economic competitiveness, social welfare, and political values. The rest may be attributed to the broader decline of multilateralism. Much current policy thinking gravitates toward two extremes: narrow, safety-centric frameworks on one end and state control on the other. This presents India with an opportunity to shape a third, middle path, one that bridges trust with development and safety with scale, and that appeals to the largest section of humanity.
The Bletchley Declaration that emerged from the 2023 AI Safety Summit in the United Kingdom placed existential and frontier risks at the centre of the debate. It focused on cutting-edge AI capabilities being developed by a small set of actors, prioritising the prevention of catastrophic risks over mass deployment. Although China signed the declaration, its participation in subsequent processes in Seoul and Paris was visibly subdued, perhaps to avoid undermining its domestic AI governance model.