India FocusShikhar Burman·13 March 2026·16 min read

Who Controls the Future? AI, Geopolitical Power, and India's Position in the Race That Will Define the Century

The US and China are in an AI arms race with no equivalent in modern history. Semiconductor export controls. DeepSeek's cost revolution. OpenAI's Pentagon contract. Anthropic's constitutional limits. India's AI economy at USD 9.51 billion (NASSCOM) and projected to grow to USD 130 billion. This is the geopolitics of AI — who is winning, what the stakes are, and where India fits in a world being restructured by the most transformative technology ever built.

There are moments in history when a specific technology becomes so strategically consequential that its development and control become inseparable from the question of who holds power in the world. The steam engine. Nuclear weapons. The internet. Artificial intelligence is that technology now — and the geopolitical competition to control it is unlike anything the post-war international order has previously experienced. It is faster, more technically complex, more deeply entangled with commercial activity, and more explicitly connected to military capability than any previous technology race.

The outlines of this race are visible in the decisions that have made headlines in early 2026: the US blacklisting Anthropic as a defence supply chain risk when the company refused to remove safety constraints from its military-deployed models. OpenAI entering a Pentagon contract with different constraints. China warning against 'unrestricted application of AI by the military.' DeepSeek optimising for Huawei and Cambricon chips to reduce Nvidia dependence. Iran attacking AWS data centres in the UAE to blind AI-dependent US military systems. These are not isolated events. They are points on the same curve: the curve of AI capability becoming indistinguishable from geopolitical power.

The US-China AI Divide: Where the Race Actually Stands

The conventional wisdom in 2024 was that the US was two to three years ahead of China in frontier AI development. That estimate, credible at the time, has been complicated by DeepSeek's emergence. DeepSeek V3 — trained for approximately $5.6 million, a fraction of what US frontier model training costs — achieved performance matching GPT-4 class models across most benchmarks. It was trained on Nvidia H800 chips, the deliberately throttled export-compliant variant of the H100, demonstrating that US semiconductor export controls had slowed but not closed the capability gap.
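That headline cost figure is easy to sanity-check. A minimal back-of-envelope sketch, using DeepSeek's publicly reported GPU-hour total for the V3 training run and the $2-per-GPU-hour rental rate used in its own accounting (actual cloud pricing varies, and the figure excludes research and prior experiments):

```python
# Back-of-envelope check on the reported ~$5.6M training cost.
# GPU-hour total and $2/hr rate are DeepSeek's published accounting
# assumptions; this is an illustration, not an independent audit.
GPU_HOURS = 2.788e6   # reported H800 GPU-hours for the full V3 training run
RATE_USD = 2.0        # assumed rental cost per H800 GPU-hour

cost_usd = GPU_HOURS * RATE_USD
print(f"estimated training cost: ${cost_usd / 1e6:.2f}M")
```

The point of the arithmetic is not the exact dollar figure but the order of magnitude: tens of millions of GPU-hours on restricted-performance hardware, not the hundreds of millions of dollars associated with US frontier runs.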

Anthropic and OpenAI have alleged that DeepSeek conducted industrial-scale distillation — querying Claude and GPT APIs millions of times through fraudulent accounts to extract model capabilities and use the outputs as training data. If true, this represents a novel form of technology transfer that no export control regime currently addresses effectively. China's 'intelligentized warfare' doctrine — detailed in the RAND report 'China's AI Arsenal' published March 7, 2026 — describes a vision of military decision-making that is structurally dependent on AI at every layer, from strategic planning to tactical execution, with a stated goal of closing and eventually surpassing the US military AI advantage.

The AI 2027 research document, which models a scenario of near-term AGI, notes: 'In China, the CCP is starting to feel the AGI. Chip export controls and lack of government support have left China under-resourced compared to the West.' If that assessment is correct — and it is contested — the US leads. If DeepSeek's architectural innovations can be carried forward to V4 scale and released as Apache 2.0 open weights, the competitive landscape becomes far more complex.

The Semiconductor Chokepoint: Why Chips Are More Strategic Than Oil

The US government's semiconductor export controls — which restrict the sale of Nvidia A100, H100, and H200 chips to China — are the most consequential economic policy decision in the AI race. These chips are the physical substrate of frontier AI training: you cannot train a GPT-5 class model without access to tens of thousands of them. The export controls are an attempt to maintain a compute advantage that translates directly into a capability advantage.

The countermeasures are multiple and ongoing. Huawei's Ascend chip series has improved substantially and is now sufficient for training and inference on models like DeepSeek V3. Cambricon, Biren, and other Chinese chip designers are advancing. Most significantly, architectural innovations like DeepSeek's mixture-of-experts (MoE) approach reduce the compute required per unit of capability — meaning that the US compute advantage translates into a smaller capability advantage than it would in a world of dense-transformer scaling alone.
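The compute arithmetic behind that point can be sketched with the standard rule of thumb that transformer training compute is roughly 6 × parameters × tokens, where for an MoE model the relevant count is the parameters active per token. The figures below are DeepSeek V3's publicly reported totals; the comparison is illustrative, not a precise accounting:

```python
# Why MoE shrinks the compute bill: only a fraction of parameters are
# active per token, and training FLOPs scale with ACTIVE parameters.
# Rule of thumb for transformers: FLOPs ~= 6 * N_active * tokens.

def training_flops(active_params: float, tokens: float) -> float:
    return 6 * active_params * tokens

TOKENS = 14.8e12        # reported V3 training tokens
TOTAL_PARAMS = 671e9    # total parameters (what a dense model would activate)
ACTIVE_PARAMS = 37e9    # parameters active per token in the MoE design

dense_flops = training_flops(TOTAL_PARAMS, TOKENS)  # dense-equivalent cost
moe_flops = training_flops(ACTIVE_PARAMS, TOKENS)   # actual MoE cost

print(f"dense-equivalent: {dense_flops:.2e} FLOPs")
print(f"MoE actual:       {moe_flops:.2e} FLOPs")
print(f"compute saving:   {dense_flops / moe_flops:.1f}x")
```

An order-of-magnitude compute saving from architecture alone is why export controls calibrated to raw chip performance buy less strategic advantage than their designers assumed.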

For India, the semiconductor situation is both an opportunity and an exposure. India currently has no domestic capability to manufacture advanced logic chips — the most strategically sensitive semiconductor category. The government's semiconductor incentive programme — committing approximately $10 billion to attract semiconductor manufacturing investment — is a meaningful policy commitment, but building from zero to frontier manufacturing capability takes a decade, not a political cycle. In the near term, India's AI development depends on access to chips manufactured in Taiwan (TSMC), South Korea (Samsung), or the US — all of which involve geopolitical dependencies that have become newly visible in the current moment.

The Big Tech Power Structure: Four Companies and What They Represent

The frontier AI landscape in March 2026 is structured around four private companies that, collectively, have more influence over AI development direction than any government body or international institution: OpenAI, Anthropic, Google DeepMind, and xAI. This concentration of power in private entities — each with different ownership structures, governance models, safety philosophies, and commercial incentives — is unprecedented in the history of transformative technology development.

OpenAI began as a non-profit, converted to a 'capped profit' structure, and is now completing a transition to a standard for-profit corporation after a period of intense internal conflict. It has entered a Pentagon contract and is increasingly integrated into Microsoft's commercial cloud infrastructure. Anthropic was founded by former OpenAI researchers specifically over concerns about safety at OpenAI; its Constitutional AI approach and its refusal to allow its models to be used for autonomous weapons systems have put it in direct conflict with the current US administration. Google DeepMind is a division of Alphabet, the most valuable advertising company in history; its Gemini family is advancing rapidly and DeepMind's AlphaFold protein structure work has already demonstrated transformative real-world impact. xAI is Elon Musk's vehicle for building AGI, with full access to X's data and Musk's personal commitment to building 'maximally truth-seeking' AI — a formulation that in practice means fewer safety constraints than Anthropic's approach.

India in the AI Geopolitical Landscape: Assets and Liabilities

India's AI position in March 2026 is characterised by a real but underutilised set of assets and a set of structural liabilities that honest analysis must acknowledge. The assets: the largest pool of English-speaking technical talent in the world; an AI adoption rate that has made India the largest generative AI app market by downloads; a domestic AI economy valued at USD 9.51 billion in 2024 and projected to reach USD 130 billion by 2032 (NASSCOM); a 1.4 billion person market that is large enough to generate the data and application diversity needed to build competitive domain-specific AI products; and a democratic governance structure that makes India a more credible AI partner for Western nations than most alternatives.

The liabilities: no domestic frontier model lab capable of competing with OpenAI, Anthropic, or DeepMind. No domestic semiconductor manufacturing capability. A regulatory environment that moves slowly relative to AI development speed. An IT services industry that built India's software reputation on precisely the task categories most exposed to AI displacement. And a political environment in which the governance of AI has not yet generated the kind of focused policy attention that the United States and China are both giving it.

The policy decisions India makes in the next two to three years — on semiconductor investment, on AI research funding, on the regulatory framework for AI deployment, on the education system's capacity to produce AI-fluent graduates at scale — will have consequences that extend far beyond any individual political cycle. They will determine whether India is a significant player in the technology that will define the 21st century or a consumer and service provider for others who are.

The geopolitical AI race has a direct parallel at the individual level: the students and professionals who develop genuine, multi-model AI fluency in 2026 are positioning themselves for the world being built — not the world that already exists. LumiChats provides access to models from the full spectrum of the competitive landscape: Claude Opus 4.6 from Anthropic, GPT-5.4 from OpenAI, Gemini 3 Pro from Google DeepMind, Grok from xAI, DeepSeek from China's AI ecosystem, Qwen from Alibaba, and Mistral from Europe. Understanding how these models differ — in capability, in character, in the values embedded in their training — is itself a form of geopolitical literacy. At ₹69/day with 40+ models and 5 million tokens of daily context, it is an education no university course yet provides.

The Governance Gap: Why Rules Are Losing the Race

The most important thing a student of international relations, political science, or public policy can understand about AI geopolitics in 2026 is the governance gap. The capabilities being developed are running years ahead of the international frameworks, domestic regulations, and shared norms that would allow them to be managed responsibly. This is not a new phenomenon in the history of transformative technology — nuclear weapons preceded meaningful arms control by decades, with the most dangerous period being the gap between initial deployment and the establishment of limiting frameworks. The question for AI is whether the governance gap will be closed before or after the most consequential misuses.

The Iran conflict has demonstrated at scale what governance researchers had been warning about in theory: that AI military capabilities will be used in conflicts because they confer decisive advantages to whoever uses them, that international discussions proceed far more slowly than capability development, and that the companies building the most capable systems — Anthropic, OpenAI — have more influence over how those capabilities are constrained than any international body. Whether that is a feature or a bug depends on your assessment of those companies' judgement. It is not a stable long-term institutional arrangement either way.

Pro Tip: To understand this topic at a depth beyond news headlines, study three frameworks: Horowitz's 'The Diffusion of Military Power' for the geopolitics of military technology adoption; Michael Kearns and Aaron Roth's 'The Ethical Algorithm' for the values embedded in AI systems; and the Epoch AI research blog for the most technically grounded analysis of compute, capability, and the competitive landscape. Then use LumiChats Study Mode to upload and synthesise these sources, building a structured understanding rather than an accumulation of disconnected facts.

Serious engagement with questions of AI geopolitics, governance, and ethics requires the ability to process large volumes of primary source material, compare expert perspectives, and generate structured arguments of your own. LumiChats Study Mode enables exactly this: upload government policy documents, research papers, think tank reports, and primary source analyses, and receive page-cited answers grounded in those specific documents. Quiz Hub tests whether you have genuinely understood the arguments. Agent Mode lets you build data visualisations and analysis tools for processing the quantitative dimensions of the AI race — FLOP counts, deployment statistics, patent filings. The full platform, at ₹1,199/month for unlimited use, is the research infrastructure for students who want to understand this moment at the depth it deserves.

Ready to study smarter?

Try LumiChats for ₹69/day

40+ AI models including Claude, GPT-5.4, and Gemini. NCERT Study Mode with page-locked answers. Pay only on days you use it.

Get Started — ₹69/day

Keep reading

More guides for AI-powered students.