The language of AI and the EU–India sovereignty deficit (“Visions of AI” series #1)

Author: Dimitrios L. Margellos

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of their affiliated institutions. The author writes in their personal capacity.


The language around Artificial Intelligence (AI) has always been abstract, but as the technology diffuses to the general population, it can be difficult for societies to prepare for an increasingly nebulous future – even more so for policymakers. India’s vision of the “three sutras” centering on “People, Planet, and Progress”, French President Emmanuel Macron’s idea of Europe as a “safe space” for AI innovation, and Anthropic CEO Dario Amodei’s “country of geniuses in the data center” are all visions of how AI could manifest in the future. Yet, the language obfuscates the fact that those with decision-making power in the space are worlds apart when it comes to concrete visions of the AI rollout.

Juxtaposing the language employed by world leaders and tech moguls reveals diverging visions of AI, and a concerning deficit of sovereignty for the European Union (EU) and India. A more holistic view of AI sovereignty in the context of a realist turn in international relations shows how the EU and India are exposed to the United States (US) and China in both familiar and distinct ways. 

Who is afraid of safety?

For decades, AI researchers have posited that a powerful AI agent with human-level general cognitive abilities – nowadays referred to as Artificial General Intelligence (AGI) – could potentially destroy human civilisation. This rich field of research has informed the decisions of most technology leaders in the frontier AI labs, and has paradoxically led to an AI race, as tech CEOs rush to develop the very thing they seek to contain. So fierce and personal is this competition that OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei refused to shake hands during the AI Impact Summit in New Delhi.

These figures have now become household names – and for good reason. When Altman embarked on a world tour in 2023, he did so to assuage policymakers who were concerned that powerful new chatbots like ChatGPT would pose enormous risks to the wellbeing of societies. This trip, much like the rhetoric at the time, placed safety at the center – but did so from a position of power and ambition. With a first-mover advantage – amid Alphabet’s scramble to recover from the ChatGPT shock – OpenAI could afford to speak about safety, effectively stating that its product was so extraordinary that it would require government intervention.

But it was not long before OpenAI’s competitors caught up. Altman’s second world tour in 2025 was instead defined by the competition brought on by the powerful Chinese open-source model DeepSeek, which sent US AI stocks tumbling, and CEOs fundraising across the world. The “DeepSeek Moment”, as it has been named since, was a wake-up call for the West, and a do-or-die moment for the frontier AI labs. Safety would continue to dominate conversations among those involved with the development of AI, but it was no longer the top priority. It is thus no wonder that, when Silicon Valley leaders descended upon New Delhi for the AI Impact Summit in February 2026, they found that the word “safety” was long gone.

This evolution is evident in the names of the successive summits. The UK’s 2023 “AI Safety Summit” was succeeded by the one in Seoul under the banner of “Safety and Innovation”. Around the same time, many influential figures called for “AI labs to immediately pause for at least 6 months the training of AI systems”, but the frontier AI labs pushed on. Soon enough, signatories of the original open letter – like xAI CEO Elon Musk – had switched sides. By the time policymakers caught on to the fact that there was no stopping this technology, the US and China had defined the terms of the competition.

In 2025, Paris hosted the AI Action Summit, and safety was overshadowed by France’s push to establish itself as a player in the AI race. With US President Donald Trump back in office, the Europeans were scrambling to project strength amid a barrage of threats. The Summit, in retrospect, was a “missed opportunity” in the words of Amodei himself, who added that its mistakes should not be repeated in the next one. The appetite for safety was still there, but policymakers were simply unable to conjure up a sufficient response to the sweeping changes at hand.

Now, in 2026, the title of the Summit signals a tacit approval by the world that the powerful AI it once feared is just around the corner – whatever form it may take. Safety is a concern still framed in the strongest of terms, but only by those fearing nothing less than total annihilation. Meanwhile, policymakers are signalling that they are moving towards society-wide deployment, not more regulation.

Safety as a broader goal has diminished as a persuasive argument for deceleration. During the 2010s, when Google’s DeepMind was developing rapidly under conditions of extraordinary secrecy, the field was singularly focused on the safe development of the technology. Until OpenAI’s arrival, there was no reason for Google – or any other company operating in the space – to move fast. The Large Language Model (LLM) revolution then promptly kick-started the race for market dominance.

Understanding the economic implications, the US has signalled that it will not regulate the technology or impose restrictions on AI development, as it considers Chinese competition to be an existential threat. Even in its current form, the technology has the potential to be utterly transformative for society. Demis Hassabis, CEO of Google DeepMind, believes it is not unlike fire or electricity, and many in the space believe it will bring double-digit GDP growth. Faced with such economic prospects, it is hard to imagine any country letting its adversaries get there first. Safety, in this reading, is a weakness and ostensibly a resignation.

The concept of safety has thus taken on a dual meaning: civilisational destruction and world-ending consequences on the one hand, and externalities that can be managed on the other. While the word has retreated into the background of these discussions, it continues to appear in official agreements – where it rhymes with “responsibility” or “prudence” – and in the warnings of Silicon Valley elites, where it means “sanctity” of human life itself. 

Safety acts as a proxy for ambition and progress in the AI field, which is why the word’s usage can shed light on the true ambitions of different actors. Whereas policymakers in the EU have used the term as a “crutch” to slow down progress, the frontier AI labs utilise it to signal an accelerated pace of breakthroughs. A shared language does not mean that there is also a shared vision. 

European and Indian approaches to AI

This matrix of linguistic usage is particularly helpful in explaining why most accounts of the AI race omit the role of actors other than the US and China – most importantly, the EU and India. US tech companies dare to imagine a future where they can “solve all diseases”, relying on the “American light-touch innovation model” to enable them to dream even bigger. China’s centralised approach – able to funnel talent and resources to a single goal at breakneck speed – similarly provides the necessary resources and space for innovation.

The EU, by contrast, was early in seeking to protect against the externalities associated with the rapid development and rollout of AI, with the EU AI Act entering into force in August 2024. Though its instincts may have been noble, the bloc has been repeatedly castigated on the grounds that the Act hampers innovation. More recently, Peter Steinberger, the founder of OpenClaw – an AI startup that took the Internet by storm – left Europe to join OpenAI, citing “strict European regulations”.

Indeed, Europe is experiencing a crisis of confidence, and European leaders are pushing to do away with bureaucratic hurdles to boost investment and competitiveness. The European approach can thus be understood as a bridge between the liberal-minded European instinct to be cautious in the face of technological change and the economic incentives that dictate openness to innovation. President Macron’s remarks at the Summit encapsulated this balancing act: “[…] Europe is not blindly focused on regulation. Europe is a space for innovation and investment, but it is a safe space, and safe spaces win in the long run”. Whether the EU is able to find such a sweet spot remains to be seen.

By contrast, India’s chosen policy is one focused on deployment as a way to harness the power of AI, and the country aims for “Welfare for All, Happiness for All”. The language centers on the same kind of inclusion that the EU is aiming for, but the subcontinent acknowledges the immense and unique challenges ahead. The Indian government also published its own AI Governance Guidelines, stating that it seeks to “ensure that AI is not concentrated in a handful of firms or geographies, but diffused across agriculture, healthcare, education, governance, manufacturing, and climate action.” The country’s experience with technological leapfrogging and its success with massive Digital Public Infrastructure (DPI) projects have taught policymakers that the country can boldly declare itself a testing ground for tech applications – and do so at a fraction of the usual cost.

AI use is already widespread in India – with some caveats. Concretely, there are 72 million daily ChatGPT users, making India OpenAI’s largest market. Likewise, “India accounts for 5.8% of total Claude.ai use, second only to the United States”, and Anthropic’s research suggests that Indian users are seeing outsized benefits compared to the rest of the world. A focus on deployment is likely to see the number of users skyrocket, given that only 10–20% of India’s populace has thus far adopted the technology – especially since the AI giants are vying for market share in what could plausibly become the most important battlefield.

But India also faces unique challenges in this regard: linguistic constraints, biases in datasets, literacy rates, and a lack of energy infrastructure can all hamper the rollout. As such, the vision India has put forward is one focused on equitable society-wide deployment and outcome-based considerations. The underlying fear is shared by tech CEOs and world leaders alike: the digital divide could become an AI divide, in the words of Google CEO Sundar Pichai.

Indian and EU leaders ultimately strike a similar tone, with slight differences emerging in relation to their instinct to regulate technology before harms arise. The language employed by the EU and India has thus far been consistently hopeful, but fundamentally reactive. Both sides speak in terms of regulating, deploying, safeguarding, or governing – not building. Where creation is present, it has long felt abstract and foreign – though this may be changing with concrete initiatives underway. The expansive language, by contrast, is coming from the frontier AI labs themselves – and that is a danger for the EU and India.

“Country of geniuses in the data center”

During a recent interview with Dwarkesh Patel, an influential writer and podcaster in the world of technology, Anthropic CEO Dario Amodei delved into the economics of the AI rollout, and reiterated his company’s vision of where its work is headed: “I really do believe that we could have models that are a country of geniuses in the data center in one to two years.” In Amodei’s telling, AI is far from a mere tool. What the Anthropic chief is describing is a future where agents are on par with human intelligence and act (independently) in the real economy.

This may seem like a trivial distinction, but policymakers have so far considered AI to be a tool – and are regulating it as such. The focus on sectoral AI use is inconsistent with the frontier AI labs’ descriptions of agents that possess abilities beyond their training and can extend their knowledge to adjacent fields. Current AI models do operate in what has been described as a “jagged frontier, where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI”. Yet, if tech leaders are to be believed, the jagged frontier could smooth out, and a different policy approach would become necessary.

Even more concerning is that these agents will be housed in the US: a nation within a nation, governed by a private company that is itself vulnerable to the whims of an unstable administration.

It is unclear what future policymakers are “pricing in”. Under current conditions, AGI could very likely emerge in the US or China, and will be controlled by – in the case of the US – either Anthropic, OpenAI, or Alphabet. Tech CEOs like Amodei and Altman – despite differences in their approach to regulation and safety – are in accord that AGI will rake in trillions in profit. The profits these companies accumulate will purportedly be distributed in turn in a manner that benefits humanity, whether towards curing disease or in the form of a Universal Basic Income (UBI). Ultimately, it will be up to their discretion.

Policymakers across the world must ask themselves where their country fits in this picture. There is no reason to believe that the US, or China for that matter, would tacitly greenlight the global diffusion of the most powerful technology of all time. Advanced AI chips and chipmaking equipment are already subject to export curbs – an important lever in the US administration’s policy toolbox. Other components of the US AI Stack could very well follow suit.

In their Foreign Affairs essay “Geopolitics in the Age of Artificial Intelligence”, former National Security Advisor Jake Sullivan and Tal Feldman explicitly frame this as a decision that is yet to be made, even as the current administration refuses to regulate the sector: “The United States must decide whether to rely on tightly controlled proprietary models or promote open-source alternatives as a way to shape global adoption.” Perhaps even more importantly, the authors note that AI labs “would prefer to build and operate the infrastructure for large-scale training runs overseas, drawn by looser rules, cheaper energy, and additional capital”, a position that runs contrary to US national security interests.

This distinction matters for AI sovereignty, because sovereignty can be split into four dimensions: where data physically resides, who manages the infrastructure for training and inference, who owns the weights and models, and which jurisdiction governs the technology. The EU and India currently fall short on the latter two dimensions, and are seeking ways to catch up.

The UAE provides a useful comparison of what such a deficit might look like in practice, as it has partnered with OpenAI under the “OpenAI for Countries” programme. The UAE itself does not have control over the technology, which could potentially be pulled back at the discretion of the US government, with whom OpenAI is “in coordination”. Even if the weights of the model are somehow procured by third-party actors, or future models are housed in the territory, the proprietary technology remains in the hands of US actors.

Mistral AI, a French company backed by the likes of ASML and founded by former DeepMind and Meta researchers, instead wants to ensure that the technology remains in the hands of its partners. In a speech that largely flew under the radar, Mistral co-founder Arthur Mensch spoke about how the future belongs to those who own the AI themselves and can ensure business continuity. He warned against a future dominated by “four companies”, putting the dilemma in concrete terms.

The EU and India currently risk the same kind of exposure. In the EU, only Mistral can currently “punch above its weight”. Its models can be competitive with those of the frontier labs on some benchmarks – a good start. The company also positions itself as a “sovereign” AI provider with open-weight models. But Mistral mostly targets enterprises rather than the wider public, and has different ambitions than the US labs. Other European AI leaders like Aleph Alpha or Helsing are sector-specific and similarly constrained in their ambitions.

India is certainly more ambitious by comparison, but lacks the funding necessary to keep up. Sarvam AI is India’s flagship venture, but its models do not provide the same capabilities as Western ones: they are focused on Indian languages, document intelligence, and voice AI. The venture’s funding is minuscule – surpassed by Anthropic’s weekly revenue: Sarvam has raised approximately US$54 million, while Anthropic’s annualised revenue run rate reached US$14 billion in early 2026.

Indian models may be weaker than their American counterparts, but funding is picking up, and a promising (albeit late) start could confer certain advantages. New models have displayed impressive Indian-language skills, addressing a significant bottleneck in AI adoption outside Anglo-Saxon countries – one recognised by Microsoft President Brad Smith during the Summit. Yet, such capabilities are not enough to compete globally, let alone nationally – especially if English-language abilities lag behind.

Both the EU and India need to solve their sovereignty problem. Policymakers are showing a more comprehensive understanding of the space, but the actual ambitions of different actors are betrayed by the language employed in each case. 

Ways forward

India and the EU have possibly entered a race well into its final laps. OpenAI’s ChatGPT may have been released in 2022, but the fundamental research that went into these kinds of models was a decade in the making. For the EU and India to catch up, there are two immediate action points:

  1. Broaden the definition of AI sovereignty to include homemade state-of-the-art models;
  2. Identify specific bottlenecks and work together to address them.

During the 2026 AI Impact Summit, President Macron indicated that European policymakers are now moving in this direction, “choosing independence in AI model development and manufacturing”. The French President stressed that GPUs and chips translate directly into geopolitical power, adding that “no country is bound to serve only as a market where foreign companies sell the models and download the citizens’ data.” France’s prescriptions thus far address deficient but already existing components of AI sovereignty – doubling the number of AI scientists, a €9bn investment, and data centers powered by low-carbon energy. These are infrastructure and human capital plays: merely a necessary condition for achieving national control over models.

The Germany–India AI Pact, signed during the Summit, also seeks to address some of the shortcomings: bilateral cooperation on sectoral AI, shared compute infrastructure, talent mobility, AI use in public administration, and a focus on energy, health, agriculture, and smart manufacturing. The language may slowly be moving from governance to ambition, but it pales in comparison to the enthusiasm and borderline blind self-belief of the companies across the pond. 

Cooperation can bear fruit and is an encouraging sign. From new markets for neo-clouds to technological know-how for data center construction, the two sides can provide the infrastructure and bank on their talent to do the rest. Much in the same way that the EU and India complement each other’s demographics, so too could the two sides benefit from increased cooperation in the field of AI.

The EU can indeed assist in developing critical infrastructure for India. India aims to compete in the AI race in the long term amid a push to mobilise AI in its effort to become a developed country by 2047. To achieve this, it needs to build the resource-hungry infrastructure required for AI in the form of data centers, while considering the needs of its own people and of the environment. Monsoons can aid with water supply, but the gains from AI usage must justify the redirection of resources – and of alternative energy sources like nuclear power – to AI infrastructure rather than the general populace.

Short-term benefits will certainly be distributed unevenly – hence the focus on equitable AI. Policymakers are by now aware of the caveats associated with General Purpose Technologies and their effect on (marginal) productivity, and on the labour market more broadly. A young and eager population, technologically savvy and open to progress, stands ready to reap the benefits of yet another transformative technology, but it is also uniquely vulnerable.

The EU, by contrast, has been relatively slow to adopt AI, and Europeans want “more effective regulations in place, even if this slows down the development of AI”. Though it has yet to emerge as a concrete political movement, an AI backlash could soon change the policy calculus; policymakers could quickly step on the brakes, for example, should the job market begin to wobble. But this is why an expanded idea of sovereignty is important at this stage. AI is a resource-hungry technology, requiring investment across all layers to justify the short-term pain. For example, it makes little sense to invest merely in cloud infrastructure if there is nothing to host on it.

The fear, ultimately, remains the same: both the US government and the AI labs themselves could unilaterally restrict access to more powerful versions of their models – and they are likely to get there first. The ongoing power struggle between Anthropic and the US government over the use of the company’s models in highly sophisticated military and surveillance operations is indicative of the dangers of dependence on an unpredictable partner.

It will be tempting to drop expensive efforts to catch up with the US or China, but that plays directly into Washington’s hands. US policymakers are not subtle about this fact: when Sriram Krishnan, the Senior White House Policy Advisor on AI, urged allies not to abandon the US AI Stack during the Summit, he spoke in terms of “complementing” US technology. India, according to Krishnan, is a “key ally” of the US. Memories may be short in an era of constant streams of information, but it was not long ago that India had to contend with a higher tariff rate than China, the chief adversary of the US. In the new world order, economic ties are used by great powers as leverage. Soon, AI will be too.