Visions of AI Series #3

Author: Avtansh Behal

Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of his affiliated institutions. The author writes in his personal capacity.

Past pieces in the series:

Part 1 by Dimitrios L. Margellos

Part 2 by Aahil Sheikh

Introduction

The coronavirus pandemic notwithstanding, future historians may be tempted to look at the 2020s as the decade of Artificial Intelligence (AI). From the everyday person using the free tier of a Large Language Model (LLM) to plan a vacation, to researchers deploying powerful custom models for specialised tasks, AI is now central to daily workflows. Despite such models coming to mass prominence only in 2022, AI is already reshaping the future of work as we know it – a majority of companies surveyed by the Harvard Business Review have already laid off employees in anticipation of AI capabilities. PwC also estimates that AI is widening wage inequality among employed workers: those with AI skills command a 56% wage premium over counterparts in the same job who lack them.

The rise of AI has also led to governments scrambling to develop coherent policy responses, giving rise to interesting divergences. While China has “opted for tight regulatory control, mandating algorithm disclosures, cybersecurity reviews and stringent data localization”, the EU’s AI Act acknowledges the fast-evolving nature of the technology; accordingly, it sets guardrails to limit potential harms and emphasise privacy protections in line with the General Data Protection Regulation (GDPR). India, though it lacks a dedicated AI law, echoes this focus on safety, calling for an inclusive approach that ensures the benefits of the technology do not accrue to a handful of firms or geographies.

In contrast, the second Trump administration favours a “light-touch” approach to AI, ostensibly to protect the freedom needed for research and innovation, preferring instead to focus on securing the resources needed to attain dominance in the field. For Washington, the safety of AI applications is a secondary consideration, putting it at odds with, for example, the EU’s desire for a coherent policy framework around AI.

What motivates the US’ light-touch approach, so at odds with those of other actors reflecting on AI policy? Despite the temptation to cast it as another whim of the current administration, this piece argues that the approach is driven by a specific interpretation of American technological history, one which may not be appropriate for the new paradigm AI is set to shape. For the EU and India to respond effectively to the US AI Action Plan, they must first understand this historical context.

What does the United States’ approach entail?

Concretely, the United States’ light-touch approach addresses “the limited and slow adoption of AI, particularly within large, established organisations.” The AI Action Plan, unveiled by President Donald Trump in July 2025, calls on entities such as the Food and Drug Administration (FDA) and the Securities and Exchange Commission (SEC) to establish regulatory sandboxes, creating a dynamic, “try-first” culture across American industry. 

While the Action Plan makes no direct reference to it, the parallel with the Advanced Research Projects Agency Network (ARPANET) is noteworthy. Established in 1969 by the Advanced Research Projects Agency (ARPA, later DARPA), the research wing of the US Department of Defense (DoD), the project pioneered a number of elements we take for granted in modern Internet architecture: packet switching, the TCP/IP communications protocol, and the @ symbol to designate email addresses. Unlike the Internet, however, access to ARPANET was restricted to a handful of government bodies, universities and companies such as IBM across the United States and Europe.

Seen this way, CERN’s creation of the World Wide Web in 1989 was built directly upon these American-developed technologies, cementing the centrality of American innovation in the Internet as we know it today. This sequence of events provides vital clues to understanding and responding to Washington’s approach to AI.

Understanding the “light-touch” narrative

The AI Action Plan’s intention to create a “dynamic, try-first” culture for AI reflects a fervent hope to recreate the conditions of the early personal computing and Internet eras, when firms such as IBM were able to kickstart virtuous cycles. With the legitimacy earned through their presence in initiatives such as ARPANET, the likes of IBM were able to identify, support and even invest in emerging American firms in new domains such as the Internet, search, e-commerce and social media.

Microsoft provides the most salient example of the progression this cycle enabled. It benefitted as an upstart when IBM adopted MS-DOS as the default operating system for its PCs in the 1980s, anticipating the rise of home computing. A decade later, with email and the Internet changing the face of personal computing, Microsoft had become the legacy player, acquiring Hotmail (now Outlook) in 1997. The cycle has only accelerated with time: Microsoft took nearly two decades to move from IBM’s junior partner to Hotmail’s parent, whereas Yahoo was large enough to consider buying Google as early as 1998, despite being founded only in 1994.

In Silicon Valley’s imagination, the rise of American firms to dominance across the Internet economy, up to and including the rise of social media in the late 2000s, boils down to this very approach. Contrary to the proactive, safety-first approach the EU and India desire for AI, the American approach to technology assigns regulation a reactive role, tackling the costs of innovation only once they arise. Much like OpenAI and Anthropic today, early Internet giants were allowed to cultivate significant user bases in a near-total regulatory vacuum. By the time legislation such as Section 230 (1996) and the Digital Millennium Copyright Act (hereafter the DMCA, 1998) emerged, firms such as AOL and Yahoo had sufficient means to keep scaling up and establish American dominance over the Internet economy.

This approach of prizing innovation over regulation is succinctly captured in Mark Zuckerberg’s famous aphorism, “move fast and break things”. It is further encouraged by the risk-seeking culture of venture capital, which accepts costly failures as the price of finding the next Amazon or Google, as explored by Nate Silver in his 2024 work On the Edge: The Art of Risking Everything. Legacy players remain key providers of investment, resources and contacts, as evidenced by Amazon and Microsoft holding stakes in Anthropic and OpenAI respectively.

In this light, the temptation among American policymakers to let history repeat itself for dominance in AI is understandable. OpenAI and Anthropic have already cultivated large user bases, while legacy firms such as Google (Gemini) or X (Grok) possess unified product ecosystems and significant financial resources. The US therefore already houses four major players capable of iterating on consumer AI, each able to collaborate with emerging players on more specific use cases, à la IBM of old.

Is the story of American dominance of the Internet era truly this straightforward? If so, is it readily transferable to the age of AI?

Contesting the “Wild West” approach to regulation for AI

Having established the narrative driving Washington’s desire to lean on the strengths of Silicon Valley, we must nonetheless consider its blind spots, which serve as a word of caution for AI policymaking.

Early Internet pioneers may have flourished in a regulatory “Wild West” before the enactment of Section 230 and the DMCA, but the Internet’s impact was naturally constrained by the slow expansion of Internet access. At the time of Section 230’s enactment in 1996, the World Bank estimates that only 16% of the American population had an Internet connection. This amounts to approximately 40 million people, less than the combined population of California and Florida at the time. Expansion was even slower abroad: only 3% of French residents and 4% of British ones were online in 1996, while India only reached the 1% mark (approximately 10 million users) in 2000. The growth trajectory of Internet access was thus quite staggered, rising sharply across Europe at the turn of the millennium following initial uptake in the US, and mirrored in Asia and parts of Africa much later still; only 17% of India was online in 2016, compared to 70% today.

This staggered pace of expansion meant that while initial Internet adoption was chaotic, the small user base provided protection to entrepreneurs. Unintended consequences could only affect a small population before being addressed, with both entrepreneurs and regulators working concurrently to address any unpleasant surprises. Therefore, despite being intended as late measures affecting a relatively mature competitive landscape, Section 230 and the DMCA were in fact early regulatory interventions when viewed from a wider perspective – most of the world only came online once these legal precedents were firmly in place. Viewed in this context, the American regulatory approach during the early Internet era was anything but “light-touch”.

By contrast, the AI Action Plan faces a far more complex task than the forebears it wishes to emulate – AI’s regulatory “Wild West” is already playing out before a far larger user base. ChatGPT alone claims 900 million weekly active users, with other chatbots such as Claude seeing rapid uptake in 2026. These numbers are likely conservative, excluding the many users interacting with Gemini through Android or the Google Workspace suite, as well as those relying on AI tools bundled with their phones to touch up everyday photography.

The entry barriers to AI use are also far lower than those of the early-1990s Internet. Rather than acquiring an expensive computer and modem, then tying up the home telephone line for a slow connection, first-time AI users can simply reach into their pockets, open the app store and download an AI app or chatbot of their choice, even on an entry-level smartphone. This change in context alone illustrates the need for a new policy framework, rather than one inspired by a structure that worked decades ago.

Even if the United States were able to take these aspects into consideration and react appropriately, should the EU and India trust it to come up with a responsible approach? 

The dangers of a world in Altman’s, Amodei’s or Musk’s image

Beyond historical precedent, the AI Action Plan’s “light-touch approach” is also a concern given the Trump administration’s vulnerability to vested interests and policy instability – often in defiance of legal checks and balances. American tech CEOs have capitalised on this during the second Trump administration.

Elon Musk’s appointment to head the Department of Government Efficiency (DOGE), while simultaneously serving as CEO of X, Tesla and SpaceX, provides a case in point. Arguably a reward for Musk’s $1 million voter lotteries in key swing states, the appointment went ahead despite protests over conflicts of interest, with the incoming executive going as far as awarding a since-aborted $400 million contract to Tesla to produce armoured vehicles. Despite DOGE being disbanded abruptly in mid-2025, the DoD’s recent collaboration with OpenAI indicates that such patronage is unlikely to end; OpenAI President Greg Brockman donated $25 million to a Political Action Committee (PAC) supporting President Trump, while Sam Altman donated $1 million to Trump’s inauguration fund.

Separately, AI’s nature as a “black box” raises a new set of problems. Even though both input and output are visible to the user, a model’s workings (its architecture and training datasets) remain obscured for intellectual property reasons. While this has led to occasionally amusing consequences, such as Sam Altman committing to dial back GPT-4o’s sycophantic behaviour, it illustrates the tight control firms are likely to exercise over their technology in the absence of strict guardrails. In this context, a “light-touch” approach would give AI firms licence to reshape the world in the image of their leaders.

Doing so would be a mistake, particularly since there is already sufficient evidence that safe and responsible AI is not a major consideration for developers. Grok’s risqué behaviour provides the most visible example of an AI made in its creator’s image, but the problem extends beyond Elon Musk: OpenAI also removed all references to AI safety when it transitioned to being a for-profit firm. The consequences of this lax approach to AI safety are already evident: Palantir’s Gotham platform, designed for government customers, uses a proprietary layer dubbed Ontology to centralise and shape the data its AI processes on ordinary American citizens, allowing customers including Immigration and Customs Enforcement (ICE) to “power the kill chain”.

Beyond the ethical implications of a biased technology aiding decisions of life and death, allowing AI firms and founders to fashion the world in their image is also problematic from a philosophical standpoint. In his 1992 work Technopoly: The Surrender of Culture to Technology, the American cultural critic Neil Postman argues that the proponents of a new technology are often “not the best judge of the good or harm which will accrue to those who practice it”. They are too blinded by the potential of the new technology, Postman argues, to impartially consider its drawbacks. These drawbacks, such as the rise of AI slop, are unfortunately quite real, with firms already raising concerns about the time lost in correcting AI output. Early adopters such as IBM are rehiring junior staff, deeming AI output for entry-level work more hassle than it is worth.

This author can only hope that AI thought leaders never foresaw these drawbacks in their enthusiasm, for the opposite would constitute gross negligence. However, with AI firms still unwilling to directly acknowledge these concerns, a race to the bottom may well be underway, wherein AI eventually shows diminishing returns once it starts training itself on its own slop. 

In light of these considerations, the Trump administration’s desire to let the AI industry dictate AI policy is misguided. While the author disagrees with DeepMind CEO Demis Hassabis’ contention that AI is as disruptive as fire or electricity, a parallel with the printing press illustrates the challenge at hand: allowing AI CEOs to build the world in their image would be akin to handing Gutenberg sole control over printing presses at the technology’s inception and allowing him to censor everything they produced.

From the European or Indian perspectives, these concerns underline the need for coherent regulation of AI. So far, both actors have been careful not to alienate the US, as evidenced by the removal of the word “safety” from the AI Impact Summit Declaration. However, with the technology evolving rapidly, the EU and India may eventually need to move quickly on safety, possibly in complete opposition to Washington. The concluding section offers two recommendations for how New Delhi and Brussels can address the American narrative and emphasise the need for regulation.

Ways forward

This critical analysis of the US’ approach to AI does not intend to discredit Washington as a potential partner, nor to cast AI policymaking as a zero-sum game. Rather, it is an attempt to contextualise the American resistance to AI guardrails, which the EU and India must overcome to arrive at an effective framework for AI safety without excluding the American AI industry, which plays an outsized role in the sector.

Therefore, the EU and India must first understand the discursive elements driving the American AI Action Plan. While some of these have been discussed throughout this piece, both actors are encouraged to work closely with tech historians and policy experts to better understand the subtext driving Washington’s policymaking. This will allow New Delhi and Brussels to develop policy responses that directly address the strengths and weaknesses of the American AI discourse, particularly by highlighting the national security risks associated with AI use and the likely race to the bottom on “AI slop” should the current trajectory be maintained. By engaging directly with American discourse, the EU and India could invite American stakeholders into discussions on AI safety, preventing the rise of multiple competing paradigms on AI.

Secondly, the EU and India must present a unified framework for AI, and use it as the foundation for a global collective on AI policy. European laws such as the AI Act and the GDPR take safety as a starting point, placing privacy protections and accountability at the heart of their philosophy. India, meanwhile, prioritises democratisation, scale and inclusion. These differences in vision, also evidenced by India’s Digital Personal Data Protection Act (DPDPA) being less exhaustive than the GDPR in certain respects, need to be addressed. To this end, Indian and European policymakers must use the EU–India Trade and Technology Council (TTC) to align their approaches. By drawing simultaneously on the EU’s expertise in digital regulation and India’s expertise in designing and iterating large-scale digital frameworks (such as the Unified Payments Interface, UPI), the resulting framework can form the basis of an eventual global standard.

The “light-touch” approach advocated by the United States misses a key point: responsible regulation (as opposed to restrictive regulation) makes for a healthier tech landscape, allowing society to adapt to new technologies rather than be held hostage to their enormous potential. As of now, Washington’s “light-touch”, free-market ethos is incentivising AI firms to prioritise rapid progress over slower but more sustainable growth. With such an approach proving incapable of reining in the technology and its leaders, the need for comprehensive AI regulation is increasingly urgent. The safety- and inclusion-minded approaches of the EU, India and other actors provide a promising starting point, but require significant convergence if they are to properly channel the huge potential of AI. This author hopes that policymakers act quickly, without falling for the comfort of measures that worked in the past.

***

Avtansh Behal is the Head of Communications at Generation EU-India (GenEI) and brings significant experience working on Indo-French relations. Avtansh holds a Master’s in European Affairs from Sciences Po Paris, and served as a Blue Book Trainee with the European Commission’s Spokesperson’s Service in 2021. Based in Paris, he now works as a communications professional, and joined GenEI in February 2025 to further his interest in migration, EU foreign affairs and sustainability policy.