Navigating the space between Berners-Lee’s vision and today’s AI race

SUMMARY
Tim Berners-Lee, creator of the world wide web, is advocating for a CERN-like institution for AI: a neutral, global body grounded in science that would guide AI as a public good. The intensity of today's AI race between nations (geopolitics) and between corporations (competition), however, creates a climate that falls far short of his aspirations.

Why Berners-Lee looks to CERN as a model

In his September 29, 2025 Guardian essay, “Why I Gave the World Wide Web Away for Free,” Berners-Lee describes what a modern, CERN-like institution for AI should embody. He explains that when he released the world wide web for free in 1993, he did so believing that knowledge should circulate freely, benefiting everyone rather than enriching a few.

To prevent a repeat of the hyper-commercialization that has since overtaken the internet, and to ensure AI is protected as a technology for the public good, Berners-Lee looks to a CERN-like model for AI governance.

A “CERN for AI” body, according to Berners-Lee, is a place where scientists, ethicists, and policymakers work together across borders to ensure that artificial intelligence evolves as a public good, not a geopolitical weapon or commercial monopoly.

CERN, the European Organization for Nuclear Research, stands as a rare example of what global cooperation in science can achieve: neutral, open, and dedicated to collective discovery.

Berners-Lee sees the CERN-like model as critical for the AI era, where unregulated competition and profit-driven development threaten to outpace humanity’s ability to steer outcomes responsibly.

Berners-Lee’s 6 core criteria for a “CERN for AI”

Though he did not explicitly enumerate these criteria in his Guardian essay, the six elements below reflect what we understand Berners-Lee envisions for a CERN-like body for AI, based on his themes, values, and stated goals:

  • Global legitimacy and neutrality
    It must serve humanity as a whole, not any one nation or company.
  • Scientific infrastructure
    Shared labs, data, and testing environments for open AI research.
  • Regulatory authority
    The power to set global standards and ensure compliance.
  • Enforcement capacity
    Ability to monitor, audit, or restrict unsafe or unethical activity.
  • Public-good orientation
    A mandate focused on collective benefit over profit.
  • Speed and adaptability
    Agility to keep pace with rapid AI advancements.

These elements together define how such a body could balance innovation with humanity’s collective interests.

What exists today: partial steps toward the vision

Several international and national efforts are attempting to manage AI governance; each offers value, but all are limited in reach or capability to varying degrees:

  • UN AI Advisory Body
    Strengths: Global legitimacy; diverse expert membership; visible UN platform.
    Weaknesses: Advisory only; lacks enforcement power or research infrastructure.
  • Global Dialogue on AI Governance (UN)
    Strengths: Promotes cooperation across states; creates diplomatic infrastructure.
    Weaknesses: No authority to regulate or test models; lacks scientific labs or resources.
  • UNICRI Centre for AI & Robotics
    Strengths: Anchored within the UN system; expertise in security and risk domains.
    Weaknesses: Narrow mandate; not broad AI governance.
  • European AI Office (EU AI Act)
    Strengths: Regulatory teeth within the EU; structured supervision of general-purpose AI.
    Weaknesses: Limited to EU jurisdiction; not a global or scientific hub.
  • CAIRNE (European “CERN for AI” proposal)
    Strengths: Advocates for large-scale AI research infrastructure in Europe.
    Weaknesses: Still conceptual; not fully funded or implemented.
  • AI Safety Institutes (UK, US, EU, etc.)
    Strengths: Strong technical safety expertise; ability to audit and evaluate models.
    Weaknesses: Fragmented and national-level only; little global coordination or authority.
  • MAGIC (proposed global AGI consortium)
    Strengths: Bold global vision; centralizes advanced AI under shared governance.
    Weaknesses: Purely theoretical; politically and legally difficult to implement.

Across this governance landscape, critical gaps remain when measured against Berners-Lee's criteria:

  • Global legitimacy and neutrality
    Present only in UN-level bodies, which lack enforcement mechanisms.
  • Scientific infrastructure
    Exists in national safety institutes, but without global sharing or governance.
  • Regulatory authority and enforcement
    Found regionally (like in the EU AI Office), but absent globally.
  • Public-good orientation
    Shared in principle, but without operational power to guarantee it.
  • Speed and adaptability
    The fastest-moving institutions are national, not international, which creates imbalance.

Reality check time

Given the overlapping and significant challenges, the following realities make Berners-Lee’s vision difficult to achieve:

  • Fragmentation
    Efforts are scattered across nations and institutions, often duplicating rather than coordinating work.
  • Neutrality
    No existing entity is trusted equally by all major powers or insulated from corporate interests.
  • Authority & Power
    Most current bodies can advise or recommend but few can compel compliance or oversee global AI research.

In short: the world today has pieces of a CERN for AI, but no single multinational entity unifies scientific infrastructure, global legitimacy, enforcement power, and neutrality.

U.S. AI policy further complicates Berners-Lee’s vision

The current direction of U.S. AI policy makes the formation of a truly neutral global institution even more challenging.

While innovation and leadership are priorities, they currently tend to supersede or bypass collective governance and global coordination:

  • The “AI Action Plan” focuses on accelerating domestic innovation and cutting regulatory barriers; it does not currently address the need for establishing shared global oversight.
  • The Executive Order “Removing Barriers to American Leadership in AI” signals a shift toward deregulation and competition, emphasizing market freedom over international restraint.
  • Fragmented governance within the U.S.— where federal and state laws diverge — creates inconsistency even at the national level, much less internationally.

In such a landscape, national advantage and market control are prioritized over global collaboration. These conditions further impede the creation of any neutral governance model.

In closing

In the current geopolitical climate, nations are racing to claim AI leadership and corporations are engaged in fierce competition for market dominance. These realities make building a single, neutral “CERN for AI” improbable any time soon.

Still, that doesn’t mean the vision for a CERN-like body for AI should fade.

Berners-Lee’s framework gives us a model to aspire to, as well as a reminder that the long-term health of the AI ecosystem depends on trust, cooperation, and shared scientific responsibility.

In the interim, pragmatic steps toward Berners-Lee’s vision are possible

While a true CERN for AI is not in the cards today, the path forward begins with awareness, cooperation, and education.

That’s where we begin: in the space between the real and the ideal, a space that accommodates practical, smaller but important steps, including programs that foster:

  • Interoperability between national and regional AI safety bodies
  • Transparency and literacy that empower citizens and policymakers alike
  • Collaborative ethics frameworks that extend across industry and academia
  • Education and awareness that prepare societies to engage meaningfully in AI governance

Each of these steps helps ensure that AI becomes not a weapon of competition, but a tool for shared human advancement.

For further reading

Berners-Lee isn’t the only one thinking about the need for a single multinational AI entity. RAND, a nonpartisan research organization that develops public policy solutions in the public interest, published an important paper on the topic in August 2025; it is well worth reading if you share similar aspirations.

Mayra Ruiz-McPherson, PhD(c), MA, MFA
Executive Director & Founder
The CyberPsych Institute (CPI)

Empowering Minds for the AI Age