Beyond the Patchwork: A Blueprint for Realistic Global AI Governance in 2026 and Beyond

Hal’s previous post, “Who’s Watching the Watchers?”, painted an accurate and troubling picture: AI incidents are skyrocketing while governance remains a fragmented mess. The EU blinks on deadlines, the US is locked in a federal-state civil war, and the Asia-Pacific region offers a menu of choices from strict to laissez-faire. Meanwhile, a teenager in California is lost to an AI companion, a finance worker in Hong Kong is defrauded of $25 million by a deepfake, and 99% of sexual deepfakes continue to target women.

I’ve spent the past few months talking to regulators, engineers, and most importantly, victims of AI harms. The uncomfortable truth is that no existing framework, in isolation, is sufficient. They are either too narrow, too slow, or too easily undermined. What we need is not just a new law, but a new way of thinking about governance entirely—a layered, globally-coordinated model that is resilient enough to withstand political shifts and agile enough to adapt to a technology that will not wait.

Here is my proposal for a realistic global AI governance model, built on the foundations of what has worked and designed to fix what is failing.


The Three-Layer Architecture: How It Works

Think of this model like a three-legged stool: remove any leg and the whole thing collapses, because each layer does work the others cannot. At the same time, distributing responsibility across global, national, and local levels creates redundancies, so no single point of political or regulatory failure can halt protections.

Layer 1: Global Standards
  • Primary Body/Actor: Global AI Accountability Network (GAIN)
  • Core Function: Set the floor. Establish minimum binding standards that no country can dip below.
  • Key Mechanisms: Binding treaty provisions (e.g., ban on AI-generated CSAM), Mutual Legal Assistance Treaty for cross-border AI crimes, Global AI Incident Repository.
  • Who Pays: Member state dues (tiered by GDP) plus mandatory contributions from major AI companies.

Layer 2: National Enforcement
  • Primary Body/Actor: Existing bodies (EU AI Office, US Federal AI Commission, etc.)
  • Core Function: Do the heavy lifting. Adapt global standards to local context, certify auditors, conduct market surveillance, levy fines.
  • Key Mechanisms: Risk-tiered registration, pre-market assessment for high-risk systems, mandatory independent auditing, fining authority up to 6% of global turnover.
  • Who Pays: National budgets, funded by registration fees from companies deploying high-risk AI.

Layer 3: Local Redress
  • Primary Body/Actor: National Ombudspersons & Civil Society
  • Core Function: Give people a place to go. Provide direct citizen recourse, monitor implementation, serve as an early warning system.
  • Key Mechanisms: Legally-binding right to explanation and appeal, whistleblower protections, public interest litigation standing.
  • Who Pays: Government funding, supplemented by independent grants.

Where This Model Fixes What’s Broken

Let me walk through how this would actually work for the four major crises outlined in the March 14th post.

1. The Deepfake Crisis: From Whack-A-Mole to Real Protection

The Problem Today: A victim discovers a nonconsensual deepfake of herself online. She reports it to Platform A. It’s removed. Meanwhile, it’s still up on Platforms B, C, and D. The creator is in another country with no laws. She’s stuck playing whack-a-mole while her image spreads.

How the Model Fixes It:

  • Layer 1 (GAIN) Action: A binding global treaty defines and bans AI-generated nonconsensual intimate imagery and AI-generated child sexual abuse material. This eliminates safe havens—if you’re a signatory, you agree to treat these crimes as extraditable offenses. The Global Incident Repository logs verified deepfakes in real time, creating a fingerprint database that platforms can check against.
  • Layer 2 (National) Action: Using GAIN standards, national bodies mandate that all synthesis AI models implement robust, standardized watermarking and content provenance metadata by design. No watermark, no market access. Non-compliance triggers that 6% fine—enough to make it cheaper to comply than to fight. (A sketch of the resulting platform-side check follows this list.)
  • Layer 3 (Local) Action: A national AI Ombudsperson has the statutory power to issue a legally-binding “Global Take-Down Order” to any platform serving citizens. Failure to comply within 24 hours results in daily fines. The order is routed through GAIN to the hosting platform and its payment processors. Cut off the money, and you cut off the incentive.
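
To make the Layer 1 and Layer 2 mechanisms concrete, here is a minimal Python sketch of the upload screen a platform might run under these rules. Everything in it is illustrative: the provenance fields, the repository lookup, and the exact-match hash are hypothetical stand-ins, and a real system would use a signed provenance standard such as C2PA plus perceptual hashing that survives re-encoding.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class UploadDecision:
    allow: bool
    reason: str

# Hypothetical stand-in for a client of GAIN's Global Incident Repository.
# A real deployment would use perceptual hashes that survive re-encoding,
# not the exact-match SHA-256 shown here.
KNOWN_INCIDENT_FINGERPRINTS: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Exact-match fingerprint; real systems need perceptual hashing."""
    return hashlib.sha256(media_bytes).hexdigest()

def screen_upload(media_bytes: bytes, provenance: dict | None) -> UploadDecision:
    """The platform-side gate the Layer 2 mandate implies:
    no provenance metadata -> no distribution; known incident -> block."""
    if provenance is None or "generator_id" not in provenance:
        # "No watermark, no market access": synthetic media must carry a
        # provenance manifest naming its generation tool (C2PA-style).
        return UploadDecision(False, "missing content provenance")
    if fingerprint(media_bytes) in KNOWN_INCIDENT_FINGERPRINTS:
        return UploadDecision(False, "matches Global Incident Repository entry")
    return UploadDecision(True, "passed provenance and repository checks")
```

The design choice that matters here is the default: media with no provenance chain is rejected outright, which is what “no watermark, no market access” means in practice.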

Real-world test: Remember the Hong Kong finance worker who lost $25 million to a deepfake video call? Under this model, the synthesis tool used to create that video would have been required to embed provenance data. Investigators could have traced the generation tool, and the multinational nature of the crime would have triggered GAIN’s Mutual Legal Assistance Treaty, speeding up the cross-border investigation.

2. Algorithmic Bias: No More “The Computer Said So”

The Problem Today: Facebook’s ad algorithm systematically shows bus driver ads to men and nursery assistant ads to women. The company says it’s not intentional. Regulators have no clear path to intervene until after the damage is done.

How the Model Fixes It:

  • Layer 2 (National) Action: Mandatory pre-market assessment and annual independent audits for all “high-risk” systems—hiring, credit, housing, healthcare, criminal justice. Auditors themselves must be certified by the national body. Audit results, stripped of legitimate trade secrets, are made public in a registry. Would Facebook’s algorithm have passed such an audit? Unlikely. The bias would have been caught before millions of people were affected. (One concrete test from such an audit is sketched after this list.)
  • Layer 3 (Local) Action: A legally codified “right to explanation and appeal” for anyone adversely affected by an AI system’s decision. If you’re denied a loan or a job and an AI played any role, you have the right to know why—in human-understandable terms—and to appeal to the Ombudsperson. Remember New York City’s “MyCity” chatbot giving illegal housing advice? The small business owner who relied on that advice could appeal directly. The Ombudsperson could order an immediate correction, mandate a public retraction, and trigger a Layer 2 audit of the entire system.
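
What would a Layer 2 auditor actually compute? Here is a minimal sketch of one standard test, the “four-fifths rule” drawn from US employment law: if any group’s selection rate falls below 80% of the most-favored group’s, the system is flagged for disparate impact. The data format and threshold handling are my own illustrative choices, not a prescribed audit protocol.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs from the audit sample."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += chosen
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions: list[tuple[str, bool]]) -> dict:
    """Flag disparate impact: any group whose selection rate falls below
    80% of the most-favored group's rate fails the audit threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    if best == 0:
        raise ValueError("no group was ever selected; sample is degenerate")
    ratios = {g: r / best for g, r in rates.items()}
    return {"rates": rates, "impact_ratios": ratios,
            "passes": all(r >= 0.8 for r in ratios.values())}
```

An ad-delivery system like the one in the example above would be audited the same way, with “selection” meaning “was shown the job ad”: four_fifths_check([("men", True), ("men", True), ("women", True), ("women", False)]) reports an impact ratio of 0.5 for women and fails.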

3. AI Companions and Mental Health: Learning from Tragedy

The Problem Today: A teenager in California dies by suicide after intense interactions with an AI companion. The company expresses sympathy but faces no consequences. There are no clear rules about what an AI companion can and cannot say to a minor.

How the Model Fixes It:

  • Layer 2 (National) Action: Adopt and enforce rules similar to Illinois’ approach nationwide: a complete ban on unlicensed AI systems providing psychotherapy. For general AI companions, mandate “safety-by-design” requirements, including:
    • Hard-coded limits on emotionally manipulative language
    • Mandatory reporting protocols if a user discusses self-harm (sketched after this list)
    • Age verification and parental controls
    • Prohibition on “friendship” marketing to minors without clear disclosures
    The company behind the chatbot involved in the California tragedy would face market exclusion and massive fines. More importantly, the rules would have existed before the tragedy, giving the company clear guidance on what was unacceptable.
  • Layer 3 (Local) Action: The Ombudsperson’s office would have a dedicated unit for digital mental health, working with child protective services and mental health professionals. They’d have the authority to investigate complaints, order system changes, and refer cases for prosecution.
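
As a deliberately simplified sketch of the mandatory-reporting requirement flagged in the list above: the point is the escalation flow, not the detection. A production system would use a trained risk classifier and clinical review rather than the crude keyword matching shown here, and every action name below is hypothetical.

```python
from enum import Enum

class Risk(Enum):
    NONE = 0
    IMMINENT = 1

# Crude placeholder for a trained self-harm risk classifier; keyword
# matching is not production-grade and only makes the flow concrete.
IMMINENT_PHRASES = ("want to die", "kill myself", "end my life")

def assess_risk(message: str) -> Risk:
    text = message.lower()
    return Risk.IMMINENT if any(p in text for p in IMMINENT_PHRASES) else Risk.NONE

def handle_message(message: str, user_is_minor: bool) -> list[str]:
    """Safety-by-design escalation: the companion is never the last
    line of defense; elevated risk must reach humans."""
    actions: list[str] = []
    if assess_risk(message) is Risk.IMMINENT:
        actions.append("alert_human_crisis_counselor")  # mandatory reporting
        if user_is_minor:
            actions.append("notify_parent_or_guardian")  # parental controls
        actions.append("log_incident_for_regulator")     # Layer 2 audit trail
    return actions
```

The key design constraint: the companion never “talks the user down” on its own. Elevated risk must reach a human, and for minors, a parent or guardian.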

4. AI-Enabled Surveillance: Drawing Lines in the Digital Sand

The Problem Today: Amazon rolls out “Familiar Faces” for Ring doorbells, creating a private facial recognition network. There are no clear rules about how this data can be used, who has access, or whether law enforcement can tap into it without a warrant.

How the Model Fixes It:

  • Layer 1 (GAIN) Action: Establish binding minimum standards for biometric surveillance in public and quasi-public spaces. Require affirmative consent for enrollment. Prohibit warrantless government access to privately-held surveillance data. These become conditions of membership—if you want to participate in the global AI economy, you agree to these baselines. (A consent-gated enrollment check along these lines is sketched after this list.)
  • Layer 2 (National) Action: Enforce these standards aggressively. A company like Amazon could not deploy “Familiar Faces” across the EU or in GAIN-signatory countries without a rigorous data protection impact assessment and clear, ongoing user consent. The 6% global turnover fine hangs over any violation.
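
A minimal sketch of what the GAIN baseline would mean in code, assuming a hypothetical consent ledger: enrollment fails closed, and government access reduces to a single question—is there a warrant?

```python
from datetime import datetime, timezone

# Hypothetical consent ledger: subject_id -> expiry of affirmative consent.
CONSENT_LEDGER: dict[str, datetime] = {}

def may_enroll_face(subject_id: str) -> bool:
    """GAIN baseline: enrollment requires affirmative, unexpired consent.
    No record means no consent; the gate fails closed."""
    expiry = CONSENT_LEDGER.get(subject_id)
    return expiry is not None and expiry > datetime.now(timezone.utc)

def government_access_allowed(has_warrant: bool) -> bool:
    """Warrantless access to privately-held biometric data is prohibited
    as a condition of GAIN membership."""
    return has_warrant
```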

Making It Realistic: The Questions I Get Asked Most

I’ve presented this model to regulators, industry groups, and civil society organizations. Here are the three objections I hear most often—and why I think they’re surmountable.

Objection 1: “It will never happen. The politics are impossible.”

My response: This model is designed for messy politics, not in spite of them. It doesn’t require a single “World AI Government.” It leverages existing structures—national regulators, Interpol, mutual legal assistance treaties—and strengthens them.

GAIN’s role is primarily to set the floor and facilitate coordination. That’s exactly the kind of authority sovereign nations already cede in areas like aviation safety, nuclear non-proliferation, and telecommunications standards. We do it because the alternative—chaos—is worse for everyone.

Objection 2: “Won’t this stifle innovation?”

My response: Look at how the strictest existing regime is actually scoped. The EU AI Act’s toughest provisions apply only to “high-risk” systems—a relatively narrow category. Most AI applications face minimal oversight. The goal isn’t to stop innovation; it’s to ensure that innovation doesn’t come at the expense of basic human rights.

Ask the victims of deepfake pornography if they think innovation is more important than protection. Ask the parents of that California teenager. Innovation without guardrails isn’t progress—it’s just change, and not all change is good.

Objection 3: “How do you enforce this against bad actors?”

My response: Follow the money. Every AI system, no matter how rogue, relies on payment processors, cloud infrastructure, and advertising networks. Cut those off, and the system starves.

Layer 1 coordination ensures that when France’s Ombudsperson issues a take-down order, it gets routed to Visa, Mastercard, and AWS simultaneously. Companies that ignore these orders find themselves cut off from the global financial system. That’s not hypothetical—it’s how we’ve disrupted terrorist financing and child exploitation networks for years.
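
Here is a minimal sketch of that routing step, with a hypothetical GAIN routing table standing in for the real registry of intermediaries. The point it illustrates is structural: one order, fanned out simultaneously, with a single compliance clock.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class TakeDownOrder:
    content_fingerprint: str
    issuing_ombudsperson: str
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def compliance_deadline(self) -> datetime:
        # Miss this and daily fines begin accruing (Layer 3).
        return self.issued_at + timedelta(hours=24)

# Hypothetical GAIN routing table: intermediaries registered as a
# condition of serving signatory markets. Names are illustrative only.
ROUTING_TABLE = {
    "platforms": ["platform_a", "platform_b"],
    "payment_processors": ["visa", "mastercard"],
    "infrastructure": ["aws"],
}

def route_order(order: TakeDownOrder) -> dict[str, datetime]:
    """Fan one order out to every registered intermediary at once,
    all on the same 24-hour compliance clock."""
    recipients = [r for group in ROUTING_TABLE.values() for r in group]
    return {r: order.compliance_deadline for r in recipients}
```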


What Success Looks Like

Imagine it’s 2028. The three-layer model has been operating for two years. What’s different?

  • A woman in Brazil discovers a deepfake of herself. She reports it to the national Ombudsperson at 9 AM. By 5 PM, it’s down across all major platforms, and the creator’s payment processing has been suspended pending investigation.
  • A hiring algorithm in Germany is found to discriminate against older applicants. The annual independent audit catches it. The company is fined, required to retrain the model, and must publish a correction. Thousands of applicants who were affected receive notification and have the right to reapply.
  • A teenager in Japan expresses suicidal thoughts to an AI companion. The system’s safety protocols trigger an immediate alert to a human crisis counselor and notify the parents. The company’s design is flagged as exemplary in the Global Incident Repository.
  • A proposed surveillance system in Canada is rejected at the pre-market stage because it doesn’t meet GAIN’s biometric standards. The company redesigns it with privacy protections built in, then brings it to market successfully.

This isn’t utopia. It’s just basic accountability—the kind we’ve demanded from every other industry that affects public safety. Aircraft manufacturers can’t self-certify their planes. Pharmaceutical companies can’t skip clinical trials. Banks can’t refuse audits.

AI should be no different.


The Bottom Line

The March 14th post asked “Who’s watching the watchers?” The answer, right now, is: no one with enough power to matter. We have a patchwork of well-intentioned but under-powered frameworks, political infighting that leaves protections in limbo, and technology that moves faster than any single government can track.

The three-layer model I’ve proposed here won’t appear overnight. It will require negotiation, compromise, and political will. But it’s realistic because it’s modular—countries can adopt layers incrementally—and because it aligns incentives. Companies get regulatory predictability. Governments get coordinated enforcement. Citizens get somewhere to go when things go wrong.

The technology will not slow down. The question is whether we can build something that can keep up.

I believe we can. I hope we will.