A thought experiment

Imagine you hire an assistant. You give them instructions, review their work, and publish it under your name. One piece wins an award. Another gets you sued.

In the first case, nobody questions your ownership. In the second, would any court accept "My assistant did it, not me" as a defense?

Of course not. The legal principle is ancient and universal. In Roman law: qui sentit commodum sentire debet et onus — whoever enjoys the benefit must also bear the burden. In common law: respondeat superior. In everyday language: you can't have it both ways.

Yet with AI, we try to have it both ways every single day.

The asymmetry is measurable

This isn't a philosophical observation. It's a testable hypothesis. Using formal argumentation theory — the same mathematical frameworks used in computational logic and legal reasoning — you can structure the competing claims about AI ownership and accountability as a rigorous argument graph.

When you do, something interesting emerges. You can encode 16 distinct arguments drawn from published legal scholarship, court rulings, and philosophical traditions. Seven argue for strong human ownership of AI output. Six argue against full human accountability. Three bridge the gap between them.

The arguments attack each other. That's fine; disagreement is expected. But when you compute what survives every attack (what argumentation theorists call the "grounded extension"), five principles emerge that no argument in the framework defeats:

  • Consistent Attribution. Claiming ownership of beneficial AI output while denying accountability for harmful AI output is a logically incoherent position.
  • Meaningful Human Control. The threshold for "I own this" and the threshold for "I'm responsible for this" must be one and the same; you can't set them independently (formalized in the sketch just after this list).
  • Proportional Accountability. Responsibility scales with the degree of human direction and the foreseeability of risk.
  • Traceability. If you can't trace AI output back to human decisions, you can neither own it nor be held accountable for it.
  • Non-Waivable Accountability. You cannot contractually disclaim liability for foreseeable harms to people who never agreed to your terms.
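
The first two principles compress into one constraint. In notation of my own (a sketch, not the framework's official formalization): for a person h, an output x, a degree-of-control measure c(h, x), and a single threshold τ,

$$\mathrm{Owns}(h, x) \iff c(h, x) \ge \tau \iff \mathrm{Accountable}(h, x)$$

Consistent Attribution says the two outer predicates never come apart; Meaningful Human Control says a single τ governs both sides.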

These aren't opinions. They're the mathematical minimum consensus — the arguments that survive every possible attack in the framework.
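
To make that computation concrete, here is a minimal sketch of Dung-style grounded semantics in Python. The toy graph and argument names are invented for illustration; the actual 16-argument graph and its attack relation are in the technical materials.

```python
# Minimal sketch of Dung's grounded semantics. The toy graph below is
# illustrative, not the framework's actual 16-argument graph.
from typing import Dict, Set

def grounded_extension(attacks: Dict[str, Set[str]]) -> Set[str]:
    """Least fixed point of F(S) = {a : every attacker of a is attacked
    by some member of S}, computed by iterating F from the empty set."""
    args = set(attacks)
    attackers = {a: {b for b in args if a in attacks[b]} for a in args}
    s: Set[str] = set()
    while True:
        nxt = {a for a in args
               if all(any(att in attacks[d] for d in s) for att in attackers[a])}
        if nxt == s:
            return s
        s = nxt

# Toy graph: two unattacked arguments jointly defeat a "deny
# accountability" argument, which in turn attacks a bridging argument.
toy = {
    "CONSISTENT-ATTRIBUTION": {"DENY-ACCOUNTABILITY"},
    "OWNERSHIP-CLAIM": {"DENY-ACCOUNTABILITY"},
    "DENY-ACCOUNTABILITY": {"BRIDGE"},
    "BRIDGE": set(),
}
print(sorted(grounded_extension(toy)))
# ['BRIDGE', 'CONSISTENT-ATTRIBUTION', 'OWNERSHIP-CLAIM']
```

Because the characteristic function is monotone, the iteration always converges to a unique result, which is why the grounded extension can honestly be called a minimum consensus rather than one defensible position among many.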

The Research Question

Can the ownership-accountability asymmetry be demonstrated rigorously in human behavior toward AI? And if so, can formal argumentation frameworks predict how courts and regulators will resolve it?

The real world already confirms the paradox

The framework was stress-tested against four landmark cases. It predicted the outcome of every single one.

Thaler v. Perlmutter (US, 2025)
AI cannot be an author. The D.C. Circuit found six statutory provisions that only make sense if "author" means a human being. But the court deliberately left open the harder question of works made with AI.
Zarya of the Dawn (US, 2023)
A graphic novel using Midjourney received partial copyright — for the human-authored text and arrangement, but not for individual AI-generated images. The Copyright Office decided users lack sufficient control over the output.
Li v. Liu (China, 2023)
On functionally identical facts (extensive prompting of a diffusion model), the Beijing Internet Court reached the opposite conclusion: 150+ prompts constituted "personalized expression." The same behavior, different jurisdiction, different answer.
Taylor Swift Deepfakes (2024)
AI-generated explicit images went viral. Nobody claimed ownership. But the accountability question — who is responsible? — triggered 68 new state laws in a single year. The asymmetry in action.

The pattern is consistent: when output is desirable, humans rush to claim credit. When output is harmful, they retreat behind the technology. The framework's prediction engine correctly identified which arguments would activate in each case, which extensions would form, and which way the decision would go.
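
A hedged sketch of that prediction step follows; the graph, activation sets, and argument names are invented stand-ins. The method itself is simple: restrict the framework to the arguments a case's facts activate, then recompute the grounded extension of the sub-framework.

```python
from typing import Dict, Set

def grounded_extension(attacks: Dict[str, Set[str]]) -> Set[str]:
    # Same fixed-point computation as in the earlier sketch, repeated
    # here so this snippet runs on its own.
    args, s = set(attacks), set()
    attackers = {a: {b for b in args if a in attacks[b]} for a in args}
    while True:
        nxt = {a for a in args
               if all(any(x in attacks[d] for d in s) for x in attackers[a])}
        if nxt == s:
            return s
        s = nxt

def predict(attacks: Dict[str, Set[str]], active: Set[str]) -> Set[str]:
    """Restrict the framework to the arguments a case activates,
    then compute the grounded extension of that sub-framework."""
    sub = {a: attacks[a] & active for a in active}
    return grounded_extension(sub)

# Hypothetical encoding of a harmful-output case: "the human lacked
# control" would defeat accountability, but the facts show extensive
# human direction, so that argument never activates. Accountability
# survives and defeats the liability disclaimer.
graph = {
    "NO-CONTROL": {"ACCOUNTABILITY"},
    "ACCOUNTABILITY": {"DISCLAIMER"},
    "DISCLAIMER": set(),
}
print(predict(graph, {"ACCOUNTABILITY", "DISCLAIMER"}))  # {'ACCOUNTABILITY'}
```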

The Policy Question

If the threshold for ownership and the threshold for accountability must logically be the same — why does every current governance framework treat them as separate problems with separate answers?

Why this matters right now

The EU AI Act entered into force in August 2024. It's the most comprehensive AI regulation in history. It classifies systems by risk, mandates conformity assessments, and establishes penalties. But it treats ownership and accountability as separate regulatory tracks.

The United States has no federal AI law. Over 1,200 state bills were introduced in 2025 alone. Patent examiners, copyright specialists, and federal judges are independently constructing AI governance through thousands of discretionary decisions, with no coordination among them.

Meanwhile, 87% of organizations claim to have AI governance frameworks. Fewer than 25% have actually operationalized them. Ethics teams are being disbanded. The gap between what we say about AI accountability and what we do about it has never been wider.

The ownership-accountability asymmetry isn't just a logical puzzle. It's the structural flaw underneath all of this. Until policy frameworks recognize that these two questions are actually one question, we'll keep building governance systems that contradict themselves at the foundation.

What comes next

A behavioral experiment is being designed to measure the asymmetry directly. Four scenarios, two conditions each. Hold the AI interaction constant, vary only whether the outcome is positive or negative, and measure how people's ownership claims and accountability acceptance diverge.

If the asymmetry index is significantly greater than zero (that is, if people systematically claim more ownership than accountability for the same level of AI involvement), it would confirm what the formal framework predicts and what the case law already suggests.
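
A sketch of the planned analysis, assuming 1-to-7 rating scales and a one-sided test; the data below are simulated for illustration and are not study results.

```python
# Simulated illustration of the asymmetry index. Sample size, scale,
# and effect size are assumptions, not findings from the experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 80  # hypothetical participants

# Each participant rates the same AI interaction; only outcome valence
# varies. Ratings are on a 1-7 scale (simulated here).
ownership = np.clip(rng.normal(5.5, 1.0, n), 1, 7)       # "this is mine"
accountability = np.clip(rng.normal(4.2, 1.2, n), 1, 7)  # "I'm responsible"

# Asymmetry index: positive values mean more ownership is claimed than
# accountability accepted for the same level of AI involvement.
index = ownership - accountability

# One-sided one-sample t-test: is the mean index greater than zero?
t, p = stats.ttest_1samp(index, 0.0, alternative="greater")
print(f"mean index = {index.mean():.2f}, t = {t:.2f}, p = {p:.4g}")
```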

The full technical framework, including the argumentation graph, the formal proofs, and the case analysis, is available for review. A companion white paper examining AI governance through the lens of public policy implementation theory — applying Matland, Lipsky, Sabatier, and Kingdon to the current regulatory landscape — accompanies this research.

Sometimes the hardest problems in AI aren't about the technology.

They're about what we're willing to admit about ourselves.

The companion white paper analyzing the AI governance implementation crisis through public policy theory is available below.

Download: AI Governance in 2026 — White Paper (PDF) →