What AI Needs Is an NTSB

Modern AI systems are often described as powerful, transformative, and inevitable. Much less often are they described as investigable. That omission matters more than we may yet appreciate.

When things go wrong in aviation—when an aircraft fails, when lives are lost—the response is not denial, obfuscation, or finger-pointing. Instead, a well-established process unfolds under the authority of the National Transportation Safety Board (NTSB). Data is gathered. Systems are examined. Decisions are reconstructed. Causality is determined. And the results are published—not primarily to assign blame, but to prevent recurrence.

Online services—and especially AI platforms—lack anything comparable.

That absence is becoming dangerous.


Safety Isn’t About Preventing All Harm

The NTSB model rests on a hard-earned truth:
complex systems cannot be made perfectly safe.

Aviation didn’t become safe because engineers eliminated all failure. It became safe because failure was made inspectable, traceable, and learnable. Every stakeholder—manufacturers, airlines, regulators—accepts in advance that if something goes wrong, investigators will have access to the data necessary to understand why.

This is not surveillance.
It is accountability by prior consent.

AI platforms today largely operate on the opposite assumption: that harm can be managed through disclaimers, acceptable-use policies, and post hoc distancing (“the model suggested it,” “the user chose it,” “we don’t control downstream use”). These approaches dissolve responsibility precisely when responsibility matters most.


The Oppenheimer Problem, Revisited

The ethical anxiety surrounding AI is often framed as new. It isn’t.

J. Robert Oppenheimer famously reflected that physicists had “known sin” after the atomic bomb—not because they pursued knowledge, but because knowledge was rapidly hardened into machinery without adequate moral governance.

AI now occupies a similar space. The danger is not discovery. It is deployment without accountability—what might be called injurious engineering: systems optimized for speed, scale, and efficiency that quietly eliminate deliberation, responsibility, and traceability.

The question is no longer whether AI will be used in harmful ways. It already is.
The question is whether harm will be investigable.


What the NTSB Model Offers AI

Translating the NTSB approach to AI and online services suggests several design commitments:

  1. Advance Agreement
    All participants—platform operators, third-party providers, integrators—agree before anything goes wrong that credible harm triggers an investigation.
  2. Causality Over Blame
    The purpose of investigation is to determine what happened and why, not to assign guilt or shield institutions.
  3. Privileged, Limited Access
    Investigators gain access to the logs, configurations, prompts, outputs, and timing data necessary to reconstruct events—under strict procedural safeguards (a minimal sketch of such gating follows this list).
  4. Systemic Learning
    Findings are used to improve system design and practice, not merely to punish individuals.
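
To make the third commitment concrete, here is a minimal sketch in Python of what "privileged, limited" access could look like. The Role and AccessRequest types, the artifact names, and the scope rules are illustrative assumptions of mine, not a proposed standard; the point is only that access is deny-by-default, tied to a declared incident, and bounded by role.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Illustrative roles; a real scheme would be defined by the investigating body.
class Role(Enum):
    OPERATOR = auto()
    INVESTIGATOR = auto()
    AUDITOR = auto()

@dataclass(frozen=True)
class AccessRequest:
    role: Role
    incident_id: Optional[str]  # access is exceptional: no incident, no access
    artifact: str               # e.g. "logs", "configs", "prompts", "outputs"

# What an investigator may examine, and only inside a declared incident.
INVESTIGATOR_SCOPE = {"logs", "configs", "prompts", "outputs", "timing"}

def is_permitted(req: AccessRequest) -> bool:
    """Deny by default; grant only role-scoped, incident-scoped access."""
    if req.incident_id is None:
        return False  # no routine browsing of user content
    if req.role is Role.INVESTIGATOR:
        return req.artifact in INVESTIGATOR_SCOPE
    if req.role is Role.AUDITOR:
        return req.artifact == "logs"  # auditors check the record, not content
    return False

# Example: even during an incident, an operator cannot peek at prompts.
assert not is_permitted(AccessRequest(Role.OPERATOR, "incident-42", "prompts"))
assert is_permitted(AccessRequest(Role.INVESTIGATOR, "incident-42", "prompts"))
assert not is_permitted(AccessRequest(Role.INVESTIGATOR, None, "prompts"))
```

The design choice worth noticing: the gate never asks whether access would be useful, only whether an incident has been declared and the role warrants it. That is what "privileged and limited" means in practice.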

This is how aviation turned tragedy into progress. There is no reason AI services cannot do the same—except unwillingness.


Accountability Requires Data—Not Surveillance

A common objection arises immediately: privacy.

But privacy and accountability are not opposites. They are orthogonal.

An NTSB-style model does not require continuous monitoring or routine access to user content. It requires that:

  • causal chains can be reconstructed after the fact
  • logs are preserved immutably
  • access is gated, role-based, and exceptional
  • encryption protects normal operation but does not render investigation impossible

In aviation, black boxes are not tools of surveillance. They are tools of understanding. AI systems need the moral equivalent.
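
What would such a black box look like in software? One standard building block is a hash-chained, append-only log: each record incorporates the hash of its predecessor, so any retroactive edit, deletion, or reordering is detectable. The sketch below is a minimal Python illustration of that idea; the BlackBoxLog class and its field names are hypothetical, and a real system would add signed timestamps, replicated storage, and a key-management policy to satisfy the encryption requirement above.

```python
import hashlib
import json
import time

def _digest(payload: dict) -> str:
    # Deterministic hash of a record; 'prev' inside the payload links the chain.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class BlackBoxLog:
    """Append-only, hash-chained event log: tamper-evident, not surveillance."""
    GENESIS = "0" * 64

    def __init__(self):
        self._entries = []

    def append(self, event: dict) -> None:
        prev = self._entries[-1]["hash"] if self._entries else self.GENESIS
        payload = {"ts": time.time(), "event": event, "prev": prev}
        self._entries.append({**payload, "hash": _digest(payload)})

    def verify(self) -> bool:
        # Re-walk the chain; any edited, reordered, or deleted record breaks it.
        prev = self.GENESIS
        for entry in self._entries:
            payload = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev or _digest(payload) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

# Normal operation just records events; tampering is detectable afterward.
log = BlackBoxLog()
log.append({"kind": "prompt", "request": "req-001"})
log.append({"kind": "output", "request": "req-001"})
assert log.verify()
log._entries[0]["event"]["kind"] = "edited"  # a retroactive change...
assert not log.verify()                      # ...is caught on verification
```

Note that nothing here expands what is recorded. A hash chain adds no new data about users; it only makes the existing record trustworthy after the fact. That is the difference between surveillance and accountability.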


Third-Party Providers and the Cost of Scale

This model has teeth only if it applies to everyone in the ecosystem.

That means third-party AI providers and integrators must accept accountability as a condition of participation. Encryption, proprietary systems, and commercial secrecy cannot be allowed to function as moral blindfolds.

If a provider refuses to support post-incident investigation, the conclusion should be simple and unapologetic:

This system is not suitable for accountable deployment.

Aviation does not accept engines that cannot be examined after a crash. AI should not accept platforms that cannot be examined after harm.


Why This Makes Ethical Use More Likely, Not Innovation Less

Some will argue that such requirements slow innovation. They do—just as safety requirements slowed early aviation.

But what they slow is reckless scale, not discovery.

Accountability raises the cognitive and moral cost of injurious engineering. It deters misuse not by prohibition, but by removing plausible deniability. It ensures that no one can say, with a straight face, “the system did it.”

That alone would change behavior across the industry.


Toward an Accident-Investigation Culture for AI

AI does not need more aspirational ethics statements.
It needs institutional memory.

An NTSB-style accountability model offers something rare in technology governance: a way to acknowledge fallibility without surrendering responsibility, and to learn from harm without normalizing it.

We do not need AI systems that promise perfection.
We need systems that promise they can be understood when things go wrong.

That is how safety actually scales.

