“A human reviewed it.” That sentence is becoming the most legally hollow phrase in corporate governance.
- grimmeljohnathan

When AI tools enter the decision process, accountability does not disappear. In practice, it often compresses onto the person who approved the output, even when that person had only limited time to review it, little visibility into how it was generated, and no clear basis for challenging it.
This is closely related to what Madeleine Elish described as the “moral crumple zone.” The concept emerged in the context of aviation and autonomous systems, but it now applies just as clearly to knowledge work. When a system-level failure occurs, responsibility tends to settle on the human closest to the final output, regardless of how much real control they had over the process that produced it.
MIT Sloan Management Review argued in April 2026 that traditional, single-point models of accountability are starting to break down as decisions are shaped across combinations of people, systems, and models. Their proposed alternative, “narrative responsibility,” is useful because it treats accountability as something that must be mapped across the full chain: model owners, deployers, reviewers, and decision-makers.
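To make that mapping concrete, here is a minimal sketch of what an accountability chain could look like as a data structure. The role names, fields, and example values are illustrative assumptions for this post, not anything MIT Sloan Management Review prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class AccountabilityEntry:
    """One link in the chain of responsibility for an AI-influenced decision."""
    role: str            # e.g. "model owner", "deployer", "reviewer", "decision-maker"
    party: str           # the named individual or team
    responsibility: str  # what this party is answerable for

@dataclass
class DecisionAccountabilityMap:
    """Accountability mapped across the full chain, not pinned to a single point."""
    decision_id: str
    chain: list[AccountabilityEntry] = field(default_factory=list)

# Hypothetical example: accountability for one AI-assisted hiring screen.
hiring_screen = DecisionAccountabilityMap(
    decision_id="HIRE-0042",
    chain=[
        AccountabilityEntry("model owner", "ML Platform team", "model quality and known failure modes"),
        AccountabilityEntry("deployer", "HR Operations", "fit of the model to this use case"),
        AccountabilityEntry("reviewer", "J. Smith", "substantive review of the recommendation"),
        AccountabilityEntry("decision-maker", "Hiring manager", "the final call and its rationale"),
    ],
)
```

The point of writing it down this way is that no single entry carries the whole weight: if the map has only one link, you are back to a moral crumple zone.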
A 2025 controlled experiment with 556 participants adds an important wrinkle. Simply disclosing an algorithm's accuracy (a standard responsible AI practice) increased conformity to the system's recommendations, including in cases where the algorithm was consistently wrong. Transparency matters, but transparency alone does not produce sound judgment.
The practical question for leaders is straightforward: if an AI-influenced decision went wrong tomorrow, could your organization reconstruct what the system recommended, what the human changed, and why the final decision was made? If not, the governance gap is already here.
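As one hedged illustration of that reconstruction test, the sketch below logs exactly the three things the question asks for: what the system recommended, what the human changed, and why the final decision was made. The schema, field names, and append-only JSON-lines log are assumptions chosen for simplicity, not a standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Enough context to reconstruct an AI-influenced decision after the fact."""
    decision_id: str
    model_version: str      # which system produced the recommendation
    ai_recommendation: str  # what the system recommended, verbatim
    human_changes: str      # what the reviewer changed (or "none")
    rationale: str          # why the final decision was made
    final_decision: str
    decided_by: str
    decided_at: str

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record to an append-only JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example: a reviewer overriding a credit-risk recommendation.
record = DecisionRecord(
    decision_id="CR-1187",
    model_version="risk-scorer v3.2",
    ai_recommendation="decline: score 0.81 above threshold 0.75",
    human_changes="approved at reduced limit; score was driven by a stale address flag",
    rationale="underlying data point was outdated; corrected record scores 0.52",
    final_decision="approve at reduced limit",
    decided_by="credit officer A. Rivera",
    decided_at=datetime.now(timezone.utc).isoformat(),
)
log_decision(record)
```

Nothing about this is sophisticated, and that is the point: if your organization cannot produce even a record this simple for yesterday's AI-influenced decisions, "a human reviewed it" is an assertion, not evidence.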


