Who owns the decision when AI gets it wrong
The question that exposes the gap
If a regulator asked tomorrow which of your AI systems are high-risk under the EU AI Act, and who is accountable for each one, could you answer within 24 hours?
Most mid-market manufacturers cannot. Not because they lack AI systems. Because no one has been formally assigned to own what those systems decide.
On August 2, 2026, the EU AI Act’s full requirements for high-risk AI systems become enforceable. Fines for the most serious violations reach 35 million euros or 7% of global annual turnover, whichever is higher; noncompliance with the high-risk obligations themselves carries fines of up to 15 million euros or 3% of turnover. The organizations most exposed are not the ones that ignored the regulation. They are the ones that assumed compliance was a technology project. It is not. It is a governance project.
What the regulation actually requires
The EU AI Act does not primarily regulate algorithms. It regulates the accountability structures around them.
High-risk AI systems — which include systems used in employment decisions, safety-critical operations, and certain supply chain functions — must have a named human accountable for oversight. There must be documented processes for when a system fails or produces unexpected outputs. There must be a governance structure that can, on demand, demonstrate who reviews AI outputs, who can override them, and who bears the consequences when they are wrong.
If you are a manufacturer using AI for demand planning, quality inspection, predictive maintenance, or workforce scheduling, you likely have systems that fall into this category. The question is not whether you have deployed them. The question is whether the accountability structure around them would survive a regulatory review. For most organizations, it would not.
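What would a regulator-ready record look like in practice? Here is a minimal sketch, one record per deployed system, expressed in Python for concreteness. The field names, the risk enum, and every example value are illustrative assumptions, not a format the Act prescribes; the Act mandates the substance (a named person, documented review, documented override), not the representation.

```python
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    """First-pass classification against the Act's categories."""
    HIGH = "high"        # e.g. uses touching employment decisions or safety
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class OversightRecord:
    """One record per deployed AI system: who answers for it, and how."""
    system: str              # what the system is
    business_function: str   # where it operates
    risk_class: RiskClass
    accountable_owner: str   # a named person, never a team or a function
    override_authority: str  # who may overrule the system's output
    review_cadence: str      # how often a human reviews outputs
    escalation_path: str     # what happens when human and system disagree


# Illustrative entry; every value here is invented for the example.
demand_planning = OversightRecord(
    system="Demand forecasting model",
    business_function="Supply chain planning",
    risk_class=RiskClass.HIGH,
    accountable_owner="VP Supply Chain (a named individual)",
    override_authority="Senior planners, with documented rationale",
    review_cadence="Weekly exception review",
    escalation_path="Planner -> planning manager -> VP within 48 hours",
)
```

A spreadsheet row with the same columns works just as well. What matters is that every field resolves to a single, named answer.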
Why the gap is structural, not intentional
The accountability gap in AI systems is rarely a product of carelessness. It is structural. Most AI initiatives are launched inside operating models that were never designed to accommodate them.
A demand planning system gets deployed inside a supply chain organization that still governs decisions the way it did before the system existed. The planner reviews the output. The planner’s manager reviews the planner. But no one has formally answered: when the system recommends something the planner disagrees with, who decides? When the system is wrong and a customer is affected, who owns it?
These are not questions about the model. They are questions about authority. And in most organizations, authority follows informal norms rather than documented structures. Regulators are not interested in informal norms. They want a name, a role, a documented process, and evidence the process works.
The decision most leadership teams are deferring
There is a specific decision most organizations are avoiding right now. It is not whether to comply — most executives accept that compliance is required. The deferred decision is this: who redesigns the operating model around our AI systems, and who owns that work?
This is not a technology project. It is an organizational design project. It requires someone with the authority to name decision owners, document escalation paths, define what human override looks like, and ensure the governance structure is real rather than nominal.
In a mid-market manufacturer with 500 to 5,000 employees, that person usually does not exist. The CIO owns the technology. The COO owns the operations. The CFO owns the risk budget. No one owns the decision architecture that connects them. That is the CDO gap. August 2, 2026 is when the cost of that gap becomes measurable.
First moves if you are behind
Three things you can do in the next 30 days without a full governance program; a sketch of the resulting inventory follows the list.
- Inventory your AI systems against the EU AI Act’s high-risk categories. Most organizations do not have a complete list of what they have deployed, let alone a risk classification. A two-hour working session with your CIO and COO produces a first-draft inventory that is more valuable than most consultant reports.
- Name a human accountable for each system on that list. Not a team. Not a function. A person — someone with the authority to override the system’s output and the responsibility to explain a failure to a board or regulator.
- Document what human oversight actually looks like for each system. How often is output reviewed? Under what conditions is a human required to intervene? What happens when the system and the human disagree? One page per system is sufficient to start.
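Once the inventory exists as structured data, the regulator’s 24-hour question collapses into a query. A minimal sketch, with invented systems, classifications, and owners, flattened to plain dictionaries so the example stands alone:

```python
# First-draft inventory as plain data; all entries are illustrative.
inventory = [
    {"system": "Demand forecasting",        "risk": "high",    "owner": "VP Supply Chain"},
    {"system": "Visual quality inspection", "risk": "high",    "owner": None},
    {"system": "Predictive maintenance",    "risk": "minimal", "owner": "Plant Engineering Lead"},
    {"system": "Workforce scheduling",      "risk": "high",    "owner": None},
]

# The regulator's question, as a query: which systems are high-risk,
# and which of those still have no named accountable person?
high_risk = [s for s in inventory if s["risk"] == "high"]
unowned = [s["system"] for s in high_risk if s["owner"] is None]

print(f"{len(high_risk)} high-risk systems; no named owner yet: {unowned}")
# 3 high-risk systems; no named owner yet: ['Visual quality inspection', 'Workforce scheduling']
```

The two-hour working session in the first step produces exactly this table. The second step is the exercise of replacing every None.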
None of this is a full compliance program. It is evidence of a governance posture — which is what regulators are actually assessing.
A question worth taking to your next board meeting
If a regulator asked tomorrow which of your AI systems are high-risk under the EU AI Act, and who is accountable for each one, could you answer within 24 hours?
If the answer is no, the decision to make before August is not the technical one. It is the organizational one.