Integrated AI Insights

Deployed Is Not Directed. The Gap Your AI Rollout Didn't Close.


Deploying AI is not the same as directing it. Most industrial rollouts have the system prompt and the governance. Here's what they're missing.

Anna Sandholm, who writes about AI on Substack, ran a practical experiment last week. She gave eight AI tools the same brief and documented what happened.

ChatGPT, after three years of shared history with her business, came back with familiar answers. Audits. Templates. Diagnostics. Things she already offered. The tool that knew her best had learned her well enough to optimise for what it already knew, not what the brief actually asked for.

That is worth sitting with for a moment, because this is not a Substack writer's problem. It is a structural problem with how AI tools behave inside organisations that have been using them long enough to condition them. The manufacturing reader knows this territory: AI deployed, teams using it, outputs flowing. Nobody checking whether the tool has quietly learned to produce comfortable, familiar, low-friction responses rather than operationally useful ones. The gap between rollout and actual capability does not announce itself. It accumulates. And it is not visible until you measure it.

The distinction that most industrial deployments are missing

There are three different things that shape what an AI tool does inside an organisation. They are not interchangeable. Most deployments treat them as if they are.

The system prompt is the foundational instruction layer. It is set at configuration and it defines what the tool is permitted to do, how it is expected to behave, what guardrails are in place. It is infrastructure. A well-constructed system prompt is necessary. It is not sufficient.

The policy document tells people what they are allowed to ask. It is a compliance instrument. It protects the organisation. It does not direct the work.

The brief is the task-level instruction. It is the thing that tells the tool what to actually do in this instance, for this purpose, with this constraint. It is where the operational knowledge lives. It is also the thing that almost no industrial organisation has a practice for.
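To make the three layers concrete, here is a minimal sketch of where each one enters a typical chat-style AI call. All of the names below (SYSTEM_PROMPT, POLICY_ALLOWED_TOPICS, build_request) are illustrative assumptions, not taken from any real deployment; the point is only that the system prompt is fixed at configuration, the policy gates what may be asked, and the brief arrives fresh with every task.

```python
# Hypothetical sketch of the three instruction layers. None of these names
# come from a real deployment; they show where each layer enters a call.

SYSTEM_PROMPT = (
    # Infrastructure layer: set once at configuration. Permissions,
    # expected behaviour, guardrails.
    "You are an assistant for maintenance planning. Do not invent asset data."
)

# Compliance layer: what people are allowed to ask about.
POLICY_ALLOWED_TOPICS = {"work orders", "scheduling", "spare parts"}


def build_request(brief: str, topic: str) -> list[dict]:
    """Compose a chat request. The brief is supplied at call time, per task."""
    if topic not in POLICY_ALLOWED_TOPICS:
        raise ValueError(f"Policy does not permit topic: {topic}")
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # configuration layer
        {"role": "user", "content": brief},            # task-level direction
    ]
```

Notice what the sketch makes visible: the first two layers can be audited as artefacts, but the quality of the `brief` argument is decided anew by whoever types it, every single time.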

Why the gap is invisible until it isn't

The system prompt and the policy are visible artefacts. They exist as documents. Organisations can point to them when asked about their AI governance. They are legible evidence of responsible deployment.

The brief is invisible because it lives in the moment of use. It is the instruction typed into the chat window, or not typed, or typed badly. It is entirely dependent on the capability of the person using the tool, a capability that varies enormously and is almost never measured.

The result is that organisations can have strong governance and weak practice simultaneously. The system prompt is well-constructed. The policy is comprehensive. And the outputs being produced are subtly wrong in ways that nobody is catching, because the outputs look right.

What a practice failure looks like in an industrial context

Consider a maintenance planning scenario. A team is using an AI tool to assist with work order prioritisation. The team member opens the tool and types a brief.

The brief does not include the asset age. It does not include the last service date. It does not flag the recent parts substitution, or the tolerance requirement that differs from the standard. The output will look right. It will be formatted correctly. It will use the right language. It will be wrong in the ways that matter.

This is not a technology failure. It is a practice failure. And it is widespread.
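The practice failure above can be caught with something as simple as a brief completeness check before the request ever reaches the tool. This is a hedged sketch, not a prescription: the field names (asset_age, last_service_date, and so on) are assumptions drawn from the scenario, and a real operation would substitute its own required context.

```python
# Minimal sketch of a brief-completeness check for the maintenance scenario.
# The required fields are illustrative assumptions, not a standard.

REQUIRED_CONTEXT = [
    "asset_age",
    "last_service_date",
    "parts_substitutions",
    "tolerance_requirements",
]


def missing_context(brief_fields: dict) -> list[str]:
    """Return the required context fields that are absent or empty."""
    return [f for f in REQUIRED_CONTEXT if not brief_fields.get(f)]


# The brief from the scenario: task stated, operational context omitted.
gaps = missing_context({"task": "prioritise open work orders"})
# gaps now lists every field the output would silently do without.
```

The check does not make the brief good; it makes the gap measurable, which is the point of the next section.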

If you have deployed and you are not measuring, the gap is already there

The gap between a deployed AI tool and a directed AI tool is not visible in the rollout metrics. Adoption rates, usage frequency, time saved on standard tasks: none of those numbers tells you whether the tool is being briefed well enough to do the work.

The only way to see the gap is to test against a standard. To ask not just whether your teams are using AI, but whether they are directing it.

Most organisations discover, when they look at it clearly, that the practice is thinner than the rollout suggested.

Know where your operation actually stands.

The Industrial AI Readiness Diagnostic takes 8 minutes. It's free. It gives you a scored view across four operational pillars and tells you which one is your current constraint.

Take the Diagnostic