- AI Readiness: Objects in the Mirror Are Closer Than They Appear
- Finance and AI Readiness: Speaking in Facts
- Sales, Marketing, and AI Readiness: Voice of the Customer
- Product Development and AI Readiness: From Widgets to Intelligence
- Operations and AI Readiness: The Discipline You Can’t Skip
- AI Readiness Assessment: What Does Good Look Like?
Operations brings the lean discipline that AI implementations desperately need. Waste elimination, metrics culture, and daily standups matter more than algorithms.
There’s a line I’ve been using for longer than I care to admit: “when you automate a mess, you get an automated mess.” It was true when we were implementing ERP systems. It was true when we were rolling out warehouse management platforms. And it’s even more true with AI, because AI doesn’t just automate your process. It amplifies it.
That distinction matters. An automated mess replicates your problems at machine speed – the same errors, the same inefficiencies, just faster. An amplified mess does something worse. AI learns from your process, finds the patterns in it (including the bad ones), and optimizes around them. If your inventory logic is sound, AI will make your supply chain extraordinarily responsive. If your inventory logic is a pile of workarounds that nobody has cleaned up since the last ERP migration, AI will optimize the workarounds – and you’ll get results that are impressively wrong, delivered with tremendous confidence.
Operations knows this. Not because they’ve studied AI, but because they’ve spent decades learning that discipline is what makes technology work. Lean principles, waste elimination, management by metrics, the daily rigor of running processes that perform reliably under real-world conditions – these aren’t just manufacturing concepts. They’re the quality control that AI implementations need and almost never get. Operations brings the discipline you can’t skip. And most organizations are skipping it.
The Lean Principles AI Needs
The specific types of waste targeted by lean manufacturing – overproduction, wait time, transportation, extra processing, unnecessary inventory, excess motion, defects – have been the foundation of operational discipline for decades. Operations teams know how to spot them, measure them, and systematically eliminate them. What most organizations haven’t realized is that every one of these waste categories has a direct parallel in AI deployment.
Overproduction: building AI features and models that nobody asked for and nobody uses. It’s remarkably easy to do – the technology is exciting, the data science team wants to show what’s possible, and before you know it you’ve built a sophisticated recommendation engine for a process that three people use twice a month. I’ve seen this happen more than once – a team spends months building a beautiful predictive model, presents it to the business unit, and gets polite nods followed by silence. Nobody asked for it. Nobody’s workflow changes because of it. The model sits there, technically impressive and practically irrelevant, consuming compute resources and maintenance attention that could have gone toward something people actually need. Operations knows how to ask the basic questions that prevent this: what’s the demand signal? Who needs this, how often, and what decision does it change? If you can’t answer those questions before you start building, you’re manufacturing inventory nobody ordered.
Wait time: AI projects stall constantly because the data isn’t ready. It’s in the wrong format, it’s in the wrong system, it hasn’t been cleaned, the ownership is disputed. Operations teams understand throughput and bottlenecks – they know how to map a process, find where things get stuck, and redesign the flow so that materials (or in this case, data) move smoothly from one stage to the next.
Defects: in AI terms, these are the hallucinations, the bad recommendations, the outputs that look plausible but are wrong. And unlike a defective part that you can see and pull off the line, a defective AI output often looks perfectly normal. The recommendation is formatted correctly, the confidence score is high, the dashboard shows green. But the answer is wrong – and without someone who understands the process well enough to know it’s wrong, it goes downstream and creates problems that are harder to trace back to the source. Operations has a deeply ingrained quality mindset – not just catching defects but building processes that prevent them. Root cause analysis, statistical process control, the discipline of measuring defect rates and driving them down systematically. When a manufacturing line starts producing out-of-spec parts, operations doesn’t just pull the bad ones. They stop, figure out why the process drifted, and fix the root cause. That’s exactly the skill set AI model monitoring needs – not just flagging bad outputs after the fact, but building the feedback loops and process controls that catch drift before it becomes a quality problem.
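To make the statistical-process-control parallel concrete, here is a minimal sketch of a Shewhart-style control chart applied to a model's daily error rate. Everything in it is illustrative – the numbers, the three-sigma rule, and the idea of logging one defect rate per day are assumptions, not a reference to any particular monitoring tool:

```python
# Sketch: a Shewhart-style control chart on a model's daily error rate.
# All data and thresholds here are illustrative, not from a real system.
from statistics import mean, stdev

def control_limits(baseline_rates, sigmas=3.0):
    """Derive the center line and upper control limit from a stable baseline."""
    center = mean(baseline_rates)
    return center, center + sigmas * stdev(baseline_rates)

def flag_drift(daily_rates, upper):
    """Return (day_index, rate) pairs where the error rate breaches the limit."""
    return [(day, rate) for day, rate in enumerate(daily_rates) if rate > upper]

# Baseline: two stable weeks of daily error rates (made-up numbers)
baseline = [0.020, 0.022, 0.019, 0.021, 0.020, 0.023, 0.018,
            0.021, 0.020, 0.022, 0.019, 0.021, 0.020, 0.022]
center, upper = control_limits(baseline)

# Recent week: the model starts drifting mid-week
recent = [0.021, 0.022, 0.035, 0.041, 0.044]
alerts = flag_drift(recent, upper)
# Each alert is a prompt for root cause analysis, not an automatic rollback:
# did the process drift, did the data drift, or did a sensor go bad?
```

The point of the chart is the operations habit it encodes: a breach triggers an investigation into why the process drifted, exactly as it would on a manufacturing line.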
The Pareto principle applies here too. Your operations team is probably the most skilled group in the organization at understanding that 80% of the value comes from 20% of the effort. When everyone wants to do everything with AI – and the vendor pitches make everything sound equally urgent – Operations can apply the same focus discipline that keeps a manufacturing floor productive: identify the critical few, do those well, and ruthlessly deprioritize the rest. Time and attention are the scarcest resources in any transformation, and Operations knows how to protect them.
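The "critical few" discipline can be sketched in a few lines. The use-case names and value estimates below are hypothetical; the mechanism is just a cumulative-value cut at 80%:

```python
# Sketch of a Pareto cut over candidate AI use cases.
# Use-case names and value scores are invented for illustration.

def pareto_cut(candidates, threshold=0.8):
    """Return the smallest set of items covering `threshold` of total value."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(candidates.values())
    chosen, running = [], 0.0
    for name, value in ranked:
        chosen.append(name)
        running += value
        if running >= threshold * total:
            break
    return chosen

use_cases = {
    "demand_forecasting": 50,
    "predictive_maintenance": 30,
    "chat_assistant": 8,
    "doc_summarization": 7,
    "recommendation_engine": 5,
}
critical_few = pareto_cut(use_cases)
# The critical few absorb the team's attention; the rest are deliberately deferred.
```

The hard part, of course, is not the arithmetic but agreeing on the value estimates – which is where the demand-signal questions from the overproduction discussion come back in.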
The Metrics Culture
Here’s something I find fascinating about effective operations teams: they’ve been doing data-driven decision-making for years, often with minimal technology. Walk into a well-run manufacturing floor and you’ll see visual management boards with daily metrics tracked by hand. Charts showing quality rates, throughput, safety incidents – updated with markers, not software. Morning huddles where cross-functional teams review yesterday’s performance and set today’s priorities using nothing more sophisticated than a whiteboard and a conversation.
This matters for AI readiness because it reveals something important: the discipline of managing by metrics precedes the technology. The technology makes it faster and more powerful, but the foundational skill – knowing what to measure, knowing what a healthy number looks like, knowing when to trust the data and when something doesn’t feel right even if the numbers say otherwise – comes from experience, not from software.
AI implementations are desperate for this kind of judgment. A predictive maintenance model produces an alert. Is it real or is it noise? An AI demand forecast disagrees with what the planner expects. Do you trust the model or your gut? These questions can’t be answered by the data science team alone, because they require domain knowledge – the kind of intuitive understanding of process behavior that experienced operators have built over years of watching their metrics, correlating patterns, and knowing where the data tells a clear story and where it needs human interpretation.
Here’s where it gets interesting. Say a predictive maintenance model flags a bearing for replacement on a machine that the maintenance lead inspected last week and judged to be fine. The model says replace it. The lead’s experience says it’s got months of life left. Who’s right? The answer isn’t always the model, and it isn’t always the human – but the process for resolving the disagreement is what matters. An operations team with a strong metrics culture doesn’t treat this as model-versus-gut. They treat it as a data point that needs investigation. What’s the model seeing that the lead isn’t? Is it picking up a vibration pattern that’s invisible to a visual inspection? Or is it reacting to a sensor anomaly that the lead knows happens every time the ambient temperature drops? That conversation – the back-and-forth between what the model knows and what the operator knows – is where AI actually gets better. And operations teams are already wired for exactly this kind of disciplined disagreement, because they’ve been reconciling instrument readings with human observation for decades. The organizations that treat model outputs as either infallible or useless miss the point entirely. The good ones build the habit of investigating the gap.
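One way to turn "disciplined disagreement" into a routine is simply to log every model-versus-operator conflict and triage the high-confidence ones for investigation. The sketch below is hypothetical – the field names and the confidence-based triage rule are assumptions, not a prescribed workflow:

```python
# Illustrative sketch: record model-vs-operator disagreements for
# investigation instead of letting either side win by default.
# Field names and the triage rule are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Disagreement:
    asset: str
    model_says: str       # e.g. "replace bearing"
    operator_says: str    # e.g. "months of life left"
    model_confidence: float
    notes: str = ""

@dataclass
class DisagreementLog:
    entries: list = field(default_factory=list)

    def record(self, d: Disagreement):
        self.entries.append(d)

    def to_investigate(self, min_confidence=0.7):
        # High-confidence disagreements are the interesting ones: either
        # the model sees something the operator can't (a vibration pattern),
        # or a sensor anomaly is fooling it. Both deserve a conversation.
        return [d for d in self.entries if d.model_confidence >= min_confidence]

log = DisagreementLog()
log.record(Disagreement("press_14_bearing", "replace", "fine for months", 0.91))
log.record(Disagreement("conveyor_3_motor", "replace", "fine", 0.45))
queue = log.to_investigate()
# Only the high-confidence conflict makes the investigation queue.
```

The log itself is less important than the habit it forces: every entry becomes a back-and-forth between what the model knows and what the operator knows.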
Operations teams also understand something that most AI implementations learn the hard way: metrics need context. A number without context is just a number. Operations people don’t just report metrics – they use them to change behavior, to focus attention, to drive conversations about what’s working and what isn’t. That culture of using data to make better decisions every day – not annually, not quarterly, but daily – is the foundation that AI performance monitoring needs to be built on.
Direct Collaboration at the Point of Impact
One of the standout practices from lean manufacturing is the daily standup – and it’s worth talking about, because it illustrates a broader principle that AI deployment keeps getting wrong.
As the day begins, cross-functional teams get together to review hot items, follow up on yesterday’s priorities, and make sure everyone is focused on the critical customers and shipments for today. No technology has to be involved. The focus is on simple charts and open conversation among the team. It’s direct collaboration at the point of impact – the people closest to the work, talking to each other about the work, making decisions in real time.
The tech world adopted this practice through agile methodology and thinks it invented it. Operations has been doing it for decades, and there’s a reason it works: it creates a feedback loop that’s short enough to actually matter. Problems surface within hours, not weeks. Priorities adjust based on reality, not last month’s plan. And the people making the decisions are the ones who understand the context.
AI deployment needs exactly this kind of rapid feedback loop, and it rarely has one. Most AI implementations follow a build-deploy-monitor cycle where “monitor” means checking a dashboard once a week. Operations knows that real monitoring happens at the point of impact, in conversation, every day. When a model starts drifting, who notices first? Not the data scientist reviewing accuracy metrics in a Jupyter notebook. It’s the floor supervisor who says “these recommendations haven’t made sense since Tuesday.” That kind of ground-truth feedback is the most valuable quality signal an AI system can have – and operations teams are already wired to provide it.
The broader principle is simple: AI works better when the people closest to the process are involved in evaluating its outputs. Operations teams already have the structure, the habits, and the cultural permission to do this. The daily standup isn’t just a manufacturing practice. It’s a model for how AI governance should work at the operational level.
The Reality-Testing Ground
There’s a reason Operations is article five of six in this series, and not article two. Everything the other departments contribute to AI readiness – Finance’s data rigor, S&M’s customer insight, Product Development’s innovation vision – eventually has to survive contact with the physical world. And Operations is where that contact happens.
AI recommendations meet physical constraints on the shop floor. Predictive models meet human factors in the warehouse. Demand forecasts meet supply chain chaos in logistics. Operations is the place where theory becomes practice, where elegant algorithms encounter messy reality, and where you find out whether your AI actually works or just looks good in a test environment.
The examples are endless, and they’re often things a data scientist would never think to test for. An AI-optimized warehouse picking route that looks brilliant on screen but requires a forklift to cross a fire lane. A demand forecast that doesn’t account for the fact that your plant shuts down for two weeks every July for maintenance – because that shutdown isn’t in the training data, it’s in the institutional knowledge of the people who run the building. A quality prediction model that performs beautifully on historical data but falls apart when the raw material supplier changes their formulation slightly, because the model was trained on consistency that no longer exists. These aren’t edge cases. They’re the daily reality of running operations in the physical world, where conditions change in ways that data alone can’t capture. Operations people catch these things because they live in the gap between what the system thinks is happening and what’s actually happening on the floor – and they’ve developed a healthy skepticism about any tool that claims to understand their process better than they do.
This isn’t a passive role. Operations doesn’t just receive AI outputs and comply. Good operations teams test, challenge, and improve them. They find the edge cases the data scientists missed. They identify the scenarios where the model’s assumptions don’t hold. They provide the feedback that makes the next iteration better. And they do all of this with a speed and directness that most other parts of the organization can’t match, because operational feedback cycles are measured in hours and days, not quarters.
The lean principle of “respect for people” applies here in a way that most AI deployments ignore. Don’t just deploy a tool and expect people to adapt. Understand the human workflow. Watch how experienced operators actually make decisions – not how the process document says they should. Design the AI to support that workflow, not to replace it with something the data science team thinks is more efficient. Operations people know this instinctively, because they’ve been on the receiving end of technology deployments that ignored how work actually gets done. That experience – the hard-earned understanding of what it takes to make technology work in the real world – is what makes Operations’ contribution to AI readiness irreplaceable.
When you automate a mess, you get an automated mess. When you apply AI to a mess, you get an amplified mess – faster, more confident, and harder to untangle. Operations is the team that prevents this, not through technology but through discipline. The waste elimination instinct. The metrics culture. The daily standup feedback loop. The reality-testing that separates working AI from impressive demos.
In the first article of this series, we argued that AI readiness is built from every functional area’s contributions. Finance speaks in facts. Sales and Marketing brings the customer voice. Product Development brings the vision. Operations brings the discipline – the non-negotiable rigor that makes everything else actually work. Next and last: how you measure all of it, and what “good” actually looks like.
If you’re finding this series useful, there’s more where it came from. We write regularly about AI strategy, digital transformation, and the practical realities of making technology work in organizations that build real things. Join our mailing list and we’ll keep the conversation going.
Related Articles
- How AI Fits into Lean Six Sigma – Holweg and Davenport explore how AI tools are accelerating continuous improvement methods that operations teams have practiced for decades
- Frontline Leadership in Manufacturing’s AI Adoption – PwC and the Manufacturing Institute find that AI adoption on the factory floor depends less on technology than on frontline leadership readiness
- Agentic and Gen AI in Operations – McKinsey’s collection on applying generative AI across the operations value chain, with emphasis on deploying AI as a transformation rather than a technology project
Recommended Books
- Don’t Think So Much – Jim MacLennan’s field guide to making technology decisions with clarity and confidence, drawn from decades of CIO experience across manufacturing and consumer products
- The Toyota Way, Second Edition – Jeffrey Liker’s definitive guide to the 14 management principles behind Toyota’s operational excellence, updated with insights on how lean thinking applies to modern digital transformation
- Lean On! Evolution of Operations Excellence with Digital Transformation – Mohit Gupta bridges lean manufacturing principles with digital transformation through stories from Tesla, Amazon, and other companies navigating the intersection of operational discipline and new technology
20 April, 2026





