- AI Readiness: Objects in the Mirror Are Closer Than They Appear
- Finance and AI Readiness: Speaking in Facts
- Sales, Marketing, and AI Readiness: Voice of the Customer
- Product Development and AI Readiness: From Widgets to Intelligence
- Operations and AI Readiness: The Discipline You Can’t Skip
- AI Readiness Assessment: What Does Good Look Like?
Most AI readiness assessments measure technology. The AI Readiness Assessment measures your organization – across Five Building Blocks, with honest self-evaluation and real benchmarks.
If you’ve followed this series from the beginning, you’ve seen the argument build piece by piece. AI readiness isn’t a technology initiative – it’s an organizational capability. Finance speaks in facts. Sales and Marketing brings the customer voice. Product Development drives the shift from widgets to intelligence. Operations provides the discipline that keeps everything grounded. Each functional area contributes something specific and irreplaceable.
But knowing that every department has a role doesn’t tell you where you actually stand. How ready is your organization, really? Not in the abstract, feel-good sense of “we’re making progress.” In the concrete, measurable sense of: where are we strong, where are we exposed, and what should we work on next? That’s the question this article is built around. And it starts with the five words I come back to more than any others: *What Does Good Look Like?*
It’s a simple question, but it changes everything. Because most organizations don’t have a clear picture of what “good” looks like for AI readiness. They have vendor assessments that tell them whether they’re ready for a specific platform. They have maturity models that check boxes on infrastructure, data pipelines, and governance policies. But nobody is asking whether the organization – the people, the processes, the institutional knowledge, the way departments work together – is ready to create real value with AI. That’s a different question. And answering it requires a different kind of assessment.
The Orchestration Layer
Before we get to measurement, we need to talk about the two groups we haven’t covered yet in this series: IT and executive leadership. They don’t contribute the same way Finance or Operations or Product Development do. Their role is different – it’s the orchestration layer that makes all the other contributions connect.
There’s an interesting phenomenon at companies looking to incorporate AI into their business: a leap of faith that the new stuff will be easy – an extrapolation of our experience with consumer technology, where search engines give you thousands of answers from a simple question and apps install with one click. But when things get complicated, the IT department gets called in. And not for strategic conversations about architecture or governance. It’s “calling in the local techie to fix my problem.” That positioning wastes IT’s most valuable contribution.
What IT actually brings to AI readiness is the ability to manage complexity and make the unfamiliar accessible. They know how to build prototypes and proofs of concept that make abstract ideas tangible – fast enough to test whether something works, before the organization commits to building it for real. They know how to manage the transition from quick-and-dirty experiment to structured, scalable, sustainable system – not too early (you’ll stifle innovation) and not too late (things spin out of control). And they understand the technology deeply enough to demystify it – to take the buzzwords and the vendor pitches and explain in plain terms what’s actually possible, what’s hype, and what matters for this specific business.
That prototyping capability is worth dwelling on, because it solves a problem that kills AI initiatives before they start. Most AI ideas die in the abstract – someone describes what they want, the business case gets debated, committees weigh in, and by the time everyone agrees it’s worth pursuing, the momentum is gone. IT can short-circuit that cycle by building something tangible fast. Not a production system – a working prototype that takes an abstract idea and makes it real enough that people can react to it. “Show me what you mean by predictive demand planning” is a very different conversation than “explain to me why we should invest in predictive demand planning.” When people can see and touch a working example, even a rough one, the conversation shifts from theoretical to practical. Objections become specific and solvable instead of vague and paralyzing. IT’s ability to make the abstract concrete – quickly, cheaply, and with enough fidelity to spark a real conversation – is one of the most undervalued contributions to AI readiness in most organizations.
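To make that concrete, here is the kind of throwaway sketch an IT team might put in front of the business for “predictive demand planning.” It’s a hypothetical illustration, not anyone’s production forecast – the CSV file, column names, and eight-week window are all assumptions made up for this example:

```python
# Hypothetical prototype: "show me what you mean by predictive demand planning."
# Assumes a CSV export of weekly unit sales with columns: week, sku, units_sold.
import pandas as pd


def naive_demand_forecast(csv_path: str, window_weeks: int = 8) -> pd.DataFrame:
    """Forecast next week's demand per SKU as a trailing moving average.

    Deliberately crude -- the point is to give the business something
    tangible to react to, not to ship a production model.
    """
    sales = pd.read_csv(csv_path, parse_dates=["week"])
    sales = sales.sort_values(["sku", "week"])

    # Trailing average of the last `window_weeks` observations for each SKU.
    sales["forecast_next_week"] = (
        sales.groupby("sku")["units_sold"]
        .transform(lambda s: s.rolling(window_weeks, min_periods=1).mean())
    )

    # Keep only the latest row per SKU: "here's what we'd plan for next week."
    latest = sales.groupby("sku").tail(1)
    return latest[["sku", "week", "units_sold", "forecast_next_week"]]


if __name__ == "__main__":
    print(naive_demand_forecast("weekly_sales.csv").head(10))
```

Thirty lines, an afternoon of work, and suddenly the conversation is about whether an eight-week average is the right signal for your business – a specific, solvable question instead of a vague one.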
<aside> Arthur C. Clarke once wrote that “any sufficiently advanced technology is indistinguishable from magic.” IT’s job is to turn the magic into something the rest of the organization can understand and use. </aside>
The partnership model is crucial here, and it applies to AI exactly the way it applied to data management. The business owns AI – the who, what, and why. Which processes should AI improve? Which customers should benefit? What does success look like? Those are business decisions, full stop. IT owns the how, when, and where – architecture, security, integration, and governance. They know how everything fits together technically. They know what’s possible and what’s dangerous. When these two roles are clear, AI moves forward. When they’re blurred – when IT tries to own the business decisions, or when the business tries to manage the architecture – things get messy fast.
What Executives Actually Bring
Now for the other half of the orchestration layer: executive leadership. And here I want to push back on some common misconceptions, because they’re even more damaging in the AI era than they were in the digital era.
*They don’t understand AI.* Actually, they don’t need to understand transformers or neural network architectures. They need to understand which business levers AI can pull – and most executives have a deeper, more nuanced understanding of their business levers than anyone else in the building. They know intuitively and factually how to impact key numbers on the financials, and what to pull when they want to make a noticeable change. When an executive asks a question about an AI initiative that seems out of left field, they’re trying to connect “why should I invest in this?” to “how does this tie to one of my levers?” That’s not cluelessness. That’s strategic thinking.
*They aren’t paying attention to AI.* They’re weighing AI against every other strategic priority on their plate – engineering roadmaps, market expansion, operational challenges, talent retention, regulatory changes. They’ve got review sessions with every department between now and end of day. If they ask you to cut the details and get to the punch line, it’s because they’re thinking at a different altitude than you are. You’re worried about your AI pilot’s accuracy metrics. They’re thinking about making the quarter and managing shareholder expectations. Both perspectives are valid. Neither one is complete without the other.
*They resist new technology.* In my experience, executives are more technical than most people give them credit for. I’ve had executives call out technical claims that didn’t hold up, ask pointed questions about implementation costs that nobody else in the room was thinking about, and demonstrate surprisingly detailed understanding of how systems work. They don’t need the gory details. But they absolutely need to understand how a technology decision impacts cost to implement, quality of the deliverable, speed to value, and ongoing cost to maintain. Those aren’t technology questions. They’re business questions that happen to involve technology.
The executive contribution to AI readiness is orchestration. Think of it like conducting an orchestra – every section has its own part, its own expertise, its own contribution. But someone needs to hold the score and set the tempo. Someone needs to make sure Finance’s data rigor connects to Product Development’s innovation vision, that Operations’ discipline informs how AI gets deployed, that Sales and Marketing’s customer insights shape what gets built. That orchestration role doesn’t require the deepest technical expertise. It requires the broadest strategic perspective. And that’s what executives bring.
When the orchestration is missing, you can see it immediately. Finance builds data quality standards that don’t account for the unstructured data Sales and Marketing generates. Product Development launches an AI-enabled product feature without checking whether Operations can support it at scale. IT builds infrastructure for use cases that nobody in the business actually prioritized. Each department does good work in isolation, but the work doesn’t connect – and the organization ends up with a collection of AI experiments instead of a coherent AI capability. I’ve watched this happen at companies that had plenty of talent, plenty of budget, and plenty of ambition. What they didn’t have was someone holding the score – making sure that Finance’s rigor informed Product Development’s roadmap, that Operations’ constraints shaped what IT built, that Sales and Marketing’s customer insights reached the people designing the AI features. That’s not a technology gap. It’s a leadership gap. And it’s the one that matters most.
Measuring What Matters
So you’ve got every department contributing. You’ve got IT enabling the infrastructure and executive leadership orchestrating the effort. Now the practical question: how do you actually measure where you stand?
Most AI readiness assessments on the market approach this from the technology side. Do you have the data infrastructure? Do you have the talent pipeline? Do you have governance policies? These aren’t bad questions, but they miss the point. They assess whether your technology is ready for AI. They don’t assess whether your organization is ready to create value with AI. And as this entire series has argued, those are very different things.
What you need is an assessment that starts with the business – with the capabilities, the data, the institutional knowledge, and the team dynamics that determine whether AI actually delivers results. An assessment built around the Five Building Blocks: Operational Excellence, Customer Connection, Product Intelligence, Data Mastery, and Team Dynamics. Each of these dimensions maps directly to what we’ve been discussing across this series. Operations brings discipline. Sales and Marketing brings customer insight. Product Development brings innovation. Finance brings data rigor. And the way your teams communicate, learn, and adapt runs through everything.
This is the thinking behind what I call the AI Readiness Assessment – a structured way for your organization to see itself clearly across all of these dimensions. It’s not an audit. Nobody is grading you. It’s a self-evaluation, where the people closest to the work assess their own capabilities honestly, across a set of carefully designed statements that cover each Building Block.
What Does Good Look Like?
The reason I keep coming back to “What Does Good Look Like?” is that most organizations skip this question entirely. They jump straight from “we need AI” to buying tools, hiring data scientists, and launching pilots – without ever establishing what success looks like for their specific business, in their specific market, at their specific stage of maturity. It’s like starting a road trip without agreeing on the destination. You’ll drive somewhere, but whether it’s the right somewhere is anybody’s guess.
The AI Readiness Assessment is how I help organizations answer that question. It’s built around those same Five Building Blocks – but it’s not a technology audit, and it’s not a maturity model that gives you a score from one to five and sends you on your way.
What makes it different is that it’s a self-evaluation. The people closest to the work assess their own capabilities honestly – not what the strategic plan says, not what the vendor presentation promised, but where the organization actually is today. And the most valuable findings usually aren’t the scores themselves. They’re the patterns and the contradictions. When Operations rates data quality differently than Finance does, that disconnect tells you something important about shared understanding across the organization – before you’ve looked at a single data set. When Product Development rates innovation capability high but Team Dynamics rates cross-functional collaboration low, you’ve found the gap that’s going to severely hamper your next AI product initiative.
Or consider what it means when your executive team rates strategic alignment on AI as strong, but the people two levels down rate clarity of AI priorities as weak. The executives think they’ve communicated the strategy. The people doing the work think they’re guessing. That’s not a disagreement about AI – it’s a communication gap that will undermine every AI initiative you launch, because the people building and deploying the tools don’t have a clear picture of what the organization is actually trying to accomplish. The assessment doesn’t create these problems. It reveals them – and it reveals them early enough that you can address them before they become expensive failures. That’s the real value of asking “What Does Good Look Like?” It forces the honest conversation that most organizations skip in their rush to start building.
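For readers who like to see the mechanics, here is a minimal sketch of the kind of cross-check that surfaces those disconnects. Everything in it is invented for illustration – the statements, the departments, the scores, and the 1.5-point threshold are not the actual assessment instrument, just a way to show how disagreement between self-ratings can be made visible:

```python
# Illustrative sketch: flag large rating gaps between departments on the same
# assessment statement. Statements, scores, and the threshold are invented
# for illustration; the real assessment instrument differs.
from itertools import combinations

# Self-evaluation scores (1 = weak, 5 = strong), grouped by statement.
ratings = {
    "Data Mastery: our data is accurate and complete": {
        "Finance": 4.5, "Operations": 2.5, "Product Development": 3.5,
    },
    "Team Dynamics: cross-functional collaboration works well": {
        "Executive": 4.0, "Product Development": 4.0, "Operations": 2.0,
    },
}

GAP_THRESHOLD = 1.5  # how far apart two departments must be to count as a disconnect


def find_disconnects(ratings: dict) -> list[tuple[str, str, str, float]]:
    """Return (statement, dept_a, dept_b, gap) wherever two departments disagree."""
    flags = []
    for statement, scores in ratings.items():
        for (dept_a, a), (dept_b, b) in combinations(scores.items(), 2):
            gap = abs(a - b)
            if gap >= GAP_THRESHOLD:
                flags.append((statement, dept_a, dept_b, gap))
    return sorted(flags, key=lambda f: f[3], reverse=True)


for statement, dept_a, dept_b, gap in find_disconnects(ratings):
    print(f"{gap:.1f}-point gap between {dept_a} and {dept_b} on: {statement}")
```

The output isn’t a grade for either department – it’s a list of conversations the organization needs to have, ranked by how far apart the perceptions are.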
The benchmarking piece makes it concrete. Not “good” in the abstract – good for a company your size, in your sector, at your stage of digital maturity. Where are you ahead of peers? Where are you behind? And most importantly, where would targeted improvement have the highest impact on your ability to create real value with AI? That’s the output that matters – not a grade, but a map. A prioritized view of what to work on next, grounded in where you actually are and where the biggest opportunities sit.
I’ll be honest: I’m building an AI-powered version of this assessment right now, and it’s not ready yet. The framework is solid – it’s been tested across dozens of organizations. But the tooling to make it accessible, self-service, and benchmarkable at scale is still in development. If this is something you want to be part of – if you want early access when it launches – join our mailing list. I’m looking for a small group of early adopters who want to put the assessment to work in their organizations and help shape what it becomes. That’s the best way to stay connected as this takes shape.
This series started with a simple argument: AI readiness is everyone’s job. Not IT’s job. Not the AI team’s job. An organizational capability built from what every functional area uniquely contributes – facts, customer voice, product vision, operational discipline, technical enablement, and strategic orchestration.
The question isn’t whether your organization has these capabilities. It almost certainly does – they’ve been building for years, across every department, in the skills and data and institutional knowledge that your people bring to work every day. The question is whether you’re coordinating them, measuring them, and focusing them on the opportunities where AI can create real value.
What does good look like? It looks like Finance insisting on data quality before the models get built. Sales and Marketing translating AI capabilities into language that moves people to action. Product Development envisioning what AI means for what you sell, not just how you operate. Operations making sure every AI deployment survives contact with reality. IT turning the magic into infrastructure. And leadership holding the score, setting the tempo, and making sure every section is playing in the same key.
Your organization is more ready than it thinks. The capabilities are already there. The question is whether you’re willing to see them clearly, measure them honestly, and coordinate them deliberately. That’s what AI readiness actually looks like. And now you know what good looks like too.
20 April 2026