Unstructured Data and AI: The Knowledge You’ve Been Sitting On
The Data Value Chain was built for structured data. But 80% of what your organization knows is unstructured – and AI just cracked it open. This is the knowledge management breakthrough we've chased for 30 years.
The first knowledge management project I worked on was at Searle, right around the time Monsanto was absorbing the company in the mid-’90s. The vision was ambitious – capture the collective expertise of a global pharmaceutical and chemical operation, make it searchable, make it reusable, make it so that when a scientist in St. Louis solved a problem, a team in London could find that solution without reinventing it from scratch. We built taxonomies. We designed portals. We created elaborate classification systems that would organize everything anyone knew into neat, navigable categories.
It mostly didn’t work. Not because the idea was wrong – it was a genuinely good idea, and the people behind it were smart and committed. It didn’t work because we were asking thousands of busy professionals to stop what they were doing, open a system they didn’t particularly enjoy using, and type up what they knew in a format that fit our taxonomy. The friction was enormous. The payoff was distant and abstract. And so the portals slowly emptied out, the taxonomies gathered dust, and the institutional knowledge stayed exactly where it had always been – in people’s heads, in email threads, in the informal networks that actually made the organization function.
I thought about that project recently while working on something that felt eerily similar and completely different at the same time. The ambition is the same – capture organizational knowledge and make it usable. But the technology has changed so fundamentally that the old barriers don’t apply in the same way. And that got me thinking about a blind spot in the Data Value Chain framework I’ve been writing about in this series – a pretty significant one, actually.
The Blind Spot in the Chain
The Data Value Chain I’ve been describing in this series – Insight, Architect, Generate, Store, Process, Analyze, Present – was built for a specific kind of data. Structured data. Rows and columns. Transactions in an ERP system. Sales figures by region. Inventory levels by SKU. Sensor readings from a production line. The kind of data that fits neatly into a database schema, with defined fields and predictable formats.
That’s the data we’ve been organizing, analyzing, and arguing about for decades, and the framework holds up well for it. But here’s the blind spot: structured data represents maybe 20% of what an organization actually knows. The other 80% is unstructured – emails, meeting transcripts, support tickets, Slack conversations, project post-mortems, the casual hallway exchange where someone mentions that a key supplier is having quality problems. It’s the institutional knowledge trapped in the heads of people who’ve been around long enough to know how things really work, and it’s never been accessible at scale.
The Data Value Chain doesn’t have a link for this. It was never designed to handle language as raw material. And until recently, that was fine – nobody could do much with unstructured data anyway, so the framework covered the data that mattered operationally. But AI changed that equation, and now the most valuable information in most organizations is sitting in formats the chain was never built to process.
The Knowledge Management White Whale
This isn’t the first time someone has tried to unlock unstructured knowledge. The late ’90s and early 2000s saw a massive wave of knowledge management initiatives, and I had a front-row seat for several of them.
The ambition was exactly right. Organizations recognized that their most valuable asset wasn’t the data in their transactional systems – it was the expertise, judgment, and institutional memory distributed across thousands of employees. If you could capture that and make it searchable, the competitive advantage would be enormous. So companies invested in enterprise portals, collaboration platforms, elaborate taxonomy systems, and knowledge bases that would organize everything anyone knew into neat, navigable structures.
The technology wasn’t the only problem, but it was a big one. The systems required people to manually classify, tag, and upload their knowledge into rigid categories. You had to stop what you were doing, open a portal you didn’t particularly enjoy using, figure out where in the taxonomy your insight belonged, and type it up in a format the system could handle. The friction was brutal, and the payoff felt distant. I wrote about this back in 2006, looking at why enterprise wikis were fundamentally challenged – they didn’t fail because people couldn’t find information. They failed because nobody created it in the first place.
The deeper problem was that the technology couldn’t meet humans where they were. Knowledge doesn’t naturally come in taxonomy-friendly packages. It comes in conversation, in the way someone explains a workaround to a colleague, in the offhand observation during a project review that nobody writes down. Asking people to translate that into structured, categorized, uploadable formats was asking them to do the hardest part of the job – and they were already busy doing their actual jobs. So the portals emptied out, the taxonomies gathered dust, and knowledge management became something that everyone agreed was important and almost nobody successfully implemented.
The white whale. Everybody could see it. Nobody could catch it.
Why This Time Might Be Different
AI collapsed the input barrier that killed knowledge management. That single change is more important than anything happening on the output side, and it’s the part that most people are underestimating.
Natural language works now. You don’t need to classify your knowledge into a taxonomy. You don’t need to open a portal and fill out forms. You can dictate into a system and it understands you. You can upload decades of old documents – memos, presentations, reports, emails – and AI will extract the patterns and connections without anyone having to manually tag a single file. You can let it listen to a meeting and pull out the decisions, the action items, and the institutional context that would otherwise evaporate the moment everyone walked out of the room. The user interface for knowledge capture went from “navigate our enterprise portal and classify your contribution” to “just talk.”
I’ve experienced this firsthand building JazzAI, and the contrast is striking. Uploading structured content – my book, blog posts, presentations, strategy frameworks accumulated over decades – works almost effortlessly. The AI digests it, finds the connections, and suddenly has access to patterns I’d built over an entire career. That part is genuinely impressive and, frankly, a little unsettling in how well it works.
But here’s where it gets honest. Capturing the nuanced knowledge – the contextual judgment that comes from 40 years of executive experience, the stories about why certain approaches work in certain situations, the instinct for when to push forward and when to step back – that still requires real effort. Not because the technology can’t handle it, but because articulating tacit knowledge clearly enough for any system to use has always been the hard part, and AI doesn’t magically solve that. It makes the capture dramatically easier, but someone still has to recognize which knowledge is worth capturing and be able to express it clearly.
The technology caught up. The organizational discipline still matters. But for the first time, the balance has shifted enough that knowledge management might actually work at scale.
A Different Kind of Chain
So what happens if you try to run unstructured data through the Data Value Chain? The short answer is that the chain stretches, bends, and in some places breaks entirely. The links are recognizable, but the skills required at each one are fundamentally different.
Insight still starts the process, but the questions look different. With structured data, Insight means knowing which metrics matter – “what would we need to measure to understand why customer retention dropped in Q3?” With unstructured data, the questions are about patterns in language and behavior – “what are our customers actually telling us in support conversations that we’re not hearing?” or “what does our sales team know about competitive threats that never makes it into a CRM field?” The business imagination required is similar, but the domain expertise shifts from quantitative pattern recognition to qualitative interpretation.
Architect changes dramatically. You’re not designing relational database schemas. You’re building vector stores, configuring embedding models, designing retrieval-augmented generation pipelines, and making decisions about chunking strategies and context windows. The underlying judgment is the same (anticipate future needs without over-engineering), but the technical vocabulary is almost entirely different. An architect who spent a career designing data warehouses is going to need significant new skills to design a knowledge retrieval system.
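For readers who want to see what that new vocabulary looks like in practice, here is a minimal, illustrative sketch of the retrieval side of such a pipeline. Everything in it is a simplification – the bag-of-words `embed()` is a toy stand-in for a real embedding model, and a real system would use a vector store rather than a Python list – but the architect's actual decisions (chunk size, overlap, similarity measure, top-k) are all visible.

```python
# Illustrative sketch of a chunk-and-retrieve pipeline for unstructured text.
# embed() is a toy bag-of-words placeholder for a real embedding model.

import math
import re
from collections import Counter

def chunk(text, max_words=50, overlap=10):
    """Split text into overlapping word-window chunks (a chunking strategy)."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text):
    """Toy 'embedding': word counts. A real system would call a model here."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, store, k=2):
    """Rank stored chunks by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(store, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

The design questions this surfaces – how big should a chunk be, how much overlap preserves context, how many results feed the generation step – are exactly the ones a former data-warehouse architect has never had to answer before.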
Generate is an interesting inversion. With structured data, Generate was about pulling information from sources – writing extraction routines, building API connections. With unstructured data, the information already exists. It’s in your email server, your collaboration platform, your document management system. The challenge isn’t extraction – it’s access and permission. How do you let an AI system read meeting transcripts without exposing confidential conversations? How do you process customer support emails without running into privacy constraints? The technical problem shifted from “how do we get the data” to “how do we get the data responsibly.”
Process faces perhaps the biggest transformation. You can scrub and normalize a database field. How do you “clean” a conversation? Unstructured data doesn’t have malformed records or duplicate entries in the traditional sense. But it has noise, contradiction, context-dependence, and ambiguity. Two people in the same meeting might describe the same decision completely differently. An email thread might contain both the official position and the unofficial reality. Processing unstructured data means developing entirely new approaches to quality, consistency, and trust – and those approaches are still being invented.
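To make one of those new approaches concrete: a common first step is flagging near-duplicate passages – the same decision reported twice in slightly different words – before anything downstream trusts the corpus. The sketch below uses word-shingle Jaccard similarity; the threshold is arbitrary and purely illustrative, and a production system would reach for something more scalable like MinHash or embedding-based similarity.

```python
# Illustrative near-duplicate detection for text passages using
# word-shingle Jaccard similarity. Threshold is arbitrary.

def shingles(text, n=3):
    """Set of n-word shingles, lowercased -- a simple text fingerprint."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    """Overlap between two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

def near_duplicates(texts, threshold=0.6):
    """Flag index pairs of passages that overlap heavily."""
    return [(i, j) for i in range(len(texts))
            for j in range(i + 1, len(texts))
            if jaccard(texts[i], texts[j]) >= threshold]
```

Notice what this does not solve: two genuinely contradictory accounts of the same meeting will sail through untouched, because contradiction is a judgment problem, not a string-matching one.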
Analyze and Present shift too, but in ways that feel more like evolution than revolution. Analysis of unstructured data leans heavily on synthesis and interpretation rather than statistical pattern detection. And presentation becomes less about data visualization and more about narrative construction – telling a coherent story from fragments of conversational evidence. Both still require deep business understanding, but the toolbox changes.
The Data Value Chain was built for a world where data meant numbers. When data means language, the chain doesn’t break, exactly – but it needs to be rethought almost link by link.
Who Owns This?
Here’s the organizational question that nobody has a good answer for yet: who is responsible for unstructured data?
Structured data has owners. Finance owns the general ledger. Operations owns production data. Sales owns the CRM. These ownership lines were hard-won – it took years of ERP implementations and data governance initiatives to establish them – but they exist. When the financial data has quality problems, you know whose door to knock on.
Unstructured data has no equivalent. The knowledge trapped in email threads doesn’t belong to anyone. The institutional memory in your 20-year veteran’s head isn’t on anybody’s org chart. The insights buried in customer support transcripts fall between the cracks of IT, customer service, and product management. Nobody’s job description says “ensure the organization’s conversational knowledge is captured, organized, and accessible.”
This is exactly why the knowledge management wave of the 2000s failed at the organizational level, and it’s why many AI knowledge initiatives will struggle too. The technology is finally good enough, but the organizational muscles haven’t been built. You need people who can recognize which knowledge is worth capturing – not everything is, and treating every conversation as equally valuable will bury you in noise. You need people who can articulate complex, nuanced expertise clearly enough for an AI system to incorporate it. And you need the organizational discipline to make knowledge capture a habit, not a one-time project.
The organizations that cracked ERP implementation learned these lessons the hard way. They built governance structures, established data ownership, created incentives for transactional discipline. AI-era knowledge management requires a similar commitment – and the companies that treat it as a technology project instead of an organizational change initiative will end up with the same dusty portals and empty taxonomies they had 20 years ago.¹
The Knowledge You’ve Been Sitting On
The Data Value Chain gave you a framework for structured data. AI reshaped the economics of that chain by compressing the technical middle and amplifying the human bookends. But the real frontier isn’t faster analytics on the data you already have in your databases. It’s the knowledge you’ve been sitting on for decades – the unstructured, conversational, experiential knowledge that has always been your organization’s most valuable and least accessible asset.
For 30 years, knowledge management has been the white whale of enterprise technology. The vision was always right. The technology was never ready. AI has changed that equation – not by eliminating the need for organizational discipline, but by making the capture problem dramatically more tractable. Natural language as an interface. Automated transcription and extraction. Retrieval systems that can find relevant knowledge without requiring someone to have filed it in the right taxonomy.
The organizations that figure this out won’t just have better AI implementations. They’ll have something much more valuable – a way to preserve and leverage institutional knowledge that would otherwise walk out the door every time a key employee retires or moves on. That’s not a technology advantage. That’s a strategic one.
And it starts with recognizing that the most important data in your organization isn’t in any database. It never was.
Want to unlock the knowledge your organization has been sitting on? Join our community of executives and practitioners navigating Data Mastery and the other Building Blocks of a connected business. Subscribe to the Maker Turtle mailing list for frameworks, case studies, and practical guidance you won’t find in a vendor pitch.
Related Articles
- Using Unstructured Data to Fuel Enterprise AI Success – MIT Technology Review on why structured data must be ready before unstructured data becomes useful for AI
- Top Knowledge Management Trends 2026 – Enterprise Knowledge CEO on the convergence of structured and unstructured data through “knowledge assets”
- Top 5 Unstructured Data Management Predictions for 2026 – Komprise research showing enterprises storing 5-10+ PB of unstructured data with growing urgency around AI-readiness
Recommended Books
- Working Knowledge by Thomas Davenport and Laurence Prusak – The seminal text on knowledge management from 1998; directly supports the “white whale” history in this article
- Don’t Think So Much by Jim MacLennan – Digital transformation and organizational change lessons that apply directly to AI-era knowledge management
- The Knowledge-Creating Company by Ikujiro Nonaka and Hirotaka Takeuchi – The classic on tacit versus explicit knowledge conversion, directly relevant to the capture challenge AI is now addressing
6 April, 2026
¹ This is where Data Mastery connects to all four other Building Blocks. Unstructured knowledge lives in operations, customer interactions, product feedback, and team expertise. Capturing it is an enterprise-wide challenge, not a data team project.