Artificial Intelligence and DaaS | March 9, 2026 | 8 min read

The knowledge your AI may never have

It’s a common situation we’ve been seeing: you have fixed the data pipeline, you have hired or trained the talent, you have the executive mandate. The budget. The technology. The time and dedication, even! And you are still wondering: why is your enterprise AI still underperforming? Why is it not scaling? The answer, it turns out, may be hiding in people’s heads.

Enterprise AI adoption has a well-documented data problem. IDC research consistently identifies data quality, data availability, and data silos as the top barriers to scaling AI across the organization. Globally, 89% of organizations acknowledge some level of data quality problem, and at the same time 52% of companies say data quality is the most important factor for AI project success. Only 6% of CIOs say they have completed all data initiatives and are ready to move to the next level of AI adoption. And 7 in 10 IT and business leaders cite data silos as one of the biggest challenges for AI adoption. Significant budgets are being invested in data lakes, data governance frameworks, and MLOps infrastructure. And yet more than half of AI initiatives stall after the pilot phase: succeeding at delivering impressive demos but failing to generate enterprise-wide value.

The data problem is real. But is the problem just about data readiness? There might be another knowledge crisis running quietly, often in the shadows. Most organizations do not see it coming until something goes seriously wrong. A major restructuring or a round of layoffs happens. A reorg lets go of the wrong people. Suddenly, things that used to just work start breaking down. Processes that ran smoothly for years become unreliable. New hires cannot figure out how their predecessors got results. That is the moment leadership realizes something important walked out the door. And if it was never captured, it is simply gone. This kind of knowledge is easy to overlook precisely because it is invisible when everything is going fine. It only shows up in the gap it leaves behind. Or when an AI project doesn’t scale.

There are things that cannot be put into a dataset

In the 1960s, philosopher Michael Polanyi articulated something that anyone who has tried to teach a skill to a machine, an algorithm, or even another human knows intuitively: “We can know more than we can tell”. This is the essence of the Polanyi Paradox. It captures the idea that much of what we know, we know through experience, practice, and intuition rather than through rules we could ever fully write down. A master chess player cannot explain every instinct that guides a move. A skilled surgeon cannot put into words every micro-adjustment she makes mid-procedure. They just know, and that knowing lives in them, not in any manual or dataset. The paradox is this: the knowledge that is often most valuable is precisely the knowledge that is hardest to transfer, document, or teach explicitly. That silent knowledge is often called tacit.

Organizations are similar. Tacit knowledge is everything an organization knows but has never written down. It is the senior underwriter who can sense a bad risk before she looks at a single data point. It is the way the logistics team re-routes shipments when two things go wrong simultaneously, following a process that exists nowhere but in their heads, certainly not in any corporate workflow diagram. It is the unspoken understanding of which stakeholder actually needs to approve something, regardless of what the org chart says. It is decades of built-up expertise, accumulated judgment, pattern recognition, and impossible-to-document gut feeling. And it is embedded in people and informal processes, not in systems and databases.

AI models, algorithms, and systems learn from and reference data. But tacit knowledge, by definition, never makes it into databases, structured or otherwise. Which means that organizations pursuing end-to-end transformational AI deployments are often training and deploying AI on an incomplete picture, even as they succeed with narrow use cases where knowledge can easily be written down and transferred.

Three more problems?

The tacit knowledge gap is structurally difficult to manage for three reasons that reinforce one another.

  • First, Polanyi himself never attempted to quantify the proportion of knowledge that is tacit. Organizations do not know how much tacit knowledge they have. Is it 25% of institutional knowledge? 60%? 90%? There is no universal answer, even if some management experts try to guess, and that uncertainty is itself a strategic liability. It is impossible to close a gap that cannot be measured. We can only assume that in knowledge-intensive industries, from professional services to healthcare to advanced manufacturing and financial services, the proportion of expertise that lives solely inside people’s heads is almost certainly larger than leadership assumes.
  • Second, and this is where the problem compounds beyond Polanyi’s original framing, tacit knowledge is not static. It evolves as markets shift, as teams learn, and, painfully, as people come and go. Every time a senior expert retires or an experienced employee leaves, a portion of that knowledge walks out the door permanently. The institutional knowledge base your AI was designed around last year may no longer reflect today’s reality. The value of silent expertise also fluctuates: it is particularly crucial in times of sudden change, which means it becomes even more critical precisely when an organization commits to AI transformation.
  • Third, and something I see among many experts I meet, tacit knowledge gaps may erode trust in AI. This is perhaps the most underappreciated consequence. When experienced professionals interact with AI outputs that feel off, like answers that are technically defensible but miss something important, they often cannot articulate exactly why. The AI passed every benchmark. The data was clean. But the output does not fit the context that only an insider would know. The result: employees either spend hours manually verifying AI recommendations, defeating the productivity case, or they quietly stop using the tools altogether. A smooth way to prove the business case wrong, if you ask me.

Is there hope? I don’t know, but we can try

There probably is no one ultimate way to address the Polanyi Paradox – that is, in a sense, the point. But organizations can – and should – take deliberate steps to reduce the gap and build AI systems that are more honest about what they do and do not know.

Companies need to design AI for collaboration, not replacement. The most effective AI deployments in mature organizations use human expertise to continuously refine the behavior of models, tools, and applications. This can be done through feedback loops, exception handling, or human-in-the-loop review. Done correctly, this creates a mechanism for tacit knowledge to gradually surface and be encoded over time. AI will take over much of the work that can be fully defined and encoded, but in many situations it will only handle a limited part of the overall task.
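To make the feedback-loop idea concrete, here is a minimal sketch in Python of what capturing expert reviews could look like. Everything in it, from the `ReviewRecord` fields to the `rejection_rate` heuristic, is a hypothetical illustration of the pattern, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One expert review of an AI output, kept for later model refinement."""
    model_output: str
    accepted: bool
    correction: str | None = None  # what the expert changed, if anything
    rationale: str | None = None   # the expert's "why" -- tacit knowledge made explicit

class FeedbackLoop:
    """Collects human-in-the-loop reviews so recurring corrections can be
    analyzed and, over time, encoded into prompts, rules, or training data."""

    def __init__(self) -> None:
        self.records: list[ReviewRecord] = []

    def review(self, model_output: str, accepted: bool,
               correction: str | None = None,
               rationale: str | None = None) -> None:
        self.records.append(ReviewRecord(model_output, accepted, correction, rationale))

    def rejection_rate(self) -> float:
        """A persistently high rejection rate flags domains where the model
        is missing knowledge that reviewers hold only tacitly."""
        if not self.records:
            return 0.0
        return sum(not r.accepted for r in self.records) / len(self.records)
```

The `rationale` field carries the real value here: every time an expert is asked to explain why they overrode the system, a sliver of tacit knowledge becomes explicit and reusable.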

Companies can start making tacit knowledge capture a design requirement, not an afterthought. Before deploying AI in any high-stakes domain, conduct structured knowledge extraction with domain experts. Techniques borrowed from cognitive task analysis (sounds heavy, but can be really fun!) may help surface decision logic that experts themselves did not know they were applying. This is not a one-time exercise; it needs to be embedded in how teams work and how processes are documented. The process also calls for factoring in potentially high cost and resistance, and for prioritizing “easier” AI use cases unless the expected return is exceptionally high.

Organizations should treat employee transitions as a knowledge continuity risk. Most already invest significantly in operational continuity planning; knowledge continuity deserves the same approach. Structured offboarding, mentorship programs designed to transfer expertise rather than just tasks, and apprenticeship models can preserve hidden knowledge before it disappears.

Organizations must aim to make AI systems transparent about uncertainty. When building or procuring AI tools, organizations can define confidence thresholds that trigger human review rather than automated action (not ideal for fully autonomous agentic flows, but compromises are needed). They can also test models specifically against edge cases and domain-specific scenarios where tacit knowledge would normally kick in, and use those gaps to inform where human oversight is non-negotiable. This is less about an organization admitting AI weakness and more about designing guardrails around known blind spots.
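As a rough illustration of such a guardrail, the sketch below routes low-confidence outputs to a human reviewer instead of acting on them, and probes a list of expert-flagged edge cases to see how often the system would have acted unreviewed. The `route` function, the 0.9 threshold, and the `blind_spot_rate` probe are all hypothetical; real thresholds must be calibrated per use case.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # "auto" or "escalate"
    prediction: str
    confidence: float

def route(prediction: str, confidence: float, threshold: float = 0.9) -> Decision:
    """Act automatically only above the confidence threshold;
    otherwise escalate to human review."""
    if confidence >= threshold:
        return Decision("auto", prediction, confidence)
    return Decision("escalate", prediction, confidence)

def blind_spot_rate(edge_cases: list[tuple[str, float]],
                    threshold: float = 0.9) -> float:
    """Share of known tricky scenarios the system would have handled with no
    human review -- a proxy for exposure to tacit-knowledge blind spots."""
    decisions = [route(pred, conf, threshold) for pred, conf in edge_cases]
    return sum(d.action == "auto" for d in decisions) / len(decisions)

# Scenarios flagged by domain experts, paired with the model's confidence.
cases = [("approve claim", 0.97), ("reject claim", 0.62), ("approve claim", 0.91)]
print(f"Edge cases that would run unreviewed: {blind_spot_rate(cases):.0%}")  # 67%
```

Running the probe over scenarios where insiders say “it depends” gives a simple, trackable number for where human oversight should remain non-negotiable.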

The organizations that will extract the most value from AI over the next decade will be the ones that are honest about what knowledge they have actually managed to encode, even if, yes, we all agree we can never capture all of it, and the ones that try to close that gap systematically. For your AI to succeed and scale, data is necessary, but it is not sufficient. The missing variable is knowledge, all of it, not just the part that lives in the organization’s databases.


Ewa Zborowska - Research Director, AI, Europe - IDC

Ewa Zborowska is an experienced technology professional with 25 years of expertise in the European IT industry. Since 2003, she has been a member of the IDC team, based in Warsaw, researching IT services markets. In 2018, she joined the European team with a specific emphasis on cloud and AI. Ewa is currently the lead analyst for IDC’s European Artificial Intelligence Innovations and Strategies CIS.
