It’s a common situation we’ve been seeing: you have fixed the data pipeline, you have hired or trained the talent, you have the executive mandate. The budget. The technology. The time and dedication, even! And you are still wondering: why is your enterprise AI still underperforming? Why is it not scaling? The answer, it turns out, may be hiding in people’s heads.

Enterprise AI adoption has a well-documented data problem. IDC research consistently identifies data quality, data availability, and data silos as the top barriers to scaling AI across the organization. Globally, 89% of organizations acknowledge some level of data quality problems, and at the same time 52% of companies say data quality is the most important factor for AI project success. Only 6% of CIOs say they have completed all data initiatives and are ready to move to the next level of AI adoption. And 7 in 10 IT and business leaders cite data silos as one of the biggest challenges for AI adoption. We can see significant budgets being invested in data lakes, data governance frameworks, and MLOps infrastructure. And yet, more than half of AI initiatives stall after the pilot phase: they succeed at delivering impressive demos but fail to generate enterprise-wide value.

The data problem is real. But is the problem just about data readiness? There might be another knowledge crisis running quietly, often in the shadows. Most organizations do not see it coming until something goes seriously wrong. A major restructuring or a round of layoffs happens. A reorg lets go of the wrong people. Suddenly, things that used to just work start breaking down. Processes that ran smoothly for years become unreliable. New hires cannot figure out how their predecessors got results. That is the moment leadership realizes something important walked out the door. And if it was never captured, it is simply gone. This kind of knowledge is easy to overlook precisely because it is invisible when everything is going fine. It only shows up in the gap it leaves behind. Or when an AI project doesn’t scale.

There are things that cannot be put into a dataset

In the 1960s, philosopher Michael Polanyi articulated something that anyone who has tried to teach a skill to a machine, or an algorithm, or even to another human, knows intuitively: “We can know more than we can tell”. This is the essence of the Polanyi Paradox. It captures the idea that much of what we know, we know through experience, practice, and intuition rather than through rules we could ever fully write down. A master chess player cannot explain every instinct that guides a move. A skilled surgeon cannot put into words every micro-adjustment she makes mid-procedure. They just know, and that knowing lives in them, not in any manual or dataset. The paradox is this: the knowledge that is often most valuable is precisely the knowledge that is hardest to transfer, document, or teach explicitly. That silent knowledge is often called tacit.

Organizations are similar. Tacit knowledge is everything an organization knows but has never written down. It is the senior underwriter who can sense a bad risk before she looks at a single data point. It is the way the logistics team re-routes shipments when two things go wrong simultaneously, following a process that exists nowhere in any corporate workflow diagram but in their heads. It is the unspoken understanding of which stakeholder actually needs to approve something, regardless of what the org chart says. It is decades of built-up expertise, accumulated judgment, pattern recognition, and impossible-to-document gut feeling. And it is embedded in people and informal processes, not systems and databases.

AI models, algorithms, and systems learn from and reference data. But tacit knowledge, by definition, never makes it into the databases, structured or not. Which means that when organizations pursue end-to-end, transformational AI deployments, they are often training and deploying AI on an incomplete picture, even while succeeding with narrower use cases where knowledge can easily be written down and transferred.

Three more problems?

The tacit knowledge gap is structurally difficult to manage for three reasons that reinforce one another.

  • First, Polanyi himself never attempted to assess what the proportion of tacit knowledge actually is. Organizations do not know how much tacit knowledge they have. Is it 25% of institutional knowledge? 60%? 90%? There is no universal answer, even if some management experts try to guess, and that uncertainty is itself a strategic liability. It is impossible to close a gap that cannot be measured. We can only assume that in knowledge-intensive industries, from professional services and healthcare to advanced manufacturing and financial services, the proportion of expertise that lives solely inside people’s heads is almost certainly larger than leadership assumes.
  • Second, and this is where the problem compounds beyond Polanyi’s original framing, tacit knowledge is not static. It evolves as markets shift, as teams learn, and, painfully, as people come and go. Every time a senior expert retires or an experienced employee leaves, a portion of that knowledge walks out the door permanently. The institutional knowledge base your AI was designed around last year may no longer reflect today’s reality. The power of silent expertise also fluctuates: it is particularly crucial in times of sudden change, which means it becomes even more critical when an organization commits to AI transformation.
  • Third, and this is something I see among many experts I meet, tacit knowledge gaps may erode trust in AI. This is perhaps the most underappreciated consequence. When experienced professionals interact with AI outputs that feel off, like answers that are technically defensible but miss something important, they often cannot articulate exactly why. The AI passed every benchmark. The data was clean. But the output does not fit the context that only an insider would know. The result: employees either spend hours manually verifying AI recommendations, defeating the productivity case, or they quietly stop using the tools altogether. A smooth way to prove the business case wrong, if you ask me.

Is there hope? I don’t know, but we can try

There probably is no one ultimate way to address the Polanyi Paradox – that is, in a sense, the point. But organizations can – and should – take deliberate steps to reduce the gap and build AI systems that are more honest about what they do and do not know.

Companies need to design AI for collaboration, not replacement. The most effective AI deployments in mature organizations use human expertise to continuously refine the behavior of models, tools, and applications. This can be done through feedback loops, exception handling, or human-in-the-loop review. Done correctly, this creates a mechanism for tacit knowledge to gradually surface and be encoded over time. AI will take over much of the work that can be fully defined and encoded, but in many situations it will only handle a limited part of the overall task.
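As a minimal sketch of what such a loop might look like in practice, the toy Python below routes each reviewed case to an expert and logs disagreements together with the expert's rationale, so that recurring corrections can later be encoded as rules, features, or training data. The names, data structures, and the underwriting example are purely illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class ReviewedCase:
    case_id: str
    ai_output: str
    expert_output: str
    expert_note: str  # free-text rationale: this is where tacit knowledge surfaces

correction_log: list[ReviewedCase] = []

def human_in_the_loop(case_id: str, ai_output: str, expert_review) -> str:
    """Send the AI output to an expert; log any disagreement with its rationale."""
    expert_output, expert_note = expert_review(ai_output)
    if expert_output != ai_output:
        correction_log.append(ReviewedCase(case_id, ai_output, expert_output, expert_note))
    return expert_output

# Example: a senior underwriter overrides the model and explains why.
decision = human_in_the_loop(
    "Q-1042",
    "approve",
    lambda out: ("decline", "Broker history suggests inflated valuations in this segment."),
)
print(decision, len(correction_log))  # -> decline 1
```

Over time, the correction log becomes the raw material for turning silent expertise into documented rules or model improvements.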

Companies can start making tacit knowledge capture a design requirement, not an afterthought. Before deploying AI in any high-stakes domain, conduct structured knowledge extraction with domain experts. Techniques borrowed from cognitive task analysis (sounds heavy, but it can be really fun!) may help surface decision logic that experts themselves did not know they were applying. This is not a one-time exercise; it needs to be embedded in how teams work and how processes are documented. This process also calls for factoring in potentially high cost and resistance, and for prioritizing “easier” AI use cases unless the expected return is exceptionally high.

Organizations should treat employee transitions as a knowledge continuity risk. They frequently invest significantly in operational continuity planning; knowledge continuity deserves the same treatment. Structured offboarding, mentorship programs designed to transfer expertise rather than just tasks, and apprenticeship models can preserve hidden knowledge before it disappears.

Organizations must aim to make AI systems transparent about uncertainty. When building or procuring AI tools, organizations can define confidence thresholds that trigger human review rather than automated action (this may not be great for a fully autonomous agentic flow, but we need compromises). They can also test models specifically against edge cases and domain-specific scenarios where tacit knowledge would normally kick in, and use those gaps to inform where human oversight is non-negotiable. It is less about an organization admitting AI weakness and more about an organization designing guardrails around known blind spots.
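A minimal sketch of those two guardrails, assuming the model exposes some confidence score; the threshold, routing labels, and edge cases below are invented for illustration only.

```python
REVIEW_THRESHOLD = 0.75  # illustrative; tuned per domain and risk appetite

def route(prediction: str, confidence: float) -> dict:
    """Send low-confidence outputs to human review instead of automated action."""
    action = "human_review" if confidence < REVIEW_THRESHOLD else "auto_execute"
    return {"action": action, "prediction": prediction, "confidence": confidence}

# Domain-specific edge cases where tacit knowledge would normally kick in.
# Failures here mark the areas where human oversight stays non-negotiable.
EDGE_CASES = [
    {"input": "claim with mid-term policy change", "expected": "refer"},
    {"input": "shipment re-route during dual carrier outage", "expected": "escalate"},
]

def edge_case_pass_rate(model) -> float:
    hits = sum(1 for case in EDGE_CASES if model(case["input"]) == case["expected"])
    return hits / len(EDGE_CASES)

print(route("approve", 0.62))                     # -> routed to human review
print(edge_case_pass_rate(lambda text: "refer"))  # -> 0.5
```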

The organizations that will extract the most value from AI over the next decade will be the ones that are honest about what knowledge they have actually managed to encode, even if, yes, we all agree we can never have all the knowledge, and that try to close that gap systematically. For your AI to succeed and scale, data is necessary, but it is not sufficient. The missing variable is knowledge, all of it, not just the part that lives in the organization’s databases.

Got a question? Drop it in here.

Ewa Zborowska - Research Director, AI, Europe - IDC

Ewa Zborowska is an experienced technology professional with 25 years of expertise in the European IT industry. Since 2003, she has been a member of the IDC team, based in Warsaw, researching IT services markets. In 2018, she joined the European team with a specific emphasis on cloud and AI. Ewa is currently the lead analyst for IDC’s European Artificial Intelligence Innovations and Strategies CIS.

In the early 2020s, most IT dashboards looked deliciously green – until you cut them open. That “watermelon problem” summed up the gap between what SLAs said and how people actually felt at work: 99.8% uptime on paper, but slow logons, clunky multi-factor authentication, and chatbots that couldn’t understand what anyone really wanted. Experience was an afterthought, AI was a sideshow, and creativity was nowhere to be found in the contract.

When SLAs ruled the world

Back then, three things defined the status quo. AI was narrow and local, sitting on the edge of workflows answering FAQs or routing tickets rather than orchestrating work. Experience measurement lagged reality, with annual or quarterly surveys surfacing issues long after the damage was done. And creativity simply didn’t exist in the metrics; contracts cared about uptime, not whether people had the cognitive space to experiment or innovate.

The result was a strange split-screen. On one side, leaders proudly cited their SLA success. On the other, employees wrestled with friction that didn’t fit any KPI: context-switching between tools, re-entering the same data, and watching “helpful” chatbots miss the point. XLAs (experience level agreements) were occasionally piloted (an NPS here, a satisfaction score there) but rarely changed actual design or investment decisions.

Now: XLAs as control towers for human-AI work

Fast forward to 2026, and AI is no longer the sidekick; it is the backbone of digital work. GenAI assistants, low-code agents, and orchestration platforms now sit inside service desks, digital workplace platforms, and line-of-business apps. XLAs have emerged as the language that decides whether all this AI is genuinely helping humans do better work or just adding more noise.

Three big shifts define the “now.” Agentic AI makes XLAs real-time and contextual, correlating technical signals like latency and crashes with human signals such as sentiment, task completion, and time to productivity. It can trigger automated remediation, from self-healing endpoints to conversational agents that guide users through fixes, and spotlight experience hotspots for specific personas or workflows. IDC’s 2025 Future of Work survey shows 79% of organizations now actively measure the relationship between employee and customer experience, with two-thirds having proof of causal linkages, while 94% of AI-enabled work adopters report productivity gains and over half see significant improvements.
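As a toy illustration of that correlation (not an IDC specification), the sketch below blends technical and human signals into a single persona-level experience score and flags when remediation should be triggered; the signals, weights, and threshold are assumptions.

```python
def xla_score(signals: dict) -> float:
    """Blend technical and human signals into a 0-100 experience score.
    Weights are illustrative and would be calibrated per persona."""
    weights = {
        "uptime": 0.2,           # classic SLA signal, 0..1
        "latency_ok": 0.2,       # share of interactions under the latency target, 0..1
        "sentiment": 0.3,        # survey or in-app sentiment, 0..1
        "task_completion": 0.3,  # share of tasks completed without rework, 0..1
    }
    return 100 * sum(weights[k] * signals[k] for k in weights)

def remediation_needed(signals: dict, threshold: float = 70.0) -> bool:
    # Below-threshold scores would trigger self-healing or a guided-fix agent.
    return xla_score(signals) < threshold

frontline = {"uptime": 0.998, "latency_ok": 0.6, "sentiment": 0.55, "task_completion": 0.7}
print(round(xla_score(frontline), 1), remediation_needed(frontline))  # ~69.5 True
```

The point of the toy example is the watermelon effect in miniature: 99.8% uptime looks green on its own, yet the blended score still calls for remediation.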

Making creativity a measurable outcome

The most interesting XLAs no longer treat creativity as a fuzzy aspiration. They track uninterrupted focus time per persona, link AI automation to freed-up hours, and measure innovation throughput: ideas submitted, prototypes built, experiments completed. Instead of only asking if AI is fast or accurate, organizations track “human-plus” metrics: how much better decisions, proposals, and options become when humans and AI work together.

Governance grows up

This evolution is forcing governance structures to grow up fast. AI-focused Centers of Excellence increasingly use XLA dashboards as strategic instruments, challenging deployments that look great on technical metrics but poor on human outcomes. They prioritize changes that build trust and agency, such as better explainability, robust feedback loops, and human override capabilities, and retire tools that consistently score badly on ease of use or learning curve.

Metrics are diversifying accordingly: about 69% of organizations use productivity scores such as task-based speed and throughput to assess AI, while 42% also track employee satisfaction and 44% monitor skills proficiency. XLAs have become a proxy for hard questions: Are we making it easier for people to solve novel problems? Are AI tools empowering experts or boxing them in? Where is digital friction quietly killing initiative?

Tomorrow: XLAs as the OS for co-creation

Looking ahead, XLAs are set to become the operating system for human/AI co-creation. Emerging “experience-risk” indices predict burnout or disengagement, while creativity capacity scores combine focus time, use of exploratory tools, and psychological safety indicators. Agentic AI will increasingly use XLAs as experience-intent parameters – goals like maximizing focus time for data scientists or ensuring frontline staff resolve most issues in under three minutes – and autonomously orchestrate tools, notifications, and workflows to hit them.

Contracts will catch up too, moving from green dashboards to models that reward innovation, protect against “experience debt,” and explicitly safeguard time and cognitive bandwidth for meaningful work. For service providers, the mandate is clear: anchor XLAs on outcomes only humans can deliver, make creativity visible on the dashboard, build strong feedback loops, and use XLAs as guardrails against over-automation. XLAs are no longer just a friendlier way to measure IT; they are becoming the central platform for keeping human potential at the center of an AI-driven future of work.

For more information see IDC’s upcoming research documents: “Measuring What Matters: XLAs and the 2026 Digital Workplace” and “Control Towers for Human Potential: The Growing Importance of XLAs in the Age of Agentic AI”.

If you have a question about this or any other IDC research, drop it in here.

Meike Escherich - Associate Research Director, European Future of Work - IDC

Meike Escherich is an associate research director with IDC's European Future of Work practice, based in the UK. In this role, she provides coverage of key technology trends across the Future of Work, specializing in how to enable and foster teamwork in a flexible work environment. Her research looks at how technologies influence workers' skills and behaviors, organizational culture, worker experience and how the workspace itself is enabling the future enterprise.

By Bo Lykkegaard, Associate VP for Software Research Europe, with advice and review by Ewa Zborowska, Research Director, AI, Europe

Providers of SaaS solutions across the world have been through a market capitalization bloodbath during the past six months. Despite presenting solid indicators of growth and margins for 2025, almost all publicly traded companies have seen share price reductions of 10% to 60%, with the average reduction in the 30-35% range.

Forget about looming trade wars, recession fears, missed revenue goals, and other conventional share price depressants. This is about AI disruption of the current SaaS user experience, licensing model, and product architecture. Investors are starting to fear that the SaaS ‘rental model for software’ will become invisible ‘featureware’ inside an AI agent layer.

What Are the Market Cap Reductions Telling Us?

We have examined the market cap reductions of publicly traded SaaS vendors over the past six months. Based upon this, we can make the following observations:

  • All SaaS vendors are affected across solution areas, geographies, size of vendor, recent growth KPIs, and size focus (SMB vs. enterprise). This means that investors are reexamining their assumptions related to SaaS growth prospects in general.
  • Vendors of workflow automation solutions and vendors targeting small and medium-sized businesses appear particularly exposed. Commercial workflow software is seen as exposed to replacement by new AI agent technologies. Also, vendors targeting small businesses are seen as more exposed to churn and price pressures.
  • SaaS vendors headquartered in EMEA do not appear harder hit than those headquartered in North America, and the market cap correction has hit the largest as well as the smaller SaaS vendors.

Changes that All SaaS Vendors Are Facing

Firstly, the conventional SaaS user experience must change. In a conventional SaaS application, the user executes tasks manually within defined workflows. In an AI-powered application, the system adds probabilistic outputs to these structured workflows, where it generates, predicts, recommends, or executes. Also, AI-powered applications can accept and react to all kinds of conversational user inputs. Furthermore, just like today’s LLM-based apps, business applications understand context and remember past interactions, which makes recommendations and predictions more relevant and precise. Finally, AI-powered business applications are more proactive in nature and help users with monitoring tasks and relevant notifications.

Secondly, the conventional SaaS licensing model must evolve. The talk of the town these days is ‘outcome-based pricing’, i.e. the notion of pricing an application on outcomes (e.g. number of invoices issued) as opposed to number of users. If agentic workflows increasingly automate core business processes in the future, the user of, say, a financial application will be an agentic workflow as opposed to a human user. As AI agents increasingly become users of business applications, the user-based revenue model of SaaS applications collapses. Investors are looking for SaaS vendors to at least align licensing better to business outcomes.
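A deliberately simple, hypothetical illustration of why this matters commercially; the prices and volumes below are invented for the example.

```python
def per_user_bill(users: int, price_per_user: float = 50.0) -> float:
    return users * price_per_user               # classic seat-based SaaS pricing

def outcome_bill(invoices_issued: int, price_per_invoice: float = 0.40) -> float:
    return invoices_issued * price_per_invoice  # pricing tied to the outcome delivered

# Today: 200 finance users. Tomorrow: 5 people supervising agents that issue 30,000 invoices a month.
print(per_user_bill(200), per_user_bill(5))  # 10000.0 250.0 -> seat revenue collapses
print(outcome_bill(30_000))                  # 12000.0       -> revenue tracks the work done
```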

Thirdly, the conventional SaaS product architecture must be rethought. Adding AI to a conventional SaaS solution in the form of a chatbot or some other AI add-on does not make a meaningful difference. Real modernization requires rethinking the SaaS workflow from the ground up. AI changes all levels of the SaaS product stack, which now needs foundation model(s), an embedding layer, a vector database, retrieval-augmented generation (RAG), an orchestration layer, guardrails, monitoring, and prompt/version management.
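To make that list concrete, here is a deliberately simplified sketch of how those layers fit together, with stand-in functions in place of a real foundation model, embedding model, and vector database; none of this reflects any particular vendor’s architecture.

```python
import numpy as np

# Embedding layer (stand-in for a real embedding model).
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(8)

# Vector database (in-memory stand-in).
DOCS = ["refund policy for enterprise plans", "invoice dispute workflow", "GDPR data retention rules"]
INDEX = [(doc, embed(doc)) for doc in DOCS]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: -float(np.dot(q, item[1])))
    return [doc for doc, _ in ranked[:k]]

# Guardrails: block or escalate risky answers before they reach the user.
def guardrail(answer: str) -> str:
    banned = ["guarantee", "legal advice"]
    return "Escalating to a human agent." if any(b in answer.lower() for b in banned) else answer

# Foundation model call (stand-in for an LLM).
def generate(prompt: str) -> str:
    return f"[model answer grounded in: {prompt[:60]}...]"

# Orchestration layer: retrieval-augmented generation wrapped in guardrails.
def answer(query: str) -> str:
    context = " | ".join(retrieve(query))
    draft = generate(f"Context: {context}\nQuestion: {query}")
    return guardrail(draft)  # monitoring and prompt/version management would wrap this in production

print(answer("How do I handle an invoice dispute?"))
```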

AI is making several other significant changes in SaaS. Development and maintenance as well as running costs have become more volatile and unpredictable. Data management requires new approaches, as application data now serves as a key source for training AI-powered SaaS solutions. Product roadmaps and release cadences are increasingly driven by AI model upgrades rather than traditional update schedules. Software vendors face new risk management challenges related to hallucinations and regulatory compliance. And both vendors and end-user organizations need to adapt their teams with new sets of skills. And most importantly, the overall competitive landscape has shifted, with AI-based startups and hyperscaler offerings emerging as new challengers.

The changes above certainly apply to SaaS vendors in Europe. However, in addition, vendors in Europe – as they adapt solutions and business models to become AI-driven – must pay particular attention to four areas in order to successfully transform.

Firstly, there is GDPR, NIS2, and EU AI Act compliance, often accompanied by various national or industry-specific regulations. If vendors cannot document and showcase complete compliance to customers, they cannot sell their AI-powered solutions to compliance-sensitive European organizations.

Secondly, we increasingly see data residency requirements from customers in Europe, particularly in public services, financial services, and healthcare. Buyers in such industries can require EU-hosted data and sovereign cloud guarantees and approaches, and can seek to avoid exposure to the US CLOUD Act and to having their data used for foundation model training.

Thirdly, Europe is multi-lingual and buyers require multi-language model performance. A conversational SaaS application is great but only if the conversation happens in the local European language where the application is deployed. We have seen many cases where non-English conversational capabilities are years behind English.

Fourth, European AI-powered SaaS vendors should expect higher demand for transparency and explainability. European customers have a strong preference for understanding how AI systems make decisions, a need often reinforced by regulations like GDPR and the EU AI Act. This means vendors must provide clear logic behind decision criteria, bias mitigation documentation, human oversight mechanisms, and comprehensive audit trails. Black-box AI approaches such as “Pick this candidate because the recruiting application assigned a high AI score” simply will not fly in Europe, where trust is key and depends heavily on being able to trace and justify how conclusions are reached.

Join the Conversation

At IDC, we help you navigate these changes with deep market research, robust data analytics, and tailored custom solutions. Whether you need strategic insights, benchmarking, or support in adapting your business model, our experts are ready to guide you.

Contact us to discuss your unique challenges and discover how IDC can empower your next steps in the evolving, AI-disrupted European software landscape.



Bo Lykkegaard - Associate VP for Software Research Europe - IDC

Bo Lykkegaard is associate vice president for the enterprise-software-related expertise centers in Europe. His team focuses on the $172 billion European software market, specifically on business applications, customer experience, business analytics, and artificial intelligence. Specific research areas include market analysis, competitive analysis, end-user case studies and surveys, thought leadership, and custom market models.

In January, Carla Arend, Rahiel Nasir and Luis Fernandes presented IDC’s predictions for cloud in 2026 and beyond. Below is a summary of the main points that were made in the webcast.

The need for digital resilience has never been greater

  • Tariffs, supply chain glitches, regulations, skills shortages… digital organisations are being assaulted from all sides.
  • For the majority of EMEA organisations, maintaining operational resilience and cyber security is the top priority.
  • To survive, organisations need to ensure their tech stack is robust and assess the strengths of their tech partner ecosystem. Adaptability and financial stability will also be key weapons to add to the armoury.

Digital sovereignty could help

  • Around half of organisations in EMEA have increased interest in implementing digital sovereignty solutions due to all the geopolitical uncertainties, such as trade tensions, regional conflicts, and regulatory shifts, witnessed in 2025.
  • Digital sovereignty solutions offer data owners complete control and autonomy over their digital assets – maintaining operational resilience is a key tenet of sovereignty.
  • Governance, risk and compliance solutions will be the key focus for organisations looking for sovereign cloud providers, especially for their AI. This will help them reassess their cloud provider options, determine the right IT venue for their workloads, and help to create a more robust tech stack.

The right venue for AI workloads

  • Enterprises are shifting to specialized AI providers and edge infrastructure to maximize performance and efficiency.
  • By 2028, physical AI use cases will experience explosive growth, with cloud providers powering the bulk of these deployments at the edge using industry-specific AI agents and high-performance edge infrastructure.
  • By the end of this decade, at least 30% of advanced GPU needs will be met by specialised AI cloud providers offering true cloud features, flexible pricing, APIs, and software services (unlike GPU-only providers).

AI and cloud modernisation

  • Cloud modernisation continues while legacy systems are re-platformed for AI, using autonomous agents to automate operations and orchestration.
  • Over the next two years, more than half of enterprise apps will leverage SaaS platforms to orchestrate predefined app functions and AI agents for real-time workflows, enabling modular and interoperable solutions.
  • By 2030, 45% will use cloud AI-infused tools to assess cost and performance metrics to optimise workload placement. Furthermore, a fifth will use AI agents to automate workload orchestration.

Recommendations for cloud users

  • With geopolitical turmoil continuing into 2026 (and probably beyond), organisations are advised to take a risk-based approach to their cloud and AI strategies.
  • Choose the most appropriate venue for your workload. This should be supported by a hybrid and multicloud ecosystem of partners who offer services tailored to your needs.
  • The time to modernise your cloud estate to get ready for AI is now.

Watch the European cloud predictions webcast here:

For the EMEA FutureScape predictions webcast, click here.

If you would like more information on any of the above, please drop your details in here.

Rahiel Nasir - Research Director, European Cloud Practice, Lead Analyst, Digital Sovereignty - IDC

Rahiel Nasir is responsible for leading and contributing to IDC's European cloud and cloud data management research programs, as well as supporting associated consulting projects. In addition, he leads IDC's worldwide Digital Sovereignty research program. Nasir has been watching technology markets and writing about them throughout his professional life.