In the near term, rehost is the second-largest segment after rebuild, with market spending driven by enterprises that must respond urgently to end-of-life (EOL) deadlines for mainframes and similar platforms; however, the segment has already reached maturity and is forecast to decline. The mid-to-long-term growth opportunity lies in application modernization: rewriting, refactoring, and the adoption of microservices and cloud-native architectures.
Masaru Muramatsu is a senior research analyst responsible for research and analysis of the Japanese IT services market, including IT consulting, systems integration, and business services.
Prior to joining IDC, Masaru worked to help digitalize local government in Japan, implementing software as a service (SaaS) in the education and taxation sectors. He also gained experience in domestic and international sales and marketing through his work at a company that provided materials for electronic devices such as smartphones, PCs, and printers.
Masaru Muramatsu earned a master’s degree in engineering from Chuo University, Japan.
For years, digital accessibility, the practice of ensuring that digital products and services can be perceived, understood, and used by everyone regardless of ability, was treated as a compliance checkbox. That framing is no longer adequate. AI is reshaping accessibility into a strategic capability, one that is adaptive, continuous, and embedded in how people work, interact, and innovate.
As AI-enabled work becomes the norm, accessibility is no longer about supporting a small subset of users. It is about ensuring that everyone, across physical, sensory, cognitive, and neurodiverse dimensions, can fully participate in increasingly digital and AI-mediated environments. In this context, accessibility becomes foundational to productivity, inclusion, and ultimately business performance. Accessibility is also part of company culture: involving disabled and neurodiverse individuals in co-design, not just testing, creates more robust and adaptable systems. Sustaining long-term impact also requires investment in skills and culture, training employees, fostering inclusive design practices, and making accessibility a shared responsibility across teams.
The opportunity: AI as a scaler of inclusion and innovation
AI introduces a powerful opportunity to rethink accessibility at scale.
First, it enables real-time content adaptation. Capabilities such as automatic captioning, transcription, translation, and alternative text generation allow organizations to dynamically tailor content to different user needs. AI can also adjust reading levels, restructure complex information, and personalize interaction styles, supporting a broader range of cognitive and sensory preferences.
Second, AI supports continuous accessibility operations. Traditionally, accessibility has relied on periodic audits and remediation efforts. AI-driven testing tools now allow organizations to embed accessibility checks directly into development pipelines, transforming accessibility into a continuous, iterative process aligned with DevOps cycles.
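To illustrate the integration pattern, here is a deliberately minimal, hypothetical pipeline gate written with Python's standard-library `html.parser`: it flags `<img>` tags that omit an `alt` attribute. Real AI-driven accessibility tooling covers far more than this single check; the point is only the shape of a check that runs on every build.

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects <img> tags that have no alt attribute at all.

    An empty alt ("") is deliberately allowed, since it is the
    correct markup for purely decorative images.
    """

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.violations.append(attr_map.get("src", "<unknown>"))


def check_alt_text(html: str) -> list:
    """Return the src of every image missing an alt attribute."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations


if __name__ == "__main__":
    sample = '<p><img src="chart.png" alt="Q3 revenue chart"><img src="logo.png"></p>'
    print(check_alt_text(sample))  # → ['logo.png']
```

In a real pipeline, a CI job would run a check like this (or a far richer AI-driven audit) over generated pages and fail the build whenever the returned list is non-empty.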
Third, AI helps democratize innovation. By making tools and workflows more accessible, organizations can engage a wider and more diverse talent pool, including neurodiverse individuals and those historically underserved by traditional work environments. This expands creative input, improves problem-solving, and strengthens organizational resilience.
Finally, AI enables data-driven accessibility insights. Organizations can use AI to analyze accessibility barriers, monitor usage patterns, and measure outcomes, linking accessibility directly to business metrics such as productivity, employee engagement, and customer satisfaction.
Taken together, these capabilities shift accessibility from a cost center to a driver of innovation and competitive differentiation.
The pitfalls: Bias, complexity, and the risk of scaling barriers
Despite its promise, AI also introduces significant risks that organizations must actively manage.
One of the most critical challenges is bias in AI models. Many AI systems are trained on data and designed by teams that lack diversity. This can result in outputs that unintentionally exclude or disadvantage certain groups, particularly people with disabilities or non-standard interaction patterns. Without deliberate inclusion in design and testing, AI can reinforce existing barriers or create entirely new ones. Feedback loops that combine AI-driven insights with real user experiences are essential to countering this risk.
Another risk lies in inaccessible AI-generated content. While generative AI can produce fluent and polished outputs, these may still fail accessibility standards through improper structure, missing semantic cues, or formats that are difficult for assistive technologies to interpret. Auto-generated captions, for example, are often not accurate enough for compliance purposes.
The rise of agentic AI systems (autonomous AI that acts across workflows and applications without direct human instruction at each step) adds further complexity. If poorly designed, they can propagate inaccessible processes at scale, embedding friction into core operations rather than eliminating it.
There is also a governance challenge. As AI becomes embedded across systems, organizations must ensure clear accountability, transparency, and control over how accessibility preferences are handled, how decisions are made, and how user data is used.
Perhaps most importantly, over-reliance on automation can create a false sense of security. AI can scale testing and detection, but human validation, especially by people with disabilities, remains essential to identifying real-world usability issues.
Recommendations: Turning intent into impact
Organizations that want to lead in AI-enabled accessibility should focus on four key actions:
Prioritize accessibility as a design principle. Move from reactive compliance to proactive, accessible-by-design systems embedded in AI-enabled platforms and services.
Establish proactive AI accessibility governance. Integrate accessibility into AI governance frameworks early, ensuring inclusive workflows and avoiding costly retrofits.
Design for workforce adaptability and inclusion. Extend accessibility strategies beyond compliance to support diverse employee needs, including neurodiversity, aging workforces, and varying cognitive styles.
Act early to mitigate risk and maximize value. Early investment reduces remediation costs, strengthens trust, and positions accessibility as a strategic differentiator rather than a regulatory burden.
AI is redefining digital accessibility as a core element of how organizations operate, innovate, and compete. Those that embrace accessibility as a strategic priority will not only meet regulatory requirements but also unlock broader talent, improve user experiences, and build more resilient AI systems.
Erica Spinoni - Senior Research Analyst, Worldwide AI-Enabled Future of Work & EMEA Practice Co-Lead
Erica Spinoni is a Senior Research Analyst for IDC’s Worldwide AI Enabled Future of Work practice, where she also contributes regional expertise on EMEA-specific trends and dynamics. Her research helps technology vendors understand how emerging technologies reshape workforce practices…
Amy Loomis, Ph.D. - Group Vice President, Workplace Solutions
Amy Loomis is Group Vice President for IDC’s worldwide Workplace Solutions. Amy leads a team of analysts focused on the evolving nature of human resources, skills development, collaboration, and leadership across the employee lifecycle. Her research into the Future of…
Melinda-Carol Ballou - Research Director, AI Assurance, ALM, Quality & Portfolio Strategies
Melinda Ballou delivers insights into the future of AI assurance, the impact of AI, ML and agentic adoption on agile and digital work, resilience, quality, product and software engineering, the role of technology in business and culture, and the evolution…
If you’ve spent the last few years talking to enterprise IT buyers about cost efficiency, you weren’t wrong. That was the conversation. But over the past few months, things have clearly shifted.
The outbreak of war in the Middle East, with its direct impact on people and organizations in the region, as well as broader effects on energy costs and IT manufacturing supply chains, is a primary driver. At the same time, early AI buildout pressures on memory supply were already raising concerns.
Today, when CIOs and their teams make technology decisions, the question is no longer, “How do we optimize spend?” It’s, “How do we keep the business running when things break?”
This shift shows up clearly in data from two major surveys on IT priorities and spending plans conducted in February and again in March. Concerns about hardware supply constraints have increased by more than 15%, and geopolitical risk is rising quickly. Meanwhile, traditional cost pressures, while still present, are starting to take a back seat.
This is not because cost no longer matters. It is because cost is now seen as downstream. If systems go down, supply chains stall, or cyber incidents escalate, cost becomes secondary very quickly.
What are IT buyers most concerned about in 2026?
When you talk to IT leaders today, the tone is different. There is more urgency, more realism, and more skepticism. They are thinking about exposure:
Where are we too dependent on a single cloud region?
What happens if a supplier cannot deliver?
How quickly can we recover from a cyber event?
Increasingly, they recognize that these risks are interconnected. A geopolitical event can disrupt supply chains, which impacts infrastructure, which affects applications, and ultimately hits revenue.
That is why IDC is seeing a clear pivot toward resilience.
Cybersecurity has moved to the top of the investment list globally, not just as a defensive measure but as a core part of keeping operations running. At the same time, organizations are accelerating investments in multi-region cloud architectures and backup strategies. Cloud security and multi-region resilience are now leading priorities across every major region.
IDC is also hearing from CIOs about a growing push to reduce dependency. CEOs are placing more focus on diversifying suppliers across all parts of the business. CIOs are responding by exploring sovereign cloud options and rethinking how and where infrastructure is deployed.
AI has not disappeared from the agenda, but it is being reframed. It is no longer just about innovation. It is about using automation and intelligence to keep systems stable under pressure.
Put simply, IT buyers are trying to build systems that can bend without breaking.
What does this shift mean for IT suppliers?
For suppliers, this shift creates both risk and opportunity.
The biggest risk is continuing to sell the way you did before. Leading with performance benchmarks, cost savings, or incremental features will not resonate the same way.
The opportunity is much bigger. Buyers are actively looking for partners who can help them navigate uncertainty. They are asking tougher questions:
What happens if this service goes down in one region?
How quickly can workloads move?
Where are the hidden dependencies?
How exposed am I if conditions worsen?
If you can answer these questions clearly and credibly, you move from being a vendor to becoming a strategic partner.
How should IT suppliers respond to rising resilience demands?
The challenge is that resilience means something different depending on where you sit in the ecosystem. The common thread is this: you must show how your offering performs under stress, not just under ideal conditions.
Cloud providers: How to prove resilience beyond scale
For cloud providers, this is a moment to rethink the narrative.
Scale and efficiency still matter, but they are no longer enough. CIOs want to know how your platform behaves when a region is disrupted, connectivity is constrained, or workloads need to move quickly.
This means making multi-region resilience the default, not an add-on. It also requires transparency about risk exposure and greater flexibility around sovereignty and localization.
In short, you are not just selling capacity anymore. You are selling survivability.
SaaS providers: Why continuity is now a core differentiator
SaaS providers are increasingly part of the critical path of operations. If your application goes down, the business feels it immediately.
Buyers want reassurance. They want to understand your disaster recovery posture, regional architecture, and dependencies. They want to know how their data is protected and how quickly services can be restored.
The vendors that stand out will clearly articulate how they maintain continuity, not just deliver functionality.
IT and professional services firms: From transformation to readiness
For services firms, the conversation has shifted from long-term transformation to immediate readiness.
Clients still care about transformation, but right now they need help answering urgent questions: Where are we exposed? What should we fix first? How do we prepare for multiple scenarios?
There is a real opportunity to lead with practical, actionable support. Rapid assessments, scenario planning, and resilience design are where clients need help now.
Speed matters. Clarity matters even more.
Communications providers: Why network resilience is now critical infrastructure
Connectivity has always been important. Now it is critical infrastructure in the truest sense.
Organizations are looking for redundancy, alternative routing, and, in some cases, entirely new connectivity models, including satellite and hybrid networks.
The differentiator is reliability under pressure. If you can demonstrate that your network keeps people and systems connected when other options fail, that becomes a powerful advantage.
Infrastructure vendors: Delivering certainty in uncertain supply chains
Hardware vendors are facing a different kind of scrutiny.
Availability and certainty in delivery are becoming as important as performance. Buyers want to know not just what the system can do, but whether they can actually get it, deploy it, and rely on it.
Transparency into supply chains, flexibility in configurations, and the ability to adapt to constraints are becoming key differentiators. In this environment, certainty is value.
Why IT buying decisions are shifting from optimization to assurance
Stepping back, what we are seeing is a shift in how technology decisions are made.
It is less about optimization and more about assurance. Less about peak performance and more about consistent operation.
The suppliers that win over the next six months will be the ones that can answer a simple but critical question:
What happens when things do not go according to plan?
From an enterprise IT leader’s perspective, that is no longer a hypothetical. It is the reality they are planning for every day. Resilience is no longer just a capability. It is the basis for trust.
What should IT suppliers do next?
If you are an IT supplier, now is the time to recalibrate how you engage with customers.
Start by pressure-testing your value proposition:
Can you clearly articulate how your offering performs under disruption?
Can you quantify how you improve resilience, not just efficiency?
Can you help customers understand and reduce their exposure?
Just as importantly, ground your strategy in real buyer insight.
IDC’s latest Future Enterprise Resiliency & Spending Survey (March 2026, Wave 2) provides a detailed view into how enterprise IT leaders across regions are reprioritizing risk, resilience, and investment decisions in response to geopolitical and supply chain disruption.
We encourage you to explore the survey findings to better understand how enterprise IT leaders are reprioritizing risk, resilience, and investment.
Suppliers that align early with these shifts will be better positioned to engage, differentiate, and win. Because in this market, insight isn’t just helpful.
It’s your competitive edge.
Rick Villars - Group Vice President Worldwide Research
Rick is IDC's leading analyst guiding research on the future of the IT Industry. He coordinates all IDC research related to the impact of Cloud and the shift to digital business models across infrastructure, platforms, software, and services. He helps…
AI is not just changing job descriptions; it is actively rewiring how work is coordinated, controlled, and created, and it is doing so on multiple fronts at once, inside the same organization.
AI Is Transforming Work on Multiple Fronts Simultaneously
Some of our IDC Future of Work predictions bring this into sharp focus: by 2027, 40% of current job roles in large organizations will be redefined or eliminated, accelerated by GenAI adoption. At the same time, by 2030, around 70% of new job roles in Europe are expected to be directly enabled by AI technology. This is not a neat “old jobs out, new jobs in” swap. It is a systemic reconfiguration of how value flows through the enterprise. Yet most leadership frameworks still present AI scenarios as if they were mutually exclusive: automate to cut headcount, augment to boost productivity, redesign work for agility, or push toward autonomous operations.
When Automation, Augmentation, and Autonomy Collide
On the ground, those dynamics do not arrive one by one; they collide. In the same business unit, you may be cutting FTEs as routine tasks are automated and taken over by “digital colleagues,” while simultaneously hiring AI orchestrators, prompt engineers, and automation product owners to keep up with demand for AI-adjacent skills. You may be tearing up long-standing workflows as agentic systems reshape a significant share of knowledge work, at the same time as parts of your operation drift toward near-autonomous execution, powered by employees building personal agents and conversational workflows that quietly absorb whole segments of the process. These are not options on a slide; they are concurrent forces acting on the same organizational fabric. Treating them like menu choices is not workforce planning. It is misdiagnosing an organizational phase transition, a fundamental shift in the underlying architecture of how work happens.
From Role-Based Models to Capability-Based Architectures
The uncomfortable truth is that many leaders are still planning for roles, new and “to be eliminated,” while AI is reshaping the landscape at the level of capabilities and architecture. You can see the tension in three simple signals. A clear majority of European organizations have already deployed or are piloting automation to offset chronic labor shortages. A growing share of executives openly discusses replacing positions with automation, and many plan to substitute a measurable portion of their workforce with “digital colleagues.” Meanwhile, by the end of this year, a meaningful slice of frustrated knowledge workers with no formal development background will be building their own agentic workflows to change how they work, regardless of what HR’s role catalog says. When people can spin up an agent in a week, any static role taxonomy you publish today is out of date tomorrow. The center of gravity moves from “what roles do we have?” to “what capabilities can we compose, and how fluidly can we recombine them as AI matures?”
Why Traditional Role Models No Longer Hold
Role-centric models rest on some seriously wrong assumptions: that tasks are stable enough to bundle into jobs, that jobs are stable enough to plan around for three to five years, and that hierarchies are stable enough to govern how value flows. Agentic AI quietly breaks all three. Tasks fragment, recombine, and migrate between humans and machines in near real time. Work starts to look less like a tidy org chart and more like a living graph of capabilities: human, machine, and hybrid. In that context, planning headcount against static job descriptions is like trying to architect a cloud-native platform using only server rack diagrams.
Architecture Determines the ROI of AI
However, IDC’s Future of Work research also shows that when enterprises invest in digital adoption and automated learning technologies, they can unlock substantial productivity gains. The pattern across these findings is consistent: it is the architecture that determines the yield of AI, not just the tools themselves. If your workflows are fragmented, AI struggles to “see” the end-to-end journey it needs to transform. When critical data is locked in legacy systems, it cannot provide the rich, contextual recommendations you were promised. When governance is tuned for stability rather than experimentation, it throttles the learning cycles AI needs to be useful. Layer on top the reality that many organizations openly acknowledge they lack the capability support to implement automation effectively, and a clear picture emerges.
AI Amplifies Existing Organizational Weaknesses
In that environment, throwing more AI at the problem does not fix anything. It amplifies what is already there. Bad processes simply run faster. Poor decisions scale further. Shadow automation blooms in the gaps, as frustrated employees script around the constraints of the operating model. AI becomes an accelerant, not a cure.
Reframing the Strategic Question for Leaders
This is why the strategic question has to change. Instead of asking, “Which jobs will we automate?”, leaders need to ask, “Is our organization structurally able to absorb intelligence at scale?” Answering that requires moving from headcount planning to capability mapping, designing work around the interplay between human strengths, judgment, domain expertise, relationship-building, and machine strengths such as pattern recognition, generation, and orchestration. It means treating architecture as a product: standardizing interfaces, workflows, and data contracts so AI can plug into work without bespoke integration every single time. It means tracking how many workflows, decisions, and customer journeys are genuinely enhanced by AI, not just how many licenses have been bought. And it means steering reduction, augmentation, redesign, and autonomy as one coherent portfolio of change, not four disconnected projects.
Conclusion: The Real Stress Test Is Your Operating Model
AI is already changing jobs. The real test is whether your operating model can evolve quickly enough to harness that change, or whether AI will simply accelerate you toward the limits of the system you already have.
Meike Escherich - Associate Research Director, European Future of Work - IDC
Meike Escherich is an associate research director with IDC's European Future of Work practice, based in the UK. In this role, she provides coverage of key technology trends across the Future of Work, specializing in how to enable and foster teamwork in a flexible work environment. Her research looks at how technologies influence workers' skills and behaviors, organizational culture, worker experience and how the workspace itself is enabling the future enterprise.
Key figures at a glance
¥1,304B – IT modernization services market size, 2025
10.2% – Projected average annual growth rate, 2025–2030
¥2,123B – Forecast market size by 2030
~80% – Large and mid-sized enterprises still running legacy systems
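The key figures above hang together arithmetically; a back-of-the-envelope check (a sketch using only the rounded numbers quoted here, not IDC's forecast methodology):

```python
# Back-of-the-envelope check of the implied CAGR from the figures above.
start_2025 = 1304  # JPY billions, 2025 market size
end_2030 = 2123    # JPY billions, 2030 forecast
years = 5

cagr = (end_2030 / start_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → Implied CAGR: 10.2%
```

The implied growth rate reproduces the quoted 10.2% average annual growth.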
Why Japan is outpacing the world
Japan’s IT services market is forecast to grow at a CAGR of 6.6% from 2024 to 2029, nearly double the global average of 3.6%. Why? The answer is structural: Japan carries a uniquely heavy legacy burden built up over decades of investment in proprietary mainframe environments and complex bespoke systems, maintained by a workforce that has long supported them. Now, three forces are converging to make modernization unavoidable:
Fujitsu Mainframe Sunset – In 2022, Fujitsu announced the end of sales and support for its mainframe and UNIX server products around 2030. This single announcement put more than 1,000 enterprises on an irreversible countdown, accelerating timelines across the entire Japanese market.
AI Readiness Imperative – AI adoption presupposes tightly integrated data pipelines and modern business process architectures, exactly what legacy systems make impossible. Modernization is no longer optional for companies that want to remain AI-competitive.
Demographic Pressure – The generation of engineers who built and maintained Japan’s legacy systems is retiring. Organizations face a narrowing window to migrate knowledge and infrastructure before institutional memory disappears entirely.
Three paths to modernization
IDC segments IT modernization services into three execution types, each with distinct implications for services firms:
Rehost – Lift-and-shift to non-legacy platforms. Preserves existing application assets. The near-term entry point for enterprises constrained by budget or migration timelines.
Rewrite – Convert legacy source code to modern languages without changing business logic. A middle path for controlled transformation.
Rebuild – Redefine processes, data models, and architecture from the ground up. The highest-value, highest-complexity path.
Near-term, rehost is the second-largest segment after rebuild, driven by enterprises responding urgently to mainframe end-of-life deadlines; however, the segment has already reached maturity and is forecast to decline. The mid-to-long-term growth opportunity lies in application modernization: rewriting, refactoring, and adopting microservices and cloud-native architectures.
What enterprises need from services firms
IDC surveyed large and mid-sized Japanese enterprises and found that organizations with significant legacy exposure do not simply want technical execution; they want transformation partners. Security remains a baseline expectation, but top-ranked needs now include business process redesign and cloud architecture strategy.
Demand signals also diverge meaningfully by sector:
Financial Services – Prioritizes cloud-native application development capabilities: the ability to innovate rapidly on modern infrastructure.
Manufacturing and Distribution – Prioritizes business process transformation, embedding efficiency and intelligence into operations, not just upgrading the underlying technology.
Across all sectors, IDC observes a consistent shift in enterprise expectations: business outcomes are becoming the primary purchase criterion. Technical competence is assumed; value creation is the differentiator.
How to build a winning position now
For services firms, the competitive imperative is clear. The service providers best positioned to win this market will do three things:
1. Codify your legacy modernization track record
Past engagements are an underutilized asset. Service providers should build structured libraries of the business outcomes they have delivered (cost reductions, cycle-time improvements, AI readiness unlocked) and make these the core of their go-to-market narrative.
2. Develop industry-specific reference architectures for the AI era
Generic modernization pitches are losing traction. Enterprises want system architectures and implementation roadmaps calibrated to their sector, their regulatory environment, and their AI ambitions.
3. Invest in application modernization capabilities ahead of demand
The rehost wave is already approaching its peak. The high-margin opportunity – rewrite, refactor, rebuild – is building behind it. Service providers who develop deep cloud-native and microservices capabilities now will be the ones enterprises turn to in the second half of this decade.
About the IDC Report
IDC has published a comprehensive analysis of Japan’s IT modernization market: 2026 Japan IT Modernization Market Analysis. The report provides a medium-term market forecast for IT modernization of legacy systems — a primary growth driver in IDC’s Japan IT services market outlook. Legacy systems are characterized by aging and obsolescence, excessive complexity and scale, and a lack of transparency. The report also covers enterprise IT modernization trends and analyzes trends in vendors’ service offerings. Market forecasts are segmented by service type, execution type (rehost, rewrite, rebuild), system type, and industry vertical. Together, these analyses offer a comprehensive view of shifting enterprise needs, emerging market opportunities, and the strategies and service offerings of leading vendors in Japan’s IT modernization landscape.
The global semiconductor market is undergoing a seismic transformation. IDC’s latest forecast projects the industry will surge past the $1 trillion revenue threshold in 2026, significantly ahead of prior expectations. The growth will be driven overwhelmingly by AI infrastructure investment, which is reshaping the entire market.
Total semiconductor revenues are forecast to reach $1.29 trillion in 2026, up 52.8% year over year from $842.8 billion in 2025. The memory segment is at the epicenter of this shift: DRAM revenues alone are projected to nearly triple in 2026 to $418.6 billion, driven by demand for high-bandwidth memory (HBM) and DDR from hyperscalers and AI infrastructure providers. Meanwhile, non-memory semiconductors are growing at a robust but more measured pace, reaching $693.5 billion in 2026.
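A quick arithmetic check (a sketch using the rounded figures quoted in this post) confirms that the segments and the headline line up:

```python
# Quick consistency check of the forecast figures quoted above (USD billions).
total_2025 = 842.8
headline_2026 = 1290.0   # the $1.29 trillion headline
memory_2026 = 594.7
non_memory_2026 = 693.5
reported_growth = 0.528  # 52.8% year over year

# The memory and non-memory segments should sum to the 2026 headline.
segment_sum = memory_2026 + non_memory_2026
# Growing the 2025 total by the reported rate should land on the same figure.
implied_2026 = total_2025 * (1 + reported_growth)

print(round(segment_sum, 1), round(implied_2026, 1))  # both within rounding of 1,290
```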
In this post, we break down three forces reshaping semiconductors right now: why AI infrastructure has become the industry’s new center of gravity, what’s happening in memory markets and why it matters beyond the data center, and how other markets from automotive and IoT to mobile and PCs are navigating a market increasingly defined by AI.
Global Semiconductor Market: Selected Forecast (USD Billions)
Source: IDC Semiconductor & Semiconductor Applications Forecast, April 2026. A = Actual, E = Estimate, F = Forecast.
AI infrastructure: The engine of the supercycle
The single most consequential shift in the semiconductor market is the emergence of AI infrastructure as a structurally dominant end market. What began as a cyclical uplift in data center spending has evolved into a self-reinforcing investment cycle that is reshaping demand patterns across the semiconductor value chain.
Hyperscale capital expenditure exceeded $100 billion for the first time in Q3 2025, and the four largest hyperscalers are expected to increase capex by 70% year over year to approximately $600 billion in 2026. IDC forecasts data center semiconductor revenues to reach $477.1 billion in 2026. By 2030, data center semiconductors will account for $843.2 billion, nearly half the total semiconductor market.
“The semiconductor industry has crossed a structural threshold. AI is no longer a demand catalyst — it is the demand foundation. The race to build out AI infrastructure is consuming silicon at a pace the industry has never seen, and the implications for memory, logic, and packaging are profound.” —Jeff Janukowicz, Research VP, Semiconductors & Semiconductor Manufacturing, IDC
Datacenter Semiconductor Revenue Decomposition
Source: IDC Semiconductor Applications Forecast, April 2026.
The $281 billion “intelligent” datacenter segment, encompassing CPUs, AI accelerators, GPUs, custom ASICs, and networking silicon, now constitutes the largest identifiable category within non-memory semiconductors. Spending is heavily concentrated among top-tier hyperscalers and a growing set of sovereign AI infrastructure programs, many of which have secured long-term supply agreements with leading chip manufacturers.
Three factors are keeping this growth self-sustaining rather than cyclical:
Compute intensity continues to rise. Generative AI and agentic workloads require far more compute density per rack than prior architectures, increasing the overall silicon footprint
Inference demand compounds on itself. Each new model generation increases the volume of inference, requiring ongoing hardware upgrades
AI is spreading beyond the data center. As enterprises, edge deployments, and client devices begin running AI workloads locally, demand becomes more distributed
Memory: From cyclical commodity to strategic constraint
If you want to understand what’s really happening in semiconductors right now, start with memory.
Total memory revenues are forecast to rise from $226 billion in 2025 to $594.7 billion in 2026, and then to $790.4 billion in 2027. This is not simply a recovery cycle; it reflects a market that is being structurally repriced.
DRAM is where the shift is most visible. IDC forecasts $418.6 billion in DRAM revenues for 2026, up 177% year over year. This is not primarily a volume story driven by consumer devices. Hyperscalers are buying a fundamentally different, more expensive class of memory and are willing to pay a premium to secure supply. Each HBM chip also requires significantly more silicon real estate, further tightening the availability of other types of DRAM.
“The memory market is at an unprecedented inflection point, with demand materially outpacing supply. For an industry long characterized by boom-and-bust cycles, this time is different. The rapid expansion of AI infrastructure and workloads is placing significant pressure on the memory ecosystem. As a result, the market is shifting from a cyclical recovery following the 2023 downturn to a more structurally constrained environment, with clear implications for end markets.” — Jeff Janukowicz, Research VP, Semiconductors & Semiconductor Manufacturing, IDC
The HBM bottleneck
High-bandwidth memory has become the primary constraint in the AI accelerator supply chain. Most capacity is already pre-committed through 2026, with forward allocations extending into 2027. That capacity is concentrated in NVIDIA and AMD GPU platforms, along with a growing set of hyperscaler custom silicon programs.
The production economics are also very different. HBM relies on advanced packaging and stacking technologies, resulting in per-bit costs that are several times higher than standard DRAM.
Suppliers are investing aggressively to expand capacity, but the technical complexity and capital intensity mean meaningful new supply will not reach the market until late 2026 at the earliest.
NAND: AI drives storage demand
NAND Flash revenues are forecast to reach $174.1 billion in 2026, up 138.5% from 2025. AI infrastructure is again the dominant driver, with demand coming from training datasets, checkpoint storage, and high-performance inference environments.
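Taken together, the stated growth rates imply 2025 baselines that can be cross-checked against the $226 billion 2025 memory total cited earlier. A short sketch (the implied baselines are derived here, not stated by IDC; the small residual versus $226 billion would correspond to other memory types):

```python
# 2026 revenues ($B) and year-over-year growth rates as stated in the text.
dram_2026, dram_yoy = 418.6, 1.77    # up 177% year over year
nand_2026, nand_yoy = 174.1, 1.385   # up 138.5% from 2025

# Implied 2025 baselines: 2026 revenue / (1 + growth rate)
dram_2025 = dram_2026 / (1 + dram_yoy)   # ~151.1
nand_2025 = nand_2026 / (1 + nand_yoy)   # ~73.0
print(f"Implied 2025 DRAM: ${dram_2025:.1f}B, NAND: ${nand_2025:.1f}B, "
      f"combined: ${dram_2025 + nand_2025:.1f}B")  # ~224.1 vs stated $226B
```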
Unlike DRAM, the NAND market is seeing broader repricing. Enterprise SSD prices have surged as hyperscalers secure supply, which is tightening availability across consumer and OEM channels.
Other markets: Navigating the shadow of the AI supercycle
While AI infrastructure dominates the headlines, the broader semiconductor market is facing a more nuanced environment.
Non-memory, non-datacenter revenues are projected at $406.3 billion in 2026. Several end markets are dealing with margin pressure, supply allocation challenges, and macroeconomic headwinds.
In mobile, semiconductor revenues are forecast to decline to $89.8 billion in 2026. The issue is not consumer demand, particularly for AI-capable devices, but cost pressure. Memory now represents a larger portion of the bill of materials, forcing OEMs to make difficult tradeoffs between margin, pricing, and product specifications.
Automotive is being shaped more by macro factors than AI. Tariffs, interest rates, and energy prices are weighing on demand. While the long-term outlook remains strong, 2026 reflects a period of near-term softness.
IoT shows a similar pattern. The segment is projected at $136.6 billion in 2026, with near-term pressure from inventory digestion and cautious spending. However, edge AI is beginning to create a new, higher-value demand category that will become more meaningful over time.
Source: IDC Semiconductor Forecast, April 2026.
Outlook: Path to $1.75 trillion
IDC’s base case projects semiconductor revenues reaching $1.75 trillion by 2030.
Several dynamics will shape that trajectory:
Memory pricing will normalize, but remain structurally higher than pre-AI levels
Non-memory semiconductors will continue steady growth, driven by AI adoption across devices and industries
Macro and geopolitical risks will remain important variables
What is clear is that the semiconductor market has undergone a fundamental shift.
“What the data makes clear is that the semiconductor market has undergone a permanent expansion of its addressable opportunity. AI infrastructure has reset the demand baseline, memory has repriced as a strategic asset, and the industry’s growth trajectory through 2030 is no longer contingent on a consumer refresh cycle.” — Nina Turner, Research Director, Semiconductors, IDC
IDC will be tracking how AI infrastructure investment continues to reshape semiconductor demand at Computex 2026.
Jeff Janukowicz - Research Vice President, Global Lead, Semiconductors and Enabling Technologies
Jeff Janukowicz is Research VP within IDC’s enterprise infrastructure global research domain. He is the global subdomain lead for Semiconductor and Enabling Technologies. Jeff and his team deliver data-driven analysis, technology insights, market trends, and strategic guidance across compute, memory,…
Nina Turner - Research Director, Semiconductors and Enabling Technologies
Nina Turner is Research Director within IDC’s enterprise infrastructure global research domain. She focuses on silicon technologies and packaging as part of the Enabling Technologies subdomain. Nina and her team cover the breadth of processors and architectures, from datacenters to…
AI adoption is accelerating across EMEA, yet many organizations struggle to translate investment into measurable business value. This blog explores the structural challenges behind stalled AI initiatives and what differentiates organizations that successfully scale.
AI Adoption in EMEA: High Investment, Limited Business Value
AI adoption across EMEA has progressed significantly over the past 12–18 months, with organizations moving beyond experimentation into broader deployment phases. However, progress remains uneven.
IDC research shows that a substantial share of organizations are slowing down, scaling back, or refocusing their AI initiatives. This reflects a shift in priorities rather than a decline in interest. As macroeconomic pressures, regulatory complexity, and competing IT investments intensify, organizations are increasingly challenged to execute AI initiatives while demonstrating measurable business outcomes.
Why AI Projects Fail: The Execution Gap
The challenges that limit AI impact are consistent across industries, but particularly pronounced in EMEA.
Organizations continue to face difficulty in quantifying and demonstrating AI-driven ROI, alongside competition for resources and increasing regulatory uncertainty. According to IDC research, only 9% of EMEA organizations have been able to deliver measurable business outcomes from most of their AI-related projects over the past two years (Source: IDC Future Enterprise and Resiliency Survey, Wave 1, March 2026). At the same time, resistance to process change remains a persistent barrier, especially where AI requires cross-functional alignment and new ways of working.
These factors rarely cause projects to fail outright. Instead, they contribute to a gradual loss of momentum, where initiatives remain in pilot phases or are scaled selectively without broader organizational impact.
AI ROI: Why Proving Business Value Remains So Difficult
A central issue in AI adoption is the ability to measure value consistently.
IDC research highlights that AI impact extends beyond direct cost reduction to include indirect benefits such as productivity gains, revenue enablement, and risk mitigation. This makes it difficult to capture value using traditional ROI models.
As a result, many organizations lack a standardized approach to evaluating AI initiatives. This leads to fragmented decision-making, where use cases are assessed in isolation and scaling decisions are not consistently aligned with business priorities.
Without a clear framework for value measurement, AI initiatives often struggle to move beyond experimentation.
Scaling Enterprise AI: Why Moving Beyond Pilots Is So Hard
Scaling AI requires more than successful use cases. It requires integration into core business processes and operating models.
IDC research indicates that organizations face increasing challenges when moving from pilot to scale, particularly in relation to budget allocation, operational complexity, and governance requirements. While initial projects are often funded as innovation initiatives, scaling requires sustained investment in infrastructure, data, and ongoing operations.
This transition exposes structural gaps. Organizations that lack alignment between business strategy, data architecture, and execution models often struggle to scale beyond isolated successes.
AI Governance and Regulation in EMEA: Barrier or Opportunity?
Regulation is a defining factor for AI and broader technology adoption in EMEA.
According to IDC research, regulatory requirements around data protection, AI, and cybersecurity are significantly shaping how organizations approach AI deployment. While compliance increases operational and infrastructure costs, it is also driving more structured approaches to governance.
At the same time, organizations report benefits such as improved resilience, stronger ESG performance, and increased customer trust. This suggests that regulation is not only a constraint, but also a catalyst for more sustainable and trusted AI adoption.
Organizations that integrate governance early are better positioned to scale AI effectively.
AI and Workforce Transformation: Why the Human Factor Matters
AI transformation is not purely a technology challenge. It is fundamentally an organizational one.
IDC research emphasizes the importance of aligning AI initiatives with workforce capabilities, culture, and leadership. This includes reskilling, change management, and building trust in AI-driven processes.
Organizations that fail to address these elements often encounter slower adoption and limited impact. In contrast, those that integrate the human factor into their AI strategy are better positioned to realize long-term value.
The Evolving Role of the CIO in AI-Driven Organizations
As AI becomes central to business strategy, the role of the CIO continues to expand.
IDC research shows that digital leaders are increasingly expected to drive business value, support growth, and strengthen resilience. For instance, 42% of EMEA C-Suite leaders expect their CIO role to lead digital and AI transformation with a major focus on specifically creating new revenue streams (Source: IDC Worldwide C-Suite Tech Survey, September 2025). This requires a shift from a technology-centric role to a more strategic position aligned with business outcomes.
CIOs and digital leaders are therefore playing a critical role in connecting AI initiatives with measurable impact and ensuring alignment across the organization.
From AI Strategy to Execution: What Differentiates Leading Organizations
The current phase of AI adoption in EMEA is defined by execution.
Organizations that successfully scale AI tend to take a more structured approach, linking initiatives to business objectives, embedding governance early, and aligning technology with organizational change.
However, many organizations are still in transition. Key questions remain:
How can AI ROI be measured consistently across different use cases?
Which frameworks support scaling AI at the enterprise level?
What changes are required to align workforce and operating models?
How should the role of digital leaders evolve to effectively support AI-fueled business transformation?
These questions will be explored in more detail in the upcoming webinar.
Drawing on insights from the IDC EMEA Digital Leader Playbook, the session will provide a practical perspective on how organizations across the region are approaching AI strategy and value realization.
Join the Discussion
For organizations seeking to move from AI experimentation to measurable business impact, understanding these dynamics is critical.
Martina Longo is a research manager in the IDC Digital Business Research Group. In her role she advises ICT players on how European organizations create business value using digital technologies. She also leads IDC European Digital Native Business research, focused on enterprises born in a modern technological world, spanning start-ups, scale-ups, and more mature digital natives. Within European Digital Business Research, the European Digital Native Business, Start-ups and Scale-ups theme advises technology suppliers on the market dynamics and segmentation, business priorities, tech buying patterns, and go-to-market approaches (sell to/sell with) needed to engage digital native organizations in Europe.
Hannover Messe 2026 ran from April 20 to 24 in Hannover, Germany, and it delivered. Under the theme “Think Tech Forward”, the show brought together over 130,000 visitors from more than 150 countries, 4,000 exhibitors, and 300+ start-ups across industrial automation, software, and hardware.
Brazil was this year’s partner country, and the event itself got a makeover: a new hall layout, a revamped thematic structure, and a brand-new Defense Production Park zone, reflecting just how much the scope of industrial technology has shifted.
Here are the Top 10 things I’m taking home, and yes, I’m happy to be challenged on any of them.
The user attention battle is quietly beginning
My strongest impression from the #HMI26 floor was of witnessing the first deployments in the battle over who will control the factory of the next decade. Most demos I saw at Hannover Messe 2026 started with a chat box prompting the user. The question is how many of these can co-exist in a single factory setup, and my answer is: as few as possible. The battle for the factory UI has therefore begun. One likely outcome: one system becomes the front end workers actually use, while the others settle into solid back-end roles.
Context is the new competitive asset. Whoever owns it, then owns the process. And physics-aware data fabrics are the competitive moat
The differentiating capability in industrial AI is not model quality but contextual depth. A physics-aware industrial data fabric that connects real-world physics, process history, sensor telemetry, and operational and operator knowledge provides more competitive advantage than any algorithm running on top of it. Ideally, manufacturers will define a technology journey built around data first, then context, then impact, but I fear the pressure to rush the deployment of industrial AI apps may result in missed opportunities to build this critical industrial model foundation.
MES stands for “Must Evolve Soon”
This application is the spine of the plant, acting as both the system of engagement and the system of record. But process flexibility is now its hardest test. Why? First, top-down pressure: Advanced Planning and Scheduling applications are seeing accelerated adoption, driven by a new generation of algorithms capable of delivering real-time, context-rich, executable plans. As APS systems push dynamic re-sequencing into execution, MES must evolve fast enough to receive and act on what APS produces, or risk being seen as the weakest link. Then comes the bottom-up pressure: unstructured production cells (e.g., multifunctional robots, wireless machines, AMR-driven object routing) are gradually replacing fixed lines. Customer requests are shifting toward rapid configuration, faster changeovers, and multifunctional automation. MES must evolve to accommodate less deterministic workflows, or lighter tools will fill the gap.
Forget upskilling. The connected worker is all about context generation and retention
The ability to bring anybody up to speed has so far been one of the typical selling points for connected frontline worker platforms. But this barely scratches the surface. The combination of AI-first vision systems, IIoT, RFID, RTLS, and mobile or wearable devices creates an ultra-visible data substrate that makes the factory transparent. On top of it, the layer of human-process interaction managed through connected worker platforms enables unprecedented visibility into how people interact with process execution steps. This is the best possible material for AI-driven process improvement. The data gold mine is not just in the machine data; it is in the analysis of what happens between the worker and the process.
The industrial metaverse is developing as a hyper-contextual decision-making environment
The exponential growth in data availability, combined with falling costs of modelling and representation, is unlocking use cases that were economically impossible two years ago. The "VCR" moment has arrived: we can now "zoom in and zoom out" and "fast-forward" the process for continuous multi-scenario planning and simulation, as well as "rewind" or play back the process for traceability and analysis.
Right-size AI now or face the potential consequences
The differentiating capability will be the agentic continuum, i.e., the unbroken intelligent chain across production execution. But building that chain responsibly requires confronting infrastructure and cost realities that vendor marketing may currently be underplaying. Right-sizing AI, that is, matching model scale and infrastructure to actual operational demand, is a business continuity decision. The question is not "what is the most powerful model?" but "do we need AI at all for this?" and, if the answer is yes, "what is the appropriate model for this decision or process automation, in this operating environment?"
Manufacturing runs on deterministic sequences. Agentic AI is inherently non-deterministic. Reconciling these two realities is the governance challenge
Two distinct scenarios define the governance challenge. In the first, the desired output is well understood, and users can accept or reject an AI result without needing to inspect the internal process. In the second, the correct answer is uncertain, and full transparency into how the model generated its output is required before the result can be trusted. The challenge is how to gradually hand over larger pieces of process control to an agentic software layer that is stochastic in nature. Most manufacturing companies today are only comfortable approving small, incremental AI-driven changes, not because AI is incapable of more, but because the accountability and auditability frameworks for automating larger decisions do not yet exist.
So what?
What does this mean in practice? Three implications stand out.
Survive to Scale: Link the technology curve to the organisation curve
Technology is advancing faster than most organisations can absorb it. The strategic risk for many manufacturers is not deploying too slowly; it is scaling before the organisational substrate is ready.
Bring in the Naysayers: Organisational buy-in requires involving sceptics early, not convincing them late
There is a saying that goes roughly: "Don't let the people saying it can't be done disturb the people who are already doing it." But in this new venture, bringing in the contrarians will be important. Creating forums where sceptics stress-test plans with the utmost ferocity (before the market does!) will be key.
Complexity demands simplicity: Focus on fundamental problems, not exhaustive use-case catalogues
Technology is evolving faster than any list can stay current. Vendors and manufacturers alike should resist chasing every new capability appearing on the horizon and instead concentrate on first-principles-based, core solutions that foster data integration for autonomy and better decision-making.
For a deeper look into Lorenzo’s research, visit our website. If any of these perspectives challenge your thinking or connect to your priorities, we would be glad to continue the discussion via our contact form.
Lorenzo Veronesi - Associate Research Director, IDC Manufacturing Insights - IDC
Lorenzo Veronesi is an associate research director for IDC Manufacturing Insights EMEA.
In this role, Veronesi leads the Worldwide Smart Manufacturing research program and supports all the IDC MI research services for EMEA, by looking at Digital Transformation drivers in multiple manufacturing industry sub-verticals. He is also often involved in consulting projects across the world for end-users, IT vendors and public authorities.
During the last decade his research has focused across key processes such as manufacturing operations management, supply chain management, and product lifecycle management in multiple manufacturing verticals, including - among others - automotive, aerospace, machinery, high-tech, chemicals, CPG, and fashion.
Before joining IDC, Veronesi worked as analyst in multiple projects including research in the industrial logistics sector and as advisor for public authorities in Italy.
Veronesi holds an MSc Degree in Regional Science at the London School of Economics and Political Science and has graduated cum laude at the Bocconi University in Milan.
International Data Group is committed to protecting the environment, the health and safety of our employees, and the community in which we conduct our business. It is our policy to seek continual improvement throughout our business operations to lessen our impact on the local and global environment. We are committed to environmental excellence, pollution prevention and to purchasing products that reduce the use of natural resources.
We fulfill this mission by a commitment to:
Encouraging all partners to share in our mission
Understanding environmental issues and sharing information with our partners
Recognizing that fiscal responsibility is essential to our environmental future
Instilling environmental responsibility as a corporate value
Developing innovative and flexible solutions to bring about change
Using our platforms and position in the IT industry to promote sustainability
Minimizing air travel to help reduce our impact on the environment
Minimizing the use of materials and energy consumption in our offices
Creating a working environment that efficiently uses our office space
Developing and maintaining a hybrid working model that benefits both our employees and business partners
Encouraging employees to measure, minimize and collaborate on reducing energy consumption at home and in the office
Engaging employees and promoting active participation in environmental and sustainability initiatives