The linear supply chain, which was optimized solely for cost, speed, and sequential handoffs, is over. In this model, if one link breaks, the entire chain comes to a halt, as there is no built-in redundancy or networked capability to navigate around the problem. As we look toward 2030, the key characteristic of successful operations is no longer just efficiency; it is intelligence at scale. This shift to an “ecosystem” or “network” model is critical for 2026 and beyond.

The last few years have served as a brutal stress test for legacy models, exposing structural fault lines that “optimization” can no longer hide. In late 2024 and throughout 2025, we witnessed a convergence of volatility that linear chains simply could not absorb.

Three specific industry failure modes have emerged from this period, signaling why a new direction is inevitable:

  • The Tier-N Blindspot (The Visibility Gap): A major automotive manufacturer recently halted production when a climate event impacted a Tier 3 sub-component provider. Lacking multi-tier visibility, the planning team remained unaware of the risk until Tier 1 shipments ceased.
  • The “Digital Tower of Babel” (The Interoperability Gap): During recent port congestions, manual handoffs between disparate systems prevented logistics networks from adapting, causing cascading delays. Agile firms pivoted instantly using open platforms while traditional operators remained trapped by disconnected data.
  • The Expanded Attack Surface (The Security Gap): Rapidly increasing IT and OT connectivity without robust security has turned supply chain networks into prime targets for ransomware, cyber-physical attacks on IoT equipment, and AI-enabled attack vectors. Enterprises are deploying distributed, AI-driven systems to proactively neutralize risks from external partners to internal operations.

These are not isolated incidents; they are the growing pains of a sector in transition. They underscore why the next five years will not be defined by better silos, but by the dissolution of silos altogether.

These insights reflect IDC’s 2026 FutureScape: Worldwide Supply Chain and Industry Ecosystems research, which outlines the forces reshaping global operations and the capabilities leaders must prioritize. Explore the full predictions in the global report.

Emerging from this volatility are three distinct trends that will define the path to 2026 and beyond.

1. Multi-Enterprise Orchestration: Visibility That Extends Beyond Boundaries

Disruptions now emerge across extended supplier tiers, logistics partners, and regional networks. Traditional visibility approaches anchored in ERP data and Tier 1 insights are no longer sufficient.

Supply chains must evolve into multi-enterprise networks that enable:

  • Real-time visibility beyond Tier 1 suppliers
  • Shared alerts and contextual intelligence among all partners
  • Coordinated response actions across nodes

This shift moves visibility from a standalone tool to an integrated capability woven through planning, execution, and risk management.

As a result, IDC predicts:

By 2028, 50% of enterprise-scale supply chains will use business networks to enable n-tier visibility, serving as a key mechanism to reduce the impact of disruption and improve response speed by 25%.

Organizations that build this foundation gain faster detection, more accurate impact assessment, and greater confidence under volatility.

2. Supplier and Partner Ecosystems: Interoperability as a Performance Multiplier

The ability to work seamlessly across partner ecosystems will define future competitiveness. Interoperability, once a technology challenge, is now a strategic one.

Next-generation supply chains require platforms that:

  • Integrate supplier, logistics, and customer systems with minimal friction.
  • Support shared workflows, not just shared data.
  • Enable AI agents to operate across organizational boundaries.
  • Maintain consistent process logic, metrics, and governance across nodes.

As more partners connect to shared platforms, these networks become orchestrated ecosystems rather than loose collections of bilateral relationships.

As a result, IDC predicts:

By 2029, 45% of G2000 companies will have adopted agentic AI–driven channel management and orchestration, driving a 20% revenue uplift and a 30% improvement in partner and customer satisfaction scores.

This interoperability amplifies agility: when market conditions shift, changes cascade across partners in hours, not months.

3. Data Foundations and Distributed AI-Driven Security: Trust at Ecosystem Scale

As supply chains become more interconnected, the surface area for cyber and data risk expands dramatically. At the same time, AI’s effectiveness depends on high-quality, secure, and interoperable data.

A modern supply chain must invest in:

  • Federated data models enabling domain-level control with shared standards.
  • Governance frameworks, ensuring consistent semantics, lineage, and quality.
  • Distributed AI-driven security that continuously assesses ecosystem risk.
  • Zero-trust principles applied across suppliers, platforms, and data flows.

Trust is no longer about internal compliance. It is about ensuring safe, reliable data movement across the entire network, because partner data is now operational data.

As a result, IDC predicts:

To secure supply chains, by 2030, 60% of large enterprises will deploy distributed AI-driven cybersecurity, enabling proactive third-party risk management as AI adoption intensifies cyber risks.

These foundations ensure AI-driven decisions are grounded in secure, high-integrity data flowing consistently across partners.

Future Imperatives for Operations and Supply Chain Leaders

The predictions point to one conclusion: supply chains must operate as intelligent, interconnected ecosystems. To lead in this environment, COOs and CSCOs should focus on five strategic imperatives anchored in the three core themes.

1. Transform N-Tier Visibility into Operating Infrastructure

Treat your supply chain as a system of systems. Visibility must shift from periodic reporting to a live intelligence layer that detects disruptions at their source, whether in a sub-tier supplier or a regional hub. Establish shared workflows and coordinated decision-making models to reduce blind spots and shorten recovery times.
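The "system of systems" view can be made concrete by treating the supplier network as a graph and propagating a disruption downstream from its source. The sketch below is illustrative only: the supplier names and tiers are invented, and a real multi-tier visibility platform would work over live partner data rather than a hard-coded dictionary.

```python
from collections import deque

# Hypothetical supplier graph: each key supplies the parties listed in its value.
# Edges point downstream (supplier -> customers), ending at final assembly.
supply_graph = {
    "tier3_resin_plant": ["tier2_molder"],
    "tier2_molder": ["tier1_dashboard_supplier"],
    "tier2_harness_maker": ["tier1_dashboard_supplier"],
    "tier1_dashboard_supplier": ["final_assembly"],
}

def downstream_impact(graph, disrupted_node):
    """Breadth-first walk from a disrupted node to everything it feeds."""
    affected, queue = set(), deque([disrupted_node])
    while queue:
        node = queue.popleft()
        for customer in graph.get(node, []):
            if customer not in affected:
                affected.add(customer)
                queue.append(customer)
    return affected

# A Tier 3 failure surfaces the final-assembly exposure immediately,
# rather than when Tier 1 shipments stop.
print(downstream_impact(supply_graph, "tier3_resin_plant"))
```

The point of the exercise is the direction of detection: with sub-tier mapping, the alert originates at the Tier 3 node, not at the Tier 1 shipment gap.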

2. Architect for Interoperability to Accelerate Execution

Shift from one-off integrations to platform-based ecosystems where suppliers, carriers, and manufacturers connect with minimal friction. When systems “speak” fluently, coordination becomes orchestration, leading to fewer handoffs, lower latency, and faster alignment under stress. Select platforms that enable partners to plug in without extensive customization.

3. Treat Data Readiness as the Precursor to AI Scale

AI agents cannot scale without clean, governed, and interoperable data. Conduct a cross-functional audit of data availability and structure. Ensure that core datasets, including supplier, logistics, and product data, are aligned and secure. Data readiness is now AI readiness; without it, advanced capabilities like automated forecasting and risk sensing will fail.

4. Embed Distributed Security as a Resilience Pillar

As connectivity grows, security becomes the foundation that protects visibility and orchestration. Integrate third-party cyber assessments into supplier scorecards and deploy continuous monitoring tools. Adopt zero-trust principles across systems and data flows to detect anomalies early and maintain continuity even when threats emerge elsewhere in the network.

5. Leverage Ecosystem Intelligence for Value Beyond Productivity

Use interoperable platforms to enable new service models, dynamic capacity sharing, and sustainability-led optimization. Expand the definition of value to include resilience, customer trust, and ecosystem performance, turning the network itself into a competitive advantage.

The Leadership Mandate

Supply chains are becoming ecosystems. AI will accelerate this shift, but its value depends on network strength: visibility, interoperability, and data integrity.

Leaders must champion modernization that aligns partners, platforms, and data—core to strategic growth and operational continuity.

Investing in multi-enterprise orchestration, ecosystem interoperability, and AI-ready data foundations enables organizations to build responsive, resilient, future-ready supply chains.

Join Stephanie Krishnan for an upcoming webinar on 24 February 2026 at 1:30 PM SGT exploring what agentic AI readiness means in Asia Pacific and how organizations can move from proof of concept to production responsibly. Register now!

Stephanie Krishnan - Associate Vice President - IDC

Stephanie Krishnan leads IDC’s Asia/Pacific research and advisory for supply chain, manufacturing, retail, and adjacent industry domains. As Associate Vice President for IDC Insights, she guides organizations through the rapid transformation toward digitally enabled, AI-driven, and highly interconnected operations. Her work centers on the future of supply chain ecosystems, operational resiliency, sustainability, and the rise of agentic and autonomous decision-making across global networks.

Announced on 5th December, these increases take effect from 1st July 2026 and will impact most Enterprise subscribing customers at their next major agreement renewal. Microsoft justifies the increases by citing additional features, functionality, and AI elements, and they affect customers of all types across all territories and currencies. These list price increases differ from the recently announced changes to automatic entitled volume license discounts; whilst pricing is still negotiable, list price increases will ultimately influence end customer pricing.

Microsoft 365 Suite     Current List Price   July 1st 2026 List Price   Increase %
Microsoft 365 E3        $36.00               $39.00                     8%
Microsoft 365 E5        $57.00               $60.00                     5%
Microsoft 365 F1        $2.25                $3.00                      33%
Microsoft 365 F3        $8.00                $10.00                     25%
Office 365 E3           $23.00               $26.00                     13%
Business Basic          $6.00                $7.00                      17%
Business Standard       $12.50               $14.00                     12%
Pricing in other currencies and territories is expected to increase by similar deltas.
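The increase percentages in the table are the raw price deltas rounded to whole numbers; a quick illustrative check (figures taken from the table above):

```python
# Current vs. announced July 2026 list prices in USD, from the table above.
prices = {
    "Microsoft 365 E3": (36.00, 39.00),
    "Microsoft 365 F1": (2.25, 3.00),
    "Business Basic": (6.00, 7.00),
}

for sku, (current, new) in prices.items():
    pct = (new - current) / current * 100
    print(f"{sku}: {pct:.0f}%")  # matches the rounded figures in the table
```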

Of specific note are the exceptionally large increases to Frontline Worker SKUs, typically deployed by customers with shared computer environments as a more cost-effective option for these users than full user licenses. The savings delta is now significantly eroded by this price increase, and customers utilizing these products should carefully assess their current position and forward strategy.

Whilst these price increases impact all customers, government customers will in some cases see them split across a two-year period. The timing of actual impact may depend on agreement renewal timing.

What this means

For most customers subscribing to these products under a current Enterprise Agreement (EA) or Enterprise Agreement Subscription (EAS), Microsoft may look to assert increases on renewals after 1st July 2026. Until renewal, customers with these agreement types will typically retain agreed pricing, which remains unaffected.

Whilst some of these increases look close to inflationary (E5), many enterprise customers already have negotiated discounts, and renewing customers almost always see cost increases at renewal through the reduction of discounts and/or the ramping of discounts during the agreement term. This matters because the new list price increases may be levied in addition to discount reductions, so customers should expect larger increases than previously anticipated.

IDC Sourcing Advisory Services has already observed more restrictive discounting for Frontline Worker products, and customers with these can expect to see large compound increases at renewal. We note that the F5 add-on product is not currently in scope of these increases; however, as this is an add-on product, customers will still be impacted overall.

With the removal of entitled discounts, the path is now clear for Microsoft to assert more aggressive unit cost increases, through both these list price increases and discount reductions.

Customers with Unified Support also face a double impact from these price increases: the Unified Enterprise base cost is calculated as a percentage of categorized product spend, so any product cost increase results in a Unified Support increase. These cost increases may not be co-termed with the wider renewal or with the point at which the price increases hit the customer; indeed, they may come at a later date. Customers should assess the impact across agreements, not only for budgeting but also to provide leverage for future negotiations.
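Because the Unified Enterprise base fee is computed as a percentage of categorized product spend, any product price increase passes straight through to support costs. A minimal sketch of the pass-through (the 10% rate below is an invented placeholder, not Microsoft's actual rate):

```python
# Illustrative only: the support rate is a hypothetical placeholder,
# not Microsoft's published Unified Support rate.
def unified_support_cost(product_spend, support_rate=0.10):
    """Base support fee as a percentage of categorized product spend."""
    return product_spend * support_rate

before = unified_support_cost(1_000_000)           # support fee at current spend
after = unified_support_cost(1_000_000 * 1.08)     # after an 8% product increase
print(after - before)  # the support fee rises by the same 8%
```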

What can Enterprise Customers do

Notwithstanding typical Microsoft renewal actions and strategies, customers should immediately consider the following:

  • Act early and plan now – Begin assessments now; in some cases it may make sense to shift contractual timelines to mitigate some of these cost increases in the near term.
  • Pricing is negotiable – Whilst list prices might increase, Microsoft continues to incentivize customers, and pricing is always negotiable.
  • Leverage – Customers can leverage many aspects of direct and indirect Microsoft investments and strategic product adoptions to drive optimal pricing. Collating current investments and identifying future requirements, even seemingly unrelated ones such as Azure, will help build an overall investment growth profile and negotiation leverage.
  • Strategy – As always, customers should develop a renewal strategy, aligned to their technology strategies, to drive optimal product selection, rationalization, adoption, and negotiations. They should now also take a specific view on the potential impact of these increases and how this strategy may be influenced by, or equally influence, future Microsoft commercials.
  • Frontline Workers – Where customers subscribe, or plan to subscribe, to Frontline Worker SKUs, careful impact assessment and value analysis should be undertaken to identify risks and opportunities for mitigation.
  • Early renewal – Customers may consider renewing their current agreement early, prior to 1st July 2026, to maximize price protection, but should balance this against the likelihood that early renewal pricing will increase overall costs in the immediate term.
  • Extensions – Customers with contractual extension options at fixed pricing might plan to utilize these to extend price protection durations.
  • Alternatives – Where customers are severely impacted by the changes, or where the full functionality of the suites is not being used, they may choose to realign their requirements and potentially evaluate competitive solutions. These options may provide direct cost mitigation and/or competitive leverage when commencing renewal discussions.
  • Benchmarking – Given global variance in discounts and incentive funding, customers should benchmark their Microsoft investments and renewals against their peers and the market to ensure they are cost optimal and to provide independent justification for decisions and change.

Summary

In summary, these price increases, in tandem with potential reductions in discounts, present clear commercial cost challenges for many Microsoft customers, in some cases significant ones. For Enterprise customers, pricing remains negotiable, and those who act early and assess the impact of these changes may identify opportunities for successful mitigation and cost optimization.

Neil Stewart - Vice President-Software Contracting Advisory (Major Vendors) - IDC

Neil Stewart, IDC's Senior Research Director for the Sourcing Advisory Service, provides expert coverage and insight into the Software Procurement and Commercial Market for global customers. Focusing on major software vendors, Mr Stewart provides research, data, and competitive intelligence that help customers optimise their software investments, with research and commercial insight on optimal pricing, contract vehicles and terms, available concessions, and proven negotiation strategies. Where vendors are transitioning to new product offerings, or where customer requirements are yet to be fully developed, he also provides consultative assistance and strategic insight, helping organisations both right-size software services and product requirements and understand their ongoing investments, entitlements, and contractual responsibilities.

The IT industry stands on the brink of one of its most transformative eras, triggering major consolidations in many long-standing sectors and the emergence of major new markets. IDC’s latest research shows that the infusion of autonomy, adaptability, and decision-making into products and services with agentic AI is redefining the very foundation of technology design, delivery, and value creation. It goes beyond an evolution in AI adoption. It’s a structural shift that will determine which technology providers lead in the emerging agent economy and which struggle to adapt.

Agent Use Will Surge 10x by 2027

IDC predicts that by 2027, G2000 agent use will increase tenfold, with token and API call loads rising a thousandfold [FutureScape 2026; Category: Worldwide IT Industry; Prediction 2]. For technology providers, this is a customer adoption story and a capacity challenge. Every software provider, cloud platform, and hardware vendor will need to optimize compute delivery, token use/delivery efficiency, and orchestration tools to manage unprecedented scale. Vendors that help enterprises measure, govern, and contain the costs of agentic automation will become essential partners in the next phase of digital transformation.

Slide describing the agentic surge. By 2027, G2000 agent use will jump 10 times and token/call loads 1,000 times, making agent vetting, orchestration, and optimization essential IT responsibilities. 40% of US enterprises and 27% of Chinese enterprises have already put AI agents in production. In-app AI agents and greater use of no-code/low-code agentic orchestration platforms will make it easier than ever to deploy new agents. Limited vetting of agent options, lack of orchestration/guardrails for agent fleets, and limited insight into long-term costs become major risks.

The Coming Surge: Agents, Actions, and Industry Transformation

IDC developed its new Agent Economics Adoption and Delivery Model to provide a foundation for tracking the worldwide pace and scale of agent adoption by type of agent, and to extend that tracking by geography and sector over the next five years.

In its first release, IDC projects that the number of actively deployed AI agents will exceed 1 billion worldwide by 2029, 40 times more than in 2025. These will include in-application and standalone agents built and operated by cloud, software, and services providers, as well as a growing number of custom-configured (no-code/low-code) and bespoke agents optimized for the unique needs of individual enterprises.


More significantly, these agents will execute over 217 billion actions per day and consume 3.7 TeraTokens/Calls (3,700,000,000,000) daily to support this still rapidly expanding inferencing load. The token delivery cost worldwide for supporting all these agent actions will surpass $68 billion annually, but the cost to complete an ever more complex and sophisticated individual action will be 87% lower.
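The headline figures imply some useful per-action arithmetic. The calculation below is purely illustrative back-of-the-envelope math on the numbers quoted above, not IDC's modeling methodology:

```python
# Headline projections for 2029, taken from the text above.
actions_per_day = 217e9        # projected daily agent actions
tokens_per_day = 3.7e12        # daily token/call load (3.7 TeraTokens)
annual_token_cost = 68e9       # worldwide token delivery cost, USD per year

# Implied averages across the whole fleet.
tokens_per_action = tokens_per_day / actions_per_day
cost_per_action = annual_token_cost / (actions_per_day * 365)

print(f"~{tokens_per_action:.0f} tokens per action")
print(f"~${cost_per_action:.5f} per action")
```

On these figures the average action stays remarkably cheap, which is consistent with the claim that per-action cost falls even as individual actions grow more complex.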


For the IT industry, this represents both a windfall and a reckoning. The software, infrastructure, and service providers who can enable this scale efficiently will dominate the market. Those who can't, because they are still tied to monolithic licensing models, siloed architectures, or restricted data practices, will see margins shrink as automation commoditizes legacy value propositions.

Applications Become Agentic Platforms

A critical early driver of agent adoption and token load growth comes from IDC's Worldwide AI-Enabled Enterprise Applications and Agents 2026 Predictions, which forecasts that by 2027, agentic automation will enhance capabilities in over 40% of enterprise applications [FutureScape 2026; Category: AI-Enabled Enterprise Applications and Agents; Prediction 1]. This signals a new design imperative for the IT industry: applications are becoming actors, not just interfaces.

The winners will be those who transform products into platforms built on collaborative ecosystems of agents that anticipate user intent, orchestrate processes autonomously, and continuously optimize operations through feedback. This agentic pivot will blur the boundaries between software, infrastructure, and services delivery, pushing the entire IT industry toward modular, interoperable architectures.

Data Readiness Becomes a Competitive Advantage

Data is the lifeblood of the agent economy. IDC warns that by 2027, companies that fail to establish high-quality, AI-ready data foundations will suffer a 15% productivity loss as generative and agentic systems falter [FutureScape 2026; Category: Worldwide Agentic Artificial Intelligence; Prediction 1]. For technology providers, this means opportunity. The leaders will be those that offer the tools to unify data governance, enhance observability, and create federated architectures, fueling agents with trusted, real-time intelligence.

A New Tech Industry Transformation

For the IT industry, the coming years will bring a massive reallocation of value. Demand for hardware optimized for AI inference and training will surge. Cloud providers will see unprecedented pressure on network and compute resources. Software vendors will need to evolve beyond seat-based licensing toward models that measure value by actions, outcomes, and intelligence generated. Managed service providers, in turn, will need to automate their own operations with agentic platforms that deliver speed, transparency, and scale.

IDC’s research underscores that as agents proliferate, the IT industry must take on a new role: the orchestrator of autonomy. Providers will no longer just deliver technology; they will deliver the frameworks that govern intelligent digital resources by balancing efficiency, ethics, and economic sustainability.

Building the Foundation of the Agentic Economy

For technology providers, the time to act is now:

  • Engineer for scale and sustainability. Design architectures and platforms that can handle the exponential growth in agent numbers, actions, and token/call demand.
  • Reimagine pricing and delivery models. Move beyond seats and licenses to outcome-based models that reflect continuous, autonomous operation.
  • Champion data integrity. Invest in AI governance, observability, and interoperability to ensure agents operate with accuracy and accountability.
  • Embrace modularity and interoperability. Partner across ecosystems to create open, agentic frameworks that integrate seamlessly with others.
  • Lead with responsibility. As agents gain autonomy, establish ethical guardrails and compliance mechanisms that earn enterprise and public trust.

The IT industry has faced transformation before, from mainframes to client/server, to cloud, and now to AI-infusion. The question for every provider is no longer if this transformation will happen, but whether they will steer it.

For IT leaders and technology providers, the path forward is clear.

Design for orchestration. Deliver for scale. Govern for trust.

Success in agent economics will belong to those who can turn autonomy into advantage and help the world’s enterprises navigate this next great inflection point with confidence.

Rick Villars - Group VP, Worldwide Research - IDC

Rick is IDC's chief analyst guiding research on the future of the IT Industry. He coordinates all IDC research related to the impact of Cloud and the shift to digital business models across infrastructure, platforms, software, and services. He helps enterprises develop effective strategies for using their diverse portfolio of cloud investments and applications. He supplies early guidance on implications of critical innovations such as the shift to cloud-based control platforms for deploying/managing infrastructure, data, and code delivery as well as the emergence of AI as a critical IT workload and part of all IT products/services.

Many executive teams are asking the same question: How can we shorten the distance between a business event and a decision that matters? IDC’s FutureScape: Worldwide Data and Analytics 2026 Predictions points to a clear direction: the convergence of transactional and analytical workloads on a single platform.

In this post, I explain what converged workloads mean in practice, why adoption is accelerating, how vendors are packaging the approach, and what to prioritize as you plan the next phase of your database strategy.

What “converged” really means

Converged workloads bring transactions and analytics together so insight and action can occur simultaneously on the same data. Instead of exporting from operational systems to a separate analytics stack, with the copies, cost, and delay that entails, a converged approach runs both in one governed environment. The outcome is straightforward: decisions based on live data, not yesterday’s batch.

This shift turns databases from systems of record into systems of intelligence, where every transaction can be analyzed and acted on immediately. It forms the foundation for continuous intelligence in areas such as fraud prevention, asset health, and customer personalization.
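The core idea can be shown in miniature: a transactional write followed immediately by an analytical read against the same live store, with no export or ETL hop in between. The snippet below uses an in-memory SQLite database purely as a toy stand-in for a converged platform; the table and values are invented.

```python
import sqlite3

# In-memory stand-in for a converged store: writes and analytics
# hit the same tables, on the same live data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL, at TEXT)")

# Transactional writes...
conn.execute("INSERT INTO orders VALUES ('acme', 120.0, '2026-01-05T09:14:00')")
conn.execute("INSERT INTO orders VALUES ('acme', 80.0, '2026-01-05T09:15:00')")
conn.commit()

# ...followed immediately by an analytical read on the data just written:
# no export, no separate analytics stack, no batch delay.
total, = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = 'acme'"
).fetchone()
print(total)  # 200.0
```

A production converged platform adds the parts the toy omits, such as workload isolation, scale-out, and governance, but the data path is the same: insight runs where the transaction landed.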

Why it is accelerating now

Three forces are turning convergence from concept into practice. Cloud elasticity allows IT teams to right-size mixed workloads as demand changes, avoiding unnecessary cost and overprovisioning. Streaming and in-memory processing make it possible to ingest and analyze data as it arrives, significantly reducing latency. IDC research shows that 96% of enterprises are using or planning to use streaming for AI and analytics.

Bringing AI closer to the data further reduces pipeline friction. Seventy-five percent of organizations use or plan to use integrated vector databases to store and query embeddings for AI. Adoption of agentic patterns is also accelerating, with 53% of enterprises already running AI agents in production and another 28% planning deployments within six months.
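At its core, the integrated vector search mentioned above is a nearest-neighbor lookup over embeddings. The dependency-free sketch below shows the mechanic with made-up three-dimensional vectors; real systems store high-dimensional embeddings inside the database and index them for scale.

```python
import math

# Toy document embeddings (invented values); a converged platform would
# store these alongside the operational rows they describe.
docs = {
    "late shipment": [0.9, 0.1, 0.2],
    "invoice query": [0.1, 0.8, 0.3],
    "delivery delay": [0.85, 0.15, 0.25],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Pretend this is the embedding of the question "where is my order?".
query = [0.88, 0.12, 0.22]
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)
```

Keeping this lookup in the database, next to the transactions it describes, is what removes the extra pipeline hop the paragraph above refers to.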

A look at one approach in the market

Vendors are packaging convergence in different ways. Oracle’s approach connects Oracle AI Database 26AI, which serves as the operational system of record with in-database AI and vector search for real-time decisioning on multi-model data, and Oracle Autonomous AI Lakehouse, which provides the enterprise analytics and governance layer. The two work together to unify operational and analytical data. The lakehouse extends discovery and governance across environments, integrates with third-party catalogs, supports open engines and formats, and runs AI (including vector search) directly on lake tables. Real-time pipelines keep information synchronized across sources.

Other leading providers are taking similar paths, adding operational capabilities to lakes, analytical depth to transactional systems, and stronger governance across both.

What leaders should expect

Simplification and speed. Early wins come from fewer data copies and fewer ETL hops, which shorten time to insight and reduce integration work. Embedded automation, including self-tuning, anomaly detection, and workload management, shifts focus from maintenance to innovation.

Performance without trade-offs. Modern converged platforms are designed to analyze live operational data while preserving transactional responsiveness. In practice, that means fewer compromises between “run the business” and “analyze the business.”

Governance up front. As AI becomes operational, unified auditing, lineage, and policy enforcement are non-negotiable. Converged designs help by applying consistent controls in one place rather than stitching them together across multiple stacks.

A market tilting to cloud. Database spending continues to concentrate in cloud services. Public-cloud DBMS revenue is projected to grow at 18.3% CAGR through 2029, reflecting the shift to flexible, scalable architectures that support mixed workloads.

How to get started

  • Start with a few high-value, time-sensitive use cases. Fraud detection, predictive maintenance, and key customer interactions are strong candidates. Allow legacy systems to coexist while you validate latency, reliability, and governance controls.
  • Build governance and observability in from day one. Prioritize clear lineage, unified access policies, and end-to-end monitoring across both operational and analytical environments.
  • Choose AI-ready data platforms. Integrated retrieval and in-database AI reduce pipeline complexity and keep inference close to the data for faster insights.
  • Plan for agentic AI. Establish real-time connections between converged data stores and agent frameworks, with clear policies for access, lineage, rollback, and audit.

Takeaways

Converged workloads are transforming databases from systems of record into real-time systems of intelligence. This shift is driven by cloud elasticity, streaming and in-memory processing, and AI that operates close to the data, with agentic AI emerging as the main demand signal. In the near term, expect simpler architectures and greater automation, but make governance and observability first-class priorities from the start. Begin with a few high-value, time-sensitive use cases, validate performance and controls, and expand as operating patterns stabilize.

You can also explore other key predictions shaping the future of data and analytics in IDC FutureScape: Worldwide Data and Analytics 2026 Predictions.

Devin Pratt - Research Director, Data Management - IDC

As Research Director of Data Management within IDC’s AI, Automation, Data & Analytics practice, Devin analyzes market trends and vendor strategies shaping the Data Plane, including database management software and tools. He advises technology vendors and enterprises on product strategy, cloud and AI adoption, and the shift toward Agentic AI, delivering custom research, business value studies, and speaking engagements. His work focuses on providing clear, research-driven insights that support informed decisions and accelerate progress toward an AI-powered future.

Software development is at an inflection point. As agentic AI reshapes how teams build, deploy, and manage applications, the boundaries between developers, tools, and systems are dissolving.

The 2026 IDC FutureScape: Worldwide Developer and DevOps Predictions explores this evolution across four major shifts: from developers guiding AI-augmented tools, to intelligent agents reshaping DevOps, to organizations mastering multi-agent orchestration, and finally to the rise of structured agent development itself.

These predictions trace a dual shift: developers are simultaneously learning to work with intelligent agents and learning to build them. Both paths demand new skills, new development paradigms, and new models for scaling and governing AI across the enterprise.

The path of transformation: Developers as orchestrators

Autonomous AI agents will redefine what it means to build software. These systems will act as intelligent extensions of the development process, generating code, identifying bugs, refactoring systems, and proposing architectural improvements. This shift allows developers to move from repetitive work to higher-value problem-solving.

The human role becomes one of oversight: assigning tasks, validating outputs, and refining results. Architecture and code reviews remain essential, with human teams ensuring that AI-generated contributions meet performance, design, and security standards. At the same time, AI enhances productivity by flagging vulnerabilities, enforcing consistency, and surfacing optimizations that might otherwise go unnoticed.

As AI integration deepens, developers will take on greater responsibility for designing, guiding, and governing agent behavior. Their focus will shift toward planning, orchestration, and oversight to ensure that automation supports organizational goals while remaining ethical, explainable, and secure.

From linear pipelines to adaptive systems

Software delivery is evolving from automated pipelines to intelligent ecosystems. AI agents will be embedded across development and security workflows, automatically handling code testing, deployment, and compliance checks. These agents will work around the clock, accelerating delivery while reducing the chance of human error.

Platform engineering will provide the foundation for this model. Consistent standards, APIs, and observability across teams will ensure that agents can operate securely and reliably at scale. This transformation allows organizations to balance innovation with governance as automation reaches new levels of efficiency.

The shift to agentic delivery represents a significant inflection point for DevOps. It’s not just about doing things faster but about creating a pipeline that can continuously learn, adapt, and improve. Organizations that prepare for this change will see shorter release cycles, stronger security, and a level of agility that defines the next generation of software delivery.

The governance imperative

As organizations move from using a handful of independent agents to managing vast networks of interconnected ones, the challenge becomes one of control and accountability. This scale and complexity introduce new risks: agents operating outside policy boundaries, misaligned decision-making, and cascading failures that can ripple across entire platforms.

Organizations that succeed will treat governance as a continuous discipline embedded in every layer of operations. Investing in robust oversight, centers of excellence, and monitoring systems will not only mitigate risk but also unlock faster innovation. With the proper governance structure, multi-agent systems become an engine for resilience.

For technology leaders, the message is clear: as AI-driven automation scales, so must your governance. The companies that get this balance right will be the ones that innovate confidently, able to harness the full potential of agentic systems, while others are still managing unexpected complexity.

Building agents, not just using them

As AI agents multiply across the enterprise, organizations will need a structured way to manage their creation, training, and governance. Traditional development methods aren’t built for the complexity of agentic systems that learn, reason, and evolve. The Agent Development Life Cycle (ADLC) will become the backbone of how companies scale AI safely and effectively.

ADLC introduces a new paradigm for development. It integrates large language models with reasoning engines, memory systems, and continuous feedback loops to ensure agents can adapt intelligently over time. This advancement means development must evolve from static product releases to dynamic, ongoing systems of improvement. The ADLC provides the structure and guardrails to keep pace with AI’s rapid learning cycles while maintaining transparency and trust.

For business leaders, this is more than an IT initiative. It’s a strategic capability that redefines how value is created and maintained. Companies that achieve ADLC maturity early will be able to deploy agentic AI faster, respond to market shifts in real time, and continuously improve business outcomes. Those who delay will find themselves limited by outdated processes, unable to manage AI complexity at scale.

The new developer paradigm takes shape

As developers build with AI agents, they’re also building AI agents. These aren’t separate tracks but interconnected practices that inform and reinforce each other. The new paradigm is characterized by developers who are simultaneously users, creators, and governors of intelligent systems. Organizations that recognize this evolution will move faster and more confidently, developing the skills and structures needed to operate at both levels. Mastery of this dual capability will define what it means to develop software in the agentic era.

These predictions come from IDC’s FutureScape: Worldwide Developer and DevOps 2026 Predictions. For the complete research on how agentic AI is reshaping software development, delivery, and governance, explore the full report.

To understand how these developer shifts connect to the broader agentic enterprise transformation, visit IDC’s FutureScape 2026 Predictions and join our webinar series for actionable insights on navigating the agentic era across your organization.

Jim Mercer - Program Vice President, Software Development, DevOps & DevSecOps - IDC

Jim Mercer is a Program Vice President managing multiple programs spanning application lifecycle management (ALM), modern application development and trends, emerging generative AI software development, DevOps, DevSecOps, open source, PaaS for developers, and cloud application platforms. His focus areas are DevOps and DevSecOps Solutions research practices. In this role, he is responsible for researching, writing, and advising clients on the fast-evolving DevOps and DevSecOps markets.

As enterprises accelerate their use of AI, the importance of secure data sharing has never been greater. IDC’s recent FutureScape 2026 research predicts that by 2028, 60% of enterprises will collaborate on data through private exchanges or data clean rooms.

With Amazon Web Services (AWS) announcing new privacy-enhancing synthetic data generation within AWS Clean Rooms, we are already starting to see that prediction take shape.

We sat down with Lynne Schneider, Research Director for Data Collaboration and Monetization, and Location & Geospatial Intelligence at IDC, to unpack this prediction, explore the impact of AWS’s announcement, and offer guidance for enterprises preparing for the next era of AI-driven data collaboration.

Over the next several years, we anticipate that the majority of global enterprises will be collaborating through some form of private data exchange or data clean room. The reason is simple: the only sustainable advantage in an AI world is data and the novel combinations it makes possible.

What frightens people is the idea that their private data might leak or reach people they never intended to share it with. That’s why data collaboration technologies, including private exchanges and clean rooms, will rise from “nice to have” to must-have.

Amazon recently announced privacy-enhancing synthetic dataset generation within AWS Clean Rooms. How does this validate the direction you predicted?

This announcement sits at the nexus of two IDC predictions: growth in data collaboration and growth in synthetic data.

People turn to synthetic data for two reasons:

  1. To expand small datasets when training models.
  2. To add privacy protection by creating an equivalent privacy-safe dataset.

AWS’s announcement is focused on that second reason — privacy.

Before secure data collaboration was technologically feasible, people relied on contractual promises to keep shared data private. Now the technology itself enforces privacy. Synthetic data was one way organizations tried to protect sensitive elements (like social security numbers or addresses) to reduce the risk of re-identification.

What AWS has introduced is essentially a second layer of privacy protection. You bring your proprietary data into the clean room, activate the AWS service, and it generates a synthetic dataset. AWS also provides tools to measure how well that synthetic data meets your privacy requirements before you use it.

How does combining clean rooms with synthetic data expand what enterprises can safely do with AI, especially as we head into the agentic AI era?

Combining the two is really an up-leveling.

Clean rooms already support federated training and let both humans and AI agents access and combine data securely. Synthetic data adds another privacy option on top of that. Together, they allow organizations to explore more advanced AI use cases — including generative and agentic AI — without exposing raw sensitive data.

From a trust, governance, and privacy standpoint, what does it mean that enterprises can now generate synthetic datasets inside the clean room rather than relying on external tools?

When people build synthetic data today, we often see “synthetic audiences” — personal data that’s transformed for advertising or marketing applications. We’re also seeing emerging use cases in life sciences and healthcare, where the data is extremely sensitive and sometimes scarce. Synthetic data helps expand those datasets for modeling and experimentation.

The challenge is that synthetic data can go wrong in two ways:

  • It may stray too far from the original data and become meaningless, or
  • It may stay too close, raising re-identification risks.

Combining synthetic data generation with a clean room solves both issues. The clean room governs access and also controls what analyses can be performed. It provides an extra seal of privacy.
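
One way to see the two failure modes is a toy check, assuming records are already reduced to numeric feature vectors: measure each synthetic record's distance to its nearest real record, flagging re-identification risk when it is too close and utility loss when it is too far. The thresholds, data, and function names here are purely illustrative, not any vendor's actual metric.

```python
# Hypothetical sketch: scoring a synthetic dataset on the two failure
# modes described above -- "too close" (re-identification risk) and
# "too far" (loss of analytic utility). All thresholds are illustrative.
import math

def nearest_real_distance(synthetic_row, real_rows):
    """Euclidean distance from one synthetic record to its closest real record."""
    return min(math.dist(synthetic_row, r) for r in real_rows)

def assess_synthetic(real_rows, synthetic_rows,
                     min_dist=0.5,   # closer than this => re-identification risk
                     max_dist=5.0):  # farther than this => low analytic utility
    too_close = too_far = 0
    for s in synthetic_rows:
        d = nearest_real_distance(s, real_rows)
        if d < min_dist:
            too_close += 1
        elif d > max_dist:
            too_far += 1
    n = len(synthetic_rows)
    return {"risk_fraction": too_close / n, "drift_fraction": too_far / n}

real = [(1.0, 2.0), (2.0, 3.0), (10.0, 10.0)]
synthetic = [(1.1, 2.1),      # nearly a copy of a real record: privacy risk
             (2.5, 3.5),      # plausibly private and still useful
             (50.0, 50.0)]    # too far from any real record: little utility
report = assess_synthetic(real, synthetic)
```

A real clean-room service would use far stronger privacy measures (for example, membership-inference testing), but the same tension between the two fractions is what its measurement tools report on.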

What should enterprises start doing now to prepare for this shift, both in terms of data strategy and AI readiness?

Enterprises should start by identifying what kinds of data they need to make their AI, analytics, or decision intelligence more effective.

For example:
If you’re forecasting demand for a product and weather impacts that demand, you may need to combine:

  • A general LLM
  • Your enterprise’s historical demand data
  • External weather data (public or partner-provided)
  • Logistics partner data about fleet availability

Each party may hold sensitive information they don’t want to expose. Clean rooms allow you to combine all those pieces securely.
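
As a toy illustration of the combination above (setting aside the LLM and logistics pieces), the enterprise's demand history might be joined with partner-provided weather data before modeling. All field names and the naive adjustment rule are hypothetical stand-ins for a real forecasting model; in practice the join would happen inside a clean room without either party exposing raw records.

```python
# Hypothetical sketch: joining enterprise demand history with external
# weather data, as in the forecasting example above. The "model" is a
# toy rule; dates, units, and temperatures are invented.
demand_history = {            # enterprise data: date -> units sold
    "2025-07-01": 120,
    "2025-07-02": 95,
}
weather = {                   # partner data: date -> high temperature (C)
    "2025-07-01": 31,
    "2025-07-02": 22,
}

def weather_adjusted_forecast(date, baseline):
    """Nudge the baseline up on hot days (toy stand-in for a real model)."""
    temp = weather.get(date)
    if temp is not None and temp >= 30:
        return round(baseline * 1.15)
    return baseline

forecasts = {d: weather_adjusted_forecast(d, units)
             for d, units in demand_history.items()}
```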

Is there anything else enterprises should know about the direction this market is heading?

We have some great examples of how enterprises are benefiting from data collaboration in a recent IDC report: From Adoption to Advantage: Experiences of Data Cleanroom Innovators.

There was an initial period when “data clean rooms” were a popular buzzword — the same way “AI” is today. Many organizations wanted to say they were doing it. But once you get past the check-the-box phase, you need to prove the value.

This research highlights 11 different use cases, challenges, outcomes, and guidance on how companies are realizing value through data collaboration technologies.

Christina Cardoza - Content Marketing Manager - IDC

Christina Cardoza is a Content Marketing Manager at IDC, where she specializes in brand content and social media strategy. With a background in journalism and editorial leadership, she has a proven ability to transform complex technology topics into clear, actionable insights.

IDC and Amazon are teaming up to make high-quality business insights faster, easier, and more accessible. IDC announced a new strategic partnership that brings its proprietary technology intelligence directly into Amazon Quick Research, an AI-powered research agent inside Amazon Quick Suite.

Trusted intelligence now built into AWS workflows

Amazon designed Quick Research to help business professionals generate, synthesize, and analyze complex information across multiple data sources. By integrating IDC’s premium research and more than 11.5 billion data points, the tool now delivers a new level of depth, accuracy, and credibility, all within the user’s existing AWS environment.

For many organizations, this solves a growing challenge: business users are overwhelmed by fragmented data sources and a surge of unverified AI-generated content. Embedding IDC’s validated intelligence directly into an AI-driven agent helps close that trust gap at a moment when clarity and speed are more critical than ever.

Through the integration, customers will gain:

  • Faster, more precise insights that blend next-generation AI with IDC-validated research
  • Seamless access to IDC content inside daily AWS workflows
  • Higher productivity and confidence in AI-generated recommendations and analyses

A milestone in IDC’s AI-fueled ecosystem strategy

The partnership also marks a milestone in IDC’s plans to deliver trusted intelligence directly into the tools and environments customers use every day. This integration is part of IDC’s broader shift toward an AI-fueled, human-driven model for delivering trusted technology intelligence. IDC recently shared its vision for evolving from static research delivery into a connected intelligence ecosystem powered by APIs, partnerships, and agentic AI. The collaboration with Amazon Quick Research is one of the first visible steps in bringing that strategy to life, meeting customers where they work and embedding IDC insights into the flow of everyday decision-making.

And this is only the beginning. IDC will continue expanding its intelligence ecosystem in 2026 with additional integrations, enhanced APIs, and new ways for customers to tap into analyst-validated insights across the platforms where they already work.

Access to IDC insights is available to Amazon Quick Research users with select IDC subscriptions.

Christina Cardoza - Content Marketing Manager - IDC

Christina Cardoza is a Content Marketing Manager at IDC, where she specializes in brand content and social media strategy. With a background in journalism and editorial leadership, she has a proven ability to transform complex technology topics into clear, actionable insights.

Many enterprises are eager to deploy AI-driven capabilities, yet their ambitions are constrained by accumulated technical debt — outdated systems, fragile integrations, and limited data interoperability. IDC research shows that unmanaged tech debt can consume 20–40% of development time, diverting resources away from innovation and modernization.

For CIOs, the problem isn’t only technical; it’s strategic. Systems that were once fit for purpose now inhibit agility, scalability, and trust in data-driven decision-making. Vendors have an opportunity to become partners in reducing this friction by linking modernization roadmaps directly to the organization’s AI goals and measurable business outcomes.

An aging learning and development platform

Consider a global manufacturer whose workforce skilling system was built a decade ago on a rigid, on-premises learning management platform. The system stores static course libraries and tracks completions but cannot personalize training or integrate real-time performance data. As the company explores AI-enabled, adaptive training that generates custom learning paths based on employee behavior, role, and skills gaps, the legacy system becomes a liability:

  • Technical debt: Custom code and outdated integrations make migration costly and complex.
  • Operational drag: Manual updates and data entry consume IT hours that could support AI adoption.
  • Business risk: Workforce skills lag behind new digital processes, slowing innovation and productivity.

Without modernization, the organization cannot take advantage of new agentic or AI-driven learning systems capable of dynamically tailoring training to role, performance, or predicted need.

How vendors can accelerate modernization and build shared value

Vendors can play a critical role in helping technology leaders move from technical debt management to technical health improvement.

  1. Quantify and visualize technical health.
    Provide assessment frameworks and tools to measure the client’s “technical health” across systems — highlighting how legacy systems inhibit AI adoption. This gives CIOs a defensible, data-driven case for investment.
  2. Link modernization to AI outcomes.
    Position upgrades not as infrastructure refreshes but as enablers of AI-readiness — improved data access, reduced integration friction, and scalable infrastructure that supports machine learning and automation.
  3. Co-own the transformation roadmap.
    Collaborate on a phased modernization plan that addresses immediate technical debt while embedding continuous improvement and governance models. This partnership ensures measurable progress toward an AI-enabled enterprise.
  4. Embed learning modernization in the platform.
    Vendors offering AI-driven learning solutions can integrate adaptive skilling, microlearning, and real-time performance analytics directly into their technology, helping organizations cultivate the AI literacy and workforce agility needed for sustained transformation.

The strategic payoff

For the enterprise, addressing technical debt becomes a launchpad for AI advantage. For the vendor, guiding this transition cements long-term strategic partnership and stickier platform adoption. By aligning modernization efforts with business impact (faster upskilling, improved productivity, and data-driven workforce performance), vendors move from being solution providers to co-architects of enterprise resilience and AI maturity.

Daniel Saroff - GVP, Consulting and Research Services - IDC

Daniel Saroff is Group Vice President of Consulting and Research at IDC, where he is a senior practitioner in the end-user consulting practice. This practice provides support to boards, business leaders, and technology executives in their efforts to architect, benchmark, and optimize their organization's information technology. IDC's end-user consulting practice utilizes our extensive international IT data library, robust research base, and tailored consulting solutions to deliver unique business value through IT acceleration, performance management, cost optimization, and contextualized benchmarking capabilities.

In IDC’s FutureScape: Worldwide Agentic Artificial Intelligence 2026 Predictions, two forecasts capture a clear divergence in how the agentic era may unfold.

One warns that by 2030, up to 20% of G1000 organizations will face lawsuits, fines, and CIO dismissals due to high-profile disruptions tied to poor AI agent governance. In contrast, the other prediction anticipates that by 2031, 60% of G2000 CEOs will use agentic AI to inform strategic decisions, leveraging autonomous systems to simulate outcomes and guide boardroom planning.

These predictions describe opposite potential outcomes of the same adoption curve: one driven by unchecked automation, the other by disciplined governance and transparent design.

The governance gap: Where failures occur

The early wave of GenAI deployments surfaced a pattern where speed sometimes outpaced safeguards. Under board and competitive pressure, CIOs deployed GenAI applications before implementing comprehensive processes to mitigate the potential for inaccurate or poor results.

The stakes are potentially higher when it comes to agentic AI implementations, particularly if they are deployed into mission-critical workflows — from logistics optimization to financial approvals — before governance frameworks are in place. The potential includes:

  • Uncontrolled decision cascades. When agents are authorized to take action, considerations should be made for how those actions may propagate through interconnected systems. Lack of control and visibility could lead to unintended consequences.  
  • Opaque behavior. When teams lack the explainability tooling to trace why an agent took a specific action, leaders may be left unable to defend outcomes to regulators or customers.
  • Fragmented escalation protocols. When human oversight is nominal and when governance is split across data, IT, and legal functions with no unified escalation path, problems may go undetected.

The consequences of these scenarios are immediate and potentially dramatic, including service outages, privacy violations, shareholder lawsuits, and loss of executive confidence.

It’s not about technology failure, but organizational unpreparedness.

From control to confidence

By contrast, organizations that treat governance as infrastructure and not insurance are finding that control and confidence grow together.

One of the predictions envisions a near future where CEOs use agentic AI not for operational efficiency, but for strategic insight. These systems may model mergers, simulate supply chain disruptions, and forecast policy impacts faster than human teams can aggregate the data.

To make that shift, enterprises will need to embed three design principles at the core of their AI programs:

  1. Traceability by design. Every autonomous decision should carry a data lineage record and confidence score, allowing oversight without throttling performance.
  2. Integrated governance. AI ethics, risk, and compliance functions should be unified and integrated, and applied across the development and operations lifecycle.
  3. Accountability loops. Decision thresholds or hard-coded events trigger human interventions before outcomes cross defined boundaries.
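
A minimal sketch of how the three principles might fit together, with hypothetical class names, policy fields, and thresholds: each autonomous decision carries lineage and a confidence score (traceability), is checked against one unified policy (integrated governance), and is escalated to a human before it crosses a defined boundary (accountability loop).

```python
# Illustrative sketch of the three design principles above. The policy
# values and decision fields are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    amount: float
    confidence: float                            # model-reported confidence, 0..1
    lineage: list = field(default_factory=list)  # data sources consulted

POLICY = {"max_autonomous_amount": 10_000, "min_confidence": 0.85}

def route(decision: Decision) -> str:
    """Return 'auto-approve' or 'escalate-to-human' per the unified policy."""
    if decision.amount > POLICY["max_autonomous_amount"]:
        return "escalate-to-human"               # hard-coded boundary: human steps in
    if decision.confidence < POLICY["min_confidence"]:
        return "escalate-to-human"               # low confidence: human steps in
    return "auto-approve"                        # traceable record travels with it

ok = route(Decision("approve-invoice", 4_200, 0.93, ["erp:inv-778"]))
risky = route(Decision("approve-invoice", 55_000, 0.97, ["erp:inv-901"]))
```

Because every `Decision` carries its lineage and confidence, an auditor can later reconstruct why the system acted, which is the oversight-without-throttling balance the first principle describes.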

When these design principles are followed, governance doesn’t slow innovation or adoption. Instead, it builds confidence. When leaders trust the system, they can push AI further, including into strategic applications such as board-level scenario modeling, capital planning, and long-horizon strategy.

Bridging the divide

IDC’s research shows that the organizations succeeding with agentic AI share a common mindset: they see governance and growth as inseparable. The message from the 2026 FutureScape is clear: the problem isn’t that AI agents act autonomously, it’s that too few enterprises are ready for them to do so.

The next era of competitive advantage will belong to organizations that can govern autonomy, not constrain it.

Nancy Gohring - Senior Research Director, AI - IDC

Nancy Gohring is a senior research director, co-leading IDC's GenAI and Agentic AI Strategies program. Nancy covers big picture trends related to enterprise adoption of AI, including GenAI and agentic AI. Key research themes include business, organizational, and technology architecture transformation, in the context of AI and GenAI. As part of the Worldwide AI, Automation, Data & Analytics Research practice, Nancy supports a range of clients across the technology stack including hyperscalers, developer tool providers, enterprise application vendors, professional services organizations, automation frameworks providers, and infrastructure suppliers.

As a CIO and CTO, my responsibility is less about answering “can we do something” and more about “should we do something,” even when not asked. Agentic AI orchestration exemplifies this, and technology leaders must weigh in. One of the biggest misconceptions to avoid is thinking of agentic AI as just a new tool, when it is really a completely new way of achieving business objectives: a new architectural archetype.

Although nascent, the agentic AI architectural archetype can’t be ignored. Applying today’s traditional best practices to these developments could lead to costly, over-engineered platforms that risk obsolescence before you even finish deploying them. Mistaking tooling for architecture could lead to deploying attractive point solutions (even AI-powered ones) that ultimately breed complexity, silos, costs, and rigidity. Agentic AI orchestration isn’t about improving or automating existing technology solutions; it is a complete rethink of solution design from the ground up, providing unprecedented, dynamic, enterprise-outcome-driven agility.

The base building blocks of agentic orchestration are AI agents: self-directed, independent mini-systems (workers) that sense, decide, learn, and act to achieve goals. Unlike microservices and APIs, agents have contextual understanding and can reason with minimal human input. Federated agentic AI orchestration is a coordination fabric spanning orchestration, capabilities, and governance, in which multiple specialized autonomous agents collaborate across distributed systems. A master orchestration layer coordinates their tools, governs policies, and allows for sub-orchestration. This enables agents to operate autonomously, learn continuously, and be swapped or upgraded modularly, while maintaining interoperability through standardized protocols like MCP (Model Context Protocol) to achieve organizational objectives. It’s the difference between planning your trip with a paper map and asking your car to figure out how to reach a destination while it automatically adapts to real-time traffic and road closures. It’s the leap from static workflows to dynamic orchestration, from rigid integration to capability onboarding, from deterministic execution to bounded emergence.
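
The coordination fabric described above can be sketched minimally: specialized agents register the capabilities they offer, and a master orchestrator routes each goal to whichever agent can handle it. Everything here (class names, capability strings, the trip-planning theme) is hypothetical; a production fabric would add standardized protocols such as MCP, memory management, and governance.

```python
# Minimal, hypothetical sketch of federated agent orchestration: agents
# advertise capabilities, a master orchestrator dispatches by capability,
# and agents can be registered or swapped modularly.
class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def handle(self, task):
        return f"{self.name} completed '{task}'"

class Orchestrator:
    def __init__(self):
        self.agents = []

    def register(self, agent):          # agents can be added or swapped modularly
        self.agents.append(agent)

    def dispatch(self, capability, task):
        for agent in self.agents:
            if capability in agent.capabilities:
                return agent.handle(task)
        raise LookupError(f"no agent offers capability '{capability}'")

orch = Orchestrator()
orch.register(Agent("route-planner", {"routing"}))
orch.register(Agent("traffic-watcher", {"traffic", "road-closures"}))
result = orch.dispatch("traffic", "check congestion on I-90")
```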

Reference Architecture Considerations

To achieve this target state, the reference architecture requires several components that are still emerging and, at this time, often immature:

  • Orchestration: standardized inter-agent protocols (e.g., MCP; others will emerge), memory management, routing, evaluation, and recovery.
  • Capabilities: tools, tool registries, and capability contracts that let GenAI-based agents, which are predictive (non-deterministic) by nature, invoke deterministic capabilities backed by mature, hardened enterprise systems, reducing errors (e.g., hallucinations) that could occur within a pure GenAI workflow.
  • Governance: identity management with enforced least privilege, observability, enforceable policies, human-in-the-loop (HITL) checkpoints, and lineage with fallbacks to deterministic paths. These play a vital role not only in providing confidence in what is achieved, but also in how it is achieved.
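
The capabilities component above can be illustrated with a toy registry of capability contracts, assuming hypothetical contract fields (a callable and a pre-condition): a non-deterministic agent reaches a deterministic enterprise system only through the checked contract, never directly.

```python
# Hedged sketch of a capability registry with contracts. The contract
# shape (fn + pre-condition) and the inventory capability are invented
# for illustration.
REGISTRY = {}

def register_capability(name, fn, pre=lambda **kw: True):
    """Publish a capability with its contract to the registry."""
    REGISTRY[name] = {"fn": fn, "pre": pre}

def invoke(name, **kwargs):
    contract = REGISTRY[name]
    if not contract["pre"](**kwargs):   # enforce the contract's pre-condition
        raise ValueError(f"pre-condition failed for '{name}'")
    return contract["fn"](**kwargs)     # deterministic system does the work

# A deterministic enterprise capability published behind a contract:
register_capability(
    "inventory.lookup",
    fn=lambda sku: {"sku": sku, "on_hand": 42},   # stand-in for a hardened system
    pre=lambda sku: isinstance(sku, str) and sku.startswith("SKU-"),
)

good = invoke("inventory.lookup", sku="SKU-1001")
```

The point of the pattern is that a hallucinated or malformed agent request fails the contract check rather than reaching the backing system.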

Four Mindset Shifts for CIOs and IT Leaders to consider

To unlock agentic potential, CIOs must embrace four mindset shifts:

  1. From “deploy a workflow” to “design a market”: Base your orchestration on a marketplace where multiple agents, tools, data sources, and models can be considered to achieve an objective. This is a shift from building rigid solutions to building reusable, flexible capabilities that can be orchestrated into multiple solutions.
  2. From deterministic workflows to policy-bounded emergence: Expect non-determinism. Engineer bounded variability with approvals on sensitive actions, human-in-the-loop thresholds, and deterministic fallbacks for regulated steps. Think of it as a policy cage that offers autonomy with guardrails and contingencies.
  3. From integration backlog to capability onboarding: Stop wiring systems point-to-point. Start onboarding capabilities with contracts (inputs, outputs, pre/post-conditions, risks, costs) published to a registry.
  4. From vendor lock-in to composition strategy: Assume a rotating cast of agents and tools. Prioritize interchangeability. Monitor and compare agent and tool efficiency and effectiveness with the intent to swap in better ones through continuous improvement. It’s not about best practices; it’s about next practices.
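
The fourth shift, composition over lock-in, can be sketched as a simple scoreboard that compares interchangeable agents on an observed metric so the better performer can be swapped in. The agents, the metric (success rate), and the numbers are all hypothetical.

```python
# Illustrative sketch: tracking comparable metrics for interchangeable
# agents to support a swap-in decision. Data is invented.
from collections import defaultdict

scores = defaultdict(lambda: {"attempts": 0, "successes": 0})

def record(agent, succeeded):
    """Log one task outcome for an agent."""
    scores[agent]["attempts"] += 1
    scores[agent]["successes"] += int(succeeded)

def best_agent():
    """Pick the agent with the highest observed success rate."""
    return max(scores, key=lambda a: scores[a]["successes"] / scores[a]["attempts"])

for outcome in (True, True, False):   # agent-a: 2 of 3 tasks succeeded
    record("agent-a", outcome)
for outcome in (True, True, True):    # agent-b: 3 of 3, the swap-in candidate
    record("agent-b", outcome)

choice = best_agent()
```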

The decisions we make today as technology leaders will either enable enterprise agility or entrench systemic fragility. Agentic AI orchestration demands that we stop thinking in terms of predetermined workflows and integrations and start designing for emergence, modularity, and policy-bounded autonomy. The allure of deploying isolated AI-powered solutions is strong, but that seductive simplicity often leads to architectural entropy. Federated orchestration offers a path forward: one where agents collaborate across domains, utilize reliable tools, and are governed by shared protocols and enforceable policies, enabling continuous learning and safe autonomy.

The question isn’t whether disruption is coming, it’s whether your enterprise will be ready when it does. Because in the age of bounded emergence, agility isn’t built… it’s orchestrated.

Rex Lee - CITO - Canadian Tire Corporation

Rex is the Chief Information & Technology Officer (CITO) at Canadian Tire Corporation (CTC), one of Canada’s most iconic and trusted companies with multiple retail banners spanning general merchandise, sporting goods, apparel, and businesses in automotive, financial services, real estate, and petroleum all with a brand purpose of “Making Life in Canada Better”. His mandate includes strategy, architecture, governance, development, operations, and cybersecurity across all retail locations, digital properties, corporate operations, and global facilities.