AI promises to transform IT service management (ITSM): faster resolutions, automated service desks, and the ability to catch problems before they happen. Yet many AI initiatives fall short.

In my work with end-user clients, the difference between success and frustration almost always comes down to one thing: the configuration management database (CMDB).

A CMDB is simply a system that tracks all your IT assets (servers, laptops, software) and how they connect. When your CMDB is accurate, complete, and includes cost information, AI becomes a powerful tool. It can route problems to the right people, fulfill requests automatically, and support better contract negotiations with external suppliers.

When your CMDB is a mess, AI just makes that mess happen faster.

Two ways organizations manage their CMDB

Across hundreds of client engagements, we see a clear divide.

At one end are “black box” environments where the CMDB is a relic, populated once, rarely maintained, and lacking cost or relationship data.

At the other end are “cost-enriched” environments where the CMDB is a live, trusted source of information, continuously updated and directly linked to service management processes.

The wrong way: AI amplifies bad CMDB data

A global manufacturer learned this the hard way. It deployed an AI virtual agent to automate service desk requests. The tool was designed to check asset availability and resolve common issues without human intervention.

But the CMDB was years out of date. When employees requested laptop refreshes, the AI encountered duplicate and obsolete records. Unable to determine which devices were actually in use, it escalated nearly every request to a human agent.

There was no reduction in ticket volume. In fact, the service desk spent more time correcting AI outputs than resolving issues.

Predictive incident management also failed. The CMDB lacked accurate relationships between applications and infrastructure, so the AI could not prioritize incidents by business impact. Average resolution times remained unchanged.

The right way: a cost-enriched CMDB unlocks AI value

Now contrast that with a mid-size financial services organization that invested 18 months in CMDB hygiene and enrichment.

Every configuration item (laptops, virtual machines, storage, and more) included unit cost, supplier, lease end date, and relationship data. Automated discovery ran continuously, and the CMDB was integrated with finance, procurement, and virtualization platforms. It became the single source of truth for IT service management.

When this organization deployed an AI-powered service management platform, the value was immediate.

A department head requested 50 new high-spec laptops. Instead of automatically generating a purchase order, the AI queried the CMDB and identified 20 unassigned units already in inventory. It reserved those devices and routed a budget approval request for the remaining 30, including total cost (hardware plus licensing).
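The fulfillment flow just described — check existing stock before raising a purchase order — can be sketched as follows. This is a minimal illustration, not the organization's actual platform: the record shape, field names, and the €1,500/€300 unit prices are hypothetical.

```python
# Sketch of CMDB-aware request fulfillment (hypothetical schema and prices).
# The AI reserves unassigned stock first and routes only the shortfall for approval.

def fulfill_laptop_request(cmdb, requested_qty, unit_cost, license_cost):
    """Reserve unassigned devices; return (reserved count, approval request or None)."""
    # Unassigned, non-obsolete devices already in inventory
    available = [ci for ci in cmdb
                 if ci["type"] == "laptop"
                 and ci["assigned_to"] is None
                 and not ci["obsolete"]]
    reserved = available[:requested_qty]
    for ci in reserved:
        ci["assigned_to"] = "RESERVED"

    shortfall = requested_qty - len(reserved)
    approval = None
    if shortfall > 0:
        # Total cost surfaced for budget approval: hardware plus licensing
        approval = {"quantity": shortfall,
                    "total_cost": shortfall * (unit_cost + license_cost)}
    return len(reserved), approval

cmdb = ([{"type": "laptop", "assigned_to": None, "obsolete": False} for _ in range(20)]
        + [{"type": "laptop", "assigned_to": "user", "obsolete": False} for _ in range(100)])
reserved, approval = fulfill_laptop_request(cmdb, requested_qty=50,
                                            unit_cost=1500, license_cost=300)
print(reserved, approval)  # 20 units reserved; approval routed for the remaining 30
```

The point of the sketch is the ordering: inventory lookup happens before procurement, which is only possible when the CMDB's assignment and obsolescence fields can be trusted.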

Fulfillment time dropped from five days to less than 24 hours. The service desk was freed from a high-volume request category, and real-time cost visibility helped prevent budget overruns.

The biggest impact came during renewal of the organization’s managed services contract. The managed service provider (MSP) priced support per virtual machine and per terabyte of storage.

With an integrated CMDB, AI analytics enabled a pre-renewal audit. It identified:

  • 50 virtual machines that had been powered off for more than 90 days, with no activity
  • 15 terabytes of allocated storage with no active data

The CMDB confirmed no business owner or application dependency, making these safe to remove.
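In outline, an audit like this is a pair of filters over the CMDB plus a pricing step. The sketch below uses hypothetical field names, and the per-VM and per-terabyte rates are invented for illustration (the real contract rates are not disclosed).

```python
# Sketch of the pre-renewal audit (hypothetical CMDB fields; illustrative MSP rates).
# Flags VMs idle for 90+ days and storage with no active data, then prices the savings.

def audit(cis, vm_rate, tb_rate, idle_threshold_days=90):
    """Return (removable VMs, empty terabytes, monthly MSP saving)."""
    idle_vms = [ci for ci in cis
                if ci["type"] == "vm"
                and ci["days_since_activity"] >= idle_threshold_days
                and ci["owner"] is None]          # no business owner or dependency
    empty_tb = sum(ci["tb"] for ci in cis
                   if ci["type"] == "storage" and not ci["has_active_data"])
    monthly_saving = len(idle_vms) * vm_rate + empty_tb * tb_rate
    return idle_vms, empty_tb, monthly_saving

cis = ([{"type": "vm", "days_since_activity": 120, "owner": None} for _ in range(50)]
       + [{"type": "storage", "tb": 15, "has_active_data": False}])
vms, tb, monthly = audit(cis, vm_rate=50, tb_rate=100)  # illustrative per-unit rates
print(len(vms), tb, monthly, monthly * 12)
```

The owner/dependency check is what makes the result safe to act on: without relationship data, an idle-looking VM might still sit under a quarterly batch job.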

Eliminating this unused infrastructure reduced MSP costs by €4,000 per month, or €48,000 annually. It also created a more transparent and accountable partnership with the provider.

What the data shows: CMDB maturity drives AI outcomes

IDC research shows that organizations with a mature, cost-enriched CMDB achieve significantly better AI outcomes.

They:

  • Deliver 2.5x higher return on AI investments
  • Reduce service request times by 30–50%
  • Improve average resolution times by 15–25%
  • Lower total service management costs by 10–15%

Other factors that influence AI success in ITSM

A strong CMDB is the foundation, but it is not the only requirement.

Successful AI initiatives also depend on:

  • Clear, well-documented processes
  • Teams that understand how to work with AI tools
  • A culture that supports change

Without these, even high-quality data will not deliver full value. But without accurate, cost-enriched data, even the best processes and teams will struggle to make AI effective.

What you can do now

Treat your CMDB as a strategic foundation for IT service management, not just a compliance requirement.

  • Use automated tools to maintain accuracy
  • Assign clear ownership for CMDB quality
  • Add cost and relationship data to every asset
  • Integrate the CMDB with finance, procurement, and IT systems
  • Clean your data before deploying AI
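The manufacturer's failure story above turned on duplicate and obsolete records. A minimal de-duplication pass — using a hypothetical record shape, not any specific CMDB product's schema — might look like this:

```python
# Minimal CMDB de-duplication sketch (hypothetical record shape).
# Keeps the most recently discovered record per serial number and drops retired assets.

def clean_cmdb(records):
    latest = {}
    for rec in records:
        if rec["status"] == "retired":
            continue  # obsolete records should never reach the AI layer
        key = rec["serial"]
        # Keep the copy most recently seen by automated discovery
        # (ISO-format dates compare correctly as strings)
        if key not in latest or rec["last_discovered"] > latest[key]["last_discovered"]:
            latest[key] = rec
    return list(latest.values())

records = [
    {"serial": "A1", "status": "in_use", "last_discovered": "2026-01-10"},
    {"serial": "A1", "status": "in_use", "last_discovered": "2025-03-02"},  # stale duplicate
    {"serial": "B2", "status": "retired", "last_discovered": "2026-01-09"},
]
clean = clean_cmdb(records)
print(len(clean), clean[0]["last_discovered"])  # one record survives: the 2026 copy of A1
```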

When negotiating with suppliers, use CMDB-driven insights to validate usage and challenge invoices.

Final thought: AI is only as good as your data

In the rush to adopt AI in ITSM, success will not come from buying the most advanced tools. It will come from investing in the data those tools depend on.

Organizations that win will build strong data foundations, supported by the right processes, skills, and culture.

An accurate, cost-enriched CMDB is no longer just an operational necessity. It is a competitive advantage for driving efficiency, improving service quality, and reducing costs with AI.

Tom Collins - Senior Consultant, IT Sourcing & Benchmarking - IDC

Tom Collins is an insights and strategy leader focused on delivering data-driven analysis and market intelligence. With deep expertise in consumer behavior, branding, and innovation, he advises organizations on identifying growth opportunities and navigating evolving market dynamics.

Wholesale has traditionally been a scale-driven business focused on connectivity and volume. That model is now evolving. 

Across the telecom industry, wholesale providers are rethinking how they deliver value, moving toward more flexible, platform-based approaches that go beyond traditional network services. 

This shift is being driven by changing customer expectations, new technologies, and increasing pressure to create sustainable growth. 

Wholesale telecommunications is shifting toward platform-based models 

IDC highlights 2026 as a key moment in the transition toward more automated, API-driven, and AI-enabled wholesale models. 

Rather than offering static products, wholesale providers are increasingly expected to deliver services that are more flexible, on-demand, and easier to integrate into customer environments. 

This includes: 

  • Greater use of APIs to expose network capabilities  
  • Increased automation across ordering, provisioning, and operations  
  • More dynamic and usage-based pricing models  

As a result, wholesale telecommunications is gradually adopting characteristics typically associated with cloud platforms. 
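The usage-based pricing mentioned above reduces, in its simplest form, to tiered metering of consumption. The sketch below is purely illustrative — the tiers and per-GB rates are invented, not any provider's actual price book.

```python
# Illustrative usage-based wholesale pricing (tiers and rates are invented).
# Each consumed GB is charged at the rate of the tier it falls into.

TIERS = [(1000, 0.10), (10000, 0.07), (float("inf"), 0.04)]  # (upper bound GB, rate/GB)

def monthly_charge(usage_gb):
    charge, lower = 0.0, 0
    for upper, rate in TIERS:
        if usage_gb > lower:
            # Charge only the slice of usage that falls inside this tier
            charge += (min(usage_gb, upper) - lower) * rate
        lower = upper
    return round(charge, 2)

print(monthly_charge(500))    # all usage in the first tier
print(monthly_charge(12000))  # usage spans all three tiers
```

Exposing a function like this behind an API is what turns a static price list into the on-demand, consumption-based commercial model the article describes.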

Customer expectations in wholesale telecommunications are changing 

Wholesale customers are no longer only looking for access to infrastructure. They expect solutions that can adapt to their specific requirements and business models. 

Flexibility, scalability, and ease of integration are becoming key decision factors. 

This is particularly relevant as enterprise and service provider customers operate in more complex, multi-vendor environments and require greater control over how services are consumed and managed. 

Wholesale providers are responding by offering more configurable services and by simplifying how customers interact with their networks. 

Ecosystems are becoming more important in wholesale telecom 

As wholesale models evolve, the role of ecosystems is expanding. 

Providers are increasingly working with partners to extend coverage, enhance capabilities, and co-develop new services. This includes collaboration across technology vendors, platform providers, and other telecom operators. 

At the same time, there is a growing focus on standardization, particularly around APIs and emerging technologies, to enable interoperability and scale across ecosystems. 

Managing these ecosystems effectively is becoming a key capability for wholesale providers. 

Vendor strategies in telecommunications are evolving 

This shift is happening alongside changes in how telcos approach their vendor landscape. 

Operators are becoming more selective and are reducing the number of partners they work with. There is a clear move toward strategic partnerships with vendors that can deliver end-to-end capabilities and take on greater accountability. 

This reflects the increasing complexity of telecom environments, where fragmented ownership across multiple vendors can slow down transformation and increase operational challenges. 

Fewer, more integrated partners can help simplify execution and align outcomes more closely with business objectives. 

What this means for wholesale telecom providers 

For wholesale telcos, the transition to platform-based models requires both technology and organizational change. 

This includes: 

  • Modernizing legacy systems to support API-driven services  
  • Investing in automation and AI capabilities  
  • Rethinking product design toward more modular, flexible offerings  
  • Building and managing partner ecosystems more actively  

At the same time, providers need to balance innovation with the realities of existing infrastructure and customer commitments. 

Wholesale telecommunications is becoming a strategic growth lever 

Wholesale is no longer just a supporting function within telecom organizations. It is increasingly seen as an area for differentiation and new revenue generation.

Platform-based models, ecosystem collaboration, and more flexible service delivery approaches are opening up new opportunities to monetize infrastructure and reach new customer segments.

As these models mature, the ability to execute effectively will determine which providers can translate this shift into sustainable growth. 

Download the full analysis 

Wholesale transformation is one of several trends reshaping the telecom market. In the IDC eBook State of the Telco Market 2026, you’ll find detailed data, forecasts, and analysis on platform-based models, API strategies, and evolving telco business models. 

Download the eBook to explore how wholesale is evolving and what it means for telecom providers. 

If you’re currently evaluating how platform models or ecosystem strategies could impact your wholesale business, our experts are happy to exchange perspectives. Whether you’re just starting or already transforming your model, we welcome the conversation. Get in touch with our team to continue the discussion. 

Jan Hein Bakkers - Senior Research Director, European Infrastructure and Telecoms - IDC

Jan Hein Bakkers is responsible for IDC's research efforts in the European enterprise and wholesale communications domain. His personal areas of expertise include internet access and WAN services, as well as wholesale connectivity markets. His research has a particular focus on the evolution of wholesale models, WAN transformation and the role of key growth segments, such as SD-WAN, cloud connectivity and very high bandwidth services within that. His work is published in IDC's EMEA Wholesale Telecoms Strategies and European Enterprise Communications Services programs, as well as the Worldwide Telecom Services Tracker. In addition, he provides his insights, opinions, and advice to a broad base of clients via custom engagements. He is a regular speaker at industry, client, and IDC events, and is frequently quoted in the press. Since joining IDC in 2001, he has analyzed a range of telecommunications and networking areas, including broadband equipment, TV services, and consumer multiplay strategies. He is based in the Netherlands and has degrees in international marketing and technical business administration.

For the past few years, large language models (LLMs) have dominated the AI conversation, and for good reason. They have transformed how we interact with software, accelerated content creation, and unlocked new forms of productivity.

But here is the reality enterprises need to internalize in 2026: the future of AI is no longer about a single model architecture.

A new AI model ecosystem is rapidly taking shape, one that is more diverse, more specialized, and far more powerful when orchestrated correctly.

The shift is subtle, but its implications are massive.

What is replacing LLM-only AI strategies? The shift to “any-to-any” models

LLMs, largely built on transformer architectures, still play a central role. But they are no longer sufficient on their own. Across industries, we are seeing a more diverse model landscape emerge that includes:

  • Deep reasoning models with adaptive thinking
  • Multimodal models (text, image, video, audio)
  • Small, efficient models for edge and latency- or cost-sensitive use cases
  • Domain-specific models tailored to industries like healthcare, finance, and manufacturing
  • Dedicated language models that excel in underrepresented languages
  • Quantitative and physics-based models for scientific and simulation-heavy workloads and real-world problems, ultimately enabling physical AI
  • Models with novel architectures beyond transformers, such as structured state space models (SSMs), mixture of experts (MoE), liquid neural networks (LNNs), and world models that solve problems differently, including time series and spatial tasks
  • Vision-language models built on novel architectures
  • Vision-action and world models that interact with real environments
  • Autonomous AI agents that interact with the real world, search the web, manage knowledge work, and use tools

This expansion is happening because real-world problems are not purely linguistic, and many business requirements are not easily solved by transformer-based models. Enterprises are discovering that while LLMs “think in language,” many business problems require reasoning in numbers, space, time, and physics.

To truly accomplish work and enable action-oriented outcomes, models need different skill sets and must be trained on a more diverse corpus of data. Solving a business problem may require a combination of models optimized for both reasoning and execution.

Why multi-model AI matters for enterprise strategy

This is not just a technical evolution. It is a strategic one. Organizations that continue to treat AI as a single-model problem will quickly hit limitations in:

  • Accuracy
  • Cost efficiency
  • Scalability
  • Use case coverage

Meanwhile, those adopting a multi-model, multimodal, and multi-agent approach will unlock entirely new capabilities, including action-oriented AI.

How to build a multi-model AI strategy: Three actions to take now

1. Treat model selection as a core capability
Most enterprises still underinvest in model selection. That is a mistake. Choosing the right model is no longer a one-time decision. It is an ongoing enterprise capability involving:

  • Matching model types to use cases
  • Evaluating trade-offs (cost versus performance versus latency)
  • Routing tasks dynamically across models
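A task router of the kind described — matching task types to models under cost and latency trade-offs — can be sketched in a few lines. The model names, costs, and latencies below are placeholders, not real offerings.

```python
# Sketch of dynamic task routing across a model portfolio.
# Model names, costs, and latencies are placeholders, not real products.

MODELS = {
    "small-edge":    {"skills": {"classify"}, "cost": 1, "latency_ms": 20},
    "general-llm":   {"skills": {"classify", "summarize", "reason"}, "cost": 10, "latency_ms": 400},
    "deep-reasoner": {"skills": {"reason", "plan"}, "cost": 50, "latency_ms": 2000},
}

def route(task, max_latency_ms):
    """Pick the cheapest capable model within the latency budget."""
    capable = [(name, spec) for name, spec in MODELS.items()
               if task in spec["skills"] and spec["latency_ms"] <= max_latency_ms]
    if not capable:
        raise ValueError(f"no model can handle {task!r} within {max_latency_ms} ms")
    return min(capable, key=lambda kv: kv[1]["cost"])[0]

print(route("classify", max_latency_ms=50))  # latency budget forces the edge model
print(route("reason", max_latency_ms=5000))  # cheapest capable model wins
```

Even this toy version shows why routing is an ongoing capability rather than a one-time choice: every new model added to the portfolio changes the optimal assignment.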

2. Design for a “model portfolio,” not a single stack
Your future AI architecture will look less like a stack and more like a portfolio, with different models playing distinct roles. This includes general-purpose LLMs, models for large quantitative tasks, and vision-language models for generating video and other content. The possibilities are extensive.

This “constellation of models” is becoming the new normal.

3. Invest in AI-ready data infrastructure
As models diversify, data becomes the unifying layer. Without the right data foundation, even the best models will underperform. The shift toward a multi-model world is driving a major evolution in data platforms, including:

  • Converged data architectures
  • Real-time pipelines
  • Vector and multimodal databases
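One element of that stack — the vector database — reduces at its core to nearest-neighbor search over embeddings. A toy in-memory version (hand-made 3-dimensional vectors standing in for real embeddings) makes the mechanism concrete:

```python
# Toy in-memory vector store: nearest-neighbor search over embeddings,
# which is the core operation a production vector database optimizes.
import math

class VectorStore:
    def __init__(self):
        self.items = []  # (id, vector) pairs

    def add(self, item_id, vec):
        self.items.append((item_id, vec))

    def nearest(self, query):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
        # Return the id of the item whose embedding is most similar to the query
        return max(self.items, key=lambda it: cosine(query, it[1]))[0]

store = VectorStore()
store.add("invoice", [0.9, 0.1, 0.0])
store.add("photo",   [0.0, 0.2, 0.9])
print(store.nearest([0.8, 0.0, 0.1]))  # closest embedding wins
```

Real systems add approximate indexes and multimodal embedding pipelines on top, but the retrieval contract is the same.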

Final thought: AI success will depend on model orchestration

The organizations that win will be those that:

  • Embrace model diversity
  • Build flexible AI architectures that can quickly incorporate new model types and innovations
  • Invest in AI-ready data architecture and orchestration

Because in the new AI landscape, it is not about having the best model. It is about having the right combination of models.

Learn more

For a deeper dive into how the model ecosystem is evolving and what it means for enterprise strategy, explore IDC’s latest research on AI model landscapes and emerging architectures.

Tim Law - Research Director, AI & Automation - IDC

Timothy Law is a Research Director for AI & Automation, responsible for the generative AI lifecycle tools and technologies research practice. Mr. Law’s core research coverage includes the evolution of generative AI infrastructure and platforms, foundation models, developer tools, observability solutions, agentic systems, and generative AI services. This research analyzes the trends and developments in the AI software markets, including the costs, benefits, and impact of generative AI technologies.

Zhenshan Zhong - Vice President, Emerging Technology Research - IDC

Zhenshan Zhong is the vice president of Emerging Technology Research, leading the research teams focusing on emerging technologies (the four pillars and innovation accelerators). Zhenshan and his team are responsible for the overall success of these research domains, including participation and program management of key client engagements and the daily management of research team members.

The gap between execution and strategy often widens as hype cycles gain steam. This is evident in the growing disconnect in retail and restaurants between AI investment and foundational readiness. Executives are placing big bets on AI to transform operations and customer experience (CX), but at a basic level, AI is only as good as the data underpinning it, and many organizations are still operating on data foundations that were never designed for real-time decision making or intelligent automation.

This gap is becoming one of the biggest risks to competitiveness. No matter how advanced the AI, it cannot overcome fragmented systems, unsynchronized data, and limited visibility across the business.

Before AI can deliver value, the data must work.

Retailers and restaurant operators are simultaneously trying to modernize their businesses while continuing to deliver consistency, efficiency, and growth in an environment that remains highly unpredictable.

Economic volatility, shifting tariffs, supply chain disruption, labor constraints, margin pressure, rising customer expectations, and intensifying competition are all converging at once. At the same time, business leaders are being told that AI can help address many of these challenges, from improving forecasting and pricing to strengthening personalization and automating decision making.

In many ways, this is like trying to build the plane while in the air, and there is an important reality that is becoming clearer: AI is only as effective as the data foundation beneath it.

From modernization as IT project to business necessity

As we look across the market, it is evident that retailers and restaurant operators are no longer thinking about modernization as a narrow IT project. They are increasingly focused on modernizing data flows, processes, and systems as a business necessity, one that can enable greater operational efficiency now and more advanced AI capabilities over time.

For most organizations, the issue is not, nor has it ever been, a lack of data. Rather, restaurateurs and retailers often bemoan not knowing where data is, who owns it, and what to do with it. The common refrain is that retailers and restaurants already generate enormous volumes of data across POS systems, ecommerce platforms, loyalty programs, ERP environments, supply chain applications, labor systems, partner networks, and digital customer touchpoints.

The challenge is that much of this data remains fragmented, delayed, inconsistent, or difficult to govern. As a result, organizations often struggle to turn raw information into timely, trusted, and actionable insight.
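As a small illustration of what "fragmented, delayed, inconsistent" means in practice, the sketch below cross-checks two feeds for the same SKUs. The feeds, field names, and thresholds are hypothetical.

```python
# Sketch of a cross-system synchronization check (hypothetical feeds and fields).
# Flags SKUs whose stock counts disagree between the POS and ecommerce feeds,
# or whose POS record is stale.
from datetime import datetime, timedelta

def out_of_sync(pos, ecom, max_staleness=timedelta(hours=1)):
    issues = []
    now = datetime(2026, 1, 15, 12, 0)  # fixed "now" for a deterministic example
    for sku, pos_rec in pos.items():
        ecom_rec = ecom.get(sku)
        if ecom_rec is None:
            issues.append((sku, "missing in ecommerce feed"))
        elif pos_rec["stock"] != ecom_rec["stock"]:
            issues.append((sku, "stock mismatch"))
        elif now - pos_rec["updated"] > max_staleness:
            issues.append((sku, "stale"))
    return issues

pos  = {"SKU1": {"stock": 5, "updated": datetime(2026, 1, 15, 11, 30)},
        "SKU2": {"stock": 9, "updated": datetime(2026, 1, 15, 11, 45)}}
ecom = {"SKU1": {"stock": 5}, "SKU2": {"stock": 7}}
print(out_of_sync(pos, ecom))  # [('SKU2', 'stock mismatch')]
```

An AI forecasting or fulfillment model trained on either feed alone would silently inherit exactly these discrepancies.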

Enter data modernization

A modern data environment is not just about migrating workloads to the cloud. It is about creating an architecture that can unify distributed data, improve visibility, support real-time decision making, and establish the governance needed to scale analytics and AI responsibly.

In retail and restaurants especially, where business conditions can change by the hour and differ significantly by location or region, synchronized and accessible data is foundational to better execution.

IDC research reinforces how significant this issue has become. According to IDC’s Global Retail Technologies and Business Processes Trends Survey, 2025, retail and restaurant organizations cite data visibility and accessibility, data synchronization, and data unification among the most important challenges to staying competitive. The same research also shows that poor data synchronization and integration, along with lack of access to real-time data and analytics, remain persistent barriers to successfully executing AI strategies.

Why AI efforts stall without modern data

Many organizations are eager to scale AI, but their data is still trapped in disconnected environments, governed inconsistently, or refreshed too slowly to support intelligent action. In that context, AI initiatives can easily stall, underperform, or produce outcomes that business users do not fully trust.

This is why so many retailers and restaurant operators are now prioritizing modernization efforts to improve how data moves across the enterprise and unlock efficiency and innovation. They want to reduce friction between systems and teams. They want better governance, better quality, and better observability. Increasingly, they want flexible architecture to support future use cases they may not have fully defined yet.

What this means for technology buyers and providers

Technology buyers eager to adopt the latest AI capabilities must think about data modernization and how they engage services partners to support this work. This goes beyond migrating legacy data estates. That is only the beginning.

Services providers must help brands design more adaptable, composable, and cloud-enabled foundations that support retail and restaurant-specific outcomes. This may include real-time inventory visibility, improved demand forecasting, dynamic pricing, customer intelligence, shrink reduction, faster fulfillment, or more efficient store and restaurant operations.

Retail and restaurant businesses do not need generic modernization strategies disconnected from business realities. They need partners that understand frontline complexity, omnichannel operations, distributed environments, and the increasing pressure to improve both efficiency and customer experience at the same time.

Data modernization as a strategic lever

For retailers and restaurant operators, data modernization is far more than a back-end initiative. It is becoming a strategic lever for resilience, innovation, and competitive differentiation. AI may be driving urgency, but modernization is what makes progress possible.

Dorothy Creamer - Sr. Research Manager - IDC

Dorothy Creamer is Senior Research Manager for IDC Research, Hospitality & Travel Digital Transformation Strategies, providing research and advisory services for hotels, casinos, restaurants, and travel organizations. Her research focuses on how these business segments are transforming and leveraging technology to increase efficiencies, deliver operational benefits, and identify new revenue streams. It reports on effective digital strategies that empower both guests and employees and analyzes areas of opportunity in a fast-evolving and highly competitive segment.

Margot Juros - Research Director, Worldwide Retail AI, Platforms, and Technologies - IDC

Margot Juros is a Research Director for IDC Retail Insights responsible for the Worldwide Retail Platforms & Technologies research program. Ms. Juros’s core research focuses on examining best practices, discerning emerging trends/critical business concerns, evaluating market conditions and vendor offerings to provide authoritative advice on IT investment strategies and optimal use of technologies for modern retail IT infrastructure. Her research covers key technologies for retail transformation, including cloud/edge, AI/GenAI/agents, cybersecurity/security, data management, payments, unified platforms, mobile, and networking/5G/Wi-Fi.

On April 8, 2026, Anthropic officially announced its frontier model Claude Mythos Preview and simultaneously launched Project Glasswing, a cybersecurity partnership program. The release is reshaping the balance of power in global cyber offense and defense, and it confronts China's cybersecurity industry with profound challenges and hard questions.

As the most disruptive AI model of the moment, Mythos demonstrates cybersecurity capabilities that exceed those of most top human security experts, yet not a single Chinese cybersecurity vendor appears among Project Glasswing's core participants. With global competition in AI security intensifying and technical barriers taking shape, Mythos brings not only opportunity but also challenges that cannot be ignored: the viability of traditional Chinese security products is being questioned anew, and how domestic vendors break through has become the industry's most urgent question.

Anthropic's latest breakthrough: Mythos and Project Glasswing rewrite the rules of cyber offense and defense

Mythos, Anthropic's most powerful model to date, was not trained specifically for cybersecurity. Its general intelligence nonetheless spills over naturally into core security tasks — vulnerability discovery, code reasoning, vulnerability reproduction and exploitation — where it delivers a generational leap. According to Anthropic's disclosures, Mythos has found thousands of zero-day vulnerabilities across all major operating systems and browsers, several rated high severity. Its capability surpasses every human except the very best security experts, and it completes these operations autonomously, without human guidance.

To manage the security risks Mythos creates, and to give defenders a head start, Anthropic launched Project Glasswing alongside the model. The program is built around a "defense first" principle: defenders should gain adequate protective capability before AI attack capability diffuses to a wider set of actors. Its founding partners include 12 technology and security giants — AWS, Apple, Google, Microsoft, Cisco, and Palo Alto Networks among them — and more than 40 organizations that build or maintain critical software infrastructure have received extended access to scan and harden their own systems and the open-source components they depend on. To fund the program, Anthropic has committed up to $100 million in API credits, donated $4 million to open-source security organizations to advance collaboration on vulnerability disclosure and supply chain security, and pledged to publish interim research findings within 90 days. No Chinese cybersecurity vendor has entered this partnership, leaving them without direct access to Mythos's model capabilities or the associated security resources.

Mythos's major impact on global and Chinese cybersecurity

Shared challenge: AI rewrites the logic of attack and defense, and traditional security systems face obsolescence

For China and the rest of the world alike, the core challenge Mythos poses comes down to the widening speed gap between AI-driven attack and human-paced defense, and the capability gap this opens in traditional security products. This shock is rewriting the rules of the entire industry.

First, the attack-defense time gap has collapsed, and passive defense no longer works. Traditional cybersecurity relies on manual analysis and static rule matching; vulnerability discovery and incident response run on cycles of hours or days. A Mythos-driven attack can complete a full-chain intrusion in minutes — autonomously discovering a zero-day, generating exploit code, and constructing the attack chain with no human involvement. The traditional alert-analyze-respond workflow cannot keep up, leaving defenders unable either to block attacks or to respond in time.

Second, the barrier to attack drops sharply, and criminal tooling becomes commoditized. If Mythos-class capability is misused, the technical barrier to network attacks disappears: non-specialists can use the model to generate malicious code, discover zero-days, craft highly convincing phishing content, and even carry out exploitation autonomously.

Finally, the core capabilities of traditional security products fail across the board. Vulnerability scanners, firewalls, and IPS/IDS products depend on static signature databases and hand-written rules, and cannot recognize the signature-less zero-day and stealth attacks AI generates. A traditional scanner, for example, cannot autonomously uncover deep logic flaws in code, whereas Mythos can quickly locate vulnerabilities that have lain hidden for decades.

China-specific challenges: technical barriers and asymmetric competition

Compared with the overseas vendors participating in Project Glasswing, China's cybersecurity sector faces challenges that are more particular and more severe:

The first core challenge is asymmetric access to technology, creating a clear generational gap with overseas peers. Project Glasswing participants can draw on Mythos to accelerate vulnerability discovery, threat detection, and defense optimization, while sharing security research and open-source resources — iterating their defensive capability rapidly. Chinese vendors, excluded from the partnership, cannot access Mythos or the associated resources and must develop equivalent technology on their own. The result is a built-in gap in the pace of AI security iteration, especially in high-end areas such as zero-day discovery and adversarial AI, where the gap may widen further and will be hard to close in the short term.

The second core challenge is a sharp escalation in threat levels and defensive pressure on critical infrastructure. China's finance, energy, government, and healthcare sectors rely extensively on open-source software and commodity operating systems — the very systems in which Mythos has already found large numbers of high-severity vulnerabilities. Overseas Glasswing participants can use the model to obtain vulnerability information and remediation guidance quickly and harden systems in time; Chinese vendors must find and fix these flaws themselves, which raises defense costs, extends the window of exposure, and significantly increases the risk of attacks on critical infrastructure. As Mythos-class capability diffuses, the state-level APT and criminal attacks China faces will become stealthier, more efficient, and more varied, further straining national cyber defenses.

Beyond this, China's cybersecurity industry faces derivative problems: a shortage of AI security talent and heavy pressure to fund independent R&D. Overseas, Project Glasswing has seeded a virtuous ecosystem of shared technology and pooled talent; China's reserve of high-end AI security specialists is thin, and independent development lacks mature technical references and ecosystem support, leaving Chinese defenders in a more reactive position against Mythos-enabled attacks.

Future directions and market opportunities for China's cybersecurity industry

Given the offense-defense shift Mythos has triggered, the generational technology gap, and the elevated threat level described above, IDC offers the following recommendations for China's cybersecurity industry:

  • First, face the challenge squarely and abandon wishful thinking. Domestic vendors must recognize the technical shock Mythos represents — the generational gap created by lack of access to leading overseas models, the mounting pressure on critical infrastructure defense, and the talent shortage — discard the inertia of passive, legacy-bound defense, turn the challenge into impetus for transformation, identify their AI-era weaknesses, and address them precisely.
  • Second, embrace change and break through development bottlenecks. Facing an industry-wide restructuring, vendors must move beyond their existing product portfolios, adapt to an AI-dominated offense-defense landscape, make AI enablement the core of their transformation, retire inefficient defense models, and shift from passive response to active breakout.
  • Third, accelerate AI capability iteration to build core competitiveness. AI is the backbone of future security products and the key to closing the generational gap. Vendors should increase R&D investment in AI security, prioritize deployment in core scenarios such as vulnerability identification, attack triage, and threat defense, iterate AI algorithms and product modules rapidly, adapt to local compliance and scenario requirements, and build AI-native security products that deliver measurable defensive results rather than technology for its own sake.
  • Fourth, invest in ecosystem collaboration and pool collective strength. No vendor can meet this restructuring alone. Vendors should partner with AI companies, research institutes, and industry associations to build a self-reliant AI security ecosystem: leverage standardized AI resources through technology partnerships to lower R&D costs; jointly tackle core technologies to offset talent and technology gaps; participate in industry standards and open-source ecosystems to share resources and coordinate defense; and pursue opportunities in segments such as critical infrastructure and SMEs to compensate for exclusion from overseas ecosystems.

Domestic cybersecurity technology providers are already increasing their investment in AI and have achieved notable results in security operations, data discovery and classification, threat detection and response, and phishing email detection. IDC forecasts that revenue from cybersecurity-related AI Agent applications in China will grow at a compound annual growth rate (CAGR) of 106.5% over the next five years, reaching RMB 59.35 billion by 2030 (as shown in the chart below).

At the same time, security products focused on protecting AI itself — large-model security assessment, LLM guardrails, and agent threat detection — are rapidly entering the market. IDC forecasts that China's AI security revenue will grow at a 50.5% CAGR over the next five years, reaching RMB 34.03 billion by 2030 (as shown in the chart below).
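The two forecasts imply specific base-year figures. Assuming the five-year window runs 2025 to 2030 (the article does not state the base year), the implied 2025 revenue can be back-computed from each CAGR:

```python
# Back-computing the implied base-year revenue from the forecast figures,
# assuming the five-year CAGR window is 2025-2030 (an assumption; the base year
# is not stated in the article). Values are in RMB 100 millions.

def implied_base(final_value, cagr, years=5):
    return final_value / (1 + cagr) ** years

agent_2025 = implied_base(593.5, 1.065)   # cybersecurity AI Agent applications
ai_sec_2025 = implied_base(340.3, 0.505)  # AI security overall
print(round(agent_2025, 1), round(ai_sec_2025, 1))
```

That is roughly RMB 1.6 billion and RMB 4.4 billion respectively in 2025 under this assumption — a reminder of how aggressive a 106.5% CAGR is: the agent market would double and a bit more every year for five years.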

In 2026, IDC will launch multiple market studies focused on both securing AI itself and AI-enabled security, tracking technology trends, showcasing vendors' latest capabilities, and supporting the continued transformation and rapid growth of China's cybersecurity market. Technology providers are encouraged to follow this research.

More related IDC research

IDC has launched a series of AI security studies in 2026, with in-depth analysis of AI-native security architecture, security-agent maturity assessment, AI-driven DevSecOps practices, and enterprise AI governance frameworks.

The rules of AI offense and defense have changed. IDC's research into large-model security and agent governance can help you quantify risk and rebuild your defenses. Contact IDC China for the latest insights and tailored consulting, and let us explore the path forward together. Please click here to contact us.

With Sora's exit, the global competitive landscape for multimodal foundation models is shifting profoundly. The turn from technical showpiece to commercial reality means the AI video race is entering a more rational phase — and it raises the bar for Chinese vendors. As multimodal capabilities advance and industry adoption deepens, Chinese foundation models are moving from catching up to leading, but compute costs, commercial viability, and compliance and safety remain critical tests. Drawing on its latest hands-on evaluations, IDC offers a systematic analysis of the current state, competitive landscape, and future trajectory of China's multimodal foundation models.

Sora's exit is no signal to slow down — China's multimodal models must accelerate

OpenAI recently announced the shutdown of Sora, its video generation model, and a product once regarded as the benchmark for AI video has formally left the market. The news has shaken the global AI industry and prompted fresh reflection in China's multimodal model sector: the departure of an external benchmark is no excuse to coast. It means Chinese multimodal technology must keep innovating independently and chart its own course in technology, ecosystem, and commercialization.

Sora's shutdown was driven by high compute costs, copyright and compliance pressure, and commercialization difficulties — a warning to the whole multimodal field: the era of showmanship is over, and practical, controllable, deployable capability is now the core of competitiveness. Paths built on external models or simple imitation are no longer sustainable; independently developed architectures, compliant and secure data practices, and efficient, affordable industrial value will decide the next phase of competition.

Amid this upheaval, China's multimodal models are showing strong momentum, with full-spectrum fusion across text, image, video, 3D, and speech reshaping content production and industrial efficiency. Sora's exit has cleared away anxious benchmark-chasing, but it also pushes the global race into a harsher elimination round of in-house development. For China this is both a window of opportunity and a stress test: compute foundations, algorithmic innovation, data security, ethical compliance, and a closed commercial loop are all indispensable.

In March 2026, International Data Corporation (IDC) published its China Multimodal Foundation Model Market Mainstream Product Evaluation report, a comprehensive assessment of leading domestic vendors' technical strength and product performance in three core areas: image generation, image understanding, and video generation. The report finds that China's multimodal AI industry is entering a new stage of high-quality development. Applications built on multimodal models can process and integrate multiple data types — richer and more context-aware — substantially improving accuracy, efficiency, and user experience. As the technology matures, multimodal AI will penetrate further into personal life, office scenarios, and enterprise applications, driving revolutionary change in how humans interact with machines.

China's multimodal models enter a period of rapid iteration: IDC's March 2026 evaluation results

From 2025 into early 2026, China's multimodal model sector saw an unprecedented wave of iteration, with a new generation of models making qualitative leaps in understanding and generating text, images, audio, and video. Vendors raced to release flagship products with stronger logical reasoning and long-context capability, so AI can now not only "read" complex charts but also create high-definition video in real time. Products from ByteDance, Alibaba, Kuaishou, Tencent, and others keep breaking through on key multimodal metrics, gradually forming a virtuous cycle of technical breakthrough, industrial application, and ecosystem feedback.

In January and February 2026, IDC evaluated the mainstream multimodal products on the market. The evaluation covered representative products from leading domestic technology suppliers, ran through February 28, and targeted the publicly available web versions. Test prompts spanned image generation, image understanding, and video generation. Scoring focused on the quality of the generated or interpreted content, assessed across instruction following and hallucination, logical coherence, robustness, texture and detail, generation time and stability, usability and creativity, and content safety, fairness, and privacy protection. The main findings:

Image generation: ByteDance Doubao Seedream 5.0, Tencent Yuanbao Hunyuan Image 3.0, and Alibaba Wan 2.6 lead on generation quality. These products stand out in semantic understanding, detail fidelity, and stylistic range, match users' creative intent precisely, and balance generation speed with image quality, stability, and content safety.

Image understanding: ByteDance Doubao Foundation Model 2.0, Alibaba Qwen 3.5, and StepFun Step3 performed best. These products show clear advantages in complex scene recognition, cross-modal reasoning, and fine-grained semantic parsing, handle mixed text-and-image input efficiently, and provide strong support for multi-scenario applications.

Video generation: ByteDance Jimeng AI Seedance 2.0, Kuaishou Kling 2.6, and Shengshu Technology Vidu Q3 delivered outstanding results, setting the industry benchmark with high generation quality and production efficiency and driving adoption of domestic video generation in short-video creation, film and TV effects, and virtual digital humans.

Tracking multimodal technology trends: China's multimodal models must still advance with care

The IDC report notes that as multimodal technology fuses more deeply with individual industries, Chinese vendors are positioned to take a more prominent place in the global market and inject new momentum into the digital economy. For the image and video modalities, IDC sees the following key technology trends:

Image modality — from generation to understanding, toward unification and controllability: development will emphasize leaps in generation quality and controllability, deeper understanding and reasoning, architectural unification, lightweight and on-device deployment, and 3D model generation.

Video modality — breakthroughs in temporal modeling, toward long video and real-time interaction: development will emphasize long context and spatiotemporal consistency, generation quality versus cost and efficiency, deep video understanding and multimodal interaction, and the fusion of 3D and world models — better serving personal use and industries such as film and entertainment, gaming, media, and education.

Leading vendors in China's multimodal market are commercializing on both enterprise and consumer fronts: leveraging consumer traffic and ecosystem advantages on one side, and on the other embedding multimodal AI deeply into enterprise workflows through model-as-a-service (MaaS) offerings and tailored solutions for media, short-video creation, film effects, virtual digital humans, ecommerce, and culture and tourism. Some domestic leaders with ecosystem advantages have already closed the content-traffic-monetization loop, leading overseas peers in both usage and paid conversion. Even so, multimodal vendors must keep monitoring conversion and retention, as some consumer products in the Chinese market are far from affordably priced.

Full penetration of enterprise scenarios will also take time. Sora's shutdown offers a cautionary lesson for China's multimodal vendors, who should stay alert to the following risks:

Content safety and deepfake risk: hyper-realistic images and video can be used for disinformation, financial fraud, and violations of personal rights, threatening social trust and public safety.

Regulatory, copyright, and legal compliance risk: training data often comes from unlicensed images, film, and short-video material, and ownership of generated content is ambiguous, inviting litigation and regulatory penalties. Regulators worldwide are tightening rules on AI-generated content, which may restrict core functions such as generating likenesses of real people or distributing content, constraining the industry's expansion.

Technology and compute cost risk: multimodal models require tens or even hundreds of times the compute of text models, making training and inference expensive beyond the reach of smaller vendors; algorithmic bias and model hallucination remain unresolved technical flaws.

Commercial sustainability risk: consumer willingness to pay, conversion, and retention need close monitoring; enterprise penetration will take time; and vendors must guard against content homogenization and unsustainable growth.

Anne Cheng, Research Manager at IDC China, commented that the evaluation results show China's multimodal AI industry has moved from catching up to leading on innovation. Top vendors keep pushing on both technical iteration and productization, achieving breakthroughs in foundational capability while making progress in commercial deployment. Technical competition has no finish line, and a benchmark's exit does not move the finish line closer. China's multimodal models must continue to strengthen their technical foundations, stay close to industry needs, and hold the line on safety to secure the initiative in the global AI landscape and complete the leap from follower to peer to leader.

IDC has long specialized in AI and generative AI, building a systematic research program around technology evolution, competitive dynamics, and commercialization. Based on continuous first-hand testing and industry tracking, IDC provides not only authoritative data and trend analysis but also decision-oriented strategic advice for technology suppliers, industry users, and investors, helping them identify key technology paths and commercial opportunities.

At this pivotal stage in the rapid evolution of multimodal models, contact us for the full research findings, report briefings, and tailored consulting services to get ahead of the next wave of AI. Please click here to contact us.

Anne Cheng - Research Manager - IDC

Anne Cheng is a research manager in IDC China whose research focuses on the AI and big data markets. She collaborates with IDC's regional and global consulting teams and is involved in the business development of related markets. Prior to joining IDC, Anne had nearly four years of working experience in the IT/ecommerce and consulting industries, serving as consultant and business analyst. Her experiences made her familiar with industry data/customers and helped her gain deep insights into the business application scenarios. Anne holds a master's degree in Statistics from the University of Missouri Columbia.

In the current window where policy and industry momentum converge, the intelligent transformation of China's central and state-owned enterprises (SOEs) is moving from a "technology adoption" phase into one of "system reconstruction and value realization." The 2026 government work report delivered at the national Two Sessions explicitly encourages central SOEs to take the lead in opening application scenarios, build new forms of the intelligent economy, and deepen the "AI+" initiative. SASAC is advancing a parallel "AI+" special initiative for central enterprises, emphasizing core businesses as the anchor, building coordinated and efficient industrial and operating mechanisms, and strengthening strategic support and demonstration effects. This top-level design signals that AI has shifted from a "technology variable" to a "development variable," becoming a key lever for SOEs to reshape their growth logic.

In terms of development path, SOEs are shifting from scale orientation to quality orientation. SASAC has put forward the "two ensures and two strives," requiring that the "one profit, five ratios" operating indicators improve steadily while achieving structural optimization. This target system is, in essence, a systematic reconstruction of input-output efficiency, asset quality, and operational resilience. Within this framework, AI is no longer a peripheral tool but a decisive factor supporting the transition to high-quality development. The key question is whether AI can be embedded in core businesses, enter the critical links of production, operations, and decision-making, and form a measurable value loop. Only when AI makes the leap from "analysis and recommendation" to "automated execution," while meeting business requirements for low hallucination rates and high reliability, can it truly support SOEs' high-quality development goals.

Three structural trends reshaping the path of SOE intelligent transformation

First, compute infrastructure is accelerating its upgrade from general-purpose capacity to an AI-native foundation. SOEs are already deeply involved in building the national computing network, and "compute plus power" coordination continues to advance. As large models and complex agent applications deepen, traditional general-purpose compute cannot support high-concurrency, high-complexity scenarios, and SOE demand for self-controlled AI compute foundations has risen markedly. Combined with IT application innovation (xinchuang) assessments and supply chain security constraints, large-scale adoption of domestic AI chips and compute systems is becoming a hard requirement. An integrated foundation of compute, data, and platforms centered on intelligent cloud will become the base infrastructure of SOE intelligence.

Second, data governance is shifting from "aggregation and management" to "factor-based operation," and data value is emerging as a new source of growth. SOE intelligent transformation hinges on building high-quality datasets and a data-factor system. On one hand, around key industries such as energy and manufacturing, SOEs are forming high-quality data assets that can support model training and agent iteration; on the other, through trusted data spaces and compliant circulation mechanisms, they are enabling cross-department and cross-industry data sharing and value conversion.

Third, AI applications are moving from pilot validation to large-scale deployment, with value delivery as the core evaluation criterion. With early large-model pilots largely complete, SOE investment priorities have clearly shifted toward scenario-based deployment capability. Enterprises no longer focus on model scale itself but on actual results in key scenarios such as production optimization, operations management, customer service, and risk control.

Looking further, intelligence initiatives are moving from layering on tools to reconstructing whole systems. AI is becoming a foundational capability of business operations, and SOEs are entering a deep-water zone centered on process reshaping and organizational adaptation. AI is no longer a bolt-on system; it is embedded within complex organizational and business structures, driving process reengineering and upgrades to management models. This transition means enterprises are moving from "digitizing the business" to "making digital the business," with digital technology shifting from an auxiliary tool to the operating logic of the business itself.

IDC recommendations: three structural opportunities point to the key directions of SOE intelligence upgrades

Multi-agent collaboration architectures are becoming the key technical paradigm for complex business scenarios.

SOE business chains are long and their specializations fine-grained, so a single model cannot cover end-to-end needs. By building a multi-agent collaboration system that modularizes distinct capabilities into a cooperative network, SOEs can effectively support complex operations that span departments and systems. The architecture also naturally fits SOEs' hierarchical organizational structures, enabling flexible deployment while meeting governance requirements for manageability, controllability, and auditability.
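The pattern of modular capabilities coordinated through a cooperative network, with every step auditable, can be sketched minimally in code. This is an illustrative toy under assumed names (the `Orchestrator` class, the capability labels, and the agents are all invented for this sketch), not any vendor's actual architecture:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    """Routes tasks to specialized agents and records an audit trail."""
    agents: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def register(self, capability: str, agent: Callable[[str], str]) -> None:
        # Each capability is a pluggable module, mirroring modular specialization.
        self.agents[capability] = agent

    def dispatch(self, capability: str, task: str) -> str:
        # Every dispatch is logged, supporting manage/control/audit requirements.
        result = self.agents[capability](task)
        self.audit_log.append((capability, task, result))
        return result

# Hypothetical capability modules for two departments.
orch = Orchestrator()
orch.register("procurement", lambda t: f"procurement handled: {t}")
orch.register("finance", lambda t: f"finance approved: {t}")

orch.dispatch("procurement", "order 50 servers")
orch.dispatch("finance", "budget for 50 servers")
print(len(orch.audit_log))  # 2 entries, one per cross-department step
```

In a real deployment the agents would be model-backed services and the audit log a durable, access-controlled store; the point of the sketch is only that routing and logging live in one governed layer rather than inside each agent.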

Building high-quality datasets and a data-factor system is becoming a core foundational project.

As industry models and agent applications deepen, data quality directly determines AI effectiveness. SOEs are moving from piecemeal data governance to systematic data capability building, supporting continuous model optimization through standardized, finely annotated, multimodal datasets. At the same time, around data circulation and value conversion, industry-facing sharing and operating systems are gradually taking shape.

Domestic hardware and software stacks are entering a deep-adaptation phase, driving large-scale deployment of intelligent applications.

Guided by the self-reliance and controllability strategy, SOEs continue to pursue full-stack domestic substitution, with core business systems as the focus of deployment. Progress to date shows that domestic alternatives have moved from "usable at single points" to "good at scale," achieving performance optimization and cost reduction in some key scenarios. This trend provides a secure, stable, and sustainable technical foundation for intelligent applications.

Overall, as 2026 opens the 15th Five-Year Plan period, the path of SOE intelligent transformation has become progressively clearer: intelligent compute as the foundation, high-quality data at the core, industry models and agents as the levers, and systematic reconstruction as the route, achieving large-scale AI application in core businesses. From data governance and industry applications to cloud and compute systems, and on to deeper use cases in key industries, the direction is consistent: intelligence is shifting from "capability building" to "value realization." Driven jointly by policy guidance, technology maturity, and business demand, SOEs are entering a new growth cycle powered at its core by AI.

Related IDC research

Looking toward 2026, IDC is conducting systematic research on "AI-driven high-quality development of SOEs," focusing on intelligent compute infrastructure, industry large-model applications, data-factor systems, and other key directions. Through enterprise surveys, case analysis, and market tracking, IDC produces a series of research reports covering technology trends, industry practice, and investment decisions, providing actionable references for SOEs' strategic planning, path selection, and value assessment within the "AI+" initiative. This research will be updated continuously to support SOEs in making the leap from capability building to value realization.

If you would like to learn more about IDC's research and advisory services on SOE intelligent transformation, industry large-model deployment, data-factor system building, or AI investment planning, please get in touch. IDC can provide customized research, dedicated consulting, and implementation path design tailored to your enterprise, helping SOEs achieve sustainable growth and value breakthroughs in the "AI+" transition. We welcome your inquiries at any time; our analyst team will offer targeted insights and recommendations.

To learn more about this research or to inquire about other related IDC research, please click here to contact us.

The Japanese AI infrastructure market is currently at a major turning point. Until now, market growth has been driven by investments in AI infrastructure supporting model training. However, IDC expects a transition toward a phase of real-world deployment centered on inference. As AI adoption expands from proof of concept (PoC) to full-scale production, the role of AI infrastructure and the requirements placed upon it are undergoing significant changes. IDC positions 2026 as the inflection point from training to inference.

1. Rapid Growth of the Domestic AI Infrastructure Market and the Shift Away from a “Training-Centric” Model

In recent years, the Japanese AI infrastructure market has expanded rapidly. Backed by large-scale investments from hyperscalers and domestic cloud providers, the market recorded year-over-year growth exceeding 100 percent in both 2023 and 2024, more than doubling in size for two consecutive years. IDC forecasts that spending on Japanese AI infrastructure will reach 694.6 billion yen in 2025 and continue growing at a compound annual growth rate (CAGR) of 7.3 percent, approaching nearly 1 trillion yen by 2030.
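As a quick sanity check, the "approaching nearly 1 trillion yen" figure follows directly from compounding the 2025 base at the stated CAGR. The figures come from the article; the formula is standard compound growth, and the variable names are illustrative:

```python
# Project the 2030 market size from IDC's stated 2025 base and CAGR.
base_2025 = 694.6  # billion yen, Japanese AI infrastructure spending, 2025
cagr = 0.073       # 7.3 percent compound annual growth rate
years = 5          # 2025 -> 2030

projected_2030 = base_2025 * (1 + cagr) ** years
print(f"{projected_2030:.1f} billion yen")  # ~988 billion yen, i.e. "nearly 1 trillion"
```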

However, the drivers of future growth will change significantly. In addition to traditional training workloads, demand for inference, where AI is continuously used within business operations, will expand and shift the market’s core focus. IDC predicts that by 2027, spending on inference in the Japanese AI server market will surpass that on training. Furthermore, from 2025 to 2030, the CAGR for inference-related spending is expected to exceed that of training by more than 10 percentage points.

2. Changes in AI Infrastructure Utilization Driven by the Expansion of Inference

According to IDC’s latest survey, Japan Digital and AI Infrastructure Strategies and Investment Survey 2026, public cloud accounts for the majority of AI infrastructure planned for inference use. At the same time, “private AI infrastructure,” including dedicated environments and edge deployments, accounts for 20 to 30 percent of planned usage.

Meanwhile, only 22 percent of organizations are currently leveraging internal data for AI in a full-scale or advanced manner. This indicates that leading enterprises are just beginning to utilize internal data, including confidential and personal information, for AI applications.

IDC’s research shows that these early adopters intend to increasingly utilize private AI infrastructure going forward. This trend is driven by the need for optimized configurations tailored to specific business requirements, higher predictability in availability and costs, and the importance of addressing regulatory requirements and sovereign AI considerations. These organizations are building AI foundations that balance cost competitiveness with reliability while ensuring business continuity.

  • Only 22 percent of companies are leveraging internal data extensively or at an advanced level for AI.
  • Leading organizations in internal data utilization show a strong intention to adopt private AI infrastructure to enhance cost competitiveness and predictability while building reliable AI foundations that also account for sovereign AI.

As AI infrastructure becomes directly linked to national strategies and corporate competitiveness, addressing sovereign AI and data sovereignty is becoming increasingly important. From the perspectives of data protection, data residency management, and geopolitical risk mitigation, the use of dedicated environments and sovereign clouds is expected to expand.

3. Expansion of the AI Infrastructure Services Market and Changes in Competitive Dynamics

With the expansion of AI infrastructure adoption, the IT infrastructure services market covering deployment, operation, and maintenance is also experiencing rapid growth. The Japanese AI-related IT infrastructure services market is projected to grow from 95.7 billion yen in 2025 to 232.0 billion yen by 2030, achieving a CAGR of 19.4 percent. The increasing complexity of AI infrastructure, including requirements such as liquid cooling and advanced data center facilities, is driving demand for specialized services.
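Conversely, the stated 19.4 percent CAGR can be recovered from the two endpoint figures in the forecast. Again, the figures are from the article and the variable names are illustrative:

```python
# Recover the CAGR implied by the 2025 and 2030 endpoint forecasts:
# CAGR = (end / start) ** (1 / years) - 1
start_2025 = 95.7  # billion yen, AI-related IT infrastructure services, 2025
end_2030 = 232.0   # billion yen, 2030
years = 5

cagr = (end_2030 / start_2025) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~19.4%
```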

The competitive landscape is also changing. It is moving away from a traditional focus on hardware performance toward flexibility in infrastructure selection, service delivery capabilities, and the ability to support production-level AI deployment. While vendors that led with high-performance GPU-based infrastructure and related services have driven the market to date, IDC expects that companies capable of providing end-to-end support, from AI adoption and application development to hybrid environment operations and sovereign AI compliance, will establish a competitive advantage.

IDC Report Overview

IDC has published a report analyzing changes in the Japanese AI infrastructure market in detail: Japan AI Infrastructure and Services 2026: The Shift in Competitive Dynamics Driven by Inference. This report provides a segmented market forecast from 2025 to 2030 to capture structural changes in the Japanese AI infrastructure market. It includes analysis by server and storage, service provider and enterprise, deployment model, and industry vertical. The AI server market is further segmented by training and inference as well as accelerated and non-accelerated servers.

In addition, the report presents forecasts for the Japanese IT infrastructure services for AI market by customer type and service type. It also examines changes in AI infrastructure demand and key vendor trends, clarifying future market opportunities and changes in the competitive landscape.

Through these analyses, readers can gain a comprehensive understanding of how demand structures are evolving from training to inference, differences in investment trends between service providers and enterprises, and emerging opportunities in the expanding services market.

For more detailed insights and market trends, please contact our analysts by completing this form: IDC | Identifying Market Opportunities – Contact Us.

Yukihisa Hode - Research Manager, Infrastructure & Devices, Research, IDC Japan - IDC Japan

Yukihisa Hode is a research manager covering digital infrastructure strategies as well as AI infrastructure, IT infrastructure services, IT operations, hybrid/multicloud, and hyperconverged infrastructure (HCI). He leads the research program on digital infrastructure strategies, providing insight and advice on digital infrastructure through research reports, marketing content, and presentations to support IT and digital decision-making.

Across industries, AI has already delivered measurable operational gains. Workflows have been automated. Processes have accelerated. Teams have improved efficiency and reduced costs. Early AI adoption focused on productivity because leaders needed clear, measurable returns.

These early results were important. Contact centers reduced handle times. Back-office operations automated routine tasks. Sales and marketing teams improved throughput. AI proved it could enhance performance across multiple business functions.

However, productivity advantages diffuse quickly.

What creates competitive differentiation in one quarter often becomes standard capability the next. Productivity improvements layered onto existing operating models eventually reach saturation. Organizations find themselves optimizing processes that competitors can easily replicate.

The result is what many leaders are beginning to recognize as a productivity plateau.

Why productivity gains plateau

Productivity-first strategies hold organizations back in three ways.

They reinforce functional silos.
When AI is deployed function by function, each team focuses on optimizing its own objectives. Marketing automates campaigns, finance improves reporting cycles, and service teams reduce response times. Gains develop in isolation rather than reinforcing enterprise-wide value.

They lock in current assumptions.
Optimization strengthens existing workflows and metrics. As markets evolve, organizations that invest heavily in refining legacy models often find themselves constrained by the very systems they improved.

They produce linear gains.
Efficiency improvements inevitably plateau. AI becomes an improvement layer rather than a growth engine.

The limitation is not the technology itself. AI capabilities continue to advance rapidly. The constraint lies in operating design.

When AI is layered onto legacy structures without rethinking how value is created, outcomes remain incremental.

The limits of efficiency as a strategy

Early AI adoption naturally focused on the most immediate and measurable gains. Automation reduced costs and accelerated execution. These results helped organizations justify investment and build confidence in the technology.

Over time, however, efficiency becomes table stakes.

Competitors implement similar automation. Vendors integrate comparable capabilities into standard platforms. What once provided differentiation becomes a baseline expectation.

Organizations then face a strategic choice.

They can continue optimizing existing models—capturing smaller, incremental gains—or begin redesigning the systems that define how value is created.

This transition marks a shift from productivity to innovation.

Innovation as the structural payoff of agentic AI

Innovation occurs when AI reshapes enterprise structure rather than simply accelerating task execution.

Agentic systems enable coordinated decision-making across marketing, supply chain, finance, service, and partner ecosystems. Systems move from isolated automation toward orchestration embedded within enterprise operating models.

This shift changes how organizations capture value.

When agents operate autonomously at scale, assumptions about capacity, cost, and output evolve. Business cases designed for linear improvement fail to capture the compounding value created when systems coordinate across portfolios and ecosystems.

Innovation beyond productivity requires organizations to rethink economic logic, governance models, and even industry boundaries.

Moving beyond the productivity plateau

Organizations that remain focused exclusively on efficiency risk becoming highly optimized versions of yesterday’s operating model.

Those that move beyond productivity gains begin to redesign enterprise systems around coordination, adaptability, and growth.

The shift from productivity to innovation does not eliminate the importance of efficiency. It clarifies its limits.

Efficiency improves performance.
Innovation reshapes advantage.

In the agentic era, leaders who understand the difference will position their organizations to capture the next wave of AI-driven value.

IDC

International Data Corporation (IDC) is the premier global market intelligence, data, and events provider for the information technology, telecommunications, and consumer technology markets. With more than 1,300 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC’s analysis and insight help IT professionals, business executives, and the investment community make fact-based technology decisions and achieve their key business objectives.

My introduction to IDC didn’t come from a report or a pitch. It came from sitting in a room at IDC Directions 2025.

But within the first few sessions, it was clear this was something different.

At most events, the product is something you can demo. At IDC Directions, the product is the data. Every session was grounded in it. Not opinions, not surface-level trends, but actual evidence. What the data shows. What it means. And most importantly, what you should do next because of it.

I remember walking in with pretty standard expectations. I thought it would feel like most customer events I’d been to before. Some presentations, maybe a few product narratives, a chance to network and pick up a couple of useful ideas.

When the data is the product, the conversation shifts. It moves from opinion to evidence, and that changes how decisions get made.

That shift changes everything.

Even the panel sessions felt different. Instead of talking about challenges in the abstract, people were digging into how they were navigating them. What was working, what wasn’t, where things were breaking down. It wasn’t about agreeing that problems exist. It was about figuring out how to move forward.

If you’re responsible for making decisions in this environment, that difference matters.

What I Saw in the Room

What stood out just as much as the content was the energy in the room.

Every seat was filled. People weren’t distracted. They were paying attention, taking photos of slides, and writing things down. After sessions, you’d see people immediately tracking down analysts to continue the conversation.

The 1:1 area for client/analyst meetings was packed, with rows of tables and discussions happening back-to-back.

It didn’t feel like people were there to hear something interesting. It felt like they were there to get answers to bring back to their teams. And that’s a very different kind of environment because the conversations are grounded in reality, not theory. That level of engagement tells you something important. People saw immediate value in applying what they were hearing right away.

The Moment It Clicked

There was one moment that really made it click for me.

It was during the rapid-fire predictions session after the breakouts. The analysts took everything they had shared across the event and pushed it forward. Not just “here’s what’s happening,” but “here’s what we see in the future.”

It’s one thing to tell someone it’s raining. It’s another thing to tell them they’re going to need an umbrella while the sun is still shining. That’s what IDC does. It connects insight to action before the urgency is obvious. It helps you prepare for decisions before the pressure shows up.

What Changed for Me

I left that event with a completely different understanding of what IDC actually is.

Honestly, I was giddy. Because I realized what access to this kind of expertise really means.

At previous companies, I would have pushed hard just to get time with analysts like this. Now I get to work with them directly. People like Laurie Buczek, who advises CMOs, CROs, and strategy leaders on how to modernize marketing, shift business models, and reduce risk.

That means I can take a real plan, something I’m actively working on, and get guidance grounded in data and real market perspective. That’s not just helpful. It changes how quickly you can make decisions and how confident you are in them. Instead of debating internally for weeks, you can pressure-test your thinking with people who see the market every day.

Why This Year Feels Different

And it’s a big part of why I’m so excited about Directions this year. Because if last year was about seeing the value, this year feels like it’s about applying it in a much more urgent environment.

The conversation around AI has changed quickly. You can hear it in the questions leaders are asking. It’s no longer about what AI is or where to experiment. Now it’s about how to scale it, operationalize it, govern it, and prove that it’s actually delivering value.

The shift from exploration to execution is real.

Visit the IDC Directions 2026 event page to see more about what’s going on in Boston.

AI is no longer about discovery. It’s about evolution. And that shift raises the stakes. These aren’t future decisions anymore. They’re decisions that impact how the business performs now.

That creates a different kind of pressure. The decisions being made now will shape the next few years for many organizations. There’s less room for trial and error, and a much greater need for clarity.

That’s where IDC plays a very specific role. Not by adding more noise, but by helping leaders focus on what matters, grounded in evidence, so they can move forward with confidence.

What I’m Looking Forward to at Directions 2026

Going into Directions 2026, I’m looking forward to very different things than I was last year.

  • I want to hear how IDC is thinking about the future of tech intelligence, especially from new IDC CEO, Lorenzo Larini.
  • I’m interested in where the data is pointing when it comes to AI investment and value, not just potential.
  • I’m paying close attention to how conversations around the agentic era are evolving, and what that means for how businesses operate and compete.
  • And I’m especially interested in the AI Lab.

There’s a limit to what you can absorb from reading. Being able to engage directly, ask questions, and explore how these insights apply in real scenarios brings a different level of clarity.

Check out the full IDC Directions 2026 agenda and learn what topics will be discussed.

Who Benefits Most from IDC Directions?

Stepping back, I think the people who will get the most out of this event are the ones who are actively trying to make decisions right now. If you’re responsible for strategy, for AI policy, or even for bringing AI-powered products to market, the environment has changed.

Buyers are using AI. They’re using data. They’re relying on trusted intelligence to guide their decisions. Understanding how those decisions are being shaped isn’t optional anymore. It directly impacts how you position, invest, and compete.

If You’re Still Deciding

If you’re on the fence about attending, I’d put it this way:

You can spend time piecing things together on your own. Reading reports, interpreting signals, trying to build a clear plan in a very noisy environment, or…

You can be in the room. Just like me.

Hear the latest insights directly from the people producing the data. Talk through your specific challenges. Compare notes with others who are navigating the same decisions. IDC Directions isn’t about more information. It’s about making the right decisions sooner before the cost of waiting shows up in your business.

And once you’ve seen what that looks like in practice, it’s hard not to want to be there again.

Ryan Smith - Content Marketing Director - IDC

Ryan Smith is the Director of Content Marketing at IDC, where he leads brand-level content and social media strategy, aligning research insights with compelling storytelling to engage technology decision-makers. With a background in both IT and marketing, Ryan brings a unique blend of technical understanding and creative strategy to his work. He’s also a seasoned storyteller, speaker, and podcast host who believes the right message, told the right way, can drive both trust and transformation.