NVIDIA’s GTC 2026 announcements reinforce a structural transition underway in client-side AI infrastructure. Traditional workstations are evolving beyond the familiar tower form factor toward a new class of high-density, near-user AI systems that IDC identifies as part of an emerging “sidetop” category.

These systems elevate local compute capabilities while maintaining the proximity, control, and responsiveness required for next-generation AI workflows.

NVIDIA’s updates to the GB300 architecture and advancements in local agent orchestration reflect this broader shift. Combined with Dell’s introduction of its first GB300-based OEM systems, the market is entering a phase in which deskside AI compute is becoming operationally mainstream rather than experimental.

GB300 matures into a deskside AI supercomputer

At GTC 2026, NVIDIA introduced an enhanced DGX Station built on the GB300 Grace Blackwell Ultra Desktop Superchip, positioned as the most powerful deskside AI system in NVIDIA’s portfolio.

The platform delivers up to 20 petaflops of local compute performance and is capable of running one-trillion-parameter models entirely onsite, enabling development teams to execute advanced AI workloads without dependency on rack-scale systems or external infrastructure.
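To make the claim concrete, here is a back-of-envelope sketch of the memory a trillion-parameter model's weights would occupy at different precisions. The precision choices and the 20% runtime overhead factor are our own illustrative assumptions, not NVIDIA specifications.

```python
# Back-of-envelope estimate of the memory needed to host a model's
# weights locally. All figures (precisions, 20% runtime overhead) are
# illustrative assumptions, not NVIDIA specifications.

def model_memory_gb(params: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Approximate GB to hold the weights, with a fudge factor for
    KV cache, activations, and runtime overhead."""
    return params * bytes_per_param * overhead / 1e9

ONE_TRILLION = 1e12
for label, bpp in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    print(f"{label}: ~{model_memory_gb(ONE_TRILLION, bpp):,.0f} GB")
```

Even under these rough assumptions, low-precision formats are what make a one-trillion-parameter model plausible on a single deskside system.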

NVIDIA DGX Spark and DGX Station (Source: NVIDIA, 2026)

Compared to the GB300 configuration previewed during the GTC 2025 cycle, the 2026 update reflects NVIDIA’s full transition into the Blackwell generation. The system shifts from a hybrid exploratory design to a stable, production-ready architecture aligned with agentic and multimodal workload requirements.

For organizations pursuing AI factory-style development environments in constrained spaces, the GB300 represents a viable deskside alternative to small-scale cluster deployments.

Dell introduces first OEM GB300 offering

Dell’s GTC 2026 announcement marked a significant milestone, as the company became the first OEM to introduce GB300-based systems within the Dell AI Factory with NVIDIA portfolio. The offering provides enterprises with validated system configurations, integrated storage and data pipeline capabilities, and end-to-end lifecycle support aligned with Dell’s existing AI infrastructure frameworks.

Dell Pro Max with GB300 (Source: Dell, 2026)

OEM adoption is a critical indicator of enterprise readiness. With Dell bringing GB300 systems into general availability, organizations can now deploy deskside AI compute as part of standardized IT planning rather than custom or isolated implementations. This enhances the GB300’s relevance for enterprise environments where compliance, orchestration, and operational predictability are required.

NemoClaw introduces a framework for local agentic computing

Alongside hardware updates, NVIDIA introduced NemoClaw, a secure, enterprise-ready reference stack for managing local agentic systems. NemoClaw provides governance and safety layers necessary for operating persistent AI agents on local devices while protecting confidential and sensitive information.

NemoClaw (Source: NVIDIA, 2026)

As deskside systems gain the capability to host large models and continuous agent workflows, IDC views frameworks like NemoClaw as essential for enabling practical, policy-aligned deployment of agentic AI within enterprise environments. The combination of local compute capacity and controlled agent execution marks a meaningful shift from experimental agent frameworks toward structured operational use.

Implications: The sidetop era begins

The combined effect of NVIDIA’s GTC 2026 announcements signals a foundational change in how AI workloads will be distributed across compute tiers. The workstation is transitioning from a peripheral productivity tool into a critical component of the AI development and inference lifecycle.

IDC assesses that the GTC 2026 announcements represent a pivotal moment in the evolution of workstation computing. The market is moving beyond desktop-centric paradigms toward a sidetop architecture that integrates AI compute into the physical and operational workspace.

Organizations planning AI strategies should expect deskside systems to play a significantly larger role in both development and inference workflows over the next 24 to 36 months.

Explore IDC’s Workstation Opportunities research to understand how AI is reshaping workstation demand, use cases, and market dynamics.

Mohamed Hakam Hefny - Senior Program Manager - IDC

Mohamed Hefny leads market research in EMEA on professional workstation PCs and solutions. He also reports on professional computing semiconductors, processors, and accelerators (CPUs and GPUs), as well as breakthroughs and trends related to the market. In addition, Mohamed is actively involved in AI PC taxonomy and research. He participates in business development projects, contributes to consulting activities, and provides IDC customers with analysis, opinions, and advice.

"Consolidating security through development and pursuing development through security": the 15th Five-Year Plan lays the foundation of the new development pattern by building an all-domain security system.

As the digital economy penetrates deep into the real economy and new tracks such as the low-altitude economy and artificial intelligence rise rapidly, security is no longer an optional add-on to development but a question that must be answered across economic and social development. In the 15th Five-Year Plan, the security domain receives unprecedented strategic positioning: four core directions, cybersecurity, data security, AI security, and low-altitude security, are laid out in coordination. From institution building to technological innovation, and from domestic governance to global cooperation, the plan sketches a new security blueprint of intelligence-built defenses, all-domain coverage, and coordinated development, building a firm security barrier for Chinese-style modernization.

Based on the plan, IDC sees the following prospects for the security domain during the 15th Five-Year Plan period:

1. Cybersecurity: elevated to a cornerstone of national security, with intelligence rebuilding the defense system

In the 15th Five-Year Plan, cybersecurity stands as an independent section of a national plan for the first time, formally becoming "infrastructure" for the digital economy and a key cornerstone of national security, a qualitative leap in its strategic positioning. The plan sets out four core tasks: deepening comprehensive governance and cracking down hard on cybercrime; supporting technological innovation and industrial development, with particular emphasis on "advancing disaster-recovery and backup systems and strengthening cybersecurity protection for industrial control systems and for new technologies and applications"; and, finally, participating deeply in global cyberspace governance. Together these form an all-around layout of building defenses at home while expanding cooperation abroad, making a clean and healthy online environment fertile ground for the digital economy.

At the technical level, cybersecurity is completing the key leap from passive defense to proactive intelligence, with AI as the core engine automating threat detection and making response workflows intelligent, greatly improving protection efficiency. Industry development will also move past single-product competition into a new stage of ecosystem collaboration: managed models such as MDR/MSSP are rising quickly, effectively lowering enterprises' security-operations costs so that even small and midsize businesses can share in professional protection capabilities. On the global front, China is playing an active role in cybersecurity governance, contributing Chinese solutions to global cyberspace security.

From human defense to technical and intelligent defense, an AI-driven next-generation defense system is redrawing the boundaries of cybersecurity, shifting it from after-the-fact remediation to advance warning and rapid in-event response, and making it a truly solid foundation for the digital economy.

2. Data security: full-lifecycle governance to unlock the value of data as a production factor

As a new type of production factor, data requires security governance that is key to the high-quality development of the digital economy. The 15th Five-Year Plan focuses on data security and proposes a full-lifecycle governance system, building data security barriers layer by layer, from institutional foundations to technological enablement and from domestic governance to cross-border flows, so that data can deliver maximum value on a secure footing.

The plan explicitly proposes "establishing and improving foundational data-factor institutions covering data property rights, circulation and utilization, revenue distribution, and security governance," and implementing classified, graded data management with differentiated protection according to data importance and sensitivity, making data protection more precise and effective. It also calls for sound, science-based regulatory mechanisms that crack down on data abuse, deepfakes, and privacy leaks in accordance with the law, drawing red lines for data-factor circulation.

Technology is making data-security governance smarter: AI enables intelligent sensing, dynamic defense, and end-to-end governance, moving data security from passive defense to proactive, intelligent governance. On cross-border data flows, the plan balances orderly flow with security control, establishing mechanisms for the secure, orderly cross-border flow of scientific research and other data on one hand, while actively participating in global governance and building cross-border data security defenses on the other, again cracking down on data abuse, deepfakes, and privacy leaks in accordance with the law.

3. AI security: balancing innovation and risk on the way to a responsible intelligent era

AI is the core driving force of the new round of technological revolution, and AI security is the precondition for its healthy development. On AI security, the 15th Five-Year Plan proposes "promoting a full-lifecycle risk-management regime for AI and improving a risk prevention and control system covering security monitoring, risk early warning, and emergency response."

The plan emphasizes strengthening data governance, accelerating the construction of AI corpora, and establishing rules for the reasonable use of training data, preventing data-security risks at the source. It also calls for vigorous research into agent-security technologies to provide technical support for security across the entire AI chain. On the enablement side, AI is in turn empowering security governance, improving sensing and early warning, command and decision-making, precision management, and real-time response, making governance more efficient and intelligent.

The plan further places AI security within the core scope of frontier-technology R&D and the "AI+" initiative: developing high-performance AI chips and foundational software stacks, and deepening research into key algorithms such as explainability and decision-making. At the same time, it promotes AI applications in market supervision, workplace safety, disaster prevention and mitigation, and the maintenance of cyberspace, and explores a security-governance system in which natural persons, digital humans, and intelligent robots work together, making AI a new instrument of security governance.

Innovation knows no borders, but security has a bottom line. The 15th Five-Year Plan keeps AI security and innovation in step, making the intelligent era more controllable and more trustworthy.

4. Low-altitude security: safeguarding the new order of three-dimensional transportation and underpinning the low-altitude economy

As an important emerging track, the low-altitude economy is one of the highlights of the 15th Five-Year Plan, and low-altitude security is the precondition for developing it at scale. Across three dimensions, technical support, infrastructure, and protection systems, the plan builds an all-around low-altitude security framework to safeguard the low-altitude economy and open a new order of three-dimensional transportation.

On technology and infrastructure, the plan focuses on three core security directions: low-altitude equipment, low-Earth-orbit satellite internet, and low-altitude infrastructure. It promotes the build-out of low-altitude intelligent connected systems and of low-altitude protection capabilities in key areas, and coordinates the construction of satellite-internet constellations while strengthening their security protections, giving the low-altitude economy hard support.

Building this low-altitude security system clears away safety obstacles for low-altitude economy scenarios such as drone logistics, low-altitude tourism, and urban air mobility, and will itself become a new hotspot in the security domain.

Looking ahead, as the 15th Five-Year Plan's policies are implemented, the security domain will enter a golden period of technological innovation, industrial upgrading, and improved governance, and a development model built on security will keep the Chinese economy moving steadily along the path of high-quality development.

Summary:

The 15th Five-Year Plan elevates security to a new strategic height, building a modern security system of intelligence-built defenses and all-domain coverage around four core domains: cybersecurity, data security, AI security, and low-altitude security. IDC believes that future security development will be intelligence-driven, ecosystem-collaborative, and governed proactively: AI will comprehensively upgrade protection, data governance will activate the value of data as a factor, and low-altitude security will underpin emerging business models. Security is shifting from a safeguard of development to its cornerstone, laying a solid base for high-quality development.

Related IDC research reports:

China Large Language Model Security Evaluation Platform Vendor Assessment, 2026

China Industrial Control Firewall Market Shares, 2025

China Industrial Control Security Cyber Range Market Shares, 2025

China Security Large-Model Appliance Technology Assessment, 2026

China Data Discovery and Classification/Grading Agent Capability Assessment

China Cybersecurity Software Technology Roadmap, 2026

China Cybersecurity Vendors' Asia/Pacific Overseas Service Capability Assessment, 2026

China Data Security Management Platform Market Shares, 2025

China Database Security Audit Market Shares, 2025

China Enterprise General-Purpose AI Agent Security Protection Solutions Market Insights, 2026

China IoT Security Market Shares, 2025

DeepFake Agent Market Insights, 2026

In 2026, IDC launched research on AI security, industrial control security, low-altitude security, and other technologies, with in-depth analysis of new technologies and scenarios. If you would like to discuss these topics further, please contact us.

For more information about this research or other related IDC research, please click here to contact us.

At GTC 2026, NVIDIA's series of product launches and strategic moves once again had a major impact on the global AI computing market. Its headline Vera Rubin full-stack computing platform breaks with the traditional logic of single-chip-led compute competition, centering instead on rack-scale systems and an integrated AI-factory architecture, and emphasizing the platform value of hardware-software co-design and end-to-end optimization.

IDC's latest server market tracker forecasts that the global accelerated-computing server market will exceed US$1 trillion by 2029, continuing to surge at roughly 30% a year over the next five years. AI progress in the coming years will continue to depend on the expansion and technical renewal of AI computing infrastructure. Drawing on IDC's worldwide AI infrastructure market data, IDC has distilled five core trends in AI computing over the next several years as decision-making references for technology companies, cloud service providers, and investors.

Trend 1: Compute architectures specialize further, and GPU/CPU/dedicated-accelerator collaboration becomes mainstream

The coordinated launches of the Vera Rubin computing platform and the Groq 3 LPX rack clearly reflect the deep evolution of AI compute architecture from general-purpose optimization toward specialized division of labor. Vera Rubin's Rubin GPU emphasizes high throughput and massive-parameter processing, suited to the complex reasoning and training needs of agentic AI, while Groq's LPX LPUs focus on low-latency token generation, specializing in the decode stage of inference. The two cooperate through a disaggregated inference architecture, in essence the fullest expression of compute-architecture specialization.

This trend is an inevitable choice for global AI compute. As large language models move toward trillions of parameters and agentic AI applications spread, the efficiency bottlenecks of a single GPU architecture in specific scenarios are increasingly apparent, and the value of dedicated accelerators keeps rising; LPUs optimize for the latency and bandwidth bottlenecks of large-model inference, forming a complementary division of labor with GPUs. IDC believes that over the next few years, heterogeneous architectures combining general-purpose GPUs with dedicated accelerators will become standard for AI compute clusters, scenario-specific compute chips will continue to emerge, and compute architecture will be defined by scenario fit and collaborative division of labor, decisively ending the GPU-only era.

Trend 2: Interconnect technology iterates upward, with superpod designs and compute-network convergence becoming the focus of compute architecture

The launch of NVLink 72 and the Spectrum-6 switch within the Vera Rubin architecture marks a new level for AI-cluster interconnect technology, with compute-network convergence becoming the core focus of technical competition. NVLink 72 provides a fully connected topology across 72 GPUs with 260TB/s of bandwidth per rack; the Spectrum-6 switch packages silicon-photonics engines into the chip, cutting signal loss and further compressing end-to-end latency to meet the interconnect needs of high-density compute clusters.

As model scale grows and multi-node collaboration becomes the norm, communication latency and bandwidth have become the main constraints on compute utilization. IDC survey data show that data communication already consumes 30%-40% of the time in large-model training, driving the development of next-generation cluster interconnects. Interconnect architecture is evolving from tree topologies toward fully connected, lossless networks, and the depth and breadth of compute-network convergence will keep increasing, making it core infrastructure for AI compute clusters.

Trend 3: Cooling and energy-efficiency innovation, with liquid cooling becoming mandatory for high-density compute clusters

The 100% liquid-cooled design of the Vera Rubin system and its high-density rack deployment model confirm that cooling and energy-efficiency optimization have become core pillars of AI compute infrastructure. With compute density climbing and per-rack power for training servers about to exceed 100kW, liquid cooling has formally shifted from an optional upgrade to a mandatory configuration.

The spread of liquid cooling is driven not only by heat-removal needs but also by the imperative of energy efficiency. IDC forecasts that by 2029 the liquid-cooled server market in China alone will exceed US$20 billion, with a compound annual growth rate above 50%, and that AI compute scenarios will account for more than 60% of it, making them liquid cooling's core application area.

Beyond liquid cooling, energy-efficiency optimization will also combine software and hardware: dynamic scheduling and software algorithms will optimize the allocation of compute and power to maximize token output per watt, dynamically balancing compute output against energy efficiency. Going forward, tokens per watt will replace raw FLOPS as the core metric for assessing AI compute.
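The tokens-per-watt metric is straightforward to compute as sustained throughput divided by power draw. The sketch below uses entirely hypothetical throughput and power figures, purely to illustrate how two systems can rank differently on efficiency than on raw throughput.

```python
# Illustrative tokens-per-watt comparison. All throughput and power
# figures here are hypothetical examples, not measured systems.

def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Energy efficiency: sustained token throughput per watt drawn."""
    return tokens_per_second / power_watts

# Hypothetical rack A: higher raw throughput but also higher power draw
rack_a = tokens_per_watt(tokens_per_second=500_000, power_watts=120_000)
# Hypothetical rack B: lower throughput, much lower power
rack_b = tokens_per_watt(tokens_per_second=300_000, power_watts=60_000)

print(f"Rack A: {rack_a:.2f} tokens/s per watt")  # 4.17
print(f"Rack B: {rack_b:.2f} tokens/s per watt")  # 5.00
```

On this metric the lower-throughput rack wins, which is exactly the inversion the shift away from pure FLOPS rankings implies.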

Trend 4: The agent ecosystem rises, diversifying compute demand and pushing it to the edge

The OpenClaw wave carried over into this year's conference. As agent technology lands faster, compute demand is evolving toward diversification and ubiquity. As an open-source agent operating system, OpenClaw automates the full workflow of resource scheduling and tool invocation, moving AI from generating content to executing autonomously, a shift that directly drives exponential growth in inference compute demand.

Structurally, compute demand in the agent era shows two features. First, the share of inference keeps rising: IDC forecasts that inference will account for more than 70% of total AI compute by 2027, becoming the core growth engine of compute demand. Second, demand is spreading from core data centers to the edge, vehicles, industrial sites, and other domains; edge compute demand will grow faster than core compute, becoming a new growth pole.

This trend will change the shape of compute infrastructure. Edge compute nodes will need to be compact, low-power, and highly reliable, matching agent deployments in industrial robots, vehicles, and telecom base stations. At the same time, agents' needs for tool invocation and data access will drive deep integration of compute with storage, networking, and security, forming a collaborative compute-plus-ecosystem development pattern.

Trend 5: SCSPs/Neo-clouds rise, and specialized AI compute services reshape the supply landscape

Compute partners mentioned at the conference, such as CoreWeave, belong to what IDC defines as specialized cloud service providers (SCSPs), or Neo-clouds: purpose-built AI compute clouds offering high-density clusters, low-latency networking, elastic scheduling, and cost optimization. Compared with traditional hyperscale clouds, which provide full-stack general-purpose capabilities covering every enterprise IT scenario, SCSPs/Neo-clouds focus on AI compute as a service, optimizing intensively for large-model training and inference clusters and delivering faster, more specialized, and more economical AI compute. Together the two form a hybrid multicloud landscape that is becoming the mainstream delivery model for enterprise compute.

Unlike their global peers, China's traditional hyperscale cloud providers are also increasing their investment in AI compute. While continuing to cover multi-scenario enterprise applications, they are supplying and optimizing AI compute to match the core needs of agentic AI. Going forward, AI compute and general-purpose compute will form a complementary division of labor: AI compute will focus on AI-specific workloads, providing the infrastructure for large-model training and inference, while general-purpose compute will focus on agent execution, covering more Agent Skill scenarios and meeting enterprises' diversified compute needs.

IDC's recommendations

Thomas Zhou, Research Vice President at IDC China, notes that the five technology trends reflected in the products launched at this conference are, in essence, the inevitable result of AI moving into production at scale. Technology companies should focus on core areas such as specialized compute architectures, compute-network convergence, and liquid-cooling energy efficiency. For developers, the rise of the open-source ecosystem offers broad room for innovation; they can build on open-source platforms such as OpenClaw to develop agents for vertical scenarios. For investors, liquid-cooled servers, dedicated accelerators, and edge compute will be core growth areas over the next three to five years and deserve close attention.

At the same time, IDC reminds market participants of the challenges that rapid iteration brings. First, transition costs are high: small and midsize vendors face the dual pressures of R&D investment and supply-chain integration, and industry polarization will keep intensifying. Second, power constraints are increasingly prominent, making energy-efficiency optimization a core competitive capability. Third, security risks in the open-source ecosystem need particular attention; in the agent era especially, data security and compliance will be important preconditions for the industry's development.

Related IDC research reports:

China Digital Infrastructure Strategies (Chinese Version)

China AI Infrastructure Strategies (Chinese Version)

China Semiannual Accelerated Server Tracker

China Semiannual Liquid Cooling Server Tracker

China Semiannual Intelligent Computing Infrastructure as Services Tracker

IDC will continue to track the latest developments in global AI compute architecture, market structure, and application innovation, producing forward-looking research and in-depth insights on AI infrastructure, compute services, edge intelligence, green computing, and other core topics. If you would like to obtain the related reports or discuss further, please contact us.

For more information about this research or other related IDC research, please click here to contact us.

Thomas Zhou - Vice President - IDC

Thomas Zhou is the vice president of Enterprise Research for IDC China. He leads the enterprise research team in covering market analyses, tracking of data, forecasting, and consulting for enterprise computing, storage, networking, infrastructure software, cloud, and datacenter. He is also responsible for IDC data tracking of software, services, and the public cloud services market in China. Thomas speaks frequently at IDC, industry, and user events and is frequently quoted in leading business and technology publications. Thomas joined IDC in 2006. He provides in-depth market analysis, research, and consulting on all aspects of the enterprise infrastructure to IT vendors and investors. During his tenure at IDC China, Thomas has led IDC's primary research focused on emerging trends in enterprise systems and datacenters. This research continues to make IDC a thought leader in enterprise infrastructure-powered digital transformation. Thomas's recent topics covered software-defined infrastructure, hyperconvergence, virtualization, and cloud computing infrastructure. Prior to joining IDC, Thomas worked for 10 years as a senior project manager and business consultant for several leading IT companies in China. Thomas holds a master's degree in Computer Engineering from the University of Science and Technology of China.

Public sector senior leaders, such as mission and program executives, CIOs, CTOs, and CAIOs, have always faced a dual mandate: drive technology-enabled innovation while controlling risk. Private sector IT and business leaders have historically leaned more toward innovation, although leaders in regulated industries have faced pressures similar to those in the public sector.

That tension has escalated over the past twelve to eighteen months. The potential benefits and disruptive impact of AI have raised new questions about how to manage its risks. At the same time, geopolitical turbulence has made strategic autonomy in technology choices, control over data, and operational resilience paramount. These forces have converged in the sovereign AI debate.

In a recent conversation with senior government officials in a major Asian country, IDC found that the country’s vision is to build national AI infrastructure capabilities that they can “control in time of crisis.” While they recognize they cannot manufacture everything overnight, they “cannot accept dependency without a plan.” At the same time, their goal is “not to lock data away so tightly that no one can innovate,” but to find the right balance.

How the sovereignty debate is evolving from control to strategy

That tension between speed and control, and between innovation and sovereignty, sits at the heart of today’s digital and AI strategies. It also reflects how the conversation around sovereignty has evolved.

Early digital and cloud sovereignty discussions were driven by a specific concern: that sensitive data could be accessed by foreign jurisdictions. That narrow focus has now expanded into something much broader. Sovereignty has become a strategic imperative that shapes how organizations design their entire technology stack.

Today, sovereignty is no longer just about where data resides. It is about control over data, infrastructure, operations, and even the supply chain. AI sovereignty extends this further, encompassing control across the entire AI lifecycle, from model development to deployment and governance.

IDC research shows that market signals are clear. Governments are investing in sovereign AI capabilities, from national cloud infrastructures to domestic AI ecosystems. They are incentivizing local data centers, funding native-language AI models, and defining guidelines that will shape how sovereign solutions are acquired and deployed. For policymakers, AI is no longer just a technology. It is an instrument of economic competitiveness and national security.

For organizations, this creates a new reality. Senior business and IT leaders are no longer designing a single global architecture. They are navigating a fragmented, multi-sovereign world.

Choosing the right sovereign AI deployment approach

Faced with this complexity, many leaders look for a single answer: which deployment model is the most sovereign?

The reality is that the market offers a spectrum of deployment archetypes, ranging from public cloud to fully air-gapped environments. Each comes with different levels of control, agility, innovation speed, and cost. There is no one-size-fits-all approach.

A highly regulated AI workload may require a sovereign or even air-gapped environment. A customer-facing application may benefit from the scalability of the public cloud, combined with added sovereign controls.

The real challenge is selecting the right model for different use cases, or even different components of the same use case. For example, one deployment model may be used for AI training, another for retrieval-augmented generation, and a third for an agentic AI orchestration layer.

This is why hybrid architectures are emerging as the dominant pattern across both the public and private sectors. According to IDC’s 2025 Digital Sovereignty survey of more than 900 IT and business leaders, 37% of respondents say on-premises is currently their main environment and that sovereign cloud is, or will be, the only type of cloud they use. At the same time, 55% say sovereign cloud is, or will be, part of a multicloud or hybrid strategy.

IDC predicts that by 2028, CIOs at multinational organizations will increase investments in modular, sovereign-ready cloud and data localization environments by 65% to future-proof operations against rising sovereignty demands. Additionally, by 2026, 55% of governments will adopt hybrid sovereign cloud stacks, blending hyperscaler scale with national control to ensure compliance, security, and strategic autonomy for AI.

Public and private sector leaders are not retreating from the cloud. They are reshaping it. By combining global hyperscaler capabilities with local control layers, they are creating what IDC describes as sovereign-ready environments.

This approach reflects a deeper truth: sovereignty is not about isolation. It is about choice and control.

What leaders need to know about sovereign AI strategy

The conversation around digital and AI sovereignty is often framed as a trade-off between control and innovation. The organizations that will succeed are those that reject this binary thinking. They understand that sovereignty is not about limiting innovation, but about enabling it on their own terms.

In a world where AI is becoming the backbone of economies and societies, IDC research helps connect the dots between technology providers offering cloud and AI solutions and the business and IT leaders who must select the right deployment approaches to achieve their sovereignty goals.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.

Rahiel Nasir - Research Director, European Cloud Practice, Lead Analyst, Digital Sovereignty - IDC

Rahiel Nasir is responsible for leading and contributing to IDC's European cloud and cloud data management research programs, as well as supporting associated consulting projects. In addition, he leads IDC's worldwide Digital Sovereignty research program. Nasir has been watching technology markets and writing about them throughout his professional life.

Tim Cook might have just given Apple its single most disruptive launch since the iPhone. Apple introduced the MacBook Neo earlier this month, just ahead of Apple’s 50th anniversary, at a striking price point: $599 at retail and $499 for education. My initial reaction, like many others, was, “Wow. This is a killer price.”

For years, Apple has remained disciplined at the premium end of the PC market, rarely launching a brand-new product at what could genuinely be considered entry-level pricing. Seeing Apple move this decisively into the sub-$700 segment is an aggressive play that clearly signals an intent to capture share. It brings a Mac into the hands of users who have aspired to own one but have historically been priced out. But it also raises other important questions: what compromises did Apple make to achieve this, and more importantly, will it dilute the Apple brand?

After spending a few weeks with the device, the answer was clear.

First impressions: Neo feels anything but budget

The MacBook Neo immediately feels like a Mac, not a compromised or stripped-down version. It is thin, exceptionally light at roughly 2.7 pounds, and the aluminum build delivers the solidity consumers associate with Apple's premium notebooks. The keyboard is comfortable, the trackpad feels precise, and the display is noticeably bright and sharp, standing out instantly in this price band.

Day-to-day performance is fluid. App launching and switching are smooth and responsive, which is notable given the modest hardware configuration: 8GB of RAM paired with Apple's A18 Pro processor, previously used in iPhones, an approach I wager other PC makers may soon follow. Apple yet again demonstrates that vertical integration can matter more than raw specifications, and it has found a way to expand its reach without undermining product quality, user experience, or its core brand promise.

Design also plays a critical role in the Neo strategy. The brighter color options give the Neo a sense of personality that resonates strongly with younger users. It feels modern, expressive, and distinctly non-generic. Simply put, very little about the MacBook Neo feels "budget." Rather than diluting the Mac brand, Apple has effectively extended its premium perception into a lower price tier, something very few PC vendors have managed successfully.

Why MacBook Neo resonates with younger users

What stood out even more than my own reaction as an IDC analyst was what I observed at home. I have three teenage children, squarely within Apple's target demographic for this device, who quickly attempted to claim ownership of the Neo.

None of them asked about the processor, memory, or benchmark performance. There were no questions about architecture or specifications. Instead, they focused on how much better it looked and felt compared with the Chromebooks and low-cost Windows laptops they currently use for school. They noticed the display quality. They liked the keyboard. They commented on how light it was.

Then came the reaction that captured everything: “This would look so cool at school.”

That "cool factor" is often underestimated in market analysis, but in a school environment it is a powerful driver of preference. Within minutes, the verdict was clear: they wanted it, and it landed immediately on their birthday wish lists.

MacBook Neo hits the sweet spot

That reaction highlights a broader market reality. Buyers in the sub-$700 notebook segment are overwhelmingly not current Mac users, nor are they making decisions through a spec-driven lens. Their purchases are constrained by budget and centered on core experience: design, ease of use, battery life, and overall feel.

In that context, MacBook Neo stands apart. It provides a compelling option for education institutions approaching refresh cycles after the COVID-era buying surge, for students purchasing their first personal notebook, and for small and midsize businesses operating with tighter cash flows.

While the Neo does make trade-offs relative to the MacBook Air, particularly in performance headroom and features such as external multi-display support, these limitations are largely irrelevant for this target market and first-time Mac buyers. In the areas that matter most to this audience, Neo delivers a meaningfully differentiated experience. That positions Apple to directly disrupt a segment long dominated by Windows and ChromeOS devices, and it should be a legitimate concern for incumbent vendors.

What opportunity does the MacBook Neo unlock for Apple?

To understand the scale of the opportunity, it is important to frame the broader PC market. Global PC shipments totaled roughly 285 million units in 2025, with Apple holding just under a 10% share. Within that, the sub‑$700 notebook segment accounted for approximately 75 million units, nearly 40% of total notebook volume, and has historically been dominated by Microsoft Windows and Google ChromeOS, which together account for more than 95% of shipments in this tier. 

Geographically, Neo also positions Apple for expansion beyond its traditional strongholds. Today, Mac shipments remain heavily concentrated in the U.S. and Western Europe. With Neo, Apple has a credible pathway to reach more price‑sensitive buyers in emerging markets where Macs have historically seen limited penetration.

In my opinion, the opportunity extends beyond grabbing existing Windows or Chrome users. I believe MacBook Neo will further expand Apple's addressable market by enticing users who have deferred notebook purchases altogether: those unwilling to compromise on experience with low-cost Windows systems but unable to justify the price premium of a MacBook Air. By bridging that gap, Neo has the potential to both drive share gains and unlock incremental demand.

MacBook Neo: Perfect timing and long-term strategy

To top it off, the timing of this move also worked in Apple's favor. The broader PC industry is entering a challenging period as DRAM and NAND pricing pressures intensify. Rising memory costs are pushing many vendors upstream toward higher-priced systems or forcing them to cut specifications to defend lower price points. Apple, in contrast, is moving in the opposite direction, delivering a premium-like product at a budget price. This move will send competitors back to the drawing board to defend their share of this massive segment, and I am eager to see their response.

Strategically, Neo represents far more than a near-term share grab. It advances one of Apple's long-term objectives: increasing ecosystem penetration earlier in the user lifecycle. By introducing macOS to younger users, often as their first personal Mac, Apple strengthens platform stickiness and maximizes lifetime value. Once users are embedded in Apple's ecosystem through iMessage, FaceTime, iCloud, and AirDrop across multiple devices, they are far less likely to switch platforms on any device. As younger Neo users move into higher education and then into professional roles with greater purchasing power, upgrading to a MacBook Air becomes a natural progression rather than a competitive evaluation. In this sense, Neo serves as a feeder into Apple's higher-margin Mac portfolio over time.

That dynamic is already visible in my own household. My children live on their iPhones and iPads, and a MacBook that is finally within financial reach simply extends that ecosystem into the notebook category. Once that level of integration is established, switching away becomes far less likely. This is where the real strategic value of MacBook Neo lies: not in short-term unit volume alone, but in locking in demand across multiple device cycles.

Final thoughts

MacBook Neo might just be the best 50th-anniversary gift Tim Cook could have given Apple. The device is not just a lower-priced Mac; it is a long-term ecosystem lever. Viewed through that lens, it has the potential to be one of Apple's most disruptive launches since the iPhone, not because it introduces breakthrough technology, but because it will significantly alter the competitive landscape of the PC market for the foreseeable future.

Great news for Apple, less so for me, as I now need to figure out how to buy three Neos.

Nabila Popal - Sr. Director, Data & Analytics - IDC

Nabila Popal is Senior Director with IDC's Data & Analytics team, specializing in Mobile Phones, PC Monitors and other consumer devices. Ms. Popal is responsible for the global research and the quality and timely delivery for her respective technologies, coordinating with regional and worldwide research teams. She continuously engages with global vendors and key market players to discuss the latest industry trends and dynamics. Ms. Popal is also responsible for future product planning and evolution whilst managing client relationships and providing thought leadership and executing custom engagements. She also manages communications with the media and is often published in leading local and international media outlets. Ms. Popal has been with IDC since 2013, and prior to her role with the Worldwide team, she was with IDC MEA, leading the research for Middle East, Africa, and Turkey, based out of Dubai, UAE.

Nations are prioritizing AI sovereignty, complicating operations for global CIOs. AI sovereignty defines a nation’s, or organization’s, control over the entire AI ecosystem, from the data used for training to the algorithms and the physical chips (GPUs) required to run them. Today, many governments are asserting their intent to regulate how AI is developed and deployed within their borders. For the modern enterprise, this transforms AI management from a purely technical challenge into a foundational leadership priority.

CIOs and enterprise executives must determine acceptable risk management objectives and how enterprise AI policy will align with regulatory compliance. Furthermore, CIOs must create a dynamic response that allows the IT organization and the broader enterprise to use predictive control to adapt to continuously changing regulations. IDC’s Sovereign AI Framework helps executives navigate these geopolitical and regulatory risks. By using this framework, organizations can align enterprise AI policies with diverse jurisdictional laws, ensuring strategic independence and compliance in an increasingly complex and regulated global landscape.

One size cannot fit all. There is no single model to respond to AI sovereignty globally, but there are several underlying themes for global CIOs. The enterprise AI model must account for the country of origin, the jurisdictions in which the company operates, how and why AI is deployed within the enterprise and with external entities such as suppliers and customers, and the industry in which the company operates.

As organizations rush to adopt AI, they often find themselves caught between innovation and risk management. While AI adoption is accelerating, it introduces a complex web of strategic, operational, regulatory, and geopolitical risks that global CIOs must navigate, often at significant cost. The table below provides a summary of risks with key drivers and dependencies.

Risk category: key drivers and dependencies

- Regulatory and jurisdictional: Models and data hosted abroad may fall under foreign laws like the U.S. CLOUD Act or China’s Personal Information Protection Law (PIPL).
- Security and supply chain: Vulnerabilities such as model poisoning and dependence on foreign semiconductor supply chains must be protected against.
- Data and IP loss: Use of external platforms can expose sensitive training data, customer information, and product designs.
- Ethical and reputational: Relying on third-party models means inheriting their national biases and potentially inadequate safeguards.
- Operational fragility: Excessive reliance leads to human skill erosion and single-point-of-failure architectures.
- Economic and cost: Escalating compute/storage costs and unpredictable API pricing introduce variables that must be managed. Lack of scale to meet local requirements can make unit economics unattractive.

Five actions for CIOs

To manage the risks of AI sovereignty, IDC recommends five strategic actions for global CIOs:

  1. Educate the C-suite. Raise awareness of the importance of AI sovereignty, including data sovereignty, with the senior executive team. Provide a clear plan outlining opportunities and risks. Use the IDC Sovereign AI Framework as a starting point and adapt it to your enterprise, jurisdictions, and strategic intent.
  2. Consult legal experts. Work with legal experts who understand each jurisdiction where you operate to assess current and emerging AI laws and regulations relevant to your industry. CIOs will need to coordinate across functions, aligning legal, financial, and operational priorities.
  3. Balance global and local providers. Understand the trade-offs between global AI providers and smaller national providers. Most enterprises will adopt a hybrid approach, leveraging the scale of global providers while using smaller providers to build fit-for-purpose solutions aligned with enterprise strategy.
  4. Secure your data perimeter. Define an enterprise-specific AI sovereignty model. Identify proprietary data that should remain protected, such as marketing plans, customer information, research results, and product designs. Assess jurisdictional exposure for both data and the AI models that depend on it across all operating regions.
  5. Anticipate architectural shifts. A core implication of AI sovereignty is the move away from one-size-fits-all cloud models toward model-agnostic, hybrid architectures. CIOs are increasingly responsible for ensuring that sensitive workloads are processed within controlled environments. This often includes hybrid inference, where AI models run at the edge or within owned datacenters, keeping critical data, logic, and derived insights within the organizational perimeter.
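The hybrid-inference pattern described in the fifth action can be made concrete with a small sketch: route requests involving protected data to an inference endpoint inside the organizational perimeter, and everything else to an external provider. This is an illustrative assumption, not IDC guidance; the endpoint URLs, tag names, and `route` function are all hypothetical.

```python
# Hypothetical sketch of a sovereignty-aware inference router.
# Sensitive data never leaves the organizational perimeter.

from dataclasses import dataclass

LOCAL_ENDPOINT = "https://inference.internal.example/v1"   # owned datacenter / edge
CLOUD_ENDPOINT = "https://api.cloud-provider.example/v1"   # external provider

# Data categories the enterprise has decided must stay inside the perimeter
SENSITIVE_TAGS = {"customer_pii", "product_design", "research_result"}

@dataclass
class Request:
    payload: str
    data_tags: set

def route(request: Request) -> str:
    """Return the endpoint a request may be sent to under the sovereignty policy."""
    if request.data_tags & SENSITIVE_TAGS:
        return LOCAL_ENDPOINT   # hybrid inference: sensitive workloads stay in-house
    return CLOUD_ENDPOINT       # non-sensitive work can use external scale

print(route(Request("summarize design doc", {"product_design"})))  # local endpoint
print(route(Request("translate press release", set())))            # cloud endpoint
```

In practice the tagging would come from a data classification system rather than manual labels, but the control point is the same: the policy decision is made before data crosses the perimeter.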

As AI adoption becomes the norm across industries, managing AI sovereignty has shifted from a technical issue to a core risk management priority for global CIOs. AI sovereignty cannot be ignored.

Dr. Ron Babin - Adjunct Research Advisor - IDC

Dr. Ron Babin is an adjunct research advisor in IDC’s IT Executive Programs (IEP). He is a full professor at Toronto Metropolitan University, where he teaches IT management.

For many years, growth in Japan's IT market has been driven by modernization of existing systems at large enterprises and in the public sector, and by consumer device refresh cycles for PCs and smartphones. The conventional view has also been that domestic digital transformation (DX) spending is concentrated mainly in large enterprises.

That assumption now needs to be revisited.

IDC forecasts that Japan's IT market will reach ¥28.4189 trillion in 2026, up 3.3% year over year, with a CAGR of 6.4% from 2024 to 2029. Large enterprises will continue to lead the market, with their share expanding from 53.9% in 2025 to 56.0% in 2029; their influence remains central to the expansion of Japan's IT market.

Structurally more important, however, is the simultaneous rise of midsize enterprises. Midsize enterprises (100-999 employees) are expected to grow their share of total IT spending from 19.8% in 2025 to 21.2% in 2029. In 2026, midsize-enterprise IT spending (excluding PCs) is forecast to grow 9.5% year over year, outpacing the 8.7% growth of large enterprises.

From 2026 onward, Japan's IT market will be characterized by a "dual-engine structure": sustained expansion by large enterprises and accelerating digitalization among midsize enterprises.

Why midsize enterprises are accelerating IT investment

1. Productivity and labor costs have become management-level issues

Japan's labor shortage is no longer just a macroeconomic issue. For midsize enterprises in particular, it has become a constraint that directly affects day-to-day operations.

Large enterprises face similar challenges, but they have strong brands, established recruiting operations, and mature digital foundations, and they are already spending heavily on digital platforms aimed at automation, data integration, and productivity.

Midsize enterprises, by contrast, often face gaps in talent and digital maturity, and competing for people on salary or brand is not easy. As labor shortages deepen toward 2026, digitalization becomes a precondition for business continuity rather than a strategic option.

In addition, demands for digital readiness from large enterprises and from central and local government are cascading through supply chains. Midsize enterprises that fall behind on digitalization risk losing business opportunities.

From 2026 onward, productivity-driven digitalization will be a structural trend.

2. Midsize enterprises need external vendor support for digitalization

Large enterprises are insourcing, establishing IT subsidiaries, and partnering directly with hyperscalers and leading technology firms, upgrading their in-house IT capabilities.

Midsize enterprises operate under different constraints.

Many have limited in-house IT staff and lack the capacity to drive large system modernization projects on their own. As digitalization projects move into full execution in 2026, dependence on IT vendors and systems integrators will increase.

What midsize enterprises are looking for:

  • End-to-end implementation support
  • Use-case-based packaged solutions
  • Scalability that extends to operations
  • Expertise in AI and cloud adoption

Serving this market, however, requires a structural rethink of delivery models. Deal sizes are relatively small and budgets are limited; lighter-weight, outcome-oriented approaches are required.

3. The structural advantage of midsize and regional vendors

As the center of gravity of Japan's IT market growth shifts toward midsize enterprises, the positioning of IT vendors themselves becomes critical.

Major and second-tier vendors remain indispensable for large-scale projects at large enterprises, but midsize enterprises require a different delivery model: more hands-on, regionally grounded, and focused on flexible implementation.

Midsize and regional systems integrators may hold a structural advantage in this environment.

Their scale, cost structures, and organizational setups fit midsize-enterprise needs more readily, and they can build closer relationships. Unlike major vendors optimized for large projects, players whose strengths are speed, approach, and ease of flexible implementation are well positioned to capture growth as midsize-enterprise digitalization expands.

4. Cloud lowers the barriers to transformation

Large enterprises often face lengthy, costly modernization because of legacy systems and heavily customized architectures.

Midsize enterprises tend to have relatively simpler system landscapes and lower barriers to cloud migration.

The expansion of IaaS and cloud-native platforms enables:

  • Rapid deployment of new systems
  • Lower upfront investment
  • Scalable IT foundations
  • Easier integration with AI capabilities

In 2026, AI-related spending, including AI models, data platforms, and agentic AI platforms, is expected to expand rapidly. Cloud environments allow midsize enterprises to adopt these capabilities without undertaking large system rebuild projects.

Cloud reduces friction between existing and new systems, which is especially important for midsize enterprises seeking quick results.

2026 and beyond: Growth converges

Japan's IT market is not fragmenting; across enterprises and the public sector it is converging on an expansionary trend.

Large enterprises will continue to expand their market share, while midsize enterprises, equipped with a structural growth engine, will strengthen their presence in the domestic IT market.

The next phase of growth will unfold around:

  • Continued modernization at large enterprises
  • Accelerating digitalization among midsize enterprises
  • Expanding AI adoption in both segments
  • Rising dependence on cloud platforms

The implication for IT vendors is clear.

Future growth will not come from mega-projects at large enterprises alone. The key is expanding business with midsize enterprises embarking on system modernization and digitalization projects.

Vendors that recognize this dual-engine structure of market expansion early, and adapt their solution portfolios, partner strategies, and delivery organizations to the midsize market, will be best positioned to capture the next sustained growth phase of Japan's IT market.

Figure: Japan IT market (excluding PCs), year-over-year growth and IT spending share: large enterprises vs. midsize enterprises

Related research and inquiries

For more detailed insights and market trends, please feel free to contact our analysts.

Hitoshi Ichimura - Senior Research Manager, Software, Services, and IT Spending, IDC Japan - IDC Japan

Hitoshi Ichimura is responsible for the market analysis of overall Japan IT spending, based in Tokyo. In this role, he is responsible for the market analysis of IT Spending research by vertical, company size and region. His main area of research involves IT Spending market forecast and trends for the Japan financial industry local area and SMB segment. Ichimura is also involved in various custom research projects in the area.

China's smart glasses market shipped 2.46 million units in 2025, up 87.1% year over year. Lightweight designs and AI integration have become standard, building momentum for the industry to move from early adoption to mainstream use. But real user value has yet to be unlocked, and use-case realization and channel conversion remain key priorities.

According to IDC's latest Worldwide Smart Glasses Quarterly Tracker, global smart glasses shipments reached 14.773 million units in 2025, up 44.2% year over year. China stood out, with full-year shipments of 2.46 million units, up 87.1%. Fourth-quarter shipments reached 679,000 units, up 57.1% year over year, driven by concentrated channel stocking by new entrants, the Q4 promotional season, and the inclusion of smart glasses in China's national subsidy program for the first time in early 2026; vendors stocked up early and accelerated channel buildout, accumulating momentum toward a market inflection point. Mainstream products now generally weigh 40-50 grams, approaching the wearing experience of ordinary glasses, while steady progress in optics, gradual integration of AI capabilities, and clearly rising user acceptance together pushed the market from warm-up to volume.

Overall performance of China's smart glasses market in 2025

In 2025, Chinese vendors accounted for 23.3% of global smart glasses shipments. In the AR/ER segment, Chinese vendors' share reached 87.4%, maintaining their dominant position. This share rests on the combination of supply chain integration capability and speed of use-case realization. Leveraging a mature consumer electronics supply chain, Chinese vendors can quickly turn AI and optical display technologies into lightweight, cost-competitive mass-market products; by responding rapidly to market demand, they can flexibly adjust product definitions and replicate at scale, improving the conversion efficiency from technology to market.

In the fourth quarter, the smart glasses market entered a new cycle of activity. Multiple new players entered at once, visibly shifting the vendor landscape. Domestically, Chinese vendors such as Qwen (千问) and Li Auto (理想) launched their first AI glasses, attracting wide attention. Overseas, Meta's newly released Display product vaulted into the global top three in ER glasses in its first quarter on the market. The headset market also saw key product iterations: Apple upgraded the Vision Pro to an M5 chip version, and Samsung launched its first headset running Android XR, filling the Android camp's gap in premium headsets.

Overall, Chinese vendors remained strong in Q4 while accelerating overseas expansion. RayNeo (雷鸟) and XREAL continued to cultivate the US and European markets, and Xiaomi and Rokid began channel rollouts in multiple overseas regions, a clear acceleration in brand globalization.

Segment performance

Audio and audio-plus-camera glasses

China's audio and audio-plus-camera glasses market shipped 1.726 million units in 2025, up 122.0% year over year. Camera-equipped models grew from 7.1% of shipments in Q1 to 39.4% in Q4; AI glasses with cameras are gradually replacing audio-only products as the main source of incremental volume. Xiaomi continued to hold the leading share, followed by Huawei, RayNeo, and 界环. Over the year, product functionality broadened: beyond voice interaction, usage of real-time translation, object recognition, and first-person-view recording all increased.

AR/VR

China's AR/VR market shipped 734,000 units in 2025, up 36.5% year over year. The AR & ER category remained the growth driver, reaching an 89.8% share of the market in Q4, up 163.7% year over year. The Quark S1 went on sale in Q4 and, backed by Alibaba's ecosystem integration, drew strong attention and jumped straight into the top three. Other vendors also released new products during the promotional season, lifting shipments. Over the full year, the landscape became more balanced: the share gap among the top five vendors (RayNeo, XREAL, Rokid, INMO, and Alibaba) narrowed, intensifying competition at the top.

VR & MR shipments fell 45.6% for the year and 62.1% in Q4; the market has yet to exit its correction cycle. After this trough, growth is expected to resume next year as Pico and other vendors launch lightweight new products. Commercial adoption also continues to deepen: the commercial share of the VR & MR market reached 41.1% in 2025, with location-based entertainment and education/training remaining the main commercial scenarios.

Three defining characteristics of China's smart glasses market in 2025

1. Leading vendors are testing the waters, and product forms are still iterating rapidly

Consumer electronics and internet giants have launched their first AI glasses, but most remain in an exploratory stage: shipments are generally limited, and some products have been announced but not yet gone on sale. Technical routes are currently similar, centered on lightweight designs combining camera, AI voice, and light display, but product roadmaps have not solidified and substantial adjustment remains likely. This phase is largely strategic position-taking for the next-generation interaction entry point; full-scale market competition has not yet begun.

2. Offline channel buildout has begun, with significant room for offline penetration

Cooperation between smart glasses vendors and eyewear retailers accelerated markedly in 2025, with more traditional optical stores introducing smart glasses and setting up experience zones or authorized fitting points. In practice, however, online channels still accounted for more than 68% of China's smart glasses shipments in 2025, and offline channels face challenges: store expertise and service have not caught up, and higher-priced products are hard to convert in traditional optical stores, while audio glasses, the form factor closest to ordinary eyewear, perform relatively better. Because glasses are a strongly wear-dependent product, try-on and fitting services are critical to purchase decisions; unlocking the real value of offline channels will be a key battleground in 2026.

3. AI integration is nearly universal, and use-case realization is starting to emerge

In 2025, 50.5% of smart glasses shipped in China supported large-model voice assistants, and leading vendors' products generally integrated large-model capabilities, essentially completing AI coverage at the interaction layer. In actual use, however, most AI features remain generic question answering and translation and have not yet created the core value that drives sustained usage. Still, as vendors pushed on application ecosystems toward year-end, differentiated competition around proactive services and closed-loop scenarios began to appear, with some vendors binding AI capabilities more deeply to users' daily commuting, work, and health management needs. 2025 laid the groundwork for AI integration; 2026 will be the critical period for use-case realization.

Recommendations and outlook

IDC China market analyst Ye Qingqing notes that in 2025, China's smart glasses market completed its hardware groundwork: lightweight designs and AI integration became standard, building momentum for the industry to move from early adoption to mainstream use. But real user value has yet to be unlocked, and use-case realization and channel conversion remain key priorities.

For vendors, three areas deserve focus in 2026:

First, keep pushing use-case realization, shifting from feature integration to scenario depth. Competition in AI capability will shift from "whether you have it" to "how well it works." Vendors need to build closed-loop experiences with proactive service capabilities around high-frequency scenarios such as daily commuting, work, and health management, making their products harder to replace.

Second, accelerate offline channel buildout and leverage experiential selling. Because glasses are a strongly wear-dependent product, try-on and fitting services are critical to purchase decisions. Vendors should deepen cooperation with traditional eyewear retailers, improve store expertise and service capability, and explore "online traffic plus offline experience" O2O models to raise conversion efficiency.

Third, explore differentiated product forms. Current technical routes are similar, mostly lightweight combinations of audio, camera, and AI voice, and product definitions are not yet settled. In 2026, vendors should build more products tailored to specific scenarios and differentiated forms, such as modular designs, products customized for specific user groups, and feature innovation deeply bound to ecosystems, to find their own positioning amid hardware convergence.

IDC continuously tracks the global smart glasses and wearables market. We welcome industry peers, investors, and media to stay in touch with IDC China's analyst team to discuss market trends, technology innovation, and business opportunities. Whether you want to dig into the data or seek customized market insights, please feel free to contact us.

For more on this research or to inquire about other related IDC research, please click here to contact us.

IDC has been closely following the topics of sovereign cloud and AI sovereignty. Against today's complex geopolitical backdrop, how enterprises balance control and innovation, safeguarding data security while growing the business, has become an unavoidable question. Recently, IDC global research director Massimiliano Claps and IDC China research vice president Thomas Zhou (周震刚) co-authored an article arguing that the core idea of sovereign AI is not absolute closure or absolute openness, but harmony between security and compliance on one hand and development needs on the other, keeping the power of choice in one's own hands. This article summarizes the key points to explore a way forward for sovereign AI.

Senior leaders in government agencies and the public sector, in roles such as program director, CIO, CTO, and CAIO, have always carried a dual mandate: drive innovation through technology while controlling the accompanying risks. Historically, IT and business leaders in the private sector have tended to be bolder innovators, although in regulated industries the pressures resemble those of the public sector. Over the past 12 to 18 months, this tension has intensified. The enormous potential benefits and disruptive impact of AI have raised concerns about how to manage the associated risks, while geopolitical turbulence has made strategic autonomy in technology choices, control over data, and operational resilience critically important. Together, these shifts have converged into today's debate over sovereign AI.


The evolution of the sovereignty debate: from tactical control to strategic imperative

The tension between speed and control, innovation and sovereignty, sits at the heart of today's digital and AI strategies, and it is the focal point of the evolving sovereignty debate. Earlier discussions of digital and cloud sovereignty grew out of a very specific concern: sensitive data could be accessed by foreign jurisdictions. That narrow concern has since expanded into a much larger proposition. Sovereignty is now a strategic imperative, profoundly shaping how organizations design their entire technology architecture.

Today, sovereignty is no longer just about where data resides. It covers control over data, infrastructure, operations, and even supply chains. AI sovereignty goes a step further: control over the entire AI life cycle, from model development and deployment through to governance.

Massimiliano Claps, research director at IDC, says IDC research shows the market signal is very clear. Governments are actively investing in sovereign AI capabilities, from national cloud infrastructure to domestic AI ecosystems. They are using incentives to drive local datacenter construction, funding AI models in local languages, and issuing guidelines for how sovereign solutions are procured and deployed. For policymakers, AI is no longer merely a technology; it has become a key instrument of economic competitiveness and national security.

For organizations, this means a new reality. The question facing senior business and IT leaders is no longer how to design a single global architecture, but how to find their way in a fragmented, multi-sovereign world.

Choosing the right path

Faced with this complexity, many leaders look for a single "correct answer": which deployment model is the most sovereign? In practice, a spectrum of deployment archetypes has emerged, from public cloud to fully air-gapped environments. Each model trades off control, agility, innovation speed, and cost differently; no single model fits every case.

A highly regulated AI workload may genuinely require a sovereign, even air-gapped, environment. A customer-facing application may be better served by public cloud scalability supplemented with sovereign controls.

The real challenge is choosing the right model for different use cases, or even for different components of the same use case. AI training may use one deployment model, retrieval-augmented generation another, and the agentic AI orchestration layer a third.

This is why hybrid architectures are becoming the dominant pattern across both the public and private sectors. According to IDC's 2025 Digital Sovereignty Survey (covering more than 900 IT and non-IT leaders across industries), 37% of respondents said "on-premises is currently the primary environment, and sovereign cloud is (or will be) the only cloud type we use," while a full 55% said "sovereign cloud is (or will become) part of our multicloud/hybrid cloud strategy."

IDC predicts:

  • "By 2028, CIOs of multinational companies will increase investment in modular, sovereign-ready cloud and data-localization environments by 65% to meet growing sovereignty requirements and future-proof their operations."
  • "By 2026, 55% of government agencies will adopt hybrid sovereign cloud architectures that combine hyperscaler capabilities with national-level controls, ensuring AI applications are compliant, secure, and strategically autonomous."

Leaders in the public and private sectors are not abandoning the cloud; they are reshaping it. They are combining the capabilities of global hyperscalers with local control layers to build what IDC calls "sovereign-ready" environments.
This approach also reveals a deeper truth: sovereignty does not mean isolation. At its core, sovereignty is about having choice and control.

In practical terms, IDC divides sovereign cloud and sovereign AI into three progressive tiers, in ascending order of control: data sovereignty, technology sovereignty, and operational sovereignty. Enterprises need not aim for the top tier in one step; they should choose the tier that fits their compliance requirements, business scenarios, and pace of innovation.

  • Data sovereignty centers on in-jurisdiction storage, access control, and compliant data flows, ensuring sensitive data stays in-country and under domestic jurisdiction. It is the minimum bar for meeting basic regulatory requirements and the starting point for sovereign cloud and sovereign AI.
  • Technology sovereignty focuses on autonomy and control over compute hardware, model frameworks, core algorithms, and the supply chain, reducing dependence on any single external technology and safeguarding the technical autonomy of AI development and iteration. It suits scenarios with higher demands for security and supply chain resilience.
  • Operational sovereignty means full control over deployment, scheduling, operations, governance, and incident response across the cloud and AI life cycle, covering infrastructure operations, service continuity, access management, and compliance auditing, achieving end-to-end autonomy from technology through to live operations.

Thomas Zhou, research vice president at IDC China, notes that enterprises need not blindly pursue higher tiers; they can match tiers to business characteristics. Ordinary innovation scenarios may need only data sovereignty; core AI businesses should add technology sovereignty for security; and highly regulated domains such as government, finance, and critical infrastructure require full operational sovereignty, striking the best balance between security and business efficiency.
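The tier-matching rule described above can be sketched in a few lines. This is an illustrative assumption for discussion, not an IDC-defined schema; the function name and the boolean workload attributes are invented.

```python
# Hypothetical sketch: map a workload's profile to the minimum sovereignty tier
# it requires, following the "match the tier to the business" guidance above.

def required_tier(is_core_ai: bool, highly_regulated: bool) -> str:
    """Return the minimum sovereignty tier for a workload."""
    if highly_regulated:        # government, finance, critical infrastructure
        return "operational_sovereignty"
    if is_core_ai:              # core AI business: add technology sovereignty
        return "technology_sovereignty"
    return "data_sovereignty"   # ordinary innovation scenarios

print(required_tier(is_core_ai=False, highly_regulated=False))  # data_sovereignty
print(required_tier(is_core_ai=True, highly_regulated=False))   # technology_sovereignty
print(required_tier(is_core_ai=True, highly_regulated=True))    # operational_sovereignty
```

A real classification would weigh more attributes (data residency obligations, supply chain exposure, continuity requirements), but the ordering of the checks captures the progressive nature of the three tiers.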

The truth about sovereign AI

Discussions of digital and AI sovereignty are often framed as a trade-off: control or innovation. But the organizations that truly stand out are precisely those that reject this either/or thinking. They understand that sovereignty is not a limit on innovation; it is innovation on their own terms.

As AI becomes a pillar of economies and societies, IDC's research aims to connect the key pieces for business and IT leaders facing strategic choices, and for technology suppliers repositioning their products.

IDC research on sovereign cloud and AI sovereignty

Global and policymakers

  • From Digital Sovereignty to Government AI Sovereignty (2025-12)
  • How Digital Sovereignty Affects AI Adoption in Government (2024-09)
  • IDC PlanScape: Digital Sovereignty Framework for Policymakers (2023-05)
  • AI Sovereignty: National Economic Competitiveness and Security (2025-02)
  • IDC PlanScape: Digital and AI Sovereignty Actions for National Government IT Leaders (2025-06)
  • Data Resilience, Control, and Strategic Autonomy Checklist: Practical Progress in Reframing Complex Sovereignty Approaches (2025-08)

Europe, the Middle East, Asia/Pacific, and other regions

  • The Impact of Sovereign Cloud on AI Workloads in Western Europe, the Middle East, Turkey, and Africa: Considerations for Organizations (2024-11)
  • European Sovereign Cloud 2025: What Is "Plan B"? (2025-09)
  • Sovereign Cloud Deployment Choices in the Gulf: How Global and Local Providers Balance Scale, Control, and Trust (2025-11)
  • Deem Cloud: Enabling Saudi Sovereignty and AI-Ready Government Services (2025-09)
  • Sovereign Cloud in Asia/Pacific: 2025 Market Dynamics (2025-09)

AI, cloud, energy, data, and applications

  • How Digital Sovereignty Affects the Use of AI and Sovereign Cloud (2025-12)
  • Sovereign AI: What, Why, and How (2025-11)
  • Worldwide Sovereign Cloud Forecast, 2025-2029 (2025-12)
  • Energy Sovereignty: How Digital Sovereignty Affects IT Energy Choices (2025-10)
  • Digital Sovereignty and Data Spaces: The Evolving Data-Sharing Landscape (2024-09)
  • Which Workloads Are Moving to Sovereign Cloud, and How Is AI Affected? (2025-07)

IDC has long specialized in digital sovereignty, sovereign cloud, and AI sovereignty, continuously tracking global market dynamics and providing enterprises with systematic insight from macro trends to implementation paths, covering China as well as Asia/Pacific, the Middle East and Africa, Europe, Latin America, and many other regional markets.

For more on this research or to inquire about other related IDC research, please click here to contact us.

Thomas Zhou - Vice President - IDC

Thomas Zhou is the vice president of Enterprise Research for IDC China. He leads the enterprise research team in covering market analyses, tracking of data, forecasting, and consulting for enterprise computing, storage, networking, infrastructure software, cloud, and datacenter. He is also responsible for IDC data tracking of software, services, and the public cloud services market in China. Thomas speaks frequently at IDC, industry, and user events and is always quoted in leading business and technology publications. Thomas joined IDC in 2006. He provides in-depth market analysis, research, and consulting on all aspects of the enterprise infrastructure to IT vendors and investors. During his tenure at IDC China, Thomas has led IDC's primary research focused on emerging trends in enterprise systems and datacenters. This research continues to make IDC a thought leader in enterprise infrastructure‒powered digital transformation. Thomas's recent topics covered software-defined infrastructure, hyperconvergence, virtualization, and cloud computing infrastructure. Prior to joining IDC, Thomas worked for 10 years as a senior project manager and business consultant for several leading IT companies in China. Thomas holds a master's degree in Computer Engineering from the University of Science and Technology of China.

AI data pricing is being negotiated before organizations understand how value is created, retained, or scaled in production systems. As a result, enterprises are locking in commercial terms without a clear model for how their data will behave—or what it will ultimately be worth.

Enterprise teams are being pushed into decisions about data earlier than expected. Not just technical decisions, but commercial ones.

Contracts are being negotiated before teams have a stable understanding of how their AI systems will behave in production. In many cases, pricing terms are being set before architecture, usage patterns, and governance controls are fully defined.

That creates real exposure.

What rights apply to training versus retrieval? How should data be priced when it continues to influence a model after initial use? Who carries liability when usage scales beyond what the original agreement assumed?

These questions are now showing up in active negotiations.

AI changes how data value is created and retained

In AI systems, data value is no longer tied to a single transaction—it depends on how data is used across training, retrieval, and continuous ingestion. Each model creates different economic and contractual implications.

Earlier data models assumed bounded use. A dataset supported a defined use case, and pricing reflected access, volume, or users.

AI systems behave differently.

  • Training embeds patterns into model weights; that effect persists.
  • Retrieval-based approaches provide controlled, revocable access.
  • Live connectivity introduces continuous ingestion.

These models carry different economic and contractual implications.
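To make the contrast concrete, the three access models can be sketched as data structures. The attributes below are a simplification for discussion, not standard contract terms, and the field names are invented.

```python
# Hypothetical sketch contrasting the contractual shape of the three access
# models: training, retrieval, and live connectivity.

from dataclasses import dataclass

@dataclass(frozen=True)
class AccessModel:
    name: str
    value_persists_after_use: bool   # does the data keep influencing outputs?
    access_revocable: bool           # can the seller withdraw the data later?
    consumption_bounded: bool        # is usage capped by the agreement?

TRAINING = AccessModel("training", value_persists_after_use=True,
                       access_revocable=False, consumption_bounded=False)
RETRIEVAL = AccessModel("retrieval", value_persists_after_use=False,
                        access_revocable=True, consumption_bounded=True)
LIVE_FEED = AccessModel("live feed", value_persists_after_use=True,
                        access_revocable=True, consumption_bounded=False)

for model in (TRAINING, RETRIEVAL, LIVE_FEED):
    print(f"{model.name}: revocable={model.access_revocable}, "
          f"persists={model.value_persists_after_use}")
```

The training row is the "in the soup" problem: value persists and access cannot realistically be revoked, which is why training rights tend to carry the hardest pricing questions.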

At the same time, AI expands consumption. A dataset that once supported a team of analysts may now support thousands of automated decisions.

Pricing models built around human-scale usage are now under structural pressure.

“Once the data is in the model, it’s in the soup. You can’t extract it.”

(Industry executive)

Misalignment shows up in contracts first

The tension shows up immediately in negotiations.

Buyers are trying to control cost and avoid open-ended exposure. Sellers are trying to capture value that extends beyond a single transaction. Platform providers influence access and control points.

Each party is acting rationally. But they are doing so without a shared model for how value should be defined.

This is why many negotiations stall or become overly complex.

The discussion shifts to definitions:

  • What counts as a derivative output
  • How reuse is defined
  • Whether training creates lasting economic claims
  • How usage is monitored and enforced

When these questions are not resolved early, they reappear later in more constrained and expensive ways.

Predictability is winning over precision

In theory, pricing should reflect value.

In practice, value is difficult to measure in AI systems where multiple data sources contribute to outcomes.

Most organizations are prioritizing predictability.

They want to understand:

  • What they are committing to
  • How costs change as usage scales
  • What constraints apply to future use

In AI data pricing, predictability is often more valuable than precision.

This is why simpler models such as tiered usage and credits are gaining traction, even when they are not economically perfect.

“Simplicity beats perfect value capture in early-stage AI adoption.”

(Data vendor executive)
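The appeal of tiered usage models is that cost becomes a simple, inspectable function of consumption. The sketch below is purely illustrative; the tier boundaries and per-credit rates are invented, not drawn from any real vendor's price list.

```python
# Hypothetical tiered-credit pricing: each band prices only the credits
# that fall inside it, so cost scales predictably with usage.

TIERS = [
    # (credits included up to this bound, price per credit in this band)
    (100_000, 0.010),
    (1_000_000, 0.007),
    (float("inf"), 0.004),
]

def monthly_cost(credits_used: int) -> float:
    """Blended monthly cost under the tier schedule above."""
    cost, lower = 0.0, 0
    for upper, rate in TIERS:
        band = min(credits_used, upper) - lower
        if band <= 0:
            break
        cost += band * rate
        lower = upper
    return cost

print(round(monthly_cost(50_000), 2))    # 500.0  (entirely in the first band)
print(round(monthly_cost(250_000), 2))   # 2050.0 (spans two bands)
```

A buyer can read the schedule and bound their exposure at any usage level, which is exactly the predictability that more "precise" value-based formulas fail to offer.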

Governance is now part of pricing

Governance is no longer just about compliance. It affects pricing directly.

Organizations with strong governance can:

  • Clarify rights and usage boundaries
  • Reduce perceived risk
  • Support reuse across use cases

Organizations without it face:

  • Restrictive terms
  • Higher pricing
  • Delays

Pricing discussions increasingly require architectural clarity before contracts are finalized.

What to do now

The market has not settled. That does not remove the need to make decisions.

A few practices are emerging:

  • Separate training, retrieval, and live access rights early
  • Model the full lifecycle cost of data
  • Avoid long-term commitments during pilot phases
  • Preserve flexibility to renegotiate

The goal is not to find a perfect pricing model—it is to avoid decisions that limit future options.
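"Model the full lifecycle cost of data" can itself be sketched: sum acquisition, integration, and governance costs over the contract term, and let usage-driven cost grow as automated consumption scales. The cost categories, parameters, and growth rate below are illustrative assumptions, not a standard costing method.

```python
# Hypothetical lifecycle cost model: a dataset that looks cheap on list price
# can dominate costs once AI-driven consumption scales year over year.

def lifecycle_cost(acquisition_per_year: float,
                   integration_one_time: float,
                   governance_per_year: float,
                   usage_cost_year1: float,
                   usage_growth_rate: float,
                   years: int) -> float:
    """Total cost of a data contract over its term."""
    total = integration_one_time
    usage = usage_cost_year1
    for _ in range(years):
        total += acquisition_per_year + governance_per_year + usage
        usage *= 1 + usage_growth_rate   # automated decisions scale consumption
    return total

# Example: usage costs growing 80%/year overtake the acquisition fee by year 3.
print(round(lifecycle_cost(50_000, 20_000, 10_000, 30_000, 0.8, 3)))
```

Running scenarios like this before signing is one way to stress-test a pricing proposal against the "human-scale usage" assumption the article warns about.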

The core tension

Contracts are being signed while the underlying model is still evolving.

The immediate challenge is how to structure data pricing decisions today without limiting how AI systems create value tomorrow.

If you are working through these issues, I go deeper into them in my recent IDC Perspective.

Lynne Schneider - Research Director - IDC

Lynne Schneider is Research Director leading IDC's Data Collaboration & Monetization, and Location & Geospatial Intelligence market research and advisory practices. Ms. Schneider's core research coverage in DaaS includes data sourcing and delivery services from traditional and emerging data providers along with evolving data aggregation and dissemination platforms. The breadth of coverage includes services that enable an organization to externally monetize data generated as part of the organization's ongoing operations, value-added information derived from this data, and the marketplace for combining data with other solutions. This research analyzes the supply and demand side business and technology trends of this emerging category.