Over the course of the COVID-19 pandemic, we have seen a “digital divide” emerge: companies that invested in digital before 2020 were able to progress through the stages of disruption (business continuity, cost containment, etc.) to the next phase of growth. Digital transformation was accelerated by the need for remote work and collaboration with internal and external constituents, and companies that had implemented DX initiatives before the pandemic were, and continue to be, able to respond more flexibly.

Another component of digital transformation that accelerated during the pandemic is the evolution of industry ecosystems from a static list of partners into a diverse, flexible, and scalable mix of technology vendors, industry organizations, industry consortia, supply chains, service providers, service and expert networks, and end customers.

Organizations from every industry, including manufacturing, healthcare, retail, financial services, government, and construction, continue to expand and evolve the way that they work with industry ecosystem partners. Traditional value chains are now open, iterative closed loops between a varied set of partners from inside and outside these organizations — and within and outside their industry.

As our 2021 Global Future of Industry Ecosystems Survey shows, working more closely with industry ecosystem partners is necessary to spark innovation; augment skills, capacity, and knowledge; and enable resiliency. Examples exist today of companies, friend and foe alike, working in tandem to share data and insights, share applications, and share operations and expertise, and to deliver products, services, and experiences in a blended physical and digital way to patients, citizens, customers, or consumers.

There are multiple use cases supporting each of these initiatives, and we expect them only to grow and expand. Industry ecosystems will play a critical role in the months and years ahead for companies that realize they need a supporting cast of ecosystem participants functioning as a scalable extension of the organization, as well as a source of data and insight, a codeveloper of applications, and/or a provider of shared operations. CEOs, boards of directors, and business line leaders realize that a flexible industry ecosystem is necessary to move quickly to meet the end customer’s changing needs, assure product and service safety and quality, adapt to any disruption, and evolve like a biological ecosystem.

Recent IDC research on the future of industry ecosystems explores this digital divide. We expect that organizations that focus on industry ecosystems will begin to derive a large percentage of their revenue from these new business models. A big part of this transformation will be achieved by taking an open approach with industry ecosystems and utilizing tools such as digital twins, blockchain, and innovation communities to accelerate, monetize, and derive value from data, application, and operation initiatives and opportunities.

Capital and foundational support of assets, operations, and finances will come from funding (ecosystem “venture capital”) within industry ecosystems, as well as from governments. Organizations in every industry will continue to mandate that their industry ecosystem partners follow an aligned environmental, social, and governance (ESG) approach so that their industry ecosystem mission remains consistent, and they can meet patient, citizen, customer, or consumer expectations.

Manufacturers leverage this approach for innovation and manufacturing capacity; healthcare organizations share data and knowledge to improve patient outcomes; energy companies team up and share applications to ensure quality and high performance; and retailers work with their ecosystem partners to deliver goods and services to consumers in a blended physical and digital way. Often these are not only traditional partners but also competitors and organizations from other industries — in fact, IDC’s Future of Industry Ecosystems Global Survey shows that within 12 months most new partners for organizations will come from outside their core industry.

Suffice it to say, the future of industry ecosystems — an expansion of ecosystem participants that organizations must work with in support of any situation, whether innovation, product or service change, dynamic demand, or unexpected disruption — is rapidly progressing as the new way of working to ensure innovation, flexibility, and resiliency.

Please join me on February 8, when I will host a complimentary webinar, Industry Ecosystems Reorient for Purpose, Profit, and Digital Transformation, that looks at how a greater proportion of an organization’s ability to generate value will be tied to its participation in a new economy and new ecosystems.

Jeffrey Hojlo - Research Vice President - IDC

As Research Vice President, Future of Industry Ecosystems, Innovation Strategies, & Energy Insights at IDC, Jeff Hojlo leads one of IDC's Future Enterprise practices: the Future of Industry Ecosystems. This practice focuses on three areas that help create and optimize trusted industry ecosystems and next-generation value chains in discrete and process manufacturing, construction, healthcare, retail, and other industries: shared data & insight, shared applications, and shared operations & expertise. Mr. Hojlo manages a group focused on the research and analysis of the design, simulation, innovation, Product Lifecycle Management (PLM), and Service Lifecycle Management (SLM) market, including emerging strategies across discrete and process manufacturing industries, such as product innovation platforms and the closed-loop digital thread of product design, development, digital manufacturing, supply chain, and SLM. He also manages IDC's North American Energy Insights group, with a focus on key topics such as energy transition & sustainability, distributed energy resource management, and digital transformation in the Oil & Gas and Utilities industries.

IT organizations are under tremendous pressure to deliver on the promise of the digital business model. As organizations become increasingly digital, guaranteeing a reliable digital connection for both internal and external parties is now critical to business continuity. Any connectivity disruption can have a dramatic impact on revenues, partner engagement and customer experience and satisfaction.

Forced to pivot and adapt in 2020, organizations quickly learned that they must now deliver and execute on a resilient connectivity strategy that provides uninterrupted bandwidth to keep business operations running smoothly. Organizations must be able to respond, adapt and evolve based on real-time and future connectivity demands – and this adaptation should extend beyond the network.

In IDC’s recent report, IDC FutureScape: Worldwide Future of Connectedness 2022 Predictions, we predict that more than 30% of organizations will prioritize connectivity resiliency in 2022 to ensure business continuity, resulting in uninterrupted digital engagement for customers, employees, and partners. Recent data from IDC’s Future of Connectedness survey shows that connectivity resiliency is closely tied to the overall state of connectedness across the enterprise. Currently, almost half of enterprises are still at mid-stage or earlier in their overall connectedness maturity.

Even as we hopefully watch the world return to some new state of normal this year, this prediction’s percentage should arguably be higher. It should in fact be a wake-up call to any enterprise that looks to effectively manage a distributed workforce while further adopting digital-first principles.

Connectivity resiliency isn’t a one-time transformation. It requires the continuous alignment of people, processes, and technology around a common goal of guaranteed uptime. It also requires investment in and integration of complementary connectivity technologies that empower users and customers to remain connected to key applications and services regardless of location or issue. The business outcomes will be a faster ability to adapt to changing business and market conditions and the adoption of new technologies to improve agility.

The Roadmap to Connectivity Resiliency

Enterprises must first assess their current network, IT, cloud and application footprint to get a full understanding of how people, processes, systems, and “things” interact, and the actions needed to address potential technology related issues. Here, enterprises should perform an internal gap analysis to assess existing connectivity capabilities, strengths, weaknesses, and priorities for the entire organization. This should include:

  • Network access capabilities and needs
  • Usage patterns of on-premises and hosted applications
  • Acceptable service-level assurances associated with the disruption of business operations

Second, organizations should prioritize investment in and integration of complementary connectivity technologies that allow users and customers to remain connected to key applications and services regardless of location or issue. This could be via a complementary network, or through adoption of cloud solutions that allow access and usage anywhere and eliminate the risk of siloed data.

Third, organizations should focus on building automation and analytics into all critical connectivity processes. As organizations use complementary technologies like 5G and WiFi 6 to ensure seamless connectivity, it will be just as critical to leverage data intelligence and performance metrics to track customer engagement, provide notification of issues, remove manual intervention from any handoff, and automate migration requirements between networks or systems.

As organizations evaluate both short- and longer-term business goals, addressing the resiliency challenge today will help drive opportunities to become more innovative, efficient, and creative tomorrow. The end result will be increased agility for the Future Enterprise, with a connectedness outcome that improves network and IT efficiency, keeps employees, customers, and partners engaged, helps ensure business continuity, and accelerates time to revenue.

Interested in learning more about IDC’s Future of Connectedness research practice? Download our latest eBook, Value Chain Agility: Connecting Partner Ecosystems to Shared Data and Intelligence.

Paul Hughes - Research Director, Future of Connectedness - IDC

Paul Hughes is a Research Director leading IDC's Future of Connectedness Agenda program. He is also a key member of IDC's larger Worldwide Telecom Research Team. In this role, Paul is responsible for research related to the future innovation and transformation of how data and connectivity impact people, things, applications, and processes used by enterprises and end users. Within the Future of Connectedness practice, he also publishes thought leadership on how the Connectedness ecosystem – including communications service providers, cloud providers, network equipment vendors, IT hardware vendors, software vendors, and systems integrators – must develop solutions to meet the future technology needs of businesses and consumers.

When was the last time you participated in a live conversation with a group of your peers? Virtual roundtables offer the opportunity for valuable face time with your buyers, fostering peer-to-peer networking. These sessions curate dynamic and exclusive conversations in a digital, yet live, setting. The inherent flexibility, allowing connection from any location to engage in real-time discussions, often leads to increased participation rates, especially among senior IT buyers.

When a virtual roundtable is set around the right timely topic, it affords you the ability to explore challenges, opportunities, and threats as they are told by your ultimate customer. Without distraction from other sources, you get face time with the most engaged audience of decision makers. By joining your peers from across your industry, virtual roundtables give you a platform to discuss best practices and lessons learned to help guide your decision making around a particular focus area of your business. More importantly, your customers are looking for this style of interaction to help with their purchase decisions.

Benefits of hosting a virtual roundtable include:

  • Captive audience of Senior IT decision makers
  • Exclusive in-depth live conversations with customers and prospects
  • A platform to identify customers’ key business challenges and peer-to-peer networking
  • Full contact information for all registrants and attendees

Virtual networking events and online product demos are the top two resources that will be more important as tech buyers research technology products and make purchase decisions over the next 12 months.*

How to Conduct an Effective Virtual Roundtable

  1. Pick a topic:  The topic is very important because it is the driver of engaged conversation rather than quick, short answers. Make sure your topic has breadth to it; it should not be centered on your business but reflect industry changes and opportunities as well.
  2. Shortlist a speaker:  The right moderator makes all the difference in stimulating and maintaining an engaged discussion among all of your participants. Hosting a third-party speaker, such as an industry analyst, also supports your credibility in the topic area you wish to dive into, because they are an impartial expert. They also bring a level of thought leadership to the discussion. IDC analysts, for example, are recognized worldwide for their industry expertise, with over 1100 analysts specializing in technologies and industries in 110 countries. As a moderator, an IDC analyst presents key findings, engages in the discussion, and works with you to develop a powerful topic fueled with our research, creating a more engaged audience.
  3. Event logistics:  Once you have a topic and know which expert moderator is the right fit for your discussion, it’s time to choose a date that works well with your teams. Create your digital invitations and plan follow-up messaging to non-responders. Design an inviting landing page to point your invitations to and gather attendance. An effective landing page will give your invitees more information about your virtual roundtable event and, through carefully crafted urgency, encourage their participation. Lastly, send reminders ahead of your event date. This will create excitement and help ensure your participants attend on the day of your virtual roundtable; it’s also an opportunity to build a relationship and show that their attendance matters. If you are working with a third-party research firm, such as IDC, they will take on this planning and development for you and guarantee a minimum attendance of qualified leads for your business.
  4. Content development:  It’s important to draft marketing-focused content built on industry data points that can not only get conversation started but keep it flowing. Your content plan is the driver of insights into what your customers are thinking, but it must be crafted in a way that creates a meaningful peer-to-peer discussion, one where everyone gets something of value. Your chosen moderator will work with you to craft the perfect content, layered with research, that will guide the conversation and leave everyone with best practices and lessons learned after the event.

Guaranteed Leads to Drive Your Business Forward

We are experts at organizing and facilitating virtual roundtables. With events created and researched by IDC and produced in partnership with IDG Communications, the producers of the world’s largest technology media brands such as InfoWorld, CIO, and Network World, you will receive targeted leads, a chance to give and receive constructive feedback from senior-level tech buyers, a platform to identify your customers’ key business challenges, and an actionable pipeline for your sales teams.

Host a Turn-Key Virtual Roundtable

The topic covered is crucial to the success of a virtual roundtable. The right topic and moderator will generate a two-way dialogue between you and your peers; one that provides you with enough insights to move your strategies forward. IDC’s Future Enterprise practice covers nine key areas that will be driving CEO agendas. We can help you host one of these critical conversations today:

  • Future of Digital Infrastructure:  responsive, scalable, and resilient infrastructure technologies
  • Future of Intelligence:  enterprise intelligence
  • Future of Trust:  risk, compliance, security, privacy, and ethics/governance
  • Future of Work:  workforce transformation
  • Future of Customers and Consumers:  data-driven customer experiences
  • Future of Industry Ecosystems:  open, agile, and scalable ecosystems that encourage innovation
  • Future of Digital Innovation:  trends in software development and distribution
  • Future of Operations:  resilient market driven operations
  • Future of Connectedness:  seamless connected experiences

Interested in hosting a Future Enterprise virtual roundtable with your key buyers and peers?

*Sources: IDG Role & Influence of the Technology Decision-Maker Survey, 2020; Innovate MR – COVID-19 B2B Study, June 2020

The start of the new year brings many people closer to realizing ways they can improve: perhaps it’s eating better, or fitting in more time with family and friends. There might be professional resolutions, such as meeting more regularly with your boss or connecting with colleagues outside of your department. For IT, cutting back on wasted cloud spending is often high on the list but tends to fall through the cracks, with this pattern going unresolved year after year.

According to Forbes, executives estimate that 30% of their cloud spending is wasted, yet at the same time enterprises intend to spend even more on cloud services. Clearly, wasteful cloud spending is a recognized and growing problem that for many continues to go unresolved. As this blog will show, where IT leaders fall short is not in identifying areas of spending that can be improved, but in implementing a plan of action for cost savings and maintaining it.

To elaborate on cloud costs, there are many tools available from cloud providers and third parties that provide reports, dashboards, and even recommendations about which instances can be removed or resized down or up (rightsizing). Tools that provide intelligence can also determine how to use discount options (reserved instances, savings plans, reserved capacity, etc.), how to handle licenses smartly, and what to change in application architecture to save costs. And instances can be disabled when not in use.
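The triage logic behind such rightsizing recommendations can be sketched in a few lines. The thresholds, instance names, and utilization figures below are illustrative assumptions for this sketch, not any particular provider's defaults:

```python
# Illustrative rightsizing triage from CPU utilization samples (percent).
# Thresholds and fleet data are assumptions, not tied to any provider's tooling.
from statistics import mean

def recommend(name, cpu_samples, low=10.0, high=80.0):
    """Classify an instance from its CPU utilization samples (percent)."""
    avg, peak = mean(cpu_samples), max(cpu_samples)
    if peak < low:
        return (name, "consider removing or stopping")    # effectively idle
    if avg < low and peak < high:
        return (name, "downsize (rightsizing candidate)")  # over-provisioned
    if avg > high:
        return (name, "enlarge or scale out")              # under-provisioned
    return (name, "leave as is")

fleet = {
    "app-01": [3, 5, 2, 4],       # idle
    "db-01":  [2, 4, 25, 5],      # mostly quiet with brief peaks
    "etl-01": [85, 92, 88, 90],   # consistently busy
}
for name, samples in fleet.items():
    print(recommend(name, samples))
```

In practice the thresholds would be tuned per workload, and memory, I/O, and cost data would feed the same decision, but the shape of the logic is the same: measure, classify, recommend.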

In summary, these resources provide insight, but knowledge of your spending is only as useful as what you do with it. How you act will determine how effective you are at plugging the holes in your spending.

Because of the effort that’s needed, it’s common for IT to plug its holes with patches. Take, for example, disabling instances outside working hours. In theory this is an excellent saving, but instances are part of applications, which in turn are part of chains, and it may well be that data exchange takes place in a chain outside working hours. Test teams approaching a deadline may also need their environment outside the pre-planned working hours. And if environments are used in the management chain, they must remain available after hours in case of an emergency. Overall, saving is easier said than done, mainly because it takes work to get there.
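A shutdown schedule that honors these exceptions can be sketched as a simple guard check. The environment names, working hours, and exception flags below are hypothetical; a real implementation would pull them from a CMDB or tagging policy:

```python
# Hypothetical off-hours shutdown check. The exception categories mirror the
# ones discussed above: chain data exchange, deadline exemptions, emergencies.
from datetime import time

WORK_START, WORK_END = time(7, 0), time(19, 0)  # assumed working hours

def may_stop(env, now, in_batch_chain=False, deadline_exempt=False,
             needed_for_incident=False):
    """Return True only if stopping `env` at time `now` is safe."""
    outside_hours = not (WORK_START <= now <= WORK_END)
    if not outside_hours:
        return False   # never stop during working hours
    if in_batch_chain:
        return False   # the chain exchanges data after hours
    if deadline_exempt:
        return False   # a test team near a deadline needs the environment
    if needed_for_incident:
        return False   # must stay available for emergency management
    return True

print(may_stop("test-03", time(23, 0)))                       # safe to stop
print(may_stop("test-03", time(23, 0), in_batch_chain=True))  # keep running
```

The point of the sketch is that the saving only materializes when every exception is modeled; each unmodeled one either blocks the schedule or, worse, breaks a chain at 2 a.m.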

Rightsizing is also more difficult than it seems. Users and administrators are often hesitant about removing capacity; users see their performance decrease, and administrators see the risk that more failures will occur because there is less overcapacity to absorb issues. In the latter case, you must carefully analyze where these issues come from; a mediocre application can benefit from more capacity, but that is not a long-term solution. Remember, if the roof leaks, you can replace the bucket that collects the water with a larger tub, but that too will become full at some point. You’ll eventually need to repair the roof. 

Ultimately, you’ll have to move towards an entirely new approach in which you not only have insight into the costs, but also involve users and administrators, so that you can make the right decisions about saving on your cloud costs. This isn’t as daunting or unattainable as it sounds. In our next blog we’ll reveal how some IDC Metri Cloud Economics clients have transformed their cloud spending, so you can see how to get there too.  

Can’t wait until the next blog is published to learn more about cutting cloud costs? Contact us to schedule a conversation. 

IDC’s recently published global Future of Digital Infrastructure survey research shows that among enterprises with 1,000 or more employees:

  • 76% want their strategic vendors to take more day-to-day administrative and operational responsibility for infrastructure so internal IT staff can focus more on the business.
  • 75% indicate that the vendor’s ecosystem of ISVs and managed services partners is one of their most important digital infrastructure and cloud services selection criteria.
  • 73% plan to use flexible, pay-as-you-go OpEx consumption models for the majority of their digital infrastructure and cloud purchasing by the end of 2022.

The changing nature of digital infrastructure is having a significant impact on the ways that enterprises evaluate and do business with major infrastructure hardware and software vendors, as well as public cloud service providers and channel partners. Specifically, the increasing sophistication of data-intensive, cloud-native workloads, coupled with adoption of hybrid, interconnected digital infrastructure across public clouds, data centers, and edge platforms, requires enterprises to implement highly autonomous, intelligent approaches to management, design, security, compliance, and multicloud control.

Most organizations are struggling to recruit and retain the required operational skills.  Simultaneously, developers find their time consumed with configuring, updating and securing infrastructure and data resources rather than on coding for innovation. The continuous introduction of purpose-built silicon and specialized public cloud services exacerbates an already challenging situation as more options provide the potential for powerful innovation, but require internal teams to master yet another set of technologies.

Vendors Respond with Many Digital Infrastructure Options

Enterprises are concluding that there has to be a better way to make digital infrastructure operations more reliable and economical using automation and proven best practices. Vendors are stepping up to offer enterprise technology decision makers a wide range of emerging digital infrastructure offerings to address these challenges, including:

  • Consumption-based infrastructure subscriptions for dedicated platforms, such as Dell APEX, Cisco Plus, and HPE Greenlake
  • Extended public cloud infrastructure and services deployed on premises, such as AWS Outposts, Google Anthos, IBM Distributed Cloud and Oracle Cloud@Customer
  • Portable cloud-native platforms optimized for hybrid and multicloud deployments, such as Red Hat OpenShift, Rancher, and VMware Tanzu
  • Hybrid and multi-cloud management software and services, such as Azure Arc and VMware Cross-Cloud Services
  • Cloud and data center interconnect and data streaming as-a-service solutions from providers, such as Equinix and Cloudera

It’s clear that most major digital infrastructure providers understand enterprise customers want a simplified, secure, and cost-effective way to ensure that mission critical applications and developer services are available as needed, anywhere and anytime, regardless of the end user’s physical location. In most cases, the major vendors are working closely with strategic channel partners to help them expand competencies and step up to help promote and support these new offerings.

Traditional infrastructure hardware and software vendors are also deepening partnerships with the major public cloud services providers to include more complex and sophisticated services via public cloud marketplaces.

Lessons Learned from Successful Digital Transformation Efforts

Fundamentally, each of these new types of vendor offerings provides enterprises with an option for shifting some aspects of day-to-day digital infrastructure operations and lifecycle management to third-party vendors and service providers, including channel partners.

At first glance, some decision makers might think that this type of transition undercuts the value of IT operations teams within their enterprise.  IDC’s research shows such worries are unfounded.  Rather than weaken IT’s connection to the business, the ability to better optimize performance, cost and security via vendor managed platforms and services actually frees up internal resources to focus more on improving business outcomes.

The experiences of IDC’s recently announced Future Enterprise Best in Future of Digital Infrastructure North American Award winners and finalists are instructive, underscoring how successful IT leaders can make a difference by using modern hybrid architecture and as-a-service options to break down brittle internal data and application silos and enable the rapid launch of vital new capabilities.  For example:

  • A major luxury retail brand house undertook a multiyear digital core transformation to support aggressive worldwide growth targets for global digital sales that included migrating over 100 servers to a public cloud, to quickly reduce costly technical debt. The new digital core platform allowed the organization to enrich the end user experience and take advantage of advanced analytics and RPA, AI, and ML technologies, to drive sales based on consumer behavior.
  • A global automotive financial services organization that served different brands with separate dedicated legacy infrastructure stacks transformed itself into a 100% public cloud-based, multi-brand, multi-tenant platform to provide customized brand-specific online engagement and support, with full data segregation and protection. Adopting a cloud-native platform approach to digital infrastructure allowed the company to rapidly grow new lines of business and to introduce more agile and data-intensive services in a fraction of the time it would have taken using its traditional approach to infrastructure planning and modernization.
  • A US-based regional credit union implemented a major overhaul of its entire data center focused on improving business and cyber resilience, with the goal of resuming full business operations within an hour after a ransomware incident. As a high-volume, transaction-based business, the credit union implemented a highly integrated, cloud-friendly digital infrastructure environment that relied on automated analytics to detect issues and protect and restore data securely.
  • A global medical systems and software company transitioned existing silos of legacy data center and cloud services into a unified cloud-first platform to support more efficient and cost-effective centralized compute, analytics and data storage services across the business.  The resulting digital work platform allowed the company to accelerate R&D cycles, digitize manufacturing and increase the ability to support mobile work.

Successful organizations emphasize how the success of their digital infrastructure transformation efforts is tied to their willingness to disrupt the status quo by dramatically accelerating the speed of deployments, proactively breaking down operational silos, and focusing on business-centric key performance indicators (KPIs). In many cases, this approach required IT teams to rapidly migrate workloads out of traditional data centers, engage with new types of IT and cloud service partners, automate many traditional aspects of IT operations, create new governance programs, and tie the business case to fundamental digital business imperatives and business goals.

In 2022, IDC expects many more enterprises to step out of their comfort zones and find more effective and flexible ways to enable mission critical digital business priorities.  Vendor relationships will evolve in tandem with more emphasis on outcomes, consumption-based sourcing, software-driven automation and remote lifecycle support.

Join me at IDC Directions, in Boston or Santa Clara in March to hear more about how enterprises are changing the ways they work with strategic digital infrastructure vendors, cloud service providers and channel partners.

Learn more about IDC’s Future of Digital Infrastructure research practice and its coverage of the changing nature of digital infrastructure.

Mary Johnston Turner - Research VP - IDC

Mary Johnston Turner is Research Vice President within IDC's worldwide infrastructure research organization and global research lead for the Digital Infrastructure Strategies practice. Mary's coverage tracks enterprise tech buyer sentiment related to compute, storage, edge, operations, and cloud platforms and deployment models. Current research priorities emphasize the impact of rising requirements for data-driven AI-Ready Infrastructure, Fit-for-Purpose Hybrid and Multicloud Architectures, Autonomous Operations, Edge Integration, and collaborative business and IT governance. Her practice emphasizes the voice of the enterprise customer, based on surveys and in-depth analysis of best practices and infrastructure investment priorities. Mary's research covers topics including AI-ready infrastructure, tech debt avoidance, data center modernization, mainframe modernization, infrastructure governance, staffing and skills priorities, and infrastructure operating models. Within the infrastructure research organization, Mary collaborates with other practice leads to ensure coherency and alignment of insights and published research.

Prior to joining IDC, I worked at a product and process innovation consulting firm. The technical staff – many of my colleagues – consisted of more than a hundred mechanical, electrical, chemical, and other engineers with multiple doctorate and postdoctorate degrees. The company also actively maintained a network of scientists who played an advisory role and could be consulted on a moment’s notice. During my ten-year tenure this network grew into the thousands. The company’s top clients included Fortune 1000 companies – mostly in the manufacturing and consumer goods industries – with significant R&D investments.

This consulting firm was relatively small by most standards – chiefly a peer group of larger consulting firms – but that did not stop it from taking on some very cool assignments on behalf of its clients. It designed tiny antennas (a few millimeters tall) for the telecommunications industry, resolved vexing production problems with aluminum truck wheel manufacturing, solved corrosion issues with natural gas pipelines, lowered the cost of solar panels, developed new approaches for paint coatings, and even came up with a breakthrough dental whitening solution.

How could this firm be so effective in so many different disciplines? Because of a culture that fostered innovation. The company had developed a systematic innovation methodology that could be applied to any engineering, scientific, or research area. This methodology deconstructed an “engineering system”, whether a tooth whitening system or a truck wheel manufacturing system, into the functions between its underlying components. It determined which functions were useful and which were unnecessary, or even harmful, and then rebuilt the system by eliminating the unnecessary or harmful functions and adding useful ones. For those of you in the know, this methodology was based on TRIZ, the Theory of Inventive Problem Solving. All kinds of other analytical tools, which I won’t delve into here, were applied as well.

One could argue that this firm built “functional twins” of engineering systems that needed to be optimized, prototyped, or operationalized. This was long before terms like “digital twins” (used in the context of operational technology) were commonplace. (A digital twin is a virtual representation and real-time digital counterpart of a physical object or process.) And indeed, no computers, except basic laptops, were involved in the analytical process used by the firm’s technologists. Simply put, this small, innovative consulting firm could thrive because very few of its large clients had invested in, or could access, large high-performance computing (HPC) systems (such as those found in research institutions and universities at the time), let alone had the in-house skills to codify or program pertinent problems onto these HPC clusters. You could say that the firm’s core staff of 100 engineers and adjunct staff of 3,000 scientists represented parallelized human computers, a bit like the “human computers” that NASA employed for early Apollo missions, except decades later.

Today, companies can no longer rely on “human computers” for their R&D initiatives. Fierce competition, the constant quest to maintain or further an organization’s differentiation, and the need to make decisions steeped in digital information mean that almost every company – regardless of industry – must invest in high-performance computing, artificial intelligence, and analytics infrastructure. And they must employ technical staff who can make effective use of these systems. We are in the era of what NVIDIA’s CEO Jensen Huang calls the “industrialization of HPC”. If data is the new oil, the industrialization of HPC is designed to make sure that the crude oil can be quickly extracted, refined, and made fit for consumption, internally and externally. What the firm I worked at delivered as a service, to clients that could afford it, will soon become table stakes for every firm in every industry, regardless of size.

Revolutionizing Business Investments and Outcomes

The industrialization of HPC – also sometimes referred to as the democratization of HPC – is nothing more than HPC technologies becoming commonplace. Their adoption is no longer limited to well-funded national laboratories, universities, and select industries such as oil & gas, genomics, finance, aerospace, chemical, or pharmaceutical. HPC is gaining wider adoption in public and private research institutions, cloud, digital and communications service providers, and – crucially – at many enterprises. This is revolutionizing business investments and outcomes:

  • Industrial firms are overhauling their manufacturing plants and R&D centers. Increased investments in software solutions enable scientists and technical staff to accelerate product and process innovation with precision and deterministic reliability.
  • Fast, highly responsive, and disruptive (rather than incremental) product and process innovation is crucial for an organization’s competitiveness today, and such innovation requires increasingly sophisticated approaches, including modeling and simulation on HPC systems.
  • Companies are increasing their investments in artificial intelligence (AI), leading to a faster penetration of AI in enterprise workloads. IDC predicts that by 2025, a fifth of all worldwide computing infrastructure will be used for running AI. AI began on siloed systems in the datacenter but is increasingly being migrated to large clusters of the same type that can run HPC. Essentially these AI clusters are the benign Trojan horse that brings HPC clusters to the industrial world.

The growing availability of stupendous amounts of computing at manageable costs (and with measurable returns) – whether as capital expenses for on-premises HPC systems or operational expenses for HPC as a service – has tremendous enabling power for scientists, engineers, and technical staff. Coupled with rich data sets, access to vast amounts of compute capacity has ushered in a new R&D culture among scientists. The ability to increase iterative runs without penalty allows them to tweak a model or run a simulation as often as necessary within acceptable timeframes.
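The “iterative runs without penalty” pattern is essentially a parameter sweep: rerun the same model across the full grid of input combinations instead of hand-picking a few runs. Here is a minimal sketch with a toy stand-in for an expensive simulation; the model function and parameter values are purely illustrative.

```python
import itertools
import math

def simulate(velocity: float, angle_deg: float) -> float:
    """Toy 'simulation': ideal projectile range for a launch velocity and angle.
    Stands in for an expensive HPC model run."""
    g = 9.81
    angle = math.radians(angle_deg)
    return velocity**2 * math.sin(2 * angle) / g

# With cheap, abundant compute, sweep the whole parameter grid.
velocities = [10.0, 20.0, 30.0]
angles = [15.0, 30.0, 45.0, 60.0]

results = {
    (v, a): simulate(v, a)
    for v, a in itertools.product(velocities, angles)
}

best = max(results, key=results.get)
print(f"best run: v={best[0]} m/s, angle={best[1]} deg, "
      f"range={results[best]:.1f} m")
```

The same structure scales from a dozen runs on a laptop to millions of runs on an HPC cluster; only the scheduler underneath the loop changes.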

This last point goes further than just the enabling of multiple runs. It also allows R&D in enterprises to take on a fundamentally scientific and data-led approach to their domain, one in which they are not just trying to develop solutions but are also starting to actively look for new problems (which can be solved using algorithmic approaches). Disruptive innovation lies in using technology to look for new problems, which is where scientific discovery usually begins.

HPC at IDC

All this brings me to why this is an important area for IDC and why I am fortunate to lead IDC’s reinvestment in this domain. IDC’s clients – which include vendors, service providers, end users, and financial investors – continue to seek high-quality market research and intelligence on High Performance Computing. They have been calling on IDC to expand its global research framework to include HPC for a while now. And I am here to tell you that we heard you loud and clear. In other words, IDC’s coverage of HPC is born of an unmet need in the market for reliable and actionable market data and insights, related trends, and crucially the convergence of HPC with emerging domains like Artificial Intelligence, Quantum Computing, and Accelerated Computing. In doing so, we want to ensure that any new HPC-related coverage is taxonomically and ontologically aligned with IDC’s global industry research framework. This is a strategic investment area for IDC, and we plan to pursue it with all our might.

Starting in January 2022, IDC is launching two HPC-focused syndicated research programs (called continuous intelligence services, or CIS). These programs (which will be part of my practice, and for which I am hiring two analysts – more on that below) will track the HPC market and industry from all aspects, including work done at national labs, universities, businesses, and other organizations across the globe. The two programs are:

Both programs will offer intelligence for vendors and service providers as they seek to offer technology stacks as a service to enable a variety of use cases related to High-Performance Computing.

IDC has been following the HPC market closely for several years, using the term “Modeling and Simulation (M&S)”. We have examined M&S as a use case group that is spread across our enterprise workloads market segments and tied to the enterprise infrastructure markets that we track. Further, we define, track, size, forecast, and segment adjacent markets, technologies, and use case groups, namely:

In doing so, we concluded that all of the above can be brought together under one umbrella term: Performance-Intensive Computing (PIC), notably because of a convergence of compute and storage infrastructure used for deploying workloads related to these use case groups.

IDC defines Performance Intensive Computing (PIC) as the process of performing large-scale mathematically intensive computations, commonly used in artificial intelligence (AI), modeling and simulation (M&S), and Big Data and analytics (BDA). PIC is also used for processing large volumes of data or executing complex instruction sets in the fastest way possible. PIC does not necessarily dictate specific computing and data management architecture, nor does it specify computational approaches. However, certain kinds of approaches, such as accelerated computing and massively parallel computing, have naturally gained prominence.
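One of the approaches named above, massively parallel computing, comes down to splitting a mathematically intensive job into independent slices and running them on many workers at once. Below is a toy sketch in Python using a process pool; the kernel (numerically integrating sin(x) over [0, π]) is illustrative only and stands in for a real performance-intensive workload.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def integrate_slice(args):
    """Midpoint-rule integral of sin(x) over one slice of the domain."""
    start, end, steps = args
    h = (end - start) / steps
    return sum(math.sin(start + (i + 0.5) * h) for i in range(steps)) * h

def parallel_integrate(workers: int = 4, steps_per_slice: int = 250_000) -> float:
    # Split [0, pi] into independent slices -- the essence of massively
    # parallel computing: many workers, no shared state, one reduction.
    edges = [i * math.pi / workers for i in range(workers + 1)]
    tasks = [(edges[i], edges[i + 1], steps_per_slice) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(integrate_slice, tasks))

if __name__ == "__main__":
    # The exact integral of sin(x) over [0, pi] is 2.
    print(f"{parallel_integrate():.6f}")
```

Accelerated computing applies the same decomposition but maps the slices onto GPU or other accelerator cores rather than CPU processes.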

In the context of Performance Intensive Computing, IDC views HPC as comprising three principal market segments:

  • Supercomputing sites that have been funded and custom-built for governments, national labs, and other public organizations
  • Institutional or enterprise sites that have been built with a mix of custom and off-the-shelf designs
  • Mainstream HPC environments that have been built with off-the-shelf designs to fulfill the technical and scientific computing needs of thousands of businesses around the world

When we define these markets, we make sure that they fit seamlessly together – like a puzzle. We also ensure that they logically align to IDC’s definitions and tracking approaches for the worldwide enterprise infrastructure market. The figure below shows how, in IDC’s taxonomy, these markets fit together.

Ten years ago, firms such as the one I worked at prior to joining IDC used human experts to develop methodologies for solving or optimizing problems by creating functional twins of engineering systems. Engineers would spend weeks taking a system apart (on paper), defining the functions between all the components, removing harmful or unnecessary functions, adding beneficial ones, and then reconstructing the system with innovative new features. Today, engineering systems are recreated, analyzed, and optimized digitally; product and process innovation is performed with support from AI, HPC, and BDA; and scientists must have software development expertise. This shift is nothing less than a global digital transformation in the R&D and engineering departments at companies of all sizes. High-performance computing is now truly industrialized and is playing a central role in driving disruptive innovation.

And if you are interested in joining our new HPC practice, consider these fantastic job openings:

Should you invest in High Performance Computing solutions? IDC’s research and insights can be customized and designed around your specific product goals. Read our latest research on Performance-Intensive Computing Market Trends.

About the Author: Peter Rutten is Research Vice-President within IDC’s Worldwide Infrastructure Practice, covering research on computing platforms. Mr. Rutten is IDC’s global research lead on performance-intensive computing solutions and use cases. This includes research on High-Performance Computing (HPC), Artificial Intelligence (AI), and Big Data and Analytics (BDA) infrastructure and associated solution stacks.

Peter Rutten - Research Vice President, Performance Intensive Computing (PIC) - IDC

Peter Rutten is Research Vice-President within IDC's worldwide infrastructure research organization and global research lead for the performance-intensive computing (PIC) practice. IDC's PIC coverage includes research on High-Performance Computing (HPC), Artificial Intelligence (AI) and Generative AI (GenAI), Big Data and Analytics (BDA) and Quantum Computing (QC) infrastructure stacks, deployments, solutions, workloads and use cases. It includes coverage of classical and hybrid quantum-classical supercomputing, and institutional and mainstream HPC. Peter and his team take a keen interest in emerging infrastructure domains - including quantum, analog and neuromorphic computing - that are highly disruptive to mature infrastructure markets. As a member of IDC's worldwide compute infrastructure research practice, Peter covers high-end, accelerated, in-memory and heterogeneous computing infrastructure systems, platforms, and technologies. These include servers with discrete and embedded accelerators (e.g., GPUs, FPGAs, and ASICs) used in AI and HPC environments. In his role, he performs quantitative (market sizing and forecasting) and qualitative (primary research based) analysis as well as custom market sizing for IDC's clients.

Agile development promises faster, more responsive development that better aligns with the transformation, digital or otherwise, of organizations as they face heightened competition. Driven by market and technology changes, organizations are restructuring themselves and their products and services to be more agile and opportunistic in responding to market changes. Agile should be well suited to delivering this responsiveness when building and supplying technology capabilities to transforming organizations.

But, frequently, it isn’t.  

By its nature, Agile can and should be a major enabler supporting these changes, but many organizations find it difficult to manage and extract this value, due to challenges in measuring productivity, quality, performance, and forecasting delivery. It’s hard to manage what can’t easily be measured.  

Why is Agile hard to measure and harvest value from?  

Waterfall and other goal- or milestone-focused development methodologies are structured with clear definitions of project phases (requirements gathering, sequential development, codified dev-test-QA-production flows) and milestones. Agile is more fluid. Agile measures productivity with a qualitative measure, Story Points, which makes cross-team productivity comparisons difficult. Agile values individuals and interactions getting work done over process. It drives toward working code (moving quickly) while back-seating documentation. By involving the customer closely in the development process, it is more responsive and adaptable, at the risk of increasing backlog and expanding scope and requirements.

While Agile is well suited for delivering capabilities in a modern, competitive landscape, getting that value is hard, but not impossible. Typically, organizations struggle in three areas of Agile value: 

  1. Predictable delivery of capabilities (reliable productivity) 
  2. Quality 
  3. Cost to performance, including with service providers 

IDC Metri’s Agile Value Management product addresses these management challenges by assessing Agile development efforts across team and product performance categories. Key team factors assessed are productivity, cost efficiency, delivery speed, and quality. For product quality, we evaluate robustness, efficiency, security, changeability, transferability, and technical debt. Further, it allows for benchmarking team performance against other teams within an organization and against market peers. These assessments roll up into management dashboards that help identify trends, and into engineering dashboards that drill into specific recommendations and remediations.

Predictable delivery of capabilities 

With the Agile framework structured around sprints (typically two-week cycles of refactoring, backlog attack, development, and just-in-time requirements gathering), Story Points for goals, and velocity (Story Point clearing) for progress, it’s hard for organizations to translate these measures to more traditional measures of progress. A lot of motion and momentum is demonstrated, but how this leads to predictable delivery of capabilities is elusive. To address this, IDC Metri uses a proven methodology for assessing progress and ensuring predictability—automated and enhanced function point analysis (FPA).

The IDC Metri Agile Value Management (AVM) solution assesses a development team’s progress using both enhanced and automated function point analysis. FPA delivers a concrete assessment of size delivered (value) and enables comparison of productivity across teams and benchmarking against industry peers. AVM provides management with progress measurement dashboards for productivity and delivery speed. To measure and assess these, IDC Metri uses the functional output a team has delivered in a certain timeframe, leveraging the NESMA standard of functional size added + changed + deleted. For automatic measurement of functional size, IDC Metri measures according to the ISO 19515 standards for Automated Function Points (AFP) and Enhancement Function Points (EFP). This data is presented in a fashion that allows managers to understand progress toward goals, with the transparency to understand and predict capability delivery.
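The size and productivity measures described above can be sketched in a few lines: functional size is the sum of function points added, changed, and deleted (the NESMA-style enhancement size), and productivity and delivery speed are simple ratios over effort and time. The field names and figures below are illustrative, not IDC Metri’s actual model.

```python
from dataclasses import dataclass

@dataclass
class SprintDelivery:
    """Function points delivered in one sprint (illustrative fields)."""
    fp_added: int
    fp_changed: int
    fp_deleted: int
    hours_spent: float
    weeks: float = 2.0  # typical sprint length

    @property
    def functional_size(self) -> int:
        # NESMA-style enhancement size: added + changed + deleted
        return self.fp_added + self.fp_changed + self.fp_deleted

    @property
    def productivity(self) -> float:
        """Hours per function point (lower is better)."""
        return self.hours_spent / self.functional_size

    @property
    def delivery_speed(self) -> float:
        """Function points delivered per week."""
        return self.functional_size / self.weeks

sprint = SprintDelivery(fp_added=30, fp_changed=12, fp_deleted=3,
                        hours_spent=360)
print(sprint.functional_size)   # 45 function points
print(sprint.productivity)      # 8.0 hours per function point
print(sprint.delivery_speed)    # 22.5 function points per week
```

Because the same ratios can be computed for any team, in any sprint, they support exactly the cross-team and cross-market comparisons that Story Points cannot.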

Quality 

Ensuring predictable or efficient development only matters if the product being produced is of the quality (stability, security, efficiency, etc.) necessary to meet the business goals. For this reason, it is important to balance performance measures of the team with quality measures of the code. We don’t want measures and goals for performance to have the unintended consequence of driving down quality.

AVM provides source code analysis, delivering an ongoing assessment of trends in team quality over time and highlighting key areas of deficit. From this analysis, an Engineering dashboard is created showing the (critical) violations found, why they are violations, where they occur, and how to solve them. The most critical ones are put on an action plan. This data is also presented in an easily digestible fashion for managers responsible for ensuring product quality.

The Engineering dashboard clearly identifies poor code and critical violations (CVEs), allowing the development team to address quality issues better and more rapidly. When teams adopt the guidance from the Engineering dashboard, overall development practices improve. Quality and performance improve too, thanks to the lower testing effort that results from enhanced coding practices. Improving practices and identifying better ones also reduces team stress and enables recently onboarded team members to become productive more rapidly.

Cost to performance 

Sourced Agile development projects are typically time and materials (T&M), which shifts budget risk from the sourcing vendor to the buyer. Previously, development projects were typically fixed prices where risk (especially financial) was weighted towards the sourcing vendor. Similarly, even with internal projects, budgeting and cost were more predictable, due to the structure and predictable nature of methodologies like Waterfall.  

AVM, by putting measurable, traceable, and consistent metrics around development, makes cost management and cost efficiency easier and more transparent. Also, by providing benchmarking within an organization and against peers, a client has the context to understand the competitive meaning of these assessments (i.e., is my team underperforming my industry in the cost/performance ratio for development?). Further, by assessing a sourcing vendor’s current performance versus cost, goals can be set and measured consistently over time. AVM, with its combination of market benchmarking for services and concrete performance metrics, benchmarks sourcing vendor performance against market peers. This enables buyers to determine whether the service capabilities they procured are delivered competitively relative to other vendors in the market. Furthermore, it gives the buyer leverage in ensuring that a T&M development contract performs at a minimum to market peers, i.e., that the buyer is not over-paying for the quality and productivity of the development they receive.
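The underlying comparison is a simple ratio of a team’s cost per unit of functional output against a market baseline. A minimal sketch, with hypothetical figures and function names (this is not IDC Metri’s actual benchmarking model):

```python
def cost_ratio(team_cost_per_fp: float, market_cost_per_fp: float) -> float:
    """Team cost per function point relative to the market average.
    A ratio above 1.0 means the team is more expensive than the market."""
    return team_cost_per_fp / market_cost_per_fp

# Illustrative numbers only.
team_cost = 1220.0    # $ per function point delivered by the team
market_cost = 1000.0  # market-average $ per function point

ratio = cost_ratio(team_cost, market_cost)
if ratio > 1.0:
    print(f"paying {100 * (ratio - 1):.0f}% above market per function point")
```

Tracking this ratio sprint over sprint is what turns a T&M contract, where effort rather than output is billed, into something a buyer can hold to a market standard.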

A client example illustrates this. The client company nearshored application development and maintenance. They were concerned they were paying more than the value they received. IDC Metri performed an AVM assessment demonstrating gaps in value based on the hours (cost) put into the sprints. Productivity was 30% lower and cost 22% higher than market average. Maintenance cost four times the market average. This assessment culminated in supplier improvement actions to comply with performance and product health metrics (with ongoing verification by IDC Metri).  

To rephrase an earlier observation: if you can’t easily measure something, you can’t easily manage it. AVM allows organizations to clearly understand how their Agile development teams (staff, sourced, or hybrid) perform and deliver value. It cleanly addresses three key organizational struggles around Agile: predictability, quality, and cost. It makes Agile development easy to measure and assess, which means it enables easier and more effective management of Agile.

Want a deeper example of an organization that overcame challenges in quantifying Agile value? Read “A Management Primer: How Agile Development Teams Can Deliver Value”

Daniel Saroff - GVP, Consulting and Research Services - IDC

Daniel Saroff is Group Vice President of Consulting and Research at IDC, where he is a senior practitioner in the end-user consulting practice. This practice provides support to boards, business leaders, and technology executives in their efforts to architect, benchmark, and optimize their organization's information technology. IDC's end-user consulting practice utilizes our extensive international IT data library, robust research base, and tailored consulting solutions to deliver unique business value through IT acceleration, performance management, cost optimization, and contextualized benchmarking capabilities.