The Canadian tech industry has been experiencing a significant transformation in recent years, driven by market-destabilizing events, the rise of cloud computing, and the emergence of infrastructure as a service (IaaS).

In light of these changes, IDC Research has conducted a special study titled “Canadian Channel Study in Transition 2022” to explore the role of channel partnerships in the Canadian tech landscape. The study reveals that a vibrant and dynamic channel ecosystem can provide significant business growth opportunities, while a stagnant channel poses challenges to the industry’s growth. 

The report sheds light on the Canadian channel partner ecosystem, encompassing various solution partners serving multiple industries. It reveals that Canadian channel partners are now 150% more involved in providing digital transformation solutions than before the COVID-19 pandemic. This blog explores the top trends shaping the Canadian channel and key findings from this report. 

IDC has watched several trends unfold in the Canadian channel partner ecosystem since 2015, when digital transformation started to take hold in large organizations. Canadian sales, service, and delivery partners of technology vendors sought ways to add value to clients, drive revenue generation, and develop new capabilities to enhance their profitability.

Numerous trends reshaped the strategies and plans of these individual businesses:

  • A technology shift to cloud services.
  • A shift in primary business activity from resale to services, with sales motions and buyer engagement refocused on use cases that mattered to C-suite buyers. 
  • A shift in inter-company collaboration from a competitive mindset to connected-ecosystem co-opetition. 
  • An increase in merger and acquisition activity in the channel as companies made strategic moves to meet market and client demand. Specialized players such as CentriLogic and Carbon60 joined forces to scale up operations, while large services firms like KPMG and Accenture and publicly traded IT providers like Converge Technology Solutions and Insight consolidated independent entities. These M&A activities are reshaping the landscape, creating new opportunities for collaboration and innovation.
  • A shift towards on-demand or as-a-service infrastructure, embraced by both buyers and sellers in recognition of the financial and technological benefits it offers.

What is the Pulse of the Canadian Channel?

We surveyed 203 companies in the Canadian channel, examining both emerging topics and topics observed in similar studies in 2016, 2017, and 2019. After analyzing the data, we looked for examples of Canadian channel players that illustrated the trends we saw, using IDC’s proprietary Channel Partner Ecosystem (CPE) database.

The CPE database provides a comprehensive look at IT vendors, partners of IT vendors, and the entire channel partner ecosystem. It displays the links between IT vendors and their partners, highlighting the technology areas they cover, the markets they serve, geographic location data, and much more. As a result, the database depicts the service, solution, and geographic reach of a channel partner.

The CPE database contains information on more than 250,000 partners of 2,000+ technology vendors and identifies 1,000,000+ network relationships. Among the many vendors covered are Adobe, Amazon Web Services, Autodesk, Cisco, Citrix, Dell Technologies, Google, Hitachi, HP Inc., IBM, Intel, Intuit, Juniper, Micro Focus, Microsoft, NetApp, Oracle, Palo Alto Networks, Red Hat, Salesforce.com, SAP, SAS, ServiceNow, Siemens, Symantec, and VMware.

The database covers a wide area geographically with data from 100+ countries such as Australia, Canada, France, Germany, Hong Kong, Japan, the rest of Asia/Pacific, CEE, Latin America, the Middle East and Africa, Russia, the United Kingdom and the United States. 

The health of the channel is a barometer for the health of the Canadian technology industry. Simplifying and de-risking IT decision-making is critical to channel partners’ success.

Key findings from the Canadian Channel in Transition study include:

  • The revenue percentage from resale products and services for the typical channel partner in Canada has remained the same post-COVID (47%) as it was pre-COVID (48%). In contrast, recurring revenues have become a bigger slice of revenue – growing from 29% in 2019 to 42% in 2022 for the typical channel partner.
  • In 2022, the typical channel partner derived 35% of revenue from digital technologies – up from 18% pre-COVID in 2019.
  • MSPs, SIs, and ISVs generate more revenue from large/enterprise (LE) companies, while resellers have a more balanced revenue mix across SMBs and LEs.
  • Over 85% of partnerships exist in “undeclared” or under-the-surface areas (see Figure 1), making it difficult for a technology supplier to find the appropriate partner. To alleviate this problem, IDC’s CPE database offers insight into partnerships that are not readily apparent or disclosed, going beyond the top-tier partnerships to identify what is under the surface.
  • Canadian channel partners are now 150% more involved in providing digital transformation solutions than before the COVID-19 pandemic. 
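The idea of surfacing undeclared partnerships can be illustrated with a small graph sketch. This is a hypothetical toy example, not IDC's CPE schema; the vendor and partner names and the `declared` flag are invented for illustration.

```python
# Toy model of a vendor-partner ecosystem as a graph of edges.
# A "declared" edge is a visible, top-tier partnership; undeclared
# edges are the under-the-surface relationships described above.
partnerships = [
    # (vendor, partner, declared)
    ("VendorA", "Partner1", True),
    ("VendorA", "Partner2", False),
    ("VendorA", "Partner3", False),
    ("VendorB", "Partner2", True),
    ("VendorB", "Partner4", False),
]

def undeclared_partners(vendor: str) -> list[str]:
    """Return partners of `vendor` that are not publicly declared."""
    return [p for v, p, declared in partnerships
            if v == vendor and not declared]

print(undeclared_partners("VendorA"))  # ['Partner2', 'Partner3']
```

At CPE scale (250,000+ partners, 1,000,000+ relationships), the same query shape, run over a real graph store, is what lets a supplier see past its declared top tier.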

FIGURE 1: IDC’s CPE Value — Finding Visible and Nonvisible Partners

As the Canadian economy emerges from the pandemic, the health of the channel becomes a barometer for the health of the technology industry. By embracing digital transformation and forming strategic alliances, channel partners can play a pivotal role in driving growth, innovation, and disruption across various sectors.

The Canadian Channel Study in Transition 2022 findings underline the need for businesses to recognize the value of channel partnerships and invest in fostering collaborative relationships to thrive in the rapidly changing Canadian tech landscape. 

Ready to unlock the power of strategic channel partnerships? Gain exclusive access to the Canadian Channel Study in Transition or schedule a call with Jason Bremner to learn more about this study.

Jason Bremner - Research Vice President, Industry and Business Solutions - IDC

Jason Bremner is Research Vice President for IDC's IT Consulting and Systems Integration Strategies program, providing research insights and thought leadership on the key issues and trends affecting the IT consulting and systems integration services markets globally. His core research includes analyzing customer demand and vendor offerings for IT consulting and systems integration services, and the services ecosystems for leading application software and infrastructure solution providers.

Many cities owe their existence to the proximity of a river, but in recent times we have separated the development of a river from the built environment of the city. We have an opportunity to maximize the river asset and add value through what we have learned in the smart city arena, and by leveraging rapidly maturing technology such as digital twins, AI, edge and IoT for both the natural and built environment.

Early peoples settled by rivers because they were a source of water, food, trade, irrigation, transportation, recreation, access and egress. They were so valuable that fortifications were built to control access, and these grew into major cities. The identity of a city is linked to its river: it is difficult to imagine London without the Thames or Paris without the Seine, and the Moskva river even gave Moscow its name.

Today, new regeneration activity often starts with riverside property that soon becomes the most valuable in a city.

With the industrial revolution and increases in population, river pollution rose to the point where many inner-city rivers were dead. A number of factors are aligning to bring those rivers back to life.

Firstly, environmental awareness is rising.

Secondly, heavy industry has moved, or is moving, out of cities. Thirdly, as cities become more congested, alternative means of transport are being sought. And lastly, we now have the technology to better understand, monitor and improve a river.

London has changed dramatically in the last 30 years and in that incredibly short space of time the Thames has come back from the dead. It now has fish, turtles and dolphins. Well, the odd dolphin and turtle that probably lost their satnav but at least they did not gag and die on entering the Thames.

At a global level, it is already happening.

The Namami Gange project is a major $9 billion initiative undertaken by the Indian government to rejuvenate and clean the Ganga River, one of the most sacred and culturally significant rivers in India. The project was launched in 2015 as a comprehensive approach to restore and protect the Ganga River and its tributaries. The Namami Gange project is a multi-disciplinary effort that involves collaboration between various government ministries, departments, and agencies at the central and state levels. Its long-term vision is to ensure the ecological and cultural integrity of the Ganga River, benefitting millions of people who rely on the river for their livelihoods and spiritual practices.

When we looked at what they were doing, it was a linear version of what we are trying to do in smart cities: instrumenting the river to monitor, measure, predict and effect change across 2,500 kilometres of the country.

The Ganges programme is still ongoing and can be seen as the ‘Everest’ of current Smart River Projects. An earlier project, the Cheonggyecheon River project in South Korea, was a significant urban renewal and restoration project that aimed to revive and transform the Cheonggyecheon River, a historic polluted waterway that had been covered by a highway.

The project, which was completed in 2005, achieved several notable outcomes around environmental restoration, improved water management and enhanced urban aesthetics, increased connectivity and economic growth.

Climate change is accelerating interest in more effective water management and the adoption of the UN’s Sustainable Development Goals: “The establishment of Sustainable Development Goal 6 (SDG 6), ‘Ensure availability and sustainable management of water and sanitation for all’, confirms the importance of water and sanitation in the global political agenda. SDG 6 addresses the sustainability of water and sanitation access by focusing on the environmental aspects of freshwater ecosystems and resources – including their quality, availability and management.”

River usage can also cause international conflict. The land between the Tigris and Euphrates is where cities arguably began, and the damming and gravel mining of the river in Turkey is changing the flow of water that is critical to the city of Baghdad and the wider Iraqi economy.

The same can be said of the Nile, which flows through or along the border of eleven African countries. Many of the potential areas of conflict, such as water removal, pollution, mining and flooding, could be ameliorated by using technology to build a better understanding of a river: instrumenting it with sensors and connecting the information produced to digital twins makes the information accessible and allows for scenario planning and predictive interventions.
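The instrumentation idea can be sketched in a few lines. This is an illustrative toy, not a real digital-twin platform: the station positions, the dissolved-oxygen metric and the threshold value are all invented for the example.

```python
# Minimal sketch of river instrumentation feeding a digital-twin-style
# model: each sensor reading is checked against a water-quality
# threshold so that interventions can be planned for flagged stretches.
from dataclasses import dataclass

@dataclass
class SensorReading:
    station_km: float        # position along the river, in km
    dissolved_oxygen: float  # mg/L; low values indicate pollution

DO_THRESHOLD = 5.0  # illustrative threshold, mg/L

def flag_stations(readings: list[SensorReading]) -> list[float]:
    """Return river positions where dissolved oxygen is below threshold."""
    return [r.station_km for r in readings
            if r.dissolved_oxygen < DO_THRESHOLD]

readings = [
    SensorReading(station_km=10.0, dissolved_oxygen=7.2),
    SensorReading(station_km=42.5, dissolved_oxygen=3.8),
    SensorReading(station_km=88.0, dissolved_oxygen=4.9),
]
print(flag_stations(readings))  # [42.5, 88.0]
```

A real deployment would stream many such metrics into a twin of the whole river, enabling the scenario planning and predictive interventions described above.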

We have developed an IDC Market Perspective report that looks at policy and governance issues and goes into more detail on how technology such as digital twins, sensors, data, AI, 5G, edge, cloud and social media can be used in this arena; click here to find out more.

We will be researching the subject of River Cities, and the wider subject of Smart Rivers, in more depth, and would be keen to hear about case studies globally.

Last month my colleagues and I on the IDC data, analytics, and enterprise intelligence research team had an inquiry with a client from a large insurance company. Part of the conversation was about the unexpected increase in costs following the migration of their data warehouse to the cloud. The situation was not necessarily atypical given the many variables involved in such a migration. What surprised me was the client’s inability to answer our question about how their workloads changed on the new data warehouse from those on the old data warehouse.  

It became clear that this data management professional wasn’t informed about how the data warehouse he worked on was being used. Was it primarily for BI workloads or AI workloads or both? Was it to support the client service function or the risk management function?

This situation happens more often than it should. We often talk about data silos, but rarely about internal knowledge silos about the use of data. Perhaps we should call them “silos of apathy”. There is certainly something amiss with the data culture in organizations where data management teams don’t know how data analysts or data scientists intend to use data or when the latter are unaware how their work contributes to business decision making processes.

To highlight the need for greater understanding and collaboration among data engineers, data analysts, data scientists, and all decision makers who rely on results of data analysis, we recently published an IDC study on the four planes of the enterprise intelligence architecture.

This conceptual model starts with the hypothesis that every organization wants to increase its enterprise intelligence. In the parlance of IDC, this means that every organization wants to be better than they currently are (and/or better than their competitors) in:

  1. Synthesizing information
  2. Collectively learning
  3. Delivering insights at scale
  4. Fostering a data culture

Our hundreds of interviews with decision makers across industries and analysis of responses from thousands of survey participants across the world have identified these four capabilities as core pillars that define enterprise intelligence. These capabilities are also measurable and help differentiate organizations that are better able to leverage data, analytics, and AI to achieve their goals.

Yet, many organizations continue to address their internal demand for data-driven decision making with discrete projects optimized for KPIs that are disconnected from the goal of lifting enterprise intelligence. These organizations build large data lakehouses, invest in the best data scientists and machine learning tools, experiment with the latest generative AI, conduct data literacy training, deploy intuitive dashboards, and implement data governance policies. What they don’t do enough is connect the dots – among different technologies, different decision-making processes, and different plans, data ops and model ops initiatives.

Our research shows that few organizations have a comprehensive view that enables execution of the enterprise intelligence strategy with the corresponding architecture that can truly improve metrics that matter. This matters because the growing complexity across data, analytics, AI, and decision-making vectors has resulted in organizations having issues that were highlighted in IDC’s recent Data Valuation study, where respondents cited:

  • Data decay: 75% of decision makers say that data loses its value within days.
  • Data waste: 33% of executives say they often don’t get around to using data they receive.
  • Data disconnect: 61% of executives say data complexity has increased compared to last year.

An enterprise intelligence strategy defines a corresponding architecture that becomes a guide to greater utilization of data for productive purposes, including greater decision velocity that drives differentiation in the digital era. IDC’s Future of Enterprise Intelligence research has found that organizations with greater intelligence have 3x-4x better business outcomes than their counterparts with nascent enterprise intelligence.

The IDC enterprise intelligence architecture is a conceptual representation of attributes, technologies, and functionality that enable the organization to execute its enterprise intelligence strategy. Our work on this view of the enterprise intelligence architecture began by defining the data control plane and evolved into four planes.

IDC Enterprise Intelligence Conceptual Architecture

  • Data Plane: Organizes the realities of modern data environments into three primary categories:  distributed, diverse, and dynamic data. DBAs and data architects are the personas usually associated with the data plane.
  • Data Control Plane: Leverages intelligence about data to take control of modern data environments through governance and engineering. Data engineers, data stewards, and data ops managers are typically involved in this plane.
  • Data Analysis Plane: Helps organizations explore, explain, and envision data and insights. Data scientists and data analysts, BI developers, and business analysts work in the data analysis plane.
  • Decisioning Plane: Has capabilities that enable decision design, engineering, and orchestration. This plane is the broadest in its use by business decision makers, executives, and even automated decisioning-systems.
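The four-plane structure above can be encoded as a simple persona lookup. This is a schematic illustration of the mapping described in the bullets, not IDC's formal model; the dict structure and function are invented for the example.

```python
# Schematic mapping of the four planes of the enterprise intelligence
# architecture to the personas typically associated with each.
PLANES = {
    "Data Plane": ["DBA", "data architect"],
    "Data Control Plane": ["data engineer", "data steward",
                           "data ops manager"],
    "Data Analysis Plane": ["data scientist", "data analyst",
                            "BI developer", "business analyst"],
    "Decisioning Plane": ["business decision maker", "executive",
                          "automated decisioning system"],
}

def plane_for(persona: str) -> str:
    """Look up which plane a persona primarily works in."""
    for plane, personas in PLANES.items():
        if persona in personas:
            return plane
    raise KeyError(persona)

print(plane_for("data steward"))  # Data Control Plane
```

The point of the model is that these personas must collaborate across plane boundaries, not just optimize within their own row of the mapping.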

Attributes, technologies, and functionality of each plane are described in greater detail in Four Planes of Enterprise Intelligence Architecture: A Conceptual View into the Data Plane, Data Control Plane, Data Analysis Plane, and Decisioning Plane, where they are depicted along with common services across the planes (e.g., security, monitoring, knowledge management, etc.).

Attributes of the Four Planes of the Enterprise Intelligence Architecture

The four planes are also aligned with personas who must collaborate to achieve common goals rather than only optimize for peak performance within their plane.

Very few vendors address all the planes fully with packaged software, so one consideration in evaluating technology providers and their products is to understand how the vendor moves or evolves within and across planes. Some vendors expand their portfolios and functionality through internal R&D; others do so through acquisitions. Extra caution is warranted when a vendor ‘skips’ a plane – for example, when a vendor that has been providing technology for the data plane expands directly into the data analysis or decisioning planes. These types of moves are difficult and rarely successful.

When vendors don’t fully address a plane, customers must substitute a product from another vendor or develop their own technology – often based on an open-source project, and often focused on immediate needs rather than the strategic direction of the enterprise intelligence architecture. Whether integration of multiple software components is intentional or involuntary, it creates overhead and risks due to integration and ongoing maintenance needs. To counter such risks, it’s important to understand the extent to which a vendor’s product provides support for open data and analysis standards.

Focus your organization’s enterprise intelligence strategy to be top-down from the decisioning plane to the data plane. Too many organizations do the opposite and end up with projects resulting in great data management technology solutions that are disconnected from the goal of improving overall enterprise intelligence.

Dan Vesset - GVP/GM, Global Research Operations - IDC

Dan Vesset is Group Vice President of IDC's Analytics and Information Management market research and advisory practice, where he leads a group of analysts covering all aspects of structured data and unstructured content processing, integration, management, governance, analysis, and visualization. Mr. Vesset also leads IDC's global Big Data and Analytics research pillar. His research is focused on best practices in the application of business intelligence, analytics, and enterprise performance management software and processes on decision support and automation, and data monetization.

The tech industry is at a seminal moment. The combination of executive and board level interest, clearly defined outcomes, and the sheer speed of adoption makes Generative AI unlike anything we have seen before.

In this blog, we will shed light on the rapid rise of Generative AI (GenAI), its impact on tech companies, and fundamental questions related to AI technology.

The rapid adoption of Generative AI moves AI from an emerging software segment in the stack to a lynch-pin technology at the center of a platform transition.

Meredith Whalen – Chief Research Officer

GenAI – A Seminal Moment in Technology

In seven short months, GenAI has simultaneously captured the attention, imagination, and trepidation of tech and business leaders across the world.

  • Attention. Executives easily see how this technology will impact productivity levels and margins. The Brookings Institution forecasts GenAI will raise productivity and output by 18% over the next 10 years.
  • Imagination. GenAI has a wide range of applications – from horizontal use cases such as software development and marketing content creation to industry-specific use cases such as drug discovery and manufacturing design. The business benefits of the use cases are obvious, and enterprises aren’t waiting around for a business case to be developed to start experimenting. IDC’s research shows that knowledge management, marketing, and code generation are the top use cases being considered.
  • Trepidation. Executives see how this technology can rapidly disrupt their business model. The 20-year journey for the cloud to represent 50% of core IT spending and the 10-year journey to become a digital business will look colossally slow in comparison to the accelerated timeframes it will take for enterprises to implement Generative AI use cases at scale. The well-founded concerns around ethics, regulatory compliance, and governance will also need to be embedded in this new business model.  

Hiding in Plain Sight

FIGURE: A Transition Is Coming. A timeline of tech eras, from the introduction of cloud and mobile through an innovation upswing beginning around 2015, to the predicted Era of AI Everywhere, in which narrow AI, generative AI experimentation, and widening AI drive another jump in tech innovation.

How did technology with this much impact creep up on most business leaders? It didn’t. The foundational elements were being developed throughout the past decade.

  • Era of Multiplied Innovation. What IDC refers to as the Era of Multiplied Innovation was primarily fueled by the cloud, mobility, and the Internet. Low-cost semiconductors and virtualization enabled the cloud, which made computing elastic and plentiful. Mobility made computing ubiquitous. And the internet dropped the costs of distributing those computing bits to almost zero.
  • Platforms and Communities. With abundant, ubiquitous, and elastic infrastructure in place, platforms, communities, and digital ecosystems emerged. These platforms triggered a massive data consolidation process and the birth of the transformer model architecture which enabled the creation of foundational artificial intelligence models, including large language models (LLMs).
  • Era of AI Everywhere. Generative AI, which utilizes unsupervised and semi-supervised algorithms to generate content from previously created content such as text, audio, video, images, and code, is a trigger technology that will usher in a new era of computing – the Era of AI Everywhere. This new era will include the journey from narrow AI to widening AI and will completely change our relationship with data and how we extract value from both structured and unstructured data.

Generative AI triggers the dawn of this new era because it will drastically reduce the time and costs associated with developing solutions for a wide range of use cases associated with automation and intelligence. The rapid adoption of Generative AI moves AI from an emerging software segment in the stack to a lynch-pin technology at the center of a platform transition.  The market generally assumes that this type of platform transition requires a shift in hardware, similar to the move to client-server from mainframes, or to the cloud from client-server.  However, IDC believes that this time it will be different. This platform transition will focus more on data. This time it will be about how we use data as an input (to train, fine tune and infer foundational models) and as a business outcome (as part of the development of new use cases).

GenAI and Tech Industry Market Disruption

As Generative AI will impact most tech markets from semiconductors to professional services, tech suppliers are rapidly revising their product roadmaps and rethinking their business, pricing, and customer service models.

Infrastructure. Today, much of the value is being captured by semiconductor vendors, most notably NVIDIA, as running the training and inference workloads for the foundation models demands significant GPUs. Semiconductor providers need to have chips specifically designed for AI workloads, which is creating an opportunity for new challengers. Training AI models will also drive storage and networking investments, putting public and hybrid cloud providers in a solid position to capture share since dedicated on-prem training of foundation models is expensive.

Software. In the medium-term, well-entrenched platform and application vendors stand to benefit if they can pivot their offerings and business models fast enough. They must decide which Generative AI use cases can support direct monetization, and which will be important to implement from a defensive point of view. For example, generative AI could transform the way we interact with enterprise software: it is potentially the biggest shift in UX design since point-and-click, and the space is poised for disruption by GenAI-native application startups.

Since many of the costs associated with managing Generative AI models for scale, security, and privacy look likely to fall on the shoulders of software providers, they are evaluating the following key decisions to protect their margins:

  • Should they train their own foundation models or partner with model providers?
  • What is the new pricing model to support Generative AI capabilities?
  • Will SLAs need to include grounding for some use cases? And if so, should levels of support be added to deal with context and data drift?
  • Will getting access to customer data to train models be a part of a new set of licensing terms and conditions?
  • Do they need to provide indemnification on AI-generated assets?

Services. While service firms are busy helping their clients identify GenAI use cases, they are simultaneously investigating how GenAI will impact the demand for their services over the long term and how their delivery models around software development, accounting, and legal services will be automated.  Increasingly, services firms are bringing their own AI software platforms to engagements which is blurring the lines between software and services.

Security and Trust. Due to its ability to generate fake code, data, and images closely resembling the real thing, Generative AI is likely to increase identity theft, fraud, and counterfeiting cases. The LLMs are also vulnerable and could be a source of attack and manipulation. Security vendors have a ripe opportunity to develop new solutions to address these emerging challenges.

New Markets. Of course, as with any disruptive technology, new technology markets will emerge. Start-ups are already appearing to provide tools to personalize models, provide contextualization for the model, increase the speed of training LLMs, and orchestrate the process. There are huge opportunities for software companies to meet the market where it stands; that may mean offering a full-stack translation service rather than translation software.

Despite all the unknowns facing the tech industry, what is clear is the need to quickly get your arms around the fundamental questions related to Generative AI and how it will drive your business model in the future.

If your organization is interested in partnering with IDC to better understand how Generative AI will impact the markets most critical to your success, contact us.

We also recommend you take advantage of these recent resources from our thought leaders and tech market experts:

Meredith Whalen - Chief Research Officer - IDC

As IDC's Chief Product, Research & Delivery Officer, Meredith Whalen leads the company's global product, research and data, and delivery organizations. Under her leadership, IDC delivers cutting-edge intelligence to the world's leading technology vendors, enterprises, and investors as they navigate the evolving AI economy. Meredith sets the strategic direction for IDC's global analyst community, shaping research methodologies and agendas that generate industry-leading data and actionable insights to drive high-impact business decisions. With more than 20 years at IDC, Meredith has been a catalyst for some of the company's most transformative initiatives. She founded IDC's Industry Insights and Tech Buyer business units and pioneered the industry's first comprehensive business use case taxonomy. She also led the creation of IDC's DecisionScape methodology-a strategic framework that empowers organizations to better plan, implement, and optimize their technology investments. A recognized thought leader and sought-after speaker, Meredith regularly delivers keynotes at major global technology events and advises senior executives on the trends shaping the future of business and technology. Meredith holds a B.A. with honors from Wellesley College and an MBA with honors from Babson College's F.W. Olin Graduate School of Business.

Unless you’ve been living under a rock for the past six months, you’ll have heard of generative AI – technology that enables computers to create synthetic data or digital content based on previously created data or content. The launch of ChatGPT in late 2022 lit a fire under this emerging space and seemingly overnight, hundreds of millions of people became inspired by the results of work that had already been going on for years within academic and commercial technology vendor research departments.

Earlier in June we spent two days touring around investment banks and hedge funds in London to talk to investors about generative AI and answer their questions.

 

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

 

We had many great, in-depth discussions. Here are the questions that came up most frequently.

  1. Where is the Value in Generative AI in the Short, Medium, and Long Term?

Today, most of the value is being captured by hardware vendors – most notably NVIDIA, which has seen its share price take off following a sharp upswing in its predicted revenues. As the market leading provider of GPUs with a strong enabling software story and emerging as-a-service play, too, NVIDIA is very well positioned to capitalise on the generative AI boom.

Of course, NVIDIA isn’t the only vendor that potentially stands to benefit; AMD and other semiconductor vendors (including start-ups like Graphcore, Cerebras & Moore Threads) are emerging as challengers, and generative AI platforms will drive storage and networking infrastructure investments too.

In the short to medium term, hyperscale public cloud providers can also expect to benefit significantly. With its early move investing in OpenAI and accelerated investments in generative AI across its software portfolio, Microsoft is in a particularly strong position; but AWS, Google, and Oracle are all also making significant moves in this space.

In the medium-term platform and application vendors also stand to benefit, although the value equation for them is less clear cut. There are significant question marks over which generative AI use cases can support direct monetization, and which will be important to implement from a defensive point of view. Many of the costs associated with managing generative AI models for scale, security, privacy and trust will also fall on their shoulders.

  2. What Will Have to Be True to Make GenAI a Truly Broadly Adopted Technology?

Right now, we’re still in “year zero” for generative AI in a commercial context. There is still a lot of confusion around the technology and its applicability in practical real world use cases.

What is already clear, though, is that publicly shared foundation models delivered as a service (such as those hosted by OpenAI) will only be suitable for a subset of enterprise use cases. For many others, enterprises will use fine-tuned, specialised, domain-specific models made available directly to them on a private (or controlled) basis.

The current state-of-the-art in generative AI yields systems that are prone to accuracy problems, difficult to control and predict, and expensive to run. All of these issues need to be worked on.

  3. What Are the Implications for the Software Landscape?

Every software vendor that IDC speaks to is updating or recreating its product roadmap to incorporate a generative AI strategy. Obviously, this will play out differently across infrastructure, platforms, and applications; however, certain common questions are being asked:

  • Should we develop our own large language models, or should we partner with model providers like OpenAI, Anthropic, Cohere, and AI21 and tune their models for our software capabilities?
  • How should we price our new generative AI features?
  • Should we include access to customer data for model training as part of a new set of licensing terms and conditions? If so, what do we offer in return (if anything)?
  • Do we need to evolve our support models to include service level agreements (SLAs) on accuracy for certain use cases being delivered?

Across all these questions, what is clear is that margin protection will be a major issue for software vendors over time – especially those with questionable pricing power. In addition, there will be increased requirements for additional levels of support to deal with model, context, and data drift. For the application players, forms-based computing as a basis for applications is increasingly likely to disappear over time, and certain markets – for example, salesforce automation and human capital management – could be redrawn in the medium term.

As part of these changes, what is becoming clear is that the application vendors that are cloud laggards will be AI laggards, and that platforms will continue to dominate the software landscape.

More importantly, incorporating trusted and responsible AI principles into both product development and customer engagement will move from being a differentiator in the short term to table stakes in the medium term.

  4. What Are the Implications for Developers?

There’s been a significant amount of excitement about the ability of generative AI services (such as GitHub CoPilot, Replit Ghostwriter and Warp AI) to generate code, documentation, test scripts, and more.

Today’s state-of-the-art models are not going to put developers out of work. Rather, for some specific types of development work, and for some particular types of software asset being created, generative AI services are very likely to help developers accelerate their efforts to deliver working software, acting side-by-side with human developers in a “CoPilot” arrangement.

But it’s important to keep things in perspective: when we zoom out to consider the broader software delivery lifecycle, pro-innovation developers happy to experiment with new tools tend to bump into deployment, operations and support professionals who are much more risk averse.

  5. What Are the Implications for Services Providers?

Lastly, many of the investment teams we spoke to were very interested in discussing how professional services (particularly IT services) firms might be impacted by generative AI. Will it bring them major new opportunities? Or will its ability to drive automation of knowledge work mean that it forces providers to cannibalise their own businesses?

Our early research shows that more than 65% of early adopters of generative AI capabilities agree or strongly agree that their need for external services providers will be reduced in the future.

The potential impact of generative AI on project delivery is, in some ways, analogous to the potential impact of low- and no-code development tools; if providers can embrace these tools effectively and also deliver trusted solutions to clients, they may find fewer hours are required to deliver projects – but outcomes will be improved for everyone.


Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures


The arrival of Generative AI technologies has created what we believe to be a seminal moment for the industry: it will be so impactful that it will influence everything that comes after it. However, we believe it is just the starting point. We think that Generative AI will trigger a transition to AI Everywhere – moving us from the use of narrow AI for specific use cases to the broad application of AI across a range of use cases simultaneously.

This means that it will impact every element of the technology stack, and also drive a rethink of all horizontal and vertical use cases. However, given the questions around risk and governance, it will also require every organization to develop and incorporate an AI ethics & governance framework to deal with the risks mentioned earlier.

The investors that we spoke to in London agreed that the tech industry needs to take a balanced approach to commercializing the opportunity, while also ensuring that policies and regulations continue to protect consumers, enterprises, and society as a whole.

Neil Ward-Dutton - VP AI, Automation, Data & Analytics Europe - IDC

Neil Ward-Dutton is vice president, AI, Automation, Data & Analytics at IDC Europe. In this role he guides IDC’s research agendas, and helps enterprise and technology vendor clients alike make sense of the opportunities and challenges across these very fast-moving and complicated technology markets. In a 28-year career as a technology industry analyst, Neil has researched a wide range of enterprise software technologies, authored hundreds of reports and regularly appeared on TV and in print media.

IDC’s Future Consumer team just released the latest version of the Consumer Market Model (CMM). The CMM is a unique dataset that provides insight into the size of the consumer market and opportunities in individual market segments.

The data set quantifies the future consumer through the following data points: socioeconomic profile, internet users, home internet access, devices used to access the internet, hours spent online, internet users' engagement with dozens of digital services and experiences, internet buyers of those same services and experiences, and B2C ecommerce spending.

This research also quantifies consumer engagement with dozens of online activities and ecommerce categories. For online activities, engagement is measured across total users, those users that spend, and aggregate spend for each activity across 51 countries with 7 years of historic data and 5-year forecasts.

The world population is now roughly 8 billion, with nearly 5.5 billion being between the ages of 15 and 64. There are over 2.3 billion households. Each individual and household is a potential customer of consumer goods and services.

With nearly 5.5 billion internet users worldwide, the overall addressable market for online, or digital, consumer goods and services is massive. Moreover, the addressable market continues to grow with internet users forecast to reach 6.2 billion in 2027 as roughly 800 million net new users come online through the 2022-2027 forecast period.

Growth in Mature and Emerging Market Opportunities

Consumer engagement is growing across mature and emerging digital experiences. Mature user experiences such as video streaming (4.7% CAGR), music streaming (8.0% CAGR), e-books (8.5% CAGR), gaming (6.7% CAGR), podcasts (5.8% CAGR), and cloud backup (10.8% CAGR) will together add hundreds of millions of net new users to the worldwide market during the 2022-2027 forecast period.
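To make the CAGR figures concrete, a compound annual growth rate simply compounds a base-year value forward each year. The sketch below shows only the arithmetic; the 1-billion-user base is a made-up figure for illustration, not an IDC data point.

```python
# Illustrative only: how a compound annual growth rate (CAGR) compounds a
# base-year value forward over a forecast period.
def cagr_growth(base, rate, years):
    """Project a value forward at a compound annual growth rate."""
    return base * (1 + rate) ** years

# A hypothetical segment with 1 billion users in 2022 growing at the 8.0%
# CAGR cited for music streaming would reach about 1.47 billion by 2027.
users_2027 = cagr_growth(1_000_000_000, 0.08, 2027 - 2022)
print(round(users_2027 / 1e9, 2))  # ~1.47
```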

Tremendous growth is forecast for emerging consumer market opportunities as well. Telehealth (12.0% CAGR), online fitness (6.3% CAGR), smart home services (7.5% CAGR), and micromobility (9.3% CAGR) are all forecast to post strong growth in worldwide users. Not all of these users will be paying users, but this speaks to the need for strategies and technologies to support content, service, and experience monetization, inclusive of paid a la carte, paid subscription, ad-supported, and hybrid models.

Beyond the continued growth of consumer market segments and monetization models, other key trends to watch include:

  • Content creation. Consumers acting as content creators, and consumer engagement with independently created content, are key transformative digital experiences. This represents a risk not just to legacy news platforms such as digital newspapers but also to high-engagement digital services such as streaming video subscriptions. As individuals continue to engage as creators and consumers of content, overall time spent on social and content-sharing platforms will grow, and share of advertising dollars will shift from other services.
  • Impact of artificial intelligence (AI). AI’s potential impact on the future world can hardly be overstated at this point. Its impact on the consumer market will be felt across everything from optimized customer segmentation and recommendations to content creation. AI-assisted creator solutions will surely come into play as will the continued growth of content created entirely by AI.
  • Generational shifts. IDC’s consumer team keeps an eye on how younger generations engage with technology in ways different from older generations. The behaviors of Gen Z and younger Millennials appear likely to be transformative and drivers of new opportunity growth and legacy behavior decline.
  • Economic uncertainty. While the outlook for the economy remains uncertain, this does not necessarily mean the growth of digital consumer service engagement will decline. Some may prove to be relatively recession-proof just as legacy pay TV generally was. Shifts in consumer spending to home health monitoring, online fitness, less expensive banking and financial services are among the key areas to watch.

Next Steps

Opportunities in the consumer market range from B2C to B2B2C. There are, of course, opportunities for companies that sell directly to consumers, whether content, applications, experiences, goods, or services. While the consumer market for digital applications and experiences can sometimes appear to be dominated by large companies, scores of smaller ones actively drive innovation and achieve success.

IDC is beginning to map out the extent to which these large companies offer products and services in the various segments of the consumer market to show their relative strength and their weaknesses or gaps. Opportunities also exist for vendors to provide the enabling technologies, in a B2B2C context, that support consumer services. For these companies, understanding emerging trends in the consumer market is critical to staying ahead of demand and creating innovative solutions that enable B2C companies.

For example, winning opportunities associated with the growth of consumer content creation range from providing tools (devices, software, and services) used by creators to enabling content distribution and monetization.

IDC can help you identify opportunities in the consumer market. The CMM quantifies consumer engagement and spending across dozens of opportunities, and engagement with the analyst team can help you target these opportunities through custom segmentations, competitive analysis, and consumer needs assessments.

Gregory Ireland - Sr. Director, Research - IDC

Greg Ireland is a Senior Director for the Consumer Markets programs at IDC. In this role, he manages IDC's Consumer Market Trends, Consumer Market Model, and GenAI for Content Creators and Consumers research programs. He focuses on consumer adoption of and engagement with digital technologies, services, and applications that transform consumer experiences, business models, and market opportunities. Greg leads IDC's coverage of consumer GenAI, and he also has expertise in and provides in-depth analysis on the ways in which digital video content is distributed, consumed and monetized across traditional pay TV, over-the-top (OTT), and social media services and platforms.

Exploring the Weaknesses and Strengths of an Innovative Technology

As IT healthcare analysts, we are biased towards excitement about generative AI, but we are also cautious about integrating it into the business at all costs, especially in healthcare organisations. It's impossible not to be impressed, excited, and terrified when you're shown the latest technology.

Researchers use it to investigate genes and DNA, identifying patterns and making predictions about disease progression in nanoseconds rather than the years such work would otherwise take. A first generation of generative AI is already being considered to facilitate and automate many clinical processes; one effective example is the personalisation of care plans.

For example, generative AI algorithms can be used to refine and further personalise engagement with patients directing them to the right resources across multiple clinical systems, improving their experience and optimising their pathways.

Nevertheless, what is still missing is an understanding of whether, when, and how healthcare organisations really need generative AI; once that decision is made, they need to define how to govern the technology and its risks.


Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures


The Potential Risks of Generative AI in the Healthcare Industry — Regulations will Be Needed

Governments, public authorities, industry experts, and academia should have deep discussions to develop policy frameworks that both regulate potential harms and unlock benefits. They should engage in a collective debate and forge a collective path forward.

As we have already seen with AI technologies, generative AI without the right rules and protections is going to get seriously out of hand, and quickly. For the healthcare market, this warning resonates all the more, for several reasons:

  • First, regulation plays a key role when generative AI is touching sensitive medical data and its intersection with the benefit for the healthcare community and us all. A simple example would be the use of personal medical data to conduct drug discovery and clinical trials.

Is it “right” to share our personal healthcare data with healthcare professionals and scientists to enable innovative care treatments and drug discovery for the entire population? The issue of protecting sensitive patient data from being disclosed without the patient's consent was already raised with the adoption of AI-based applications; in the case of generative AI, it is even more difficult to manage.

For instance, patients' consent can't easily be exercised retroactively through an unlearning process: removing selected data points from a trained model might affect the performance of the model itself.

  • Second, the risks of abuse are extensive because the accuracy of the responses from these generative AI tools depends largely on the data used to train them. Without a real, human understanding of the healthcare topic under analysis, these models create and predict what is statistically likely or looks good, but not necessarily what is true.

This raises reasonable concerns about their use in clinical practice, which needs immediate regulation.

  • Third, the IT infrastructure underpinning generative AI requires huge investments from healthcare organisations. To perform efficiently and effectively, these large language models need continuous training on real-world health data. But this requires major investment in clusters of compute, storage, networking, and systems infrastructure software.

Furthermore, resources are needed to manage, optimise, scale, and secure the entire infrastructure and associated applications to prevent privacy breaches and ensure business continuity.


Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures


The Potential Benefits for the Healthcare Industry Are Significant

Despite the concerns surrounding generative AI, its potential benefits for the healthcare industry cannot be overlooked. By harnessing this technology, the healthcare sector can:

  • Improve workforce experience:
    • Streamlining clinical documentation: generating patient histories, referrals, and suggested order entries.
    • Helping to explain to patients their medical conditions in simpler terms and in an empathetic way.
    • Analysing patient data, identifying patterns, making predictions about disease progression and treatment response, and suggesting treatment plans.
  • Improve quality care:
    • Improving patient experience by answering basic questions, explaining medical terms, scheduling appointments, directing them to appropriate resources.
    • Helping to collect more accurate health data from different sources (wearables, conversations, EHR) to support personalised health recommendations.
    • Enriching digital therapeutics solutions capabilities, expanding the capabilities of remote care and treatment.


Generative AI holds immense promise for healthcare, but we must strike the right balance between innovation and safeguarding patient interests. Collaborative efforts involving governments, providers, industry experts, and academia, are crucial to develop policy frameworks that address concerns, ensure data privacy, validate accuracy, and optimise the integration of Generative AI in healthcare.

Are you more worried or more excited about generative AI? Please share your thoughts with us, and in the meantime, we invite you to read our latest research on the topic.


If you are interested in knowing more about IDC Health Insights’ upcoming research, please contact Silvia Piai or Adriana Allocato.

It’s no secret…generative AI offers immense potential in a multitude of ways in the tech world. And we’ve only begun to scratch the surface when it comes to digital commerce.

The internet has been full of buzz about the newest high-profile AI-based tool on the block, ChatGPT.  If you believe the hype, it’s the latest technology publicly poised to disrupt content marketing, customer service, creative jobs, digital commerce, and even skilled labor jobs.

However, ChatGPT was not the first product, nor is it the only product, with the potential to disrupt the paid search and knowledge work industries. While ChatGPT may dominate the airwaves, assistive authoring and search technology has been around since 2020.

The future of ecommerce customer services is here but is not a replacement for human operators.

Heather Hershey – Research Director, Worldwide Digital Commerce

GPT-3, or Generative Pretrained Transformer 3 from OpenAI, is the backbone for Jasper.ai and other intelligent cloud-based content writing applications. GPT-3 was first publicly released by OpenAI on June 11, 2020. Since then, it has been widely adopted by chatbot developers, content writers, and machine learning researchers for a variety of natural language processing (NLP) tasks, such as summarization and translation.

GPT-3 is based on the concept of “generative pretraining”, which involves predicting the next token in a context of up to 2,048 tokens. This means that it can learn from a huge amount of data and produce results at scale, making it a powerful tool for businesses that rely on content marketing and sales to drive their digital commerce success.
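To illustrate the next-token prediction at the heart of “generative pretraining”, here is a deliberately tiny sketch. The vocabulary, scores, and greedy decoding loop are all invented for this example; a real model like GPT-3 learns its weights from training data and conditions on a context of up to 2,048 tokens, not just the previous word.

```python
import math

# Toy illustration of autoregressive next-token prediction: score every
# vocabulary token given the context, emit the most likely one, and repeat.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]
LOGITS = {  # previous token -> unnormalised score for each candidate next token
    "the": [0.0, 2.0, 0.1, 0.1, 1.5, 0.1],
    "cat": [0.1, 0.0, 2.0, 0.1, 0.1, 0.1],
    "sat": [0.1, 0.1, 0.0, 2.0, 0.1, 0.1],
    "on":  [2.0, 0.1, 0.1, 0.0, 0.1, 0.1],
    "mat": [0.1, 0.1, 0.1, 0.1, 0.0, 2.0],
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        if tokens[-1] not in LOGITS:  # no continuation defined for this token
            break
        probs = softmax(LOGITS[tokens[-1]])
        tokens.append(VOCAB[probs.index(max(probs))])  # greedy decoding
    return tokens

print(" ".join(generate(["the"], 4)))  # "the cat sat on the"
```

The “elevated autocomplete” characterisation above is visible here: the model only ever extends the sequence with whatever is statistically most likely given what came before.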

However, GPT-3 still struggles with long-form content and is derivative of content already published on the public web. With time and larger language models, this problem will likely be ameliorated, but right now it is still, in essence, an elevated form of autocomplete.

When it comes to digital commerce and customer experience (CX), the big question is, “Can ChatGPT provide customer service, write blog posts, and personalize ecommerce shopping for customers?”

In IDC’s 2022 AI Path Survey, 224 respondents who use AI applications for digital commerce indicated that predictive analytics (67.4%), product recommendations (59.4%), and commerce website personalization (58.5%) were poised to bring the most value to their commerce operations. This indicates the perceived strength of these emerging customer experience (CX) technologies.

ChatGPT and other generative AI technologies offer a unique ecommerce experience that may allow customers to benefit from the convenience of personalized shopping and fully automated 24 x 7 customer service on-demand. By leveraging browsing history, purchase records, and other metrics related to a user’s behavior, these chatbots may soon become an integral component of personalization engines for ecommerce sites, enabling more precise product suggestions as well as improving customer service through automated support channels.
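The personalization idea above can be sketched very simply. This is an illustrative stand-in only: the catalog, tags, and overlap scoring are invented for the example, while real personalization engines use learned embeddings and far richer behavioral signals.

```python
# Hypothetical sketch: rank catalog items by overlap between a shopper's
# browsing/purchase-history tags and each product's tags.
CATALOG = {
    "trail-shoes": {"outdoor", "running", "footwear"},
    "yoga-mat":    {"fitness", "indoor"},
    "headlamp":    {"outdoor", "camping"},
}

def recommend(history_tags, top_n=2):
    # Score each product by how many tags it shares with the shopper's history.
    scores = {name: len(tags & history_tags) for name, tags in CATALOG.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"outdoor", "running"}))  # ['trail-shoes', 'headlamp']
```

Even in this toy form, the shape of the problem is visible: the quality of the suggestions depends entirely on how well the shopper's behavior has been captured as signals.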

While these technologies are designed to disrupt the customer service industry, they are still in beta form, cost-prohibitive for smaller companies, and liable to produce incorrect or biased information. It's only a matter of time before they evolve far enough to improve the overall customer experience of online shopping by streamlining the search process and increasing the chance of successful conversions with sales-optimized conversational commerce. The future of ecommerce customer service is here, but it is not a replacement for human operators.

If your company wants to leverage generative AI for personalized shopping experiences, proceed with a critical eye on longtail keyword strings shoppers use when searching for specific products. AI needs to overcome this known technological performance constraint of complex queries to be useful as a personal shopping assistant.

  • Close hits may not be good enough for modern shoppers to act upon the recommendation.
  • Customers demand satisfaction when shopping for specific products online and often use long strings of keywords to formulate complex queries when searching for what they desire.
  • Technically speaking, overcoming the known limits of longtail search is much easier said than done. Most keyword strings are only about three to five words in length. Strings that exceed these limits can frustrate customers and push them into the weeds as they search.
  • Customers are not trained on how to “talk to the bot” to get the best search results when shopping online. Therefore, the bots need to learn how to recognize various competency levels and meet the customer where they are the most comfortable. This will also ensure optimal user accessibility when these technologies are deployed for digital commerce.
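One common way to cope with the longtail problem described above is to reduce a long query to its salient terms before matching it against a product index. The sketch below is a hypothetical stand-in: production systems use trained query-understanding models, not a hand-made stopword list.

```python
import re

# Hypothetical sketch: strip filler words from a longtail query so that
# only product-relevant terms are matched against the index. The stopword
# list is invented for illustration.
STOPWORDS = {"a", "an", "the", "for", "with", "in", "me", "find", "i", "want"}

def salient_terms(query):
    words = re.findall(r"[a-z0-9']+", query.lower())
    return [w for w in words if w not in STOPWORDS]

print(salient_terms("find me a waterproof trail running shoe for wide feet"))
```

The residue after stripping is still six terms, which illustrates why longtail queries exceed the three-to-five-word strings that keyword search handles comfortably.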

Even with these limitations, the prospects for the future are quite tantalizing. GPT-3 and ChatGPT could represent powerful tools for enhancing personalized product, search, and shopping experiences. By analyzing customer data such as browsing history, purchase history, and other behavioral metrics, AI-driven technology like ChatGPT can provide more accurate recommendations and product suggestions based on an individual’s preferences and needs.

Heather Hershey - Research Director, Worldwide Digital Commerce - IDC

Heather Hershey is Research Director for IDC's Worldwide Digital Commerce practice. Ms. Hershey's core research coverage includes digital commerce applications targeting businesses of all sizes and industries (B2C, B2B, B2B2C); Product Information Management (PIM) and syndication applications; commerce personalization, search, and merchandizing applications; CPQ and order management applications; digital marketplaces; headless digital commerce; enterprise partnership/integration strategies among digital commerce, supply chain, marketing, and content management vendors; commerce experience management across channels; digital shelf trends; and AI-enabled, or intelligent, commerce.

After a year of disruptions like high inflation, war, geopolitical tension, energy shocks, and the anticipation of recession in major countries, it’s no surprise that IT leaders entered 2023 with a mission to minimize technology investments and develop plans for executing spending cuts if conditions worsened. Despite the uncertainty, their teams were running full tilt, filling open positions and “catching up” with the business.

Then along came ChatGPT. Suddenly, IT leaders find themselves planning for the coming Artificial Intelligence (AI) onslaught and asking, “Are we prepared?”

The Threat of IT Malaise

For the first few months of 2023, IDC noted that economic and IT spending outlooks of IT leaders in our monthly Future Enterprise Resiliency & Spending surveys began to improve. IT supply chains loosened, China reopened, energy shocks failed to develop, and the recession continued to be a worry for the future, not a reality of today.

In March, the Silicon Valley Bank failure, a series of banking problems, and concerns about a US debt default canceled out much of the growing economic optimism in the US and Europe, but not in Asia Pacific countries. A more troubling new concern that IDC heard from IT leaders starting in May was that the continued “waiting for recession” is starting to affect economic and IT investment assumptions for 2024, not just 2023.

It became easy to conclude that CIOs and IT leaders should be hunkering down to ride out an extended period of economic uncertainty and IT malaise, focusing on constraining new expenditures and optimizing the use of existing assets. While sustaining efforts to establish cloud economic practices is important, it’s no longer the top priority. Now is the time to start preparing for AI Everywhere.

Innovation Beyond IT Is a Rejuvenator, but Creates Disruption

Leveraging technology to drive innovation in our daily lives has always been a key expansion driver for the entire IT industry since the start of the computer and digital communications eras of the 1950s. The most significant technology driven transformations, such as the advent of the Internet/Web and the launch of the smart phone/cloud era, occurred at times when economic conditions were uncertain, and questions were being raised about the marginal utility of new IT investments.

In both cases, individual consumers and business leaders “got ahead” of IT leaders, driving long term fundamental changes in IT architectures and the role of IT organizations. Because IT leaders were unprepared, most organizations found themselves following a “Fire, Aim, Ready” pattern resulting in unnecessary duplication of work and data/application fragmentation. Those IT organizations that were prepared in advance, executing a “Ready, Aim, Fire” strategy, emerged as the leaders in the new wave.

Generative AI Is the New Trigger but Requires Preparation Now

In this time of “perceived” IT malaise, the emergence of generative AI exemplified by ChatGPT and Dall-E with all their possibilities and shortcomings, has captured the attention of individuals, educators, businesses, and governments around the world. As IDC found in conversations with CIOs and IT leaders, the assessment and use of AI is starting to dominate the planning and long-term investment agendas of businesses across many industries, triggering what IDC anticipates will be a period of extending AI Everywhere.

The semi-good news for most CIOs and CTOs is that generative AI products and services are still limited in availability and will remain relatively immature through the rest of 2023 and into early 2024. Hard decisions about committing significant treasure in a period of economic uncertainty will remain limited to all but a few organizations.

Every CIO and CTO, however, needs to start committing time and intellectual capital right now to ensure their organization is prepared, avoiding missteps and capitalizing quickly on the potential of AI across both IT and the business. The keys to ensuring your organization’s AI preparedness are assessing your level of AI awareness and determining your state of AI readiness.

AI Awareness: Take Stock and Aim for Consistency

Generative AI services like Jasper and Microsoft 365 Copilot, as well as all the buzz around foundation models, are the “bright shiny objects” right now. Everyone in the organization wants to talk about how they can transform everything from customer service to code and product design, but AI-enhanced capabilities in the areas of prediction (e.g., threat detection and digital twins) and interpretation (e.g., machine vision) are also likely to be well underway in selected parts of the organization.

Now is the time to conduct a comprehensive review of where in the organization AI initiatives of all types are underway, asking, “What do we think AI can and cannot do?” Use this effort to identify early areas where duplication threatens or collaboration beckons. It can also help you identify gaps where business leaders may be missing critical opportunities because they are distracted by one shiny object.

A key next step is to develop a series of persona-based AI awareness education activities that span from the C-Suite and business/IT leaders to front line employees and even critical customers and partners. The goal isn’t to make everyone an AI expert. It’s to ensure that your organization is consistently “AI aware” as readiness assessments start, and commitment decisions are made.

AI Readiness: Focus on Cloud Native, Hybrid Cloud, and Control

As with many innovations, the ability to quickly adopt a transformational technology is determined by the existing level of technical sophistication and IT operational maturity of the organization. For example, companies that aggressively adopted virtualization as a technology for deploying and managing workloads on their own systems were able to more effectively adopt early public cloud infrastructure solutions that leveraged similar foundational technologies.

Cloud providers will play a significant role in the early introduction of generative AI enablement services and agile DevOps teams will play an equally important role in translating AI capabilities into useful business outcomes. Cloud pioneers and pacesetters with mature cloud operations and architecture models along with well managed DevOps processes will be better prepared to leverage AI than cloud laggards.

Beyond the cloud-native maturity noted above, IDC believes that companies with mature “hybrid-by-design” cloud strategies will be well positioned to take full advantage of AI innovation across many different cloud environments as well as across many different locations, core, and network to edge.

Now is also the time to ask, “Are we ready for AI Everywhere?” The key areas for a critical AI readiness assessment center on control. How consistent/inconsistent and open/siloed are data management and data use practices and guidelines? How standardized/fragmented and trustworthy/unreliable are code creation and life cycle systems and standards? How mature are FinOps and cloud cost optimization practices?

Addressing the data questions will accelerate the need to address responsible AI governance and ethics with forethought and readiness. Mature cost and ROI tracking will be critical since “cost” will remain one of the most unpredictable elements of generative AI rollouts for the next several years.

The tech industry is thrilled by the possibilities of AI! This includes silicon designers, cloud providers, software and services clients, and even your own IT teams. The sense of anticipation and even giddiness is palpable, signaling a renewed focus on innovation as the driver of technology. Success, however, depends on your having confidence in the ability to accurately track and link near- and long-term costs with desired business benefits.

Cost/benefit readiness is the key skill required when it’s time to commit to AI, especially amid economic uncertainty and the threat of succumbing to IT malaise.

Rick Villars - Group VP, Worldwide Research - IDC

Rick is IDC's chief analyst guiding research on the future of the IT Industry. He coordinates all IDC research related to the impact of Cloud and the shift to digital business models across infrastructure, platforms, software, and services. He helps enterprises develop effective strategies for using their diverse portfolio of cloud investments and applications. He supplies early guidance on implications of critical innovations such as the shift to cloud-based control platforms for deploying/managing infrastructure, data, and code delivery as well as the emergence of AI as a critical IT workload and part of all IT products/services.

The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.

GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.

We observe three ways that industrial users can engage with GenAI:

  1. Publicly Available Tools: ChatGPT-like tools provide users with information, generated content, or code. These publicly available tools and apps provide solid value to users. From a process area point of view, the greatest benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned using them to prevent sensitive data leakage.
  2. Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. These can appear as “copilots”: AI assistants designed to support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
  3. Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Be Ready — But Careful

In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models, predictive maintenance, and the simulation of complex processes through soft sensors.

Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.

AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.

AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.

Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.

Output Challenges

Like any other methodology that serves industries, GenAI outputs must be 100% reliable. Most readers are probably familiar with the concepts of reproducibility and repeatability. As a reminder, repeatability measures the variation in results when an experiment is repeated under the same conditions, whereas reproducibility measures the variation when it is repeated under changed conditions, such as different operators, instruments, or sites. Both are means to evaluate the stability and reliability of an experiment and are key factors in the uncertainty calculations of measurements.

GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.

GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.

GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.

Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.

If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.

We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.

Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.

Avoiding AI hallucinations entirely is challenging, but several actions can limit their occurrence or minimize their impact. Ensure that your AI model is trained on a diverse and representative data set that covers a wide range of real-world examples. Preprocess and clean the training data to remove inaccuracies, outliers, and misleading information, which improves the quality and reliability of the model’s outputs. Finally, continuously evaluate and monitor the model’s outputs to identify instances of hallucination or the generation of unrealistic content.
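The preprocessing step described above can be sketched very simply: deduplicate records and drop obvious length outliers before anything reaches training. This is a hedged illustration; the thresholds are arbitrary assumptions, and real pipelines would add far richer quality checks.

```python
# Illustrative training-data cleaning pass: drop length outliers and
# exact duplicates. Thresholds are assumptions, not recommended values.
def clean_training_data(records, min_len=10, max_len=2000):
    seen = set()
    cleaned = []
    for text in records:
        text = text.strip()
        if not (min_len <= len(text) <= max_len):
            continue  # drop outliers that can distort the model
        if text in seen:
            continue  # drop exact duplicates
        seen.add(text)
        cleaned.append(text)
    return cleaned

raw = ["short", "A valid training sentence.", "A valid training sentence.", "x" * 3000]
print(clean_training_data(raw))  # keeps one valid record
```

Even this crude filter removes two common sources of unreliable outputs: duplicated examples that the model over-weights, and malformed records it should never have seen.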

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

Evolving Challenges

Because GenAI models generate new and original content without explicit programming, proving their reliability can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.

Commonly used methods include defining and applying appropriate evaluation metrics to assess the quality and reliability of generated content. Human evaluation is also useful, including subjective assessments that rate the quality and reliability of generated outputs.

For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.
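The test-set validation described above can be sketched as a small harness that scores generated outputs against reference outputs. In this illustration the `generate` function is a stub standing in for a real model call, and the string-similarity metric and 0.8 threshold are assumptions chosen for the example.

```python
# Illustrative test-set validation harness for a GenAI use case.
# `generate` is a stub; a real deployment would call the actual model.
from difflib import SequenceMatcher

test_set = {
    "Summarize order status": "Order 1042 shipped on Monday.",
}

def generate(prompt):
    # stand-in for a real GenAI model call
    return "Order 1042 shipped on Monday."

def validate(test_set, threshold=0.8):
    results = {}
    for prompt, expected in test_set.items():
        got = generate(prompt)
        score = SequenceMatcher(None, got, expected).ratio()
        results[prompt] = score >= threshold
    return results

print(validate(test_set))
```

Running such a harness on every model or prompt change turns “is it still reliable?” from a subjective question into a pass/fail report.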

Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.
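Adversarial testing can start just as modestly: feed deliberately hostile or degenerate inputs to the model and confirm it degrades gracefully rather than crashing or going silent. The sketch below is illustrative only; the `model` stub and the chosen edge cases are assumptions.

```python
# Sketch of adversarial testing: run edge-case inputs through the model
# and record whether each produced a non-empty response without raising.
# `model` is a placeholder stub for a real GenAI call.
EDGE_CASES = ["", "a" * 10_000, "DROP TABLE orders;", "🙂" * 50]

def model(prompt: str) -> str:
    # placeholder: a real model call goes here
    return "I cannot answer that." if not prompt.strip() else f"echo: {prompt[:40]}"

def adversarial_report(cases):
    report = {}
    for case in cases:
        try:
            out = model(case)
            report[case[:20] or "<empty>"] = bool(out)
        except Exception:
            report[case[:20] or "<empty>"] = False
    return report

print(adversarial_report(EDGE_CASES))
```

For long-term monitoring, the same report can be generated on a schedule against live prompts, so reliability regressions in, say, a supply chain intelligence application show up as failing cases rather than user complaints.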

The Sky is the Limit — For Now

In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.

To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.

Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.

Beware the Competition!

GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.

But beware: Do not allow the competition to outrun you in GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.

At the same time, do not underestimate the risk of intellectual property (IP) leakage: the unauthorized use, disclosure, or exposure of valuable intellectual property through the utilization of generative AI models. Embed an IP leakage prevention mechanism in your GenAI and data governance. This should include the removal or anonymization of sensitive or proprietary information from training data sets.
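In practice, the removal or anonymization step often begins with pattern-based scrubbing of obvious identifiers before text enters a GenAI pipeline. The sketch below is a minimal illustration; the patterns cover only emails and one phone format and are in no way an exhaustive IP or PII defense.

```python
# Hedged sketch of an anonymization pass: replace obvious identifiers
# with tokens before text reaches a GenAI pipeline. Patterns are
# illustrative assumptions, not a complete leakage-prevention mechanism.
import re

PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "<PHONE>"),
]

def scrub(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Contact jane.doe@example.com or 555-123-4567."))
# → "Contact <EMAIL> or <PHONE>."
```

Pattern scrubbing catches only structured identifiers; proprietary designs, formulas, and trade secrets still require human review and policy controls on what data may be used for training at all.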

As always, stay busy with what works — but keep an eye focused on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.