On May 24, AMD revealed its new Radeon RX 7600 graphics card. This is an entry-level card positioned to play the newest games at 60+ frames per second (fps) at 1080p. It supports very efficient streaming using the latest AV1 encoding technology. According to AMD, the card performs 1080p gaming 29% faster on average than the AMD Radeon RX 6600.

AMD’s latest RDNA 3 generation of cards have marked ray tracing improvements over the previous RDNA 2 versions. Our tests show that the Radeon RX 7600 can get close to the performance of the Radeon RX 6700 XT midrange card in ray tracing benchmarks such as Speedway and Port Royal. The RX 7600 achieved around 86% of the performance of the midrange card in both tests using default driver settings.

The Radeon RX 7600 is based on the AMD RDNA 3 architecture and includes revamped compute units with unified ray tracing and AI accelerators. It features AMD’s Infinity Cache technology from the second generation of cards.

The Test Platform

The AMD Ryzen 5 7600X processor, the Radeon RX 7600 graphics card, the GIGABYTE X670E Aorus Master motherboard, and a G.SKILL Trident Z5 Neo 2x16GB DDR5-6000 EXPO memory kit — which were all provided to IDC by AMD — comprised the test PC hardware components. The primary Windows 11 disk was a 1TB GIGABYTE Aorus NVMe Gen4 solid state drive.

A be quiet! Silent Loop 2 280mm water cooler was fitted for the processor, which was coupled with a be quiet! STRAIGHT POWER 11 Platinum 850W power supply. A 34” Dell Gaming S3422DWG monitor — a quad-HD 3440*1440 display with a 144Hz frame rate, FreeSync, 10-bit colors, and high dynamic range functionality — was also used.

The reviewers utilized the motherboard’s optimal default settings, set the memory profile to EXPO 6000, and made sure that smart access memory was enabled. No special tuning, optimization, or overclocking was carried out for the tests.

Synthetic Benchmarks and Productivity Performance

Blender Benchmark 3.5.0 was used to evaluate the graphics card’s rendering performance. The Radeon RX 7600 ranked in the top 29% of all benchmarks, thanks to the Heterogeneous Interface for Portability — AMD’s compute language for GPUs utilized by Blender Benchmark (as opposed to OpenCL, which does not utilize it). A far quicker result than expected was delivered. This is good news for gamers who do light personal and family photo editing or enhance pictures for social media posts.

The system’s 3DMark Time Spy score of 10,557 was better than 60% of all results, which is respectable for an entry-level gaming machine.

Gaming Performance

Various old and new video games were tested on the platform, including next-gen versions.

Shadow of the Tomb Raider

This game averaged 134fps at 1080p with the maximum graphics settings and AMD’s FidelityFX Contrast Adaptive Sharpening enabled. With ray traced shadow enabled at high settings, the game ran at an average 77fps with a low of 53fps. Increasing the quality of the ray traced shadow to extreme resulted in an average 70fps and a minimum of 43fps.

Far Cry 6

This game averaged 118fps at the 1080p high graphics quality setting, registering a minimum of 98fps. During testing, all DirectX Ray tracing (DXR) and FidelityFX Super Resolution (FSR) capabilities were activated. Increasing the graphics settings to ultra quality resulted in an average 99fps and a minimum of 85fps.

Cyberpunk 2077

At 1080p, this game averaged 37fps with a minimum of 22fps. Ultra ray tracing presets and FSR 2.1 capabilities were activated automatically. The game performed at an average 50fps and a minimum of 35fps using the medium ray tracing setting, resulting in a much smoother experience.

The Witcher 3: Wild Hunt Next-Gen

This game averaged 38fps at 1080p, with a minimum of 26fps. Ultra ray tracing presets and FSR 2.1 capabilities were activated automatically. The game functioned significantly better at the medium ray tracing setting, clocking an average 57fps and a minimum 46fps. Without ray tracing, rasterization performance averaged 104fps and registered a minimum of 76fps in extreme settings.

Frequency, Power Consumption, Temperature, and Noise

The RX 7600 operated at an average frequency of 2545MHz, consumed 160W of power, and attained an average temperature of 79C when playing The Witcher 3 in ultra ray tracing mode, with the GPU loaded to 99%. Due to their small size and low revolutions per minute, the two 90mm fans kept the card cool and noiseless.

Final Words and Conclusion

According to IDC’s monitor tracker, about two-thirds of new monitors still have a max resolution of 1080p. There is a massive installed base of such monitors. Not every customer with full HD aspirations is seeking the best and most costly gear. For example, Minecraft and Roblox are popular among youngsters, while Fortnite in performance mode is popular among teens. Such groups will be very delighted with a PC powered by the RX 7600, and their parents will not have to seek a loan to build it!

AMD faces increased competition now that Intel has entered the arena, alongside Nvidia and AMD. Difficult macroeconomic conditions — ranging from inflation to a war on the ground in Europe — are reducing consumer purchasing power. However, AMD has wisely evaluated the market conditions and taken quick and clever measures to adjust, such as reducing the proposed end-user price of the Radeon RX 7600 from an anticipated $299 to $269! AMD has also reduced the prices of its previous generation RDNA 2-based RX 6000 series cards, thereby providing gamers and customers with a wide selection of goods at various price points.

In conclusion, there is a lot to like about the AMD Radeon RX 7600. It is an affordable, sleek, and compact dual slot, dual fan graphics card that delivers impressive 1080p gaming performance at 50+fps on the highest graphical settings with FSR and ray tracing enabled.

Mohamed Hakam Hefny - Senior Program Manager - IDC

Mohamed Hefny leads market research in EMEA on professional workstation PCs and solutions. He also reports on professional computing semiconductors, processors, and accelerators (CPUs and GPUs), as well as breakthroughs and trends related to the market. In addition, Mohamed is actively involved in AI PC taxonomy and research. He participates in business development projects, contributes to consulting activities, and provides IDC customers with analysis, opinions, and advice.

Generative AI is a fascinating topic and has emerged as a powerful technology that pushes the boundaries of what computation can accomplish.

It has the potential to transform the realms of art and creativity, but also revolutionise industry processes.

There are a myriad use cases of generative AI across industries. We can see that different industries are adopting the technology to achieve specific business outcomes or address common challenges every organisation faces.

With its ability to generate content autonomously and simulate human-like outputs, generative AI has found applications in all industries. In fields as diverse as marketing, customer experience, citizen engagement, as well as industry-specific processes, such as supply chain management automation in manufacturing, for instance.

We would like to start diving into the use cases that are commonly used by several industries.

One of the first use cases to be adopted by organisations are conversational applications. They can range from virtual assistants and chatbots to language translation to personalised recommendations.

Another use case spanning across industries is in marketing applications, which can be widely adopted, depending on the sensitivity of the customer/citizen/patient data and the industry appetite for online marketing. For example, social media automation, customer support via chatbots and personalised marketing campaigns can be used to enhance the visibility of the organisation while being more efficient in their marketing investments.

A third use case cutting across industries is knowledge management applications. This use case can be seen in organisations being applied in identifying existing knowledge, knowledge summarisation, and in language translation and geographic contextualisation.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

However, industries adopt technologies based on their specific needs, goals, and customer demands. Unique processes, regulations, and market dynamics require tailored technologies, and it will be no different with generative AI.

Diverse industry requirements, resource constraints, competition, and technological maturity stages drive varying technology adoption across organisations. Now we’d like to explore how several industries are approaching generative AI and the technology adoption patterns of each industry:

Finance

In the ever-evolving landscape of the financial services industry, the emergence of generative AI technologies, led by Open AI’s ChatGPT, has garnered significant attention from CIOs.

While some express concerns regarding privacy and ethics, and others grapple with understanding the full potential, there is a growing sense of urgency driven by the fear of missing out (FOMO). Contrary to sceptics’ concerns, the industry has demonstrated a shift in focus towards augmenting the capabilities of financial services professionals, rather than seeking to replace them.

By harnessing the power of large language models, financial institutions aim to centralise knowledge, empowering agents and professionals with essential information to enhance customer experiences and optimise operational efficiency.

An excellent example of this progressive trajectory is Sedgwick, a prominent global provider of third-party claims administration services. It has successfully integrated the Open API version of ChatGPT, named “Sidekick,” into its sophisticated claims system, exemplifying Sedgwick’s commitment to elevating its claim-handling process and delivering unparalleled customer service experiences.

Another notable application gaining traction involves leveraging generative AI to enhance conversational interfaces. By revolutionising conversational capabilities, generative AI enables more human-like responses and facilitates complex interactions. Helvetia, a pioneering force in the insurance services realm, has embarked on a bold endeavour by launching a direct customer contact service utilising OpenAI’s ChatGPT.

This experimental initiative aims to provide seamless access to various financial products, showcasing the vast potential of generative AI in transforming customer interactions.

Energy (Utilities and Oil & Gas)

According to a recent IDC Survey ― Future Enterprise Resiliency & Spending Survey Wave 2, March 2023 (FERS) ―  the utilities industry globally ranks second highest in terms of investments in generative AI technologies for 2023 (40% of respondents), surpassing the global cross-industry average of 24%.

This highlights the enormous potential for innovation, the amplification of human work, and reinvention of work processes in utility companies. The automation of certain tasks and AI-assisted transformation are expected outcomes.

While the utilities industry is still in the exploratory phase of identifying fruitful use cases, generative AI holds significant promise in areas such as content generation for sales and marketing code-generation applications. To improve productivity and employee experience, conversational applications for customer service and CX improvements, and knowledge management, which is especially crucial given the challenge of an aging workforce in the utilities sector.

On the other hand, oil and gas organisations appear to be adopting a more conservative position.

The FERS survey reveals that only 18% of oil and gas companies worldwide are willing to invest in generative AI technologies in 2023.

However, 82% are actively conducting initial assessments to identify potential use cases. These assessments include evaluating the use of generative AI for multi-scenario authentic simulations and predictive capabilities in asset operations, generating subsurface images using fewer seismic data scans in the upstream part of the business, and generating human-like text to provide responses to domain-specific questions for business leaders.

Manufacturing

The early months of 2023 witnessed a surge of interest in generative AI and a renewed focus on AI in general.

While manufacturing organisations have not been early adopters of generative AI, they are gradually recognising the technology’s potential for leveraging vast research resources to create diverse content, including text, video, images, and virtual environments.

Among the respondents to the IDC 2023 Manufacturing Survey, 27% are already investing in generative AI technologies, and an additional 38% are engaged in basic exploration. Knowledge marketing and marketing applications are areas where organisations see short-term benefits, likely due to the availability of user-friendly technology that is easily accessible, such as ChatGPT.

Moreover, manufacturers believe that generative AI can have a significant medium-term impact on various aspects of their operations, such as production planning, quality control, AI-driven maintenance, code generation for programmable logic controllers, product development, design (including modelling, testing, and product life-cycle management), and sales (including client data analysis and content management).

However, there are ongoing challenges in maximising the value of AI/ML in manufacturing organisations. Many organisations still lack the necessary tools to address issues related to data availability and quality. IDC observes that internal capabilities and training in leveraging AI-powered technology and analytical tools are often lacking.

Read blog: Gen AI in an Industrial Environment — Recommendations for Early Adopters

Government

Generative AI tools such as ChatGPT, Bard, Dall-E 2, Vall-E, Stable Diffusion, and others have rapidly transitioned from arcane terms known only to AI experts to subjects of popular discussion in newspapers and TV talk shows within a matter of months.

OpenAI’s launch of ChatGPT in late 2022 sparked a wave of curiosity and speculation among the public, private companies, and public administrations. Initially, policymakers exercised caution, but senior civil servants quickly developed an interest in generative AI. Consequently, some jurisdictions have begun issuing guidelines.

The United Arab Emirates government, for example, has released guidelines encouraging the use of generative AI and providing ideas for potential use cases.

The Portuguese government has announced the “Practical Guide to Access to Justice,” which utilises the ChatGPT platform to help citizens obtain legal information in layman’s terms.

In another intriguing instance, a member of the Italian parliament used generative AI to write a speech, surprising fellow senators by disclosing its computer-generated nature at the end of the debate.

In the long term, generative AI has the potential to improve citizen experiences, amplify the competencies and capacity of civil servants, who often face overwhelming amounts of documents and cases, and aid administrations struggling to hire new talent.

At present, however, no major government entities in Europe, the Middle East, and Africa (EMEA) have implemented generative AI at scale. Nevertheless, numerous ideas, pilots, and prototypes are under development to understand the potential benefits in terms of citizen and employee experiences, increased operational efficiency, enhanced trust and compliance, environmental sustainability, and the governance and technical challenges that need to be addressed.

Healthcare

European healthcare organisations are increasingly recognising the benefits of generative AI in empowering and engaging patients and clinicians.

The most promising area of investment lies in knowledge management applications that enable a more efficient and effective flow of information among healthcare professionals, ultimately leading to better patient care.

For instance, generative AI can be employed to create or integrate more accurate patient histories and identify disease patterns, significantly enhancing the ability to make accurate diagnoses and develop effective treatment plans.

However, effective implementation of generative AI in healthcare faces limitations related to both data and models. Generative AI models require extensive training on large volumes of high-quality data.

Healthcare data quality varies widely, and its availability can be restricted due to privacy and ethical concerns. Additionally, generative AI models have limitations in terms of reproducibility due to their probabilistic nature and complex architecture. This undermines the reliability and trustworthiness of the models, especially when used to support clinical decision-making.

Read blog: Generative AI in Healthcare: Benefits and Risks

Retail

The retail industry is moving faster than the human pace can keep up with. Evolving customer expectations and needs, fierce competition, and the quest for enhanced process efficiency ― among others ― are all factors driving retailers to rush into experimenting with emerging technologies.

In fact, in 2022 newspapers were crowded with titles of bold retailers and brands landing in the metaverse while, in 2023, the focus has already shifted to generative AI. However, while the metaverse initiatives of retailers have already cooled down in favour of new forms of (spatial) computing, generative AI technologies (such as ChatGPT and Dall-E) and solutions powered by LLMs or text-to-image models could have a major transformational business impact across the retail value chain.

IDC data shows that 40% of retailers are in the initial exploration phase of the technology, while 21% are actively investing in the implementation of generative AI tools for the year ahead. We can already see some relevant applications in the areas of product development, merchandising, supply chain, marketing, and customer experience.

Organisations such as Coca-Cola, Mattel, and Carrefour are piloting generative AI applications ― even though still on a limited scale and predominantly with a test-and-learn approach.

According to IDC findings, 50% of retailers are expecting to prioritise generative AI uses cases for marketing in the next 18 months. In particular, generative AI could have a tremendous impact on the automation and personalisation of resource-intensive and time-consuming ecommerce processes such as product page descriptions, images/videos, and marketing copies.

For example, the Chinese ecommerce giant JD.com announced the imminent release of its own retail-specific ChatGPT solution which aims to improve online retailers’ rankings of product listings on SERP, generate product descriptions that are tailored to a shopper’s preferences, and optimise online product images and video generation processes.

Overall, as shown by the IDC data cited above, the most promising and imminent area of investment for generative AI in the retail sector is marketing and, more specifically, digital marketing.

Even if, in the near future, the technology could raise important questions in terms of proprietary data sharing and customer data privacy, without a doubt the use of generative AI for text and image generation could greatly enhance and streamline the ecommerce shopping experience, leading to higher profitability of retailers’ online channels.

Architecture, Engineering, and Construction

The built environment sector has long been considered behind the curve when it comes to productivity and the adoption of digital technology. But emerging technologies, including generative AI, are accelerating innovation across the sector and aligning it with other industries.

According to an IDC Survey (Future Enterprise Resiliency & Spending Survey Wave 2, IDC, March 2023), 25% of resource and construction companies are investing in generative AI technologies this year, just above the industry average.

The potential of generative spans across the building life cycle. When planning and designing a building, drawings and BIM models typically take weeks or months to produce. Generative AI has the potential to generate building designs in an afternoon based on pre-defined criteria such as building codes, site conditions, and sustainability standards.

The construction process is also ripe for innovation: studies find that the need to correct errors during projects accounts for between 5% and 12% of costs. Here, generative AI can create optimised construction schedules and augment supply chain and material planning.

The opportunities extend to a building’s operation through to its demolition and recycling.

As with all industries, these opportunities must be balanced with potential risks. For AEC companies, there are specific physical safety risks associated with using generative AI for the automation of building designs and compliance checks. The correct safeguards and checks will need to be put in place as these technologies are piloted and rolled out.

Generative AI models also require extensive training on large high-quality data sets: the industry’s legacy of digital immaturity and data fragmentation will affect, but not stall, the rate of innovation.

Moving Forward

In conclusion, as the field of generative AI continues to evolve rapidly, it is paramount to cultivate strategies that enable us to navigate through the noise and discern between hype and reality.

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

By gaining a clear understanding of the true potential and limitations of this technology, we can effectively harness its power. The wide-ranging applications of generative AI across various industries have the potential to reshape the way organisations manage their businesses and increase efficiency and productivity.

However, amid the excitement and buzz, it is vital to approach the subject with a discerning eye. Adopting an approach based on use cases, which reveals tangible evidence based on real-world results, becomes an imperative for tech vendors and end-user organisations alike.

Drawing upon practical applications and real-world experiences provides invaluable context, allowing us to differentiate between exaggerated claims and genuine achievements. By prioritising the examination of use cases and seeking concrete results, we deepen our understanding of the true potential and limitations of generative AI.

Another angle of the discerning strategy when it comes to generative AI is to rely on subject experts and look for insights that are connected to the industry in question, as experienced professionals in the field are the best source of reliable and up-to-date information. Moreover, this article was written by several humans, embedded by human intelligence with the help of computers, not generative AI.

Contributing analysts: Adriana Allocato, Davide Palanza, Gaia Gallotti, Jan Burian, Louisa Barker, Massimiliano Claps and Sofia Poggi

If you want to know more about generative AI visit our website, or for more in-depth industry insight click here.

Unless you’ve been living under a rock for the past six months, you’ll have heard of generative AI – technology that enables computers to create synthetic data or digital content based on previously created data or content. The launch of ChatGPT in late 2022 lit a fire under this emerging space and seemingly overnight, hundreds of millions of people became inspired by the results of work that had already been going on for years within academic and commercial technology vendor research departments.

Earlier in June we spent two days touring around investment banks and hedge funds in London to talk to investors about generative AI and answer their questions.

 

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

 

We had many great, in-depth discussions. Here are the questions that came up most frequently.

  1. Where is the Value in Generative AI in the Short, Medium, and Long Term?

Today, most of the value is being captured by hardware vendors – most notably NVIDIA, which has seen its share price take off following a sharp upswing in its predicted revenues. As the market leading provider of GPUs with a strong enabling software story and emerging as-a-service play, too, NVIDIA is very well positioned to capitalise on the generative AI boom.

Of course, NVIDIA isn’t the only vendor that potentially stands to benefit; AMD and other semiconductor vendors (including start-ups like Graphcore, Cerebras & Moore Threads) are emerging as challengers, and generative AI platforms will drive storage and networking infrastructure investments too.

In the short to medium term, hyperscale public cloud providers can also expect to benefit significantly. With its early move investing in OpenAI and accelerated investments in generative AI across its software portfolio, Microsoft is in a particularly strong position; but AWS, Google, and Oracle are all also making significant moves in this space.

In the medium-term platform and application vendors also stand to benefit, although the value equation for them is less clear cut. There are significant question marks over which generative AI use cases can support direct monetization, and which will be important to implement from a defensive point of view. Many of the costs associated with managing generative AI models for scale, security, privacy and trust will also fall on their shoulders.

  1. What Will Have to Be True to Make GenAI a Truly Broadly Adopted Technology?

Right now, we’re still in “year zero” for generative AI in a commercial context. There is still a lot of confusion around the technology and its applicability in practical real world use cases.

What is already clear, though, is that publicly shared foundation models delivered as a service (such as those hosted by OpenAI) will only be suitable for a subset of enterprise use cases. For many, enterprises will use fine-tuned, specialised domain-specific models that are made available directly to them on a private (or controlled) basis.

The current state-of-the-art in generative AI yields systems that are prone to accuracy problems, difficult to control and predict, and expensive to run. All of these issues need to be worked on.

  1. Where Are the Implications for the Software Landscape?

Every software vendor that IDC is speaking to is updating or recreating their product roadmaps to incorporate their respective Generative AI strategies. Obviously, this will play out differently across infrastructure, platforms and applications – however there are certain common questions that are being asked:

  • Should we develop our own large language models, or should partner with model providers like OpenAI, Anthropic, Cohere and AI21 and tune them for our software capabilities?
  • How should we price our new Generative AI features?
  • Should we include getting access to customer data to train models as part of a new set of licensing terms and conditions. What do we offer in return (if anything)?
  • Do we need to evolve our support models to include service level agreements (SLAs) on accuracy on certain use cases that are being delivered?

Across all these questions, what is clear is that margin protection will be a major question for software vendors over time – especially those with questionable pricing power. In addition, there will be increased requirements for additional levels of support to deal with model, context and data drift. For the application players, there is an increasing likelihood that forms-based computing as a basis for applications will likely disappear over time and certain markets – for example, salesforce automation and human capital management could potentially be redrawn in the medium-term. 

As part of these changes, what is becoming clear is that the application vendors that are cloud laggards will be AI laggards, and that platforms will continue to dominate the software landscape.

More importantly, incorporating trusted and responsible AI principles into both product development and customer engagement will move from being a differentiator in the short term to table stakes in the medium term.

  1. What Are the Implications for Developers?

There’s been a significant amount of excitement about the ability of generative AI services (such as GitHub CoPilot, Replit Ghostwriter and Warp AI) to generate code, documentation, test scripts, and more.

Today’s state-of-the-art models are not going to put developers out of work. Rather, for some specific types of development work, and for some particular types of software asset being created, generative AI services are very likely to help developers accelerate their efforts to deliver working software, acting side-by-side with human developers in a “CoPilot” arrangement.

But it’s important to keep things in perspective: when we zoom out to consider the broader software delivery lifecycle, pro-innovation developers happy to experiment with new tools tend to bump into deployment, operations and support professionals who are much more risk averse.

  1. What Are the Implications for Services Providers?

Lastly, many of the investment teams we spoke to were very interested in discussing how professional services (particularly IT services) firms might be impacted by generative AI. Will it bring them major new opportunities? Or will its ability to drive automation of knowledge work mean that it forces providers to cannibalise their own businesses?

Our early research shows that more than 65% of early adopters of generative AI capabilities agree or strongly agree that their need for external services providers will be reduced in the future

The potential impact of generative AI on project delivery is, in some ways, analogous to the potential impact of low- and no-code development tools; if providers can embrace these tools effectively and also deliver trusted solutions to clients, they may find fewer hours are required to deliver projects – but outcomes will be improved for everyone.

 

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

 

The arrival of Generative AI technologies has created what we believe to be a seminal moment for the industry: it will be so impactful that it will influence everything that comes after it. However, we believe it is just the starting point. We think that Generative AI will trigger a transition to AI Everywhere – moving us from the use of narrow AI for specific use cases to widening AI for a range of use cases simultaneously.

This means that it will impact every element of the technology stack, and also drive a rethink of all horizontal and vertical use cases. However, given the questions around risk and governance, it will also require every organization to develop and incorporate an AI ethics & governance framework to deal with the risks mentioned earlier.

The investors that we spoke to in London agreed that the tech industry needs to take balanced approach to commercializing the opportunity, while also ensure that policies and regulations continue to protect consumers, enterprises and society as a whole.

Neil Ward-Dutton - VP AI, Automation, Data & Analytics Europe - IDC

Neil Ward-Dutton is vice president, AI, Automation, Data & Analytics at IDC Europe. In this role he guides IDC’s research agendas, and helps enterprise and technology vendor clients alike make sense of the opportunities and challenges across these very fast-moving and complicated technology markets. In a 28-year career as a technology industry analyst, Neil has researched a wide range of enterprise software technologies, authored hundreds of reports and regularly appeared on TV and in print media.

The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.

GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.

We observe three ways that industrial users can get in touch with GenAI:

  1. Publicly Available Tools: ChatGPT-like tools provide users with information, content generation, or codes. These publicly available tools and apps provide solid value to users. From a process area point of view, the great benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned using them to prevent sensitive data leakage.
  2. Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. They can be present as “copilots,” or an AI system designed to assist and support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
  3. Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Be Ready — But Careful

In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models and the predictive maintenance of complex simulations through soft sensors.

Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.

AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.

AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.

Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.

Output Challenges

Like any other methodology that serves industries, GenAI outputs must be 100% reliable. Most readers are probably familiar with the application of reproducibility and repeatability. Let me remind you that reproducibility allows for more accurate research, whereas repeatability measures that accuracy and confirms the results. Both are a means to evaluate the stability and reliability of an experiment and are key factors in uncertainty calculations of measurements.

GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.

GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.

GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.

Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.

If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.

We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.

Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.

Avoiding AI hallucinations entirely is challenging, but there are several actions that can be taken to limit occurrence or minimize impact. Let’s touch on a few: It is crucial to ensure that your AI model is trained on a diverse and representative data set that covers a wide range of examples from the real world. To improve the quality and reliability of the model’s outputs, the training data should be preprocessed and cleaned to remove inaccuracies, outliers, or misleading information. The model’s outputs should also be continuously evaluated and monitored to identify instances of hallucination or generation of unrealistic content.

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

Evolving Challenges

Because they involve generating new and original content without explicit programming, proving the reliability of GenAI models can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.

Commonly used methods include defining and utilizing appropriate evaluation metrics to assess the quality and reliability of generated content. Evaluation by humans is useful, including subjective evaluations that involve assessing and rating the quality and reliability of generated content.

For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.

Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.

The Sky is the Limit — For Now

In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.

To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.

Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.

Beware the Competition!

GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.

But beware: Do not allow the competition outrun you in terms of GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.

At the same time, do not underestimate the risk of intellectual property (IP) leakage, or the unauthorized use, disclosure, or exposure of valuable intellectual property through the utilization of generative AI models. Embed an IP leakage prevention mechanism in your general AI and data governance. This should include removal or anonymization of sensitive or proprietary information from training data sets.

As always, stay busy with what works — but keep an eye focused on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.