Generative AI has wowed consumers across the globe with its ability to find information and author high-quality content. For enterprises, the use cases are still being explored and defined. In this blog, we will explore a potential ‘killer app’ for generative AI: the virtual mentor as a new way to approach learning and onboarding.

In today’s organizations, most learning happens by speaking to experienced colleagues, searching the public internet or company-specific intranets, trawling through various PDF guides and presentations, or perhaps taking e-learning courses or classroom sessions. The problem is that existing technologies and approaches offer employees no easy way to find the information they need.

Current e-learning and onboarding solutions struggle with multiple challenges. Firstly, the content is costly and time-consuming to produce. Secondly, once produced, it is generally static and quickly becomes outdated. Thirdly, the one-size-fits-all approach to learning and onboarding doesn’t meet the needs of the individual, who already knows all about A but would like to deep-dive into B.

We believe that generative AI will be a game changer in solving these problems because, for the first time, the systems themselves can generate the needed learning content. Future virtual mentors will meet many of today’s unserved learning and onboarding needs. Employees would be able to interact digitally, remotely or in the office, intensively or in drip-feed style, and the learning content would be created on the fly, shaped largely by the nature of the interaction and the learner’s queries.

AI-Powered Virtual Mentor vs. Previous Learning Approaches

First of all, let’s define generative AI. We define generative AI as a branch of computer science that involves unsupervised and semi-supervised algorithms that enable computers to create new content using previously created content, such as text, audio, video, images and code.

Secondly, let’s define what an AI-powered virtual mentor is. We envision the AI-powered mentor to have the following characteristics:

  • Always available. Like Microsoft’s failed personal digital assistant Clippy (remember the animated talking paperclip?), a virtual mentor will be an ever-present resource for the learner.
  • Creates content itself. If fed enough material, a generative AI-powered virtual mentor will be able to create the relevant teaching material itself by synthesizing existing content.
  • Conversational. Just like a real-life, human mentor, the AI-powered virtual mentor interacts via conversation. The human mentor converses verbally, while the virtual mentor works best via written conversation (although verbal user experience is on its way, as well).
  • Adaptive. A virtual mentor goes far beyond what is known today as ‘adaptive learning’, i.e., an e-learning experience with some variation in the course depending on the individual learner. A virtual mentor can freestyle and go wherever the learner would like to go within a general topic area.

An employee would be able to ask a wide variety of general questions to the virtual mentor, such as:

  • What is the pricing structure for product X?
  • Do we have representation in Peru?
  • What are the key new features in the version YY.YYY of product Z?
  • What is the expense management policy for a client meeting?
  • Who in my company works with [expertise area]?

Let’s compare working with a generative AI-powered virtual mentor to traditional e-learning and classroom training:

Why Do We Need Virtual Mentors When We Already Have ChatGPT and Similar Generative AI Platforms?

ChatGPT is of limited use in an enterprise context for one simple reason: employees using the platform are likely to reveal sensitive company information. This is why many organizations have restricted or banned the use of ChatGPT among employees.

Just imagine an employee at a healthcare provider uploading the raw transcript of an internal meeting regarding the cancer treatment of patient XX and asking for abbreviated meeting minutes. Such an upload to a public internet system would constitute a major violation of patient XX’s privacy.

Virtual mentors, on the other hand, would leverage public internet-based Large Language Models but would not feed any employee inquiries back to the public internet. Such ChatGPT replicas in a confined corporate setting will be the first wave of generative AI virtual mentors to appear on the market.
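One common architecture for such confined mentors is retrieval-augmented generation (RAG): an employee’s query is answered by a privately hosted model using only retrieved internal documents, so no data leaves the company. The sketch below illustrates the retrieval step with naive keyword scoring; all names, documents, and functions are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal retrieval-augmented generation (RAG) sketch for a confined
# virtual mentor. Employee queries never reach a public service; a
# privately hosted model would complete the prompt built here.
# Documents and names are hypothetical examples.

def retrieve(query, documents, top_k=2):
    """Rank internal documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc.lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, documents):
    """Assemble a grounded prompt from retrieved context only.
    In production, this prompt would go to a privately hosted LLM."""
    context = retrieve(query, documents)
    return (
        "Answer using only this context:\n"
        + "\n".join(context)
        + f"\n\nQuestion: {query}"
    )

internal_docs = [
    "Expense policy: client meetings are reimbursed up to 100 EUR per person.",
    "Product X pricing structure: per-seat subscription, volume discounts apply.",
    "Office locations: we have representation in Peru and Chile.",
]

print(build_prompt("What is the expense policy for a client meeting?", internal_docs))
```

A production system would replace keyword overlap with embedding-based similarity search, but the confinement principle is the same: only the retrieved context and the query are sent to the model, which itself runs inside the corporate boundary.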

These will, in other words, be general-purpose virtual mentors based on public internet information. They can be adopted by organizations of any size and are ready to use immediately.

A subsequent wave of virtual mentors will be based on curated content specific to a functional area or an industry or similar. Such specialized content virtual mentors will be sold by vendors that are in charge of curating content and maintaining the AI solution.

A virtual mentor in the area of accounting could be offered by a learning content provider or, alternatively, by an accounting solution provider. Some specialized virtual mentors could be provided as free add-ons to commercial software subscriptions.

Finally, we will see a wave of organization-specific virtual mentors that will act as experts in one organization. In this case, the organization itself would be in charge – possibly aided by a services provider – of feeding the system with learning material.

A product manufacturer would input all manuals, product FAQs, marketing material, customer service interactions, HR policies, internal communication, public pricing information, everything on the intranet and company internet sites, training materials, etc. Such a solution could be very helpful in onboarding new employees and in answering inquiries from existing employees. However, it would take time and resources to implement and would require a certain company size to be worthwhile.

The figure below shows the different levels of data feeding into a virtual mentor. The interaction between the virtual mentor and the employee will be chat-based to begin with. However, in the medium term, interaction could also be done through verbal communication, games, metaverses, augmented reality, etc.

Evidence of Generative AI Replacing Existing Digital Learning and Coaching Solutions

Chegg, an established American education technology (EdTech) company known for textbook rentals, online tutoring, and a variety of student services, was among the entities to feel the competition from generative AI. Their initial projection regarding generative AI tools, such as ChatGPT, was that these technologies would take a longer period to truly influence the market.

However, the release and subsequent popularity of GPT-4 among students, credited to its swift response time, efficiency, and affordability, led to a sales slowdown and a dramatic 48% decline in Chegg’s stock price in early May 2023.

In response to these trends, Chegg entered into a partnership with OpenAI in April 2023, leading to the development of CheggMate. This tool, still in its development phase, aims to combine GPT-4’s generative AI capabilities with Chegg’s existing question database.

The goal for CheggMate is to enhance user experience by better aligning user queries with the most suitable resources.

Other EdTech vendors, including Duolingo, have unveiled new AI-driven features. Specifically, Duolingo introduced a role-play chat where users can learn a language by conversing with an AI. After these interactions, they receive feedback and suggestions to enhance their language-learning journey.

We have also witnessed the first examples of generative AI approaches in mentoring. CoachHub, a leading vendor of digital coaching solutions, recently unveiled AIMY, a virtual AI-powered career coach rooted in OpenAI’s ChatGPT. AIMY is designed to let users try personalized coaching sessions without any human interaction and without the costs associated with traditional coaching. It emulates human-to-human coaching but is still in beta and not yet able to manage highly complex discussions.

Challenges to Overcome for Virtual Mentor Solutions

Adopting virtual mentor solutions for learning, onboarding, and coaching purposes is not without challenges. Here are a few key obstacles that organizations might encounter when introducing these new AI-driven solutions:

  • Data privacy and security concerns. The first cases of data breaches related to employees’ use of generative AI solutions have already emerged, such as Samsung’s discovery of staff uploading a variety of sensitive information to ChatGPT. Future virtual mentor solutions will not feed data back to public generative AI systems, such as ChatGPT.

As shown in the figure above, virtual mentors will use a combination of user data, curated company data, curated industry or functionally specific data as well as publicly available data as training material. Such approaches will limit the risk of data breaches significantly.

However, adoption will require significant attention to security-related aspects, such as ensuring robust encryption, compliance with data protection regulations, etc.

  • Implementation complexity and skills gap. Introducing virtual mentor solutions on top of existing data is likely to require specialist AI training skills, which many organizations may not possess. In terms of the overview figure above, the company-specific layer presents the biggest challenges. This is because training material is limited (compared to the vast number of resources available on the public internet) and because training material must be curated, updated, and deleted (in the case of obsolete material), etc.
  • Risk of hallucinations. AI-driven virtual mentors can produce “hallucinations,” i.e., inaccurate answers. In a mentoring context, this can lead to confusion or misguidance and, ultimately, employees rejecting the mentor system as unreliable. The risk of hallucinations means that organizations will have to dedicate resources to quality assurance, a ticketing system for incorrect or inappropriate answers, etc.

Implications for HCM and Payroll Vendors

Generative AI will have a major impact on the field of Human Capital Management solutions. There has been a significant initial focus on the impact of generative AI on recruiting, candidate marketing, and employee performance.

However, learning and onboarding will also see massive change as a result of generative AI.

A market for the curation of Large Language Models for various industries and functional areas will appear. This could open new revenue streams for providers with strong existing domain knowledge.

As displayed in the table above, different learning delivery methods will have different sweet spots. Classroom-based learning and traditional e-learning formats will not disappear.

What will happen, however, is that many of the more general learning and onboarding tasks will transition to generative AI-based learning formats. Initially, the formats will revolve around chat-based interfaces, but over time other user experiences and communication formats will emerge.

Generative AI is an opportunity for vendors of learning and onboarding solutions. However, they will need to react fast in terms of evolving existing solutions and building in generative AI features and aspects.

Existing learning and onboarding vendors will come under pressure from new providers of virtual mentors and other related generative AI-based solutions. Generative AI is a double-edged sword for HCM vendors: a blessing for those who are willing to revisit their existing offerings, but a curse for those that fail to respond.

Bo Lykkegaard - Associate VP for Software Research Europe - IDC

Bo Lykkegaard is associate vice president for the enterprise-software-related expertise centers in Europe. His team focuses on the $172 billion European software market, specifically on business applications, customer experience, business analytics, and artificial intelligence. Specific research areas include market analysis, competitive analysis, end-user case studies and surveys, thought leadership, and custom market models.

Is Generative AI possible without the cloud? This question lingers as we delve into the world of AI innovation and explore the potential of generative AI models.

Let’s try to agree on the pivotal role that cloud platforms play in unleashing the power of generative AI as they provide a pathway to rapid development, scalability, and help to unlock the full potential of what some call a groundbreaking technology.

So, can generative AI truly flourish without the aid of cloud platforms? Are the two really a match made in technological heaven?

The cloud serves as a catalyst for rapid development and scalability in the realm of generative AI. Imagine the obstacles faced by both startups and established vendors burdened with the need for costly infrastructure investments.

High-performance computing resources such as GPUs and TPUs become accessible without substantial upfront investments. This liberates organizations to focus on what truly matters: developing innovative generative AI solutions, free from almost any infrastructure concerns.


Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Beyond this, though, one of the most important benefits of cloud platforms for generative AI is the way they provide managed access to pre-trained foundation models and APIs. These resources act as a springboard, propelling developers forward without the need to start from scratch.

Pre-trained models capture the knowledge and expertise of generative AI experts, saving significant time and computational resources. By leveraging these models, developers can advance their projects, focusing on fine-tuning and customization rather than spending countless hours on training models.

Of course, enterprises can build and host their own foundation models if they so wish, but this is a very expensive, complicated, and time-consuming process that requires large teams of rare specialist talent. Cloud providers offer APIs that abstract the complexities of generative model architectures, simplifying the integration of generative AI capabilities into existing and newly built applications. This democratizes access to generative AI, allowing developers to use its power without deep expertise in model development.

Building generative AI models usually requires comprehensive and efficient development environments. Cloud providers offer a wide range of frameworks, development libraries, and collaboration tools tailored specifically to generative AI. These tools simplify the development, training, and evaluation of generative models, supporting developers and data scientists in bringing their ideas to life. By partnering with cloud providers, companies building developer tools and platforms ensure seamless integration with cloud-based infrastructure and services.

Yet, as much as we want to believe this is a romantic relationship, it is in fact a marriage of convenience, i.e., business, so both sides need to think about how this partnership will work for them.


Watch the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures


What AI-Model Providers Should Do

Prioritize Knowledge Transfer

To fully utilize generative AI, it is crucial to invest in knowledge transfer and training programs. Collaborate with cloud providers to develop training materials, workshops, and resources that enhance the understanding and skills of employees. Empowering individuals within organizations to leverage generative AI technologies effectively will maximize the potential of this field.

Foster Continuous Learning and Research

Leverage the support provided by cloud providers for research and development. Engage in research collaborations, attend conferences, and utilize cloud resources for experimentation and innovation. Staying up to date with the latest advancements in generative AI is vital for building new solutions.

Plan for Strong Data Management

Strong data governance practices are a must to ensure compliance, data privacy, and responsible use of data. While it makes a lot of sense to leverage cloud platforms’ data management and governance tools to maintain data quality, data lineage, and appropriate access controls throughout the generative AI lifecycle, AI providers must never assume that cloud providers’ tools are enough.

What Cloud Providers Should Do

Invest in Hardware/Chips R&D

Enhance hardware and chip capabilities specifically tailored for generative AI tasks. Explore specialized hardware accelerators, optimize GPU and TPU architectures, or even develop new chips designed to accelerate generative AI computations. By staying at the forefront of hardware advancements, cloud providers can offer superior performance and cost-efficiency.

Develop Industry-Specific or Use-Case Specific AI Frameworks

Differentiate by developing industry-specific or use-case specific AI frameworks that cater to the unique needs of various domains. Offer pre-trained models, domain-specific data management tools, and integration with industry-specific applications. By providing specialized AI frameworks, cloud providers can enable businesses to leverage generative AI effectively and drive sector-specific innovation.

Support Model Deployment and Lifecycle Management

Cloud platform providers must develop comprehensive tools for model deployment, monitoring, and lifecycle management in support of generative AI governance. This includes intuitive interfaces for deploying models, robust monitoring for issue resolution, and higher-level tools for responsible AI delivery. Simplifying processes enhances user experience for developers and data scientists.


Together, both sides should absolutely focus on building ecosystems and on fostering collaboration models that encourage the participation of various stakeholders in the generative AI space. Cloud providers need to create open platforms and APIs, allowing seamless integration with innovative tools, services, and solutions to provide customers with a broader range of generative AI capabilities. AI creators can leverage open platforms and APIs to integrate tools and services developed by complementary companies in the generative AI space, fostering a thriving marketplace of offerings.

And please remember: a marriage of convenience can only work when both partners enter it with clear expectations and mutually beneficial goals. That may be too much for real family life, but it should be exactly what’s needed for commercial success.

Ewa Zborowska - Research Director, AI, Europe - IDC

Ewa Zborowska is an experienced technology professional with 25 years of expertise in the European IT industry. Since 2003, she has been a member of the IDC team, based in Warsaw, researching IT services markets. In 2018, she joined the European team with a specific emphasis on cloud and AI. Ewa is currently the lead analyst for IDC’s European Artificial Intelligence Innovations and Strategies CIS.

In 2022, after a booming 2021, the ability to attract and retain talent was the #1 internal CEO concern worldwide, according to the Conference Board CEO survey. Fast-forward 12 months, and the environment is different due to layoffs in the tech and financial services sectors, inflationary pressures, and a looming recession.

However, in the Conference Board CEO survey for 2023, the ability to attract and retain talent remains the #1 internal CEO concern worldwide.

This CEO expectation of a continued tight labor market in Europe and elsewhere is supported by Eurostat data from June 2023. Despite fluctuations mainly related to the Covid-19 pandemic, unemployment is on a continuous downward trend in the EU, while the overall EU employment rate is on a continuous increase.

The recent wave of layoffs in high tech and related industries – shocking as it was – is unlikely to change this picture. Why? Because the wave has already passed its peak: layoffs have been declining since around January 2023 for the technology industry, and even earlier for other industries, according to Layoffs Tracker.

Our own survey data confirms that the European labor market remains tight. Over half (54%) of software decisionmakers are challenged to find new staff, according to IDC’s European Enterprise Apps & CX Survey from January 2023 (n = 670). Viewed by industry, recruitment difficulties are present across the board, with signs of some easing of the severe labor shortages that were experienced in retail and hospitality in 2021.

What IDC’s survey data also shows is that employee retention pressure has eased somewhat in 2023 because of the economic uncertainties and layoffs. In our report, Status of Employee Retention in Europe, based on a survey of 2,785 European employees in March 2022, we found that an alarming one in four employees on average was actively and voluntarily looking for another job. Job seekers forced to look for alternative employment due to relocation or a temporary contract (i.e., actively and involuntarily job hunting) were excluded.

We conducted a similar survey of 3,527 employees in Europe in March 2023. The new survey showed that the proportion of voluntary job seekers had decreased from 24.5% in 2022 to 16.8% in 2023 — a drop of almost 8 percentage points. We asked those who were not actively looking for a new job why not, and the second and third most popular reasons were the most interesting because they referred to the current economic environment: it is “financially sensible to stay” and “hard to find a new job,” respectively.

These concerns appear to be the main reasons why we saw the proportion of voluntary leavers decline from 24% in 2022 to 17% in 2023.

European Organizations Use a Multitude of Coping Strategies to Improve Employee Attraction

Given that the tight labor market is likely to continue for the foreseeable future, what are European organizations doing to get the staff that they need? We asked all software decisionmakers in organizations with some level of recruitment difficulties about their coping strategies.

Interestingly, upskilling and reskilling existing employees was the most popular answer. Educating current employees and redeploying them in new, relevant positions makes sense in many cases.

Existing employees already have valuable knowledge about the organization and industry compared with new hires. One open question is how extensive upskilling/reskilling efforts are required and what learning methods will be needed.

We believe that a significant proportion of the upskilling/reskilling activity will focus on technology and data related skills.

European organizations will also use other methods to make ends meet. The second most popular coping strategy is offering higher salaries, which we see practiced for positions where there is a confined resource pool and limited substitution options. Examples could be a certain trading specialist, a particular medical professional, etc.

Third place was hiring more recruiters and acquiring better recruiting tools, which is a reasonable strategy, especially in organizations where the recruiting function is understaffed and equipped with outdated software and/or processes.

Other popular strategies included widening the spectrum of applicable candidates, lowering criteria, and investing in better branding and candidate marketing.

Three-quarters of organizations deployed a combination of coping strategies, meaning that organizations typically see these strategies as complementary, as opposed to individual silver bullets. Please see Employee Shortage Coping Strategies in Europe (IDC #EUR150726123, June 2023) for more information.

What Are the Upsides from the Point of View of HCM and Payroll Application Vendors in Europe?

The tight labor market and recruiting difficulties among European organizations are in fact sweet music in the ears of many of the software vendors in the HCM space. The solution areas that are best positioned to capitalize on the employee attraction desires and approaches of European organizations are:

  • eLearning solutions, learning services, reskilling strategy services. The stated intent to “reskill and upskill” can be achieved by different means, including onsite training, mentoring, and external education courses, but learning technologies are also likely to play a key role. IDC believes that the reskilling/upskilling ambitions will trigger investments in more comprehensive eLearning technologies, as opposed to micro learning and social learning approaches.
  • Recruiting solutions and services. Vendors of recruiting solutions and HCM suites with strong recruiting modules stand to benefit as do providers of talent acquisition services and recruiting agencies. Investing in such capability is almost mandatory, as the consequence of doing nothing and not being able to attract the required talent can be crippling for an organization.
  • Skills mapping, skills management, and skills matching solutions. Upskilling and reskilling is a fine remedy; however, an overview of existing skills and skill gaps is a prerequisite for investing in learning. In order to progress, an organization first needs a map – a skills map – to navigate and target investments.
  • Temp staff providers, outsourced labor services. In some industries, such as healthcare and professional services, organizations will include contingent labor and external services as part of the solution to the lack of available labor resources.
  • Marketing solutions related to candidate marketing and employer branding. In this age, employees do not come flocking to employers; rather, it is the other way around. Employers must target potential applicants on social media, build databases of passive candidate pools, and target these effectively. This requires marketing technology, which opens a new target market for vendors of such solutions.

Bo Lykkegaard - Associate VP for Software Research Europe - IDC


Introduction

Software project failures are a harsh reality in the world of technology. Despite the best intentions and efforts, projects can unravel due to various reasons, such as poor estimation and planning, inadequate requirements gathering, scope creep, and unrealistic timelines. These failures not only result in financial losses but also tarnish a company’s reputation and erode stakeholder trust. Addressing project failures requires a proactive approach, emphasizing communication, risk management, continuous evaluation and especially realistic estimation and planning. Embracing these lessons can lead to improved project outcomes and foster a culture of learning and growth in the software development industry.

In the dynamic world of software development, accurate cost estimation is crucial to ensure project success. Organizations rely on dependable software cost estimation practices to manage budgets, meet deadlines, and deliver quality products. To address this need, a new Software Cost Estimation Certification has emerged, complemented by the Cost Estimation Body of Knowledge for Software (CEBoK-S). In this blog, we will delve into the significance of this certification and the CEBoK-S, shedding light on how they empower professionals to excel in the field of software cost estimation.

The New Software Cost Estimation Certification (SCEC)

The new Software Cost Estimation Certification is a comprehensive program designed to equip professionals with the latest tools, methodologies, and best practices for accurately estimating software project costs. Offered by the International Cost Estimation and Analysis Association (ICEAA) through its special interest group, ICEAA-Software, this certification reflects the industry’s evolving demands and ensures that participants stay up to date with the latest trends.

Key Components:

  1. Advanced Estimation Techniques: The certification program covers a wide array of advanced estimation techniques, from traditional methods like function point analysis and COCOMO to modern approaches like agile estimation and parametric modelling. By learning these techniques, professionals gain the flexibility to adapt their approach to diverse project requirements.
  2. Risk Assessment and Mitigation: Effective cost estimation involves identifying potential risks and uncertainties that can impact the project’s outcome. The certification equips participants with the skills to assess and mitigate risks, allowing for better planning and resource allocation.
  3. Industry Case Studies: Real-world case studies are an integral part of the certification program. These case studies provide valuable insights into how cost estimation principles are applied in various scenarios, offering participants a practical understanding of the challenges they may encounter.
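To give a flavor of the traditional techniques named above, Basic COCOMO estimates effort as a × KLOC^b person-months and schedule as c × effort^d months, with published coefficients per project class. The sketch below uses those well-known Basic COCOMO values (Boehm, 1981); it is a simplified illustration of the technique, not part of the certification material itself.

```python
# Basic COCOMO effort and schedule estimation sketch (Boehm, 1981).
# Coefficients are the published Basic COCOMO values per project class;
# real estimates would use the Intermediate/Detailed models with cost drivers.

COEFFICIENTS = {
    # project class: (a, b) for effort, (c, d) for schedule
    "organic": (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded": (3.6, 1.20, 2.5, 0.32),
}

def cocomo_basic(kloc, project_class="organic"):
    """Return (effort in person-months, schedule in months) for a
    project of the given size in thousands of lines of code."""
    a, b, c, d = COEFFICIENTS[project_class]
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule

effort, schedule = cocomo_basic(32, "organic")
print(f"{effort:.1f} person-months over {schedule:.1f} months")
```

For a 32 KLOC organic project this yields roughly 91 person-months over about 14 months, which shows why the model also implies a team-size estimate (effort divided by schedule).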

The CEBoK-S – Cost Estimation Body of Knowledge for Software

The CEBoK-S is a comprehensive guide that provides a structured framework for software cost estimation. Developed by industry experts, this body of knowledge encompasses a wide range of topics, from fundamental concepts to advanced practices, creating a solid foundation for professionals in the field.

Key Features:

  1. Detailed Framework: The CEBoK-S offers a detailed framework that covers all aspects of software cost estimation. It defines the key processes, activities, and inputs required for accurate estimation, guiding professionals through the entire estimation lifecycle.
  2. Best Practices and Standards: In an ever-changing industry, adhering to best practices and standards is crucial. The CEBoK-S outlines established industry standards, ensuring consistency and reliability in cost estimation practices across projects and organizations.
  3. Continuous Updates: Software development is continually evolving, and the CEBoK-S keeps pace with these changes. It undergoes regular updates to reflect the latest advancements and emerging trends in the field, making it a reliable and relevant resource for professionals.

Impact on the Software Industry

The combination of the new Software Cost Estimation Certification and the CEBoK-S has revolutionized the software industry’s approach to cost estimation. Certified professionals armed with the knowledge from the CEBoK-S are better equipped to address the challenges posed by modern software projects, leading to improved project outcomes and client satisfaction.

  1. Enhanced Project Planning: The comprehensive knowledge gained from the certification and the CEBoK-S enables professionals to create accurate and realistic project plans. This, in turn, leads to better resource allocation, reduced budget overruns, and timely project deliveries.
  2. Quality and Consistency: Employing standardized cost estimation practices ensures consistency in project management across different teams and organizations. This leads to higher-quality software development, as well as improved collaboration and communication among stakeholders.
  3. Improved Stakeholder Trust: Clients and stakeholders place their trust in organizations that employ certified professionals and follow industry standards. The certification acts as a testament to an organization’s commitment to excellence and professionalism.
  4. Higher Success Rates: Standardized estimation practices drive higher success rates for software development projects, resulting in fewer cost and schedule overruns, potentially saving companies huge amounts of money and sparing them reputational damage.
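The estimation techniques formalized in bodies of knowledge like the CEBoK-S are typically parametric: effort is modeled as a function of software size and project characteristics. As a minimal, generic illustration of the style of model involved, here is the classic basic COCOMO formula (shown for flavor only; it is not presented as CEBoK-S content):

```python
# Basic COCOMO: effort = a * KLOC**b (person-months),
# schedule = 2.5 * effort**c (calendar months).
# Coefficients per project class (Boehm's classic values).
COCOMO = {
    "organic":       (2.4, 1.05, 0.38),  # small teams, familiar problems
    "semi-detached": (3.0, 1.12, 0.35),  # mixed experience
    "embedded":      (3.6, 1.20, 0.32),  # tight constraints
}

def estimate(kloc, mode="organic"):
    """Return (effort in person-months, schedule in months) for a project."""
    a, b, c = COCOMO[mode]
    effort = a * kloc ** b
    schedule = 2.5 * effort ** c
    return effort, schedule

effort, months = estimate(32, "organic")  # a 32-KLOC business application
```

For a 32-KLOC organic-mode project this yields roughly 91 person-months over about 14 calendar months; modern estimation frameworks refine such point estimates with cost drivers, calibration data, and uncertainty ranges.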

Conclusion

In conclusion, the new Software Cost Estimation Certification and the CEBoK-S are instrumental in equipping professionals with the knowledge and skills required to excel in software cost estimation. By combining advanced estimation techniques with a structured body of knowledge, these resources elevate the industry’s cost estimation practices to new heights. As organizations continue to embrace these certifications, we can expect to see more successful projects, satisfied clients, and a stronger, more reliable software industry overall.

IDC Metri is proud to announce that its Software Cost Estimation Center of Excellence now has two Software Cost Estimation Certified professionals: Frank Vogelezang and Harold van Heeringen. More information can be found here: https://www.idc.com/eu/idcmetri/it-intelligence

On May 24, AMD revealed its new Radeon RX 7600 graphics card, an entry-level card positioned to play the newest games at 60+ frames per second (fps) at 1080p. It supports highly efficient streaming using the latest AV1 encoding technology. According to AMD, the card delivers, on average, 29% faster 1080p gaming performance than the AMD Radeon RX 6600.

AMD’s latest RDNA 3 generation of cards has marked ray tracing improvements over the previous RDNA 2 versions. Our tests show that the Radeon RX 7600 can approach the performance of the midrange Radeon RX 6700 XT in ray tracing benchmarks such as Speed Way and Port Royal, achieving around 86% of the midrange card’s score in both tests using default driver settings.
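The relative performance figures quoted in this review are simple ratios of benchmark scores. With hypothetical scores (illustrative only, not the actual 3DMark results), the calculation looks like this:

```python
def pct_of_reference(score, reference_score):
    """Express one card's benchmark score as a percentage of another's."""
    return 100.0 * score / reference_score

# Hypothetical benchmark scores, for illustration only.
speed_way = pct_of_reference(2150, 2500)    # entry card vs. midrange card
port_royal = pct_of_reference(6880, 8000)
```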

The Radeon RX 7600 is based on the AMD RDNA 3 architecture and includes revamped compute units with unified ray tracing and AI accelerators, as well as AMD’s second-generation Infinity Cache technology.

The Test Platform

The test PC comprised an AMD Ryzen 5 7600X processor, the Radeon RX 7600 graphics card, a GIGABYTE X670E Aorus Master motherboard, and a G.SKILL Trident Z5 Neo 2x16GB DDR5-6000 EXPO memory kit, all provided to IDC by AMD. The primary Windows 11 disk was a 1TB GIGABYTE Aorus NVMe Gen4 solid state drive.

A be quiet! Silent Loop 2 280mm water cooler was fitted for the processor, coupled with a be quiet! STRAIGHT POWER 11 Platinum 850W power supply. A 34” Dell Gaming S3422DWG monitor was also used: a quad-HD 3440 x 1440 display with a 144Hz refresh rate, FreeSync, 10-bit color, and high dynamic range support.

The reviewers utilized the motherboard’s optimal default settings, set the memory profile to EXPO 6000, and made sure that smart access memory was enabled. No special tuning, optimization, or overclocking was carried out for the tests.

Synthetic Benchmarks and Productivity Performance

Blender Benchmark 3.5.0 was used to evaluate the graphics card’s rendering performance. The Radeon RX 7600 ranked in the top 29% of all submitted results and delivered a far quicker result than expected, thanks to the Heterogeneous-compute Interface for Portability (HIP), AMD’s GPU compute language, which Blender Benchmark now uses instead of OpenCL. This is good news for gamers who also do light personal and family photo editing or enhance pictures for social media posts.

The system’s 3DMark Time Spy score of 10,557 was better than 60% of all results, which is respectable for an entry-level gaming machine.

Gaming Performance

Various old and new video games were tested on the platform, including next-gen versions.

Shadow of the Tomb Raider

This game averaged 134fps at 1080p with maximum graphics settings and AMD’s FidelityFX Contrast Adaptive Sharpening enabled. With ray-traced shadows enabled at the high setting, the game ran at an average 77fps with a low of 53fps. Increasing ray-traced shadow quality to extreme resulted in an average 70fps and a minimum of 43fps.

Far Cry 6

This game averaged 118fps at the 1080p high graphics quality setting, registering a minimum of 98fps. During testing, all DirectX Raytracing (DXR) and FidelityFX Super Resolution (FSR) capabilities were activated. Increasing the graphics settings to ultra quality resulted in an average 99fps and a minimum of 85fps.

Cyberpunk 2077

At 1080p, this game averaged 37fps with a minimum of 22fps. Ultra ray tracing presets and FSR 2.1 capabilities were activated automatically. The game performed at an average 50fps and a minimum of 35fps using the medium ray tracing setting, resulting in a much smoother experience.

The Witcher 3: Wild Hunt Next-Gen

This game averaged 38fps at 1080p, with a minimum of 26fps. Ultra ray tracing presets and FSR 2.1 capabilities were activated automatically. The game functioned significantly better at the medium ray tracing setting, clocking an average 57fps and a minimum 46fps. Without ray tracing, rasterization performance averaged 104fps and registered a minimum of 76fps at extreme settings.

Frequency, Power Consumption, Temperature, and Noise

The RX 7600 operated at an average frequency of 2545MHz, consumed 160W of power, and attained an average temperature of 79°C when playing The Witcher 3 in ultra ray tracing mode, with the GPU loaded to 99%. Thanks to their small size and low rotational speed, the two 90mm fans kept the card cool and virtually silent.

Final Words and Conclusion

According to IDC’s monitor tracker, about two-thirds of new monitors still have a maximum resolution of 1080p, so there is a massive installed base of such displays. Not every customer with full HD aspirations is seeking the best and most costly gear. For example, Minecraft and Roblox are popular among youngsters, while Fortnite in performance mode is popular among teens. Such groups will be delighted with a PC powered by the RX 7600, and their parents will not have to take out a loan to build it!

AMD faces increased competition now that Intel has entered the arena alongside Nvidia. Difficult macroeconomic conditions, ranging from inflation to a war on the ground in Europe, are reducing consumer purchasing power. However, AMD has wisely evaluated the market conditions and taken quick, clever measures to adjust, such as reducing the proposed end-user price of the Radeon RX 7600 from an anticipated $299 to $269. AMD has also reduced the prices of its previous-generation RDNA 2-based RX 6000 series cards, providing gamers and customers with a wide selection of products at various price points.

In conclusion, there is a lot to like about the AMD Radeon RX 7600. It is an affordable, sleek, and compact dual slot, dual fan graphics card that delivers impressive 1080p gaming performance at 50+fps on the highest graphical settings with FSR and ray tracing enabled.

Mohamed Hakam Hefny - Senior Program Manager - IDC

Mohamed Hefny leads market research in EMEA on professional workstation PCs and solutions. He also reports on professional computing semiconductors, processors, and accelerators (CPUs and GPUs), as well as breakthroughs and trends related to the market. In addition, Mohamed is actively involved in AI PC taxonomy and research. He participates in business development projects, contributes to consulting activities, and provides IDC customers with analysis, opinions, and advice.

Generative AI is a fascinating topic and has emerged as a powerful technology that pushes the boundaries of what computation can accomplish.

It has the potential to transform the realms of art and creativity, but also to revolutionise industry processes.

There are myriad use cases for generative AI across industries. Different industries are adopting the technology to achieve specific business outcomes or to address the common challenges every organisation faces.

With its ability to generate content autonomously and simulate human-like outputs, generative AI has found applications across industries, in fields as diverse as marketing, customer experience, and citizen engagement, as well as in industry-specific processes such as supply chain management automation in manufacturing.

Let’s start by diving into the use cases that are common to several industries.

One of the first use cases adopted by organisations is conversational applications, which range from virtual assistants and chatbots to language translation and personalised recommendations.

Another use case spanning industries is marketing applications, whose adoption depends on the sensitivity of customer/citizen/patient data and the industry’s appetite for online marketing. For example, social media automation, customer support via chatbots, and personalised marketing campaigns can enhance an organisation’s visibility while making its marketing investments more efficient.

A third use case cutting across industries is knowledge management applications. In organisations, these are applied to identifying existing knowledge, summarising knowledge, and language translation and geographic contextualisation.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

However, industries adopt technologies based on their specific needs, goals, and customer demands. Unique processes, regulations, and market dynamics require tailored technologies, and it will be no different with generative AI.

Diverse industry requirements, resource constraints, competition, and technological maturity stages drive varying technology adoption across organisations. Now we’d like to explore how several industries are approaching generative AI and the technology adoption patterns of each industry:

Finance

In the ever-evolving landscape of the financial services industry, the emergence of generative AI technologies, led by OpenAI’s ChatGPT, has garnered significant attention from CIOs.

While some express concerns regarding privacy and ethics, and others grapple with understanding the full potential, there is a growing sense of urgency driven by the fear of missing out (FOMO). Contrary to sceptics’ concerns, the industry has demonstrated a shift in focus towards augmenting the capabilities of financial services professionals, rather than seeking to replace them.

By harnessing the power of large language models, financial institutions aim to centralise knowledge, empowering agents and professionals with essential information to enhance customer experiences and optimise operational efficiency.

An excellent example of this progressive trajectory is Sedgwick, a prominent global provider of third-party claims administration services. It has successfully integrated an OpenAI API version of ChatGPT, named “Sidekick,” into its sophisticated claims system, exemplifying Sedgwick’s commitment to elevating its claim-handling process and delivering unparalleled customer service experiences.

Another notable application gaining traction involves leveraging generative AI to enhance conversational interfaces. By revolutionising conversational capabilities, generative AI enables more human-like responses and facilitates complex interactions. Helvetia, a pioneering force in the insurance services realm, has embarked on a bold endeavour by launching a direct customer contact service utilising OpenAI’s ChatGPT.

This experimental initiative aims to provide seamless access to various financial products, showcasing the vast potential of generative AI in transforming customer interactions.

Energy (Utilities and Oil & Gas)

According to a recent IDC survey ― Future Enterprise Resiliency & Spending Survey Wave 2, March 2023 (FERS) ― the utilities industry globally ranks second highest in terms of investments in generative AI technologies for 2023 (40% of respondents), surpassing the global cross-industry average of 24%.

This highlights the enormous potential for innovation, the amplification of human work, and reinvention of work processes in utility companies. The automation of certain tasks and AI-assisted transformation are expected outcomes.

While the utilities industry is still in the exploratory phase of identifying fruitful use cases, generative AI holds significant promise in areas such as content generation for sales and marketing; code-generation applications to improve productivity and employee experience; conversational applications for customer service and CX improvements; and knowledge management, which is especially crucial given the challenge of an aging workforce in the utilities sector.

On the other hand, oil and gas organisations appear to be adopting a more conservative position.

The FERS survey reveals that only 18% of oil and gas companies worldwide are willing to invest in generative AI technologies in 2023.

However, 82% are actively conducting initial assessments to identify potential use cases. These assessments include evaluating the use of generative AI for multi-scenario authentic simulations and predictive capabilities in asset operations, generating subsurface images using fewer seismic data scans in the upstream part of the business, and generating human-like text to provide responses to domain-specific questions for business leaders.

Manufacturing

The early months of 2023 witnessed a surge of interest in generative AI and a renewed focus on AI in general.

While manufacturing organisations have not been early adopters of generative AI, they are gradually recognising the technology’s potential for leveraging vast research resources to create diverse content, including text, video, images, and virtual environments.

Among the respondents to the IDC 2023 Manufacturing Survey, 27% are already investing in generative AI technologies, and an additional 38% are engaged in basic exploration. Knowledge management and marketing applications are areas where organisations see short-term benefits, likely due to the availability of user-friendly, easily accessible technology such as ChatGPT.

Moreover, manufacturers believe that generative AI can have a significant medium-term impact on various aspects of their operations, such as production planning, quality control, AI-driven maintenance, code generation for programmable logic controllers, product development, design (including modelling, testing, and product life-cycle management), and sales (including client data analysis and content management).

However, there are ongoing challenges in maximising the value of AI/ML in manufacturing organisations. Many organisations still lack the necessary tools to address issues related to data availability and quality. IDC observes that internal capabilities and training in leveraging AI-powered technology and analytical tools are often lacking.

Read blog: Gen AI in an Industrial Environment — Recommendations for Early Adopters

Government

Generative AI tools such as ChatGPT, Bard, Dall-E 2, Vall-E, Stable Diffusion, and others have rapidly transitioned from arcane terms known only to AI experts to subjects of popular discussion in newspapers and TV talk shows within a matter of months.

OpenAI’s launch of ChatGPT in late 2022 sparked a wave of curiosity and speculation among the public, private companies, and public administrations. Initially, policymakers exercised caution, but senior civil servants quickly developed an interest in generative AI. Consequently, some jurisdictions have begun issuing guidelines.

The United Arab Emirates government, for example, has released guidelines encouraging the use of generative AI and providing ideas for potential use cases.

The Portuguese government has announced the “Practical Guide to Access to Justice,” which utilises the ChatGPT platform to help citizens obtain legal information in layman’s terms.

In another intriguing instance, a member of the Italian parliament used generative AI to write a speech, surprising fellow senators by disclosing its computer-generated nature at the end of the debate.

In the long term, generative AI has the potential to improve citizen experiences, amplify the competencies and capacity of civil servants, who often face overwhelming amounts of documents and cases, and aid administrations struggling to hire new talent.

At present, however, no major government entities in Europe, the Middle East, and Africa (EMEA) have implemented generative AI at scale. Nevertheless, numerous ideas, pilots, and prototypes are under development to understand the potential benefits in terms of citizen and employee experiences, increased operational efficiency, enhanced trust and compliance, environmental sustainability, and the governance and technical challenges that need to be addressed.

Healthcare

European healthcare organisations are increasingly recognising the benefits of generative AI in empowering and engaging patients and clinicians.

The most promising area of investment lies in knowledge management applications that enable a more efficient and effective flow of information among healthcare professionals, ultimately leading to better patient care.

For instance, generative AI can be employed to create or integrate more accurate patient histories and identify disease patterns, significantly enhancing the ability to make accurate diagnoses and develop effective treatment plans.

However, effective implementation of generative AI in healthcare faces limitations related to both data and models. Generative AI models require extensive training on large volumes of high-quality data.

Healthcare data quality varies widely, and its availability can be restricted due to privacy and ethical concerns. Additionally, generative AI models have limitations in terms of reproducibility due to their probabilistic nature and complex architecture. This undermines the reliability and trustworthiness of the models, especially when used to support clinical decision-making.

Read blog: Generative AI in Healthcare: Benefits and Risks

Retail

The retail industry is moving faster than humans can keep up with. Evolving customer expectations and needs, fierce competition, and the quest for enhanced process efficiency ― among other factors ― are driving retailers to rush into experimenting with emerging technologies.

In fact, in 2022 newspapers were crowded with headlines about bold retailers and brands landing in the metaverse, while in 2023 the focus has already shifted to generative AI. However, while retailers’ metaverse initiatives have already cooled down in favour of new forms of (spatial) computing, generative AI technologies (such as ChatGPT and Dall-E) and solutions powered by LLMs or text-to-image models could have a major transformational business impact across the retail value chain.

IDC data shows that 40% of retailers are in the initial exploration phase of the technology, while 21% are actively investing in the implementation of generative AI tools for the year ahead. We can already see some relevant applications in the areas of product development, merchandising, supply chain, marketing, and customer experience.

Organisations such as Coca-Cola, Mattel, and Carrefour are piloting generative AI applications ― even though still on a limited scale and predominantly with a test-and-learn approach.

According to IDC findings, 50% of retailers expect to prioritise generative AI use cases for marketing in the next 18 months. In particular, generative AI could have a tremendous impact on the automation and personalisation of resource-intensive, time-consuming ecommerce processes such as product page descriptions, images/videos, and marketing copy.

For example, the Chinese ecommerce giant JD.com announced the imminent release of its own retail-specific ChatGPT solution, which aims to improve online retailers’ product listing rankings on search engine results pages (SERPs), generate product descriptions tailored to a shopper’s preferences, and optimise online product image and video generation processes.
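At its simplest, this kind of personalised product copy generation amounts to assembling a prompt from catalogue attributes and shopper signals and sending it to an LLM. A hypothetical sketch follows, in which the field names and prompt wording are invented for illustration (this is not JD.com’s actual system, and the LLM call itself is omitted):

```python
def build_prompt(product, shopper):
    """Assemble a product-description prompt from catalogue and shopper data.

    The fields and wording are illustrative; real systems add brand
    guidelines, length limits, and compliance constraints.
    """
    attrs = ", ".join(f"{k}: {v}" for k, v in product.items())
    prefs = ", ".join(shopper["interests"])
    return (
        f"Write a short product description for an online store.\n"
        f"Product attributes: {attrs}\n"
        f"Emphasise aspects relevant to a shopper interested in: {prefs}\n"
        f"Tone: {shopper.get('tone', 'friendly')}"
    )

prompt = build_prompt(
    {"name": "Trail Runner 2", "weight": "240g", "drop": "6mm"},
    {"interests": ["marathon training", "lightweight gear"],
     "tone": "energetic"},
)
# The prompt would then be sent to an LLM to produce the final copy.
```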

Overall, as shown by the IDC data cited above, the most promising and imminent area of investment for generative AI in the retail sector is marketing and, more specifically, digital marketing.

Even though, in the near future, the technology could raise important questions about proprietary data sharing and customer data privacy, the use of generative AI for text and image generation could undoubtedly enhance and streamline the ecommerce shopping experience, leading to higher profitability for retailers’ online channels.

Architecture, Engineering, and Construction

The built environment sector has long been considered behind the curve when it comes to productivity and the adoption of digital technology. But emerging technologies, including generative AI, are accelerating innovation across the sector and aligning it with other industries.

According to an IDC Survey (Future Enterprise Resiliency & Spending Survey Wave 2, IDC, March 2023), 25% of resource and construction companies are investing in generative AI technologies this year, just above the industry average.

The potential of generative AI spans the building life cycle. When planning and designing a building, drawings and BIM models typically take weeks or months to produce. Generative AI has the potential to generate building designs in an afternoon, based on pre-defined criteria such as building codes, site conditions, and sustainability standards.
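One common pattern for this kind of criteria-driven design is generate-and-filter: propose many candidate designs, discard those that violate constraints, and rank the rest. A toy sketch, with invented constraint values standing in for real building codes, site conditions, and sustainability standards:

```python
import random

# Invented thresholds standing in for a building code, site limits,
# and a sustainability standard.
MAX_HEIGHT_M = 30.0        # zoning height limit
SITE_FOOTPRINT_M2 = 900.0  # buildable plot area
MIN_GLAZING_RATIO = 0.2    # daylighting proxy
FLOOR_HEIGHT_M = 3.5

def random_candidate(rng):
    """Propose a massing option: footprint, floor count, glazing."""
    return {
        "width": rng.uniform(10, 40),
        "depth": rng.uniform(10, 40),
        "floors": rng.randint(1, 12),
        "glazing_ratio": rng.uniform(0.05, 0.6),
    }

def feasible(d):
    """Keep only candidates that satisfy every constraint."""
    return (d["floors"] * FLOOR_HEIGHT_M <= MAX_HEIGHT_M
            and d["width"] * d["depth"] <= SITE_FOOTPRINT_M2
            and d["glazing_ratio"] >= MIN_GLAZING_RATIO)

def generate_designs(n_candidates=1000, keep=5, seed=0):
    rng = random.Random(seed)
    candidates = [random_candidate(rng) for _ in range(n_candidates)]
    viable = [d for d in candidates if feasible(d)]
    # Rank by total usable floor area as a stand-in design objective.
    viable.sort(key=lambda d: d["width"] * d["depth"] * d["floors"],
                reverse=True)
    return viable[:keep]

top_designs = generate_designs()
```

Production generative design tools work from far richer models (BIM geometry, structural simulation, energy analysis), but the propose-check-rank loop is the same.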

The construction process is also ripe for innovation: studies find that the need to correct errors during projects accounts for between 5% and 12% of costs. Here, generative AI can create optimised construction schedules and augment supply chain and material planning.

The opportunities extend to a building’s operation through to its demolition and recycling.

As with all industries, these opportunities must be balanced with potential risks. For AEC companies, there are specific physical safety risks associated with using generative AI for the automation of building designs and compliance checks. The correct safeguards and checks will need to be put in place as these technologies are piloted and rolled out.

Generative AI models also require extensive training on large high-quality data sets: the industry’s legacy of digital immaturity and data fragmentation will affect, but not stall, the rate of innovation.

Moving Forward

In conclusion, as the field of generative AI continues to evolve rapidly, it is paramount to cultivate strategies that enable us to navigate through the noise and discern between hype and reality.

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

By gaining a clear understanding of the true potential and limitations of this technology, we can effectively harness its power. The wide-ranging applications of generative AI across various industries have the potential to reshape the way organisations manage their businesses and increase efficiency and productivity.

However, amid the excitement and buzz, it is vital to approach the subject with a discerning eye. Adopting an approach based on use cases, which reveals tangible evidence based on real-world results, becomes an imperative for tech vendors and end-user organisations alike.

Drawing upon practical applications and real-world experiences provides invaluable context, allowing us to differentiate between exaggerated claims and genuine achievements. By prioritising the examination of use cases and seeking concrete results, we deepen our understanding of the true potential and limitations of generative AI.

Another angle of this discerning strategy is to rely on subject experts and look for insights connected to the industry in question, as experienced professionals in the field are the best source of reliable and up-to-date information. Moreover, this article was written by several humans, applying human intelligence with the help of computers, not generative AI.

Contributing analysts: Adriana Allocato, Davide Palanza, Gaia Gallotti, Jan Burian, Louisa Barker, Massimiliano Claps and Sofia Poggi

If you want to know more about generative AI visit our website, or for more in-depth industry insight click here.

Unless you’ve been living under a rock for the past six months, you’ll have heard of generative AI – technology that enables computers to create synthetic data or digital content based on previously created data or content. The launch of ChatGPT in late 2022 lit a fire under this emerging space and seemingly overnight, hundreds of millions of people became inspired by the results of work that had already been going on for years within academic and commercial technology vendor research departments.

Earlier in June we spent two days touring around investment banks and hedge funds in London to talk to investors about generative AI and answer their questions.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

We had many great, in-depth discussions. Here are the questions that came up most frequently.

  1. Where Is the Value in Generative AI in the Short, Medium, and Long Term?

Today, most of the value is being captured by hardware vendors – most notably NVIDIA, which has seen its share price take off following a sharp upswing in its predicted revenues. As the market-leading provider of GPUs, with a strong enabling software story and an emerging as-a-service play too, NVIDIA is very well positioned to capitalise on the generative AI boom.

Of course, NVIDIA isn’t the only vendor that stands to benefit; AMD and other semiconductor vendors (including start-ups like Graphcore, Cerebras, and Moore Threads) are emerging as challengers, and generative AI platforms will drive storage and networking infrastructure investments too.

In the short to medium term, hyperscale public cloud providers can also expect to benefit significantly. With its early move investing in OpenAI and accelerated investments in generative AI across its software portfolio, Microsoft is in a particularly strong position; but AWS, Google, and Oracle are all also making significant moves in this space.

In the medium term, platform and application vendors also stand to benefit, although the value equation for them is less clear-cut. There are significant question marks over which generative AI use cases can support direct monetisation and which will be important to implement from a defensive point of view. Many of the costs associated with managing generative AI models for scale, security, privacy, and trust will also fall on their shoulders.

  2. What Will Have to Be True to Make GenAI a Truly Broadly Adopted Technology?

Right now, we’re still in “year zero” for generative AI in a commercial context. There is still a lot of confusion around the technology and its applicability to practical, real-world use cases.

What is already clear, though, is that publicly shared foundation models delivered as a service (such as those hosted by OpenAI) will only be suitable for a subset of enterprise use cases. For many others, enterprises will use fine-tuned, specialised, domain-specific models made available directly to them on a private (or controlled) basis.

The current state-of-the-art in generative AI yields systems that are prone to accuracy problems, difficult to control and predict, and expensive to run. All of these issues need to be worked on.

  3. What Are the Implications for the Software Landscape?

Every software vendor that IDC speaks to is updating or recreating its product roadmap to incorporate a generative AI strategy. Obviously, this will play out differently across infrastructure, platforms, and applications; however, certain common questions are being asked:

  • Should we develop our own large language models, or should we partner with model providers like OpenAI, Anthropic, Cohere, and AI21 and tune their models for our software capabilities?
  • How should we price our new generative AI features?
  • Should we include access to customer data for model training as part of a new set of licensing terms and conditions? And what, if anything, do we offer in return?
  • Do we need to evolve our support models to include service level agreements (SLAs) on accuracy for certain use cases being delivered?

Across all these questions, what is clear is that margin protection will be a major issue for software vendors over time – especially those with questionable pricing power. In addition, there will be increased requirements for additional levels of support to deal with model, context, and data drift. For the application players, forms-based computing as a basis for applications will likely disappear over time, and certain markets – for example, salesforce automation and human capital management – could be redrawn in the medium term.

As part of these changes, what is becoming clear is that the application vendors that are cloud laggards will be AI laggards, and that platforms will continue to dominate the software landscape.

More importantly, incorporating trusted and responsible AI principles into both product development and customer engagement will move from being a differentiator in the short term to table stakes in the medium term.

  4. What Are the Implications for Developers?

There’s been a significant amount of excitement about the ability of generative AI services (such as GitHub Copilot, Replit Ghostwriter, and Warp AI) to generate code, documentation, test scripts, and more.

Today’s state-of-the-art models are not going to put developers out of work. Rather, for some specific types of development work, and for some particular types of software asset being created, generative AI services are very likely to help developers accelerate their efforts to deliver working software, acting side-by-side with human developers in a “CoPilot” arrangement.

But it’s important to keep things in perspective: when we zoom out to consider the broader software delivery lifecycle, pro-innovation developers happy to experiment with new tools tend to bump into deployment, operations and support professionals who are much more risk averse.

  5. What Are the Implications for Services Providers?

Lastly, many of the investment teams we spoke to were very interested in discussing how professional services (particularly IT services) firms might be impacted by generative AI. Will it bring them major new opportunities? Or will its ability to drive automation of knowledge work mean that it forces providers to cannibalise their own businesses?

Our early research shows that more than 65% of early adopters of generative AI capabilities agree or strongly agree that their need for external services providers will be reduced in the future.

The potential impact of generative AI on project delivery is, in some ways, analogous to the potential impact of low- and no-code development tools; if providers can embrace these tools effectively and also deliver trusted solutions to clients, they may find fewer hours are required to deliver projects – but outcomes will be improved for everyone.

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

The arrival of Generative AI technologies has created what we believe to be a seminal moment for the industry: it will be so impactful that it will influence everything that comes after it. However, we believe it is just the starting point. We think that Generative AI will trigger a transition to AI Everywhere, moving us from the use of narrow AI for specific use cases to the use of AI across a broad range of use cases simultaneously.

This means that it will impact every element of the technology stack, and also drive a rethink of all horizontal and vertical use cases. However, given the questions around risk and governance, it will also require every organization to develop and incorporate an AI ethics & governance framework to deal with the risks mentioned earlier.

The investors that we spoke to in London agreed that the tech industry needs to take a balanced approach to commercializing the opportunity, while also ensuring that policies and regulations continue to protect consumers, enterprises and society as a whole.

Neil Ward-Dutton - VP AI, Automation, Data & Analytics Europe - IDC

Neil Ward-Dutton is vice president, AI, Automation, Data & Analytics at IDC Europe. In this role he guides IDC’s research agendas, and helps enterprise and technology vendor clients alike make sense of the opportunities and challenges across these very fast-moving and complicated technology markets. In a 28-year career as a technology industry analyst, Neil has researched a wide range of enterprise software technologies, authored hundreds of reports and regularly appeared on TV and in print media.

The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.

GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.

We observe three ways that industrial users can engage with GenAI:

  1. Publicly Available Tools: ChatGPT-like tools provide users with information, content generation, or code. These publicly available tools and apps provide solid value to users. From a process area point of view, the great benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned using them to prevent sensitive data leakage.
  2. Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. It can take the form of "copilots": AI systems designed to assist and support human users in creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
  3. Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Be Ready — But Careful

In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models and predictive maintenance driven by complex simulations using soft sensors.

Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.

AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.

AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.

Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.

Output Challenges

Like any other methodology that serves industry, GenAI outputs must be 100% reliable. Most readers are probably familiar with the concepts of repeatability and reproducibility. As a reminder, repeatability measures the variation in results when the same experiment is repeated under identical conditions, whereas reproducibility measures whether the same results can be obtained when conditions change (different operators, instruments, or laboratories). Both are means to evaluate the stability and reliability of an experiment and are key factors in the uncertainty calculations of measurements.

GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.

GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.

GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.

Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.

If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.

We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.

Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.

Avoiding AI hallucinations entirely is challenging, but several actions can limit their occurrence or minimize their impact. Let’s touch on a few:

  • Ensure that the AI model is trained on a diverse and representative data set that covers a wide range of examples from the real world.
  • Preprocess and clean the training data to remove inaccuracies, outliers, or misleading information, improving the quality and reliability of the model’s outputs.
  • Continuously evaluate and monitor the model’s outputs to identify instances of hallucination or generation of unrealistic content.
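The data-cleaning step described above can be sketched in a few lines. This is a minimal, illustrative example using a simple z-score filter on numeric values; the function name, the threshold and the readings are assumptions for illustration, not part of any specific GenAI pipeline.

```python
from statistics import mean, stdev

def remove_outliers(values, z_threshold=2.0):
    """Drop readings whose z-score exceeds the threshold.

    A toy version of the preprocessing step described above:
    removing obvious outliers from a training set before the
    model ever sees them. The threshold is illustrative.
    """
    if len(values) < 2:
        return list(values)
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return list(values)
    return [v for v in values if abs(v - mu) / sigma <= z_threshold]

# Example: a sensor-like series with one implausible spike.
readings = [20.1, 19.8, 20.4, 20.0, 95.0, 19.9, 20.2]
clean = remove_outliers(readings)  # the 95.0 spike is dropped
```

A real pipeline would combine statistical filters like this with domain rules and manual review; the point is simply that cleaning happens before training, not after.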


Evolving Challenges

Because they involve generating new and original content without explicit programming, proving the reliability of GenAI models can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.

Commonly used methods include defining and applying appropriate evaluation metrics to assess the quality and reliability of generated content. Human evaluation is also useful, including subjective assessments in which reviewers rate the quality and reliability of generated content.

For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.

Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.
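The test-set validation and adversarial-testing ideas above can be sketched as follows. Everything here is illustrative: the `generate` stand-in, the prompts and the checks are hypothetical placeholders for a real GenAI endpoint and domain-specific scenarios.

```python
def evaluate_on_test_set(generate, cases):
    """Run a generative function against a fixed test set and
    report the share of outputs that satisfy each case's check."""
    results = []
    for case in cases:
        output = generate(case["input"])
        results.append({"input": case["input"],
                        "passed": case["check"](output)})
    passed = sum(r["passed"] for r in results)
    return passed / len(results), results

# Illustrative "model": echoes a canned answer per prompt.
canned = {"capital of France": "Paris", "2+2": "4"}
model = lambda prompt: canned.get(prompt, "")

cases = [
    {"input": "capital of France", "check": lambda o: o == "Paris"},
    {"input": "2+2", "check": lambda o: o == "4"},
    # Adversarial edge case: an unknown prompt should not invent an answer.
    {"input": "capital of Atlantis", "check": lambda o: o == ""},
]
score, detail = evaluate_on_test_set(model, cases)
```

For long-term monitoring, the same harness can be rerun on a schedule and the pass rate tracked over time, so regressions in a supply chain intelligence application, for example, surface as a falling score.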

The Sky is the Limit — For Now

In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.

To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.

Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.

Beware the Competition!

GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.

But beware: Do not allow the competition to outrun you in terms of GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.

At the same time, do not underestimate the risk of intellectual property (IP) leakage, or the unauthorized use, disclosure, or exposure of valuable intellectual property through the utilization of generative AI models. Embed an IP leakage prevention mechanism in your general AI and data governance. This should include removal or anonymization of sensitive or proprietary information from training data sets.

As always, stick with what works, but keep an eye on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative operations for businesses of any size.

You often see it on television: programs about people who are struggling financially. They run out of money at the end of the month, they can’t sell their house, they have a problematic debt burden, and so on. A common denominator is often the lack of insight into their own situation, and while coming up with ways to save money may not be very difficult, actually implementing and sticking to them is much harder.

I mean, it’s easy for an outsider to suggest that someone should get rid of their dog, but if that pet is their only source of comfort, it will take some effort.

The same goes for cloud costs: saving money is easier said than done. There are all sorts of great tools available from both cloud providers and third parties to help you understand your costs.

These tools provide various reports and dashboards, and even recommendations on which instances to remove or resize (rightsizing). With the right knowledge, you can also determine how to use discount options (reserved instances, savings plans, reserved capacity, etc.), how to manage licenses intelligently, and what you can do in your application architecture to save costs. And, of course, you can always turn off instances when you’re not using them.

All of this insight is great, but then comes the second part. Just as people have a hard time saying goodbye to their pets, users and administrators have a hard time shedding their old habits and ways of thinking. And that’s something cloud providers never talk about.

For example, consider turning off instances outside of working hours. In theory, this is an excellent way to save money, but instances are part of applications, which in turn are part of chains. It can happen that data exchange takes place in a chain outside of working hours.

Testing teams that are under a deadline may also need their environment outside of the predetermined working hours. And if environments are used in the management chain, they must also be available after working hours in case of an emergency. So savings are theoretically simple, but practice is more complicated. It can be done, but it takes a lot of effort.
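As a rough illustration of why after-hours shutdown needs exception handling, here is a minimal sketch of a schedule policy with exclusions. The working hours, tags (`always_on`, `deadline_exception`) and fleet are entirely hypothetical; a real implementation would query the cloud provider’s API and the organization’s configuration database.

```python
from datetime import datetime

WORK_START, WORK_END = 8, 18  # 08:00-18:00 weekdays, illustrative

def should_stop(instance, now=None):
    """Return True if an instance may be stopped right now.

    Honours exception tags for the scenarios described above:
    chains that exchange data at night, test teams under deadline,
    and environments that must stay up for emergencies.
    """
    now = now or datetime.now()
    in_working_hours = WORK_START <= now.hour < WORK_END and now.weekday() < 5
    if in_working_hours:
        return False
    if instance.get("always_on") or instance.get("deadline_exception"):
        return False
    return True

fleet = [
    {"name": "batch-worker", "always_on": False},
    {"name": "integration-chain", "always_on": True},     # exchanges data at night
    {"name": "test-env-42", "deadline_exception": True},  # team under deadline
]
saturday_night = datetime(2023, 7, 1, 23, 0)
stoppable = [i["name"] for i in fleet if should_stop(i, saturday_night)]
```

Even this toy policy shows the real work: the savings logic is trivial, but maintaining the exception tags requires exactly the conversations with users and administrators that the article describes.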

Rightsizing is also less straightforward than it seems. Users and administrators are often hesitant to remove capacity: users see their performance decrease, and administrators see the risk of more outages because there is less excess capacity to handle issues. In the latter case, you need to analyze where these issues are coming from: a poorly performing application can benefit from more capacity, but that is not a long-term solution.

If the roof is leaking, you can replace the bucket you use to catch the water with a mortar tub, but even that will eventually fill up. Ultimately, you’ll have to repair the roof.

So, objections can be raised for all types of savings. Eventually, you’ll need to adopt an approach that not only makes costs visible but also involves users and administrators, and leads to the right considerations on where to save on your cloud costs and where not to.

Don’t know where to start? Can’t figure it out quickly enough? IDC Metri has helped several organizations get started. Our specialists can help kickstart your cost-saving efforts in the cloud. Because understanding costs is one thing, but it’s only useful if they actually decrease.

 

Want to learn more? Subscribe to IDC Metri’s monthly newsletter full of actionable insights on IT benchmarking, intelligence, sourcing and more.

I was born in Ravenna, on the east coast of Emilia-Romagna, one of the most liveable and prosperous regions in Italy. Emilia-Romagna is home to 7.3% of the Italian population. It accounts for 9.2% of GDP and 11.8% of agricultural production.

It headquarters globally successful firms in automotive, motorbikes, food production, ceramic tiles, textile and fashion, biomedical engineering, construction, woodworking equipment and much more. Unemployment is at 5.1%, well below the 2022 national average of 8.2%. Life expectancy is higher than the national average.

There are white sandy beaches, natural reserves in coastal wetlands, and beautiful hills and mountains, which combined with a rich heritage — Ravenna alone boasts eight UNESCO heritage sites — and amazing food and wine attract tens of millions of tourists every year.

Besides these material treasures, there is a unique way of living in Emilia-Romagna. And even more so in Romagna, where I grew up; there’s an old saying that you can tell if you are in the Romagna part of the region because when a stranger shows up at someone’s door, they are welcomed with a smile and a glass of wine. On the Emilia side, they’ll be equally warmly welcomed, but with a glass of water!

There is a sense of shared joy, a passion for life and a pride in belonging to one’s community. A shared sense of resilience drives people through the hardships of life with a smile on their face, always trying to put a smile on someone else’s. Because there is always a little bit of magic, even in the small things.

As Federico Fellini, the world-famous movie director and one of the most beloved children of our region, once said: “Life is a combination of magic and pasta.”

It feels good to be a Romagnolo. And to visit Romagna … unless you happened to be there in the first two weeks of May 2023.

Smart River and Water Management: Preparing for Foreseeable Disasters

After many months of drought, in the first 17 days of May 2023, Romagna was hit by as much rain as it usually gets in six months. In some areas this meant up to 400mm of rain in two weeks. To put things in perspective, one of the worst hit municipalities, Faenza, which is home to 60,000 people, experiences on average 760mm of rain a year.

Stereotypically rainy London gets 690mm a year. The result of this unusually heavy rain was that 23 rivers burst their banks, resulting in 50 floods; 305 landslides devastated hills and mountains, 14 people died and over 36,000 people were displaced from their homes. The estimated economic damage to homes, factories, farms and public infrastructure is north of €5 billion, with around €600 million just to rebuild public infrastructure.

Climate change is increasing the frequency and intensity of these extreme weather events. Long-term environmental sustainability actions, which are progressing way too slowly, will not be enough.

Resilience to short-term shocks is imperative. Money is not the problem; in fact, there is an estimated €8 billion available from the Italian COVID Recovery and Resilience Plan and the “Italia Sicura” (Safe Italy) plan to make public infrastructure more resilient. This, however, is at risk of not being spent, or not spent well, because of lack of planning, skill gaps, slow public procurement, and insufficient competencies and capacity to audit.

Technology innovation is not a silver bullet, but when implemented wisely it can help fill some of those gaps. The increasing availability and granularity of data from satellite images, IoT sensors, weather monitoring and forecasting models already tell us that Italy has the highest amount of rain in Europe, with 300 billion cubic meters a year.

Building permitting systems, public works inspection systems and other sources tell us that Emilia-Romagna was the fourth worst region in terms of soil consumption in Italy in 2021, including in areas at high risk of flooding. By building on the existing knowledge, collecting more data and turning the data into intelligent smart river and water management insights, governments, water utilities and the public could make better decisions across the disaster resilience life cycle, from mitigation to preparedness, from response to recovery.

  • Mitigation: Governments can use a wide variety of tools to develop hazard maps that can identify areas most at risk and feed into planning and preparedness systems. Policymakers and building inspectors can feed intelligent insights into planning and operational simulation tools, such as digital twins, to simulate the impact of building code and permitting decisions to reduce soil consumption and require the use of more resilient building techniques and materials.
  • Preparedness: The benefits of building flood resilient systems (dams, levees, flood walls and diversion canals, etc.) to protect natural systems such as wetland, marshes and beaches, and using resilient building techniques such as tiled pavements instead of concrete for parking lots and roads to increase water absorption, can be augmented by making these assets and tools intelligent. The intelligence from those systems can enable real-time or preventive decisions about diversion tactics, rather than reacting only when the flood is too close.
  • Response: Real-time data from weather forecasting models, integrated with data from dam and river sensors, should be analysed to detect anomalies and automatically raise emergency alerts that promptly notify citizens. This is far better than relying on fire and police patrols roaming the roads of small rural villages and towns with loudspeakers to tell citizens to evacuate their homes, or expecting mayors to post videos on social media and hoping everybody pays attention, as happened in Romagna in May 2023. More intelligent use of data can also provide insights for command-and-control personnel to coordinate first responders and orchestrate the supply of food, clothes and medicine for shelters, instead of relying on emails, spreadsheets and phone calls.
  • Recovery: Digital twins would allow evidence-based infrastructure planning decisions and monitoring the progress of investments aimed to rebuild infrastructure, therefore increasing speed and transparency of projects to avoid wasting time and money. AR/VR tools can help engineers conduct inspections when anomalies are detected.
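The automated alerting described in the Response bullet can be sketched with a simple threshold scan over sensor readings. The readings and thresholds below are illustrative; a real system would fuse rainfall forecasts, dam sensors and hydrological models before alerting anyone.

```python
def river_alerts(levels, warning, danger):
    """Scan hourly river-level readings (metres) and emit alerts.

    A toy version of the automated alerting described above:
    each reading is compared against a warning and a danger
    threshold, and any breach is recorded with its hour index.
    """
    alerts = []
    for hour, level in enumerate(levels):
        if level >= danger:
            alerts.append((hour, "EVACUATE"))
        elif level >= warning:
            alerts.append((hour, "WARNING"))
    return alerts

readings = [2.1, 2.4, 3.0, 3.8, 4.6]  # a rising river, metres
alerts = river_alerts(readings, warning=3.5, danger=4.5)
```

The design point is latency: a rule this simple, wired directly to sensors and a public notification channel, acts in seconds, whereas patrols with loudspeakers act in hours.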

The same technology infrastructure — with a few additions in terms of sensors and applications — will provide intelligent insights for other use cases, such as water conservation in dry seasons, leakage reduction, biodiversity protection in rivers, marshes and ports, sustainable water transportation, and water quality.

Only two days after the peak of the emergency, millions of euros, as well as food, clothing and other supplies, had been donated to flooded areas in Emilia-Romagna from all over Italy and beyond. Boosted by the typical Romagnolo spirit, spontaneous neighbourhood efforts have mushroomed to clean mud from houses, roads and farms. Beaches have already been cleaned for the upcoming tourist season. But that resolve to recover quickly should not allow us to forget what happened. We know what the future holds. Extreme weather events will happen, not only in well-known high-risk flooding areas, such as the Indian Subcontinent, Southeast Asia, and Pacific and Caribbean Islands, but also in traditionally safer regions of the world.

Technology innovation will be critical to climate change resilience. But technology alone will not be enough. It’s not enough to feel compassion to help when disaster happens. We need to invest in mitigation and preparedness measures that generate the highest long-term returns.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.