The EU’s new Corporate Sustainability Reporting Directive (CSRD) has sent a chill through the business processes of organizations: Companies must modernize their applications and data foundations to enhance their reporting capabilities.

The struggle of companies in Europe to comply with the CSRD was on display at the ChangeNOW global summit, held in Paris at the end of March. Participants at the event — which seeks to map sustainable initiatives, best practices, tools, and technologies — revealed that organizations are lagging when it comes to implementing CSRD.

This is in line with results of IDC’s recent European IT Services Survey (N = 700), which found that just 25.6% of European organizations expect to deploy tech to improve sustainability KPIs as a transformation initiative in the next two years.

The CSRD is having a huge impact on organizations: It imposes reporting standards that compel organizations to publish their ESG information, which must then be verified and audited. All industrial sectors, from large enterprises to SMBs, are subject to a staggered compliance timetable: The first reports must be published between 2025 and 2026 for large enterprises, and in 2027 for SMBs.

Everyone agrees on one point: It’s a race. The timetable is forcing organizations to accelerate work on data collection and qualification, methodologies, and best practices in order to structure and industrialize the creation of these reports.

CSRD weighs heavily at all levels of organizations. It requires a review of business processes and the organizational model, and, therefore, the modernization of core business applications — where the data is. New platforms or custom developments may need to be deployed to consolidate ESG data.

Having examined their data lakes and the shift toward new data architectures, many businesses perceive this as a transformational endeavor.

Like any IT project, such complexity brings opportunities for service providers to support organizations with compliance. IDC surveys have shown that 41.2% of organizations expect partners to play a key role in implementing their sustainability strategy and achieving their objectives.

The Scaling Problem of Legacy Finance

Let’s examine where CSRD creates a bottleneck. Among the processes most affected by the CSRD are those of the finance department. Today, the CFO is one of the guardians of the finance function’s transformation, as its scope has been extended to non-financial matters and CSR.

For example, the French bank Crédit Agricole and cosmetics specialist L’Oréal have entrusted the finance department with their CSRD projects. Experienced in standardized financial reporting, the CFO has the difficult task of reproducing and improving processes by integrating CSRD.

Logical, but still difficult to implement. One of the biggest challenges is getting the different personas impacted by CSRD — and the associated data — to sit at the same table and find a shared communication channel and vocabulary.

These human interconnections represent a real challenge in terms of governance but are necessary to deploy an application modernization strategy and convert the new operational model and business processes into a revitalized IT structure.

Financial IT systems are often very mature. CSRD requires them to scale rapidly to support new workloads in only three years. This includes related data initiatives: mapping data sets, breaking down information silos, increasing automation, and supporting heterogeneous files (mostly PDF or Excel).

The legacy must be modernized within the timeframe of the CSRD. But urgency means risks must be controlled. For example, misunderstanding the regulation and the requested data could have a negative impact on technology commitments and procurement.

Organizations have explored using GenAI to modernize legacy applications and make them “CSRD ready”: collecting, mapping, and consolidating data; generating appropriate information for each criterion; or automating the storytelling inside CSRD reports.

Capgemini has detailed how GenAI could accelerate gap analysis and identify which data is lacking and which data is relevant for presentation. L’Oréal discussed how it believes that GenAI is key to education and acculturation on the criteria and wording of the regulation.

This scenario is in line with our vision for application modernization strategies in Europe.

The implementation of the CSRD — and, by extension, the major theme of sustainability — represents a powerful driver for adapting processes, revitalizing part of the application estate, and establishing a coherent link between IT and new business requirements.

Revitalizing applications to optimize business processes is a key theme of IDC’s European Application Modernization Strategies research program.

Modernize with a Sustainability/ESG Integration Platform

The challenges include making the regulation a starting point for a more global strategy, and placing CSRD and sustainability at the center of the organization’s decision-making and business innovation.

We believe this requires building an enterprise architecture, including modular and loosely coupled components, to integrate systems, applications, and data in a flexible and sustainable way over time.

Such a sustainable integration platform will de-silo business applications, facilitate the continuous collection of data, the industrialization of analytical reporting, and the connection to ecosystems. In short, it means building a dynamic CSR link in the value chain and anticipating the evolution of reporting obligations.
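As a loose illustration of the normalization layer such a platform implies, the sketch below maps ESG records from two hypothetical source systems (an HR export and a facilities export, both invented for this example) into one canonical schema. A real integration platform would industrialize exactly this kind of adapter logic.

```python
# Illustrative only: normalizing ESG records from two hypothetical source
# systems into one canonical schema, the kind of de-siloing a sustainable
# integration platform would make continuous and repeatable.

CANONICAL_FIELDS = ("entity", "metric", "value", "unit", "period")

def from_hr_export(row):
    # The HR tool reports people-related metrics under its own column names.
    return {
        "entity": row["business_unit"],
        "metric": row["kpi_name"],
        "value": float(row["kpi_value"]),
        "unit": row.get("kpi_unit", "count"),
        "period": row["year"],
    }

def from_facilities_export(row):
    # The facilities tool reports energy in kWh; convert to MWh on ingest.
    return {
        "entity": row["site"],
        "metric": "energy_consumption",
        "value": float(row["kwh"]) / 1000.0,
        "unit": "MWh",
        "period": row["period"],
    }

def consolidate(sources):
    """Apply each source's adapter and return one canonical record list."""
    records = []
    for adapter, rows in sources:
        records.extend(adapter(r) for r in rows)
    return records

hr_rows = [{"business_unit": "Retail", "kpi_name": "training_hours",
            "kpi_value": "1200", "year": "2024"}]
fac_rows = [{"site": "Paris DC", "kwh": "52000", "period": "2024"}]

report_data = consolidate([(from_hr_export, hr_rows),
                           (from_facilities_export, fac_rows)])
```

The design choice worth noting is that each source gets a small adapter rather than the reporting layer knowing every system's quirks; adding a new data source then means adding one adapter, not reworking the reports.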

Cyrille Chausson - Research Manager, European Application Modernization Strategies - IDC

Cyrille Chausson is a research manager within IDC's European Cloud Innovation, Services and Skills research team. Based in Paris, Cyrille is responsible for IDC's European Application Modernization Strategies research program. In his role, he offers insights into trends, market dynamics, and strategic investments pertaining to application transformation, migration, development, and delivery. Cyrille's research primarily focuses on the opportunities and challenges that application modernization presents to service providers and IT buyers, as they transition to more digitally oriented organizations and models.

Ransomware attacks have been one of the most high-profile scourges of business over the past decade — and the threat shows no signs of abating. If anything, it has become more prevalent as “ransomware as a service” has lowered the entry barrier for threat actors.

Innovation by cybercriminals keeps security teams on high alert. When governments and security agencies advise organizations not to pay ransom, attackers may switch to extortionware approaches.

Or, sticking with ransomware, they may use AI to augment their capabilities, refine their lures, automate attacks, or hit hundreds or thousands more organizations than they would have been able to previously.

This Is Going To Hurt

According to IDC’s Future of Enterprise Resilience Survey, conducted in November 2023, 63.4% of EMEA organizations with 500 or more employees suffered a ransomware attack that blocked access to their systems or data in 2023.

Which assets are being impacted? According to the survey respondents, the most frequently impacted resources were collaborative applications (37%) such as MS 365 or Google Workspace. These were followed by virtual or physical servers (35%) and public cloud IaaS and PaaS (also 35%). For 34% of organizations, ransomware attacks impacted their partner, supplier, or customer systems.

These impacts reflect the infrastructure and environments in which most modern organizations operate: cloud-based infrastructure and platforms running cloud-based collaborative applications on enterprise licenses for cost efficiency and productivity, often within broader digital ecosystems to enhance operational efficiency.

Targeting what has become the critical infrastructure for operational capability gives cybercriminals the greatest leverage over their victims. The hackers strive to ensure there is no choice but to pay the ransom.

The Best Defense Is… Multi-Layered

Despite the rising volume of attacks, more than one-third of the surveyed organizations stated that no ransomware attacks had managed to block access to their systems or data. These organizations highlighted some of the key technologies that helped them detect the attacks before the malware was able to deploy.

The most frequently cited tool was a cloud security gateway/cloud access security broker (CASB, 30%). This aligns with the operational environments described above, placing protection where it is needed most. Deploying a CASB provides visibility and control over cloud environments and assets, enabling quicker detection and containment of potentially malicious activity.

Threats can come from within the organization as well as outside. A further 26% of respondents said they used specific security analytics aimed at detecting insider threats. The third most common response was SIEM systems (25%), which help by correlating data from multiple sources to identify suspicious patterns and anomalies before an attack. Organizations also mentioned that NDR, identity analytics/UEBA, and EDR helped with detection.
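The correlation principle behind SIEM tooling can be shown with a toy example. This is not a real SIEM; the event format, event names, and thresholds are assumptions for illustration, but the pattern of flagging an asset when several distinct suspicious signals cluster in a short window is the one described above.

```python
from collections import defaultdict

# Toy illustration of SIEM-style correlation: flag a host when at least
# `threshold` *distinct* suspicious event types occur within `window` seconds.
# Event format (timestamp, host, event_type) is assumed for the example.

def correlate(events, window=300, threshold=3):
    by_host = defaultdict(list)
    for ts, host, etype in sorted(events):
        by_host[host].append((ts, etype))

    flagged = set()
    for host, host_events in by_host.items():
        for i, (ts, _) in enumerate(host_events):
            # Distinct event types seen in the window starting at ts.
            kinds = {e for t, e in host_events[i:] if t - ts <= window}
            if len(kinds) >= threshold:
                flagged.add(host)
                break
    return flagged

events = [
    (100, "srv-01", "failed_login"),
    (160, "srv-01", "priv_escalation"),
    (220, "srv-01", "mass_file_rename"),   # classic ransomware precursor
    (150, "srv-02", "failed_login"),
    (9000, "srv-02", "failed_login"),      # isolated event, not correlated
]
```

A single failed login means little; the value comes from correlating independent weak signals in time, which is why the survey respondents ranked SIEM among their most useful detection layers.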

Fundamentally, there is no single technology that is a silver bullet against ransomware. Effective protection depends upon a layered approach that aligns security controls to the environment, infrastructure, and processes of the organization.

As attacks grow more prevalent, fueled by ransomware as a service and AI-augmented attack campaigns, EMEA organizations need to be on their guard with a mix of technologies to detect and contain malware payloads before they can be deployed.

Mark Child - Associate Research Director, European Security - IDC

Associate Research Director Mark Child of IDC’s European Security Group leads the group's Endpoint Security and Identity & Digital Trust (IDT) research for both Western Europe and Central & Eastern Europe. He monitors developments in security technologies and strategies as organizations address the challenges of evolving business models, IT infrastructure, and cyberthreats. Mark's coverage includes in-depth security market studies, end-user research, white papers, and custom consulting.

Changes are occurring in the work environment that can no longer be ignored or dismissed with superficial comments like, “This is how things are evolving, so you need to accept them.”

In this day and age, the full employee experience package must be nurtured. Sharp attention must be paid to the demands of younger employees entering the work environment.

The statements above are some of the thought-provoking perspectives that technology end users voiced to IDC during deep-dive discussions at IDC’s Future of Work and AI Summit in London and our Future of Work Summit in Milan. During these events, both of which occurred in March, IDC held free-ranging conversations with more than 100 Italy- and U.K.-based IT and HR experts who work in industries including education, manufacturing, finance, and healthcare.

The talks revealed 8 Future of Work trends that are likely to impact workspaces in 2024 and beyond.

  1. Using Tech to Boost Productivity and User Experience in Hybrid Workspaces: The experts IDC spoke to supported greater technology adoption, including of intuitive technologies, to unlock productivity improvements and help employees close digital skills gaps. They emphasized the need for workplace cultural change, including clear communication to employees on the benefits of new technologies. The experts noted that hybrid working models will require organizations to redesign office spaces to enable digital parity between remote and onsite workers.
  2. Assessing AI’s Impact on the Workforce: The experts were generally of the view that AI and automation will make a positive impact on processes, employee productivity, and innovation. Organizations should make upskilling a priority, as new skills will be required to advance these technologies. Attention must also be paid to the EU’s new Artificial Intelligence Act, which demands greater transparency and traceability of AI initiatives, as well as contains requirements around removing bias that could be fed into large language models (LLMs).
  3. Ensuring Cybersecurity in Flexible Work Environments: Cybersecurity remains critical, especially for organizations that employ remote workers and/or employees who split time between working at the office and at home. IDC’s discussions pointed to the need to deploy multiple layers of safeguards, such as cryptography and virtual desktops, to safeguard data and assets connected to the organization’s networks. Regardless of their location (i.e., home or office), workers must be continually trained on cybersecurity and on how to protect IT and OT data in converged environments.
  4. Leveraging Data, Automation, and Innovation to Build Intelligent HR: When applications are being created, employees in different functions may not have the same understanding of the processes that need to be designed. A pivotal initial step to ensure user adoption is to make certain that all involved share the same understanding of goals and processes. The IT function, for example, should not spend time developing solutions that will not ultimately serve user needs efficiently and effectively. A complicating factor is that many organizations are still stuck with legacy solutions that hinder technological advancement. Governance is another challenge. Many organizations are struggling to develop and implement processes that guarantee clean and ready data for use in AI and GenAI applications.
  5. Fine-Tuning Hybrid and Flexible Work Models: Hybrid and flexible models require a high level of employer trust in workers’ ability to be productive if not in the office. Some of the experts IDC spoke to indicated that many in Italian senior management remain skeptical about the benefits of work-from-home policies and continue to demand that their workforces return to the office. On the workforce side, there is growing demand for objectives and detailed KPIs. In general, the experts regard hybrid and flexible working models as at least as productive as office-only models — in some cases more so. Flexible working models can be critical to help ensure employee engagement, especially for those who are caregivers, parents, or members of the younger generation.
  6. Boosting Employee Engagement and Retention: Companies can utilize multiple levers to improve employee engagement and retention. These include fostering in-office/in-person connections, team building, and providing clear and continuous feedback to employees from the top to the bottom of the organization. The role of technologies in such initiatives is pivotal. Employees, for example, are usually happier and more engaged if they are satisfied with the technologies used in their workplace. The experts at our meetings also told us that the expectations of the incoming generation of workers are driving organizations to reshuffle their employee engagement priorities and requirements.
  7. Connecting the Future of Work and Sustainability: Organizations in the U.K., Ireland, and Italy are increasingly responsive to environmental, social, and governance (ESG) priorities. Much effort and resources are being invested in the “E” component as companies act to shrink their carbon footprints, for example, by shifting to more carbon-neutral cloud solutions. Initiatives connected to the “S” component are raising organizational awareness of issues like gender parity, inclusion, digital accessibility, and community commitment. “G” components focus on the R&D and implementation of technologies to collect and analyze reporting data. To meet their ESG commitments efficiently, companies are seeking to onboard sustainability experts across all organizational levels.
  8. Analyzing How Skills and Talent Are Evolving: Organizations continue to struggle to find employees with the skills to help the company stay abreast of new technology and innovations. On one hand, we see AI boosting productivity and making some tasks and jobs obsolete. On the other, there is rising demand for humans with the “hard” technical skills to effectively manage AI and connect AI with humans. Demand is also rising for humans who possess the “soft” skills to manage the creativity and needs of human employees. Employees who can effectively fulfill these roles will be highly valued and rewarded.


Many of the above points are succinctly summarized in IDC’s Human-First Future of Work Framework, which is based on five pillars that are essential for any business seeking to build a sustainable, human-first work environment.

Interested in a deeper understanding of the issues discussed here? Contact IDC’s Future of Work Team or connect with us on LinkedIn for live updates from the EMEA Xchange Summit in Malaga on April 15–16, 2024.

Erica Spinoni - Senior Research Analyst, European Research - IDC

Erica Spinoni is a senior research analyst for the European Research Team. Based in Milan, Spinoni supports IDC’s European Digital Business Strategies and IDC’s European Future of Work practices. In her role she advises ICT players on European digital business and future of work market trends, supporting them in their planning, go-to-market and sales cycles with market research, custom projects, as well as honoraria.

Digital-native businesses’ (DNBs) deal-making, valuations, and exit activities were all down in 2023 in the European venture market, according to Atomico’s The State of European Tech 2023. This return to earlier norms, however, can be considered a worldwide phenomenon.

The key fundamentals that led to a downturn in the funding environment in the last two years are still in place. Limited partners are still cautious about providing more money to the venture capital (VC) ecosystem, due to persisting macroeconomic and geopolitical uncertainties. With difficulties continuing in the funding environment, the number of exits is expected to remain limited in the short term, in favor of M&A and consolidation.

With all this as a backdrop, what will 2024 look like for European DNBs?

From AI to Sustainability Technologies: Where Is the Money?

European venture capital firms hold a substantial amount of dry powder due to this lack of activity, which could be invested in selected deals this year. A rebound is expected in 2024 in the event of interest rate cuts, which could lower limited partners’ risk perception. Only 10 new unicorns (privately owned companies with valuations above $1 billion) were created in Europe in 2023, down from 46 in 2022; with an upturn in deal-making activity, we expect a larger number of DNBs to join the unicorn cohort.

European artificial intelligence DNBs are expected to be at the forefront of investors’ interest again in 2024. While VC and corporate VC deals in 2023 focused on large language models (LLMs), deals will most probably shift toward vertical AI applications. With regulations such as the EU AI Act coming into effect, investment will also shift toward start-ups and scale-ups focused on AI security and privacy.

Sustainability technology DNBs, from carbontech to climatetech, dominated capital flows in 2023, and the segment is expected to attract more capital in 2024 too, with climate change a key topic on European (and worldwide) leaders’ agendas, as demonstrated by the outcomes of COP28. Furthermore, tech start-up growth in Europe is also sustained by national and EU stimulus funds, such as the European Innovation Council (EIC) work programme 2024, which allocates €1.2 billion for strategic technologies and for scaling up companies in deep tech innovations, from spacetech to quantum technologies.

How Will External Conditions Shape European DNBs’ Technology Investments?

Uncertain market conditions push digital natives to reprioritize their tech spending toward optimizing processes and increasing profitability, but tech expenditure will not be cut, as it is essential to sustain their digital-based business models. More specifically, security technologies and cloud platforms are pivotal investments to develop secure and scalable digital products and services, whereas increased focus on AI and automation technologies is set to make larger DNBs leaner and more cost effective. Data infrastructure, integration, and quality investments will remain pivotal to boost wider AI adoption, targeting customer experience initiatives as well, with the aim of retaining and enlarging the existing customer base.

Want to know more? You can find these and other key trends driving the European DNB landscape in IDC’s 2024 Digital-Native Business Trends or by getting directly in touch at mlongo@idc.com.

Martina Longo - Research Manager, Digital Business - IDC

Martina Longo is a research manager in the IDC Digital Business Research Group. In her role she advises ICT players on how European organizations create business value using digital technologies. She also leads IDC European Digital Native Business research, focused on those enterprises born in a modern technological world in a mix of start-ups, scale-ups, and more mature digital natives. Within the European Digital Business research, the European Digital Native Business, Start-ups and Scale-ups theme advises technology suppliers on the market dynamics and segmentation, business priorities, tech buying patterns, and go-to-market approaches (sell to/sell with) needed to engage digital native organizations in Europe.

San Francisco-based OpenAI’s introduction of ChatGPT on November 30, 2022, marked a significant milestone in the development of large language models (LLMs) and generative AI (GenAI) technology. The launch by OpenAI, the creator of the initial GPT series, sparked a race among technology vendors, system providers, consultants, and app builders. These entities immediately recognized the potential of ChatGPT and similar models to revolutionize industry.

2023 saw a surge in efforts to develop GenAI tools that are smarter, more powerful, and less prone to hallucinations. The competition led to an influx of innovative ideas and tools aimed at harnessing the capabilities of LLMs. The goal became to leverage these models as ultimate tools to enhance productivity, competitiveness, and customer experience across diverse sectors.

With ChatGPT paving the way, a broad range of organizations and professionals are exploring how to integrate GenAI into workflows and solutions. The widespread interest and investment have underscored the technology’s transformative potential and laid the groundwork for its continued evolution in the years to come.

4 Use Cases for GenAI in Manufacturing

In manufacturing organizations, the utilization of GenAI-powered tools and solutions is primarily focused on four key areas:

  1. Content Generation: This includes automated report generation, in which GenAI algorithms are employed to automatically generate reports based on predefined parameters and data inputs.
  2. User Interface Enhancement: This involves the integration of chatbots into user interfaces, enabling more intuitive and interactive communication between users and systems.
  3. Knowledge Management: GenAI facilitates knowledge management by providing co-pilot services that help users access and interpret vast amounts of data and information.
  4. Software Development and Delivery: This encompasses various applications, such as code generation, in which GenAI is leveraged to automate the creation of software code, streamlining development processes.
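The first use case, automated report generation, can be sketched in miniature. The report structure and field names below are invented for illustration; in a real deployment, a GenAI model would draft the narrative sections around this kind of deterministic scaffolding built from predefined parameters and data inputs.

```python
# Toy illustration of automated report generation from predefined parameters
# and data inputs. All names (line, targets, incident strings) are invented.

def shift_report(line, produced, target, incidents):
    attainment = produced / target * 100
    status = "on plan" if attainment >= 95 else "below plan"
    return (
        f"Line {line} shift report\n"
        f"Output: {produced}/{target} units ({attainment:.1f}%, {status})\n"
        f"Incidents: {', '.join(incidents) if incidents else 'none'}"
    )

report = shift_report("A3", produced=930, target=1000,
                      incidents=["sensor fault 14:20"])
```

Keeping the numbers deterministic and letting the model only phrase the narrative is one common way to limit hallucination risk in generated reports.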

According to IDC’s GenAI ARC Survey of 2023, manufacturing organizations are actively evaluating or implementing GenAI solutions.

Around 30% of European respondents have already invested significantly in GenAI, with spending plans established for training, acquiring GenAI-enhanced software, and consulting. Nearly 20% are doing some initial testing of models and focused proofs of concept, but don’t yet have a spending plan in place.

These results suggest steady growth in the adoption of GenAI-powered tools and solutions within the manufacturing sector. The initial hype surrounding GenAI in 2023, fueled by its perceived potential as a “wonder technology,” has evolved into a pragmatic recognition of its capacity to address ongoing challenges such as workforce shortages, skills gaps, language barriers, data complexity, regulatory compliance, and more.

In the manufacturing industry, GenAI is increasingly viewed as an enabling technology capable of facilitating innovation and overcoming barriers to success.

Framework for Manufacturing Organizations to Implement GenAI

To fully capitalize on the potential of GenAI pilots, manufacturing organizations recognize the need for comprehensive frameworks that encompass processes and policies. Key measures include:

  • Data Sharing and Operations Practices: Organizations should prioritize the implementation of practices that ensure data integrity for LLMs developed internally or in collaboration with third parties. This ensures that data used in GenAI models is accurate, reliable, and ethically sourced.
  • Corporate-Wide Guidelines for Transparency: Guidelines should be established to evaluate transparency and track the use of GenAI code, data, and trained models throughout the organization. This promotes accountability in GenAI usage.
  • Mandatory GenAI Awareness and Acceptable Use Training Programs: Mandatory training programs should be implemented to raise awareness of GenAI capabilities and ethical considerations among designated workforce groups. This helps ensure that employees understand how to responsibly utilize GenAI technologies.

As excitement over the capabilities of GenAI has died down, organizations are becoming increasingly aware of the risks posed by potential intellectual property theft and privacy threats linked to the technology.

To address these concerns, many organizations are prioritizing the establishment or expansion of formal AI governance/ethics/risks councils tasked with overseeing the ethical use of GenAI and mitigating risks associated with privacy, manipulation, bias, security, and transparency.

As a manufacturing interviewee in one of my studies put it, “The governance framework is indispensable in ensuring responsible and ethical AI implementation.” This underscores the importance of implementing robust governance measures to ensure the ethical use of GenAI within manufacturing organizations.

Deployment Strategies

Strategies for selecting the right solution for the right use case can vary substantially. A global white goods company, for example, piloted several GenAI-powered use cases in 2023. Its selection and deployment strategy encompassed a range of approaches, including:

  • Off-the-Shelf Solutions: The company utilized ready-to-use, commercially available GenAI-embedded software-as-a-service solutions. These offered immediate access to GenAI capabilities without the need for extensive development or customization.
  • AI Assistants: It deployed AI assistants to support specific tasks within its business processes. These assistants helped, for example, to create designs based on predetermined workflows, providing valuable support and efficiency gains.
  • AI Agents: The company deployed AI agents in complex use cases requiring the orchestration of workflows and decision-making based on AI-driven insights. The agents leveraged GenAI to analyze data and make informed decisions autonomously.

A primary challenge often mentioned in such endeavors is selecting the optimal LLM for company-specific use cases from a multitude of possibilities. With new models and solutions constantly emerging and becoming accessible, this task can be daunting. The selection process typically involves thorough market research, vendor presentations, and internal discussions about the technology framework underlying current and future use cases.

However, the success of GenAI ultimately hinges on the quality and quantity of the data utilized. Curating a diverse and sufficient data set is critical to ensuring unbiased outcomes and maximizing the effectiveness of GenAI solutions. Data curation therefore remains a cornerstone of success in leveraging GenAI technologies.
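Two of the basic curation checks implied above, duplicate detection and label balance, can be sketched on a toy labeled data set. The samples and threshold are invented for illustration; real curation pipelines also cover provenance, licensing, and privacy.

```python
from collections import Counter

# Sketch of two simple data-curation checks: exact-duplicate detection and
# label-balance screening. Samples and the imbalance threshold are invented.

def curation_report(samples, imbalance_ratio=3.0):
    """samples: list of (text, label) pairs. Returns simple quality flags."""
    texts = [t for t, _ in samples]
    dupes = [t for t, n in Counter(texts).items() if n > 1]

    label_counts = Counter(label for _, label in samples)
    most = max(label_counts.values())
    least = min(label_counts.values())
    imbalanced = (most / least) > imbalance_ratio

    return {"duplicates": dupes,
            "label_counts": dict(label_counts),
            "imbalanced": imbalanced}

samples = [
    ("pump vibration above limit", "fault"),
    ("pump vibration above limit", "fault"),   # exact duplicate
    ("temperature nominal", "ok"),
    ("pressure nominal", "ok"),
    ("flow rate nominal", "ok"),
    ("valve response nominal", "ok"),
]
report = curation_report(samples)
```

Even checks this simple catch the kind of skew (duplicated or over-represented examples) that quietly biases model outputs downstream.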

The Bottom Line

GenAI-powered technology holds immense potential across industries and regions, offering capabilities that traditional machine learning algorithms or neural networks may struggle to match in terms of breadth and depth. GenAI can assist in co-piloting humans, thereby addressing challenges associated with an aging and/or unqualified workforce.

However, organizations must prioritize addressing concerns such as data leakage, biases, and maintaining sovereignty over IT processes running in the background. These issues must be carefully managed to ensure the responsible and ethical implementation of this powerful technology.

The past year and a half has demonstrated the impressive capabilities of generative AI (GenAI) systems, such as ChatGPT, Bard, and Gemini. Business application vendors have since begun a sprint to include the most recently enabled capabilities (summarizing, drafting text, natural language conversation, etc.) into their products. And organizations across industries have started to deploy generative AI to help serve customers — hoping that GenAI-powered chatbots could provide a better customer experience than the failed and largely useless service chatbots of the past.

The results have started to come out, and they are mixed. The service chatbots of organizations such as Air Canada and DPD have made unsubstantiated offers or even produced rogue poetry. Another customer chatbot for a Nordic insurance company was not updated with the latest website reorganization and kept sending customers to outdated and decommissioned web pages.

The popular Microsoft Copilot hallucinated about recent events and invented occurrences that never happened. Based upon personal experience, a customer meeting summary written by generative AI included a final evaluation of the meeting as “largely unproductive due to technical difficulties and unclear statements” — an assessment not echoed by the human participants.

These issues highlight several dilemmas related to using generative AI in software applications:

  • Autonomous AI functions versus human-supervised AI. Autonomous AI is attractive to customer service departments because of the cost difference between a chatbot and a human customer service agent. This cost-saving potential must, however, be balanced against the risk of reputational damage and negative customer experiences as a result of chatbot failures and mishaps.

Instead, designing solutions with a “human in the loop” may have multiple benefits. Incorporating employee oversight to guide, validate, or enhance the performance of AI systems may not only improve output accuracy but also increase adoption of GenAI solutions. For example, a customer service agent could have a range of tools, such as automatically drafted chat and email responses, intelligent knowledge bases, and summarization tools that augment productivity without replacing the human.
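The "human in the loop" pattern can be sketched as a simple control flow. The `draft_reply` function below is a purely hypothetical stand-in for a real model call; the point is that nothing reaches the customer without agent approval.

```python
# Sketch of a human-in-the-loop flow: the model only *drafts* a reply, and
# nothing is sent until a human agent approves or edits it.

def draft_reply(ticket_text):
    # Hypothetical placeholder for an LLM call; returns a canned draft here.
    return f"Thanks for reaching out about: {ticket_text}. We're on it."

def handle_ticket(ticket_text, agent_review):
    """agent_review(draft) -> (approved: bool, final_text: str)."""
    draft = draft_reply(ticket_text)
    approved, final_text = agent_review(draft)
    if not approved:
        return None          # never auto-send a rejected draft
    return final_text

# An agent who edits the draft before approving it:
def cautious_agent(draft):
    return True, draft.replace("We're on it.", "We will reply within 24h.")

sent = handle_ticket("delayed parcel", cautious_agent)
```

The structural guarantee, that the send path always passes through `agent_review`, is what trades away some cost savings for protection against the chatbot failures described above.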

  • At what point is company-specific training enough? In other words, should organizations make extensive training investments in company-specific large language models (LLMs), or rely on out-of-the-box LLMs, such as ChatGPT, for good-enough answers? In some of the generative AI failures described above, it seems that the company-specific training of the AI engine was too superficial and did not cover enough interaction scenarios.

As a result, the AI engine resorted to its foundational LLM, such as GPT or PaLM, and these did, in some cases, act in unexpected and undesired ways. Organizations are obviously eager not to reinvent the wheel with respect to LLMs, but the examples above show that over-reliance on general LLMs is risky.

  • Keeping the chat experience simple versus allowing the user to report issues, such as errors, biased information, irrelevant information, offensive language, and incorrect format. A good software user experience is helped by a clean user interface; in the context of generative AI, think of the prompt input field in an application. Traditional wisdom suggests keeping this very clean. But what is the user supposed to do in case of errors or other types of unacceptable AI responses? And how is the user supposed to verify sources, training methods, and methodologies?

This is linked to the need for “explainable AI”, which refers to the concept of designing and developing AI systems in such a way that their decisions and actions can be easily understood, interpreted, and explained by humans.

The need for explainability has arisen because many advanced machine learning models, especially deep neural networks, are often treated as “black boxes” due to their complexity and the lack of transparency in their decision-making processes.
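One way to ease the tension is to keep the prompt surface clean while attaching a lightweight, structured “report” control to each AI response. A minimal Python sketch, using the issue categories listed above (all class and field names are illustrative):

```python
from dataclasses import dataclass, field
from enum import Enum

class IssueType(Enum):
    ERROR = "error"
    BIAS = "biased information"
    IRRELEVANT = "irrelevant information"
    OFFENSIVE = "offensive language"
    FORMAT = "incorrect format"

@dataclass
class AIResponse:
    text: str
    reported_issues: list[IssueType] = field(default_factory=list)

    def report(self, issue: IssueType) -> None:
        # A one-click control keeps the chat surface clean while still
        # giving users a way to flag unacceptable outputs for review.
        self.reported_issues.append(issue)

resp = AIResponse("The capital of Australia is Sydney.")
resp.report(IssueType.ERROR)  # the user flags a factual error
```

Structured reports like these also feed the feedback loops that explainability efforts depend on.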

  • Using generative AI for very specific and controlled use cases versus general AI scenarios. One way to potentially curb the risks of AI errors is to frame the use of AI into specific and limited application use cases. One example is a “summarize this” button as part of a specific user experience next to a field with unstructured text. There is a limit to how wrong this can go, as opposed to an all-purpose prompt-based digital assistant.

This is a difficult dilemma simply because of the attractiveness of a general-purpose assistant, which has prompted vendors to announce such general assistants (e.g., Joule from SAP, Einstein Copilot from Salesforce, Oracle Digital Assistant, and the Sage Copilot).
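The contrast can be made concrete in code: a “summarize this” button fixes the prompt programmatically, so the user never writes a prompt and can only ever trigger that one narrow task. A minimal sketch, with a stub standing in for the actual model call (`summarize_field` and `fake_llm` are hypothetical names):

```python
def summarize_field(text: str, llm_call) -> str:
    """A 'summarize this' button: the prompt is fixed in code, so the
    model is only ever asked to do one tightly scoped thing."""
    prompt = (
        "Summarize the following text in at most three sentences. "
        "Do not add information that is not in the text.\n\n" + text
    )
    return llm_call(prompt)

# A stub stands in for the real model call.
fake_llm = lambda prompt: "A short summary."
print(summarize_field("Long unstructured customer note ...", fake_llm))
```

An all-purpose assistant, by contrast, passes arbitrary user text straight to the model, so the space of possible failures is unbounded.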

  • Charging customers for generative AI value versus wrapping it into existing commercial models. GenAI is known to be expensive in terms of the compute costs and manpower needed to orchestrate and supervise training. This raises the question of whether such new costs should be passed on to customers.

This is a complex dilemma for a number of reasons. Firstly, AI costs are expected to decline over time as this technology matures. Secondly, AI functionality will be embedded into standard software, which is already paid for by customers.

The embedded nature of many AI application use cases will make it very difficult for vendors to charge for incremental, separate new AI functions. Mandatory additional AI-related fees on top of existing SaaS solutions are likely to be met with strong objections from customers.

  • Sharing the risk of inaccurate generative AI outputs with customers and partners versus letting customers be fully accountable. Generative AI will increasingly be leveraged to support key personas’ decision-making processes in organizations. What if it hallucinates and the outputs are misleading? What if the consequence is a wrong decision with a serious negative impact on the client organization? Who takes responsibility for those consequences? Should customers accept this burden alone, or should accountability be distributed between vendors, their partners (e.g., LLM providers), and end customers?

In any case, vendors should have full transparency into their solutions (including clear procedures for training, implementing, monitoring, and measuring the accuracy of generative AI models) so that they can immediately provide the required information to the customer in an emergency.


After having taken the enterprise technology space by storm, generative AI is likely to progress more slowly than initially expected. As a new technology, GenAI might enter the “phase of disillusionment,” to paraphrase colleagues in the analyst industry.

This slowdown will be driven by a more cautious adoption of AI in enterprise software, as new horror stories instill fear of reputational damage in CEOs across industries. We believe that new generative AI rollouts will have more guardrails, more quality assurance, more iterations, and much better feedback loops compared to earlier experiments.

Bo Lykkegaard - Associate VP for Software Research Europe - IDC

Bo Lykkegaard is associate vice president for the enterprise-software-related expertise centers in Europe. His team focuses on the $172 billion European software market, specifically on business applications, customer experience, business analytics, and artificial intelligence. Specific research areas include market analysis, competitive analysis, end-user case studies and surveys, thought leadership, and custom market models.

The efficient management of identities and access has become central to digital business. It determines the speed and agility with which an organization is able to operate or pursue new goals; it underpins employee productivity and enables operational efficiencies; and it is key to security, privacy, and compliance. Most organizations have deployed identity and access management (IAM) solutions to handle their operational demands effectively.

However, the identity infrastructure and processes themselves are a frequent target of cyberattackers, driving recognition that identity security measures need to be improved.

What Are the Main Identity Threats?

IDC’s Global Identity Management Assessment Survey 2023 found that in Western Europe, the two categories of identity that are perceived as the biggest threats are hybrid or remote employees and partners, suppliers, or affiliates (each category mentioned by 49.6% of respondents). The external nature of these identities — from a location perspective, an employment perspective or both — increases the attack surface of the organization and creates potential vulnerability and exposure of data, systems, and processes.

Nevertheless, those roles also provide access to a broader talent pool and deliver operational efficiencies and economies of scale, allowing organizations to outsource non-core functions. Consequently, organizations are striving to accurately assess and manage the risk.

What Are the Top IAM Investments?

Accordingly, the top two service areas in which Western European organizations are planning to make significant IAM investments to address the security risk are identity management for roles and authorizations (56.9%) and privileged access management (PAM – 53.3%).

Note that since the onset of the COVID-19 pandemic, investments in PAM have been growing steadily, as organizations required greater control over remote employees accessing sensitive corporate applications and data.

Which IAM Areas Must Improve?

The survey also asked which IAM areas organizations need to improve on significantly in the next 18 months. From a list of options including functional, operational, structural, and organizational aspects, the top responses were squarely in the area of identity security:

  • The biggest share of organizations (45.1%) want to improve their ability to detect insider threats.
  • A further 44.3% aim to improve identity threat detection and response (ITDR).
  • 9% aim to improve integration with other IT security solutions.

The emergence of ITDR over the past couple of years as a key priority for organizations building out their security and identity capabilities has been a consistent takeaway across multiple IDC surveys.

The final area to touch on is the “wish list” question, always a good barometer of what respondents really value. In this case: if your organization had the budget and resources to do so, what is the one identity technology solution you would add or strengthen in the next three months?

The top response was strong authentication, such as two-factor authentication or multifactor authentication (MFA), cited by 25.6%. This was followed by generative AI (GenAI) for fraud detection and identification of synthetic identities (20.3%) and, again, ITDR (19.5%).

The rapid maturing of deepfake tools and capabilities, underlined by real-world examples of successful attacks, is already driving demand for security tools to protect against them as the GenAI arms race heats up.

Identity really is at the heart of everything in the digital era: business, security, trust, compliance, risk management, operational efficiency, and more. It is fundamental to enterprise initiatives such as building cyber resilience or adopting zero trust principles.

Many direct references to IAM and identity security controls in the growing landscape of EU legislation further emphasize why identity should be high on every organization’s priority list. This new report maps many of the key trends shaping the European identity and access landscape in 2024.

Mark Child - Associate Research Director, European Security - IDC

Associate Research Director Mark Child of IDC’s European Security Group leads the group's Endpoint Security and Identity & Digital Trust (IDT) research for both Western Europe and Central & Eastern Europe. He monitors developments in security technologies and strategies as organizations address the challenges of evolving business models, IT infrastructure, and cyberthreats. Mark's coverage includes in-depth security market studies, end-user research, white papers, and custom consulting.

When NASA created its Apollo launch vehicles to take payloads to space (including humans), they were designed with multiple segments. The segment nearest the ground at launch (the “first stage”) contained huge rockets and fuel tanks that could lift everything off the pad and accelerate it through the densest part of the atmosphere. At that point, still well below the edge of space, the first stage would be jettisoned to fall back to Earth, and the rest of the vehicle would continue on its way, building toward the velocity needed to escape Earth’s gravity.

A Frenzy of FOMO

OpenAI is the outfit that — above all others — is responsible for the rapid acceleration of interest and investment in generative AI (GenAI) technologies. The launch of ChatGPT in November 2022 kick-started a frenzy of FOMO, first for many individuals (after all, ChatGPT did surpass 1 million users in just five days) and then in businesses — as well as catalyzing conversations about intellectual property in the digital age, potential impacts of AI on employment and skills, and more.

Just over 12 months after the launch of the GenAI market, created primarily by the attractiveness of OpenAI’s consumer services, IDC conducted a worldwide survey that demonstrated the incredible momentum behind the new technology within businesses: in January 2024, 68% of organizations already exploring or working with GenAI said it would have an impact on their business in 2024-2025, and an astounding 29% said that GenAI had already disrupted their business to some extent.

OpenAI continues to benefit from amazing levels of mindshare, thanks to the good old rule of “be first”, but also to the undeniable PR power of its CEO Sam Altman — not least within senior business leadership circles. But mindshare is not enough; it also benefits from a strategic partnership with Microsoft, which has seen Microsoft committing to provide $13 billion of investment, in return for an exclusive license to OpenAI’s IP and an agreement that it would be OpenAI’s exclusive cloud provider.

The heavily promoted downstream results of that partnership (Azure OpenAI Service, the use of OpenAI models in Copilots, and so on) have continued to build mindshare momentum.

And yet: OpenAI is not currently traveling along the route that businesses want to take.

OpenAI’s Alignment Problem

The outfit was founded as a not-for-profit research institute, focused on developing artificial general intelligence (AGI) — a currently hypothetical future level of capability that envisions AI systems that can perform as well or better than humans on a wide range of cognitive tasks — with a capped profit company subsidiary (which is the entity invested in by Microsoft and others).

However, when we ask organizations what they need from GenAI in order to create business value from the technology, they typically cite qualities such as accuracy, privacy, security and frugality. For example: 28% of organizations are concerned that GenAI jeopardizes control of data and intellectual property; 26% are concerned that GenAI use will expose them to brand or regulatory risks; and 19% of respondents are concerned about the accuracy or potential toxicity in the output of GenAI models.

OpenAI is innovating fast, but the dominant innovation focus is on breadth and depth of functionality (e.g., the introduction of “multimodal” models that can manipulate multiple content types, including text, images, sound, and video), not on accuracy, privacy, security, frugality, and so on.

Currently, it is vendors “higher up the stack” (enterprise application and enterprise software platform vendors) who are attempting to bridge the gap with functionality aimed at addressing trust issues and minimizing risks. But it is clear that foundation model providers also need to bear some responsibility for… being responsible.

Beyond OpenAI: An Explosion of GenAI Model Providers

OpenAI might have amazing mindshare right now, but it is already far from the only source of GenAI model innovation. Fueled by venture capital and corporate investment, competitors have flooded into the space, including:

  • GenAI research-focused vendors like Anthropic, AI21, and Cohere
  • Hyperscale public cloud providers AWS and Google
  • Enterprise technology platform vendors including IBM, Oracle, ServiceNow, and Adobe
  • Sovereignty-focused providers, including Mistral, Aleph Alpha, Qwen, and Yi
  • Industry-specialized providers, including Harvey (legal services) and OpenEvidence (medicine)
  • A vibrant and fast-growing open-source model community, with thousands of GenAI-related projects hosted by Hugging Face and GitHub

Open-source communities are a particularly energetic vector of innovation: open-source projects are quickly evolving model capabilities in terms of model size and efficiency, training and inferencing cost, explainability, and more.

Microsoft Is Clearly Looking Beyond OpenAI

In late February, Microsoft President Brad Smith published a blog post announcing Microsoft’s new “AI Access Principles”.

There’s a lot of detail in the post, but underpinning it all is a clear direction: in order to reinforce its credentials as a “good actor” in the technology industry and minimize the risks of interventions by industry regulators around the world, Microsoft is committing to support an open AI (no pun intended) ecosystem across the full AI technology stack (from datacenter power and connectivity and infrastructure hardware to services for developers). As part of this, it is increasingly emphasizing the importance of a variety of different model providers. For instance, it’s made a recent small investment in France’s Mistral AI and is expanding support for models from providers like Cohere, Meta, NVIDIA, and Hugging Face in its platform.

Will OpenAI Fly or Crash?

In order for OpenAI to reap significant rewards from business demand for GenAI technology implementation, it is going to have to evolve its approach. While the initial success of ChatGPT captured market attention, the rapidly evolving landscape of both GenAI technology supply and demand requires a stronger business focus. OpenAI is faced with tension between its research-oriented ethos and the market’s demand for practical AI applications. This alignment problem raises questions about its identity and future strategy.

Lastly — what about Microsoft? It must back its new principles with tangible actions that genuinely advance AI responsibly. It needs to ensure transparency and avoid actions that would suggest it only uses “responsible AI” as a PR tool for driving profits. It needs to promote both innovation and competition. Nobody wants a world where one model’s dominance could stifle competition and limit options for developers.

Hence, fostering an open and inclusive ecosystem where smaller players can grow will be imperative for Microsoft’s credibility and allow for a trustworthy AI ecosystem benefiting everyone.


Want to know more? Join IDC’s experts from across EMEA on the 19th of March for an exclusive peek into our latest research to:

  • Uncover real-world use cases from organizations aiming to maximize positive impact of GenAI on their business,
  • Learn about evolving GenAI technology, supplier dynamics, and the shifting regulatory landscape,
  • Gain actionable insights and a roadmap for navigating GenAI possibilities and challenges in 2024 and beyond.

Register for the webcast here: How EMEA Organizations Will Deliver Business Impact With GenAI – Beyond the Hype.

Neil Ward-Dutton - VP AI, Automation, Data & Analytics Europe - IDC

Neil Ward-Dutton is vice president, AI, Automation, Data & Analytics at IDC Europe. In this role he guides IDC’s research agendas, and helps enterprise and technology vendor clients alike make sense of the opportunities and challenges across these very fast-moving and complicated technology markets. In a 28-year career as a technology industry analyst, Neil has researched a wide range of enterprise software technologies, authored hundreds of reports and regularly appeared on TV and in print media.

Governments across Europe, the Middle East, and Africa (EMEA) and beyond are busy experimenting with and scaling AI and generative AI (GenAI) use cases. The French and U.K. central governments’ GenAI-powered virtual assistant projects — in one case targeted at civil servants, in the other at citizens — show the high level of interest and the early stage of maturity. Also in France, a large language model (LLM) is being introduced to improve the processing of legislative proceedings.

According to IDC EMEA’s 2023 Cross-Industry Survey, the government sector currently has the second-lowest level of adoption of GenAI in comparison to other industries (ahead of only agriculture). But the government sector has the highest percentage of organizations that plan to start investing in it over the next 24 months. Some government entities are taking a more cautious approach, putting restrictions on the use of commercial GenAI platforms, while considering developing their own LLMs.

This phenomenon is not new in the public sector. For several reasons, governments usually have a slower rate of adoption of new technologies.

One is that the public sector is obligated to guarantee access to its services for everyone. Government bodies thus require longer to test innovative technologies in order to deliver inclusive outcomes. Legal requirements can also constrain technology procurement, as can limited capacity and competencies.

The current AI investments are all critical steps toward realizing the benefits of data and AI in government — but they are not sufficient. Beyond operational use cases like virtual assistants, summarizing council meetings, expediting code development and testing for software applications, flagging risks of fraud in procurement and tax collection, and drafting job requisitions, governments need to think of the long-term impacts of AI and GenAI.

They need to think of what will happen when AI is used pervasively across industries and is widely accessible by individuals on their smartphones — when the potential benefits and risks of AI will impact government operations well beyond the current stage of maturity and affect the government’s role in society.

The Potential Impact of AI and GenAI on Future Government Operations and Policy

AI has been used in government — particularly by tax, welfare, public safety, intelligence, and defense agencies — for more than a decade. But the advent of GenAI indicates that existing AI applications only scratch the surface of what’s possible.

Government Operations

From a government operations perspective, AI- and GenAI-powered chatbots are just the beginning. European and United Arab Emirates government officials that we recently spoke with are already thinking about how the next generation of virtual assistants could entirely replace government online forms and portals.

For example, a natural language processing algorithm trained to recognize languages, dialects, and tones of voice could enable citizens to apply for welfare programs, farming grants, business licenses, and more just by sending voice messages.

An AI-powered system combining automatic speech recognition and an LLM would comb through voice messages to identify the entity (individual or business) making the request and the key attributes, then feed the data to an eligibility verification engine. No forms would need to be filled in manually.

This scenario is not too far off. A regional government we spoke with is already collecting voice samples to test such a system for farming grant applications.
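The flow of such a system (speech recognition, LLM-based entity extraction, then an eligibility check) can be sketched end to end. Everything here is a hypothetical stand-in: the hardcoded transcript, the extracted fields, and the five-hectare rule are invented for illustration, not drawn from any real program:

```python
# Illustrative pipeline only: each stage is a stand-in, not a real
# government API or production model.

def asr_transcribe(voice_message: bytes) -> str:
    # A real system would run automatic speech recognition here.
    return "I am Jane Doe and I want to apply for a farming grant for 12 hectares."

def llm_extract(transcript: str) -> dict:
    # A real system would prompt an LLM to return structured entities.
    return {"applicant": "Jane Doe", "request": "farming grant", "hectares": 12}

def check_eligibility(application: dict) -> bool:
    # Invented rule for illustration: grants require at least 5 hectares.
    return application["request"] == "farming grant" and application["hectares"] >= 5

application = llm_extract(asr_transcribe(b"<audio>"))
print(check_eligibility(application))  # no form was filled in manually
```

The open questions raised below sit precisely at the seams of this pipeline: where the voice data is stored, how identity is verified, and how the decision remains explainable.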

But multiple questions are raised. Legal and technical questions like: How and where should voice data be collected and stored to comply with GDPR? How can a citizen’s or business owner’s identity be verified through a voice message in compliance with GDPR and eIDAS? How can the government remain transparent and accountable for its decisions if there is not even a digital front end?

It also raises business and operational questions like: Will such a system really replace online forms — or instead become an additional channel that segments of the population use, thus pushing the volume of requests to a level that causes delays in government responses? Will the pervasive use of GenAI in the private sector multiply that volume effect?

Will lawyers’ pervasive use of GenAI incentivize them to file more proceedings, even ones they do not expect to win, because it is so easy that they may as well try? How will government business, legal, operational, technical, and functional capabilities evolve to cope with these challenges?

Policy

From a policy perspective, the spectrum of open questions is expanding by the day. One of the most critical questions, and one that many are thankfully already asking, is about the impact of AI-powered automation on the job market.

If workers are displaced by AI-powered automation, there is no silver bullet. Training programs are not fast enough and may not work for everybody.

Universal basic income can be part of the recipe. But how much is affordable and what is the right level of income? Will the government need to consider employing more people to cushion a drop in employment in other industries?

If so, are roles requiring both expertise and empathic interactions, such as education, healthcare, and social care, the right public sector domains to do so? If new jobs appear on the market, how does that impact worker social protection policies?

In a year when half of the global population will be asked to cast a vote, the impact of AI on democracy is also called into question. AI is already generating a surge in misinformation and increasing risks of polarized political positions.

What if the attempt of mainstream media to protect copyrights from web crawlers used to feed LLMs unintentionally opens the door for bad actors to make even more misinformation available to train GenAI? Does the government need to establish counter-misinformation authorities or issue laws and guidelines that hold the private sector accountable to do so?

If a government authority is established, how can it ensure public oversight and independence from the existing cyberunits of defense and intelligence departments, which have a different mission? In France, a recent debate over media independence and balanced journalism might be settled by AI analyzing speeches and attendees to ensure pluralism. But who would train a democratic judge of pluralism?

What about the government’s ability to regulate private markets? What if AI and GenAI accelerate medical science through analysis of vast amounts of real-world health data that have been historically hard to collect and prepare for algorithm training? What if, for example, such an acceleration in medical sciences finds a cure that diabetics can use to treat their disease once and for all, instead of having to take medication for the rest of their lives? What would be the impact on the revenue model of pharma companies? Will governments have to change intellectual property rights entirely, to make sure that pharma companies invest in such treatments and make them affordable to all people with diabetes around the world?

The same goes for cultural companies and intellectual property. What would be the role of governments in ensuring that culture workers can continue to participate in the entertainment industry and in the creativity and identity of a country through their art?

Finally, what are the ethical implications of using AI in warfare? There are already systems that can alert snipers of targets. What is their impact on the rules of engagement on the battlefield and on the accountability of the individual soldier and the chain of command?

These are big questions that require technology, legal, policy, ethical, and process experts to come together. They cannot be left to the chief information officer or the chief data officer. And they require civil service and policymaking leaders to engage openly with the public, with academic and private sector experts, to avoid the risks of being influenced (or being perceived as influenced) only by lobbyists. They require international collaboration. They require measuring the value of AI not just in terms of productivity, but also in terms of fairness, robustness, responsibility, and social value.

Remi Letemple - Senior Research Analyst, IDC Government Insights - IDC

Remi Letemple leads IDC’s Worldwide Sustainable Transportation and Smart Vehicles Strategies service, where he provides strategic guidance and thought leadership on the future of mobility and transportation. Operating at a global level, he is recognized as a subject matter expert in smart mobility and transportation technologies—including connected, autonomous, shared, and electric mobility—enabled by software-defined vehicle (SDV) architectures, over-the-air (OTA) updates, cloud and edge platforms, and AI, including generative AI.

On October 19th, 2023, AMD announced new processors for the workstation and high-end desktop (HEDT) markets. The processors are based on 5nm Zen 4 architecture and offer up to 96 cores and 192 threads of performance.

The Ryzen Threadripper PRO 7000WX series of processors, which are designed for professionals and businesses that demand top-tier performance, reliability, expandability, and security, feature AMD PRO technologies and eight channels of DDR5 memory.

Meanwhile, the Ryzen Threadripper 7000 series signals AMD’s return to the HEDT market, offering overclocking capabilities and the maximum clock rates possible on a Threadripper-based CPU. Power, performance, and efficiency are all made possible by 5nm technology and Zen 4 architecture. The Threadripper 7000 series provides ample I/O channels for desktop users, with up to 48 PCIe Gen 5.0 lanes for graphics, storage, and more.

The new processors became available from OEM and system integrator (SI) partners, including Dell Technologies, HP, and Lenovo, as well as from do-it-yourself (DIY) retailers, on November 21st, 2023.

On November 13th, 2023, AMD announced the Radeon PRO W7700, a new workstation graphics card that offers high performance, reliability, and top-notch price/performance ratios for professional applications. The new card bridges the gap between the high-end Radeon PRO W7800 (32GB GDDR6) and the entry-level Radeon PRO W7600 (8GB GDDR6). The 16GB VRAM graphics card supports DisplayPort 2.1, AI acceleration, and hardware-based codecs for video editing and production.

This review will focus on the AMD Ryzen Threadripper 7980X processor, with additional coverage of the AMD Radeon PRO W7700 professional graphics card.

Test System Details

AMD Ryzen Threadripper 7980X Processor

The AMD Ryzen Threadripper 7980X processor (non-PRO) is the flagship of AMD’s returning HEDT line, offering overclocking support and the highest clock rates available on a Threadripper-series CPU. Built on 5nm technology and the Zen 4 architecture, it is available to the DIY market and SI partners, and the platform provides ample I/O for desktop users, with up to 48 PCIe Gen 5.0 lanes for graphics, storage, and more.

AMD Radeon PRO W7700

With 16GB of error-correcting code (ECC) memory, the AMD Radeon PRO W7700 easily handles data-intensive operations. In terms of visual fidelity, the card features the new Radiance Display Engine, which supports 12-bit high dynamic range (HDR) color and can reproduce over 68 billion unique colors with high precision.

The Radeon PRO W7700 GPU’s major features are its 48 unified RDNA 3 compute units, 48 second-generation ray accelerators, and 96 AI accelerators. The card has 16GB of GDDR6 ECC memory and four DisplayPort 2.1 (UHBR 13.5) connectors. The connectors, which provide up to 52.2 Gbit/s of total bandwidth, are designed to drive a 10K display at a 60Hz refresh rate, two 8K displays, or four 4K displays with Display Stream Compression technology.

AMD’s new dual media engine offers hardware-accelerated support for AV1 encoding, with the Radeon PRO W7700 capable of delivering 7680×4320 video at 60fps (8K60). The media engine supports two AVC and HEVC streams that can be encoded or decoded simultaneously. For live broadcasters, AMD has included many capabilities that increase both performance and quality.

Memory and Motherboard

We installed the Ryzen Threadripper 7980X processor on a Gigabyte TRX50 AERO D motherboard, alongside the G.SKILL Zeta R5 Neo DDR5-6400, CL32-39-39-102, 1.40V, 128GB (4x32GB) kit with AMD EXPO memory overclocking and ECC support enabled.

AMD Ryzen Threadripper CPUs support only DDR5 RDIMMs, LRDIMMs, and 3DS RDIMMs. Threadripper 7000 processors can handle up to 8 channels/2TB on PRO motherboards (based on 8x256GB DIMMs) and up to 4 channels/1TB on HEDT motherboards (based on 4x256GB DIMMs), with support for both single-rank and dual-rank modules at 5200MHz and a single DIMM per channel. ECC is enabled, although its behavior varies depending on the motherboard. The maximum official transfer rate varies by DIMM configuration, as with other AMD Ryzen CPUs.
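The capacity ceilings quoted above follow directly from the channel count and the maximum supported DIMM size; a quick arithmetic check:

```python
# One DIMM per channel, 256GB per DIMM, as stated above.
DIMM_GB = 256

pro_capacity_tb = 8 * DIMM_GB / 1024   # PRO boards: 8 memory channels
hedt_capacity_tb = 4 * DIMM_GB / 1024  # HEDT boards: 4 memory channels

print(pro_capacity_tb, hedt_capacity_tb)  # 2TB and 1TB ceilings
```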

Other Components

The Windows 11 main storage device was a 1TB GIGABYTE AORUS NVMe Gen4 solid-state drive. AMD provided a 360mm all-in-one water cooler; however, it did not completely cover the CPU surface. Instead, we used the Arctic Freezer 4U-M, an 8x6mm direct-contact heatpipe tower cooler with 2x120mm fans in push/pull mode. This cooler is intended for the most powerful server and workstation CPUs with up to 96 cores and a thermal design power of up to 350W.

The be quiet! STRAIGHT POWER 11 Platinum 850W power supply powered the system. A 34″ Dell Gaming S3422DWG monitor — a Quad-HD 3440×1440 display with a 144Hz refresh rate, FreeSync, 10-bit colors, and HDR support — was also utilized.

Benchmarks

Blender Benchmark

Blender Benchmark version 4.0.0 was used to assess the AMD Ryzen Threadripper 7980X processor’s rendering performance. With a score of 1708.66, the processor’s performance ranked among the top 28% of benchmarks running the same workloads. Given the inclusion of GPU results, the CPU performed brilliantly.

In terms of GPU results, the AMD Radeon PRO W7700 ranked in the top 27% of benchmarks, with a slightly higher score of 1883.80. That the CPU came so close to GPU-level rendering performance is fantastic news for studios that rely on CPUs for production.

IndigoBench

IndigoBench v4.4.15 is another standalone benchmark based on Indigo 4’s rendering engine and the industry-standard OpenCL.

With a total score of 47.54 million samples per second, the Threadripper 7980X ranks fourth among the top CPU performances at stock settings with no overclocking. The processor also outperforms the Threadripper 3990X and Pro 5995WX by 30% and 33%, respectively, demonstrating a significant generational jump.
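The generational uplifts quoted above reduce to a simple ratio of scores. A quick sketch of the calculation (the older-generation baseline below is back-derived from the ~30% figure in this article, not an independently measured score):

```python
def percent_uplift(new_score: float, old_score: float) -> float:
    """Relative improvement of new_score over old_score, in percent."""
    return (new_score / old_score - 1.0) * 100.0

score_7980x = 47.54               # Msamples/s, from the run above
score_3990x = score_7980x / 1.30  # illustrative baseline implied by the ~30% uplift
print(round(percent_uplift(score_7980x, score_3990x)))  # → 30
```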

PCMark 10

PCMark 10 is a comprehensive benchmarking tool that covers the wide variety of tasks performed in the modern workplace. Web browsing, videoconferencing, spreadsheet and word processing, photo and video editing, and rendering and visualization are some of the tasks tested by the tool.

The 8,772 score the test platform achieved was better than 98% of all results produced by PCMark 10.

CINEBENCH

The 2024 edition of Cinebench now includes a GPU benchmark that takes advantage of Redshift, Cinema 4D’s default rendering engine. The Radeon PRO W7700 scored 9,504, nearly matching the Radeon Pro W6800, which scored 9,643 (according to the test database). This result demonstrates the maturity of RDNA 3 compute, given that the Radeon PRO W7700 has half the Infinity Cache and dedicated graphics memory of the W6800.

Based on the 92,817 Cinebench R23 result, the AMD Ryzen Threadripper 7980X CPU is nearly three times faster than the Ryzen 9 7950X. This result demonstrates that the Threadripper is in a class of its own and is a much-needed high-performance solution.

3DMark CPU Profile

This test stresses the CPU at various levels of threading while reducing the GPU burden, ensuring that GPU performance is not a limiting factor. It takes advantage of sophisticated CPU instruction sets supported by different processors, including Advanced Vector Extensions 2 (AVX2), and also leverages the straightforward, highly efficient simulations provided by the SSSE3 code path.

With standard settings and no overclocking, the AMD Ryzen Threadripper 7980X’s CPU score of 25,374 qualifies for 3DMark’s MAX Threads Hall of Fame. It ranks among the top 100 benchmark scores ever recorded and holds 25th place on a leaderboard otherwise dominated by the world’s most skilled overclockers.

V-Ray 6 Benchmark

The V-Ray Benchmark, which uses the V-Ray 6 render engine, was used to gauge the system’s rendering speed.

With a vsamples score of 120,247, the AMD Ryzen Threadripper 7980X CPU is nearly twice as fast as the Threadripper Pro 5995WX and the 3990X, representing a considerable generational leap.

SPECworkstation

The SPECworkstation 3.1 Benchmark fully assesses workstation performance across a variety of professional applications.

The AMD Ryzen Threadripper 7980X scores higher across all application groups, except for those that lean more heavily on the graphics card than on the processor (unlike CPU-bound workloads such as financial services). This exception is due to the use of the Radeon PRO W7700, a midrange professional graphics card. Higher results across all application groups could be achieved with the Radeon Pro W7800 or W7900.

Gaming

Since many professional gamers and streamers have used HEDTs in the past to support multitasking — playing games, encoding and recording gameplay, and streaming to several web platforms — the Threadripper’s gaming performance was evaluated on this professional test platform. Professionals who enjoy playing games would undoubtedly prefer not to invest in another gaming PC after paying a premium for this platform.

Shadow of the Tomb Raider ran at an average 61 frames per second (fps) at 1440p, with a minimum of 42fps. The highest graphical settings, as well as AMD’s FidelityFX CAS package, were enabled. Surprisingly, the use of XeSS for upscaling while running the game test boosted performance by 10% at the same settings, achieving a minimum of 50fps and an average of 66fps. This might be a demonstration of the RDNA3 architecture’s AI acceleration capabilities and the Radeon Pro W7700’s AI accelerators.
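The XeSS uplift can be checked against the frame rates reported above; the same ratio works for any before/after pair:

```python
def fps_uplift_pct(with_upscaling: float, without: float) -> float:
    """Percentage gain in frame rate from enabling upscaling."""
    return (with_upscaling / without - 1.0) * 100.0

# Shadow of the Tomb Raider at 1440p, figures from the run above.
avg_gain = fps_uplift_pct(66, 61)  # average fps: ~8%
min_gain = fps_uplift_pct(50, 42)  # minimum fps: ~19%
print(f"{avg_gain:.1f}% avg, {min_gain:.1f}% min")
```

Averages gained roughly 8% and minimums roughly 19%, bracketing the ~10% overall figure cited above.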

Far Cry 6 ran at an average 104fps at 1440p, registering a minimum of 92fps. All DirectX Raytracing (DXR) and FidelityFX Super Resolution (FSR) features were enabled during testing.

Cyberpunk 2077 ran at an average 36fps at 1440p, registering a minimum of 28fps. The ultra ray-tracing preset and FSR 2.1 were automatically enabled.

The fact that the gaming results were 100% GPU bound indicates that the CPU was never a bottleneck, and that pairing the system with a top-tier gaming card would improve gaming performance further.

IDC Opinion and Conclusion

When AMD announced the Threadripper 5000 series in the Pro-only category, primarily for OEMs, the enthusiast community was left feeling let down. However, we are pleased that AMD did not abandon those customers for too long. AMD brought this category back to life after realizing — as its competitor had already done — that this is a prestigious and necessary niche market that cannot be satisfied by high-end consumer CPUs.

We are also pleased to see that the HEDT refresh with the Ryzen 7000 platform supports the newest and greatest in networking and connectivity with excellent I/O support, including PCIe 5.0 and DDR5 ECC registered memory modules (RDIMM/RDIMM-3DS), in addition to USB4 Type-C, 10 Gigabit Ethernet (10GbE), and Wi-Fi 7.

In the past, it was impossible to reach extremely high clock speeds while remaining stable and keeping voltage and temperature under control. This CPU, however, is remarkably quick and responsive: it can surge up to 5.1GHz when only a few cores are in demand, and sustains 4.1 to 4.7GHz when all cores are stressed, which is incredible. Furthermore, attaining memory transfer rates of up to 6400MHz is another productivity breakthrough, as it was previously difficult to overclock ECC RAM above its rated speed.

Aside from its intense performance, efficiency is the most striking aspect of the processor. Under full load, the Threadripper 7980X’s power consumption did not exceed 340W; high-end consumer CPUs with far fewer cores draw a similar amount of energy.
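One way to put that efficiency in perspective is points per watt, combining the Cinebench R23 score reported earlier with the peak draw measured here (a rough back-of-the-envelope metric of our own, not a formal IDC methodology):

```python
def points_per_watt(score: float, watts: float) -> float:
    """Benchmark points delivered per watt of package power."""
    return score / watts

# Cinebench R23 multi-core score and peak power draw from this review.
ppw_7980x = points_per_watt(92_817, 340)
print(f"{ppw_7980x:.0f} points/W")  # → 273 points/W
```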

Although the Radeon PRO W7700’s power draw stayed under 140W, we were not as satisfied with its clock speed, and suspect there was headroom for a higher frequency that was deliberately capped. With our 850W platinum power supply, we had no trouble operating the system overall, and were even able to install it in a midi tower case.

We would love to see more partner solutions for cooling to fully cover the processor’s integrated heat spreader as well as motherboard support for extreme high-end use cases that require up to seven or eight graphics cards. The Threadripper 7000 series is more than capable of handling booming AI, machine learning, and training solutions — as well as media production and automotive rendering workloads — when needed on desktop platforms.

AMD should consider a system integrator (SI) certification scheme, similar to AMD Advantage in gaming. By doing so, it can provide customers with reliable, better experiences on an all-AMD platform featuring the Threadripper and the Radeon PRO. This strategy would strengthen trust in the AMD brand and help SIs compete against OEMs with ISV-certified devices.

In conclusion, the AMD Ryzen Threadripper 7980X reigns supreme among HEDT CPUs. It delivers great performance straight out of the box, with most cores running at the highest clock speeds in a very energy efficient manner.

Mohamed Hakam Hefny - Senior Program Manager - IDC

Mohamed Hefny leads market research in EMEA on professional workstation PCs and solutions. He also reports on professional computing semiconductors, processors, and accelerators (CPUs and GPUs), as well as breakthroughs and trends related to the market. In addition, Mohamed is actively involved in AI PC taxonomy and research. He participates in business development projects, contributes to consulting activities, and provides IDC customers with analysis, opinions, and advice.