Globally, cities are rediscovering the importance of their rivers as central to the health, wellbeing, and economy of a city. A river was often, if not always, the reason for a city to develop and grow, but during the 20th century, city authorities began to focus primarily on the built environment and to see water management as a less important sub-issue of city management.

With the rise of environmental awareness in the 21st century, cities are beginning to re-examine the interrelationship between the built environment and their rivers. We have been tracking this new direction through our research on River Cities and how technology now allows us to instrument both water and the built environment in concert.

According to our research, 28% of local governments across EMEA are already investing in smart rivers with an additional 29% considering investing in the future (IDC Survey, December 2023).

The French are Reclaiming their Rivers

France is emerging as a leader in this process, and the clearest example of this will be the opening ceremony of the 2024 Olympic Games.

Paris is among the most visited cities in the world, and it is impossible to imagine Paris without picturing the Seine. Olympic opening ceremonies are historically held in a stadium, but France will be using the banks of the Seine for the ceremony to increase participation and celebrate the special relationship between the river and the city.

The Digital Twin Project

A further French example of this new thinking is captured in a recently published IDC Perspective, Building a River Digital Twin: A Case Study of the Port de Bordeaux. This document provides an overview of a project led by the Port de Bordeaux, the entity managing marine activities across Bordeaux and the Gironde Estuary.

The objective of the project is to create a digital twin of the estuary – at around 635 km², the largest estuary in Western Europe. The Gironde Estuary is formed by the confluence of the Dordogne and Garonne rivers and spans several cities, the main one being Bordeaux, with more than 250,000 residents. The Port de Bordeaux manages seven terminals and is the seventh-largest French port in terms of traffic.

The digital twin was built to help project participants in both their day-to-day tactical decision making (for example, information on water levels, pollution, and navigation) and their longer-term strategic challenges (adaptation to the impacts of climate change). The Port de Bordeaux developed 8 core goals for the digital twin project:

  • Sharing and developing knowledge of the river.
  • Promoting the exchange of data and operational results.
  • Anticipating the effects of climate change.
  • Identifying mitigation solutions.
  • Developing economic, recreational and tourism opportunities.
  • Preserving biodiversity and environmental wealth.
  • Developing coastal and river surveillance (alert systems).
  • Fostering replicability of the platform on other rivers.

 

An innovative aspect of this project is that the project team looked beyond environmental challenges to a broader set of objectives, including, for example, economic and recreational activities. This approach is centred on the view of a river as a complex ecosystem of different stakeholders and an integral part of the identity of the region. The project has a wide target audience, and the use cases, outputs and goals were co-created with the relevant stakeholders at the design stage.

Digital twins are at an early stage of adoption for rivers and marine environments. However, the application of technologies to the blue economy is increasing.

We predict that, by 2027, threatened by water scarcity and extreme weather, 40% of large cities will have digital twins of their water resources to manage water supply, quality, resilience and behavioural change (IDC Smart Cities and Communities 2023 Futurescape).

The Port de Bordeaux is an early adopter and can provide a model and blueprint for others to follow as a core principle of the project was making as many aspects as possible open source. Local stakeholders can upload their own data and use the GIROS platform to visualize their results. This supports a broader community of users being able to take advantage of the model.

Crucially, the Bordeaux project team intentionally designed the solution to be replicable on other rivers; while the numerical model for the Gironde is geographically specific, the framework and architecture of the solution are being made publicly available.

We are soon to publish our first Tech Innovator report on companies involved in rivers and water management, and we are keen to hear of innovative technology solutions and case studies involving river management for future reports. Please do get in touch with us: jdignan@idc.com, lbarker@idc.com, rletemple@idc.com

Remi Letemple - Senior Research Analyst, IDC Government Insights - IDC

Remi Letemple leads IDC’s Worldwide Sustainable Transportation and Smart Vehicles Strategies service, where he provides strategic guidance and thought leadership on the future of mobility and transportation. Operating at a global level, he is recognized as a subject matter expert in smart mobility and transportation technologies—including connected, autonomous, shared, and electric mobility—enabled by software-defined vehicle (SDV) architectures, over-the-air (OTA) updates, cloud and edge platforms, and AI, including generative AI.

After much back-and-forth on work models, bold ultimatums from employers, and staunch resistance from workers, European businesses are in the process of codifying different models of where, how, when, and why we work. One of the many reasons for this change is the speed with which technology, especially artificial intelligence and GenAI, has made it possible to work equally well in varying, flexible work models.

The downside of this rapid technology development is that European organizations simply cannot hire enough workers with current or deep skills – both technical and human. Do you manage highly distributed teams performing complex and interdependent tasks? Certainly not easy. Finding employees trained sufficiently well to safely transition to the use of GenAI solutions? Not easy either.

Enter the promise of automation, and in particular the ability of AI and GenAI tools both to facilitate repetitive tasks like coding, data entry, research, and content creation and to amplify the effectiveness of learning in the flow of work and secure company assets.

The following three predictions are examples of what work in Europe might look like over the next five years, considering the areas of work personalization, skills development, and the impact of climate change on office design.

Future of Work Predictions for 2024 and Beyond

  • Prediction 1: 60% of large businesses will upgrade hardware and software technologies to increase worker retention with personalized work experiences and enhanced collaboration by 2025.

Rapidly evolving technologies and work methods are forcing companies to upgrade hardware and implement new software technologies that support better employee experiences, personalization and improved collaboration.

Collaboration apps are becoming more visual and continue to develop features, not unlike multiplayer games, that enable a more personalized view of work and teams, better targeting of projects, and hands-on collaboration. Meetings and other work resources, including collaboration resources (workflows, meetings, new document formats, etc.), are transcribed and translated in real time, then captured, analyzed, and exploited alongside other integrated business data sources. The results enable faster and more personalized decisions and collaboration, including summaries using generative AI. AI solutions are gradually expanding the ways people consume content and data, and AI itself will become a digital collaborator.

  • Prediction 2: Enterprises will leverage personalized technology skills development to drive $1T in productivity gains by 2026, enabled by GenAI and automation everywhere.

As the development and use of technology in everyday work environments become more complex, organizations struggle to find experts for programming, security, architecture, operations, management, and many other roles. IDC data from 2023 shows that 43% of organizations lack the capability support needed to successfully implement automation.

One of the reasons GenAI adoption and experimentation has grown so rapidly is that everyday workers can see its immediate value. As new jobs come online due to new automation requirements and workers learn new skills, GenAI is being incorporated into tools that create employee training. Workers with entry-level skills can better target individual learning needs based on the speed with which GenAI can generate code, summarize data, and create first-draft multimedia products. This customized approach ensures that people (including IT staff) receive the most appropriate training, optimizing efforts to increase their skills and competencies as jobs evolve, including the need to program GenAI applications themselves.

  • Prediction 3: By 2028, organizations will invest in office climate havens, using asset-based/renewable energy to defray 30% of their ongoing operating costs.

It is not just work patterns that are rapidly changing. The environment we live and work in is rapidly changing too. As uncontrolled wildfires, climate change and extreme weather events become more common in Europe, the consequences are affecting human health and the ability to work effectively. Sustainability measures are no longer considered optional as organizations worldwide recognize them as necessary components of strategic planning and sustainable operational excellence.

In future, progressive companies will adopt a combination of innovative building design, digital twins, robotics, and integrated climate systems to create climate havens where workers and their families can both find relief and focus on work. Unfortunately, simply retrofitting existing buildings with AI or robotics adds energy demand to an already struggling European energy infrastructure.

Companies that invest heavily in asset-based energy (hydro/tidal, geothermal, solar, and wind) on-site in their climate havens can both offset their operating costs and potentially create a secondary revenue stream when they feed electricity back into the grid. This reverses the long-term trend of digital organizations threatening their local communities through excessive power usage, while improving community relations, employee retention, and talent recruitment.

 

All the above predictions have much in common: they seek to better understand the intersection of technology and human behavior. Science fiction predicts dystopian visions of mechanized and artificially controlled societies where human efficacy is threatened. IDC is far from that point of view, but we also see how the concerns raised by new technologies such as GenAI can play a big role in hindering adoption, for better or for worse.

Organizational leaders must invest time and money in the strategic planning for the adoption of AI and GenAI technologies, as well as the new roles and ways of working they create. This is not just a technology issue that affects computing, security, hardware, infrastructure, and integration requirements. It is also a human issue that must be addressed through employee empowerment, skill development, and the creation of appropriate, re-imagined career paths.

For more information on the impact of Automation on the European Future of Work, please access the following resources:

Meike Escherich - Associate Research Director, European Future of Work - IDC

Meike Escherich is an associate research director with IDC's European Future of Work practice, based in the UK. In this role, she provides coverage of key technology trends across the Future of Work, specializing in how to enable and foster teamwork in a flexible work environment. Her research looks at how technologies influence workers' skills and behaviors, organizational culture, worker experience and how the workspace itself is enabling the future enterprise.

This year’s Enlit Europe, which took place between November 28 and November 30 in Paris, attracted almost 12,000 visitors, 700 exhibitors from 100 countries, and 500 speakers, proving once again to be a reference point for the European (if not worldwide) utility sector.

Sessions on the energy transition (energy efficiency, electrification and decarbonization), flexibility, and digitalization, as well as numerous hub sessions, provided a great opportunity for knowledge sharing during the three-day event. Here are our key takeaways from discussions and debates with technology providers and utilities.

Among the conversations with various utility leaders, three key themes emerged that outline the direction in which this industry is moving.

  • Flexibility at the heart of energy transformation. One of the dominant conversations that continued this year at Enlit is the growing criticality of flexibility for the utility industry. With increasing renewable energy sources and the need to integrate distributed energy resources more effectively, utilities are increasingly focusing on operational flexibility. Additionally, booming electrification requires demand flexibility to mitigate the impact of the energy transition on grids, which are the invisible enabler of it all. Industry representatives stressed the importance of investing in technologies and systems that enable more dynamic grid management, ensuring more efficient and sustainable energy distribution and consumption.
  • The imperative of marketing. Another interesting aspect that emerged during the event was the growing success of utilities that understand the value of marketing to change customers’ perception of their company and the industry as a whole, while improving their relationship with consumers. Utilities that have invested in understanding consumer needs and have built strong brands are reaping the benefits. Utilities are at the heart of a transformation that impacts everyone and will set the stage for the next generations. If done right and marketed well, companies can turn misconceptions of the industry on their head, leading to newfound success.
  • What about Generative AI? Despite growing interest over the last year, the topic of GenAI was not as apparent as we would have expected. Discussions we had focused more on the benefits of horizontal applications of GenAI and very rarely on the industry-specific use cases that utilities should be digging into. Currently, the discourse on GenAI tends to be more high-level than practical, with utilities trying to figure out how to integrate this technology effectively into their daily operations. The largely uncharted territory of GenAI also raised additional conversations around artificial intelligence and machine learning overall and the untapped potential that still exists. And it all came back to the topic of “data” … the quality of the data, the frequency of the data, the amount of data, etc. The challenge now is for utilities to translate high-level discussions into concrete and practical action, successfully addressing industry challenges and capitalizing on emerging opportunities. And for this they need the help of their peers and the technology ecosystem that surrounds them.

Overall, it was positive to see Enlit returning to its pre-COVID bustle, with a diverse pool of companies exhibiting on the floor, from both a software and a hardware perspective. Let’s hope the onsite enthusiasm trickles into utilities’ daily activities, fostering more drive toward the energy transition.

Here’s to quickening progress in 2024 to be discussed when we meet in Milan at next year’s Enlit Europe.

For more of our coverage on the energy market, visit our website.

In 2023, I attended and spoke at many IDC conferences, such as the IDC Government Xchange, IDC Portugal Directions, and non-IDC events, like the Smart Cities Expo and the ServiceNow World Forum in Rotterdam, to name a few. One common thread of many of the conversations that I had and the presentations I listened to was how executives felt the pressure to increase their organizations’ speed, to keep up with the fast pace of innovation.

Everyone seemed obsessed with speed being the big difference between how innovation happens today versus how it happened twenty, fifty, or a hundred years ago. That may be true for enterprises that want to be fast followers, for instance to drive incremental productivity improvements through digital transformation; in fact, IDC’s research indicates that the average time to value of digital projects was in the 6-to-24-month range two years ago, while now it is less than 10 months.

But enterprises that are looking for opportunities to re-shape their destiny and gain competitive advantage should take a closer look at the way technology innovation shapes the creation of new markets. And that is different today, not only because of its pace, but also because of the dynamics among the key attributes of a market.

No matter whether one tries to interpret market dynamics through the lenses of neoclassical economics, the Austrian school of economics, institutional economics, or other theories, the common elements of market formation are the exchange of products and services; the narratives built around buyer and seller values that define how they perceive the benefits and risks of those products and services; and the norms – including laws, policies, and standards – that regulate the exchange to maximize benefits and reduce risks.

From Linear to Warped Innovation

In the past, the interplay between exchanges, narratives, and norms not only took a long time to come together but was quite linear. When the car market formed, Daimler and others invented the product in the 1880s; then Henry Ford and Alfred Sloan at GM shaped the narratives of the car for the mass consumer in the early 1900s; and it was not until the 1950s that car safety regulations started to become more pervasive.

Overall, it took over 70 years for exchanges, narratives, and norms to fall into place and in a linear sequence. Fast-forward to today to look at how the Generative AI market is shaping up. In 2017, Google researchers published a paper on transformer models that was born out of a specific need – making language translation more efficient – but was soon understood as the seminal moment of a new category of product and services based on large language models (LLMs).

In late 2022, OpenAI's public launch of ChatGPT detonated a new narrative about mass usage of LLMs to search and synthesize knowledge and create new content, be it images, text, or computer code. While ChatGPT was launching, the EU was finalizing its AI Act but decided to delay the completion of the draft regulation to consider the impact of GenAI.

In essence, over the course of six years, the development of products and services to be exchanged, the narratives, and the norms started to interplay. One year after the launch of ChatGPT, new products and services are constantly coming to market in the form of public platforms, domain-specific models, and capabilities embedded in enterprise software; the narratives around the value and risk of GenAI have not yet crystallized, with suppliers and buyers trying to figure out the impact across the most disparate use cases; and the AI Act was modified and then went through a first approval cycle.

So not only was the timeline compressed, but the dynamic interplay of exchanges, narratives, and norms was (and still is) far from linear. It is a warped dynamic: not only is it happening at “warp speed” compared with the past, it is also sometimes convoluted and unpredictable because of the feedback loops among all its determinants.

In many scientific fields, such as nanomaterials, bio-engineering, space tech, and nuclear fusion, new discoveries are accelerating, often powered by emerging tech such as AI and quantum computing. At the same time, there are plenty of societal challenges where technology innovation can find applications, such as energy efficiency and climate change resilience, healthy and sustainable food for all, smart and sustainable roads, and sustainable use of precious water resources.

These momentous changes will bring more warped innovation and less linear innovation. Enterprises and their technology partners that want to shape new markets in this context need to consider that:

  • Proving the value or ROI of technology innovation will revolve around business and revenue model re-imagination, creation of new industries and ecosystems, nurturing of jobs and skills for the future, rather than considering only efficiency and speed of product and service development.
  • Governing innovation will require making organizations more permeable to bring together stakeholders across enterprises and often across industries to explore new ecosystems through virtual joint ventures and outcome-driven joint development projects, rather than scaling partnerships with suppliers and customers along familiar value chains.
  • Business value will be created by investing in organizational capacity and skills that nurture collaboration, curiosity, data literacy, storytelling and scenario narrative, user-centricity, and pervasive resilience that enables them to withstand the failures and mistakes that come from serendipitous trial-and-error iterations.

Private and public sector leaders that want to succeed in shaping new markets in the world of warped innovation should concede some slowness and sloppiness to shape the intricate interplay between turning new scientific discoveries into products and services that can be exchanged, crystallizing the narrative around the social and economic values that drive buyers and sellers, and designing the norms that maximize benefits and minimize risks.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.

As it’s ChatGPT’s first birthday, now seems like a good time to look back at what its arrival has sparked, and to look forward at what might happen next with Generative AI.

There is no doubt that when OpenAI launched ChatGPT in late November 2022, it kickstarted a huge new wave of excitement about the potential for AI within business. A major survey conducted by IDC in August 2023 showed that in just a few months, we arrived at the situation where 75% of European organizations said they were already working with Generative AI (GenAI) in some capacity. What’s more, 18% of European organizations said that GenAI had already disrupted their business to some extent.

In August 2023. That’s just 9 months after ChatGPT’s debut.

It’s important that we remember that GenAI didn’t come from nowhere: prior to ChatGPT’s launch, a number of organizations (including Google, and OpenAI itself) had been working on GenAI technologies for years. But it was ChatGPT’s user-friendly conversational interface, and free service, that created high levels of awareness about what GenAI might be able to do – in an extraordinarily short space of time.

Of course, it’s important to note (as we have in numerous publications) that the GenAI opportunity is about much more than chatbots. Organizations are exploring use cases spanning marketing content generation, knowledge management, automation of software development activities, and much more.

Together, this span of use cases is driving massive interest. Some other key insights from IDC’s August GenAI survey:

  • 55% of European organizations said their C-Suite leaders are actively engaged with IT leaders about GenAI on a regular basis.
  • European C-Suite leaders are most interested in how GenAI can have an impact on customer experience (24.3% said this was the most sought information); how it can improve the performance of decision making (18.1%) and how it can improve employee productivity (15.8%).
  • European C-Suite leaders want to move fast: 88% of respondents said their C-Suite leaders wanted to integrate GenAI into applications and processes within 18 months.
  • On average, European organizations expected that investments in GenAI would account for 11% of new IT project budget in the next 18 months.

IDC forecasts that worldwide spending on GenAI implementation will reach $143.1B by 2027: that accounts for about 28% of the expected overall spending on AI implementation in that year. At a 5-year CAGR of 73.3% between 2023 and 2027, this represents a colossal market opportunity that will significantly affect existing hardware, software, and services markets.
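As a sanity check on the arithmetic behind those figures, a 73.3% CAGR compounding from 2023 to 2027 implies roughly a ninefold expansion over four years, which puts the implied 2023 base at around $16B. This is our back-of-the-envelope calculation, not an IDC-published base figure:

```python
# Back-of-the-envelope check of the forecast's internal arithmetic.
forecast_2027_b = 143.1   # IDC forecast for 2027, in $B
cagr = 0.733              # 5-year CAGR, 2023-2027 (4 compounding periods)

growth_factor = (1 + cagr) ** 4
implied_2023_base_b = forecast_2027_b / growth_factor

print(f"growth factor 2023->2027: {growth_factor:.1f}x")       # ~9.0x
print(f"implied 2023 base: ${implied_2023_base_b:.1f}B")       # ~$15.9B
```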

As a result, it’s natural that OpenAI has now been joined by many new competitors all aiming to provide commercial GenAI models. The company now has competition from AWS, Google, and IBM, as well as other specialists (Cohere, Anthropic, Mistral, Inflection and Aleph Alpha are some examples). And enterprise application vendors like SAP, Salesforce and ServiceNow are leveraging open-source alternatives as well as partnering with a wide range of commercial model providers, in order to embed GenAI features in their application suites and platforms.

So – with GenAI set to be a major force in enterprise technology over the coming years, what happens next?

I’ve long wondered whether OpenAI might be an outlier in terms of how it approaches the GenAI opportunity; and indeed, whether its strategy makes it risky to focus too much on what OpenAI does, as a way to get a sense of overall market direction. I’ve said multiple times to colleagues and clients that OpenAI is more like a research outfit that is figuring out how to make its tech available to the world, than a product company.

Certainly, OpenAI has been culturally distinct from its competitors since before the launch of ChatGPT. While its competitors are primarily focused on developing products that businesses can use, OpenAI has operated as a hybrid between a not-for-profit research outfit committed to trying to develop what it calls “AGI” (Artificial General Intelligence) – something that many commentators feel is a very long-term project – and a commercial venture. Because it is at least partly focused on this long-term mission, OpenAI is less focused than many of its competitors on meeting the real-world current needs of business customers. Which means that although OpenAI created the market that we see today, its future as a significant force in the market is far from guaranteed.

This question has come into sharp focus in recent days, as a soap opera has rapidly unfolded at OpenAI. On November 17th, without any warning, the company’s board fired its CEO, Sam Altman. The company’s President and co-founder Greg Brockman also quit. But within 48 hours, investors in the company called for Altman’s reinstatement as CEO, and for the board to quit instead. At the same time, Satya Nadella, CEO of Microsoft (OpenAI’s strategic investment backer) announced that Altman and Brockman would join Microsoft to found a new AI research lab; and hundreds of OpenAI employees signed an open letter to OpenAI’s board, saying they would quit unless Altman and Brockman were reinstated.

At one year old, human children are a hot mess of crying, screaming and unexpected and unmanaged bodily functions. Based on current events, it looks like the organization behind ChatGPT, which kicked off so much industry excitement, might be exhibiting the same tendencies…

 

For more information on GenAI in EMEA, download our eBook: Generative AI in EMEA: Opportunities, Risks, and Futures, or visit our website.

Neil Ward-Dutton - VP AI, Automation, Data & Analytics Europe - IDC

Neil Ward-Dutton is vice president, AI, Automation, Data & Analytics at IDC Europe. In this role he guides IDC’s research agendas, and helps enterprise and technology vendor clients alike make sense of the opportunities and challenges across these very fast-moving and complicated technology markets. In a 28-year career as a technology industry analyst, Neil has researched a wide range of enterprise software technologies, authored hundreds of reports and regularly appeared on TV and in print media.

Heavy machinery, automotive, and machine building typically have complex bills of material, multi-tier supply chain networks, and depend on carbon-intensive materials such as steel, aluminum, and plastics. To meet sustainability goals, these engineering-oriented value chains (EOVC) must undergo a transformative shift.

Manufacturing organizations stand at the forefront of decarbonization efforts worldwide. In 2022, IDC’s Industry Intelligence Survey found that customer requirements drove investments in sustainability as a strategic business priority for 45% of U.S. and 39% of European manufacturing respondents. For 40% of U.S. and European respondents, regulatory requirements were the leading sustainability investment driver.

Decarbonizing the entire value chain — with a particular focus on Scope 3 emissions — is central to the evolution of EOVCs. Scope 3 emissions represent a significant share of a company’s overall carbon footprint, extending beyond direct operational activities (Scope 1) and indirect energy consumption (Scope 2). Scope 3 emissions encompass indirect emissions generated across the value chain, including the production of materials like steel, plastics, aluminum, batteries, and glass.
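To make the Scope 1/2/3 distinction concrete, corporate greenhouse gas accounting simply sums the three scopes, with Scope 3 itself aggregated from value-chain categories such as purchased materials. The sketch below uses invented figures purely for illustration; it is not data from the survey cited above:

```python
# Illustrative corporate footprint in tonnes CO2e (all figures invented).
scope1 = 12_000   # direct operations, e.g. on-site fuel combustion
scope2 = 8_000    # purchased electricity and heat

# Scope 3: indirect value-chain emissions, dominated by purchased materials.
scope3_categories = {
    "steel": 45_000,
    "aluminum": 20_000,
    "plastics": 10_000,
    "batteries": 30_000,
    "logistics": 5_000,
}
scope3 = sum(scope3_categories.values())

total = scope1 + scope2 + scope3
print(f"Scope 3 share of total footprint: {scope3 / total:.0%}")
```

Even with invented numbers, the structure shows why Scope 3 dominates for engineering-oriented value chains: the purchased-materials categories dwarf a manufacturer's own direct and energy-related emissions.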

Understanding the dynamics of affordable (and available) clean and renewable energy is crucial to developing an emissions-free supply chain. Europe, however, faces significant challenges in deploying the low-carbon energy resources crucial to decarbonizing supply chains in general.

Many challenges related to value chain decarbonization are addressed at the C-suite level. However, the roles that must implement these strategies include material engineers, procurement department leaders, quality managers, and supplier management leaders.

As material and production technology evolves, new components are developed, and new regulations emerge, those in supplier network management-related positions must have detailed knowledge about the impact of component materials on carbon footprints. We are not talking about emissions related solely to logistics, but about the carbon footprint of the production process itself.

Products like steel, aluminum, electric batteries, and plastics are often referred to as “hotspots” — that is, making them produces major emissions of CO2 and other greenhouse gases and is a leading contributor to the auto industry’s emissions footprint.

According to a McKinsey & Company study, typical upstream EV emissions include the battery (40%–60%), steel (15%–20%), aluminum (10%–20%), and plastics (around 10%). Upstream internal combustion engine (ICE) vehicle emissions include steel (25%–35%), aluminum (20%–30%), and plastics (15%–20%).

Let’s briefly examine the carbon-emitting hotspots in EOVC supply chains.

Batteries

The rise of EVs has highlighted the environmental impact of battery production. Manufacturing lithium-ion batteries involves resource-intensive processes that contribute to Scope 3 emissions. EV batteries contain nickel, manganese, cobalt, lithium, and graphite, which emit substantial amounts of GHGs during their mining and refining processes.

Some processes in the production of anode and cathode active materials require high, energy-intensive temperatures. Other factors that determine the amount of embedded production carbon include battery chemistry, the production technology, the raw material suppliers, and transportation routes.

Oliver Zipse, chairman of the Board of Management of BMW, said in a statement that the company’s competence center near Munich is laying the technological foundations for the efficient and resource-saving production of battery cells along the entire value chain. The statement said sample production of sixth-generation round cells has already begun. These cells are characterized by an up to 20% higher energy density, and BMW has been able to reduce the CO2 footprint in cell production by up to 60%, according to the statement.

  • Worth Watching: On November 21, 2023, Swedish company Northvolt announced a state-of-the-art sodium-ion battery developed to expand cost-efficient and sustainable energy storage systems. The cell has been validated for an energy density of 160+ watt-hours per kilogram at the company’s R&D and industrialization campus in Västerås, Sweden. This energy density is close to that of the type of lithium batteries typically used in energy storage. Lithium batteries used in electric cars have an energy density of up to 250–300 watt-hours per kilogram. Northvolt says the technology can minimize dependence on China for the green transition. Battery designers and engineers, as well as supply chain managers, are advised to keep an eye on the company’s efforts to scale the technology for industrial use.


Steel

Traditional methods of steel production cause high emissions due to the use of fossil fuels in the smelting process. Decarbonization efforts involve adopting innovative technologies like hydrogen-based steelmaking and electric arc furnaces powered by renewable energy. Transitioning to sustainable steel production is vital to mitigating the impact of Scope 3 emissions and reducing the automotive industry’s overall carbon footprint.

Plastics

Plastics, widely used in automotive components, pose an environmental and sustainability challenge. The production of plastics, particularly from petrochemical sources, contributes significantly to carbon emissions. Addressing this hotspot involves embracing circular economy principles, recycling plastics, and developing bio-based alternatives. Recycling initiatives and reducing dependence on fossil fuels for plastic production will enable the automotive industry to make substantial strides in Scope 3 emissions reduction.

Aluminum

Aluminum, valued for its lightweight properties crucial to fuel efficiency, is a key material in automotive manufacturing. Traditional aluminum production is energy-intensive and contributes to significant carbon emissions. The adoption of recycled aluminum, coupled with advancements in low-carbon primary aluminum production, is essential to mitigate environmental impacts. Innovations in aluminum production processes (e.g., smelting using renewable energy sources) offer promising avenues for reducing Scope 3 emissions.

Conclusion

Collaboration across the entire value chain — from raw material suppliers to manufacturers and consumers — is critical to drive meaningful change and accelerate the transition toward a low-carbon EOVC sector.

Establishing a transparent and trusted carbon-free environment requires an understanding of the entire Scope 3 upstream supplier footprint. Understanding the Scope 2 emissions of each supplier is also essential. Acquiring this level of transparency requires tools and data platforms that offer access to trusted information provided by suppliers and suppliers’ suppliers, as well as tools that monitor OEM compliance with regulatory obligations.
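The roll-up logic described here can be sketched in a few lines. This is a hypothetical data model, not any specific platform's schema; the supplier names, field names, and tonnage figures are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of rolling up an OEM's upstream Scope 3 footprint from
# supplier-reported data. Each supplier reports its own Scope 1 and Scope 2
# emissions; from the OEM's perspective, the recursive total across the
# supplier tree feeds into the OEM's upstream Scope 3 figure.
@dataclass
class Supplier:
    name: str
    scope1_t: float  # direct emissions, tonnes CO2e
    scope2_t: float  # purchased-energy emissions, tonnes CO2e
    sub_suppliers: list["Supplier"] = field(default_factory=list)

def upstream_footprint(supplier: Supplier) -> float:
    """Recursively sum Scope 1 + Scope 2 across a supplier and its sub-suppliers."""
    total = supplier.scope1_t + supplier.scope2_t
    for sub in supplier.sub_suppliers:
        total += upstream_footprint(sub)
    return total

# Invented example: a tier 1 chassis supplier sourcing from an aluminum smelter.
smelter = Supplier("aluminum smelter", scope1_t=1200.0, scope2_t=300.0)
tier1 = Supplier("chassis supplier", scope1_t=400.0, scope2_t=150.0,
                 sub_suppliers=[smelter])
print(upstream_footprint(tier1))
```

The point of the sketch is the recursion: transparency has to extend past tier 1 to suppliers' suppliers, which is exactly why shared data platforms matter.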

Decarbonizing the entire manufacturing supply chain will inevitably depend on ubiquitous data. Sustainable zero-carbon efforts span not only the visible chain of tier suppliers but also primary and secondary raw material processing plants and green energy providers.

Automotive, machinery, and heavy machinery OEMs may share suppliers; hence, an OEM can benefit from the sustainability-related transparency of the supplier network established by another OEM.

Utilizing a secure, scalable, and transparent digital data collection platform is an absolute must to successfully achieve the net-zero supply chain transition. I was pleasantly surprised to find that 70% of global manufacturing respondents to IDC’s 2022 Industry Intelligence Survey were already using cloud infrastructure to support sustainability metrics.

My Recommendation: Go beyond the obvious. In addition to Scope 3, focus on Scope 1 and Scope 2 of each entity in your supply chain. Turn suppliers into ecosystem stakeholders. Provide them with knowledge, help develop their workforce, and offer digital technology support!


To find out more about our Manufacturing Insights coverage, visit our website.

Generative AI (GenAI) attracted significant interest in 2023 and has already begun to impact horizontal and industry applications and use cases. Our predictions for 2024 anticipate that by 2026, half of G2000 companies will have integrated GenAI with operational systems to better ingest data, identify issues, and provide real-time context to operators, improving efficiency by 5%.

GenAI’s influence on the manufacturing sector is poised to be pivotal. It has already triggered a transition in which AI is omnipresent, no longer an emerging segment within the technology stack.

Numerous firms, including industrial organizations, are assessing how AI can bring value to their operations. They may not have been early adopters of GenAI, but industrial organizations are well placed to utilize the technology to generate diverse content and conduct extensive research. Algorithms can be trained using existing large data sets to produce text, video, images, and even virtual environments.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Guidelines to Develop GenAI-powered Use Cases

To help organizations learn from company experiences, successes, and challenges in developing GenAI-powered use cases, I have established some guidelines:

  1. Do Not Underestimate Implementation

GenAI holds a lot of promise, but implementation carries risks that adopters have to watch very carefully. Appropriately trained and utilized, it proves reliable and can be implemented at a reasonable cost. From my perspective, organizations should view GenAI-powered solutions as an integral part of a digitally enabled strategy, particularly in fields like asset maintenance.

It’s essential to meticulously plan each phase of the solution’s implementation journey. The desired goals should be outlined, and key performance indicators should be identified. Regarding ROI, the total cost of ownership should be accounted for, including OPEX.

During the planning stages, organizations should project how the solution will scale and integrate with existing IT systems (especially in terms of technology standards). Organizations should also not undervalue the importance of the post-implementation period. Establishing review cycles with technology partners is crucial to ensure that user feedback is appropriately addressed. Finally, organizations should engage in discussions with experts who can provide insights into other areas that could benefit from GenAI solutions.

  2. Expand on Technology Partnerships

I recommend that organizations forge partnerships with technology providers and establish trusted relationships that foster the sharing of goals, capabilities, and values. A collaborative approach enables organizations to expedite and expand innovation. Due to the potentially lengthy journey from proof of concept to implementing company-wide solutions, organizations should ensure that their partners are capable of delivering scalable solutions and offering guidance throughout the implementation process.

When constructing a private and secure GenAI environment, organizations should consider technology partners capable of transferring internal data into large language models (LLMs) securely and without loss. Such partners can also facilitate knowledge transfers to internal staff for ongoing management and proficiency.

  3. Keep Security in Mind

Organizations should be on guard against potential data leaks and biases, while also retaining control over the IT processes operating in the background. It is vital to establish a governance mechanism to tackle concerns related to privacy, manipulation, biases, security, transparency, disparities, and potential workforce displacement.

I suggest actively participating in specialized drills aimed at mitigating the risk of sophisticated phishing attacks. Organizations can also enhance data security by updating their data infrastructures to meet the expanding data requirements of GenAI models.

  4. Be Creative in Finding New Use Cases

Organizations should prioritize using AI to deliver value and enhance business outcomes; AI should not be pursued for its own sake. The decision-making process regarding ROI involves various parameters. Early adopters have suggested focusing on one of the most critical aspects: the strategic fit of the investment. A fundamental approach is to give priority to initiatives that offer the most beneficial outcomes but require the least effort. Based on the experiences of GenAI adopters, I support adopting an agile methodology and the minimum viable product (MVP) strategy, which should prevent investment in non-value-added projects.
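The "most benefit, least effort" heuristic mentioned above can be made concrete with a simple scoring pass over a use-case backlog. The use-case names and scores below are invented for illustration; real prioritization would involve more parameters than this two-axis sketch:

```python
# Hypothetical GenAI use-case backlog scored on business impact and
# implementation effort (1-5 scales, invented for illustration).
# The heuristic from the text, prioritize high impact and low effort,
# becomes a simple impact/effort ratio sort.
use_cases = [
    {"name": "maintenance report summarization", "impact": 4, "effort": 2},
    {"name": "shop-floor copilot",               "impact": 5, "effort": 5},
    {"name": "supplier document Q&A",            "impact": 3, "effort": 2},
]

ranked = sorted(use_cases, key=lambda uc: uc["impact"] / uc["effort"], reverse=True)
for uc in ranked:
    print(f'{uc["name"]}: impact/effort = {uc["impact"] / uc["effort"]:.1f}')
```

A ranking like this pairs naturally with the MVP strategy: the top-ranked candidates become the first MVPs, and low-ratio projects are deferred or dropped.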

In a recent interview, an end user revealed that 100+ potential use cases were identified during GenAI ideation workshops. Of these, two have already been launched as MVPs, and 14 are in active development.

Watch the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

Conclusion

GenAI solutions are transforming manufacturing operations, improving efficiency, facilitating data-driven decision-making, and simplifying complex processes for frontline workers. By implementing these innovative practices, organizations can adapt to the changing manufacturing landscape and significantly enhance operations.

Our research indicates that the adoption of GenAI by manufacturing organizations is still in the early stages. However, there has been a notable increase in GenAI awareness: IDC’s July 2023 Future Enterprise Resiliency and Spending Survey revealed that just 19% of manufacturing organizations were unaware of GenAI, compared to 35% in March 2023. This trend suggests that GenAI is steadily being integrated into the technology frameworks of organizations, putting them on an innovation trajectory.

To explore more of our coverage on GenAI, visit our dedicated page.

In September 2023, two and a half years after the launch of the Radeon RX 6700 XT and Radeon RX 6800 XT, AMD introduced two new GPUs to round out its new RDNA 3-based graphics card portfolio: the midrange Radeon RX 7700 XT and the high-end Radeon RX 7800 XT. AMD also published the most recent version of AMD Software: Adrenalin Edition, which introduced AMD HYPR-RX and FidelityFX Super Resolution 3 (FSR 3) with frame generation technologies. In addition, AMD released a technical preview driver that allows all DirectX 11 and DirectX 12 games to benefit from Fluid Motion Frames, a frame generation technology.

The 12 GB AMD Radeon RX 7700 XT and 16 GB Radeon RX 7800 XT graphics cards are equipped with second-generation AMD Infinity Cache technology and are based on the groundbreaking AMD RDNA 3 architecture. They offer 1440p high refresh rate gaming experiences with good performance at reasonable prices.

The AMD Radeon RX 7800 XT and RX 7700 XT graphics cards, which have suggested retail prices of $499 and $449, respectively, went on sale on September 6, 2023. For this review, we installed sample AMD cards in two systems, each with a Ryzen 9 7950X CPU, a Gigabyte X670E Aorus Master motherboard, and a G.SKILL Trident Z5 Neo 2x16GB DDR5-6000 EXPO memory kit.

Architecture

RDNA 3

The Radeon RX 7700 XT and RX 7800 XT have 54 and 60 unified AMD RDNA 3 compute units, respectively. These new cards are built on the RDNA 3 architecture, which includes new Infinity Cache technology, AI accelerators, and second-generation raytracing accelerators. The AMD Radiance Display Engine, with DisplayPort 2.1 support for high refresh rate displays, is also incorporated into the new cards.

2nd Generation AMD Infinity Cache

The new cache hierarchy has been rebalanced for the ideal mix of 2nd Generation Infinity Cache and L2 cache, allowing fast data access and acting as a significant bandwidth amplifier despite being half the size of the cache in RDNA 2-based GPUs. The new cards thus deliver better performance and are more power efficient than previous models.
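The "bandwidth amplifier" effect can be illustrated with a naive effective-bandwidth model. The hit rates and bandwidth figures below are placeholder numbers chosen for illustration, not AMD specifications:

```python
def effective_bandwidth(hit_rate: float, cache_bw_gbs: float, mem_bw_gbs: float) -> float:
    """Naive model: requests that hit the cache are served at cache bandwidth,
    misses fall through to VRAM bandwidth. Illustrative only."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * mem_bw_gbs

# Placeholder figures, not AMD specs: an on-die cache several times faster
# than VRAM lifts effective bandwidth even at a moderate hit rate.
print(effective_bandwidth(0.5, 2000.0, 600.0))  # mixed workload
print(effective_bandwidth(0.0, 2000.0, 600.0))  # no-cache baseline
```

The model also hints at why a smaller cache can still win: what matters for amplification is the hit rate the hierarchy sustains, not raw capacity.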

Media Engine

Like the previously announced RDNA 3-based GPUs, the new media engines on the Radeon RX 7700 XT and RX 7800 XT have hardware-accelerated support for AV1 encoding up to 8K resolution at 60 frames per second (FPS). It is now possible to produce output videos at smaller file sizes while maintaining the same bitrate and quality. The current versions of OBS, DaVinci Resolve, and Adobe Premiere Pro, with the Voukoder plug-in, all support AMD RDNA 3 Media Engine AV1 encoding. Support for FFmpeg and HandBrake encoding will be introduced in the future.

Game Bundles

AMD continues to offer newly released AAA gaming titles alongside its launches of new products and seasonal promotions. AMD is the exclusive PC partner for Starfield, Bethesda Game Studios’ first new universe in almost 25 years. Starfield was created by the award-winning designers of The Elder Scrolls V: Skyrim and Fallout 4. The Radeon RX 7800 XT and Radeon RX 7700 XT, as well as qualifying Radeon + Ryzen PCs offered with these graphics cards, are eligible for the Starfield Premium Edition package, which gives gamers free access to the game.

Adrenalin Software

The latest AMD Adrenalin Edition driver update adds additional performance and feature upgrades. The new HYPR-RX and AMD Radeon Anti-Lag+ technologies allow for next-generation gaming experiences on AMD Radeon RX 7000 Series GPUs.

To produce a performance-stacking effect, the AMD HYPR-RX technology streamlines and handles the simultaneous interoperation of AMD Radeon Super Resolution, Radeon Anti-Lag, Radeon Anti-Lag+, and Radeon Boost. AMD Radeon Anti-Lag+ allows players to reduce input latency. However, AMD encountered a compatibility problem between Anti-Lag+ and some anti-cheat technologies used in multiplayer games. This problem prompted the company to release the AMD Software: Adrenalin Edition 23.10.2 driver, which disables Anti-Lag+ technology in all supported games. AMD now advises gamers to use the new driver. AMD also stated that it is actively working with game developers on a solution to re-enable Anti-Lag+ and reinstate gamers who have been affected by anti-cheat restrictions.

Performance

Scale-Up

Because it employs DirectX 12 Ultimate Raytracing tier 1.1 for real-time global illumination and raytraced reflections as well as new performance enhancements such as Mesh Shaders, 3DMark Speed Way is an ideal synthetic benchmark for comparing the performance of the latest AMD graphics cards with their predecessors.

The Radeon RX 7700 XT represents a great generational jump over the RX 6700 XT. The RX 7800 XT, however, did not scale up against the RX 6800 XT in the same way, as the RX 6800 XT's 12 additional compute units offset the gains of the new architecture and its quicker memory.

Ultra-Wide 1440p Gaming with Radeon RX 7700 XT

Games were tested on a 34-inch ultra-wide quad-HD 1440p monitor with a 144Hz refresh rate, FreeSync, and 10-bit color. We used the games’ maximum graphics settings, with ultra raytracing, FSR, and HYPR-RX enabled.

In the Forspoken demo, AMD FSR 3 was put to the test with the Radeon RX 7700 XT. The average FPS jumped from 55 to 96. During gaming, there was no latency or stuttering.

Gaming in 4K with Radeon RX 7800 XT

The Radeon RX 7800 XT was tested at 4K resolution and maximum graphical settings, with ultra raytracing enabled where possible. In a demanding game like Microsoft Flight Simulator 2020, with the FlyByWire A32NX and the Terrain LoD set to 400, the AMD Radeon RX 7800 XT had an average FPS of 41 and a 1% low frame rate of 35 FPS for a smooth and predictable experience in the cockpit.

Flight Simulator 2020 is an ideal game for the technical preview driver with AMD Fluid Motion Frames, as the simulator is quite CPU bound when playing on ultra settings. Limiting the FPS to 30 with RivaTuner Statistics Server and enabling Fluid Motion Frames resulted in a smooth 60 FPS experience, even over highly detailed areas such as New York or London.

With the typical slow and smooth scenery movements from the cockpit view, Fluid Motion Frames technology consistently generated additional in-between frames for a great flight sim experience.

Starfield achieved an average 42 FPS at 4K ultra settings on the AMD Radeon RX 7800 XT without resolution scaling. However, the FPS dropped significantly in built-up areas within the game. Game performance improved when the settings were lowered, with an average 50 FPS recorded at high settings. AMD Fluid Motion Frames can be enabled for Starfield with the technical preview driver. While Fluid Motion Frames helps smooth some areas of the game, processing can temporarily stop when there are rapid direction changes during gameplay. This results in an FPS drop and stuttering, just when a gamer needs additional frames to smooth out motion. Improving this capability of Fluid Motion Frames in the driver would markedly improve the overall experience and make RDNA 3 and RDNA 2-based cards much more usable over time, especially as more demanding games come to market.

IDC Opinion and Conclusion

The RX 7700 XT, which at $449 is $30 less than the RX 6700 XT was at launch, nonetheless delivers a noticeable increase in performance. The RX 7700 XT outperforms the previous generation in raytracing games, with up to 40% better performance. With the Radeon RX 6700 XT, playing demanding games like Cyberpunk 2077 at maximum visual settings was impractical: an average 27 FPS was recorded with ultra raytracing enabled on an ultrawide 1440p monitor. In contrast, the RX 7700 XT’s 12GB of VRAM did not pose any restrictions at 1440p. The card is a worthwhile improvement and will become more popular if its price decreases in the future.

The AMD Radeon RX 7800 XT is a more complex proposition for the consumer. The AMD RX 6800 XT’s suggested retail price at launch was $649, while the RX 7800 XT costs $499. With fewer but higher-performing compute units, the Radeon RX 7800 XT performs broadly on par with the Radeon RX 6800 XT but costs about 23% less than the latter’s original launch price. The card thus represents much better value, especially as the stock of end-of-life RDNA 2-based GPUs dries up.
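The value comparison is simple arithmetic. The launch prices are the ones quoted in this review; the discount percentage is computed here rather than taken from AMD:

```python
# Launch prices as quoted in the review above.
rx6800xt_launch = 649.0  # RX 6800 XT suggested retail price at launch
rx7800xt_launch = 499.0  # RX 7800 XT suggested retail price at launch

# Discount of the RX 7800 XT relative to the RX 6800 XT's launch price.
discount = (rx6800xt_launch - rx7800xt_launch) / rx6800xt_launch
print(f"{discount:.1%}")
```

At roughly comparable performance, a discount of this size is the whole value argument for the newer card.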

AMD has definitely read the market and is taking competition in the midrange gaming market seriously, as evidenced by the lower launch prices of its new cards. Due to market demand, AMD will likely reduce the suggested price of the RX 7700 XT even more in the near term, given the small $50 (10%) price difference between it and the RX 7800 XT and the much greater overall performance of the RX 7800 XT.

Mohamed Hakam Hefny - Senior Program Manager - IDC

Mohamed Hefny leads market research in EMEA on professional workstation PCs and solutions. He also reports on professional computing semiconductors, processors, and accelerators (CPUs and GPUs), as well as breakthroughs and trends related to the market. In addition, Mohamed is actively involved in AI PC taxonomy and research. He participates in business development projects, contributes to consulting activities, and provides IDC customers with analysis, opinions, and advice.

Virtual care is at a critical juncture in its development. The rapid rise of virtual care during the pandemic sparked discussions about its ongoing significance. While numerous studies have highlighted its advantages, the future of virtual care is now under scrutiny.

As the novelty effect fades, both healthcare professionals and patients are redefining their expectations and preferences, leading to valid questions about the path ahead for virtual care.

What is Virtual Care?

Definitions in this space abound, but from a general perspective, virtual care encompasses all the remote interactions between healthcare organizations and patients, enabled by digital tools, to deliver health services, promote engagement, and enhance health and well-being. It includes services and organizations focused on patient education and advice, but also services that entail diagnosis, monitoring, and treatment.

There is a broad spectrum of technologies enabling virtual care, covering connectivity, collaboration, clinical information systems, consumer technologies, MedTech, etc. These components enable communication, data exchange and analysis, transforming how healthcare is delivered and experienced.

Main Benefits and Limitations of Virtual Care

The adoption of virtual care in European healthcare systems has triggered profound shifts and delivered benefits across multiple dimensions. It has brought about notable improvements at the system level, as well as advancements within healthcare organizations. Virtual care has had a positive impact on patient outcomes and experiences, underscoring its significance in shaping the future of healthcare.

According to IDC’s December 2022 Consumer Pulse Survey, healthcare consumers reported comparable satisfaction levels with virtual visits as they did with traditional in-person appointments. When focusing on some macro-level benefits, virtual care has shown its pivotal role in healthcare by relieving ER pressure, freeing beds for patients with more critical needs, and improving operational efficiency.

For example, the English NHS recently surpassed its target of 10,000 virtual ward beds by September 2023. These beds cater to patients with conditions like COPD, heart failure, and frailty. Over 240,000 patients have been treated through virtual wards, with research suggesting comparable or faster recovery compared to traditional hospital care.

However, it is crucial for healthcare providers to acknowledge that virtual care is not without limitations and challenges, which must be carefully addressed. Without these considerations, healthcare organizations will struggle to realize the anticipated benefits, and, in some cases, they could even compromise care quality.

  • Transforming Essential Care – If We Connect the Dots: Virtual care offers convenience by allowing patients to access non-urgent medical services from home. It can alleviate strain on emergency rooms, address primary care shortages, and extend services to underserved areas. However, integration into patient records is essential for continuity of care. Technical obstacles, especially in underserved regions, require further investments in infrastructures, health information data integration and management.
  • Empowering Chronic Disease Management – If We Use the Right Tools: Virtual care positively impacts chronic disease management, but it requires appropriate engagement tools. Choosing ergonomic device design, automated real-time data collection, performance criteria (speed, accuracy, response times), and diverse functionalities (like medication reminders, predictive alerts) significantly impact the outcomes for virtual care. Striking a balance between virtual care benefits and the need for in-person assessments is vital. Understanding which aspects of disease management can be handled remotely, and which require in-person attention due to physical exams and specialized tests, is essential for comprehensive care.
  • Tackling Health Inequalities – If We Mind the Digital Gap: Virtual care aims to bridge healthcare access gaps, but the digital divide and limited digital literacy pose barriers. Health inequalities persist, with evidence of disparities in digital technology utilization. IDC research shows that the primary user demographic of virtual visits is young, urban, higher-income individuals, who typically face fewer access barriers. To tackle these disparities, virtual care solutions need to be intentionally designed to address specific barriers faced by different patient groups.

How to Future-Proof Virtual Care

Virtual care holds the potential to improve healthcare accessibility and delivery by providing care to individuals regardless of their location or the time of day. To fully harness this potential, it is crucial to thoughtfully address the challenges and shortcomings it currently faces and implement long-term strategies that facilitate transformative changes.

  • Assess Population Needs: Understand diverse patient requirements, including digital literacy and social determinants of health (SDoH). Embrace “hybrid care” models that offer both virtual and in-person options, catering to individual needs.
  • Choose Appropriate Technology: Select medical-grade remote patient monitoring devices and platforms that integrate seamlessly with healthcare professionals’ workflows. Ensure active patient engagement.
  • Establish a Virtual Care Ecosystem: Foster collaborations among healthcare authorities, provider organizations, and life science companies. Implement suitable reimbursement schemes to incentivize healthcare providers for round-the-clock virtual care availability.

At IDC, we have just published a report, “IDC Innovators: Remote Patient Engagement and Virtual Care Solutions, 2023,” exploring how emerging technology vendors are supporting the evolution of virtual care, working with healthcare organizations and the broader health ecosystem.

Virtual care should not be seen just as a legacy of the pandemic era; instead, it stands as a transformative force shaping the future of healthcare. By aligning with population needs, leveraging technology sensibly, and fostering collaboration in the healthcare ecosystem, virtual care can advance even further. Its role in delivering high-quality, accessible, and convenient healthcare services tailored to individuals worldwide will remain pivotal.

As we transition into a post-pandemic era, it’s crucial to fine-tune virtual care programs to fit population-specific needs and integrate them seamlessly into the broader healthcare ecosystem, solidifying their significance.

If you’re curious and want to dive deeper into this subject, consider reaching out to our IDC Health Insights team. Also, be sure to check the latest research on virtual care from Nino Giguashvili, Federico Mayr, and Silvia Piai.

Silvia Piai - Research Director, IDC Health Insights - IDC

Silvia leads the team of analysts covering the European healthcare market and the Worldwide Medical Devices Industry. Her research provides strategic advice to end users and vendors in healthcare and life sciences, assisting organizations in understanding how technologies are disrupting and transforming traditional business models. Silvia Piai's research offers a comprehensive perspective on the foundational elements shaping the health industry's evolution. Her analysis delves into the implications of key industry trends like evidence-based medicine, personalization and integration of care services, and the transformation of health industry ecosystems. Through these overarching themes, Silvia Piai offers in-depth analysis of ongoing innovations and best practices in pivotal technological domains such as AI, IoT, cloud, and industry-specific solutions.

Fall traditionally means Predictions time for the entire IDC community. It's the point of the year when we gather across different research domains to reflect on the trends steering organizations' digital agendas and to predict what will characterize the digital landscape in the months and years to come.

This year, this is happening at the dawn of a new chapter in the Digital Business Era: the chapter of AI Everywhere.

Generative AI triggered the opening of this chapter because it holds the potential to drastically reduce the time and long-term costs associated with developing solutions across a wide range of use cases associated with automation and intelligence. It is completely changing our relationship with data and how we extract value from both structured and unstructured data.

This era is about how we use data as input and as a business outcome:

  • 18% of EMEA organizations believe that GenAI is already disrupting their business, and 70% of all organizations believe it will do so in the next 18 months.
  • 44% of EMEA organizations are already investing in GenAI or doing initial model testing and proofs of concepts.
  • Customer-facing applications, financial and operational decision support applications, and employee experience applications are sweet spots for GenAI integration.
  • GenAI is expected to capture 15% of EMEA organizations’ new IT project budgets in 2024, representing a must-have element in technology vendors’ capabilities and portfolios (IDC GenAI ARC Survey, August 2023 — GenAI Awareness, Readiness, and Commitment: A First Look at IT Leaders’ Expectations and Concerns for Generative AI).

For organizations to gain a competitive edge in this new era, a full reimagination is needed.

Creating an intelligent architecture that is supported by a cost-effective digital infrastructure and relevant capabilities is a priority. At the same time, this journey raises ethical and trust-related questions that purpose-driven organizations must prepare for.

AI Everywhere is certainly the key factor altering the global business and digital ecosystem for the next 12-24 months and beyond, but other critical external drivers will also shake 2024:

  1. The Drive to Automate
  2. Economic Uncertainty 
  3. Geopolitical Turbulence
  4. Global Supply Chain Resiliency
  5. Cybersecurity and Risk
  6. The Digital Business Imperative
  7. Everything as a Service Intensification
  8. Dynamic Work and Skills
  9. Shifting Tech Regulatory Landscape 
  10. Operationalization of ESG 

This year’s unveiling of IDC’s EMEA Predictions for 2024 will take place on December 11. In the weeks leading up to the reveal, we will release a series of thought leadership assets that will double-click on these key drivers, analyzing their digital impact and highlighting actions that organizations will have to take to be Digital Future ready (the partial list of upcoming webcasts with registration links can be found below).

In the meantime, if you want to remain updated on the upcoming releases, please visit our IDC European FutureScape page, and register to join us for the IDC EMEA Futurescape 2024 webcast.


Upcoming October and November IDC EMEA webcasts you can register for:


Andrea Siviero - Senior Research Director, MacroTech, Digital Business, and Future of Work - IDC

Andrea Siviero leads IDC's European Digital Business and Future of Work Research group. The group provides market research insights to foster a purposeful and fair adoption of technologies supporting digital societies, businesses, and workforces, and to empower tech providers in strategic decision making, planning, and go-to-market activities. Siviero also co-leads the IDC Worldwide MacroTech Research program, focused on the intertwined connection between the economic and digital worlds, analyzing the impact key macroeconomic factors have on the digital landscape and, vice versa, how technologies are impacting economies around the world.