The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.

GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.

We observe three ways that industrial users can engage with GenAI:

  1. Publicly Available Tools: ChatGPT-like tools provide users with information, content generation, or code. These publicly available tools and apps provide solid value to users. From a process area point of view, the greatest benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned them to prevent sensitive data leakage.
  2. Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. These can take the form of “copilots”: AI systems designed to assist and support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
  3. Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Be Ready — But Careful

In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models, predictive maintenance, and the simulation of complex processes through soft sensors.

Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.

AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.

AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.

Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.

Output Challenges

Like any other methodology that serves industry, GenAI outputs must be 100% reliable. Most readers are probably familiar with the concepts of repeatability and reproducibility. Let me remind you that repeatability measures the variation in results when an experiment is repeated under the same conditions, whereas reproducibility measures the variation when the experiment is repeated under changed conditions, such as different operators, instruments, or laboratories. Both are means to evaluate the stability and reliability of an experiment and are key factors in measurement uncertainty calculations.

GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.

GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.

GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.

Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.

If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.
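Such monitoring can start very simply. The sketch below is a minimal illustration, not a vetted methodology; the marker-word lists and the disparity threshold are our own assumptions. It counts how often gendered words appear across a batch of generated outputs and flags large disparities for human review:

```python
import re
from collections import Counter

def flag_group_disparity(outputs, group_terms, threshold=2.0):
    """Count how often each group's marker words appear in generated
    outputs and flag group pairs whose frequency ratio exceeds a threshold."""
    counts = Counter({group: 0 for group in group_terms})
    for text in outputs:
        words = re.findall(r"[a-z']+", text.lower())
        for group, terms in group_terms.items():
            counts[group] += sum(words.count(term) for term in terms)
    flags = []
    groups = list(group_terms)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            low, high = sorted((counts[a], counts[b]))
            if high > 0 and (low == 0 or high / low > threshold):
                flags.append((a, b))  # disparity worth a human review
    return counts, flags

# Toy check: do generated job descriptions skew toward male-coded wording?
outputs = [
    "He is a decisive engineer who leads his team.",
    "He built the pipeline himself.",
    "She documented the process carefully.",
]
counts, flags = flag_group_disparity(
    outputs,
    {"male": ["he", "his", "himself"], "female": ["she", "her", "herself"]},
)
```

A flagged pair is only a signal for reviewers to look closer; word counts alone prove nothing about the model.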

We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.

Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.

Avoiding AI hallucinations entirely is challenging, but there are several actions that can be taken to limit their occurrence or minimize their impact. Let’s touch on a few:

  • Ensure that your AI model is trained on a diverse and representative data set that covers a wide range of examples from the real world.
  • Preprocess and clean the training data to remove inaccuracies, outliers, and misleading information, improving the quality and reliability of the model’s outputs.
  • Continuously evaluate and monitor the model’s outputs to identify instances of hallucination or the generation of unrealistic content.
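The preprocessing step can be illustrated with a minimal sketch; the record layout, field names, and cutoffs here are hypothetical. Exact duplicates are dropped first, then numeric outliers beyond a z-score cutoff:

```python
import statistics

def clean_training_records(records, z_cutoff=3.0):
    """Drop exact duplicates, then drop records whose numeric value lies
    more than z_cutoff standard deviations from the mean of the batch."""
    seen, unique = set(), []
    for record in records:
        key = (record["text"], record["value"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    values = [record["value"] for record in unique]
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return unique  # all values identical; nothing to trim
    return [r for r in unique if abs(r["value"] - mean) / stdev <= z_cutoff]

# Toy batch: one exact duplicate and one implausible reading.
records = [
    {"text": "cycle time, line A", "value": 10},
    {"text": "cycle time, line A", "value": 10},   # duplicate
    {"text": "cycle time, line B", "value": 11},
    {"text": "cycle time, line C", "value": 9},
    {"text": "cycle time, line D", "value": 12},
    {"text": "cycle time, line E", "value": 10},
    {"text": "cycle time, line F", "value": 100},  # outlier
]
cleaned = clean_training_records(records, z_cutoff=2.0)
```

Real pipelines use far richer validation, but the principle is the same: suspect records never reach the model.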

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

Evolving Challenges

Because GenAI models generate new and original content without explicit programming, proving their reliability can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.

Commonly used methods include defining and applying appropriate evaluation metrics to assess the quality and reliability of generated content. Human evaluation is also useful, including subjective assessments in which reviewers rate the quality and reliability of generated content.

For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.

Adversarial testing can also be employed, deliberately introducing challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, long-term monitoring is recommended to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in GenAI-powered supply chain intelligence applications.
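A minimal sketch of such test set validation follows, assuming a hypothetical `generate()` callable and simple keyword-based expectations (real evaluation harnesses are far richer); the stub answers and the 90% pass-rate bar are illustrative assumptions:

```python
def validate_against_test_set(generate, cases, required_pass_rate=0.9):
    """Run a text-generation callable over scenario/expectation pairs and
    report the share of outputs containing all expected keywords."""
    results = []
    for case in cases:
        output = generate(case["prompt"])
        passed = all(k.lower() in output.lower() for k in case["expect_keywords"])
        results.append({"prompt": case["prompt"], "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, pass_rate >= required_pass_rate, results

def fake_generate(prompt):
    # Stand-in for a real GenAI call, so the harness can be demonstrated.
    if "lead time" in prompt:
        return "The standard supplier lead time is 14 days."
    return "I don't know."

cases = [
    {"prompt": "What is the supplier lead time?", "expect_keywords": ["14 days"]},
    {"prompt": "", "expect_keywords": ["don't know"]},  # adversarial: empty input
    {"prompt": "List open purchase orders", "expect_keywords": ["PO-"]},
]
pass_rate, acceptable, results = validate_against_test_set(fake_generate, cases)
```

The failing third case is the point of the exercise: the harness surfaces scenarios the model cannot yet handle before users do.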

The Sky is the Limit — For Now

In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.

To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.

Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.

Beware the Competition!

GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.

But beware: do not allow the competition to outrun you in GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.

At the same time, do not underestimate the risk of intellectual property (IP) leakage, or the unauthorized use, disclosure, or exposure of valuable intellectual property through the use of generative AI models. Embed an IP leakage prevention mechanism in your overall AI and data governance. This should include the removal or anonymization of sensitive or proprietary information from training data sets.

As always, stay busy with what works — but keep an eye focused on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.

You often see it on television: programs about people who are struggling financially. They run out of money at the end of the month, they can’t sell their house, they have a problematic debt burden, and so on. A common denominator is often the lack of insight into their own situation, and while coming up with ways to save money may not be very difficult, actually implementing and sticking to them is much harder.

I mean, it’s easy for an outsider to suggest that someone should get rid of their dog, but if that pet is their only source of comfort, it will take some effort.

The same goes for cloud costs: saving money is easier said than done. There are all sorts of great tools available from both cloud providers and third parties to help you understand your costs.

These tools provide various reports and dashboards, and even recommendations on which instances to remove or resize (rightsizing). With the right knowledge, you can also determine how to use discount options (reserved instances, savings plans, reserved capacity, etc.), how to manage licenses intelligently, and what you can do in your application architecture to save costs. And, of course, you can always turn off instances when you’re not using them.
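As an illustration of that last point, a scheduled job could stop tagged instances outside working hours. The sketch below is ours, not a provider recommendation: the office hours and the "Schedule" tag convention are assumptions, and the AWS SDK calls are shown commented out since they require credentials:

```python
from datetime import datetime

def outside_office_hours(now, start_hour=7, end_hour=19):
    """Return True when the timestamp falls outside weekday office hours,
    i.e., when scheduled instances are safe to stop."""
    if now.weekday() >= 5:  # Saturday or Sunday
        return True
    return not (start_hour <= now.hour < end_hour)

# The actual stop step would use the cloud provider's SDK, e.g. AWS boto3:
#
#   import boto3
#   ec2 = boto3.client("ec2")
#   if outside_office_hours(datetime.now()):
#       reservations = ec2.describe_instances(
#           Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
#       )["Reservations"]
#       ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
#       if ids:
#           ec2.stop_instances(InstanceIds=ids)
```

The hard part, as argued below, is not this code but agreeing on which instances may actually carry the tag.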

All of this insight is great, but then comes the second part. Just as people have a hard time saying goodbye to their pets, users and administrators have a hard time shedding their old habits and ways of thinking. And that’s something cloud providers never talk about.

For example, consider turning off instances outside of working hours. In theory, this is an excellent way to save money, but instances are part of applications, which in turn are part of chains. It can happen that data exchange takes place in a chain outside of working hours.

Testing teams that are under a deadline may also need their environment outside of the predetermined working hours. And if environments are used in the management chain, they must also be available after working hours in case of an emergency. So savings are theoretically simple, but practice is more complicated. It can be done, but it takes a lot of effort.

Rightsizing is also less straightforward than it seems. Users and administrators are often hesitant to remove capacity: users see their performance decrease, and administrators see the risk of more outages because there is less excess capacity to handle issues. In the latter case, you need to analyze where these issues are coming from: a poor application can benefit from more capacity, but that is not a long-term solution.

If the roof is leaking, you can replace the bucket you use to catch the water with a mortar tub, but even that will eventually fill up. Ultimately, you’ll have to repair the roof.

So, objections can be raised for all types of savings. Eventually, you’ll need to adopt an approach that not only makes costs visible but also involves users and administrators, and leads to the right considerations on where to save on your cloud costs and where not to.

Don’t know where to start? Can’t figure it out quickly enough? IDC Metri has helped several organizations get started. Our specialists can help kickstart your cost-saving efforts in the cloud. Because understanding costs is one thing, but it’s only useful if they actually decrease.


Want to learn more? Subscribe to IDC Metri’s monthly newsletter full of actionable insights on IT benchmarking, intelligence, sourcing and more.

I was born in Ravenna, on the east coast of Emilia-Romagna, one of the most liveable and prosperous regions in Italy. Emilia-Romagna is home to 7.3% of the Italian population. It accounts for 9.2% of GDP and 11.8% of agricultural production.

It headquarters globally successful firms in automotive, motorbikes, food production, ceramic tiles, textile and fashion, biomedical engineering, construction, woodworking equipment and much more. Unemployment is at 5.1%, well below the 2022 national average of 8.2%. Life expectancy is higher than the national average.

There are white sandy beaches, natural reserves in coastal wetlands, and beautiful hills and mountains, which combined with a rich heritage — Ravenna alone boasts eight UNESCO heritage sites — and amazing food and wine attract tens of millions of tourists every year.

Besides these material treasures, there is a unique way of living in Emilia-Romagna. And even more so in Romagna, where I grew up; there’s an old saying that you can tell if you are in the Romagna part of the region because when a stranger shows up at someone’s door, they are welcomed with a smile and a glass of wine. On the Emilia side, they’ll be equally warmly welcomed, but with a glass of water!

There is a sense of shared joy, a passion for life, and a pride in belonging to one’s community. A shared sense of resilience that drives people through the hardships of life with a smile on their faces, always trying to put a smile on someone else’s. Because there is always a little bit of magic, even in the small things.

As Federico Fellini, the world-famous movie director and one of the most beloved children of our region, once said: “Life is a combination of magic and pasta.”

It feels good to be a Romagnolo. And to visit Romagna … unless you happened to be there in the first two weeks of May 2023.

Smart River and Water Management: Preparing for Foreseeable Disasters

After many months of drought, in the first 17 days of May 2023, Romagna was hit by as much rain as it usually gets in six months. In some areas this meant up to 400mm of rain in two weeks. To put things in perspective, one of the worst hit municipalities, Faenza, which is home to 60,000 people, experiences on average 760mm of rain a year.

The stereotypical rainy London gets 690mm a year. The result of this unusually heavy rain was that 23 rivers burst their banks, resulting in 50 floods; 305 landslides devastated hills and mountains, 14 people died and over 36,000 people were displaced from their homes. The estimated economic damage to homes, factories, farms and public infrastructure is north of €5 billion, with around €600 million just to rebuild public infrastructure.

Climate change is increasing the frequency and intensity of these extreme weather events. Long-term environmental sustainability actions, which are progressing way too slowly, will not be enough.

Resilience to short-term shocks is imperative. Money is not the problem; in fact, there is an estimated €8 billion available from the Italian COVID Recovery and Resilience Plan and the “Italia Sicura” (Safe Italy) plan to make public infrastructure more resilient. This, however, is at risk of not being spent, or not spent well, because of lack of planning, skill gaps, slow public procurement, and insufficient competencies and capacity to audit.

Technology innovation is not a silver bullet, but when implemented wisely it can help fill some of those gaps. The increasing availability and granularity of data from satellite images, IoT sensors, weather monitoring and forecasting models already tell us that Italy has the highest amount of rain in Europe, with 300 billion cubic meters a year.

Building permitting systems, public works inspection systems and other sources tell us that Emilia-Romagna was the fourth worst region in terms of soil consumption in Italy in 2021, including in areas at high risk of flooding. By building on the existing knowledge, collecting more data and turning the data into intelligent smart river and water management insights, governments, water utilities and the public could make better decisions across the disaster resilience life cycle, from mitigation to preparedness, from response to recovery.

  • Mitigation: Governments can use a wide variety of tools to develop hazard maps that can identify areas most at risk and feed into planning and preparedness systems. Policymakers and building inspectors can feed intelligent insights into planning and operational simulation tools, such as digital twins, to simulate the impact of building code and permitting decisions to reduce soil consumption and require the use of more resilient building techniques and materials.
  • Preparedness: The benefits of building flood resilient systems (dams, levees, flood walls and diversion canals, etc.) to protect natural systems such as wetland, marshes and beaches, and using resilient building techniques such as tiled pavements instead of concrete for parking lots and roads to increase water absorption, can be augmented by making these assets and tools intelligent. The intelligence from those systems can enable real-time or preventive decisions about diversion tactics, rather than reacting only when the flood is too close.
  • Response: Real-time data from weather forecasting models, integrated with data from dam and river sensors, should be analysed to detect anomalies and automatically raise emergency alerts that promptly notify citizens. This is preferable to relying on fire and police patrols roaming the roads of small rural villages and towns with loudspeakers to tell citizens to evacuate their homes, or expecting mayors to post videos on social media and hoping everybody pays attention, as happened in Romagna in the past two weeks. More intelligent use of data can also provide insights for command-and-control personnel to coordinate first responders and orchestrate the supply of food, clothes and medicine for shelters, instead of relying on emails, spreadsheets and phone calls.
  • Recovery: Digital twins would allow evidence-based infrastructure planning decisions and monitoring the progress of investments aimed to rebuild infrastructure, therefore increasing speed and transparency of projects to avoid wasting time and money. AR/VR tools can help engineers conduct inspections when anomalies are detected.
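The anomaly detection in the response step can be as simple as comparing each new sensor reading against a rolling baseline. The toy sketch below is ours; the window size, rise threshold, and readings are arbitrary assumptions, and operational systems would combine many sensors and forecast data:

```python
def detect_level_anomalies(readings, window=6, rise_threshold=0.5):
    """Flag indices where a river-level reading exceeds the mean of the
    previous `window` readings by more than `rise_threshold` metres."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] - baseline > rise_threshold:
            alerts.append(i)
    return alerts

# Hourly river-level readings (metres); the level jumps in the last hours.
levels = [1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 2.2, 2.4]
alerts = detect_level_anomalies(levels)
```

Each flagged index would feed an alerting pipeline that notifies citizens and command-and-control personnel automatically.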

The same technology infrastructure — with a few additions in terms of sensors and applications — will provide intelligent insights for other use cases, such as water conservation in dry seasons, leakage reduction, biodiversity protection in rivers, marshes and ports, sustainable water transportation, and water quality.

Only two days after the peak of the emergency, millions of euros, as well as food, clothing and other supplies, had been donated to flooded areas in Emilia-Romagna from all over Italy and beyond. Boosted by the typical Romagnolo spirit, spontaneous neighbourhood efforts have mushroomed to clean mud from houses, roads and farms. Beaches have already been cleaned for the upcoming tourist season. But that resolve to recover quickly should not allow us to forget what happened. We know what the future holds. Extreme weather events will happen, not only in well-known high-risk flooding areas, such as the Indian Subcontinent, Southeast Asia, and Pacific and Caribbean Islands, but also in traditionally safer regions of the world.

Technology innovation will be critical to climate change resilience. But technology alone will not be enough. It’s not enough to feel compassion to help when disaster happens. We need to invest in mitigation and preparedness measures that generate the highest long-term returns.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.
May 25, 2023

Azerbaijan’s IT Leaders Discussed the Development of Digital Innovation in a Time of Uncertainty at the IDC Day Forum in Baku

On May 25, 2023, the IDC Day forum “The Enterprise of the Future in a Time of Uncertainty” took place in Baku, dedicated to accelerating the development of digital technologies and innovation in enterprises...

Read full release

AI Act: How Did We Get Here and Where Are We Now?

In April 2021, the European Commission submitted a detailed proposal of its plan to regulate artificial intelligence development and use in Europe: the AI Act. The AI Act’s goal is to ensure that the development and deployment of AI systems in Europe is safe, transparent and compliant with the EU’s fundamental rights and values ― protecting the public, while still fostering innovation.

The Council of the EU adopted a “general approach” on a set of harmonized rules on artificial intelligence in December 2022, but rapid progress of the technology, together with the sudden wave of innovation in generative AI systems, delayed the final discussion of the legislation as new amendments to cover the latest developments were explored. On May 11, the European Parliament committees approved the AI Act with a large majority in a vote that paves the way for the plenary vote in mid-June (June 14 as a tentative date).

Let’s now look at the main principles of the proposed regulation and how it will impact the AI market in the region.

Regulating the Development and Deployment of AI in the EU ― Key Aspects of the AI Act

The proposal identifies three (+1) risk categories for AI applications and applies different restrictions and obligations on system providers and users, depending on the category of the application in question:

  • Unacceptable risk: applications that involve subliminal manipulation, exploitative practices, or social scoring by public authorities. Such applications will be banned.
  • High risk: applications related to education, healthcare and employment, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements (e.g., ensuring transparency and safety of the systems and complying with the Commission’s mandatory conformity requirements). Providers of “high-risk” systems will have obligations to establish quality management systems, keep up-to-date technical documentation, undergo conformity assessments (and re-assessments) of the systems, conduct post-market monitoring, and collaborate with market surveillance authorities.
  • Limited risk: this mostly includes AI systems such as chatbots that will be subject to specific transparency obligations (e.g., disclosing that interactions are performed by a machine, so that users can take informed decisions).
  • Minimal risk: applications that are neither listed as risky nor explicitly banned are left largely unregulated (e.g., AI-enabled video games). Currently, this category covers the majority of AI systems used in the EU.

How Will the AI Act Affect the European AI Landscape?

The introduction of the European AI Act has sparked discussions on its potential impact on the adoption of AI technologies. Will this regulation hinder AI innovation in Europe? The answer is not straightforward, as it depends on various factors and the evolving landscape.

AI regulation may impose compliance costs, administrative burdens, and legal uncertainty on businesses and developers. Extensive testing, validation, and monitoring of AI systems may become necessary, which can be time-consuming and expensive. There might also be limitations on the types of applications, industries, data, or algorithms used in AI systems.

However, when assessing the direct impact on AI use cases falling under the regulated risk categories, the outcome is not overwhelmingly negative. When we at IDC built a data model to estimate which and how many AI use cases would be directly affected (those falling into the risk categories listed above), the share was modest, and the impact, measured as potential lost revenue, did not look worrying.

The compliance costs and administrative burdens could be challenging for SMEs and startups, though, which may inhibit competition in Europe if larger, more established providers find it easier to comply.

Industries like healthcare, public administration or finance are likely to face more stringent requirements due to their potential impact on human life and safety. Transparency, explainability, human oversight, and restrictions on the use of, for example, biometric identification technologies are some of the obligations that might be imposed. While these requirements may limit certain applications, they also aim to protect privacy and individual rights. However, it’s important to note that this regulation offers a list of exemptions, so if you are a provider for national security interests, you may not need to worry about that too much.

On the positive side, regulation has the potential to enhance wider trust and confidence in AI systems. This is crucial in countering overhyped pop culture-fed media narratives of AI as a threat. A trusted regulatory framework always reduces legal uncertainty and creates a level playing field for businesses, public institutions and consumers and citizens. Wisely designed laws will improve the quality and safety of AI systems and will first and foremost safeguard individuals.

The AI Act aims to encourage AI technologies that align with ethical and societal values that the EU strongly supports, such as transparency, accountability, and human-centricity. It wants to stimulate research and development in these areas and promote collaboration and openness among organizations and regions. By establishing common standards and best practices, the EU facilitates knowledge exchange and expertise sharing.

Conclusion

Looking at AI regulation through the lens of healthcare offers valuable insights. Healthcare regulations ensure safety, efficacy, and patient rights. They impose requirements on manufacturers to meet necessary standards. Similarly, AI regulations can ensure ethical and safe technology use while balancing innovation and protection.

While the potential impact of the European AI Act on AI adoption and innovation may present challenges, it also offers opportunities. By adhering to the regulatory framework, AI providers can navigate the landscape effectively, gain public trust, and promote responsible AI practices.

As the AI Act progresses, it is crucial to stay updated with the latest developments. At IDC, we will closely follow the progress of the AI Act and will continue publishing comprehensive research, providing deeper insights into its implications and potential impact as we approach the EU vote in June.


If you want to know more about this, please contact the team: Lapo Fioretti, Andrea Siviero, Neil Ward-Dutton or Ewa Zborowska

Lapo Fioretti - Senior Research Analyst - IDC

Lapo Fioretti is a Senior Research Analyst in IDC’s Digital Business Research Group, leading the European Emerging Technologies Strategies research. In his role, he advises ICT players on how European organizations leverage new technologies to create business value and achieve growth, and analyzes the development and impact of emerging trends on the markets. Fioretti also co-leads the IDC Worldwide MacroTech Research program, which focuses on the intertwined connection between the economic and digital worlds, analyzing the impact key macroeconomic factors have on the digital landscape and, vice versa, how technologies are impacting economies around the world.