After a year of disruptions like high inflation, war, geopolitical tension, energy shocks, and the anticipation of recession in major countries, it’s no surprise that IT leaders entered 2023 with a mission to minimize technology investments and develop plans for executing spending cuts if conditions worsened. Despite the uncertainty, their teams were running full tilt, filling open positions and “catching up” with the business.

Then along came ChatGPT. Suddenly, IT leaders find themselves planning for the coming Artificial Intelligence (AI) onslaught and asking, “Are we prepared?”

The Threat of IT Malaise

For the first few months of 2023, IDC's monthly Future Enterprise Resiliency & Spending surveys showed the economic and IT spending outlooks of IT leaders beginning to improve. IT supply chains loosened, China reopened, energy shocks failed to develop, and recession remained a worry for the future, not a reality of the present.

In March, the Silicon Valley Bank failure, a series of banking problems, and concerns about a US debt default canceled out much of the growing economic optimism in the US and Europe, but not in Asia Pacific countries. A more troubling new concern that IDC heard from IT leaders starting in May was that the continued “waiting for recession” is starting to affect economic and IT investment assumptions for 2024, not just 2023.

It became easy to conclude that CIOs and IT leaders should be hunkering down to ride out an extended period of economic uncertainty and IT malaise, focusing on constraining new expenditures and optimizing the use of existing assets. While sustaining efforts to establish cloud economic practices is important, it’s no longer the top priority. Now is the time to start preparing for AI Everywhere.

Innovation Beyond IT Is a Rejuvenator, but Creates Disruption

Leveraging technology to drive innovation in our daily lives has been a key expansion driver for the entire IT industry since the start of the computer and digital communications eras in the 1950s. The most significant technology-driven transformations, such as the advent of the Internet/Web and the launch of the smartphone/cloud era, occurred at times when economic conditions were uncertain and questions were being raised about the marginal utility of new IT investments.

In both cases, individual consumers and business leaders “got ahead” of IT leaders, driving long-term fundamental changes in IT architectures and the role of IT organizations. Because IT leaders were unprepared, most organizations found themselves following a “Fire, Aim, Ready” pattern, resulting in unnecessary duplication of work and data/application fragmentation. The IT organizations that prepared in advance, executing a “Ready, Aim, Fire” strategy, emerged as leaders in the new wave.

Generative AI Is the New Trigger but Requires Preparation Now

In this time of “perceived” IT malaise, the emergence of generative AI, exemplified by ChatGPT and DALL-E with all their possibilities and shortcomings, has captured the attention of individuals, educators, businesses, and governments around the world. As IDC found in conversations with CIOs and IT leaders, the assessment and use of AI is starting to dominate the planning and long-term investment agendas of businesses across many industries, triggering what IDC anticipates will be a period of extending AI Everywhere.

The semi-good news for most CIOs and CTOs is that generative AI products and services are still limited in availability and will remain relatively immature through the rest of 2023 and into early 2024. For all but a few organizations, hard decisions about committing significant resources in a period of economic uncertainty can wait.

Every CIO and CTO, however, needs to start committing time and intellectual capital right now to ensure their organization is prepared, avoiding missteps and capitalizing quickly on the potential of AI across both IT and the business. The keys to ensuring your organization’s AI preparedness are assessing your level of AI awareness and determining your state of AI readiness.

AI Awareness: Take Stock and Aim for Consistency

Generative AI services like Jasper and Microsoft 365 Copilot, as well as all the buzz around foundation models, are the “bright shiny objects” right now. Everyone in the organization wants to talk about how they can transform everything from customer service to code and product design, but AI-enhanced capabilities in the areas of prediction (e.g., threat detection and digital twins) and interpretation (e.g., machine vision) are also likely to be well underway in selected parts of the organization.

Now is the time to conduct a comprehensive review of where in the organization AI initiatives of all types are underway, asking, “What do we think AI can and cannot do?” Use this effort to identify early areas where duplication threatens or collaboration beckons. It can also help you identify gaps where business leaders may be missing critical opportunities because they are distracted by one shiny object.

A key next step is to develop a series of persona-based AI awareness education activities that span from the C-suite and business/IT leaders to front-line employees and even critical customers and partners. The goal isn’t to make everyone an AI expert; it’s to ensure that your organization is consistently “AI aware” as readiness assessments start and commitment decisions are made.

AI Readiness: Focus on Cloud Native, Hybrid Cloud, and Control

As with many innovations, the ability to quickly adopt a transformational technology is determined by the existing level of technical sophistication and IT operational maturity of the organization. For example, companies that aggressively adopted virtualization as a technology for deploying and managing workloads on their own systems were able to more effectively adopt early public cloud infrastructure solutions that leveraged similar foundational technologies.

Cloud providers will play a significant role in the early introduction of generative AI enablement services, and agile DevOps teams will play an equally important role in translating AI capabilities into useful business outcomes. Cloud pioneers and pacesetters with mature cloud operations and architecture models, along with well-managed DevOps processes, will be better prepared to leverage AI than cloud laggards.

Beyond the cloud-native maturity noted above, IDC believes that companies with mature “hybrid-by-design” cloud strategies will be well positioned to take full advantage of AI innovation across many different cloud environments, as well as across many different locations, from core and network to edge.

Now is also the time to ask, “Are we ready for AI Everywhere?” A critical AI readiness assessment will center on control. How consistent/inconsistent and open/siloed are data management and data use practices and guidelines? How standardized/fragmented and trustworthy/unreliable are code creation and life-cycle systems and standards? How mature are FinOps and cloud cost optimization practices?

Addressing the data questions will bring responsible AI governance and ethics to the fore, so approach them with forethought and readiness. Mature cost and ROI tracking will be critical, since “cost” will remain one of the most unpredictable elements of generative AI rollouts for the next several years.

The tech industry is thrilled by the possibilities of AI, from silicon designers and cloud providers to software and services vendors and even your own IT teams. The sense of anticipation, even giddiness, is palpable, signaling a renewed focus on innovation as the driver of technology. Success, however, depends on having confidence in your ability to accurately track and link near- and long-term costs to desired business benefits.

Cost/benefit readiness is the key skill required when it’s time to commit to AI, especially in this time of economic uncertainty and the threat of succumbing to IT malaise.

Rick Villars - Group VP, Worldwide Research - IDC

Rick is IDC's chief analyst guiding research on the future of the IT Industry. He coordinates all IDC research related to the impact of Cloud and the shift to digital business models across infrastructure, platforms, software, and services. He helps enterprises develop effective strategies for using their diverse portfolio of cloud investments and applications. He supplies early guidance on implications of critical innovations such as the shift to cloud-based control platforms for deploying/managing infrastructure, data, and code delivery as well as the emergence of AI as a critical IT workload and part of all IT products/services.

The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.

GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.

We observe three ways that industrial users can engage with GenAI:

  1. Publicly Available Tools: ChatGPT-like tools provide users with information, content generation, or code. These publicly available tools and apps provide solid value to users. From a process point of view, the greatest benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned them to prevent sensitive data leakage.
  2. Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. They can be present as “copilots”: AI systems designed to assist and support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
  3. Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

Be Ready — But Careful

In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models, predictive maintenance, and complex simulations built on soft sensors.

Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.

AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.

AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.

Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.

Output Challenges

Like any other methodology serving industry, GenAI outputs must be highly reliable. Most readers are probably familiar with the concepts of repeatability and reproducibility: repeatability measures the variation in results when an experiment is repeated under the same conditions, while reproducibility measures the variation when it is performed under different conditions, such as by different operators or with different equipment. Both are means of evaluating the stability and reliability of an experiment and are key factors in measurement uncertainty calculations.

GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.

GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.

GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.

Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.

If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.
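As one illustration of the kind of automated check such a feedback loop might include, here is a minimal Python sketch that flags disparities in how often positive descriptors co-occur with different group terms in generated outputs. The group and descriptor terms are hypothetical, and keyword matching is a crude stand-in for the proper fairness tooling a production system would use.

```python
def flag_bias(outputs, group_terms, positive_terms, threshold=0.2):
    """Flag group terms whose co-occurrence rate with positive descriptors
    deviates from the mean rate by more than `threshold`.

    A crude keyword-based check, meant only to illustrate the idea of an
    automated bias monitor over generated text."""
    rates = {}
    for group in group_terms:
        # Outputs that mention this group term as a whole word
        hits = [o for o in outputs if group in o.lower().split()]
        if hits:
            positive = sum(
                any(p in o.lower().split() for p in positive_terms) for o in hits
            )
            rates[group] = positive / len(hits)
    if not rates:
        return {}, []
    mean = sum(rates.values()) / len(rates)
    flagged = [g for g, r in rates.items() if abs(r - mean) > threshold]
    return rates, flagged
```

A monitor like this would run periodically over sampled outputs, with flagged disparities routed to human reviewers for judgment.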

We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.

Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.

Avoiding AI hallucinations entirely is challenging, but several actions can limit their occurrence or minimize their impact:

  • Train your AI model on a diverse and representative data set that covers a wide range of examples from the real world.
  • Preprocess and clean the training data to remove inaccuracies, outliers, and misleading information, improving the quality and reliability of the model’s outputs.
  • Continuously evaluate and monitor the model’s outputs to identify instances of hallucination or generation of unrealistic content.
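The preprocessing and cleaning step can be sketched in a few lines. This is a deliberately simple, assumption-laden example: it deduplicates records and drops values more than three standard deviations from the mean of a single numeric field, standing in for the far richer cleaning a real GenAI training pipeline would require.

```python
import statistics

def clean_training_records(records, field):
    """Deduplicate training records, then drop records whose numeric `field`
    lies more than three standard deviations from the mean.

    A minimal stand-in for real training-data cleaning."""
    seen, unique = set(), []
    for record in records:
        key = tuple(sorted(record.items()))
        if key not in seen:
            seen.add(key)
            unique.append(record)
    values = [r[field] for r in unique]
    if len(values) < 2:
        return unique
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return unique
    return [r for r in unique if abs(r[field] - mean) <= 3 * stdev]
```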

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

Evolving Challenges

Because GenAI models generate new and original content without explicit programming, proving their reliability can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.

Commonly used methods include defining and utilizing appropriate evaluation metrics to assess the quality and reliability of generated content. Evaluation by humans is useful, including subjective evaluations that involve assessing and rating the quality and reliability of generated content.

For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.

Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.
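A test-set validation harness of the kind described can be as simple as the sketch below: each case pairs an input with a predicate on the generated output, and adversarial or edge cases are simply additional entries. The `generate` function here is a hypothetical stand-in for a real model call.

```python
def run_validation(generate, cases):
    """Run a generation function over a test set.

    `cases` is a list of (prompt, check) pairs, where `check` is a predicate
    over the generated output. Returns the pass rate and the failing cases."""
    failures = []
    for prompt, check in cases:
        output = generate(prompt)
        if not check(output):
            failures.append((prompt, output))
    pass_rate = (len(cases) - len(failures)) / len(cases)
    return pass_rate, failures
```

Adversarial testing, in this framing, means adding deliberately hard cases (empty prompts, contradictory instructions, out-of-distribution inputs) to `cases`, while long-term monitoring means tracking the pass rate as the model and its inputs evolve.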

The Sky is the Limit — For Now

In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.

To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.

Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.

Beware the Competition!

GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.

But beware: do not allow the competition to outrun you in GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.

At the same time, do not underestimate the risk of intellectual property (IP) leakage: the unauthorized use, disclosure, or exposure of valuable intellectual property through the utilization of generative AI models. Embed an IP leakage prevention mechanism in your overall AI and data governance. This should include the removal or anonymization of sensitive or proprietary information from training data sets.
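As one hedged illustration of stripping sensitive information from training data, the sketch below redacts a few pattern types with regular expressions. The patterns, including the `PRJ-` project-code format, are invented for the example; a real pipeline would rely on vetted PII and IP detection tooling rather than hand-rolled patterns.

```python
import re

# Hypothetical patterns for illustration only; a real pipeline would use
# vetted PII/IP detection tooling rather than hand-rolled regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "project_code": re.compile(r"\bPRJ-\d{4}\b"),  # assumed internal naming scheme
}

def redact(text):
    """Replace matches of each pattern with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```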

As always, stay busy with what works — but keep an eye focused on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.

Nearly a decade after introducing the Apple Watch in 2014, Apple is once again setting its sights on revolutionising a technology that has yet to fulfil its potential. While augmented reality (AR) and virtual reality (VR) are not new, they have been subject to the unpredictable nature of product launches, with numerous companies transitioning from pioneers to underachievers in double-quick time.

Nearly 350 AR and VR headsets have been launched in the past 10 years. Each brand has presented its own vision of AR and VR, only to fall short of lofty expectations. How many times have we eagerly embraced a new device, anticipating its transformative impact on our lives, only to be swiftly let down again and again?

Why will it be different this time? And why is this announcement so important?

The Revolution of Technology

The significance of this announcement lies in the anticipation surrounding tech companies’ efforts to revolutionise the next generation of user interfaces.

Throughout much of the latter half of the 20th century, keyboards were the primary means of interacting with digital content. But we have since witnessed the rise and widespread adoption of the mouse, touch interfaces, multitouch, voice control and voice assistants, with Apple playing a leading role in advancing some of these. Over the years, various organisations have explored immersive technologies and in the past decade VR and AR have become accessible to both consumers and businesses.

No single consumer electronics brand has managed to truly transform our interaction with digital content, however. This is what Apple aims to achieve with the Vision Pro — and it has started with a bang.

Why Vision Pro Is a Game-Changer

I was lucky enough to experience the Vision Pro hands-on. This is a product that truly lives up to the expectations set out in the keynote. Every aspect of the device is extraordinary: the image quality, the eye tracking and hand gestures, the immersive 3D spatial photos and content, the FaceTime conversations with 3D holograms, the way it blends the virtual with the real world through EyeSight, the user-friendly interface, and the luxurious feel of a meticulously crafted device.

With the Vision Pro Apple has revolutionised AR and VR experiences with a device that surpasses any other headset I’ve ever tested. This ground-breaking product has propelled the world of augmented and virtual reality to a completely different level.

Over the past decade, the collective expenditure on VR and AR headsets has exceeded $21 billion, while the number of headsets shipped has reached 59 million. The market is poised for even greater expansion, thanks to Apple’s entrance, which is expected to ignite widespread adoption and compel competitors to enter the segment.

We forecast that combined shipments of AR, VR and mixed-reality (MR) devices will skyrocket to 97 million units between 2023 and 2027, generating estimated revenue of $49 billion.

Vision Pro Potential in Business

While Apple emphasised its consumer-focused approach during the keynote, the company must expand its vision beyond just the consumer segment. Gaming has traditionally dominated the VR landscape, and this is likely to continue in the coming years. But there is an emerging potential for commercial applications as enterprises seek ways to minimise expenses and enhance customer satisfaction. By 2027, training, collaboration and improving customer experience will account for more than 52% of overall expenditure on MR hardware.

Similarly, AR has predominantly catered to enterprise users for troubleshooting, product development and design purposes. But there is also a rising consumer market opportunity for personal productivity and entertainment.

To realise this potential, Apple will need to mobilise its extensive developer community. With such a large base of developers, the company is well positioned to drive the content creation that will be pivotal in reaching a significantly broader customer base.

Vision Pro Is Expensive — But the Benefits Are Clear

The Vision Pro is not cheap, but focusing only on its cost overlooks the main benefit. The product is not designed to generate long lines outside stores on launch day.

Instead, it will be a platform for content creators to unlock their creativity and seize new opportunities. Just as the iPad empowered developers to leverage a larger screen for innovative applications, the Vision Pro delivers a flawless, intuitive and immersive experience to end users — critical for developers to focus on content opportunities and not on product glitches.

Developers want a device that enables them to offer premium and familiar experiences to users, while enterprises see the potential of MR in reducing costs across areas such as product development, training, industrial maintenance and emergency response. Embracing MR can also enhance collaboration and improve customer experiences.

Enterprises and developers need a high-quality device with exceptional specifications that empowers them to deliver outstanding experiences, all while minimising costs. The Vision Pro does just this.

For consumers, the Vision Pro offers innovative ways to engage with digital content. Although we can access content on various screen sizes, an exceptional experience often requires the optimal screen size. Achieving it usually means compromising mobility, as only smartphones, iPads and laptops offer truly mobile screens.

For instance, while movies can be enjoyed on smartphones, a larger screen in a theatre provides a significantly better viewing experience. In the workspace, working with multiple displays boosts productivity compared to relying on a single laptop screen. But users can’t carry multiple screens when they change locations. AR experiences can also be accessed via smartphones or tablets, but the ability to view content hands-free is a major enhancement to the overall experience.

For years, MR headsets have promised such features. The ability to individually access all desired displays for each specific experience is not a novel concept. But while other companies have made promises and only partially delivered on them, primarily in gaming and in limited commercial applications, Apple is now delivering what many players in the space acknowledge only it can deliver.

Three Improvements for Vision Pro?

Despite its disruptive nature, there is still room for improvement with the Vision Pro:

  • Comfort. After using it for 30 minutes, I found myself wondering whether I could comfortably wear the device for a few hours. It was heavier than I’d expected, though that’s understandable considering the advanced technology it incorporates.
  • Eye fatigue. The device essentially “glues” a screen to our eyes, so eye fatigue could be an issue. Users should be careful and look at ways to minimise discomfort during prolonged use.
  • Personal interactions. While EyeSight is one of the headset’s standout features, enabling users to connect with others without having to remove the device, it does raise practical concerns. How many of us would truly engage in conversations by displaying a digital representation of our eyes? This may require further evaluation to determine its real-world utility and acceptance.

In summary, Apple has been a disruptive force across multiple categories and industries, transforming personal computers, music players, smartphones and watches, to name a few. Its innovative products have not only set the standard for their respective categories, but have also revolutionised our lives in unimaginable ways.

With the introduction of the Vision Pro, Apple is initiating the next revolution in personal technology.

Please reach out if you have any questions, or follow me on Twitter or LinkedIn.

As the old adage goes, “A smooth sea never made a skilled sailor.” Nowhere is this more evident than in today’s IT landscape. CIOs across the globe are grappling with a new, unexpected wave in their voyage – inflation. What makes this wave particularly unsettling is that it is inflating the cost of all IT services without offering additional value. This shift has put cost-efficiency on every IT leader’s radar.

However, a leaner IT function doesn’t necessarily equate to a downgrade. It means that IT must now be savvy – not just technologically, but also financially. This requires a strategic reevaluation and a sharper toolkit. With this in mind, let’s dive into how to navigate the cost tide in the current business environment.

Acknowledging the Storm: Current Economic Climate

Global business revenue is set to decline due to macroeconomic factors and constricted consumer spending. IT budgets, which are often proportional to business revenue, will undoubtedly feel the pinch. The pressure will be on IT departments to ensure every dollar is spent wisely.

Meanwhile, staffing and labor shortages for IT talent have escalated due to the digital skills gap, an evolving job market and pandemic-related disruptions. This has further complicated the IT budgeting equation, causing CIOs to rethink their talent strategy.

On the supply front, IT hardware, often sourced globally, has been affected by supply chain difficulties. The resultant unpredictability in both cost and availability requires us to reframe our IT sourcing and inventory strategy.

These challenges are multi-faceted, but they’re not insurmountable. The need of the hour is to act decisively, recalibrating our approach to ensure cost-efficiency and value delivery.

Taking the Helm: Practical Steps for CIOs

The current inflation-driven wave can’t be ridden out by simply bailing water. Instead, it requires us to take decisive action and steer the ship in a new direction. Below are some key steps that CIOs and IT managers can consider.

Committing to Clear Technical Debt

In the world of IT, technical debt can accumulate much like financial debt in the real world. It is the cost that companies pay for short-term technological fixes that, over time, require an increasing amount of work just to keep the systems running. When unaddressed, it can lead to increased costs, inefficiencies and ultimately reduced agility and innovation.

Today, more than ever, we need to start chipping away at these debts. In this challenging economic environment, the cost of servicing this debt becomes even more burdensome. Paying down technical debt isn’t an easy task – it requires a well-thought-out plan, which might involve revising outdated code, rearchitecting inefficient systems, or even investing in new technologies. However, the benefit lies in streamlined processes, reduced costs and increased operational efficiency, all crucial in the inflation-impacted business climate.

Right-Sizing Staffing: A Delicate Dance

In an inflation-driven world, staffing becomes a high-wire act. The goal here isn’t merely about finding the balance between overstaffing and understaffing, but about making strategic decisions on how to most efficiently deploy human resources.

Firstly, consider which skills are most needed for your department’s strategic initiatives and day-to-day operations. Are these skills available in-house, or do you need to recruit? Then, evaluate the cost-benefit of full-time employees, contract workers, outsourced teams, and automation solutions. Implementing automation for repetitive tasks, for example, can not only cut costs but also free up your talented IT professionals to focus on more value-added activities.

The labor shortages in the IT industry only amplify the need for a thoughtful and strategic approach to staffing. By right-sizing your team, you can maximize output while keeping costs under control.

For more on this, please see the earlier blog post, Winning The War For Talent With IT Service Cost Management.

Adopting a Mature IT Budgeting Approach

Now more than ever, a mature and nimble IT budgeting process is crucial. Traditionally, IT budgets have been a once-a-year event, often rigid and slow to respond to changing business needs. However, the current economic climate calls for a more agile approach.

Incorporate frequent budget reviews, allowing adjustments in response to changing business conditions and IT demands. Cultivate transparency and communication about the budget within your team and across departments. Moreover, every line item on the budget should clearly tie back to the value it delivers. This means moving beyond the cost-center mindset and communicating IT’s contribution to business goals.
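As an illustration, a frequent budget review can be as simple as comparing actual spend against the quarterly plan and flagging line items that drift beyond a tolerance. The line items, figures, and 5% threshold below are hypothetical, purely to show the mechanics:

```python
# Minimal sketch (hypothetical figures): a quarterly budget review that
# flags line items drifting beyond a tolerance so they can be adjusted.
annual_budget = {"cloud": 250_000, "licenses": 120_000, "staffing": 600_000}
actuals_q1 = {"cloud": 81_000, "licenses": 29_000, "staffing": 150_000}

TOLERANCE = 0.05  # flag items deviating more than 5% from the quarterly plan

review = {}
for item, annual in annual_budget.items():
    quarterly_plan = annual / 4
    deviation = (actuals_q1[item] - quarterly_plan) / quarterly_plan
    review[item] = "ADJUST" if abs(deviation) > TOLERANCE else "on track"
    print(f"{item}: {deviation:+.1%} vs quarterly plan [{review[item]}]")
```

In practice, each flagged item would trigger a conversation about whether the variance reflects changed business needs, and the plan would be adjusted rather than mechanically enforced.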

Regular Benchmarking: Keeping a Finger on the Pulse

Regular benchmarking of your IT costs against industry standards is a critical part of maintaining cost efficiency. It allows you to identify areas where costs may have inflated beyond the norm and provides a basis for understanding whether your spending aligns with the value you’re providing.

A good sailor knows the importance of regular checks on the ship’s position. Benchmarking plays the same role in IT: it serves as a navigational beacon, helping you chart the course towards cost efficiency and maximum value delivery.

At its core, benchmarking is a method of comparing your costs, processes and performance metrics to those of other businesses, for example the industry leaders or direct competitors. But it’s not just about numbers. It’s about understanding what the best practices are, what strategies are yielding results and how you can adapt these insights to your own organization’s context.

The fast-paced and dynamic nature of the IT sector makes regular benchmarking a necessity. It’s not enough to benchmark once and then forget about it. IT costs, influenced by factors such as new technological developments, market competition and regulatory changes, can fluctuate. Regular benchmarking ensures you’re steering your ship by current coordinates.

While cost is a significant element in benchmarking, it’s essential to remember that it’s not only about finding the cheapest way to do things. The ultimate goal is to maximize the value your IT department delivers. This means benchmarking should also cover aspects like service quality, process efficiency and innovation capability. This comprehensive approach provides a fuller picture, guiding the effective allocation of resources.
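To make the mechanics concrete, the core of cost benchmarking is a comparison of internal unit costs against industry reference values. The metric names and figures below are hypothetical, chosen only to illustrate the comparison:

```python
# Minimal sketch (hypothetical figures): comparing internal IT unit costs
# against industry benchmark values and flagging outliers for review.
metrics = {  # metric: (internal cost, industry benchmark), $ per unit
    "cost_per_end_user_device": (1450.0, 1200.0),
    "cost_per_server_instance": (820.0, 900.0),
    "cost_per_service_desk_ticket": (22.0, 18.0),
}

TOLERANCE = 0.10  # flag anything more than 10% above benchmark

flags = {}
for name, (internal, benchmark) in metrics.items():
    variance = (internal - benchmark) / benchmark
    flags[name] = "REVIEW" if variance > TOLERANCE else "ok"
    print(f"{name}: {variance:+.1%} vs benchmark [{flags[name]}]")
```

A real benchmark would also normalize for scale, service quality and industry context, in line with the comprehensive approach described above.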

Benchmarking in Practice

Benchmarking can take different forms, each offering unique insights. Cost benchmarking allows you to identify the cost level of your IT department. Price benchmarking helps you understand how competitive and healthy your key contracts are. Functional benchmarking compares your operations with those of industry leaders, even from different sectors.

Moreover, strategic benchmarking allows you to examine how other organizations achieve their business success. It’s about analyzing the big-picture strategies and the long-term vision. Given the integral role IT plays in business success, strategic benchmarking can offer invaluable insights.

Embracing benchmarking requires a certain mindset. It’s about acknowledging that there are lessons to be learned from others, about being open to change and about striving for continuous improvement. Developing this mindset within your team and promoting a culture of learning can truly harness the power of benchmarking.

In conclusion, benchmarking, when done regularly and comprehensively, provides a realistic and fact-based perspective on your IT costs and performance. It’s an essential tool in your arsenal to navigate the inflation-induced wave, keeping your IT department not just afloat, but sailing smoothly towards its destination.

Navigating the Waters Ahead

These strategies are not just about surviving the wave of inflation. They are about adapting to new realities, steering the ship in a new direction and ultimately coming out stronger on the other side. Yes, cost-efficiency is critical, but let’s not forget the value that IT brings to the table. The role of IT leaders now is not only to control the costs but also to emphasize and enhance this value. Embrace the challenge and navigate the seas of change with confidence and foresight.

Interested to learn how the cost efficiencies of your internal technology services stack up against peer organizations? Visit our website for more information on our IT benchmarking service, IT Service Cost Management.

Over the past few years, a growing number of organizations around the world have made bold pledges – and set specific targets – to achieve environmental and social sustainability goals. However, many organizations continue to struggle to make progress toward achieving these goals. This is often due to a lack of a clear technology strategy that is aligned with corporate sustainability missions.

For most organizations, strategies for achieving these goals are typically driven by the Board of Directors and/or C-suite. The challenges with this approach lie in:

  • Effectively communicating the importance of sustainability to the business
  • Explaining how the strategy will be executed
  • Defining what role teams and individuals must play in helping achieve goals

This is particularly true in IT, where a lack of communication and guidance from executive management on the role IT, and IT technologies, must play can slow progress. While there are a variety of personas and functional leads who are responsible for contributing to corporate sustainability goals, IT must work cross-functionally to support its own objectives while supporting the technology needs of departmental and corporate leads. When it comes to supporting corporate sustainability initiatives, IT has two primary responsibilities:

  • Reducing the sustainability impact of its own IT infrastructure
  • Leveraging technology solutions that allow the organization to visualize and improve performance

Sustainable IT Infrastructure

In many organizations, IT accounts for a sizable portion of overall carbon emissions. For companies that have made aggressive commitments to reducing greenhouse gas emissions, IT will be an obvious area of priority.

Over the past few years, digital transformation has been a recurring theme in IT as organizations increasingly rely on digital technologies to run their business. As organizations move beyond digital transformation initiatives and look for growth built on digital-first strategies, they will need to focus on purposeful long-term goals like sustainability.

More organizations are starting to view their IT strategy through the lens of sustainability. We are seeing an increasing number of request for proposals (RFPs) with specific sustainability requirements for areas such as energy efficiency and carbon emissions. Companies are also looking at the full lifecycle of their IT assets and embedding sustainability data into asset lifecycle management. This allows leadership to make informed decisions about asset utilization, asset maintenance and repair, and end-of-life/reuse/recycle.

For their part, IT vendors recognize the importance that customers are placing on sustainability and are incorporating it into their solutions portfolios. Across the IT landscape, IT vendors are developing more energy efficient infrastructure and designing and manufacturing equipment for recycle/reuse. Cloud service providers, meanwhile, are increasing their use of renewable energy sources, while data center operators are driving energy efficiency through better resource utilization and cooling solutions.

Driving Improvement Through IT Sustainability Solutions

IT will also play an important role in supporting corporate sustainability missions by leveraging existing technologies and investing in new solutions that can help organizations track, report, manage, and improve sustainability performance.

Gaining access to the data needed to effectively manage sustainability performance is essential for establishing performance baselines and devising strategies for achieving future goals and targets. However, this can be challenging, as sustainability data typically resides in different repositories. These repositories can be scattered throughout the organization and fall under the control of different functional leads.

Without visibility into sustainability data, the ability to effectively report on milestones and metrics for compliance purposes and meet established goals is compromised. IT needs to aggregate internal data and provide platforms for sharing data across the organization.

In the software area alone, there has been an explosion of sustainability solutions that give organizations greater visibility and awareness of performance.

IDC’s ESG Perception Survey revealed that most organizations are using multiple tools to manage sustainability, with nearly three-quarters of survey respondents citing data and performance management as the key features they are using.

IDC expects to see increased spending in software solutions for sustainability performance management as organizations look for greater observability of their impact across the organization, as well as the partner/supplier ecosystem.

It should be noted that greater awareness of an organization’s sustainability footprint has led forward-looking organizations to use sustainability as a lever for driving innovations in areas such as supply chain, distribution/shipping, and manufacturing processes. IDC believes that the shift from compliance-driven to business value-driven sustainability initiatives is taking place. Technology will play an even greater role in helping organizations identify the opportunities for leveraging sustainability to drive business innovation.

Conclusion

Technology will play a critically important role in helping organizations meet their sustainability targets and goals. Developing an IT strategy that aligns with corporate sustainability strategy is critical to identifying these technologies. New technologies will be needed to track performance, report progress to internal and external stakeholders, meet compliance and regulatory demands, and integrate sustainability data into existing business operations. Ultimately, greater visibility will allow organizations to expand from compliance-driven strategies to strategies that are focused on leveraging sustainability to drive business value.

For more insight and information on trends and market dynamics driving technology purchases for sustainability, please see the IDC eBook entitled “Driving Business Value Through Sustainable Transformation”.

The communications-platform-as-a-service (CPaaS) segment is in transition. Over the past few years, companies in this segment benefited from a tsunami of growth driven by the demand and subsequent adoption of digital customer engagement platforms.

However, the economic and social environment has changed, and companies are no longer focused on impressive growth metrics alone. CPaaS companies thrived on the promise of years of 30%+ annual growth, attracting investment and hordes of new entrants. Today, the atmosphere is muted, born of investor wariness and disappointment fueled by overambitious growth projections.

Despite the setbacks of slowing sales cycles, restructuring, and downsizing, the industry is still one of the strongest IT sectors, with attainable double-digit and profitable growth in reach of many companies. IDC forecasts the worldwide CPaaS market to grow from $14.3 billion in 2022 to $29.7 billion in 2027. CPaaS will continue to grow at a rapid pace (a 15.8% compound annual growth rate, or CAGR, for 2022-2027) as many enterprises embrace cloud-enabled communication API solutions and services that help them easily and affordably increase customer engagement and improve operational efficiencies.
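As a quick sanity check, the growth rate can be reproduced from the forecast figures, taking the stated $14.3 billion 2022 baseline and compounding over the five-year 2022-2027 horizon:

```python
# Sanity-checking the forecast: $14.3B in 2022 compounding over the
# five-year 2022-2027 horizon to $29.7B.
start_b, end_b, years = 14.3, 29.7, 5
cagr = (end_b / start_b) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~15.7%, matching the reported 15.8% within rounding
```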

Customer Experience

The COVID-19 pandemic that started in 2020 accelerated the shift by companies to digital infrastructure and the use of omni-channel digital customer engagement. As users become more demanding, communications must be omnichannel, interactive and enriched to provide personal, intelligent, and customized engagement. According to IDC research, spending on customer experience and digital engagement channels will be a key driver of IT spending over the next few years and will also be relatively immune to budget cuts due to adverse economic conditions.

While many companies are well along in their digital transformation journey, refining and perfecting customer engagement is still a complex process. Leveraging CPaaS platforms reduces the complexity of creating customized, differentiated applications, especially with the introduction of low-code and no-code tools and unified APIs for omnichannel engagement. As such, customer experience and digital engagement capabilities remain a top priority.

According to IDC survey data, customer experience will remain a key driver of digital infrastructure over the next few years.

Enterprises also recognize the need for consolidation of cloud communications platforms including seamless integration of CPaaS, UCaaS and CCaaS. This will also enable companies to rationalize spending on multiple platforms, while improving productivity for employees.

Multichannel and Use Case Focus

Application-to-Person (A2P) messaging has traditionally been dominated by SMS, a regulated channel and an effective way to reach a wide audience. However, as mobile channels become more important for brands and enterprises, the A2P ecosystem has seen a proliferation of new channels better suited to interactive engagement, such as chat apps, social media apps, WebRTC voice and video, as well as RCS and iChat.

IDC identifies six key services feature segments within the enterprise CPaaS market: voice, messaging, video, email, other APIs, and miscellaneous services. Messaging (this includes SMS as well as OTT messaging) is the largest segment and will remain so throughout the coming years. Voice is the second-largest service, but video is the fastest-growing service and driven by use cases in manufacturing, banking/insurance, and healthcare.

IDC’s enterprise communications surveys, conducted yearly across the various regions, provide insights into the adoption, drivers, and challenges for a large number of ICT solutions and services spanning network, mobility, UCC and CPaaS. They provide a view of the most used channels, use cases, and deployments, as well as the criteria enterprises apply when selecting a CPaaS provider.

The European Enterprise Communications Survey, 2022: Attitudes Toward Communications Platform as a Service is one of these yearly published surveys and provides an insight into the adoption trends of CPaaS in Europe. The 2023 survey results are expected to be published in June. Another insightful CPaaS-focused survey report is the IDC CPaaS Developers Survey: 2022, published in March 2023. This survey provides a high-level overview of which applications developers are creating on CPaaS platforms, as well as various usage preferences in key markets such as Australia, Brazil, India, Singapore, the United Kingdom, and the United States.

Industry Dynamics

IDC assessed 23 CPaaS providers for the 2023 Worldwide CPaaS MarketScape study. This segment is entering a new phase. The market has become saturated with a diverse array of companies, including pure-play CPaaS providers, IT companies, network service providers, software providers and others. While the market is dominated by CPaaS specialists such as Twilio, Infobip, Sinch and MessageBird, companies that provide CPaaS as a complementary service or integrated with other services will become increasingly common.

CPaaS providers are ideally suited to meet the requirements of companies to simplify, automate, and amplify customer experience excellence. The addressable market is expanding, driven by new tools and the march of technology that is opening up new possibilities for companies in this segment.

Advice for Buyers

The following is a list of key attributes and factors for enterprises to consider in choosing a CPaaS partner:

  • Automation and AI-driven personalization capabilities: The ideal partner should demonstrate the ability to reduce complexity, while integrating a diverse range of applications and platforms to produce improved business outcomes including reduced marketing and operational costs.
  • Unified and conversational engagement capabilities: These include the ability to put customer channel preferences as the priority and provide channel choices depending on regional or regulatory and compliance requirements.
  • Enhanced tools and capabilities: While a diverse range of application programming interfaces (APIs) is important, developers should consider adjacent expertise such as low-code tools, integrated CCaaS (even if it’s a minimal IVR), SaaS tools for agile and flexible application deployment, and a customer data platform (CDP), whether in-house or via third-party integration.
  • Platform reliability and carrier integration: Yes, CPaaS is primarily software driven, but it also relies on efficient direct connectivity with network operators. The ability to provide cost-effective global routes with high SLAs, and the expertise to ensure secure platforms, is crucial to business continuity. Seek a proven track record, but retain a backup secondary provider in the event of the inevitable security breach.

The IDC MarketScape: Worldwide Communications Platform as a Service 2023 Vendor Assessment is IDC’s most ambitious study of the CPaaS segment to date, with assessments of companies across the geographic and strategic spectrum. It represents a new chapter in the evolution of the industry, one that shows how CPaaS providers take on the dual challenges of meeting shifting enterprise requirements and the demands of investors.

Melissa Fremeijer-Holtz - Senior Research Manager, European Enterprise Infrastructure and Communications - IDC

Melissa Holtz (Fremeijer) is a senior research manager in IDC's European Enterprise Infrastructure and Communications group and is based in Amsterdam. As one of the lead analysts for the European Enterprise Communication Services research program, she focuses on the European enterprise managed UCC and communications-platform-as-a-service (CPaaS) market. She is also responsible for IDC's managed edge and content delivery services for the European region. She is a regular speaker at client and IDC events and is frequently quoted in the press.

You often see it on television: programs about people who are struggling financially. They run out of money at the end of the month, they can’t sell their house, they have a problematic debt burden, and so on. A common denominator is often the lack of insight into their own situation, and while coming up with ways to save money may not be very difficult, actually implementing and sticking to them is much harder.

I mean, it’s easy for an outsider to suggest that someone should get rid of their dog, but if that pet is their only source of comfort, it will take some effort.

The same goes for cloud costs: saving money is easier said than done. There are all sorts of great tools available from both cloud providers and third parties to help you understand your costs.

These tools provide various reports and dashboards, and even recommendations on which instances to remove or resize (rightsizing). With the right knowledge, you can also determine how to use discount options (reserved instances, savings plans, reserved capacity, etc.), how to manage licenses intelligently, and what you can do in your application architecture to save costs. And, of course, you can always turn off instances when you’re not using them.

All of this insight is great, but then comes the second part. Just as people have a hard time saying goodbye to their pets, users and administrators have a hard time shedding their old habits and ways of thinking. And that’s something cloud providers never talk about.

For example, consider turning off instances outside of working hours. In theory, this is an excellent way to save money, but instances are part of applications, which in turn are part of chains. It can happen that data exchange takes place in a chain outside of working hours.

Testing teams that are under a deadline may also need their environment outside of the predetermined working hours. And if environments are used in the management chain, they must also be available after working hours in case of an emergency. So savings are theoretically simple, but practice is more complicated. It can be done, but it takes a lot of effort.
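To illustrate the theoretical upside before those practical objections kick in, a back-of-the-envelope calculation shows what a weekday, business-hours-only schedule could save on a single non-production instance. The hourly rate and schedule below are hypothetical:

```python
# Back-of-the-envelope: annual savings from running a non-production
# instance only during working hours. Rate and schedule are hypothetical.
hourly_rate = 0.20          # hypothetical on-demand price, $/hour
always_on_hours = 24 * 7    # 168 hours/week
office_hours = 12 * 5       # 12 hours/day, weekdays only = 60 hours/week

always_on_cost = hourly_rate * always_on_hours * 52
scheduled_cost = hourly_rate * office_hours * 52
saving = 1 - scheduled_cost / always_on_cost
print(f"Always-on: ${always_on_cost:,.0f}/yr, scheduled: ${scheduled_cost:,.0f}/yr")
print(f"Theoretical saving: {saving:.0%}")  # about 64%
```

The roughly 64% figure is the theoretical ceiling; as the paragraphs above note, off-hours data exchange, deadline-driven testing and emergency access all eat into it in practice.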

Rightsizing is also less straightforward than it seems. Users and administrators are often hesitant to remove capacity: users see their performance decrease, and administrators see the risk of more outages because there is less excess capacity to handle issues. In the latter case, you need to analyze where these issues are coming from: a poor application can benefit from more capacity, but that is not a long-term solution.

If the roof is leaking, you can replace the bucket you use to catch the water with a mortar tub, but even that will eventually fill up. Ultimately, you’ll have to repair the roof.

So, objections can be raised for all types of savings. Eventually, you’ll need to adopt an approach that not only makes costs visible but also involves users and administrators, and leads to the right considerations on where to save on your cloud costs and where not to.

Don’t know where to start? Can’t figure it out quickly enough? IDC Metri has helped several organizations get started. Our specialists can help kickstart your cost-saving efforts in the cloud. Understanding costs is one thing, but it’s only useful if they actually decrease.


Want to learn more? Subscribe to IDC Metri’s monthly newsletter full of actionable insights on IT benchmarking, intelligence, sourcing and more.

In times of economic uncertainty, businesses tend to become more cautious and hesitant with buying decisions. This presents a unique opportunity, however, for technology vendors to demonstrate their value as catalysts of growth. By providing a credible economic impact model, tech vendors can offer a clear and data-driven analysis of their impact on the social, economic, and environmental aspects of their business, at a global, regional, or country level. This can help accelerate decision-making processes, and ultimately drive opportunity. Moreover, as consumers and businesses become increasingly aware of their impact on society and the environment, an economic and sustainability impact assessment can be a necessity for doing business in the future. By demonstrating how they positively contribute to the local economy and environment, tech vendors can differentiate themselves from their competitors and attract customers who prioritize sustainability and social responsibility.

When do you need an Economic and Sustainability Impact Study?

An economic and sustainability impact analysis is an important tool when a technology provider wants, or needs, to evaluate the impact of its business on the economy. It provides credible scenarios based on third-party data and research, with a deep understanding of both technology and economic impacts, to demonstrate overall value. It is especially useful when technology vendors need to show that their investment in a region, or in a technology, creates spinoff economic and social impacts.

Key Reasons for Creating an Economic Impact Study

Marketing Executives:

  1. To build brand equity with governments
  2. To attract and increase media attention
  3. To create trusted content that demonstrates thought leadership

Partner Marketing Professionals:

  1. To demonstrate the opportunity their technology provides customers and partners to generate revenue
  2. To attract and retain partners to their ecosystem and deepen their share of wallet

Sustainability Executives:

  1. To show that their company is a catalyst for good
  2. To create awareness that their company is driving growth in a sustainable way, through measurable results

Why is an Economic Impact Study a differentiation tool?

Because the study is created by a third-party research firm with a deep understanding of technology and industry verticals, it provides credible, and therefore trusted, thought leadership content: a tool that demonstrates a vendor’s overall impact as a catalyst for sustainable economic growth.

Leading subject matter experts author the study’s findings and can quantify exactly how your company will provide growth in three key areas:

  1. Economic impact
    • Specifies increase to GDP
    • Quantifies job growth
  2. Ecosystem impact
    • Driving ecosystem opportunity
    • Accelerating partner value
  3. Sustainability impact
    • Measured reduction in greenhouse gas emissions
    • Investment in social diversity

What is the process involved in building an Economic Impact Study?

To estimate the overall economic impact of a technology provider, IDC utilizes a standard analytical framework, an Economic Impact Analysis, which leverages an input-output (I/O) framework.

Standard economic impact analysis evaluates three types of economic and social impact (GDP and job growth), as well as other impacts (such as taxation):

  1. Direct: the effect on the direct supply chain for the solution
  2. Indirect: the effect on the supply chain and customers indirectly related to the solution
  3. Induced: secondary effects, not directly related to the solution. These can be effects generated in the economy from economic stimulus and the ripple effect on jobs and revenues, as an example.
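As a simplified illustration of how these three effects combine, the sketch below applies hypothetical indirect and induced multipliers to a hypothetical direct GDP contribution. Real I/O studies derive such multipliers from national or regional input-output tables rather than assuming them:

```python
# Illustrative only: hypothetical multipliers for an input-output (I/O)
# style calculation. Real studies derive multipliers from I/O tables.
direct_gdp = 100.0           # hypothetical direct GDP contribution, $M
indirect_multiplier = 0.6    # supply-chain effect per direct dollar
induced_multiplier = 0.4     # ripple effect from re-spent income

indirect_gdp = direct_gdp * indirect_multiplier
induced_gdp = direct_gdp * induced_multiplier
total_gdp = direct_gdp + indirect_gdp + induced_gdp
print(f"Total GDP impact: ${total_gdp:.0f}M "
      f"(direct {direct_gdp:.0f} + indirect {indirect_gdp:.0f} "
      f"+ induced {induced_gdp:.0f})")
```

The same structure extends to jobs and tax revenue, each with its own multipliers.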

How can you use an Economic Impact Study?

An economic impact study is a powerful tool that provides clear, quantified proof of your thought leadership. Used in marketing content strategies, it conveys how your technology drives good, growth and innovation.

  1. As a PR tool to generate increased media exposure and coverage
  2. In recruitment campaigns to attract and retain sustainability conscious talent
  3. In marketing and business outreach, to show the business value, direct investment and infrastructure, contribution to GDP and employment, as well as tax revenues, to support the global growth of the business in new geographies

IDC has been producing Economic Impact Models for more than 20 years. Our Macroeconomic Center of Excellence delivers credible, defensible assessments. Our technology research is fueled by more than 1,300 of the world’s leading analysts who create unbiased, data-driven research. Learn more about IDC’s Economic Impact Model and thought leadership content solutions.

I was born in Ravenna, on the east coast of Emilia-Romagna, one of the most liveable and prosperous regions in Italy. Emilia-Romagna is home to 7.3% of the Italian population. It accounts for 9.2% of GDP and 11.8% of agricultural production.

It hosts the headquarters of globally successful firms in automotive, motorbikes, food production, ceramic tiles, textile and fashion, biomedical engineering, construction, woodworking equipment and much more. Unemployment is at 5.1%, well below the 2022 national average of 8.2%. Life expectancy is higher than the national average.

There are white sandy beaches, natural reserves in coastal wetlands, and beautiful hills and mountains, which combined with a rich heritage — Ravenna alone boasts eight UNESCO heritage sites — and amazing food and wine attract tens of millions of tourists every year.

Besides these material treasures, there is a unique way of living in Emilia-Romagna. And even more so in Romagna, where I grew up; there’s an old saying that you can tell if you are in the Romagna part of the region because when a stranger shows up at someone’s door, they are welcomed with a smile and a glass of wine. On the Emilia side, they’ll be equally warmly welcomed, but with a glass of water!

There is a sense of shared joy, a passion for life and a pride in belonging to one’s community: a shared sense of resilience that drives people through the hardships of life with a smile on their face, always trying to put a smile on someone else’s. Because there is always a little bit of magic, even in the small things.

As Federico Fellini, the world-famous movie director and one of the most beloved children of our region, once said: “Life is a combination of magic and pasta.”

It feels good to be a Romagnolo. And to visit Romagna … unless you happened to be there in the first two weeks of May 2023.

Smart River and Water Management: Preparing for Foreseeable Disasters

After many months of drought, in the first 17 days of May 2023, Romagna was hit by as much rain as it usually gets in six months. In some areas this meant up to 400mm of rain in two weeks. To put things in perspective, one of the worst hit municipalities, Faenza, which is home to 60,000 people, experiences on average 760mm of rain a year.

Stereotypically rainy London gets 690mm a year. The result of this unusually heavy rain was that 23 rivers burst their banks, resulting in 50 floods; 305 landslides devastated hills and mountains; 14 people died; and over 36,000 people were displaced from their homes. The estimated economic damage to homes, factories, farms and public infrastructure is north of €5 billion, with around €600 million needed just to rebuild public infrastructure.

Climate change is increasing the frequency and intensity of these extreme weather events. Long-term environmental sustainability actions, which are progressing way too slowly, will not be enough.

Resilience to short-term shocks is imperative. Money is not the problem; in fact, there is an estimated €8 billion available from the Italian COVID Recovery and Resilience Plan and the “Italia Sicura” (Safe Italy) plan to make public infrastructure more resilient. This, however, is at risk of not being spent, or not spent well, because of lack of planning, skill gaps, slow public procurement, and insufficient competencies and capacity to audit.

Technology innovation is not a silver bullet, but when implemented wisely it can help fill some of those gaps. The increasing availability and granularity of data from satellite images, IoT sensors, weather monitoring and forecasting models already tell us that Italy has the highest amount of rain in Europe, with 300 billion cubic meters a year.

Building permitting systems, public works inspection systems and other sources tell us that Emilia-Romagna was the fourth worst region in terms of soil consumption in Italy in 2021, including in areas at high risk of flooding. By building on the existing knowledge, collecting more data and turning the data into intelligent smart river and water management insights, governments, water utilities and the public could make better decisions across the disaster resilience life cycle, from mitigation to preparedness, from response to recovery.

  • Mitigation: Governments can use a wide variety of tools to develop hazard maps that can identify areas most at risk and feed into planning and preparedness systems. Policymakers and building inspectors can feed intelligent insights into planning and operational simulation tools, such as digital twins, to simulate the impact of building code and permitting decisions to reduce soil consumption and require the use of more resilient building techniques and materials.
  • Preparedness: The benefits of building flood-resilient systems (dams, levees, flood walls, diversion canals, etc.), protecting natural systems such as wetlands, marshes and beaches, and using resilient building techniques, such as tiled pavements instead of concrete for parking lots and roads to increase water absorption, can be augmented by making these assets and tools intelligent. The intelligence from those systems can enable real-time or preventive decisions about diversion tactics, rather than reacting only when the flood is too close.
  • Response: Real-time data from weather forecasting models, integrated with data from dam and river sensors, should be analysed to detect anomalies and automatically raise emergency alerts that promptly notify citizens. That is far better than relying on fire and police patrols roaming the roads of small rural villages and towns with loudspeakers to tell citizens to evacuate their homes, or expecting mayors to post videos on social media and hoping everybody pays attention, as happened in Romagna in the past two weeks. More intelligent use of data can also provide insights for command-and-control personnel to coordinate first responders and orchestrate the supply of food, clothes and medicine for shelters, instead of relying on emails, spreadsheets and phone calls.
  • Recovery: Digital twins would allow evidence-based infrastructure planning decisions and monitoring of the progress of investments aimed at rebuilding infrastructure, increasing the speed and transparency of projects to avoid wasting time and money. AR/VR tools can help engineers conduct inspections when anomalies are detected.
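The response step above hinges on turning raw sensor readings into automatic alerts. A minimal sketch of that idea, with entirely hypothetical thresholds and data, could flag a river-level reading that deviates sharply from a rolling baseline instead of waiting for a human patrol to notice:

```python
# Minimal sketch (all names, thresholds and readings are hypothetical):
# flag anomalous river-level readings by comparing each new reading
# against a rolling baseline of recent hourly measurements.
from collections import deque
from statistics import mean, stdev

def make_level_monitor(window=24, sigma=3.0):
    """Return a function that ingests hourly water-level readings (metres)
    and reports whether the latest reading is anomalously high."""
    history = deque(maxlen=window)

    def ingest(level_m):
        if len(history) >= 3:
            baseline, spread = mean(history), stdev(history)
            # Guard against near-zero spread on very stable rivers.
            anomalous = level_m > baseline + sigma * max(spread, 0.05)
        else:
            anomalous = False  # not enough data to judge yet
        history.append(level_m)
        return anomalous

    return ingest

monitor = make_level_monitor()
readings = [1.1, 1.2, 1.1, 1.2, 1.1, 1.2, 3.8]  # sudden surge at the end
alerts = [monitor(r) for r in readings]  # only the last reading alerts
```

A production system would of course fuse multiple sensors with forecast models and route alerts to notification channels; the point here is only that the anomaly-detection core is simple and can run continuously.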

The same technology infrastructure — with a few additions in terms of sensors and applications — will provide intelligent insights for other use cases, such as water conservation in dry seasons, leakage reduction, biodiversity protection in rivers, marshes and ports, sustainable water transportation, and water quality.

Only two days after the peak of the emergency, millions of euros, as well as food, clothing and other supplies, had been donated to flooded areas in Emilia-Romagna from all over Italy and beyond. Boosted by the typical Romagnolo spirit, spontaneous neighbourhood efforts have mushroomed to clean mud from houses, roads and farms. Beaches have already been cleaned for the upcoming tourist season. But that resolve to recover quickly should not allow us to forget what happened. We know what the future holds. Extreme weather events will happen, not only in well-known high-risk flooding areas, such as the Indian Subcontinent, Southeast Asia, and Pacific and Caribbean Islands, but also in traditionally safer regions of the world.

Technology innovation will be critical to climate change resilience. But technology alone will not be enough, and neither is compassion once disaster has struck. We need to invest in the mitigation and preparedness measures that generate the highest long-term returns.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.

AI Act: How Did We Get Here and Where Are We Now?

In April 2021, the European Commission submitted a detailed proposal of its plan to regulate artificial intelligence development and use in Europe: the AI Act. The AI Act’s goal is to ensure that the development and deployment of AI systems in Europe is safe, transparent and compliant with the EU’s fundamental rights and values ― protecting the public, while still fostering innovation.

The Council adopted a “general approach” on a set of harmonized rules on artificial intelligence in December 2022, but the rapid progress of the technology, together with the sudden wave of innovation in Generative AI systems, delayed the final discussion of the legislation while new amendments to cover the latest developments were explored. On May 11, the European Parliament committees approved the AI Act with a large majority, in a vote that paves the way to the plenary vote in mid-June (June 14 as a tentative date).

Let’s now look at the main principles of the proposed regulation and how it will impact the AI market in the region.

Regulating the Development and Deployment of AI in the EU ― Key Aspects of the AI Act

The proposal identifies three regulated risk categories, plus a minimal-risk tier, and applies different restrictions and obligations to system providers and users, depending on the category of the application in question:

  • Unacceptable risk: applications that involve subliminal manipulation, exploitative practices or social scoring by public authorities. Such applications will be banned.
  • High risk: applications related to education, healthcare and employment, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements (e.g., ensuring transparency and safety of the systems and complying with the Commission’s mandatory conformity requirements). Providers of “high-risk” systems will be obliged to establish quality management systems, keep up-to-date technical documentation, undergo conformity assessments (and re-assessments) of the systems, conduct post-market monitoring, and collaborate with market surveillance authorities.
  • Limited risk: mostly AI systems such as chatbots, which will be subject to specific transparency obligations (e.g., disclosing that interactions are performed by a machine, so that users can make informed decisions).
  • Minimal risk: applications that are neither listed as risky nor explicitly banned are left largely unregulated (e.g., AI-enabled video games). Currently, this category covers the majority of AI systems used in the EU.
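The tiered structure above lends itself to a simple lookup from use case to risk tier to obligations. The sketch below is purely illustrative ― the tier assignments and obligation lists are simplified examples drawn from the categories described here, not a legal classification:

```python
# Illustrative model of the AI Act's tiered structure as summarized above.
# Tier assignments and obligation lists are simplified examples, not legal advice.
OBLIGATIONS = {
    "unacceptable": ["banned"],
    "high": [
        "quality management system",
        "up-to-date technical documentation",
        "conformity (re-)assessment",
        "post-market monitoring",
        "cooperation with market surveillance authorities",
    ],
    "limited": ["transparency (disclose machine interaction)"],
    "minimal": [],  # largely unregulated
}

# Hypothetical example mapping of use cases to tiers.
EXAMPLE_TIERS = {
    "social scoring by public authorities": "unacceptable",
    "CV-scanning / ranking job applicants": "high",
    "customer-service chatbot": "limited",
    "AI-enabled video game": "minimal",
}

def obligations_for(use_case: str) -> list[str]:
    """Look up the (illustrative) obligations for a given use case."""
    return OBLIGATIONS[EXAMPLE_TIERS[use_case]]
```

In practice a provider would need a legal assessment per system, but the asymmetry is visible even in this toy model: a minimal-risk video game carries no new obligations, while a CV-scanning tool inherits the full high-risk compliance list.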

How Will the AI Act Affect the European AI Landscape?

The introduction of the European AI Act has sparked discussions on its potential impact on the adoption of AI technologies. Will this regulation hinder AI innovation in Europe? The answer is not straightforward, as it depends on various factors and the evolving landscape.

AI regulation may impose compliance costs, administrative burdens, and legal uncertainty on businesses and developers. Extensive testing, validation, and monitoring of AI systems may become necessary, which can be time-consuming and expensive. There might also be limitations on the types of applications, industries, data, or algorithms used in AI systems.

However, when assessing the direct impact on AI use cases falling under the regulated risk categories, the outcome is not overwhelmingly negative. When we at IDC built a data model to verify which and how many AI use cases would be directly affected (those falling into the risk categories listed above), the share was only modest, and the potential impact, measured as possible lost revenue, did not look worrying.

The compliance costs and administrative burdens could be challenging for SMEs and startups, though, which may inhibit competition in Europe if larger, more established providers find it easier to comply.

Industries like healthcare, public administration or finance are likely to face more stringent requirements due to their potential impact on human life and safety. Transparency, explainability, human oversight, and restrictions on the use of, for example, biometric identification technologies are some of the obligations that might be imposed. While these requirements may limit certain applications, they also aim to protect privacy and individual rights. However, it’s important to note that the regulation offers a list of exemptions, so providers of systems serving national security interests, for instance, may not need to worry about it too much.

On the positive side, regulation has the potential to enhance wider trust and confidence in AI systems. This is crucial in countering overhyped, pop-culture-fed media narratives of AI as a threat. A trusted regulatory framework reduces legal uncertainty and creates a level playing field for businesses, public institutions, consumers and citizens. Wisely designed laws will improve the quality and safety of AI systems and, first and foremost, safeguard individuals.

The AI Act aims to encourage AI technologies that align with ethical and societal values that the EU strongly supports, such as transparency, accountability, and human-centricity. It wants to stimulate research and development in these areas and promote collaboration and openness among organizations and regions. By establishing common standards and best practices, the EU facilitates knowledge exchange and expertise sharing.

Conclusion

Looking at AI regulation through the lens of healthcare offers valuable insights. Healthcare regulations ensure safety, efficacy, and patient rights. They impose requirements on manufacturers to meet necessary standards. Similarly, AI regulations can ensure ethical and safe technology use while balancing innovation and protection.

While the potential impact of the European AI Act on AI adoption and innovation may present challenges, it also offers opportunities. By adhering to the regulatory framework, AI providers can navigate the landscape effectively, gain public trust, and promote responsible AI practices.

As the AI Act progresses, it is crucial to stay updated with the latest developments. At IDC, we will closely follow the progress of the AI Act and will continue publishing comprehensive research, providing deeper insights into its implications and potential impact as we approach the EU vote in June.

 

If you want to know more about this, please contact the team: Lapo Fioretti, Andrea Siviero, Neil Ward-Dutton or Ewa Zborowska

Lapo Fioretti - Senior Research Analyst - IDC

Lapo Fioretti is a senior research analyst in IDC’s Digital Business Research Group, leading the European Emerging Technologies Strategies research. In his role, he advises ICT players on how European organizations leverage new technologies to create business value and achieve growth, and analyzes the development and impact of emerging trends on the markets. Fioretti also co-leads the IDC Worldwide MacroTech Research program, focused on the intertwined connection between the economic and digital worlds ― analyzing the impact key macroeconomic factors have on the digital landscape and, vice versa, how technologies are impacting economies around the world.