Several years after the introduction of watchOS in 2014, Apple is once again setting its sights on revolutionising a technology that has yet to fulfil its potential. While augmented reality (AR) and virtual reality (VR) are not new, they have been subject to the unpredictable nature of product launches, with numerous companies going from pioneers to underachievers in double-quick time.

Nearly 350 AR and VR headsets have been launched in the past 10 years. Each brand has presented its own vision of AR and VR, only to fall short of lofty expectations. How many times have we eagerly embraced a new device, anticipating its transformative impact on our lives, only to be swiftly let down again and again?

Why will it be different this time? And why is this announcement so important?

The Revolution of Technology

The significance of this announcement lies in the anticipation surrounding tech companies’ efforts to revolutionise the next generation of user interfaces.

Throughout much of the latter half of the 20th century, keyboards were the primary means of interacting with digital content. But we have since witnessed the rise and widespread adoption of the mouse, touch interfaces, multitouch, voice control and voice assistants, with Apple playing a leading role in advancing some of these. Over the years, various organisations have explored immersive technologies and in the past decade VR and AR have become accessible to both consumers and businesses.

No single consumer electronics brand has managed to truly transform our interaction with digital content, however. This is what Apple aims to achieve with the Vision Pro — and it has started with a bang.

Why Vision Pro Is a Game-Changer

I was lucky enough to experience the Vision Pro hands-on. This is a product that truly lives up to the expectations set out in the keynote. Every aspect of the device is extraordinary: the image quality, the eye tracking and hand gestures, the immersive 3D spatial photos and content, the FaceTime conversations with 3D holograms, the way it blends the virtual with the real world through EyeSight, the user-friendly interface, and the luxurious feel of a meticulously crafted device.

With the Vision Pro Apple has revolutionised AR and VR experiences with a device that surpasses any other headset I’ve ever tested. This ground-breaking product has propelled the world of augmented and virtual reality to a completely different level.

Over the past decade, the collective expenditure on VR and AR headsets has exceeded $21 billion, while the number of headsets shipped has reached 59 million. The market is poised for even greater expansion, thanks to Apple’s entrance, which is expected to ignite widespread adoption and compel competitors to enter the segment.

We forecast that combined shipments of AR, VR and mixed-reality (MR) devices will skyrocket to 97 million units between 2023 and 2027, generating estimated revenue of $49 billion.

Vision Pro Potential in Business

While Apple emphasised its consumer-focused approach during the keynote, the company must expand its vision beyond just the consumer segment. Gaming has traditionally dominated the VR landscape, and this is likely to continue in the coming years. But there is an emerging potential for commercial applications as enterprises seek ways to minimise expenses and enhance customer satisfaction. By 2027, training, collaboration and improving customer experience will account for more than 52% of overall expenditure on MR hardware.

Similarly, AR has predominantly catered to enterprise users for troubleshooting, product development and design purposes. But there is also a rising consumer market opportunity for personal productivity and entertainment.

To realise this potential, Apple will need to mobilise its extensive developer community. That community is well positioned to drive content creation, which will be pivotal in reaching a significantly broader customer base.

Vision Pro Is Expensive — But the Benefits Are Clear

The Vision Pro is not cheap, but focusing only on its cost overlooks the main benefit. The product is not designed to generate long lines outside stores on launch day.

Instead, it will be a platform for content creators to unlock their creativity and seize new opportunities. Just as the iPad empowered developers to leverage a larger screen for innovative applications, the Vision Pro delivers a flawless, intuitive and immersive experience to end users — critical for developers to focus on content opportunities and not on product glitches.

Developers want a device that enables them to offer premium and familiar experiences to users, while enterprises see the potential of MR in reducing costs across areas such as product development, training, industrial maintenance and emergency response. Embracing MR can also enhance collaboration and improve customer experiences.

Enterprises and developers need a high-quality device with exceptional specifications that empowers them to deliver outstanding experiences, all while minimising costs. The Vision Pro does just this.

For consumers, the Vision Pro offers innovative ways to engage with digital content. Although we can access content on various screen sizes, an exceptional experience often requires the optimal screen size. That usually means compromising mobility to enhance the experience, as only smartphones, iPads and laptops offer truly mobile screens.

For instance, while movies can be enjoyed on smartphones, a larger screen in a theatre provides a significantly better viewing experience. In the workplace, working with multiple displays boosts productivity compared with relying on a single laptop screen. But users can’t carry multiple screens around when they change locations. AR experiences can also be accessed via smartphones or tablets, but the ability to view content hands-free is a major enhancement to the overall experience.

For years, MR headsets have promised such features. The ability to individually access all desired displays for each specific experience is not a novel concept. But while other companies have made promises and only partially delivered on them, primarily in gaming and in limited commercial applications, Apple is now delivering what many players in the space acknowledge only it can deliver.

Three Improvements for Vision Pro?

Despite its disruptive nature, there is still room for improvement with the Vision Pro:

  • Comfort. After using it for 30 minutes, I found myself wondering whether I could comfortably wear the device for a few hours. It was heavier than I’d thought, though that’s understandable considering the advanced technology it incorporates.
  • Eye fatigue. The device essentially “glues” a screen to our eyes, so eye fatigue could be an issue. Users should be careful and look at ways to minimise discomfort during prolonged use.
  • Personal interactions. While EyeSight is one of the headset’s standout features, enabling users to connect with others without having to remove the device, it does raise practical concerns. How many of us would truly engage in conversations by displaying a digital representation of our eyes? This may require further evaluation to determine its real-world utility and acceptance.

In summary, Apple has been a disruptive force across multiple categories and industries, transforming personal computers, music players, smartphones and watches, to name a few. Its innovative products have not only set the standard for their respective categories, but have also revolutionised our lives in unimaginable ways.

With the introduction of the Vision Pro, Apple is initiating the next revolution in personal technology.

Please reach out if you have any questions, or follow me on Twitter or LinkedIn.

As the old adage goes, “A smooth sea never made a skilled sailor.” Nowhere is this more evident than in today’s IT landscape. CIOs across the globe are grappling with a new, unexpected wave in their voyage – inflation. What makes this wave particularly unsettling is that it is inflating the cost of all IT services without delivering additional value. This shift has put cost-efficiency on every IT leader’s radar.

However, a leaner IT function doesn’t necessarily equate to a downgrade. It means that IT must now be savvy – not just technologically, but also financially. This requires a strategic reevaluation and a sharper toolkit. With this in mind, let’s look at how to navigate the cost tide and meet the needs of the current business environment.

Acknowledging the Storm: Current Economic Climate

Globally, business revenue is set to decline due to macroeconomic factors and constricted consumer spending. IT budgets, which are often proportional to business revenue, will undoubtedly feel the pinch. The pressure will be on IT departments to ensure every dollar is spent wisely.

Meanwhile, staffing and labor shortages for IT talent have escalated due to the digital skills gap, an evolving job market and pandemic-related disruptions. This has further complicated the IT budgeting equation, causing CIOs to rethink their talent strategy.

On the supply front, IT hardware, often sourced globally, has been affected by supply chain difficulties. The resultant unpredictability in both cost and availability requires us to reframe our IT sourcing and inventory strategy.

These challenges are multi-faceted, but they’re not insurmountable. The need of the hour is to act decisively, recalibrating our approach to ensure cost-efficiency and value delivery.

Taking the Helm: Practical Steps for CIOs

The current inflation-driven wave can’t be ridden out by simply bailing water. Instead, it requires us to take decisive action and steer the ship in a new direction. Below are some of the key steps that CIOs and IT managers can consider.

Committing to Clearing Technical Debt

In the world of IT, technical debt can accumulate much like financial debt in the real world. It is the cost that companies pay for short-term technological fixes that, over time, require an increasing amount of work just to keep the systems running. When unaddressed, it can lead to increased costs, inefficiencies and ultimately reduced agility and innovation.

Today, more than ever, we need to start chipping away at these debts. In this challenging economic environment, the cost of servicing this debt becomes even more burdensome. Paying down technical debt isn’t an easy task – it requires a well-thought-out plan, which might involve revising outdated code, rearchitecting inefficient systems, or even investing in new technologies. However, the benefit lies in streamlined processes, reduced costs and increased operational efficiency, all crucial in the inflation-impacted business climate.

Right-Sizing Staffing: A Delicate Dance

In an inflation-driven world, staffing becomes a high-wire act. The goal here isn’t merely about finding the balance between overstaffing and understaffing, but about making strategic decisions on how to most efficiently deploy human resources.

Firstly, consider which skills are most needed for your department’s strategic initiatives and day-to-day operations. Are these skills available in-house, or do you need to recruit? Then, evaluate the cost-benefit of full-time employees, contract workers, outsourced teams, and automation solutions. Implementing automation for repetitive tasks, for example, can not only cut costs but also free up your talented IT professionals to focus on more value-added activities.

The labor shortages in the IT industry only amplify the need for a thoughtful and strategic approach to staffing. By right-sizing your team, you can maximize output while keeping costs under control.

For more on this, please see the earlier blog post, Winning The War For Talent With IT Service Cost Management.

Adopting a Mature IT Budgeting Approach

Now more than ever, a mature and nimble IT budgeting process is crucial. Traditionally, IT budgets have been a once-a-year event, often rigid and slow to respond to changing business needs. However, the current economic climate calls for a more agile approach.

Incorporate frequent budget reviews, allowing adjustments in response to changing business conditions and IT demands. Cultivate transparency and communication about the budget within your team and across departments. Moreover, every line item on the budget should clearly tie back to the value it delivers. This means moving beyond the cost-center mindset and communicating IT’s contribution to business goals.

Regular Benchmarking: Keeping a Finger on the Pulse

Regular benchmarking of your IT costs against industry standards is a critical part of maintaining cost efficiency. It allows you to identify areas where costs may have inflated beyond the norm and provides a basis for understanding whether your spending aligns with the value you’re providing.

A good sailor knows the importance of regular checks on the ship’s position. In the world of IT, benchmarking plays the same role: a navigational beacon that helps you chart the course towards cost efficiency and maximum value delivery.

At its core, benchmarking is a method of comparing your costs, processes and performance metrics to those of other businesses, for example the industry leaders or direct competitors. But it’s not just about numbers. It’s about understanding what the best practices are, what strategies are yielding results and how you can adapt these insights to your own organization’s context.

The fast-paced and dynamic nature of the IT sector makes regular benchmarking a necessity. It’s not enough to benchmark once and then forget about it. IT costs, influenced by factors such as new technological developments, market competition and regulatory changes, can fluctuate. Regular benchmarking ensures you’re steering your ship by current coordinates.

While cost is a significant element in benchmarking, it’s essential to remember that it’s not only about finding the cheapest way to do things. The ultimate goal is to maximize the value your IT department delivers. This means benchmarking should also cover aspects like service quality, process efficiency and innovation capability. This comprehensive approach provides a fuller picture, guiding the effective allocation of resources.
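As a concrete illustration of cost benchmarking, here is a minimal sketch in Python. The metric names, dollar figures and the 10% tolerance band are all invented for illustration; real benchmarking uses curated peer data and far richer normalization.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    metric: str
    own_cost: float
    peer_median: float
    variance_pct: float  # positive = above the peer median
    flag: str            # "above", "in line", "below"

def benchmark(own_costs: dict, peer_medians: dict, tolerance_pct: float = 10.0) -> list:
    """Compare per-unit IT costs against peer medians and flag outliers."""
    results = []
    for metric, own in sorted(own_costs.items()):
        peer = peer_medians[metric]
        variance = (own - peer) / peer * 100
        if variance > tolerance_pct:
            flag = "above"
        elif variance < -tolerance_pct:
            flag = "below"
        else:
            flag = "in line"
        results.append(BenchmarkResult(metric, own, peer, round(variance, 1), flag))
    return results

# Illustrative annual unit costs in dollars (not real benchmark data):
own = {"end-user support": 620.0, "storage per TB": 480.0, "service desk ticket": 21.0}
peers = {"end-user support": 540.0, "storage per TB": 500.0, "service desk ticket": 17.0}
for r in benchmark(own, peers):
    print(f"{r.metric}: {r.variance_pct:+.1f}% vs peers -> {r.flag}")
```

A flag of “above” is a prompt for investigation, not an automatic cut: as noted above, the variance must be weighed against service quality and the value delivered.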

Benchmarking in Practice

Benchmarking can take different forms, each offering unique insights. Cost benchmarking allows you to identify the cost level of your IT department. Price benchmarking helps you understand how competitive and healthy your key contracts are. Functional benchmarking compares your operations with those of industry leaders, even from different sectors.

Moreover, strategic benchmarking allows you to examine how other organizations achieve their business success. It’s about analyzing the big-picture strategies and the long-term vision. Given the integral role IT plays in business success, strategic benchmarking can offer invaluable insights.

Embracing benchmarking requires a certain mindset. It’s about acknowledging that there are lessons to be learned from others, about being open to change and about striving for continuous improvement. Developing this mindset within your team and promoting a culture of learning can truly harness the power of benchmarking.

In conclusion, benchmarking, when done regularly and comprehensively, provides a realistic and fact-based perspective on your IT costs and performance. It’s an essential tool in your arsenal to navigate the inflation-induced wave, keeping your IT department not just afloat, but sailing smoothly towards its destination.

Navigating the Waters Ahead

These strategies are not just about surviving the wave of inflation. They are about adapting to new realities, steering the ship in a new direction and ultimately coming out stronger on the other side. Yes, cost-efficiency is critical, but let’s not forget the value that IT brings to the table. The role of IT leaders now is not only to control the costs but also to emphasize and enhance this value. Embrace the challenge and navigate the seas of change with confidence and foresight.

Interested to learn how the cost efficiencies of your internal technology services stack up against peer organizations? Visit our website for more information on our IT benchmarking service, IT Service Cost Management.

Over the past few years, a growing number of organizations around the world have made bold pledges – and set specific targets – to achieve environmental and social sustainability goals. However, many organizations continue to struggle to make progress toward these goals, often because they lack a clear technology strategy aligned with their corporate sustainability missions.

For most organizations, strategies for achieving these goals are typically driven by the Board of Directors and/or C-suite. The challenges with this approach are:

  • Effectively communicating the importance of sustainability to the business
  • Explaining how the strategy will be executed
  • Defining what role teams and individuals must play in helping achieve goals

This is particularly true in IT, where a lack of communication and guidance from executive management on the role of IT, and IT technologies, can slow progress. While a variety of personas and functional leads are responsible for contributing to corporate sustainability goals, IT must work cross-functionally to support its own objectives while also supporting the technology needs of departmental and corporate leads. When it comes to supporting corporate sustainability initiatives, IT has two primary responsibilities:

  • Reducing the sustainability impact of its own IT infrastructure
  • Leveraging technology solutions that allow the organization to visualize and improve performance

Sustainable IT Infrastructure

In many organizations, IT accounts for a sizable portion of overall carbon emissions. For companies that have made aggressive commitments to reducing greenhouse gas emissions, IT will be an obvious area of priority.

Over the past few years, digital transformation has been a recurring theme in IT as organizations increasingly rely on digital technologies to run their business. As organizations move beyond digital transformation initiatives and look for growth built on digital-first strategies, they will need to focus on purposeful long-term goals like sustainability.

More organizations are starting to view their IT strategy through the lens of sustainability. We are seeing an increasing number of requests for proposal (RFPs) with specific sustainability requirements for areas such as energy efficiency and carbon emissions. Companies are also looking at the full lifecycle of their IT assets and embedding sustainability data into asset lifecycle management. This allows leadership to make informed decisions about asset utilization, asset maintenance and repair, and end-of-life reuse and recycling.

For their part, IT vendors recognize the importance that customers are placing on sustainability and are incorporating it into their solutions portfolios. Across the IT landscape, IT vendors are developing more energy efficient infrastructure and designing and manufacturing equipment for recycle/reuse. Cloud service providers, meanwhile, are increasing their use of renewable energy sources, while data center operators are driving energy efficiency through better resource utilization and cooling solutions.

Driving Improvement Through IT Sustainability Solutions

IT will also play an important role in supporting corporate sustainability missions by leveraging existing technologies and investing in new solutions that can help organizations track, report, manage, and improve sustainability performance.

Gaining access to the data needed to effectively manage sustainability performance is essential for establishing performance baselines and devising strategies for achieving future goals and targets. However, this can be challenging, as sustainability data typically resides in different repositories. These repositories can be scattered throughout the organization and fall under the control of different functional leads.

Without visibility into sustainability data, the ability to effectively report on milestones and metrics for compliance purposes and meet established goals is compromised. IT needs to aggregate internal data and provide platforms for sharing data across the organization.

In the software area alone, there has been an explosion of sustainability solutions that give organizations greater visibility and awareness of performance.

IDC’s ESG Perception Survey revealed that most organizations are using multiple tools to manage sustainability, with nearly three-quarters of survey respondents citing data and performance management as the key features they are using.

IDC expects to see increased spending in software solutions for sustainability performance management as organizations look for greater observability of their impact across the organization, as well as the partner/supplier ecosystem.

It should be noted that greater awareness of an organization’s sustainability footprint has led forward-looking organizations to use sustainability as a lever for driving innovations in areas such as supply chain, distribution/shipping, and manufacturing processes. IDC believes that the shift from compliance-driven to business value-driven sustainability initiatives is taking place. Technology will play an even greater role in helping organizations identify the opportunities for leveraging sustainability to drive business innovation.

Conclusion

Technology will play a critically important role in helping organizations meet their sustainability targets and goals. Developing an IT strategy that aligns with corporate sustainability strategy is critical to identifying these technologies. New technologies will be needed to track performance, report progress to internal and external stakeholders, meet compliance and regulatory demands, and integrate sustainability data into existing business operations. Ultimately, greater visibility will allow organizations to expand from compliance-driven strategies to strategies that are focused on leveraging sustainability to drive business value.

For more insight and information on trends and market dynamics driving technology purchases for sustainability, please see the IDC eBook entitled “Driving Business Value Through Sustainable Transformation”.

The communications-platform-as-a-service (CPaaS) segment is in transition. Over the past few years, companies in this segment benefited from a tsunami of growth driven by the demand and subsequent adoption of digital customer engagement platforms.

However, the economic and social environment has changed, and companies are no longer focused on impressive growth metrics alone. CPaaS companies thrived on the promise of years of 30%+ annual growth, attracting investment and hordes of new entrants. Today, there is a muted atmosphere born of investor wariness and disappointment fueled by overambitious growth projections.

Despite the setbacks of slowing sales cycles, restructuring, and downsizing, the industry is still one of the strongest IT sectors, with attainable double-digit and profitable growth in reach of many companies. IDC forecasts the worldwide CPaaS market to grow from $14.3 billion in 2022 to $29.7 billion in 2026. CPaaS will continue to grow at a rapid pace (15.8% compound annual growth rate, or CAGR, for 2022-2027) as many enterprises embrace cloud-enabled communication API solutions and services that help them easily and affordably increase customer engagement and improve operational efficiencies.
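As a quick sanity check on forecast figures like these, the standard CAGR formula can be applied to any pair of endpoints. Note that the 15.8% CAGR quoted above covers the 2022-2027 horizon, whose 2027 end value is not given here; the 2022-2026 endpoints imply a steeper rate over that shorter window.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that turns
    start_value into end_value over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# The quoted 2022 ($14.3B) and 2026 ($29.7B) endpoints span four years:
print(f"{cagr(14.3, 29.7, 4):.1%}")  # prints "20.0%"
```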

Customer Experience

The COVID-19 pandemic that started in 2020 accelerated the shift by companies to digital infrastructure and the use of omnichannel digital customer engagement. As users become more demanding, communications must be omnichannel, interactive and enriched to provide personal, intelligent, and customized engagement. According to IDC research, spending on customer experience and digital engagement channels will be a key driver of IT spending over the next few years and will also be relatively immune to budget cuts due to adverse economic conditions.

While many companies are well along in their digital transformation journey, refining and perfecting customer engagement is still a complex process. Leveraging CPaaS platforms reduces the complexity of creating customized differentiated applications, especially with the introduction of low-code and no-code tools and unified APIs for omnichannel engagement. As such, customer experience and digital engagement capabilities remain a top priority.

According to IDC survey data, customer experience will remain a key driver of digital infrastructure over the next few years.

Enterprises also recognize the need to consolidate cloud communications platforms, including seamless integration of CPaaS, UCaaS and CCaaS. This will also enable companies to rationalize spending on multiple platforms, while improving productivity for employees.

Multichannel and Use Case Focus

Application-to-Person (A2P) messaging has historically been dominated by SMS, a regulated channel and an effective way to reach a wide audience. But as mobile channels become more important for brands and enterprises, new channels better suited to interactive engagement have proliferated in the A2P ecosystem, including chat apps, social media apps, WebRTC voice and video, as well as RCS and iChat.

IDC identifies six key service feature segments within the enterprise CPaaS market: voice, messaging, video, email, other APIs, and miscellaneous services. Messaging (which includes SMS as well as OTT messaging) is the largest segment and will remain so in the coming years. Voice is the second-largest service, while video is the fastest growing, driven by use cases in manufacturing, banking/insurance, and healthcare.

IDC’s enterprise communications surveys, conducted yearly across the various regions, provide insights into the adoption, drivers, and challenges of a wide range of ICT solutions and services spanning network, mobility, UCC and CPaaS. They give a view of the most used channels, use cases and deployments, as well as the criteria enterprises apply when selecting a CPaaS provider.

The European Enterprise Communications Survey, 2022: Attitudes Toward Communications Platform as a Service is one of these yearly surveys, providing insight into CPaaS adoption trends in Europe. The 2023 survey results are expected to be published in June. Another insightful CPaaS-focused survey report is the IDC CPaaS Developers Survey: 2022, published in March 2023. This survey provides a high-level overview of the applications developers are creating on CPaaS platforms, as well as usage preferences in key markets such as Australia, Brazil, India, Singapore, the United Kingdom, and the United States.

Industry Dynamics

IDC assessed 23 CPaaS providers for the 2023 Worldwide CPaaS MarketScape study. This segment is entering a new phase. The market has become saturated with a diverse array of companies, including pure-play CPaaS providers, IT companies, network service providers, software providers and others. While the market is dominated by CPaaS specialists such as Twilio, Infobip, Sinch and MessageBird, companies that provide CPaaS as a complementary service or integrated with other services will become increasingly common.

CPaaS providers are ideally suited to meet the requirements of companies to simplify, automate, and amplify customer experience excellence. The addressable market is expanding, driven by new tools and the march of technology that is opening up new possibilities for companies in this segment.

Advice for the Buyer’s Market

The following is a list of key attributes and factors for enterprises to consider in choosing a CPaaS partner:

  • Automation and AI-driven personalization capabilities: The ideal partner should demonstrate the ability to reduce complexity, while integrating a diverse range of applications and platforms to produce improved business outcomes including reduced marketing and operational costs.
  • Unified and conversational engagement capabilities: These include the ability to put customer channel preferences as the priority and provide channel choices depending on regional or regulatory and compliance requirements.
  • Enhanced tools and capabilities: While a diverse range of application programming interfaces (APIs) is important, developers should consider adjacent expertise such as low-code tools, integrated CCaaS even if it’s a minimal IVR, SaaS tools for agile and flexible application deployment, and a customer data platform (CDP), whether in-house or via third-party integration.
  • Platform reliability and carrier integration: Yes, CPaaS is primarily software driven, but it also relies on efficient direct connectivity with network operators. The ability to provide cost-effective global routes with high SLAs, and the expertise to ensure secure platforms, is crucial to business continuity. Seek a proven track record, but retain a backup secondary provider in the event of the inevitable security breach.
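The channel-preference criterion above can be sketched as simple selection logic: honor the customer’s stated preference order, constrained by what regulation permits in their region and what the provider supports, with SMS as the fallback. This is a hypothetical illustration, not any vendor’s API; the function, channel names and regional allow-lists are all invented, and real regulatory rules are far more nuanced.

```python
def pick_channel(preferences, supported, allowed_by_region, region, fallback="sms"):
    """Return the first customer-preferred channel that the provider supports
    and that is permitted in the customer's region; otherwise fall back
    (SMS is assumed to be universally deliverable here)."""
    permitted = allowed_by_region.get(region, set())
    for channel in preferences:
        if channel in supported and channel in permitted:
            return channel
    return fallback

supported = {"sms", "whatsapp", "rcs", "email"}
allowed = {
    "EU": {"sms", "whatsapp", "email"},  # illustrative policy, not real regulation
    "US": {"sms", "rcs", "email"},
}

print(pick_channel(["whatsapp", "rcs", "sms"], supported, allowed, "EU"))  # whatsapp
print(pick_channel(["whatsapp", "rcs", "sms"], supported, allowed, "US"))  # rcs
```

The same customer preference list resolves to different channels per region, which is exactly the regional/compliance sensitivity the bullet describes.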

The IDC MarketScape: Worldwide Communications Platform as a Service 2023 Vendor Assessment is IDC’s most ambitious study of the CPaaS segment to date, with assessments of companies across the geographic and strategic spectrum. It represents a new chapter in the evolution of the industry, and one that shows how CPaaS providers are taking on the dual challenges of meeting shifting enterprise requirements and the demands of investors.

Melissa Fremeijer-Holtz - Senior Research Manager, European Enterprise Infrastructure and Communications - IDC

Melissa Holtz (Fremeijer) is a senior research manager in IDC's European Enterprise Infrastructure and Communications group and is based in Amsterdam. As one of the lead analysts for the European Enterprise Communication Services research program, she focuses on the European enterprise managed UCC and communications-platform-as-a-service (CPaaS) market. She is also responsible for IDC's managed edge and content delivery services for the European region. She is a regular speaker at client and IDC events and is frequently quoted in the press.

You often see it on television: programs about people who are struggling financially. They run out of money at the end of the month, they can’t sell their house, they have a problematic debt burden, and so on. A common denominator is often the lack of insight into their own situation, and while coming up with ways to save money may not be very difficult, actually implementing and sticking to them is much harder.

I mean, it’s easy for an outsider to suggest that someone should get rid of their dog, but if that pet is their only source of comfort, it will take some effort.

The same goes for cloud costs: saving money is easier said than done. There are all sorts of great tools available from both cloud providers and third parties to help you understand your costs.

These tools provide various reports and dashboards, and even recommendations on which instances to remove or resize (rightsizing). With the right knowledge, you can also determine how to use discount options (reserved instances, savings plans, reserved capacity, etc.), how to manage licenses intelligently, and what you can do in your application architecture to save costs. And, of course, you can always turn off instances when you’re not using them.

All of this insight is great, but then comes the second part. Just as people have a hard time saying goodbye to their pets, users and administrators have a hard time shedding their old habits and ways of thinking. And that’s something cloud providers never talk about.

For example, consider turning off instances outside of working hours. In theory, this is an excellent way to save money, but instances are part of applications, which in turn are part of chains. It can happen that data exchange takes place in a chain outside of working hours.

Testing teams that are under a deadline may also need their environment outside of the predetermined working hours. And if environments are used in the management chain, they must also be available after working hours in case of an emergency. So savings are theoretically simple, but practice is more complicated. It can be done, but it takes a lot of effort.

Rightsizing is also less straightforward than it seems. Users and administrators are often hesitant to remove capacity: users see their performance decrease, and administrators see the risk of more outages because there is less excess capacity to handle issues. In the latter case, you need to analyze where these issues are coming from: a poor application can benefit from more capacity, but that is not a long-term solution.

If the roof is leaking, you can replace the bucket you use to catch the water with a mortar tub, but even that will eventually fill up. Ultimately, you’ll have to repair the roof.
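A first-pass rightsizing screen often amounts to little more than flagging instances whose observed peak utilization stays well below provisioned capacity; a minimal sketch (the utilization figures and the 40% threshold are illustrative only):

```python
# Flag instances whose peak CPU utilization suggests they are oversized.
# The utilization figures and the 40% threshold are illustrative only;
# real rightsizing must also weigh memory, I/O and the team's risk appetite.

def rightsizing_candidates(peak_cpu_by_instance, threshold=0.40):
    """Return instances whose observed peak CPU stays under the threshold."""
    return sorted(
        name for name, peak in peak_cpu_by_instance.items() if peak < threshold
    )

observed_peaks = {"web-1": 0.82, "web-2": 0.35, "report-gen": 0.12}
print(rightsizing_candidates(observed_peaks))
```

The hard part, as argued above, is not producing this list but getting users and administrators comfortable with acting on it.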

So, objections can be raised for all types of savings. Eventually, you’ll need to adopt an approach that not only makes costs visible but also involves users and administrators, and leads to the right considerations on where to save on your cloud costs and where not to.

Don’t know where to start? Can’t figure it out quickly enough? IDC Metri has helped several organizations get started. Our specialists can help kickstart your cost-saving efforts in the cloud. Because understanding costs is one thing, but it’s only useful if they actually decrease.


Want to learn more? Subscribe to IDC Metri’s monthly newsletter full of actionable insights on IT benchmarking, intelligence, sourcing and more.

In times of economic uncertainty, businesses tend to become more cautious and hesitant with buying decisions. This presents a unique opportunity, however, for technology vendors to demonstrate their value as catalysts of growth. By providing a credible economic impact model, tech vendors can offer a clear and data-driven analysis of their impact on the social, economic, and environmental aspects of their business, at a global, regional, or country level. This can help accelerate decision-making processes, and ultimately drive opportunity. Moreover, as consumers and businesses become increasingly aware of their impact on society and the environment, an economic and sustainability impact assessment can be a necessity for doing business in the future. By demonstrating how they positively contribute to the local economy and environment, tech vendors can differentiate themselves from their competitors and attract customers who prioritize sustainability and social responsibility.

When do you need an Economic and Sustainability Impact Study?

An economic and sustainability impact analysis is an important tool when a technology provider wants, or needs, to evaluate the impact of its business on the economy. It provides credible scenarios based on third-party data and research, with a deep understanding of both technology and economic impacts, to demonstrate overall value. It is particularly useful when technology vendors need to show that their investment in a region, or in a technology, creates spinoff economic and social impacts.

Key Reasons for Creating an Economic Impact Study

Marketing Executives:

  1. To build brand equity with governments
  2. To attract and increase media attention
  3. To create trusted content that demonstrates thought leadership

Partner Marketing Professionals:

  1. To demonstrate the opportunity their technology provides customers and partners to generate revenue
  2. To attract and retain partners to their ecosystem and deepen their share of wallet

Sustainability Executives:

  1. To show that their company is a catalyst for good
  2. To create awareness that their company is driving growth in a sustainable way, through measurable results

Why is an Economic Impact Study a differentiation tool?

Because the study is created by a third-party research firm with a deep understanding of technology and industry verticals, it provides a credible, and therefore trusted, thought leadership content tool, one that demonstrates a vendor's overall impact as a catalyst for sustainable economic growth.

Leading subject matter experts author the study’s findings and can quantify exactly how your company will provide growth in three key areas:

  1. Economic impact
    • Specifies increase to GDP
    • Quantifies job growth
  2. Ecosystem impact
    • Driving ecosystem opportunity
    • Accelerating partner value
  3. Sustainability impact
    • Measured reduction in greenhouse gas emissions
    • Investment in social diversity

What is the process involved in building an Economic Impact Study?

To estimate the overall economic impact of a technology provider, IDC utilizes a standard analytical framework, an Economic Impact Analysis, which leverages an input-output (I/O) model.

Standard economic impact analysis evaluates three types of economic and social impact (on GDP and job growth), as well as other impacts (such as taxation):

  1. Direct: the effect on the direct supply chain for the solution
  2. Indirect: the effect on the supply chain and customers indirectly related to the solution
  3. Induced: secondary effects not directly related to the solution, such as the ripple effect on jobs and revenues generated in the wider economy by the economic stimulus
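In its simplest form, such an analysis applies multipliers to the directly attributable impact to estimate the indirect and induced ripples; a toy sketch (the multipliers are placeholders, not real I/O coefficients, which a study would derive from national input-output tables):

```python
# Simplified economic impact estimate in the spirit of an input-output model.
# The multipliers are illustrative placeholders; a real study derives them
# from national input-output tables for the relevant sectors.

def total_economic_impact(direct_impact, indirect_multiplier, induced_multiplier):
    """Direct impact plus its indirect (supply chain) and induced (spending) ripples."""
    indirect = direct_impact * indirect_multiplier
    induced = (direct_impact + indirect) * induced_multiplier
    return {
        "direct": direct_impact,
        "indirect": round(indirect, 1),
        "induced": round(induced, 1),
        "total": round(direct_impact + indirect + induced, 1),
    }

# e.g., $100M of direct impact with hypothetical multipliers of 0.6 and 0.3
print(total_economic_impact(100.0, 0.6, 0.3))
```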

How can you use an Economic Impact Study?

An economic impact study is a powerful tool that provides clear, quantified proof of your thought leadership. Used in marketing content strategies, it positions your technology as a force for good, for growth and for innovation.

  1. As a PR tool to generate increased media exposure and coverage
  2. In recruitment campaigns to attract and retain sustainability conscious talent
  3. In marketing and business outreach, to show the business value (direct investment and infrastructure, contribution to GDP and employment, and tax revenues) and support the global growth of the business in new geographies

IDC has been producing Economic Impact Models for more than 20 years. Our Macroeconomic Center of Excellence delivers credible, defensible assessments. Our technology research is fueled by more than 1,300 of the world's leading analysts, who create unbiased, data-driven research. Learn more about IDC's Economic Impact Model and thought leadership content solutions.

I was born in Ravenna, on the east coast of Emilia-Romagna, one of the most liveable and prosperous regions in Italy. Emilia-Romagna is home to 7.3% of the Italian population. It accounts for 9.2% of GDP and 11.8% of agricultural production.

It is home to the headquarters of globally successful firms in automotive, motorbikes, food production, ceramic tiles, textile and fashion, biomedical engineering, construction, woodworking equipment and much more. Unemployment is at 5.1%, well below the 2022 national average of 8.2%. Life expectancy is higher than the national average.

There are white sandy beaches, natural reserves in coastal wetlands, and beautiful hills and mountains, which combined with a rich heritage — Ravenna alone boasts eight UNESCO heritage sites — and amazing food and wine attract tens of millions of tourists every year.

Besides these material treasures, there is a unique way of living in Emilia-Romagna. And even more so in Romagna, where I grew up; there’s an old saying that you can tell if you are in the Romagna part of the region because when a stranger shows up at someone’s door, they are welcomed with a smile and a glass of wine. On the Emilia side, they’ll be equally warmly welcomed, but with a glass of water!

There is a sense of shared joy, a passion for life and a pride in belonging to one's community. A shared sense of resilience drives people through the hardships of life with a smile on their face, always trying to put a smile on someone else's. Because there is always a little bit of magic, even in the small things.

As Federico Fellini, the world-famous movie director and one of the most beloved children of our region, once said: “Life is a combination of magic and pasta.”

It feels good to be a Romagnolo. And to visit Romagna … unless you happened to be there in the first two weeks of May 2023.

Smart River and Water Management: Preparing for Foreseeable Disasters

After many months of drought, in the first 17 days of May 2023, Romagna was hit by as much rain as it usually gets in six months. In some areas this meant up to 400mm of rain in two weeks. To put things in perspective, one of the worst hit municipalities, Faenza, which is home to 60,000 people, experiences on average 760mm of rain a year.

Stereotypically rainy London gets just 690mm a year. The result of this unusually heavy rain was that 23 rivers burst their banks, causing 50 floods; 305 landslides devastated hills and mountains, 14 people died and over 36,000 people were displaced from their homes. The estimated economic damage to homes, factories, farms and public infrastructure is north of €5 billion, with around €600 million needed just to rebuild public infrastructure.

Climate change is increasing the frequency and intensity of these extreme weather events. Long-term environmental sustainability actions, which are progressing way too slowly, will not be enough.

Resilience to short-term shocks is imperative. Money is not the problem; in fact, there is an estimated €8 billion available from the Italian COVID Recovery and Resilience Plan and the “Italia Sicura” (Safe Italy) plan to make public infrastructure more resilient. This, however, is at risk of not being spent, or not spent well, because of lack of planning, skill gaps, slow public procurement, and insufficient competencies and capacity to audit.

Technology innovation is not a silver bullet, but when implemented wisely it can help fill some of those gaps. The increasing availability and granularity of data from satellite images, IoT sensors, weather monitoring and forecasting models already tell us that Italy has the highest amount of rain in Europe, with 300 billion cubic meters a year.

Building permitting systems, public works inspection systems and other sources tell us that Emilia-Romagna was the fourth worst region in terms of soil consumption in Italy in 2021, including in areas at high risk of flooding. By building on the existing knowledge, collecting more data and turning the data into intelligent smart river and water management insights, governments, water utilities and the public could make better decisions across the disaster resilience life cycle, from mitigation to preparedness, from response to recovery.

  • Mitigation: Governments can use a wide variety of tools to develop hazard maps that can identify areas most at risk and feed into planning and preparedness systems. Policymakers and building inspectors can feed intelligent insights into planning and operational simulation tools, such as digital twins, to simulate the impact of building code and permitting decisions to reduce soil consumption and require the use of more resilient building techniques and materials.
  • Preparedness: The benefits of building flood resilient systems (dams, levees, flood walls and diversion canals, etc.) to protect natural systems such as wetland, marshes and beaches, and using resilient building techniques such as tiled pavements instead of concrete for parking lots and roads to increase water absorption, can be augmented by making these assets and tools intelligent. The intelligence from those systems can enable real-time or preventive decisions about diversion tactics, rather than reacting only when the flood is too close.
  • Response: Real-time data from weather forecasting models, integrated with data from dam and river sensors, should be analysed to detect anomalies and automatically raise emergency alerts that promptly notify citizens. This beats relying on fire and police patrols roaming the roads of small rural villages and towns with loudspeakers to tell citizens to evacuate their homes, or expecting mayors to post videos on social media and hoping everybody pays attention, as happened in the past two weeks in Romagna. More intelligent use of data can also provide insights for command-and-control personnel to coordinate first responders and orchestrate the supply of food, clothes and medicine for shelters, instead of relying on emails, spreadsheets and phone calls.
  • Recovery: Digital twins would enable evidence-based infrastructure planning decisions and allow the progress of rebuilding investments to be monitored, increasing the speed and transparency of projects and avoiding wasted time and money. AR/VR tools can help engineers conduct inspections when anomalies are detected.
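The automated alerting described under Response can start from something as simple as a rolling statistical check on river-level sensor readings; a minimal sketch (the readings and the 3-sigma threshold are illustrative, and a real system would fuse rainfall forecasts and dam telemetry):

```python
# Minimal sketch of sensor-based flood alerting: flag readings that deviate
# sharply from the recent baseline. Readings and the 3-sigma threshold are
# illustrative only.
from statistics import mean, stdev

def water_level_alerts(readings_cm, window=6, z_threshold=3.0):
    """Return indices of readings that are anomalously high vs. the trailing window."""
    alerts = []
    for i in range(window, len(readings_cm)):
        baseline = readings_cm[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings_cm[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

levels = [120, 122, 121, 123, 122, 124, 121, 123, 190, 240]  # sudden surge
print(water_level_alerts(levels))
```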

The same technology infrastructure — with a few additions in terms of sensors and applications — will provide intelligent insights for other use cases, such as water conservation in dry seasons, leakage reduction, biodiversity protection in rivers, marshes and ports, sustainable water transportation, and water quality.

Only two days after the peak of the emergency, millions of euros, as well as food, clothing and other supplies, had been donated to flooded areas in Emilia-Romagna from all over Italy and beyond. Boosted by the typical Romagnolo spirit, spontaneous neighbourhood efforts have mushroomed to clean mud from houses, roads and farms. Beaches have already been cleaned for the upcoming tourist season. But that resolve to recover quickly should not allow us to forget what happened. We know what the future holds. Extreme weather events will happen, not only in well-known high-risk flooding areas, such as the Indian Subcontinent, Southeast Asia, and Pacific and Caribbean Islands, but also in traditionally safer regions of the world.

Technology innovation will be critical to climate change resilience. But technology alone will not be enough. It’s not enough to feel compassion to help when disaster happens. We need to invest in mitigation and preparedness measures that generate the highest long-term returns.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.

AI Act: How Did We Get Here and Where Are We Now?

In April 2021, the European Commission submitted a detailed proposal of its plan to regulate artificial intelligence development and use in Europe: the AI Act. The AI Act's goal is to ensure that the development and deployment of AI systems in Europe are safe, transparent and compliant with the EU's fundamental rights and values ― protecting the public, while still fostering innovation.

The Council adopted a "general approach" on a set of harmonized rules on artificial intelligence in late 2022, but the rapid progress of the technology, together with the sudden wave of innovation in generative AI systems, delayed the final discussion of the legislation as new amendments to cover the latest developments were explored. On May 11, the European Parliament committees approved the AI Act with a large majority, in a vote that paves the way to the plenary vote in mid-June (June 14 as a tentative date).

Let’s now look at the main principles of the proposed regulation and how it will impact the AI market in the region.

Regulating the Development and Deployment of AI in the EU ― Key Aspects of the AI Act

The proposal identifies three (+1) risk categories for AI applications and applies different restrictions and obligations on system providers and users, depending on the category of the application in question:

  • Unacceptable risk: applications that involve subliminal or exploitative practices, or social scoring by public authorities. Such applications will be banned.
  • High risk: applications related to education, healthcare and employment, such as CV scanning and the ranking of job applicants, will be subject to specific legal requirements (e.g., ensuring the transparency and safety of the systems and complying with the Commission's mandatory conformity requirements). Providers of "high-risk" systems will be obliged to establish quality management systems, keep up-to-date technical documentation, undergo conformity assessments (and re-assessments) of the systems, conduct post-market monitoring, and collaborate with market surveillance authorities.
  • Limited risk: this mostly includes AI systems such as chatbots that will be subject to specific transparency obligations (e.g., disclosing that interactions are performed by a machine, so that users can make informed decisions).
  • Minimal risk: applications that are neither listed as risky nor explicitly banned are left largely unregulated (e.g., AI-enabled video games). Currently, this category covers the majority of AI systems used in the EU.
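The four-tier scheme lends itself to a simple lookup from a use case's category to the headline obligations that apply; the mapping below merely paraphrases the tiers above for illustration and is not legal guidance:

```python
# Toy mapping of the AI Act's proposed risk tiers to headline obligations.
# This paraphrases the categories sketched above for illustration only;
# the actual legal text is far more detailed and still subject to change.

RISK_TIERS = {
    "unacceptable": "banned (e.g., social scoring by public authorities)",
    "high": "conformity assessment, quality management, post-market monitoring",
    "limited": "transparency obligations (e.g., disclose chatbot interactions)",
    "minimal": "largely unregulated (e.g., AI-enabled video games)",
}

def obligations_for(use_case_tier):
    """Look up headline obligations for a risk tier; unknown tiers raise."""
    try:
        return RISK_TIERS[use_case_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {use_case_tier!r}")

print(obligations_for("limited"))
```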

How Will the AI Act Affect the European AI Landscape?

The introduction of the European AI Act has sparked discussions on its potential impact on the adoption of AI technologies. Will this regulation hinder AI innovation in Europe? The answer is not straightforward, as it depends on various factors and the evolving landscape.

AI regulation may impose compliance costs, administrative burdens, and legal uncertainty on businesses and developers. Extensive testing, validation, and monitoring of AI systems may become necessary, which can be time-consuming and expensive. There might also be limitations on the types of applications, industries, data, or algorithms used in AI systems.

However, when assessing the direct impact on AI use cases falling under the regulated risk categories, the outcome is not overwhelmingly negative. When we at IDC built a data model to determine which, and how many, AI use cases would be directly affected (those falling into the risk categories listed above), the share was modest, and we did not find the impact, measured as possible lost revenue, to be worrying.

The compliance costs and administrative burdens could be challenging for SMEs and startups, though, which may inhibit competition in Europe if larger, more established providers find it easier to comply.

Industries like healthcare, public administration or finance are likely to face more stringent requirements due to their potential impact on human life and safety. Transparency, explainability, human oversight, and restrictions on the use of, for example, biometric identification technologies are some of the obligations that might be imposed. While these requirements may limit certain applications, they also aim to protect privacy and individual rights. However, it’s important to note that this regulation offers a list of exemptions, so if you are a provider for national security interests, you may not need to worry about that too much.

On the positive side, regulation has the potential to enhance wider trust and confidence in AI systems. This is crucial in countering overhyped, pop-culture-fed media narratives of AI as a threat. A trusted regulatory framework reduces legal uncertainty and creates a level playing field for businesses, public institutions, consumers and citizens. Wisely designed laws will improve the quality and safety of AI systems and, first and foremost, safeguard individuals.

The AI Act aims to encourage AI technologies that align with ethical and societal values that the EU strongly supports, such as transparency, accountability, and human-centricity. It wants to stimulate research and development in these areas and promote collaboration and openness among organizations and regions. By establishing common standards and best practices, the EU facilitates knowledge exchange and expertise sharing.

Conclusion

Looking at AI regulation through the lens of healthcare offers valuable insights. Healthcare regulations ensure safety, efficacy, and patient rights. They impose requirements on manufacturers to meet necessary standards. Similarly, AI regulations can ensure ethical and safe technology use while balancing innovation and protection.

While the potential impact of the European AI Act on AI adoption and innovation may present challenges, it also offers opportunities. By adhering to the regulatory framework, AI providers can navigate the landscape effectively, gain public trust, and promote responsible AI practices.

As the AI Act progresses, it is crucial to stay updated with the latest developments. At IDC, we will closely follow the progress of the AI Act and will continue publishing comprehensive research, providing deeper insights into its implications and potential impact as we approach the EU vote in June.


If you want to know more about this, please contact the team: Lapo Fioretti, Andrea Siviero, Neil Ward-Dutton or Ewa Zborowska

Lapo Fioretti - Senior Research Analyst - IDC

Lapo Fioretti is a senior research analyst in IDC's Digital Business Research Group, leading the European Emerging Technologies Strategies research. In his role, he advises ICT players on how European organizations leverage new technologies to create business value and achieve growth, and analyzes the development and impact of emerging trends on the markets. Fioretti also co-leads the IDC Worldwide MacroTech Research program, focused on the intertwined connection between the economic and digital worlds: analyzing the impact key macroeconomic factors have on the digital landscape and, vice versa, how technologies are affecting economies around the world.

If you are a CIO of an organization that has moved to agile, you may feel that you have lost some oversight over what is happening in the organization. As self-empowered teams work on the development of new functionality, product owners prioritize the work that needs to be done in upcoming sprints, turning the CIO role into more of a facilitator than a decision-making manager.

In theory there is nothing wrong with empowered team members. On the other hand, a certain degree of management is necessary to make sure all well-intentioned initiatives are aligned to organizational goals, including the company mission and vision. IDC studies, however, show that 96% of CIOs perceive a lack of visibility in software development teams[1].

To stay relevant in an ever-changing environment, organizations need to deliver value to their customers. They need to provide value to attract new customers, to retain current customers and to stay profitable. Producing that value falls largely to the organization's development teams.

Modern CIOs are struggling to find a balance between being a facilitator, trusting their people and being a manager.

A periodic assessment that identifies the high-performing and the low-performing teams is a crucial starting point for understanding where there is room for improvement: team performance optimization (TPO).

Team Performance

So, what is team performance? Teams need to provide as much value for the money as possible. The money is usually fixed: X people working Y hours per week cost Z dollars. How much value is produced is harder to answer, because value is not directly quantifiable: it is perceived differently by different individuals and can also vary over time.

It's widely accepted that functionality, when well prioritized, has a positive relationship with value. Therefore, more functionality delivered should mean more value delivered. And functionality is objectively measurable. ISO standards exist for functional size measurement, which means that it's possible to measure the functionality that different teams produce in an objective, repeatable, verifiable way. The functional size units produced are measured by technology that also measures the product quality of the application the team is working on. This technology measures the code against all relevant standards, such as ISO 25010, ISO 5055, CWE, OWASP and NIST. The result is a score on several health factors: Robustness, Security, Efficiency, Changeability, Transferability, and Total Quality.

Optimizing Team Performance

IDC Agile Value Management measures the functional size produced per sprint, release or other time period and collects some basic data, such as effort (hours spent), the cost of those hours, and defects logged. This enables us to define five key team performance metrics:

  • Productivity[2]: Effort hours spent per functional size unit
  • Cost Efficiency: Cost of effort hours spent per functional size unit
  • Delivery Speed: Functional size units developed per calendar month
  • Process Quality: Defects per functional size unit
  • Value: Functional size units delivered per $1000 spent
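Given basic sprint-level data, the five metrics reduce to simple ratios; a sketch with invented numbers (the field names are hypothetical, and real inputs come from ISO-standard functional sizing and the team's own time and defect tracking):

```python
# Compute the five team performance metrics from basic sprint data.
# All figures are invented for illustration; real measurements come from
# ISO-standard functional sizing and the team's time/defect tracking.

def team_metrics(size_units, effort_hours, cost_usd, calendar_months, defects):
    return {
        "productivity_hours_per_unit": round(effort_hours / size_units, 2),
        "cost_per_unit_usd": round(cost_usd / size_units, 2),
        "delivery_speed_units_per_month": round(size_units / calendar_months, 2),
        "defects_per_unit": round(defects / size_units, 3),
        "value_units_per_1000_usd": round(size_units / (cost_usd / 1000), 2),
    }

print(team_metrics(size_units=120, effort_hours=960, cost_usd=72_000,
                   calendar_months=3, defects=18))
```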

These metrics are purely based on the functionality produced, regardless of the technology used or other non-functional requirements. Because they are objective, we can benchmark them against our extensive database to compare a team's metrics to carefully selected peers (based on, for instance, technology, size, complexity, industry and country). This results in indices that show the relative performance of teams against these peers. For instance, a Productivity Index of 15% means 15% better productivity than the peer group. In this way it becomes possible to compare the performance of teams both against the industry and against each other, and to identify the best-performing teams and the teams that have room for improvement. The next figure shows this.
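For a lower-is-better metric such as effort hours per functional size unit, a peer index of this kind can be computed as the relative improvement over the peer group's figure; a sketch with hypothetical numbers:

```python
# Productivity index vs. a peer group, for a lower-is-better metric
# (effort hours per functional size unit). Numbers are hypothetical.

def productivity_index(team_hours_per_unit, peer_hours_per_unit):
    """Positive = better than peers, e.g. 0.15 means 15% better productivity."""
    return round((peer_hours_per_unit - team_hours_per_unit) / peer_hours_per_unit, 2)

# A team needing 8.5 h/unit against a peer-group figure of 10 h/unit:
print(productivity_index(8.5, 10.0))  # 0.15, i.e., 15% better than peers
```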

In this example, 12 teams are compared to each other showing their (trends in) productivity. The zero percent line indicates the industry average, and the dots represent the different measurements.

Visibility to Manage Value Creation Function

It's important to understand that this management information does not necessarily need to be shared with the teams, as they may feel it can be used as a stick to punish low performance. It should never be used to punish, only to understand and to improve team performance.

These insights are the starting point for actual management based on facts. Start with questions like: why is one team performing better than others? Do they use better practices, do they have a better requirements analysis process, better developers that understand the application better, fewer defects, etc.? What is the quality and risk level of the applications the teams are working on? IDC consultants help the teams improve, using best practices that have a proven positive effect on team performance.

As stated before, IDC studies show that 96% of CIOs perceive a lack of visibility in their software development teams. Using Agile Value Management it becomes possible to get an integral view of team performance and the quality of and risks in an application, providing necessary visibility for leaders to move out of being a facilitator only, and to actually manage.

When agile is not properly managed, the dollars spent per unit of value delivered can easily go through the roof while time to value stretches significantly.

Learn how IDC Metri helps our clients achieve 35 to 65% reduction in spending or reduced time to market in our eBook:

Sources

[1] IDC Perspective: CIO Guidance for Addressing the Lack of Visibility in Software Development Teams. Part of Future Enterprise Resiliency and Spending Survey (FERS Survey) wave 1, 798 respondents at CxO level.

[2] Productivity is universally defined as output divided by input. However, this results in a very small number with many digits. Therefore, the inverse is used, which is often referred to as Project Delivery Rate; we call it Productivity here because that term is easier to relate to.

Back in 2019, IDC’s security and trust team wrote about the potential of artificial intelligence (AI) in cybersecurity. At that time, the approach was to use AI to create analytics platforms that capture and replicate the tactics, techniques, and procedures of the finest security professionals and democratize the unstructured threat detection and remediation process. Using large volumes of structured and unstructured data, content analytics, information discovery, and analysis, as well as numerous other infrastructure technologies, AI-enabled security platforms use deep contextual data processing to answer questions, provide recommendations and direction, and hypothesize and formulate answers based on available evidence.

The goal at that time was – and still is – to augment the capabilities or enhance the efficiency of an organization’s most precious and scarce cybersecurity assets — cybersecurity professionals. The approach to development typically begins with the mundane and remedial and gradually graduates to increasingly complex use cases. Essentially, machine learning allows cybersecurity professionals to find the malicious “needle” in a haystack of data.

Use Cases for AI/ML Today

With the release of ChatGPT in November 2022, we are seeing increased excitement around all things AI and the application of AI technologies to enable secure outcomes. The greatest interest is in generative AI, but AI in security is hardly new. Machine learning, a form of AI that has been used in security for more than a decade, was used to generate malware signatures before algorithmic protections became the rage.

One long-standing use for machine learning has been user and entity behavior analytics (UEBA) to identify anomalous behaviors. This includes the configuration, applications, data flows, sign-ons, IP addresses accessed, and network flows of the devices in the environment. For example, does the device usually call out to another device? If not, an alert may be generated so an analyst can look into the unusual behavior.

Many vendors include UEBA as part of their security information and event management (SIEM) platforms, with alerting on the anomalies. For example, some SIEMs use ML models to detect the output of domain generation algorithms (DGAs), which are used in DNS attacks.
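To give a flavour of the features such models use: algorithmically generated domain names tend to have higher character entropy than human-chosen ones. The sketch below applies Shannon entropy with an illustrative threshold; production detectors combine many more features in a trained model:

```python
# Entropy-based screen for domain-generation-algorithm (DGA) candidates.
# Real detectors use trained models over many features; the 3.5-bit
# threshold here is illustrative only.
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    """Flag the registered label if its character entropy exceeds the threshold."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(looks_generated("google.com"))            # False
print(looks_generated("xj4kqz9w2lfun8vt.com"))  # True
```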

Use Cases for Tomorrow

Vendors envision using AI for the thankless security tasks and saving humans from narrowly defined manually repetitive tasks, so they can pivot more quickly into investigating complex issues the machine does not understand. AI will not recognize what it has not been trained to see, so all new tactics and techniques will require human input.

Generative AI can easily translate from one language to another – spoken or machine language – which includes translating natural language queries into the vendor-specific languages needed to conduct the search in other tools. Today, SIEM vendors often use rules to correlate alerts into incidents that present more information to the analyst in one place.

AI will be trained to produce the context around an alert so analysts do not have to spend as much time on investigations, such as checking with an external service that can then label which domains are malicious. AI will handle investigations more efficiently, as well as prioritize which alerts should be handled first.

If trusted by the organization, AI may suggest or write playbooks based on regular actions taken by analysts. Eventually AI may be trusted to execute the playbooks, as well. Generative AI can recommend next steps using chatbots to provide responses about policies or best practices. One use may be a higher-level analyst confirming recommended actions for junior-level analysts. Eventually, organizations will use their own security data and threat pattern recognition capabilities to create predictive threat models.

Other uses for generative AI include:

• Generating reports from threat intelligence data

• Suggesting and writing detection rules, threat hunts, and queries for the SIEM

• Creating management, audit and compliance reports after an incident is resolved

• Reverse engineering malware

• Writing connectors that parse the ingested data correctly so it can be analyzed in log aggregation systems like a SIEM

• Helping software developers write code, search it for vulnerabilities, and offer suggested remediations

Moving Forward

The goal with AI has always been to improve the efficiency of the security analyst in their work of defending an organization against cyber adversaries. However, the cost of AI models and services may be too high for some organizations. Relying on AI to guide analysts and report on security events will only take off if the models are trustworthy.

The data used to train a model must be accurate or the AI-driven decisions will not have the desired effect. The terms confabulation and hallucination describe cases where a model is wrong because it was trained to give some answer instead of saying "I don't know" when it does not have one. The industry also needs to avoid AI bias, which occurs when the chosen training set is not diverse enough.

Customers and AI suppliers should understand the underlying data behind each decision so they can figure out whether training went wrong and how models need to be retrained. Vendors must also protect the data used to train the models: if that data is breached, a model could, for example, be trained to ignore malicious behavior instead of flagging it. Additionally, customers must have guardrails to ensure they keep proprietary data out of public models.

Vendors will also need to check their models for drift. AI models are not something that can be set up and forgotten; they must be tuned and updated with new information. Researchers and other cybersecurity vendors can turn to MITRE's Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) for help in better understanding the threats to machine learning systems and for tactics and techniques, similar to those in the MITRE ATT&CK framework, to address and resolve issues.
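A drift check can be as simple as comparing a live feature distribution against the training baseline. The mean-shift rule and threshold below are illustrative assumptions; production systems typically use statistical tests such as the population stability index or Kolmogorov-Smirnov.

```python
# Minimal drift-monitoring sketch: flag drift when a feature's live mean
# moves more than `threshold` baseline standard deviations from the
# training-time mean, signaling the model may need retuning.
import statistics

def drifted(baseline, live, threshold=2.0):
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > threshold * sigma

baseline_logins = [20, 22, 19, 21, 20, 23, 18]  # daily logins at training time
live_logins = [45, 50, 48, 47, 52]              # behavior has clearly shifted

print(drifted(baseline_logins, live_logins))  # True -> retrain or retune
```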

Interested in learning more? Join us for a webinar on May 31st – Unlocking Business Success with Generative AI.

Michelle Abraham - Sr. Director, Research Cybersecurity - IDC

Michelle Abraham is a Senior Research Director in IDC's Security and Trust Group responsible for the Security Information and Event Management (SIEM), Exposure Management and Related Artificial Intelligence Technologies practice. Ms. Abraham's core research coverage includes SIEM platforms, exposure management platforms, attack surface management, breach and attack simulation, cybersecurity asset management, and device vulnerability management alongside AI-related security topics.