The plane ride to Shoptalk 2024 was a precursor of what was to come. A bit of a hike for a New Englander, the trip to Las Vegas took over six hours. From my window seat, I watched the Hoover Dam and the accompanying Mike O’Callaghan-Pat Tillman Memorial Bridge spanning the canyon in front of the dam.

Just like the bridge, the conference presented the opportunity to reach across the chasm and connect with retailers, vendors, and professionals in retail. Without doubt, it fulfilled this purpose.

The event drew about 10,000 people across three days, with a substantial expo floor and educational sessions. It opened with Sophie Wawro, President of Shoptalk, and Mike Antonecchia, SVP, discussing the focus of Shoptalk from a sparkling stage that seemed right out of the Twilight Zone.

The duo mapped out the day’s events and agenda and opened the stage to the keynote speakers, including Colleen Aubrey from Amazon and Tony Spring, CEO at Macy’s Inc., with additional thoughts on Reimagining Retail from Gina Boswell, CEO at Bath & Body Works. The conference had broadened its scope from prior years. More specifically, Shoptalk, better known for its digital focus, had adopted a track for Merchandising, Assortment and Supply Chain, going deeper into the application of technology for the merchandising sector. Clearly, merchandising continues to be a hot topic for retail. We discuss some takeaways below.

The event is well known for its 1:1 meeting format, and frankly it felt like speed dating to get to know people in the industry. Shoptalk had refined the meetings from prior years and, most importantly, made them a much smoother, more straightforward process, with announcements and a pair of gigantic digital clocks at either end of the Shoptalk meeting floor making sure we kept to time. The tables were the tiny two-seaters you’d find in a cafe, but perfect for a quick conversation. As with any large event, as soon as the ‘voice of God’ announcements were made, decibel levels grew from a murmur to hundreds of conversations all at once, echoing off the walls of the Las Vegas Mandalay Bay event halls. However, I will say most of these meetings were interesting enough to sit through the 15 minutes of introductions and the beginnings of a conversation, which was the intent.

The diversity of meetings included large retailers like Macy’s and Sainsbury’s, but also ecommerce firms like Peapod, independent consultants, and fellow Rethink Retail Top Retail Experts (you know who you are!). Having been in the industry for decades, I found that not everyone was a new face. The folks whom I knew offered reinforcement of new research topics such as retail media and generative AI, while others shared my passion for merchandising and analytics. Some of the new folks I met surprised me with conversations about the future of retail, where I could both learn from and understand their perspectives. Expanding my horizons was the most rewarding part.

Beyond the speed-meeting rituals, we had rounds with top retail vendors that continue to make inroads in the market. Highlights included an exciting meeting with Oracle in a Starbucks, innovation meetings with Blue Yonder at their booth, touching base with Zebra about their focused expansion into merchandise planning, and speaking with Centric Software, an interesting new addition to my merchandising and planning perspective.

I can’t leave out Bazaarvoice, which hosted a fantastic post-dinner engagement with deep conversations about retail marketing and the media angle. Other productive engagements included conversations with the CDP firm Twilio about retail data.

Shoptalk 2024 offered a number of tracks, but my limited availability required that I focus on merchandising, assortment and supply chain, with a small selection of education sessions. The most exciting session I attended was an interview with Aimee Bayer-Thomas, Chief Supply Chain Officer at Ulta Beauty. Key learnings from this session revolved around the impact and role of technology in the supply chain:

  • Tech is needed in retail: While Ms. Bayer-Thomas highlighted this fact, it is not by chance that retailers adopt technology at a rapid clip. Overall IT spending in retail has grown at a pace of over 5.5% year over year, with expectations to grow to over 8% by 2027 (Source: IDC Data). For instance, Walmart incurred capital expenditures of almost $12 billion on supply chain, customer-facing initiatives, and technology in the United States alone, a growth of 28.4% over fiscal year 2023. Other large retailers are showing similar investments in technology innovation, with companies like Home Depot investing $150 million into a venture capital fund in 2022. Target announced $4 to $5 billion of investment in 2023 “to expand its guest-centric services, operations network of stores and supply chain facilities, digital experiences and other capabilities.” Clearly, the investment in technology isn’t slowing down, and retailers continue to treat technology as a critical enabler for supply chain and merchandising.
  • Tech is an enabler: As discussed during Shoptalk’s session, Ulta Beauty leverages tech to enable outcomes. According to Ms. Bayer-Thomas, “Tech must improve strategic imperatives.” The purpose of technology is to improve and optimize the ability of distribution centers and micro-fulfillment centers within the organization to operate more efficiently and smoothly. Technology must serve the end goals of these very different types of supply chain facilities, which must work in concert to yield the best results for retail.
  • Tech must fit the business: Not all technology is appropriate, and the criticality of the supply chain, combined with the need to serve the business, means that any technology used must proactively fit business needs. Ulta Beauty balances technology with investments in people and must ensure that any new technology leaves the company’s merchandising and supply chain better off than without it. Some technologies are worth examining, however. For instance, the growth of AI in retail since ChatGPT’s public launch has been exponential. Per Ms. Bayer-Thomas, “AI is something we’re watching.” More specifically, retailers are looking at AI for supply chain use cases, testing applications, and piloting AI capabilities across their organizations. Not all of these efforts coincide with supply chain or merchandising, but some highly interesting use cases have arisen. Find out more about such opportunities here: Understanding the Generative AI Use Case Landscape: The Industry Perspective

For retailers, following these guidelines is mandatory. The rapidly changing world of technology intersects with the high-performing retail world, creating conditions where retailers must invest in technology. Doing this correctly will mean the difference between successful applications of technology and failed projects that miss the mark. Technology must address a business need and not be pursued for technology’s sake.

Tech must improve strategic imperatives.

Aimee Bayer-Thomas, Chief Supply Chain Officer, Ulta Beauty

Any gathering of retailers, retail vendors, and a mix of adjacent professionals can be an opportunity for business value creation – but the most significant events proactively bring people together. Shoptalk 2024 is one of these avenues where business creation happens. Tagged as one of the top retail events, Shoptalk 2024 gave me the right kind of nudge, especially in understanding the criticality and balance required for applying technology to merchandising and supply chain. The event addressed many of the critical challenges in retail through content, people, and engagement with industry players.

Ananda Chakravarty - Research VP, Retail Merchandising and Marketing Analytics Strategies - IDC

Ananda Chakravarty is Vice President for IDC Retail Insights, responsible for the Retail Merchandising and Marketing Analytics Strategies practice. Mr. Chakravarty's core research covers in-store and digital retail merchandising, digital tools, artificial intelligence, intelligent store operations, retail marketing, and retail media. It includes application of data and data analytics for retail including pricing, tech, and decision making. Ananda builds actionable strategic research focused on retail business merchandising and marketing.

Value-based selling is a sales technique that focuses on understanding and reinforcing the various benefits a product or service will deliver to the customer, rather than focusing solely on the features or the price of the product. This approach is rooted in the belief that customers buy products and services because they perceive them to offer value that is greater than the cost. Value-based selling is about communicating this perceived value effectively, ensuring that the customer understands how the product or service can solve their problems, improve their situation, or help them achieve their goals.

Value Selling

The core of value-based selling lies in the salesperson’s ability to identify and understand the customer’s needs, challenges, and goals. This requires thorough research, active listening, and insightful questioning. Salespeople must then tailor their sales pitch to highlight how their product or service can address these specific needs, offering a solution that is not just beneficial but also superior to alternatives. This approach shifts the focus from the transaction itself to the relationship between the customer and the product or service, emphasizing long-term benefits and satisfaction.

Value-based selling also involves quantifying the value proposition. This means providing the customer with clear, tangible evidence of how the product or service will deliver a return on investment (ROI). This could be in the form of cost savings, increased revenue, improved efficiency, or other measurable benefits. By presenting a compelling case that the benefits of the product or service outweigh the costs, salespeople can more effectively persuade customers to make a purchase.

Now, let’s compare value-based selling with consultative selling. While both approaches are customer-centric and focus on solving the customer’s problems, there are key differences between them.

Consultative Selling

Consultative selling adopts a more holistic approach to sales, positioning the salesperson as a trusted advisor who acts as a consultant to the customer: working closely with them to identify and understand their needs, challenges, and objectives, and then recommending the products or services that best meet those needs. This approach is characterized by a high level of engagement with the customer, with the salesperson asking questions, listening to the customer’s responses, and offering tailored advice and solutions. Unlike traditional sales approaches that focus solely on closing deals, consultative selling prioritizes building long-term relationships based on trust and mutual understanding.

The main difference between value-based selling and consultative selling lies in the focus and end goal of the sales process. In consultative selling, the emphasis is on the salesperson’s role as an advisor or consultant, with the primary goal being to find the best solution for the customer’s needs, regardless of whether it leads to a sale. The salesperson’s expertise and advice are the main value propositions in this approach.

In contrast, value-based selling focuses more on the value that the product or service will provide to the customer. While it also involves understanding the customer’s needs and offering tailored solutions, the emphasis is on communicating the specific benefits and ROI that the customer will gain from making a purchase. The salesperson’s role is not just to advise but to persuade the customer that their product or service offers the best value.

Both value-based selling and consultative selling require a deep understanding of the customer and a focus on building long-term relationships. However, value-based selling goes a step further by quantifying the value proposition and making a compelling case for the financial and strategic benefits of the product or service. This approach can be particularly effective in competitive markets where customers are looking for clear, tangible reasons to choose one product or service over another.

Tips for Implementing Each Sales Strategy

Understand Your Customer: Regardless of the sales methodology you employ, understanding your customer is paramount. Take the time to research their industry, challenges, and competitors. This knowledge will serve as the foundation for tailoring your approach and articulating the value proposition effectively.

Ask Probing Questions: In consultative selling, asking the right questions is key to uncovering the customer’s underlying needs and motivations. Focus on open-ended questions that encourage dialogue and allow the customer to express their concerns freely. Active listening is equally important, as it demonstrates your genuine interest in understanding their perspective.

Highlight Unique Value Propositions: In value selling, emphasize the distinctive features and benefits of your product or service that set it apart from the competition. Rather than adopting a one-size-fits-all approach, tailor your pitch to resonate with the customer’s specific pain points and priorities. Use case studies, testimonials, and ROI calculations to quantify the value proposition and reinforce your arguments.

Focus on Building Trust: Whether you’re practicing consultative selling or value selling, trust is the cornerstone of successful relationships. Be transparent, authentic, and empathetic in your interactions with customers. Demonstrate your expertise by offering insights and recommendations that genuinely add value to their business.

Adapt and Iterate: Sales is a dynamic field that requires constant adaptation to changing market conditions and customer preferences. Continuously evaluate your sales approach, solicit feedback from customers, and iterate on your strategies accordingly. Embrace innovation and leverage technology to streamline your sales process and enhance productivity.

While value-based selling and consultative selling share some similarities, they differ in their focus and approach. Value-based selling emphasizes the value and benefits of the product or service, aiming to demonstrate a clear ROI to the customer. Consultative selling, on the other hand, centers on the salesperson’s role as an advisor, with the goal of finding the best solution for the customer’s needs. Both approaches can be effective, but choosing the right one depends on the sales context, the nature of the product or service, and the specific needs and preferences of the customer.

The artificial intelligence (AI) revolution has been heralded as the most disruptive technological force since the dawn of the digital age. With generative AI (GenAI) models like ChatGPT, GPT-4, Claude, and Gemini capturing global attention, businesses across sectors are scrambling to explore and harness these emerging capabilities.

However, amid the AI gold rush, a crucial truth is crystallizing – for organizations to truly innovate and drive transformative impact with AI, deep domain expertise and industry-specific knowledge about use cases are non-negotiable prerequisites. 

What Are GenAI Use Cases And How Do I Identify Them?

At their core, effective AI use cases are focused business initiatives that harness various technologies to achieve specific, measurable outcomes. They go beyond just implementing a single technology solution, instead centering on addressing core business needs or challenges through a strategic combination of tools and strategies. Critically, a use case is not simply a discrete offering like a chatbot or an isolated technology project. Rather, it’s a holistic approach that aligns technological capabilities with broader business objectives, enabling quantifiable results.

The most impactful use cases are driven by a clear understanding of organizational pain points and a vision for leveraging emerging technologies like GenAI to create innovative solutions. By identifying and pursuing well-defined use cases, businesses can unlock AI’s full potential while delivering tangible value.

GenAI use cases broadly fall into three principal categories: productivity, business function, and industry-specific.

Productivity use cases streamline work tasks like report summarization, job description generation, or code creation by integrating GenAI features into existing applications. Many derive value from pre-trained models.

Business function use cases involve integrating AI models with proprietary corporate data or specific departments/functions. Data governance is crucial, necessitating integration with established enterprise platforms. 

Industry use cases generally require extensive customization to offer significant value to larger enterprises. These specialized vertical applications entail tailored architectures and implementation efforts, leveraging unique data assets.

Preparing For AI Everywhere: The Great Data Grab

The “Great Data Grab of 2024” underscores the urgency around the need for preparation. AI providers are aggressively expanding their data repositories and platform portfolios through acquisitions, partnerships, and arduous self-collection efforts. The goal? To amass as much raw training data as possible to feed increasingly powerful AI models spanning industries and use cases.

This explosive growth in data and AI assets presents a double-edged sword. The sheer abundance opens up tantalizing opportunities to experiment and push boundaries. Yet, it also manifests a bewildering sprawl of options that can just as easily lead businesses astray if not approached with laser-focused strategy and domain-specific expertise.

The stakes are rising exponentially. IDC’s industry predictions suggest that by 2025, a staggering 40% of all professional services engagements will involve GenAI-augmented delivery models in some capacity. This wholesale disruption to human-delivered services will upend long-established processes, roles, and competency models. Success in this new paradigm hinges on skillfully blending cutting-edge AI capabilities with nuanced, sector-specific wisdom.

The Four Pillars of Preparation: Skills, Cost, Innovation, Governance

As organizations attempt to construct compelling GenAI value propositions, four overarching pillars emerge as focal points – skills, costs, innovation, and governance. Mature AI proficiencies in areas like machine learning (ML), natural language processing (NLP), data science, and cloud architecture are essential foundational elements.

Skills: Upskilling in core AI/ML disciplines like coding, model training, and data wrangling is now table stakes. But combining those horizontal proficiencies with rich, verticalized wisdom is what unlocks exponential value creation. A manufacturing AI wizard intimately versed in supply chain complexities and constraints will run circles around a generalist.

Cost: The underlying economics of productionizing and scaling advanced AI workloads at an enterprise level can be extremely complex and capital-intensive. Here, granular knowledge of industry-specific processes, inefficiencies, variable cost drivers and fluctuating demand signals is indispensable. Those insights ensure AI investments remain sustainable and tightly aligned with the pragmatic financial realities of a given business sector.

Innovation: AI is a potent catalyst for digital innovation, promising to streamline and augment development processes. But a relentless focus on measurable business results – not just shiny new tech for tech’s sake – must persist. Seasoned industry veterans skilled at mapping AI tools to specific sectoral pain points and KPIs help maintain this all-important outcomes-driven discipline.

Governance: Robust AI governance programs and rigorous data governance guardrails are no longer nice-to-have footnotes; they’re essential safeguards in an era of widespread AI adoption. Authoritative domain experts play a pivotal role in ensuring these frameworks holistically account for sector-specific risks, compliance requirements, ethical nuances, and regional regulatory variances from the outset.

Lest we forget, these escalating talent and policy demands are unfolding against a backdrop of continued infrastructure turbulence. Industry analysts forecast that well into 2025, enterprises will still be contending with degrees of uncertainty, cost volatility, and accessibility constraints surrounding the foundational compute, network and storage resources underpinning AI/ML workloads.

Navigating these intersecting infrastructure, skillset and governance complexities in pursuit of sustainable AI-driven innovation will require a depth of industry-specific intelligence. From evaluating build-vs-buy infrastructure options to right-sizing investments to mitigating sector-specific risks, vertical expertise is mission-critical.

Riding the AI Revolution: Essential Guidance

So as the AI revolution kicks into an even higher gear, here are some essential guideposts for businesses looking to leverage GenAI effectively:

Understand: Don’t treat AI as an isolated technological sphere. Immerse yourself in the unique cultural, operational, and financial dynamics, legacy constraints, customer/user pain points and market opportunities within your specific industry vertical. Your customers, employees and partners demand relevance and context, not generic AI solutions. 

Prioritize: Approach GenAI use case exploration through the lens of tangible, high-impact business outcomes. What are the most pressing strategic challenges specific to your domain? Prioritize accordingly, while carefully weighing factors like potential costs, risks, downstream adoption barriers and competitive implications. Provide crisp, industry-contextualized starting points and strategic roadmaps to chart the course.        

Establish: Certainly, it’s crucial to establish the right technical foundations like data-centric platforms, cost-effective AI infrastructure environments and robust API-driven integration frameworks. But recognize that these critical enablers must be purpose-built and customized to synchronize with the unique operating models, skills profiles and technology stacks present in each vertical. 

The AI age represents an era of incredible potential and opportunity. But it’s also one of unprecedented complexity – a reality that demands a sharp re-think of how companies acquire, nurture, and apply talent and domain expertise.

By treating specialized industry knowledge as a core competency on par with AI/ML capabilities themselves, businesses can firmly establish themselves as indispensable leaders driving the next waves of sectoral transformation.

After all, deep insight is the spark that will ultimately ignite an organization’s capacity to innovate. Generic AI will only get you so far. Expertise is what allows you to separate noise from signal and elevate AI into a tsunami of outcomes-driven impact.

Cambridge Healthtech Institute’s Bio-IT World Conference & Expo was held April 15-17 in Boston. It represented an appealing blend of software companies, including many innovative start-ups, a few system integrators, and some big pharma, such as Pfizer, Novartis, Merck, AstraZeneca, Bristol Myers Squibb, Johnson & Johnson, Roche, Regeneron, and Sanofi, to name a few, as well as investors. With over 3,000 life science and IT experts attending, the event brought to the table an excellent fusion of life sciences innovation and technology.

Dr. Caroline Chung, VP and Chief Data Officer and Director of Data Science Development and Implementation of the Institute for Data Science in Oncology at MD Anderson Cancer Center, emphasized in her excellent keynote on digital twins in cancer care and research the importance of verification, validation, and uncertainty quantification (VVUQ) for building trust in digital twins. She drew an interesting parallel between medicine and AI, referencing the quote from Sir William Osler, the father of modern medicine, that ‘Medicine is a science of uncertainty and an art of probability’, and highlighting that while AI is indeed top of mind for the industry, there is still a lot of uncertainty that needs to be dealt with.

Bryan Martin, Research Fellow and Head of AI at AbbVie, presented the AbbVie Intelligence platform (including AI Chat, AI Analyze, AI Translate, and AI Ask a Source), which leverages GenAI and has a running cost of less than 250 dollars per day for 50,000 users. He discussed how AbbVie overcame implementation challenges; IDC’s PlanScape for GenAI in life sciences and healthcare examines some of these challenges and provides guidance to the industry on how to navigate the GenAI journey. Bryan also discussed how AbbVie is in the process of implementing Project Delphi (automated document generation) this year to generate consent forms, protocols, clinical study reports, and periodic safety update reports (PSURs) using generative AI. He highlighted how AbbVie is targeting 10 million dollars in cost savings based on just these four documents.

Anu Sharma, Principal Scientist and Director of the Center for Observational and Real-World Evidence at Merck, spoke about RWD, Merck’s real-world data platform, which leverages generative AI to identify patient populations and to optimize trial design, treatment effectiveness, disease progression, and disease burden analyses. She shared how it shortened timelines for health technology assessment (HTA) submissions and regulatory submissions from weeks to hours. Rishi Gupta shared how Novartis is leveraging GenAI for drug discovery and generative chemistry, while also cautioning about the need to distinguish the hype from the reality. Nick Brown, Executive Director of Imaging and Data Analytics at AstraZeneca, presented Aegis, a digital twin that predicts dose-dependent human drug-induced liver injury (DILI) using gene changes, and shared how it has already been used across 200 compounds.

The growing and aggressive focus on generative AI was stressed by one and all. This aligns with findings from IDC’s Life Sciences Generative AI Survey, which indicated that in just four months the percentage of companies investing significantly in GenAI had jumped from 13% to 46%. Data used to train LLMs is not easily available and is expensive, and data costs can be a roadblock for smaller start-ups looking to leverage GenAI; venture capital investment is therefore critically important to enable start-ups to leverage GenAI. Generative AI reigned supreme at this event. Look out for IDC’s upcoming report on GenAI use cases in life sciences.

It was highlighted that the industry is not looking only at real-world data (RWD) today but is increasingly recognizing the importance of synthetic data in the near future, as it comes up against a data wall. In fact, IDC has predicted that by the end of 2027, 95% of pharma companies will have established strategic partnerships to access RWD, with 25% transitioning to the adoption of synthetic data. It was stressed that organizations should build out a defensible data strategy, focusing not only on long-term vision but also on demonstrating immediate value to patients.

Other interesting topics ranged from quantum-aided drug design to lab-automation-as-a-service (refer to IDC’s Lab of the Future Technology Solutions and Consulting Services MarketScape to understand the technology vendor landscape and the key trends in this space), and from data meshes to knowledge graphs.

Venture Innovation Partners was an interesting addition to the event, bringing together government, venture capitalists, private equity professionals, bank investors, and executive leadership of innovative biotechs and tech-bios as well as large pharma to drive conversations around what it takes for biotech innovations to attract investment, and around growth and investment strategies. Scott Penberthy, CTO, Google, highlighted transformative AI-driven initiatives and touched upon key investment opportunities in biotechs and drug discovery. IDC foresees that it is not only biotechs but the innovative ecosystem of TechBios that will transform drug discovery.

There was a discussion around how organizations need to ‘weaponize AI’ to use it to their competitive advantage. The need to understand the external market while aligning with internal organizational strategy was seen as a key factor driving ‘build vs. buy’ decisions. The fact that innovation is a team sport was emphasized. In a final wrap-up on how to sustain Massachusetts as a biotech hub and an investment powerhouse, Yvonne Hao, Secretary of Economic Development, Commonwealth of Massachusetts, beautifully articulated that Massachusetts as a state is big enough to drive investment and power innovation and yet small enough to keep it personal.

“Two things stood out at this event. One, not surprisingly, was ‘Generative AI’ – which clearly led the way. And the second was drug discovery. In addition, the Venture Innovation Partners initiative provided a great platform connecting venture capitalists, innovative tech start-ups, and large pharma, as well as government, to power innovation and help Boston retain its title as the world’s largest biotech hub. If you were seeking tech innovation in life sciences, Bio-IT was the place to be”, said Dr. Nimita Limaye, Research VP, Life Sciences, R&D Strategy and Technology, IDC.

Nimita Limaye - Research Vice President - IDC

Dr. Nimita Limaye is a Research VP with IDC Health Insights and provides research-based advisory and consulting services, as well as market analysis on key topics related to R&D Strategy and Technology in the life sciences industry. She addresses aspects such as the role of digital transformation in discovery research, e-clinical ecosystems, the role of NLP, AI, ML, DL, RPA, in transforming drug development, precision medicine, pharma R&D execution and strategic outsourcing models.

Marketing has been a hotbed of digital transformation for more than a decade, but with the recent emergence of Generative AI (GenAI), the most profound changes lie ahead. GenAI improves continuously along an exponential curve of competency mastery. Its potential is limited only by the availability and cost of computing and by whatever governance can be applied to control it. While we are amazed at its current abilities, it is just getting started, and the rate of acceleration means that in three to five years it will be phenomenally more proficient at everything we teach it to do. The more people use AI to do their jobs, the faster AI will learn to do everyone’s job.

GenAI has many use cases for marketing, like creating personalized emails, social posts, product imagery, audience segments, and much more. However, in a few years, we will no longer think of AI in terms of use cases because single prompts will automatically generate, manage, and optimize processes, projects, and campaigns at orders of magnitude less cost and greater scale. That will have a profound impact on the nature of marketing work for people. GenAI will reduce the need to hire additional marketing staff, collapse some roles, and expand others. Ultimately the result will be fewer people working in marketing in the next five years.

The impact, of course, is not limited to marketing; it will affect most white-collar jobs up the org chart. Despite this, it is important to remember that fewer humans in the loop does not mean zero humans in the loop. AI may always be better at creating something out of everything; people will always be better at creating something out of nothing.

Furthermore, there will be important new go-to-market challenges and opportunities as AI Super Agents enable programmatic shopping. Marketers will need new methods to influence not only how people coach their AI agents to shop for them, but also how AI agents coach their human owners on what products and services to choose. Given the acceleration of AI capabilities, now is the time for marketing leaders to begin planning how they will redefine their organizations, roles, skills, and practices.

How We Estimated The Productivity Impact Of AI

We started by grouping the collective work of a marketing organization into five categories:

  1. Management and Planning
  2. Branding and Creative Services
  3. Campaign and Engagement
  4. Analytics and Reporting
  5. Other

We then estimated how much of each category of work can be delegated to GenAI over the next five years, which admittedly may be conservative. The model accounts for a step function of work delegation to AI in 2025, but there may be additional step functions in capabilities that have yet to be revealed in the evolution of AI. Of note, Sam Altman is on record estimating that once Artificial General Intelligence (AGI) becomes available in the early 2030s, it will automate 95% of all the creative work of marketing and all the creative work marketers hire outside service firms to do for them.

We then combined staffing levels and fully loaded cost estimates to calculate the productivity impact of adopting GenAI throughout a marketing team of 56 employees. The result is that in the next five years, GenAI will advance to the point where it can handle more than 40% of the collective work of marketing teams and potentially 100% of specific marketing tasks. Now is the time to start scenario planning for such a huge shift in the nature of work happening so fast.
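To make the arithmetic concrete, below is a minimal sketch of this kind of delegation model in Python. The category weights, delegation rates, and fully loaded cost figure are illustrative placeholders, not IDC’s actual model inputs; only the team size of 56 and the roughly 40% result track the figures above.

# Illustrative GenAI productivity model for a marketing team.
# All weights, delegation rates, and cost figures are hypothetical placeholders.

TEAM_SIZE = 56             # employees, per the example above
AVG_LOADED_COST = 150_000  # assumed fully loaded cost per employee, USD/year

# (share of total work, fraction delegable to GenAI within five years)
categories = {
    "Management and Planning":        (0.20, 0.25),
    "Branding and Creative Services": (0.25, 0.55),
    "Campaign and Engagement":        (0.30, 0.50),
    "Analytics and Reporting":        (0.15, 0.45),
    "Other":                          (0.10, 0.20),
}

# Weighted share of all marketing work that can be delegated to GenAI.
delegated_share = sum(w * d for w, d in categories.values())

# Equivalent headcount and annual cost capacity freed by delegation.
freed_headcount = TEAM_SIZE * delegated_share
freed_cost = freed_headcount * AVG_LOADED_COST

print(f"Delegable share of total work: {delegated_share:.1%}")  # ~42.5%
print(f"Equivalent headcount: {freed_headcount:.1f} of {TEAM_SIZE}")
print(f"Annual capacity value: ${freed_cost:,.0f}")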

Preparing For Fundamental Organizational Restructuring

Some key takeaways for marketing leaders to prepare their organization to take advantage of GenAI include:

  1. Every marketing team is different: It is critical to plan for your specific marketing mission, team structure, operating environment, and technology infrastructure. Some marketing teams are overweight on branding, advertising, direct marketing, lead generation, etc. Therefore, your plan needs to be uniquely designed around how much of what kind of work is assigned to each of your staff roles.
  2. Review work processes and data flows: GenAI solutions will consolidate into multi-modal platforms that can create, automate, and analyze whole projects, processes, and campaigns. But they will need vast amounts of data that is structured and tagged for training and retrieval augmentation as well as strong governance and security for optimal performance in the context of your business.
  3. Assess vendor roadmaps: Buyers should focus on the breadth and depth of use cases vendors support not only within marketing but across all customer-facing functions. Use cases will initially translate into business outcomes and create strong economic justification for future investment.
  4. Rapid roadmap: Buyers should also focus on how effectively the vendor’s architecture, tooling, and service resources accelerate the journey of operationalizing the use case roadmap.
  5. Determine the level of infrastructure required to support each type of work: Successful AI deployments will require significant infrastructure – whether provided by the vendor or the user. Marketing technology buyers should work with vendors to determine the required resources and partner with IT counterparts to determine the organization’s readiness to support each type of work. In some cases, the governance, security, data architecture, etc. may not be mature enough to support full GenAI enablement across the martech stack.
  6. No AI Islands: AI capabilities should be implemented from the data layer up, not from the task automation layer down. While many GenAI apps exist, every instance of GenAI in a commercial enterprise should share common services for data, governance, security, etc.
  7. Prepare staff (and organizations) for fundamental job changes: Marketing leaders should assess how much work will be delegated to GenAI, and across which roles, based on the applicable use cases. They should prepare staff for significant changes to their roles which may necessitate upskilling, re-organization, elimination of some job titles, expansion of others, and the creation of entirely new career paths. While innovations, historically, are additive to the job market, the transition is inevitably challenging. Marketing leaders will need to consider organizational impact if they wish to successfully deploy GenAI with minimal disruption.
  8. Prepare your data: GenAI is fueled by data. Organizations that do not have real-time, clean, governed, data sets will not be able to take full advantage of this new generation of marketing technology. Martech buyers should partner immediately with IT counterparts to ensure CDP (customer data platforms), or similar Data Lake structures are in place to capture all customer interactions and deliver customer data as an enterprise service on which to base AI decisions across all departments.
  9. Audit your current vendors but be prepared to initiate RFPs (Request for Proposals): It may not be necessary for buyers to rush out and buy the latest and greatest AI tools – especially not individual, disconnected, point solutions. Marketing platform vendors have or will infuse GenAI capabilities into their solutions and as AI evolves from task to process many discrete AI capabilities will consolidate into marketing platforms.
  10. Ingenuity over Innovation: While GenAI will increase the productivity of various marketing functions by 40%-100%, sheer output is not the final measure of a marketing team’s success. As GenAI creates almost limitless personalization capacity, issues of brand identity, market positioning, differentiation, novel messaging, and anticipating cultural trends will soon become the new attributes of best-in-class marketing organizations.

Gerry Murray - Research Director, Marketing and Sales Technology - IDC

Gerry Murray is a Research Director with IDC's Marketing and Sales Technology service where he covers marketing technology and related solutions. He produces competitive assessments, market forecasts, innovator reports, maturity models, case studies, and thought leadership research. Prior to his role at IDC, Gerry spent six years in marketing at Softrax Corp., an enterprise financial solutions provider. There, he managed marketing programs that produced 4 million emails a year, multiple websites, interactive tools and product tours, an online game, collateral, and PR. Concurrently, he was Managing Editor at RevenueRecognition.com, a thought leadership site featuring partnerships with IDC and the Financial Accounting Standards Board (FASB) that was quoted and referenced in leading industry publications such as CFO magazine, BusinessFinance, and others. Gerry spent the first half of his career at IDC advising executives from some of the world's largest software and services providers on market strategy, competitive positioning, and channel management. He was the Director of Knowledge Management Technology and conducted research on a worldwide scale, including market sizing and forecasting, ROI models, case studies, multi-client studies, focus groups, and custom consulting projects.

Artificial intelligence (AI) has a tremendous potential to accelerate organizations’ sustainable transformation journeys and create business value. AI can help automate ESG data collection and increase the accuracy of reporting, improve operational efficiencies and anticipate and respond to supply chain risks.

However, the use of AI also poses risks that can harm ESG performance, compliance, and trustworthiness. For example, algorithmic bias could lead to unfair or unethical decision-making.

As organizations implement artificial intelligence across their operations, they need to be strategic about the sustainability-related use cases that generate the greatest financial and non-financial ROI. And they must be aware of the industry-specific risks that this technology can pose.

IDC recently conducted a comprehensive IT buyer survey uncovering current market trends around sustainability and AI, including pain points, spending intentions, and in-demand use cases.

To get a more comprehensive understanding of their AI-related value proposition, IT vendors need to understand the full scope of the intersection between AI and sustainability. The different layers are often still presented as disconnected topic areas; according to IDC’s survey, fewer than 10% of organizations worldwide currently address them through their sustainability/ESG function. This leads to fragmented messaging vis-à-vis their customers and an incomplete picture of their capabilities and responsibilities.

Likewise, IT buyers need to better understand their needs regarding AI-enabled sustainability solutions for the best possible ROI, and they need to be aware of the risks they expose themselves to when leveraging AI across their operations. 

IDC’s Sustainability Framework and the Role of AI 

Currently, more than three quarters (76%) of IT decision makers worldwide consider AI and its derivatives to be “critical” or “very important” for their organization’s sustainable transformation journey. More than 40% say that at least half of their AI spend has a sustainability-related component.  

Sustainable AI – Managing the Environmental and Social Footprint of AI 

The rapid growth of AI is driving up energy and computational demands on datacenters, requiring substantial infrastructure upgrades, as IDC analysts Rob Brothers, Sean Graham, and Shahin Hashim lay out here.

With an increase in power capacity comes a spike in carbon emissions and the building of physical assets that have embedded carbon and need to be decommissioned at some point. GenAI also requires power-hungry GPUs, which, on average, require 10-15 times more energy than CPUs (ibid.). Energy consumption is particularly high at the beginning of the GenAI lifecycle during the training and tuning phases of AI models.  

Expanding Power Capacity and Energy Consumption To Meet AI’s Infrastructure Needs 

Not surprisingly, environmental concerns are top of mind for organizations when deploying AI/GenAI. One-third of survey respondents said that they only work with or buy from IT vendors that meet certain environmental sustainability criteria. Only 2% said that they do not make buying decisions that factor in environmental considerations.  

Practitioners also need to balance the technology’s environmental footprint with its potential social and governance-related impact (“Responsible AI”). These issues include biased decision-making and discriminatory outcomes, data security and privacy issues, and unethical business conduct due to the malicious use of AI.

Of course, the materiality of each of these issue areas will be very industry-specific, and practitioners need to understand the concerns and demands of their various ESG stakeholder groups. For instance, these topics are covered by commonly used ESG reporting standards and frameworks (e.g., the SASB standards), which means that ESG investors will have a close look at corporate reporting and performance on these issues. 

AI for Sustainability – Identifying the Top Industry Specific Use Cases 

GenAI, and AI technologies in general, have the potential to substantially improve and accelerate organizations’ sustainable transformation efforts. The use cases will be very industry-specific, depending on:

  • The individual ESG issue areas that organizations need to address.
  • Their stakeholder environment.
  • The complexity of their supply chains.
  • Whether they are delivering physical or non-physical offerings.  

Demand for AI-enabled solutions also varies based on maturity and adoption levels. Naturally, organizations that are just getting started on their tech- and AI-enabled sustainability journey will need solutions that can help them get up to speed on meeting regulatory requirements. These organizations are looking for providers that can also help them figure out how to use the products effectively.

Organizations that are further ahead on the maturity curve require more industry-specific nuances in terms of functionality and issue coverage. Vendors will need to be more explicit about the ROI that these solutions can deliver, as they are likely competing against existing or other (new) sophisticated solutions that are procured to improve the delivery of concrete sustainability outcomes. 

Priorities and Pain Points by AI for Sustainability Adoption Level 

To determine the most sought-after use cases, IDC surveyed current end-user demand for AI-enabled solutions across different dimensions that are essential for building the use cases. The results help IT vendors develop commercial-grade offerings and provide peer guidance to IT buyers trying to prioritize their AI and sustainability tech spend.

These categories include prioritization of ESG issue areas by industry (e.g., greenhouse gas emissions, waste and water management, human rights management, employee wellbeing and DE&I, etc.), biggest impact regarding sustainability management in the value chain (e.g., sourcing, manufacturing, shipping, end-of-life management, etc.), and challenges around the ESG data lifecycle/journey (e.g., collection of insights regarding the regulatory environment, breaking up of ESG data silos, stakeholder management, ESG report creation, etc.).

Below are examples of the top industry-specific use cases that emerged for Manufacturing, Retail, Energy, and Life Sciences:

Industry-Specific Use Cases of AI-Enabled Sustainability Solutions

Summary

AI intersects with sustainability/ESG in many ways. Successful organizations look at the intersection from a risks and opportunities perspective and tie their approach to Sustainable AI and AI for Sustainability closely to their overarching sustainable business strategy. This will also require organizational adjustments and alignments, as responsibilities for AI and sustainability span across multiple functions that include IT and LOB personas.

As sustainability strategies become more holistic and ESG materiality more the driver for action, AI and sustainability strategies need to account for the diversity in ESG issues that can be caused by the use of AI, and practitioners need to be able to pick the solutions that truly help address their organization’s most material issues. 

For more information on the topic, thought leadership research on sustainability/ESG, and an overview of IDC’s sustainability/ESG analysts and offerings, please visit our sustainability/ESG website

Bjoern Stengel - Sr. Manager, Data & Analytics - IDC

Bjoern Stengel is IDC's global sustainability research lead. His research focuses on how environmental, social, and governance (ESG) topics impact and shape business strategies and technology usage. He provides insights into market opportunities, adoption strategies, and use cases for sustainability-related technologies and services. Bjoern helps IDC's clients understand the impact of technology-enabled, sustainable transformation processes in the context of sustainable business strategies, operations, and products and services through research reports, news publications, and speaking engagements at industry events such as Climate Week NYC.

AI, which is poised to accelerate change more than any technology in history, has finally seized the CEO agenda. For those who viewed AI opportunistically, the introduction of generative AI (GenAI), along with the capabilities of large language models, is serving as a wake-up call to a new era.

According to IDC’s cross-industry Future Enterprise Resiliency and Spending Survey of January 2024, 37% of respondents globally believe GenAI will make a significant impact in the next 18 months. Nearly one-quarter said GenAI was beginning to disrupt their business, with 10% reporting it had already done so. Interestingly, these impacts are being felt most strongly by organizations in Asia/Pacific, followed closely by those in Europe and the U.S.

Business competition remains fierce at the regional, national, and organizational levels. Conversations with numerous CEOs and senior managers about their approach to disruptive technologies, particularly AI and GenAI, prompted me to question whether leaders are asking themselves the right questions as they navigate this disruptive landscape.

Leaders of enterprises and medium-sized companies are adopting unique approaches. Some are enthusiastic about the potential productivity offered by innovative technologies. Others take a more cautious approach, advocating a measured strategy until the benefits of these technologies are proven on a broader scale.

This dichotomy often arises in discussions about emerging technologies like AI, cloud computing, and digital twins.

I’ve compiled a dozen key questions leaders should be asking as they guide their organizations through the dynamic — and potentially perilous — landscape of disruptive technologies:

  1. Recognizing Disruptive Technology

How can I determine whether it’s just buzz or a truly disruptive technology that our company can benefit from?

It’s crucial to develop a keen ability to distinguish between industry hype and genuinely disruptive technologies. This requires staying informed about emerging trends, engaging with industry experts, and fostering a culture that encourages innovative thinking.

  2. Building a Self-Learning Organization

How can I know if we have a self-learning organization whose organizational structure and processes enable us to identify, test, pilot, and objectively assess technology trends?

Organizational structures and processes must be assessed to ensure they foster a self-learning environment. This involves creating channels for identifying, testing, piloting, and objectively assessing technology trends within the company and promoting a culture of continuous learning and adaptation.

  3. Balancing Short- and Long-Term Focus

How can we benefit from new technology? Will a short-term focus jeopardize our competitive advantage?

Addressing immediate needs is crucial, but the long-term impact of new technology must also be evaluated. Embracing sustainable and forward-thinking strategies can help organizations avoid a myopic focus on short-term gains and instead build a competitive advantage.

  4. Data Protection and Cybersecurity

Personal productivity tools are great — but what about data protection and cybersecurity?

As organizations integrate personal productivity tools powered by AI, data protection and cybersecurity must be prioritized. Implementing robust measures to safeguard sensitive information is essential to reduce potential risks and ensure stakeholder trust.

  5. Technology Ecosystem

Do we need to be part of an ecosystem of technology vendors, advisors, and service providers?

Yes, it is critical to have access to a robust and versatile ecosystem of technology vendors, advisors, and service providers to navigate the complexities of emerging technologies. A collaborative approach enhances the organization’s capacity to understand, adopt, and integrate new technologies.

  6. Absorbing Innovation

How can I know if my organization has the ability to absorb another innovation? Will we need to create new dedicated positions, teams, or even departments?

Assessing the organization’s capacity to absorb new innovations is critical. Hence, it must be determined if your existing structures can accommodate technological changes or if dedicated positions, teams, or departments need to be created to facilitate a smooth integration.

  7. Avoiding Pilot Purgatory

In earlier technology deployment projects, we wound up parked in “pilot purgatory.” How can I know if we have learned from these experiences?

Another stop in “pilot purgatory” is possible if organizations haven’t learned from their previous technology deployment challenges. Organizations should establish clear guidelines and action plans for transitioning from pilot phases to full-scale implementation. This is vital to realize the full potential of tools like AI.

  8. Constant Change

Do our leaders need training to help them understand new paradigms and guide the organization in a world of constant change?

A continual education culture should be established to navigate the relentless change associated with emerging technologies. Such training would involve understanding new paradigms, learning how to foster adaptability, and creating a learning culture that supports leaders during times of uncertainty and rapid technological shifts.

  9. Balancing Human-Machine Collaboration

Who’s taking the lead: machines or humans?

Assess the roles of machines and humans within the organization. Striking a balance between automation and human involvement ensures harmonious collaboration that leverages the strengths of both, leading to increased efficiency and innovation.

  10. Regulatory Aspects

Should regulatory aspects be in our focus from the first discussions of the technology?

Prioritize regulatory considerations from the outset. Proactively addressing regulatory compliance ensures a smoother integration process and mitigates potential legal and ethical challenges.

  11. Contingency Planning

If we change or terminate technology at the company level, do we need a contingency plan?

A thoroughly prepared contingency plan should be in place when changing or terminating technology at the company level. This ensures minimal disruption and facilitates a smooth transition in case unforeseen challenges arise during the implementation or adoption process.

  12. Talent Management

How can I know if we have the talent to cultivate talent in the coming periods?

Focus on developing and retaining talent capable of driving technological advancement. This involves identifying, nurturing, and empowering individuals who possess the skills and mindset to lead the organization through the evolving landscape of AI and emerging technologies.

The Bottom Line

Being a leader who guides other leaders in transforming and revolutionizing industries and management domains demands a distinct set of skills and qualities, particularly the ability to pose the right questions — both to oneself and to the relevant stakeholders. Sometimes, despite our vantage point at the helm, we fail to anticipate the emergence of the next disruptive technology or product.

Some leaders might assert, “I rely on intuition, experience, and advisors to perceive what others cannot.” I advise caution. When it comes to leveraging technology adoption to gain a genuine competitive advantage, only a select few can keep pace with the relentless influx of new technologies.

It’s akin to a wild goose chase. But initiating the right discussions with yourself and your team can serve as a crucial starting point, potentially leading to the capture of flocks of opportunities!

What do software supply chain security and generative AI have to do with each other? Until recently, the answer was “not much.”

But that’s changing due to a new type of software supply chain risk known as package hallucination. Package hallucination creates novel opportunities for threat actors to plant malicious code within software supply chains and prey on developers who use generative AI to write code.

Here’s a look at how this type of attack works, why it adds a new layer of difficulty to software supply chain security, and what enterprises can do to stay ahead of this challenge.

What Is Package Hallucination?

Package hallucination happens when a large language model (LLM) references a software library, module, or other type of package that does not actually exist.

For instance, imagine you’re using an AI tool like GitHub Copilot to help develop a Python application, and it spits out a line of code like the following:
import advancedmathlib

No Python module or package named advancedmathlib exists. If Copilot generated code like this, it would be hallucinating.

How Package Hallucination Affects Supply Chain Security

In some cases, AI-generated code that references packages that don’t exist would simply fail to build or run properly, because dependency resolution would fail when trying to retrieve the nonexistent package.

But it’s possible that something more insidious could happen. If threat actors were to create a package with the same name as the one hallucinated by an AI model, and if they injected malicious code into that package, the application would likely download and run the malicious code.
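To see why the malicious code would actually run, recall that Python packaging lets a setup script execute arbitrary logic at install time. The sketch below is illustrative only, not taken from any real attack; it registers the hallucinated name from the earlier example, and a real payload would be far better hidden than a print statement:

# setup.py for a hypothetical package squatting on a hallucinated name.
# setuptools supports custom install commands, so code executes during
# `pip install` itself, before the application ever imports the package.
from setuptools import setup
from setuptools.command.install import install

class PostInstall(install):
    def run(self):
        install.run(self)
        # A real attacker would place payload logic here; this sketch
        # only prints, to show that installation alone executes code.
        print("arbitrary code executed at install time")

setup(
    name="advancedmathlib",  # the hallucinated name from the example above
    version="0.0.1",
    cmdclass={"install": PostInstall},
)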

Note, too, that the package does not need to be malicious at the outset. It could initially be legitimate but beacon to a command and control server that updates the package with malicious code at a later date – so simply scanning the package for malicious contents won’t always reveal the risk.

In this way, AI package hallucination creates novel opportunities for attackers to poison software supply chains.

To date, no real-world attack exploiting package hallucination is known to have occurred. But researchers at Lasso Security showed how easily this type of attack could happen. They found that AI models hallucinated software package names at surprisingly high rates, and frequently repeated the same invented names – with Gemini, the AI service from Google, referencing at least one hallucinated package in response to nearly two-thirds of all prompts issued by the researchers.

Even more striking, the researchers also uploaded a “dummy” package with one of the hallucinated names to a public repository and found that it was downloaded more than 30,000 times in a matter of weeks. This is proof positive that large numbers of developers are blindly trusting AI-generated code that references hallucinated packages, and that it would be quite easy for threat actors to exploit this risk.

A New Twist on an Old Story: Package Hallucination vs. Typosquatting

If software supply chain exploits involving AI hallucination seem familiar, it’s probably because they resemble other types of supply chain attacks – especially package typosquatting, a technique threat actors have long used to trick software developers into incorporating malicious code into applications.

Package typosquatting involves uploading malicious packages with names that are similar, but not identical, to popular software packages. For instance, an attacker typosquatting on PyTorch (a legitimate, widely used Python library) might name a package PyTorchh or Py_Torch. Through carelessness when coding or browsing software repositories, developers might accidentally import the malicious package into their applications.
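The classic mitigation for typosquatting is worth sketching, because it highlights what changes with hallucination. The hedged example below, using only the Python standard library, flags dependency names that are close to, but not identical to, well-known packages; the POPULAR list is a stand-in for a real curated index:

# A minimal typosquatting check based on string similarity.
from difflib import get_close_matches

POPULAR = ["torch", "numpy", "pandas", "requests", "scipy"]  # stand-in list

def looks_like_typosquat(name: str) -> bool:
    candidate = name.lower()
    if candidate in POPULAR:
        return False  # an exact match to a known package is fine
    # Flag names within the similarity cutoff of a popular package.
    return bool(get_close_matches(candidate, POPULAR, n=1, cutoff=0.8))

print(looks_like_typosquat("torchh"))    # True: one keystroke from "torch"
print(looks_like_typosquat("requests"))  # False: the legitimate package

A check like this is useless against hallucinated names, which are not near-misses of anything, and that gap is part of what makes hallucination attractive to attackers.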

However, compared to package typosquatting, AI package hallucination has the potential to be more insidious and harmful, for several reasons:
● When used as a software supply chain attack method, package hallucination is likely to have a much higher success rate than typosquatting because it doesn’t rely on errant keystrokes to trigger a successful attack. Instead, it exploits the tendency of programmers to run AI-generated code without assessing or validating it first.
● Developers are more likely to fall for package hallucination attacks because they may assume that code generated by popular AI-assisted development tools can be trusted.
● For attackers, identifying commonly hallucinated package names doesn’t require highly specialized skills or large amounts of time and effort. They can simply generate code using AI services, then scan it for repeated instances of hallucinated package names, using the same method as the Lasso Security researchers (a minimal sketch of such a scan follows this list).
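A scan of that kind, whether run by an attacker hunting for names to register or by a defender auditing generated code, is simple to sketch. The illustration below, assuming Python 3.10+ and access to PyPI’s public JSON API, extracts imported module names from generated snippets, filters out the standard library, and flags recurring names that do not resolve on PyPI; the snippets list stands in for real model output:

# A minimal sketch of scanning AI-generated Python for hallucinated imports.
import ast
import sys
from collections import Counter
from urllib.error import HTTPError
from urllib.request import urlopen

def imported_modules(source: str) -> set[str]:
    """Return the top-level module names imported by a snippet."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def exists_on_pypi(package: str) -> bool:
    """True if the name resolves on PyPI; a 404 means it does not."""
    try:
        with urlopen(f"https://pypi.org/pypi/{package}/json"):
            return True
    except HTTPError:
        return False

# Stand-in for many AI-generated snippets collected across prompts.
snippets = ["import advancedmathlib\n", "import advancedmathlib\nimport requests\n"]

counts = Counter(
    name
    for snippet in snippets
    for name in imported_modules(snippet)
    if name not in sys.stdlib_module_names  # ignore the standard library
)
for name, seen in counts.items():
    if seen > 1 and not exists_on_pypi(name):
        print(f"possible hallucinated package: {name} (seen {seen} times)")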

In short, package hallucination will likely prove easier for threat actors to exploit, and lead to a higher rate of malicious package downloads, than traditional approaches to injecting malicious code into software supply chains.

Protecting Your Software Supply Chain From Package Hallucination

The good news is that protecting software supply chains from this new type of risk boils down to leveraging the defenses that enterprises should already have in place, such as:
● Generating a Software Bill of Materials (SBOM) for applications they develop. SBOMs identify the software components within applications, making it easier to determine whether they include any hallucinated packages that may contain malicious code. (Unfortunately, IDC research shows that only 28 percent of enterprises automatically generate SBOMs.) A minimal verification sketch follows this list.
● Using Software Composition Analysis (SCA) tools to scan codebases for vulnerable components, including unrecognized packages that may have been hallucinated.
● Establishing guidelines and policies for AI-assisted software development, such as rules requiring developers to validate third-party software components before integrating them into a codebase.
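To illustrate the first item, here is a minimal sketch of how an SBOM could feed such a check. It assumes a CycloneDX-format JSON SBOM (sbom.json is a hypothetical file name) and uses PyPI’s public JSON API to flag Python components that do not resolve on the index, a possible sign of a hallucinated or otherwise suspect dependency:

# A minimal sketch: flag SBOM components that do not resolve on PyPI.
import json
from urllib.error import HTTPError
from urllib.request import urlopen

def exists_on_pypi(package: str) -> bool:
    try:
        with urlopen(f"https://pypi.org/pypi/{package}/json"):
            return True
    except HTTPError:
        return False

with open("sbom.json") as f:  # hypothetical CycloneDX SBOM file
    sbom = json.load(f)

# CycloneDX lists dependencies in a top-level "components" array; the
# "purl" field identifies the ecosystem (pkg:pypi/... for Python).
for component in sbom.get("components", []):
    if component.get("purl", "").startswith("pkg:pypi/"):
        if not exists_on_pypi(component["name"]):
            print(f"unrecognized package in SBOM: {component['name']}")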

Software supply chain security was already a serious challenge, with attacks surging in recent years. The package hallucination risk suggests that the problem is likely to grow even worse, making it all the more important for enterprises to invest in effective software supply chain defense and visibility solutions.

Christopher Tozzi - Adjunct Research Advisor - IDC

Christopher Tozzi, an adjunct research advisor for IDC, is senior lecturer in IT and Society at Rensselaer Polytechnic Institute. He is also the author of thousands of blog posts and articles for a variety of technology media sites, as well as a number of scholarly publications. Prior to pivoting to his current focus on researching and writing about technology, Christopher worked full-time as a tenured history professor and as an analyst for a San Francisco Bay Area technology startup. He is also a longtime Linux geek, and he has held roles in Linux system administration. This unusual combination of "hard" technical skills with a focus on social and political matters helps Christopher think in unique ways about how technology impacts business and society.

The world of partnering has never been more complex. Vendor strategies are evolving faster than ever to keep pace with changing customer buying behaviors and partner business models.

Understanding how partner business models are evolving can help vendors build a partnering framework that is robust and flexible enough to reward partner activities while remaining customer-led.

Trend 1: Partners Are Deepening Commitment to their Strategic Vendor Partner

The breadth of a partner’s portfolio can provide an indication of the level of commitment a partner gives to each vendor relationship. Partners with many strategic vendor relationships are likely dividing their energy and resources among them. Partners that work with just one, two, or three core vendors will likely give greater attention to each relationship.

This is important. While each vendor has visibility into what its partners are doing with it, it may not know how those partners are engaging with other vendors.

IDC’s EMEA Partner Survey 2024 showed that partners derive more than half of their total revenue from activities connected to their most strategic vendor partner. Just 6% of partners expect the share of revenue connected to their core strategic vendor to decline in the next 12 months, with 45% expecting it to remain at today’s level. Half expect it to increase.

For partners, there are specific benefits to concentrating resources on a single core vendor relationship. At the vendor level, demonstrated commitment can lead to the allocation of more resources, whether to drive new business, launch new technologies, or support co-sell and co-creation activities that drive new customer wins.

For the partner’s business model, deep commitment to a specific vendor’s portfolio and road map can provide clarity in terms of future business development planning and skills development in the organization.

Trend 2: P2P Collaboration Accelerates Within Non-Linear Go-To-Market Motions

Partners traditionally seek to serve as a single point of contact for the end customer. The customer turns to the partner to procure, deploy, and service their IT environment.

However, changes in customer buying behavior and new routes-to-market and deployment models have led to the emergence of non-linear go-to-market motions in which multiple partners can be involved at different stages of a single customer’s journey.

Customers have choices in how they procure, consume, and optimize their IT environments, and each customer may involve a unique combination of marketplaces, platforms, and partners.

IDC’s EMEA Partner Survey 2024 shows that 60% of partner revenues are now direct payments from end customers. This means that 40% of partner revenue comes from elsewhere: sub-contracting through another partner, fund disbursements from a marketplace operator, or payments from vendors.

Trend 3: Looking Beyond the Primary Activity of Partners

Many vendors used to categorize their partner base according to their primary activity. Partners that primarily focused on reselling vendor products and solutions were categorized as VARs. Partners that derived most of their revenue through services were labelled as some form of managed services or consultancy services provider.

Results of our survey suggest there are potential risks in this approach. Most partners now operate multiple partner business models that span sell, service, and build roles for the customer. While only a small number of partners in the survey self-identified as cloud service providers, for example, a significant number offer this as a secondary business model.

It is increasingly important for vendors to consider the activity mix of each individual partner to uncover how they engage with the customer. Vendors that only engage with a partner based on their primary business activity are potentially leaving opportunities on the table to drive additional customer engagement through other areas of expertise and capabilities the partner possesses.

Bottom Line

Gaining a deeper understanding of the activity mix and commitment levels of partners is key for vendors to allocate resources based on partner potential and to look for untapped opportunity within their existing partner base.

Knowing how your partners interact with other vendors and customers — and knowing how important you are to their overall business and what their long-term strategy is — has become critical to inform vendor ecosystem strategies.

To learn more, listen to IDC’s 2024 Channels and Alliances Predictions webcast, or reach out to discover how we can help unlock partner potential for vendors of all sizes.

The Games of the XXXIII Olympiad, otherwise known as Paris 2024, will take place against a backdrop of the most sophisticated cyberthreat landscape ever. The capabilities of threat actors are evolving and substantial, and they pose a risk not only to Games operations directly, but to the wider Olympics ecosystem and the broader business environment.

The high-profile global nature of the event makes the Olympic Games an attractive target for threat actors motivated by varying goals. Athletes from 200 countries are expected to participate in the Games, with coverage broadcast around the world.

To mitigate Games-related risks, organizations in Europe will increase spending on cybersecurity services by $150M in 2024, according to our analysis. Of this figure, 63% ($94M) will be spent in France.

Cyberthreats rarely respect geographic borders. We expect a variety of tailored threats related to the Games to cause a ripple effect of increased spending across Europe and, to a lesser extent, around the world. Some threats will target IT assets in use for the Games, while others will use phishing content themed around the Olympics to trick users into clicking on malicious links (among many other threat types).

A vicious cycle of risk is at play. It involves political factors that may trigger changes in the threat landscape, advances in AI, and a shortage of resources in organizations. This is driving cybersecurity and business leaders to bring forward cybersecurity services spending.

Professional cybersecurity services, including cyber-resilience consulting and incident management, will see increased spending. This should improve the ability of organizations to prevent or detect and respond to cybersecurity events. The level of risk and spending varies between vertical sectors.

The French national cybersecurity agency ANSSI has led multiple projects to mitigate the risks. It said, “The Paris 2024 Olympic and Paralympic Games are likely to attract the attention of various malicious cyber actors who may seek to take advantage of the event to gain visibility and make their claims known, damage the image and prestige of competitions such as those of France, or simply seek financial gains through extortion. These various threats to the Games are further amplified by the digitalization of this type of event in terms of the general organization, the running of the events, the logistical aspects, the infrastructure and the rebroadcasting of the events via different media.”

Indeed, Paris 2024 will be the most connected Olympics ever in terms of the IT estate, which includes back-of-house systems, critical national infrastructure, sport and broadcast technology, merchandising, and ticketing. The criticality of each asset varies significantly.

Organizations in France are moderately well prepared to address cyber-risk in comparison to their peers across Europe. But just 19% of large organizations in France believe their cybersecurity posture is mature or better. This is lower than the European average.

The Olympic Games will take place in Paris and 21 other cities across France from July 26–August 11, followed by the Paralympic Games from August 28–September 8, in the largest event ever held in France.

The International Olympic Committee is working with a range of global technology and cybersecurity providers to protect the Games. The cybersecurity issues involved are discussed in greater detail in a new IDC report, Cybersecurity and the 2024 Olympic and Paralympic Games.