AI-powered cyberattacks are accelerating in speed and sophistication, forcing organizations to rethink how they approach security, data governance, and risk. As enterprises embed generative AI into everyday workflows, they are introducing new architectural complexity and new exposure points.
In this conversation, Grace Trinidad, Research Director for AI Security and Trust at IDC, explains why this moment in cybersecurity feels different, why many AI-driven attacks are still a people problem, and why data and identity management must come first in any AI strategy.
What makes this moment in cybersecurity so different?
What makes this time in cybersecurity so difficult and painful to navigate is the speed. Cyberattacks are happening at orders of magnitude greater scale. We are seeing more threats and more vulnerabilities.
At the same time, organizations are embedding AI and generative AI into their workflows. Those technologies introduce their own vulnerabilities that enterprises have not previously had to face. It creates an entirely different architecture layered on top of what organizations have already invested in to secure their environments.
A lot of organizations are trying to figure out what the right approach looks like right now.
How is generative AI changing the threat landscape?
Generative AI has changed the threat landscape primarily because of the speed at which attackers can deploy and automate attacks.
Phishing attempts look more professional. They are no longer filled with obvious grammatical errors or typographical tells, because generative AI has automated much of the work of crafting them.
Criminals are using the same technologies that enterprises are using. They can iterate on attacks and make them more sophisticated each time. It creates an ongoing arms race. As enterprises ramp up their defenses, attackers ramp up their sophistication. That dynamic is going to continue for the foreseeable future.
Is this primarily a technology problem or a people problem?
It is still largely a people problem.
Many generative AI-enabled attacks today are still phishing and social engineering attempts. The technology has improved, but the entry point is often human behavior and workflow gaps.
For example, in the 2024 Hong Kong deepfake incident, funds were transferred after a highly convincing voice and video deepfake impersonated a senior executive. The employee sensed a red flag, but there was no verification workflow in place for confirming high-level requests.
The recommendations in cases like this are often low-tech. Organizations should implement verification processes for high-value transactions. That might mean two real people must sign off. It could involve authentication codes or additional confirmation pathways.
Redundancy is the name of the game in AI security. Clear verification workflows are critical.
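The low-tech controls described above can be expressed as a simple policy check. The sketch below is a hypothetical illustration, not any specific organization's workflow: the threshold amount, approver roles, and out-of-band code are all invented for the example, but it captures the two redundancies Trinidad describes: two distinct people must sign off, and a confirmation code arrives over a separate channel.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """A high-value transfer awaiting out-of-band verification."""
    amount: float
    threshold: float = 100_000.0  # hypothetical policy threshold
    approvals: set = field(default_factory=set)
    code_confirmed: bool = False

    def approve(self, approver: str) -> None:
        # A set ensures each approver counts only once;
        # the same person signing twice does not satisfy the policy.
        self.approvals.add(approver)

    def confirm_code(self, supplied: str, expected: str) -> None:
        # The expected code is delivered over a separate channel
        # (e.g. a phone call), so a deepfaked video call alone
        # cannot produce it.
        self.code_confirmed = supplied == expected

    def may_execute(self) -> bool:
        # Below the threshold, normal controls apply. Above it,
        # require two distinct sign-offs AND the out-of-band code.
        if self.amount < self.threshold:
            return True
        return len(self.approvals) >= 2 and self.code_confirmed
```

The point of the sketch is that the defense is procedural, not technological: no single impersonated voice or video, however convincing, can satisfy both checks on its own.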
What blind spots are organizations facing right now?
Many organizations believe it will not happen to them because they have not yet been targeted. It is still early days. We have not seen widespread, devastating breaches directly tied to generative AI adoption. But that does not mean the risk is not real.
At IDC, we focus on two pillars to improve AI security posture: data controls and identity management.
Organizations need to know where their data resides, how it is secured, and how it will be used in AI systems. Identity is foundational. AI security begins by identifying who is using a particular AI technology, which technology they are using, and what they are using it for.
Identity and access management is the foundation of AI security.
Does AI security replace traditional cybersecurity?
No. AI security does not replace traditional security. It is a layer on top of traditional security.
Organizations still need networking security, identity and access management, and all core cybersecurity components. AI security builds on that existing foundation.
This space will continue to evolve quickly. We are seeing acquisitions, buying activity, and rapid innovation. The trajectory will likely mirror what we saw with cloud adoption. What cloud security looked like in its early years is very different from what it looks like today. AI security will mature in a similar way.
Where should organizations focus right now?
Start with data.
Many early adopters are encountering roadblocks because their data was not ready. It was not properly tagged, secured, or governed. Early data decisions are incredibly important.
Organizations should clean and organize their data, eliminate data that does not provide value, and ensure it is properly protected before integrating it into AI systems.
At the same time, they should modernize identity and access management. Many AI security technologies require a robust identity framework to function effectively. Data and identity are the two pillars organizations should prioritize now.
How should leaders balance AI innovation with resilience?
There is growing tension between digital sovereignty and AI innovation.
Organizations want to enable AI and generative AI workloads, but they also want to be resilient and less dependent on external vendors. Cloud outages and infrastructure disruptions have made downtime extremely costly. Tolerance for outages is near zero.
At the same time, AI innovation and AI security rely heavily on platforms and vendors. Few organizations have all the components, talent, and infrastructure required to secure AI workloads entirely on their own.
This creates a risk conversation. Enterprises must determine what level of risk they are willing to tolerate. We are moving away from a philosophy of securing everything at any cost toward a more tailored, risk-centered approach.
That means cybersecurity strategies will look different depending on an organization’s risk appetite and operational priorities. Finding the right balance between innovation, security, and resilience is one of the defining tensions organizations face this year.
For more insights on this topic, check out BizTech’s recent interview with Grace Trinidad.