As businesses across Asia/Pacific accelerate their digital transformation journeys, artificial intelligence (AI) is becoming a core enabler of innovation. From identity and access management (IAM) to risk-based trust frameworks, AI is reshaping the cybersecurity landscape. However, as AI adoption grows, so do concerns around security, trust, and compliance.
According to IDC’s Asia/Pacific Security Study, 2024, 76.5% of enterprises in the region say they are not confident in their organization’s ability to detect and respond to AI-powered attacks. Their chief concerns include AI-driven vulnerability scanning by attackers, the rapid exploitation of zero-day vulnerabilities, social engineering attacks that use AI and harvested user data to personalize content and increase effectiveness, and AI-powered ransomware with dynamic negotiation and extortion tactics. These AI-driven threat vectors pose even greater risk in verticals handling sensitive and confidential information.

With cybersecurity emerging as a central theme across the region, AI-fueled business models must address key challenges: How can organizations ensure AI systems are secure, transparent, and resilient? How should regulatory frameworks evolve to accommodate AI-driven cybersecurity? What steps can businesses take to balance AI innovation with trust?
While AI is poised to enhance security automation, its integration is far from seamless. According to IDC FutureScape: Worldwide Security and Trust 2025 Predictions – Asia/Pacific (Excluding Japan) (APeJ) Implications, by 2027, only 25% of consumer-facing companies in APeJ will use AI-powered IAM for personalized, secure user experiences, owing to persistent difficulties with process integration and cost concerns. This indicates a trust gap in AI-driven authentication and identity protection, particularly in consumer-facing sectors such as retail, banking, and e-commerce.