As businesses across Asia/Pacific accelerate their digital transformation journeys, artificial intelligence (AI) is becoming a core enabler of innovation. From identity and access management (IAM) to risk-based trust frameworks, AI is reshaping the cybersecurity landscape. However, as AI adoption grows, so do concerns around security, trust, and compliance.
According to IDC’s Asia/Pacific Security Study, 2024, 76.5% of enterprises in the region say they are not confident in their organization’s ability to detect and respond to AI-powered attacks. Their chief concerns include AI-driven vulnerability scanning by attackers, rapid exploitation of zero-day vulnerabilities, social engineering attacks that combine user data and AI to personalize content and increase effectiveness, and AI-powered ransomware with dynamic negotiation and extortion tactics. These AI-driven attack vectors pose heightened risk in verticals that handle sensitive and confidential information.
With cybersecurity emerging as a central theme across the region, AI-fueled business models must address key challenges: How can organizations ensure AI systems are secure, transparent, and resilient? How should regulatory frameworks evolve to accommodate AI-driven cybersecurity? What steps can businesses take to balance AI innovation with trust?
While AI is poised to enhance security automation, its integration is far from seamless. According to IDC FutureScape: Worldwide Security and Trust 2025 Predictions – Asia/Pacific (Excluding Japan) (APeJ) Implications, by 2027, only 25% of consumer-facing companies in APeJ will use AI-powered IAM for personalized, secure user experiences due to persistent difficulties with process integration and cost concerns. This indicates a trust gap in AI-driven authentication and identity protection, particularly in consumer-facing sectors like retail, banking, and e-commerce.
Strengthening AI Regulation, Compliance, and Governance Across Asia/Pacific and Japan (APJ)
A fragmented regulatory environment across Asia/Pacific further complicates the issue. While Singapore and Australia are advancing their AI governance frameworks, countries like India and Indonesia are still catching up, creating inconsistencies in how businesses implement AI security solutions. The challenge is ensuring AI-powered IAM meets evolving compliance requirements while remaining cost-effective and scalable.
One of the most critical shifts in cybersecurity will be the introduction of AI Bills of Materials (AI BoM). By 2028, 70% of data products will include a Data BoM, detailing how data was collected, processed, and consent was obtained. This evidentiary trail will be essential for demonstrating compliance and ensuring AI systems do not operate as black boxes.
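To make the idea of an evidentiary trail concrete, the sketch below shows one way a Data BoM record might be structured and serialized for audit. The field names and values are illustrative assumptions, not an IDC or standards-body schema.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative sketch only: these fields are assumptions about what a
# Data BoM could capture (collection, processing, consent), not a
# published AI BoM specification.
@dataclass
class DataBom:
    dataset_name: str
    source: str             # where the data was collected
    collection_method: str  # how the data was collected
    processing_steps: list  # transformations applied before training
    consent_basis: str      # how consent was obtained
    retention_policy: str

bom = DataBom(
    dataset_name="customer_support_logs",
    source="internal CRM export",
    collection_method="user-submitted tickets",
    processing_steps=["PII redaction", "deduplication", "tokenization"],
    consent_basis="explicit opt-in at account creation",
    retention_policy="24 months",
)

# Serialize the record so the evidentiary trail can be stored and
# audited alongside the model it describes.
print(json.dumps(asdict(bom), indent=2))
```

Keeping such records machine-readable is what lets auditors and regulators check compliance without treating the AI system as a black box.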
AI governance is now mandatory rather than exploratory. Some nations have demonstrated leadership in shaping AI governance frameworks, setting the stage for responsible and secure AI adoption across the region. These countries are proactively developing policies and frameworks to ensure AI-driven technologies align with security, compliance, and ethical standards.
- Singapore has established AI Verify, a government-backed AI governance initiative to promote transparency and trust in AI solutions.
- Australia's AI Ethics Framework is shaping AI regulations, ensuring fairness, transparency, and security.
- India is developing its AI regulatory framework under the National AI Strategy, emphasizing AI risk management and data localization.
- Japan is incorporating Responsible AI into its business ecosystem, focusing on privacy, fairness, and regulatory alignment with global AI governance models.
Security Risks and Challenges Due to Rapid GenAI Adoption
The rapid expansion of generative AI (GenAI) across enterprises presents another pressing security challenge. IDC predicts that in 2025, 20% of organizations in APJ will move from proof-of-concept (POC) to production in specific GenAI use cases without a comprehensive risk-based assessment of their trust capabilities, potentially creating a cybersecurity house-of-cards scenario.
This lack of risk assessment poses significant dangers, including:
- Data leakage risks: GenAI models trained on sensitive data may inadvertently expose proprietary or personal information.
- Bias and fairness concerns: Without stringent governance, AI systems may reinforce biases, leading to compliance and reputational risks.
- Regulatory crackdowns: Governments across APeJ, particularly in China and India, are tightening AI security regulations, and non-compliant businesses may face significant penalties.
IDC's Unified AI Governance Model
At its core, IDC’s Unified AI Governance Model is a strategic framework designed to balance innovation with risk management, ensuring that AI deployment aligns with compliance, security, transparency, and ethical considerations. This model is built on four key pillars: transparency and explainability, security and resilience, compliance and privacy protection, and human-in-the-loop (HITL) governance. It recognizes that AI governance is not just a technical challenge but a cross-functional initiative involving regulatory alignment, risk assessment, and continuous monitoring.
IDC defines AI governance as a system of laws, policies, frameworks, practices, and processes that enable organizations to manage AI risks while driving business value. Governance must be integrated into strategy rather than treated as a reactive measure. Without it, enterprises face operational inefficiencies, legal exposure, and reputational risks. The model also acknowledges external influences, such as regional regulations, ethical considerations, and societal expectations, which vary significantly across APeJ markets. Ensuring that AI governance adapts to these external factors is critical for sustainable and trusted AI adoption.
IDC’s Unified AI Governance Model provides a structured approach to managing AI security and trust by addressing key questions such as:
- Who is using what data, and where is it stored?
- How is personally identifiable information (PII) protected through encryption or anonymization?
- Are AI models being tested against risk controls and compliance requirements?
- Is there a risk assessment framework for GenAI deployments?
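One way to operationalize questions like these is as a pre-deployment gate in the release pipeline: a model only ships once each question has a documented "yes." The sketch below is a hypothetical illustration of that pattern; the check names and gate logic are assumptions, not part of IDC's model.

```python
# Hypothetical governance gate: encodes the questions above as named
# checks that a deployment pipeline could evaluate before release.
GOVERNANCE_CHECKS = {
    "data_inventory_documented":
        "Who is using what data, and where is it stored?",
    "pii_protected":
        "Is PII protected through encryption or anonymization?",
    "risk_controls_tested":
        "Are models tested against risk controls and compliance requirements?",
    "genai_risk_assessed":
        "Is there a risk assessment framework for GenAI deployments?",
}

def governance_gate(answers: dict) -> tuple:
    """Return (approved, unresolved_questions) for a deployment request."""
    unresolved = [q for key, q in GOVERNANCE_CHECKS.items()
                  if not answers.get(key)]
    return (len(unresolved) == 0, unresolved)

approved, gaps = governance_gate({
    "data_inventory_documented": True,
    "pii_protected": True,
    "risk_controls_tested": False,  # controls not yet verified
    "genai_risk_assessed": True,
})
print(approved)  # False: one question remains unresolved
print(gaps)
```

The design choice here is that the gate fails closed: any undocumented or missing answer blocks deployment, which mirrors the model's emphasis on continuous monitoring rather than one-off sign-off.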
Path Forward: Cybersecurity and AI Governance for Asia/Pacific Businesses
To foster a secure AI-driven future, businesses must take a proactive approach to cybersecurity and AI governance. Key steps include:
- Embedding AI BoM in Cybersecurity Practices: Developing transparent AI security frameworks that document data provenance, consent mechanisms, and compliance checkpoints.
- Investing in AI-Powered IAM with Risk-Based Authentication: Incorporating adaptive authentication, behavioral analytics, and risk scoring to strengthen trust in AI-driven security systems, instead of relying solely on AI-driven IAM.
- Conducting Comprehensive Risk Assessments for GenAI Deployments: Establishing robust governance policies to prevent unintended risks when moving from GenAI POC to production.
- Integrating Autonomous AI for IT Operations: By 2027, GenAI and analytics deployments for IT operations use cases will increase team productivity by 15%, generating $1.5 billion in economic and business value. Automated IT service desk responses, anomaly detection, and predictive resource capacity planning will be critical for AI-enabled security frameworks.
- Collaborating with Regional Regulatory Bodies: Actively participating in shaping AI governance discussions and ensuring cybersecurity policies align with emerging regulatory frameworks.
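The risk-based authentication step above can be sketched in miniature: combine weighted login signals into a score, then choose between allowing, stepping up to MFA, or blocking. The signals, weights, and thresholds below are invented for illustration and would need tuning against real behavioral data.

```python
# Minimal sketch of risk-based (adaptive) authentication.
# Weights and thresholds are illustrative assumptions, not a
# production policy.

def risk_score(signals: dict) -> float:
    """Sum the weights of the risk signals present in this login."""
    weights = {
        "new_device": 0.4,        # login from an unrecognized device
        "unusual_location": 0.3,  # geolocation deviates from history
        "odd_hour": 0.1,          # outside the user's normal hours
        "velocity_anomaly": 0.2,  # impossible travel or rapid retries
    }
    return sum(w for name, w in weights.items() if signals.get(name))

def auth_action(score: float) -> str:
    """Map a risk score to an access decision."""
    if score >= 0.6:
        return "block"
    if score >= 0.3:
        return "step_up_mfa"  # adaptive step-up authentication
    return "allow"

score = risk_score({"new_device": True, "unusual_location": True})
print(round(score, 2), auth_action(score))
```

In practice the score would come from behavioral analytics models rather than fixed weights, but the shape of the decision (score, threshold, graduated response) is the same.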
Partner with IDC | CSO to elevate your brand presence at Asia’s leading gathering of CISOs and IT security executives. Position your unique capabilities to become security leaders’ trusted vendor of choice in safeguarding their valuable corporate data in the cloud and in exploring the pivotal role of AI and quantum-proof technologies. Happening across seven Asia/Pacific cities from April to November 2025, join us at the event to showcase your case studies, success stories, and more!
About the Author
Senior Research Manager, IDC Asia/Pacific
Sakshi is responsible for developing and socializing IDC’s point of view within security services, covering both legacy and modern cybersecurity technologies. Her role involves close collaboration with technology vendors and buyers, developing market insights, and providing research, consulting, and advisory services in the fields of security software and services.