From Spreadsheets to Self-Learning Systems: Asia-Pacific Banks Turn to AI to Police Compliance

By Amelia Rowe · Published Jan 6, 2026 · 3 min read

Asia-Pacific banks are entering 2026 with a clear message from boards and regulators: compliance and financial‑crime controls can no longer be run on spreadsheets and siloed tools. A growing number of regional lenders are turning to artificial intelligence not just to catch more bad actors, but to fundamentally redesign how risk and regulation are managed across the institution.
At Oracle AI World 2025 in Las Vegas, Sovan Shatpathy, senior vice president for financial‑services product management, said Asia‑Pacific banks using Oracle’s AI‑driven compliance stack are reporting productivity gains of 70 percent or more in some investigative workflows. By automating transaction monitoring, alert triage and case aggregation, AI is freeing human analysts to focus on complex cases instead of chasing false positives. The biggest early wins are in anti‑money laundering (AML) and fraud, where models learn from past cases to refine risk scores and surface genuinely suspicious patterns.
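The mechanics of the alert triage described above can be sketched in a few lines: a model trained on past case outcomes scores incoming alerts, and only the genuinely suspicious ones reach a human analyst. The feature names, weights, and threshold below are purely hypothetical assumptions for illustration, not Oracle's actual stack.

```python
import math

# Hypothetical weights a model might have learned from historical case outcomes
WEIGHTS = {
    "amount_zscore": 1.4,        # transaction size vs. the customer's normal pattern
    "new_counterparty": 0.9,     # first-ever payment to this counterparty
    "high_risk_corridor": 1.1,   # cross-border flow via a flagged trade corridor
    "structuring_pattern": 1.8,  # repeated just-below-threshold transfers
}
BIAS = -2.0  # keeps the baseline alert low-risk

def risk_score(features: dict) -> float:
    """Logistic risk score in [0, 1] from an alert's features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def triage(alerts: list, threshold: float = 0.5) -> list:
    """Surface high-scoring alerts to analysts, most suspicious first."""
    scored = [{**a, "score": risk_score(a["features"])} for a in alerts]
    return sorted((a for a in scored if a["score"] >= threshold),
                  key=lambda a: a["score"], reverse=True)

alerts = [
    {"id": "A1", "features": {"amount_zscore": 0.2}},  # routine payment
    {"id": "A2", "features": {"structuring_pattern": 1.0, "new_counterparty": 1.0}},
]
print([a["id"] for a in triage(alerts)])  # → ['A2']: only the structuring case surfaces
```

The productivity gain reported in the article comes from exactly this filtering step: analysts stop chasing the A1-style false positives and spend their time on the A2-style patterns.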
Banks in Singapore, Hong Kong and Australia are among the most advanced adopters, driven by active regulators and exposure to cross‑border flows. Thai and Malaysian lenders are catching up as they digitalise trade‑finance corridors with Dubai, Singapore and other hubs, using AI to screen counterparties, documents and shipment data for anomalies. Shatpathy highlighted “big corridors of trade going through Dubai and Singapore” as a focal point for AI‑enabled guarantees and documentary‑trade risk checks.
However, rolling out AI in compliance is not just a technology project. A Deloitte 2026 global banking outlook notes that many AI programmes are “throttled by brittle and fragmented data foundations, mounting compliance demands, outdated legacy systems and internal resistance to change.” Banks that spent the past decade layering digital channels over old cores now face the harder work of cleaning and standardising data across jurisdictions, business lines and legal entities. Without that, even the smartest models will struggle.
Regulators in Singapore, Hong Kong, Japan and Australia are generally supportive but cautious. Supervisors want to see explainability, robust model‑risk governance and clear human accountability for AI‑assisted decisions. That has pushed banks to adopt “human‑in‑the‑loop” frameworks in early phases—AI proposes, compliance officers dispose—before gradually increasing automation where evidence shows better outcomes.
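The "human-in-the-loop" pattern supervisors are asking for can be sketched as a routing rule: the model acts autonomously only inside bands that governance has approved, and the ambiguous middle always goes to a compliance officer, with a recorded rationale for explainability. The band thresholds below are illustrative assumptions, not any regulator's actual requirements.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    alert_id: str
    model_score: float  # model's suspicion score in [0, 1]
    route: str          # "auto_close" | "human_review" | "auto_escalate"
    rationale: str      # recorded for the explainability / audit trail

# Hypothetical governance-approved bands; widened toward automation only
# as evidence of better outcomes accumulates.
AUTO_CLOSE_BELOW = 0.05
AUTO_ESCALATE_ABOVE = 0.95

def route_alert(alert_id: str, score: float) -> Decision:
    if score < AUTO_CLOSE_BELOW:
        return Decision(alert_id, score, "auto_close",
                        "score below governance-approved auto-close band")
    if score > AUTO_ESCALATE_ABOVE:
        return Decision(alert_id, score, "auto_escalate",
                        "score above governance-approved escalation band")
    # AI proposes, compliance officers dispose: the ambiguous middle is human-decided
    return Decision(alert_id, score, "human_review",
                    "score inside the human-decision band")

print(route_alert("TX-1", 0.42).route)  # → human_review
```

Increasing automation over time then means nothing more exotic than widening the two outer bands, decision by documented decision, as the evidence base grows.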
Beyond financial crime, AI is seeping into stress testing, liquidity management and capital‑markets risk. Scenario‑modelling engines are being used to simulate how portfolios would behave under combined shocks: an AI‑stock sell‑off, a renewed US‑China tariff escalation, or a Gulf supply disruption that spikes oil prices. Cash‑forecasting and account‑rebalancing tools are helping treasurers in regional banks from Bangkok to Jakarta manage intraday liquidity in real time, cutting the need for costly buffers.
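The combined-shock scenario modelling described above reduces, at its simplest, to applying several simultaneous market moves to a portfolio and reporting the stressed P&L per risk factor. The positions and shock sizes below are hypothetical, chosen only to show the mechanics of the three scenarios named in the article.

```python
PORTFOLIO = {  # hypothetical exposure in USD millions per risk factor
    "ai_equities": 120.0,
    "export_credit": 80.0,
    "oil_linked": 45.0,
}

SCENARIO = {  # assumed simultaneous shocks (fractional price moves)
    "ai_equities": -0.25,    # AI-stock sell-off
    "export_credit": -0.10,  # renewed US-China tariff escalation
    "oil_linked": 0.15,      # Gulf supply disruption spikes oil prices
}

def stressed_pnl(portfolio: dict, scenario: dict) -> dict:
    """Per-factor and total P&L under the combined scenario."""
    pnl = {k: portfolio[k] * scenario.get(k, 0.0) for k in portfolio}
    pnl["total"] = sum(pnl.values())
    return pnl

result = stressed_pnl(PORTFOLIO, SCENARIO)
print(round(result["total"], 2))  # → -31.25 (= -30 - 8 + 6.75)
```

Real engines layer correlations, second-order sensitivities and liquidity horizons on top of this, but the board-level question is the same: which combined shocks does the balance sheet survive, and at what cost in buffers.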
In the Gulf—particularly the UAE, Saudi Arabia and Qatar—AI in compliance is tied to broader diversification plays in fintech and capital markets. As these countries push to become investment and payments hubs for “Emerging Asia,” they are under pressure to show that fast‑growing financial centres can also meet G7‑grade AML and sanctions‑screening standards. That is driving investment in shared utilities, AI‑enhanced KYC platforms and public‑private data‑sharing initiatives to track illicit flows without stifling growth.
The strategic stakes are high. Deloitte warns that 2026 “could be pivotal for banks as they aspire to become fully AI‑powered,” but also flags the risk of fragmented initiatives stuck as isolated proofs of concept. Institutions that fail to scale successful pilots may find cost‑income ratios and compliance headcounts rising faster than peers who manage the AI transition well.
For boards in Singapore, Dubai, Sydney and Tokyo, the question is shifting from “Should we use AI in compliance?” to “Where are we willing to trust AI most, and how will we govern it?” The answers will shape not just risk functions, but the competitiveness and valuations of Asia‑Pacific banks as investors increasingly reward those with credible, scalable digital‑risk architectures.

Written by
Amelia Rowe
Senior correspondent · Markets & Sovereign Capital
Amelia spent eight years inside a sovereign wealth fund before deciding she'd rather write about institutional money than allocate it. She covers central banking, sovereign capital, and the macro decisions that quietly choose which markets get the next decade. Sharp on monetary policy; impatient with anyone who confuses noise with signal. Based in London. Reach out at amelia.rowe@theplatinumcapital.com.
