Fintech Leaders Warn: AI Is Supercharging Scams, Forcing Industry to Rethink Cyber Defence

By Charlotte Reeve · Published Jan 2, 2026 · 2 min read

AI is making financial scams more personalised, convincing and scalable, prompting fintech executives in Asia and the Middle East to call for a united, data‑sharing approach to cyber defence. At recent regional events and in industry surveys, leaders warned that traditional rule‑based fraud‑detection systems are struggling to keep up with deepfakes, voice‑cloned calls and AI‑written phishing messages targeting bank and fintech customers.

Computer Weekly reports that fintech and security specialists across ASEAN are particularly alarmed by a surge in “low‑and‑slow” attacks, where AI tools are used to study individual behaviour over time before launching tailored social‑engineering attempts. Instead of generic phishing emails, scammers now send highly specific messages referencing real transactions, contacts or life events, making them harder to spot.

To respond, experts advocate combining behavioural analytics—monitoring how users typically type, swipe, log in and transact—with device intelligence and real‑time AI models that can flag anomalies. This approach shifts the focus from looking for known bad patterns to identifying when a legitimate account suddenly behaves in an unusual way, regardless of the content of a message. Collaboration between banks, fintechs, telcos and regulators is seen as essential to make these systems effective across platforms.
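To make the behavioural-analytics idea concrete, here is a minimal, purely illustrative sketch of how a session could be scored against a user's own historical baseline. It is not any bank's or vendor's actual system; the feature names (typing speed, login hour, transfer amount) and the flagging threshold are assumptions chosen for the example.

```python
# Hypothetical behavioural-anomaly sketch: compare a session's features
# against the same user's historical baseline using per-feature z-scores.
# Feature names and the 2.0 threshold are illustrative, not a real product.
from statistics import mean, stdev

def anomaly_score(history: list[dict], session: dict) -> float:
    """Mean absolute z-score of the session's features vs. the user's history."""
    scores = []
    for feature, value in session.items():
        past = [h[feature] for h in history]
        mu, sigma = mean(past), stdev(past)
        # If a feature never varies, treat it as non-anomalous (score 0).
        scores.append(abs(value - mu) / sigma if sigma else 0.0)
    return sum(scores) / len(scores)

# A user's recent (made-up) sessions.
history = [
    {"typing_ms_per_key": 180, "login_hour": 9, "transfer_amount": 120},
    {"typing_ms_per_key": 175, "login_hour": 10, "transfer_amount": 90},
    {"typing_ms_per_key": 190, "login_hour": 9, "transfer_amount": 150},
]

# Normal typing rhythm, but an unusually large transfer at 3 a.m.
session = {"typing_ms_per_key": 182, "login_hour": 3, "transfer_amount": 5000}

score = anomaly_score(history, session)
print("flag for review" if score > 2.0 else "ok")  # prints "flag for review"
```

The point of the sketch is the shift the article describes: the message content is never inspected; the account is flagged because a legitimate user suddenly behaves unlike their own baseline. Production systems would use far richer features, robust statistics, and trained models rather than raw z-scores.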

Regulators are stepping in. Monetary authorities in Singapore, Malaysia and other ASEAN markets have issued guidelines and, in some cases, mandatory controls for scam prevention, including transaction signing, cooling‑off periods and joint bank‑telco taskforces. In the Gulf, central banks and telecom regulators are similarly working with payment providers to clamp down on spoofed SMS, fraudulent domains and mule accounts. Industry groups argue that these measures need to be updated frequently as attackers adapt.

Customer education remains a weak link. Surveys show that many users still underestimate the sophistication of AI‑generated content and overestimate their ability to spot fakes. Fintech firms are experimenting with in‑app warnings, simulated scam messages and targeted campaigns for vulnerable groups, but results are mixed. Some executives now advocate embedding more explicit friction—such as extra confirmations for risky transactions—even at the cost of some user convenience.

As 2026 approaches, the consensus among fintech leaders is that AI has fundamentally changed the cyber‑risk landscape—not only by enabling new attack vectors, but by raising expectations for equally advanced defence. Firms that invest early in AI‑native security, cross‑industry collaboration and user awareness may be better positioned to maintain trust in an increasingly adversarial digital environment.

Written by Charlotte Reeve

Senior correspondent · Real Estate & Hospitality

Charlotte has interviewed most of the operators reshaping the Gulf skyline — and a few of the ones who tried and didn't. Her beat is property, mega-projects, and the hotel groups thinking in fifty-year cycles. Previously she wrote on design and architecture across Asia. She knows which buildings will survive a downturn before the spreadsheet does. Based in Dubai. Reach out at charlotte.reeve@theplatinumcapital.com.