OK regulator? How AI became respectable for AML controls

Dutch court case pressures supervisors to accept new tech; explainability the key challenge

  • In October 2022, a Dutch appeals court ruled in favour of a challenger bank called bunq on a number of counts, in a dispute with the Dutch regulator over its anti-money laundering controls.
  • Artificial intelligence forms the basis of bunq’s AML systems, and experts believe the case will increase the pressure for greater regulatory acceptance of AI for managing AML risks.
  • However, the court found against bunq in areas such as identifying politically exposed persons and sources of funds.
  • Experts say such risks are more resistant to pure data analysis, and require critical thinking that is best performed by humans.

It is not often that a bank takes its own regulator to court. It is even less common for a relatively new bank to be so bold. And it is rarer still for the bank in question to pick a fight over the use of new technology. But this is exactly the choice made by bunq, the self-styled “bank of the free”.

Headquartered in the Netherlands and operating online across the European Union, the 10-year-old digital challenger bank appealed against the Dutch National Bank (DNB) over a 2021 court judgement. A landmark Dutch appeals tribunal ruling in October 2022 didn’t go entirely bunq’s way, but it did confirm there is no fundamental reason why banks should be forbidden from using artificial intelligence (AI) as the backbone of their anti-money laundering (AML) controls.

By stressing the importance of using AI in fighting financial crime, the Dutch court paved the way for long-term positive change in the industry
Bunq spokesperson

The appeals tribunal found in bunq’s favour on four out of seven counts of violating AML regulations. Specifically, the court found no fault with the bank’s processes for customer screening and monitoring, obtaining information from business customers, and transaction monitoring. All of those processes are based on AI.

According to a bunq spokesperson: “By stressing the importance of using AI in fighting financial crime, the Dutch court paved the way for long-term positive change in the industry.”

Although not a total acquittal, the ruling is viewed by AML experts across the industry as a milestone, confirming the trend towards greater official acceptance of AI as a means to boost the efficiency and reliability of AML controls.

“I think there’s going to be a before and after the bunq ruling,” says Francisco Mainez, a financial crime consultant at AML technology vendor Lucinity, who was previously head of financial crime data at HSBC.

In practice, many large banks have already been adopting AI for some years, but they still view the case as a source of additional encouragement.

“You could say that we are also on this track and that we will pursue it because it’s helpful,” says Robin de Jongh, head of financial crime detection at ABN Amro. “The bunq case just gave confirmation.”

And what of the regulators themselves? Asked about the implications of the court judgement for their policies on the use of AI, a DNB spokesperson states: “As the ruling also shows, ‘machine learning’ was not discussed (see paragraph 9.5 of the judgement).”

However, paragraph 9.5 found that the DNB had failed to provide evidence that bunq did not “exercise adequate continuous control over its business relationship with its customers”. The court concluded that the debate between the DNB and bunq over the use of machine learning “therefore no longer needs discussion”. In other words, the court did not regard the use of machine learning itself as evidence that AML processes were inadequate.

The DNB is now in talks with AML professionals in the Netherlands on the use of technology, including a series of round tables with AI specialists.

Fitter, happier

Traditional AML controls use a rules-based approach. Actions that deviate from set norms are flagged and investigated; a transaction above £10,000, for example, would create an alert. From the alert, an analyst would trawl through transactions related to the account.
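The rules-based approach described above can be sketched in a few lines of code. The threshold, account identifiers and amounts here are illustrative assumptions, not any bank’s actual configuration.

```python
from dataclasses import dataclass

# Illustrative fixed threshold; real monitoring rules are more varied.
THRESHOLD = 10_000

@dataclass
class Transaction:
    account: str
    amount: float

def flag_alerts(transactions):
    """Return every transaction that breaches the fixed threshold rule."""
    return [t for t in transactions if t.amount > THRESHOLD]

txns = [
    Transaction("A1", 2_500),
    Transaction("A2", 15_000),
    Transaction("A1", 9_999),
]
alerts = flag_alerts(txns)
# Each alert then requires a human analyst to trawl the account's history.
```

Because the rule is static, a transaction of 9,999 sails through while 15,000 always alerts, regardless of context; this rigidity is what criminals learn to exploit and what drives the high false-positive rate discussed below.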

Not only are rules-based approaches time-consuming, but they also lack accuracy. Current estimates suggest that 95% of Suspicious Activity Reports flagged by banks are false positives.

If AML staff are overwhelmed chasing red herrings, it increases the risk that genuine criminals slip through the net. The industry detects only 0.1% to 0.2% of laundered money, according to Iain Armstrong, head of global regulatory affairs at AML tech vendor ComplyAdvantage. He says criminals can outsmart relatively manual systems by learning the rules and evading them.

Given the length, complexity, and failure rate of the current process, it is no wonder banks are keen to embrace more automation and AI. The sheer number of data points arising from customers’ transactions demands computer models to make sense of the information.

Lisanne Haarman, a financial regulatory specialist at law firm Houthoff, believes the ruling will show regulators that “the use of AI can definitely be included in, for example, the onboarding process of clients, which then leads to more efficiency”.

In fact, the sheer quantity of data – a disadvantage for manual AML controls – is an advantage with AI because it enables banks to train the model more effectively. AI can reduce the number of false positives that are reported and cut the man-hours spent on basic searching, saving time for analysts to focus on a smaller number of transactions that are more likely to be suspicious.

The use of AI can definitely be included in, for example, the onboarding process of clients, which then leads to more efficiency
Lisanne Haarman, Houthoff

AI and machine learning tend to be better than humans at “convergent thinking”, Mainez explains. Simply put, convergent thinking looks to find the answer to a problem by following a set of logical steps. As computers and AI have greater memory and data-processing capacity, they can learn rules more easily and apply them to find an answer. Humans, on the other hand, tend to excel at divergent thinking – the ability to come up with new solutions, underpinned by a greater sense of nuance.

The application of machine learning is already extending beyond just transaction monitoring. Natural Language Processing (NLP) can be used for know-your-client checks, to scan the internet for any negative news items linked to prospective clients before onboarding.

Armstrong says ComplyAdvantage has developed a taxonomy for NLP scanning of news stories, which is aligned with the typologies used by the Financial Action Task Force (FATF) and with the 22 offences outlined in the most recent EU money laundering directive. The FATF frequently releases typology reports for items such as financing the proliferation of weapons of mass destruction or trade-based money laundering, which highlight methodologies used by criminals in these activities.
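Taxonomy-driven adverse-media screening of this kind can be sketched as keyword matching against typology labels. The labels and keywords below are invented examples loosely modelled on FATF typologies, not ComplyAdvantage’s actual taxonomy.

```python
# Hypothetical taxonomy mapping typology labels to trigger phrases.
TAXONOMY = {
    "trade-based money laundering": {"over-invoicing", "phantom shipment"},
    "proliferation financing": {"dual-use goods", "sanctions evasion"},
}

def tag_article(text: str) -> list[str]:
    """Return the typology labels whose keywords appear in an article."""
    lowered = text.lower()
    return sorted(
        label
        for label, keywords in TAXONOMY.items()
        if any(kw in lowered for kw in keywords)
    )

story = "Investigators uncovered over-invoicing schemes at the exporter."
labels = tag_article(story)
```

Production NLP systems use trained language models rather than literal keyword lists, but the output is the same shape: each negative news item is tagged with the typologies it may evidence, ready for an analyst to review before onboarding.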

AI can also be used for customer-facing parts of the onboarding process; for example, most banks use it to check a prospective client’s ID. These online checks ensure the ID is real, that the picture on the ID matches the client, and that the client is physically present during the check.

No surprises

However, the move from a rules-based system to AI is not without issues. Daniël Meel, head of innovation in detecting financial crime at ABN Amro, says rules-based systems, though less useful, are easier to explain. While AI models complete more useful work, they are complex, and the path an AI algorithm takes to reach a decision may be hard to explain.

For regulators to support the use of AI in AML controls, a clean, explainable process is essential. The DNB spokesperson says: “It is essential to retain human involvement in order to prevent AI from functioning as a kind of black box of which the underlying reasoning behind decision-making cannot be tracked back.”

Explainability remains high on regulators’ list of concerns with the technology, highlighted by the European Banking Authority in a 2020 report on big data and advanced analytics. Ruta Merkeviciute, the head of digital finance at the EBA, says the use of machine learning in AML controls will also be subject to supervisory scrutiny of the other “elements of trust” set out in that report, including the risk of bias in the algorithm or the data that feeds it, data quality and security, consumer protection, and the ability to audit the whole process.

Humans have common sense, whereas models just keep going even if something is wrong with them
Iain Armstrong, ComplyAdvantage

While there are different ways to prove explainability, experts say the crux of the issue comes down to displaying the AI’s decision-making process in a way that translates machine reasoning to human logic. Being able to trace the reasoning of a machine demands clear data sources for decisions and an ability to unpack the processes an AI undertakes.

“Humans have common sense, whereas models just keep going even if something is wrong with them,” says ComplyAdvantage’s Armstrong. “I think what the regulators want to see is an oversight process.”

In bunq’s case, Haarman explains, the bank was able to fulfil the requirements for client screening in the eyes of the court because it could provide copious data points demonstrating how the user profile was created using AI.

The ComplyAdvantage system uses reason codes to create a trail through the decisions of the AI model. These codes allow customers to translate the machine’s thinking process into a language that those without technology training can understand.
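One simple way to implement reason codes is to make each feature’s contribution to the risk score explicit and report the largest contributors in plain language. The feature names, weights and wording below are invented for illustration and are not ComplyAdvantage’s actual implementation.

```python
# Hypothetical linear risk model: weights and features are illustrative.
WEIGHTS = {
    "high_risk_jurisdiction": 0.45,
    "rapid_in_out_flow": 0.30,
    "new_counterparty": 0.10,
}

# Plain-language text shown to analysts for each reason code.
REASON_TEXT = {
    "high_risk_jurisdiction": "Counterparty in a high-risk jurisdiction",
    "rapid_in_out_flow": "Funds moved in and out within 24 hours",
    "new_counterparty": "First payment to this counterparty",
}

def score_with_reasons(features: dict[str, float], top_n: int = 2):
    """Return a risk score plus the top contributing reasons."""
    contributions = {
        name: WEIGHTS[name] * value
        for name, value in features.items()
        if name in WEIGHTS
    }
    score = sum(contributions.values())
    top = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return score, [REASON_TEXT[name] for name in top]

score, reasons = score_with_reasons(
    {"high_risk_jurisdiction": 1.0, "rapid_in_out_flow": 1.0}
)
```

The point of the design is that every alert carries an audit trail: an analyst, or a supervisor, can see not just the score but which inputs drove it, in language that requires no technical training.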

“It’s all about being able to understand, being able to configure in the right way to make sure that the results are trustworthy,” says Carolin Gardner, head of the AML unit at the EBA.

She adds that explainability issues tend to arise most often when banks have used vendor solutions they do not understand internally, or because the staff who had built the technology have since moved on.

Where humans still rule

But bunq’s AML processes have not been entirely vindicated – the court agreed with the DNB that there were deficiencies in the screening of politically exposed persons (PEPs) and investigating the sources of a customer’s financial resources.

Gardner at the EBA says in principle there is “no single area where it is impossible” to use AI for AML controls, but supervisory approval will depend on the quantity and quality of data available to assess each specific risk.

“If you don’t have the right data that fits your model and sufficient oversight, it’s going to be difficult for you to demonstrate that your AML controls are effective and that you meet your legal obligations,” she warns.


Birgit Snijder-Kuipers, a professor of AML law at Radboud University, says certain types of risk require critical thinking to analyse correctly. This is where AI alone cannot meet regulatory requirements.

“I don’t think at this stage AI replaces human interaction in the risk analysis,” she says.

Macro events such as war can rapidly alter the risk status of PEPs, she notes, so the added input of humans becomes essential.

Humans also need to step in when the regulatory regime itself changes. Each change in the law necessitates complex decisions on how to adapt existing systems to meet policy requirements. Lucinity’s Mainez explains that a human is better suited to make these decisions because they require divergent thinking.

In some areas, however, banks are trying to reduce human intervention. Specifically, firms are starting to work with unsupervised AI models, which learn from unlabelled data instead of the labelled data fed to supervised models. Meel at ABN Amro says he is not concerned that this will raise additional barriers to explainability – he says experienced analysts can use the same tools developed to explain supervised models for unsupervised models as well.
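In its simplest form, an unsupervised approach needs no labelled examples of laundering at all: it flags transactions that deviate sharply from an account’s own history. The sketch below uses a basic z-score test; the 3-sigma cut-off is a common statistical convention, not a regulatory rule, and real unsupervised models are far richer.

```python
import statistics

def anomalous(amounts: list[float], cutoff: float = 3.0) -> list[float]:
    """Flag amounts more than `cutoff` standard deviations from the mean.

    No labels are required: the model learns what is "normal" purely
    from the unlabelled transaction history it is given.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all amounts identical: nothing stands out
        return []
    return [a for a in amounts if abs(a - mean) / stdev > cutoff]

history = [100.0] * 20 + [10_000.0]  # one outlier among routine payments
outliers = anomalous(history)
```

Explaining such a model’s output follows the same pattern as for supervised models: the flagged value can be traced back to a measurable deviation from the account’s baseline, which is the kind of oversight trail Meel describes.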

Get with the programme

Sources say the bunq case will increase pressure on regulators to come to terms with AI, and be able to supervise it adequately. Some speculate it may even lead to greater willingness to challenge regulators over the use of AI, although not all firms will have the appetite for a public dispute in court.

“It’s a reminder to the regulators they play a super-important role in the ecosystem, it’s a reminder to the regulators… we need regulation around AI,” says Christian Roberts, chief product officer at financial crime AI vendor Sentinels.

The DNB spokesperson says the regulator is “currently liaising with the sector to come to a common understanding of effective applications of data and technology”. The spokesperson adds this interaction might lead to the DNB issuing fresh guidance, but it is too soon to say at this stage.

I don’t think anyone is turning around and hoping to go back to the olden days
Carolin Gardner, EBA

The EBA’s Gardner is hopeful that regulators are ready for the change. She says the EBA has seen very high take-up among national regulators for training workshops on AI and big data.

“Everyone is learning, everyone is ready to take up the challenge, and I don’t think anyone is turning around and hoping to go back to the olden days,” says Gardner.

The uptake of AI is already generating pressure for a harmonised EU-wide approach to regulating it. The European Commission proposed a regulation on the use of artificial intelligence across all sectors in April 2021, which will have implications for financial services firms. The Council of the EU agreed a list of amendments in December 2022, while the European Parliament has yet to vote on its own version of the draft, after which the three co-legislators will need to negotiate a final text.

Asked whether further guidance from the EBA on AI would be useful, the DNB spokesperson says: “In general, meaningful guidance on complex matters is always very welcome, especially given its aim to safeguard a level playing field among member states.”

However, there is one important obstacle for regulators trying to get comfortable with AI: finding the staff to supervise it. The EBA’s Merkeviciute says supervisory teams that examine bank systems and technology should include individuals who are knowledgeable in AI, and already do in many cases. But regulators will be competing with the banks themselves to hire these specialists.

“There is a limited pool of real talent. Institutions will have people who can understand and use the AI, but there are likely to be challenges in having sufficient resources, especially for less well-known firms,” says Tom Balogh, an adviser on non-financial risk at A&O consulting and a former regulator.

Update, March 8, 2023: This story was updated to include attribution of comments from ABN Amro

Editing by Philip Alexander
