This article was paid for by a contributing third party.
Fighting financial fraud with AI
Risk and compliance professionals convened for a Risk.net webinar, How to successfully mitigate fraud – AI in action, in association with NICE Actimize to debate the use of artificial intelligence (AI) in the fight against fraud
With financial crime on the increase, detecting and mitigating fraud has become a priority across financial services. Banks are now looking at how they can improve processes, use data more effectively and harness new technologies to increase automation and enable better decision-making.
Panellists on the webinar, hosted by Risk.net in November, discussed the current state of play in the banking industry when it comes to using AI to fight fraud, sharing their experiences and suggesting what needs to be done next.
The use of AI to fight financial fraud, both internal and external, has become a hot topic. “AI is the future of fraud management, irrespective of the system you are using,” said Svetlana Belyalova, head of operational risk management at Rosbank, Societe Generale Group, during the webinar. “It brings a lot of value in both data management and decision-making.”
Amir Shachar, lead fraud research data scientist at NICE Actimize, said AI is becoming “more and more significant” in the fight against financial crime. “If, in the past, it was just a ‘nice to have’ tool, in more and more financial institutions … AI is becoming a must-have aid for analysts to decide [whether] transactions are fraudulent or not.”
However, when it comes to tackling fraud with new technologies, it is early days for the banking industry and responses vary widely. A few early movers have already implemented cutting-edge AI platforms, while others still depend on older systems and existing processes.
Charles Forde, group head of operational risk at Allied Irish Bank, called for early adopters of these technologies to talk more openly about what is and isn’t working so other firms can learn from them and best practices can be formed. He stressed this is not just about which technologies are being used, but about the approaches and operating models being employed. For example, does the responsibility for monitoring and responding to financial crime and fraud lie mainly with the first or second line of defence?
“I think there’s still a big variance in different firms in how the technologies are being applied, and in the operating model,” he said. “In some firms it’s primarily all in the first line. In some, the concentration of knowledge is in the second line.”
Forde said the latter operating model is only an interim solution. “Ultimately, this activity should sit next to the business that it is supporting, regardless of what type of business you’re in.”
Belyalova stressed that a firm’s maturity and operational processes for fraud management are key to selecting the technology that will be right for it. At many firms, the approach to dealing with fraud is changing. Traditionally, firms took a siloed approach to different types of fraud using different technologies, but now firms want to take a more holistic approach to understand, for example, if one type of fraud is feeding others. They also want to better manage the AI capabilities in these fraud systems.
“At the moment, we rely heavily on the elements of AI built into these fraud management [systems],” said Belyalova. “What we really need to know better is how to manage these AI capabilities in our real-time environment – how to make them more effective, and how to make these systems learn from our [ever-evolving] day-to-day situations.”
Shachar noted there is sometimes a misunderstanding around AI, with firms fearing they will lose control and the machine will make poor decisions. However, he said: “AI is just a recommendation and you can assign any weight you’d like to this recommendation in your final decision.” This knowledge, he said, usually makes it easier for firms to decide to move to AI. “We do see that, with time, they assign higher and higher weight to [the] recommendation.”
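To make Shachar’s point concrete, the sketch below shows one way a firm might blend an AI recommendation with an existing rule-based score, dialling the AI’s weight up as trust grows. The function, scores and weights are illustrative assumptions, not a description of any vendor’s scoring logic.

```python
# Hypothetical sketch: the AI output is just one input, and the institution
# chooses how much weight it carries in the final decision.

def final_fraud_score(ai_score: float, rule_score: float, ai_weight: float) -> float:
    """Blend an AI recommendation with a rule-based score.

    ai_score and rule_score are fraud likelihoods in [0, 1];
    ai_weight is the trust placed in the AI recommendation, in [0, 1].
    """
    return ai_weight * ai_score + (1.0 - ai_weight) * rule_score

# A cautious early adopter might start with a low weight on the AI...
print(final_fraud_score(ai_score=0.9, rule_score=0.2, ai_weight=0.2))  # ≈ 0.34
# ...and raise that weight over time as confidence in the model grows.
print(final_fraud_score(ai_score=0.9, rule_score=0.2, ai_weight=0.8))  # ≈ 0.76
```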
Data debate
How to use the reams of data available – which is often siloed throughout the organisation – to tackle fraud in the most effective way is a key topic at banks.
“We are overwhelmed with data,” Belyalova said. “Data correlation itself is a topic … We have to use it in different algorithms [for predictive analytics], not solely for AI but also to get a full picture of client behaviour.” This allows firms to customise client offerings more effectively and to manage deviations from typical client behaviours when it comes to known and evolving scams, she said. “So machine learning and AI is definitely a capability that allows us to do so.”
Additionally, data collected by banks for different reasons can also play a part in combating fraud by enabling better decision-making. Banks are often keen to provide certain types of data to firms such as NICE Actimize to build systems and models that might connect and correlate different datasets, allowing new insights and conclusions, Belyalova noted. The more data, the more decisions can be made automatically by risk engines, which reduces the need for human or expert decision-making and frees up staff, she added.
Forde pointed out that a lot of work needs to be done to standardise and clean data before it can be used. “Before you can really get to the point of looking at the data relevant to the financial crime space, and then to fraud specifically, firms need to look at an overall framework and taxonomy for data,” he said. This is to ensure the different data types and classifications – structured, unstructured, instrument, client and book data – have been properly understood. “Everything can be classified down to quite a granular level, and that will really help within the reusability and the transport of data,” he said.
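As a rough illustration only, the hypothetical Python sketch below shows what classifying a data asset “down to quite a granular level” might look like; the categories mirror the types Forde lists, but the structure and field names are assumptions.

```python
# Hypothetical sketch of a granular data taxonomy; a real framework would be
# far richer and tied to the firm's data policies.
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    STRUCTURED = "structured"
    UNSTRUCTURED = "unstructured"

class DataDomain(Enum):
    INSTRUMENT = "instrument"
    CLIENT = "client"
    BOOK = "book"

@dataclass(frozen=True)
class DataAsset:
    name: str
    data_type: DataType
    domain: DataDomain
    owner: str            # a clearly accountable owner, per the governance points below
    validated: bool = False

# Classifying assets at this level of granularity aids reusability and the
# transport of data between fraud, AML and other models.
asset = DataAsset("retail_payments_feed", DataType.STRUCTURED, DataDomain.CLIENT, "cdo_office")
print(asset)
```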
It’s also very important, Forde said, to clearly define the policies, the frameworks and the risk appetite for data so everyone knows who is responsible for it. “Most of the medium and larger firms at this point do have a data function within the first line [of defence] and often a chief data officer or somebody responsible for data,” he noted. “I think that’s a very good approach.”
But there are still questions such as whether data goes directly to the risk function or sits within the business area, Forde said. “There are a lot of end-to-end considerations in how to apply the tools and get the most value from them.”
It is crucial to understand exactly who owns the data, he stressed. “There can’t be any question on that, because the validation and the maintenance of that data is going to be absolutely critical because that data could be feeding into a model for many things, whether it’s something for analysing fraud or for something else.”
Additionally, because external partners have done much of the work on financial crime and fraud, it’s important to apply the same governance and oversight to third parties’ data and to ensure it is valid, Forde added.
Model risk
Another concern for firms is model risk. Firms need clear policies on model development and management covering areas such as whose responsibility it is to defend models to regulators and who owns the data.
“There is still a lot to do in the industry on model risk, and model management, and on validation,” said Forde. “It’s only in the past few years that model risk has really been adequately recognised as a separate, distinct area of op risk.”
He believes that the oversight of processes and model validation activity should come from the second line, which is typically an op risk or enterprise risk function. “But there’s still not enough consensus in the industry on it, and information sharing about different approaches … would definitely help.”
Audience poll
How far along is your bank’s plan to adopt artificial intelligence (AI) and machine learning to tackle fraud?
- We have no plans to adopt AI or machine learning yet – 29%
- We are beginning our AI and machine learning efforts now – 29%
- We are discussing the use of AI and machine learning but no budget has been approved – 17%
- We have serious and effective machine learning and AI tools in place right now that fight fraud and anti-money laundering – 23%
Forde noted that the pace of discussion in the industry is accelerating, driven by regulators. “[Model risk] is in the top three or four priorities for regulators like the European Central Bank,” he said.
Belyalova believes model development should be a collaboration between the first and second line of defence right from the start.
“One example that really works well is when you have somebody [from second line of defence] supervising fraud management at the entity level and [dedicated] fraud officers at first line of defence, which are responsible for [operational] management of specific types of fraud,” she said. “When you develop that model, you’ll need the practice and the insights from the first line, to consider [and challenge] them at the second line and then bring it together.”
This will avoid the risk of deploying poorly designed or validated models, she said. She stressed that the aim is to avoid developing a purely theoretical model that will not deliver value, and will bring greater risk after implementation.
Model behaviour
AI is already being incorporated into behavioural models designed to capture a sequence of events that might identify fraudulent activity, said Shachar. By capturing the historical behaviour of fraudsters, these models have predictive capabilities. AI can also be used in simpler behavioural models such as identifying a one-off transaction that might be priced out of line.
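The simpler check Shachar mentions can be illustrated with a basic outlier test: flag a transaction whose amount sits far outside a client’s history. This is a minimal sketch; the threshold is an illustrative assumption, and production systems use far richer behavioural features.

```python
import statistics

def is_out_of_line(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a one-off transaction priced far out of line with past behaviour."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

past_amounts = [120.0, 95.0, 110.0, 130.0, 105.0]
print(is_out_of_line(past_amounts, 118.0))    # False: consistent with history
print(is_out_of_line(past_amounts, 5_000.0))  # True: priced far out of line
```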
The issue of who owns the data that goes into models is something firms need to be very clear on, and it differs according to jurisdiction. “Having worked in organisations that have operated in dozens of jurisdictions globally, the approach to [gauging] the ownership and rights to data varies considerably,” said Forde.
While the European Union’s General Data Protection Regulation affords one of the highest levels of protection for the individual, data protection has become a high priority in many jurisdictions throughout the world, he pointed out.
“It comes back to having a core framework in the fraud context to address the monitoring and detection and response to any fraud, and then being able to accommodate the local legal and regulatory requirements in that jurisdiction,” he said.
Even if banks are prohibited from using certain datasets, much can be extrapolated and fed into models at an aggregated level – for example, data derived from common behavioural patterns of client activity in certain online and payment services, which is useful for detecting signs of fraudulent activity, said Belyalova.
She stressed that correctly operating the model itself and the supporting risk engine are much more important than having the full set of user data.
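A hypothetical sketch of that aggregated approach: rather than feeding prohibited client-level records into a model, a bank can derive pattern-level features with no client identifiers, as below. The fields and buckets are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative transactions; in practice these would come from online and
# payment services.
transactions = [
    {"channel": "online", "hour": 2, "amount": 900.0},
    {"channel": "online", "hour": 3, "amount": 950.0},
    {"channel": "branch", "hour": 11, "amount": 120.0},
]

# Aggregate by channel and time-of-day bucket; no client identifiers survive,
# yet the resulting behavioural patterns can still feed a fraud model.
patterns = defaultdict(lambda: {"count": 0, "total": 0.0})
for tx in transactions:
    bucket = (tx["channel"], "night" if tx["hour"] < 6 else "day")
    patterns[bucket]["count"] += 1
    patterns[bucket]["total"] += tx["amount"]

for bucket, stats in patterns.items():
    print(bucket, stats["count"], stats["total"] / stats["count"])
```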
When it comes to banks giving their data to anti-fraud technology firms, there are some important questions to consider, Belyalova noted. For instance, are firms willing to have their client data feeding other banks’ engines, and what are the cross-border implications if that data is being stored in a different jurisdiction to the firm?
“That could be a real hurdle because it [involves] a cross-border transfer [and confidential information sharing],” she said. “This is absolutely a company decision whether you would like to share with others [internal data] collected from [known] fraud cases,” she added, noting that many companies “do not want to share [fraud-related data] with the provider or with the world”.
Shachar explained that companies such as NICE Actimize use federated learning, which produces models trained across several financial institutions without moving the data outside their servers, thus complying with each organisation’s strict data regulations. What is taken is metadata – components of data that are useful in building or refining particular models.
“NICE Actimize monitors many, many datasets of some of the largest financial institutions in the world,” said Shachar. “We want to have a model that is able to capture patterns across different datasets; however, we can’t combine the datasets because of security and regulatory reasons, so we combine the patterns into a single model, but without bringing the data to one place.”
The results are striking, he said. “This framework of federated learning … where we can import insights from other banks, dramatically improves the performance [of our systems and models].”
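The general pattern behind such federated learning can be sketched as federated averaging: each institution takes a training step on its own data, and only the resulting model parameters – never the transactions – are pooled. The toy logistic-regression example below illustrates the technique only; it is not NICE Actimize’s implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of logistic regression on a bank's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, bank_datasets):
    """Each bank trains locally; only the updated weights are averaged."""
    updates = [local_update(global_weights, X, y) for X, y in bank_datasets]
    return np.mean(updates, axis=0)  # raw data never leaves any bank's servers

# Three banks with synthetic private datasets of 50 transactions each
rng = np.random.default_rng(0)
banks = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float))
         for _ in range(3)]

weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, banks)
print(weights)  # a single model shaped by all three datasets
```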
This year, with the Covid‑19 pandemic leading to national lockdowns worldwide, the incidence of online financial fraud has risen. “We see a surge in fraud activity, especially in online transactions,” said Shachar. However, he noted that increased fraudulent activity actually improves fraud models by giving them more inputs and therefore a better understanding of fraudulent behaviour.
A question many banks are considering is whether AI and automation will render certain roles redundant. Forde said he saw many places where human interpretation of machine alerts is very much required. While some roles will be impacted, others will be created, he said.
Inside jobs
Fighting internal fraud requires a very different approach to external fraud prevention, the panellists agreed. Because external fraud is far more common than internal fraud, there is much more for the models to learn from, making them easier to train, said Shachar.
“Internal fraud is much more rare, so we don’t have a lot of instances to learn from,” he said. “Our models would look for anomalies in, say, behaviour of bank employees. The AI has to be more involved than just plain supervised learning – it will have to be based on human-generated rules.”
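One way to picture that hybrid is the sketch below, which pairs a human-generated rule with an unsupervised anomaly detector over employee activity; the features, rule and thresholds are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Illustrative features per employee-day: [records accessed, after-hours logins]
normal_activity = rng.normal(loc=[50, 1], scale=[10, 1], size=(500, 2))

# Unsupervised: no fraud labels needed, just a picture of normal behaviour
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

def review_needed(records_accessed: float, after_hours_logins: float) -> bool:
    # Human-generated rule: always escalate heavy after-hours data access
    if after_hours_logins >= 5 and records_accessed > 100:
        return True
    # Otherwise defer to the anomaly detector (-1 means anomalous)
    return detector.predict([[records_accessed, after_hours_logins]])[0] == -1

print(review_needed(48, 1))    # typical behaviour: False
print(review_needed(400, 8))   # rule fires: True
```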
Although internal fraud is much rarer than external fraud, its impact on the company can be far greater, and it needs to be dealt with differently, said Forde.
“In [dealing with internal fraud] effective supervision, management supervision, and defining roles and responsibilities is as important as the tools that you have to perform the oversight of employees,” he said.
Regulators certainly play a key role in financial fraud detection and prevention, but reputational risk is also a key driver, said Belyalova.
“It’s really a competitive advantage where the bank can show that it cares about its customers and that the level of complaints with regards to fraud is low. We are driven primarily by the market and client demands, [being supported] by the regulations,” she said.
In Europe, regulators want to see that controls are in place for the protection of the customer – not just against fraud, but so that, in all areas, the customer is considered first and foremost, said Forde. Since the financial crisis, banks have been doing this not just for their own reputational gain, but because “it’s the right thing to do”. “I think that, overall, the regulators are helping to drive that, so things are moving in the right direction.”
Shachar agreed that regulators are becoming a lot more hands-on. As well as explaining its models to regulators, NICE Actimize also helps clients with that process. “Regulators will get into the details of any models we deliver,” he said. “They will scrutinise the model very, very rigorously. That’s a process we’re all for.”
Listen to the full webinar, How to successfully mitigate fraud – AI in action
The panellists were speaking in a personal capacity. The views expressed by the panel do not necessarily reflect or represent the views of their respective institutions.