Modelling cyber risk: FAIR’s fair?

Proponents say factor analysis can be applied to cyber risk; detractors retort results are still guesswork

[Illustration: cyber alchemist. Mark Long, NB Illustration]

Of all the potential loss events banks’ operational risk managers are tasked with trying to model, cyber attacks are among the most challenging.

For a start, say op risk managers, there is the sheer range of cyber threats banks are exposed to, and the wide variability in the frequency with which they occur. Distributed denial-of-service attacks, viruses and email phishing threats are everyday occurrences for a bank; successful ransomware attacks and data breaches that result in large data thefts are – for now – relatively uncommon.

Modelling the frequency of any potential op risk loss event is difficult; but practitioners argue this is especially true for cyber, for three reasons.  

“Firstly, I think there is a non-linear relationship between controls and losses, as the controls are only as good as the weakest link,” says one senior op risk manager, citing the examples of staff turning off anti-virus software to download an attachment, or responding to a convincing-looking phishing email.

Secondly, the relationship can cut the other way: a large, sophisticated bank could have inadequate cyber defences – but provided it is perceived to be strong, there is evidence to suggest it is less likely to be a target for cyber attack. The op risk manager cites recent payment network frauds, which have been concentrated on emerging market banks, as an example.

Finally, a bank cannot model its exposure to a so-called zero day attack – one that exploits an unknown vulnerability in its cyber safeguards, for which by definition it has no defence.

The loss impact of any of these events is also highly variable. For example, regulatory fines for poor systems and controls processes in the event of a data breach will be set at the discretion of supervisors; banks subject to the European Union’s forthcoming General Data Protection Regulation could be whacked with fines of up to 4% of their annual turnover in the event of a serious breach, or 2% if they simply fail to notify their regulator within 72 hours.
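The two GDPR fine tiers described above reduce to simple percentage caps on annual turnover. A minimal sketch, using a hypothetical turnover figure (the function name and the €25 billion example are illustrative, not drawn from any actual firm):

```python
# Illustrative upper bounds on GDPR fines, per the two tiers described
# above: 4% of annual turnover for a serious breach, 2% for failing to
# notify the regulator within 72 hours.
def gdpr_fine_cap(annual_turnover, serious_breach):
    """Return the maximum fine as a fraction of annual turnover."""
    rate = 0.04 if serious_breach else 0.02
    return rate * annual_turnover

turnover = 25e9  # hypothetical: €25 billion annual turnover
print(gdpr_fine_cap(turnover, serious_breach=True))   # cap for a serious breach
print(gdpr_fine_cap(turnover, serious_breach=False))  # cap for late notification
```

Even at the lower tier, the cap for a large bank runs into the hundreds of millions of euros – which is why the discretion supervisors retain within these bounds makes the loss impact so hard to pin down in advance.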

Other losses – ransom payments to cyber thieves, compensation to affected customers, loss of future business due to reputational damage – are also difficult if not impossible to quantify with any accuracy. Small wonder, then, that Rohan Amin, chief information security officer at JP Morgan, describes trying to model the loss a bank can expect from a particular cyber event as “at best, a guess”.

More than a decade after it was first applied to modelling cyber risk, the most commonly used approach to quantifying cyber threats among banks remains the Factor Analysis of Information Risk (Fair) model. The approach provides a straightforward map of risk factors and their interrelationships, with its outputs then used to inform a quantitative analysis, such as Monte Carlo simulations or a sensitivities-based analysis.
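Fair itself only decomposes risk into factors; the quantification happens in the downstream analysis. As a rough illustration of the Monte Carlo step mentioned above, the sketch below draws a loss event frequency and per-event loss magnitudes from assumed distributions – Poisson and lognormal here, with invented parameters – and aggregates them into an annualised loss distribution. The distribution choices and every number are assumptions for illustration, not any bank's actual model.

```python
# Illustrative Fair-style Monte Carlo for a single cyber loss scenario.
# All distributions and parameters are assumed, purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000  # number of simulated years

# Loss event frequency (LEF): events per year, Poisson with mean 0.3,
# i.e. roughly one successful attack every three to four years.
lef = rng.poisson(lam=0.3, size=n_years)

def annual_loss(n_events):
    """Sum of per-event losses in one simulated year.

    Loss magnitude (LM): lognormal with a median around $1m and a
    heavy right tail (sigma chosen arbitrarily for illustration)."""
    if n_events == 0:
        return 0.0
    return rng.lognormal(mean=np.log(1e6), sigma=1.5, size=n_events).sum()

losses = np.array([annual_loss(n) for n in lef])

print(f"Mean annual loss:  ${losses.mean():,.0f}")
print(f"95th percentile:   ${np.percentile(losses, 95):,.0f}")
print(f"99.9th percentile: ${np.percentile(losses, 99.9):,.0f}")
```

The tail percentiles dwarf the mean, which is the point of the exercise – but, as the critics quoted below note, both the frequency and the magnitude distributions are themselves guesses, and the output inherits that uncertainty.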

Proponents say the approach helps banks order and prioritise their defences against the myriad threats they face; detractors say its outputs are only as reliable as the inputs, which, due to the nature of the threats in the case of cyber risk, are inherently based on guesswork.

Shorn of a way of predicting losses accurately, banks may look to the traditional risk-transfer medium of insurance – though underwriters have long struggled to model the potential impact of cyber threats too. Modelling techniques have evolved rapidly in the past couple of years, firms say; it is now common for underwriters to tap the services of catastrophe modelling firms – companies more used to assessing potential losses from natural disasters – as well as niche cyber security firms, which can use a range of covert techniques such as ethical hacking to assess a potential client's defences.

However, amid swelling demand from banks for cyber cover, some fear underwriting standards have gone backwards: “Many underwriters are not doing any underwriting at all. They’re simply saying, ‘this is my price for the [policy] limit’. It’s rather scary,” John Elbl, cyber risk expert at Air, a catastrophe modelling firm, tells Risk.net.

Banks are all too cognisant that insurance can only ever be a loss mitigant, and not a defence against a potentially existential threat. As Gilles Mawas, senior expert in cyber, IT and third-party risk at BNP Paribas, recently put it: “Being reimbursed after you’re dead is irrelevant. If you lose €3 billion–5 billion ($3.4 billion–5.6 billion) and two years later you get back 50%, what’s the point?”
