Op risk assessment hampered by lack of reliable data

Inadequate loss data is a major source of model risk for operational risk strategies, said Carol Alexander, professor at the ISMA Centre at the University of Reading in the UK, speaking at Risk’s annual European conference.

Attention to operational risk has been growing, driven by the deregulation of capital flows and industries, the rapid growth of new companies, heightened scrutiny of “dubious accounting” practices and company fraud, and the arrival of Basel II, she said. However, accurate modelling of operational risk is hampered by a lack of data, Alexander said.

Basel II calls for the assessment of operational risk, and that assessment rests on quantitative modelling. One such approach is the loss model approach, which requires data on the frequency and severity distributions of operational losses, data that is both hard to find and often skewed, she said.
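By way of illustration, a loss-model calculation of this kind might look like the following sketch, which simulates annual aggregate losses from an assumed Poisson frequency distribution and an assumed lognormal severity distribution. The distributions and parameter values are illustrative assumptions, not figures cited by Alexander.

```python
# Minimal sketch of a loss-model (frequency/severity) approach.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

lam = 25.0             # assumed mean number of loss events per year
mu, sigma = 10.0, 2.0  # assumed lognormal severity parameters (log scale)

n_years = 100_000
annual_losses = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(lam)                      # frequency draw
    severities = rng.lognormal(mu, sigma, n_events)  # severity draws
    annual_losses[i] = severities.sum()              # aggregate annual loss

# Capital-style metrics: the expected loss and a high quantile of the
# aggregate distribution, where scarce tail data makes estimates least reliable.
print("Expected annual loss:", annual_losses.mean())
print("99.9% quantile      :", np.quantile(annual_losses, 0.999))
```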

There are two major sources of data for users of a loss model approach: public data and consortium data. Neither is reliable, said Alexander. Public data, mainly sourced from newspapers, is biased: “This data is usually based on newsworthy events” such as the Enron and Tyco scandals, she said. “This data is not useful for quantitative assessment” because it is skewed towards severe events that seldom occur.

Consortium data, meanwhile, gathered from regional operational risk groups or banks, consists mainly of very general or “central” data that must be scaled to the size of the organisation. Scaling, however, brings its own problems, she said. “When the data is sized for scale, you do not get enough information on the tails”, the part of the distribution that captures rare but severe events. The solution, she said, is to integrate the consortium and public data sets to estimate the tails and obtain a complete picture of the loss data.
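One way to read that suggestion is sketched below: consortium losses are scaled to the firm’s size and used for the body of the severity distribution, while public, newsworthy losses supply the tail above a chosen threshold. The scaling rule, threshold and synthetic data are illustrative assumptions, not a method attributed to Alexander.

```python
# Crude sketch of splicing scaled consortium data (body) with public data (tail).
# Data, scaling factor and threshold are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two data sources (loss amounts in $m).
consortium_losses = rng.lognormal(0.5, 1.0, 5_000)  # frequent, moderate losses
public_losses = rng.pareto(1.5, 200) * 50 + 50      # rare, severe losses

# Simple size adjustment: scale consortium losses by the ratio of the firm's
# size indicator to the average consortium member's.
size_ratio = 0.6
scaled_consortium = consortium_losses * size_ratio

# Splice: body from scaled consortium data, tail from public data.
threshold = 50.0
severity_sample = np.concatenate([
    scaled_consortium[scaled_consortium < threshold],
    public_losses[public_losses >= threshold],
])

print("Body observations:", int((severity_sample < threshold).sum()))
print("Tail observations:", int((severity_sample >= threshold).sum()))
print("Empirical 99th percentile severity:", np.quantile(severity_sample, 0.99))
```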

However, there is debate over how the consortium and public data should be combined. Traditionally a classical statistical method has been used – and this is what most operational risk software systems use – but Alexander recommends a Bayesian model, which is “specifically designed to handle different data sets”, she said.
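A minimal sketch of the Bayesian idea, under assumptions of our own choosing rather than anything specified in the talk, is to let one data source (say, consortium data) define a prior for the loss frequency and let a second data set update it. A conjugate gamma-Poisson model keeps the arithmetic explicit; all numbers are invented for illustration.

```python
# Hypothetical Bayesian combination of two data sources for loss frequency.
# Prior from consortium data: mean frequency 20 events/year, modest confidence.
prior_alpha, prior_beta = 20.0, 1.0   # gamma(shape, rate); mean = alpha / beta

# Second data set: observed annual loss counts at the institution itself.
observed_counts = [12, 15, 9, 14]

# Conjugate update: posterior is gamma(alpha + sum(counts), beta + n_years).
post_alpha = prior_alpha + sum(observed_counts)
post_beta = prior_beta + len(observed_counts)

print("Prior mean frequency    :", prior_alpha / prior_beta)
print("Posterior mean frequency:", post_alpha / post_beta)
# The posterior blends the consortium view with the second data set,
# weighting each by how much data stands behind it.
```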
