Sponsored webinar: A model approach – Examining operational risk capital modelling

The Panel

Patrick McDermott, vice-president, enterprise operational risk oversight, Freddie Mac
Patrick O'Brien, vice-president of OpenPages product management, IBM
Kenneth Swenson, risk management specialist, supervision and regulation, Federal Reserve Bank of Chicago

A good place to start is the state of play with regard to the advanced measurement approach (AMA) at those institutions that are using it. Which calculation methods do they tend to use, and why?
Patrick McDermott, Freddie Mac: It is important to note that Freddie Mac is not a bank, and there are a lot of things that make our operational risk profile different from that of a bank. We do not have the branches, the tellers and the customers, and because of the nature of our institution, we don’t have very many low-severity, high-frequency operational risk loss events – we don’t have a lot of loss data. We do have some good data in the tail, which makes us a little different from other institutions, but scenario analysis is the approach that makes the most sense for us and resonates the most with our management.

Patrick O’Brien, IBM: At IBM, we work with a lot of financial institutions worldwide, and it is a mistake to say that US financial services companies generally base their approaches on historical loss data while other regions of the world, such as Australia, use scenario analysis. I’ve worked with a couple of the banks that are AMA-certified in Australia and their scenario analysis approach is also informed by internal loss data, external loss data and business environment and internal control factors (BEICFs). It is not as polarised as you might think. A lot of institutions are finding scenarios to be a good way to focus on the details of capital modelling, but they are certainly not excluding or ignoring the loss data in those analyses.

Kenneth Swenson, Federal Reserve Bank of Chicago: From a supervisor’s standpoint, we have tried to be a little bit hands-off, because the banks are not fond of us being overly prescriptive. And there are various ways of using the different elements – for example, and this is by no means an exhaustive list, do you use external data to inform the scenario process? Do you use external data to do a quality assessment of internal data? Do you use external data to inform the BEICF process and, moreover, what is the BEICF process?

Are there any trends in the way that banks are revising or overhauling their operational risk capital models?
McDermott: There is a much better realisation now that credit market events can have their roots in operational risk events. The risk management process can break down and cause significant losses, and so there is a real need on the part of the operational risk manager to understand the credit risk capital framework, the market risk capital framework and the boundaries.

O’Brien: What we are being asked to do is to be able to form that integrated risk view for financial services companies, so that is a trend we will see more often.

What has been the impact of the financial crisis on AMA modelling?
McDermott: It may mean there are several new data points in your loss data now. This really wasn’t just one event – there were a number of events in there – so there is much richer data in the tail now. It starts to help with one of the problems with the loss distribution approach (LDA), that of not having much data in the tail. People have some more data there now.
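
A minimal sketch of the LDA mechanics being discussed, assuming a Poisson frequency and a lognormal severity purely for illustration – the parameter values below are invented, not calibrated to any institution:

```python
# A rough Monte Carlo implementation of the loss distribution approach:
# simulate a year's worth of loss events many times, then read capital off
# the far tail of the aggregate loss distribution. All parameters are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)

LAMBDA = 25.0          # assumed mean number of loss events per year (Poisson)
MU, SIGMA = 10.0, 2.0  # assumed lognormal severity parameters
N_YEARS = 100_000      # number of simulated years

# Annual aggregate loss: a Poisson count of events, each with an
# independent lognormal severity.
event_counts = rng.poisson(LAMBDA, size=N_YEARS)
annual_losses = np.array([
    rng.lognormal(MU, SIGMA, size=n).sum() for n in event_counts
])

# Regulatory-style capital figure: the 99.9th percentile of annual losses.
# With little data in the tail, this quantile is exactly where the LDA
# estimate is least stable.
capital = np.quantile(annual_losses, 0.999)
print(f"Simulated 99.9% annual-loss quantile: {capital:,.0f}")
```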

Swenson: Perhaps one lesson learned from this crisis is that issues such as robo-signing or rogue trading represent an operational risk. And this leads to the question of whether operational risk should have had a seat at the table during the difficult days of 2008, and how controls should be put in place to prevent these types of risks in the future.

O’Brien: Scenario analysis is a good tool for explaining those capital numbers and, in a sense, justifying them. It is very difficult for operational risk professionals to explain their complicated models, and the rationale behind the capital numbers those models produce, to management – and scenarios do a great job of showing that such losses are plausible and have happened in other places.

McDermott: The scenarios can work well in that role. When you have an LDA approach, you have X billions of capital requirements, but what does it mean? What do we do about it? The scenarios can be placed alongside that, so even areas where there is enough loss data to do credible modelling can still gain a lot from scenarios as an illustration.

Are we going to see any kind of convergence in the number and type of scenarios used, in the way they are being used, or in the goals that institutions are trying to achieve by using them?
O’Brien: I don’t see that convergence yet. There is a very wide range of cases for which they are used and we are really just experimenting now to find the real value.

McDermott: As I understand it, the Japanese regulators have a set of scenarios that all of the banks are expected to execute – natural catastrophes especially. I’m not sure I see that happening in the US, not least because of the diversity of natural catastrophes that we face, so I’m not sure I would foresee any convergence in the near future. It is still a fairly young discipline.

Swenson: There is a lot of variation in how they are used, depending on how long they have been used, what institution is using them, and whether it is a retail bank, a wholesale bank or an investment bank. Banks are just trying to make it work for their institutions and their AMA frameworks and, if that is the immediate goal, then that might be a good thing. It might be natural to think there will be convergence at some point, but I don’t view it as being any time soon.

O’Brien: I see a lot of growth and change going on within individual firms too. In one institution I know, there are 1,000 scenarios in play at any given time, but that is very costly to maintain, and the bottom 300 of those contribute less than 2% to their capital number. So they are looking at how to prune this down and make it more efficient. Certainly, institutions are learning how to adapt the scenario analysis process and to make it much easier to implement and more sustainable within their institutions.
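
As a hypothetical illustration of the pruning exercise O’Brien describes, the sketch below ranks scenarios by their contribution to the capital number and flags the smallest contributors whose combined share falls under a 2% threshold. All scenario names and figures are invented:

```python
# Hypothetical scenario-pruning helper: rank scenarios by capital
# contribution and flag the smallest ones whose combined share of total
# capital stays under a threshold. Names and numbers are invented.
def prune_scenarios(contributions: dict[str, float], threshold: float = 0.02):
    """Split scenarios into (keep, prune) so that the pruned set jointly
    contributes less than `threshold` of total capital."""
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])  # smallest first
    pruned, running = [], 0.0
    for name, amount in ranked:
        if (running + amount) / total >= threshold:
            break
        pruned.append(name)
        running += amount
    keep = [name for name in contributions if name not in pruned]
    return keep, pruned

scenarios = {"cyber attack": 40.0, "rogue trader": 35.0,
             "vendor outage": 20.0, "badge misuse": 0.6, "petty theft": 0.4}
keep, pruned = prune_scenarios(scenarios)
print("keep:", keep)
print("prune:", pruned)
```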

Since the crisis, we have seen a lot of regulatory emphasis on higher capital levels. How will this affect banks that adopted the AMA largely because it allowed them to reduce operational risk capital – a benefit that may no longer be available?
O’Brien: Banks and financial institutions spend a lot of money on these risk management practices for two reasons. First, they want to improve their assessment of risk and have better risk management. Second, there is the allure of reducing their regulatory operational risk capital. Lowering the risk capital is a much easier return-on-investment argument to make to management to get the funding you need. On the other hand, regulators have put a lot more scrutiny on operational risk practices within financial institutions, which has forced them to improve. So, even without the added benefit of potentially lowering your risk capital, the effort that companies are undertaking to improve their risk management practices, especially within operational risk, is not going to fall by the wayside by any means.

McDermott: You can’t think about your operational risk management programme solely as a regulatory requirement or as something to reduce your capital. There are intrinsic benefits in managing your operational risk – it increases the certainty of meeting your business objectives. It is incumbent on the operational risk manager to make that case and show the benefit they have added, but there is very little appetite for cutting back operational risk programmes simply because they no longer deliver capital relief. You still need to manage your operational risk. Our regulator is very keen on that.

If institutions are drawing on sources of external loss data, what precautions and what caveats do they need to keep in mind to ensure that data remains applicable to them?
O’Brien: We should define what kind of external loss data we are talking about, because there are very different formats. For example, Algo FIRST is a database of publicly available case studies (descriptions of the losses). Certainly, the quantitative information is there, but there is also a lot of qualitative assessment of what went wrong, where the control failures were, and so on. Those are very useful because they help when you are talking to management about what can go wrong – pointing to things that have happened elsewhere is very helpful. Then you have a data consortium – for example, the Operational Riskdata eXchange Association (ORX) – which is clearly quantitative in the sense that you are tracking lots of different losses by region, product line, event type and the actual amount. But these losses carry very little information about the context of why the loss occurred. Ideally, banks will want to include as much data as is relevant to them from a risk exposure perspective.
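
A rough sketch of the two record types being contrasted – a consortium-style quantitative record versus a case-study record that adds qualitative narrative. The field names are illustrative assumptions, not the real ORX or Algo FIRST schemas:

```python
# Two illustrative record shapes: a consortium-style quantitative loss and
# a case-study loss that adds qualitative narrative. Field names are
# assumptions, not the real ORX or Algo FIRST schemas.
from dataclasses import dataclass

@dataclass
class ConsortiumLoss:
    region: str
    business_line: str
    event_type: str
    amount: float          # loss amount in a common reporting currency

@dataclass
class CaseStudyLoss(ConsortiumLoss):
    narrative: str         # what went wrong, which controls failed, context

records = [
    ConsortiumLoss("EU", "retail banking", "external fraud", 1.2e6),
    CaseStudyLoss("US", "trading", "internal fraud", 4.5e8,
                  narrative="Unauthorised positions concealed by a trader."),
]

# Keep only the records relevant to this firm's risk exposure profile.
relevant = [r for r in records
            if r.business_line == "trading" and r.amount > 1e6]
print(relevant)
```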

McDermott: There is a lot of value in the qualitative type of external data, where you have well-written descriptions of events with some context around them. It really helps with the scenario analysis because it helps us look for plausible events that we have not covered. It allows us to get a sense of what the severities and frequencies should look like – it gives us some benchmarking information. 

Swenson: There is also the process side of it. If you are the corporate-line or business-line risk manager, you are facing off against someone in the business line, perhaps an executive, and you need to tell them more about the business line than they know, which is not necessarily a tenable situation. But it is different if you have data to back you up about what happened at a peer organisation, with great detail about what kinds of controls were involved, exactly how it happened, what region it happened in, what mitigating actions were put in place and whether it was a recurring event. It gives you the ability as a risk manager to face off against the business line with information that almost puts you on a similar footing.

McDermott: That is an excellent point. We had to use the data exactly that way in a testy situation with our chief financial officer. This was four years ago with our first-generation programme, but there was a situation where they made the flat statement “that can’t happen”. We were able to go through in detail how it could happen, how bad it could be, what the control failures would be and what mitigating actions made sense.

If this is such a useful tool, do regulators or the industry need to be doing more to encourage institutions to share their loss data, to share these actual case histories with each other?
O’Brien: It is difficult to do that. When banks pool and share their quantitative loss data, it is anonymised before it is sent back out, so you don’t know where it has come from. It would probably be a lot more difficult to share the other kind of information – the qualitative information – without people being able to figure out who it came from.

Swenson: The challenge of sharing dirty laundry is still one of the barriers. Maybe that could be an aspirational thing but, right now, there are a lot of other fish to fry.

O’Brien: Even within banks, there are a lot of restrictions on who will be able to share this data. There is concern about certain kinds of risk events – having internal employees know too much about how losses occurred is not a good thing as far as internal fraud is concerned. There are lessons to be learned, but there is also very sensitive information about how things were actually broken that you don’t want to make known until you fix those things. One of the difficult problems for an operational risk manager when getting these programmes off the ground is to get people to be open about when there is a loss and what happened. It takes a while to build the credibility and assurance that it isn’t going to become a witch hunt. If there was some expectation that this information was going to get shared externally, it would be astronomically more difficult to get people to co-operate. And an operational risk manager, especially in the early days of the programme, is really reliant on the co-operation of the business units. You need to get them to play along.

Swenson: Data integrity is rather hard to ensure in loss tracking, even with regard to your own losses. So, when you are talking about taking someone else’s loss data and relying on the integrity of that information, that is another layer of faith.

O’Brien: Part of the challenge of using that data and making it appropriate for an individual bank is that you have to deal with the scaling issues – you have to scale the data and fit distributions to it to make it more appropriate for your individual bank. What the data providers are working on now is providing tools for individual banks to do that work on their own – they will scale the data and select regions and business lines that are relevant to them.
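
A minimal sketch of that scaling-and-fitting step, assuming a simple power-law size adjustment – the functional form, the 0.25 exponent and all of the figures are illustrative assumptions, not any provider’s actual methodology:

```python
# Illustrative scaling of external losses before fitting a severity
# distribution. The power-law form and 0.25 exponent are assumptions;
# firms calibrate their own scaling methodology.
import numpy as np
from scipy import stats

def scale_losses(losses, source_size, own_size, alpha=0.25):
    """Scale external loss amounts by a size ratio raised to an assumed power."""
    return np.asarray(losses) * (own_size / source_size) ** alpha

external_losses = [2.0e6, 8.5e6, 1.1e7, 4.0e7]   # reported by a larger peer
scaled = scale_losses(external_losses, source_size=900e9, own_size=150e9)

# Fit a lognormal severity distribution to the scaled data, fixing the
# location at zero (the usual convention for loss severities).
sigma, loc, scale = stats.lognorm.fit(scaled, floc=0)
print(f"fitted sigma: {sigma:.2f}, fitted median loss: {scale:,.0f}")
```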

Recognising that the use of the four data elements of the AMA is useful for managing operational risk, is there actually a risk management benefit in quantifying it in capital terms? Some banks feel that the capital model adds little value.
McDermott: That is exactly the reason that scenario analysis works so well for us. We have a number of major scenarios – they result in significant losses and they are very plausible. We have the known internal experts on these things participating in the workshop and we use very detailed scenarios. It takes us about a year to run through the programme, and hundreds of hours of staff time, but it results in these very credible scenarios. And, if we didn’t go through the step of modelling and getting the capital out of it, we wouldn’t have those dollar-denominated risks. The risk measurement is the most important part of the capital piece for us, and it can give us a real lever to stimulate business action.

O’Brien: I agree. The main driver for operational risk management is improving risk management practices, reducing operational losses, improving your internal control framework, and preventing and anticipating significant events that could affect the institution. These are all critical to the operational risk discipline. But understanding unanticipated events, especially large ones, is critical – and the capital modelling process drives you towards that more so than the qualitative assessment approach. So there is a lot of added value in pushing towards the capital modelling side. It also forces you to look at the correlation between different factors – there is not a lot of that happening otherwise, and the capital modelling process seems to drive a lot of it within the banks with which we work.
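
To illustrate why those correlation assumptions matter, here is a sketch using a simple variance-covariance-style aggregation across three hypothetical units of measure – real AMA models often use copulas instead, and every number below is invented:

```python
# Why correlation assumptions matter: aggregate standalone capital across
# three hypothetical units of measure with a variance-covariance rule.
# All figures and the correlation matrix are invented.
import numpy as np

standalone = np.array([100.0, 60.0, 40.0])   # standalone capital per cell

corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])

aggregated = np.sqrt(standalone @ corr @ standalone)
print(f"simple sum (perfect correlation): {standalone.sum():.1f}")
print(f"with assumed correlations:        {aggregated:.1f}")
```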

Is there a sense that the BEICF area is the weak link in operational risk capital modelling? Is that the area in which most caution needs to be exercised because you are not drawing on industry-wide data sources and on generally acknowledged methods of calculating scenarios?
McDermott: It might be the place where you make your capital credible because it is a reflection of your institution, it is not a reflection of someone else’s institution.

Swenson: Another way to look at it is that expert judgement is used to run a firm, and it can be a competitive differentiator. Understanding the firm well – conducting risk assessments, talking about trends across different business areas, regions and products, and going through this back-testing process – turns operational risk almost into a form of validation or quality assurance. If used in that context, it could be a strength rather than a limitation.

O’Brien: Institutions I have worked with that have had poor risk management practices have usually had poor internal control frameworks and other processes as well. Their business processes are not operating very well overall, so I don’t know if there is cause and effect. Improving both should really be the goal.
