This article was paid for by a contributing third party.

Integrating operational risk into business management

Sponsored webinar: MetricStream

Operational Risk & Regulation convened a panel, sponsored by MetricStream, to discuss methods of incorporating an operational risk framework within decision-making processes and real-time risk awareness, as well as the effective use of loss data and the role of technology in risk-reporting.

The Panel

MetricStream
Brenda Boultwood, Senior Vice President, Industry Solutions

Union Bank
Marty Blaauw, Senior Vice President, Operational Risk Manager

Bank of America
Daniel Varghese, Senior Vice President, Corporate Audit, Operational 

Listen to the full proceedings of the MetricStream webinar: Integrating operational risk into business management

What mechanisms are available to integrate operational risk into business management?
Daniel Varghese, Bank of America: The first thing is ensuring that the tone and the policy have been set. For example, each bank sets its standards of behaviour through a code of conduct, and all employees know that the operational risk framework is part of the bank's risk culture. What that means is ensuring each employee understands the risk function and the role that risk plays in business decisions.

Marty Blaauw, Union Bank: The tone at the top is very important. At a more tactical level, scenario workshops are one of our most successful tools, due to the in-depth risk analysis with senior managers. We create operational risk reports specific to each business line in conjunction with the business unit risk managers, which drive a dialogue on the operational risk profile. Also, the capital allocation process creates a lot of discussion between the business lines on what impacts their allocation of the capital and provides incentives for improved control.

Brenda Boultwood, MetricStream: In addition, we are seeing a desire to create a well-defined risk appetite that includes statements about operational risk, then linking that risk appetite to our business plans and making sure it is reflected in our policies. That then has to be linked into our strategic plans to ensure we can achieve the targets, whether that is growth, earnings or other objectives, within the risk appetite that we have set. Operational risk is often measured using various key risk indicators (KRIs), which might get reported against certain statements of risk appetite, or risks that have been identified in the risk assessments – whether they are self-assessments or assessments performed by operational risk teams. We are also seeing operational risk embedded into the use test so that, as new products, large deals, new markets or potential merger and acquisition activity are introduced, we ask whether the company is fully accounting for the operational risk that could result from such new activities.
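
As a rough illustration of reporting KRIs against statements of risk appetite, the Python sketch below flags each indicator as within appetite or in breach of its tolerance. The KRI names, values and thresholds are hypothetical, not any panellist's actual framework.

```python
# Illustrative sketch only: checking KRI values against appetite tolerances.
from dataclasses import dataclass

@dataclass
class KRI:
    name: str
    value: float
    tolerance: float  # the appetite threshold agreed with the business

def appetite_report(kris: list[KRI]) -> list[str]:
    """Flag each KRI as within appetite or in breach of its tolerance."""
    lines = []
    for kri in kris:
        status = "BREACH" if kri.value > kri.tolerance else "within appetite"
        lines.append(f"{kri.name}: {kri.value} vs tolerance {kri.tolerance} -> {status}")
    return lines

if __name__ == "__main__":
    sample = [  # hypothetical indicators for illustration
        KRI("failed_trades_per_day", 14, 10),
        KRI("open_audit_issues", 3, 5),
    ]
    print("\n".join(appetite_report(sample)))
```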


Which is the most common model for integrating operational risk, and who is following it?
Brenda Boultwood: I think there is definitely an aspiration to create linkages between risk management, compliance and internal audit, and even areas such as vendor management and the way we are managing business continuity or our response to disasters. A lot of companies, once they begin that integration, see there is tremendous overlap. For example, the controls that might be tested as part of an internal audit review of a certain business area may overlap significantly with an area of compliance, such as Sarbanes-Oxley. So you are getting to the point where you perform the test once and leverage it often.

Marty Blaauw: I think that many banks are really struggling with how these independent groups really work together and leverage each other’s work. I know we are moving forward with that at Union Bank, and I would hope that in a few years we will have made more progress. I think many of us are really striving to make things more streamlined and simpler for the business. At Union Bank, we finally have the risk assessments for regulatory compliance, Sarbanes-Oxley and operational risk all on the same platform. We each have somewhat customised assessments, but all control documentation and testing is documented in that same platform, giving business unit risk managers a comprehensive view of their control environment. I wouldn’t say we are fully integrated, but we are really trying to leverage each other’s work.


One of the big regulatory trends over the last few years has been pushing more and more regulatory requirements onto senior management, and onto the board of directors in particular. If the optimum would be for constant real-time awareness, how practical is this?
Daniel Varghese: The trend we have seen is more real time, especially with the new risk data aggregation principles that are coming out in 2016. Senior management and the board need to make decisions and we all know that stale data is not going to help. Real-time awareness is essential for senior management.

Brenda Boultwood: There is definitely regulatory pressure to improve the data quality in terms of both accuracy and timeliness, but also to ensure that the data that senior management and the board are receiving is relevant. If there is a six-week lag before the board receives the information, that is no longer considered good enough. Whether it is market risk, credit risk, operational risk, vendor supply chain risk or IT control risk, having that information in real time is obviously a goal.

Marty Blaauw: Before you put it before the board, you need to ensure the data is good, accurate and complete. That said, you do not want to wait six weeks to do that, so we put our board reports out on a quarterly basis within the first few weeks of the end of each quarter. Those are actually enterprise risk reports that include credit risk, op risk, compliance, market risk – all the risk types in one consolidated risk report. That is good for looking at trends and significant events. We are also trying to get closer to real time by looking at KRIs and risk appetite statements. We are asking the business lines to identify the most important metrics they are monitoring and ensure they are monitoring those on a real-time basis. We are certainly not completely there yet, but that is where we are trying to go. We will still be pushing the general operational risk information up to the board on a quarterly basis, but more in the form of trending, directional risk profile, and reporting on areas where tolerance levels have been exceeded.
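
As a rough sketch of the quarterly trending Blaauw describes, the snippet below aggregates loss events by quarter and reports the direction of travel. The field layout and figures are invented for illustration.

```python
# Illustrative sketch: quarterly loss trending for a board-style report.
from collections import defaultdict

events = [  # (quarter, loss amount) -- hypothetical internal loss events
    ("2015Q1", 120_000), ("2015Q1", 40_000),
    ("2015Q2", 75_000), ("2015Q2", 310_000), ("2015Q2", 22_000),
]

totals: dict[str, float] = defaultdict(float)
counts: dict[str, int] = defaultdict(int)
for quarter, amount in events:
    totals[quarter] += amount
    counts[quarter] += 1

previous = None
for quarter in sorted(totals):
    direction = ""
    if previous is not None:
        direction = "(up)" if totals[quarter] > previous else "(down)"
    print(f"{quarter}: {counts[quarter]} events, total {totals[quarter]:,.0f} {direction}")
    previous = totals[quarter]
```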


Perhaps the board and senior management do not actually need to see all this real-time data?
Daniel Varghese: At the senior management level, they need to move towards receiving automatic reports and dashboards. The slower you get the data, the slower your decisions are, and I think the industry and the regulatory environment realise this as well, so we will need to move towards automatic data capture.


Where does compliance risk management fit into op risk management? Is that something that you identify as a discrete risk, and how is that reflected in the way organisations are set up?
Marty Blaauw: We do see compliance risk as a distinct risk type, so when we report to the board, we report on operational risk and we also report separately on compliance risk. When a business is looking at rating its overall operational risk, compliance risk is one element considered in the rating. We have some overlapping reporting – we have separate compliance and operational risk committees – but we attend each other’s committees, we report to each other’s committees and we share some of that reporting.

Daniel Varghese: This goes back to the question of what fundamental policies you have, the role that compliance plays and the role that risk plays. They should be able to share their information and rely on each other but still be able to provide their independent view to management of where the company is in terms of op risk, or in terms of complying with a specific regulation. If compliance is testing, does op risk need to carry out the same testing? They should be able to leverage where it is possible so it is not a ‘check the checker’ type of exercise. But it is crucial for organisations to report to the board and to management on both roles.


It is a requirement to use scenario analysis as part of your capital calculation, but how can you use the same techniques for business management, for making strategic decisions?
Brenda Boultwood: Scenario analysis is one of the important tools you can use to integrate operational risk into business management. But I would say that scenario planning as part of an operational risk management framework is probably the most challenging part, and perhaps one of the most immature parts, of operational risk management. However, there is great interest in ensuring that we not only understand the relevant risks as we perform our risk assessment, but that we are asking our subject matter experts in the business and in our functions to think about what would happen if key controls broke down, or if there was an environmental change. Often, we are conducting workshops and bringing in subject matter experts to speak about the impact of certain changes – whether that is a regulatory change or a change in a market. It is one thing to report these standard metrics and these trends, but what the board wants to know is that we are also thinking about what could go wrong – the scenarios that might challenge our capital levels or create a significant drop in earnings. Bringing relevant scenarios to management and to the board is something you can do as a head of operational risk or a chief risk officer.
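
One common way to turn scenario estimates into a capital figure is a frequency/severity Monte Carlo, the loss distribution approach. The sketch below is illustrative only: the Poisson and lognormal parameters, and treating the 99.9% one-year quantile as the capital measure, are assumptions for the example, not any panellist's model.

```python
# Illustrative sketch: a toy loss distribution approach for scenario-based capital.
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(n_years: int, freq_lambda: float,
                           sev_mu: float, sev_sigma: float) -> np.ndarray:
    """Poisson number of events per year, lognormal severity per event, summed."""
    counts = rng.poisson(freq_lambda, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])

# Hypothetical scenario calibration: ~2 events per year, heavy-tailed severities.
losses = simulate_annual_losses(50_000, freq_lambda=2.0, sev_mu=11.0, sev_sigma=1.5)
var_999 = np.quantile(losses, 0.999)  # 99.9% one-year quantile
print(f"Simulated 99.9% annual loss quantile: {var_999:,.0f}")
```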

Marty Blaauw: We do a lot of planning when we do our scenario workshops, and we prepare a package for all of the workshop participants with the results of our analysis. Scenario workshops bring together people from different silos, and what we do in corporate op risk is to supply them with as much data as possible to inform the discussion. We pull in cases from the IBM First database on what has happened at other institutions, as well as internal loss events and other data, so we can have some really good conversations. Our workshops have been very successful in integrating the business lines with risk management, particularly when we have changes. If we have an emerging risk, we can pull everyone together and talk about the upstream and the downstream controls and really get a feel for everything that could potentially be impacted.

Daniel Varghese: Another example of where risk plays a role is looking at strategic decisions or new product reviews. It is essential that the scenario analysis data or the internal loss data is being pulled in. It is fundamental to consider how our operational risk environment will be able to help support the new decision we are making and, if not, whether management is willing to take that risk.


How do you address the issues around using internal or external operational risk data?
Marty Blaauw: They have different purposes. We use internal loss data primarily in our capital calculation and to benchmark our model. Where we have insufficient internal data, we have to supplement that occasionally with external data from a consortium. But we use both really heavily when we are doing our scenario workshops. Every time we have a scenario workshop, we go through the Fitch database and pull anything we think could be relevant. Then we can challenge the business line by saying: ‘This happened at this institution, could it happen here? And, if so, how big could it be in our institution? How relevant is this to us?’ From the consortium data, you generally do not get a lot of detail about the control environment, but you can do a comparison with the size of the losses your peers have sustained in that particular business line for that particular event type. We also use the internal loss history very heavily in our scenario analysis. If we have ever taken a large class action lawsuit or a fine in a particular area, we have to have a discussion about whether that could happen again, and how big it could be.

We use that same loss data to compare back against the risk and control self-assessments. If we had a large historical loss or we see large losses in the industry, yet the business has not identified that as a high-risk area, we use that as a challenge – we have a discussion about whether we are missing a risk or whether we feel that we have improved the controls and are no longer susceptible to that risk. We also use loss data a lot when we are looking at change. If we are looking at an acquisition, we may look at the industry data, particularly if it is a new business line for us, and talk to senior management about what we are seeing in the industry in relation to that business line to help inform them. So both the external data and the internal data are used – depending on the type of data, we use it for different purposes.
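
The peer comparison described above can be sketched in a few lines: filter external consortium events by business line and event type, then look at the severities. The record layout and figures below are hypothetical.

```python
# Illustrative sketch: benchmarking peer loss severities from external data.
external_events = [  # hypothetical consortium records
    {"business_line": "retail banking", "event_type": "external fraud", "loss": 2_500_000},
    {"business_line": "retail banking", "event_type": "external fraud", "loss": 900_000},
    {"business_line": "trading", "event_type": "execution error", "loss": 4_100_000},
]

def peer_losses(events, business_line: str, event_type: str) -> list[float]:
    """Severities of peer events in the given business line and event type."""
    return sorted(
        e["loss"] for e in events
        if e["business_line"] == business_line and e["event_type"] == event_type
    )

losses = peer_losses(external_events, "retail banking", "external fraud")
if losses:
    # Workshop challenge: could this happen here, and how big could it be?
    print(f"{len(losses)} peer events, largest {max(losses):,.0f}")
```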

How can you pick out the most relevant features of a loss data point?
Brenda Boultwood: You have to understand the details of the data that is being reported, as well as the requirements for reporting into the consortium. But, once that data has come from an external source into your company, it should go through data quality checks and some form of normalisation to be brought into your database. It is an expectation that there is some type of quality review happening inside your company – you have to ensure that a human being is actually looking at the events that are coming from these outside sources to ensure that they are relevant to the business areas, to the Basel categories or the causal factors that they have been tied to by that third party. But you also need to check that the data is actually accurate. The person responsible for the data review might accept it all or they may reject certain cases, but it is important to provide evidence of that process of review of the data coming into your databases, whether it is used to help you supplement internal loss event data or used for building your internal scenarios. It is important that you can vouch for that data quality.
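
As an illustration of the review step Boultwood describes, the sketch below runs each incoming external event through basic quality checks and records an accept/reject decision with the reviewer's name, so there is evidence of the review. The field names and checks are assumptions for the example.

```python
# Illustrative sketch: quality review of external loss events with an evidence trail.
from datetime import date

def quality_checks(event: dict) -> list[str]:
    """Return a list of data quality problems; an empty list means the event passes."""
    problems = []
    if event.get("loss", 0) <= 0:
        problems.append("non-positive loss amount")
    if not event.get("basel_event_type"):
        problems.append("missing Basel event type")
    if not event.get("business_line"):
        problems.append("missing business line mapping")
    return problems

def review(event: dict, reviewer: str) -> dict:
    """Attach an accept/reject decision, retained as evidence of the review."""
    problems = quality_checks(event)
    event["review"] = {
        "reviewer": reviewer,
        "date": date.today().isoformat(),
        "status": "rejected" if problems else "accepted",
        "problems": problems,
    }
    return event

incoming = {"loss": 1_200_000, "basel_event_type": "Clients, Products & Business Practices",
            "business_line": "wealth management"}
print(review(incoming, reviewer="j.smith")["review"]["status"])
```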

When you are talking about internal loss data, it is challenging enough to go through the investigation – certain loss events can actually take years to fully investigate and close out, especially if you are looking at recoveries and other factors that impact the full value of that loss. But then where do we take that data? If it is internal data, do we link it back to the control that failed? If we had a loss or a near miss, do we tie it back to the control, and then force the re-rating of that control environment and perhaps trigger a new risk assessment in that business? You need the ability to take that internal and external data and show you are triggering action, that the operational risk information is being used to create activity and potentially change results, whether it is for an overall risk assessment or thinking about the way you are calculating your own capital numbers from those external loss events.
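
A minimal sketch of that linkage: when a loss or near miss is tied back to a control, the control is re-rated and the owning business is flagged for a fresh risk assessment. The identifiers, ratings scale and trigger rule below are illustrative assumptions.

```python
# Illustrative sketch: tying a loss event back to a failed control.
controls = {
    "CTRL-017": {"business": "payments", "rating": "effective", "linked_losses": []},
}
reassessment_queue: list[str] = []  # businesses flagged for a new risk assessment

def record_loss_event(control_id: str, amount: float, near_miss: bool = False) -> None:
    """Link a loss or near miss to the control that failed and trigger action."""
    control = controls[control_id]
    control["linked_losses"].append({"amount": amount, "near_miss": near_miss})
    control["rating"] = "needs improvement"  # force a re-rating of the control
    if control["business"] not in reassessment_queue:
        reassessment_queue.append(control["business"])

record_loss_event("CTRL-017", amount=250_000)
print(controls["CTRL-017"]["rating"], reassessment_queue)
```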


Is it a technological issue that means there are very few companies actually using any kind of real-time operational risk management reporting method?
Daniel Varghese: Technology plays a vital role. The larger the institution, the more we are going to rely on technology to consolidate the data and to ensure it is used for risk management purposes. The data is only going to be as good as what the user has put in, so the more details we can search and review on – whether it is the causes or the processes or the risk categories – the better. It is crucial for risk management purposes to try to be as predictive as possible on what our risks are. In smaller institutions, technology might not play such a vital role but, in the collection of loss data, the more we use the tool, the better it is for data aggregation and for reporting. At the beginning, we all used Excel. That was probably not the best way, but it is where most people started. Now we are using more advanced tools – people have either brought in technology to support the internal loss data collection or built it themselves, because they needed to do risk management data aggregation to help in managing risk, in predicting the next loss area or constructing controls. So I think it is vital to move as near to real time as possible.

Marty Blaauw: For us, technology is critical. While we do not have dashboards for senior management in real time, we slice and dice this data in many different ways, and I would say that our business unit risk managers are heavy users of our op risk technology. We load our loss data into the risk assessment database. We even load in the external losses that we think are relevant to them, so they can align them into the risk assessment. We slice and dice this data by business line, we provide reports for the divisions, for the groups, for senior management, for the operational risk committee and for the risk and capital committee. Without that technology support, you cannot actively collect and manipulate the data as easily. We collect the loss data directly into our system and we pass the entries to charge them off to the general ledger. So, for the most part, people cannot even get their loss charged off unless we get it in our system. That means we are getting timely and accurate loss data and can reconcile it, and that also gives the businesses a real-time view of their loss data.

It is also how we give visibility to audit, compliance, Sarbanes-Oxley and op risk – by sharing that data through a shared platform. So, for us, it is key and it is really what has pushed us towards stronger risk management.
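
The reconciliation Blaauw describes, where losses are charged off only through the op risk system and so should tie out to the general ledger, might look something like the toy sketch below. The business lines and amounts are hypothetical.

```python
# Illustrative sketch: reconciling op risk system loss totals to the general ledger.
op_risk_system = [  # (business line, charged-off loss amount)
    ("retail", 50_000), ("retail", 12_000), ("trading", 7_500),
]
general_ledger = {"retail": 62_000, "trading": 7_500}  # hypothetical GL totals

by_line: dict[str, float] = {}
for line, amount in op_risk_system:
    by_line[line] = by_line.get(line, 0.0) + amount

for line, gl_total in general_ledger.items():
    diff = by_line.get(line, 0.0) - gl_total
    status = "reconciled" if abs(diff) < 0.01 else f"break of {diff:,.2f}"
    print(f"{line}: {status}")
```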

Brenda Boultwood: Small banks may feel they can get by with the Microsoft Office Suite and Excel spreadsheets – that might serve at that point in their development to house and manage the data they need, whether it is for a risk assessment or a loss event. But, as the size of the organisation grows, technology really does become imperative. I also want to highlight a few other points. One is that when we talk about operational risk data, we are talking about several different types of data. We are talking about loss event data, risk assessment, KRIs, the scenario analysis and the capital calculation and analytics you might perform on that information. We are not necessarily seeing it in the US yet but, in some regulatory jurisdictions, having a siloed approach to collecting the different types of operational risk data – one system for the risk assessment, another system for loss event data – is really no longer good enough. Regulators in these jurisdictions are looking for the ability to create linkages between, for example, a large internal loss and the specific control failure that occurred, or between your KRIs and your risks. You might also have key control indicators linking those to the specific controls. I think this will become a requirement everywhere and that makes technology really critical, even for a small company, because it is hard to do that in a two-dimensional spreadsheet.


Would it be accurate to say there is now more regulatory attention on this and, therefore, people are keener to demonstrate that they are doing something about it?
Daniel Varghese: With the regulatory environment, we need to be able to show how we are presenting the information to management from a risk standpoint and technology is very important. It is going to help us to provide the evidence we need of how we reached our conclusions, and what we based our decisions on. Manually pulling data is always going to be a concern, so the more manual an environment you have, the more the data could be in question. Not that technology is going to be an answer for all data accuracy, but it will help. You have to do what is right for your institution. If you find that you are in a larger organisation or you have many more data points, technology will help in your decision-making, and help you substantiate it by showing how you came to your conclusion.

Marty Blaauw: There is a regulatory expectation that we look at this holistically and that the data be tied together. We pull all of our loss data and the external loss data into the risk assessment. There is an expectation that you are back-testing your risks against your data. You need the technology to tie that all together and see it all in one place.
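
As a rough illustration of back-testing risks against loss data, the sketch below flags any risk the business has rated low that nonetheless shows large historical losses. The ratings, threshold and data are invented for the example.

```python
# Illustrative sketch: back-testing risk assessment ratings against loss history.
risk_ratings = {"unauthorised trading": "low", "processing errors": "high"}
loss_history = {
    "unauthorised trading": [3_000_000, 450_000],
    "processing errors": [20_000],
}

LARGE_LOSS = 1_000_000  # hypothetical challenge threshold
for risk, rating in risk_ratings.items():
    large = [x for x in loss_history.get(risk, []) if x >= LARGE_LOSS]
    if rating == "low" and large:
        print(f"Challenge: '{risk}' rated low but has {len(large)} large loss(es)")
```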

Brenda Boultwood: If you look at the public disclosures of large banks, you see that the capital numbers they report are largely operational risk capital numbers. I think that organisations are learning through the unfortunate large loss events we have seen in the headlines, but also growing more aware as their own risk assessment techniques improve. Operational risk management is really an important way of making better business decisions.
