
The headaches of op risk integration

The banking industry, spurred by Basel II, is acquiring systems to cope with the host of disparate issues that fall under the rubric of ‘operational risk’. But how do you stitch these piecemeal solutions together? Clive Davidson reports


Banks and third-party software suppliers have been working hard to develop a range of tools to deal with different aspects of operational risk. But to provide consistent and meaningful results on which to base an effective risk management process, as well as for regulatory capital calculations, particularly in the light of Basel II, the individual tools must be integrated. While many banks and systems vendors have made strides in developing individual tools, few have managed to integrate these into a complete operational risk management framework.

The Basel Committee on Banking Supervision set out its view of the key elements of an operational risk framework in a set of guidelines published in February (Sound Practices for the Management and Supervision of Operational Risk). These reflect much of the practice that banks have been developing over the past few years, supported by consultants and software suppliers.

The first thing a bank must do is define what operational risk is within its organisation, then devise a strategy for how it should be identified, assessed, monitored and controlled or mitigated, says the Basel Committee. Each of these elements – identification, assessment and so on – requires one or more tools that will enable the bank to tackle it in as efficient, systematic, thorough and objective a manner as possible, says Mike Finlay, founder of London-based operational risk management consultancy Raging Torrents. Basel II also gives banks the incentive of reduced regulatory capital against operational risk through an advanced measurement approach, which in turn creates the need for some form of capital modelling tool.

Identifying and assessing operational risk can be approached from several directions, such as self-assessment, through staff workshops and/or questionnaires. Internal data on losses experienced by the bank reveal past and ongoing exposures, while external data on losses experienced by other institutions can highlight potential risks, especially of low-frequency, high-severity events. Analytical models and scenario analysis can help a bank quantify the probability of events and their possible impact.

Self-assessment tools have been available for some time and are widely used in other industries. As banks and their regulators have turned their attention to op risk, traditional self-assessment tool suppliers such as Carddecisions, based in Ontario, Canada, and Methodware, based in Wellington, New Zealand, have adapted their products to the financial industry, while several financially focused companies, such as New York-based JP Morgan Treasury Services and dbs Financial Systems, based in Poole, England, have brought out new tools for the purpose. Methodware recently took over the op risk management products of London-based Amelia Financial Services after the latter went into receivership in December, and has integrated them to create a product suite called Enterprise Risk Assessor.

Triangle philosophy
JP Morgan Treasury Services’ Horizon product is based on what the company calls a “triangle philosophy” – a self-assessment module gives a current and forward view of risk, an audit module validates the self-assessment results, and a key performance indicator module, which draws on data such as actual events and losses, gives a historical view. “These three elements act as checks and balances to each other, and should give institutions a cohesive and effective view of their operational risk,” says Craig Spielmann, Horizon executive at JP Morgan Treasury Services. Horizon is used by 17 major financial institutions, he says.

Dbs Financial has a module in its dbsAccord operational risk management system to support managers running workshops in their business units, where they meet staff to identify risks and score them for likelihood and potential impact on the institution. “The manager records the information on a laptop computer and later uploads it to the central database,” says Andy Blackburn, technical director at dbs Financial. DbsAccord grew out of a custom system that dbs Financial built for the overseas division of a major UK bank, which the company will not identify; dbs Financial has since turned it into a general product that it is demonstrating to a number of banks and consultants.
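By way of illustration, a workshop record of this kind can boil down to little more than a likelihood and an impact score per risk. The sketch below ranks risks by a simple likelihood-times-impact rating; the field names and the 1–5 scales are assumptions for illustration, not dbsAccord’s actual design.

```python
# A minimal sketch of workshop-style risk scoring; the field names and
# 1-5 scales are illustrative assumptions, not dbsAccord's design.
from dataclasses import dataclass

@dataclass
class RiskScore:
    risk_name: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (minor) to 5 (severe)

    def rating(self) -> int:
        # A simple likelihood x impact product, often used to rank risks;
        # a real system may weight or band these differently.
        return self.likelihood * self.impact

scores = [
    RiskScore("Settlement failure", likelihood=3, impact=4),
    RiskScore("Key-person dependency", likelihood=2, impact=3),
]
for s in sorted(scores, key=lambda r: r.rating(), reverse=True):
    print(f"{s.risk_name}: rating {s.rating()}")
```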

Collecting internal loss data has been a starting point of an operational risk management programme for many institutions. It is possible to do it with general spreadsheets or databases, but this can have limitations, warns Peter Hill, operational risk solution manager for Toronto-based risk management systems supplier Algorithmics. “Many banks are already collecting loss event data using forms that have been created in Excel, for example,” he says. “The disadvantages of such an approach quickly become apparent. There is potentially a lot of information on any loss event to collect, certainly if it is to become meaningful in later management information reporting processes and analytics. Such an approach may work for periodic reporting, but a fully web-enabled system becomes paramount the larger and more geographically and functionally diverse the organisation becomes.”
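Hill’s point about the volume of information per event can be made concrete. A minimal loss record, assuming the Basel II business-line and event-type categorisations, might look like the sketch below; the field names are illustrative rather than any vendor’s schema.

```python
# A minimal sketch of a loss event record, assuming the Basel II
# business-line and event-type categorisations; field names are
# illustrative, not any vendor's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class LossEvent:
    event_date: date
    business_line: str       # e.g. one of the eight Basel II business lines
    event_type: str          # e.g. one of the seven Basel II event types
    gross_loss: float        # loss amount before recoveries
    recoveries: float = 0.0  # insurance and other recoveries
    description: str = ""

    @property
    def net_loss(self) -> float:
        return self.gross_loss - self.recoveries

event = LossEvent(date(2003, 2, 14), "Retail Banking",
                  "Execution, Delivery & Process Management",
                  gross_loss=25_000.0, recoveries=5_000.0,
                  description="Mis-keyed payment, partially recovered")
print(f"Net loss: {event.net_loss:,.0f}")
```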

Algorithmics provides a module designed to handle loss data with its Algo OpRisk suite of operational risk tools, which is used by UK bank Halifax Bank of Scotland (HBOS). Germany’s Commerzbank also uses some of Algorithmics’ OpRisk tools, as do two other banks that the company will not name. Connecticut-based OpRisk Analytics and New York-based OpVantage also offer internal data tools, as well as providing so-called external data on loss events experienced across the financial industry.

More than 10 banks use OpRisk Analytics’ data service, and Bank of New York, Deutsche Bank, Fortis Bank and Société Générale are beta testing its suite of other operational risk tools. OpVantage has more than 20 users of its data service and 20 using its suite of tools. There are also consortia initiatives whereby banks plan to share their loss data, such as the Zurich-based Operational Riskdata eXchange Association.

“But everyone needs to start [loss data capture] internally,” says Jonathan Howitt, director of operational risk at Dresdner Kleinwort Wasserstein (DrKW), which drew on the software development expertise of London-based Raft International when building its operational risk toolset, Radar; Raft has subsequently turned Radar into a third-party product. Internal data provides the most useful information, he says, and the best use of external data is to compare it with internal losses. But the external data has to be relevant and applicable to the bank.

Once risks are identified and assessed they must be monitored, and the Basel Committee recommends the use of key risk or early warning indicators, which should be “forward looking and could reflect potential sources of operational risk such as rapid growth, the introduction of new products, employee turnover, transaction breaks, system downtime and so on”. Key risk indicators require a database to capture and track information, with the ability to set thresholds and report on breaches, as well as analyse performance and trends. If the database allows the input of information on actions to be taken on issues highlighted by the indicators, it can also become an operational risk control mechanism. A major European bank that does not wish to be identified, and which has developed its own suite of operational risk management tools, has added the ability to note actions against its risk indicators and assign responsibilities for the actions to individuals, as well as due dates. It also includes an escalation mechanism if due dates are missed.
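The mechanics of such an indicator database are straightforward to sketch. The example below checks a single indicator against amber and red thresholds and escalates when an assigned action’s due date has passed; the indicator, thresholds and statuses are illustrative assumptions, not the unnamed bank’s design.

```python
# A minimal sketch of threshold monitoring for a key risk indicator,
# with escalation when an assigned action is overdue. Indicator names,
# thresholds and statuses are illustrative assumptions.
from datetime import date
from typing import Optional

def check_indicator(name: str, value: float, amber: float, red: float,
                    today: date, action_due: Optional[date] = None) -> str:
    # Classify the reading against its thresholds.
    if value >= red:
        status = "RED"
    elif value >= amber:
        status = "AMBER"
    else:
        status = "GREEN"
    # Escalate if a remedial action was assigned and its due date has passed.
    if status != "GREEN" and action_due is not None and today > action_due:
        status += " - ESCALATED (action overdue)"
    return f"{name}: {value} -> {status}"

print(check_indicator("System downtime (hours/month)", 9.5,
                      amber=4.0, red=8.0,
                      today=date(2003, 4, 1), action_due=date(2003, 3, 15)))
```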

Capital calculation module
The other major element that many banks will want in their framework, especially if they are planning to take the advanced measurement approach of Basel II, will be an analytics or capital calculation module. Such a module could be used with internal or external data to calculate capital to be held against unexpected losses. A number of banks initiated their operational risk management programmes with a capital model and external data, calculating risk capital then attempting to allocate that down through the organisation, as opposed to starting with the internal assessment of risk at the ground level of the organisation and working up to an overall calculation of the capital requirement. However, there is considerable debate about the merits of these top-down versus bottom-up approaches.

“A lot of people start out thinking that the top-down approach is the only approach that would work because of the lack of internal data,” says Ali Samad-Khan, chairman and chief executive officer at OpRisk Analytics. “But while the paucity of internal data is indeed the problem, top-down modelling may not be the solution. After all, even with 10 years of loss experience, it is unlikely that an institution will have enough data to precisely measure its exposure to tail events, which is what drives value-at-risk. A better solution is to use external data. But external data, because it comes from institutions that, for example, have different sizes or control cultures, cannot be used directly.”
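The tail-driven nature of the calculation is easy to see in a loss distribution approach, a modelling technique commonly associated with the advanced measurement approach. The sketch below simulates annual aggregate losses from an assumed Poisson frequency and lognormal severity and reads off the 99.9th percentile; every parameter is invented, and fitting them reliably is precisely where internal data runs short.

```python
# A minimal sketch of a loss-distribution-approach calculation: simulate
# annual aggregate losses from an assumed Poisson frequency and lognormal
# severity, then read off the 99.9th percentile. All parameters are
# invented; fitting them is exactly where internal data runs short.
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000               # number of simulated years
freq_lambda = 25                # assumed mean loss events per year
sev_mu, sev_sigma = 10.0, 2.0   # assumed lognormal severity parameters

counts = rng.poisson(freq_lambda, n_years)
annual = np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])

var_999 = np.quantile(annual, 0.999)   # 99.9% aggregate annual loss
capital = var_999 - annual.mean()      # capital against unexpected losses
print(f"99.9% VaR: {var_999:,.0f}; unexpected-loss capital: {capital:,.0f}")
```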

But combining internal and external data at the firm level is not a good idea. It assumes that the internal business mix is identical to the average external business mix, which in most cases is unlikely. Furthermore, even if a bank were able to reliably calculate risk capital at the firm level, it would still be necessary to allocate this capital to the business lines. “Unfortunately, this cannot be done without using a credible bottom-up method. It is just not possible to allocate capital on simplistic criteria such as business unit size or revenues, because that ignores the different risk profiles various business units might have. Doing so would not only be inaccurate, but could also create perverse incentives,” says Samad-Khan.
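A toy example makes the point. All figures below are invented: two business units with identical revenues but very different risk profiles receive identical charges under revenue-based allocation, while a risk-based allocation charges the riskier unit more.

```python
# A toy illustration (all figures invented): two units with equal revenue
# but different standalone risk. Revenue-based allocation charges them
# the same; risk-based allocation does not.
firm_capital = 100.0

units = {
    # name: (annual revenue, standalone 99.9% risk measure)
    "Payments": (50.0, 20.0),   # high-volume, low-severity business
    "Custody":  (50.0, 80.0),   # exposed to rare, severe events
}

total_revenue = sum(rev for rev, _ in units.values())
total_risk = sum(risk for _, risk in units.values())
for name, (rev, risk) in units.items():
    by_revenue = firm_capital * rev / total_revenue
    by_risk = firm_capital * risk / total_risk
    print(f"{name}: {by_revenue:.0f} by revenue vs {by_risk:.0f} by risk")
```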

Lloyd Hardin, managing director of OpVantage, agrees that it is necessary to take both approaches. “A top-down approach allows you to get some ballpark figure of the total capital that is required for the safety of the institution, but it doesn’t really help you make better decisions about where you should be focusing your [op risk management] efforts because it doesn’t get granular enough. The bottom-up approach is a good way of understanding your business and where risk might be that you can do something about.” The bottom-up approach will also eventually lead to a capital calculation, and if this is in the same range as the top-down calculation the figures will validate each other. If not, they will indicate the need for investigation. “So banks need to use both approaches concurrently,” he says.

Few banks are at the stage where they can run both approaches concurrently, and many are still in the early stages of developing or acquiring a suite of tools to support their operational risk framework. But while the Basel Committee has helped clarify the key elements of the framework, and software suppliers are busy filling out their suites of tools, banks should be aware that the individual tools must be integrated with one another, and preferably into the bank’s overall systems and management information infrastructure, says Finlay of Raging Torrents.

“You need to have an integrated technology environment to get a real-time tap into the sources of data that are necessary to produce [operational risk] management information,” Finlay says. Such an environment would be able to bring sources of data together for comparison or to supplement one another or give context, providing managers with as rich and complete information as possible for their decision-making, he says.

Banks that have been pioneering operational risk management practice have discovered that the integration process is not necessarily easy. DrKW took around 18 months to develop its risk indicators component on a consistent platform with the loss data tool. “We wanted to have [the risk indicators module] on a data model that was consistent with the loss data,” says DrKW’s Howitt. “To do this, all our risk indicators had to be pinned to the same dimensions as we pin the losses to – that is, product, location, process, system if relevant, organisational unit owner and risk category. Getting the data model right took a long time, but it means we can now look at the relationship between risk indicators and losses.”
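The principle is simply that losses and indicator readings share a composite key. A minimal sketch, using the dimensions Howitt lists (everything beyond those dimension names is assumed), might look like this:

```python
# A minimal sketch of a shared data model: losses and indicator readings
# pinned to the same dimensions so they can be joined for analysis. The
# dimension fields follow the article; the rest is assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Dimensions:
    product: str
    location: str
    process: str
    org_unit: str
    risk_category: str
    system: Optional[str] = None   # "system if relevant"

@dataclass
class Loss:
    dims: Dimensions
    amount: float

@dataclass
class IndicatorReading:
    dims: Dimensions
    name: str
    value: float

def losses_for(dims: Dimensions, losses: list) -> float:
    # Because Dimensions is hashable and comparable, losses and indicator
    # readings for the same slice of the business can be matched directly.
    return sum(l.amount for l in losses if l.dims == dims)
```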

Integration
HBOS is able to integrate its key risk indicators with internal loss data, together with self-assessment information. The bank built the self-assessment element in-house after the Halifax and the Bank of Scotland merged in 2001. Both banks had had standalone PC-based self-assessment systems, but the merged entity wanted an application that could operate over the bank’s intranet “for ownership, accountability and improved reporting,” says Matt Kimber, head of operational risk at HBOS, and found no third-party product at the time that would meet its requirements.

Called Aspect OR, the bank’s application uses scenario simulations to assist management in assessing each risk’s potential impact on the organisation. The frequency and severity from two scenarios per risk are then used to estimate operational risk exposure. Meanwhile, HBOS has been working with Algorithmics to develop Algo OpData for capturing internal losses and key risk indicators, and brought that programme into the merged entity. HBOS ensured that it used the same categorisation of risks across both Aspect OR and Algo OpData based on the bank’s overall framework, whereby “the definition of operational risk is the same throughout the organisation”, says Ammy Seth, group head of operational risk at HBOS.
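In outline, such a scenario-based estimate can be very simple. The sketch below combines an assessed annual frequency and severity for a “typical” and a “severe” scenario into an expected annual exposure; the numbers and the formula are illustrative assumptions, not Aspect OR’s methodology.

```python
# A minimal sketch of a two-scenario exposure estimate per risk; the
# numbers and the expected-loss formula are illustrative assumptions,
# not Aspect OR's methodology.
scenarios = [
    # (description, assessed events per year, assessed loss per event)
    ("Typical processing error",   12.0,     5_000.0),
    ("Severe processing failure",   0.1, 2_000_000.0),
]

# Expected annual loss: sum of frequency x severity across scenarios.
exposure = sum(freq * sev for _, freq, sev in scenarios)
print(f"Estimated annual exposure: {exposure:,.0f}")
```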

The two applications are integrated, so information can be cross-referenced – for example, a loss can be linked with an identified risk and a key risk indicator. On top of both systems sits a reporting application from California-based Business Objects that allows the bank to flexibly analyse and report on the data. The bank also has a programme for evaluating the various approaches that are evolving for modelling operational risk capital. Its objective is to give itself the choice of progressing to the more advanced approaches when appropriate, says Seth.

DrKW and HBOS have got further than most banks in implementing and integrating the elements of an operational risk management framework. Implementing the Basel Committee’s sound practices and preparing for the advanced measurement approach is a major challenge for a bank of any size; no institution can do it all at once, and many have yet to begin. The availability of third-party tools will speed up the process, although bigger banks may prefer to develop their own.

The major European bank says that although it is monitoring the evolution of third-party tools, it is unlikely that it will replace any substantial elements of its framework with these new products. “For an organisation of our size, with lots of different business lines and products, I believe we are better off with our own tools,” says the European bank’s head of operational risk. “We are still on a learning curve in terms of operational risk, and we need flexibility.”

In-house
Recent research undertaken by North Carolina-based risk management software supplier SAS Institute suggests that the cost and lack of appropriate functionality of the first generation of third-party tools drove banks to build in-house, says Peyman Mestchian, head of SAS’s UK risk management practice. This month his company is launching a new integrated suite of operational risk management tools.

Howitt at DrKW says none of the suppliers yet has a full suite of tools up and running “to the extent that an institution of any significant size would require”. To be fair to the suppliers, none claims to have all elements fully developed and implemented at a major customer site, and most acknowledge that some of the tools in their suite are still being developed. But this work is moving on apace and the suppliers are confident that they will cover all the bases soon. Banks that are monitoring their progress seem to agree. “I think there will be a few comprehensive [operational risk management software] suites available in a year’s time,” says Howitt.
