OpRisk North America highlights the limitations of risk management

Limits and boundaries

New York

Solid economic growth might have lifted the spirits of the rest of the US financial sector in 2012, but at the OpRisk North America conference, held in New York on March 20–23, the atmosphere was cautious. Speakers focused on the constraints of operational risk management – the risk of model failure, the tendency of risk managers to ignore empirical research and risk indicators, possible flaws in operational risk capital calculations, and the rise in operational risk across the banking industry despite all the lessons that should have been drawn from the crisis.

Model risk
Mark Levonian, senior deputy comptroller for economics at the Office of the Comptroller of the Currency (OCC) in Washington, DC, opened the conference with his keynote speech on the morning of March 21 (the previous day had been devoted to technical seminars). He cautioned operational risk managers against relying too much on risk models.

“Op risk models should be a key part of the risk management process, but we have to ask ourselves whether the models are wrong. The use of models is crucial to good risk management today, but sometimes even the best models don’t work the way they are supposed to. In fact, all models don’t work sometimes. The risk that the models won’t work – model risk – is an operational risk in itself, and is a risk that has to be actively managed,” said Levonian.

He pointed to the operational risk methodology laid out in the advanced measurement approach (AMA) of Basel II, which mandates that operational risk models be built using a combination of internal loss data, public anonymous external loss data, business environment and internal control factors, and scenario analysis.

While Levonian acknowledged the value of these measurement tools, he advised operational risk managers to guard against poor-quality data creeping into model assumptions, and against inappropriate risk weights that could skew model outcomes and leave firms exposed.

“Several of the key elements of model risk revolve around data integrity. The use of external loss data, which is mandated under the AMA, requires the mapping of loss categories from an external source to a firm’s own data. Mapping is an art and needs judgement, and typically there is some scaling going on there to make the data relevant internally. Model risk managers should be asking what assumptions were made around the mapping process, what alternatives were considered and what the impact of those choices is on the resulting output of the operational risk model,” Levonian said.
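As an illustration of the kind of scaling assumption in question – a hedged sketch only, with invented numbers, not a method described at the conference – external losses are often rescaled by relative firm size raised to a power. Both the size proxy (gross income here) and the exponent are exactly the sort of modelling choices Levonian says reviewers should probe:

```python
# A minimal sketch, assuming a power-law size adjustment. The 0.23 exponent
# echoes a figure sometimes quoted in the op risk literature, but it is an
# assumption here, as is the use of gross income as the size proxy.

def scale_external_loss(loss, source_income, own_income, alpha=0.23):
    """Rescale an external loss to make it relevant to the internal book."""
    return loss * (own_income / source_income) ** alpha

# Example: a $10m external loss at a bank with four times our gross income
print(f"${scale_external_loss(10e6, source_income=40e9, own_income=10e9):,.0f}")
```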

He urged risk managers at regulated entities to appoint at least one member of the operational risk team to periodically test the resilience of risk models – to “see if they break, and then to fix them if they are not working”.

He also called for more scrutiny of other modelling assumptions, including the basic units of measure selected across business lines to build risk models, and for a re-examination of where severity thresholds are set for the inclusion of loss data in operational risk analyses. And he advised risk managers to incorporate macroeconomic factors into their scenario analyses.
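The threshold point lends itself to a toy demonstration. In the sketch below – invented lognormal loss data, not figures from any bank – a standard Hill estimator is fitted to losses above several candidate collection thresholds; the fitted tail index, and with it the implied severity of extreme losses, shifts materially with where the threshold is set:

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented loss sample: heavy-tailed lognormal losses, in dollars
losses = np.exp(rng.normal(loc=10.0, scale=2.0, size=5000))

def hill_tail_index(losses, threshold):
    """Hill estimator of the Pareto tail index for losses above a threshold.
    A smaller index implies a heavier, riskier fitted tail."""
    exceedances = losses[losses > threshold]
    return len(exceedances) / np.sum(np.log(exceedances / threshold))

# The same data imply very different severity tails depending on where the
# collection threshold sits -- the sensitivity Levonian asks firms to revisit.
for thr in (10_000, 50_000, 250_000):
    n = int(np.sum(losses > thr))
    print(f"threshold ${thr:>9,}: tail index {hill_tail_index(losses, thr):.2f} (n = {n})")
```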

“The Basel Accord and the AMA can foster the sense that operational risk is simply a compliance exercise and has perhaps no real value. I hear often that people are just trying to do this to fulfil obligations to regulators without interfering with the business of the bank. That is a mistake. Operational risk models must be developed to foster good decision-making around the management of risk, not to satisfy some regulatory capital calculation,” Levonian said.

The second day’s keynote speaker was Mitsutoshi Adachi, director in the examination planning division at the Bank of Japan and chair of the operational risk subgroup of the Basel Committee on Banking Supervision’s standards implementation group (Sigor). He, too, focused on the issue of model risk, pointing out that the financial crisis had highlighted failures of risk modelling and governance, and had demonstrated that modelling errors could pose a significant threat.

“One of the lessons from the crisis is that there is danger associated with model error or model risk,” Adachi said. “This could also apply to the advanced measurement approach: it can’t be eliminated, but removing unacceptable practices could reduce the risk and enhance confidence.”

As a possible example, Adachi pointed to the fact that operational risk capital levels have remained broadly unchanged, despite a spike in operational risk losses since the start of the crisis. “We are concerned by the divergent trend of operational risk losses and operational risk capital,” he said.

This persuaded Sigor to conduct a study of AMA modelling practices at banks, which showed wide variability in approaches across institutions, Adachi said. “Large variability could be a sign some banks assumed higher levels of model risk than is acceptable,” he added.

One potential solution could be a form of benchmarking, using the basic indicator or standardised approaches as a base. However, there is a tension between the desire to create outcomes that are simple and comparable across institutions, and the desire to achieve greater risk sensitivity in modelling, he said.
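For reference, the basic indicator approach is a one-line formula – capital is 15% of average annual gross income over the previous three years, counting only years with positive income – which is what makes it attractive as a simple, comparable benchmark. A minimal sketch:

```python
# Basic indicator approach (BIA) under Basel II: capital = alpha * average
# positive gross income over the previous three years, with alpha = 15%.

def bia_capital(gross_income_3y, alpha=0.15):
    positive = [gi for gi in gross_income_3y if gi > 0]
    return alpha * sum(positive) / len(positive) if positive else 0.0

# Example with invented figures: a loss-making year drops out of the average
print(f"${bia_capital([2.1e9, 1.8e9, -0.3e9]):,.0f}")  # 15% of avg(2.1bn, 1.8bn)
```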

Even leaving aside potential issues with operational risk models, speakers at the conference said, the whole approach of using capital levels as an incentive for better operational risk management was flawed – and capital levels could be becoming irrelevant.

Calculation issues
Speaking on a regulatory panel immediately after Adachi’s address, Christopher Haines, head of operational risk management at American Express in New York, drew a distinction between the metrics used elsewhere in risk management and the capital number used in operational risk. “We were all hooked on value-at-risk, or we read our credit reports every morning. But op risk is different – there’s no number that you can read every morning or even every month, there’s no VAR for op risk. We collect the loss data and the risk control self-assessment data, but we don’t see a proportionate relation between losses and the capital model – it’s less transparent.”

Emre Balta, a senior financial economist in the OCC’s credit risk analysis division in Washington, DC, pointed to uncertainties in the calculation itself: “The problem is there is a lot of model risk in operational risk. The choice of copula used, for example, can have a large effect on the capital figure, and there is not enough data for dependency modelling.”
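Balta’s point is easy to demonstrate. In the sketch below – invented parameters, with the correlation of a Gaussian copula standing in for the broader dependency-modelling choice – two lognormal loss cells are aggregated and the 99.9% quantile, the AMA soundness standard, is read off; the capital figure moves substantially with the dependence assumption:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # simulated years

def capital(rho, q=0.999):
    """99.9% quantile of total annual loss under a Gaussian copula."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    cell1 = np.exp(10.0 + 2.0 * z1)  # lognormal annual loss, cell 1
    cell2 = np.exp(10.0 + 2.0 * z2)  # lognormal annual loss, cell 2
    return np.quantile(cell1 + cell2, q)

# Independence, moderate dependence, and perfect dependence
for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho:.1f}: 99.9% capital ~ ${capital(rho):,.0f}")
```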

In terms of risk management, Haines argued, “we might be at a point of bifurcation – the capital number is about protecting the corporation, but behavioural changes such as compensation reviews are more important for risk management. Capital charges are less effective than cuts to compensation in changing behaviour”.

Jane Carlin, global head of financial holding company governance and assurance at Morgan Stanley in New York, went further. “I question the extent to which op risk capital reflects the op risk at an institution. The lumpiness of operational risk losses suggests it might not,” she said. “Operational risk capital calculations might be becoming another VAR – we all got hooked on VAR before realising that differences in calculation methods made it meaningless as a comparison, and stress tests were undoubtedly a better reflection of risk than VAR. Operational risk management is moving away from using the capital number as a barometer and as an incentive tool.”

But Haines held back from abandoning capital figures forever. “I’m not ready to give up on operational risk capital forever, but it’s a toddler compared with market and credit capital calculations. It’s maybe something to revisit later, but at the moment we are still moulding it as we go along. Everyone would love a number for operational risk, but I don’t think op risk capital is that number.” Lumpy loss data made the operational risk capital figure impractically volatile, he said. “We’d all love to have a VAR for operational risk, but it’s a bit of a Holy Grail,” he added.

Carlin agreed. “Capital tells us something, but not as much as we would hope. We can’t rely on it exclusively,” she said.

Empirical analysis
And, other speakers warned, many operational risk managers were ignoring some of the most valuable empirical tools of their profession – key risk indicators (KRIs) for monitoring operational risk, and research aimed at managing it.

Marcelo Cruz, global head of operational risk analytics at Morgan Stanley in New York, noted that practitioners had paid less attention to KRIs than to other areas of operational risk.

“Operational risk is not a function of past losses – it is a function of your environment,” he told delegates.

Cruz said KRIs must be objective and therefore measurable, simple enough to be easily tracked and understood by management, clearly identifiable, and representative. As a benchmark, practitioners should focus on 20 to 30 core indicators, he said.
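A hedged sketch of what such a core indicator set might look like in practice – the indicators, values and thresholds below are invented for illustration, not taken from the talk:

```python
from dataclasses import dataclass

@dataclass
class KRI:
    """One objective, measurable indicator with management thresholds."""
    name: str
    value: float
    amber: float
    red: float

    def status(self) -> str:
        if self.value >= self.red:
            return "RED"
        if self.value >= self.amber:
            return "AMBER"
        return "GREEN"

# A handful of the 20-30 core indicators a firm might track (all invented)
core_kris = [
    KRI("Failed trades per 10k transactions", 4.2, amber=3.0, red=6.0),
    KRI("Open audit issues past due", 7, amber=5, red=10),
    KRI("Ops staff turnover, % per quarter", 2.1, amber=4.0, red=8.0),
]

for k in core_kris:
    print(f"{k.name:<40} {k.value:>6}  {k.status()}")
```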

More broadly, speakers called on operational risk professionals to take the opportunity to revise existing programmes at a time when their influence in financial institutions is high. In doing so, risk managers should think critically about how to create a risk management programme that goes beyond compliance with regulatory requirements. “That is the right approach if we don’t want to go through the same experience again next business cycle,” said Tariq Scott Bokhari, operational risk manager at GE Capital, based in Charlotte, North Carolina.

An operational risk programme needs to be overhauled if it is perceived as a ‘corporate tax’ on the business, if too much time is spent on box-ticking exercises, if there is no incentive to be proactive in operational risk management, if experienced risk managers have left the team, or if the business lacks awareness of the operational risk function.

A possible improvement would be to develop a common language, with risks clearly defined across the business, said Bokhari. He also suggested firms put a dollar value on risk. “We must strike a careful balance and not over-emphasise what we have, but at the same time put a stake in the ground and start to require sizing in dollars for operational risks,” he said.

Academic Mark Garmaise, associate professor of finance at the University of California, Los Angeles, used his own research to argue that more empirical analysis of operational risks such as fraud could improve operational risk management and shatter some long-held assumptions.

Garmaise analysed five years of mortgage-lending data from a single bank, from 2004 to 2008, to detect patterns in the misreporting of borrower assets. He theorised that fraudulent borrowers would be inclined to report asset values slightly above a round number – $102,000 rather than $98,000, or $301,000 rather than $299,000 – believing banks make lending decisions based on round-number thresholds. A borrower who reported assets of $102,000 would therefore be more likely to be fraudulent – and more likely to become delinquent – than one reporting $98,000.

The data backed this up, with a marked jump in delinquency rates at the round-number threshold. “The data shows a jump of 20 percentage points. Just below the threshold, you have around 10% delinquency. Just above it you have around 30%,” Garmaise explained.
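The test itself is straightforward to reproduce. The sketch below uses invented loan data calibrated to the rates quoted above – roughly 10% delinquency just below a $100,000 asset report and 30% just above – and computes the jump across the round-number threshold, the statistic Garmaise reads as a fraud signal:

```python
import numpy as np

def round_number_jump(assets, delinquent, threshold=100_000, band=5_000):
    """Delinquency rate just above a round threshold minus the rate just below."""
    assets = np.asarray(assets, dtype=float)
    delinquent = np.asarray(delinquent, dtype=bool)
    below = (assets > threshold - band) & (assets <= threshold)
    above = (assets > threshold) & (assets <= threshold + band)
    return delinquent[above].mean() - delinquent[below].mean()

# Toy data reproducing the pattern described in the talk (rates from the
# article; the underlying loan records are invented here)
rng = np.random.default_rng(1)
assets = rng.uniform(90_000, 110_000, size=10_000)
delinquent = rng.random(10_000) < np.where(assets > 100_000, 0.30, 0.10)
print(f"jump at $100,000: {round_number_jump(assets, delinquent):+.1%}")  # ~ +20 pts
```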

The next step was to break down the loan data by category – the larger the jump, the higher the level of fraud in loans of that type. The results were surprising, Garmaise said. “You would expect the jump to vary with different loan officers and mortgage brokers – some might be better at detecting and refusing fraudulent applications than others. It’s not that some are necessarily cleverer than others, but fraudulent loans might have other features as well that caused their rejection.”

To his surprise, Garmaise found that experienced loan officers saw larger jumps – and therefore higher rates of fraud – than their more junior colleagues. “I think it’s complacency. Those experienced officers were completing far more business. They were very good at handling other anticipated risks but not unexpected risks.” The same was true over time – fraud rates for a single loan officer rose the longer they spent with the company.

The research produced other startling results as well. “You would expect fraud to be lower at large brokers, which have procedures in place, than at the smaller, fly-by-night brokers. But although there’s fraud at both large and small brokers, the jump at the larger firms is double that at smaller firms. Maybe there’s actually less oversight at larger offices, or it could be there is competition between brokers in the same office, which is leading to less scrutiny.”

The launch of new products also reduced fraud rather than increasing it, Garmaise found.

The study’s findings imply that banks should rotate loan officers regularly and use mixed-seniority teams, as well as apply greater scrutiny to larger brokers and older product lines. But the wider point applies across operational risk, Garmaise said. “There has not been much data work done on operational risk – it is mainly intuition and anecdote. But you have to do the data work. You need data to do this level of analysis, and internal databases would be valuable. Hopefully this could lead to the development of empirically validated best practices for operational risk.”

New wave
The need for improvement is stark, one regulator said, with high-volume, low-margin businesses such as credit card and auto loans in particular being found at the centre of a new wave of operational risk.

Carolyn DuChene, deputy comptroller for operational risk at the OCC in Washington, DC, told the conference: “We see operational risk as high and increasing across the institutions we supervise. A lot of the breakdown is in high-volume, low-margin areas, just as it was in the mortgage market earlier. We are also seeing it in auto and credit card portfolios.”

She added that other areas of operational risk are also experiencing increases. “Litigation is increasing – it is a large driver of losses, the largest at some banks,” she said. “And outsourcing risks are still not well understood or well managed. There are a lot of cases where outsourced work doesn’t meet the standards the business has for itself. We’ve seen lax monitoring – it’s becoming more and more prevalent to outsource, but you need to have the monitoring and management around it.”

Regulatory change is also creating new risks, said another speaker, Stacy Coleman, head of operational risk at the Federal Reserve Bank of New York. “Many aspects of the US Dodd-Frank Act include new reporting requirements, which will require huge infrastructure changes.”

The changes forced on bank business models by the Volcker rule and similar measures will also bring new risk, added DuChene. “As the industry reshapes itself, op risk will be present as a result of the changes in business models. Washington, DC is as busy now as it has ever been – we are working through these regulations, which are much more complex and larger than previous rules.”

Coleman highlighted other risks as well. “We are also focused on compensation. And cyber-security is keeping me awake at night more than anything else – there are so many pieces that you have to get right,” she said.

But the changes have brought advantages too, said another speaker, Alfred Seivold, a senior examination specialist at the Federal Deposit Insurance Corporation. “The crisis was a setback for Basel II, but it brought in many new rules – living wills, for example – which increase our understanding,” he said.

It has also forced regulators to be more co-operative, DuChene said. “We are taking time to look at the unintended consequences with other agencies. That means we have had to miss some deadlines, but I would rather miss the deadlines and get things right.”

Coleman agreed: “We are trying to leverage what the other regulators are doing and not duplicate work – Dodd-Frank pushes us to be much more aware of each other.” The same is true inside banks, she added. “It’s important to communicate with other risk disciplines, rather than being siloed. Some people might prefer to work in a silo but real success only comes from talking to the other risk categories.”

Speakers throughout the conference returned to the key issue of risk culture – with the OCC’s Levonian warning: “It’s hard for good controls to overcome bad culture”, but also admitting the difficulties involved in changing a poor culture. “We can see it, but it’s hard to change. Usually, it changes through replacement of staff, and through leadership – we can recognise an environment where operational risk management is seen as valuable and not as an annoyance or a career risk,” he said.

Spreading operational risk information to other departments was one important element, commented Sean O’Malley, director of compliance architecture and quantitative solutions at Citi. “The toughest cultural hurdle was the entrenched idea that the compliance side already knew what the risk was,” he said. “In fact, they only knew anecdotally, they did not have full enterprise-wide data. When we put that information in the hands of other people, they can challenge beliefs on things that might otherwise go overlooked.”

Good risk culture starts from the top, many speakers said, but one speaker warned that this should not lead to loading more and more responsibility onto the board of directors. “After every crisis, someone suggests the board should have stopped it – but what is it realistic to expect from them?” asked Jay Lorsch, professor of human relations at Harvard Business School. “Directors are supposed to be independent, but that can mean they have little experience in the industry. And it’s not only very difficult to put someone on a bank board who has banking experience – it’s very difficult for them to acquire it once they are there, due to the pace of change and complexity of the industry, and the breadth of knowledge required.”

In the end, he said, regulators and risk managers should remember that “boards are human institutions – and they are designed for broad oversight, they can’t deal with complicated issues close in. Congress was looking for solutions [through legislation such as Sarbanes-Oxley] and tends to lean on the board – but looking at the board to do more is probably not the solution we need. What you need is a board that can ensure the appropriate risk management tools are used effectively, and that can call up the senior management when there is a problem”.

Marcelo Cruz discusses his impressions of the conference in the May issue of Operational Risk & Regulation.
