Fighting fat-finger syndrome

Trading errors are a major source of operational losses for banks. Navroz Patel looks at some of the systems that banks and software firms have come up with for managing the risk

Fat-finger syndrome, the operational affliction that is jokingly said to cause traders to enter costly mistaken deals on their keyboards, struck again in December when yet another trader pressed the wrong keys.

Banking analysts speculated that investment bank UBS Warburg could have lost as much as $100 million when a trader on its Japanese equity desk transposed the share price and the number of shares he wanted to sell, offloading 610,000 Dentsu shares at 16 yen each instead of 16 shares at 610,000 yen each.

Last year, another digitally challenged trader sold a £300 million ($430 million) lot of shares instead of the intended £3 million's worth, causing a temporary £40 billion drop in the value of the UK's leading companies.

And in November a mistaken equity futures trade on Eurex, the German financial futures exchange, caused an 800-point fall in the Dax share index.

But there are ways of managing the problem.
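An obvious first line of defence is a sanity check at the point of order entry. As a purely hypothetical sketch -- none of the banks mentioned here has disclosed its actual controls, and the thresholds below are invented -- a price-tolerance filter along the following lines would have flagged the Dentsu order:

```python
# Hypothetical pre-trade "fat-finger" filter: flag orders whose limit
# price strays too far from the last traded price, or whose notional
# exceeds a per-order cap. All thresholds are illustrative.

def check_order(qty: int, price: float, last_price: float,
                max_deviation: float = 0.10, max_notional: float = 1e9):
    """Return a list of warnings; an empty list means the order passes."""
    warnings = []
    if abs(price - last_price) / last_price > max_deviation:
        warnings.append(f"price {price} is more than {max_deviation:.0%} "
                        f"away from last traded price {last_price}")
    if qty * price > max_notional:
        warnings.append(f"notional {qty * price:,.0f} exceeds cap {max_notional:,.0f}")
    return warnings

# The Dentsu error: 610,000 shares at 16 yen instead of 16 shares at 610,000 yen.
print(check_order(qty=610_000, price=16, last_price=610_000))
```

Checks of this kind do not eliminate mistakes, but they force a trader to confirm an order that looks wildly out of line before it reaches the market.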

"Everyone within Dresdner can access our in-house op risk solution to input operational loss events onto a central database," says Jonathan Howitt, London-based head of operational risk at German bank Dresdner Kleinwort Wasserstein.

Dresdner's Radar

More than 350 staff took part last year in operational loss collation on the bank's browser-based database known as Radar, Howitt says.

He believes Dresdner is one of the most advanced banks in terms of managing operational risk, particularly at the trading floor level. The bank began automating loss data collation and op risk reporting in early 2000. Indeed, Dresdner began commercially marketing Radar, which it runs on its own corporate intranet, in October 2000.

Radar was developed with London-based software firm Raft International and was first deployed within the bank's global equity division and interest rate futures and options area a year ago. "We then rolled it out globally in mid-2001," says Howitt. "This was unique -- nobody in the industry had been able to get a complete workflow solution for loss recording implemented globally in a trading environment."

The submitted loss data is used to generate monthly risk reports that highlight a variety of operational risks within the bank. One important indicator is headcount turnover, a sign of staffing problems. "Say that your average annual staff turnover within a trading group is 20%, but that in one year turnover reached 50%," says Howitt. Radar would allow Dresdner to monitor the problem and assess how much it might be costing the bank.

But he acknowledges this is far from modelling. "At the moment we can only profile this kind of op risk statistic," he says, "but ultimately we would like to model it properly."

Erroneous trades and processing errors are, of course, another significant source of op risk. Say, for example, a trader mishears a client and books a trade as a sell rather than a buy, and confirmation of the transaction occurs a day late. The late settlement carries an interest cost for the extra day the dealer's interest-rate hedge stays open, and the wrongly booked position itself must be reversed in the market.

"So here, there may be two observable losses recorded in Radar from a single operational error: the cash claim and the out-trade cost," says Howitt.

Dresdner's database allows the source of the op risk to be pinpointed and tackled -- whether by modifying technology or by changing the culture of the desk, for instance by getting traders to repeat trade details back to clients before transacting.

A firm's IT system is one of the most obvious sources of op risk. But it is very difficult to quantify the impact that a technology-related operational loss might have on an organisation. It took Dresdner a relatively long time to settle on the best method of assessing the op risk generated by system downtime, according to Howitt.

"For example, if managers see on a report that a major system went down for three hours, this could be serious," he says. "However, if the event happens out of trading hours, it doesn't matter so much. Quantifying the criticality of such an event is subtle. Effectively, we agreed a risk rating between the IT people and the system user."

The op risk data and profiling generated by Radar are now taken into account when the bank's executives discuss strategy. Dresdner recently used the program to assess the impact of expanding a particular product group -- one where a purely market risk analysis had suggested substantial profit potential. Howitt declined to specify which group.

The database allowed the bank to extract loss data for that group and then look at the impact of two-, four- and eightfold increases in trading volume on operational losses. "We found that doubling volumes had a minimal impact on op risk," says Howitt. "However, the eightfold increase scenario caused losses to shoot up."

Such scenario testing allowed the bank to put an effective price on the op risk, recognising the trade-off between increased trading profits and the investment in technology needed to handle such growth, he adds.
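What such a volume-scaling exercise might look like can be sketched as follows. The shape of the response is an assumption made here for illustration -- error counts grow with volume, and once volume exceeds the infrastructure's capacity the error rate itself rises too -- which is one plausible reason losses would "shoot up" at eight times volume:

```python
# Hypothetical volume-scaling sketch; parameters are invented, not Dresdner's.

def expected_op_loss(volume_mult: float, base_loss: float = 1_000_000,
                     capacity: float = 4.0, stress_exponent: float = 3.0):
    """Expected annual op loss under a volume multiplier (illustrative)."""
    loss = base_loss * volume_mult        # linear part: more trades, more errors
    if volume_mult > capacity:            # overload: the error *rate* rises too
        loss *= (volume_mult / capacity) ** stress_exponent
    return loss

for mult in (1, 2, 4, 8):
    print(f"{mult}x volume -> expected op loss {expected_op_loss(mult):,.0f}")
```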

Other banks

Other banks are also starting to realise the value of this kind of automated floor-level op risk reporting and profiling. "This year we went live with our Risk Event Database -- an errors database that allows operational events to be tracked on a firm-wide basis," says Joseph Sabatini, New York-based managing director and head of corporate operational risk at US bank JP Morgan Chase.

The tool was originally developed within JP Morgan a few years ago, prior to the investment bank's merger with Chase Manhattan. It has taken the past year to mitigate the op risk inherent in introducing the Risk Event Database across the merged organisation -- be that by training staff or enhancing the system so that it can handle the increased scale of business at the bank post-merger.

While the system is currently less developed and automated than Dresdner's, JP Morgan Chase will also eventually be able to use the database to produce reports profiling individual businesses' loss experience or to look at firm-wide trends. And the risk transparency the database gives managers raises staff awareness of the causes of op risk -- irrespective of any further quantitative analysis, Sabatini says.

"We are after better risk management rather than precise risk measurement," he says. "Formally tracking loss experience pays for itself. As people become aware of the magnitude of losses -- even moderate amounts -- loss experience goes down."

But that's not to say op risk reporting and profiling cannot feed more sophisticated modelling approaches. A database of historical operational losses gives those involved in high-level op risk modelling a rich data source with which to calibrate and stress-test their models -- a fact not lost on software firms.

In December, Toronto-based financial software vendor Algorithmics released OpCapital -- the latest addition to the OpRisk application it launched in May last year. The new module allows users to fit mathematical distributions to data gathered by a bank using OpRisk.

Dan Rosen, vice-president in charge of research and new solutions at Algorithmics, says: "The added engine allows the user to calculate op risk capital based on a loss distribution approach. This is fairly unique, because we have implemented a full simulation approach based on the mark-to-future technology used in our existing risk analytics for market and credit risk."
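A loss distribution approach of this kind typically fits a frequency distribution and a severity distribution to the recorded losses, simulates aggregate annual losses, and reads capital off a high quantile. A minimal sketch, assuming Poisson frequency and lognormal severity -- the distributions and engine OpCapital actually uses are not detailed here:

```python
import numpy as np

# Minimal loss distribution approach: Poisson event frequency, lognormal
# severity, capital as unexpected loss at the 99.9% quantile of the
# simulated aggregate annual loss. All parameters are illustrative.

rng = np.random.default_rng(0)
lam = 25                 # expected loss events per year
mu, sigma = 10.0, 2.0    # lognormal severity parameters (log scale)
n_scenarios = 100_000

counts = rng.poisson(lam, size=n_scenarios)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

expected_loss = annual_loss.mean()
capital = np.quantile(annual_loss, 0.999) - expected_loss

print(f"expected annual loss:       {expected_loss:,.0f}")
print(f"op risk capital (99.9% UL): {capital:,.0f}")
```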

In the traditional approach, the simulation is first carried out over the entire bank, encompassing all units. One unit is then removed from the calculation and the remainder resimulated; the difference between the two results reveals how much the removed unit contributes to the bank-wide loss distribution. But rerunning thousands of scenarios for every such question is computationally intensive.

The mark-to-future approach instead exploits the mathematical properties of the entire matrix of loss events -- one entry per unit and scenario -- to construct a more efficient simulation, Rosen claims. "It allows you to easily see what happens to your overall op risk if, say, a single unit increases in size by 10%, or one or two other parameters are changed," he adds.
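The efficiency gain is easiest to see on the loss matrix itself. The sketch below illustrates the idea as described -- it is not Algorithmics' implementation. Each unit's loss is stored per scenario in a single simulation pass, after which removing or rescaling a unit is a cheap column operation rather than a fresh simulation:

```python
import numpy as np

# One simulation pass stores a scenarios-by-units loss matrix; what-if
# questions then become column operations. Figures are illustrative.

rng = np.random.default_rng(1)
n_scenarios, n_units = 100_000, 5
losses = rng.lognormal(mean=10.0, sigma=1.5, size=(n_scenarios, n_units))

def capital(total_losses, q=0.999):
    return np.quantile(total_losses, q)

base = capital(losses.sum(axis=1))

# Contribution of unit 0: drop its column, no resimulation needed.
without_unit0 = capital(np.delete(losses, 0, axis=1).sum(axis=1))
print(f"unit 0 contribution: {base - without_unit0:,.0f}")

# What if unit 1 grows 10%? Scale its column and re-aggregate.
scaled = losses.copy()
scaled[:, 1] *= 1.10
print(f"capital with unit 1 up 10%: {capital(scaled.sum(axis=1)):,.0f}")
```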

UK mortgage bank Halifax, the Bank of Scotland and a US bank that the vendor declined to name are currently piloting the OpRisk application.

Data exchanges

And the recent efforts of individual banks and software firms to build stores of historical loss data could have more widespread benefits, thanks to an industry-wide exchange of operational loss and risk data.

This venture -- known as the Multinational Operational Risk Exchange, or MORExchange -- began receiving operational loss data in November 2000. Member banks, such as JP Morgan Chase, CIBC and Royal Bank of Canada, submit loss data to the exchange -- managed by New York-based software firm NetRisk via its OpVantage franchise. OpVantage then analyses and scales the data before returning it. Firms can thus benchmark their own operational loss data against industry-wide data.

And MORExchange is gearing up to become part of the new Operational Risk Exchange (ORX). ORX will have many more members than MORExchange, but will make use of many of the same concepts and techniques.

Data standards and approaches for data sharing have been agreed, and contracts will be signed this month. Along with PricewaterhouseCoopers Switzerland, OpVantage will be a managing agent for ORX.

But NetRisk's involvement in op risk management isn't limited to the MORExchange initiative. OpVantage has developed an internet-based op risk management system called OpVar that incorporates a database of loss events and modelling capabilities. OpVar integrates NetRisk's RiskOps software and PricewaterhouseCoopers' OpVaR database application.

OpVar can be used for risk management as well as for sophisticated risk-adjusted return on capital analysis. Version 4.0 of OpVar was released in October, and its database now includes more than 7,000 op risk losses greater than $1 million each -- totalling $272 billion. Banks signed up for OpVar include ING Barings and Société Générale.

Professional services firm Ernst & Young also markets an op risk management and reporting tool -- Horizon -- developed from JP Morgan's original op risk tool, a predecessor to the Risk Event Database.
