Clear and present danger

Transparency is a concept that has been bandied about by banks a lot lately, usually when they know the press is watching. But just how transparent are banking practices, and just how clear is the Basel approach to operational risk?


Ralph Waldo Emerson, speaking of a dinner guest, sagely observed: “The louder he talked of his honour, the faster we counted our spoons.” In other words, let’s spend a few minutes on the intersection of risk and integrity. One of the most overused words of the financial crisis is ‘transparency’. The other is ‘sustainability’. Maybe there is a third: ‘moral hazard’. Every big event seems to convert some previously useful terms into jargon, and as jargon, freighted with emotional content, a word often loses clarity. It is perverse when ‘transparency’ loses clarity – an oxymoronic event.

Moving beyond the word, however, transparency is in substance the subject of this article.

Let’s say (for the sake of simplicity) that Basel II ‘invented’ operational risk as a category of risk, joining it to the two existing major branches of risk: credit and market/price risk. In point of fact, Basel II defined and codified op risk, set out an initial taxonomy, gave recognition to some early-stage tools, mandated data collection, and even managed to give the nod to operational risk management in addition to operational risk measurement. Of the 50 or so representatives of internationally active banks on the Institute of International Finance’s committee addressing the Bank for International Settlements’ Basel II operational risk policy development group, 49 were policy folks and measurers; I believe I was the only line risk manager.

Remember, Basel II was focused on measurement for capital rather than measurement of risk for purposes of managing it. Managing risk would have forced Basel to clarify the difference between ‘control activities’, including compliance activities, on the one hand, and ‘risk activities’ on the other. Managerially, op risk has been captured by the profession of control. Under a control mentality, op risk has been deemed something to be avoided. I don’t agree with that mindset. Some operational risks should be avoided; some should be accepted, consistent with a risk tolerance established just as other risk tolerances are. Op risk should be risk-managed both as individual exposures and as a portfolio of exposures; it should be priced. In recent times, financial institutions, particularly banks, have often been manufacturers of products, and they are in the business of taking risks – hopefully profitably. The other side of the coin of risk tolerance is not the absence of tolerance; it is risk appetite.

But Basel II decreed that if a loss would previously have been defined as a credit or market/price risk, it should remain defined that way. Sorry, but you cannot live a lie. Put another way, sometimes quantitative leaps produce qualitative regressions1. Here was new knowledge – or at least the comprehensive (for the era) and coherent organisation and presentation of knowledge – directly linked to its denial. I call this anti-transparency. This ostrich was itself at risk not only because it had its unseeing head buried in the sand, but because that left its backside way up in the air. Risk architects loved their models not wisely, but too well. Shame on them – by then, they knew better.

This seemed to me, at the time and more so today, to be the opposite of transparent.

Let’s test some examples. If banks lent money to Enron and WorldCom based on fabricated (a fancy word for fraudulent) financial statements, was this really a credit event (and a market/price risk event for losses on investment in, or held-for-sale transitory ownership of, those companies’ securities) or was this an operational risk event? Operational risk clearly and undebatably includes fraud. Put another way, why should the mistaken existence of a credit line define the loss event? By – in my opinion – mis-defining the event type and the categorisation of the losses, did we improve credit and/or market risk management and aid the training of those risk-takers, or did we just confuse ourselves, our risk and our capital setting? Did it look like a credit loss event to Arthur Andersen? The heck it did! What did it look like to the board of directors over at AT&T when they learned they had acted to make AT&T competitive in important part based on benchmarking to WorldCom’s phony numbers?2 But what was it made to look like to the regulators of internationally active banks, and to those misled bankers, when they worked on their models and worked to define correlations – or the lack thereof – between products, geographies and risk types, which were basic elements of the regulatory definition of appropriate and adequate capital? Credit? Nonsense.

Let’s take an example from the consumer side. For years, banks have been charging punitive fees to credit card holders who ‘exceed’ their credit line. Given the enhanced state of authorisation systems, have the users really exceeded the line – making them violators of an agreement or, at least, misbehaving – or has the issuer, the credit grantor, sensing an interchange and a financing profit, increased the line to allow the transaction? If the line was not deliberately raised, then the amount in excess of the credit line is almost certainly an operational loss event, not a credit event, because it represents a failure of the exposure limits and/or authorisation systems.

Credit requires a willing lender and a willing borrower. If the transaction was purposely authorised, did the customer ‘exceed’ the line, or was the line increased by the bank but not yet advised to the customer? What will the evidence show?

This is different from an example I can draw from transaction services, where an intraday credit line will normally vastly exceed an overnight credit line but where the lender has no control over the overnight exposure size once the exposure is accepted intraday. Or, to translate this into a consumer example, the aggregate line for a statistically based credit programme will be vastly exceeded by the sum of the individual lines granted to individual borrowers. One is a line established for maximum possible risk; the other is a line established for maximum probable risk. Would that our measurement and management methodologies, risk-taking, risk mitigation and risk tolerances for operational risk’s expected/unexpected/catastrophic trilogy of risk severity and frequency were subtle and nuanced enough to take all this into account. If the definition is purposely opaque and distorted, any such expectation, even for the distant future, is completely unrealistic.
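
To make the possible-versus-probable distinction concrete, here is a minimal sketch; every figure in it is hypothetical and chosen purely for illustration.

    # Hypothetical card programme: all figures below are illustrative.
    n_accounts = 10_000
    individual_line = 5_000                      # dollars per card holder

    # Maximum possible exposure: every borrower draws their full line.
    max_possible = n_accounts * individual_line  # $50m

    # Maximum probable exposure: an assumed average utilisation plus an
    # assumed stress cushion (both numbers invented for this sketch).
    assumed_utilisation = 0.20
    stress_cushion = 0.10
    max_probable = max_possible * (assumed_utilisation + stress_cushion)

    print(f"sum of individual lines (max possible): ${max_possible:,}")
    print(f"programme-level line (max probable):    ${max_probable:,.0f}")

The gap between the two numbers – here, $50m against $15m – is exactly the subtlety that risk tolerances and measurement methodologies would need to capture.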

Setting aside popular dismay about whether such credit card fees are fair, and returning directly to the issue of risk classification, when a write-off occurs, is it attributed to authorisation system failure – an operational risk loss – or to credit loss because the line was raised?

Most fundamental and important: does the Basel II classification – or mis-classification – of risk add to or detract from clarity regarding both measurement and management, and in doing so is it serving the profession and our companies well? (And society, if you want to include that important constituency.)

Note that I have provided examples that can categorise loss events both in and out of each risk type. I’m not an advocate for one type of risk; I’m an advocate for integrity and transparency in classification.

In discussions at the time, those who argued for historical continuity and fealty had a reason that trumped all opposing arguments for accuracy and transparency. It was persuasively argued that there existed long-dated series of loss data that formed the basis of ratings and performance expectations and that provided the data for models that were critically important to credit risk measurement. Who among us was going to be so bold as to suggest that we do great harm to the reality of credit risk measurement and methodology in the pursuit of ephemeral and theoretical transparency?

Especially when facing the political power on the side of historically powerful risk managers. Have you any knowledge of a CRO who comes from the operational risk branch of risk management? Or do they all come from credit risk, except where the magnitude of trading and other quant activities has elevated the role of market risk?

And there is another phenomenon at work. Risk managers who, within their companies’ risk tolerances, minimise or accurately predict actual events and the losses actually incurred (loss given default, in credit terms) are the best performers; but big losses make for big budgets, and big budgets accommodate big staffing and a big seat at the table. As some CEOs who took a cue from former New York mayor Ed Koch’s catchphrase “How’m I doing?” have found out to their subsequent dismay, you can’t ask the person who heads your trading business how the portfolio is doing. And sometimes – if the CEO has defined ‘product’ as more important than ‘geography’ or ‘customer relationship’ and ‘line’ as more important than ‘staff’, and staff is bonused, when you get right down to it, by profits seen as resulting solely from line – you can’t even ask your compromised CRO.

So, those of us who had argued for integrity of classification of risks and risk events were stopped dead in our tracks. A few argued in favour of a dual series, with ‘real’ causation captured so that after a number of years the events could be re-categorised and several accurate series could be implemented. I say ‘argued’, but it was a whimper, and we were not persuasive.

Little did we know how flawed the protected species of data was (and how flawed its use). Since the financial crisis began we have been treated to silly statements such as the one claiming the market experienced “25 standard deviation moves, several days in a row”.3 As an historical observation this simply cannot be; if it were an observation based on Goldman Sachs’ models, it would be truly terrifying. Setting aside unfortunate hyperbole, and observations that take appropriate account of the distributions actually used in developing risk models (which are not normal curves), let me hypothesise that the data set Basel II protected is too time-constrained – too short – to be useful, even were we to categorise events properly and think in terms of the probability of catastrophic events in a fat-tailed distribution.
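
To see why the claim cannot survive as a statement about normally distributed markets, consider a minimal back-of-the-envelope sketch in Python. It assumes – purely for illustration, since real return distributions are fatter-tailed – that daily moves follow a normal curve:

    import math

    def normal_tail_prob(k: float) -> float:
        """One-sided probability of a move of at least k standard
        deviations under a normal distribution."""
        return 0.5 * math.erfc(k / math.sqrt(2.0))

    p = normal_tail_prob(25.0)        # roughly 3e-138
    universe_days = 13.8e9 * 365.25   # age of the universe, in days

    print(f"P(one-day move >= 25 sigma): {p:.2e}")
    print(f"expected wait, in days:      {1.0 / p:.2e}")
    print(f"...in ages of the universe:  {1.0 / p / universe_days:.2e}")

    # 'Several days in a row' compounds the impossibility: p cubed is
    # about 1e-413, which underflows double precision to zero.
    print(f"P(three such days in a row): {p ** 3:.2e}")

Under normality, the expected wait for a single such day is on the order of 10^125 ages of the universe. So either the observation is hyperbole, or the models producing it were calibrated to distributions that badly understate the tails.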

And taxonomies are important. You get a very different distribution of events based on your definition of events/occurrences (and on how you aggregate, or don’t aggregate, losses). For example, there was recently an unpronounceably Icelandic volcanic ash cloud over Europe. How often do devastating ash clouds (include lava flows if you want to) occur? Well, there was Pompeii. So not very frequently. But there was a serious flood last year, a tsunami a couple of years ago, and an earthquake – or several of them – over the past few years. So if the category you use is ‘natural disaster’ rather than ‘volcanic eruption’, the frequency is far higher. What about fraud? If you aren’t transparent and you mis-characterise the data, especially if you do so purposely, you don’t have a prayer of creating accurate models and establishing accurate expectations.
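
The mechanics are easy to demonstrate. In the toy sketch below (event labels invented for illustration), the same five loss events yield a ‘volcanic ash cloud’ frequency of one but a ‘natural disaster’ frequency of five – the taxonomy, not the world, determines the frequency a model sees:

    from collections import Counter

    # Five hypothetical loss events, each tagged with a fine-grained
    # cause and a coarse parent category (labels are illustrative).
    events = [
        ("volcanic ash cloud", "natural disaster"),
        ("flood",              "natural disaster"),
        ("tsunami",            "natural disaster"),
        ("earthquake",         "natural disaster"),
        ("earthquake",         "natural disaster"),
    ]

    by_fine_cause = Counter(cause for cause, _ in events)
    by_coarse_category = Counter(category for _, category in events)

    print("frequency by fine cause:     ", dict(by_fine_cause))
    print("frequency by coarse category:", dict(by_coarse_category))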

Andrew Haldane, executive director for financial stability at the Bank of England, wrote an interesting paper, Why banks failed the stress test, in February 2009. It suggests that recent market price behaviours are within what ought to be expected parameters if the models do not limit themselves to the “golden decade” from June 1997 to June 2007 but instead avoid “disaster myopia” and incorporate data going back to, say, the South Sea Bubble. Tulips, anyone? Eating sardines versus trading sardines?4 Too long ago, you say – myopia might be bad, but the world has changed since the early and mid-1700s? Then include the 1940s’ World War II attacks on merchant ships, 1974’s price controls, and 1987’s Black Monday and the failure of ‘portfolio insurance’. How about May’s 15-minute market plunge, where it seemed market prices of underlying stocks followed indexes rather than vice versa and – lo and behold – superfast, non-synchronised, computer-driven trading on exchanges with different rules and capabilities created an outlier event? In other words, there is data that can be used to build relevant models. But that, apparently, is not what is in use (or even in use badly).

The result is that Basel’s duck don’t quack when it comes to the transparency and integrity of the data. If we can’t get Basel to change it, let’s get Barney to be frank (an intentional pun) about it; let’s act in the interests of accuracy and transparency. And reality. Let’s be substantive populists.

What ought to be truly frightening is that a huge set of losses recognised as operational losses results from business practices. (This set would, of course, be vastly larger if all losses caused by business practices were properly categorised, but I won’t further argue that today; there is plenty of ammunition available, so I can keep that powder dry.) Some come from practices all fair-minded people, of whatever political persuasion and ideology, can readily see are sleazy or ignorant, including numerous instances that result from that ubiquitous cause, ‘moral hazard’. (Hmmm – are loss events whose basic cause is moral hazard operational risk events by definition? That might be a step too far.)

But many business practice events and losses result from changing social mores. It is difficult to capture such data where the underlying cause of the underlying cause (ad infinitum) is a sociological adjustment. Theoretically it is possible, but it takes a bevy of wise men in a world where three blind mice seem to have been in charge. And if a loss comes from a retroactive societal adjustment in business practice standards, the loss is real – but is the cause business practices? It is not easy, but transparency is crucial to data, analytic and forecasting integrity (though it is not, in and of itself, a sufficient condition).

Lloyd Blankfein, CEO of Goldman Sachs, has said the bank will install a business practices committee to examine Goldman’s practices from top to bottom. Good – I approve. But this will be only as good as the vision, subtlety, open-mindedness and degree of understanding of the practices that exist. I would argue for bottom-to-top to complement top-to-bottom, because I do wonder whether a one-time top-down effort will suffice. I think it needs to be embedded throughout a business and in continuous operation, but I endorse the effort and wish them well.

To be blunt, failures on the business practices front not infrequently lead to catastrophic operational loss events.

I have argued for ‘transparency’ as a necessary predicate to sound risk management. I have argued for transparent reasoning in partnership with categorisation of risk events in accordance with their actual natures, which I’ve called ‘integrity’. I find it amazing that we still need to debate this, given what I think is universally acknowledged: that the US has the most transparent markets in the world, and that that transparency is a major competitive advantage and a reducer of risk in our markets. There is still time to adjust Basel. The financial crisis has not only emphasised the need for adjustment, but has bought us the time and the environment in which to do it. I’ve argued for making operational risk management a risk discipline rather than ‘just’ a control/compliance discipline used primarily to satisfy the Sarbanes-Oxley Act, the Federal Deposit Insurance Corporation Improvement Act and other requirements. In sum, if we address the issues of transparency and integrity, risk managers, senior executives and regulators will no longer have to speak loudly about their honour, and we will no longer have to count our spoons so carefully.
