Effective model risk management requires a combination of analytical skills, governance and organizational structure, as well as the ability to negotiate. In early studies of model risk management (MRM), analytics and model validation were the primary points of focus. Various writers led the way in discussing the possibility of model errors and the need for analytic review and skepticism toward models and results. Others addressed the risk of assuming models to accurately represent or measure risk within financial institutions. Losses due to model failure resulted in fresh regulations addressing the risk posed by inaccurate or inappropriate models. Very few writers, however, have addressed comprehensive model risk governance, organization or negotiation structures within MRM. This paper expands on the foundation of model risk analytics to address the governance, organizational and human behavior challenges associated with enterprise MRM. It proposes a comprehensive framework for model risk governance, organizational responsibilities and human behavior, extending from the risk committee of the board down to MRM. It expands the definition of models into decision-support tools (DSTs) so that end-user computing (EUC), big data analytics and machine-learning DSTs may be included in model risk governance, organization, negotiation and control.
1 Introduction: a comprehensive approach to governance, organization and analytics
In 2015, Chartis conducted a global survey of financial institutions titled “Leading practices in capital adequacy”. In this survey, only 9% of tier-1 firms reported including model risk governance as a subset activity of their risk committee of the board of directors (RC/BOD) “to a great extent”; 68% of tier-1 firms opted not to include it at all (Chartis 2015). Under the Dodd–Frank Act, US banks are required to include model validation in both Dodd–Frank Act stress testing (DFAST) and Comprehensive Capital Analysis and Review (CCAR). Despite this fact – and despite the regulatory requirements set by the Federal Reserve Board (FRB; 2011), the Office of the Comptroller of the Currency and Board of Governors of the Federal Reserve System (OCC and BOG–FRS; 2011), the Federal Housing Finance Agency (FHFA; 2013) and the Basel Committee on Banking Supervision (BCBS; 2015) – it appears that model validation has not been fully incorporated into any comprehensive model risk governance and management framework. Without a holistic approach to model risk governance, measurement and management, it will not be long before another “London Whale” or Long-Term Capital Management (LTCM) model risk event occurs. Without effective board and enterprise-wide governance and controls, the prevalence of computerized models, and our increasing reliance on them for decision making, brings with it the potential for huge losses of earnings and shareholder value, regardless of how much regulatory oversight is attempted. The same is true of the growth of big data analytics and machine learning.
Developing a comprehensive approach to model risk management (MRM) poses many challenges. The first challenge is that the focus of both regulatory agencies and the academic literature has heretofore been primarily on model analytics and accuracy, with very little emphasis on governance, organizational structures or human behavior. The second challenge is that much of this activity and focus has been on “big” models, treating end-user computing (EUC) systems, eg, spreadsheets, as an afterthought and generally ignoring big data analytics, machine learning and predictive modeling. Certainly, reviewing models for “accuracy” is a necessary condition for effective MRM. However, what has often been missed is that model “accuracy” is a necessary, but not always sufficient, condition for effective MRM.
Effective MRM must include a comprehensive model risk governance and organizational framework. This framework should encompass a knowledgeable and engaged RC/BOD, an executive-level model risk committee (MRC) reporting to the RC/BOD and a chief model risk officer. The latter should report to the MRC and RC/BOD in the same way that a chief internal auditor would generally report directly to the board’s audit committee.
A corollary to the challenge of focusing exclusively on big model results is that models are only as good as the data and assumptions that feed them. Data integrity (also called data quality management (DQM)), assumptions inventories and the mapping of relationships between data, assumptions and results – for individual models as well as for the enterprise – are crucial components of MRM. In effect, model data governance should include the same guidelines as those detailed in the BCBS Standard 239 (Basel Committee on Banking Supervision 2013).
Model risk metrics, model risk mapping and key risk indicators (KRIs) are other important elements of comprehensive MRM. The adage that “you only manage what you measure” applies to model risk within any financial institution. If the RC/BOD is not receiving regular KRIs on model risk directly from the chief model risk officer, if its members are not fully engaged with these reports and committed to understanding and challenging the officer’s findings, or if internal audit (ie, the third line of defense) is not auditing MRM, then the financial institution is not managing its model risk effectively. These KRIs must include a feedback loop, by which the RC/BOD may track and report back on the status of its model risk recommendations (whether issued, pending or remediated); this would provide metrics as well as a way to measure organizational compliance with MRM governance and controls. KRIs and risk mapping should also facilitate the flow of information from model developers and owners to MRM and the RC/BOD.
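As a minimal sketch of the recommendation-status feedback loop described above, a tracker could aggregate recommendations by status and by age. The record fields, status labels and model names here are hypothetical illustrations, not drawn from any regulatory standard.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

# Hypothetical record for one model risk recommendation.
@dataclass
class Recommendation:
    model_id: str
    issued: date
    status: str  # "issued", "pending" or "remediated"

def kri_summary(recs, as_of):
    """Aggregate counts by status, plus the age in days of the oldest open item."""
    counts = Counter(r.status for r in recs)
    open_ages = [(as_of - r.issued).days
                 for r in recs if r.status != "remediated"]
    return {
        "by_status": dict(counts),
        "oldest_open_days": max(open_ages, default=0),
    }

recs = [
    Recommendation("credit-pd-01", date(2023, 1, 15), "remediated"),
    Recommendation("var-engine", date(2023, 6, 1), "pending"),
    Recommendation("prepay-euc", date(2023, 9, 10), "issued"),
]
summary = kri_summary(recs, date(2024, 1, 1))
```

A report like this, produced on a fixed schedule, gives the RC/BOD the aging and compliance metrics the feedback loop requires.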
The challenge is how to transform a conventional enterprise, with the ethos of “model validation reports equals model risk control” or “model risk analytics equals model risk control”, into a comprehensive MRM-focused environment that engages the entire organization, from governance via human behavior to decision making based on models or decision-support tools (DSTs).
This paper is divided into eight further sections.
Section 2 expands the definition of model risk to include the decision-making processes that the model is supporting.
Section 3 addresses the governance and organizational roles and responsibilities related to MRM within an enterprise, including that of the RC/BOD.
Section 4 proposes an organizational structure for MRM and discusses the practical problems associated with alternative structures.
Section 5 addresses the need to understand and judge model “suitability for purpose”.
Section 6 discusses the governance and behavioral aspects of model validation recommendations, negotiations and escalation.
Section 7 explores the need for including EUCs in model risk governance and model risk inventories.
Section 8 addresses emerging model technologies such as big data analytics, machine learning and complex adaptive systems, outlining the need to include them in the enterprise’s MRM framework.
Section 9 summarizes and brings these individual components into a coherent framework.
2 Model risk management includes decision processes as well as data, analytics and output
Models are simply a representation of reality. There are many ways in which the human race represents reality, including art, literature, architecture, music and maps. Financial models are simply a mathematical application of this same desire to represent some aspect of reality. Computer models specifically are an extension of this desire to represent reality via an electronic medium. Often, computer models are used to forecast the future.
Forecasting the future is a practice with a long history. From crystal balls, throwing knucklebones, reading entrails, tea leaves and coffee grounds to sophisticated computer models, humankind has always wanted to know “what happens if…?” or “what will the future bring?”
Financial forecasting and modeling have been present in banks for as long as any form of financial institution has been in existence. The abacus is arguably one of the first handheld calculators, invented by the Babylonians in 2400 BC (Stephenson 2010). The Chinese, Egyptians, Greeks and Romans all developed versions of this device. These early calculators enabled the user to compute sums much faster and more reliably than an unaided human brain. Budgeting and forecasting future bank income has been prudent bank management behavior since the era of the abacus and gold houses.
However, when forecasting the future, or even measuring the past, the model designer, owner and decision maker should always be asking: How accurate is this model’s representation of reality? A healthy skepticism should be applied to the results of all models. As Box and Draper (1987) stated: “all models are wrong, but some models are useful”.
Box and Draper (1987) were not the only ones to address model risk and the potential for errors. Emmanuel Derman (1996), then at Goldman Sachs, was one of the first to explicitly discuss model risk within financial institutions. Reason (2000) addressed the human error factor in models and management. Persaud (2000), then at State Street Bank, addressed the “herd” or contagion effect of market-sensitive risk management and models. Taleb (2004) addressed the problem of trying to forecast the unknown in Fooled by Randomness. Derman (2005) also reminded management that they should exercise a healthy skepticism toward models in his paper “Beware of economists bearing Greek symbols”.
One issue related to model risk and the potential for model error that has not been explored very thoroughly is that of “agency”. Reason (2000) discussed the human-error factor, but what of model errors that have been introduced intentionally, perhaps in order to generate short-term returns and hence performance bonuses? In particular, the model risk agency problem is present in areas of modeling where there is no observed market pricing and the enterprise uses mark-to-model for performance criteria. If the model developer/owner is rewarded for mark-to-model performance, how is the enterprise to protect itself against self-dealing model construction? Unless there is an independent and comprehensive model risk governance, oversight and accountability framework in place, the enterprise remains vulnerable.
The Black–Scholes, or Black–Scholes–Merton, option-pricing model is one of the most widely used models and success stories of modern financial economics. However, its core construction required several simplifying assumptions that later proved problematic. Kamal (1998) addressed one of these simplifying model assumptions in his report “When you cannot hedge continuously”. The Black–Scholes model also assumes that margin requirements are immaterial and can always be met, that is, assuming away liquidity risk from the margin requirement of options or futures contracts. The 1980s “risk-controlled arbitrage” mortgage strategy, promoted by Smith-Breeden Associates, Inc., and the failure of LTCM (Lowenstein 2000) unfortunately proved that these embedded, simplifying model assumptions could lead to failure and the loss of millions of dollars.
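For reference, the Black–Scholes call price can be computed in a few lines; the code below is a standard textbook implementation, and the input values are purely illustrative. The point is that the formula itself embeds the simplifying assumptions discussed above (continuous frictionless hedging, constant volatility, no margin or liquidity constraints), none of which appear as explicit inputs.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call price under the Black-Scholes assumptions:
    continuous hedging, constant volatility, no liquidity or margin risk."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: at-the-money call, one year to expiry.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

Nothing in this code warns the user when hedging becomes discrete or margin calls bind, which is exactly the class of embedded assumption that cost LTCM and the risk-controlled arbitrage strategies so dearly.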
Given the inherent inaccuracy of models, why use them for decision making within an enterprise? Models and supporting computational devices are often useful and have a very long history. One could argue that computing and modeling began with the Babylonian abacus in 2400 BC (Stephenson 2010). With the progression from the abacus to mechanical calculators to Cray’s supercomputers to the personal computer and spreadsheets (an important EUC) to complex systems, big data analytics and artificial intelligence, computational tools and models continue to evolve in complexity, if not always in usefulness. Increasingly complex models need to be approached ever more cautiously. In his prescient comment in 1980, George Box wrote: “No statistical model can safely be assumed adequate. Perspicacious criticism employing diagnostic checks must therefore be applied. But while such checks are always necessary, they may not be sufficient, because some discrepancies may on the one hand be potentially disastrous and on the other be not easily detectable.”
If models are useful in helping us to understand the “what ifs” and in measuring future uncertainty, then they are, in fact, useful for decision making. Thus, perhaps we should also label them DSTs. If models are useful as DSTs but need to be applied with “perspicacious criticism employing diagnostic checks”, then MRM must, of necessity, encompass the model topology of the entire enterprise, from large computer models via desktop EUC applications to big data analytics and machine learning.
One of the challenges of current practice as regards MRM is that model risk can come from any DST construct within the organization. However, the attention of most MRM officers is centered on large applications; often they do not identify, oversee or manage the risks associated with EUCs, big data or artificial intelligence models. McConnell (2014) addresses this issue when dissecting the JPMorgan London Whale. The failure of model risk controls, including EUCs, cost JPMorgan Chase US$6 billion in trading losses and another US$1 billion in regulatory fines.
The conclusion here is that model risk is independent of both hardware and software platforms. Model errors can originate from EUCs, specialized risk applications, big data analytics or from machine learning and complex adaptive systems. Hence, MRM must address all decisions taken throughout the enterprise that are dependent on models, regardless of the model platform or organizational location.
3 Governance of model risk
It is important to address comprehensive model risk from the comprehensive decision-support framework rather than from a solely analytical perspective. Model validation does not equal MRM. Model risk governance and organizational structures are both integral parts of MRM.
One of the challenges of MRM is that most of the literature focuses on either the technical analysis and quantification of risk (see Christodoulakis and Satchell 2008; Meyer and Quell 2016; Morini 2011, 2014) or on the failure of models to accurately represent or measure risk. Persaud (2000, 2001, 2008), Rebonato (2001, 2007), Millo and MacKenzie (2008) and the majority of The Journal of Risk Model Validation (2007 ff) papers focus on model accuracy and reliability as a necessary but insufficient condition for MRM. Cresp et al (2017) identify the next phase in MRM evolution as the commoditization or factorization of MRM measurement, without addressing the need for simultaneous comprehensive MRM governance. These are all important factors when evaluating the relative accuracy of models, but in the absence of a strong governance and organizational structure, it is difficult to assess the decision-making usefulness or suitability of purpose of these models within the enterprise.
The regulatory environment also requires very specific quantification of risk via the guidelines issued by the FRB (2011), OCC (2012) and FHFA (2013) as well as the Fundamental Review of the Trading Book (FRTB) proposed by the Bank for International Settlements (BIS; 2016). Yet the success or failure of any MRM framework relies on the corporate governance framework that provides its foundation. Measurement or identification of model inaccuracies is insufficient without the corporate governance framework needed to enforce model-related or behavioral changes within the enterprise. Yoost (2013) and Scandizzo (2016) are two of the few authors to address the important governance components of MRM.
3.1 Who is responsible for model risk management governance and oversight?
The board of directors is responsible for the overall safety and soundness of the organization. This is well documented and discussed by Yoost (2013) and Scandizzo (2016), among others. Regulatory entities, including the BCBS (2015), FRB (2011, 2015), OCC (2011) and similar organizations, have given additional direction to boards of directors for their fiduciary responsibilities with regard to risk, including model risk. Via Supervisory Letter SR-15-18 (FRB 2015), the FRB expanded the roles and responsibilities of the board to include understanding model assumptions and risks. To quote FRB SR-15-18, especially with respect to the use of model overlays (also called “on-top” adjustments) and model performance, “the board should direct senior management to provide information about the firm’s estimation approaches, model overlays, and assessments of model performance”.
Befitting their fiduciary governance role over the safety and soundness of the financial institution, the board of directors is ultimately responsible for MRM. However, as with other responsibilities requiring special skills or talent, boards typically delegate responsibility for MRM to a dedicated RC/BOD. Current and emerging risks that the RC/BOD are responsible for are addressed by Yoost (2013); this includes model risk. Scandizzo (2016) addresses the overall governance responsibilities of the board. The regulatory guidelines set by the FRB, OCC, FHFA and BIS also explicitly state that boards should take responsibility for MRM, although these documents provide very little detail about how these duties should be carried out.
An RC/BOD’s oversight of MRM must be based on an understanding of the current and anticipated levels of model risk within the enterprise. Regulatory requirements insist that the board determine its “appetite for risk”. This is a challenging responsibility for the RC/BOD, in that model risk analytics is a highly specialized field. As explained by Derman (2005), it is easy for an RC/BOD to become overwhelmed by technical details. However, there are many aspects of MRM oversight and governance that the RC/BOD can, and should, understand without the need for “learning Greek”. Yoost (2013) argues that understanding model risk and presenting effective challenges to management are two important responsibilities of the RC/BOD.
One of the challenges facing RC/BODs and senior management teams is that members tend to “manage only what they measure”. Thus, the RC/BOD should insist on management presenting them with regular KRIs on the state of the enterprise model risk. It is important for RC/BOD members not to equate “model validations” with MRM. Again, model validations are a necessary but insufficient condition for MRM. The development of appropriate KRIs for the purpose of MRM is evolving, but some of the current “best in class” KRIs include both dynamic and static measures of model risk, based around inventories of enterprise models and model assumptions, location and type of risk measures, and the status of recommendations and remediation within the enterprise.
Scandizzo (2005) discusses the use and application of risk mapping and KRIs within operational risk management. Similar principles need to be applied by the RC/BOD, and model risk mapping and metrics should be developed for each individual enterprise. The study of model risk metrics should include, but not be limited to, answering the following questions.
Where are the primary model risks currently located?
What decisions are currently being made based on these models? What decisions will be made in the future?
What is the nature of the risk with each of the models: is it, for example, financial, regulatory, reputational or customer related? How are the models being redesigned, operated and/or controlled to reduce that risk?
What recommendations for model improvements have been made previously?
What have the model owners done to remediate these recommendations? How long did it take them to do so?
Are model owners playing “hide the model” or resisting the oversight of MRM in some other way?
If so, has MRM been able to effectively escalate these areas of resistance to the RC/BOD? And so forth.
The mapping of model risk and the development of effective KRIs are both in their infancy, but these practices are key components of effective long-term governance and MRM.
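One simple way to begin answering the first two questions above (“Where are the primary model risks located?” and “What decisions depend on them?”) is to group the model inventory by business unit and risk type. The sketch below assumes a hypothetical flat inventory; the model names, units, risk types and tier labels are illustrative only.

```python
from collections import defaultdict

# Hypothetical model inventory entries:
# (model name, business unit, risk type, risk tier)
inventory = [
    ("var-engine", "trading", "market", 1),
    ("pd-model", "credit", "credit", 1),
    ("pricing-euc", "trading", "market", 3),
    ("aml-score", "compliance", "regulatory", 2),
]

def risk_map(inv):
    """Group models by (business unit, risk type) so the RC/BOD can see
    where model risk is concentrated across the enterprise."""
    grouped = defaultdict(list)
    for name, unit, risk, tier in inv:
        grouped[(unit, risk)].append((name, tier))
    return dict(grouped)

for key, models in sorted(risk_map(inventory).items()):
    print(key, "->", models)
```

Even this crude grouping makes concentrations visible, for example two market-risk models in a single trading unit, one of which is an EUC, which is the kind of picture a risk map should give the RC/BOD.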
4 Organizational delegation from board to management
As regards the day-to-day operations of the enterprise, the board and its RC/BOD delegate their authority to management. This delegation of authority includes the management and oversight of model risk. Thus, the organizational structure of management is another important element of effective enterprise MRM. As discussed above, model validation alone does not equal MRM.
Anecdotally, it appears that the most common organizational structure for the delegation of model risk authority entails the RC/BOD delegating day-to-day oversight of all model risk to a chief risk officer (CRO). For most RC/BODs, this is the path of least resistance. The organizational logic is thus: the CRO reports to the RC/BOD on all risk activities; model risk is a risk activity, and therefore the CRO should be made responsible for model risk.
Unfortunately, there is an inherent conflict of interest when the MRM group reports to the CRO. The CRO and enterprise risk management group rely on models to measure and report risks to the RC/BOD. In order to be as effective as possible, the MRM group must be independent of all model developers, owners and users, as per the regulatory requirements of the FRB, OCC, FHFA and BIS. A foundation of MRM is that the model validators exercise independent professional judgment and do not share the same perspective as the developers and users of those models. MRM cannot be said to be fully independent if a percentage of the models to be evaluated and overseen belong to the CRO.
How is the MRM group to remain independent to freely critique and judge the accuracy/suitability for purpose of models that the CRO is ultimately responsible for? In such circumstances, even assessing the models’ documentation becomes a conflict of interest. This is a problem that the model risk profession is only just beginning to wrestle with.
The conflict of interest between the CRO and MRM is similar to that historically faced by internal auditors until the internal auditor–chief financial officer (IA–CFO) structure was redefined. It is all too easy for the CRO to exercise implicit or explicit control over “independent” MRM judgments when salaries, bonuses, promotions and budgets are directly managed by the CRO: “Need your software upgraded? Additional staff? A promotion? Well, we’ll have to think about that … while reflecting on the times when the MRM group was critical of the credit or market value model that CRO is ultimately responsible for.”
Similar, but more subtle, cultural pressure can be intentionally or unintentionally exerted on MRM via an “us versus them” ethos within the enterprise risk management (ERM) group. MRM independence is threatened whenever the CRO emphasizes that “we are all part of the ERM group and we all collaborate within the group”, or exercises similar cultural pressure. When this type of cultural pressure is applied within the ERM group, it removes the independence of the MRM, putting political pressure on MRM to downplay or eliminate criticism of ERM models, since “we are all one collaborative family”.
“You are being disruptive to the organization’s goals and objectives”; “you are not being collaborative”; “stop challenging all of our assumptions”: these are criticisms frequently directed at effective model risk officers and staff. The more effective an MRM unit, the more likely these and similar criticisms are to be voiced within the organization, and raised at the C level of the organization by the groups producing the most model risk. Thus, it is important for the RC/BOD to provide its MRM unit with organizational “cover” in the form of a direct reporting relationship between the MRM and the RC/BOD. Without this protective governance cover from the RC/BOD, even the CRO is likely to exercise undue or inappropriate influence over the MRM unit.
The appeals process for model developers, owners and decision makers is also an important organizational structure for MRM. Most governance and organizational structures include an outlet for appeal in the event that model owners do not agree with MRM judgments. Yet there is a potential for conflict of interest in the appeals process between CRO-owned models and non-CRO-owned models. Both groups should be treated similarly as regards model validations, recommendations, appeals and remediation. But what if they are not?
If a non-ERM department manager refuses to make the recommended changes, is there an appeals procedure for MRM to pursue?
What if an ERM department manager refuses to accept the MRM recommendations?
Supposing the CRO supports the ERM department manager, what further appeals/escalation procedures are available to MRM?
What if the CRO refuses to discipline the ERM manager?
MRM reporting to the IA presents a similar yet slightly different challenge to both MRM and IA independence. The three lines of defense for MRM are: first line, the model owner; second line, model risk management; third line, the IA. The IA represents a control point over MRM. Internal audits of MRM should assess whether MRM is effectively issuing recommendations based on model validations. They should also track the aging of recommendations and review model-owner remediation before closing outstanding recommendations. In structures where MRM reports to the IA, either MRM or the IA gives up independence of control.
The evolving solution for maintaining MRM independence is for MRM to report directly to the RC/BOD, with administrative responsibility at CEO level. This is the same organizational framework that the IA and the audit committee of the BOD settled on after addressing the potential conflicts of interest between the CFO and IA. It solves the problem of both implicit and explicit conflicts of interest between the CRO and MRM. It also maintains a direct channel of communication and accountability between MRM and the RC/BOD, whose authority allows it to intervene when any model developer/owner refuses to act on the recommendations of MRM.
A second organizational structure (offering a solution to the delegation of authority from RC/BOD to MRM) involves the creation of a model risk committee (MRC), charged with the intermediate responsibility of overseeing and governing enterprise MRM. Regulatory focus and current “best practices” within financial institutions present this delegation of authority from RC/BOD to MRC as a way for enterprises to translate the RC/BOD governance structure into a management framework.
From the point of view of MRM, the advantage of the MRC is that it can meet on a more frequent basis than a typical RC/BOD. If composed of C-level senior executives who are not themselves model owners, the committee can be an effective way to delegate the authority for overseeing MRM functions within the organization. The C-level executives can recuse themselves when discussing their own specific models or EUCs. This committee composition maintains independence for both MRM and the MRC, while allowing for a more technical discussion of model risk analytics than is possible at the RC/BOD level. If, however, the MRC is composed solely of model owners, then the MRC may become a forum for model-owner education and shared experience. Alternatively, should the MRC/model owners seek to exercise veto power over MRM recommendations and governance, MRM may be forced to cede its independence.
The organizational structure of the MRM department depends on the structure, complexity, type of risks and size of the enterprise itself. Large financial institutions exposed to various types of risk in multiple currencies/countries, regulatory regimes and customers/products will form large complex MRM structures built to govern, measure and manage these complex risks. Smaller enterprises with fewer sources of risk will have smaller organizational structures.
Every MRM department should have a structure that allows the chief model risk officer to effectively manage both model risk governance and model risk analytics. Model risk governance should include both the policies/procedures and organizational structure of MRM throughout the enterprise. It should also be responsible for risk metrics and reporting to the RC/BOD and MRC. Effective tracking of model validation schedules, recommendations, deadlines and remediations is a minimum requirement for model risk governance. Without model risk governance, the enterprise de facto equates model validation with MRM, with less than optimal results.
5 Understanding model “suitability for purpose”
“Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” [Box and Draper (1987)]
Effective MRM requires judgment about the efficacy of decision making by each model employed by the enterprise. This judgment about the effectiveness or usefulness of each model, in terms of the decisions it has been used to support, is often called “suitability for purpose” or “suitability for use”. Models are built with the intention of assisting executives to make better decisions than they could make without them. Paraphrasing Box and Draper (1987) somewhat, the model may not be 100% correct, but it may be correct enough to be useful. Again, the term DSTs may be a better description than simply “model”.
Models as DSTs may be more or less accurate and still produce reasonable decision support compared with less-informed decisions or results. Conversely, if management/board decisions are being made based on reasonably accurate models that are nevertheless inappropriate for the decisions in question, then the model is not suitable for use and should be rejected as inappropriate.
There is also a time dimension to suitability for use. If a model was worthwhile in terms of making decisions months or years ago, but the market or externalities have since changed, then the model is no longer suitable for use. On the positive side, a model that is only slightly wrong and better than all other alternatives for a given decision could be considered “suitable for use” for that particular decision-making process. Model aging must be addressed in MRM governance and controls.
Given this need for MRM to render an independent judgment on the “suitability for use” of a model at a given moment in time, MRM executives need to have expertise in the relevant enterprise risks and decision requirements as well as knowledge of the quantitative needs, structures and analytics of the models overseen. This combination of qualitative management decision making and quantitative analytics is difficult to find and maintain within an enterprise.
A corollary to independent judgment about “suitability for use” at a given moment in time is the question of materiality of individual aspects of the model as well as the model in aggregate. There is little discussion within the regulatory or MRM literature about materiality. In fact, one might argue that, since the regulatory guidance does not allow for judgments regarding materiality, all model errors are material. However, “suitability for use” requires MRM to make judgments about immateriality of data elements, assumptions or calculations as part of the decision-making process each model is used to support. Thus, materiality should not be ignored by MRM.
Model validations of erroneous or poorly constructed analytic tools are relatively easy to identify and perform. More challenging for MRM professionals are those models that may be technically or theoretically correct but have been misapplied. Organizational, political or human behavioral issues related to the decision framework based on favorite models that are outdated or irrelevant to the decision being made can potentially put the enterprise at financial, regulatory, legal or reputational risk.
Suitability for use also applies to the aging of models within the enterprise. The life cycle of models moves from development through implementation and use, and ends in aging, decay and retirement. Sometimes market forces change rapidly, as with the models that used the London Interbank Offered Rate (Libor) for discounting and valuing derivatives in 2008. Firms that recognized the need to shift their models from Libor to overnight indexed swap (OIS) discounting were protected, while firms that were not agile or perceptive enough to recognize the shift in model convention lost money until they changed their models.
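The economic stakes of a discounting convention can be illustrated with a minimal sketch. The rates and cashflows below are hypothetical and are not drawn from this paper; the point is simply that the same instrument, valued under a flat Libor-style rate versus a lower OIS-style rate, produces materially different present values, and a firm clinging to the stale convention carries that gap as unrecognized model risk.

```python
import math

def present_value(cashflows, rate):
    """Discount a list of (time_in_years, amount) pairs at a flat,
    continuously compounded annual rate."""
    return sum(amount * math.exp(-rate * t) for t, amount in cashflows)

# Hypothetical fixed-leg cashflows of a swap: 1,000,000 per year for 5 years.
cashflows = [(t, 1_000_000) for t in range(1, 6)]

# Illustrative flat curves: a Libor-style rate vs a lower OIS-style rate.
pv_libor = present_value(cashflows, 0.030)
pv_ois = present_value(cashflows, 0.025)

# The valuation gap is pure model-convention risk: same instrument,
# same cashflows, different choice of discount curve.
gap = pv_ois - pv_libor
```

Under these assumed rates the gap is roughly 68,000 on a 5,000,000 notional cashflow stream; a real OIS transition involves full curve construction, but even this flat-rate sketch shows why model-convention aging belongs in MRM governance.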
It is important for the enterprise to have inventories of models and inventories of assumptions. An inventory of models should include a risk classification for each model and can be used to guide the RC/BOD and MRM in how frequently and attentively to review the models with the greatest potential for model risk. Controls around the model inventory, change logs and communication between model users and MRM are important but beyond the scope of this paper. An inventory of assumptions for each model is also an important control for MRM and the enterprise. This inventory should include a list of macro-assumptions as well as embedded model-developer assumptions and specific detailed assumptions. Key risk indicators (KRIs) built around the assumptions inventory should be monitored and reported on periodically to the RC/BOD. Macro-assumptions are important so that what “everyone knows to be true” may be explicitly documented and reviewed by all stakeholders. Had this been done with mortgage credit models prior to 2007, the mortgage finance industry would hopefully have realized that a never-declining House Price Index was a dangerous assumption on which to issue subprime mortgages. An assumptions inventory is also useful in the event of a change in market conditions, enabling MRM to quickly identify which models across the enterprise are affected.
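The inventory controls described above can be sketched as a simple data structure. Everything here is hypothetical (record fields, tier names, the example assumptions): the sketch only shows how an inventory that links each model to its macro-assumptions lets MRM answer, immediately, “which models are affected now that this assumption has failed?”, ordered by risk classification.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a hypothetical enterprise model inventory."""
    name: str
    owner: str
    risk_tier: str                       # e.g. "high", "medium", "low"
    macro_assumptions: list = field(default_factory=list)
    last_validated: str = "never"

# Illustrative inventory entries.
inventory = [
    ModelRecord("subprime_credit_loss", "Credit Risk", "high",
                ["house_prices_never_decline", "stable_unemployment"]),
    ModelRecord("deposit_runoff", "Treasury", "medium",
                ["stable_unemployment"]),
]

def models_affected_by(inventory, assumption):
    """When market conditions invalidate a macro-assumption, return every
    model that relies on it, highest risk tier first, so MRM can
    prioritize revalidation."""
    tier_order = {"high": 0, "medium": 1, "low": 2}
    hits = [m for m in inventory if assumption in m.macro_assumptions]
    return sorted(hits, key=lambda m: tier_order[m.risk_tier])

affected = models_affected_by(inventory, "house_prices_never_decline")
```

A production inventory would add change logs, validation schedules and KRI reporting, but the core control is exactly this mapping from assumptions to models.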
Disputes over suitability-for-use judgments by MRM require the enterprise to have a well-developed escalation process within its model risk governance policies. Anecdotally, this is one of the greater political and organizational challenges facing MRM. The final judgment of “suitable” or “not suitable” for use puts MRM in direct conflict with the developers and owners who believe that the model they have built, or are currently using, should be implemented or continue to be used as before. Yet, as the examples of Libor models, LTCM, JPMorgan Chase’s London Whale and other model failures have shown, the toughest part of MRM’s role in an organization is to challenge powerful political interests. Without the support of the RC/BOD and an escalation process that protects both MRM and the model owner, judgments about “suitability for use” are not trustworthy.
6 Model validation recommendations, negotiations and escalation
MRM’s responsibility to render judgment on a model’s suitability for use requires more than the qualitative and quantitative expertise discussed in the previous section: MRM members must also possess excellent negotiating skills. Model owners and their senior executives will, at times, disagree with the improvements to modeling or decision-making processes that MRM recommends. Thus, MRM personnel need judgment, negotiating skills and the support of the RC/BOD in order to effectively manage model risk in the enterprise.
Skills of negotiation and persuasion are important qualities for MRM, because there are times when model developers/owners will resist recommendations or “findings” by MRM, either because of specific wording or to test how strongly MRM believes in a particular recommendation/finding. In concert with negotiating skills, a sound understanding of human and political behavior is important when proposing changes that will reduce enterprise model risk while perhaps costing the model owner or the organization a certain amount of time and money. At the same time, it is important for those in charge of MRM to be attentive and diplomatic listeners who are able to respond to model developers’ and owners’ opinions and contributions. No single group within the enterprise is likely to have all of the best knowledge of models, DSTs and their usefulness to the enterprise. Listening, negotiation, persuasion and compromise where appropriate are key to effective MRM. In addition, building a political base and knowing that they possess the strong support of the RC/BOD make negotiations over recommendations or report language easier for MRM prior to issuing a judgment on suitability for use or a model validation report.
Not all recommendations/findings are equal; some are immediate – “stop using the model until fixed” – while others stipulate “fix it within a time frame”. Recommendations/findings often target model risks beyond model analytics. For example, key-person dependency (that is, when only one person knows how to run a key model) is a model risk. Similarly, a lack of documentation is more than an analytics issue: missing or poor desktop procedures leave employees unprepared to operate or understand a model when staff turn over. Poorly trained model operators are also a model risk, and recommendations need to be made accordingly.
The Sarbanes–Oxley Act of 2002 requires auditors to judge the “tone at the top” of the enterprise regarding risk culture and compliance. The “tone at the top” risk culture should be consistent throughout the organization and should encompass all model-related functions in order for MRM to be effective. When the chief MRM officer reports to the RC/BOD, and the RC/BOD is engaged to ensure that model risk is managed, then MRM is able to effectively negotiate with sometimes reluctant or recalcitrant model owners. As with internal auditors (IAs), the possibility of MRM escalating model improvement or error correction recommendations to the RC/BOD, with predictable support for MRM from the RC/BOD, means that model owners or developers will be reluctant to refuse recommendations simply because of costs, inconvenience or pride.
The tone at the top, and demonstrably consistent engagement from the RC/BOD in MRM, also mitigates the problem of “we built this, but it is not a model” (also known as “hide the model”). In a recent informal meeting of model risk officers, this author asked how many of the other model risk officers had ever had a model owner try to hide their models from MRM, or to claim that their EUC or computer program was “too simple to be a model”. After the laughter died down, the response was near unanimous, except for one sarcastic comment that any allegation about model owners trying to hide models from MRM was “fake news”.
Model owners and developers are incentivized to hide or delay oversight by MRM because of the cost to them, in terms of time and resources, that undergoing a model validation entails. There are further costs to the model owner/developer when the time comes to remediate the recommendations made by MRM. There is also a delay in implementing a new or upgraded model when MRM performs validations prior to putting the model into production.
However, the RC/BOD must be seen as the ultimate authority for MRM, providing the “tone” and organizational incentives so that model owners/developers will voluntarily undergo model identification and validation, and act on recommendations. Penalties for hiding models, combined with automated tools for identifying models, provide both a “carrot and stick” approach for enforcement of MRM policies and procedures.
Decision-making committees within the enterprise serve as a key resource for those seeking to understand the decision framework for models; they are also a source of emerging and/or undocumented models. MRM should have a seat at the table of every key decision-making committee that utilizes models. The credit committee, asset/liability committee and investment committee are examples of committees where MRM should be present, for how else can MRM judge models’ suitability for use? Only by understanding the decision-making process and organizational framework can MRM judge a model’s suitability for use and its role in supporting decisions.
Escalation procedures for disputes between model owners/developers and MRM are also related to model risk governance. Such disputes will certainly occur, and it is important to detail the escalation process as both a safety valve for resolving them and a guarantee to the RC/BOD that neither MRM nor model owners will act in an arbitrary, capricious manner. Escalation of disputes, first to the MRC and then to the RC/BOD, gives both sides an opportunity to be heard and confirms that there is a legitimate difference of opinion that rightfully falls within the remit of the MRC or RC/BOD.
7 Managing the model risk of decision-support tools and end-user computing
Earlier in this paper, we suggested that the term “DSTs” provides a better framework for MRM than “model”. The continued demand for agile or rapid decision making throughout enterprises, and the ease and familiarity of spreadsheets and other desktop applications, means that enterprises make use of a wide range of EUC tools. These DSTs are often precursors to larger model applications, in that model owners have identified a need for decision tools but have found neither the time nor the budget to develop a larger application.
In the early days of MRM’s evolution, value-at-risk (VaR) models were almost the only models deemed worthy of oversight, as they were considered the most complex and to have the greatest potential for loss. Market value models were added to the model inventory as model risk awareness grew; credit models followed after the housing crisis; and operational risk models were added as regulators became more aware of operational risks to the enterprise. However, anecdotally, neither spreadsheets nor EUCs have generally been included in MRM model inventories to date.
EUCs proliferate throughout the enterprise, many with errors in construction or cell formulas and lacking documentation or desktop procedures. Epitomizing key-person dependency, only one or two people in an enterprise may know how a given EUC works. Croll (2005, 2009), Powell (2009) and Powell et al (2008) have addressed these issues under the moniker of “spreadsheet risks”. McConnell (2014) classifies JPMorgan Chase’s London Whale spreadsheets as primarily operational risk with some elements of model risk. However, to date, neither the MRM professional nor the academic literature has fully included these model risks in overall model risk governance or analytical frameworks.
Banking regulators’ definition of models as “computational engines that use data and assumptions that are not axiomatically correct” (Federal Housing Finance Agency 2013) has perhaps inhibited the use of a broader definition of models as “calculation tools used to support decisions”, or DSTs. In using this narrower definition of models, many MRM departments avoided including EUC decision tools in their model inventories. This narrow focus is gradually being corrected within the MRM profession, but it will take time for MRM departments and model owners to adjust.
8 Expanding model risk management: big data, heuristics and complex systems
The development and use of heuristics, big data analytics, machine learning, expert systems and complex adaptive systems also suggests that MRM is still in its infancy. These relatively new approaches to ex post analytics or ex ante forecasting enable the enterprise to analyze in more depth, basing decisions on data and analytical approaches that were not previously possible. Probabilistic, adaptive human and organizational behaviors can all be analyzed or modeled in such a way as to give the enterprise greater ability and agility to meet customer needs and to react to changing market conditions, all of which leads to greater control over risks. On the downside, these DSTs rely heavily on more data, assumptions and complex computations that are challenging to test and thus to validate.
The challenge of these new big data analytics is that they are being rapidly developed in areas of the enterprise not traditionally included in model risk inventories or oversight. Yet the same model risks are present in these emerging support tools.
Is the data accurate?
If not, what should one do with erroneous data?
Are the methods used for scrubbing data valid?
Are the model analytics valid, reasonable or even usable? How do we know we know?
Is there anything we do not know? What if our data or analytics are wrong?
To reference Box and Draper (1987) once again: How wrong do these models have to be before they are unusable? Is anyone in MRM overseeing these new tools? If not, how long before an enterprise has a major financial or reputational loss due to this emerging area of model risk?
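The data-quality questions above can be made operational with a minimal screening sketch. The records, field names and plausibility bounds below are hypothetical; the point is that erroneous data should be flagged and reported rather than silently “scrubbed”, and that the resulting error rate is itself a KRI worth escalating to MRM.

```python
def screen_input_data(records, bounds):
    """Flag records whose fields are missing or outside plausible bounds.
    Returns the list of flagged (index, field, reason) tuples and the
    overall error rate, which can be tracked as a KRI."""
    errors = []
    for i, rec in enumerate(records):
        for fieldname, (lo, hi) in bounds.items():
            value = rec.get(fieldname)
            if value is None:
                errors.append((i, fieldname, "missing"))
            elif not (lo <= value <= hi):
                errors.append((i, fieldname, f"out of range: {value}"))
    error_rate = len(errors) / (len(records) * len(bounds))
    return errors, error_rate

# Hypothetical loan records feeding a big data credit model.
records = [
    {"ltv": 0.8, "fico": 720},
    {"ltv": 1.7, "fico": 700},   # implausible loan-to-value ratio
    {"ltv": 0.6, "fico": None},  # missing credit score
]
bounds = {"ltv": (0.0, 1.25), "fico": (300, 850)}

errors, error_rate = screen_input_data(records, bounds)
```

Checks like these answer only the first of the questions above; validating the scrubbing methods and the analytics themselves requires the broader second-line review the rest of this section argues for.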
The question attached to all DSTs should always be the following: How do we know we can make effective decisions from this DST? Or, to paraphrase Box and Draper (1987): How wrong is this model? Will developers of a new DST based on big data analytics, expert systems, complex adaptive systems, etc, recognize and convey to decision makers the weaknesses and potential errors within their models? Or will they follow the more typical path, and imply or assume that their model results are infallible? History suggests that the fallibility of these newly emerging DSTs will go unrecognized until a high-profile model failure occurs. It is the responsibility of the board and MRM to avoid such mishaps. Thus, these new areas of model risk must be quickly incorporated into the enterprise MRM framework.
In summary, ensuring that DSTs (also known as models) are fed with – and produce – usable data, assumptions, calculations and results (eg, analytics) is a necessary but not sufficient condition for effective MRM. A strong governance and organizational structure must also be in place throughout the entire enterprise, from the RC/BOD to MRM. The RC/BOD must be knowledgeable and engaged in its oversight of MRM, receiving KRIs on model risk just as it does for other risks. Model validation reports alone are insufficient information for the RC/BOD.
MRM must understand the risks and decisions made by the board and management, so that the models or DSTs it reviews can be judged “suitable for use”. Risk-mapping and KRIs that monitor model risks, with feedback loops between model developers, owners, MRM and the RC/BOD, support and enhance the identification of current and emerging areas of model risk within the enterprise.
MRM must be organizationally and politically supported by the RC/BOD, so that everyone in the enterprise understands that the judgments, recommendations and actions taken by MRM are supported from the top. Only with this support can MRM effectively negotiate with model owners/developers. This tone at the top also ensures that model owners/developers do not play “hide the model” from MRM.
Last, the definition of model needs to be expanded to “DST”, so that the EUCs supporting enterprise decisions may be included in the model inventory, given risk classifications and evaluated appropriately. Big data analytics, complex adaptive systems, artificial intelligence and related emerging decision-support technologies also require a second line of defense, via which they may be evaluated and recommended to the enterprise as “suitable for use” or not.
Given the challenges discussed here, given the nascent stage of many of these governance and organizational structures, and given emerging model risks in the form of EUCs, big data analytics, machine learning and complex adaptive systems, it is important that MRM expand its scope and mission. Enterprises need to become more aware of these risks and address them through comprehensive MRM governance and organizational structure, supported by solid analytics. Only then will enterprises be able to safely rely on these decision tools in the future.
Declaration of interest
The views expressed in this article are those of the author and do not necessarily represent the views of the Federal Home Loan Bank of New York. The author is not providing business consulting advice, and the FHLBNY is not a financial or investment advisor. It is solely the reader’s responsibility to evaluate the risks and merits of any modeling, funding or investment proposal. The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.
Basel Committee on Banking Supervision (2013). Principles for effective risk data aggregation and risk reporting. Standard 239, January, Bank for International Settlements.
Basel Committee on Banking Supervision (2015). Corporate governance principles for banks. Report, Bank for International Settlements.
Board of Governors of the Federal Reserve System (2015). Federal Reserve supervisory assessment of capital planning and positions for LISCC firms and large and complex firms. Supervisory Letter SR 15–18, Board of Governors of the Federal Reserve System.
Box, G. E. P. (1980). Sampling and Bayes’ inference in scientific modeling and robustness. Journal of the Royal Statistical Society A 143(4), 383–430.
Box, G. E. P., and Draper, N. R. (1987). Empirical Model-Building and Response Surfaces. Wiley.
Chartis (2015). Leading practices in capital adequacy. Research Report, Chartis.
Christodoulakis, G. A., and Satchell, S. (2008). The Analytics of Risk Model Validation. Quantitative Finance Series. Academic Press, Burlington, MA.
Crespo, I., Kumar, P., Noteboom, P., and Taymans, M. (2017). The evolution of model risk management. Report, McKinsey, February.
Croll, G. J. (2005). The importance and criticality of spreadsheets in the City of London. Working Paper, EuSpRIG.
Croll, G. J. (2009). Spreadsheets and the financial collapse. Working Paper, European Spreadsheet Risks Interest Groups.
Daníelsson, J. (2000). The emperor has no clothes: limits to risk modeling. Working Paper, London School of Economics.
Derman, E. (1996). Model risk. Quantitative Strategies Research Notes. Report, Goldman Sachs.
Derman, E. (2005). Beware of economists bearing Greek symbols. Harvard Business Review 83(10), 16–17.
Federal Housing Finance Agency (2013). Model risk management guidance. Advisory Bulletin AB2013-07, November, FHFA.
Hénaff, P., and Martini, C. (2010). Model validation: theory, practice and perspectives. The Journal of Risk Model Validation 5(4), 3–15 (http://doi.org/chbf).
Ifrah, G. (2000). The Universal History of Numbers. Wiley.
Kamal, M. (1998). When you cannot hedge continuously: corrections to Black–Scholes. Quantitative Strategies Research Notes. Report, Goldman Sachs.
Lowenstein, R. (2000). When Genius Failed: The Rise and Fall of Long-Term Capital Management. Random House.
McConnell, P. (2014). Dissecting the JP Morgan whale: a post-mortem. The Journal of Operational Risk 9(2), 59–100.
Meyer, C., and Quell, P. (2016). Risk Model Validation, 2nd edn. Incisive Risk Information Limited, London.
Millo, Y., and MacKenzie, D. (2008). The usefulness of inaccurate models: the emergence of financial risk management. Accounting, Organizations and Society 34, 638–653.
Morini, M. (2011). Understanding and Managing Model Risk: A Practical Guide for Quants, Traders and Validators. Wiley Finance.
Office of the Comptroller of the Currency and Board of Governors of the Federal Reserve System (2011). Supervisory guidance on model risk management. Report, OCC and BOG–FRS.
Persaud, A. (2000). Sending the herd off the cliff edge: the disturbing interaction between herding and market sensitive risk management models. World Economics 1(4), 15–26.
Persaud, A. (2001). Cohabiting with Goliath: the enduring legacy of LTCM. World Economics 2(4), 105–116.
Persaud, A. (2008). Why bank risk models failed. Article on VOXEU.org (the portal of the Centre for Economic Policy Research), April. URL: http://bit.ly/2yYMMcm.
Powell, S. G. (2009). Errors in operational spreadsheets. Journal of Organizational and End User Computing 21(3), 24–36.
Powell, S. G., Baker, K. R., and Lawson, B. (2008). A critical review of the literature on spreadsheet errors. Decision Support Systems 46, 128–138.
Reason, J. (2000). Human error: models and management. British Medical Journal 320, 768–770.
Rebonato, R. (2001). Theory and practice of model risk management. Working Paper, Quantitative Research Centre of the Royal Bank of Scotland/Oxford University. URL: http://bit.ly/2AbYXDZ.
Rebonato, R. (2007). Plight of the Fortune Tellers: Why We Need to Manage Risk Differently. Princeton University Press.
Scandizzo, S. (2005). Risk mapping and key risk indicators in operational risk management. Review of Banking, Finance and Monetary Economics 34(2), 231–256.
Scandizzo, S. (2013). Risk and Governance: A Framework for Banking Organisations. Risk Books, London.
Scandizzo, S. (2016). The Validation of Risk Models: A Handbook for Practitioners (Applied Quantitative Finance), 1st edn. Palgrave Macmillan.
Sibbertsen, P., Stahl, G., and Luedtke, C. (2008). Measuring model risk. The Journal of Risk Model Validation 2(4), 65–81.
Stephenson, S. K. (2010). Ancient computers. Report, Cornell University Library.
Taleb, N. N. (2004). Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Penguin.
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Penguin.
Tunaru, R. (2015). Model Risk in Financial Markets: From Financial Engineering to Risk Management. World Scientific Publishing Company, Singapore.
Yoost, D. A. (2013). Board oversight of model risk is a challenging imperative. RMA Journal, November, 24–30.
Yoost, D. A. (2016). A Director’s Voyage Through Risk Management. The Risk Management Association, Philadelphia, PA.