Journal of Operational Risk

What is essential is invisible to the eye: prioritizing near misses to prevent future disasters

Andrea Giacchero and Jacopo Moretti

  • Near miss analysis can be used to recognize the inadequacies in the internal processes
  • The model aims at prioritizing near misses in a decreasing order of significance
  • Near miss prioritization allows us to identify the most urgent mitigation actions

A near miss is a negative, anomalous event that, owing to fortunate and/or random circumstances, results in an accident without damage to people or to corporate or environmental assets. A series of these events can be the precursor to a harmful incident with serious consequences because of inadequacies in internal processes. The management of a company should mitigate such inadequacies promptly to avoid future disasters. Near miss analysis is a cornerstone of the operational risk management framework in financial institutions. Therefore, near misses represent a primary information source for analyzing the operational risk exposure of the company, since they can reveal gaps in the control environment. The model proposed in this paper aims to identify the most dangerous events that could happen in a financial institution using near miss data collection. The output of the model is a near miss ranking, in decreasing order of significance in terms of possible damage, which supports the management in prioritizing mitigation actions, in line with the principles of parsimony and efficiency.

1 Introduction

Ex post incident analysis is a widespread practice in various industries to increase plant security and to improve the health and safety of workers. An analysis of this kind is incomplete if its scope is limited to past incidents with losses. Near misses, on the other hand, permit a forward-looking analysis that could help risk management to perceive the riskiness of a business process. As mentioned in Gnoni and Saleh (2017), a company can prevent operational risk events by managing near misses in a timely and proper manner.

The notion of a near miss is widely known in the chemical industry, health sector, aeronautics and the financial markets. A near miss is an unexpected event without material consequences that could, however, have caused damage to people, corporate and environmental assets. Chapelle (2018) highlights that “a near miss is a loss avoided by sheer luck or due to accidental prevention outside the normal course of controls”. In the banking industry, according to Cruz et al (2014), a rapidly and fully recovered loss is considered a near miss.

Near misses can indicate the existence of a breakdown in the security management system before a disaster happens. Thus, the operational risk management should develop and maintain a database to collect operational risk data, identifying a set of focal points within the organization to carry out the data entry. Loss data collection requires training initiatives to improve the near miss awareness of the people involved in collecting operational risk data for the company.

To foster a strong security culture, we must overcome the belief that near misses are proof of resilience. According to Sepeda (2010), the management must not delight in an escaped danger. The management can improve its risk awareness only by looking at near misses as a symptom of vulnerability and as a possible prelude to a disaster. Further, in the absence of a near miss analysis process, the management risks losing a clear view of the inadequacies in their internal processes. Dee et al (2013) state that an effective analysis of near misses needs an objective assessment to identify the most dangerous operational risk events (ie, low-frequency high-impact events) against which it is necessary to implement proper mitigation actions. According to Basel Committee on Banking Supervision (2004), operational risk is defined as “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk”. In addition, Basel Committee on Banking Supervision (2011) defines near misses as operational risk events that do not lead to a loss. Note that a near miss incurred by a financial institution could be considered as an operational loss for another financial institution (and vice versa). In this case, company data can be supplemented with external data that may include loss events not captured in a company’s internal data set. However, note that, according to Embrechts and Hofert (2011), external data may need to be adjusted or scaled before use.

Often the so-called direct costs associated with an operational risk event are much smaller than the so-called sunk costs. The “safety iceberg” model (Bird and Germain 1996) states that the incidents with damage are the tip of an iceberg, while the near misses constitute the submerged part of the iceberg. Thus, a financial institution should also mitigate what is not clearly visible, to reduce the number of incidents with damage and, consequently, the direct costs.

The aim of this paper is to provide a methodology to support the operational risk management of a financial institution in identifying the most critical near misses. The model proposes a semiquantitative approach to draw up a ranking of the near misses that can cause significant damage to a company. The output of the model is a scalar (between 0 and 1, where 1 indicates the highest riskiness), which enables a financial institution to rank the near misses in decreasing order of significance with respect to the potential damage that they can cause. As a result, the model uses the information (eg, event type, risk factors, business line) related to the near misses to support the financial institution’s management in prioritizing mitigation actions to protect the most exposed internal processes. Note that a thorough loss data collection activity can represent a forward-looking stress test of the model, because it measures the effectiveness of the mitigation actions.

The paper is organized as follows. Section 2 provides a short literature review of the near miss management systems in various industries. Section 3 presents the model to prioritize the near misses, focusing on the elements that characterize their riskiness. Section 4 provides a case study that illustrates in detail the use of the model. The conclusions are presented in Section 5.

2 Near misses: an effective organizational remedy against disasters or an ideological standard?

Several definitions of near miss rely on the concept of an escaped incident. A near miss, as claimed by Jones et al (1999), is a situation that could have evolved into an actual incident had the sequence of risk factors not been interrupted. An alternative definition by Philley et al (2003) says that near misses are those events that, had the circumstances been slightly different, could have caused operational problems or damage to people and/or other assets. There are several examples of near misses in many sectors. However, most of the literature about near misses refers to health and safety in the workplace (see, for example, Oktem et al 2013). In the petrochemical industry, as suggested by Phimister et al (2003), a near miss is an anomalous event that has the potential to produce a loss, but does not.

The definition of a near miss is unclear in some circumstances. For instance, as claimed by Kleindorfer et al (2012), a category 4 hurricane that passes close to a region with a high population density but harms only an uninhabited neighboring area is a near miss for the government of the populated region. Subsequently, the government should analyze the potential severity of such hurricanes and prepare the necessary precautions to minimize the damage of a future analogous event. For air traffic control companies, a near miss is an incident in which two aircraft fly close enough to trigger an alarm, but not close enough to cause damage.

As shown in Dillon et al (2016), the US National Aeronautics and Space Administration (NASA), aiming to avoid future incidents with dramatic consequences (such as the 1986 Space Shuttle Challenger disaster), commissioned a group of researchers at Brigham Young University to study the near misses (in-flight anomalies) that occurred during unmanned flights. The analysis of the malfunctions recorded during unmanned flights from 1989 to 2010 shows that, when the project leaders highlighted the importance of security, NASA treated near misses as quasi incidents, not as successes. This empirical evidence proves that when the management of a company cares about security, the number of near misses collected will increase. Moreover, the results of this study can be extended from the aerospace industry to other sectors in which security is crucial and, further, to the financial sector, in which customer trust is the most important asset to defend.

Note that often the occurrence of a disaster event is preceded by a series of less significant accidents and by a further series of accidents without consequences. Oktem and Meel (2008) provide the following list of major incidents preceded by a series of near misses.

  (i) The explosion of the Space Shuttle Challenger seventy-three seconds after liftoff due to the breakdown of an O-ring, despite O-ring erosion being present as early as the second space shuttle mission.

  (ii) The explosion of an oil refinery plant of the Hindustan Petroleum Corp in India in 1997, preceded by the corrosion of some pipes that produced fuel leaks.

  (iii) The Paddington rail crash in 1999, preceded by eight dangerous “signals passed at danger” situations in the preceding six years.

The banking industry exhibits an analogous pattern. Indeed, some of the greatest losses have been preceded by signals that did not always cause a loss. According to Muermann and Oktem (2002), some risk managers in financial institutions treat near misses as actual losses and, thus, define a near miss as an event, a sequence of events or a series of unusual events from which lessons can be learned to improve the operations of a company. Muermann and Oktem also claim that these signals could even provide profits, as with Barings Bank before its bankruptcy. Leaver et al (2018) provide a set of potential near misses (eg, errors in the data entry of deal parameters, bugs in the information and communications technology (ICT) systems) in financial trading.

Tinsley et al (2012) interpret near misses from two perspectives: on the one hand, as a sign of a weakness in the internal processes of a company (ie, high sensitivity of the management to risk); and on the other, as proof of the resilience of the company in spite of the errors and of the setbacks that caused the near miss (ie, low sensitivity of the management to risk).

The second perspective considers that a near miss does not generally affect the success of a project, even though it can be the precursor to a more serious problem. Behavioral biases affect the precise assessment of risk by inducing people to consider near misses as successes, and thus to assume that the degree of risk is lower than the original estimation. More precisely, these biases lead operational risk managers to fail to identify near misses. Further, they influence the conduct of people that tend to ignore the available information and, consequently, to exclude the existence of a threat or to consider mitigation unnecessary.

Several risk management frameworks assume that a company can improve entire business processes just by mitigating the root causes of the events that resulted in an actual loss. However, a decreasing number of accidents with a loss and/or damage, although a great achievement, may lead to complacency. In fact, underestimation or misreporting of dangerous accidents may lead to a biased view of the overall risk exposure of the company.

According to Kleindorfer et al (2012), behavioral biases put the management of a company in the so-called comfort zone, which manifests itself in the forms of positive illusions, unsafe conditions and unobserved problems, namely lack of awareness, ignorance and complacency. The comfort zone is where managers, employees and customers have the impression that everything is smooth sailing. Note that such ignorance and lack of awareness do not allow these companies to understand their real problems.

One consequence of behavioral biases for a company is the normalization of deviance from the optimal situation, which results in a reduction in the ability to recognize anomalies over time. This ability returns only when a huge loss occurs. However, since the management of a company perceives a near miss as a success instead of an indicator of vulnerability, the management itself is unable to interpret near misses as signals of a future disaster.

Using rare incidents to assess the riskiness of an operational risk event could provide biased estimates of its probability of occurrence and of its impacts. For this reason, the management needs a framework to assess all the potential accidents (regardless of the significance of the damage they could cause) and to identify the top risks. (To identify the potential accidents that can affect a company, the operational risk management can also consider external data: for instance, a loss event that occurred at one company could be assessed by another company as a potential accident.) This framework should specify that the internal processes with the highest number of near misses that can cause the most serious damage require the most urgent intervention. Indeed, an in-depth analysis of the near misses and the effects surrounding their occurrence could help a company to implement more effective risk management.

The activity of collecting and analyzing near misses is an effective preventive measure and not an ideological standard. Near miss analysis allows the management of a company to improve their knowledge about the internal processes and production cycles. The aim of the analysis is both to reduce the number and amount of operational losses and to spread the risk culture throughout the company. For instance, according to Jones et al (1999), some industries (eg, the European chemical industry) have detected a clear improvement in their security by increasing the number of near misses collected and by honing their analysis. Unfortunately, Nivolianitou et al (2006) highlight that a lack of near miss data is the most important threat to the operational risk management process.

3 A semiquantitative model to optimize near miss data collection

Near misses, like damaging events, are risk indicators of possible weaknesses in the internal processes. Thus, the operational risk management can use the near miss data collection to assess the seriousness of potential operational risk events given the initial warnings, thereby achieving a better estimate of their riskiness and a more suitable representation of the real risk profile of the company.

Our model, inspired by the operational risk management framework of the Bank for International Settlements and designed for financial institutions, aims at prioritizing near misses, in decreasing order of significance, to identify the most urgent mitigation actions. Indeed, the main aim of operational risk management in a financial institution is to prevent operational risk events.

We propose a model that can be part of the existing advanced measurement approach (AMA). Indeed, our model aims to strengthen the current operational risk management system of a financial institution, by supporting the improvement of the internal processes and internal control system (note that our model does not aim to calculate the operational risk capital requirement). More precisely, we have the following.

  • For financial institutions that have adopted an approach other than the AMA, the model can be used to support the management in identifying the most critical near misses in the internal processes in order to mitigate them together with the risk events identified using other techniques (eg, risk self-assessment). Indeed, the model provides a risk ranking of the near misses (in decreasing order of significance) that adapts well to the management’s need to focus on the mitigation of major risks, due to budget constraints.

  • For financial institutions that have adopted the AMA, the model can be integrated into the broader operational risk management framework of the institution. Also, in this case, together with the main techniques of operational risk management (loss data collection, key risk indicators, risk self-assessment, etc), the model contributes toward identifying the most urgent mitigating actions.

The risk events that the operational risk management function must consider in order to run the model are those of a single year. Indeed, this is the time interval over which many financial institutions choose to identify the major operational risk events and to define their mitigating actions. The operational risk management function can run the model, for instance, at the beginning of a new year. Further, by running the model over subsequent years, the operational risk management can implement a backtesting procedure for the mitigating actions against near misses implemented in the previous years. Indeed, generally a financial institution expects a sharp decrease in the number of events (both losses and near misses) against which it has implemented a mitigating action.

The output of the model is the risk index ($I_R$), which expresses the overall riskiness of each near miss:

$$I_R = w_1 I_{NM} + w_2 I_P + w_3 I_{CL},$$

where $I_R \in [0,1]$, in which the value 1 represents the maximum riskiness of the risk index, $I_{NM}$ is the near miss index, $I_P$ is the process index, $I_{CL}$ is the cluster index and $w_1, w_2, w_3 \in [0,1]$ are arbitrary coefficients established by the operational risk management that indicate the importance of the near miss index, the process index and the cluster index, respectively. (The coefficients $w_1$, $w_2$ and $w_3$ depend both on the experience and the sensitivity of the operational risk management and on the context of the company; it is assumed that, by construction, $w_1 + w_2 + w_3 = 1$ to respect the condition $I_R \in [0,1]$.)

Note that the values of all the variables listed in the paper are ordinal numbers. Thus, a near miss with $I_R = 0.8$ is not twice as risky as another near miss with $I_R = 0.4$ (the same is true for all the indexes).
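
As a purely illustrative aid (not part of the authors' specification), the aggregation step can be sketched in a few lines of Python; the function name and the default weights, which take the values used in the case study of Section 4, are our own choices.

```python
def risk_index(i_nm: float, i_p: float, i_cl: float,
               w1: float = 0.4, w2: float = 0.4, w3: float = 0.2) -> float:
    """Combine the near miss, process and cluster indexes into the risk index I_R.

    All inputs are assumed to lie in [0, 1]; the weights must sum to 1 so that
    the output also lies in [0, 1].
    """
    if abs((w1 + w2 + w3) - 1.0) > 1e-9:
        raise ValueError("the weights w1, w2, w3 must sum to 1")
    return w1 * i_nm + w2 * i_p + w3 * i_cl

# Example: a near miss with I_NM = 0.60 that hits a critical process (I_P = 1)
# and belongs to the top-ranked cluster (I_CL = 1) yields I_R = 0.84.
print(round(risk_index(0.60, 1.0, 1.0), 2))  # 0.84
```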

In the following subsections we show how to calculate the elements of the risk index.

3.1 Near miss index

The near miss index ($I_{NM}$) expresses the significance of each near miss as a function of the following elements:

  • the severity index ($I_S$), which concerns the potential impacts of the near misses;

  • the risk factor index ($I_{RF}$), which considers the number of risk factors of the near misses;

  • the frequency index ($I_F$), which represents the frequency of occurrence of the near misses;

  • the control index ($I_C$), which pertains to the adequacy of the existing controls;

  • the reputational index ($I_{REP}$), which regards the potential reputational impact; and

  • the business line index ($I_{BL}$), which pertains to the business line in which the near misses happen.

The near miss index is an arithmetic mean of these elements and it ranges from 0 to 1 (indeed, by construction, each of its elements ranges from 0 to 1). This index, inspired by the work of Gnoni and Lettera (2012), is calculated as follows:

$$I_{NM} = \frac{I_S + I_{RF} + I_F + I_C + I_{REP} + I_{BL}}{6}.$$

The closer the near miss index is to 1, the greater the riskiness of a near miss. The following paragraphs show how to calculate the elements of the near miss index.

3.1.1 Severity index

Table 1: Values of the impacts.

D1 (economic impact): potential consequences in terms of lower revenues or higher costs, including compliance impact (eg, sanctions).
  Rating 0: no impact.
  Rating 1: low; losses below a given threshold (eg, percentage of the net assets).
  Rating 2: high; losses above a given threshold (eg, percentage of the net assets).

D2 (managerial impact): potential delays or reprocessing that do not respect the deadlines for achieving the goals of the company.
  Rating 0: no impact.
  Rating 1: low; the incident is handled by middle management.
  Rating 2: high; the incident is handled by senior management.

D3 (legal impact): potential complaints, injunctions, legal proceedings (civil lawsuits, class actions and accusatorial proceedings).
  Rating 0: no impact.
  Rating 1: low; complaints, injunctions, civil lawsuits.
  Rating 2: high; class actions, accusatorial proceedings.

The severity index ($I_S$) concerns the potential impacts that would arise if the near miss resulted in losses and other damage. The severity index is calculated as follows:

$$I_S = \frac{\sum_{i=1}^{n} D_i}{\sum_{i=1}^{n} \max D_i},$$

where $D_i$ is the rating of the $i$th impact and $n$ is the number of impact typologies.

The severity index considers three possible impact typologies (economic, managerial and legal). Concerning the economic impact, a near miss that could cause a high potential loss is more significant than another near miss that could cause a smaller potential loss. Considering that each near miss could simultaneously have one or more impacts, it is assumed that these impacts are independent of one another (the occurrence of an impact does not necessarily imply or preclude the occurrence of the others). Further, it is also assumed that the impacts have the same importance, even though the importance of the impacts should reflect the risk appetite expressed by the board (for instance, the most significant impact in a small company could be the economic one, while in a company with a complex governance the managerial impact could be the most important). Table 1 provides the rating scale for these impacts.
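
A minimal sketch of the calculation, assuming the three impact typologies of Table 1 and its 0–2 rating scale (the function name is ours, for illustration only), is the following.

```python
def severity_index(ratings: list[int], max_rating: int = 2) -> float:
    """I_S = sum of the impact ratings / sum of the maximum ratings (Table 1 scale)."""
    return sum(ratings) / (max_rating * len(ratings))

# A near miss with no economic impact (0), a low managerial impact (1) and
# no legal impact (0) gives I_S = 1/6, approximately 0.17.
print(round(severity_index([0, 1, 0]), 2))  # 0.17
```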

3.1.2 Risk factor index

A near miss could be due to one or more causes (so-called risk factors). The risk factor index ($I_{RF}$) expresses the riskiness of each near miss as a function of the number of its risk factors. It is calculated as follows:

$$I_{RF} = \frac{\text{number of risk factors}}{4},$$

where the denominator is equal to 4 because of the number of risk factor typologies considered in the definition of operational risk from the Bank for International Settlements: “Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events” (Basel Committee on Banking Supervision 2004). However, operational risk management could consider subtypologies of these four risk factors to improve the granularity of the analysis.

It is assumed that each of the risk factors has identical importance, because they can potentially cause both significant and trivial events. Indeed, the human error (people risk) in clicking on an email attachment or hyperlink could open a simple advertisement or could lead to a data breach (eg, phishing). Further, the higher the number of risk factor typologies linked to a given near miss, the greater the number of mitigation actions needed to mitigate the risk exposure.
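
A one-line sketch of the index, assuming the four Basel risk factor typologies (the function name is ours, for illustration only), follows.

```python
def risk_factor_index(risk_factors: set[str], n_typologies: int = 4) -> float:
    """I_RF = number of distinct risk factor typologies attached to the near miss / 4."""
    return len(risk_factors) / n_typologies

# A near miss driven by a people error and a process weakness: I_RF = 2/4 = 0.5.
print(risk_factor_index({"people", "process"}))  # 0.5
```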

3.1.3 Frequency index

The frequency index ($I_F$) pertains to the number of events of a given near miss over a certain time horizon. It is calculated as follows:

$$I_F = \frac{F}{\max F},$$

where $F$ is a variable, taking values between 1 and 4, that reflects the number of times the near miss has been repeated.

A high frequency of near misses in a process could reveal its vulnerability in terms of the inadequacy of the existing controls. It is assumed that the reference time horizon is equal to one year; Table 2 gives the values of the variable F considering a time horizon of one year.

Table 2: Possible values of the frequency variable, F.
Description Value
Near miss occurred more than five times in the last year 4
Near miss occurred 4–5 times in the last year 3
Near miss occurred 2–3 times in the last year 2
Near miss occurred once in the last year 1
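
A possible encoding of the mapping in Table 2, sketched in Python for illustration (the function names are ours), is the following.

```python
def frequency_variable(occurrences: int) -> int:
    """Map the number of occurrences in the last year to the variable F (Table 2)."""
    if occurrences > 5:
        return 4
    if occurrences >= 4:
        return 3
    if occurrences >= 2:
        return 2
    return 1

def frequency_index(occurrences: int, f_max: int = 4) -> float:
    """I_F = F / max F."""
    return frequency_variable(occurrences) / f_max

# A near miss repeated three times in the last year: F = 2, so I_F = 2/4 = 0.5.
print(frequency_index(3))  # 0.5
```
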
Table 3: Possible values of the controls adequacy variable, C.

  Inadequate controls (the control is absent; the control is not auditable): 3
  Partially inadequate controls (the control does not meet the minimum requested standards; the control is an auditable operational practice, ie, it lacks formalization): 2
  Partially adequate controls (the control meets the minimum requested standards): 1
  Adequate controls (the control guarantees adequate coverage against operational risk events): 0

3.1.4 Control index

The control index ($I_C$) expresses the riskiness of each near miss as a function of the adequacy of the controls. This index is calculated as follows:

$$I_C = \frac{C}{\max C},$$

where C is a variable that assesses the adequacy of the design of the existing controls (formalization, segregation of duty, timing, auditability and level of automation). The adequacy of the controls is determined according to the control assessment methodology of the company. The variable C takes the values listed in Table 3.
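
As an illustration, the mapping of Table 3 and the resulting index can be sketched as follows (the dictionary and function names are ours, not part of the model specification).

```python
# Illustrative mapping of the control-adequacy judgement to the variable C (Table 3).
CONTROL_ADEQUACY = {
    "inadequate": 3,
    "partially inadequate": 2,
    "partially adequate": 1,
    "adequate": 0,
}

def control_index(adequacy: str, c_max: int = 3) -> float:
    """I_C = C / max C, where C reflects the design adequacy of the existing controls."""
    return CONTROL_ADEQUACY[adequacy] / c_max

# Partially adequate controls: C = 1, so I_C = 1/3, approximately 0.33.
print(round(control_index("partially adequate"), 2))  # 0.33
```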

3.1.5 Reputational index

Reputational risk is excluded from the definition of operational risk. However, operational risk events can cause serious reputational threats (eg, damage to the image of the financial institution). The reputational index ($I_{REP}$) denotes the riskiness of each near miss as a function of the reputational impact on the image of the company. It is calculated as follows:

$$I_{REP} = \frac{R}{\max R},$$

where R is a variable pertaining to the potential reputational impact and takes the values listed in Table 4.

Table 4: Possible values of the reputational impact variable, R.

  Presence of major reputational impacts (significant and enduring deterioration of the stakeholders’ trust, eg, of the competent national authority, shareholders, investors or customers; event resolution requires funds allocation and dedicated staff): 2
  Presence of minor reputational impacts (limited and easily recoverable deterioration of trust that concerns a limited set of people; problem resolution does not require fund allocation and dedicated staff): 1
  Absence of reputational impacts: 0
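
For illustration only, the mapping in Table 4 and the index can be sketched as follows (the dictionary and function names are ours).

```python
# Illustrative mapping of the reputational assessment to the variable R (Table 4).
REPUTATIONAL_IMPACT = {"major": 2, "minor": 1, "none": 0}

def reputational_index(impact: str, r_max: int = 2) -> float:
    """I_REP = R / max R."""
    return REPUTATIONAL_IMPACT[impact] / r_max

# A minor, easily recoverable deterioration of trust: R = 1, so I_REP = 0.5.
print(reputational_index("minor"))  # 0.5
```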

3.1.6 Business line index

The business line index ($I_{BL}$) expresses the riskiness of each near miss as a function of the business line on which it has an impact. Basel Committee on Banking Supervision (2004) divides the activities of a financial institution into eight business lines, each of which is assigned a so-called beta factor that contributes to the total capital charge. Table 5 summarizes the beta factors of the business lines and provides the values of the variable BL, which denotes the riskiness of the business lines.

Table 5: Possible values of the business line variable.

  Business line            Beta factor (%)   BL value
  Corporate finance        18                3
  Trading and sales        18                3
  Payment and settlement   18                3
  Commercial banking       15                2
  Agency services          15                2
  Retail banking           12                1
  Asset management         12                1
  Retail brokerage         12                1

The business line index ($I_{BL}$) is calculated as follows:

$$I_{BL} = \frac{BL}{\max BL}.$$
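
An illustrative encoding of Table 5 is given below (the dictionary and function names are ours, not part of the model specification).

```python
# BL values per Basel business line, following Table 5.
BUSINESS_LINE_VALUE = {
    "corporate finance": 3, "trading and sales": 3, "payment and settlement": 3,
    "commercial banking": 2, "agency services": 2,
    "retail banking": 1, "asset management": 1, "retail brokerage": 1,
}

def business_line_index(business_line: str, bl_max: int = 3) -> float:
    """I_BL = BL / max BL."""
    return BUSINESS_LINE_VALUE[business_line.lower()] / bl_max

# A near miss impacting agency services: BL = 2, so I_BL = 2/3, approximately 0.67.
print(round(business_line_index("Agency services"), 2))  # 0.67
```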

3.2 Process index

An internal process is a set of sequential tasks to achieve a certain goal. The set of internal processes constitutes the internal process model. The process index ($I_P$) is a variable that concerns the internal process in which a near miss occurs. More precisely, it is a Boolean variable of the following form:

$$I_P = \begin{cases} 1 & \text{if the near miss has an impact on a core or critical internal process,} \\ 0 & \text{otherwise.} \end{cases}$$

A financial institution identifies the core and critical internal processes within its internal process model. Core processes concern the business of the firm: more precisely, the core processes are those essential for realizing the company’s purpose. Critical internal processes are those that are periodically defined as systemic or critical in the business continuity plan.
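
A minimal sketch of the Boolean process index, assuming that core and critical flags are available from the internal process model (the function name is ours), follows.

```python
def process_index(is_core: bool, is_critical: bool) -> int:
    """I_P = 1 if the near miss impacts a core or critical internal process, 0 otherwise."""
    return 1 if (is_core or is_critical) else 0

# A process that is critical but not core (eg, information technology in Section 4): I_P = 1.
print(process_index(is_core=False, is_critical=True))  # 1
```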

3.3 Cluster index

The cluster index ($I_{CL}$) uses the potential operational risk events (identified during the risk self-assessment activity on the internal processes) as a corrective factor to corroborate the prioritization of the near misses; the index therefore presupposes the existence of an operational risk self-assessment process. It incorporates both the subjective assessment of the process owners during risk self-assessment and the expert judgment of the operational risk management. To calculate the cluster index, the operational risk management function should first pick out all the potential operational risks and group them into homogeneous clusters. A homogeneous cluster is a set of events with similar characteristics concerning both the event type and the operational context in which they have an impact. For instance, all the operational risk events related to ICT security can constitute a cluster (eg, malware, distributed denial of service, trojans). Another example is the set of all the operational risk events concerning health and safety in the workplace relating to the safety process. Grouping operational risks into clusters is useful to study a phenomenon that has the same root causes but that can present potential loss events for different event types. The identification of this phenomenon helps to improve the granularity of the analysis. Further, clusters facilitate communication with the board of directors of the financial institution, providing simpler and clearer terms that remain easily attributable to the loss event types of Basel II.

Once the process owners have self-assessed the potential operational risk events, the operational risk management should rank (based on expert judgment) the clusters in decreasing order of significance. Given all the identified clusters, the cluster index is calculated as the ratio of the cluster variable (CL) to the number of clusters (n). Formally,

$$I_{CL} = \frac{CL}{n},$$

where $I_{CL} \in [0,1]$, and the cluster variable, which provides a value for each cluster as a function of the respective ranking, is calculated as follows:

$$CL = n - \text{ranking} + 1.$$
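
A short sketch of the calculation, assuming the clusters have already been ranked by expert judgment (the function name is ours, for illustration only), follows.

```python
def cluster_index(ranking: int, n_clusters: int) -> float:
    """I_CL = CL / n, with CL = n - ranking + 1 (ranking 1 = most significant cluster)."""
    cl = n_clusters - ranking + 1
    return cl / n_clusters

# With five clusters, the top-ranked cluster gets I_CL = 5/5 = 1.0 and the
# bottom-ranked cluster gets I_CL = 1/5 = 0.2.
print(cluster_index(ranking=1, n_clusters=5))  # 1.0
```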

4 Case study

This section provides a case study to better explain how the model to prioritize near misses works in practice. For simplicity and confidentiality, the data in this case study is artificial (however, the data simulates practical cases).

This example considers: four internal processes (see Table 6), of which two are critical and one is core; five clusters (see Table 7); and $w_1 = w_2 = 0.4$ and $w_3 = 0.2$.

Table 6: Internal processes.
Internal process Core Critical
Loan granting Yes Yes
Information technology No Yes
Risk management No No
Communication No No
Table 7: Clusters.
Cluster CL value
ICT security 5
ICT systems 4
Loan management 3
External communication 2
Risk measurement 1

The values of the severity index, frequency index, control index and reputational index are those listed in Tables 1–4, respectively. Let us consider a near miss database containing the events in Tables 8–10, which show three different parts of the database. (Operational risk event databases generally contain several columns; we present a simplified version for illustrative purposes.)

Table 8: Near misses database (part 1).

  ID  Description                                                           Internal process        Cluster                 Year of occurrence
  01  Errors in the data in the investor presentation                       Communication           External communication  2018
  02  Ransomware                                                            Information technology  ICT security            2018
  03  Unavailability of the front-office ICT system                         Information technology  ICT systems             2018
  04  Ransomware                                                            Information technology  ICT security            2018
  05  Data entry errors regarding guarantees                                Loan granting           Loan management         2018
  06  Data entry errors regarding risk metrics                              Risk management         Risk measurement        2018
  07  Ransomware                                                            Information technology  ICT security            2018
  08  Disavowed signature and/or signature of people without proper power   Loan granting           Loan management         2018
  09  Lack of preservation of physical goods given as collateral            Loan granting           Loan management         2018
  10  Unavailability of the front-office ICT system                         Information technology  ICT systems             2018
Table 9: Near misses database (part 2).

  ID  Business line           Economic impact  Managerial impact  Legal impact  Reputational impacts on stakeholders
  01  Agency services         N/A              Low                N/A           Major
  02  Corporate finance       Low              Low                Low           Minor
  03  Trading and sales       High             High               High          Major
  04  Corporate finance       Low              Low                Low           Minor
  05  Retail banking          High             Low                N/A           None
  06  Commercial banking      N/A              Low                N/A           None
  07  Trading and sales       Low              Low                Low           Minor
  08  Commercial banking      High             High               High          Minor
  09  Retail banking          Low              High               Low           None
  10  Payment and settlement  High             High               High          Major
Table 10: Near misses database (part 3).

  ID  Risk factor #1  Risk factor #2   Risk factor #3   Adequacy of controls
  01  People          Process                           Inadequate
  02  ICT             External events  People           Partially adequate
  03  ICT                                               Partially inadequate
  04  ICT             External events  People           Partially adequate
  05  People          Process          External events  Adequate
  06  People                                            Partially inadequate
  07  ICT             External events  People           Partially adequate
  08  Process                                           Adequate
  09  Process         External events                   Inadequate
  10  ICT                                               Partially inadequate

In Table 11, we report the results of our model applied to the near misses in the database.

Table 11: Application of the model.

  ID  I_S   I_RF  I_F   I_C   I_REP  I_BL  I_NM  I_P   I_CL  I_R
  01  0.17  0.50  0.25  1.00  1.00   0.67  0.60  0.00  0.40  0.32
  02  0.50  0.75  0.50  0.33  0.50   1.00  0.60  1.00  1.00  0.84
  03  1.00  0.25  0.50  0.67  1.00   1.00  0.74  1.00  0.60  0.81
  04  0.50  0.75  0.50  0.33  0.50   1.00  0.60  1.00  1.00  0.84
  05  0.50  0.75  0.25  0.00  0.00   0.33  0.31  1.00  0.80  0.68
  06  0.17  0.25  0.25  0.67  0.00   0.67  0.33  0.00  0.20  0.17
  07  0.50  0.75  0.50  0.33  0.50   1.00  0.60  1.00  1.00  0.84
  08  1.00  0.25  0.25  0.00  0.50   0.67  0.44  1.00  0.80  0.74
  09  0.67  0.50  0.25  1.00  0.00   0.33  0.46  1.00  0.80  0.74
  10  1.00  0.25  0.50  0.67  1.00   1.00  0.74  1.00  0.60  0.81
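
To make the figures in Table 11 easier to reproduce, the following stand-alone sketch (our illustrative code, not the authors') recomputes row 02, the ransomware near miss, from the ratings in Tables 1–5 and 7–10.

```python
# Reproducing row 02 of Table 11 (ransomware) with the case-study settings.
w1, w2, w3 = 0.4, 0.4, 0.2

i_s   = (1 + 1 + 1) / 6   # low economic, managerial and legal impacts (Table 1)
i_rf  = 3 / 4             # ICT, external events and people risk factors
i_f   = 2 / 4             # the event occurred three times in 2018, so F = 2 (Table 2)
i_c   = 1 / 3             # partially adequate controls, so C = 1 (Table 3)
i_rep = 1 / 2             # minor reputational impact, so R = 1 (Table 4)
i_bl  = 3 / 3             # corporate finance, so BL = 3 (Table 5)

i_nm = (i_s + i_rf + i_f + i_c + i_rep + i_bl) / 6  # near miss index
i_p  = 1                                            # information technology is a critical process
i_cl = 5 / 5                                        # ICT security is the top-ranked cluster (Table 7)

i_r = w1 * i_nm + w2 * i_p + w3 * i_cl
print(round(i_nm, 2), round(i_r, 2))  # 0.6 0.84, matching Table 11
```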

Given the values of the risk index $I_R$ reported in Table 11, we can rank the typologies of near misses that occurred; the result is shown in Table 12. This ranking represents useful information for the management of a company to identify the most urgent operational risk events against which to implement proper mitigation actions. Table 12 shows that the most critical near miss typology is ransomware, with a risk index equal to 0.84.

Table 12: Ranking of the near miss typologies.

  Ranking  Near miss typology                                                    I_R
  #1       Ransomware                                                            0.84
  #2       Unavailability of the front-office ICT system                         0.81
  #3       Lack of preservation of physical goods given as collateral            0.75
  #4       Disavowed signature and/or signature of people without proper power   0.72
  #5       Data entry errors regarding guarantees                                0.68
  #6       Errors in the investor presentation                                   0.31
  #7       Data entry errors regarding risk metrics                              0.15

5 Conclusions

The management of a company should keep under consideration any small flaw, anomaly or breakdown, given that it can be the precursor to a serious accident. Indeed, a near miss is an event that indicates the vulnerabilities of a system. Further, according to Morrison et al (2011), the assessment and management of near misses are two necessary components of the security management system of a company.

The near miss incident management framework is effective when it provides a systemic vision instead of focusing on human error. Indeed, the focus is on the risk factors that caused the event (and not on the employee who made the error). The main idea is to concentrate on the organizational context of the company rather than the individual performance of the employees, with the aim of redesigning internal processes and promoting a risk culture. The management of a financial institution should encourage employees to analyze each event, regardless of its effects, because even a trivial near miss can reveal a vulnerability. Thus, by mitigating the root causes of the near misses, a financial institution can prevent damaging events.

When a near miss or an incident with damage occurs, the operational risk management of a financial institution should understand which controls failed and why. The aim is to strengthen the security of a system and to prevent future errors. Pidgeon and O’Leary (2000) claim that the importance of near miss analysis is underrated in many industries because of the difficulty of detection. However, the identification of a large number of near misses could be a sign of the risk awareness of the employees of a company. The main idea is to avoid the behavior of a drunkard who searches for their lost keys under a street light just because it is the only illuminated place (Fitoussi 2013). Risk managers who analyze only the root causes of operational risk events that have already happened behave like the drunkard who does not search for their keys in the darkness. These risk managers should also consider near misses, which could hide invisible pitfalls. A manager who is willingly blind to the signals of a threat risks crystallizing the status quo, driving the company to consider near misses as closed issues. In this case, near miss management becomes an ideological standard instead of an organizational remedy.

Identifying near misses is the first step toward realizing the proactive management of risk factors. For instance, consider two identical plants. In the first plant the employees identify several near misses, while in the second plant the employees identify none. It is reasonable to consider the first plant to be safer than the second because, given that the plants are identical, they should produce the same number of near misses. Thus, only the employees of the first plant, by analyzing the risk factors that generate the events, contribute to improving its safety.

This paper presents a theoretical model and a practical application that extract value from the near miss data collection. Indeed, the model assesses all the available characteristics of the near misses (eg, potential impacts, frequency, risk factors, adequacy of controls). Further, it can support the decision making of a company concerning the mitigation of its operational risk exposure. One of the main issues for management is to secure the internal processes of the company with a limited budget. The output of the model is a near miss ranking that can help management to identify the most dangerous operational risk events in order to plan the implementation of mitigation actions, starting with the most urgent, in line with the principles of parsimony and efficiency.

Further studies are underway to test the model on several data sets. The model presented in this paper does not need a huge amount of data, because its main aim is to prioritize the near misses recorded in the loss data collection activity of a financial institution. A thorough loss data collection activity (ie, compliant with Basel Committee guidelines) can provide a sufficient number of near misses to implement the model in a financial institution. However, even in the absence of historical series of loss data, a financial institution can implement the model using consortium data.

Reality has the disconcerting habit of confronting us with the unexpected, for which we are not prepared (Arendt 1972). The model presented in this paper could contribute to the management of the unexpected.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

References

  • Arendt, H. (1972). Crises of the Republic: Lying in Politics, Civil Disobedience, On Violence, Thoughts on Politics and Revolution. Houghton Mifflin Harcourt, Boston, MA.
  • Basel Committee on Banking Supervision (2004). International convergence of capital measurement and capital standards: a revised framework. Bank for International Settlements, Basel. URL: http://www.bis.org/publ/bcbs107.pdf.
  • Basel Committee on Banking Supervision (2011). Operational risk: supervisory guidelines for the advanced measurement approaches. Bank for International Settlements, Basel. URL: http://www.bis.org/publ/bcbs196.pdf.
  • Bird, F., and Germain, G. (1996). Loss Control Management: Practical Loss Control Leadership, revised edn. International Loss Control Institute.
  • Chapelle, A. (2018). Operational Risk Management: Best Practices in the Financial Services Industry. Wiley (https://doi.org/10.1002/9781119548997).
  • Cruz, M. G., Peters, G. W., and Shevchenko, P. V. (2014). Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk. Wiley (https://doi.org/10.1002/9781118573013).
  • Dee, S., Cox, B., and Ogle, R. (2013). Using near misses to improve risk management decisions. Process Safety Progress 32(4), 322–327 (https://doi.org/10.1002/prs.11632).
  • Dillon, R., Tinsley, C., Madsen, P., and Rogers, E. (2016). Organizational correctives for improving recognition of near-miss events. Journal of Management 42(3), 671–697 (https://doi.org/10.1177/0149206313498905).
  • Embrechts, P., and Hofert, M. (2011). Practices and issues in operational risk modeling under Basel II. Lithuanian Mathematical Journal 51(2), 180–193 (https://doi.org/10.1007/s10986-011-9118-4).
  • Fitoussi, J.-P. (2013). Le Théorème du Lampadaire. Les Liens Qui Liberent Editions, Paris.
  • Gnoni, M. G., and Lettera, G. (2012). Near-miss management systems: a methodological comparison. Journal of Loss Prevention in the Process Industries 25(3), 609–616 (https://doi.org/10.1016/j.jlp.2012.01.005).
  • Gnoni, M. G., and Saleh, J. (2017). Near-miss management systems and observability-in-depth: handling safety incidents and accident precursors in light of safety principles. Safety Science 91, 154–167 (https://doi.org/10.1016/j.ssci.2016.08.012).
  • Jones, S., Kirchsteiger, C., and Bjierke, W. (1999). The importance of near miss reporting to further improve safety performance. Journal of Loss Prevention in the Process Industries 12(12), 59–67 (https://doi.org/10.1016/S0950-4230(98)00038-2).
  • Kleindorfer, P., Oktem, U., Pariyani, A., and Seider, W. (2012). Assessment of catastrophe risk and potential losses in industry. Computers and Chemical Engineering 47, 85–96 (https://doi.org/10.1016/j.compchemeng.2012.06.033).
  • Leaver, M., Griffiths, A., and Reader, T. (2018). Near misses in financial trading: skills for capturing and averting error. Human Factors 60(5), 640–657 (https://doi.org/10.1177/0018720818769598).
  • Morrison, D. R., Fecke, M., and Martens, J. D. (2011). Migrating an incident reporting system to a CCPS process safety metrics model. Journal of Loss Prevention in the Process Industries 24(6), 819–826 (https://doi.org/10.1016/j.jlp.2011.06.008).
  • Muermann, A., and Oktem, U. (2002). The near-miss management of operational risk. Journal of Risk Finance 4(1), 25–36 (https://doi.org/10.1108/eb022951).
  • Nivolianitou, Z., Konstandinidou, M., Kiranoudis, C., and Markatos, N. (2006). Development of a database for accidents and incidents in the Greek petrochemical industry. Journal of Loss Prevention in the Process Industries 19, 630–638 (https://doi.org/10.1016/j.jlp.2006.03.004).
  • Oktem, U., and Meel, A. (2008). Near-miss management: a participative approach to improving system reliability. In Encyclopedia of Quantitative Risk Analysis and Assessment, Melnick, E. L., and Everitt, B. S. (eds), pp. 1154–1163. Wiley (https://doi.org/10.1002/9780470061596.risk0508).
  • Oktem, U. G., Seider, W. D., Soroush, M., and Pariyani, A. (2013). Improve process safety with near-miss analysis. Chemical Engineering Progress 109(5), 20–27.
  • Philley, J., Pearson, K., and Sepeda, A. (2003). Updated CCPS Investigation Guidelines book. Journal of Hazardous Materials 104, 137–147 (https://doi.org/10.1016/S0304-3894(03)00240-1).
  • Phimister, J., Oktem, U., Kleindorfer, P., and Kunreuther, H. (2003). Near-miss incident management in the chemical process industry. Risk Analysis 23(3), 445–459 (https://doi.org/10.1111/1539-6924.00326).
  • Pidgeon, N., and O’Leary, M. (2000). Man-made disasters: why technology and organizations (sometimes) fail. Safety Science 34, 15–30 (https://doi.org/10.1016/S0925-7535(00)00004-7).
  • Sepeda, A. L. (2010). Understanding process safety management. Chemical Engineering Progress 106(8), 26–33.
  • Tinsley, C., Dillon, R. L., and Cronin, M. (2012). How near-miss events amplify or attenuate risky decision making. Management Science 58(9), 1596–1613 (https://doi.org/10.1287/mnsc.1120.1517).
