The FRTB’s P&L attribution-based eligibility test: an alternative proposal

Spinaci, Benigno, Fraquelli and Montoro propose two alternatives to the P&L attribution test

The profit and loss (P&L) attribution test, a key part of being eligible to use internal models under Basel’s new market risk capital framework, is designed such that there is a high probability of internal models being erroneously rejected. Here, Marco Spinaci, Manuela Benigno, Andrea Fraquelli and Adolfo Montoro highlight the model risk embedded in the test and propose two alternatives

Several critiques have been advanced regarding the current profit-and-loss (P&L) attribution tests (Basel Committee on Banking Supervision 2016), according to the Fundamental Review of the Trading Book (FRTB). The tests are required for trading desks to be eligible for the internal model approach (IMA) and are currently based on two ratios, the ‘mean ratio’ M1 and the ‘variance ratio’ M2, which must be smaller than 10% and 20%, respectively. However, these hard limits (particularly the second one) are difficult to meet in practice. As a result, several proposals have been issued within the industry to assess the main shortcomings of the current definition, namely:

  • (Strictness) If the risk P&L is based on data that is not perfectly aligned with front-office data, many desks would repeatedly fail the variance ratio test.
  • (Instability) The test metrics proposed by regulators, particularly ‘variance ratio’, are extremely sensitive to outliers. Further, even when eligible, desks would often jump in and out of modellability, especially when using a monthly window.
  • (Asymmetry) To some extent, the current definition favours underestimation of risk.

In this note, two alternative proposals to the regulatory formula are analysed. We have endeavoured to separate actual methodology differences from the calibration of the thresholds for the ratios (the 10% and 20% above).

The proposed methodologies are advantageous relative to the regulators’ proposal in two significant ways: they do not tend to favour underestimation of risk over overestimation (generally being more symmetric), and they give more stable estimates over time. Note that the results are extremely sensitive to threshold choice, the current level of which excludes too many desks.

Interpretations of both monthly and yearly P&L vectors are investigated. With monthly vectors, the two quantities bounded by the mean and variance ratio thresholds, namely |M1| and M2, are, on average, higher and more volatile, due to the limited sample size. Therefore, with monthly vectors, one needs to significantly increase both the mean and variance ratio thresholds, and even then most desks would jump in and out of eligibility for the IMA (even without assuming changes in constituents). Since this effect is highly undesirable, we recommend departing from a monthly window, increasing the sample size and recalibrating the test thresholds accordingly.

Components and assumptions

The P&L attribution (PLA) test needs to be computed by comparing hypothetical P&L (HPL) and risk theoretical P&L (RTPL).

According to our interpretation of Basel Committee on Banking Supervision (2016), HPL is a measure of the mark-to-market/model value of a trading desk’s instruments derived from front-office pricing functions based on a full set of risk factors (and can therefore be considered the ‘benchmark’ measure). Such P&L must exclude any valuation adjustment except those accounted for on a daily basis (Basel Committee on Banking Supervision 2017).

RTPL is defined in two different ways in the FRTB:

  • In the glossary it is defined as ‘the daily desk level P&L that is predicted by the risk management model conditional on a realisation of all relevant factors that enter the model’.
  • In the body of the text it is presented as ‘the P&L that would be produced by the bank’s pricing model for the desk if they only included the risk factors used in the risk model’.

The difference in these definitions is fundamental and has severe implications for the test result. In this article, we adopt the second definition. We will elaborate on this choice later, as well as explain our understanding of the PLA scope.

Regarding notation, we reserve the term ‘unexplained P&L’ (UPL) for the simple difference UPL = RTPL - HPL. The test metrics currently proposed by regulators (Basel Committee on Banking Supervision 2016) are defined as:

  M1 = |mean(UPL)| / σ(HPL)   (also called the mean ratio)
  M2 = Var(UPL) / Var(HPL)   (also called the variance ratio)

Such metrics need to be computed on a monthly basis.1 Although not specified in the regulation, the current understanding is that the bank will need to use one year’s worth of data and, for each month, based on around 21 observations, derive M1 and M2, ending up with 12 sets of M1 and M2, respectively. Each month, banks must compare M1 and M2 against the prescribed maximum thresholds (currently 10% and 20%) and verify, for each desk, whether there is a breach. If the number of breaches over the last 12 months is less than or equal to three, and provided the desk satisfies its backtesting requirements, then the desk is eligible to be capitalised using the IMA. If not, it will be capitalised using the standardised approach.

1 Basel Committee on Banking Supervision (2016, page 89, 183(b)) states: ‘These ratios are calculated monthly and reported prior to the end of the following month’. This does not indicate they should be calculated on monthly vectors, but rather with monthly frequency.
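As a sketch, the monthly eligibility mechanics described above can be expressed as follows. This is a simplified illustration under our interpretation: the function names are our own, and non-overlapping 21-day windows are assumed here for brevity (the overlapping yearly interpretation is discussed below).

```python
import numpy as np

def pla_ratios(hpl, rtpl):
    """Mean ratio M1 and variance ratio M2 for one observation window."""
    upl = rtpl - hpl                           # unexplained P&L
    m1 = abs(upl.mean()) / hpl.std(ddof=1)
    m2 = upl.var(ddof=1) / hpl.var(ddof=1)
    return m1, m2

def desk_is_eligible(hpl, rtpl, window=21, months=12, t1=0.10, t2=0.20):
    """Count monthly breaches over 12 windows; at most 3 are tolerated."""
    breaches = 0
    for k in range(months):
        lo, hi = k * window, (k + 1) * window
        m1, m2 = pla_ratios(hpl[lo:hi], rtpl[lo:hi])
        if m1 > t1 or m2 > t2:
            breaches += 1
    return breaches <= 3                       # backtesting not modelled here
```

A desk whose RTPL tracks HPL closely passes, while one that systematically understates volatility breaches the variance ratio every month.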

We test a yearly sample size, assuming HPL and RTPL are 250 business days in length (yearly window). This follows the interpretation that the mean and variance ratios are computed monthly, using the previous year’s data (ie, the data overlaps). The alternative – they are computed monthly using the previous month’s data only (21 business days, without overlaps) – will also be tested and discussed.

In this article, we denote the Pearson correlation between HPL and RTPL by ρ = ρ(RTPL, HPL) (the shorter notation is used whenever this causes no confusion).

A further co-ordinate is used: x=σ(RTPL)/σ(HPL).

This number should generally be close to 1. In general, x>1 represents overestimation of the risk measures, ie, RTPL is more volatile than HPL, while x<1 represents underestimation, ie, RTPL is less volatile than HPL.

P&L attribution scope, rationale and limitations

Scope of PLA test.

The PLA is a type of analysis commonly used by traders/product controllers to attribute or explain daily fluctuations in the value of portfolios by their key drivers (Acuity Derivatives 2012). This means the computed P&L (value today - value from prior day) is broken down by risk factors, the so-called P&L explain (impact of price moves, impact of interest rate moves, impact of volatility moves, etc) and P&L unexplained or residual P&L (eg, typical risk factor co-movements). This exercise is used to test whether the overall function employed to calculate the P&L explain is accurate enough to materially replicate the overall computed P&L for the day. The FRTB PLA test is based on the same principles underlying the finance PLA, but it has been designed in a new structured framework based on the comparison between HPL and RTPL.

The intended purpose of the PLA test is to assess the quality of the HPL’s representation in the risk model (ie, risk factor completeness and accuracy of valuation functions) compared with the HPL itself.

According to our interpretation, the below topics are in scope:

  • The risk factor completeness should capture the shortcomings of the risk model in terms of missing risk factors, missing higher-order sensitivities, etc. For example, if for a certain position the HPL is a function of n risk factors, HPL = f(x1, x2, …, xn), the risk factor completeness would then capture the approximations made in the internal model when computing the RTPL as a function of a potentially reduced set of risk factors: RTPL = g(x1, x2, …, xm) with m ≤ n.
  • The accuracy of valuation functions should reflect any approximations made to the risk pricing functions used in the internal model. For example, this includes assessing the incremental accuracy of Greek-based risk models, ladders or full revaluation-based risk models.
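To make this concrete, here is a toy sketch (entirely our own construction, not from the FRTB): the HPL of a call option is obtained by full revaluation under both spot and volatility moves, while a delta-only RTPL drops the volatility risk factor altogether, leaving an unexplained P&L driven by the missing factor.

```python
# Illustrative only: a Black-Scholes call, with hypothetical parameter choices.
from math import erf, exp, log, sqrt

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def call_price(s, vol, k=100.0, t=0.5, r=0.0):
    d1 = (log(s / k) + (r + vol**2 / 2) * t) / (vol * sqrt(t))
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d1 - vol * sqrt(t))

s0, vol0 = 100.0, 0.20
ds, dvol = 1.0, 0.02                       # one day's moves in both factors

# HPL: full revaluation over the full set of risk factors (spot and vol).
hpl = call_price(s0 + ds, vol0 + dvol) - call_price(s0, vol0)

# RTPL: delta-only risk model; the vol factor is missing entirely.
delta = (call_price(s0 + 0.01, vol0) - call_price(s0, vol0)) / 0.01
rtpl = delta * ds

unexplained = rtpl - hpl                   # driven by the dropped vol factor
```

Here the unexplained P&L is essentially the vega (and gamma) contribution the reduced risk factor set cannot see.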

According to our interpretation, the below topics are out of scope:

  • Any valuation adjustments not posted on a daily basis (or more frequently).
  • Any market data alignment.
  • Any modelling of historical scenarios: for example, calibration of historical return assumptions; usage of a scaling technique for market data to make risk models more reactive to sudden change in the market conditions; idiosyncratic risk modelling; and full or partial usage of proxies, including transformation so we can apply historical scenarios (in terms of market observables) to the parameterised end-of-day price.

Given the interpretation of the PLA scope described, the only meaningful definition, which should be adopted, is that in the body of the FRTB’s text.

As mentioned above, the main regulatory intent behind the PLA test is to verify the ability of the risk model to accurately replicate the representation of the HPL on a daily basis. Arguably, this is not really the role of the risk model: it encompasses features such as calibrations of distribution assumptions, risk factor mappings, definition of proxies, calibration of historical time series, etc, which together produce the predictive risk measure. In other words, the role of the risk model is mainly to produce sound and conservative predictions for potential future losses to ensure the capital of the bank is appropriately safeguarded. For this reason, the scope of the PLA cannot be extended to alignment of market data, valuation adjustment or testing of modelling of historical scenarios – points that are tested anyway via backtesting. The PLA test should be limited to testing risk factor completeness and valuation accuracy, while backtesting should be regarded as the gold standard for end-to-end validation of a risk model.

P&L attribution rationale.

From a statistical point of view, the PLA analysis can be seen in the context of a regression analysis, where the HPL may be thought of as the dependent variable (Y), the RTPL as the independent variable (X) and the unexplained P&L as the regression error ε (up to a sign). In a regression model we assume Y = Xβ + ε, and consequently the error term is defined as ε = Y - Xβ. If the model is specified correctly, the expected value of the errors is zero and the variance of the errors is constant: E(ε) = 0 and E(ε²) = σ²(ε).

Then, the R2 coefficient measures the fraction of the total variability of Y explained by the model, or one minus the fraction of the unexplained variance:

  R² = 1 - σ²(ε) / σ²TOT

If X is a good estimator of Y, then residuals are small and, in the (x,y) axes, points cluster close to the regression line. Further, the PLA test has been designed so the regression line lies close to the diagonal (figure 1(a)). However, if the model is not correctly specified, eg, if the risk factor coverage is low and/or there is high measurement noise, the points will be dispersed far away from the regression line, which itself may not be aligned with the diagonal (figure 1(b)).

Figure 1: Correctly (respectively incorrectly) specified model with high (respectively low) risk factor coverage and low (respectively high) measurement noise. (a) Relationship between RTPL and HPL when the model is correctly specified. (b) Relationship between RTPL and HPL when the model is not correctly specified. Example based on P&L series used for hypothetical portfolio analysis.

The FRTB mean and variance ratios have been designed with the above setting in mind, but under the implicit assumption that β=1. In this case:

  • the mean ratio is a test for E[ε] = 0, as it requires that this be within ±10% of σ(X); and
  • the variance ratio is a bound on the coefficient of determination, R² ≥ 80%.

Variance ratio as function of correlation and volatility.

Before discussing alternatives, let us consider the current ratios proposed by the regulators. We express them in terms of x and ρ:

  M2 = Var(UPL) / Var(HPL) = [Var(RTPL) - 2ρ σ(HPL) σ(RTPL) + Var(HPL)] / Var(HPL)
     = x² - 2ρx + 1   (1)
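The identity in (1) holds exactly for sample moments too (provided consistent normalisations are used throughout), which can be checked numerically. A sketch with simulated series of our own:

```python
import numpy as np

rng = np.random.default_rng(1)
hpl = rng.normal(0, 1.0, 250)
rtpl = 0.9 * hpl + 0.3 * rng.normal(0, 1.0, 250)

x = rtpl.std(ddof=1) / hpl.std(ddof=1)     # ratio of volatilities
rho = np.corrcoef(hpl, rtpl)[0, 1]         # Pearson correlation

m2_direct = (rtpl - hpl).var(ddof=1) / hpl.var(ddof=1)   # Var(UPL)/Var(HPL)
m2_formula = x**2 - 2 * rho * x + 1                      # equation (1)
```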

We can plot the values of M2, together with the regulatory threshold x² - 2ρx + 1 ≤ 0.2, in an (x, ρ) plot (see figure 2).

It can be seen that the line passes through the point (x, ρ) = (1, 0.9), ie, in the ideal case x = 1, the HPL and RTPL need to be at least 90% correlated. This is a very high threshold and is very unlikely to be passed.

Further, the region is quite ‘skewed to the left’, meaning that x < 1 is actually easier to pass than x > 1. In other words, given the skewed threshold, portfolios that underestimate risk are more likely to pass the eligibility test. This is evident here, as the peak of the curve does not occur when x = 1 but when x = √0.8 ≈ 0.8944, where the condition is ρ ≥ √0.8 ≈ 89.44%.
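The location of this peak follows from minimising the boundary of the pass region, ρ(x) = (x² + 0.8)/(2x), over x; a quick numerical check (our own sketch) confirms both the location and the value:

```python
import numpy as np

# Boundary of the pass region: x^2 - 2*rho*x + 1 = 0.2, ie rho = (x^2 + 0.8)/(2x).
xs = np.linspace(0.5, 1.5, 100_001)
rho_boundary = (xs**2 + 0.8) / (2 * xs)

x_star = xs[rho_boundary.argmin()]    # where the pass region peaks
rho_star = rho_boundary.min()         # lowest correlation that can ever suffice
```

Both land at √0.8 ≈ 0.8944, matching the skew described above.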

While the variance test checks that HPL and RTPL are close on each day (because their difference must essentially always be small to have a small variance), the mean ratio is a distributional test, meaning the averages of the HPL and RTPL must be close. However, on a day-by-day basis, they might differ significantly and still have M1 ≈ 0.

Changing the threshold.

Now let us focus on how the displayed threshold changes under a possible recalibration. Figure 3 is obtained for threshold values of 0.4, 0.6 and 0.8. Although the test clearly becomes easier to pass, the skew to the left becomes more pronounced.

Figure 2: Variance ratio threshold in terms of x and ρ. The background shows different values of the regulators’ variance ratio from 0 (blue) to 500% (yellow).

Figure 3: Variance ratios with different thresholds, together with points representing simulated data. The background shows different values of the regulators’ variance ratio from 0 (blue) to 500% (yellow).

To put these values into context, we plot the correlations and ratios of standard deviations obtained from simulated portfolios; these are represented by the white dots in figure 3.

Each dot represents a whole simulated portfolio over 100 years and can be thought of as the median value for each portfolio. Such portfolios differ in nature by having different degrees of volatility caused by risk factor coverage, correlation among missing risk factors and different levels of operational noise.

Most points fall within the 40% threshold level; however, points close to the edge of this area will have, on a yearly or monthly basis, no more than a 50% chance of passing the PLA test; therefore, they will jump in and out of acceptance (especially when monthly P&L vectors are used), which is not desirable. For this reason, the threshold should be set at a significantly larger value than 20%, but this does not solve many issues (eg, underestimation is favoured).

Proposed alternatives

Proposal A: Spearman rank correlation and Kolmogorov-Smirnov distance test.

A simple proposal is to replace the variance ratio M2 and mean ratio M1, respectively, with two statistical measures: the Spearman rank correlation (Spearman 1904) and the Kolmogorov-Smirnov (KS) distributional fitting test (Kolmogorov 1933; Smirnov 1948). The former will ensure the HPL and RTPL move in the same direction on a daily basis, while the latter will ensure the distributions of the RTPL and HPL are broadly comparable, in a similar way to the M2 and M1 measures.

Regarding correlation measures, computing the rank correlation between RTPL and HPL seems most appropriate, as opposed to using a custom definition of correlation or the Pearson correlation (Pearson 1895). This is because of the following properties:

  • It is a well-known statistical measure. The rank correlation measures the correlation between the ranks of two time series, assessing whether the relationship between them is monotonic.
  • It is robust in the presence of outliers.
  • It makes use of the full distribution, while still giving slightly more importance to the tails than the body of the distribution.
  • If there are no outliers or other similar issues, it gives results similar to the Pearson correlation; therefore, it is both easy to interpret and works with similar thresholds to the Pearson correlation.

Given the above characteristics, the rank correlation is a good replacement for the variance ratio test.
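The robustness to outliers can be illustrated with a toy example (our own construction; `spearman` here is a hypothetical helper, not a regulatory definition):

```python
import numpy as np

def spearman(a, b):
    """Rank correlation: Pearson correlation of the ranks (no tie handling
    needed for continuous P&L series)."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(4)
hpl = rng.normal(size=250)
rtpl = hpl + 0.1 * rng.normal(size=250)    # well-aligned risk model

rtpl_out = rtpl.copy()
rtpl_out[0] = 50.0                          # a single bad data point

pearson_out = np.corrcoef(hpl, rtpl_out)[0, 1]   # collapses
spearman_out = spearman(hpl, rtpl_out)           # barely moves
```

One corrupted observation destroys the Pearson correlation, while the rank correlation shifts only by the order of 1/n.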

Nevertheless, a clear limit of this test is that no discrimination is made regarding the over/underestimation of risk. When plotted on the (x, ρ) axes, the condition ρSpearman ≥ c is essentially equivalent to a horizontal threshold ρ ≥ c: see the horizontal line in figure 5.

To overcome this limitation, another possible standard test is to compare the distributions of the RTPL and HPL using a KS distance test.

Given the two samples HPL and RTPL (generally not independent of each other), we can define the KS distance as:

  KS(HPL, RTPL) := sup_x |F_HPL(x) - F_RTPL(x)|

where F_HPL(x) (respectively, F_RTPL(x)) denotes the empirical cumulative distribution function of HPL (respectively, RTPL). Then, the test is based on a bound on the KS distance: KS(HPL, RTPL) ≤ KS_α. For argument’s sake, we set:

  KS_5% ≈ 1.36 √(2/n) ≈ 12%

(where n is the sample size, in this case, n=250), but such a level should be carefully calibrated. Unlike in the original KS test, the above does not imply that the two samples have the same distribution with statistical significance level α. This is because, by construction, HPL and RTPL can never be independently generated, as they are based on the same risk factors. However, this is an intrinsic limitation of the PLA test setup that is common to any chosen methodology.
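A minimal sketch of the KS distance and of the indicative (uncalibrated) 5% threshold above; function names and simulated data are our own:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample KS: sup distance between the two empirical CDFs,
    evaluated on the combined sample."""
    grid = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(fa - fb).max()

n = 250
ks_5pct = 1.36 * np.sqrt(2 / n)               # ~12% for yearly vectors

rng = np.random.default_rng(2)
hpl = rng.normal(0, 1.0, n)
rtpl_close = hpl + 0.02 * rng.normal(size=n)  # well-aligned risk model
rtpl_wide = 3.0 * hpl                         # strong overestimation of risk
```

A closely tracking RTPL stays well inside the bound, while a risk model three times too volatile is rejected on distributional grounds alone.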

Given the characteristics described above, the KS test appears to be a good replacement for the mean ratio. Like the regulatory mean test, it does not compare the P&L vectors in a day-by-day fashion: only the distributions of RTPLs and HPLs influence the results.2 However, it does overcome the asymmetry problem of the regulatory ratio, since it is symmetric around x = 1 (on a logarithmic scale), and it checks the difference between σ(RTPL) and σ(HPL), covering a feature of the variance ratio via a more robust measure less sensitive to outliers.

2 The regulatory mean ratio is defined as mean(RTPL - HPL)/σ(HPL), or (mean(RTPL) - mean(HPL))/σ(HPL). This is independent of any day-by-day comparison of distributions.

Although the probabilities displayed in figure 4 are, strictly speaking, only valid for Gaussian vectors, the KS test is purely non-parametric; therefore, they should generalise to most continuous distributions. As expected, the results are independent of the correlation (because they only focus on the distributions), and passing the test depends solely on whether the RTPL and HPL distributions (in this case, only depending on the standard deviations) are shaped similarly.

Figure 4: Estimated probability of passing KS test, based on simulated centred and correlated Gaussian samples and a confidence level of 95%. Dark blue means a 100% probability of passing, white means a 0% probability of passing. The red line is the regulators’ variance ratio test, reported for comparison. The graph is obtained by simulating yearly P&L vectors (250 returns).
Figure 5: Probability of passing a KS test and a bound on the rank correlation simultaneously. The line ρ = 70% shows that, at least for Gaussian vectors, the Spearman rank correlation can be thought of as an estimator of the correlation between HPL and RTPL. The probabilities shown here have been estimated by simulating (HPL, RTPL) as a centred Gaussian vector.

During the development of this article, other similar statistical tests, such as the Cramér-von Mises test, were investigated, with very similar outcomes.

We could argue that other tests (such as Anderson-Darling), which focus on the tails of the distribution, would be more useful in this case and could be employed instead of the KS distance test. However, we think the KS test’s focus on the centre of the distribution is a valuable feature, since:

  • It allows us to automatically incorporate the mean test that was otherwise performed by the mean ratio test.
  • The PLA rules as described in the FRTB mandate tests that are a function of the first and second moments of the HPL and RTPL distributions and the correlation between the two; the first two moments are captured by the KS distance.
  • It would not be advisable to overweight the tails of the distributions, because they are more likely to be the result of data noise than other factors. Whether the model is sufficiently conservative under stressed scenarios should be checked via backtesting, not the PLA. It is unreasonable to ask the risk model to perfectly mimic the price model under huge shocks; what is important is that, if the HPL has a large shock, the RTPL also has one of roughly the same intensity.

Figure 6: Different thresholds for M2 obtained by applying a minimalistic set of changes that assure the desired properties. The red line shows the original regulators’ variance ratio test, for comparison.

A clear limitation of this test is that, since P&L vectors are not compared in a day-by-day fashion, no discrimination is made along the correlation axis, ie, both negatively and highly correlated P&Ls have the same chance of passing the test. Such limitations can be overcome by merging the KS test with the Spearman rank correlation test.

By combining the two tests, we obtain one that provides both ‘vertical’ caps and ‘horizontal’ caps, as is the case with the current regulatory test. The probability of passing this test is just an intersection of the corresponding plots. The right confidence interval and correlation level will need to be calibrated to ensure the tests can be used with real portfolios.
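A sketch of the combined check (the thresholds here are purely illustrative and would need the calibration just mentioned; all names are ours):

```python
import numpy as np

def spearman(a, b):
    """Pearson correlation of the ranks."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

def ks_distance(a, b):
    """Sup distance between the two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(fa - fb).max()

def proposal_a_pass(hpl, rtpl, rho_min=0.70, ks_max=0.12):
    """Pass iff both the 'horizontal' cap (rank correlation) and the
    'vertical' cap (KS distance) are satisfied."""
    return spearman(hpl, rtpl) >= rho_min and ks_distance(hpl, rtpl) <= ks_max
```

An anti-correlated RTPL fails on the correlation leg even if its distribution matches, while a distributionally mis-scaled RTPL fails on the KS leg even if perfectly correlated.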

To summarise, although the rank correlation and KS tests depart from the current regulatory formula, they overcome the limitations of the PLA test while retaining its purpose, by testing the alignment of P&Ls on a daily basis and checking the distribution comparability, as well as targeting risk factor completeness and valuation accuracy. Further, such statistical measures are symmetric and more robust than the regulatory formulas. Clearly, an adequate calibration is required to define a sensible correlation threshold and test the confidence interval.

Proposal B: symmetric variance ratio.

Assuming we do not want to diverge much from the current regulatory formula, one possible approach is to modify it so some improved properties are incorporated into the PLA test, by requiring that it be:

  • easiest to pass when ρ=1 and x=1;
  • symmetric around x=1, meaning the result should be the same for x and 1/x (ie, favour accurate, rather than under- or over-, estimation of the risk);
  • monotonic in ρ, meaning for a given x a higher correlation should always improve the chance of passing.

Some easy arithmetic formulas may be deduced from this.

The following provides a minimalistic change to the current regulators’ formula. It solves the asymmetry issue between x > 1 and x < 1 and correctly favours x = 1 over x ≠ 1. The formula is:

  M2 = σ²(UPL) / [σ²(RTPL) + σ²(HPL)]

That is, instead of dividing by either Var(RTPL) or Var(HPL), we divide by their sum in order to have a symmetric formula. In terms of x and ρ, this becomes:

  M2 = 1 - 2ρx / (x² + 1)

Figure 6 represents the outcome.

The same modification can be applied to the mean ratio by defining:

  M1 = |mean(UPL)| / √(σ²(HPL) + σ²(RTPL))

To summarise, the above proposal has the quality of being very similar to what has already been put forward by regulators, but it overcomes the conceptual issue of favouring risk underestimation and provides a more tractable alternative from a calibration point of view. Further, the symmetry of the test profile produced will help in the calibration process, ensuring the test is more tractable in the presence of outliers.
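Both symmetric ratios can be sketched directly from the P&L series (names are ours; the square root in the M1 denominator is our reading, chosen so M1 stays dimensionless):

```python
import numpy as np

def symmetric_ratios(hpl, rtpl):
    """Proposal B: divide by sigma^2(RTPL) + sigma^2(HPL) instead of Var(HPL)."""
    upl = rtpl - hpl
    denom = rtpl.var(ddof=1) + hpl.var(ddof=1)
    m1 = abs(upl.mean()) / np.sqrt(denom)   # sqrt keeps M1 dimensionless
    m2 = upl.var(ddof=1) / denom
    return m1, m2

def m2_xrho(x, rho):
    """The same M2 in (x, rho) coordinates: 1 - 2*rho*x/(x^2 + 1)."""
    return 1 - 2 * rho * x / (x**2 + 1)
```

The (x, ρ) form makes the claimed properties easy to verify: swapping x for 1/x leaves M2 unchanged, and for a given ρ the ratio is smallest at x = 1, where it equals 1 - ρ.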

Sample-size testing.

Moving back to the KS test, and focusing on testing different sample sizes, we highlight how monthly P&L vectors make for an easier-to-pass KS test (since, for statistical samples, 21 observations are much less restrictive than 250). Using 21 observations, the threshold inspired by the KS test becomes 1.36 √(2/21) ≈ 42%. This is a very permissive threshold that will lead to a statistically uninformative test with a nearly 100% pass rate for all reasonable pairs (RTPL, HPL) (think of a graph similar to figure 4, where the background is entirely dark blue).

Figure 7: KS + Spearman probability of passing with different sample windows. (a) Monthly. (b) Yearly. Dots represent simulated portfolios.

This is, however, a criticality of monthly returns, and not of the KS test thresholds, which simply adapt to the much noisier information. Indeed, in figure 7(b), we show how the simulated portfolios look when using a one-year rolling window (rolled monthly), ie, around 250 daily values for each sample. Figure 7(a) shows how the correlation and volatility ratio look when the sample size shrinks to around 21 daily observations, ie, the daily observations in a month.

From figure 7(a), it is clear that a sample based on monthly observations will produce very unstable results, with portfolios frequently jumping in and out of eligibility. Such behaviour is drastically reduced once the sample size is increased, eg, to a yearly observation window, as shown in figure 7(b). Further, for many of the simulated portfolios, |M1| never exceeds 10% on a yearly basis but does so on a monthly basis; and besides being higher, the monthly ratios are also more volatile.

Although a yearly rolling window generates more stable results, the definition of the right sample window requires further analysis and calibration.

Conclusions

We highlighted several drawbacks of the currently proposed regulatory metrics, eg, that the current variance ratio favours the underestimation of risk and has a high sensitivity to outliers. Two alternative proposals are presented:

  • (Proposal A) A threshold on the Spearman rank correlation between RTPL and HPL to test the alignment of the P&Ls on a daily basis as well as a distributional test, such as the KS test, to ensure the distributions of the two P&Ls are aligned. These would replace the variance and mean ratios, respectively.
  • (Proposal B) To prevent diverging too much from the regulatory formula, but in order to fix the highlighted issues, a ‘minimal change’ to the current definition is explained in the section introducing proposal B (we simply replace σ²(HPL) with σ²(RTPL) + σ²(HPL) in the denominator).

Proposals A and B are easy to implement and seem to solve the asymmetry and instability issues of the current definition. Although proposal A seems to be more statistically sound, it has a slightly more complex interpretation if compared with the current variance ratio formula.

The results are supported both by the plots using the (x,ρ) co-ordinates described in the section on components and assumptions, and by tests on the simulated data. Both stationary and non-stationary portfolios were employed, but no significant differences were measured.

When monthly P&L vectors (of length 21 business days) are analysed, they produce much more unstable variance and mean ratios. In particular, the latter creates a test that is much harder to pass in such a case. Using monthly vectors would therefore require a large increase in the thresholds employed, although even this would leave many trading desks jumping in and out of IMA eligibility. This effect is not desirable, and we caution against the use of monthly P&L vectors. Instead, we highly recommend the use of an increased sample size.

To conclude, we believe PLA is a valuable diagnostic test, which, given the associated challenges and modified as suggested in the paper, should be employed as an input to a more holistic model performance assessment rather than as a tool to assess eligibility. We recommend reconsidering the current planned application of the PLA test within the FRTB regulatory framework and instead investing in assessing a gold-standard technique such as backtesting. We believe this represents a more sustainable candidate to determine the soundness and eligibility of banks’ internal models, since it is an end-to-end validation tool that encompasses all components thereof.

Marco Spinaci is an associate, Manuela Benigno is an assistant vice president, Andrea Fraquelli is a vice-president and Adolfo Montoro is a director in the risk methodology team at Deutsche Bank in London. This article reflects the authors’ personal views and does not necessarily represent the opinion of Deutsche Bank.

Email: marco.spinaci@db.com, manuela.benigno@db.com, andrea.fraquelli@db.com, adolfo.montoro@db.com

References

Acuity Derivatives, 2012
Why P&L attribution? Or judging weathermen
White Paper, September 10, Acuity Derivatives

Basel Committee on Banking Supervision, 2016
Minimum capital requirements for market risk
Report, January, Bank for International Settlements

Basel Committee on Banking Supervision, 2017
Frequently asked questions on minimum capital requirements for market risk
Report, January, Bank for International Settlements

Kolmogorov AN, 1933
Sulla determinazione empirica di una legge di distribuzione
Giornale dell’Istituto Italiano degli Attuari 4, pages 83–91

Pearson K, 1895
Notes on regression and inheritance in the case of two parents
Proceedings of the Royal Society of London 58, pages 240–242

Smirnov N, 1948
Table for estimating the goodness of fit of empirical distributions
Annals of Mathematical Statistics 19, pages 279–281

Spearman C, 1904
The proof and measurement of association between two things
American Journal of Psychology 15, pages 72–101
