Investors are pressing for improvements to risk models for private assets as they look to boost exposures to private equity and real estate.
According to a BlackRock survey of institutional investors in January, almost a third of investors want to increase exposure to private equity and nearly two-fifths want to increase exposure to private real estate. But modelling the risk of these assets is notoriously tricky, both because of the scarcity of data on which to base models and because of the challenge of identifying correlations between private and publicly traded assets.
In response, model vendors such as MSCI and Axioma are developing new private asset risk models, but are taking very different approaches.
"The goal is to be able to take any potential investment and be able to put it on an equal footing with everything else. To be able to look at Apple stock and a building in San Francisco and use the same language and understand what sort of relationship they have," says Peter Shepard, head of multi-asset class and alternatives research at MSCI based in San Francisco.
Traditionally there have been two ways of thinking about private asset risk, according to Shepard, who worked on MSCI's model for private real estate, private credit and private equity, which was released on April 25.
The first was to take appraiser valuations at face value. However, valuations tend to be infrequent and lag the market, as well as being subjective and therefore likely to smooth out private asset volatility.
"The valuations aren't a true reflection of value. They're a subjective measure or a stale accounting figure. So their variability is uncoupled from the variability of the real assets. For quite a long time the industry managed to keep up the perception that assets were low risk and uncorrelated [with public markets], based on this misinterpretation of the accounting figures," says Shepard.
The reality, he says, is that "private assets are exposed to a lot of the risk factors that drive traditional assets".
A second approach was to link private assets to a publicly traded proxy. Such an approach acknowledges that private assets are exposed to some of the same risk factors as publicly traded assets. But it ignores factors specific to private assets, such as a liquidity premium or the fundamentally different risk of an early stage start-up compared with a listed company. Diversification benefits are therefore ignored. It is also tough to find a suitable proxy.
"Sometimes you can have a very good match. Other times you have to accept that something is better than nothing, rather than being very accurate," says Enrico Massignani, Trieste-based head of risk management at Generali Investments Europe.
The goal of the new models is to provide a more accurate picture of private asset risk between these two extremes, capturing the benefits of private assets without overstating them.
"A lot of investors in private assets were really caught off guard during the financial crisis," Shepard says, as private assets turned out to be more correlated to publicly traded assets than expected. "It's really hard to react in the moment. The risk management has to happen well before the event."
A big hurdle, though, is the lack of quality data. Private equity is not subject to the reporting requirements that apply to public companies. Valuations are subjective because they take into account managers' views of the future. Meanwhile, transaction information for real estate is fragmented and often includes a significant time lag.
This lack of data affects the level of detail of the models, says Dieter Vandenbussche, head of equity research at Axioma in Atlanta, whose firm is developing a new private equity model.
"You want to decide what level of granularity you want to use to construct this model. Should you [model] venture capital separately from buyout? Should you try to isolate venture capital funds that focus on certain industries and try to construct a model for those? When you try to slice [the data] thinly enough you're almost certain to have too little data to calibrate a model."
MSCI seeks to incorporate different types of data in its approach so any weaknesses can partially offset each other. Its model incorporates valuations, public proxies such as real estate investment trusts (REITs) or public equity indexes, and transaction data.
The model then uses beta coefficients – the sensitivity to factors that drive the public proxy – and what MSCI calls "pure private" factors to determine the difference between the private asset and its public proxy.
To get round the artificial smoothing of valuations, the firm uses de-smoothing techniques based on the methodology developed by Jeffrey Fisher, David Geltner and Brian Webb in a 1994 paper titled Value indices of commercial real estate: A comparison of index construction methods.
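The de-smoothing idea can be sketched in a few lines. A common Geltner-style assumption, used here purely for illustration (MSCI's exact implementation is not public), is that each reported valuation return is a weighted average of the true return and the previous reported return; inverting that filter recovers the underlying series. The smoothing parameter `alpha` is a hypothetical input.

```python
# Geltner-style de-smoothing of appraisal-based returns.
# Assumes observed returns follow: r_obs[t] = alpha * r_true[t] + (1 - alpha) * r_obs[t-1],
# so the true return can be recovered by inverting this first-order filter.

def desmooth(observed_returns, alpha):
    """Recover de-smoothed returns from appraisal-smoothed ones.

    alpha is the smoothing parameter in (0, 1]; alpha = 1 means no smoothing.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    true_returns = []
    prev_obs = None
    for r in observed_returns:
        if prev_obs is None:
            # No lagged observation exists for the first period.
            true_returns.append(r)
        else:
            true_returns.append((r - (1 - alpha) * prev_obs) / alpha)
        prev_obs = r
    return true_returns
```

Because the filter damps period-to-period variation, the recovered series is more volatile than the reported one, which is exactly the point: appraisal smoothing understates private asset risk.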
The firm then combines the de-smoothed valuation data with prior guesses of the beta relationship for various categories of private asset, based on economic intuition. The prior beta for venture capital, for example, is based on the beta for small-cap companies, but allows for lower leverage than might be typical in the small-cap sector.
"We find it's very important to have a prior to narrow the region, so beta isn't 50 and -50. But the details of [the prior] really don't matter much," says Shepard. Using its method MSCI determines, for example, that US east coast offices have a greater correlation with US office REITs than with Midwest offices.
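Combining a noisy data-driven beta with an economic prior is, in effect, a shrinkage estimate. The sketch below is a minimal illustration of that idea, not MSCI's methodology: it blends an ordinary least-squares beta with a prior beta using a fixed weight, where a full Bayesian treatment would set the weight from the relative precision of data and prior.

```python
# Illustrative shrinkage of an estimated beta toward an economic prior.
# The prior value and weight are hypothetical inputs, not MSCI parameters.

def ols_beta(y, x):
    """Slope of y regressed on x by ordinary least squares."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

def shrunk_beta(y, x, prior_beta, prior_weight):
    """Blend of sample beta and prior beta.

    prior_weight in [0, 1]: 0 trusts the data fully, 1 trusts the prior fully.
    """
    b_hat = ols_beta(y, x)
    return prior_weight * prior_beta + (1 - prior_weight) * b_hat
```

As Shepard's comment suggests, the prior's main job is to keep the estimate in a plausible region when the data are sparse; with abundant data the weight on the prior can shrink toward zero.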
In contrast, Axioma's private equity model – due to be rolled out in the third quarter of 2016 – uses cashflow data from private equity funds to estimate sensitivities to public factors.
Axioma's model uses cashflow data obtained from data provider Preqin, which collects it mainly from fund managers but also through methods such as freedom of information requests to public institutional investors. A disadvantage is that this narrows the universe of funds for which data is available.
"There's only a relatively small subset of funds that you have cashflow data for. We end up having to slice it at a pretty broad level, and effectively say that all [the] funds in [a certain] cohort will get similar exposure to [certain] factors, which doesn't really give a very granular view," Vandenbussche says.
That narrowness is balanced, though, against the ability to estimate correlation sensitivities directly from the data.
As a first step, Axioma uses cashflow data from liquidated funds to derive a dynamic growth rate for each fund over time. The firm assumes these growth rates are a function of public and private factor sensitivities. In a second step, the firm finds the value of public factor sensitivities that best explains the path of the growth rates over time – leaving a residual element of the growth rate that cannot be explained by market drivers.
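The second step amounts to a least-squares fit of growth rates against public factor returns. The sketch below shows the idea with a single public factor for brevity (the real model uses several, and the fitting details are not public); the residual series is what the public factor cannot explain.

```python
# Illustrative version of the second step: find the public-factor sensitivity
# that best explains a fund's growth-rate path, and keep the unexplained residual.

def fit_sensitivity(growth_rates, factor_returns):
    """Least-squares sensitivity (beta) of fund growth rates to one public factor.

    Returns (beta, residuals), where residuals is the part of each period's
    growth rate not explained by the factor.
    """
    n = len(factor_returns)
    mf = sum(factor_returns) / n
    mg = sum(growth_rates) / n
    cov = sum((f - mf) * (g - mg) for f, g in zip(factor_returns, growth_rates))
    var = sum((f - mf) ** 2 for f in factor_returns)
    beta = cov / var
    alpha = mg - beta * mf  # intercept
    residuals = [g - (alpha + beta * f)
                 for g, f in zip(growth_rates, factor_returns)]
    return beta, residuals
```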
The last step is to split that residual growth rate into a part driven by a private factor common across multiple funds and a part that is truly idiosyncratic. The model assumes there is only one private factor per private equity category.
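A minimal way to sketch that final split, under the stated one-factor-per-category assumption, is to treat the cross-sectional average of the residuals each period as the common private factor, with each fund's deviation from it as idiosyncratic. This equal-weight average is an illustrative simplification; a production model might instead fit the single factor by principal components or maximum likelihood.

```python
# Illustrative split of residual growth rates into a single common private
# factor (cross-sectional mean per period) and fund-specific idiosyncratic parts.

def split_residuals(residuals_by_fund):
    """residuals_by_fund: list of equal-length residual series, one per fund.

    Returns (common_factor, idiosyncratic_by_fund), where each fund's
    residuals equal common_factor plus its idiosyncratic series.
    """
    n_funds = len(residuals_by_fund)
    n_periods = len(residuals_by_fund[0])
    common = [sum(fund[t] for fund in residuals_by_fund) / n_funds
              for t in range(n_periods)]
    idio = [[fund[t] - common[t] for t in range(n_periods)]
            for fund in residuals_by_fund]
    return common, idio
```

By construction, the idiosyncratic series average to zero across funds each period, so all shared variation is attributed to the common private factor.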
"We let the data calibrate what the factor sensitivities should be. The assumptions come in which factors we say should drive the risk of those private equity instruments," Vandenbussche says. Yet he acknowledges the limitations in modelling private asset risk.
Those limitations are unlikely to change while granular and timely data on private assets remains hard to get hold of. "Certainly there's a lot of room for debate here. I don't think there's any way of establishing a true model," he says. Vendors will continue to strive to refine the models, but as Vandenbussche puts it: "These are all approximations in one way or another."