Sponsored webinar: BNY Mellon


Data management
The panel
Michael Faulkner, editor, Life & Pension Risk
John Boggis, vice-president, regional sales director, Eagle Investment Systems
Daniel Gorton, director, KPMG
Peter Luckhurst, senior product manager, BNY Mellon Asset Servicing
Randle Williams, group investment actuary, Legal & General Group

What are the main asset data management issues that insurers are grappling with at the moment?
Daniel Gorton, KPMG: The issues around data are many, and that is the problem – the sheer complexity of what the Solvency II Directive requirements actually mean for insurers. They have to gather data they haven’t used before, they have to demonstrate an unprecedented level of control and understanding of that data, and they have to articulate how that data is transformed across the many systems and manual interventions to get to the final solvency number. All of those processes have to be re-engineered, alongside a lot of other work that is going on in other areas of the Solvency II programme, and that leads to this complexity, which is causing the challenges that we are seeing. The directive also requires internal and external data to be treated the same. So, while Solvency II programmes have perhaps initially focused on what needs to be done within their own houses, now we are seeing insurers having to go to their providers to try to understand and control the data they are receiving from third parties. Those challenges are not helped by the changing timelines of Solvency II and the pressure they will all experience once these processes are operational. The reporting of Solvency II numbers will take about six weeks for insurers, which means data will need to be provided and be quality-assured within a very short time at the beginning of that process.

Randle Williams, Legal & General Group: The key points about the complexity and timing are absolutely spot on. The reality is that, although insurers knew about Solvency II coming in quite some time ago, they haven’t had clarity over the Level 2 aspects – they have focused very heavily on the valuation of the liabilities. On top of that, a lot of insurers have been accustomed to dealing with Individual Capital Assessment as part of their reporting to the Financial Services Authority (FSA) – that is a risk-based approach as well, and many of them thought it would just be a gentle step from one regime to the other. The question of the assets was pushed down the timetable because it was not seen as the really big showstopper. It is only now putting pressure on firms, as the time requirements of getting everything together quarterly become apparent. There is also the dawning realisation that the asset data needed is much more granular in its depth, and that you will probably have to report every asset you hold both individually and at a group level. This is on top of the inability to build a data warehouse because you can’t specify quickly enough what you need. It is a big issue, whether you are doing it in house or dealing with firms externally. And putting it all together in a tidy manner is actually very difficult and complicated.

Peter Luckhurst, BNY Mellon Asset Servicing: It is very easy to fall into the trap of thinking about the asset requirements in terms of just the Pillar III reporting. But, in terms of the capital requirement, there is potentially a different set of data requirements in the Pillar I solvency calculations, whether an insurer uses an internal model or the standard formula. Also, the Own Risk and Solvency Assessment (ORSA) and the Pillar II requirements add another dimension to the granularity of the information that an insurance company is expected to understand around its assets. Certainly, the feedback that the European Insurance and Occupational Pensions Authority (EIOPA) has given around asset data, as well as some of the commentary around the data requirements, has indicated that it feels insurance companies should have that granularity of information on their assets because they need it to manage their risks.

Daniel Gorton: One of the concerns is the lack of engagement with the asset management community as insurers have been coming to grips with their own data issues. This is going to squeeze the time that asset managers have to respond to these requests when they finally come. In addition, insurers are asking for almost everything because they can’t be clear on exactly what they need, which really doesn’t help the asset manager either. So, we are set up for some problems unless the asset management community can proactively start helping the insurers understand what information can be delivered and work with them to get the right data in place.

John Boggis, Eagle Investment Systems: The biggest challenge is actually identifying the data you need and from where in your organisation you can get it. Insurance businesses are complicated, they work with lots of different third parties – investment banks, third-party administrators, asset managers and data vendors – and being able to get all of that data together in one place to then feed into the calculations is really the crux of the matter.

What issues are yet to be determined in the rules?
Peter Luckhurst: The detail is still lacking. What is probably best defined at the moment are the Pillar III reporting requirements. We have all seen the quantitative reporting templates (QRTs), we have seen the 80 or so data elements in there, but it is the individual insurer’s responsibility to make the assessments, particularly under Pillar II, as to what it deems necessary to undertake its monitoring and risk management process. This is what is creating the vagueness out there. How can you have a data warehouse if you don’t know all of the data points you are potentially going to be asked for? It is a particular conundrum. Insurers and their asset providers have a set of data that might represent 85% of what they will end up needing, but that last 10–15% is still to be finalised.
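By way of illustration, one pragmatic answer to that conundrum is to design the warehouse schema around the data points that are already known and leave an open extension area for the elements still to be finalised. The following is a minimal sketch of that idea in Python; the field names and CIC value are illustrative assumptions, not the official QRT labels.

```python
from dataclasses import dataclass, field

@dataclass
class AssetDataPoint:
    """One asset line in a hypothetical Solvency II data store."""
    asset_id: str      # e.g. an ISIN
    cic: str           # complementary identification code (illustrative value below)
    market_value: float
    currency: str
    # Open extension map for the last 10-15% of data points still to be
    # finalised, so the schema survives as the rules settle.
    extras: dict = field(default_factory=dict)

holding = AssetDataPoint("GB00B03MLX29", "XL22", 1_250_000.0, "GBP")
# A later-defined data point can be attached without a schema change.
holding.extras["counterparty_group"] = "Hypothetical Bank plc"
```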

John Boggis: A lot of the problem has been in the communication between the insurer and the asset manager. We spend a lot of time talking to insurers who say plainly: ‘Our asset manager will solve this problem for us, it will deliver this information’. And, in the same meeting, we have spoken to exactly the same asset manager, which has said: ‘Actually, we have no idea how we are going to get this information’. The insurer hadn’t really communicated that it was going to need this data time after time, on a regular basis, and not just as a one-off test. Nor had the seriousness of failing to deliver that data been communicated. Other insurers we have talked to said they had spoken to their asset managers about getting a certain other data set, but had not added that, if the managers weren’t able to supply this data in the way the insurers needed, there was a good chance the business would go elsewhere. So being able to provide this data becomes a proper commercial argument.

Peter Luckhurst: The communication has improved over the last few months, particularly as part of the public consultation that EIOPA ran in November, so there is progress there. And certainly the asset management community is more aware of the issues. Interestingly, some of our dialogue has been with the regulator, and it hadn’t quite appreciated some of the complexities the asset data requests were going to present to the community, as well as the industry-wide challenges – it is not just a question of an individual asset manager being able to get the information, but of how it is going to work as an industry. How are we going to exchange this information?

Randle Williams: Part of it is just a misunderstanding between the two parties: an insurance company will ask for asset data arranged in a particular manner for its own purposes, while a fund manager, quite naturally, holds information about the value of the assets because that is what it needs for its job. So neither side quite knows what is required – and insurers are trying to ask for everything they can because they don’t want to rule anything out unless they are absolutely certain the FSA, on behalf of the European regulators, has said they don’t need it. And, of course, that won’t happen for some time.

Daniel Gorton: And there is the question of repeatability. A lot of work has been done to get Solvency II answered in a project way, if you like. But, when these processes roll into business as usual and the timescales are so short, how repeatable are some of these quality and data processes? If there has to be a large data assembly effort at the asset manager, with the results then passed to the insurer to run through its own, perhaps unwieldy or manual, approaches to proving the quality and governance of that information, then they will simply run out of time. Repeatability requires a higher degree of automation than we currently see in the market.

John Boggis: Once the asset managers are able to supply you with information – and, as an insurer, you probably work with between six and 10 of them, so that takes time – the insurer has to make sense of the data, which will come in different formats from different asset managers. So there is a normalisation task before you can even start looking at the quality side. It is something that has to be automated.
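To make the normalisation task concrete, here is a minimal sketch in Python of how two hypothetical asset manager feeds, carrying the same economic information under different field names, could be mapped into one common layout. The field names and the mapping table are illustrative assumptions, not any real industry standard.

```python
# One field mapping per asset manager feed (all names hypothetical).
FEED_MAPPINGS = {
    "manager_a": {"asset_id": "ISIN", "market_value": "MktVal", "currency": "Ccy"},
    "manager_b": {"asset_id": "SecurityId", "market_value": "Value", "currency": "CurrencyCode"},
}

def normalise(record: dict, manager: str) -> dict:
    """Rename one manager's fields into the common layout."""
    mapping = FEED_MAPPINGS[manager]
    return {common: record[source] for common, source in mapping.items()}

# The same holding as reported by the two managers.
raw_a = {"ISIN": "GB00B03MLX29", "MktVal": 1_250_000.0, "Ccy": "GBP"}
raw_b = {"SecurityId": "GB00B03MLX29", "Value": 1_250_000.0, "CurrencyCode": "GBP"}

assert normalise(raw_a, "manager_a") == normalise(raw_b, "manager_b")
```

Only once feeds have been brought onto a common layout like this can quality checks be applied consistently across managers.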

How has Legal & General Group attempted to address some of these issues around automation and gathering the right information?
Randle Williams: A lot of work has gone into improving the process we had – trying to understand how the changes worked compared to the current process. But it is fair to say that we have concluded we need to take a more holistic view about how we go about the assets because they come in many different formats, and there is history behind how certain insurance companies arrange themselves. So it has become a considerable piece of work to build something that is robust and can deliver in the timescale required. It has also raised questions about the accuracy of data. For the assets, for example, if you have a dividend that is due but not received or there is a stock you can’t value or there are some problems and delays because you are dealing with a custodian in the emerging market area, a decision has to be made about how important it is to have that, particularly from a solvency angle.

How challenging is it to develop this automation – a fully industrialised process to deal with this data management?
John Boggis: That is essentially what Eagle does: it helps people to approach these challenges. This is actually what we have been talking to the market about for more than 20 years, predominantly on the buy side, helping people with the challenges of asset data. So, when we started working on Solvency II 18–24 months ago, we saw it as a data management challenge: how you identify where all the data is in your organisation, how you gather it and how you then normalise it. But also how you run tests on the information to make sure the data you use is complete, appropriate and accurate. These types of processes and tools are what we have been delivering for quite a long time.
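Those three tests – complete, appropriate, accurate – lend themselves to automation. Below is a minimal sketch of what such checks might look like; the required fields, plausibility rules and tolerance against a second source are illustrative assumptions only.

```python
REQUIRED = ["asset_id", "market_value", "currency"]  # hypothetical minimum

def completeness(record: dict) -> list:
    """Complete: flag any required field that is missing or empty."""
    return [f for f in REQUIRED if record.get(f) in (None, "")]

def appropriateness(record: dict) -> list:
    """Appropriate: flag values that are present but implausible."""
    issues = []
    if not isinstance(record.get("market_value"), (int, float)):
        issues.append("market_value is not numeric")
    if len(str(record.get("currency", ""))) != 3:
        issues.append("currency is not a three-letter code")
    return issues

def accuracy(record: dict, benchmark: float, tolerance: float = 0.01) -> list:
    """Accurate: compare the value to an independent source within a tolerance."""
    value = record.get("market_value", 0.0)
    if benchmark and abs(value - benchmark) / benchmark > tolerance:
        return ["market_value deviates from the benchmark beyond tolerance"]
    return []

record = {"asset_id": "GB00B03MLX29", "market_value": 1_250_000.0, "currency": "GBP"}
for failures in (completeness(record), appropriateness(record), accuracy(record, 1_248_000.0)):
    assert failures == [], failures
```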

Daniel Gorton: Automation is another thing that splits by the size of an organisation – the larger firms tend to have a greater degree of automation, while the smaller firms tend to rely more on manual processes and controls. But all organisations are looking to leverage their existing control frameworks, which is absolutely right. A lot of the controls they perform are related to data in some way, and the challenge is extending that control framework to make it fit for purpose for Solvency II. The downside of that approach is that you may end up with some unwieldy manual controls where a more holistic approach would say: ‘Let’s automate this, let’s put in more tools, which may or may not be expensive, but will simplify our lives considerably in terms of that business-as-usual requirement’. I haven’t seen the degree of automation and tooling around controls that perhaps we all expected at the beginning of Solvency II, so there is a long way to go. And, as these processes come into a Solvency II business-as-usual world, we will see another round of control optimisation and greater tooling.

Are the internal data issues significant for companies? How much of a challenge is it for firms with a lot of legacy business – or even without a lot of legacy business – to get hold of this internal data?
Randle Williams: The biggest difficulty in that area tends to be on the liability side. Maybe a company has taken over three or four firms that are all on old systems, and maybe there are products that people don’t sell anymore, so there’s a lack of understanding of the product. The asset side has been a bit easier, except you do have to work out which assets relate to which liabilities and you will have to understand the new issues as they relate to each investment house. Some of the rules in Solvency II imply that a relationship between two firms within the same group has to be treated as if it were between firms from two different groups, so there are some potential problems in terms of what you might expect to happen as those rules develop and evolve. Some of this is still settling, so it is difficult to know how the rules will end up. But you could create a situation where you have two firms in a group, with some assets and some liabilities in different places, and maybe one company in the group has reinsured business to another part of the group. At the moment, you are regarded as having the same risk, but it is not clear that will be the final outcome – which is technically understandable, but it makes it more complicated when you are trying to put these systems together.

Daniel Gorton: The legacy challenge was mainly on the liability side and companies have spent a lot of time in the last couple of years remediating some of that. The dataflows we see have typically become complex over time and a lot of systems and sub-systems that weren’t fully documented or fully articulated wouldn’t stand the rigours of a Solvency II data quality test. A lot of those have been reworked and made transparent so the dataflows can be articulated from start to finish, as is required by data governance.

Peter Luckhurst: I wouldn’t like to undersell the challenges around data. When you dig into the detail, the combination of the types of data elements means an insurer needs to pull that information from multiple sources. So we are not just talking about something the asset manager potentially has available in its system; there is also the risk management information and the performance information. It isn’t a straightforward exercise of saying ‘I have a source for my data on assets’ – it is going to have to come from multiple sources, not least the insurers themselves. Certainly, some of the data elements require input from the insurer itself if they are to be reported correctly.
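A minimal sketch of that multi-source assembly might look like the following, with an accounting feed, a risk feed and the insurer’s own input joined on an asset identifier to build one reportable record. The feeds and field names are hypothetical.

```python
# Three hypothetical sources, each keyed by asset identifier.
accounting_feed = {"GB00B03MLX29": {"market_value": 1_250_000.0, "currency": "GBP"}}
risk_feed = {"GB00B03MLX29": {"duration": 7.4, "credit_rating": "AA"}}
insurer_input = {"GB00B03MLX29": {"ring_fenced_fund": "With-profits"}}

def assemble(asset_id: str) -> dict:
    """Merge the three sources into one record, noting any missing source."""
    record = {"asset_id": asset_id, "gaps": []}
    for name, source in (("accounting", accounting_feed),
                         ("risk", risk_feed),
                         ("insurer", insurer_input)):
        fields = source.get(asset_id)
        if fields is None:
            record["gaps"].append(name)  # a source failed to deliver
        else:
            record.update(fields)
    return record

print(assemble("GB00B03MLX29"))
```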

Daniel Gorton: Data requirements for the ORSA are still a big outstanding question. Firms focused a lot on Pillar I at the outset of Solvency II. Pillar III received a lot of the focus when the QRTs had to be completed, but Pillar II hasn’t really been fully addressed in many programmes from the data perspective.

Randle Williams: From the fund manager’s point of view, the insurer has to ask many more questions around custodianship. If you have one custodian, does the insurer really understand what happens when the assets aren’t in the UK? Or maybe there is a different priority between the US and the rest of the world – questions it should have asked before. It is becoming increasingly important that insurers understand this in more granular detail, and not just a few people but the organisation as a whole needs to understand it. That’s a big change for the insurance company, and it might be quite hard, from a fund manager’s point of view, to answer some of these questions because they are quite tricky areas to get into.

Daniel Gorton: Sourcing new data over and above the usual data is one of the hardest things they have had to grapple with so far. For instance, the requirement to look through funds of funds and understand the ultimate assets, and the need to apportion data into complementary identification codes (CICs), are causing huge amounts of difficulty because there are so many variations to a potential answer.

There are further complications when you get into the areas of private equity, hedge funds and property to a degree. The fund managers will have to explain their strategy and describe their assets in individual detail. It is not at all clear how that will work in practice and the insurer may conclude that, if that manager does not want to give them the answer, then they can’t afford to hold the assets.
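The look-through requirement Gorton describes is essentially a recursive decomposition: each fund holding is unwound, level by level, until only direct assets remain, with exposures then grouped by CIC. The sketch below illustrates the mechanics on an invented fund structure; the weights and CIC values are hypothetical, and a real implementation would also have to cope with missing or stale underlying data.

```python
from collections import defaultdict

# Each fund maps to (weight, holding) pairs; a holding is either the
# name of another fund or a direct asset carrying a CIC (all invented).
FUNDS = {
    "FundOfFunds": [(0.6, "EquityFund"), (0.4, "BondFund")],
    "EquityFund": [(1.0, {"asset_id": "EQ1", "cic": "XL31"})],
    "BondFund": [(0.5, {"asset_id": "B1", "cic": "XL21"}),
                 (0.5, {"asset_id": "B2", "cic": "XL21"})],
}

def look_through(holding, weight=1.0, exposures=None):
    """Walk down the fund chain, accumulating exposure by CIC."""
    if exposures is None:
        exposures = defaultdict(float)
    if isinstance(holding, dict):       # a direct asset: record it
        exposures[holding["cic"]] += weight
    else:                               # a fund: recurse into each holding
        for w, child in FUNDS[holding]:
            look_through(child, weight * w, exposures)
    return exposures

print(dict(look_through("FundOfFunds")))  # {'XL31': 0.6, 'XL21': 0.4}
```

Even in this toy form, the recursion shows why a manager several layers down the chain, with no insurance clients of its own, can still end up receiving a data request.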

That is clearly a potential area of major tension between insurers and asset managers in terms of what information the asset managers can provide or are willing to provide because of the commercial sensitivities of that information. Do you think this could be a big problem?
Randle Williams: You can see it exacerbated in the general use of derivatives because, looking across Europe, there are big companies in some countries that are very comfortable using derivatives for purposes like currency hedging. It is fair to say some firms and countries are more advanced in the use of derivatives, and yet there is an uncomfortable feeling in certain countries. And EIOPA has been a bit reticent about setting out what it thinks is acceptable and unacceptable usage. In the UK, firms have become very used to using derivatives for certain purposes; they have become accustomed to the counterparty risk and familiar with the collateral issues. Clearly there is still something to resolve around the changes to over-the-counter clearing rules, but that whole area – which has become second nature to the UK market – isn’t second nature to lots of other markets. When you go to southern or eastern European states, it’s not at all clear.

To what extent are asset managers set up to provide the information?
Peter Luckhurst: Again, a lot comes down to size. Obviously the bigger managers have the resources and often already have a data warehouse within their shops. A number of them have been built on acquisitions and mergers, which places challenges on them that are often solved by having a data warehouse over their accounting and operational systems. So those larger entities are probably in a better position, certainly in terms of providing a consistent view of their information. What they are prepared to provide is a different debate. Certainly some of them have the infrastructure to be able to do it, but whether they are commercially prepared to, or whether there are other restrictions placed on them because of the nature of the funds they operate, is still an industry challenge that needs to be worked through. With regard to Daniel’s earlier comment about the dialogue between insurers and asset managers, agreeing a data standard would make the communication between the two types of entities more efficient on a market basis, not just on an individual client-by-client basis. We see both sides of the equation as a third-party administrator to both insurers and asset managers, and there are some significant industry challenges out there. We have mentioned look-through, which is obviously a big one, and CICs are another. The regulator perhaps hadn’t appreciated all of those challenges, but there has been further dialogue and it is starting to understand them, particularly the look-through issue – how complex that could potentially be if the insurer goes to the nth degree of understanding the line-by-line information of the individual direct holdings of a fund, particularly where you are talking about fund-of-funds structures. That is a very complex scenario.

John Boggis: We have met insurers that have more than 100 third-party managers, so getting line-level data from all of those asset managers becomes a huge task.

Randle Williams: I totally support the concept of a standard interface, but I suspect it is unfortunately evolving too slowly to be ready in time for the deadlines. One of the areas of tension between the parties is always over who pays for the data interface. If the fund manager pays for it, then it will look for a lock-in clause to make sure it can recoup its costs. These are all new challenges that the parties aren’t used to discussing.

Will fund managers be forced to provide this information because otherwise their insurance clients will move?
Daniel Gorton: That is going to be the cost of doing business with insurance clients. The insurers need the data to comply with Solvency II, so they will naturally expect their asset managers to be able to provide it. So, some of these unforeseen consequences will be contractual and legal issues. If we are expecting asset managers to share a much broader range of information to allow insurers to assemble look-through and fund information, then we are going to see a rise in non-disclosure agreements and contracting between many different organisations. The data vendors themselves, such as Bloomberg, will want to know where their information actually ends up. There is a lot of contractual impact around Solvency II on the asset side because there are now so many more potential data flows between organisations.

Randle Williams: And it will get worse because the need for data to be available on day one or day two inevitably means there will be times when fund managers come back and say some small part wasn’t actually right. But we haven’t really got rules on the degree of proportionality that applies, and that will make everyone a bit jumpy: insurers will worry that they haven’t got the right data and will therefore have problems with the FSA, while fund managers will feel very uncomfortable at being asked to do something that stretches their systems.

Daniel Gorton: That goes back to the communication challenge. You can’t have data that is under remediation being reported in your model, so, if numbers are found to be incorrect during the Solvency II reporting cycle, the communication channel between an asset manager and an insurer to understand and rectify that data problem is going to need to be very timely. Again, that probably doesn’t exist at the moment, so the co-operation between insurers and asset managers will need to increase significantly.

Peter Luckhurst: If you overlay the issue of look-through on a fund-of-funds relationship, it becomes even more complex, because you could be sitting there as an asset manager saying: ‘I don’t have any insurance clients, so Solvency II isn’t going to affect me’, but you are in a fund-of-funds structure further up the chain and, lo and behold, you get that request. Does the insurer then require the lower-level fund manager to provide that data directly to it in order to meet its timeline? Is there a contractual relationship that lets the insurer ask for that, or does the request need to go up through the layers of the fund-of-funds structure? And that adds a delay in terms of delivery.

Daniel Gorton: To what degree is that top-level asset manager providing the fund-of-funds data but not the granular data? To what degree is it responsible for the more granular information that has been provided? While the insurer will ultimately be held responsible for understanding the information, the asset manager still has a huge amount of work to do to compile and understand those many levels through to the ultimate asset. Also, each level will probably be provided in a different format and at a different time. So, if you are at the top of a very big chain, it will be a huge task to be able to get the information in a single format that you can then provide to an insurance client.

To what extent are insurers going to be able to obtain competitive advantage by gathering the right data and managing it in the right way?
John Boggis: Greater transparency within an organisation is something pretty much everyone in the financial markets has been pursuing since the collapse of Lehman Brothers and the downturn in the market. And, if anything, Solvency II is formalising that process.

Peter Luckhurst: We are talking about an insurance entity that is not necessarily a single company in the eyes of Solvency II. Unfortunately – or maybe fortunately, depending on the scenario – Solvency II has caused a number of insurance groups to look at their structure, and perhaps make decisions about how they are structured for regulatory rather than commercial reasons. That is probably a negative: the regulation is driving those decisions rather than the commercial issues around the structure.

Randle Williams: It is not really a competitive advantage, but it is a competitive disadvantage if you can’t do it properly. It almost becomes the cost of doing business. It may have the effect of making insurers a bit more cautious because, if you have assets that are a bit more complex or hard to understand, you might conclude that they are not a big part of your business. Why would you want to spend a disproportionate amount of time on these complex products? So it could impact innovation in new investment products, at least in the short term.

Are there other benefits to be obtained by thinking about asset data management in the wider context – linking it to other reporting requirements placed on firms from other regulation?
Daniel Gorton: Solvency II is but one of a whole raft of regulations that insurers face. Indeed, the financial services industry as a whole faces a huge number of regulations, all of which overlap to some degree. The data that underpins all of this compliance effort is very often common. The firms that will really get the most benefit out of Solvency II are those that have been able to look more holistically across the range of reporting, regulation and compliance efforts, and use the same data process across all of them. So they efficiently gather the data once and report it as many times as they require. An overview of all of your regulatory reporting is absolutely key. We will experience another round of optimisation once the time pressures of Solvency II have been met, so that all of these regulatory efforts come together and organisations become more efficient in their reporting processes.

John Boggis: Many people we have spoken to are looking for a return on the investment they are putting into Solvency II by having better internal reporting. Improved management information reporting, for example, is a return you get from gathering all of this data and making sure it is all useable and accessible within your organisation.

Material contained within this presentation is intended for information purposes only. It is not intended to provide professional counsel or investment advice on any matter, and is not to be used as such. No statement or expression is an offer or solicitation to buy or sell any products or services mentioned. The views expressed within this presentation are those of the contributors only and not those of The Bank of New York Mellon or any of its subsidiaries or affiliates, and no representation is made as to the accuracy, completeness, timeliness, merchantability or fitness for a specific purpose of the information provided in this presentation.
