From plug and pray to plug and play

The appliance software business model provides a way of radically reducing the complexity of software delivery, and in turn will improve the quality of technology and dramatically reduce the total cost of ownership of applications. It will enable the shift towards utility computing and managed services in the future, argues Ron Dembo


The current way many firms deliver and support enterprise software is inefficient for both the software vendor and the client. Large enterprise software applications such as risk management can be difficult to deliver on time and on budget, while upgrades are expensive, and the total cost of ownership is high.

The industry’s attempts over the years to change the business model have led to some improvements. However, what is needed is a completely new model for software delivery – one in which the application and its component parts become like self-contained appliances that plug into the client’s organisation, while retaining the flexibility to provide tailor-made business solutions. This will greatly reduce the complexity of software implementation, and dramatically reduce the cost. But it will require collaboration between the vendor and the client to achieve.

The problems of software delivery are not unique to risk management – they apply to all large complex enterprise software applications, such as enterprise resource planning, customer relationship management and so on. The current model of delivery of such applications is not collaborative. The vendor might sell some services to implement and integrate the application, but essentially it is a complete packaged sale. Everyone is aware of the difficulties of this model – the extended implementation times, the costs, etc – and software vendors have made various attempts to make implementation easier, but with little impact.

ASP (application service provision) models have reduced the total cost of ownership to some extent, but at the expense of functionality and flexibility. So how can we change things? In exploring a possible solution, we will focus on risk management, although the approach applies to all major enterprise software applications.

Complexity

The first step is to recognise that we are dealing with a complex world. Banks are complex. They have all sorts of legacy systems, and the quality of code of these systems and of the overall architecture is likely to be inconsistent. And banks change and evolve. Meanwhile, financial markets are also complex and ever changing. Enterprise risk management is a process of trying to measure and report on exposures in this dynamic complex environment, and the software for this is inevitably complex. It is also the case that very few banks have a clearly articulated architecture for risk management – not surprising given how much this has evolved over the past few years.

So the question becomes: how can we redesign the model of software delivery to reduce this complexity? Reducing complexity will mean we can reduce the time to deliver applications, make it easier to ensure quality and to maintain the application, and, perhaps most important of all, reduce the total cost of ownership. The challenge is: how can we improve this model? We don’t want just a tightening up of quality assurance programmes – we want a model that cuts costs by an order of magnitude.

Let’s look at the current state of affairs in more detail. You have complex software trying to solve a complex problem, where the application is installed in a complex environment. There is no way the environment in which the vendor builds will match the environment that it is eventually going to operate in at the bank. The vendor might use one database for its application, but the client has another as a standard.

In any reasonable-sized bank there are a number of vendors with sub-systems based on different architectural principles that are of different maturity, and in addition there are in-house developed systems.

Further complicating this, there are external constraints such as organisational policies and brand loyalty. Even if both developer and client use the same database, it is unlikely that they will be using the same release. This can result in some incompatibility and implementation problems. It becomes impossible for a vendor to fully test an application before it arrives at the client, hence plug and pray.

To overcome this problem, the environment in which the software is developed should be bundled in with the application and delivered along with it, so that the client doesn’t even need to know what it is.

Think of it like a camcorder with an embedded operating system under the hood – the user doesn’t even need to know it is there, because everything required to operate the appliance is organised around a couple of simple interfaces and controls, which obviate any need for knowledge of the underlying system. By the same token, delivering applications such as risk management can be radically improved if the environment of the software is shrink-wrapped, so that only the interfaces to the client environment count, and not the various versions of the vendor’s and client’s environments. This way, we avoid trying to integrate two complex environments (see figure 1). And the software becomes more like the camcorder – more like an appliance.

Test guarantee

There is another advantage to this appliance model of delivering software – the developers of the application can guarantee that they have tested it in the environment in which it will actually reside.

The second problem with the current model of software delivery is that the client is usually a big complex organisation, with its own particular calculation and reporting requirements, so that it needs data from the application, and has to feed data into it as well.

Typically, implementations involve enormously complex connectivity between the application and the client’s environment. The client gets deeply into certain parts of the application, and is intimately linked to it. So, even if the development environment is packaged with the application, the client still needs to know too much about what is going on inside the box. It’s like wanting to get a picture from a camcorder on to a PC, and having to open the camcorder up and fix a wire to a circuit inside it, and doing the same thing at the PC end; if camcorders were built this way, who would buy them? Camcorders have interfaces that are easy to attach to make downloading a cinch – the interfaces between the two appliances are clean. We need to achieve the same thing when linking enterprise software with the client’s environment.

The software vendor and the client need to agree a well-defined interface between the application and the client’s environment, whereby the client can request information from the application and the application can deliver it without the client having to know anything about the guts of the software. So, if the client wants to know, say, the duration of a bond, instead of having to specify the name of the database, the table within it and the row within that, the client simply passes the request to the interface and the application fetches the data. Where the data is and how it gets to the interface is entirely the vendor’s job.

The software vendor’s responsibilities must be separated cleanly from those of the client (see figure 2), and they must agree a standard interface between the two of them. The vendor then agrees to deliver software that will work with the interface, and the client agrees to maintain its environment to operate with the standard. To create such an interface, and to agree on a separation of responsibilities, will take collaboration, and it will take discipline on the part of both client and vendor to implement and maintain. An XML-based standard may be used to achieve this goal.
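To make the idea concrete, here is a minimal sketch, in Python, of what a request over such an interface might look like. The message format, the tag names and the canned reply are purely illustrative assumptions; the model itself prescribes only that the client names what it wants, not where the data lives.

```python
# Minimal sketch of a request over a hypothetical XML-based data interface.
# Tag names, the instrument identifier and the canned reply are illustrative only.
import xml.etree.ElementTree as ET

def build_request(measure: str, instrument_id: str) -> str:
    """Name the quantity and the instrument, not the database, table or row."""
    req = ET.Element("riskRequest")
    ET.SubElement(req, "measure").text = measure            # e.g. "duration"
    ET.SubElement(req, "instrument").text = instrument_id   # the client's own identifier
    return ET.tostring(req, encoding="unicode")

def parse_response(xml_text: str) -> float:
    """Read the value out of the appliance's reply; its internals stay hidden."""
    return float(ET.fromstring(xml_text).findtext("value"))

if __name__ == "__main__":
    print(build_request("duration", "BOND-001"))
    # The appliance's reply is canned here so the sketch runs on its own.
    reply = "<riskResponse><measure>duration</measure><value>4.73</value></riskResponse>"
    print(parse_response(reply))
```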

Reduced complexity

In this new world, where the environment of the application is now controlled, where the interfaces are agreed on and standardised, and where the client needs to know nothing about the internal workings to get information from the application – which now operates more like an appliance – complexity is reduced dramatically. We have removed the environmental issue. We have removed the human resource issue of the bank having to staff up with people who know the technical details of the software. We avoid the shocks that come with changes and bug fixes in the software, and the cost of upgrading drops dramatically.

It is important to note that this appliance model of software delivery does not mean turning the application into a black box. If the client, or a regulator for that matter, wants to know about the internal workings of the application, they can. But the point is that the client doesn’t have to know the details of the software in order to operate it and extract the information they want.

And this is also much closer to what regulators would like to see when they audit a client, because there will be a simple interface that defines all the points of contact between the client and the software application, every aspect of which is documented.

So far we have discussed the new model only in terms of data, and of getting information in and out of the application. Data is a massive issue in risk management – there is real-time data and historic data, market data, position data, derived risk factor data, instrument terms and conditions, counterparty definitions and entity structures, and so on. That is why the interface between the application and the client has to be so clean and well defined. But in the case of risk, there is more to it than just the data interface – there must also be an application programming interface (API) for analytics.

Analytics must be part of the new model of software delivery because banks change radically from day to day in terms of the analytics they require for their business. Most tier-one and tier-two banks now have important structured products groups, which make a lot of money. The structured products business is all about innovation, and these groups are constantly creating new models. And time to market is crucial in the business, so the new models must be brought into the bank’s processing systems quickly – a bank doesn’t want to have to rebuild its risk system every time a new product comes on the market.

There must be an interface that will allow new models to be integrated with the application quickly and easily. This is simply good architectural practice: acknowledging that change will occur and designing an architecture to accommodate it. For example, a few years ago credit derivatives were the purview of a few banks and volumes were small. Now they are mainstream and are growing exponentially. A bank that had not accounted for such change would find it expensive to add the corresponding risk functionality to its enterprise system, and hence would face natural barriers to doing the business. A bank with a well-designed risk environment could add such product types efficiently and cheaply, and would be able to handle this and other future product types, thereby creating a competitive advantage.
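As an illustration only, the kind of analytics interface this implies might look like the following Python sketch: a small plug-in contract that lets a new model, say a credit derivative pricer, be registered with the appliance without touching its internals. The class and function names are hypothetical, and the pricing formula is deliberately trivial.

```python
# Illustrative sketch of a plug-in analytics API. The appliance defines a small
# contract that new pricing models implement, so a new product type can be added
# without rebuilding the risk system. All names here are hypothetical.
from abc import ABC, abstractmethod

class PricingModel(ABC):
    """Contract every plug-in model must satisfy."""
    @abstractmethod
    def price(self, instrument: dict, market_data: dict) -> float:
        ...

_MODELS: dict[str, PricingModel] = {}

def register_model(product_type: str, model: PricingModel) -> None:
    """Attach a new model to the appliance under a product type."""
    _MODELS[product_type] = model

def price(product_type: str, instrument: dict, market_data: dict) -> float:
    """The appliance routes pricing calls to whichever model is registered."""
    return _MODELS[product_type].price(instrument, market_data)

# A structured products desk could then drop in a new model without touching
# anything else. Here, a crude credit default swap mark-to-market:
# (par spread - contract spread) x risky annuity x notional.
class SimpleCDSModel(PricingModel):
    def price(self, instrument: dict, market_data: dict) -> float:
        return ((market_data["par_spread"] - instrument["spread"])
                * market_data["risky_annuity"] * instrument["notional"])

if __name__ == "__main__":
    register_model("CDS", SimpleCDSModel())
    pv = price("CDS",
               {"spread": 0.010, "notional": 10_000_000},
               {"par_spread": 0.012, "risky_annuity": 4.2})
    print(f"PV: {pv:,.0f}")  # 84,000 with these illustrative numbers
```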

Reduced contact points

So in the appliance model for risk management there will just be two points of contact between the application and the client – one for data and one for analytics (see figure 3). The vendor and the client agree to operate around these two interfaces – they agree on standards for the interfaces, on a process for upgrading them and for testing what gets delivered to them, and they each agree on their responsibilities on their side of the interface.

Now, because we have ring-fenced the application and its environment, if the vendor were to make a major upgrade, it could deliver the new version almost without the client even knowing that the upgrade had occurred – as long as the interfaces don’t change. And the vendor could remotely control and service the product without disturbing the client, and without needing to know how the client runs its business.

Because it is so cheap to deliver software in an appliance model, the software vendor will deliver identical software to clients no matter what their business. What the client licenses and pays for is how much of the interfaces it uses. Some banks might access only a small portion of the application by licensing a small section of the interface, while others may use it all. This is similar to what hardware suppliers such as Sun Microsystems, IBM and HP are now proposing: charging clients for the processing hours they use, rather than for the boxes they buy. Delivering the same software to every client makes the vendor’s life easier, by reducing complexity and thereby reducing cost.

The appliance model also enables risk management to be delivered to many more parts of the bank, because it is now so much easier to do so. If a trader wants to use some of the functionality of an enterprise risk management system, in today’s world it is brutally hard to deliver it because of the infrastructure you need to get the trader up and running. But if the trader could simply plug into the risk management appliance and get a Monte Carlo simulator, and it was easy and cheap, then why wouldn’t they do it?

The appliance model of delivering software also fits in with how the world of technology is changing. The big hardware suppliers such as Sun, IBM and Hewlett-Packard are adopting a utility model for delivering processing power. Not only are they planning to charge only for the processing power organisations use on boxes installed in-house, but they are also creating huge data processing centres with tens of thousands of servers with the aim of selling processing time online on demand. Because in the appliance model, the vendor delivers the same software to everyone, and because the interfaces are clear and well defined, the vendor could just as easily deliver the software to a third-party processing utility such as Sun, HP or IBM to run the application, either fully or partially, on behalf of the client.

If a vendor wanted to do this today, it would almost have to deliver a copy of the client’s environment to the utility, which would be a massive undertaking. In the appliance world, the vendor simply sends the same software to the processing utility as it does to the client, and the utility acts as a seamless extension to whatever is being run in-house by the client. So the whole technology parcel – hardware and software – becomes a service.

In the real world, clients will want custom extensions and custom analytics – that demand will never go away. But because the interfaces are so clear in the appliance model, anyone can build the extensions – the client, the vendor or the utility service provider. It won’t require someone who is an expert in the application – all it will need is someone who can read an API, and such people are far easier to find than expert financial engineers.
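To give a rough sense of that point, a custom extension written purely against the hypothetical interfaces sketched earlier could be as small as the fragment below; a stub appliance with canned numbers stands in for the real thing so the example runs on its own.

```python
# Rough illustration only: a custom extension built purely against a hypothetical
# data interface, with no knowledge of the appliance's internals.

class StubAppliance:
    """Stand-in for the appliance's data interface; returns canned numbers."""
    def get(self, measure: str, instrument_id: str) -> float:
        canned = {("duration", "BOND-001"): 4.73, ("duration", "BOND-002"): 9.10}
        return canned[(measure, instrument_id)]

def duration_report(appliance, positions) -> str:
    """A custom report that knows only the interface, not the internals."""
    rows = []
    for p in positions:
        d = appliance.get("duration", p["instrument_id"])
        rows.append(f"{p['instrument_id']}: duration={d:.2f}  notional={p['notional']:,}")
    return "\n".join(rows)

if __name__ == "__main__":
    book = [{"instrument_id": "BOND-001", "notional": 5_000_000},
            {"instrument_id": "BOND-002", "notional": 2_500_000}]
    print(duration_report(StubAppliance(), book))
```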

In fact, it is my view that the utility model of providing processing power won’t work unless vendors start building software that is more like appliances. But if they do, then the whole effort of running an application will become more like using the internet: the client won’t care about what language the software is written in, or in which environment it is running, or where the processing is taking place – all they will care about is that it works and is easy to operate.

So how does the appliance model differ from the current ASP model of delivering software? With an ASP, what you see is what you get over the wire – you can’t change or customise it. You can’t tell Bloomberg, for instance, to change the interface on its data and analytics service. An ASP may be cheap and easy, but it is rigid. Nevertheless, it makes sense for certain applications for certain clients in certain situations. By contrast, the appliance model is a managed service with all the benefits of custom extensions. It is a managed service whether it is managed by the client, the vendor or a hosting service. It is designed from scratch to be easy to operate and agnostic about where it physically runs. It is a service with clearly defined interfaces. And it has all the benefits of custom extensions because the interfaces allow the client to add arbitrary extensions on top of the software, such as new analytics, without having to get down inside the application, where any change can break the whole system.

It is important to repeat that the appliance model is not synonymous with a black box. People sometimes object to the idea of software as a managed service because they want to control their own applications. With the appliance model and the processing utility, it becomes a simple business decision whether the client runs the application on its own site or at a utility – the client is not forced to do it one way or the other, and still retains control.

Encapsulating an application, and the environment in which it runs, as an appliance, with clean interfaces and agreed separation of responsibilities between vendor and client, greatly reduces the complexity of delivering software. It will make deployment much quicker, and significantly reduce the cost of maintenance. It also makes business continuity and disaster recovery much easier. If a third-party processing utility is providing managed services to a client, and the client’s power fails, the utility can pick up the full operation of the application (like the client, the utility will have a complete copy of the software).

There are other benefits to the appliance model. Business continuity is becoming an essential feature of risk architecture design. Previously, if a risk department was reporting once a day to the enterprise, and once a quarter to management, it didn’t matter a great deal if the software went down for a couple of hours. But now if you are in a Basel II world and your loan officer depends on the software, it does matter. The appliance model makes it cheaper and easier to achieve business continuity and disaster recovery.

It also becomes easier for the vendor to provide round-the-clock maintenance because every single client has the same copy of the software. In other words, the appliance model provides all the flexibility of the current model of installed software, with none of the headaches. And all the vendor and client have to commit to is an agreement on standard interfaces.

So how can a bank get from the current model of plug and pray to the appliance model of plug and play? There are a number of steps it needs to take. The bank must start by setting out its long-term vision of how it wants to operate in the future – this will guide the design of the systems architecture. Next, it needs to make the business case for adopting the new model – implementing an appliance model could take one to two years, but because it will reduce complexity it will cut ongoing software costs dramatically. The bank must then identify all the interfaces between itself and the application, document them fully and fill in the gaps, and it must put a process in place for keeping that documentation current. It must agree with the vendor on a standard set of interfaces and a separation of responsibilities. (The appliance model is in the interests of vendors too. By collaborating with its clients, a software vendor can achieve higher margins while simultaneously reducing the complexity and the cost of delivery and ownership of its applications.) Finally, the bank should have a tactical plan for moving incrementally towards the long-term goal of a full appliance model.

So far we have talked about just a single piece of appliance software – in this case, a risk management application. But as I said at the outset, this model can apply to any enterprise software. I believe banks should redesign their systems towards a goal of a set of mutually exclusive appliances.

These appliances should map to the functions of the bank, so that the responsibilities they are ‘entrusted’ with do not overlap. By this I mean that each appliance has its own set of tasks. For example, the risk appliance could be given the task of generating the curves on which financial instruments are valued. This should not be done anywhere else in the bank – if anyone needs the curves, they get them from the risk appliance. The financial appliance will perform the financial reporting calculations; the ‘plumbing’ appliance will convey data from one place to another, store it safely, retrieve it, and so on. The full set of appliances should map on to the functions of the bank, and reflect its operations (see figure 4).

With Basel II looming over banks’ heads, the appliance model is particularly timely. Implementing Basel II will require integrating many types of applications – the diverse types of risk software, financial software, etc – each one of which could be immensely complex in itself, and all of which must be integrated in some way. Unless we can put some order into this, it will be a nightmare to implement. The appliance model offers a way forward. I will discuss this further in subsequent columns.

Software business models

ASP – You get what you see; rigid but costs well known; useful in cases where customisation is not an important requirement

Appliance – Standardised interfaces; customisable; easy and cheap to maintain; can be run as a managed service; but requires collaboration between client and vendor

Custom – Traditional way of delivering enterprise software; customer has lots of say in developments; highly customised; no two implementations exactly alike; very expensive to maintain
