Bruno Dupire was 15 years old when Fischer Black, Myron Scholes and Robert Merton published their keystone work on the calculation of option prices. He would be almost 30 by the time he first entered the finance industry, after an early life as an academic mathematician - but the foundation of his career since, as for so many other quants, has been his efforts to expand and refine the Black-Scholes model.

After short periods at several smaller French banks, he came to Societe Generale (SG) in Paris in 1991. "When I joined SG in Paris, I was bombarded with plenty of new problems, but the first thing was to have the simplest possible extension of Black-Scholes to fit market data," he recalls. Specifically, he concentrated on combining the model with time-dependent volatility data. The original Black-Scholes model assumed constant volatility - one of its most severe limitations. Dupire attacked the problem via an easy stepping stone: fitting term structures of time-dependent interest rates and volatility to the volatility implied by option prices.

From this foundation, he moved on to a more challenging area: modelling stochastic volatility. Success would facilitate the creation and pricing of a completely new product - variance swaps. The resulting paper, Model art, was published in Risk in September 1993.

Dupire's 1993 paper followed the lead set by David Heath, Robert Jarrow and Andrew Morton in 1987, when they produced a model for pricing interest rate options, despite the absence of a market for short-term rates. Starting from an observed yield curve, they were able to produce a series of instantaneous forward rates, which in turn allowed them to derive a unique arbitrage-free price, allowing pricing and hedging of any interest rate option.

"The paper showed you can combine European options to create a logarithmic profile.
And if you delta hedge this profile, you capture the square of the realised volatility - called realised variance, which is the payout of the variance swap," Dupire explains. "The first building block was the construction of variance swaps and the creation of the notion of forward variance, and the extension of it as instantaneous forward variance. Once you've shown you can compute it and lock it from existing options, you can make some assumptions on how it evolves through time and can risk-neutralise it."

However, the formula failed to attract attention. "It is virtually unknown," Dupire concedes. This is largely because the paper was closely followed later the same year by Steven Heston's closed-form stochastic volatility pricing formula, which is now much more widely used.

But Dupire had already started working on another problem - modelling the volatility smile (a reference to the fact that the volatility implied from the market price of vanilla options varies with maturity and strike). "I realised that calibrating the model to the market was extremely important," he says. "It was well understood on the interest rate side but not on the volatility side, which is more complex. Historically, it has always taken time to absorb notions from the rates side, such as stochastic models or calibration. I realised that the stochastic volatility model fitted the term structure of implied volatility, but not the full surface of implied volatility - which meant the dependence of implied volatility on maturity and strike, commonly called the skew."

Previous attempts to adapt Black-Scholes to incorporate the smile had not been successful. With the growth of markets in exotic products such as barrier options - which are highly dependent on skew - the market clearly needed a workable pricing model.
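The mechanism Dupire describes can be sketched numerically: by Ito's lemma, delta hedging a short position in the log contract -2·ln(S_T/S_0), holding 2/S_t of the underlying, accrues exactly the realised variance of the path. The simulation below illustrates this under an assumed frictionless market with a constant-volatility geometric Brownian motion and discrete rebalancing standing in for continuous hedging; all parameter values are hypothetical.

```python
# Sketch: delta hedging a log contract captures realised variance.
# Assumptions: frictionless market, zero rates, constant 20% volatility,
# continuous rebalancing approximated by many small discrete steps.
import numpy as np

rng = np.random.default_rng(42)
sigma, T, n = 0.20, 1.0, 200_000          # true vol, horizon, hedge steps
dt = T / n

# One GBM path, S0 = 1 (zero drift of log price apart from the Ito term)
log_returns = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
S = np.exp(np.concatenate(([0.0], np.cumsum(log_returns))))

# Per step: 2*dS/S (holding 2/S_t shares) minus 2*d(ln S) (the short log
# contract); the sum approximates the integral of sigma^2 dt.
dS_over_S = np.diff(S) / S[:-1]
pnl = np.sum(2 * dS_over_S - 2 * log_returns)

print(pnl)   # close to sigma**2 * T = 0.04
```

The same logic underpins the static side of the replication: the log profile itself can be assembled from a strip of out-of-the-money European options weighted by the inverse square of their strikes, which is what makes forward variance computable and lockable from existing options.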
Dupire's first attempt was not a success - a discrete model he developed at Societe Generale in 1992 using a trinomial tree "worked quite well", he says, but "it wasn't clear that it would converge to something well defined if you move from weekly time steps to daily to a continuous time model". To price properly, a model was required that could operate in continuous time - and in his Risk paper of January 1994, Dupire, by then at Paribas, described one. Published as Pricing with a smile, Dupire's paper is still the most cited work ever printed in Risk.

At the heart of his model is the concept of local volatility, which is calculated from the prices of options across all strikes and maturities using the same tree framework as in his first attempt. In continuous time, local volatility can also be obtained using partial differential equations.

The model was a success almost immediately. "I think the concept propagated quite quickly - if not like wildfire," says Dupire. Although the tree method is now little used, the Dupire equation is still a standard tool. "Some French banks have 1,000 or 2,000 PCs working day and night to revalue the full book on this," he adds. "It's used for pricing and risk management - it has been nice to see it receive wide acceptance."

Using Dupire's methods, local volatility can not only be described but also traded, allowing dealers to take positions on the likely level of volatility at any point in the future. "It's exactly like with rates. If you see you have forward rates of 10% and you think the rate will be lower when you arrive at that date, you can borrow and lend at similar maturities to lock the forward rate and capture this difference," Dupire says. "What I've done is the same thing with volatility - it's more complex because you have both strike and maturity, but I think it will stand the test of time because it was important to develop the concept."

Using the model is not straightforward.
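In its continuous-time form, and with rates set to zero for simplicity, the Dupire equation reads sigma_loc^2(K, T) = (dC/dT) / (0.5 * K^2 * d2C/dK2), where C(K, T) is the call price surface. The sketch below is a consistency check rather than Dupire's original tree implementation: it generates call prices from Black-Scholes with an assumed flat 20% volatility, applies the formula by finite differences, and recovers that flat level.

```python
# Sketch: extracting local volatility from call prices via the Dupire
# formula (zero rates): sigma_loc^2 = (dC/dT) / (0.5 * K^2 * d2C/dK2).
# Prices come from Black-Scholes with a flat 20% vol, so the recovered
# local vol should be ~20% - an illustrative check, not a market example.
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, vol):
    """Black-Scholes call price with zero rates and dividends."""
    d1 = (log(S / K) + 0.5 * vol**2 * T) / (vol * sqrt(T))
    return S * norm_cdf(d1) - K * norm_cdf(d1 - vol * sqrt(T))

S0, flat_vol = 100.0, 0.20
K, T, dK, dT = 105.0, 1.0, 0.5, 1e-4      # hypothetical grid point

# Finite-difference derivatives of the price surface
dC_dT = (bs_call(S0, K, T + dT, flat_vol)
         - bs_call(S0, K, T - dT, flat_vol)) / (2 * dT)
d2C_dK2 = (bs_call(S0, K + dK, T, flat_vol)
           - 2 * bs_call(S0, K, T, flat_vol)
           + bs_call(S0, K - dK, T, flat_vol)) / dK**2

local_vol = sqrt(dC_dT / (0.5 * K**2 * d2C_dK2))
print(local_vol)   # ~0.20
```

In practice the surface comes from quoted option prices rather than a model, which is where the data problems discussed below arise: the formula needs a dense, arbitrage-free grid of strikes and maturities.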
The most obvious problem is that the calculation of local volatility requires a full range of option prices, which is not always available. Implementing a volatility arbitrage strategy using local volatility modelling also works perfectly only in "a slightly idealised world where you have a frictionless market and plenty of strikes and maturities", Dupire says.

And, he warns, regarding the model as a prediction tool is a mistake: "The model postulates that the forward values you have computed will be realised in the future, which is really a very arbitrary statement. If you ran some tests, you would see that everything is stochastic and you don't have a good fit. But the important thing is that, as local volatilities are forward values, any stochastic volatility model without jumps that is calibrated to the market will be centred on the local volatility. This is important, because some people say the local volatility model is bad as it's a poor predictor of future volatilities. I would say 'wonderful' - if the market systematically deviates from local volatility, you can put in place a statistical arbitrage strategy."

Take-up was rapid among quants but, as with many other models, the concept of local volatility was slow to creep onto the trading floor. "In banks, you have to distinguish two things: models that a quant team develops internally to test different ideas, and models used by traders as an official revaluation system. Very few banks use stochastic volatility models for full revaluation and aggregation of positions, and almost none use models with jumps, while all serious quant teams have developed many versions of both of them." A lag in take-up is inevitable for any new model, he says.

Dupire kept working on the volatility question after 1994.
In 1996, he published the widely cited A unified theory of volatility (a Paribas working paper), which he describes as "merging the two approaches - the stochastic volatility model consistent with the term structure of implied volatility in the 1993 paper, and the 1994 paper that determined a state-dependent volatility consistent with the full surface - to develop a stochastic model consistent with the full surface". But the model was both conceptually complex and computationally intensive, and modern approaches to the problem tend to rely more on jump processes.

Today, Dupire's attention remains on volatility. "I'm always more comfortable starting from a real problem. I hate thinking 'Well, I had fun doing that, but so what?' Volatility is still expanding in interesting directions - for example, volatility arbitrage between different measures of historical volatility, or computing the full surface of historical volatility, and arbitrage of the dynamics of the skew. Now we have a lot of volatility derivatives - options, Vix options, options on realised variance - and all these products have interesting and complex links."

He gives the example of a project to price options on realised variance, which yielded a practically useful, if theoretically messy, result. "I realised it was linked to the Skorohod embedding problem in stochastic calculus. There was a direct map with the solutions. But there were 22 Skorohod solutions corresponding to different models of pricing exotic options, so we had 22 different prices. We were eventually able to tell the traders 'you have to sell all the puts where the implied strike is above the variance call strike'. I would say that was a good use of theory - we were able to go back to the original problem with a very precise recommendation."

He is also working on what he calls "the engineering side" - designing better methods to visualise financial data, such as animations of changing yield curves.
But his attention is being drawn more and more to the error margins of financial modelling. Just as the volatility smile represents the mismatch between the Black-Scholes model and the real world, measures of hedge incompleteness will give risk managers the tools they need to evaluate whatever hedges they have put in place.

"I've developed a concept I call H2 that tells you how much of the initial risk you have managed to hedge," he explains. "You first define your target, which is the instrument you want to hedge, and the model in which you are computing the risk, and the set of hedging instruments or hedging strategies - your target could be an exotic option and your strategy would be an initial position in vanilla options plus delta hedge; or it could be a collateralised debt obligation that will be hedged with individual credit default swaps. It's a very generic approach, but eventually I managed to compute a coefficient of hedgeability - a measure of hedge efficiency that tells you how much of the initial risk it has absorbed."

Liquidity - in the sense of market depth, or the ability of a market to absorb a large order - is also becoming more accessible as a subject of study by quants, Dupire says. "Historically, it has not interested quants, for several reasons. First, years ago we didn't have electronic trading, and although we always had the notion of the order book, we couldn't model it. Now, it's more easily quantifiable. Also, quants concentrated on structuring, not on the prop trading desk. The classic job of a quant was to compute the price of a product, which is then sold at a margin that allows the bank to remain risk-neutral. And pure quants tended to dislike it because it is very data-driven and experimental."

But the growth of hedge funds, which have hired quants to focus on risk premiums, has had an educational role for banks, Dupire says - and execution, as distinct from pricing or hedging, has become a hot topic since 2000.
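The article gives no formula for Dupire's H2 coefficient, but a measure "that tells you how much of the initial risk the hedge has absorbed" can be sketched, on the author's own description, as a variance-reduction ratio in the style of an R-squared: 1 - Var(residual)/Var(target), with hedge ratios chosen optimally over scenarios. The toy example below is purely illustrative - the target, hedge instruments, scenario model and least-squares criterion are all assumptions, not Dupire's actual definition.

```python
# Sketch of a hedge-efficiency coefficient in the spirit of H2: what
# fraction of a target's variance is absorbed by a set of hedges. The
# ratio 1 - Var(residual)/Var(target) is an illustrative assumption,
# not Dupire's published definition. All instruments are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
# Terminal-spot scenarios under an assumed lognormal model (20% vol)
S_T = 100.0 * np.exp(-0.02 + 0.2 * rng.standard_normal(n))

target = np.maximum(S_T - 110.0, 0.0)        # stand-in target: 110-strike call
hedges = np.column_stack([
    S_T,                                     # position in the underlying
    np.maximum(S_T - 100.0, 0.0),            # vanilla 100-strike call
])

# Least-squares hedge ratios (with a cash position), then the ratio of
# residual variance to target variance.
X = np.column_stack([np.ones(n), hedges])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
residual = target - X @ beta

h2 = 1.0 - residual.var() / target.var()
print(h2)   # fraction of the target's variance absorbed by the hedge
```

A value near 1 means the hedging instruments span almost all of the target's risk; a value near 0 means the "hedge" leaves the position essentially unhedged - which is the generic read-out Dupire describes, whether the target is an exotic option or a CDO hedged with single-name credit default swaps.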
As part of the wave of mathematicians and physicists moving from academia to the banking world, Dupire has seen the crossover between science and finance at close quarters, and has used concepts such as Brownian motion and heat diffusion in his own financial work. But he warns of the danger of assuming that the subjects are completely congruent. "They have a tendency to come with their own methods and twist existing problems to a zone where they feel comfortable," he says. "In physics, people like to use spectral methods, such as Fourier analysis, to get expressions for characteristic functions, and there's a tendency among newly arrived quants to think that the problems of finance are of this type - parameterisation of the model that allows you an analytical formula. I think that's interesting and it can help to have speedy computations, but for me it's very far from the real problems of finance. The real issues are more related to hedging issues or liquidity structuring."

Despite his own background in mathematical physics, Dupire has his eye on a different field. "I don't think there's going to be a big wave of behavioural finance, but there should be. It has many sound points, but quants tend to disregard it - not many banks are hiring psychologists. Some people say it is just a nice collection of useless anecdotes, but I think it is very profound."

As he sees it, behavioural finance has the potential to explain, and conceivably model, many of the sources of error that fall under the heading of 'market irrationality' and still bedevil quants whose models are based on the concept of rational actors. "Look at mortgage-backed securities, for example. The borrower has an option to prepay and some quants would say that this is a rational choice problem - but people don't behave optimally. Herding, panic, news flow and overreaction have to be better understood and modelled, and banks account for this through a prepayment model."
At the micro-structural level, market analysis will have to explore the irrational as well as the rational side of trading activity in order to build better models, he says. Despite his past successes, it seems there is still plenty to keep Dupire busy.