…even of analytic approximations such as SABR [Hagan et al., 2002]. For example, Brigo and Mercurio [2006] observed of the short‐rate model of Black and Karasinski [1991] that
the rather good fitting quality of the model to market data, and especially to the swaption volatility surface, has made the model quite popular among practitioners and financial engineers. However, … the Black–Karasinski (1991) model is not analytically tractable. This renders the model calibration to market data more burdensome than in the Hull and White (1990) Gaussian model, since no analytic formulae for bonds are available.
It is undoubtedly true that the relative tractability of the Hull–White model has been an important factor resulting in its much wider adoption as an industry standard.
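To make concrete the tractability in question: in the Hull–White model the time-t price of a zero‐coupon bond maturing at T is available in closed form. The expressions below follow the standard presentation (see e.g. Brigo and Mercurio [2006]), with a the mean‐reversion speed, σ the short‐rate volatility, and P^M(0,·), f^M(0,·) the market discount curve and instantaneous forward curve respectively:

```latex
\begin{aligned}
P(t,T) &= A(t,T)\, e^{-B(t,T)\, r(t)}, \qquad
B(t,T) = \frac{1 - e^{-a(T-t)}}{a},\\
A(t,T) &= \frac{P^M(0,T)}{P^M(0,t)}
  \exp\!\left( B(t,T)\, f^M(0,t)
  - \frac{\sigma^2}{4a}\left(1 - e^{-2at}\right) B(t,T)^2 \right).
\end{aligned}
```

No comparable formula exists under Black–Karasinski, where it is the logarithm of the short rate that is Gaussian, so bonds, and hence all calibration instruments, must there be priced numerically.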
No single reason can be cited to account for the relatively limited use to which analytic approximations are put. Practitioners' views vary greatly depending on the types of models they are looking at and what they are using them for. A number of factors can be pointed to, as we shall elaborate in the following section. For the moment we make the following observations, specifically comparing analytic pricing with a Monte Carlo approach.
There is a general distrust among financial engineers of methods involving any kind of approximation. The fact that, where results involve power series‐like constructions, it may not be possible to guarantee arbitrage‐free prices in 100% of cases is often cited as a reason to avoid such approximations in pricing models intended for production purposes. Furthermore, it can be more work to assess the error implicit in a given approximation than it is to compute the prices in the first place.
While analytic methods are computationally more efficient, they appear to be intrinsically less scalable than Monte Carlo methods from a development and implementation standpoint. Whereas the Monte Carlo implementation of a model mainly involves the simulation of the underlying variables, with different products merely requiring different payoffs to be applied, each product variant tends to have its own analytic formula, with limited scope for reuse across products. Also, adding a stochastic factor to a Monte Carlo model can often be handled as an incremental change, whereas analytic methods will often break down completely when an additional risk factor is introduced.
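The contrast can be illustrated with a minimal sketch (hypothetical Python, not drawn from any production library): a single Monte Carlo engine under, say, lognormal dynamics prices any European product by swapping the payoff callable, whereas each of these products would require its own analytic formula.

```python
import numpy as np

def mc_price(payoff, s0, r, sigma, t, n_paths=100_000, seed=42):
    """Price a European payoff by Monte Carlo under lognormal dynamics.

    The simulation of the underlying is product-agnostic; only the
    payoff callable changes from product to product.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * payoff(s_t).mean()

# Different products reduce to different payoffs; the engine is reused as-is.
products = {
    "call K=100":    lambda s: np.maximum(s - 100.0, 0.0),
    "put K=90":      lambda s: np.maximum(90.0 - s, 0.0),
    "digital K=110": lambda s: (s > 110.0).astype(float),
}
for name, payoff in products.items():
    print(name, mc_price(payoff, s0=100.0, r=0.02, sigma=0.2, t=1.0))
```

Adding a stochastic factor would change only the simulation step, leaving the payoff library untouched; an analytic formula, by contrast, must typically be re-derived from scratch.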
Another argument that is not infrequently heard against the introduction of new analytic results is that it is just too much trouble to integrate them into pricing libraries which are already quite mature. An accompanying argument may be that, since the libraries of financial institutions are already written in highly optimised C++ code, any gains that might be made are only likely to be marginal.
There is also a suspicion concerning the utility of perturbation methods insofar as, while the most interesting and challenging problems in derivatives pricing occur where stochastic effects have a significant impact on the price, most perturbation approaches rely in some way on the smallness of a volatility parameter, usually a term variance.1 But for this parameter to have a significant impact on pricing it cannot be too “small”, so we are led to expect that a large number of terms will be needed in any approximating series to secure adequate accuracy in many cases of importance.
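The concern can be made concrete with a standard example: the at‐the‐money Black–Scholes call admits the classical small‐variance expansion C = S(2Φ(v/2) − 1) = (Sv/√(2π))(1 − v²/24 + v⁴/640 − v⁶/21504 + …), where v = σ√T is the term standard deviation. The sketch below (plain Python, using the coefficients just quoted) exhibits how the truncation error behaves as v grows:

```python
from math import erf, pi, sqrt

def atm_call_exact(s, v):
    """Exact at-the-money (forward) Black-Scholes call; v = sigma*sqrt(T)."""
    norm_cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return s * (2.0 * norm_cdf(v / 2.0) - 1.0)

def atm_call_series(s, v, n_terms):
    """Truncated small-variance expansion of the ATM call."""
    coeffs = [1.0, -1.0 / 24.0, 1.0 / 640.0, -1.0 / 21504.0]
    series = sum(c * v ** (2 * k) for k, c in enumerate(coeffs[:n_terms]))
    return s * v / sqrt(2.0 * pi) * series

for v in (0.1, 0.5, 1.0, 2.0):
    exact = atm_call_exact(100.0, v)
    errors = [abs(atm_call_series(100.0, v, n) - exact) for n in (1, 2, 3, 4)]
    print(f"v={v}: exact={exact:.4f}, abs error with 1..4 terms: "
          + ", ".join(f"{e:.2e}" for e in errors))
```

For term standard deviations typical of traded options (v ≲ 0.5, say), two terms already give errors well inside any realistic bid–offer spread; only for very large v do further terms become necessary.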
A more recent argument which the author has encountered in a number of conversations with fellow researchers is that, insofar as more efficient ways are sought to carry out repetitive execution of pricing algorithms, the strategy adopted in the future will increasingly be to replace the time-consuming solution of SDEs and PDEs not with analytic formulae but with machine-learned algorithms which can execute orders of magnitude faster (see for example Horvath et al. [2019]). The cost of this approach is a large amount of up-front computational effort in the training phase, where the full numerical algorithm is run many times over a wide range of market data configurations and product specifications so that the machine-learning algorithm can learn what the “right answer” looks like and replicate it; there is also a concomitant loss of accuracy. But if, as is often the case, the requirement is to calculate prices for a given portfolio, or the CVA associated with a given “netting set” of trades with a given counterparty, over multiple scenarios for risk management or other regulatory purposes, the up-front cost can be amortised against a huge amount of subsequent usage of the machine-learned algorithm. And since machine-learning approaches are a fairly blunt instrument, there is no need to customise the approach to the particular problem addressed, as there would be if perturbation methods were used instead as a speed-up strategy trading some accuracy for speed.
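A minimal sketch of this workflow is given below, assuming scikit-learn is available and using the Black–Scholes formula as a stand-in for the slow PDE or Monte Carlo solver that would be learned in practice:

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(s, k, sigma, t, r=0.02):
    """Stand-in for an expensive pricer (here just Black-Scholes)."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

# Up-front training phase: run the "slow" pricer over many market data
# and product configurations (in practice one would scale the features
# and tune the architecture).
rng = np.random.default_rng(0)
n = 20_000
x = np.column_stack([
    rng.uniform(50.0, 150.0, n),   # spot
    rng.uniform(80.0, 120.0, n),   # strike
    rng.uniform(0.05, 0.60, n),    # volatility
    rng.uniform(0.10, 5.00, n),    # maturity
])
y = bs_call(*x.T)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(x, y)

# Subsequent usage: fast approximate prices over many risk scenarios,
# amortising the training cost at the expense of some accuracy.
scenarios = np.column_stack([
    rng.uniform(50.0, 150.0, 5),
    np.full(5, 100.0), np.full(5, 0.2), np.full(5, 1.0),
])
print(np.c_[surrogate.predict(scenarios), bs_call(*scenarios.T)])
```

The fit call embodies the up-front cost and the predict call the amortised fast evaluation, with the accuracy loss visible in the comparison of the two output columns.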
Finally, there is not uncommonly a perception that, unlike the earlier analytic option pricing formulae deduced by suitable application of the Girsanov theorem, with which financial engineers tend to be familiar, perturbation‐based methods are by comparison something of a dark art. Many of the results are derived using Malliavin calculus or Lie theory, with which relatively few financial engineers are familiar, and are often presented in published research papers in notation which is relatively opaque and closely tied to the method of derivation. Other derivations are performed using methodologies and notations borrowed from quantum mechanics or other areas of theoretical physics with which a contemporary financial engineer is unlikely to be familiar. There is, furthermore, no clearly defined body of theory on which practitioners of perturbation analysis can rely; books offering a unified approach to perturbation methods applicable to a range of problems in derivatives pricing, such as Fouque et al. [2000], Fouque et al. [2011] and Antonov et al. [2019], are few and far between.
1.2 IN DEFENCE OF PERTURBATION METHODS
Although the arguments presented above challenging the merit of extending the range of analytic formulae available for derivatives pricing by means of perturbation expansion techniques may appear compelling, we suggest that, when they are unpicked a little, their apparent validity starts to unravel. More specifically, they are premised on a view of what is possible with perturbation methods which is challengeable in the light of recent theoretical developments, in particular those set out in this book. They depend, furthermore, on a view of the practical purposes which option pricing methods must serve in the industry, and of the constraints they must consequently satisfy, which is likewise challengeable and not altogether up to date.
While derivatives pricing methods were developed around the concept of risk‐neutral pricing, to guarantee the absence of arbitrage opportunities through which market makers could systematically lose money, pricing models are in practice increasingly used for risk management purposes rather than for the calculation of prices for market‐making. So, even if an approximation method might technically give rise to arbitrage opportunities in a small number of extreme cases, provided no trading takes place at these prices this is not necessarily a problem. Indeed, in a risk management context we are often more interested in real‐world probabilities than in their risk‐neutral counterparts, since it is extreme real‐world events and their frequency of occurrence in practice which can lead to the destabilisation or demise of a financial institution. For example, a report by Fintegral and IACPM [2015] surveying 37 global and regional financial institutions concludes that the calculation of counterparty credit risk (CCR) tends to operate under “real‐world” assumptions, using historical volatilities to calibrate the Monte Carlo simulation.

Also, since risk management is generally concerned with portfolio aggregates rather than individual trades, and typically involves computing prices under hypothetical future scenarios, what matters is not so much the size of the error in the pricing of an individual trade as the expected aggregate error, which can often be estimated to a sufficient degree of accuracy by fairly heuristic methods. This is recognised in the Basel IV (FRTB) regulatory framework which has been proposed to replace VaR: internal models used for risk management purposes do not have to be validated in terms of their ability to price individual trades accurately; rather, the aggregate risk numbers produced need to be sufficiently close to those obtained using end‐of‐day pricing models in a back‐testing exercise.

Another factor is that, whereas the main criterion pricing models have to satisfy is accurate calculation of the first moment of a distribution,