The characteristics enumerated above do not exhaust all dimensions of model risk, however. Even if a model is correctly specified and parameterized inasmuch as it produces reliable forecasts for currently observed data, the possibility remains that the model may fail to produce reliable forecasts in the future.

      Two assumptions are regularly made about time series as a point of departure for their statistical modeling:

      1. To assume that the joint distribution of observations in a time series depends not on their absolute position in the series, but only on their relative positions, is to assume that the time series is stationary.

      2. If sample moments (time averages) taken from a time series converge in probability to the moments of the data-generating process, then the time series is ergodic. Both assumptions are stated formally below.
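
      Stated formally (the notation here is mine, not the book's): strict stationarity requires that for any indices t_1, ..., t_k and any shift h,

(y_{t_1}, \dots, y_{t_k}) \overset{d}{=} (y_{t_1 + h}, \dots, y_{t_k + h}),

while ergodicity for the mean requires that time averages converge in probability to the population moment:

\frac{1}{T} \sum_{t=1}^{T} y_t \xrightarrow{\,p\,} \mathrm{E}[y_t] \quad \text{as } T \to \infty.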

      Time series exhibiting both properties are said to be ergodic stationary. However, I find the term time-invariant more convenient. For financial time series, time-invariance implies that the means and covariances of a set of asset returns will be the same for any window of T observations of those returns, up to sampling error. In other words, no matter when we look at the data, we should come to the same conclusion about the joint distribution of the data, and converge to the same result as T becomes large.
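
      To see what time-invariance means in practice, consider a minimal simulation sketch (my own illustration, not the book's; the daily mean, volatility, window length, and seed are assumed values chosen for the example). Returns drawn from a fixed Gaussian process yield essentially the same sample moments in whichever window of T = 500 observations we inspect:

import numpy as np

rng = np.random.default_rng(0)

# Simulate ten years of daily returns from a fixed, time-invariant process:
# i.i.d. Gaussian with constant daily mean and volatility (assumed values).
mu, sigma = 0.0002, 0.01
returns = rng.normal(mu, sigma, size=2520)

# Sample moments over different windows of T = 500 observations agree
# up to sampling error -- the hallmark of time-invariance.
for start in (0, 1000, 2000):
    window = returns[start:start + 500]
    print(f"obs {start:4d}-{start + 499:4d}: "
          f"mean = {window.mean():+.5f}, vol = {window.std(ddof=1):.5f}")

      Any two windows disagree only up to sampling error, exactly as the definition requires.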

      Standard statistical modeling practice and classical time series analysis proceed from the underlying assumption that time series are time-invariant, or can be made time-invariant using simple transformations like detrending, differencing, or discovering a cointegrating vector (Hamilton 1994, pp. 435–450, 571). Time series models strive for time-invariance because reliable forecasts can be made for time-invariant processes. Whenever we estimate risk measures from data, we expect those measures will be useful as forecasts: Risk only exists in the future.
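
      To make those transformations concrete, here is a short sketch (my own, using numpy and the adfuller test from statsmodels; the simulated series is an assumption for the example). An augmented Dickey-Fuller test cannot reject a unit root in a simulated random walk, but rejects one decisively for its first difference:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)

# A random walk is not stationary: its variance grows without bound.
random_walk = np.cumsum(rng.normal(0.0, 1.0, size=1000))

# First-differencing recovers the stationary increments.
differenced = np.diff(random_walk)

# The test should fail to reject a unit root for the level,
# and reject it decisively for the difference.
for name, series in (("level", random_walk), ("difference", differenced)):
    stat, pvalue = adfuller(series)[:2]
    print(f"{name:>10}: ADF statistic = {stat:+.2f}, p-value = {pvalue:.3f}")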

      However, positing time-invariance for the sake of forecasting is not the same as observing time-invariance. Forecasts from time-invariant models break down because time series prove themselves not to be time-invariant. When the time-invariance properties desired in a statistical model are not found in empirical reality, unconditional time series models are no longer a possibility: Model estimates must be conditioned on recent history in order to supply reasonable forecasts, greatly foreshortening the horizon over which data can be brought to bear in a relevant way to develop such estimates.
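
      The foreshortening is easy to see in a toy example (again my own sketch, with assumed parameter values). When volatility doubles partway through the sample, the unconditional, full-history estimate blends the two regimes together, while an estimate conditioned on a short recent window adapts, at the price of discarding most of the available history:

import numpy as np

rng = np.random.default_rng(2)

# Returns whose volatility doubles halfway through the sample:
# a simple violation of time-invariance.
returns = np.concatenate([rng.normal(0.0, 0.01, size=500),
                          rng.normal(0.0, 0.02, size=500)])

# The unconditional estimate averages over both regimes.
print(f"full-sample vol: {returns.std(ddof=1):.4f}")

# Conditioning on a rolling 60-observation window adapts to the shift,
# at the cost of using only a fraction of the available data.
for t in (499, 559, 999):
    window = returns[t - 59:t + 1]
    print(f"rolling vol at t = {t}: {window.std(ddof=1):.4f}")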

      In this book, I will pursue the hypothesis that the greatest obstacle to the progress of quantitative risk management is the assumption of time-invariance that underlies the naïve application of statistical and financial models to financial market data. A corollary of this hypothesis is that extreme observations seen in risk models are not extraordinarily unlucky realizations drawn from the extreme tail of an unconditional distribution describing the universe of possible outcomes. Instead, extreme observations are manifestations of inflexible risk models that have failed to adapt to shifts in the market data. The quest for models that are true for all time and for all eventualities actually frustrates the goal of anticipating the range of likely adverse outcomes within practical forecasting horizons.

Ergodic Stationarity in Classical Time Series Analysis

      To assume a financial time series is ergodic stationary is to assume that a fixed stochastic process is generating the data. This data-generating process is a functional form that combines a stochastic disturbance, summarized by a parametric probability distribution, with other parameters known in advance of the financial time series data being realized. The assumption of stationarity therefore implies that if we know the right functional form and the values of the parameters, we will have exhausted the possible range of outcomes for the target time series. Different realizations of the target time series are then just draws from the joint distribution of the conditioning data and the stochastic disturbance. This is why, in an ergodic stationary time series, a sample drawn from any segment of the series converges to the same result. While we cannot predict where a stationary time series will go tomorrow, we can narrow down the range of possible outcomes and make statements about the relative probability of different outcomes. In particular, we can make statements about the probabilities of extreme outcomes.
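
      A stationary AR(1) process is a simple instance of such a data-generating process: a functional form, fixed parameters, and a parametric disturbance. In the sketch below (my own illustration, with assumed parameter values), every realization is just another draw from the same joint distribution, and sample moments agree across realizations up to sampling error:

import numpy as np

def simulate_ar1(phi, sigma, n, rng):
    """One realization of y_t = phi * y_(t-1) + eps_t, with eps_t ~ N(0, sigma^2)."""
    y = np.zeros(n)
    eps = rng.normal(0.0, sigma, size=n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    return y

rng = np.random.default_rng(3)

# Same functional form, same parameters: each realization is just
# another draw from one fixed joint distribution.
for i in range(3):
    y = simulate_ar1(phi=0.9, sigma=1.0, n=5000, rng=rng)
    print(f"realization {i}: mean = {y.mean():+.3f}, std = {y.std(ddof=1):.3f}")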

      Put differently, when a statistical model is specified, stationarity is introduced as an auxiliary hypothesis about the data that allows the protocols of statistical sampling to be applied when estimating the model. Stationarity implies that parameters are constant and that further observations of the data improve their estimates. Sampling-based estimation is so widely accepted and commonplace that the extra hypothesis of stationarity has dropped out of view, almost beyond criticism. Consciously or unconsciously, the hypothesis of stationarity forms a basic part of a risk manager's worldview – if one model fails, there must be another encompassing model that would capture the anomaly; some additional complication must make it possible to see what we did not see in the past.

      Yet stationarity remains an assumption, and it is important to understand its function as the glue that holds together classical time series analysis. The goal in classical time series econometrics is to estimate parameters and test hypotheses about them. Assuming stationarity ensures that the estimated parameter values converge to their “correct” values as more data are observed, and tests of hypotheses about parameters are valid.

      Both outcomes depend on the law of large numbers, and thus they both depend on the belief that when we observe new data, those data are sampled from the same process that generated previous data. In other words, only if we assume we are looking at a unitary underlying phenomenon can we apply the law of large numbers to ensure the validity of our estimates and hypothesis tests. Consider, for example, the discussion of ‘Fundamental Concepts in Time-Series Analysis’ in the textbook by Fumio Hayashi (2000, pp. 97–98) concerning the ‘Need for Ergodic Stationarity’:

      The fundamental problem in time-series analysis is that we can observe the realization of the process only once. For example, the sample on the U.S. annual inflation rate for the period from 1946 to 1995 is a string of 50 particular numbers, which is just one possible outcome of the underlying stochastic process for the inflation rate; if history took a different course, we would have obtained a different sample…

      Of course, it is not feasible to observe many different alternative histories. But if the distribution of the inflation rate remains unchanged [my emphasis] (this property will be referred to as stationarity), the particular string of 50 numbers we do observe can be viewed as 50 different values from the same distribution.

      The discussion is concluded with a statement of the ergodic theorem, which extends the law of large numbers to the domain of time series (pp. 101–102).
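
      For reference, the theorem's content, paraphrased in my own notation: if a process \{y_t\} is stationary and ergodic with \mathrm{E}\lvert y_1 \rvert < \infty, then

\frac{1}{T} \sum_{t=1}^{T} y_t \longrightarrow \mathrm{E}[y_1] \quad \text{almost surely as } T \to \infty,

which is precisely the license to treat one long realization as if it were many independent samples.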

      The assumption of stationarity is dangerous for financial risk management. It lulls us into believing that, once we have collected enough data, we have completely circumscribed the range of possible market outcomes, because tomorrow will just be another realization of the process that generated today. It fools us into believing we know the values of parameters like volatility and equity market beta sufficiently well that we can ignore any residual uncertainty from their estimation. It makes us complacent about the choice of models and functional forms because it credits hypothesis tests with undue discriminatory power. And it leads us again and again into crisis situations because it attributes too little probability to extreme events.

      We cannot dismiss the use of ergodic stationarity as a mere simplifying assumption, of the sort regularly and sensibly made in order to arrive at an elegant and acceptable approximation to a more complex phenomenon. A model of a stationary time series approximates an object that can never be observed: a time series of infinite length. This says nothing about the model's ability to approximate a time series of any finite length, such as the lifetime of a trading strategy, a career, or a firm. When events deemed to occur 0.01 percent of the time by a risk model happen twice in a year, there may be no opportunity for another hundred years to prove out the assumed stationarity of the risk model.
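
      The arithmetic is worth a quick check (a back-of-the-envelope sketch of my own, assuming independent days, which is what such a model itself implies):

p = 0.0001   # the model's 0.01 percent daily event probability
n = 250      # trading days in a year

# If the model is right and days are independent, two or more such
# events in a single year should almost never be observed.
p_at_most_one = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
print(f"P(two or more events in one year) = {1 - p_at_most_one:.1e}")
# Roughly 3e-04: seeing it happen is far more plausibly a failure of the
# assumed stationarity than a stroke of extraordinary bad luck.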

Recalibration Does Not Overcome the Limits of a Time-Invariant Model

      Modern financial crises are intimately connected with risk modeling built on the assumption of stationarity. For large actors like international banks, brokerage houses, and institutional investors, risk models matter a lot for the formation of expectations. When those models depend on the assumption of stationarity, they lose the ability to adapt to data that are inconsistent with the assumed data-generating process.

