Monday 7 December 2015

Things to think of in model use – part 2




Widespread use of models has been the standard in the financial industry for years. Due to, amongst other reasons, increased competition, cost cutting, modernization, tighter regulations and a generally tougher business climate, other industries have also become increasingly reliant on models in their daily work. Over the past few years there has been a slight shift back from complication (some may say over-complication) and non-intuitive assumptions in models towards simply “passing the elevator test”. This blog aims to shed some light on the general use of models.



Where is the model risk?


Figure: Model inputs and assumptions feed a calculation engine that produces the outputs

As shown above, a model is generally specified in a certain way and uses different inputs and assumptions in a calculation engine to produce outputs such as prices, risk estimates and so on. Model risk lurks both on the input side (general data problems, lack of data, misspecification) and on the output side (wrong use, for example forgetting the model’s scope or assumptions).

Prior to using a model one should always make a qualitative assessment, which depends on the industry or securities analyzed. Assessing model risk can be split into three parts:

  1. Assessing the model’s explanatory power, including the review of:

  • analysis bias and inefficiency in model estimations,
  • the model’s ability to display the contribution of key factors in the outputs,
  • model outputs as compared to empirical historical results, and
  • back-testing results.

  2. Assessing the model’s forecasting ability, specifically:

  • its ability to cope with multicollinearity,
  • its sensitivities to key input parameters,
  • its capacity to aggregate data, and
  • the expected level of uncertainty.

  3. Adequate stress testing through:

  • the analysis of the model’s predictions for extreme values,
  • the review of statistical assumptions,
  • the identification of potential additional variables and the review of the outputs with these included, and
  • the assessment of the model’s ability to shock input factors and derive results.

A practical example:


Creating a radar chart of model explanatory power and forecasting ability relative to your own requirement specifications may, for example, give the picture below.


Figure 1: Comparing to own requirements


Scoring indicates that the proposed model excels in its capacity to aggregate data but is mediocre with respect to analysis bias, inefficiency and the ability to cope with multicollinearity.
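As an illustration, below is a minimal sketch (Python, with matplotlib assumed available) of how such a chart could be put together. The criteria labels and the 1–5 scores are hypothetical placeholders, chosen only so that the proposed model’s total comes to the 30 mentioned further down; they are not the actual figures behind the charts.

```python
# Minimal radar-chart sketch; criteria and scores are hypothetical placeholders
import numpy as np
import matplotlib.pyplot as plt

criteria = ["Analysis bias / inefficiency", "Key factor display", "Fit to history",
            "Back-testing", "Multicollinearity", "Input sensitivity",
            "Data aggregation", "Uncertainty estimate"]
own_requirements = [4, 4, 4, 4, 3, 4, 3, 4]  # hypothetical target scores (1-5 per criterion)
proposed_model = [2, 5, 4, 5, 2, 3, 5, 4]    # hypothetical assessment scores, totalling 30

angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, scores in [("Own requirements", own_requirements),
                      ("Proposed model", proposed_model)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria, fontsize=7)
ax.set_title("Comparing to own requirements")
ax.legend(loc="lower right")

print("Proposed model total score:", sum(proposed_model))
plt.show()
```

The same chart with a best-practice model and an “even model” added as extra series gives the peer comparison discussed next.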

The stand-alone assessment could then be supplemented by a comparison with alternatives or peers. The proposed model has a total score of 30, versus 29 for the best-practice model and 28 for an “even model” in the chart below. It appears better at data aggregation, key-factor display and back-testing than the other best-practice models, but is less appropriate with respect to analysis bias, efficiency and sensitivity to key input parameters.

Figure 2: Comparing to other models


Figure 3: Comparing to reality


Finally, you should back-test model outputs against historic outcomes as well as stress test model inputs to check for sensitivities and general sense. In this example, the model does a good job of estimating future uncertainty based on history.

Complexity or simplicity

Increased accuracy and generality normally come at the expense of data requirements, model specification and calculation time. Sometimes “less is more”, and you should decide how accurate the results actually need to be before taking on the burden of a general, heavy-to-maintain model.

For example, if you choose to use the standard Black & Scholes formula to price options, you only need five variables to describe option prices, but you assume normally distributed returns (generally inaccurate if fat tails exist) and the key external driver is implied volatility (which raises the questions of its level, which one to use, the sensitivity to it, etc.). Accepting this allows an analytical solution to be computed easily under the assumption of no arbitrage, which normally satisfies most investors’ needs.
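As a sketch of how compact this is, the standard Black–Scholes call price needs nothing beyond those five inputs (spot, strike, rate, maturity and volatility); a minimal Python version, with purely illustrative numbers in the usage line:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, rate, maturity, volatility):
    """European call price under the standard Black-Scholes assumptions
    (lognormal prices, constant volatility and rate, no arbitrage)."""
    n = NormalDist()  # standard normal distribution
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * maturity) / (volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return spot * n.cdf(d1) - strike * exp(-rate * maturity) * n.cdf(d2)

# Illustrative inputs only: at-the-money one-year call, 2% rate, 20% implied volatility
print(black_scholes_call(spot=100, strike=100, rate=0.02, maturity=1.0, volatility=0.20))
```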

Using a more advanced model (for instance a GEM model), on the other hand, requires a large number of variables to be fed into the model. Furthermore, several assumptions have to be made about every single variable for the modelling to be correct, for example that the distribution of returns is similar to actual historical observations or to presumed distributions. Such models depend strongly on the choices made in designing every variable. Hence there is increased model risk and, in the worst cases, also limited explanatory power, practical use and popularity.
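To make the dependency on distributional choices concrete, here is a small sketch (all parameters made up) that simulates a single risk factor under two different return assumptions, a fitted normal versus a bootstrap of the observed history, and compares a tail quantile. With many variables, this kind of choice has to be made and defended for each one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily return history for one risk factor (fat-tailed on purpose)
history = rng.standard_t(df=3, size=1000) * 0.01

n_sims, horizon = 10_000, 20  # 20-day horizon

# Assumption A: returns are normal with the fitted mean and volatility
normal_paths = rng.normal(history.mean(), history.std(), size=(n_sims, horizon)).sum(axis=1)

# Assumption B: returns are resampled from the observed history (bootstrap)
bootstrap_paths = rng.choice(history, size=(n_sims, horizon), replace=True).sum(axis=1)

# Same design question, two answers: the 1% tail differs noticeably
print("1% quantile, normal assumption:   ", np.quantile(normal_paths, 0.01))
print("1% quantile, bootstrap assumption:", np.quantile(bootstrap_paths, 0.01))
```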

Final words on three selected parts of the model validation approach

  1. Market data and key factors:

In the assessment of the choice of key factors, one should find out whether the simplification of a complex, multi-dimensional reality based on a selection of given variables describes reality as accurately as needed. While reviewing the market data used to populate the key factors, ask whether sufficient market data are available to model the chosen variables. If not, are the proxies used sufficiently reliable, and will they continue to be so? In the overall assessment of explanatory power, does the combination of market data and key factors seem to give a good fit for understanding the economic fundamentals of the model? Once these questions are answered in the affirmative, consider whether the inputs chosen are good enough to ensure the model achieves its requirements.

And what if some of the criteria are not met? In that case you must consider adjustments to the choice of variables, identify alternative suitable market data and suggest ways to fine-tune the selection of core variables. All three elements should be based on the expertise of the teams in the given asset class and the economic background surrounding the assets (geography, stage in the life cycle, sector, product type, etc.)
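A sketch of the kind of quantitative check that can support these answers: regressing the modelled quantity on the chosen key factors and on a candidate proxy and comparing the explained variance. All series and factor names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 250 observations of two chosen key factors and a candidate proxy
n = 250
factor_rates = rng.normal(0, 1, n)             # hypothetical "interest rate" factor
factor_credit = rng.normal(0, 1, n)            # hypothetical "credit spread" factor
proxy = factor_credit + rng.normal(0, 0.5, n)  # imperfect proxy for the credit factor
target = 0.8 * factor_rates + 0.5 * factor_credit + rng.normal(0, 0.3, n)

def r_squared(y, X):
    """Share of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

print("R2 with the chosen key factors:", r_squared(target, np.column_stack([factor_rates, factor_credit])))
print("R2 with the proxy instead:     ", r_squared(target, np.column_stack([factor_rates, proxy])))
print("Proxy vs factor correlation:   ", np.corrcoef(proxy, factor_credit)[0, 1])
```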

  2. Assumptions and model infrastructure:


While reviewing the statistical assumptions in the model, conduct an assessment of the assumptions made about the distribution of returns, the number of simulations needed, the probability of type I vs type II errors, etc.
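Two of these assumptions lend themselves to quick numerical checks, sketched below with made-up data: the excess kurtosis of the return sample as a rough test of the normality assumption, and the standard error of a Monte Carlo estimate as a function of the number of simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=1000) * 0.01  # made-up, fat-tailed return sample

# Distributional assumption: excess kurtosis should be close to 0 for normal returns
standardized = (returns - returns.mean()) / returns.std()
excess_kurtosis = np.mean(standardized ** 4) - 3
print("Excess kurtosis:", excess_kurtosis)  # clearly positive here, so normality is doubtful

# Number of simulations: the standard error of a simulated mean shrinks with
# 1/sqrt(n), so quadrupling the simulations only halves the uncertainty
for n_sims in (1_000, 10_000, 100_000):
    sims = rng.normal(0.0, 0.2, n_sims)
    print(n_sims, "simulations -> standard error of the estimate:", sims.std() / np.sqrt(n_sims))
```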

While reviewing the calibration and the design of the mapping process, you should ask yourself how exogenous inputs are used in the calculation engine (review the numerical inputs and the approximations made by the engine when no analytical solution exists).
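A typical example of such a numerical approximation is backing out implied volatility, for which no closed-form inverse exists; a minimal bisection sketch, restating the illustrative Black–Scholes pricer from above so it runs on its own:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, rate, maturity, volatility):
    n = NormalDist()
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * maturity) / (volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return spot * n.cdf(d1) - strike * exp(-rate * maturity) * n.cdf(d2)

def implied_volatility(price, spot, strike, rate, maturity, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: the price is monotone in volatility, but there is no analytical
    inverse, so the engine has to approximate numerically."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(spot, strike, rate, maturity, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative check: recover the 20% volatility used to generate the price
price = black_scholes_call(100, 100, 0.02, 1.0, 0.20)
print(implied_volatility(price, 100, 100, 0.02, 1.0))  # approximately 0.20
```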

Regarding the adequacy of the IT infrastructure surrounding the model, consider whether the IT processing is robust enough, i.e. look at calculation capabilities, controls over manual overriding of entries, audit trails of changes made, etc.

Once all of the questions from the above three sections are answered affirmatively, assurance on the reliability of the calculation process can be obtained.

Again, what if some criteria are not met? In that case, consider adjusting your statistical assumptions, enhancing the process by improving mapping and calibration, and improving governance around manual intervention.
  3. Output review and testing:


In your review of the analytics produced by the calculation engine, carry out an assessment of the choice of analytics provided by the model and their suitability for understanding the valuation figure and the level of uncertainty.

Testing model input data sensitivities is important, so review the model behavior when significant changes are made to the inputs, to assess stability. Also ask what results are produced in scenario or special stress situations, and whether they can be explained.
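A minimal sketch of such a sensitivity and stress check follows; the stand-in model and all input levels are entirely made up, and for a real model the same bump-and-reprice pattern would be applied to its actual inputs.

```python
from math import exp

def model_output(rate, credit_spread, volatility):
    """Stand-in for the model under review; the functional form is entirely made up."""
    return 100.0 * exp(-rate * 5) - 400.0 * credit_spread + 50.0 * volatility

base = dict(rate=0.02, credit_spread=0.01, volatility=0.20)
base_value = model_output(**base)

# Sensitivity: bump each input by +10% of its level and look at the output change
for name, value in base.items():
    bumped_value = model_output(**dict(base, **{name: value * 1.10}))
    print(f"{name:14s} +10% -> output change {100 * (bumped_value / base_value - 1):+.2f}%")

# Stress scenario: move all inputs together to extreme but explainable levels
stress = dict(rate=0.06, credit_spread=0.05, volatility=0.60)
print(f"Stress scenario -> output change {100 * (model_output(**stress) / base_value - 1):+.2f}%")
```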

It is also useful to carry out a common sense check of the model – is it providing meaningful figures for the purpose it was designed for?

Finally, back-testing is essential, i.e. a comparison of the model against real outcomes and an analysis of potential divergences.
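In its simplest form such a back-test can be a count of how often realized outcomes fall outside the model’s predicted band, compared with the expected frequency; the series below are simulated placeholders, not real data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder series: model-predicted daily 99% loss limit vs realized P&L
days = 500
predicted_var_99 = np.full(days, -0.03)                 # model: 1% chance of losing more than 3% a day
realized_pnl = rng.standard_t(df=3, size=days) * 0.012  # fat-tailed "real" outcomes

exceptions = int(np.sum(realized_pnl < predicted_var_99))
expected = 0.01 * days

print(f"Exceptions observed: {exceptions}, expected: {expected:.0f}")
# A count far above the expected value points to a divergence worth analysing,
# e.g. underestimated tail risk or a mis-specified distribution assumption.
```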

The review of model outputs requires a mix of quantitative skills, to challenge results using alternative models and bespoke statistical tests, and qualitative skills, to challenge results on the grounds of the fundamentals of the asset class and the expertise gained on it.

