
Tuesday, 26 April 2016

Panama papers: again, another reminder

The Panama Papers bring out several issues related to weaknesses in the controls of countries and companies. We have heard the terms…tax havens…offshore companies, banking secrecy, money laundering, politically exposed persons…but what are they? How are they related to each other?

To start, it is important to introduce a role within companies as important as Internal Audit: the Compliance Officer. Where does this role come from? What are its responsibilities?

After the major financial scandals of 2001 and 2002, such as Enron and WorldCom, the authorities decided to tighten the screws and the regulation changed. New rules were introduced for public companies: the prohibition on acting as both auditor and consultant for the same company, disclosure when a company is dealing with a fraud, rotation of audit partners (five years at most), the creation of the Audit Committee and protection for whistleblowers, among other things. A new role emerged from this: the Compliance area.

Compliance, as its name says, has the duty to ensure the company complies with the law (externally) and with its policies and procedures (internally). The difference from Internal Audit is that Compliance aims to prevent rather than detect. As we all know, an external or internal auditor defines a scope based on the nature of the review in order to analyze what is being done versus what should be done. (For more information refer to the article "Value: Internal Audit".) The bottom line: an auditor analyzes something that has already happened (after the fact), whereas Compliance should be involved before a decision is taken (e.g. a contract, hiring key staff, a new supplier, etc.).

Compliance should be in charge of managing money laundering risk. Money laundering is defined as giving the appearance of legality to money that comes from illicit activities. Those illicit activities include trafficking in drugs, human organs and people, prostitution, forgery, pornography, bribery, etc. What counts as "illicit" depends on the legal framework of each country. The criminal will look for "paradises to launder money": countries or companies that can help him launder large amounts at low cost and very quickly.

A tax haven is defined as a territory where taxes are levied at a low rate or where there is a system of banking secrecy. Banking secrecy means that banks are not allowed to give the authorities information about their clients: the real owner is "top secret" and has to be kept that way unless there is a criminal complaint. Offshore companies (legal entities) are those incorporated or registered in tax havens.

Therefore, because of these characteristics, tax havens are used by some people for confidentiality, by others to launder money, and by others to pay less tax or hide money from the IRS. The last two constitute a crime: tax evasion.

But tax evasion differs from money laundering. Although both are crimes, each has specific characteristics; therefore, depending on the circumstances, someone can be accused of both or of just one.

Then there is another key concept: the politically exposed person, or "PEP". A PEP is defined as someone who is or has been entrusted with a prominent public function. Historically, PEPs have shown a tendency towards corruption. Taking bribes produces illicit money, dirty money, and it has to be laundered. Someone who is corrupt does not want to be known as such, so the money has to be made to look "clean", legal.

One of the key elements to deter and prevent money laundering is to know your customer ("KYC") and to apply customer due diligence ("CDD"). Countries, authorities and companies need to know who the real owner is and who controls the entity. Criminals use, among many methods, shell companies, front men or identity theft to disguise their identity; therefore, verifying who the real owner is constitutes an extremely important control.
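To make the KYC/CDD idea concrete, here is a minimal Python sketch of a beneficial-ownership screening check. The lists, field names and thresholds are invented for illustration only; real CDD relies on verified documentation and specialised screening providers.

# Minimal sketch of a beneficial-ownership / PEP screening check.
# All data structures and thresholds below are illustrative assumptions, not a real KYC system.

PEP_LIST = {"jane doe", "john roe"}           # hypothetical list of politically exposed persons
HIGH_RISK_JURISDICTIONS = {"PA", "VG", "KY"}  # hypothetical list of secrecy jurisdictions

def screen_customer(customer):
    """Return a list of red flags found during customer due diligence."""
    flags = []
    for owner in customer["beneficial_owners"]:
        if owner["name"].lower() in PEP_LIST:
            flags.append(f"Beneficial owner {owner['name']} is a PEP")
        if owner.get("is_nominee"):
            flags.append(f"{owner['name']} appears to be a front man / nominee")
    if customer["jurisdiction"] in HIGH_RISK_JURISDICTIONS:
        flags.append("Entity incorporated in a secrecy jurisdiction")
    if sum(o["share"] for o in customer["beneficial_owners"]) < 0.75:
        flags.append("Ownership structure is opaque: real owner not fully identified")
    return flags

acme = {
    "jurisdiction": "PA",
    "beneficial_owners": [{"name": "Jane Doe", "share": 0.4, "is_nominee": False},
                          {"name": "Shell Holdings Ltd", "share": 0.2, "is_nominee": True}],
}
print(screen_customer(acme))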

Although the investigation in Panama is still ongoing, it reminds us (again) of the importance of the internal controls that companies should have and that countries should promote. How many factors does it have in common with the Enron case?

-Worldwide there are flaws in the laws that create legal loopholes which help criminals, or matters that remain unregulated. In the Enron case, energy wasn't regulated; today it is the offshore industry.

-In both cases there were rumors of corruption.

-Lack of transparency: Enron didn’t present a balance sheet, while in Panama, due to bank secrecy, information is not provided.

-Enron used “mark to market” accounting valuations and later a “hypothetical future value”, along with the creation of several companies (including a trust) to disguise the fraud and the real owner. Today in Panama, a large number of companies created through complex structures is likewise being reported.

-Executives of both companies stated: “we didn’t do anything wrong”. The rationalization is the same.

Beyond the points above and the importance of controls and managing risks, there is something more transcendent: values. Why do people do something even when they know it is wrong?

And history repeats itself…again…with so many similarities…




By Mónica Ramírez Chimal, México
Partner of her own consultancy Firm, Asserto RSC:  www.TheAssertoRSC.com

Author of the books “Don´t let them wash, Nor dry!” and “Make life yours!”, published in Spanish and English. She has written several articles about risk, data protection, virtual currencies and money laundering. Mónica is an international lecturer and instructor and has been Internal Audit and Compliance Director for an international company.

Friday, 8 April 2016

FATF chief talks de-risking dangers and correspondent banking


First part of my interview with the head of the world’s anti-money laundering task force on de-risking, and the fraught relationship between disruptive fintech companies and the banks which handle their accounts.

Wholesale de-risking or de-banking, which involves denying accounts to customers deemed high risk, is rapidly becoming a major sector issue and even a “crisis” in areas in vital need of cross-border transfers.

David Lewis, executive secretary of the Financial Action Task Force (FATF), said the agency would continue to rail against a practice it believes fuels black markets, following two statements issued last year.

Troubled areas served by charities, or those such as the Caribbean whose economies are heavily dependent on remittance flows, are directing anger at the US.

Lewis said an FATF report into correspondent banking relationships and the impact of cutting off money remitters, payment services providers and other sectors is nearing completion.

“It’s a concern to us, as it undermines transparency within the financial sector and law enforcement’s ability to follow the money,” he said.

Wholesale de-risking is considered a blanket decision by banks, without due regard to the risk of individual customers within that decision-making process.

“We see it impacting mostly on customer segments that have a risk of money laundering and terrorist financing and where the profits for the bank are relatively small,” Lewis said, adding that it was a major concern for several other international finance organisations.

“We are working with the Financial Stability Board (FSB) to look at implications of banks cutting off correspondent relationships.

“This is affecting particularly areas such as the Caribbean and Africa.”

The FSB echoed claims it would merely push honest remitters underground in an attempt to stay in business.

Lewis said there are two main drivers affecting correspondent banks, money service businesses, charities and a host of other sectors including fintechs.

“Banks are seeing the costs of compliance rise, largely because they have been found wanting by regulators and law enforcement agencies and are finally putting in place the necessary controls,” he said.

“Sometimes I see in the press talk of stronger or revised regulations causing this, while what we are actually seeing is just the regulators starting to enforce the rules that have always been there.

“Banks are getting caught out, and they are responding by investing a lot more heavily in compliance.

“This is squeezing their margin, and ultimately that is changing the risk-reward relationship to a point where the profits are so small that it’s not worth the potential reputational damage for the bank, let alone the profit, to engage in that relationship.”

US authorities are preparing guidance on the matter, with the Office of the Comptroller of the Currency stating banks’ fear of breaching AML laws is often misguided.

“Aggressive” enforcement of the Bank Secrecy Act is seen as the number one problem.

Recently Barclays announced it was selling its Africa banking arm, having decided the reward was too low given the risks of serving a volatile jurisdiction.

As legal experts have pointed out, banks will consider several factors prior to making a decision, but it is often a perfectly legal business choice despite claims of competition malpractice.

Lewis said: “We are seeing them [banks] assess their relationships not just on the basis of AML/CFT risk, but on broader legal and regulatory risk and reputational risk and the commercial appetite along with the geographic appetite.”

“We have seen correspondent banks, money service businesses, charities and other sectors including fintechs particularly hit by de-risking.

“Secondly, we have to clarify regulatory expectations.

“The nub of that is the question of whether a bank is required to know their customer’s customer (KYCC).”

Despite the FATF insisting this is only the case “in exceptional circumstances”, many are taking no chances.

“There is a growing perception within the banking sector that they are,” Lewis said.

“However, whether they are required to know it or not, it increasingly seems that they are not comfortable maintaining the relationship if they do know it.

“There is such a fear of regulatory action that if they don’t have confidence in their customer’s customer they are pulling back.”

He described it as a “big issue”.

“If they have any doubt at all on circumstances where they are required to KYCC or in situations where customers don’t offer a great profit, they are cutting them off,” Lewis said.

“We are concerned about that as it reduces transparency in financial transactions.

“It increases the ML/TF risks we are trying to address.”

In the forthcoming report, Lewis said the guidance on correspondent banking clarifies what FATF thinks regulators should be doing.

Everyone in the industry is advised to pay attention to the paper, and Lewis admits the outcome is not always easy.

“It’s a different question then about whether regulators go away and do that,” he added.

“Implementation is the big challenge here.”

Lewis echoes the fears of those who have witnessed the problems that can come with de-risking, and why cutting off a payment processor will not stop a crime from occurring.

“We recognize sometimes banks have to profile customers and look at risk from a number of different angles, we want to promote a common sense risk-based approach that doesn’t exclude a large number of customers and essentially increase risk.

“The nub of this issue for us is how to get banks to properly assess the risks and take a proper risk-based approach rather than make these blanket decisions,” he said.

“Of course if you are like HSBC and have 55m customers, you have to be realistic about what is possible.

“What happens if you are a big bank and you get rid of the customer but not the risk is that they move on to become a customer of one of your customers.

“Ultimately, if you are a big bank you’re a clearing bank, and the customer you’ve exited will become a customer of another bank that will probably be using you as a clearing bank.

“So you’re still exposed to the risk, you’ve just less sight of it and less ability to manage it,” Lewis concluded.




Written by:
Mark Taylor
News Editor, PaymentsCompliance
BlockchainBriefing

Wednesday, 16 March 2016

Let’s draw! The importance of the flowchart

Have you ever tried to draw a flowchart? If so, how many times have you managed to make it accurately reflect the company’s operation? I bet you have tried several times, because it isn’t easy. It doesn’t matter which software you use (there are plenty on the market), or whether you draw it manually, because the tool by itself won’t tell you how to do it. The key is how to turn the operation of the company, or its areas, processes, transactions or even systems, into a flowchart.

The first step is to get to know and understand the operation of the company, how an area works or how a system functions; that is, to know what you want to capture. You will need to obtain information in order to understand it. It may be useful to go over one of my earlier articles on this blog, “Foundations for work: how to get the information”, where you can find tips.

Once you have the information, you will have to decide whether it is more convenient to draw by areas or by job positions. It all depends on how the information flows and how much detail you have. For example, if you want to capture the company’s operation in general, it is best to build the flowchart using areas. But if you want to capture the recruitment and selection process, in which several people intervene, it is best to use job positions so the chart shows the people who participate. Other factors also matter, such as the size of the company and whether people with the same job position actually do the same work. There is no rule, only the need to be consistent in using either areas or job positions.

The second step is to know the flowchart symbols. For practical purposes, the basic shapes are the oval (start and end), the box (activity), the diamond (decision) and the cylinder (system or data store):



You can draw a flowchart either vertically or horizontally. I find it easier to draw horizontally, but feel free to do it as you prefer. Keep in mind that a flowchart can include several “starts” and “ends”, and as many activities, decisions, systems, etc. as needed. What is important is that the descriptions inside the boxes, cylinders or diamonds are as clear as possible: neat and straight to the point. If you need to add more information, use a process description (like a footnote) outside the flowchart to explain that activity in more detail.
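As a rough illustration of turning those shapes into a chart, here is a small sketch using the Python graphviz package (one option among many; it requires the Graphviz binaries to be installed). The recruitment steps shown are invented purely for the example.

# Toy flowchart sketch with the Python "graphviz" package; steps are invented for illustration.
from graphviz import Digraph

flow = Digraph("recruitment", comment="Toy recruitment and selection process")
flow.attr(rankdir="LR")  # draw horizontally, left to right

flow.node("start", "Vacancy approved", shape="oval")            # start
flow.node("post", "HR posts the job", shape="box")              # activity
flow.node("screen", "HR screens CVs", shape="box")              # activity
flow.node("ok", "Candidate meets profile?", shape="diamond")    # decision
flow.node("ats", "Applicant tracking system", shape="cylinder") # system / data store
flow.node("hire", "Manager interviews and hires", shape="box")
flow.node("end", "Contract signed", shape="oval")               # end

flow.edge("start", "post")
flow.edge("post", "screen")
flow.edge("screen", "ats", label="record")
flow.edge("screen", "ok")
flow.edge("ok", "hire", label="yes")
flow.edge("ok", "post", label="no, keep searching")
flow.edge("hire", "end")

flow.render("recruitment_flowchart", format="png", cleanup=True)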




Tip: start by defining the areas or people who participate in the process. Then define what the trigger of your diagram is. Is it a client? A supplier? An order from the CEO?

Be patient. You may start drawing the flowchart and realize halfway through that, because of how the process flows, you need to change the order of the areas or people involved. This is normal. Do it as many times as needed so that your flowchart “flows”…

It can also happen that at some point you realize you don’t have all the information you need! Either you forgot to ask for it, or you didn’t cover it during the interview. This happens. Go back and ask for as much information as needed, but do not invent or assume. You are trying to capture the company’s reality, so it is important to be objective and professional and to work according to the results of the interviews.

This situation usually happens with directors. In my experience they focus more on talking about general aspects of the company than on what they actually do.

Be cautious when the company has several people with the same job position. Even if you are told they all do the same thing, interview them, not as a group but individually. For example, if there are five people, interview three; if what they say matches, you can extrapolate that they all do the same. I have found that the director, manager or person in charge may tell you that the employees all do the same when in reality they don’t. I have also found that sometimes even the boss doesn’t know what the employees do! Or, unfortunately, you may realize that the activities of a person you interviewed have nothing to do with what the area does, or are the same activities as another person’s but in a different area or job position.

Yes, making a flowchart can give you lots of surprises! (To you and the Directors)

After you finish, print the flowchart, paste the sheets together if needed (it all depends on its length) and look at it from a distance. Is the flowchart understandable? Does it flow from left to right? Is it complete? Does it make sense?

Make as many attempts as necessary until the answer to all of these questions is YES.

The benefits of using flowcharts are:
-Understanding the flow of transactions from their origin until they are reflected in the financial statements.
-Understanding how an area operates and how its activities relate to or affect other areas.
-Corroborating what people are really doing.
-Identifying the reporting lines and the documents generated or used.
-Identifying manual and electronic activities. (The latter are identified as systems.)
-Identifying risks and controls and finding the risk source. (In the next article we’ll talk about business risk management so you can understand the concepts and include them in your flowchart.)

We can say that making a flowchart is equivalent to “taking a picture” of how the company operates. Tip: for better results, become what you want to draw. Become the money, the documents, the systems…and ask yourself: if I were one of these, how many hands would I pass through? How many people would touch me? Yes, you can have fun drawing!




By Mónica Ramírez Chimal, México
Partner of her own consultancy Firm, Asserto RSC:  www.TheAssertoRSC.com

Author of the books “Don´t let them wash, Nor dry!” and “Make life yours!”, published in Spanish and English. She has written several articles about risk, data protection, virtual currencies and money laundering. Mónica is an international lecturer and instructor and has been Internal Audit and Compliance Director for an international company.

Friday, 5 February 2016

Accounting for expected credit losses under IFRS 9: Thoughts on benchmarking, calibration and the economic value

After the release of the new impairment standard in July last year, financial institutions started to crunch numbers to get a feeling of the impact of expected credit losses on their allowances. On the one hand, inclusion of lifetime losses in accounting for financial instruments is a logical and consistent step towards adequate consideration of changes in riskiness. It is already now the basis for evaluation of impaired assets. On the other hand, the net value per IFRS 9 accounting standards at day one is lower than the economic value. For (performing) loans which are sold at fair value, an immediate loss at day one has to be reported.

For better understanding of the accounting value in comparison to an economic benchmark, the final standard needs to be seen in the light of its true intent. The IASB reconfirmed in the “Basis for Conclusions”, that IFRS 9 impairment was the result of an iterative process to approximate an ideal accounting model for expected losses under operational constraints:

BC5.88ff: In the IASB’s view, the model in the 2009 Impairment Exposure Draft most faithfully represents expected credit losses … but [respondents] said that the proposals would present significant operational challenges … These operational challenges arose because entities typically operate separate accounting and credit risk management systems. BCIN.13 In response, the IASB decided to modify the impairment model proposed in the 2009 Impairment Exposure Draft to address those operational difficulties while replicating the outcomes of that model that it proposed in that Exposure Draft as closely as possible.

With this intent in mind, how can we derive a benchmark for the accounting value under the upcoming impairment standard? A valid benchmark reflecting the principles of IFRS 9 has to have the following properties:
  1. It shall fully reflect changes in the riskiness of the borrower, as measured by changes to the expected credit loss.
  2. It shall be neutral with respect to other (mostly external) impacts, such as changes to capital requirements, profit expectations, yield curves, interest rates or market sentiment.
  3. The effective interest rate method shall be applied to amortize cost and income over the life of the loan.

The discounted cash flow (DCF) method with a fixed discount rate (reflecting the required neutrality with respect to external impacts) provides such a benchmark when expected cash flows, corresponding to contractual cash flows net of expected losses, are used as the basis for the evaluation. Due to its underlying principles we can call this benchmark the “idealized Amortized Cost Value”, or iACV©. This value corresponds to the net carrying amount as outlined in the 2009 Exposure Draft.

The figure shows the typical situation for a non-amortizing loan with a term of 5 years and a constant gross carrying amount of 100. The y-axis denotes the net value after allowances, while time in years is depicted on the x-axis. Expected credit losses increase gradually with time, causing a transition to bucket two after 1.5 years in this example. The smooth curve describes the benchmark iACV© as defined above. The other curve describes the net accounting value according to IFRS 9: it starts more conservative (caused by the 12-month ECL allowance) but becomes non-conservative as the risk level gradually increases. Transition to bucket two causes the accounting value to become conservative again, at a much more significant level than in bucket one.
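A rough numeric sketch of this kind of comparison is given below, with invented loss figures. Here the iACV© is approximated as the present value of the remaining contractual cash flows net of expected losses, discounted at one fixed rate calibrated so that the day-one value equals the transaction price, while the IFRS 9 net value deducts the 12-month ECL in bucket one and the (undiscounted, for simplicity) lifetime ECL in bucket two.

# Illustrative sketch only: loan terms, loss paths and the staging date are invented.
GROSS = 100.0

def pv(cash_flows, rate):
    """Present value of cash flows falling due at the end of years 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def solve_fixed_rate(cash_flows, target):
    """Fixed (credit-adjusted) rate at which the expected cash flows price to target."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pv(cash_flows, mid) > target else (lo, mid)
    return (lo + hi) / 2

# Non-amortizing 5-year loan: coupon 5 per year, principal repaid at maturity.
contractual = [5.0, 5.0, 5.0, 5.0, 105.0]

# Expected losses per remaining year, as re-estimated at valuation dates 0..4;
# loss expectations worsen over time, triggering bucket two from date 2 onwards.
loss_views = [
    [0.2, 0.3, 0.4, 0.5, 0.6],
    [0.5, 0.7, 0.9, 1.1],
    [1.2, 1.6, 2.0],
    [2.5, 3.0],
    [4.0],
]
stage2_from = 2

day_one_expected = [cf - el for cf, el in zip(contractual, loss_views[0])]
fixed_rate = solve_fixed_rate(day_one_expected, GROSS)  # day-one iACV equals 100

for date, losses in enumerate(loss_views):
    expected_cf = [cf - el for cf, el in zip(contractual[date:], losses)]
    iacv = pv(expected_cf, fixed_rate)
    allowance = sum(losses) if date >= stage2_from else losses[0]   # lifetime vs 12-month ECL
    ifrs9_net = GROSS - allowance
    print(f"date {date}: iACV = {iacv:6.2f}  IFRS 9 net = {ifrs9_net:6.2f}"
          f"  deviation = {ifrs9_net - iacv:+6.2f}")

The printed deviation is the quantity discussed in the following paragraphs: positive values point to allowances above the benchmark, negative values to allowances below it.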

The development of iACV© allows for a continuous quantification of the impact of changed lifetime loss expectations on a credit portfolio, without artificial cliff effects. Any changes to allowances on top of this benchmark, quantified by the difference between the accounting value and iACV©, are caused by specific choices when implementing the local IFRS 9 architecture (primarily the transition process between buckets one and two) and the approximative nature of the final standard.

When evaluating the local implementation of IFRS 9 impairment, this deviation should be among the key indicators: It reflects the closeness to the primary intent of IASB. High positive deviations indicate potentially hidden reserves. Large negative deviations may indicate hidden liabilities. If you want to be ready for the upcoming challenges by auditors and regulators, you should ask yourself the following key questions:
  1. Do you have the necessary steering tools in place when drafting and fine-tuning the transition process from bucket one to two?
  2. Are you prepared for discussions with auditors and regulators to justify the level of allowances derived on basis of your local IFRS 9 architecture?
  3. Can your risk systems monitor iACV© to ensure full visibility of ECL impacts on an ongoing basis?
If you are interested in these and other considerations around IFRS 9 impairment, please follow me on LinkedIn, send me an email or join the 2nd Annual Credit Risk Management Forum in Amsterdam on 19th and 20th May.

The opinions and methods presented in this article are solely the responsibility of the author and should not be interpreted as reflecting those of UniCredit Bank Austria AG and CFP.









Written by:
Wolfgang Reitgruber
Head of Credit Risk Modelling, UniCredit S.p.A., Austria
Senior Advisor, Walkergasse 25, 1210 Vienna, Austria

Monday, 7 December 2015

Things to think of in model use – part 2




Widespread use of models has been the standard in the financial industry for years. Due to, among other reasons, increased competition, cost cutting, modernization, tighter regulations and a generally tougher business climate, other industries have also become increasingly reliant on models in their daily work. Over the past few years there has been a slight shift back from complication (some may say over-complication) in models and the use of non-intuitive assumptions towards models that simply “pass the elevator test”. This blog post aims to shed some light on the general use of models.



Where is the model risk?








As shown above, in general a model is specified in a certain way and uses different inputs and assumptions in a calculation engine to produce outputs such as prices, risk estimates, etc. Model risk lurks both among the inputs, such as general data problems, lack of data or misspecifications, and in the outputs, such as wrong use; for example, forgetting the model scope or assumptions.

Prior to using models one should always make a qualitative assessment, which depends on the industry or securities analyzed. Assessing model risk can be split into three parts:

  1. Assessing the model’s explanatory power, including the review of:

  • analysis bias and inefficiency in model estimations,
  • the model’s ability to display the contribution of key factors in the outputs,
  • model outputs as compared to empirical historical results, and
  • back-testing results.

  2. Assessing the model’s forecasting ability, specifically:

  • its ability to cope with multicollinearity,
  • its sensitivities to key input parameters,
  • the capacity to aggregate data, and
  • the expected level of uncertainty.

  3. Adequate stress testing through:

  • the analysis of the model’s predictions of outcomes for extreme values,
  • the review of statistical assumptions,
  • the identification of potential additional variables and the review of outputs with these included,
  • as well as the assessment of the model’s ability to shock input factors and derive results.

A practical example:


Creating a radar chart of model explanatory power and forecasting ability relative to your own requirement specifications may, for example, give the picture below.


Figure 1: Comparing to own requirements


The scoring indicates the proposed model excels in its capacity to aggregate data but is mediocre with respect to analysis bias, inefficiency and its ability to cope with multicollinearity.

The stand-alone assessment can then be supplemented by a comparison with alternatives or peers. The proposed model has a total score of 30 vs 29 for the best-practice model and 28 for an “even model” in the chart below. It appears better in data aggregation, key-factor display and back-testing than other best-practice models, but is less appropriate with respect to analysis bias, efficiency and sensitivity to key input parameters.
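As a minimal sketch of how such a scoring comparison could be produced, the snippet below plots illustrative criterion scores for a proposed model against a best-practice peer on a radar chart with matplotlib; the criterion names follow the assessment list above and the individual scores are invented so that the totals match the 30 vs 29 comparison.

# Illustrative radar-chart scoring sketch; criteria and scores are invented placeholders.
import numpy as np
import matplotlib.pyplot as plt

criteria = ["Bias/inefficiency", "Key-factor display", "Back-testing",
            "Multicollinearity", "Input sensitivity", "Data aggregation", "Uncertainty"]
proposed      = [3, 5, 5, 3, 3, 6, 5]   # invented scores out of 6, total 30
best_practice = [5, 4, 4, 4, 4, 4, 4]   # invented scores out of 6, total 29

angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]                     # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, scores in [("Proposed model", proposed), ("Best practice", best_practice)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria, fontsize=8)
ax.set_title("Model assessment vs best practice (illustrative scores)")
ax.legend(loc="lower right")
print("totals:", sum(proposed), "vs", sum(best_practice))
plt.show()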

Figure 2: Comparing to other models


Figure 3: Comparing to reality


Finally, you should back-test model outputs against historic outcomes as well as stress test model inputs to check for sensitivities and general sense. In this example, the model does a good job of estimating future uncertainty based on history.

Complexity or simplicity

Increased accuracy and generality normally come at the expense of data requirements, model specification and calculation time. Sometimes “less is more”, and you should decide how accurate the results actually need to be before taking on the burden of a general, heavy-to-maintain model.

For example, if you choose to use the standard Black-Scholes formula to price options, you only need five variables to describe option prices, but you assume a normal distribution (generally inaccurate if fat tails exist) and the key external driver is implied volatility (which brings up the questions of level, which one to use, sensitivity to it, etc.). Accepting this allows an analytical solution to be computed easily under the assumption of no arbitrage, which normally satisfies most investors’ needs.
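For illustration, here is a minimal implementation of the standard Black-Scholes call price with exactly those five inputs; the numbers in the example call are arbitrary.

# Standard Black-Scholes price of a European call (no dividends); inputs in the example are arbitrary.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, rate, vol):
    """Black-Scholes call price from the five inputs: spot, strike, time, rate, volatility."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

print(round(bs_call(spot=100, strike=105, t=1.0, rate=0.02, vol=0.25), 4))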

Using a more advanced model (for instance a GEM model), on the other hand, requires a high number of variables to be fed into the model. Furthermore, several assumptions have to be made about every single variable, for example that the distribution of returns resembles actual historical observations or a presumed distribution. Such models depend strongly on the choices made in designing every variable. Hence there is increased model risk and, in the worst cases, also limited explanatory power, practical use and popularity.

Final words on three selected parts of the model validation approach

  1. Market data and key factors:

In assessing the choice of key factors, one should find out whether the simplification of a complex, multi-dimensional reality based on a selection of given variables describes that reality as correctly as needed. While reviewing the market data used to populate key factors, ask whether there are sufficient market data available to model the chosen variables. If there aren’t, are the proxies used sufficiently reliable, and will they continue to be so? In the overall assessment of explanatory power, does the combination of market data and key factors seem to fit well with the economic fundamentals of the model? Once these questions are answered in the affirmative, consider whether the inputs chosen are good enough to ensure the model achieves its requirements.

And what if some of the criteria are not met? In that case you must consider adjustments in the choice of variables, identify alternative suitable market data and suggest ways to fine-tune the selection of core variables. All three elements should be based on the teams’ expertise in the given asset class and the economic background surrounding the assets (geography, stage in the life cycle, sector, product type, etc.).

  2. Assumptions and model infrastructure:


While reviewing the statistical assumptions in the model, assess the assumptions made about the distribution of returns, the number of simulations needed, the probability of type 1 vs type 2 errors, etc.

While reviewing the calibration and design of the mapping process, you should ask yourself how exogenous inputs are used in the calculation engine (review the numerical inputs and the approximations made by the engine when no analytical solution exists).

Regarding the adequacy of the IT infrastructure surrounding the model, consider whether the IT processing is robust enough, i.e. look at calculation capabilities, controls over manual overriding of entries, audit trails of changes made, etc.

Once all of the questions from the above three sections are answered affirmatively, assurance on the reliability of the calculation process can be obtained.

Again, what if some criteria are not met? In that case, consider adjusting your statistical assumptions, enhancing the process by improving mapping and calibration, and improving governance around manual intervention.
  3. Output review and testing:


In your review of the analytics produced by the calculation engine, assess the choice of analytics provided by the model and their suitability for understanding the validation figure and the level of uncertainty.

Testing the model’s sensitivity to input data is important, so review the model’s behavior when significant changes are made to the inputs in order to assess stability. Also ask what results are produced in scenarios or special stress situations and whether they can be explained.

It is also useful to carry out a common sense check of the model – is it providing meaningful figures for the purpose it was designed for?

Finally, back-testing is essential, i.e. compare the model against real outcomes and analyze potential divergences.
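A minimal back-testing sketch along these lines, using simulated data as a stand-in for real model forecasts and market history, might look as follows.

# Back-testing sketch: count 99% VaR exceptions against realized P&L; all data is simulated.
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, 1000)          # realized daily P&L (illustrative stand-in)
var_99 = np.full(1000, -2.33)             # model's one-day 99% VaR forecasts (illustrative)

exceptions = int((pnl < var_99).sum())    # days where losses exceeded the forecast
expected = 0.01 * len(pnl)
print(f"exceptions: {exceptions} observed vs {expected:.0f} expected")
# A materially higher count than expected suggests the model underestimates risk;
# the divergence should then be analyzed (Kupiec-style tests formalize this comparison).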

The review of model outputs requires a mix of quantitative skills, to challenge results using alternative models and bespoke statistical tests, and qualitative skills, to challenge results on the grounds of the fundamentals of the asset class and the expertise gained on it.


Tuesday, 10 November 2015

Things to think of in model use – part 1 in a series of 2 blogs




In today’s financial industry, widespread model use has been the standard for years. Driven by increased competition, cost cutting, modernization, tighter regulation from authorities and a generally tougher business climate, other industries have also been relying increasingly on models in their daily work. Over the past few years there has perhaps been a slight shift back from model outputs that were too complicated to understand, relying on sometimes non-intuitive assumptions, towards models that simply “pass the elevator test”. This blog post sheds some general light on model use.
What is a model?
Models are an abstraction and simplification of reality. They are almost never more complex than “the real thing”. Models attempt to describe reality, but they are NOT reality. They should be understood as a guide for making decisions in an uncertain environment. In other words, never mix up a model with the world itself. In model use it is thus essential to reflect critically on the assumptions used in the model and to be aware of model risk.

I will remember that I didn’t make the world, and it doesn’t satisfy my equations. Though I will use models boldly to estimate value, I will not be overly impressed by mathematics. I will never sacrifice reality for elegance without explaining why I have done so. Nor will I give the people who use my model false comfort about its accuracy.
Instead, I will make explicit its assumptions and oversights. I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.
Emanuel Derman & Paul Wilmott, 7 January 2009






A model should in general both offer:






Example of a model process:



The key question: among the myriad of influencing variables, what are the key material and primary risk factors, and how do they influence the use of the model?



Effective use is an ongoing loop of recurring tasks:








A typical model lifecycle:








… and eventually: scrapping or replacement of the model.


The objectives of the model lifecycle framework are an understanding of the sources of model risk and how to measure them. The consequences of using the model need to be assessed in light of the model risks, prior to use. Again: never forget that a model is not reality but a consciously simplified version of it. The simplification consists in selecting key explanatory variables and identifying the limits of the model.



Model design:
The key components of a model are its inputs, such as market data, key factor definitions, methodological and statistical assumptions, and IT feeding processes. A calculation engine suited to the complexity and desired frequency of the calculations can then output valuation levels, principal component analysis, probability distributions, mark-to-model vs mark-to-market values, as well as the expected level of uncertainty. A recurring model testing and reshaping process should be in place.
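As a purely illustrative sketch, the components named above could be organized along these lines in code; the class and field names are hypothetical, chosen only to make the separation of inputs, calculation engine and outputs concrete.

# Hypothetical skeleton separating model inputs, calculation engine and outputs.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ModelInputs:
    market_data: Dict[str, List[float]]   # e.g. price or rate histories
    key_factors: List[str]                # selected explanatory variables
    assumptions: Dict[str, str]           # methodological / statistical assumptions

@dataclass
class ModelOutputs:
    valuation: float
    uncertainty: float                    # expected level of uncertainty around the valuation
    diagnostics: Dict[str, float] = field(default_factory=dict)

@dataclass
class Model:
    inputs: ModelInputs
    engine: Callable[[ModelInputs], ModelOutputs]  # calculation engine

    def run(self) -> ModelOutputs:
        return self.engine(self.inputs)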

Model risk:

Typical types of model risk are data problems, model misspecification and/or inefficiency, and wrong use (user risk).

Data problems:

For data problems, key considerations are the relevance vs obsolescence of data, data corruption, data filtering (i.e. signal vs noise), selection of appropriate data to minimize selection bias, data availability vs cost (information cost vs the existence of proxies), appropriate feeding of the data into the calculation engine, as well as the uncertainty of the data.

Model misspecification and inefficiency:

Specification and efficiency should be assessed on three key aspects: 1) appropriate assumptions based on empirical behavior, 2) selection of variables with identification of the correlations amongst them, and 3) robustness over time of the explanatory power (R2), both in-sample and out-of-sample.
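A small sketch of the in-sample vs out-of-sample robustness check, using simulated data and a plain linear regression as a stand-in for a real model:

# In-sample vs out-of-sample R2 check; the data and linear model are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                      # three candidate explanatory variables
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test = X[:400], X[400:]                 # earlier vs later period
y_train, y_test = y[:400], y[400:]

model = LinearRegression().fit(X_train, y_train)
print("in-sample R2:     ", round(r2_score(y_train, model.predict(X_train)), 3))
print("out-of-sample R2: ", round(r2_score(y_test, model.predict(X_test)), 3))
# A large drop from in-sample to out-of-sample R2 points to misspecification or overfitting.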

Wrong use:

Models need to be used for the right task; otherwise their conclusions can be meaningless (for example, using an equity risk model to predict risk in a fixed income portfolio). Furthermore, a critical review of the outputs and their analysis is crucial. How sensitive is the model to its assumptions, and how accurate or correct are the assumptions we feed in? The output should be a range rather than an exact figure. And finally, the model output is not reality but an estimate of reality, should our assumptions hold.


The information, views, and opinions expressed in this blog are solely those of the author in person and generally do not reflect the views and opinions of SKAGEN Funds.