It is a heavyweight.
I hate copying out entire articles; good writing deserves the traffic.
In this case I'm afraid they will put this major piece behind the paywall.
There's a reason that Reactions' motto is "Financial intelligence for the global insurance market."
From Reactions:
Have risk carriers developed a dangerous dependency on models? General Re CEO Tad Montross advises moderation.
Over the past two decades the insurance industry has seen the use of models increase dramatically, which creates a new management challenge: understanding and managing all the models. The list is getting long: Catastrophe Models, Risk-Based Capital, Dynamic Financial Analysis, Solvency I, Solvency II, ICAs, Enterprise Risk Models, various Rating Agency Capital Models and Predictive Models. While the intent (to better underwrite and manage risk) is admirable, the complexity of the models introduces a significant challenge.
Understanding the models, particularly their limitations and sensitivity to assumptions, is the new task we face. Many of the banking and financial institution problems and failures over the past decade can be directly tied to model failure or optimistic judgments in the setting of assumptions or the parameterisation of a model.
Insurance is a unique business: we sell a product whose true cost is not known for many years. As such, pricing is based on many assumptions about exposure and expected loss frequencies and severities. Judgment has always been a critical element in making risk assumption and risk management decisions. The new reliance on models has not taken judgment out of the process. It has simply moved it from the front-line underwriter or claims examiner to the team responsible for selecting and parameterising the models. And ultimately senior management approves the use of the models and their parameterisation. In many ways the judgments made about model use and parameters are much more difficult, less obvious and more complex than the individual risk judgments of old.
The legitimacy or the appropriateness of a model, the quality of the data and the assumption setting are the keys to this new paradigm. Models are simply tools. A model is not reality – it is simply a representation. It is intended to be representative of what might happen but not what will happen. Unfortunately it can easily be manipulated or tweaked to produce a desired result.
The ironic challenge is to ensure that, by investing in and using models to better manage or price risk, we are not inadvertently taking on more risk than we want, or mispricing that risk. Most of the large banks had extensive risk management processes in place and very sophisticated models, with reports produced daily showing the risk positions for their portfolios.
With the benefit of hindsight, the risk measures and tolerances were set such that the tails of the distributions were obscured. The same phenomenon challenges the insurance industry. How should we measure risk, and what risk appetite is appropriate? Do we understand the tails of the distributions? Do we understand the sensitivity to the assumptions we've made? Every model has limitations. Do we understand the limitations in our model and the impact they have on the projected results? However well intentioned, a model's development team has biases that will be embedded in the construction of the model. Understanding these limitations and biases is important as we think about how to use a model and how to put its results in context.
Cat models
Probabilistic cat models have been around for thirty years and have brought much greater discipline and focus to the management and quantification of catastrophe exposures. This has been a positive development for the industry. Having said that, the actual track records of the models have not been good. The one thing we can all agree on is that the model estimates are wrong. Just last year the initial estimates for Hurricane Ike were 50% to 60% off the mark. So the actual-to-modeled variance can be huge, suggesting a large margin of safety is appropriate when using these tools to measure capital at risk.
Particularly in extreme events, the variance can be even larger, since calibration is more difficult. Why do we invest so heavily and spend so much time using cat models to measure and manage our accumulations? Simply because, while imperfect, they are the best tool we have.
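To make the margin-of-safety point concrete, here is a minimal sketch in Python. All of the figures are invented for illustration; the idea is simply to load a modeled loss estimate by the worst historical actual-to-modeled miss before treating it as capital at risk.

```python
# Hypothetical actual-to-modeled ratios from past events (invented figures;
# a ratio of 1.5-1.6 corresponds to the 50%-60% Ike misses cited above).
actual_to_modeled = [1.55, 0.80, 1.30, 1.60, 0.95]

modeled_pml = 250_000_000  # modeled probable maximum loss (invented figure)

# A crude margin of safety: scale the estimate by the worst observed miss.
safety_factor = max(actual_to_modeled)
capital_at_risk = modeled_pml * safety_factor

print(f"capital at risk: {capital_at_risk:,.0f} "
      f"({safety_factor:.0%} of the modeled estimate)")
```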
While many industry reports and analysts speak to the 1% or 0.2% loss amounts, few qualify their statements with supporting information on how the model was actually used and parameterised. The judgments with respect to occurrence vs. aggregate loss amounts, VaR vs. TVaR, storm surge, medium- vs. long-term frequencies, loss amplification, secondary uncertainty and data quality/resolution can produce wildly different loss estimates. In some cases the range can be twofold. That's startling, given the aura of precision the EP (exceedance probability) curves project.
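To see how much these judgments matter, here is a minimal sketch in Python contrasting VaR and TVaR at the 1% and 0.2% points on a simulated annual-loss distribution. The lognormal parameters are invented; only the shape of the comparison is the point.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated annual catastrophe losses (100,000 trial years);
# the lognormal parameters are invented for illustration.
losses = rng.lognormal(mean=16.0, sigma=1.5, size=100_000)

def var(losses, p):
    """Value at Risk: the loss exceeded with probability p."""
    return np.quantile(losses, 1.0 - p)

def tvar(losses, p):
    """Tail Value at Risk: the average loss beyond the VaR threshold."""
    return losses[losses >= var(losses, p)].mean()

for p in (0.01, 0.002):  # the 1% and 0.2% points cited above
    print(f"p={p:.3f}  VaR={var(losses, p):,.0f}  TVaR={tvar(losses, p):,.0f}")
```

On a heavy-tailed distribution the TVaR at a given probability sits well above the VaR, so two firms both quoting "the 1% number" can be reporting materially different quantities; that is one source of the twofold ranges described above.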
Regulatory risk-based capital models
Regulatory and Rating Agency Capital Adequacy Models have evolved over the past decade, but they are pretty straightforward. These are generally statistical analytic models used to estimate the capital required to manage the risks on the balance sheet (underwriting, reserves and assets) at selected confidence intervals. The risk factors and diversification benefits are the major topics of debate with these models. But the models themselves are transparent and their results can easily be reverse engineered. Today, some companies are also running proprietary Economic Capital Models, which introduce a whole different level of complexity and interpretative challenge.
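Because these models are transparent and easy to reverse engineer, a short sketch can show their general shape. The following Python fragment uses a square-root (covariance-style) aggregation of risk charges, in the spirit of regulatory formulas such as the NAIC's RBC; the charges themselves are invented.

```python
import math

# Hypothetical risk charges by balance sheet category, in $m (invented).
charges = {
    "reserves": 120.0,   # reserve risk
    "premium": 90.0,     # underwriting / pricing risk
    "assets": 60.0,      # asset / market risk
    "credit": 30.0,      # reinsurance credit risk
}

# A simple sum assumes every risk hits at once; the square-root rule
# assumes independence across categories, granting a diversification benefit.
simple_sum = sum(charges.values())
diversified = math.sqrt(sum(c ** 2 for c in charges.values()))

print(f"undiversified: {simple_sum:.0f}m  diversified: {diversified:.0f}m")
print(f"diversification benefit: {1 - diversified / simple_sum:.0%}")
```

The debate over risk factors and diversification benefits is, in effect, a debate over the charges and the aggregation rule in a formula like this one.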
Internal economic capital models
More recently and now in preparation for Solvency II, regulators and the industry are exploring different approaches to economic capital modeling. These models are often proprietary in nature and try to provide a more tailored understanding of a firm’s risk position and capital requirements.
Basically a forward-looking, mark-to-market, fully discounted view of the balance sheet, economic capital models incorporate dependencies, loss distributions and, in some cases, economic scenario generators. These results are then used to produce confidence intervals around expected results and to try to quantify the risk of the enterprise. These models are very complex, and because of the multiple moving pieces, it is difficult to ascertain their sensitivity to specific assumptions. They also rely on a number of assumptions which cannot be parameterised using existing data. But once again, the aura of precision is mesmerising.
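A toy illustration of the dependency problem, in Python: a two-line Monte Carlo economic capital model where the 1-in-200 capital figure moves materially with a correlation assumption that cannot be parameterised from existing data. All parameters are invented.

```python
import numpy as np

def capital_1_in_200(rho, n=200_000, seed=7):
    """99.5% required capital for two correlated lognormal lines of business.

    rho is the dependency assumption between the lines; the lognormal
    parameters are invented for illustration.
    """
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    line_a = np.exp(14.0 + 1.0 * z[:, 0])   # e.g. a property book
    line_b = np.exp(14.0 + 0.8 * z[:, 1])   # e.g. a casualty book
    total = line_a + line_b
    # Capital = unexpected loss at the 1-in-200 (99.5%) level.
    return np.quantile(total, 0.995) - total.mean()

for rho in (0.0, 0.25, 0.5, 0.75):
    print(f"rho={rho:.2f}  capital={capital_1_in_200(rho):,.0f}")
```

Tail dependency between lines has no direct empirical counterpart, so equally reasonable choices of rho yield very different capital figures; hence the warning about the aura of precision.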
Predictive models
Predictive models have really caught on in the past five years: first for Personal Auto and, more recently, for Commercial Lines pricing. They are tools developed to better segment pricing and claims-handling decisions using multivariate analyses. Sometimes referred to as black-box underwriting, a predictive model simply applies more exposure information, bringing in more variables and exploring their relationships to make a better underwriting or pricing decision.
These tools are revolutionising underwriting segmentation and exposure underwriting and have profound implications for the entire market. While it will take some time to select the right variables and to get the correlations correct, the use of these tools is changing the game. The era of broad risk pools and average pricing for a class of risks is over.
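As a sketch of what "bringing in more variables" looks like in practice, here is a minimal Poisson frequency model of the kind used in personal auto pricing, written in Python with scikit-learn. The rating variables, data and coefficients are all invented.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 5_000

# Invented rating variables: driver age, vehicle age, annual mileage (000s).
X = np.column_stack([
    rng.uniform(18, 80, n),   # driver_age
    rng.uniform(0, 20, n),    # vehicle_age
    rng.uniform(1, 30, n),    # annual_mileage
])
exposure = rng.uniform(0.25, 1.0, n)  # policy years in force

# Synthetic claim counts: younger drivers and higher mileage raise frequency.
true_rate = np.exp(-2.0 - 0.02 * (X[:, 0] - 40) + 0.03 * X[:, 2])
claims = rng.poisson(true_rate * exposure)

# Fit expected claims per unit exposure, weighting each record by exposure.
model = PoissonRegressor(alpha=1e-4)
model.fit(X, claims / exposure, sample_weight=exposure)

print("fitted coefficients:", model.coef_)       # per-variable effects
print("segmented rates:", model.predict(X[:3]))  # per-risk expected frequency
```

Each risk gets its own expected frequency rather than the class average, which is precisely why average pricing for a class of risks is over.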
These new predictive models are not replacing judgment but they are moving the application of judgment back in the decision chain, creating some interesting management challenges in execution.
Managing the models
There are several practical suggestions that we try to follow when using or considering the use of a model:
• Don't be seduced by a model. There are some very cool, complicated tools out there. But if someone can't explain how the model works in simple terms, avoid it.
• Do extensive sensitivity testing on all the assumptions and dials that can be adjusted on a model. Ask which parameters or assumptions the model is most sensitive to, and stress test them aggressively (see the sketch after this list).
• Audit for the completeness and quality of the data entered and document the assumptions used in the model.
• Maintain a qualitative as well as a quantitative framework to identify, assess and manage risk. The qualitative framework is a good check and balance on the quantitative model.
• In addition to training our managers on how to run models, we need to train them in how to use them and how to identify their vulnerabilities and weaknesses.
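On the sensitivity-testing point above, here is a minimal one-at-a-time parameter sweep in Python that ranks a model's dials by their swing on the output. The stand-in model and the parameter ranges are invented.

```python
import numpy as np

def modeled_loss(frequency, severity_mu, surge_factor):
    """A stand-in for any model output; the form is purely illustrative."""
    return frequency * np.exp(severity_mu) * surge_factor

base = {"frequency": 0.8, "severity_mu": 15.0, "surge_factor": 1.1}

# Plausible low/high settings for each dial (invented ranges).
ranges = {
    "frequency": (0.6, 1.2),      # medium- vs. long-term event rates
    "severity_mu": (14.7, 15.3),  # severity assumption
    "surge_factor": (1.0, 1.4),   # storm surge / loss amplification
}

base_out = modeled_loss(**base)
for name, (lo, hi) in ranges.items():
    outs = [modeled_loss(**{**base, name: v}) for v in (lo, hi)]
    swing = (max(outs) - min(outs)) / base_out
    print(f"{name:13s} swing = {swing:.0%} of the base result")
```

The parameters with the largest swings are the ones to stress test aggressively and to document most carefully.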
Insurance is a complex business that has become increasingly reliant on models, ever more complex models. Models are simply tools; they are not good or bad. It is how we manage and use them that will determine whether we can avoid a fate similar to the banks'.