Micro-level Stochastic Loss Reserving Models for Insurance


Book Description

Accurate loss reserves are essential for insurers to maintain adequate capital and to price their insurance products efficiently. Loss reserving for Property & Casualty insurance is usually based on macro-level models that use aggregate data in a run-off triangle. These macro-level models may generate material errors in the reserve estimates when the assumptions underlying the estimates evolve over time in an unanticipated way. In recent years, a small but growing body of literature has proposed reserving models that estimate outstanding liabilities from the underlying individual claims data, analogous to approaches used in the life insurance industry. These models are referred to as "micro-level models". In this dissertation, I specify a micro-level model with a hierarchical structure for individual claim development that has the flexibility to accommodate assumptions that evolve dynamically over time. The dissertation consists of a simulation study and an empirical study. In the simulation study, I simulate claims data under different environmental changes and use both macro- and micro-level models to estimate the outstanding liabilities. The results demonstrate many scenarios in which the micro-level model outperforms the macro-level model, generating reserve estimates with smaller errors and higher precision. For actuaries responsible for setting reserves, the study identifies the scenarios in which micro-level models outperform traditional macro-level models and thereby offers a new tool for establishing accurate loss reserves. In the empirical study, I demonstrate the application of a micro-level model to a large portfolio of workers' compensation insurance provided by a major P&C insurer. The model is estimated with historical data, validated with a hold-out sample, and compared with commonly used macro-level models. I show that the micro-level model provides a more realistic reserve estimate than the macro-level models, and that the estimation error is substantially reduced through the use of individual claims data. The micro-level model is also more likely to capture the downside potential in reserves and to provide adequate allowance when extreme scenarios occur. I conclude that micro-level models provide a valuable alternative to traditional models for loss reserving.
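
To make the macro/micro distinction concrete, the following is a minimal Python sketch of the comparison pipeline the simulation study describes: individual claims are simulated, aggregated into a run-off triangle, and a chain-ladder reserve (the macro-level benchmark) is computed from the aggregate view. All distributions and parameters are illustrative assumptions, not the dissertation's specification.

```python
# A minimal sketch (not the dissertation's model): aggregating simulated
# individual claims into a run-off triangle and applying the chain ladder,
# the macro-level baseline that micro-level models are compared against.
import numpy as np

rng = np.random.default_rng(42)
n_years = 5  # accident years observed

# Simulate individual claims: (accident_year, development_lag, payment).
claims = []
for ay in range(n_years):
    for _ in range(rng.poisson(100)):                   # claims per accident year
        lag = min(rng.geometric(0.4) - 1, n_years - 1)  # payment delay in years
        claims.append((ay, lag, rng.lognormal(7.0, 1.0)))

# Macro view: cumulative paid triangle, cell (i, j) observed if i + j < n_years.
tri = np.zeros((n_years, n_years))
for ay, lag, amt in claims:
    if ay + lag < n_years:
        tri[ay, lag] += amt
tri = np.cumsum(tri, axis=1)

# Chain-ladder development factors from the observed part of the triangle.
f = [tri[: n_years - j - 1, j + 1].sum() / tri[: n_years - j - 1, j].sum()
     for j in range(n_years - 1)]

# Project each accident year's latest diagonal to ultimate; reserve = ultimate - paid.
reserve = 0.0
for ay in range(1, n_years):
    latest = tri[ay, n_years - 1 - ay]
    ultimate = latest * np.prod(f[n_years - 1 - ay:])
    reserve += ultimate - latest
print(f"Chain-ladder reserve estimate: {reserve:,.0f}")
```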




Micro-Level Stochastic Loss Reserving for General Insurance


Book Description

To meet future liabilities, general insurance companies set up reserves, and predicting future cash flows is essential in this process. Actuarial loss reserving methods help them do this in a sound way. Over the last decade, a vast literature on stochastic loss reserving for the general insurance business has developed. Apart from a few exceptions, all of these papers are based on data aggregated in run-off triangles. However, such an aggregate data set is a summary of an underlying, much more detailed database available to the insurance company. We refer to this data set at the individual claim level as "micro-level data." We investigate whether the use of such micro-level claim data can improve the reserving process. A realistic micro-level data set on liability claims (material and injury) from a European insurance company is modeled. Stochastic processes are specified for the various aspects involved in the development of a claim: the time of occurrence, the delay between occurrence and reporting to the company, the occurrence of payments and their sizes, and the final settlement of the claim. These processes are calibrated to the historical individual data of the portfolio and used to project future claims. Through an out-of-sample prediction exercise, we show that the micro-level approach provides the actuary with detailed and valuable reserve calculations. A comparison with results from traditional actuarial reserving techniques is included. For our case study, reserve calculations based on the micro-level model are to be preferred: compared with traditional methods, they reflect the real outcomes more realistically.
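
The claim-development decomposition described above lends itself directly to simulation. Below is a minimal sketch, with illustrative distributions rather than the paper's calibrated processes, of how occurrence, reporting delay, payment events, and settlement can be chained to project outstanding payments.

```python
# A minimal sketch of the kind of claim-development simulation the paper
# describes: occurrence time, reporting delay, payment events, settlement.
# The distributions and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def simulate_claim():
    occurrence = rng.uniform(0, 365)          # day of occurrence in the year
    report_delay = rng.exponential(30)        # days until reported
    settle_after = rng.exponential(400)       # days from reporting to settlement
    # Payment events between reporting and settlement (homogeneous Poisson).
    n_payments = rng.poisson(settle_after / 120)
    payment_times = np.sort(rng.uniform(0, settle_after, n_payments))
    payment_sizes = rng.lognormal(6.5, 1.2, n_payments)
    return {
        "occurrence": occurrence,
        "reported": occurrence + report_delay,
        "settled": occurrence + report_delay + settle_after,
        "payments": list(zip(occurrence + report_delay + payment_times,
                             payment_sizes)),
    }

# Project the outstanding liability of a portfolio by simulating many claims
# and summing payments that fall after the valuation date.
valuation_date = 365.0
outstanding = sum(size
                  for _ in range(1000)
                  for t, size in simulate_claim()["payments"]
                  if t > valuation_date)
print(f"Simulated outstanding payments: {outstanding:,.0f}")
```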




Stochastic Claims Reserving Methods in Insurance


Book Description

Claims reserving is central to the insurance industry. Insurance liabilities depend on a number of different risk factors which need to be predicted accurately. This prediction of risk factors and outstanding loss liabilities is the core of pricing insurance products, determining the profitability of an insurance company, and assessing its financial strength (solvency). Following several high-profile company insolvencies, regulatory requirements have moved towards a risk-adjusted basis, which has led to the Solvency II developments. The key focus in the new regime is that financial companies need to analyze adverse developments in their portfolios. Reserving actuaries now have to not only estimate reserves for the outstanding loss liabilities but also quantify possible shortfalls in these reserves that may lead to potential losses. Such an analysis requires stochastic modeling of loss liability cash flows and can only be done within a stochastic framework. Stochastic loss liability modeling and the quantification of prediction uncertainty have therefore become standard under the new legal framework for the financial industry. This book covers all the mathematical theory and practical guidance needed to adhere to these stochastic techniques. Starting with basic mathematical methods and working right through to the latest developments relevant to practical applications, readers will find out how to estimate total claims reserves while quantifying prediction errors and uncertainty. Accompanying datasets demonstrate all the techniques, which are easily implemented in a spreadsheet. A practical and essential guide, this book is a must-read in light of the new solvency requirements for the whole insurance industry.
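
As one example of the classical techniques the book treats, the sketch below implements Mack's distribution-free chain ladder, which yields both a reserve estimate and its prediction standard error per accident year. The triangle is illustrative, not one of the book's accompanying datasets.

```python
# A compact sketch of Mack's distribution-free chain ladder: point reserve
# plus prediction error. The triangle below is illustrative.
import numpy as np

# Cumulative paid triangle (rows: accident years, NaN = unobserved).
C = np.array([
    [1001., 1855., 2423., 2988., 3335.],
    [1113., 2103., 2774., 3422.,  np.nan],
    [1265., 2433., 3233.,  np.nan, np.nan],
    [1490., 2873.,  np.nan, np.nan, np.nan],
    [1725.,  np.nan, np.nan, np.nan, np.nan],
])
I, J = C.shape

f, sigma2 = [], []
for j in range(J - 1):
    rows = I - j - 1                      # rows with both columns observed
    c0, c1 = C[:rows, j], C[:rows, j + 1]
    fj = c1.sum() / c0.sum()              # volume-weighted development factor
    f.append(fj)
    if rows > 1:                          # Mack's variance estimator
        sigma2.append((c0 * (c1 / c0 - fj) ** 2).sum() / (rows - 1))
    else:                                 # Mack's tail extrapolation
        sigma2.append(min(sigma2[-1] ** 2 / sigma2[-2], min(sigma2[-2:])))

for i in range(1, I):
    j0 = I - 1 - i                        # latest observed column for year i
    ult = C[i, j0] * np.prod(f[j0:])      # chain-ladder ultimate
    # Mack's mean squared error of prediction for accident year i.
    msep, c = 0.0, C[i, j0]
    for j in range(j0, J - 1):
        msep += (sigma2[j] / f[j] ** 2) * (1 / c + 1 / np.nansum(C[: I - 1 - j, j]))
        c *= f[j]
    msep *= ult ** 2
    print(f"AY {i}: reserve {ult - C[i, j0]:8.0f}  pred. s.e. {np.sqrt(msep):7.0f}")
```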




Stochastic Loss Reserving Using Generalized Linear Models


Book Description

In this monograph, authors Greg Taylor and Gráinne McGuire discuss generalized linear models (GLMs) for loss reserving, beginning with a strong emphasis on the chain ladder. The chain ladder is formulated in a GLM context, as is the statistical distribution of the loss reserve. This structure is then used to test the need for departures from the chain ladder model and to consider natural extensions of the chain ladder that lend themselves to the GLM framework.
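
The monograph's starting point, that the chain ladder can be cast as a GLM, can be sketched as follows: an over-dispersed Poisson GLM with log link and accident-year plus development-year factors yields fitted values, and hence a reserve, matching the chain ladder. The triangle and the use of statsmodels below are illustrative assumptions, not the monograph's own code or data.

```python
# A minimal sketch: an over-dispersed Poisson GLM with log link and
# accident-year + development-year factors reproduces the chain-ladder
# reserve. Triangle data here are illustrative.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Incremental paid triangle as a long data frame.
rows = []
inc = [[451, 339, 193, 124],
       [533, 431, 258, None],
       [603, 488, None, None],
       [721, None, None, None]]
for i, r in enumerate(inc):
    for j, y in enumerate(r):
        if y is not None:
            rows.append({"ay": str(i), "dev": str(j), "paid": y})
df = pd.DataFrame(rows)

# Poisson family with log link; the over-dispersion scale does not change
# the fitted means, only the standard errors (estimate scale via Pearson X2).
fit = smf.glm("paid ~ ay + dev", data=df,
              family=sm.families.Poisson()).fit(scale="X2")

# Predict the unobserved (future) cells and sum them: the GLM reserve,
# which matches the chain-ladder reserve for this model structure.
future = pd.DataFrame([{"ay": str(i), "dev": str(j)}
                       for i in range(4) for j in range(4) if i + j > 3])
print(f"ODP GLM reserve: {fit.predict(future).sum():,.1f}")
```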




Loss Reserving


Book Description

All property and casualty insurers are required to carry out loss reserving as a statutory accounting function. Loss reserving is thus an essential sphere of activity, with its own specialized body of knowledge. While few books have been devoted to the topic, the volume of published research on loss reserving has almost doubled during the last fifteen years. Greg Taylor's book aims to provide a comprehensive, state-of-the-art treatment of loss reserving that reflects contemporary research advances. Divided into two parts, the book covers both the conventional techniques widely used in practice and more specialized loss reserving techniques employing stochastic models. Part I, Deterministic Models, covers very practical issues through the abundant use of numerical examples that fully develop the techniques under consideration. Part II, Stochastic Models, begins with a chapter that sets up the additional theoretical material needed for stochastic modeling. The remaining chapters in Part II are self-contained and can be approached independently of each other. A special feature of the book is the use throughout of a single real-life data set to illustrate the numerical examples and new techniques presented. The data set illustrates most of the difficult situations that arise in actuarial practice. This book will meet the need for a reference work as well as for a textbook on loss reserving.




Handbook on Loss Reserving


Book Description

This handbook presents the basic aspects of actuarial loss reserving. Besides the traditional methods, it also includes descriptions of more recent ones and a discussion of certain problems occurring in actuarial practice, such as inflation, scarce data, large claims, slow loss development, the use of market statistics, the need for simulation techniques, and the task of calculating best estimates and ranges of future losses. In property and casualty insurance, the provisions for payment obligations from losses that have occurred but have not yet been settled usually constitute the largest item on the liabilities side of an insurer's balance sheet. For this reason, the determination and evaluation of these loss reserves is of considerable economic importance for every property and casualty insurer. Actuarial students and academics, as well as practicing actuaries, will benefit from this overview of the most important actuarial methods of loss reserving by developing an understanding of the underlying stochastic models and of how to solve some problems which may occur in actuarial practice.




Joint Model Prediction for Individual-level Loss Reserving and a Framework to Improve Ratemaking in Non-life Insurance


Book Description

In non-life insurance, a loss reserve represents the insurer's best estimate of outstanding liabilities for losses that occurred on or before a valuation date. The accurate prediction of outstanding liabilities is key to setting reserves and calibrating insurance rates, which are two interconnected primary functions of actuaries. For instance, inadequate reserves could lead to deficient rates and thereby increase solvency risk, while excessive reserves could increase the cost of capital and regulatory scrutiny. Reserving accuracy is therefore essential for insurers to meet regulatory requirements, remain solvent, and stay competitive. Loss reserve prediction in non-life insurance is usually based on macro-level models that use aggregate loss data summarized in a run-off triangle. The main strengths of macro-level models are that they are easy to implement and interpret, but their limited ability to handle heterogeneity among triangle cells and changes in the business environment may lead to inaccurate predictions. Recently, micro-level reserving techniques have gained traction because they allow an analyst to use information on the policy, the individual claim, and the development process to predict outstanding liabilities. Granular covariate information allows environmental changes to be captured naturally, improving reserve predictions. In non-life insurance, the payment history can be predictive of the timing of settlement for individual claims, and ignoring the association between the payment process and the settlement process could bias the prediction of outstanding payments. To address this issue, in this dissertation I introduce into the micro-level loss reserving literature a joint modeling framework that incorporates the longitudinal payments of a claim into the intensity process of the claim's settlement. I discuss statistical inference and focus on the prediction aspects of the model. I demonstrate applications of the proposed model in reserving practice and, using simulated data, identify scenarios where the joint model outperforms macro-level reserving methods. Moreover, I present a detailed empirical analysis using data from a property insurance provider. I fit the joint model to a training dataset and use the fitted model to predict the future development of open claims. The prediction results on out-of-sample data show that the joint model framework outperforms existing reserving models that ignore the payment-settlement association. In pricing insurance contracts for non-life insurers, current methods often consider only the information on closed claims and ignore open claims. When the insurer's book risk profile shifts, open claims reflect the change more promptly than closed claims. This dissertation therefore presents an intuitive ratemaking model employing a marked Poisson process framework. The framework ensures that the multivariate risk analysis uses the information on all reported claims and adjusts for incurred but not reported claims based on the reporting delay distribution. Using data from a property insurance provider, I show that by determining rates based on current data, the proposed ratemaking framework leads to better alignment of premiums with claims experience. Among other things, accurate risk pricing ensures that all market participants, insurers and customers alike, bear reasonable costs for the risks they assume.
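
The reporting-delay adjustment at the heart of the ratemaking framework can be illustrated with a short sketch. Under a marked Poisson process, a claim occurring at time t is observed by valuation date T with probability F(T - t), where F is the reporting-delay distribution, so weighting each reported claim by 1/F(T - t) adjusts claim counts for incurred-but-not-reported claims. Everything below (distributions, parameters, the naive delay fit) is an illustrative assumption, not the dissertation's estimation procedure.

```python
# A minimal sketch of an IBNR adjustment via the reporting-delay
# distribution under a marked Poisson process. Parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
T = 365.0                                   # valuation date (days)

# Simulate "true" claims over one year with exponential reporting delays.
occ = rng.uniform(0, T, rng.poisson(500))
delay = rng.exponential(60, occ.size)
reported = occ + delay <= T                 # only these are in the data

# Fit the delay distribution on reported claims (ignoring right-truncation
# here; the dissertation's framework handles truncation properly).
scale = delay[reported].mean()
F = stats.expon(scale=scale).cdf

# IBNR-adjusted frequency: weight each reported claim by 1 / F(T - t).
adjusted = np.sum(1.0 / F(T - occ[reported]))
print(f"reported: {reported.sum()}, IBNR-adjusted estimate: {adjusted:.0f}, "
      f"true: {occ.size}")
```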




A Multivariate Micro-Level Insurance Counts Model With a Cox Process Approach


Book Description

When calculating the risk margins of a company with multiple Lines of Business (typically, a quantile in the right tail of the aggregate loss distribution), assumptions about the dependence structure between the different Lines are crucial. Many current multivariate reserving methodologies focus on aggregated claims information, typically in the format of claim triangles. This aggregation is subject to some inefficiencies, such as possibly insufficient data points and the potential elimination of useful information. This inefficiency is particularly problematic for the estimation of dependence. So-called 'micro-level models', on the other hand, utilise more granular levels of observations. Such granular data lend themselves naturally to a stochastic process modelling approach. However, the literature on incorporating a dependency structure within a micro-level approach is still scarce. In this paper, we extend the literature on micro-level stochastic reserving models to the multivariate context. We develop a multivariate Cox process to model the joint arrival process of insurance claims in multiple Lines of Business. This allows for a dependency structure between the frequencies of claims. We also explicitly incorporate known covariates, such as seasonality patterns and trends, which may explain some of the relationship between two insurance processes (or at least help tease out those relationships). We develop a filtering algorithm to estimate the unobservable stochastic intensities. Model calibration is illustrated using real data from the AUSI data set.
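
The dependence mechanism can be sketched in a few lines: in a Cox (doubly stochastic Poisson) process, two Lines of Business that share a common random intensity component produce correlated claim counts, even though the counts are conditionally independent given the intensities. The dynamics below are illustrative, not the paper's specification, and the filtering step is omitted.

```python
# A minimal sketch of a bivariate Cox process: two lines share a common
# random intensity component, inducing dependence between claim counts.
import numpy as np

rng = np.random.default_rng(3)
n_days, dt = 365, 1.0

# Latent shared intensity: a mean-reverting positive process (discretised).
lam = np.empty(n_days)
lam[0] = 2.0
for t in range(1, n_days):
    lam[t] = max(lam[t-1] + 0.1 * (2.0 - lam[t-1]) + 0.3 * rng.normal(), 0.05)

# Each line's intensity = scaled shared component, plus a known seasonal
# covariate on line 1 (the kind of covariate the paper incorporates).
season = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_days) / 365)
lam1 = 1.5 * lam * season
lam2 = 0.8 * lam

# Given the intensity paths, counts are conditionally independent Poisson.
counts1 = rng.poisson(lam1 * dt)
counts2 = rng.poisson(lam2 * dt)
print(f"daily-count correlation: {np.corrcoef(counts1, counts2)[0, 1]:.2f}")
```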




Claim Models


Book Description

This collection of articles addresses the most modern forms of loss reserving methodology: granular models and machine learning models. New methodologies come with questions about their applicability; these questions are discussed in one article, which focuses on the relative merits of granular and machine learning models. Others illustrate applications with real-world data. The examples include neural networks, which, though well known in some disciplines, have previously received limited attention in the actuarial literature. This volume expands on that literature, with specific attention to their application to loss reserving. For example, one of the articles introduces the application of neural networks of the gated recurrent unit form to the actuarial literature, whereas another uses a penalized neural network. Neural networks are not the only form of machine learning, and two other papers outline applications of gradient boosting and regression trees, respectively. Both articles construct loss reserves at the individual claim level, so that these models resemble granular models. One of these articles provides a practical application of the model to claim watching, the practice of monitoring claim development and anticipating its major features. Such watching can be used as an early warning system or for other administrative purposes. Overall, this volume is an extremely useful addition to the libraries of those working at the loss reserving frontier.
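
In the spirit of the gradient boosting application described above, the sketch below fits a boosted-tree model to claim-level features to predict individual-claim ultimates. The features, synthetic data, and hyperparameters are all illustrative assumptions, not anything taken from the volume's articles.

```python
# A minimal sketch: gradient boosting on individual-claim features to
# predict claim ultimates, with the reserve as predicted ultimate minus paid.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 2000

# Hypothetical claim-level features: reporting delay, severity class,
# claimant age, and paid-to-date at the valuation point.
X = np.column_stack([
    rng.exponential(30, n),          # reporting delay (days)
    rng.integers(1, 6, n),           # severity class 1-5
    rng.uniform(18, 70, n),          # claimant age
    rng.lognormal(7, 1, n),          # paid to date
])
# Synthetic ultimate losses loosely driven by the features.
y = X[:, 3] * (1 + 0.3 * X[:, 1]) * np.exp(rng.normal(0, 0.1, n))

model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.05)
model.fit(X[:1500], y[:1500])        # train on "closed" claims

# Reserve for the remaining "open" claims: predicted ultimate minus paid.
pred = model.predict(X[1500:])
print(f"model reserve estimate: {np.sum(pred - X[1500:, 3]):,.0f}")
```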




Stochastic Loss Reserving Using Bayesian MCMC Models


Book Description

"The emergence of Bayesian Markov Chain Monte-Carlo (MCMC) models has provided actuaries with an unprecedented flexibility in stochastic model development. Another recent development has been the posting of a database on the CAS website that consists of hundreds of loss development triangles with outcomes. This monograph begins by first testing the performance of the Mack model on incurred data, and the Bootstrap Overdispersed Poisson model on paid data. It then will identify features of some Bayesian MCMC models that improve the performance over the above models. The features examined include 1) recognizing correlation between accident years; (2) introducing a skewed distribution defined over the entire real line to deal with negative incremental paid data; (3) allowing for a payment year trend on paid data; and (4) allowing for a change in the claim settlement rate. While the specific conclusions of this monograph pertain only to the data in the CAS Loss Reserve Database, the breadth of this study suggests that the currently popular models might similarly understate the range of outcomes for other loss triangles. This monograph then suggests features of models that actuaries might consider implementing in their stochastic loss reserve models to improve their estimates of the expected range of outcomes"--front cover verso.