Essays on Risk Management of Financial Market with Bayesian Estimation


Book Description

This dissertation consists of three essays on modeling financial risk in a Bayesian framework.

The first essay compares the performance of Maximum Likelihood Estimation (MLE), Probability-Weighted Moments (PWM), Maximum Product of Spacings (MPS), and Bayesian estimation in Monte Carlo experiments on data simulated from the Generalized Extreme Value (GEV) distribution. I compare not only how close the estimates are to the true parameters, but also how close the estimated Value-at-Risk (VaR), a function of all three parameters, is to the true VaR. The block maxima method is applied to Student-t distributed data to mimic real-world returns. The Monte Carlo experiments show that Bayesian estimation yields the smallest standard deviations of the estimates in all cases. VaR estimates from MLE and PWM are closest to the true VaR, although MLE requires a careful choice of initial values. MPS gives the worst approximation in general.

The second essay analyzes the movement of the implied volatility surface from 2005 to 2014. The study period is divided into four sub-periods: Pre-Crisis, Crisis, Adjustment, and Post-Crisis. Daily implied volatility (IV) is constructed from the Black-Scholes model, and the time series of IV at different moneyness levels and times to maturity is fitted to a stochastic differential equation with a mean-reverting drift and constant elasticity of variance. After estimating the parameters with a Bayesian Metropolis-Hastings algorithm, the sub-periods are compared. While abnormal behavior is naturally expected in the Crisis and Adjustment periods, the difference between Post-Crisis and Pre-Crisis movements is of particular interest. The results reveal that even if a catastrophe does not permanently change investment behavior, its effects may last longer than expected. It is unwise to assume that market movements or investment behavior are identical in the Pre-Crisis and Post-Crisis periods: market participants learn from the Crisis and behave differently afterward.

The third essay attempts to predict financial stress by identifying leading indicators in a Bayesian variable selection framework. The stochastic search variable selection (SSVS) formulation of George and McCulloch (1993) is used to select the most informative leading indicators from a large set of financial variables. Both linear and probit models are estimated, under both normal-error and fat-tailed error assumptions. Financial stress periods are identified from the financial stress indexes issued by the Federal Reserve Banks, combined with the classifications of Bloom (2009) and Ng (2015). Both an ex-post approach based on the full historical sample and an ex-ante approach with rolling windows are used. The results show promising predictive power, and the selected variables can be used to signal financial crisis periods.
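
As a rough illustration of the first essay's block maxima workflow, the sketch below simulates Student-t losses, fits a GEV to yearly block maxima by maximum likelihood (one of the four estimators the essay compares), and reads off a VaR-style quantile. Block size, sample size, and confidence level are illustrative assumptions, not the essay's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate heavy-tailed "losses" from a Student-t distribution,
# mimicking the essay's simulated real-world returns.
losses = stats.t.rvs(df=4, size=250 * 20, random_state=rng)

# Block maxima: split into 20 yearly blocks of 250 trading days.
block_size = 250
maxima = losses.reshape(-1, block_size).max(axis=1)

# Fit the GEV by maximum likelihood. Note scipy's shape c = -xi,
# so c < 0 corresponds to the heavy-tailed Frechet case.
c, loc, scale = stats.genextreme.fit(maxima)
print(f"shape c={c:.3f}, loc={loc:.3f}, scale={scale:.3f}")

# The 99% quantile of the block-maximum distribution is the
# 100-block return level, a VaR-style extreme loss estimate.
var_99 = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)
print(f"99% block-maximum VaR estimate: {var_99:.2f}")
```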




Bayesian Risk Management


Book Description

A risk measurement and management framework that takes model risk seriously.

Most financial risk models assume the future will look like the past, but effective risk management depends on identifying fundamental changes in the marketplace as they occur. Bayesian Risk Management details a more flexible approach to risk management and provides tools to measure financial risk in a dynamic market environment. This book opens a discussion about uncertainty in model parameters, model specifications, and model-driven forecasts in a way that standard statistical risk measurement does not. And unlike current machine-learning-based methods, the framework presented here allows you to measure risk in a fully Bayesian setting without losing the structure afforded by parametric risk and asset-pricing models.

- Recognize the assumptions embodied in classical statistics
- Quantify model risk along multiple dimensions without backtesting
- Model time series without assuming stationarity
- Estimate state-space time series models online with simulation methods
- Uncover uncertainty in workhorse risk and asset-pricing models
- Embed Bayesian thinking about risk within a complex organization

Ignoring uncertainty in risk modeling creates an illusion of mastery and fosters erroneous decision-making. Firms that ignore the many dimensions of model risk measure too little risk and end up taking on too much. Bayesian Risk Management provides a roadmap to better risk management through more circumspect measurement, with comprehensive treatment of model uncertainty.




Financial Risk Management with Bayesian Estimation of GARCH Models


Book Description

This book presents in detail methodologies for the Bayesian estimation of single-regime and regime-switching GARCH models. These models are widespread and essential tools in financial econometrics and have, until recently, mainly been estimated using the classical Maximum Likelihood technique. As this study aims to demonstrate, the Bayesian approach offers an attractive alternative which enables small sample results, robust estimation, model discrimination and probabilistic statements on nonlinear functions of the model parameters.

The author is indebted to numerous individuals for help in the preparation of this study. Primarily, I owe a great debt to Prof. Dr. Philippe J. Deschamps who inspired me to study Bayesian econometrics, suggested the subject, guided me under his supervision and encouraged my research. I would also like to thank Prof. Dr. Martin Wallmeier and my colleagues of the Department of Quantitative Economics, in particular Michael Beer, Roberto Cerratti and Gilles Kaltenrieder, for their useful comments and discussions. I am very indebted to my friends Carlos Ordás Criado, Julien A. Straubhaar, Jérôme Ph. A. Taillard and Mathieu Vuilleumier, for their support in the fields of economics, mathematics and statistics. Thanks also to my friend Kevin Barnes who helped with my English in this work. Finally, I am greatly indebted to my parents and grandparents for their support and encouragement while I was struggling with the writing of this thesis.
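
To make the estimation problem concrete, here is a minimal random-walk Metropolis sketch for a single-regime Gaussian GARCH(1,1), assuming a flat prior on the admissible parameter region and an illustrative proposal scale and chain length; the book develops considerably more refined samplers, including regime-switching extensions.

```python
import numpy as np

def garch_loglik(theta, r):
    """Gaussian GARCH(1,1) log-likelihood; theta = (omega, alpha, beta)."""
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf  # enforce positivity and covariance stationarity
    h = np.empty_like(r)
    h[0] = omega / (1 - alpha - beta)        # unconditional variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

def metropolis(r, theta0, n_iter=5000, step=0.01, seed=0):
    """Random-walk Metropolis with a flat prior on the admissible region."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = garch_loglik(theta, r)
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        logp_prop = garch_loglik(prop, r)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
            theta, logp = prop, logp_prop
        draws[i] = theta
    return draws

# Usage on simulated data (true omega=0.05, alpha=0.10, beta=0.85):
rng = np.random.default_rng(1)
n, omega, alpha, beta = 1500, 0.05, 0.10, 0.85
r = np.empty(n); h = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega + alpha * r[t] ** 2 + beta * h
draws = metropolis(r, theta0=(0.1, 0.1, 0.8))
print(draws[1000:].mean(axis=0))             # posterior means after burn-in
```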




Essays on Bayesian Inference in Financial Economics


Book Description

This dissertation consists of three essays on Bayesian inference in financial economics.

The first essay explores the impact of discretization errors on the parametric estimation of continuous-time financial models. The Euler and other discretization schemes introduce discretization errors when solving stochastic differential equations. The empirical impact of these errors on estimating two continuous-time financial models is investigated through Monte Carlo experiments comparing the "exact" estimator with the "Euler" estimator based on the Euler scheme. The primary finding is that reducing the discretization interval to reduce the discretization error does not necessarily improve the performance of the estimators. This implies that discretization schemes may yield reliable results when the sampling interval is reasonably small, and that shortening the discretization interval or using data augmentation techniques may be redundant in practice.

The second essay examines the identification problem in state-space models under the Bayesian framework. Underidentifiability causes no real difficulty in the Bayesian approach, in the sense that a legitimate posterior distribution can be obtained for unidentified parameters when appropriate priors are imposed. When estimating unidentified parameters, however, Markov chain Monte Carlo algorithms may yield misleading results even if they appear to converge successfully. In addition, the identification problem does not really matter when the concern is prediction from state-space models rather than parameter estimation.

The third essay studies credit risk models extensively using Bayesian inference. Bayesian inference is conducted and Markov chain Monte Carlo algorithms are developed for three popular credit risk models. Empirical results show that these three models, in which the same probability of default (PD) can be estimated using different information, may yield quite different results. Motivated by these findings about credit risk model uncertainty, I propose a "combined" Bayesian estimation method that incorporates information from different datasets and model structures for estimating the PD. This new approach provides insight into dealing with two practical problems in credit risk management: model uncertainty and data insufficiency.
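
A simplified illustration of the exact-versus-Euler comparison in the first essay (not its actual experiments): for the Vasicek model, whose transition density is known in closed form, both estimators reduce to the same AR(1) regression but invert it differently, so the Euler estimator's discretization bias is easy to see. All parameter values here are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.1, 1 / 12, 5000

# Simulate from the exact Gaussian transition (no discretization error):
# x[t+1] = theta + (x[t] - theta) * exp(-kappa*dt) + Gaussian noise.
x = np.empty(n); x[0] = theta
a = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - a ** 2) / (2 * kappa))
for t in range(n - 1):
    x[t + 1] = theta + (x[t] - theta) * a + sd * rng.standard_normal()

# Both estimators reduce to the AR(1) regression x[t+1] = c + b x[t] + e.
b, intercept = np.polyfit(x[:-1], x[1:], 1)
kappa_exact = -np.log(b) / dt        # inverts the exact transition
kappa_euler = (1 - b) / dt           # inverts the Euler approximation
print(f"true kappa=2.0  exact-MLE {kappa_exact:.3f}  Euler {kappa_euler:.3f}")
```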




Coherent Stress Testing


Book Description

In Coherent Stress Testing: A Bayesian Approach, industry expert Riccardo Rebonato presents a groundbreaking new approach to this important but often undervalued part of the risk management toolkit. Based on the author's extensive work, research and presentations in the area, the book fills a gap in quantitative risk management by introducing a new and intuitively appealing approach to stress testing based on expert judgement and Bayesian networks. It constitutes a radical departure from the traditional statistical methodologies based on Economic Capital or Extreme-Value-Theory approaches.

The book is split into four parts. Part I looks at stress testing and its role in modern risk management. It discusses the distinction between risk and uncertainty, the different types of probability used in risk management today, and the tasks for which each is best suited. Stress testing is positioned as a bridge between the statistical areas where VaR can be effective and the domain of total Keynesian uncertainty. Part II lays down the quantitative foundations for the concepts described in the rest of the book. Part III takes readers through the application of the tools discussed in Part II and introduces two systematic approaches to obtaining a coherent stress testing output that can satisfy the needs of industry users and regulators. In Part IV the author addresses more practical questions, such as embedding the suggestions of the book into a viable governance structure.
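
As a toy illustration of the book's central idea (an assumed example, not Rebonato's construction), expert-assigned conditional probabilities in a two-node Bayesian network combine into a coherent joint distribution over stress scenarios:

```python
# Events: S = sovereign crisis, B = banking crisis (conditional on S).
# All probabilities below are hypothetical expert judgements.
p_s = 0.05                      # expert prior: P(S)
p_b_given_s = 0.60              # expert judgement: P(B | S)
p_b_given_not_s = 0.02          # expert judgement: P(B | not S)

# The chain rule of the network turns the marginal and conditional
# judgements into joint probabilities for every stress scenario.
joint = {
    ("S", "B"): p_s * p_b_given_s,
    ("S", "no B"): p_s * (1 - p_b_given_s),
    ("no S", "B"): (1 - p_s) * p_b_given_not_s,
    ("no S", "no B"): (1 - p_s) * (1 - p_b_given_not_s),
}
for scenario, prob in joint.items():
    print(scenario, f"{prob:.4f}")
assert abs(sum(joint.values()) - 1.0) < 1e-12   # coherence check
```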




Essays in Risk Management and Financial Econometrics


Book Description

This dissertation consists of three chapters that concern risk management and financial econometrics.

Fannie Mae and Freddie Mac's implicit government guarantee is widely argued to cause irresponsible risk taking. Despite moral-hazard concerns, the first chapter presents evidence that Fannie Mae and Freddie Mac (the GSEs) managed home price risks during the 2000-2006 housing boom more effectively than private insurers. Mortgage origination data reveal that the GSEs were selecting loans with increasingly higher down payments, or lower loan-to-value ratios (LTVs), in boom areas than in other areas. Furthermore, the decline of LTVs in boom areas stems entirely from the segment insured by the GSEs alone; none of it stems from the segment co-insured by private mortgage insurers. Private mortgage insurers also did not lower their exposure to home price risks along other dimensions, including the percentage of high-LTV GSE loans they insured. To quantify how the GSEs' portfolios would have performed under alternative home price scenarios, I build an insurance valuation model based on competing-risks hazard regressions, a calibrated Hull-White term-structure model, and forecasts of prepayment and default speeds. I find that the GSEs' risk management would have been sufficient for the historically average 32% mean reversion but insufficient for the realized 95% mean reversion between 2006 and 2011. My results highlight that post-crisis reform of the mortgage insurance industry should carefully consider factors beyond moral hazard, such as mortgage insurers' future home price assumptions.

The second chapter studies high-dimensional time series, with an application to estimating the mean-variance frontier. A persistent challenge in macroeconomics and finance is how to draw inference from data with a large cross section but a short time series. Financial econometric techniques are largely designed for long time series and small cross sections, yet financial data typically have the opposite shape (large N, small T). One particular consequence of the large-N small-T setting is the underestimation of risk in the mean-variance frontier. We propose a correction for this finite-sample bias when the underlying returns follow a high-dimensional linear time series. Our algorithm first corrects the bias in the eigenvalues of the asset return covariance matrix, and then estimates the contribution of each leading factor to the mean-variance frontier. A cross-validation method is employed to select the optimal number of leading factors. The performance of the proposed methods is examined through extensive simulation studies.

The third chapter studies how expected home prices affect borrowers' default behavior. One of the penalties mortgage defaulters face is being locked out of the mortgage market and missing the home price appreciation. I find that this penalty deters some borrowers from defaulting: higher expected home price growth implies a lower ex-ante default probability. Furthermore, high-credit-score borrowers react more to past home price declines and future home price appreciation than low-credit-score borrowers, which suggests that high-credit-score borrowers are more likely to be strategic defaulters. A model is built to study the effect of changing the cooling-off period during which defaulters are locked out of the market. In areas with high expected home price appreciation, a longer cooling-off period amplifies the impact of each foreclosure; in areas with low expected appreciation, it reduces the number of foreclosures.
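
As a small, assumed illustration of the large-N small-T problem the second chapter addresses (not the chapter's algorithm), the sketch below shows how the in-sample mean-variance frontier overstates the achievable Sharpe ratio when the cross section is large relative to the sample length:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 80, 120                         # large cross section, short sample
mu = np.full(N, 0.05 / 12)             # identical true monthly means
Sigma = 0.04 / 12 * np.eye(N)          # i.i.d. assets for simplicity

R = rng.multivariate_normal(mu, Sigma, size=T)
mu_hat, S_hat = R.mean(axis=0), np.cov(R, rowvar=False)

# Maximum Sharpe ratio on the frontier: sqrt(mu' Sigma^{-1} mu) (monthly).
sr_true = np.sqrt(mu @ np.linalg.solve(Sigma, mu))
sr_insample = np.sqrt(mu_hat @ np.linalg.solve(S_hat, mu_hat))
print(f"population max Sharpe {sr_true:.2f}, in-sample {sr_insample:.2f}")
# The in-sample frontier sits far above the true one; eigenvalue
# correction and factor selection aim to shrink this gap.
```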




Bayesian Methods in Finance


Book Description

Bayesian Methods in Finance provides a detailed overview of the theory of Bayesian methods and explains their real-world applications to financial modeling. While the principles and concepts explained throughout the book can be used in financial modeling and decision making in general, the authors focus on portfolio management and market risk management—since these are the areas in finance where Bayesian methods have had the greatest penetration to date.




Essays in Risk Modeling, Asset Pricing and Network Measurement in Finance


Book Description

"Modelling financial interconnections and forecasting extreme losses are crucial for risk management in financial markets. This thesis studies multivariate risk spillovers at the high-dimensional market network level, as well as univariate extreme risk modelling at the asset level. The first chapter proposes a novel time series econometric method to measure high-dimensional directed and weighted market network structures. Direct and spillover effects at different horizons, between nodes and between groups, are measured in a unified framework. Using a similar network measurement framework, the second chapter investigates the relationship between stock illiquidity spillovers and the cross-section of expected returns. I find that central industries in illiquidity transmission networks earn higher average stock returns (around 4% per year) than other industries.The third chapter proposes a new Dynamic Stable GARCH model, which involves the use of stable distribution with time-dependent tail parameters to model and forecast tail risks in an extremely high volatility environment. We can differentiate extreme risks from normal market fluctuations with this model." --




Artificial Intelligence in Asset Management


Book Description

Artificial intelligence (AI) has grown in presence in asset management and has revolutionized the sector in many ways. It has improved portfolio management, trading, and risk management practices by increasing efficiency, accuracy, and compliance. In particular, AI techniques help construct portfolios based on more accurate risk and return forecasts and more complex constraints. Trading algorithms use AI to devise novel trading signals and execute trades with lower transaction costs. AI also improves risk modeling and forecasting by generating insights from new data sources. Finally, robo-advisors owe a large part of their success to AI techniques. Yet the use of AI can also create new risks and challenges, such as those resulting from model opacity, complexity, and reliance on data integrity.