High-Frequency Financial Econometrics


Book Description

A comprehensive introduction to the statistical and econometric methods for analyzing high-frequency financial data.

High-frequency trading is an algorithm-based computerized trading practice that allows firms to trade stocks in milliseconds. Over the last fifteen years, the use of statistical and econometric methods for analyzing high-frequency financial data has grown exponentially. This growth has been driven by the increasing availability of such data, the technological advancements that make high-frequency trading strategies possible, and the need for practitioners to analyze these data. This comprehensive book introduces readers to these emerging methods and tools of analysis. Yacine Aït-Sahalia and Jean Jacod cover the mathematical foundations of stochastic processes, describe the primary characteristics of high-frequency financial data, and present the asymptotic concepts that their analysis relies on. They also treat estimation of the volatility portion of the model, including methods that are robust to market microstructure noise, and address estimation and testing questions involving the jump part of the model. As they demonstrate, the practical importance and relevance of jumps in financial data are universally recognized, but only recently have econometric methods become available to analyze jump processes rigorously. Aït-Sahalia and Jacod approach high-frequency econometrics with a distinct focus on the financial side of matters while maintaining technical rigor, which makes this book invaluable to researchers and practitioners alike.
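
To make the central object concrete, here is a minimal sketch of the realized variance estimator of integrated variance, with sparse subsampling as one crude device against microstructure noise. The function names and simulation parameters are ours; the book develops far more refined noise-robust estimators and jump tests.

```python
import numpy as np

def realized_variance(log_prices):
    """Sum of squared log returns: the basic estimator of integrated
    variance from high-frequency data, consistent as the sampling
    interval shrinks when there is no microstructure noise."""
    returns = np.diff(log_prices)
    return np.sum(returns ** 2)

def subsampled_rv(log_prices, k):
    """Sample every k-th observation before computing realized variance,
    one crude way to mitigate microstructure noise (at the cost of
    discarding data)."""
    return realized_variance(log_prices[::k])

# Toy check on a simulated driftless diffusion with constant volatility
# (an idealized assumption, made here purely for illustration).
rng = np.random.default_rng(0)
n = 23_400                        # one observation per second, 6.5h day
sigma = 0.2                       # assumed annualized volatility
dt = 1.0 / (252 * n)              # year fraction per observation
log_p = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))
print(realized_variance(log_p))   # close to sigma**2 / 252
print(subsampled_rv(log_p, 5))
```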




Time Series in High Dimension: the General Dynamic Factor Model


Book Description

Factor models have become the most successful tool in the analysis and forecasting of high-dimensional time series. This monograph provides an extensive account of the so-called General Dynamic Factor Model methods. The topics covered include: asymptotic representation problems, estimation, forecasting, identification of the number of factors, identification of structural shocks, volatility analysis, and applications to macroeconomic and financial data.
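
As a rough illustration of the factor-model machinery: the GDFM itself operates in the frequency domain via dynamic principal components, so the static PCA sketch below is only a simplified stand-in, and all function names and the eigenvalue-ratio heuristic for choosing the number of factors are our illustrative choices, not the book's prescription.

```python
import numpy as np

def estimate_factors(X, r):
    """Static principal-components approximation to a factor model:
    decompose X (T x N) as F @ L.T + e with r factors."""
    T, N = X.shape
    X = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(X.T @ X / T)   # N x N covariance
    order = np.argsort(eigval)[::-1]               # descending eigenvalues
    loadings = eigvec[:, order[:r]] * np.sqrt(N)
    factors = X @ loadings / N
    return factors, loadings, eigval[order]

def eigenvalue_ratio(eigvals, r_max=8):
    """Pick the number of factors at the largest ratio of successive
    eigenvalues (one common heuristic among several)."""
    ratios = eigvals[:r_max] / eigvals[1:r_max + 1]
    return int(np.argmax(ratios)) + 1

# Toy usage: data generated from two factors plus idiosyncratic noise.
rng = np.random.default_rng(1)
T, N = 200, 100
F = rng.standard_normal((T, 2))
L = rng.standard_normal((N, 2))
X = F @ L.T + rng.standard_normal((T, N))
_, _, ev = estimate_factors(X, 2)
print(eigenvalue_ratio(ev))   # typically recovers 2
```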




Large-dimensional Panel Data Econometrics: Testing, Estimation And Structural Changes


Book Description

This book aims to fill the gap between panel data econometrics textbooks and the latest developments in 'big data', especially large-dimensional panel data econometrics. It introduces important research questions in large panels, including testing for cross-sectional dependence, estimation of factor-augmented panel data models, structural breaks in panels, and group patterns in panels. To tackle these high-dimensional issues, techniques used in machine learning approaches are also illustrated. Monte Carlo experiments and empirical examples are used to show how to implement these new inference methods. Large-Dimensional Panel Data Econometrics: Testing, Estimation and Structural Changes also introduces new research questions and results from the recent literature in this field.
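
One widely used example of the cross-sectional dependence testing mentioned above is Pesaran's CD statistic. A minimal sketch, with a toy data check and function names of our own choosing:

```python
import numpy as np
from scipy import stats

def pesaran_cd(residuals):
    """Pesaran's CD statistic for cross-sectional dependence in panels.
    residuals: T x N array of regression residuals (T periods, N units).
    Under the null of no cross-sectional dependence, CD ~ N(0, 1)."""
    T, N = residuals.shape
    corr = np.corrcoef(residuals, rowvar=False)   # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)                  # upper-triangle pairs i < j
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()
    pvalue = 2 * (1 - stats.norm.cdf(abs(cd)))
    return cd, pvalue

# Toy check: independent residuals should not reject the null.
rng = np.random.default_rng(2)
cd, p = pesaran_cd(rng.standard_normal((100, 30)))
print(f"CD = {cd:.2f}, p = {p:.2f}")
```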




Dynamic Factor Models


Book Description




High-Dimensional Covariance Estimation


Book Description

Methods for estimating sparse and large covariance matrices.

Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning. Recently, the classical sample covariance methodologies have been modified and improved upon to meet the needs of statisticians and researchers dealing with large correlated datasets. High-Dimensional Covariance Estimation focuses on the methodologies based on shrinkage, thresholding, and penalized likelihood, with applications to Gaussian graphical models, prediction, and mean-variance portfolio management. The book relies heavily on regression-based ideas and interpretations to connect and unify many existing methods and algorithms for the task.

High-Dimensional Covariance Estimation features chapters on:

- Data, Sparsity, and Regularization
- Regularizing the Eigenstructure
- Banding, Tapering, and Thresholding Covariance Matrices
- Sparse Gaussian Graphical Models
- Multivariate Regression

The book is an ideal resource for researchers in statistics, mathematics, business and economics, computer sciences, and engineering, as well as a useful text or supplement for graduate-level courses in multivariate analysis, covariance estimation, statistical learning, and high-dimensional data analysis.
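
A minimal sketch of two of the regularization ideas named above, hard thresholding and linear shrinkage toward a scaled identity. The fixed shrinkage weight below is a placeholder chosen by hand; data-driven choices (such as Ledoit-Wolf shrinkage) are among the methods the book treats.

```python
import numpy as np

def hard_threshold(S, tau):
    """Hard-threshold the off-diagonal entries of a sample covariance
    matrix: set entries with |s_ij| <= tau to zero, keep the diagonal.
    A basic sparsity-inducing estimator for large matrices."""
    S_t = np.where(np.abs(S) > tau, S, 0.0)
    np.fill_diagonal(S_t, np.diag(S))
    return S_t

def shrink_to_identity(S, alpha):
    """Linear shrinkage (1 - alpha) * S + alpha * mu * I toward a scaled
    identity target, where mu is the average variance. Guarantees a
    positive definite estimate even when S is rank deficient."""
    mu = np.trace(S) / S.shape[0]
    return (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])

# Toy usage with fewer observations (n = 50) than variables (p = 100),
# so the raw sample covariance is singular.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 100))
S = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(S))                                  # at most 49
print(np.all(np.linalg.eigvalsh(shrink_to_identity(S, 0.5)) > 0))  # True
```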




The Oxford Handbook of Economic Forecasting


Book Description

Greater data availability has been coupled with developments in statistical theory and economic theory to allow more elaborate and complicated models to be entertained. These include factor models, DSGE models, restricted vector autoregressions, and non-linear models.




The Elements of Financial Econometrics


Book Description

A compact, master's-level textbook on financial econometrics, focusing on methodology and including real financial data illustrations throughout. The mathematical level is purposely kept moderate, allowing the power of the quantitative methods to be understood without too much technical detail.




Essays in Honor of Cheng Hsiao


Book Description

Including contributions spanning a variety of theoretical and applied topics in econometrics, this volume of Advances in Econometrics is published in honour of Cheng Hsiao.




Discretization of Processes


Book Description

In applications, and especially in mathematical finance, random time-dependent events are often modeled as stochastic processes. Assumptions are made about the structure of such processes, and serious researchers will want to justify those assumptions through the use of data. As statisticians are wont to say, “In God we trust; all others must bring data.” This book establishes the theory of how to estimate not just scalar parameters of a proposed model, but also the underlying structure of the model itself. Classic statistical tools are used: the law of large numbers and the central limit theorem. Researchers have recently developed creative and original methods to use these tools in sophisticated (but highly technical) ways to reveal new details about the underlying structure. For the first time in book form, the authors present these latest techniques, based on research from the last ten years and including new findings. This book will be of special interest to researchers combining the theory of mathematical finance with its investigation using market data, and it will also prove useful in a broad range of applications, such as mathematical biology, chemical engineering, and physics.
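
To illustrate the discretization viewpoint: below is an Euler scheme for a driftless diffusion, where the sum of squared increments (realized variance) converges to the integrated variance as the step count grows, a law-of-large-numbers statement of the kind the book makes precise. The toy volatility function and parameters are ours.

```python
import numpy as np

def euler_path(sigma_fn, x0, T, n, rng):
    """Euler discretization of dX_t = sigma(X_t) dW_t on [0, T]
    with n steps (drift omitted for brevity)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + sigma_fn(x[i]) * np.sqrt(dt) * rng.standard_normal()
    return x

rng = np.random.default_rng(4)
sigma = lambda x: 0.3 * (1 + x ** 2) ** 0.25   # toy state-dependent volatility

# As n grows, realized variance approaches the integrated variance
# int_0^T sigma(X_t)^2 dt (approximated here by a Riemann sum).
for n in (100, 10_000):
    x = euler_path(sigma, 1.0, 1.0, n, rng)
    rv = np.sum(np.diff(x) ** 2)               # realized variance
    iv = np.sum(sigma(x[:-1]) ** 2) * (1.0 / n)  # Riemann-sum benchmark
    print(n, rv, iv)
```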




Mixed Effects Models for Complex Data


Book Description

Although standard mixed effects models are useful in a range of studies, other approaches must often be used in conjunction with them when studying complex or incomplete data. Mixed Effects Models for Complex Data discusses commonly used mixed effects models and presents appropriate approaches to address dropouts, missing data, measurement errors, censoring, and outliers. For each class of mixed effects model, the author reviews the corresponding class of regression model for cross-sectional data.

An overview of general models and methods, along with motivating examples. After presenting real data examples and outlining general approaches to the analysis of longitudinal/clustered data and incomplete data, the book introduces linear mixed effects (LME) models, generalized linear mixed models (GLMMs), nonlinear mixed effects (NLME) models, and semiparametric and nonparametric mixed effects models. It also includes general approaches for the analysis of complex data with missing values, measurement errors, censoring, and outliers.

Self-contained coverage of specific topics. Subsequent chapters delve more deeply into missing data problems, covariate measurement errors, and censored responses in mixed effects models. Focusing on incomplete data, the book also covers survival and frailty models, joint models of survival and longitudinal data, robust methods for mixed effects models, marginal generalized estimating equation (GEE) models for longitudinal or clustered data, and Bayesian methods for mixed effects models.

Background material. In the appendix, the author provides background information, such as likelihood theory, the Gibbs sampler, rejection and importance sampling methods, numerical integration methods, optimization methods, the bootstrap, and matrix algebra.

Failure to properly address missing data, measurement errors, and other issues in statistical analyses can lead to severely biased or misleading results. This book explores the biases that arise when naïve methods are used and shows which approaches should be used to achieve accurate results in longitudinal data analysis.
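
As a small concrete example of the simplest model class above, here is a linear mixed effects (LME) model with a random intercept per group, fit with statsmodels on simulated clustered data. The simulation and variable names are ours; handling dropouts, measurement error, and censoring requires the more elaborate machinery the book develops.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate clustered data: 30 groups of 10 observations, with a
# group-level random intercept and one fixed-effect covariate.
rng = np.random.default_rng(5)
groups = np.repeat(np.arange(30), 10)
u = rng.normal(0, 1.0, 30)[groups]          # random effects by group
x = rng.standard_normal(300)
y = 2.0 + 0.5 * x + u + rng.normal(0, 0.5, 300)
data = pd.DataFrame({"y": y, "x": x, "g": groups})

# LME model: fixed effect for x, random intercept for each group,
# estimated by (restricted) maximum likelihood.
model = smf.mixedlm("y ~ x", data, groups=data["g"])
result = model.fit()
print(result.summary())   # fixed effects near (2.0, 0.5) by construction
```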