Forecasting in the Presence of Structural Breaks and Model Uncertainty


Book Description

Forecasting in the presence of structural breaks and under model uncertainty are both active areas of research with implications for practical forecasting problems. This book addresses the forecasting of macroeconomic and financial variables, and considers various methods of dealing with model instability and model uncertainty when forming forecasts.




The Oxford Handbook of Economic Forecasting


Book Description

This Handbook provides up-to-date coverage of both new and well-established fields in the sphere of economic forecasting. The chapters are written by world experts in their respective fields, and provide authoritative yet accessible accounts of the key concepts, subject matter, and techniques in a number of diverse but related areas. The volume covers the ways in which ever more plentiful data and greater computational power have been used in forecasting, in terms of the frequency of observations, the number of variables, and the use of multiple data vintages. Greater data availability has been coupled with developments in statistical theory and economic analysis to allow more elaborate and complicated models to be entertained; the volume provides explanations and critiques of these developments. These include factor models, DSGE models, restricted vector autoregressions, and non-linear models, as well as models for handling data observed at mixed frequencies, high-frequency data, and multiple data vintages, and methods for forecasting when there are structural breaks and for forecasting the breaks themselves. Also covered are areas less commonly associated with economic forecasting, such as climate change, health economics, long-horizon growth forecasting, and political elections; econometric forecasting has important contributions to make in these areas, and developments there in turn inform the mainstream.




Handbook of Economic Forecasting


Book Description

The highly prized ability to make financial plans with some certainty about the future comes from the core fields of economics. In recent years, the availability of more data, analytical tools of greater precision, and ex post studies of business decisions have increased demand for information about economic forecasting. Volumes 2A and 2B, which follow Nobel laureate Clive Granger's Volume 1 (2006), concentrate on two major subjects. Volume 2A covers innovations in methodologies, specifically macroforecasting and forecasting financial variables. Volume 2B investigates commercial applications, with sections on forecasters' objectives and methodologies. Experts provide surveys of a large range of literature scattered across applied and theoretical statistics journals as well as econometrics and empirical economics journals. The Handbook of Economic Forecasting Volumes 2A and 2B provide a unique compilation of chapters giving a coherent overview of forecasting theory and applications in one place, with up-to-date accounts of all major conceptual issues.

- Focuses on innovation in economic forecasting via industry applications
- Presents coherent summaries of subjects in economic forecasting that stretch from methodologies to applications
- Makes details about economic forecasting accessible to scholars in fields outside economics




Forecasting Financial Time Series Using Model Averaging


Book Description

Believing in a single model may be dangerous, and addressing model uncertainty by averaging over different models when making forecasts may be very beneficial. In this thesis we focus on forecasting financial time series using model averaging schemes as a way to produce optimal forecasts. We derive and discuss, in simulation exercises and empirical applications, model averaging techniques that can reproduce stylized facts of financial time series, such as low predictability and time-varying patterns. We emphasize that model averaging is not a "magic" methodology that solves the problem of poor forecasts a priori. Averaging techniques have an essential requirement: the individual models have to fit the data. In the first section we provide a general outline of the thesis and its contributions to previous research. In Chapter 2 we focus on the use of time-varying model weight combinations. In Chapter 3 we extend the analysis of the previous chapter to a new Bayesian averaging scheme that models structural instability carefully. In Chapter 4 we focus on forecasting the term structure of U.S. interest rates. In Chapter 5 we attempt to shed more light on the forecasting performance of stochastic day-ahead price models. We examine six stochastic price models that forecast day-ahead prices on the two most active power exchanges in the world: the Nordic Power Exchange and the Amsterdam Power Exchange. Three of these forecasting models include weather forecasts. To sum up, the research finds an increase in forecasting power for financial time series when parameter uncertainty, model uncertainty, and optimal decision making are taken into account.
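
To make the time-varying weighting idea concrete, here is a minimal sketch of one common combination scheme: weights proportional to the inverse of each model's exponentially discounted squared forecast errors. It is an illustration under stated assumptions, not the specific schemes derived in the thesis; the function name, the discount factor `delta`, and the toy interface are assumptions made for the example.

```python
import numpy as np

def combine_forecasts(forecasts, actuals, delta=0.95, eps=1e-8):
    """Combine K models' one-step-ahead forecasts with time-varying weights
    proportional to the inverse of each model's exponentially discounted
    sum of squared forecast errors (using past errors only)."""
    forecasts = np.asarray(forecasts, dtype=float)  # shape (T, K)
    actuals = np.asarray(actuals, dtype=float)      # shape (T,)
    T, K = forecasts.shape
    combined = np.empty(T)
    disc_sse = np.full(K, eps)      # tiny starting value -> equal weights at t=0
    for t in range(T):
        w = 1.0 / disc_sse
        w /= w.sum()                            # normalize weights to sum to one
        combined[t] = w @ forecasts[t]
        err = actuals[t] - forecasts[t]         # realized forecast errors at t
        disc_sse = delta * disc_sse + err ** 2  # discount old errors
    return combined
```

With delta < 1 the weights adapt over time: models that have forecast well recently receive more weight, which matters when the identity of the best model shifts, as it tends to under structural instability.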




The Methodology and Practice of Econometrics


Book Description

Building upon, and celebrating, the work of David Hendry, this volume consists of a number of specially commissioned pieces from some of the leading econometricians in the world. It reflects on recent advances in econometrics and considers future progress in the methodology of econometrics.




Macroeconomic Forecasting in the Era of Big Data


Book Description

This book surveys big data tools used in macroeconomic forecasting and addresses related econometric issues, including how to capture dynamic relationships among variables; how to select parsimonious models; how to deal with model uncertainty, instability, non-stationarity, and mixed-frequency data; and how to evaluate forecasts. Each chapter is self-contained, with references, and provides solid background information while also reviewing the latest advances in the field. Accordingly, the book offers a valuable resource for researchers, professional forecasters, and students of quantitative economics.




Model-Free Prediction and Regression


Book Description

The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset into one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resorting to unrealistic assumptions such as normality.

Prediction has traditionally been approached via a model-based paradigm: (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th-century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the "big data" era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful.

Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., describing how Y depends on X. Hence, just as prediction can be treated as a by-product of model fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas to additionally obtain point estimates and confidence intervals for relevant parameters, leading to an alternative, transformation-based approach to statistical inference.
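
As a toy illustration of the transform-then-resample idea, and emphatically not the book's own algorithm, the sketch below reduces a trending series to approximately i.i.d. residuals by removing a trailing moving-average level, resamples those residuals to approximate the predictive distribution of the next observation, and inverts the transformation to read off a prediction interval. The window length, quantile levels, and simulated data are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy series with a slowly varying mean: y_t = trend + noise
t = np.arange(300)
y = 0.01 * t + rng.normal(size=t.size)

# Step 1: transform toward i.i.d. by removing a trailing moving-average
# level (the window includes the current point -- crude, but fine here).
window = 25
local_mean = np.convolve(y, np.ones(window) / window, mode="valid")
resid = y[window - 1:] - local_mean   # approximately i.i.d. deviations

# Step 2: resample the residuals to build a predictive distribution for
# the next observation, then invert the transform by adding back the level.
B = 5000
draws = local_mean[-1] + rng.choice(resid, size=B, replace=True)
lo, hi = np.quantile(draws, [0.05, 0.95])
print(f"90% prediction interval for y[T+1]: [{lo:.2f}, {hi:.2f}]")
```

The point of the exercise is that the interval is built from observable quantities (past data and their empirical deviations) rather than from a parametric error distribution.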




Volatility and Time Series Econometrics


Book Description

Robert Engle received the Nobel Prize in Economics in 2003 for his work in time series econometrics. This book contains 16 original research contributions by some of the leading academic researchers in the fields of time series econometrics, forecasting, volatility modelling, financial econometrics, and urban economics, along with historical perspectives on the field of time series econometrics more generally. Engle's Nobel Prize citation focuses on his path-breaking work on autoregressive conditional heteroskedasticity (ARCH) and the profound effect that this work has had on the field of financial econometrics. Several of the chapters focus on conditional heteroskedasticity and develop the ideas of Engle's Nobel Prize-winning work. Engle's work has had its most profound effect on the modelling of financial variables, and several of the chapters use newly developed time series methods to study their behavior. Each of the 16 chapters may be read in isolation, but all of them build on, and relate to, the seminal work of Nobel Laureate Robert F. Engle.
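
Since ARCH is the thread running through the volume, a minimal simulation sketch may help fix ideas. The recursion below is the standard textbook ARCH(1) form, sigma_t^2 = omega + alpha * eps_{t-1}^2 with eps_t = sigma_t * z_t; the parameter values and function name are illustrative assumptions, not taken from the book.

```python
import numpy as np

def simulate_arch1(omega=0.2, alpha=0.7, T=1000, seed=0):
    """Simulate an ARCH(1) process:
        eps_t = sigma_t * z_t,  z_t ~ N(0, 1)
        sigma_t^2 = omega + alpha * eps_{t-1}^2
    Requires 0 <= alpha < 1 for a finite unconditional variance."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(T)
    eps = np.zeros(T)
    sigma2 = np.zeros(T)
    sigma2[0] = omega / (1.0 - alpha)   # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, T):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2
```

Plotting the simulated eps series shows the volatility clustering that the Nobel citation highlights: large shocks tend to be followed by further large shocks of either sign, even though the process is serially uncorrelated.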




Advances in Economic Forecasting


Book Description

The book's contributors assess the performance of economic forecasting methods, argue that data can be better exploited through model and forecast combination, and advocate for models that are adaptive and perform well in the presence of nonlinearity and structural change.




Large Dimensional Factor Analysis


Book Description

Large Dimensional Factor Analysis provides a survey of the main theoretical results for large dimensional factor models, emphasizing results that have implications for empirical work. The authors focus on the development of static factor models and on the use of estimated factors in subsequent estimation and inference. Large Dimensional Factor Analysis discusses how to determine the number of factors, how to conduct inference when estimated factors are used in regressions, how to assess the adequacy of observed variables as proxies for latent factors, how to exploit the estimated factors to test for unit roots and common trends, and how to estimate panel cointegration models.
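
As an illustration of the principal-components estimator that underlies most of these results, the sketch below extracts r static factors from a T x N panel under the common normalization F'F / T = I. It is a generic textbook sketch, not code from the monograph; the function name and interface are assumptions made for the example.

```python
import numpy as np

def estimate_factors(X, r):
    """Principal-components estimation of r static factors from a T x N
    panel X, using the normalization F'F / T = identity."""
    X = np.asarray(X, dtype=float)
    T, N = X.shape
    X = X - X.mean(axis=0)                    # demean each series
    eigval, eigvec = np.linalg.eigh(X @ X.T)  # T x T; eigenvalues ascending
    top = np.argsort(eigval)[::-1][:r]        # indices of the r largest
    F = np.sqrt(T) * eigvec[:, top]           # T x r estimated factors
    Lam = X.T @ F / T                         # N x r estimated loadings
    return F, Lam
```

In practice r is unknown, which is exactly the "how many factors" question the survey addresses; criteria such as those of Bai and Ng choose r by trading off fit against a penalty that grows with both N and T.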