Robust Inference Using Higher Order Influence Functions


Book Description

We present a theory of point and interval estimation for nonlinear functionals in parametric, semi-, and non-parametric models based on higher order influence functions (Robins 2004, Sec. 9; Li et al. 2006; Tchetgen et al. 2006; Robins et al. 2007). Higher order influence functions are higher order U-statistics. Our theory extends the first order semiparametric theory of Bickel et al. (1993) and van der Vaart (1991) by incorporating the theory of higher order scores considered by Pfanzagl (1990), Small and McLeish (1994), and Lindsay and Waterman (1996). The theory reproduces many previous results, produces new non-√n results, and opens up the ability to perform optimal non-√n inference in complex high dimensional models. We present novel rate-optimal point and interval estimators for various functionals of central importance to biostatistics in settings in which estimation at the expected 1/√n rate is not possible, owing to the curse of dimensionality. We also show that our higher order influence functions have a multi-robustness property that extends the double robustness property of first order influence functions described by Robins and Rotnitzky (2001) and van der Laan and Robins (2003).
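As background for readers unfamiliar with the term (this is standard material, not from the book itself): an mth-order U-statistic with symmetric kernel h averages h over all size-m subsets of the sample,

```latex
U_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h\!\left(X_{i_1}, \ldots, X_{i_m}\right).
```

A first order influence function estimator corresponds to m = 1, an ordinary sample average; the higher order corrections studied here take m ≥ 2, for example double sums over pairs of observations when m = 2.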







Robust Inference


Book Description




Statistical Inference


Book Description

In many ways, estimation by an appropriate minimum distance method is one of the most natural ideas in statistics. However, there are many different ways of constructing an appropriate distance between the data and the model: the scope of the field referred to as "Minimum Distance Estimation" is vast.




Longitudinal Data Analysis


Book Description

Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and methods.




Handbook of Matching and Weighting Adjustments for Causal Inference


Book Description

An observational study infers the effects caused by a treatment, policy, program, intervention, or exposure in a context in which randomized experimentation is unethical or impractical. One task in an observational study is to adjust for visible pretreatment differences between the treated and control groups. Multivariate matching and weighting are two modern forms of adjustment. This handbook provides a comprehensive survey of the most recent methods of adjustment by matching, weighting, machine learning, and their combinations. Three additional chapters introduce the steps from association to causation that follow after adjustments are complete. When used alone, matching and weighting do not use outcome information, so they are part of the design of an observational study. When used in conjunction with models for the outcome, matching and weighting may enhance the robustness of model-based adjustments. The book is for researchers in medicine, economics, public health, psychology, epidemiology, public program evaluation, and statistics who examine evidence of the effects on human beings of treatments, policies, or exposures.
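Not from the handbook itself, but as a concrete illustration of the weighting idea it surveys, here is a minimal Python sketch of inverse probability weighting on simulated data. The data-generating process and all variable names are invented for the example, and the true propensity score is assumed known, which real observational studies must estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical observational data: one confounder x drives both
# treatment assignment t and the outcome y. True effect is 2.0.
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))            # true propensity score P(T = 1 | x)
t = rng.binomial(1, p)
y = 2.0 * t + x + rng.normal(size=n)

# Naive group-mean difference is biased because treated units have larger x.
naive_effect = y[t == 1].mean() - y[t == 0].mean()

# Inverse probability weighting: reweight each group by the inverse of its
# assignment probability so the weighted sample mimics a randomized trial.
ipw_effect = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))

print(f"naive: {naive_effect:.2f}, IPW: {ipw_effect:.2f}")
```

The naive estimate overstates the effect because the confounder shifts treated outcomes upward, while the weighted contrast recovers something close to the true value of 2.0.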




Proceedings of the Second Seattle Symposium in Biostatistics


Book Description

This volume contains a selection of papers presented at the Second Seattle Symposium in Biostatistics: Analysis of Correlated Data. The symposium was held in 2000 to celebrate the 30th anniversary of the University of Washington School of Public Health and Community Medicine. It featured keynote lectures by Norman Breslow, David Cox and Ross Prentice and 16 invited presentations by other prominent researchers. The papers contained in this volume encompass recent methodological advances in several important areas, such as longitudinal data, multivariate failure time data and genetic data, as well as innovative applications of the existing theory and methods. This volume is a valuable reference for researchers and practitioners in the field of correlated data analysis.




Robust Statistical Procedures


Book Description

A broad and unified methodology for robust statistics, with exciting new applications. Robust statistics is one of the fastest growing fields in contemporary statistics. It is also one of the more diverse and sometimes confounding areas, given the many different assessments and interpretations of robustness by theoretical and applied statisticians. This innovative book unifies the many varied, yet related, concepts of robust statistics on a sound theoretical foundation. It seamlessly integrates asymptotics and interrelations, and provides statisticians with an effective system for dealing with the interrelations between the various classes of procedures. Drawing on the expertise of researchers from around the world, and covering over a decade's worth of developments in the field, Robust Statistical Procedures: Asymptotics and Interrelations:

- Discusses both theory and applications in its two parts, from the fundamentals to robust statistical inference
- Thoroughly explores the interrelations between diverse classes of procedures, unlike any other book
- Compares nonparametric procedures with robust statistics, explaining in detail asymptotic representations for various estimators
- Provides a timesaving list of mathematical tools for the problems under discussion
- Keeps mathematical abstractions to a minimum, in spite of its largely theoretical content
- Includes useful problems and exercises at the end of each chapter
- Offers strategies for more complex models when using robust statistical procedures

Self-contained and rounded in approach, this book is invaluable for applied statisticians and theoretical researchers, for graduate students in mathematical statistics, and for anyone interested in the influence of this methodology.




Robust Inference in Econometrics with Applications to Time Series and Panel Data Models


Book Description

Abstract: Having robust methods of inference is important in econometrics to achieve reliable results. This thesis tackles robustness issues in three different contexts: structural change in panel data robust to a short transition period; inference on the mean of a time series robust to the so-called ill-posed problem; and inference on the slope of a trend function robust to the stationary or integrated nature of the noise component.

Chapter 1 considers testing for and estimating an unknown structural break date in panel data models in the presence of individual specific effects and serial correlation, for both short and long panels. I allow for a time-varying effect after a regime change in the form of a short transition period. A statistic that has a pivotal limit distribution under a standard asymptotic framework is proposed and shown to be robust to the transition period. The usefulness of the method is illustrated via simulations and empirical applications.

Chapter 2 deals with the relevance of so-called impossibility results in the context of estimating the spectral density function of a stationary process at the zero frequency. As shown previously, any estimate will have an infinite minimax risk. Most often, this quantity is a nuisance parameter whose estimate is needed to obtain test statistics that have a pivotal distribution. In this context, I argue that such an impossibility result is irrelevant. I show that, in the presence of the discontinuities that cause the ill-posedness, using the true value leads to tests that have either 0% or 100% size and, hence, to confidence intervals that are completely uninformative. On the other hand, tests based on standard estimates have well defined limit distributions and are, accordingly, more informative and robust.

Chapter 3 is concerned with inference on the slope of the trend function of a time series whose noise component can be stationary or integrated. I focus on a procedure suggested by Perron and Yabu (2009) and prove that it has the correct size uniformly over the specified parameter space, but that it is not uniformly asymptotically similar.




The Work of Raymond J. Carroll


Book Description

This volume contains Raymond J. Carroll's research, along with commentary on its impact by leading statisticians. Each of the seven main parts focuses on a key research area: Measurement Error; Transformation and Weighting; Epidemiology; Nonparametric and Semiparametric Regression for Independent Data; Nonparametric and Semiparametric Regression for Dependent Data; Robustness; and Other Work. The seven subject areas reviewed in this book were chosen by Ray himself, as were the articles representing each area. The commentaries not only review Ray's work but are also filled with history and anecdotes. Raymond J. Carroll's impact on statistics and numerous other fields of science is far-reaching. His vast catalog of work spans from fundamental contributions to statistical theory to innovative methodological development and new insights in disciplinary science. From the outset of his career, rather than taking the "safe" route of pursuing incremental advances, Ray has focused on tackling the most important challenges. In doing so, it is fair to say that he has defined a host of areas within statistics, including weighting and transformation in regression, measurement error modeling, quantitative methods for nutritional epidemiology, and non- and semiparametric regression.