Handbook of Matching and Weighting Adjustments for Causal Inference


Book Description

An observational study infers the effects caused by a treatment, policy, program, intervention, or exposure in a context in which randomized experimentation is unethical or impractical. One task in an observational study is to adjust for visible pretreatment differences between the treated and control groups. Multivariate matching and weighting are two modern forms of adjustment. This handbook provides a comprehensive survey of the most recent methods of adjustment by matching, weighting, machine learning and their combinations. Three additional chapters introduce the steps from association to causation that follow after adjustments are complete. When used alone, matching and weighting do not use outcome information, so they are part of the design of an observational study. When used in conjunction with models for the outcome, matching and weighting may enhance the robustness of model-based adjustments. The book is for researchers in medicine, economics, public health, psychology, epidemiology, public program evaluation, and statistics who examine evidence of the effects on human beings of treatments, policies or exposures.
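To make the idea of adjustment by matching concrete, the sketch below shows one common variant, 1:1 nearest-neighbor matching on an estimated propensity score, applied to simulated data. It is only an illustration of the general technique, not code from the handbook, and the variable names and simulated data-generating process are hypothetical.

    # Illustrative sketch (not from the handbook): 1:1 nearest-neighbor
    # matching on an estimated propensity score, with replacement.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=(n, 2))                        # pretreatment covariates
    logit = 0.8 * x[:, 0] - 0.5 * x[:, 1]
    treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # treatment indicator

    # Estimate propensity scores with a simple logistic regression fitted by
    # gradient ascent (no outcome information is used anywhere in this sketch).
    X = np.column_stack([np.ones(n), x])
    beta = np.zeros(3)
    for _ in range(500):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += X.T @ (treat - p) / n
    pscore = 1 / (1 + np.exp(-X @ beta))

    # Match each treated unit to the closest control on the propensity score.
    treated = np.flatnonzero(treat == 1)
    controls = np.flatnonzero(treat == 0)
    matched = np.array([controls[np.argmin(np.abs(pscore[controls] - pscore[i]))]
                        for i in treated])

    # Standardized mean difference: covariate balance before and after matching.
    def smd(a, b):
        return (a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

    print("SMD of x1, all controls:    ", round(smd(x[treat == 1, 0], x[treat == 0, 0]), 3))
    print("SMD of x1, matched controls:", round(smd(x[treated, 0], x[matched, 0]), 3))

Because no outcome variable appears in the sketch, the matching and its balance check can be completed before outcomes are examined, which is the sense in which matching belongs to the design stage of an observational study.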




Essays on Matching and Weighting for Causal Inference in Observational Studies


Book Description

A simulation study with different settings is conducted to compare the proposed weighting scheme to inverse probability of treatment weighting (IPTW), including generalized propensity score estimation methods that also explicitly account for covariate balance in the probability estimation process. The applicability of the methods to continuous treatments is also tested. The results show that directly targeting balance with the weights, instead of focusing on estimating treatment assignment probabilities, provides the best results in terms of bias and root mean square error of the treatment effect estimator. The effects of the intensity level of the 2010 Chilean earthquake on posttraumatic stress disorder are estimated using the proposed methodology.
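For readers unfamiliar with the IPTW benchmark mentioned above, the sketch below shows how inverse probability of treatment weights are formed and used to estimate an average treatment effect on simulated data. It illustrates the standard IPTW estimator only, not the thesis's proposed balance-targeting weights, and all names and the simulated setup are hypothetical.

    # Illustrative sketch of inverse probability of treatment weighting (IPTW);
    # not the balance-targeting scheme proposed in the thesis.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    x = rng.normal(size=n)                       # a single confounder
    p = 1 / (1 + np.exp(-0.5 * x))               # treatment assignment probabilities
    t = rng.binomial(1, p)
    y = 2.0 * t + x + rng.normal(size=n)         # outcome; true effect is 2

    # IPTW weights; in practice p would be estimated, here the true values are used.
    w = t / p + (1 - t) / (1 - p)

    # Weighted difference in means estimates the average treatment effect.
    ate = (np.sum(w * t * y) / np.sum(w * t)
           - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
    print("IPTW estimate of the treatment effect:", round(ate, 3))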




Handbook of Design and Analysis of Experiments


Book Description

This carefully edited collection synthesizes the state of the art in the theory and applications of designed experiments and their analyses. It provides a detailed overview of the tools required for the optimal design of experiments and their analyses. The handbook covers many recent advances in the field, including designs for nonlinear models and algorithms applicable to a wide variety of design problems. It also explores the extensive use of experimental designs in marketing, the pharmaceutical industry, engineering and other areas.




Causal Inference in Statistics, Social, and Biomedical Sciences


Book Description

This text presents statistical methods for studying causal effects and discusses how readers can assess such effects in simple randomized experiments.




Target Estimation and Adjustment Weighting for Survey Nonresponse and Sampling Bias


Book Description

We elaborate a general workflow of weighting-based survey inference, decomposing it into two main tasks. The first is the estimation of population targets from one or more sources of auxiliary information. The second is the construction of weights that calibrate the survey sample to the population targets. We emphasize that these tasks are predicated on models of the measurement, sampling, and nonresponse processes whose assumptions cannot be fully tested. After describing this workflow in abstract terms, we describe in detail how it can be applied to the analysis of historical and contemporary opinion polls. We also discuss extensions of the basic workflow, particularly inference for causal quantities and multilevel regression and poststratification.
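The two tasks can be made concrete with a small sketch. Below, hypothetical population margins for two demographic variables play the role of the estimated targets, and raking (iterative proportional fitting) constructs calibration weights for a simulated sample; the names, margins, and data are illustrative assumptions, not material from the book.

    # Illustrative sketch of the two-task workflow: fix population targets,
    # then calibrate the sample to them by raking.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 800
    age_grp = rng.choice([0, 1], size=n, p=[0.7, 0.3])  # sample over-represents group 0
    educ = rng.choice([0, 1], size=n, p=[0.4, 0.6])

    # Task 1: population targets, e.g. taken from a census or a reference survey.
    target_age = {0: 0.55, 1: 0.45}
    target_educ = {0: 0.50, 1: 0.50}

    # Task 2: rake the weights until the weighted sample margins match the targets.
    w = np.ones(n)
    for _ in range(50):
        for var, targets in ((age_grp, target_age), (educ, target_educ)):
            total = w.sum()
            for g, tgt in targets.items():
                w[var == g] *= tgt * total / w[var == g].sum()

    print("weighted share, age group 1: ", round(w[age_grp == 1].sum() / w.sum(), 3))
    print("weighted share, educ group 1:", round(w[educ == 1].sum() / w.sum(), 3))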




Handbook of Bayesian Variable Selection


Book Description

Bayesian variable selection has experienced substantial developments over the past 30 years with the proliferation of large data sets. Identifying relevant variables to include in a model allows simpler interpretation, avoids overfitting and multicollinearity, and can provide insights into the mechanisms underlying an observed phenomenon. Variable selection is especially important when the number of potential predictors is substantially larger than the sample size and sparsity can reasonably be assumed. The Handbook of Bayesian Variable Selection provides a comprehensive review of theoretical, methodological, and computational aspects of Bayesian methods for variable selection. The topics covered include spike-and-slab priors, continuous shrinkage priors, Bayes factors, Bayesian model averaging, partitioning methods, as well as variable selection in decision trees and edge selection in graphical models. The handbook targets graduate students and established researchers who seek to understand the latest developments in the field, and it provides a valuable reference for anyone interested in applying existing methods or pursuing methodological extensions.

Features:
• Provides a comprehensive review of methods and applications of Bayesian variable selection.
• Is divided into four parts: Spike-and-Slab Priors; Continuous Shrinkage Priors; Extensions to Various Modeling Frameworks; and Other Approaches to Bayesian Variable Selection.
• Covers theoretical and methodological aspects, as well as worked-out examples with R code provided in the online supplement.
• Includes contributions by experts in the field.
• Is supported by a website with code, data, and other supplementary material.
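As a small illustration of the first prior family named above, the sketch below draws coefficients from a simple spike-and-slab prior: each predictor is included with some prior probability, included coefficients receive a diffuse normal "slab", and excluded ones sit at the point-mass "spike" at zero. It is an illustrative sketch with assumed settings, not code from the handbook's R supplement.

    # Illustrative draw from a spike-and-slab prior (not from the handbook).
    import numpy as np

    rng = np.random.default_rng(3)
    p = 10                      # number of candidate predictors
    theta = 0.2                 # prior inclusion probability
    slab_sd = 2.0               # standard deviation of the slab component

    gamma = rng.binomial(1, theta, size=p)              # inclusion indicators
    beta = np.where(gamma == 1,
                    rng.normal(0.0, slab_sd, size=p),   # slab: diffuse normal
                    0.0)                                # spike: exactly zero
    print("included predictors:", np.flatnonzero(gamma))
    print("coefficients:       ", np.round(beta, 2))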




Handbook of Bayesian, Fiducial, and Frequentist Inference


Book Description

The emergence of data science in recent decades has magnified the need for efficient methodology for analyzing data and highlighted the importance of statistical inference. Despite the tremendous progress that has been made, statistical science is still a young discipline and continues to have several different and competing paths in its approaches and its foundations. While the emergence of competing approaches is a natural progression of any scientific discipline, differences in the foundations of statistical inference can sometimes lead to different interpretations and conclusions from the same dataset. The increased interest in the foundations of statistical inference has led to many publications, and recent vibrant research activities in statistics, applied mathematics, philosophy, and other fields of science reflect the importance of this development. The BFF (Bayesian, fiducial, and frequentist) approaches not only bridge foundations and scientific learning, but also facilitate objective and replicable scientific research, and provide scalable computing methodologies for the analysis of big data. Most published work focuses on a single topic or theme, and the body of work is scattered across different journals. This handbook provides a comprehensive introduction and broad overview of the key developments in the BFF schools of inference. It is intended for researchers and students who want an overview of the foundations of inference from the BFF perspective, and it provides a general reference for BFF inference.

Key Features:
• Provides a comprehensive introduction to the key developments in the BFF schools of inference.
• Gives an overview of modern inferential methods, allowing scientists in other fields to expand their knowledge.
• Is accessible to readers with different perspectives and backgrounds.




Handbook of Sharing Confidential Data


Book Description

Statistical agencies, research organizations, companies, and other data stewards that seek to share data with the public face a challenging dilemma. They need to protect the privacy and confidentiality of data subjects and their attributes while providing data products that are useful for their intended purposes. In an age when information on data subjects is available from a wide range of data sources, as are the computational resources to obtain that information, this challenge is increasingly difficult. The Handbook of Sharing Confidential Data helps data stewards understand how tools from the data confidentiality literature—specifically, synthetic data, formal privacy, and secure computation—can be used to manage trade-offs in disclosure risk and data usefulness.

Key features:
• Provides overviews of the potential and the limitations of synthetic data, differential privacy, and secure computation
• Offers an accessible review of methods for implementing differential privacy, both from methodological and practical perspectives
• Presents perspectives from both computer science and statistical science for addressing data confidentiality and privacy
• Describes genuine applications of synthetic data, formal privacy, and secure computation to help practitioners implement these approaches

The handbook is accessible to both researchers and practitioners who work with confidential data. It requires familiarity with basic concepts from probability and data analysis.




Number-Theoretic Methods in Statistics


Book Description

This book is a survey of recent work on the application of number theory in statistics. The essence of number-theoretic methods is to find a set of points that are uniformly scattered over an s-dimensional unit cube. In certain circumstances this set can be used instead of random numbers in the Monte Carlo method. The idea can also be applied to other problems, such as experimental design. This book illustrates the idea of number-theoretic methods and their application in statistics. The emphasis is on applying the methods to practical problems, so only partial proofs of theorems are given.
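The basic idea can be seen in a few lines. The sketch below compares plain Monte Carlo integration with the same estimate computed on a two-dimensional Halton point set, one standard example of a point set spread evenly over the unit cube; the integrand and sample size are arbitrary choices for illustration, not examples from the book.

    # Illustrative sketch: quasi-Monte Carlo integration with a Halton point set
    # versus plain Monte Carlo with pseudorandom points.
    import numpy as np

    def halton(n, base):
        # First n terms of the van der Corput sequence in the given base.
        seq = np.zeros(n)
        for i in range(1, n + 1):
            f, k, x = 1.0, i, 0.0
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            seq[i - 1] = x
        return seq

    n = 2048
    qmc = np.column_stack([halton(n, 2), halton(n, 3)])  # evenly scattered points
    mc = np.random.default_rng(4).random((n, 2))         # pseudorandom points

    f = lambda u: u[:, 0] * u[:, 1]                      # integrand on the unit square
    exact = 0.25                                         # true value of the integral
    print("error with Halton points:      ", abs(f(qmc).mean() - exact))
    print("error with pseudorandom points:", abs(f(mc).mean() - exact))

For smooth integrands the evenly scattered points typically give a smaller error at the same number of function evaluations, which is the kind of advantage the book develops.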




Handbook of Approximate Bayesian Computation


Book Description

As the world becomes increasingly complex, so do the statistical models required to analyse the challenging problems ahead. For the first time in a single volume, the Handbook of Approximate Bayesian Computation (ABC) presents an extensive overview of the theory, practice, and application of ABC methods. These simple but powerful statistical techniques take Bayesian statistics beyond the need to specify overly simplified models, to the setting where the model is defined only as a process that generates data. This process can be arbitrarily complex, to the point where standard Bayesian techniques based on working with tractable likelihood functions would not be viable. ABC methods finesse the problem of model complexity within the Bayesian framework by exploiting modern computational power, thereby permitting approximate Bayesian analyses of models that would otherwise be impossible to implement. The Handbook of ABC provides illuminating insight into the world of Bayesian modelling of intractable models for experts and newcomers alike. It is an essential reference for anyone interested in learning about and implementing ABC techniques to analyse complex models in the modern world.
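A minimal rejection-ABC sketch conveys the core mechanism described above: the model is used only as a simulator, and parameter draws are retained when the simulated data resemble the observed data. The example below is a toy normal-mean problem with hypothetical names and an arbitrary tolerance; it illustrates the general technique, not code from the handbook.

    # Illustrative rejection-ABC sketch (toy example, not from the handbook).
    import numpy as np

    rng = np.random.default_rng(5)
    observed = rng.normal(3.0, 1.0, size=50)   # "observed" data with true mean 3
    obs_summary = observed.mean()              # summary statistic of the data

    def simulate(theta, size=50):
        # The model is available only as a data-generating process.
        return rng.normal(theta, 1.0, size=size)

    n_draws, tol = 20000, 0.05
    theta = rng.uniform(-10, 10, size=n_draws)            # draws from the prior
    sims = np.array([simulate(t).mean() for t in theta])  # simulated summaries
    accepted = theta[np.abs(sims - obs_summary) < tol]    # keep close matches

    print("accepted draws:", accepted.size)
    print("approximate posterior mean:", round(accepted.mean(), 3))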