Statistical Methods for Dynamic Treatment Regimes


Book Description

Statistical Methods for Dynamic Treatment Regimes presents the state of the art in statistical methods developed to address questions of estimation and inference for dynamic treatment regimes, a branch of personalized medicine. The volume explains the conceptual underpinnings of these methods and illustrates them through analyses of real and simulated data. The methods are immediately applicable to the practice of personalized medicine, a medical paradigm that emphasizes the systematic use of individual patient information to optimize patient health care. This is the first single source to provide an overview of the methodology and results gathered from journals, proceedings, and technical reports, with the goal of orienting researchers to the field. The first chapter establishes context for the statistical reader within the landscape of personalized medicine. Readers need only be familiar with elementary calculus, linear algebra, and basic large-sample theory to use this text. Throughout the text, the authors direct readers to available code or packages in different statistical languages to facilitate implementation. Where code does not already exist, the authors describe the analytic approaches in sufficient detail that any researcher with knowledge of statistical programming could implement the methods from scratch. This will be an important volume for a wide range of researchers, including statisticians, epidemiologists, medical researchers, and machine learning researchers interested in medical applications. Advanced graduate students in statistics and biostatistics will also find the material in Statistical Methods for Dynamic Treatment Regimes to be a critical part of their studies.




Real-World Evidence in Medical Product Development


Book Description

This book provides state-of-the-art statistical methodologies, practical considerations from regulators and sponsors, logistics, and real use cases to support practitioners in the uptake of real-world evidence and data (RWE/D). Randomized clinical trials have been the gold standard for evaluating the efficacy and safety of medical products; however, their cost, duration, practical constraints, and limited generalizability have incentivized many to look for alternative ways to optimize drug development. The book brings together a comprehensive set of topics covering all aspects of the uptake of RWE/D, including, but not limited to, applications in regulatory and non-regulatory settings, causal inference methodologies, organizational and infrastructure considerations, logistical challenges, and practical use cases.




Dynamic Treatment Regimes


Book Description

Dynamic Treatment Regimes: Statistical Methods for Precision Medicine provides a comprehensive introduction to statistical methodology for the evaluation and discovery of dynamic treatment regimes from data. Researchers and graduate students in statistics, data science, and related quantitative disciplines with a background in probability and statistical inference and popular statistical modeling techniques will be prepared for further study of this rapidly evolving field. A dynamic treatment regime is a set of sequential decision rules, each corresponding to a key decision point in a disease or disorder process, where each rule takes as input patient information and returns the treatment option he or she should receive. Thus, a treatment regime formalizes how a clinician synthesizes patient information and selects treatments in practice. Treatment regimes are of obvious relevance to precision medicine, which involves tailoring treatment selection to patient characteristics in an evidence-based way. Of critical importance to precision medicine is estimation of an optimal treatment regime, one that, if used to select treatments for the patient population, would lead to the most beneficial outcome on average. Key methods for estimation of an optimal treatment regime from data are motivated and described in detail. A dedicated companion website presents full accounts of application of the methods using a comprehensive R package developed by the authors. The authors’ website www.dtr-book.com includes updates, corrections, new papers, and links to useful websites.
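To make the regime definition above concrete, a regime with K decision points is often written in the generic notation common in this literature; the following is only a sketch, not the book's own notation:

```latex
% A dynamic treatment regime with K decision points (generic notation)
d = (d_1, \dots, d_K), \qquad d_k : \mathcal{H}_k \to \mathcal{A}_k ,
% where d_k maps the patient history in \mathcal{H}_k available at decision k
% (baseline information, intermediate outcomes, and prior treatments)
% to one of the feasible treatment options in \mathcal{A}_k.
% The value of a regime is the expected outcome if the whole population followed it,
V(d) = E\{Y^{*}(d)\},
% and an optimal regime d^{\mathrm{opt}} satisfies
V(d^{\mathrm{opt}}) \ge V(d) \quad \text{for all regimes } d .
```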




Methods in Comparative Effectiveness Research


Book Description

Comparative effectiveness research (CER) is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care (IOM 2009). CER is conducted to develop evidence that will aid patients, clinicians, purchasers, and health policy makers in making informed decisions at both the individual and population levels. CER encompasses a very broad range of study types: experimental, observational, prospective, retrospective, and research synthesis. This volume covers the main areas of quantitative methodology for the design and analysis of CER studies. The volume has four major sections: causal inference, clinical trials, research synthesis, and specialized topics. The audience includes CER methodologists, quantitatively trained researchers interested in CER, and graduate students in statistics, epidemiology, and health services and outcomes research. The book assumes a master's-level course in regression analysis and familiarity with clinical research.




Flexible Imputation of Missing Data, Second Edition


Book Description

Missing data pose challenges to real-life data analysis. Simple ad hoc fixes, like deletion or mean imputation, only work under highly restrictive conditions that are often not met in practice. Multiple imputation replaces each missing value by multiple plausible values; the variability between these replacements reflects our ignorance of the true (but missing) value. Each completed data set is then analyzed by standard methods, and the results are pooled to obtain unbiased estimates with correct confidence intervals. Multiple imputation is a general approach that also inspires novel solutions to old problems by reformulating the task at hand as a missing-data problem. This is the second edition of a popular book on multiple imputation, focused on explaining the application of the methods through detailed worked examples using the MICE package developed by the author. The new edition incorporates recent developments in this fast-moving field. This class-tested book avoids mathematical and technical detail as much as possible: formulas are accompanied by verbal statements that explain them in accessible terms. The book sharpens the reader's intuition on how to think about missing data and provides all the tools needed to execute a well-grounded quantitative analysis in the presence of missing data.
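The pooling step mentioned above is typically carried out with Rubin's rules; the following is a brief sketch in generic notation, not drawn from the book's own exposition:

```latex
% Pooling m completed-data analyses (Rubin's rules).
% \hat{Q}_l and U_l are the point estimate and its variance estimate from the l-th imputed data set.
\bar{Q} = \frac{1}{m}\sum_{l=1}^{m}\hat{Q}_l ,\qquad
\bar{U} = \frac{1}{m}\sum_{l=1}^{m} U_l ,\qquad
B = \frac{1}{m-1}\sum_{l=1}^{m}\bigl(\hat{Q}_l-\bar{Q}\bigr)^{2} .
% The total variance combines within- and between-imputation variability,
T = \bar{U} + \Bigl(1+\frac{1}{m}\Bigr)B ,
% and confidence intervals are based on \bar{Q} and T (with appropriate degrees of freedom).
```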




Adaptive Treatment Strategies in Practice: Planning Trials and Analyzing Data for Personalized Medicine


Book Description

Personalized medicine is a medical paradigm that emphasizes systematic use of individual patient information to optimize that patient's health care, particularly in managing chronic conditions and treating cancer. In the statistical literature, a formal rule for such sequential decision making is known as an adaptive treatment strategy (ATS) or a dynamic treatment regime (DTR). The field of DTRs has emerged at the interface of statistics, machine learning, and biomedical science to provide a data-driven framework for precision medicine. The authors provide a learning-by-seeing approach to the development of ATSs, aimed at a broad audience of health researchers. All estimation procedures used are described in sufficient heuristic and technical detail that less quantitative readers can understand the broad principles underlying the approaches, while more quantitative readers can implement the methods themselves. This book provides the most up-to-date summary of the current state of statistical research in personalized medicine; contains chapters by leaders in the area from both statistics and computer science; and offers a range of practical advice, introductory and expository materials, and case studies.




Targeted Learning


Book Description

The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements on a single subject. The field is ready to move toward clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool, so that the subjective choices previously made by humans are now made by the machine, and (2) the targeting of the fit of the probability distribution of the data toward the target parameter representing the scientific question of interest. This book is aimed at both statisticians and applied researchers interested in causal inference and general effect estimation for observational and experimental data. Part I is an accessible introduction to super learning and the targeted maximum likelihood estimator, including the related concepts necessary to understand and apply these methods. Parts II through IX handle complex data structures and topics that applied researchers will immediately recognize from their own research, including time-to-event outcomes, direct and indirect effects, positivity violations, case-control studies, censored data, longitudinal data, and genomic studies.
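As a loose illustration of point (1), here is a minimal Python sketch of cross-validation used as an estimator selection tool. It uses scikit-learn (an assumption made here for illustration, not a tool described in the book) and implements only a simplified "pick the lowest cross-validated risk" selector, not the full super learner ensemble.

```python
# Minimal sketch: let cross-validated risk, rather than the analyst, select the estimator.
# Illustrative only; this is a discrete selector, not the full super learner.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score

# Simulated data standing in for a real study.
X, y = make_regression(n_samples=500, n_features=20, noise=5.0, random_state=0)

candidates = {
    "ols": LinearRegression(),
    "lasso": Lasso(alpha=0.1),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# 10-fold cross-validated mean squared error (risk) for each candidate learner.
cv_risk = {
    name: -cross_val_score(
        est, X, y, cv=10, scoring="neg_mean_squared_error"
    ).mean()
    for name, est in candidates.items()
}

best_name = min(cv_risk, key=cv_risk.get)
print("CV risks:", cv_risk)
print("Selected learner:", best_name)
```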




Discrete Choice Methods with Simulation


Book Description

This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, the method of simulated moments, and the method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant, Gibbs sampling. The second edition adds chapters on endogeneity and expectation-maximization (EM) algorithms. No other book incorporates all these fields, which have arisen in the past 25 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
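To give a flavor of why simulation enters these estimators, the mixed logit probability is an integral of the standard logit formula over a distribution of tastes, approximated by averaging over random draws. The notation below is the standard notation of this literature, shown only as a sketch:

```latex
% Standard logit choice probability for decision maker n and alternative i,
% with representative utility V_{nj}:
L_{ni}(\beta) = \frac{e^{V_{ni}(\beta)}}{\sum_{j} e^{V_{nj}(\beta)}} .
% Mixed logit integrates the logit formula over a density f(\beta) of taste parameters:
P_{ni} = \int L_{ni}(\beta)\, f(\beta)\, d\beta .
% In simulation, the integral is approximated by averaging over R draws \beta^{(r)} from f:
\check{P}_{ni} = \frac{1}{R}\sum_{r=1}^{R} L_{ni}\bigl(\beta^{(r)}\bigr) .
```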




Mixed Effects Models for Complex Data


Book Description

Although standard mixed effects models are useful in a range of studies, other approaches must often be used in conjunction with them when studying complex or incomplete data. Mixed Effects Models for Complex Data discusses commonly used mixed effects models and presents appropriate approaches to address dropouts, missing data, measurement errors, censoring, and outliers. For each class of mixed effects model, the author reviews the corresponding class of regression model for cross-sectional data.

An overview of general models and methods, along with motivating examples
After presenting real data examples and outlining general approaches to the analysis of longitudinal/clustered data and incomplete data, the book introduces linear mixed effects (LME) models, generalized linear mixed models (GLMMs), nonlinear mixed effects (NLME) models, and semiparametric and nonparametric mixed effects models. It also includes general approaches for the analysis of complex data with missing values, measurement errors, censoring, and outliers.

Self-contained coverage of specific topics
Subsequent chapters delve more deeply into missing data problems, covariate measurement errors, and censored responses in mixed effects models. Focusing on incomplete data, the book also covers survival and frailty models, joint models of survival and longitudinal data, robust methods for mixed effects models, marginal generalized estimating equation (GEE) models for longitudinal or clustered data, and Bayesian methods for mixed effects models.

Background material
In the appendix, the author provides background information, such as likelihood theory, the Gibbs sampler, rejection and importance sampling methods, numerical integration methods, optimization methods, the bootstrap, and matrix algebra.

Failure to properly address missing data, measurement errors, and other issues in statistical analyses can lead to severely biased or misleading results. This book explores the biases that arise when naïve methods are used and shows which approaches should be used to achieve accurate results in longitudinal data analysis.
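For orientation, the basic linear mixed effects (LME) model referred to above can be written in its standard form; the notation below is generic and not taken from this book:

```latex
% Linear mixed effects model for subject/cluster i with n_i observations:
y_i = X_i\beta + Z_i b_i + \varepsilon_i ,\qquad
b_i \sim N(0, D) ,\qquad \varepsilon_i \sim N(0, \sigma^{2} I_{n_i}) ,
% where \beta are fixed effects shared across subjects, b_i are subject-specific
% random effects with covariance matrix D, and X_i, Z_i are known design matrices.
```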