Dynamic Treatment Regimes


Book Description

Dynamic Treatment Regimes: Statistical Methods for Precision Medicine provides a comprehensive introduction to statistical methodology for the evaluation and discovery of dynamic treatment regimes from data. Researchers and graduate students in statistics, data science, and related quantitative disciplines with a background in probability, statistical inference, and popular statistical modeling techniques will be prepared for further study of this rapidly evolving field. A dynamic treatment regime is a set of sequential decision rules, each corresponding to a key decision point in a disease or disorder process, where each rule takes patient information as input and returns the treatment option the patient should receive. A treatment regime thus formalizes how a clinician synthesizes patient information and selects treatments in practice. Treatment regimes are of obvious relevance to precision medicine, which involves tailoring treatment selection to patient characteristics in an evidence-based way. Of critical importance to precision medicine is estimation of an optimal treatment regime: one that, if used to select treatments for the patient population, would lead to the most beneficial outcome on average. Key methods for estimating an optimal treatment regime from data are motivated and described in detail. A dedicated companion website presents full accounts of how the methods are applied using a comprehensive R package developed by the authors. The authors' website, www.dtr-book.com, includes updates, corrections, new papers, and links to useful websites.




Statistical Methods for Dynamic Treatment Regimes


Book Description

Statistical Methods for Dynamic Treatment Regimes presents the state of the art in statistical methods developed to address questions of estimation and inference for dynamic treatment regimes, a branch of personalized medicine. The volume explains the conceptual underpinnings of these methods and illustrates them through analyses of real and simulated data. The methods are immediately applicable to the practice of personalized medicine, a medical paradigm that emphasizes the systematic use of individual patient information to optimize patient health care. This is the first single source to provide an overview of methodology and results gathered from journals, proceedings, and technical reports, with the goal of orienting researchers to the field. The first chapter establishes context for the statistical reader in the landscape of personalized medicine. Readers need only be familiar with elementary calculus, linear algebra, and basic large-sample theory to use this text. Throughout, the authors direct readers to available code or packages in different statistical languages to facilitate implementation. Where code does not already exist, the authors describe the analytic approaches in sufficient detail that any researcher with knowledge of statistical programming could implement the methods from scratch. This will be an important volume for a wide range of researchers, including statisticians, epidemiologists, medical researchers, and machine learning researchers interested in medical applications. Advanced graduate students in statistics and biostatistics will also find the material in Statistical Methods for Dynamic Treatment Regimes to be a critical part of their studies.




Adaptive Treatment Strategies in Practice: Planning Trials and Analyzing Data for Personalized Medicine


Book Description

Personalized medicine is a medical paradigm that emphasizes the systematic use of individual patient information to optimize that patient's health care, particularly in managing chronic conditions and treating cancer. In the statistical literature, such a sequence of treatment decisions is known as an adaptive treatment strategy (ATS) or a dynamic treatment regime (DTR). The field of DTRs has emerged at the interface of statistics, machine learning, and biomedical science to provide a data-driven framework for precision medicine. The authors take a learning-by-seeing approach to the development of ATSs, aimed at a broad audience of health researchers. All estimation procedures are described in sufficient heuristic and technical detail that less quantitative readers can understand the broad principles underlying the approaches, while more quantitative readers can implement the methods themselves. This book provides an up-to-date summary of the current state of statistical research in personalized medicine; contains chapters by leaders in the area from both statistics and computer science; and offers a range of practical advice, introductory and expository material, and case studies.




Textbook of Clinical Trials in Oncology


Book Description

There is an increasing need for educational resources for statisticians and investigators. Reflecting this, the goal of this book is to provide readers with a sound foundation in the statistical design, conduct, and analysis of clinical trials, and to serve as a guide for statisticians and investigators with minimal clinical trial experience who are interested in pursuing a career in this area. Advances in genetic and molecular technologies have revolutionized drug development. In recent years, clinical trials have become increasingly sophisticated as they incorporate genomic studies, and efficient designs (such as basket and umbrella trials) have permeated the field. This book offers the requisite background and expert guidance for the innovative statistical design and analysis of clinical trials in oncology.

Key Features:

- Cutting-edge topics with appropriate technical background
- Built around case studies, giving the work a "hands-on" approach
- Real examples of flaws in previously reported clinical trials and how to avoid them
- Access to statistical code on the book's website
- Chapters written by internationally recognized statisticians from academia and pharmaceutical companies
- Carefully edited to ensure consistency in style, level, and approach
- Topics covered include innovative phase I and II designs, trials in immuno-oncology and rare diseases, and many others




Handbook of Statistical Methods for Randomized Controlled Trials


Book Description

Statistical concepts provide the scientific framework for experimental studies, including randomized controlled trials. To design, monitor, analyze, and draw conclusions scientifically from such trials, clinical investigators and statisticians should have a firm grasp of the requisite statistical concepts. The Handbook of Statistical Methods for Randomized Controlled Trials presents these concepts in a logical sequence from beginning to end and can be used as a textbook in a course or as a reference on statistical methods for randomized controlled trials. Part I provides a brief historical background on modern randomized controlled trials and introduces statistical concepts central to their planning, monitoring, and analysis. Part II describes statistical methods for the analysis of different types of outcomes and the associated statistical distributions used in testing hypotheses regarding the clinical questions. Part III describes some of the most widely used experimental designs for randomized controlled trials, including the sample size estimation necessary in planning. Part IV describes statistical methods used in interim analyses for monitoring efficacy and safety data. Part V describes important issues in statistical analysis, such as multiple testing, subgroup analysis, competing risks, and joint models for longitudinal markers and clinical outcomes. Part VI addresses selected miscellaneous topics in design and analysis, including multiple assignment randomization trials, analysis of safety outcomes, non-inferiority trials, incorporating historical data, and validation of surrogate outcomes.







Proceedings of the Second Seattle Symposium in Biostatistics


Book Description

This volume contains a selection of papers presented at the Second Seattle Symposium in Biostatistics: Analysis of Correlated Data. The symposium was held in 2000 to celebrate the 30th anniversary of the University of Washington School of Public Health and Community Medicine. It featured keynote lectures by Norman Breslow, David Cox, and Ross Prentice, along with 16 invited presentations by other prominent researchers. The papers in this volume encompass recent methodological advances in several important areas, such as longitudinal data, multivariate failure time data, and genetic data, as well as innovative applications of existing theory and methods. This volume is a valuable reference for researchers and practitioners in the field of correlated data analysis.




The Bipolar Book


Book Description

As a major mainstay of clinical focus and research today, bipolar disorder affects millions of individuals across the globe with its extreme and erratic shifts of mood, thinking, and behavior. Edited by a team of experts in the field, The Bipolar Book: History, Neurobiology, and Treatment is a testament and guide to diagnosing and treating this exceedingly complex, highly prevalent disease. Featuring 45 chapters from an expert team of contributors from around the world, The Bipolar Book delves deep into the origins of the disorder and how it informs clinical practice today, focusing on such topics as bipolar disorder in special populations, stigmatization of the disease, the role of genetics, postmortem studies, psychotherapy, treatments, and more. Designed to be the definitive reference volume for clinicians, students, and researchers, editors Aysegül Yildiz, Pedro Ruiz, and Charles Nemeroff present The Bipolar Book as a "must have" for caregivers who routinely deal with this devastating disease.




Targeted Learning


Book Description

The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements for a single subject. The field is ready to move toward clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool, so that subjective choices once made by humans are now made by the machine, and (2) the targeting of the fit of the probability distribution of the data toward the target parameter representing the scientific question of interest. This book is aimed at both statisticians and applied researchers interested in causal inference and general effect estimation for observational and experimental data. Part I is an accessible introduction to super learning and the targeted maximum likelihood estimator, including related concepts necessary to understand and apply these methods. Parts II-IX handle complex data structures and topics applied researchers will immediately recognize from their own research, including time-to-event outcomes, direct and indirect effects, positivity violations, case-control studies, censored data, longitudinal data, and genomic studies.




The Elements of Joint Learning and Optimization in Operations Management


Book Description

This book examines recent developments in Operations Management, focusing on four major application areas: dynamic pricing, assortment optimization, supply chain and inventory management, and healthcare operations. Data-driven optimization, in which real-time data are used simultaneously to learn the (true) underlying model of a system and to optimize its performance, has become increasingly important in recent years, especially with the rise of Big Data.