Empirical Likelihood Method in Survival Analysis


Book Description

Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method with right-censored survival data. The author uses R to calculate empirical likelihood and includes many worked-out examples with the associated R code. The datasets and code are available for download on his website and CRAN.

The book treats all the standard survival analysis topics with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empirical likelihood for censored data. It also covers semi-parametric accelerated failure time models, the optimality of confidence regions derived from empirical likelihood or plug-in empirical likelihood ratio tests, and several empirical likelihood confidence band results.

While survival analysis is a classic area of statistical study, the empirical likelihood methodology has only recently been developed. Until now, just one book on empirical likelihood was available, and most statistical software did not include empirical likelihood procedures. Addressing this shortfall, the book provides functions to calculate the empirical likelihood ratio in survival analysis, as well as functions for the empirical likelihood analysis of the Cox regression model and other hazard regression models.
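
Since the description mentions R code distributed on CRAN, a minimal sketch of a censored-data empirical likelihood ratio test is shown below, assuming the emplik package on CRAN (written by the book's author). The simulated data and the hypothesized mean are illustrative, not taken from the book.

    library(emplik)  # CRAN package for empirical likelihood with censored data

    # Simulated right-censored survival data (illustrative, not from the book)
    set.seed(1)
    t_true <- rexp(100, rate = 0.5)    # latent event times
    c_time <- rexp(100, rate = 0.2)    # censoring times
    x <- pmin(t_true, c_time)          # observed times
    d <- as.numeric(t_true <= c_time)  # 1 = event observed, 0 = right-censored

    # Empirical likelihood ratio test of H0: mean survival time equals 2
    res <- el.cen.EM(x = x, d = d, fun = function(t) t, mu = 2)
    res$`-2LLR`  # -2 log EL ratio; approx. chi-squared(1) under H0
    res$Pval     # corresponding p-value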




Empirical Likelihood Methods in Biomedicine and Health


Book Description

Empirical Likelihood Methods in Biomedicine and Health provides a compendium of nonparametric likelihood statistical techniques from the perspective of health research applications. It includes detailed descriptions of the theoretical underpinnings of recently developed empirical likelihood-based methods. The emphasis throughout is on applying the methods to the health sciences, with worked examples using real data. The book:

- Provides a systematic overview of novel empirical likelihood techniques.
- Presents a good balance of theory, methods, and applications.
- Features detailed worked examples to illustrate the application of the methods.
- Includes R code for implementation.

The material is accessible to scientists who are new to the research area and should also attract statisticians interested in learning more about advanced nonparametric topics, including various modern empirical likelihood methods. The book can be used by graduate students majoring in biostatistics or a related field, particularly those interested in nonparametric methods with direct applications in biomedicine.




Empirical Likelihood


Book Description

Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources.
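
To make the idea concrete, here is a minimal base-R sketch (not from the book) of the empirical likelihood ratio for a univariate mean, computed via the standard Lagrange-multiplier dual; the function name el_mean_logratio is hypothetical.

    # Empirical likelihood ratio for a mean: weights p_i maximize prod(p_i)
    # subject to sum(p_i * (x_i - mu)) = 0. The dual reduces to finding lam
    # with sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0.
    el_mean_logratio <- function(x, mu) {
      z <- x - mu
      if (mu <= min(x) || mu >= max(x)) return(Inf)  # mu outside convex hull
      g <- function(lam) sum(z / (1 + lam * z))
      # lam must keep every weight positive: 1 + lam * z_i > 0 for all i
      lo <- -1 / max(z) + 1e-10
      hi <- -1 / min(z) - 1e-10
      lam <- uniroot(g, lower = lo, upper = hi, tol = 1e-10)$root
      2 * sum(log(1 + lam * z))  # -2 log EL ratio
    }

    x <- rnorm(40, mean = 1)
    el_mean_logratio(x, mu = 1)  # compare with qchisq(0.95, df = 1) = 3.84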




Innovative Strategies, Statistical Solutions and Simulations for Modern Clinical Trials


Book Description

"This is truly an outstanding book. [It] brings together all of the latest research in clinical trials methodology and how it can be applied to drug development.... Chang et al provide applications to industry-supported trials. This will allow statisticians in the industry community to take these methods seriously." Jay Herson, Johns Hopkins University The pharmaceutical industry's approach to drug discovery and development has rapidly transformed in the last decade from the more traditional Research and Development (R & D) approach to a more innovative approach in which strategies are employed to compress and optimize the clinical development plan and associated timelines. However, these strategies are generally being considered on an individual trial basis and not as part of a fully integrated overall development program. Such optimization at the trial level is somewhat near-sighted and does not ensure cost, time, or development efficiency of the overall program. This book seeks to address this imbalance by establishing a statistical framework for overall/global clinical development optimization and providing tactics and techniques to support such optimization, including clinical trial simulations. Provides a statistical framework for achieve global optimization in each phase of the drug development process. Describes specific techniques to support optimization including adaptive designs, precision medicine, survival-endpoints, dose finding and multiple testing. Gives practical approaches to handling missing data in clinical trials using SAS. Looks at key controversial issues from both a clinical and statistical perspective. Presents a generous number of case studies from multiple therapeutic areas that help motivate and illustrate the statistical methods introduced in the book. Puts great emphasis on software implementation of the statistical methods with multiple examples of software code (both SAS and R). It is important for statisticians to possess a deep knowledge of the drug development process beyond statistical considerations. For these reasons, this book incorporates both statistical and "clinical/medical" perspectives.




Survival Analysis Using S


Book Description

Survival Analysis Using S: Analysis of Time-to-Event Data is designed as a text for a one-semester or one-quarter course in survival analysis for upper-level or graduate students in statistics, biostatistics, and epidemiology. Prerequisites are a standard pre-calculus first course in probability and statistics, and a course in applied linear regression models. No prior knowledge of S or R is assumed. A wide choice of exercises is included, some intended for more advanced students with a first course in mathematical statistics.

The authors emphasize parametric log-linear models, while also detailing nonparametric procedures along with model building and data diagnostics. Medical and public health researchers will find the discussion of cut point analysis with bootstrap validation, competing risks and the cumulative incidence estimator, and the analysis of left-truncated and right-censored data invaluable. The bootstrap procedure checks the robustness of the cut point analysis and determines the cut point(s). In a chapter written by Stephen Portnoy, censored regression quantiles - a new nonparametric regression methodology (2003) - are developed to identify important forms of population heterogeneity and to detect departures from traditional Cox models. By generalizing the Kaplan-Meier estimator to regression models for conditional quantiles, this methodology provides a valuable complement to traditional Cox proportional hazards approaches.
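
For readers new to survival tooling in S/R, here is a minimal sketch using the survival package, a standard R implementation of the S survival functions (this code is not from the book); it fits Kaplan-Meier curves on the lung dataset shipped with the package.

    library(survival)

    # Kaplan-Meier curves by sex on the lung data set shipped with the package
    fit <- survfit(Surv(time, status) ~ sex, data = lung)
    summary(fit, times = c(180, 360))  # survival estimates at 180 and 360 days
    plot(fit, xlab = "Days", ylab = "Survival probability")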




Counting Processes and Survival Analysis


Book Description

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.

"The book is a valuable completion of the literature in this field. It is written in an ambitious mathematical style and can be recommended to statisticians as well as biostatisticians." -Biometrische Zeitschrift

"Not many books manage to combine convincingly topics from probability theory over mathematical statistics to applied statistics. This is one of them. The book has other strong points to recommend it: it is written with meticulous care, in a lucid style, general results being illustrated by examples from statistical theory and practice, and a bunch of exercises serve to further elucidate and elaborate on the text." -Mathematical Reviews

"This book gives a thorough introduction to martingale and counting process methods in survival analysis, thereby filling a gap in the literature." -Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts

"The authors have performed a valuable service to researchers in providing this material in [a] self-contained and accessible form. . . This text [is] essential reading for the probabilist or mathematical statistician working in the area of survival analysis." -Short Book Reviews, International Statistical Institute

Counting Processes and Survival Analysis explores the martingale approach to the statistical analysis of counting processes, with an emphasis on the application of those methods to censored failure time data. This approach has proven remarkably successful in yielding results about statistical methods for many problems arising in censored data. The book offers a thorough treatment of the calculus of martingales as well as the most important applications of these methods to censored data. It also examines classical problems in asymptotic distribution theory for counting process methods and newer methods for the graphical analysis and diagnostics of censored data. Exercises are included to provide practice in applying martingale methods and insight into the calculus itself.
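
To illustrate the counting-process view, here is a small base-R sketch (illustrative, not from the book) of the Nelson-Aalen estimator, whose cumulative hazard increments are dN(t)/Y(t), with N the counting process of observed events and Y the at-risk process; the function name nelson_aalen is hypothetical.

    # Nelson-Aalen estimator in counting-process form: dA(t) = dN(t) / Y(t)
    nelson_aalen <- function(time, status) {
      # status: 1 = observed event, 0 = right-censored
      tt <- sort(unique(time[status == 1]))                        # event times
      dN <- sapply(tt, function(s) sum(time == s & status == 1))   # events at s
      Y  <- sapply(tt, function(s) sum(time >= s))                 # at risk at s
      data.frame(time = tt, cumhaz = cumsum(dN / Y))
    }

    # Tiny worked example: events at t = 2, 3, 5; censoring at t = 3, 8
    nelson_aalen(time = c(2, 3, 3, 5, 8), status = c(1, 1, 0, 1, 0))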




Survival Analysis


Book Description

Making complex methods more accessible to applied researchers without an advanced mathematical background, the authors present the essence of newly available techniques, as well as classical ones, and apply them to data. Practical suggestions for implementing the various methods are set off in a series of practical notes at the end of each section, while the technical details of the derivations are sketched in the technical notes. The book will thus be useful for investigators who need to analyze censored or truncated lifetime data, and as a textbook for a graduate course in survival analysis, the only prerequisite being a standard course in statistical methodology.
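
As a small illustration of handling truncated data in R (an assumed setup, not the book's own code), the survival package's counting-process notation Surv(start, stop, event) accommodates left-truncated, right-censored observations with delayed entry:

    library(survival)

    # Hypothetical left-truncated, right-censored data: each subject enters
    # the risk set at 'entry' (delayed entry) and is followed until 'exit'
    d <- data.frame(entry = c(2, 0, 1, 3, 0),
                    exit  = c(5, 4, 6, 7, 3),
                    event = c(1, 0, 1, 1, 0))  # 1 = event, 0 = censored
    fit <- survfit(Surv(entry, exit, event) ~ 1, data = d)
    summary(fit)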




Flexible Imputation of Missing Data, Second Edition


Book Description

Missing data pose challenges to real-life data analysis. Simple ad hoc fixes, like deletion or mean imputation, only work under highly restrictive conditions that are often not met in practice. Multiple imputation replaces each missing value by multiple plausible values, and the variability between these replacements reflects our ignorance of the true (but missing) value. Each of the completed data sets is then analyzed by standard methods, and the results are pooled to obtain unbiased estimates with correct confidence intervals. Multiple imputation is a general approach that also inspires novel solutions to old problems by reformulating the task at hand as a missing-data problem.

This is the second edition of a popular book on multiple imputation, focused on explaining the application of the methods through detailed worked examples using the MICE package developed by the author. The new edition incorporates recent developments in this fast-moving field. This class-tested book avoids mathematical and technical details as much as possible: formulas are accompanied by verbal statements that explain them in accessible terms. The book sharpens the reader's intuition on how to think about missing data and provides all the tools needed to execute a well-grounded quantitative analysis in the presence of missing data.
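
A minimal sketch of the impute-analyze-pool workflow described above, using the author's mice package and the nhanes example data shipped with it (the regression model is illustrative, not from the book):

    library(mice)

    # nhanes: small example data set shipped with mice (age, bmi, hyp, chl)
    imp <- mice(nhanes, m = 5, seed = 123, printFlag = FALSE)  # 5 imputations
    fit <- with(imp, lm(chl ~ bmi + age))  # analyze each completed data set
    summary(pool(fit))                     # pool results with Rubin's rules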




The Statistical Analysis of Failure Time Data


Book Description

- Contains additional discussion and examples on left truncation, as well as material on more general censoring and truncation patterns.
- Introduces the martingale and counting process formulations, which will be in a new chapter.
- Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations.
- Presents new examples and applications of data analysis.