Statistical Hypothesis Testing in Context: Volume 52


Book Description

Fay and Brittain present statistical hypothesis testing and compatible confidence intervals, focusing on application and proper interpretation. The emphasis is on equipping applied statisticians with enough tools, and advice on choosing among them, to find reasonable methods for almost any problem and enough theory to tackle new problems by modifying existing methods. After covering the basic mathematical theory and scientific principles, tests and confidence intervals are developed for specific types of data. Essential methods for applications are covered, such as general procedures for creating tests (e.g., likelihood ratio, bootstrap, permutation, testing from models), adjustments for multiple testing, clustering, stratification, causality, censoring, missing data, group sequential tests, and non-inferiority tests. New methods developed by the authors are included throughout, such as melded confidence intervals for comparing two samples and confidence intervals associated with Wilcoxon-Mann-Whitney tests and Kaplan-Meier estimates. Examples, exercises, and the R package asht support practical use.




Statistical Inference as Severe Testing


Book Description

Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.




Random Graphs and Complex Networks: Volume 2


Book Description

Complex networks are key to describing the connected nature of the society that we live in. This book, the second of two volumes, describes the local structure of random graph models for real-world networks and determines when these models have a giant component and when they are small- and ultra-small worlds. This is the first book to cover the theory and implications of local convergence, a crucial technique in the analysis of sparse random graphs. Suitable as a resource for researchers and PhD-level courses, it uses examples of real-world networks, such as the Internet and citation networks, as motivation for the models that are discussed, and includes exercises at the end of each chapter to develop intuition. The book closes with an extensive discussion of related models and problems that demonstrate modern approaches to network theory, such as community structure and directed models.




Encyclopedia of Biopharmaceutical Statistics - Four Volume Set


Book Description

Since the publication of the first edition in 2000, there has been an explosive growth of literature in biopharmaceutical research and development of new medicines. This encyclopedia (1) provides a comprehensive and unified presentation of designs and analyses used at different stages of the drug development process, (2) gives a well-balanced summary of current regulatory requirements, and (3) describes recently developed statistical methods in the pharmaceutical sciences.

Features of the Fourth Edition:
1. 78 new and revised entries have been added for a total of 308 chapters, and a fourth volume has been added to encompass the increased number of chapters.
2. Revised and updated entries reflect changes and recent developments in regulatory requirements for the drug review/approval process and statistical designs and methodologies.
3. Additional topics include multiple-stage adaptive trial design in clinical research, translational medicine, design and analysis of biosimilar drug development, big data analytics, and real-world evidence for clinical research and development.
4. A table of contents organized by stages of biopharmaceutical development provides easy access to relevant topics.

About the Editor: Shein-Chung Chow, Ph.D. is currently an Associate Director, Office of Biostatistics, U.S. Food and Drug Administration (FDA). Dr. Chow is an Adjunct Professor at Duke University School of Medicine, as well as Adjunct Professor at Duke-NUS, Singapore and North Carolina State University. Dr. Chow is the Editor-in-Chief of the Journal of Biopharmaceutical Statistics and the Chapman & Hall/CRC Biostatistics Book Series, and the author of 28 books and over 300 methodology papers. He was elected Fellow of the American Statistical Association in 1995.




Statistical Power Analysis for the Behavioral Sciences


Book Description

Statistical Power Analysis is a nontechnical guide to power analysis in research planning that provides users of applied statistics with the tools they need for more effective analysis. The Second Edition includes:
* a chapter covering power analysis in set correlation and multivariate methods;
* a chapter considering effect size, psychometric reliability, and the efficacy of "qualifying" dependent variables; and
* expanded power and sample size tables for multiple regression/correlation.




All of Statistics


Book Description

Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.




Modern Discrete Probability


Book Description

Providing a graduate-level introduction to discrete probability and its applications, this book develops a toolkit of essential techniques for analysing stochastic processes on graphs, other random discrete structures, and algorithms. Topics covered include the first and second moment methods, concentration inequalities, coupling and stochastic domination, martingales and potential theory, spectral methods, and branching processes. Each chapter expands on a fundamental technique, outlining common uses and showing them in action on simple examples and more substantial classical results. The focus is predominantly on non-asymptotic methods and results. All chapters provide a detailed background review section, plus exercises and signposts to the wider literature. Readers are assumed to have undergraduate-level linear algebra and basic real analysis, while prior exposure to graduate-level probability is recommended. This much-needed broad overview of discrete probability could serve as a textbook or as a reference for researchers in mathematics, statistics, data science, computer science and engineering.




Generalized Additive Models for Location, Scale and Shape


Book Description

This text provides a state-of-the-art treatment of distributional regression, accompanied by real-world examples from diverse areas of application. Maximum likelihood, Bayesian and machine learning approaches are covered in-depth and contrasted, providing an integrated perspective on GAMLSS for researchers in statistics and other data-rich fields.




The Fundamentals of Heavy Tails


Book Description

Heavy tails (extreme events or values more common than expected) emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
