A Chronicle of Permutation Statistical Methods


Book Description

The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, analysis of variance, contingency table analysis, and measures of association and agreement. A non-mathematical approach makes the text accessible to readers of all levels.
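To make the distinction between exact and resampling-approximated permutation probability values concrete, here is a minimal sketch in Python: an exact p-value enumerates every relabeling of the pooled observations, while a Monte Carlo resampling estimate shuffles the group labels many times. The two small samples are hypothetical, chosen only for illustration.

```python
from itertools import combinations
import random

# Two hypothetical small samples (illustrative only, not from the book)
a = [12.1, 9.8, 14.2]
b = [8.7, 10.3, 7.9, 9.1]
pooled = a + b
n_a = len(a)
observed = abs(sum(a) / len(a) - sum(b) / len(b))

# Exact permutation p-value: enumerate every way of splitting the
# pooled data into groups of size len(a) and len(b).
count = total = 0
for idx in combinations(range(len(pooled)), n_a):
    ga = [pooled[i] for i in idx]
    gb = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
    if diff >= observed - 1e-12:   # small tolerance guards float ties
        count += 1
    total += 1
p_exact = count / total

# Monte Carlo (resampling) approximation: shuffle the labels many times.
random.seed(0)
reps = 10_000
hits = 0
for _ in range(reps):
    random.shuffle(pooled)
    diff = abs(sum(pooled[:n_a]) / n_a
               - sum(pooled[n_a:]) / (len(pooled) - n_a))
    if diff >= observed - 1e-12:
        hits += 1
p_mc = hits / reps

print(f"exact p = {p_exact:.4f}, Monte Carlo p = {p_mc:.4f}")
```

With seven observations there are only 35 distinct splits, so exact enumeration is trivial; the factorial growth of this count with sample size is precisely why permutation methods had to wait for modern computing power.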




A Primer of Permutation Statistical Methods


Book Description

The primary purpose of this textbook is to introduce the reader to a wide variety of elementary permutation statistical methods. Permutation methods are optimal for small data sets and non-random samples, and are free of distributional assumptions. The book follows the conventional structure of most introductory books on statistical methods, and features chapters on central tendency and variability, one-sample tests, two-sample tests, matched-pairs tests, one-way fully-randomized analysis of variance, one-way randomized-blocks analysis of variance, simple regression and correlation, and the analysis of contingency tables. In addition, it introduces and describes a comparatively new permutation-based, chance-corrected measure of effect size. Because permutation tests and measures are distribution-free, do not assume normality, and do not rely on squared deviations among sample values, they are currently being applied in a wide variety of disciplines. This book presents permutation alternatives to existing classical statistics, and is intended as a textbook for undergraduate statistics courses or graduate courses in the natural, social, and physical sciences, while assuming only an elementary grasp of statistics.




Permutation Statistical Methods with R


Book Description

This book takes a unique approach to explaining permutation statistics by integrating permutation statistical methods with a wide range of classical statistical methods and associated R programs. It opens by comparing and contrasting two models of statistical inference: the classical population model espoused by J. Neyman and E.S. Pearson and the permutation model first introduced by R.A. Fisher and E.J.G. Pitman. Numerous comparisons of permutation and classical statistical methods are presented, supplemented with a variety of R scripts for ease of computation. The text follows the general outline of an introductory textbook in statistics with chapters on central tendency and variability, one-sample tests, two-sample tests, matched-pairs tests, completely-randomized analysis of variance, randomized-blocks analysis of variance, simple linear regression and correlation, and the analysis of goodness of fit and contingency. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity, depend only on the observed data, and do not require random sampling. The methods are relatively new in that it took modern computing power to make them available to those working in mainstream research. Designed for an audience with a limited statistical background, the book can easily serve as a textbook for undergraduate or graduate courses in statistics, psychology, economics, political science or biology. No statistical training beyond a first course in statistics is required, but some knowledge of, or some interest in, the R programming language is assumed.




Permutation Statistical Methods


Book Description

This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. Written for a statistically-informed audience, the book can also easily serve as a textbook in a graduate course in departments such as statistics, psychology, or biology. In particular, the audience for the book is teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as medical research, epidemiology, public health, and biology.




Statistical Methods: Connections, Equivalencies, and Relationships


Book Description

The primary purpose of this book is to introduce the reader to a wide variety of interesting and useful connections, relationships, and equivalencies between and among conventional and permutation statistical methods. There are approximately 320 statistical connections and relationships described in this book. For each connection, the relevant tests are described, the connection is explained, and an example analysis illustrates both the tests and the connection. The emphasis is more on demonstrations than on proofs, so little mathematical expertise is assumed. While the book is intended as a stand-alone monograph, it can also be used as a supplement to a standard textbook such as might be used in a second- or third-term course in conventional statistical methods. Students, faculty, and researchers in the social, natural, or hard sciences will find an interesting collection of statistical connections and relationships - some well-known, some more obscure, and some presented here for the first time.




The Measurement of Association


Book Description

This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.




Randomization, Masking, and Allocation Concealment


Book Description

Randomization, Masking, and Allocation Concealment is indispensable for any trial researcher who wants to use state-of-the-art randomization methods, and also wants to be able to describe these methods correctly. Far too often the subtle nuances that distinguish proper randomization from flawed randomization are completely ignored in trial reports that state only that randomization was used, with no additional information. Experience has shown that in many cases, the type of randomization that was used was flawed. It is only a matter of time before medical journals and regulatory agencies come to realize that we can no longer rely on (or publish) flawed trials, and that flawed randomization in and of itself disqualifies a trial from being robust or high quality, even if that trial is otherwise of high quality. This book will help to clarify the role randomization plays in ensuring internal validity, and in drawing valid inferences from the data. The various chapters cover a variety of randomization methods, and are not limited to the most common (and most flawed) ones. Readers will come away with a profound understanding of what constitutes a valid randomization procedure, so that they can distinguish the valid from the flawed among not only existing methods but also methods yet to be developed.




Combinatorial Inference in Geometric Data Analysis


Book Description

Geometric Data Analysis designates the approach of Multivariate Statistics that conceptualizes the set of observations as a Euclidean cloud of points. Combinatorial Inference in Geometric Data Analysis gives an overview of multidimensional statistical inference methods applicable to clouds of points that make no assumption about the process generating the data or its distribution, and that are based not on random modelling but on permutation procedures recast in a combinatorial framework. It focuses particularly on the comparison of a group of observations to a reference population (combinatorial test) or to a reference value of a location parameter (geometric test), and on problems of homogeneity, that is, the comparison of several groups for two basic designs. These methods involve the use of combinatorial procedures to build a reference set in which we place the data. The chosen test statistics lead to original extensions, such as the geometric interpretation of the observed level, and the construction of a compatibility region.

Features:
- Defines precisely the object under study in the context of multidimensional procedures, that is, clouds of points
- Presents combinatorial tests and related computations with R and Coheris SPAD software
- Includes four original case studies to illustrate application of the tests
- Includes the necessary mathematical background to ensure the book is self-contained

This book is suitable for researchers and students of multivariate statistics, as well as applied researchers in various scientific disciplines. It could be used for a specialized course taught at either the master's or PhD level.




Sex Estimation of the Human Skeleton


Book Description

Sex Estimation of the Human Skeleton is a comprehensive work on the theory, methods, and current issues involved in sexing human skeletal remains. This work provides practitioners a starting point for research and practice on sex estimation to assist with the identification and analysis of human remains. It contains a collection of the latest scientific research, using metric and morphological methods, and contains case studies, where relevant, to highlight methodological application to real cases. This volume presents a truly comprehensive representation of the current state of sex estimation while also detailing the history of how we got to this point. Divided into three main sections, this reference text first provides an introduction to the book and to sex estimation overall, including a history, practitioner preferences, and a deeper understanding of biological sex. The second section addresses the main methodological areas used to estimate sex, including metric and morphological methods, statistical applications, and software. Each chapter provides a review of older techniques and emphasizes the latest research and methodological improvements. Chapters are written by practicing physical anthropologists and also include their latest research on the topics, as well as relevant case studies. The third section addresses current considerations and future directions for sex estimation in forensic and bioarchaeological contexts, including DNA, secular change, and medical imaging. Sex Estimation of the Human Skeleton is a one-of-a-kind resource for those involved in estimating the sex of human skeletal remains.

- Provides the first comprehensive text reference on sex estimation, with historical perspectives and current best practices
- Contains real case studies to underscore key estimation concepts
- Demonstrates the changing role of technology in sex estimation




Analysis of Incidence Rates


Book Description

Incidence rates are counts divided by person-time; mortality rates are a well-known example. Analysis of Incidence Rates offers a detailed discussion of the practical aspects of analyzing incidence rates. Important pitfalls and areas of controversy are discussed. The text is aimed at graduate students, researchers, and analysts in the disciplines of epidemiology, biostatistics, social sciences, economics, and psychology.

Features:
- Compares and contrasts incidence rates with risks, odds, and hazards
- Shows stratified methods, including standardization, inverse-variance weighting, and Mantel-Haenszel methods
- Describes Poisson regression methods for adjusted rate ratios and rate differences
- Examines linear regression for rate differences with an emphasis on common problems
- Gives methods for correcting confidence intervals
- Illustrates problems related to collapsibility
- Explores extensions of count models for rates, including negative binomial regression, methods for clustered data, and the analysis of longitudinal data, and reviews controversies and limitations
- Presents matched cohort methods in detail
- Gives marginal methods for converting adjusted rate ratios to rate differences, and vice versa
- Demonstrates instrumental variable methods
- Compares Poisson regression with the Cox proportional hazards model, and introduces Royston-Parmar models

All data and analyses are in online Stata files which readers can download. Peter Cummings is Professor Emeritus, Department of Epidemiology, School of Public Health, University of Washington, Seattle WA. His research was primarily in the field of injuries. He used matched cohort methods to estimate how the use of seat belts and presence of airbags were related to death in a traffic crash. He is author or co-author of over 100 peer-reviewed articles.
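As a concrete illustration of the opening definition, the sketch below computes incidence rates, a rate ratio, and a rate difference from hypothetical cohort counts, along with the standard large-sample confidence interval for the rate ratio built on the log scale using SE(log RR) = sqrt(1/a + 1/b) for Poisson counts a and b. The numbers are invented for illustration and are not taken from the book.

```python
import math

# Hypothetical cohort data (illustrative only, not from the book):
# event counts and person-years of follow-up in two exposure groups.
events_exposed, pt_exposed = 30, 1200.0
events_unexposed, pt_unexposed = 20, 2000.0

# Incidence rate = count / person-time.
rate_exp = events_exposed / pt_exposed        # 30 per 1200 person-years
rate_unexp = events_unexposed / pt_unexposed  # 20 per 2000 person-years

rate_ratio = rate_exp / rate_unexp
rate_diff = rate_exp - rate_unexp

# Large-sample 95% CI for the rate ratio, computed on the log scale
# with SE(log RR) = sqrt(1/a + 1/b) for Poisson event counts a and b.
se_log_rr = math.sqrt(1 / events_exposed + 1 / events_unexposed)
ci_lo = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)
ci_hi = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)

print(f"rate ratio = {rate_ratio:.2f}, rate difference = {rate_diff:.4f}")
print(f"95% CI for the rate ratio: ({ci_lo:.2f}, {ci_hi:.2f})")
```

Poisson regression, as covered in the book, generalizes this two-group comparison by modeling the log rate as a linear function of covariates with person-time entering as an offset.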