The Myth of Statistical Inference


Book Description

This book proposes and explores the idea that the forced union of the aleatory and epistemic aspects of probability is a sterile hybrid, inspired and nourished for 300 years by a false hope of formalizing inductive reasoning and making uncertainty the object of precise calculation. Because this goal is not actually attainable, statistical inference is not, and cannot be, doing for us today what we imagine it is doing for us. It is for these reasons that statistical inference can be characterized as a myth. The book is aimed primarily at social scientists, for whom statistics and statistical inference are a common concern and frustration. Because the historical development given here is not merely anecdotal, but makes clear the guiding ideas and ambitions that motivated the formulation of particular methods, this book offers an understanding of statistical inference that has not hitherto been available. It will also serve as a supplement to standard statistics texts. Finally, general readers will find here an interesting study with implications far beyond statistics. The development of statistical inference, to its present position of prominence in the social sciences, epitomizes a number of trends in Western intellectual history of the last three centuries. The eleventh chapter, considering the function of statistical inference in light of our needs for structure, rules, authority, and consensus in general, develops some provocative parallels, especially between epistemology and politics.




Statistical Inference as Severe Testing


Book Description

Mounting failures of replication in the social and biological sciences give new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements among experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in the long run. If statistical consumers are unaware of the assumptions behind rival evidence reforms, they cannot scrutinize the consequences that affect them (in personalized medicine, psychology, and elsewhere). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then the claim has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny, and are in tension with successful strategies for blocking or accounting for cherry-picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.
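To make the severity idea concrete: for a one-sided test of a normal mean with known variance, the severity with which the claim mu > mu1 passes is the probability of observing a less extreme result than the one actually obtained, were mu1 the true mean. The R sketch below is a rough numerical illustration of that logic, not the book's own code; the test setup, sample size, and values are hypothetical.

```r
# Hypothetical illustration of severity for a one-sided normal test:
# H0: mu <= 0 vs H1: mu > 0, sigma known, observed sample mean xbar.
severity <- function(mu1, xbar, sigma = 1, n = 100) {
  se <- sigma / sqrt(n)
  # SEV(mu > mu1) = Pr(sample mean smaller than xbar; true mean = mu1)
  pnorm((xbar - mu1) / se)
}

severity(mu1 = 0.10, xbar = 0.2)  # ~0.84: mu > 0.10 passes with fair severity
severity(mu1 = 0.19, xbar = 0.2)  # ~0.54: mu > 0.19 has barely been probed
```

A severity near 1 means the data would probably have been less extreme if the claim were false; a value near 0.5 means the claim has scarcely been tested at all.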




Understanding Statistics and Statistical Myths


Book Description

Addressing 30 statistical myths in the areas of data, estimation, measurement system analysis, capability, hypothesis testing, statistical inference, and control charts, this book explains how to understand statistics rather than how to do statistics. Every statistical myth listed in this book has been stated in course materials used by the author.




Learning Statistics with R


Book Description

"Learning Statistics with R" covers the contents of an introductory statistics class, as typically taught to undergraduate psychology students, focusing on the use of the R statistical software and adopting a light, conversational style throughout. The book discusses how to get started in R, and gives an introduction to data manipulation and writing scripts. From a statistical perspective, the book discusses descriptive statistics and graphing first, followed by chapters on probability theory, sampling and estimation, and null hypothesis testing. After introducing the theory, the book covers the analysis of contingency tables, t-tests, ANOVAs and regression. Bayesian statistics are covered at the end of the book. For more information (and the opportunity to check the book out before you buy!) visit http://ua.edu.au/ccs/teaching/lsr or http://learningstatisticswithr.com




Evidence-Based Technical Analysis


Book Description

Evidence-Based Technical Analysis examines how you can apply the scientific method, and recently developed statistical tests, to determine the true effectiveness of technical trading signals. Throughout the book, expert David Aronson provides you with comprehensive coverage of this new methodology, which is specifically designed for evaluating the performance of rules/signals that are discovered by data mining.
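One standard tool in this area is a resampling test of whether a rule's observed performance could have arisen by chance. The R sketch below is a generic permutation test on invented data, not Aronson's specific procedure (the book emphasizes methods such as bootstrap and Monte Carlo permutation tests adjusted for data-mining bias); the signal and returns here are purely hypothetical.

```r
# Hypothetical example: permutation test of a long/short rule's mean return
set.seed(1)
returns <- rnorm(1000, mean = 0, sd = 0.01)        # invented daily returns
signal  <- sample(c(-1, 1), 1000, replace = TRUE)  # invented +1/-1 positions

observed <- mean(signal * returns)                 # the rule's mean daily return
perm     <- replicate(10000, mean(sample(signal) * returns))
p_value  <- mean(perm >= observed)                 # one-sided: better than chance?
p_value
```

Shuffling the signal breaks any genuine link to returns, so the permutation distribution shows what performance pure luck produces; a rule found by scanning many candidates needs a correspondingly stricter benchmark.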




Willful Ignorance


Book Description

An original account of willful ignorance and how this principle relates to modern probability and statistical methods. Through a series of colorful stories about great thinkers and the problems they chose to solve, the author traces the historical evolution of probability and explains how statistical methods have helped to propel scientific research. However, the past success of statistics has depended on vast, deliberate simplifications amounting to willful ignorance, and this very success now threatens future advances in medicine, the social sciences, and other fields. Limitations of existing methods result in frequent reversals of scientific findings and recommendations, to the consternation of both scientists and the lay public. Willful Ignorance: The Mismeasure of Uncertainty exposes the fallacy of regarding probability as the full measure of our uncertainty. The book explains how statistical methodology, though enormously productive and influential over the past century, is approaching a crisis. The deep and troubling divides between qualitative and quantitative modes of research, and between research and practice, are reflections of this underlying problem. The author outlines a path toward the re-engineering of data analysis to help close these gaps and accelerate scientific discovery. The book presents essential information and novel ideas that should be of interest to anyone concerned about the future of scientific research; it is especially pertinent for professionals in statistics and related fields, including practicing and research clinicians, biomedical and social science researchers, business leaders, and policy-makers.




The Cult of Statistical Significance


Book Description

How the most important statistical method used in many of the sciences doesn't pass the test for basic common sense




Exploring the History of Statistical Inference in Economics


Book Description

Contributors to this special supplement explore the history of statistical inference, guided by two motivations. One was the belief that John Maynard Keynes's distinction between the descriptive and the inductive functions of statistical research provides a fruitful framework for understanding empirical research practices. The other was an aim to fill a gap in the history of economics by exploring an important part of the story left out of existing histories of empirical analysis in economics: namely, the "sinful" research practices that did not meet, or point towards, currently reigning standards of scientific research.




Fisher, Neyman, and the Creation of Classical Statistics


Book Description

Classical statistical theory (hypothesis testing, estimation, and the design of experiments and sample surveys) is mainly the creation of two men: Ronald A. Fisher (1890-1962) and Jerzy Neyman (1894-1981). Their contributions sometimes complemented each other, sometimes occurred in parallel, and, particularly at later stages, often stood in strong opposition. The two men would not be pleased to see their names linked in this way, since throughout most of their working lives they detested each other. Nevertheless, they worked on the same problems, and through their combined efforts created a new discipline. This book by E.L. Lehmann, himself a student of Neyman's, explores the relationship between Neyman and Fisher, their interactions with other influential statisticians, and the statistical history they helped create together. Lehmann draws on direct correspondence and original papers to reconstruct a historical account of the development of the Neyman-Pearson theory, Fisher's dissent from it, and other important statistical theories.