Statistics as Principled Argument


Book Description

In this illuminating volume, Robert P. Abelson delves into the too-often dismissed problems of interpreting quantitative data and presenting them in the context of a coherent story about one's research. Unlike too many books on statistics, this is a remarkably engaging read, filled with fascinating real-life (and real-research) examples rather than with recipes for analysis. It will be of true interest and lasting value to beginning graduate students and seasoned researchers alike. The book's central thesis is that the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric. Five criteria, summarized by the acronym MAGIC (magnitude, articulation, generality, interestingness, and credibility), are proposed as crucial features of a persuasive, principled argument. Particular statistical methods are discussed with minimal use of formulas and heavy data sets. The ideas throughout revolve around elementary probability theory, t tests, and simple issues of research design, so the reader is assumed to have had some prior exposure to elementary statistics. Many examples are included to explain the connection of statistics to substantive claims about real phenomena.




Statistical Inference as Severe Testing


Book Description

Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.




All of Statistics


Book Description

Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.
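Among the modern topics the blurb mentions, bootstrapping is easy to demonstrate. As a minimal illustrative sketch (not drawn from the book; the data and function name are made up), here is a percentile bootstrap confidence interval for a sample mean:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a sample statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Resample the data with replacement, recompute the statistic each time,
    # and read the interval off the sorted resampled estimates.
    estimates = sorted(
        stat([rng.choice(data) for _ in range(n)]) for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4]
low, high = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({low:.2f}, {high:.2f})")
```

The appeal of the method, and presumably why the book treats it early, is that the same recipe works for statistics (medians, correlations) whose sampling distributions have no convenient closed form.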




Statistics for Making Decisions


Book Description

Making decisions is a ubiquitous mental activity in our private and professional or public lives. It entails choosing one course of action from an available shortlist of options. Statistics for Making Decisions places decision making at the centre of statistical inference, proposing its theory as a new paradigm for statistical practice. The analysis in this paradigm takes seriously both prior information and the consequences of the various kinds of errors that may be committed. Its conclusion is a course of action tailored to the perspective of the specific client or sponsor of the analysis. The author’s intention is a wholesale replacement of hypothesis testing, which he indicts on the grounds that it has no means of incorporating the consequences of errors that self-evidently matter to the client. The volume appeals to the analyst who deals with the simplest statistical problems: comparing two samples (which has the greater mean or variance), or deciding whether a parameter is positive or negative. It combines highlighting the deficiencies of hypothesis testing with promoting a principled solution based on the idea of a currency for error, of which we want to spend as little as possible. This is implemented by selecting the option for which the expected loss is smallest (the Bayes rule). The price to pay is the need for a more detailed description of the options, and for eliciting and quantifying the consequences (ramifications) of the errors. This is what our clients do informally, and often inexpertly, after receiving outputs of the analysis in an established format, such as the verdict of a hypothesis test or an estimate and its standard error. As a scientific discipline and profession, statistics has the potential to do this much better and deliver to the client a more complete and more relevant product.

Nicholas T. Longford is a senior statistician at Imperial College, London, specialising in statistical methods for neonatal medicine.
His interests include causal analysis of observational studies, decision theory, and the contest of modelling and design in data analysis. His longer-term appointments in the past include Educational Testing Service, Princeton, NJ, USA, de Montfort University, Leicester, England, and directorship of SNTL, a statistics research and consulting company. He is the author of over 100 journal articles and six other monographs on a variety of topics in applied statistics.
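The core idea of the blurb, choosing the option with the smallest expected loss, can be sketched in a few lines. This is an illustrative toy, not the author's implementation: the posterior probabilities, actions, and loss values below are invented to show the mechanics of the Bayes rule.

```python
# Pick the action with the smallest expected loss (the Bayes rule).
# Hypothetical setup: the analysis yields posterior probabilities that an
# effect theta is negative or positive, and the client supplies a loss table
# quantifying the consequences of each kind of error.

posterior = {"theta_negative": 0.15, "theta_positive": 0.85}

# loss[action][state]: cost of taking the action when that state is true.
loss = {
    "adopt":  {"theta_negative": 10.0, "theta_positive": 0.0},
    "reject": {"theta_negative": 0.0,  "theta_positive": 3.0},
}

def expected_loss(action):
    # Average the losses of an action over the posterior on the states.
    return sum(posterior[s] * loss[action][s] for s in posterior)

best = min(loss, key=expected_loss)
print(best, {a: round(expected_loss(a), 2) for a in loss})
```

Here adopting risks 0.15 × 10 = 1.5 units of the "currency for error" while rejecting risks 0.85 × 3 = 2.55, so the Bayes rule adopts. The point the book presses is that this verdict changes when the client's loss table changes, something a hypothesis test cannot express.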




Statistics


Book Description

The Fourth Edition of Statistics: A Gentle Introduction shows students that an introductory statistics class doesn’t need to be difficult or dull. This text minimizes students’ anxieties about math by explaining the concepts of statistics in plain language first, before addressing the math. Each formula within the text has a step-by-step example to demonstrate the calculation so students can follow along. Only those formulas that are important for final calculations are included in the text so students can focus on the concepts, not the numbers. A wealth of real-world examples and applications gives a context for statistics in the real world and how it helps us solve problems and make informed choices. New to the Fourth Edition are sections on working with big data, new coverage of alternative non-parametric tests, beta coefficients, and the "nocebo effect," discussions of p values in the context of research, an expanded discussion of confidence intervals, and more exercises and homework options under the new feature "Test Yourself." Included with this title: The password-protected Instructor Resource Site (formerly known as SAGE Edge) offers access to all text-specific resources, including a test bank and editable, chapter-specific PowerPoint® slides.




Principles of Statistical Inference


Book Description

In this definitive book, D. R. Cox gives a comprehensive and balanced appraisal of statistical inference. He develops the key concepts, describing and comparing the main ideas and controversies over foundational issues that have been keenly argued for more than two hundred years. Continuing a sixty-year career of major contributions to statistical thought, Cox is uniquely placed to give this much-needed account of the field. An appendix gives a more personal assessment of the merits of different ideas. The content ranges from the traditional to the contemporary. While specific applications are not treated, the book is strongly motivated by applications across the sciences and associated technologies. The mathematics is kept as elementary as feasible, though previous knowledge of statistics is assumed. The book will be valued by every user or student of statistics who is serious about understanding the uncertainty inherent in conclusions from statistical analyses.




Common Errors in Statistics (and How to Avoid Them)


Book Description

Praise for the Second Edition

"All statistics students and teachers will find in this book a friendly and intelligent guide to . . . applied statistics in practice." —Journal of Applied Statistics

". . . a very engaging and valuable book for all who use statistics in any setting." —CHOICE

". . . a concise guide to the basics of statistics, replete with examples . . . a valuable reference for more advanced statisticians as well." —MAA Reviews

Now in its Third Edition, the highly readable Common Errors in Statistics (and How to Avoid Them) continues to serve as a thorough and straightforward discussion of basic statistical methods, presentations, approaches, and modeling techniques. Further enriched with new examples and counterexamples from the latest research as well as added coverage of relevant topics, this new edition of the benchmark book addresses popular mistakes often made in data collection and provides an indispensable guide to accurate statistical analysis and reporting. The authors' emphasis on careful practice, combined with a focus on the development of solutions, reveals the true value of statistics when applied correctly in any area of research.

The Third Edition has been considerably expanded and revised to include:

- A new chapter on data quality assessment
- A new chapter on correlated data
- An expanded chapter on data analysis covering categorical and ordinal data, continuous measurements, and time-to-event data, including sections on factorial and crossover designs
- Revamped exercises with a stronger emphasis on solutions
- An extended chapter on report preparation
- New sections on factor analysis as well as Poisson and negative binomial regression

Providing valuable, up-to-date information in the same user-friendly format as its predecessor, Common Errors in Statistics (and How to Avoid Them), Third Edition is an excellent book for students and professionals in industry, government, medicine, and the social sciences.




Statistics for Mathematicians


Book Description

This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.




Statistical Rethinking


Book Description

Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers’ knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today’s model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling.

Web Resource

The book is accompanied by an R package (rethinking) that is available on the author’s website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
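The "step-by-step calculations that are usually automated" can be made concrete. The book itself works in R; the sketch below is a Python stand-in (not taken from the book, with made-up data) showing the kind of hand-built grid approximation of a posterior it asks readers to perform before reaching for automated samplers:

```python
from math import comb

# Grid approximation of a posterior for a Bernoulli success probability p.
# Illustrative data: 6 successes in 9 trials, with a flat prior on p.
n_grid = 1000
grid = [i / (n_grid - 1) for i in range(n_grid)]

def binom_likelihood(p, k=6, n=9):
    return comb(n, k) * p**k * (1 - p)**(n - k)

prior = [1.0] * n_grid                                   # flat prior
unnorm = [binom_likelihood(p) * pr for p, pr in zip(grid, prior)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]                  # normalize to sum to 1

# With a flat prior the exact posterior is Beta(7, 4), whose mean is 7/11.
post_mean = sum(p * w for p, w in zip(grid, posterior))
print(f"posterior mean = {post_mean:.3f}")
```

Doing this once by hand makes it obvious what a tool like Stan automates: the same multiply-prior-by-likelihood-and-normalize step, carried out over models far too large for a grid.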