Rethinking the Foundations of Statistics


Book Description

This important collection of essays is a synthesis of foundational studies in Bayesian decision theory and statistics. An overarching topic of the collection is understanding how the norms for Bayesian decision making should apply in settings with more than one rational decision maker, and then tracing out some of the consequences of this turn for Bayesian statistics. The volume will be particularly valuable to philosophers concerned with decision theory, probability, and statistics, as well as to statisticians, mathematicians, and economists.




The Foundations of Statistics


Book Description

A classic analysis of the foundations of statistics and of the development of personal probability, one of the greatest controversies in modern statistical thought. This revised edition assumes a background in calculus, probability, statistics, and Boolean algebra.




Statistical Rethinking


Book Description

Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers’ knowledge of and confidence in statistical modeling. Reflecting the need for at least minor programming in today’s model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It progresses from the basics of regression to multilevel models, and the author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, the book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling.

Web resource: the book is accompanied by an R package (rethinking) that is available on the author’s website and GitHub. The package’s two core functions, map and map2stan, allow a variety of statistical models to be constructed from standard model formulas.
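
To give a flavor of that formula-based interface, here is a minimal sketch (not an excerpt from the book) of fitting a simple Gaussian regression with map2stan; the data frame d and its columns x and y are hypothetical placeholders:

```r
library(rethinking)  # the package described above

# hypothetical data: d is a data frame with predictor x and outcome y
fit <- map2stan(
  alist(
    y ~ dnorm(mu, sigma),   # likelihood
    mu <- a + b * x,        # linear model for the mean
    a ~ dnorm(0, 10),       # prior on the intercept
    b ~ dnorm(0, 1),        # prior on the slope
    sigma ~ dcauchy(0, 1)   # prior on the residual scale
  ),
  data = d
)

precis(fit)  # summarize the posterior distribution
```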




Reflections on the Foundations of Probability and Statistics


Book Description

This Festschrift celebrates Teddy Seidenfeld and his seminal contributions to philosophy, statistics, probability, game theory, and related areas. The 13 contributions in this volume, written by leading researchers in these fields, are supplemented by an interview with Teddy Seidenfeld that offers an abbreviated intellectual autobiography, touching on topics of timeless interest concerning truth and uncertainty. Indeed, as the eminent philosopher Isaac Levi writes in this volume: "In a world dominated by Alternative Facts and Fake News, it is hard to believe that many of us have spent our life’s work, as has Teddy Seidenfeld, in discussing truth and uncertainty." The reader is invited to share this celebration of Teddy Seidenfeld’s work uncovering truths about uncertainty and the penetrating insights it offers to our common pursuit of truth in the face of uncertainty.




Think Stats


Book Description

If you know how to program, you have the skills to turn data into knowledge using the tools of probability and statistics. This concise introduction shows you how to perform statistical analysis computationally, rather than mathematically, with programs written in Python. You'll work with a case study throughout the book to help you learn the entire data analysis process—from collecting data and generating statistics to identifying patterns and testing hypotheses. Along the way, you'll become familiar with distributions, the rules of probability, visualization, and many other tools and concepts. With this book, you will:

- Develop your understanding of probability and statistics by writing and testing code
- Run experiments to test statistical behavior, such as generating samples from several distributions
- Use simulations to understand concepts that are hard to grasp mathematically
- Learn topics not usually covered in an introductory course, such as Bayesian estimation
- Import data from almost any source using Python, rather than be limited to data that has been cleaned and formatted for statistics tools
- Use statistical inference to answer questions about real-world data




Probability, Statistics, and Data


Book Description

This book is a fresh approach to a calculus-based first course in probability and statistics, using R throughout to give a central role to data and simulation. The book introduces probability with Monte Carlo simulation as an essential tool; simulation makes challenging probability questions quickly accessible and easily understandable. Mathematical approaches are included, using calculus when appropriate, but are always connected to experimental computations. Using R and simulation gives a nuanced understanding of statistical inference. The impact of departures from assumptions in statistical tests is emphasized, quantified using simulations, and demonstrated with real data. The book compares parametric and non-parametric methods through simulation, allowing for a thorough investigation of testing error and power. The text builds R skills from the outset, allowing modern methods of resampling and cross-validation to be introduced along with traditional statistical techniques. Fifty-two data sets are included in the complementary R package fosdata. Most of these data sets are from recently published papers, so that you are working with current, real data, which is often large and messy. Two central chapters use powerful tidyverse tools (dplyr, ggplot2, tidyr, stringr) to wrangle data and produce meaningful visualizations. Preliminary versions of the book have been used for five semesters at Saint Louis University, and the majority of the more than 400 exercises have been classroom tested.
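
To give a flavor of the simulation-first approach the description emphasizes, here is a minimal R sketch (not drawn from the book) that estimates a dice probability by Monte Carlo and compares it with the exact value:

```r
set.seed(42)  # for reproducibility

# simulate 10,000 rolls of two fair dice and record each sum
rolls <- replicate(10000, sum(sample(1:6, 2, replace = TRUE)))

# Monte Carlo estimate of P(sum = 7); the exact value is 6/36 = 1/6
mean(rolls == 7)
```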




Data Driven Statistical Methods


Book Description

Calculations once prohibitively time-consuming can be completed in microseconds by modern computers. This has resulted in dramatic shifts in emphasis in applied statistics. Not only has it freed us from an obsession with the 5% and 1% significance levels imposed by conventional tables but many exact estimation procedures based on randomization tests are now as easy to carry out as approximations based on normal distribution theory. In a wider context it has facilitated the everyday use of tools such as the bootstrap and robust estimation methods as well as diagnostic tests for pinpointing or for adjusting possible aberrations or contamination that may otherwise be virtually undetectable in complex data sets. Data Driven Statistical Methods provides an insight into modern developments in statistical methodology using examples that highlight connections between these techniques as well as their relationship to other established approaches. Illustration by simple numerical examples takes priority over abstract theory. Examples and exercises are selected from many fields ranging from studies of literary style to analysis of survival data from clinical files, from psychological tests to interpretation of evidence in legal cases. Users are encouraged to apply the methods to their own or other data sets relevant to their fields of interest. The book will appeal both to lecturers giving undergraduate mainstream or service courses in statistics and to newly-practising statisticians or others concerned with data interpretation in any discipline who want to make the best use of modern statistical computer software.
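
As one concrete illustration of how routine such tools have become, the following base R sketch (a generic example of typical usage, not taken from the book) computes a percentile bootstrap confidence interval for a sample mean; the data x are hypothetical:

```r
set.seed(1)
x <- rexp(50, rate = 1)  # hypothetical sample of 50 skewed observations

# resample with replacement 10,000 times, recording the mean each time
boot_means <- replicate(10000, mean(sample(x, replace = TRUE)))

# 95% percentile bootstrap interval for the population mean
quantile(boot_means, c(0.025, 0.975))
```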




Statistical Inference as Severe Testing


Book Description

Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.




Statistical Rules of Thumb


Book Description

Praise for the First Edition: "For a beginner [this book] is a treasure trove; for an experienced person it can provide new ideas on how better to pursue the subject of applied statistics." —Journal of Quality Technology

Sensibly organized for quick reference, Statistical Rules of Thumb, Second Edition compiles simple rules that are widely applicable, robust, and elegant, each capturing a key statistical concept. This unique guide to the use of statistics for designing, conducting, and analyzing research studies illustrates real-world statistical applications through examples from fields such as public health and environmental studies. Along with an insightful discussion of the reasoning behind every technique, this easy-to-use handbook also conveys the various possibilities statisticians must consider when designing and conducting a study or analyzing its data. Each chapter presents clearly defined rules related to inference, covariation, experimental design, consultation, and data representation, and each rule is organized and discussed under five succinct headings: introduction; statement and illustration of the rule; the derivation of the rule; a concluding discussion; and exploration of the concept's extensions. The author also introduces new rules of thumb for topics such as sample size for ratio analysis, absolute and relative risk, ANCOVA cautions, and dichotomization of continuous variables. Additional features of the Second Edition include:

- Additional rules on Bayesian topics
- New chapters on observational studies and Evidence-Based Medicine (EBM)
- Additional emphasis on variation and causation
- Updated material with new references, examples, and sources

A related website, www.vanbelle.org, provides a rich learning environment with additional rules, presentations by the author, and a message board where readers can share their own strategies and discoveries. Statistical Rules of Thumb, Second Edition is an ideal supplementary book for courses in experimental design and survey research methods at the upper-undergraduate and graduate levels. It also serves as an indispensable reference for statisticians, researchers, consultants, and scientists who would like to develop an understanding of the statistical foundations of their research efforts.