Informative Hypotheses


Book Description

When scientists formulate their theories, expectations, and hypotheses, they often use statements like: "I expect mean A to be bigger than means B and C"; "I expect the relation between Y and both X1 and X2 to be positive"; and "I expect the relation between Y and X1 to be stronger than the relation between Y and X2". Stated otherwise, they formulate their expectations in terms of inequality constraints among the parameters in which they are interested; that is, they formulate informative hypotheses. There is currently a sound theoretical foundation for the evaluation of informative hypotheses using Bayes factors, p-values, and the generalized order-restricted information criterion. Furthermore, software, much of it free, is available that enables researchers to evaluate informative hypotheses using their own data. The road is open to challenge the dominance of the null hypothesis in contemporary research in the behavioral, social, and other sciences.
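To make the idea of an inequality-constrained hypothesis concrete, here is a minimal Python sketch of one common way such a hypothesis can be evaluated: a Bayes factor that compares the constrained hypothesis with an unconstrained one by contrasting how much prior and posterior probability mass agree with the constraint. The group labels, data values, vague prior scale, and normal approximations below are illustrative assumptions, not the book's own software.

```python
# Illustrative sketch: Monte Carlo approximation of the Bayes factor for an
# informative (inequality-constrained) hypothesis H1: mu_A > mu_B and mu_A > mu_C
# against the unconstrained hypothesis.  All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical summaries: posterior for each group mean approximated as
# Normal(sample mean, standard error).
means = {"A": 5.2, "B": 4.1, "C": 4.5}
ses   = {"A": 0.4, "B": 0.4, "C": 0.4}

n_draws = 200_000
prior_sd = 10.0  # vague, identical prior for every mean (an assumption)

# Draws from the unconstrained prior and posterior for each group mean.
prior = {g: rng.normal(0.0, prior_sd, n_draws) for g in means}
post  = {g: rng.normal(means[g], ses[g], n_draws) for g in means}

def prop_constraint_holds(draws):
    """Proportion of draws in which mu_A > mu_B and mu_A > mu_C both hold."""
    return np.mean((draws["A"] > draws["B"]) & (draws["A"] > draws["C"]))

c1 = prop_constraint_holds(prior)  # prior mass agreeing with H1 (about 1/3 here)
f1 = prop_constraint_holds(post)   # posterior mass agreeing with H1

bf_1u = f1 / c1  # Bayes factor of the informative vs. the unconstrained hypothesis
print(f"prior fit c1 = {c1:.3f}, posterior fit f1 = {f1:.3f}, BF_1u = {bf_1u:.2f}")
```

A value of BF_1u well above 1 indicates that the data shifted probability mass toward the region of the parameter space allowed by the informative hypothesis.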




A First Course in Linear Model Theory


Book Description

This innovative, intermediate-level statistics text fills an important gap by presenting the theory of linear statistical models at a level appropriate for senior undergraduate or first-year graduate students. With an innovative approach, the authors introduce students to the mathematical and statistical concepts and tools that form a foundation for studying the theory and applications of both univariate and multivariate linear models. A First Course in Linear Model Theory systematically presents the basic theory behind linear statistical models, with motivation from both an algebraic and a geometric perspective. Through the concepts and tools of matrix and linear algebra and distribution theory, it provides a framework for understanding classical and contemporary linear model theory. It does not merely introduce formulas; it develops in students the art of statistical thinking and inspires learning at an intuitive level by emphasizing conceptual understanding. The authors' fresh approach, methodical presentation, wealth of examples, and introduction to topics beyond the classical theory set this book apart from other texts on linear models. It forms a refreshing and invaluable first step in students' study of advanced linear models, generalized linear models, nonlinear models, and dynamic models.




Handbook of Bayesian Variable Selection


Book Description

Bayesian variable selection has experienced substantial developments over the past 30 years with the proliferation of large data sets. Identifying relevant variables to include in a model allows simpler interpretation, avoids overfitting and multicollinearity, and can provide insights into the mechanisms underlying an observed phenomenon. Variable selection is especially important when the number of potential predictors is substantially larger than the sample size and sparsity can reasonably be assumed. The Handbook of Bayesian Variable Selection provides a comprehensive review of theoretical, methodological, and computational aspects of Bayesian methods for variable selection. The topics covered include spike-and-slab priors, continuous shrinkage priors, Bayes factors, Bayesian model averaging, and partitioning methods, as well as variable selection in decision trees and edge selection in graphical models. The handbook targets graduate students and established researchers who seek to understand the latest developments in the field. It also provides a valuable reference for all those interested in applying existing methods and/or pursuing methodological extensions. Features: Provides a comprehensive review of methods and applications of Bayesian variable selection. Divided into four parts covering spike-and-slab priors, continuous shrinkage priors, extensions to various modeling settings, and other approaches to Bayesian variable selection. Covers theoretical and methodological aspects, as well as worked-out examples with R code provided in the online supplement. Includes contributions by experts in the field. Supported by a website with code, data, and other supplementary material.
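As a small, self-contained illustration of two of the themes above (Bayes factors and Bayesian model averaging), the following Python sketch enumerates all subsets of a few simulated predictors, approximates each model's marginal likelihood with its BIC, and reports posterior inclusion probabilities. The simulated data, uniform model prior, and BIC approximation are assumptions made for this example; the handbook's own worked examples are in R.

```python
# Illustrative sketch: Bayesian model averaging over all predictor subsets using
# the common BIC approximation to each model's marginal likelihood.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # only x0 and x2 matter

def bic_of_subset(idx):
    """Gaussian linear model BIC for the predictors in `idx` (plus an intercept)."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    sigma2 = resid @ resid / n
    k = Xs.shape[1] + 1                       # coefficients + error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + k * np.log(n)

# Enumerate all 2^p subsets and convert BICs to approximate posterior model weights.
subsets = [tuple(s) for r in range(p + 1) for s in combinations(range(p), r)]
bics = np.array([bic_of_subset(s) for s in subsets])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()                      # uniform prior over models assumed

# Posterior inclusion probability of each predictor: total weight of models containing it.
for j in range(p):
    pip = sum(w for s, w in zip(subsets, weights) if j in s)
    print(f"P(x{j} in model | data) ~ {pip:.3f}")
```

Exhaustive enumeration is only feasible for a handful of predictors; the spike-and-slab and shrinkage-prior methods covered in the handbook are designed for the high-dimensional settings where such enumeration breaks down.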




Bayesian Modeling Using WinBUGS


Book Description

A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distributions and model checking; and Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts, and all data sets and code are available on the book's related Web site. Requiring only a working knowledge of probability theory and statistics, Bayesian Modeling Using WinBUGS serves as an excellent book for courses on Bayesian statistics at the upper-undergraduate and graduate levels. It is also a valuable reference for researchers and practitioners in the fields of statistics, actuarial science, medicine, and the social sciences who use WinBUGS in their everyday work.




Bayesian Evaluation of Informative Hypotheses


Book Description

This book provides an overview of the developments in the area of Bayesian evaluation of informative hypotheses that took place since the publication of the first paper on this topic in 2001 [Hoijtink, H. Confirmatory latent class analysis, model selection using Bayes factors and (pseudo) likelihood ratio statistics. Multivariate Behavioral Research, 36, 563–588]. The current state of affairs was presented and discussed by the authors of this book during a workshop in Utrecht in June 2007. Here we would like to thank all authors for their participation, ideas, and contributions. We would also like to thank Sophie van der Zee for her editorial efforts during the construction of this book. Another word of thanks is due to John Kimmel of Springer for his confidence in the editors and authors. Finally, we would like to thank the Netherlands Organization for Scientific Research (NWO) whose VICI grant (453-05-002) awarded to the first author enabled the organization of the workshop, the writing of this book, and continuation of the research with respect to Bayesian evaluation of informative hypotheses.




Bayesian Data Analysis, Third Edition


Book Description

Now in its third edition, this classic book is widely considered the leading text on Bayesian methods, lauded for its accessible, practical approach to analyzing data and solving research problems. Bayesian Data Analysis, Third Edition continues to take an applied approach to analysis using up-to-date Bayesian methods. The authors—all leaders in the statistics community—introduce basic concepts from a data-analytic perspective before presenting advanced methods. Throughout the text, numerous worked examples drawn from real applications and research emphasize the use of Bayesian inference in practice. New to the Third Edition: four new chapters on nonparametric modeling; coverage of weakly informative priors and boundary-avoiding priors; updated discussion of cross-validation and predictive information criteria; improved convergence monitoring and effective sample size calculations for iterative simulation; presentations of Hamiltonian Monte Carlo, variational Bayes, and expectation propagation; and new and revised software code. The book can be used in three different ways. For undergraduate students, it introduces Bayesian inference starting from first principles. For graduate students, the text presents effective current approaches to Bayesian modeling and computation in statistics and related fields. For researchers, it provides an assortment of Bayesian methods in applied statistics. Additional materials, including data sets used in the examples, solutions to selected exercises, and software instructions, are available on the book's web page.




Hypothesis Testing and Model Selection in the Social Sciences


Book Description

Examining the major approaches to hypothesis testing and model selection, this book blends statistical theory with recommendations for practice, illustrated with real-world social science examples. It systematically compares classical (frequentist) and Bayesian approaches, showing how they are applied, exploring ways to reconcile the differences between them, and evaluating key controversies and criticisms. The book also addresses the role of hypothesis testing in the evaluation of theories, the relationship between hypothesis tests and confidence intervals, and the role of prior knowledge in Bayesian estimation and Bayesian hypothesis testing. Two easily calculated alternatives to standard hypothesis tests are discussed in depth: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The companion website (www.guilford.com/weakliem-materials) supplies data and syntax files for the book's examples.
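Since the blurb highlights AIC and BIC as easily calculated, here is a brief Python sketch of the two formulas applied to a model's maximized log-likelihood; the numerical values below are made up purely for illustration.

```python
# Illustrative sketch of the two information criteria discussed in the book.
import math

def aic(loglik, k):
    """Akaike information criterion: -2 * log-likelihood + 2 * (number of parameters)."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2 * log-likelihood + (number of parameters) * log(sample size)."""
    return -2 * loglik + k * math.log(n)

# Hypothetical comparison: model 2 adds one parameter and fits slightly better.
print(aic(loglik=-520.3, k=4), bic(loglik=-520.3, k=4, n=300))
print(aic(loglik=-519.8, k=5), bic(loglik=-519.8, k=5, n=300))
# Lower values are preferred; BIC penalizes the extra parameter more heavily when n is large,
# which is why the two criteria can favor different models.
```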




Learning Statistics with R


Book Description

"Learning Statistics with R" covers the contents of an introductory statistics class, as typically taught to undergraduate psychology students, focusing on the use of the R statistical software and adopting a light, conversational style throughout. The book discusses how to get started in R, and gives an introduction to data manipulation and writing scripts. From a statistical perspective, the book discusses descriptive statistics and graphing first, followed by chapters on probability theory, sampling and estimation, and null hypothesis testing. After introducing the theory, the book covers the analysis of contingency tables, t-tests, ANOVAs and regression. Bayesian statistics are covered at the end of the book. For more information (and the opportunity to check the book out before you buy!) visit http://ua.edu.au/ccs/teaching/lsr or http://learningstatisticswithr.com




Doing Bayesian Data Analysis


Book Description

There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis tractable and accessible to a wide audience. Doing Bayesian Data Analysis: A Tutorial Introduction with R and BUGS is written for first-year graduate students or advanced undergraduates and provides an accessible approach, as all mathematics is explained intuitively and with concrete examples. It assumes only algebra and 'rusty' calculus. Unlike other textbooks, this book begins with the basics, including essential concepts of probability and random sampling. The book gradually climbs all the way to advanced hierarchical modeling methods for realistic data. The text provides complete examples with the R programming language and BUGS software (both freeware), and begins with basic programming examples, working up gradually to complete programs for complex analyses and presentation graphics. These templates can easily be adapted to a wide variety of students' own research needs. The textbook bridges students from their undergraduate training into modern Bayesian methods. Features: accessible coverage of the essential concepts of probability and random sampling; examples with the R programming language and BUGS software; comprehensive coverage of all scenarios addressed by non-Bayesian textbooks, including t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis); coverage of experiment planning; R and BUGS computer programming code on the website; and exercises with explicit purposes and guidelines for accomplishment.




Feature Engineering and Selection


Book Description

The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.
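As a rough illustration of the "best subset of predictors" idea (the book's own examples are in R), the following Python sketch uses forward sequential selection driven by cross-validated linear-regression performance on simulated data. The dataset and the choice to retain three features are assumptions made for this example, not a recipe from the book.

```python
# Illustrative sketch: choosing a predictor subset by cross-validated model performance.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Simulated data: 10 candidate predictors, only a few of which are informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

selector = SequentialFeatureSelector(
    LinearRegression(),
    n_features_to_select=3,   # assumed target subset size
    direction="forward",      # add one predictor at a time
    cv=5,                     # 5-fold cross-validation scores each candidate subset
)
selector.fit(X, y)
print("Selected predictor columns:", selector.get_support(indices=True))
```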