The R Package MitISEM


Book Description

This paper presents the R package MitISEM, which provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. The package also provides an extended MitISEM algorithm, 'sequential MitISEM', which substantially decreases the computational time when the target density has to be approximated for increasing data samples. This occurs when the posterior distribution is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that the candidate distribution obtained by MitISEM outperforms those obtained by 'naive' approximations in terms of numerical efficiency. Further, the MitISEM approach can be used for Bayesian model comparison, using the predictive likelihoods.
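To make the importance-sampling idea behind MitISEM concrete, here is a minimal generic sketch, not code from the package: a Student-t candidate is used to weight draws against an unnormalized target kernel. The bimodal kernel and all constants below are illustrative assumptions; MitISEM itself adapts a *mixture* of Student-t components, whereas this sketch uses a single one for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target kernel: an unnormalized bimodal density, a stand-in for a
# non-elliptical posterior kernel (only the kernel is required).
def target_kernel(x):
    return np.exp(-0.5 * (x - 2) ** 2) + np.exp(-0.5 * (x + 2) ** 2)

# Candidate: a single fat-tailed Student-t density.
df, loc, scale = 5.0, 0.0, 3.0
draws = stats.t.rvs(df, loc=loc, scale=scale, size=20_000, random_state=rng)

# Importance weights: target kernel divided by candidate density.
w = target_kernel(draws) / stats.t.pdf(draws, df, loc=loc, scale=scale)
w_norm = w / w.sum()

# Self-normalized importance-sampling estimate of the posterior mean,
# which should be close to 0 by the symmetry of the kernel.
post_mean = np.sum(w_norm * draws)
print(round(post_mean, 2))
```

The fat tails of the Student-t candidate keep the weights bounded, which is the same motivation for the Student-t mixtures in MitISEM.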




Reliability, Risk, and Safety, Three Volume Set


Book Description

Containing papers presented at the 18th European Safety and Reliability Conference (ESREL 2009) in Prague, Czech Republic, September 2009, Reliability, Risk and Safety: Theory and Applications will be of interest to academics and professionals working in a wide range of industrial and governmental sectors, including Aeronautics and Aerospace, Aut




To Bridge, to Warp or to Wrap? A Comparative Study of Monte Carlo Methods for Efficient Evaluation of Marginal Likelihood


Book Description

Important choices for efficient and accurate evaluation of marginal likelihoods by means of Monte Carlo simulation methods are studied for the case of highly non-elliptical posterior distributions. We focus on the situation where one makes use of importance sampling or the independence chain Metropolis-Hastings algorithm for posterior analysis. A comparative analysis is presented of possible advantages and limitations of different simulation techniques; of possible choices of candidate distributions and choices of target or warped target distributions; and finally of numerical standard errors. The importance of a robust and flexible estimation strategy is demonstrated where the complete posterior distribution is explored. In this respect, the adaptive mixture of Student-t distributions of Hoogerheide et al. (2007) works particularly well. Given an appropriately yet quickly tuned candidate, straightforward importance sampling provides the most efficient estimator of the marginal likelihood in the cases investigated in this paper, which include a non-linear regression model of Ritter and Tanner (1992) and a conditional normal distribution of Gelman and Meng (1991). A poor choice of candidate density may lead to a huge loss of efficiency, in which case the numerical standard error may be highly unreliable.
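The importance-sampling estimator of the marginal likelihood discussed above can be sketched on a toy conjugate model where the answer is known in closed form. Everything here, the model, the single observation, and the Student-t candidate, is an illustrative assumption, not a reproduction of the paper's examples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy conjugate model with a known marginal likelihood:
# y | theta ~ N(theta, 1), theta ~ N(0, 1)  =>  marginally y ~ N(0, 2).
y = 1.5
true_ml = stats.norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))

# Candidate q: a Student-t roughly matched to the posterior N(y/2, 1/2);
# the fat tails guard against exploding importance weights.
post_mean, post_sd = y / 2.0, np.sqrt(0.5)
theta = stats.t.rvs(5, loc=post_mean, scale=post_sd, size=50_000,
                    random_state=rng)
q = stats.t.pdf(theta, 5, loc=post_mean, scale=post_sd)

# IS estimator: p(y) ~ (1/N) sum of p(y|theta_i) p(theta_i) / q(theta_i).
w = stats.norm.pdf(y, loc=theta, scale=1.0) * stats.norm.pdf(theta) / q
ml_hat = w.mean()

# Numerical standard error of the estimate, from the weight variance.
nse = w.std(ddof=1) / np.sqrt(len(w))
print(f"true={true_ml:.4f} estimate={ml_hat:.4f} nse={nse:.5f}")
```

With a well-matched candidate the weights are nearly constant and the numerical standard error is tiny; a badly mismatched candidate would inflate both, which is exactly the failure mode the paper warns about.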




Discrete Choice Methods with Simulation


Book Description

This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetic draws and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant, Gibbs sampling. The second edition adds chapters on endogeneity and expectation-maximization (EM) algorithms. No other book incorporates all these fields, which have arisen in the past 25 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
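The antithetic-draws idea mentioned above is easy to demonstrate on a toy integrand; the example below is an illustrative sketch, not taken from the book. Pairing each draw z with its mirror -z induces negative correlation between the paired function values, which lowers the variance of the simulated average:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Goal: estimate E[exp(Z)] for Z ~ N(0, 1); the exact value is exp(0.5).
exact = np.exp(0.5)
z = rng.standard_normal(n)

# Plain Monte Carlo terms, one function evaluation per draw.
plain = np.exp(z)

# Antithetic variates: reuse each draw with its sign flipped and
# average the pair. Since exp is monotone, exp(z) and exp(-z) are
# negatively correlated, so the paired terms have smaller variance.
anti = 0.5 * (np.exp(z) + np.exp(-z))

print(f"plain mean={plain.mean():.3f} var={plain.var():.3f}")
print(f"anti  mean={anti.mean():.3f} var={anti.var():.3f}")
```

For this integrand the per-term variance drops from about e^2 - e to (e - 1)^2 / 2, roughly a threefold reduction, at the cost of one extra function evaluation per draw. Halton draws pursue the same goal by spreading points more evenly over the unit interval instead of mirroring them.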




Contemporary Bayesian Econometrics and Statistics


Book Description

Tools to improve decision making in an imperfect world.

This publication provides readers with a thorough understanding of Bayesian analysis that is grounded in the theory of inference and optimal decision making. Contemporary Bayesian Econometrics and Statistics provides readers with state-of-the-art simulation methods and models that are used to solve complex real-world problems. Armed with a strong foundation in both theory and practical problem-solving tools, readers discover how to optimize decision making when faced with problems that involve limited or imperfect data. The book begins by examining the theoretical and mathematical foundations of Bayesian statistics to help readers understand how and why it is used in problem solving. The author then describes how modern simulation methods make Bayesian approaches practical using widely available mathematical applications software. In addition, the author details how models can be applied to specific problems, including:
* Linear models and policy choices
* Modeling with latent variables and missing data
* Time series models and prediction
* Comparison and evaluation of models

The publication has been developed and fine-tuned through a decade of classroom experience, and readers will find the author's approach very engaging and accessible. There are nearly 200 examples and exercises to help readers see how effective use of Bayesian statistics enables them to make optimal decisions. MATLAB and R computer programs are integrated throughout the book. An accompanying Web site provides readers with computer code for many examples and datasets. This publication is tailored for research professionals who use econometrics and similar statistical methods in their work.
With its emphasis on practical problem solving and extensive use of examples and exercises, this is also an excellent textbook for graduate-level students in a broad range of fields, including economics, statistics, the social sciences, business, and public policy.




Mixtures


Book Description

This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods, and feature chapters from the leading experts on the subject. The applications are drawn from a range of scientific disciplines, including biostatistics, computer science, ecology and finance. This area of statistics is important to a range of disciplines, and its methodology attracts interest from researchers in the fields in which it can be applied.
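The EM recipe described above, treating the unknown component labels as missing data, can be sketched for the simplest case, a two-component univariate Gaussian mixture. The simulated data, initial values, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a two-component Gaussian mixture: 30% N(-2, 1), 70% N(3, 1).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Crude starting values for the weight, means, and standard deviations.
pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each point belongs to
    # component 1 -- the "missing" label, estimated rather than observed.
    d0 = (1 - pi) * normal_pdf(x, mu[0], sigma[0])
    d1 = pi * normal_pdf(x, mu[1], sigma[1])
    r = d1 / (d0 + d1)
    # M-step: weighted updates of the mixture parameters.
    pi = r.mean()
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
    sigma = np.array([np.sqrt(np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r)),
                      np.sqrt(np.sum(r * (x - mu[1]) ** 2) / np.sum(r))])

print(f"weights=({1 - pi:.2f}, {pi:.2f}) means=({mu[0]:.2f}, {mu[1]:.2f})")
```

Each iteration increases the observed-data likelihood, and the same E-step/M-step alternation extends directly to more components, multivariate data, and discrete component distributions.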




Mixture Model-Based Classification


Book Description

"This is a great overview of the field of model-based clustering and classification by one of its leading developers. McNicholas provides a resource that I am certain will be used by researchers in statistics and related disciplines for quite some time. The discussion of mixtures with heavy tails and asymmetric distributions will place this text as the authoritative, modern reference in the mixture modeling literature." (Douglas Steinley, University of Missouri) Mixture Model-Based Classification is the first monograph devoted to mixture model-based approaches to clustering and classification. This is both a book for established researchers and newcomers to the field. A history of mixture models as a tool for classification is provided and Gaussian mixtures are considered extensively, including mixtures of factor analyzers and other approaches for high-dimensional data. Non-Gaussian mixtures are considered, from mixtures with components that parameterize skewness and/or concentration, right up to mixtures of multiple scaled distributions. Several other important topics are considered, including mixture approaches for clustering and classification of longitudinal data as well as discussion about how to define a cluster Paul D. McNicholas is the Canada Research Chair in Computational Statistics at McMaster University, where he is a Professor in the Department of Mathematics and Statistics. His research focuses on the use of mixture model-based approaches for classification, with particular attention to clustering applications, and he has published extensively within the field. He is an associate editor for several journals and has served as a guest editor for a number of special issues on mixture models.




Data Science and Machine Learning


Book Description

* Focuses on mathematical understanding
* Presentation is self-contained, accessible, and comprehensive
* Full color throughout
* Extensive list of exercises and worked-out examples
* Many concrete algorithms with actual code




Bayesian Data Analysis, Third Edition


Book Description

Now in its third edition, this classic book is widely considered the leading text on Bayesian methods, lauded for its accessible, practical approach to analyzing data and solving research problems. Bayesian Data Analysis, Third Edition continues to take an applied approach to analysis using up-to-date Bayesian methods. The authors, all leaders in the statistics community, introduce basic concepts from a data-analytic perspective before presenting advanced methods. Throughout the text, numerous worked examples drawn from real applications and research emphasize the use of Bayesian inference in practice.

New to the Third Edition:
* Four new chapters on nonparametric modeling
* Coverage of weakly informative priors and boundary-avoiding priors
* Updated discussion of cross-validation and predictive information criteria
* Improved convergence monitoring and effective sample size calculations for iterative simulation
* Presentations of Hamiltonian Monte Carlo, variational Bayes, and expectation propagation
* New and revised software code

The book can be used in three different ways. For undergraduate students, it introduces Bayesian inference starting from first principles. For graduate students, the text presents effective current approaches to Bayesian modeling and computation in statistics and related fields. For researchers, it provides an assortment of Bayesian methods in applied statistics. Additional materials, including data sets used in the examples, solutions to selected exercises, and software instructions, are available on the book's web page.