Biostatistical Methods


Book Description

Praise for the First Edition: ". . . an excellent textbook . . . an indispensable reference for biostatisticians and epidemiologists." —International Statistical Institute

A new edition of the definitive guide to classical and modern methods of biostatistics. Biostatistics consists of various quantitative techniques that are essential to the description and evaluation of relationships among biologic and medical phenomena. Biostatistical Methods: The Assessment of Relative Risks, Second Edition develops basic concepts and derives an expanded array of biostatistical methods through the application of both classical statistical tools and more modern likelihood-based theories. With its fluid and balanced presentation, the book guides readers through the important statistical methods for the assessment of absolute and relative risks in epidemiologic studies and clinical trials with categorical, count, and event-time data.

Presenting a broad scope of coverage and the latest research on the topic, the author begins with categorical data analysis methods for cross-sectional, prospective, and retrospective studies of binary, polychotomous, and ordinal data. Subsequent chapters present modern model-based approaches that include unconditional and conditional logistic regression; Poisson and negative binomial models for count data; and the analysis of event-time data, including the Cox proportional hazards model and its generalizations. The book now includes an introduction to mixed models with fixed and random effects as well as expanded methods for evaluation of sample size and power. Additional new topics featured in this Second Edition include:

• Establishing equivalence and non-inferiority
• Methods for the analysis of polychotomous and ordinal data, including matched data and the Kappa agreement index
• Multinomial logistic models for polychotomous data and proportional odds models for ordinal data
• Negative binomial models for count data as an alternative to the Poisson model
• GEE models for the analysis of longitudinal repeated measures and multivariate observations

Throughout the book, SAS is utilized to illustrate applications to numerous real-world examples and case studies. A related website features all the data used in examples and problem sets along with the author's SAS routines. Biostatistical Methods, Second Edition is an excellent book for biostatistics courses at the graduate level. It is also an invaluable reference for biostatisticians, applied statisticians, and epidemiologists.
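The book's worked examples rely on SAS; purely as a rough, language-agnostic companion to the kind of relative risk calculation it covers, the Python sketch below (not taken from the book; the 2x2 counts are made up for illustration) estimates a relative risk and a large-sample 95% confidence interval from a prospective study table.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table from a prospective study (illustrative counts):
#              event   no event
# exposed       a=30      b=70
# unexposed     c=15      d=85
a, b, c, d = 30, 70, 15, 85

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Wald confidence interval for the relative risk on the log scale
# (standard large-sample approximation)
se_log_rr = np.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
z = stats.norm.ppf(0.975)
ci = np.exp(np.log(rr) + np.array([-z, z]) * se_log_rr)

print(f"Relative risk: {rr:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```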




Applied Linear Regression


Book Description

Master linear regression techniques with a new edition of a classic text.

Reviews of the Second Edition:

"I found it enjoyable reading and so full of interesting material that even the well-informed reader will probably find something new . . . a necessity for all of those who do linear regression." —Technometrics, February 1987

"Overall, I feel that the book is a valuable addition to the now considerable list of texts on applied linear regression. It should be a strong contender as the leading text for a first serious course in regression analysis." —American Scientist, May–June 1987

Applied Linear Regression, Third Edition has been thoroughly updated to help students master the theory and applications of linear regression modeling. Focusing on model building, assessing fit and reliability, and drawing conclusions, the text demonstrates how to develop estimation, confidence, and testing procedures primarily through the use of least squares regression. To facilitate quick learning, the Third Edition stresses the use of graphical methods in an effort to find appropriate models and to better understand them. In that spirit, most analyses and homework problems use graphs for the discovery of structure as well as for the summarization of results. The Third Edition incorporates new material reflecting the latest advances, including:

• Use of smoothers to summarize a scatterplot
• Box-Cox and graphical methods for selecting transformations
• Use of the delta method for inference about complex combinations of parameters
• Computationally intensive methods and simulation, including the bootstrap method
• Expanded chapters on nonlinear and logistic regression
• Completely revised chapters on multiple regression, diagnostics, and generalizations of regression

Readers will also find helpful pedagogical tools and learning aids, including:

• More than 100 exercises, most based on interesting real-world data
• Web primers demonstrating how to use standard statistical packages, including R, S-Plus®, SPSS®, SAS®, and JMP®, to work all the examples and exercises in the text
• A free online library for R and S-Plus that makes the methods discussed in the book easy to use

With its focus on graphical methods and analysis, coupled with many practical examples and exercises, this is an excellent textbook for upper-level undergraduates and graduate students, who will quickly learn how to use linear regression analysis techniques to solve and gain insight into real-life problems.
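As a small outside illustration of the computationally intensive methods mentioned above, here is a hedged Python sketch (simulated data, not an example from the text, which instead supplies R, S-Plus, SPSS, SAS, and JMP primers): it fits a line by least squares and attaches a case-resampling bootstrap percentile interval to the slope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data standing in for a real example
x = rng.uniform(0, 10, size=60)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=60)

# Ordinary least squares fit of y on x (intercept plus slope)
X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Case-resampling bootstrap for the slope coefficient
n_boot = 2000
n = len(y)
slopes = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, size=n)
    slopes[i] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0][1]

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope = {beta_hat[1]:.3f}, 95% bootstrap percentile CI ({lo:.3f}, {hi:.3f})")
```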




Preparing for the Worst


Book Description

A timely approach to downside risk and its role in stock market investments. When dealing with the topic of risk analysis, most books on investments treat downside and upside risk equally. Preparing for the Worst takes an entirely novel approach by focusing on downside risk and explaining how to incorporate it into investment decisions. Highlighting this asymmetry of the stock market, the authors describe how existing theories miss the downside and follow with explanations of how it can be included. Various techniques for calculating downside risk are demonstrated.

This book presents the latest ideas in the field from the ground up, making the discussion accessible to mathematicians and statisticians interested in applications in finance, as well as to finance professionals who may not have a mathematical background. An invaluable resource for anyone wishing to explore the critical issues of finance, portfolio management, and securities pricing, this book:

• Incorporates Value at Risk into the theoretical discussion
• Uses many examples to illustrate downside risk in U.S., international, and emerging market investments
• Addresses downside risk arising from fraud and corruption
• Includes step-by-step instructions on how to implement the methods introduced in this book
• Offers advice on how to avoid pitfalls in calculations and computer programming
• Provides software use information and tips
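To make the downside-risk idea concrete, the following Python sketch computes a historical Value at Risk, which the book incorporates into its theoretical discussion, and a below-target semideviation, one common downside-risk measure. The return series, target, and confidence level are illustrative assumptions, not the authors' examples or code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns standing in for a real return series
returns = rng.normal(loc=0.0004, scale=0.012, size=1000)

# Historical (empirical) Value at Risk at the 95% level:
# the loss threshold exceeded on only 5% of days
var_95 = -np.percentile(returns, 5)

# Downside semideviation: dispersion computed from below-target returns only,
# so upside variability does not count as risk
target = 0.0
downside = np.minimum(returns - target, 0.0)
semideviation = np.sqrt(np.mean(downside ** 2))

print(f"95% historical VaR: {var_95:.4f}")
print(f"Downside semideviation: {semideviation:.4f}")
```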




Generalized Inference in Repeated Measures


Book Description

A complete guide to powerful and practical statistical modeling using MANOVA. Numerous statistical applications are time dependent. Virtually all biomedical, pharmaceutical, and industrial experiments demand repeated measurements over time. The same holds true for market research and analysis. Yet conventional methods, such as repeated measures analysis of variance (RM ANOVA), do not always yield exact solutions, obliging practitioners to settle for asymptotic results and approximate solutions. Generalized inference in multivariate analysis of variance (MANOVA), mixed models, and growth curves offers exact methods of data analysis under milder conditions without deviating from the conventional philosophy of statistical inference.

Generalized Inference in Repeated Measures is a concise, self-contained guide to the use of these innovative solutions, presenting them as extensions of, rather than alternatives to, classical methods of statistical evaluation. Requiring minimal prior knowledge of statistical concepts in the evaluation of linear models, the book provides exact parametric methods for each application considered, with solutions presented in terms of generalized p-values. Coverage includes:

• New concepts in statistical inference, with special focus on generalized p-values and generalized confidence intervals
• One-way and two-way ANOVA, in cases of equal and unequal variances
• Basic and higher-way mixed models, including testing and estimation of fixed effects and variance components
• Multivariate populations, including basic inference, comparison, and analysis of variance
• Basic, widely used repeated measures models, including crossover designs and growth curves

With a comprehensive set of formulas, illustrative examples, and exercises in each chapter, Generalized Inference in Repeated Measures is ideal as both a comprehensive reference for research professionals and a text for students.
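Because generalized p-values are typically evaluated by Monte Carlo rather than from a closed-form reference distribution, a small sketch may help fix ideas. The Python code below is not an example from the book: it uses illustrative summary statistics and one standard construction of a generalized pivotal quantity for the difference of two normal means under unequal variances, built from independent t variates, to approximate a generalized confidence interval and a two-sided generalized p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Observed summary statistics from two groups with unequal variances
# (hypothetical numbers used only for illustration)
n1, mean1, sd1 = 12, 5.4, 1.1
n2, mean2, sd2 = 15, 4.6, 2.3

# Generalized pivotal quantity for mu1 - mu2: replace each group mean's
# sampling error by an independent t_{n-1} variate scaled by s/sqrt(n)
n_draws = 100_000
t1 = stats.t.rvs(df=n1 - 1, size=n_draws, random_state=rng)
t2 = stats.t.rvs(df=n2 - 1, size=n_draws, random_state=rng)
pivot = (mean1 - mean2) - (t1 * sd1 / np.sqrt(n1) - t2 * sd2 / np.sqrt(n2))

# Generalized confidence interval and a two-sided generalized p-value
ci = np.percentile(pivot, [2.5, 97.5])
p_value = 2 * min(np.mean(pivot <= 0), np.mean(pivot >= 0))

print(f"95% generalized CI for mu1 - mu2: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"Two-sided generalized p-value: {p_value:.3f}")
```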




Applied Regression Including Computing and Graphics


Book Description

A step-by-step guide to computing and graphics in regression analysis. In this unique book, leading statisticians Dennis Cook and Sanford Weisberg expertly blend regression fundamentals and cutting-edge graphical techniques. They combine and update most of the material from their widely used earlier work, An Introduction to Regression Graphics, and Weisberg's Applied Linear Regression; incorporate the latest in statistical graphics, computing, and regression models; and wind up with a modern, fully integrated approach to one of the most important tools of data analysis. In 23 concise, easy-to-digest chapters, the authors present:

• A wealth of simple 2D and 3D graphical techniques, helping visualize results through graphs
• An improved version of the user-friendly Arc software, which lets readers promptly implement new ideas
• Complete coverage of regression models, including logistic regression and generalized linear models
• More than 300 figures, easily reproducible on the computer
• Numerous examples and problems based on real data
• A companion Web site featuring free software and advice, available at www.wiley.com/mathematics

Accessible, self-contained, and fully referenced, Applied Regression Including Computing and Graphics assumes only a first course in basic statistical methods and provides a bona fide user manual for the Arc software. It is an invaluable resource for anyone interested in learning how to analyze regression problems with confidence and depth.
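Arc is the book's own software environment; purely as an outside illustration of the graphical spirit described above, the following Python sketch (simulated data, using matplotlib and statsmodels rather than Arc) overlays a least squares line and a lowess smoother on a scatterplot so the data can suggest whether a straight-line mean function is adequate.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)

# Simulated data standing in for one of the book's real data sets
x = rng.uniform(0, 10, size=100)
y = 1.0 + 0.8 * x + 0.05 * x ** 2 + rng.normal(scale=1.0, size=100)

# Least squares line (slope, intercept)
b1, b0 = np.polyfit(x, y, 1)

# Lowess smoother to reveal structure a straight line may miss
smooth = lowess(y, x, frac=0.5)

plt.scatter(x, y, s=15, alpha=0.6, label="data")
grid = np.linspace(x.min(), x.max(), 100)
plt.plot(grid, b0 + b1 * grid, label="OLS fit")
plt.plot(smooth[:, 0], smooth[:, 1], label="lowess smoother")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```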




Records


Book Description

The first and only comprehensive guide to modern record theory and its applications. Although it is often thought of as a special topic in order statistics, records form a unique area, independent of the study of sample extremes. Interest in records has increased steadily over the years since Chandler formulated the theory of records in 1952. Numerous applications of them have been developed in such far-flung fields as meteorology, sports analysis, hydrology, and stock market analysis, to name just a few. And the literature on the subject currently comprises papers and journal articles numbering in the hundreds. Which is why it is so nice to have this book devoted exclusively to this lively area of statistics.

Written by an exceptionally well-qualified author team, Records presents a comprehensive treatment of record theory and its applications in a variety of disciplines. With the help of a multitude of fascinating examples, Professors Arnold, Balakrishnan, and Nagaraja help readers quickly master basic and advanced record value concepts and procedures, from the classical record value model to random and multivariate record models. The book follows a rational textbook format, featuring witty and insightful chapter introductions that help smooth transitions from one topic to another and challenging chapter-end exercises, which expand on the material covered. An extensive bibliography and numerous references throughout the text specify sources for further readings on relevant topics.

Records is a valuable professional resource for probabilists and statisticians, in addition to applied statisticians, meteorologists, hydrologists, market analysts, and sports analysts. It also makes an excellent primary text for courses in record theory and a supplement to order statistics courses.
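Record sequences are easy to simulate, which makes the basic theory concrete: for i.i.d. continuous observations, the expected number of upper records among the first n values is the harmonic number 1 + 1/2 + ... + 1/n. The short Python sketch below (not from the book) extracts the records from a simulated sequence and compares their count with that expectation.

```python
import numpy as np

rng = np.random.default_rng(4)

def records(sequence):
    """Return the (index, value) pairs of upper records in a sequence."""
    best = -np.inf
    out = []
    for i, v in enumerate(sequence):
        if v > best:
            out.append((i, v))
            best = v
    return out

n = 1000
x = rng.standard_normal(n)
recs = records(x)

# Expected number of records among n i.i.d. continuous observations:
# the harmonic number 1 + 1/2 + ... + 1/n
expected = np.sum(1.0 / np.arange(1, n + 1))
print(f"observed records: {len(recs)}, expected: {expected:.2f}")
```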




Modelling Under Risk and Uncertainty


Book Description

Modelling has permeated virtually all areas of industrial, environmental, economic, biomedical, and civil engineering, yet the use of models for decision-making raises a number of questions to which this book is dedicated:

• How uncertain is my model? Is it truly valuable for supporting decision-making? What kinds of decisions can it truly support, and how should residual uncertainty be handled?
• How refined should the mathematical description be, given the real limitations of the data?
• Could the uncertainty be reduced through more data, greater modelling investment, or a larger computational budget? Should it be reduced now or later?
• How robust are the analysis and the computational methods involved? Should, or could, those methods be made more robust?
• Does it make sense to handle uncertainty, risk, lack of knowledge, variability, and errors within a single framework?
• How reasonable is the choice of probabilistic modelling for rare events? How rare are the events to be considered? How far does it make sense to go in handling extreme events and elaborating confidence figures?
• Can expert or phenomenological knowledge be used to tighten the probabilistic figures? Are there related domains that could provide models or inspiration for my problem?

Written by a leader at the crossroads of industry, academia, and engineering, and based on decades of multi-disciplinary field experience, Modelling Under Risk and Uncertainty gives a self-consistent introduction to the methods involved in any type of model development that acknowledges the inevitable uncertainty and associated risks. It goes beyond the "black-box" view that some analysts, modellers, risk experts, or statisticians take of the underlying phenomenology of environmental or industrial processes, without sufficiently valuing their physical properties and inner modelling potential or challenging the practical plausibility of the mathematical hypotheses; conversely, it also aims to draw environmental and engineering modellers toward better handling of model confidence issues, through finer statistical and risk analysis material that takes advantage of advanced scientific computing, in order to face new regulations departing from deterministic design and to support robust decision-making.

Modelling Under Risk and Uncertainty:

• Addresses a concern of growing interest for large industries, environmentalists, and analysts: robust modelling for decision-making in complex systems
• Gives new insights into the peculiar mathematical and computational challenges generated by recent industrial safety and environmental control analysis for rare events
• Implements decision-theory choices that differentiate or aggregate the dimensions of risk/aleatory and epistemic uncertainty through a consistent multi-disciplinary set of statistical estimation, physical modelling, robust computation, and risk analysis
• Provides an original review of advanced inverse probabilistic approaches for model identification, calibration, and data assimilation, key to digesting fast-growing multi-physical data acquisition
• Is illustrated with a single pedagogical example crossing natural risk, engineering, and economics, developed throughout the book to aid reading and understanding
• Supports Master's/PhD-level courses as well as advanced tutorials for professional training

Analysts and researchers in numerical modelling, applied statistics, scientific computing, reliability, advanced engineering, natural risk, or environmental science will benefit from this book.
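As a toy illustration of uncertainty propagation for a rare event, in the spirit of the book's running example but not reproducing it, the following Python sketch pushes assumed input distributions through a deliberately simplified water-level model and estimates an exceedance probability together with its Monte Carlo sampling error. Every distribution, coefficient, and threshold here is a placeholder assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy flood-risk style model: water level = riverbed elevation + flow-driven rise.
# All distributions and the threshold below are illustrative assumptions.
n = 200_000
flow = rng.gumbel(loc=1000.0, scale=400.0, size=n)       # annual peak flow
strickler = rng.normal(loc=30.0, scale=5.0, size=n)      # friction coefficient
level = 50.0 + (np.maximum(flow, 0.0) / (strickler * 10.0)) ** 0.6

threshold = 55.0                      # assumed dike crest level
exceed = level > threshold
p_hat = exceed.mean()

# Binomial standard error of the Monte Carlo estimate of the rare-event probability
se = np.sqrt(p_hat * (1 - p_hat) / n)
print(f"P(level > {threshold}) = {p_hat:.5f} +/- {1.96 * se:.5f} (approx. 95% CI)")
```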




Markov Processes


Book Description

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.

"[A]nyone who works with Markov processes whose state space is uncountably infinite will need this most impressive book as a guide and reference." —American Scientist

"There is no question but that space should immediately be reserved for [this] book on the library shelf. Those who aspire to mastery of the contents should also reserve a large number of long winter evenings." —Zentralblatt für Mathematik und ihre Grenzgebiete/Mathematics Abstracts

"Ethier and Kurtz have produced an excellent treatment of the modern theory of Markov processes that [is] useful both as a reference work and as a graduate textbook." —Journal of Statistical Physics

Markov Processes presents several different approaches to proving weak approximation theorems for Markov processes, emphasizing the interplay of methods of characterization and approximation. Martingale problems for general Markov processes are systematically developed for the first time in book form. Useful to the professional as a reference and suitable for the graduate student as a text, this volume features a table of the interdependencies among the theorems, an extensive bibliography, and end-of-chapter problems.
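Weak approximation can be illustrated in a few lines: a Donsker-type result says that a suitably rescaled symmetric random walk converges weakly to Brownian motion. The Python sketch below is an elementary illustration only, far simpler than the operator-semigroup and martingale-problem machinery the book develops: it simulates many rescaled walks and compares their time-1 values with the standard normal limit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Rescaled symmetric random walk: value at time 1 is S_n / sqrt(n),
# which converges weakly to a standard normal as n grows
n_steps = 10_000
n_paths = 5_000
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
endpoints = steps.sum(axis=1) / np.sqrt(n_steps)

# Compare the simulated endpoint distribution with the N(0, 1) limit
res = stats.kstest(endpoints, "norm")
print(f"KS distance to N(0,1): {res.statistic:.4f} (p = {res.pvalue:.3f})")
```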




Generalized, Linear, and Mixed Models


Book Description

An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed models is maintained throughout, and each chapter illustrates how these models are applicable in a wide array of contexts. In addition, a discussion of general methods for the analysis of such models is presented with an emphasis on the method of maximum likelihood for the estimation of parameters. The authors also provide comprehensive coverage of the latest statistical models for correlated, non-normally distributed data. Thoroughly updated to reflect the latest developments in the field, the Second Edition features:

• A new chapter that covers omitted covariates, incorrect random effects distribution, correlation of covariates and random effects, and robust variance estimation
• A new chapter that treats shared random effects models, latent class models, and properties of models
• A revised chapter on longitudinal data, which now includes a discussion of generalized linear models, modern advances in longitudinal data analysis, and the use of between- and within-covariate decompositions
• Expanded coverage of marginal versus conditional models
• Numerous new and updated examples

With its accessible style and wealth of illustrative exercises, Generalized, Linear, and Mixed Models, Second Edition is an ideal book for courses on generalized linear and mixed models at the upper-undergraduate and beginning-graduate levels. It also serves as a valuable reference for applied statisticians, industrial practitioners, and researchers.
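For readers who want to see a mixed model in software terms, here is a hedged Python sketch using statsmodels; the library choice and the simulated longitudinal data are illustrative assumptions, not material from the book. It fits a linear mixed model with a fixed time effect and a subject-level random intercept by restricted maximum likelihood (the statsmodels default).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated longitudinal data: 40 subjects, 5 repeated measures each,
# with a subject-level random intercept on top of a fixed slope in time
n_subj, n_obs = 40, 5
subject = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
rand_int = rng.normal(scale=1.5, size=n_subj)[subject]
y = 2.0 + 0.7 * time + rand_int + rng.normal(scale=1.0, size=n_subj * n_obs)

data = pd.DataFrame({"y": y, "time": time, "subject": subject})

# Linear mixed model: fixed effect of time, random intercept per subject
model = smf.mixedlm("y ~ time", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```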




Using the Weibull Distribution


Book Description

Understand and utilize the latest developments in Weibull inferential methods. While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution and its statistical and probabilistic basis, providing a wealth of material that is not available in the current literature. The book begins by outlining the fundamental probability and statistical concepts that serve as a foundation for subsequent topics of coverage, including:

• Optimum burn-in, age and block replacement, warranties and renewal theory
• Exact inference in Weibull regression
• Goodness of fit testing and distinguishing the Weibull from the lognormal
• Inference for the three-parameter Weibull

Throughout the book, a wealth of real-world examples showcases the discussed topics, and each chapter concludes with a set of exercises, allowing readers to test their understanding of the presented material. In addition, a related website features the author's own software for implementing the discussed analyses, along with a set of modules written in Mathcad® and additional graphical interface software for performing simulations. With its numerous hands-on examples, exercises, and software applications, Using the Weibull Distribution is an excellent book for courses on quality control and reliability engineering at the upper-undergraduate and graduate levels. The book also serves as a valuable reference for engineers, scientists, and business analysts who gather and interpret data that follows the Weibull distribution.
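As a minimal illustration of Weibull fitting, and not a substitute for the author's Mathcad modules, the following Python sketch simulates failure times, obtains two-parameter maximum likelihood estimates with the location fixed at zero, and evaluates the fitted reliability function at a mission time; all numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Simulated failure times standing in for observed life-test data
true_shape, true_scale = 1.8, 1000.0
times = stats.weibull_min.rvs(true_shape, scale=true_scale, size=200, random_state=rng)

# Two-parameter Weibull maximum likelihood fit (location fixed at zero)
shape_hat, loc, scale_hat = stats.weibull_min.fit(times, floc=0)

# Reliability (survival) function R(t) = exp(-(t/scale)^shape) at a mission time
t = 500.0
reliability = np.exp(-(t / scale_hat) ** shape_hat)

print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.1f}")
print(f"estimated reliability at t = {t}: {reliability:.3f}")
```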