Sensitivity Analysis in Linear Regression


Book Description

This book treats linear regression diagnostics as a tool for applying linear regression models to real-life data. The presentation makes extensive use of examples to illustrate the theory. The book assesses the effect of measurement errors on the estimated coefficients, which is not accounted for in a standard least squares estimate but is important where regression coefficients are used to apportion effects due to different variables. It also assesses, both qualitatively and numerically, the robustness of the regression fit.
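
As a rough illustration of the measurement-error point above (not drawn from the book; the data are simulated for the purpose), the sketch below shows how noise in a predictor attenuates the ordinary least squares slope, which matters when coefficients are used to apportion effects among variables:

```python
# A minimal sketch (not from the book) illustrating how measurement error in a
# predictor biases the ordinary least squares slope toward zero (attenuation).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)             # true predictor, variance 1
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)   # true slope is 2
x_obs = x_true + rng.normal(0.0, 1.0, n)     # observed with measurement error, variance 1

# OLS slope computed from the noisy predictor
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
# Classical attenuation factor: var(x_true) / (var(x_true) + var(error)) = 0.5
print(f"estimated slope: {slope:.3f}  (true slope 2.0, expected ~1.0 after attenuation)")
```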




Sensitivity Analysis in Linear Systems


Book Description

A text surveying perturbation techniques and sensitivity analysis of linear systems is an ambitious undertaking, given the lack of basic comprehensive texts on the subject. Wide-ranging coverage of the topic is still missing, despite numerous monographs dealing with specific topics that are generally of use only to a narrow audience. In fact, most works approach this subject from the numerical analysis point of view. Researchers in that field have been the most concerned with the topic, although engineers and scholars in all fields may find it equally interesting. One can state, without great exaggeration, that a great deal of engineering work is devoted to testing a system's sensitivity to changes in design parameters. As a rule, high-sensitivity elements are those which should be designed with the utmost care. On the other hand, because the mathematical model used in the design process is usually idealized and often inaccurately formulated, unforeseen alterations may cause the system to behave in a slightly different manner than intended. Since sensitivity analysis starts from the assumption of such a discrepancy between the ideal and the actual system, it can help the engineer find ways to minimize that discrepancy.
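
As a small, hedged illustration of the kind of question the book addresses (the matrix and perturbation below are invented), the sketch compares the actual change in the solution of a linear system A x = b under a perturbation of A with the first-order condition-number bound:

```python
# A minimal sketch (not from the book) of the sensitivity of a linear system
# A x = b to a perturbation dA, compared with the first-order bound
#   ||dx|| / ||x||  <~  cond(A) * ||dA|| / ||A||.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.99],
              [0.99, 0.98]])               # nearly singular, hence highly sensitive
b = np.array([1.0, 1.0])

x = np.linalg.solve(A, b)
dA = 1e-6 * rng.standard_normal(A.shape)   # small random perturbation of A
x_pert = np.linalg.solve(A + dA, b)

rel_change = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
bound = np.linalg.cond(A) * np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
print(f"relative change in solution:        {rel_change:.2e}")
print(f"first-order condition-number bound: {bound:.2e}")
```

In this toy example the nearly singular matrix is the "high-sensitivity element": its large condition number warns the designer that small modelling inaccuracies in A can produce large changes in the computed solution.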




Secondary Analysis of Electronic Health Records


Book Description

This book trains the next generation of scientists, representing different disciplines, to leverage the data generated during routine patient care. It formulates a more complete lexicon of evidence-based recommendations and supports shared, ethical decision making by doctors with their patients. Diagnostic and therapeutic technologies continue to evolve rapidly, and both individual practitioners and clinical teams face increasingly complex ethical decisions. Unfortunately, the current state of medical knowledge does not provide the guidance to make the majority of clinical decisions on the basis of evidence. The present research infrastructure is inefficient and frequently produces unreliable results that cannot be replicated. Even randomized controlled trials (RCTs), the traditional gold standard of the research reliability hierarchy, are not without limitations: they can be costly, labor intensive, and slow, and can return results that do not generalize to every patient population. Furthermore, many pertinent but unresolved clinical and medical systems issues do not seem to have attracted the interest of the research enterprise, which has come to focus instead on cellular and molecular investigations and single-agent (e.g., a drug or device) effects. For clinicians, the end result is a bit of a “data desert” when it comes to making decisions. The new research infrastructure proposed in this book will help the medical profession make ethically sound and well-informed decisions for their patients.




Developing a Protocol for Observational Comparative Effectiveness Research: A User's Guide


Book Description

This User’s Guide is a resource for investigators and stakeholders who develop and review observational comparative effectiveness research protocols. It explains how to (1) identify key considerations and best practices for research design; (2) build a protocol based on these standards and best practices; and (3) judge the adequacy and completeness of a protocol. Eleven chapters cover all aspects of research design, including: developing study objectives, defining and refining study questions, addressing the heterogeneity of treatment effect, characterizing exposure, selecting a comparator, defining and measuring outcomes, and identifying optimal data sources. Checklists of guidance and key considerations for protocols are provided at the end of each chapter. The User’s Guide was created by researchers affiliated with AHRQ’s Effective Health Care Program, particularly those who participated in AHRQ’s DEcIDE (Developing Evidence to Inform Decisions About Effectiveness) program. Chapters were subject to multiple internal and external independent reviews. For more information, please consult the Agency website: www.effectivehealthcare.ahrq.gov.




Sensitivity Analysis in Practice


Book Description

Sensitivity analysis should be considered a prerequisite for statistical model building in any scientific discipline where modelling takes place. For a non-expert, choosing the method of analysis for their model is complex and depends on a number of factors. This book guides the non-expert through their problem in order to enable them to choose and apply the most appropriate method. It offers a review of the state of the art in sensitivity analysis, and is suitable for a wide range of practitioners. It focuses on the use of SIMLAB, a widely distributed, freely available sensitivity analysis software package developed by the authors, for solving problems in sensitivity analysis of statistical models. Other key features:
- Provides an accessible overview of the current most widely used methods for sensitivity analysis.
- Opens with a detailed worked example to explain the motivation behind the book.
- Includes a range of examples to help illustrate the concepts discussed.
- Focuses on implementation of the methods in the SIMLAB software.
- Contains a large number of references to sources for further reading.
- Authored by the leading authorities on sensitivity analysis.
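
SIMLAB itself is not reproduced here; as a rough stand-in (the test function and sample size are illustrative assumptions), the sketch below estimates first-order Sobol' indices, one of the variance-based methods the book covers, for the standard Ishigami test function with a plain Monte Carlo estimator:

```python
# A minimal sketch (SIMLAB itself is not shown): Monte Carlo estimation of
# first-order Sobol' sensitivity indices for the Ishigami test function,
# using the Saltelli-style estimator mean(f(B)*(f(AB_i)-f(A))) normalised by
# the total output variance.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard sensitivity-analysis test function on [-pi, pi]^3
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(42)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))

fA, fB = ishigami(A), ishigami(B)
var_total = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                 # replace column i of A with column i of B
    S_i = np.mean(fB * (ishigami(AB_i) - fA)) / var_total
    print(f"first-order index S{i + 1} ~ {S_i:.2f}")
# Analytical values for a=7, b=0.1: S1 ~ 0.31, S2 ~ 0.44, S3 = 0.
```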




The Prevention and Treatment of Missing Data in Clinical Trials


Book Description

Randomized clinical trials are the primary tool for evaluating new medical interventions. Randomization provides for a fair comparison between treatment and control groups, balancing out, on average, distributions of known and unknown factors among the participants. Unfortunately, these studies often lack a substantial percentage of data. This missing data reduces the benefit provided by the randomization and introduces potential biases in the comparison of the treatment groups. Missing data can arise for a variety of reasons, including the inability or unwillingness of participants to meet appointments for evaluation. And in some studies, some or all of data collection ceases when participants discontinue study treatment. Existing guidelines for the design and conduct of clinical trials, and the analysis of the resulting data, provide only limited advice on how to handle missing data. Thus, approaches to the analysis of data with an appreciable amount of missing values tend to be ad hoc and variable. The Prevention and Treatment of Missing Data in Clinical Trials concludes that a more principled approach to design and analysis in the presence of missing data is both needed and possible. Such an approach needs to focus on two critical elements: (1) careful design and conduct to limit the amount and impact of missing data and (2) analysis that makes full use of information on all randomized participants and is based on careful attention to the assumptions about the nature of the missing data underlying estimates of treatment effects. In addition to the highest priority recommendations, the book offers more detailed recommendations on the conduct of clinical trials and techniques for analysis of trial data.
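
The simulation below is not taken from the report; it is a minimal sketch, under invented assumptions, of why attention to the missingness mechanism matters: when dropout depends on treatment arm and a baseline covariate, a naive complete-case comparison of means is biased, while an analysis that also uses the baseline information (here a simple regression adjustment) recovers the true effect under the missing-at-random assumption:

```python
# A small simulation (assumptions are illustrative only): missingness that
# depends on treatment arm and a baseline covariate biases a complete-case
# comparison of means, while ordinary least squares on the completers with
# the covariate included recovers the true treatment effect.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
treat = rng.integers(0, 2, n)                            # randomized 1:1
x = rng.normal(0.0, 1.0, n)                              # baseline severity
y = 1.0 * treat + 2.0 * x + rng.normal(0.0, 1.0, n)      # true treatment effect = 1.0

# Sicker control participants are more likely to drop out
# (missing at random given arm and baseline severity).
p_miss = 1.0 / (1.0 + np.exp(-(x - 0.5))) * (treat == 0)
observed = rng.random(n) > p_miss

# Complete-case difference in means: biased upward here.
cc = y[observed & (treat == 1)].mean() - y[observed & (treat == 0)].mean()

# OLS on completers adjusting for the baseline covariate: unbiased under MAR.
X = np.column_stack([np.ones(observed.sum()), treat[observed], x[observed]])
beta = np.linalg.lstsq(X, y[observed], rcond=None)[0]

print(f"complete-case difference in means: {cc:.2f}")
print(f"covariate-adjusted estimate:       {beta[1]:.2f}  (true effect 1.0)")
```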




Global Sensitivity Analysis


Book Description

Complex mathematical and computational models are used in all areas of society and technology, and yet model-based science is increasingly contested or refuted, especially when models are applied to controversial themes in domains such as health, the environment or the economy. More stringent standards of proof are demanded from model-based numbers, especially when these numbers represent potential financial losses, threats to human health or the state of the environment. Quantitative sensitivity analysis is generally agreed to be one such standard. Mathematical models are good at mapping assumptions into inferences. A modeller makes assumptions about laws pertaining to the system, about its status and a plethora of other, often arcane, system variables and internal model settings. To what extent can we rely on the model-based inference when most of these assumptions are fraught with uncertainties? Global Sensitivity Analysis offers an accessible treatment of such problems via quantitative sensitivity analysis, beginning with first principles and guiding the reader through the full range of recommended practices with a rich set of solved exercises. The text explains the motivation for sensitivity analysis, reviews the required statistical concepts, and provides a guide to potential applications. The book:
- Provides a self-contained treatment of the subject, allowing readers to learn and practice global sensitivity analysis without further materials.
- Presents ways to frame the analysis, interpret its results, and avoid potential pitfalls.
- Features numerous exercises and solved problems to help illustrate the applications.
- Is authored by leading sensitivity analysis practitioners, combining a range of disciplinary backgrounds.
Postgraduate students and practitioners in a wide range of subjects, including statistics, mathematics, engineering, physics, chemistry, environmental sciences, biology, toxicology, actuarial sciences, and econometrics, will find much of use here. This book will prove equally valuable to engineers working on risk analysis and to financial analysts concerned with pricing and hedging.
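
As a brief, hedged illustration of "mapping assumptions into inferences" (the model and input distributions below are invented), the sketch propagates uncertain inputs through a simple model by Monte Carlo and ranks them with standardized regression coefficients, a common screening measure for near-linear models:

```python
# A minimal sketch (not from the book): propagate uncertain inputs through a
# simple model by Monte Carlo, then rank them with standardized regression
# coefficients (SRC).
import numpy as np

def model(z1, z2, z3):
    # Illustrative model: output depends strongly on z1, weakly on z3
    return 3.0 * z1 + 1.0 * z2 + 0.1 * z3 ** 2

rng = np.random.default_rng(3)
n = 50_000
z1 = rng.normal(0.0, 1.0, n)      # assumed input uncertainties
z2 = rng.normal(0.0, 2.0, n)
z3 = rng.uniform(-1.0, 1.0, n)
y = model(z1, z2, z3)

inputs = {"z1": z1, "z2": z2, "z3": z3}
for name, z in inputs.items():
    # SRC_i = beta_i * std(z_i) / std(y); for independent inputs and a linear
    # model, SRC_i**2 approximates the share of output variance due to z_i.
    beta = np.cov(z, y)[0, 1] / np.var(z)
    src = beta * np.std(z) / np.std(y)
    print(f"{name}: SRC = {src:+.2f}, SRC^2 = {src**2:.2f}")
```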




Regression Analysis by Example


Book Description

The essentials of regression analysis through practical applications. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgement. Regression Analysis by Example, Fourth Edition has been expanded and thoroughly updated to reflect recent advances in the field. The emphasis continues to be on exploratory data analysis rather than statistical theory. The book offers in-depth treatment of regression diagnostics, transformation, multicollinearity, logistic regression, and robust regression. This new edition features the following enhancements:
- Chapter 12, Logistic Regression, is expanded to reflect the increased use of logit models in statistical analysis.
- A new chapter entitled Further Topics discusses advanced areas of regression analysis.
- Reorganized, expanded, and upgraded exercises appear at the end of each chapter.
- A fully integrated Web page provides data sets.
- Numerous graphical displays highlight the significance of visual appeal.
Regression Analysis by Example, Fourth Edition is suitable for anyone with an understanding of elementary statistics. Methods of regression analysis are clearly demonstrated, and examples containing the types of irregularities commonly encountered in the real world are provided. Each example isolates one or two techniques and features detailed discussions of the techniques themselves, the required assumptions, and the evaluated success of each technique. The methods described throughout the book can be carried out with most of the currently available statistical software packages, such as the software package R. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
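
As a minimal sketch of the kind of logit model treated in the expanded logistic regression chapter (the data and coefficients are invented, and Python's statsmodels package stands in for R or whichever statistical package the reader prefers):

```python
# A minimal sketch (data invented for illustration): fitting a logit model
# by maximum likelihood with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5_000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
# True model: log-odds = -0.5 + 1.2*x1 - 0.8*x2
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x1 - 0.8 * x2)))
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([x1, x2]))  # intercept plus two predictors
result = sm.Logit(y, X).fit(disp=0)             # maximum likelihood fit
print(result.params)                            # should be close to [-0.5, 1.2, -0.8]
```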




TORUS 1 - Toward an Open Resource Using Services


Book Description

This book, presented in three volumes, examines environmental disciplines in relation to major players in contemporary science: Big Data, artificial intelligence and cloud computing. Today, there is a real sense of urgency regarding the evolution of computer technology, the ever-increasing volume of data, threats to our climate and the sustainable development of our planet. As such, we need to reduce technology just as much as we need to bridge the global socio-economic gap between the North and South, and between universal free access to data (open data) and free software (open source). In this book, we pay particular attention to certain environmental subjects in order to enrich our understanding of cloud computing. These subjects are: erosion; urban air pollution and atmospheric pollution in Southeast Asia; melting permafrost (causing the accelerated release of soil organic carbon into the atmosphere); alert systems for environmental hazards (such as forest fires, prospective modeling of socio-spatial practices and land use); and web fountains of geographical data. Finally, this book asks the question: in order to find a pattern in the data, how do we move from a traditional computing model-based world to pure mathematical research? After a thorough examination of this topic, we conclude that this goal is both transdisciplinary and achievable.