The Behavioral and Social Sciences


Book Description

This volume explores the scientific frontiers and leading edges of research across the fields of anthropology, economics, political science, psychology, sociology, history, business, education, geography, law, and psychiatry, as well as the newer, more specialized areas of artificial intelligence, child development, cognitive science, communications, demography, linguistics, and management and decision science. It includes recommendations concerning new resources, facilities, and programs that may be needed over the next several years to ensure rapid progress and provide a high level of returns to basic research.




An Introduction to Data Analysis and Uncertainty Quantification for Inverse Problems


Book Description

Inverse problems are found in many applications, such as medical imaging, engineering, astronomy, and geophysics, among others. To solve an inverse problem is to recover an object from noisy, usually indirect observations. Solutions to inverse problems are subject to many potential sources of error introduced by approximate mathematical models, regularization methods, numerical approximations for efficient computations, noisy data, and limitations in the number of observations; thus it is important to include an assessment of the uncertainties as part of the solution. Such assessment is interdisciplinary by nature, as it requires, in addition to knowledge of the particular application, methods from applied mathematics, probability, and statistics. This book bridges applied mathematics and statistics by providing a basic introduction to probability and statistics for uncertainty quantification in the context of inverse problems, as well as an introduction to statistical regularization of inverse problems. The author covers basic statistical inference, introduces the framework of ill-posed inverse problems, and explains statistical questions that arise in their applications. An Introduction to Data Analysis and Uncertainty Quantification for Inverse Problems includes many examples that explain techniques useful for addressing general problems arising in uncertainty quantification, Bayesian and non-Bayesian statistical methods and discussions of their complementary roles, and analysis of a real data set to illustrate the methodology covered throughout the book.
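
As a rough illustration of the statistical regularization discussed above (a minimal sketch, not code from the book), the following Python snippet recovers a signal from noisy, indirect observations with Tikhonov (ridge) regularization; the forward operator, true signal, noise level, and regularization parameter are all invented for illustration.

```python
# Minimal, hypothetical sketch: recover a smooth signal x from noisy indirect
# observations y = A x + noise via Tikhonov (ridge) regularization.
import numpy as np

rng = np.random.default_rng(0)

n = 50                                   # number of unknowns
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)           # assumed "true" object

# A mildly smoothing forward operator (a simple moving-average blur).
A = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 3), min(n, i + 4)
    A[i, lo:hi] = 1.0 / (hi - lo)

sigma = 0.05                             # assumed noise standard deviation
y = A @ x_true + sigma * rng.standard_normal(n)

# Tikhonov solution: argmin ||A x - y||^2 + lam ||x||^2.
# Under Gaussian noise and a Gaussian prior x ~ N(0, tau^2 I), the same
# formula with lam = sigma^2 / tau^2 gives the posterior mean, which is one
# way the Bayesian and non-Bayesian views complement each other.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```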




Logic, Language, Information and Computation


Book Description

Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the 4th volume of the FoLLI LNAI subline, containing the refereed proceedings of the 15th International Workshop on Logic, Language, Information and Computation, WoLLIC 2008, held in Edinburgh, UK, in July 2008. The 21 revised full papers presented together with the abstracts of 7 tutorials and invited lectures were carefully reviewed and selected from numerous submissions. The papers cover all pertinent subjects in computer science with particular interest in cross-disciplinary topics. Typical areas of interest are: foundations of computing and programming; novel computation models and paradigms; broad notions of proof and belief; formal methods in software and hardware development; logical approaches to natural language and reasoning; logics of programs, actions and resources; foundational aspects of information organization, search, flow, sharing, and protection.




Uncertainty Quantification and Predictive Computational Science


Book Description

This textbook teaches the essential background and skills for understanding and quantifying uncertainties in a computational simulation, and for predicting the behavior of a system under those uncertainties. It addresses a critical knowledge gap in the widespread adoption of simulation in high-consequence decision-making throughout the engineering and physical sciences. Constructing sophisticated techniques for prediction from basic building blocks, the book first reviews the fundamentals that underpin its later topics, including probability, sampling, and Bayesian statistics. Part II focuses on applying local sensitivity analysis to apportion uncertainty in a model's outputs to sources of uncertainty in its inputs. Part III demonstrates techniques for quantifying the impact of parametric uncertainties on a problem, specifically how input uncertainties affect outputs. The final section covers techniques for applying uncertainty quantification to make predictions under uncertainty, including the treatment of epistemic uncertainties. It presents the theory and practice of predicting the behavior of a system based on the aggregation of data from simulation, theory, and experiment. The text focuses on simulations based on the solution of systems of partial differential equations and includes in-depth coverage of Monte Carlo methods, the basic design of computer experiments, and regularized statistical techniques. Code references, in Python, appear throughout the text and online as executable code, enabling readers to perform the analysis under discussion. Worked examples from realistic model problems help readers understand the mechanics of applying the methods. Each chapter ends with several assignable problems. Uncertainty Quantification and Predictive Computational Science fills the growing need for a classroom text for senior undergraduate and early-career graduate students in the engineering and physical sciences and supports independent study by researchers and professionals who must include uncertainty quantification and predictive science in the simulations they develop and/or perform.
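
The book's own Python examples are not reproduced here; as a minimal, hypothetical sketch of the workflow described above, the snippet below propagates assumed parametric uncertainty through a toy model with Monte Carlo sampling and computes a finite-difference local sensitivity at a nominal point. The model, input distributions, and sample size are invented for illustration.

```python
# Minimal, hypothetical sketch: Monte Carlo propagation of parametric
# uncertainty through a toy model, plus a finite-difference local sensitivity.
import numpy as np

rng = np.random.default_rng(1)

def model(k, q):
    """Toy stand-in for an expensive simulation: peak temperature of a 1D
    steady conduction problem with conductivity k and source strength q."""
    return q / (8.0 * k)

# Assumed input uncertainty: conductivity and source strength.
k_samples = rng.normal(loc=2.0, scale=0.2, size=10_000)
q_samples = rng.normal(loc=5.0, scale=0.5, size=10_000)

outputs = model(k_samples, q_samples)
print("mean output  :", outputs.mean())
print("std of output:", outputs.std(ddof=1))
print("95% interval :", np.percentile(outputs, [2.5, 97.5]))

# Local sensitivity at the nominal point via centered finite differences.
k0, q0, h = 2.0, 5.0, 1e-6
dk = (model(k0 + h, q0) - model(k0 - h, q0)) / (2 * h)
dq = (model(k0, q0 + h) - model(k0, q0 - h)) / (2 * h)
print("d(output)/dk:", dk, " d(output)/dq:", dq)
```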




The Handbook of Research Synthesis and Meta-Analysis


Book Description

Praise for the first edition: "The Handbook is a comprehensive treatment of literature synthesis and provides practical advice for anyone deep in the throes of, just teetering on the brink of, or attempting to decipher a meta-analysis. Given the expanding application and importance of literature synthesis, understanding both its strengths and weaknesses is essential for its practitioners and consumers. This volume is a good beginning for those who wish to gain that understanding." —Chance "Meta-analysis, as the statistical analysis of a large collection of results from individual studies is called, has now achieved a status of respectability in medicine. This respectability, when combined with the slight hint of mystique that sometimes surrounds meta-analysis, ensures that results of studies that use it are treated with the respect they deserve.... The Handbook of Research Synthesis is one of the most important publications in this subject both as a definitive reference book and a practical manual."—British Medical Journal When the first edition of The Handbook of Research Synthesis was published in 1994, it quickly became the definitive reference for researchers conducting meta-analyses of existing research in both the social and biological sciences. In this fully revised second edition, editors Harris Cooper, Larry Hedges, and Jeff Valentine present updated versions of the Handbook's classic chapters, as well as entirely new sections reporting on the most recent, cutting-edge developments in the field. Research synthesis is the practice of systematically distilling and integrating data from a variety of sources in order to draw more reliable conclusions about a given question or topic. The Handbook of Research Synthesis and Meta-Analysis draws upon years of groundbreaking advances that have transformed research synthesis from a narrative craft into an important scientific process in its own right. Cooper, Hedges, and Valentine have assembled leading authorities in the field to guide the reader through every stage of the research synthesis process—problem formulation, literature search and evaluation, statistical integration, and report preparation. The Handbook of Research Synthesis and Meta-Analysis incorporates state-of-the-art techniques from all quantitative synthesis traditions. Distilling a vast technical literature and many informal sources, the Handbook provides a portfolio of the most effective solutions to the problems of quantitative data integration. Among the statistical issues addressed by the authors are the synthesis of non-independent data sets, fixed and random effects methods, the performance of sensitivity analyses and model assessments, and the problem of missing data. The Handbook of Research Synthesis and Meta-Analysis also provides a rich treatment of the non-statistical aspects of research synthesis. Topics include searching the literature and developing schemes for gathering information from study reports. Those engaged in research synthesis will also find useful advice on how tables, graphs, and narration can be used to provide the most meaningful communication of the results of research synthesis. In addition, the editors address the potential and limitations of research synthesis, and its future directions. The past decade has been a period of enormous growth in the field of research synthesis.
The second edition of the Handbook thoroughly revises the original chapters to ensure that the volume remains the most authoritative source of information for researchers undertaking meta-analysis today. In response to the increasing use of research synthesis in the formation of public policy, the second edition includes a new chapter on the strengths and limitations of research synthesis in policy debates.
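
As a small numerical companion to the fixed and random effects methods mentioned above (a minimal sketch, not taken from the Handbook), the following Python snippet computes an inverse-variance fixed-effect estimate and a DerSimonian-Laird random-effects estimate for invented study effect sizes and variances.

```python
# Minimal, hypothetical sketch: fixed-effect and DerSimonian-Laird
# random-effects pooling of invented study results.
import numpy as np

effects = np.array([0.30, 0.12, 0.45, 0.20, 0.05])    # hypothetical effect sizes
variances = np.array([0.02, 0.03, 0.05, 0.01, 0.04])  # hypothetical within-study variances

# Fixed-effect model: weight each study by 1 / variance.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))

# Random-effects model: estimate between-study variance tau^2 (DerSimonian-Laird).
k = len(effects)
Q = np.sum(w * (effects - fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)
w_star = 1.0 / (variances + tau2)
random_eff = np.sum(w_star * effects) / np.sum(w_star)
se_random = np.sqrt(1.0 / np.sum(w_star))

print(f"fixed-effect estimate  : {fixed:.3f} (SE {se_fixed:.3f})")
print(f"random-effects estimate: {random_eff:.3f} (SE {se_random:.3f}), tau^2 = {tau2:.3f}")
```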




Learning to Quantify


Book Description

This open access book provides an introduction and an overview of learning to quantify (a.k.a. “quantification”), i.e., the task of training estimators of class proportions in unlabeled data by means of supervised learning. In data science, learning to quantify is a task in its own right, related to classification yet different from it, since estimating class proportions by simply classifying all data and counting the labels assigned by the classifier is known to often return inaccurate (“biased”) class proportion estimates. The book introduces learning to quantify by looking at the supervised learning methods that can be used to perform it, at the evaluation measures and evaluation protocols that should be used for evaluating the quality of the returned predictions, at the numerous fields of human activity in which the use of quantification techniques may provide improved results with respect to the naive use of classification techniques, and at advanced topics in quantification research. The book is suitable for researchers, data scientists, and PhD students who want to come up to speed with the state of the art in learning to quantify, as well as for researchers wishing to apply data science technologies to fields of human activity (e.g., the social sciences, political science, epidemiology, market research) that focus on aggregate (“macro”) data rather than on individual (“micro”) data.
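
The bias of naive classify-and-count described above can be seen in a few lines. The following is a minimal, hypothetical sketch (not code from the book) comparing classify-and-count with the standard adjusted-count correction, assuming the classifier's true and false positive rates are known from validation data; all numbers are invented.

```python
# Minimal, hypothetical sketch: naive classify-and-count (CC) versus the
# adjusted classify-and-count (ACC) correction for class-prevalence estimation.
import numpy as np

rng = np.random.default_rng(2)

true_prevalence = 0.10                     # positives are rare in this test set
n = 100_000
labels = rng.random(n) < true_prevalence

tpr, fpr = 0.80, 0.15                      # assumed classifier behaviour
predicted = np.where(labels,
                     rng.random(n) < tpr,  # positives detected with prob tpr
                     rng.random(n) < fpr)  # negatives misfire with prob fpr

cc = predicted.mean()                      # naive classify-and-count estimate
acc = (cc - fpr) / (tpr - fpr)             # adjusted classify-and-count

print(f"true prevalence            : {true_prevalence:.3f}")
print(f"classify-and-count estimate: {cc:.3f}  (biased)")
print(f"adjusted-count estimate    : {acc:.3f}")
```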




Computational and Statistical Methods for Protein Quantification by Mass Spectrometry


Book Description

The definitive introduction to data analysis in quantitative proteomics. This book provides all the necessary knowledge about mass spectrometry-based proteomics methods and the computational and statistical approaches needed to pursue the planning, design, and analysis of quantitative proteomics experiments. The author’s carefully constructed approach allows readers to easily make the transition into the field of quantitative proteomics. Through detailed descriptions of wet-lab methods, computational approaches, and statistical tools, this book covers the full scope of a quantitative experiment, allowing readers to acquire new knowledge while also serving as a useful reference work for more advanced readers. Computational and Statistical Methods for Protein Quantification by Mass Spectrometry:
- Introduces the use of mass spectrometry in protein quantification and shows how the bioinformatics challenges in this field can be solved using statistical methods and various software programs.
- Is illustrated by a large number of figures and examples as well as numerous exercises.
- Provides both clear and rigorous descriptions of methods and approaches.
- Is thoroughly indexed and cross-referenced, combining the strengths of a textbook with the utility of a reference work.
- Features detailed discussions of both wet-lab approaches and statistical and computational methods.
With clear and thorough descriptions of the various methods and approaches, this book is accessible to biologists, informaticians, and statisticians alike and is aimed at readers across the academic spectrum, from advanced undergraduate students to postdoctoral researchers entering the field.
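
As a minimal, hypothetical sketch of one routine statistical step in quantitative proteomics (not the book's own software or data), the snippet below log2-transforms invented peptide intensities, summarizes them to a protein-level value per sample, and tests for differential abundance between two conditions.

```python
# Minimal, hypothetical sketch: peptide-to-protein summarization and a simple
# two-sample test for differential protein abundance.
import numpy as np
from scipy import stats

# Rows = peptides of one protein, columns = samples (3 control, 3 treated).
# All intensities are invented for illustration.
intensities = np.array([
    [2.1e6, 1.9e6, 2.3e6, 4.0e6, 4.4e6, 3.8e6],
    [8.5e5, 9.1e5, 7.8e5, 1.6e6, 1.8e6, 1.5e6],
    [3.3e6, 3.0e6, 3.6e6, 6.1e6, 6.8e6, 5.9e6],
])

log2 = np.log2(intensities)

# Protein-level summary per sample: median over the protein's peptides.
protein = np.median(log2, axis=0)
control, treated = protein[:3], protein[3:]

log2_fold_change = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"log2 fold change: {log2_fold_change:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```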




Risk Analysis in Engineering


Book Description

Based on the author’s 20 years of teaching, Risk Analysis in Engineering: Techniques, Tools, and Trends presents an engineering approach to probabilistic risk analysis (PRA). It emphasizes methods for comprehensive PRA studies, including techniques for risk management. The author assumes little or no prior knowledge of risk analysis on the part of the student and provides the necessary mathematical and engineering foundations. The text relies heavily on, but is not limited to, examples from the nuclear industry, because that is where PRA techniques were first developed. Since PRA provides a best-estimate approach, the author pays special attention to explaining uncertainty characterization. The book begins with a description of the basic definitions and principles of risk, safety, and performance and presents the elements of risk analysis and their applications in engineering. After highlighting the methods for performing PRAs, the author describes how to assess and measure performance of the building blocks of PRAs, such as reliability of hardware subsystems, structures, components, human actions, and software. He covers methods of characterizing uncertainties and methods for propagating them through the PRA model to estimate uncertainties of the results. The book explores how to identify and rank important and sensitive contributors to the estimated risk using the PRA and performance assessment models. It also includes a description of risk acceptance criteria and the formal methods for making decisions related to risk management options and strategies. The book concludes with a brief review of the main aspects, issues, and methods of risk communication. Drawing on notes, homework problems, and exams from courses he has taught as well as feedback from his students, Professor Modarres provides a from-the-trenches method for teaching risk assessment for engineers. This is a textbook that is easy to use for students and professors alike.
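
As a minimal, hypothetical sketch in the spirit of the PRA topics above (not an example from the book), the following Python snippet propagates assumed lognormal uncertainties in basic-event probabilities through a tiny fault-tree model and reports the resulting uncertainty in the top-event probability, along with a crude ranking of contributors. The tree structure and all parameters are invented.

```python
# Minimal, hypothetical sketch: uncertainty propagation through a toy
# fault tree, TOP = A or (B and C), with a crude contributor ranking.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Epistemic uncertainty about each basic-event probability (lognormal).
p_a = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
p_b = rng.lognormal(mean=np.log(5e-2), sigma=0.7, size=n)
p_c = rng.lognormal(mean=np.log(2e-2), sigma=0.7, size=n)

# Rare-event approximation for TOP = A or (B and C), assuming independence.
p_top = p_a + p_b * p_c

print(f"mean top-event probability: {p_top.mean():.2e}")
print(f"5th / 95th percentiles    : {np.percentile(p_top, 5):.2e} / "
      f"{np.percentile(p_top, 95):.2e}")

# Crude importance ranking: mean fractional contribution of each cut set.
print(f"contribution of A   : {np.mean(p_a / p_top):.2%}")
print(f"contribution of B*C : {np.mean(p_b * p_c / p_top):.2%}")
```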




Reliability Engineering and Risk Analysis


Book Description

Tools to Proactively Predict Failure. The prediction of failures involves uncertainty, and problems associated with failures are inherently probabilistic. Their solution requires optimal tools for analyzing the strength of evidence and understanding failure events and processes in order to gauge confidence in a design’s reliability. Reliability Engineering and Risk Analysis: A Practical Guide, Second Edition has already introduced a generation of engineers to the practical methods and techniques used in reliability and risk studies applicable to numerous disciplines. Written for both practicing professionals and engineering students, this comprehensive overview of reliability and risk analysis techniques has been fully updated, expanded, and revised to meet current needs. It concentrates on reliability analysis of complex systems and their components and also presents basic risk analysis techniques. Since reliability analysis is a multidisciplinary subject, the scope of this book applies to most engineering disciplines, and its content is primarily based on the materials used in undergraduate and graduate-level courses at the University of Maryland. This book has greatly benefited from its authors' industrial experience. It balances basic theory and applications and presents a large number of examples to illustrate various technical subjects. A proven educational tool, this bestselling classic will serve anyone working on real-life failure analysis and prediction problems.
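
As a minimal, hypothetical sketch of the component-to-system reliability calculations such a course covers (not an example from the book), the snippet below combines exponential component reliability models for a redundant pump pair in series with a valve; the failure rates and mission time are invented.

```python
# Minimal, hypothetical sketch: series/parallel system reliability from
# exponential (constant failure rate) component models.
import numpy as np

mission_time = 1000.0                     # hours (assumed)
lam_pump_a = 1e-4                         # failures per hour (assumed)
lam_pump_b = 1e-4
lam_valve = 5e-5

def reliability(lam, t):
    """Exponential component reliability R(t) = exp(-lam * t)."""
    return np.exp(-lam * t)

r_pump_a = reliability(lam_pump_a, mission_time)
r_pump_b = reliability(lam_pump_b, mission_time)
r_valve = reliability(lam_valve, mission_time)

# Two pumps in parallel (either suffices), in series with the valve.
r_pumps = 1.0 - (1.0 - r_pump_a) * (1.0 - r_pump_b)
r_system = r_pumps * r_valve

print(f"pump pair reliability: {r_pumps:.5f}")
print(f"system reliability   : {r_system:.5f}")
```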




The Crisis of Distribution and the Regulation of Economic Law


Book Description

The crisis of distribution is one of the longest-standing and most complicated issues facing human society. Imbued with social, political, historical, and cultural elements, it varies significantly across countries as a result of these factors. As an emerging economy that has moved from a planned to a market economy, China has experienced large distribution gaps since it implemented the Reform and Opening-up Policy in the early 1980s, and it requires stronger economic law to mitigate and regulate the crisis of distribution. The two volumes examine the crisis of distribution that China faces and propose policy and economic law methods that can be used to overcome the distribution dilemma. The author discusses the four main concepts and focus points of the crisis of distribution – distribution itself, the crises it faces, the rule of law, and development – before proposing a theoretical framework of "system–distribution–development" to resolve the distribution problems that China faces. The book should be of keen interest to researchers and students of law, economics, and political science.