Statistical and Fuzzy Approaches to Data Processing, with Applications to Econometrics and Other Areas


Book Description

Mainly focusing on processing uncertainty, this book presents state-of-the-art techniques and demonstrates their use in applications to econometrics and other areas. Processing uncertainty is essential, considering that computers – which help us understand real-life processes and make better decisions based on that understanding – get their information from measurements or from expert estimates, neither of which is ever 100% accurate. Measurement uncertainty is usually described using probabilistic techniques, while uncertainty in expert estimates is often described using fuzzy techniques. Therefore, it is important to master both techniques for processing data. This book is highly recommended for researchers and students interested in the latest results and challenges in uncertainty, as well as practitioners who want to learn how to use the corresponding state-of-the-art techniques.




Theory and Applications of Time Series Analysis and Forecasting


Book Description

This book presents a selection of peer-reviewed contributions on the latest developments in time series analysis and forecasting, presented at the 7th International Conference on Time Series and Forecasting, ITISE 2021, held in Gran Canaria, Spain, July 19-21, 2021. It is divided into four parts. The first part addresses general modern methods and theoretical aspects of time series analysis and forecasting, while the remaining three parts focus on forecasting methods in econometrics, time series forecasting and prediction, and numerous other real-world applications. Covering a broad range of topics, the book will give readers a modern perspective on the subject. The ITISE conference series provides a forum for scientists, engineers, educators and students to discuss the latest advances and implementations in the foundations, theory, models and applications of time series analysis and forecasting. It focuses on interdisciplinary research encompassing computer science, mathematics, statistics and econometrics.




Intelligent and Fuzzy Systems


Book Description

This book presents recent research in intelligent and fuzzy techniques applied to digital transformation and the new normal, the state to which economies, societies, etc. settle following a crisis that brings us to a new environment. Digital transformation and the new normal, appearing in many areas such as the digital economy, digital finance, digital government, digital health, and digital education, are the main scope of this book. Readers can benefit from this book in preparing for a digital “new normal” and maintaining a leadership position among competitors in both manufacturing and service companies. Digitizing an industrial company is a challenging process, which involves rethinking the established structures, processes, and steering mechanisms presented in this book. The intended readers are researchers in intelligent and fuzzy systems, lecturers, and M.Sc. and Ph.D. students studying digital transformation and the new normal. The book covers fuzzy logic theory and applications, heuristics, and metaheuristics, from optimization to machine learning and from quality management to risk management, making it an excellent source for researchers.




Classification and Data Analysis


Book Description

This volume gathers peer-reviewed contributions on data analysis, classification and related areas presented at the 28th Conference of the Section on Classification and Data Analysis of the Polish Statistical Association, SKAD 2019, held in Szczecin, Poland, on September 18–20, 2019. Providing a balance between theoretical and methodological contributions and empirical papers, it covers a broad variety of topics, ranging from multivariate data analysis, classification and regression, symbolic (and other) data analysis, visualization, data mining, and computer methods to composite measures, and numerous applications of data analysis methods in economics, finance and other social sciences. The book is intended for a wide audience, including researchers at universities and research institutions, graduate and doctoral students, practitioners, data scientists and employees in public statistical institutions.




Approximate Dynamic Programming


Book Description

A complete and accessible introduction to the real-world applications of approximate dynamic programming. With the growing levels of sophistication in modern-day operations, it is vital for practitioners to understand how to approach, model, and solve complex industrial problems. Approximate Dynamic Programming is a result of the author's decades of experience working in large industrial settings to develop practical and high-quality solutions to problems that involve making decisions in the presence of uncertainty. This groundbreaking book uniquely integrates four distinct disciplines—Markov decision processes, mathematical programming, simulation, and statistics—to demonstrate how to successfully model and solve a wide range of real-life problems using the techniques of approximate dynamic programming (ADP). The reader is introduced to the three curses of dimensionality that impact complex problems and is also shown how the post-decision state variable allows for the use of classical algorithmic strategies from operations research to treat complex stochastic optimization problems. Designed as an introduction and assuming no prior training in dynamic programming of any form, Approximate Dynamic Programming contains dozens of algorithms that are intended to serve as a starting point in the design of practical solutions for real problems. The book provides detailed coverage of implementation challenges including: modeling complex sequential decision processes under uncertainty, identifying robust policies, designing and estimating value function approximations, choosing effective stepsize rules, and resolving convergence issues.
With a focus on modeling and algorithms in conjunction with the language of mainstream operations research, artificial intelligence, and control theory, Approximate Dynamic Programming:
- Models complex, high-dimensional problems in a natural and practical way, which draws on years of industrial projects
- Introduces and emphasizes the power of estimating a value function around the post-decision state, allowing solution algorithms to be broken down into three fundamental steps: classical simulation, classical optimization, and classical statistics
- Presents a thorough discussion of recursive estimation, including fundamental theory and a number of issues that arise in the development of practical algorithms
- Offers a variety of methods for approximating dynamic programs that have appeared in previous literature, but that have never been presented in the coherent format of a book

Motivated by examples from modern-day operations research, Approximate Dynamic Programming is an accessible introduction to dynamic modeling and is also a valuable guide for the development of high-quality solutions to problems that exist in operations research and engineering. The clear and precise presentation of the material makes this an appropriate text for advanced undergraduate and beginning graduate courses, while also serving as a reference for researchers and practitioners. A companion Web site is available for readers, which includes additional exercises, solutions to exercises, and data sets to reinforce the book's main concepts.




Statistical Methods in Spatial Epidemiology


Book Description

Spatial epidemiology is the description and analysis of the geographical distribution of disease. It is more important now than ever, with modern threats such as bio-terrorism making such analysis even more complex. This second edition of Statistical Methods in Spatial Epidemiology is updated and expanded to offer complete coverage of the analysis and application of spatial statistical methods. The book is divided into two main sections: Part I introduces basic definitions and terminology, along with map construction and some basic models. This is expanded upon in Part II by applying this knowledge to the fundamental problems within spatial epidemiology, such as disease mapping, ecological analysis, disease clustering, bio-terrorism, space-time analysis, surveillance and infectious disease modelling. The book:
- Provides a comprehensive overview of the main statistical methods used in spatial epidemiology
- Is updated to include a new emphasis on bio-terrorism and disease surveillance
- Emphasizes the importance of space-time modelling and outlines the practical application of the method
- Discusses the wide range of software available for analyzing spatial data, including WinBUGS, SaTScan and R, and features an accompanying website hosting related software
- Contains numerous data sets, each representing a different approach to the analysis, providing an insight into various modelling techniques

This text is primarily aimed at medical statisticians, researchers and practitioners from public health and epidemiology. It is also suitable for postgraduate students of statistics and epidemiology, as well as professionals working in government agencies.




Loss Models


Book Description

An essential resource for constructing and analyzing advanced actuarial models. Loss Models: Further Topics presents extended coverage of modeling through the use of tools related to risk theory, loss distributions, and survival models. The book uses these methods to construct and evaluate actuarial models in the fields of insurance and business. Providing an advanced study of actuarial methods, the book features extended discussions of risk modeling and risk measures, including Tail-Value-at-Risk. Loss Models: Further Topics contains additional material to accompany the Fourth Edition of Loss Models: From Data to Decisions, such as:
- Extreme value distributions
- Coxian and related distributions
- Mixed Erlang distributions
- Computational and analytical methods for aggregate claim models
- Counting processes
- Compound distributions with time-dependent claim amounts
- Copula models
- Continuous time ruin models
- Interpolation and smoothing

The book is an essential reference for practicing actuaries and actuarial researchers who want to go beyond the material required for actuarial qualification. Loss Models: Further Topics is also an excellent resource for graduate students in the actuarial field.




Bayesian Networks


Book Description

Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained, with exercises featured throughout. Features include:
- An introduction to the Dirichlet distribution, exponential families and their applications
- A detailed description of learning algorithms and conditional Gaussian distributions using junction tree methods
- A discussion of Pearl's intervention calculus, with an introduction to the notion of see and do conditioning
- All concepts clearly defined and illustrated with examples and exercises, with solutions provided online

This book will prove a valuable resource for postgraduate students of statistics, computer engineering, mathematics, data mining, artificial intelligence, and biology. Researchers and users of comparable modelling or statistical techniques such as neural networks will also find this book of interest.




Design and Analysis of Experiments, Volume 1


Book Description

This user-friendly new edition reflects a modern and accessible approach to experimental design and analysis. Design and Analysis of Experiments, Volume 1, Second Edition provides a general introduction to the philosophy, theory, and practice of designing scientific comparative experiments and also details the intricacies that are often encountered throughout the design and analysis processes. With the addition of extensive numerical examples and expanded treatment of key concepts, this book further addresses the needs of practitioners and successfully provides a solid understanding of the relationship between the quality of experimental design and the validity of conclusions. This Second Edition continues to provide the theoretical basis of the principles of experimental design in conjunction with the statistical framework within which to apply the fundamental concepts. The difference between experimental studies and observational studies is addressed, along with a discussion of the various components of experimental design: the error-control design, the treatment design, and the observation design. A series of error-control designs are presented based on fundamental design principles, such as randomization, local control (blocking), the Latin square principle, the split-unit principle, and the notion of factorial treatment structure.
This book also emphasizes the practical aspects of designing and analyzing experiments and features:
- Increased coverage of the practical aspects of designing and analyzing experiments, complete with the steps needed to plan and construct an experiment
- A case study that explores the various types of interaction between both treatment and blocking factors, with numerical and graphical techniques provided to analyze and interpret these interactions
- Discussion of the important distinctions between two types of blocking factors and their role in the process of drawing statistical inferences from an experiment
- A new chapter devoted entirely to repeated measures, highlighting its relationship to split-plot and split-block designs
- Numerical examples using SAS® to illustrate the analyses of data from various designs and to construct factorial designs that relate the results to the theoretical derivations

Design and Analysis of Experiments, Volume 1, Second Edition is an ideal textbook for first-year graduate courses in experimental design and also serves as a practical, hands-on reference for statisticians and researchers across a wide array of subject areas, including biological sciences, engineering, medicine, pharmacology, psychology, and business.