Applications of Computer Aided Time Series Modeling


Book Description

This book consists of three parts: Part One is composed of two introductory chapters. The first chapter provides an instrumental variable interpretation of the state space time series algorithm originally proposed by Aoki (1983), and gives an introductory account of incorporating exogenous signals in state space models. The second chapter, by Havenner, gives practical guidance in applying this algorithm from one of the most experienced practitioners of the method. Havenner begins by summarizing six reasons state space methods are advantageous, and then walks the reader through the construction and evaluation of a state space model for four monthly macroeconomic series: the industrial production index, the consumer price index, the six-month commercial paper rate, and the money stock (M1). To single out one of the several important insights into modeling that he shares with the reader, he discusses in Section 2ii the effects of sampling errors and model misspecification on successful modeling efforts. He argues that model misspecification is an important amplifier of the effects of sampling error that may cause symplectic matrices to have complex unit roots, a theoretical impossibility. Correct model specifications increase the efficiency of estimators and often eliminate this finite sample problem. This is an important insight into the positive realness of covariance matrices; positivity has been emphasized by system engineers to the exclusion of other methods of reducing sampling error and alleviating what is simply a finite sample problem. The second and third parts collect papers that describe specific applications.
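
As context for the state space framework discussed above, the following is a minimal sketch (in Python, using only NumPy) of the innovations-form state space representation that such algorithms estimate: it simulates x_{t+1} = A x_t + K e_t, y_t = C x_t + e_t. The matrices, noise scale, and series length are illustrative assumptions, not values from the book, and the sketch does not implement Aoki's estimation algorithm itself.

```python
import numpy as np

# Minimal sketch (not Aoki's estimation algorithm): simulate an
# innovations-form state space model
#   x_{t+1} = A x_t + K e_t,   y_t = C x_t + e_t.
rng = np.random.default_rng(0)

A = np.array([[0.8, 0.1],
              [0.0, 0.5]])   # state transition (assumed values)
C = np.array([[1.0, 0.5]])   # observation matrix (assumed values)
K = np.array([[0.3],
              [0.2]])        # Kalman gain in innovations form (assumed values)

T = 200
x = np.zeros((2, 1))
y = np.empty(T)
for t in range(T):
    e = rng.normal(scale=0.1, size=(1, 1))   # innovation
    y[t] = (C @ x + e).item()                # observation
    x = A @ x + K @ e                        # state update

# One-step-ahead predictions reuse the same recursion, with e_t
# replaced by the observed innovation y_t - C x_t.
```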




Case Studies in Environmental Statistics


Book Description

This book offers a set of case studies exemplifying the broad range of statistical science used in environmental studies and applications. The case studies can be used for graduate courses in environmental statistics, as a resource for courses in statistics that use genuine examples to illustrate statistical methodology and theory, and for courses in environmental science. Not only are these studies valuable for teaching about an essential cross-disciplinary activity, but they can also be used to spur new research along the directions exposed in these examples. The studies reported here resulted from a program of research carried on by the National Institute of Statistical Sciences (NISS) during the years 1992-1996. NISS was created in 1991 as an initiative of the national statistics organizations, with the mission to renew and focus efforts of statistical science on important cross-disciplinary problems. One of NISS's first projects was a cooperative research effort with the U.S. Environmental Protection Agency (EPA) on problems of great interest to environmental science and regulation, surely one of today's most important cross-disciplinary activities. With the support and encouragement of Gary Foley, Director of the (then) U.S. EPA Atmospheric Research and Exposure Assessment Laboratory, NISS assembled a project and a research team that pursued a program whose results and products form the basis of this book.




Measuring Risk in Complex Stochastic Systems


Book Description

Complex dynamic processes of life and science generate risks that have to be taken. The need for clear and distinctive definitions of different kinds of risk, adequate methods, and parsimonious models is obvious. The identification of important risk factors and the quantification of the risk stemming from the interplay between many risk factors are prerequisites for mastering the challenges of risk perception, analysis, and management successfully. The increasing complexity of stochastic systems, especially in finance, has catalysed the use of advanced statistical methods for these tasks. The methodological approach to solving risk management tasks may, however, be undertaken from many different angles. A financial institution may focus on the risk created by the use of options and other derivatives in global financial processing, an auditor will try to evaluate internal risk management models in detail, a mathematician may be interested in analysing the nonlinearities involved or may concentrate on extreme and rare events of a complex stochastic system, whereas a statistician may be interested in model and variable selection, practical implementations, and parsimonious modelling. An economist may think about the possible impact of risk management tools in the framework of efficient regulation of financial markets or efficient allocation of capital.




Stochastic Processes and Orthogonal Polynomials


Book Description

The book offers an accessible reference for researchers in the probability, statistics, and special functions communities. It presents a variety of interdisciplinary relations between its two main ingredients, stochastic processes and orthogonal polynomials. It covers topics such as time-dependent and asymptotic analysis for birth-death processes and diffusions, martingale relations for Lévy processes, stochastic integrals, and Stein's approximation method. Almost all of the well-known orthogonal polynomials, which are brought together in the so-called Askey scheme, come into play. This volume clearly illustrates the powerful mathematical role of orthogonal polynomials in the analysis of stochastic processes and is made accessible to all mathematicians with a basic background in probability theory and mathematical analysis. Wim Schoutens is a Postdoctoral Researcher of the Fund for Scientific Research-Flanders (Belgium). He received his PhD in Science from the Catholic University of Leuven, Belgium.
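
One classical link between orthogonal polynomials and stochastic processes of the kind surveyed in the book is that the space-time Hermite polynomials are martingales along Brownian motion (the simplest Lévy process). The Python sketch below checks this by simulation; the path counts and step sizes are illustrative assumptions, not examples taken from the text.

```python
import numpy as np

# The space-time Hermite polynomials
#   H_2(x, t) = x**2 - t,   H_3(x, t) = x**3 - 3*t*x
# are martingales when evaluated along Brownian motion, so their
# expectation stays at 0 for every t.  We verify this by Monte Carlo.
rng = np.random.default_rng(1)

n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps
increments = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)          # Brownian paths at t = dt, ..., T
t = dt * np.arange(1, n_steps + 1)

H2 = B**2 - t
H3 = B**3 - 3 * t * B

# The sample means over paths should stay near 0 at every time point.
print("max |mean H2|:", np.abs(H2.mean(axis=0)).max())
print("max |mean H3|:", np.abs(H3.mean(axis=0)).max())
```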




Random and Quasi-Random Point Sets


Book Description

This volume is a collection of survey papers on recent developments in the fields of quasi-Monte Carlo methods and uniform random number generation. We will cover a broad spectrum of questions, from advanced metric number theory to pricing financial derivatives. The Monte Carlo method is one of the most important tools of system modeling. Deterministic algorithms, so-called uniform random number generators, are used to produce the input for the model systems on computers. Such generators are assessed by theoretical ("a priori") and by empirical tests. In the a priori analysis, we study figures of merit that measure the uniformity of certain high-dimensional "random" point sets. The degree of uniformity is strongly related to the degree of correlation within the random numbers. The quasi-Monte Carlo approach aims at improving the rate of convergence of the Monte Carlo method by number-theoretic techniques. It yields deterministic bounds for the approximation error. The main mathematical tools here are so-called low-discrepancy sequences. These "quasi-random" points are produced by deterministic algorithms and should be as "super"-uniformly distributed as possible. Hence, both in uniform random number generation and in quasi-Monte Carlo methods, we study the uniformity of deterministically generated point sets in high dimensions. By a (common) abuse of language, one speaks of random and quasi-random point sets. The central questions treated in this book are (i) how to generate, (ii) how to analyze, and (iii) how to apply such high-dimensional point sets.
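
As a small illustration of the quasi-Monte Carlo idea described above, the sketch below compares plain Monte Carlo with scrambled Sobol' low-discrepancy points on a toy integral over the unit cube. The test integrand and sample sizes are assumptions chosen for illustration; the book's surveys treat the underlying theory and far more demanding applications.

```python
import numpy as np
from scipy.stats import qmc

# Estimate the integral of f(x) = prod_j (pi/2) * sin(pi * x_j) over
# [0, 1]^d, whose exact value is 1, with pseudo-random (MC) and with
# Sobol' quasi-random (QMC) point sets of the same size.
rng = np.random.default_rng(0)
d, n = 5, 2**12  # power-of-two sample size suits the Sobol' construction

def f(x):
    return np.prod(0.5 * np.pi * np.sin(np.pi * x), axis=1)

x_mc = rng.random((n, d))                               # pseudo-random points
x_qmc = qmc.Sobol(d, scramble=True, seed=0).random(n)   # quasi-random points

print("MC estimate :", f(x_mc).mean())
print("QMC estimate:", f(x_qmc).mean())   # typically much closer to 1
```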




Seasonal Adjustment with the X-11 Method


Book Description

The most widely used statistical method of seasonal adjustment is implemented in the X-11 variant of the Census Method II Seasonal Adjustment Program. Developed by the US Bureau of the Census, it later gave rise to the X-11-ARIMA software and to X-12-ARIMA. While these integrate parametric methods, they remain close to the initial X-11 method, and it is this "core" that Seasonal Adjustment with the X-11 Method focuses on. It will be an important reference for government agencies and other serious users of economic data.




Case Studies in Bayesian Statistics


Book Description

The 4th Workshop on Case Studies in Bayesian Statistics was held at the Carnegie Mellon University campus on September 27-28, 1997. As in the past, the workshop featured both invited and contributed case studies. The former were presented and discussed in detail, while the latter were presented in poster format. This volume contains the four invited case studies with the accompanying discussion, as well as nine contributed papers selected by a refereeing process. While most of the case studies in the volume come from biomedical research, the reader will also find studies in environmental science and marketing research.

INVITED PAPERS

In Modeling Customer Survey Data, Linda A. Clark, William S. Cleveland, Lorraine Denby, and Chuanhai Liu use hierarchical modeling with time series components for customer value analysis (CVA) data from Lucent Technologies. The data were derived from surveys of customers of the company and its competitors, designed to assess relative performance on a spectrum of issues including product and service quality and pricing. The model provides a full description of the CVA data, with random location and scale effects for survey respondents and longitudinal company effects for each attribute. In addition to assessing the performance of specific companies, the model allows empirical exploration of the conceptual basis of customer value analysis. The authors place special emphasis on graphical displays for this complex, multivariate set of data and include a wealth of such plots in the paper.
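
To make the hierarchical structure described above concrete, here is a highly simplified sketch: it simulates single-attribute survey ratings with a company effect plus a respondent-specific random location effect, and fits a random-intercept model with statsmodels. All data values, effect sizes, and the single-attribute setup are invented for illustration; the paper's actual model is far richer (scale effects, time series components, and many attributes).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate ratings: company effect + respondent random location effect + noise.
rng = np.random.default_rng(2)
n_resp, companies = 300, ["A", "B", "C"]
company_effect = {"A": 0.0, "B": 0.4, "C": -0.3}   # assumed values

rows = []
for r in range(n_resp):
    alpha_r = rng.normal(scale=0.5)                # respondent location effect
    for c in companies:
        rating = 7 + company_effect[c] + alpha_r + rng.normal(scale=0.8)
        rows.append({"respondent": r, "company": c, "rating": rating})
df = pd.DataFrame(rows)

# Random-intercept model: rating ~ company, grouped by respondent.
fit = smf.mixedlm("rating ~ company", df, groups="respondent").fit()
print(fit.summary())
```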




Noise Reduction by Wavelet Thresholding


Book Description

Wavelet methods have become a widespread tool in signal and image processing tasks. This book deals with statistical applications, especially wavelet-based smoothing. The methods described in this text are examples of non-linear and non-parametric curve fitting. The book aims to contribute to the field both among statisticians and in the application-oriented world (including but not limited to signals and images). Although it also contains extensive analyses of some existing methods, it has no intention whatsoever of being a complete overview of the field: the text would show too much bias towards my own algorithms. I rather present new material and my own insights into the questions involved in wavelet-based noise reduction. On the other hand, the material presented does cover a whole range of methodologies, and in that sense the book may serve as an introduction to the domain of wavelet smoothing. Throughout the text, three main properties show up again and again: sparsity, locality, and multiresolution. Nearly all wavelet-based methods exploit at least one of these properties in one way or another. These notes present research results of the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture. The scientific responsibility rests with me. My research was financed by a grant (1995-1999) from the Flemish Institute for the Promotion of Scientific and Technological Research in the Industry (IWT).
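
As an illustration of the wavelet-based smoothing the book treats, the sketch below applies the standard decompose / soft-threshold / reconstruct recipe with PyWavelets. The test signal, wavelet choice (db4), and universal threshold are illustrative assumptions and do not reproduce the book's specific algorithms.

```python
import numpy as np
import pywt  # PyWavelets

# Denoise a noisy piecewise-constant signal by soft thresholding of its
# wavelet coefficients (sparsity + multiresolution at work).
rng = np.random.default_rng(3)

n = 1024
t = np.linspace(0, 1, n)
signal = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7],
                      [0.0, 1.0, 0.2])           # assumed test signal
sigma = 0.1
noisy = signal + rng.normal(scale=sigma, size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)     # multiresolution decomposition
thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")[:n]

print("noisy RMSE   :", np.sqrt(np.mean((noisy - signal) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - signal) ** 2)))
```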




Nonparametric Statistics for Stochastic Processes


Book Description

This book is devoted to the theory and applications of nonparametric functional estimation and prediction. Chapter 1 provides an overview of inequalities and limit theorems for strong mixing processes. Density and regression estimation in discrete time are studied in Chapters 2 and 3. The special rates of convergence that appear in continuous time are presented in Chapters 4 and 5. This second edition is extensively revised and contains two new chapters. Chapter 6 discusses the surprising local time density estimator. Chapter 7 gives a detailed account of the implementation of nonparametric methods, with practical examples in economics, finance, and physics. Comparison with ARMA and ARCH methods shows the efficiency of nonparametric forecasting. The prerequisite is a knowledge of classical probability theory and statistics. Denis Bosq is Professor of Statistics at the University of Paris 6 (Pierre et Marie Curie). He is Editor-in-Chief of "Statistical Inference for Stochastic Processes" and an editor of the "Journal of Nonparametric Statistics". He is an elected member of the International Statistical Institute. He has published about 90 papers or works in nonparametric statistics and four books.
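
As a concrete instance of the nonparametric regression estimation studied in Chapters 2 and 3, the following Python sketch implements the classical Nadaraya-Watson kernel estimator on simulated data. The data-generating function, sample size, and bandwidth are illustrative assumptions, not examples from the book.

```python
import numpy as np

# Nadaraya-Watson kernel regression with a Gaussian kernel.
rng = np.random.default_rng(4)

n = 500
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)  # assumed model

def nadaraya_watson(x_grid, x, y, h):
    # Kernel-weighted local average of the responses.
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

x_grid = np.linspace(0, 1, 101)
m_hat = nadaraya_watson(x_grid, x, y, h=0.05)

# Rough check of the fit against the true regression function.
print("max abs error:", np.max(np.abs(m_hat - np.sin(2 * np.pi * x_grid))))
```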




Robust Bayesian Analysis


Book Description

Robust Bayesian analysis aims at overcoming the traditional objection to Bayesian analysis, namely its dependence on subjective inputs, mainly the prior and the loss. Its purpose is to determine the impact of the inputs to a Bayesian analysis (the prior, the loss, and the model) on its output when the inputs range over certain classes. If the impact is considerable, there is sensitivity and we should attempt to further refine the information available on the incumbent classes, perhaps through additional constraints and/or by obtaining additional data; if the impact is not important, robustness holds and no further analysis or refinement is required. Robust Bayesian analysis has been widely accepted by Bayesian statisticians; for a while it was even a main research topic in the field. However, to a great extent, its impact is yet to be seen in applied settings. This volume, therefore, presents an overview of the current state of robust Bayesian methods and their applications, and identifies topics of further interest in the area. The papers in the volume are divided into nine parts covering the main aspects of the field. The first part provides an overview of Bayesian robustness at a non-technical level. The paper in Part II concerns foundational aspects and describes decision-theoretic axiomatisations leading to the robust Bayesian paradigm, giving motivating reasons why robust analysis is practically unavoidable within Bayesian analysis.
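
A minimal numerical illustration of the sensitivity analysis described above: for normal data with known variance and a class of conjugate normal priors whose mean ranges over an interval, the sketch computes the range of the resulting posterior means. The data, the prior class, and the robustness criterion (the width of that range) are illustrative assumptions, not taken from the volume.

```python
import numpy as np

# Prior-sensitivity calculation for a normal mean with known variance.
rng = np.random.default_rng(5)

sigma = 1.0
data = rng.normal(loc=2.0, scale=sigma, size=20)   # assumed data
n, xbar = data.size, data.mean()

tau = 1.0                                  # prior standard deviation (fixed)
prior_means = np.linspace(-1.0, 1.0, 201)  # class of priors: mu0 in [-1, 1]

# Conjugate update: the posterior mean is a precision-weighted average
# of the sample mean and the prior mean.
w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
posterior_means = w * xbar + (1 - w) * prior_means

print("posterior mean ranges over [%.3f, %.3f]"
      % (posterior_means.min(), posterior_means.max()))
# A narrow range suggests robustness to the prior; a wide range signals
# sensitivity and the need to refine the prior information.
```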