Robust Correlation


Book Description

This book presents material on both the analysis of the classical concepts of correlation and the development of their robust versions, and discusses the related concepts of correlation matrices, partial correlation, canonical correlation, and rank correlations, together with the corresponding robust and non-robust estimation procedures. Every chapter contains a set of examples with simulated and real-life data. Key features: - Makes modern and robust correlation methods readily available and understandable to practitioners, specialists, and consultants working in various fields. - Focuses on the implementation of the methodology and the application of robust correlation with R. - Introduces the main approaches in robust statistics, such as Huber’s minimax approach and Hampel’s approach based on influence functions. - Explores various robust estimates of the correlation coefficient, including the minimax variance and bias estimates as well as the most B- and V-robust estimates. - Contains applications of robust correlation methods to exploratory data analysis, multivariate statistics, statistics of time series, and real-life data. - Includes an accompanying website featuring computer code and datasets. - Features exercises and examples throughout the text using both small and large data sets. Theoretical and applied statisticians, specialists in multivariate statistics, robust statistics, robust time series analysis, data analysis, and signal processing will benefit from this book. Practitioners who use correlation-based methods in their work, as well as postgraduate students in statistics, will also find this book useful.
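
As a quick illustration of the kind of robustness issue the book addresses, the hedged sketch below contrasts the classical Pearson correlation with the rank-based Spearman and Kendall correlations on data contaminated by a single outlier. The simulated dataset and parameter choices are illustrative assumptions, not taken from the book.

```python
# Illustrative sketch (not from the book): classical vs. rank-based correlation
# under contamination by a single gross outlier. The data are simulated here
# purely for demonstration; all parameter values are arbitrary assumptions.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)   # roughly linear relationship

# Contaminate one observation with a gross outlier.
x_c, y_c = x.copy(), y.copy()
x_c[0], y_c[0] = 10.0, -10.0

for name, (a, b) in [("clean", (x, y)), ("contaminated", (x_c, y_c))]:
    r_p, _ = pearsonr(a, b)
    r_s, _ = spearmanr(a, b)
    r_k, _ = kendalltau(a, b)
    print(f"{name:13s}  Pearson={r_p:+.3f}  Spearman={r_s:+.3f}  Kendall={r_k:+.3f}")
# The Pearson coefficient typically moves much more under contamination than
# the rank correlations, which is the motivation for robust alternatives.
```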




Introduction to Robust Estimation and Hypothesis Testing


Book Description

"This book focuses on the practical aspects of modern and robust statistical methods. The increased accuracy and power of modern methods, versus conventional approaches to the analysis of variance (ANOVA) and regression, is remarkable. Through a combination of theoretical developments, improved and more flexible statistical methods, and the power of the computer, it is now possible to address problems with standard methods that seemed insurmountable only a few years ago"--




Robust Statistics


Book Description

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This is a nice book containing a wealth of information, much of it due to the authors. . . . If an instructor designing such a course wanted a textbook, this book would be the best choice available. . . . There are many stimulating exercises, and the book also contains an excellent index and an extensive list of references." —Technometrics "[This] book should be read carefully by anyone who is interested in dealing with statistical models in a realistic fashion." —American Scientist Introducing concepts, theory, and applications, Robust Statistics is accessible to a broad audience, avoiding allusions to high-powered mathematics while emphasizing ideas, heuristics, and background. The text covers the approach based on the influence function (the effect of an outlier on an estimator, for example) and related notions such as the breakdown point. It also treats the change-of-variance function, fundamental concepts and results in the framework of estimation of a single parameter, and applications to estimation of covariance matrices and regression parameters.
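
To make the influence-function idea concrete, here is a small hedged sketch: it computes an empirical (finite-sample) sensitivity curve for the mean and the median by adding one outlier of varying size to a fixed sample, showing the bounded effect on the median versus the unbounded effect on the mean. The sample and the grid of outlier values are illustrative assumptions only.

```python
# Illustrative sketch (assumptions, not from the book): empirical sensitivity
# curves for the sample mean and the sample median. One extra observation x0
# is appended to a fixed sample and the resulting shift in each estimate is
# recorded; the mean shifts without bound while the median's shift is bounded.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(size=50)          # fixed "clean" sample (arbitrary choice)
base_mean, base_median = sample.mean(), np.median(sample)

for x0 in (1.0, 10.0, 100.0, 1000.0): # increasingly extreme added outlier
    augmented = np.append(sample, x0)
    d_mean = augmented.mean() - base_mean
    d_median = np.median(augmented) - base_median
    print(f"outlier {x0:7.1f}:  mean shift = {d_mean:9.4f}   median shift = {d_median:7.4f}")
# The bounded median shift reflects its bounded influence function and high
# breakdown point; the mean's shift grows linearly with the outlier.
```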




Introduction to Robust Estimation and Hypothesis Testing


Book Description

This revised book provides a thorough explanation of the foundations of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (Analysis of Variance), and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations. Introduction to Robust Estimation and Hypothesis Testing, Second Edition, focuses on the practical applications of modern, robust methods, which can greatly enhance our chances of detecting true differences among groups and true associations among variables. - Covers the latest developments in robust regression - Covers the latest improvements in ANOVA - Includes the newest rank-based methods - Describes and illustrates easy-to-use software
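
The description mentions robust regression; as one concrete example of the general idea (not necessarily one of the specific estimators covered in the book), the hedged sketch below fits a straight line by Huber M-estimation using iteratively reweighted least squares, so that a few outlying responses are downweighted instead of dominating the fit. The simulated data and the tuning constant are illustrative assumptions.

```python
# Illustrative sketch (an assumption-laden example, not the book's own code):
# simple-linear-regression fit by Huber M-estimation via iteratively
# reweighted least squares (IRLS). Outlying responses receive weight < 1.
import numpy as np

def huber_line_fit(x, y, c=1.345, n_iter=50):
    """Fit y ~ a + b*x with Huber weights; c is the usual tuning constant."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # start from ordinary LS
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
        u = r / (c * max(s, 1e-12))
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))  # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=x.size)
y[:5] += 25.0                                          # a few gross outliers

ls = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
hub = huber_line_fit(x, y)
print("ordinary LS intercept/slope:", np.round(ls, 3))
print("Huber IRLS intercept/slope:", np.round(hub, 3))
```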




Robust Portfolio Optimization and Management


Book Description

Praise for Robust Portfolio Optimization and Management "In the half century since Harry Markowitz introduced his elegant theory for selecting portfolios, investors and scholars have extended and refined its application to a wide range of real-world problems, culminating in the contents of this masterful book. Fabozzi, Kolm, Pachamanova, and Focardi deserve high praise for producing a technically rigorous yet remarkably accessible guide to the latest advances in portfolio construction." --Mark Kritzman, President and CEO, Windham Capital Management, LLC "The topic of robust optimization (RO) has become 'hot' over the past several years, especially in real-world financial applications. This interest has been sparked, in part, by practitioners who implemented classical portfolio models for asset allocation without considering estimation and model robustness a part of their overall allocation methodology, and experienced poor performance. Anyone interested in these developments ought to own a copy of this book. The authors cover the recent developments of the RO area in an intuitive, easy-to-read manner, provide numerous examples, and discuss practical considerations. I highly recommend this book to finance professionals and students alike." --John M. Mulvey, Professor of Operations Research and Financial Engineering, Princeton University







Discrete Time Series, Processes, and Applications in Finance


Book Description

Most financial and investment decisions are based on considerations of possible future changes and require forecasts on the evolution of the financial world. Time series and processes are the natural tools for describing the dynamic behavior of financial data, leading to the required forecasts. This book presents a survey of the empirical properties of financial time series, their descriptions by means of mathematical processes, and some implications for important financial applications used in many areas such as risk evaluation, option pricing, or portfolio construction. The statistical tools used to extract information from raw data are introduced. Extensive multiscale empirical statistics provide a solid benchmark of stylized facts (heteroskedasticity, long memory, fat tails, leverage...), in order to assess various mathematical structures that can capture the observed regularities. The author introduces a broad range of processes and evaluates them systematically against the benchmark, summarizing the successes and limitations of these models from an empirical point of view. The outcome is that only multiscale ARCH processes with long memory, discrete multiplicative structures, and non-normal innovations are able to correctly capture the empirical properties. In particular, only a discrete time series framework allows all the stylized facts to be captured in a single process, whereas the stochastic calculus used in the continuum limit is too constraining. The present volume offers various applications and extensions for this class of processes, including high-frequency volatility estimators, market risk evaluation, covariance estimation, and multivariate extensions of the processes. The book discusses many practical implications and is addressed to practitioners and quants in the financial industry, as well as to academics, including graduate (Master or PhD level) students. The prerequisites are basic statistics and some elementary financial mathematics.
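
As a hedged illustration of the ARCH family of processes that the book evaluates against its benchmark of stylized facts, the short sketch below simulates a plain GARCH(1,1) process (a much simpler relative of the multiscale, long-memory processes the author advocates) and checks two of the stylized facts named above: volatility clustering and fat tails. All parameter values are illustrative assumptions.

```python
# Illustrative sketch (simplified assumption): simulate a plain GARCH(1,1)
# process and inspect two stylized facts mentioned in the description,
# heteroskedasticity (volatility clustering) and fat-tailed returns.
# The multiscale long-memory processes discussed in the book are richer.
import numpy as np

rng = np.random.default_rng(3)
T = 20_000
omega, alpha, beta = 1e-6, 0.09, 0.90          # arbitrary but stationary choice

r = np.empty(T)                                # "returns"
sigma2 = np.empty(T)                           # conditional variances
sigma2[0] = omega / (1.0 - alpha - beta)       # start at unconditional variance
for t in range(T):
    if t > 0:
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.normal()

z = (r - r.mean()) / r.std()
kurtosis_excess = np.mean(z ** 4) - 3.0        # > 0 indicates fat tails
acf_abs_lag1 = np.corrcoef(np.abs(r[:-1]), np.abs(r[1:]))[0, 1]  # clustering
print(f"excess kurtosis of returns: {kurtosis_excess:.2f}")
print(f"lag-1 autocorrelation of |returns|: {acf_abs_lag1:.2f}")
```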




Intracranial Pressure and Brain Monitoring XIV


Book Description

Nearly 80 short papers originating from the 14th International Symposium on Intracranial Pressure and Brain Monitoring, held in Tuebingen, Germany, in September 2010, present experimental as well as clinical research data related to the topics named in the conference title. The papers have undergone peer review and are organized in the following sections: methods of brain monitoring and data analysis, methods of invasive and non-invasive ICP assessment, the role of autoregulation, the role of tissue oxygenation and near-infrared spectroscopy, hydrocephalus/IIH imaging and diagnosis, management and therapy of hydrocephalus, management and therapy of traumatic brain injury, management and therapy of subarachnoid and intracranial hemorrhage, and experimental approaches to acute brain disease. The book gives a good overview of the latest research developments in the field of ICP and related brain monitoring, and of the management and therapy of relevant acute brain diseases.




Correlation Pattern Recognition


Book Description

Correlation is a robust and general technique for pattern recognition and is used in many applications, such as automatic target recognition, biometric recognition, and optical character recognition. The design, analysis, and use of correlation pattern recognition algorithms require background information, including linear systems theory, random variables and processes, matrix/vector methods, detection and estimation theory, digital signal processing, and optical processing. This book provides a needed review of this diverse background material and develops the signal processing theory, the pattern recognition metrics, and the practical application know-how from basic premises. It shows both digital and optical implementations. It also contains technology presented by the team that developed it and includes case studies of significant interest, such as face and fingerprint recognition. The book is suitable for graduate students taking courses in pattern recognition theory, while also reaching technical levels of interest to the professional practitioner.
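
Since the description centres on correlation as a pattern-recognition tool, the hedged sketch below shows the most basic version of the idea: cross-correlation of a small zero-mean template against a larger image, with the correlation peak locating the pattern. The synthetic image, the toy template, and the use of scipy.signal.correlate2d are illustrative assumptions; the correlation filters developed in the book are considerably more sophisticated.

```python
# Illustrative sketch (an assumed, minimal example rather than the book's
# filter designs): locate a small template in a larger 2-D "image" by
# cross-correlation; the peak of the correlation surface marks the match.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(4)
image = rng.normal(scale=0.2, size=(64, 64))   # noisy background (synthetic)
template = np.array([[0.0, 1.0, 0.0],
                     [1.0, 2.0, 1.0],
                     [0.0, 1.0, 0.0]])         # toy pattern (assumption)

row, col = 40, 17                              # plant the pattern in the image
image[row:row + 3, col:col + 3] += template

# Zero-mean the template so flat bright regions do not dominate the response.
response = correlate2d(image, template - template.mean(), mode="same")
peak = np.unravel_index(np.argmax(response), response.shape)
print("correlation peak at:", peak)            # near the centre of the pattern
```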




Correlation Clustering


Book Description

Given a set of objects and a pairwise similarity measure between them, the goal of correlation clustering is to partition the objects into a set of clusters so as to maximize the similarity of the objects within the same cluster and minimize the similarity of the objects in different clusters. In most of the variants of correlation clustering, the number of clusters is not a given parameter; instead, the optimal number of clusters is determined automatically. Correlation clustering is perhaps the most natural formulation of clustering: as it only needs a definition of similarity, its broad generality makes it applicable to a wide range of problems in different contexts and, in particular, makes it naturally suitable for clustering structured objects for which feature vectors can be difficult to obtain. Despite its simplicity, generality, and wide applicability, correlation clustering has so far received much more attention from the algorithmic-theory community than from the data-mining community. The goal of this lecture is to show how correlation clustering can be a powerful addition to the toolkit of a data-mining researcher and practitioner, and to encourage further research in the area.
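
To make the problem statement concrete, the hedged sketch below implements one well-known, simple approach, the randomised pivot heuristic: repeatedly pick a random unclustered object as a pivot and put all remaining objects similar to it into the pivot's cluster. The similarity matrix here is a tiny made-up example, and the choice of this particular algorithm is an assumption rather than something taken from the lecture.

```python
# Illustrative sketch (assumed example, not the lecture's own material):
# the randomised pivot heuristic for correlation clustering on +1/-1
# similarities. Objects similar (+1) to a randomly chosen pivot join its
# cluster; the number of clusters is not fixed in advance.
import numpy as np

def pivot_correlation_clustering(sim, rng):
    """sim[i, j] = +1 if i and j are similar, -1 otherwise (symmetric)."""
    n = sim.shape[0]
    unclustered = list(range(n))
    clusters = []
    while unclustered:
        pivot = unclustered[rng.integers(len(unclustered))]
        cluster = [pivot] + [j for j in unclustered
                             if j != pivot and sim[pivot, j] > 0]
        clusters.append(sorted(cluster))
        unclustered = [j for j in unclustered if j not in cluster]
    return clusters

# Tiny made-up similarity matrix: objects {0,1,2} and {3,4} form two groups.
sim = np.array([[ 1,  1,  1, -1, -1],
                [ 1,  1,  1, -1, -1],
                [ 1,  1,  1, -1, -1],
                [-1, -1, -1,  1,  1],
                [-1, -1, -1,  1,  1]])

rng = np.random.default_rng(5)
print(pivot_correlation_clustering(sim, rng))   # e.g. [[0, 1, 2], [3, 4]]
```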