Nonparametric Statistics on Manifolds and Their Applications to Object Data Analysis


Book Description

A New Way of Analyzing Object Data from a Nonparametric Viewpoint. Nonparametric Statistics on Manifolds and Their Applications to Object Data Analysis provides one of the first thorough treatments of the theory and methodology for analyzing data on manifolds. It also presents in-depth applications to practical problems arising in a variety of fields.




Nonparametric Inference on Manifolds


Book Description

Ideal for statisticians, this book will also interest probabilists, mathematicians, computer scientists, and morphometricians with mathematical training. It presents a systematic introduction to a general nonparametric theory of statistics on manifolds, with emphasis on manifolds of shapes. The theory has important applications in medical diagnostics, image analysis and machine vision.




A Course in Mathematical Statistics and Large Sample Theory


Book Description

This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper-division course in analysis, and some acquaintance with measure-theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics, both parametric and nonparametric, and its contents may also be covered in one semester. Part III provides brief accounts of a number of topics of current interest to practitioners and to other disciplines whose work involves statistical methods.




Object Oriented Data Analysis


Book Description

Object Oriented Data Analysis is a framework that facilitates interdisciplinary research by providing new terminology for discussing the many possible approaches to the analysis of complex data. Such data arise naturally in a wide variety of areas. This book aims to provide ways of thinking that enable sensible choices to be made. The main points are illustrated with many real data examples, based on the authors' personal experiences, which have motivated the invention of a wide array of analytic methods. While the mathematics goes far beyond what is usual in statistics (including differential geometry and even topology), the book is written to be accessible to graduate students, with a deliberate focus on ideas over mathematical formulas.

J. S. Marron is the Amos Hawley Distinguished Professor of Statistics, Professor of Biostatistics, Adjunct Professor of Computer Science, Faculty Member of the Bioinformatics and Computational Biology Curriculum, and Research Member of the Lineberger Cancer Center and the Computational Medicine Program at the University of North Carolina, Chapel Hill. Ian L. Dryden is a Professor in the Department of Mathematics and Statistics at Florida International University in Miami, has served as Head of the School of Mathematical Sciences at the University of Nottingham, and is joint author of the acclaimed book Statistical Shape Analysis.




Statistical Shape and Deformation Analysis


Book Description

Statistical Shape and Deformation Analysis: Methods, Implementation and Applications contributes to solving a range of problems in patient care and physical anthropology, from improved automatic registration and segmentation in medical image computing to the study of genetics, evolution and comparative form in physical anthropology and biology. The book gives a clear description of the concepts, methods, algorithms and techniques developed over the last three decades, followed by examples of their implementation using open-source software. Applications of statistical shape and deformation analysis are given for a wide variety of fields, including biometry, anthropology, medical image analysis and clinical practice.

- Presents an accessible introduction to the basic concepts, methods, algorithms and techniques in statistical shape and deformation analysis
- Includes implementation examples using open-source software
- Covers real-life applications of statistical shape and deformation analysis methods




Statistical Shape Analysis


Book Description

A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis. Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing the growth of organisms in biology. This book is a significant update of the highly regarded 'Statistical Shape Analysis' by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while retaining sufficient detail for more specialist statisticians to appreciate the challenges and opportunities of this new field. Computer code has been included for instructional use, along with exercises to enable readers to implement the applications themselves in R and to follow the key ideas by hands-on analysis. Statistical Shape Analysis: with Applications in R will offer a valuable introduction to this fast-moving research area for statisticians and other applied scientists working in diverse areas, including archaeology, bioinformatics, biology, chemistry, computer science, medicine, morphometrics and image analysis.




Riemannian Geometric Statistics in Medical Image Analysis


Book Description

Over the past 15 years, there has been a growing need in the medical image computing community for principled methods to process nonlinear geometric data. Riemannian geometry has emerged as one of the most powerful mathematical and computational frameworks for analyzing such data. Riemannian Geometric Statistics in Medical Image Analysis is a complete reference on statistics on Riemannian manifolds and more general nonlinear spaces, with applications in medical image analysis. It provides an introduction to the core methodology, followed by a presentation of state-of-the-art methods. Beyond medical image computing, the methods described in this book may also apply to other domains such as signal processing, computer vision, geometric deep learning, and any domain where statistics on geometric features appear. As such, the presented core methodology takes its place in the field of geometric statistics, the statistical analysis of data that are elements of nonlinear geometric spaces. The foundational material and the advanced techniques presented in the later parts of the book can be useful in domains outside medical imaging and present important applications of geometric statistics methodology.

Content includes:
- The foundations of Riemannian geometric methods for statistics on manifolds, with emphasis on concepts rather than on proofs
- Applications of statistics on manifolds and shape spaces in medical image computing
- Diffeomorphic deformations and their applications

As the methods described apply to domains such as signal processing (radar signal processing and brain-computer interaction), computer vision (object and face recognition), and other domains where statistics of geometric features appear, this book is suitable for researchers and graduate students in medical imaging, engineering and computer science.

- A complete reference covering both the foundations and state-of-the-art methods
- Edited and authored by leading researchers in the field
- Contains theory, examples, applications, and algorithms
- Gives an overview of current research challenges and future applications
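As a minimal illustration (not taken from the book itself) of what "statistics on Riemannian manifolds" means in practice, the most basic such statistic is the Fréchet (intrinsic) mean, which replaces the arithmetic mean by a minimizer of squared geodesic distances. For observations x_1, ..., x_n on a Riemannian manifold M with geodesic distance d, it is defined as

\[
\bar{x} \;=\; \operatorname*{arg\,min}_{m \in M} \; \frac{1}{n} \sum_{i=1}^{n} d(m, x_i)^2 ,
\]

which reduces to the ordinary sample mean when M is Euclidean space with the usual distance; on curved spaces it must generally be computed iteratively and need not be unique.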




Functional and High-Dimensional Statistics and Related Fields


Book Description

This book presents the latest research on the statistical analysis of functional, high-dimensional and other complex data, addressing methodological and computational aspects, as well as real-world applications. It covers topics like classification, confidence bands, density estimation, depth, diagnostic tests, dimension reduction, estimation on manifolds, high- and infinite-dimensional statistics, inference on functional data, networks, operatorial statistics, prediction, regression, robustness, sequential learning, small-ball probability, smoothing, spatial data, testing, and topological object data analysis, and includes applications in automobile engineering, criminology, drawing recognition, economics, environmetrics, medicine, mobile phone data, spectrometrics and urban environments. The book gathers selected, refereed contributions presented at the Fifth International Workshop on Functional and Operatorial Statistics (IWFOS) in Brno, Czech Republic. The workshop was originally to be held on June 24-26, 2020, but had to be postponed as a consequence of the COVID-19 pandemic. Initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008, the IWFOS workshops provide a forum to discuss the latest trends and advances in functional statistics and related fields, and foster the exchange of ideas and international collaboration in the field.




Minimum Divergence Methods in Statistical Machine Learning


Book Description

This book explores minimum divergence methods of statistical machine learning for estimation, regression, prediction, and so forth, using information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is obtained by minimizing the sum of squares between the response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator is obtained by minimizing an empirical analogue of the Kullback-Leibler (KL) divergence between the data distribution and a parametric distribution from the exponential model. Such minimization procedures admit a geometric interpretation in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding reveals a dualistic interplay between statistical estimation and the statistical model, which requires dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry.

This dualistic structure of the MLE and the exponential model is extended to that of the minimum divergence estimator and the maximum entropy model, which is applied to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth. A variety of information divergence measures, typically including the KL divergence, are considered to express the departure of one probability distribution from another. An information divergence decomposes into a cross-entropy and a (diagonal) entropy: the entropy is associated with a generative model, viewed as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue based on the given data. Thus any statistical divergence intrinsically pairs a generative model with an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence induces a Riemannian metric and a pair of linear connections in the framework of information geometry.

The book focuses on a class of information divergences generated by an increasing and convex function U, called the U-divergence. It is shown that any generator function U yields a U-entropy and a U-divergence, with a dualistic structure between the minimum U-divergence method and the maximum U-entropy model. A specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is the exponential function, the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning to predict a class label, the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected. The book presents such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U to provide flexible performance in statistical machine learning.
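A short worked identity, not drawn from the book itself but spelling out the standard fact the description cites, namely that the MLE is a minimum KL divergence estimator. For observations x_1, ..., x_n with empirical distribution \hat{p} (discrete case, for simplicity) and a parametric model p_\theta,

\[
D_{\mathrm{KL}}(\hat{p} \,\|\, p_\theta)
\;=\; \sum_{x} \hat{p}(x)\,\log\frac{\hat{p}(x)}{p_\theta(x)}
\;=\; -\frac{1}{n}\sum_{i=1}^{n} \log p_\theta(x_i) \;-\; H(\hat{p}),
\]

where the entropy term H(\hat{p}) does not depend on \theta, so

\[
\hat{\theta}_{\mathrm{MLE}}
\;=\; \operatorname*{arg\,max}_{\theta} \sum_{i=1}^{n} \log p_\theta(x_i)
\;=\; \operatorname*{arg\,min}_{\theta} D_{\mathrm{KL}}(\hat{p} \,\|\, p_\theta).
\]

The U-divergence framework described above generalizes exactly this correspondence by replacing the KL divergence with a divergence generated by a convex function U.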




Uncertainty Modelling in Data Science


Book Description

This book features 29 peer-reviewed papers presented at the 9th International Conference on Soft Methods in Probability and Statistics (SMPS 2018), which was held in conjunction with the 5th International Conference on Belief Functions (BELIEF 2018) in Compiègne, France on September 17–21, 2018. It includes foundational, methodological and applied contributions on topics as varied as imprecise data handling, linguistic summaries, model coherence, imprecise Markov chains, and robust optimisation. These proceedings were produced using EasyChair. Over recent decades, interest in extensions and alternatives to probability and statistics has increased significantly in diverse areas, including decision-making, data mining and machine learning, and optimisation. This interest stems from the need to enrich existing models, in order to include different facets of uncertainty, like ignorance, vagueness, randomness, conflict or imprecision. Frameworks such as rough sets, fuzzy sets, fuzzy random variables, random sets, belief functions, possibility theory, imprecise probabilities, lower previsions, and desirable gambles all share this goal, but have emerged from different needs. The advances, results and tools presented in this book are important in the ubiquitous and fast-growing fields of data science, machine learning and artificial intelligence. Indeed, an important aspect of some of the learned predictive models is the trust placed in them. Modelling the uncertainty associated with the data and the models carefully and with principled methods is one of the means of increasing this trust, as the model will then be able to distinguish between reliable and less reliable predictions. In addition, extensions such as fuzzy sets can be explicitly designed to provide interpretable predictive models, facilitating user interaction and increasing trust.