Tracking with Particle Filter for High-dimensional Observation and State Spaces


Book Description

This title concerns the use of a particle filter framework to track objects defined in high-dimensional state spaces using high-dimensional observation spaces. Current tracking applications require us to consider complex models for objects (articulated objects, multiple objects, multiple fragments, etc.) as well as multiple kinds of information (multiple cameras, multiple modalities, etc.). This book presents recent research that addresses the main bottleneck of particle filtering frameworks, high-dimensional state spaces, for tracking in such difficult conditions.




Particle Filter


Book Description

What is a particle filter? Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, with applications in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states of a dynamical system when only partial observations are made and random perturbations are present both in the sensors and in the dynamical system itself. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral in reference to mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s. The term "sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.

How you will benefit

(I) Insights and validations about the following topics:
Chapter 1: Particle filter
Chapter 2: Importance sampling
Chapter 3: Point process
Chapter 4: Fokker-Planck equation
Chapter 5: Wiener's lemma
Chapter 6: Klein-Kramers equation
Chapter 7: Mean-field particle methods
Chapter 8: Dirichlet kernel
Chapter 9: Generalized Pareto distribution
Chapter 10: Superprocess

(II) Answers to the public's top questions about particle filters.

(III) Real-world examples of the usage of particle filters in many fields.

Who this book is for

Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and those who want to go beyond basic knowledge of particle filters.
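The filtering recursion described above, propagating particles through the dynamics, weighting them by the observation likelihood, and resampling, can be sketched as a minimal bootstrap particle filter. The one-dimensional nonlinear model, noise levels, and particle count below are illustrative assumptions, not details taken from this book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D nonlinear state-space model (illustrative only):
#   x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + process noise
#   y_t = x_t^2 / 20 + observation noise
def step(x):
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal(0.0, 1.0, size=x.shape)

def likelihood(y, x):
    # Gaussian observation noise with unit standard deviation
    return np.exp(-0.5 * (y - x**2 / 20.0) ** 2)

def bootstrap_filter(ys, n_particles=500):
    particles = rng.normal(0.0, 2.0, size=n_particles)  # draw from the prior
    means = []
    for y in ys:
        particles = step(particles)            # propagate through the dynamics
        w = likelihood(y, particles) + 1e-300  # weight by observation fit
        w /= w.sum()
        means.append(np.sum(w * particles))    # posterior-mean estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]             # multinomial resampling
    return np.array(means)

# Simulate a short trajectory from the same model and filter it
x, ys = 0.0, []
for _ in range(30):
    x = float(step(np.array([x]))[0])
    ys.append(x**2 / 20.0 + rng.normal())
est = bootstrap_filter(np.array(ys))
```

The resampling step is what distinguishes the particle filter from plain importance sampling: it concentrates the ensemble on high-likelihood regions before the next propagation.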







Particle Filters for High Dimensional Spatial Systems


Book Description

The objective of this work is to develop new filtering methodologies that allow state-space models to be applied to high-dimensional spatial systems with fewer and less restrictive assumptions than the currently practical methods. Reducing the assumptions increases the range of systems that the state-space framework can be applied to, and therefore the range of systems for which the uncertainty in estimates can be quantified and statements about the risk of particular outcomes made. The particle filter was developed to meet this objective because restrictive assumptions are fundamental to the alternative methods. Two barriers to applying particle filters to high-dimensional spatial systems were identified. The first barrier is the lack of a flexible and practically applicable high-dimensional noise distribution for the evolution equation in the case of non-negative states. The second barrier is the tendency of the Monte Carlo ensemble approximating the state distribution updated by observations to collapse down to a single point. The first barrier is overcome by defining the evolution equation noise distribution using very flexible meta-elliptical distributions. The second barrier is overcome by using a particle smoother across a sequence of spatial locations to generate the Monte Carlo ensemble. Because this location-domain particle smoother only considers one location at a time, the dimensionality of the sampling problem is reduced and a diverse ensemble can be generated. The location-domain particle smoother requires that the evolution noise distribution be defined using a meta-elliptical distribution and that the observation errors at different locations are independent. If the system has spatial resolution that is 'too fine' and there are 'too many' observed locations, then the number of distinct particles can fall below an acceptable level at the beginning of the location sequence. A second method for overcoming ensemble collapse is proposed for these systems.
In the second method a particle smoother is used to generate separate samples from the marginal state distributions at each location. The marginal samples are combined into a single sample from the joint state distribution spanning all of the locations using a copula. This second method requires that the state distribution is meta-elliptical and that the observation errors at different locations are independent. The assumptions required by the proposed methods are fewer and far less restrictive than the assumptions required by currently practical methods. The statistical properties of the new methods are explored in a simulation study and found to outperform a standard particle filter and the popular ensemble Kalman filter when the Kalman assumptions are violated. A demonstration of the new methods on a real example is also provided.
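The ensemble collapse that motivates this work is easy to demonstrate numerically: as the number of observed locations (dimensions) grows, the importance weights of a standard particle filter degenerate and the effective sample size falls toward one. The Gaussian prior, unit-variance independent observation errors, and particle count below are illustrative assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def effective_sample_size(log_w):
    # ESS = 1 / sum(normalized_w^2), computed stably in log space
    log_w = log_w - log_w.max()
    w = np.exp(log_w)
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def ess_for_dimension(d, n_particles=1000, obs_std=1.0):
    # Particles drawn from a standard normal prior in d dimensions;
    # one observation y = 0 at every location, with independent errors.
    particles = rng.normal(size=(n_particles, d))
    # Gaussian log-likelihood of y = 0 at each location, summed over locations
    log_w = -0.5 * np.sum((particles / obs_std) ** 2, axis=1)
    return effective_sample_size(log_w)

ess_low = ess_for_dimension(2)     # low-dimensional: healthy ensemble
ess_high = ess_for_dimension(100)  # high-dimensional: near-total collapse
```

With only two observed locations most of the 1000 particles carry usable weight; with one hundred, a single particle dominates, which is exactly the collapse the location-domain smoother is designed to avoid.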




Particle Filters and Data Assimilation


Book Description

State-space models can be used to incorporate subject knowledge on the underlying dynamics of a time series by the introduction of a latent Markov state process. A user can specify the dynamics of this process together with how the state relates to partial and noisy observations that have been made. Inference and prediction then involve solving a challenging inverse problem: calculating the conditional distribution of quantities of interest given the observations. This article reviews Monte Carlo algorithms for solving this inverse problem, covering methods based on the particle filter and the ensemble Kalman filter. We discuss the challenges posed by models with high-dimensional states, joint estimation of parameters and the state, and inference for the history of the state process. We also point out some potential new developments that will be important for tackling cutting-edge filtering applications.
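As a rough illustration of the ensemble Kalman filter mentioned above, here is a minimal stochastic-EnKF analysis step. The linear observation operator, toy two-dimensional state, and ensemble size are assumptions made for the sketch, not details from the article:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, y, H, obs_std):
    """Stochastic ensemble Kalman filter analysis step.
    ensemble: (n_members, d) forecast ensemble; y: (m,) observation;
    H: (m, d) linear observation operator; obs_std: observation error std."""
    n, _ = ensemble.shape
    m = len(y)
    X = ensemble - ensemble.mean(axis=0)          # ensemble anomalies
    P = X.T @ X / (n - 1)                          # sample forecast covariance
    R = obs_std**2 * np.eye(m)                     # observation error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturbed observations give the analysis ensemble the correct spread
    y_pert = y + rng.normal(0.0, obs_std, size=(n, m))
    innov = y_pert - ensemble @ H.T                # innovations per member
    return ensemble + innov @ K.T

# Toy example: 2-D state, observe only the first component
ens = rng.normal(0.0, 2.0, size=(200, 2))
H = np.array([[1.0, 0.0]])
analysis = enkf_update(ens, np.array([1.5]), H, obs_std=0.5)
```

The update pulls the ensemble mean toward the observation and shrinks the spread of the observed component, while the sample covariance spreads the correction to unobserved components; it is this Gaussian (Kalman) assumption that particle filters relax.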




Nonlinear Data Assimilation


Book Description

This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters. The first contribution, by Jan van Leeuwen, focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging to particle filters that work well in systems of any dimension, and closes with a high-dimensional example. The second contribution, by Cheng and Reich, discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and allows a proper introduction of localization in particle filters, which has been lacking up to now.




Genetic Algorithms in Search, Optimization, and Machine Learning


Book Description

A gentle introduction to genetic algorithms. Genetic algorithms revisited: mathematical foundations. Computer implementation of a genetic algorithm. Some applications of genetic algorithms. Advanced operators and techniques in genetic search. Introduction to genetics-based machine learning. Applications of genetics-based machine learning. A look back, a glance ahead. A review of combinatorics and elementary probability. Pascal with random number generation for Fortran, BASIC, and COBOL programmers. A simple genetic algorithm (SGA) in Pascal. A simple classifier system (SCS) in Pascal. Partition coefficient transforms for problem-coding analysis.
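As a rough modern companion to the book's "simple genetic algorithm (SGA) in Pascal", here is a comparable sketch in Python. The OneMax objective, tournament selection, and parameter settings are illustrative choices for the sketch, not the book's own listing (which uses roulette-wheel selection):

```python
import random

random.seed(0)

# Simple genetic algorithm maximizing OneMax (number of 1-bits) as a toy objective
BITS, POP, GENS = 20, 40, 60
P_CROSS, P_MUT = 0.9, 1.0 / BITS

def fitness(chrom):
    return sum(chrom)

def tournament(pop):
    # Binary tournament selection: the fitter of two random individuals wins
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover with probability P_CROSS
    if random.random() < P_CROSS:
        cut = random.randrange(1, BITS)
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(chrom):
    # Flip each bit independently with probability P_MUT
    return [bit ^ (random.random() < P_MUT) for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
best = max(pop, key=fitness)
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
    best = max(pop + [best], key=fitness)  # track best-ever individual
```

Selection, crossover, and mutation are the three operators the book's mathematical foundations chapter analyzes; everything else here is bookkeeping.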




Applications of Evolutionary Computing


Book Description

Evolutionary Computation (EC) deals with problem solving, optimization, and machine learning techniques inspired by principles of natural evolution and genetics. Just from this basic definition, it is clear that one of the main features of the research community involved in the study of its theory and in its applications is multidisciplinarity. For this reason, EC has been able to draw the attention of an ever-increasing number of researchers and practitioners in several fields. In its 6-year-long activity, EvoNet, the European Network of Excellence in Evolutionary Computing, has been the natural reference and incubator for that multifaceted community. EvoNet has provided logistic and material support for those who were already involved in EC but, in the first place, it has had a critical role in favoring the significant growth of the EC community and its interactions with longer-established ones. The main instrument that has made this possible has been the series of events, first organized in 1998, that have spanned both theoretical and practical aspects of EC. Ever since 1999, the present format, in which the EvoWorkshops, a collection of workshops on the most application-oriented aspects of EC, act as satellites of a core event, has proven to be very successful and very representative of the multidisciplinarity of EC. Up to 2003, the core was represented by EuroGP, the main European event dedicated to Genetic Programming. EuroGP was joined as the main event in 2004 by EvoCOP, formerly part of EvoWorkshops, which has become the European Conference on Evolutionary Computation in Combinatorial Optimization.