Probabilistic Numerics


Book Description

A thorough introduction to probabilistic numerics showing how to build more flexible, efficient, or customised algorithms for computation.




Operator-Adapted Wavelets, Fast Solvers, and Numerical Homogenization


Book Description

Although numerical approximation and statistical inference are traditionally covered as entirely separate subjects, they are intimately connected through the common purpose of making estimations with partial information. This book explores these connections from a game- and decision-theoretic perspective, showing how they constitute a pathway to developing simple and general methods for solving fundamental problems in both areas. It illustrates these interplays by addressing problems related to numerical homogenization, operator-adapted wavelets, fast solvers, and Gaussian processes. This perspective reveals much of their essential anatomy and greatly facilitates advances in these areas, suggesting a general principle for guiding the process of scientific discovery. This book is designed for graduate students, researchers, and engineers in mathematics, applied mathematics, and computer science, and particularly for researchers interested in drawing on and developing this interface between approximation, inference, and learning.
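As a small illustration of the approximation-as-inference viewpoint this description refers to, the sketch below (not taken from the book; the kernel, test function, and evaluation points are illustrative assumptions) conditions a Gaussian process prior on a handful of exact function evaluations. The posterior mean is simultaneously a statistical estimate and a classical kernel interpolant, while the posterior variance quantifies the approximation error that remains given only partial information.

import numpy as np

def rbf(x, y, ell=0.3):
    # Squared-exponential covariance between two 1-D point sets.
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

f = lambda x: np.sin(2 * np.pi * x)      # the quantity of interest
X = np.linspace(0.0, 1.0, 8)             # partial information: 8 exact evaluations
y = f(X)

Xs = np.linspace(0.0, 1.0, 200)          # points at which f is to be approximated
K = rbf(X, X) + 1e-10 * np.eye(len(X))   # small jitter for numerical stability
alpha = np.linalg.solve(K, y)

post_mean = rbf(Xs, X) @ alpha           # posterior mean = kernel interpolant of the data
post_var = rbf(Xs, Xs).diagonal() - np.einsum(
    "ij,ji->i", rbf(Xs, X), np.linalg.solve(K, rbf(X, Xs))
)                                        # posterior variance = remaining uncertainty

print("max |f - posterior mean|:", np.abs(f(Xs) - post_mean).max())
print("max posterior std dev.  :", np.sqrt(np.maximum(post_var, 0.0)).max())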




Statistical Data Science


Book Description

As an emerging discipline, data science means broadly different things across different areas. Exploring the relationship of data science with statistics, a well-established and principled data-analytic discipline, this book provides insights into commonalities in approach and differences in emphasis. Featuring chapters from established authors in both disciplines, the book also presents a number of applications and accompanying papers.




Stochastic Numerics for Mathematical Physics


Book Description

This book is a substantially revised and expanded edition reflecting major developments in stochastic numerics since the first edition was published in 2004. The new topics include, in particular, mean-square and weak approximations in the case of nonglobally Lipschitz coefficients of Stochastic Differential Equations (SDEs), including the concept of rejecting trajectories; conditional probabilistic representations and their application to practical variance reduction using regression methods; the multi-level Monte Carlo method; computing ergodic limits and additional classes of geometric integrators used in molecular dynamics; numerical methods for FBSDEs; and approximation of parabolic SPDEs and the nonlinear filtering problem based on the method of characteristics. SDEs have many applications in the natural sciences and in finance. In addition, the use of probabilistic representations together with the Monte Carlo technique makes it possible to reduce the solution of multi-dimensional problems for partial differential equations to the integration of stochastic equations. This approach leads to powerful computational mathematics that is presented in the treatise. Many special schemes for SDEs are presented. In the second part of the book, numerical methods are constructed for solving complicated problems for partial differential equations, both linear and nonlinear, that occur in practical applications. All the methods are presented with proofs and hence founded on rigorous reasoning, which also gives the book the potential to serve as a textbook. The overwhelming majority of the methods are accompanied by corresponding numerical algorithms that are ready for practical implementation. The book addresses researchers and graduate students in numerical analysis, applied probability, physics, chemistry, and engineering, as well as mathematical biology and financial mathematics.
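The reduction mentioned in this description, from a multi-dimensional PDE to the integration of stochastic equations, can be sketched in a few lines. The example below is not taken from the book; the Ornstein-Uhlenbeck drift, constant diffusion, and terminal functional are illustrative choices for which the exact answer is known. It estimates u(T, x) = E[g(X_T) | X_0 = x], the probabilistic representation of the solution of the corresponding backward Kolmogorov equation, by Euler-Maruyama integration of the SDE dX = b(X) dt + sigma(X) dW followed by Monte Carlo averaging.

import numpy as np

rng = np.random.default_rng(0)

b = lambda x: -x        # Ornstein-Uhlenbeck drift (chosen so the exact answer is known)
sigma = lambda x: 1.0   # constant diffusion coefficient
g = lambda x: x ** 2    # terminal functional

x0, T = 1.0, 1.0
n_steps, n_paths = 200, 100_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    X = X + b(X) * dt + sigma(X) * dW     # Euler-Maruyama step

u_mc = g(X).mean()                        # Monte Carlo estimate of u(T, x0)
u_exact = x0 ** 2 * np.exp(-2 * T) + 0.5 * (1.0 - np.exp(-2 * T))
print(f"Monte Carlo estimate: {u_mc:.4f}   exact value: {u_exact:.4f}")

The statistical error decays like one over the square root of the number of paths regardless of the spatial dimension, which is precisely why this representation is attractive for multi-dimensional problems.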




Accelerating Monte Carlo methods for Bayesian inference in dynamical models


Book Description

Decision-making and prediction from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases, and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC), that is, strategies for reducing the computational effort while maintaining or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem, and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes, which can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.

Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and which music I will want to watch or listen to next? These three problems are examples of questions where statistical models can be useful for providing guidance and a basis for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for instance, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications such as these, and many others, make statistical models important to many parts of society. One way of developing statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one already has good insight into the model, or has access to only a small amount of historical data from which to build it. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations, one can instead simulate the outcome of millions of variants of the model and then compare these against the historical observations at hand. One can then average over the variants that gave the best results to arrive at a final model. It can therefore sometimes take days or weeks to produce a model. The problem becomes especially severe for more advanced models, which could give better forecasts but take too long to build. In this thesis, we use a number of different strategies to facilitate or improve these simulations. For example, we propose taking more insights about the system into account, thereby reducing the number of model variants that need to be examined: certain models can be ruled out from the start because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, so that the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations, and we show that the computation time can in some cases be reduced from a few days to about an hour. Hopefully, this will eventually make it possible to use more advanced models in practice, which in turn would result in better forecasts and decisions.
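Strategy (ii) above, introducing a positive correlation between successive point-wise estimates of the target, can be illustrated with a small pseudo-marginal Metropolis-Hastings sampler. The sketch below is not the thesis's particle Metropolis-Hastings algorithm; the toy latent-variable model, the importance-sampling likelihood estimator, the flat prior on the parameter, and all tuning constants are illustrative assumptions. The auxiliary random numbers that drive the likelihood estimate are recycled through an autoregressive move, so consecutive estimates are positively correlated and fewer Monte Carlo samples per estimate are needed for the chain to mix.

import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i = theta + x_i + e_i,  x_i ~ N(0, 1),  e_i ~ N(0, 0.5^2).
theta_true, n_obs, sigma_e = 0.7, 50, 0.5
y = theta_true + rng.normal(size=n_obs) + sigma_e * rng.normal(size=n_obs)

n_mc = 30     # Monte Carlo samples per likelihood estimate
rho = 0.98    # correlation of the auxiliary variables between iterations

def log_lik_hat(theta, u):
    # Importance-sampling estimate of the likelihood, driven by the auxiliary
    # standard normals u (shape n_obs x n_mc); the latent x_i are drawn from
    # their N(0, 1) prior, so each estimate of p(y_i | theta) is unbiased.
    dens = np.exp(-0.5 * ((y[:, None] - theta - u) / sigma_e) ** 2) / (
        np.sqrt(2 * np.pi) * sigma_e
    )
    return np.log(dens.mean(axis=1)).sum()

theta = 0.0
u = rng.normal(size=(n_obs, n_mc))
ll = log_lik_hat(theta, u)
chain = []
for _ in range(5000):
    theta_prop = theta + 0.1 * rng.normal()                   # random walk in theta
    u_prop = rho * u + np.sqrt(1 - rho ** 2) * rng.normal(size=u.shape)
    ll_prop = log_lik_hat(theta_prop, u_prop)
    # The autoregressive move is reversible with respect to the N(0, 1) law of
    # u, and the prior on theta is flat, so only the likelihood estimates
    # appear in the acceptance ratio.
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, u, ll = theta_prop, u_prop, ll_prop
    chain.append(theta)

print("posterior mean of theta:", np.mean(chain[1000:]))

Setting rho close to one makes the noise in successive likelihood estimates largely cancel in the acceptance ratio, which is what allows n_mc to stay small without the chain getting stuck.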




Multivariate Algorithms and Information-Based Complexity


Book Description

The contributions by leading experts in this book focus on a variety of topics of current interest related to information-based complexity, ranging from function approximation, numerical integration, numerical methods for the sphere, and algorithms with random information, to Bayesian probabilistic numerical methods and numerical methods for stochastic differential equations.




Machine Learning in Modeling and Simulation


Book Description

Machine learning (ML) approaches have been extensively and successfully employed in various areas, such as economics, medical prediction, face recognition, credit card fraud detection, and spam filtering. There is clearly also the potential that ML techniques developed in engineering and the sciences will drastically increase the possibilities for analysis and shorten the design-to-analysis time. By coupling ML techniques to conventional methods such as finite element and digital twin technologies, new avenues of modeling and simulation can be opened, but the potential of these ML techniques has yet to be fully harvested, and the methods still need to be developed and enhanced. The objective of this book is to provide an overview of ML in engineering and the sciences, presenting the fundamental theoretical ingredients with a focus on the next generation of computer modeling in which the exciting aspects of machine learning are incorporated. The book is of value to any researcher or practitioner interested in research on, or applications of, ML in the areas of scientific modeling and computer-aided engineering.




Introduction to Uncertainty Quantification


Book Description

This text provides a framework in which the main objectives of the field of uncertainty quantification (UQ) are defined, together with an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favorite problems to understand their strengths and weaknesses, which also makes the text suitable for self-study. Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text is designed as an introduction to UQ for senior undergraduate and graduate students with a mathematical or statistical background, and also for researchers from the mathematical sciences or from application areas who are interested in the field. T. J. Sullivan was Warwick Zeeman Lecturer at the Mathematics Institute of the University of Warwick, United Kingdom, from 2012 to 2015. Since 2015, he has been Junior Professor of Applied Mathematics at the Free University of Berlin, Germany, specializing in uncertainty and risk quantification.




Machine Learning and Knowledge Discovery in Databases


Book Description

The three-volume proceedings LNAI 10534–10536 constitute the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2017, held in Skopje, Macedonia, in September 2017. The 101 regular papers presented in Parts I and II were carefully reviewed and selected from 364 submissions; a further 47 papers appear in the applied data science, nectar, and demo tracks. The contributions are organized in topical sections as follows. Part I: anomaly detection; computer vision; ensembles and meta learning; feature selection and extraction; kernel methods; learning and optimization; matrix and tensor factorization; networks and graphs; neural networks and deep learning. Part II: pattern and sequence mining; privacy and security; probabilistic models and methods; recommendation; regression; reinforcement learning; subgroup discovery; time series and streams; transfer and multi-task learning; unsupervised and semi-supervised learning. Part III: applied data science track; nectar track; and demo track.




Monte Carlo and Quasi-Monte Carlo Methods


Book Description

This book presents the refereed proceedings of the Twelfth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held at Stanford University (California) in August 2016. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in finance, statistics, computer graphics, and the solution of PDEs.