Subsampling Inference for the Mean of Heavy-Tailed Long-Memory Time Series


Book Description

In this article, we revisit a time series model introduced by McElroy and Politis (2007a) and generalize it in several ways to encompass a wider class of stationary, nonlinear, heavy-tailed time series with long memory. The joint asymptotic distribution of the sample mean and sample variance under the extended model is derived; the associated convergence rates are found to depend crucially on the tail thickness and the long-memory parameter. A self-normalized sample mean that concurrently captures the tail and memory behaviour is defined. Its asymptotic distribution is approximated by subsampling without knowledge of the tail and/or memory parameters; a result of independent interest on subsampling consistency for certain long-range dependent processes is provided. The subsampling-based confidence intervals for the process mean are shown to have good empirical coverage rates in a simulation study. The influence of block size on coverage, and the performance of a data-driven rule for block size selection, are assessed. The methodology is further applied to a series of packet counts from Ethernet traffic traces.
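The key idea described above — approximating the distribution of a self-normalized statistic by recomputing it on blocks, so that the unknown rate of convergence cancels — can be sketched generically as follows. This is a minimal illustration of subsampling confidence intervals for the mean, not the article's exact estimator; the function name and the choice of the sample standard deviation as the self-normalizer are our assumptions.

```python
import numpy as np

def subsampling_ci(x, b, alpha=0.05):
    """Equal-tailed subsampling confidence interval for the mean.

    Generic sketch: the self-normalized statistic (mean - mu) / sd is
    recomputed on all overlapping blocks of length b, and its empirical
    quantiles are used in place of the unknown limiting distribution.
    Because the statistic is self-normalized, the (unknown) convergence
    rate cancels, so no tail or memory parameter is needed.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    s_n = x.std(ddof=1)
    # subsample statistics on all overlapping blocks of length b,
    # centered at the full-sample mean
    t_sub = []
    for i in range(n - b + 1):
        block = x[i:i + b]
        s_b = block.std(ddof=1)
        if s_b > 0:
            t_sub.append((block.mean() - xbar) / s_b)
    t_sub = np.asarray(t_sub)
    lo_q, hi_q = np.quantile(t_sub, [alpha / 2, 1 - alpha / 2])
    # invert lo_q <= (xbar - mu) / s_n <= hi_q for mu
    return xbar - hi_q * s_n, xbar - lo_q * s_n
```

The block size b plays the role discussed in the abstract: it must grow with n but remain small relative to it, and coverage is sensitive to its choice.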




Long-Memory Processes


Book Description

Long-memory processes are known to play an important part in many areas of science and technology, including physics, geophysics, hydrology, telecommunications, economics, finance, climatology, and network engineering. In the last 20 years enormous progress has been made in understanding the probabilistic foundations and statistical principles of such processes. This book provides a timely and comprehensive review, including a thorough discussion of mathematical and probabilistic foundations and statistical methods, emphasizing their practical motivation and mathematical justification. Proofs of the main theorems are provided and data examples illustrate practical aspects. This book will be a valuable resource for researchers and graduate students in statistics, mathematics, econometrics and other quantitative areas, as well as for practitioners and applied researchers who need to analyze data in which long memory, power laws, self-similar scaling or fractal properties are relevant.




Cyclostationarity: Theory and Methods


Book Description

In the last decade, research in signal analysis has been dominated by models that treat nonstationarity as an essential feature. This book presents the results of a workshop held in Gródek nad Dunajcem, Poland, in February 2013, dedicated to the investigation of cyclostationary signals. Its main objective is to highlight the strong interactions between the theory and applications of cyclostationary signals using modern statistical tools. An important application of cyclostationary signals is the analysis of mechanical signals generated by a vibrating mechanism. Cyclostationary models are essential for performing basic operations on signals in both the time and frequency domains. One of the fundamental problems in the diagnosis of rotating machines is the identification of the significant modulating frequencies that contribute to the cyclostationary nature of the signals. The book shows that modern tools are available for analyzing cyclostationary signals without the assumption of Gaussianity; those methods are based on the ideas of bootstrap, subsampling and fraction-of-time (FOT) models. The book is organised in two parts. The first part is dedicated to the theory of cyclostationarity. Applications are presented in the second part, covering several mechanical systems such as bearings and gears, with or without damage.
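The identification task mentioned above — finding the modulating frequencies that make a signal cyclostationary — is commonly approached through the cyclic autocorrelation, which is large only at cycle frequencies actually present in the modulation. The following is a textbook-style estimator sketch, not a method taken from the book; the function name is ours.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau=0):
    """Estimate the cyclic autocorrelation
        R_x^alpha(tau) = (1/N) * sum_t x[t] * x[t+tau] * exp(-2j*pi*alpha*t).
    A standard estimator sketch: |R_x^alpha| peaks at cycle frequencies
    alpha that correspond to real modulating frequencies in the signal.
    """
    x = np.asarray(x, dtype=float)
    tau = abs(tau)
    n = len(x) - tau
    t = np.arange(n)
    return np.mean(x[:n] * x[tau:tau + n] * np.exp(-2j * np.pi * alpha * t))
```

Scanning a grid of candidate cycle frequencies alpha and locating the peaks of |R_x^alpha(0)| gives a simple detector of modulating frequencies, e.g. for an amplitude-modulated noise signal the magnitude at the true modulation frequency dominates that at unrelated frequencies.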




Cyclostationarity: Theory and Methods – IV


Book Description

This book gathers contributions presented at the 10th Workshop on Cyclostationary Systems and Their Applications, held in Gródek nad Dunajcem, Poland, in February 2017. It includes twelve papers covering current topics related to both cyclostationary and more general nonstationary processes. Covering both theoretical and practical issues, the book offers a practice-oriented guide to the analysis of data sets with nonstationary behavior and a bridge between basic and applied research on nonstationary processes. It provides students, researchers and professionals with a timely guide to cyclostationary systems, nonstationary processes and relevant engineering applications.




Cyclostationarity: Theory and Methods III


Book Description

This book gathers contributions presented at the 9th Workshop on Cyclostationary Systems and Their Applications, held in Gródek nad Dunajcem, Poland in February 2016. It includes both theory-oriented and practice-oriented chapters. The former focus on heavy-tailed time series and processes, PAR models, rational spectra for PARMA processes, covariance invariant analysis, change point problems, and subsampling for time series, as well as the fraction-of-time approach, GARMA models and weak dependence. In turn, the latter report on case studies of various mechanical systems, and on stochastic and statistical methods, especially in the context of damage detection. The book provides students, researchers and professionals with a timely guide to cyclostationary systems, nonstationary processes and relevant engineering applications.




Subsampling


Book Description

Since Efron's profound paper on the bootstrap, an enormous amount of effort has been spent on the development of bootstrap, jackknife, and other resampling methods. The primary goal of these computer-intensive methods has been to provide statistical tools that work in complex situations without imposing unrealistic or unverifiable assumptions about the data-generating mechanism. This book sets out to lay some of the foundations for subsampling methodology and related methods.




Statistica Sinica









Mathematical Reviews






Statistical Inference as Severe Testing


Book Description

Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.