Preface
Acknowledgments
Part I THEORY
1 Introduction
1.1 Distribution of extremes in random fields
1.2 Outline of the method
1.3 Gaussian and asymptotically Gaussian random fields
1.4 Applications
2 Basic examples
2.1 Introduction
2.2 A power-one sequential test
2.3 A kernel-based scanning statistic
2.4 Other methods
3 Approximation of the local rate
3.1 Introduction
3.2 Preliminary localization and approximation
3.2.1 Localization
3.2.2 A discrete approximation
3.3 Measure transformation
3.4 Application of the localization theorem
3.4.1 Checking Condition I*
3.4.2 Checking Condition V*
3.4.3 Checking Condition IV*
3.4.4 Checking Condition II*
3.4.5 Checking Condition III*
3.5 Integration
4 From the local to the global
4.1 Introduction
4.2 Poisson approximation of probabilities
4.3 Average run length to false alarm
5 The localization theorem
5.1 Introduction
5.2 A simplified version of the localization theorem
5.3 The localization theorem
5.4 A local limit theorem
5.5 Edge effects and higher order approximations
Part II APPLICATIONS
6 Nonparametric tests: Kolmogorov-Smirnov and Peacock
6.1 Introduction
6.1.1 Classical analysis of the Kolmogorov-Smirnov test
6.1.2 Peacock's test
6.2 Analysis of the one-dimensional case
6.2.1 Preliminary localization
6.2.2 An approximation by a discrete grid
6.2.3 Measure transformation
6.2.4 The asymptotic distribution of the local field and the global term
6.2.5 Application of the localization theorem and integration
6.2.6 Checking the conditions of the localization theorem
6.3 Peacock's test
6.4 Relations to scanning statistics
7 Copy number variations
7.1 Introduction
7.2 The statistical model
7.3 Analysis of statistical properties
7.3.1 The alternative distribution
7.3.2 Preliminary localization and approximation
7.3.3 Measure transformation
7.3.4 The localization theorem and the local limit theorem
7.3.5 Checking Condition V*
7.3.6 Checking Condition II*
7.4 The false discovery rate
8 Sequential monitoring of an image
8.1 Introduction
8.2 The statistical model
8.3 Analysis of statistical properties
8.3.1 Preliminary localization
8.3.2 Measure transformation, the localization theorem, and integration
8.3.3 Checking the conditions of the localization theorem
8.3.4 Checking Condition V*
8.3.5 Checking Condition IV*
8.3.6 Checking Condition II*
8.4 Optimal change-point detection
9 Buffer overflow
9.1 Introduction
9.2 The statistical model
9.2.1 The process of demand from a single source
9.2.2 The integrated process of demand
9.3 Analysis of statistical properties
9.3.1 The large deviation factor
9.3.2 Preliminary localization
9.3.3 Approximation by a cruder grid
9.3.4 Measure transformation
9.3.5 The localization theorem
9.3.6 Integration
9.3.7 Checking the conditions of the localization theorem
9.3.8 Checking Condition IV*
9.3.9 Checking Condition V*
9.3.10 Checking Condition II*
9.4 Heavy tail distribution, long-range dependence, and self-similarity
10 Computing Pickands' constants
10.1 Introduction
10.1.1 The double-sum method
10.1.2 The method based on the likelihood ratio identity
10.1.3 Pickands' constants
10.2 Representations of constants
10.3 Analysis of statistical error
10.4 Enumerating the effect of local fluctuations
Appendix: Mathematical background
A.1 Transforms
A.2 Approximations of sums of independent random elements
A.3 Concentration inequalities
A.4 Random walks
A.5 Renewal theory
A.6 The Gaussian distribution
A.7 Large sample inference
A.8 Integration
A.9 Poisson approximation
A.10 Convexity
References
Index