
GPU Accelerated Expectation Maximization for Gaussian Mixture Models using CUDA


C, CUDA, and Python source code available on GitHub

Introduction

Gaussian Mixture Models [1, 435-439] offer a simple way to capture complex densities by employing a linear combination of K multivariate normal distributions, each with their own mean, covariance, and mixture coefficient, \pi_{k}, s.t. \sum_{k} \pi_{k} = 1.

\displaystyle p( x ) = \sum_{k = 1}^{K} \pi_{k} p(x \lvert \mu_k, \Sigma_k)


Of practical interest is learning the number of components and the values of the parameters. Evaluation criteria, such as the Akaike and Bayesian information criteria, can be used to identify the number of components, or non-parametric models like Dirichlet processes can be used to avoid the matter altogether. We won’t cover these techniques here, but will instead focus on finding the values of the parameters given sufficient training data using the Expectation-Maximization algorithm [3], and doing so efficiently on the GPU. Technical considerations will be discussed and the work will conclude with an empirical evaluation of sequential and parallel implementations for the CPU, and a massively parallel implementation for the GPU for varying numbers of components, points, and point dimensions.

Multivariate Normal Distribution

The multivariate normal distribution with mean, \mu \in \mathbb{R}^d, d \in \mathbb{N}_1, and symmetric, positive definite covariance, \Sigma \in \mathbb{R}^{d \times d}, is given by:

\displaystyle 	p( x \lvert \mu, \Sigma ) = \frac{1}{\sqrt{(2\pi)^d \lvert \Sigma \rvert }} \exp{\left( - (x - \mu)^{T} \Sigma^{-1} (x - \mu) / 2 \right)}


From a computational perspective, we will be interested in evaluating the density for N values. Thus, a naive implementation would be bounded by \mathcal{O}\left(N d^4\right) due to the matrix determinant in the normalization term. We can improve upon this by computing the Cholesky factorization, \Sigma = L L^T, where L is a lower triangular matrix [6, 157-158]. The factorization requires \mathcal{O} \left ( d^3 \right ) time and computing the determinant becomes \mathcal{O} \left (d \right) by taking advantage of the fact that \det\left(L L^T\right) = \det(L)^2 = \prod_i L_{i,i}^2. Further, we can precompute the factorization and normalization factor for a given parameterization, which leaves us with the Mahalanobis distance given by the quadratic form in the exponential. Naive computation requires two vector-matrix operations and the inverse of the covariance matrix, with worst case behavior \mathcal{O} \left (d^3\right). Leveraging the Cholesky factorization, we’ll instead solve a pair of triangular systems by forward and backward substitution in \mathcal{O} \left (d^2\right) and complete an inner product in \mathcal{O} \left (d\right), as given by L z = x - \mu, L^T y = z, and (x-\mu)^T y. Thus, our pre-initialization time is \mathcal{O} \left (d^3 \right) and density determination is \mathcal{O} \left (N d^2 \right). Further optimizations are possible by considering special diagonal cases of the covariance matrix, such as the isotropic, \Sigma = \sigma I, and non-isotropic, \Sigma_{k,k} = \sigma_k, configurations. For robustness, we’ll stick with the full covariance.

\displaystyle \log p( x \lvert \mu, \Sigma ) = - \frac{1}{2} \left( d \log 2\pi + \log \lvert \Sigma \rvert \right ) - \frac{1}{2} (x - \mu)^{T} \Sigma^{-1} (x - \mu)


To avoid numerical issues such as overflow and underflow, we’re going to consider \log p(x \lvert \mu, \Sigma) throughout the remainder of the work. For estimates of the covariance matrix, we will want more samples than the dimension of the data to avoid a singular covariance matrix [4]. Even with this criterion satisfied, it may still be possible to produce a singular matrix if some of the data are collinear and span a subspace of \mathbb{R}^d.
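To make the Cholesky-based evaluation concrete, here is a minimal NumPy/SciPy sketch (an illustration, not the post’s C/CUDA implementation) of the log-density above. It uses the equivalent identity (x - \mu)^T \Sigma^{-1} (x - \mu) = \lVert z \rVert^2 where L z = x - \mu, so a single forward substitution suffices per point.

```python
import numpy as np
from scipy.linalg import solve_triangular

def mvn_logpdf(X, mu, Sigma):
    """log N(x | mu, Sigma) for each row of X, using a precomputed Cholesky factor."""
    d = mu.shape[0]
    L = np.linalg.cholesky(Sigma)                       # Sigma = L L^T, O(d^3), done once
    log_det = 2.0 * np.log(np.diag(L)).sum()            # log|Sigma| = sum_i log L_{i,i}^2
    Z = solve_triangular(L, (X - mu).T, lower=True)     # forward substitution, O(d^2) per point
    maha = (Z ** 2).sum(axis=0)                         # squared Mahalanobis distance per point
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det + maha)
```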

Expectation Maximization

From an unsupervised learning point of view, GMMs can be seen as a generalization of k-means allowing for partial assignment of points to multiple classes. A possible classifier is given by k^{*} = \arg\max_k \, \log \pi_{k} + \log p(x \lvert \mu_k, \Sigma_k). Alternatively, multiple components can be used to represent a single class and we argmax over the corresponding subset sums. The utility of GMMs goes beyond classification, and they can be used for regression as well. The Expectation-Maximization (EM) algorithm will be used to find the parameters of the model by starting with an initial guess for the parameters given by uniform mixing coefficients, means determined by the k-means algorithm, and spherical covariances for each component. The algorithm then iterates between computing membership probabilities given a fixed set of parameters and updating those parameters by maximizing the log-likelihood of the data:

\displaystyle \mathcal{L} \left( \mathcal{D} \lvert \mu, \Sigma \right) = \sum_{n = 1}^{N} \log p(x_n) = \sum_{n=1}^{N} \log{ \left [ \sum_{k = 1}^{K} \pi_{k} p \left( x_n \lvert \mu_k, \Sigma_k \right ) \right ] }


Because we are dealing with exponents and logarithms, it’s very easy to end up with underflow and overflow situations, so we’ll continue the trend of working in log-space and also make use of the “log-sum-exp trick” to avoid these complications:

\displaystyle \log p( x ) = a + \log \left[ \sum_{k = 1}^{K} \exp{ \left( \log \pi_{k} + \log p(x \lvert \mu_k, \Sigma_k) - a \right ) } \right ]


Where the a term is the maximum exponential argument within a stated sum. Within the expectation stage of the algorithm we will compute the posterior distributions of the components conditioned on the training data (we omit the mixing coefficient since it cancels out in the maximization steps of \mu_k and \Sigma_k, and account for it explicitly in the update of \pi_k):

\displaystyle \gamma_{k, n} = \frac{ p \left ( x_n \lvert \mu_k, \Sigma_k \right ) }{ p(x) } \qquad \Gamma_k = \sum_{n=1}^{N} \gamma_{k, n}


\displaystyle \log \gamma_{k, n} =  \log p \left ( x_n \lvert \mu_k, \Sigma_k \right )  - \log p(x) \qquad \log \Gamma_k = a + \log \left [ \sum_{n=1}^{N} \exp{ \left( \log \gamma_{k, n} - a \right )} \right ]
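As a sketch of how the expectation step looks in code (a minimal NumPy illustration, not the post’s implementation), the log-sum-exp trick and the responsibilities above can be computed as:

```python
import numpy as np

def log_sum_exp(a, axis):
    """log(sum(exp(a))) along an axis, stabilized by factoring out the largest term (the 'a' above)."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(a - m).sum(axis=axis))

def e_step(log_pi, log_pdf):
    """log_pdf[k, n] = log p(x_n | mu_k, Sigma_k); returns log p(x_n), log gamma_{k,n}, log Gamma_k."""
    log_px = log_sum_exp(log_pi[:, None] + log_pdf, axis=0)   # log p(x_n), a length-N vector
    log_gamma = log_pdf - log_px                              # mixing coefficient omitted, as in the text
    log_Gamma = log_sum_exp(log_gamma, axis=1)                # length-K vector
    return log_px, log_gamma, log_Gamma
```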


The new parameters are resolved within the maximization step:

\displaystyle \pi_{k}^{(t+1)} = \frac{ \pi_{k}^{(t)} \Gamma_k }{ \sum_{i=1}^{K} \pi_{i}^{(t)} \Gamma_i } \qquad \log \pi_{k}^{(t+1)} = \log \pi_{k}^{(t)} + \log \Gamma_k - a - \log \left [ \sum_{i=1}^{K} \exp{ \left( \log \pi_{i}^{(t)} + \log \Gamma_i - a \right )} \right ]


\displaystyle \mu_k^{(t+1)} = \frac{ \sum_{n=1}^{N} x_n \gamma_{n, k} }{ \Gamma_k  } \qquad \mu_k^{(t+1)} = \frac{ \sum_{n=1}^{N} x_n \exp{ \log \gamma_{n, k} } }{ \exp{ \log \Gamma_k }  }


\displaystyle \Sigma_k^{(t+1)} = \frac{ \sum_{n=1}^{N} (x_n - \mu_k^{(t+1)}) (x_n - \mu_k^{(t+1)})^T \gamma_{n, k} }{ \Gamma_k  }


\displaystyle \Sigma_k^{(t+1)} = \frac{ \sum_{n=1}^{N} (x_n - \mu_k^{(t+1)}) (x_n - \mu_k^{(t+1)})^T \exp \log \gamma_{n, k} }{ \exp \log \Gamma_k  }


The algorithm continues back and forth between expectation and maximization stages until the change in log likelihood is less than some epsilon, or a maximum number of user specified iterations has elapsed.
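The maximization-step updates translate directly into array operations. Below is a minimal NumPy sketch (again an illustration, not the post’s C/CUDA code) of the updates above, with X an N \times d matrix of points:

```python
import numpy as np
from scipy.special import logsumexp

def m_step(X, log_pi, log_gamma, log_Gamma):
    """Update pi, mu, Sigma from the E-step quantities; log_gamma is K x N, log_Gamma has length K."""
    gamma = np.exp(log_gamma)                                  # gamma_{k,n}
    Gamma = np.exp(log_Gamma)                                  # Gamma_k
    unnorm = log_pi + log_Gamma                                # log(pi_k^{(t)} Gamma_k)
    log_pi_new = unnorm - logsumexp(unnorm)                    # normalized in log-space
    mu_new = (gamma @ X) / Gamma[:, None]                      # weighted means
    Sigma_new = np.stack([
        ((gamma[k][:, None] * (X - mu_new[k])).T @ (X - mu_new[k])) / Gamma[k]
        for k in range(len(Gamma))
    ])                                                         # weighted scatter matrices
    return log_pi_new, mu_new, Sigma_new
```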

Implementations

Sequential Per-iteration complexity is given by \mathcal{O}\left(2 K N d^2 + K N d + 2K + N + K d^3\right). We expect d \le K < N because too many dimensions leads to a lot of dead space and too many components results in overfitting of the data. Thus, the dominating term for sequential execution is \mathcal{O}\left(2 K N d^2 \right).

Parallel There are two natural data parallelisms in the algorithm: the calculations of \mathcal{L} and \gamma parallelize across points, while the probability densities and parameter updates parallelize across components. Each POSIX thread runs the full iterative algorithm with individual stages coordinated by barrier synchronization. The resulting complexity is \mathcal{O}\left(\frac{2}{P} d^2 K N \right) for work coordinated across P processors.
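As a minimal illustration of the coordination pattern (Python threads standing in for the post’s POSIX threads, and a trivial reduction standing in for the E and M stages), each worker owns a slice of the points and meets the others at a barrier between stages:

```python
import threading
import numpy as np

P = 4
x = np.random.default_rng(0).normal(size=10_000)
partial = np.zeros(P)
barrier = threading.Barrier(P)

def worker(p):
    partial[p] = x[p::P].sum()        # stand-in for a per-thread stage over a slice of points
    barrier.wait()                    # every thread finishes this stage before any moves on
    if p == 0:
        print("mean estimate:", partial.sum() / x.size)

threads = [threading.Thread(target=worker, args=(p,)) for p in range(P)]
for t in threads: t.start()
for t in threads: t.join()
```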

(Figure: cudaFlow)

Massively Parallel The parallel implementation can be mapped over to the GPU with parallelism taken across points and components depending on the terms being computed. There are several types of parallelism that we will leverage under the CUDA programming model. For the calculation of \log p\left(x | \mu_k, \Sigma_k \right) we compute each point in parallel by forming a grid of one dimensional blocks, and use streams with event synchronization to carry out each component in parallel across the streaming multiprocessors. Calculation of the loglikelihood and \log \gamma_{n,k} is done by computing and storing \log p(x), then updating the storage for \log p\left(x|\mu_k,\Sigma_k\right), and then performing a parallel reduction over \log p(x) to produce the loglikelihood. Parallel reductions are a core task and are implemented by first padding the input array of points to the next power of two, then reducing each block using shared memory, and applying a linear map to the memory so that successive block reductions can be applied. Several additional approaches are discussed in [5]. Once the loglikelihood is computed, the streams are synchronized with the host and the result is copied from the device back to the host. To compute \log \Gamma_k, \log \gamma_{n,k} is copied to a working memory and a maximum parallel reduction is performed. The resulting maximum is used in a separate exponential map for numerical stability when computing the parallel reduction of each component to yield \log \Gamma_k. Updates to the mean and covariances are performed by mapping each term to a working memory allocated for each component’s stream and executing a parallel reduction to yield the updated mean and covariance. Once all component streams have been synchronized, the mixture coefficients and Cholesky decompositions of the covariances are computed with a single kernel invocation parallel in the number of components.
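The reduction pattern itself is easy to illustrate outside of CUDA. The following NumPy sketch (a simplification, not the GPU kernel) pads to the next power of two and then halves the array repeatedly, mirroring how threads in a block combine pairs of values in shared memory:

```python
import numpy as np

def tree_reduce_sum(values):
    """Pad to the next power of two, then repeatedly combine pairs until one value remains."""
    n = 1 << (len(values) - 1).bit_length()   # next power of two >= len(values)
    buf = np.zeros(n)
    buf[:len(values)] = values
    while n > 1:
        n //= 2
        buf[:n] += buf[n:2 * n]               # each "thread" adds its partner's element
    return buf[0]

print(tree_reduce_sum(np.arange(10.0)))       # 45.0
```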

The main design consideration was whether or not to use streams. For larger numbers of components, streams result in improved runtime performance; however, they come at the cost of increased memory usage, which limits the size of problems an end user can study with the implementation. Because the primary design goal is performance, the increased memory usage was deemed an acceptable trade-off against using less memory and executing each component sequentially.

To optimize the runtime of the implementation, nvprof along with the NVIDIA Visual Profiler was used to identify performance bottlenecks. The original implementation was a naive port of the parallel C code which required frequent memory transfers between host and device, resulting in significant CUDA API overhead that dominated the runtime. By transferring and allocating memory on the device beforehand, the implementation could execute primarily on the GPU and eliminate the API overhead. The second primary optimization was using streams and events to parallelize the component probability densities and the parameter updates in the maximization step. This allowed for a K-fold reduction since the component calculations are performed in parallel. The next optimization step was to streamline the parallel reductions by using block reductions against fast shared block memory, minimizing the number of global memory writes, instead of performing iterated reductions against sequential addressing that performed global memory reads and writes for each point. The final optimization step was to use pinned host memory to enable zero-copy transfers from DRAM to the GPU over DMA.

Evaluation

To evaluate the implementations we need a way of generating GMMs and sampling data from the resulting distributions. To sample from a standard univariate normal distribution one can use the Box-Muller transform, the Ziggurat method, or the ratio-of-uniforms method [7]. The latter is used here due to its simplicity and efficiency. Sampling from the multivariate normal distribution can be done by sampling a standard normal vector \eta \sim \mathcal{N}(0 ,I_d) and computing \mu + \Sigma^{1/2} \eta, where \Sigma^{1/2} can be computed by Eigendecomposition, \Sigma^{1/2} = Q \Delta^{1/2} Q^{-1}, or Cholesky factorization, \Sigma = L L^T, \Sigma^{1/2} = L. The latter is used since it is more efficient. The GMM describes a generative process whereby we pick a component at random with probability given by its mixture coefficient, sample the underlying \mathcal{N}(\mu_k, \Sigma_k) distribution, and repeat this process for the desired number of points.
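A minimal NumPy sketch of this generative process (with hypothetical parameters, not the post’s generator) looks like the following:

```python
import numpy as np

def sample_gmm(n, pis, mus, Sigmas, rng=np.random.default_rng(0)):
    """Draw n points: pick a component by its mixing weight, then return mu_k + L_k eta."""
    d = mus.shape[1]
    ks = rng.choice(len(pis), size=n, p=pis)            # component index per point
    Ls = np.stack([np.linalg.cholesky(S) for S in Sigmas])
    eta = rng.standard_normal((n, d))                   # eta ~ N(0, I_d)
    return mus[ks] + np.einsum('nij,nj->ni', Ls[ks], eta)

# usage with two hypothetical components in the plane
points = sample_gmm(1000, np.array([0.3, 0.7]),
                    np.array([[0.0, 0.0], [3.0, 3.0]]),
                    np.array([np.eye(2), 0.5 * np.eye(2)]))
```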

The matter of generating GMMs is more interesting. Here we draw \pi_i = X_i / \sum_{j} X_j for X_i \sim \mathcal{U}(0, 1); alternatively, one could draw \pi \sim \text{Dir}(\alpha). Means are drawn by \mu \sim \mathcal{N}(0, a I_d) with a > 1 so that means are relatively spread out in \mathbb{R}^{d}. The more exciting prospect is how to sample the covariance matrix. This is where the Wishart distribution, \Sigma \sim W(I_d, d, n) for n > d - 1, comes in handy. The Wishart distribution is a model of what the sample covariance matrix should look like given a series of n x_i \sim \mathcal{N}(0, I_d) vectors. Based on a \mathcal{O}\left(d^2\right) method by [8], [9] gives an equally efficient method for sampling \Sigma^{1/2} = L by letting L_{i,i}^2 \sim \chi^2(n - i) and L_{i,j} \sim \mathcal{N}(0, 1) for 0 \le i < d and 0 \le j < i.
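A sketch of that construction in NumPy (assuming the Bartlett-style recipe above; again not the post’s code) is:

```python
import numpy as np

def sample_covariance_factor(d, n, rng=np.random.default_rng(0)):
    """Return a lower-triangular L whose product L L^T behaves like a Wishart sample covariance."""
    L = np.zeros((d, d))
    for i in range(d):
        L[i, i] = np.sqrt(rng.chisquare(n - i))   # diagonal: square roots of chi-squared draws
        L[i, :i] = rng.standard_normal(i)         # strict lower triangle: standard normals
    return L

L = sample_covariance_factor(d=3, n=10)
Sigma = L @ L.T                                   # symmetric positive definite by construction
```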

To evaluate the performance of the different implementations, the wall clock time taken to run the algorithm on a synthetic instance was measured by varying each of the N, K, and d parameters while holding the other two fixed. From an end user perspective, wall clock time is preferable to the time the operating system actually devoted to the problem since wall clock time is what the user experiences. There will be variability in the results since each instance requires a different number of iterations for the log likelihood to converge. Tests were conducted on a Xeon 1245 v5 3.5 GHz system with 32GB of memory and an NVIDIA GTX 1060 6GB graphics card with 1280 cores.

(Figure: runtime results)

Since the parameter space is relatively large, Figures 2-5 look at varying one parameter while fixing the others to demonstrate the relative merits of each approach. When the number of points dominates, the CUDA approach tends to be 18x faster; the parallel approach tends to be 3x faster when the dimension is high; and CUDA is suitable when the number of components is high, giving a 20x improvement relative to the sequential approach. Thus, when dealing with suitably large datasets, the CUDA based implementation is preferable, delivering superior runtime performance without sacrificing quality.

It is important to note that the results obtained from the CUDA solution may differ from those of the sequential and parallel approaches. This is due to nondeterministic round off errors associated with executing parallel reductions compared to sequential reductions [2], and differences in the handling of floating point values on the GPU [10], notably the presence of fused multiply-add on NVIDIA GPUs, which is more accurate than what is frequently implemented in CPU architectures. The following two synthetic data sets illustrate typical results of the three schemes:

(Figure: example results on two synthetic data sets)

Conclusion

This work demonstrated the utility of using NVIDIA GPUs to train Gaussian mixture models by the Expectation Maximization algorithm. Speedups as high as 20x were observed on synthetic datasets by varying the number of points, components, and data dimension while leaving the others fixed. It is believed that further speedups should be possible with additional passes, and the inclusion of metric data structures to limit which data is considered during calculations. Future work would pursue more memory efficient solutions on the GPU to allow for larger problem instances, and focus on providing higher level language bindings so that the implementation can be better utilized in traditional data science toolchains.

References

  1. Bishop, C. M. Pattern recognition and machine learning. Springer, 2006.
  2. Collange, S., Defour, D., Graillat, S., and Iakymchuk, R. Numerical reproducibility for the parallel reduction on multi- and many-core architectures. Parallel Computing 49 (2015), 83-97.
  3. Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) (1977), 1-38.
  4. Fan, J., Liao, Y., and Liu, H. An overview of the estimation of large covariance and precision matrices. The Econometrics Journal 19 (2016), C1-C32.
  5. Harris, M. Optimizing CUDA. SC07: High Performance Computing with CUDA (2007).
  6. Kincaid, D., and Cheney, W. Numerical analysis: mathematics of scientific computing. 3rd ed. Brooks/Cole, 2002.
  7. Kinderman, A. J., and Monahan, J. F. Computer generation of random variables using the ratio of uniform deviates. ACM Transactions on Mathematical Software (TOMS) 3, 3 (1977), 257-260.
  8. Odell, P., and Feiveson, A. A numerical procedure to generate a sample covariance matrix. Journal of the American Statistical Association 61, 313 (1966), 199-203.
  9. Sawyer, S. Wishart distributions and inverse-Wishart sampling. URL: http://www.math.wustl.edu/~sawyer/hmhandouts/Wishart.pdf (2007).
  10. Whitehead, N., and Fit-Florea, A. Precision and performance: Floating point and IEEE 754 compliance for NVIDIA GPUs. NVIDIA whitepaper (2011).

Expected Maximum and Minimum of Real-Valued Continuous Random Variables


Introduction

This is a quick paper exploring the expected maximum and minimum of real-valued continuous random variables for a project that I’m working on. This paper will be somewhat more formal than some of my previous writings, but should be an easy read beginning with some required definitions, problem statement, general solution and specific results for a small handful of continuous probability distributions.

Definitions

Definition (1) : Given the probability space, (\Omega, \mathcal{F}, \mathbb{P}), consisting of a set representing the sample space, \Omega, a \text{Borel }\sigma \text{-algebra}, \mathcal{F}, and a probability measure, \mathbb{P}, the following properties hold true:

  1. Non-negativity: \mathbb{P}(F) \ge 0 \quad \forall F \in \mathcal{F}
  2. Null empty set: \mathbb{P}(\emptyset) = 0
  3. Countable additivity of disjoint sets: \displaystyle \mathbb{P}\left( \bigcup_{i=0}^{\infty} F_i \right) = \sum_{i=0}^{\infty} \mathbb{P}(F_i) \quad F_i \in \mathcal{F}

Definition (2) : Given a real-valued continuous random variable such that X : \Omega \to \mathbb{R}, the event the random variable takes on a fixed value, x \in \mathbb{R}, is the event \lbrace \omega : X(\omega) = x \rbrace \in \mathcal{F} measured by the probability distribution function f_X(x) = \mathbb{P}(X = x). Similarly, the event that the random variable takes on a range of values less than some fixed value, x \in \mathbb{R}, is the event \lbrace \omega : X(\omega) \le x \rbrace \in \mathcal{F} measured by the cumulative distribution function F_X(x) = \mathbb{P}(X \le x). By Definition, the following properties hold true:

  1. \displaystyle F_X(x) = \int_{-\infty}^{x} f_X(t) \, dt
  2. \displaystyle \lim_{x \to \infty} F_X(x) = 1
  3. \displaystyle 1 - \int_{-\infty}^{x} f_X(t) \, dt = \int_{x}^{\infty} f_X(t) \, dt

Definition (3) : Given a second real-valued continuous random variable, Y : \Omega \to \mathbb{R}, the joint event \lbrace \omega : X(\omega) = x, Y(\omega) = y \rbrace \in \mathcal{F}, (x,y) \in \mathbb{R}^2, will be measured by the joint probability distribution f_{X, Y}(x,y) = \mathbb{P}(X = x, Y = y). If X and Y are statistically independent, then f_{X,Y}(x,y) = f_X(x) f_Y(y).

Definition (4) : Given a real-valued continuous random variable, X : \Omega \to \mathbb{R}, the expected value is \displaystyle \mathbb{E}(X) = \int_{-\infty}^{\infty} x f_X(x) \, dx.

Definition (5) : (Law of the unconscious statistician) Given a real-valued continuous random variable, X, and a function, g : \mathbb{R} \to \mathbb{R}, then g(X) is also a real-valued continuous random variable and its expected value is \displaystyle \mathbb{E}(g(X)) = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx provided the integral converges. Given two real-valued continuous random variables, X, Y, and a function, g : \mathbb{R}^2 \to \mathbb{R}, then g(X, Y) is also a real-valued continuous random variable and its expected value is \displaystyle \mathbb{E}(g(X,Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y) f_{X,Y}(x,y) \, dx \, dy. Under the independence assumption of Definition (3), the expected value becomes \displaystyle \mathbb{E}(g(X,Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y) f_X(x) f_Y(y) \, dx \, dy.

Remark (1) : For the remainder of this paper, all real-valued continuous random variables will be assumed to be independent.

Problem Statement

Theorem (1) : Given two real-valued continuous random variables X, Y : \Omega \to \mathbb{R}, then the expected value of the minimum of the two variables is \mathbb{E} \left( \min{ \left( X, Y \right) } \right ) = \mathbb{E} \left( X \right ) + \mathbb{E} \left( Y \right ) - \mathbb{E} \left( \max{ \left( X, Y \right) } \right ).

Lemma (1) : Given two real-valued continuous random variables X, Y : \Omega \to \mathbb{R}, then the expected value of the maximum of the two variables is \displaystyle \mathbb{E} \left( \max{ \left ( X, Y \right) } \right ) = \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy

Proof of Lemma (1) :

\displaystyle \mathbb{E} \left( \max{ \left ( X, Y \right) } \right ) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \max{\left( x, y \right)} f_X(x) f_Y(y) \, dx \, dy (Definition (5))

\displaystyle = \int_{-\infty}^{\infty} \int_{-\infty}^{x} x f_X(x) f_Y(y) \, dy \, dx + \int_{-\infty}^{\infty} \int_{-\infty}^{y} y f_X(x) f_Y(y) \, dx \, dy (Definition (1.iii))

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) \left ( \int_{-\infty}^{x} f_Y(y) \, dy \right ) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \left ( \int_{-\infty}^{y}  f_X(x)  \, dx \right ) \, dy (Fubini’s theorem)

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy \quad \square (Definition (2.i))

Proof of Theorem (1)

\displaystyle \mathbb{E} \left( \min{ \left ( X, Y \right) } \right ) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \min{\left( x, y \right)} f_X(x) f_Y(y) \, dx \, dy (Definition (5))

\displaystyle = \int_{-\infty}^{\infty} \int_{x}^{\infty} x f_X(x) f_Y(y) \, dy \, dx + \int_{-\infty}^{\infty} \int_{y}^{\infty} y f_X(x) f_Y(y) \, dx \, dy (Definition (1.iii))

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) \left ( \int_{x}^{\infty} f_Y(y) \, dy \right ) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \left ( \int_{y}^{\infty} f_X(x) \, dx \right ) \, dy (Fubini’s theorem)

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) \left (1 - \int_{-\infty}^{x} f_Y(y) \, dy \right ) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \left (1 - \int_{-\infty}^{y} f_X(x) \, dx \right ) \, dy (Definition (2.iii))

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) \left (1 - F_Y(x) \right ) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \left( 1 - F_X(y) \right ) \, dy (Definition (2.i))

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) \, dx - \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \, dy - \int_{-\infty}^{\infty} y f_Y(y) F_X(y)  \, dy

\displaystyle = \int_{-\infty}^{\infty} x f_X(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) \, dy - \left ( \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) F_X(y)  \, dy \right )

\displaystyle = \mathbb{E}(X) + \mathbb{E}(Y) - \mathbb{E} \left( \max{ \left( X, Y \right) } \right) \quad \blacksquare (Definition (4), Lemma (1))

Remark (2) : For real values x, y \in \mathbb{R}, \min{\left(x,y\right)} = x + y - \max{ \left( x, y \right) }.

Proof Remark (2) : If x \ge y, then \min{\left(x,y\right)} = y, otherwise x. If x \ge y, then \max{\left(x,y\right)} = x, otherwise y. If x \ge y, then \min{\left(x,y\right)} = y + \left( x - \max{\left(x,y\right)} \right ), otherwise, \min{\left(x,y\right)} = x + \left( y - \max{\left(x,y\right)} \right ). Therefore, \min{\left(x,y\right)} = x + y - \max{\left(x,y\right)} \quad \square

Worked Continuous Probability Distributions

The following section of this paper derives the expected value of the maximum of real-valued continuous random variables for the exponential distribution, normal distribution and continuous uniform distribution. The derivation of the expected value of the minimum of real-valued continuous random variables is omitted as it can be found by applying Theorem (1).

Exponential Distribution

Definition (6) : Given a real-valued continuous exponentially distributed random variable, X \sim \text{Exp}(\alpha), with rate parameter, \alpha > 0, the probability density function is \displaystyle f_X(x) = \alpha e^{-\alpha x} for all x \ge 0 and zero everywhere else.

Corollary (6.i) The cumulative distribution function of a real-valued continuous exponentially distributed random variable, X \sim \text{Exp}(\alpha), is therefore \displaystyle F_X(x) = 1 - e^{-\alpha x} for all x \ge 0 and zero everywhere else.

Proof of Corollary (6.i)

\displaystyle F_X(x) = \int_{-\infty}^{x} f_X(t) \, dt = \int_{0}^{x} \alpha e^{-\alpha t} \, dt = - e^{-\alpha t} \bigg|_{0}^{x} = 1 - e^{- \alpha x} \quad \square

Corollary (6.ii) : The expected value of a real-valued continuous exponentially distributed random variable, X \sim \text{Exp}(\alpha), is therefore \displaystyle \frac{1}{\alpha}.

Proof of Corollary (6.ii)

The expected value is \mathbb{E}(X) = \frac{1}{\alpha} by Definition (4) and Lemma (2) with \gamma = \alpha \square.

Lemma (2) : Given real values \alpha, \gamma \in \mathbb{R} \quad \gamma \neq 0, then \displaystyle \int_{0}^{\infty} \alpha x e^{-\gamma x} \, dx = \frac{\alpha}{\gamma^2}.

Proof of Lemma (2) :

\displaystyle \int_{0}^{\infty} \alpha x e^{-\gamma x} \, dx = - x \frac{\alpha}{\gamma} e^{-\gamma x} \bigg|_{0}^{\infty} + \int_{0}^{\infty} \frac{\alpha}{\gamma} e^{-\gamma x} \, dx = \left( - x \frac{\alpha}{\gamma} e^{-\gamma x} - \frac{\alpha}{\gamma^2} e^{-\gamma x} \right ) \bigg|_{0}^{\infty}

\displaystyle = \lim_{x \to \infty} \left( - x \frac{\alpha}{\gamma} e^{-\gamma x} - \frac{\alpha}{\gamma^2} e^{-\gamma x} \right ) - \left( - \frac{\alpha}{\gamma^2} \right) = \frac{\alpha}{\gamma^2} \quad \square

Theorem (2) : The expected value of the maximum of the real-valued continuous exponentially distributed random variables X \sim \text{Exp}(\alpha), Y \sim \text{Exp}(\beta) is \displaystyle \frac{1}{\alpha} + \frac{1}{\beta} - \frac{1}{\alpha + \beta}.

Proof of Theorem (2) :

\displaystyle \mathbb{E} \left ( \max{\left( X, Y \right)} \right ) = \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy (Lemma (1))

\displaystyle = \int_{0}^{\infty} x \alpha e^{-\alpha x} \left( 1 - e^{-\beta x} \right ) \, dx + \int_{0}^{\infty} y \beta e^{-\beta y} \left( 1 - e^{-\alpha y} \right ) \, dy (Corollary (6.i))

\displaystyle =  \left( \int_{0}^{\infty} x \alpha e^{-\alpha x} \, dx \right )- \left( \int_{0}^{\infty} x \alpha e^{-(\alpha + \beta) x} \, dx \right ) + \left( \int_{0}^{\infty} y \beta e^{-\beta y} \, dy \right ) -  \left( \int_{0}^{\infty} y \beta e^{-(\alpha + \beta) y} \, dy \right) (Integral linearity)

\displaystyle = \frac{1}{\alpha} - \frac{\alpha}{(\alpha + \beta)^2} + \frac{1}{\beta} - \frac{\beta}{(\alpha + \beta)^2} (Lemma (2), Corollary (6.ii))

\displaystyle = \frac{1}{\alpha} + \frac{1}{\beta} - \frac{1}{\alpha+\beta} \quad \blacksquare
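As a quick sanity check of Theorem (2), the closed form can be compared against a Monte Carlo estimate (a small NumPy sketch with hypothetical rate parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 1.5, 0.7, 1_000_000          # hypothetical rates and sample size

x = rng.exponential(1 / alpha, n)             # NumPy parameterizes by scale = 1 / rate
y = rng.exponential(1 / beta, n)

print(np.maximum(x, y).mean())                # empirical E[max(X, Y)]
print(1/alpha + 1/beta - 1/(alpha + beta))    # closed form from Theorem (2)
```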

Normal Distribution

Definition (7) : The following Gaussian integral is the error function \displaystyle \text{erf}(x) = \frac{2}{ \sqrt{\pi} } \int_{0}^{x} e^{ - u^2 } \, du for which the following properties hold true:

  1. Odd function: \displaystyle \text{erf}(-x) = -\text{erf}(x)
  2. Limiting behavior: \displaystyle \lim_{x \to \infty} \text{erf}(x) = 1

Definition (8) : Given a real-valued continuous normally distributed random variable, X \sim \mathcal{N}(\mu, \sigma), with mean parameter, \mu, and standard deviation parameter, \sigma \neq 0, the probability density function is \displaystyle f_X(x) = \frac{1}{\sigma \sqrt{2 \pi} } e^{ -\frac{1}{2} \left ( \frac{x - \mu}{\sigma} \right )^2 } for all values on the real line.

Corollary (8.i) : The cumulative distribution function of a real-valued continuous normally distributed random variable, X \sim \mathcal{N}(\mu, \sigma), is therefore \displaystyle F_X(x) = \frac{1}{2} \left (1 + \text{erf} \left ( \frac{x-\mu}{\sqrt{2} \sigma} \right ) \right ).

Proof of Corollary (8.i) :

\displaystyle F_X(x) = \int^{x}_{-\infty} \frac{1}{\sigma \sqrt{2 \pi} } e^{ - \left ( \frac{t - \mu}{\sqrt{2} \sigma} \right )^2 } \, dt (Definition (2.i))

\displaystyle = \frac{1}{ \sqrt{\pi} } \int_{-\infty}^{\frac{x - \mu}{\sqrt{2} \sigma}} e^{ - u^2 } \, du (U-substitution with \displaystyle u = \frac{t - \mu}{\sqrt{2} \sigma} \implies du = \frac{1}{\sqrt{2} \sigma} dt)

\displaystyle = \frac{1}{ \sqrt{\pi} } \int_{-\infty}^{ 0 } e^{ - u^2 } \, du + \frac{1}{ \sqrt{\pi} } \int_{0}^{ \frac{x - \mu}{\sqrt{2} \sigma} } e^{ - u^2 } \, du (Definition (2.iii))

\displaystyle = - \frac{1}{ \sqrt{\pi} } \int_{0}^{-\infty} e^{ - u^2 } \, du + \frac{1}{ \sqrt{\pi} } \int_{0}^{ \frac{x - \mu}{\sqrt{2} \sigma} } e^{ - u^2 } \, du (Reverse limits of integration)

\displaystyle = \frac{1}{2} \lim_{u \to \infty} - \text{erf}(-u) + \frac{1}{2} \text{erf} \left ( \frac{x - \mu}{\sqrt{2} \sigma} \right ) (Definition (7))

\displaystyle = \frac{1}{2} \lim_{u \to \infty} \text{erf}(u) + \frac{1}{2} \text{erf} \left ( \frac{x - \mu}{\sqrt{2} \sigma} \right ) (Definition (7.i))

\displaystyle = \frac{1}{2} + \frac{1}{2} \text{erf} \left ( \frac{x - \mu}{\sqrt{2} \sigma} \right ) (Definition (7.ii))

\displaystyle = \frac{1}{2} \left (1 + \text{erf} \left ( \frac{x-\mu}{\sqrt{2} \sigma} \right ) \right ) \quad \square

Corollary (8.ii) : The expected value of a real-valued continuous normally distributed random variable, X \sim \mathcal{N}(\mu, \sigma), is therefore \mathbb{E}(X) = \mu.

Proof of Corollary (8.ii) :

\displaystyle \mathbb{E}(X) = \int_{-\infty}^{\infty} x f_X(x) \, dx  = \int_{-\infty}^{\infty} x \frac{1}{\sigma \sqrt{2 \pi} } e^{ -\frac{1}{2} \left ( \frac{x - \mu}{\sigma} \right )^2 } \, dx (Definition (4))

\displaystyle  = \int_{-\infty}^{\infty} (\sqrt{2}\sigma u + \mu) \frac{1}{\sqrt{\pi} } e^{ - u^2 } \, du (U-substitution with \displaystyle u = \frac{x-\mu}{\sqrt{2} \sigma} \implies du = \frac{1}{\sqrt{2}\sigma} dx, \ \sqrt{2}\sigma u + \mu = x)

\displaystyle  = \frac{\sqrt{2}\sigma}{\sqrt{\pi}} \int_{-\infty}^{\infty}  u  e^{ - u^2 } \, du +  \frac{\mu}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{ - u^2 } \, du (Integral linearity)

\displaystyle  = \frac{\sqrt{2}\sigma}{\sqrt{\pi}} \left( \int_{-\infty}^{0}  u  e^{ - u^2 } \, du + \int_{0}^{\infty}  u  e^{ - u^2 } \, du \right ) +  \frac{\mu}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{ - u^2 } \, du (Definition (1.iii))

\displaystyle  = \frac{\sqrt{2}\sigma}{\sqrt{\pi}} \left( - \int_{0}^{\infty}  u  e^{ - u^2 } \, du + \int_{0}^{\infty}  u  e^{ - u^2 } \, du \right ) +  2 \frac{\mu}{\sqrt{\pi}} \int_{0}^{\infty} e^{ - u^2 } \, du (u e^{-u^2} is odd, e^{-u^2} is even)

\displaystyle = \mu \frac{2}{\sqrt{\pi}} \left ( \frac{\sqrt{\pi}}{2} \lim_{x \to \infty} \text{erf}(x) \right ) = \mu \quad \square (Definition (7), Definition (7.ii))

Definition (9) : Given a real-valued continuous normally distributed random variable, X \sim \mathcal{N}(0, 1), the probability distribution function will be denoted as standard normal probability distribution function, \phi(x), and the cumulative distribution function as the standard normal cumulative distribution function, \Phi(x). By definition, the following properties hold true:

  1. Non-standard probability density function: If X \sim \mathcal{N}(\mu, \sigma), then \displaystyle f_X(x) = \frac{1}{\sigma} \phi \left( \frac{x - \mu}{\sigma} \right )
  2. Non-standard cumulative distribution function: If X \sim \mathcal{N}(\mu, \sigma), then \displaystyle F_X(x) = \Phi\left( \frac{x - \mu}{\sigma} \right )
  3. Complement: \Phi(-x) = 1 - \Phi(x)

Definition (10) : [PaRe96] Given \phi(x) and \Phi(x), the following integrals hold true:

  1. \displaystyle \int_{-\infty}^\infty x\Phi(a+bx)\phi(x) \, dx = \frac{b}{\sqrt{1+b^2}} \phi \left( \frac{a}{\sqrt{1+b^2}} \right )
  2. \displaystyle  \int_{-\infty}^\infty \Phi(a+bx)\phi(x) \, dx = \Phi \left ( \frac{a}{\sqrt{1+b^2}} \right )

Theorem (3) : The expected value of the maximum of the real-valued continuous normally distributed random variables X \sim \mathcal{N}(\mu, \sigma), Y \sim \mathcal{N}(\nu, \tau) is \displaystyle \sqrt{ \sigma^2 + \tau^2 } \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + (\nu - \mu) \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) + \mu.

Lemma (3) : Given real-valued continuous normally distributed random variables X \sim \mathcal{N}(\mu, \sigma), Y \sim \mathcal{N}(\nu, \tau), \displaystyle \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy = \frac{\tau^2}{\sqrt{\sigma^2 + \tau^2 }} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \nu \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ).

Proof of Lemma (3) :

\displaystyle \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy = \int_{-\infty}^{\infty} y \frac{1}{\tau} \phi \left ( \frac{y-\nu}{\tau} \right ) \Phi \left ( \frac{y-\mu}{\sigma} \right ) \, dy (Definition (9.i), Definition (9.ii))

\displaystyle = \int_{-\infty}^{\infty} (u \tau + \nu)  \phi(u) \Phi \left ( \frac{u \tau + \nu -\mu}{\sigma} \right ) \, du (U-substitution with \displaystyle u = \frac{y-\nu}{\tau} \implies du = \frac{1}{\tau} dy, y = u \tau + \nu)

\displaystyle = \tau \int_{-\infty}^{\infty} u \phi(u) \Phi \left ( \frac{u \tau + \nu -\mu}{\sigma} \right ) \, du  + \nu \int_{-\infty}^{\infty} \phi(u) \Phi \left ( \frac{u \tau + \nu -\mu}{\sigma} \right ) \, du (Integral linearity)

\displaystyle = \frac{\tau^2}{\sqrt{\sigma^2 + \tau^2 }} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \nu \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) \, \square (Definition (10.i), Definition (10.ii))

Proof of Theorem (3) :

\displaystyle \mathbb{E} \left( \max{ \left ( X, Y \right) } \right ) = \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy (Lemma (1))

\displaystyle = \int_{-\infty}^{\infty} x \frac{1}{\sigma} \phi \left ( \frac{x-\mu}{\sigma} \right ) \Phi \left ( \frac{x-\nu}{\tau} \right ) \, dx + \int_{-\infty}^{\infty} y \frac{1}{\tau} \phi \left ( \frac{y-\nu}{\tau} \right ) \Phi \left ( \frac{y-\mu}{\sigma} \right ) \, dy (Definition (9.i), Definition (9.ii))

\displaystyle = \frac{\sigma^2}{\sqrt{\sigma^2 + \tau^2}} \phi \left( \frac{\mu - \nu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \mu \Phi \left ( \frac{\mu - \nu}{\sqrt{\sigma^2 + \tau^2}} \right ) + \frac{\tau^2}{\sqrt{\sigma^2 + \tau^2 }} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \nu \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) (Lemma (3))

\displaystyle = \frac{\sigma^2}{\sqrt{\sigma^2 + \tau^2}} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \mu \left ( 1 - \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) \right ) + \frac{\tau^2}{\sqrt{\sigma^2 + \tau^2 }} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \nu \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) (Definition (9.iii))

\displaystyle = \frac{\sigma^2}{\sqrt{\sigma^2 + \tau^2}} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) - \mu \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) + \mu + \frac{\tau^2}{\sqrt{\sigma^2 + \tau^2 }} \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + \nu \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right )

\displaystyle = \sqrt{ \sigma^2 + \tau^2 } \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + (\nu - \mu) \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) + \mu \quad \blacksquare
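Theorem (3) can likewise be checked numerically (a small sketch using SciPy’s standard normal pdf and cdf for \phi and \Phi, with hypothetical parameters):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, nu, tau, n = 1.0, 2.0, -0.5, 1.5, 1_000_000   # hypothetical parameters

x = rng.normal(mu, sigma, n)
y = rng.normal(nu, tau, n)

s = np.sqrt(sigma**2 + tau**2)
closed_form = s * norm.pdf((nu - mu) / s) + (nu - mu) * norm.cdf((nu - mu) / s) + mu
print(np.maximum(x, y).mean(), closed_form)               # the two should closely agree
```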

Continuous Uniform Distribution

Definition (11) : Given a real-valued continuous uniformly distributed random variable, X \sim U(a,b), with inclusive boundaries a, b such that a < b, the probability density function is \displaystyle f_X(x) = \frac{1}{b-a} for all x \in [a, b] and zero everywhere else.

Corollary (11.i) : The cumulative distribution function of a real-valued continuous uniformly distributed random variable, X \sim U(a,b), is therefore \displaystyle F_X(x) = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & x \in [a,b] \\ 1 & x > b \end{cases}.

Proof of Corollary (11.i) :

\displaystyle F_X(x) = \int_{-\infty}^{x} f_X(t) \, dt = \int_{a}^{x} \frac{1}{b-a} \, dt = \frac{t}{b-a} \bigg|_{a}^{x} = \begin{cases} 0 & x < a \\ \frac{x-a}{b-a} & x \in [a,b] \\ 1 & x > b \end{cases}\quad \square.

Corollary (11.ii) : The expected value of a real-valued continuous uniformly distributed random variable, X \sim U(a,b), is therefore \displaystyle \frac{a+b}{2}.

Proof of Corollary (11.ii)

\displaystyle \mathbb{E}(X) = \int_{-\infty}^{\infty} x f_X(x) \, dx = \int_{a}^{b} x \frac{1}{b-a} \, dx = \frac{x^2}{2(b-a)} \bigg|_{a}^{b} = \frac{ b^2 -a^2 }{ 2(b-a) } = \frac{b+a}{2} \quad \square

Theorem (4) : The expected value of the maximum of real-valued continuous uniformly distributed random variables X \sim U(a,b), Y \sim U(c,d) is \displaystyle \begin{cases}  \frac{c+d}{2} & a < b \le c < d \\   \frac{4 (b^3 - c^3) - 3 (c + a) (b^2 - c^2) }{6(d-c)(b-a)} + \frac{d^2 - b^2}{2(d-c)} & a \le c < b \le d \\  \frac{4(b^3-c^3) - 3(c+a)(b^2-c^2)}{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} & a \le c < d \le b \\  \frac{4 (b^3-a^3) - 3 (c + a) (b^2 - a^2) }{6(d-c)(b-a)} + \frac{d^2-b^2}{2(d-c)} & c \le a < b \le d\\  \frac{4 (d^3 - a^3) -3 (c+a) (d^2-a^2) }{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} & c \le a < d \le b \\  \frac{a+b}{2} & c < d \le a < b  \end{cases}.

Proof of Theorem (4) :

\displaystyle \mathbb{E} \left ( \max{ \left ( X, Y \right )} \right ) = \int_{-\infty}^{\infty} x f_X(x) F_Y(x) \, dx + \int_{-\infty}^{\infty} y f_Y(y) F_X(y) \, dy (Lemma (1))

\displaystyle = \int_{a}^{b} x \frac{1}{b-a} \begin{cases} 0 & x < c \\ \frac{x - c}{d-c} & x \in [c,d] \\ 1 & \text{otherwise} \end{cases} \, dx + \int_{c}^{d} y \frac{1}{d-c} \begin{cases} 0 & y < a \\ \frac{y - a}{b-a} & y \in [a,b] \\ 1 & \text{otherwise} \end{cases} \, dy

Case (1) : a < b \le c < d

\displaystyle = \left ( \int_{a}^{b} x \frac{1}{b-a} 0 \, dx \right ) + \left ( \int_{c}^{d} y \frac{1}{d-c} 1 \, dy \right )

\displaystyle = \frac{c+d}{2} \quad \square

Case (2) : a \le c < b \le d

\displaystyle = \left ( \int_{a}^{c} x \frac{1}{b-a} 0 \, dx + \int_{c}^{b} x \frac{1}{b-a} \frac{x-c}{d-c} \, dx \right ) + \left ( \int_{c}^{b} y \frac{1}{d-c} \frac{y-a}{b-a} \, dy + \int_{b}^{d} y \frac{1}{d-c} 1 \, dy \right )

\displaystyle = \frac{2 x^3 - 3 c x^2}{6(b-a)(d-c)} \bigg|_{c}^{b} + \frac{2 y^3 - 3ay^2 }{6(d-c)(b-a)} \bigg|_{c}^{b} + \frac{y^2}{2(d-c)} \bigg|_{b}^{d}

\displaystyle = \frac{2 (b^3 - c^3) - 3 c (b^2 - c^2) }{6(b-a)(d-c)} + \frac{2 (b^3 - c^3) - 3 a (b^2 - c^2) }{6(d-c)(b-a)} + \frac{d^2 - b^2}{2(d-c)}

\displaystyle = \frac{4 (b^3 - c^3) - 3 (c + a) (b^2 - c^2) }{6(d-c)(b-a)} + \frac{d^2 - b^2}{2(d-c)} \quad \square

Case (3) : a \le c < d \le b

\displaystyle = \left ( \int_{a}^{c} x \frac{1}{b-a} 0 \, dx + \int_{c}^{b} x \frac{1}{b-a} \frac{x-c}{d-c} \, dx + \int_{d}^{b} x \frac{1}{b-a} 1 \, dx \right ) + \left ( \int_{c}^{d} y \frac{1}{d-c} \frac{y-a}{b-a} \, dy \right)

\displaystyle =   \frac{2x^3 - 3cx^2}{6(b-a)(d-c)} \bigg|_{c}^{b} +  \frac{x^2}{2(b-a)} \bigg|_{d}^{b} +  \frac{2y^3 - 3ay^2}{6(b-a)(d-c)} \bigg|_{c}^{d}

\displaystyle =   \frac{2(b^3-c^3) - 3c(b^2-c^2)}{6(b-a)(d-c)} +  \frac{b^2-d^2}{2(b-a)} +  \frac{2(d^3-c^3) - 3a(d^2-c^2)}{6(b-a)(d-c)}

\displaystyle = \frac{4(b^3-c^3) - 3(c+a)(b^2-c^2)}{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} \quad \square

Case (4) : c \le a < b \le d

\displaystyle = \left( \int_{a}^{b} x \frac{1}{b-a} \frac{x-c}{d-c} \, dx \right ) + \left( \int_{c}^{a} y \frac{1}{d-c} 0 \, dy + \int_{a}^{b} y \frac{1}{d-c} \frac{y-a}{b-a} \, dy + \int_{b}^{d} y \frac{1}{d-c} 1 \, dy \right )

\displaystyle =   \frac{2 x^3 - 3 c x^2 }{6(d-c)(b-a)} \bigg|_{a}^{b} +  \frac{2 y^3 - 3 a y^2 }{6(d-c)(b-a)} \bigg|_{a}^{b} +  \frac{y^2}{2(d-c)} \bigg|_{b}^{d}

\displaystyle =  \frac{2 (b^3-a^3) - 3 c (b^2 - a^2) }{6(d-c)(b-a)} +  \frac{2 (b^3-a^3) - 3 a (b^2 -a^2) }{6(d-c)(b-a)}  +  \frac{d^2-b^2}{2(d-c)}

\displaystyle =  \frac{4 (b^3-a^3) - 3 (c + a) (b^2 - a^2) }{6(d-c)(b-a)} + \frac{d^2-b^2}{2(d-c)} \quad \square

Case (5) : c \le a < d \le b

\displaystyle = \left ( \int_{a}^{d} x \frac{1}{b-a} \frac{x-c}{d-c} \, dx + \int_{d}^{b} x \frac{1}{b-a} 1 \, dx \right ) + \left ( \int_{c}^{a} y \frac{1}{d-c} 0 \, dy + \int_{a}^{d} y \frac{1}{d-c} \frac{y-a}{b-a} \, dy \right )

\displaystyle =   \frac{2 x^3 -3 c x^2}{6(b-a)(d-c)} \bigg|_{a}^{d} +  \frac{x^2}{2(b-a)} \bigg|_{d}^{b} +  \frac{2 y^3 -3 a y^2}{6(b-a)(d-c)} \bigg|_{a}^{d}

\displaystyle =   \frac{2 (d^3 - a^3) -3 c (d^2-a^2) }{6(b-a)(d-c)} +  \frac{b^2-d^2}{2(b-a)} +  \frac{2 (d^3-a^3) -3 a (d^2-a^2)}{6(b-a)(d-c)}

\displaystyle =   \frac{4 (d^3 - a^3) -3 (c+a) (d^2-a^2) }{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} \quad \square

Case (6) : c < d \le a < b

\displaystyle = \left ( \int_{a}^{b} x \frac{1}{b-a} 1 \, dx \right ) + \left ( \int_{c}^{d} y \frac{1}{d-c} 0 \, dy \right )

\displaystyle = \frac{a+b}{2}

\displaystyle \therefore \mathbb{E} \left ( \max{\left ( X, Y \right )} \right ) = \begin{cases}  \frac{c+d}{2} & a < b \le c < d \\   \frac{4 (b^3 - c^3) - 3 (c + a) (b^2 - c^2) }{6(d-c)(b-a)} + \frac{d^2 - b^2}{2(d-c)} & a \le c < b \le d \\  \frac{4(b^3-c^3) - 3(c+a)(b^2-c^2)}{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} & a \le c < d \le b \\  \frac{4 (b^3-a^3) - 3 (c + a) (b^2 - a^2) }{6(d-c)(b-a)} + \frac{d^2-b^2}{2(d-c)} & c \le a < b \le d\\  \frac{4 (d^3 - a^3) -3 (c+a) (d^2-a^2) }{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} & c \le a < d \le b \\  \frac{a+b}{2} & c < d \le a < b  \end{cases} \quad \blacksquare
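One of the overlapping cases of Theorem (4) can be verified by simulation as well (hypothetical endpoints chosen so that a \le c < b \le d):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d, n = 0.0, 2.0, 1.0, 3.0, 1_000_000     # hypothetical endpoints, case a <= c < b <= d

empirical = np.maximum(rng.uniform(a, b, n), rng.uniform(c, d, n)).mean()
closed_form = (4*(b**3 - c**3) - 3*(c + a)*(b**2 - c**2)) / (6*(d - c)*(b - a)) \
              + (d**2 - b**2) / (2*(d - c))
print(empirical, closed_form)                      # both approximately 49/24
```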

Summary Table

The following summary table lists the expected value of the maximum of real-valued continuous random variables for the exponential distribution, normal distribution and continuous uniform distribution. The corresponding minimum can be obtained by Theorem (1).

X \sim \text{Exp}(\alpha), \ Y \sim \text{Exp}(\beta) : \displaystyle \frac{1}{\alpha} + \frac{1}{\beta} - \frac{1}{\alpha + \beta}
X \sim \mathcal{N}(\mu, \sigma), \ Y \sim \mathcal{N}(\nu, \tau) : \displaystyle \sqrt{ \sigma^2 + \tau^2 } \phi \left( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2 }} \right ) + (\nu - \mu) \Phi \left ( \frac{\nu - \mu}{\sqrt{\sigma^2 + \tau^2}} \right ) + \mu
X \sim \text{U}(a, b), \ Y \sim \text{U}(c, d) : \displaystyle \begin{cases}  \frac{c+d}{2} & a < b \le c < d \\   \frac{4 (b^3 - c^3) - 3 (c + a) (b^2 - c^2) }{6(d-c)(b-a)} + \frac{d^2 - b^2}{2(d-c)} & a \le c < b \le d \\  \frac{4(b^3-c^3) - 3(c+a)(b^2-c^2)}{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} & a \le c < d \le b \\  \frac{4 (b^3-a^3) - 3 (c + a) (b^2 - a^2) }{6(d-c)(b-a)} + \frac{d^2-b^2}{2(d-c)} & c \le a < b \le d\\  \frac{4 (d^3 - a^3) -3 (c+a) (d^2-a^2) }{6(b-a)(d-c)} + \frac{b^2-d^2}{2(b-a)} & c \le a < d \le b \\  \frac{a+b}{2} & c < d \le a < b  \end{cases}

References

[GrSt01] Grimmett, Geoffrey, and David Stirzaker. Probability and Random Processes. Oxford: Oxford UP, 2001. Print.

[PaRe96] Patel, Jagdish K., and Campbell B. Read. Handbook of the Normal Distribution. 2nd ed. New York: Marcel Dekker, 1996. Print.


Minesweeper Agent


Introduction

Lately I’ve been brushing up on probability, statistics and machine learning and thought I’d play around with writing a Minesweeper agent based solely on these fields. The following is an overview of the game’s mechanics, verification of an implementation, some different approaches to writing the agent and some thoughts on the efficacy of each approach.

Minesweeper

Background

Minesweeper was created by Curt Johnson in the late eighties and later ported to Windows by Robert Donner while at Microsoft. With the release of Windows 3.1 in 1992, the game became a staple of the operating system and has since found its way onto multiple platforms and spawned several variants. The game has been shown to be NP-Complete, but in practice, algorithms can be developed to solve a board in a reasonable amount of time for the most common board sizes.

Specification

Gameplay
An agent, \mathcal{A}, is presented an n \times m grid containing M uniformly distributed mines. The agent’s objective is to expose all the empty grid locations and none of the mines. Information about the mines’ grid locations is gained by exposing empty grid locations which will indicate how many mines exist within a unit (Chebyshev) distance of the grid location. If the exposed grid location is a mine, then the player loses the game. Otherwise, once all empty locations are exposed, the player wins.

\textbf{PLAY-GAME}(\mathcal{A}, n, m, M)  \newline \indent H \leftarrow \textbf{INITIALIZE-HIDDEN}(n,m,M)  \newline \indent V \leftarrow \textbf{INITIALIZE-VISIBLE}(n,m)  \newline  \newline \indent \textbf{do}  \newline \indent \indent (i,j) \leftarrow \mathcal{A}(V)  \newline \indent \indent \textbf{EXPOSE}(H, V, i, j)  \newline \indent \indent \textbf{if} \quad V_{i,j} = \text{*}  \newline \indent \indent \indent \textbf{return} \quad \textit{Failure}  \newline \indent \textbf{while} \quad M \neq \lvert \textbf{U-LOCATIONS}(V) \rvert  \newline  \newline \indent \textbf{return} \quad \textit{Success}
Initialization
The board consists of hidden and visible states. To represent the hidden, H, and visible state, V, of the board, two character matrices of dimension n \times m are used.

Characters ‘0’-‘8’ represent the number of neighboring mines, character ‘U’ represents an unexposed grid location, and character ‘*’ represents a mine.

The neighbors of a grid location (i,j) are the set of grid locations (u,v) such that 1 \le \lVert (u,v) - (i,j) \rVert_{\infty} \le r. By default, r = 1.

\textbf{INITIALIZE-VISIBLE}(n, m)   \newline \indent \textbf{return} \quad \textbf{INITIALIZE}(n,m, \text{U})

\textbf{INITIALIZE-HIDDEN}(n, m, M)  \newline \indent H \leftarrow \textbf{INITIALIZE}(n,m,\text{0})  \newline  \newline \indent S \leftarrow \emptyset  \newline \indent \textbf{while} \quad \lvert S \lvert < M  \newline \indent \indent S \leftarrow S \cup \textbf{RANDOM}(\textbf{LOCATIONS}(H))  \newline   \newline \indent \textbf{foreach} \quad (i, j) \in S  \newline \indent \indent H_{i, j} = \text{*}  \newline \indent \indent \textbf{foreach} \quad (u,v) \in \textbf{NEIGHBORS}(n, m, i, j) \setminus S   \newline \indent \indent \indent H_{u,v} \leftarrow H_{u,v} + 1  \newline  \newline \indent \textbf{return} \quad H

Exposing Cells
The expose behavior can be thought of as a flood fill on the grid, exposing any empty region bordered by grid locations containing mine counts and the boundaries of the grid.

A matrix, F \in \mathbb{Z}^{n \times m}, represents the topography of the board. A value of zero is reserved for sections of the board that have yet to be visited, a value of one for those that have, two for those that are boundaries and three for mines. A stack, S, keeps track of locations that should be inspected.

If a cell location can be exposed, then each of its neighbors will be added to the stack to be inspected. Those neighbors that have already been inspected will be skipped. Once all the reachable grid locations have been inspected, the process terminates.

\textbf{EXPOSE}(H, V, i, j)  \newline \indent \textbf{foreach} \quad (u,v) \in \textbf{LOCATIONS}(H)  \newline \indent \indent \textbf{if} \quad H_{u,v} = \text{0}  \newline \indent \indent \indent F_{u,v} \leftarrow 0  \newline \indent \indent \textbf{elseif} \quad H_{u,v} = \text{*}  \newline \indent \indent \indent F_{u,v} \leftarrow 3  \newline \indent \indent \textbf{else}  \newline \indent \indent \indent F_{u,v} \leftarrow 2  \newline  \newline \indent \textbf{PUSH}(S, (i, j))  \newline \indent \textbf{do}  \newline \indent \indent (u, v) \leftarrow \textbf{POP}(S)  \newline \indent \indent \textbf{if} \quad F_{u,v} = 0  \newline \indent \indent \indent F_{u,v} \leftarrow 1  \newline \indent \indent \indent \textbf{foreach} \quad (r,s) \in \textbf{NEIGHBORS}(H, u, v)  \newline \indent \indent \indent \indent \textbf{PUSH}(S, (r,s))  \newline \indent \indent \textbf{elseif} \quad F_{u,v} \in \lbrace 1, 2 \rbrace  \newline \indent \indent \indent V_{u,v} \leftarrow H_{u, v}  \newline \indent \textbf{while} \quad \textbf{COUNT}(S) > 0
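A minimal Python sketch of this flood-fill expose (an illustration of the idea, not the post’s implementation; `hidden` and `visible` are assumed to be lists of lists of characters) is:

```python
def expose(hidden, visible, i, j):
    """Reveal (i, j); if it is an empty '0' cell, flood outward to the numbered boundary."""
    n, m = len(hidden), len(hidden[0])
    stack, seen = [(i, j)], set()
    while stack:
        u, v = stack.pop()
        if (u, v) in seen:
            continue
        seen.add((u, v))
        visible[u][v] = hidden[u][v]
        if hidden[u][v] != '0':                  # numbered boundary cells (or a mine) stop the fill
            continue
        for du in (-1, 0, 1):                    # push all unit-Chebyshev-distance neighbors
            for dv in (-1, 0, 1):
                if (du, dv) != (0, 0) and 0 <= u + du < n and 0 <= v + dv < m:
                    stack.append((u + du, v + dv))
```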

Verification

Methodology

Statistical tests are used to verify the random aspects of the game’s implementation. I will skip the verification of the game’s logic as it requires use of a number of different methods that are better suited for their own post.

There are two random aspects worth thinking about: the distribution of mines and the distribution of success (i.e., not clicking a mine) for random trials. In both scenarios it made sense to conduct Pearson’s chi-squared test. Under this approach there are two hypotheses:

  • H_{0}: The distribution of experimental data follows the theoretical distribution
  • H_{a}: The distribution of the experimental data does not follow the theoretical distribution

H_{0} is accepted when the test statistic, \chi^2, is less than the critical value, \chi_{\alpha}^2. The critical value is determined by deciding on a significance level, \alpha (e.g., 0.05, 0.01, 0.001), such that the tail area beneath the chi-squared distribution \chi_{k}^2(x) equals \alpha. k is the degrees of freedom in the observation.

Mine distribution

The first aspect to verify was that mines were being uniformly placed on the board. For a standard 9 \times 9 board with 10 mines, the expectation is that each grid location should be assigned \frac{10}{81} N times for N trials. k = 80 for this experiment.

In the above experiment, \chi^2 = 71.78 and \chi_{0.05}^2 = 101.87. Since \chi^2 < \chi_{0.05}^2, this affirms H_{0} and that the implemented distribution of mines is indeed uniform with a statistical significance of 0.05.
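The test statistic and critical value are straightforward to compute; the sketch below (with hypothetical observed counts standing in for the post’s data) mirrors the k = 80, \alpha = 0.05 setup:

```python
import numpy as np
from scipy.stats import chi2

N, cells, mines = 10_000, 81, 10
# hypothetical observed mine-placement counts per grid location over N trials
observed = np.random.default_rng(0).multinomial(N * mines, [1 / cells] * cells)
expected = N * mines / cells                       # 10/81 * N per location

chi_sq = ((observed - expected) ** 2 / expected).sum()
critical = chi2.ppf(1 - 0.05, df=cells - 1)        # chi^2_{0.05} with k = 80 degrees of freedom
print(chi_sq, critical, chi_sq < critical)         # True is consistent with uniform placement
```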

Distribution of successful clicks

The second aspect to verify is that the number of random clicks before exposing a mine follows a hypergeometric distribution. The hypergeometric distribution is appropriate since we are sampling (exposing) without replacement (the grid location remains exposed after clicking). This hypothesis relies on a non-flood-fill exposure.

The distribution has four parameters. The first is the number of samples drawn (number of exposures), the second the number of successes in the sample (number of empty exposures), the third the number of successes in the population (empty grid locations) and the last the size of the population (grid locations): h(nm-M;nm-M,nm-M, nm).

The expected frequencies for the hypergeometric distribution is given by N h(nm - M; nm - M, nm - M, nm) for N trials. k = 70 in this case.

In the above experiment \chi^2 = 47.24 and \chi_{0.05}^2 = 90.53. Since \chi^2 < \chi_{0.05}^2, this affirms H_{0} and that the number of locations exposed prior to exposing a mine follows a hypergeometric distribution with a statistical significance of 0.05.

Also included in the plot is the observed distribution for a flood based exposure. As one might expect, the observed frequency of more exposures decreases more rapidly than that of the non-flood based exposure.

Agents

Methodology

Much like how a human player would learn to play the game, I decided that each model would have knowledge of the game’s mechanics and no prior experience with the game. An alternative class of agents would have prior experience with the game, as would be the case for a human player who had studied other players’ strategies.

To evaluate the effectiveness of the models, each played against a series of randomly generated grids and their respective success rates were captured. Each game was played on a standard beginner’s 9 \times 9 grid containing between [1, 10] mines.

For those models that refer to a probability measure, \mathbb{P}, it is assumed that the measure is determined empirically and treated as an estimate of the probability of an event and not as an a priori measure.

Marginal Model

Development

The first model to consider is the Marginal Model. It is designed to simulate the behavior of a naive player who believes that if he observes a mine at a grid location, then that location should be avoided in future trials.

The model treats the visible board, V, as a matrix of discrete random variables where each grid location is interpreted as either \textit{Empty} or (a) \textit{Mine}. This model picks the grid location with the greatest empirical probability of being empty:

\underset{(i,j)}{\arg\max} \quad \mathbb{P}(X_{i,j} = \textit{Empty})

\textbf{MARGINAL-MODEL}(V)  \newline \indent p_{\textit{max}} \leftarrow \perp  \newline \indent (i, j)_{\textit{max}} \leftarrow \perp  \newline  \newline \indent \textbf{foreach} \quad (i,j) \in \textbf{U-LOCATIONS}(V)  \newline \indent \indent p \leftarrow \mathbb{P}(V_{i,j} = \textit{Empty})  \newline \indent \indent \textbf{if} \quad p > p_{\textit{max}}   \newline \indent \indent  \indent p_{\textit{max}} \leftarrow p  \newline \indent \indent \indent (i,j)_{\textit{max}} \leftarrow (i,j)  \newline  \newline \indent \textbf{return} \quad (i,j)_{\textit{max}}

Test Results

Since the mine distribution is uniform, the model should be equivalent to selecting locations at random. The expected result is that avoiding previously occupied grid locations is an ineffective strategy as the number of mines increases. This does, however, provide an indication of what the success rate should look like for chance alone.

Conditional Model

Development

One improvement over the Marginal Model is to take into account the visual clues made visible when an empty grid location is exposed. Since an empty grid location represents the number of neighboring mines, the Conditional Model can look at these clues to determine whether or not an unexposed grid location contains a mine.

This boils down to determining the probability of \mathbb{P}(\textit{Mine} \lvert \textrm{Evidence}). A simplification in calculating the probability is to assume that each piece of evidence is independent. Under this assumption the result is a Naïve Bayes Classifier:

\mathbb{P}( C = c \vert X = x ) = \dfrac{\mathbb{P}(C = c) \prod \mathbb{P}( X_{i} = x_{i} \vert C = c)}{\prod\mathbb{P}(X_{i} = x_{i})}

As in the case of the Marginal Model, the Conditional Model returns the grid location that it has determined has the greatest probability of being empty given its neighbors:

\underset{(i,j)}{\arg\max} \quad \mathbb{P}(C_{i,j} = \textit{Empty} | N(V_{i,j}) )

\textbf{CONDITIONAL-MODEL}(V, r)  \newline \indent C \leftarrow \lbrace \textit{Empty}, \textit{Mine} \rbrace  \newline \indent S \leftarrow \textbf{U-LOCATIONS}(V)  \newline  \newline \indent T \leftarrow \emptyset    \newline \indent \textbf{foreach} \quad (i,j) \in S  \newline \indent \indent N \leftarrow \textbf{NEIGHBORS}(V, i, j, r)  \newline \indent \indent p_{\textit{max}} \leftarrow \perp  \newline \indent \indent c_{\textit{max}} \leftarrow \perp  \newline \indent \indent \textbf{foreach} \quad c \in C  \newline \indent \indent \indent p \leftarrow \mathbb{P}(C = c)  \newline \indent \indent \indent \textbf{foreach} \quad (u,v) \in N  \newline \indent \indent \indent \indent p \leftarrow p * \mathbb{P}( X_{i, j} = V_{i, j} \vert C = c )  \newline \indent \indent \indent \textbf{if} \quad p > p_{\textit{max}}  \newline \indent \indent \indent \indent p_{\textit{max}} \leftarrow p  \newline \indent \indent \indent \indent c_{\textit{max}} \leftarrow c  \newline \indent \indent \textbf{if} \quad c_{\textit{max}} = \textit {Empty}  \newline \indent \indent \indent T \leftarrow T \cup (i, j)  \newline  \newline \indent \textbf{return} \quad \textbf{RANDOM}(T)
Test Results

The Naïve Bayes Classifier is regarded as an effective approach for a number of different classification tasks. In this case, it doesn’t look like it is effective at distinguishing mines from non-mines. The results are only slightly better than the Marginal Model.

Graphical Model

Development

One shortfall of the Conditional Model is that it takes a greedy approach in determining which action to take. A more sophisticated approach is to consider not just the next action, but the sequence of actions that minimizes the possibility of exposing a mine.

The set of possible observable grids, S, can be thought of as the vertices of a graph whose edges represent the transition from a state, s, to the next observable state, s^{\prime}. Each transition is achieved by performing an action, a \in A, on the state. The specific action, \alpha : S \times S \to A, is chosen from a subset of permitted actions given the state. Each transition has a probability, \mathbb{P}(s^{\prime} \vert s), of taking place.

It is possible to pick a path, \pi, through this graph that minimizes the risk by assigning a reward, \rho : S \to \mathbb{R}, to each state and attempting to identify an optimal path, \pi^{*}_{s}, from the present state that yields the greatest aggregate reward,

\displaystyle \varrho(\pi) = \sum_{(s, s^{\prime}) \in \pi} \rho(s^{\prime}) \mathbb{P}(s^{\prime} \vert s)

Solving for \pi^{*}_{s} is equivalent to solving the Longest Path Problem. Since exposing a location always leads to a new state, the game graph is a directed acyclic graph, so the optimal path can be computed efficiently with a dynamic programming solution.

\displaystyle \pi_{s}^{*} \gets \underset{\pi_{s} \in \Pi_{s}}{\arg\max} \ \sum_{(\sigma, \sigma^{\prime}) \in \pi_{s}} \rho(\sigma^{\prime}) \mathbb{P}(\sigma^{\prime} \vert \sigma)

\textbf{GRAPHICAL-MODEL}(V)  \newline \indent (a, r)_{\textit{max}} \gets (\bot, \bot)  \newline \indent T \gets \emptyset  \newline  \newline \indent \textbf{foreach} \quad U \in \textbf{SUB-GRIDS}(V)  \newline \indent \indent (a, r)_{U} \gets \textbf{OPTIMAL-ACTION}(U)  \newline \indent \indent \textbf{if} \quad r_{U} > r_{\textit{max}}  \newline \indent \indent \indent (a,r)_{\textit{max}} \gets (a,r)_{U}  \newline \indent \indent \indent T \gets \emptyset  \newline \indent \indent \textbf{if} \quad r_{U} = r_{\textit{max}}  \newline \indent \indent \indent T \gets T \cup (a,r)_{U}  \newline  \newline \indent (a, r)_{\textit{max}} \gets \textbf{RANDOM}(T)  \newline  \newline \indent \textbf{return} \quad a_{\textit{max}}  \newline  \newline  \textbf{OPTIMAL-ACTION}(V)  \newline \indent (a, r)_{\textit{max}} \gets (\bot, \bot)  \newline  \newline \indent \textbf{foreach} \quad (i,j) \in \textbf{U-LOCATIONS}(V)  \newline \indent \indent \textbf{foreach} \quad V^{\prime} \in \textbf{OUTCOMES}(V, (i,j))  \newline \indent \indent \indent (a, r)_{V^{\prime}} \gets \textbf{OPTIMAL-ACTION}(V^{\prime})  \newline \indent \indent \indent r \gets r_{V^{\prime}} + \mathbb{P}(V^{\prime} \vert V) * \textbf{REWARD}(V^{\prime})  \newline \indent \indent \indent \textbf{if} \quad r \ge r_{\textit{max}}  \newline \indent \indent \indent \indent (a, r)_{\textit{max}} \gets ((i,j), r)  \newline  \newline \indent \textbf{return} \quad (a,r)_{\textit{max}}  \newline  \newline  \textbf{REWARD}(V)  \newline \indent \textbf{if} \quad \textbf{SUCCESS}(V)  \newline \indent \indent \textbf{return} \quad +100  \newline \indent \textbf{if} \quad \textbf{FAILURE}(V)  \newline \indent \indent \textbf{return} \quad -100  \newline  \newline \indent \textbf{return} \quad +1

From the optimal walk, a sequence of optimal actions is determined by mapping \alpha over the path. Taking the first action gives the optimal grid location to expose given the current visible state of the board.

This description constitutes a Markov Decision Process. As with any Markov process, it is assumed that the Markov property holds; that is, future states depend only upon the current state and not on any prior states. In addition to being a Markov Decision Process, this is also an example of Reinforcement Learning.

The first thing to observe is that the game's state space is astronomical. For a standard beginner's grid there are at most a sexvigintillion (10^{81}) possible grids that a player can encounter, which, as an aside, is on the order of the number of atoms in the observable universe. The set of actions at each state is slightly more manageable, with at most eighty-one actions.

To simplify the state space, I chose to only consider 3 \times 3 boards. When evaluating a full grid, the model enumerates its 3 \times 3 sub-grids, evaluates the optimal sequence of actions for each sub-grid, and takes the action associated with the maximum reward among the evaluated sub-grids as the action to take on the full grid.
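A minimal Python sketch of the OPTIMAL-ACTION recursion over one small board follows. It assumes hypothetical helpers outcomes(board, loc), yielding (next_board, probability) pairs, reward(board), and unexposed(board), which is assumed to return an empty list for terminal (won or lost) boards; these names are mine, not the original implementation's, and the expected future reward is folded into the same probability-weighted sum rather than added afterwards.

def optimal_action(board, outcomes, reward, unexposed):
    # Return (best location to expose, expected aggregate reward from here).
    best_loc, best_value = None, float("-inf")
    locations = unexposed(board)
    if not locations:  # terminal state: nothing left to expose
        return None, 0.0
    for loc in locations:
        value = 0.0
        for next_board, prob in outcomes(board, loc):
            _, future = optimal_action(next_board, outcomes, reward, unexposed)
            value += prob * (reward(next_board) + future)
        if value > best_value:
            best_loc, best_value = loc, value
    return best_loc, best_value

On a full grid, this routine would be run on each 3 \times 3 sub-grid and the action with the greatest expected reward kept, as in the pseudocode above.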

Test Results

The Graphical Model produces results that are only marginally better than those of the Conditional Model.

Semi-deterministic Model

Development

The last model I'm going to talk about is a semi-deterministic model. It works by using the visible grid to infer the topology of the hidden grid and, from the hidden grid, the topology that the visible grid can become.

The grid can be viewed as a graph: each grid location is a vertex, and an edge represents an unexposed grid location's influence on a neighboring grid location's mine count.

For each of the exposed grid locations on the board, v_{i,j}, its neighbors, N(v_{i,j}), are all mines when the number of inbound edges, E_{i,j} = d(v_{i,j}), matches the visible mine count V_{i,j}.

The model produces its inferred version, F, of the influence graph E by using the determined mine locations M.

For each grid location that is exposed and whose inferred influence matches the visible count, each of the neighbors about that location can be exposed, provided it is not already exposed and is not an inferred mine.

From this set of possibilities, a location is chosen at random to expose. When no safe locations can be determined, an alternative model can be used instead.

\textbf{SEMI-DETERMINISTIC-MODEL}(V)  \newline \indent E \leftarrow 0_{n, m}  \newline \indent \textbf{foreach} \quad (i,j) \in \textbf{LOCATIONS}(V)  \newline \indent \indent \textbf{if} \quad V_{i,j} \ne \textit{U}  \newline \indent \indent \indent \textbf{continue}  \newline \indent \indent \textbf{foreach} \quad (u,v) \in \textbf{NEIGHBORS}(V, i, j)  \newline \indent \indent \indent E_{u,v} \leftarrow E_{u,v} + 1  \newline  \newline \indent M \leftarrow \textit{False}_{n,m}  \newline \indent \textbf{foreach} \quad (i,j) \in \textbf{LOCATIONS}(V)  \newline \indent \indent \textbf{if} \quad V_{i,j} \in \lbrace \textit{U}, \textit{0} \rbrace \lor V_{i,j} \neq E_{i,j}  \newline \indent \indent \indent \textbf{continue}  \newline \indent \indent \textbf{foreach} \quad (u,v) \in \textbf{NEIGHBORS}(V, i, j)  \newline \indent \indent \indent \textbf{if} \quad V_{u,v} = \textit{U}  \newline \indent \indent \indent \indent M_{u,v} \leftarrow \textit{True}   \newline  \newline \indent F \leftarrow 0_{n,m}  \newline \indent \textbf{foreach} \quad (i,j) \in \textbf{LOCATIONS}(V)  \newline \indent \indent \textbf{foreach} \quad (u,v) \in \textbf{NEIGHBORS}(V, i, j)  \newline \indent \indent \indent \textbf{if} \quad M_{u,v}  \newline \indent \indent \indent \indent F_{i,j} \leftarrow F_{i,j} + 1  \newline  \newline \indent S \leftarrow \emptyset  \newline \indent \textbf{foreach} \quad (i,j) \in \textbf{LOCATIONS}(V)  \newline \indent \indent \textbf{if} \quad V_{i,j} = \textit{U} \lor F_{i,j} \ne V_{i,j}  \newline \indent \indent \indent \textbf{continue}  \newline \indent \indent \textbf{foreach} \quad (u,v) \in \textbf{NEIGHBORS}(V, i, j)  \newline \indent \indent \indent \textbf{if} \quad V_{u,v} \ne \textit{U} \lor M_{u,v}  \newline \indent \indent \indent \indent \textbf{continue}  \newline \indent \indent \indent S \leftarrow S \cup (u, v)  \newline   \newline \indent \textbf{return} \quad \textbf{RANDOM}(S)
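The same inference, written as a minimal Python sketch: visible[i][j] is assumed to be None for unexposed locations and the neighboring-mine count otherwise, while neighbors and random are placeholder helpers that do not appear in the original implementation.

import random

def semi_deterministic_model(visible, neighbors):
    n, m = len(visible), len(visible[0])
    locations = [(i, j) for i in range(n) for j in range(m)]
    # E: number of unexposed neighbors influencing each location's count
    E = [[0] * m for _ in range(n)]
    for (i, j) in locations:
        if visible[i][j] is not None:
            continue
        for (u, v) in neighbors(i, j):
            E[u][v] += 1
    # M: all unexposed neighbors of a saturated count are inferred mines
    M = [[False] * m for _ in range(n)]
    for (i, j) in locations:
        if visible[i][j] in (None, 0) or visible[i][j] != E[i][j]:
            continue
        for (u, v) in neighbors(i, j):
            if visible[u][v] is None:
                M[u][v] = True
    # F: influence implied by the inferred mine locations alone
    F = [[0] * m for _ in range(n)]
    for (i, j) in locations:
        for (u, v) in neighbors(i, j):
            if M[u][v]:
                F[i][j] += 1
    # S: unexposed, non-mine neighbors of satisfied counts are safe to expose
    S = []
    for (i, j) in locations:
        if visible[i][j] is None or F[i][j] != visible[i][j]:
            continue
        for (u, v) in neighbors(i, j):
            if visible[u][v] is not None or M[u][v]:
                continue
            S.append((u, v))
    return random.choice(S) if S else None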

Test Results

Since the model is a more direct attempt at solving the board, its results are superior to those of the previously presented models. As the number of mines increases, however, it becomes more likely that the model has to fall back on a more probabilistic approach.

Summary

Each of the models evaluated offered incremental improvements over its predecessors. Randomly selecting locations to expose is on par with choosing a location based on previously observed mine locations. The Conditional Model and Graphical Model yield similar results since they both make decisions based on conditional probabilities. The Semi-deterministic Model stands alone as the only model that produced reliable results.

The success rate point improvement between the Conditional and Marginal models is most notable for boards consisting of three mines, and the improvement between the Graphical and Semi-deterministic models is most notable for seven mines. The improvement between the Random and Marginal models is negligible, and the improvement between the Conditional and Graphical models is minor for all mine counts fewer than seven.

Given the mathematical complexity and nondeterministic nature of the machine learning approaches (in addition to the complexity and time involved in implementing them), they don't seem justified when simpler and more deterministic approaches exist. In particular, it seems like most people have implemented their agents using heuristics and algorithms designed to solve constraint satisfaction problems. Nonetheless, this was a good refresher on some of the elementary aspects of probability, statistics, and machine learning.

References

“Classification – Naïve Bayes.” Data Mining Algorithms in R. Wikibooks. 3 Nov. 2010. Web. 30 Oct. 2011.

“Windows Minesweeper.” MinesweeperWiki. 8 Sept. 2011. Web. 30 Oct. 2011.

Kaye, Richard. “Minesweeper Is NP-complete.” [pdf] Mathematical Intelligencer 22.2 (2000): 9-15. Web. 30 Oct. 2011.

Nakov, Preslav, and Zile Wei. “MINESWEEPER, #MINESWEEPER.” 14 May 2003. Web. 14 Apr. 2012.

Sutton, Richard S., and Andrew G. Barto. “3.6 Markov Decision Processes.” Reinforcement Learning: An Introduction. Cambridge, Massachusetts: Bradford Book, 1998. 4 Jan. 2005. Web. 30 Oct. 2011.

Rish, Irina. “An Empirical Study of the Naive Bayes Classifier.” [pdf] IJCAI-01 Workshop on Empirical Methods in AI (2001). Web. 30 Oct. 2011.

Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall/Pearson Education, 2003. Print.

Sun, Yijun, and Jian Li. “Adaptive Learning Approach to Landmine Detection.” [pdf] IEEE Transactions on Aerospace and Electronic Systems 41.3 (2005): 1-9. 10 Jan. 2006. Web. 30 Oct. 2011.

Taylor, John R. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. Sausalito, CA: University Science Books, 1997. Print.