Antimatroid, The

thoughts on computer science, electronics, mathematics


GPU Accelerated Expectation Maximization for Gaussian Mixture Models using CUDA


C, CUDA, and Python source code available on GitHub

Introduction

Gaussian Mixture Models [1, 435-439] offer a simple way to capture complex densities by employing a linear combination of K multivariate normal distributions, each with their own mean, covariance, and mixture coefficient, \pi_{k}, s.t. \sum_{k} \pi_{k} = 1.

\displaystyle p( x ) = \sum_{k = 1}^{K} \pi_{k} p(x \lvert \mu_k, \Sigma_k)


Of practical interest is the learning of the number of components and the values of the parameters. Evaluation criteria, such as the Akaike and Bayesian information criteria, can be used to identify the number of components, or non-parametric models like Dirichlet processes can be used to avoid the matter altogether. We won’t cover these techniques here, but will instead focus on finding the values of the parameters given sufficient training data using the Expectation-Maximization algorithm [3], and doing so efficiently on the GPU. Technical considerations will be discussed and the work will conclude with an empirical evaluation of sequential and parallel implementations for the CPU, and a massively parallel implementation for the GPU for varying numbers of components, points, and point dimensions.

Multivariate Normal Distribution

The multivariate normal distribution with mean \mu \in \mathbb{R}^d, d \in \mathbb{N}_1, and symmetric, positive definite covariance \Sigma \in \mathbb{R}^{d \times d} is given by:

\displaystyle p( x \lvert \mu, \Sigma ) = \frac{1}{\sqrt{(2\pi)^d \lvert \Sigma \rvert }} \exp{\left( - (x - \mu)^{T} \Sigma^{-1} (x - \mu) / 2 \right)}


From a computational perspective, we will be interested in evaluating the density for N values. Thus, a naive implementation would be bounded by \mathcal{O}\left(N d^4\right) due to the matrix determinant in the normalization term. We can improve upon this by computing the Cholesky factorization, \Sigma = L L^T, where L is a lower triangular matrix [6, 157-158]. The factorization requires \mathcal{O} \left ( d^3 \right ) time and computing the determinant becomes \mathcal{O} \left (d \right) by taking advantage of the fact that \det\left(L L^T\right) = \det(L)^2 = \prod_i L_{i,i}^2. Further, we can precompute the factorization and normalization factor for a given parameterization, which leaves us with the Mahalanobis distance given by the quadratic form in the exponential. Naive computation requires performing two matrix-vector operations and finding the inverse of the covariance matrix with worst case behavior \mathcal{O} \left (d^3\right). Leveraging the Cholesky factorization, we instead solve a pair of triangular systems by forward and backward substitution in \mathcal{O} \left (d^2\right) and complete an inner product in \mathcal{O} \left (d\right), as given by L z = x - \mu, L^T y = z, and (x-\mu)^T y. Thus, our pre-initialization time is \mathcal{O} \left (d^3 \right) and density evaluation is \mathcal{O} \left (N d^2 \right). Further optimizations are possible by considering special diagonal cases of the covariance matrix, such as the isotropic, \Sigma = \sigma I, and non-isotropic, \Sigma_{k,k} = \sigma_k, configurations. For robustness, we’ll stick with the full covariance.

\displaystyle \log p( x \lvert \mu, \Sigma ) = - \frac{1}{2} \left( d \log 2\pi + \log \lvert \Sigma \rvert \right ) - \frac{1}{2} (x - \mu)^{T} \Sigma^{-1} (x - \mu)


To avoid numerical issues such as overflow and underflow, we’re going to consider \log p(x \lvert \mu, \Sigma) throughout the remainder of the work. For estimates of the covariance matrix, we will want more samples than the dimension of the data to avoid a singular covariance matrix [4]. Even with this criterion satisfied, it may still be possible to produce a singular matrix if some of the data are collinear and span a subspace of \mathbb{R}^d.
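To make the above concrete, the following is a minimal sketch (not the GitHub implementation; the function name and row-major layout are illustrative) of evaluating \log p(x \lvert \mu, \Sigma) from a precomputed Cholesky factor L. Note that a single forward solve already yields the quadratic form, since (x - \mu)^T \Sigma^{-1} (x - \mu) = z^T z when L z = x - \mu.

```cpp
#include <cmath>
#include <vector>

// Evaluate log N(x | mu, Sigma) given the lower-triangular Cholesky factor L
// of Sigma (row-major, d x d). Illustrative sketch; assumes L is nonsingular.
double logMvnPdf(const std::vector<double>& x, const std::vector<double>& mu,
                 const std::vector<double>& L, int d) {
    // Forward substitution for L z = x - mu: O(d^2).
    std::vector<double> z(d);
    for (int i = 0; i < d; ++i) {
        double s = x[i] - mu[i];
        for (int j = 0; j < i; ++j) s -= L[i * d + j] * z[j];
        z[i] = s / L[i * d + i];
    }
    // Mahalanobis term z^T z and log |Sigma| = 2 * sum_i log L_{i,i}.
    double maha = 0.0, logDet = 0.0;
    for (int i = 0; i < d; ++i) {
        maha += z[i] * z[i];
        logDet += 2.0 * std::log(L[i * d + i]);
    }
    return -0.5 * (d * std::log(2.0 * 3.141592653589793) + logDet + maha);
}
```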

Expectation Maximization

From an unsupervised learning point of view, GMMs can be seen as a generalization of k-means allowing for partial assignment of points to multiple classes. A possible classifier is given by k^{*} = \arg\max_k \, \log \pi_{k} + \log p(x \lvert \mu_k, \Sigma_k). Alternatively, multiple components can be used to represent a single class and we argmax over the corresponding subset sums. The utility of GMMs goes beyond classification, and they can be used for regression as well. The Expectation-Maximization (EM) algorithm will be used to find the parameters of the model by starting with an initial guess for the parameters given by uniform mixing coefficients, means determined by the k-means algorithm, and spherical covariances for each component. The algorithm then iterates between computing probabilities given a fixed set of parameters and updating those parameters by maximizing the log-likelihood of the data:

\displaystyle \mathcal{L} \left( \mathcal{D} \lvert \mu, \Sigma \right) = \sum_{n = 1}^{N} \log p(x_n) = \sum_{n=1}^{N} \log{ \left [ \sum_{k = 1}^{K} \pi_{k} p \left( x_n \lvert \mu_k, \Sigma_k \right ) \right ] }


Because we are dealing with exponents and logarithms, it’s very easy to end up with underflow and overflow situations, so we’ll continue the trend of working in log-space and also make use of the “log-sum-exp trick” to avoid these complications:

\displaystyle \log p( x ) = a + \log \left[ \sum_{k = 1}^{K} \exp{ \left( \log \pi_{k} + \log p(x \lvert \mu_k, \Sigma_k) - a \right ) } \right ]


Where the a term is the maximum exponential argument within a stated sum. Within the expectation stage of the algorithm we will compute the posterior distributions of the components conditioned on the training data (we omit the mixing coefficient since it cancels out in the maximization steps of \mu_k and \Sigma_k, and account for it explicitly in the update of \pi_k):

\displaystyle \gamma_{k, n} = \frac{ p \left ( x_n \lvert \mu_k, \Sigma_k \right ) }{ p(x) } \qquad \Gamma_k = \sum_{n=1}^{N} \gamma_{k, n}


\displaystyle \log \gamma_{k, n} =  \log p \left ( x_n \lvert \mu_k, \Sigma_k \right )  - \log p(x) \qquad \log \Gamma_k = a + \log \left [ \sum_{n=1}^{N} \exp{ \left( \log \gamma_{k, n} - a \right )} \right ]
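For illustration, a minimal log-sum-exp helper might look like the sketch below (the function name is an assumption, not taken from the source). \log p(x) is obtained by applying it to the K values \log \pi_k + \log p(x \lvert \mu_k, \Sigma_k), and \log \Gamma_k by applying it to the N values \log \gamma_{k, n}.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// log(sum_i exp(v[i])) computed stably by factoring out the maximum term a.
// Assumes v is nonempty.
double logSumExp(const std::vector<double>& v) {
    double a = *std::max_element(v.begin(), v.end());
    double sum = 0.0;
    for (size_t i = 0; i < v.size(); ++i)
        sum += std::exp(v[i] - a);   // every argument is <= 0, so no overflow
    return a + std::log(sum);
}
```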


The new parameters are resolved within the maximization step:

\displaystyle \pi_{k}^{(t+1)} = \frac{ \pi_{k}^{(t)} \Gamma_k }{ \sum_{i=1}^{K} \pi_{i}^{(t)} \Gamma_i } \qquad \log \pi_{k}^{(t+1)} = \log \pi_{k}^{(t)} + \log \Gamma_k - a - \log \left [ \sum_{i=1}^{K} \exp{ \left( \log \pi_{i}^{(t)} + \log \Gamma_i - a \right )} \right ]


\displaystyle \mu_k^{(t+1)} = \frac{ \sum_{n=1}^{N} x_n \gamma_{k, n} }{ \Gamma_k  } \qquad \mu_k^{(t+1)} = \frac{ \sum_{n=1}^{N} x_n \exp{\left( \log \gamma_{k, n} \right)} }{ \exp{\left( \log \Gamma_k \right)}  }


\displaystyle \Sigma_k^{(t+1)} = \frac{ \sum_{n=1}^{N} (x_n - \mu_k^{(t+1)}) (x_n - \mu_k^{(t+1)})^T \gamma_{k, n} }{ \Gamma_k  }


\displaystyle \Sigma_k^{(t+1)} = \frac{ \sum_{n=1}^{N} (x_n - \mu_k^{(t+1)}) (x_n - \mu_k^{(t+1)})^T \exp{\left( \log \gamma_{k, n} \right)} }{ \exp{\left( \log \Gamma_k \right)}  }


The algorithm continues back and forth between expectation and maximization stages until the change in log-likelihood is less than some epsilon, or a maximum number of user-specified iterations has elapsed.
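The overall control flow can be sketched as below, where the E and M step callables are supplied by the caller; this is a hypothetical skeleton rather than the structure of the actual source.

```cpp
#include <cmath>
#include <functional>

// Iterate E and M steps until the log-likelihood changes by less than epsilon
// or maxIterations is reached. The E-step returns the current log-likelihood;
// the M-step updates the parameters in place.
void fitEm(const std::function<double()>& expectationStep,
           const std::function<void()>& maximizationStep,
           int maxIterations, double epsilon) {
    double previous = -INFINITY;
    for (int i = 0; i < maxIterations; ++i) {
        double current = expectationStep();
        maximizationStep();
        if (std::fabs(current - previous) < epsilon) break;
        previous = current;
    }
}
```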

Implementations

Sequential The per-iteration complexity is given by \mathcal{O}\left(2 K N d^2 + K N d + 2K + N + K d^3\right). We expect d \le K < N because too many dimensions leads to a lot of dead space and too many components results in overfitting of the data. Thus, the dominating term for sequential execution is \mathcal{O}\left(2 K N d^2 \right).

Parallel There are two natural data parallelisms that appear in the algorithm: the calculation of \mathcal{L} and \gamma is parallel across points, while the probability densities and parameter updates have natural parallelisms across components. Each POSIX thread runs the full iterative algorithm with individual stages coordinated by barrier synchronization, as sketched below. The resulting complexity is given by \mathcal{O}\left(\frac{2}{P} d^2 K N \right) for work coordinated across P processors.
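A skeletal view of that coordination using POSIX barriers is given below; the staged work functions are hypothetical stand-ins for the expectation and maximization computations.

```cpp
#include <pthread.h>

static pthread_barrier_t barrier;   // initialized once with the thread count P

// Hypothetical per-thread body: each stage operates on the thread's slice of
// the points/components, then all threads rendezvous before the next stage.
void* emWorker(void* arg) {
    for (int iteration = 0; iteration < 100 /* or until converged */; ++iteration) {
        // computeLogDensities(arg);      // parallel across points
        pthread_barrier_wait(&barrier);
        // accumulatePartialSums(arg);    // parallel across components
        pthread_barrier_wait(&barrier);
        // updateParameters(arg);
        pthread_barrier_wait(&barrier);
    }
    return 0;
}

// Setup (error handling omitted):
//   pthread_barrier_init(&barrier, NULL, P);
//   pthread_create(&threads[p], NULL, emWorker, &workerArgs[p]) for each p.
```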

[Figure: CUDA implementation flow]

Massively Parallel The parallel implementation can be taken and mapped over to the GPU with parallelism taken across points and components depending on the terms being computed. There are several types of parallelism that we will leverage under the CUDA programming model. For the calculation of \log p\left(x | \mu_k, \Sigma_k \right) we compute each point in parallel by forming a grid of one dimensional blocks, and use streams with event synchronization to carry out each component in parallel across the streaming multiprocessors. Calculation of the log-likelihood and \log \gamma_{k,n} is done by computing and storing \log p(x), then updating the storage for \log p\left(x|\mu_k,\Sigma_k\right), and then performing a parallel reduction over \log p(x) to produce the log-likelihood. Parallel reductions are a core task and are implemented by first padding the input array of points up to the next power of two, then reducing each block using shared memory, and applying a linear map to the memory so that successive block reductions can be applied. Several additional approaches are discussed in [5]. Once the log-likelihood is computed, the streams are synchronized with the host and the result is copied from the device back to the host. To compute \log \Gamma_k, \log \gamma_{k,n} is copied to a working memory and a maximum parallel reduction is performed. The resulting maximum is used in a separate exponential map for numerical stability when computing the parallel reduction of each component to yield \log \Gamma_k. Updates to the means and covariances are performed by mapping each term to a working memory allocated for each component’s stream and executing a parallel reduction to yield the updated mean and covariance. Once all component streams have been synchronized, the mixture coefficients and Cholesky decompositions of the covariances are computed with a single kernel invocation parallel in the number of components.

The main design consideration was whether or not to use streams. For larger numbers of components, streams result in improved runtime performance; however, they come at the cost of increased memory usage, which limits the size of problems an end user can study with the implementation. Because the primary design goal is performance, the increase in memory usage was preferred over using less memory and executing each component sequentially.

To optimize the runtime of the implementation, nvprof along with the NVIDIA Visual Profiler was used to identify performance bottlenecks. The original implementation was a naive port of the parallel C code which required frequent memory transfers between host and device, resulting in significant CUDA API overhead that dominated the runtime. By transferring and allocating memory on the device beforehand, the implementation could execute primarily on the GPU and eliminate the API overhead. The second primary optimization was using streams and events for parallelization of the component probability densities and parameter updates in the maximization step. In doing so, this allowed for a K-fold reduction since the component calculations would be performed in parallel. The next optimization step was to streamline the parallel reductions by using block reductions against fast shared block memory, minimizing the number of global memory writes, instead of performing iterated reductions against sequential addressing that performed global memory reads and writes for each point. The final optimization step was to use pinned host memory to enable zero-copy transfers from DRAM to the GPU over DMA.

Evaluation

To evaluate the implementations we need a way of generating GMMs and sampling data from the resulting distributions. To sample from a standard univariate normal distribution one can use the Box-Muller transform, Ziggurat method, or ratio-of-uniforms method [7]. The latter is used here due to its simplicity and efficiency. Sampling from the multivariate normal distribution can be done by sampling a standard normal vector \eta \sim \mathcal{N}(0 ,I_d) and computing \mu + \Sigma^{1/2} \eta, where \Sigma^{1/2} can be computed by Eigendecomposition, \Sigma^{1/2} = Q \Delta^{1/2} Q^{-1}, or Cholesky factorization, \Sigma = L L^T, \Sigma^{1/2} = L. The latter is used since it is more efficient. The GMM describes a generative process whereby we pick a component at random with probability given by its mixture coefficient and then sample the underlying \mathcal{N}(\mu_k, \Sigma_k) distribution, and perform this process for the desired number of points.
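A sketch of drawing one sample from \mathcal{N}(\mu, \Sigma) given its Cholesky factor L is shown below; the names and row-major layout are assumptions, and std::normal_distribution stands in for the ratio-of-uniforms sampler used in the actual code.

```cpp
#include <random>
#include <vector>

// Draw x = mu + L * eta with eta ~ N(0, I_d); L is the lower-triangular
// Cholesky factor of Sigma stored row-major.
std::vector<double> sampleMvn(const std::vector<double>& mu,
                              const std::vector<double>& L, int d,
                              std::mt19937& rng) {
    std::normal_distribution<double> stdNormal(0.0, 1.0);
    std::vector<double> eta(d);
    std::vector<double> x(mu);
    for (int i = 0; i < d; ++i) eta[i] = stdNormal(rng);
    for (int i = 0; i < d; ++i)
        for (int j = 0; j <= i; ++j)
            x[i] += L[i * d + j] * eta[j];
    return x;
}
```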

The matter of generating GMMs is more interesting. Here we draw \pi_i = X_i / \sum_{j} X_j for X_i \sim \mathcal{U}(0, 1); alternatively, one could draw \pi \sim \text{Dir}(\alpha). Means are drawn by \mu \sim \mathcal{N}(0, a I_d) with a > 1 so that means are relatively spread out in \mathbb{R}^{d}. The more exciting prospect is how to sample the covariance matrix. This is where the Wishart distribution, \Sigma \sim W(I_d, d, n) for n > d - 1, comes in handy. The Wishart distribution is a model of what the sample covariance matrix should look like given a series of n x_i \sim \mathcal{N}(0, I_d) vectors. Based on a \mathcal{O}\left(d^2\right) method by [8], [9] gives an equally efficient method for sampling \Sigma^{1/2} = L by letting L_{i,i}^2 \sim \chi^2(n - i) and L_{i,j} \sim \mathcal{N}(0, 1) for 0 \le i < d and 0 \le j < i.
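A sketch of that construction follows; it draws the diagonal entries as square roots of \chi^2 variates per the recipe above, and is illustrative rather than the GitHub implementation.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Sample a lower-triangular L so that Sigma = L L^T ~ W(I_d, d, n), n > d - 1.
// L is returned row-major with zeros above the diagonal.
std::vector<double> sampleWishartCholesky(int d, double n, std::mt19937& rng) {
    std::normal_distribution<double> stdNormal(0.0, 1.0);
    std::vector<double> L(d * d, 0.0);
    for (int i = 0; i < d; ++i) {
        // Degrees of freedom shrink as we move down the diagonal.
        std::chi_squared_distribution<double> chiSq(n - i);
        L[i * d + i] = std::sqrt(chiSq(rng));
        for (int j = 0; j < i; ++j)
            L[i * d + j] = stdNormal(rng);
    }
    return L;
}
```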

To evaluate the performance of the different implementations, the wall clock time taken to run the algorithm on a synthetic instance was measured by varying each of the N, K, and d parameters while holding the other two fixed. From an end user perspective, wall clock time is preferable to the time the operating system actually devoted to the problem, since wall clock time is what the user actually experiences. There will be variability in the results since each instance requires a different number of iterations for the log-likelihood to converge. Tests were conducted on a Xeon 1245 v5 3.5 GHz system with 32 GB of memory and an NVIDIA GTX 1060 6GB graphics card with 1280 cores.

[Figures 2-5: runtime results for varying numbers of points, components, and dimensions]

Since the parameter space is relatively large, Figures 2-5 look at varying one parameter while fixing the others to demonstrate the relative merits of each approach. When the number of points dominates, the CUDA approach tends to be 18x faster; the Parallel approach tends to be 3x faster when the dimension is high; and CUDA is suitable when the number of components is high, giving a 20x improvement relative to the sequential approach. Thus, when dealing with suitably large datasets, the CUDA based implementation is preferable, delivering superior runtime performance without sacrificing quality.

It is important to note that the results obtained from the CUDA solution may differ from those of the sequential and parallel approaches. This is due to nondeterministic round-off errors associated with executing parallel reductions compared to sequential reductions [2], and differences in the handling of floating point values on the GPU [10], notably the presence of fused multiply-add on NVIDIA GPUs, which is more accurate than what is frequently implemented in CPU architectures. The following two synthetic data sets illustrate typical results of the three schemes:

[Figure: example clustering results from the three implementations]

Conclusion

This work demonstrated the utility of using NVIDIA GPUs to train Gaussian mixture models by the Expectation-Maximization algorithm. Speedups as high as 20x were observed on synthetic datasets by varying the number of points, components, and data dimension while leaving the others fixed. It is believed that further speedups should be possible with additional passes, and the inclusion of metric data structures to limit which data is considered during calculations. Future work would pursue more memory efficient solutions on the GPU to allow for larger problem instances, and focus on providing higher level language bindings so that the implementation can be better utilized in traditional data science toolchains.

References

  1. Bishop, C. M. Pattern recognition and machine learning. Springer, 2006.
  2. Collange, S., Defour, D., Graillat, S., and Iakymchuk, R. Numerical reproducibility for the parallel reduction on multi- and many-core architectures. Parallel Computing 49 (2015), 83-97.
  3. Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) (1977), 1-38.
  4. Fan, J., Liao, Y., and Liu, H. An overview of the estimation of large covariance and precision matrices. The Econometrics Journal 19, (2016) C1-C32.
  5. Harris, M. Optimizing CUDA. SC07: High Performance Computing with CUDA (2007).
  6. Kincaid, D., and Cheney, W. Numerical analysis: mathematics of scientific computing. 3 ed. Brooks/Cole, 2002.
  7. Kinderman, A. J., and Monahan, J. F. Computer generation of random variables using the ratio of uniform deviates. ACM Transactions on Mathematical Software (TOMS) 3, 3 (1977), 257-260.
  8. Odell, P., and Feiveson, A. A Numerical procedure to generate a sample covariance matrix. Journal of the American Statistical Association 61, 313 (1966), 199-203.
  9. Sawyer, S. Wishart distributions and inverse-wishart sampling. URL: http://www.math.wustl.edu/~sawyer/hmhandouts/Wishart.pdf (2007).
  10. Whitehead, N., and Fit-Florea, A. Precision and performance: Floating point and IEEE 754 compliance for NVIDIA GPUs. NVIDIA whitepaper (2011).

A Greedy Approximation Algorithm for the Linear Assignment Problem


Starting today, I will be posting some of the related source code for articles on GitHub.

Introduction

The Linear Assignment Problem (LAP) is concerned with uniquely matching an equal number of workers to tasks, n, such that the overall cost of the pairings is minimized. A polynomial time algorithm was developed in the late fifties by [6], and further refined by [9], called the Hungarian method, so named after the work of Hungarian mathematicians König and Egerváry, whose theorems in the 1930s form the basis for the method. While the Hungarian method can solve LAP instances in \mathcal{O}\left(n^3\right) time, we wish to find faster algorithms even if it means sacrificing optimality in the process. Here we examine a greedy \alpha-approximation algorithm with \mathcal{O}\left(n^2 \log n \right) runtime in terms of its approximation factor and compare it empirically to the Hungarian method.

Linear Assignment Problem

\displaystyle \begin{aligned} C_n = \min & \sum_{i=1}^{n} \sum_{j=1}^{n} M_{i,j} x_{i,j} \\ s.t. & \sum_{i=1}^{n} x_{i,j} = 1, \quad j = 1, \ldots, n \\ & \sum_{j=1}^{n} x_{i,j} = 1, \quad i = 1, \ldots, n \end{aligned}


The above linear program has cost, M \in \mathbb{Z}_{+}^{n \times n}, and assignment, x \in \lbrace 0,1 \rbrace^{n \times n}, matrices that specify the terms of the LAP. This is equivalent to finding a perfect matching in a weighted bipartite graph. A minimal cost may have several possible assignments, but we are interested in finding just one. It is assumed that no one worker can do all jobs more efficiently by themselves than distributing the work across all workers. Likewise, if the costs are thought of as durations, then the minimum cost is the minimum sequential rather than parallel time taken to complete the tasks.

From a practical point of view, we may relax the integral constraint on M and allow all positive real-valued costs. For instances where there are more jobs than workers, and vice versa, dummy entries valued greater than the existing maximum may be added. Minimizing the cost is the default objective, but the maximum cost can be found by finding the optimal assignment for M^{\prime}_{i,j} = M_{max} - M_{i,j}, then finding the cost relative to M.

Algorithms

Brute Force Rather than using the mathematical programming or graph theoretic representation of the problem, we can instead view the problem as finding the assignment that minimizes the cost out of all possible assignments:

\displaystyle \pi^{*} = \underset{\pi \in \Pi_n}{\arg\min} \sum_{i=1}^{n} M_{i, \pi_i}


There are n! such assignments, which can be produced using an iterative version of Heap’s algorithm [5] in \mathcal{O}\left(n!\right) time assuming one does differential scoring (as opposed to calculating the score for each permutation, which would result in an \mathcal{O}\left(n^2 (n-1)!\right) algorithm).

Random The random algorithm selects a permutation \pi \in \Pi_n uniformly from the set of all possible assignment permutations in \mathcal{O}\left(n\right) time using the Fisher-Yates shuffle [4]. This obviously does not produce an optimal or near-optimal solution, but serves as a straw man to compare other results.

Greedy The greedy heuristic continues to cover the row and column of the smallest uncovered entry in the cost matrix until all entries are covered. The resulting set of entries then constitutes the assignment of workers to jobs. An inefficient \mathcal{O}\left(n^3\right) algorithm can be used to find the smallest entry every iteration, or a more efficient \mathcal{O}\left(n^2 \log n\right) result can be obtained through the use of a sorted, array-indexed hybrid mesh and queue. Let \texttt{QNode} represent a tuple consisting of: a row, column, and value; the previous entry in the matrix \le this value and the next entry in the matrix \ge this value; and the \texttt{QNode}s (left, above, right, below) that are adjacent to this node.


Algorithm 1 A greedy algorithm for the LAP.


  • \textbf{procedure } \textsc{Greedy}(M)
    • A[i] \gets \bot \text{ for } i = 0 \ldots n - 1
    • Q[i] \gets \texttt{QNode} \text{ for } i = 0 \ldots n^2 - 1
    • \textsc{LinkMesh}(Q) // Adjacent node left, above, right, below properties
    • \textsc{Sort}(Q) // Sort in ascending order by node value
    • \textsc{LinkQueue}(Q) // Adjacent node previous and next properties
    • Q_{min} \gets Q[0]
    • \textbf{while } Q_{min} \neq nil \textbf{ do}
      • A[ Q_{min} \rightarrow row ] \gets Q_{min} \rightarrow col
      • Q_{min} \gets \textsc{DeleteNode}(Q, Q_{min}) // Deletes row and col of Q_{min}
    • \textbf{end while}
    • \textbf{return } A
  • \textbf{end procedure}

Allocating and linking for assignment is \mathcal{O}\left(n\right); mesh \mathcal{O}\left(n^2\right); queue \mathcal{O}\left(2n^2\log n + n^2\right). Therefore, initialization requires \mathcal{O}\left(n^2 \log n\right) time. The body of the loop requires a constant time assignment of worker to job, and \mathcal{O}\left(2k - 1\right) time to remove the row and column from a k \times k matrix using a modified depth first search. Thus, the loop itself accounts for \mathcal{O}\left(n^2\right) time. The resulting time complexity is therefore \mathcal{O}\left(n^2 \log n\right) \square.
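For illustration, the same greedy assignment can be produced by sorting all n^2 entries and scanning them while tracking covered rows and columns, which is also \mathcal{O}\left(n^2 \log n\right); this compact sketch trades the linked mesh and queue of Algorithm 1 for extra bookkeeping and is not the implementation from the repository.

```cpp
#include <algorithm>
#include <vector>

// Greedy LAP: repeatedly take the smallest uncovered entry and assign its row
// to its column. M is an n x n cost matrix stored row-major; A[row] = col.
std::vector<int> greedyAssign(const std::vector<double>& M, int n) {
    struct Entry { double value; int row, col; };
    std::vector<Entry> entries;
    entries.reserve(n * n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            entries.push_back({M[i * n + j], i, j});
    std::sort(entries.begin(), entries.end(),
              [](const Entry& a, const Entry& b) { return a.value < b.value; });
    std::vector<int> A(n, -1);
    std::vector<bool> rowCovered(n, false), colCovered(n, false);
    for (size_t k = 0; k < entries.size(); ++k) {
        const Entry& e = entries[k];
        if (rowCovered[e.row] || colCovered[e.col]) continue;
        A[e.row] = e.col;                  // smallest uncovered entry wins
        rowCovered[e.row] = colCovered[e.col] = true;
    }
    return A;
}
```

How ties among equal entries are broken depends on the sort order, which, as the example below shows, can change the resulting cost.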

\displaystyle \begin{pmatrix} 62 & 31 & 79 & \fbox{6} & 21 & 37 \\ 45 & 27 & 23 & 66 & \fbox{9} & 17 \\ 83 & 59 & 25 & 38 & 63 & \fbox{25} \\ \fbox{1} & 37 & 53 & 100 & 80 & 51 \\ 69 & \fbox{72} & 74 & 32 & 82 & 31 \\ 34 & 95 & \fbox{61} & 64 & 100 & 82 \\ \end{pmatrix} \quad \begin{pmatrix} 62 & 31 & 79 & \fbox{6} & 21 & 37 \\ 45 & 27 & 23 & 66 & \fbox{9} & 17 \\ 83 & 59 & \fbox{25} & 38 & 63 & 25 \\ \fbox{1} & 37 & 53 & 100 & 80 & 51 \\ 69 & 72 & 74 & 32 & 82 & \fbox{31} \\ 34 & \fbox{95} & 61 & 64 & 100 & 82 \\ \end{pmatrix}

Breaking ties for the minimum uncovered value can result in different costs. This drawback is shown in the above example where choosing 25 at (3,6) yields a minimum cost of 174, whereas the one at (3,3) gives a minimum cost of 167. The next progression in the design of the greedy algorithm would be to try all minimum positions and keep the top k performing paths.

Hungarian The general idea behind the Kuhn-Munkres algorithm is that if we are given an initial assignment, we can make further assignments and potentially reassign workers until all workers have been tasked with a job. The high-level sketch of the algorithm starts with an initial assignment. While we have jobs that are unassigned, we look for qualified workers, i.e., the zero entries. If a worker is already assigned to a job, but is also qualified for another, then we prime the alternative and continue to the next qualified worker, but if that is the only job the worker is qualified for, then we’d like to reassign any other worker already tasked to that job. This leads to a natural ripple effect represented by an alternating path of starred and primed entries. In Munkres’ paper [9] “starred” zeros represent assignments of workers to jobs, and “primed” zeros are alternative assignments. By flipping the bits of the path, we reassign workers to their alternative tasks while ensuring the assignment continues to be minimal by construction. After assigning as many workers as we can, we then deduct the lowest cost to create a new qualified worker. Thus, every iteration we are guaranteed to make positive progress towards our goal of finding an optimal assignment. This scheme results in worst case \mathcal{O}\left(n^3\right) time to complete.


Algorithm 2 The Hungarian method for the LAP.


  • \textbf{procedure } \textsc{HungarianMethod}(M)
    • M_{i,j} \gets M_{i,j} - \min_j M_{i,j} \text{ for } i = 0 \ldots n - 1
    • M_{i,j} \gets M_{i,j} - \min_i M_{i,j} \text{ for } j = 0 \ldots n - 1
    • Star the first uncovered zero in row i, cover the corresponding column j for i = 0 \ldots n - 1
    • \textbf{while } All columns not covered
      • \textbf{while } Uncovered zeros
        • Prime the current uncovered zero
        • \textbf{if } There’s a starred zero in this row
          • Uncover the starred zero’s column and cover the row
        • \textbf{else }
          • Find an alternating augmented path from the primed zero
          • Unstar the starred zeros on the path and star the primed zeros on the path
          • Remove all the prime markings and cover all starred zeros
          • \textbf{break}
        • \textbf{end if}
      • \textbf{end while}
      • \textbf{if } Found path
        • \textbf{continue}
      • \textbf{end if}
      • M^* = \min M_{i,j} over all uncovered i, j
      • M_{i,j} = M_{i,j} - M^* for all uncovered columns j
      • M_{i,j} = M_{i,j} + M^* for all covered rows i
    • \textbf{end while }
    • \textbf{return} Starred zeros // These are all the assignments
  • \textbf{end procedure}

To further illustrate the algorithm, consider the following example where starred entries are denoted by red, and primed entries by green:

[Figure: worked example of the Hungarian method]

Analysis

The prevailing convention in the literature is to look at the approximation factor, \alpha, to determine how close the results of an approximation algorithm are to optimal [10]. Here this ratio is the expected assignment cost produced by the algorithm under test divided by the expected minimum assignment cost. Let M_{i,j} \sim \text{Exp}(1) be an n \times n standard exponential random cost matrix. We resort to the exponential distribution for its ease of analysis and prominence in related literature. Cf. the works of [7], [8] for analysis based on M_{i,j} \sim \mathcal{U}(0,1).

Exponential Distribution Properties Let X \sim \text{Exp}(\lambda) have cumulative distribution function F_X(x) = 1 - \exp{\left(-\lambda x\right)} and expectation \mathbb{E}(X) = \lambda^{-1}. The distribution demonstrates the memoryless property for expectations: \mathbb{E}(X \lvert X > a) = \mathbb{E}(X) + a. Define the order statistic X_{1:n} = \min \lbrace X_{1}, \ldots, X_{n} \rbrace to be the minimum of n draws from \text{Exp}(\lambda). Then X_{1:n} \sim \text{Exp}(n \lambda) [2] with expectation \mathbb{E}(X_{1:n}) = \left(n \lambda\right)^{-1}. If Y_n = \sum_{i = 1}^{n} X_i then Y_n \sim \text{Gamma}(n, \lambda) with expectation \mathbb{E}(Y_n) = n \lambda^{-1}.

Expected Minimum Cost The expected minimum assignment cost for M is given by [1]:

\displaystyle \mathbb{E}(C_n) = \sum_{k = 1}^{n} \frac{1}{k^2} = H_{n}^{(2)}

This is the generalized harmonic number of order two, which converges to \zeta(2) = \pi^2/6. For the generalized harmonic numbers H_{n}^{(k)}, \lim_{n\to\infty} H_{n}^{(k)} = \zeta(k) for k > 1.

Greedy The minimum value of an n \times n matrix is given by the order statistic M_{1:n^2} with expectation \mathbb{E}(M_{1:n^2}) = n^{-2}. The expected value of the minimum cost assignment is not just \sum_{i=0}^{n-1} (n-i)^{-2} because the expectation doesn’t take into account the previous iteration’s minimum value. To accomplish this we make use of the memoryless property of the exponential distribution to observe that the expected difference in minimums between iterations is the expected minimum value of the remaining k \times k matrix, \mathbb{E}(M_{1:k^2}). If we add up all these differences we get the expected minimum value at the k’th iteration; summing all these expectations then yields the expected minimum cost assignment:

\displaystyle \mathbb{E}(C_n) = \sum_{i=0}^{n-1} \sum_{j=0}^{i} \frac{1}{(n - j)^2} = \sum_{j=0}^{n-1} \frac{(n-j)}{(n-j)^2} = \sum_{j=0}^{n-1} \frac{1}{n-j} = H_n

This is the harmonic number of order one which does not converge. The resulting approximation factor is:

\displaystyle \alpha_n = \frac{H_n}{H_n^{(2)}}


Random The random algorithm will simply select an assignment permutation, so we are just adding up n \text{Exp}(1) distributed random variables leading to an expected cost of:

\displaystyle \mathbb{E}(C_n) = \sum_{i=1}^n \mathbb{E}(M_{i, \pi_i}) = n


And approximation factor:

\displaystyle \alpha_n = \frac{n}{H_n^{(2)}}


From this analysis one concludes that the greedy algorithm has an unbounded approximation factor that grows significantly slower than that of randomly selecting assignments.
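A quick numeric illustration of that growth, assuming nothing beyond the two formulas above:

```cpp
#include <cstdio>

// Print the greedy (H_n / H_n^(2)) and random (n / H_n^(2)) approximation
// factors for a few values of n.
int main() {
    double h1 = 0.0, h2 = 0.0;
    for (int n = 1; n <= 1000000; ++n) {
        h1 += 1.0 / n;
        h2 += 1.0 / (static_cast<double>(n) * n);
        if (n == 10 || n == 1000 || n == 1000000)
            std::printf("n=%7d  greedy=%6.2f  random=%12.1f\n", n, h1 / h2, n / h2);
    }
    return 0;
}
```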

Evaluation

[Figures 1-2: approximation factor and runtime results]

To illustrate the preceding results, Figure 1 shows the approximation factor for the greedy algorithm implementations against the derived approximation factor. The simulated results are based on 120 n \times n standard exponentially distributed matrices for 1 \le n \le 1000. Using the same conventions for the approximation factor, Figure 2 illustrates the runtime characteristics of the algorithms after rejecting outliers due to system fluctuations. Results were obtained from source code compiled with -O3 flags and run on a Xeon E3-1245 v5 3.5 GHz system with 32 GB of 2133 MHz DDR4 RAM. The observed runtimes coincide with the theoretical time complexities shown in Table 2.

Solver MSE
GREEDY-EFFICIENT 0.002139
GREEDY-NAIVE 0.014161
HUNGARIAN 0.232998
Table 1: Mean square error of fitted model to mean runtime for each solver. Models given by the corresponding time complexity. Fit by Levenberg-Marquardt.


Summary

Brute: complexity \mathcal{O}\left(n!\right), \alpha_n = 1
Random: complexity \mathcal{O}\left(n\right), \alpha_n = n / H_n^{(2)}
Greedy: complexity \mathcal{O}\left(n^2 \log n\right), \alpha_n = H_n / H_n^{(2)}
Hungarian: complexity \mathcal{O}\left(n^3\right), \alpha_n = 1
Table 2: Merits of each approach.


Exact solutions can be delivered by the brute force method when only a handful of workers are being considered, and the Hungarian method should be used for all other instances. Approximate solutions can be provided by the greedy algorithm, whose approximation factor grows only logarithmically, while providing roughly a linear factor runtime improvement over the Hungarian method. For inputs greater than those considered, the parallel Auction algorithm [3] is a suitable alternative and the subject of future work.

References

  1. Aldous, D. J. The \zeta(2) limit in the random assignment problem. Random Structures & Algorithms 18, 4 (2001), 381-418.
  2. Balakrishnan, N., and Rao, C. Handbook of statistics 16: Order statistics-theory and methods, 2000.
  3. Bertsekas, D. P. The auction algorithm: A distributed relaxation method for the assignment problem. Annals of operation research 4, 1 (1988), 105-123.
  4. Durstenfeld, R. Algorithm 235: Random permutation. Communications of the ACM 7, 7 (1964), 420.
  5. Heap, B. Permutations by interchanges. The Computer Journal 6, 3 (1963), 293-298.
  6. Kuhn, H. W. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2, 1-2 (1955), 83-97.
  7. Kurtzberg, J. M. On approximation methods for the assignment problem. Journal of the ACM (JACM) 9, 4 (1962), 419-439.
  8. Steele, M. J. Probability and statistics in the service of computer science: illustrations using the assignment problem. Communications in Statistics-Theory and Methods 19, 11 (1990), 4315-4329.
  9. Munkres, J. Algorithms for the assignment and transportation problems. Journal of the society for industrial and applied mathematics 5, 1 (1957), 32-38.
  10. Williamson, D. P., and Shmoys, D. B. The design of approximation algorithms. Cambridge university press, 2011.

Written by lewellen

2017-03-21 at 11:12 am

Distributed k-Means Clustering


Abstract
k-Means Clustering [10] is a fundamental algorithm in machine learning, and often the first approach a user will try when they want to discover the natural groupings in a collection of n-dimensional vectors. The algorithm iteratively picks cluster centers by assigning vectors to their closest cluster, then recalculates the centers based on the assigned vectors’ means. Here it is used as a toy algorithm to motivate the pilot study, design, implementation, and evaluation of a fault-tolerant distributed system, with emphasis on efficient data transfer using remote direct memory access (RDMA) and a distributed synchronization barrier that accommodates transient workers. Experiments evaluate the performance of these two features and quantify how well the system compares to Apache Spark, before concluding with a discussion on transforming this pilot study into a high-performance, cross-platform, fault-tolerant distributed system for machine learning.

Introduction

The applications of k-Means clustering are numerous in business, engineering, and science where demands for prompt insights and support for increasingly large volumes of data motivate the need for a distributed system that exploits the inherent parallelism of the algorithm on today’s emerging hardware accelerators (e.g., FPGA [3], GPU [15], many integrated core, multi-core [9]).

There are, however, a number of questions that arise when building such a system: which accelerators should be supported, how to partition data in an unbiased way, how to distribute those partitions, what each participant will calculate from its partition, how those individual results will be aggregated, and finally, how to synchronize tasks between participants to complete the job as a whole.

Two of these questions will be the focus of this work: how to efficiently transfer data to workers, and how to synchronize them in the presence of transient workers. For the former, remote direct memory access (RDMA) is used to distribute disk-based data from a coordinator to workers, and an extension to synchronization barriers is developed to allow workers to leave ongoing calculations, and return without interrupting other workers.

How well the system solves these issues will be measured by observed transfer rates and per iteration synchronization times. To understand the system’s scalability and place in the broader distributed machine learning landscape, its runtime performance will be evaluated against Apache Spark. Based on these results, future work will be discussed on how to move forward with the study to create a system that can meet the ever growing demands of users.

Background

1: \textbf{procedure } \textsc{k-Means}(C, X, d, N, K)
2: \qquad \textbf{while } \text{not converged } \textbf{do }
3: \qquad \qquad S_k \gets \{ x : k = \text{argmin}_{i} \lVert c_i - x \rVert_2, x \in X \}
4: \qquad \qquad \Sigma_{k} \gets \sum_{x \in S_k} x
5: \qquad \qquad \kappa_{k} \gets \lvert S_k \rvert
6: \qquad \qquad c_{k} \gets \Sigma_{k} / \kappa_{k}
7: \qquad \textbf{end while}
8: \qquad \textbf{return } C
9: \textbf{end procedure}

k-Means clustering belongs to the family of expectation-maximization algorithms where, starting from an initial guess C, K centroids are iteratively inferred from a dataset X containing N \mathbb{R}^d vectors. Each iteration, the partial sum \Sigma_k and count \kappa_k of the vectors nearest the k^{th} centroid are aggregated. From these quantities the updated centroids c_k can be calculated for the next iteration, until the change in magnitude from the former is less than some tolerance \epsilon or a maximum number of iterations I is observed, giving a \mathcal{O}(dIKN) linear time algorithm.

Parallelism is based on each participant p being assigned a partition of the data so that instead of computing \Sigma_k and \kappa_k on the entirety of X, they are computed on some disjoint subset X_p s.t. X = \cup_p X_p and \cap_p X_p = \emptyset. When each participant has computed its (\Sigma_k^p, \kappa_k^p) pairs, the resulting set of values can be aggregated (\Sigma_k, \kappa_k) = \sum_{p} (\Sigma_k^p, \kappa_k^p) to yield the updated centroid values c_k for the next iteration, as sketched below.
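A sketch of what each participant computes per iteration over its partition X_p is given below; the data layout and names are assumptions rather than the system's actual interfaces.

```cpp
#include <vector>

// One k-Means iteration over a partition: accumulate the partial sums Sigma_k
// and counts kappa_k of the vectors nearest each centroid. X is N x d and C
// is K x d, both row-major.
void partialSums(const std::vector<double>& X, const std::vector<double>& C,
                 int N, int K, int d,
                 std::vector<double>& sums, std::vector<long>& counts) {
    sums.assign(K * d, 0.0);
    counts.assign(K, 0);
    for (int n = 0; n < N; ++n) {
        int best = 0;
        double bestDist = 1e300;
        for (int k = 0; k < K; ++k) {
            double dist = 0.0;
            for (int j = 0; j < d; ++j) {
                double diff = C[k * d + j] - X[n * d + j];
                dist += diff * diff;
            }
            if (dist < bestDist) { bestDist = dist; best = k; }
        }
        for (int j = 0; j < d; ++j) sums[best * d + j] += X[n * d + j];
        counts[best] += 1;
    }
}
```

Each participant's (sums, counts) pair is then summed across p and divided elementwise to produce the next centroids.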

To ensure that participants can formulate a unified view of the data, it is assumed that user supplied datasets are adequately shuffled. If the natural clusters of the data were organized in a biased way, participants would individually draw different conclusions about the centroids leading to divergence. Further, it will be assumed that a loss of an unbiased partition will not degrade the quality of results beyond an acceptable threshold under certain conditions (cf. [6] for a rich formalism based on coresets to support this assumption).

RDMA

One of the failings of conventional network programming is the need to copy memory between user and kernel space buffers to transmit data. This is an unnecessary runtime tax that software solutions shouldn’t have to pay. The high performance computing community addressed this issue with remote direct memory access [12], whereby specialized network hardware directly reads from, and writes to, pinned user space memory on the systems involved. These zero-copy transfers free the CPU to focus on core domain calculations and reduce the latency of exchanging information between systems over 56 Gbps InfiniBand or 40 Gbps RoCE.

Barriers

Synchronization barriers allow a number of participants to rendezvous at an execution point before proceeding as a group (cf. [12] for a survey of advanced shared-memory techniques). Multi-core kernels in the system use a counting (centralized) barrier: the first n-1 threads that reach the barrier enter a critical section, decrement a counter, and sleep on a condition variable; the last thread to enter the critical section resets the counter and broadcasts on the condition variable to wake the other threads before leaving the critical section. At this point all threads are synchronized and can proceed as a group.
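A minimal counting barrier in that style might look like the following sketch (written with C++11 primitives for brevity; the system itself is C++98 and would use the equivalent pthreads calls):

```cpp
#include <condition_variable>
#include <mutex>

// Counting (centralized) barrier: the first n-1 threads sleep on the condition
// variable; the last thread resets the counter and wakes everyone.
class CountingBarrier {
public:
    explicit CountingBarrier(int n) : n_(n), remaining_(n), generation_(0) {}

    void wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        int gen = generation_;
        if (--remaining_ == 0) {
            ++generation_;        // start a new round so the barrier is reusable
            remaining_ = n_;
            cv_.notify_all();     // wake the sleeping threads
        } else {
            cv_.wait(lock, [&] { return gen != generation_; });
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    int n_, remaining_, generation_;
};
```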

Since the emphasis of this work is on fault tolerance, distributed all-to-all and all-to-one message passing versions of the counting barrier are considered (cf. [4] for more advanced distributed methods). In the former, every participant notifies and waits to hear from every other participant before proceeding. In the latter, every participant notifies and waits to hear from a coordinator before proceeding. The all-to-all design most closely resembles a counting barrier on each participant, whereas the all-to-one resembles a counting barrier on a single participant.

Design

System participants consist of a single coordinator and multiple workers. A coordinator has the same responsibilities as a worker, but is also responsible for initiating a task and coordinating recovery. Workers are responsible for processing a task and exchanging notifications. A task is a collection of algorithm parameters (maximum iteration, convergence tolerance, number of clusters K), initial guesses (K by d-dimensions), and unbiased partition of the dataset (N by d-dimensions) that a participant is responsible for computing. Notifications consist of identifying information (host name, port), the current iteration, and partial results (K counts and K by d-dimensional sums).

Dataset Transfer


Figure 1: Example of transferring data between hosts using Accelio RDMA.

The first responsibility of the coordinator is to schedule portions of the disk based dataset to each worker. The coordinator will consult a list of participants and verify that the system can load the contents of the dataset into its collective memory. If so, the coordinator schedules work uniformly on each machine in the form of a task. A more sophisticated technique could be used, but a uniform loading will minimize how long each worker waits on the others each iteration (assuming the workers’ computing abilities are indistinguishable). The coordinator will sequentially distribute these tasks using RDMA, whereby the serialized contents of the task are read from disk into the coordinator’s user space memory and directly transferred over to an equally sized buffer in the worker’s user space memory before continuing on to the next worker.

The open source, BSD licensed Accelio library [11] is used to coordinate the transfer of user space memory between machines as shown in Fig. (1). Following the nomenclature of the library, a client will allocate and register a block of user space memory; registration ensures that the operating system will not page out the memory while the network interface card is reading its contents. Next, the client will send a request asking the server to allocate an equally sized buffer. When the server receives this request, it will attempt to allocate and register the requested buffer. Once done, it will issue the rkey of the memory back to the client. An rkey is a unique identifier representing the location of the memory on the remote machine. Upon receipt of this information, the client will then ask the library to issue an RDMA write given the client’s and server’s rkeys. Since RDMA bypasses the server’s CPU altogether, the server will not know when the operation completes; however, when the RDMA write completes, the client is notified and it can then notify the server that the operation is complete by terminating the connection and session.

During development it was discovered that the amount of memory that can be transferred this way is limited to just 64 MiB before the library errs out (xio_connection send queue overflow). To work around this limitation for larger partitions, the client will send chunks of memory in 64 MiB blocks. The same procedure detailed above is followed, however it is done for each block of memory, with rkeys being exchanged for the appropriate offset on each client RDMA write completion notification, until the entire contents of memory have been transferred. On each side the appropriate unregistering and deallocation of ancillary memory takes place, and the worker deserializes the memory into a task before proceeding on to the first iteration of the k-Means algorithm.

As an alternative to using direct RDMA writes, a design based on Accelio messaging was considered. In this design the client allocates memory for the serialized task and issues an allocation request to the server. The server services the request and the contents of memory are transferred in 8 KiB blocks until the exchange of messages is no longer necessary. While this approach requires fewer Accelio API calls to coordinate, it is significantly slower than the more involved direct RDMA based approach.

Iteration and aggregation

The k-Means algorithm is implemented as both a sequential and a parallel multi-core kernel that can be selected at compile time. The sequential version is identical to what was discussed in the background, whereas the parallel kernel is a simplified version of the distributed system, less the data transfer and fault-tolerance enhancements. Data is partitioned onto threads equaling the total number of cores on the system. Each iteration, threads are created to run the sequential kernel on each thread’s partition of the data. Once all the threads complete, they are joined back to the main thread, where their partial results are aggregated and used for the distributed synchronization.

Synchronization

In this distributed setting, the barrier must be able to accommodate the fluctuating presence of workers due to failures. Multiple designs were considered, but the all-to-all paradigm was chosen for its redundancy, its local view of synchronization, and its ability to tolerate a failed coordinator. The scheme may not scale well, and future work would need to investigate alternative methods. Unlike the data transfer section, plain TCP sockets are used since the quantities of data being shared are significantly smaller.

A few assumptions need to be stated before explaining the protocol. First, the coordinator is allowed to be a single point of failure for recovery and failures for all participants are assumed to be network oriented up to network partitioning. Network partitions can operate independently, but cannot be reunified into a single partition. All other hardware resources are assumed to be reliable. Finally, the partition associated with a lost worker is not reassigned to another worker. It is simply dropped from the computations under the assumption that losing that partition will not significantly deteriorate the quality of the results up to some tolerance as mentioned in the introduction.

The first step to supporting transient workers is to support process restart. When a worker receives a task from the coordinator, it will serialize a snapshot to disk, service the task, and then remove the snapshot. When a worker process is started, it will deserialize an existing snapshot, and verify that the task is ongoing with the coordinator before joining, or discard the stale snapshot before awaiting the next task from the coordinator.


Figure 2: Example in which a worker reintegrates with the group after being offline for several iterations. Red lines denote blocking. (Timeout queries not shown.)

The distributed barrier maintains lists of active, inactive, and recovered participants in the system. The first phase of synchronization is the notification phase, in which each participant issues its identifying information, current iteration, and partial results for the k-Means algorithm to all other active participants. Those participants that cannot be reached are moved from the active to the inactive list.

The second phase is the waiting phase, in which a listening thread accumulates notifications on two separate blocking unbounded queues in the background: the recovery queue for recovery notifications, and the results queue for sharing partial results. For each active participant the results queue will be dequeued, and for each inactive participant, the results queue will be peeked and dequeued if nonempty. Because the results queue is blocking, a timeout is used to allow a participant to verify that another participant hasn’t gone offline in the time it took to notify the other and wait for its response. If the other participant has gone offline, then it is moved from the active to the inactive list; otherwise the participant will continue to wait on the other.

Each partial result from the results queue will be placed into a solicited or unsolicited list based on whether its origin is a participant that was previously notified. The coordinator will then locally examine the unsolicited list and place those zero-iteration requests in the recovered list when it is in a nonzero iteration. Workers will examine the unsolicited list and discard any requests whose iteration does not match their own.

The recovery phase begins with an inactive worker coming back online, sending its results to the coordinator, and then waiting to receive the coordinator’s results and a current iteration notification. On the next iteration, the coordinator will look at its recovered list, send the current iteration to the recovering worker, and then wait until it receives a resynchronized notification. Upon receiving the current iteration notification, the recovering worker will then notify all the other workers in the cluster of its results, and wait for them to respond before issuing a resynchronized notification to the coordinator. At that point the recovering worker is fully synchronized with the rest of the system. Once the coordinator receives this notification on its recovery queue, it will move the recovering worker off the inactive and recovery lists and onto the active list before notifying the other workers of its results.

Once the notification and waiting phase have completed, all participants are synchronized and may independently find the next set of centroids by aggregating their partial results from both solicited and unsolicited lists, and then begin the next iteration of the k-Means algorithm. This process will continue until convergence, or the maximum number of iterations has been reached.

Experiments

Experiments were conducted on four virtual machines running Red Hat 4.8 with 8 GiB of RAM and a single tri-core Intel Xeon E312xx class 2.6 GHz processor. Host machines are equipped with Mellanox ConnectX-3 Pro EN network interface cards supporting 40 Gbps RoCE. Reported times are wall time as measured by gettimeofday. All C++98 source code was compiled using g++ without any optimization flags.

The Spark runtime comparison ran on Amazon Web Services Elastic MapReduce with four m1.large instances, each with a dual core Xeon E5 2.0 GHz processor and 7.5 GiB of RAM, supporting 10 Gbps Ethernet. The Spark 1.5.2 comparison is based on MLlib KMeans.train, with reported time measured by System.currentTimeMillis. All Java 1.7 code was compiled by javac and packaged by Maven.

Transfer Rates


Figure 3: Transfer rates for variable block sizes up to 64 MiB and fixed 128 MiB payload.


Figure 4: Transfer rates for fixed 64 MiB RDMA and 8 KiB message based block sizes and variable payloads up to 7 GiB.

Fig. (3) shows the influence of block size on the transfer rate of a fixed payload for RDMA based transfers. As mentioned in the design section, Accelio only supports block sizes up to 64 MiB for RDMA transfers. When the block size is 8 KiB, it takes 117x longer to transfer the same payload than when a block size of 64 MiB is used. This difference is attributed to exchanging fewer messages for the same volume of data. For the payload of 128 MiB considered, a peak transfer rate of 4.7 Gbps was obtained.

Fig. (4) looks at a fixed 64 MiB block size for RDMA transfers with payloads up to 7 GiB, before exhausting available system resources. A peak transfer rate of 7.7 Gbps is observed, which is still significantly less than the 40 Gbps capacity of the network. This would suggest a few possibilities: the Mellanox X-3 hardware was configured incorrectly, there may be network switches limiting the transfer of data, or there is still room for improvement in the Accelio library.

It is unclear why there should be a kink in performance at 2 GiB. Possible explanations considered the impacts of hardware virtualization, influence of memory distribution on the physical DRAM, and potential Accelio issues. Further experiments are needed to pinpoint the root cause.

To demonstrate the superior performance of Accelio RDMA based transfers over Accelio messaging based transfers, Fig. (4) includes transfer performance based on 8 KiB messaging. For the values considered, the message based approach was 5.5x slower than the RDMA based approach. These results are a consequence of the number of messages being exchanged for the smaller block size and the overhead of using Accelio.

Recovery time


Figure 5: Example of Worker A going offline at iteration 29 and coming back online at iteration 69.

Fig. (5) demonstrates a four node system in which one worker departs and returns to the system forty iterations later. Of note is the seamless departure and reintegration into the system without inducing increased synchronization times for the other participants. For the four node system, average synchronization time was 16 ms. For recovering workers, average reintegrate time was 225 ms with high variance.

Runtime performance

  Total Percentage
Sharing 721.9 6.7
Computing 8558.7 79.1
Synchronizing 1542.5 14.2
Unaccounted 6.2 0.1
Total 10820.4 100
Table 1: Time in milliseconds spent in each task for 100 iterations, d = 2, K = 4, N = 10,000,000.

Looking at the distribution of work, roughly 80% of the time goes towards the actual computation that we care about and the remaining 20% to what amounts to distributed system bookkeeping. Of that 20%, the largest chunk is synchronization, suggesting that a more efficient implementation is needed and that it may be worth abandoning sockets in favor of low latency, RDMA based transfers.


Figure 6: Runtime for varying input sizes based on Accelio messaging with the sequential kernel, and RDMA with the parallel kernel; the latter is typically 2.5x faster.


Figure 7: Sequential, parallel, distributed versions of k-Means for varying input sizes with Spark runtime for reference.

Runtime of the system for varying inputs is shown in Fig. (6) based on its original Accelio messaging with sequential calculations, and final RDMA transfers with parallel calculations. The latter cuts runtime by 2.5x which isn’t terrible since each machine only has three cores.

The general runtime for different configurations of the final system is shown in Fig. (7). The sequential algorithm is well suited to process inputs of fewer than ten thousand points, the parallel kernel fewer than one million, and the distributed system anything larger. In general, the system performed 40-50x faster than Spark on varying input sizes for a 9 core vs 8 core configuration. These are conservative estimates since the system does 100 fixed iterations, whereas Spark’s MLlib k-means|| implementation stops as soon as possible (typically 5-10 iterations).


Figure 8: Speedup of the system relative to the sequential algorithm.

The overall speedup of the system relative to the sequential algorithm is shown in Fig. (8). For a 12 core system we observe at most a 7.3x runtime improvement (consistent with the runtime breakdown and the sequential vs. parallel comparisons). In general, in the time it takes the distributed system to process 100 million entries, the sequential algorithm would only be able to process 13.7 million entries. Further optimization is desired.

Discussion

Related Work

The established trend in distributed machine learning systems is to use general purpose platforms such as Google’s MapReduce and Apache’s Spark and Mahout. Low latency, distributed shared memory systems such as FaRM and Grappa are gaining traction as ways to provide better performance thanks to declining memory prices and greater RDMA adoption. The next phase of this progression is exemplified by GPUNet [7], representing systems built around GPUDirect RDMA, which will likely become the leading choice for enterprise customers seeking performance given the rise of deep learning applications on the GPU.

The design of this system is influenced by these trends, with the overall parallelization of the k-Means algorithm resembling the map-reduce paradigm and the RDMA transfers used here reflecting the trend of using HPC scale technologies in the data center. Given that this system was written in a lower-level language and specialized for the task, it isn’t surprising that it delivered better performance (40-50x) than the leading general purpose system (Apache Spark) written in a higher-level language.

That said, given the prominence of general purpose distributed machine learning systems, specialized systems for k-Means clustering are uncommon and typically designed for unique environments (e.g., wireless sensor networks [13]). In the cases where k-means is investigated, the focus is on approximation techniques that are statistically bounded [6], and efficient communication patterns [2]; both of these considerations could further be incorporated into this work to realize better performance.

For the barrier, most of the literature focuses on static lists of participants in shared [12] and distributed [4] multi-processor systems with an emphasis on designing solutions for specific networks (e.g., n-dimensional meshes, wormhole-routed networks, etc.). For barriers that accommodate transient participants, the focus is on formal verification, with [8] and [1] focused on shared and distributed systems respectively. No performance benchmarks could be found for a direct comparison to this work; however, [5] presents an RDMA over InfiniBand based dissemination barrier that is significantly faster (\mu s vs ms), suggesting opportunities for future RDMA use.

Future Work

RDMA

Accelio provides a convenient API for orchestrating RDMA reads and writes at the expense of performance. Accelio serves a niche market and its maturity reflects that reality. There are opportunities to make Accelio more robust and capable of transferring large chunks of data. If this avenue of development is not fruitful, alternative libraries such as IBVerbs may be used in an attempt to obtain the advertised transfer rates of RDMA-enabled hardware.

As implemented, the distribution of tasks over RDMA is linear. This was done to take advantage of sequential read speeds from disk, but it may not scale well for larger clusters. Additional work could be done to look at pull-based architectures, where participants perform RDMA reads to obtain their tasks, rather than the existing push-based architecture built around RDMA writes, as well as to explore these paradigms on top of distributed file systems to accommodate larger datasets.

Calculations presented were multi-core oriented, but k-Means clustering can be done on the GPU as well. Technologies such as NVIDIA’s GPUDirect RDMA and AMD’s DirectGMA present an opportunity to accelerate calculations by minimizing expensive memory transfers between host and device in favor of transfers between devices alone. Provided adequate hardware resources, this could deliver several orders of magnitude faster runtime performance.

Kernel

As demonstrated in the experiments section, overall runtime is heavily dominated by the actual k-Means iteration, suggesting that refinements in the implementation will lead to appreciable performance gains. To this end, additional work could be put into the sequential kernel to make better use of the SIMD features of the underlying processor. Alternatively, the OpenBLAS library could be used to streamline the many linear algebra calculations since it already provides highly optimized SIMD routines. Accelerators like GPUs, FPGAs, and MICs could be investigated to serve as alternative kernels for future iterations of the system.

Barrier

The barrier is by no means perfect and leaves much to be desired, beginning with recovery of a failed coordinator, reunion of network partitions, dynamic worker onboarding, and suspension of calculations when too much of the system goes offline to ensure quality results. Once these enhancements are incorporated into the system, work can be done to migrate the underlying protocols from a socket-based paradigm to RDMA. In addition, formal verification of the protocol would guide its use into production and on to larger clusters. The system works reasonably well on a small cluster, but further work is needed to harden it for modern enterprise clusters.

Overall System

As alluded to in the background, shuffling of data could be added to the system so that end users do not have to do so beforehand. Similarly, more sophisticated scheduling routines could be investigated to ensure an even distribution of work on a system of machines with varying capabilities.

While the k-Means algorithm served as a piloting example, work could be done to further specialize the system to accommodate a class of unsupervised clustering algorithms that fit the general map-reduce paradigm. The end goal is to provide a plug-n-play system with a robust assortment of optimized routines that do not require expensive engineers to set up, exploit, and maintain as is the case for most existing platforms.

Alternatively, the future of the system could follow the evolution of other systems favoring a generalized framework that enables engineers to quickly distribute arbitrary algorithms. To set this system apart from others, the emphasis would be on providing an accelerator-agnostic environment where user-specified programs run on whatever accelerators (FPGA, GPU, MIC, multi-core, etc.) are present without having to write code specifically for those accelerators, saving time and resources for enterprise customers. Examples of this paradigm are given by such libraries as CUDAfy.NET and Aparapi for translating C# and Java code to run on arbitrary GPUs.

Conclusion

This work described a cross-platform, fault-tolerant distributed system that leverages the open source library Accelio to move large volumes of data via RDMA. Transfer rates up to 7.7 Gbps out of the desired 40 Gbps were observed, and it is assumed that faster rates could be achieved if flaws in the library were addressed. A synchronization protocol was discussed that supports transient workers in the coordinated calculation of k-Means centroids. The protocol works well for small clusters, offering 225 ms reintegration times without significantly affecting other participants in the system. Further work is warranted to harden the protocol for production use. Given the hardware resources available, the distributed system was 7.3x out of the desired 12x faster than the sequential alternative. Compared on equivalent data loads, the system demonstrated 40-50x better runtime performance than Spark MLlib’s implementation of the algorithm, suggesting the system is a competitive alternative for k-Means clustering.

References

[1] Shivali Agarwal, Saurabh Joshi, and Rudrapatna K Shyamasundar. Distributed generalized dynamic barrier synchronization. In Distributed Computing and Networking, pages 143-154. Springer, 2011.

[2] Maria-Florina F Balcan, Steven Ehrlich, and Yingyu Liang. Distributed k-means and k-median clustering on general topologies. In Advances in Neural Information Processing Systems, pages 1995-2003, 2013.

[3] Mike Estlick, Miriam Leeser, James Theiler, and John J Szymanski. Algorithmic transformations in the implementation of k-means clustering on reconfigurable hardware. In Proceedings of the 2001 ACM/SIGDA ninth international symposium on Field programmable gate arrays, pages 103-110. ACM, 2001.

[4] Torsten Hoefler, Torsten Mehlan, Frank Mietke, and Wolfgang Rehm. A survey of barrier algorithms for coarse grained supercomputers. 2004.

[5] Torsten Hoefler, Torsten Mehlan, Frank Mietke, and Wolfgang Rehm. Fast barrier synchronization for InfiniBand. In Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. 20th International, 7 pp. IEEE, 2006.

[6] Ruoming Jin, Anjan Goswami, and Gagan Agrawal. Fast and exact out-of-core and distributed k-means clustering. Knowledge and Information Systems, 10(1):17-40, 2006.

[7] Sangman Kim, Seonggu Huh, Yige Hu, Xinya Zhang, Amir Wated, Emmett Witchel, and Mark Silberstein. GPUnet: Networking abstractions for GPU programs. In Proceedings of the International Conference on Operating Systems Design and Implementation, pages 6-8, 2014.

[8] Duy-Khanh Le, Wei-Ngan Chin, and Yong-Meng Teo. Verification of static and dynamic barrier synchronization using bounded permissions. In Formal Methods and Software Engineering, pages 231-248. Springer, 2013.

[9] Xiaobo Li and Zhixi Fang. Parallel clustering algorithms. Parallel Computing, 11(3):275-290, 1989.

[10] Stuart P Lloyd. Least squares quantization in PCM. Information Theory, IEEE Transactions on, 28(2):129-137, 1982.

[11] Mellanox Technologies Ltd. Accelio – the open source I/O, message, and RPC acceleration library.

[12] John M Mellor-Crummey and Michael L Scott. Algorithms for scalable synchronization on shared-memory multiprocessors. ACM Transactions on Computer Systems (TOCS), 9(1):21-65, 1991.

[13] Gabriele Oliva, Roberto Setola, and Christoforos N Hadjicostis. Distributed k-means algorithm. arXiv, 2013.

[14] Renato Recio, Bernard Metzler, Paul Culley, Jeff Hilland, and Dave Garcia. A remote direct memory access protocol specification. Technical report, 2007.

[15] Mario Zechner and Michael Granitzer. Accelerating k-means on the graphics processor via CUDA. In Intensive Applications and Services, 2009. INTENSIVE’09. First International Conference on, pages 7-15. IEEE, 2009.

k-Means Clustering using CUDAfy.NET

leave a comment »

Introduction

I’ve been wanting to learn how to utilize general purpose graphics processing units (GPGPUs) to speed up computation-intensive machine learning algorithms, so I took some time to test the waters by implementing a parallelized version of the unsupervised k-means clustering algorithm using CUDAfy.NET – a C# wrapper for doing parallel computation on CUDA-enabled GPGPUs. I’ve also implemented sequential and parallel versions of the algorithm in C++ (Windows API), C# (.NET, CUDAfy.NET), and Python (scikit-learn, numpy) to illustrate the relative merits of each technology and paradigm on three separate benchmarks: varying point quantity, point dimension, and cluster quantity. I’ll cover the results, and along the way talk about performance and development considerations of the three technologies before wrapping up with how I’d like to utilize the GPGPU on more involved machine learning algorithms in the future.

Algorithms

Sequential

The traditional algorithm attributed to [Stu82] begins as follows:

  1. Pick K points at random as the starting centroid of each cluster.
  2. do (until convergence)
    1. For each point in data set:
      1. labels[point] = Assign(point, centroids)
    2. centroids = Aggregate(points, labels)
    3. convergence = DetermineConvergence()
  3. return centroids

Assign labels each point with the label of the nearest centroid, and Aggregate updates the positions of the centroids based on the new point assignments. In terms of complexity, let’s start with the Assign routine. For each of the N points we compute the distance to each of the K centroids and assign the point to the nearest one. This is an example of the Nearest Neighbor Search problem. Linear search gives \mathcal{O}( K N ), which is preferable to something like k-d trees, which would require repeated superlinear construction and querying. Assuming Euclidean distance and points from \mathbb{R}^d, this gives time complexity \mathcal{O}( d K N ). The Aggregate routine will take \mathcal{O}(d K N). Assuming convergence is guaranteed in I iterations, the resulting complexity is \mathcal{O}(d K N I), which amounts to an effectively linear algorithm.
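
For concreteness, a minimal sequential sketch of the Assign and Aggregate steps in C++ might look like the following. This is an illustrative outline only, not the sequential implementation that is skipped in the Implementations section below: centroid initialization is assumed to have already happened and convergence is reduced to a fixed iteration count.

// Minimal sequential k-means sketch (illustrative only). Requires <vector>, <cmath>, <algorithm>.
// points: numPoints x pointDim (row-major), centroids: K x pointDim, labels: numPoints.
void kMeansFitSequentialSketch(const double* points, int numPoints, int pointDim, int K, double* centroids, int* labels) {
	std::vector<double> sums(K * pointDim);
	std::vector<int> counts(K);

	for (int iter = 0; iter < 1000; ++iter) {
		std::fill(sums.begin(), sums.end(), 0.0);
		std::fill(counts.begin(), counts.end(), 0);

		// Assign: label each point with the nearest centroid and accumulate its coordinates.
		for (int point = 0; point < numPoints; ++point) {
			double optDist = INFINITY;
			int optCentroid = -1;

			for (int centroid = 0; centroid < K; ++centroid) {
				double dist = 0.0;
				for (int dim = 0; dim < pointDim; ++dim) {
					double d = points[point * pointDim + dim] - centroids[centroid * pointDim + dim];
					dist += d * d;
				}
				if (dist < optDist) {
					optDist = dist;
					optCentroid = centroid;
				}
			}

			labels[point] = optCentroid;
			++counts[optCentroid];
			for (int dim = 0; dim < pointDim; ++dim)
				sums[optCentroid * pointDim + dim] += points[point * pointDim + dim];
		}

		// Aggregate: move each centroid to the mean of its assigned points.
		for (int centroid = 0; centroid < K; ++centroid)
			for (int dim = 0; dim < pointDim; ++dim)
				if (counts[centroid] > 0)
					centroids[centroid * pointDim + dim] = sums[centroid * pointDim + dim] / counts[centroid];

		// A real implementation would track the relative change in centroid positions here
		// and break out of the loop once it falls below a tolerance.
	}
}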

Parallel

[LiFa89] was among the first to study several different shared memory parallel algorithms for k-means clustering, and here I will be going with the following one:

  1. Pick K points at random as the starting centroid of each cluster.
  2. Partition N points into P equally sized sets
  3. Run to completion threadId from 1 to P as:
    1. do (until convergence)
      1. sum, count = zero(K * d), zero(K)
      2. For each point in partition[threadId]:
        1. label = Assign(point, centroids)
        2. For each dim in point:
          1. sum[d * label + dim] += point[dim]
        3. count[label] = count[label] + 1
      3. if(barrier.Synchronize())
        1. centroids = sum / count
        2. convergence = DetermineConvergence()
  4. return centroids

The parallel algorithm can be viewed as P smaller instances of the sequential algorithm processing N/P chunks of points in parallel. There are two main departures from the sequential approach: 1) future centroid positions are accumulated and counted after each labeling, and 2) each iteration of the P while loops is synchronized before continuing on to the next iteration using a barrier – a construct that makes every thread wait until the last thread arrives, at which point all of the threads are released to continue.

In terms of time complexity, Assign remains unchanged at \mathcal{O}(d K), and incrementing the sums and counts for the point’s label takes time \mathcal{O}(d + 1). Thus for N/P points, a single iteration of the loop gives \mathcal{O}( N/P (d K + d + 1) ) time. Given P threads, the maximum time is determined by the last thread to enter the barrier, and assuming at most I iterations, the overall complexity is \mathcal{O}(d I ( N (K + 1) + K P + 1 ) / P), which suggests we should see at most a \mathcal{O}(K P / (K + 1)) speedup over the sequential implementation for large values of N.

GPGPU

The earliest work I found on doing k-means clustering on NVIDIA hardware in the academic literature was [MaMi09]. The following is based on that work, and the work I did above on the parallel algorithm:

  1. Pick K points at random as the starting centroid of each cluster.
  2. Partition N into B blocks such that each block contains no more than T points
  3. do (until convergence)
    1. Initialize sums, counts to zero
    2. Process blockId 1 to B, SM at a time in parallel on the GPGPU:
      1. If threadId == 0
        1. Initialize blockSum, blockCounts to zero
      2. Synchronize Threads
      3. label = Assign(points[blockId * T + threadId], centroids)
      4. For each dim in points[blockId * T + threadId]:
        1. atomic blockSum[label * pointDim + dim] += points[(blockId * T + threadId) * pointDim + dim]
      5. atomic blockCount[label] += 1
      6. Synchronize Threads
      7. If threadId == 0
        1. atomic sums += blockSum
        2. atomic counts += blockCounts
    3. centroids = sums / counts
    4. convergence = DetermineConvergence()

The initialization phase is similar to the parallel algorithm, although now we need to take into account the way that the GPGPU will process data. There are a handful of Streaming Multiprocessors on the GPGPU that each process a single “block” at a time. Here we assign no more than T points to a block such that each point is handled by a single thread executed on one of the CUDA cores of the Streaming Multiprocessor.

When a single block is executing we initialize the running sum and count as we did in the parallel case, request that the running threads synchronize, and then proceed to calculate the label of the point assigned to the thread and atomically update the running sum and count. The threads must then synchronize again, and this time only the very first thread atomically copies the block-level sum and counts over to the global sum and counts shared by all of the blocks.

Let’s figure out the time complexity. A single thread in a block being executed by a Streaming Multiprocessor takes time \mathcal{O}( 2K + (3K + 1)d  + 1 ). Assuming that all T threads of the block execute in parallel, that there are B blocks, and S Streaming Multiprocessors, the complexity becomes \mathcal{O}(B / S (2K + (3K + 1)d  + 1) ). Since B = N / T, and at most I iterations can go by in parallel, we are left with \mathcal{O}( I N (2K + (3K + 1)d  + 1) / T S ). So the expected speedup over the sequential algorithm should be \mathcal{O}( d K T S / (2K + (3K + 1)d  + 1) ).

Expected performance

For large values of N, if we allow K to be significantly larger than d, we should expect the parallel version to be 8x faster than the sequential version and the GPGPU version to be 255x faster than the sequential version, given that P = 8, S = 2, T = 512 for the hardware that will be used to conduct tests. If instead d is significantly larger than K, the parallel bound is unchanged and the GPGPU version should be 340x faster than the sequential version. Now, it’s very important to point out that these are upper bounds. It is most likely that observed speedups will be significantly less due to technical issues like memory allocation, synchronization, and caching issues that are not incorporated (and difficult to incorporate) into the calculations.
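
As a rough sanity check on these figures (a sketch under the assumptions T = 512, S = 2, and the two dimensional case d = 2 used later in the benchmarks), the parallel bound \mathcal{O}(K P / (K + 1)) tends to P = 8 as K grows, and the GPGPU bound tends to roughly 256x, in line with the ~255x quoted above:

\displaystyle \lim_{K \to \infty} \frac{d K T S}{2K + (3K + 1)d + 1} = \frac{d T S}{3d + 2} = \frac{2 \cdot 512 \cdot 2}{3 \cdot 2 + 2} = 256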

Implementations

I’m going to skip the sequential implementation since it’s not interesting. Instead, I’m going to cover the C++ parallel and C# GPGPU implementations in detail, then briefly mention how scikit-learn was configured for testing.

C++

The parallel Windows API implementation is straightforward. The following will begin with the basic building blocks, then get into the high level orchestration code. Let’s begin with the barrier implementation. Since I’m running on Windows 7, I’m unable to use the convenient InitializeSynchronizationBarrier, EnterSynchronizationBarrier, and DeleteSynchronizationBarrier API calls introduced in Windows 8. Instead I opted to implement a barrier using a condition variable and critical section as follows:

// ----------------------------------------------------------------------------
// Synchronization utility functions
// ----------------------------------------------------------------------------

struct Barrier {
	CONDITION_VARIABLE conditionVariable;
	CRITICAL_SECTION criticalSection;
	int atBarrier;
	int expectedAtBarrier;
};

void deleteBarrier(Barrier* barrier) {
	DeleteCriticalSection(&(barrier->criticalSection));
	// No API for delete condition variable
}

void initializeBarrier(Barrier* barrier, int numThreads) {
	barrier->atBarrier = 0;
	barrier->expectedAtBarrier = numThreads;

	InitializeConditionVariable(&(barrier->conditionVariable));
	InitializeCriticalSection(&(barrier->criticalSection));
}

bool synchronizeBarrier(Barrier* barrier, void (*func)(void*), void* data) {
	bool lastToEnter = false;

	EnterCriticalSection(&(barrier->criticalSection));

	++(barrier->atBarrier);

	if (barrier->atBarrier == barrier->expectedAtBarrier) {
		barrier->atBarrier = 0;
		lastToEnter = true;

		func(data);

		WakeAllConditionVariable(&(barrier->conditionVariable));
	}
	else {
		SleepConditionVariableCS(&(barrier->conditionVariable), &(barrier->criticalSection), INFINITE);
	}

	LeaveCriticalSection(&(barrier->criticalSection));

	return lastToEnter;
}

A Barrier struct contains the necessary details of how many threads have arrived at the barrier, how many are expected, and structs for the condition variable and critical section.

When a thread arrives at the barrier (synchronizeBarrier) it requests the critical section before attempting to increment the atBarrier variable. It checks to see if it is the last to arrive, and if so, resets the number of threads at the barrier to zero and invokes the callback to perform post barrier actions exclusively before notifying the other threads through the condition variable that they can resume. If the thread is not the last to arrive, then it goes to sleep until the condition variable is invoked. The reason why LeaveCriticalSection is included outside the if statement is because SleepConditionVariableCS will release the critical section before putting the thread to sleep, then reacquire the critical section when it awakes. I don’t like that behavior since it’s an unnecessary acquisition of the critical section and slows down the implementation.

There is a single allocation routine which performs two rounds of error checking when calling calloc: first to check if the routine returned null, and second to see if it set a Windows error code that can be inspected with GetLastError. If either event is true, the application will terminate.

// ----------------------------------------------------------------------------
// Allocation utility functions
// ----------------------------------------------------------------------------

void* checkedCalloc(size_t count, size_t size) {
	SetLastError(NO_ERROR);

	void* result = calloc(count, size);
	DWORD lastError = GetLastError();

	if (result == NULL) {
		fprintf(stdout, "Failed to allocate %d bytes. GetLastError() = %d.", size, lastError);
		ExitProcess(EXIT_FAILURE);
	}

	if (result != NULL && lastError != NO_ERROR) {
		fprintf(stdout, "Allocated %d bytes. GetLastError() = %d.", size, lastError);
		ExitProcess(EXIT_FAILURE);
	}

	return result;
}

Now on to the core of the implementation. A series of structs are specified for those data that are shared (e.g., points, centroids, etc) among the threads, and those that are local to each thread (e.g., point boundaries, partial results).

// ----------------------------------------------------------------------------
// Parallel Implementation
// ----------------------------------------------------------------------------

struct LocalAssignData;

struct SharedAssignData {
	Barrier barrier;
	bool continueLoop;

	int numPoints;
	int pointDim;
	int K;

	double* points;
	double* centroids;
	int* labels;

	int maxIter;
	double change;
	double pChange;

	DWORD numProcessors;
	DWORD numThreads;

	LocalAssignData* local;
};

struct LocalAssignData {
	SharedAssignData* shared;
	int begin;
	int end;

	int* labelCount;
	double* partialCentroids;
};

The assign method does exactly what was specified in the parallel algorithm section. It iterates over the portion of points it is responsible for, computing their labels and its partial centroids (sums of the points with label k; the division is done at the aggregate step).

void assign(int* label, int begin, int end, int* labelCount, int K, double* points, int pointDim, double* centroids, double* partialCentroids) {
	int* local = (int*)checkedCalloc(end - begin, sizeof(int));

	int* localCount = (int*)checkedCalloc(K, sizeof(int));
	double* localPartial = (double*)checkedCalloc(pointDim * K, sizeof(double));

	// Process a chunk of the array.
	for (int point = begin; point < end; ++point) {
		double optDist = INFINITY;
		int optCentroid = -1;

		for (int centroid = 0; centroid < K; ++centroid) {
			double dist = 0.0;
			for (int dim = 0; dim < pointDim; ++dim) {
				double d = points[point * pointDim + dim] - centroids[centroid * pointDim + dim];
				dist += d * d;
			}

			if (dist < optDist) {
				optDist = dist;
				optCentroid = centroid;
			}
		}

		local[point - begin] = optCentroid;
		++localCount[optCentroid];

		for (int dim = 0; dim < pointDim; ++dim)
			localPartial[optCentroid * pointDim + dim] += points[point * pointDim + dim];
	}

	memcpy(&label[begin], local, sizeof(int) * (end - begin));
	free(local);

	memcpy(labelCount, localCount, sizeof(int) * K);
	free(localCount);

	memcpy(partialCentroids, localPartial, sizeof(double) * pointDim * K);
	free(localPartial);
}

One thing I experimented with that gave better performance was allocating and using memory within the function instead of allocating the memory outside and using it within the assign routine. This was motivated by reading about false sharing, where two separate threads writing to the same cache line cause coherence updates to cascade through the CPU, degrading overall performance. labelCount and partialCentroids are reallocated since I was concerned about data locality and wanted the three arrays to be in roughly the same neighborhood of memory. Along the same lines, the points array is laid out so that a point’s dimensions are adjacent in memory to take advantage of caching. Overall, a series of cache-friendly optimizations.

The aggregate routine follows a similar set of enhancements. The core of the method is to compute the new centroid locations based on the partial sums and centroid assignment counts given by args->shared->local[t].partialCentroids and args->shared->local[t].labelCount. Using these partial results allows the routine to complete in \mathcal{O}(P K d) time which, assuming all of these parameters are significantly less than N, is effectively constant time. Once the centroids have been updated, the change in their location is computed and used to determine convergence along with how many iterations have gone by. Here, if more than 1,000 iterations have occurred or the relative change in position is less than some tolerance (0.1%), then the threads will terminate.

void aggregate(void * data) {
	LocalAssignData* args = (LocalAssignData*)data;

	int* assignmentCounts = (int*)checkedCalloc(args->shared->K, sizeof(int));
	double* newCentroids = (double*)checkedCalloc(args->shared->K * args->shared->pointDim, sizeof(double));

	// Compute the assignment counts from the work the threads did.
	for (int t = 0; t < args->shared->numThreads; ++t)
		for (int k = 0; k < args->shared->K; ++k)
			assignmentCounts[k] += args->shared->local[t].labelCount[k];

	// Compute the location of the new centroids based on the work that the
	// threads did.
	for (int t = 0; t < args->shared->numThreads; ++t)
		for (int k = 0; k < args->shared->K; ++k)
			for (int dim = 0; dim < args->shared->pointDim; ++dim)
				newCentroids[k * args->shared->pointDim + dim] += args->shared->local[t].partialCentroids[k * args->shared->pointDim + dim];

	for (int k = 0; k < args->shared->K; ++k)
		for (int dim = 0; dim < args->shared->pointDim; ++dim)
			newCentroids[k * args->shared->pointDim + dim] /= assignmentCounts[k];

	// See by how much did the position of the centroids changed.
	args->shared->change = 0.0;
	for (int k = 0; k < args->shared->K; ++k)
		for (int dim = 0; dim < args->shared->pointDim; ++dim) {
			double d = args->shared->centroids[k * args->shared->pointDim + dim] - newCentroids[k * args->shared->pointDim + dim];
			args->shared->change += d * d;
		}

	// Store the new centroid locations into the centroid output.
	memcpy(args->shared->centroids, newCentroids, sizeof(double) * args->shared->pointDim * args->shared->K);

	// Decide if the loop should continue or terminate. (max iterations 
	// exceeded, or relative change not exceeded.)
	args->shared->continueLoop = args->shared->change > 0.001 * args->shared->pChange && --(args->shared->maxIter) > 0;

	args->shared->pChange = args->shared->change;

	free(assignmentCounts);
	free(newCentroids);
}

Each individual thread follows the same specification as given in the parallel algorithm section, and follows the calling convention required by the Windows API.

DWORD WINAPI assignThread(LPVOID data) {
	LocalAssignData* args = (LocalAssignData*)data;

	while (args->shared->continueLoop) {
		memset(args->labelCount, 0, sizeof(int) * args->shared->K);

		// Assign points cluster labels
		assign(args->shared->labels, args->begin, args->end, args->labelCount, args->shared->K, args->shared->points, args->shared->pointDim, args->shared->centroids, args->partialCentroids);

		// Tell the last thread to enter here to aggregate the data within a
		// critical section
		synchronizeBarrier(&(args->shared->barrier), aggregate, args);
	};

	return 0;
}

The parallel algorithm controller itself is fairly simple and is responsible for basic preparation, bookkeeping, and cleanup. The number of processors is used to determine the number of threads to launch. The calling thread will run one instance while the remaining P - 1 instances will run on separate threads. The data is partitioned, then the threads are spawned using the CreateThread routine. I wish there was a Windows API that would allow me to simultaneously create P threads with a specified array of arguments because CreateThread will automatically start the thread as soon as it’s created. If lots of threads are being created, then the first will wait a long time before the last one gets around to reaching the barrier. Subsequent iterations of the synchronized loops will have better performance, but it would be nice to avoid that initial delay. After kicking off the threads, the main thread will run its own block of data, and once all threads terminate, the routine will close open handles and free allocated memory.

void kMeansFitParallel(double* points, int numPoints, int pointDim, int K, double* centroids) {
	// Lookup and calculate all the threading related values.
	SYSTEM_INFO systemInfo;
	GetSystemInfo(&systemInfo);

	DWORD numProcessors = systemInfo.dwNumberOfProcessors;
	DWORD numThreads = numProcessors - 1;
	DWORD pointsPerProcessor = numPoints / numProcessors;

	// Prepare the shared arguments that will get passed to each thread.
	SharedAssignData shared;
	shared.numPoints = numPoints;
	shared.pointDim = pointDim;
	shared.K = K;
	shared.points = points;
	
	shared.continueLoop = true;
	shared.maxIter = 1000;
	shared.pChange = 0.0;
	shared.change = 0.0;
	shared.numThreads = numThreads;
	shared.numProcessors = numProcessors;

	initializeBarrier(&(shared.barrier), numProcessors);

	shared.centroids = centroids;
	for (int i = 0; i < K; ++i) {
		int point = rand() % numPoints;
		for (int dim = 0; dim < pointDim; ++dim)
			shared.centroids[i * pointDim + dim] = points[point * pointDim + dim];
	}

	shared.labels = (int*)checkedCalloc(numPoints, sizeof(int));

	// Create thread workload descriptors
	LocalAssignData* local = (LocalAssignData*)checkedCalloc(numProcessors, sizeof(LocalAssignData));
	for (int i = 0; i < numProcessors; ++i) {
		local[i].shared = &shared;
		local[i].begin = i * pointsPerProcessor;
		// Give the last partition any remainder points when numPoints is not evenly divisible.
		local[i].end = (i == numProcessors - 1) ? numPoints : (i + 1) * pointsPerProcessor;
		local[i].labelCount = (int*)checkedCalloc(K, sizeof(int));
		local[i].partialCentroids = (double*)checkedCalloc(K * pointDim, sizeof(double));
	}

	shared.local = local;

	// Kick off the threads
	HANDLE* threads = (HANDLE*)checkedCalloc(numThreads, sizeof(HANDLE));
	for (int i = 0; i < numThreads; ++i)
		threads[i] = CreateThread(0, 0, assignThread, &local[i + 1], 0, NULL);

	// Do work on this thread so that it's just not sitting here idle while the 
	// other threads are doing work.
	assignThread(&local[0]);

	// Clean up
	WaitForMultipleObjects(numThreads, threads, true, INFINITE);
	for (int i = 0; i < numThreads; ++i)
		CloseHandle(threads[i]);

	free(threads);

	for (int i = 0; i < numProcessors; ++i) {
		free(local[i].labelCount);
		free(local[i].partialCentroids);
	}

	free(local);

	free(shared.labels);

	deleteBarrier(&(shared.barrier));
}

C#

The CUDAfy.NET GPGPU C# implementation required a lot of experimentation to find an efficient solution.

In the GPGPU paradigm there is a host and a device: sequential operations take place on the host (i.e., managed C# code) and parallel operations on the device (i.e., CUDA code). To delineate between the two, the [Cudafy] method attribute is used on the static public method assign. The set of host operations are all within the Fit routine.

Under the CUDA model, threads are bundled together into blocks, and blocks together into a grid. Here the data is partitioned so that each block consists of half the maximum number of threads possible per block and the total number of blocks is the number of points divided by that quantity. This was done through experimentation, and motivated by Thomas Bradley’s Advanced CUDA Optimization workshop notes [pdf] which suggest that in that regime the memory lines become saturated and cannot yield better throughput. Each block runs on a Streaming Multiprocessor (a collection of CUDA cores) having shared memory that the threads within the block can use. These blocks are then executed in pipeline fashion on the available Streaming Multiprocessors to give the desired performance from the GPGPU.

What is nice about the shared memory is that it is much faster than the global memory of the GPGPU. (cf. Using Shared Memory in CUDA C/C++) To make use of this fact the threads will rely on two arrays in shared memory: sum of the points and the count of those belonging to each centroid. Once the arrays have been zeroed out by the threads, all of the threads will proceed to find the nearest centroid of the single point they are assigned to and then update those shared arrays using the appropriate atomic operations. Once all of the threads complete that assignment, the very first thread will then add the arrays in shared memory to those in the global memory using the appropriate atomic operations.

using Cudafy;
using Cudafy.Host;
using Cudafy.Translator;
using Cudafy.Atomics;
using System;

namespace CUDAfyTesting {
    public class CUDAfyKMeans {
        [Cudafy]
        public static void assign(GThread thread, int[] constValues, double[] centroids, double[] points, float[] outputSums, int[] outputCounts) {
            // Unpack the const value array
            int pointDim = constValues[0];
            int K = constValues[1];
            int numPoints = constValues[2];

            // Ensure that the point is within the boundaries of the points 
            // array.
            int tId = thread.threadIdx.x;
            int point = thread.blockIdx.x * thread.blockDim.x + tId;
            if (point >= numPoints)
                return;

            // Use two shared arrays since they are much faster than global 
            // memory. The shared arrays will be scoped to the block that this 
            // thread belongs to.

            // Accumulate the each point's dimension assigned to the k'th 
            // centroid. When K = 128 => pointDim = 2; when pointDim = 128 
            // => K = 2; Thus max(len(sharedSums)) = 256.
            float[] sharedSums = thread.AllocateShared<float>("sums", 256);
            if (tId < K * pointDim)
                sharedSums[tId] = 0.0f;

            // Keep track of how many times the k'th centroid has been assigned 
            // to a point. max(K) = 128
            int[] sharedCounts = thread.AllocateShared<int>("counts", 128);
            if (tId < K)
                sharedCounts[tId] = 0;

            // Make sure all threads share the same shared state before doing 
            // any calculations.
            thread.SyncThreads();

            // Find the optCentroid for point.
            double optDist = double.PositiveInfinity;
            int optCentroid = -1;

            for (int centroid = 0; centroid < K; ++centroid) {
                double dist = 0.0;
                for (int dim = 0; dim < pointDim; ++dim) {
                    double d = centroids[centroid * pointDim + dim] - points[point * pointDim + dim];
                    dist += d * d;
                }

                if (dist < optDist) {
                    optDist = dist;
                    optCentroid = centroid;
                }
            }

            // Add the point to the optCentroid sum
            for (int dim = 0; dim < pointDim; ++dim)
                // CUDA doesn't support double precision atomicAdd so cast down 
                // to float...
                thread.atomicAdd(ref(sharedSums[optCentroid * pointDim + dim]), (float)points[point * pointDim + dim]);

            // Increment the optCentroid count
            thread.atomicAdd(ref(sharedCounts[optCentroid]), +1);


            // Wait for all of the threads to complete populating the shared 
            // memory before storing the results back to global memory where 
            // the host can access the results.
            thread.SyncThreads();

            // Have to do a lock on both of these since some other Streaming 
            // Multiprocessor could be running and attempting to update the 
            // values at the same time.

            // Copy the shared sums to the output sums
            if (tId == 0)
                for (int i = 0; i < K * pointDim; ++i)
                    thread.atomicAdd(ref(outputSums[i]), sharedSums[i]);

            // Copy the shared counts to the output counts
            if (tId == 0)
                for (int i = 0; i < K; i++)
                    thread.atomicAdd(ref(outputCounts[i]), sharedCounts[i]);
        }

Before going on to the Fit method, let’s look at what CUDAfy.NET is doing under the hood to convert the C# code to run on the CUDA-enabled GPGPU. Within the CUDAfy.Translator namespace there are a handful of classes for decompiling the application into an abstract syntax tree using ICSharpCode.Decompiler and Mono.Cecil, then converting the AST over to CUDA C via the visitor pattern, next compiling the resulting CUDA C using NVIDIA’s NVCC compiler, and finally the compilation result is relayed back to the caller if there’s a problem; otherwise, a CudafyModule instance is returned, and the compiled CUDA C code it represents is loaded up on the GPGPU. (The classes and method calls of interest are: CudafyTranslator.DoCudafy, CudaLanguage.RunTransformsAndGenerateCode, CUDAAstBuilder.GenerateCode, CUDAOutputVisitor and CudafyModule.Compile.)

        private CudafyModule cudafyModule;
        private GPGPU gpgpu;
        private GPGPUProperties properties;

        public int PointDim { get; private set; }
        public double[] Centroids { get; private set; }

        public CUDAfyKMeans() {
            cudafyModule = CudafyTranslator.Cudafy();

            gpgpu = CudafyHost.GetDevice(CudafyModes.Target, CudafyModes.DeviceId);
            properties = gpgpu.GetDeviceProperties(true);

            gpgpu.LoadModule(cudafyModule);
        }

The Fit method follows the same paradigm that I presented earlier with the C++ code. The main difference here is the copying of managed .NET resources (arrays) over to the device. I found these operations to be relatively time intensive, and I did find some suggestions from the CUDAfy.NET website on how to use pinned memory – essentially copy the managed memory to unmanaged memory, then do an asynchronous transfer from the host to the device. I tried this with the points array since it’s the largest resource, but did not see noticeable gains so I left it as is.

At the beginning of each iteration of the main loop, the device counts and sums are cleared out through the Set method, then the CUDA code is invoked using the Launch routine with the specified block and grid dimensions and device pointers. One thing that the API does is return an array when you allocate or copy memory over to the device. Personally, an IntPtr seems more appropriate. Execution of the routine is very quick, where on some of my tests it took 1 to 4 ms to process 100,000 two dimensional points. Once the routine returns, memory from the device (sum and counts) is copied back over to the host which then does a quick operation to derive the new centroid locations and copy that memory over to the device for the next iteration.

        public void Fit(double[] points, int pointDim, int K) {
            if (K <= 0)
                throw new ArgumentOutOfRangeException("K", "Must be greater than zero.");

            if (pointDim <= 0)
                throw new ArgumentOutOfRangeException("pointDim", "Must be greater than zero.");

            if (points.Length < pointDim)
                throw new ArgumentOutOfRangeException("points", "Must have at least pointDim entries.");

            if (points.Length % pointDim != 0)
                throw new ArgumentException("points.Length must be n * pointDim > 0.");

            int numPoints = points.Length / pointDim;

            // Figure out the partitioning of the data.
            int threadsPerBlock = properties.MaxThreadsPerBlock / 2;
            int numBlocks = (numPoints / threadsPerBlock) + (numPoints % threadsPerBlock > 0 ? 1 : 0);

            dim3 blockSize = new dim3(threadsPerBlock, 1, 1);

            dim3 gridSize = new dim3(
                Math.Min(properties.MaxGridSize.x, numBlocks),
                Math.Min(properties.MaxGridSize.y, (numBlocks / properties.MaxGridSize.x) + (numBlocks % properties.MaxGridSize.x > 0 ? 1 : 0)),
                1
                );

            int[] constValues = new int[] { pointDim, K, numPoints };
            float[] assignmentSums = new float[pointDim * K];
            int[] assignmentCount = new int[K];

            // Initial centroid locations picked at random
            Random prng = new Random();
            double[] centroids = new double[K * pointDim];
            for (int centroid = 0; centroid < K; centroid++) {
                int point = prng.Next(points.Length / pointDim);
                for (int dim = 0; dim < pointDim; dim++)
                    centroids[centroid * pointDim + dim] = points[point * pointDim + dim];
            }

            // These arrays are only read from on the GPU- they are never written 
            // on the GPU.
            int[] deviceConstValues = gpgpu.CopyToDevice<int>(constValues);
            double[] deviceCentroids = gpgpu.CopyToDevice<double>(centroids);
            double[] devicePoints = gpgpu.CopyToDevice<double>(points);

            // These arrays are written to on the GPU.
            float[] deviceSums = gpgpu.CopyToDevice<float>(assignmentSums);
            int[] deviceCount = gpgpu.CopyToDevice<int>(assignmentCount);


            // Set up main loop so that no more than maxIter iterations take 
            // place, and that a relative change less than 1% in centroid 
            // positions will terminate the loop.
            int maxIter = 1000;
            double change = 0.0, pChange = 0.0;

            do {
                pChange = change;

                // Clear out the assignments, and assignment counts on the GPU.
                gpgpu.Set(deviceSums);
                gpgpu.Set(deviceCount);

                // Launch the GPU portion
                gpgpu.Launch(gridSize, blockSize, "assign", deviceConstValues, deviceCentroids, devicePoints, deviceSums, deviceCount);

                // Copy the results memory from the GPU over to the CPU.
                gpgpu.CopyFromDevice<float>(deviceSums, assignmentSums);
                gpgpu.CopyFromDevice<int>(deviceCount, assignmentCount);

                // Compute the new centroid locations.
                double[] newCentroids = new double[centroids.Length];
                for (int centroid = 0; centroid < K; ++centroid)
                    for (int dim = 0; dim < pointDim; ++dim)
                        newCentroids[centroid * pointDim + dim] = assignmentSums[centroid * pointDim + dim] / assignmentCount[centroid];

                // Calculate how much the centroids have changed to decide 
                // whether or not to terminate the loop.
                change = 0.0;
                for (int centroid = 0; centroid < K; ++centroid)
                    for (int dim = 0; dim < pointDim; ++dim) {
                        double d = newCentroids[centroid * pointDim + dim] - centroids[centroid * pointDim + dim];
                        change += d * d;
                    }

                // Update centroid locations on CPU & GPU
                Array.Copy(newCentroids, centroids, newCentroids.Length);
                deviceCentroids = gpgpu.CopyToDevice<double>(centroids);

            } while (change > 0.01 * pChange && --maxIter > 0);

            gpgpu.FreeAll();

            this.Centroids = centroids;
            this.PointDim = pointDim;
        }
    }
}

Python

I include the Python implementation for the sake of demonstrating how scikit-learn was invoked throughout the following experiments section.

model = KMeans(
           n_clusters = numClusters, 
           init='random', 
           n_init = 1, 
           max_iter = 1000, 
           tol = 1e-3, 
           precompute_distances = False, 
           verbose = 0, 
           copy_x = False, 
           n_jobs = numThreads
           )

model.fit(X)  # X = (numPoints, pointDim) numpy array.

Experimental Setup

All experiments were conducted on a laptop with an Intel Core i7-2630QM Processor and NVIDIA GeForce GT 525M GPGPU running Windows 7 Home Premium. C++ and C# implementations were developed and compiled with Microsoft Visual Studio Express 2013 for Desktop targeting C# .NET Framework 4.5 (Release, Mixed Platforms) and C++ (Release, Win32). The Python implementation was developed using Eclipse Luna 4.4.1 and run with Python 2.7, scikit-learn 0.16.0, and numpy 1.9.1. All compilers use default arguments and no extra optimization flags.

For each test, each reported test point is the median of thirty sample run times of a given algorithm and set of arguments. Run time is computed as the (wall) time taken to execute model.fit(points, pointDim, numClusters) where time is measured by: QueryPerformanceCounter in C++, System.Diagnostics.Stopwatch in C#, and time.clock in Python. Every test is based on a dataset having two natural clusters at .25 or -.25 in each dimension.

Results

Varying point quantity

point-quantity
Figure X: Left-to-right: C++, C#, Python run time to cluster 10 to 10^7 two dimensional points into two clusters.

Both the C++ and C# sequential and parallel implementations outperform the Python scikit-learn implementations. The C++ sequential and parallel implementations, in turn, outperform their C# counterparts. The C++ sequential and parallel implementations are themselves effectively tied, as the overhead associated with multithreading appears to offset any multithreaded performance gains one would expect. The C# CUDAfy.NET implementation surprisingly does not outperform the C# parallel implementation, but does outperform the C# sequential one as the number of points to cluster increases.

So what’s the deal with Python scikit-learn? Why is the parallel version so slow? Well, it turns out I misunderstood the n_jobs parameter. I interpreted it to mean that the process of clustering a single set of points would be done in parallel; however, it actually means that that number of simultaneous runs of the whole process will occur in parallel. I was tipped off to this when I noticed multiple python.exe fork processes being spun off, which surprised me that someone would implement a parallel routine that way and led to a more thorough reading of the scikit-learn documentation. There is parallelism going on with scikit-learn, just not the desired type. Taking that into account, the sequential implementation performs reasonably well for a dynamically typed, interpreted language.

Varying point dimension

point-dimension
Figure X: Left-to-right: C++, C#, Python run time to cluster 10^5 points of dimension 2 to 2^7 into two clusters.

The C++ and C# parallel implementations exhibit consistently improved run time over their sequential counterparts. In all cases the performance is better than scikit-learn’s. Surprisingly, the C# CUDAfy.NET implementation does worse than both the C# sequential and parallel implementations. Why do we not see better CUDAfy.NET performance? The performance we see is identical to the varying point quantity test. So on one hand it’s nice that increasing the point dimensions did not dramatically increase the run time, but ideally, the CUDAfy.NET performance should be better than the sequential and parallel C# variants for this test. My leading theory is that higher point dimensions result in more data that must be transferred between host and device, which is a relatively slow process. Since I’m short on time, this will have to be something I investigate in more detail in the future.

Varying cluster quantity

cluster-quantity
Figure X: Left-to-right: C++, C#, Python run time to cluster 10^5 two dimensional points into 2 to 2^7 clusters.

As in the point dimension test, the C++ and C# parallel implementations outperform their sequential counterparts, while the scikit-learn implementation starts to show some competitive performance. The exciting news of course is that varying the cluster size finally reveals improved C# CUDAfy.NET run time. Now there is some curious behavior at the beginning of each plot. We get \le 10 \text{ ms} performance for two clusters, then jump up to about \le 100 \text{ ms} for four to eight clusters. The number of points and their dimension are held constant, but we allocate a few extra doubles for the cluster centroids. I believe this has to do with cache behavior. I’m assuming that for fewer than four clusters everything that’s needed sits nicely in the fast L1 cache, and moving up to four and more clusters requires more exchanging of data between the L1, L2, L3 caches and (slower) main memory across the different cores of the Intel Core i7-2630QM processor I’m using. As before, I’ll need to do some more tests to verify that this is what is truly happening.

Language comparison

lang-compare
Figure X: Left-to-right: point quantity, point dimension, and cluster quantity run time summaries for C++, C#, and Python implementations. Columns in yellow are the fastest observed implementation and paradigm for the given test.

For the three tests considered, the C++ implementations gave the best run time performance on point quantity and point dimension tests while the C# CUDAfy.NET implementation gave the best performance on the cluster quantity test.

The C++ implementation could be made to run faster by preallocating memory in the same fashion that C# does. In C# when an application is first created a block of memory is allocated for the managed heap. As a result, allocation of reference types in C# is done by incrementing a pointer instead of doing an unmanaged allocation (malloc, etc.). (cf. Automatic Memory Management) This allocation takes place before executing the C# routines, while the same allocation takes place during the C++ routines. Hence, the C++ run times will have an overhead not present in the C# run times. Had I implemented memory allocation in C++ the same as it’s done in C#, then the C++ implementation would undoubtedly be even faster than the C# ones.

While using scikit-learn in Python is convenient for exploratory data analysis and prototyping machine learning algorithms, it leaves much to be desired in performance; it frequently came in ten times slower than the other two implementations on the varying point quantity and dimension tests, but was within tolerance on the varying cluster quantity test.

Future Work

The algorithmic approach here was to parallelize work on data points, but as the dimension of each point increases, it may make sense to explore algorithms that parallelize work across dimensions instead of points.

I’d like to spend more time figuring out some of the high-performance nuances of programming the GPGPU (as well as traditional C++), which take more time and patience than the week or two I spent on this. In addition, I’d like to dig a little deeper into doing CUDA C directly rather than through the convenient CUDAfy.NET wrapper, as well as explore OpenMP and OpenCL to see how they compare from a development and performance-oriented view to CUDA.

Python and scikit-learn were used as a baseline here, but it would be worth spending extra time to see how R and Julia compare, especially the latter since Julia pitches itself as a high-performance solution, and is used for exploratory data analysis and prototyping machine learning systems.

While the emphasis here was on trying out CUDAfy.NET and getting some exposure to GPGPU programming, I’d like to apply CUDAfy.NET to the expectation maximization algorithm for fitting multivariate Gaussian mixture models to a dataset. GMMs are a natural extension of k-means clustering, and it will be good to implement the more involved EM algorithm.

Conclusions

Through this exercise, we can expect to see modest speedups over sequential implementations of about 2.62x and 11.69x in the C# parallel and GPGPU implementations respectively when attempting to find large numbers of clusters on low dimensional data. Fortunately, the way k-means clustering is typically used is to find the cluster quantity that minimizes the Bayesian information criterion or Akaike information criterion, which amounts to running the varying cluster quantity test on real data. On the other hand, most machine learning data is of a high dimension so further testing (on a real data set) would be needed to verify its effectiveness in a production environment. Nonetheless, we’ve seen how parallel and GPGPU based approaches can reduce the time it takes to complete the clustering task, and learned some things along the way that can be applied to future work.

Bibliography

[LiFa89] Li Xiaobo and Fang Zhixi, “Parallel clustering algorithms”, Parallel Computing, 1989, 11(3): pp.275-290.

[MaMi09] Mario Zechner, Michael Granitzer. “Accelerating K-Means on the Graphics Processor via CUDA.” First International Conference on Intensive Applications and Services, INTENSIVE’09. pp. 7-15, 2009.

[Stu82] Stuart P. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28:129-137, 1982.

Written by lewellen

2015-09-01 at 8:00 am

Algorithms for Procedurally Generated Environments

with one comment

Fig. 1: Demonstration of different graphics techniques and features of the procedurally generated environment.

Introduction

I recently completed a graduate course in Computer Graphics that required us to demonstrate a significant understanding of OpenGL and general graphics techniques. Given the short amount of time to work with, I chose to work on creating a procedurally generated environment consisting of land, water, trees, a cabin, smoke, and flying insects. The following write-up explains my approach and the established algorithms that were used to create the different visual effects showcased in the video above.

Terrain

terrain-2
Fig. 2: First few iterations of the Midpoint Displacement algorithm.

There are a variety of different techniques for creating terrain. More complex ones rely on visualizing a three dimensional scalar field, while simpler ones visualize a two dimensional surface defined by a fixed image, or dynamically using a series of specially crafted functions or random behavior. I chose to take a fractal-based approach given by [FFC82]’s Midpoint Displacement algorithm. A two dimensional grid of size (2^n + 1)^2 (for n > 1) is allocated with each entry representing the height of the terrain at that row and column position. Beginning with the corners, the corresponding four midpoints, as well as singular center, are found and their height calculated by drawing from a uniform random variable whose support is given by the respective corners. The newly assigned values now form four separate squares which can be assigned values recursively as shown above. The Midpoint Displacement algorithm produces noisy surfaces that are jagged in appearance. To smooth out the results, a 3 \times 3 Gaussian Filter is applied to the surface twice in order to produce a more natural, smooth looking surface.
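
For concreteness, a recursive sketch of the displacement step in C++ might look like the following. This is an illustrative outline under my own naming (midpointDisplace, uniform, and at are not names from the project’s source); seeding the four corner heights and the two Gaussian smoothing passes are omitted.

// Recursive midpoint displacement over a (2^n + 1)^2 height grid (illustrative sketch).
// Requires <vector>, <random>, <algorithm>. grid is row-major with side length `size`.
void midpointDisplace(std::vector<double>& grid, int size, int r0, int c0, int r1, int c1, std::mt19937& rng) {
	if (r1 - r0 < 2) return; // base case: no interior midpoints remain

	int rm = (r0 + r1) / 2, cm = (c0 + c1) / 2;
	auto at = [&](int r, int c) -> double& { return grid[r * size + c]; };
	auto uniform = [&](double a, double b) {
		std::uniform_real_distribution<double> u(std::min(a, b), std::max(a, b));
		return u(rng);
	};

	// Four edge midpoints, each drawn uniformly between its two adjacent corners...
	at(r0, cm) = uniform(at(r0, c0), at(r0, c1));
	at(r1, cm) = uniform(at(r1, c0), at(r1, c1));
	at(rm, c0) = uniform(at(r0, c0), at(r1, c0));
	at(rm, c1) = uniform(at(r0, c1), at(r1, c1));

	// ...and the center drawn over the span of all four corners.
	double lo = std::min({ at(r0, c0), at(r0, c1), at(r1, c0), at(r1, c1) });
	double hi = std::max({ at(r0, c0), at(r0, c1), at(r1, c0), at(r1, c1) });
	at(rm, cm) = uniform(lo, hi);

	// The four newly formed sub-squares are then displaced recursively.
	midpointDisplace(grid, size, r0, c0, rm, cm, rng);
	midpointDisplace(grid, size, r0, cm, rm, c1, rng);
	midpointDisplace(grid, size, rm, c0, r1, cm, rng);
	midpointDisplace(grid, size, rm, cm, r1, c1, rng);
}

The top-level call would seed the four corner heights and then invoke midpointDisplace(grid, size, 0, 0, size - 1, size - 1, rng).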

Face normals can be found by taking the cross product of the forward differences in both the row and column directions; however, this leads to a faceted looking surface. For a more natural appearance, vertex normals are calculated using central differences in the row and column directions. The cross product of these approximations then gives the approximate surface normal at each vertex to ensure proper lighting.
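
A small sketch of that calculation follows, assuming a GLM-style vec3, a height accessor h(r, c), uniform grid spacing, and the z axis as “up” – none of which are taken from the project’s source.

// Approximate vertex normal at grid cell (r, c) from central differences (illustrative sketch).
// Requires <glm/glm.hpp>.
float h(int r, int c); // assumed height accessor into the displaced grid

glm::vec3 vertexNormal(int r, int c, float spacing) {
	glm::vec3 tangentRow(2.0f * spacing, 0.0f, h(r + 1, c) - h(r - 1, c)); // central difference along rows
	glm::vec3 tangentCol(0.0f, 2.0f * spacing, h(r, c + 1) - h(r, c - 1)); // central difference along columns
	return glm::normalize(glm::cross(tangentRow, tangentCol));             // normal from the two tangent estimates
}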

terrain-texturing
Fig. 3: Example process of blending an elevation band texture with the base texture to create geographically correct looking terrain textures.

Texturing of the terrain is done by dynamically blending eight separate textures together based on terrain height as shown above. The process begins by loading in a base texture which is applied to all heights of the terrain. The next texture to apply is loaded, and an alpha mask is created and applied to the next texture based on random noise and a specific terrain height band. Blending of the masked texture and base texture is a function of the terrain’s height where the normalized height is passed through a logistic function to decide what portion of each texture should be used. The combined texture then serves as a new base texture and the process repeats until all textures have been blended together.
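
A minimal sketch of the blend weight for a single band might look like the following, where the band center and steepness are illustrative tunable parameters rather than values from the project.

// Logistic blend weight for one elevation band as a function of normalized height (illustrative sketch).
// Requires <cmath>.
float blendWeight(float normalizedHeight, float bandCenter, float steepness) {
	return 1.0f / (1.0f + expf(-steepness * (normalizedHeight - bandCenter)));
}

The masked band texture is then mixed with the running base texture using this weight, and the blended result becomes the base for the next band.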

Water

In order to generate realistic looking water, a number of different OpenGL capabilities were employed to accurately capture a half dozen different water effects consisting of reflections, refraction, waves, ripples, lighting, and Fresnel effects. Compared to other elements of the project, water required the largest graphics effort to get right.

reflection-passes
Fig. 4: The different rendering passes for creating realistic looking water with reflections and refraction.

To obtain reflections, a three-pass rendering process is used. In the first pass, the scene is clipped and rendered to a frame buffer (with color and depth attachments) from above the water revealing only what is below the surface for the refraction effects. The second pass clips and renders the scene to a frame buffer (with only a color attachment) from below the water surface revealing only what is above the water for the reflection effects. The third pass then combines these buffers on the water surface through vertex and fragment shaders to give the desired appearance. To map the frame buffer renderings to the water surface, the clipped coordinates calculated in the vertex shader are converted to normalized device coordinates through perspective division in the fragment shader which allows one to map the (u, v) coordinate of the texture as it should appear on the screen to coordinates of the water surface.
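
The texture lookup itself amounts to a perspective divide followed by a shift from normalized device coordinates into [0, 1] texture space. A small sketch of what the fragment shader computes, written here in C++/GLM with illustrative names:

// Map a water-surface fragment's clip-space position to the (u, v) used to sample the
// reflection/refraction frame buffer textures (illustrative sketch of the shader math).
// Requires <glm/glm.hpp>.
glm::vec2 clipToTexCoord(const glm::vec4& clipPos) {
	glm::vec2 ndc = glm::vec2(clipPos.x, clipPos.y) / clipPos.w; // perspective division -> [-1, 1]
	return ndc * 0.5f + 0.5f;                                    // shift into [0, 1] texture space
}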

shader-effects
Fig. 5: Reflection, ripples normal map, water depth, refraction, specular normal map, and depth buffer textures used to achieve different visual effects.

To create the appearance of water ripples, a normal map is sampled and the resulting time varying displacement is used to sample the reflection texture for the fragment’s reflection color. Next, a similar sampling of the normal map is done at a coarser level to emulate the specular lighting that would appear on the subtle water waves created by the vertex shader. Refraction ripples and caustic lighting are achieved by sampling from the normal map just as the surface ripples and specular lighting were. To make the water appear cloudy, the depth buffer from the refraction rendering is used in conjunction with water depth so that terrain deeper under water is less visible as it would be in real life.

To combine the reflection and refraction components, the Fresnel effect is used. This effect causes the surface of the water to vary in appearance based on viewing angle. When the viewer’s gaze is shallow to the water surface, the water is dominated by the reflection component, while when gazing downward, the water is more transparent, giving way to the refraction component. A final adjustment reduces the opacity of the texture near the shore so that shallower water reveals more of the underlying terrain.
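
A minimal sketch of the view-dependent mix is given below using Schlick’s approximation; whether the project used Schlick’s formula or a different falloff is an assumption here, and the reflectance constant is illustrative.

#include <algorithm>
#include <cmath>

// cosTheta is the dot product of the view direction and the surface normal;
// r0 is the reflectance at normal incidence (illustrative default for water).
float fresnelWeight(float cosTheta, float r0 = 0.02f) {
    cosTheta = std::clamp(cosTheta, 0.0f, 1.0f);
    return r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Shallow gazes (small cosTheta) weight the reflection color heavily, while
// looking straight down favors the refraction color.
float mixColors(float reflection, float refraction, float cosTheta) {
    float f = fresnelWeight(cosTheta);
    return f * reflection + (1.0f - f) * refraction;
}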

Flora

Fig. 6: Examples of plants generated by using the Stochastic Lindenmayer Systems framework.

The scene contains only cottonwood trees, but the underlying algorithm, based on Stochastic Lindenmayer Systems [Lin68], can produce a large variety of flora as shown above. The idea is that an n-ary tree is created with geometric annotations consisting of length and radius, and position, yaw, and pitch relative to its parent node. Internal nodes of the tree are rendered as branches or stems, while leaf nodes are rendered as groups of leaves, flowers, fruits and so on. Depending on the type of plant one wishes to generate, these parameters take on different values, and the construction of the n-ary tree varies. For a palm tree, a linked list is created, whereas a flower may have several linked lists sharing a common head, and a bush may be a factorial tree with a regular pitch.
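
The n-ary tree can be captured by a small node structure; the field names and the palm example below are assumptions chosen to illustrate the shape of the data, not the project's actual classes.

#include <memory>
#include <vector>

// Each node stores its geometry and its pose relative to its parent; leaf
// nodes carry foliage (leaves, flowers, fruits) instead of branch geometry.
struct PlantNode {
    float length = 0.0f;     // branch or stem length
    float radius = 0.0f;     // branch or stem radius
    float offset = 0.0f;     // position along the parent branch
    float yaw = 0.0f;        // rotation about the parent's axis
    float pitch = 0.0f;      // inclination away from the parent's axis
    bool isFoliage = false;
    std::vector<std::unique_ptr<PlantNode>> children;
};

// A palm-like plant degenerates to a linked list: one child per node with a
// crown of leaves at the top.
std::unique_ptr<PlantNode> makePalm(int segments) {
    auto root = std::make_unique<PlantNode>();
    PlantNode* current = root.get();
    for (int i = 1; i < segments; i++) {
        current->children.push_back(std::make_unique<PlantNode>());
        current = current->children.back().get();
    }
    current->isFoliage = true;
    return root;
}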

Smoke

Fig. 7: Examples of smoke plumes generated using random particles, Metaballs, and Marching Tetrahedra algorithms.

The primary algorithmic challenge of the project was to visualize smoke coming from the chimney of the cabin in the scene. To achieve this, a simple particle system was written in conjunction with [Bli82]’s Metaballs and [PT+90]’s Marching Tetrahedra algorithms. The tetrahedral variant was chosen since it is easier to implement from scratch than [LC87]’s original Marching Cubes algorithm. The resulting smoke plumes produced from this chain of algorithms are shown above.

Fig. 8: Visual explanation of the smoke generation process in two dimensions.

Particles possess three dimensional position and velocity components, and are added to the system at a fixed interval in batches with random initial positions on the xy-plane and zero velocity terms. A uniform random vector field is created with random x, y components and a fixed, positive z component. Euler’s Forward Method is applied to the system to update each particle’s position and velocity. Any particles that escape from the unit bounding cube are removed from the system. This process produces the desired Brownian paths that are typical of smoke particles. To visualize each particle, the Metaballs algorithm is used to create a potential field about each particle. The three dimensional grid is populated in linear time with respect to the number of particles by iterating over a fixed volume about each particle, since outside that volume the particle’s potential field is effectively zero.
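
The two numerical pieces, the forward Euler update and the linear-time population of the potential grid, might look roughly like the following; the unit-cube layout, grid resolution, falloff, and radius are assumptions made for the sake of the example.

#include <algorithm>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Forward Euler step against a sampled field value (random x, y components and
// a fixed, positive z component, per the description above).
void eulerStep(Particle& p, float dt, float fx, float fy, float fz) {
    p.vx += dt * fx;   p.vy += dt * fy;   p.vz += dt * fz;
    p.px += dt * p.vx; p.py += dt * p.vy; p.pz += dt * p.vz;
}

// Accumulate one particle's Metaball potential into an n x n x n grid spanning
// the unit cube, touching only a fixed neighborhood of cells so the total cost
// stays linear in the number of particles.
void splat(std::vector<float>& grid, int n, const Particle& p, float radius) {
    int cx = static_cast<int>(p.px * n), cy = static_cast<int>(p.py * n), cz = static_cast<int>(p.pz * n);
    int r = static_cast<int>(radius * n) + 1;
    for (int i = std::max(0, cx - r); i <= std::min(n - 1, cx + r); i++)
        for (int j = std::max(0, cy - r); j <= std::min(n - 1, cy + r); j++)
            for (int k = std::max(0, cz - r); k <= std::min(n - 1, cz + r); k++) {
                float dx = i / float(n) - p.px, dy = j / float(n) - p.py, dz = k / float(n) - p.pz;
                float d2 = dx * dx + dy * dy + dz * dz;
                if (d2 < radius * radius)                    // outside: effectively zero
                    grid[(i * n + j) * n + k] += 1.0f / (1.0f + d2);
            }
}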

Fig. 9: Cube segmented into 6 tetrahedra, and the two primary cases of the Marching Tetrahedra algorithm.

The resulting scalar field from this process is passed along to the Marching Tetrahedra algorithm. The algorithm inspects each volume of the grid in cubic time with respect to the grid edge size. The eight points of the volume are assigned inside / outside labellings, with those volumes completely inside or outside ignored. Those having mixed labellings contain a segment of the surface we wish to render. A single volume is segmented into 6 tetrahedra, with two tetrahedra facing each plane and a common corner shared by all, as shown above. Each tetrahedron then has sixteen labelling cases to examine, each leading to a different surface. Two of these cases are degenerate: all inside or all outside. The remaining fourteen cases can be reduced to two by symmetry as shown above. To ensure the surface is accurate, the surface vertices are found by linearly interpolating between inside / outside grid points.
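
The interpolation of a surface vertex along a tetrahedron edge whose endpoints straddle the iso-level reduces to a single lerp; a minimal sketch, with the iso threshold as a parameter:

struct Vec3 { float x, y, z; };

// p0 and p1 are the two grid points of an edge with potentials v0 and v1 on
// opposite sides of the iso-level, so v0 != v1 and the division is safe.
Vec3 edgeVertex(const Vec3& p0, float v0, const Vec3& p1, float v1, float iso) {
    float t = (iso - v0) / (v1 - v0);   // fraction of the way from p0 to p1
    return { p0.x + t * (p1.x - p0.x),
             p0.y + t * (p1.y - p0.y),
             p0.z + t * (p1.z - p0.z) };
}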

Face normals can be computed directly from the resulting surface planes by taking the usual cross product. In order to calculate vertex normals, numerical differentiation is used to derive the gradient of the scalar field at each grid point using backward, central, and forward differences depending on availability. Taking the calculated normals at each grid point, the surface normals at the previously interpolated surface vertices are then the linear interpolation of the corresponding grid point normals. Given more time, I would have liked to look into surface tracking and related data structures to reduce the cubic surface generation process down to just those volumes that require a surface to be drawn.

Butterflies

Fig. 10: Example series of movements and completed pursuit curve of predator and prey.

For the final stretch goal, the proposed static options were eschewed in favor of adding a dynamic element that would help bring the scene to life without being obtrusive. As a result, a kaleidoscope of butterflies that meanders through the scene was introduced.

Each butterfly follows a pursuit curve as it chases after an invisible particle following a random walk. Both butterfly and target are assigned positions drawn from a uniform random variable with unit support. At the beginning of each time step, the direction from the butterfly’s current position to where the particle will be at the next time step is calculated, and the butterfly’s position is then incremented in that direction. Once the particle escapes the unit cube, or the butterfly catches the particle, the particle is assigned a new position, and the game of cat and mouse continues.
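
One step of the pursuit might be sketched as follows; the fixed per-step speed and the catch condition are assumptions made for the example.

#include <cmath>

struct Vec3 { float x, y, z; };

// Move the butterfly a fixed distance toward where the target particle will be
// at the next time step; if it is close enough, the particle is caught.
Vec3 pursue(const Vec3& butterfly, const Vec3& targetNext, float speed) {
    Vec3 d = { targetNext.x - butterfly.x,
               targetNext.y - butterfly.y,
               targetNext.z - butterfly.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len <= speed)
        return targetNext;
    return { butterfly.x + speed * d.x / len,
             butterfly.y + speed * d.y / len,
             butterfly.z + speed * d.z / len };
}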

Each time step the butterfly’s wings and body are rotated a slight, fixed amount, and by the Euler angles defined by its direction of travel, to give the correct appearance of flying. To add variety to the butterflies, each takes on one of three different appearances (Monarch, and Blue and White Morpho varieties) based on a fair die roll. One flaw with the butterflies is that they do not take into account the positions of other objects in the scene and can often be seen flying into the ground, or through the cabin.

Conclusions

This project touched on a large variety of topics related to introductory computer graphics, but did not cover other details that were developed, including navigation, camera control, lighting, algorithms for constructing basic primitives, and the underlying design of the C++ program or implementation of the GLSL shaders. While most of the research applied to this project dates back nearly 35 years, the combination of techniques lends itself to a diverse and interesting virtual environment. Given more time, additional work could be done to expand the scene to include more procedurally generated plants, objects, and animals, as well as to make the existing elements look more photorealistic.

References

[Bli82] James F. Blinn. A generalization of algebraic surface drawing. ACM Trans. Graph., 1(3):235-256, 1982.

[FFC82] Alain Fournier, Donald S. Fussell, and Loren C. Carpenter. Computer rendering of stochastic models. Commun. ACM, 25(6):371-384, 1982.

[LC87] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3d surface construction algorithm. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1987, pages 163-169, 1987.

[Lin68] Aristid Lindenmayer. Mathematical models for cellular interactions in development I. Filaments with one-sided inputs. Journal of Theoretical Biology, 18(3):280-299, 1968.

[PT+90] Bradley Payne, Arthur W Toga, et al. Surface mapping brain function on 3d models. Computer Graphics and Applications, IEEE, 10(5):33-41, 1990.

Abstract Algebra in C#

Motivation

In C++ it is easy to define arbitrary template methods for computations involving primitive numeric types because those types inherently have arithmetic operations defined. Thus, a programmer need only implement one method for all numeric types. The compiler will infer the usage, substitute the type at compile time, and emit the appropriate machine instructions. This is C++’s approach to parametric polymorphism.
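
For instance, a single function template suffices for any type supporting the arithmetic operators; a minimal example:

#include <iostream>

// One definition serves every type with a default constructor and operator+;
// the compiler substitutes the concrete type at compile time.
template <typename T>
T sum(const T* values, int count) {
    T total = T();
    for (int i = 0; i < count; i++)
        total = total + values[i];
    return total;
}

int main() {
    int    a[] = { 1, 2, 3 };
    double b[] = { 1.5, 2.5 };
    std::cout << sum(a, 3) << " " << sum(b, 2) << std::endl;   // prints 6 and 4
    return 0;
}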

With the release of C# 2.0 in the fall of 2005, the language finally got a taste of parametric polymorphism in the form of generics. Unfortunately, types in C# do not inherently have arithmetic operations defined, so methods involving computations must use ad-hoc polymorphism to achieve the same result as in C++. The consequence is greater code bloat and an increased maintenance liability.

To get around this design limitation, I’ve decided to leverage C#’s approach to subtype polymorphism and to draw from Abstract Algebra to implement a collection of interfaces allowing for C++ like template functionality in C#. The following is an overview of the mathematical theory used to support and guide the design of my solution. In addition, I will present example problems from mathematics and computer science that can be represented in this solution, along with examples of type agnostic computations that can be performed using it.

Abstract Algebra

Abstract Algebra is focused on how different algebraic structures behave in the presence of different axioms, operations and sets. In the following three sections, I will go over the fundamental sub-fields and how they are represented under the solution.

In all three sections, I will represent the distinction between algebraic structures using C# interfaces. The type parameters on these interfaces represent the sets being acted upon by each algebraic structure. This convention is consistent with the intuitionistic (i.e., Church-style) type theory embraced by C#’s Common Type System (CTS). Parameter constraints will be used when type parameters are intended to be of a specific type. Functions on the set and elements of the set will be represented by methods and properties respectively.

Group Theory

Group Theory is the simplest of the sub-fields of Abstract Algebra, dealing with the study of a single binary operation, (\cdot), acting on a set S with elements a, b, c \in S. There are five axioms used to describe the structures studied under Group Theory:

  1. Closure: (\cdot) : S \times S \to S
  2. Associativity: (a \cdot b) \cdot c = a \cdot (b \cdot c)
  3. Commutativity : a \cdot b = b \cdot a
  4. Identity: a \cdot e = e \cdot a = a
  5. Inverse: a \cdot a^{-1} = a^{-1} \cdot a = e

The simplest of these structures is the Groupoid, satisfying only axiom (1). Any Groupoid also satisfying axiom (2) is known as a Semi-group. Any Semi-group satisfying axiom (4) is a Monoid. Monoids also satisfying axiom (5) are known as Groups. Any Group satisfying axiom (3) is an Abelian Group.

public interface IGroupoid<T> {
    T Operation(T a, T b);
}

public interface ISemigroup<T> : IGroupoid<T> {

}

public interface IMonoid<T> : ISemigroup<T> {
    T Identity { get; }
}

public interface IGroup<T> : IMonoid<T> {
    T Inverse(T t);
}

public interface IAbelianGroup<T> : IGroup<T> {

}

Ring Theory

The next logical sub-field of Abstract Algebra to study is Ring Theory, which is the study of two operations, (\cdot) and (+), on a single set. In addition to the axioms outlined above, there is an additional axiom describing how one operation distributes over the other.

  6. Distributivity: a \cdot (b + c) = (a \cdot b) + (a \cdot c), (a + b) \cdot c = (a \cdot c) + (b \cdot c)

All of the following ring structures satisfy axiom (6). Rings are distinguished by the structures formed by their two operations. The simplest of these is the Ringoid, where both operations are given by Groupoids. Any Ringoid whose operations are Semi-groups is a Semi-ring. Any Semi-ring whose first operation is a Group is a Ring. Any Ring whose second operation is a Monoid is a Ring with Unity. Any Ring with Unity whose second operation is a Group is a Division Ring. Any Division Ring whose operations are both Abelian Groups is a Field.

public interface IRingoid<T, A, M>
    where A : IGroupoid<T>
    where M : IGroupoid<T> {
    A Addition { get; }
    M Multiplication { get; }

    T Distribute(T a, T b);
}

public interface ISemiring<T, A, M> : IRingoid<T, A, M>
    where A : ISemigroup<T>
    where M : ISemigroup<T> {

}

public interface IRing<T, A, M> : ISemiring<T, A, M>
    where A : IGroup<T>
    where M : ISemigroup<T> {

}

public interface IRingWithUnity<T, A, M> : IRing<T, A, M>
    where A : IGroup<T>
    where M : IMonoid<T> {

}

public interface IDivisionRing<T, A, M> : IRingWithUnity<T, A, M>
    where A : IGroup<T>
    where M : IGroup<T> {

}

public interface IField<T, A, M> : IDivisionRing<T, A, M>
    where A : IAbelianGroup<T>
    where M : IAbelianGroup<T> {

}

Module Theory

The last, and more familiar, sub-field of Abstract Algebra is Module Theory which deals with structures with an operation, (\circ) : S \times R \to R, over two separate sets: a,b \in S and x,y \in R that satisfy the following axioms.

  7. Distributivity of S: a \circ (x + y) = (a \circ x) + (a \circ y)
  8. Distributivity of R: (a + b) \circ x = (a \circ x) + (b \circ x)
  9. Associativity of S: a \circ (b \circ x) = (a \cdot b) \circ x

All of the following module structures satisfy axioms (7)-(9). A Module consists of a scalar Ring and a vector Abelian Group. Any Module whose Ring is a Ring with Unity is a Unitary Module. Any Unitary Module whose Ring with Unity is a Field is a Vector Space.

public interface IModule<
    TScalar, 
    TVector, 
    TScalarRing, 
    TScalarAddativeGroup, 
    TScalarMultiplicativeSemigroup,
    TVectorAddativeAbelianGroup
>
    where TScalarRing : IRing<TScalar, TScalarAddativeGroup, TScalarMultiplicativeSemigroup>
    where TScalarAddativeGroup : IGroup<TScalar>
    where TScalarMultiplicativeSemigroup : ISemigroup<TScalar>
    where TVectorAddativeAbelianGroup : IAbelianGroup<TVector> 
{

    TScalarRing Scalar { get; }
    TVectorAddativeAbelianGroup Vector { get; }

    TVector Distribute(TScalar t, TVector r);
}

public interface IUnitaryModule<
    TScalar, 
    TVector, 
    TScalarRingWithUnity, 
    TScalarAddativeGroup, 
    TScalarMultiplicativeMonoid,
    TVectorAddativeAbelianGroup
> 
    : IModule<
        TScalar, 
        TVector, 
        TScalarRingWithUnity, 
        TScalarAddativeGroup, 
        TScalarMultiplicativeMonoid,
        TVectorAddativeAbelianGroup
    >
    where TScalarRingWithUnity : IRingWithUnity<TScalar, TScalarAddativeGroup, TScalarMultiplicativeMonoid>
    where TScalarAddativeGroup : IGroup<TScalar>
    where TScalarMultiplicativeMonoid : IMonoid<TScalar>
    where TVectorAddativeAbelianGroup : IAbelianGroup<TVector>
{

}

public interface IVectorSpace<
    TScalar,
    TVector,
    TScalarField,
    TScalarAddativeAbelianGroup,
    TScalarMultiplicativeAbelianGroup,
    TVectorAddativeAbelianGroup
>
    : IUnitaryModule<
        TScalar,
        TVector,
        TScalarField,
        TScalarAddativeAbelianGroup,
        TScalarMultiplicativeAbelianGroup,
        TVectorAddativeAbelianGroup
    >
    where TScalarField : IField<TScalar, TScalarAddativeAbelianGroup, TScalarMultiplicativeAbelianGroup>
    where TScalarAddativeAbelianGroup : IAbelianGroup<TScalar>
    where TScalarMultiplicativeAbelianGroup : IAbelianGroup<TScalar>
    where TVectorAddativeAbelianGroup : IAbelianGroup<TVector> 
{

}

Representation of Value Types

The CTS allows for both value and reference types on the .NET Common Language Infrastructure (CLI). The following are examples of how each theory presented above can leverage value types found in the C# language to represent concepts drawn from mathematics.

Enum Value Types and the Dihedral Group D_8

One of the simplest finite groups is the Dihedral Group of order eight, D_{8}, representing the different orientations of a square, e, obtained by reflecting the square about the vertical axis, b, and rotating the square by ninety degrees, a. The generating set is given by \lbrace a, b \rbrace and gives rise to the set \lbrace e, a, a^2, a^3, b, ba, ba^2, ba^3 \rbrace. These elements are assigned names as follows: \text{Rot}(0) = e, \text{Rot}(90) = a, \text{Rot}(180) = a^2, \text{Rot}(270) = a^3, \text{Ref}(\text{Ver}) = b, \text{Ref}(\text{Desc}) = ba, \text{Ref}(\text{Hoz}) = ba^2 and \text{Ref}(\text{Asc}) = ba^3. The relationship between these elements is visualized below:

The easiest way to represent this group as a value type is with an enum.

enum Symmetry { Rot000, Rot090, Rot180, Rot270, RefVer, RefDes, RefHoz, RefAsc }

From this enum we can define the basic Group Theory algebraic structures to take us to D_8.

public class SymmetryGroupoid : IGroupoid<Symmetry> {
    public Symmetry Operation(Symmetry a, Symmetry b) {
        // 64 cases
    }
}

public class SymmetrySemigroup : SymmetryGroupoid, ISemigroup<Symmetry> {

}

public class SymmetryMonoid : SymmetrySemigroup, IMonoid<Symmetry> {
    public Symmetry Identity {
        get { return Symmetry.Rot000; }
    }
}

public class SymmetryGroup : SymmetryMonoid, IGroup<Symmetry> {
    public Symmetry Inverse(Symmetry a) {
        switch (a) { 
            case Symmetry.Rot000:
                return Symmetry.Rot000;
            case Symmetry.Rot090:
                return Symmetry.Rot270;
            case Symmetry.Rot180:
                return Symmetry.Rot180;
            case Symmetry.Rot270:
                return Symmetry.Rot090;
            case Symmetry.RefVer:
                return Symmetry.RefVer;
            case Symmetry.RefDes:
                return Symmetry.RefAsc;
            case Symmetry.RefHoz:
                return Symmetry.RefHoz;
            case Symmetry.RefAsc:
                return Symmetry.RefDes;
        }

        throw new NotImplementedException();
    }

}

Integral Value Types and the Commutative Ring with Unity over \mathbb{Z} / 2^n \mathbb{Z}

C# exposes a number of fixed bit integral value types that allow a programmer to pick an integral value type suitable for the scenario at hand. Operations over these integral value types form a commutative ring with unity whose set is the set of congruence classes \mathbb{Z} / 2^n \mathbb{Z} = \lbrace \overline{0}, \overline{1}, \ldots, \overline{2^n-1} \rbrace, where n is the number of bits used to represent the integer and \overline{m} is the equivalence class \overline{m} = \lbrace m + k \cdot 2^n : k \in \mathbb{Z} \rbrace.

Addition is given by \overline{a} + \overline{b} = \overline{(a + b)} just as multiplication is given by \overline{a} \cdot \overline{b} = \overline{(a \cdot b)}. Both statements are equivalent to the following congruence statements: (a + b) \equiv  c \pmod{2^n} and (a \cdot b) \equiv c \pmod{2^n} respectively.

Under the binary numeral system, reduction modulo 2^n is equivalent to ignoring the bits at positions exceeding n-1, or equivalently, \displaystyle \sum_{i = 0}^{\infty} c_i 2^i \equiv \sum_{i = 0}^{n-1} c_i 2^i \pmod{2^n} where c_i \in \lbrace 0, 1 \rbrace. As a result only the first n (right most) bits need to be considered when computing the sum or product of two congruence classes, or in this case, integer values in C#. Thus, in the following implementation, it is not necessary to write any extra code to represent these operations other than writing them in their native form.

The reason why we are limited to a commutative ring with unity instead of a full field is that multiplicative inverses do not exist for all elements. A multiplicative inverse of a exists only when there is an x such that ax \equiv 1 \pmod{2^n}, and such a solution exists if and only if \gcd(a, 2^n) = 1. Immediately, any even value of a will not have a multiplicative inverse in \mathbb{Z} / 2^n \mathbb{Z}. However, all odd values will.

public class AddativeIntegerGroupoid : IGroupoid<long> {
    public long Operation(long a, long b) {
        return a + b;
    }
}

public class AddativeIntegerSemigroup : AddativeIntegerGroupoid, ISemigroup<long> {

}

public class AddativeIntegerMonoid : AddativeIntegerSemigroup, IMonoid<long> {
    public long Identity {
        get { return 0L; }
    }
}

public class AddativeIntegerGroup : AddativeIntegerMonoid, IGroup<long> {
    public long Inverse(long a) {
        return -a;
    }
}

public class AddativeIntegerAbelianGroup : AddativeIntegerGroup, IAbelianGroup<long> {

}

public class MultiplicativeIntegerGroupoid : IGroupoid<long> {
    public long Operation(long a, long b) {
        return a * b;
    }
}

public class MultiplicativeIntegerSemigroup : MultiplicativeIntegerGroupoid, ISemigroup<long> {

}

public class MultiplicativeIntegerMonoid : MultiplicativeIntegerSemigroup, IMonoid<long> {
    public long Identity {
        get { return 1L; }
    }
}

public class IntegerRingoid : IRingoid<long, AddativeIntegerGroupoid, MultiplicativeIntegerGroupoid> {
    public AddativeIntegerGroupoid Addition { get; private set;}
    public MultiplicativeIntegerGroupoid Multiplication { get; private set;}

    public IntegerRingoid() {
        Addition = new AddativeIntegerGroupoid();
        Multiplication = new MultiplicativeIntegerGroupoid();
    }

    public long Distribute(long a, long b) {
        return Multiplication.Operation(a, b);
    }
}

public class IntegerSemiring : IntegerRingoid, ISemiring<long, AddativeIntegerSemigroup, MultiplicativeIntegerSemigroup> {
    public new AddativeIntegerSemigroup Addition { get; private set;}
    public new MultiplicativeIntegerSemigroup Multiplication { get; private set;}

    public IntegerSemiring() : base() {
        Addition = new AddativeIntegerSemigroup();
        Multiplication = new MultiplicativeIntegerSemigroup();
    }
}

public class IntegerRing : IntegerSemiring, IRing<long, AddativeIntegerGroup, MultiplicativeIntegerSemigroup>{
    public new AddativeIntegerGroup Addition { get; private set; }

    public IntegerRing() : base() {
        Addition = new AddativeIntegerGroup();
    }
}

public class IntegerRingWithUnity : IntegerRing, IRingWithUnity<long, AddativeIntegerGroup, MultiplicativeIntegerMonoid> {
    public new MultiplicativeIntegerMonoid Multiplication { get; private set; }

    public IntegerRingWithUnity() : base() {
        Multiplication = new MultiplicativeIntegerMonoid();
    }
}

Floating-point Value Types and the Real Vector Space \mathbb{R}^n

C# offers three types that approximate the set of Reals: floats, doubles and decimals. Floats are the least precise, followed by doubles and decimals. These types are obviously not continuous, but the rounding error involved relative to the calculations in question is negligible, and for all intents and purposes they can be treated as continuous.

As in the previous discussion on the integers, additive and multiplicative classes are defined over the algebraic structures defined in the Group and Ring Theory sections presented above. In addition to these implementations, an additional class is defined to describe a vector.

public class Vector<T> {
    private T[] vector;

    public int Dimension {
        get { return vector.Length; }
    }

    public T this[int n] {
        get { return vector[n]; }
        set { vector[n] = value; }
    }

    public Vector() {
        vector = new T[2];
    }
}

With these classes, it is now possible to implement the algebraic structures presented in the Module Theory section from above.

public class RealVectorModule : IModule<double, Vector<double>, RealRing, AddativeRealGroup, MultiplicativeRealSemigroup, VectorAbelianGroup<double>> {
    public RealRing Scalar {
        get;
        private set;
    }

    public VectorAbelianGroup<double> Vector {
        get;
        private set;
    }

    public RealVectorModule() {
        Scalar = new RealRing();
        Vector = new VectorAbelianGroup<double>(new AddativeRealAbelianGroup());
    }

    public Vector<double> Distribute(double t, Vector<double> r) {
        Vector<double> c = new Vector<double>();
        for (int i = 0; i < c.Dimension; i++)
            c[i] = Scalar.Multiplication.Operation(t, r[i]);
        return c;
    }
}

public class RealVectorUnitaryModule : RealVectorModule, IUnitaryModule<double, Vector<double>, RealRingWithUnity, AddativeRealGroup, MultiplicativeRealMonoid, VectorAbelianGroup<double>> {
    public new RealRingWithUnity Scalar {
        get;
        private set;
    }

    public RealVectorUnitaryModule()
        : base() {
        Scalar = new RealRingWithUnity();
    }
}

public class RealVectorVectorSpace : RealVectorUnitaryModule, IVectorSpace<double, Vector<double>, RealField, AddativeRealAbelianGroup, MultiplicativeRealAbelianGroup, VectorAbelianGroup<double>> {
    public new RealField Scalar {
        get;
        private set;
    }

    public RealVectorVectorSpace()
        : base() {
        Scalar = new RealField();
    }
}

Representation of Reference Types

The following are examples of how each theory presented above can leverage reference types found in the C# language to represent concepts drawn from computer science.

Strings, Computability and Monoids

Strings are the simplest of reference types in C#. From an algebraic structure point of view, the set of possible strings, \Sigma^{*}, generated by an alphabet, \Sigma, and paired with a concatenation operation, (+), forms a monoid.

public class StringGroupoid : IGroupoid<string> {
    public string Operation(String a, String b) {
        return string.Format("{0}{1}", a, b);
    }
}

public class StringSemigroup : StringGroupoid, ISemigroup<string> {

}

public class StringMonoid : StringSemigroup, IMonoid<string> {
    public string Identity {
        get { return string.Empty; }
    }
}

Monoids over strings have a variety of applications in the theory of computation. Syntactic Monoids describe the smallest monoid that recognizes a formal language. Trace Monoids describe concurrent programming by allowing different characters of an alphabet to represent different types of locks and synchronization points, while the remaining characters represent processes.

Classes, Interfaces, Type Theory and Semi-rings

Consider the set of types, \mathcal{T}^{*}, both primitive and constructed, available in C#. The generating set of \mathcal{T}^{*} is the set of primitive reference and value types, \mathcal{T}, consisting of the types discussed thus far. New types can be defined by defining classes and interfaces.

A simple operation (\oplus) over \mathcal{T}^{*} takes two types, \alpha, \beta, and yields a third type, \gamma, known as a sum type. In type theory, this means that an instance of \gamma can be either an instance of \alpha or \beta. A second operation (\otimes) over \mathcal{T}^{*} takes two types and yields a third type representing a tuple of the first two types. In other words, \alpha \otimes \beta = (\alpha, \beta).

Both operations form semi-groups, (\mathcal{T}^{*}, \oplus) and (\mathcal{T}^{*}, \otimes), and in conjunction the two form a semi-ring.

Implementing this semi-ring is a little involved. The .NET library supports emitting dynamic type definitions at runtime. For sum types, this would lead to an inheritance view of the operation. Types \alpha and \beta would end up deriving from \gamma, which means that any sequence of sums would yield an inheritance tree. A product type would result in composition of types with projection operations, \pi_{n} : \prod_{i} \tau_{i} \to \tau_{n}, to access and assign the n\text{'th} element of the composite. Both type operation implementations are outside the scope of this write-up and I’ll likely revisit the topic in a future write-up.

Delegates and Process Algebras

The third type of reference type to mention is the delegate type which is C#’s approach to creating first-class functions. The simplest of delegates is the built-in Action delegate which represents a single procedure taking no inputs and returning no value.

Given actions a, b \in \mathcal{A}, we can define a possible execution (choice) operator, (\lVert) : \mathcal{A} \times \mathcal{A} \to \mathcal{A}, where either a or b is executed, denoted a \lVert b. The choice operation forms a commutative semigroup (\mathcal{A}, \lVert) since the operation is associative, a \lVert (b \lVert c) = (a \lVert b) \lVert c, and commutative, a \lVert b = b \lVert a.

A product operation, (\to) : \mathcal{A} \times \mathcal{A} \to \mathcal {A}, representing the sequential execution of a and then b is given by a \to b. The sequence operator forms a groupoid with unity since the operation is not associative a \to (b \to c) \neq (a \to b) \to c and there is an identity action e representing a void operation resulting in e \to a = a.

Both operations together form a ringoid, (\mathcal{A}, \to, \lVert) since the sequence operation distributes over the choice operation a \to (b \lVert c) = (a \to b) \lVert (a \to c). Meaning that a takes place and then b or c takes places is equivalent to a and then b takes place or a and then c takes place.

public class SequenceGroupoidWithUnity : IGroupoid<Action> {
    public Action Identity { 
        get { return () => {}; }
    }

    public Action Operation(Action a, Action b) {
        return () => { a(); b(); };
    }
}

public class ChoiceGroupoid : IGroupoid<Action> {
    public Action Operation(Action a, Action b) {
        if(DateTime.Now.Ticks % 2 == 0)
            return a;
        return b;
    }
}

The process algebra can be extended further to describe parallel computations with an additional operation. The operations given thus far enable one to derive the possible execution paths in a process, which in turn makes it possible to comprehensively test each execution path to achieve complete test coverage.

Examples

The motivation of this work was to achieve C++’s approach to parametric polymorphism by utilizing C# subtype polymorphism to define the algebraic structure required by a method (akin to the built-in operations on types in C++). To illustrate how these interfaces are to be used, the following example extension methods operate over a collection of a given type and accept the minimal algebraic structure to complete the computation. The result is a single implementation of the calculation that one would expect in C++.

static public class GroupExtensions {
    // FoldL and Map are assumed to be user-defined IEnumerable<T> extension
    // methods (a left fold and a projection, akin to LINQ's Aggregate and Select).
    static public T Sum<T>(this IEnumerable<T> E, IMonoid<T> m) {
        return E
            .FoldL(m.Identity, m.Operation);
    }
}

static public class RingoidExtensions {
    static public T Count<T, R, A, M>(this IEnumerable<R> E, IRingWithUnity<T, A, M> r)
        where A : IGroup<T>
        where M : IMonoid<T> {

        return E
            .Map((x) => r.Multiplication.Identity)
            .Sum(r.Addition);
    }

    static public T Mean<T, A, M>(this IEnumerable<T> E, IDivisionRing<T, A, M> r)
        where A : IGroup<T>
        where M : IGroup<T> {

        return r.Multiplication.Operation(
            r.Multiplication.Inverse(
                E.Count(r)
            ),
            E.Sum(r.Addition)
        );
    }

    static public T Variance<T, A, M>(this IEnumerable<T> E, IDivisionRing<T, A, M> r)
        where A : IGroup<T>
        where M : IGroup<T> {

        T average = E.Mean(r);

        return r.Multiplication.Operation(
            r.Multiplication.Inverse(
                E.Count(r)
            ),
            E
                .Map((x) => r.Addition.Operation(x, r.Addition.Inverse(average)))
                .Map((x) => r.Multiplication.Operation(x, x) )
                .Sum(r.Addition)
        );
    }
}

static public class ModuleExtensions {
    static public TV Mean<TS, TV, TSR, TSRA, TSRM, TVA>(this IEnumerable<TV> E, IVectorSpace<TS, TV, TSR, TSRA, TSRM, TVA> m)
        where TSR : IField<TS, TSRA, TSRM>
        where TSRA : IAbelianGroup<TS>
        where TSRM : IAbelianGroup<TS>
        where TVA : IAbelianGroup<TV> {

        return m.Distribute(
            m.Scalar.Multiplication.Inverse(
                E.Count(m.Scalar)
            ),
            E.FoldL(
                m.Vector.Identity,
                m.Vector.Operation
            )
        );
    }
}

Conclusion

Abstract Algebra comes with a rich history and theory for dealing with different algebraic structures that are easily represented and used in the C# language to perform type agnostic computations. Several examples drawn from mathematics and computer science illustrated how the solution can be used for both value and reference types in C# and leveraged in the context of a few type agnostic computations. The main benefit of this approach is that it minimizes the repetitious coding of computations required under the ad-hoc polymorphism approach adopted by the designers of the C# language. The downside is that several structures must be defined for the types being computed over, and working with C#’s parameter constraint system can be unwieldy. While an interesting study, this solution would not be practical in a production setting under the current capabilities of the C# language.

References

Baeten, J.C.M. A Brief History of Process Algebra [pdf]. Department of Computer Science, Technische Universiteit Eindhoven. 31 Mar. 2012.

ECMA International. Standard ECMA-335 Common Language Infrastructure [pdf]. 2006.

Fokkink, Wan. Introduction to Process Algebra [pdf]. 2nd ed. Berlin: Springer-Verlag, 2007. 10 Apr. 2007. 31 Mar. 2012.

Goodman, Joshua. Semiring Parsing [pdf]. Computational Linguistics 25 (1999): 573-605. Microsoft Research. 31 Mar. 2012.

Hungerford, Thomas. Algebra. New York: Holt, Rinehart and Winston, 1974.

Ireland, Kenneth. A classical introduction to modern number theory. New York: Springer-Verlag, 1990.

Litvinov, G. I., V. P. Maslov, and A. Ya. Rodionov. Universal Algorithms, Mathematics of Semirings and Parallel Computations [pdf]. Springer Lecture Notes in Computational Science and Engineering. 7 May 2010. 31 Mar. 2012.

Mazurkiewicz, Antoni. Introduction to Trace Theory [pdf]. Rep. 19 Nov. 1996. Institute of Computer Science, Polish Academy of Sciences. 31 Mar. 2012.

Pierce, Benjamin. Types and programming languages. Cambridge, Mass: MIT Press, 2002.

Stroustrup, Bjarne. The C++ Programming Language. Reading, Mass: Addison-Wesley, 1986.

Written by lewellen

2012-04-01 at 8:00 am

Menger Sponge in C++ using OpenGL

with one comment

This past summer I was going through some old projects and came across a Menger Sponge visualizer that I wrote back in college. A Menger Sponge is a simple fractal that has infinite surface area and encloses zero volume. The sponge is constructed in successive iterations and the first four iterations are rendered in the video below.

The sponge starts as a single cube that is segmented into twenty-seven equally sized cubes. The center cube of each face, along with the cube at the center of the parent, is then discarded and the process is applied again to each of the remaining cubes. Visually, the process looks like the following:

The geometry of the process is straightforward. Starting with a cube’s origin, \vec{o}, and edge length, e, each of the children’s attributes can be calculated. Each child’s edge length is given by e_{Child} = \frac{1}{3} e_{Parent}. Each child’s origin is given by \vec{o}_{Child} = \vec{o}_{Parent} + e_{Child} \vec{c}_{Child}. The constant \vec{c}_{Child} represents a child’s position relative to its parent (e.g., (1, -1, 0)).

The following implementation isn’t particularly well written, but it accomplishes the desired end result. The point and Cube classes achieve the logic that I’ve outlined above. Cube can be thought of as a tree structure that is generated upon instantiation. The visualize() method pumps out the desired OpenGL commands to produce the faces of the cubes.

#include <GL/glut.h>

#include <math.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

//=================================================================================
//=================================================================================
class point
{
public:
	point(GLfloat x, GLfloat y, GLfloat z, point* ref = NULL);
	void visualize();

	GLfloat x,y,z;
};

point::point(GLfloat x, GLfloat y, GLfloat z, point* ref)
{
	this->x = x;
	this->y = y;
	this->z = z;

	if(ref != NULL)
	{
		this->x += ref->x;
		this->y += ref->y;
		this->z += ref->z;
	}
}

//=================================================================================
//=================================================================================

class Cube
{
public:
	Cube(point* origin, GLfloat edgelength, GLfloat depth);
	~Cube();

	void visualize();

private:
	void MakeFace(int i, int j, int k, int l);
	void ActAsContainer(point* o, GLfloat e, GLfloat d);
	void ActAsCube(point* o, GLfloat e);

	point** PointCloud;
	Cube** SubCubes;
};

Cube::Cube(point* origin, GLfloat edgelength, GLfloat depth)
{
	if(depth <= 1.0)
	{
		ActAsCube(origin, edgelength);
	} else {
		ActAsContainer(origin, edgelength, depth);
	}
}

Cube::~Cube()
{
	int i;

	if(PointCloud != NULL)
	{
		for(i = 0; i < 8; i++)
			delete PointCloud[i];
		delete[] PointCloud;
	}

	if(SubCubes != NULL)
	{
		for(i = 0; i < 20; i++)
			delete SubCubes[i];
		delete[] SubCubes;
	}
}

void Cube::ActAsCube(point* o, GLfloat e)
{
	GLfloat ne = e / 2.0;

	PointCloud = new point*[8];		// This is the actual physical cube coordinates;
	SubCubes = NULL;

	PointCloud[0] = new point( ne,  ne,  ne, o);	// net
	PointCloud[1] = new point( ne, -ne,  ne, o);	// set
	PointCloud[2] = new point(-ne,  ne,  ne, o);	// nwt
	PointCloud[3] = new point(-ne, -ne,  ne, o);	// swt
	PointCloud[4] = new point( ne,  ne, -ne, o);	// neb
	PointCloud[5] = new point( ne, -ne, -ne, o);	// seb
	PointCloud[6] = new point(-ne,  ne, -ne, o);	// nwb
	PointCloud[7] = new point(-ne, -ne, -ne, o);	// swb
}

void Cube::ActAsContainer(point* o, GLfloat e, GLfloat d)
{
	GLfloat ne = e / 3.0;

	SubCubes = new Cube*[20];	// These are the centers of each sub cube structure
	PointCloud = NULL;

	SubCubes[0] = new Cube(new point(-ne,  ne,  ne, o), ne, d-1.0);
	SubCubes[1] = new Cube(new point(0.0,  ne,  ne, o), ne, d-1.0);
	SubCubes[2] = new Cube(new point( ne,  ne,  ne, o), ne, d-1.0);
	SubCubes[3] = new Cube(new point( ne, 0.0,  ne, o), ne, d-1.0);
	SubCubes[4] = new Cube(new point( ne, -ne,  ne, o), ne, d-1.0);
	SubCubes[5] = new Cube(new point(0.0, -ne,  ne, o), ne, d-1.0);
	SubCubes[6] = new Cube(new point(-ne, -ne,  ne, o), ne, d-1.0);
	SubCubes[7] = new Cube(new point(-ne, 0.0,  ne, o), ne, d-1.0);
	
	SubCubes[8] = new Cube(new point( ne,  ne,  0.0, o), ne, d-1.0);
	SubCubes[9] = new Cube(new point( ne, -ne,  0.0, o), ne, d-1.0);
	SubCubes[10] = new Cube(new point(-ne, ne,  0.0, o), ne, d-1.0);
	SubCubes[11] = new Cube(new point(-ne, -ne,  0.0, o), ne, d-1.0);
	
	SubCubes[12] = new Cube(new point(-ne,  ne, -ne, o), ne, d-1.0);
	SubCubes[13] = new Cube(new point(0.0,  ne, -ne, o), ne, d-1.0);
	SubCubes[14] = new Cube(new point( ne,  ne, -ne, o), ne, d-1.0);
	SubCubes[15] = new Cube(new point( ne, 0.0, -ne, o), ne, d-1.0);
	SubCubes[16] = new Cube(new point( ne, -ne, -ne, o), ne, d-1.0);
	SubCubes[17] = new Cube(new point(0.0, -ne, -ne, o), ne, d-1.0);
	SubCubes[18] = new Cube(new point(-ne, -ne, -ne, o), ne, d-1.0);
	SubCubes[19] = new Cube(new point(-ne, 0.0, -ne, o), ne, d-1.0);
}

void Cube::MakeFace(int i, int j, int k, int l)
{
		glVertex3f(PointCloud[i]->x, PointCloud[i]->y, PointCloud[i]->z);
		glVertex3f(PointCloud[j]->x, PointCloud[j]->y, PointCloud[j]->z);
		glVertex3f(PointCloud[k]->x, PointCloud[k]->y, PointCloud[k]->z);
		glVertex3f(PointCloud[l]->x, PointCloud[l]->y, PointCloud[l]->z);

}

void Cube::visualize()
{
	int i;

	if(PointCloud != NULL)
	{
		glBegin(GL_QUADS);
			glColor3f(1.0,0.0,0.0);// top
			MakeFace(0,2,3,1);
			glColor3f(0.0,1.0,1.0);//bottom
			MakeFace(4,6,7,5);
			
			glColor3f(0.0,1.0,0.0);// north
			MakeFace(0,2,6,4);
			glColor3f(1.0,0.0,1.0);// south
			MakeFace(1,3,7,5);
			
			glColor3f(0.0,0.0,1.0);//east
			MakeFace(0,4,5,1);
			glColor3f(1.0,1.0,0.0);// west
			MakeFace(2,6,7,3);
		glEnd();
	}

	if(SubCubes != NULL)
	{
		for(i = 0; i < 20; i++)
		{
			SubCubes[i]->visualize();
		}
	}
}

The implementation of the program is your run-of-the-mill OpenGL boilerplate. The application takes in an argument dictating what order of sponge it should produce. It sets up the camera and positions the sponge at the origin. The sponge is left stationary, while the camera is made to orbit upon each display(). On idle(), a redisplay message is sent back to the OpenGL system in order to achieve the effect that the sponge is spinning.

//=================================================================================
//=================================================================================
Cube* MengerCube;

void idle()
{
	glutPostRedisplay();
}

void display()
{
	static GLfloat rtri = 0.0;
	
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
	gluLookAt(1.0,1.0,1.0, 0.0,0.0,0.0,0.0,1.0,0.0);
	glRotatef((rtri+=0.932), 1.0, 0.5, -1.0);

	MengerCube->visualize();

	glutSwapBuffers();
}

void reshape(int w, int h)
{
	glViewport(0,0,w,h);
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(-8.0, 8.0,-8.0, 8.0,-8.0, 8.0);
}

void init()
{
	glShadeModel(GL_SMOOTH);
	glClearColor(0.0, 0.0, 0.0, 0.0);
	glClearDepth(1.0f);
	glEnable(GL_DEPTH_TEST);
	glColor3f(1.0, 1.0, 1.0);
}

GLfloat getDepth(char* depth)
{
	int k = atoi(depth);
	if(k <= 1) return 1.0;
	else if (k>= 5) return 5.0;
	else return (GLfloat) k;
}

int main(int argc, char* argv[])
{
	GLfloat depth;
	bool viewAsPointCloud = false;
	point origin(0.0, 0.0, 0.0);

	printf("%d\n",argc);

	switch(argc)
	{
	case 2:
		depth = getDepth(argv[1]);
		break;
	default:
		depth = 2.0;
		break;
	}

	MengerCube = new Cube(&origin, 8.0, depth);

	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
	glEnable(GL_DEPTH_TEST);
	glutInitWindowSize(500,500);
	glutInitWindowPosition(0,0);
	glutCreateWindow(*argv);
	glutReshapeFunc(reshape);
	glutDisplayFunc(display);
	glutIdleFunc(idle);
	init();
	glutMainLoop();

	delete MengerCube;
}

Written by lewellen

2012-02-01 at 8:00 am