Home
The package turned up in this Stack Overflow question on parallel k-means in R: https://stackoverflow.com/questions/20416944/parallel-k-means-in-r
The R package is on CRAN: https://cran.r-project.org/web/packages/knor/index.html
It is based on this research paper: https://arxiv.org/abs/1606.08905
From the abstract:
k-means is one of the most influential and utilized machine learning algorithms. Its computation limits the performance and scalability of many statistical analysis and machine learning tasks. We rethink and optimize k-means in terms of modern NUMA architectures to develop a novel parallelization scheme that delays and minimizes synchronization barriers. The k-means NUMA Optimized Routine (knor) library has (i) in-memory (knori), (ii) distributed memory (knord), and (iii) semi-external memory (knors) modules that radically improve the performance of k-means for varying memory and hardware budgets. knori boosts performance for single machine datasets by an order of magnitude or more. knors improves the scalability of k-means on a memory budget using SSDs. knors scales to billions of points on a single machine, using a fraction of the resources that distributed in-memory systems require. knord retains knori's performance characteristics, while scaling in-memory through distributed computation in the cloud. knor modifies Elkan's triangle inequality pruning algorithm such that we utilize it on billion-point datasets without the significant memory overhead of the original algorithm. We demonstrate knor outperforms distributed commercial products like H2O, Turi (formerly Dato, GraphLab) and Spark's MLlib by more than an order of magnitude for datasets of 10^7 to 10^9 points.
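A minimal sketch of how the in-memory module (knori) might be called from R is below. The function name Kmeans() and the nthread/init arguments are assumptions on my part; check the CRAN manual for the actual interface.

```r
# Minimal sketch: in-memory k-means (knori) via the knor R package.
# Assumption: the package exposes Kmeans() with a thread-count argument
# and kmeans++ initialization; see the CRAN manual for the real signature.
library(knor)

set.seed(42)
x <- matrix(rnorm(100000 * 16), ncol = 16)  # 100k points, 16 dimensions

fit <- Kmeans(x, centers = 8, nthread = 4, init = "kmeanspp")
str(fit)  # inspect centers, per-point cluster assignments, iteration count
```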
The original acceleration paper ("Making k-means Even Faster", Hamerly) can be found here: https://www.researchgate.net/publication/220906984_Making_k-means_Even_Faster
From the abstract:
The k-means algorithm is widely used for clustering, compressing, and summarizing vector data. In this paper, we propose a new acceleration for exact k-means that gives the same answer, but is much faster in practice. Like Elkan's accelerated algorithm (8), our algorithm avoids distance computations using distance bounds and the triangle inequality. Our algorithm uses one novel lower bound for point-center distances, which allows it to eliminate the innermost k-means loop 80% of the time or more in our experiments. On datasets of low and medium dimension (e.g. up to 50 dimensions), our algorithm is much faster than other methods, including methods based on low-dimensional indexes, such as k-d trees. Other advantages are that it is very simple to implement and it has a very small memory overhead, much smaller than other accelerated algorithms.
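To make the pruning concrete, here is a small sketch (in R; not the authors' implementation, and the variable names are mine) of the per-point test used to skip the innermost loop over centers: each point keeps an upper bound on the distance to its assigned center and a lower bound on the distance to every other center, and when the upper bound cannot exceed the lower bound the assignment provably cannot change.

```r
# Sketch of the triangle-inequality pruning test for one point (illustrative only).
# x       : one data point (numeric vector)
# centers : k x d matrix of current centers
# a       : index of the center currently assigned to x
# u       : upper bound on dist(x, centers[a, ])
# l       : lower bound on dist(x, any center other than a)
# s       : half the distance from centers[a, ] to its nearest other center
assign_point <- function(x, centers, a, u, l, s) {
  if (u <= max(s, l)) {
    return(a)                             # bounds prove the assignment cannot change
  }
  u <- sqrt(sum((x - centers[a, ])^2))    # tighten the upper bound with one exact distance
  if (u <= max(s, l)) {
    return(a)                             # still safe: skip the inner loop over centers
  }
  d <- sqrt(rowSums(sweep(centers, 2, x)^2))
  which.min(d)                            # fall back to scanning all centers
}
```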