Matrix factorizations at scale: A comparison of scientific data analytics in Spark and C+MPI using three case studies
2016
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity), and CX (for data interpretability). We apply these methods to a 1.6 TB particle physics dataset, 2.2 TB and 16 TB climate modeling datasets, and a 1.1 TB bioimaging dataset. The data matrices are tall and skinny, which enables the algorithms to map conveniently onto Spark's data-parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.
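The tall-and-skinny structure mentioned above is what makes these factorizations data-parallel: for a matrix with many rows and few columns, the small Gram matrix can be accumulated as a sum over row blocks, so each partition contributes independently and only a tiny problem remains for the driver. A minimal NumPy sketch of this idea (an illustration of the general technique, not the paper's actual implementation; in Spark the per-block map and sum would be expressed with operations such as `map` and `treeReduce` over an RDD of rows):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 1000, 5                       # tall and skinny: m >> n
A = rng.standard_normal((m, n))

# "Map" step: split rows into partitions, as a cluster would hold them,
# and form each block's small n x n Gram matrix independently.
blocks = np.array_split(A, 8)

# "Reduce" step: the full Gram matrix A^T A is just the sum of the
# per-block Gram matrices, so a single reduction recovers it.
gram = sum(b.T @ b for b in blocks)

# Only this tiny n x n eigenproblem is solved centrally; its
# eigenvectors are the principal directions for PCA.
eigvals, eigvecs = np.linalg.eigh(gram)
```

The key point is that communication volume depends on n, not m, which is why these methods scale to multi-terabyte inputs on both Spark and MPI.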
Keywords:
- Matrix decomposition
- Principal component analysis
- Data mining
- Computer science
- Parallel computing
- Apache Spark
- Data analysis
- Distributed computing
- Matrix (mathematics)
- Computer cluster
- Theoretical computer science
- Data modeling
- Non-negative matrix factorization
- Linear algebra
- Interpretability
- Computational science