Machine Learning Parallelization

Large-scale data analytics is compute intensive and requires both parallelized algorithms and optimized data flow. We develop various types of parallelization for multi-core systems and clusters, and we also work with heterogeneous systems that include GPUs or vector processors. MALT is one of our projects; it enables parallelization across a large number of processors through virtual shared memory. MALT provides abstractions for fine-grained in-memory updates using one-sided RDMA, limiting data-movement costs during incremental model updates. Developers specify the dataflow while MALT takes care of communication and representation optimizations. MALT supports ML applications written in C, C++, and Lua, including those based on SVMs, matrix factorization, and deep learning. Besides speedup, MALT also provides fault tolerance and guarantees network efficiency. We are implementing new distributed optimization algorithms on MALT, such as RWDDA, as well as support for multiple GPUs.
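The one-sided update pattern is the core idea: each model replica pushes its parameters directly into its peers' receive buffers and later folds in whatever has arrived, without involving the peers' CPUs. Below is a minimal single-process C++ sketch of that pattern, using threads and locks in place of RDMA; the names (SharedVector, scatter, gather_average), the buffer layout, and the toy objective are illustrative assumptions, not MALT's actual API.

```cpp
// Single-process sketch of a MALT-style data-parallel training loop.
// Each "replica" takes gradient steps on its own model copy, pushes its
// parameters into per-peer receive buffers (mimicking one-sided RDMA
// writes), and periodically averages in what peers have pushed.
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

constexpr int kReplicas = 4;
constexpr int kDim = 8;
constexpr int kIters = 100;

struct SharedVector {
    std::vector<double> params;                 // local model copy
    std::vector<std::vector<double>> recv;      // one receive buffer per peer
    std::mutex mu;                              // stands in for RDMA write ordering
    SharedVector()
        : params(kDim, 0.0),
          recv(kReplicas, std::vector<double>(kDim, 0.0)) {}
};

SharedVector replicas[kReplicas];

// "One-sided" push: write my parameters straight into every peer's
// receive buffer; the peer does no work on arrival.
void scatter(int me) {
    for (int p = 0; p < kReplicas; ++p) {
        if (p == me) continue;
        std::lock_guard<std::mutex> g(replicas[p].mu);
        replicas[p].recv[me] = replicas[me].params;
    }
}

// Fold in whatever peers have pushed so far by averaging.
void gather_average(int me) {
    std::lock_guard<std::mutex> g(replicas[me].mu);
    for (int d = 0; d < kDim; ++d) {
        double sum = replicas[me].params[d];
        for (int p = 0; p < kReplicas; ++p)
            if (p != me) sum += replicas[me].recv[p][d];
        replicas[me].params[d] = sum / kReplicas;
    }
}

void train(int me) {
    for (int it = 0; it < kIters; ++it) {
        // Toy gradient step on (x - 1)^2 per coordinate.
        for (int d = 0; d < kDim; ++d)
            replicas[me].params[d] -= 0.1 * 2.0 * (replicas[me].params[d] - 1.0);
        scatter(me);         // push update to peers
        gather_average(me);  // merge peers' updates into local model
    }
}

int main() {
    std::vector<std::thread> ts;
    for (int r = 0; r < kReplicas; ++r) ts.emplace_back(train, r);
    for (auto& t : ts) t.join();
    std::printf("replica 0, param 0 = %f\n", replicas[0].params[0]);
    return 0;
}
```

In the real system the receive buffers would live in RDMA-registered memory, so the push completes without any receiver-side processing; the locks above merely emulate that ordering within one process.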

NIPS LearningSys 2015 (pdf)
