Baechi: fast device placement of machine learning graphs.

2020
Machine learning graphs (or models) can be challenging or impossible to train when devices have limited memory or models are large. Today, splitting the model graph across multiple devices largely relies on learning-based approaches to generate this placement. While learning-based model parallelism yields models that train fast on data (i.e., with low step times), it is itself time-consuming, taking many hours or days to create a placement plan of operators on devices. We present the Baechi system, in which we adopt an algorithmic approach to the placement problem for running machine learning training graphs on a small cluster of memory-constrained devices. We implemented Baechi so that it works modularly with TensorFlow. Our experimental results using GPUs show that Baechi generates placement plans 654X--206KX faster than today's learning-based approaches, and the placed model's step time is at most 6.2% higher than that of expert-based placements.
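The flavor of the algorithmic approach can be illustrated with a minimal sketch of a memory-constrained greedy placement, in the spirit of classical list-scheduling heuristics. This is a simplified illustration, not Baechi's actual algorithm: operators are visited in topological order, and each is assigned to the device that becomes free earliest among those with enough remaining memory.

```python
def place(ops, devices):
    """Greedy memory-constrained placement (illustrative sketch only).

    ops: list of (name, memory_needed, compute_cost) in topological order.
    devices: dict mapping device name -> memory capacity.
    Returns a dict mapping operator name -> device name.
    """
    free = dict(devices)                 # remaining memory per device
    ready = {d: 0.0 for d in devices}    # time at which each device frees up
    placement = {}
    for name, mem, cost in ops:
        # Only devices with enough free memory are candidates.
        candidates = [d for d in devices if free[d] >= mem]
        if not candidates:
            raise MemoryError(f"no device can hold operator {name}")
        # Pick the candidate that becomes available earliest.
        d = min(candidates, key=lambda dev: ready[dev])
        placement[name] = d
        free[d] -= mem
        ready[d] += cost
    return placement


# Hypothetical example: three operators on two 6-unit-memory GPUs.
ops = [("conv1", 4, 1.0), ("conv2", 4, 1.0), ("fc", 2, 0.5)]
devices = {"gpu0": 6, "gpu1": 6}
print(place(ops, devices))
```

Because `conv1` consumes most of `gpu0`'s memory, the memory constraint forces `conv2` onto `gpu1` even though both devices would otherwise be equally attractive, which is the core tension a memory-aware placement algorithm must resolve.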