An Automated Approach to Accelerate DNNs on Edge Devices

2021
Deployment of Deep Neural Networks (DNNs) on edge devices can significantly increase the utility of DNNs for a variety of applications. However, executing DNN models on edge devices remains a major challenge, as the heavy computation and memory bandwidth requirements of such models limit their adoption. Highly optimized code for DNN model execution can enable many more use cases than are currently possible, but current strategies still rely on manual optimization for efficient resource utilization. This is not only cumbersome but also requires a high degree of expert intervention in the rapidly changing DNN model landscape. In this work, we provide an automated way of optimizing Convolutional Neural Network (CNN) models using a Deep Reinforcement Learning (DRL) algorithm. Experiments with our DRL technique demonstrate 1.85×, 1.58×, and 1.64× speedups in execution time for the MobileNetV1, MobileNetV2, and EfficientNet-Lite0 CNN models, respectively, on mobile CPUs.
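The abstract does not describe the optimization procedure in detail. Purely as an illustration of the general idea of learning an efficient execution schedule with reinforcement learning, the sketch below uses a tiny REINFORCE-style agent over hypothetical scheduling knobs (tile sizes, unroll factor, vectorization) for a single convolution layer; the latency model, knob names, and values are assumptions, not the paper's method, and a real system would compile each candidate schedule and time it on the target mobile CPU.

```python
# Hypothetical sketch (not the authors' code): a REINFORCE-style agent that
# learns a distribution over discrete scheduling knobs for one conv layer.
# Reward is the negative of a simulated latency; on a real device the
# candidate schedule would be compiled and measured instead.
import numpy as np

# Illustrative search space: each knob has a small set of discrete choices.
SEARCH_SPACE = {
    "tile_h": [1, 2, 4, 8],
    "tile_w": [1, 2, 4, 8],
    "unroll": [1, 2, 4],
    "vectorize": [0, 1],
}

rng = np.random.default_rng(0)

def simulated_latency(cfg):
    """Stand-in for on-device measurement of a compiled candidate schedule."""
    lat = 10.0 / np.sqrt(cfg["tile_h"] * cfg["tile_w"])
    lat /= cfg["unroll"] ** 0.25
    if cfg["vectorize"]:
        lat *= 0.7
    return lat + rng.normal(0, 0.05)  # measurement noise

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One learnable logit vector per knob (an independent categorical policy).
logits = {k: np.zeros(len(v)) for k, v in SEARCH_SPACE.items()}
lr, baseline = 0.1, None

for step in range(500):
    # Sample a configuration from the current policy.
    idx = {k: rng.choice(len(v), p=softmax(logits[k])) for k, v in SEARCH_SPACE.items()}
    cfg = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
    reward = -simulated_latency(cfg)  # faster schedule -> higher reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline
    # REINFORCE update: push probability mass toward better-than-average samples.
    for k, i in idx.items():
        grad = -softmax(logits[k])
        grad[i] += 1.0
        logits[k] += lr * advantage * grad

best = {k: SEARCH_SPACE[k][int(np.argmax(logits[k]))] for k in SEARCH_SPACE}
print("learned schedule:", best, "simulated latency:", round(simulated_latency(best), 3))
```

In practice the reward would come from on-device timing of each compiled candidate, and the learned schedule would replace a hand-tuned kernel; the per-layer speedups would then compose into the end-to-end model speedups reported above.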