Publication Date: 6/20/2022
Event: SMARTCOMP 2022
Reference: pp. 70-77, 2022
Authors: Sumaiya Tabassum Nimi, University of Missouri-Kansas City, Missouri; Md Adnan Arefeen, University of Missouri-Kansas City, Missouri; Md Yusuf Sarwar Uddin, University of Missouri-Kansas City, Missouri; Biplob Debnath, NEC Laboratories America, Inc.; Srimat T. Chakradhar, NEC Laboratories America, Inc.
Abstract: The design of multitasking deep learning models has mostly focused on improving the accuracy of the constituent tasks, but the challenges of efficiently deploying such models in a device-edge collaborative setup (common in 5G deployments) have not been investigated. Towards this end, in this paper, we propose an approach called Chimera for training (done offline) and deployment (done online) of multitasking deep learning models that are splittable across the device and the edge. In the offline phase, we train our multitasking setup such that features from a pre-trained model for one of the tasks (called the Primary task) are extracted, and task-specific sub-models are trained to generate the other (Secondary) tasks' outputs through a knowledge-distillation-like training strategy that mimics the outputs of pre-trained models for those tasks. The task-specific sub-models are designed to be significantly more lightweight than the original pre-trained models for the Secondary tasks. Once the sub-models are trained, during deployment, for a given deployment context, characterized by its configurations, we search for the optimal (in terms of both model performance and cost) deployment strategy for the generated multitasking model by finding one or more suitable layers at which to split the model, so that inference workloads are distributed between the device and the edge server and inference is performed collaboratively. Extensive experiments on benchmark computer vision tasks demonstrate that Chimera generates splittable multitasking models that are at least ~3x more parameter-efficient than existing such models, and that end-to-end device-edge collaborative inference becomes ~1.35x faster with our choice of context-aware splitting decisions.
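The context-aware split-point search described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a hypothetical cost model that assumes per-layer device and edge latencies and per-layer feature sizes have already been profiled, and picks the single split layer minimizing device compute + network transfer + edge compute time.

```python
def best_split(device_ms, edge_ms, feat_kb, bw_kbps):
    """Pick the split layer minimizing end-to-end latency (illustrative sketch).

    device_ms[i] / edge_ms[i] : profiled latency of layer i on the device / edge (ms)
    feat_kb[k] : size of the tensor sent over the network if we split after layer k
                 (feat_kb[0] is the raw input; feat_kb[n] is the final, small output)
    bw_kbps    : available uplink bandwidth (KB/s)

    Returns (split_index, total_latency_ms); split_index 0 means everything runs
    on the edge, split_index n means everything runs on the device.
    """
    n = len(device_ms)
    costs = []
    for k in range(n + 1):
        transfer_ms = feat_kb[k] / bw_kbps * 1000.0
        costs.append(sum(device_ms[:k]) + transfer_ms + sum(edge_ms[k:]))
    best_k = min(range(n + 1), key=costs.__getitem__)
    return best_k, costs[best_k]
```

With a slow uplink, the search pushes computation onto the device (to shrink the transferred feature); with a fast uplink, offloading the raw input to the faster edge wins. This captures why the paper's splitting decision must be deployment-context-aware.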
Publication Link: https://ieeexplore.ieee.org/document/9821039