A modelette is a bespoke, resource-efficient version of a deep learning model, tailored for edge devices in edge intelligence systems. It is derived dynamically from the subset of classes an edge device actually encounters during its deployment.

The term “modelette” is coined to describe these customized models, which are designed to be smaller and lighter than the resource-intensive deep learning models, such as vision transformers (ViT), typically used for classification tasks. Because a modelette only needs to handle the specific subset of classes its edge device encounters, it can perform inference efficiently despite the device's limited computational resources.
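As a concrete illustration, the sketch below derives a toy modelette from a pre-trained torchvision ViT by shrinking its classification head to a hypothetical faction of classes. This is only a minimal sketch under assumed details: the class indices in `FACTION` are invented for illustration, and FactionFormer's actual modelettes also shrink the backbone itself (which is where most of the up-to-4× parameter savings come from), not just the head.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Hypothetical faction: the subset of global class indices this edge
# device actually encounters in its deployment (assumed for illustration).
FACTION = [3, 17, 42, 88]

def make_faction_head(full_model: nn.Module, faction: list) -> nn.Module:
    """Shrink the classifier head so it scores only the faction's classes.

    A minimal sketch, not FactionFormer's modelette construction: the
    paper's modelettes also shrink the ViT backbone itself.
    """
    head = full_model.heads.head                  # Linear(768, 1000) in vit_b_16
    small = nn.Linear(head.in_features, len(faction))
    with torch.no_grad():
        small.weight.copy_(head.weight[faction])  # keep only the faction's rows
        small.bias.copy_(head.bias[faction])
    full_model.heads.head = small
    return full_model

modelette = make_faction_head(vit_b_16(weights=ViT_B_16_Weights.DEFAULT), FACTION)
```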

FactionFormer: Context-Driven Collaborative Vision Transformer Models for Edge Intelligence

Edge intelligence has received attention in recent times for its potential to improve responsiveness, reduce the cost of data transmission, enhance security and privacy, and enable autonomous decisions by edge devices. However, edge devices lack the power and compute resources necessary to execute most AI models. In this paper, we present FactionFormer, a novel method to deploy resource-intensive deep learning models, such as vision transformers (ViT), on resource-constrained edge devices. Our method is based on a key observation: edge devices are often deployed in settings where they encounter only a subset of the classes that the resource-intensive AI model is trained to classify, and this subset changes across deployments. Therefore, we automatically identify this subset as a faction, devise on the fly a bespoke resource-efficient ViT called a modelette for the faction, and set up an efficient processing pipeline consisting of the modelette on the device, a wireless network such as 5G, and the resource-intensive ViT model on an edge server, all of which work collaboratively to do the inference. For several ViT models pre-trained on benchmark datasets, FactionFormer’s modelettes are up to 4× smaller than the corresponding baseline models in terms of the number of parameters, and they can infer up to 2.5× faster than the baseline setup where every input is processed by the resource-intensive ViT on the edge server. Our work is the first of its kind to propose a device-edge collaborative inference framework where bespoke deep learning models for the device are automatically devised on the fly for the most frequently encountered subset of classes.
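To make the collaborative pipeline concrete, here is a minimal sketch of how the device-side handoff might look. The softmax-confidence rule, the `CONF_THRESHOLD` value, and the `query_edge_server` callable are all assumptions introduced for illustration; the abstract states only that the on-device modelette, the wireless link, and the server-side ViT cooperate on inference, without specifying the exact handoff criterion.

```python
import torch
import torch.nn.functional as F

CONF_THRESHOLD = 0.8  # assumed cutoff; not a value reported in the paper

def collaborative_infer(x, modelette, faction, query_edge_server):
    """Device-edge collaborative inference (illustrative sketch).

    x is a single preprocessed image batch of size 1. The handoff rule
    and query_edge_server (a stand-in for the 5G round trip to the full
    ViT on the edge server) are assumptions, not FactionFormer's spec.
    """
    with torch.no_grad():
        probs = F.softmax(modelette(x), dim=-1)    # cheap on-device pass
    conf, local_idx = probs.max(dim=-1)
    if conf.item() >= CONF_THRESHOLD:
        return faction[local_idx.item()]           # map back to a global class id
    # Low confidence: the input likely falls outside the faction, so ship
    # it over the wireless link to the resource-intensive ViT on the server.
    return query_edge_server(x)
```

In this setup the common case (an input from the faction) is resolved entirely on the device, and only out-of-faction or ambiguous inputs pay the network round trip, which is consistent with the up-to-2.5× inference speedup the abstract reports over sending every input to the edge server.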