The Vision Transformer (ViT) is an application of the Transformer architecture to computer vision. It extends the Transformer model to process and analyze images, challenging the conventional reliance on convolutional neural networks (CNNs) for image-related tasks.

In a Vision Transformer, an image is divided into fixed-size patches, and these patches are treated as a sequence of tokens. The Transformer processes this token sequence to capture spatial relationships and dependencies within the image.
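
To make the patching step concrete, here is a minimal sketch of how an image can be cut into fixed-size patches and flattened into a token sequence. It assumes PyTorch; the function name image_to_patch_tokens and the 16×16 patch size are illustrative, not taken from any particular implementation.

```python
import torch

def image_to_patch_tokens(images, patch_size=16):
    """Split a batch of images into flattened, fixed-size patches (tokens).

    images: tensor of shape (batch, channels, height, width); height and width
    are assumed to be divisible by patch_size.
    Returns a tensor of shape (batch, num_patches, channels * patch_size**2).
    """
    b, c, h, w = images.shape
    # Cut the image into a grid of (h // p) x (w // p) non-overlapping patches.
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    # (b, c, h//p, w//p, p, p) -> (b, num_patches, c * p * p)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * patch_size * patch_size)
    return patches

# Example: a 224x224 RGB image with 16x16 patches yields 196 tokens of length 768,
# which a learned linear projection then maps to the Transformer's embedding size.
tokens = image_to_patch_tokens(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```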

ViT has demonstrated competitive performance on image classification tasks and scales more readily to larger image sizes than traditional CNNs.

Posts

FactionFormer: Context-Driven Collaborative Vision Transformer Models for Edge Intelligence

Edge Intelligence has received attention in recent times for its potential to improve responsiveness, reduce the cost of data transmission, enhance security and privacy, and enable autonomous decisions by edge devices. However, edge devices lack the power and compute resources necessary to execute most AI models. In this paper, we present FactionFormer, a novel method to deploy resource-intensive deep-learning models, such as vision transformers (ViT), on resource-constrained edge devices. Our method is based on a key observation: edge devices are often deployed in settings where they encounter only a subset of the classes that the resource-intensive AI model is trained to classify, and this subset changes across deployments. Therefore, we automatically identify this subset as a faction, devise on-the-fly a bespoke resource-efficient ViT called a modelette for the faction, and set up an efficient processing pipeline consisting of a modelette on the device, a wireless network such as 5G, and the resource-intensive ViT model on an edge server, all of which work collaboratively to do the inference. For several ViT models pre-trained on benchmark datasets, FactionFormer’s modelettes are up to 4× smaller than the corresponding baseline models in terms of the number of parameters, and they can infer up to 2.5× faster than the baseline setup where every input is processed by the resource-intensive ViT on the edge server. Our work is the first of its kind to propose a device-edge collaborative inference framework where bespoke deep-learning models for the device are automatically devised on-the-fly for the most frequently encountered subset of classes.
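
To illustrate the general shape of a device-edge collaborative pipeline like the one the abstract describes, here is a minimal Python sketch. The abstract does not specify how the modelette and the edge-server ViT divide the work, so the confidence-threshold fallback policy below is purely an assumption, and all names (collaborative_infer, Prediction, confidence_threshold) are hypothetical rather than FactionFormer's actual API.

```python
# Illustrative only: the collaboration policy (confidence-based fallback) is an
# assumption, not the method described in the paper; all identifiers are hypothetical.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: int
    confidence: float

def collaborative_infer(image, modelette, edge_server_vit, confidence_threshold=0.9):
    """Run the small on-device model first; defer to the full ViT on the edge
    server only when the local prediction is not confident enough."""
    local: Prediction = modelette(image)      # small, faction-specific model on the device
    if local.confidence >= confidence_threshold:
        return local                          # no network round trip needed
    # Otherwise ship the input over the wireless link (e.g. 5G) to the edge server.
    return edge_server_vit(image)
```

The design intuition matches the abstract's key observation: if the device mostly sees a small, stable subset of classes, a compact local model can handle the common cases cheaply, and the expensive edge-server ViT is consulted only when needed.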