Efficient Semantic Communication Through Transformer-Aided Compression
Publication Date: 5/26/2025
Event: IEEE International Conference on Machine Learning for Communication and Networking (ICMLCN 2025)
Reference: pp. 1-6, 2025
Authors: Matin Mortaheb, NEC Laboratories America, Inc., University of Maryland, College Park; Mohammad A. Khojastepour, NEC Laboratories America, Inc.; Sennur Ulukus, University of Maryland, College Park
Abstract: Transformers, known for their attention mechanisms, have proven highly effective at focusing on critical elements within complex data. This capability can be used to address time-varying channels in wireless communication systems. In this work, we introduce a channel-aware adaptive framework for semantic communication, where different regions of the image are encoded and compressed based on their semantic content. By employing vision transformers, we interpret the attention mask as a measure of the semantic content of the patches and dynamically categorize the patches to be compressed at various rates as a function of the instantaneous channel bandwidth. Our method enhances communication efficiency by adapting the encoding resolution to the content's relevance, ensuring that critical information is preserved even in highly constrained environments. We evaluate the proposed adaptive transmission framework on the TinyImageNet dataset, measuring both reconstruction quality and accuracy. The results demonstrate that our approach maintains high semantic fidelity while optimizing bandwidth, providing an effective solution for transmitting multiresolution data under limited bandwidth conditions.
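To make the idea in the abstract concrete, the following is a minimal sketch of attention-driven, bandwidth-constrained rate allocation across image patches. It assumes per-patch attention scores are already available from a vision transformer; the rate tiers, the greedy upgrade rule, and the function name allocate_patch_rates are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch only: assign a compression rate to each image patch based on
# its ViT attention score, subject to an instantaneous channel bandwidth budget.
# Rate tiers, thresholds, and names below are assumptions for illustration.
import numpy as np

def allocate_patch_rates(attention_mask, bandwidth_budget, rate_tiers=(8.0, 2.0, 0.5)):
    """Assign a bits-per-patch rate to each patch.

    attention_mask   : 1-D array of per-patch attention scores (higher = more semantic content)
    bandwidth_budget : total bits available for the current channel realization
    rate_tiers       : candidate rates (bits per patch), from highest to lowest fidelity
    """
    order = np.argsort(attention_mask)[::-1]            # most salient patches first
    rates = np.full(attention_mask.shape, rate_tiers[-1])  # start everyone at the lowest rate
    remaining = bandwidth_budget - rate_tiers[-1] * attention_mask.size
    # Greedily upgrade the most salient patches to higher-rate tiers while the budget allows.
    for tier in rate_tiers[:-1]:
        for idx in order:
            upgrade_cost = tier - rates[idx]
            if upgrade_cost > 0 and remaining >= upgrade_cost:
                rates[idx] = tier
                remaining -= upgrade_cost
    return rates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random(196)                  # e.g., 14 x 14 patches from a ViT
    rates = allocate_patch_rates(attn, bandwidth_budget=600.0)
    print("total bits used:", rates.sum())
    print("high-rate patches:", int((rates == 8.0).sum()))
```

In this sketch, shrinking bandwidth_budget automatically pushes more patches into the low-rate tiers while the highest-attention patches keep the finest encoding, which mirrors the channel-aware, content-adaptive behavior the abstract describes.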
Publication Link: