A Perspective on Deep Vision Performance with Standard Image and Video Codecs
Resource-constrained hardware such as edge devices or cell phones often relies on cloud servers to provide the computational resources required for inference with deep vision models. However, transferring image and video data from an edge or mobile device to a cloud server requires coding to deal with network constraints. The use of standardized codecs, such as JPEG or H.264, is prevalent and required to ensure interoperability. This paper examines the implications of employing standardized codecs within deep vision pipelines. We find that JPEG and H.264 coding significantly deteriorate accuracy across a broad range of vision tasks and models. For instance, strong compression reduces semantic segmentation accuracy by more than 80% in mIoU. In contrast to previous findings, our analysis extends beyond image and action classification to localization and dense prediction tasks, thus providing a more comprehensive perspective.