Camouflaged Object Detection with Feature Decomposition and Edge Reconstruction

Publication Date: 6/18/2023

Event: CVPR 2023

Reference: pp. 22046-22055, 2023

Authors: Chunming He, Tsinghua Shenzhen Graduate School; Kai Li, NEC Laboratories America, Inc.; Yachao Zhang, Tsinghua Shenzhen Graduate School; Longxiang Tang, Tsinghua Shenzhen Graduate School; Yulun Zhang, Computer Vision Lab, ETH Zurich; Zhenhua Guo, Tsinghua Shenzhen Graduate School; Xiu Li, Tsinghua Shenzhen Graduate School

Abstract: Camouflaged object detection (COD) aims to address the tough issue of identifying camouflaged objects visually blended into the surrounding backgrounds. COD is a challenging task due to the intrinsic similarity of camouflaged objects with the background, as well as their ambiguous boundaries. Existing approaches to this problem have developed various techniques to mimic the human visual system. Albeit effective in many cases, these methods still struggle when camouflaged objects are too deceptive for the visual system. In this paper, we propose the FEature Decomposition and Edge Reconstruction (FEDER) model for COD. The FEDER model addresses the intrinsic similarity of foreground and background by decomposing the features into different frequency bands using learnable wavelets. It then focuses on the most informative bands to mine subtle cues that differentiate foreground and background. To achieve this, a frequency attention module and a guidance-based feature aggregation module are developed. To combat the ambiguous boundary problem, we propose to learn an auxiliary edge reconstruction task alongside the COD task. We design an ordinary differential equation-inspired edge reconstruction module that generates exact edges. By learning the auxiliary task in conjunction with the COD task, the FEDER model can generate precise prediction maps with accurate object boundaries. Experiments show that our FEDER model significantly outperforms state-of-the-art methods at lower computational and memory cost.
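The frequency decomposition idea at the core of FEDER can be illustrated with a minimal sketch. The paper learns its wavelet filters end to end; the snippet below instead uses fixed 2D Haar filters (an assumption for illustration, not the paper's implementation) to split a single-channel feature map into one low-frequency band and three high-frequency detail bands, and to reconstruct the input from them.

```python
import numpy as np

def haar_decompose(x):
    """Single-level 2D Haar wavelet decomposition of a feature map.

    Splits x (H x W, with even H and W) into four frequency bands:
    LL (coarse, low-frequency content), LH/HL (horizontal/vertical
    high-frequency detail), and HH (diagonal detail). FEDER learns its
    wavelet filters; fixed Haar filters are used here purely to show
    the band-splitting mechanism.
    """
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency band
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Inverse of haar_decompose: reassembles the input exactly."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x
```

In a COD setting, the high-frequency bands (LH, HL, HH) carry the subtle texture and edge cues that separate a camouflaged object from its background, which is what a frequency attention module would then weight and aggregate.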

Publication Link: