Weakly-supervised Concealed Object Segmentation with SAM-based Pseudo Labeling and Multi-scale Feature Grouping

Publication Date: 12/10/2023

Event: NeurIPS 2023

Reference: pp. 1-23, 2023

Authors: Chunming He, Tsinghua University; Kai Li, NEC Laboratories America, Inc.; Yachao Zhang, Tsinghua University; Guoxia Xu, Nanjing University of Posts and Telecommunications; Longxiang Tang, Tsinghua University; Yulun Zhang, ETH Zürich; Zhenhua Guo, Tianyi Traffic Technology; Xiu Li, Tsinghua University

Abstract: Weakly-Supervised Concealed Object Segmentation (WSCOS) aims to segment objects well blended with surrounding environments using sparsely-annotated data for model training. It remains a challenging task since (1) it is hard to distinguish concealed objects from the background due to the intrinsic similarity and (2) the sparsely-annotated training data only provide weak supervision for model learning. In this paper, we propose a new WSCOS method to address these two challenges. To tackle the intrinsic similarity challenge, we design a multi-scale feature grouping module that first groups features at different granularities and then aggregates these grouping results. By grouping similar features together, it encourages segmentation coherence, helping obtain complete segmentation results for both single- and multiple-object images. For the weak supervision challenge, we utilize the recently-proposed vision foundation model, “Segment Anything Model (SAM)”, and use the provided sparse annotations as prompts to generate segmentation masks, which are used to train the model. To alleviate the impact of low-quality segmentation masks, we further propose a series of strategies, including multi-augmentation result ensemble, entropy-based pixel-level weighting, and entropy-based image-level selection. These strategies help provide more reliable supervision to train the segmentation model. We verify the effectiveness of our method on various WSCOS tasks, and experiments demonstrate that our method achieves state-of-the-art performance on these tasks.
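The pseudo-label reliability strategies named in the abstract (multi-augmentation ensemble, entropy-based pixel-level weighting, entropy-based image-level selection) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: all function names, the normalization choice, and the threshold `tau` are assumptions.

```python
import numpy as np

def pixel_entropy(p, eps=1e-8):
    # Binary entropy (in nats) of per-pixel foreground probability.
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def ensemble_masks(prob_maps):
    # Multi-augmentation ensemble: average probability maps obtained
    # from SAM on differently augmented views of the same image
    # (after mapping them back to the original image geometry).
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

def pixel_weights(p):
    # Entropy-based pixel-level weighting: confident pixels get weight
    # near 1, maximally uncertain pixels (p = 0.5) get weight 0.
    return 1.0 - pixel_entropy(p) / np.log(2.0)

def keep_image(p, tau=0.3):
    # Entropy-based image-level selection: discard a pseudo-mask whose
    # mean pixel entropy exceeds tau (tau is a hypothetical value).
    return float(pixel_entropy(p).mean()) < tau
```

The resulting per-pixel weights would multiply the segmentation loss, while rejected images would simply be excluded from that training round.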

Publication Link: https://proceedings.neurips.cc/paper_files/paper/2023/file/61aa557643ae8709b6a4f41140b2234a-Paper-Conference.pdf