Open Vocabulary refers to a language model or system that is not restricted to a predefined set of words or phrases. In natural language processing and machine learning, an open-vocabulary model can understand and generate text using a wide range of words and expressions, including ones that never appeared in its training data. This flexibility lets the model adapt to new domains, topics, and evolving language use, making it more versatile at understanding and generating human-like text across different contexts.
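A common mechanism behind open-vocabulary text handling is subword tokenization: a word absent from the vocabulary is decomposed into smaller known units rather than mapped to a single unknown token. The sketch below illustrates the idea with a greedy longest-match tokenizer over a toy, hypothetical vocabulary; real systems (e.g. BPE or WordPiece) learn their subword inventories from data.

```python
def subword_tokenize(word, vocab):
    """Split `word` into the longest subword units found in `vocab`.

    Greedy longest-match: at each position, take the longest prefix of the
    remaining text that exists in the vocabulary; fall back to a single
    character if nothing matches.
    """
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first, then shorter ones.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            # No vocabulary entry matched: emit the bare character.
            tokens.append(word[i])
            i += 1
    return tokens

# Toy vocabulary for illustration only.
vocab = {"open", "vocab", "ulary", "un", "seen"}
print(subword_tokenize("openvocabulary", vocab))  # ['open', 'vocab', 'ulary']
print(subword_tokenize("unseen", vocab))          # ['un', 'seen']
```

Because any string can be decomposed this way, the model's effective vocabulary is open even though its subword inventory is fixed.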

Posts

Improving Pseudo Labels for Open-Vocabulary Object Detection

Recent studies show promising performance in open-vocabulary object detection (OVD) using pseudo labels (PLs) from pretrained vision and language models (VLMs). However, PLs generated by VLMs are extremely noisy due to the gap between the pretraining objective of VLMs and OVD, which blocks further advances on PLs. In this paper, we aim to reduce the noise in PLs and propose a method called online Self-training And a Split-and-fusion head for OVD (SAS-Det). First, the self-training finetunes VLMs to generate high-quality PLs while preventing forgetting of the knowledge learned in pretraining. Second, a split-and-fusion (SAF) head is designed to remove localization noise in PLs, which is usually ignored by existing methods. It also fuses complementary knowledge learned from both precise ground truth and noisy pseudo labels to boost performance. Extensive experiments demonstrate that SAS-Det is both efficient and effective. Our pseudo labeling is 3 times faster than prior methods. SAS-Det outperforms prior state-of-the-art models of the same scale by a clear margin and achieves 37.4 AP50 and 27.3 APr on novel categories of the COCO and LVIS benchmarks, respectively.
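The pseudo-labeling step this line of work builds on can be sketched as follows: score each region proposal against text embeddings of novel-category names and keep only high-confidence matches as pseudo labels. The sketch below is a minimal illustration of that general idea, not the SAS-Det implementation; the data layout, threshold, and names are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pseudo_labels(region_feats, text_embeds, threshold=0.8):
    """Assign each region its best-matching category; keep confident ones.

    region_feats: {box_tuple: feature vector} from a detector's proposals.
    text_embeds:  {category: embedding} from a VLM's text encoder.
    Returns a list of (box, category, score) triples with score >= threshold;
    low-confidence matches are dropped as likely noise.
    """
    labels = []
    for box, feat in region_feats.items():
        category, score = max(
            ((c, cosine(feat, e)) for c, e in text_embeds.items()),
            key=lambda cs: cs[1],
        )
        if score >= threshold:
            labels.append((box, category, score))
    return labels

# Illustrative 2-D features; a real VLM would produce high-dimensional ones.
regions = {(10, 10, 50, 50): [1.0, 0.0], (60, 60, 90, 90): [0.5, 0.5]}
texts = {"zebra": [1.0, 0.0], "sofa": [0.0, 1.0]}
print(pseudo_labels(regions, texts))  # only the confident zebra box survives
```

A fixed threshold like this is exactly the kind of crude noise filter that motivates learned alternatives such as the paper's self-training and SAF head.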