Publication Date: 10/2/2023
Event: ICCV 2023
Reference: pp. 11953-11962, 2023
Authors: Samuel Schulter, NEC Laboratories America, Inc.; Vijay Kumar B G, NEC Laboratories America, Inc.; Yumin Suh, NEC Laboratories America, Inc.; Konstantinos M. Dafnis, Rutgers University; Zhixing Zhang, Rutgers University, NEC Laboratories America, Inc.; Shiyu Zhao, Rutgers University, NEC Laboratories America, Inc.; Dimitris Metaxas, Rutgers University
Abstract: Language-based object detection is a promising direction towards building a natural interface for describing objects in images that goes far beyond plain category names. While recent methods show great progress in that direction, proper evaluation is lacking. With OmniLabel, we propose a novel task definition, dataset, and evaluation metric. The task subsumes standard and open-vocabulary detection as well as referring expressions. With more than 30K unique object descriptions on over 25K images, OmniLabel provides a challenging benchmark with diverse and complex object descriptions in a naturally open-vocabulary setting. Moreover, a key difference from existing benchmarks is that our object descriptions can refer to one, multiple, or even no objects, hence providing negative examples in free-form text. The proposed evaluation handles the large label space and judges performance via a modified average precision metric, which we validate by evaluating strong language-based baselines. OmniLabel indeed provides a challenging test bed for future research on language-based detection.
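The abstract notes that, unlike standard detection benchmarks, a free-form description in OmniLabel may match zero, one, or many objects in an image. The following is a minimal sketch of how such per-description matching could work; it is NOT the official OmniLabel metric, only a simplified greedy IoU matching over one (image, description) pair, with all function names and thresholds chosen here for illustration.

```python
# Simplified illustration (not the official OmniLabel evaluation): match a
# detector's predictions for one free-form description against ground truth,
# where the description may refer to zero, one, or many boxes.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_description(gt_boxes, predictions, iou_thresh=0.5):
    """Greedily match predictions (list of (box, score)) to ground-truth
    boxes for one (image, description) pair. If gt_boxes is empty -- a
    negative description -- every prediction counts as a false positive.
    Returns (true_positives, false_positives, false_negatives)."""
    matched = set()
    tp = fp = 0
    # Process predictions in order of decreasing confidence.
    for box, _score in sorted(predictions, key=lambda p: -p[1]):
        best, best_iou = None, iou_thresh
        for i, gt in enumerate(gt_boxes):
            if i in matched:
                continue
            v = iou(box, gt)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched)
    return tp, fp, fn
```

Counts like these, aggregated across descriptions and score thresholds, are what an average-precision-style metric summarizes; the paper's actual metric additionally handles the large label space in ways this sketch does not attempt to reproduce.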