Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images!
Publication Date: 6/18/2023
Event: CVPR 2023
Reference: pp. 15005-15015
Authors: Zaid Khan, Northeastern University, NEC Laboratories America, Inc.; Vijay Kumar B G, NEC Laboratories America, Inc.; Samuel Schulter, NEC Laboratories America, Inc.; Xiang Yu, NEC Laboratories America, Inc.; Yun Fu, Northeastern University; Manmohan Chandraker, NEC Laboratories America, Inc.
Abstract: Finetuning a large vision-language model (VLM) on a target dataset after large-scale pretraining is a dominant paradigm in visual question answering (VQA). Datasets for specialized tasks such as knowledge-based VQA or VQA in non-natural image domains are orders of magnitude smaller than those for general-purpose VQA. While collecting additional labels for specialized tasks or domains can be challenging, unlabeled images are often available. We introduce SelTDA (Self-Taught Data Augmentation), a strategy for finetuning large VLMs on small-scale VQA datasets. SelTDA uses the VLM and the target dataset to build a teacher model that can generate question-answer pseudolabels directly conditioned on an image alone, allowing us to pseudolabel unlabeled images. SelTDA then finetunes the initial VLM on the original dataset augmented with the freshly pseudolabeled images. We describe a series of experiments showing that our self-taught data augmentation increases robustness to adversarially searched questions, counterfactual examples, and rephrasings; improves domain generalization; and results in greater retention of numerical reasoning skills. The proposed strategy requires no additional annotations or architectural modifications and is compatible with any modern encoder-decoder multimodal transformer. Code is available at https://github.com/codezakh/SelTDA
Publication Link: https://openaccess.thecvf.com/content/CVPR2023/html/Khan_Q_How_To_Specialize_Large_Vision-Language_Models_to_Data-Scarce_VQA_CVPR_2023_paper.html
Additional Publication Link: https://arxiv.org/pdf/2306.03932.pdf
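The abstract describes a three-step pipeline: (1) finetune the VLM as a teacher that generates question-answer pairs conditioned on an image alone, (2) use the teacher to pseudolabel unlabeled images, and (3) finetune the original VLM on the labeled data augmented with the pseudolabels. The sketch below illustrates that flow only; all names (VQAExample, train_teacher, generate_qa, finetune_vqa) are hypothetical placeholders and not the authors' API. Refer to https://github.com/codezakh/SelTDA for the official implementation.

```python
"""Minimal sketch of the SelTDA pipeline as summarized in the abstract.

Assumptions: `vlm` is an encoder-decoder multimodal transformer object
exposing hypothetical `generate_qa` and `finetune_vqa` methods; the real
training details live in the authors' repository.
"""
from dataclasses import dataclass
from typing import List


@dataclass
class VQAExample:
    image_path: str
    question: str
    answer: str


def train_teacher(vlm, labeled: List[VQAExample]):
    """Step 1: finetune the VLM to emit 'Q: ... A: ...' text from the image alone."""
    # Placeholder: captioning-style finetuning with the concatenated QA pair
    # as the target sequence, conditioned only on the image.
    return vlm


def pseudolabel(teacher, unlabeled_images: List[str]) -> List[VQAExample]:
    """Step 2: sample question-answer pseudolabels for unlabeled images."""
    pseudo = []
    for image_path in unlabeled_images:
        question, answer = teacher.generate_qa(image_path)  # hypothetical call
        pseudo.append(VQAExample(image_path, question, answer))
    return pseudo


def seltda(vlm, labeled: List[VQAExample], unlabeled_images: List[str]):
    """Step 3: finetune the initial VLM on the original dataset plus pseudolabels."""
    teacher = train_teacher(vlm, labeled)
    augmented = labeled + pseudolabel(teacher, unlabeled_images)
    vlm.finetune_vqa(augmented)  # hypothetical call: standard VQA finetuning
    return vlm
```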