Retrieval, Analogy, and Composition: A Framework for Compositional Generalization in Image Captioning

Publication Date: November 7, 2021

Event: EMNLP 2021

Reference: pp. 1990–2000, 2021

Authors: Zhan Shi, Queen’s University; Hui Liu, Queen’s University; Martin Renqiang Min, NEC Laboratories America, Inc.; Christopher Malon, NEC Laboratories America, Inc.; Li Erran Li, Amazon; Xiaodan Zhu, Queen’s University

Abstract: Image captioning systems are expected to combine individual concepts when describing scenes with concept combinations that are not observed during training. Despite significant progress in image captioning driven by the autoregressive generation framework, current approaches fail to generalize well to novel concept combinations. We propose a new framework that revolves around probing several similar image caption training instances (retrieval), performing analogical reasoning over relevant entities in the retrieved prototypes (analogy), and enhancing the generation process with the reasoning outcomes (composition). Our method augments the generation model by referring to neighboring instances in the training set to produce novel concept combinations in generated captions. We conduct experiments on widely used image captioning benchmarks. The proposed models achieve substantial improvements over the compared baselines on both composition-related evaluation metrics and conventional image captioning metrics.
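To make the three stages concrete, here is a minimal toy sketch of a retrieve → analogize → compose loop. Everything in it is an assumption for illustration: the paper's actual framework operates over learned image/caption representations inside an autoregressive captioner, not over bag-of-words string matching, and the function names (`retrieve`, `analogize`, `compose`) and the concept-overlap scoring are hypothetical stand-ins.

```python
# Hypothetical sketch: retrieve similar training captions, adapt their
# entities by analogy, and compose a caption for a novel concept combination.
# This is NOT the paper's model; it only mirrors the three-stage structure.

def retrieve(query_concepts, train_set, k=2):
    # Rank training instances by concept overlap with the query (toy retrieval).
    scored = sorted(train_set,
                    key=lambda inst: -len(set(inst["concepts"]) & set(query_concepts)))
    return scored[:k]

def analogize(prototype, query_concepts):
    # Naively substitute the prototype's entities with the query's entities,
    # standing in for analogical reasoning over relevant entities.
    caption = prototype["caption"]
    for old, new in zip(prototype["concepts"], query_concepts):
        caption = caption.replace(old, new)
    return caption

def compose(prototypes, query_concepts):
    # Keep the adapted caption that covers the most query concepts,
    # standing in for composition inside the generation process.
    candidates = [analogize(p, query_concepts) for p in prototypes]
    return max(candidates, key=lambda c: sum(w in c for w in query_concepts))

# Toy training set; the combination ("cat", "grass") never appears together.
train_set = [
    {"caption": "a dog running on the grass", "concepts": ["dog", "grass"]},
    {"caption": "a cat sitting on a couch", "concepts": ["cat", "couch"]},
]
query_concepts = ["cat", "grass"]  # novel concept combination

prototypes = retrieve(query_concepts, train_set)
print(compose(prototypes, query_concepts))  # -> a cat running on the grass
```

The sketch shows why retrieval helps compositional generalization: the unseen pair ("cat", "grass") is assembled by borrowing structure from neighbors that each contain one of the concepts.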

Publication Link: