DWIM: Towards Tool-aware Visual Reasoning via Discrepancy-aware Workflow Generation & Instruct-Masking Tuning
Publication Date: 3/25/2025
Event: arXiv
Reference: https://arxiv.org/abs/2503.19263
Authors: Fucai Ke, Monash University, NEC Laboratories America, Inc.; Vijay Kumar B G, NEC Laboratories America, Inc.; Xingjian Leng, Australian National University, NEC Laboratories America, Inc.; Zhixi Cai, Monash University; Zaid Khan, UNC Chapel Hill, NEC Laboratories America, Inc.; Weiqing Wang, Monash University; Pari Delir Haghighi, Monash University; Hamid Rezatofighi, Monash University; Manmohan Chandraker, NEC Laboratories America, Inc.
Abstract: Visual reasoning (VR), which is crucial in many fields for enabling human-like visual understanding, remains highly challenging. Recently, compositional visual reasoning approaches, which leverage the reasoning abilities of large language models (LLMs) with integrated tools to solve problems, have shown promise as more effective strategies than end-to-end VR methods. However, these approaches face limitations, as frozen LLMs lack tool awareness in VR, leading to performance bottlenecks. While leveraging LLMs for reasoning is widely used in other domains, such approaches are not directly applicable to VR due to limited training data, imperfect tools that introduce errors and reduce data-collection efficiency in VR, and the challenge of fine-tuning on noisy workflows. To address these challenges, we propose DWIM: i) Discrepancy-aware training Workflow generation, which assesses tool usage and extracts more viable workflows for training; and ii) Instruct-Masking fine-tuning, which guides the model to clone only effective actions, enabling the generation of more practical solutions. Our experiments demonstrate that DWIM achieves state-of-the-art performance across various VR tasks, exhibiting strong generalization on multiple widely-used datasets.
Publication Link: https://arxiv.org/pdf/2503.19263
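The instruct-masking idea described in the abstract, i.e. training the model to clone only the effective actions in a noisy workflow, can be sketched as a masked language-modeling loss in which tokens belonging to failed or ineffective tool calls are excluded from the objective. The sketch below is illustrative only: the function name, the toy log-probabilities, and the binary mask are assumptions for demonstration, not the paper's actual implementation.

```python
import math

def masked_nll(token_logprobs, effective_mask):
    """Average negative log-likelihood over tokens marked effective.

    token_logprobs: per-token log-probabilities from the model.
    effective_mask: 1 keeps the token in the loss, 0 masks it out
    (analogous to setting a label to an ignore index in common
    fine-tuning setups).
    """
    kept = [-lp for lp, keep in zip(token_logprobs, effective_mask) if keep]
    return sum(kept) / len(kept) if kept else 0.0

# Toy workflow: the first three tokens form an effective tool call,
# the last two belong to a failed call that should not be imitated.
logprobs = [math.log(0.9), math.log(0.8), math.log(0.7),
            math.log(0.1), math.log(0.2)]
mask = [1, 1, 1, 0, 0]

masked_loss = masked_nll(logprobs, mask)          # loss over effective tokens only
unmasked_loss = masked_nll(logprobs, [1] * 5)     # naive loss over the full workflow
```

Masking out the failed call keeps the low-probability noisy tokens from dominating the gradient, which is the intuition behind fine-tuning on imperfect tool-use traces.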