iFinder: Structured Zero-Shot Vision-Based LLM Grounding for Dash-Cam Video Reasoning
Publication Date: 12/2/2025
Event: The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)
Reference: pp 1-35, 2025
Authors: Manyi Yao, NEC Laboratories America, Inc., UC Riverside; Bingbing Zhuang, NEC Laboratories America, Inc.; Sparsh Garg, NEC Laboratories America, Inc.; Amit Roy-Chowdhury, UC Riverside; Christian Shelton, UC Riverside; Manmohan Chandraker, NEC Laboratories America, Inc., UC San Diego; Abhishek Aich, NEC Laboratories America, Inc.
Abstract: Grounding large language models (LLMs) in domain-specific tasks like post-hoc dash-cam driving video analysis is challenging due to their general-purpose training and lack of structured inductive biases. As vision is often the sole modality available for such analysis (i.e., no LiDAR, GPS, etc.), existing video-based vision-language models (V-VLMs) struggle with spatial reasoning, causal inference, and explainability of events in the input video. To this end, we introduce iFinder, a structured semantic grounding framework that decouples perception from reasoning by translating dash-cam videos into a hierarchical, interpretable data structure for LLMs. iFinder operates as a modular, training-free pipeline that employs pretrained vision models to extract critical cues (object pose, lane positions, and object trajectories), which are hierarchically organized into frame-level and video-level structures. Combined with a three-block prompting strategy, it enables step-wise, grounded reasoning for the LLM to refine a peer V-VLM's outputs and provide accurate reasoning. Evaluations on four public dash-cam video benchmarks show that iFinder's proposed grounding with domain-specific cues, especially object orientation and global context, significantly outperforms end-to-end V-VLMs in zero-shot driving settings, with up to 39% gains in accident reasoning accuracy. By grounding LLMs with driving domain-specific representations, iFinder offers a zero-shot, interpretable, and reliable alternative to end-to-end V-VLMs for post-hoc driving video understanding.
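To make the abstract's hierarchical grounding idea concrete, the sketch below shows one plausible way such a frame-level/video-level structure could be organized and serialized as structured context for an LLM. All class and field names (`ObjectCue`, `FrameRecord`, `VideoRecord`, `pose_deg`, `lane`) are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch of an iFinder-style hierarchical structure:
# per-frame object cues aggregated into video-level trajectories.
# Field names are assumptions for illustration only.

@dataclass
class ObjectCue:
    track_id: int
    category: str     # e.g. "car", "pedestrian"
    pose_deg: float   # object orientation relative to the ego camera
    lane: str         # e.g. "ego", "left", "right"

@dataclass
class FrameRecord:
    frame_idx: int
    objects: list = field(default_factory=list)

@dataclass
class VideoRecord:
    video_id: str
    frames: list = field(default_factory=list)

    def trajectories(self):
        """Aggregate per-frame cues into per-object, video-level trajectories."""
        tracks = {}
        for fr in self.frames:
            for obj in fr.objects:
                tracks.setdefault(obj.track_id, []).append(
                    (fr.frame_idx, obj.lane, obj.pose_deg)
                )
        return tracks

    def to_prompt(self):
        """Serialize the hierarchy to JSON the LLM can consume as grounded context."""
        return json.dumps(asdict(self), indent=2)

# Usage: two frames tracking the same object drifting toward the ego lane.
video = VideoRecord("demo")
video.frames.append(FrameRecord(0, [ObjectCue(7, "car", 12.0, "left")]))
video.frames.append(FrameRecord(1, [ObjectCue(7, "car", 18.5, "ego")]))
print(video.trajectories()[7])
```

The point of the sketch is the decoupling the abstract describes: perception fills the structure, and the LLM reasons only over its serialized form rather than raw pixels.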
Publication Link:


