iFinder: Structured Zero-Shot Vision-Based LLM Grounding for Dash-Cam Video Reasoning

Publication Date: 12/2/2025

Event: The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)

Reference: pp. 1-35, 2025

Authors: Manyi Yao, NEC Laboratories America, Inc., UC Riverside; Bingbing Zhuang, NEC Laboratories America, Inc.; Sparsh Garg, NEC Laboratories America, Inc.; Amit Roy-Chowdhury, UC Riverside; Christian Shelton, UC Riverside; Manmohan Chandraker, NEC Laboratories America, Inc., UC San Diego; Abhishek Aich, NEC Laboratories America, Inc.

Abstract: Grounding large language models (LLMs) in domain-specific tasks such as post-hoc dash-cam driving video analysis is challenging due to their general-purpose training and lack of structured inductive biases. Because vision is often the sole modality available for such analysis (i.e., no LiDAR, GPS, etc.), existing video-based vision-language models (V-VLMs) struggle with spatial reasoning, causal inference, and explainability of events in the input video. To this end, we introduce iFinder, a structured semantic grounding framework that decouples perception from reasoning by translating dash-cam videos into a hierarchical, interpretable data structure for LLMs. iFinder operates as a modular, training-free pipeline that employs pretrained vision models to extract critical cues (object pose, lane positions, and object trajectories), which are hierarchically organized into frame- and video-level structures. Combined with a three-block prompting strategy, this enables the LLM to perform step-wise, grounded reasoning that refines a peer V-VLM's outputs. Evaluations on four public zero-shot dash-cam driving benchmarks show that iFinder's grounding with domain-specific cues, especially object orientation and global context, significantly outperforms end-to-end V-VLMs, with up to 39% gains in accident reasoning accuracy. By grounding LLMs with driving domain-specific representations, iFinder offers a zero-shot, interpretable, and reliable alternative to end-to-end V-VLMs for post-hoc driving video understanding.
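To make the hierarchical structure described in the abstract concrete, below is a minimal sketch in Python of how per-frame cues could be organized into frame- and video-level records and serialized for an LLM prompt. This is illustrative only; the class and field names (ObjectCue, FrameRecord, pose, lane_position, global_context, etc.) are our own placeholders and not the paper's actual schema.

```python
import json
from dataclasses import asdict, dataclass, field
from typing import Dict, List

# Hypothetical per-object cue extracted by pretrained vision models.
@dataclass
class ObjectCue:
    track_id: int
    category: str        # e.g. "car", "pedestrian"
    pose: List[float]    # object orientation, e.g. yaw angle(s)
    lane_position: str   # e.g. "ego lane", "left lane"

# Frame-level structure: all cues observed in a single frame.
@dataclass
class FrameRecord:
    frame_index: int
    objects: List[ObjectCue] = field(default_factory=list)

# Video-level structure: frame records plus cross-frame trajectories
# and a scene-level summary used as global context for reasoning.
@dataclass
class VideoRecord:
    frames: List[FrameRecord] = field(default_factory=list)
    trajectories: Dict[int, List[List[float]]] = field(default_factory=dict)
    global_context: str = ""

# Assemble one record and serialize it (e.g. as JSON) so it can be
# placed into a structured prompt for the LLM.
video = VideoRecord(
    frames=[FrameRecord(
        frame_index=0,
        objects=[ObjectCue(track_id=1, category="car",
                           pose=[0.2], lane_position="ego lane")],
    )],
    trajectories={1: [[10.0, 2.5]]},
    global_context="rainy highway, moderate traffic",
)
prompt_context = json.dumps(asdict(video), indent=2)
```

A structure along these lines would let the LLM reference explicit, interpretable cues (orientation, lane, trajectory) rather than raw pixels, which is the decoupling of perception from reasoning that the abstract describes.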
