iRAG: Advancing RAG for Videos with an Incremental Approach
Publication Date: October 21, 2024
Event: The 33rd ACM International Conference on Information and Knowledge Management (CIKM 2024)
Reference: pp. 4341-4348, 2024
Authors: Md Adnan Arefeen, NEC Laboratories America, Inc., University of Missouri-Kansas City; Biplob Debnath, NEC Laboratories America, Inc.; Yusuf Sarwar Uddin, University of Missouri-Kansas City; Srimat T. Chakradhar, NEC Laboratories America, Inc.
Abstract: Retrieval-augmented generation (RAG) systems combine the strengths of language generation and information retrieval to power many real-world applications like chatbots. Using RAG to understand videos is appealing, but it faces two critical limitations. One-time, upfront conversion of all content in a large corpus of videos into text descriptions entails high processing times. Also, not all information in the rich video data is typically captured in the text descriptions. Since user queries are not known a priori, developing a system for video-to-text conversion and interactive querying of video data is challenging. To address these limitations, we propose an incremental RAG system called iRAG, which augments RAG with a novel incremental workflow to enable interactive querying of a large corpus of videos. Unlike traditional RAG, iRAG quickly indexes large repositories of videos, and in the incremental workflow, it uses the index to opportunistically extract more details from select portions of the videos to retrieve context relevant to an interactive user query. Such an incremental workflow avoids long video-to-text conversion times and overcomes the information loss caused by converting video to text by performing on-demand, query-specific extraction of details from the video data. This ensures high-quality responses to interactive user queries that are often not known a priori. To the best of our knowledge, iRAG is the first system to augment RAG with an incremental workflow to support efficient interactive querying of a large corpus of videos. Experimental results on real-world datasets demonstrate 23x to 25x faster video-to-text ingestion, while ensuring that the latency and quality of responses to interactive user queries are comparable to responses from a traditional RAG system where all video data is converted to text upfront, before any user querying.
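The abstract's incremental workflow, a cheap upfront indexing pass followed by expensive, query-time extraction only on retrieved segments, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `IncrementalVideoRAG`, the stub functions `coarse_caption` and `detailed_extraction`, and the keyword-overlap retrieval are all hypothetical stand-ins for the real vision models and vector index an actual system would use.

```python
def coarse_caption(segment: str) -> str:
    # Stand-in for a fast, lightweight indexing pass (e.g., sparse frame captions).
    return segment.lower()

def detailed_extraction(segment: str) -> str:
    # Stand-in for an expensive query-time pass (e.g., dense captioning, OCR).
    return f"details({segment})"

class IncrementalVideoRAG:
    """Hypothetical sketch of iRAG-style incremental ingestion and querying."""

    def __init__(self):
        self.index = {}         # segment id -> coarse caption (built upfront)
        self.detail_cache = {}  # segment id -> extracted details (filled on demand)

    def ingest(self, segments: dict):
        # Fast upfront pass: index every segment with a cheap caption only,
        # instead of converting the whole corpus to detailed text.
        for seg_id, content in segments.items():
            self.index[seg_id] = coarse_caption(content)

    def query(self, text: str, top_k: int = 2):
        # 1) Retrieve candidate segments via the coarse index
        #    (toy keyword-overlap scoring for illustration).
        words = set(text.lower().split())
        scored = sorted(
            self.index.items(),
            key=lambda kv: len(words & set(kv[1].split())),
            reverse=True,
        )
        hits = [seg_id for seg_id, _ in scored[:top_k]]
        # 2) Opportunistically run expensive extraction only on retrieved
        #    segments, caching results for later queries.
        for seg_id in hits:
            if seg_id not in self.detail_cache:
                self.detail_cache[seg_id] = detailed_extraction(self.index[seg_id])
        # 3) Return query-specific context for the language model.
        return [self.detail_cache[seg_id] for seg_id in hits]
```

In this sketch, segments that no query ever retrieves never pay the detailed-extraction cost, which is the source of the ingestion speedup the abstract describes; the cache keeps repeated queries over the same segments as fast as a traditional RAG lookup.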
Publication Link: https://dl.acm.org/doi/10.1145/3627673.3680088