Interpretations are explanations of the reasoning behind a system's responses to user queries. Interpretability in artificial intelligence (AI) question answering matters because it gives users a clear account of how the system arrived at a specific answer. This transparency enhances user trust, aids error analysis, and guides improvement of the question answering model, making the system more reliable overall. It is especially important in applications where users depend on accurate and comprehensible information, such as virtual assistants, customer support chatbots, and information retrieval systems.

Posts

Generating Followup Questions for Interpretable Multi-hop Question Answering

We propose a framework for answering open-domain multi-hop questions in which partial information is read and used to generate followup questions, which are finally answered by a pretrained single-hop answer extractor. This framework makes each hop interpretable and makes the retrieval associated with later hops as flexible and specific as for the first hop. As a first instantiation of this framework, we train a pointer-generator network to predict followup questions based on the question and partial information. This affords a novel application of a neural question generation network, which we use to produce weak ground-truth single-hop followup questions from the final answers and their supporting facts. Learning to generate followup questions that select the relevant answer spans against downstream supporting facts, while avoiding distracting premises, poses an exciting semantic challenge for text generation. We present an evaluation using the two-hop bridge questions of HotpotQA.
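To make the control flow concrete, here is a minimal sketch in Python of the hop loop the abstract describes, assuming a two-hop bridge question. All names here (`retrieve`, `generate_followup`, `extract_answer`, `multi_hop_answer`) are hypothetical placeholders standing in for a retriever, the pointer-generator followup model, and the pretrained single-hop extractor; they are not the authors' released code or API.

```python
# Hypothetical component interfaces: a retriever, the pointer-generator
# followup question model, and a pretrained single-hop answer extractor.
# These are placeholders for illustration, not the paper's implementation.

def retrieve(query: str) -> str:
    """Return the top passage for a query (e.g. TF-IDF or dense retrieval)."""
    raise NotImplementedError

def generate_followup(question: str, passage: str) -> str:
    """Pointer-generator network: rewrite the question given partial info."""
    raise NotImplementedError

def extract_answer(question: str, passage: str) -> str:
    """Pretrained single-hop extractor: return an answer span."""
    raise NotImplementedError

def multi_hop_answer(question: str, num_hops: int = 2) -> str:
    """Answer an open-domain multi-hop question one interpretable hop at a time."""
    current_question = question
    passage = ""
    for hop in range(num_hops):
        # Retrieval is conditioned on the current (followup) question, so
        # later hops are as flexible and specific as the first.
        passage = retrieve(current_question)
        if hop < num_hops - 1:
            # The intermediate followup question is explicit text, which is
            # what makes each hop inspectable by a human.
            current_question = generate_followup(question, passage)
    # The pretrained single-hop extractor answers the last followup
    # question against the final hop's passage.
    return extract_answer(current_question, passage)
```

The design choice worth noting is that the hop-to-hop state is a natural-language question rather than an opaque vector: it can be read, audited, and reused by any off-the-shelf single-hop retriever and extractor.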