In natural language processing and artificial intelligence, an open domain refers to a system or model designed to handle a wide range of topics. An open-domain model is not restricted to specific subject areas; it can understand and generate content across many different domains. Such models are versatile and can be applied to diverse tasks, making them suitable for addressing a broad spectrum of user queries without being limited to a narrow set of predefined topics. Open-domain models are often used in applications such as chatbots, question-answering systems, and language generation tasks, where adaptability to different subject matters is crucial.


Open-ended Commonsense Reasoning with Unrestricted Answer Scope

Open-ended commonsense reasoning is defined as solving a commonsense question without providing (1) a short list of answer candidates or (2) a pre-defined answer scope. Conventional approaches that formulate the commonsense question as multiple-choice question answering, or that use external knowledge to learn retrieval-based methods, are less applicable in the open-ended setting due to an inherent challenge: without a pre-defined answer scope or a few candidates, open-ended commonsense reasoning entails predicting answers over an extremely large search space. Moreover, most questions require implicit multi-hop reasoning, which makes the problem even more challenging. In this work, we leverage pre-trained language models to iteratively retrieve reasoning paths from an external knowledge base, without requiring task-specific supervision. These reasoning paths help identify the most precise answer to the commonsense question. We conduct experiments on two commonsense benchmark datasets. Compared to other approaches, our proposed method achieves better performance both quantitatively and qualitatively.
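The iterative retrieval of reasoning paths described above can be sketched as a beam search over a knowledge graph, where each hop is scored for plausibility against the question. The toy graph, the word-overlap scorer, and the function names below are illustrative assumptions, not the paper's actual method: the paper scores hops with a pre-trained language model, which we approximate here with simple lexical overlap to keep the sketch self-contained.

```python
# Hypothetical sketch of iterative reasoning-path retrieval over a toy
# knowledge graph. The LM-based hop scorer from the paper is replaced by
# a word-overlap stand-in (score_hop); everything else is illustrative.

from collections import defaultdict

# Toy knowledge graph: head -> [(relation, tail), ...]
KG = defaultdict(list)
for h, r, t in [
    ("accident", "causes", "injury"),
    ("injury", "requires", "bandage"),
    ("injury", "requires", "doctor"),
    ("accident", "occurs_at", "road"),
]:
    KG[h].append((r, t))

def score_hop(question, relation, tail):
    """Stand-in for an LM plausibility score: count how many surface
    words of the candidate triple appear in the question."""
    words = set(question.lower().split())
    return sum(w in words for w in relation.split("_") + tail.split())

def retrieve_paths(question, start, hops=2, beam=3):
    """Iteratively expand reasoning paths from `start`, keeping the
    top-`beam` paths by accumulated score after each hop."""
    paths = [([start], 0.0)]
    for _ in range(hops):
        candidates = []
        for nodes, score in paths:
            for rel, tail in KG.get(nodes[-1], []):
                candidates.append(
                    (nodes + [rel, tail], score + score_hop(question, rel, tail))
                )
        if not candidates:
            break
        paths = sorted(candidates, key=lambda p: -p[1])[:beam]
    return paths

best = retrieve_paths(
    "What do you need after an accident causes an injury?", "accident"
)
# The highest-scoring path's endpoint serves as the predicted answer.
print(" -> ".join(best[0][0]))
```

In this sketch, the path `accident -> causes -> injury -> requires -> bandage` surfaces an answer ("bandage") that is never listed as a candidate, illustrating how multi-hop path retrieval can search an unrestricted answer scope.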