Uncertainty Quantification for In-Context Learning of Large Language Models
Publication Date: 6/20/2024
Event: 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico
Reference: pp. 3357-3370, 2024
Authors: Chen Ling, Emory University; Xujiang Zhao, NEC Laboratories America, Inc.; Xuchao Zhang, Microsoft; Wei Cheng, NEC Laboratories America, Inc.; Yanchi Liu, NEC Laboratories America, Inc.; Yiyou Sun, NEC Laboratories America, Inc.; Mika Oishi, NEC Corporation; Takao Osaki, NEC Corporation; Katsushi Matsuda, NEC Corporation; Jie Ji, Emory University; Guangji Bai, Emory University; Liang Zhao, Emory University; Haifeng Chen, NEC Laboratories America, Inc.
Abstract: In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and has revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLM responses, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLM responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainties may stem from both the provided demonstrations (aleatoric uncertainty) and ambiguities tied to the model's configuration (epistemic uncertainty). We propose a novel formulation and a corresponding estimation method to quantify both types of uncertainty. The proposed method offers an unsupervised way to understand the predictions of in-context learning in a plug-and-play fashion. Extensive experiments are conducted to demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ_ICL.
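Note: for readers unfamiliar with the aleatoric/epistemic split mentioned in the abstract, the sketch below shows the standard entropy-based decomposition of predictive uncertainty that this framing parallels. The notation is assumed for illustration only (D for the demonstration set, theta for the model configuration); the paper's exact formulation and estimator may differ, so please refer to the linked PDF for the authors' method.

% Illustrative sketch (assumed notation, not taken verbatim from the paper):
% total predictive entropy splits into an expected-entropy term and a
% mutual-information term, commonly read as aleatoric and epistemic uncertainty.
\[
\underbrace{\mathcal{H}\big[\mathbb{E}_{\theta}\, p(y \mid x, D, \theta)\big]}_{\text{total uncertainty}}
\;=\;
\underbrace{\mathbb{E}_{\theta}\, \mathcal{H}\big[p(y \mid x, D, \theta)\big]}_{\text{aleatoric (demonstrations } D)}
\;+\;
\underbrace{\mathcal{I}\big(y;\,\theta \mid x, D\big)}_{\text{epistemic (configuration } \theta)}
\]

In practice, such a decomposition is typically estimated by averaging the model's predictive distributions over sampled demonstration sets and model configurations and comparing the entropy of the average to the average of the entropies.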
Publication Link: https://aclanthology.org/2024.naacl-long.184.pdf