Uncertainty Quantification and Reasoning for Reliable AI integrates probabilistic modeling and logic-based reasoning to improve system trustworthiness. It enables AI models to express and manage uncertainty when making predictions or decisions, combining statistical methods with knowledge representation frameworks. NEC Laboratories America studies uncertainty quantification (UQ) to advance dependable AI systems for complex, real-world applications. Such research underpins ethical and transparent AI deployment.

Posts

Uncertainty Quantification and Reasoning for Reliable AI Seminar at Brigham Young University

Our researcher Xujiang Zhao will present “Uncertainty Quantification and Reasoning for Reliable AI” at Brigham Young University on Thursday, Sept. 25 at 11 a.m. in TMCB 1170. The seminar explores how statistical modeling and reasoning frameworks can strengthen trustworthy AI, making systems more robust and transparent in high-stakes applications like healthcare and autonomous systems. Attendees will gain insights into how uncertainty quantification is shaping the next generation of responsible AI.

Uncertainty Quantification for In-Context Learning of Large Language Models

In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and has revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLM responses, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLM responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainty may stem both from the provided demonstrations (aleatoric uncertainty) and from ambiguities tied to the model's configurations (epistemic uncertainty). We propose a novel formulation and a corresponding estimation method to quantify both types of uncertainty. The proposed method offers an unsupervised way to understand the predictions of in-context learning in a plug-and-play fashion. Extensive experiments demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ_ICL.
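To make the aleatoric/epistemic split concrete, here is a minimal sketch of the standard entropy-based decomposition of predictive uncertainty: sample a predictive distribution from several model configurations (e.g., different demonstration orderings or decoding settings), take the entropy of the averaged distribution as total uncertainty, the average per-configuration entropy as the aleatoric part, and their gap (the mutual information, i.e., disagreement between configurations) as the epistemic part. This is a generic illustration of the idea, not the paper's exact formulation; the function name and toy inputs are hypothetical.

```python
import numpy as np

def decompose_uncertainty(prob_samples):
    """Split total predictive uncertainty into aleatoric and epistemic parts.

    prob_samples: array of shape (n_configs, n_classes), each row a
    predictive distribution from one model configuration (e.g., one
    demonstration ordering).
    """
    eps = 1e-12  # guard against log(0)
    p = np.asarray(prob_samples, dtype=float)
    mean_p = p.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric: expected entropy of each configuration's own prediction.
    aleatoric = -np.sum(p * np.log(p + eps), axis=1).mean()
    # Epistemic: mutual information = total - aleatoric (disagreement).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Configurations that agree -> epistemic uncertainty near zero.
print(decompose_uncertainty([[0.7, 0.3], [0.7, 0.3]]))
# Confident but conflicting configurations -> epistemic dominates.
print(decompose_uncertainty([[0.95, 0.05], [0.05, 0.95]]))
```

When the configurations produce the same distribution, all remaining uncertainty is aleatoric; when they confidently disagree, the epistemic term captures that the model's behavior depends on the configuration rather than on genuine ambiguity in the input.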