Uncertainty Quantification and Reasoning for Reliable AI at Brigham Young University
Xujiang Zhao, a researcher in our Data Science & System Security department, will present the seminar “Uncertainty Quantification and Reasoning for Reliable AI” on advancing trustworthy AI at the Talmage Math Sciences/Computer Building (TMCB 1170), Brigham Young University, on Thursday, September 25th at 11am. As AI systems play a greater role in critical decisions, understanding how to measure and reason about uncertainty is essential.
Xujiang will share how advanced statistical modeling and reasoning frameworks can make AI more robust, transparent, and reliable in real-world applications from healthcare to autonomous systems. Don’t miss this opportunity to engage with cutting-edge research and learn how uncertainty quantification is shaping the next generation of responsible AI.
Related Papers
Uncertainty Propagation on LLM Agent
Large language models (LLMs) integrated into multi-step agent systems enable complex decision-making processes across various applications. However, their outputs often lack reliability, making uncertainty estimation crucial. Existing uncertainty estimation methods primarily focus on final-step outputs…
Uncertainty Quantification for In-Context Learning of Large Language Models
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and has revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLM responses, such as hallucination, have also been actively discussed. Existing…