Human–AI Interaction (HAI) refers to the study and design of how people engage with artificial intelligence systems, focusing on communication, usability, and trust. It examines how users interpret AI outputs, provide feedback, and collaborate with models in tasks such as decision support, automation, and analysis. HAI draws on human–computer interaction, machine learning, and cognitive science to ensure AI systems are understandable, reliable, and aligned with human intent in real-world applications.

Posts

Beyond Explainability: How We Are Redefining Interpretability in AI

Interpretability has long been the focus of efforts to understand AI systems, but what if it is only part of the story? New research introduces model semantics, a framework for understanding what AI systems truly represent and how their internal structures connect to real-world phenomena.

Interpretability and Implicit Model Semantics in Biomedicine and Deep Learning

We introduce a framework for analysing interpretability in deep learning by drawing on a formal notion of model semantics from the philosophy of science. We argue that interpretability is only one aspect of a model's semantics, and we illustrate our framework with examples from biomedicine.