Michael Gancz works at Yale University.

Posts

Beyond Explainability: How We Are Redefining Interpretability in AI

AI interpretability has long been the focus of efforts to understand deep learning models, but what if it's only part of the story? New research introduces model semantics, a framework for understanding what AI systems truly represent and how their internal structures connect to real-world phenomena.

Interpretability and Implicit Model Semantics in Biomedicine and Deep Learning

We introduce a framework for analysing interpretability in deep learning by drawing on a formal notion of model semantics from the philosophy of science. We argue that interpretability is only one aspect of a model's semantics and illustrate our framework with examples from biomedicine.