Explaining Neural Network Classifiers: Hurdles and Progress

27.04.2022 18:00 - 19:00

Stephan Wäldchen (TU Berlin)


Abstract: Neural networks have become the standard tools for high-dimensional decision making, e.g. in medical imaging, autonomous driving, and playing complex games.
Yet even in high-stakes areas they generally operate as black-box algorithms without a legible decision process. This has given rise to the field of explainable artificial intelligence (XAI). The first step for XAI methods is to discern between the relevant and the irrelevant input components of a decision. In this talk, we formalise this idea by extending the concept of prime implicants from abductive reasoning to a probabilistic setting. This setting captures what many XAI practitioners intuitively aim for. We show that finding such small implicants, even approximately, is a computationally hard problem. Furthermore, good solutions depend strongly on the underlying probability distribution. We present strategies to overcome both problems and discuss which challenges remain.
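To give a flavour of the probabilistic extension described above, here is a minimal Monte Carlo sketch (not the speaker's actual formalism; the classifier, the uniform input distribution, and all names are illustrative assumptions): a set S of input components is treated as "relevant" for a decision if fixing the input to its values on S and resampling the remaining components rarely changes the classifier's output.

```python
import random

def classifier(x):
    # Toy Boolean classifier: positive iff the first two inputs are both 1.
    return 1 if (x[0] == 1 and x[1] == 1) else 0

def relevance(f, x_star, S, n, num_samples=20_000, seed=0):
    """Estimate P[f(y) = f(x_star)] where y agrees with x_star on the
    component set S and the remaining components are resampled
    uniformly at random (an assumed stand-in distribution)."""
    rng = random.Random(seed)
    reference = f(x_star)
    hits = 0
    for _ in range(num_samples):
        y = [x_star[i] if i in S else rng.randint(0, 1) for i in range(n)]
        hits += (f(y) == reference)
    return hits / num_samples

x_star = [1, 1, 0, 1]
# Fixing the truly relevant components {0, 1} pins down the output,
# so the estimate is close to 1; fixing the irrelevant pair {2, 3}
# leaves the output to chance (probability 1/4 of both resampled
# components being 1).
print(relevance(classifier, x_star, {0, 1}, 4))
print(relevance(classifier, x_star, {2, 3}, 4))
```

A small set S whose estimate stays above a chosen threshold plays the role of a probabilistic prime implicant; as the abstract notes, actually finding such small sets is computationally hard, and the answer changes with the distribution used for resampling.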

Organiser:
P. Petersen
Location:
Zoom Meeting