
Interpretable methods that use prototypical patches help an AI model explain its reasoning to humans. However, the prototypes learned by current methods may not align with human visual perception, which leads to non-intuitive interpretations.

PIP-Net learns prototypical parts in a self-supervised fashion, so they correlate better with human vision. Its sparse scoring sheet tallies the evidence for a class based on the prototypical parts found in an image, and it can abstain from a decision on unfamiliar data, as sketched below. No part annotations are needed, only image-level labels.
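
To make the scoring-sheet idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: it assumes prototype activation maps coming from some backbone, max-pools them into per-image presence scores, applies a non-negative linear layer as the scoring sheet, and abstains via a hypothetical threshold `tau`.

```python
import torch
import torch.nn as nn

class ScoringSheetHead(nn.Module):
    def __init__(self, num_prototypes: int, num_classes: int):
        super().__init__()
        # Non-negative weights form the "scoring sheet": each entry says how
        # much a prototype counts as evidence for a class.
        self.weight = nn.Parameter(torch.rand(num_classes, num_prototypes))

    def forward(self, proto_maps: torch.Tensor):
        # proto_maps: (batch, D, H, W) prototype activations per image patch.
        presence = torch.amax(proto_maps, dim=(2, 3)).clamp(0.0, 1.0)  # (batch, D)
        scores = presence @ self.weight.clamp(min=0.0).t()             # (batch, C)
        return scores, presence

def predict_or_abstain(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Abstain (label -1) when no class accumulates enough prototype evidence,
    # e.g. for unfamiliar or out-of-distribution images.
    best, labels = scores.max(dim=1)
    return torch.where(best >= tau, labels, torch.full_like(labels, -1))
```

Sparsity of the weight matrix, encouraged during training, is what keeps the sheet readable: each class should rely on only a handful of prototypes.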

PIP-Net is globally interpretable: the learned prototypes expose the model's entire reasoning process. Local explanations identify which prototypes are relevant in a given image, and the prototypes correlate with ground-truth object parts. With interpretable prototypes, PIP-Net empowers users to understand its decisions intuitively and meaningfully.
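
Under the same assumptions, a local explanation can be read directly off the scoring sheet: list the prototypes present in the image, weighted by how much evidence each contributes to the predicted class. The hypothetical helper below, building on `ScoringSheetHead` from the earlier sketch, illustrates this.

```python
def local_explanation(presence: torch.Tensor, head: ScoringSheetHead,
                      label: int, min_presence: float = 0.1):
    # presence: (D,) prototype presence scores for a single image.
    w = head.weight.clamp(min=0.0)[label]   # evidence weights for this class
    contrib = presence * w                  # per-prototype contribution to the class score
    relevant = [(d, presence[d].item(), contrib[d].item())
                for d in range(presence.numel())
                if presence[d].item() >= min_presence]
    # Highest contributions first: these are the prototypes a user should inspect.
    return sorted(relevant, key=lambda t: t[2], reverse=True)
```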

Paper

  • M. Nauta, J. Schlötterer, M. van Keulen, and C. Seifert, “PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, doi: 10.1109/CVPR52729.2023.00269.