computer-vision

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable

B-cos Networks have been shown to be effective for obtaining highly human-interpretable explanations of model decisions by architecturally enforcing stronger alignment between inputs and weights. B-cos variants of convolutional networks (CNNs) and …
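As a rough sketch (the formula follows the B-cos papers, but the class and parameter names below are illustrative, not the authors' reference implementation), a B-cos unit rescales the linear response ŵᵀx by |cos(x, ŵ)|^(B-1), so the output is only large when the input aligns with the unit-norm weight vector:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Illustrative B-cos unit: out = |cos(x, w_hat)|^(B-1) * (w_hat^T x)."""
    def __init__(self, in_features, out_features, b=2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.b = b

    def forward(self, x):                                  # x: (N, in_features)
        w_hat = F.normalize(self.weight, dim=1)            # unit-norm weight rows
        lin = x @ w_hat.t()                                # w_hat^T x, shape (N, out)
        cos = lin / (x.norm(dim=1, keepdim=True) + 1e-6)   # cosine(x, w_hat)
        # Down-weight the linear response unless x aligns with w_hat;
        # for b == 1 this reduces to a plain (weight-normalized) linear layer.
        return lin * cos.abs().pow(self.b - 1)
```

For B = 1 this is an ordinary linear layer; larger B enforces stronger input-weight alignment, which is what makes the resulting explanations faithful by construction.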

Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery

Concept Bottleneck Models (CBMs) have recently been proposed to address the ‘black-box’ problem of deep neural networks, by first mapping images to a human-understandable concept space and then linearly combining concepts for classification. Such …
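A minimal sketch of the bottleneck architecture itself, under illustrative names (the paper's contribution lies in discovering and naming the concepts automatically, without task-specific supervision, rather than in this generic structure):

```python
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Illustrative CBM: features -> concept scores -> linear classifier."""
    def __init__(self, backbone, feat_dim, n_concepts, n_classes):
        super().__init__()
        self.backbone = backbone                       # e.g. a frozen image encoder
        self.to_concepts = nn.Linear(feat_dim, n_concepts)
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = self.to_concepts(self.backbone(x))  # human-inspectable bottleneck
        logits = self.classifier(concepts)             # linear combination of concepts
        return logits, concepts
```

Because the final layer is linear, each prediction decomposes into per-concept contributions that a human can inspect.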

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation

Knowledge Distillation (KD) has proven effective for compressing large teacher models into smaller student models. While it is well known that student models can achieve accuracies similar to those of their teachers, it has also been shown that they nonetheless …
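A hedged sketch of what an explanation-enhanced distillation objective can look like: the usual temperature-scaled KD term plus a term rewarding teacher-student agreement on attribution maps. The precomputed explanation inputs, `lam`, and `T` below are illustrative, not the paper's exact configuration:

```python
import torch.nn.functional as F

def e2kd_loss(student_logits, teacher_logits, student_expl, teacher_expl,
              T=4.0, lam=1.0):
    # Standard KD term: match temperature-softened output distributions.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Explanation term: reward agreement between (flattened) attribution maps
    # computed for teacher and student on the same inputs.
    sim = F.cosine_similarity(student_expl.flatten(1),
                              teacher_expl.flatten(1), dim=1).mean()
    return kd + lam * (1.0 - sim)
```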

Better Understanding Differences in Attribution Methods via Systematic Evaluations

Deep neural networks are very successful on many vision tasks, but hard to interpret due to their black-box nature. To overcome this, various post-hoc attribution methods have been proposed to identify the image regions most influential to the models' …
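For reference, one of the simplest such attribution methods, gradient x input, fits in a few lines (purely illustrative; the paper evaluates a broad range of attribution methods under unified settings):

```python
import torch

def grad_times_input(model, x, target_class):
    # Attribute the target class score to input pixels via gradient x input.
    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    return (x.grad * x).detach()          # (N, C, H, W) contribution estimates
```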

Studying How to Efficiently and Effectively Guide Models with Explanations

Despite being highly performant, deep neural networks might base their decisions on features that spuriously correlate with the provided labels, thus hurting generalization. To mitigate this, ‘model guidance’ has recently gained popularity, i.e. the …
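A hedged sketch of one common form of such guidance, assuming an 'energy'-style localization loss that penalizes attribution mass falling outside a human-annotated relevance mask (shapes and the weighting are illustrative; the paper systematically compares losses, attribution methods, and annotation types):

```python
import torch

def energy_guidance_loss(attr, mask, eps=1e-6):
    # attr: attribution maps; mask: binary human relevance masks (same shape).
    attr = attr.clamp(min=0)                       # consider positive evidence only
    inside = (attr * mask).flatten(1).sum(dim=1)
    total = attr.flatten(1).sum(dim=1) + eps
    return (1.0 - inside / total).mean()           # 0 when all energy is in-mask

# Joint objective (lam trades off accuracy against localization):
# loss = task_loss + lam * energy_guidance_loss(attributions, masks)
```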

Model Guidance

Studying How to Efficiently and Effectively Guide Models with Explanations

Understanding Attributions

Towards Better Understanding Attribution Methods

Deep neural networks are very successful on many vision tasks, but hard to interpret due to their black-box nature. To overcome this, various post-hoc attribution methods have been proposed to identify the image regions most influential to the models' …

Adversarial Training against Location-Optimized Adversarial Patches

Deep neural networks have been shown to be susceptible to adversarial examples -- small, imperceptible changes constructed to cause misclassification in otherwise highly accurate image classifiers. As a practical alternative, recent work proposed …
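A hedged sketch of one inner step of a location-optimized patch attack of this flavor: signed-gradient ascent on the patch content followed by a greedy search over nearby placements. The `apply_patch` helper, step size, and shift set are illustrative, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def apply_patch(x, patch, loc):
    # Paste the patch into a copy of the image batch at (row, col) = loc,
    # clamping the location so the patch stays inside the image.
    h, w = patch.shape[-2:]
    r = max(0, min(loc[0], x.shape[-2] - h))
    c = max(0, min(loc[1], x.shape[-1] - w))
    out = x.clone()
    out[..., r:r+h, c:c+w] = patch
    return out

def patch_attack_step(model, x, y, patch, loc, step=2/255,
                      shifts=((0, 0), (0, 4), (4, 0), (0, -4), (-4, 0))):
    # Content update: signed-gradient ascent on the patch pixels.
    patch = patch.detach().requires_grad_(True)
    F.cross_entropy(model(apply_patch(x, patch, loc)), y).backward()
    patch = (patch + step * patch.grad.sign()).clamp(0, 1).detach()
    # Location update: greedily keep the shift that maximizes the loss.
    with torch.no_grad():
        losses = [F.cross_entropy(model(apply_patch(x, patch,
                                                    (loc[0] + dy, loc[1] + dx))), y)
                  for dy, dx in shifts]
    dy, dx = shifts[max(range(len(shifts)), key=lambda i: losses[i])]
    return patch, (loc[0] + dy, loc[1] + dx)
```

Adversarial training then alternates such attack steps with standard training on the resulting patched images.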

Adversarial Patch Training

Adversarial Training against Location-Optimized Adversarial Patches