Improving Interpretability via Regularization of Neural Activation Sensitivity

11/16/2022
by Ofir Moshe, et al.

State-of-the-art deep neural networks (DNNs) are highly effective at tackling many real-world tasks. However, their wide adoption in mission-critical contexts is hampered by two major weaknesses: their susceptibility to adversarial attacks and their opaqueness. The former raises concerns about the security and generalization of DNNs in real-world conditions, whereas the latter impedes users' trust in their output. In this research, we (1) examine the effect of adversarial robustness on interpretability and (2) present a novel approach for improving the interpretability of DNNs that is based on regularization of neural activation sensitivity. We compare the interpretability of models trained using our method with that of standard models and of models trained using state-of-the-art adversarial robustness techniques. Our results show that adversarially robust models are more interpretable than standard models, and that models trained using our proposed method surpass even the adversarially robust models in terms of interpretability.
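The abstract only names the idea of penalizing neural activation sensitivity. As a minimal illustration of what such a penalty could look like in principle, the sketch below estimates, by finite differences, how strongly the hidden activations of a tiny hypothetical one-layer ReLU network react to small input perturbations. The layer sizes, weights, and finite-difference scheme are all assumptions for illustration, not the paper's actual regularizer.

```python
import random

random.seed(0)

# Hypothetical tiny network: one ReLU layer with random weights.
D_IN, D_H = 4, 3
W = [[random.uniform(-1, 1) for _ in range(D_IN)] for _ in range(D_H)]

def activations(x):
    """ReLU activations of the single hidden layer."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

def sensitivity_penalty(x, eps=1e-4):
    """Mean squared finite-difference sensitivity of activations w.r.t. x.

    Perturb each input coordinate by eps and average the squared change
    in every activation; a training loop could add this term (scaled by
    a coefficient) to the task loss to encourage less sensitive, and
    plausibly more interpretable, activations.
    """
    base = activations(x)
    total, count = 0.0, 0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        pert = activations(xp)
        for a0, a1 in zip(base, pert):
            total += ((a1 - a0) / eps) ** 2
            count += 1
    return total / count

x = [0.5, -0.2, 0.1, 0.9]
penalty = sensitivity_penalty(x)
print(penalty)
```

In practice a differentiable version of this penalty (the Jacobian of activations with respect to the input, computed by automatic differentiation) would be added to the training objective rather than estimated by finite differences; the finite-difference form is used here only to keep the sketch dependency-free.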


Related research

07/04/2023 · Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection
With the perpetual increase of complexity of the state-of-the-art deep n...

05/15/2023 · Smoothness and monotonicity constraints for neural networks using ICEnet
Deep neural networks have become an important tool for use in actuarial ...

03/26/2023 · Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
Data augmentation strategies are actively used when training deep neural...

05/31/2019 · L0 Regularization Based Neural Network Design and Compression
We consider complexity of Deep Neural Networks (DNNs) and their associat...

04/18/2020 · Single-step Adversarial training with Dropout Scheduling
Deep learning models have shown impressive performance across a spectrum...

05/10/2019 · On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Recent studies on the adversarial vulnerability of neural networks have ...

07/04/2020 · On Connections between Regularizations for Improving DNN Robustness
This paper analyzes regularization terms proposed recently for improving...
