Improving Defensive Distillation using Teacher Assistant

05/14/2023
by Maniratnam Mandal, et al.

Adversarial attacks pose a significant threat to the security and safety of deep neural networks deployed in modern applications. In computer vision tasks in particular, an attacker with knowledge of the model architecture can craft adversarial samples that are imperceptible to the human eye. Such attacks can cause security failures in applications such as self-driving cars and face recognition, so building networks that are robust to them is both desirable and essential. Among the various methods in the literature, defensive distillation has shown promise in recent years: using knowledge distillation, researchers have created models robust to some of these attacks. However, newer attacks have exposed weaknesses in defensive distillation. In this project, we draw inspiration from teacher assistant knowledge distillation and propose that introducing an assistant network can improve the robustness of the distilled model. Through a series of experiments, we evaluate the distilled models at different distillation temperatures in terms of accuracy, sensitivity, and robustness. Our experiments demonstrate that the proposed approach improves robustness in most cases. Additionally, we show that multi-step distillation can further improve robustness with very little impact on model accuracy.

Related research

05/23/2019 · Adversarially Robust Distillation
Knowledge distillation is effective for producing small high-performance...

01/19/2023 · RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation
Deep Neural Networks are vulnerable to adversarial attacks. Neural Archi...

01/30/2022 · Improving Corruption and Adversarial Robustness by Enhancing Weak Subnets
Deep neural networks have achieved great success in many computer vision...

10/04/2022 · A Study on the Efficiency and Generalization of Light Hybrid Retrievers
Existing hybrid retrievers which integrate sparse and dense retrievers, ...

03/10/2022 · Improving Neural ODEs via Knowledge Distillation
Neural Ordinary Differential Equations (Neural ODEs) construct the conti...

05/05/2023 · A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness
The aim of dataset distillation is to encode the rich features of an ori...

05/14/2023 · Analyzing Compression Techniques for Computer Vision
Compressing deep networks is highly desirable for practical use-cases in...
