Federated Unlearning via Class-Discriminative Pruning

10/22/2021
by Junxiao Wang, et al.

We explore the problem of selectively forgetting categories from trained CNN classification models in federated learning (FL). Because the training data cannot be accessed globally in FL, our insight is to probe the internal influence of each channel. By visualizing the feature maps activated by different channels, we observe that channels contribute unevenly to different categories in image classification. Inspired by this, we propose a method for scrubbing the model clean of information about particular categories. The method requires neither retraining from scratch nor global access to the training data. Instead, we introduce Term Frequency-Inverse Document Frequency (TF-IDF) to quantify the class discrimination of channels. Channels with high TF-IDF scores are more discriminative of the target categories and are therefore pruned to unlearn them. Pruning is followed by a fine-tuning process that recovers the performance of the pruned model. Evaluated on the CIFAR10 dataset, our method accelerates unlearning by 8.9x for the ResNet model and 7.9x for the VGG model, with no degradation in accuracy, compared to retraining from scratch. On the CIFAR100 dataset, the speedups are 9.9x and 8.4x, respectively. We envision this work as a complementary building block for FL toward compliance with legal and ethical criteria.
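To make the TF-IDF analogy concrete, here is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's implementation: the per-class activation statistic `acts`, the exact TF and IDF formulas, and the helper names `channel_tfidf_scores` and `prune_channels` are hypothetical choices that mirror the retrieval analogy (classes as "documents", channels as "terms").

```python
import torch
import torch.nn as nn

def channel_tfidf_scores(acts: torch.Tensor, target_class: int) -> torch.Tensor:
    """Score channels by class discrimination via a TF-IDF analogy.

    acts: tensor of shape (num_classes, num_channels) holding, e.g., the
    mean feature-map activation of each channel on examples of each class
    (an assumed statistic). Returns one score per channel; higher means
    the channel is more discriminative of the target class.
    """
    # Term frequency: the channel's activation on the target class,
    # normalized over all channels of this layer.
    tf = acts[target_class] / (acts[target_class].sum() + 1e-12)
    # Document frequency: fraction of classes for which the channel's
    # activation exceeds its mean activation across classes.
    df = (acts > acts.mean(dim=0, keepdim=True)).float().mean(dim=0)
    # Inverse document frequency: channels that fire for few classes
    # get a large IDF; channels that fire for all classes get ~0.
    idf = torch.log(1.0 / (df + 1e-12))
    return tf * idf

def prune_channels(conv: nn.Conv2d, scores: torch.Tensor, ratio: float = 0.1):
    """Zero out the filters of the top-scoring output channels."""
    k = max(1, int(ratio * scores.numel()))
    idx = scores.topk(k).indices
    with torch.no_grad():
        conv.weight[idx] = 0.0          # zero the pruned output channels
        if conv.bias is not None:
            conv.bias[idx] = 0.0
    return idx

# Example: score and prune channels of one conv layer for target class 3.
conv = nn.Conv2d(16, 32, kernel_size=3)
acts = torch.rand(10, 32)               # stand-in per-class activation stats
scores = channel_tfidf_scores(acts, target_class=3)
pruned = prune_channels(conv, scores, ratio=0.1)
```

Zeroing filters here only approximates structural pruning; in the method described above, the high-TF-IDF channels would actually be removed and the pruned model fine-tuned to recover accuracy.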

