Out-of-distribution Few-shot Learning For Edge Devices without Model Fine-tuning

04/13/2023
by Xinyun Zhang, et al.

Few-shot learning (FSL) via customization of a deep learning network with limited data has emerged as a promising technique for achieving personalized user experiences on edge devices. However, existing FSL methods primarily assume independent and identically distributed (IID) data and rely on either computationally expensive backpropagation updates for each task or a common model with task-specific prototypes. Unfortunately, the former is infeasible for edge devices that lack on-device backpropagation capabilities, while the latter often generalizes poorly, especially to out-of-distribution (OOD) data. This paper proposes a lightweight, plug-and-play FSL module called Task-aware Normalization (TANO) that enables efficient, task-aware adaptation of a deep neural network without backpropagation. TANO covers the properties of multiple user groups by coordinating the updates of several groups of normalization statistics during meta-training and automatically identifies the appropriate normalization group for a downstream few-shot task. Consequently, TANO provides stable yet task-specific estimates of the normalization statistics to close the distribution gap and achieve efficient model adaptation. Experiments on both intra-domain and out-of-domain generalization demonstrate that TANO outperforms recent methods in terms of accuracy, inference speed, and model size. Moreover, TANO achieves promising results on widely used FSL benchmarks and on data from real applications.
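The mechanism described in the abstract (several groups of normalization statistics updated during meta-training, with one group selected per downstream task and no gradient updates at adaptation time) can be sketched as a drop-in normalization layer. The PyTorch snippet below is a minimal illustration only, not the authors' implementation: the class name TaskAwareNorm2d, the nearest-mean group-selection heuristic, the shared affine parameters, and all hyperparameter values are assumptions made for this sketch.

```python
import torch
import torch.nn as nn


class TaskAwareNorm2d(nn.Module):
    """Batch-norm-style layer keeping several groups of running statistics and
    normalizing each task with the best-matching group (no backprop at adaptation).
    Illustrative sketch only; not the paper's official TANO code."""

    def __init__(self, num_features, num_groups=4, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        # One (mean, var) pair per normalization group.
        self.register_buffer("running_mean", torch.zeros(num_groups, num_features))
        self.register_buffer("running_var", torch.ones(num_groups, num_features))
        # Affine parameters shared across groups (an assumption of this sketch).
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.active_group = 0  # chosen by select_group() before few-shot inference

    @torch.no_grad()
    def select_group(self, support_x):
        """Pick the group whose stored mean is closest to the support set's mean."""
        support_mean = support_x.mean(dim=(0, 2, 3))  # per-channel mean
        dists = ((self.running_mean - support_mean) ** 2).sum(dim=1)
        self.active_group = int(dists.argmin())
        return self.active_group

    def forward(self, x, group=None):
        g = self.active_group if group is None else group
        if self.training:
            # Meta-training: normalize with batch statistics and update only
            # the designated group's running estimates.
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            with torch.no_grad():
                self.running_mean[g].lerp_(mean, self.momentum)
                self.running_var[g].lerp_(var, self.momentum)
        else:
            # Few-shot inference: reuse the selected group's stored statistics.
            mean, var = self.running_mean[g], self.running_var[g]
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(
            var[None, :, None, None] + self.eps
        )
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]
```

In this sketch, one would replace the backbone's batch-normalization layers with such a module, call select_group on the support images of a new task, and then run inference directly, with no gradient updates on the device.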

