Detecting Morphing Attacks via Continual Incremental Training

07/27/2023
by Lorenzo Pellegrini, et al.

Scenarios in which restrictions on data transfer and storage prevent the composition of a single dataset – possibly drawn from multiple data sources – for a batch-based training procedure make the development of robust models particularly challenging. We hypothesize that the recent Continual Learning (CL) paradigm may represent an effective solution to enable incremental training, even across multiple sites. Indeed, a basic assumption of CL is that, once a model has been trained on a batch of data, that data can no longer be used in successive training iterations and can in principle be deleted. Therefore, in this paper we investigate the performance of different Continual Learning methods in this scenario, simulating a learning model that is updated every time a new chunk of data, possibly of variable size, becomes available. Experimental results reveal that a particular CL method, namely Learning without Forgetting (LwF), is one of the best-performing algorithms. We then investigate its usage and parametrization in Morphing Attack Detection and Object Classification tasks, specifically with respect to the amount of new training data that becomes available.
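To illustrate the idea behind LwF-style incremental updates on arriving data chunks, the sketch below shows a minimal, hedged PyTorch example: the standard knowledge-distillation term keeps the updated model close to a frozen snapshot of its previous version while it learns from new data, without replaying old samples. This is not the authors' implementation; the `model`, `chunk_loader`, `optimizer`, `alpha`, and `temperature` names are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def lwf_distillation_loss(new_logits, old_logits, temperature=2.0):
    """Knowledge-distillation term used in Learning without Forgetting (LwF).

    Softens the current and previous models' outputs with a temperature and
    penalizes their divergence, so the updated model keeps behaving like the
    old one while learning from the new chunk.
    """
    log_p_new = F.log_softmax(new_logits / temperature, dim=1)
    p_old = F.softmax(old_logits / temperature, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (temperature ** 2)

def train_on_chunk(model, chunk_loader, optimizer, alpha=1.0, temperature=2.0):
    """Update `model` on a newly available data chunk without storing old data.

    Illustrative sketch: hyperparameters and loop structure are assumptions.
    """
    old_model = copy.deepcopy(model).eval()      # frozen snapshot of the previous model
    for p in old_model.parameters():
        p.requires_grad_(False)

    model.train()
    for images, labels in chunk_loader:
        optimizer.zero_grad()
        logits = model(images)
        with torch.no_grad():
            old_logits = old_model(images)       # soft targets from the previous model
        loss = F.cross_entropy(logits, labels) \
             + alpha * lwf_distillation_loss(logits, old_logits, temperature)
        loss.backward()
        optimizer.step()
```

In this setting, `alpha` trades off plasticity on the new chunk against stability on previously acquired knowledge; the paper studies how such parametrization should vary with the amount of new training data.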

