KASAM: Spline Additive Models for Function Approximation

05/12/2022
by Heinrich van Deventer et al.

Neural networks have been criticised for their inability to perform continual learning: catastrophic forgetting causes rapid unlearning of past concepts when a new concept is introduced. Catastrophic forgetting can be alleviated by specifically designed models and training techniques. This paper outlines a novel Spline Additive Model (SAM). SAM exhibits intrinsic memory retention with sufficient expressive power for many practical tasks, but it is not a universal function approximator. Using the Kolmogorov-Arnold representation theorem, SAM is extended to a novel universal function approximator called the Kolmogorov-Arnold Spline Additive Model (KASAM). The memory retention, expressive power and limitations of SAM and KASAM are illustrated analytically and empirically. SAM exhibited robust but imperfect memory retention, with small regions of overlapping interference in sequential learning tasks. KASAM was more susceptible to catastrophic forgetting, but in combination with pseudo-rehearsal training techniques it exhibited superior performance and memory retention in regression tasks.
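To make the additive construction concrete, the following is a minimal sketch, assuming a piecewise-linear (tent-function) spline basis on a uniform grid over [0, 1]. The names `tent_basis`, `SplineAdditiveModel`, `KASAMSketch` and the parameter `num_knots` are illustrative assumptions rather than identifiers from the paper, and the training procedure (including pseudo-rehearsal) is omitted.

```python
# Minimal sketch (not the authors' implementation) of a Spline Additive Model
# and a Kolmogorov-Arnold style extension of it. The basis choice and knot
# layout are assumptions made for illustration.

import numpy as np


def tent_basis(x, num_knots=16, lo=0.0, hi=1.0):
    """Evaluate piecewise-linear (tent) basis functions at the 1-D points x.

    Returns an array of shape (len(x), num_knots). Each tent function has
    local support, so coefficient updates driven by inputs in one region
    leave the spline unchanged elsewhere.
    """
    knots = np.linspace(lo, hi, num_knots)
    width = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / width)


class SplineAdditiveModel:
    """SAM: f(x) = sum over input dimensions i of a univariate spline s_i(x_i)."""

    def __init__(self, input_dim, num_knots=16):
        # One row of trainable spline coefficients per input dimension.
        self.coef = np.zeros((input_dim, num_knots))

    def __call__(self, X):
        # X has shape (batch, input_dim); sum the per-dimension spline outputs.
        out = np.zeros(len(X))
        for i in range(X.shape[1]):
            out += tent_basis(X[:, i], self.coef.shape[1]) @ self.coef[i]
        return out


class KASAMSketch:
    """Kolmogorov-Arnold style composition: outer splines applied to inner SAMs.

    f(x) = sum_q Phi_q(g_q(x)), where each g_q is itself an additive spline
    model over the raw inputs. The inner sums mix input dimensions, which
    restores universal approximation but also reintroduces interference.
    """

    def __init__(self, input_dim, num_knots=16):
        self.inner = [SplineAdditiveModel(input_dim, num_knots)
                      for _ in range(2 * input_dim + 1)]
        self.outer = SplineAdditiveModel(2 * input_dim + 1, num_knots)

    def __call__(self, X):
        # Stack the inner SAM outputs, squash them into (0, 1) so they fall
        # inside the outer splines' domain (an illustrative choice), then
        # apply the outer additive model.
        Z = np.stack([g(X) for g in self.inner], axis=1)
        Z = 1.0 / (1.0 + np.exp(-Z))
        return self.outer(Z)
```

Because each basis function has local support, fitting data in one region of the input space only touches the coefficients active there, which is the intuition behind SAM's memory retention; the inner sums in the Kolmogorov-Arnold style composition mix input dimensions and reintroduce interference, which the abstract reports is mitigated by pseudo-rehearsal (training on new data mixed with samples of the model's own previous outputs).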
