Fitted Learning: Models with Awareness of their Limits

09/07/2016
by Navid Kardan, et al.

Though deep learning has pushed the boundaries of classification forward, hints of the limits of standard classification have begun to emerge in recent years. Problems such as fooling, adding new classes over time, and the need to retrain learning models after only small changes to the original problem all point to a potential shortcoming of the classic classification regime, in which comprehensive a priori knowledge of the possible classes or concepts is critical. Without such knowledge, classifiers misjudge the limits of their knowledge, and overgeneralization therefore becomes a serious obstacle to consistent performance. In response to these challenges, this paper extends the classic regime by reframing classification under the assumption that the concepts present in the training set are only a sample of a hypothetical final set of concepts. To bring learning models into this new paradigm, a novel elaboration of standard architectures, the competitive overcomplete output layer (COOL) neural network, is introduced. Experiments demonstrate the effectiveness of COOL by applying it to fooling, separable concept learning, one-class neural networks, and standard classification benchmarks. The results suggest that, unlike in conventional classifiers, the amount of generalization in COOL networks can be tuned to match the problem.
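The abstract names the COOL architecture without detailing it. Below is a minimal PyTorch sketch of what a COOL-style output layer could look like, assuming the construction described in the paper: each class is represented by omega duplicate output units (the "degree of overcompleteness") competing under a single softmax, training targets split the true class's probability mass evenly across its member units, and a class's final score is the product of its members' outputs rescaled by omega^omega. All names here (COOLOutputLayer, cool_target, omega) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class COOLOutputLayer(nn.Module):
    """Competitive overcomplete output layer: omega units per class,
    all competing under one softmax."""

    def __init__(self, in_features: int, num_classes: int, omega: int = 3):
        super().__init__()
        self.num_classes = num_classes
        self.omega = omega
        # One logit per (class, member-unit) pair.
        self.fc = nn.Linear(in_features, num_classes * omega)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax over ALL units, so member units of the same class
        # compete with each other as well as with other classes' units.
        member_probs = torch.softmax(self.fc(x), dim=-1)
        return member_probs.view(-1, self.num_classes, self.omega)

    def class_scores(self, x: torch.Tensor) -> torch.Tensor:
        # Aggregate member outputs by product; omega**omega rescales so
        # a class whose members all agree at 1/omega scores near 1.
        return self.forward(x).prod(dim=-1) * (self.omega ** self.omega)


def cool_target(labels: torch.Tensor, num_classes: int, omega: int) -> torch.Tensor:
    # Training target spreads the true class's probability mass evenly
    # (1/omega per member unit); all other units get 0.
    t = torch.zeros(labels.size(0), num_classes, omega)
    t[torch.arange(labels.size(0)), labels, :] = 1.0 / omega
    return t


# Toy usage: cross-entropy against the soft COOL target.
layer = COOLOutputLayer(in_features=64, num_classes=10, omega=3)
feats = torch.randn(8, 64)
probs = layer(feats)                                    # (8, 10, 3) member probabilities
target = cool_target(torch.randint(0, 10, (8,)), 10, 3)
loss = -(target * probs.clamp_min(1e-12).log()).sum(dim=(1, 2)).mean()
scores = layer.class_scores(feats)                      # (8, 10) per-class scores
```

Under this reading, the product aggregation means a class receives high confidence only where all of its member units agree, which is the mechanism that would let the amount of generalization be tuned (here via omega) rather than fixed by the architecture.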


