Learning in the Wild with Incremental Skeptical Gaussian Processes

11/02/2020
by   Andrea Bontempelli, et al.

The ability to learn from human supervision is fundamental for personal assistants and other interactive applications of AI. Two central challenges for deploying interactive learners in the wild are the unreliable nature of the supervision and the varying complexity of the prediction task. We address a simple but representative setting, incremental classification in the wild, where the supervision is noisy and the number of classes grows over time. To tackle this task, we propose a redesign of skeptical learning centered on Gaussian Processes (GPs). Skeptical learning is a recent interactive strategy in which, if the machine is sufficiently confident that an example is mislabeled, it asks the annotator to reconsider her feedback. This is often enough to recover clean supervision. Our redesign, dubbed ISGP, leverages the uncertainty estimates supplied by GPs to better allocate labeling and contradiction queries, especially in the presence of noise. Our experiments on synthetic and real-world data show that, as a result, while the original formulation of skeptical learning produces over-confident models that can fail completely in the wild, ISGP works well at varying levels of noise and as new classes are observed.
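The skeptical-learning loop described above can be sketched in a few lines. This is a minimal illustration only, not the authors' ISGP: it uses scikit-learn's `GaussianProcessClassifier` as a stand-in for the paper's incremental GP, and the toy data stream, the confidence `threshold`, the trusted `warmup` seed set, and the simulated annotator (who reveals the clean label when queried) are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy 1-D stream: class 0 centered at -2, class 1 at +2, interleaved.
y_true = np.tile([0, 1], 20)
X = np.where(y_true == 0, -2.0, 2.0).reshape(-1, 1) + rng.normal(0, 0.5, (40, 1))

# The annotator flips ~20% of the labels (noisy supervision "in the wild").
y_noisy = y_true.copy()
flip = rng.random(40) < 0.2
y_noisy[flip] = 1 - y_noisy[flip]

def skeptical_fit(X, y_noisy, y_true, threshold=0.8, warmup=10):
    """Process examples one at a time; accept the annotator's label unless
    the GP confidently contradicts it, in which case issue a contradiction
    query (simulated here by revealing the ground-truth label)."""
    seen_X = list(X[:warmup])
    seen_y = list(y_true[:warmup])   # small trusted seed set for bootstrapping
    queries = 0
    for x, y_ann, y_gt in zip(X[warmup:], y_noisy[warmup:], y_true[warmup:]):
        gp = GaussianProcessClassifier(kernel=RBF(1.0)).fit(
            np.array(seen_X), np.array(seen_y))
        proba = gp.predict_proba(x.reshape(1, -1))[0]
        y_hat = int(np.argmax(proba))
        if y_hat != y_ann and proba[y_hat] >= threshold:
            queries += 1    # machine is skeptical: ask annotator to re-check
            y_ann = y_gt    # re-checking yields the clean label
        seen_X.append(x)
        seen_y.append(y_ann)
    return np.array(seen_y), queries

accepted, n_queries = skeptical_fit(X, y_noisy, y_true)
```

For simplicity the sketch refits the GP from scratch at every step; the paper's incremental setting would instead update the posterior online and use its calibrated uncertainty to decide when a contradiction query is worth its cost.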


