Design of Supervision-Scalable Learning Systems: Methodology and Performance Benchmarking

06/18/2022
by   Yijing Yang, et al.

The design of robust learning systems that offer stable performance under a wide range of supervision degrees is investigated in this work. We choose the image classification problem as an illustrative example and focus on the design of modularized systems that consist of three learning modules: representation learning, feature learning, and decision learning. We discuss ways to adjust each module so that the design is robust with respect to different numbers of training samples. Based on these ideas, we propose two families of learning systems. One adopts the classical histogram of oriented gradients (HOG) features while the other uses successive-subspace-learning (SSL) features. We test their performance against LeNet-5, an end-to-end optimized neural network, on the MNIST and Fashion-MNIST datasets. The number of training samples per image class ranges from the extremely weak supervision condition (i.e., 1 labeled sample per class) to the strong supervision condition (i.e., 4096 labeled samples per class), with a gradual transition in between (i.e., 2^n, n=0, 1, ⋯, 12). Experimental results show that the two families of modularized learning systems have more robust performance than LeNet-5. Both outperform LeNet-5 by a large margin for small n and achieve performance comparable with that of LeNet-5 for large n.
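To make the supervision sweep concrete, the sketch below pairs simplified HOG-style gradient-orientation features with a nearest-class-mean decision rule and evaluates accuracy as the number of labeled samples per class grows as 2^n. This is a minimal illustration on synthetic two-class images, not the paper's actual pipeline: `hog_features`, `nearest_mean_accuracy`, the cell size, and the bin count are all illustrative assumptions.

```python
import numpy as np

def hog_features(img, n_bins=9, cell=7):
    """Simplified HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude and L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-8)

def nearest_mean_accuracy(X_train, y_train, X_test, y_test):
    """Classify each test vector by the nearest class-mean feature vector."""
    classes = np.unique(y_train)
    means = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = ((X_test[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    pred = classes[np.argmin(dists, axis=1)]
    return float((pred == y_test).mean())

# Synthetic two-class data: horizontal-bar vs vertical-bar 28x28 images.
rng = np.random.default_rng(0)
def make_sample(label):
    img = rng.normal(0, 0.1, (28, 28))
    if label == 0:
        img[10:18, :] += 1.0   # horizontal bar
    else:
        img[:, 10:18] += 1.0   # vertical bar
    return img

labels = np.array([0, 1] * 64)
X = np.stack([hog_features(make_sample(l)) for l in labels])
X_test, y_test = X[64:], labels[64:]          # held-out half
for n in [1, 4, 16]:                          # labeled samples per class: 2^0, 2^2, 2^4
    idx = np.concatenate([np.where(labels[:64] == c)[0][:n] for c in (0, 1)])
    acc = nearest_mean_accuracy(X[idx], labels[idx], X_test, y_test)
    print(f"n={n:2d} per class: accuracy={acc:.2f}")
```

Because the orientation histograms of the two classes are nearly orthogonal, the class means separate well even from a single labeled sample per class, mirroring the robustness under weak supervision that the paper targets.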

