End-to-end Learning, with or without Labels

12/30/2019
by Corinne Jones, et al.

We present an approach for end-to-end learning that allows one to jointly learn a feature representation from unlabeled data (with or without labeled data) and predict labels for unlabeled data. The feature representation is assumed to be specified in a differentiable programming framework, that is, as a parameterized mapping amenable to automatic differentiation. The proposed approach can be used with any amount of labeled and unlabeled data, gracefully adjusting to the amount of supervision. We provide experimental results illustrating the effectiveness of the approach.
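The joint objective described above can be illustrated with a minimal sketch. The code below is an illustrative toy, not the paper's actual formulation: it assumes a scalar linear model f(x) = w·x, a supervised squared-error term on the labeled points, and a simple smoothness penalty over neighboring unlabeled inputs, with a weight `lam` that interpolates between the two so the objective adjusts to the amount of supervision.

```python
# Hypothetical sketch of a joint semi-supervised objective (not the paper's
# exact formulation): a scalar linear model f(x) = w * x trained on
#   L(w) = sum_labeled (f(x) - y)^2 + lam * sum_unlabeled smoothness,
# where the unlabeled term asks neighboring inputs to map to nearby outputs.

labeled = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # noisy samples of y ~ 2x
unlabeled = [0.5, 1.5, 2.5, 3.5]                # inputs without labels

def loss_and_grad(w, lam):
    # Supervised squared error and its gradient in w.
    loss = sum((w * x - y) ** 2 for x, y in labeled)
    grad = sum(2 * (w * x - y) * x for x, y in labeled)
    # Unsupervised smoothness over consecutive unlabeled inputs.
    for a, b in zip(unlabeled, unlabeled[1:]):
        d = w * a - w * b
        loss += lam * d * d
        grad += lam * 2 * d * (a - b)
    return loss, grad

def train(lam, lr=0.01, steps=200, w=0.0):
    # Plain gradient descent; lam controls how much the unlabeled data matter.
    for _ in range(steps):
        _, g = loss_and_grad(w, lam)
        w -= lr * g
    return w

w = train(lam=0.01)
print(round(w, 2))  # prints 2.03, close to the supervised-only fit
```

In a differentiable programming framework, the hand-written gradient above would instead come from automatic differentiation, which is what lets the feature map be any parameterized differentiable mapping rather than a fixed linear model.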


Related research

03/08/2021  A Novel Perspective for Positive-Unlabeled Learning via Noisy Labels
Positive-unlabeled learning refers to the process of training a binary c...

03/03/2021  Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments Latent Variable Estimation
Labeling data for modern machine learning is expensive and time-consumin...

06/09/2021  Semi-Supervised Training with Pseudo-Labeling for End-to-End Neural Diarization
In this paper, we present a semi-supervised training technique using pse...

06/24/2020  Labeled Optimal Partitioning
In data sequences measured over space or time, an important problem is a...

04/26/2019  Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via Posterior Regularization
Existing methods for CWS usually rely on a large number of labeled sente...

04/06/2014  Sparse Coding: A Deep Learning using Unlabeled Data for High-Level Representation
Sparse coding algorithm is a learning algorithm mainly for unsupervised...

05/07/2018  Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots
We propose a method that can leverage unlabeled data to learn a matching...
