Self-Teaching Networks

09/09/2019
by Liang Lu, et al.

We propose self-teaching networks to improve the generalization ability of deep neural networks. The idea is to generate soft supervision labels from the output layer and use them to train the lower layers of the network. During training, we introduce an auxiliary loss that drives a lower layer to mimic the behavior of the output layer. The connection between the two layers through the auxiliary loss helps the gradient flow, similarly to residual networks. Furthermore, the auxiliary loss acts as a regularizer, which improves the generalization ability of the network. We evaluated self-teaching networks with deep recurrent neural networks on speech recognition tasks, training the acoustic model on 30 thousand hours of data and testing it on data collected from four scenarios. We show that the self-teaching network achieves consistent improvements and outperforms existing methods such as label smoothing and confidence penalization.
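To make the training objective concrete, here is a minimal NumPy sketch of the self-teaching idea described above: the output layer's softmax provides soft labels, and an auxiliary loss (here a KL divergence) pushes a lower layer's auxiliary predictions toward them. The function and parameter names (`self_teaching_loss`, `aux_weight`) are illustrative assumptions, not the authors' code, and the auxiliary-head details are simplified.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the integer class labels.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def kl_divergence(p, q):
    # KL(p || q), averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def self_teaching_loss(output_logits, lower_logits, labels, aux_weight=0.1):
    """Primary cross-entropy plus an auxiliary loss that drives the
    lower layer's auxiliary head toward the output layer's soft labels.
    In a real framework the soft labels would be detached from the
    computation graph so they act as a fixed teacher signal."""
    soft_labels = softmax(output_logits)   # teacher: output layer
    lower_probs = softmax(lower_logits)    # student: lower-layer head
    main_loss = cross_entropy(soft_labels, labels)
    aux_loss = kl_divergence(soft_labels, lower_probs)
    return main_loss + aux_weight * aux_loss

# Toy usage with random logits for a batch of 8 examples, 10 classes.
rng = np.random.default_rng(0)
out_logits = rng.normal(size=(8, 10))
low_logits = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
print(self_teaching_loss(out_logits, low_logits, labels))
```

Because the KL term is nonnegative, the auxiliary loss can only add to the primary objective, and it vanishes when the lower layer's predictions match the output layer exactly.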


Related research

03/29/2022 | Improving Generalization of Deep Neural Network Acoustic Models with Length Perturbation and N-best Based Label Smoothing
We introduce two techniques, length perturbation and n-best based label ...

01/13/2022 | STEdge: Self-training Edge Detection with Multi-layer Teaching and Regularization
Learning-based edge detection has hereunto been strongly supervised with...

01/10/2017 | Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition
In this paper, a novel architecture for a deep recurrent neural network,...

10/13/2019 | What happens when self-supervision meets Noisy Labels?
The major driving force behind the immense success of deep learning mode...

02/09/2021 | Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers
Although the lower layers of a deep neural network learn features which ...

04/05/2018 | Regularizing Deep Networks by Modeling and Predicting Label Structure
We construct custom regularization functions for use in supervised train...

11/05/2020 | Teaching with Commentaries
Effective training of deep neural networks can be challenging, and there...
