
Semi-supervised and Population Based Training for Voice Commands Recognition

by Oğuz H. Elibol, et al.

We present a rapid design methodology that combines automated hyper-parameter tuning with semi-supervised training to build highly accurate and robust models for voice command classification. The proposed approach allows quick evaluation of network architectures to fit the performance and power constraints of the available hardware, while ensuring good hyper-parameter choices for each network in real-world scenarios. Leveraging the vast amount of unlabeled data with a student/teacher based semi-supervised method, classification accuracy is improved from 84% to [...]. We explore the hyper-parameter space through population based training and obtain an optimized model in the same time frame as it takes to train a single model.
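The paper itself does not include code, but the student/teacher semi-supervised step it describes is commonly implemented as confidence-thresholded pseudo-labeling: a trained teacher scores the unlabeled pool, and only examples where the teacher is sufficiently confident are added (with the teacher's predicted label) to the student's training set. A minimal sketch of that filtering step, with the `pseudo_label` name and the 0.9 threshold chosen here for illustration:

```python
import numpy as np

def pseudo_label(teacher_probs, threshold=0.9):
    """Select unlabeled examples the teacher is confident about.

    teacher_probs: (n_examples, n_classes) array of teacher softmax outputs.
    Returns a boolean mask of examples to keep and the teacher's
    predicted label for every example.
    """
    confidence = teacher_probs.max(axis=1)       # top-class probability
    keep = confidence >= threshold               # confident examples only
    labels = teacher_probs.argmax(axis=1)        # teacher's hard labels
    return keep, labels

# Toy example: three unlabeled utterances scored over two command classes.
probs = np.array([[0.95, 0.05],   # confident "class 0" -> kept
                  [0.60, 0.40],   # uncertain          -> dropped
                  [0.10, 0.90]])  # confident "class 1" -> kept
keep, labels = pseudo_label(probs)
student_labels = labels[keep]     # would be appended to the labeled set
```

The kept examples are then mixed with the original labeled data to train the student, which can in turn serve as the next teacher.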
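Population based training, the second ingredient, trains a population of workers in parallel and periodically has poorly performing workers copy the checkpoint of a top performer (exploit) and perturb its hyper-parameters (explore), so tuning happens within a single training run. A sketch of one exploit/explore step, assuming each worker is a dict with `score`, `lr`, and `weights` fields and using a 25% truncation and ×0.8/×1.2 perturbation as illustrative choices (the paper does not specify these details):

```python
import random

def pbt_step(population, perturb=(0.8, 1.2), frac=0.25, rng=random):
    """One exploit/explore step of population based training.

    population: list of worker dicts with 'score', 'lr', 'weights'.
    The bottom `frac` of workers copy weights from a random top-`frac`
    worker (exploit) and multiply its learning rate by a random
    perturbation factor (explore). Mutates workers in place.
    """
    ranked = sorted(population, key=lambda w: w["score"], reverse=True)
    n = max(1, int(len(ranked) * frac))
    top, bottom = ranked[:n], ranked[-n:]
    for worker in bottom:
        source = rng.choice(top)
        worker["weights"] = source["weights"]              # exploit: copy checkpoint
        worker["lr"] = source["lr"] * rng.choice(perturb)  # explore: perturb hyper-param
    return population

# Toy population of four workers after an evaluation round.
pop = [
    {"score": 1.0, "lr": 0.1, "weights": "ckpt_a"},  # worst: will exploit/explore
    {"score": 4.0, "lr": 0.2, "weights": "ckpt_b"},  # best: gets copied
    {"score": 2.0, "lr": 0.3, "weights": "ckpt_c"},
    {"score": 3.0, "lr": 0.4, "weights": "ckpt_d"},
]
pbt_step(pop, rng=random.Random(0))
```

Because all workers train concurrently and tuning happens between checkpoints, the wall-clock cost is close to that of a single training run, matching the abstract's claim.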
