Superposition as Data Augmentation using LSTM and HMM in Small Training Sets

10/24/2019 ∙ by Akilesh Sivaswamy, et al.

Treating audio and image data as having a quantum nature (samples are represented by density matrices), we obtained better results when training architectures such as a 3-layer stacked LSTM and an HMM by mixing training samples with superposition augmentation, compared with plain default training and mix-up augmentation. This augmentation technique originates from the mix-up approach but rests on more solid theoretical reasoning based on quantum properties. On the Russian audio-digits recognition task we achieved better accuracy with a reduced number of training samples, including 7.16% better accuracy than mix-up augmentation when training an HMM on only 500 samples of the same task. On MNIST we also achieved a 1.1% improvement when training the 3-layer stacked LSTM on only 900 samples.
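To make the augmentation idea concrete, below is a minimal sketch contrasting the standard mix-up baseline with a plausible superposition-style variant. The superposition variant is an illustrative assumption, not the paper's exact formulation: it treats inputs as state amplitudes and combines them with square-root coefficients so the mixing weights form a valid probability distribution (a² + b² = 1), while labels are mixed by the probabilities themselves. Function names and the Beta-distributed mixing coefficient follow common mix-up practice.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mix-up: convex combination of two samples and their labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def superposition_mix(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Hypothetical superposition-style mix (illustrative, not the paper's
    exact method): inputs are combined as amplitudes with sqrt coefficients,
    so the squared weights sum to one, as for a quantum superposition.
    Labels are still mixed by the probabilities lam and (1 - lam)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    a, b = np.sqrt(lam), np.sqrt(1 - lam)   # amplitude-level coefficients
    x = a * x1 + b * x2                     # amplitude combination of inputs
    y = lam * y1 + (1 - lam) * y2           # probability combination of labels
    return x, y
```

In training, either function would be applied to random pairs of samples from the small training set to synthesize additional examples before each epoch.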


