Unsupervised Learning of Sequence Representations by Autoencoders

04/03/2018
by Wenjie Pei, et al.

Traditional machine learning models struggle with sequence data because the lengths of sequences may vary between samples. In this paper, we present an unsupervised learning model for sequence data, called the Integrated Sequence Autoencoder (ISA), which learns a fixed-length vectorial representation by minimizing the reconstruction error. Specifically, we propose to integrate two classical mechanisms for sequence reconstruction, which take into account both the global silhouette information and the local temporal dependencies. Furthermore, we propose a stop feature that serves as a temporal stamp to guide the reconstruction process and results in a higher-quality representation. Extensive validation on real-world datasets shows that the learned representation is able to effectively summarize not only the apparent features, but also the underlying, high-level style information. Take, for example, a speech sequence sample: our ISA model can not only recognize the spoken text (apparent feature), but can also discriminate the speaker who utters the audio (higher-level style).
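To make the core idea concrete, the following is a minimal NumPy sketch of a recurrent sequence autoencoder forward pass, not the paper's ISA architecture: an encoder RNN compresses a variable-length sequence into a fixed-length code (its final hidden state), and a decoder unrolls that code to reconstruct the sequence. All dimensions, weight names, and the plain Elman-RNN cells are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 3-dim input frames, 4-dim code.
D_IN, D_H = 3, 4

# Encoder weights for a simple Elman RNN; the final hidden state is the code.
W_xh = rng.normal(scale=0.1, size=(D_H, D_IN))
W_hh = rng.normal(scale=0.1, size=(D_H, D_H))

def encode(seq):
    """Map a variable-length sequence of shape (T, D_IN) to a code of shape (D_H,)."""
    h = np.zeros(D_H)
    for x_t in seq:
        h = np.tanh(W_xh @ x_t + W_hh @ h)
    return h

# Decoder weights: project the evolving hidden state back to input space.
W_hx = rng.normal(scale=0.1, size=(D_IN, D_H))

def decode(code, T):
    """Unroll the fixed-length code for T steps, emitting one frame per step."""
    h, outputs = code, []
    for _ in range(T):
        outputs.append(W_hx @ h)
        h = np.tanh(W_hh @ h)
    return np.stack(outputs)

def reconstruction_error(seq):
    """Mean squared error between the sequence and its reconstruction."""
    code = encode(seq)
    return float(np.mean((decode(code, len(seq)) - seq) ** 2))

# Sequences of different lengths map to codes of identical, fixed size.
short = rng.normal(size=(5, D_IN))
long = rng.normal(size=(12, D_IN))
assert encode(short).shape == encode(long).shape == (D_H,)
```

Training would adjust the weights by gradient descent on `reconstruction_error`; the point here is only that the code has the same shape regardless of sequence length, which is what makes it usable as input to conventional fixed-length classifiers.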
