Compressed Dictionary Learning

05/02/2018 ∙ by Flavio Teixeira, et al.

In this paper we show that the computational complexity of the Iterative Thresholding and K-Residual-Means (ITKrM) algorithm for dictionary learning can be significantly reduced by using dimensionality-reduction techniques based on the Johnson-Lindenstrauss lemma. We introduce the Iterative Compressed-Thresholding and K-Means (IcTKM) algorithm for fast dictionary learning and study its convergence properties. We show that IcTKM can locally recover a generating dictionary with low computational complexity up to a target error ε̃ by compressing d-dimensional training data into m < d dimensions, where m is proportional to log(d) and inversely proportional to the distortion level δ incurred by compressing the data. Increasing the distortion level δ reduces the computational complexity of IcTKM at the cost of an increased recovery error and a reduced admissible sparsity level for the training data. For generating dictionaries comprised of K atoms, we show that IcTKM can stably recover the dictionary with distortion levels up to the order δ ≤ O(1/√(log K)). The compression effectively shatters the data-dimension bottleneck in the computational cost of the ITKrM algorithm. For training data with sparsity levels S ≤ O(K^(2/3)), ITKrM can locally recover the dictionary with a computational cost that scales as O(dK log(ε̃^(-1))) per training signal. We show that for these same sparsity levels the computational cost can be brought down to O(log^5(d) K log(ε̃^(-1))) with IcTKM, a significant reduction when high-dimensional data is considered. Our theoretical results are complemented with numerical simulations which demonstrate that IcTKM is a powerful, low-cost algorithm for learning dictionaries from high-dimensional data sets.
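The core idea of compressing d-dimensional training signals into m < d dimensions while approximately preserving geometry can be illustrated with a plain random Gaussian projection, one standard way to realize a Johnson-Lindenstrauss embedding. The sketch below is not the paper's IcTKM implementation (which pairs such an embedding with thresholding and K-residual-means updates); the dimensions, sample count, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, N = 1024, 64, 100          # ambient dim, compressed dim, number of signals (assumed values)
Y = rng.standard_normal((d, N))  # toy training signals, one per column

# JL-style compression: a random Gaussian matrix scaled by 1/sqrt(m)
# approximately preserves pairwise distances with distortion delta,
# provided m grows like delta^(-2) * log(number of points).
Phi = rng.standard_normal((m, d)) / np.sqrt(m)
Y_compressed = Phi @ Y           # m-dimensional sketches of the signals

# Empirical distortion on one pair of signals: the ratio of compressed
# to original distance should be close to 1 for adequate m.
orig = np.linalg.norm(Y[:, 0] - Y[:, 1])
comp = np.linalg.norm(Y_compressed[:, 0] - Y_compressed[:, 1])
distortion = abs(comp / orig - 1)
print(distortion)
```

Subsequent per-signal operations (e.g. inner products with dictionary atoms) then cost O(mK) instead of O(dK), which is where the complexity savings in the abstract come from; fast JL transforms reduce the cost of applying the embedding itself.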





