Near-Linear Time Projection onto the ℓ_1,∞ Ball; Application to Sparse Autoencoders
Inducing sparsity is nowadays crucial to speed up the training of large-scale neural networks. Projections onto the ℓ_1,2 and ℓ_1,∞ balls are among the most efficient techniques for sparsifying neural networks and reducing their overall cost. In this paper, we introduce a new projection algorithm for the ℓ_1,∞ norm ball. For a matrix in ℝ^{n×m}, its worst-case time complexity is 𝒪(nm + J log(nm)), where J is a term that tends to 0 when the sparsity is high and to nm when the sparsity is low. The algorithm is easy to implement and is guaranteed to converge to the exact solution in finite time. Moreover, we propose to incorporate the ℓ_1,∞ ball projection while training an autoencoder to enforce feature selection and sparsity of the weights. Sparsification is applied in the encoder primarily to perform feature selection, motivated by our application in biology, where only a very small fraction (<2%) of the data is relevant. We show that our method is the fastest both in the biological case and in the general sparsity setting.
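The abstract does not spell out the authors' near-linear algorithm. As a point of reference only, here is a minimal sketch of a generic bisection-based projection onto the ℓ_1,∞ ball, where ‖X‖_{1,∞} = Σ_i max_j |X_ij|: bisect on the Lagrange multiplier λ of the norm constraint, and solve each row's subproblem (the prox of λ‖·‖_∞) via the Moreau decomposition with a standard sort-based ℓ_1-ball projection. The function names, tolerance, and overall approach are illustrative assumptions, not the paper's method.

```python
import numpy as np

def proj_l1(v, z):
    # Euclidean projection of vector v onto the l1 ball of radius z > 0,
    # using the classic sort-and-threshold scheme.
    if np.sum(np.abs(v)) <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes in decreasing order
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - z)[0][-1]
    theta = (css[k] - z) / (k + 1.0)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def proj_l1inf(X, c, tol=1e-8):
    # Project matrix X onto {Y : sum_i max_j |Y_ij| <= c} by bisection on
    # the multiplier lam. For fixed lam the rows decouple, and the prox of
    # lam*||.||_inf is x - proj_l1(x, lam) by Moreau decomposition.
    if np.sum(np.max(np.abs(X), axis=1)) <= c:
        return X.copy()                   # already inside the ball

    def rows_prox(lam):
        return np.stack([x - proj_l1(x, lam) for x in X])

    lo = 0.0
    hi = np.max(np.sum(np.abs(X), axis=1))  # at lam=hi every row shrinks to 0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        Y = rows_prox(lam)
        if np.sum(np.max(np.abs(Y), axis=1)) > c:
            lo = lam                      # constraint still violated
        else:
            hi = lam
    return rows_prox(hi)
```

This baseline costs 𝒪(nm log m) per bisection step, which illustrates why an exact 𝒪(nm + J log(nm)) method is attractive when the projection is applied at every training iteration.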