Learning Neural Networks on SVD Boosted Latent Spaces for Semantic Classification
The availability of large amounts of data and compelling computation power has made deep learning models popular for text classification and sentiment analysis. Deep neural networks achieve competitive performance on these tasks when trained on naive text representations such as word counts, term frequencies, and binary matrix embeddings. However, many of these representations give the input space a dimension on the order of the vocabulary size, which is enormous. This leads to a blowup in the number of parameters to be learned, and the computational cost becomes infeasible when scaling to domains that require retaining a colossal vocabulary. This work proposes using singular value decomposition to transform the high-dimensional input space into a lower-dimensional latent space. We show that neural networks trained on this lower-dimensional space not only retain performance while enjoying a significant reduction in computational complexity but, in many cases, also outperform classical neural networks trained on the native input space.
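The pipeline the abstract describes can be sketched roughly as follows, here using scikit-learn's truncated SVD and a small multilayer perceptron; the toy corpus, the latent dimension of 2, and the network size are illustrative assumptions, not details from the paper.

```python
# Sketch: SVD-based projection of a bag-of-words input space into a
# low-dimensional latent space before training a neural classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier

# Tiny toy corpus (hypothetical); labels mark positive (1) / negative (0).
corpus = [
    "great food great service",
    "terrible food bad service",
    "loved the product",
    "awful product hated it",
]
labels = [1, 0, 1, 0]

# High-dimensional sparse representation: one column per vocabulary word.
X = CountVectorizer().fit_transform(corpus)

# Project onto a low-rank latent space via truncated SVD, shrinking the
# input dimension from vocabulary size down to n_components.
svd = TruncatedSVD(n_components=2, random_state=0)
X_latent = svd.fit_transform(X)

# Train a small neural network on the reduced representation; its input
# layer now has 2 units instead of one per vocabulary word.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_latent, labels)
```

At inference time, new documents would be passed through the same fitted vectorizer and `svd.transform` before `clf.predict`, so the expensive vocabulary-sized layer never appears in the network itself.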