Information Planning for Text Data

by Vadim Smolyakov et al.

Information planning enables faster learning with fewer training examples. It is particularly applicable when training examples are costly to obtain. This work examines the advantages of information planning for text data by focusing on three supervised models: Naive Bayes, supervised LDA, and deep neural networks. We show that planning based on entropy and mutual information outperforms a random-selection baseline and therefore accelerates learning.
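The entropy-based planning the abstract describes can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal, generic example of uncertainty sampling, assuming a classifier that outputs per-class probabilities for an unlabeled pool: the examples with the highest predictive entropy are selected for labeling next, rather than a random subset.

```python
import numpy as np

def entropy_scores(probs):
    """Predictive entropy per example; probs has shape (n_examples, n_classes)."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_batch(probs, k):
    """Return indices of the k most uncertain (highest-entropy) pool examples."""
    return np.argsort(entropy_scores(probs))[::-1][:k]

# Toy pool: three examples with increasingly uncertain predictions.
probs = np.array([[0.95, 0.05],   # confident
                  [0.70, 0.30],   # somewhat uncertain
                  [0.50, 0.50]])  # maximally uncertain
print(select_batch(probs, 2))  # the two highest-entropy rows: [2 1]
```

In an active-learning loop, the selected examples would be labeled, added to the training set, and the model retrained, which is how information planning can reduce the number of labels needed compared with random selection.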
