Information Planning for Text Data

02/09/2018
by   Vadim Smolyakov, et al.

Information planning enables faster learning with fewer training examples. It is particularly useful when training examples are costly to obtain. This work examines the advantages of information planning for text data by focusing on three supervised models: Naive Bayes, supervised LDA, and deep neural networks. We show that planning based on entropy and mutual information outperforms a random-selection baseline and therefore accelerates learning.
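The abstract's core idea, choosing which unlabeled examples to label next by their information content, can be illustrated with entropy-based selection (uncertainty sampling). The sketch below is not the paper's implementation; the function names and the toy probability pool are illustrative assumptions. It ranks pool examples by the Shannon entropy of the model's predictive distribution and picks the most uncertain ones:

```python
from math import log

def entropy(probs):
    """Shannon entropy of a discrete distribution (list of class probabilities)."""
    return -sum(p * log(p) for p in probs if p > 0.0)

def select_most_informative(pool_probs, k):
    """Return indices of the k pool examples with the highest predictive entropy."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:k]

# Hypothetical predictive distributions over 3 classes for 3 unlabeled documents
pool = [
    [0.98, 0.01, 0.01],   # confident prediction -> low entropy
    [0.34, 0.33, 0.33],   # near-uniform -> high entropy, most informative
    [0.70, 0.20, 0.10],
]
print(select_most_informative(pool, 1))  # [1]
```

In an active-learning loop, the selected examples would be labeled and added to the training set, after which the model is retrained and the pool re-scored.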


Related research

06/11/2023 · An Information-Theoretic Approach to Semi-supervised Transfer Learning
Transfer learning is a valuable tool in deep learning as it allows propa...

06/03/2013 · Learning from networked examples in a k-partite graph
Many machine learning algorithms are based on the assumption that traini...

12/11/2020 · When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Modern machine learning models are complex and frequently encode surpris...

04/30/2015 · Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
Three important properties of a classification machinery are: (i) the sy...

08/31/2012 · Statistically adaptive learning for a general class of cost functions (SA L-BFGS)
We present a system that enables rapid model experimentation for tera-sc...

07/19/2019 · Learning More From Less: Towards Strengthening Weak Supervision for Ad-Hoc Retrieval
The limited availability of ground truth relevance labels has been a maj...

08/21/2018 · Demonstrating PAR4SEM - A Semantic Writing Aid with Adaptive Paraphrasing
In this paper, we present Par4Sem, a semantic writing aid tool based on ...
