Robustly Leveraging Prior Knowledge in Text Classification

03/03/2015
by Biao Liu, et al.

Prior knowledge has been shown to be very useful for addressing many natural language processing tasks. Many approaches have been proposed to formalise a variety of knowledge; however, whether a proposed approach is robust or sensitive to the knowledge supplied to the model has rarely been discussed. In this paper, we propose three regularization terms on top of generalized expectation criteria and conduct extensive experiments to justify the robustness of the proposed methods. Experimental results demonstrate that our methods achieve remarkable improvements and are much more robust than the baselines.
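
For context, the generalized expectation (GE) criterion that these regularizers build on can be sketched in its standard form (the paper's three specific regularization terms are not given in this abstract, so only a generic L2 penalty is shown as a placeholder). Given a set K of labeled features with human-provided reference label distributions p̂_k, and the set U_k of unlabeled documents containing feature k, the parameters θ of a conditional model p_θ(y | x) are learned by maximizing

$$
O(\theta) \;=\; -\sum_{k \in K} \mathrm{KL}\!\left(\hat{p}_k \,\big\|\, \tilde{p}_\theta(y \mid k)\right) \;-\; \lambda \sum_j \theta_j^2,
\qquad
\tilde{p}_\theta(y \mid k) \;=\; \frac{1}{|U_k|} \sum_{x \in U_k} p_\theta(y \mid x),
$$

i.e. the model's average predicted label distribution over documents containing feature k is pushed toward the distribution supplied as prior knowledge. The robustness question studied in the paper concerns what happens when the reference distributions p̂_k are noisy or imprecise.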
