Learning Concept Abstractness Using Weak Supervision

09/05/2018
by Ella Rabinovich, et al.

We introduce a weakly supervised approach for inferring the property of abstractness of words and expressions in the complete absence of labeled data. Exploiting only minimal linguistic clues and the contextual usage of a concept as manifested in textual data, we train sufficiently powerful classifiers, obtaining high correlation with human labels. The results imply the applicability of this approach to additional properties of concepts, additional languages, and resource-scarce scenarios.
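The core idea can be illustrated with a minimal sketch (all heuristics, the toy corpus, and every name below are assumptions for illustration, not the authors' actual method): a morphological suffix rule and a physical-material context cue act as noisy labeling functions that produce seed labels for abstractness, and a simple bag-of-context classifier is then trained from those seeds alone, with no hand-labeled data.

```python
# Illustrative sketch of weak supervision for concept abstractness.
# The corpus, cues, and classifier are toy assumptions, not the paper's setup.
from collections import Counter

# Toy corpus: each entry is (target word, its context words).
CORPUS = [
    ("happiness", ["feel", "deep", "sense", "of"]),
    ("freedom",   ["value", "sense", "of", "believe"]),
    ("table",     ["wooden", "kitchen", "on", "the"]),
    ("hammer",    ["steel", "heavy", "hit", "the"]),
    ("curiosity", ["sense", "of", "deep", "feel"]),
    ("bottle",    ["glass", "on", "the", "kitchen"]),
]

ABSTRACT_SUFFIXES = ("ness", "ity", "dom", "tion")   # minimal linguistic clue
MATERIAL_CUES = {"wooden", "steel", "glass"}          # physical-context clue

def seed_label(word, context):
    """Weak labeling: 1 = abstract, 0 = concrete, None = no clue fired."""
    if word.endswith(ABSTRACT_SUFFIXES):
        return 1
    if any(c in MATERIAL_CUES for c in context):
        return 0
    return None

# Aggregate context-word counts per class over the seed-labeled examples.
profiles = {0: Counter(), 1: Counter()}
for word, ctx in CORPUS:
    label = seed_label(word, ctx)
    if label is not None:
        profiles[label].update(ctx)

def classify(context):
    """Label an unseen concept by context overlap with each class profile."""
    scores = {lab: sum(prof[c] for c in context) for lab, prof in profiles.items()}
    return max(scores, key=scores.get)

print(classify(["sense", "of", "feel"]))    # contexts typical of abstract seeds
print(classify(["kitchen", "on", "the"]))   # contexts typical of concrete seeds
```

In a realistic setting the labeling functions would be applied over a large corpus and the classifier would use distributional embeddings rather than raw context counts, but the pipeline shape (noisy clues → seed labels → trained classifier) is the same.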


