SemDeDup: Data-efficient learning at web-scale through semantic deduplication

03/16/2023
by Amro Abbas, et al.

Progress in machine learning has been driven in large part by massive increases in data. However, large web-scale datasets such as LAION are largely uncurated beyond searches for exact duplicates, potentially leaving much redundancy. Here, we introduce SemDeDup, a method which leverages embeddings from pre-trained models to identify and remove semantic duplicates: data pairs which are semantically similar, but not exactly identical. Removing semantic duplicates preserves performance and speeds up learning. Analyzing a subset of LAION, we show that SemDeDup can remove 50% of the data with minimal performance loss, effectively halving training time. Moreover, performance increases out of distribution. Also, analyzing language models trained on C4, a partially curated dataset, we show that SemDeDup improves over prior approaches while providing efficiency gains. SemDeDup provides an example of how simple ways of leveraging quality embeddings can be used to make models learn faster with less data.
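The abstract does not spell out the procedure, but the core idea (embed each example with a pre-trained model, then drop one member of every pair whose embeddings are nearly identical) can be illustrated with a minimal sketch. The function name, the 0.95 cosine-similarity threshold, and the brute-force pairwise comparison below are assumptions for illustration, not the authors' exact algorithm.

import numpy as np

def semantic_dedup(embeddings: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Return indices of examples to keep after removing semantic duplicates.

    embeddings: (n, d) array of L2-normalized embeddings from a pre-trained model.
    threshold: cosine-similarity cutoff above which two examples are treated
               as semantic duplicates (illustrative value).
    """
    n = embeddings.shape[0]
    keep = np.ones(n, dtype=bool)
    # Pairwise cosine similarities (valid because embeddings are L2-normalized).
    sims = embeddings @ embeddings.T
    for i in range(n):
        if not keep[i]:
            continue
        # Mark every later example that is a semantic duplicate of example i.
        dup = sims[i] > threshold
        dup[: i + 1] = False
        keep[dup] = False
    return np.nonzero(keep)[0]

At web scale, exhaustive pairwise comparison is infeasible; in practice one would first cluster the embeddings (for example with k-means) and only compare examples within the same cluster.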


