DTT: An Example-Driven Tabular Transformer by Leveraging Large Language Models

03/12/2023
by Arash Dargahi Nobari, et al.

Many organizations rely on data from government and third-party sources, and these sources rarely follow the same data formatting, which complicates integrating data from multiple sources. Commercial database systems offer little support for integrating data from heterogeneous sources, and manual integration is time-consuming and error-prone. While state-of-the-art approaches rely on similarity functions and textual transformations, they often fail on challenging cases where multiple mappings are required or the mappings go beyond simple textual transformations. In this paper, we study the potential of deep neural models for transforming tables for joinability. In particular, we cast the problem as a prediction task and develop a framework that leverages large deep-learning language models to transform tabular data from a source formatting to a desired target representation. Our framework can efficiently learn the pattern for mapping the source formatting into the expected target from just a few examples; the learned mapping can then be used for table joining, filling in missing values, and error detection. Compared to state-of-the-art mapping and joining approaches, our framework delivers noticeably more accurate and scalable performance on both real-world and synthetic datasets. Our experimental evaluation also shows that the performance of the proposed framework using our fine-tuned model is on par with or better than that of large language models such as GPT-3, despite the significant difference in size, and that integrating large language models into our framework improves their performance.
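The example-driven setup described above can be illustrated with a minimal sketch. The function name, prompt format, and the "First Last" to "Last, F." mapping below are illustrative assumptions, not the paper's actual implementation: given a few (source, target) pairs, one can serialize them into a single few-shot prompt that a sequence-to-sequence language model would complete with the target representation of an unseen source value.

```python
def build_transformation_prompt(examples, query):
    """Serialize (source, target) example pairs plus an unseen source
    value into one few-shot prompt string for a seq2seq model.

    examples: list of (source, target) tuples demonstrating the mapping.
    query: the source value whose target representation is wanted.
    """
    lines = [f"{src} -> {tgt}" for src, tgt in examples]
    # The model is expected to continue the final line with the target.
    lines.append(f"{query} -> ")
    return "\n".join(lines)

# Learning a name-formatting mapping from two examples.
examples = [("John Smith", "Smith, J."), ("Jane Doe", "Doe, J.")]
prompt = build_transformation_prompt(examples, "Alan Turing")
print(prompt)
```

In a full pipeline, the resulting prompt would be passed to a fine-tuned or general-purpose language model, and the predicted target values would feed into an equi-join, missing-value imputation, or error detection step.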


