Bridging the Gap: Deciphering Tabular Data Using Large Language Model

08/23/2023
by   Hengyuan Zhang, et al.

In the realm of natural language processing, the understanding of tabular data has long stood as a focal point of scholarly inquiry. The emergence of large language models, exemplified by ChatGPT, has ushered in a wave of efforts to harness these models for table-based question answering. Central to our investigation are methodologies that strengthen the capacity of such models to discern both the structural intricacies and the content of tables, ultimately enabling them to provide informed responses to pertinent queries. To this end, we have designed a dedicated module that serializes tables for seamless ingestion by large language models. Additionally, we have instituted a corrective mechanism within the model to rectify potential inaccuracies. Experimental results indicate that, although our proposed method trails the SOTA by approximately 11.7% in overall metrics, it surpasses the SOTA by roughly 1.2% on certain specific metrics, demonstrating the applicability of large language models to table-based question answering tasks and enhancing the model's comprehension of both table structure and content.
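The abstract describes a module that serializes tables into text a large language model can consume. The paper's exact serialization format is not given here, so the sketch below is a minimal, hypothetical linearization (markdown-style rows with an optional caption), illustrating the general idea rather than the authors' implementation.

```python
def serialize_table(headers, rows, caption=None):
    """Linearize a table into an LLM-friendly text block.

    Hypothetical sketch: the paper's actual serialization module is not
    specified in this abstract; a simple markdown-style layout is assumed.
    """
    lines = []
    if caption:
        # A caption line helps the model ground the table's topic.
        lines.append(f"Table: {caption}")
    # Header row followed by a separator, then one line per data row.
    lines.append(" | ".join(headers))
    lines.append(" | ".join("---" for _ in headers))
    for row in rows:
        lines.append(" | ".join(str(cell) for cell in row))
    return "\n".join(lines)


# Example: turn a small table into a prompt fragment.
prompt = serialize_table(
    ["Country", "Capital", "Population (M)"],
    [["France", "Paris", 67.8], ["Japan", "Tokyo", 125.1]],
    caption="Countries overview",
)
```

The serialized string would then be prepended to the question in the model's prompt; richer schemes (e.g., marking header cells or row indices explicitly) trade prompt length for structural clarity.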


research
01/26/2021

Representations for Question Answering from Documents with Tables and Text

Tables in Web documents are pervasive and can be directly used to answer...
research
06/04/2023

A Mathematical Abstraction for Balancing the Trade-off Between Creativity and Reality in Large Language Models

Large Language Models have become popular for their remarkable capabilit...
research
04/24/2023

Unlocking Context Constraints of LLMs: Enhancing Context Efficiency of LLMs with Self-Information-Based Content Filtering

Large language models (LLMs) have received significant attention by achi...
research
03/17/2023

Generate, Transform, Answer: Question Specific Tool Synthesis for Tabular Data

Tabular question answering (TQA) presents a challenging setting for neur...
research
05/23/2023

RET-LLM: Towards a General Read-Write Memory for Large Language Models

Large language models (LLMs) have significantly advanced the field of na...
research
02/09/2023

Using Language Models for Enhancing the Completeness of Natural-language Requirements

[Context and motivation] Incompleteness in natural-language requirements...
research
02/19/2023

Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation

We introduce a method to measure uncertainty in large language models. F...
