Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning

01/31/2023
by   Yunhu Ye, et al.

Table-based reasoning has shown remarkable progress in combining deep models with discrete reasoning, which requires reasoning over both free-form natural-language (NL) questions and structured tabular data. However, previous table-based reasoning solutions usually suffer significant performance degradation on huge evidence (large tables). In addition, most existing methods struggle to reason over complex questions, since the required information is scattered across different places. To alleviate these challenges, we exploit large language models (LLMs) as decomposers for effective table-based reasoning: they (i) decompose huge evidence (a large table) into sub-evidence (a small table), mitigating the interference of useless information in table reasoning; and (ii) decompose a complex question into simpler sub-questions for text reasoning. Specifically, we first use LLMs to break down the evidence (table) involved in the current question, retaining the relevant evidence and excluding the remaining irrelevant evidence from the huge table. In addition, we propose a "parsing-execution-filling" strategy that alleviates the hallucination dilemma of chain-of-thought prompting by decoupling logic from numerical computation at each step. Extensive experiments show that our method can effectively leverage decomposed evidence and questions, and it outperforms strong baselines on the TabFact, WikiTableQuestions, and FetaQA datasets. Notably, our model surpasses human performance for the first time on the TabFact dataset.
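The "parsing-execution-filling" idea above can be illustrated with a minimal sketch. This is not the paper's implementation; the table, the query spec, and the helper `execute` are hypothetical stand-ins. The point it demonstrates: the LLM is asked only to *parse* the question into a reasoning template with placeholders plus a small executable query (step 1), while the numerical work is done by a deterministic executor (step 2) whose results are filled back into the template (step 3), so the model never has to compute numbers in free text.

```python
# Hypothetical sketch of a "parsing-execution-filling" pipeline.
# A toy sub-table, as if already decomposed out of a huge table by an LLM.
sub_table = [
    {"player": "A", "points": 31},
    {"player": "B", "points": 27},
    {"player": "C", "points": 45},
]

# Step 1 (parsing) -- assumed LLM output: a natural-language template with
# slots, plus a machine-executable query spec instead of computed numbers.
template = "The highest score is {max_pts}, achieved by player {who}."
query = {"op": "argmax", "column": "points", "label": "player"}

def execute(table, q):
    """Step 2 (execution): all numerical computation happens here,
    deterministically, not inside the language model."""
    if q["op"] == "argmax":
        row = max(table, key=lambda r: r[q["column"]])
        return {"max_pts": row[q["column"]], "who": row[q["label"]]}
    raise ValueError(f"unsupported op: {q['op']}")

# Step 3 (filling): substitute the executor's results into the template.
answer = template.format(**execute(sub_table, query))
print(answer)  # The highest score is 45, achieved by player C.
```

Because the template and the computation are decoupled, an arithmetic slip by the model cannot leak into the final answer; only the executor produces numbers.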


