KaLM at SemEval-2020 Task 4: Knowledge-aware Language Models for Comprehension And Generation

05/24/2020
by Jiajing Wan, et al.

This paper presents our strategies for SemEval-2020 Task 4: Commonsense Validation and Explanation. We propose a novel way to search for evidence and choose different large-scale pre-trained models as the backbones for the three subtasks. The results show that our evidence-searching approach improves model performance on the commonsense explanation task. Our team ranks 2nd in Subtask C according to the human evaluation score.
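To make the backbone idea concrete, the sketch below shows one way a large-scale pre-trained model could be wired up for the validation subtask, with retrieved evidence supplied as a second input segment. The checkpoint name, the example evidence string, and the two-way classification head are illustrative assumptions rather than details taken from the paper, and the head would still need fine-tuning on the task data.

```python
# Minimal sketch (not the paper's exact method): score candidate statements
# with a pre-trained model, optionally conditioning on retrieved evidence,
# and keep the statement judged more plausible.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large"  # assumed backbone checkpoint; chosen for illustration only
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()  # classification head is untrained here; fine-tuning on task data is required

def plausibility_score(statement: str, evidence: str = "") -> float:
    """Probability that a statement 'makes sense', with optional evidence as a second segment."""
    inputs = tokenizer(statement, evidence or None, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()

# Hypothetical usage: pick the sensible statement given retrieved evidence.
s1 = "He put an elephant into the fridge."
s2 = "He put a turkey into the fridge."
evidence = "A household fridge is far smaller than an elephant."
better = s1 if plausibility_score(s1, evidence) > plausibility_score(s2, evidence) else s2
```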

Related research

09/06/2020 · QiaoNing at SemEval-2020 Task 4: Commonsense Validation and Explanation system based on ensemble of language model
In this paper, we present language model system submitted to SemEval-202...

04/07/2022 · Autoencoding Language Model Based Ensemble Learning for Commonsense Validation and Explanation
An ultimate goal of artificial intelligence is to build computer systems...

06/12/2021 · Prompting Contrastive Explanations for Commonsense Reasoning Tasks
Many commonsense reasoning NLP tasks involve choosing between one or mor...

12/18/2020 · A Benchmark Arabic Dataset for Commonsense Explanation
Language comprehension and commonsense knowledge validation by machines ...

10/14/2022 · MiQA: A Benchmark for Inference on Metaphorical Questions
We propose a benchmark to assess the capability of large language models...

05/23/2023 · Large Language Models as Commonsense Knowledge for Large-Scale Task Planning
Natural language provides a natural interface for human communication, y...

08/17/2020 · BUT-FIT at SemEval-2020 Task 4: Multilingual commonsense
This paper describes work of the BUT-FIT's team at SemEval 2020 Task 4 -...
