Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

12/06/2021
by Yichong Xu, et al.

Most of today's AI systems focus on using self-attention mechanisms and transformer architectures on large amounts of diverse data to achieve impressive performance gains. In this paper, we propose to augment the transformer architecture with an external attention mechanism to bring external knowledge and context to bear. By integrating external information into the prediction process, we hope to reduce the need for ever-larger models and increase the democratization of AI systems. We find that the proposed external attention mechanism can significantly improve the performance of existing AI systems, allowing practitioners to easily customize foundation AI models to many diverse downstream applications. In particular, we focus on the task of Commonsense Reasoning, demonstrating that the proposed external attention mechanism can augment existing transformer models and significantly improve the model's reasoning capabilities. The proposed system, Knowledgeable External Attention for commonsense Reasoning (KEAR), reaches human parity on the open CommonsenseQA research benchmark with an accuracy of 89.4% in comparison to the human accuracy of 88.9%.
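To make the idea concrete, below is a minimal sketch of text-level "external attention" for a multiple-choice commonsense question: retrieved knowledge snippets are concatenated to each (question, choice) pair so that the transformer's ordinary self-attention can attend over the input and the external knowledge jointly, without any architectural change. The model checkpoint, the `retrieve_knowledge` helper, and the knowledge strings are illustrative assumptions, not the authors' exact pipeline, and the classification head here is untrained (in practice the model would be fine-tuned on CommonsenseQA).

```python
# Sketch of KEAR-style external attention via input concatenation.
# Assumptions: model name, retrieve_knowledge(), and knowledge text are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-large")

question = "Where would you find a seat used by only one person at a time?"
choices = ["airplane", "bicycle", "bus", "theatre", "church"]

def retrieve_knowledge(question: str, choice: str) -> str:
    """Hypothetical retriever: a KEAR-style system would query external sources
    such as a knowledge graph, a dictionary, or related training examples."""
    return f"{choice} is a place or object that may provide seating."

# Build one (question [SEP] choice [SEP] knowledge) sequence per answer choice,
# so self-attention spans both the input and the retrieved knowledge.
first = [question] * len(choices)
second = [f"{c} {tokenizer.sep_token} {retrieve_knowledge(question, c)}" for c in choices]
enc = tokenizer(first, second, padding=True, truncation=True, return_tensors="pt")
enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**enc).logits  # one score per choice
print(choices[logits.argmax(-1).item()])
```

The appeal of this formulation is that the external knowledge participates in every self-attention layer of the existing encoder, so a practitioner can bring new knowledge sources to a foundation model simply by changing what is retrieved and concatenated.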
