NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic

07/06/2023
by Zi'ou Zheng, et al.

Reasoning has been a central topic in artificial intelligence from the beginning. Recent progress in distributed representation and neural networks continues to improve the state-of-the-art performance of natural language inference (NLI). However, it remains an open question whether the models perform genuine reasoning to reach their conclusions or rely on spurious correlations. Adversarial attacks have proven to be an important tool for probing the Achilles' heel of victim models. In this study, we explore the fundamental problem of developing attack models based on logic formalism. We propose NatLogAttack, which performs systematic attacks centring around natural logic, a classical logic formalism that traces back to Aristotle's syllogisms and has been closely developed for natural language inference. The proposed framework supports both label-preserving and label-flipping attacks. We show that, compared to existing attack models, NatLogAttack generates better adversarial examples with fewer queries to the victim models. The victim models are found to be more vulnerable under the label-flipping setting. NatLogAttack provides a tool for probing the capacity of existing and future NLI models from a key viewpoint, and we hope more logic-based attacks will be explored to better understand the desired properties of reasoning.
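In natural-logic terms, the intuition behind such attacks is that monotonicity determines how lexical substitutions project onto the entailment label: in an upward-monotone position, swapping a word for a hypernym keeps an entailment-labeled pair entailed (a label-preserving perturbation), while swapping it for a mutually exclusive alternative pushes the gold label toward contradiction (a label-flipping perturbation). The sketch below only illustrates this projection idea with toy word tables; the lexicons, the `perturb`/`attack` helpers, and the `victim_predict` callable are hypothetical stand-ins, not the paper's actual algorithm.

```python
# Illustrative sketch of monotonicity-driven perturbations for NLI attacks.
# Assumption: substituted tokens sit in upward-monotone contexts, and the
# benchmark's three-way label scheme maps alternation to 'contradiction'.

HYPERNYMS = {"dog": "mammal", "car": "vehicle", "rose": "flower"}   # x -> y with x below y
ALTERNATIONS = {"dog": "cat", "car": "bicycle", "rose": "tulip"}    # mutually exclusive terms


def perturb(hypothesis: str, mode: str):
    """Return (new_hypothesis, new_gold_label) for a pair originally labeled 'entailment'.

    mode == "preserve": hypernym substitution keeps the gold label 'entailment'.
    mode == "flip":     alternation substitution changes the gold label to 'contradiction'.
    """
    table = HYPERNYMS if mode == "preserve" else ALTERNATIONS
    tokens = hypothesis.split()
    for i, tok in enumerate(tokens):
        if tok in table:
            new_tokens = tokens[:i] + [table[tok]] + tokens[i + 1:]
            new_label = "entailment" if mode == "preserve" else "contradiction"
            return " ".join(new_tokens), new_label
    return hypothesis, "entailment"  # no attack site found


def attack(victim_predict, premise: str, hypothesis: str, mode: str = "preserve"):
    """Query the victim model once per candidate; the attack succeeds when the
    model's prediction disagrees with the natural-logic-derived gold label."""
    candidate, gold = perturb(hypothesis, mode)
    success = victim_predict(premise, candidate) != gold
    return candidate, gold, success


if __name__ == "__main__":
    fake_victim = lambda p, h: "entailment"  # hypothetical victim model
    print(attack(fake_victim, "A dog is sleeping on the sofa.",
                 "A dog is sleeping.", mode="flip"))
```

Under these assumptions, a label-flipping candidate such as "A cat is sleeping." requires only a single query to expose a model that keeps predicting entailment, which mirrors the abstract's point that logic-guided generation can succeed with fewer visits to the victim model.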


Related research

11/08/2020 · Exploring End-to-End Differentiable Natural Logic Modeling
We explore end-to-end trained differentiable models that integrate natur...

08/01/2023 · LimeAttack: Local Explainable Method for Textual Hard-Label Adversarial Attack
Natural language processing models are vulnerable to adversarial example...

04/29/2022 · Logically Consistent Adversarial Attacks for Soft Theorem Provers
Recent efforts within the AI community have yielded impressive results t...

05/17/2021 · Supporting Context Monotonicity Abstractions in Neural NLI Models
Natural language contexts display logical regularities with respect to s...

04/20/2023 · Interventional Probing in High Dimensions: An NLI Case Study
Probing strategies have been shown to detect the presence of various lin...

04/27/2021 · Improved and Efficient Text Adversarial Attacks using Target Information
There has been recently a growing interest in studying adversarial examp...

09/18/2019 · Information Extraction Tool Text2ALM: From Narratives to Action Language System Descriptions
In this work we design a narrative understanding tool Text2ALM. This too...
