Logically Consistent Adversarial Attacks for Soft Theorem Provers

04/29/2022
by Alexander Gaskell, et al.

Recent efforts within the AI community have yielded impressive results towards "soft theorem proving" over natural language sentences using language models. We propose a novel, generative adversarial framework for probing and improving these models' reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may alter the label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models' reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model's performance.
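
The guarantee rests on a simple loop: perturb the logic program, re-derive the entailment label with a symbolic solver, and only emit examples whose labels are consistent by construction. The sketch below illustrates that loop in miniature; it is an illustrative toy under stated assumptions, not the paper's LAVA implementation. The `perturb` operator, the `model_predict` callable, and the forward-chaining solver are hypothetical stand-ins for the structured generator, the target soft theorem prover, and the symbolic solver described above.

```python
import random

def forward_chain(facts, rules):
    """Compute the deductive closure of `facts` under definite-clause
    `rules`, where each rule is (frozenset_of_body_atoms, head_atom)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def entails(facts, rules, query):
    """Symbolic ground truth: does the logic program entail `query`?"""
    return query in forward_chain(facts, rules)

def perturb(facts, rules):
    """Hypothetical perturbation operator: add a distractor rule or
    delete a fact. A real attacker would propose edits with a
    structured generative model rather than at random."""
    facts, rules = set(facts), list(rules)
    if rules and random.random() < 0.5:
        body, _ = random.choice(rules)
        rules.append((body, f"distractor_{random.randrange(10)}"))
    elif facts:
        facts.discard(random.choice(sorted(facts)))
    return facts, rules

def attack(model_predict, facts, rules, query, budget=50):
    """Search for a logically consistent adversarial example.
    The solver recomputes the gold label after every perturbation,
    so an edit that flips the entailment can never yield a
    mislabeled attack. Returns an example the target model gets
    wrong, or None if the budget is exhausted."""
    for _ in range(budget):
        f2, r2 = perturb(facts, rules)
        gold = entails(f2, r2, query)        # label re-derived symbolically
        if model_predict(f2, r2, query) != gold:
            return f2, r2, query, gold
    return None
```

Because the gold label is recomputed after every edit, a perturbation that changes the entailment cannot silently produce a mislabeled example; an attack succeeds only when the target model disagrees with the solver on a correctly labeled input.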

