Zero-Shot Classification by Logical Reasoning on Natural Language Explanations

11/07/2022
by Chi Han, et al.

Humans can classify an unseen category by reasoning on its language explanation. This ability stems from the compositional nature of language: we can combine previously seen concepts to describe a new category. For example, we might describe ravens as "a kind of large bird with black feathers", so that others can use their knowledge of the concepts "large bird" and "black feathers" to recognize a raven. Inspired by this observation, in this work we tackle the zero-shot classification task by logically parsing and reasoning on natural language explanations. To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations). While previous methods usually regard textual information as implicit features, CLORE parses each explanation into a logical structure and then reasons along this structure over the input to produce a classification score. Experimental results on explanation-based zero-shot classification benchmarks demonstrate that CLORE is superior to baselines, mainly because it performs better on tasks requiring more logical reasoning. Alongside its classification decisions, CLORE can provide the logical parsing and reasoning process as a form of rationale. Through empirical analysis we also demonstrate that CLORE is less affected by linguistic biases than baselines.
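
To make the parse-then-reason idea concrete, the toy Python sketch below splits an explanation into concept clauses and combines per-clause match scores into a single classification score. This is only an illustrative approximation, not the authors' CLORE implementation: the function names, the keyword-overlap matcher, and the product-style soft conjunction are assumptions of this sketch (CLORE itself uses learned parsing and matching components).

    # Toy sketch of "parse an explanation, then reason along its structure".
    # NOT the CLORE implementation: all names and the keyword-overlap matcher
    # are illustrative assumptions; the real model learns these components.
    import re

    def parse_explanation(explanation):
        """Split an explanation into concept clauses, e.g.
        'a kind of large bird with black feathers' -> ['large bird', 'black feathers']."""
        text = re.sub(r"^(a kind of|a type of)\s+", "", explanation.lower())
        clauses = re.split(r"\s+(?:with|and|that has|which has)\s+", text)
        return [c.strip() for c in clauses if c.strip()]

    def concept_score(concept, input_text):
        """Toy matcher: fraction of concept tokens found in the input.
        A learned similarity (e.g. embeddings) would replace this in practice."""
        tokens = concept.split()
        hits = sum(1 for t in tokens if t.rstrip("s") in input_text.lower())
        return hits / max(len(tokens), 1)

    def classification_score(explanation, input_text):
        """Reason along the parsed structure: a soft conjunction (product)
        of clause scores yields one zero-shot classification score."""
        score = 1.0
        for clause in parse_explanation(explanation):
            score *= concept_score(clause, input_text)
        return score

    if __name__ == "__main__":
        explanation = "a kind of large bird with black feathers"
        print(classification_score(explanation, "a large black-feathered bird on a fence"))  # high score
        print(classification_score(explanation, "a small brown songbird"))                   # low score

In this sketch the intermediate clause scores double as a rationale: they show which parts of the explanation the input satisfied, mirroring how CLORE exposes its parsing and reasoning process alongside the decision.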

Related research

06/13/2023
FLamE: Few-shot Learning from Natural Language Explanations
Natural language explanations have the potential to provide rich informa...

01/08/2023
Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text
Logical reasoning task involves diverse types of complex reasoning over ...

11/26/2020
Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations
Traditional symbolic reasoning engines, while attractive for their preci...

06/16/2023
Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views
Large Language Models (LLMs) have achieved great success in various natu...

10/22/2022
MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure
In this paper, we propose a comprehensive benchmark to investigate model...

11/04/2019
Learning to Annotate: Modularizing Data Augmentation for Text Classifiers with Natural Language Explanations
Deep neural networks usually require massive labeled data, which restric...

05/21/2021
Probabilistic Sufficient Explanations
Understanding the behavior of learned classifiers is an important task, ...
