
∂-Explainer: Abductive Natural Language Inference via Differentiable Convex Optimization

by Mokanarangan Thayaparan, et al.

Constrained optimization solvers based on Integer Linear Programming (ILP) have been a cornerstone of explainable natural language inference since its inception. ILP-based approaches provide a way to encode explicit and controllable assumptions, casting natural language inference as an abductive reasoning problem in which the solver constructs a plausible explanation for a given hypothesis. While constraint-based solvers provide explanations, they are often limited by the use of explicit constraints and cannot be integrated as part of broader deep neural architectures. In contrast, state-of-the-art transformer-based models can learn from data and implicitly encode complex constraints. However, these models are intrinsically black boxes. This paper presents ∂-Explainer (Diff-Explainer), a novel framework that combines the best of both worlds by casting constrained optimization as part of a deep neural network via differentiable convex optimization and fine-tuning pre-trained transformers for downstream explainable NLP tasks. To demonstrate the efficacy of the framework, we transform the constraints presented by TupleILP and integrate them with sentence-embedding transformers for the task of explainable science question answering. Our experiments show up to ≈10% improvement over the non-differentiable solver while still providing explanations that support the model's inference.
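To make the core idea concrete, the sketch below (not the authors' implementation) stacks a differentiable convex optimization layer, built with cvxpylayers, on top of relevance scores from a sentence-embedding transformer: the discrete ILP-style fact selection is relaxed to a convex program whose solution remains differentiable with respect to the scores, so the encoder can in principle be fine-tuned end-to-end. The model name, candidate facts, selection budget k, and regularization weight are illustrative assumptions, and the relaxation is far simpler than the TupleILP constraints used in the paper.

```python
# Illustrative sketch only: a differentiable convex optimization layer over
# transformer relevance scores (cvxpylayers + sentence-transformers).
# The fact list, model name, budget k, and regularizer weight are assumed
# values, not the paper's actual TupleILP formulation.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer
from sentence_transformers import SentenceTransformer, util

facts = [
    "Friction is a kind of force.",
    "Friction causes the temperature of an object to increase.",
    "A car is a kind of vehicle.",
    "Rubbing two objects together produces friction.",
]
n, k = len(facts), 2          # number of candidate facts, max facts to select

# Convex relaxation of explanation selection: x_i in [0, 1] instead of {0, 1}.
x = cp.Variable(n)
rel = cp.Parameter(n)         # relevance scores supplied by the transformer
objective = cp.Maximize(rel @ x - 0.1 * cp.sum_squares(x))
constraints = [x >= 0, x <= 1, cp.sum(x) <= k]
opt_layer = CvxpyLayer(cp.Problem(objective, constraints),
                       parameters=[rel], variables=[x])

# A sentence-embedding transformer scores each fact against the hypothesis.
# Because the optimization layer is differentiable, gradients from a downstream
# loss can flow through the solver back into the encoder during fine-tuning.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
hypothesis = "Rubbing your hands together makes them warm."
h = encoder.encode(hypothesis, convert_to_tensor=True)
f = encoder.encode(facts, convert_to_tensor=True)
scores = util.cos_sim(h, f).squeeze(0)        # shape: (n,)

selection, = opt_layer(scores)                # differentiable "soft" selection
for fact, weight in zip(facts, selection):
    print(f"{weight.item():.2f}  {fact}")
```

In this toy setup the selected facts form the candidate explanation for the hypothesis; in the full framework the convex program additionally encodes TupleILP-style structural constraints over tuples rather than a simple budgeted selection.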


