Memory networks for consumer protection: unfairness exposed

07/24/2020
by   Federico Ruggeri, et al.

Recent work has demonstrated how data-driven AI methods can support consumer protection through the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improved classification accuracy, but are also able to offer meaningful, natural language explanations of otherwise opaque classifier outcomes.

Related research

- Teaching Meaningful Explanations (05/29/2018)
  The adoption of machine learning in high-stakes applications such as hea...

- Using Issues to Explain Legal Decisions (06/25/2021)
  The need to explain the output from Machine Learning systems designed to...

- Learning Explanations from Language Data (08/13/2018)
  PatternAttribution is a recent method, introduced in the vision domain, ...

- Augmented Neural Networks for Modelling Consumer Indebtness (09/03/2014)
  Consumer Debt has risen to be an important problem of modern societies, ...

- Explaining Legal Concepts with Augmented Large Language Models (GPT-4) (06/15/2023)
  Interpreting the meaning of legal open-textured terms is a key task of l...

- ILDC for CJPE: Indian Legal Documents Corpus for Court Judgment Prediction and Explanation (05/28/2021)
  An automated system that could assist a judge in predicting the outcome ...

- Modelling and Explaining Legal Case-based Reasoners through Classifiers (10/20/2022)
  This paper brings together two lines of research: factor-based models of...
