
Towards Grad-CAM Based Explainability in a Legal Text Processing Pipeline

12/15/2020
by Lukasz Gorski, et al.

Explainable AI (XAI) is a domain focused on providing interpretability and explainability of a decision-making process. In the domain of law, in addition to system and data transparency, it also requires transparency of the (legal) decision model and the ability to understand the model's inner workings when arriving at a decision. This paper provides a first approach to using a popular image-processing technique, Grad-CAM, to showcase the explainability concept for legal texts. With the help of adapted Grad-CAM metrics, we show the interplay between the choice of embeddings, their handling of contextual information, and the effect on downstream processing.
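The abstract does not spell out the adaptation, but the core Grad-CAM computation carries over to text directly: take the gradient of the class score with respect to the convolutional feature maps over token positions, average it per channel to get importance weights, and form a ReLU-weighted sum as a per-token heat map. The sketch below works this out in NumPy for a toy model (global average pooling followed by a linear head), where the gradient has a closed form; the shapes, weights, and model are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, for illustration): T token positions, K feature maps
# produced by a 1-D convolution over token embeddings, already ReLU'd.
T, K = 8, 4
A = rng.random((K, T))           # feature maps A^k over token positions
w = rng.standard_normal(K)       # linear classifier weights for the target class

# Class score: y = sum_k w_k * mean_t A[k, t]   (GAP + linear head)
y = w @ A.mean(axis=1)

# For this model, dy/dA[k, t] = w_k / T for every position t, so the
# Grad-CAM channel weight alpha_k (the gradient averaged over positions)
# is simply w_k / T.
grads = np.broadcast_to((w / T)[:, None], (K, T))
alpha = grads.mean(axis=1)       # shape (K,), equals w / T here

# Grad-CAM heat map over tokens: ReLU(sum_k alpha_k * A[k, t])
cam = np.maximum(0.0, alpha @ A)
print(cam.round(3))
```

In a real pipeline the gradients would come from autodiff through the full network rather than the closed form above, but the channel-averaging and ReLU-weighted combination are the same, yielding one nonnegative relevance score per token position.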
