
Towards Grad-CAM Based Explainability in a Legal Text Processing Pipeline

by   Lukasz Gorski, et al.

Explainable AI (XAI) is a domain focused on providing interpretability and explainability for decision-making processes. In the domain of law, in addition to system and data transparency, it also requires transparency of the (legal) decision model and the ability to understand the model's inner workings when arriving at a decision. This paper provides the first approaches to using a popular image-processing technique, Grad-CAM, to showcase the explainability concept for legal texts. With the help of adapted Grad-CAM metrics, we show the interplay between the choice of embeddings, their consideration of contextual information, and their effect on downstream processing.
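The core Grad-CAM computation the abstract refers to weights a convolutional layer's feature maps by the average-pooled gradients of the class score, then applies a ReLU. The sketch below (not the authors' implementation; the toy activations and gradients are hypothetical) shows this for a 1D text-convolution layer, producing a per-token relevance score:

```python
import numpy as np

def grad_cam_1d(feature_maps, gradients):
    """Grad-CAM over a 1D (text) convolutional layer.

    feature_maps: (C, T) activations A^c for C channels over T tokens
    gradients:    (C, T) dY/dA^c, gradients of the class score w.r.t. A
    returns:      (T,) non-negative per-token relevance scores
    """
    # Channel weights: global-average-pool the gradients over the token axis
    alphas = gradients.mean(axis=1)                              # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((alphas[:, None] * feature_maps).sum(axis=0), 0.0)
    return cam

# Hypothetical toy values: 2 channels, 4 tokens
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.5, 0.0, 3.0, 1.0]])
G = np.array([[ 0.2,  0.2,  0.2,  0.2],
              [-0.4, -0.4, -0.4, -0.4]])
print(grad_cam_1d(A, G))  # highlights the second token
```

In an actual pipeline the activations and gradients would come from a backward pass through the text classifier; the resulting scores can then be overlaid on the input tokens as a saliency map.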

