LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information

04/13/2021
by   Ioannis Mollas, et al.

Artificial Intelligence (AI) has had a tremendous impact on the rapid growth of technology in almost every domain. AI-powered systems now monitor and make decisions about sensitive economic and societal issues. The future points towards automation, and that trend should not be hindered. However, many people view this prospect with concern, fearing uncontrollable AI systems. Such concern is reasonable when it stems from social issues, such as gender-biased or opaque decision-making systems. Explainable AI (XAI) has recently been recognised as a major step towards reliable systems, enhancing people's trust in AI. Interpretable machine learning (IML), a subfield of XAI, is likewise an urgent topic of research. This paper presents a small but significant contribution to the IML community: a local, neural-specific interpretation process applied to textual and time-series data. The proposed methodology introduces new approaches to presenting feature-importance-based interpretations, as well as to producing counterfactual words on textual datasets. Finally, an improved evaluation metric is introduced for assessing interpretation techniques, supporting an extensive set of qualitative and quantitative experiments.
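The abstract mentions feature-importance-based local interpretations. As a rough illustration of the general idea behind such local techniques (not the authors' LioNets method, which additionally exploits the network's penultimate layer), the sketch below perturbs an instance, queries a stand-in black-box model, and fits a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The model `predict`, the perturbation scale, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box: a fixed linear scorer passed through a sigmoid,
# standing in for a trained neural network's prediction function.
W = np.array([2.0, -1.0, 0.0, 0.5, 0.0])

def predict(X):
    return 1.0 / (1.0 + np.exp(-X @ W))

def local_importance(x, n_samples=500, scale=0.1):
    """Sketch of a local surrogate explanation: perturb x, weight the
    neighbours by proximity, fit a weighted linear model, and return its
    coefficients as local feature importances."""
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict(Z)
    # Gaussian proximity kernel: closer neighbours get larger weights.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with a bias column.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # drop the bias term

x = np.zeros(5)
imp = local_importance(x)
```

Here `imp` recovers the local influence of each feature: the first feature dominates and the second pushes the prediction down, mirroring the signs and magnitudes of the hidden weights around `x`.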
