Causal Intersectionality and Dual Form of Gradient Descent for Multimodal Analysis: a Case Study on Hateful Memes

08/19/2023
by Yosuke Miyanishi, et al.

In the wake of the explosive growth of machine learning (ML) usage, particularly of emerging Large Language Models (LLMs), comprehending the semantic significance rooted in their internal workings is crucial. While causal analyses focus on defining semantics and their quantification, the gradient-based approach is central to explainable AI (XAI), tackling the interpretation of the black box. By synergizing these approaches, exploring how a model's internal mechanisms illuminate its causal effect becomes integral to evidence-based decision-making. A parallel line of research has shown that intersectionality - the combinatory impact of an individual's multiple demographic attributes - can be structured as an Average Treatment Effect (ATE). This study first illustrates that the hateful meme detection problem can be formulated as an ATE, assisted by the principles of intersectionality, and that a modality-wise summarization of gradient-based attention attribution scores can delineate the distinct behaviors of three Transformer-based models with respect to the ATE. We then show that the latest LLM, LLaMA2, can disentangle the intersectional nature of meme detection in an in-context learning setting, with its mechanistic properties elucidated via the meta-gradient, a secondary form of gradient. In conclusion, this research contributes to the ongoing dialogue surrounding XAI and the multifaceted nature of ML models.
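The ATE framing above can be made concrete with a toy example. The following is a minimal, hypothetical sketch (not the paper's actual estimator or data): it treats the presence of a harmful text/image combination in a meme as a binary "treatment" T and estimates ATE = E[Y | T=1] - E[Y | T=0] from synthetic, randomized data, where Y is an outcome such as a hatefulness score.

```python
import numpy as np

# Hypothetical illustration of an Average Treatment Effect (ATE).
# All names and data are synthetic; this is not the paper's method.
rng = np.random.default_rng(0)
n = 10_000

# T = 1 if the (illustrative) intersectional text/image combination
# is present in the meme, else 0. Randomized here for simplicity.
treatment = rng.integers(0, 2, size=n)

# Outcome Y (e.g., a hatefulness score) with a true effect of 0.3.
outcome = 0.3 * treatment + rng.normal(0.0, 0.1, size=n)

# Under randomization, ATE = E[Y | T=1] - E[Y | T=0].
ate = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print(round(ate, 2))
```

With observational (non-randomized) memes data, this naive difference in means would be confounded, which is why the causal formulation in the paper matters.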


