A Review on Explainability in Multimodal Deep Neural Nets

05/17/2021
by   Gargi Joshi, et al.

Artificial intelligence techniques powered by deep neural nets have achieved remarkable success in several application domains, most notably in computer vision and natural language processing. Surpassing human-level performance has propelled research into applications where multiple modalities (language, vision, text, and sensory data) play an important role in accurate prediction and identification. Several multimodal fusion methods employing deep learning models have been proposed in the literature. Despite their outstanding performance, the complex, opaque, black-box nature of deep neural nets limits their social acceptance and usability. This has given rise to the quest for model interpretability and explainability, especially in complex tasks involving multimodal AI methods. This paper extensively reviews the present literature to provide a comprehensive survey and commentary on explainability in multimodal deep neural nets, especially for vision-and-language tasks. It covers several topics on multimodal AI and its applications in generic domains, including its significance, datasets, the fundamental building blocks of methods and techniques, challenges, applications, and future trends.
