Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations

by Daniel Lundstrom et al.

Deep learning (DL) has proven to be an effective machine learning and computer vision technique. DL-based image segmentation, object recognition, and classification will aid many in-situ Mars rover tasks such as path planning and artifact recognition/extraction. However, most deep neural network (DNN) architectures are so complex that they are treated as 'black boxes'. In this paper, we use integrated gradients to describe the attribution of each neuron to the output classes. This yields a set of explainability tools (ET) that opens the black box of a DNN, so that the individual contributions of neurons to category classification can be ranked and visualized. The neurons in each dense layer are mapped and ranked by measuring the expected contribution of a neuron to a class vote given a true image label. Neuron importance is prioritized according to a neuron's correct or incorrect contribution to the output classes, and its suppression or bolstering of incorrect classes, weighted by the size of each class. ET provides an interface for pruning the network: high-rank neurons are retained and low-performing neurons are removed. ET technology will make DNNs smaller and more efficient for implementation on small embedded systems, and it leads to more explainable and testable DNNs, easing system validation and verification. The goal of ET technology is to enable the adoption of DL in future in-situ planetary exploration missions.
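The attribute-rank-prune pipeline described above can be sketched with integrated gradients computed over the hidden activations of a toy dense network. This is a minimal NumPy-only illustration, not the authors' implementation: the network shape, the zero baseline, the synthetic labels, and the prune count are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: input (4) -> dense ReLU layer (6 neurons) -> softmax head (3 classes).
W1 = rng.normal(size=(6, 4))   # hidden dense layer
W2 = rng.normal(size=(3, 6))   # classification head

def hidden(x):
    return np.maximum(0.0, W1 @ x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def class_prob(h, c):
    """Probability of class c as a function of the hidden activations h."""
    return softmax(W2 @ h)[c]

def grad_class_prob(h, c):
    """Analytic gradient of the class-c softmax probability w.r.t. h."""
    p = softmax(W2 @ h)
    # d p_c / d h = p_c * (W2[c] - sum_k p_k * W2[k])
    return p[c] * (W2[c] - p @ W2)

def integrated_gradients(h, c, steps=64):
    """Midpoint Riemann approximation of integrated gradients for the
    hidden neurons, using an all-zero baseline activation."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_class_prob(a * h, c) for a in alphas])
    return h * grads.mean(axis=0)

# Rank each hidden neuron by its mean attribution toward the true class
# over a (synthetic) labeled set, then prune the lowest-ranked neurons.
X = rng.normal(size=(100, 4))
y = rng.integers(0, 3, size=100)
scores = np.mean(
    [integrated_gradients(hidden(x), c) for x, c in zip(X, y)], axis=0
)
keep = np.argsort(scores)[2:]           # drop the two lowest-ranked neurons
W1_pruned, W2_pruned = W1[keep], W2[:, keep]
```

Integrated gradients satisfies a completeness property: the per-neuron attributions sum to the difference between the model output at the input and at the baseline, which makes the scores directly comparable across neurons before they are averaged into a ranking.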




