Towards Learning a Vocabulary of Visual Concepts and Operators using Deep Neural Networks

09/01/2021
by   Sunil Kumar Vengalil, et al.

Deep neural networks have become the default choice for many applications such as image and video recognition, segmentation, and other image- and video-related tasks. However, a critical challenge with these models is their lack of explainability. The need for explainable predictions has motivated the research community to perform various analyses of trained models. In this study, we analyze the feature maps learned by models trained on MNIST images with the goal of achieving more explainable predictions. Our study focuses on deriving a set of primitive elements, here called visual concepts, that can be used to generate any arbitrary sample from the data-generating distribution. We derive these primitive elements from the feature maps learned by the model, and we illustrate the idea by generating visual concepts from a Variational Autoencoder trained on MNIST images. We augment the MNIST training data with about 60,000 new images generated from randomly chosen visual concepts. With this augmentation, the reconstruction loss (mean squared error) drops from an initial value of 120 without augmentation to 60. Our approach is a first step towards the goal of trained deep neural network models whose predictions, hidden-layer features, and learned filters can be well explained. Such a model, once deployed in production, can easily be modified to adapt to new data, whereas existing deep learning models require retraining or fine-tuning, which in turn needs a large number of data samples that are not easy to generate unless the model has good explainability.
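The augmentation step described above can be sketched in a few lines: draw random latent "visual concept" vectors, pass them through the trained VAE decoder to synthesize new images, and add those images to the training set. The sketch below is illustrative only — the `decode` function here is a toy random linear map standing in for the paper's trained decoder, and `latent_dim` is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE decoder: maps a latent "visual concept"
# vector to a flattened 28x28 MNIST-sized image. In the paper's setting
# this would be the decoder of a VAE trained on MNIST; here it is a
# random linear map used purely to illustrate the augmentation loop.
latent_dim = 16  # assumed size of the latent concept vector
W = rng.normal(scale=0.1, size=(latent_dim, 28 * 28))

def decode(z):
    """Decode latent concept vectors into flattened 28x28 images."""
    return 1.0 / (1.0 + np.exp(-z @ W))  # sigmoid keeps pixels in [0, 1]

# Augment: sample random concept vectors and decode them into new images.
n_new = 1000  # the paper generates about 60,000 such images
z = rng.standard_normal((n_new, latent_dim))
augmented = decode(z)  # shape (n_new, 784); appended to the training set

def mse(x, x_hat):
    """Reconstruction loss (mean squared error), the metric the paper reports."""
    return float(np.mean((x - x_hat) ** 2))
```

Retraining the VAE on the original images plus `augmented` is what the abstract credits with halving the reconstruction MSE.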

