A game method for improving the interpretability of convolution neural network

10/21/2019
by Jinwei Zhao, et al.

Real artificial intelligence has long been a focus of machine learning researchers, especially in the area of deep learning. However, deep neural networks are hard to understand and explain, and at times even appear metaphysical. We believe the reason is that the network is essentially a perceptual model. Therefore, to progress from simple perception to complex intelligent activity, it is necessary to construct another, interpretable logical network that forms accurate and reasonable responses to, and explanations of, external things. Researchers such as Bolei Zhou and Quanshi Zhang have found many explanatory rules for deep feature extraction, aimed at the feature-extraction stage of convolutional neural networks. Although researchers such as Marco Gori have also made great efforts to improve the interpretability of the fully connected layers of the network, the problem remains very difficult. This paper first analyzes the reason for this difficulty. It then proposes a method for constructing a logical network based on the fully connected layers and extracting the logical relations between the inputs and outputs of those layers. A game process between perceptual learning and logical abstract cognitive learning is implemented to improve the interpretability of both the deep learning process and the deep learning model. The benefits of our approach are illustrated on benchmark data sets and in real-world experiments.
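The alternating "game" between a perceptual learner and a logical learner described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual method: a logistic model stands in for the fully connected layers, an integer-weight model stands in for the logical network (its rounded weights read as a crisp rule over the inputs), and the two alternate updates, with the perceptual player regularized toward the logical player's rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inputs, binary labels from a simple linear rule.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Player 1: "perceptual" model (logistic regression stands in for the
# fully connected layers of a CNN).
w_p = np.zeros(2)
# Player 2: "logical" model, constrained to integer weights so its
# decision rule stays human-readable.
w_l = np.zeros(2)

lr = 0.5
for step in range(200):
    # Perceptual player: fit the labels, but stay close to the logical
    # player's rule (the interpretability pressure in the game).
    p = sigmoid(X @ w_p)
    grad_p = X.T @ (p - y) / len(y) + 0.1 * (w_p - w_l)
    w_p -= lr * grad_p
    # Logical player: distill the perceptual weights into an integer rule.
    w_l = np.round(w_p)

acc = np.mean((sigmoid(X @ w_p) > 0.5) == (y > 0.5))
```

In this sketch the round-trip between the two players is the "game": each player's best response depends on the other's current state, and the regularization weight (0.1 here, an arbitrary choice) trades prediction accuracy against agreement with the interpretable rule.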

Related research:

- 06/12/2018 · Pressure Predictions of Turbine Blades with Deep Learning
  Deep learning has been used in many areas, such as feature detections in...
- 04/07/2020 · The relationship between Fully Connected Layers and number of classes for the analysis of retinal images
  This paper experiments with the number of fully-connected layers in a de...
- 06/19/2022 · Artificial intelligence system based on multi-value classification of fully connected neural network for construction management
  This study is devoted to solving the problem to determine the profession...
- 01/20/2019 · A Universal Logic Operator for Interpretable Deep Convolution Networks
  Explaining neural network computation in terms of probabilistic/fuzzy lo...
- 09/30/2019 · MonoNet: Towards Interpretable Models by Learning Monotonic Features
  Being able to interpret, or explain, the predictions made by a machine l...
- 02/13/2023 · An Order-Invariant and Interpretable Hierarchical Dilated Convolution Neural Network for Chemical Fault Detection and Diagnosis
  Fault detection and diagnosis is significant for reducing maintenance co...
- 01/23/2018 · Automatic construction of Chinese herbal prescription from tongue image via convolution networks and auxiliary latent therapy topics
  The tongue image is an important physical information of human, it is of...
