A game method for improving the interpretability of convolution neural network

by Jinwei Zhao, et al.

Achieving real artificial intelligence has long been a focus of machine learning researchers, especially in the area of deep learning. However, deep neural networks are hard to understand and explain, and sometimes even appear metaphysical. We believe the reason is that the network is essentially a perceptual model. Therefore, we believe that in order to progress from simple perception to complex intelligent activities, it is necessary to construct an additional interpretable logical network that forms accurate and reasonable responses to, and explanations of, external things. Researchers such as Bolei Zhou and Quanshi Zhang have found many explanatory rules for deep feature extraction, aimed at the feature-extraction stage of convolutional neural networks. Although researchers such as Marco Gori have also made great efforts to improve the interpretability of the fully connected layers of the network, the problem remains very difficult. This paper first analyzes the reason for this difficulty. It then proposes a method for constructing a logical network based on the fully connected layers and extracting the logical relations between the inputs and outputs of those layers. A game process between perceptual learning and logical abstract cognitive learning is implemented to improve the interpretability of both the deep learning process and the deep learning model. The benefits of our approach are illustrated on benchmark data sets and in real-world experiments.
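The abstract does not specify how logical relations are extracted from the fully connected layers, so the following is only a hypothetical, minimal sketch of one common rule-extraction idea in this spirit: binarize hidden-unit activations and look for units whose firing is strongly associated with a given output class, yielding crude "IF these units fire THEN this class" rules. All names (`W`, `hidden`, `fires`, the 0.15 lift threshold) are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained fully connected layer: 8 hidden units, 3 classes.
# (In the paper's setting, `hidden` would be real perceptual features from the
# convolutional stage; here they are random for a self-contained example.)
W = rng.normal(size=(8, 3))
hidden = rng.random(size=(200, 8))      # post-activation hidden features
logits = hidden @ W
labels = logits.argmax(axis=1)          # the layer's output decisions

# Binarize hidden activations: a unit "fires" if it exceeds its median level.
fires = hidden > np.median(hidden, axis=0)

# For each class, keep units whose firing rate on that class is noticeably
# higher than their overall firing rate -- a crude logical rule per class.
rules = {}
for c in range(3):
    mask = labels == c
    lift = fires[mask].mean(axis=0) - fires.mean(axis=0)
    rules[c] = np.flatnonzero(lift > 0.15)   # illustrative threshold

for c, units in rules.items():
    print(f"class {c}: IF units {units.tolist()} fire THEN predict class {c}")
```

Such extracted rules form a logical layer that can then be played against the perceptual network, which is the kind of game process the abstract describes at a high level.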




