How and what to learn: The modes of machine learning

02/28/2022
by Sihan Feng, et al.

We propose a new approach, weight pathway analysis (WPA), for studying the mechanisms of multilayer neural networks. The weight pathways that link neurons longitudinally from input neurons to output neurons are taken as the basic units of a neural network. We decompose a neural network into a series of subnetworks of weight pathways and establish characteristic maps for these subnetworks. The parameters of a characteristic map can be visualized, providing a longitudinal perspective on the network and making it explainable. Using WPA, we discover that a neural network stores and utilizes information in a "holographic" way: the network encodes all training samples in a single coherent structure. An input vector interacts with this "holographic" structure to enhance or suppress each subnetwork, and the subnetworks work together to produce the correct activities at the output neurons and thereby recognize the input sample. Furthermore, with WPA we reveal two fundamental learning modes of a neural network: a linear mode and a nonlinear mode. The former extracts linearly separable features, while the latter extracts linearly inseparable features. We find that hidden-layer neurons self-organize into different classes in the later stages of learning. We further discover that the key to improving the performance of a neural network is to control the ratio of the two learning modes so that it matches the ratio of linear to nonlinear features in the data, and that increasing the width or the depth of a network helps in controlling this ratio. This provides a theoretical ground for the practice of optimizing a neural network by increasing its width or depth. The knowledge gained with WPA enables us to address fundamental questions such as what to learn, how to learn, and how to learn well.
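To make the pathway decomposition concrete, here is a minimal NumPy sketch for a two-layer perceptron. The variable names, the rank-1 grouping of pathways by hidden neuron, and the ReLU gating picture are illustrative assumptions of ours, not the paper's exact construction of subnetworks and characteristic maps.

```python
import numpy as np

# Sketch of a weight-pathway decomposition for x -> h = f(W1 x) -> y = W2 h.
# A weight pathway from input neuron i through hidden neuron j to output
# neuron k carries the product W2[k, j] * W1[j, i].

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 3
W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights

# pathway[k, j, i]: strength of the longitudinal path i -> j -> k.
pathway = W2[:, :, None] * W1[None, :, :]

# Grouping all pathways through hidden neuron j gives one "subnetwork";
# its rank-1 input-output map serves here as its characteristic map.
subnetwork_maps = [np.outer(W2[:, j], W1[j, :]) for j in range(n_hidden)]

# With linear activations the subnetwork maps sum to the full network map,
# a quick consistency check on the decomposition.
assert np.allclose(sum(subnetwork_maps), W2 @ W1)

# For a concrete input, a ReLU gate on each hidden neuron enhances
# (gate = 1) or suppresses (gate = 0) its entire subnetwork, matching the
# abstract's picture of an input selecting among subnetworks.
x = rng.normal(size=n_in)
gates = (W1 @ x > 0).astype(float)
y = sum(g * m @ x for g, m in zip(gates, subnetwork_maps))
assert np.allclose(y, W2 @ (gates * np.maximum(W1 @ x, 0) / np.where(gates > 0, gates, 1.0)))
```

Visualizing each rank-1 map (e.g., as a heatmap over input-output pairs) gives one plausible reading of the "longitudinal perspective" the abstract describes.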


