A Differential Topological View of Challenges in Learning with Feedforward Neural Networks

11/26/2018
by Hao Shen et al.

Among the many unsolved puzzles in the theory of Deep Neural Networks (DNNs), three fundamental challenges stand in urgent need of solutions, namely, expressibility, optimisability, and generalisability. Although significant progress has been made in seeking answers from various theories, e.g. information bottleneck theory, sparse representation, statistical inference, and Riemannian geometry, no single theory has so far been able to address all three challenges. In this work, we propose to engage the theory of differential topology to tackle these three problems. By modelling the dataset of interest as a smooth manifold, DNNs can be considered as compositions of smooth maps between smooth manifolds. Specifically, our work offers a differential topological view of the loss landscape of DNNs, of the interplay between width and depth in expressibility, and of regularisation for generalisability. Finally, in the setting of deep representation learning, we further apply the quotient topology to investigate the architecture of DNNs, which makes it possible to capture nuisance factors in the data with respect to a specific learning task.
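To make the abstract's central modelling step concrete, the following is a minimal sketch, not taken from the paper, of a feedforward network viewed as a composition of smooth maps; the layer widths, the tanh activation, and the random initialisation are illustrative assumptions.

    # A feedforward network as a composition of smooth maps between
    # (subsets of) Euclidean spaces. Widths, activation, and initialisation
    # are illustrative assumptions, not choices made in the paper.
    import jax
    import jax.numpy as jnp

    def smooth_layer(params, x):
        """One layer f_i(x) = tanh(W x + b): a smooth (C-infinity) map."""
        W, b = params
        return jnp.tanh(W @ x + b)

    def network(all_params, x):
        """The network f = f_L o ... o f_1, itself smooth as a composition."""
        for params in all_params:
            x = smooth_layer(params, x)
        return x

    # Assumed layer widths 4 -> 8 -> 8 -> 3 for illustration.
    key = jax.random.PRNGKey(0)
    widths = [4, 8, 8, 3]
    all_params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        key, k1, k2 = jax.random.split(key, 3)
        W = jax.random.normal(k1, (n_out, n_in)) / jnp.sqrt(n_in)
        b = jax.random.normal(k2, (n_out,))
        all_params.append((W, b))

    x = jnp.ones(widths[0])
    y = network(all_params, x)

    # Smoothness in action: the Jacobian (differential) of the composed map
    # exists everywhere and factors through the layer differentials (chain rule).
    J = jax.jacfwd(lambda z: network(all_params, z))(x)
    print(y.shape, J.shape)  # (3,) (3, 4)

Because each layer map is smooth, the chain rule gives the differential of the whole network as a product of layer differentials, which is the basic object the differential topological analysis works with.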
