How Convolutional Neural Network Architecture Biases Learned Opponency and Colour Tuning

10/06/2020
by   Ethan Harris, et al.

Recent work suggests that changing the architecture of a Convolutional Neural Network (CNN) by introducing a bottleneck in the second layer can yield changes in the learned function. Fully understanding this relationship requires a way of quantitatively comparing trained networks. The fields of electrophysiology and psychophysics have developed a wealth of methods for characterising visual systems which permit such comparisons. Inspired by these methods, we propose an approach to obtaining spatial and colour tuning curves for convolutional neurons, which can be used to classify cells in terms of their spatial and colour opponency. We perform these classifications for a range of CNNs with different depths and bottleneck widths. Our key finding is that networks with a bottleneck show a strong functional organisation: almost all cells in the bottleneck layer become both spatially and colour opponent, while cells in the layer immediately following the bottleneck become non-opponent. The colour tuning data can further be used to form a rich understanding of how colour is encoded by a network. As a concrete demonstration, we show that shallower networks without a bottleneck learn a complex non-linear colour system, whereas deeper networks with tight bottlenecks learn a simple channel-opponent code in the bottleneck layer. We further develop a method of obtaining a hue sensitivity curve for a trained CNN, which enables high-level insights that complement the low-level findings from the colour tuning data. We go on to train a series of networks under different conditions to ascertain the robustness of the discussed results. Ultimately, our methods and findings coalesce with prior art, strengthening our ability to interpret trained CNNs and furthering our understanding of the connection between architecture and learned representation. Code for all experiments is available at https://github.com/ecs-vlc/opponency.
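
As a minimal illustration of the probing idea behind the abstract, the sketch below measures a colour tuning curve for one convolutional channel by presenting full-field hue stimuli and recording the channel's mean response, then applies a simple baseline-crossing heuristic for colour opponency. This is a sketch under stated assumptions, not the paper's exact protocol (which is in the linked repository): the helpers `hue_stimulus`, `colour_tuning_curve`, and `is_colour_opponent`, the HSV sweep at fixed saturation and value, and the grey-baseline opponency criterion are all illustrative choices of ours.

```python
"""Sketch: colour tuning curve and an opponency heuristic for one conv channel.

Illustrative only; stimulus parameters and the opponency criterion are
assumptions, not the exact protocol of Harris et al. or their repository.
"""
import colorsys

import torch
import torch.nn as nn


def hue_stimulus(hue, size=32, saturation=1.0, value=0.5):
    """Full-field image of a single hue (HSV -> RGB), shape (1, 3, size, size)."""
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    img = torch.tensor([r, g, b]).view(1, 3, 1, 1)
    return img.expand(1, 3, size, size).clone()


@torch.no_grad()
def colour_tuning_curve(layer, channel, n_hues=64):
    """Mean response of one output channel to each of n_hues full-field hues."""
    responses = [
        layer(hue_stimulus(i / n_hues))[0, channel].mean().item()
        for i in range(n_hues)
    ]
    return torch.tensor(responses)


@torch.no_grad()
def is_colour_opponent(layer, channel, n_hues=64):
    """Heuristic: opponent if responses cross the neutral-grey baseline in both
    directions, i.e. the cell is excited by some hues and suppressed by others."""
    grey = hue_stimulus(0.0, saturation=0.0)  # mid-grey, same value as the hues
    baseline = layer(grey)[0, channel].mean().item()
    curve = colour_tuning_curve(layer, channel, n_hues)
    return bool((curve > baseline).any() and (curve < baseline).any())


if __name__ == "__main__":
    # Toy stand-in for the first convolutional layer of a trained network.
    conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
    for c in range(8):
        curve = colour_tuning_curve(conv1, c)
        print(c, is_colour_opponent(conv1, c), curve.argmax().item())
```

Sweeping such a probe over every channel in a layer is one way to arrive at per-layer opponency counts of the kind the abstract describes, and a hue sensitivity curve could be obtained analogously by measuring how a trained network's output changes as the hue of its input is rotated.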
