Rethinking the Image Feature Biases Exhibited by Deep CNN Models

11/03/2021
by Dawei Dai, et al.

In recent years, convolutional neural networks (CNNs) have been applied successfully in many fields. However, such deep neural models are still regarded as black boxes in most tasks. One of the fundamental issues underlying this problem is understanding which features are most influential in image recognition tasks and how CNNs process them. It is widely accepted that CNN models combine low-level features to form complex shapes until the object can be readily classified; however, several recent studies have argued that texture features are more important than other features. In this paper, we assume that the importance of certain features varies depending on the specific task, i.e., that specific tasks exhibit a feature bias. We designed two classification tasks based on human intuition to train deep neural models to identify anticipated biases, and we devised experiments comprising many tasks to test these biases in the ResNet and DenseNet models. From the results, we conclude that (1) the combined effect of several features is typically far more influential than any single feature, and (2) neural models can exhibit different biases in different tasks; that is, we can design a specific task to make a neural model biased toward a specific anticipated feature.
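The abstract's closing claim, that a task can be designed to push a model toward an anticipated feature, can be illustrated with a toy data-generation sketch. The function below is a hypothetical construction (not the paper's actual tasks): it builds synthetic images in which the class label is carried by only one cue, shape or texture, while the other cue is randomized. A model trained on such a dataset has no choice but to rely on the label-carrying cue, i.e., it acquires the anticipated bias.

```python
import numpy as np

def make_biased_dataset(n=200, size=32, bias="shape", seed=0):
    """Toy dataset where the label is determined by exactly one cue.

    bias="shape":   label <-> shape (filled square vs hollow frame),
                    texture (uniform vs striped) is random noise.
    bias="texture": label <-> texture, shape is random noise.
    Hypothetical illustration only; the paper's tasks differ.
    """
    rng = np.random.default_rng(seed)
    X = np.zeros((n, size, size), dtype=np.float32)
    y = rng.integers(0, 2, n)
    for i in range(n):
        shape = y[i] if bias == "shape" else rng.integers(0, 2)
        texture = y[i] if bias == "texture" else rng.integers(0, 2)
        # Texture cue: uniform fill vs horizontal stripes.
        fill = np.ones((size, size), dtype=np.float32)
        if texture == 1:
            fill[::2] = 0.0  # stripes
        # Shape cue: filled square vs hollow frame.
        X[i, 8:24, 8:24] = fill[8:24, 8:24]
        if shape == 1:
            X[i, 10:22, 10:22] = 0.0  # hollow out the center
    return X, y
```

Training the same architecture once on a `bias="shape"` dataset and once on a `bias="texture"` dataset, then evaluating both on cue-conflict images (shape says one class, texture the other), is one simple way to check which cue each model learned to rely on.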

