Rethinking the Image Feature Biases Exhibited by Deep CNN Models

11/03/2021
by Dawei Dai, et al.

In recent years, convolutional neural networks (CNNs) have been applied successfully in many fields. However, such deep neural models are still regarded as black boxes in most tasks. One of the fundamental issues underlying this problem is understanding which features are most influential in image recognition tasks and how CNNs process them. It is widely accepted that CNN models combine low-level features into increasingly complex shapes until the object can be readily classified; however, several recent studies have argued that texture features are more important than other features. In this paper, we assume that the importance of certain features varies depending on the specific task, i.e., specific tasks exhibit a feature bias. We designed two classification tasks based on human intuition to train deep neural models toward anticipated biases, and we devised experiments comprising many tasks to test these biases for the ResNet and DenseNet models. From the results, we conclude that (1) the combined effect of certain features is typically far more influential than any single feature, and (2) neural models can exhibit different biases in different tasks; that is, we can design a specific task that biases a neural model toward a specific anticipated feature.
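
As an illustration of how such a feature-bias probe can be set up in practice, the sketch below compares the predictions of pretrained ResNet and DenseNet models on an image before and after its fine texture has been suppressed. This is a minimal sketch under stated assumptions, not the paper's protocol: the choice of torchvision ImageNet models, the example image path, and the use of heavy Gaussian blur as a texture-suppressing transform are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): probe whether a pretrained CNN's
# prediction relies more on fine texture or on coarse shape, by comparing its
# output on an original image against a texture-suppressed (blurred) version.
# Requires a recent torchvision (>= 0.13) for the string-based `weights` API.

import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
base = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])
# Heavy Gaussian blur removes most fine texture while preserving coarse shape.
texture_suppressed = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.GaussianBlur(kernel_size=21, sigma=8.0),
    transforms.ToTensor(),
    normalize,
])

def top1(model, tensor):
    """Return the top-1 ImageNet class index for a single preprocessed image."""
    with torch.no_grad():
        logits = model(tensor.unsqueeze(0))
    return logits.argmax(dim=1).item()

img = Image.open("example.jpg").convert("RGB")  # hypothetical input image

for name, model in [("resnet50", models.resnet50(weights="IMAGENET1K_V1")),
                    ("densenet121", models.densenet121(weights="IMAGENET1K_V1"))]:
    model.eval()
    original_pred = top1(model, base(img))
    blurred_pred = top1(model, texture_suppressed(img))
    # If the prediction survives texture suppression, shape/low-frequency cues
    # were sufficient; if it flips, the model leaned on texture for this image.
    print(f"{name}: original={original_pred}, texture-suppressed={blurred_pred}, "
          f"stable={original_pred == blurred_pred}")
```

Aggregated over a test set, the fraction of images whose prediction survives texture suppression gives one rough, task-level indication of whether a model is biased toward texture or toward shape-like cues.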
