INoD: Injected Noise Discriminator for Self-Supervised Representation Learning in Agricultural Fields

03/31/2023
by Julia Hindel, et al.

Perception datasets for agriculture are limited in both quantity and diversity, which hinders effective training of supervised learning approaches. Self-supervised learning techniques alleviate this problem; however, existing methods are not optimized for dense prediction tasks in agricultural domains, which degrades their performance. In this work, we address this limitation with our proposed Injected Noise Discriminator (INoD), which exploits principles of feature replacement and dataset discrimination for self-supervised representation learning. INoD interleaves feature maps from two disjoint datasets during their convolutional encoding and predicts the dataset affiliation of the resultant feature map as a pretext task. Our approach enables the network to learn unequivocal representations of objects seen in one dataset while observing them in conjunction with similar features from the disjoint dataset. This allows the network to reason about higher-level semantics of the entailed objects, improving its performance on various downstream tasks. Additionally, we introduce the novel Fraunhofer Potato 2022 dataset, consisting of over 16,800 images for object detection in potato fields. Extensive evaluations of our proposed INoD pretraining strategy on object detection, semantic segmentation, and instance segmentation using the Sugar Beets 2016 and our potato datasets demonstrate state-of-the-art performance.
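The core pretext task described above — mixing feature maps from two disjoint datasets and predicting where each feature came from — can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the function name `inject_noise`, the per-location swap granularity, and the `swap_prob` parameter are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_noise(feat_a, feat_b, swap_prob=0.5):
    """Interleave two feature maps from disjoint datasets (INoD-style sketch).

    Randomly replaces spatial locations of feat_a (dataset A) with the
    corresponding locations from feat_b (dataset B), returning the mixed
    feature map and a per-location dataset-affiliation mask
    (1 where the feature came from dataset B).
    """
    assert feat_a.shape == feat_b.shape          # both (C, H, W)
    _, h, w = feat_a.shape
    mask = rng.random((h, w)) < swap_prob        # True -> take from B
    mixed = np.where(mask[None, :, :], feat_b, feat_a)
    return mixed, mask.astype(np.int64)

# Toy arrays standing in for intermediate convolutional encoder activations.
fa = rng.standard_normal((8, 4, 4))
fb = rng.standard_normal((8, 4, 4))
mixed, labels = inject_noise(fa, fb)

# The discriminator's pretext task is then to predict `labels` from `mixed`.
print(mixed.shape, labels.shape)  # (8, 4, 4) (4, 4)
```

Because the affiliation labels are generated for free by the mixing step, the discriminator can be trained densely at every spatial location without any manual annotation.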


research
01/16/2021

Self-Supervised Representation Learning from Flow Equivariance

Self-supervised representation learning is able to learn semantically me...
research
05/10/2022

CoDo: Contrastive Learning with Downstream Background Invariance for Detection

The prior self-supervised learning researches mainly select image-level ...
research
08/16/2022

Matching Multiple Perspectives for Efficient Representation Learning

Representation learning approaches typically rely on images of objects c...
research
10/20/2022

Self-Supervised Learning via Maximum Entropy Coding

A mainstream type of current self-supervised learning methods pursues a ...
research
06/13/2018

Self-Supervised Feature Learning by Learning to Spot Artifacts

We introduce a novel self-supervised learning method based on adversaria...
research
06/07/2022

Spatial Cross-Attention Improves Self-Supervised Visual Representation Learning

Unsupervised representation learning methods like SwAV are proved to be ...
research
12/31/2022

Disjoint Masking with Joint Distillation for Efficient Masked Image Modeling

Masked image modeling (MIM) has shown great promise for self-supervised ...
