Robust and Generalizable Visual Representation Learning via Random Convolutions

07/25/2020
by   Zhenlin Xu, et al.
While successful for various computer vision tasks, deep neural networks have been shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. Hence, our goal is to train models in a way that improves their robustness to these perturbations. We are motivated by the approximately shape-preserving property of randomized convolutions, which follows from distance preservation under random linear transforms. Intuitively, randomized convolutions create an infinite number of new domains with similar object shapes but random local texture. Therefore, we explore using the outputs of multi-scale random convolutions as new images, or mixing them with the original images, during training. When applying a network trained with our approach to unseen domains, our method consistently improves performance on domain generalization benchmarks and is scalable to ImageNet. Especially for the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
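The core idea above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's reference implementation: the function name `random_conv_augment`, the depthwise (one shared kernel applied per channel) design, the kernel-size set, and the He-style weight scaling are all assumptions made for a minimal self-contained example.

```python
import numpy as np

def random_conv_augment(img, kernel_sizes=(1, 3, 5, 7), mix=True, rng=None):
    """Augment an (H, W, C) image with a freshly sampled random convolution.

    Each call draws a new random kernel, so each call simulates a new
    'domain' with similar object shapes but altered local texture.
    NOTE: this is an illustrative sketch, not the authors' exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = int(rng.choice(kernel_sizes))            # multi-scale: random kernel size
    # He-style scaling keeps the output magnitude comparable to the input.
    kernel = rng.normal(0.0, np.sqrt(2.0 / (k * k)), size=(k, k))
    pad = k // 2
    H, W, C = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    for c in range(C):                           # same kernel per channel
        for i in range(H):
            for j in range(W):
                out[i, j, c] = np.sum(padded[i:i + k, j:j + k, c] * kernel)
    if mix:                                      # blend randomized and original images
        alpha = rng.uniform()
        out = alpha * img + (1.0 - alpha) * out
    return out
```

With `mix=True` the augmented image interpolates between the original texture and the randomized one, which corresponds to the "mixing them with the original images" variant described in the abstract.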


Related research

- Randomized Adversarial Style Perturbations for Domain Generalization (04/04/2023): We propose a novel domain generalization technique, referred to as Rando...
- Improving neural network representations using human similarity judgments (06/07/2023): Deep neural networks have reached human-level performance on many comput...
- StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based Domain Generalization (02/18/2023): Large-scale foundation models (e.g., CLIP) have shown promising zero-sho...
- Learning Robust Global Representations by Penalizing Local Predictive Power (05/29/2019): Despite their renowned predictive power on i.i.d. data, convolutional ne...
- Fuse and Attend: Generalized Embedding Learning for Art and Sketches (08/20/2022): While deep Embedding Learning approaches have witnessed widespread succe...
- Progressive Random Convolutions for Single Domain Generalization (04/02/2023): Single domain generalization aims to train a generalizable model with on...
- Emerging Convolutions for Generative Normalizing Flows (01/30/2019): Generative flows are attractive because they admit exact likelihood opti...
