Aggregated Residual Transformations for Deep Neural Networks

11/16/2016
by Saining Xie, et al.
Facebook
University of California, San Diego

We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
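
As a concrete illustration of the building block the abstract describes, below is a minimal PyTorch sketch of one ResNeXt bottleneck (the 256-d, cardinality C=32, bottleneck width d=4 template from the paper). It uses the grouped-convolution formulation, which the paper shows is equivalent to summing C parallel branches of identical topology; the class and variable names here are illustrative, not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """Sketch of a ResNeXt block: aggregated transformations via grouped conv."""

    def __init__(self, channels=256, cardinality=32, bottleneck_width=4):
        super().__init__()
        inner = cardinality * bottleneck_width  # 32 * 4 = 128
        self.transform = nn.Sequential(
            nn.Conv2d(channels, inner, kernel_size=1, bias=False),  # reduce
            nn.BatchNorm2d(inner),
            nn.ReLU(inplace=True),
            # groups=cardinality: each of the C groups acts as one branch
            # of the aggregated transformation.
            nn.Conv2d(inner, inner, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(inner),
            nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, kernel_size=1, bias=False),  # restore
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual shortcut around the aggregated transformations.
        return self.relu(x + self.transform(x))

block = ResNeXtBottleneck()
x = torch.randn(1, 256, 56, 56)
print(block(x).shape)  # torch.Size([1, 256, 56, 56])
```

This block preserves the complexity of the corresponding ResNet bottleneck, which is what makes the cardinality-versus-depth/width comparison in the abstract a fair one: the ResNet template has 256·64 + 3·3·64·64 + 64·256 ≈ 70k parameters, and the 32×4d block above has 256·128 + 32·(3·3·4·4) + 128·256 ≈ 70k.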


Related Research

07/06/2020
ResNeXt and Res2Net Structure for Speaker Verification
ResNet-based architecture has been widely adopted as the speaker embeddi...

05/09/2018
Evaluating ResNeXt Model Architecture for Image Classification
In recent years, deep learning methods have been successfully applied to...

11/30/2016
Wider or Deeper: Revisiting the ResNet Model for Visual Recognition
The trend towards increasingly deep neural networks has been driven by a...

12/10/2015
Deep Residual Learning for Image Recognition
Deeper neural networks are more difficult to train. We present a residua...

12/18/2019
ResNetX: a more disordered and deeper network architecture
Designing efficient network structures has always been the core content ...

04/19/2020
When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks
Various architectures (such as GoogLeNets, ResNets, and DenseNets) have ...

09/17/2014
Going Deeper with Convolutions
We propose a deep convolutional neural network architecture codenamed "I...

Code Repositories

ResNeXt.pytorch
Reproduces ResNet-V3 (ResNeXt) with PyTorch.

resnext
Chainer implementation of ResNeXt (Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431).

Wide-Residual-Network
Implementation of a Wide Residual Network in TensorFlow for image classification. Trained and tested on the CIFAR-10 dataset.

pytorch_resnext
A PyTorch implementation of ResNeXt.
