
Scaling Wide Residual Networks for Panoptic Segmentation

by Liang-Chieh Chen et al.

Wide Residual Networks (Wide-ResNets), a shallow but wide variant of Residual Networks (ResNets) obtained by stacking a small number of residual blocks with large channel sizes, have demonstrated outstanding performance on multiple dense prediction tasks. However, since being proposed, the Wide-ResNet architecture has barely evolved. In this work, we revisit its design for the recent and challenging task of panoptic segmentation, which aims to unify semantic segmentation and instance segmentation. A baseline model is obtained by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution modules into the Wide-ResNets. Its network capacity is then scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that this simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance the state of the art on panoptic segmentation datasets in both the fast model regime and the strong model regime.
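The width/depth scaling described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `scale_swidernet` and the baseline per-stage numbers are hypothetical placeholders, assuming each stage's channel count is multiplied by a width factor and each stage's block count by a depth factor.

```python
import math

def scale_swidernet(base_channels, base_blocks, width_mult, depth_mult):
    """Hypothetical sketch of SWideRNet-style capacity scaling.

    base_channels: per-stage channel sizes of the baseline Wide-ResNet.
    base_blocks:   per-stage residual-block counts of the baseline.
    width_mult:    multiplier on channel sizes (width scaling).
    depth_mult:    multiplier on block counts (depth scaling).
    """
    # Width scaling: multiply channel sizes, keeping at least one channel.
    channels = [max(1, int(round(c * width_mult))) for c in base_channels]
    # Depth scaling: multiply block counts, rounding up to keep each stage.
    blocks = [max(1, math.ceil(b * depth_mult)) for b in base_blocks]
    return channels, blocks

# Illustrative baseline numbers only; a grid search would sweep
# (width_mult, depth_mult) pairs to populate the model family.
channels, blocks = scale_swidernet(
    base_channels=[256, 512, 1024],
    base_blocks=[3, 4, 3],
    width_mult=0.5,   # halve the width for a fast-regime model
    depth_mult=1.0,   # keep the baseline depth
)
```

A grid search over such multiplier pairs is what yields both fast (small multipliers) and strong (large multipliers) models from a single baseline.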

