FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

12/09/2018
by Bichen Wu, et al.

Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture optimality depends on factors such as input resolution and target devices, but existing approaches are too expensive for case-by-case redesigns. Also, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these issues, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding the need to enumerate and train individual architectures separately as in previous methods. FBNets, a family of models discovered by DNAS, surpass state-of-the-art models both designed manually and generated automatically. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, making it 2.4x smaller and 1.5x faster than MobileNetV2-1.3 with similar accuracy. Despite reaching higher accuracy and lower latency than MnasNet, we estimate FBNet-B's search cost to be 420x smaller than MnasNet's, at only 216 GPU-hours. Searched for different resolutions and channel sizes, FBNets achieve 1.5 to 6.4 points higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy with 2.9 ms latency (345 frames per second) on a Samsung S8. The FBNet optimized for the iPhone X achieves a 1.4x speedup on that device over a Samsung-optimized FBNet.
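The core DNAS idea described above can be illustrated with a short sketch: each searchable layer holds several candidate blocks, a Gumbel-Softmax relaxation mixes their outputs so op selection is differentiable, and an expected latency derived from a per-op lookup table enters the loss so the search is hardware-aware. The following is a minimal, illustrative PyTorch example, not the authors' released code; the names `MixedOp` and `CANDIDATE_LATENCY_MS`, the candidate ops, and the loss coefficients are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical measured latencies (ms) of each candidate op on the target
# phone; in the paper these come from on-device benchmarks in a lookup table.
CANDIDATE_LATENCY_MS = [2.0, 3.5, 5.0]

class MixedOp(nn.Module):
    """One searchable layer: a soft mixture over candidate blocks."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 1),             # stand-ins for the
            nn.Conv2d(channels, channels, 3, padding=1),  # paper's block
            nn.Conv2d(channels, channels, 5, padding=2),  # search space
        ])
        # Architecture parameters theta, trained jointly by gradient descent.
        self.theta = nn.Parameter(torch.zeros(len(self.ops)))
        self.register_buffer("lat", torch.tensor(CANDIDATE_LATENCY_MS))

    def forward(self, x, tau=5.0):
        # Gumbel-Softmax yields differentiable (soft) op-selection weights.
        m = F.gumbel_softmax(self.theta, tau=tau, hard=False)
        out = sum(w * op(x) for w, op in zip(m, self.ops))
        expected_latency = (m * self.lat).sum()  # differentiable latency
        return out, expected_latency

# Usage: the loss couples accuracy and latency, so SGD updates weights and
# architecture parameters together instead of training each candidate
# architecture separately.
layer = MixedOp(channels=16)
x = torch.randn(2, 16, 32, 32)
out, lat = layer(x)
logits = out.mean(dim=(2, 3))  # toy classifier head for the sketch
ce = F.cross_entropy(logits, torch.tensor([0, 1]))
loss = ce * 0.2 * torch.log(lat) ** 0.6  # FBNet-style latency-aware loss
loss.backward()
```

Because the expected latency is a weighted sum of table entries, it is differentiable with respect to the architecture parameters, which is what lets the search optimize for a specific device by gradient descent rather than by repeated training runs.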

