Reducing Inference Latency with Concurrent Architectures for Image Recognition

11/13/2020
by Ramyad Hadidi, et al.

Satisfying the high computation demand of modern deep learning architectures while achieving low inference latency is challenging. Current approaches to decreasing latency only increase parallelism within a layer, because architectures typically capture a single-chain dependency pattern that prevents efficient distribution with higher concurrency (i.e., simultaneous execution of one inference across devices). Such single-chain dependencies are so widespread that they even implicitly bias recent neural architecture search (NAS) studies. In this visionary paper, we draw attention to an entirely new space of NAS that relaxes the single-chain dependency to provide higher concurrency and distribution opportunities. To quantitatively compare these architectures, we propose a score that encapsulates crucial metrics such as communication, concurrency, and load balancing. Additionally, we propose a new generator and transformation block that consistently deliver superior architectures compared to current state-of-the-art methods. Finally, our preliminary results show that these new architectures reduce inference latency and deserve more attention.
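The abstract does not spell out the score, but its intuition is easy to demonstrate. Below is a minimal Python sketch (our illustration, not the authors' implementation) that models a network as a layer-dependency DAG and folds average width (concurrency), per-level load balance, and total activation traffic (communication) into a toy score. The networkx dependency, the flops/bytes attributes, and the weighting are all assumptions made for this example.

```python
# Toy sketch of scoring architectures for distribution-friendliness.
# NOT the paper's metric: the attributes and weighting are illustrative.
import networkx as nx


def toy_concurrency_score(g: nx.DiGraph) -> float:
    """Score a layer-dependency DAG; higher = easier to distribute one inference."""
    # Group layers by topological level; nodes in a level can run concurrently.
    levels = list(nx.topological_generations(g))
    depth = len(levels)                                   # critical-path length
    width = g.number_of_nodes() / depth                   # average concurrency

    # Load balance: within each level, compare the lightest vs. heaviest layer.
    balances = []
    for level in levels:
        flops = [g.nodes[n]["flops"] for n in level]
        balances.append(min(flops) / max(flops))
    balance = sum(balances) / len(balances)

    # Communication: treat every edge as a potential cross-device transfer.
    comm_bytes = sum(g.edges[e]["bytes"] for e in g.edges)

    return width * balance / (1.0 + comm_bytes / 1e6)     # illustrative weighting


def make_graph(edges):
    """Build a DAG with uniform (assumed) per-layer cost and activation size."""
    g = nx.DiGraph()
    g.add_edges_from(edges, bytes=4e5)          # ~400 KB of activations per edge
    nx.set_node_attributes(g, 1e9, "flops")     # 1 GFLOP per layer
    return g


# A single-chain network (typical CNN) vs. a two-branch variant.
chain = make_graph([(0, 1), (1, 2), (2, 3)])
branched = make_graph([(0, 1), (0, 2), (1, 3), (2, 3)])

print(f"chain:    {toy_concurrency_score(chain):.3f}")     # width 1: no concurrency
print(f"branched: {toy_concurrency_score(branched):.3f}")  # layers 1 and 2 overlap
```

Running this sketch scores the branched graph higher than the chain, which matches the abstract's argument: under a single-chain dependency every layer waits on its sole predecessor, so adding devices cannot overlap work within one inference.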


Related research

11/26/2018 - InstaNAS: Instance-aware Neural Architecture Search
Neural Architecture Search (NAS) aims at finding one "single" architectu...

09/20/2019 - Understanding Architectures Learnt by Cell-based Neural Architecture Search
Neural architecture search (NAS) generates architectures automatically f...

09/18/2019 - IR-NAS: Neural Architecture Search for Image Restoration
Recently, neural architecture search (NAS) methods have attracted much a...

09/08/2021 - RepNAS: Searching for Efficient Re-parameterizing Blocks
In the past years, significant improvements in the field of neural archi...

11/30/2021 - MAPLE: Microprocessor A Priori for Latency Estimation
Modern deep neural networks must demonstrate state-of-the-art accuracy w...

05/21/2020 - AOWS: Adaptive and optimal network width search with latency constraints
Neural architecture search (NAS) approaches aim at automatically finding...

06/19/2019 - SwiftNet: Using Graph Propagation as Meta-knowledge to Search Highly Representative Neural Architectures
Designing neural architectures for edge devices is subject to constraint...