Reducing Inference Latency with Concurrent Architectures for Image Recognition

by Ramyad Hadidi, et al.

Satisfying the high computation demand of modern deep learning architectures while achieving low inference latency is challenging. Current approaches to decreasing latency only increase parallelism within a layer. This is because architectures typically capture a single-chain dependency pattern that prevents efficient distribution with higher concurrency (i.e., simultaneous execution of one inference among devices). Such single-chain dependencies are so widespread that they even implicitly bias recent neural architecture search (NAS) studies. In this visionary paper, we draw attention to an entirely new space of NAS that relaxes the single-chain dependency to provide higher concurrency and distribution opportunities. To quantitatively compare these architectures, we propose a score that encapsulates crucial metrics such as communication, concurrency, and load balancing. Additionally, we propose a new generator and transformation block that consistently deliver superior architectures compared to current state-of-the-art methods. Finally, our preliminary results show that these new architectures reduce inference latency and deserve more attention.
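The contrast the abstract draws between single-chain and concurrent dependency patterns can be illustrated with a small sketch. This is not the paper's implementation; the toy `layer` function and thread-based "devices" are illustrative assumptions, standing in for real network layers distributed across hardware.

```python
# Illustrative sketch (not the paper's method): a single-chain dependency
# pattern forces sequential execution, while a multi-branch pattern lets
# independent branches of one inference run concurrently.
from concurrent.futures import ThreadPoolExecutor

def layer(x, weight):
    """Toy stand-in for a network layer (e.g., a conv or linear layer)."""
    return x * weight

def single_chain(x):
    # Each layer consumes the previous layer's output, so there is no
    # opportunity for concurrency within one inference.
    for w in (2, 3, 5):
        x = layer(x, w)
    return x

def multi_branch(x):
    # Independent branches share only the input, so a single inference can
    # be distributed across devices (simulated here with a thread pool).
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(layer, x, 2)
        b = pool.submit(layer, x, 3)
        # Join point: merge branch outputs (e.g., addition/concatenation).
        return a.result() + b.result()

print(single_chain(1))  # 2 * 3 * 5 = 30
print(multi_branch(1))  # 2 + 3 = 5
```

With equal per-layer cost, the chain's critical path spans all three layers, while the multi-branch version's critical path is one layer plus the join, which is the source of the latency reduction the paper targets.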


