Reducing Inference Latency with Concurrent Architectures for Image Recognition

11/13/2020
by Ramyad Hadidi, et al.

Satisfying the high computation demand of modern deep learning architectures while achieving low inference latency is challenging. Current approaches to decreasing latency only increase parallelism within a layer. This is because architectures typically exhibit a single-chain dependency pattern that prevents efficient distribution with higher concurrency (i.e., simultaneous execution of one inference across devices). Such single-chain dependencies are so widespread that they even implicitly bias recent neural architecture search (NAS) studies. In this visionary paper, we draw attention to an entirely new space of NAS that relaxes the single-chain dependency to provide higher concurrency and distribution opportunities. To quantitatively compare these architectures, we propose a score that encapsulates crucial metrics such as communication, concurrency, and load balancing. Additionally, we propose a new generator and transformation block that consistently deliver superior architectures compared to current state-of-the-art methods. Finally, our preliminary results show that these new architectures reduce inference latency and deserve more attention.
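The abstract does not spell out the proposed score, so the snippet below is only a rough, hypothetical sketch of how load balancing, cross-device communication, and concurrency could be folded into a single number for a partitioned model graph; the function name, metric definitions, and weights are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: the exact score from the paper is not given in the
# abstract, so these metrics and weights are hypothetical stand-ins.
from collections import defaultdict

def concurrency_score(layers, assignment, edges, num_devices, weights=(1.0, 1.0, 1.0)):
    """Score a partitioned model graph for distributed inference.

    layers:      {layer_id: compute_cost}
    assignment:  {layer_id: device_id} mapping each layer to a device
    edges:       [(src_layer, dst_layer, tensor_bytes)] data dependencies
    num_devices: number of devices one inference is distributed across
    weights:     relative importance of (load balance, communication, concurrency)
    """
    w_balance, w_comm, w_conc = weights

    # Load balancing: ratio of mean to max per-device compute (1.0 = perfectly balanced).
    per_device = defaultdict(float)
    for layer, cost in layers.items():
        per_device[assignment[layer]] += cost
    loads = [per_device[d] for d in range(num_devices)]
    balance = (sum(loads) / num_devices) / max(loads) if max(loads) > 0 else 0.0

    # Communication: fraction of tensor bytes that stay on one device (higher is better).
    total_bytes = sum(b for _, _, b in edges) or 1
    cross_bytes = sum(b for s, d, b in edges if assignment[s] != assignment[d])
    comm = 1.0 - cross_bytes / total_bytes

    # Concurrency: fraction of devices that hold at least one layer and can work simultaneously.
    conc = len(set(assignment.values())) / num_devices

    return (w_balance * balance + w_comm * comm + w_conc * conc) / sum(weights)


# Example: a two-branch (non-single-chain) block split across two devices.
layers = {"a": 4.0, "b1": 2.0, "b2": 2.0, "c": 4.0}
edges = [("a", "b1", 8), ("a", "b2", 8), ("b1", "c", 8), ("b2", "c", 8)]
assignment = {"a": 0, "b1": 0, "b2": 1, "c": 1}
print(f"score = {concurrency_score(layers, assignment, edges, 2):.3f}")
```

A single-chain architecture forces every layer onto the critical path, so any multi-device assignment either idles devices or pays communication at every cut; branched blocks like the toy example above are what make a high concurrency term achievable.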


