Accuracy vs. Efficiency: Achieving Both through FPGA-Implementation Aware Neural Architecture Search

01/31/2019
by Weiwen Jiang, et al.

A fundamental question lies in almost every application of deep neural networks: what is the optimal neural architecture for a given dataset? Recently, several Neural Architecture Search (NAS) frameworks have been developed that use reinforcement learning or evolutionary algorithms to search for a solution. However, most of them take a long time to find the optimal architecture due to the huge search space and the lengthy training process needed to evaluate each candidate. In addition, most of them aim at accuracy only and do not take into consideration the hardware that will be used to implement the architecture. This can lead to latencies far beyond specifications, rendering the resulting architectures useless. To address both issues, in this paper we use Field-Programmable Gate Arrays (FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely FNAS, which provides an optimal neural architecture whose latency is guaranteed to meet the specification. In addition, with a performance abstraction model that analyzes the latency of neural architectures without training them, our framework can quickly prune architectures that do not satisfy the specification, leading to higher search efficiency. Experimental results on common datasets such as ImageNet show that in cases where the state-of-the-art generates architectures with latencies 7.81x longer than the specification, those from FNAS can meet the specs with less than 1% accuracy loss, while achieving up to an 11.13x speedup for the search process. To the best of the authors' knowledge, this is the first hardware-aware NAS framework.
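To make the pruning idea concrete, here is a minimal sketch (not the paper's actual FNAS performance model) of how an analytical latency estimate can filter NAS candidates before any training. The parallelism figure, clock rate, and layer-shape format are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch of latency-aware candidate pruning in a NAS loop.
# The latency model, macs_per_cycle, and clock_mhz are hypothetical
# placeholders, not FNAS's actual performance abstraction model.

def estimate_latency_ms(layers, macs_per_cycle=1024, clock_mhz=200.0):
    """Rough FPGA latency model: total multiply-accumulate operations
    divided by the accelerator's parallelism, converted to milliseconds."""
    total_macs = 0
    for (c_in, c_out, k, h, w) in layers:  # conv layer shapes
        total_macs += c_in * c_out * k * k * h * w
    cycles = total_macs / macs_per_cycle
    return cycles / (clock_mhz * 1e3)  # cycles / (cycles per millisecond)

def prune_candidates(candidates, spec_ms):
    """Keep only architectures whose estimated latency meets the spec,
    so expensive training is never spent on infeasible candidates."""
    return [arch for arch in candidates
            if estimate_latency_ms(arch) <= spec_ms]

# Example: a small network passes a 5 ms spec, a very large one is pruned.
small = [(3, 16, 3, 32, 32)]          # (c_in, c_out, kernel, height, width)
big = [(512, 512, 3, 224, 224)]
feasible = prune_candidates([small, big], spec_ms=5.0)
```

Because the estimate requires only the layer shapes, each candidate is checked in microseconds, which is what lets a hardware-aware search discard spec-violating architectures without paying for training.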
