Maximizing CNN Accelerator Efficiency Through Resource Partitioning

06/30/2016
by Yongming Shen, et al.

Convolutional neural networks (CNNs) are revolutionizing a variety of machine learning tasks, but they present significant computational challenges. Recently, FPGA-based accelerators have been proposed to improve the speed and efficiency of CNNs. Current approaches construct a single processor that computes the CNN layers one at a time; this single processor is optimized to maximize the overall throughput at which the collection of layers is computed. However, this approach leads to inefficient designs because the same processor structure is used to compute CNN layers of radically varying dimensions. We present a new CNN accelerator paradigm and an accompanying automated design methodology that partitions the available FPGA resources into multiple processors, each of which is tailored for a different subset of the CNN convolutional layers. Using the same FPGA resources as a single large processor, multiple smaller specialized processors increase computational efficiency and lead to a higher overall throughput. Our design methodology achieves 1.51x higher throughput than the state-of-the-art approach when evaluating the popular AlexNet CNN on a Xilinx Virtex-7 FPGA. Our projections indicate that the benefit of our approach grows with the amount of available FPGA resources, already exceeding 3x over the state of the art within the next generation of FPGAs.
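The core idea above can be sketched as an optimization problem: split the convolutional layers into contiguous groups, assign each group its own processor, and divide the FPGA's compute budget so that the pipelined throughput (set by the slowest stage) is maximized. The following Python sketch illustrates this under stated assumptions; the layer workloads, DSP budget, and the efficiency model are illustrative placeholders, not numbers or formulas from the paper.

```python
# Hypothetical sketch of resource partitioning for a CNN accelerator:
# partition conv layers into contiguous groups, give each group its own
# processor, and split a DSP budget among them. All constants and the
# efficiency model are illustrative assumptions.

from itertools import combinations

# Illustrative per-layer multiply-accumulate counts (in millions).
LAYER_MACS = [105, 224, 150, 112, 75]
TOTAL_DSPS = 2048

def group_time(layers, dsps, efficiency):
    # Time for one processor to compute its group of layers sequentially.
    return sum(layers) / (dsps * efficiency)

def best_partition(macs, total_dsps, max_procs=3):
    best = None
    for k in range(1, max_procs + 1):
        # Choose k-1 cut points -> k contiguous groups of layers.
        for cuts in combinations(range(1, len(macs)), k - 1):
            bounds = [0, *cuts, len(macs)]
            groups = [macs[bounds[i]:bounds[i + 1]] for i in range(k)]
            # Assumed model: a processor specialized for fewer layers runs
            # closer to peak utilization (efficiency falls as one processor
            # must cover more, differently-shaped layers).
            effs = [1.0 / len(g) ** 0.5 for g in groups]
            # Split DSPs proportionally to each group's work/efficiency so
            # pipeline stages finish in roughly equal time.
            weights = [sum(g) / e for g, e in zip(groups, effs)]
            dsps = [total_dsps * w / sum(weights) for w in weights]
            # Pipelined throughput is limited by the slowest stage.
            period = max(group_time(g, d, e)
                         for g, d, e in zip(groups, dsps, effs))
            if best is None or period < best[0]:
                best = (period, bounds)
    return best

period, bounds = best_partition(LAYER_MACS, TOTAL_DSPS)
print(f"best pipeline period: {period:.4f}, layer cuts: {bounds}")
```

Under this toy model, multiple specialized processors beat a single monolithic one for the same DSP budget, mirroring the paper's qualitative claim; the real methodology additionally accounts for on-chip memory, layer dimensions, and data movement.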
