STANNIS: Low-Power Acceleration of Deep Neural Network Training Using Computational Storage

02/17/2020
by Ali HeydariGorji, et al.
University of California, Irvine

This paper proposes a framework for distributed, in-storage training of neural networks on clusters of computational storage devices. Such devices not only contain hardware accelerators but also eliminate data movement between the host and storage, resulting in both improved performance and power savings. More importantly, this in-storage style of training ensures that private data never leaves the storage device, while the sharing of public data remains fully controlled. Experimental results show up to 2.7x speedup and 69% reduction in energy consumption, with no significant loss in accuracy.
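
The following is a minimal, self-contained sketch (in PyTorch, not the authors' code) of the data-parallel pattern the abstract describes: each simulated storage node computes gradients on the data shard it holds, and only those gradients are exchanged and averaged, so the raw training data never leaves its node. The node count, model, and synthetic data are placeholder assumptions for illustration; the actual framework targets real computational storage hardware.

import torch
import torch.nn as nn

NUM_NODES = 4          # hypothetical number of computational storage devices
torch.manual_seed(0)

# Each "node" holds its own private shard; the host never reads these tensors.
local_shards = [
    (torch.randn(64, 16), torch.randint(0, 2, (64,)))
    for _ in range(NUM_NODES)
]

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(10):
    per_node_grads = []
    for features, labels in local_shards:
        # Forward/backward runs "inside" each device on its local data.
        model.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        per_node_grads.append([p.grad.clone() for p in model.parameters()])

    # Only gradients cross node boundaries and are averaged (the all-reduce step);
    # the private training data itself stays in storage.
    optimizer.zero_grad()
    for param, grads in zip(model.parameters(), zip(*per_node_grads)):
        param.grad = torch.stack(grads).mean(dim=0)
    optimizer.step()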

12/23/2021

In-storage Processing of I/O Intensive Applications on Computational Storage Drives

Computational storage drives (CSD) are solid-state drives (SSD) empowere...
07/16/2020

HyperTune: Dynamic Hyperparameter Tuning For Efficient Distribution of DNN Training Over Heterogeneous Systems

Distributed training is a novel approach to accelerate Deep Neural Netwo...
03/06/2023

Domain-Specific Computational Storage for Serverless Computing

While (1) serverless computing is emerging as a popular form of cloud ex...
12/04/2018

Pre-Defined Sparse Neural Networks with Hardware Acceleration

Neural networks have proven to be extremely powerful tools for modern ar...
07/08/2022

The Dirty Secret of SSDs: Embodied Carbon

Scalable Solid-State Drives (SSDs) have revolutionized the way we store ...
06/07/2023

An Analytical Model-based Capacity Planning Approach for Building CSD-based Storage Systems

The data movement in large-scale computing facilities (from compute node...
