In-Storage Embedded Accelerator for Sparse Pattern Processing

11/10/2016
by Sang-Woo Jun, et al.

We present a novel architecture for sparse pattern processing, using flash storage with embedded accelerators. Sparse pattern processing on large data sets is the essence of applications such as document search, natural language processing, bioinformatics, subgraph matching, machine learning, and graph processing. One slice of our prototype accelerator is capable of handling up to 1TB of data, and experiments show that it can outperform C/C++ software solutions on a 16-core system at a fraction of the power and cost; an optimized version of the accelerator can match the performance of a 48-core server.
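To make the workload concrete, the following is a minimal software sketch of the kind of sparse pattern matching described above: scoring documents against a sparse query through an inverted index, as a C/C++ or Python baseline on a server would. This is an illustration only, not the paper's accelerator design; the function names (`build_index`, `match`) and the overlap-threshold scoring rule are assumptions for the example.

```python
# Illustrative sketch of sparse pattern matching (hypothetical names,
# not the paper's in-storage accelerator pipeline).
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for term in doc.split():
            index[term].add(doc_id)
    return index

def match(index, pattern, threshold):
    """Return ids of documents sharing at least `threshold` terms
    with the sparse query pattern."""
    hits = defaultdict(int)
    for term in pattern:
        for doc_id in index.get(term, ()):
            hits[doc_id] += 1
    return {d for d, n in hits.items() if n >= threshold}

docs = ["sparse pattern processing", "graph pattern mining", "flash storage"]
idx = build_index(docs)
print(match(idx, ["sparse", "pattern"], 2))  # -> {0}
```

The pattern touches only the index entries for its own terms, so the work is proportional to the (sparse) query, not the full corpus; the paper's contribution is pushing this style of computation into flash storage next to the data.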


Related research

- A Survey on Graph Processing Accelerators: Challenges and Opportunities (02/26/2019)
  Graph is a well known data structure to represent the associated relatio...

- Pyxis: An Open-Source Performance Dataset of Sparse Accelerators (10/08/2021)
  Specialized accelerators provide gains of performance and efficiency in ...

- A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators (05/26/2021)
  The rapidly-changing deep learning landscape presents a unique opportuni...

- Embedded Pattern Matching (08/30/2021)
  Haskell is a popular choice for hosting deeply embedded languages. A rec...

- The Ultimate DataFlow for Ultimate SuperComputers-on-a-Chips (09/20/2020)
  This article starts from the assumption that near future 100BTransistor ...

- IntersectX: An Efficient Accelerator for Graph Mining (12/20/2020)
  Graph pattern mining applications try to find all embeddings that match ...

- FlashAbacus: A Self-Governing Flash-Based Accelerator for Low-Power Systems (05/08/2018)
  Energy efficiency and computing flexibility are some of the primary desi...
