CLAASIC: a Cortex-Inspired Hardware Accelerator

04/20/2016
by Valentin Puente, et al.

This work explores the feasibility of specialized hardware implementing the Cortical Learning Algorithm (CLA) in order to fully exploit its inherent advantages. This algorithm, inspired by the current understanding of the mammalian neocortex, is the basis of Hierarchical Temporal Memory (HTM). In contrast to other machine learning (ML) approaches, its structure is not application dependent, and it relies on fully unsupervised continuous learning. We hypothesize that a hardware implementation will not only extend the already practical uses of these ideas to broader scenarios but also exploit the hardware-friendly characteristics of the CLA. The proposed architecture enables a degree of scalability unattainable by software solutions and fully capitalizes on key CLA advantages: low computational requirements and reduced storage utilization. Compared to a state-of-the-art CLA software implementation, performance could improve by four orders of magnitude and energy efficiency by up to eight orders of magnitude. We propose a packet-switched network to tackle the communication problem, address the fundamental issues of such an approach, and propose solutions that scale. We analyze cost and performance using well-known architecture techniques and tools. The results obtained suggest that, even with CMOS technology and under constrained cost, it might be possible to implement a large-scale system. We found that the proposed solutions save 90% of the original communication costs when running either synthetic or realistic workloads.
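To make the "low computational requirements and reduced storage utilization" claim concrete, the core of the CLA's spatial pooling step can be sketched in a few lines: each cortical column counts how many of its connected synapses overlap active input bits, and global inhibition keeps only the top-k columns active, yielding a sparse distributed representation. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, thresholds, and names below are our own assumptions.

```python
import random

random.seed(42)

# Toy dimensions (hypothetical, for illustration only).
INPUT_BITS = 64       # size of the binary input vector
NUM_COLUMNS = 32      # number of cortical columns
SYNAPSES = 16         # potential synapses per column
ACTIVE_COLUMNS = 4    # enforced sparsity (top-k winners)
PERM_THRESHOLD = 0.5  # permanence above which a synapse is "connected"

# Each column samples a random subset of input bits, each with a scalar
# permanence; only "connected" synapses (permanence above threshold)
# contribute to the column's overlap score.
columns = [
    [(random.randrange(INPUT_BITS), random.random()) for _ in range(SYNAPSES)]
    for _ in range(NUM_COLUMNS)
]

def spatial_pool(input_bits):
    """Return the indices of the winning columns for a binary input."""
    overlaps = []
    for idx, synapses in enumerate(columns):
        overlap = sum(1 for bit, perm in synapses
                      if perm >= PERM_THRESHOLD and input_bits[bit])
        overlaps.append((overlap, idx))
    # Global inhibition: only the top-k columns become active.
    overlaps.sort(reverse=True)
    return sorted(idx for _, idx in overlaps[:ACTIVE_COLUMNS])

# A sparse binary input (a handful of set bits).
x = [0] * INPUT_BITS
for b in (3, 17, 21, 40, 55):
    x[b] = 1

active = spatial_pool(x)
print(active)  # a small, fixed-size set of active column indices
```

Note how the per-column work is just integer comparisons and counts over a small synapse list, and the output is a fixed-size sparse set; this is the kind of workload that maps naturally onto simple hardware units exchanging small messages over a packet-switched network.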

