Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

07/18/2023
by Manuel Le Gallo, et al.

Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry in AIMC chips require that DNNs be adapted for deployment on such hardware in order to achieve accuracy equivalent to that of digital computing. In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. The AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design and functionality, along with best practices for properly performing inference and training. We also present an overview of the Analog AI Cloud Composer, which provides the benefits of the AIHWKit simulation platform in a fully managed cloud setting. Finally, we show examples of how users can expand and customize AIHWKit for their own needs. This tutorial is accompanied by comprehensive Jupyter Notebook code examples, runnable with AIHWKit, which can be downloaded from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial.
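To give a flavor of the workflow the tutorial walks through, below is a minimal sketch of hardware-aware training followed by drift-aware inference with AIHWKit, modeled on the library's documented example scripts. The two-sample toy data, the single AnalogLinear layer, and parameter choices such as g_max=25.0 and t_inference=3600.0 are illustrative placeholders, not values from the paper.

    from torch import Tensor
    from torch.nn.functional import mse_loss

    from aihwkit.nn import AnalogLinear
    from aihwkit.optim import AnalogSGD
    from aihwkit.simulator.configs import InferenceRPUConfig
    from aihwkit.inference import PCMLikeNoiseModel

    # Hardware-aware training configuration: the forward pass is evaluated
    # with analog non-idealities during training, and a PCM-like noise model
    # governs programming noise and conductance drift at inference time.
    rpu_config = InferenceRPUConfig()
    rpu_config.noise_model = PCMLikeNoiseModel(g_max=25.0)  # illustrative value

    # A single analog fully connected layer standing in for a real DNN.
    model = AnalogLinear(4, 2, bias=True, rpu_config=rpu_config)

    # Toy training data (placeholders).
    x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
    y = Tensor([[1.0, 0.5], [0.7, 0.3]])

    opt = AnalogSGD(model.parameters(), lr=0.1)
    opt.regroup_param_groups(model)

    for _ in range(100):
        opt.zero_grad()
        loss = mse_loss(model(x), y)
        loss.backward()
        opt.step()

    # Inference: program the trained weights onto the simulated devices and
    # evaluate after one hour of simulated conductance drift.
    model.eval()
    model.program_analog_weights()
    model.drift_analog_weights(t_inference=3600.0)
    print(model(x))

For an existing PyTorch model, aihwkit.nn.conversion.convert_to_analog can instead be used to map its torch layers onto analog equivalents with a given RPU configuration.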

