A Framework for Designing Efficient Deep Learning-Based Genomic Basecallers

11/06/2022
by Gagandeep Singh, et al.

Nanopore sequencing generates noisy electrical signals that must be converted into a standard string of DNA nucleotide bases by a computational step called basecalling. The accuracy and speed of basecalling have critical implications for all later steps in genome analysis. Many researchers adopt complex deep learning-based models to perform basecalling without considering the compute demands of such models, which leads to slow, inefficient, and memory-hungry basecallers. Therefore, there is a need to reduce the computation and memory cost of basecalling while maintaining accuracy. Our goal is to develop a comprehensive framework for creating deep learning-based basecallers that provide high efficiency and performance. We introduce RUBICON, a framework for developing hardware-optimized basecallers. RUBICON consists of two novel machine learning techniques that are specifically designed for basecalling. First, we introduce the first quantization-aware basecalling neural architecture search (QABAS) framework, which specializes the basecalling neural network architecture for a given hardware acceleration platform while jointly exploring and finding the best bit-width precision for each neural network layer. Second, we develop SkipClip, the first technique to remove the skip connections present in modern basecallers, greatly reducing resource and storage requirements without any loss in basecalling accuracy. We demonstrate the benefits of RUBICON by developing RUBICALL, the first hardware-optimized basecaller that performs fast and accurate basecalling. Compared to the fastest state-of-the-art basecaller, RUBICALL provides a 3.19x speedup with 2.97% higher accuracy. We show that RUBICON helps researchers develop hardware-optimized basecallers that are superior to expert-designed models.
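To make the SkipClip idea more concrete, below is a minimal, illustrative PyTorch sketch: skip connections in a student basecaller are scaled by a gate that is annealed to zero during training, while a teacher network with its skips intact supervises the student through knowledge distillation. Everything here is an assumption for illustration only; the names (GatedResidualBlock, anneal_skips, distill_step), the linear annealing schedule, and the toy model shapes are not RUBICON's actual API, and the abstract does not specify how SkipClip schedules skip removal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedResidualBlock(nn.Module):
    """Conv block whose skip connection is scaled by a removable gate.

    Once ``alpha`` reaches 0.0, the residual add is a no-op and the skip
    branch can be dropped from the deployed network entirely.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )
        self.alpha = 1.0  # 1.0 keeps the skip; annealed toward 0.0 below

    def forward(self, x):
        return self.conv(x) + self.alpha * x


def anneal_skips(model: nn.Module, epoch: int, total_epochs: int) -> None:
    """Linearly fade every gated skip to zero by the final epoch
    (hypothetical schedule; the paper may use a different one)."""
    alpha = max(0.0, 1.0 - epoch / max(1, total_epochs - 1))
    for m in model.modules():
        if isinstance(m, GatedResidualBlock):
            m.alpha = alpha


def distill_step(student, teacher, signal, optimizer, T=2.0):
    """One knowledge-distillation step: the student mimics the teacher's
    soft per-base class probabilities on a chunk of raw signal."""
    with torch.no_grad():
        teacher_logits = teacher(signal)
    student_logits = student(signal)
    # Softmax over dim=1, the base-class channel of (batch, classes, time).
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage on random "signal" chunks shaped (batch, channels, time).
    student = nn.Sequential(GatedResidualBlock(4), nn.Conv1d(4, 5, 1))
    teacher = nn.Sequential(GatedResidualBlock(4), nn.Conv1d(4, 5, 1))
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for epoch in range(3):
        anneal_skips(student, epoch, total_epochs=3)
        loss = distill_step(student, teacher, torch.randn(2, 4, 100), opt)
        print(f"epoch {epoch}: distill loss {loss:.4f}")
```

The gate makes the removal gradual rather than abrupt: once alpha reaches zero the residual add contributes nothing, so the identity branch, and the memory and compute it requires at inference, can be deleted from the deployed network, which is the resource saving the abstract attributes to SkipClip.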

Related research:

- Tailor: Altering Skip Connections for Resource-Efficient Inference (01/18/2023). Deep neural networks use skip connections to improve training convergenc...
- On Neural Architecture Search for Resource-Constrained Hardware Platforms (10/31/2019). In the recent past, the success of Neural Architecture Search (NAS) has ...
- PVNAS: 3D Neural Architecture Search with Point-Voxel Convolution (04/25/2022). 3D neural networks are widely used in real-world applications (e.g., AR/...
- Confounding Tradeoffs for Neural Network Quantization (02/12/2021). Many neural network quantization techniques have been developed to decre...
- Design Automation for Efficient Deep Learning Computing (04/24/2019). Efficient deep learning computing requires algorithm and hardware co-des...
- TargetCall: Eliminating the Wasted Computation in Basecalling via Pre-Basecalling Filtering (12/09/2022). Basecalling is an essential step in nanopore sequencing analysis where t...
