
Multi-Channel FFT Architectures Designed via Folding and Interleaving

02/19/2022, by Nanda K. Unnikrishnan, et al., University of Minnesota

Computing the FFT of a single channel is well understood in the literature. However, computing the FFT of multiple channels in a systematic manner has not been fully addressed. This paper presents a framework to design a family of multi-channel FFT architectures using folding and interleaving. Three distinct multi-channel FFT architectures are presented in this paper. These architectures differ in the input and output preprocessing steps and are based on different folding sets, i.e., different orders of execution.
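To make the interleaving idea concrete, the sketch below is a purely behavioral illustration (not the paper's architectures or its specific folding sets): samples from several channels are time-multiplexed into one stream, a single shared FFT "datapath" processes all channels, and the results are de-interleaved. All function names and the round-robin ordering are illustrative assumptions.

```python
# Behavioral sketch of multi-channel processing on one shared FFT engine.
# Assumptions: round-robin interleaving order and these helper names are
# illustrative only; the paper's folding sets define other execution orders.
import numpy as np

def interleave_channels(channels):
    """Round-robin interleave C length-N channels into one stream of C*N samples."""
    return np.stack(channels, axis=1).reshape(-1)

def deinterleave_channels(stream, num_channels):
    """Undo the round-robin interleaving back into per-channel outputs."""
    return [stream[c::num_channels] for c in range(num_channels)]

def shared_fft_datapath(stream, num_channels):
    """Model of a single folded FFT engine reused across channels:
    each channel's samples are gathered from the stream and transformed in turn."""
    out = np.empty(stream.shape, dtype=complex)
    for c in range(num_channels):
        out[c::num_channels] = np.fft.fft(stream[c::num_channels])
    return out

# Check: the interleaved multi-channel result matches independent per-channel FFTs.
rng = np.random.default_rng(0)
N, C = 16, 4
chans = [rng.standard_normal(N) for _ in range(C)]
stream = interleave_channels(chans)
y = deinterleave_channels(shared_fft_datapath(stream, C), C)
assert all(np.allclose(y[c], np.fft.fft(chans[c])) for c in range(C))
```

In hardware, the same reuse is obtained by folding the FFT dataflow graph onto a smaller set of butterfly units and letting the interleaved channels fill the resulting idle time slots; the choice of folding set fixes the order in which those slots are executed.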

