
Multi-Channel FFT Architectures Designed via Folding and Interleaving

by Nanda K. Unnikrishnan, et al.
University of Minnesota

Computing the FFT of a single channel is well understood in the literature. However, computing the FFT of multiple channels in a systematic manner has not been fully addressed. This paper presents a framework for designing a family of multi-channel FFT architectures using folding and interleaving. Three distinct multi-channel FFT architectures are presented. These architectures differ in their input and output preprocessing steps and are based on different folding sets, i.e., different orders of execution.
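The paper's architectures are hardware designs, but the interleaving idea can be illustrated in software: samples from several channels are time-multiplexed into one stream so that a single FFT datapath is shared across channels. The sketch below is a minimal, assumed illustration of that scheduling concept (not the paper's actual architectures), using numpy's FFT in place of a hardware pipeline:

```python
import numpy as np

def interleaved_fft(channels):
    """Compute the FFT of each channel by time-multiplexing samples
    through a single stream, mimicking at a high level how a folded
    multi-channel FFT architecture shares one datapath.

    channels: array of shape (C, N) -- C channels of N samples each.
    Illustrative sketch only; the paper's folding sets define different
    hardware execution orders, not this software loop.
    """
    C, N = channels.shape
    # Interleave: sample 0 of every channel, then sample 1, and so on,
    # producing one stream of length C*N fed to the shared datapath.
    stream = channels.T.reshape(-1)
    # De-interleave at the output side and run one FFT per channel.
    recovered = stream.reshape(N, C).T
    return np.fft.fft(recovered, axis=1)

# Usage: two channels, checked against independent per-channel FFTs.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
assert np.allclose(interleaved_fft(x), np.fft.fft(x, axis=1))
```

The round trip through the interleaved stream recovers each channel exactly, which is why a single folded datapath can serve all channels without mixing their spectra.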

