The (discrete) normalized Fourier transform is a complex linear mapping $\mathbb{C}^n \to \mathbb{C}^n$ sending an input $x$ to $Fx$, where $F$ is an $n \times n$ unitary matrix defined by
$$F_{jk} = \frac{1}{\sqrt{n}}\, e^{-2\pi \mathbf{i} jk/n}, \qquad j, k \in \{0, \dots, n-1\}. \tag{1}$$
If $n$ is a power of $2$, then the Walsh-Hadamard transform is a real, orthogonal mapping $WH: \mathbb{R}^n \to \mathbb{R}^n$, with the element in position $(j,k)$ given by:
$$WH_{jk} = \frac{1}{\sqrt{n}}\, (-1)^{\langle j, k\rangle}, \tag{2}$$
where $\langle j, k\rangle$ is the dot-product modulo $2$ of the binary representations of the integers $j$ and $k$.
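As a quick sanity check, the definition of $WH$ can be implemented directly. The following is a minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def walsh_hadamard(n):
    """Build the n x n Walsh-Hadamard matrix (n a power of 2), with
    WH[j, k] = (-1)^<j,k> / sqrt(n), where <j,k> is the dot product
    mod 2 of the binary representations of j and k."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    WH = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            # popcount of j & k gives the dot product of the binary representations
            dot = bin(j & k).count("1") % 2
            WH[j, k] = (-1) ** dot / np.sqrt(n)
    return WH

WH = walsh_hadamard(8)
# Orthogonality: WH @ WH.T should be the identity
print(np.allclose(WH @ WH.T, np.eye(8)))  # True
```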
More generally, both $F$ and $WH$ (resp.) are defined by the characters of the underlying abelian groups $\mathbb{Z}_n$ (the integers modulo $n$, under addition) and the $\log_2 n$-dimensional binary cube $\mathbb{Z}_2^{\log_2 n}$ (the $\log_2 n$-dimensional vector space over the field of two elements). Given an input vector $x \in \mathbb{C}^n$, it is possible to compute $Fx$ and $WHx$ (resp.) in time $O(n \log n)$ using the Fast Fourier Transform [Cooley and Tukey(1964)], or the fast Walsh-Hadamard transform (resp.).
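Concretely, the normalized DFT matrix can be checked against a library FFT. A sketch (note that `np.fft.fft` computes the unnormalized transform, so we divide by $\sqrt{n}$):

```python
import numpy as np

n = 16
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# Normalized DFT matrix: F[j, k] = exp(-2*pi*i*j*k/n) / sqrt(n)
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

# F is unitary: F F* = Id
print(np.allclose(F @ F.conj().T, np.eye(n)))  # True

# Applying F agrees with the (normalized) FFT, which runs in O(n log n)
x = np.random.randn(n) + 1j * np.random.randn(n)
print(np.allclose(F @ x, np.fft.fft(x) / np.sqrt(n)))  # True
```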
As for computational lower bounds, it is trivial that computing both $Fx$ and $WHx$ requires a linear number of steps, because each coordinate of the output depends on all the input coordinates.
There has not been much prior work on better bounds. We refer the reader to [Ailon(2013)] for a brief history of this line of work, and concentrate on a recent lower bound.
The work [Ailon(2013)] provides a lower bound of $\Omega(n \log n)$ operations for computing $Fx$ (or $WHx$) given $x$, assuming that at each step the computer can perform a unitary operation affecting at most $2$ rows of the state. In other words, the algorithm, running in $T$ steps, is viewed as a product
$$M(T) = A_T A_{T-1} \cdots A_1 \tag{3}$$
of matrices $A_1, \dots, A_T$, each $A_t$ a block-diagonal matrix (up to a permutation of rows and columns) with $n - 2$ blocks equalling $1$, and one block equalling a $2 \times 2$ unitary matrix.
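A single step in this model can be sketched as follows: the identity matrix with one $2 \times 2$ unitary block placed on a chosen pair of coordinates (names and dimensions are illustrative):

```python
import numpy as np

def gate(n, i1, i2, U2):
    """Identity matrix with one 2x2 unitary block U2 acting on
    coordinates i1, i2 -- a matrix of the form (3), up to a
    permutation of rows and columns."""
    A = np.eye(n, dtype=complex)
    A[np.ix_([i1, i2], [i1, i2])] = U2
    return A

n = 4
theta = 0.3
# A 2x2 rotation is unitary, hence so is the whole gate
U2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]], dtype=complex)
A = gate(n, 0, 2, U2)
print(np.allclose(A @ A.conj().T, np.eye(n)))  # True
```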
The justification for this model of computation is threefold:
1. Similarly to matrices of the form (3), any basic operation of a modern computer (e.g., addition of two numbers) acts on only a fixed number of inputs.
2. The Fast Fourier Transform, as well as the Walsh-Hadamard transform, operate in this model.
3. The set of matrices of the form (3) generates the group of $n \times n$ unitary matrices.
Thus the question of the computational complexity of the Fourier transform becomes that of computing distances between elements of a group, namely the unitary group, with respect to a set of generators that is computationally simple.
Obtaining the lower bound of $\Omega(n \log n)$ in [Ailon(2013)] is done by defining a potential function $\Phi$ for unitary matrices, as follows:
$$\Phi(M) = -\sum_{j,k} |M_{jk}|^2 \log |M_{jk}|^2,$$
where $\log$ denotes the base-$2$ logarithm and $0 \log 0 = 0$ by convention.
With this potential function, one shows that $\Phi(M(0)) = \Phi(\mathrm{Id}) = 0$ and $\Phi(M(T)) = \Phi(F) = n \log n$, where $M(t) = A_t A_{t-1} \cdots A_1$ is the state of the algorithm after $t$ steps.
Indeed, if the potential grows from $0$ to $n \log n$, changing (in absolute value) by no more than $O(1)$ at each step, then the number of steps must be $\Omega(n \log n)$. Showing that the change at each step is $O(1)$ is done using two observations. The first is that $M(t+1)$ differs from $M(t)$ in at most $2$ rows $i_1$ and $i_2$, and that for each column $k$, due to unitarity of $A_{t+1}$,
$$|M(t+1)_{i_1 k}|^2 + |M(t+1)_{i_2 k}|^2 = |M(t)_{i_1 k}|^2 + |M(t)_{i_2 k}|^2 =: c_k.$$
The next observation is that any numbers $a, b \ge 0$ satisfying $a + b = c \le 1$ also satisfy
$$-c \log c \;\le\; -a \log a - b \log b \;\le\; -c \log c + c.$$
Combining the observations, the contribution of column $k$ to the potential function changes by at most $c_k$ in absolute value at each step, and hence the total change in the potential function can be at most
$$\sum_k c_k = 2,$$
since the two affected rows of $M(t)$ have unit Euclidean norm.
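The per-step bound can be checked numerically: apply a random $2 \times 2$ unitary gate to two rows of a unitary state and verify that the matrix entropy $\Phi(M) = -\sum_{j,k} |M_{jk}|^2 \log_2 |M_{jk}|^2$ changes by at most $2$. A sketch:

```python
import numpy as np

def phi(M):
    """Matrix entropy: -sum |M_jk|^2 log2 |M_jk|^2, with 0 log 0 = 0."""
    p = np.abs(M) ** 2
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

rng = np.random.default_rng(0)
n = 8
# Start from a random unitary state (QR of a random complex matrix)
M, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Apply a random 2x2 unitary gate to rows 0 and 1
U2, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
M2 = M.copy()
M2[[0, 1], :] = U2 @ M[[0, 1], :]

# One step changes the potential by at most 2
print(abs(phi(M2) - phi(M)) <= 2)  # True
```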
2 An Interesting Problem
The advantage of the method just described is that it reduces a computational problem to that of computing distance between two elements of a group, with respect to a chosen set of generators of the group. We now define a more general problem within the same group theoretical setting.
Consider the $2n \times 2n$ matrix $A$ defined as
$$A = \begin{pmatrix} 0 & -F^* \\ F & 0 \end{pmatrix}.$$
(One may replace $F$ with $WH$, but we work with $F$ henceforth.) The matrix $A$ is skew-Hermitian. Let $\mathrm{Id}$ denote the $n \times n$ identity matrix, and finally define for a real angle $\theta$ the following matrix:
$$R(\theta) = \exp(\theta A) = \begin{pmatrix} \cos\theta \cdot \mathrm{Id} & -\sin\theta \cdot F^* \\ \sin\theta \cdot F & \cos\theta \cdot \mathrm{Id} \end{pmatrix}.$$
(The closed form follows since $A^2 = -\mathrm{Id}_{2n}$.) It is easy to verify that $R(\theta)$ is unitary for all $\theta$. It is also easy to verify that
$$R(\theta_1) R(\theta_2) = R(\theta_1 + \theta_2). \tag{4}$$
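Both claims are easy to confirm numerically. The sketch below takes $A = \begin{pmatrix} 0 & -F^* \\ F & 0 \end{pmatrix}$ as one concrete way to realize such a rotation (an assumption of this sketch):

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)
Id = np.eye(n)

def R(theta):
    """R(theta) = exp(theta * A) for the skew-Hermitian A = [[0, -F*], [F, 0]]."""
    return np.block([[np.cos(theta) * Id, -np.sin(theta) * F.conj().T],
                     [np.sin(theta) * F,   np.cos(theta) * Id]])

t1, t2 = 0.4, 0.9
# Unitarity, and the one-parameter group law (4)
print(np.allclose(R(t1) @ R(t1).conj().T, np.eye(2 * n)))  # True
print(np.allclose(R(t1) @ R(t2), R(t1 + t2)))  # True
```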
Also, using the potential function $\Phi$ defined above, we see that
$$\Phi(R(\theta)) = 2 \sin^2(\theta)\, n \log n \,(1 + o(1))$$
for fixed $\theta \in (0, \pi/2)$, as $n \to \infty$.
Hence, using the argument as above, the number of steps required to compute $R(\theta)x$ must be at least $\Omega(\sin^2(\theta)\, n \log n)$. However, it is unreasonable that it should be possible to compute $R(\theta)x$ faster than the time it takes to compute $Fx$, by a factor of $\sin^{-2}(\theta)$. Indeed, given an input $x \in \mathbb{C}^n$, we could simply embed it as
$$z = (x_1, \dots, x_n, 0, \dots, 0)^\top \in \mathbb{C}^{2n}$$
by padding with $0$'s, then compute $R(\theta)z = (\cos\theta \cdot x,\; \sin\theta \cdot Fx)$ and then retrieve $Fx$ from $z$ and $R(\theta)z$ by a simple arithmetic manipulation. Hence, we conjecture that the number of steps required to compute $R(\theta)$ should be not much smaller than $n \log n$.¹

¹The author conjectures $\Theta(n \log n)$ to be the correct bound.
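The padding argument can be made concrete: one application of $R(\theta)$ to the padded vector recovers $Fx$ with $O(n)$ extra work. A sketch, again assuming the block form $A = \begin{pmatrix} 0 & -F^* \\ F & 0 \end{pmatrix}$; the angle $\theta = \pi/6$ is arbitrary:

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)
theta = np.pi / 6
R = np.block([[np.cos(theta) * np.eye(n), -np.sin(theta) * F.conj().T],
              [np.sin(theta) * F,          np.cos(theta) * np.eye(n)]])

x = np.random.randn(n) + 1j * np.random.randn(n)
z = np.concatenate([x, np.zeros(n)])  # pad with 0's
y = R @ z                             # equals (cos(theta) x, sin(theta) F x)
Fx = y[n:] / np.sin(theta)            # the simple arithmetic manipulation
print(np.allclose(Fx, F @ x))  # True
```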
2.1 A slight improvement: Lower bound of $\Omega(\sin(\theta)\, n \log n)$.
It is possible to get a better bound than $\Omega(\sin^2(\theta)\, n \log n)$, as follows. Instead of starting the computation at state $M(0) = \mathrm{Id}$ and finishing at $M(T) = R(\theta)$, we can opportunistically choose a starting point $M(0)$ (and finish at $M(T) = R(\theta) M(0)$).
If we choose the state $M(0) = R(\pi/4 - \theta/2)$ then it is trivial to verify that the computation ends at state $M(T) = R(\theta) R(\pi/4 - \theta/2)$, which equals $R(\pi/4 + \theta/2)$ by (4). We then observe that
$$\Phi(M(T)) - \Phi(M(0)) = 2 \left( \sin^2(\pi/4 + \theta/2) - \sin^2(\pi/4 - \theta/2) \right) n \log n \,(1 + o(1)) = 2 \sin(\theta)\, n \log n \,(1 + o(1)),$$
using the identity $\sin^2\alpha - \sin^2\beta = \sin(\alpha + \beta)\sin(\alpha - \beta)$, so the number of steps must be at least $\Omega(\sin(\theta)\, n \log n)$.
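The gain from the shifted endpoints can be seen numerically. The sketch below (again assuming the concrete block form $A = \begin{pmatrix} 0 & -F^* \\ F & 0 \end{pmatrix}$) compares the potential difference between the two endpoints to $2\sin(\theta)\, n \log_2 n$; the ratio is already very close to $1$ at moderate $n$:

```python
import numpy as np

def phi(M):
    """Matrix entropy with base-2 logarithm, 0 log 0 = 0."""
    p = np.abs(M) ** 2
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

n = 256
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)

def R(theta):
    return np.block([[np.cos(theta) * np.eye(n), -np.sin(theta) * F.conj().T],
                     [np.sin(theta) * F,          np.cos(theta) * np.eye(n)]])

theta = 0.2
# Potential gap between the shifted endpoints R(pi/4 +- theta/2)
diff = phi(R(np.pi / 4 + theta / 2)) - phi(R(np.pi / 4 - theta / 2))
print(diff / (2 * np.sin(theta) * n * np.log2(n)))  # approximately 1
```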
2.2 Stronger improvements?
Is it possible to get a stronger lower bound than $\Omega(\sin(\theta)\, n \log n)$? One approach for solving this problem might be using group representation theory. If $\rho$ is any unitary representation of the unitary group $U(2n)$, then we could define a new potential function $M \mapsto \Phi(\rho(M))$ on $U(2n)$, and use it to obtain possibly better lower bounds.
An interesting representation is related to determinants. We let the order-$k$ determinant representation $\rho_k(M)$ of a unitary $n \times n$ matrix $M$ be the matrix of shape $\binom{n}{k} \times \binom{n}{k}$, defined by
$$(\rho_k(M))_{S,T} = \det(M_{S,T}),$$
where $S, T \subseteq \{1, \dots, n\}$ are subsets of size exactly $k$, and $M_{S,T}$ is the $k$-by-$k$ submatrix defined by row set $S$ and column set $T$. The fact that $\rho_k(M)$ is a unitary matrix coming from a group representation is non-trivial, and we refer the reader to resources on representation theory for more details.
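This determinant (compound-matrix) representation can be spot-checked numerically: for random unitary matrices, $\rho_k$ produces a unitary matrix and is multiplicative. A sketch:

```python
import numpy as np
from itertools import combinations

def rho(M, k):
    """Order-k determinant (compound) matrix: entries det(M[S, T])
    over all k-subsets S, T of the row/column indices."""
    n = M.shape[0]
    subs = list(combinations(range(n), k))
    out = np.empty((len(subs), len(subs)), dtype=complex)
    for a, S in enumerate(subs):
        for b, T in enumerate(subs):
            out[a, b] = np.linalg.det(M[np.ix_(S, T)])
    return out

rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
N, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
k = 2
P = rho(M, k)
print(np.allclose(P @ P.conj().T, np.eye(len(P))))        # True: rho_k(M) is unitary
print(np.allclose(rho(M @ N, k), rho(M, k) @ rho(N, k)))  # True: multiplicative
```

The multiplicativity is the Cauchy-Binet formula, which is what makes $\rho_k$ a representation.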
So far I have not been able to make progress on the problem using this (quite natural) representation, but I am not convinced that this direction is futile either.
2.3 Important Note: Even the case of small $\theta$ is Interesting
Note that although the main problem proposed here is to understand the asymptotic behaviour of the complexity of $R(\theta)$, as $n$ tends to $\infty$, even the case of finding a lower bound for the computation of $R(\theta)$ for small $\theta$, say $\theta = 1/\log n$, is not trivial, in the sense that it is not clear how (and whether it is at all possible) to get a bound better than the trivial $\Omega(n)$, which is the best possible using the “vanilla” entropy function $\Phi$.
- [Ailon(2013)] Nir Ailon. A lower bound for Fourier transform computation in a linear model over 2x2 unitary gates using matrix entropy. Chicago Journal of Theoretical Computer Science, 2013.
- [Cooley and Tukey(1964)] J. W. Cooley and J. W. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19:297–301, 1965.