1 Introduction
The discrete Fourier transform is a change of basis matrix which, for a vector given by its coordinates in the standard basis, computes its coordinates in the Fourier basis. The fast Fourier transform is then just a fast way of computing the discrete Fourier transform of any vector. According to Computing in Science & Engineering, the fast Fourier transform is one of the top 10 most influential algorithms of the 20th century [1]. Without a doubt, every theoretical computer scientist should have at least a basic understanding of the fundamental principles of this algorithm.
There are many excellent books dealing with these topics, for example [3, 5, 6, 8, 9]. In this paper, we offer a new presentation of the discrete and the fast Fourier transform. We believe that the key to an exposition that makes the material more accessible to a wider audience, such as undergraduates and people interested in algorithms, is to start with some appealing motivation. Our starting point is the convolution product of two vectors in the complex vector space $\mathbb{C}^n$. In general, convolution is a fundamental concept in mathematics and it appears in many different forms with applications in image processing, digital data processing, acoustics, electrical engineering, physics, probability theory, multiplication of polynomials, and more. An important part of our presentation is that no new notion is introduced out of the blue, but with a clear justification.
In the space $\mathbb{C}^n$, the convolution $u \circledast v$ of two vectors $u$ and $v$ is again a vector in $\mathbb{C}^n$ that can be represented by constructing the circulant matrix $C_u$ of $u$ and computing the product $C_u v$. It is clear that $C_u v$ can be computed in time $O(n^2)$; however, due to the numerous applications of convolution, it is desirable to compute it faster. This naturally leads to the spectral decomposition of $C_u$, which is guaranteed by the very specific structure of a circulant matrix. The basis of orthogonal eigenvectors which diagonalizes $C_u$ is exactly the Fourier basis. The fast Fourier transform computes the coordinates of a vector in the Fourier basis in time $O(n \log n)$, which then also gives an algorithm computing the convolution of two vectors in the same time.

In general, Fourier analysis provides an orthogonal basis of complex functions defined on an abelian group $G$. Note that the vectors in $\mathbb{C}^n$ can be identified with the functions $f\colon \mathbb{Z}_n \to \mathbb{C}$, where $\mathbb{Z}_n$ are the integers modulo $n$ with addition. Some of the important settings are, for example, $G = \mathbb{R}$ and $G = \mathbb{R}/\mathbb{Z}$, which correspond to the classical Fourier transform and Fourier series, respectively, $G = \mathbb{Z}_n$, which corresponds to the discrete Fourier transform, and $G = \mathbb{Z}_2^t$, which corresponds to Fourier analysis on the Boolean cube.
In this paper, we focus on the case when $G$ is finite, where we can think of the functions as complex vectors indexed by $G$. We generalize the approach for $\mathbb{Z}_n$ to derive the Fourier transform on $G$. To this end, we recall the not so well-known definition of a circulant matrix of a finite abelian group, which can be used to represent the convolution of complex functions on $G$. The Fourier basis is formed by the eigenvectors of the circulant. We derive a recursive description of circulants and of their eigenvectors, which, to the best of our knowledge, is not stated in the literature in the form presented in this paper.
The main idea underlying Fourier analysis on finite abelian groups is a basic fact of linear algebra: if a linear mapping has an orthogonal basis of eigenvectors, we can see it as a diagonal matrix in this basis. The infinite-dimensional case is more complicated, but the rough idea is similar. We will see this manifested throughout the paper.
Prerequisites.
We assume that the reader is familiar with basic linear algebra. In particular, the important concepts which we need are the following: representation of a linear mapping by a matrix, eigenvalues and eigenvectors, spectral decomposition.
Structure of the paper. In Section 2, we derive the discrete Fourier transform. In Section 3, we derive the fast Fourier transform using a matrix notation, which is more transparent. In Section 4, we generalize the approach from Section 2 to any finite abelian group $G$.
Acknowledgements. We would like to thank Pavel Klavík, Martin Černý, Milan Hladík, and Roman Nedela for many useful comments.
2 Discrete Fourier Transform
Our starting point is the discrete circular convolution, which naturally leads to the discrete Fourier transform. It is an operation that, given two vectors $u, v \in \mathbb{C}^n$, produces a third vector $u \circledast v \in \mathbb{C}^n$. Instead of saying what each component of the vector $u \circledast v$ looks like, we take a different approach. For the vector $u = (u_0, u_1, \dots, u_{n-1})^\top$, we construct the circulant matrix
$$C_u = \begin{pmatrix} u_0 & u_{n-1} & u_{n-2} & \cdots & u_1 \\ u_1 & u_0 & u_{n-1} & \cdots & u_2 \\ u_2 & u_1 & u_0 & \cdots & u_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ u_{n-1} & u_{n-2} & u_{n-3} & \cdots & u_0 \end{pmatrix}.$$
The first column of $C_u$ is the vector $u$ and each column is the cyclic shift of the previous column by one position in the downward direction.
Example 1.
For $n = 4$, the circulant matrix corresponding to the vector $u = (u_0, u_1, u_2, u_3)^\top$ is the matrix
$$C_u = \begin{pmatrix} u_0 & u_3 & u_2 & u_1 \\ u_1 & u_0 & u_3 & u_2 \\ u_2 & u_1 & u_0 & u_3 \\ u_3 & u_2 & u_1 & u_0 \end{pmatrix}.$$
∎
We define the convolution of the vectors $u$ and $v$ by
$$u \circledast v = C_u v.$$
Here, the matrix $C_u$ represents the linear mapping $\mathbb{C}^n \to \mathbb{C}^n$ defined by $v \mapsto u \circledast v$.
By definition, computing $u \circledast v$ requires $O(n^2)$ arithmetic operations, which is the number of operations needed for a matrix-vector multiplication. However, one can see that the structure of the matrix $C_u$ is very special. It turns out that the eigenvectors of $C_u$ form an orthogonal basis of $\mathbb{C}^n$, which means that, in a suitable basis, the convolution can be represented by a diagonal matrix. This can be used to derive an algorithm, called the fast Fourier transform, that computes $u \circledast v$ in time $O(n \log n)$.
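To make the definition concrete, the following short numerical sketch (the helper names `circulant` and `convolve` are ours) builds the circulant matrix of a vector and computes the convolution as a matrix-vector product:

```python
import numpy as np

def circulant(u):
    """Circulant matrix of u: first column is u, each following column
    is the previous one cyclically shifted down by one position."""
    n = len(u)
    return np.array([[u[(i - j) % n] for j in range(n)] for i in range(n)])

def convolve(u, v):
    """Circular convolution of u and v, computed as C_u v."""
    return circulant(u) @ v

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([1.0, 0.0, 1.0, 0.0])
w = convolve(u, v)  # w[k] = sum_j u[j] * v[(k - j) mod 4]
```

This direct computation costs $O(n^2)$ operations, exactly the matrix-vector product bound mentioned above.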
2.1 Eigenvectors of a Circulant Matrix
In linear algebra, the first thing to do when one encounters a new linear mapping is to try to compute its eigenvalues and eigenvectors. In case the eigenvectors form an orthogonal basis, the linear mapping is represented by a diagonal matrix with respect to this basis. As we will see, this is exactly the case for every circulant.
Note that we can write
$$C_u = u_0 I + u_1 P + u_2 P^2 + \cdots + u_{n-1} P^{n-1},$$
where $P$ is the permutation matrix representing the cyclic shift by one position downward:
$$P = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}.$$
Example 2.
For $n = 4$ and the vector $u = (u_0, u_1, u_2, u_3)^\top$, we have
$$C_u = u_0 I + u_1 P + u_2 P^2 + u_3 P^3.$$
∎
If $x$ is an eigenvector of $P$, i.e., $Px = \lambda x$ for some $\lambda \in \mathbb{C}$, then we have
$$C_u x = \left(u_0 I + u_1 P + \cdots + u_{n-1} P^{n-1}\right) x = \left(u_0 + u_1 \lambda + \cdots + u_{n-1} \lambda^{n-1}\right) x.$$
In other words, every eigenvector of $P$ is an eigenvector of $C_u$. The corresponding eigenvalue $\mu$ of $C_u$ can then be computed from the formula

(1) $\mu = u_0 + u_1 \lambda + u_2 \lambda^2 + \cdots + u_{n-1} \lambda^{n-1},$

where $\lambda$ is the eigenvalue of $P$ corresponding to the eigenvector $x$. To determine the eigenvalues and eigenvectors of $C_u$, it thus suffices to determine the eigenvalues and eigenvectors of $P$.
Eigenvectors of the Matrix $P$. Suppose that $Px = \lambda x$ for some $x \neq 0$. The matrix $P$ is a permutation matrix, and therefore, it is unitary, i.e., $P^{-1} = P^*$. The eigenvalues of $P$ lie on the unit circle in $\mathbb{C}$. In fact, this is true for every unitary matrix $U$, since $U$ preserves the norm:
$$\|x\| = \|Ux\| = \|\lambda x\| = |\lambda| \cdot \|x\|.$$
This implies $|\lambda| = 1$. Moreover, we can see that the order of the permutation represented by $P$ is $n$, i.e., $P^n = I$. It follows that if $\lambda$ is an eigenvalue of $P$, then $\lambda^n$ is an eigenvalue of $P^n = I$. Since we know that $I$ has all its eigenvalues equal to $1$, the candidates for the eigenvalues of $P$ are exactly the $n$th roots of unity: $\omega^k$, for $k = 0, 1, \dots, n-1$ and $\omega = e^{2\pi i/n}$.
Now, we find the eigenvector associated to the eigenvalue $\omega^{-k}$ (as $k$ ranges over $0, \dots, n-1$, this again enumerates all $n$th roots of unity). Suppose that $Px = \omega^{-k} x$. We have
$$Px = (x_{n-1}, x_0, x_1, \dots, x_{n-2})^\top = \omega^{-k} (x_0, x_1, \dots, x_{n-1})^\top.$$
The entries of the vector $x$ must satisfy the following system of linear equations:
$$x_1 = \omega^k x_0, \quad x_2 = \omega^k x_1, \quad \dots, \quad x_{n-1} = \omega^k x_{n-2}, \quad x_0 = \omega^k x_{n-1}.$$
If we pick an arbitrary value for $x_0$, then $x_1, \dots, x_{n-1}$ are uniquely determined. We put $x_0 = 1$. The eigenvectors of $P$, and therefore also the eigenvectors of $C_u$, are
$$x_k = \left(1, \omega^k, \omega^{2k}, \dots, \omega^{(n-1)k}\right)^\top,$$
for $k = 0, 1, \dots, n-1$. The corresponding eigenvalues of $P$, satisfying $P x_k = \lambda_k x_k$, are given by $\lambda_k = \omega^{-k}$.
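This computation can be checked numerically; the following sketch (variable names are ours) builds the cyclic shift matrix, verifies that each geometric vector is one of its eigenvectors, and reads off the eigenvalue:

```python
import numpy as np

n = 8
omega = np.exp(2j * np.pi / n)

# Cyclic shift-down permutation matrix: P e_j = e_{(j+1) mod n}.
P = np.zeros((n, n))
for j in range(n):
    P[(j + 1) % n, j] = 1.0

eigs = []
for k in range(n):
    x_k = omega ** (k * np.arange(n))       # x_k = (1, w^k, w^{2k}, ...)
    lam = (P @ x_k)[0] / x_k[0]             # read off the eigenvalue
    assert np.allclose(P @ x_k, lam * x_k)  # x_k is an eigenvector of P
    eigs.append(lam)
```

The collected eigenvalues are $n$ pairwise distinct $n$th roots of unity, as derived above.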
2.2 Fourier Basis
The key property of the eigenvectors $x_0, x_1, \dots, x_{n-1}$ is that they form an orthogonal basis of $\mathbb{C}^n$, called the Fourier basis. This means that we can apply spectral decomposition to every circulant matrix $C_u$.
Lemma 1.
The vectors $x_0, x_1, \dots, x_{n-1}$ form an orthogonal basis of $\mathbb{C}^n$.
Proof.
We have
$$\langle x_j, x_k \rangle = \sum_{\ell=0}^{n-1} \omega^{\ell j}\, \overline{\omega^{\ell k}} = \sum_{\ell=0}^{n-1} \omega^{\ell (j-k)}.$$
If $j = k$, then clearly $\langle x_j, x_j \rangle = n$. If $j \neq k$, then we have
$$\sum_{\ell=0}^{n-1} \omega^{\ell (j-k)} = \sum_{\ell=0}^{n-1} \alpha^\ell = \frac{\alpha^n - 1}{\alpha - 1} = 0,$$
where $\alpha = \omega^{j-k} \neq 1$. The last equality follows from the fact that $\alpha^n = (\omega^n)^{j-k} = 1$.
There is a more geometric way of proving that $S = \sum_{\ell=0}^{n-1} \alpha^\ell = 0$. Since the complex number $\alpha - 1$ is nonzero, $\alpha S = S$ implies $S = 0$. The equality $\alpha S = S$ can be seen as follows. Geometrically, multiplication by $\alpha$ is a rotation by some angle $\varphi$. The complex numbers $1, \alpha, \alpha^2, \dots, \alpha^{n-1}$ are the vertices of a regular $m$-gon, each counted the same number of times (note that $m$ divides $n$). The angle $\varphi$ is then equal to $2\pi/m$. Rotation by the angle $\varphi$ just permutes the vertices of the $m$-gon, thus the sum $S$ remains the same. ∎
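The orthogonality relations of Lemma 1 can be confirmed numerically in a few lines (a sketch with our own variable names):

```python
import numpy as np

n = 6
omega = np.exp(2j * np.pi / n)
# Columns of B are the Fourier vectors x_k, with entries omega^{jk}.
B = omega ** np.outer(np.arange(n), np.arange(n))

# Gram matrix of inner products; it should be n times the identity:
# <x_j, x_k> = 0 for j != k and <x_k, x_k> = n.
gram = B.conj().T @ B
```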
2.3 Discrete Fourier Transform as Change of Basis Matrix
The discrete Fourier transform is the change of basis matrix from the standard basis to the Fourier basis $x_0, x_1, \dots, x_{n-1}$. We can easily find the change of basis matrix from the Fourier basis to the standard basis, by placing the vectors $x_0, \dots, x_{n-1}$ into the columns:
$$F^{-1} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{n-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{n-1} & \omega^{2(n-1)} & \cdots & \omega^{(n-1)^2} \end{pmatrix}.$$
Note that in Lemma 1, we proved that
$$\left(F^{-1}\right)^* F^{-1} = n I.$$
By rearranging, we have $\left(F^{-1}\right)^{-1} = \frac{1}{n} \left(F^{-1}\right)^*$. The matrix $F = \frac{1}{n} \left(F^{-1}\right)^*$ is called the discrete Fourier transform and the inverse discrete Fourier transform is $F^{-1}$. In particular, we have
$$F_{jk} = \frac{1}{n}\, \omega^{-jk}$$
and
$$\left(F^{-1}\right)_{jk} = \omega^{jk}.$$
Actually, the matrix $F^{-1}$ is symmetric, so the inverse can be computed by merely taking the complex conjugate of $F^{-1}$ and dividing by $n$, i.e., $F = \frac{1}{n}\, \overline{F^{-1}}$.
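A direct numerical check of this identity (a sketch, with our own variable names):

```python
import numpy as np

n = 8
omega = np.exp(2j * np.pi / n)
F_inv = omega ** np.outer(np.arange(n), np.arange(n))  # columns are the x_k
F = F_inv.conj() / n  # conjugate entrywise and divide by n

ok_inverse = np.allclose(F @ F_inv, np.eye(n))  # F really inverts F_inv
ok_symmetric = np.allclose(F_inv, F_inv.T)      # F_inv is symmetric
```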
2.4 Convolution Theorem
We use the matrix $F$ to diagonalize $C_u$. Since the columns of the matrix $F^{-1}$ are exactly the eigenvectors of $C_u$, we have the following spectral decomposition:

(2) $C_u = F^{-1} D F,$

where $D$ is the diagonal matrix whose entry $D_{kk} = \mu_k$ is the eigenvalue of $C_u$ corresponding to the eigenvector $x_k$. The equation (2) can be interpreted as follows: to apply the linear mapping $C_u$ is the same as to change to the Fourier basis using $F$, then to apply a diagonal matrix, and then to change back to the standard basis using $F^{-1}$. Note that spectral decomposition in fact requires an orthonormal basis, i.e., every vector must have norm $1$, but in our case the Fourier basis is only orthogonal since all the vectors are of norm $\sqrt{n}$. This can be easily modified, but we stick here to the notation that is usually used in the textbooks.
From the previous analysis we know that $\lambda_k = \omega^{-k}$. Moreover, from (1) it follows that

(3) $\mu_k = u_0 + u_1 \omega^{-k} + u_2 \omega^{-2k} + \cdots + u_{n-1} \omega^{-(n-1)k}.$

The right-hand side can be rearranged using the following relations:

(4) $\omega^{-jk} = \overline{\omega^{jk}} = n F_{kj}.$

By applying (4) to (3), we get

(5) $\mu_k = n \left( F_{k0} u_0 + F_{k1} u_1 + \cdots + F_{k,n-1} u_{n-1} \right).$

Notice that the vector
$$\left( F_{k0}, F_{k1}, \dots, F_{k,n-1} \right)^\top$$
is exactly the $k$th column of $F$ (recall that $F$ is symmetric). Therefore,

(6) $\mu_k = n \left( F u \right)_k,$

i.e., the diagonal entries of $D$ are exactly the components of $n F u$.
From (6), we get

(7) $F (u \circledast v) = F C_u v = F F^{-1} D F v = D F v = n \left( F u \right) \circ \left( F v \right),$

where $\circ$ denotes the Hadamard product (componentwise product) of two vectors. We proved the following.
Theorem 2 (Convolution theorem).
For any two vectors $u, v \in \mathbb{C}^n$, we have
$$F(u \circledast v) = n \left( F u \circ F v \right).$$
In words, the previous theorem states that the discrete Fourier transform of the convolution of two vectors equals (up to scaling by $n$) the componentwise product of the discrete Fourier transforms of $u$ and $v$. It is possible to get rid of the scaling factor by choosing an orthonormal basis instead of an orthogonal one.
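The convolution theorem is easy to check against a library FFT. Note that `numpy.fft.fft` computes the common unnormalized transform (no $1/n$ factor), for which the scaling constant disappears entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
u, v = rng.standard_normal(n), rng.standard_normal(n)

# Circular convolution computed directly from the definition:
# w[k] = sum_j u[j] * v[(k - j) mod n].
w = np.array([sum(u[j] * v[(k - j) % n] for j in range(n)) for k in range(n)])

# Transform of the convolution = componentwise product of the transforms.
ok = np.allclose(np.fft.fft(w), np.fft.fft(u) * np.fft.fft(v))
```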
The complexity of computing the convolution by Theorem 2 depends on how fast we can multiply a vector by the matrices $F$ and $F^{-1}$. Using the very special structure of $F$ and $F^{-1}$, this can be done in time $O(n \log n)$, which is the main theme of the next section.
3 Fast Fourier Transform
We give the divide-and-conquer algorithm computing the discrete Fourier transform and the inverse discrete Fourier transform in time $O(n \log n)$, which is due to Cooley and Tukey [2]. For simplicity, we only show how to compute the product $F^{-1} v$; the product $F v$ can be computed analogously. We start with the following key lemma.
Lemma 3.
If $\omega_n = e^{2\pi i / n}$ and $n$ is even, then $\omega_n^2 = \omega_{n/2}$.
Proof.
We have $\omega_n^2 = e^{4\pi i / n} = e^{2\pi i / (n/2)} = \omega_{n/2}$. The statement of the lemma has the following geometric explanation. Let $\varphi$ be the angle of $\omega_n$. The angle of $\omega_n^2$ is $2\varphi$. On the other hand, $\omega_{n/2}$ divides the unit circle into fewer, but larger parts than $\omega_n$. Since there are half as many parts, the angle of $\omega_{n/2}$ is $2\varphi$ as well. ∎
We show on an example, for $n = 4$, how to derive the recursive fast Fourier transform in a matrix notation. Then we discuss how to generalize this for every $n$ that is a power of $2$, which we can always assume for simplicity. In this section we use $F_n$ to indicate the fact that we are considering a matrix of size $n \times n$. We also use $\omega_n$ to denote $e^{2\pi i / n}$.
We want to compute the product $F_4^{-1} v$. We start just by applying the regular matrix-vector multiplication. To find a recursion, we apply Lemma 3 and the properties of complex conjugation. We have
$$F_4^{-1} v = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & \omega_4 & \omega_4^2 & \omega_4^3 \\ 1 & \omega_4^2 & \omega_4^4 & \omega_4^6 \\ 1 & \omega_4^3 & \omega_4^6 & \omega_4^9 \end{pmatrix} \begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}.$$
We multiply and rearrange each component of the resulting vector:
$$F_4^{-1} v = \begin{pmatrix} (v_0 + v_2) + (v_1 + v_3) \\ (v_0 + \omega_4^2 v_2) + \omega_4 (v_1 + \omega_4^2 v_3) \\ (v_0 + \omega_4^4 v_2) + \omega_4^2 (v_1 + \omega_4^4 v_3) \\ (v_0 + \omega_4^6 v_2) + \omega_4^3 (v_1 + \omega_4^6 v_3) \end{pmatrix}.$$
We apply Lemma 3 and use the properties of the complex numbers $\omega_4$ and $\omega_2$ to obtain:
$$F_4^{-1} v = \begin{pmatrix} (v_0 + v_2) + (v_1 + v_3) \\ (v_0 + \omega_2 v_2) + \omega_4 (v_1 + \omega_2 v_3) \\ (v_0 + v_2) - (v_1 + v_3) \\ (v_0 + \omega_2 v_2) - \omega_4 (v_1 + \omega_2 v_3) \end{pmatrix}.$$
In the last equation, we are using the fact that $\omega_4^2 = \omega_2 = -1$ and $\omega_4^3 = -\omega_4$. For general $n$, we have $\omega_n^{n/2} = -1$. Now, we rearrange again:
$$F_4^{-1} v = \begin{pmatrix} F_2^{-1} \begin{pmatrix} v_0 \\ v_2 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & \omega_4 \end{pmatrix} F_2^{-1} \begin{pmatrix} v_1 \\ v_3 \end{pmatrix} \\[6pt] F_2^{-1} \begin{pmatrix} v_0 \\ v_2 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & \omega_4 \end{pmatrix} F_2^{-1} \begin{pmatrix} v_1 \\ v_3 \end{pmatrix} \end{pmatrix}.$$
Finally, we get
$$F_4^{-1} v = \begin{pmatrix} I_2 & D_2 \\ I_2 & -D_2 \end{pmatrix} \begin{pmatrix} F_2^{-1} & 0 \\ 0 & F_2^{-1} \end{pmatrix} \begin{pmatrix} v_0 \\ v_2 \\ v_1 \\ v_3 \end{pmatrix}.$$
We put
$$D_2 = \begin{pmatrix} 1 & 0 \\ 0 & \omega_4 \end{pmatrix}.$$
In general, for $n = 2^t$, by a similar derivation, we obtain:
$$F_n^{-1} v = \begin{pmatrix} I_{n/2} & D_{n/2} \\ I_{n/2} & -D_{n/2} \end{pmatrix} \begin{pmatrix} F_{n/2}^{-1} & 0 \\ 0 & F_{n/2}^{-1} \end{pmatrix} S_n v,$$
where $I_{n/2}$ is the identity matrix of size $n/2$, $D_{n/2} = \operatorname{diag}\left(1, \omega_n, \omega_n^2, \dots, \omega_n^{n/2-1}\right)$, and $S_n$ is the permutation matrix which puts first all the even components of $v$.
Let $T(n)$ be the time needed to compute $F_n^{-1} v$. First, the algorithm splits the vector $v$ into the even-numbered components $(v_0, v_2, \dots, v_{n-2})^\top$ and the odd-numbered components $(v_1, v_3, \dots, v_{n-1})^\top$, which can be done in time $O(n)$. Then, the algorithm computes $F_{n/2}^{-1} (v_0, v_2, \dots, v_{n-2})^\top$ and $F_{n/2}^{-1} (v_1, v_3, \dots, v_{n-1})^\top$. This takes $2T(n/2)$ time. The final multiplication by the block matrix can also be done in time $O(n)$, since $D_{n/2}$ is diagonal. We get the following recurrence:
$$T(n) = 2T(n/2) + O(n).$$
Using standard methods, it can be easily shown that $T(n) = O(n \log n)$.
4 Discrete Fourier Transform on Finite Abelian Groups
In this section, we consider functions of the form $f \colon G \to \mathbb{C}$, where $G$ is a finite abelian group. We denote the set of all such functions by $\mathbb{C}^G$. The set $\mathbb{C}^G$ is clearly a vector space. Throughout this section, we use the shorthand $n = |G|$.
It is folklore that every finite abelian group is isomorphic to a direct product of the form
$$\mathbb{Z}_{n_1} \times \mathbb{Z}_{n_2} \times \cdots \times \mathbb{Z}_{n_t},$$
where $n_1, \dots, n_t$ are powers of (not necessarily distinct) primes; see for example [7]. For every finite abelian group, we fix a canonical form by choosing the sequence $n_1, \dots, n_t$ to be nondecreasing. In what follows, when talking about a finite abelian group $G$, we always think about this canonical representation instead.
We order the elements of an abelian group $G$ lexicographically. Then, we can alternatively think of a function $f \in \mathbb{C}^G$ as a complex vector indexed by the group $G$. We refer to the elements of $\mathbb{C}^G$ as vectors. However, we prefer the functional notation for cosmetic reasons.
Convolution. The previous two sections dealt with the case when $G = \mathbb{Z}_n$. Here we derive the Fourier transform on $\mathbb{C}^G$, for any finite abelian group $G$. We again start with the important convolution product of two vectors $f, g \in \mathbb{C}^G$, which is usually defined by
$$(f \circledast g)(x) = \sum_{y \in G} f(y)\, g(x - y).$$
Similarly, the concept of a circulant matrix can be generalized to finite abelian groups. For $f \in \mathbb{C}^G$, the circulant matrix $C_f$, indexed by $G \times G$, is the matrix defined by
$$C_f[x, y] = f(x - y),$$
for $x, y \in G$. Note that here $C_f[x, y]$ denotes the entry of $C_f$ indexed by the elements $x$ and $y$. We use this notation throughout the section.
Example 3.
Let $G = \mathbb{Z}_2 \times \mathbb{Z}_2$ and let $f \in \mathbb{C}^G$ be a vector such that $f(0,0) = a$, $f(0,1) = b$, $f(1,0) = c$, and $f(1,1) = d$. Then the circulant of $f$ is the following matrix, with rows and columns indexed by $(0,0), (0,1), (1,0), (1,1)$ in the lexicographic order:
$$C_f = \begin{pmatrix} a & b & c & d \\ b & a & d & c \\ c & d & a & b \\ d & c & b & a \end{pmatrix}.$$
∎
The convolution $f \circledast g$ can be represented by multiplying the vector $g$ by the matrix $C_f$ from the left.
Lemma 4.
For $f, g \in \mathbb{C}^G$, we have $f \circledast g = C_f\, g$, where $C_f$ is the circulant matrix of $f$.
Proof.
We have $(C_f\, g)(x) = \sum_{y \in G} C_f[x, y]\, g(y) = \sum_{y \in G} f(x - y)\, g(y) = \sum_{z \in G} f(z)\, g(x - z) = (f \circledast g)(x)$, where we substituted $z = x - y$. ∎
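The definitions generalize verbatim to code. The following sketch (helper names are ours) represents a finite abelian group as a tuple of moduli, builds the circulant of a function, and checks Lemma 4 on random data:

```python
import numpy as np
from itertools import product

mods = (2, 4)  # G = Z_2 x Z_4, elements listed in lexicographic order
G = list(product(*(range(m) for m in mods)))
index = {g: i for i, g in enumerate(G)}

def sub(x, y):
    """Componentwise difference x - y in G."""
    return tuple((a - b) % m for a, b, m in zip(x, y, mods))

rng = np.random.default_rng(1)
f, g = rng.standard_normal(len(G)), rng.standard_normal(len(G))

# Circulant of f: C_f[x, y] = f(x - y).
C_f = np.array([[f[index[sub(x, y)]] for y in G] for x in G])

# Convolution from the definition: (f * g)(x) = sum_y f(y) g(x - y).
conv = np.array([sum(f[index[y]] * g[index[sub(x, y)]] for y in G) for x in G])
```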
Eigenvectors of circulants. The following theorem gives a recursive description of any circulant matrix and of its eigenvectors. Although it is not difficult to prove, to the best of our knowledge, this theorem is not available in this form in the current literature.
Before stating the theorem, we recall that the Kronecker product $A \otimes B$ of an $m \times n$ matrix $A$ with a $p \times q$ matrix $B$ is an $mp \times nq$ matrix defined by
$$A \otimes B = \begin{pmatrix} a_{11} B & \cdots & a_{1n} B \\ \vdots & \ddots & \vdots \\ a_{m1} B & \cdots & a_{mn} B \end{pmatrix}.$$
We also need several properties of the Kronecker product:
- If $Av = \lambda v$ and $Bw = \mu w$, then $\lambda \mu$ is an eigenvalue of $A \otimes B$ with the corresponding eigenvector $v \otimes w$. Any eigenvalue of $A \otimes B$ arises as such a product of eigenvalues of $A$ and $B$.
- We have $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$, whenever the products $AC$ and $BD$ are defined.
- If $A$ and $B$ are of the same size, and $C$ and $D$ are of the same size, then $(A \otimes C) \circ (B \otimes D) = (A \circ B) \otimes (C \circ D)$, where $\circ$ denotes the Hadamard (componentwise) product.
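These properties are easy to confirm numerically with `numpy.kron` (a sketch; the matrices are random):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
C, D = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))

# Eigenvalue property: (A x B)(v x w) = (lambda * mu) (v x w).
la, va = np.linalg.eig(A)
lb, vb = np.linalg.eig(B)
v = np.kron(va[:, 0], vb[:, 0])
ok_eigen = np.allclose(np.kron(A, B) @ v, la[0] * lb[0] * v)

# Mixed-product property: (A x B)(C x D) = (AC) x (BD).
ok_mixed = np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Hadamard property: (A x B) o (C x D) = (A o C) x (B o D).
ok_hadamard = np.allclose(np.kron(A, B) * np.kron(C, D), np.kron(A * C, B * D))
```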
Theorem 5.
Let $G = \mathbb{Z}_{n_1} \times H$ be a finite abelian group in the canonical form, where $H = \mathbb{Z}_{n_2} \times \cdots \times \mathbb{Z}_{n_t}$, such that $G$ is not cyclic. Then for every circulant $C_f$, $f \in \mathbb{C}^G$, the following hold:

- $C_f = \sum_{c \in \mathbb{Z}_{n_1}} P^c \otimes C_{f_c}$, where $P$ is the cyclic shift matrix of size $n_1 \times n_1$ and $f_c \in \mathbb{C}^H$ is defined by $f_c(h) = f(c, h)$.

- For every eigenvector $x$ of any $\mathbb{Z}_{n_1}$-circulant and every eigenvector $y$ of any $H$-circulant, there is an eigenvector $z$ of $C_f$ such that $z = x \otimes y$. In particular, every circulant over $G$ has the same set of eigenvectors.
Proof.
First, we prove (i). Consider the submatrix $C_f[\{a\} \times H, \{b\} \times H]$ of $C_f$ indexed by the sets $\{a\} \times H$ and $\{b\} \times H$, for some fixed $a, b \in \mathbb{Z}_{n_1}$. For every $h, h' \in H$, we have
$$C_f[(a, h), (b, h')] = f(a - b, h - h') = f_{a-b}(h - h') = C_{f_{a-b}}[h, h'].$$
It follows that $C_f[\{a\} \times H, \{b\} \times H]$ is a circulant depending only on the difference $a - b$. Clearly, $C_f[\{a\} \times H, \{b\} \times H] = C_f[\{a + c\} \times H, \{b + c\} \times H]$, for every $c \in \mathbb{Z}_{n_1}$. The mapping $(a, b) \mapsto (a + 1, b + 1)$ defines an action of $\mathbb{Z}_{n_1}$ on the pairs of blocks, and two blocks of $C_f$ are equal if their index pairs belong to the same orbit under this action. The powers of the permutation matrix $P$ are in one-to-one correspondence with the orbits of this action. We therefore have $C_f = \sum_c P^c \otimes C_{f_c}$, for $c = 0, \dots, n_1 - 1$, with the addition taken modulo $n_1$. This concludes the proof of part (i).

Now, we prove (ii). From (i), we know that if $x$ is an eigenvector of $P$ and $y$ is a common eigenvector of the $H$-circulants, then $x \otimes y$ is an eigenvector of each $P^c \otimes C_{f_c}$, for $c = 0, \dots, n_1 - 1$, and hence also of their sum $C_f$. By induction, we may assume that every $H$-circulant has the same set of eigenvectors. Therefore, every eigenvector of $C_f$ must be of the form $x \otimes y$, where $x$ and $y$ are exactly as in the statement. ∎
Fourier Basis. By Theorem 5, every circulant over $G$ has the same set of eigenvectors. Before we prove that these eigenvectors form an orthogonal basis of $\mathbb{C}^G$, we derive a precise description of every one of them.
Lemma 6.
Let $G = \mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_t}$ be an abelian group in the canonical form. Then for every $a = (a_1, \dots, a_t) \in G$, there is an eigenvector $x_a$ of every circulant over $G$ given by
$$x_a(b) = \omega_{n_1}^{a_1 b_1}\, \omega_{n_2}^{a_2 b_2} \cdots \omega_{n_t}^{a_t b_t}, \qquad b = (b_1, \dots, b_t) \in G.$$
Proof.
We prove the lemma by induction on $t$. If $t = 1$, then the group $G$ is cyclic and the statement follows from the arguments in Section 2. Suppose that the statement holds for any abelian group with at most $t - 1$ factors. By Theorem 5, every eigenvector is of the form $x \otimes y$, where $x = x_{a_1}$ for some $a_1 \in \mathbb{Z}_{n_1}$, and $y = x_{(a_2, \dots, a_t)}$ for some $(a_2, \dots, a_t) \in \mathbb{Z}_{n_2} \times \cdots \times \mathbb{Z}_{n_t}$. By the induction hypothesis and the definition of the Kronecker product we have
$$(x \otimes y)(b) = \omega_{n_1}^{a_1 b_1} \cdot \omega_{n_2}^{a_2 b_2} \cdots \omega_{n_t}^{a_t b_t} = x_a(b).$$
∎
Recall that the inner product on $\mathbb{C}^G$ is defined in the expected way: $\langle f, g \rangle = \sum_{x \in G} f(x)\, \overline{g(x)}$. Finally, we prove that the eigenvectors $x_a$, $a \in G$, form an orthogonal basis, called the Fourier basis.
Theorem 7 (Fourier basis).
The vectors $x_a$, $a \in G$, form an orthogonal basis of $\mathbb{C}^G$.
Proof.
In Section 2, we proved this for the case when $G$ is a cyclic group. Assume now that $G$ is not cyclic. Let $a, a' \in G$. By Theorem 5, we have $x_a = x \otimes y$ and $x_{a'} = x' \otimes y'$. Moreover, from the properties of the Kronecker product, it follows that
$$\langle x_a, x_{a'} \rangle = \langle x, x' \rangle \cdot \langle y, y' \rangle.$$
To compute the right-hand side, we proceed by induction. If $a = a'$, then it easily follows from Lemma 6 that $\langle x_a, x_a \rangle = |G|$. If $a \neq a'$, then $x \neq x'$ or $y \neq y'$; by induction, we have $\langle x, x' \rangle = 0$ or $\langle y, y' \rangle = 0$, therefore, also $\langle x_a, x_{a'} \rangle = 0$. ∎
The standard basis of $\mathbb{C}^G$ consists of the vectors $e_a$, $a \in G$, where $e_a(b) = 1$ if $a = b$ and $e_a(b) = 0$ if $a \neq b$. The discrete Fourier transform on the finite abelian group $G$ is then the change of basis matrix from the standard basis to the Fourier basis. The matrix could be derived in an analogous way as for $\mathbb{Z}_n$ in Section 2, but we omit it here.
Fourier Analysis on the Boolean Cube. If $G = \mathbb{Z}_2^t$, then $G$ is often called the Boolean cube. We note that in this special case the eigenvectors can be identified with the subsets of $\{1, \dots, t\}$. In particular, for every $a \in \mathbb{Z}_2^t$, we put $S = \{ j : a_j = 1 \}$ and $x_S = x_a$. Then by Lemma 6, for $b \in \mathbb{Z}_2^t$, we have
$$x_S(b) = (-1)^{\sum_{j \in S} b_j}.$$
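For the Boolean cube these eigenvectors take values $\pm 1$. A short sketch (helper names are ours) building all $2^t$ of them and checking their orthogonality:

```python
import numpy as np
from itertools import product

t = 3
cube = list(product([0, 1], repeat=t))  # elements of Z_2^t

def chi(S):
    """Eigenvector x_S(b) = (-1)^{sum of b_j over j in S}."""
    return np.array([(-1) ** sum(b[j] for j in S) for b in cube])

# All subsets of {0, ..., t-1}, encoded by bitmasks.
subsets = [tuple(j for j in range(t) if (mask >> j) & 1) for mask in range(2 ** t)]
X = np.array([chi(S) for S in subsets])  # rows are the 2^t vectors x_S

# Pairwise orthogonality: <x_S, x_T> = 2^t if S = T and 0 otherwise.
ok = np.allclose(X @ X.T, (2 ** t) * np.eye(2 ** t))
```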
Fourier analysis on the Boolean cube is especially important in many applications in theoretical computer science; for a survey, see [4].
Characters of an Abelian Group. Finally, we conclude this section by comparing our derivation of the Fourier transform on $\mathbb{C}^G$ with the approach taken in other textbooks. Typically, the exposition starts by introducing characters of abelian groups.
Let $\mathbb{T}$ be the multiplicative group of complex numbers on the unit circle. A character, also called a one-dimensional representation, of a finite abelian group $G$ is a homomorphism $\chi \colon G \to \mathbb{T}$. It can be proved that the characters form an orthogonal basis and this is then called the Fourier basis. The issue is that at the first encounter, this construction, though elegant, might seem to come out of the blue.
On the other hand, we first start with the convolution product. Its great importance clearly motivates our next endeavour. We realize that it can be represented by a linear mapping, namely the circulant. Then the natural step is to compute its eigenvectors and to discover that they form an orthogonal basis. Moreover, from Lemma 6 it is also easy to see that every eigenvector of a circulant is a character, and it is also straightforward to prove that every character of an abelian group is an eigenvector of every circulant.
Nevertheless, the connection between characters and Fourier basis is crucial when developing Fourier analysis on finite nonabelian groups, which relies on representation theory of groups. This is out of the scope of this paper.
References
 [1] Barry A Cipra. The best of the 20th century: Editors name top 10 algorithms. SIAM News, 33(4):1–2, 2000.
 [2] James W Cooley and John W Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, 1965.
 [3] Sanjoy Dasgupta, Christos H Papadimitriou, and Umesh V Vazirani. Algorithms. McGrawHill Higher Education, 2008.
 [4] Ronald De Wolf. A brief introduction to Fourier analysis on the Boolean cube. Theory of Computing, pages 1–20, 2008.
 [5] Ida Kantor, Jiří Matoušek, and Robert Šámal. Mathematics++, volume 75. American Mathematical Soc., 2015.
 [6] María C Pereyra and Lesley A Ward. Harmonic analysis: from Fourier to wavelets, volume 63. American Mathematical Soc., 2012.
 [7] Joseph J Rotman. An introduction to the theory of groups, volume 148. Springer Science & Business Media, 2012.
 [8] Gilbert Strang and Kaija Aarikka. Introduction to applied mathematics, volume 16. WellesleyCambridge Press Wellesley, MA, 1986.
 [9] Audrey Terras. Fourier analysis on finite groups and applications. Number 43. Cambridge University Press, 1999.