# Discrete and Fast Fourier Transform Made Clear

The fast Fourier transform was included in the Top 10 Algorithms of the 20th Century by Computing in Science & Engineering. In this paper, we provide a new, simple derivation of both the discrete Fourier transform and the fast Fourier transform by means of elementary linear algebra. We start the exposition by introducing the convolution product of vectors, represented by a circulant matrix, and derive the discrete Fourier transform as the change of basis matrix that diagonalizes the circulant matrix. We also generalize our approach to derive the Fourier transform on any finite abelian group, where the case of the Fourier transform on the Boolean cube is especially important for many applications in theoretical computer science.


## 1 Introduction

The discrete Fourier transform is a change of basis matrix which, for a vector given by its coordinates in the standard basis, computes its coordinates in the Fourier basis. The fast Fourier transform is then just a fast way of computing the discrete Fourier transform of any vector. According to Computing in Science & Engineering, the fast Fourier transform is one of the top 10 most influential algorithms of the 20th century [1]. Without a doubt, every theoretical computer scientist should have at least a basic understanding of the fundamental principles of this algorithm.

There are many excellent books dealing with these topics, for example [3, 5, 6, 8, 9]. In this paper, we offer a new presentation of the discrete and the fast Fourier transform. We believe that the key to an exposition that makes the material accessible to a wider audience, such as undergraduates and people interested in algorithms, is to start with some appealing motivation. Our starting point is the convolution product of two vectors in the complex vector space $\mathbb{C}^n$. In general, convolution is a fundamental concept in mathematics and it appears in many different forms with applications in image processing, digital data processing, acoustics, electrical engineering, physics, probability theory, multiplication of polynomials, and more. An important part of our presentation is that no new notion is introduced out of the blue, but with a clear justification.

In the space $\mathbb{C}^n$, the convolution of two vectors $c$ and $d$ is again a vector $c*d$ that can be represented by constructing the circulant matrix $C$ of $c$ and computing the product $Cd$. It is clear that $Cd$ can be computed in time $O(n^2)$; however, due to the numerous applications of convolution, it is desirable to compute it faster. This naturally leads to the spectral decomposition of $C$, which is guaranteed by the very specific structure of a circulant matrix. The basis of orthogonal eigenvectors which diagonalizes $C$ is exactly the Fourier basis. The fast Fourier transform computes the coordinates of a vector in the Fourier basis in time $O(n\log n)$, which then also gives an algorithm computing the convolution of two vectors in the same time.

In general, Fourier analysis provides an orthogonal basis of the complex functions defined on an abelian group $G$. Note that the vectors in $\mathbb{C}^n$ can be identified with the functions $f\colon \mathbb{Z}_n\to\mathbb{C}$, where $\mathbb{Z}_n$ are the integers modulo $n$ with addition. Some of the important settings are, for example, $G=\mathbb{R}$ and $G=\mathbb{R}/\mathbb{Z}$, which correspond to the classical Fourier transform and Fourier series, respectively, $G=\mathbb{Z}_n$, which corresponds to the discrete Fourier transform, and $G=\mathbb{Z}_2^n$, which corresponds to Fourier analysis on the Boolean cube.

In this paper, we focus on the case when $G$ is finite, where we can think of the functions $f\colon G\to\mathbb{C}$ as complex vectors indexed by $G$. We generalize the approach for $\mathbb{Z}_n$ to derive the Fourier transform on $G$. To this end, we recall the not-so-well-known definition of a $G$-circulant matrix, which can be used to represent the convolution of complex functions on $G$. The Fourier basis is formed by the eigenvectors of the $G$-circulant. We derive a recursive description of $G$-circulants and of their eigenvectors, which, to the best of our knowledge, is not stated in the literature in the form presented in this paper.

The main idea underlying Fourier analysis on finite abelian groups is a basic fact of linear algebra: if a linear mapping has an orthogonal basis of eigenvectors, we can see it as a diagonal matrix in this basis. The infinite-dimensional case is more complicated, but the rough idea is similar. We will see this manifested throughout the paper.

Prerequisites.

We assume that the reader is familiar with basic linear algebra. In particular, the important concepts which we need are the following: representation of a linear mapping by a matrix, eigenvalues and eigenvectors, spectral decomposition.

Structure of the paper. In Section 2, we derive the discrete Fourier transform. In Section 3, we derive the fast Fourier transform using a matrix notation, which makes the derivation more transparent. In Section 4, we generalize the approach from Section 2 to any finite abelian group $G$.

Acknowledgements. We would like to thank Pavel Klavík, Martin Černý, Milan Hladík, and Roman Nedela for many useful comments.

## 2 Discrete Fourier Transform

Our starting point is the discrete circular convolution, which naturally leads to the discrete Fourier transform. It is an operation that, given two vectors $c,d\in\mathbb{C}^n$, produces a third vector $c*d\in\mathbb{C}^n$. Instead of saying what each component of the vector $c*d$ looks like, we take a different approach. For the vector $c=(c_0,c_1,\dots,c_{n-1})^T$, we construct the circulant matrix:

$$C=\begin{bmatrix}
c_0 & c_{n-1} & \cdots & c_2 & c_1\\
c_1 & c_0 & c_{n-1} & \cdots & c_2\\
c_2 & c_1 & c_0 & \ddots & \vdots\\
\vdots & \vdots & \ddots & \ddots & c_{n-1}\\
c_{n-1} & c_{n-2} & \cdots & c_1 & c_0
\end{bmatrix}.$$

The first column of $C$ is the vector $c$, and each subsequent column is the cyclic shift of the previous column by one position in the downward direction.

###### Example 1.

The circulant matrix corresponding to the vector $c=(1,2,3)^T$ is the matrix

$$\begin{bmatrix}1 & 3 & 2\\ 2 & 1 & 3\\ 3 & 2 & 1\end{bmatrix}.$$

We define the convolution of the vectors $c$ and $d$ by

$$c*d := Cd.$$

Here, the matrix $C$ represents the linear mapping $\mathbb{C}^n\to\mathbb{C}^n$ defined by $d\mapsto Cd$.

By definition, computing $c*d$ requires $\Theta(n^2)$ arithmetic operations, which is the number of operations needed for a matrix–vector multiplication. However, the structure of the matrix $C$ is very special. It turns out that the eigenvectors of $C$ form an orthogonal basis of $\mathbb{C}^n$, which means that, in a suitable basis, the convolution can be represented by a diagonal matrix. This can be used to derive an algorithm, called the fast Fourier transform, that computes $c*d$ in time $O(n\log n)$.
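As a quick numerical sanity check of the definitions so far, the following sketch (using NumPy; the helper name `circulant` is ours, chosen for illustration) builds the circulant matrix of $c=(1,2,3)^T$ from Example 1 and computes a convolution as the product $Cd$:

```python
import numpy as np

def circulant(c):
    # First column is c; entry (i, j) is c[(i - j) mod n], so each
    # column is the previous one cyclically shifted down by one.
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([1, 2, 3])
C = circulant(c)          # matches the matrix of Example 1
d = np.array([4, 5, 6])
conv = C @ d              # c * d, computed with Theta(n^2) operations
```

Here `conv` is the circular convolution of `c` and `d`; it agrees with the FFT-based circular convolution that the later sections derive.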

### 2.1 Eigenvectors of a Circulant Matrix

In linear algebra, the first thing to do when one encounters a new linear mapping is to try to compute its eigenvalues and eigenvectors. In case the eigenvectors form an orthogonal basis, the linear mapping is represented by a diagonal matrix with respect to this basis. As we will see, this is exactly the case for every circulant.

Note that we can write

$$C = c_0 I + c_{n-1}P + c_{n-2}P^2 + \cdots + c_1 P^{n-1},$$

where

$$P=\begin{bmatrix}
0 & 1 & 0 & \cdots & 0\\
0 & 0 & 1 & \cdots & 0\\
\vdots & & \ddots & \ddots & \vdots\\
0 & 0 & \cdots & 0 & 1\\
1 & 0 & \cdots & 0 & 0
\end{bmatrix}.$$
###### Example 2.

For the vector $c=(1,2,3)^T$, we have

$$\begin{bmatrix}1 & 3 & 2\\ 2 & 1 & 3\\ 3 & 2 & 1\end{bmatrix}
= 1\cdot\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}
+ 3\cdot\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0\end{bmatrix}
+ 2\cdot\begin{bmatrix}0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}
= 1\cdot P^0 + 3\cdot P^1 + 2\cdot P^2.$$
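This decomposition is easy to verify numerically; the sketch below (NumPy; the helper name `shift_matrix` is ours) rebuilds the circulant of Example 2 from powers of $P$:

```python
import numpy as np

def shift_matrix(n):
    # P has ones on the superdiagonal and in the bottom-left corner,
    # so (Pv)_i = v_{(i+1) mod n}.
    P = np.zeros((n, n), dtype=int)
    for i in range(n):
        P[i, (i + 1) % n] = 1
    return P

c = [1, 2, 3]
n = len(c)
P = shift_matrix(n)
# C = c0*I + c_{n-1}*P + c_{n-2}*P^2 + ... + c1*P^{n-1}
C = sum(c[(n - j) % n] * np.linalg.matrix_power(P, j) for j in range(n))
```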

If $v$ is an eigenvector of $P$, i.e., $Pv=\lambda v$, then we have

$$Cv = (c_0 I + c_{n-1}P + c_{n-2}P^2 + \cdots + c_1 P^{n-1})v
= c_0 v + c_{n-1}Pv + c_{n-2}P^2v + \cdots + c_1 P^{n-1}v
= c_0 v + c_{n-1}\lambda v + c_{n-2}\lambda^2 v + \cdots + c_1\lambda^{n-1}v
= (c_0 + c_{n-1}\lambda + c_{n-2}\lambda^2 + \cdots + c_1\lambda^{n-1})\,v.$$

In other words, every eigenvector of $P$ is an eigenvector of $C$. The corresponding eigenvalue of $C$ can then be computed from the formula

$$c_0 + c_{n-1}\lambda + c_{n-2}\lambda^2 + \cdots + c_1\lambda^{n-1}, \qquad (1)$$

where $\lambda$ is the eigenvalue of $P$ corresponding to the eigenvector $v$. To determine the eigenvalues and eigenvectors of $C$, it thus suffices to determine the eigenvalues and eigenvectors of $P$.

Eigenvectors of the Matrix $P$. Suppose that $Pv=\lambda v$. The matrix $P$ is a permutation matrix, and therefore it is unitary, i.e., $P^*P=I$. The eigenvalues of $P$ lie on the unit circle in $\mathbb{C}$. In fact, this is true for every unitary matrix $U$:

$$\|v\| = \|Uv\| = \|\lambda v\| = |\lambda|\cdot\|v\|.$$

This implies $|\lambda|=1$. Moreover, we can see that the order of the permutation represented by $P$ is $n$, i.e., $P^n=I$. It follows that if $\lambda$ is an eigenvalue of $P$, then $\lambda^n$ is an eigenvalue of $P^n=I$. Since we know that $I$ has all its eigenvalues equal to $1$, the candidates for the eigenvalues of $P$ are exactly the $n$-th roots of unity: $w^k$, for $k=0,\dots,n-1$ and $w=e^{2\pi i/n}$.

Now, we find the eigenvector $\chi_k$ associated to the eigenvalue $w^k$. Suppose that $\chi_k=(x_0,x_1,\dots,x_{n-1})^T$. We have

$$\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_{n-1}\\ x_0\end{bmatrix}
= P\chi_k = w^k\chi_k
= w^k\begin{bmatrix}x_0\\ x_1\\ \vdots\\ x_{n-2}\\ x_{n-1}\end{bmatrix}.$$

The entries of the vector $\chi_k$ must satisfy the following system of linear equations:

$$\begin{aligned}
x_1 &= w^k x_0,\\
x_2 &= w^k x_1 = w^{2k}x_0,\\
&\ \,\vdots\\
x_{n-1} &= w^k x_{n-2} = \cdots = w^{(n-1)k}x_0,\\
x_0 &= w^k x_{n-1} = \cdots = w^{nk}x_0 = x_0.
\end{aligned}$$

If we pick an arbitrary value for $x_0$, then $x_1,\dots,x_{n-1}$ are uniquely determined. We put $x_0=1$. The eigenvectors of $P$, and therefore also the eigenvectors of $C$, are

$$\chi_k = \begin{bmatrix}w^{0\cdot k}\\ w^{1\cdot k}\\ \vdots\\ w^{(n-1)k}\end{bmatrix}
= \begin{bmatrix}e^{2\pi i\cdot 0\cdot k/n}\\ e^{2\pi i\cdot 1\cdot k/n}\\ \vdots\\ e^{2\pi i(n-1)k/n}\end{bmatrix},$$

for $k=0,\dots,n-1$. The corresponding eigenvalues $\Lambda_k$ of $C$, satisfying $C\chi_k=\Lambda_k\chi_k$, are given by

$$\Lambda_k = c_0 + c_{n-1}w^k + c_{n-2}w^{2k} + \cdots + c_1 w^{(n-1)k}.$$
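The formulas for $\chi_k$ and $\Lambda_k$ can be checked directly; the following NumPy sketch (variable names are ours) verifies that every $\chi_k$ is an eigenvector of a sample circulant with eigenvalue $\Lambda_k$:

```python
import numpy as np

n = 4
w = np.exp(2j * np.pi / n)
# Fourier basis vectors chi_k = (w^{0k}, w^{k}, ..., w^{(n-1)k})
chi = [np.array([w ** (j * k) for j in range(n)]) for k in range(n)]

c = np.array([1.0, 2.0, 3.0, 4.0])
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# Lambda_k = c0 + c_{n-1} w^k + c_{n-2} w^{2k} + ... + c1 w^{(n-1)k}
Lam = [sum(c[(n - j) % n] * w ** (j * k) for j in range(n)) for k in range(n)]
ok = all(np.allclose(C @ chi[k], Lam[k] * chi[k]) for k in range(n))
```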

### 2.2 Fourier Basis

The key property of the eigenvectors $\chi_0,\dots,\chi_{n-1}$ of $P$ is that they form an orthogonal basis of $\mathbb{C}^n$, called the Fourier basis. This means that we can apply the spectral decomposition to every circulant matrix $C$.

###### Lemma 1.

The vectors $\chi_0,\dots,\chi_{n-1}$ form an orthogonal basis of $\mathbb{C}^n$.

###### Proof.

We have

$$\chi_k^*\chi_\ell = \sum_{j=0}^{n-1}\overline{w^{jk}}\,w^{j\ell} = \sum_{j=0}^{n-1}w^{-jk}w^{j\ell} = \sum_{j=0}^{n-1}w^{j(\ell-k)}.$$

If $k=\ell$, then clearly $\chi_k^*\chi_k = n$. If $k\neq\ell$, then we have

$$\chi_k^*\chi_\ell = \sum_{j=0}^{n-1}w^{(\ell-k)j} = \alpha^0 + \alpha^1 + \cdots + \alpha^{n-1} = \frac{1-\alpha^n}{1-\alpha} = 0,$$

where $\alpha = w^{\ell-k}\neq 1$. The last equality follows from the fact that $\alpha^n = 1$.

There is a more geometric way of proving that $S := \alpha^0+\alpha^1+\cdots+\alpha^{n-1} = 0$. Since the complex number $\alpha-1$ is nonzero, $\alpha S = S$ implies $S = 0$. The equality $\alpha S = S$ is clearly satisfied: multiplication by $\alpha$ only cyclically permutes the summands, because $\alpha\cdot\alpha^{n-1} = \alpha^n = 1$. Geometrically, multiplication by $\alpha$ is a rotation by some angle $\varphi$. The complex numbers $\alpha^0,\dots,\alpha^{n-1}$ are the vertices of a regular $m$-gon, where $m$ is the order of $\alpha$ (note that $m$ divides $n$). The angle $\varphi$ is then equal to $2\pi/m$. Rotation by the angle $\varphi$ just permutes the vertices of the $m$-gon, thus the sum $S$ remains the same; since the only point fixed by a nontrivial rotation is the origin, $S = 0$. ∎

### 2.3 Discrete Fourier Transform as Change of Basis Matrix

The discrete Fourier transform is the change of basis matrix from the standard basis $\{e_0,\dots,e_{n-1}\}$ to the Fourier basis $\{\chi_0,\dots,\chi_{n-1}\}$. We can easily find the change of basis matrix from the Fourier basis to the standard basis by placing the vectors $\chi_k$ into the columns:

$$F = \begin{bmatrix}\vline & \vline & & \vline\\ \chi_0 & \chi_1 & \cdots & \chi_{n-1}\\ \vline & \vline & & \vline\end{bmatrix}.$$

Note that in Lemma 1, we proved that $F^*F = nI$. By rearranging, we have $F^{-1} = \frac{1}{n}F^*$. The matrix $D := F^{-1}$ is called the discrete Fourier transform, and the inverse discrete Fourier transform is $D^{-1} = F$. In particular, we have

$$F = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1\\
1 & w & w^2 & \cdots & w^{n-1}\\
1 & w^2 & w^4 & \cdots & w^{2(n-1)}\\
\vdots & & & \ddots & \vdots\\
1 & w^{n-1} & w^{2(n-1)} & \cdots & w^{(n-1)^2}
\end{bmatrix}$$

and

$$F^{-1} = \frac{1}{n}\begin{bmatrix}
1 & 1 & 1 & \cdots & 1\\
1 & w^{-1} & w^{-2} & \cdots & w^{-(n-1)}\\
1 & w^{-2} & w^{-4} & \cdots & w^{-2(n-1)}\\
\vdots & & & \ddots & \vdots\\
1 & w^{-(n-1)} & w^{-2(n-1)} & \cdots & w^{-(n-1)^2}
\end{bmatrix}.$$

Actually, the matrix $F$ is symmetric, so the inverse can be computed by merely taking the complex conjugate of $F$ and dividing by $n$, i.e., $F^{-1} = \frac{1}{n}\overline{F}$.
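These identities are easy to confirm numerically; the sketch below builds $F$ and checks both the orthogonality relation $F^*F = nI$ from Lemma 1 and the inversion formula $F^{-1} = \frac{1}{n}\overline{F}$:

```python
import numpy as np

n = 8
w = np.exp(2j * np.pi / n)
# F has entries F[j, k] = w^{jk}; its columns are the vectors chi_k.
F = np.array([[w ** (j * k) for k in range(n)] for j in range(n)])

gram = F.conj().T @ F          # Lemma 1: should equal n * I
Finv = F.conj() / n            # F is symmetric, so F^{-1} = conj(F) / n
```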

### 2.4 Convolution Theorem

We use the matrix $D$ to diagonalize $C$. Since the columns of the matrix $D^{-1} = F$ are exactly the eigenvectors of $C$, we have the following:

$$C = D^{-1}\operatorname{diag}(\Lambda_0,\dots,\Lambda_{n-1})\,D, \qquad (2)$$

where $\Lambda_k$ is the eigenvalue of $C$ corresponding to the eigenvector $\chi_k$. Equation (2) can be interpreted as follows: applying the linear mapping $d\mapsto Cd$ is the same as changing to the Fourier basis using $D$, then applying a diagonal matrix, and then changing back to the standard basis using $D^{-1}$. Note that spectral decomposition in fact requires an orthonormal basis, i.e., every vector must have norm $1$, but in our case the Fourier basis is only orthogonal, since all the vectors $\chi_k$ are of norm $\sqrt{n}$. This can be easily remedied by rescaling, but we stick here to the notation that is usually used in the textbooks.

From the previous analysis, we know that $\lambda = w^k$. Moreover, from (1) it follows that

$$\Lambda_k = c_0 w^{0k} + c_{n-1}w^{1k} + c_{n-2}w^{2k} + \cdots + c_1 w^{(n-1)k}. \qquad (3)$$

The right-hand side can be rearranged using the following relations:

$$w^{-k} = w^{(n-1)k},\quad w^{-2k} = w^{(n-2)k},\quad \dots,\quad w^{-(n-1)k} = w^{k}. \qquad (4)$$

By applying (4) to (3), we get

$$\Lambda_k = c_0 w^{0k} + c_1 w^{-k} + c_2 w^{-2k} + \cdots + c_{n-1} w^{-(n-1)k}. \qquad (5)$$

Notice that the vector

$$\begin{bmatrix}w^{0k}\\ w^{-k}\\ w^{-2k}\\ \vdots\\ w^{-(n-1)k}\end{bmatrix}$$

is exactly the $k$-th column of $\overline{F}$. Therefore,

$$\overline{F}c = \begin{bmatrix}\Lambda_0\\ \Lambda_1\\ \Lambda_2\\ \vdots\\ \Lambda_{n-1}\end{bmatrix}, \qquad (6)$$

i.e., the diagonal entries of $\operatorname{diag}(\Lambda_0,\dots,\Lambda_{n-1})$ are exactly the components of the vector $\overline{F}c = nDc$.

From (2) and (6), we get

$$c*d = Cd = D^{-1}\operatorname{diag}(\Lambda_0,\dots,\Lambda_{n-1})\,Dd = D^{-1}\bigl(nDc\circ Dd\bigr), \qquad (7)$$

where $\circ$ denotes the Hadamard (componentwise) product of two vectors. We have proved the following.

###### Theorem 2 (Convolution theorem).

For any two vectors $c,d\in\mathbb{C}^n$, we have

$$D(c*d) = n\,(Dc\circ Dd).$$

In words, the previous theorem states that the discrete Fourier transform of the convolution of two vectors $c$ and $d$ equals (up to scaling by $n$) the componentwise product of the discrete Fourier transforms of $c$ and $d$. It is possible to get rid of the scaling factor by choosing an orthonormal basis instead of an orthogonal one.
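The convolution theorem can be verified numerically; in the sketch below, $D = F^{-1} = \frac{1}{n}\overline{F}$ plays the role of the discrete Fourier transform:

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
c, d = rng.standard_normal(n), rng.standard_normal(n)

w = np.exp(2j * np.pi / n)
F = np.array([[w ** (j * k) for k in range(n)] for j in range(n)])
D = F.conj() / n                    # discrete Fourier transform D = F^{-1}

C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
conv = C @ d                        # c * d

lhs = D @ conv                      # D(c * d)
rhs = n * (D @ c) * (D @ d)         # n (Dc o Dd), Hadamard product
```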

The complexity of computing the convolution in this way depends on how fast we can multiply a vector by the matrices $D$ and $D^{-1}$. Using the very special structure of $F$ and $\overline{F}$, this can be done in time $O(n\log n)$, which is the main theme of the next section.

## 3 Fast Fourier Transform

We give the divide-and-conquer algorithm computing the discrete Fourier transform and the inverse discrete Fourier transform in time $O(n\log n)$, which is due to Cooley and Tukey [2]. For simplicity, we only show how to compute the product $Fx$; the other product can be computed analogously. We start with the following key lemma.

###### Lemma 3.

If $w_n = e^{2\pi i/n}$, then $w_n^2 = e^{2\pi i/(n/2)} = w_{n/2}$.

###### Proof.

We have $w_n^2 = e^{2\cdot 2\pi i/n} = e^{2\pi i/(n/2)}$. The statement of the lemma has the following geometric explanation. Let $\varphi = 2\pi/n$ be the angle of $w_n$. The angle of $w_n^2$ is $2\varphi$. On the other hand, $w_{n/2}$ divides the unit circle into half as many, but twice as large, parts as $w_n$. Since $2\varphi = 2\pi/(n/2)$, we get $w_n^2 = w_{n/2}$. ∎

We show on an example for $n=4$ how to derive the recursive fast Fourier transform in a matrix notation. Then we discuss how to generalize this for $n=2^k$, which we can always assume for simplicity. In this section, we use $F_n$ to indicate the fact that we are considering a matrix of size $n\times n$. We also use $w_n$ to denote $e^{2\pi i/n}$.

We want to compute the product $y = F_4x$. We start just by applying the regular matrix-vector multiplication. To find a recursion, we apply Lemma 3 and the properties of complex conjugation. We have

$$y = F_4x = \begin{bmatrix}1 & 1 & 1 & 1\\ 1 & w_4 & w_4^2 & w_4^3\\ 1 & w_4^2 & w_4^4 & w_4^6\\ 1 & w_4^3 & w_4^6 & w_4^9\end{bmatrix}\begin{bmatrix}x_0\\ x_1\\ x_2\\ x_3\end{bmatrix}.$$

We multiply and rearrange each component of the resulting vector:

$$\begin{aligned}
y_0 &= (x_0 + x_2) + w_4^0\,(x_1 + x_3),\\
y_1 &= (x_0 + w_4^2x_2) + w_4^1\,(x_1 + w_4^2x_3),\\
y_2 &= (x_0 + w_4^4x_2) + w_4^2\,(x_1 + w_4^4x_3),\\
y_3 &= (x_0 + w_4^6x_2) + w_4^3\,(x_1 + w_4^6x_3).
\end{aligned}$$

We apply Lemma 3 and use the properties of the complex numbers $w_4$ and $w_2$ to obtain:

$$\begin{aligned}
y_0 &= (x_0 + w_2^0x_2) + w_4^0\,(x_1 + w_2^0x_3),\\
y_1 &= (x_0 + w_2^1x_2) + w_4^1\,(x_1 + w_2^1x_3),\\
y_2 &= (x_0 + w_2^0x_2) - w_4^0\,(x_1 + w_2^0x_3),\\
y_3 &= (x_0 + w_2^1x_2) - w_4^1\,(x_1 + w_2^1x_3).
\end{aligned}$$

In the last equation, we are using the fact that $w_4^2 = w_2$ and $w_4^{k+2} = -w_4^k$. For general $n$, we have $w_n^{k+n/2} = -w_n^k$. Now, we rearrange again:

$$\begin{bmatrix}y_0\\ y_1\end{bmatrix} = F_2\begin{bmatrix}x_0\\ x_2\end{bmatrix} + A_2\,F_2\begin{bmatrix}x_1\\ x_3\end{bmatrix}, \qquad
\begin{bmatrix}y_2\\ y_3\end{bmatrix} = F_2\begin{bmatrix}x_0\\ x_2\end{bmatrix} - A_2\,F_2\begin{bmatrix}x_1\\ x_3\end{bmatrix}.$$

Finally, we get

$$y = \begin{bmatrix}I_2 & A_2\\ I_2 & -A_2\end{bmatrix}\cdot\begin{bmatrix}F_2 & \\ & F_2\end{bmatrix}\cdot\begin{bmatrix}x_0\\ x_2\\ x_1\\ x_3\end{bmatrix}.$$

We put

$$F_2 = \begin{bmatrix}1 & 1\\ 1 & w_2\end{bmatrix} = \begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}, \qquad A_2 = \begin{bmatrix}w_4^0 & 0\\ 0 & w_4^1\end{bmatrix}.$$

In general, for $n = 2^k$, by a similar derivation, we obtain:

$$y = F_nx = \begin{bmatrix}I_{n/2} & A_{n/2}\\ I_{n/2} & -A_{n/2}\end{bmatrix}\cdot\begin{bmatrix}F_{n/2} & \\ & F_{n/2}\end{bmatrix}\cdot P_\pi\cdot x,$$

where $I_{n/2}$ is the $(n/2)\times(n/2)$ identity matrix, $A_{n/2} = \operatorname{diag}(w_n^0, w_n^1,\dots,w_n^{n/2-1})$, and $P_\pi$ is the permutation matrix which puts first all the even-indexed components of $x$.

Let $T(n)$ be the time needed to compute $F_nx$. First, the algorithm splits the vector $x$ into the even-indexed components $(x_0, x_2,\dots,x_{n-2})$ and the odd-indexed components $(x_1, x_3,\dots,x_{n-1})$, which can be done in time $\Theta(n)$. Then, the algorithm recursively computes $F_{n/2}(x_0,\dots,x_{n-2})^T$ and $F_{n/2}(x_1,\dots,x_{n-1})^T$. This takes $2T(n/2)$ time. The final multiplication by the block matrix can also be done in time $\Theta(n)$, since $A_{n/2}$ is diagonal. We get the following recurrence:

$$T(n) = 2T(n/2) + \Theta(n).$$

Using standard methods, it can be easily shown that $T(n) = \Theta(n\log n)$.
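A minimal recursive implementation of the scheme above can be sketched as follows (our code, not from the paper; note that with the convention $(F_n)_{jk} = w_n^{jk}$ used here, the result equals $n\cdot\mathrm{ifft}(x)$ in NumPy's sign convention):

```python
import numpy as np

def fft(x):
    # Computes y = F_n x, where (F_n)_{jk} = w_n^{jk} and w_n = e^{2 pi i / n};
    # n must be a power of two.
    n = len(x)
    if n == 1:
        return np.array(x, dtype=complex)
    even = fft(x[0::2])                    # F_{n/2} applied to even-indexed part
    odd = fft(x[1::2])                     # F_{n/2} applied to odd-indexed part
    # A_{n/2} = diag(w_n^0, ..., w_n^{n/2-1}) applied to the odd half
    a = np.exp(2j * np.pi * np.arange(n // 2) / n) * odd
    return np.concatenate([even + a, even - a])

x = np.arange(8, dtype=float)
y = fft(x)
```

The two halves of the returned vector correspond exactly to the block rows $[I_{n/2}\ A_{n/2}]$ and $[I_{n/2}\ {-A_{n/2}}]$ of the factorization.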

## 4 Discrete Fourier Transform on Finite Abelian Groups

In this section, we consider functions of the form $f\colon G\to\mathbb{C}$, where $G$ is a finite abelian group. We denote the set of all such functions by $L(G)$. The set $L(G)$ is clearly a vector space. In this section, we use the following shorthand: $e(x) := e^{2\pi i x}$.

It is folklore that every finite abelian group is isomorphic to a direct product of the form

$$\mathbb{Z}_{k_1}\times\cdots\times\mathbb{Z}_{k_u},$$

where $k_1,\dots,k_u$ are powers of (not necessarily distinct) primes; see for example [7]. For every finite abelian group, we fix a canonical form by choosing the sequence $k_1,\dots,k_u$ to be non-decreasing. In what follows, when talking about a finite abelian group $G$, we always have this canonical representation in mind.

We order the elements of an abelian group $G$ lexicographically. Then, we can alternatively think of $f\in L(G)$ as a complex vector indexed by the group $G$. We refer to the elements of $L(G)$ as vectors. However, we prefer the functional notation for cosmetic reasons.

Convolution. The previous two sections dealt with the case when $G = \mathbb{Z}_n$. Here we derive the Fourier transform on any finite abelian group $G$. We again start with the important convolution product of two vectors $v,u\in L(G)$, which is usually defined by

$$(v*u)(x) = \sum_{y\in G} v(x-y)\,u(y).$$

Similarly, the concept of a circulant matrix can be generalized to finite abelian groups. For $v\in L(G)$, the $G$-circulant matrix $C$ of $v$, indexed by $G\times G$, is defined by

$$C(x,y) := v(x-y),$$

for $x,y\in G$. Note that here $C(x,y)$ denotes the entry of $C$ indexed by the pair $x$ and $y$. We use this notation throughout the section.

###### Example 3.

Let $G = \mathbb{Z}_3\times\mathbb{Z}_2$ and let $v\in L(G)$ be a vector such that $v(0,0)=a$, $v(0,1)=b$, $v(1,0)=c$, $v(1,1)=d$, $v(2,0)=e$, $v(2,1)=f$. Then, with the elements of $G$ ordered lexicographically, the $G$-circulant of $v$ is the following matrix:

$$\begin{bmatrix}
a & b & e & f & c & d\\
b & a & f & e & d & c\\
c & d & a & b & e & f\\
d & c & b & a & f & e\\
e & f & c & d & a & b\\
f & e & d & c & b & a
\end{bmatrix}.$$

The convolution $v*u$ can be represented by multiplying the vector $u$ by the matrix $C$ from the left.

###### Lemma 4.

For $u,v\in L(G)$, we have $v*u = Cu$, where $C$ is the $G$-circulant matrix of $v$.

###### Proof.

We have $(Cu)(x) = \sum_{y\in G} C(x,y)\,u(y) = \sum_{y\in G} v(x-y)\,u(y) = (v*u)(x)$. ∎
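The definition can be tested against Example 3; the sketch below (our helper names) builds the $G$-circulant for $G=\mathbb{Z}_3\times\mathbb{Z}_2$ and reproduces the matrix above:

```python
from itertools import product

ks = (3, 2)                                   # G = Z_3 x Z_2
G = list(product(*(range(k) for k in ks)))    # lexicographic order

def sub(x, y):
    # componentwise subtraction in G
    return tuple((a - b) % k for a, b, k in zip(x, y, ks))

v = dict(zip(G, "abcdef"))    # v(0,0)=a, v(0,1)=b, ..., v(2,1)=f
# G-circulant: C(x, y) = v(x - y)
C = [[v[sub(x, y)] for y in G] for x in G]
```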

Eigenvectors of $G$-circulants. The following theorem gives a recursive description of any $G$-circulant matrix and of its eigenvectors. Although it is not difficult to prove, to the best of our knowledge, this theorem is not available in this form in the current literature.

Before stating the theorem, we recall that the Kronecker product of an $m\times n$ matrix $A$ with a $p\times q$ matrix $B$ is an $mp\times nq$ matrix defined by

$$A\otimes B = \begin{bmatrix}a_{1,1}B & \cdots & a_{1,n}B\\ \vdots & \ddots & \vdots\\ a_{m,1}B & \cdots & a_{m,n}B\end{bmatrix}.$$

We also need several properties of the Kronecker product:

• If $Au = \lambda u$ and $Bv = \mu v$, then $\lambda\mu$ is an eigenvalue of $A\otimes B$ with the corresponding eigenvector $u\otimes v$. Any eigenvalue of $A\otimes B$ arises as such a product of eigenvalues of $A$ and $B$.

• We have $\overline{A\otimes B} = \overline{A}\otimes\overline{B}$.

• If $A$ and $B$ are of the same size, and $C$ and $D$ are of the same size, then $(A\otimes C)\circ(B\otimes D) = (A\circ B)\otimes(C\circ D)$, where $\circ$ denotes the Hadamard (componentwise) product.

###### Theorem 5.

Let $G$ and $G'$ be abelian groups such that $G$ is not cyclic and $G = \mathbb{Z}_k\times G'$. Then for every $G$-circulant $C$, the following hold:

• There are $G'$-circulants $C_0,\dots,C_{k-1}$ such that

$$C = I\otimes C_0 + P\otimes C_{k-1} + P^2\otimes C_{k-2} + \cdots + P^{k-1}\otimes C_1,$$

where $P$ is the $k\times k$ cyclic shift matrix defined in Section 2.

• Every eigenvector $\chi$ of $C$ is of the form $\chi = \chi_1\otimes\chi'$, where $\chi_1$ is an eigenvector of any $\mathbb{Z}_k$-circulant and $\chi'$ is an eigenvector of any $G'$-circulant. In particular, every $G$-circulant has the same set of eigenvectors.

###### Proof.

First, we prove (i). Consider the submatrix $C_{i,j}$ of $C$ indexed by the sets $\{i\}\times G'$ and $\{j\}\times G'$, for some fixed $i,j\in\mathbb{Z}_k$. For every $x = (i,x')$ and $y = (j,y')$, we have

$$C_{i,j}(x,y) = v(x-y) = v\bigl((i-j,\,x'-y')\bigr).$$

It follows that $C_{i,j}$ is a $G'$-circulant.

Clearly, $C_{i,j} = C_{i+1,j+1}$, for every $i,j\in\mathbb{Z}_k$. The mapping $(i,j)\mapsto(i+1,j+1)$ defines an action of $\mathbb{Z}_k$ on $\mathbb{Z}_k\times\mathbb{Z}_k$. We have $C_{i,j} = C_{i',j'}$ if $(i,j)$ and $(i',j')$ belong to the same orbit under this action. The powers of the permutation matrix $P$ are in one-to-one correspondence with the orbits of this action. We put $C_\ell := C_{i,j}$, for $\ell = i-j$, with the subtraction taken modulo $k$. This concludes the proof of part (i).

Now, we prove (ii). From (i), we know that if $\chi$ is an eigenvector of $C$, then it is also an eigenvector of each $C_\ell$, for $\ell = 0,\dots,k-1$. By induction, we may assume that every $G'$-circulant has the same set of eigenvectors. Therefore, $\chi$ must be of the form $\chi_1\otimes\chi'$, where $\chi_1$ and $\chi'$ are exactly as in the statement. ∎

Fourier Basis. Denote the set of eigenvectors of any $G$-circulant by $\{\chi_g : g\in G\}$. Before we prove that it forms an orthogonal basis of $L(G)$, we derive a precise description of every $\chi_g$.

###### Lemma 6.

Let $G = \mathbb{Z}_{k_1}\times\cdots\times\mathbb{Z}_{k_u}$ be an abelian group. Then for $g = (g_1,\dots,g_u)\in G$,

$$\chi_g(x) = e\!\left(\sum_{i=1}^u\frac{g_ix_i}{k_i}\right),\qquad x\in G.$$
###### Proof.

We prove the lemma by induction on $u$. If $u = 1$, then the group $G$ is cyclic and the statement follows from the arguments in Section 2. Suppose that the statement holds for any abelian group with at most $u-1$ factors. By Theorem 5, we have $\chi_g = \chi_{g_1}\otimes\chi_{g'}$, where $g = (g_1,g')$, for some $g_1\in\mathbb{Z}_{k_1}$ and $g'\in G' = \mathbb{Z}_{k_2}\times\cdots\times\mathbb{Z}_{k_u}$. By the induction hypothesis and the definition of the Kronecker product, we have

$$\chi_g(x) = (\chi_{g_1}\otimes\chi_{g'})(x) = \chi_{g_1}(x_1)\cdot\chi_{g'}(x_2,\dots,x_u)
= e\!\left(\frac{g_1x_1}{k_1}\right)\cdot e\!\left(\sum_{i=2}^u\frac{g_ix_i}{k_i}\right)
= e\!\left(\sum_{i=1}^u\frac{g_ix_i}{k_i}\right). \qquad\blacksquare$$

Recall that the inner product on $L(G)$ is defined in the expected way: $u^*v = \sum_{x\in G}\overline{u(x)}\,v(x)$. Finally, we prove that the set $\{\chi_g : g\in G\}$ forms an orthogonal basis, called the Fourier basis.

###### Theorem 7 (Fourier basis).

The set $\{\chi_g : g\in G\}$ forms an orthogonal basis of $L(G)$.

###### Proof.

In Section 2, we proved this for the case when $G$ is a cyclic group. Assume now that $G$ is not cyclic. Let $g,h\in G$. By Theorem 5, we have $\chi_g = \chi_{g_1}\otimes\chi_{g'}$ and $\chi_h = \chi_{h_1}\otimes\chi_{h'}$. Moreover, from the properties of the Kronecker product, it follows that

$$\overline{\chi_g}\circ\chi_h = (\overline{\chi_{g_1}}\otimes\overline{\chi_{g'}})\circ(\chi_{h_1}\otimes\chi_{h'}) = (\overline{\chi_{g_1}}\circ\chi_{h_1})\otimes(\overline{\chi_{g'}}\circ\chi_{h'}).$$

To compute $\chi_g^*\chi_h$, we proceed by induction. It suffices to evaluate the sum

$$\sum_{x\in G}(\overline{\chi_g}\circ\chi_h)(x)
= \sum_{(x_1,x')\in G}\overline{\chi_{g_1}(x_1)}\cdot\chi_{h_1}(x_1)\cdot\overline{\chi_{g'}(x')}\cdot\chi_{h'}(x')
= \sum_{x_1\in\mathbb{Z}_{k_1}}\left(\overline{\chi_{g_1}(x_1)}\cdot\chi_{h_1}(x_1)\sum_{x'\in G'}\overline{\chi_{g'}(x')}\cdot\chi_{h'}(x')\right)
= \sum_{x_1\in\mathbb{Z}_{k_1}}\overline{\chi_{g_1}(x_1)}\cdot\chi_{h_1}(x_1)\cdot\chi_{g'}^*\chi_{h'}.$$

If $g = h$, then it easily follows from Lemma 6 that $\chi_g^*\chi_g = |G|$. If $g\neq h$, then $g'\neq h'$ or $g_1\neq h_1$. If $g'\neq h'$, then by induction we have $\chi_{g'}^*\chi_{h'} = 0$, and therefore also $\chi_g^*\chi_h = 0$; if $g' = h'$ and $g_1\neq h_1$, then $\chi_g^*\chi_h = |G'|\cdot\chi_{g_1}^*\chi_{h_1} = 0$ by the cyclic case from Section 2. ∎
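Lemma 6 and Theorem 7 together can be confirmed numerically; a sketch over $G=\mathbb{Z}_3\times\mathbb{Z}_2$ (our helper names):

```python
import numpy as np
from itertools import product

ks = (3, 2)
G = list(product(*(range(k) for k in ks)))

def chi(g):
    # Lemma 6: chi_g(x) = e(sum_i g_i x_i / k_i), with e(t) = exp(2 pi i t)
    return np.array([np.exp(2j * np.pi * sum(gi * xi / ki
                     for gi, xi, ki in zip(g, x, ks))) for x in G])

# Theorem 7: chi_g^* chi_h = |G| if g = h, and 0 otherwise.
ok = all(np.isclose(np.vdot(chi(g), chi(h)), len(G) if g == h else 0)
         for g in G for h in G)
```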

The standard basis of $L(G)$ consists of the vectors $e_g$, $g\in G$, where $e_g(x) = 1$ if $x = g$ and $e_g(x) = 0$ if $x\neq g$. The discrete Fourier transform on the finite abelian group $G$ is then the change of basis matrix from the standard basis to the Fourier basis. The matrix could be derived in a way analogous to the case of $\mathbb{Z}_n$ in Section 2, but we omit it here.

Fourier Analysis on the Boolean Cube. If $G = \mathbb{Z}_2^n$, then $G$ is often called the Boolean cube. We note that in this special case the eigenvectors can be identified with the subsets of $\{1,\dots,n\}$. In particular, for every $g\in\mathbb{Z}_2^n$, we put $S = \{i : g_i = 1\}$. Then by Lemma 6, for $x\in\mathbb{Z}_2^n$, we have

$$\chi_g(x) = (-1)^{\sum_{i\in S}x_i}.$$

Fourier analysis on the Boolean cube is especially important in many applications in theoretical computer science; for a survey, see [4].
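For the Boolean cube, the characters take only the values $\pm 1$; the following sketch lists them for $n=3$ and checks pairwise orthogonality:

```python
import numpy as np
from itertools import product

n = 3
cube = list(product([0, 1], repeat=n))        # elements of Z_2^n

def chi(S):
    # Character indexed by the set S: chi_S(x) = (-1)^{sum_{i in S} x_i}
    return np.array([(-1) ** sum(x[i] for i in S) for x in cube])

subsets = [tuple(i for i in range(n) if g[i] == 1) for g in cube]
ok = all(np.dot(chi(S), chi(T)) == (2 ** n if S == T else 0)
         for S in subsets for T in subsets)
```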

Characters of an Abelian Group. Finally, we conclude this section by comparing our derivation of Fourier transform on with the approach in other textbooks. Typically, the exposition starts by introducing characters of abelian groups.

Let $T = \{z\in\mathbb{C} : |z| = 1\}$ be the multiplicative group of complex numbers on the unit circle. A character, also called a one-dimensional representation, of a finite abelian group $G$ is a homomorphism $\chi\colon G\to T$. It can be proved that the characters form an orthogonal basis, and this basis is then called the Fourier basis. The issue is that at the first encounter, this construction, though elegant, might seem to come out of the blue.

On the other hand, we first start with the convolution product. Its great importance clearly motivates our next endeavour. We realize that it can be represented by a linear mapping, namely the $G$-circulant. Then the natural step is to compute its eigenvectors and to discover that they form an orthogonal basis. Moreover, from Lemma 6 it is also easy to see that every eigenvector of a $G$-circulant is a character, and it is also straightforward to prove that every character of an abelian group is an eigenvector of every $G$-circulant.

Nevertheless, the connection between characters and the Fourier basis is crucial when developing Fourier analysis on finite non-abelian groups, which relies on the representation theory of groups. This is outside the scope of this paper.

## References

• [1] Barry A Cipra. The best of the 20th century: Editors name top 10 algorithms. SIAM news, 33(4):1–2, 2000.
• [2] James W Cooley and John W Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, 1965.
• [3] Sanjoy Dasgupta, Christos H Papadimitriou, and Umesh V Vazirani. Algorithms. McGraw-Hill Higher Education, 2008.
• [4] Ronald de Wolf. A brief introduction to Fourier analysis on the Boolean cube. Theory of Computing, pages 1–20, 2008.
• [5] Ida Kantor, Jiří Matoušek, and Robert Šámal. Mathematics++, volume 75. American Mathematical Soc., 2015.
• [6] María C Pereyra and Lesley A Ward. Harmonic analysis: from Fourier to wavelets, volume 63. American Mathematical Soc., 2012.
• [7] Joseph J Rotman. An introduction to the theory of groups, volume 148. Springer Science & Business Media, 2012.
• [8] Gilbert Strang and Kaija Aarikka. Introduction to applied mathematics, volume 16. Wellesley-Cambridge Press Wellesley, MA, 1986.
• [9] Audrey Terras. Fourier analysis on finite groups and applications. Number 43. Cambridge University Press, 1999.