Tucker Tensor Decomposition on FPGA

Tensor computation has emerged as a powerful mathematical tool for solving high-dimensional and extreme-scale problems in science and engineering. The last decade has witnessed tremendous advances in tensor computation and its applications in machine learning and big data. However, its hardware optimization on resource-constrained devices remains an (almost) unexplored field. This paper presents a hardware accelerator for a classical tensor computation framework, Tucker decomposition. We study three modules of this architecture: tensor-times-matrix (TTM), matrix singular value decomposition (SVD), and tensor permutation, and implement them on a Xilinx FPGA for prototyping. To further reduce the computing time, we propose a warm-start algorithm for the Jacobi iterations in SVD. A fixed-point simulator is used to evaluate the performance of our design. Synthetic data sets and a real MRI data set are used to validate the design and evaluate its performance. Compared with state-of-the-art software toolboxes running on both CPU and GPU, our work achieves a 2.16x-30.2x speedup on the cardiac MRI data set.




I Introduction

As a multi-dimensional extension of matrices, tensors are a natural tool to represent and process multi-way data arrays [1]. For instance, an MRI sequence with three spatial dimensions and a time dimension can be naturally represented as a 4-way tensor. The convolution layer in a neural network is also a tensor. Leveraging various tensor decompositions [2, 3, 4], many high-dimensional data mining [5], machine learning [6, 7, 8, 9, 10, 11] and EDA [12, 13] problems have been solved efficiently without suffering from the curse of dimensionality.

Due to their superior performance in processing high-volume data, tensors have emerged as a promising tool to enable real-time machine learning and data analysis. Recently, tensor algorithms have achieved great success in training and compressing deep neural networks [6, 7, 8, 9, 10, 11]. The resulting tensorized models require far less memory, run-time and power than the original deep neural network models. Tensor algorithms have also been successfully employed to accelerate medical image analysis [14], anomaly detection [15] and speech recognition [16].

Despite the rapid progress of tensor algorithms, the hardware/algorithm co-design of tensor computation on resource-constrained platforms remains a new and (almost) unexplored field. Some tensor libraries [17, 18, 19] have been developed for high-performance platforms such as clusters and supercomputers. However, little algorithm-architecture co-design targeting power- and cost-limited devices has been done, which has limited the application of tensor-based data analysis and machine learning on IoT and edge devices. Although many hardware accelerators are available for matrix and vector computations [20, 21] and have been applied to machine learning [22, 23, 24] and signal processing [25], they cannot handle tensor data, because the underlying theory and numerical procedures are fundamentally different. It is inefficient and error-prone to process tensor data with matrix- or vector-computation accelerators. Therefore, resource-constrained hardware accelerators for tensor computation are highly desired.

This paper presents, for the first time, a hardware accelerator for one of the most important tensor algorithms: Tucker decomposition [3]. Tucker decomposition is a high-order generalization of singular value decomposition (SVD) and principal component analysis (PCA), and it often achieves orders-of-magnitude higher data compression ratios than matrix compression algorithms on multi-way data. This method has been widely used in facial recognition [26], signal processing [27], deep learning [28] and data mining [5]. Tucker decomposition is often implemented via the high-order orthogonal iteration (HOOI) [29]. This algorithm involves computation-intensive operations such as the tensor-times-matrix (TTM) product and matrix SVD. Meanwhile, handling the huge amount of tensor data on an FPGA or ASIC is a challenging task.

The contributions of our paper are summarized below:


  • On the hardware side, we present a hardware architecture for Tucker decomposition. We describe the design and data communication of three units: TTM, SVD via Jacobi iterations, and tensor permutation/reshaping.

  • On the algorithm side, we propose a warm-start algorithm to reduce the cost of the Jacobi iterations.

  • We analyze the performance of our accelerator, implement it on a Xilinx FPGA, and show the implementation results.

  • We compare our FPGA accelerator with some state-of-the-art algorithms on both CPU and GPU, and demonstrate its application on an MRI compression task. Our accelerator shows up to 30.2x speedup on the MRI data set.

II Algorithm Background

In this section, we introduce the necessary background of tensors and Tucker decomposition.

II-A Tensors and Basic Tensor Operations

Notations: We use boldface lower-case letters (e.g., a) to denote vectors, boldface capital letters (e.g., A) to denote matrices, and boldface Euler script letters (e.g., X) to denote tensors.

A tensor X is an N-way multidimensional data array, where N is called the order or way of X. An integer n in {1, ..., N} is used as the index of a specific dimension or mode, which has size I_n. An entry of a tensor is specified by an index vector; for instance, the (i_1, i_2, ..., i_N)-th entry of X is denoted by x_{i_1 i_2 ... i_N}. Clearly, a vector and a matrix are order-1 and order-2 tensors, respectively.

Fig. 1: Left to right: a tensor, slices, and fibers.

Tensor Fiber and Slice: A fiber is a one-dimensional fragment of a tensor, obtained by fixing all indices but one. Tensor fibers are the higher-order extension of matrix rows and columns. A third-order tensor has mode-1, mode-2 and mode-3 fibers, denoted by x_{:jk}, x_{i:k} and x_{ij:}, respectively. A tensor slice is a two-dimensional fragment of a tensor, obtained by fixing all indices but two. For instance, X_{i::}, X_{:j:} and X_{::k} denote the horizontal, lateral, and frontal slices of a 3-way tensor, respectively. Fig. 1 shows the slices and fibers of a tensor.

Tensor permutation: Tensor permutation changes the mode order of a tensor; it is a high-order extension of the matrix transpose. For instance, permuting a 3-way tensor X to the mode order (3, 1, 2) generates a new tensor Y with y_{kij} = x_{ijk}.
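In software, a tensor permutation is a generalized transpose. A minimal NumPy sketch (the 2 x 3 x 4 tensor and the mode order (3, 1, 2) are illustrative choices):

```python
import numpy as np

X = np.arange(24).reshape(2, 3, 4)      # a 2 x 3 x 4 tensor

# Permute the modes to the order (3, 1, 2): y[k, i, j] = x[i, j, k].
Y = np.transpose(X, axes=(2, 0, 1))
print(Y.shape)                          # (4, 2, 3)
print(Y[3, 1, 2] == X[1, 2, 3])         # True
```

Note that, exactly as the paper argues, this operation touches every element of the tensor, which is why the proposed architecture tries to avoid explicit permutations inside TTM.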

Fig. 2: Tensor unfolding.

Unfolding: Unfolding (or matricization), as shown in Fig. 2, is the process of transforming a tensor into a matrix. Unfolding reorders the elements of an N-way data array into a matrix. For instance, the mode-n unfolding of a tensor X is denoted by X_(n); its rows are indexed by i_n and its columns enumerate all combinations of the remaining indices. The (i_1, ..., i_N)-th element of X maps to the (i_n, j)-th element of X_(n), where

  j = 1 + sum_{k != n} (i_k - 1) J_k,   J_k = prod_{m < k, m != n} I_m.
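A mode-n unfolding can be sketched in NumPy as below. The column ordering of an unfolding is a convention; this row-major variant (bring mode n to the front, then flatten) may order columns differently from the paper's definition, but it is self-consistent:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: bring mode n to the front, flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

X = np.arange(24).reshape(2, 3, 4)
print(unfold(X, 0).shape)   # (2, 12)
print(unfold(X, 1).shape)   # (3, 8)
print(unfold(X, 2).shape)   # (4, 6)
```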


TTM: The tensor-times-matrix (TTM) product is a high-dimensional extension of matrix multiplication. Given a tensor X of size I_1 x ... x I_N and a matrix U of size J x I_n, their mode-n product Y = X x_n U is a new tensor of size I_1 x ... x I_{n-1} x J x I_{n+1} x ... x I_N, with entries

  y_{i_1 ... i_{n-1} j i_{n+1} ... i_N} = sum_{i_n = 1}^{I_n} x_{i_1 ... i_N} u_{j i_n}.


It can also be expressed as a matrix-matrix multiplication on the mode-n unfolding:

  Y_(n) = U X_(n).


However, implementing TTM this way directly can be inefficient in hardware, because a tensor permutation is needed first, which moves almost all the data in the tensor.
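As a software reference (not the hardware datapath), TTM via the unfolding identity Y_(n) = U X_(n) can be sketched in NumPy; `ttm` is an illustrative helper:

```python
import numpy as np

def ttm(X, U, n):
    """Mode-n product X x_n U via the identity Y_(n) = U @ X_(n)."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # unfold mode n
    Yn = U @ Xn                                         # plain GEMM
    shape = (U.shape[0],) + X.shape[:n] + X.shape[n + 1:]
    return np.moveaxis(Yn.reshape(shape), 0, n)         # fold back

X = np.random.rand(4, 5, 6)
U = np.random.rand(3, 5)          # maps mode 2 (size 5) to size 3
Y = ttm(X, U, 1)
print(Y.shape)                    # (4, 3, 6)
```

The `moveaxis` calls are exactly the data movement the paper's TTM unit is designed to avoid.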

Tensor Norms: A tensor norm maps any tensor to a non-negative scalar. A widely used tensor norm is the Frobenius norm, defined as

  ||X||_F = sqrt( sum_{i_1, ..., i_N} x_{i_1 ... i_N}^2 ).


II-B Tensor Tucker Decomposition

Given an N-way tensor X, we may compress it with the Tucker decomposition [30]. As shown in Fig. 3, the Tucker decomposition approximates a large tensor with a small core tensor G and N factor matrices:

  X ~ G x_1 U^(1) x_2 U^(2) ... x_N U^(N),


where all columns are orthonormal inside each factor matrix U^(n) of size I_n x R_n. We call (R_1, ..., R_N) the multilinear rank of X. When R_n is much smaller than I_n, X can be represented in the above Tucker format at a much lower cost. Once all factor matrices are obtained, the core tensor can be computed as G = X x_1 U^(1)T x_2 ... x_N U^(N)T.

Fig. 3: Tucker decomposition.

Two popular methods can be used to compute a Tucker decomposition. The first well-known method is the high-order SVD (HOSVD) [3]. The idea of HOSVD is simple:


  • First, unfold X along mode n to get X_(n);

  • Then, perform a singular value decomposition (SVD) X_(n) = U S V^T;

  • Finally, pick U^(n) as the first R_n columns of U.
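The steps above can be sketched in NumPy as follows; `hosvd` is an illustrative software reference, not the FPGA implementation:

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated HOSVD: U^(n) = leading left singular vectors of the
    mode-n unfolding; the core is X contracted with every U^(n)^T."""
    U = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # unfold
        u = np.linalg.svd(Xn, full_matrices=False)[0]       # left vectors
        U.append(u[:, :r])                                  # truncate
    G = X
    for n, u in enumerate(U):                               # core tensor
        G = np.moveaxis(np.tensordot(u.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

X = np.random.rand(5, 6, 7)
G, U = hosvd(X, (3, 3, 3))
print(G.shape)          # (3, 3, 3)
```

With full ranks the reconstruction is exact; with truncated ranks it gives the (sub-optimal) initialization that HOOI then refines.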

This method is easy to implement, but it is not optimal in fitting the data. Therefore, an alternating least-squares method called HOOI is widely used to obtain a better solution.

HOOI: The high-order orthogonal iteration (HOOI) [29, 31, 32] aims to minimize the approximation error ||X - G x_1 U^(1) ... x_N U^(N)||_F through the iterative process shown in Alg. 1. Each iteration of HOOI performs two steps for every mode index n: (1) obtain a tensor Y via a power iteration (TTM along all modes except n), which can be done from mode N, then mode N-1, and so on down to mode 1; (2) perform an SVD on the mode-n unfolding of Y to extract the mode-n factor matrix U^(n).

In practice, the initialization via HOSVD can be very time-consuming, because it needs N SVD operations, each of which works on a matrix whose size equals that of the original tensor. Therefore, random orthonormal matrices are often used as the initial factor matrices. In this case, the total number of iterations may increase slightly, but the total runtime can decrease significantly. Even so, when the tensor is large, the time and energy consumed to compute the Tucker decomposition can be very high.

1:  Initialize {U^(n)} via HOSVD
2:  while not converged do
3:     for n = 1, ..., N do
4:        Y <- X x_1 U^(1)T ... x_{n-1} U^(n-1)T x_{n+1} U^(n+1)T ... x_N U^(N)T
5:        Unfold Y and perform SVD: Y_(n) = U S V^T
6:        U^(n) <- the first R_n columns of U
7:     end for
8:  end while
9:  return G = X x_1 U^(1)T ... x_N U^(N)T and {U^(n)}.
Algorithm 1 HOOI for Tucker Decomposition
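For reference, a floating-point NumPy sketch of Alg. 1 with the random orthonormal initialization discussed above (`hooi` and `ttm_t` are illustrative helpers; the FPGA design implements the same loop in fixed point):

```python
import numpy as np

def ttm_t(X, U, n):
    """Apply U^T along mode n: X x_n U^T."""
    return np.moveaxis(np.tensordot(U.T, np.moveaxis(X, n, 0), axes=1), 0, n)

def hooi(X, ranks, iters=10, seed=0):
    """HOOI with random orthonormal initialization (Alg. 1 variant)."""
    N = X.ndim
    rng = np.random.default_rng(seed)
    U = [np.linalg.qr(rng.standard_normal((X.shape[n], ranks[n])))[0]
         for n in range(N)]
    for _ in range(iters):
        for n in range(N):
            Y = X
            for m in range(N):          # power iteration: all modes but n
                if m != n:
                    Y = ttm_t(Y, U[m], m)
            Yn = np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)
            u = np.linalg.svd(Yn, full_matrices=False)[0]
            U[n] = u[:, :ranks[n]]
    G = X
    for n in range(N):                  # final core tensor
        G = ttm_t(G, U[n], n)
    return G, U
```

A fixed iteration count stands in for the convergence test of Alg. 1.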

III Proposed Architecture

Fig. 4: Overall structure of our Tucker decomposition.

In this section, we propose a new hardware architecture to perform Tucker decomposition via HOOI.

Fig. 4 shows the system-level diagram of our architecture. To load and accommodate the huge number of tensor data elements, our design stores the tensor in an external DRAM. The HOOI engine consists of three parts: a tensor-times-matrix (TTM) unit, a singular value decomposition (SVD) unit, and a tensor permuting/reshaping unit. The data elements of the three units are stored in different memories. Because a tensor can be very large, all tensors (including the intermediate and final results of TTM) are stored in the external DRAM. The matrix for each SVD is stored in an on-chip memory to reduce latency and achieve maximum throughput. Both the TTM and SVD units are parallel and pipelined to achieve maximum performance. The tensor permuting unit moves data between the on-chip memory used by the SVD unit and the external DRAM used by the TTM unit, and it permutes the tensor accordingly.

III-A Tensor-Times-Matrix (TTM) Unit

The TTM unit could be implemented as a matrix-matrix multiplication, which is available in common computational libraries. However, tensors would need to be permuted first, which is time- and memory-consuming because almost all data elements of the tensor have to be moved. In this work, we develop a TTM unit that avoids tensor permutations. For simplicity, tensors are always stored by incrementing the mode-1 index first, then the mode-2 index, and so forth. Since the size of a tensor is often beyond the capacity of an on-chip memory, all data elements (including the input and output data and the intermediate results of a TTM operation) are stored in an external DRAM.

TTM with a 2-D PE Array. Our TTM unit is shown in Fig. 5. To maximize the throughput of the TTM module, we design a 2-D array of processing elements (PEs). Each PE computes the product of one scalar element of the tensor (black arrow) and one scalar from the matrix (either from the blue line or stored in the PE) in each clock cycle. The PEs in a single column are always connected to the same bus, so they handle the same tensor element at a time. The input tensor can be either the original tensor or an intermediate tensor after the TTM with some factor matrices. With multiple columns, this module can handle several neighbouring elements of a tensor fiber at the same time. Due to our method of storing tensors, these fibers are always obtained by changing only the first mode index. Each row of the array handles one column of the factor matrix.

Fig. 5: The TTM unit. The red part is used when computing the mode-1 product, and the blue part is used when computing a mode-n product with n > 1.

We compute the mode-n TTMs in decreasing order, from n = N down to n = 1. The PE array works in different ways depending on the value of n, as explained below.


  • Assume that n > 1 and that we have done the TTMs for all modes larger than n. Let us ignore mode n for simplicity. To simplify the expression, we fold the remaining modes into a single mode and use one index for it. The element-wise expression of this operation is


    In each clock cycle, each vertical bus carries some neighbouring elements of a tensor fiber, and each horizontal bus carries an element of the factor matrix. Consequently, each PE calculates one product term.

    Finally, each output element is obtained by summing the above product terms in each round. Note that the index of the matrix element used at each PE does not depend on the fiber index. Therefore, we can multiply a whole fiber with the same matrix element, and all PEs in the same row share the same matrix data by connecting them to the same horizontal bus. Because the size of the folded dimension is very large, we divide the partly-folded tensor into small sub-tensors, and the resulting tensor into corresponding sub-tensors, computing the output sub-tensors one by one to reduce the buffer size.

  • When computing the mode-1 TTM, the element-wise result is


    Each row of the PE array computes the product terms of one output element. Because the number of PEs per row is usually smaller than the fiber length, all product terms must be computed over several cycles, with each PE calculating one product term per cycle.

    Note that in the mode-1 TTM, the RAM inside each PE stores its required factor-matrix entries. Each output element of the TTM is obtained by accumulating all product terms of the corresponding row over all cycles.


In-place adder tree:

1:  for s = 1 to log2(n) do
2:     for i = 1 to n/2 do
3:        r[i] <- r[2i-1] + r[2i]
4:     end for
5:     for i = n/2+1 to n do
6:        r[i] <- 0
7:     end for
8:  end for
Fig. 6: (a) The details of a PE. The red part is used only for the mode-1 TTM, and the blue part only for a mode-n TTM with n > 1. (b) An in-place adder tree.

Processing Element (PE). As shown in Fig. 6, each PE consists of a multiplier, a small RAM, and another memory used as an output buffer that stores the result before writing it to the DRAM. The RAM (marked in red) is used only to store matrix data when computing the mode-1 TTM; otherwise, the bus at each row imports the matrix elements directly. A multiplexer (MUX) selects the correct data for the product operations. After computing one batch of data (a tensor fiber or a tensor slice, depending on the mode), the result is written to the DRAM and the buffer is reset to zero. The buffer temporarily holds the intermediate results and keeps updating during the multiply-accumulate operations; it stops updating while its data is written into the external memory. To avoid timing conflicts and increase throughput, another buffer (not shown in the figure) is used for transferring data to the external memory; the two buffers switch roles after processing each batch of data.

In-place Adder Tree. As mentioned above, the mode-1 TTM needs an adder tree. If the adder tree is implemented as a pipeline, many adders and registers are required. Given that the product terms need to be summed only once per batch of data instead of per clock cycle, an in-place adder tree suffices. We split the adder tree into multiple stages: after each stage, every two elements are summed, so the number of live data elements is halved. The registers and adders are shared among the stages, and the data elements are read from and written back to the same group of registers in each clock cycle, hence the name "in-place" adder tree. This treatment substantially reduces the number of adders and registers.
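A behavioral model of the in-place reduction (Python; `inplace_adder_tree` is an illustrative helper with an odd-leftover rule added for generality, whereas the hardware operates on a power-of-two register file):

```python
def inplace_adder_tree(values):
    """Behavioral model of an in-place adder tree: each stage sums
    adjacent pairs and writes the results back to the front of the same
    register file, halving the number of live partial sums."""
    r = list(values)
    width = len(r)                           # number of live partial sums
    while width > 1:
        for i in range(width // 2):
            r[i] = r[2 * i] + r[2 * i + 1]   # one stage of pairwise adds
        if width % 2:                        # odd element carries over
            r[width // 2] = r[width - 1]
        width = (width + 1) // 2
    return r[0]

print(inplace_adder_tree(range(16)))   # 120
```

A pipelined tree with n inputs needs roughly n-1 adders; here a single stage of n/2 adders is reused log2(n) times, trading latency for area.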

III-B Singular Value Decomposition (SVD) Unit

1:  Input: matrix A with rows x_1, ..., x_m; initial guess V (orthonormal; V = I for a cold start).
2:  while not converged do
3:     for each index pair (i, j) with i < j do
4:        a <- x_i . x_i,  b <- x_j . x_j,  c <- x_i . x_j
5:        theta <- (1/2) arctan( 2c / (a - b) )
6:        (x_i, x_j) <- (cos(theta) x_i + sin(theta) x_j,  -sin(theta) x_i + cos(theta) x_j)
7:        (v_i, v_j) <- (cos(theta) v_i + sin(theta) v_j,  -sin(theta) v_i + cos(theta) v_j)
8:     end for
9:  end while
10:  return A and V; the i-th row of A now has norm equal to the i-th singular value, and V accumulates the rotations.
Algorithm 2 SVD via single-sided Jacobi iteration

Our SVD unit employs Jacobi iterations [33, 34, 35, 36, 37]. Both single-sided and double-sided Jacobi iterations are widely used; we use the single-sided version because of its higher accuracy, ease of parallelization, and weaker data dependency. The whole framework is summarized in Alg. 2. Given a matrix, this algorithm computes its left singular vectors by iteratively orthogonalizing every two rows of the matrix. The iterative process involves the norms and inner products of the row vectors, and some rotations.
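A floating-point NumPy sketch of the single-sided Jacobi iteration (`jacobi_svd` is an illustrative helper; it uses a simple cyclic pair ordering rather than the hardware's round-robin schedule):

```python
import numpy as np

def jacobi_svd(A, sweeps=30, tol=1e-12):
    """One-sided Jacobi on the rows of A. Rotates pairs of rows until all
    rows are mutually orthogonal; V accumulates the rotations, so the
    input satisfies A = V.T @ W, and the singular values of A are the
    row norms of the returned W."""
    W = A.astype(float).copy()
    m = W.shape[0]
    V = np.eye(m)
    for _ in range(sweeps):
        off = 0.0
        for i in range(m - 1):
            for j in range(i + 1, m):
                a, b, c = W[i] @ W[i], W[j] @ W[j], W[i] @ W[j]
                off = max(off, abs(c))
                if abs(c) < tol:
                    continue
                th = 0.5 * np.arctan2(2 * c, a - b)   # rotation angle
                R = np.array([[np.cos(th), np.sin(th)],
                              [-np.sin(th), np.cos(th)]])
                W[[i, j]] = R @ W[[i, j]]             # rotate the rows
                V[[i, j]] = R @ V[[i, j]]             # accumulate rotation
        if off < tol:                                 # converged sweep
            break
    return W, V

A = np.random.rand(4, 6)
W, V = jacobi_svd(A)
```

The rotation angle makes each pair of rows exactly orthogonal, which is what the CORDIC block computes in hardware.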

We can order the pairs (i, j) in different ways. A natural choice is to increment j in the inner loop and i in the outer loop, or vice versa. However, because of the data dependency between two adjacent operations, this choice makes a parallel or pipelined design impossible. To overcome this issue, we employ the round-robin ordering suggested in [38], which eliminates the data dependency between adjacent iterations. This method starts by dividing all row indices into n/2 pairs, where n is the total number of rows. After orthogonalizing all the pairs above, new index pairs for the next round are generated by keeping the first index position fixed and cyclically rotating the remaining indices, as in a round-robin tournament.
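The round-robin (Brent-Luk style) pairing can be modeled as follows; `round_robin_pairs` is an illustrative helper:

```python
def round_robin_pairs(n):
    """Round-robin ordering for n (even) indices: position 1 stays fixed
    while the rest rotate, so every pair meets exactly once per tournament
    and the pairs within one round share no index (parallelizable)."""
    idx = list(range(1, n + 1))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(idx[k], idx[n - 1 - k]) for k in range(n // 2)])
        idx = [idx[0]] + [idx[-1]] + idx[1:-1]   # rotate all but the first
    return rounds

for r in round_robin_pairs(4):
    print(r)
# [(1, 4), (2, 3)]
# [(1, 3), (4, 2)]
# [(1, 2), (3, 4)]
```

Because no index appears twice within a round, the rotations of one round can be pipelined without hazards.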

Fig. 7 shows the block diagram of our SVD unit. In each step, two vectors are orthogonalized. The on-chip memory provides two ports that operate independently: one port fetches the two vectors from the memory, and the other writes the orthogonalized vectors back. Because we use only one port for input and the other for output, the two vectors have to be fetched through the same port alternately. Three sums of products are needed to calculate the rotation angle. Given that the two vectors are fetched alternately, the multiplier and adder that compute the two row norms can be shared, so only two sets of multipliers and adders are used. To obtain the sine and cosine of the rotation angle, we employ the CORDIC algorithm [39], which approximates the rotation of an arbitrary angle by rotations of some fixed angles. This algorithm computes trigonometric functions efficiently in hardware, and an FPGA IP core is available. After the two vectors are fetched from the memory, they are stored in a local buffer until the rotation parameters are calculated, and then they are rotated accordingly. In this way, each vector is guaranteed to be read from the memory only once when orthogonalizing each pair of rows.
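A software model of CORDIC in rotation mode (floating point for clarity; in hardware, the multiplications by 2^-i are bit shifts, and `cordic_sincos` is an illustrative helper, not the Xilinx IP core):

```python
import math

def cordic_sincos(theta, n=24):
    """CORDIC rotation mode: approximate (sin, cos) using only additions
    and scalings by 2^-i (bit shifts in hardware)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):                       # aggregate rotation gain
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta                  # pre-scale by 1/gain
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0        # steer the residual to zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                              # (sin(theta), cos(theta))
```

The scheme converges for |theta| up to about 1.74 rad (the sum of the elementary angles); larger angles require a quadrant pre-rotation.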

Besides the input matrix, the accumulated orthogonal matrix also needs to be rotated in the same way. We store both matrices in the same memory and handle them identically, except that the orthogonal matrix is not used for calculating the rotation angle. Since the orthogonal matrix is much smaller than the input matrix, such a design causes only negligible run-time overhead but saves half of the area and power.

Fig. 7: Block diagram of SVD unit. ACC: accumulator. FIFO: first-in, first-out queue.

III-C Tensor Permutation and Reshaping

Once the TTM result is computed, we need to permute and reshape the tensor into its mode-n unfolding before performing an SVD. Since the tensor is stored in an external DRAM and the matrix is stored in an on-chip memory, we need an extra module that moves the data between the DRAM and the on-chip memory and re-organizes the data elements to match the unfolding. After the SVD, the factor matrix needs to be transposed and moved from the on-chip memory to the DRAM, which is done by this module as well.

When moving data between the on-chip memory and the external DRAM, the data is first read from its original memory, written to a local buffer, and then written to the destination memory. The size of the buffer determines how much data can be moved in every batch.

IV Implementation Details

IV-A Fixed-Point Numbers

Floating-point arithmetic usually causes higher hardware cost and longer latency. Therefore, we use a fixed-point number system, chosen based on the trade-off between accuracy and hardware complexity.

We decide the fixed-point precision based on some hardware constraints. Because the data width at the interface of a DRAM controller is fixed at 512 bits, the memory is most efficient if the data width divides 512. Meanwhile, each DSP in our FPGA can perform a multiplication with data sizes up to 27 bits x 18 bits. Considering these constraints, we use 16-bit numbers to represent all tensor data elements and store them in the external DRAM. On the other hand, we use 27-bit numbers to represent the matrix data in both TTM and SVD in order to achieve higher accuracy without excessive use of multipliers. Note that we use a smaller data width for tensor data in order to save memory when processing the huge amount of data in a tensor. In this configuration, each multiplier in the TTM unit requires one DSP block, and each multiplier in the SVD unit requires two DSP blocks.

There are many sum-of-product operations in both TTM and SVD, and the small errors in the product terms accumulate in the sums. To address this issue, we use 48-bit numbers to represent the product terms. We use 27-bit numbers to represent most of the intermediate results, except for some 32-bit intermediate values in the SVD unit.
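A simple software model of fixed-point quantization with saturation (`to_fixed` is an illustrative helper; the bit widths here are example parameters, not the exact FPGA formats):

```python
def to_fixed(x, total_bits=16, frac_bits=12):
    """Quantize x to a signed fixed-point value with saturation.
    A software model only; real hardware keeps the integer code."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = int(round(x * scale))      # round to the nearest code
    q = max(lo, min(hi, q))        # saturate on overflow
    return q / scale

print(to_fixed(0.1))    # 0.10009765625, the nearest multiple of 2^-12
```

A fixed-point simulator built from such primitives is what lets the truncation and overflow behavior of the accelerator be evaluated before synthesis.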

1:  Initialize each U^(n) as any orthonormal matrix.
2:  while not converged do
3:     for n = 1, ..., N do
4:        Y <- X x_1 U^(1)T ... x_{n-1} U^(n-1)T x_{n+1} U^(n+1)T ... x_N U^(N)T
5:        Unfold Y into Y_(n)
6:        SVD: run Jacobi iterations (i.e., Alg. 2) with input Y_(n) and initial guess U^(n).
7:        U^(n) <- the first R_n columns of the resulting orthogonal matrix.
8:     end for
9:  end while
Algorithm 3 HOOI with warm-start Jacobi iterations

IV-B HOOI with a Warm-Start Algorithm

We observe that the SVD via Jacobi iterations is the most time-consuming part of HOOI. Therefore, we employ a warm-start strategy to reduce the number of Jacobi iterations. In the standard Jacobi iterations, the initial guess is an identity matrix. In our implementation, we instead use the orthonormal factor matrix obtained in the previous HOOI iteration as the initial guess. Thanks to this good initial guess, we only need to perform one or two rounds of Jacobi iterations inside each SVD, and the whole HOOI still converges after enough power iterations. The optimized algorithm is shown in Alg. 3.

Fig. 8: Convergence speed (measured as the total number of Jacobi iterations) of a standard HOOI and the optimized HOOI with the warm-start inside SVD.

Fig. 8 shows the convergence curves of the standard HOOI and our proposed warm-start HOOI. We consider a low-multilinear-rank tensor generated from a Gaussian distribution and corrupted with Gaussian noise whose variance is half that of the tensor elements. The standard method converges after only two HOOI iterations, but the SVD of each mode requires about 10 rounds of Jacobi iterations. Our optimized HOOI converges after 7 iterations, but each SVD requires only one round of Jacobi iterations, leading to a significant reduction of the total cost.

V Performance Analysis

This section analyzes the hardware performance of our FPGA-accelerated Tucker decomposition.

V-A Run-time Analysis

The total runtime is the sum of the run-times of the TTM, SVD, and tensor permuting parts. For an N-way tensor, each HOOI iteration nominally requires N(N-1) TTMs, N SVDs and N tensor permuting/reshaping operations. Some intermediate TTM results can be reused: for instance, for a 3-way tensor, an intermediate product computed while updating one mode can be reused when updating the next mode, which reduces the actual number of TTMs per iteration. When applying the warm-start algorithm for the Jacobi iterations, the unfolded matrix needs to be multiplied with the previous factor matrix first; this is also done by the TTM unit and causes additional TTM operations.


The details of each term are provided below.

TTM Part: The run-time of TTM depends on the size of the PE array. The array reads several elements of the tensor per clock cycle, and each element is multiplied with several elements of the factor matrix, so the number of product and sum operations per clock cycle is bounded by the number of PEs. The number of clock cycles is therefore approximately the total number of scalar products divided by the number of PEs. Some extra clock cycles are caused by the latency of the pipeline and the ping-pong buffer, but they are typically a small fraction of the total run-time and thus negligible. When the TTM is applied along the first mode, the matrix data needs to be copied to the local memory of each PE in advance; this causes extra clock cycles, which are again negligible and can be overlapped with other operations.

SVD Part: When updating the n-th factor matrix, we perform an SVD on the mode-n unfolded matrix. In each Jacobi round, all pairs of rows are orthogonalized. Each orthogonalization processes a pair of row vectors and takes a number of clock cycles equal to the vector length divided by the degree of parallelism; multiplying by the number of pairs gives the approximate clock-cycle count of one Jacobi round. As in the TTM unit, the extra time caused by the pipeline latency is negligible.

Fig. 9: Runtime and convergence of HOOI on some randomly generated 3-way tensors. (a) Convergence of the proposed FPGA-based HOOI on a 3-way tensor, in the number of HOOI iterations. (b)-(d) Runtime of HOOI for various tensor sizes. TensorLy uses the standard HOOI, which uses SVD both for initialization and for updating the factor matrices; the MATLAB implementation uses random initialization and eigenvalue decomposition for the updates; our proposed FPGA implementation uses random initialization and the warm-start Jacobi SVD for the updates.
Fig. 10: Runtime of HOOI on some 4-way tensors with the same size along each dimension, for two different multilinear ranks.

Tensor Permutation: In each clock cycle, the local buffer can either exchange (read or write) a batch of numbers with the internal memory or a batch with the DRAM. The two batch widths need not equal the PE-array dimensions, as long as the memories support those access widths; however, matching them simplifies the design and maximizes hardware efficiency. The number of clock cycles per tensor permutation is roughly the number of elements moved divided by the batch width. Our simulation shows that both moving a tensor from the DRAM to the on-chip memory with permutation, and moving a factor matrix back to the DRAM with transposition, follow this estimate closely.

V-B Area and Power

The area and power depend on the design parameters. The higher the degree of parallelism, the more PEs, hardware area and power are required. In the TTM unit, the total numbers of multipliers, adders, accumulators and buffers are proportional to the size of the PE array, so the area of the TTM unit is approximately proportional to the array size. Similarly, the area of the tensor permutation unit is proportional to the buffer size. The power also increases with the area.

The area of the SVD unit is independent of its input matrix size but depends on the degree of parallelism. Additionally, one arctan module and one sin/cos module are required. Therefore, the area of the SVD unit can be estimated as a term proportional to the degree of parallelism (multipliers, adders, accumulators, etc.) plus a constant term for the arctan and sin/cos blocks.

VI Results

PE array   LUTs     Registers  DSPs   Clock rate  Power
16 x 16    46,056   24,556     256    212MHz      2.008W
16 x 32    99,384   48,357     512    200MHz      2.395W
32 x 16    99,505   48,189     512    203MHz      2.503W
32 x 32    198,269  95,743     1,024  187MHz      3.141W
TABLE I: Performance of the TTM unit for different PE-array sizes.
Parallelism  LUTs    Registers  DSPs   Clock rate  Power
16           8,711   12,284     128    209MHz      0.477W
32           11,134  13,965     256    207MHz      0.683W
64           16,127  17,532     512    208MHz      1.095W
128          25,360  24,631     1,024  203MHz      1.871W
TABLE II: Performance of the SVD unit (fixed point) for different degrees of parallelism.
Buffer size  LUTs     Registers  Clock rate  Power
16 x 64      29,929   27,718     241MHz      1.342W
32 x 64      59,308   55,366     209MHz      1.961W
32 x 128     115,749  110,662    205MHz      2.981W
TABLE III: Performance of the tensor permute/reshape unit for different buffer sizes.

VI-A FPGA Performance Validation

We implement all parts of the optimized HOOI with different design parameters on an FPGA. All results below are based on the Xilinx XCVU9P-L2FSGD2104E FPGA, available on a Xilinx VCU1525 acceleration board. The power is estimated at a 200-MHz clock rate. The results for the different units are shown in Tables I-III. The hardware complexity, including the numbers of lookup tables (LUTs), registers and DSP blocks, is determined by the design parameters: the area of the TTM unit is approximately proportional to the PE-array size, the area of the SVD unit to its degree of parallelism, and the area of the tensor permute unit to its buffer size. The power consumption increases with the design parameters, but the relationship is not strictly linear, since the power of some parts (e.g., the clock generator) is independent of our design parameters.

VI-B Performance Comparisons

We compare our FPGA accelerator with two other commonly used toolboxes on different platforms: the Tensor Toolbox [40, 41] in MATLAB on CPU, and the TensorLy toolbox [42] in Python on both CPU and GPU. TensorLy can use NumPy, PyTorch or MXNet as its backend, and the latter two allow high-performance GPU computation; in our experiments, we use the NumPy and PyTorch backends. The runtime on CPU is measured on a Linux desktop with an Intel i5-8400 6-core CPU and 32 GB of memory. The GPU experiments are conducted on a Titan V GPU. Since the accuracy and convergence of our Tucker decomposition depend on the fixed-point number system, we developed a fixed-point simulator with the Xilinx fixed-point number library to model the truncation error and overflow of our FPGA accelerator.

We perform the comparison on randomly generated low-rank tensors. For each tensor, both the core tensor and the factor matrices are generated from Gaussian distributions, and the tensor is then corrupted by Gaussian random noise. The relative error is evaluated as the Frobenius norm of the approximation residual divided by the Frobenius norm of the original tensor.


Fig. 9 shows the results on a set of 3-way tensors. Our architecture achieves a significant speedup over the MATLAB Tensor Toolbox on CPU, and an even larger one over the TensorLy toolbox on both CPU and GPU. The convergence behavior of our FPGA-based Tucker decomposition is shown in Fig. 9(a): our method always converges within 6-8 HOOI iterations. Therefore, we assume 8 HOOI iterations when estimating the runtime of our FPGA architecture.

We further perform comparisons on some 4-way tensors and show the results in Fig. 10. Our PC with 32GB of RAM can no longer accommodate such 4-way tensor data, so the CPU results are obtained on an Amazon AWS r4.4xlarge instance with 16 virtual CPUs (vCPUs) and 122GB of memory. Since the large 4-way tensors exceed the GPU memory, and TensorLy with the NumPy backend takes an extremely long time, their runtimes are not shown in Fig. 10. When the size along each dimension is 256, MATLAB runs out of memory. On these 4-way tensors, our FPGA design achieves a significant speedup over the MATLAB Tensor Toolbox on the AWS CPU.

Remark: Our FPGA accelerator uses the Jacobi iteration to perform SVD, whereas the MATLAB tensor toolbox and TensorLy use more advanced SVD algorithms. If the same SVD algorithm were used in all implementations, our FPGA design would likely achieve an even larger speedup.
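For reference, one-sided Jacobi SVD repeatedly applies plane rotations that orthogonalize pairs of columns; once all columns are mutually orthogonal, the singular values are the column norms. The following is a floating-point NumPy sketch of the textbook algorithm, not the fixed-point FPGA implementation (which also uses the warm-start scheme described earlier); it assumes full column rank for simplicity:

```python
import numpy as np

def one_sided_jacobi_svd(A, sweeps=30, tol=1e-12):
    """One-sided Jacobi SVD: rotate column pairs until all are orthogonal.
    Assumes A has full column rank (a simplification for this sketch).
    Note: singular values are returned unsorted."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                       # columns already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ rot  # orthogonalize data columns
                V[:, [p, q]] = V[:, [p, q]] @ rot  # accumulate right vectors
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)              # singular values = column norms
    U = A / sigma                                  # normalize columns to get U
    return U, sigma, V
```

Because each update touches only two columns, the inner loop maps naturally onto parallel hardware, which is why Jacobi methods are favored in systolic and FPGA designs despite being slower than blocked SVD on a CPU.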

VI-C Application: MRI Compression

The accelerated Tucker decomposition can be applied in multiple application domains. Here we demonstrate its application to compressing multi-way MRI data. The dataset is a single-channel, first-pass myocardial perfusion real-time MRI sequence (). We use the pre-processed data available in [43, 44], and set the rank to in HOOI. The estimated runtime of our architecture is 0.0447 s at a clock rate of 185 MHz. In comparison, on a Linux PC with a 6-core Intel i5-8500 CPU, the same operation takes 0.0964 s with the MATLAB tensor toolbox, 0.335 s with the TensorLy toolbox and NumPy backend, and 1.352 s with the TensorLy toolbox and PyTorch backend. On a PC with an NVIDIA TITAN V GPU, this operation takes 0.217 s. Therefore, our method provides a 2.16-30.2x speedup over existing frameworks on CPU, and a 4.85x speedup over the GPU. Fig. 11 shows the original MRI data and the data approximated by our low-rank Tucker decomposition on FPGA.

Fig. 11: Decomposition result of MRI dataset. Top: original data at . Bottom: approximated data by Tucker decomposition on FPGA, with rank . The data compression ratio is .
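The compression ratio reported in Fig. 11 follows from counting stored entries: a Tucker model keeps only the core tensor plus one factor matrix per mode instead of the full tensor. A sketch of the count (the shape and ranks below are illustrative placeholders, since the exact MRI dimensions did not survive extraction):

```python
from math import prod

def tucker_compression_ratio(shape, ranks):
    """Entries in the full tensor divided by entries stored by Tucker
    (core tensor plus one factor matrix per mode)."""
    full = prod(shape)
    compressed = prod(ranks) + sum(n * r for n, r in zip(shape, ranks))
    return full / compressed
```

For example, compressing a 100×100×100 tensor to multilinear rank (10, 10, 10) stores 1,000 core entries and 3,000 factor entries in place of 1,000,000, a 250x reduction.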

VII Conclusion

This paper has presented an algorithm-architecture co-design for tensor Tucker decomposition. We have implemented tensor-times-matrix, matrix SVD with one-sided Jacobi iteration, and tensor permutation on FPGA. We have also proposed a warm-start algorithm for the Jacobi iterations to reduce their computation time. We have evaluated the design on both synthetic data sets and an MRI data set; our method is significantly faster than existing computation frameworks on both CPU and GPU. This accelerator can be employed in a broad range of applications, including data mining, scientific computing, computer vision, and deep learning.


This work was supported by NSF CCF 1817037. We would like to thank Sophia Shao (NVIDIA), Lei Deng, Zheng Qu and Bangyan Wang (UCSB) for their technical discussions.


  • [1] T. Kolda and B. Bader, “Tensor decompositions and applications,” SIAM Rev, vol. 51, no. 3, pp. 455–500, 2009.
  • [2] F. Hitchcock, “The expression of a tensor or a polyadic as a sum of products,” J. Math. Phys., vol. 6, no. 1-4, pp. 164–189, 1927.
  • [3] L. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
  • [4] I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal Sci. Comp., vol. 33, no. 5, pp. 2295–2317, 2011.
  • [5] T. G. Kolda and J. Sun, “Scalable tensor decompositions for multi-aspect data mining,” in Proc. IEEE Int. Conf. Data Mining, 2008, pp. 363–372.
  • [6] C. Hawkins and Z. Zhang, “Bayesian tensorized neural networks with automatic rank selection,” arXiv preprint arXiv:1905.10478, 2019.
  • [7] H. Ding, K. Chen, Y. Yuan, M. Cai, L. Sun, S. Liang, and Q. Huo, “A compact CNN-DBLSTM based character model for offline handwriting recognition with tucker decomposition,” in Proc. IEEE Int. Conf. Document Analysis and Recognition, vol. 1, 2017, pp. 507–512.
  • [8] J.-T. Chien and Y.-T. Bao, “Tensor-factorized neural networks,” IEEE Trans. Neur. Networks Learn. Syst., vol. 29, no. 5, pp. 1998–2011, 2018.
  • [9] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov, “Tensorizing neural networks,” in NIPS, 2015, pp. 442–450.
  • [10] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, “Speeding-up convolutional neural networks using fine-tuned CP-decomposition,” arXiv preprint arXiv:1412.6553, 2014.
  • [11] Y. Yang, D. Krompass, and V. Tresp, “Tensor-train recurrent neural networks for video classification,” in Proc. Int. Conf. Machine Learning, 2017, pp. 3891–3900.
  • [12] Z. Zhang, X. Yang, I. V. Oseledets, G. E. Karniadakis, and L. Daniel, “Enabling high-dimensional hierarchical uncertainty quantification by anova and tensor-train decomposition,” IEEE Trans. CAD of Integrated Circuits and Systems, vol. 34, no. 1, pp. 63–76, 2015.
  • [13] Z. Zhang, T.-W. Weng, and L. Daniel, “Big-data tensor recovery for high-dimensional uncertainty quantification of process variations,” IEEE Trans. Comp., Pack. Manuf. Tech., vol. 7, no. 5, pp. 687–697, 2017.
  • [14] S. F. Roohi, D. Zonoobi, A. A. Kassim, and J. L. Jaremko, “Dynamic mri reconstruction using low rank plus sparse tensor decomposition,” in Proc. Int. Conf. Image Process., 2016, pp. 1769–1773.
  • [15] J. Li, G. Han, J. Wen, and X. Gao, “Robust tensor subspace learning for anomaly detection,” Int. J. Machine Learning and Cybernetics, vol. 2, no. 2, pp. 89–98, 2011.
  • [16] D. Saito, K. Yamamoto, N. Minematsu, and K. Hirose, “One-to-many voice conversion based on tensor representation of speaker space,” in Proc. Int. Conf. Speech Comm. Assoc., 2011.
  • [17] O. Kaya and B. Uçar, “High performance parallel algorithms for the Tucker decomposition of sparse tensors,” in Proc. IEEE Int. Conf. Parall. Proc., 2016, pp. 103–112.
  • [18] S. Smith, J. Park, and G. Karypis, “Sparse tensor factorization on many-core processors with high-bandwidth memory,” in Proc. IEEE Int. Parallel and Distributed Processing Symp, 2017, pp. 1058–1067.
  • [19] J. Li, C. Battaglino, I. Perros, J. Sun, and R. Vuduc, “An input-adaptive and in-place approach to dense tensor-times-matrix multiply,” in Proc. Int. Conf. High Performance Computing, Networking, Storage and Analysis, 2015, pp. 1–12.
  • [20] A. Amira, A. Bouridane, and P. Milligan, “Accelerating matrix product on reconfigurable hardware for signal processing,” in Proc. Int. Conf. Field Programmable Logic and Applications, 2001, pp. 101–111.
  • [21] Y. Dou, S. Vassiliadis, G. K. Kuzmanov, and G. N. Gaydadjiev, “64-bit floating-point FPGA matrix multiplication,” in Proc. Int. Symp. Field-programmable Gate Arrays, 2005, pp. 86–95.
  • [22] J. Zhu and P. Sutton, “FPGA implementations of neural networks–a survey of a decade of progress,” in Proc. FPLA, 2003, pp. 1062–1066.
  • [23] C. Wang, L. Gong, Q. Yu, X. Li, Y. Xie, and X. Zhou, “DLAU: A scalable deep learning accelerator unit on FPGA,” IEEE Trans. CAD of Integr. Circuits and Systems, vol. 36, no. 3, pp. 513–517, 2017.
  • [24] K. Irick, M. DeBole, V. Narayanan, and A. Gayasen, “A hardware efficient support vector machine architecture for FPGA,” in Proc. Int. Symp. FPCCM, 2008, pp. 304–305.
  • [25] J. L. Stanislaus and T. Mohsenin, “Low-complexity FPGA implementation of compressive sensing reconstruction,” in Int. Conf. Comput., Netw. Comm., 2013, pp. 671–675.
  • [26] M. A. O. Vasilescu and D. Terzopoulos, “Multilinear image analysis for facial recognition,” in Object recognition supported by user interaction for service robots, vol. 2.    IEEE, 2002, pp. 511–514.
  • [27] N. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Trans. Sign. Proc., vol. 65, no. 13, pp. 3551–3582, 2017.
  • [28] Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin, “Compression of deep convolutional neural networks for fast and low power mobile applications,” arXiv preprint arXiv:1511.06530, 2015.
  • [29] L. De Lathauwer, B. De Moor, and J. Vandewalle, “On the best rank-1 and rank-(, ,…, ) approximation of higher-order tensors,” SIAM J. Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324–1342, 2000.
  • [30] L. R. Tucker, “Implications of factor analysis of three-way matrices for measurement of change,” Problems in Measuring Change, vol. 15, pp. 122–137, 1963.
  • [31] P. M. Kroonenberg and J. De Leeuw, “Principal component analysis of three-mode data by means of alternating least squares algorithms,” Psychometrika, vol. 45, no. 1, pp. 69–97, 1980.
  • [32] A. Kapteyn, H. Neudecker, and T. Wansbeek, “An approach to n-mode components analysis,” Psychometrika, vol. 51, no. 2, pp. 269–275, 1986.
  • [33] E. R. Hansen, “On cyclic Jacobi methods,” J. Soc. Indust. and Appl. Math., vol. 11, no. 2, pp. 448–459, 1963.
  • [34] J. Demmel and K. Veselić, “Jacobi’s method is more accurate than QR,” SIAM J. Matrix Anal. Appl., vol. 13, no. 4, pp. 1204–1245, 1992.
  • [35] N. Hemkumar and J. Cavallaro, “A systolic VLSI architecture for complex SVD,” in Proc. IEEE ISCAS, vol. 3, 1992, pp. 1061–1064.
  • [36] A. Ahmedsaid, A. Amira, and A. Bouridane, “Improved SVD systolic array and implementation on FPGA,” in Proc. FPL, 2003, pp. 35–42.
  • [37] M. Rahmati, M. S. Sadri, and M. A. Naeini, “FPGA based singular value decomposition for image processing applications,” in IEEE Intl. Conf. ASSAP, 2008, pp. 185–190.
  • [38] R. Brent and F. Luk, “The solution of singular-value and symmetric eigenvalue problems on multiprocessor arrays,” SIAM J. Sci. Stat. Comp., vol. 6, no. 1, pp. 69–84, 1985.
  • [39] J. E. Volder, “The CORDIC trigonometric computing technique,” IRE Trans. Electronic Computers, no. 3, pp. 330–334, 1959.
  • [40] B. Bader, T. Kolda et al., “Matlab tensor toolbox version 2.6,” Available online, February 2015. [Online]. Available: http://www.sandia.gov/~tgkolda/TensorToolbox/
  • [41] B. W. Bader and T. G. Kolda, “Algorithm 862: MATLAB tensor classes for fast algorithm prototyping,” ACM Trans. Math. Software, vol. 32, no. 4, pp. 635–653, Dec 2006.
  • [42] J. Kossaifi, Y. Panagakis, A. Anandkumar, and M. Pantic, “TensorLy: Tensor learning in python,” CoRR, vol. abs/1610.09555, 2018.
  • [43] “First-pass myocardial perfusion real-time MRI dataset,” https://statweb.stanford.edu/~candes/SURE/matlab/JDT/DATA/invivo_perfusion4.mat, accessed: 2019-03-19.
  • [44] S. Lingala, Y. Hu, E. DiBella, and M. Jacob, “Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR,” IEEE Trans. Medical Imaging, vol. 30, no. 5, pp. 1042–1054, 2011.