I. Introduction
As a multi-dimensional extension of matrices, tensors are a natural tool to represent and process multi-way data arrays [1]. For instance, an MRI sequence with three spatial dimensions and a time dimension can be naturally represented as a 4-way tensor. The weights of a convolution layer in a neural network also form a tensor. Leveraging various tensor decompositions
[2, 3, 4], many high-dimensional data mining [5], machine learning [6, 7, 8, 9, 10, 11] and EDA [12, 13] problems have been solved efficiently without suffering from the curse of dimensionality.
Due to their superior performance in processing high-volume data, tensors have emerged as a promising tool for enabling real-time machine learning and data analysis. Recently, tensor algorithms have achieved great success in training and compressing deep neural networks [6, 7, 8, 9, 10, 11]. The resulting tensorized models consume tremendously less memory, runtime and power than the original deep neural network models. Tensor algorithms have also been successfully employed to accelerate medical image analysis [14]
[15] and speech recognition [16]. Despite the rapid progress of tensor algorithms, the hardware/algorithm co-design of tensor computation on resource-constrained platforms remains a new and (almost) unexplored field. Some tensor libraries [17, 18, 19] have been developed for high-performance platforms like clusters and supercomputers. However, little algorithm-architecture co-design targeting power- and cost-limited devices has been done, which has limited the application of tensor-based data analysis and machine learning on IoT and edge devices. Although many hardware accelerators are available for matrix and vector computations
[20, 21] and have been applied to machine learning [22, 23, 24] and signal processing [25], they cannot handle tensor data, because the underlying theory and numerical procedures are fundamentally different. It is inefficient and error-prone to process tensor data with matrix- or vector-computation accelerators. Therefore, resource-constrained hardware accelerators for tensor computation are highly desirable.
This paper presents, for the first time, a hardware accelerator for one of the most important tensor algorithms: Tucker decomposition [3]. Tucker decomposition is a high-order generalization of the singular value decomposition (SVD) and principal component analysis (PCA), and it often achieves an orders-of-magnitude higher data compression ratio than matrix compression algorithms on multi-way data. This method has been widely used in facial recognition [26], signal processing [27, 28] and data mining [5]. Tucker decomposition is often implemented via the higher-order orthogonal iteration (HOOI) [29]. This algorithm involves computation-intensive operations such as the tensor-times-matrix (TTM) product and the matrix SVD. Meanwhile, handling the huge amount of tensor data on an FPGA or ASIC is a challenging task.
The contributions of our paper are summarized below:


On the hardware side, we present a hardware architecture for Tucker decomposition. We describe the design and data communication of three units: TTM, SVD via Jacobi iterations, and tensor permutation/reshaping.

On the algorithm side, we propose a warm-start algorithm to reduce the cost of the Jacobi iterations.

We analyze the performance of our accelerator, implement it on a Xilinx FPGA, and show the implementation results.

We compare our FPGA accelerator with some state-of-the-art implementations on both CPU and GPU, and demonstrate its application on an MRI compression task. Our accelerator shows up to 30.2× speedup on the MRI data set.
II. Algorithm Background
In this section, we introduce the necessary background of tensors and Tucker decomposition.
II-A. Tensors and Basic Tensor Operations
Notations: We use boldface lowercase letters (e.g., $\mathbf{a}$) to denote vectors, boldface capital letters (e.g., $\mathbf{A}$) to denote matrices, and boldface Euler script letters (e.g., $\mathcal{A}$) to denote tensors.
A tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_d}$ is a multi-dimensional data array. Here $d$ is called the order or way of $\mathcal{A}$. An integer $n \in [1, d]$ can be used as the index of a specific dimension or mode with size $I_n$. An entry of a tensor can be specified by an index vector. For instance, the $(i_1, i_2, \ldots, i_d)$-th entry of tensor $\mathcal{A}$ is denoted by $a_{i_1 i_2 \cdots i_d}$. Clearly, a vector and a matrix are order-1 and order-2 tensors, respectively.
Tensor Fiber and Slice: A fiber is a one-dimensional fragment of a tensor, obtained by fixing all indices but one. Tensor fibers are higher-order extensions of matrix rows and columns. A third-order tensor has fibers that can be denoted by $\mathbf{a}_{:jk}$, $\mathbf{a}_{i:k}$ or $\mathbf{a}_{ij:}$ correspondingly. A tensor slice is a two-dimensional fragment of a tensor, obtained by fixing all indices but two. For instance, $\mathbf{A}_{i::}$, $\mathbf{A}_{:j:}$ and $\mathbf{A}_{::k}$ denote the horizontal, lateral, and frontal slices of a 3-way tensor, respectively. Fig. 1 shows the slices and fibers of a tensor.
Tensor permutation: Tensor permutation changes the mode order of a tensor. It is a high-order extension of the matrix transpose. For instance, given $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, permuting its modes as $[2, 3, 1]$ generates a new tensor $\mathcal{B} \in \mathbb{R}^{I_2 \times I_3 \times I_1}$ with $b_{i_2 i_3 i_1} = a_{i_1 i_2 i_3}$.
Unfolding: Unfolding (or matricization), as shown in Fig. 2, is the process of transforming a tensor into a matrix. Unfolding reorders the elements of a $d$-way data array into a matrix. For instance, the mode-$n$ unfolding of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$ is denoted as $\mathbf{A}_{(n)} \in \mathbb{R}^{I_n \times \prod_{k \neq n} I_k}$. The mapping from the $(i_1, \ldots, i_d)$-th element of $\mathcal{A}$ to the $(i_n, j)$-th element of $\mathbf{A}_{(n)}$ is given as follows:
$$ j = 1 + \sum_{k=1,\, k \neq n}^{d} (i_k - 1) J_k \quad \text{with} \quad J_k = \prod_{m=1,\, m \neq n}^{k-1} I_m. \qquad (1) $$
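As a software reference for the unfolding map above, the sketch below uses NumPy. Note it follows the common "move mode $n$ to the front, then flatten" convention (as in TensorLy), whose column ordering may differ from (1) by a fixed permutation of columns; it is an illustrative model, not the paper's hardware mapping.

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: bring mode n to the front, then flatten the
    # remaining modes into the columns of a matrix.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(Xn, n, shape):
    # Inverse of unfold: restore the full tensor with the given shape.
    front = [shape[n]] + [s for k, s in enumerate(shape) if k != n]
    return np.moveaxis(Xn.reshape(front), 0, n)
```

Folding the unfolded matrix back recovers the original tensor, which is a quick sanity check for any chosen column ordering.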
TTM: The tensor-times-matrix (TTM) product is a high-dimensional generalization of matrix multiplication. Given a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$ and a matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$, their mode-$n$ product $\mathcal{B} = \mathcal{A} \times_n \mathbf{U}$ is a new tensor $\mathcal{B} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_d}$ with entries
$$ b_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_d} = \sum_{i_n = 1}^{I_n} a_{i_1 i_2 \cdots i_d}\, u_{j i_n}. \qquad (2) $$
It can be expressed as a matrix-matrix multiplication:
$$ \mathbf{B}_{(n)} = \mathbf{U} \mathbf{A}_{(n)}. \qquad (3) $$
However, implementing TTM directly in this way can be inefficient on hardware because a tensor permutation is needed, which moves almost all the data in the tensor.
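For reference, a mode-$n$ product can be modeled in NumPy without forming the unfolding explicitly; `tensordot` still permutes data internally, which is exactly the data movement our hardware design avoids. A sketch, not the hardware flow:

```python
import numpy as np

def ttm(X, U, n):
    # Mode-n tensor-times-matrix product: contract mode n of X with the
    # second axis of U; the contracted mode lands in front, so move it back.
    Y = np.tensordot(U, X, axes=(1, n))
    return np.moveaxis(Y, 0, n)
```

One can check the matrix identity (3) numerically: the mode-$n$ unfolding of the result equals `U` times the mode-$n$ unfolding of the input.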
Tensor Norms: A norm of a tensor is a function that maps any tensor to a non-negative scalar. A widely used tensor norm is the Frobenius norm, defined as
$$ \|\mathcal{A}\|_F = \sqrt{\sum_{i_1 = 1}^{I_1} \cdots \sum_{i_d = 1}^{I_d} a_{i_1 i_2 \cdots i_d}^2}. \qquad (4) $$
II-B. Tensor Tucker Decomposition
Given a $d$-way tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_d}$, we may compress it by the Tucker decomposition [30]. As shown in Fig. 3, the Tucker decomposition approximates a large-size tensor $\mathcal{A}$ with a small-size core tensor $\mathcal{G} \in \mathbb{R}^{R_1 \times \cdots \times R_d}$ and $d$ factor matrices $\mathbf{U}_n \in \mathbb{R}^{I_n \times R_n}$ as follows:
$$ \mathcal{A} \approx \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_d \mathbf{U}_d, \qquad (5) $$
where all columns are orthonormal inside each factor matrix $\mathbf{U}_n$. We call $(R_1, \ldots, R_d)$ the multilinear rank of $\mathcal{A}$. When $R_n \ll I_n$, $\mathcal{A}$ can be represented in the above Tucker format at a much lower cost. Once all factor matrices are obtained, the core tensor can be computed as
$$ \mathcal{G} = \mathcal{A} \times_1 \mathbf{U}_1^T \times_2 \mathbf{U}_2^T \cdots \times_d \mathbf{U}_d^T. \qquad (6) $$
Two popular methods can be used to compute a Tucker decomposition. The first well-known method is the higher-order SVD (HOSVD) [3]. The idea of HOSVD is simple:


One first unfolds $\mathcal{A}$ along mode $n$ to get $\mathbf{A}_{(n)}$;

Then, one performs a singular value decomposition (SVD)
$$ \mathbf{A}_{(n)} = \mathbf{U} \mathbf{S} \mathbf{V}^T; \qquad (7) $$
Finally, one picks $\mathbf{U}_n$ as the first $R_n$ columns of $\mathbf{U}$.
This method is easy to implement; however, it is not optimal in fitting the data. Alternatively, an alternating least-squares method called HOOI is widely used to obtain a better solution.
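The three HOSVD steps above can be sketched in NumPy as follows (a reference model under our own naming, not the hardware flow; `ranks` is the target multilinear rank):

```python
import numpy as np

def hosvd(X, ranks):
    # Truncated HOSVD: per mode, unfold (step 1), SVD (step 2), and keep
    # the leading left singular vectors (step 3); then form the core.
    factors = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, :r])
    G = X
    for n, U in enumerate(factors):           # core via TTMs, Eq. (6)
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, n)), 0, n)
    return G, factors
```

For a tensor whose multilinear rank exactly matches `ranks`, this reconstructs the data exactly; on general data it is a good but not optimal fit, which motivates HOOI below.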
HOOI: The higher-order orthogonal iteration (HOOI) [29, 31, 32] method aims to minimize the approximation error
$$ \min_{\mathcal{G},\, \mathbf{U}_1, \ldots, \mathbf{U}_d} \; \left\| \mathcal{A} - \mathcal{G} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_d \mathbf{U}_d \right\|_F^2 \qquad (8) $$
through the iterative process shown in Alg. 1. Each iteration of the HOOI consists of two steps for every mode index $n$: (1) obtaining a tensor $\mathcal{Y}$ via a power iteration (TTM along all modes except $n$), which can be done from mode $d$, then mode $d-1$, and so on down to mode 1, skipping mode $n$; and (2) an SVD of the mode-$n$ unfolded matrix of $\mathcal{Y}$ to extract the mode-$n$ factor matrix $\mathbf{U}_n$.
In practice, the initialization process via HOSVD can be very time-consuming, because it needs $d$ SVD operations, each of which works on a matrix whose size equals that of the original tensor. Therefore, some random orthonormal matrices are often used as the initial factor matrices. In this case, the total number of iterations needed may increase slightly, but the total runtime can decrease significantly. Even so, when the size of the tensor is large, the time and energy consumed to compute the Tucker decomposition can be very high.
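The HOOI procedure with random orthonormal initialization, as discussed above, can be modeled in NumPy (a reference sketch of the two steps per mode, not Alg. 1 verbatim and not the FPGA implementation):

```python
import numpy as np

def ttm(X, U, n):
    # Mode-n tensor-times-matrix product.
    return np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)

def hooi(X, ranks, iters=8, seed=0):
    # HOOI: random orthonormal init; per mode, power step (TTM along all
    # other modes) followed by an SVD of the mode-n unfolding.
    rng = np.random.default_rng(seed)
    U = [np.linalg.qr(rng.standard_normal((I, r)))[0]
         for I, r in zip(X.shape, ranks)]
    for _ in range(iters):
        for n in range(X.ndim):
            Y = X
            for m in range(X.ndim):
                if m != n:
                    Y = ttm(Y, U[m].T, m)            # step (1): power iteration
            Yn = np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)
            W, _, _ = np.linalg.svd(Yn, full_matrices=False)
            U[n] = W[:, :ranks[n]]                    # step (2): SVD update
    G = X
    for n in range(X.ndim):
        G = ttm(G, U[n].T, n)                         # final core tensor
    return G, U
```

On exactly low-rank data the recovered Tucker format reproduces the tensor; on noisy data it minimizes the fitting error (8) over the alternating updates.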
III. Proposed Architecture
In this section, we propose a new hardware architecture to perform Tucker decomposition via HOOI.
Fig. 4 shows the system-level diagram of our architecture. In order to load and accommodate the huge number of tensor data elements, our design stores the tensor in an external DRAM. The HOOI engine consists of three parts: a tensor-times-matrix (TTM) unit, a singular value decomposition (SVD) unit, and a tensor permuting/reshaping unit. The data elements of the three units are stored in different memories. Because the size of a tensor can be very large, all tensors (including the intermediate and final results of TTM) are stored in the external DRAM. The matrix for each SVD is stored in an on-chip memory to reduce latency and to achieve maximum throughput. Both the TTM and SVD units are parallelized and pipelined to achieve maximum performance. The tensor permuting unit moves the data between the on-chip memory used by the SVD unit and the external DRAM used by the TTM unit, and it permutes the tensor accordingly.
III-A. Tensor-Times-Matrix (TTM) Unit
The TTM unit could be implemented as a matrix-matrix multiplication, which is available in some common computational libraries. However, the tensor would need to be permuted before TTM can be implemented in this way, which is time- and memory-consuming because almost all data elements of the tensor have to be moved. In this work, we develop a TTM unit without tensor permutations. For simplicity, the tensors are always stored by incrementing the mode-1 index first, then the second index, and so forth. Since the size of a tensor is often beyond the capacity of an on-chip memory, all data elements (including the input and output data, and the intermediate results of a TTM operation) are stored in an external DRAM.
TTM with a 2D PE Array. Our TTM unit is shown in Fig. 5. In order to maximize the throughput of the TTM module, we design a 2D array of processing elements (PEs). Each PE computes the product of one scalar element of the tensor (black arrow) and one scalar from the matrix (either from the blue line or stored in the PE) in each clock cycle. The PEs in a single column are always connected to the same bus, so they handle the same element of the tensor each time. Here the tensor can be either the original tensor or an intermediate tensor obtained after the TTM of the original tensor with some factor matrices. The columns of this module can handle a group of neighbouring elements of a tensor fiber at the same time. Due to our method of storing the tensor, these fibers are always obtained by changing only the first mode index. Each row of the array handles one column of the factor matrix $\mathbf{U}$.
We compute the mode-$n$ TTMs in decreasing order, from mode $d$ down to mode 1. The PE array works in different ways depending on the mode index $n$, as explained below.


Assume that $n \geq 2$ and that we have already done the TTMs for all modes larger than $n$. To simplify the expression, we fold all modes other than mode 1 and mode $n$ into a single mode and use $l$ as its index, so the tensor $\mathcal{A}$ is viewed as an $I_1 \times I_n \times L$ array. The element-wise expression of this operation is
$$ \mathcal{B}(i_1, j, l) = \sum_{i_n = 1}^{I_n} \mathcal{A}(i_1, i_n, l)\, u_{j i_n}. \qquad (9) $$
In each clock cycle, each vertical bus carries some neighbouring elements of a tensor fiber and each horizontal bus carries an element of the factor matrix. Specifically, a vertical bus provides scalars $\mathcal{A}(i_1, i_n, l)$ and a horizontal bus provides the scalar $u_{j i_n}$; consequently, the PE at their crossing calculates the product $\mathcal{A}(i_1, i_n, l)\, u_{j i_n}$.
Finally, $\mathcal{B}(i_1, j, l)$ is obtained by summing the above product terms over $i_n$. Note that the index of the matrix element used at each PE does not depend on $i_1$. Therefore, we can multiply a whole fiber with the same matrix element, and all PEs in the same row share the same data element of $\mathbf{U}$ by connecting them to the same horizontal bus. Because the size $L$ of the new folded dimension is very large, we divide the partly-folded tensor into small sub-tensors and the resulting tensor into corresponding sub-tensors, and compute the sub-tensors of $\mathcal{B}$ one by one to reduce the buffer size.

When computing the mode-1 TTM, the element-wise result is
$$ \mathcal{B}(j, l) = \sum_{i_1 = 1}^{I_1} \mathcal{A}(i_1, l)\, u_{j i_1}. \qquad (10) $$
The $j$-th row of the PE array computes the product terms of (10). Because the number of PE columns is usually smaller than the fiber length $I_1$, several cycles are needed to compute all product terms.
Note that in the mode-1 TTM, the RAM inside each PE of the $j$-th row stores the matrix elements $u_{j i_1}$ that it needs. The TTM element $\mathcal{B}(j, l)$ is obtained by accumulating all product terms in the $j$-th row over all cycles.
Processing Element (PE). As shown in Fig. 6, each PE consists of a multiplier, a small RAM, and another memory used as an output buffer that stores the result before writing it to the DRAM. The RAM (marked in red) is only used to store elements of $\mathbf{U}$ when computing the mode-1 TTM. Otherwise, the bus at each row imports the data elements of $\mathbf{U}$ when $n \geq 2$. Therefore, a multiplexer (MUX) is used to select the correct data for the product operations. After computing one batch of the data (a tensor fiber if $n \geq 2$ and a tensor slice if $n = 1$), the result is written to the DRAM and the buffer is then reset to zero. The buffer temporarily holds the intermediate results and keeps updating during the multiply-and-sum operations. The buffer stops updating while its data is being written into the external memory. In order to avoid timing conflicts and to increase throughput, another buffer (not shown in the figure) is used for transferring data to the external memory. These two buffers switch their roles after each batch of data is processed.
In-place Adder Tree. As mentioned above, we need an adder tree for the mode-1 TTM. If the adder tree were implemented as a pipeline, many adders and registers would be used. Given that the product terms need to be summed up only once per batch of data instead of once per clock cycle, we only need an in-place adder tree. We split the adder tree into multiple stages. After each stage, every two elements are summed up, so the total number of live data elements is halved. The registers and adders are shared among different stages: the data elements are read from and written back to the same group of registers after each clock cycle. This is why we call it an “in-place” adder tree. This treatment reduces the number of adders and registers roughly by half.
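The staged reduction can be modeled in software as follows (a behavioral sketch with a hypothetical register list; the real unit operates on fixed-point values and one stage per clock cycle):

```python
def inplace_adder_tree(regs):
    # Staged pairwise reduction: after each stage the number of live
    # elements roughly halves, and results are written back into the
    # same register array, so adders and registers are reused per stage.
    n = len(regs)
    while n > 1:
        for i in range(n // 2):
            regs[i] = regs[2 * i] + regs[2 * i + 1]
        if n % 2:                      # an odd leftover carries forward
            regs[n // 2] = regs[n - 1]
        n = (n + 1) // 2
    return regs[0]
```

Each `while` pass corresponds to one stage of the hardware tree; the final register holds the accumulated sum for the batch.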
III-B. Singular Value Decomposition (SVD) Unit
Our SVD unit employs the Jacobi iterations [33, 34, 35, 36, 37]. Both single-sided and double-sided Jacobi iterations are widely used. We use the single-sided version because of its higher accuracy, ease of parallelization, and weaker data dependency. The whole framework is summarized in Alg. 2. Given a matrix $\mathbf{A}$, this algorithm computes its left singular vectors by orthogonalizing every two rows (i.e., rows $i$ and $j$) of the matrix iteratively. The iterative process involves computing the norms and inner products of the row vectors and performing some rotations.
We can select the order of the index pairs $(i, j)$ in different ways. A natural choice is to increment $j$ in the inner loop and $i$ in the outer loop, or vice versa. However, because of the data dependency between two adjacent operations, this choice makes a parallel or pipelined implementation impossible. In order to overcome this issue, we employ the round-robin ordering suggested in [38], which eliminates the data dependency between adjacent iterations. This method starts by dividing all row indices into $p/2$ pairs, where $p$ is the total number of rows. After orthogonalizing all the pairs of rows specified above, new index pairs are generated for the next round: one index is kept in place while all other indices are cyclically shifted by one position, so that after $p-1$ rounds every pair of rows has been orthogonalized exactly once.
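The round-robin ordering can be modeled in software as the classic "circle" pairing scheme (a sketch of the idea in [38]; the exact index-update formula there may differ):

```python
def round_robin_pairs(p):
    # Round-robin pairing of p rows (p even): each round yields p/2
    # disjoint pairs, and after p-1 rounds every unordered pair of
    # indices has appeared exactly once.
    idx = list(range(p))
    for _ in range(p - 1):
        yield [(idx[i], idx[p - 1 - i]) for i in range(p // 2)]
        # rotate every position except the first
        idx = [idx[0]] + [idx[-1]] + idx[1:-1]
```

Because the pairs within a round are disjoint, the orthogonalizations of one round can run in parallel or be pipelined without data hazards.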
Fig. 7 shows the block diagram of our SVD unit. In each step, two row vectors are orthogonalized. The on-chip memory provides two ports that operate independently: one port fetches the two vectors from the memory, and the other port writes the orthogonalized vectors back into the memory. Because only one port is used for input, the two vectors have to be fetched through the same port alternately. Three sums of products are needed to calculate the rotation angle $\theta$, namely the two row norms and the inner product. Given that the two vectors are fetched alternately, the multipliers and adders used to compute these quantities can be shared; therefore, only two sets of multipliers and adders are used. In order to get $\theta$, $\sin\theta$ and $\cos\theta$, we employ the CORDIC algorithm [39], which uses rotations by a set of fixed angles to approximate a rotation by an arbitrary angle. This algorithm is efficient for calculating trigonometric functions on hardware, and an FPGA IP core is available. After the two vectors are fetched from the memory, they are stored in a local buffer until $\sin\theta$ and $\cos\theta$ are calculated; then they are rotated accordingly. In this way, each vector is guaranteed to be read from the memory only once when orthogonalizing each pair of rows.
Besides the input matrix $\mathbf{A}$, the orthogonal matrix $\mathbf{U}$ also needs to be rotated in the same way. We store $\mathbf{A}$ and $\mathbf{U}$ in the same memory and handle them in the same way, except that $\mathbf{U}$ is not used when calculating the rotation angle. Since $\mathbf{U}$ has a much smaller size than $\mathbf{A}$, such a design only causes negligible runtime overhead but saves half of the area and power.
III-C. Tensor Permutation and Reshaping
Once the TTM result $\mathcal{Y}$ is computed, we need to permute and reshape it into the unfolded matrix $\mathbf{Y}_{(n)}$ before performing an SVD. Since $\mathcal{Y}$ is stored in an external DRAM and the matrix for the SVD is stored in an on-chip memory, we need an extra module to move the data between the DRAM and the on-chip memory and to reorganize the data elements to match $\mathbf{Y}_{(n)}$. After the SVD, the factor matrix needs to be transposed and moved from the on-chip memory to the DRAM, which is also done by this module.
When moving the data between the on-chip memory and the external DRAM, the data is first read from its original memory, written to a local buffer, and then written to the destination memory. The size of the buffer determines the amount of data that can be moved in each batch.
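A behavioral sketch of this batched data movement is below (plain Python; `perm_index` is a hypothetical precomputed destination map and `buf_size` models the local buffer — both names are ours, not the paper's):

```python
def permute_through_buffer(src, perm_index, buf_size):
    # Move data to a permuted layout in buffer-sized batches: read a
    # chunk from the source, stage it in a small local buffer, then
    # write each element to its permuted destination address.
    dst = [0] * len(src)
    for start in range(0, len(src), buf_size):
        buf = src[start:start + buf_size]          # read one batch
        for k, v in enumerate(buf):
            dst[perm_index[start + k]] = v         # scatter to destination
    return dst
```

The buffer size trades on-chip area against the number of memory transactions, mirroring the discussion of the permutation unit's runtime later in the paper.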
IV. Implementation Details
IV-A. Fixed-Point Numbers
Floating-point numbers usually cause higher hardware cost and longer latency. Therefore, we use a fixed-point number system, based on a trade-off between accuracy and hardware complexity.
We decide the fixed-point precision based on some hardware constraints. Because the data width at the interface of the DRAM controller is fixed at 512 bits, the memory access is most efficient if the data width divides 512 evenly. Meanwhile, each DSP in our FPGA can perform a multiplication with operands of up to 27 bits × 18 bits. Considering these constraints, we use 16-bit numbers to represent all tensor data elements and store them in the external DRAM. On the other hand, we use 27-bit numbers to represent the matrix data in both TTM and SVD in order to achieve higher accuracy without an excessive use of multipliers. Note that we use a smaller data width for the tensor data in order to save memory when processing the huge amount of data in a tensor. In this case, each multiplier in the TTM unit requires one DSP block, and each multiplier in the SVD unit requires two DSP blocks.
There are many sum-of-product operations in both TTM and SVD. The small errors in the product terms accumulate when calculating the sum. In order to address this issue, we use 48-bit numbers to represent the product terms. We use 27-bit numbers to represent most of the intermediate results, except for some 32-bit intermediate results in the SVD unit.
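The effect of the 16-bit tensor format can be illustrated with a simple quantize-and-saturate helper (the 12-bit fractional split below is an assumption for illustration only, not a figure from our design):

```python
def to_fixed(x, total_bits=16, frac_bits=12):
    # Round a real value onto a signed fixed-point grid and saturate
    # on overflow: a simple model of truncation error in a
    # total_bits-wide format with frac_bits fractional bits.
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = int(round(x * (1 << frac_bits)))
    q = max(lo, min(hi, q))            # saturate instead of wrapping
    return q / (1 << frac_bits)
```

Rounding error per value is bounded by half a least-significant bit, which is why wider (48-bit) accumulators are used for the sums of many such products.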
IV-B. HOOI with a Warm-Start Algorithm
We observe that the SVD via the Jacobi iterations is the most time-consuming part of HOOI. Therefore, we employ a warm-start strategy to reduce the number of Jacobi iterations. In the standard Jacobi iterations, the initial guess is chosen as an identity matrix. In our implementation, we use the orthonormal factor matrix obtained in the previous HOOI iteration as the initial guess of the Jacobi iterations. Thanks to the good initial guess, we only need to perform one or two rounds of Jacobi iterations inside each SVD, and the whole HOOI still converges after a sufficient number of power iterations. The optimized algorithm is shown in Alg. 3.
Fig. 8 shows the convergence curves of the standard HOOI and our proposed warm-start HOOI, respectively. We consider a synthetic low-multilinear-rank tensor generated from a Gaussian distribution and corrupted with Gaussian noise, whose variance is half of that of the tensor elements. The standard method converges after only two HOOI iterations, but the SVD of each mode requires about 10 rounds of Jacobi iterations. Our optimized HOOI converges after 7 iterations, but each SVD requires only one round of Jacobi iterations, leading to a significant reduction of the total cost.
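The warm-start idea can be sketched with a one-sided Jacobi routine that orthogonalizes rows while accumulating the rotations into an initial guess (a NumPy reference model of the idea, not Alg. 3 itself; the sweep ordering here is plain cyclic rather than round-robin):

```python
import numpy as np

def jacobi_left_sv(A, U0=None, sweeps=1, eps=1e-12):
    # One-sided Jacobi on the rows of A, warm-started with the factor
    # U0 from the previous HOOI iteration (identity for a cold start).
    # Returns an orthogonal U whose columns approximate the left
    # singular vectors of A.
    m = A.shape[0]
    U = np.eye(m) if U0 is None else U0.copy()
    B = U.T @ A                          # warm start: pre-rotate the rows
    for _ in range(sweeps):
        for i in range(m - 1):
            for j in range(i + 1, m):
                g = B[i] @ B[j]          # inner product of the row pair
                if abs(g) < eps:
                    continue
                a, b = B[i] @ B[i], B[j] @ B[j]
                theta = 0.5 * np.arctan2(2 * g, a - b)
                c, s = np.cos(theta), np.sin(theta)
                B[[i, j]] = np.array([[c, s], [-s, c]]) @ B[[i, j]]
                U[:, [i, j]] = U[:, [i, j]] @ np.array([[c, -s], [s, c]])
    return U
```

With a good warm start, the rows of `U0.T @ A` are already nearly orthogonal, so a single sweep suffices — which is exactly the saving the warm-start HOOI exploits.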
V. Performance Analysis
This section analyzes the hardware performance of our FPGA-accelerated Tucker decomposition.
V-A. Runtime Analysis
The total runtime is the sum of the runtimes of each part: TTM, SVD, and tensor permutation. For a $d$-way tensor, each HOOI iteration requires $d(d-1)$ TTMs, $d$ SVDs and $d$ tensor permuting/reshaping operations. Some intermediate results can be reused. For instance, after computing $\mathcal{A} \times_3 \mathbf{U}_3^T$ for a 3-way tensor $\mathcal{A}$, the result can be reused in the TTM chains of the other modes. By considering this TTM data reuse, the actual total number of TTMs per iteration is reduced. When applying the warm-start algorithm for the Jacobi iterations, the unfolded matrix needs to be multiplied with the previous factor matrix first; this is done by the TTM unit as well, causing additional TTM operations. The total runtime is given by
$$ T_{\text{total}} = T_{\text{TTM}} + T_{\text{SVD}} + T_{\text{perm}}. \qquad (11) $$
The details of each term are provided below.
TTM Part: The runtime of TTM depends on the size of the PE array. In each clock cycle, the array takes in a group of neighbouring fiber elements of the tensor, and each of them is multiplied with several elements of the factor matrix, so the number of product and sum operations per clock cycle is at most the number of PEs. Therefore, the number of clock cycles of a TTM is approximately the total number of scalar multiplications divided by the number of PEs; the same reasoning applies to the mode-1 TTM. Some extra clock cycles are caused by the latency of the pipeline and the ping-pong buffer, but they are often a negligible fraction of the total runtime. When the TTM is applied along the first mode, the data of the factor matrix needs to be copied to the local memory of each PE in advance. This causes extra clock cycles, which are again negligible and can be done in parallel with other operations.
SVD Part: When updating the $n$-th factor matrix, we need to perform an SVD on the mode-$n$ unfolded matrix. In each Jacobi round, every pair of rows is orthogonalized once. Each orthogonalization processes two full row vectors, and its cycle count is the row length divided by the degree of parallelism of the SVD unit. As a result, each Jacobi round takes approximately
(12) 
clock cycles. Similar to the TTM case, the extra time caused by the pipeline latency is negligible.
(Caption: the MATLAB implementation uses random initialization and eigenvalue decomposition to update the factor matrices; our proposed FPGA implementation uses random initialization and the warm-started Jacobi SVD method.)
Tensor Permutation: In each clock cycle, the local buffer can either exchange (read or write) a group of numbers with the internal memory or a group of numbers with the DRAM. The two group sizes are not necessarily equal, as long as the memory supports writing that many elements at a time; however, setting them equal simplifies our design and maximizes the hardware efficiency. Each tensor permutation requires a number of clock cycles roughly proportional to the tensor size divided by the buffer bandwidth. Our simulation shows that the practical runtime is
(13) 
clock cycles when we move a tensor from the DRAM to the on-chip memory and permute it. We need
(14) 
clock cycles to move the factor matrix to the DRAM and transpose it.
V-B. Area and Power
The area and power depend on the design parameters, i.e., the PE array size, the SVD parallelism, and the buffer size. The higher the degree of parallelism, the more PEs, hardware area and power are required. In the TTM unit, the total numbers of multipliers, adders, accumulators and buffers are proportional to the size of the PE array; therefore, the area of the TTM unit is approximately proportional to the array size. Similarly, the area of the tensor permutation unit is proportional to its buffer size. The power also increases as the area increases.
The area of the SVD unit is independent of its input matrix size, but depends on the degree of parallelism. Additionally, one arctan module and one sin/cos module are required. Therefore, the area of the SVD unit is estimated as a linear function of its degree of parallelism plus a constant term, where the linear coefficient represents the area (multipliers, adders, accumulators, etc.) required per parallel lane, and the constant term represents the area of the arctan and sin/cos blocks.
VI. Results
TABLE I: TTM unit implementation results under different design parameters.
Parameters | LUTs | Registers | DSPs | Clock rate | Power
(16, 16) | 46,056 | 24,556 | 256 | 212 MHz | 2.008 W
(16, 32) | 99,384 | 48,357 | 512 | 200 MHz | 2.395 W
(32, 16) | 99,505 | 48,189 | 512 | 203 MHz | 2.503 W
(32, 32) | 198,269 | 95,743 | 1,024 | 187 MHz | 3.141 W
TABLE II: SVD unit implementation results under different degrees of parallelism.
Parallelism | LUTs | Registers | DSPs | Clock rate | Power
16 | 8,711 | 12,284 | 128 | 209 MHz | 0.477 W
32 | 11,134 | 13,965 | 256 | 207 MHz | 0.683 W
64 | 16,127 | 17,532 | 512 | 208 MHz | 1.095 W
128 | 25,360 | 24,631 | 1,024 | 203 MHz | 1.871 W
TABLE III: Tensor permutation unit implementation results under different buffer parameters.
Parameters | LUTs | Registers | Clock rate | Power
(16, 64) | 29,929 | 27,718 | 241 MHz | 1.342 W
(32, 64) | 59,308 | 55,366 | 209 MHz | 1.961 W
(32, 128) | 115,749 | 110,662 | 205 MHz | 2.981 W
VI-A. FPGA Performance Validation
We implement all parts of the optimized HOOI with different design parameters on an FPGA. All the results below are based on the Xilinx XCVU9PL2FSGD2104E FPGA, which is available on a Xilinx VCU1525 acceleration board. The power is estimated at a 200 MHz clock rate. The results for the different units are shown in Tables I–III. The hardware complexity, including the numbers of look-up tables (LUTs), registers and DSP blocks, is determined by the design parameters: the area of the TTM unit is approximately proportional to its PE array size, the area of the SVD unit to its degree of parallelism, and the area of the tensor permute unit to its buffer size. The power consumption increases as we increase the design parameters, but the relationship is not strictly linear, since the power consumption of some parts (e.g., the clock generator) is independent of our design parameters.
VI-B. Performance Comparisons
We compare our FPGA accelerator with two other commonly used toolboxes on different platforms: the Tensor Toolbox [40, 41] in MATLAB on CPU, and the TensorLy toolbox [42] in Python on both CPU and GPU. TensorLy can select NumPy, PyTorch or MXNet as its backend, and the latter two allow high-performance GPU computation. In our experiments, we use the NumPy and PyTorch backends for comparison. The runtime on CPU is measured on a Linux desktop with an Intel i5-8400 6-core CPU and 32 GB memory. The GPU experiments are conducted on a Titan V GPU. Since the accuracy and convergence of our Tucker decomposition depend on the fixed-point number system, we developed a fixed-point simulator with the Xilinx fixed-point number library to simulate the truncation error and overflow in our FPGA accelerator.
We perform the comparison using some randomly generated low-rank tensors. For each tensor, both the core tensor and the factor matrices are generated from Gaussian distributions, and the tensor is then corrupted by Gaussian random noise. The relative error is evaluated as
$$ \epsilon = \frac{\|\hat{\mathcal{A}} - \mathcal{A}\|_F}{\|\mathcal{A}\|_F}, \qquad (15) $$
where $\hat{\mathcal{A}}$ denotes the computed Tucker approximation of $\mathcal{A}$.
Fig. 9 shows the results on a set of 3-way tensors. Our architecture achieves significant speedup compared with the MATLAB Tensor Toolbox on CPU, and even more compared with the TensorLy toolbox on both CPU and GPU. The convergence behavior of our FPGA-based Tucker decomposition is shown in Fig. 9(a): our method always converges after 6–8 HOOI iterations. Therefore, we assume 8 HOOI iterations when estimating the runtime of our FPGA architecture.
We further perform comparisons using some 4-way tensors and show the results in Fig. 10. Our PC with 32 GB RAM can no longer accommodate such 4-way tensor data, therefore the results on CPU are obtained by running our experiments on an Amazon AWS r4.4xlarge instance with 16 virtual CPUs (vCPUs) and 122 GB memory. Since large 4-way tensors exceed the memory of the GPU, and TensorLy with the NumPy backend takes an extremely long time, their runtimes are not shown in Fig. 10. When the size along each dimension is 256, MATLAB runs out of memory. On these 4-way tensor data, our FPGA design achieves significant speedup compared with the MATLAB Tensor Toolbox on the AWS CPU.
Remark: Our FPGA accelerator uses the Jacobi iterations to perform SVD, whereas the MATLAB Tensor Toolbox and TensorLy use more advanced SVD algorithms. If the same SVD algorithm were used in all implementations, our FPGA design would achieve an even more significant speedup.
VI-C. Application: MRI Compression
The accelerated Tucker decomposition can be applied in multiple application domains. Here we demonstrate its application to compressing multi-way MRI data. The dataset is a single-channel, first-pass myocardial perfusion real-time MRI sequence. We use the preprocessed data available in [43, 44] and set the target multilinear rank in HOOI accordingly. The estimated runtime with our architecture is 0.0447 s at a clock rate of 185 MHz. In comparison, on a Linux PC with an Intel i5-8500 6-core CPU, the same operation takes 0.0964 s with the MATLAB Tensor Toolbox, 0.335 s with the TensorLy toolbox and NumPy backend, and 1.352 s with the TensorLy toolbox and PyTorch backend. On a PC with an NVIDIA Titan V GPU, this operation takes 0.217 s. Therefore, our method provides 2.16×–30.2× speedup compared with existing frameworks on CPU, and 4.85× speedup compared with the GPU. Fig. 11 shows the original MRI data and the data approximated by our low-rank Tucker decomposition on FPGA.
VII. Conclusion
This paper has presented an algorithm-architecture co-design to perform tensor Tucker decomposition. We have implemented tensor-times-matrix multiplication, matrix SVD with single-sided Jacobi iterations, and tensor permutation on an FPGA. We have also proposed a warm-start algorithm for the Jacobi iterations to reduce the computation time. We have run simulations on both synthetic data sets and an MRI data set. Our method is significantly faster than existing computation frameworks on both CPU and GPU. This accelerator can be employed in a broad range of applications including data mining, scientific computing, computer vision, and deep learning.
Acknowledgment
This work was supported by NSF CCF 1817037. We would like to thank Sophia Shao (NVIDIA), Lei Deng, Zheng Qu and Bangyan Wang (UCSB) for their technical discussions.
References
 [1] T. Kolda and B. Bader, “Tensor decompositions and applications,” SIAM Rev, vol. 51, no. 3, pp. 455–500, 2009.
 [2] F. Hitchcock, “The expression of a tensor or a polyadic as a sum of products,” J. Math. Phys., vol. 6, no. 14, pp. 164–189, 1927.
 [3] L. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
 [4] I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal Sci. Comp., vol. 33, no. 5, pp. 2295–2317, 2011.
 [5] T. G. Kolda and J. Sun, “Scalable tensor decompositions for multi-aspect data mining,” in Proc. IEEE Int. Conf. Data Mining, 2008, pp. 363–372.
 [6] C. Hawkins and Z. Zhang, “Bayesian tensorized neural networks with automatic rank selection,” arXiv preprint arXiv:1905.10478, 2019.
 [7] H. Ding, K. Chen, Y. Yuan, M. Cai, L. Sun, S. Liang, and Q. Huo, “A compact CNN-DBLSTM based character model for offline handwriting recognition with Tucker decomposition,” in Proc. IEEE Int. Conf. Document Analysis and Recognition, vol. 1, 2017, pp. 507–512.
 [8] J.-T. Chien and Y.-T. Bao, “Tensor-factorized neural networks,” IEEE Trans. Neur. Networks Learn. Syst., vol. 29, no. 5, pp. 1998–2011, 2018.
 [9] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov, “Tensorizing neural networks,” in NIPS, 2015, pp. 442–450.
 [10] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, “Speeding-up convolutional neural networks using fine-tuned CP-decomposition,” arXiv preprint arXiv:1412.6553, 2014.
 [11] Y. Yang, D. Krompass, and V. Tresp, “Tensor-train recurrent neural networks for video classification,” in Proc. Int. Conf. Machine Learning, 2017, pp. 3891–3900.
 [12] Z. Zhang, X. Yang, I. V. Oseledets, G. E. Karniadakis, and L. Daniel, “Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition,” IEEE Trans. CAD of Integrated Circuits and Systems, vol. 34, no. 1, pp. 63–76, 2015.
 [13] Z. Zhang, T.-W. Weng, and L. Daniel, “Big-data tensor recovery for high-dimensional uncertainty quantification of process variations,” IEEE Trans. Comp., Pack. Manuf. Tech., vol. 7, no. 5, pp. 687–697, 2017.
 [14] S. F. Roohi, D. Zonoobi, A. A. Kassim, and J. L. Jaremko, “Dynamic MRI reconstruction using low rank plus sparse tensor decomposition,” in Proc. Int. Conf. Image Process., 2016, pp. 1769–1773.
 [15] J. Li, G. Han, J. Wen, and X. Gao, “Robust tensor subspace learning for anomaly detection,” Int. J. Machine Learning and Cybernetics, vol. 2, no. 2, pp. 89–98, 2011.
 [16] D. Saito, K. Yamamoto, N. Minematsu, and K. Hirose, “One-to-many voice conversion based on tensor representation of speaker space,” in Proc. Int. Conf. Speech Comm. Assoc., 2011.
 [17] O. Kaya and B. Uçar, “High performance parallel algorithms for the Tucker decomposition of sparse tensors,” in Proc. IEEE Int. Conf. Parall. Proc., 2016, pp. 103–112.
 [18] S. Smith, J. Park, and G. Karypis, “Sparse tensor factorization on many-core processors with high-bandwidth memory,” in Proc. IEEE Int. Parallel and Distributed Processing Symp, 2017, pp. 1058–1067.
 [19] J. Li, C. Battaglino, I. Perros, J. Sun, and R. Vuduc, “An input-adaptive and in-place approach to dense tensor-times-matrix multiply,” in Proc. Int. Conf. High Performance Computing, Networking, Storage and Analysis, 2015, pp. 1–12.
 [20] A. Amira, A. Bouridane, and P. Milligan, “Accelerating matrix product on reconfigurable hardware for signal processing,” in Proc. Int. Conf. Field Programmable Logic and Applications, 2001, pp. 101–111.
 [21] Y. Dou, S. Vassiliadis, G. K. Kuzmanov, and G. N. Gaydadjiev, “64-bit floating-point FPGA matrix multiplication,” in Proc. Int. Symp. Field-Programmable Gate Arrays, 2005, pp. 86–95.
 [22] J. Zhu and P. Sutton, “FPGA implementations of neural networks – a survey of a decade of progress,” in Proc. FPLA, 2003, pp. 1062–1066.
 [23] C. Wang, L. Gong, Q. Yu, X. Li, Y. Xie, and X. Zhou, “DLAU: A scalable deep learning accelerator unit on FPGA,” IEEE Trans. CAD of Integr. Circuits and Systems, vol. 36, no. 3, pp. 513–517, 2017.
 [24] K. Irick, M. DeBole, V. Narayanan, and A. Gayasen, “A hardware efficient support vector machine architecture for FPGA,” in Proc. Int. Symp. FPCCM, 2008, pp. 304–305.
 [25] J. L. Stanislaus and T. Mohsenin, “Low-complexity FPGA implementation of compressive sensing reconstruction,” in Int. Conf. Comput., Netw. Comm., 2013, pp. 671–675.
 [26] M. A. O. Vasilescu and D. Terzopoulos, “Multilinear image analysis for facial recognition,” in Object recognition supported by user interaction for service robots, vol. 2. IEEE, 2002, pp. 511–514.
 [27] N. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Trans. Sign. Proc., vol. 65, no. 13, pp. 3551–3582, 2017.
 [28] Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin, “Compression of deep convolutional neural networks for fast and low power mobile applications,” arXiv preprint arXiv:1511.06530, 2015.
 [29] L. De Lathauwer, B. De Moor, and J. Vandewalle, “On the best rank-1 and rank-(R1, R2, …, RN) approximation of higher-order tensors,” SIAM J. Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324–1342, 2000.
 [30] L. R. Tucker, “Implications of factor analysis of three-way matrices for measurement of change,” Problems in Measuring Change, vol. 15, pp. 122–137, 1963.
 [31] P. M. Kroonenberg and J. De Leeuw, “Principal component analysis of threemode data by means of alternating least squares algorithms,” Psychometrika, vol. 45, no. 1, pp. 69–97, 1980.
 [32] A. Kapteyn, H. Neudecker, and T. Wansbeek, “An approach to n-mode components analysis,” Psychometrika, vol. 51, no. 2, pp. 269–275, 1986.
 [33] E. R. Hansen, “On cyclic Jacobi methods,” J. Soc. Indust. and Appl. Math., vol. 11, no. 2, pp. 448–459, 1963.
 [34] J. Demmel and K. Veselić, “Jacobi’s method is more accurate than QR,” SIAM J. Matrix Anal. Appl., vol. 13, no. 4, pp. 1204–1245, 1992.
 [35] N. Hemkumar and J. Cavallaro, “A systolic VLSI architecture for complex SVD,” in Proc. IEEE ISCAS, vol. 3, 1992, pp. 1061–1064.
 [36] A. Ahmedsaid, A. Amira, and A. Bouridane, “Improved SVD systolic array and implementation on FPGA,” in Proc. FPL, 2003, pp. 35–42.
 [37] M. Rahmati, M. S. Sadri, and M. A. Naeini, “FPGA based singular value decomposition for image processing applications,” in IEEE Intl. Conf. ASSAP, 2008, pp. 185–190.
 [38] R. Brent and F. Luk, “The solution of singular-value and symmetric eigenvalue problems on multiprocessor arrays,” SIAM J. Sci. Stat. Comp., vol. 6, no. 1, pp. 69–84, 1985.
 [39] J. E. Volder, “The CORDIC trigonometric computing technique,” IRE Trans. Electronic Computers, no. 3, pp. 330–334, 1959.
 [40] B. Bader, T. Kolda et al., “Matlab tensor toolbox version 2.6,” Available online, February 2015. [Online]. Available: http://www.sandia.gov/~tgkolda/TensorToolbox/
 [41] B. W. Bader and T. G. Kolda, “Algorithm 862: MATLAB tensor classes for fast algorithm prototyping,” ACM Trans. Math. Software, vol. 32, no. 4, pp. 635–653, Dec 2006.
 [42] J. Kossaifi, Y. Panagakis, A. Anandkumar, and M. Pantic, “TensorLy: Tensor learning in python,” CoRR, vol. abs/1610.09555, 2018.
 [43] “First-pass myocardial perfusion real-time MRI dataset,” https://statweb.stanford.edu/~candes/SURE/matlab/JDT/DATA/invivo_perfusion4.mat, accessed: 2019-03-19.
 [44] S. Lingala, Y. Hu, E. DiBella, and M. Jacob, “Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR,” IEEE Trans. Medical Imaging, vol. 30, no. 5, pp. 1042–1054, 2011.