Multilinear Compressive Learning

Compressive Learning is an emerging topic that combines signal acquisition via compressive sensing and machine learning to perform inference tasks directly on a small number of measurements. Many data modalities naturally have a multi-dimensional or tensorial format, with each dimension or tensor mode representing different features such as the spatial and temporal information in video sequences or the spatial and spectral information in hyperspectral images. However, in existing compressive learning frameworks, the compressive sensing component utilizes either random or learned linear projection on the vectorized signal to perform signal acquisition, thus discarding the multi-dimensional structure of the signals. In this paper, we propose Multilinear Compressive Learning, a framework that takes into account the tensorial nature of multi-dimensional signals in the acquisition step and builds the subsequent inference model on the structurally sensed measurements. Our theoretical complexity analysis shows that the proposed framework is more efficient compared to its vector-based counterpart in both memory and computational requirements. With extensive experiments, we also empirically show that our Multilinear Compressive Learning framework outperforms the vector-based framework in object classification and face recognition tasks, and scales favorably when the dimensionalities of the original signals increase, making it highly efficient for large multi-dimensional signals.




I Introduction

The classical sample-based signal acquisition and manipulation approach usually involves separate steps of signal sensing, compression, storing or transmitting, and then reconstruction. This approach requires the signal to be sampled above the Nyquist rate in order to ensure high-fidelity reconstruction. Since the advent of spatial-multiplexing cameras over the past decade, Compressive Sensing (CS) [1] has become an efficient and prominent approach for signal acquisition at sub-Nyquist rates, combining the sensing and compression steps at the hardware level. This is possible because the signal often possesses specific structures that exhibit a sparse or compressible representation in some basis, and thus can be sensed at a rate lower than the Nyquist rate while still allowing almost perfect reconstruction [2, 3]. In fact, many data modalities that we operate on are often sparse or compressible. For example, smooth signals are compressible in the Fourier domain, and subsequent frames in a video are piecewise smooth, thus compressible in a wavelet domain. With efficient realizations at the hardware level, such as the popular Single Pixel Camera, CS has become an efficient signal acquisition framework; manipulating the sensed signal, however, becomes a challenging task. Indeed, over the past decade, since reversing the signal to its original domain is often considered a necessary step for signal manipulation, a significant amount of work has been dedicated to signal reconstruction, giving certain insights and theoretical guarantees for the successful recovery of the signal from compressively sensed measurements [2, 1, 3].

While signal recovery plays a major role in some sensing applications such as image acquisition for visual purposes, there are many scenarios in which the primary objective is the detection of certain patterns or the inference of some properties of the acquired signal. For example, in many radar applications, one is often interested in anomaly patterns in the measurements rather than signal recovery. Moreover, in certain applications [4, 5], signal reconstruction is undesirable since this step can potentially disclose private information, leading to the infringement of data protection legislation. These scenarios naturally led to the emergence of the Compressive Learning (CL) concept [6, 7, 8, 9], in which the inference system is built on top of the compressively sensed measurements without an explicit reconstruction step. While the amount of literature on CL is rather small compared to that on signal reconstruction in CS, different attempts have been made to modify the sensing component in accordance with the learning task [10, 11], to extract discriminative features [7, 12] from the randomly sensed measurements, or to jointly optimize the sensing matrix [13, 14] and the subsequent inference system. Although improvements to different components of the CL pipeline have been proposed, existing frameworks utilize the same compressive acquisition step that performs a linear projection of the vectorized data, thereby operating on vector-based measurements and thus losing the tensorial structure in the measurements of multi-dimensional data.

In fact, many data modalities naturally possess a tensorial format, such as color images, videos, or multivariate time-series. The multi-dimensional representation naturally reflects the semantic differences inherent in different dimensions or tensor modes. For example, the spatial and temporal dimensions in a video, or the spatial and spectral dimensions in hyperspectral images, represent two different concepts with different properties. Thus, by exploiting this natural form of the signals and considering the semantic differences between dimensions, many tensor-based signal processing and learning algorithms have shown their superiority over vector-based approaches, which simply operate on the vectorized data [15, 16, 17, 18, 19, 20, 21]. Indeed, tensor representations and their associated mathematical operations and properties have found various applications in the Machine Learning community. For example, in multivariate time-series analysis, multilinear projection was utilized in [18, 22] to model the dependencies between data points along the feature and temporal dimensions separately. Several multilinear regression [23, 24] and discriminant models [25, 26] have been developed to replace their linear counterparts, with improved performance. In the neural network literature, multilinear techniques have been employed to compress pre-trained networks [27, 28, 29] or to construct novel neural network architectures [19, 30, 22].

It is worth noting that CS plays an important role in many applications that involve high-dimensional tensor signals because standard point-based signal acquisition is both memory- and computation-intensive. Representative examples include Hyperspectral Compressive Imaging (HCI), Synthetic Aperture Radar (SAR) imaging, Magnetic Resonance Imaging (MRI), and Computed Tomography (CT). Therefore, the tensor-based approach has also found its place in CS, where it is known as Multi-dimensional Compressive Sensing (MCS) [31], which replaces the linear sensing and reconstruction models with multilinear ones. Similar to vector-based CS, hereafter simply referred to as CS, the majority of efforts in MCS are dedicated to constructing multilinear models that induce a sparse representation along each tensor mode with respect to a set of bases. For example, the adoption of the sparse Tucker representation and the Kronecker sensing scheme in MRI allows computationally efficient signal recovery with high fidelity in terms of Peak Signal-to-Noise Ratio (PSNR) [31, 32]. In addition, the availability of optical implementations of separable sensing operators such as [33] naturally enables MCS, significantly reducing the amount of data collection and the reconstruction cost.

While multilinear models have been successfully applied in Compressive Sensing and Machine Learning, to the best of our knowledge, they have not yet been utilized in Compressive Learning, the joint framework combining CS and ML. In this paper, in order to leverage the multi-dimensional structure of many data modalities, we propose the Multilinear Compressive Learning framework, which adopts a multilinear sensing operator and a neural network classifier designed to utilize the multi-dimensional structure-preserving compressed measurements. The contributions of this paper are as follows:

  • We propose Multilinear Compressive Learning (MCL), a novel CL framework that consists of a multilinear sensing module and a multilinear feature synthesis component, both taking into account the multi-dimensional property of the signals, followed by a task-specific neural network. The multilinear sensing module compressively senses along each separate mode of the original tensor signal, producing structurally encoded measurements. Similarly, the feature synthesis component performs the feature learning steps separately along each mode of the compressed measurements, producing inputs to the subsequent task-specific neural network, whose structure depends on the inference problem.

  • We show both theoretically and empirically that the proposed MCL framework is highly cost-effective in terms of memory and computational complexity. In addition, theoretical analysis and experimental results also indicate that our framework scales well when the dimensionalities of the original signal increase, making it highly efficient for high-dimensional tensor signals.

  • We conduct extensive experiments in object classification and face recognition tasks to validate the performance of our framework in comparison with its vector-based counterpart. Besides, the effects of different components and hyperparameters of the proposed framework were also empirically analyzed.

  • We publicly provide our implementation of the experiments reported in this paper to facilitate future research. By following our detailed instructions on how to set up the software environment, all experiment results can be reproduced with a single line of code.

The remainder of the paper is organized as follows: in Section 2, we review the background information in Compressive Sensing, Multi-dimensional Compressive Sensing and Compressive Learning. In Section 3, the detailed description of the proposed Multilinear Compressive Learning framework is given. Complexity analysis and comparison with the vector-based framework are also given in Section 3. In Section 4, we provide details of our experiment protocols and quantitative analysis of different experiment configurations. Section 5 concludes our work with possible future research directions.

II Related Work

II-A Notation

In this paper, we denote scalar values by either lower-case or upper-case characters ($x$, $X$), vectors by lower-case bold-face characters ($\mathbf{x}$), matrices by upper-case or Greek bold-face characters ($\mathbf{W}$, $\mathbf{\Phi}$), and tensors by calligraphic capitals ($\mathcal{X}$). A tensor with $N$ modes and dimension $I_n$ in mode-$n$ is represented as $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$. The entry at the $i_n$th index in mode-$n$, for $n = 1, \dots, N$, is denoted as $\mathcal{X}_{i_1, \dots, i_N}$. In addition, $\mathrm{vec}(\mathcal{X})$ denotes the vectorization operation that rearranges the elements of $\mathcal{X}$ into a vector representation.

Definition 1 (The Kronecker Product)

The Kronecker product between two matrices $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{p \times q}$ is denoted as $\mathbf{A} \otimes \mathbf{B}$, having dimension $mp \times nq$, and is defined by:

$$\mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{11}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1}\mathbf{B} & \cdots & a_{mn}\mathbf{B} \end{bmatrix} \quad (1)$$

Definition 2 (Mode-$n$ Product)

The mode-$n$ product between a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and a matrix $\mathbf{W} \in \mathbb{R}^{J_n \times I_n}$ is another tensor of size $I_1 \times \cdots \times I_{n-1} \times J_n \times I_{n+1} \times \cdots \times I_N$, denoted by $\mathcal{X} \times_n \mathbf{W}$. The element of $\mathcal{X} \times_n \mathbf{W}$ is defined as $(\mathcal{X} \times_n \mathbf{W})_{i_1, \dots, i_{n-1}, j_n, i_{n+1}, \dots, i_N} = \sum_{i_n=1}^{I_n} \mathcal{X}_{i_1, \dots, i_N} \mathbf{W}_{j_n, i_n}$.
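As an illustration, the mode-$n$ product above can be implemented in a few lines of NumPy; the function name and the shapes in the example are ours, for illustration only:

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Compute the mode-n product X x_n W.

    Contracts dimension `mode` of `tensor` (size I_n) with the second
    dimension of `matrix` (shape J_n x I_n), so the result has size J_n
    in that mode and is unchanged elsewhere.
    """
    # np.tensordot contracts tensor axis `mode` with matrix axis 1 and
    # places the new J_n axis last; move it back to position `mode`.
    result = np.tensordot(tensor, matrix, axes=(mode, 1))
    return np.moveaxis(result, -1, mode)

# Example: a 4 x 5 x 6 tensor multiplied along mode 1 with a 2 x 5 matrix.
X = np.random.randn(4, 5, 6)
U = np.random.randn(2, 5)
Y = mode_n_product(X, U, 1)
print(Y.shape)  # (4, 2, 6)
```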

The following relationship between the Kronecker product and the mode-$n$ product is the cornerstone of MCS:

$$\mathcal{Y} = \mathcal{X} \times_1 \mathbf{W}_1 \times_2 \mathbf{W}_2 \cdots \times_N \mathbf{W}_N \quad (2)$$

can be written as

$$\mathbf{y} = \big( \mathbf{W}_N \otimes \cdots \otimes \mathbf{W}_1 \big) \mathbf{x} \quad (3)$$

where $\mathbf{y} = \mathrm{vec}(\mathcal{Y})$ and $\mathbf{x} = \mathrm{vec}(\mathcal{X})$.

II-B Compressive Sensing

Compressive Sensing (CS) [1] is a signal acquisition and manipulation paradigm that performs simultaneous sensing and compression at the hardware level, leading to a large reduction in computational cost and in the number of measurements. A signal $\mathbf{x} \in \mathbb{R}^N$ working under CS is assumed to have a sparse or compressible representation in some basis or dictionary $\mathbf{\Psi}$, that is:

$$\mathbf{x} = \mathbf{\Psi} \mathbf{s}, \quad \|\mathbf{s}\|_0 \leq K \ll N \quad (4)$$

where $\|\mathbf{s}\|_0$ denotes the number of non-zero entries in $\mathbf{s}$. While the dictionary presented in Eq. (4) is complete, i.e., the number of columns in $\mathbf{\Psi}$ is equal to the signal dimension $N$, we should note that signal models with over-complete dictionaries, i.e., $\mathbf{\Psi} \in \mathbb{R}^{N \times D}$ with $D > N$, can also work with some modifications [34].

With the sparsity assumption, CS performs the linear sensing step using the sensing operator $\mathbf{\Phi} \in \mathbb{R}^{M \times N}$, acquiring a small number of measurements $\mathbf{y} \in \mathbb{R}^M$, with $M \ll N$, from the analog signal $\mathbf{x}$:

$$\mathbf{y} = \mathbf{\Phi} \mathbf{x} \quad (5)$$

Eq. (5) represents both the sensing and compression steps, which can be efficiently implemented at the sensor level. Thus, what we obtain from CS sensors is a limited number of measurements $\mathbf{y}$ that is used for other processing steps. By combining Eq. (4) and (5), the CS model is usually expressed as:

$$\mathbf{y} = \mathbf{\Phi} \mathbf{\Psi} \mathbf{s} \quad (6)$$

In some applications, we are interested in recovering the signal $\mathbf{x}$ from $\mathbf{y}$. This involves developing theoretical properties and algorithms to determine the sensing operator $\mathbf{\Phi}$, the dictionary or basis $\mathbf{\Psi}$, and the number of nonzero coefficients $K$ in order to ensure that the reconstruction is unique and of high fidelity [2, 35, 3]. The reconstruction of $\mathbf{x}$ is often posed as finding the sparsest solution of the under-determined linear system [36], particularly:

$$\min_{\mathbf{s}} \|\mathbf{s}\|_0 \quad \text{s.t.} \quad \|\mathbf{y} - \mathbf{\Phi} \mathbf{\Psi} \mathbf{s}\|_2 \leq \epsilon \quad (7)$$

where $\epsilon$ is a small constant specifying the amount of residual error allowed in the approximation. A large body of research has been dedicated to solving the problem in Eq. (7) and its variants, with two main approaches: basis pursuit (BP), which relaxes Eq. (7) to a convex problem that can be solved by linear programming [37] or second-order cone programs [2], and matching pursuit (MP), a class of greedy algorithms which iteratively refines the solution toward the sparsest one [38, 39]. Both BP and MP algorithms are computationally intensive when the number of elements in $\mathbf{s}$ is large, especially in the case of multi-dimensional signals.
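As an example of the matching-pursuit family, here is a minimal Orthogonal Matching Pursuit sketch; it is our own simplified version (assuming the sparsity level is known), not the exact algorithms of [38, 39]:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse s with y ~ A s."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

# Toy recovery: a 4-sparse signal from 64 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
s_true = np.zeros(128)
s_true[[3, 40, 77, 120]] = [1.0, -2.0, 1.5, 1.2]
s_hat = omp(A, A @ s_true, k=4)
print(np.linalg.norm(s_hat - s_true))
```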

II-C Multi-dimensional Compressive Sensing

Given a multi-dimensional signal $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, a direct application of the sparse representation in Eq. (4) requires vectorizing $\mathcal{X}$ and performing calculations on $\mathbf{\Psi}$, a very large matrix whose number of elements scales exponentially with $N$. Instead of assuming $\mathrm{vec}(\mathcal{X})$ is sparse in some basis or dictionary, MCS adopts a sparse Tucker model [40] as follows:

$$\mathcal{X} = \mathcal{S} \times_1 \mathbf{\Psi}_1 \times_2 \cdots \times_N \mathbf{\Psi}_N \quad (8)$$

which assumes that the signal is sparse with respect to a set of bases or dictionaries $\mathbf{\Psi}_n$. In some cases, the sensing step can be taken in a multilinear way, i.e., by using a set of linear operators along each mode separately, also known as separable sensing operators:

$$\mathcal{Y} = \mathcal{X} \times_1 \mathbf{\Phi}_1 \times_2 \cdots \times_N \mathbf{\Phi}_N \quad (9)$$

which allows us to obtain measurements $\mathcal{Y}$ with a retained multi-dimensional structure. From Eq. (2), (3), (8), and (9), the MCS model is often expressed as:

$$\mathbf{y} = \big( \mathbf{\Phi}_N \otimes \cdots \otimes \mathbf{\Phi}_1 \big) \big( \mathbf{\Psi}_N \otimes \cdots \otimes \mathbf{\Psi}_1 \big) \mathbf{s} \quad (10)$$

where $\mathbf{y} = \mathrm{vec}(\mathcal{Y})$, $\mathbf{s} = \mathrm{vec}(\mathcal{S})$, and $\mathbf{\Phi}_n \in \mathbb{R}^{m_n \times I_n}$ ($m_n < I_n$). The formulation in Eq. (10) is also known as Kronecker CS [41].
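The equivalence between separable sensing and the Kronecker-structured vectorized sensing can be checked numerically; the tensor and measurement shapes below are toy values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
I = (4, 5, 6)                 # original tensor dimensions (toy values)
M = (2, 3, 3)                 # measurements per mode
X = rng.standard_normal(I)
U = [rng.standard_normal((M[n], I[n])) for n in range(3)]

# Separable (multilinear) sensing: one small matrix per mode.
Y = np.einsum('abc,ia,jb,kc->ijk', X, U[0], U[1], U[2])

# Equivalent vectorized sensing with a single Kronecker-structured matrix.
# Note the reversed Kronecker order and the column-major (mode-1) vectorization.
Phi = np.kron(U[2], np.kron(U[1], U[0]))          # (2*3*3) x (4*5*6)
y_vec = Phi @ X.flatten(order='F')

print(np.allclose(Y.flatten(order='F'), y_vec))
```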

Since MCS can be expressed in the vector form, the existing algorithms and theoretical bounds for vector-based CS have also been extended for MCS. Representative examples include Kronecker OMP and its tensor block-sparsity extension [42] that improves the computation significantly. It is worth noting that by adopting a multilinear structure, MCS operates with a set of smaller sensing and dictionaries, requiring much lower memory and computation compared to the vectorization approach [31].

II-D Compressive Learning

Fig. 1: Illustration of the proposed Multilinear Compressive Learning framework

The idea of learning directly from compressed measurements dates back to the early work of [7], in which the authors proposed a framework termed compressive classification that introduces the concept of smashed filters and operates directly on the compressive measurements without reconstruction as a first proxy step. The result in [7] was subsequently strengthened in [43], showing that when a sufficiently large random sensing matrix is used, it can capture the structure of the data manifold. Later, further extensions that extract discriminative features from compressive measurements for activity recognition [44, 45] or face recognition [12] were also proposed.

The concept of CL was introduced in [6], which provides theoretical analysis illustrating that learning machines can be built directly in the compressed domain. Particularly, given certain conditions on the sensing matrix, the performance of a linear Support Vector Machine (SVM) trained on compressed measurements is as good as that of the best linear threshold classifier trained on the original signal. Later, for compressive learning of signals described by a Gaussian Mixture Model, the asymptotic behavior of the upper bound [9] and its extension [11] to learn the sensing matrix were also derived.

The idea of jointly optimizing the sensing matrix with the classifier was also adopted in [10], in which the authors proposed an adaptive version of a feature-specific imaging system to learn an optimal sensing matrix based on past measurements. With the advances in computing hardware and stochastic optimization techniques, an end-to-end CL system was proposed in [13], with several follow-up extensions and applications [46, 47, 48] indicating the superior performance obtained when simultaneously optimizing the sensing component and the classifier via task-specific data. Our work is closely related to the end-to-end CL system in [13] in that we also optimize the CL system via stochastic optimization in an end-to-end manner. Different from [13], our proposed framework efficiently utilizes the tensor structure inherent in many types of signals, thus outperforming the approach in [13] in both inference performance and computational efficiency.

III Multilinear Compressive Learning Framework

In this Section, we first describe the proposed Multilinear Compressive Learning (MCL) framework, which operates directly on the tensor representation of the signals. Then, the initialization scheme and optimization procedures of the proposed framework are discussed. Lastly, a theoretical analysis of the framework's complexity in comparison with its vector-based counterpart is provided.

III-A Motivation

In order to model the multi-dimensional structure of the signal of interest $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, we assume that the discriminative structure in $\mathcal{X}$ can be captured in a lower-dimensional multilinear subspace of $\mathbb{R}^{I_1 \times \cdots \times I_N}$ with $m_n < I_n$ ($n = 1, \dots, N$):

$$\mathcal{X} = \mathcal{Z} \times_1 \mathbf{W}_1 \times_2 \cdots \times_N \mathbf{W}_N \quad (11)$$

where $\mathbf{W}_n \in \mathbb{R}^{I_n \times m_n}$ denote the factor matrices and $\mathcal{Z} \in \mathbb{R}^{m_1 \times \cdots \times m_N}$ is the signal representation in this multilinear subspace.

Here we should note that although Eq. (11) in our framework and Eq. (8) in MCS look similar in their mathematical form, the assumptions and motivations are different. The objective in MCS is to reconstruct the signal by assuming the existence of a set of sparsifying dictionaries or bases and optimizing for the sparsest representation. Since our objective is to learn a classification or regression model, we make no assumption or constraint on the sparsity of the subspace representation but assume that the factorization in Eq. (11) can lead to a tensor subspace in which the representation is discriminative or meaningful for the learning problem.

As mentioned in the previous Section, in some applications the measurements can be taken in a multilinear fashion, with different linear sensing operators operating along different tensor modes, i.e., separable sensing operators. In this case, we obtain the measurements $\mathcal{Y}$ from the following sensing equation:

$$\mathcal{Y} = \mathcal{X} \times_1 \mathbf{\Phi}_1 \times_2 \cdots \times_N \mathbf{\Phi}_N \quad (12)$$

where $\mathbf{\Phi}_n \in \mathbb{R}^{m_n \times I_n}$ ($n = 1, \dots, N$) represent the sensing matrices of those linear operators.

In cases where the measurements of the multi-dimensional signals are taken in a vector-based fashion, i.e., with the following sensing model:

$$\mathbf{y} = \mathbf{\Phi} \, \mathrm{vec}(\mathcal{X}) \quad (13)$$

with a single sensing operator $\mathbf{\Phi}$, we can still enforce a structure-preserving sensing operation similar to the multilinear sensing scheme in Eq. (12) by setting:

$$\mathbf{\Phi} = \mathbf{\Phi}_N \otimes \cdots \otimes \mathbf{\Phi}_1 \quad (14)$$

to obtain $\mathcal{Y}$ in Eq. (12) from $\mathbf{y}$ in Eq. (13).

Combining Eq. (11) and (12), we can express our measurements as:

$$\mathcal{Y} = \mathcal{Z} \times_1 (\mathbf{\Phi}_1 \mathbf{W}_1) \times_2 \cdots \times_N (\mathbf{\Phi}_N \mathbf{W}_N) \quad (15)$$

where $\mathcal{Z}$ and $\mathbf{W}_n$ are the subspace representation and factor matrices of Eq. (11). By setting the sensing matrices $\mathbf{\Phi}_n$ to be the pseudo-inverse of $\mathbf{W}_n$ for all $n$, we obtain measurements that lie in the discriminative tensor subspace mentioned previously.

III-B Design

Figure 1 illustrates our proposed MCL framework which consists of the following components:

  • CS component: the data acquisition step for the multi-dimensional signal is done via separable linear sensing operators, one along each mode. As mentioned previously, in cases where the actual hardware implementation only allows a vector-based sensing scheme, Eq. (14) allows the simulation of this multilinear sensing step. This component produces measurements with an encoded tensor structure, having the same number of tensor modes as the original signal.

  • Feature Synthesis (FS) component: from the measurements, this step performs feature extraction along each mode with a set of learnable matrices. Since the measurements typically have many fewer elements than the original signal, the FS component expands the dimensions of the measurements, allowing better separability between the sensed signals from different classes in a higher multi-dimensional space that is found through optimization. While the sensing step performs linear operations for computational efficiency, the FS component can apply either multilinear or nonlinear transformations. A typical nonlinear transformation is to perform zero-thresholding, i.e., ReLU, on the measurements before multiplying them with the feature synthesis matrices. In applications which require the transmission of the measurements to be analyzed elsewhere, this simple thresholding step can increase the compression rate before transmission by sparsifying the encoded signal and discarding the sign bits. While nonlinearity is often considered beneficial for neural networks, adding the thresholding step described above further restricts the information retained in a limited number of measurements and can thus adversely affect the inference system. In the Experiments Section, we provide an empirical analysis of the effect of nonlinearity on the inference tasks at different measurement rates. Here we should note that while our FS component resembles the reprojection step in the vector-based framework [13], our FS and CS components have different weights, and the dimensionality of the tensor feature produced by the FS component is task-dependent and is not constrained to that of the original signal.

  • Task-specific Neural Network: from the tensor representation produced by the FS step, a neural network with a task-dependent architecture is built on top to generate the regression or classification outputs. For example, when analyzing visual data, this network can be a Convolutional Neural Network (CNN) in the case of static images or a Convolutional Recurrent Neural Network in the case of videos. In CS applications that involve distributed arrays of sensors that continuously collect data, architectures designed for time-series analysis, such as Long Short-Term Memory networks, should be considered. Here we should note that the size of the tensor feature is also task-dependent and should match the neural network component. For example, in an object detection and localization task, it is desirable to keep the spatial aspect ratio of the feature similar to that of the original signal to allow precise localization.

III-C Optimization

Our: memory $\mathcal{O}\big(\sum_{n=1}^{N} I_n m_n\big)$, computation $\mathcal{O}\big(\sum_{n=1}^{N} (\prod_{k \le n} m_k)(\prod_{k \ge n} I_k) + \sum_{n=1}^{N} (\prod_{k \le n} I_k)(\prod_{k \ge n} m_k)\big)$
Vector [13]: memory $\mathcal{O}\big(\prod_{n=1}^{N} I_n \prod_{n=1}^{N} m_n\big)$, computation $\mathcal{O}\big(\prod_{n=1}^{N} I_n \prod_{n=1}^{N} m_n\big)$
TABLE I: Complexity of the proposed MCL framework and the vector-based framework [13]

In our proposed MCL framework, we aim to optimize all three components, i.e., the CS, FS, and task-specific neural network components, with respect to the inference task. A simple and straightforward approach is to consider all components in this framework as a single computation graph, then randomly initialize the weights according to some popular initialization scheme [49, 50] and perform stochastic gradient descent on this graph with respect to the loss function defined by the learning task. However, this approach does not take into account any existing domain knowledge that we have of each component.

As mentioned in Section III-A, with the assumption of the existence of a tensor subspace and the factorization in Eq. (11), the sensing matrices in the CS component can be initialized as the pseudo-inverses of the factor matrices to obtain initial measurements that are discriminative or meaningful. Several algorithms have been proposed to learn the factorization in Eq. (11) with respect to different criteria, such as the multi-class discriminant [25], class-specific discriminant [26], max-margin [51], or Tucker Decomposition with a non-negativity constraint [52].

In a general setting, we propose to apply Higher Order Singular Value Decomposition (HOSVD) [40] and initialize the factor matrices with the left singular vectors that correspond to the largest singular values in each mode. The sensing matrices are then adjusted together with the other components during the stochastic optimization process. This initialization scheme resembles the one proposed for the vector-based CL framework, which utilizes Principal Component Analysis (PCA). In the general case where one has no prior knowledge of the structure of the data, a transformation that retains the most energy in the signal, such as PCA or HOSVD, is a popular choice when reducing the dimensionality of the signal. While for higher-order data HOSVD only provides a quasi-optimal solution for data reconstruction in the least-squares sense [53], since our objective is to make inferences, this initialization scheme works well, as indicated in our Experiments Section.
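A sketch of this HOSVD-based initialization follows; it is our own minimal version, computing only the per-mode leading left singular vectors of a single tensor (in practice one would decompose the stacked training data):

```python
import numpy as np

def hosvd_factors(X, ranks):
    """Leading left singular vectors of each mode-n unfolding (truncated HOSVD)."""
    factors = []
    for n, r in enumerate(ranks):
        # Mode-n unfolding: mode n becomes the rows, all other modes the columns.
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        # The sensing matrix for mode n can then be initialized as the
        # pseudo-inverse of this factor (here simply its transpose, since
        # the columns are orthonormal).
        factors.append(U[:, :r])
    return factors

X = np.random.randn(16, 16, 3)
Us = hosvd_factors(X, (4, 4, 2))
print([u.shape for u in Us])  # [(16, 4), (16, 4), (3, 2)]
```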

With the aforementioned initialization scheme of the CS component for a general setting, it is natural to also initialize the FS component with the right singular vectors corresponding to the largest singular values in each mode of the training data. With this initialization, during the initial forward steps of stochastic gradient descent, the FS component produces an approximate version of the original signal, and in cases where a classifier pre-trained on the original signal or its approximated version exists, the weights of the neural network component can be initialized with those of the pre-trained classifier. It is worth noting that the reprojection step in the vector-based framework in [13] shares its weights with the sensing matrices, performing inexplicit signal reconstruction, while we use different sensing and feature extraction weights. Since the vector-based framework involves large sensing and reprojection matrices, from the optimization point of view, enforcing shared weights might be essential in their framework to reduce overfitting, as indicated by their empirical results.

After performing the aforementioned initialization steps, all three components in our MCL framework are optimized using the Stochastic Gradient Descent method. It is worth noting that the above initialization scheme for the CS and FS components is proposed for a generic setting and can serve as a good starting point. In cases where certain properties of the tensor subspace or the tensor feature are known to improve the learning task, one might adopt a different initialization strategy for the CS and FS components to induce such properties.

III-D Complexity Analysis

Since the complexity of the neural network component varies with the choice of architecture, we estimate the theoretical complexity of the CS and FS components and compare it with the vector-based framework [13]. Let $I_1 \times \cdots \times I_N$ and $m_1 \times \cdots \times m_N$ denote the dimensionality of the original signal $\mathcal{X}$ and its measurements $\mathcal{Y}$, respectively. In addition, to compare with the vector-based framework, we also assume that the dimensionality of the feature produced by the FS component equals that of the original signal. Thus, $\mathbf{\Phi}_n$ belongs to $\mathbb{R}^{m_n \times I_n}$ and $\mathbf{\Theta}_n$ belongs to $\mathbb{R}^{I_n \times m_n}$ for $n = 1, \dots, N$ in our CS and FS components, while in [13] the sensing matrix and the reconstruction matrix belong to $\mathbb{R}^{M \times I}$ and $\mathbb{R}^{I \times M}$, respectively, with $I = \prod_{n=1}^{N} I_n$ and $M = \prod_{n=1}^{N} m_n$.

It is clear that the memory complexity of the CS and FS components in our MCL framework is $\mathcal{O}\big(\sum_{n=1}^{N} I_n m_n\big)$, while that of the vector-based framework is $\mathcal{O}\big(\prod_{n=1}^{N} I_n \prod_{n=1}^{N} m_n\big)$. To see the huge difference between the two frameworks, consider a 3D MRI volume: the vector-based framework must store a sensing matrix whose size is the product of the total number of voxels and the total number of measurements, whereas our framework stores only one small matrix per mode, yielding savings of several orders of magnitude.

Regarding the computational complexity of our framework, the CS component computes $\mathcal{X} \times_1 \mathbf{\Phi}_1 \times_2 \cdots \times_N \mathbf{\Phi}_N$ with complexity $\mathcal{O}\big(\sum_{n=1}^{N} (\prod_{k=1}^{n} m_k)(\prod_{k=n}^{N} I_k)\big)$, since each mode-$n$ product shrinks one dimension at a time, and the FS component performs the corresponding mode-wise products in the opposite direction with complexity $\mathcal{O}\big(\sum_{n=1}^{N} (\prod_{k=1}^{n} I_k)(\prod_{k=n}^{N} m_k)\big)$. For the vector-based framework, the sensing step computes $\mathbf{\Phi}\,\mathrm{vec}(\mathcal{X})$ and the reprojection step computes the corresponding back-projection, resulting in a total complexity of $\mathcal{O}\big(\prod_{n=1}^{N} I_n \prod_{n=1}^{N} m_n\big)$. With the same 3D MRI example as in the previous paragraph, the computational gap between the two frameworks again spans several orders of magnitude.

Table I summarizes the complexity of the two frameworks. It is worth noting that by taking into account the multi-dimensional structure of the signal, the proposed framework has both memory and computational complexity several orders of magnitude lower than its vector-based counterpart.
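The memory gap can be made concrete with a small calculation. The sizes below are hypothetical, chosen by us for illustration (the exact MRI dimensions used in the paper's example are not recoverable from this copy):

```python
import math

# Hypothetical sizes: a 128x128x128 volume with roughly 10% measurement rate
# (59^3 / 128^3 is about 0.098).
I = (128, 128, 128)
M = (59, 59, 59)

# Multilinear framework: one (m_n x I_n) matrix per mode for CS and one
# (I_n x m_n) matrix per mode for FS.
mem_mcl = 2 * sum(m * i for m, i in zip(M, I))

# Vector-based framework: a single (prod(M) x prod(I)) sensing matrix
# (the reprojection matrix shares its weights).
mem_vec = math.prod(M) * math.prod(I)

print(mem_mcl)             # tens of thousands of parameters
print(mem_vec)             # hundreds of billions of parameters
print(mem_vec // mem_mcl)  # savings factor in the millions
```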

IV Experiments

In this section, we provide a detailed description of our empirical analysis of the proposed MCL framework. We start by describing the datasets and the experimental protocols that were used. In the standard set of experiments, we analyze the performance of MCL in comparison with the vector-based framework proposed in [13]. We further investigate the effect of the different components of our framework in the Ablation Study Subsection.

IV-A Datasets and Experiment Protocol

We have conducted experiments on the object classification and face recognition tasks on the following datasets:

  • CIFAR-10 and CIFAR-100: the CIFAR dataset [54] is a color (RGB) image dataset for evaluating object recognition tasks. The dataset consists of 50,000 images for training and 10,000 images for testing, at a resolution of 32 × 32 pixels. CIFAR-10 refers to the 10-class object recognition task in which each individual image has a single class label coming from 10 different categories. Likewise, CIFAR-100 refers to a more fine-grained classification task, with each image having a label coming from 100 different categories. In our experiments, we randomly set aside part of the training set of CIFAR-10 and CIFAR-100 for validation purposes and trained the algorithms only on the remaining training images.

  • CelebA: CelebA [55] is a large-scale face attributes dataset with more than 200,000 images at different resolutions from more than 10,000 identities. In our experiments, we used a subset of identities in this dataset, divided into training, validation, and testing samples. In order to evaluate the scalability of our proposed framework, we resized the original images to different resolutions, namely 32 × 32, 48 × 48, 64 × 64, and 80 × 80 pixels, which are subsequently denoted as CelebA-32, CelebA-48, CelebA-64, and CelebA-80, respectively.

In our experiments, two types of network architecture were employed for the neural network component: the AllCNN architecture [56] and the ResNet architecture [57]. AllCNN is a simple 9-layer feed-forward architecture which has no max-pooling (pooling is done via convolution with a stride greater than 1) and no fully-connected layer. ResNet is a 110-layer CNN with residual connections. The exact topologies of AllCNN and ResNet in our experiments can be found in our publicly available implementation.

Since all of the datasets contain RGB images, we followed the implementation proposed in [58] for the vector-based framework, which has 3 different sensing matrices, one for each color channel, with the corresponding reprojection matrices enforced to share weights with the sensing matrices. The sensing matrices in MCL were initialized via HOSVD on the training sets, while the sensing matrices in the vector-based framework were initialized via PCA on the training set. Likewise, the bases obtained from HOSVD and PCA were also used to initialize the FS component in our framework and the reprojection matrices in the vector-based framework. In addition, we also trained the neural network component on uncompressed data with respect to the learning tasks and initialized the classifier in each framework with these pre-trained networks' weights. After the initialization step, both frameworks were trained in an end-to-end manner.

All algorithms were trained with the ADAM optimizer [59], with the learning rate changed according to a fixed schedule at predefined epochs; each algorithm was trained for a fixed total number of epochs. A small weight decay coefficient was used to regularize all trainable weights in all experiments. We performed no data preprocessing, except scaling all pixel values to a fixed range. In addition, data augmentation was employed by random flipping on the horizontal axis and by image shifting within a small fraction of the spatial dimensions. In all experiments, the final model weights, which are used to measure the performance on the test sets, are obtained from the epoch with the highest validation accuracy.
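The augmentation step described above can be sketched in numpy as follows; `max_shift_frac` is a placeholder parameter, since the exact shift fraction is not reproduced in this text.

```python
import numpy as np

def shift2d(image, dy, dx):
    """Shift an image by (dy, dx) pixels, zero-padding the exposed border."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, yd = max(0, -dy), max(0, dy)
    xs, xd = max(0, -dx), max(0, dx)
    out[yd:h - ys, xd:w - xs] = image[ys:h - yd, xs:w - xd]
    return out

def augment(image, max_shift_frac, rng):
    """Random horizontal flip plus a random spatial shift.

    max_shift_frac stands in for the (unspecified) fraction of the
    spatial dimensions used by the paper's augmentation.
    """
    if rng.random() < 0.5:
        image = image[:, ::-1]  # flip on the horizontal axis
    h, w = image.shape[:2]
    dy = int(rng.integers(-int(h * max_shift_frac), int(h * max_shift_frac) + 1))
    dx = int(rng.integers(-int(w * max_shift_frac), int(w * max_shift_frac) + 1))
    return shift2d(image, dy, dx)
```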

For each experiment configuration, we performed multiple runs, and the mean and standard deviation of the test accuracy are reported.

IV-B Comparison with the vector-based framework

TABLE II: Different configurations of measurements between the vector-based framework and our framework. *Measurement Rate is calculated with respect to the size of the original signal

In order to compare with the vector-based framework in [13], we performed experiments on 3 datasets: CIFAR-10, CIFAR-100, and CelebA-32. To compare the performances at different measurement rates, we employed three different numbers of measurements for the vector-based framework, which has a different sensing matrix for each color channel. Since we cannot always select the size of the measurements in MCL to match the number of measurements in the vector-based framework exactly, we tried to find measurement configurations that closely match the vector-based ones. In addition, for a given target number of measurements, there can be more than one configuration that yields a similar number of measurements, so for each measurement value in the vector-based framework we evaluated two different measurement shapes. The measurement configurations are summarized in Table II.
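The search for matching configurations can be sketched as a brute-force enumeration over candidate measurement shapes; the helper names below are illustrative, not the paper's code.

```python
from itertools import product

import numpy as np

def measurement_rate(m_dims, n_dims):
    """Fraction of the original signal retained: prod(m) / prod(n)."""
    return float(np.prod(m_dims) / np.prod(n_dims))

def closest_configs(target, n_dims, k=3):
    """Enumerate measurement shapes (m1, m2, m3) with m_k <= n_k whose
    total element count is closest to a target number of measurements."""
    candidates = product(*(range(1, n + 1) for n in n_dims))
    return sorted(candidates, key=lambda m: abs(int(np.prod(m)) - target))[:k]
```

For a 32×32×3 signal and a hypothetical target of 784 measurements, the enumeration returns shapes such as (28, 28, 1) that match the target exactly.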

In order to effectively compare the CS and FS components in MCL with those in [13], two different neural network architectures with different capacities have been used. Tables III and IV show the accuracy on the test set with the AllCNN and ResNet architectures, respectively. The second row of each table shows the performance of the base classifier on the uncompressed data, which we refer to as the Oracle.

It is clear that our proposed framework outperforms the vector-based framework at all compression rates and on all datasets with both the AllCNN and ResNet architectures, except for the CIFAR-100 dataset at the lowest measurement rate. The performance gaps between the proposed MCL framework and the vector-based one are large on the CIFAR datasets at the two lower measurement rates. In the case of the CelebA-32 dataset at the lowest measurement rate, the inference systems learned by our proposed framework even slightly outperform the Oracle setting for both the AllCNN and ResNet architectures.

Although the capacities of the AllCNN and ResNet architectures are different, their performances on the uncompressed data are roughly similar. Regarding the effect of the two different base classifiers in the two Compressive Learning pipelines, the optimal configurations of our framework at each measurement rate are consistent between the two classifiers, i.e., the bold patterns in Tables III and IV are similar. When switching from AllCNN to ResNet, the vector-based framework exhibits a performance drop at the highest measurement rate but improves at the lower rates. For our framework, when switching from AllCNN to ResNet, the test accuracies stay approximately the same or improve.

Table V shows the empirical complexity of both frameworks with respect to the different measurement configurations, excluding the base classifiers. Since all three datasets employed in this experiment have the same input size, and the size of the feature tensor in MCL was set equal to the original input size, the complexities of the CS and FS components are the same across the three datasets. It is clear that our proposed MCL framework has much lower memory and computational complexity than its vector-based counterpart: even when operating at the highest measurement rate, the CS and FS components of MCL require far fewer parameters and FLOPs than those of the vector-based framework operating at the lowest measurement rate. Interestingly, the optimal configuration at each measurement rate obtained in our framework also has complexity lower than or similar to that of the alternative configuration.
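A back-of-the-envelope version of this complexity comparison can be sketched as follows, under the simplifying assumptions noted in the comments (a weight-shared reprojection in the vector case, separate FS matrices in MCL, and 2 FLOPs per multiply-add); the exact counts in Table V may differ.

```python
import numpy as np

def vector_complexity(n_dims, n_measurements):
    """Dense sensing of the vectorized signal plus a weight-shared
    reprojection: one (M x N) matrix, applied twice."""
    n = int(np.prod(n_dims))
    params = n_measurements * n
    flops = 2 * 2 * n_measurements * n  # sense + reproject, 2 FLOPs per MAC
    return params, flops

def multilinear_complexity(n_dims, m_dims, separate_fs=True):
    """Mode-wise sensing matrices Phi_k of shape (m_k x n_k); the FS
    component optionally holds its own separate set of matrices."""
    params = sum(m * n for m, n in zip(m_dims, n_dims))
    flops = 0
    shape = list(n_dims)
    for k, (m, n) in enumerate(zip(m_dims, n_dims)):
        others = int(np.prod(shape)) // shape[k]
        flops += 2 * m * n * others  # mode-k product, 2 FLOPs per MAC
        shape[k] = m
    if separate_fs:
        params *= 2  # FS holds its own matrices
    flops *= 2       # feature synthesis roughly mirrors the sensing cost
    return params, flops
```

For a 32×32×3 input, a dense map to 784 measurements already needs over 2.4 million weights, whereas mode-wise 28×28×1 sensing (with separate FS matrices) needs only a few thousand.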

In Figure 2, we visualize the features obtained from the reprojection step in the vector-based framework and from the FS component in the proposed framework, respectively. It is worth noting that the sensing matrices and the reprojection matrices (in the vector-based framework) or the FS component (in the MCL framework) were initialized with PCA and HOSVD, respectively. In addition, the base network classifiers were initialized with the ones trained on the original data. Thus, it is intuitive to expect the features obtained from both frameworks to be visually interpretable to humans, even though no explicit reconstruction objective was incorporated during the training phase. Indeed, from Figure 2, we can see that with the highest number of measurements, the feature images obtained from both frameworks look very similar to the original images. In particular, the images synthesized by the vector-based framework look visually closer to the original images than those obtained from our MCL framework. Since the sensing and reprojection steps in the vector-based framework share the same weight matrices during the optimization procedure, the whole pipeline is more constrained toward reconstructing the images at the reprojection step.

When the number of measurements drops to a small fraction of the original signal, the reverse happens: the feature images obtained from our framework retain more facial features than those from the vector-based framework. This is due to the fact that most of the information in facial images in particular, and in natural images in general, lies in the spatial dimensions, i.e., height and width. Besides, when the dimension of the third mode of the measurement is set to 1, after the optimization procedure our proposed framework effectively discards the color information, which is less relevant to the face recognition task, and retains more lightness detail, thus performing better than the configurations with a larger third-mode dimension.

With the above observations from the empirical analysis, it is clear that the structure-preserving Compressive Sensing and Feature Synthesis components in our proposed MCL framework can better capture the essential information in the multi-dimensional signal for the learning tasks, compared with the vector-based framework.

TABLE III: Test accuracy with the AllCNN architecture as the base classifier

TABLE IV: Test accuracy with the ResNet architecture as the base classifier

TABLE V: Complexity of the proposed framework and the vector-based counterpart, excluding the base classifier component
Fig. 2: Illustration of the feature images (inputs to ResNet) synthesized by the proposed framework and the vector-based counterpart. The original images come from the test set of CelebA-32.

IV-C Ablation Study

In this subsection, we provide an empirical analysis of the effects of different components in the MCL framework. These factors include: the effect of the popular nonlinear thresholding step discussed in Section III.B; the choice of shared versus separate weights in the CS and FS components; the initialization step discussed in Section III.C; and the scalability of the proposed framework when the original dimensionalities of the signal increase. Since the total number of experiment settings when combining all of the aforementioned factors is huge, and results involving multiple factors are difficult to interpret, we analyze these factors in a progressive manner.

IV-C1 Linearity versus Nonlinearity and Shared versus Separate Weights

Firstly, the choice of linearity or nonlinearity and the choice of shared or separate weights in the CS and FS components are analyzed together, since the two factors are closely related. In this setting, the CS and FS components are initialized by HOSVD decomposition as described in Section III.C. The neural network classifier has the AllCNN architecture, with the weights initialized from the corresponding pre-trained network on the original data. Table VI shows the test accuracies on CIFAR-10, CIFAR-100, and CelebA-32 at different numbers of measurements. It is clear that most of the highest test accuracies are obtained without the thresholding step and with separate weights in the CS and FS components, i.e., most bold-face numbers appear in the lower-left quarter of Table VI. Comparing the linear and nonlinear options, it is obvious that the nonlinear thresholding adversely affects the performance, especially when the number of measurements decreases. The reason might be that applying the thresholding to the compressed measurements restricts the information to be represented in the positive subspace only, further reducing the representation power of the compressed measurements when only a limited number of measurements is allowed.
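This intuition can be checked with a toy numpy experiment (an illustrative sketch, not the paper's setup: the projection here is random and orthonormal, and the thresholding is assumed to be ReLU-like). With the same linear decoder, thresholded measurements recover strictly less of the signal, because the negative half of every measurement is discarded.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_samples = 64, 16, 1000
x = rng.normal(size=(n_samples, n))

# Orthonormal compressive projection: rows of phi span an m-dim subspace
phi = np.linalg.qr(rng.normal(size=(n, m)))[0].T  # shape (m, n)
y = x @ phi.T                                     # measurements

# Linear decoding from raw measurements vs. from thresholded measurements
x_lin = y @ phi                  # orthogonal projection of x onto the subspace
x_thr = np.maximum(y, 0) @ phi   # ReLU-like thresholding applied first

err_lin = np.mean((x - x_lin) ** 2)
err_thr = np.mean((x - x_thr) ** 2)
# Thresholding zeroes the negative half of each measurement, so the
# same decoder incurs strictly higher reconstruction error.
assert err_thr > err_lin
```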

In the linear setting, while the performance differences between shared and separate weights in some configurations are small, we should note that allowing non-shared weights can be beneficial in cases where we know that certain features should be synthesized by the FS component in order to make inferences.

IV-C2 Effect of the Initialization Step

Based on the observations from the above analysis of linearity and separate weights, we investigated the effect of the initialization step discussed in Section III.C. All setups were trained with a multilinear FS component having weights separate from the CS component. From Table VII, we can easily observe that initializing the CS and FS components with HOSVD increases the performance of the learning systems significantly. When the CS and FS components are initialized with HOSVD, utilizing a pre-trained network further improves the inference performance, especially in the low measurement rate regime. Thus, the initialization strategy proposed in Section III.C is beneficial in a general setting for the learning tasks.

TABLE VI: Test accuracy with respect to the choice of linearity or nonlinearity in conjunction with the choice of shared or separate weights in the CS and FS components. The bold numbers denote the best test accuracy (among the 4 combinations of LINEARITY versus NONLINEARITY and SHARED versus SEPARATE) for the same dataset with the same configuration
TABLE VII: Test accuracy with respect to the initialization of the CS & FS components and the base classifier (AllCNN). The bold numbers denote the best test accuracy (among the 4 combinations of PRECOMPUTE CLASSIFIER versus RANDOM CLASSIFIER and PRECOMPUTE CS & FS versus RANDOM CS & FS) for the same dataset with the same configuration

IV-C3 Scalability

Finally, the scalability of the proposed framework is validated on different resolutions of the CelebA dataset. All of the previous experiments were conducted on the CelebA-32 dataset, i.e., under the assumption that the original signal contains only 32×32×3 elements. To investigate scalability, we pose the following question: if the original dimensions of the signal are higher, can we still learn to recognize facial images at feasible cost with the same numbers of measurements presented in Table II? To answer this question, we trained our framework on CelebA-32, CelebA-48, CelebA-64, and CelebA-80 and recorded the test accuracies, the number of parameters, and the number of FLOPs at each number of measurements; the results are shown in Table VIII. It is clear that at each measurement configuration, when the original signal resolution increases, the measurement rate drops at a similar rate, but without any adverse effect on the inference performance. In particular, looking at the last column of Table VIII, even at a very low sampling rate the proposed framework achieves accuracy only slightly lower than that of the base classifier trained on the original data. Here we should note that most of the images in the CelebA dataset have resolution higher than 80×80 pixels; therefore, the 4 versions of CelebA (CelebA-32, CelebA-48, CelebA-64, CelebA-80) in our experiments indeed contain increasing levels of data fidelity. From the performance statistics, we can observe that the performance of our framework is characterized by the number of measurements, rather than by the measurement or compression rates.

Due to memory limitations when training the vector-based framework at higher resolutions, we could not perform the same set of experiments for the vector-based framework. However, to compare the scalability of the two frameworks in terms of computation and memory, we measured the number of FLOPs and parameters in the vector-based framework, excluding the base classifier, and visualize the results in Figure 3. It is worth noting that the y-axis is in log scale: as the dimensions of the original signal increase, the complexity of the vector-based framework grows by orders of magnitude, while our proposed MCL framework scales favorably in both memory and computation.
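The contrast in growth rates can be illustrated with a short sketch: holding a hypothetical measurement shape fixed while the CelebA resolution grows, the vector-based parameter count grows with the square of the image side, while the multilinear count grows only linearly. The shape (28, 28, 1) below is an assumed example, not necessarily one of the paper's configurations.

```python
import numpy as np

# Hypothetical fixed measurement shape while the CelebA resolution grows
m_dims = (28, 28, 1)
M = int(np.prod(m_dims))

for side in (32, 48, 64, 80):
    n_dims = (side, side, 3)
    p_vec = M * int(np.prod(n_dims))                    # grows with side**2
    p_mcl = sum(m * n for m, n in zip(m_dims, n_dims))  # grows with side
    print(f"{side:3d}x{side}  vector: {p_vec:>9d}  MCL: {p_mcl:>5d}")
```

Going from 32×32 to 80×80, the vector-based count grows by a factor of 6.25, while the multilinear count grows by less than a factor of 2.5.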

TABLE VIII: Test Performance & Complexity of the proposed framework at different resolutions of the original CelebA dataset, with AllCNN as the base classifier
Fig. 3: #FLOP and #PARAMETER versus the original dimensionalities of the signal, measured in the proposed framework and the vector-based framework, excluding the base classifier. The x-axis represents the original dimension of the input signal. The y-axis on the first row represents the number of FLOPs in log scale while the y-axis on the second row represents the number of parameters

V Conclusions

In this paper, we proposed Multilinear Compressive Learning, an efficient framework for Compressive Learning that operates on multi-dimensional signals. The proposed framework takes into account the tensorial nature of multi-dimensional signals and performs the compressive sensing as well as the feature extraction steps along the different modes of the original data, and is thus able to retain and synthesize the essential information on a multilinear subspace for the learning task. We showed theoretically and empirically that the proposed framework outperforms its vector-based counterpart in both inference performance and computational efficiency. An extensive ablation study has been conducted to investigate the effects of the different components of the proposed framework, giving insights into the importance of different design choices.


  • [1] E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE signal processing magazine, vol. 25, no. 2, pp. 21–30, 2008.
  • [2] E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, vol. 59, no. 8, pp. 1207–1223, 2006.
  • [3] D. L. Donoho et al., “Compressed sensing,” IEEE Transactions on information theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [4] P. Mohassel and Y. Zhang, “Secureml: A system for scalable privacy-preserving machine learning,” in 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38, IEEE, 2017.
  • [5] E. Hesamifard, H. Takabi, and M. Ghasemi, “Cryptodl: Deep neural networks over encrypted data,” arXiv preprint arXiv:1711.05189, 2017.
  • [6] R. Calderbank and S. Jafarpour, “Finding needles in compressed haystacks,” in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3441–3444, IEEE, 2012.
  • [7] M. A. Davenport, M. F. Duarte, M. B. Wakin, J. N. Laska, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “The smashed filter for compressive classification and target recognition,” in Computational Imaging V, vol. 6498, p. 64980H, International Society for Optics and Photonics, 2007.
  • [8] M. A. Davenport, P. Boufounos, M. B. Wakin, R. G. Baraniuk, et al., “Signal processing with compressive measurements.,” J. Sel. Topics Signal Processing, vol. 4, no. 2, pp. 445–460, 2010.
  • [9] H. Reboredo, F. Renna, R. Calderbank, and M. R. Rodrigues, “Compressive classification,” in 2013 IEEE International Symposium on Information Theory, pp. 674–678, IEEE, 2013.
  • [10] P. K. Baheti and M. A. Neifeld, “Adaptive feature-specific imaging: a face recognition example,” Applied optics, vol. 47, no. 10, pp. B21–B31, 2008.
  • [11] H. Reboredo, F. Renna, R. Calderbank, and M. R. Rodrigues, “Projections designs for compressive classification,” in 2013 IEEE Global Conference on Signal and Information Processing, pp. 1029–1032, IEEE, 2013.
  • [12] S. Lohit, K. Kulkarni, P. Turaga, J. Wang, and A. C. Sankaranarayanan, “Reconstruction-free inference on compressive measurements,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16–24, 2015.
  • [13] A. Adler, M. Elad, and M. Zibulevsky, “Compressed learning: A deep neural network approach,” arXiv preprint arXiv:1610.09615, 2016.
  • [14] S. Lohit, K. Kulkarni, and P. Turaga, “Direct inference on compressive measurements using convolutional neural networks,” in 2016 IEEE International Conference on Image Processing (ICIP), pp. 1913–1917, IEEE, 2016.
  • [15] D. Nion and N. D. Sidiropoulos, “Tensor algebra and multidimensional harmonic retrieval in signal processing for mimo radar,” IEEE Transactions on Signal Processing, vol. 58, no. 11, pp. 5693–5705, 2010.
  • [16] F. Miwakeichi, E. Martınez-Montes, P. A. Valdés-Sosa, N. Nishiyama, H. Mizuhara, and Y. Yamaguchi, “Decomposing eeg data into space–time–frequency components using parallel factor analysis,” NeuroImage, vol. 22, no. 3, pp. 1035–1045, 2004.
  • [17] D. M. Dunlavy, T. G. Kolda, and W. P. Kegelmeyer, “Multilinear algebra for analyzing data with multiple linkages,” in Graph algorithms in the language of linear algebra, pp. 85–114, SIAM, 2011.
  • [18] D. T. Tran, M. Magris, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Tensor representation in high-frequency financial data for price change prediction,” in 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–7, IEEE, 2017.
  • [19] D. T. Tran, A. Iosifidis, and M. Gabbouj, “Improving efficiency in convolutional neural networks with multilinear filters,” Neural Networks, vol. 105, pp. 328–339, 2018.
  • [20] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, “Tensor decompositions for signal processing applications: From two-way to multiway component analysis,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145–163, 2015.
  • [21] F. Malgouyres and J. Landsberg, “Multilinear compressive sensing and an application to convolutional linear networks,” 2018.
  • [22] D. T. Tran, A. Iosifidis, J. Kanniainen, and M. Gabbouj, “Temporal attention-augmented bilinear network for financial time-series data analysis,” IEEE transactions on neural networks and learning systems, 2018.
  • [23] T. L. Youd, C. M. Hansen, and S. F. Bartlett, “Revised multilinear regression equations for prediction of lateral spread displacement,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 128, no. 12, pp. 1007–1017, 2002.
  • [24] Q. Zhao, C. F. Caiafa, D. P. Mandic, Z. C. Chao, Y. Nagasaka, N. Fujii, L. Zhang, and A. Cichocki, “Higher order partial least squares (hopls): a generalized multilinear regression method,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 7, pp. 1660–1673, 2013.
  • [25] Q. Li and D. Schonfeld, “Multilinear discriminant analysis for higher-order tensor data classification,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 12, pp. 2524–2537, 2014.
  • [26] D. T. Tran, M. Gabbouj, and A. Iosifidis, “Multilinear class-specific discriminant analysis,” Pattern Recognition Letters, vol. 100, pp. 131–136, 2017.
  • [27] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in Advances in neural information processing systems, pp. 1269–1277, 2014.
  • [28] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” arXiv preprint arXiv:1405.3866, 2014.
  • [29] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, “Speeding-up convolutional neural networks using fine-tuned cp-decomposition,” arXiv preprint arXiv:1412.6553, 2014.
  • [30] Y. Yang, D. Krompass, and V. Tresp, “Tensor-train recurrent neural networks for video classification,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3891–3900, JMLR. org, 2017.
  • [31] C. F. Caiafa and A. Cichocki, “Multidimensional compressed sensing and their applications,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 3, no. 6, pp. 355–380, 2013.
  • [32] Y. Yu, J. Jin, F. Liu, and S. Crozier, “Multidimensional compressed sensing mri using tensor decomposition-based sparsifying transform,” PloS one, vol. 9, no. 6, p. e98441, 2014.
  • [33] R. Robucci, L. K. Chiu, J. Gray, J. Romberg, P. Hasler, and D. Anderson, “Compressive sensing on a cmos separable transform image sensor,” in 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 5125–5128, IEEE, 2008.
  • [34] M. Aharon, M. Elad, A. Bruckstein, et al., “K-svd: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on signal processing, vol. 54, no. 11, p. 4311, 2006.
  • [35] D. L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization,” Proceedings of the National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, 2003.
  • [36] J. A. Tropp and S. J. Wright, “Computational methods for sparse solution of linear inverse problems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.
  • [37] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM review, vol. 43, no. 1, pp. 129–159, 2001.
  • [38] J. A. Tropp, “Greed is good: Algorithmic results for sparse approximation,” IEEE Transactions on Information theory, vol. 50, no. 10, pp. 2231–2242, 2004.
  • [39] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on information theory, vol. 53, no. 12, pp. 4655–4666, 2007.
  • [40] L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” SIAM journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
  • [41] M. F. Duarte and R. G. Baraniuk, “Kronecker compressive sensing,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 494–504, 2012.
  • [42] C. F. Caiafa and A. Cichocki, “Computing sparse representations of multidimensional signals using kronecker bases,” Neural computation, vol. 25, no. 1, pp. 186–220, 2013.
  • [43] R. G. Baraniuk and M. B. Wakin, “Random projections of smooth manifolds,” Foundations of computational mathematics, vol. 9, no. 1, pp. 51–77, 2009.
  • [44] K. Kulkarni and P. Turaga, “Recurrence textures for human activity recognition from compressive cameras,” in 2012 19th IEEE International Conference on Image Processing, pp. 1417–1420, IEEE, 2012.
  • [45] K. Kulkarni and P. Turaga, “Reconstruction-free action inference from compressive imagers,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 4, pp. 772–784, 2016.
  • [46] B. Hollis, S. Patterson, and J. Trinkle, “Compressed learning for tactile object recognition,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1616–1623, 2018.
  • [47] A. Değerli, S. Aslan, M. Yamac, B. Sankur, and M. Gabbouj, “Compressively sensed image recognition,” in 2018 7th European Workshop on Visual Information Processing (EUVIP), pp. 1–6, IEEE, 2018.
  • [48] Y. Xu and K. F. Kelly, “Compressed domain image classification using a multi-rate neural network,” arXiv preprint arXiv:1901.09983, 2019.
  • [49] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256, 2010.
  • [50] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European conference on computer vision, pp. 630–645, Springer, 2016.
  • [51] F. Wu, X. Tan, Y. Yang, D. Tao, S. Tang, and Y. Zhuang, “Supervised nonnegative tensor factorization with maximum-margin constraint,” in Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013.
  • [52] Y.-D. Kim and S. Choi, “Nonnegative tucker decomposition,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, IEEE, 2007.
  • [53] L. Grasedyck, D. Kressner, and C. Tobler, “A literature survey of low-rank tensor approximation techniques,” GAMM-Mitteilungen, vol. 36, no. 1, pp. 53–78, 2013.
  • [54] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” tech. rep., Citeseer, 2009.
  • [55] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of International Conference on Computer Vision (ICCV), 2015.
  • [56] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.
  • [57] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.
  • [58] E. Zisselman, A. Adler, and M. Elad, “Compressed learning for image classification: A deep neural network approach,” Processing, Analyzing and Learning of Images, Shapes, and Forms, vol. 19, p. 1, 2018.
  • [59] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.