Deep learning features, which are extracted with deep neural networks learned from abundant training data, differ essentially from handcrafted features such as the Histogram of Oriented Gradients (HOG) and the Scale-Invariant Feature Transform (SIFT). With the unprecedented success of deep learning in various computer vision tasks as well as the development of network infrastructure, there is an increasing demand to study deep learning feature compression in the Analysis-then-Compress (ATC) paradigm. In contrast with the Compress-then-Analyze (CTA) paradigm, where videos are first acquired at front-end sensors and then compressed and transmitted to the cloud for analysis, ATC performs feature extraction directly at the front-end, leading to a much more compact representation of videos by transmitting features instead of textures. In view of this advantage, the ATC paradigm with both handcrafted and deep learning features has been widely studied to address the challenges of video big data in various application scenarios.
In the literature, numerous algorithms have been proposed for the compact representation of both handcrafted and deep features. Hashing based models such as DBH and vector quantization based models, such as product quantization (PQ) and optimized product quantization (OPQ), target the compact representation of handcrafted features. Moreover, binary descriptors such as BRIEF and USB have been proposed for high-efficiency Hamming distance computation. Regarding deep learning features, Ding et al. applied the philosophy of video coding to compact deep learning feature representation. The deep hashing network (DHN) combined supervised learning with hash compression to improve image feature representation. Besides, Chen et al. proposed intermediate deep feature compression towards intelligent sensing.
The promising characteristics of the ATC paradigm motivate the standardization of compact feature representation. In particular, the Compact Descriptors for Visual Search (CDVS) and Compact Descriptors for Video Analysis (CDVA) standards, completed by the Moving Picture Experts Group (MPEG), define standardized bitstream syntax such that interoperability is enabled in image/video retrieval applications. In 2019, MPEG initiated the standardization of Video Coding for Machines (VCM) [13, 14], aiming to achieve high-accuracy, low-latency, object-oriented analysis based on compact video representation for machine vision. VCM relies on the fundamental development of feature compression, and could establish the relationship between compact feature representation and video compression in terms of both machine vision and human perception, as features can ultimately be utilized in various machine vision tasks.
In this work, motivated by the recent progress on deep learning based video coding, we attempt to further compress the raw deep learning features based on the representation and learning capability of deep neural networks. The contributions of this paper are as follows:
We propose an end-to-end coding scheme that compactly represents deep learning features as a latent code, in an effort to achieve optimal feature-in-feature representation based on rate-distortion optimization.
We propose a compact feature enhancement method which further improves the flexibility of feature coding. The proposed scheme is built upon a teacher-student enhancement module at the latent code level, and enables adaptive switching between high-complexity decoding and high bit-rate representation.
The proposed principled framework is implemented based on facial features, and better coding performance in terms of rate-accuracy is demonstrated compared with popular feature compression schemes.
2 The Framework of Feature Compression
The architecture of the proposed scheme is shown in Fig. 1. More specifically, the deep learning feature extracted from a raw image x with the pre-trained FaceNet model (https://github.com/davidsandberg/facenet) is denoted as f = F(x). Subsequently, the raw feature f can be compressed with an end-to-end trained deep neural network, and for different bit rates different encoders and decoders are learned to adapt to the characteristics of the rate-distortion function. As such, the compact representations of f, denoted as c_L and c_H, indicate the compact latent codes under the low and high bit-rate scenarios, respectively. Moreover, the reconstructed features f̂_L and f̂_H can be obtained with the decoders D_L and D_H, and this process can be formulated as follows:

    c_L = E_L(f), f̂_L = D_L(c_L);    c_H = E_H(f), f̂_H = D_H(c_H).

Furthermore, the low bit-rate code c_L can be further enhanced by transferring it towards the high bit-rate code c_H as the target based on teacher-student learning. As such, the output of the enhancement module T can be well decoded with the decoder D_H learned in the high bit-rate coding scenario. This process is expressed as follows:

    f̂_E = D_H(T(c_L)).

The feature reconstructed from the enhanced latent code reveals better fidelity than the reconstruction at the low bit rate, at the cost of a more complex decoding process, since both enhancement and decoding are performed sequentially in this scenario. In this manner, the flexibility of the feature codec is significantly improved in an effort to ensure optimal rate-accuracy performance.
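The dataflow above can be sketched as follows. This is a minimal numpy illustration: the random linear maps are hypothetical stand-ins for the trained encoder/decoder networks and the enhancement module, which in the actual scheme are learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, K_LOW, K_HIGH = 128, 16, 32   # feature and latent-code sizes (illustrative)

# Random linear maps stand in for the trained encoders; decoders are pseudo-inverses.
E_low, E_high = (rng.standard_normal((DIM, k)) / np.sqrt(DIM) for k in (K_LOW, K_HIGH))
D_low, D_high = np.linalg.pinv(E_low), np.linalg.pinv(E_high)
# Hypothetical enhancement map T: low-rate code -> high-rate code.
T = np.linalg.pinv(E_low) @ E_high

f = rng.standard_normal(DIM)           # raw deep feature f = F(x)

c_low, c_high = f @ E_low, f @ E_high  # compact latent codes
f_hat_low  = c_low @ D_low             # low bit-rate reconstruction
f_hat_high = c_high @ D_high           # high bit-rate reconstruction
f_hat_enh  = (c_low @ T) @ D_high      # enhanced code decoded with the high-rate decoder
```

The key design point is that the enhanced path reuses the high bit-rate decoder, so only the light-weight transfer map needs to be added when switching modes.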
3 End-to-End Feature Compression with Teacher-Student Enhancement
3.1 End-to-End Feature Compression
To begin with, we extract deep learning features from the pre-trained FaceNet model and investigate their distributions for end-to-end compression. The distributions of several dimensions of FaceNet features extracted from the Labeled Faces in the Wild (LFW) and VGG-Face2 datasets are shown in Fig. 2. The distributions are Gaussian-like over similar ranges, with expectations all close to zero, indicating that the features well match the characteristics of generalized divisive normalization (GDN) in terms of Gaussianizing densities, as illustrated in . Motivated by the recent development of end-to-end image compression, an end-to-end model imposing the ℓ1 norm as a sparsity constraint is trained for feature compression.
Neural networks E and D are adopted as the encoder and decoder respectively, and an arithmetic coding engine is applied to generate the final bitstream based on the latent code c. The loss function is a linear combination of the mean square error (MSE) between the original feature f and the reconstructed feature f̂, and the ℓ1 norm of the compact representation c, which indicates the bit rate of the bitstream. The balance between feature rate and distortion is governed by the Lagrangian multiplier λ. The whole process is formulated as follows:

    L = ||f − f̂||² + λ||c||₁.
Besides, the compact representation is clipped element-wise by a threshold such that the expense of representing the feature is further reduced. It is worth mentioning that random noise is applied during training to simulate the distortion of the rounding operation on the latent code. Compared with hard quantization, the random noise also strengthens the decoder's adaptation for feature reconstruction, which is beneficial for the latent-code-level enhancement.
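The rate-distortion objective and the training-time noise can be illustrated with a small numpy sketch; the Lagrangian multiplier `lam` and the clipping threshold `tau` are illustrative values, not those used in the paper.

```python
import numpy as np

def rd_loss(f, f_hat, code, lam=0.01):
    """Linear combination of MSE distortion and an L1-norm rate surrogate."""
    return np.mean((f - f_hat) ** 2) + lam * np.sum(np.abs(code))

rng = np.random.default_rng(1)
code = 3.0 * rng.standard_normal(32)           # latent code before quantization
# Training: additive uniform noise in [-0.5, 0.5] simulates rounding while
# keeping the pipeline differentiable.
train_code = code + rng.uniform(-0.5, 0.5, size=code.shape)
# Inference: actual rounding, then element-wise clipping by a threshold tau.
tau = 4.0
test_code = np.clip(np.round(code), -tau, tau)
```

The noise has the same worst-case magnitude as rounding error, so the decoder sees comparable perturbations at training and test time.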
3.2 Teacher-Student Enhancement at Latent Code level
Based on the end-to-end feature compression, a teacher-student enhancement model is applied at the latent code level to further improve the coding performance and flexibility. More specifically, the latent code generated for low bit-rate coding is transferred to its high bit-rate representation based on the correspondence between the two domains. A straightforward approach is adopted here for the teacher-student enhancement, leading to a feasible solution that enhances the adaptively generated latent code with domain-specific knowledge based on a learned neural network.
The structure of the enhancement model is two fully-connected layers with GDN, as shown in Fig. 1. Range normalization is adopted as data pre-processing, dividing the latent code by the clipping threshold used in training the end-to-end feature compression. As such, the loss function for learning the network T that transfers the low bit-rate code c_L to the high bit-rate code c_H is defined as follows:

    L_T = ||T(c_L) − c_H||².
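A minimal sketch of this transfer objective is given below; a closed-form least-squares fit stands in for training the two fully-connected layers with GDN, and the paired codes are synthetic.

```python
import numpy as np

def transfer_loss(c_low, c_high, enhance, tau):
    """MSE between enhanced (range-normalized) low-rate codes and high-rate targets."""
    return np.mean((enhance(c_low / tau) - c_high) ** 2)

rng = np.random.default_rng(2)
tau = 4.0                                         # clipping threshold used for normalization
C_low = rng.uniform(-tau, tau, size=(1000, 16))   # low bit-rate latent codes
W_true = rng.standard_normal((16, 32))
C_high = (C_low / tau) @ W_true                   # paired high bit-rate codes (synthetic)

# Least-squares fit stands in for training the two FC + GDN layers.
W_fit, *_ = np.linalg.lstsq(C_low / tau, C_high, rcond=None)
loss = transfer_loss(C_low, C_high, lambda z: z @ W_fit, tau)
```

Since the synthetic correspondence is exactly linear here, the fitted transfer drives the loss to (numerically) zero; the real mapping between codes is nonlinear, hence the GDN layers.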
4 Experimental Results
We conduct experiments to validate the effectiveness of the proposed models in terms of rate-accuracy. The training data are from VGG-Face2 with over 3.3 million human face images covering 9131 subjects, each with over 360 images on average. Correspondingly, the testing data are from the popular face verification dataset, Labeled Faces in the Wild (LFW).
In order to verify the effectiveness of the proposed models, we adopt the scalar quantization (SQ) algorithm used in  for comparison. Moreover, on top of this strategy, we introduce a deep learning based feature enhancement model (SQ-E) for further comparison. In particular, the FaceNet feature is a 128-dimensional vector with values in the range [−1, 1], and the SQ based compression is conducted with the following procedure:

    q = round(2^QP · f),    f̂ = q / 2^QP,

where f̂ and f are the reconstructed and original features respectively, and QP is the quantization parameter. q is the quantized feature, which is further subjected to entropy coding to generate the feature bitstream.
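The SQ baseline can be sketched in a few lines of numpy; the exact quantizer mapping (uniform step of 1 / 2^QP) is an assumed form for illustration.

```python
import numpy as np

def sq_compress(f, qp):
    """Uniform scalar quantization with step size 1 / 2**qp (assumed SQ form)."""
    q = np.round(f * 2 ** qp).astype(int)   # quantized symbols, then entropy coded
    f_hat = q / 2 ** qp                     # reconstructed feature
    return q, f_hat

f = np.array([-1.0, -0.33, 0.0, 0.5, 0.99])  # FaceNet features lie in [-1, 1]
q, f_hat = sq_compress(f, qp=3)
# Reconstruction error is bounded by half the step size, i.e., 1 / 2**(qp + 1).
```

Raising QP halves the quantization error per increment while roughly adding one bit per dimension before entropy coding, which traces out the rate-accuracy trade-off of this baseline.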
Moreover, in order to validate the proposed scheme against a comparable deep learning baseline, a neural network based model is introduced to enhance the fidelity of SQ reconstructed features via a residual network with GDN, denoted as SQ-E. The network structure is shown in Fig. 3. The loss function of the network is the mean square error (MSE) between the original feature and the enhanced feature.
We first verify the effectiveness of the proposed scheme in terms of rate-accuracy performance, as shown in Table 1. In particular, the proposed end-to-end feature compression model and the teacher-student enhancement model are denoted as PRO and PRO-E, respectively. It is worth mentioning that the accuracy of the original FaceNet feature without compression is 99.32% with the public pre-trained FaceNet model. In addition to SQ and SQ-E, other compression algorithms including PQ, OPQ, DBH, and DHN are also compared. It is evident that the proposed scheme achieves better compression performance in terms of rate-accuracy. Moreover, in order to investigate the performance of the teacher-student enhancement model, the area under curve (AUC) and equal error rate (EER) performance of PRO and PRO-E are also compared, as shown in Figs. 4 and 5. The rate-accuracy curves provide strong evidence for the effectiveness of the proposed enhancement model.
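AUC and EER can be computed from verification scores as in the following illustrative numpy sketch (not the paper's evaluation code); the toy scores and labels are made up for demonstration.

```python
import numpy as np

def auc_eer(scores, labels):
    """AUC and EER from similarity scores (label 1 = same identity)."""
    order = np.argsort(-scores)                   # sweep thresholds in descending score order
    y = labels[order]
    tpr = np.cumsum(y) / max(y.sum(), 1)          # true positive rate per threshold
    fpr = np.cumsum(1 - y) / max((1 - y).sum(), 1)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal rule
    i = np.argmin(np.abs(fpr - (1 - tpr)))        # operating point where FPR == FNR
    eer = (fpr[i] + (1 - tpr[i])) / 2
    return auc, eer

scores = np.array([0.9, 0.8, 0.2, 0.1])   # toy verification scores
labels = np.array([1, 1, 0, 0])           # perfectly separable case
auc, eer = auc_eer(scores, labels)        # -> AUC 1.0, EER 0.0
```

A higher AUC and a lower EER at a given bit rate indicate that the enhanced codes preserve more discriminative information.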
5 Conclusion

In this paper, we propose an end-to-end deep feature coding framework towards video coding for machines. The novelty of this paper lies in that, instead of directly quantizing and entropy coding the features, we introduce a deep learning model to further compactly represent the features as a latent code, such that better performance can be achieved. Moreover, a feature enhancement approach is proposed at the latent code level, which transfers the low quality latent code representation into a high quality one to facilitate the subsequent analysis process. Experiments have demonstrated the efficiency of the proposed deep learning feature representation scheme from different perspectives.
-  Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
-  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, vol. 1, pp. 886–893.
-  D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
-  A. Redondi, L. Baroffio, M. Cesana, and M. Tagliasacchi, “Compress-then-analyze vs. analyze-then-compress: Two paradigms for image analysis in visual sensor networks,” in IEEE International Workshop on Multimedia Signal Processing, Sep. 2013, pp. 278–282.
-  W. Liu, J. Wang, R. Ji, Y. Jiang, and S. Chang, “Supervised hashing with kernels,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, June 2012, pp. 2074–2081.
-  H. Jégou, M. Douze, and C. Schmid, “Product quantization for nearest neighbor search,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 117–128, Jan 2011.
-  T. Ge, K. He, Q. Ke, and J. Sun, “Optimized product quantization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 4, pp. 744–755, April 2014.
-  M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, “Brief: Computing a local binary descriptor very fast,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1281–1298, July 2012.
-  S. Zhang, Q. Tian, Q. Huang, W. Gao, and Y. Rui, “Usb: Ultrashort binary descriptor for fast visual matching and retrieval,” IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3671–3683, Aug 2014.
-  L. Ding, Y. Tian, H. Fan, Y. Wang, and T. Huang, “Rate-performance-loss optimization for inter-frame deep feature coding from videos,” IEEE Transactions on Image Processing, vol. 26, no. 12, pp. 5743–5757, Dec 2017.
-  H. Zhu, M. Long, J. Wang, and Y. Cao, “Deep hashing network for efficient similarity retrieval,” in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
-  Z. Chen, K. Fan, S. Wang, L. Duan, W. Lin, and A. Kot, “Toward intelligent sensing: Intermediate deep feature compression,” IEEE Transactions on Image Processing, vol. 29, pp. 2230–2243, 2020.
-  L. Duan, J. Liu, W. Yang, T. Huang, and W. Gao, “Video coding for machines: A paradigm of collaborative compression and intelligent analytics,” arXiv preprint arXiv:2001.03569, 2020.
-  “Video coding for machine: Use cases,” ISO/IEC JTC 1/SC 29/WG 11 N18662, Jul. 2019.
-  S. Ma, X. Zhang, C. Jia, Z. Zhao, S. Wang, and S. Wang, “Image and video compression with neural networks: A review,” IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1, 2019.
-  G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Tech. Rep. 07-49, University of Massachusetts, Amherst, October 2007.
-  Q. Cao, L. Shen, W. Xie, O. Parkhi, and A. Zisserman, “Vggface2: A dataset for recognising faces across pose and age,” in International Conference on Automatic Face and Gesture Recognition, 2018.
-  J. Ballé, V. Laparra, and E. Simoncelli, “End-to-end optimization of nonlinear transform codes for perceptual quality,” in 2016 Picture Coding Symposium, Dec 2016, pp. 1–5.
-  J. Ballé, D. Minnen, S. Singh, S. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” arXiv preprint arXiv:1802.01436, 2018.
-  J. Ballé, V. Laparra, and E. Simoncelli, “End-to-end optimized image compression,” arXiv preprint arXiv:1611.01704, 2016.
-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.
-  X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  S. Wang, S. Wang, X. Zhang, S. Wang, S. Ma, and W. Gao, “Scalable facial image compression with deep feature reconstruction,” in 2019 IEEE International Conference on Image Processing, Sep. 2019, pp. 2691–2695.
-  “Deep feature compression towards large-scale fine-grained image search,” AVS/AI M1162, Nov. 2019.