References
 Noise flow: noise modeling with conditional normalizing flows. In ICCV, pp. 3165–3173.
 Network dissection: quantifying interpretability of deep visual representations. In CVPR, pp. 6541–6549.

 Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning 3 (1), pp. 1–122.
 Fast convolutional sparse coding. In CVPR, pp. 391–398.
 Consensus convolutional sparse coding. In ICCV, pp. 4280–4288.
 Bilinear modeling via augmented lagrange multipliers (balm). PAMI 34 (8), pp. 1496–1508.
 Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS.
 Centripetal sgd for pruning very deep convolutional networks with complicated structure. In CVPR, pp. 4943–4953.

 Incorporating Nesterov momentum into Adam. In International Conference on Learning Representations, pp. 1–8.
 Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup. arXiv preprint arXiv:1906.08632.

 Convolutional sparse coding for image super-resolution. In ICCV, pp. 1823–1831.
 Dynamic network surgery for efficient dnns. In NIPS, pp. 1379–1387.

 Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. Fiber 56 (4), pp. 3–7.
 Learning both weights and connections for efficient neural network. In NIPS, pp. 1135–1143.
 Second order derivatives for network pruning: optimal brain surgeon. In NIPS, pp. 164–171.
 Deep residual learning for image recognition. In CVPR, pp. 770–778.
 Filter pruning via geometric median for deep convolutional neural networks acceleration. In CVPR, pp. 4340–4349.
 Channel pruning for accelerating very deep neural networks. In ICCV, pp. 1398–1406.
 Fast and flexible convolutional sparse coding. In CVPR, pp. 5135–5143.
 Distilling the knowledge in a neural network. Computer Science 14 (7), pp. 38–39.
 Network trimming: a data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250.
 Condensenet: an efficient densenet using learned group convolutions. In CVPR, pp. 2752–2761.
 Data-driven sparse structure selection for deep neural networks. In ECCV, pp. 304–320.
 Adam: a method for stochastic optimization. Computer Science.
 Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105.
 A simple weight decay can improve generalization. In NIPS, pp. 950–957.
 Optimal brain damage. In NIPS, pp. 598–605.
 Structured pruning of neural networks with budget-aware regularization. In CVPR, pp. 9108–9116.
 Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710.
 Factorized bilinear models for image recognition. In ICCV, pp. 2079–2087.
 Exploiting kernel sparsity and entropy for interpretable cnn compression. In CVPR, pp. 2800–2809.
 Squeezed bilinear pooling for fine-grained visual categorization. In ICCV Workshop.

 HRank: filter pruning using high-rank feature map. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 Accelerating convolutional networks via global & dynamic filter pruning. In IJCAI, pp. 2425–2432.
 Towards optimal structured cnn pruning via generative adversarial learning. In CVPR.
 Bilinear cnn models for fine-grained visual recognition. In ICCV, pp. 1449–1457.
 Learning efficient convolutional networks through network slimming. In ICCV, pp. 2736–2744.
 ThiNet: a filter level pruning method for deep neural network compression. In ICCV, pp. 5068–5076.
 Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research 11 (Jan), pp. 19–60.
 Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440.
 Proximal algorithms. Foundations and Trends® in Optimization 1 (3), pp. 127–239.
 PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035.
 The matrix cookbook. Technical University of Denmark 7 (15), pp. 510.
 XNOR-Net: imagenet classification using binary convolutional neural networks. In ECCV.
 FitNets: hints for thin deep nets. Computer Science.
 MobileNetV2: inverted residuals and linear bottlenecks. In CVPR, pp. 4510–4520.
 Recognizing human actions: a local svm approach. In ICPR.
 Part-aligned bilinear representations for person re-identification. In ECCV, pp. 402–419.
 Efficient convolutional sparse coding. In ICASSP, pp. 7173–7177.
 Quantized convolutional neural networks for mobile devices. In CVPR.
 Image reconstruction via manifold constrained convolutional sparse coding for image sets. JSTSP 11 (7), pp. 1072–1081.
 Designing energy-efficient convolutional neural networks using energy-aware pruning. In CVPR, pp. 5687–5695.
 Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. In ICLR.
 Generalized bilinear model based nonlinear unmixing using semi-nonnegative matrix factorization. In IEEE International Geoscience and Remote Sensing Symposium, pp. 1365–1368.
 Combined group and exclusive sparsity for deep neural networks. In ICML, pp. 3958–3966.
 Solving vision problems via filtering. In ICCV.

 NISP: pruning networks using neuron importance score propagation. In CVPR, pp. 9194–9203.
 Multimodal factorized bilinear pooling with co-attention learning for visual question answering. In ICCV, pp. 1821–1830.
 Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer.
 Deconvolutional networks. In CVPR, pp. 2528–2535.
 ADADELTA: an adaptive learning rate method. arXiv preprint.
 Sparse representation classification with manifold constraints transfer. In CVPR.
 Learning causality and causality-related learning: some recent progress. National Science Review.
 Collaborative representation based classification for face recognition. arXiv preprint arXiv:1204.2358.
 Accelerate cnn via recursive bayesian pruning. In ICCV, pp. 3306–3315.
 Cogradient descent for bilinear optimization. CoRR abs/2006.09142.