
Speeding-up Graphical Model Optimization via a Coarse-to-fine Cascade of Pruning Classifiers

by B. Conejo et al.

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach relies on a multi-scale pruning scheme that progressively reduces the solution space, using a novel strategy based on a coarse-to-fine cascade of learnt classifiers. We experiment thoroughly with classic computer-vision MRF problems, where our framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF.
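To illustrate the idea, here is a minimal sketch of coarse-to-fine label pruning for an MRF. The paper learns a classifier per scale to decide which labels survive; as a stand-in assumption, this sketch simply keeps the cheapest fraction of each node's unary costs at every scale (the function name, `keep_frac` parameter, and toy cost matrix are illustrative, not from the paper):

```python
import numpy as np

def coarse_to_fine_prune(unary, n_scales=3, keep_frac=0.5):
    """Progressively shrink the label space of a toy MRF.

    unary: (n_nodes, n_labels) array of per-node label costs.
    At each scale, a stand-in "classifier" (here: keep the cheapest
    keep_frac of the still-active labels; the paper uses a learnt
    classifier instead) marks labels as active or pruned.
    Returns a boolean (n_nodes, n_labels) mask of surviving labels.
    """
    n_nodes, n_labels = unary.shape
    active = np.ones((n_nodes, n_labels), dtype=bool)
    for _ in range(n_scales):
        for i in range(n_nodes):
            # Pruned labels get infinite cost so they stay pruned.
            costs = np.where(active[i], unary[i], np.inf)
            k = max(1, int(active[i].sum() * keep_frac))
            keep = np.argsort(costs)[:k]
            mask = np.zeros(n_labels, dtype=bool)
            mask[keep] = True
            active[i] = mask
    return active
```

Any off-the-shelf MRF solver would then run only over the surviving labels, which is where the speed-up comes from: the final optimization touches a small fraction of the original solution space.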


CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction

Vision transformer (ViT) has achieved competitive accuracy on a variety ...

Coarse-to-Fine Lifted MAP Inference in Computer Vision

There is a vast body of theoretical research on lifted inference in prob...

Cascaded Coarse-to-Fine Deep Kernel Networks for Efficient Satellite Image Change Detection

Deep networks are nowadays becoming popular in many computer vision and ...

Funnel-Structured Cascade for Multi-View Face Detection with Alignment-Awareness

Multi-view face detection in open environment is a challenging task due ...

Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning

We report, for the first time, on the cascade weight shedding phenomenon...

Higher-order Coreference Resolution with Coarse-to-fine Inference

We introduce a fully differentiable approximation to higher-order infere...

LiteEval: A Coarse-to-Fine Framework for Resource Efficient Video Recognition

This paper presents LiteEval, a simple yet effective coarse-to-fine fram...