
Speeding-up Graphical Model Optimization via a Coarse-to-fine Cascade of Pruning Classifiers

09/15/2014
by B. Conejo, et al.

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach relies on a multi-scale pruning scheme that progressively reduces the solution space using a novel strategy based on a coarse-to-fine cascade of learnt classifiers. We experiment thoroughly with classic computer-vision MRF problems, where our framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF.
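The abstract's core idea, solving a coarse version of the problem and using its result to prune each node's candidate labels before optimizing at full resolution, can be illustrated on a toy chain MRF. The sketch below is a minimal illustration, not the paper's method: the learnt pruning classifiers are replaced by a simple rule that keeps only labels within a window of the coarse MAP solution, and the function names (`solve_chain`, `coarse_to_fine`) are my own.

```python
import numpy as np

def solve_chain(unary, pairwise_weight, candidates):
    """Exact MAP on a chain MRF via dynamic programming (Viterbi),
    restricted to per-node candidate label sets.
    Pairwise term: pairwise_weight * |k - p| (linear smoothness)."""
    n = len(unary)
    best = [dict() for _ in range(n)]   # best[i][k]: min cost ending with label k
    back = [dict() for _ in range(n)]   # backpointers for recovery
    for k in candidates[0]:
        best[0][k] = unary[0][k]
    for i in range(1, n):
        for k in candidates[i]:
            prev = {p: best[i - 1][p] + pairwise_weight * abs(k - p)
                    for p in candidates[i - 1]}
            p_star = min(prev, key=prev.get)
            best[i][k] = unary[i][k] + prev[p_star]
            back[i][k] = p_star
    labels = [0] * n
    labels[-1] = min(best[-1], key=best[-1].get)
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i][labels[i]]
    return labels

def coarse_to_fine(unary, pairwise_weight, window=2):
    """Solve a half-resolution chain, then prune fine-scale labels to a
    window around the upsampled coarse solution (stand-in for the paper's
    cascade of learnt pruning classifiers)."""
    n, num_labels = unary.shape  # assumes n is even
    coarse = (unary[0::2] + unary[1::2]) / 2.0
    coarse_labels = solve_chain(coarse, pairwise_weight,
                                [range(num_labels)] * len(coarse))
    candidates = [[k for k in range(num_labels)
                   if abs(k - coarse_labels[i // 2]) <= window]
                  for i in range(n)]
    return solve_chain(unary, pairwise_weight, candidates)
```

With an aggressive window the fine-scale DP touches far fewer labels per node; with a window covering all labels it degenerates to the exact full-resolution solve, which is a convenient sanity check.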

Related research:

03/09/2022 · CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction
Vision transformer (ViT) has achieved competitive accuracy on a variety ...

07/22/2017 · Coarse-to-Fine Lifted MAP Inference in Computer Vision
There is a vast body of theoretical research on lifted inference in prob...

12/21/2018 · Cascaded Coarse-to-Fine Deep Kernel Networks for Efficient Satellite Image Change Detection
Deep networks are nowadays becoming popular in many computer vision and ...

09/23/2016 · Funnel-Structured Cascade for Multi-View Face Detection with Alignment-Awareness
Multi-view face detection in open environment is a challenging task due ...

03/19/2021 · Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning
We report, for the first time, on the cascade weight shedding phenomeno...

04/15/2018 · Higher-order Coreference Resolution with Coarse-to-fine Inference
We introduce a fully differentiable approximation to higher-order infere...

12/03/2019 · LiteEval: A Coarse-to-Fine Framework for Resource Efficient Video Recognition
This paper presents LiteEval, a simple yet effective coarse-to-fine fram...