Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection

09/16/2018
by   Yongcheng Liu, et al.

Multi-label image classification is a fundamental yet challenging task in general visual understanding. Existing methods have found that region-level cues (e.g., features from RoIs) can facilitate multi-label classification. However, such methods usually require laborious object-level annotations (i.e., object labels and bounding boxes) to learn effective object-level visual features. In this paper, we propose a novel and efficient deep framework that boosts multi-label classification by distilling knowledge from a weakly-supervised detection task, without requiring bounding box annotations. Specifically, given only image-level annotations, (1) we first develop a weakly-supervised detection (WSD) model, and then (2) construct an end-to-end multi-label image classification framework augmented by a knowledge distillation module, in which the WSD model guides the classification model through its class-level predictions for the whole image and its object-level visual features for object RoIs. The WSD model serves as the teacher and the classification model as the student. After this cross-task knowledge distillation, the classification model's performance improves significantly while its efficiency is preserved, since the WSD model can be safely discarded at test time. Extensive experiments on two large-scale datasets (MS-COCO and NUS-WIDE) show that our framework surpasses state-of-the-art methods in both accuracy and efficiency.
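To make the distillation scheme concrete, here is a minimal sketch of the kind of combined objective the abstract describes: a multi-label classification loss against the ground-truth labels, plus a prediction-level distillation term matching the teacher's (WSD model's) soft class-level predictions, plus a feature-level term matching object-level features. The function names, the use of BCE/MSE terms, and the weights `alpha` and `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sigmoid(x):
    # Logistic function mapping a logit to a per-class probability.
    return 1.0 / (1.0 + math.exp(-x))

def bce(p, y):
    # Binary cross-entropy for one class; eps guards against log(0).
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def distill_loss(student_logits, teacher_logits, labels,
                 feat_student, feat_teacher, alpha=0.5, beta=0.1):
    """Illustrative cross-task distillation objective (not the paper's
    exact loss):
      - BCE of student predictions vs. ground-truth labels,
      - BCE of student predictions vs. teacher soft predictions
        (class-level prediction distillation),
      - MSE between student and teacher object-level features
        (feature distillation).
    alpha and beta are assumed weighting hyperparameters.
    """
    n = len(labels)
    cls = sum(bce(sigmoid(s), y)
              for s, y in zip(student_logits, labels)) / n
    soft = sum(bce(sigmoid(s), sigmoid(t))
               for s, t in zip(student_logits, teacher_logits)) / n
    feat = sum((a - b) ** 2
               for a, b in zip(feat_student, feat_teacher)) / len(feat_student)
    return cls + alpha * soft + beta * feat
```

At test time only the student classification branch is kept, so the teacher terms (and hence the WSD model) add no inference cost, which matches the efficiency claim in the abstract.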


