AFT*: Integrating Active Learning and Transfer Learning to Reduce Annotation Efforts

02/03/2018
by Zongwei Zhou, et al.

The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributed to the availability of large annotated datasets, such as ImageNet and Places. In biomedical imaging, however, it is very challenging to create such large annotated datasets, as annotating biomedical images is not only tedious, laborious, and time-consuming, but also demands costly, specialty-oriented skills that are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method that naturally integrates active learning and transfer learning (fine-tuning) into a single framework, called AFT*, which starts directly with a pre-trained CNN to seek "worthy" samples for annotation and gradually enhances the (fine-tuned) CNN via continuous fine-tuning. We have evaluated our method in three distinct biomedical imaging applications, demonstrating that it can cut the annotation cost by at least half compared with the state-of-the-art method. This performance is attributed to several advantages derived from the advanced active, continuous learning capability of our method. Although AFT* was initially conceived in the context of computer-aided diagnosis in biomedical imaging, it is generic and applicable to many tasks in computer vision and image analysis; we illustrate the key ideas behind AFT* with the Places database for scene interpretation in natural images.
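The core loop the abstract describes — score the unlabeled pool with the current (pre-trained or fine-tuned) CNN, pick the most "worthy" samples, annotate them, fine-tune, repeat — can be sketched generically. The snippet below is a minimal illustration, not the paper's method: AFT* defines its own worthiness criteria, whereas this sketch uses plain prediction entropy as the uncertainty score, and the pool of softmax outputs is a toy example.

```python
import numpy as np

def entropy(probs):
    """Prediction entropy per sample: higher means the model is less
    certain, so the sample is (under this simple criterion) more
    'worthy' of expert annotation."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=1)

def select_for_annotation(probs, budget):
    """Return indices of the `budget` most uncertain pool samples.

    In a full active-learning loop, these samples would be sent to an
    annotator, added to the labeled set, and the CNN fine-tuned before
    re-scoring the remaining pool with the updated model.
    """
    scores = entropy(probs)
    return np.argsort(scores)[::-1][:budget]

# Toy softmax outputs from a (hypothetical) pre-trained two-class CNN:
pool_probs = np.array([
    [0.98, 0.02],  # confident prediction -> low annotation value
    [0.55, 0.45],  # uncertain -> high annotation value
    [0.50, 0.50],  # maximally uncertain
    [0.90, 0.10],  # fairly confident
])
chosen = select_for_annotation(pool_probs, budget=2)
print(sorted(chosen.tolist()))  # -> [1, 2], the two most uncertain samples
```

The point of starting from a pre-trained CNN, as the abstract emphasizes, is that even the very first round of scoring is informative, so no initial random labeling pass is wasted.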


