Multi Instance Learning For Unbalanced Data

12/17/2018
by Mark Kozdoba, et al.

In the context of Multi Instance Learning, we analyze the Single Instance (SI) learning objective. We show that when the data is unbalanced and the family of classifiers is sufficiently rich, the SI method is a useful learning algorithm. In particular, we show that larger data imbalance, a quality that is typically perceived as negative, in fact implies a better resilience of the algorithm to the statistical dependencies of the objects in bags. In addition, our results shed new light on some known issues with the SI method in the setting of linear classifiers, and we show that these issues are significantly less likely to occur in the setting of neural networks. We demonstrate our results on a synthetic dataset, and on the COCO dataset for the problem of patch classification with weak image level labels derived from captions.
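To make the Single Instance (SI) objective concrete, below is a minimal sketch of the standard SI baseline for Multi Instance Learning: every instance inherits its bag's label, a standard classifier is trained on the resulting instance-level dataset, and bag predictions are recovered by max-pooling instance scores. The synthetic data generator, the logistic-regression classifier, and the max-pooling rule are illustrative assumptions, not the construction analyzed in the paper.

```python
# Minimal sketch of the Single Instance (SI) baseline for Multi Instance Learning.
# Assumptions (not from the paper): synthetic Gaussian bags, a logistic-regression
# classifier, and max-pooling of instance scores for bag-level prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_bags(n_bags=200, bag_size=10, pos_frac=0.2, dim=5):
    """A bag is positive iff it contains at least one positive instance (unbalanced: few positive bags)."""
    bags, bag_labels = [], []
    for _ in range(n_bags):
        y_bag = rng.random() < pos_frac
        X = rng.normal(size=(bag_size, dim))
        if y_bag:
            # plant a single positive instance by shifting its mean
            X[rng.integers(bag_size)] += 2.0
        bags.append(X)
        bag_labels.append(int(y_bag))
    return bags, np.array(bag_labels)

bags, y = make_bags()

# SI objective: every instance inherits its bag's label, then train a standard classifier.
X_inst = np.vstack(bags)
y_inst = np.repeat(y, [len(b) for b in bags])
clf = LogisticRegression(max_iter=1000).fit(X_inst, y_inst)

# Bag-level prediction: max instance score within each bag.
bag_scores = np.array([clf.predict_proba(b)[:, 1].max() for b in bags])
print("bag-level accuracy:", ((bag_scores > 0.5).astype(int) == y).mean())
```

Under the SI objective the negative instances in positive bags receive wrong labels, which is why the paper's analysis of data imbalance and classifier richness matters: with few positive bags, the fraction of mislabeled instances the classifier must absorb stays small.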
