Webly Supervised Learning with Category-level Semantic Information
As vast numbers of photos are uploaded to public websites (e.g., Flickr, Bing, and Google) every day, learning from such freely available web data, also referred to as webly supervised learning, has become an increasingly popular research direction. Nevertheless, the performance gap between webly supervised learning and traditional supervised learning remains large, owing to the label noise of web data as well as the domain shift between web data and test data. To be exact, on the one hand, the labels of images crawled from public websites are noisy and often inaccurate. On the other hand, the data distributions of web data and test data differ considerably, which is known as domain shift. Some existing works facilitate learning from web data with the aid of extra information, such as augmenting or purifying web data by virtue of instance-level supervision, which usually demands heavy manual annotation. Instead, we propose to tackle the label noise and domain shift by leveraging more accessible category-level supervision. In particular, we build our method upon a variational autoencoder (VAE), in which the classification network is attached to the hidden layer of the VAE so that the classification network and the VAE can jointly leverage the category-level hybrid semantic information. Moreover, we further extend our method to cope with the domain shift by utilizing unlabeled test instances in the training stage, followed by low-rank refinement. The effectiveness of our proposed methods is clearly demonstrated by extensive experiments on three benchmark datasets.
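To make the described architecture concrete, below is a minimal sketch in PyTorch of a VAE whose latent layer also feeds a classification head, together with an illustrative low-rank refinement step. All names, layer sizes, and loss weights are hypothetical assumptions for illustration, not the paper's actual implementation; in particular, the low-rank step here is a simple truncated SVD, whereas the paper's exact formulation may differ.

```python
# Hypothetical sketch: VAE with a classifier attached to the latent layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEWithClassifier(nn.Module):
    def __init__(self, input_dim=2048, latent_dim=128, num_classes=10):
        super().__init__()
        # Encoder: maps input features to latent mean and log-variance.
        self.enc = nn.Linear(input_dim, 512)
        self.fc_mu = nn.Linear(512, latent_dim)
        self.fc_logvar = nn.Linear(512, latent_dim)
        # Decoder: reconstructs the input from the latent code.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )
        # Classification head on the latent (hidden) layer, so the
        # classifier and the VAE share the same representation.
        self.classifier = nn.Linear(latent_dim, num_classes)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), self.classifier(z), mu, logvar

def loss_fn(x, x_rec, logits, y, mu, logvar, beta=1.0, lam=1.0):
    # Joint objective: VAE reconstruction + KL terms, plus the
    # classification loss computed on the latent code. beta and lam
    # are assumed trade-off weights.
    rec = F.mse_loss(x_rec, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    ce = F.cross_entropy(logits, y)
    return rec + beta * kl + lam * ce

def low_rank_refine(scores, rank=5):
    # Illustrative low-rank refinement: project the prediction score
    # matrix (instances x classes) onto its top-`rank` singular
    # components, suppressing noisy, inconsistent predictions.
    U, S, Vh = torch.linalg.svd(scores, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vh[:rank]
```

In this sketch, the classification loss on the (noisy) web labels and the VAE losses are optimized jointly, and the refinement function would be applied to the score matrix over unlabeled test instances after training.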