Estimates of calorie intake can help users modify their food habits to maintain a healthy diet. Current food journaling applications like Fitbit App fitbit , MyFitnessPal myfitnesspal and My Diet Coach dietcoach require users to enter their meal information manually. A study of 141 participants in cordeiro2015rethinking reports that many participants stopped food journaling because of the effort involved or because they found it to be time consuming. Capturing images of meals is easier, faster and more convenient than manual data entry. An automated algorithm for measuring calories from images should solve several sub-problems:
classify, segment and estimate the 3D volume of the given food items. In this paper we focus on the first task: classification of food items in still images. This is a challenging task due to the large number of food categories, high intra-class variation and low inter-class variation among different food classes. Further, in comparison to standard computer vision problems such as object detection lin2014microsoft ; zhou2017places , present datasets for food classification are limited in both quantity and quality for training deep networks (see section 2). Prior works try to resolve this issue by collecting training data using human annotators or crowd-sourcing platforms farinella2016retrieval ; chen2012automatic ; Kaw2014 ; zhang2015snap ; Mey2015 . Such data curation is expensive and limits scalability in terms of the number of training categories as well as the number of training samples per category. Moreover, it is challenging to label images for food classification tasks as they often have co-occurring food items, partially occluded food items, and large variability in scale and viewpoint. Accurate annotation of these images would require bounding boxes, making data curation even more time and cost prohibitive. Thus, it is important to build food datasets with minimal data curation so that they can be scaled to novel categories based on the final application.
Unlike data obtained by human supervision, web data is freely available in abundance but contains different types of noise chen2015webly ; wang2008annotating ; sukhbaatar2014learning . Web images collected via search engines may include images of processed and packaged food items as well as ingredients required to prepare the food items, as shown in Figure 2. We refer to this noise as cross-domain noise, as it is introduced by the bias of the specific search engine and user tags. In addition, the web data may also include images with multiple food items that are labeled with a single food category (cross-category noise). For example, in images labeled as Guacamole, Nachos can be predominant (Figure 2). Further, the web results may also include images not belonging to any particular class.
We address the problem of food image classification by combining webly and weakly supervised learning (Figure 1). We first propose to overcome the issues associated with obtaining clean training data for food classification by using inexpensive but noisy web data. In particular, we demonstrate that by sequentially adding manually curated data to the uncurated data from web search engines, the classification performance improves linearly. We show that by augmenting a smaller curated dataset with larger uncurated web data, the classification accuracy increases to a level on par with the performance obtained with the manually curated dataset alone. We also propose to augment the deep model with weakly supervised learning (WSL) for two reasons: (1) to tackle the cross-category noise present in web images, and (2) to identify discriminative regions to disambiguate between fine-grained classes. We are able to approximately localize food items using the activation maps provided by WSL. We show that using WSL increases the classification accuracy on test data further. We finally show qualitative results, provide useful insights into the two proposed strategies and discuss the reasons for the performance improvements.
2 Related Work
Traditional computer vision feature vectors such as HOG, SIFT, bag-of-features, Gabor filters and color histograms have been used for classifying food images in zhang2015snap ; puri2009recognition ; Bos2014 ; bettadapura2015leveraging ; joutou2009food .
Recent state-of-the-art deep learning methods for food recognition and localization have led to significant improvements in performance liu2016food ; Mey2015 ; liu2016deepfood ; Yan2015 ; Sin2016 ; Bol2016 . However, the proposed methods use training data with only one food item per image liu2016food or have labels for multiple food items in images Kaw2014 ; Mey2015 . The preparation of such training data requires manual curation. The Food-101 dataset Bos2014 is often used for food classification. It is collected from the food discovery website foodspotting.com and generally contains less cross-domain noise than images obtained from search engines such as Google.com. However, this website relies on images submitted by users and thus has a limited number of images per food category, limiting expansion to new categories. In wang2015recipe , food data is collected from the web but also relies on textual information along with the images. CNNs have also been used to classify food vs. non-food items in Sin2016 ; Bol2016 . In addition, Bol2016 also provides food activation maps on the input image to generate bounding boxes for localization. We address the problem of classifying food items by using noisy web data and incorporating weakly supervised learning for training CNNs.
Recent approaches of webly supervised learning in computer vision leverage noisy web data, which is easy and inexpensive to collect. Prior work uses web data to train CNNs for classification and object detection. Kra2016 use noisy data collected from the web for fine-grained classification. They also use an active learning-based approach for collecting data when only limited examples are available from the web. They demonstrate that even if the classification task at hand has a small number of categories, using a network trained with more categories gives better performance. Motivated by curriculum learning, chen2015webly propose an algorithm to first train a model on simple images from Google and estimate a relationship graph between different classes. The resulting confusion matrix is integrated with the model, which is then fine-tuned on harder Flickr images. The confusion matrix makes the network robust to noise and improves performance. Similarly, patrini2016making modify the loss function by using the noise distribution estimated from the noisy images.
Food images often contain multiple food items instead of a single one and would require bounding boxes for full annotation. To avoid expensive curation, weakly supervised learning (WSL) utilizes image-level labels instead of pixel-level labels or bounding boxes. In Zhou_2016_CVPR ; oquab2015object , the network architecture is modified to incorporate WSL by adding a global pooling layer. Along with image classification, these architectures are able to localize the discriminative image parts. In durand2016weldon , the authors include top instances (most informative regions) and negative evidence (least informative regions) in the network architecture to identify discriminative image parts more accurately. To address object detection, the authors in bilen2016weakly modify the deep network using a spatial pyramid pooling layer and use region proposals to simultaneously select discriminative regions and perform classification. In cinbis2017weakly , the authors present a multi-fold multiple instance learning approach that detects object regions using CNN and Fisher vector features while avoiding convergence to local optima.
In this paper, we combine webly and weakly supervised learning to address the problem of food classification. We sequentially add curated data to the weakly labeled uncurated web data and augment the deep model with WSL. We report improved performance and gain insights by visualizing qualitative results.
We first describe the datasets used to highlight the benefits of using uncurated data with manually curated data for the task of food classification. Thereafter, we briefly discuss weakly supervised learning to train the deep network.
3.1 Datasets
We first collect food images from the web, augment them with both curated and additional uncurated images, and test our method on a separate clean test set. The datasets are described below:
Food-101 Bos2014 : This dataset consists of 101 food categories with 750 training and 250 test images per category. The test data was manually cleaned by the authors, whereas the training data contains cross-category noise, i.e., images with multiple food items labeled with a single class. We use the manually cleaned test data as the curated dataset (25,250 images), Food-101-CUR, which is used to augment the web dataset. We use part of the uncurated training data for validation and the remainder (referred to as Food-101-UNCUR) to augment the training data for the deep model.
Food-Web-G: We collect web data using Google image search for the 101 food categories of the Food-101 dataset Bos2014 . The restrictions on public search results limited the amount of data collected per category. We removed images below a minimum height or width from the dataset. As previously described, the web data is weakly labeled and contains both cross-domain and cross-category noise, as shown in Figure 2. We refer to this dataset as Food-Web-G.
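The size filtering above can be sketched as follows. This is a minimal illustration: the record field names and the 256-pixel threshold are assumptions for the example, not values from the original pipeline.

```python
def filter_small_images(records, min_side=256):
    """Keep only images whose height and width both meet a minimum side
    length. The 256-pixel cutoff is illustrative; the exact threshold used
    for Food-Web-G is not recorded in this text."""
    return [r for r in records
            if r["width"] >= min_side and r["height"] >= min_side]

# Hypothetical metadata records for downloaded web images.
records = [
    {"file": "guacamole_001.jpg", "width": 640, "height": 480},
    {"file": "guacamole_002.jpg", "width": 120, "height": 480},  # too narrow
    {"file": "nachos_001.jpg", "width": 300, "height": 300},
]
kept = filter_small_images(records)  # drops the 120-pixel-wide image
```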
UEC256 Kaw2014 : This dataset consists of 256 food categories, including Japanese and international dishes, and each category has at least 100 images with bounding boxes indicating the location of the labeled food item. Since this dataset provides complete bounding-box-level annotations, we use it for testing. We construct the test set by selecting the categories in common with the Food-101 dataset and extracting cropped images using the given bounding boxes.
3.2 Weakly Supervised Learning (WSL)
The data collected from the web using food label tags is weakly labeled, i.e., an image is labeled with a single label even when it contains multiple food items. We observe that most uncurated food images are unsegmented, containing either items from co-occurring food classes or background objects such as kitchenware. We propose to tackle this problem by augmenting the deep network with WSL, which explicitly grounds the discriminative parts of an image for the given training label Zhou_2016_CVPR , resulting in a better model for classification.
As shown in Figure 1, we incorporate discriminative localization capabilities into the deep model by adding a convolution layer and a spatial pooling layer to a pretrained CNN oquab2015object ; Zhou_2016_CVPR . The convolution layer generates class-wise score maps from the preceding activations. The spatial pooling layer in our architecture is a global average pooling layer, which has recently been shown to outperform global max pooling for localization in WSL oquab2015object ; Zhou_2016_CVPR . Max pooling identifies only the most discriminative region and ignores lower activations, while average pooling captures the extent of the object by aggregating all discriminative regions, thus giving better localization. The spatial pooling layer returns class-wise scores for each image, which are then used to compute the cross-entropy loss. During the test phase, we visualize heat maps for different classes by overlaying the predicted score maps on the original image.
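The head described above (a 1×1 convolution producing class-wise score maps, global average pooling, and a cross-entropy loss) can be sketched in NumPy as follows. The feature-map and class dimensions are toy values for illustration, not those of the actual model.

```python
import numpy as np

def wsl_head(features, w, b):
    """Map CNN feature maps (C, H, W) to class-wise score maps (K, H, W)
    via a 1x1 convolution, then global-average-pool to per-class scores."""
    C, H, W = features.shape
    # A 1x1 convolution is a per-pixel linear map: (K, C) @ (C, H*W).
    score_maps = (w @ features.reshape(C, H * W) + b[:, None]).reshape(-1, H, W)
    # Global average pooling lets every spatial location contribute, which
    # is why it tends to recover the full object extent, unlike max pooling.
    scores = score_maps.mean(axis=(1, 2))
    return score_maps, scores

def cross_entropy(scores, label):
    """Softmax cross-entropy on the pooled class scores."""
    z = scores - scores.max()              # stabilise the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# Toy example: 8 feature channels on a 7x7 grid, 5 food classes.
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 7, 7))
w = rng.standard_normal((5, 8)) * 0.1
b = np.zeros(5)
score_maps, scores = wsl_head(features, w, b)
loss = cross_entropy(scores, label=2)
```

At test time the per-class score maps, rather than the pooled scores, are what get overlaid on the image as heat maps.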
Additionally, food classification is a fine-grained classification problem Kra2016 , and we later show that discriminative localization also aids in correctly classifying visually similar classes. Compared to Krause et al. Kra2016 , who show the benefits of noisy data for fine-grained tasks such as bird classification, we additionally highlight the benefits of WSL for learning with noisy data for food classification.
3.3 Implementation Details
We use Inception-ResNet szegedy2017inception as the base architecture and fine-tune the weights of a network pre-trained on ImageNet. During training, we use the Adam optimizer with separate learning rates for the last fully-connected (classification) layer and the pre-trained layers. For WSL, we initialize the network with the weights obtained by training the base model and fine-tune only the layers added for weak localization, using a small learning rate. For WSL we obtain localized score maps for the different classes by adding a convolutional layer that maps the input feature maps into classification score maps Zhou_2016_CVPR . The output of this convolutional layer is a coarse per-class score map, which gives approximate localization when resized to the size of the input image. The average pooling layer spans the full spatial extent of the score maps.
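The per-layer learning-rate scheme above, i.e. updating the pre-trained backbone more gently than the newly added classification layer, can be sketched with a plain SGD step over parameter groups (the paper uses Adam; SGD and the specific rates here are simplifying assumptions for illustration).

```python
import numpy as np

def sgd_step(param_groups):
    """One gradient-descent update where each parameter group carries its
    own learning rate, mimicking fine-tuning: small steps for pretrained
    layers, larger steps for the freshly initialized classifier."""
    for group in param_groups:
        for p in group["params"]:
            p["value"] -= group["lr"] * p["grad"]

# Hypothetical parameters and rates, not the paper's actual values.
pretrained = [{"value": np.ones(3), "grad": np.full(3, 0.5)}]
classifier = [{"value": np.zeros(3), "grad": np.full(3, 0.5)}]
param_groups = [
    {"params": pretrained, "lr": 1e-4},  # backbone: gentle updates
    {"params": classifier, "lr": 1e-2},  # new layer: larger updates
]
sgd_step(param_groups)
```

In a framework such as PyTorch the same idea is expressed by passing several parameter groups, each with its own `lr`, to the optimizer constructor.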
4.1 Quantitative Results
| Dataset | No. of images | Type | w/o WSL | with WSL |
|---|---|---|---|---|
| Food-Web-G + Food-101-CUR | | | | |
| Food-Web-G + Food-101-UNCUR | | | | |
| Food-Web-UNCUR + Food-101-CUR | | | | |
We report top-1 classification accuracy for different combinations of the datasets (see section 3), with and without WSL, in Table 1. We first discuss the performance without WSL, taking the model trained on Google images (Food-Web-G) as the baseline. We observe that augmenting Food-Web-G with a small proportion of curated data improves the performance, as does augmentation with additional uncurated data from foodspotting.com. Both combinations outperform the curated data alone, clearly highlighting the benefits of using noisy web data. We also observe that different sources of web images, i.e., Google versus Foodspotting, result in different performance for a similar number of training samples. As previously mentioned, Foodspotting is crowdsourced by food enthusiasts, who often compete for ratings, and thus has less cross-domain noise and better quality than Google images. By combining all three datasets, we obtain a classification accuracy that outperforms either the curated or the uncurated datasets alone.
We also study the variation in performance when using different proportions of clean and unclean images. As shown in Figure 3, by sequentially adding manually curated data (Food-101-CUR) to the web data (Food-Web-G), the classification performance improves linearly. Adding the uncurated data from Foodspotting increases it further. We also observe significant improvements from adding discriminative localization to the deep model, which raises the classification accuracy again. In particular, we observe a consistent improvement across all data splits when using WSL; for example, the combination of both uncurated datasets from Google and Foodspotting shows an absolute gain in accuracy with WSL. This performance trend highlights the advantages of WSL in tackling the noise present in food images by implicitly performing foreground segmentation and by focusing on the correct food item when multiple food items are present (cross-category noise).
4.2 Qualitative Results
Figure 4: Heat maps showing approximate pixel-wise predicted probabilities obtained by weakly supervised training for a few training images. We show three cases where (a) the food items are localized correctly, (b) the network localizes frequently co-occurring food items together due to the weak training labels, and (c) the network localizes a frequently co-occurring food item instead of the labeled food item due to incomplete and noisy training data.
We show the heat maps indicating the approximate localization of the top-1 predicted label for a few training images with multiple food items in Figure 4. We see that for some training images (Figure 4a) the network learns to localize the correct food type among co-occurring food classes, e.g., it is able to identify rice in the “fried rice” example. This ability could explain the performance benefits, especially when the training data is not completely labeled. However, we also observe that for frequently co-occurring food items, the network sometimes learns to localize multiple food types together. As shown in Figure 4b, the network learns “chicken” and “rice” as one category because they co-occur in many training examples. The network also learns the wrong food item for some co-occurring food items. For example, Figure 4c shows examples where the network learns to recognize “sauce” instead of “gyoza”. This is a drawback of standard WSL methods, where the algorithm tends to focus on the most discriminative part and overfits. We can overcome this by either leveraging additional clean training data or using recent advances in WSL Singh_2017_ICCV ; Kim_2017_ICCV . We show heat maps for test images that are misclassified without localization but correctly classified with localization in Figure 5. Food classification is a fine-grained classification problem, and we can see that WSL helps by identifying discriminative parts of different food items. For example, the model grounds the noodle pieces in the “miso soup” image in Figure 5, which makes it possible to differentiate it from the “chocolate cake” class, both of which are generally dark brown in color.
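A heat map like those discussed above can be produced by selecting the score map of the top-1 class and resizing it to the input resolution. A minimal sketch, using nearest-neighbour replication for the upsampling (the actual interpolation method is an assumption here):

```python
import numpy as np

def top1_heat_map(score_maps, out_hw):
    """Pick the score map of the top-1 predicted class and upsample it by
    nearest-neighbour replication to the input-image resolution, then
    normalise to [0, 1] for overlaying on the image."""
    scores = score_maps.mean(axis=(1, 2))          # pooled class scores
    top1 = int(np.argmax(scores))
    k_h = out_hw[0] // score_maps.shape[1]
    k_w = out_hw[1] // score_maps.shape[2]
    heat = np.kron(score_maps[top1], np.ones((k_h, k_w)))
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    return top1, heat

# Toy score maps: 5 classes on a 7x7 grid, upsampled to 224x224.
rng = np.random.default_rng(1)
score_maps = rng.standard_normal((5, 7, 7))
top1, heat = top1_heat_map(score_maps, (224, 224))
```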
We observe that the properties of the training data and the quality of labeling influence the test performance. There are many ways of cooking a food item in different cuisines, resulting in variability in appearance. The UEC256 test data mainly contains Japanese dishes that may not be seen by the network during the training phase. We found that some test images are misclassified when their appearance differs from the training images. Figure 6a shows an example of the category “omelette”, which has high variability between training and test data. We also observe that the performance on test data is influenced by the weak/incomplete labeling of the training data. For example, as shown in Figure 6b, the training dataset contains the two categories “french fries” and “fish and chips”. “Fish and chips” always contains french fries; however, this information is not used during the training phase, resulting in high confusion between these classes during testing.
Misclassification of test images also occurs due to the presence of multiple food items. The localization heat maps show that the network also focuses on partially occluded food items in the images. Figure 7 shows examples where test images with true label “french fries” are misclassified because the network focuses on other, partial food items in the image. Even though the top-most predicted label corresponds to the partially occluded food item, the correct label is often found in the top-5 predictions, as reflected in the higher top-5 accuracy.
We also generate bounding boxes from the heat maps, as shown in Figure 8, and will evaluate the localization performance in future work.
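One simple way to derive such boxes is to threshold the normalised heat map and take the tight bounding box around the surviving pixels. The fixed relative threshold below is an assumption for illustration; the original box-generation procedure is not specified in this text.

```python
import numpy as np

def heat_map_to_box(heat, thresh=0.5):
    """Threshold a heat map at a fraction of its maximum and return the
    tight bounding box (x0, y0, x1, y1) around all above-threshold pixels,
    or None if the map is empty."""
    mask = heat >= thresh * heat.max()
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy heat map with one hot region spanning rows 2-4 and columns 3-6.
heat = np.zeros((10, 10))
heat[2:5, 3:7] = 1.0
box = heat_map_to_box(heat)
```

A fuller implementation would first extract connected components so that each food item yields its own box rather than one box spanning all activations.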
In this paper, we leverage freely available web data to address the problem of food classification. By augmenting the abundantly available uncurated web data with a limited manually curated dataset and using weakly supervised learning, we obtain a substantial improvement in classification accuracy. The performance improves linearly as the amount of curated data used for training is increased. We examine the localization maps and observe that WSL aids the network by learning to approximately localize a food item even in the presence of multiple food items. Additionally, we examine cases where discriminative localization helps to disambiguate visually similar classes. Although we chose to focus on WSL in this work, additional performance improvements can also be obtained through complementary approaches such as a cost-sensitive loss chen2015webly ; patrini2016making and domain adaptation bergamo2010exploiting .
We thank Carter Brown, Ankan Bansal, Kilho Son and Anirban Roy for many helpful discussions.
- (1) Fitbit app. https://www.fitbit.com/app. Accessed: 2017-11-14.
- (2) My diet coach. https://play.google.com/store/apps/details?id=com.dietcoacher.sos. Accessed: 2017-11-14.
- (3) Myfitnesspal. https://www.myfitnesspal.com. Accessed: 2017-11-14.
- (4) Alessandro Bergamo and Lorenzo Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In Advances in neural information processing systems, pages 181–189, 2010.
- (5) Vinay Bettadapura, Edison Thomaz, Aman Parnami, Gregory D Abowd, and Irfan Essa. Leveraging context to support automated food recognition in restaurants. In Applications of Computer Vision (WACV), 2015 IEEE Winter Conference on, pages 580–587. IEEE, 2015.
- (6) Hakan Bilen and Andrea Vedaldi. Weakly supervised deep detection networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2846–2854, 2016.
- (7) M. Bolaños and P. Radeva. Simultaneous food localization and recognition. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 3140–3145, Dec 2016.
- (8) Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision, pages 446–461. Springer, 2014.
- (9) Mei-Yun Chen, Yung-Hsiang Yang, Chia-Ju Ho, Shih-Han Wang, Shane-Ming Liu, Eugene Chang, Che-Hua Yeh, and Ming Ouhyoung. Automatic chinese food identification and quantity estimation. In SIGGRAPH Asia 2012 Technical Briefs, page 29. ACM, 2012.
- (10) Xinlei Chen and Abhinav Gupta. Webly supervised learning of convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1431–1439, 2015.
- (11) Ramazan Gokberk Cinbis, Jakob Verbeek, and Cordelia Schmid. Weakly supervised object localization with multi-fold multiple instance learning. IEEE transactions on pattern analysis and machine intelligence, 39(1):189–203, 2017.
- (12) Felicia Cordeiro, Elizabeth Bales, Erin Cherry, and James Fogarty. Rethinking the mobile food journal: Exploring opportunities for lightweight photo-based capture. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3207–3216. ACM, 2015.
- (13) Thibaut Durand, Nicolas Thome, and Matthieu Cord. WELDON: Weakly supervised learning of deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4743–4752, 2016.
- (14) Giovanni Maria Farinella, Dario Allegra, Marco Moltisanti, Filippo Stanco, and Sebastiano Battiato. Retrieval and classification of food images. Computers in biology and medicine, 77:23–39, 2016.
- (15) Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. Learning visual features from large weakly supervised data. In European Conference on Computer Vision, pages 67–84. Springer, 2016.
- (16) Taichi Joutou and Keiji Yanai. A food image recognition system with multiple kernel learning. In Image Processing (ICIP), 2009 16th IEEE International Conference on, pages 285–288. IEEE, 2009.
- (17) Yoshiyuki Kawano and Keiji Yanai. Automatic expansion of a food image dataset leveraging existing categories with domain adaptation. In ECCV Workshops (3), pages 3–17, 2014.
- (18) Dahun Kim, Donghyeon Cho, Donggeun Yoo, and In So Kweon. Two-phase learning for weakly supervised object localization. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
- (19) Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, and Li Fei-Fei. The unreasonable effectiveness of noisy data for fine-grained recognition. In European Conference on Computer Vision, pages 301–320. Springer, 2016.
- (20) Krishna Kumar Singh and Yong Jae Lee. Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
- (21) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
- (22) Chang Liu, Yu Cao, Yan Luo, Guanling Chen, Vinod Vokkarane, and Yunsheng Ma. Deepfood: Deep learning-based food image recognition for computer-aided dietary assessment. In International Conference on Smart Homes and Health Telematics, pages 37–48. Springer, 2016.
- (23) Renfeng Liu. Food recognition and detection with minimum supervision. 2016.
- (24) Austin Meyers, Nick Johnston, Vivek Rathod, Anoop Korattikara, Alex Gorban, Nathan Silberman, Sergio Guadarrama, George Papandreou, Jonathan Huang, and Kevin P Murphy. Im2calories: towards an automated mobile vision food diary. In Proceedings of the IEEE International Conference on Computer Vision, pages 1233–1241, 2015.
- (25) Maxime Oquab, Léon Bottou, Ivan Laptev, and Josef Sivic. Is object localization for free?-weakly-supervised learning with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 685–694, 2015.
- (26) Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural networks robust to label noise: a loss correction approach. arXiv preprint arXiv:1609.03683, 2016.
- (27) Manika Puri, Zhiwei Zhu, Qian Yu, Ajay Divakaran, and Harpreet Sawhney. Recognition and volume estimation of food intake using a mobile device. In Applications of Computer Vision (WACV), 2009 Workshop on, pages 1–8. IEEE, 2009.
- (28) Ashutosh Singla, Lin Yuan, and Touradj Ebrahimi. Food/non-food image classification and food categorization using pre-trained googlenet model. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, pages 3–11. ACM, 2016.
- (29) Sainbayar Sukhbaatar and Rob Fergus. Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080, 2(3):4, 2014.
- (30) Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, pages 4278–4284, 2017.
- (31) Xin Wang, Devinder Kumar, Nicolas Thome, Matthieu Cord, and Frederic Precioso. Recipe recognition with large multimodal food dataset. In Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on, pages 1–6. IEEE, 2015.
- (32) Xin-Jing Wang, Lei Zhang, Xirong Li, and Wei-Ying Ma. Annotating images by mining image search results. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1919–1932, 2008.
- (33) Keiji Yanai and Yoshiyuki Kawano. Food image recognition using deep convolutional network with pre-training and fine-tuning. In Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on, pages 1–6. IEEE, 2015.
- (34) Weiyu Zhang, Qian Yu, Behjat Siddiquie, Ajay Divakaran, and Harpreet Sawhney. “snap-n-eat” food recognition and nutrition estimation on a smartphone. Journal of diabetes science and technology, 9(3):525–533, 2015.
- (35) Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
- (36) Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.