The development of automatic nutrition diaries, which would allow users to objectively keep track of everything they eat, could enable a whole new world of possibilities for people concerned about their nutrition patterns. With this purpose, in this paper we propose the first method for simultaneous food localization and recognition. Our method is based on two main steps: first, producing a food activation map on the input image (i.e. a heat map of probabilities) for generating bounding box proposals and, second, recognizing each of the food types or food-related objects present in each bounding box. We demonstrate that our proposal, compared to the most similar problem nowadays, object localization, is able to obtain high precision and reasonable recall levels with only a few bounding boxes. Furthermore, we show that it is applicable to both conventional and egocentric images.
The analysis of people’s nutrition habits is one of the most important mechanisms for thoroughly monitoring several medical conditions (e.g. diabetes, obesity, etc.) that affect a high percentage of the global population. In most cases, interventional psychologists ask people to keep a detailed manual record of their daily meals. However, as previous studies have proven, people usually tend to underestimate their food intake by up to 33%. Hence, methods for automatically logging one’s meals could not only make the process easier, but also make it objective, independent of the user’s own point of view and interpretation.
One of the solutions adopted recently that could ease the automatic construction of nutrition diaries is to ask individuals to take photos with their mobile phones. An alternative technique is visual lifelogging, which consists of using a wearable camera that automatically captures pictures from the user's point of view (the egocentric point of view) with the aim of analysing different patterns of his/her daily life and extracting highly relevant information like nutritional habits. By developing algorithms for food detection and food recognition that can be applied to mobile or lifelogging images, we can automatically infer the user’s eating pattern. However, an important consideration when working with mobile or egocentric images is that they are usually of lower quality than conventional images due to the lower quality of portable hardware components. In addition, the analysis of egocentric images is harder because the pictures are non-intentionally taken and captured from a lateral point of view, causing motion blur, significant partial occlusions, and bad lighting conditions (Fig. 1).
A relatively recent technology that can leverage the automatic construction of nutrition diaries is Deep Learning and, more precisely, on the Computer Vision side, Convolutional Neural Networks (CNNs). These networks are able to learn complex spatial patterns from images. Thanks to the appearance of huge annotated datasets, the performance of these models has soared, improving the state of the art of many Computer Vision problems.
In this paper, we propose a novel and fast approach based on CNNs for detecting and recognizing food in both conventional and egocentric vision pictures. Our contributions are four-fold: 1) we propose the first food-related object localization algorithm, which is specifically trained to distinguish images containing generic food and is able to propose several bounding boxes containing generic food (without particular classes) in a single image, 2) we propose a food recognition algorithm, which learns by re-using food-related knowledge and can be applied on top of the food localization method, 3) we present the first egocentric dataset for food localization and recognition, and 4) we demonstrate that our methodology is useful for both conventional and egocentric pictures. Our contribution for food localization, inspired by a previous food detection method, starts by training a binary food/non-food CNN classifier for food detection; then, a simple and easy-to-interpret mechanism that allows us to generate food probability maps is learned on top of it. Finally, we propose an optimized method for generating bounding boxes on the obtained maps. Note that, as the desired application of the method is the generation of automatic nutrition diaries, we should detect not only food, but also food-related objects (e.g. bottles, cups, etc.). With this in mind, we collected data from complementary and varied datasets containing either food or non-food pictures (see section IV-B). To the best of our knowledge, no work in the literature considers these categories. Without loss of generality, we add the categories related to food-related objects to the food categories, referring to all of them as food. On the food recognition part, inspired by previous findings on transfer learning, we prove that, when only small datasets are available for our problem, we can apply transfer learning by performing a chain of fine-tunings on a CNN, getting closer to our target domain (food types or food-related object recognition) and achieving a better-performing network.
The organization of this paper is as follows. In section II, we review the state of the art in food detection/localization and recognition. In section III, we explain the proposed methodology. In section IV, we describe the datasets used, the experimental setup, and present and discuss our results. Finally, in section V, we review the contributions, the limitations of the method and future directions.
Considering that no works for simultaneous food localization and recognition have yet been presented in the literature, in the following we review the most recent works devoted to food detection and food recognition separately.
Food Detection and Localization: the problem of food detection has typically been addressed as a binary classification problem, where the algorithm simply has to distinguish whether a given image represents food or not [12, 1, 11]. A different approach is applied in several papers [3, 25, 18, 5], which first segment and separately classify the components or ingredients of the food and then apply a joint dish recognition.
The main problem of both approaches is that they assume that the dish was previously localized and is therefore centered in the image. Instead, in the context of food localization, we are interested in finding the precise generic regions (or bounding boxes) in an image where any kind of food is present.
Although no methods have been presented specifically for food localization, several works have focused on generic object localization, usually also called object detection. These methods could be used as a first step for food localization if they are followed by a food/non-food classification applied on the obtained regions. Selective Search, considered one of the best methods in the state of the art, applies a hierarchical segmentation and grouping strategy to find objects at different scales. Object detection methods that obtain generic object proposals intend to detect as many objects in the image as possible in order to optimize recall; thus, they need to propose hundreds or thousands of candidates, leading to near-null precision. An open question is how to obtain straightforward object localization methods that achieve high precision and high recall at the same time. An alternative to generic object localization methods are methods trained to localize a set of predetermined objects, like Faster R-CNN, a powerful end-to-end CNN optimized for localizing a set of 20 specific object classes.
Food Recognition: several authors have recently focused on food recognition. Most of them [4, 9, 7, 15, 11, 23, 14] have analyzed which features and models are more suitable for this problem. In their works, they have tested various methods for obtaining hand-crafted features in addition to exploring the use of different CNNs. Some of the best results were obtained by training a CNN on the Food101 database, with 101 food categories, proving that pre-training and then fine-tuning with in-domain food images can improve classification performance. The best results on the UECFOOD256 database, which contains 256 food categories, were obtained by Yanai et al., who used a network pre-trained on mixed food and object images to improve the final food recognition performance. Some papers [18, 5] take a step further and use additional information, like GPS location, to recognize the restaurant where the picture was taken and improve the classification results.
In this section, we will describe the proposed methodology (see Fig. 2) in two steps: a) creating a generic food localizer, and b) training a fine-grained food recognition method by applying transfer learning.
Our food-specialised algorithm detects image regions containing any kind of food, and is reliable enough to keep both precision and recall high with only a few bounding boxes. In order to achieve fast inference, we propose to use a CNN trained on food detection. Then, we adapt it with a Global Average Pooling (GAP) layer capable of generating Food Activation Maps (FAM), i.e. heat maps of foodness probability. Finally, we extract candidates from the FAM in the form of bounding boxes (see the pipeline in Fig. 3).
1) Food vs Non Food classifier: the first step towards a generic food localizer is to train a CNN for binary food classification. We chose the GoogleNet architecture due to its proven high performance on several Computer Vision tasks. We trained the CNN with the Deep Learning framework Keras (https://github.com/MarcBS/keras). To obtain faster convergence, we fine-tuned a GoogleNet previously trained on ILSVRC data for our binary classification task.
2) Fine-tuning for FAM generation: once we had a model capable of distinguishing Food vs Non Food images, we applied the following steps: 1) remove the last two inception modules and the subsequent average pooling layer from the GoogleNet to obtain a 14x14 pixel resolution (this provides a high enough spatial resolution for a final spatial classification), 2) introduce a new convolutional layer with 1024 kernels of size 3x3 and stride 1, 3) introduce a GAP layer that summarizes the information captured by each kernel, and 4) set a new softmax layer for our binary problem. After preparing the architecture, we applied an additional fine-tuning on the binary problem to learn the newly introduced layers.
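As an informal illustration, the forward pass of the adapted head (steps 2-4) can be sketched in a few lines of numpy; the shapes follow the text, but the weights are random placeholders rather than trained parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Output of the new convolutional layer: 1024 kernels at 14x14 resolution.
conv_out = rng.random((1024, 14, 14))

# GAP layer: spatial average of each kernel's activation map.
gap = conv_out.mean(axis=(1, 2))            # shape (1024,)

# Binary softmax layer on top of the GAP features (Non Food, Food).
W = rng.random((2, 1024))                   # placeholder softmax weights
logits = W @ gap
probs = np.exp(logits - logits.max())
probs = probs / probs.sum()                 # food / non-food posterior
```

Because the softmax acts directly on the spatially averaged kernel responses, the class weights in `W` can later be pushed back onto the individual activation maps, which is what makes the FAM computation possible.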
Note that, instead of generating a map per class as done in Zhou et al. , we focus on obtaining a food-specific activation map that should be generic for any kind of food.
At inference time, our GoogleNet-GAP Food vs Non Food network only has to: 1) apply a forward pass deciding whether the image contains food or not (softmax layer) and 2) compute the following equation for FAM generation:

$\mathrm{FAM}(x, y) = \sum_{k} w_k \, f_k(x, y),$

where $k$ identifies each of the kernels in the deep convolutional layer, $w_k$ is the weighting term of the softmax layer for the class food associated to kernel $k$, and $f_k(x, y)$ is the activation of the $k$-th kernel at pixel $(x, y)$.
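This weighted sum can be computed in a single numpy operation; in the minimal sketch below, the activations and softmax weights are random placeholders, while the shapes (1024 kernels, 14x14 maps) follow the text:

```python
import numpy as np

K, H, W = 1024, 14, 14
rng = np.random.default_rng(0)

f = rng.random((K, H, W))    # f_k(x, y): activation of kernel k at pixel (x, y)
w_food = rng.random(K)       # w_k: softmax weight of kernel k for class "food"

# FAM(x, y) = sum_k w_k * f_k(x, y): contract over the kernel axis.
fam = np.tensordot(w_food, f, axes=1)       # shape (14, 14)

# Normalize to [0, 1] so thresholds can be expressed as a fraction of the max.
fam_norm = (fam - fam.min()) / (fam.max() - fam.min())
```

The normalized map can then be thresholded relative to its maximum, as done in the bounding box generation step.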
3) Bounding box generation: as the last step, in order to extract bounding box proposals, we apply a four-step method: 1) pick all regions above a certain threshold $t$, expressed as a percentage of the maximum FAM value, 2) remove all regions covering less than a certain percentage $s$ of the original image size, 3) generate a bounding box for each of the selected regions, and 4) expand the bounding boxes by a certain percentage $e$. All three parameters were estimated through a cross-validation procedure on the validation set (see section IV-D).
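The four steps above can be sketched as follows. This is an illustrative implementation rather than the authors' code, and the default values of the threshold, size, and expansion parameters are placeholders:

```python
import numpy as np

def connected_regions(mask):
    """4-connected components of a boolean mask (simple BFS labelling)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                regions.append(pixels)
    return regions

def fam_to_boxes(fam, t=0.5, s=0.01, e=0.05):
    """Turn a food activation map into (x0, y0, x1, y1) box proposals."""
    h, w = fam.shape
    mask = fam >= t * fam.max()              # 1) keep pixels above t * max(FAM)
    boxes = []
    for pixels in connected_regions(mask):
        if len(pixels) < s * h * w:          # 2) drop regions below s * image area
            continue
        ys, xs = zip(*pixels)
        x0, y0 = min(xs), min(ys)            # 3) bounding box of the region
        x1, y1 = max(xs), max(ys)
        dx = e * (x1 - x0 + 1)               # 4) expand each side by e,
        dy = e * (y1 - y0 + 1)               #    clipped to the image borders
        boxes.append((max(0, x0 - dx), max(0, y0 - dy),
                      min(w - 1, x1 + dx), min(h - 1, y1 + dy)))
    return boxes
```

In practice the FAM would be upscaled from 14x14 to the input image resolution before extracting boxes; the procedure is the same either way.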
After obtaining a generic object localizer, the final step in our approach is to classify each of the detected regions as a type of food. Again, to obtain a high-performing network and faster convergence, we fine-tuned the GoogleNet pre-trained on ILSVRC. In addition, considering that our food recognition network has to overcome the limited data quantity of most food classification datasets, we propose applying an additional pre-training to the network. This supervised pre-training serves as a fine-grained parameter adaptation in which the network extracts valuable knowledge from an extensive food recognition dataset before the final in-domain fine-tuning. For this purpose, we re-trained the GoogleNet, previously trained on ILSVRC, on the Food101 dataset.
At the end, we fine-tuned the network on the target domain data (either UECFood256 or EgocentricFood). To obtain as few false positives as possible, we added an additional class containing Non Food samples to the final food recognition network, enabling the system to discard false food regions detected by the localization method.
In this section we describe the different datasets used for performing the tests; the pre-processing applied to them; the metrics used for testing the localization algorithm; the experimental setup; and, finally, the results and performance of our localization and recognition techniques.
In the following we describe all the datasets used in this work, either for food localization, for food recognition, or for both.
PASCAL VOC 2012 : dataset for object localization consisting of more than 10,000 images with bounding boxes of 20 different classes (none of them related to food).
ILSVRC 2013 : dataset similar to PASCAL with more than 400,000 images and 1,000 classes for training and validation (with a subset of classes related to food).
Food101 : dataset for food recognition that consists of 101 classes of typical foods around the world, having each class 1,000 different samples.
UECFood256 : dataset for food localization and recognition. It consists of 256 different international dishes with at least 100 samples each. The dataset was collected by the authors from images on the web, which means that they can be captured either by conventional cameras or by smartphones.
EgocentricFood (www.ub.edu/cvub/egocentricfood/): first dataset of egocentric images for food-related object localization and recognition. It was collected using the wearable camera Narrative Clip and consists of 9 different classes (glass, cup, jar, can, mug, bottle, dish, food, basket), totalling 5038 images and 8573 bounding boxes.
In the following we detail the data pre-processing applied for each of the learning steps and classifiers.
Food vs Non Food training: we used three different datasets. Food101, where all the images were treated as positive samples (class Food); we used the training split provided by the authors to generate training (80%) and validation (20%) splits balanced along all classes. PASCAL, where an object detector was used to extract 50 object proposals per image on the ’trainval’ set; all the resulting bounding boxes were treated as negative samples (class Non Food), and again we divided the data into 80/20% for training and validation. And ILSVRC, where we selected the 70 classes (or synsets) of food or food-related objects available; in this case, we only used the training/validation split provided by the authors, and the bounding boxes were extracted and used as positive samples (class Food).
Food Recognition training: we used the Food101 dataset as the first dataset for fine-tuning the food recognition network pre-trained on ILSVRC. The previously applied 80/20% split of the training set provided by the authors was used for training and validation, respectively. The test set provided was used for testing. For the second fine-tuning, the same pre-processing was applied on both UECFood256 and EgocentricFood: a random 70/10/20% split of images was applied for training/validation/testing on each class separately and the bounding boxes were extracted.
Joint Localization and Recognition tests: the previous 70/10/20% split was also used in the joint localization and recognition tests. We made sure that any image containing more than one instance was included in only one split.
The metric used for evaluating the results of a localization algorithm is the Intersection over Union (IoU). This metric defines how precise the predicted bounding box (bb) is with respect to the ground truth (GT) annotation, and is defined as:

$\mathrm{IoU}(bb, GT) = \frac{\mathrm{area}(bb \cap GT)}{\mathrm{area}(bb \cup GT)},$

where usually a bounding box is considered valid when its $\mathrm{IoU} \geq 0.5$. The other evaluation metrics used are Precision = TP/(TP+FP), Recall = TP/(TP+FN), and Accuracy = TP/(TP+FP+FN), where the true positives (TP) are the bounding boxes correctly localized, the false positives (FP) are the predicted bounding boxes that do not exist in the ground truth, and the false negatives (FN) are the ground truth samples missed by the model. Note that, following the usual convention, if more than one bounding box overlaps the same GT object, only one is considered a TP; the rest are FPs.
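These quantities are straightforward to compute; a minimal sketch follows, assuming boxes are given as (x0, y0, x1, y1) tuples (the box format is our assumption):

```python
def iou(bb, gt):
    """Intersection over Union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(bb[0], gt[0]), max(bb[1], gt[1])
    ix1, iy1 = min(bb[2], gt[2]), min(bb[3], gt[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(bb) + area(gt) - inter
    return inter / union if union > 0 else 0.0

def detection_metrics(tp, fp, fn):
    """Precision, recall, and accuracy from TP/FP/FN counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, accuracy
```

Note that this "accuracy" is the detection accuracy TP/(TP+FP+FN), which penalizes both missed objects and spurious proposals, rather than the usual classification accuracy.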
Table I: Dataset | Pre-training | Validation Accuracy | Test Accuracy
The Food vs Non Food binary network used for food localization was trained for 24,000 iterations with a batch size of 50 and a learning rate of 0.001. A decay of 0.1 was applied every 6,000 iterations. The final validation accuracy achieved on the binary problem was 95.64%. During localization, the bounding box generation is applied on the FAM only if the image was classified as containing food by the softmax (see Fig. 3). A grid search was applied on the localization validation set to choose the best hyperparameters for localization: the threshold, size, and expansion percentages $t$, $s$, and $e$. The values tested ranged from 0.2 to 1 in increments of 0.2 for both $t$ and $s$, and from 0.0 to 0.1 in increments of 0.02 for $e$.
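As an illustration, the exhaustive search over the three bounding-box parameters could look like the following sketch. The assignment of the 0.2-1 ranges to the threshold and size parameters is our reading of the text, and `evaluate` is a hypothetical stand-in for the validation accuracy of the localizer under a given parameter triple:

```python
import itertools

# Candidate values, mirroring the ranges described in the text.
t_values = [0.2, 0.4, 0.6, 0.8, 1.0]     # FAM threshold (fraction of max)
s_values = [0.2, 0.4, 0.6, 0.8, 1.0]     # minimum region size (fraction of image)
e_values = [0.0, 0.02, 0.04, 0.06, 0.08, 0.1]  # box expansion percentage

def grid_search(evaluate):
    """Return the (t, s, e) triple maximizing the given validation score."""
    return max(itertools.product(t_values, s_values, e_values),
               key=lambda params: evaluate(*params))
```

With 5 x 5 x 6 = 150 combinations, an exhaustive search is cheap, since evaluating each triple only requires re-running the (fast) box-generation step on precomputed FAMs.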
Considering that no food localization methods currently exist, we used Selective Search and Faster R-CNN as baselines, these being two of the top-performing object localization methods. The former obtains generic object proposals and the latter is optimized for localizing PASCAL’s classes (although we treat its predictions as generic proposals).
For the food recognition models, first, the GoogleNet-ILSVRC model was re-trained on Food101 using Caffe, achieving the best validation accuracy after 448,000 iterations. A batch size of 16 and a learning rate of 0.001 with a decay of 0.5 every 50,000 iterations were used. The model was converted to Keras before applying the final fine-tuning on the respective datasets, UECFood256 or EgocentricFood.
During the joint localization and recognition tests, a bounding box is considered a TP if and only if it is both correctly localized (with a minimum IoU value of 0.5) and correctly recognized.
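The TP/FP/FN counting convention described above (class must match, overlap must reach the minimum IoU, and each GT object can yield at most one TP) can be sketched as follows; this is our illustrative implementation, with the IoU function passed in as a parameter:

```python
def match_detections(preds, gts, iou_fn, min_iou=0.5):
    """Greedy TP/FP/FN counting for joint localization and recognition.

    preds, gts: lists of (box, label) pairs. A prediction is a TP iff it
    overlaps a not-yet-matched GT of the same class with IoU >= min_iou;
    each GT is matched at most once, so duplicate detections count as FPs.
    """
    matched = [False] * len(gts)
    tp = fp = 0
    for box, label in preds:
        hit = False
        for i, (gbox, glabel) in enumerate(gts):
            if not matched[i] and label == glabel and iou_fn(box, gbox) >= min_iou:
                matched[i] = True
                hit = True
                break
        if hit:
            tp += 1
        else:
            fp += 1
    fn = matched.count(False)   # GT objects no prediction accounted for
    return tp, fp, fn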
Taking into account that some of the tested methods lack the capability of providing a localization score for each region, we are not able to calculate a Precision-Recall curve. For this reason, we chose the accuracy as our guideline for comparison, which captures a trade-off between the capability of the methods to find all the objects present (Recall) and to produce as few mis-localizations as possible (Precision). We chose the best parameters on the combined validation set (UECFood256 and EgocentricFood) in terms of the average accuracy value among all the IoU scores, obtaining the final values for $t$, $s$, and $e$.
In Fig. 4 we can see the precision, recall and accuracy curves obtained by the different localization methods.
Comparing the methods in terms of precision, it can be appreciated that ours outperforms the other methods in all cases. This pattern is easy to explain given that any generic object localization method (Selective Search in this case) usually outputs several thousand proposals per image (see some examples in Fig. 5), producing many FPs. In comparison, Faster R-CNN only provides a few tens of proposals per image, given that it is optimized for finding bounding boxes of the specific classes in the PASCAL dataset. This means that it can focus on the most interesting proposals per class, which is a great advantage compared to Selective Search and makes its precision higher. Even so, it is still far from optimal, considering that there are usually fewer than 10 food-related elements in an image. Note that, curiously, Faster R-CNN is able to find food-related objects even without being optimized to do so. Comparing the methods in terms of recall, Selective Search, in contrast to our method and Faster R-CNN, is clearly the best, given that its goal is to find any object appearing in the image even at the cost of sacrificing precision. We can see that, although in most cases our method and Faster R-CNN are on par, on EgocentricFood the latter is better. This can be explained by the fact that the purpose of Faster R-CNN, which is to localize objects, is more aligned with the annotations found in EgocentricFood, which are of food-related objects. If we compare the methods in terms of accuracy, we can see that our proposal, which obtains more balanced precision-recall results, outperforms both state-of-the-art methods on UECFood256 and the combined datasets, and is on par with Faster R-CNN on EgocentricFood.
As we have seen, a great part of the bounding boxes proposed by our method are correctly predicted. However, this ability is also its weak point in terms of recall, where it obtains lower values because it is not always able to find all the food-related elements in the image, especially when they are very close or overlapping.
Additionally, comparing the methods in terms of execution time, Selective Search needs an average of 0.8s per image, Faster R-CNN needs 0.2s, and our localization method needs only 0.06s using a GPU and a batch size of 25. Thus, it allows near real-time inference.
On the food recognition side, the results of the different trainings performed can be seen in Table I. Note that the results are comparable to the state of the art on food recognition, both on Food101 and on UECFood256. We can see that, when fine-tuning a model that is already adapted for food recognition, we obtain better accuracy. The difference is more remarkable on UECFood256 because all the samples in that dataset are types of food, while EgocentricFood is more focused on food-related objects.
Finally, we test the whole proposed localization and recognition pipeline. We present the final results, fixing the minimum IoU to 0.5, in Table II. To take into account the results of both steps at the same time, we evaluated the precision, recall, and accuracy separately for each class and applied a final mean over all the classes. Note that when combining both datasets, we have a total of 265 classes (256 from UECFood256 and 9 from EgocentricFood). Our method is able to find most of the food-related objects in the UECFood256 dataset with only a few bounding boxes (usually at most 5). On the EgocentricFood dataset the difficulty of the problem becomes clear, as there are three additional issues to overcome: 1) the quality of the pictures is lower and objects are captured from a lateral point of view, 2) some classes are ambiguous and difficult to distinguish from non-food-related objects, and 3) a great part of the samples are occluded and far from the camera wearer (see examples in Figs. 1 and 6).
Finally, in Fig. 6 we show some examples of the complete method. In some cases, GT ambiguity produces recognition or localization errors. For instance, in the first image, in the bottom-right zone, we can see a glass (GT) with a lemon (food prediction) inside, and in the second one, a dish in the foreground (GT) and a bounding box of bread in the dish (food prediction).
We proposed the first methodology for simultaneous food localization and recognition. Our method is applicable to conventional and to egocentric point-of-view images. We have proven that this methodology outperforms the baseline achieved by generic object localizers. As future work, we will focus on the ability of the method to distinguish very close or overlapping food-related objects.
Work partially funded by TIN2015-66951-C2-1-R, SGR 1219 and an ICREA Academia’2014 grant. We acknowledge NVIDIA for the donation of a GPU and M. Ángeles Jiménez for her collaboration.