Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have a negative effect on health. People are generally considered obese when their Body Mass Index (BMI) is over 30. A high BMI is associated with an increased risk of diseases such as heart disease and type 2 diabetes [BMI]. Unfortunately, more and more people meet the criteria for obesity. The main cause of obesity is the imbalance between the amount of food intake and the energy consumed by the individual. Conventional dietary assessment methods include the food diary, the 24-hour recall, and the food frequency questionnaire (FFQ) [FFQ], which require obese patients to record all food intake each day. In most cases, patients have trouble estimating the amount of food intake because they are unwilling to record it or lack the related nutritional knowledge. Computer vision-based measurement methods, which estimate calories from food images that include calibration objects, have therefore been applied, and obese patients have benefited greatly from them.
In recent years, many computer vision-based methods have been proposed to estimate calories [circle_plate; collaboration_card; ebutton; mobile_cloud]. For these methods, the accuracy of the estimation result is determined by two main factors: the object detection algorithm and the volume estimation method. For object detection, classification algorithms such as Support Vector Machines (SVM) [SVM] are used to recognize the food's type under general conditions. For volume estimation, the calibration of the food and the volume calculation are the two key issues. For example, when a circle plate [circle_plate] is used as the calibration object, it is detected by ellipse detection, and the volume of the food is estimated by applying the corresponding shape model. Another example uses a person's thumb as the calibration object: the thumb is detected by color space conversion [RGB2YCBCR], and the volume is estimated by simply treating the food as a column. However, thumb skin color is not stable across users, and it is not guaranteed that every person's thumb can be detected. The involvement of human assistance [collaboration_card] can improve the accuracy of estimation but consumes more time, which makes it less useful for obesity treatment. After obtaining a food's volume, its calorie content is calculated by looking up its density in a food density table [density_table] and its energy in a nutrition table (http://www.hc-sc.gc.ca/fn-an/nutrition/fiche-nutri-data/nutrient_value-valeurs_nutritives-tc-tm-eng.php). Although the methods mentioned above have been used to estimate calories, the accuracy of detection and volume estimation still needs to be improved.
In this paper, we propose our calorie estimation method. The method takes two food images as its inputs: a top view and a side view; each image includes a calibration object which is used to estimate the image's scale factor. The food(s) and the calibration object are detected by an object detection method, Faster R-CNN, and each food's contour is obtained by applying the GrabCut algorithm. After that, we estimate each food's volume and calorie.
2 Material and Methods
2.1 Calorie Estimation Method Based On Deep Learning
Figure 1 shows the flowchart of the proposed method. Our method includes 5 steps: image acquisition, object detection, image segmentation, volume estimation, and calorie estimation. To estimate calories, the user is required to take a top view and a side view of the food with his/her smart phone before eating. Each image used for estimation must include a calibration object; in our experiments, we use a One Yuan coin as the reference.
In order to get better results, we choose Faster Region-based Convolutional Neural Networks (Faster R-CNN) [fasterrcnn] to detect objects and GrabCut [grabcut] as the segmentation algorithm.
2.2 Deep Learning-Based Object Detection
We do not use a semantic segmentation method such as Fully Convolutional Networks (FCN) [fcn], but choose Faster R-CNN instead. Faster R-CNN is a framework based on deep convolutional networks that includes a Region Proposal Network (RPN) and an object detection network [fasterrcnn]. Given an RGB image as input, it outputs a series of bounding boxes, and the class of each bounding box is judged.
After the detection of the top view, we get a series of bounding boxes box_1^T, box_2^T, …, box_m^T. For the i-th (i = 1, 2, …, m) bounding box box_i^T, its food type is type_i^T. Among these bounding boxes, we take the box c_T judged as the calibration object with the highest score to calculate the scale factor of the top view. In the same way, after the detection of the side view, we get a series of bounding boxes box_1^S, box_2^S, …, box_n^S. For the j-th (j = 1, 2, …, n) bounding box box_j^S, its food type is type_j^S. And we take the box c_S judged as the calibration object with the highest score to calculate the scale factor of the side view.
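This post-processing step can be sketched as follows. The detection record format (dicts with "label", "score", and "box" keys) is a hypothetical stand-in for the network's actual output, used only for illustration:

```python
# Split detections into food boxes and the single highest-scoring
# calibration-object (coin) box, as described above.
# The detection format is a hypothetical stand-in, not Faster R-CNN's
# real output structure.

def split_detections(detections, calibration_label="coin"):
    food_boxes = [d for d in detections if d["label"] != calibration_label]
    coins = [d for d in detections if d["label"] == calibration_label]
    # Keep only the calibration box with the highest score.
    coin_box = max(coins, key=lambda d: d["score"]) if coins else None
    return food_boxes, coin_box

dets = [
    {"label": "apple", "score": 0.98, "box": (10, 10, 120, 130)},
    {"label": "coin",  "score": 0.61, "box": (200, 40, 230, 70)},
    {"label": "coin",  "score": 0.93, "box": (198, 42, 228, 72)},
]
foods, coin = split_detections(dets)
```

The same routine is run once on the top view and once on the side view, yielding c_T and c_S respectively.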
2.3 Image Segmentation
After detection, we need to segment each bounding box. GrabCut is an image segmentation approach based on optimization by graph cuts [grabcut]. Running GrabCut requires a bounding box as the initial foreground area, which Faster R-CNN provides. Although asking the user to label foreground/background pixels can give better results, we avoid it so that our system can finish calorie estimation without the user's assistance. For each bounding box, we obtain a precise contour after applying the GrabCut algorithm. After segmentation, we get a series of food images P_1^T, P_2^T, …, P_m^T and P_1^S, P_2^S, …, P_n^S. The size of P_i^T is the same as the size of box_i^T (i = 1, 2, …, m), but the values of background pixels are replaced with zeros, so that only the foreground pixels remain. GrabCut is not applied to the calibration object boxes c_T and c_S. After image segmentation, we can estimate every food's volume and calorie.
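The zeroing of background pixels can be sketched as follows. The tiny single-channel crop and the hand-made binary mask below are toy stand-ins; in the actual pipeline the mask would come from GrabCut:

```python
# Keep only foreground pixels of a cropped bounding box: background
# entries are replaced with zeros. The crop and mask are toy examples;
# a real mask would be produced by GrabCut.

def apply_mask(crop, mask):
    return [[pix if keep else 0 for pix, keep in zip(row, mask_row)]
            for row, mask_row in zip(crop, mask)]

crop = [[9, 9], [9, 9]]   # 2x2 single-channel crop
mask = [[1, 0], [0, 1]]   # 1 = foreground, 0 = background
seg = apply_mask(crop, mask)
```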
2.4 Volume Estimation
In order to estimate the volume of each food, we first need to calculate scale factors based on the calibration object. When we use the One Yuan coin as the reference, according to the coin's real diameter (2.50 cm), we can calculate the side view's scale factor α_S with Equation 1:

α_S = 2.5 / ((W_{c_S} + H_{c_S}) / 2)    (1)

where W_{c_S} is the width and H_{c_S} is the height (in pixels) of the bounding box c_S.
Then the top view's scale factor α_T is calculated with Equation 2:

α_T = 2.5 / ((W_{c_T} + H_{c_T}) / 2)    (2)

where W_{c_T} is the width and H_{c_T} is the height (in pixels) of the bounding box c_T.
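A minimal sketch of the scale-factor computation, averaging the coin box's width and height as the pixel-space estimate of the coin's diameter:

```python
# Scale factor (cm per pixel) from the coin's bounding box.
# The average of the box's width and height approximates the coin's
# diameter in pixels; the real diameter is 2.50 cm.
COIN_DIAMETER_CM = 2.5

def scale_factor(box_w, box_h):
    return COIN_DIAMETER_CM / ((box_w + box_h) / 2.0)

alpha_s = scale_factor(50, 48)   # hypothetical side-view coin box, 50x48 px
```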
For each food image P_i^T (i = 1, 2, …, m), we try to find an image in the set P_1^S, P_2^S, …, P_n^S with the same food type. If type_i^T is equal to type_j^S (j ∈ {1, 2, …, n}), P_j^S is marked so that it will not be used again; then P_i^T and P_j^S are used to calculate this food's volume. We divide foods into three shape types: ellipsoid, column, and irregular. According to the food's shape type, we select the corresponding volume estimation formula, as shown in Equation 3.
v = β Σ_{k=1}^{H_S} π (L_S^k / 2)^2 α_S^3,                  shape = ellipsoid
v = β (s_T α_T^2)(H_S α_S),                                 shape = column
v = β Σ_{k=1}^{H_S} s_T (L_S^k / L_S^{MAX})^2 α_T^2 α_S,    shape = irregular    (3)

In Equation 3, H_S is the number of rows in the side view P^S and L_S^k is the number of foreground pixels in row k. L_S^{MAX} records the maximum of L_S^k over all rows. s_T = Σ_k L_T^k is the number of foreground pixels in the top view P^T, where L_T^k is the number of foreground pixels in row k. β is the compensation factor, whose default value is 1.0.
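A minimal sketch of the three-case volume formula, with each segmented mask summarized as per-row foreground pixel counts (rows_side[k] plays the role of L_S^k, and s_top is the total foreground pixel count of the top view):

```python
import math

# Sketch of the three shape-specific volume formulas.
# rows_side: list of per-row foreground pixel counts in the side view.
# s_top: number of foreground pixels in the top view.
# alpha_s, alpha_t: side/top scale factors (cm per pixel).
# beta: per-type compensation factor, default 1.0.

def estimate_volume(shape, rows_side, s_top, alpha_s, alpha_t, beta=1.0):
    h_s = len(rows_side)
    if shape == "ellipsoid":
        # Stack circular slices whose diameters are the side-view row widths.
        return beta * sum(math.pi * (l / 2.0) ** 2 * alpha_s ** 3
                          for l in rows_side)
    if shape == "column":
        # Top-view cross-section area times side-view height.
        return beta * (s_top * alpha_t ** 2) * (h_s * alpha_s)
    # Irregular: scale the top-view area by each row's relative width.
    l_max = max(rows_side)
    return beta * sum(s_top * (l / l_max) ** 2 * alpha_t ** 2 * alpha_s
                      for l in rows_side)
```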
2.5 Calorie Estimation
After getting the volume of a food, we first estimate its mass with Equation 4:

m = ρ × v    (4)

where v is the volume of the current food and ρ is its density value.
Finally, each food's calorie is obtained with Equation 5:

C = c × m    (5)

where m is the mass of the current food and c is its calories per gram.
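Equations 4 and 5 amount to two multiplications. In the sketch below the density and energy values are illustrative placeholders, not entries from the actual density or nutrition tables:

```python
# Equation 4: mass from volume and density.
# Equation 5: calories from mass and energy per gram.
# The numeric values are made-up placeholders for illustration.

def estimate_mass(volume_cm3, density_g_per_cm3):
    return volume_cm3 * density_g_per_cm3

def estimate_calories(mass_g, kcal_per_gram):
    return mass_g * kcal_per_gram

mass = estimate_mass(200.0, 0.8)       # hypothetical 200 cm^3 food
kcal = estimate_calories(mass, 0.5)    # hypothetical 0.5 kcal/g
```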
3 Results and Discussion
3.1 Dataset Description
For our calorie estimation method, a corresponding image dataset is necessary for evaluation. Several food image datasets [food101; PFID; FooDD; UEC256] have been created so far. But these datasets cannot meet our requirements, so we use our own food dataset, named ECUSTFD (http://pan.baidu.com/s/1o8qDnXC).
ECUSTFD contains 19 kinds of food: apple, banana, bread, bun, doughnut, egg, fried dough twist, grape, lemon, litchi, mango, mooncake, orange, peach, pear, plum, kiwi, sachima, and tomato. For a single food portion, we took several pairs of images with smart phones; each pair contains a top view and a side view. In each image, there is only a One Yuan coin as the calibration object. If two foods appear in the same image, they are of different types. For every image in ECUSTFD, we provide annotations, volume records, and mass records.
3.2 Object Detection Experiment
In this section, we compare the object detection results of Faster R-CNN and another object detection algorithm, Exemplar SVM (ESVM), on ECUSTFD. To avoid using training images for volume estimation in the following experiments, the images of the two sets are selected in order rather than randomly. The numbers of training and testing images are shown in Figure 2. We use Average Precision (AP) to evaluate the object detection results.
On the testing set, Faster R-CNN's mean Average Precision (mAP) is 93.0% while ESVM's mAP is only 75.9%, which means that Faster R-CNN meets our requirements and can be used for object detection.
3.3 Food Calorie Estimation Experiment
In this section, we present our food calorie estimation results. Due to the limits of our experimental equipment, we cannot get each food's calorie as a reference, so our experiments only verify the volume and mass estimation results. First we need to obtain the compensation factor β in Equation 3 and the density ρ in Equation 4 for each food type from the training set. β^t is calculated with Equation 6:

β^t = (1/n^t) Σ_{i=1}^{n^t} v_i^t / v̂_i^t    (6)

where t is the food type and n^t is the number of volume estimations. v_i^t is the real volume of the food in the i-th volume estimation, and v̂_i^t is the estimated volume of the food in the i-th volume estimation.
ρ^t is calculated with Equation 7:

ρ^t = (1/n^t) Σ_{i=1}^{n^t} m_i^t / v_i^t    (7)

where t is the food type and n^t is the number of mass estimations. m_i^t is the real mass of the food and v_i^t is the real volume of the food in the i-th mass estimation.
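Both training-set parameters are per-type averages of simple ratios. A minimal sketch, with made-up sample measurements:

```python
# Equation 6: compensation factor beta^t = mean ratio of real volume to
# estimated volume over the training samples of one food type.
# Equation 7: density rho^t = mean mass-to-volume ratio.
# The numbers below are made-up training measurements for illustration.

def compensation_factor(real_volumes, estimated_volumes):
    n = len(real_volumes)
    return sum(r / e for r, e in zip(real_volumes, estimated_volumes)) / n

def mean_density(real_masses, real_volumes):
    n = len(real_masses)
    return sum(m / v for m, v in zip(real_masses, real_volumes)) / n

beta_t = compensation_factor([100.0, 120.0], [80.0, 100.0])
rho_t = mean_density([90.0, 110.0], [100.0, 120.0])
```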
The shape definition, number of estimation images, β^t, and ρ^t of each food type are shown in Table 1. For example, we use 122 images to calculate the parameters for apple, which means that 61 volume estimation results are used to calculate β^{apple}.
Table 1 (excerpt):

|Food Type|Shape|Estimation images|β^t|ρ^t (g/cm³)|
|fried dough twist|irregular|48|1.22|0.60|

Table 2 (excerpt):

|fried dough twist|44|65.00|54.50|4.78|40.80|66.64|-2.53|
Then we use the images from the testing set to evaluate the volume and mass estimation results; these results are also shown in Table 2. We use the mean volume error to evaluate volume estimation, defined as:

ME_v^t = (2/p^t) Σ_{i=1}^{p^t/2} (v̂_i^t − v_i^t) / v_i^t    (8)

In Equation 8, for food type t, p^t is the number of images Faster R-CNN recognizes correctly. Since we use two images to calculate each volume, the number of volume estimations for type t is p^t/2. v̂_i^t is the estimated volume for the i-th pair of images with food type t, and v_i^t is the corresponding real volume for the same pair of images. The mean mass error is defined as:

ME_m^t = (2/p^t) Σ_{i=1}^{p^t/2} (m̂_i^t − m_i^t) / m_i^t    (9)
In Equation 9, for food type t, the number of mass estimations is p^t/2. m̂_i^t is the estimated mass for the i-th pair of images with food type t, and m_i^t is the corresponding real mass for the same pair of images.
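Both error measures are signed mean relative errors over the test pairs, so positive values indicate over-estimation and negative values under-estimation. A minimal sketch with made-up measurements:

```python
# Signed mean relative error (Equations 8 and 9): average of
# (estimate - reference) / reference over all test pairs of one food type.
# The sample values are made up for illustration.

def mean_relative_error(estimates, references):
    n = len(estimates)
    return sum((e - r) / r for e, r in zip(estimates, references)) / n

vol_err = mean_relative_error([110.0, 95.0], [100.0, 100.0])
```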
Volume and mass estimation results are shown in Table 2. For most types of food in our experiment, the estimation results are close to the reference values. The mean error between the estimated volume and the true volume does not exceed 20%, except for banana, bread, grape, and plum. For some food types such as lemon, our estimation result is close enough to the true value. The mass estimation results are largely in line with the volume estimation results. But for some food types like mooncake and tomato, the mass estimation errors are smaller than the volume estimation errors; the discrepancy is likely due to our volume measurement, as the drainage method is not accurate enough. All in all, our estimation method is effective.
4 Conclusion

In this paper, we presented our calorie estimation method. Our method takes a top view and a side view as its inputs. Faster R-CNN is used to detect the food and the calibration object. The GrabCut algorithm is used to obtain each food's contour. Then the volume is estimated with the volume estimation formulas. Finally, we estimate each food's calorie. The experimental results show that our method is effective.