Deep Learning-Based Food Calorie Estimation Method in Dietary Assessment

06/10/2017 ∙ by Yanchao Liang, et al. ∙ Tencent QQ ∙ East China University of Science and Technology

Obesity treatment requires obese patients to record all food intakes per day. Computer vision has been introduced to estimate calories from food images. To improve the accuracy of food detection and reduce the error of volume estimation, we present our calorie estimation method in this paper. To estimate the calories in food, a top view and a side view of the food are needed. Faster R-CNN is used to detect the food and the calibration object, and the GrabCut algorithm is used to obtain each food's contour. The volume is then estimated from the segmented food and the corresponding calibration object, and finally each food's calories are computed. The experiment results show that our estimation method is effective.


1 Introduction

Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have a negative effect on health. People are generally considered obese when their Body Mass Index (BMI) is over 30. A high BMI is associated with an increased risk of diseases such as heart disease and type 2 diabetes [BMI]. Unfortunately, more and more people meet the criteria for obesity. The main cause of obesity is the imbalance between the amount of food intake and the energy consumed by the individual. Conventional dietary assessment methods include the food diary, 24-hour recall, and food frequency questionnaire (FFQ) [FFQ], which require obese patients to record all food intakes per day. In most cases, patients have trouble estimating the amount of food intake, because they are unwilling to record it or lack the related nutritional information. Computer vision-based measurement methods have therefore been applied to estimate calories from food images that include calibration objects, and obese patients have benefited greatly from these methods.

In recent years, many computer vision-based methods have been proposed to estimate calories [circle_plate, collaboration_card, ebutton, mobile_cloud]. For these methods, the accuracy of the estimation result is determined by two main factors: the object detection algorithm and the volume estimation method. In the aspect of object detection, classification algorithms such as the Support Vector Machine (SVM) [SVM] are used to recognize a food's type under general conditions. In the aspect of volume estimation, the calibration of food and the volume calculation are the two key issues. For example, when a circular plate [circle_plate] is used as the calibration object, it is detected by ellipse detection, and the volume of food is estimated by applying a corresponding shape model. Another example uses a person's thumb as the calibration object: the thumb is detected by color space conversion [RGB2YCBCR], and the volume is estimated by simply treating the food as a column. However, thumb skin color is not stable, and it is not guaranteed that every person's thumb can be detected. Involving human assistance [collaboration_card] can improve the accuracy of estimation but consumes more time, which makes it less useful for obesity treatment. After obtaining a food's volume, its calories are calculated by looking up its density in a food density table [density_table] and its energy in a nutrition table (http://www.hc-sc.gc.ca/fn-an/nutrition/fiche-nutri-data/nutrient_value-valeurs_nutritives-tc-tm-eng.php). Although the methods mentioned above have been used to estimate calories, the accuracy of detection and volume estimation still needs to be improved.

In this paper, we propose our calorie estimation method. The method takes two food images as its inputs: a top view and a side view; each image includes a calibration object, which is used to estimate the image's scale factor. The food(s) and the calibration object are detected by Faster R-CNN, and each food's contour is obtained by applying the GrabCut algorithm. After that, we estimate each food's volume and calories.

2 Material and Methods

2.1 Calorie Estimation Method Based On Deep Learning

Figure 1 shows the flowchart of the proposed method. Our method includes 5 steps: image acquisition, object detection, image segmentation, volume estimation, and calorie estimation. To estimate calories, the user is required to take a top view and a side view of the food with his/her smartphone before eating. Each image used for estimation must include a calibration object; in our experiments, we use a One Yuan coin as the reference.

Figure 1: Calorie Estimation Flowchart

In order to get better results, we choose Faster Region-based Convolutional Neural Networks (Faster R-CNN) [fasterrcnn] to detect objects and GrabCut [grabcut] as the segmentation algorithm.

2.2 Deep Learning-Based Object Detection

We do not use a semantic segmentation method such as Fully Convolutional Networks (FCN) [fcn], but choose Faster R-CNN instead. Faster R-CNN is a framework based on deep convolutional networks; it includes a Region Proposal Network (RPN) and an object detection network [fasterrcnn]. Given an image with RGB channels as input, it outputs a series of bounding boxes, and the class of each bounding box is judged.

After the detection of the top view, we get a series of bounding boxes $box^T_1, box^T_2, \dots, box^T_m$. For the $i$-th bounding box $box^T_i$, its food type is $type^T_i$. Besides these bounding boxes, we take the bounding box $c^T$ that is judged as the calibration object with the highest score to calculate the scale factor of the top view. In the same way, after the detection of the side view, we get a series of bounding boxes $box^S_1, box^S_2, \dots, box^S_n$. For the $j$-th ($j = 1, 2, \dots, n$) bounding box $box^S_j$, its food type is $type^S_j$, and we take the bounding box $c^S$ judged as the calibration object with the highest score to calculate the scale factor of the side view.
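As an illustration, the following Python sketch shows how such detections might be post-processed; the `(box, label, score)` tuple format, the `coin` label, and the score threshold are our own assumptions for this sketch, not part of the paper.

```python
# Minimal sketch (our assumption, not the authors' code): split hypothetical
# Faster R-CNN detections into food boxes and a single calibration box.
def split_detections(detections, coin_label="coin", score_thresh=0.5):
    """detections: list of (box, label, score) with box = (x1, y1, x2, y2)."""
    food_boxes, coin_box, best_coin_score = [], None, 0.0
    for box, label, score in detections:
        if score < score_thresh:
            continue  # discard low-confidence detections
        if label == coin_label:
            # keep only the highest-scoring coin detection as c^T (or c^S)
            if score > best_coin_score:
                coin_box, best_coin_score = box, score
        else:
            food_boxes.append((box, label))
    return food_boxes, coin_box
```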

2.3 Image Segmentation

After detection, we need to segment each bounding box. GrabCut is an image segmentation approach based on optimization by graph cuts [grabcut]. Running GrabCut needs a bounding box as the foreground area, which can be provided by Faster R-CNN. Although asking the user to label foreground/background colors would give better results, we avoid it so that our system can finish calorie estimation without the user's assistance. For each bounding box, we get a precise contour after applying the GrabCut algorithm. After segmentation, we get a series of food images $P^T_1, P^T_2, \dots, P^T_m$ and $P^S_1, P^S_2, \dots, P^S_n$. The size of $P^T_i$ is the same as the size of $box^T_i$ ($i = 1, 2, \dots, m$), but the values of background pixels are replaced with zeros, which means that only the foreground pixels are kept. GrabCut is not applied to the calibration object boxes $c^T$ and $c^S$. After image segmentation, we can estimate every food's volume and calories.
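A minimal OpenCV sketch of this step is given below, assuming 8-bit BGR images and boxes in `(x1, y1, x2, y2)` form; only the rectangle initialization mode is used, mirroring the no-assistance setting described above.

```python
import numpy as np
import cv2

def segment_food(image, box, iters=5):
    """Run GrabCut initialized from a detection box and zero out the
    background, so only foreground (food) pixels remain."""
    x1, y1, x2, y2 = box
    rect = (x1, y1, x2 - x1, y2 - y1)          # OpenCV wants (x, y, w, h)
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # internal GMM buffers
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model, iters,
                cv2.GC_INIT_WITH_RECT)
    # definite or probable foreground pixels form the food contour
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return image * fg[:, :, np.newaxis].astype(image.dtype)
```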

2.4 Volume Estimation

In order to estimate the volume of each food, we first need to calculate scale factors based on the calibration objects. When we use the One Yuan coin as the reference, according to the coin's real diameter (2.50 cm), we can calculate the side view's scale factor $\beta^S$ with Equation 1.

$$\beta^S = \frac{2.50}{(W^S_c + H^S_c)/2} \tag{1}$$

where $W^S_c$ is the width of the bounding box $c^S$ and $H^S_c$ is its height, both in pixels.

Then the top view's scale factor $\beta^T$ is calculated with Equation 2.

$$\beta^T = \frac{2.50}{(W^T_c + H^T_c)/2} \tag{2}$$

where $W^T_c$ is the width of the bounding box $c^T$ and $H^T_c$ is its height, both in pixels.
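In code, both scale factors reduce to the same computation; the sketch below assumes the coin box is axis-aligned in `(x1, y1, x2, y2)` form and that Equations 1 and 2 average the box's width and height, as the symbol definitions above suggest.

```python
COIN_DIAMETER_CM = 2.50  # real diameter of a One Yuan coin

def scale_factor(coin_box):
    """cm-per-pixel scale factor from a coin bounding box (Equations 1/2)."""
    x1, y1, x2, y2 = coin_box
    width, height = x2 - x1, y2 - y1
    return COIN_DIAMETER_CM / ((width + height) / 2.0)

# e.g. a coin detected as a 60x62-pixel box gives ~0.041 cm per pixel
beta_s = scale_factor((100, 100, 160, 162))
```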

For each food image $P^T_i$ ($i = 1, 2, \dots, m$), we try to find an image in the set $P^S_1, P^S_2, \dots, P^S_n$ with the same food type. If $type^T_i$ is equal to $type^S_j$ ($j = 1, 2, \dots, n$), $P^S_j$ is marked so that it will not be used again; then $P^T_i$ and $P^S_j$ are used to calculate this food's volume. We divide foods into three shape types: ellipsoid, column, and irregular. According to the food's shape type, we select the corresponding volume estimation formula, as shown in Equation 3.

$$v = \begin{cases} \beta \sum_{k=1}^{H^S} \pi \left( \dfrac{L^S_k}{2} \right)^2 (\beta^S)^3 & \text{shape = ellipsoid} \\[6pt] \beta \, s^T (\beta^T)^2 \, H^S \beta^S & \text{shape = column} \\[6pt] \beta \sum_{k=1}^{H^S} s^T \left( \dfrac{L^S_k}{L^S_{MAX}} \right)^2 (\beta^T)^2 \beta^S & \text{shape = irregular} \end{cases} \tag{3}$$

In Equation 3, $H^S$ is the number of foreground rows in the side view $P^S_j$ and $L^S_k$ is the number of foreground pixels in row $k$. $L^S_{MAX}$ records the maximum number of foreground pixels among all rows of $P^S_j$. $s^T$ is the number of foreground pixels in the top view $P^T_i$, i.e. $s^T = \sum_k L^T_k$, where $L^T_k$ is the number of foreground pixels in row $k$. $\beta$ is the compensation factor, and its default value is 1.0.
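The sketch below implements this reading of Equation 3 on binary foreground masks (1 = food pixel) such as those produced by the segmentation step; treat it as our reconstruction from the symbol definitions above rather than the authors' exact code.

```python
import numpy as np

def estimate_volume(top_mask, side_mask, shape, beta_t, beta_s, beta=1.0):
    """Volume in cm^3 from top/side binary masks (Equation 3, reconstructed)."""
    rows = side_mask.sum(axis=1)      # L^S_k: foreground pixels per side row
    rows = rows[rows > 0]             # keep the H^S rows that contain food
    s_t = top_mask.sum()              # s^T: foreground area in the top view
    if shape == "ellipsoid":          # stack of circular slices along height
        v = np.sum(np.pi * (rows / 2.0) ** 2) * beta_s ** 3
    elif shape == "column":           # top-view area times side-view height
        v = s_t * beta_t ** 2 * len(rows) * beta_s
    else:                             # "irregular": top area scaled per row
        v = np.sum(s_t * (rows / rows.max()) ** 2) * beta_t ** 2 * beta_s
    return beta * v                   # compensation factor, default 1.0
```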

2.5 Calorie Estimation

After obtaining the volume of a food, we first estimate its mass with Equation 4.

$$m = \rho \times v \tag{4}$$

where $v$ is the volume of the current food and $\rho$ is its density value.

Finally, each food's calories are obtained with Equation 5.

$$C = c \times m \tag{5}$$

where $m$ is the mass of the current food and $c$ is its calories per gram.
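Both equations are straightforward lookups and multiplications. In the sketch below, the lookup tables are illustrative placeholders: the densities follow Table 1, while the per-gram energy figures are our own assumed values; in practice both would come from the density and nutrition tables cited in Section 1.

```python
# Hypothetical lookup tables (placeholder values, not from the paper's
# nutrition table); densities rho in g/cm^3, energies c in kcal/g.
DENSITY_G_PER_CM3 = {"apple": 0.78, "banana": 0.91}  # rho (cf. Table 1)
ENERGY_KCAL_PER_G = {"apple": 0.52, "banana": 0.89}  # c (assumed values)

def estimate_calories(food_type, volume_cm3):
    mass = DENSITY_G_PER_CM3[food_type] * volume_cm3  # Equation 4: m = rho*v
    return ENERGY_KCAL_PER_G[food_type] * mass        # Equation 5: C = c*m
```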

3 Results and Discussion

3.1 Dataset Description

For our calorie estimation method, a corresponding image dataset is necessary for evaluation. Several food image datasets [food101, PFID, FooDD, UEC256] have been created so far, but these datasets cannot meet our requirements, so we use our own food dataset, named ECUSTFD (http://pan.baidu.com/s/1o8qDnXC).

ECUSTFD contains 19 kinds of food: apple, banana, bread, bun, doughnut, egg, fried dough twist, grape, lemon, litchi, mango, mooncake, orange, peach, pear, plum, kiwi, sachima, and tomato. For a single food portion, we took several pairs of images with smartphones; each pair of images contains a top view and a side view. Each image contains only one One Yuan coin as the calibration object. If two foods appear in the same image, their types are different. For every image in ECUSTFD, we provide annotations as well as volume and mass records.

3.2 Object Detection Experiment

In this section, we compare the object detection results of Faster R-CNN and another object detection algorithm, Exemplar SVM (ESVM), on ECUSTFD. In order to avoid using training images to estimate volumes in the following experiments, the images of the two sets are split sequentially rather than randomly. The numbers of training and testing images are shown in Figure 2. We use Average Precision (AP) to evaluate the object detection results.

Figure 2: Training and testing image numbers

On the testing set, Faster R-CNN's mean Average Precision (mAP) is 93.0%, while ESVM's mAP is only 75.9%, which means that Faster R-CNN is up to the standard and can be used to detect objects.

3.3 Food Calorie Estimation Experiment

In this section, we present our food calorie estimation results. Due to the limits of our experimental equipment, we cannot obtain each food's calories as a reference, so our experiments verify only the volume estimation results and mass estimation results. First, we need to obtain the compensation factor $\beta$ in Equation 3 and the density $\rho$ in Equation 4 for each food type from the training set. $\beta$ is calculated with Equation 6.

$$\beta_t = \frac{1}{n} \sum_{i=1}^{n} \frac{v^R_i}{v^E_i} \tag{6}$$

where $t$ is the food type and $n$ is the number of volume estimations. $v^R_i$ is the real volume of the food in the $i$-th volume estimation, and $v^E_i$ is the estimated volume of the food in the $i$-th volume estimation.

$\rho$ is calculated with Equation 7.

$$\rho_t = \frac{1}{n} \sum_{i=1}^{n} \frac{m^R_i}{v^R_i} \tag{7}$$

where $t$ is the food type and $n$ is the number of mass estimations. $m^R_i$ is the real mass and $v^R_i$ is the real volume of the food in the $i$-th mass estimation.
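Both parameters are simple averages over the training records; here is a minimal sketch, assuming each record is a `(real_volume, estimated_volume, real_mass)` triple for one food type:

```python
def fit_parameters(records):
    """Estimate beta (Equation 6) and rho (Equation 7) for one food type
    from training records of (real_volume, estimated_volume, real_mass)."""
    n = len(records)
    beta = sum(v_real / v_est for v_real, v_est, _ in records) / n   # Eq. 6
    rho = sum(m_real / v_real for v_real, _, m_real in records) / n  # Eq. 7
    return beta, rho
```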

The shape definition, number of estimation images, $\beta$, and $\rho$ of each food type are shown in Table 1. For example, we use 122 images to calculate the parameters for apple, which means that 61 volume estimation results are used to calculate $\beta$.

Food Type           Shape      Estimation Images   β      ρ (g/cm³)
apple               ellipsoid  122                 1.08   0.78
banana              irregular   82                 0.62   0.91
bread               column      26                 0.62   0.18
bun                 irregular   32                 1.07   0.38
doughnut            irregular   42                 1.28   0.30
egg                 ellipsoid   30                 1.01   1.17
fried dough twist   irregular   48                 1.22   0.60
grape               column      24                 0.45   1.00
lemon               ellipsoid   34                 1.06   0.94
litchi              irregular   30                 0.82   0.98
mango               irregular   20                 1.16   1.08
mooncake            column      64                 1.00   1.20
orange              ellipsoid  110                 1.09   0.88
peach               ellipsoid   48                 1.05   1.01
pear                irregular   72                 1.02   0.97
plum                ellipsoid   82                 1.22   0.97
kiwi                ellipsoid   54                 1.16   0.98
sachima             column      54                 1.10   0.22
tomato              ellipsoid   46                 1.21   0.90

Table 1: Shape Definition and Parameters in Our Experiments
Food Type           Estimation   Mean Volume   Mean Est.     Mean Volume   Mean Mass   Mean Est.   Mean Mass
                    Images       (mL)          Volume (mL)   Error (%)     (g)         Mass (g)    Error (%)
apple               154          333.64        270.66        -11.59        263.82      292.51      -13.14
banana               90          162.00        204.16        -21.42        146.38      127.24      -20.94
bread                20          155.00        180.62        -26.47         29.04      112.04      -29.13
bun                  56          245.36        235.39          2.80         77.87      252.11       22.76
doughnut            118          174.75        143.47          5.36         56.26      183.64       -0.74
egg                  34           52.94         62.13         17.56         61.64       62.52       17.67
fried dough twist    44           65.00         54.50          4.78         40.80       66.64       -2.53
grape                30          240.00        323.57        -38.86        219.50      146.73      -33.45
lemon               112           96.79         94.03          3.88         94.24      100.11        0.13
litchi               48           43.33         49.25         -6.05         43.80       40.54       -8.88
mango                96           81.67         70.43          4.34         88.11       81.48       -0.07
mooncake             66           67.58         52.52        -16.57         62.13       52.54        2.15
orange              104          234.42        235.93         10.06        216.99      257.83        4.66
peach                62          110.65        115.52          6.46        117.06      121.03        1.62
pear                 82          260.00        225.93        -10.05        248.10      229.41       -8.91
plum                 94          100.00         98.35         20.22        105.14      120.22       11.35
kiwi                 56          127.14        123.08          8.65        127.31      143.33        5.93
sachima              96          147.29        129.05         -3.28         31.89      142.35       -1.48
tomato               90          174.22        168.28         17.11        182.64      203.07        0.36

Table 2: Volume and Mass Estimation Experiment Results

Then we use the images from the testing set to evaluate the volume and mass estimation results; these results are also shown in Table 2. We use the mean volume error to evaluate the volume estimation results. The mean volume error is defined as:

$$ME^V_t = \frac{1}{n_t} \sum_{i=1}^{n_t} \frac{v^E_i - v^R_i}{v^R_i} \times 100\% \tag{8}$$

In Equation 8, for food type $t$, $2n_t$ is the number of images that Faster R-CNN recognizes correctly. Since we use two images to calculate each volume, the number of volume estimations for type $t$ is $n_t$. $v^E_i$ is the estimated volume for the $i$-th pair of images with food type $t$, and $v^R_i$ is the corresponding real volume for the same pair of images. The mean mass error is defined as:

$$ME^M_t = \frac{1}{n_t} \sum_{i=1}^{n_t} \frac{m^E_i - m^R_i}{m^R_i} \times 100\% \tag{9}$$

In Equation 9, for food type $t$, the number of mass estimations is $n_t$. $m^E_i$ is the estimated mass for the $i$-th pair of images with food type $t$, and $m^R_i$ is the corresponding real mass for the same pair of images.
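Both metrics are the same signed mean relative error, applied once to volumes and once to masses; here is a minimal sketch:

```python
def mean_error(estimates, references):
    """Signed mean relative error in percent (Equations 8 and 9)."""
    errors = [(e - r) / r for e, r in zip(estimates, references)]
    return 100.0 * sum(errors) / len(errors)

# e.g. a 95 cm^3 estimate against a 100 cm^3 reference gives -5.0%
print(mean_error([95.0], [100.0]))
```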

As Table 2 shows, for most types of food in our experiment the estimation results are close to the reference values. The mean error between the estimated volume and the true volume does not exceed 20%, except for banana, bread, grape, and plum. For some food types, such as lemon, our estimation result is very close to the true value. The mass estimation results largely mirror the volume estimation results, but for some food types, such as mooncake and tomato, the mass estimation errors are smaller than the volume estimation errors; this discrepancy is due to the way we measured the reference volumes, since the water drainage method is not accurate enough. All in all, our estimation method is practical.

4 Conclusion

In this paper, we presented our calorie estimation method. The method needs a top view and a side view of the food as its inputs. Faster R-CNN is used to detect the food and the calibration object, and the GrabCut algorithm is used to obtain each food's contour. The volume is then estimated with the proposed volume estimation formulas, and finally each food's calories are computed. The experiment results show that our method is effective.
