Bean Split Ratio for Dry Bean Canning Quality and Variety Analysis

05/01/2019 ∙ by Yunfei Long, et al. ∙ USDA, Michigan State University

Splits in canned beans appear during preparation and canning. Researchers are studying how splits are influenced by cooking environment and genotype. However, no existing method automatically quantifies or characterizes the severity of splits. To address this, we propose two measures: the Bean Split Ratio (BSR), which quantifies the overall severity of splits, and the Bean Split Histogram (BSH), which characterizes the size distribution of splits. We create a pixel-wise segmentation method to automatically estimate these measures from images. We also present a bean dataset of recombinant inbred lines of two genotypes, use the BSR and BSH to assess canning quality, and explore the heritability of these properties.


1 Introduction

Beans are a staple food with low cost and high nutrition (including protein, vitamins, minerals, and dietary fiber) [13], as well as additional health benefits [35]. For convenience, beans are often pre-cooked and canned. However, depending on genotype, bean structural integrity can be harmed by cooking and canning, so there is an effort to improve these traits through breeding [33, 19, 4]. To facilitate this, automated quality measures are needed, and that is the focus of this paper.

Splits in the seed coats of canned dry beans, as shown in Fig. 1, are an important undesirable phenotype exhibited following preparation and retort processing [21, 33, 32]. Preparation for canning involves soaking, which may reveal splits that are already present in the dry seeds or induce splitting in genotypes with poor seed coat integrity [29, 32, 34]. Following preparation, beans are canned in a retort under high-temperature, high-pressure conditions that can further induce splitting [29, 32]. Splits degrade the appearance and internal quality of beans [21, 29, 32], and genotypes that exhibit extensive splitting following canning are considered to have poor canning quality. The canning industry demands beans that can withstand the canning process to produce a high quality product [29, 32], so it is important for dry bean breeders to evaluate canning quality of breeding lines prior to variety release. To improve the end-use quality of dry beans, researchers in plant and food science are interested in studying the association of splitting with genotype and processing conditions [34, 4], including cooking time and, in this case, retort processing time.

Figure 1: Examples of splits on canned yellow dry beans, which we propose to characterize with BSR and BSH. Detected splits are shown as green boundaries.

It is common practice to evaluate the appearance of canned beans by manual visual inspection prior to variety release [21, 12]. Trained reviewers rate the overall quality of canned products poured into trays, taking into account intactness, color, clumping, and brine characteristics including color, viscosity, and amount of extruded starch [10]. This, however, is biased, imprecise, laborious, and time-consuming [21]. To automate this process, Mendoza et al. [21] extracted hand-crafted features such as color and texture from bean images and used a support vector machine to classify canned bean samples into acceptable and unacceptable categories. In contrast, our aim is to characterize and quantify only splits. To the best of our knowledge, this is the first method that specializes in quantifying bean splits.

Characterizing splits involves identifying them in images, but this is challenging, even for a human, due to variations in shape, appearance, and lighting. Our approach leverages convolutional neural networks (ConvNets) [15, 14], which have been successful in image-based classification, detection, and segmentation, and play an increasingly important role in agriculture and food science [11]. Object segmentation with ConvNets [18, 9] achieves pixel-level classification for numerous applications, from leaf segmentation [24] to vehicle detection from Lidar points [16]. Our bean split detector uses a ConvNet trained to discriminate splits from bean seed coats.

In this work, we propose two new measures for quantifying and characterizing bean split severity. We also present a tool to automatically estimate these measures from images and so characterize bean intactness. In addition, we establish a public bean-split dataset of canned bean images of different genotypes and retort processing times. Experimental results verify the feasibility of detecting and quantifying split severity and demonstrate the strategy’s potential to be a powerful tool in revealing the association of splits with genotype and retort processing time.

2 Related Work

Various non-intrusive techniques such as soft X-ray imaging, hyperspectral imaging, and machine vision have been used to detect seed defects and evaluate seed quality [26], with machine vision having the advantage of being low-cost. Efforts have been made to extract color features from images and use them to recognize damage types of soybeans [30] or to classify beans into a number of quality categories [21, 6]. However, the severity of the defects on beans has not been accurately represented and quantified.

Numerous techniques are used to reveal hidden defects in the process of imaging, making it possible to segment defect regions via simple image processing such as thresholding. For example, Sood et al. [31] applied X-ray imaging to visualize internal cracks in kidney bean seeds. Momin et al. [23] employed backlighting to enhance the splits of soybeans in captured images. In contrast, this work focuses on external splits of canned beans, which are more complex and diverse in color and texture than those of uncooked soybean seeds.

Annotated images are the driving force behind the successful application of ConvNets to agriculture and food science. With large datasets, deep ConvNets achieve high performance on general tasks such as classification of plant species [7] and crop diseases [8, 22] from images. In addition, researchers build smaller datasets for specific tasks, including quantifying plant phenotypes. For example, with a self-made dataset and a ConvNet, Baweja et al. [1] measure stalk count and stalk width, and Pound et al. [25] accurately localize wheat spikes and spikelets. Here we create a small dataset for a specific bean phenotype task.

3 Proposed Measures

We seek to move beyond manual assessment of dry bean quality and build automated tools to quantify quality-related bean phenotypes. For this goal, we propose two phenotypic measurements: Bean Split Ratio (BSR) and Bean Split Histogram (BSH), and present a tool for their automatic estimation. This section describes these measurements.

3.1 BSR: Bean Split Ratio

We define the BSR as the ratio of the exposed split area of a bean to the total surface area of the bean, and refer to it as $r$. This will vary from bean to bean, so for a collection of beans the average BSR is:

    $\bar{r} = S / A$,    (1)

where $S$ is the total split area and $A$ is the total bean surface area. The BSR is also the probability that a bean surface point selected at random will be within a split.

Now, given an image of beans in a tray, we can define an image-estimated BSR as the ratio of the total number of split pixels to the total number of bean pixels (including both seed coat and split pixels). If splits are uniformly distributed around beans, then each pixel has an independent probability $\bar{r}$ of being a split, and the expected value of the image-estimated BSR is $\bar{r}$.

The image-estimated BSR has a number of characteristics that make it useful and convenient. It is simple and does not require segmenting or counting individual beans, which would involve additional labeling effort and classification. It is robust to occlusions, which always exist: at most half of a bean is visible in any single image, and occlusions worsen when beans overlap each other. Furthermore, the image-estimated BSR is invariant to scale and so does not require strict control of the distance from the camera or the viewing angle. Finally, the BSR measures the lack of intactness of beans, a quantity used to assess bean quality; see Section 4.
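As a concrete illustration of the paragraph above, the image-estimated BSR reduces to counting pixels in a per-pixel class map. The sketch below assumes a hypothetical 0 = tray, 1 = seed coat, 2 = split encoding; the function name is ours.

```python
import numpy as np

def image_bsr(labels: np.ndarray) -> float:
    """Image-estimated BSR: split pixels / all bean pixels.

    `labels` is a per-pixel class map with the (assumed) encoding
    0 = tray, 1 = seed coat, 2 = split.
    """
    split = np.count_nonzero(labels == 2)
    bean = np.count_nonzero(labels >= 1)  # seed coat + split pixels
    return split / bean if bean else 0.0

# Toy 3x3 label map: 4 bean pixels, 1 of which is a split.
toy = np.array([[0, 1, 1],
                [0, 2, 1],
                [0, 0, 0]])
print(image_bsr(toy))  # 0.25
```

Note that no per-bean segmentation is needed: the ratio is over pixel counts alone, which is what makes the measure robust to overlap and scale.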

3.2 BSH: Bean Split Histogram

In addition to their total area, splits can be characterized by their size distribution. While one could enumerate the number of splits of different sizes, we instead propose a Bean Split Histogram (BSH) that sums the total area belonging to splits of different sizes, divided by the total bean area. This gives more influence to larger splits and is a natural extension of the BSR.

The BSH is a histogram of split area-ratios defined for a collection of beans. Given $N$ splits, let $a_1, \dots, a_N$ refer to the areas of all the splits, and let $a_{\max}$ be the maximum split area (which could be equal to the maximum bean size). Let the range $[0, a_{\max}]$ be evenly partitioned into $K$ bins such that the $k$'th bin has range $B_k = [(k-1)\,a_{\max}/K,\; k\,a_{\max}/K)$. Each bin of the BSH will collect the sum of BSR values from Eq. 1 for splits of that size. Thus the value, $h_k$, of the $k$'th bin of the BSH is the sum of split ratios with areas in $B_k$, namely:

    $h_k = \frac{1}{A} \sum_{i:\, a_i \in B_k} a_i$.    (2)

An example BSH is shown in Fig. 2. A probabilistic interpretation of the BSH is that the bin value $h_k$ is the probability of a point on the surface of the bean collection being in a split whose area $a$ satisfies $a \in B_k$. Each split area contributes toward a single bin; therefore, the sum of bins is the BSR, namely $\sum_k h_k = \bar{r}$.

In practice we use images to estimate a BSH. For this, the areas are replaced with image areas measured in pixels. We note that an image-BSH is not identical to the BSH in Eq. 2. The image-area of a split depends not only on its area but also on its orientation relative to the camera. In addition, some splits may be only partially visible and so be counted as smaller splits. Also, if adjacent splits are merged during detection, they may be counted as one larger split. However, given that most splits are small and dispersed, we expect these effects to be small and the image-BSH to approximate the BSH.
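A minimal NumPy sketch of the image-estimated BSH of Eq. 2, with areas measured in pixels; the bin count, example areas, and helper name are illustrative, not from the paper.

```python
import numpy as np

def image_bsh(split_areas, total_bean_area, n_bins=10, max_area=None):
    """Image-estimated BSH: each bin sums area / total_bean_area over
    splits whose pixel area falls in that bin (Eq. 2).
    `max_area` defaults to the largest observed split."""
    split_areas = np.asarray(split_areas, dtype=float)
    if max_area is None:
        max_area = split_areas.max()
    edges = np.linspace(0.0, max_area, n_bins + 1)
    # np.histogram with weights sums the area ratios within each bin
    hist, _ = np.histogram(split_areas, bins=edges,
                           weights=split_areas / total_bean_area)
    return hist

areas = [5, 12, 48, 50]                      # split areas in pixels
h = image_bsh(areas, total_bean_area=1000, n_bins=5)
print(h.sum())   # ≈ 0.115, the image-estimated BSR
```

Because `np.histogram` is called with the area ratios as weights, each split contributes its full area ratio to exactly one bin, so the bins sum to the image-estimated BSR, matching the identity $\sum_k h_k = \bar{r}$.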

Figure 2: Overview of generating BSR and BSH from a bean image.

4 Bean Dataset

A bean dataset (available at https://www.egr.msu.edu/drybeans) was collected and annotated for two purposes: (1) genotype and phenotype inheritance inference, and (2) developing our split detection method. The latter uses a smaller subset of the data, enabling us to generate BSR and BSH measures on the full dataset.

4.1 Dataset for genotype inferences

The dry bean genotypes included in this study consist of two yellow dry bean parent genotypes and 18 of their recombinant inbred line progeny. The dataset includes a total of 20 genotypes that were processed in duplicate across five different retort processing times (10, 15, 20, 30 and 45 minutes) resulting in 200 canned samples. Human panelists performed quantitative visual ratings of canned samples to assess intactness. Further details of the bean genotypes, canning process, and quality evaluation are in the Appendix.

The beans were imaged as described in [21]. The system has a top-down camera, fluorescent lights that provide stable lighting, and a cover to shield from ambient light. Each canned sample was rinsed to remove brine, poured into a tray, positioned randomly, and imaged. A total of 199 images were collected from the 200 samples, as one sample was lost following visual evaluation.

4.2 Dataset for split detection

A subset of 36 images from the dataset is pixel-wise labeled to enable development and validation of our split detection method (29 for training and 7 for validation). Each pixel is labeled as one of 3 classes: (1) tray, (2) seed coat, or (3) split. Labeling was done using graph cuts [17] to interactively separate bean pixels from tray pixels, and hand-drawn boundaries to segment each split. Some examples are shown in Fig. 3.

Although the number of labeled images is small, we believe it is feasible to train an accurate model from them for the following reasons. First, many genotypes have similar appearance that can be captured by a small number of images. Second, classification applies to a pixel, rather than to an image, so each training image provides many labels. Third, images are augmented via rotations and flipping.
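The rotation-and-flip augmentation mentioned above amounts to the eight rotation/flip variants of each image; a sketch (the generator name is ours), where the same transforms would be applied to the label map so pixels stay aligned:

```python
import numpy as np

def augment(image: np.ndarray):
    """Yield the 8 rotation/flip variants of an image."""
    for k in range(4):                 # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        yield rotated
        yield np.fliplr(rotated)       # plus a horizontal flip of each

img = np.arange(12).reshape(3, 4)
variants = list(augment(img))
print(len(variants))  # 8
```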

Figure 3: Examples of pixel-wise annotations: (a) - (c) are representative images in our dataset with green lines between splits and seed coats, and orange between bean and tray pixels. (d) - (f) are visualizations of these same pixel-wise annotations with tray in gray, seed coat in orange, and splits in green.

5 Method

We provide a summary of our method for estimating BSR and BSH from images as well as details of our segmentation algorithm.

5.1 Algorithm

  1. Train two pixel-wise segmentation models using the labeled dataset described in Sec. 4.2.

  2. Apply trained models to generate pixel-wise, 3-class predictions on the full image dataset from Sec. 4.1, namely: tray, seed coat, or split.

  3. Threshold and count the number of split pixels and seed coat pixels and compute the BSR using Eq. 1.

  4. Cluster split pixels into contiguous regions, measure their areas, and estimate the BSH using Eq. 2.
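Steps 3 and 4 above can be sketched with connected-component labeling from SciPy (a minimal sketch; names are ours, and the paper's own implementation may differ):

```python
import numpy as np
from scipy import ndimage

def split_region_areas(split_mask: np.ndarray):
    """Step 4: cluster thresholded split pixels into contiguous
    regions (connected-component labeling) and return their areas
    in pixels, ready for binning into a BSH."""
    labeled, n_regions = ndimage.label(split_mask)
    # Area of each region = number of pixels carrying its label
    return [np.count_nonzero(labeled == i) for i in range(1, n_regions + 1)]

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
print(split_region_areas(mask))  # [3, 2]
```

`ndimage.label` uses 4-connectivity by default, so diagonal-only neighbors are counted as separate splits; a different connectivity structure can be passed if desired.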

5.2 Segmentation ConvNet

Splits can be challenging to distinguish, even for humans, as surface textures and specular reflections often have a similar appearance to splits, and some beans have seed coats with color very similar to the internal bean. Discriminating splits from confusers involves using both fine texture cues and surrounding context. This motivated our use of a pyramid convolutional network [24], originally used to discriminate leaf boundaries, which integrates both fine detail and context.

5.2.1 Network architecture

The ConvNet architecture we use in this work is modified from [24], and its diagram is shown in Fig. 4. Here the symbol C represents the sequential connection of a convolutional layer and a ReLU nonlinearity, ↓ denotes down-sampling by max-pooling, and ↑ denotes up-sampling via nearest-neighbor interpolation. With six resolutions, the network has a large maximum receptive field, giving significant contextual cues for segmenting split pixels.
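The down- and up-sampling operations that build the pyramid can be sketched in NumPy; this illustrates only the resolution pyramid, not the trained convolutional layers, and the helper names are ours.

```python
import numpy as np

def maxpool2(x):
    """2x2 max-pooling: the ↓ operation in the pyramid network."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbor upsampling: the ↑ (NN) operation."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

x = np.array([[1., 2., 0., 0.],
              [3., 4., 0., 0.],
              [0., 0., 5., 0.],
              [0., 0., 0., 6.]])
down = maxpool2(x)      # coarse resolution: [[4, 0], [0, 6]]
up = upsample2(down)    # back to fine resolution, 4x4
# A pyramid network concatenates coarse features (after upsampling)
# with fine-resolution features, so each pixel sees both detail
# and context.
print(down.tolist())  # [[4.0, 0.0], [0.0, 6.0]]
```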

We train two networks independently: one performs bean/tray segmentation (orange + green vs. grey in Fig. 3), and the other segments splits from seed coats (green vs. orange). The split segmentation task is much more difficult than the former and requires more epochs for convergence, so it makes sense to train the two networks separately.

Figure 4: Network architecture: C represents a convolutional layer; the number of filter channels is shown after each box; NN denotes up-sampling via nearest-neighbor interpolation.

5.2.2 Thresholding scores

A threshold converts output scores into one of the two classes. We compute the optimum threshold based on two criteria: (1) the intersection over union (IoU) between predicted split pixels and labeled split pixels (a standard measure of detection accuracy), and (2) the BSR error. Both give a similar best threshold level, shown with the green lines in Fig. 7.
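Selecting the threshold that maximizes split IoU can be sketched as a simple grid search (illustrative scores, labels, and grid; not the authors' exact procedure):

```python
import numpy as np

def best_threshold(scores, truth, thresholds=np.linspace(0.5, 0.99, 50)):
    """Pick the score threshold maximizing split IoU against labels."""
    def iou(t):
        pred = scores >= t
        inter = np.count_nonzero(pred & truth)
        union = np.count_nonzero(pred | truth)
        return inter / union if union else 0.0
    return max(thresholds, key=iou)

# Toy per-pixel scores and ground-truth split labels
scores = np.array([0.95, 0.91, 0.40, 0.88, 0.10])
truth = np.array([True, True, False, False, False])
t = best_threshold(scores, truth)
print(0.88 < t <= 0.91)  # True: only a high threshold separates the classes
```

The same search can be run with BSR error as the criterion; as reported below, the two criteria select nearly the same threshold on our data.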

6 Experimental results

Here we report on both the human panelists’ rating and the automated BSR and BSH measures. We explore how these measures can be used to evaluate quality and investigate genetic factors influencing it.

6.1 Canned bean intactness: human panelists

Table 1 shows that the estimated intactness of canned bean samples was influenced by bean genotype, retort process time, and reviewer. There was also an interaction between reviewer and genotype. The effect of reviewer on the intactness score indicates the subjective nature of using human panelists for this evaluation, and the need for a more objective method, such as ours.

Source                    DF    F Value   Pr > F
Genotype                  19    30.51     <.0001
Retort Time                4    29.92     <.0001
Genotype × Retort Time    76     3.08     <.0001
Replication (2 cans)       1     1.67     0.1968
Reviewer                  13    35.18     <.0001
Genotype × Reviewer      247     1.63     <.0001
  • Type 3 tests in Proc Mixed with the REML estimation method.

  • DF: numerator degrees of freedom for the F-value calculation.

Table 1: Analysis of variance of panel-estimated canned bean intactness, showing high variability of intactness due to genotype, retort time, and reviewer.

6.2 Image-based BSR prediction

We built the network shown in Fig. 4 in Keras [3] and trained it with the AdaDelta optimizer [36]. As mentioned in Section 5.2.1, we trained two models. The first distinguishes beans from the tray. Loss and average precision (AP) over 18 epochs are shown in Fig. 5(a) and (b), respectively. As this is an easy task, the loss converges quickly and the model achieves high AP.

Figure 5: Segmenting bean vs. tray is straightforward with our network and obtains high performance as shown by (a) Convergence of cross entropy loss, and (b) high average precision in pixel labels.

The second model detects splits by classifying each pixel as split or seed coat. Tray pixels are ignored when computing loss and AP. The loss and AP curves over 70 epochs are shown in Fig. 6(a) and (b), respectively. The model achieves an AP over 0.8.

Figure 6: Segmenting splits from seed coats is challenging. (a) Loss and (b) average precision of pixel scores are shown.

To use the predicted split pixels for further analysis, we apply a threshold to the network output. It makes sense to select a threshold that maximizes performance; Fig. 7(a) and (b) show performance on two measures: split IoU and BSR error. BSR error is the average percent error in estimating the BSR from an image. The two criteria result in similar best thresholds: 0.89 (based on IoU) and 0.90 (based on BSR error).

Figure 7: We select a threshold (green line) that maximizes pixel segmentation performance measured as (a) the IoU of estimated split pixels, or (b) BSR percentage error.
Figure 8: Heritability estimates over our dataset for the first 10 bins of the BSH are quite high, ranging from 0.64 to 0.82.

Fig. 9 shows segmentation results for 3 representative images in the validation set. Figs. 9(a) to (c) show the boundaries of estimated and true splits plotted on the original images. Figs. 9(d) to (f) show the predicted segmentation and ground truth. The model achieves higher accuracy on dark beans than on light beans, as light bean seed coat color is very close to the split color.

Figure 9: Visualization of segmentation results: (a), (b) and (c) are images in the validation set; predicted (green) and true (blue) boundaries of splits are shown on top of the images. (d), (e) and (f) are visualization of segmentation of (a), (b) and (c), respectively, showing predicted tray (grey), predicted seed coat (orange) and predicted bean splits (green), along with true splits (blue) and bean regions (black) boundaries.

6.3 Baseline performance comparison

In Table 2, we use the bean pixels of the validation set to compare our method to a baseline: Linear Discriminant Analysis (LDA) [20] on pixel-wise HSV color features [2]. The BSH error is quantified by the Earth Mover's Distance [28]. The poor performance of LDA shows that pixel color alone is insufficient to detect splits; the spatial cues learned by the pyramid ConvNet greatly improve performance.

Methods     AP     IoU    BSR Error   BSH Error
LDA [20]    0.30   0.26   33.2%       3.75
Ours
Table 2: Baseline accuracy comparison for split detection
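The BSH error in Table 2 is an Earth Mover's Distance between histograms. For histograms sharing bin edges, a 1-D EMD reduces to the area between cumulative sums; the sketch below illustrates this (names and toy data are ours, and this may differ from the exact EMD variant of [28]):

```python
import numpy as np

def bsh_emd(h_pred, h_true, bin_width=1.0):
    """1-D earth mover's distance between two histograms with shared
    bins: the sum of |CDF differences| times the bin width."""
    cdf_diff = np.cumsum(np.asarray(h_pred) - np.asarray(h_true))
    return float(np.sum(np.abs(cdf_diff)) * bin_width)

h_true = np.array([0.00, 0.10, 0.00, 0.05])
h_pred = np.array([0.10, 0.00, 0.00, 0.05])   # mass shifted one bin left
print(bsh_emd(h_pred, h_true))  # 0.1: 0.10 of mass moved by one bin
```

Intuitively, the EMD penalizes a predicted BSH by how far split mass must be moved between size bins to match the true BSH, so nearby-bin confusions cost less than distant ones.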

6.4 Qualitative BSR and BSH results

Figs. 10(a) to (c) show a single bean type under increasing retort times. This results in more visible splits, quantified by the increasing BSR measures. Figs. 10(d) to (f) show the BSH of Figs. 10(a) to (c), respectively. To obtain the size of each individual split, we cluster split pixels using connected-component labeling [27]. Note that distinct splits that touch in the image can be incorrectly treated as one split.

Figure 10: (a) to (c) are 3 bean samples with increasing split severity due to increased retort time. Our detected splits segments are shown by green boundaries and are quantified by BSR scores: (a) 0.019, (b) 0.037 and (c) 0.056. The corresponding BSH measures are shown below in (d) to (f) with split size in units of maximum bean size.

6.5 Comparison of automated and human analysis

Multiple human panelists rated each tray of beans with an integer intactness score ranging from 1 to 5, and their average score was used as the reference intactness. We calculated the Pearson correlation between the image-based BSR and the reference intactness, and compared this to the correlation of the individual raters with the reference intactness. (In correlating each rater with the reference intactness, we excluded that rater's score from the intactness measure.) The BSR had a higher correlation coefficient with the reference intactness than the average over all raters. Thus the BSR tracks the reference intactness better, and so has the potential to achieve at least comparable performance to human panelists in intactness estimation.
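The leave-one-out comparison described above can be sketched as follows (names and toy data are ours; note that the BSR correlates negatively with intactness, since more splits mean less intact beans, so correlation magnitudes are what matter):

```python
import numpy as np

def rater_vs_reference_corr(ratings, bsr):
    """Compare BSR and each rater against a leave-one-out reference:
    the reference for rater j excludes rater j's own scores.
    `ratings` is (n_raters, n_samples); `bsr` is (n_samples,)."""
    rater_corrs = []
    for j in range(ratings.shape[0]):
        ref = np.delete(ratings, j, axis=0).mean(axis=0)
        rater_corrs.append(np.corrcoef(ratings[j], ref)[0, 1])
    bsr_corr = np.corrcoef(bsr, ratings.mean(axis=0))[0, 1]
    return float(bsr_corr), float(np.mean(rater_corrs))

# Toy data: 3 raters x 5 samples; BSR falls as intactness rises
ratings = np.array([[1., 2., 3., 4., 5.],
                    [2., 2., 3., 4., 4.],
                    [1., 3., 3., 3., 5.]])
bsr = np.array([0.06, 0.05, 0.03, 0.02, 0.01])
bsr_corr, rater_corr = rater_vs_reference_corr(ratings, bsr)
print(bsr_corr < 0)  # True: BSR moves opposite to intactness
```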

6.6 Applicability of methods to breeding

The most important quality of detected features for bean breeders is the proportion of variance attributed to genotype rather than other factors. This estimate is called the repeatability or heritability and ranges from 0 to 1, with higher values being more useful and effective to implement in a breeding program. The values indicate to what degree the phenotype is controlled by genotype, which determines how closely the progeny will resemble their parents, and therefore how effective selection based on the given trait (or features) will be from one generation to the next. The heritability estimate for the BSR was 0.957, which is extremely high. Heritability estimates for individual BSH bins, representing different sizes of splits, ranged from 0.64 to 0.82 (see Fig. 8); some of these bin scores were impacted by retort time, replication, or whether the image was included in the training set. Though a significant proportion of the variation in human ratings was also attributable to genotype (Table 1), those ratings were also significantly impacted by reviewer and retort time, which was not the case for the BSR. In addition, the need for multiple reviewers to reduce bias in human scores makes the rating process far more time- and cost-intensive than an image-based approach.
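For intuition, an entry-mean broad-sense heritability can be estimated from a genotype-by-replication table via one-way ANOVA variance components, $H^2 = \sigma^2_G / (\sigma^2_G + \sigma^2_E / r)$. The sketch below is a standard textbook estimator with toy data, not necessarily the mixed-model estimate used in this study.

```python
import numpy as np

def heritability(values):
    """Entry-mean broad-sense heritability from a one-way layout.
    `values` is (n_genotypes, n_reps). Variance components come from
    ANOVA mean squares: sigma2_G = (MS_G - MS_E) / r, sigma2_E = MS_E."""
    g, r = values.shape
    grand = values.mean()
    ms_g = r * np.sum((values.mean(axis=1) - grand) ** 2) / (g - 1)
    ms_e = np.sum((values - values.mean(axis=1, keepdims=True)) ** 2) \
        / (g * (r - 1))
    var_g = max((ms_g - ms_e) / r, 0.0)
    denom = var_g + ms_e / r
    return var_g / denom if denom > 0 else 0.0

# Genotype means differ strongly relative to replicate noise -> high H^2
vals = np.array([[0.010, 0.012],
                 [0.050, 0.052],
                 [0.090, 0.088]])
print(heritability(vals) > 0.9)  # True
```

When genotype explains most of the variance, as with the BSR here (0.957), selection on the trait transmits reliably to progeny; replicate-level noise drags the estimate toward 0.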

7 Conclusion

In this paper, we present two measures, BSR and BSH, that quantify and characterize the severity of bean splits. We propose a method for estimating BSR and BSH from bean images and demonstrate it on a new, annotated bean-image dataset. We find a strong correlation between BSR and human-estimated bean intactness. Furthermore, from our bean dataset we find strong heritability of these measures, indicating that they are likely a useful tool for breeders interested in selecting for improved canning quality. Since these measures can be estimated more quickly and reliably than human-based scores, they may improve breeders' ability to select genotypes for crossing and advancement in their breeding programs, as well as determine the genetic factors controlling the propensity of seed coats to split during canning.

Acknowledgements

This work was made possible in part by a predoctoral training award to Yunfei Long from grant number DGE-1828149 from the National Science Foundation. The authors would like to thank Kirk Dolan in the Food Science and Human Nutrition Department at Michigan State University for providing equipment and technical support for the bean canning.

Appendix

A.1. Bean Genotypes

Two yellow dry bean genotypes Ervilha (ADP0512) and PI527538 (ADP0468) were crossed, and the progeny were self-pollinated through single-seed descent for five generations to create recombinant inbred lines. Recombinant inbred lines are highly homozygous genotypes that incorporate various combinations of parental genetic material due to multiple recombination events. The parents Ervilha and PI527538 were selected from the Andean Diversity Panel [5] for their cooking and nutritional characteristics. The genotypes included in this study were selected to evaluate whether their canning quality differs across different retort processing times. The dataset contains images of both parents as well as 18 recombinant inbred lines canned in a retort across five different process times (10, 15, 20, 30 and 45 minutes).

A.2. Canning protocol and quality evaluation

Each genotype was processed in duplicate across five retort times for a total of ten samples per genotype. For each sample, 90 g dry weight (14-17% moisture) of dry bean seeds was placed in mesh bags and soaked for 12 hours in 0.0028% CaCl2 solution prior to canning. The soaked samples were placed into aluminum cans, filled with brine (1.5% sucrose, 1.25% NaCl, 0.03% CaCl2), and heated in an exhaust box to 165 °F. The cans were then sealed and processed at 250 °F in a stationary rotary retort for 10, 15, 20, 30, or 45 minutes. All process times exceeded minimum safety requirements (F0 > 6 min) for production of canned food products. Cans were left to equilibrate at room temperature for one week before opening, at which point visual inspections were performed. Each can was poured into a paper food tray and mixed slightly to distribute the sample evenly across the tray. The samples were evaluated by 14 trained reviewers (with at least 7 reviewers per can) using a 5-point scale for intactness (1: 0-20% intact, 2: 21-40% intact, 3: 41-60% intact, 4: 61-80% intact, 5: 81-100% intact). Intactness is defined as an absence of splits. Following visual evaluations, each sample was rinsed and imaged.

Analysis of variance of the intactness data was performed using the residual maximum likelihood (REML) method. The model included genotype, retort time, genotype by retort time, replication, reviewer, and genotype by reviewer. Genotype and retort time were fixed effects; replication and reviewer were random effects.

References

  • [1] H. S. Baweja, T. Parhar, O. Mirbod, and S. Nuske. StalkNet: a deep learning pipeline for high-throughput measurement of plant stalk count and stalk width. In Field and Service Robotics, pages 271–284. Springer, 2018.
  • [2] T.-W. Chen, Y.-L. Chen, and S.-Y. Chien. Fast image segmentation based on K-Means clustering with histograms in HSV color space. In IEEE 10th Workshop on Multimedia Signal Processing, pages 322–325, 2008.
  • [3] F. Chollet et al. Keras. https://keras.io, 2015.
  • [4] K. A. Cichy, A. Fernandez, A. Kilian, J. D. Kelly, C. H. Galeano, S. Shaw, M. Brick, D. Hodkinson, and E. Troxtell. QTL analysis of canning quality and color retention in black beans (Phaseolus vulgaris l.). Molecular Breeding, 33(1):139–154, 2014.
  • [5] K. A. Cichy, T. G. Porch, J. S. Beaver, P. Cregan, D. Fourie, R. P. Glahn, M. A. Grusak, K. Kamfwa, D. N. Katuuramu, P. McClean, et al. A Phaseolus vulgaris diversity panel for Andean bean improvement. Crop Science, 55(5):2149–2160, 2015.
  • [6] E. M. de Oliveira, D. S. Leme, B. H. G. Barbosa, M. P. Rodarte, and R. G. F. A. Pereira. A computer vision system for coffee beans classification based on computational intelligence techniques. Journal of Food Engineering, 171:22–27, 2016.
  • [7] M. Dyrmann, H. Karstoft, and H. S. Midtiby. Plant species classification using deep convolutional neural network. Biosystems Engineering, 151:72–80, 2016.
  • [8] K. P. Ferentinos. Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture, 145:311–318, 2018.
  • [9] Y. Guo, Y. Liu, T. Georgiou, and M. S. Lew. A review of semantic segmentation using deep neural networks. International Journal of Multimedia Information Retrieval, 7(2):87–93, 2018.
  • [10] G. L. Hosfield, J. D. Kelly, M. Silbernagel, J. Stavely, M. Adams, M. Uebersax, and G. Varner. Eight small-red dry bean germplasm lines with upright architecture, narrow profile, and short vine growth habit. HortScience, 30(7):1479–1482, 1995.
  • [11] A. Kamilaris and F. X. Prenafeta-Boldu. Deep learning in agriculture: a survey. Computers and Electronics in Agriculture, 147:70–90, 2018.
  • [12] J. D. Kelly and K. A. Cichy. Dry bean breeding and production technologies. Dry Beans and Pulses Production, Processing and Nutrition, pages 23–54, 2012.
  • [13] T. Kutoš, T. Golob, M. Kač, and A. Plestenjak. Dietary fibre content of dry and processed beans. Food Chemistry, 80(2):231–235, 2003.
  • [14] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436, 2015.
  • [15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [16] B. Li, T. Zhang, and T. Xia. Vehicle detection from 3D lidar using fully convolutional network. In Robotics: Science and Systems, 2016.
  • [17] Y. Li, J. Sun, C.-K. Tang, and H.-Y. Shum. Lazy snapping. ACM Transactions on Graphics, 23(3):303–308, 2004.
  • [18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  • [19] W. Lu and K. Chang. Correlations between chemical composition and canning quality attributes of navy bean (Phaseolus vulgaris l.). Cereal Chemistry, 73(6):785–787, 1996.
  • [20] A. M. Martínez and A. C. Kak. PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2):228–233, 2001.
  • [21] F. A. Mendoza, J. D. Kelly, and K. A. Cichy. Automated prediction of sensory scores for color and appearance in canned black beans (Phaseolus vulgaris l.) using machine vision. International Journal of Food Properties, 20(1):83–99, 2017.
  • [22] S. P. Mohanty, D. P. Hughes, and M. Salathé. Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7:1419, 2016.
  • [23] M. A. Momin, K. Yamamoto, M. Miyamoto, N. Kondo, and T. Grift. Machine vision based soybean quality evaluation. Computers and Electronics in Agriculture, 140:452–460, 2017.
  • [24] D. Morris. A pyramid CNN for dense-leaves segmentation. In 15th Conference on Computer and Robot Vision, pages 238–245, 2018.
  • [25] M. P. Pound, J. A. Atkinson, D. M. Wells, T. P. Pridmore, and A. P. French. Deep learning for multi-task plant phenotyping. In IEEE International Conference on Computer Vision, pages 2055–2063, 2017.
  • [26] A. Rahman and B.-K. Cho. Assessment of seed quality using non-destructive measurement techniques: a review. Seed Science Research, 26(4):285–305, 2016.
  • [27] A. Rosenfeld and J. L. Pfaltz. Sequential operations in digital picture processing. Journal of the ACM, 13(4):471–494, 1966.
  • [28] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
  • [29] V. Schoeninger, S. R. M. Coelho, and P. Z. Bassinello. Industrial processing of canned beans. Ciência Rural, 47(5), 2017.
  • [30] P. Shatadal and J. Tan. Identifying damaged soybeans by color image analysis. Applied Engineering in Agriculture, 19(1):65, 2003.
  • [31] S. Sood, S. Mahajan, A. Doegar, and A. Das. Internal crack detection in kidney bean seeds using X-ray imaging technique. In International Conference on Advances in Computing, Communications and Informatics, pages 2258–2261, 2016.
  • [32] J. Van Buren, M. Bourne, D. Downing, D. Queale, E. Chase, and S. Comstock. Processing factors influencing splitting and other quality characteristics of canned kidney beans. Journal of Food Science, 51(5):1228–1230, 1986.
  • [33] N. Wassimi, G. Hosfield, and M. Uebersax. Inheritance of physico-chemical seed characters related to culinary quality in dry bean. Journal of the American Society for Horticultural Science, 115(3):492–499, 1990.
  • [34] N. Zamindar, M. S. Baghekhandan, A. Nasirpour, and M. Sheikhzeinoddin. Effect of line, soaking and cooking time on water absorption, texture and splitting of red kidney beans. Journal of Food Science and Technology, 50(1):108–114, 2013.
  • [35] M. Zanovec, C. E. O’Neil, and T. A. Nicklas. Comparison of nutrient density and nutrient-to-cost between cooked and canned beans. Food and Nutrition Sciences, 2(02):66, 2011.
  • [36] M. D. Zeiler. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.