Images are used with increasing frequency in plant phenotyping for a variety of reasons. One reason is the ability to remotely capture data without disturbing the plant material, while another is the promise of high-throughput phenotyping via image processing pipelines such as those enabled by PlantCV fahlgren_versatile_2015.
However, to acquire precise data suitable for measurements of two-dimensional objects, the prevailing method in the community is to use a flatbed scanner. Shape analyses of leaves have used scanned images for apple, grapevine, Claytonia L., and a mixture of species migicovsky_morphometrics_2018 ; klein_digital_2017 ; stoughton_next-generation_2018 ; li_topological_2018 (note that not all of the data in li_topological_2018 is from a scanner). Scanners have also been used to analyze the shape of pansy petals yoshioka_genetic_2006 and Vitis vinifera L. seeds orru_morphological_2013.
Cameras have been used to phenotype a range of structures and sizes, such as cranberry fruit shape and size diaz-garcia_image-based_2018 and root system architecture das_digital_2015 . In both of these works, a disk of known diameter is added to the scene for scaling purposes.
1.1 Camera calibration
The protocol in this paper transforms images acquired from a standard consumer camera such that measurements in pixels are representative of a planar scene. In effect, we emulate a flatbed scanner with a consumer camera: angles between lines are preserved, as are ratios of distances. Physical measurements can be recovered from image measurements by dividing by the number of pixels per millimeter, as with flatbed scanners.
This method is needed because measurements of two-dimensional objects, when made in the image space of camera-acquired images, suffer reduced accuracy from physical perturbations. A small movement of the camera up or down gives the erroneous impression that an object is larger or smaller in terms of pixels. Image pixels are also subject to radial distortion and to the projective geometry that maps three-dimensional objects into a two-dimensional image. In other words, pixels in one portion of the image may not represent the same physical dimensions as pixels in another portion of the image.
The method at the center of this protocol makes use of established camera calibration procedures to mitigate the problems of the preceding paragraph. Camera calibration is the estimation of parameters that relate three coordinate systems to each other: image, camera, and world. This chapter does not have the space to treat this topic in depth, but Hartley and Zisserman hartley_multiple_2003 is a good text on the subject. When camera calibration is completed, the coordinate systems have been defined relative to a standard, and the relationships of one coordinate system to another are known.
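As a brief illustration, using standard notation from Hartley and Zisserman rather than symbols defined in this chapter, the pinhole model relates a world point (X, Y, Z) to its image pixel (u, v) through the intrinsic matrix K (camera to image) and the pose [R | t] (world to camera):

```latex
\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= K \, [\,R \mid t\,]
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
```

Calibration estimates K (and lens distortion coefficients), while R and t relate the world coordinate system defined by the pattern to the camera.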
Calibration patterns are used to define coordinate systems relative to a standard. These may take many forms; in this work we use aruco patterns garrido-jurado_automatic_2014. Laid out in a grid, the patterns define the X-Y plane of the world coordinate system, as in Figure 1. The camera captures an image of the pattern to aid in defining the world coordinate system with respect to the image and camera coordinate systems.
Usually, many views of the pattern are captured to solve an optimization problem to fully calibrate the camera zhang_flexible_2000. However, the Structure from Motion (SfM) community snavely_photo_2006 ; wu_towards_2013 began exploiting EXIF (Exchangeable image file format) data, a type of metadata that is common in today's consumer cameras. Within SfM, the camera's sensor size and some data from the EXIF file are used to generate an initial solution for some of the camera calibration parameters. We have borrowed this practice for calibrating in the phenotyping context.
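A minimal sketch of this practice follows. The function name and the example values (lens focal length, sensor width, image width) are illustrative assumptions, not taken from the chapter's code: the EXIF focal length in millimeters, combined with the sensor width in millimeters and the image width in pixels, yields an initial estimate of the focal length in pixels.

```python
def focal_length_px(focal_length_mm, sensor_width_mm, image_width_px):
    """Initial estimate of the focal length in pixels, as used to seed
    camera calibration from EXIF metadata and a known sensor size."""
    return focal_length_mm * image_width_px / sensor_width_mm

# Illustrative values: a 4.2 mm lens, a 6.17 mm-wide sensor,
# and an image 4032 pixels wide.
fx = focal_length_px(4.2, 6.17, 4032)
```

This estimate only seeds the optimization; the remaining calibration parameters are refined from the detected pattern.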
1.2 Using a camera as a scanner
The original intent of this method was to develop a high-throughput substitute for slow flatbed scanners. The steps in Section 3 give details for the user. Briefly, the code provided with this chapter: 1) calibrates the camera, per image; 2) computes the homography that transforms the current image to the X-Y grid of the world coordinate system; and 3) warps the current image to match the world coordinate system's X-Y grid.
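The homography stage can be sketched as follows. This is a minimal direct-linear-transform estimate from four image-to-world point correspondences, standing in for the chapter's actual implementation (a library routine such as OpenCV's findHomography, followed by warpPerspective for the warping stage, would normally play this role); the point coordinates below are made up for illustration.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    (at least 4 pairs) with the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Detected pattern corners in the image (pixels) and their known
# positions on the world X-Y grid (millimeters); illustrative values.
src = [(10, 12), (410, 18), (395, 300), (15, 290)]
dst = [(0, 0), (100, 0), (100, 70), (0, 70)]
H = homography_dlt(src, dst)
```

Applying H to an image point in homogeneous coordinates and dividing by the third component gives the corresponding world X-Y position.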
Figures 2 and 3 show the input images and the output of the method. From the output images, users can apply their own computer vision techniques to identify the objects of interest. Measurements in pixels can be transformed to physical units by dividing by the user-selected scaling factor.
It is important to note a strong assumption of this method: the object must be planar. In practical terms, the user should either use objects that are roughly planar or consider the footprint of the object on the calibration pattern plane. The method is not suitable for measuring non-planar objects, such as free-standing branches imaged with the calibration pattern behind them.
To verify that the protocol has been performed correctly, we also include instructions for verifying that the measurements are correct by way of an artifact.
The materials needed are: the printed calibration pattern, a camera that writes EXIF data, and an artifact of known size.
The preparation of the calibration pattern is documented in Note 1. The choice of camera is not specific to this method, and should be made for the user's convenience. The method relies on the extraction of EXIF tags, so the camera must write EXIF data; at the time of this writing, this feature is common in consumer and cellular phone cameras. An artifact of known size is needed to check that the protocol has been implemented correctly. In our example, we chose a playing card, as shown in Figure 3. A natural choice for an artifact may be a ruler.
Step 1. Prepare the aruco calibration pattern. The pattern should be printed such that the X and Y axes are equally scaled, and attached to a flat surface. A pattern is provided in the tabb2019code_using resource, as well as code for generating a new pattern via aruco-pattern-write and instructions in its README. Considerations when generating a new pattern are in Note 1. The option of printing patterns on metal is discussed in Note 2.
Step 2. Arrange the object to be measured on top of the aruco pattern printout. If segmentation of the object from the scene via an image processing technique is desired, we suggest placing a solid-colored paper or fabric between the object and the pattern. See Note 3 for more details.
Step 3. Acquire images of the object, including at minimum a one-tag border of aruco tags on all four sides of the image. The image should generally be in focus, and acquired such that the camera body is parallel to the aruco pattern plane; however, the alignment does not have to be exact. See Figures 2 and 3 for examples. If using a cell phone camera, do not zoom. Standard image formats are all acceptable, as long as EXIF tags are generated.
Step 4. Acquire an image of an artifact (such as a ruler) of known size with the same protocol as in Step 3. We suggest that the artifact be rectangular in shape for ease of measurement.
Step 5. The preparation instructions for running the method on a group of images are given in the README of the camera-as-scanner repository. Create a test directory.
Step 6. Look up the camera's sensor size and convert it to millimeters. This information may be found in the manufacturer's documentation provided with the camera, or online. Fill in the sensor size parameters in the appropriate file as indicated in 5a.
Step 7. Measure one of the squares of the printed aruco calibration pattern, in millimeters. Fill in the square length parameter of the appropriate file as indicated in 5a.
Step 8. Move the images of the objects and the image of the artifact to a directory named images within the test directory.
Step 9. Determine the number of pixels per millimeter for the transformed images, here called p, which will be an argument for running the code. The choice of p depends on the size of the object, the size of the calibration pattern, and how large a result image one can tolerate. If the aruco calibration pattern print is W mm × H mm, the result images will be (p·W) × (p·H) pixels. See Note 4 for suggestions. Figures 2 and 3 show results for one such choice.
Step 10. Run the code camera-as-scanner with three, and optionally four, arguments: the test directory with the specified files and directory from Step 5, an empty output directory, and the number of pixels per millimeter. The optional fourth argument is a Boolean variable, true or false, indicating whether intermediate results are written: if true, the intermediate results are written; if false, they are not.
Step 11. Verify that the output is as expected by inspecting the warped image corresponding to the artifact. Measure the width of the artifact in an image manipulation program such as ImageJ, KolourPaint, the GIMP, or Adobe Photoshop; its units will be pixels. Measure the width of the physical artifact in millimeters. The width in pixels, divided by the chosen number of pixels per millimeter, should equal the width in millimeters. If not, recheck the steps. The verification process is demonstrated with the playing card artifact in Figure 3.
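The scaling and verification arithmetic above can be sketched as follows. The pattern dimensions, the measured widths, the tolerance, and the 63.5 mm playing-card width are illustrative assumptions, not values from the protocol.

```python
def output_size_px(pattern_w_mm, pattern_h_mm, px_per_mm):
    """Pixel dimensions of the warped output image for a given
    pixels-per-millimeter choice (Step 9)."""
    return round(pattern_w_mm * px_per_mm), round(pattern_h_mm * px_per_mm)

def verify_artifact(width_px, px_per_mm, width_mm, tol_mm=1.0):
    """Step 11 check: the artifact's width measured in the warped image,
    divided by the scaling factor, should recover its physical width."""
    return abs(width_px / px_per_mm - width_mm) <= tol_mm

# A 300 mm x 200 mm pattern at 4 px/mm gives a 1200 x 800 pixel image.
w, h = output_size_px(300, 200, 4)

# A poker-size playing card is about 63.5 mm wide; at 4 px/mm it should
# span roughly 254 pixels in the warped image.
ok = verify_artifact(width_px=252, px_per_mm=4, width_mm=63.5)
```

If the check fails, the sensor size, square length, or scaling argument is the first place to look.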
Note 1. The pattern can be scaled up or down to suit the data acquisition context, such as the example image provided in aruco-pattern-write. It is not necessary for the camera to view the whole pattern. The patterns are black and white, so they do not need to be printed in color.
Note 2. In our experiments, we have ordered prints of the patterns on aluminum. These have been convenient when working with fruit and plant material, because aluminum prints can be washed and cleaned. It is important that the aruco patterns not become occluded by dirt or stains.
Note 3. Concerning segmentation of the object from the scene of aruco pattern and solid-colored fabric or paper: we suggest choosing a fabric or paper whose color contrasts with that of the target object. The fabric or paper should be cleaned or replaced if it becomes dirty or stained. Whatever color is chosen will not interfere with the detection of the aruco tags.
Note 4. As the number of pixels per millimeter increases, so does the image size. We suggest trying a range of values on a small number of images to get a sense of the resulting file size and the resolution of features of interest.
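A crude sketch of the contrast-based segmentation suggested in Note 3, under stated assumptions: a synthetic image and a single-channel threshold stand in for a real pipeline, which would use a proper color-space threshold or a tool such as PlantCV.

```python
import numpy as np

def segment_by_contrast(img, channel=2, thresh=128):
    """Separate a dark object from a bright, solid-colored background by
    thresholding one channel; returns a boolean object mask."""
    return img[..., channel] < thresh

# Synthetic 4x4 RGB "image": bright background with a dark 2x2 object.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
img[1:3, 1:3] = 30
mask = segment_by_contrast(img)
```

The point of the contrasting background is exactly this: the simpler the color separation, the simpler the segmentation step can be.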
Acknowledgements. We acknowledge the support of the USDA-NIFA Specialty Crops Research Initiative, VitisGen2 Project (award number 2017-51181-26829).
- (1) Das, A., Schneider, H., Burridge, J., Ascanio, A.K.M., Wojciechowski, T., Topp, C.N., Lynch, J.P., Weitz, J.S., Bucksch, A.: Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics. Plant Methods 11(1), 51 (2015). DOI 10.1186/s13007-015-0093-3. URL https://doi.org/10.1186/s13007-015-0093-3
- (2) Diaz-Garcia, L., Covarrubias-Pazaran, G., Schlautman, B., Grygleski, E., Zalapa, J.: Image-based phenotyping for identification of QTL determining fruit shape and size in American cranberry (Vaccinium macrocarpon L.). PeerJ 6, e5461 (2018). DOI 10.7717/peerj.5461. URL https://peerj.com/articles/5461
- (3) Fahlgren, N., Feldman, M., Gehan, M., Wilson, M., Shyu, C., Bryant, D., Hill, S., McEntee, C., Warnasooriya, S., Kumar, I., Ficor, T., Turnipseed, S., Gilbert, K., Brutnell, T., Carrington, J., Mockler, T., Baxter, I.: A Versatile Phenotyping System and Analytics Platform Reveals Diverse Temporal Responses to Water Availability in Setaria. Molecular Plant 8(10), 1520–1535 (2015). DOI 10.1016/j.molp.2015.06.005. URL https://linkinghub.elsevier.com/retrieve/pii/S1674205215002683
- (4) Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F., Marín-Jiménez, M.: Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition 47(6), 2280–2292 (2014). DOI 10.1016/j.patcog.2014.01.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0031320314000235
- (5) Hartley, R., Zisserman, A.: Multiple view geometry in computer vision, 2nd edn. Cambridge University Press, Cambridge, UK; New York (2003)
- (6) Klein, L.L., Caito, M., Chapnick, C., Kitchen, C., O’Hanlon, R., Chitwood, D.H., Miller, A.J.: Digital Morphometrics of Two North American Grapevines (Vitis: Vitaceae) Quantifies Leaf Variation between Species, within Species, and among Individuals. Frontiers in Plant Science 8 (2017). DOI 10.3389/fpls.2017.00373. URL https://www.frontiersin.org/articles/10.3389/fpls.2017.00373/full
- (7) Li, M., An, H., Angelovici, R., Bagaza, C., Batushansky, A., Clark, L., Coneva, V., Donoghue, M.J., Edwards, E., Fajardo, D., Fang, H., Frank, M.H., Gallaher, T., Gebken, S., Hill, T., Jansky, S., Kaur, B., Klahs, P.C., Klein, L.L., Kuraparthy, V., Londo, J., Migicovsky, Z., Miller, A., Mohn, R., Myles, S., Otoni, W.C., Pires, J.C., Rieffer, E., Schmerler, S., Spriggs, E., Topp, C.N., Van Deynze, A., Zhang, K., Zhu, L., Zink, B.M., Chitwood, D.H.: Topological Data Analysis as a Morphometric Method: Using Persistent Homology to Demarcate a Leaf Morphospace. Frontiers in Plant Science 9 (2018). DOI 10.3389/fpls.2018.00553. URL https://www.frontiersin.org/articles/10.3389/fpls.2018.00553/full
- (8) Migicovsky, Z., Li, M., Chitwood, D.H., Myles, S.: Morphometrics Reveals Complex and Heritable Apple Leaf Shapes. Frontiers in Plant Science 8 (2018). DOI 10.3389/fpls.2017.02185. URL https://www.frontiersin.org/articles/10.3389/fpls.2017.02185/full
- (9) Orrù, M., Grillo, O., Lovicu, G., Venora, G., Bacchetta, G.: Morphological characterisation of Vitis vinifera L. seeds by image analysis and comparison with archaeological remains. Vegetation History and Archaeobotany 22(3), 231–242 (2013). DOI 10.1007/s00334-012-0362-2. URL https://doi.org/10.1007/s00334-012-0362-2
- (10) Snavely, N., Seitz, S.M., Szeliski, R.: Photo Tourism: Exploring Photo Collections in 3d. In: ACM SIGGRAPH 2006 Papers, SIGGRAPH ’06, pp. 835–846. ACM, New York, NY, USA (2006). DOI 10.1145/1179352.1141964. URL http://doi.acm.org/10.1145/1179352.1141964. Event-place: Boston, Massachusetts
- (11) Stoughton, T.R., Kriebel, R., Jolles, D.D., O’Quinn, R.L.: Next-generation lineage discovery: A case study of tuberous Claytonia L. American Journal of Botany 105(3), 536–548 (2018). DOI 10.1002/ajb2.1061. URL https://bsapubs.onlinelibrary.wiley.com/doi/abs/10.1002/ajb2.1061
- (12) Tabb, A.: Code from: Using cameras for precise measurement of two-dimensional plant features (2019). URL https://data.nal.usda.gov/dataset/code-using-cameras-precise-measurement-two-dimensional-plant-features
- (13) Wu, C.: Towards Linear-Time Incremental Structure from Motion. In: 2013 International Conference on 3D Vision, pp. 127–134. IEEE, Seattle, WA, USA (2013). DOI 10.1109/3DV.2013.25. URL http://ieeexplore.ieee.org/document/6599068/
- (14) Yoshioka, Y., Iwata, H., Hase, N., Matsuura, S., Ohsawa, R., Ninomiya, S.: Genetic Combining Ability of Petal Shape in Garden Pansy (Viola × wittrockiana Gams) based on Image Analysis. Euphytica 151(3), 311–319 (2006). DOI 10.1007/s10681-006-9151-2. URL https://doi.org/10.1007/s10681-006-9151-2
- (15) Zhang, Z.: A flexible new technique for camera calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22(11), 1330–1334 (2000). DOI 10.1109/34.888718