Using cameras for precise measurement of two-dimensional plant features

04/30/2019 · Amy Tabb, et al. · Universidad Tecnológica de Pereira and USDA

Images are used frequently in plant phenotyping to capture measurements. This chapter offers a repeatable method for capturing two-dimensional measurements of plant parts in field or laboratory settings using a variety of camera styles (cellular phone, DSLR), with the addition of a printed calibration pattern. The method is based on calibrating the camera using information available from the EXIF tags from the image, as well as visual information from the pattern. Code is provided to implement the method, as well as a dataset for testing. We include steps to verify protocol correctness by imaging an artifact. The use of this protocol for two-dimensional plant phenotyping will allow data capture from different cameras and environments, with comparison on the same physical scale.


1 Introduction

Images are used with increasing frequency in plant phenotyping for a variety of reasons. One reason is the ability to remotely capture data without disturbing the plant material, while another is the promise of high-throughput phenotyping via image processing pipelines such as those enabled by PlantCV fahlgren_versatile_2015 .

However, to acquire precise data suitable for measurements of two-dimensional objects, the prevailing method in the community is to use a flatbed scanner. Shape analysis of leaves has used scanned images for apple, grapevine, Claytonia L., and mixtures of species migicovsky_morphometrics_2018 ; klein_digital_2017 ; stoughton_next-generation_2018 ; li_topological_2018 (note that not all of the data in li_topological_2018 is from a scanner). Scanners have also been used to analyze the shape of pansy petals yoshioka_genetic_2006 and Vitis vinifera L. seeds orru_morphological_2013 .

Cameras have been used to phenotype a range of structures and sizes, such as cranberry fruit shape and size diaz-garcia_image-based_2018 and root system architecture das_digital_2015 . In both of these works, a disk of known diameter is added to the scene for scaling purposes.

1.1 Camera calibration

Figure 1: An aruco calibration pattern. This particular example has been printed on aluminum, so it can be cleaned during experiments, which is convenient in plant research.

The protocol in this paper transforms images acquired from a standard consumer camera such that measurements in pixels are representative of a planar scene. In effect, the method emulates a flatbed scanner with a consumer camera: angles between lines are preserved, as are ratios of distances. Physical measurements can be recovered from image measurements by dividing by the number of pixels per millimeter, just as with flatbed scanners.
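To make the scaling concrete, converting a pixel measurement from a warped image back to physical units is a single division by the pixels-per-millimeter factor. A minimal Python sketch follows; the function name and example values are ours, for illustration only, and are not part of the provided code:

    # Convert a length measured in a warped image (pixels) to millimeters,
    # given p, the user-selected number of pixels per millimeter.
    def pixels_to_mm(length_pixels, pixels_per_mm):
        return length_pixels / pixels_per_mm

    # Example: with p = 10 pixels/mm, a 635-pixel measurement is 63.5 mm.
    print(pixels_to_mm(635, 10))  # 63.5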

This method is needed because measurements of two-dimensional objects, when made in the image space of camera-acquired images, suffer diminished accuracy from physical perturbations. A small movement of the camera up or down gives the erroneous impression that an object is larger or smaller in terms of pixels. Image pixels are also subject to radial lens distortion and to the projective geometry that maps a three-dimensional scene onto a two-dimensional image. In other words, pixels on one side of the image may not represent the same physical dimensions as pixels in another portion of the image.

The method at the center of this protocol makes use of established camera calibration procedures to mitigate the problems of the preceding paragraph. Camera calibration is the estimation of parameters that relate three coordinate systems (image, camera, and world) to each other. This chapter does not have the space to treat this topic in depth, but Hartley and Zisserman hartley_multiple_2003 is a good text on the subject. When camera calibration is completed, the coordinate systems have been defined relative to a standard, and the relationships of one coordinate system to another are known.
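For readers who want the model behind these relationships, a schematic numpy sketch of the pinhole camera model described in hartley_multiple_2003 is below; all numeric values are illustrative assumptions, not calibration results from this protocol:

    import numpy as np

    # Pinhole model: a homogeneous world point X maps to pixel coordinates
    # via x ~ K [R | t] X, where K holds the intrinsic parameters and
    # [R | t] relates the world and camera coordinate systems.
    fx, fy = 3000.0, 3000.0            # focal lengths in pixels (illustrative)
    cx, cy = 2000.0, 1500.0            # principal point (illustrative)
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                           # world-to-camera rotation
    t = np.array([[0.0], [0.0], [500.0]])   # translation (e.g., in mm)

    X = np.array([[10.0], [20.0], [0.0], [1.0]])  # point on the pattern plane, Z = 0
    x = K @ np.hstack([R, t]) @ X                 # project to the image
    u = x[0, 0] / x[2, 0]                         # pixel column
    v = x[1, 0] / x[2, 0]                         # pixel row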

Calibration patterns are used to define coordinate systems relative to a standard. These may take many forms; in this work we use aruco patterns garrido-jurado_automatic_2014 . Laid out in a grid, the patterns define the X-Y plane of the world coordinate system, as in Figure 1. The camera captures an image of the pattern to aid in defining the world coordinate system with respect to the image and camera coordinate systems.
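As an illustration of how the pattern ties the coordinate systems together, the sketch below detects aruco markers with OpenCV's aruco module. It uses the older cv2.aruco.detectMarkers interface (newer OpenCV versions use cv2.aruco.ArucoDetector); the dictionary choice and file name are assumptions, and the pattern shipped with the provided code may use different settings:

    import cv2

    # Detect aruco markers in a photograph of the calibration pattern.
    # The dictionary must match the one the pattern was generated from.
    image = cv2.imread("pattern_photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)
    print("detected", 0 if ids is None else len(ids), "markers")

Each detected marker id corresponds to a known physical location on the printed grid, which is what allows the pattern to define the world X-Y plane.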

Usually, many views of the pattern are captured to solve an optimization problem that fully calibrates the camera zhang_flexible_2000 . However, the Structure from Motion (SfM) community snavely_photo_2006 ; wu_towards_2013 began exploiting EXIF (Exchangeable image file format) data, a type of metadata that is common in today's consumer cameras. Within SfM, the camera's sensor size and some data from the EXIF tags are used to generate an initial solution for some of the camera calibration parameters. We have borrowed this practice for calibrating in the phenotyping context.
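The EXIF-based initialization amounts to one proportion: the focal length in pixels is the EXIF focal length in millimeters, scaled by the image width over the sensor width. A Pillow sketch of this practice is below; the file name and sensor width are illustrative, and this mirrors the general SfM approach rather than reproducing the provided code:

    from PIL import Image

    # Read the focal length (mm) from the EXIF tags and convert it to
    # pixels using the camera's sensor width.
    img = Image.open("photo.jpg")
    exif_ifd = img.getexif().get_ifd(0x8769)   # 0x8769: EXIF sub-IFD
    focal_mm = float(exif_ifd[0x920A])         # 0x920A: FocalLength tag

    sensor_width_mm = 4.8                      # looked up for the camera model
    image_width_px = img.size[0]
    # Initial estimate of the focal length in pixels:
    focal_px = focal_mm * image_width_px / sensor_width_mm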

1.2 Using a camera as a scanner

Figure 2: Example of a grape cluster. This is a three-dimensional object, but we are interested in measuring aspects of the object where it meets the calibration pattern. Top row: input images of the same grape cluster; the left two images are from an Apple iPhone 6 (cellular phone camera), and the right two images are from a Canon EOS 60D DSLR camera. Bottom row: results of applying the method to the images above, where every 10 pixels equal 1 millimeter. Full images are available in camera-as-scanner-data.

The original intent of this method was to develop a high-throughput substitute for slow flatbed scanners. The steps in Section 3 give details for the user. Briefly, the code provided with this chapter, for each image: 1) calibrates the camera, 2) computes the homography that transforms the current image to the X-Y grid of the world coordinate system, and 3) warps the current image to match the world coordinate system's X-Y grid.
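A condensed sketch of steps 2 and 3 follows, assuming marker corners have already been detected and matched to their physical locations on the pattern. Per-image lens-distortion correction (part of step 1) is omitted, and all names are ours rather than the repository's:

    import cv2
    import numpy as np

    # image_pts: Nx2 detected corner locations in the photo (pixels).
    # world_pts_mm: Nx2 corresponding corner locations on the pattern (mm).
    # p: user-selected pixels per millimeter for the output image.
    def warp_to_metric_grid(image, image_pts, world_pts_mm, p, out_w_mm, out_h_mm):
        src = np.asarray(image_pts, dtype=np.float32)
        dst = np.asarray(world_pts_mm, dtype=np.float32) * p  # mm -> output pixels
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)       # step 2: homography
        size = (int(round(out_w_mm * p)), int(round(out_h_mm * p)))
        return cv2.warpPerspective(image, H, size)            # step 3: warp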

Figure 2 shows the input images and the output of the method. From the output images, users can apply their own computer vision techniques to identify the objects of interest. Measurements in pixels can be transformed to physical units by dividing by the user-selected scaling factor.

It is important to note a strong assumption of this method: the object must be planar. In practical terms, the user should either use objects that are roughly planar, or consider only the footprint of the object on the calibration pattern plane. The method is not suitable for measuring non-planar objects, such as free-standing branches with the calibration pattern behind them.

To verify that the protocol has been performed correctly, we also include instructions for verifying that the measurements are correct by way of an artifact.

2 Materials

The materials needed are:

  1. calibration pattern

  2. camera

  3. artifact

  4. code

The preparation of the calibration pattern is documented in Note 1. The style of camera is not specific to this method and should be chosen for the user's convenience. The method relies on the extraction of EXIF tags, so the camera should write EXIF data; at the time of this writing, this feature is common in consumer and cellular phone cameras. An artifact of known size is needed to check that the protocol has been implemented correctly. In our example, we chose a playing card, as shown in Figure 3. A natural choice for an artifact may be a ruler.

Figure 3: Left: Apple iPhone 6 camera image of a playing card of known size. Right: result of applying the method to the image at left, where every 10 pixels equal 1 millimeter. Black lines indicate measurements of the card in pixels; dividing these pixel lengths by 10 pixels/mm recovers the card's physical width and height as measured by this system.

The code and test datasets are provided in tabb2019code_using . Within tabb2019code_using are two programs and a dataset: aruco-pattern-write, camera-as-scanner, and the data camera-as-scanner-data. To prepare for the experiments, install the code and run the examples.

3 Methods


  1. Prepare the aruco calibration pattern. The pattern should be printed such that the x and y axes are equally scaled, and attached to a flat surface. A pattern is provided in the tabb2019code_using resource, as well as code for generating a new pattern via aruco-pattern-write and instructions in its README. Considerations when generating a new pattern are in Note 1. The option of printing patterns on metal is discussed in Note 2.

  2. Arrange the object to be measured on top of the aruco pattern printout. If segmentation of the object from the scene is desired using an image processing technique, we suggest placing a solid-colored paper or fabric between the object and the pattern. See Note 3 for more details.

  3. Acquire images of the object, including at minimum a one-layer border of aruco tags on all four sides of the image. The image should generally be in focus and acquired such that the camera body is parallel to the aruco pattern plane; however, the alignment does not have to be exact. See Figures 2 and 3 for examples. If using a cell phone camera, do not zoom. Standard image formats are all acceptable, as long as EXIF tags are generated.

  4. Acquire an image of an artifact (such as a ruler) of known size using the same protocol as in Step 3. We suggest that the artifact be rectangular in shape to allow for ease of measurement.

  5. Prepare the image and format information to run camera-as-scanner. This step assumes that the code has been installed according to its instructions, mentioned in Section 2.


    a. The preparation instructions for running the method on a group of images are given in the README of the repository camera-as-scanner. Create a test directory.

    b. Look up the camera's sensor size and convert it to millimeters. This information may be found in the manufacturer's documentation provided with the camera, or online. Fill in the sensor size parameters in the appropriate file as indicated in Step 5a.

    c. Measure one of the squares of the printed aruco calibration pattern, in millimeters. Fill in the square length parameter of the appropriate file as indicated in Step 5a.

    d. Move the images of the objects and the image of the artifact to a directory named images within the test directory.

    e. Determine the number of pixels per millimeter, p, for the transformed images; this will be an argument when running the code. The choice of p depends on the size of the object, the size of the calibration pattern, and how large a result image one can tolerate. If the aruco calibration pattern print is W mm x H mm, the result images will be pW pixels x pH pixels. See Note 4 for suggestions. In Figures 2 and 3, p = 10 was chosen.

  6. Run the code camera-as-scanner with three, and optionally four, arguments: the test directory with the specified files and directories from Step 5, an empty output directory, and the number of pixels per millimeter, p. The optional fourth argument is a Boolean variable, true or false, indicating whether intermediate results are written: if true, the intermediate results are written; if false, they are not.

  7. Verify that the output is as expected by inspecting the warped image corresponding to the artifact. Measure the width of the artifact in an image manipulation program such as ImageJ, KolourPaint, GIMP, Adobe Photoshop, etc.; its units will be pixels, w_pixels. Measure the width of the physical artifact in millimeters, w_mm. The following should hold: w_pixels / p ≈ w_mm. If not, recheck the steps. The verification process was demonstrated with the playing card artifact in Figure 3; a minimal numeric version of this check is sketched after this list.
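The arithmetic of Step 7 is small enough to script. A minimal check follows, with all numeric values illustrative (the w_mm value corresponds to a standard playing card width, and the 1 mm tolerance is our assumption, not a threshold from the provided code):

    # Verify the protocol with an artifact of known size.
    p = 10.0          # pixels per millimeter used when running camera-as-scanner
    w_pixels = 636.0  # artifact width measured in the warped image
    w_mm = 63.5       # width of the physical artifact, measured by hand

    measured_mm = w_pixels / p
    error_mm = abs(measured_mm - w_mm)
    print(f"measured {measured_mm:.1f} mm, error {error_mm:.2f} mm")
    assert error_mm < 1.0, "measurement disagrees; recheck the protocol steps"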

4 Notes


  1. Note that the pattern can be scaled up or down to suit the data acquisition context, such as the example image provided in aruco-pattern-write. It is not necessary for the camera to view the whole pattern. The patterns are black and white, so they do not need to be printed in color.

  2. In our experiments, we have ordered prints of the patterns on aluminum. These have been convenient when working with fruit and plant material, because aluminum prints can be washed and cleaned. It is important that the aruco patterns not become occluded by dirt or stains.

  3. Concerning segmentation of the object from the scene of aruco pattern and solid-colored fabric or paper: we suggest choosing a fabric or paper whose color contrasts with the target object. The fabric or paper should be cleaned or replaced if it becomes dirty or stained. Whatever color is chosen will not interfere with detection of the aruco tags. A minimal segmentation sketch follows this list.

  4. As p increases, so does the image size. We suggest trying a range of values of p with a small number of images to get a sense of the resulting file size and the resolution of features of interest.
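To accompany Note 3, here is a minimal segmentation sketch: it thresholds a solid-colored background in HSV space and inverts the mask to isolate the object. The blue background, file names, and threshold values are assumptions that must be tuned to the chosen fabric or paper:

    import cv2

    # Segment the object from a solid-colored background in a warped image.
    image = cv2.imread("warped_object.png")
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, (90, 60, 60), (130, 255, 255))  # blue range
    object_mask = cv2.bitwise_not(background)                     # object = not background
    cv2.imwrite("object_mask.png", object_mask)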

Acknowledgements.
We acknowledge the support of USDA-NIFA Specialty Crops Research Initiative, VitisGen2 Project (award number 2017-51181-26829).

References