Computer-assisted polyp matching between optical colonoscopy and CT colonography: a phantom study

01/15/2015
by Holger R. Roth, et al.
UCL

Potentially precancerous polyps detected with CT colonography (CTC) need to be removed subsequently, using an optical colonoscope (OC). Due to large colonic deformations induced by the colonoscope, even very experienced colonoscopists find it difficult to pinpoint the exact location of the colonoscope tip in relation to polyps reported on CTC. This can cause unduly prolonged OC examinations that are stressful for the patient, colonoscopist and supporting staff. We developed a method, based on monocular 3D reconstruction from OC images, that automatically matches polyps observed in OC with polyps reported on prior CTC. A matching cost is computed, using rigid point-based registration between surface point clouds extracted from both modalities. A 3D printed and painted phantom of a 25 cm long transverse colon segment was used to validate the method on two medium sized polyps. Results indicate that the matching cost is smallest at the correct corresponding polyp between OC and CTC: it is 3.9 times higher at the incorrect polyp than at the correct match. Furthermore, we evaluate the matching of the reconstructed polyp from OC against other colonic endoluminal surface structures, such as haustral folds, and show that the cost has its minimum at the correct polyp from CTC. Automated matching between polyps observed at OC and prior CTC would facilitate the biopsy or removal of true-positive pathology or exclusion of false-positive CTC findings, and would reduce colonoscopy false-negative (missed) polyps. Ultimately, such a method might reduce healthcare costs, patient inconvenience and discomfort.



1 Introduction

Detection of potentially precancerous polyps and cancers using CT colonography (CTC) is only the first step of patient treatment. Once a suspected polyp has been detected using CTC, the patient must undergo subsequent optical colonoscopy (OC) to enable polypectomy – surgical removal of the polyp. However, CTC interpretation can suffer from false-positive polyps reported by radiologists, and such polyps will not be found during subsequent OC. Therefore, a polyp seen at CTC may not be seen at subsequent colonoscopy for either of two reasons: (1) a polyp is not actually present, i.e. a false-positive CTC diagnosis; or (2) a polyp is present, but the colonoscopist cannot find it.

In present clinical practice, the colonoscopist does not know which of these two circumstances applies when a polyp reported on CTC cannot be found during subsequent OC. Previous studies of back-to-back OC indicate that the miss-rate for adenomas of 10 mm or larger is approximately 6% [12]. Due to deformations induced by the colonoscope, variable colon lengths and differences in haustration, even very experienced colonoscopists find it difficult to pinpoint the precise location of the endoscope tip during the examination. This makes it hard to relate the colonoscopic video to the radiological findings from CTC, and can cause unduly prolonged OC examinations that are stressful for the patient, colonoscopist and supporting staff.

In this study, we performed a phantom study that investigated the feasibility of automatically matching polyps and other abnormalities perceived during OC with polyps reported on prior CTC.

1.1 Related work

Hong et al. [5] propose an algorithm that detects the edges of haustral folds using a single OC image. They extract rings of fold contours by fitting Bezier curves. These contours are then used to reconstruct a virtual 3D representation of a colonic segment by reverse-projecting the 2D fold contours into 3D space, as described in their previous work [6]. They propose to use this method for estimating how much of the colonic surface mucosa a colonoscopist has observed during OC. However, their method relies on the assumption that the diameter of the fold contours and of the colon does not change within the segment observed in a single OC image, so that a simple reverse-projection of the fold contours gives a reasonable estimate of the fold position in 3D – an assumption certainly not true for the entirety of the colon observed in colonoscopy.

Liu et al. [9] describe a purely computer-vision-based method for tracking the motion of a colonoscope in relation to virtual colonoscopy (VC). After manual initialization between the virtual view in VC and the current location of the colonoscope tip, they estimate the motion of the colonoscope using an optical-flow-based method that ignores non-informative (blurry) image frames [8]. Their exclusive use of vision-based information avoids any external tracking sensors, such as electromagnetic tracking. However, Liu et al. [9] describe certain limitations of their current method: it relies on a good manual initialization between the first OC frame to be tracked and the VC view. A good initialization is mainly achieved by using colonic regions with clear landmarks, such as polyps, that need to be linked by experts in both OC and VC. Otherwise, their optical flow algorithm relies on the fold structures of the colon, which can be misleading due to its tubular structure and repeating fold shapes [8]. Furthermore, the colon is highly flexible in the sigmoid and transverse regions, and here Liu et al. [9] describe problems when tracking the colonoscope due to the large deformations and “sharp turns” of the colonoscope as it passes through. Continuous tracking of the colonoscope using optical flow proved very challenging in these regions, and several re-initializations of the method would need to take place. Therefore, more automated methods of initialization between the optical and virtual colonoscopic images would need to be implemented in order to make this method work in clinical practice.

The work presented here was motivated by this issue of providing the means of automated initialization between OC and VC images. Previous work in optical colonoscopy exists on camera motion estimation (potentially usable for 3D reconstruction) [14, 15] and sparse 3D reconstruction [7, 3, 1], but no attempt has been made to use that information in order to match with VC. Furthermore, it is likely that a sparse 3D reconstruction will not deliver the required detail to accurately match to VC data.

2 Methods

Recent advances in computer vision (CV) enable us to compute dense 3D structural information from 2D camera images. Structure-from-motion approaches to 3D reconstruction use the perspective shift between camera images observing the same three-dimensional scene. After pre-calibration of the camera, different viewpoints of the same structure allow 3D points to be triangulated from their corresponding projections in the camera images, provided the relative translation and rotation of the cameras are known [4]. Figure 1 illustrates the reconstruction of the 3D shape of a polyp from three colonoscopy camera images showing the same polyp from different viewing angles. Automatic matching between the different images allows the 3D reconstruction of the polyp’s shape and of the camera’s location with respect to the polyp. The real shape of the polyp is then recovered by triangulating its surface points in 3D space from the camera locations.
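As a concrete illustration of the triangulation step, the following minimal numpy sketch recovers a 3D point from its projections in two calibrated views via linear (DLT) triangulation. The camera matrices and the `triangulate_dlt` helper are illustrative assumptions for a synthetic setup, not the paper’s implementation.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D pixel coordinates (x, y) of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic calibrated cameras: identity pose and a 1-unit baseline along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.2, -0.1, 4.0])
project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(bool(np.allclose(X_hat, X_true)))  # True
```

In the noise-free case the DLT solution is exact; with real colonoscopy images, noisy matches and calibration errors make the recovered point only an estimate.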

Figure 1: 3D reconstruction of an abnormal polyp from three colonoscopy images.

2.1 Dense 3D reconstruction

We used a dense 3D reconstruction algorithm (specifically developed for narrow-baseline applications) based on an open-source implementation by Paalanen et al. [11] that uses the plane-sweep approach proposed by Collins [2] (Figure 2 illustrates the principles of the plane-sweep approach). This approach has the advantage that it is parallelizable and can therefore be used to reconstruct surfaces quickly from multiple camera views using commodity graphics hardware, as introduced by Yang and Pollefeys [16]. The plane-sweep approach assumes a plane in 3D space onto which OC images can be back-projected. Corresponding points from the same objects in the colonoscopic scene are then back-projected onto the same plane in 3D. This is measured by defining a matching cost at each pixel that measures the dissimilarity of the back-projected points. This cost is computed for each pixel in the images at several depths as in

$$C_d(\mathbf{x}) = \frac{1}{N}\sum_{i=1}^{N}\left|I_i(\mathbf{x}) - \bar{I}_d(\mathbf{x})\right| \tag{1}$$

where

$$\bar{I}_d(\mathbf{x}) = \frac{1}{N}\sum_{i=1}^{N}I_i(\mathbf{x}) \tag{2}$$

is the mean of the views back-projected onto depth plane $d$, $N$ is the number of views and $I_i$ are the gray images at each view, as in [11].

The resulting reconstructed pixel depths are the ones minimizing the dissimilarity cost across all planes [11]. We ignore specular reflections when tracking the motion of the OC camera by detecting areas of high intensity in the OC images using thresholding.
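The per-pixel minimization over depth planes can be sketched as below. This toy version assumes fronto-parallel planes, so that back-projecting each view onto a plane reduces to an integer column shift (the actual implementation [11] warps views with homographies on the GPU); `plane_sweep_depths` and its shift parameterisation are illustrative assumptions.

```python
import numpy as np

def plane_sweep_depths(ref, others, shifts_per_plane, specular_thresh=250):
    """Per-pixel depth-plane selection by minimising a dissimilarity cost.

    ref             : reference gray image (H x W float array).
    others          : list of gray images from other views.
    shifts_per_plane: for each depth plane, the integer column shift that
                      back-projects the other views onto that plane
                      (a fronto-parallel simplification of the homographies).
    Returns the index of the cost-minimising plane per pixel; very bright
    (specular) pixels are masked out by thresholding, as described above.
    """
    H, W = ref.shape
    costs = np.empty((len(shifts_per_plane), H, W))
    for p, shift in enumerate(shifts_per_plane):
        # Mean absolute difference to the reference after back-projection.
        diffs = [np.abs(np.roll(img, shift, axis=1) - ref) for img in others]
        costs[p] = np.mean(diffs, axis=0)
    best_plane = np.argmin(costs, axis=0)
    best_plane[ref >= specular_thresh] = -1  # ignore specular highlights
    return best_plane

# Synthetic check: a second view that sees the scene shifted left by 3 columns
# should be matched best at the plane whose shift undoes that displacement.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 200.0, size=(8, 12))
other = np.roll(ref, -3, axis=1)
best = plane_sweep_depths(ref, [other], shifts_per_plane=[0, 1, 2, 3, 4, 5])
print(bool((best == 3).all()))  # True
```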

Figure 2: Plane-sweep approach used for multi-view 3D reconstruction. The 3D space is discretized into several parallel planes. This method allows us to measure the dissimilarity of back-projected points (pixels) at these planes [2, 16].

2.2 Point set registration

After dense 3D reconstruction of the endoscopic scene, we use the rigid version of Coherent Point Drift (CPD), a point set registration algorithm that allows us to match the polyp point clouds reconstructed from OC to the targeted polyp surfaces extracted from CTC. CPD uses the L2 norm between the two sets of points and probabilistic methods with Gaussian mixture models [10], minimizing the negative log-likelihood

$$E(\theta,\sigma^2) = -\sum_{n=1}^{N}\log\sum_{m=1}^{M}\frac{1}{M\,(2\pi\sigma^2)^{D/2}}\,e^{-\frac{\left\|\mathbf{x}_n-\mathcal{T}(\mathbf{y}_m,\theta)\right\|^2}{2\sigma^2}} \tag{3}$$

where $X=\{\mathbf{x}_n\}$ and $Y=\{\mathbf{y}_m\}$ are the target and source point sets, $N$ and $M$ are the number of target and source points, respectively, $\mathcal{T}(\cdot,\theta)$ is the rigid transformation, and $D$ is the dimensionality of the points – 3 in this case.
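The Gaussian-mixture matching cost behind Eq. (3) can be sketched as below. This computes only the data term for a fixed variance and omits the EM optimization of the rigid transformation that full CPD performs, so `gmm_matching_cost` is an illustrative simplification, not a CPD implementation.

```python
import numpy as np

def gmm_matching_cost(X, Y, sigma2):
    """Negative log-likelihood of target points X under a Gaussian mixture
    whose M components are centred on the source points Y (cf. Eq. (3),
    with the constant normalisation factors dropped).

    X : (N, D) target point set;  Y : (M, D) source point set.
    """
    # Squared distances between every target and source point: shape (N, M).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    # Log-sum-exp over mixture components for numerical stability.
    log_lik = np.logaddexp.reduce(-d2 / (2 * sigma2), axis=1)
    return -log_lik.sum()

# A well-aligned target point cloud should score a lower cost than a
# grossly mis-aligned one.
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 3))                          # "source" polyp surface
X_good = Y + rng.normal(scale=0.01, size=Y.shape)     # well-aligned target
X_bad = Y + 2.0                                       # mis-aligned target
print(gmm_matching_cost(X_good, Y, 0.1) < gmm_matching_cost(X_bad, Y, 0.1))  # True
```

In full CPD the variance is re-estimated in each EM iteration; its final value indicates how well the two point sets ended up aligned.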

3 Experimental evaluation

3.1 Colon phantom generation

A colon phantom was produced using 3D printing (MakerBot) from a stereolithography model generated from real CTC data of a 25 cm long section of transverse colon with two polyps (8 mm and 15 mm diameter). The colon was printed in two halves such that blood vessels and colon mucosa could be painted on the inside of the model in order to mimic the visual features of colonic mucosa. Figure 3 shows the underlying 3D stereolithography model that was used for printing one of the two sides of the colon. After painting (see Fig. 4), both halves were assembled to form a closed colon model into which an endoscope could be inserted.

Figure 3: Underlying 3D stereolithography model for 3D-printing a colon phantom.
Figure 4: We painted a colon phantom after 3D-printing in order to mimic the visual features of colonic mucosa.

3.2 Endoscopic video acquisition and 3D reconstruction

We used one video channel (as with a standard monocular colonoscope) of a VIKING laparoscope with 1920x539 resolution and recorded a short sweep motion of the camera around the 15 mm polyp as the input sequence to the 3D reconstruction algorithm. For 3D reconstruction in OC, we used only the upper field of the interlaced video and all video frames were resized to half size in order to achieve real-time processing speed (at 15 fps using up to 32 views for 3D reconstruction). Figure 5 shows the 3D reconstruction of a colonic surface with the 15 mm polyp acquired using the rigid phantom.
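The frame preprocessing described above (keeping only the upper field of the interlaced video and halving the frame size) might look as follows in numpy; the decimation-based resize here is a stand-in for whatever resampling filter the actual pipeline used.

```python
import numpy as np

def preprocess_frame(frame):
    """Keep only the upper field of an interlaced frame and halve its size.

    frame : (H, W) gray image; rows 0, 2, 4, ... form the upper field.
    Halving is done by simple decimation (a stand-in for proper resizing).
    """
    upper_field = frame[0::2, :]   # drop the lower-field rows
    return upper_field[::2, ::2]   # naive half-size downsampling

frame = np.arange(24).reshape(4, 6)
small = preprocess_frame(frame)
print(small.shape)  # (1, 3)
```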

Figure 5: 3D reconstruction of a rigid colon phantom surface at a 15 mm polyp in the transverse colon. Top: three example images of the polyp from optical colonoscopy (OC). Bottom: 3D reconstructed point cloud from three different views. The tracked OC camera positions are plotted using magenta dots (green dots indicate camera viewing direction).

3.3 Matching with targeted polyp

After 3D reconstruction of the 15 mm polyp observed at OC, the reconstructed point cloud can be registered to polyp surface point clouds extracted from CTC, using a segmentation of the colonic lumen [13]. Running the rigid CPD registration against all polyp locations from CTC can then provide a measurement that indicates which CTC polyp corresponds to the 3D reconstructed point cloud around the polyp observed in OC. Table 1 shows the resulting measurement (the square root of the final variance, $\sigma$) for the 8 mm and 15 mm CTC polyps and the OC point cloud of the 15 mm polyp (as shown in Fig. 5). One can see that $\sigma$ is 3.9 times higher at the incorrect polyp than at the correct match.
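The exhaustive matching over CTC polyp candidates can be sketched as below. For brevity the registration cost here is a simple RMS nearest-neighbour distance rather than rigid CPD, so `best_polyp_match` and `rms_nn_cost` are illustrative stand-ins for the actual registration routine.

```python
import numpy as np

def best_polyp_match(oc_points, ctc_candidates, register):
    """Register the OC point cloud to every CTC candidate and return the
    index of the candidate with the smallest residual cost.

    register : callable(source, target) -> cost; any registration routine
    returning a final dissimilarity (e.g. a rigid CPD wrapper) would work.
    """
    costs = [register(oc_points, cand) for cand in ctc_candidates]
    return int(np.argmin(costs)), costs

def rms_nn_cost(src, tgt):
    """Illustrative stand-in cost: RMS nearest-neighbour distance (not CPD)."""
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1).mean()))

# Synthetic check: only candidate 1 is a near-copy of the OC point cloud.
rng = np.random.default_rng(1)
polyp = rng.normal(size=(40, 3))
candidates = [polyp + 5.0,
              polyp + rng.normal(scale=0.05, size=polyp.shape),
              polyp + 3.0]
idx, costs = best_polyp_match(polyp, candidates, rms_nn_cost)
print(idx)  # 1
```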

Table 1: Rigid Coherent Point Drift (CPD) registration of a 3D reconstructed point cloud from optical colonoscopy (OC) with two polyp surface point clouds extracted from CT colonography (CTC). The table shows the distance measure $\sigma$, which indicates how well the reconstructed OC points and the CTC polyp point clouds are aligned. Absolute values are shown on the left; the right table shows the values normalized by the value at the correct polyp match (colored in green). At the incorrect polyp, the value of $\sigma$ is 3.9 times larger.

3.4 Matching with other colonic surface structures

Figure 6 shows a comparison of the $\sigma$ value when rigidly registering the 3D reconstructed point cloud (15 mm polyp) from OC with other colonic surface structures extracted from CTC, such as haustral folds and the targeted 15 mm polyp. In total, 370 surface structures evenly distributed over the colon surface were used for registration. The global minimum of $\sigma$ is correctly located at the surface point cloud extracted from the 15 mm polyp ($\sigma$ = 0.65).

Figure 6: Comparison of the $\sigma$ value between the 3D reconstructed point cloud from optical colonoscopy (OC) and other colon surface structures, such as haustral folds and the targeted 15 mm polyp, on a short section of transverse colon extracted from CT colonography. Polyp candidates are shown in blue. The correctly matched polyp is shown in red (the 3D reconstruction from OC views the polyp from the side, as seen in Fig. 5).

4 Discussion

Potentially, 3D reconstructions of polyps observed at OC could be matched with corresponding polyps reported at prior CTC. This would help the colonoscopist assess whether the polyp he or she is seeing during the colonoscopic procedure is the same one reported by the radiologist at prior CTC. Because it is known that CTC can miss polyps, automatic polyp matching could let the colonoscopist more confidently determine whether an observed polyp is one previously reported or whether it is new (i.e. a false-negative CTC finding).

This phantom study illustrates how automatic polyp matching between OC and CTC could be achieved, thus facilitating localization and verification of polyps detected at CTC. This would facilitate biopsy of true-positive pathology and exclusion of false-positive CTC findings, and would likely also reduce the occurrence of false-negative (missed) polyps at colonoscopy when these have been seen previously at CTC. It might also help reduce healthcare costs, patient inconvenience and discomfort. This work assumes that bowel motion is relatively constrained (rigid) at the location of the polyp and that the majority of the perspective shift comes from the movement of the colonoscopic camera, which is likely to be true for short periods of time during colonoscopy. Incorporating electromagnetic (EM) tracking data, or a way of automatically measuring the inserted length of the colonoscope, could provide a means of disambiguating between multiple potentially matching polyp locations in CTC.

5 Acknowledgments

The authors gratefully acknowledge financial support for this work from the NIHR Program Grants for Applied Research: “Imaging diagnosis of colorectal cancer: Interventions for efficient and acceptable diagnosis in symptomatic and screening populations” (Grant No. RP-PG-0407-10338) and the EPSRC-CRUK Comprehensive Cancer Imaging Centre of UCL and KCL (Grant No. C1519AO).

References

  • Chen et al. [2009] Chen, C.-I., D. Sargent, C.-M. Tsai, Y.-F. Wang, and D. Koppel (2009). Uniscale multi-view registration using double dog-leg method. In SPIE Medical Imaging, pp. 72611F–72611F. International Society for Optics and Photonics.
  • Collins [1996] Collins, R. T. (1996). A space-sweep approach to true multi-image matching. In Computer Vision and Pattern Recognition, 1996. Proceedings CVPR’96, 1996 IEEE Computer Society Conference on, pp. 358–363. IEEE.
  • Gu and Poirson [2007] Gu, J. and A. Poirson (2007). Computer-aided diagnosis (cad) for colonoscopy. In Medical Imaging, pp. 65142J–65142J. International Society for Optics and Photonics.
  • Hartley and Zisserman [2000] Hartley, R. and A. Zisserman (2000). Multiple view geometry in computer vision, Volume 2. Cambridge Univ Press.
  • Hong et al. [2011] Hong, D., W. Tavanapong, J. Wong, J. Oh, and P. C. de Groen (2011). Colon fold contour estimation for 3d visualization of colon structure from 2d colonoscopy images. In Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on, pp. 121–124. IEEE.
  • Hong et al. [2009] Hong, D., W. Tavanapong, J. Wong, J. Oh, and P. Groen (2009). 3d reconstruction of colon segments from colonoscopy images. In Bioinformatics and BioEngineering, 2009. BIBE’09. Ninth IEEE International Conference on, pp. 53–60. IEEE.
  • Koppel et al. [2007] Koppel, D., C.-I. Chen, Y.-F. Wang, H. Lee, J. Gu, A. Poirson, and R. Wolters (2007). Toward automated model building from video in computer-assisted diagnoses in colonoscopy. In Medical Imaging, pp. 65091L–65091L. International Society for Optics and Photonics.
  • Liu et al. [2008] Liu, J., K. Subramanian, T. Yoo, and R. Van Uitert (2008). A stable optic-flow based method for tracking colonoscopy images. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW ’08. IEEE Computer Society Conference on, pp. 1–8.
  • Liu et al. [2013] Liu, J., K. R. Subramanian, and T. S. Yoo (2013). An optical flow approach to tracking colonoscopy video. Computerized Medical Imaging and Graphics 37(3), 207 – 223.
  • Myronenko and Song [2010] Myronenko, A. and X. Song (2010). Point set registration: coherent point drift. Pattern Analysis and Machine Intelligence, IEEE Transactions on 32(12), 2262–2275.
  • Paalanen and Kamarainen [2010] Paalanen, P. and J.-K. Kamarainen (2010). Narrow baseline glsl multiview stereo. 3D Data Processing, Visualization and Transmission (3DPVT).
  • Rex et al. [1997] Rex, D., C. Cutler, G. Lemmel, E. Rahmani, D. Clark, D. Helper, G. Lehman, and D. Mark (1997). Colonoscopic miss rates of adenomas determined by back-to-back colonoscopies. Gastroenterol. 112(1), 24–28.
  • Roth et al. [2011] Roth, H. R., J. R. McClelland, D. J. Boone, M. Modat, M. J. Cardoso, T. E. Hampshire, M. Hu, S. Punwani, S. Ourselin, G. G. Slabaugh, S. Halligan, and D. J. Hawkes (2011). Registration of the endoluminal surfaces of the colon derived from prone and supine ct colonography. Medical Physics 38(6), 3077–3089.
  • Sargent et al. [2009] Sargent, D., C.-I. Chen, C.-M. Tsai, Y.-F. Wang, and D. Koppel (2009). Feature detector and descriptor for medical images. In SPIE Medical Imaging, pp. 72592Z–72592Z. International Society for Optics and Photonics.
  • Sargent et al. [2011] Sargent, D., S. Y. Park, I. Spofford, and K. Vosburgh (2011). Image-based camera motion estimation using prior probabilities. In SPIE Medical Imaging, pp. 79641U–79641U. International Society for Optics and Photonics.
  • Yang and Pollefeys [2005] Yang, R. and M. Pollefeys (2005). A versatile stereo implementation on commodity graphics hardware. Real-Time Imaging 11(1), 7–18.