Coarse-to-fine Surgical Instrument Detection for Cataract Surgery Monitoring

09/19/2016 · by Hassan Al Hajj, et al. · Inserm

The amount of surgical data recorded during video-monitored surgeries has increased dramatically. This paper aims at improving existing solutions for the automated analysis of cataract surgeries in real time. Through the analysis of a video recording of the operating table, it is possible to know which instruments exit or enter the operating table, and therefore which ones are likely being used by the surgeon. Combining these observations with observations from the microscope video should enhance the overall performance of the system. To this end, the proposed solution is divided into two main parts: one to detect the instruments at the beginning of the surgery and one to update the list of instruments every time a change is detected in the scene. In the first part, the goal is to separate the instruments from the background and from irrelevant objects. In the second part, we are interested in detecting the instruments that appear and disappear whenever the surgeon interacts with the table. Experiments on a dataset of 36 cataract surgeries validate the good performance of the proposed solution.

I Introduction

With the emergence of many medical imaging devices and technologies in the operating room (MRI, ultrasound imaging, surgical microscope, etc.), the automated analysis of data recorded during video-monitored surgeries has developed over the last decade. Methods emerging from this research field could help surgeons in different ways: report generation [1, 2], surgical skill evaluation or the construction of educational videos [3]. In addition, real-time video monitoring would make it possible to automatically communicate useful information to the surgeon during surgery.
For instance, studies have been initiated to set up warning/recommendation generation systems for video-monitored surgery. This includes fast and robust methods to recognize surgical tasks, steps or gestures in real time [4, 5, 6]. With such methods, it would be possible to distinguish a normal conduct of surgery from an abnormal one. The results obtained are very encouraging, but they highlight one main challenge: to improve the interpretation of the video, one should be able to detect all surgical instruments. However, these instruments have a wide variety of shapes and are only partially visible in the surgical scene. Many studies have tackled the surgical instrument detection problem. The work carried out can be divided into two categories. The first category relies on radio-frequency identification (RFID) [7]. The second category is based on image processing. Compared to the first category, the biggest advantage of image processing methods is that they do not require the installation of additional components in the operating room that would alter the surgical procedure.
To address the partial occlusion problem, we propose the addition of a second video stream, filming the operating table (see Fig. 1). By knowing which instruments exit or enter the operating table, we know which tools are likely being used by the surgeon and which tools surely are not. In this context, many methods have been proposed for detecting, monitoring and recognizing surgical instruments in different areas of medical surgery [8, 9, 10]. All these methods have focused on a small number of highly differentiated instruments [11]. We work in a different context: we are dealing with many instruments, many of which strongly resemble one another. Although instruments are more easily detected on the surgical table than in the microscope video, analyzing the table video is challenging as well, due to the variety of actions that surgeons can perform on the operating table (preparing the implant, filling syringes, etc.).
In this paper, we present two methods: one to segment the instruments at the beginning of the surgery and one to detect the instruments that appear and disappear over the course of the surgery.

(a) Operating table
(b) Microscope field of view
Fig. 1: Operating table image captured at time t. Microscope image captured at time t + a few seconds, showing part of the knife that has been taken out from the table.

II Method Overview

We have two objectives, i.e. two tasks to accomplish: 1) describing the initial state, at the beginning of the surgery and 2) describing changes, whenever motion is detected on the table. The reason we propose to describe the changes, rather than rerunning state description every time a change is detected, is that we assume change detection is more accurate in view of the issues we may encounter in the table scene, e.g. occlusion problems.

A similar solution is proposed for both tasks. Because there are many instruments, we do not want to manually define a model for each tool. Instead, we propose a general, strongly supervised solution. For that purpose, manual ground truth was provided for a subset of images from the video dataset. For the first task, instruments were manually segmented. For the second task, changes (appearing/disappearing instruments) were manually segmented. The solutions are based on analogy reasoning (k-NN): images are divided into patches and a feature vector is extracted for each patch; it describes the local visual content (for the first task) or the local change (for the second task). Using the manual segmentations associated with the nearest neighbors, an instrument probability (first task) or a change probability (second task) is computed.


Several solutions are proposed to speed up computations: images are downsampled by a factor of two, a fast approximation of k-NN is used [12] and a coarse-to-fine search strategy is proposed.
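To make these speed-ups concrete, here is a minimal sketch of the downsampling step and of an approximate k-NN query, using OpenCV's FLANN-based matcher as a stand-in for the library of [12]. The function names and the FLANN settings (a kd-tree index with 5 trees and 50 checks) are illustrative assumptions, not the paper's actual configuration.

```python
import cv2
import numpy as np

def downsample(image):
    """Halve the image resolution before patch extraction."""
    return cv2.resize(image, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

def approximate_knn(query_features, reference_features, k=4):
    """For each query descriptor, return the indices of its k approximate
    nearest neighbors among the reference descriptors."""
    FLANN_INDEX_KDTREE = 1  # kd-tree index, a common choice for small descriptors
    matcher = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                                    dict(checks=50))
    query = np.asarray(query_features, dtype=np.float32)
    refs = np.asarray(reference_features, dtype=np.float32)
    matches = matcher.knnMatch(query, refs, k=k)
    return [[m.trainIdx for m in row] for row in matches]
```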

III Static Instrument Detection

In this section, we handle the first task, called initial state description. At this stage, there is no motion: no hands hide any part of the scene and no tasks are being carried out on the table. The problem thus amounts to separating the instruments from the background.

III-A Challenges

The tablecloth color is uniform (green) and easily distinguishable from the color range of the instruments. However, the background contains more objects than just the tablecloth: it contains all objects that are not surgically relevant, such as the piece of towel in Fig. 2. In fact, the background contains all objects that the expert did not manually segment in the reference images, hence the relevance of the proposed strongly-supervised solution.

III-B Patch Description

Simple visual features are proposed in this study. For each patch, we extracted the mean and the standard deviation of the intensity values of the R, G, B, H, S and V channels, in addition to the mean and the standard deviation of the result of Sobel edge detection applied to the luminance channel. This results in a 14-element descriptor vector.
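A minimal sketch of this 14-element descriptor is given below, assuming OpenCV for the color conversions and the Sobel filtering. Using the gradient magnitude as the "result of Sobel edge detection" is our assumption, since the paper does not specify how the horizontal and vertical responses are combined.

```python
import cv2
import numpy as np

def patch_descriptor(patch_bgr):
    """14-element descriptor: mean and std of the color channels (OpenCV stores
    them as B, G, R), of H, S and V, and of a Sobel edge map computed on the
    luminance (grayscale) channel."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy)  # gradient magnitude: an assumption
    channels = list(cv2.split(patch_bgr)) + list(cv2.split(hsv)) + [edges]
    return np.array([f(c) for c in channels for f in (np.mean, np.std)],
                    dtype=np.float32)
```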

III-C Cross-validation

The system is trained and tested using leave-one-out cross-validation. While processing an image (the test image), all other images are used as references. For each patch in the reference dataset, an instrument probability is computed: it is defined as the percentage of pixels inside the patch that were manually segmented by the expert.
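As an illustration, the reference probability of a patch reduces to the fraction of expert-segmented pixels it contains; the helper below is a hypothetical sketch operating on a binary mask cropped to the patch.

```python
import numpy as np

def instrument_probability(mask_patch):
    """Instrument probability of a reference patch: the fraction of its pixels
    labelled as instrument in the expert's manual segmentation (mask_patch is
    the binary mask cropped to the patch)."""
    mask_patch = np.asarray(mask_patch)
    return float(np.count_nonzero(mask_patch)) / mask_patch.size
```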

III-D K-NN Regression

Given a patch in the test set, the k nearest neighbors from the reference set are searched for: the patch probability is defined as the average instrument probability among the nearest neighbors.
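A minimal sketch of this regression step follows, using a brute-force Euclidean search for readability (the paper relies on the fast approximate search of [12] instead); `ref_probabilities` is assumed to hold the per-patch instrument probabilities defined in section III-C.

```python
import numpy as np

def knn_regression(test_descriptor, ref_descriptors, ref_probabilities, k=4):
    """Average instrument probability of the k nearest reference patches."""
    ref_descriptors = np.asarray(ref_descriptors, dtype=np.float32)
    ref_probabilities = np.asarray(ref_probabilities, dtype=np.float32)
    # Euclidean distance to every reference descriptor, then keep the k closest.
    distances = np.linalg.norm(ref_descriptors - test_descriptor, axis=1)
    nearest = np.argsort(distances)[:k]
    return float(ref_probabilities[nearest].mean())
```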

III-E Coarse-to-fine Strategy

For faster computations, we propose to start with large patches and to subdivide a patch only if its estimated instrument probability is greater than 0%, repeating the process until the smallest patch size is reached.
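The subdivision logic could look like the sketch below, which evaluates coarse patches first and recurses only where the predicted probability is non-zero. `predict_probability` stands for the descriptor extraction plus k-NN regression described above, and the default patch sizes mirror the [5; 20; 80] configuration reported in Table I, largest first; both are assumptions for illustration.

```python
def coarse_to_fine(image, predict_probability, patch_sizes=(80, 20, 5)):
    """Coarse-to-fine patch evaluation: only patches with a non-zero predicted
    instrument probability are subdivided to the next (finer) level."""
    h, w = image.shape[:2]
    results = {}  # (x, y, patch_size) -> predicted probability

    def refine(x, y, level):
        size = patch_sizes[level]
        prob = predict_probability(image[y:y + size, x:x + size])
        results[(x, y, size)] = prob
        if prob > 0 and level + 1 < len(patch_sizes):
            step = patch_sizes[level + 1]
            for yy in range(y, y + size, step):
                for xx in range(x, x + size, step):
                    refine(xx, yy, level + 1)

    coarse = patch_sizes[0]
    for y in range(0, h - coarse + 1, coarse):
        for x in range(0, w - coarse + 1, coarse):
            refine(x, y, 0)
    return results
```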

III-F Parameter Estimation

We introduced four parameters for this part: the number k of nearest neighbors to take into consideration, the smallest patch size in the list of patch sizes, the scale factor used to go from one scale level to the next and, last but not least, the number of scale levels to be run. To find the optimal values of these discrete parameters, a discrete version of the Particle Swarm Optimization (PSO) algorithm, called D-PSO, was used [13].

IV Dynamic Instrument Detection

In this section, we are interested in detecting the instruments that appear and disappear over the course of the surgery; in other words, in determining, at every moment, which instruments are probably in the microscope scene. In this study, we compare the last image before a motion is detected in the table scene (the 'before' image) to the first image after motion stops (the 'after' image).

IV-A Challenges

The surgeon does not simply put one instrument on the table and/or take another one. First, surgeons usually move several instruments around to find the right one. Second, they use some of the instruments to accomplish tasks over the table, e.g. preparing implants. Therefore, many instruments are displaced without leaving the scene or being used in the surgery. The main challenge is thus to differentiate instruments that were simply moved around from instruments that appeared on or disappeared from the table.

IV-B Appearance and Disappearance Detection

Without loss of generality, we only focus on appearance detection. To detect appearance in one patch from the ’after’ image, the corresponding patch in the ’before’ image is selected and these patches are compared. To detect disappearance, we simply swap the ’before’ and ’after’ images and run the analysis again.

IV-C Compensating for Instrument Motions

Since the instruments are displaced over the table, a patch P1 at position X in the 'after' image is likely to be found at position X + l in the 'before' image, as a patch P2, where l is the displacement. Patch P2 is searched for inside a window centered on the position of P1. P2 is defined as the patch whose feature vector V2 minimizes the Euclidean distance to V1, the feature vector extracted from P1.
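A minimal sketch of this matching step is given below, assuming the 14-element descriptor of section III-B is passed in as `descriptor`; the exhaustive scan of the window is an illustrative choice, not necessarily how the authors implemented the search.

```python
import numpy as np

def find_best_match(before_image, x, y, v1, patch_size, window, descriptor):
    """Search the 'before' image, inside a window centred on (x, y), for the
    patch P2 whose descriptor V2 is closest (Euclidean distance) to V1, the
    descriptor of patch P1 from the 'after' image. Returns V2, so that the
    change can then be described by V2 - V1 (section IV-D)."""
    h, w = before_image.shape[:2]
    best_v2, best_dist = None, np.inf
    for yy in range(max(0, y - window), min(h - patch_size, y + window) + 1):
        for xx in range(max(0, x - window), min(w - patch_size, x + window) + 1):
            v2 = descriptor(before_image[yy:yy + patch_size, xx:xx + patch_size])
            dist = np.linalg.norm(v2 - v1)
            if dist < best_dist:
                best_v2, best_dist = v2, dist
    return best_v2
```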

IV-D Change Description

We extracted the same features used in the first task, detailed in section III-B. In this part, the change is described by the difference between feature vectors (V2 - V1). In case of instrument appearance, no good match will be found in the 'before' image, so the difference will be large. In case of instrument motion, the difference will be close to zero.

IV-E Analogy Reasoning

For each patch in the reference 'after' images, we computed the change description, which implies looking for the most similar patch in the 'before' image. The cross-validation, k-NN regression and coarse-to-fine strategy are the same as in the first task, but one more parameter has been added: the size of the search window in which we look for the best match in the 'before' image.

V Experiments

V-A Cataract Surgery Dataset

V-A1 Data Collection

A dataset of 36 cataract surgery videos, recorded at Brest University Hospital between February and September 2015, was used in this experiment. These surgeries were carried out by two different surgeons. Each surgery is recorded as two videos, one for the operating table and one for the microscope field of view. Videos were acquired at full HD resolution, with a frame rate of 50 FPS for the former and 30 FPS for the latter.

V-A2 Static Method Ground Truth

To be able to detect the instruments statically, we captured 36 frames: one at the beginning of each table video. They were segmented manually by delineating the boundaries of all the instruments visible on the table. Examples of manually segmented images are given in Fig. 2.

V-A3 Dynamic Method Ground Truth

To detect the instruments dynamically, 36 surgeon actions were selected randomly, one per video. An action is defined as the act of taking an instrument out of the table, putting one back, or both at the same time. Two images were captured for each action, one right before it and the other right after it. These images were manually segmented by delineating the boundaries of the instruments that appeared or disappeared. Instruments that were simply displaced were not segmented. Examples of manually segmented images are given in Fig. 3.

V-B Results and Discussion

V-B1 Static Method

Algorithm parameters were optimized using D-PSO. The performance of the system is measured for each set of parameters in terms of Az, the area under the Receiver Operating Characteristic (ROC) curve. The results are presented in Table I. They show that the instruments can be clearly separated from the background, as illustrated in Fig. 2. We can also see in Fig. 2 that the towel, which was not segmented in the reference images, was not segmented in the test image either. Conversely, the large greenish and transparent containers, which the expert decided to segment in the reference images, were segmented in the test image as well.

(a) Original image
(b) Original image
(c) Image segmented manually
(d) Image segmented manually
(e) Result of instrument detection
(f) Result of instrument detection
Fig. 2: Two examples of static instrument detection.
(a) Image before action
(b) Image after action
(c) Difference between the ’after’ and ’before’ images
(d) Manual segmentation of ’before’ and ’after’ images
(e) Result of instrument detection
(f) Image before action
(g) Image after action
(h) Manual segmentation of ’before’ and ’after’ images
(i) Result of instrument detection
Fig. 3: Two examples of dynamic instrument detection: a success and a failure. In (d), (e), (h) and (i) red indicates the objects that left the table scene and green represents the objects that entered the scene.

Type: D-PSO; k = 89; scale factor = 4; smallest patch size = 5; number of scale levels = 3; patch sizes = [5; 20; 80]; mean Az = 0.982; std = 0.015

TABLE I: Performance of static instrument detection

Type: grid search; k = 89; scale factor = 4; window size = 81; smallest patch size = 5; number of scale levels = 3; patch sizes = [5; 20; 80]; mean Az = 0.947; std = 0.045

TABLE II: Performance of dynamic instrument detection

V-B2 Dynamic Method

To streamline the optimization process, we assumed that the parameter values obtained in the previous section could be reused in this part. We therefore fixed the values of the common parameters and only optimized the window size, using a grid search over candidate values. The results are presented in Table II. In Fig. 3, we show that we could identify the instruments that left and entered the scene. One clear limitation, illustrated by the failure case in Fig. 3, arises when instruments are seen under very different views in the 'before' and 'after' images.

VI Conclusion

A promising solution to detect the instruments on the operating table has been presented in this paper. The proposed solution is based on k-NN regression, using a coarse-to-fine strategy. In future work, more advanced features will be proposed to push performance further. Also, to achieve the stated aims, we will need to automate the selection of the 'before' and 'after' images. The next step will be to recognize the detected objects. A k-NN regression principle can also be followed for this task, possibly using a temporal model of the surgery to help discriminate between strongly resembling instruments. The resulting tool will be very useful for computer-aided surgery.

References

  • [1] F. Lalys, L. Riffaud, D. Bouget, and P. Jannin, “A framework for the recognition of high-level surgical tasks from video images for cataract surgeries,” IEEE Trans Biomed Eng, vol. 59, no. 4, pp. 966–976, Apr. 2012.
  • [2] S. R. Stanek, W. Tavanapong, J. Wong, J. H. Oh, and P. C. de Groen, “Automatic real-time detection of endoscopic procedures using temporal features,” Comput Methods Programs Biomed, vol. 108, no. 2, pp. 524–535, Nov. 2012.
  • [3] B. André, T. Vercauteren, A. M. Buchner, M. B. Wallace, and N. Ayache, “Learning semantic and visual similarity for endomicroscopy video retrieval,” IEEE Transactions on Medical Imaging, vol. 31, no. 6, pp. 1276–1288, June 2012.
  • [4] K. Charrière, G. Quellec, M. Lamard, G. Coatrieux, B. Cochener, and G. Cazuguel, “Automated surgical step recognition in normalized cataract surgery videos,” in Proc IEEE EMBC, Aug. 2014, pp. 4647–4650.
  • [5] G. Quellec, M. Lamard, B. Cochener, and G. Cazuguel, “Real-time segmentation and recognition of surgical tasks in cataract surgery videos,” IEEE Transactions on Medical Imaging, vol. 33, no. 12, pp. 2352–2360, Dec. 2014.
  • [6] ——, “Real-Time Task Recognition in Cataract Surgery Videos Using Adaptive Spatiotemporal Polynomials,” IEEE Transactions on Medical Imaging, vol. 34, no. 4, pp. 877–887, Apr. 2015.
  • [7] J. Tan, S. Wang, H. Wang, and J. Zheng, “A new method of surgical instruments automatic identification and counting,” in Proc CISP, vol. 4, Oct. 2011, pp. 1797–1800.
  • [8] R. Sznitman, C. Becker, and P. Fua, “Fast part-based classification for instrument detection in minimally invasive surgery,” Proc MICCAI, vol. 17, no. Pt 2, pp. 692–699, 2014.
  • [9] V. Baldas, L. Tang, P. Bountris, G. Saleh, and D. Koutsouris, “A real-time automatic instrument tracking system on cataract surgery videos for dexterity assessment,” in Proc ITAB, Nov. 2010, pp. 1–4.
  • [10] R. Sznitman, R. Richa, R. H. Taylor, B. Jedynak, and G. D. Hager, “Unified Detection and Tracking of Instruments during Retinal Microsurgery,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 5, pp. 1263–1273, May 2013.
  • [11] X.-H. Liu, C.-H. Hsieh, J.-D. Lee, S.-T. Lee, and C.-T. Wu, “A vision-based surgical instruments classification system,” in Proc ARIS, June 2014, pp. 72–77.
  • [12] M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in Proc VISAPP, vol. 1, 2009, pp. 331–340.
  • [13] D. Datta and J. R. Figueira, “A real-integer-discrete-coded particle swarm optimization for design problems,” Applied Soft Computing, vol. 11, no. 4, pp. 3625–3633, June 2011.