1 Introduction

Ultrasound-Guided Regional Anaesthesia (UGRA) has rapidly become a popular technique for performing regional anaesthesia blocks. However, the procedure demands great concentration from the operator, who must handle many simultaneous tasks, such as nerve and needle localization and steady probe positioning, in order to keep the nerve and the needle in the observation plane [Marhofer2007], [Carlos2012]. Mastering UGRA requires a long learning process and years of practice in the operating room. A computer-aided system that automatically detects the region of interest (ROI) would allow the practitioner to concentrate more on the anaesthetic delivery.
Although detection and segmentation algorithms for medical ultrasound (US) images have been extensively developed [Noble2006], [Cheng10], [Afsaneh13], the problem remains open, especially for regional anaesthesia, and very little attention has been paid to nerve detection. We recently demonstrated the possibility of detecting and segmenting the sciatic nerve structure [Hafiane14], with a method based on the monogenic signal and probabilistic active contours. In [Hadjerci14], the authors proposed a descriptor based on the combination of the median binary pattern and Gabor filters to characterize and classify median nerve tissues. A machine learning framework was also proposed to enable robust detection of the median nerve [Hadjerci16]. A more recent work addresses this problem by proposing an assistive system that detects vessel and nerve regions, allowing path planning for the needle [Hadjerci16b]. Despite the promising results obtained, the topic still requires further development and investigation.
Recently, deep learning has emerged as a powerful approach to many problems in the machine learning and computer vision fields [Hinton06], [Bengio06], [Bengio13], [Lecun2015]. Among deep learning methods, the Convolutional Neural Network (CNN) has been successfully applied to various computer vision tasks, including object recognition, region of interest (ROI) detection, and segmentation [Krizhevsky12], [Simonyan14], [Girshick14], [Long2015]. Deep learning is also gaining popularity in medical image segmentation and classification, with promising results on various applications [Kooi2017], [Tuan2017].
However, this approach has so far been limited to static images; very little attention has been paid to the dynamic information generated by the motion of the probe [Hadjerci_ICIP16]. The purpose of this paper is to explore this type of information, motivated by the manner in which human experts use it. In order to visualize regions of interest (i.e. nerves, veins, arteries, ...), the clinician scans a given location on the patient's body with the transducer. Through this scanning process, the clinician uses dynamic information to increase confidence in the nerve localisation in US images. Hence, it is interesting to use this kind of information to increase the robustness of the detection and segmentation tasks.
In this paper we propose a new method based on a convolutional neural network and spatiotemporal consistency to segment the nerve region efficiently. Indeed, the CNN architecture alone is not sufficient to locate the nerve region robustly: due to noise and various artefacts, the CNN may generate a non-negligible rate of false positives among the detected ROIs. In order to reduce this rate, we use spatial and temporal consistency to eliminate the false positives. If a given position has a majority of intersecting ROIs and remains steady over time (through several US slices), the ROI is likely to be consistent rather than noisy, and is therefore considered a true positive. In the final phase, we use active contours based on a phase and probabilistic approach [Hafiane14] to delineate the nerve contours.
This paper is organized as follows. Section 2 presents the nerve detection method. Section 3 provides the validation and evaluation of the proposed approach, followed by the conclusion in Section 4.
2 Proposed method

In order to segment nerve structures, we first need to detect and locate them. The localization procedure is based on two phases: the first uses spatial coherence to keep the selected ROIs with a high probability measure; the second assumes that the nerve characteristics remain stable over a certain period, while the clinician holds the ultrasound probe steady. After localisation, the segmentation is much easier since the initial contours are near the target region. Figure 1 presents the general framework of the method.
2.1 ROI detection with convolutional neural networks
Convolutional neural networks (CNNs) are among the most effective deep learning approaches, based on multilayer neural networks. A CNN generally uses three main types of layers: convolutional layers, pooling layers, and fully connected layers, deployed in a cascading manner to perform the learning process. A typical convolutional layer consists of a 2D convolution with a set of kernels, which can be considered a filtering process that generates feature maps. The pooling layer performs a downsampling operation on the filtered images, using the maximum or average value of neighbouring pixels. The fully connected layer follows the last pooling layer; it performs like a classical multilayer neural network and provides a softmax output.
For a given input training set, the CNN function is optimized to learn the best representation of the target regions. Let $f$ be the learned CNN function and $x$ an input patch from the US image. Given $x$, $f$ predicts the probability of each class $c$ according to the learned weights $W$. A softmax function is used in the CNN output layer to generate this probability:

$$p(y = c \mid x; W) = \frac{e^{z_c}}{\sum_{k} e^{z_k}} \quad (1)$$

where $z_c$ is the network output for class $c$. The class prediction takes the class with the maximum probability:

$$\hat{y} = \arg\max_c \, p(y = c \mid x; W) \quad (2)$$

However, with this approach, $\hat{y}$ may yield weak predictions when the class probabilities are only slightly different from each other. For ultrasound images, this could generate a large number of false positives, since the images are corrupted by noise. For this reason we keep only the patches classified with high probability, that is:

$$R = \{\, x : \max_c \, p(y = c \mid x; W) \geq \tau \,\} \quad (3)$$

where $\tau$ is a threshold parameter, set empirically, such that $0.5 < \tau < 1$.
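As an illustration, the confidence filtering of Equation 3 can be sketched as follows. The two-class layout (index 1 = nerve), the $(x, y, w, h)$ box format, and the value $\tau = 0.8$ are assumptions made for this sketch, not the paper's actual settings:

```python
import numpy as np

def softmax(z):
    """Softmax over the last axis, stabilized by subtracting the row max."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def keep_confident_rois(logits, rois, tau=0.8):
    """Keep only the patches whose winning class is 'nerve' (index 1)
    with probability at least tau, as in Equation 3.

    logits : (n, 2) array of raw CNN outputs per patch
    rois   : list of n (x, y, w, h) boxes, one per classified patch
    """
    probs = softmax(logits)
    return [roi for p, roi in zip(probs, rois)
            if p.max() >= tau and p.argmax() == 1]
```

Patches whose two class probabilities are close (a weak prediction) fall below the threshold and are discarded before the spatiotemporal stage.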
2.2 Spatiotemporal consistency
During the UGRA procedure practitioners use three basic motions of probe while scanning the patient; longitudinal sliding, rotation and tilting. The probe position and motion affects the visual aspect of tissues in US images, anaesthetists adjust and stabilize the probe for best nerve visualisation. They use back and forth motion of the probe on specific human body zone, to generate dynamic information with consistent characteristics of certain anatomic structures such as nerves. Similarly, we explore dynamic information to reinforce ROIs detection. Indeed, detecting the nerve region in one US frame is not sufficient, due to variability on its visual aspect on different frames. To robustly locate this region, it is more interesting to include temporal coherence over successive frames.
To this end, a sliding window scans the image and each location is classified with the method described in the previous section. Using Equation 3, we obtain several ROIs. Even though Equation 3 yields ROIs with high probability, it is still a weak criterion on its own. In order to further increase the confidence in the nerve localisation, we take into account the overlap between the detected ROIs. A position covered by several overlapping ROIs is considered a valid candidate to be followed across US frames for the temporal consistency measure. Let $B_t^i$, with coordinates $(x_i, y_i)$, be a candidate ROI in frame $t$, to be identified as a nerve zone or not. The proposed scheme is formulated as:

$$\Omega_t = \{\, B_t^i : S(B_t^i) \geq T_s \,\} \quad (4)$$

where $S(B_t^i)$ is the number of blocks (ROIs) that spatially overlap $B_t^i$ by at least 50%, $\Omega_t$ is the set of ROIs which satisfy the condition $S(B_t^i) \geq T_s$, $(x_i, y_i)$ are the coordinates of the ROI in the image, and $T_s$ is a threshold that determines the required number of overlapping blocks, set empirically in the present experiments.
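The spatial consistency test of Equation 4 can be sketched as follows; the 50% overlap figure comes from the text, while the $(x, y, w, h)$ box format and the value $T_s = 3$ are illustrative assumptions:

```python
def overlap_ratio(a, b):
    """Intersection area of (x, y, w, h) boxes, relative to the area of a."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    return iw * ih / (aw * ah)

def consistent_rois(rois, min_overlap=0.5, ts=3):
    """Keep ROIs that are overlapped by at least `min_overlap`
    by at least `ts` other detected ROIs (Equation 4)."""
    kept = []
    for i, a in enumerate(rois):
        s = sum(1 for j, b in enumerate(rois)
                if j != i and overlap_ratio(a, b) >= min_overlap)
        if s >= ts:
            kept.append(a)
    return kept
```

Isolated detections, which are typically noise-induced false positives, fail the test and are removed before the temporal stage.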
Temporal coherence is measured by the number of ROIs that are consistent in position over $N$ frames. It is given by:

$$C^i = \sum_{t=1}^{N} |\Omega_t^i| \quad (5)$$

where $|\Omega_t^i|$ is the cardinality of the set $\Omega_t^i$ in frame $t$. Finally, the robust nerve localisation is determined by the maximum spatiotemporal consistency, $\arg\max_i C^i$.
Note that the localisation decision is applied to the final frame, since the model is built from the history of the $N$ previous frames.
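The temporal vote over the last $N$ frames can be sketched as follows; the $(x, y, w, h)$ box format, the 50% position test, and the simple per-frame counting are illustrative assumptions about how the history is matched:

```python
def overlap_ratio(a, b):
    """Intersection area of (x, y, w, h) boxes, relative to the area of a."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    return iw * ih / (aw * ah)

def temporal_consistency(frames_rois, n_frames, min_overlap=0.5):
    """For each candidate ROI of the last frame, count in how many of the
    previous frames an ROI appears at roughly the same position; return
    the candidate with the maximum count and all scores."""
    history = frames_rois[-n_frames:]
    scores = [sum(1 for frame in history[:-1]
                  if any(overlap_ratio(cand, r) >= min_overlap for r in frame))
              for cand in history[-1]]
    best = max(range(len(scores)), key=scores.__getitem__)
    return history[-1][best], scores
```

A candidate that reappears at a stable position across the frame history wins the vote; a transient detection present in only one or two frames does not.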
2.3 Active model for segmentation
After localisation, we need to delineate the nerve contours. For that purpose, we use the phase-based probabilistic active contour (PGVF) [Hafiane14], since it provides better results for nerve segmentation than classical methods. The bounding box of the localisation is used as the initial contour. The PGVF function is based on the combination of a probabilistic learning approach with local phase information; it modifies the external energy equation of the original GVF (Gradient Vector Flow).
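The PGVF evolution itself is beyond a short sketch, but the initialisation step described above can be illustrated: a closed contour is sampled inside the detected bounding box to seed the active contour. The ellipse shape and the number of sample points are illustrative choices, not the paper's exact initialisation:

```python
import numpy as np

def init_contour_from_box(box, n_points=100):
    """Sample an ellipse inscribed in the (x, y, w, h) bounding box,
    as an initial closed contour for active-contour refinement."""
    x, y, w, h = box
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    cx, cy = x + w / 2.0, y + h / 2.0
    # (n_points, 2) array of (x, y) contour coordinates
    return np.stack([cx + (w / 2.0) * np.cos(t),
                     cy + (h / 2.0) * np.sin(t)], axis=1)
```

Because the detected box is already close to the nerve, the contour only needs a short evolution to lock onto the true border.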
3 Experiment and results
3.1 Dataset

The dataset was acquired in real clinical conditions at the Medipole Garonne hospital in Toulouse, France. An ultrasound machine dedicated to regional anaesthesia was used, with a linear transducer probe operating at 5-12 MHz; the acoustic beam generated by the transducer produces a series of pulse-echo lines that form transverse section images of the anatomic structures. To visualize the median nerve block, the anaesthetists scan a given zone of the forearm using four basic movements of the probe: translation, alignment, rotation and tilt. A sequence of images is therefore generated during the scanning procedure and saved as a video.
In this study we used a dataset of ten videos of the median nerve, corresponding to ten patients. The dataset was annotated by regional anaesthesia experts, providing the ground truth of the nerve region in the images. For the learning and testing phases, a sequence of frames was extracted from each video, with on average 500 images per video. To feed the CNN model, we used fixed-size patches representing the positive class (nerve region) and the negative class (non-nerve).
The first step consists in learning the features and establishing the classifier model with the CNN architecture. The optimization of the network was performed using stochastic gradient descent (SGD), with a 0.5 dropout rate on the fully connected layer during training for regularization. We employed a CNN architecture with 3 convolutional layers, each followed by max-pooling and a ReLU activation function, then a fully connected layer of 128 units. The last layer consists of a softmax function that generates the probability of each of the two classes.
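As a sanity check on such an architecture, one can trace the spatial size of the feature maps through the three conv/pool stages. The 3x3 kernels, valid convolutions, 2x2 non-overlapping pooling, and the input patch sizes used below are assumptions made for illustration; the paper's exact sizes are not reproduced here:

```python
def feature_size(patch_size, n_stages=3, kernel=3, pool=2):
    """Spatial size after n_stages of (valid 2D conv + max-pooling).
    The result, squared and multiplied by the channel count, is what
    feeds the 128-unit fully connected layer."""
    s = patch_size
    for _ in range(n_stages):
        s -= kernel - 1   # a valid convolution shrinks the map by kernel-1
        s //= pool        # non-overlapping pooling divides the size by pool
    return s
```

Such a trace is useful to verify that a chosen patch size survives three stages of shrinking without collapsing to zero.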
3.2 Performance evaluation
First, we examine the classification ability of the proposed method for nerve localisation in ultrasound images. For that purpose, we adopted a cross-validation procedure for the performance evaluation. In a second stage, we evaluate the nerve segmentation, after localisation, using the Dice and Hausdorff metrics.
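The two segmentation metrics can be sketched directly, with masks as boolean arrays and contours as point sets; the brute-force Hausdorff computation below is an illustrative choice, adequate for small contours:

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (n, 2) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards area agreement, while Hausdorff penalizes the worst contour deviation, so the two metrics are complementary for segmentation quality.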
3.2.1 Qualitative evaluation
Figure 2 illustrates an example of the results on two US images from two different patients. Figures 2 (b) and (e) show the localisation obtained with the CNN and spatiotemporal consistency. The nerve segmentation by PGVF is depicted in Figures 2 (c) and (f). One can observe that the automatic segmentation is very close to that obtained from the human experts, shown in Figures 2 (a) and (d).
3.2.2 Quantitative evaluation
We compare the proposed method with similar approaches based on the Support Vector Machine (SVM) classifier. A localization is counted as a true positive if the detected ROI overlaps the ground truth ROI by at least 50%; otherwise the detected ROI is counted as a false positive. The results present the average performance over 5000 images. Table 1 summarizes the results obtained with the cross-validation procedure. As we can observe, the CNN does better than the SVM with feature selection. The SVM combined with spatiotemporal coherence [Hadjerci_ICIP16] increases the performance further, but the CNN combined with spatiotemporal consistency provides the best results.
|SVM + feature selection [Hadjerci16]|
|SVM + temporal constraint [Hadjerci_ICIP16]|
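The true-positive criterion and the reported scores can be sketched as follows; the $(x, y, w, h)$ box format and the simple any-match counting are illustrative assumptions about the evaluation protocol:

```python
def overlap_ratio(a, b):
    """Intersection area of (x, y, w, h) boxes, relative to the area of a."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    return iw * ih / (aw * ah)

def evaluate(detections, ground_truth, min_overlap=0.5):
    """Precision, recall and F-score under the >= 50% overlap criterion."""
    tp = sum(1 for d in detections
             if any(overlap_ratio(d, g) >= min_overlap for g in ground_truth))
    fp = len(detections) - tp
    fn = sum(1 for g in ground_truth
             if not any(overlap_ratio(d, g) >= min_overlap for d in detections))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if (precision + recall) else 0.0)
    return precision, recall, f
```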
Once the nerve zone is detected, the next step uses the active contour approach to delineate the nerve borders. This segmentation is evaluated against the ground truth. Table 2 presents the average performance of the final segmentation. Overall, these results indicate that deep learning combined with the spatiotemporal constraint increases the detection and segmentation performance.
|Method|Dice metric|Hausdorff metric|
|Localisation + PGVF [Hafiane14]|
4 Conclusion

In this paper we presented a new method that combines a deep learning approach with a spatiotemporal consistency concept. The method was applied successfully to segment the median nerve structure in ultrasound images. To reduce the false positive rate, we combined spatial and dynamic information with the CNN classifier. The resulting nerve localisation worked better than the CNN alone or the SVM with spatiotemporal coherence. The comparative study showed the effectiveness of the proposed scheme, which achieved an F-score of 96%. Despite the promising results reported so far, there is room for further development, for instance targeting other types of nerves and increasing the dataset for learning and testing.