Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME) are chronic, sight-threatening conditions that affect over 250 million people worldwide [1]. To diagnose and manage these diseases, Optical Coherence Tomography (OCT) is the standard of care for imaging the retina safely and quickly (see Fig. 1). However, with a growing global patient population and over 30 million volumetric OCT scans acquired each year, the resources needed to assess these scans have already surpassed the capacity of available experts.
For ophthalmologists, identifying biological markers of the retina, or biomarkers, plays a critical role in both clinical routine and research. Biomarkers include the presence of different types of fluid buildup in the retina, retinal shape and thickness characteristics, and the presence of cysts, atrophy or scar tissue. Beyond this, biomarkers are paramount for assessing disease severity in clinical routine and play a major role in the development of new pharmaceutical therapeutics. With over a dozen clinical and research biomarkers, their identification is both challenging and time consuming due to their number and their variability in size, shape and extent.
To support clinicians with OCT-based diagnosis, numerous automated methods have attempted to segment and identify specific biomarkers in OCT scans. For instance, retinal layer [2, 3, 4] and fluid [4, 6] segmentation, as well as drusen detection [5], have previously been proposed. While these methods perform well, they are limited in the number of biomarkers they consider at a time and typically rely on pixel-wise annotations to train supervised machine learning frameworks. Given the enormous annotation effort involved in manually segmenting volumes, they are often trained and evaluated on relatively small amounts of data (20 to 100 volumes) [2, 4, 6].
Instead, we present a novel strategy that automatically identifies the presence of a wide range of biomarkers throughout an OCT volume. Our method does not require biomarker segmentation annotations, but only biomarker tags indicating which biomarkers are present on a given OCT slice. Using a large dataset of OCT slices with annotated tags, our approach first estimates all biomarkers on each slice of a new volume separately, without considering adjacent slices, as these are typically highly anisotropic and not aligned within the volume. We then treat these predictions as sufficient statistics for each slice and impose biomarker coherence across slices using a bidirectional Long Short-Term Memory (LSTM) network [7]. By doing so, we force our network to learn the desired biomarker co-dependencies within the volume from slice predictions only, avoiding the anisotropic and non-registered slices common in OCT volumes. We show in our experiments that this leads to superior performance over a number of existing methods.
We describe here our approach for predicting biomarkers across all slices in an OCT volume. Formally, we wish to predict the presence of $B$ different biomarkers in a volume using a deep network, $F$, that maps from a volume of $S$ slices, $x = (x_1, \ldots, x_S)$, to a set of predicted probabilities $P \in [0,1]^{S \times B}$. We denote $p_{s,b}$ as the estimated probability that biomarker $b$ occurs in slice $s$.
While there are many possible network architectures for $F$, one simple approach would be to express $F$ as $S$ copies of the same CNN, whereby each slice in the volume is predicted individually. However, such an architecture ignores the fact that biomarkers are deeply correlated across an entire volume. The other extreme would be to define $F$ as a single 3D CNN. Doing so, however, would be difficult because (1) 3D CNNs assume spatial coherence in their convolutional layers and (2) the output of $F$ would be of dimension $S \times B$. While (1) strongly violates the structure of OCT volumes, which typically display non-rigid transformations between consecutive slices, (2) would imply training with an enormous amount of training data.
For these reasons, we take an intermediate approach between the two extremes above and express our network as a composition $F = f_v \circ f_s$, where $f_s$ processes slices individually and produces a $D$-dimensional descriptor for each slice. Then, $f_v$ fuses all slice descriptors and predicts the biomarker probabilities for each slice, thereby taking into account the information of the entire volume. Fig. 2 depicts our framework and we detail each of its components in the subsequent sections.
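To make the decomposition concrete, the following minimal numpy sketch shows how the two stages compose. All names, shapes and weights here are illustrative stand-ins (not the actual trained networks): the per-slice CNN is replaced by a random linear map and the fusion stage by a per-slice sigmoid classifier.

```python
import numpy as np

# Hypothetical sizes: S slices per volume, D-dim descriptors, B biomarkers.
S, D, B = 49, 128, 11

def slice_network(volume):
    """Stand-in for the per-slice CNN: maps each HxW slice to a D-dim descriptor."""
    n = volume.shape[0]
    rng = np.random.default_rng(0)
    W = rng.standard_normal((volume.shape[1] * volume.shape[2], D)) * 0.01
    return volume.reshape(n, -1) @ W          # (S, D) descriptor matrix

def fusion_network(X):
    """Stand-in for the fusion stage: maps (S, D) descriptors to (S, B) probabilities."""
    rng = np.random.default_rng(1)
    W = rng.standard_normal((D, B)) * 0.01
    return 1.0 / (1.0 + np.exp(-(X @ W)))     # independent sigmoid per biomarker

volume = np.zeros((S, 32, 32))                # toy volume of S slices
P = fusion_network(slice_network(volume))     # per-slice biomarker probabilities
```

The key property illustrated is the interface between the two parts: whatever the internals, the slice network emits one fixed-size descriptor per slice, and the fusion network maps the full descriptor matrix to one probability per slice and biomarker.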
2.1 Slice network
When presented with a volume $x$, $f_s$ processes each slice $x_s$ independently using the same slice convolutional network $c$, which maps a single slice to a $D$-dimensional descriptor. The output of $f_s$ is then the concatenation of the individual descriptors,
$$f_s(x) = [c(x_1), \ldots, c(x_S)]^\top \in \mathbb{R}^{S \times D}.$$
In our experiments, we implemented $c$ as the convolutional part of a Dilated Residual Network [8], up to the global pooling layer.
2.2 Volume fusion network
Let $X = f_s(x) \in \mathbb{R}^{S \times D}$ be the matrix of descriptors of a volume computed by $f_s$. The fusion network $f_v$ takes $X$ and produces the final probability predictions $P = f_v(X) \in [0,1]^{S \times B}$.
The most straightforward architecture for $f_v$ would be a multilayer perceptron (MLP), which is typical after convolutional layers. However, MLPs make no assumptions about the underlying nature of the data. Consequently, they are hard to train, requiring either huge amounts of training data or aggressive data augmentation, particularly when the dimensionality of the input space is large, as is the case here. More importantly, an MLP would ignore two important aspects of $X$: (1) the rows of $X$ belong to the same feature space and share a common distribution; (2) volumes have spatial structure with respect to the biomarkers within them, and slices that are close to one another have similar descriptors.
To account for this, we use an LSTM to process slices sequentially and implicitly leverage spatial dependencies, while performing the same operations on every input (i.e., implicitly assuming a common distribution in the input space). Formally, our LSTM is a network $\ell$ that receives a descriptor and the previous $M$-dimensional LSTM state to produce a new state. We use the LSTM to iteratively process the descriptors, generating a sequence of LSTM states,
$$h_s = \ell(X_s, h_{s-1}), \quad s = 1, \ldots, S,$$
where $X_s$ is the descriptor of slice $s$. Additionally, since the underlying distribution of OCT volumes is symmetric (flipping the slice order in a volume produces another statistically correct volume), we use the same LSTM to process the descriptors backwards,
$$\bar{h}_s = \ell(X_s, \bar{h}_{s+1}), \quad s = S, \ldots, 1,$$
generating a second sequence of LSTM states. The initial states are $h_0 = \mathbf{0}$ and $\bar{h}_{S+1} = \mathbf{0}$, respectively.
Note that at each position $s$, $h_s$ and $\bar{h}_s$ combine the information from the current descriptor with additional information coming from neighboring slices. We then concatenate both states into a single vector and feed it to a final fully connected layer $g$ that computes the estimated probabilities. The complete volume fusion network is the concatenation of the outputs of $g$ for all the slices:
$$f_v(X) = \left[ g([h_1, \bar{h}_1]), \ldots, g([h_S, \bar{h}_S]) \right]^\top.$$
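The shared bidirectional recurrence can be sketched in a few lines of numpy. This is a toy illustration only (sizes and weights are made up, and a real implementation would use a deep-learning framework's LSTM): the point is that one set of cell weights is run once forward and once backward over the descriptors, and the two state sequences are concatenated per slice.

```python
import numpy as np

def lstm_step(x, h, c, Wx, Wh, b):
    """One step of a standard LSTM cell with gates (i, f, o, g)."""
    z = x @ Wx + h @ Wh + b
    i, f, o, g = np.split(z, 4, axis=-1)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

D, M, S = 8, 6, 5                              # toy descriptor/state/slice sizes
rng = np.random.default_rng(0)
Wx = rng.standard_normal((D, 4 * M)) * 0.1
Wh = rng.standard_normal((M, 4 * M)) * 0.1
b = np.zeros(4 * M)
X = rng.standard_normal((S, D))                # descriptor matrix from the slice network

def run(descriptors):
    """Run the shared LSTM cell over the given slice order, zero initial state."""
    h, c = np.zeros(M), np.zeros(M)
    states = []
    for x in descriptors:
        h, c = lstm_step(x, h, c, Wx, Wh, b)
        states.append(h)
    return np.stack(states)

H_fwd = run(X)                                 # states for slices 1..S
H_bwd = run(X[::-1])[::-1]                     # same weights, reversed slice order
fused = np.concatenate([H_fwd, H_bwd], axis=1) # (S, 2M) per-slice fused state
```

Each row of `fused` would then be passed through the final fully connected layer to obtain the per-slice biomarker probabilities.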
Training requires a dataset $\mathcal{T}$ of annotated volumes, where for each volume $x$ a set of binary labels $y \in \{0,1\}^{S \times B}$ is provided, with $y_{s,b} = 1$ if biomarker $b$ is present in slice $s$. We then use the standard binary cross entropy as our loss function,
$$\mathcal{L}(y, P) = -\sum_{s=1}^{S} \sum_{b=1}^{B} \left[ y_{s,b} \log p_{s,b} + (1 - y_{s,b}) \log(1 - p_{s,b}) \right],$$
where $P = F(x)$ is the estimation of our network for a given volume. The goal during training is then to minimize the expected value of the loss over the training dataset $\mathcal{T}$.
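The loss above is the standard multi-label binary cross entropy summed over slices and biomarkers; a direct numpy implementation (with the usual clipping to avoid $\log 0$) is:

```python
import numpy as np

def bce_loss(y, p, eps=1e-7):
    """Binary cross entropy summed over all slices s and biomarkers b."""
    p = np.clip(p, eps, 1 - eps)           # numerical safety for log
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy example: 2 slices, 2 biomarkers.
y = np.array([[1, 0], [0, 1]])
p = np.array([[0.9, 0.1], [0.2, 0.8]])
loss = bce_loss(y, p)                      # -(2*log(0.9) + 2*log(0.8)) ≈ 0.657
```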
While we could perform this minimization with a gradient-based method in an end-to-end fashion from scratch, we found that a two-stage training procedure helped boost performance at test time. In the first stage, we train the slice network $f_s$ alone to predict the biomarkers of individual slices. More specifically, we append a temporary fully connected layer at the end of $f_s$ and minimize a cross entropy loss while presenting the network with randomly sampled slices from $\mathcal{T}$. In the second stage, we fix the weights of $f_s$ and minimize the loss of Eq. (5) for the whole architecture, updating only the weights of the volume fusion network $f_v$.
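The essential pattern of this two-stage schedule is the freezing of one parameter group between stages. The sketch below is purely illustrative (the objective, sizes and update rule are made up; only the freezing pattern mirrors the text):

```python
import numpy as np

rng = np.random.default_rng(0)
slice_w = rng.standard_normal(3) * 0.1   # toy slice-network parameters
fuse_w = rng.standard_normal(3) * 0.1    # toy fusion-network parameters

def sgd_step(w, lr=0.1):
    return w - lr * 2 * w                # gradient of a toy quadratic loss ||w||^2

# Stage 1: update only the slice network (with its temporary FC head).
for _ in range(5):
    slice_w = sgd_step(slice_w)

frozen = slice_w.copy()
fuse_before = fuse_w.copy()

# Stage 2: the slice network is frozen; only the fusion network is updated.
for _ in range(5):
    fuse_w = sgd_step(fuse_w)
```

In a deep-learning framework the same effect is typically obtained by disabling gradients for the slice network's parameters and passing only the fusion network's parameters to the optimizer in stage 2.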
Our dataset consists of 416 volumes (originating from 327 individuals with Age-Related Macular Degeneration and Diabetic Retinopathy), whereby each volume consists of 49 slices, for a total of 20'384 slices. Volumes were obtained using the Heidelberg Spectralis OCT and each OCT slice has a resolution of pixels. Trained annotators provided slice-level annotations for 11 common biomarkers: Subretinal Fluid (SRF), Intraretinal Fluid (IRF), Hyperreflective Foci (HF), Drusen, Reticular Pseudodrusen (RPD), Epiretinal Membrane (ERM), Geographic Atrophy (GA), Outer Retinal Atrophy (ORA), Intraretinal Cysts (IRC), Fibrovascular PED (FPED) and Healthy. The dataset was randomly split for training and testing, making sure that no volume from the same individual was present in both the training and test sets. These sets contained a total of 18'179 and 2'205 slices, respectively. The distribution of biomarkers in the training and test sets is reported in Table 1. For all our experiments, we performed 10-fold cross validation, where the training set was split into a training (90%) and validation (10%) set.
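Splitting by patient rather than by volume is what guarantees that no individual contributes to both sets. A minimal sketch of such a patient-wise split (the ids and the 80/20 ratio are made up for illustration):

```python
import random

# Toy volume list: each volume carries a patient id; some patients own
# several volumes, roughly mirroring 416 volumes from 327 individuals.
volumes = [{"volume_id": v, "patient_id": v % 327} for v in range(416)]

patients = sorted({v["patient_id"] for v in volumes})
random.Random(0).shuffle(patients)
cut = int(0.8 * len(patients))
train_pat, test_pat = set(patients[:cut]), set(patients[cut:])

# Assign whole patients (hence all their volumes) to one side only.
train = [v for v in volumes if v["patient_id"] in train_pat]
test = [v for v in volumes if v["patient_id"] in test_pat]
```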
3.2 Parameters and baselines
For our approach, we fix the size $D$ of the slice descriptors and the size $M$ of the LSTM hidden state. We train the fusion stage using a batch size of 4 volumes, using SGD with momentum and a base learning rate that we decrease after 10 epochs of no improvement in the validation score.
To demonstrate the performance of our approach, we compare it to the following baselines:
Base: the output of the slice network $f_s$ with its temporary fully connected layer (i.e., no slice fusion).
MLP: a multilayer perceptron with output of size $S \times B$, applied to the flattened $S \times D$ feature matrix from the Base classifier.
Conv-BLSTM: fuses the last convolutional feature maps of $c$ using a convolutional bidirectional LSTM. This is then followed by global pooling and a fully connected layer.
Conditional Random Field (CRF): trained to learn the co-occurrence of biomarkers within each slice and to smooth the predictions for each biomarker along different slices of the volume. Logit outputs of the Base classifier are used as unary terms, and the learned pairwise weights are constrained to be positive to enforce submodularity of the CRF energy. We use the method from [9] for training and standard graph cuts for inference at test time.
For all methods, we use the same Base classifier and train it on a multi-label classification task using SGD with momentum and a batch size of 32. Rotational and flipping data augmentation was applied during training. We retain the best model for evaluation and do not perform test-time augmentation or cropping. The network was pre-trained on ImageNet [10].
Our primary evaluation metrics are the micro and macro mean Average Precision (mAP). In addition, we also report the Exact Match Ratio (EMR), which is the percentage of slices for which the complete set of biomarkers is predicted correctly. The mAP of the CRF baseline is not directly comparable, as the CRF output is binary and hence allows only a single precision-recall point to be evaluated. We therefore also state the maximum F1 scores for each method.
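Under the reading above, the EMR over a set of slices amounts to one line of numpy (the label matrices here are toy examples):

```python
import numpy as np

def exact_match_ratio(y_true, y_pred):
    """Fraction of slices whose full biomarker label vector is predicted exactly."""
    return float(np.mean(np.all(y_true == y_pred, axis=1)))

# Toy example: 3 slices, 3 biomarkers; the second slice has one wrong label.
y_true = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0]])
y_pred = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
emr = exact_match_ratio(y_true, y_pred)    # 2 of 3 slices match exactly
```

Note that EMR is strict: a single wrong biomarker on a slice makes the whole slice count as a miss, which is why it complements the per-biomarker mAP.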
Table 1 reports the performance of all methods. Using the proposed method, we see an increase in mAP across all biomarkers except for GA and ORA, both of which have a very low sample size in the test set. The proposed method outperforms all other fusion methods in terms of mAP and F1 score, and considerably improves over the unfused baseline, which confirms our hypothesis that inter-slice dependencies can be used to increase per-slice performance. The poor performance of the Conv-BLSTM can be explained by the misalignment of adjacent slices.
In Fig. 3, we show a typical example illustrating how our proposed method improves predictions. In particular, we show the prediction of our approach on each slice for each biomarker and highlight three consecutive slices of the tested volume (right). For comparison, we also show the corresponding ground truth (top left) and the outcome of the Base classifier (middle left). Here we see that our approach infers the set of biomarkers across the different slices more accurately.
|Drusen (3990/399)| 0.762±0.024 | 0.731±0.024 | 0.766±0.029 | - | 0.775±0.038 |
|IRC (3097/553)| 0.932±0.006 | 0.880±0.011 | 0.928±0.012 | - | 0.940±0.006 |
|FPED (3654/387)| 0.931±0.007 | 0.920±0.008 | 0.936±0.009 | - | 0.949±0.006 |
We have presented a novel method to identify pathological biomarkers in OCT slices. Our approach involves first detecting biomarkers slice by slice in the OCT volume and then using a bidirectional LSTM to coherently adjust the predictions. As far as we are aware, we are the first to demonstrate that such fine-grained biomarker detection can be achieved in the context of retinal diseases. We have shown that our approach performs well on a substantial patient dataset, outperforming other common fusion methods. Future efforts will focus on extending these results to infer pixel-wise segmentations of the detected biomarkers, relying solely on per-image labels.
This work received partial financial support from the Innosuisse Grant #6362.1 PFLS-LS.
-  Bourne, R., et al.: Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis. Lancet Global Health 5 (2017) e888 – e897
-  Apostolopoulos, S., De Zanet, S., Ciller, C., Wolf, S., Sznitman, R.: Pathological OCT Retinal Layer Segmentation Using Branch Residual U-Shape Networks. In: Medical Image Computing and Computer-Assisted Intervention. (2017) 294–301
-  Hussain, M.A., Bhuiyan, A., Turpin, A., Luu, C.D., Smith, R.T., Guymer, R.H., Kotagiri, R.: Automatic Identification of Pathology-Distorted Retinal Layer Boundaries Using SD-OCT Imaging. IEEE Transactions on Biomedical Engineering 64(7) (2017) 1638–1649
-  Roy, A.G., Conjeti, S., Karri, S.P.K., Sheet, D., Katouzian, A., Wachinger, C., Navab, N.: ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomedical Optics Express 8(8) (2017)
-  Zhao, R., Camino, A., Wang, J., Hagag, A.M., Lu, Y., Bailey, S.T., Flaxel, C.J., Hwang, T.S., Huang, D., Li, D., Jia, Y.: Automated drusen detection in dry age-related macular degeneration by multiple-depth, en face optical coherence tomography. Biomedical Optics Express 8(11) (2017) 5049–5064
-  Bogunovic, H., et al.: RETOUCH - The Retinal OCT Fluid Detection and Segmentation Benchmark and Challenge. IEEE Trans. Med. Imaging (February 2019)
-  Graves, A., Fernández, S., Schmidhuber, J.: Bidirectional LSTM networks for improved phoneme classification and recognition. In: Artificial Neural Networks: Formal Models and Their Applications. (2005) 799–804
-  Yu, F., Koltun, V., Funkhouser, T.: Dilated residual networks. In: IEEE Conference on Computer Vision and Pattern Recognition. (2017)
-  Szummer, M., Kohli, P., Hoiem, D.: Learning CRFs using graph cuts. In: European Conference on Computer Vision. Volume 5303. (2008) 582–595
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision 115(3) (2015) 211–252