Single-image Tomography: 3D Volumes from 2D X-Rays

10/13/2017
by   Philipp Henzler, et al.

As many different 3D volumes could produce the same 2D x-ray image, inverting this process is challenging. We show that recent deep learning-based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending a 2D image into a 3D volume, we suggest to first learn a coarse, fixed-resolution volume, which is then fused in a second step with the input x-ray into a high-resolution volume. To train and validate our approach we introduce a new dataset that comprises close to half a million computer-simulated 2D x-ray images of 3D volumes scanned from 175 mammalian species. Applications of our approach include stereoscopic rendering of legacy x-ray images and re-rendering of x-rays including changes of illumination, view pose or geometry. Our evaluation includes a comparison to previous tomography work, a comparison to previous learning methods using our data, a user study and an application to a set of real x-rays.


1 Introduction

Producing 2D images of a 3D world is inherently a lossy process, i.e. the entire geometric richness of 3D gets projected onto a single flat 2D image. Consequently, any attempt to undo this operation is a daunting task. X-ray, or any other volumetric imaging technique, is not different in this respect from photography of opaque surfaces. However, while x-ray enables us to “see inside” a solid object, the spatial structure is only apparent to an expert with previous experience, typically in analyzing medical imaging data. On the one hand, inverting x-ray imagery might be more difficult than inverting opaque 2D images, as multiple transparent observations mix into a single pixel, following the intricate laws of non-linear volumetric radiation transport. On the other hand, there is also hope that semi-transparent imaging fares better than solid-surface imaging, as occlusion is not binary and additional surfaces remain accessible.

In this work, we apply deep learning in the form of convolutional neural networks (CNNs) to the challenge of inverting x-ray imagery. While CNNs have had success in generating depth from opaque observations [EPF14] and inferring full 3D volumes [WSK15, REM16, FMAJB16], we are not aware of any attempts to invert single x-ray images, or other transparent modalities.

Figure 1: Differences between previous work and our approach.

Fig. 1 conceptualizes the difference to previous work in flatland: The first CNNs (brain icon) consumed a 2D image to output a 2D height field representation of the 3D world, as shown in the first column. Later work has considered binary voxelizations, shown in the middle column. Ours, in the last column, addresses transparent surfaces and produces results with continuous density, both essential for x-rays as found in medical applications or the natural sciences.

Performing this mapping is an important challenge, as the quantity of x-ray images for which the real 3D volume is unknown, lost or inaccessible is likely substantial. An example would be a large repository of x-ray imagery acquired before 3D imaging such as CT scanning was invented. If this legacy material can be made to “become 3D” again, many interesting computer graphics applications become possible. Our results indicate that 3D volumes can be inferred from 2D x-ray imagery at a certain quality level for a specific class of inputs. In this paper, we focus on x-rays of mammalian anatomical structures, typically crania (cf. our teaser figure, a), and demonstrate previously impossible computer graphics applications, such as volumetric cut-aways, novel view synthesis, stylization or stereoscopic rendering (teaser figure, d). We believe applications to other content, such as x-rays for security, are possible in future work, provided training data is available.

Beyond the area of computer graphics, we also see the presented approach as a first important step towards applications in the life sciences. We envision a diagnostic tool for conditions where either no CT equipment or no education to interpret x-ray imagery is available, such as for mobile x-ray devices, lay users, or medical diagnostics in developing countries. While there can be more than a hundred CT devices per one million inhabitants in industrialized countries, the number is below one per million, e.g., in Africa [Wor11]. Although a certified application in a medical context would require a vast amount of extra effort, we believe that some applications, such as adding stereography to legacy x-ray images, could already provide an immediate clinical benefit.

To this end, our main contribution is two-fold: firstly, a new dataset that contains a large number of pairs of simulated 2D x-ray images and their corresponding 3D volumes, forming a sampling of the mapping we wish to invert. Secondly, a CNN architecture to learn this inverse mapping so that it can be applied to legacy 2D x-ray images, together with a fusion step that makes it scale to large resolutions and thus overcomes limitations of learning. Our evaluation of the proposed network architecture includes quantitative measurements with respect to the given dataset, comparisons to learning and non-learning-based baselines, qualitative evaluations on real x-ray imagery where no ground truth is available, and a user study.

2 Previous Work

The problem at hand falls into the class of inverse rendering. Instead of generating a 2D rendering from a 3D volume, as done in classical direct volume rendering [DCH88], we aim at the opposite challenge: how to obtain a 3D volume from a 2D x-ray image. While such inverse problems are typically cast as an optimization procedure, for example based on deconvolution [RH01], we suggest an approach purely based on learning from synthetic example data.

Typically, a volume is reconstructed from multiple views by means of computed tomography [Hou73]. While the quality of tomographic reconstructions has increased by making use of principles such as maximum likelihood [SV82] or sparsity [LDP07], they typically require a large number of input images. Prior work on single-image tomography has used statistical shape models of known anatomical structures [LWH06]. While these approaches deliver precise results, they are only applicable to specific problems, namely those where the content of the x-ray image is known. Our dataset, however, contains many different anatomical structures from a multitude of different species in different poses.

In the computer graphics community, tomographic reconstruction methods for specific volumetric phenomena, such as flames [IM04] or planetary nebulae [MKH04], have been proposed. Notably, the method of Magnor et al. [MKH04] works on a single image, but reconstructs only a very general shape, with a-priori known radial symmetry and an emission-only model. Instead, we employ deep learning [LBH15], in particular CNNs [JSD14], to solve the task at hand. The presented approach is inspired by the work of Eigen et al., which generates a 2D depth map from a 2D image using a two-stage CNN [EPF14]. The next logical step following the reconstruction of 2D depth maps is to infer complete 3D volumes [WSK15, REM16]. Wu et al. [WSK15] infer a small binary 3D volume from a 2D photo. Compared to our task, this is harder (as parts are occluded), but at the same time easier (as each pixel still has a unique depth). While Qi et al. have more recently improved upon the task this approach was applied to (object recognition) [QSN16], we are not aware of any generalizations, neither to semi-transparent surfaces nor to non-binary densities, as required for x-rays and illustrated in Fig. 1.

A typical challenge for deep learning is to find a suitably large number of training examples. We address this by using synthetic imagery in combination with real-world volumetric CT scans. Since synthetic images can enable tasks such as object detection at a quality similar to real data [PBRS15], they have been used to apply deep learning to classical problems such as optical flow [DFI15], intrinsic images [NMY15] or light and material estimation [RRF16]. In our case, the volumetric data has been collected from a large-scale database of mammalian CT scans which contains a large variety of skulls, ranging from mole rats over polar bears to chimpanzees and walruses. Fig. 2 shows a small subset of the synthetic x-rays and the corresponding volumes contained in our dataset. While anatomy can be considered an important use case of x-rays, the transfer to other domains, such as airport security x-ray imaging, is left open for future work.

Figure 2: Samples of our dataset, comprising synthetic 2D x-ray images (top row) and 3D volumes (bottom row). The second row shows images rendered from the original viewpoint, while the mapping is between view-dependent 2D images and view-independent 3D volumes.

Medical visualization has adopted deep learning for tasks that have similarity with ours. Würfl et al. [WGCM16] have mapped tomographic reconstruction to the back-propagation [RHW88] of a CNN. Their input is an x-ray sinogram that captures a volume from many views, while our input is a single view only. While exploiting the similarity between back-propagation in learning and tomography is an interesting observation, it does not make use of the main feature of deep learning: finding useful internal representations to invert a mapping. Hammernik et al. [HWPM17] consider the special case of limited-angle tomography and use deep learning to remedy aliasing problems in reconstructions from very few images. Bahrami et al. [BSRS16] learn the mapping from 3T MRI to 7T-like MRI, whereby input and output are 3D, whereas we map from 2D to 3D. However, the high-level objective is similar: make the most of the available data by means of deep learning, reducing capture cost and effort. Our approach is the limit case: reconstruction from a single image.

3 Dataset

We chose to describe our dataset before introducing the method that makes use of it, as we hope it is general enough to also serve other purposes in the future.

Our dataset contains samples of the mapping from 2D x-ray images to 3D volume data, representing both the forward and the inverse mapping. We call a pair of a 2D x-ray image and a 3D volume a sample. Examples of samples are shown in Fig. 2. Overall, we have produced 466,200 such samples, whereby computer-generated (simulated) x-ray imaging is used to achieve such a high number of x-ray images based on real-world CT volumes. The 3D volumes come from a repository of CT scans of Mammalia [Dig17]. Out of these, 53,280 samples are withheld for validation. The validation set exclusively contains 20 species which are never observed at training time. Next, we describe how we have obtained the data (Sec. 3.1) and detail our approach for synthetic x-ray generation (Sec. 3.2).

Figure 3: Input to our architecture is a 2D x-ray gray image (left). The network converts this image into an internal representation with decreased spatial resolution (here seen as a block’s height) and increasing depth (depicted as a block’s width). Each type of block (encoded as colors) is defined as a combination of other blocks. Solid lines are learned, dotted lines are non-learned. For details, please see the text.

3.1 3D Data: Density Volumes

All data is acquired from the UTCT database [Dig17], category “Mammalia”. We have downloaded 175 slice videos with a resolution of approx. 500 pixels horizontally, acquired at varying quality, all subject to video compression, in 8 bit and with little calibration information, i.e. in a rather unconstrained setting. We assume pixel values in these videos to be in units of linear density, i.e. not to be subject to a gamma curve. All slice videos were re-sampled into volumes of a common cubic resolution using a Gaussian reconstruction filter. Note that a cleaner acquisition process would likely improve our results. Finally, the 3D volume is re-sampled for different views (according to the x-ray image view), resulting in one unique volume per sample.
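As an illustration only, the following NumPy/SciPy sketch shows one way such a re-sampling with a Gaussian reconstruction filter could look; the target resolution of 128 and the filter width are assumptions for the sketch, not the values used for the dataset.

```python
# Hypothetical sketch: resample a stack of 2D CT slices into a cubic volume,
# applying a Gaussian (reconstruction/anti-aliasing) filter before resampling.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def slices_to_volume(slices, target=128):
    """slices: float array of shape (n_slices, h, w) with linear densities."""
    vol = np.asarray(slices, dtype=np.float32)
    # Filter width chosen per axis from the down-sampling factor (assumption).
    sigma = np.maximum(np.array(vol.shape, dtype=np.float32) / target / 2.0, 1e-3)
    vol = gaussian_filter(vol, sigma=sigma)
    # Trilinear resampling to an isotropic target resolution.
    factors = [target / s for s in vol.shape]
    return zoom(vol, zoom=factors, order=1)
```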

Some of those species differ only slightly, for example females and males of the same species, while others are completely different. Similarly, species such as the walrus or the koala in the validation set can be very different from any species in the training data.

3.2 2D Data: X-ray Images

In order to simulate real-world x-ray imaging, our image formation follows the Beer-Lambert absorption-only model. Intensity attenuation along each ray is simulated depending on the medium’s density; reflection is not modeled. This is typical for x-rays [Dri03], as most relevant (organic) materials have an index of refraction very close to one at x-ray wavelengths. Formally, the fraction of x-radiation arriving at the sensor (transparency) $T$, after traveling through a volume with extinction coefficient $\sigma_t$ (the sum of absorption $\sigma_a$ and out-scattering $\sigma_s$) and spatially-varying density $\rho$ along a ray $\mathbf{x}(s)$ parametrized by the variable $s$, is

$$T = \exp\!\left(-\int \sigma_t\, \rho(\mathbf{x}(s))\, \mathrm{d}s\right). \qquad (1)$$

We have manually chosen $\sigma_t$ globally to a fixed value that produces x-ray images with plausible contrast. Note that, traditionally as well as in this work, an x-ray is represented not by means of transparency $T$, but inverted, by means of opacity, which is defined as $\alpha = 1 - T$.

To generate the synthetic x-ray data, i.e. to obtain transparency values for each pixel, a ray is marched front-to-back in discrete steps, evaluating Eq. 1 for the known extinction coefficient $\sigma_t$ and density $\rho$ by numerical quadrature. We use OpenGL to compute this value in parallel for all pixels [EHK06]. The output 2D x-ray images have a fixed pixel resolution and are in linear physical units, i.e. no gamma curve is applied. Orthographic projection along a random view direction from one (positive) hemisphere is used to generate the per-pixel rays. The restriction to this hemisphere is chosen to resolve the following ambiguity in our image formation model: an x-ray image taken from a direction $\omega$ is identical to the x-ray image taken from the opposite direction $-\omega$, and consequently two 3D density volumes, where one is flipped along the view axis in camera space, have the same image. Additionally, for each view direction the corresponding image was mirrored vertically, such that we obtain two images instead of one. Overall, this adds up to a total of 466,200 x-ray images, and thus samples in our dataset, to be made publicly available upon publication.
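As a minimal illustration of this image formation, the following NumPy sketch evaluates Eq. 1 by numerical quadrature for orthographic rays along the volume's depth axis; the extinction value and step size are placeholders, and the paper's OpenGL implementation additionally handles arbitrary view directions.

```python
# Minimal sketch of the absorption-only (Beer-Lambert) x-ray simulation:
# one orthographic ray per output pixel, integrated along the depth axis.
import numpy as np

def render_xray(volume, sigma_t=1.0, step=1.0):
    """volume: density array rho of shape (depth, height, width)."""
    # Numerical quadrature of the integral of sigma_t * rho along each ray.
    optical_depth = sigma_t * volume.sum(axis=0) * step
    transparency = np.exp(-optical_depth)      # T in Eq. 1
    return 1.0 - transparency                  # opacity alpha = 1 - T
```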

4 Single-image Tomography

To address the single-image tomography problem, we have designed a CNN architecture (Fig. 3) to learn the mapping from a 2D x-ray image (Fig. 3, left) to a 3D volume (Fig. 3, right) from many examples of 3D volumes that are paired with 2D x-rays (Fig. 2).

At deployment, the input to our architecture is a 2D x-ray of arbitrary spatial pixel resolution, i.e. potentially higher than the 2D images in the training set, and the output is a 3D density volume with the same spatial resolution and a depth of 128 slices. This is achieved in two stages: a network (Sec. 4.1) and a fusion (Sec. 4.2) step. Input to the network step is the 2D x-ray image re-sampled to a fixed, coarse resolution; output is a coarse density volume. The fusion step (yellow block at the very right of Fig. 3) combines this coarse 3D representation with the full-resolution 2D x-ray image (dotted line) into the final full-resolution volume. This step is simple enough to be done on-the-fly, without the need to even hold the full result in (GPU) memory.

4.1 Network

The network step uses a deep CNN [LBH15]. The overall structure is an encoder-decoder with skip connections [LSD15] and residual learning [HZRS16]. An overview of our network can be seen in Fig. 3. We will now detail some of its design aspects.

Encoder-decoder.

The purpose of an encoder-decoder design is to combine abstraction of an image into an internal representation that represents the information contained in the training data (encoder) with a second step (decoder) that applies this knowledge to the specific instance.

To combine global and local information, the network operates on different resolutions [LSD15]: the first part (orange blocks in the left half of Fig. 3) reduces spatial resolution and produces more complex features, as seen from the decreasing horizontal block size and increasing vertical block size in Fig. 3.

The right half increases resolution again (blue blocks), but without reducing the feature channel count, as is typically done when the output only has a low number of channels. Spatial resolution is increased by a deconvolution (or up-sampling) unit [ÇAL16]. This deconvolution combines the information about the existence of global features with spatial details in increased resolution.

We found the symmetric encoder-decoder to work best when combined with additional steps before (left grey blocks) and after changing resolution (right pink block). A minimal resolution of 8 provides the best trade-off: larger or smaller minimal sizes result in a larger error in our experiments.

Skip connections.

To share spatial details at some resolution level of the convolutional part with the same resolution level of the de-convolutional part, we make use of skip connections [LSD15] (also called cross-links, shown as bridging arrows). These convert fine details in the input 2D image into details of the output 3D volume. Skip connections allow using the high-resolution spatial layout to locate features, such as (3D) edges.

Residual.

Furthermore, we use residual blocks to increase the learnability [HZRS16]. Instead of learning the convolutions directly, we only learn the additive residual and add in the identity. This is seen in the definition of the 3-residual block (dark gray) in Fig. 3: it combines 3 basic blocks (light gray) with a residual link that provides a “detour” resulting in the identity mapping. This does not change the network's expressiveness, but significantly helps the training.

Convolution.

The CNN learns image filters of compact support, which makes the approach scalable to input images and volumes of sufficient resolution. Convolution (pink block) is typically accompanied by batch normalization and a ReLU non-linearity; all three steps form a basic block (light gray). Usually, neural networks map from 2D to 2D or from 3D to 3D; in our case, however, a mapping from 2D to 3D is required. Trivially, one might encode the depth dimension of the volume as a third spatial dimension. This appears attractive, as the result would be fully convolutional: features deeper in the image/volume are computed in the same way as if they were shallow. However, since the input is 2D and the task is to find the 3D mapping, this is not applicable. Therefore, in our case the third volume dimension is encoded as individual feature channels. Consequently, the design increases the number of feature channels from 1 to 256 and retains this number until the end, where it is decreased to the output resolution of 128, as seen on the right of Fig. 3. In other words: a network fully convolutional along depth would produce the same result for every slice, as nothing ever changes along that axis, which is clearly not desirable. Future work, however, could explore switching from feature channels to convolutions along depth in later steps of the network.
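For readers who prefer code, the following PyTorch sketch illustrates this design idea, a 2D encoder-decoder with skip connections whose output channels are interpreted as 128 depth slices. It is not the authors' Caffe network; channel counts, the number of levels and the omission of the residual blocks of Fig. 3 are simplifications.

```python
# Simplified sketch (assumed architecture, not the published one): a 2D
# encoder-decoder with skip connections mapping a 1-channel x-ray to 128
# feature channels that are read as depth slices of the coarse volume.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # "basic block" of Fig. 3: convolution + batch normalization + ReLU
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class XRayTo3D(nn.Module):
    def __init__(self, feats=(32, 64, 128, 256), out_slices=128):
        super().__init__()
        self.enc, self.down, self.up, self.merge = (nn.ModuleList() for _ in range(4))
        cin = 1
        for f in feats:                                   # encoder: features, then halve resolution
            self.enc.append(conv_block(cin, f))
            self.down.append(nn.Conv2d(f, f, 3, stride=2, padding=1))
            cin = f
        for f in reversed(feats):                         # decoder: double resolution, fuse skip
            self.up.append(nn.ConvTranspose2d(cin, f, 4, stride=2, padding=1))
            self.merge.append(conv_block(2 * f, f))
            cin = f
        self.head = nn.Conv2d(cin, out_slices, 1)         # depth encoded as feature channels

    def forward(self, x):                                 # x: (B, 1, H, W), H and W divisible by 16
        skips = []
        for enc, down in zip(self.enc, self.down):
            x = enc(x)
            skips.append(x)                               # skip connection at this resolution
            x = down(x)
        for up, merge, skip in zip(self.up, self.merge, reversed(skips)):
            x = merge(torch.cat([up(x), skip], dim=1))
        return self.head(x)                               # (B, 128, H, W) = 128 depth slices
```

For a 128 × 128 input, `XRayTo3D()(torch.randn(1, 1, 128, 128))` yields a tensor of shape (1, 128, 128, 128), i.e. a coarse volume of the kind the fusion step below refines; the bottleneck resolution of 8 matches the trade-off discussed above.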

Learning.

As our network aims to solve a regression task the loss calculation comprises of a simple

-norm (Euclidean Loss) between the 3D voxels. We train our network using Caffe 

[JSD14] in version rc5, and exploit four Nvidia Tesla K80 accelerator cards which brings training time down to roughly one day.

4.2 Fusion

Fusion turns the coarse-resolution 3D result of the previous step, together with the high-resolution input, into a 3D volume with full spatial resolution. This is based on the intuition that the overall 3D structure is best captured by the “intelligence” of a neural network, while the fine details are more readily available from the high-resolution 2D x-ray image.

Fusion proceeds independently for every pixel in the high-resolution image as follows. Recalling the definition of opacity $\alpha = 1 - T$ from Eq. 1, we note that, while the loss encourages the inferred densities, say $\rho_i$ of slice $i$, to be close to the ground-truth densities, nothing forces their composition $\alpha(\rho)$ to be close to the input opacity $\hat\alpha$. This is not surprising, as we do not know the ground-truth values at test time. However, we know that the densities have to combine to $\hat\alpha$, and the difference between $\hat\alpha$ and $\alpha(\rho)$ is the transparency error of our reconstruction. Based on the Beer-Lambert equation, we can compute the corresponding density error $\Delta\rho$. The idea of fusion is to distribute this density error to arrive at new density values $\tilde\rho_i$, such that these compose into the correct value again.

While we would need to know the ground truth to do this correctly, many policies to distribute the error are possible. Consider, for illustrative purposes, blaming the entire error on the first slice. This would result in the correct 2D x-ray, but from a novel view it would show an undesirable “wall” of density in front of the object. Instead, one could distribute the error evenly across all slices. This, as any other convex combination of the error, will also produce a correct x-ray and will already be much more usable than the first policy. Regrettably, it will also create density in areas that the network has correctly identified as empty, such as the void around each object. This observation leads to the intuition behind the policy we finally suggest: the change in density should be proportional to the density itself. This is achieved by setting

$$\tilde\rho_i = \rho_i + \Delta\rho \,\frac{\rho_i^{\,\gamma}}{\sum_j \rho_j^{\,\gamma}}, \qquad (2)$$

where $\gamma$ is a sharpness parameter to weight denser areas more.
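To make the per-pixel procedure concrete, the following NumPy sketch implements this distribution policy; the exact form of Eq. 2 is a reconstruction from the description above, and all parameter values (sigma_t, step, gamma) are assumptions rather than the authors' published settings.

```python
# Sketch of the fusion policy: distribute the per-pixel density error over the
# depth slices proportionally to rho_i**gamma, so empty regions stay empty.
import numpy as np

def fuse(coarse_density, target_opacity, sigma_t=1.0, step=1.0, gamma=2.0, eps=1e-8):
    """coarse_density: (depth, H, W) upsampled network output.
       target_opacity: (H, W) opacity of the full-resolution input x-ray."""
    # Total density each pixel's ray must carry to reproduce the input (Beer-Lambert).
    required = -np.log(np.clip(1.0 - target_opacity, eps, 1.0)) / (sigma_t * step)
    delta = required - coarse_density.sum(axis=0)          # per-pixel density error
    weights = coarse_density ** gamma                      # weight denser slices more
    weights /= np.maximum(weights.sum(axis=0, keepdims=True), eps)
    return np.maximum(coarse_density + delta * weights, 0.0)
```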

5 Evaluation

We have evaluated the proposed network architecture both on the validation subset of our synthetic dataset, where a ground truth is available, as well as on real images where we do not have access to the ground truth. Furthermore, our approach is compared to three baseline alternatives (Sec. 5.1) using two metrics (Sec. 5.2).

5.1 Alternative Approaches

We compare the proposed network architecture (Our) to three alternative approaches which are capable of deriving a 3D volume from a 2D x-ray image. We refer to those approaches as the nearest-neighbor (NN), the oracle approach (Oracle), and the method of Wenger et al. [WLM13] (Wenger). We describe them briefly in the following paragraphs.

Figure 4: Comparing different methods (columns) on different x-rays (rows). The methods in the different columns are coded by the colors also used in the quantitative results to the right, where we show the numerical results according to DSSIM and L2 (less is better). We see that ours is similar to the reference, while the real competitor for single-image tomography cannot achieve this. The Oracle and NN methods produce plausible skulls, but not the skull in the input x-ray, which manifests as a larger error according to both metrics.

Nearest neighbor.

The nearest neighbor approach uses the input 2D x-ray image to find the most similar 2D x-ray in the training data and returns the 3D volume belonging to that sample. While such a method is feasible in theory, it is far from practical, as remembering all 3D volumes and all 2D x-rays requires storage on the order of terabytes. Furthermore, the search time would be on the order of days, whereas our approach executes in less than a second. Nevertheless, outperforming such a method shows that the problem cannot be solved by memorizing the training data, even if that were feasible.
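A minimal sketch of this baseline, assuming an L2 image distance and in-memory arrays (which, as noted above, is impractical at the dataset's actual scale):

```python
# Hypothetical sketch of the NN baseline: return the training volume whose
# paired x-ray is closest (in L2) to the query x-ray.
import numpy as np

def nearest_neighbor_volume(query_xray, train_xrays, train_volumes):
    """query_xray: (H, W); train_xrays: (N, H, W); train_volumes: (N, D, H, W)."""
    dists = ((train_xrays - query_xray[None]) ** 2).sum(axis=(1, 2))
    return train_volumes[int(np.argmin(dists))]
```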

Oracle.

The oracle approach simply returns the 3D volume from the training dataset that is closest to the ground-truth solution. Note that this ignores the x-ray completely. It is a purely hypothetical method, as it requires knowing the ground-truth 3D volume of the input, which is not available in practice. Nevertheless, it provides an upper bound on what any memorizing encoding could ever achieve, as no memorization of the data can be better than the data itself; outperforming it shows that our method goes beyond memorization.

Wenger et al.

Wenger et al. have demonstrated single-image tomography for the case of planetary nebulae [WLM13]. While it makes different assumptions and pursues a different objective, for which it produces convincing results, it is still the method closest to our objective that we are aware of. They phrase the problem as an optimization with special constraints such as sparsity and symmetry. Different from ours, their method assumes the medium to be emission-only, while ours is absorption-only. Their original implementation was run on our x-ray images. Due to its high computation time, we could not run Wenger on the full validation set and limit ourselves to a representative choice of three images.

5.2 Metrics

In order to facilitate the comparison, we use two metrics: one 3D volume metric, and one 2D image metric.

3D Volume Metric.

For the volume metric, the L2 norm used in training is employed. The volume metric can account for errors independently of view point, lighting, iso-value or any other rendering parameters, but is often not well correlated with the perceived quality of a reconstruction, which is dominated by what is visible in the final image. Smaller values are better.

2D Image Metric.

For computing the image metric, the two volumes are rendered using a canonical setting and the resulting images are compared. Rendering is done from a camera identical to the x-ray view (orthographic), but in a different modality: we use iso-surface ray-casting, image-based lighting with ambient occlusion and slight specular shading. This is typical for volume visualization and is used to visualize our results as well. The resulting images are then compared using DSSIM [WBSS04], where again smaller values are better.
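For concreteness, a small sketch of both metrics, assuming the common convention DSSIM = (1 − SSIM)/2 and scikit-image's SSIM implementation; the rendering step that produces the compared images is omitted.

```python
# Sketch of the two metrics on pre-computed data (grayscale renderings in [0, 1]).
import numpy as np
from skimage.metrics import structural_similarity

def volume_l2(vol_a, vol_b):
    # 3D volume metric: Euclidean (L2) distance between the voxel grids.
    return float(np.sqrt(((vol_a - vol_b) ** 2).sum()))

def image_dssim(rendering_a, rendering_b):
    # 2D image metric: structural dissimilarity of the two iso-surface renderings.
    ssim = structural_similarity(rendering_a, rendering_b, data_range=1.0)
    return (1.0 - ssim) / 2.0
```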

5.3 Synthetic Data Evaluation

First, we evaluate all approaches using all metrics on synthetic x-ray images, where the ground truth 3D volume is known. For our validation dataset, we find that our approach consistently and significantly performs better than all others according to both metrics.

Figure 5: Error means (a,c) and distributions (b,d) across our test set for three methods (colors) and two metrics (the volumetric L2 metric, left; DSSIM, right).

Quantitative results.

Our mean L2 error is significantly (paired t-test) better, i.e. smaller, than that of the NN method and of the Oracle method. The means and confidence intervals are seen in Fig. 5, a. When plotting the distribution of errors in Fig. 5, b, we further see that no method fares better than our approach in any regime. The picture is even stronger in terms of DSSIM, where our mean error is significantly smaller than the means of both the NN and the Oracle method (paired t-tests). We have added Wenger to all plots, despite having only three samples, as a rough indication of performance. While it was originally designed for a different purpose, it is the closest competitor we are aware of. We see that its error is slightly larger than that of the baseline methods on our data.

Qualitative results.

Finally, the quality is best inspected by comparing our results in re-rendering, cut-away or stereo applications to the ground truth, as seen in Fig. 4, which compares all approaches. We see that Oracle and NN produce volumes that look plausible but do not really match the input image. This can be seen from the error bars to the right, which are given for each individual sample (row), following the color coding of the methods (columns). Finally, Fig. 16 shows more results of our approach, including novel views. The supplemental materials show the full validation dataset following the protocol of Fig. 16.

User study.

When showing the 10 best results according to the DSSIM metric (Fig. 16) of either the ground truth or ours to naïve subjects, using iso-surface renderings in a time-randomized two-alternative forced choice (2AFC) protocol and asking “if the image is real”, the correct answer was given at a rate close to chance (binomial test). While this might indicate that subjects did not understand the task, it could also mean that there is at least no obvious criterion to separate our results from the ground truth.
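A sketch of the corresponding statistical analysis, assuming SciPy's binomial test; the counts below are placeholders, not the study's actual numbers.

```python
# Illustrative 2AFC analysis: does the fraction of correct "real vs. ours"
# answers differ from the 50% chance level?
from scipy.stats import binomtest

n_correct, n_trials = 52, 100          # hypothetical counts, not the study's data
result = binomtest(n_correct, n_trials, p=0.5, alternative='two-sided')
print(result.pvalue)                   # a large p-value: not distinguishable from chance
```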

Figure 6: Visual comparison of down- and up-sampled volumes (8, 16, 32, 64) to the original volume resolution of 128.

Effect of Slice Count.

In order to see whether the network reconstructs all 128 slices properly, rather than simply interpolating between a few slices, the 3D output was first down-sampled along the depth dimension and immediately up-sampled again to 128 slices, simulating interpolated volumes for depth resolutions of 8, 16, 32 and 64, respectively. Each such volume was then rendered and compared to the original output volume with a depth of 128, as seen in Fig. 6. There are large differences for the resolutions 8 and 16. For the resolutions 32 and 64, the numerical differences are no longer as strong. However, as seen in Fig. 7, a, there are still differences, which means that the task performed by the network exceeds mere interpolation.
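A sketch of how such an interpolation baseline can be simulated, assuming SciPy's zoom for linear down- and up-sampling along the depth axis.

```python
# Simulate what pure interpolation between a reduced number of slices would give:
# down-sample the 128-slice output along depth, then up-sample it back to 128.
import numpy as np
from scipy.ndimage import zoom

def interpolate_depth(volume, slices):
    """volume: (128, H, W); slices: e.g. 8, 16, 32 or 64."""
    factor = slices / volume.shape[0]
    coarse = zoom(volume, zoom=(factor, 1, 1), order=1)
    return zoom(coarse, zoom=(volume.shape[0] / coarse.shape[0], 1, 1), order=1)
```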

Figure 7: a) DSSIM error of down- and up-sampled volumes (8, 16, 32, 64) compared to the original volume resolution of 128. b) DSSIM error across our test set for four different angles: top, front, side, other. For (a) and (b), less is better.

Different views.

As the 3D volumes are globally aligned before we choose a random view we can analyze the effect of view direction on the error (Fig. 7, b). There are no big differences between different views regarding the output quality. However, x-rays from the front seem to be easier for the network than from other angles, whereas x-rays from the side yield the worst results.

Figure 8: Comparing fusion (Top) and no fusion (Bottom). The re-synthesized x-ray (shown) is fully identical to the input x-ray (not shown) while the iso-surface looks more detailed and remains plausible.

Effect of Fusion.

A comparison of the results obtained with and without the described fusion strategy is shown in Fig. 8. We note that fusion not only ensures that our result is a density decomposition that composites seamlessly into the input again, but also allows handling arbitrarily high spatial resolutions that would be infeasible for current CNNs due to the massive amount of data. We would also like to point out that the fusion will never add detail along the depth dimension beyond what the network provides, as no additional depth information is available from the input 2D x-ray’s transparency.

5.4 Real-World Data Evaluation

Next, we have applied our network to real-world x-ray images that we obtained from on-line repositories. Here, we do not have the corresponding 3D volume, so a quantitative evaluation or a comparison rendering from a novel view is not possible. However, the visual quality is apparent from Fig. 9. We see that our approach can extract meaningful three-dimensional structures for unobserved species and real-world x-rays, despite being trained on synthetic images. The fact that typical x-rays come with gamma compression, which can only be undone partially, adds to the difficulty of this task. Another experiment using real 2D x-rays in combination with real 3D volumes is presented in Sec. 5.6.

Figure 9: Our results (bottom) from real x-ray images (top). While no reference is available here, the overall shape appears plausible.

5.5 Applications

Our approach allows for a couple of interesting computer graphics applications on legacy x-ray images: novel views, stereo, re-rendering, and combinations of these. The supplemental material provides a web application to explore all combinations for 100 samples of the validation dataset. In the paper, we constrain ourselves to visualizing results using iso-surface ray-casting with image-based lighting and ambient occlusion at a fixed iso-value.

The prime computer graphics application enabled by our approach is novel-view generation. To this end, the 3D volume is simply input to the image synthesis procedure again, but from a novel point of view. Examples are seen in Fig. 16. Our approach also allows manipulating the obtained 3D density volume, such as cutting away parts (Fig. 10). We see that, compared to the ground truth, both interior and exterior are predicted.

Figure 10: Cut-away visualizations of the ground truth (top) and our result (bottom). Note that we reproduce both the surface and the interior of the skull or the back of the jaw. Cut-aways of the entire validation set are found in the supplemental materials.

The ability to take novel views also allows producing the two views required for a stereo image, as seen in Fig. 11.

Figure 11: Stereo from x-rays. Anaglyph and wiggly stereo visualization of the entire validation set are found in the supplemental.

5.6 Reality check

A methodological dilemma is that, ultimately, we want to know the performance on true x-ray images, but regrettably, we do not see a viable way to obtain the same amount of real-world x-ray images as can be acquired synthetically. In the following paragraphs we address this challenge with one observation and an additional experiment.

Figure 12: Which x-rays are real and which are synthetic?

Realism of synthetic x-rays.

First, we note that our synthetic x-ray images are likely similar to real x-ray images. This is hard to quantify, but the reader is encouraged to try to detect which x-rays in Fig. 12, which shows both our x-rays and x-rays from the internet, are real and which are synthetic. When sequentially showing 10 space-randomized pairs of real and synthetic x-ray images to naïve subjects in a 2AFC task and asking “which image is a real x-ray”, the correct answer was given in about 50 % of the cases, i.e. at chance level (binomial test). The supplemental materials show these stimuli. While this is no formal proof of our performance on real-world x-rays, it indicates that at least the differences are not easily detected, and that synthetic and real x-rays could be close. This is likely because x-ray transport is less complex (less scattering, no reflection, only absorption) than light transport at the size scales of our geometry.

Learning from real-world x-rays.

Second, we have repeated the entire learning for a restricted set of x-ray images for which we have explicitly acquired both image and volume. For this set, we used our own micro-CT scanner to acquire x-rays of 15 mice (Mus musculus), which were then reconstructed into volumes using classical tomography. We split this set into 12 training and 3 test exemplars, produced samples in the same way as described above, and repeated the entire learning procedure.

Qualitative results of this experiment are shown in Fig. 13. Quantitatively, we find that, despite the restricted setting, we again outperform the NN and Oracle comparisons (Fig. 15). Again, both our mean and our error distributions are better than those of any competitor for any metric. While this does not show generalization across species, it shows that, if the scanning effort is made, our method is applicable to real volumes and x-rays. If we had access to the massive original 2D x-ray image data from UTCT [Dig17], a similar experiment could be repeated at full scale, providing an actual proof. Regrettably, the x-ray data for those scans is no longer available.

Figure 13: Applying our approach to recover the 3D internal structure of mice from x-rays. The left column shows the x-ray image. The middle column shows the reconstructed 3D volume rendered using a transfer function. This can be compared to the reference, shown on the right. We see that both the external skin structures, shown in blue, and the bones, shown in orange, are present.

Finally, we re-synthesized x-rays from the density volumes, closing the loop, to compare them to the real x-rays. We find that the resulting images are very close, and training with these re-synthesized images leads to similar results, as seen in Fig. 14. This indicates, at least at a small scale, that our approach has learned the inversion of real x-rays by training on synthetic x-rays.

Figure 14: Re-synthesis of x-rays (bottom) from CT scans where the original x-rays are available (top). Both are similar, and it would not be obvious which one is synthesized and which is real.
Figure 15: Evaluation on real x-ray imagery of mice, using the protocol of Fig. 5, allowing for the same conclusion.

6 Conclusion

We have demonstrated the first application of deep learning to reconstruct 3D volumes from single 2D x-ray images. After suggesting a novel dataset for evaluation and testing, we have devised a deep CNN that can produce full 3D volumes. We suggest a specialized fusion step that allows training on low-resolution examples, yet transfers the outcome to high-resolution input. A similar approach could be applicable to other settings that are limited by the sheer amount of data (video, 3D video, light field (video), etc.). Our method was tested both on synthetic and real images, allowing for novel applications such as free-viewpoint viewing of legacy x-ray footage, stereo x-ray imagery or re-rendering in new modalities.

We have only looked at one specific instance of learning volumes from images. Our approach was learned on and tested with skulls, which form a prominent and intriguing class, but are by far not the only class. Our experiments on CTs of real mice indicate that the method can be trained on both synthetic and real data. Other geometry worth reconstructing could be clouds or smoke in applications such as weather forecast, or x-ray images obtained from security scanners at airports. We have chosen an absorption-only transport model that suits x-rays. For photos, e. g. of clouds or smoke, an emission or emission-absorption model would need to be learned. We imagine our setup would trivially extend to this case. As x-rays are known to be dominated by single-scattering, future work would need to account for multiple scattering in other modalities.

Finally, 3D-volumes-from-X in other modalities, such as PET or ultrasound, will be subject to phenomena not typically modeled in graphics, e.g. diffraction, requiring even more refined synthesis of training data and providing excellent avenues for future research.

Figure 16: Results of different approaches (columns) on different species (rows) in our synthetic validation data. The first column shows the input 2D x-ray image. The second and third columns show a rendering of our result resp. the ground truth, rendered from the original view. The last two columns use a novel view. We see that our approach can recover non-trivial details of the mammalian morphology, such as the cheekbones. The overall shape and surface orientation are plausible, as seen from the colors in the shading. Even from novel views our results look convincing, most notably when reproducing holes and cavities not present in any height field.

References

  • [BSRS16] Bahrami K., Shi F., Rekik I., Shen D.: Convolutional neural network for reconstruction of 7T-like images from 3T MRI using appearance and anatomical features. In Large-Scale Annotation of Biomedical Data and Expert Label Synthesis (2016), pp. 39–47.
  • [ÇAL16] Çiçek Ö., Abdulkadir A., Lienkamp S. S., Brox T., Ronneberger O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In Proc. Medical Image Computing and Computer-Assisted Intervention (2016), pp. 424–32.
  • [DCH88] Drebin R. A., Carpenter L., Hanrahan P.: Volume rendering. In Siggraph Computer Graphics (1988), vol. 22, pp. 65–74.
  • [DFI15] Dosovitskiy A., Fischer P., Ilg E., Hausser P., Hazirbas C., Golkov V., van der Smagt P., Cremers D., Brox T.: Flownet: Learning optical flow with convolutional networks. In Proc., ICCV (2015), pp. 2758–66.
  • [Dig17] Digmorph: Digimorph, 2017.
  • [Dri03] Driggers R. G.: Encyclopedia of Optical Engineering, vol. 2. CRC press, 2003.
  • [EHK06] Engel K., Hadwiger M., Kniss J., Rezk-Salama C., Weiskopf D.: Real-time volume graphics. CRC Press, 2006.
  • [EPF14] Eigen D., Puhrsch C., Fergus R.: Depth map prediction from a single image using a multi-scale deep network. In NIPS (2014), pp. 2366–74.
  • [FMAJB16] Firman M., Mac Aodha O., Julier S., Brostow G. J.: Structured prediction of unobserved voxels from a single depth image. In CVPR (2016).
  • [Hou73] Hounsfield G. N.: Computerized transverse axial scanning (tomography): Part 1. description of system. The British j of radiology 46, 552 (1973), 1016–1022.
  • [HWPM17] Hammernik K., Würfl T., Pock T., Maier A.: A deep learning architecture for limited-angle computed tomography reconstruction. In Bildverarbeitung für die Medizin 2017. 2017, pp. 92–7.
  • [HZRS16] He K., Zhang X., Ren S., Sun J.: Deep residual learning for image recognition. In CVPR (2016), pp. 770–8.
  • [IM04] Ihrke I., Magnor M.: Image-based tomographic reconstruction of flames. In Proc. SCA (2004), pp. 365–73.
  • [JSD14] Jia Y., Shelhamer E., Donahue J., Karayev S., Long J., Girshick R., Guadarrama S., Darrell T.: Caffe: Convolutional architecture for fast feature embedding. In Proc. ACM Multimedia (2014), pp. 675–678.
  • [LBH15] LeCun Y., Bengio Y., Hinton G.: Deep learning. Nature 521, 7553 (2015), 436–44.
  • [LDP07] Lustig M., Donoho D., Pauly J. M.: Sparse mri: The application of compressed sensing for rapid MR imaging. Magnetic resonance in medicine 58, 6 (2007), 1182–95.
  • [LSD15] Long J., Shelhamer E., Darrell T.: Fully convolutional networks for semantic segmentation. In CVPR (2015), pp. 3431–40.
  • [LWH06] Lamecker H., Wenckebach T. H., Hege H.-C.: Atlas-based 3D-shape reconstruction from X-ray images. In Proc. ICPR (2006), pp. 371–4.
  • [MKH04] Magnor M., Kindlmann G., Hansen C.: Constrained inverse volume rendering for planetary nebulae. In Proc. VIS (2004), pp. 83–90.
  • [NMY15] Narihira T., Maire M., Yu S. X.: Direct intrinsics: Learning albedo-shading decomposition by convolutional regression. In Proc. CVPR (2015), pp. 2992–2992.
  • [PBRS15] Pepik B., Benenson R., Ritschel T., Schiele B.: What is holding back convnets for detection? In Proc. GCPR (2015), pp. 517–28.
  • [QSN16] Qi C. R., Su H., Nießner M., Dai A., Yan M., Guibas L. J.: Volumetric and multi-view cnns for object classification on 3d data. In CVPR (2016), pp. 5648–5656.
  • [REM16] Rezende D. J., Eslami S. A., Mohamed S., Battaglia P., Jaderberg M., Heess N.: Unsupervised learning of 3d structure from images. In NIPS (2016), pp. 4996–5004.
  • [RH01] Ramamoorthi R., Hanrahan P.: A signal-processing framework for inverse rendering. In SIGGRAPH (2001), pp. 117–28.
  • [RHW88] Rumelhart D. E., Hinton G. E., Williams R. J., et al.: Learning representations by back-propagating errors. Cognitive modeling 5, 3 (1988), 1.
  • [RRF16] Rematas K., Ritschel T., Fritz M., Gavves E., Tuytelaars T.: Deep reflectance maps. In Proc. CVPR (2016), pp. 4508–16.
  • [SV82] Shepp L. A., Vardi Y.: Maximum likelihood reconstruction for emission tomography. IEEE Trans medical imaging 1, 2 (1982), 113–22.
  • [WBSS04] Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. image Processing 13, 4 (2004), 600–12.
  • [WGCM16] Würfl T., Ghesu F. C., Christlein V., Maier A.: Deep learning computed tomography. In Medical Image Computing and Computer-Assisted Intervention (2016), pp. 432–40.
  • [WLM13] Wenger S., Lorenz D., Magnor M.: Fast image-based modeling of astronomical nebulae. Comp. Graph. Forum (Proc. PG) 32, 7 (2013), 93–100.
  • [Wor11] World Health Organization: Baseline country survey on medical devices, 2011.
  • [WSK15] Wu Z., Song S., Khosla A., Yu F., Zhang L., Tang X., Xiao J.: 3d shapenets: A deep representation for volumetric shapes. In CVPR (2015), pp. 1912–20.