BigDataViewer: Interactive Visualization and Image Processing for Terabyte Data Sets

12/01/2014
by Tobias Pietzsch, et al.
MPI-CBG

The increasingly popular light sheet microscopy techniques generate very large 3D time-lapse recordings of living biological specimens. The necessity to make large volumetric datasets available for interactive visualization and analysis has been widely recognized. However, existing solutions build on dedicated servers to generate virtual slices that are transferred to the client applications, in practice leading to insufficient frame rates (less than 10 frames per second) for a truly interactive experience. An easily accessible open-source solution for interactive arbitrary virtual re-slicing of very large volumes and time series of volumes has so far been missing. We fill this gap with BigDataViewer, a Fiji plugin to interactively navigate and visualize large image sequences from both local and remote data sources.


Bibliography

  • [1] J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by Selective Plane Illumination Microscopy,” Science, vol. 305, no. 5686, pp. 1007–1009, 2004.
  • [2] Z. L. Husz, N. Burton, B. Hill, N. Milyaev, and R. A. Baldock, “Web tools for large-scale 3D biological images and atlases,” BMC Bioinformatics, vol. 13, no. 1, p. 122, 2012.
  • [3] J. Schindelin et al., “Fiji: an open-source platform for biological-image analysis,” Nature Methods, vol. 9, pp. 676–682, 2012.
  • [4] T. Pietzsch, S. Preibisch, P. Tomancak, and S. Saalfeld, “ImgLib2—generic image processing in Java,” Bioinformatics, vol. 28, no. 22, pp. 3009–3011, 2012.
  • [5] S. Saalfeld, A. Cardona, V. Hartenstein, and P. Tomancak, “CATMAID: Collaborative annotation toolkit for massive amounts of image data,” Bioinformatics, vol. 25, no. 15, pp. 1984–1986, 2009.
  • [6] R. Burns et al., “The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience,” in SSDBM 25, 2013. Article No. 27. arXiv:1306.3543.
  • [7] S. Preibisch, F. Amat, E. Stamataki, M. Sarov, R. H. Singer, E. Myers, and P. Tomancak, “Efficient Bayesian-based multiview deconvolution,” Nat. Methods, vol. 11, pp. 645–648, Jun 2014.
  • [8] S. Saalfeld, R. Fetter, A. Cardona, and P. Tomančák, “Elastic volume reconstruction from series of ultra-thin microscopy sections,” Nature Methods, vol. 9, no. 7, pp. 717–720, 2012.
  • [9] F. Amat, W. Lemon, D. P. Mossing, K. McDole, Y. Wan, K. Branson, E. W. Myers, and P. J. Keller, “Fast, accurate reconstruction of cell lineages from large-scale fluorescence microscopy data,” Nature Methods, vol. 11, no. 9, pp. 951–958, 2014.

1.1 Introduction

BigDataViewer is a re-slicing browser. It takes a set of image volumes that are registered into a common global space and displays an arbitrary slice through that global space. This supplement explains the details of this rendering procedure. Several steps are necessary for this: The source image volumes need to be transformed into a global coordinate system. For each rendered pixel on the screen, the source voxels that contribute to it need to be determined. Voxel intensities need to be converted from the space they are defined in to RGB color space for display on the screen, and colors contributed by different source volumes must be blended to a final output color.

To perform these operations we rely heavily on ImgLib2 [2], a generic image processing library which we use both to represent image volumes and to implement slice rendering. Section 1.2 reviews important ImgLib2 features and discusses a basic rendering algorithm. With data that does not fit into the memory of the rendering computer, we need to take data caching, loading bandwidth, and latency into consideration. In Section 1.3 we discuss how we handle this with a non-blocking data loading scheme. BigDataViewer uses multi-resolution image pyramids as one strategy to facilitate interactive browsing in the face of extremely large datasets and bandwidth limitations. In Section 1.4 we devise a refined rendering algorithm that leverages multi-resolution volumes and non-blocking data loading, and is extendable to custom, user-defined data sources displaying annotations or processing results.

1.2 Rendering using ImgLib2

1.2.1 ImgLib2

In BigDataViewer, we employ the generic image processing library ImgLib2 [2] to represent image volumes and to implement the slice rendering algorithm. ImgLib2 makes it possible to express algorithms in a way that abstracts from the data type, dimensionality, or memory storage of the image data. For BigDataViewer we rely in particular on the following key features:

  • virtualized pixel access,

  • transparent, virtualized image extension,

  • transparent, virtualized image interpolation,

  • transparent, virtualized coordinate transformations.

The first, virtualized pixel access, provides the basis for the latter three. Virtualized access means that in ImgLib2 the pixels of an image are accessed through an abstraction layer that hides storage details such as the memory layout used to store pixel values. This allows images to be backed by arbitrary storage mechanisms. One storage scheme provided by ImgLib2 is the |CellImg|, an image container that stores an n-dimensional image as a set of flat arrays, each representing an (n-dimensional hyper-)block of the image. In this example, virtualized access hides the logic of which pixel is stored in which flat array, in a way that is completely transparent to algorithms accessing the pixels.
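To give a rough idea of the bookkeeping that this abstraction hides, the following sketch (a hypothetical helper, not ImgLib2 code) computes which flat array a pixel coordinate falls into and its offset within that array; it assumes all cells have the full cell size, whereas real |CellImg| border cells may be truncated.

public class CellIndexSketch
{
    /**
     * For an image split into cells of size cellDims, with gridDims cells per
     * dimension, return { index of the flat array holding the pixel, offset of
     * the pixel within that flat array }.
     */
    public static long[] cellIndexAndOffset( final long[] position, final long[] cellDims, final long[] gridDims )
    {
        long cellIndex = 0;
        long offset = 0;
        long cellStride = 1;
        long offsetStride = 1;
        for ( int d = 0; d < position.length; ++d )
        {
            cellIndex += ( position[ d ] / cellDims[ d ] ) * cellStride;   // which cell along dimension d
            offset += ( position[ d ] % cellDims[ d ] ) * offsetStride;    // position within that cell
            cellStride *= gridDims[ d ];
            offsetStride *= cellDims[ d ];
        }
        return new long[] { cellIndex, offset };
    }
}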

In BigDataViewer, we build on the |CellImg| storage scheme and extend it to provide cache-backed image volumes. Again, individual blocks are stored as flat arrays. However, not all blocks are in memory all the time. Instead, blocks are loaded on demand and cached in a most-recently-used scheme. The load-on-demand is triggered by virtualized pixel access: If an algorithm accesses a pixel that is not currently in memory, then the corresponding block is loaded and cached. This is extremely convenient for our rendering algorithm, which can operate under the assumption that all data is in memory all the time.

Virtualized access provides the basis for virtualized image transformations. Instead of being stored as pixel arrays in memory, images can be backed by transformation into other images. This allows transparent image transformations that are lazily evaluated. Only when a pixel is accessed, its coordinates are transformed into the original image space and the corresponding original pixel is accessed. This allows for the latter three features listed above.

Virtualized image extension means that an image is extended to infinity by defining how values beyond the image boundaries are computed. For example, these may be fixed to a constant value, or obtained by repeating or mirroring the image content. Virtualized access takes care of generating out-of-bounds pixels using the specified rule. For our rendering algorithm, we extend the raw image volumes with the background color. The rendering algorithm then does not have to consider whether it accesses pixels within or beyond image boundaries.

Virtualized image interpolation makes a discrete pixel image accessible at arbitrary real-valued coordinates. ImgLib2 provides several interpolation schemes that define how values at non-integer coordinates are interpolated. For BigDataViewer we currently use nearest-neighbor and trilinear interpolation.

Virtualized coordinate transformations are used to transform an image into another coordinate frame. The transformed image is transparent, i.e., coordinate transformations are performed on-the-fly when accessing pixels. No data is copied; accessing a pixel of the transformed image means accessing the corresponding pixel of the original image. In BigDataViewer, we use this to spatially calibrate non-isotropic microscopy acquisitions, to register raw image volumes into a common global coordinate system, and to map the global coordinate system into the desired virtual slice for rendering, as detailed in the next section.

What makes the facilities described above even more powerful is that they can be effortlessly combined and layered. An image extended to infinity is again an image, which can be interpolated or coordinate-transformed to yield yet another image, and so on. Whether the underlying data lives in our cache-backed |CellImg| or in a standard memory array is irrelevant.
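As a rough illustration of this layering, the following sketch (illustrative only; it uses the ImgLib2 calls as we recall them, and the transform values are made up) extends a small volume with a constant background, interpolates it trilinearly, and applies an affine transformation, all without copying any data:

import net.imglib2.RandomAccessible;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.RealRandomAccessible;
import net.imglib2.img.array.ArrayImgs;
import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory;
import net.imglib2.realtransform.AffineTransform3D;
import net.imglib2.realtransform.RealViews;
import net.imglib2.type.numeric.integer.UnsignedShortType;
import net.imglib2.view.Views;

public class VirtualViewsSketch
{
    public static void main( final String[] args )
    {
        // a small in-memory volume standing in for one raw image volume
        final RandomAccessibleInterval< UnsignedShortType > volume =
                ArrayImgs.unsignedShorts( 64, 64, 32 );

        // extend to infinity with background value 0, then interpolate trilinearly;
        // both steps are virtual views, nothing is copied
        final RealRandomAccessible< UnsignedShortType > continuous =
                Views.interpolate(
                        Views.extendValue( volume, new UnsignedShortType( 0 ) ),
                        new NLinearInterpolatorFactory<>() );

        // a made-up local-to-global transform (e.g. calibrating anisotropic z spacing)
        final AffineTransform3D transform = new AffineTransform3D();
        transform.set(
                1.0, 0.0, 0.0, 0.0,
                0.0, 1.0, 0.0, 0.0,
                0.0, 0.0, 3.5, 0.0 );

        // the transformed image is again just a view; coordinates are mapped on access
        final RandomAccessible< UnsignedShortType > transformed =
                RealViews.affine( continuous, transform );
    }
}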

1.2.2 Rendering Arbitrary Slices of a Multi-View Dataset

This section explains the basic algorithm for rendering a slice through a registered multi-view dataset. Note that for time-lapse datasets we only need to consider a single timepoint. For the moment, we assume that there is a single image volume corresponding to each view. Handling multi-resolution data, where each image volume is available in progressively down-scaled resolutions (mipmaps), will be discussed in Section 1.4.

We will represent coordinates in three coordinate frames:

  • The local frame of a raw image volume is defined by the 3D voxel coordinates of the volume.

  • The global reference frame is an arbitrarily defined isotropic 3D coordinate system. We denote by T_i the transformation of local coordinates of volume i to global coordinates, that is, the registration of an image volume into the global reference frame.

  • The viewer frame is a 3D coordinate system defined such that its z = 0 plane coincides with the rendering canvas on the screen. That is, the value at (x, y, 0) in the viewer frame is rendered to pixel (x, y) on the canvas. We denote by V the transformation of global coordinates to viewer coordinates. The transformation V represents the current rendering transform. It is modified by the user by zooming, panning, re-slicing, etc.


Figure 1.1: Coordinate transformations involved in rendering a slice through a set of registered volumes. Volumes are transformed into a common global reference frame using local-to-global transformations T_i. Then the viewer transformation V is applied to transform global coordinates into the viewer frame. The z = 0 plane of the viewer frame coincides with the rendering canvas on the screen. Pixels are rendered by copying this plane to the rendering canvas.

Assume that the following inputs are given:

  • A rendering transform V, mapping global to viewer coordinates.

  • n raw image volumes I_1, ..., I_n of pixel types τ_1, ..., τ_n. (For example, image data from HDF5 files has pixel type unsigned short.)

  • n transformations T_1, ..., T_n, where T_i maps voxel coordinates in I_i to global coordinates.

  • n converters C_1, ..., C_n, where a converter C_i is a function that maps values of pixel type τ_i to RGB pixel values for display on screen.

  • A 2D RGB canvas A for rendering pixels to the screen.

The basic algorithm for rendering to the canvas can now be formulated: Every raw image volume I_i is virtually extended and interpolated to be continuously accessible, i.e., pixel values are defined for every real-valued coordinate. Then we (virtually) transform coordinates, first to the global reference frame using T_i and then to the viewer frame using V. Then we (virtually) convert pixel values from the pixel type of the raw volume to pixel type RGB for display, using C_i. Now we can simply read pixels of the z = 0 slice of the transformed images, combine them using a blending function, and write them to the canvas A. Currently, we use simple summation as the blending function.

More formally, the basic rendering procedure is expressed in Algorithm 1. The coordinate transformation steps used by the rendering algorithm are illustrated in Figure 1.1.

Input : viewer transform V,
source volumes I_1, ..., I_n,
source transforms T_1, ..., T_n,
source converters C_1, ..., C_n,
canvas A
Result : rendered image in A
for i = 1 ... n do
       J_i ← C_i( V ∘ T_i ( interpolate( extend( I_i ) ) ) )
end for
for every canvas pixel (x, y) do
       set A(x, y) ← Σ_{i = 1 ... n} J_i(x, y, 0)
end for
Algorithm 1 Basic rendering algorithm.
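As a minimal sketch (not BigDataViewer's actual renderer), the per-pixel loop of Algorithm 1 could be written as follows in Java, assuming the sources have already been wrapped as transformed views like the one in the earlier sketch and using a trivial grey-scale stand-in for the converters C_i:

import java.util.List;
import net.imglib2.RandomAccess;
import net.imglib2.RandomAccessible;
import net.imglib2.type.numeric.integer.UnsignedShortType;

public class BasicSliceRenderer
{
    /** Render the z = 0 slice of the transformed sources into an ARGB canvas, blending by summation. */
    public static void render( final List< RandomAccessible< UnsignedShortType > > transformedSources,
            final int[] canvas, final int width, final int height )
    {
        for ( int y = 0; y < height; ++y )
        {
            for ( int x = 0; x < width; ++x )
            {
                int sum = 0;
                for ( final RandomAccessible< UnsignedShortType > source : transformedSources )
                {
                    // a real implementation would reuse one RandomAccess per source
                    final RandomAccess< UnsignedShortType > access = source.randomAccess();
                    access.setPosition( new long[] { x, y, 0 } );
                    sum += access.get().get() >> 8;   // crude 16-bit to 8-bit grey conversion
                }
                final int grey = Math.min( 255, sum );   // blend by summation, clamp to 8 bit
                canvas[ y * width + x ] = 0xff000000 | ( grey << 16 ) | ( grey << 8 ) | grey;
            }
        }
    }
}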

1.3 Non-Blocking Data Access

Representing raw image volumes as cache-backed images is necessitated by the fact that we need to render datasets that are far too large to fit into the available memory of the rendering computer. We will often face the situation that a voxel needed for rendering is not currently in cache and needs to be loaded from a data source, e.g., an HDF5 file on the local disk or an online data store. In this case, we have to decide how to handle a request for such a voxel. The simplest solution would be to use blocking access: When an uncached voxel is requested, a loading operation is triggered and the access blocks until the voxel data is available. However, with online data sources in particular, this approach is problematic. When data has to be fetched over unreliable network connections, we cannot make guarantees regarding the bandwidth or latency of the loading operation.

Using blocking access, a rendering operation might be blocked indefinitely. For interactive rendering it is not desirable to block while waiting for data. For the purpose of immediate interactive feedback it is very much preferable to present the user with partial data instead of rendering the application unresponsive while waiting for complete data. To handle this requirement and to provide an alternative to blocking access, we introduce |Volatile| pixel types into ImgLib2 [2].

|Volatile| pixel types represent each pixel value as a tuple. The tuple comprises an intensity value for the pixel, and a validity flag that signals whether the intensity value is valid (i.e., in our case, whether it exists in memory or is still waiting to be loaded). This allows us to implement a deferred loading scheme that provides immediate feedback.

Let us consider a concrete example. Image data is stored in HDF5 files with 16-bit precision. Assuming a blocking scheme, this means that our cache-backed image volumes have |ShortType| pixel values in ImgLib2 terms. With the deferred loading scheme, the cache-backed image instead has pixel type |VolatileShortType|. When a voxel is accessed that is not currently in cache, the storage memory for the voxel’s block is immediately allocated and the voxel is immediately ready to be processed. However, the validity flag associated with the voxel is |false|, indicating that the voxel’s intensity data is not (yet) valid. The loading of intensity data is carried out asynchronously in a background thread. Once the data is loaded, the validity flag changes to |true|, indicating that the intensity data is now valid.

Rendering from such |Volatile| cache-backed images is always possible without delay, presenting the user with partial data and immediate feedback. In most cases, the majority of the required data will be available in cache already.

Crucially, arithmetic operations on |Volatile| types are implemented to correctly propagate validity information. For example, an interpolated value computed from several voxel values will only be valid if all participating voxel values were valid. This enables the rendering algorithm to track the number of invalid pixels that were rendered to the screen, and repeatedly render the image until all pixels are valid.
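For instance, a blending or interpolation step could propagate validity roughly as in the following sketch (the averaging logic is only illustrative; we assume ImgLib2's |VolatileUnsignedShortType| with its isValid()/setValid() methods):

import net.imglib2.type.volatiles.VolatileUnsignedShortType;

public class VolatileBlendSketch
{
    /**
     * Average two volatile values. The result is flagged valid only if both
     * inputs were valid; otherwise its intensity is meaningless.
     */
    public static void average( final VolatileUnsignedShortType a,
            final VolatileUnsignedShortType b, final VolatileUnsignedShortType result )
    {
        result.get().set( ( a.get().get() + b.get().get() ) / 2 );
        result.setValid( a.isValid() && b.isValid() );
    }
}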

1.4 Rendering Multi-Resolution Sources

In BigDataViewer, raw image volumes are typically available as multi-resolution pyramids. In addition to the original resolution, there exist several progressively down-scaled versions (mipmaps). There are two main reasons for using multiple mipmap levels. First, aliasing effects in zoomed-out views can be reduced by choosing the appropriate mipmap level to render. Second, low-resolution versions occupy less memory and can therefore be transferred faster from disk or over a network connection.

BigDataViewer makes use of this by rapidly loading and rendering low-resolution data to provide immediate feedback when the user browses portions of the dataset that are not yet in cache. Higher-resolution details are filled in as time permits, until the data is rendered at the optimal resolution level. Section 1.4.1 discusses what we mean by “optimal” and how the optimal mipmap level is determined.

To make best use of the cache, we allow resolution levels to stand in for each other. When trying to render voxels from a particular mipmap level, parts of the data that are missing in this particular mipmap level are replaced with data that is present in other mipmap levels. Section 1.4.2 describes how this is implemented in the rendering algorithm.

1.4.1 Choosing the Optimal Mipmap Level

Given a multi-resolution pyramid of an image volume, we want to select the resolution level for rendering that will provide the best quality. Intuitively, we should choose the level that is closest to the on-screen resolution of the rendered image. More formally, the ratio between the source voxel size (projected to the screen) and the screen pixel size should be close to 1. Unfortunately, “projected source voxel size” is not well-defined. The source resolution may be anisotropic and therefore the width, height, and depth of a voxel projected to the screen may yield different “sizes”. In the following, projected source voxel size refers to the largest of these values.

Let us assume that image volume I_i is available in mipmap levels l = 1, ..., m of different resolutions. Let B_l denote the transformation from voxel coordinates in the l-th mipmap level to voxel coordinates in the (full-resolution) image volume I_i. Let P denote the 3D-to-2D projection that discards the z coordinate. Then voxel coordinates x in the l-th mipmap level transform to screen coordinates S_l(x), where we use S_l = P ∘ V ∘ T_i ∘ B_l to denote the concatenated chain of transformations. Let o, e_x, e_y, e_z denote the origin and the unit vectors along the x, y, z axes, respectively. Then the projected source voxel size is

d_l = max_{k ∈ {x, y, z}} ‖ S_l(e_k) − S_l(o) ‖.

The projected source voxel size can be used to select a single mipmap level for rendering by choosing the level l that minimizes |d_l − 1|.
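The following sketch (our own illustration, not the actual BigDataViewer code) computes d_l from the concatenated 3D affine part V ∘ T_i ∘ B_l, applies the projection P by simply dropping the z component, and selects the level whose projected voxel size is closest to one screen pixel:

import net.imglib2.realtransform.AffineTransform3D;

public class MipmapSelectionSketch
{
    /** Projected source voxel size d_l for the concatenated transform V * T_i * B_l. */
    public static double projectedVoxelSize( final AffineTransform3D concatenated )
    {
        final double[] projectedOrigin = new double[ 3 ];
        concatenated.apply( new double[] { 0, 0, 0 }, projectedOrigin );

        double size = 0;
        for ( int d = 0; d < 3; ++d )
        {
            final double[] unit = new double[] { 0, 0, 0 };
            unit[ d ] = 1;
            final double[] projected = new double[ 3 ];
            concatenated.apply( unit, projected );
            // project to the screen by dropping z, then measure the length of this voxel edge
            final double dx = projected[ 0 ] - projectedOrigin[ 0 ];
            final double dy = projected[ 1 ] - projectedOrigin[ 1 ];
            size = Math.max( size, Math.sqrt( dx * dx + dy * dy ) );
        }
        return size;
    }

    /** Choose the mipmap level whose projected voxel size is closest to one screen pixel. */
    public static int bestLevel( final AffineTransform3D[] concatenatedPerLevel )
    {
        int best = 0;
        double bestScore = Double.MAX_VALUE;
        for ( int l = 0; l < concatenatedPerLevel.length; ++l )
        {
            final double score = Math.abs( projectedVoxelSize( concatenatedPerLevel[ l ] ) - 1.0 );
            if ( score < bestScore )
            {
                bestScore = score;
                best = l;
            }
        }
        return best;
    }
}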

1.4.2 Rendering with Mipmapped Volatile Sources

Rendering from unreliable multi-scale data sources with non-blocking access requires an alternative strategy to selecting the single best resolution, as that single best resolution may not be available for an indefinite time while others could be used temporarily. We have implemented the following strategy to cope with this situation: All available mipmap levels are sorted by their expected rendering quality. The order of this sorted list depends on both zoom and pose of the current virtual slice and is therefore updated at each change in pose or zoom. Assume that raw image volumes are available as multi-resolution pyramids, where each mipmap level is a cache-backed image of |Volatile| pixel type. Remember that this means that each voxel of each mipmap level has a validity flag that indicates whether the voxel’s intensity value is currently in memory or whether the voxel is still pending to be loaded. When rendering a pixel on screen, we go through the list of mipmap levels starting from the best entry and try to compute a value for the rendered pixel. If the pixel cannot be rendered with valid data from the best mipmap level, we can try to render using the second-best mipmap level, and so on.

The resulting rendering algorithm is specified in Algorithm 3 (which uses the auxiliary RenderView procedure specified in Algorithm 2). We use the notation introduced in Sections 1.2.2 and 1.4.1, with the following augmentation. Arguments Y_1, ..., Y_n to Algorithm 3 denote ordered mipmap pyramids. That is, Y_i is a list of mipmap levels for view i that should be considered for rendering, ordered by quality from best to worst. We assume that the list contains m_i mipmap levels. We denote by Y_i[l] the l-th mipmap level, i.e., a (possibly down-scaled) image volume. We denote by B_i[l] the transformation from voxel coordinates in the l-th mipmap level to voxel coordinates in the (full-resolution) image volume I_i.

Procedure RenderView(V, T, Y, B, C, A, M):
       Input : viewer transform V,
       source transform T,
       mipmap levels Y[1], ..., Y[m],
       mipmap transforms B[1], ..., B[m],
       converter C,
       canvas A,
       validity mask M
       Result : partially rendered image in A,
       updated validity mask in M
       for l = 1 ... m do
             J ← C( V ∘ T ∘ B[l] ( interpolate( extend( Y[l] ) ) ) )
             for every canvas pixel (x, y) do
                   if M(x, y) ≥ l then
                         if J(x, y, 0) is valid then
                               A(x, y) ← J(x, y, 0)
                               M(x, y) ← l − 1
                         end if
                   end if
             end for
       end for
end
Algorithm 2 RenderView procedure. This partially renders one view (source multi-resolution pyramid). Available data from all mipmap levels is used. For each rendered pixel (x, y), the next-best mipmap level which could be used to improve the pixel is written to M(x, y). That is, M(x, y) = 0 implies that the pixel has been rendered with the best possible quality.
Input : viewer transform V,
ordered source pyramids Y_1, ..., Y_n,
source transforms T_1, ..., T_n,
source converters C_1, ..., C_n,
canvas A
Result : rendered image in A
// initialization
for i = 1 ... n do
       create empty canvas A_i
       create empty mask image M_i
       set M_i(x, y) ← m_i for all (x, y)
end for
// main render-and-display loop
repeat
       for i = 1 ... n do
             if max_{(x, y)} M_i(x, y) > 0 then
                   RenderView(V, T_i, Y_i, B_i, C_i, A_i, M_i)
             end if
       end for
       for every canvas pixel (x, y) do
             set A(x, y) ← Σ_{i = 1 ... n} A_i(x, y)
       end for
       display(A)
until M_i(x, y) = 0 for all i and all (x, y)
Algorithm 3 Rendering algorithm. In the main loop we partially render all views and sum the rendered views into a partially rendered final image for display. This is repeated until all pixels are rendered at the desired optimal resolution level. Note that the loop may be prematurely aborted with an incomplete image if the user navigates away from the current slice.

Verbally, the algorithm can be summarized as follows. We first consider a single view (i.e., the ordered mipmap pyramid for one source image volume). We create a mask image of the same size as the render canvas.

The mask image contains for each rendered pixel the largest index of a mipmap level that could be used to improve it (if its data were valid). All mask pixels are initialized to the number of mipmap levels, meaning that the pixel is not rendered at all and could therefore be improved by any mipmap level.

For each mipmap level (starting from the best) we go over the render canvas. For every pixel we check whether it was already drawn with the current mipmap level (or a better one). If not, we check whether the current mipmap level has valid data for the pixel (this will either result in valid data or trigger asynchronous loading of the data). If the data is valid, we set the pixel in the output image and set the corresponding mask pixel to l − 1, where l is the index of the current mipmap level. Then we go to the next mipmap level, until all pixels have been drawn once or there are no more mipmap levels.

To overlay multiple views, the above procedure is used for each view to render an intermediate result. The results are blended into the final rendered image. For each of the single-view results we check whether all pixels have been drawn at the optimal mipmap level, i.e., whether all mask values are 0. If not, the whole procedure is repeated. Note that already-completed views need not be rendered again in repeated passes.

We employ two additional tweaks to reduce rendering artifacts; these have been omitted from the above discussion. They require a tighter integration between the rendering code and the cache. Without these tweaks the following artifact may occur. Assume that in the first render pass there are voxels that are missing from the cache for all mipmap levels. Then the rendering algorithm will trigger asynchronous loading of the missing data by touching these voxels. For a short period, until the data is loaded, these voxels will remain invalid in all mipmap levels. This will result in pixels that cannot be rendered at all and therefore will appear as black artifacts on the screen.

To remedy this, we do the following. First, before starting to render, we do a fast prediction of which data chunks will be touched by the rendering. We trigger loading of these chunks such that chunks from the lowest-resolution mipmap will be loaded first. The intuition for this is that for the lowest-resolution mipmap we need to load only a little data, which will happen very fast. Therefore, when the rendering starts it is likely that the low-resolution data is in cache and we will be able to successfully render (at least a low-resolution version of) each pixel. Second, we reserve a small time budget during which loading operations are allowed to block. That is, until the time budget is used up, when an invalid voxel is hit, we allow for a small delay during which the voxel might become valid.
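Schematically, the control flow of these two tweaks looks roughly as follows (a sketch only; the cache interface and the chunk bookkeeping are hypothetical and merely illustrate the order of operations described above):

import java.util.List;

public class PrefetchSketch
{
    /** Hypothetical cache interface; not the actual BigDataViewer API. */
    interface ChunkCache
    {
        void enqueueLoad( long chunkId );                        // request asynchronous loading
        boolean awaitValid( long chunkId, long timeoutNanos );   // optionally block briefly for a chunk
    }

    static void prefetchThenRender( final ChunkCache cache, final List< List< Long > > predictedChunksPerLevel,
            final long budgetNanos, final Runnable render )
    {
        // 1. enqueue the predicted chunks, lowest-resolution mipmap level first, because it
        //    contains little data and is therefore likely to be in cache when rendering starts
        for ( int level = predictedChunksPerLevel.size() - 1; level >= 0; --level )
            for ( final long chunk : predictedChunksPerLevel.get( level ) )
                cache.enqueueLoad( chunk );

        // 2. render with a small blocking budget: until 'deadline' has passed, an access to a
        //    still-invalid chunk may call awaitValid() and wait briefly for the data to arrive
        final long deadline = System.nanoTime() + budgetNanos;
        render.run();
    }
}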

1.5 Interactive Navigation at High Resolution

On a modern display, a virtual slice at full resolution can easily comprise several million pixels. While BigDataViewer's slice rendering is very efficient, on current hardware the update rate at full resolution is not always satisfying. In order to achieve a truly interactive browsing experience, BigDataViewer renders lower-resolution slices first and gradually improves the resolution once the slice stops moving. BigDataViewer's renderer permanently measures the time required to generate a slice at each given resolution and sets the lowest resolution for interactive browsing to the maximum resolution at which the target update rate (in frames per second) can still be guaranteed. This way, we combine the best of two worlds: interactive, smooth navigation and high-quality still images for detailed inspection.

1.6 Extensibility

BigDataViewer's rendering algorithm is designed with our caching infrastructure in mind. It is aware of multi-resolution pyramids, |Volatile| voxel types, and cache-backed |CellImg| volumes. However, it is important to note that caching and rendering in BigDataViewer are only loosely coupled. In fact, the renderer will take advantage of data sources that are backed by a cache, provide multiple resolutions, or have |Volatile| voxel type. But data sources do not need to have these properties.

We took care to make it easy to add additional data sources to the renderer, for example to overlay segmentations and similar processing results. Any image that exposes a standard ImgLib2 interface can be trivially wrapped as a data source. The rendering algorithm of course works with non-mipmapped sources, because these can be treated as resolution pyramids with only a single level. Sources that do not expose |Volatile| voxel types are also trivially handled, because their voxels are always valid. Finally, sources that are infinitely large or not restricted to an integer grid can be rendered just as well. For these we can simply omit interpolation and extension that are required for the standard bounded, rasterized sources.

The ability to add custom sources is illustrated in Figure 1.2. Here we render an additional custom source to visualize the results of a blob detection algorithm. The custom source shows a virtualized image that is backed by a list of blob centers and radii. This image is continuous, infinite, and defined with a boolean voxel type. When a voxel is accessed, its value is determined on-the-fly by comparing its coordinate to the blob list. If the coordinate lies within the blob radius of the center of a blob then the voxel value is true, otherwise it is false. Note that this image is unbounded and continuously defined. The blob containment check can be performed for any real-valued coordinate, i.e., the image has infinite resolution. To display the source, we provide a boolean-to-RGB converter that converts true to green and false to black.
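A sketch of such a continuous, unbounded boolean source using ImgLib2's function-backed images might look like this (illustrative; we assume |FunctionRealRandomAccessible| and |BoolType| are available, and the blob list is just a plain Java list of centers and radii):

import java.util.List;
import net.imglib2.RealRandomAccessible;
import net.imglib2.position.FunctionRealRandomAccessible;
import net.imglib2.type.logic.BoolType;

public class BlobSourceSketch
{
    /** A detected blob: center coordinates and radius. */
    public static class Blob
    {
        final double[] center;
        final double radius;

        public Blob( final double[] center, final double radius )
        {
            this.center = center;
            this.radius = radius;
        }
    }

    /** Continuous, unbounded boolean image: true inside any blob, false elsewhere. */
    public static RealRandomAccessible< BoolType > blobImage( final List< Blob > blobs )
    {
        return new FunctionRealRandomAccessible< BoolType >( 3, ( pos, value ) -> {
            boolean inside = false;
            for ( final Blob blob : blobs )
            {
                double squaredDistance = 0;
                for ( int d = 0; d < 3; ++d )
                {
                    final double diff = pos.getDoublePosition( d ) - blob.center[ d ];
                    squaredDistance += diff * diff;
                }
                if ( squaredDistance <= blob.radius * blob.radius )
                {
                    inside = true;
                    break;
                }
            }
            value.set( inside );
        }, BoolType::new );
    }
}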


Figure 1.2: Rendering of custom sources. A visualisation of a blob-detection algorithm is added as an additional source for rendering in BigDataViewer. The source is continuous and defined with a boolean pixel type: if a coordinate lies within a given radius of the center of a detected blob the associated value is true, otherwise it is false. A custom converter to RGB converts true to green and false to black. The zoomed view illustrates that the source is continuous (i.e., has infinite resolution).

2.1 Introduction

BigDataViewer provides a custom file format that is optimized for fast arbitrary re-slicing at various scales. The file format is built on the open standards XML [3] and HDF5 [4], where HDF5 is used to store image volumes and XML is used to store meta-data. Section 2.2 gives a high-level overview of the file format, and Sections 2.3 and 2.4 provide more detail on the XML and HDF5 parts respectively.

The format is extensible in multiple ways: The XML file of a dataset can be augmented with arbitrary additional meta-data. Fiji's SPIMage processing pipeline makes use of this, for example, to store information about detected beads and nuclei. Moreover, the HDF5 file of the dataset can be replaced by other storage methods, for example TIFF files or remote access to data available online. Extensibility is further discussed in Section 2.5.

2.2 Overview

Each dataset contains a set of 3D grayscale image volumes organized by timepoints and setups. In the context of lightsheet microscopy, each channel or acquisition angle or combination of both is a setup. E.g., for a multi-view recording with 3 angles and 2 channels there are 6 setups. Each setup represents a visualisation data source in the viewer that provides one image volume per timepoint. We refer to each combination of setup and timepoint as a view. Each view has one corresponding grayscale image volume.

A dataset comprises an XML file to store meta-data and one or more HDF5 files to store the raw images. Among other things, the XML file contains

  • the path of the HDF5 file(s),

  • a number of setups,

  • a number of timepoints,

  • the registration of each view into the global coordinate system.


Figure 2.1: Chunked Mipmap Pyramid. Each raw image volume is stored in multiple resolutions, the original resolution (left) and successively smaller, downsampled versions (right). Each resolution is stored in a chunked representation, split into small 3D blocks.

Each view has one corresponding image volume which is stored in the HDF5 file. Raw image volumes are stored as multi-resolution pyramids: In addition to the original resolution, several progressively down-scaled resolutions (mipmaps) are stored. This serves two purposes. First, using mipmaps minimizes aliasing effects when rendering a zoomed-out view of the dataset [5]. Second, and more importantly, using mipmaps reduces data access time and thus increases the perceived responsiveness for navigation. Low-resolution mipmaps take up less memory and therefore load faster from disk. New chunks of data must be loaded when the user browses to a part of the dataset that is not currently cached in memory. In this situation, BigDataViewer can rapidly load and render low-resolution data, filling in high-resolution detail later as it becomes available. This multi-resolution pyramid scheme is illustrated in Figure 2.1.

Each level of the multi-resolution pyramid is stored as a chunked multi-dimensional array. Multi-dimensional arrays are the standard way of storing image data in HDF5. The layout of multi-dimensional arrays on disk can be configured. We use a chunked layout which means that the 3D image volume is split into several chunks (smaller 3D blocks). These chunks are stored individually in the HDF5 file, which optimizes performance for our use-case where fast random access to individual chunks is required.


Figure 2.2: Loading Mipmap Chunks. When rendering a slice (schematically illustrated by the blue line) the data of only a small subset of blocks is required. At the original resolution 5 blocks are required, while only 2 blocks and 1 block, respectively, are required for the lower resolutions. Therefore, less data needs to be loaded to render a low-resolution slice. This allows low-resolution versions to be loaded and rendered rapidly. High-resolution detail is filled in when the user stops browsing to view a certain slice for an extended period of time.

The performance of partial I/O, i.e., reading subsets of the data, is maximized when the data selected for I/O is contiguous on disk [6]. The chunked layout is therefore well-suited to re-slicing access to image data. Rendering a virtual slice requires data contained within a small subset of chunks. Only chunks that touch the slice need to be loaded, see Figure 2.2. Each of these chunks, however, is loaded in full, although only a subset of voxels in each chunk is required to render the actual slice. Loading the data in this way, aligned at chunk boundaries, guarantees optimal I/O performance.
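To make this concrete, the chunk indices that a slice touches can be derived from the slice's bounding box in voxel coordinates, as in this small illustration (our own sketch; real code would additionally clip against the volume bounds):

public class ChunkRangeSketch
{
    /**
     * Per dimension, compute the inclusive range of chunk indices touched by a
     * bounding box given in voxel coordinates, for chunks of size chunkSize.
     */
    public static long[][] touchedChunks( final long[] boxMin, final long[] boxMax, final int[] chunkSize )
    {
        final long[][] range = new long[ boxMin.length ][ 2 ];
        for ( int d = 0; d < boxMin.length; ++d )
        {
            range[ d ][ 0 ] = boxMin[ d ] / chunkSize[ d ];   // first chunk index touched in dimension d
            range[ d ][ 1 ] = boxMax[ d ] / chunkSize[ d ];   // last chunk index touched in dimension d
        }
        return range;
    }
}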


Figure 2.3: Caching Mipmap Chunks. Recently used blocks are cached in RAM. For rendering the slice indicated by the red line, only the red blocks need to be loaded. The blue blocks are already cached from rendering the blue slice before.

All loaded chunks are cached in RAM. During interactive navigation, subsequent slices typically intersect with a similar set of chunks because their pose has changed only moderately, i.e., cached data are re-used. Only chunks that are not currently in the cache need to be loaded from disk, see Figure 2.3. Combined with the multi-resolution mipmap representation, this chunking and caching scheme allows for fluid interactive browsing of very large datasets.

The parameters of the mipmap and chunking scheme are specific to each dataset and they are fully configurable by the user. In particular, when exporting images to the BigDataViewer format, the following parameters are adjustable:

  • the number of mipmap levels,

  • the subsampling factors in each dimension for each mipmap level,

  • the chunk sizes in each dimension for each mipmap level.

BigDataViewer suggests sensible parameter settings; however, for particular applications and data properties a user may tweak these parameters for optimal performance.
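For illustration only (these are not suggested defaults, which depend on the data), a hypothetical four-level pyramid for an anisotropic acquisition might use subsampling factors {1,1,1}, {2,2,1}, {4,4,2}, {8,8,4} together with chunk sizes {32,32,4}, {16,16,8}, {8,8,8}, {8,8,8}: z is subsampled less than x and y so that voxels become progressively more isotropic, and the full-resolution chunks are flat to match the thin original voxels.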

2.3 XML File Format

In the following we describe the XML format by means of an example. Consider a dataset that consists of 2 setups and 3 timepoints, that is, 6 views in total. The dataset can be specified in a minimal XML file as follows:

<?xml version="1.0" encoding="UTF-8"?>
<SpimData version="0.2">
  <BasePath type="relative">.</BasePath>
  <SequenceDescription>
    <ImageLoader format="bdv.hdf5">
      <hdf5 type="relative">drosophila.h5</hdf5>
    </ImageLoader>
    <ViewSetups>
      <ViewSetup>
        <id>0</id>
        <name>angle 1</name>
      </ViewSetup>
      <ViewSetup>
        <id>1</id>
        <name>angle 2</name>
      </ViewSetup>
    </ViewSetups>
    <Timepoints type="range">
      <first>0</first>
      <last>2</last>
    </Timepoints>
  </SequenceDescription>
  <ViewRegistrations>
    <ViewRegistration timepoint="0" setup="0">
      <ViewTransform type="affine">
        <affine>0.996591 0.001479 0.010733 5.384684 0.001931 0.995446 0.003766 81.544861 0.000497 0.000060 3.490110 9.854919</affine>
      </ViewTransform>
    </ViewRegistration>
    <ViewRegistration timepoint="0" setup="1">
      ...
  </ViewRegistrations>
</SpimData>


The top-level SpimData element contains at least a BasePath element, a SequenceDescription element, and a ViewRegistrations element. The ordering of elements is irrelevant.

BasePath (line 3) defines the base path for all relative paths occurring in the rest of the file. Usually, this is “|.|”, i.e., the directory of the XML file.

SequenceDescription (lines 4–22) defines the setups and timepoints and thereby specifies the views (raw image volumes) contained in the sequence. It also specifies an ImageLoader (lines 5–7), which will be discussed later. In the example we have two ViewSetups (lines 8–17). Each ViewSetup must have a unique id (|0| and |1| in the example). It may have a name (|angle 1| and |angle 2| in the example). It may also have arbitrary additional attributes (see Section 2.5). Timepoints (lines 18–21) can be specified in several ways: as a range, as a list, or as a pattern. In the example they are specified as a range, starting with |0| and ending with |2|.

ViewRegistrations (lines 23 onwards) describe the transformations that register each view's raw voxel coordinates into the global coordinate system. In the example, there are 6 views: (timepoint, setup) = (0, 0) through (2, 1). Thus there are 6 ViewRegistration child elements, one for each view. Each registration can be specified as a single ViewTransform 3D affine matrix as in the example, or as a list of ViewTransform elements which are concatenated to obtain the final transform.

The ImageLoader element (lines 5–7) describes a raw image volume source for each view. The default |<ImageLoader format="bdv.hdf5">| will read the volumes from an HDF5 file, as indicated by the |format| attribute. The contents of the ImageLoader element are specific to the format. For HDF5, it simply specifies the name of the HDF5 file (line 6).

2.4 HDF5 File Format

The HDF5 file format is straightforward. It contains the chunked multi-dimensional arrays for each view of the dataset and a minimum of meta-data. Figure 2.4 shows the HDF5 file of the example dataset, inspected in the standard HDFView browser [7].


Figure 2.4: HDF5 File Structure. The HDF5 file of the example dataset shown in an HDF5 browser.

The meta-data comprises the parameters of the mipmap pyramids. We allow different parameters for each setup, because the image volumes of individual setups might be captured with different size, resolution, or anisotropy. The parameters of a mipmap pyramid comprise the subsampling factors in each dimension for each mipmap level, and the chunk sizes in each dimension for each mipmap level. The subsampling factors for the mipmap pyramid of the setup with id SS are stored as a matrix data object in the path “sSS/resolutions” in the HDF5 file. The chunk sizes of setup SS are stored as a matrix data object in the path “sSS/subdivisions”. Having 2 setups, the example file contains s00/resolutions, s00/subdivisions, s01/resolutions, and s01/subdivisions. Consider s00/resolutions in the example dataset. This is a matrix in which rows index the mipmap level and columns index the dimension (x, y, z); each row gives the subsampling factors of one mipmap level. Similarly, s00/subdivisions is a matrix in which rows index the mipmap level and columns index the dimension; each row gives the chunk size used for one mipmap level.

Image data is stored in exactly one chunked multi-dimensional array for every mipmap level of every view. These data arrays are stored in paths “tTTTTT/sSS/L/cells” in the HDF5 file. Here, TTTTT is the id of the timepoint, SS is the id of the setup, and L is the index of the mipmap level. Having 3 timepoints, 2 setups, and 4 mipmap levels, the example file contains t00000/s00/0/cells through t00002/s01/3/cells. Currently, we always store image volumes with 16-bit precision.
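For example, a reader could compose the HDF5 path of one data array as follows (a trivial sketch of the naming scheme just described):

public class Hdf5PathSketch
{
    /** Path of the data array for a given timepoint, setup, and mipmap level. */
    public static String cellsPath( final int timepoint, final int setup, final int level )
    {
        return String.format( "t%05d/s%02d/%d/cells", timepoint, setup, level );
    }

    public static void main( final String[] args )
    {
        // prints "t00002/s01/3/cells": timepoint 2, setup 1, mipmap level 3
        System.out.println( cellsPath( 2, 1, 3 ) );
    }
}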

2.5 Extensibility

The XML format is extensible in several ways (which may seem self-evident because XML has “eXtensible” already in its name). By extensible we mean the following: The Java library that maps the XML file to a SpimData object representation in memory provides extension points for augmenting the XML (and object model) with additional content. (The SpimData Java library is open source and available at http://github.com/tpietzsch/spimdata.) Crucially, this is done in a backwards-compatible way, such that different users of the format need not be aware of each other's respective extensions. For example, BigDataViewer ignores parts of the XML file which are specific to Fiji's SPIMage processing tools. Nevertheless it is able to read and write these files, leaving the SPIMage processing extensions intact.

In the following we briefly highlight the available XML extension points and their support by the SpimData Java library.

2.5.1 Alternative Image Sources

Instead of the default HDF5 backend, any other source providing image volumes may be specified. As discussed in Section 2.3, the type of image source is defined in the |format| attribute of the |<ImageLoader| |format="bdv.hdf5">| element.

BigDataViewer provides alternative backends in addition to HDF5, for example for accessing images provided by a CATMAID web service. Adding a new type of image source requires

  1. a Java class |C| that implements the |BasicImgLoader<T>| interface (where |T| is the voxel type provided by the new format),

  2. a Java class that implements the |XmlIoBasicImgLoader<C>| interface and is annotated by |@ImgLoaderIo|, specifying the name of the new format.

To give a concrete example, the implementation of the CATMAID web service backend consists of the classes

public class CatmaidImageLoader implements BasicImgLoader< ARGBType >
{
    ...
}

and

@ImgLoaderIo( format = "catmaid", type = CatmaidImageLoader.class )
public class XmlIoCatmaidImageLoader implements XmlIoBasicImgLoader< CatmaidImageLoader >
{
    ...
}

The actual implementations are beyond the scope of this document. (We refer to http://github.com/tpietzsch/spimviewer and https://github.com/fiji/SPIM_Registration, which provide multiple example backends.) Implementing a backend is particularly easy if the custom image format is able to load chunks of image volumes. We provide facilities that make it straightforward to re-use our caching infrastructure in this case. All annotated |XmlIoBasicImgLoader| subclasses will be picked up automatically and used to instantiate |BasicImgLoader| when the specified format name is encountered. For example, if an |<ImageLoader format="catmaid">| element is encountered it will be passed to |XmlIoCatmaidImageLoader|, which then will create a |CatmaidImageLoader|.

2.5.2 Adding Custom SpimData Sections

Arbitrary top-level elements may be added to the SpimData root element. The only restriction is that each top-level element may occur only once. As discussed in Section 2.3, the elements BasePath, SequenceDescription, and ViewRegistrations must always exist. Fiji's SPIMage processing, for example, adds a custom top-level element ViewInterestPoints.

<?xml version="1.0" encoding="UTF-8"?>
<SpimData version="0.2">
  <BasePath type="relative">.</BasePath>
  <SequenceDescription>
    ...
  </SequenceDescription>
  <ViewRegistrations>
    ...
  </ViewRegistrations>
  <ViewInterestPoints>
    ...
  </ViewInterestPoints>
</SpimData>

To be able to read and write files with this additional top-level element, the SPIMage processing tools use a custom reader/writer class. All such reader/writer classes are derived from |XmlIoAbstractSpimData|, which also takes care of unknown top-level elements.

A particular reader/writer may not be able to handle a particular top-level element. For example, BigDataViewer does not know how to handle the ViewInterestPoints element and therefore ignores it. It would not be reasonable to expect every consumer of the XML format to understand additional content that is of no interest to them. However, neither should additional content be simply discarded. Otherwise it might get lost if a load-modify-save operation is performed on a file with additional content.

Instead, the |XmlIoAbstractSpimData| reader/writer stores unhandled top-level elements when opening a file. If the same reader/writer is later used to write the (modified) dataset, this information is simply appended to the newly created file as-is. This allows programmatic modification of datasets without understanding all extensions. In summary, we avoid losing information while also avoiding the need for every consumer to handle every extension.

2.5.3 Adding Custom Attributes

The ViewSetups section may be augmented with arbitrary ViewSetup attributes to provide additional meta-data for the setups. While BigDataViewer itself requires no attributes at all, Fiji's SPIMage processing requires at least the acquisition angle, channel, and illumination direction of the microscope. Conceptually, a particular attribute is a set of values. These can be defined in Attributes elements, and particular attribute values may be associated to ViewSetups using value ids. To illustrate this, here is how the angle attribute is defined and used.

<?xml version="1.0" encoding="UTF-8"?>
<SpimData version="0.2">
  <BasePath type="relative">.</BasePath>
  <SequenceDescription>
    <ViewSetups>
      <ViewSetup>
        <id>0</id>
        <attributes>
          <angle>0</angle>
          ...
        </attributes>
      </ViewSetup>
      ...
      <Attributes name="angle">
        <Angle>
          <id>0</id>
          <name>45 degree</name>
        </Angle>
        ...
      </Attributes>
    </ViewSetups>
  </SequenceDescription>
  ...
</SpimData>

An Attributes element (lines 14–20) defines all attribute values for a given attribute |name|, in this case “|angle|”. Each of the values must at least have an id, which is then used to associate an attribute value to a particular ViewSetup. This is exemplified in line 9, where the value with id |0| of the attribute named “|angle|” is referenced.

Adding a new type of attribute requires

  1. a Java class |A| that extends |Entity| (which means that it has an id) and represents an attribute value,

  2. a Java class that extends |XmlIoEntity<A>| and is annotated by a |@ViewSetupAttributeIo| annotation specifying the name of the attribute.

For example, the above |angle| attribute is implemented by the classes

public class Angle extends NamedEntity
{
    ...
}

and

@ViewSetupAttributeIo( name = "angle", type = Angle.class )
public class XmlIoAngle extends XmlIoNamedEntity< Angle >
{
    ...
}

The actual implementations are beyond the scope of this document. (We refer to http://github.com/tpietzsch/spimdata for example attribute implementations, e.g., for angle.) The |@ViewSetupAttributeIo| annotation allows automatic discovery of the classes that handle particular attributes. Similar to Section 2.5.2, the XML reader/writer stores unhandled attributes as XML trees when reading files and puts them back into place when writing files. This allows programmatic modification of datasets without understanding all attributes. Again, we avoid losing information while also avoiding the need for every consumer to handle every custom attribute.

3.1 Overview

BigDataViewer is a re-slicing browser for terabyte-sized multi-view image sequences, for example multi-view light-sheet microscopy data. Conceptually, the visualized data comprises multiple data sources or setups. Each source provides one 3D image for each time-point (in the case of a time-lapse sequence). For example, in a multi-angle SPIM sequence, each angle is a setup. In a multi-angle, multi-channel SPIM sequence, each channel of each angle is a setup.

BigDataViewer comes with a custom data format that is optimized for fast random access to very large data sets. This permits browsing to any location within a multi-terabyte recording in a fraction of a second. The file format is based on XML and HDF5 [4] and is described in Supplementary Note 2.

This supplement is a slightly modified version of the User Guide on the Fiji wiki, http://fiji.sc/BigDataViewer. In particular, we removed content which is redundant with Supplementary Note 2. The User Guide describes the Fiji plugins that comprise BigDataViewer, i.e., the viewer itself as well as plugins for importing and exporting data to/from our file format.

3.2 Installation

BigDataViewer is a Fiji plugin that is distributed via a Fiji update site. You will need a recent version of Fiji, which you can download from http://fiji.sc. To install BigDataViewer you need to enable its update site in the Fiji Updater. Select Help > Update Fiji from the Fiji menu to start the updater.

[Screenshot: imagej-updater-1.png]


Click on Manage update sites. This brings up a dialog where you can activate additional update sites.

[Screenshot: manage-update-sites-1.png]


Activate the BigDataViewer update site and Close the dialog. Now you should see additional files appearing for download.

[Screenshot: imagej-updater-2.png]


Click Apply changes and restart Fiji.

You should now have a sub-menu Plugins > BigDataViewer.

3.3 Usage

To use the BigDataViewer we need some example dataset to browse. You can download a small dataset from http://fly.mpi-cbg.de/~pietzsch/bdv-example/, comprising two views and three time-points. This is an excerpt of a 6-angle, 715-time-point sequence of Drosophila melanogaster embryonic development, imaged with a Zeiss Lightsheet Z.1. Download both the XML and the HDF5 file and place them somewhere next to each other.

Alternatively, you can create a dataset by exporting your own data as described below.

3.3.1 Opening Dataset

To start BigDataViewer, select Plugins > BigDataViewer > Open XML/HDF5 from the Fiji menu. This brings up a file open dialog. Open the XML file of your test dataset.

3.3.2 Basic Navigation

You should see something like this:

[Screenshot: bdv-start.png]


On startup, the middle slice of the first source (angle) is shown. You can browse the stack using the keyboard or the mouse. To get started, try the following:

  • Use the mouse-wheel or < and > keys to scroll through z slices.

  • right-click + drag anywhere on the canvas to translate the image.

  • Use ctrl + shift + mouse-wheel, or the ↑ and ↓ keys, to zoom in and out.

  • left-click + drag anywhere on the canvas to rotate (reslice) the image.

The following table shows the available navigation commands using the mouse:

left-click + drag Rotate (pan and tilt) around the point where the mouse was clicked.
right-click + drag or
middle-click + drag
Translate in the XY-plane.
mouse-wheel Move along the z-axis.
meta + mouse-wheel or
ctrl + shift + mouse-wheel
Zoom in and out.

The following table shows the available navigation commands using keyboard shortcuts:

X, Y, Z Select keyboard rotation axis.
←, → Rotate clockwise or counter-clockwise around the chosen rotation axis.
↑, ↓ Zoom in or out.
comma (,) and period (.) Move forward or backward along the Z-axis.
shift + X Rotate to the ZY-plane of the current source. (Look along the X-axis of the current source.)
shift + Y or shift + A Rotate to the XZ-plane of the current source. (Look along the Y-axis of the current source.)
shift + Z Rotate to the XY-plane of the current source. (Look along the Z-axis of the current source.)
[ or N Move to previous timepoint.
] or M Move to next timepoint.

For all navigation commands you can hold shift to rotate and browse faster, or hold ctrl to rotate and browse slower. For example, → rotates by 1° clockwise, while shift + → rotates by 10°, and ctrl + → rotates by 0.1°.

The axis-rotation commands (e.g., shift + X) rotate around the current mouse location. That is, if you press shift + X, the view will pivot such that you see a ZY-slice through the dataset (you look along the X-axis). The point under the mouse will stay fixed, i.e., the view will be a ZY-slice through that point.

3.3.3 Interpolation Mode

Using I you can switch between nearest-neighbor and trilinear interpolation schemes. The difference is clearly visible when you zoom in such that individual source pixels are visible.

[Screenshot: interpolation.png]


Trilinear interpolation results in smoother images but is a bit more expensive computationally. Nearest-neighbor is faster but looks more pixelated.

3.3.4 Displaying Multiple Sources

BigDataViewer datasets typically contain more than one source. For a SPIM sequence one usually has multiple angles and possibly fused and deconvolved data on top.

Select Settings > Visibility & Grouping from the menu to bring up a dialog to control source visibility. You can also bring up this dialog by the shortcut F6.

[Screenshot: visibility.png]


Using the current source checkboxes (A in the figure above), you can switch between available sources. The first ten sources can also be made current by the number keys 1 through 0 in the main window.

To view multiple sources overlaid at the same time, switch to fused mode using the checkbox (B). You can also switch between normal and fused mode using the shortcut F in the main window. In fused mode individual sources can be turned on and off using the checkboxes (C) or shortcuts shift + 1 through shift + 0 in the main window.

Whether in normal or fused mode, the (unselectable) boxes (D) provide feedback on which sources are actually currently displayed. Also the main window provides feedback:

[Screenshot: overlays-1.png]


In the top-left corner an overview of the dataset is displayed (E). Visible sources are displayed as green/magenta wireframe boxes, invisible sources are displayed as grey wireframe boxes. The dimensions of the boxes illustrate the size of the source images. The filled grey rectangle illustrates the screen area, i.e., the portion of the currently displayed slice. For the visible sources, the part that is in front of the screen is green, the part that is behind the screen is magenta.

At the top of the window, the name of the current source is shown (F).

Note that also in fused mode there is always a current source, although this source may not even be visible. Commands such as shift + X (rotate to the ZY-plane) refer to the local coordinate system of the current source.

3.3.5 Grouping Sources

Often there are sets of sources for which visibility is logically related. For example, in a multi-angle, multi-channel SPIM sequence, you will frequently want to see all channels of a given angle, or all angles of a given channel. If your dataset contains deconvolved data, you may want to see either all raw angles overlaid, or the deconvolved view, respectively. You want to be able to quickly switch between those two views. Turning individual sources on and off becomes tedious in these situations. Therefore, sources can be organized into groups. All sources of a group can be activated or deactivated at once.

Source grouping is handled in the visibility and grouping dialog, too (menu Settings > Visibility & Grouping or shortcut F6).

[Screenshot: grouping.png]


The lower half of the dialog is dedicated to grouping. There are 10 groups available. They are named “group 1” through “group 10” initially, but the names can be edited (A).

Sources can be assigned to groups using the checkboxes (B). In every line, there are as many checkboxes as there are sources. Sources corresponding to active checkboxes are assigned to the respective group. For example, in the above screenshot there are two sources and therefore two “assigned sources” checkboxes per line. The first source is assigned to groups 1 and 2, the second source is assigned to groups 2 and 3. Group 2 has been renamed to “all sources”.

Grouping can be turned on and off by the checkbox (C) or by using the shortcut G in the main window. If grouping is enabled, groups take the role of individual sources: There is one current group which is visible in normal mode (all individual sources that are part of this group are overlaid). Groups can be activated or deactivated to determine visibility in fused mode (all individual sources that are part of at least one active group are overlaid).

Groups can be made current and made active or inactive using the checkboxes (D). Also, if grouping is enabled the number key shortcuts in the main window act on groups instead of individual sources. That is, groups 1 through 10 can be made current by keys 1 through 0. Similarly, shortcuts shift + 1 through shift + 0 in the main window activate or deactivate groups 1 through 10 for visibility in fused mode.

If grouping is enabled, the name of the current group is shown at the top of the main window.

[Screenshot: overlays-2.png]

3.3.6 Adjusting Brightness and Color

To change the brightness, contrast, or color of particular sources select Settings > Brightness & Color or press the shortcut S. This brings up the brightness and color settings dialog.

[Screenshot: brightness-1.png]


The min and max sliders (A) can be used to adjust the brightness and contrast. They represent minimum and maximum source values that are mapped to the display range. For the screenshot above, this means that source intensity 200 (and everything below) is mapped to black. Source intensity 862 (and everything above) is mapped to white.

When a new dataset is opened, BigDataViewer tries to estimate good initial min and max settings by looking at the first image of the dataset.

BigDataViewer datasets are currently always stored with 16 bits per pixel; however, the data does not always exploit the full value range 0 … 65535. The example drosophila dataset uses values in the range of roughly 0 … 1000, except for the much brighter fiducial beads around the specimen. The min and max sliders are in this case a bit fiddly to use, because they span the full 16-bit range with the interesting region squeezed into the first few pixels. This can be remedied by adjusting the range of the sliders. For this, click button (B) in the dialog. This reveals two additional input fields in which the range of the sliders can be adjusted. In the following screenshot, the leftmost value of the slider range has been set to 0 and the rightmost value to 2000, making the sliders much more useful.

[width=.6]brightness-2.png


So far, all sources share the same min and max settings. However, these can also be adjusted for each individual source or for groups of sources. The checkboxes (C) assign sources to min-max-groups. There is one checkbox per source. In the example drosophila dataset there are two sources, therefore there are two checkboxes. The active checkboxes indicate for which sources the min and max values apply.

If you uncheck one of the sources, it will move to its own new min-max-group. Now you can adjust the values for each source individually. The sliders of the new group are initialized as a copy of the old group’s.

[width=.6]brightness-3.png


Sources can be assigned to min-max-groups by checking/unchecking the checkboxes. The rule is that every source is always assigned to exactly one min-max-group. Thus, if you activate an unchecked source in a min-max-group, this will remove the source from its previous min-max-group and add it to the new one. Unchecking a source will remove it from its min-max-group and move it to a new one. Min-max-groups that become empty are removed. To go back to a single min-max-group in the example, simply move all sources to the same group.
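The assignment rules above amount to a simple invariant: each source belongs to exactly one min-max-group, and groups that become empty are discarded. The following plain-Java sketch (hypothetical names, not BigDataViewer’s API) illustrates that bookkeeping.

import java.util.*;

// Illustrative min-max-group bookkeeping (not BigDataViewer code).
class MinMaxGroup {
    double min, max;
    final Set<Integer> sources = new HashSet<>();
    MinMaxGroup(final double min, final double max) { this.min = min; this.max = max; }
}

class BrightnessModel {
    final List<MinMaxGroup> groups = new ArrayList<>();

    /** Assign a source to a group; every source belongs to exactly one group. */
    void assign(final int source, final MinMaxGroup target) {
        for (final Iterator<MinMaxGroup> it = groups.iterator(); it.hasNext();) {
            final MinMaxGroup g = it.next();
            if (g != target && g.sources.remove(source) && g.sources.isEmpty())
                it.remove(); // groups that become empty are discarded
        }
        target.sources.add(source);
        if (!groups.contains(target))
            groups.add(target);
    }

    /** Unchecking a source moves it to a new group initialized from the old one. */
    MinMaxGroup detach(final int source, final MinMaxGroup old) {
        final MinMaxGroup fresh = new MinMaxGroup(old.min, old.max);
        assign(source, fresh);
        return fresh;
    }
}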

Finally, at the bottom of the dialog (D) colors can be assigned to sources. There is one color button per source (two in the example). Clicking a button brings up a color dialog, where you can choose a color for that particular source. In the following screenshot, the sources have been colored magenta and green.

[width=.55]brightness-4.png

3.3.7 Bookmarking Locations and Orientations

BigDataViewer allows you to bookmark the current view. You can set bookmarks for interesting views or particular details of your dataset, to easily navigate back to those views later.

Each bookmark has an assigned shortcut key, e.g., you can have bookmarks “a”, “A”, “b”, …, “1”, “2”, etc. To set a bookmark for the current view, press Shift + B and then the shortcut you want to use for the bookmark. To recall a bookmark, press B and then the shortcut of the bookmark.

BigDataViewer provides visual feedback for setting and recalling bookmarks. When you press Shift + B, the message “set bookmark:” appears in the lower right corner of the main window, prompting you to press the bookmark shortcut next.

[width=.6]set-bookmark.png


Now press the key you want to use as a shortcut, for example A. The prompt message will change to “set bookmark: a”, indicating that you have set a bookmark with shortcut A. Instead of pressing a shortcut key, you can abort using Esc.

Similarly, when you press B to recall a bookmark, the prompt message “go to bookmark:” appears. Now press the shortcut of the bookmark you want to recall, for example A. The prompt message will change to “go to bookmark: a” and the view will move to the bookmarked location. Instead of pressing a shortcut key, you can abort using Esc.

Note that bookmark shortcuts are case-sensitive, i.e., A and Shift + A refer to the distinct bookmarks “a” and “A”, respectively.

The bookmarking mechanism can also be used to bookmark and recall orientations. Press O and then a bookmark shortcut to recall only the orientation of that bookmark. This rotates the view into the rotation of the bookmarked view (but does not zoom or translate to the bookmarked location). The rotation is performed around the current mouse location (i.e., the point under the mouse stays fixed).
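Conceptually, “rotate around the point under the mouse” means applying the bookmarked rotation R such that the mouse point p stays fixed, i.e. x' = R (x − p) + p. A minimal plain-Java sketch of that idea (illustration only, not BigDataViewer code):

// Apply rotation matrix R to point x while keeping point p fixed.
class RotateAboutPoint {
    static double[] rotateAboutPoint(final double[][] R, final double[] p, final double[] x) {
        final double[] result = new double[3];
        for (int i = 0; i < 3; i++) {
            double v = 0;
            for (int j = 0; j < 3; j++)
                v += R[i][j] * (x[j] - p[j]); // rotate the offset from p ...
            result[i] = v + p[i];             // ... then shift back to p
        }
        return result;
    }
}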

3.3.8 Loading and Saving Settings

Organizing sources into groups, assigning appropriate colors, adjusting brightness correctly, and bookmarking interesting locations is work that you do not want to repeat every time you re-open a dataset. Therefore, BigDataViewer allows you to save and load these settings.

Select File > Save settings from the menu to store settings to an XML file, and File > Load settings to load them from an XML file.

When a dataset is opened, BigDataViewer automatically loads an appropriately named settings file if it is present. This settings file must be in the same directory as the dataset’s XML file and have the same name with “.settings” inserted before the “.xml” extension. For example, if the dataset’s XML file is named drosophila.xml, the settings file must be named drosophila.settings.xml. (If you select File > Save settings, this filename is already suggested in the Save File dialog.)
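For scripting purposes, the naming convention can be reproduced with a few lines of plain Java; this sketch assumes the dataset path ends in “.xml”.

// Derive the settings file name from a dataset XML file name.
class SettingsFileName {
    static String settingsFileFor(final String datasetXml) {
        if (!datasetXml.endsWith(".xml"))
            throw new IllegalArgumentException("expected an .xml dataset file");
        return datasetXml.substring(0, datasetXml.length() - ".xml".length()) + ".settings.xml";
    }

    public static void main(final String[] args) {
        System.out.println(settingsFileFor("drosophila.xml")); // prints drosophila.settings.xml
    }
}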

Settings files assume that a specific number of sources is present; therefore, settings are usually not compatible across different datasets.

3.4 Opening Datasets as ImageJ Stacks

BigDataViewer may be great for looking at your data, but what if you want to apply other ImageJ algorithms or plugins to the images? You can open individual images from a dataset as ImageJ stacks using File > Import > BigDataViewer… from the Fiji menu.

[width=.6]import.png


Select the XML file of a dataset, then choose the time-point and source (setup) index of the image you want to open. If you enable the “open as virtual stack” checkbox, the image will open as an ImageJ virtual stack. This means that the opened image is backed by BigDataViewer’s cache and slices are loaded on demand. Without “open as virtual stack”, the full image will be loaded into memory. Virtual stacks will open a bit faster, but switching between slices may be less instantaneous.

Note that the import function is macro-recordable. Thus, you can make use of it to batch-process images from BigDataViewer datasets.
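For example, a batch run might loop over timepoints and invoke the recorded command from Java via ImageJ’s IJ.run. The command title and option string below are placeholders, not the actual recorded strings; use Plugins > Macros > Record… to obtain the real ones for your installation.

import ij.IJ;
import ij.ImagePlus;

// Hedged sketch of batch-processing images imported from a dataset.
public class BatchImportExample {
    public static void main(final String[] args) {
        for (int t = 0; t < 10; t++) {
            // hypothetical command and option string, to be replaced by the recorded ones
            IJ.run("BigDataViewer...",
                    "select=/data/drosophila.xml timepoint=" + t + " setup=0");
            final ImagePlus imp = IJ.getImage();
            // ... apply further ImageJ processing to imp here ...
            imp.close();
        }
    }
}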

3.5 Exporting Datasets for the BigDataViewer

BigDataViewer uses a custom file format that is optimized for fast arbitrary re-slicing at various scales. The file format builds on the open standards XML[3] and HDF5[4]: HDF5 is used to store the image volumes and XML is used to store meta-data. (In fact, several ways to store the image volumes besides HDF5 are supported; for example, the volume data can be provided by a web service for remote access. However, the Fiji plugins always export to HDF5.) The format is explained in detail in the Supplementary Note – we recommend reading at least the overview in Section 2 of that Note for background, rationale, and terminology that will be helpful in the following.

3.5.1 Exporting from ImageJ Stacks

You can export any dataset to BigDataViewer format by opening it as a stack in Fiji and then selecting Plugins > BigDataViewer > Export Current Image as XML/HDF5 from the Fiji menu. If the image has multiple channels, each channel will become one setup in the exported dataset. If the image has multiple frames, each frame will become one timepoint in the exported dataset. Of course, you may export from virtual stacks if your data is too big to fit into memory.

To get started, let’s open one of the ImageJ sample images by File > Open Samples > T1 Head (2.4M, 16-bits). Selecting Plugins > BigDataViewer > Export Current Image as XML/HDF5 brings up the following dialog.

[width=.6]export-stack.png


Parts (A) and (C) of the dialog are optional, so we will explain (B) and (D) first.

At the bottom of the dialog (D), the export path is defined. Specify the path of the XML file to which you want to export the dataset. The HDF5 file for the dataset will be placed into the same directory under the same name with extension “.h5”. If the “use deflate compression” checkbox is enabled, the image data will be compressed using HDF5’s built-in DEFLATE compression. We recommend using this option. It will usually reduce the file size to about 50% of the uncompressed image size, and the performance impact of decompression when browsing the dataset is negligible.

In part (B) of the dialog, the value range of the image must be specified. BigDataViewer currently always stores images with 16-bit precision, while the image you want to export is not necessarily 16-bit. The value range defines the minimum and maximum of the image you want to export. This range is mapped to the 16-bit range for export, i.e., the minimum of the value range is mapped to the minimum of the unsigned 16-bit range (0), and the maximum of the value range is mapped to the maximum of the unsigned 16-bit range (65535); a sketch of this mapping follows the list below. In the drop-down menu you can select one of the following options to specify how the value range should be determined:

  • “Use ImageJ’s current min/max setting” The minimum and maximum set in ImageJ’s Brightness&Contrast are used. Note that image intensities outside that range will be clipped to the minimum or maximum, respectively.

  • “Compute min/max of the (hyper-)stack” Compute the minimum and maximum of the stack and use these. Note that this may take some time because it requires looking at all pixels of the stack you want to export.

  • “Use values specified below” Use the values specified in the Min and Max fields (B) of the export dialog. Note that image intensities outside that range will be clipped to the minimum or maximum, respectively.
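The options above differ only in how the minimum and maximum are chosen; the mapping itself sends the minimum to 0 and the maximum to 65535 and clips values outside the range. A sketch of such a linear mapping follows (the rounding behaviour here is an assumption, not taken from the exporter):

// Map a source intensity from [min, max] to the unsigned 16-bit range, with clipping.
class ValueRangeMapping {
    static int toUnsigned16Bit(final double value, final double min, final double max) {
        final double scaled = (value - min) / (max - min) * 65535.0;
        final long rounded = Math.round(scaled);
        if (rounded < 0) return 0;          // clip below the value range
        if (rounded > 65535) return 65535;  // clip above the value range
        return (int) rounded;
    }
}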

After you have specified the value range and selected an export path, press OK to export the dataset. Messages about the progress of the operation are displayed in the ImageJ Log window.

[width=.6]export-stack-log.png


When the export is done, you can browse the dataset in the BigDataViewer by opening the exported XML file.

The optional parts (A) and (C) of the export dialog provide further options to customize the export. If the checkbox “manual mipmap setup” is enabled, you can customize the multi-resolution mipmap pyramid in which your image stacks are stored. You can specify the number of resolution levels used and their respective down-scaling factors, as well as the chunk sizes into which each resolution level is subdivided.

The “Subsampling factors” field specifies a comma-separated list of resolution levels, formatted as {level, …, level}. Each level is a list of subsampling factors in x, y, z, formatted as {x-scale, y-scale, z-scale}. For example, consider the specification {{1,1,1}, {2,2,1}, {4,4,2}}. This will create a resolution pyramid with three levels. The first level is the full resolution – it is scaled by factor 1 in all axes. The second level is down-scaled by factors 2, 2, 1 in x, y, z respectively, so it has half the resolution in x and y, but full resolution in z. The third level has half the resolution of the second in all axes, i.e., it is down-scaled by factors 4, 4, 2 in x, y, z respectively. Note that you should always order levels by decreasing resolution like this. Also note that in the above example we down-scale by different factors in different axes. One may want to do this if the resolution of the dataset is anisotropic. In that case it is advisable to first down-scale only in the higher-resolved axes until approximately isotropic resolution is reached.

The “Hdf5 chunk sizes” field specifies the chunk sizes into which the data on each resolution level is sub-divided. This is again formatted as {level, …, level}, with the same number of levels as supplied for the subsampling factors. Each level is a list of sizes in x, y, z, formatted as {x-size, y-size, z-size}. For example, consider the specification {{16,16,16}, {16,16,16}, {16,16,16}}. This will sub-divide each resolution level into chunks of 16×16×16 pixels.
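To see what a given mipmap setup means in terms of data layout, the following sketch computes per-level image dimensions and chunk counts for an assumed full-resolution size. The arithmetic (ceiling division) is illustrative and may differ in detail from the actual exporter.

import java.util.Arrays;

// Print, for each resolution level, the subsampled dimensions and how many chunks they occupy.
class MipmapLayoutSketch {
    static void printMipmapLayout(final long[] fullDim, final int[][] subsampling, final int[][] chunkSize) {
        for (int l = 0; l < subsampling.length; l++) {
            final long[] dim = new long[3];
            final long[] chunks = new long[3];
            for (int d = 0; d < 3; d++) {
                dim[d] = (fullDim[d] + subsampling[l][d] - 1) / subsampling[l][d]; // ceil(full / factor)
                chunks[d] = (dim[d] + chunkSize[l][d] - 1) / chunkSize[l][d];      // ceil(dim / chunk)
            }
            System.out.printf("level %d: %s pixels in %s chunks of %s%n",
                    l, Arrays.toString(dim), Arrays.toString(chunks), Arrays.toString(chunkSize[l]));
        }
    }

    public static void main(final String[] args) {
        // hypothetical 1024 x 1024 x 200 stack with the example settings from the text
        printMipmapLayout(new long[] { 1024, 1024, 200 },
                new int[][] { { 1, 1, 1 }, { 2, 2, 1 }, { 4, 4, 2 } },
                new int[][] { { 16, 16, 16 }, { 16, 16, 16 }, { 16, 16, 16 } });
    }
}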

It is usually not recommended to specify subsampling factors and chunk sizes manually. When browsing a dataset, the mipmap setup determines the loading speed and therefore the perceived speed of browsing to data that is not cached. With “manual mipmap setup” turned off, reasonable values will be determined automatically depending on the resolution and anisotropy of your dataset.

Finally, in part (C) of the export dialog, you may choose to split your dataset into multiple HDF5 files. This is useful in particular for very large datasets. For example, when moving the data to a different computer, it may be cumbersome to have it sitting in a single 10 TB file. If the checkbox “split hdf5” is enabled, the dataset will be split into multiple HDF5 partition files. The dataset can be split along the timepoint and setup dimensions. Specify the number of timepoints per partition and setups per partition in the respective input fields.

For example, assume your dataset has 4 setups and 10 timepoints. Setting timepoints per partition = 5 and setups per partition = 2 will result in 4 HDF5 partitions:

  • setups 1 and 2 of timepoints 1 through 5,

  • setups 3 and 4 of timepoints 1 through 5,

  • setups 1 and 2 of timepoints 6 through 10, and

  • setups 3 and 4 of timepoints 6 through 10.

Setting timepoints per partition = 0 or setups per partition = 0 means that the dataset is not split in the respective dimension.

Note that splitting into multiple HDF5 files is transparent from the viewer side. There is still only one XML file that gathers all the partition files into one dataset.
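The partition layout from the example can be reproduced by enumerating the timepoint and setup ranges, as in the following sketch. This only illustrates the splitting scheme and is not the exporter’s actual code; indices are 1-based to match the example above.

// Enumerate HDF5 partitions; 0 for either "per partition" value means "do not split".
class PartitionSketch {
    static void listPartitions(final int numTimepoints, final int numSetups,
            final int timepointsPerPartition, final int setupsPerPartition) {
        final int tStep = timepointsPerPartition == 0 ? numTimepoints : timepointsPerPartition;
        final int sStep = setupsPerPartition == 0 ? numSetups : setupsPerPartition;
        for (int t = 1; t <= numTimepoints; t += tStep)
            for (int s = 1; s <= numSetups; s += sStep)
                System.out.printf("partition: setups %d-%d, timepoints %d-%d%n",
                        s, Math.min(s + sStep - 1, numSetups),
                        t, Math.min(t + tStep - 1, numTimepoints));
    }

    public static void main(final String[] args) {
        listPartitions(10, 4, 5, 2); // reproduces the four partitions listed above
    }
}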

3.5.2 Integration with Fiji’s SPIMage Processing Tools

BigDataViewer seamlessly integrates with the “Multiview Reconstruction” plugins that Fiji offers for registration and reconstruction of lightsheet microscopy data. Recent versions of these tools build on the same XML format as BigDataViewer itself. In addition to HDF5, “Multiview Reconstruction” supports a backend for datasets that store individual views as TIFF files, because unprocessed data from lightsheet microscopes is often available in this format.

In principle, BigDataViewer is able to display a TIFF dataset as is. However, for quick navigation this is not the ideal format: when navigating to a new timepoint, BigDataViewer needs to load all TIFF files of that timepoint into memory, suffering a delay of tens of seconds. Therefore, it is beneficial to convert the TIFF dataset to HDF5. Note that this can be done at any point of the processing pipeline (e.g., before registration, after registration, after multiview fusion or deconvolution, etc.), because the “Multiview Reconstruction” plugins can operate on HDF5 datasets as well.

A discussion of the “Multiview Reconstruction” plugins is beyond the scope of this document. We assume that the user has already created an XML/TIFF dataset, and refer to the description on the Fiji wiki, http://fiji.sc/Multiview-Reconstruction, for details.

To convert the dataset to HDF5, select Plugins > Multiview Reconstruction > Resave > As HDF5 from the Fiji menu. This brings up the following dialog.

[width=.6]mvr-export1b.png


At the top of the dialog, select the XML file of the dataset you want to convert to HDF5. In the lower part of the dialog, you can select which parts of the dataset you want to resave. For example, assume that the source dataset contains the raw image stacks from the microscope as well as deconvolved versions. You might decide that you do not need the raw data as HDF5, so you can select only the deconvolved channels. Once you have determined what you want to convert, press OK.

This brings up the next dialog, in which you need to specify the export path and options.

[width=.6]mvr-export2.png


These parameters are the same as discussed in the previous section: If you want to specify custom mipmap settings, you can do so in the top part of the dialog. Below that, choose whether you want to compress the image data. For the export path, specify the XML file to which you want to export the dataset. Press OK to start the export.

Bibliography