scenery: Flexible Virtual Reality Visualisation on the Java VM

06/16/2019
by Ulrik Günther, et al.
MPI-CBG

Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data resulting from analysis of such data or simulations. Visualisation is often the first step in making sense of the data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers are becoming invaluable tools. In this work we introduce scenery, a flexible VR/AR visualisation framework for the Java VM that can handle mesh and arbitrarily large volumetric data, containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and detail example applications, such as its use in the biomedical image analysis software Fiji, or for visualising agent-based simulations.


1 Related work

A particularly popular framework in scientific visualisation is VTK [9]: VTK offers rendering of both geometric and volumetric data, using an OpenGL 2.1 renderer. However, VTK has grown considerably over the years and its API has become increasingly complex, making it difficult to change internals without breaking existing applications. A more recent development is VTK.js, which brings VTK to web browsers. ClearVolume [26] is a visualisation toolkit tailored to high-speed, volumetric microscopy and supports multi-channel/multi-timepoint data, but focuses solely on volumetric data and does not support VR/AR. Another special-purpose framework is MegaMol [8], which focuses on efficient rendering of large numbers of discrete particles and provides the developer with a thin abstraction layer over the underlying graphics API. 3D Viewer [29] handles general-purpose 3D image visualisation tasks and supports multi-timepoint data, but offers neither out-of-core volume rendering nor VR/AR.

Out-of-core rendering (OOCR) refers to rendering volumetric data that does not fit into main or graphics memory. Existing OOCR packages include Vaa3D/Terafly [20, 3], which is written with applications like neuron tracing in mind, and BigDataViewer [23], which performs by-slice rendering of arbitrarily large datasets, powered by the ImgLib2 library [22]. Another application supporting OOCR is the VR neuron tracing tool [30], which lacks support for multiple timepoints and is not customizable. Inviwo [14] supports OOCR and interactive development, but does not support the overlay of multiple volumetric datasets in a single view.

In the field of biomedical image analysis, various commercial packages exist: Arivis, Amira, and Imaris (see arivis.com/en/imaging-science/imaging-science, fei.com/software/amira/, and imaris.oxinst.com), as well as syGlass [21], support out-of-core rendering and are scriptable by the user. Arivis, Imaris, and syGlass offer rendering to VR headsets, and Amira can run on CAVE systems. Imaris provides limited Fiji and MATLAB integration. Being closed-source, the flexibility of these packages is ultimately limited (e.g., for changing rendering methods or adding new input devices).

2 scenery

With scenery, we provide a flexible framework for developing visualisation prototypes and applications on systems ranging from desktop screens and VR/AR headsets to distributed setups. The framework supports arbitrarily large volumetric datasets, which can contain multiple color channels, multiple views, and multiple timepoints. Via OpenVR/SteamVR, it supports rendering to VR headsets such as the Oculus Rift or HTC Vive. scenery is written in Kotlin, a language for the JVM that requires less boilerplate code and has more functional constructs than Java itself; this increases developer productivity while maintaining 100% compatibility with existing Java code. scenery runs on Windows, Linux, and macOS. It uses the low-level Vulkan API for fast and efficient rendering and can fall back to an OpenGL 4.1-based renderer (the Vulkan renderer uses the LWJGL Vulkan bindings, see lwjgl.org, while the OpenGL renderer uses JOGL, see jogamp.org).

scenery is designed around two concepts: a scene graph for organising the scene into nodes, and a hub that organises all of scenery’s subsystems — e.g. rendering, input, statistics — and enables communication between them. scenery’s application architecture is depicted in Fig. 1. The subsystems are only loosely coupled, meaning they can work fully independently of each other. This loose coupling enables isolated testing of the subsystems; at the moment we reach 65% code coverage (the remaining 35% is mostly code that requires additional hardware and is therefore harder to test in an automated manner).

Figure 1: Overview of scenery’s architecture.
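To make the two concepts concrete, the following is a minimal, self-contained Kotlin sketch of a scene graph and a hub holding loosely coupled subsystems. The class and method names are simplified for illustration and do not reproduce scenery's actual API.

// Conceptual sketch of a scene graph and a hub (not scenery's actual API).
// Nodes form a tree; the hub holds loosely coupled subsystems that can be
// looked up by type without depending on each other directly.
open class Node(val name: String) {
    val children = mutableListOf<Node>()
    fun addChild(child: Node) { children.add(child) }
    fun visit(action: (Node) -> Unit) {
        action(this)
        children.forEach { it.visit(action) }
    }
}

interface Subsystem
class Renderer : Subsystem { fun render(scene: Node) = scene.visit { println("render ${it.name}") } }
class InputHandler : Subsystem
class Statistics : Subsystem

class Hub {
    private val subsystems = mutableMapOf<Class<out Subsystem>, Subsystem>()
    fun <T : Subsystem> add(s: T): T { subsystems[s.javaClass] = s; return s }
    fun <T : Subsystem> get(type: Class<T>): T? = type.cast(subsystems[type])
}

fun main() {
    val scene = Node("scene").apply {
        addChild(Node("volume"))
        addChild(Node("camera"))
    }
    val hub = Hub()
    val renderer = hub.add(Renderer())
    hub.add(InputHandler())
    hub.add(Statistics())
    renderer.render(scene)
}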

3 Highlighted features

3.1 Realtime rendering on the JVM

Historically, the JVM has not been the go-to target for realtime rendering: for a long time, it had the reputation of being slow and memory-hungry. Since the improvements to the HotSpot VM introduced with Java 6, however, this is much less true, and state-of-the-art just-in-time compilers like the ones in Java 12 have become very good at automatically generating vectorized code (for this project, we measured the timings of performance-critical parts of the code, such as 4x4 matrix multiplication; compared to hand-tuned, vectorized AVX512 code, the native code generated by the JVM's JIT compiler is about a factor of 3-4 slower, and the new vectorisation API of Project Panama will close this gap further). The JVM is widely used, provides excellent dependency management via the Maven or Gradle build tools, and offers efficient, easy-to-use abstractions for, e.g., multithreading or UIs on different operating systems. Additionally, with the move to low-overhead APIs like Vulkan, pure-CPU performance is becoming less important. In the near future, Project Panama (see openjdk.java.net/projects/panama) will introduce JVM-native vectorization primitives, similar to those provided by .NET, to support CPU-heavy workloads.
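As an illustration of the kind of measurement mentioned above, a minimal timing sketch for 4x4 matrix multiplication is shown below. It is a simplified stand-in for the actual benchmark: loop counts are arbitrary, and a rigorous measurement would use a harness such as JMH to handle warm-up and dead-code elimination properly.

// Simplified microbenchmark sketch: time many 4x4 matrix multiplications
// to gauge the quality of JIT-generated code.
fun multiply4x4(a: FloatArray, b: FloatArray, out: FloatArray) {
    // Column-major 4x4 multiplication, out = a * b
    for (col in 0 until 4) {
        for (row in 0 until 4) {
            var sum = 0.0f
            for (k in 0 until 4) {
                sum += a[k * 4 + row] * b[col * 4 + k]
            }
            out[col * 4 + row] = sum
        }
    }
}

fun main() {
    val a = FloatArray(16) { it.toFloat() }
    val b = FloatArray(16) { (it + 1).toFloat() }
    val out = FloatArray(16)
    repeat(5) { // repeat to let the JIT warm up
        val start = System.nanoTime()
        repeat(10_000_000) { multiply4x4(a, b, out) }
        println("10M multiplications took ${(System.nanoTime() - start) / 1e6} ms")
    }
}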

Another convenience provided by the JVM is scripting: via the JVM's scripting extensions, scenery can be scripted through its REPL with third-party languages such as Python, Ruby, and Clojure. In the future, GraalVM (see graalvm.org) will enable polyglot code on the JVM, e.g. by ingesting LLVM bytecode directly [2]. scenery has already been tested with preview builds of both GraalVM and Project Panama.
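The generic JVM scripting route looks roughly as follows; this is a sketch of the standard javax.script (JSR-223) mechanism rather than scenery's own REPL, and it assumes a Python engine such as Jython is on the classpath.

import javax.script.ScriptEngineManager

// JVM scripting via JSR-223: expose a live object to a script engine and
// evaluate user code against it.
fun main() {
    val engine = ScriptEngineManager().getEngineByName("python")
        ?: error("No Python script engine found on the classpath")
    val scene = mutableListOf("volume", "camera")   // stand-in for a scene object
    engine.put("scene", scene)
    engine.eval("scene.add('light'); print(scene)")
}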

3.2 Out-of-core volume rendering

scenery supports volume rendering of multiple, potentially overlapping volumes that are placed into the scene via arbitrary affine transforms. For out-of-core direct volume rendering of large volumes, we develop and integrate the BigVolumeViewer library, which builds on the pyramidal image data structures and in-memory caching of large image data from BigDataViewer [23]. We augment this with a GPU cache tier for volume blocks, implemented as a single large 3D texture. This cache texture is organized into small, uniformly sized blocks. Each texture block stores a particular block of the volume at a particular level in the resolution pyramid, padded by one voxel on each side to avoid bleeding from neighboring blocks during trilinear interpolation. The mapping between texture and volume blocks is maintained on the CPU.
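The CPU-side layout bookkeeping can be sketched as follows; the block and texture sizes are made-up example values, not scenery's defaults.

// Sketch of cache-texture layout bookkeeping: blocks of blockSize^3 payload
// voxels, padded by one voxel on each side, are packed into a grid inside one
// large 3D cache texture. The numbers are illustrative.
class CacheTextureLayout(
    private val textureSize: Int = 1024,   // cache texture edge length in voxels
    private val blockSize: Int = 32        // payload voxels per block edge
) {
    private val paddedSize = blockSize + 2               // +1 voxel padding on each side
    private val blocksPerDim = textureSize / paddedSize  // how many blocks fit per axis

    val capacity = blocksPerDim * blocksPerDim * blocksPerDim

    /** Texel offset of the slot with the given linear index inside the cache texture. */
    fun slotOffset(slot: Int): Triple<Int, Int, Int> {
        val x = slot % blocksPerDim
        val y = (slot / blocksPerDim) % blocksPerDim
        val z = slot / (blocksPerDim * blocksPerDim)
        return Triple(x * paddedSize, y * paddedSize, z * paddedSize)
    }
}

fun main() {
    val layout = CacheTextureLayout()
    println("cache holds ${layout.capacity} blocks")      // 30^3 = 27000 for the values above
    println("slot 42 starts at ${layout.slotOffset(42)}")  // texel coordinates
}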

To render a particular view of a volume, we determine a base resolution level such that screen resolution is matched for the nearest visible voxel. Then, we prepare a 3D lookup texture in which each voxel corresponds to a volume block at base resolution. Each voxel in this lookup texture stores the coordinates of a block in the cache texture, as well as its resolution level relative to base, encoded as an RGBA tuple. For each visible volume block, we determine the optimal resolution from its distance to the viewer. If the desired block is present in the cache texture, we encode its coordinates in the corresponding lookup texture voxel. Otherwise, we enqueue the missing cache block for asynchronous loading through the CPU cache layer of BigDataViewer. Newly loaded blocks are inserted into the cache texture, where the cache blocks to be replaced are determined by a least-recently-used strategy, also maintained on the CPU. For rendering, currently missing blocks are substituted by lower-resolution data if it is available from the cache.
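The CPU-side bookkeeping just described (block-to-slot mapping, least-recently-used eviction, and RGBA-encoded lookup entries) can be sketched compactly; the names, the packing scheme, and the data structures below are illustrative and do not mirror scenery's internals.

// Sketch of the CPU-side LRU mapping from volume blocks to cache-texture slots,
// plus an example packing of a lookup entry (cache block coordinate and
// resolution level relative to base) into four 8-bit channels.
data class BlockKey(val x: Int, val y: Int, val z: Int, val level: Int)

fun packLookupEntry(cacheX: Int, cacheY: Int, cacheZ: Int, levelDelta: Int): Int =
    (cacheX shl 24) or (cacheY shl 16) or (cacheZ shl 8) or levelDelta

class BlockCache(private val capacity: Int) {
    // accessOrder = true makes LinkedHashMap iterate least-recently-used first
    private val slots = LinkedHashMap<BlockKey, Int>(16, 0.75f, true)
    private val missing = ArrayDeque<BlockKey>()

    /** Slot of the block in the cache texture, or null after enqueueing it for loading. */
    fun lookup(key: BlockKey): Int? =
        slots[key] ?: run { missing.addLast(key); null }

    /** Insert a newly loaded block, evicting the least-recently-used one if full. */
    fun insert(key: BlockKey, slot: Int): BlockKey? {
        var evicted: BlockKey? = null
        if (slots.size >= capacity) {
            evicted = slots.keys.first()   // least-recently-used entry
            slots.remove(evicted)          // its texture slot can be reused
        }
        slots[key] = slot
        return evicted
    }

    fun pendingLoads(): List<BlockKey> = missing.toList()
}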

Once the lookup texture is prepared, volume rendering proceeds by raycasting and sampling volume values with varying step size along the ray, adapted to the distance from the viewer. To obtain each volume sample, we first downscale its coordinate to fall within the correct voxel in the lookup texture. A nearest-neighbor sample from the lookup texture yields a block offset and scale in the cache texture. The final value is then sampled from the cache texture at the accordingly translated and scaled coordinate. With this approach, it is straightforward to raycast through multiple volumes simultaneously, simply by using multiple lookup textures. It is also easy to mix in smaller volumes, which are simply stored as 3D textures and do not require indirection via lookup textures. To adapt to a varying number and type of visible volumes, we generate shader sources dynamically at runtime. Blending of volume and mesh data is achieved by reading scene depth from the depth buffer for early ray termination, thereby hiding volume values that lie behind rendered geometry.
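The per-sample indirection can be written down compactly. The following CPU-side Kotlin sketch mirrors the steps described above with simplified types; the real implementation operates on GPU textures in GLSL, so the lookup and cache fetches are modelled here as plain function parameters.

// CPU-side illustration of the per-sample indirection: a volume-space coordinate
// is mapped to a lookup-texture voxel, whose entry redirects the sample into the
// cache texture. Types and names are simplified stand-ins for the GPU textures.
data class Vec3(val x: Float, val y: Float, val z: Float)
data class LookupEntry(val blockOffset: Vec3, val scale: Float) // from nearest-neighbour lookup

fun sampleVolume(
    coord: Vec3,                                  // sample position in volume voxels
    blockSize: Float,                             // payload voxels per block edge
    lookup: (Int, Int, Int) -> LookupEntry,       // nearest-neighbour lookup texture fetch
    cache: (Vec3) -> Float                        // trilinear fetch from the cache texture
): Float {
    // 1. Downscale to lookup-texture coordinates (one voxel per base-resolution block).
    val bx = (coord.x / blockSize).toInt()
    val by = (coord.y / blockSize).toInt()
    val bz = (coord.z / blockSize).toInt()
    val entry = lookup(bx, by, bz)
    // 2. Translate and scale into the cache texture and sample there.
    val local = Vec3(
        entry.blockOffset.x + (coord.x - bx * blockSize) * entry.scale,
        entry.blockOffset.y + (coord.y - by * blockSize) * entry.scale,
        entry.blockOffset.z + (coord.z - bz * blockSize) * entry.scale
    )
    return cache(local)
}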

3.3 Code-shader communication and reflection

In traditional OpenGL (before version 4.1), parameter data like vectors and matrices are communicated to shaders via uniforms, which are set one by one. In scenery, Uniform Buffer Objects (UBOs) are used instead of single uniforms; UBOs lower the API overhead and enable variable update rates. Custom properties of a node class that need to be communicated to the shader are annotated in the class definition with the @ShaderProperty annotation; scenery picks up annotated properties automatically and serializes them. See Listing 1 for an example of how properties can be communicated to the shader, and Listing 2 for the corresponding GLSL code defining the UBO in the shader. For procedurally generated shaders, a hash map storing these properties can be used instead.

For all values stored in shader properties, a hash is calculated, and the values are only communicated to the GPU when the hash changes. At the time of writing, all elementary types (ints, floats, etc.), as well as matrices and vectors thereof, are supported.

// Define a matrix and an integer property
@ShaderProperty var myMatrix: GLMatrix
@ShaderProperty var myIntProperty: Int
// For a dynamically generated shader: Store properties as hash map
@ShaderProperty val shaderProperties = HashMap<String, Any>()
Listing 1: Shader property example
layout(set = 5, binding = 0)
uniform ShaderProperties {
    int myIntProperty;
    mat4 myMatrix;
};
Listing 2: GLSL code example for shader properties
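The hash-based change detection can be sketched as follows; ShaderPropertyUploader and its upload callback are illustrative names and do not reflect scenery's internal classes.

// Illustrative sketch of hash-based UBO updates: hash the current property
// values and skip the GPU upload if nothing changed since the last frame.
class ShaderPropertyUploader(private val upload: (Map<String, Any>) -> Unit) {
    private var lastHash: Int? = null

    fun updateIfChanged(properties: Map<String, Any>) {
        val hash = properties.entries.sortedBy { it.key }
            .fold(17) { acc, (k, v) -> 31 * acc + k.hashCode() * 31 + v.hashCode() }
        if (hash != lastHash) {
            upload(properties)       // repack the UBO and push it to the GPU
            lastHash = hash
        }
    }
}

fun main() {
    val uploader = ShaderPropertyUploader { println("uploading ${it.keys}") }
    uploader.updateIfChanged(mapOf("myIntProperty" to 3))   // uploads
    uploader.updateIfChanged(mapOf("myIntProperty" to 3))   // skipped, unchanged
    uploader.updateIfChanged(mapOf("myIntProperty" to 4))   // uploads again
}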

The correct memory layout required by the shader is determined by our Java wrapper for the shader reflection library SPIRV-cross and the GLSL reference compiler glslang (see github.com/KhronosGroup/SPIRV-cross for SPIRV-cross and github.com/scenerygraphics/spirvcrossj for our wrapper library, spirvcrossj). This provides a user- and developer-friendly API.

Furthermore, scenery supports shader factories — classes that dynamically produce shader code to be consumed by the GPU — and uses them, e.g., when multiple volumetric datasets with arbitrary alignment need to be rendered in the same view.

3.4 Custom rendering pipelines

In scenery, the user can write custom shaders and assign them on a per-node basis in the scene graph. In addition, scenery allows the definition of fully customizable rendering pipelines. These pipelines are defined declaratively in a YAML file describing render targets, render passes, and their contents. Render passes can have properties that are adjustable at runtime, e.g., the exposure of an HDR rendering pass. Rendering pipelines can be exchanged at runtime and do not require a full reload of the renderer — already loaded textures, for example, do not need to be reloaded.

The custom rendering pipelines enable the user/developer to quickly switch between different pipelines, thereby enabling rapid prototyping of new rendering pipelines. We hope that this flexibility stimulates the creation of custom pipelines, e.g., for non-photorealistic rendering, or novel applications, such as Neural Scene (De)Rendering [19, 31].

3.5 VR and preliminary AR support

scenery supports rendering to VR headsets via the OpenVR/SteamVR library, as well as rendering on distributed setups such as CAVEs or Powerwalls. The modules supporting different VR devices all implement a common interface and can therefore be exchanged quickly, even at runtime.

In the case of distributed rendering, one machine is designated as master, to which multiple clients can connect. We use the same hashing mechanism as described in Section 3.3 to determine which node changes need to be communicated over the network, use Kryo (see github.com/EsotericSoftware/Kryo) for fast serialization of the changes, and finally ZeroMQ for low-latency, resilient network communication. A CAVE usage example is shown in Fig. 2.
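A minimal sketch of the publishing side of such a setup is shown below; it assumes the Kryo 5 and JeroMQ libraries, the NodeUpdate class is hypothetical, and the wire format is greatly simplified compared to scenery's.

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.Output
import org.zeromq.SocketType
import org.zeromq.ZContext
import java.io.ByteArrayOutputStream

// Hypothetical node change record; scenery's actual change records look different.
data class NodeUpdate(var name: String = "", var transform: FloatArray = FloatArray(16))

fun main() {
    val kryo = Kryo()
    kryo.register(NodeUpdate::class.java)
    kryo.register(FloatArray::class.java)

    ZContext().use { context ->
        // The master publishes serialized scene changes; clients subscribe.
        val publisher = context.createSocket(SocketType.PUB)
        publisher.bind("tcp://*:5556")

        val update = NodeUpdate("volume", FloatArray(16) { if (it % 5 == 0) 1f else 0f })
        val buffer = ByteArrayOutputStream()
        Output(buffer).use { output -> kryo.writeObject(output, update) }
        publisher.send(buffer.toByteArray())
    }
}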

Figure 2: A scientist interactively explores a 500 GiB multi-timepoint dataset of the development of an embryo of the fruit fly Drosophila melanogaster in the CAVE at the CSBD using a scenery-based application. Dataset courtesy of Loïc Royer, MPI-CBG/CZI Biohub, and Philipp Keller, HHMI Janelia Farm [25].

We have also developed an experimental compositor that enables scenery to render to the Microsoft Hololens.

3.6 Remote rendering and headless rendering

To support downstream image analysis and settings where rendering happens on a powerful but non-local computer, scenery can stream rendered images out, either as raw data or as an H.264 stream. The H.264 stream can either be saved to disk or streamed over the network via RTP. In the streaming case, all produced frames are buffered and processed in a separate coroutine, so that rendering performance is not impacted.
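The decoupling of rendering and encoding can be sketched with Kotlin coroutines and a buffered channel: the render loop hands finished frames to the channel, and a separate coroutine drains it into the encoder. encodeFrame and the buffer size are placeholders, not scenery's code.

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Sketch of decoupled frame encoding: the render loop pushes frames into a
// buffered channel, a separate coroutine consumes them, so encoding latency
// does not stall rendering. encodeFrame is a placeholder for the H.264 encoder.
fun encodeFrame(frame: ByteArray) { /* hand off to the video encoder */ }

fun main() = runBlocking {
    val frames = Channel<ByteArray>(capacity = 60)   // buffer up to 60 frames

    val encoder = launch {
        for (frame in frames) {        // iterates until the channel is closed
            encodeFrame(frame)
        }
    }

    repeat(120) { frameIndex ->        // stand-in for the render loop
        val frame = ByteArray(1920 * 1080 * 4) { frameIndex.toByte() }
        frames.send(frame)             // suspends only if the buffer is full
    }
    frames.close()
    encoder.join()
}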

scenery can also run in headless mode, creating no windows, which enables both remote rendering on machines that do not have a display, e.g., in a cluster setup, and easier integration testing. Most examples provided with scenery can be run automatically (see the ExampleRunner test) and store screenshots for comparison. In the future, broken builds will be identified automatically by comparison against known-good images.
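Image-based regression testing of this kind can be sketched as a per-pixel comparison against a stored reference image; the threshold and comparison metric below are illustrative choices, not the ones used by ExampleRunner.

import java.io.File
import javax.imageio.ImageIO
import kotlin.math.abs

// Illustrative screenshot comparison: compare a rendered screenshot against a
// known-good reference pixel by pixel and fail if the mean per-channel
// difference exceeds a threshold.
fun imagesMatch(rendered: File, reference: File, threshold: Double = 0.01): Boolean {
    val a = ImageIO.read(rendered)
    val b = ImageIO.read(reference)
    if (a.width != b.width || a.height != b.height) return false

    var difference = 0L
    for (y in 0 until a.height) {
        for (x in 0 until a.width) {
            val pa = a.getRGB(x, y)
            val pb = b.getRGB(x, y)
            for (shift in intArrayOf(0, 8, 16)) {          // B, G, R channels
                difference += abs(((pa shr shift) and 0xff) - ((pb shr shift) and 0xff))
            }
        }
    }
    val meanDifference = difference.toDouble() / (a.width * a.height * 3 * 255)
    return meanDifference <= threshold
}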

4 Example Applications

4.1 sciview

Figure 3: Out-of-core dataset of a D. melanogaster embryo visualised with scenery/sciview. The image is a composite of three different volumetric views, shown in different colors. The transfer function on the left was adjusted to highlight volume boundaries. Dataset courtesy of Michael Weber, Huisken Lab, MPI-CBG/Morgridge Institute.

On top of scenery, we have developed sciview, a plugin for embedding scenery in Fiji/ImageJ2 [27]. We hope it will boost the use of VR technology in the life sciences by enabling users to quickly prototype visualisations and add new functionality. In sciview, many aspects of the UI are generated automatically, including the node property inspector and the list of Fiji plugins and commands applicable to the currently active dataset. sciview has been used in a recent SPIM pipeline [5]. In Supplementary Video 2, we show sciview rendering three overlaid volumes of a fruit fly embryo; a still frame is shown in Figure 3.

4.2 Agent-based simulations

We have utilized scenery to visualize agent-based simulations with large numbers of agents. By adapting an existing agent- and physics-based simulation toolkit [11], we have increased the number of agents that can be visualized efficiently by a factor of 10. This performance improvement enables previous studies of swarms with evolving behaviors to be revisited under conditions that may enable new levels of emergent behavior [10, 7]. In Figure 4, we show 10,000 agents using flocking rules inspired by [24] to collectively form a sphere; a sketch of such rules follows below.

Figure 4: Agent-based simulation with 10,000 agents collectively forming a sphere.
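The flocking behaviour follows the classic cohesion, separation, and alignment rules [24]; a compact Kotlin sketch of one update step is shown below. The vector maths, weights, and neighbourhood radius are illustrative and not taken from the simulation toolkit used here.

// Compact sketch of classic flocking rules (cohesion, separation, alignment)
// for one update step. Weights and radius are illustrative.
data class Vec3(var x: Float, var y: Float, var z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
    fun lengthSquared() = x * x + y * y + z * z
}

class Agent(var position: Vec3, var velocity: Vec3)

fun step(agents: List<Agent>, radius: Float = 2.0f, dt: Float = 0.1f) {
    val r2 = radius * radius
    for (agent in agents) {
        val neighbours = agents.filter {
            it !== agent && (it.position - agent.position).lengthSquared() < r2
        }
        if (neighbours.isEmpty()) continue

        var cohesion = Vec3(0f, 0f, 0f)    // steer towards the neighbourhood centre
        var separation = Vec3(0f, 0f, 0f)  // steer away from close neighbours
        var alignment = Vec3(0f, 0f, 0f)   // match the neighbours' average velocity
        for (n in neighbours) {
            cohesion = cohesion + n.position
            separation = separation + (agent.position - n.position)
            alignment = alignment + n.velocity
        }
        val inv = 1.0f / neighbours.size
        cohesion = (cohesion * inv - agent.position) * 0.01f
        alignment = (alignment * inv - agent.velocity) * 0.05f
        separation = separation * 0.02f

        agent.velocity = agent.velocity + cohesion + separation + alignment
        agent.position = agent.position + agent.velocity * dt
    }
}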

4.3 Evaluating simulator sickness for VR control of microscopes

We have used scenery in a preliminary study of VR control for state-of-the-art volumetric microscopes. In this study with 8 microscopy experts, we investigated whether users tend to suffer from motion sickness while using our interfaces. The average SSQ score [15] we found was very low, indicating that users tolerated our VR rendering of live microscopy data, and the interaction with it, very well. A demo of such an interface is shown in Supplementary Video 1.

5 Conclusions and Future Work

We have introduced scenery, an extensible, user/developer-friendly rendering framework for geometric and arbitrarily large volumetric data and demonstrated its wide applicability in a few use cases. In the future, we will introduce better volume rendering algorithms (e.g. [16, 13]) and investigate their applicability to VR settings. Furthermore, we are looking into providing support for out-of-core mesh data, e.g. using sparse voxel octrees [17, 18]. On the application side, we are driving forward projects in microscope control (see Section 4.3) and VR/AR augmentation of lab experiments.

6 Software and Code Availability

scenery, its source code, and a variety of examples are available at github.com/scenerygraphics/scenery and are licensed under the LGPL 3.0 license. A preview of the Fiji plugin sciview is available at github.com/scenerygraphics/sciview.

Acknowledgements

The authors thank Curtis Rueden, Martin Weigert, Robert Haase, Vladimir Ulman, Philipp Hanslovsky, Wolfgang Büschel, Vanessa Leite, and Giuseppe Barbieri for additional contributions. The authors also thank Loïc Royer, Philipp Keller, Nicola Maghelli, and Michael Weber for allowing use of their datasets.

References