scenery: Flexible Virtual Reality Visualization on the Java VM

06/16/2019
by Ulrik Günther, et al.
MPI-CBG

Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data from analysis of such data or simulations. Visualization is often the first step in making sense of data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualizations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers are becoming invaluable tools. In this work we introduce scenery, a flexible VR/AR visualization framework for the Java VM that can handle mesh and large volumetric data, containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and example applications, such as its use in VR for microscopy, in the biomedical image analysis software Fiji, or for visualizing agent-based simulations.



1 Related work

A particularly popular framework in scientific visualization is VTK [Hanwell:2015iv]: VTK offers rendering of both geometric and volumetric data, using an OpenGL 2.1 renderer. However, VTK's complexity has grown over the years and its API has become increasingly complex, making it difficult to change internals without breaking existing applications. A more recent development is VTK.js, which brings VTK to web browsers. ClearVolume [Royer:2015tg] is a visualization toolkit tailored to high-speed, volumetric microscopy and supports multi-channel/multi-timepoint data, but focuses solely on volumetric data and does not support VR/AR. MegaMol [grottel2014] is a special-purpose framework focused on efficient rendering of large numbers of discrete particles that provides a thin abstraction layer over the graphics API for the developer. 3D Viewer [Schmid:2010gm] handles general-purpose image visualization tasks and supports multi-timepoint data, but offers neither out-of-core volume rendering nor VR/AR.

In out-of-core rendering (OOCR), the rendering of volumetric data that does not fit into main or graphics memory, existing software packages include Vaa3D/Terafly [peng2014extensible, Bria:2016fl], which is written with applications like neuron tracing in mind, and BigDataViewer [pietzsch2015bigdataviewer], which performs by-slice rendering of large datasets, powered by the ImgLib2 library [Pietzsch:2012img]. The VR neuron tracing tool [Usher:2017bda] also supports OOCR, but lacks support for multiple timepoints and is not customizable. Inviwo [jonsson2019] supports OOCR and interactive development, but does not support the overlay of multiple volumetric datasets in a single view.

In the field of biomedical image analysis, various commercial packages exist: Arivis, Amira, and Imaris (see arivis.com/en/imaging-science/imaging-science, fei.com/software/amira/, and imaris.oxinst.com), as well as syGlass [pidhorskyi2018], support out-of-core rendering and are scriptable by the user. Arivis, Imaris, and syGlass offer rendering to VR headsets, while Amira can run on CAVE systems. Imaris provides limited Fiji and Matlab integration. Because these packages are closed-source, their flexibility is ultimately limited (e.g., for changing rendering methods or adding new input devices).

2 scenery

With scenery, we provide a flexible framework for developing visualization prototypes and applications on systems ranging from desktop screens and VR/AR headsets (like the Oculus Rift or HTC Vive) to distributed setups. scenery is written in Kotlin, a language for the JVM that requires less boilerplate code and has more functional constructs than Java itself. This increases developer productivity while maintaining 100% compatibility with existing Java code. scenery runs on Windows, Linux, and macOS. scenery uses the low-level Vulkan API for fast and efficient rendering and can fall back to an OpenGL 4.1-based renderer (the Vulkan renderer uses the LWJGL Vulkan bindings, see lwjgl.org, while the OpenGL renderer uses JOGL, see jogamp.org).

scenery is designed around two concepts: a scene graph that organizes the scene into nodes, and a hub that organizes all subsystems — e.g. rendering, input, statistics — and enables communication between them. scenery's application architecture is depicted in Fig. 1. scenery's subsystems are only loosely coupled, meaning they can work fully independently of each other. The loose coupling enables isolated testing of the subsystems, which is how we currently reach 65% code coverage (the remaining 35% is mostly code that requires additional hardware and is therefore harder to test in an automated manner).

Figure 1: Overview of scenery’s architecture.
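To illustrate how scene graph and hub fit together in practice, the following minimal sketch is modeled on the examples shipped with scenery: an application subclasses SceneryBase, registers a renderer with the hub, and adds nodes to the scene. The class names (SceneryBase, Box, PointLight, DetachedHeadCamera, GLVector) exist in scenery, but exact constructor and method signatures have changed between releases, so this should be read as an assumption-laden outline rather than the definitive API.

import cleargl.GLVector
import graphics.scenery.*
import graphics.scenery.backends.Renderer

// Minimal sketch of a scenery application: one box, one light, one camera.
// Signatures are approximate and may differ between scenery versions.
class MinimalExample : SceneryBase("MinimalExample", windowWidth = 512, windowHeight = 512) {
    override fun init() {
        // The renderer is just another subsystem registered with the hub,
        // mirroring how input handling or statistics are registered.
        renderer = hub.add(Renderer.createRenderer(hub, applicationName, scene,
            windowWidth, windowHeight))

        val box = Box(GLVector(1.0f, 1.0f, 1.0f))
        box.material.diffuse = GLVector(0.8f, 0.2f, 0.2f)
        scene.addChild(box)

        val light = PointLight(radius = 15.0f)
        light.position = GLVector(0.0f, 2.0f, 2.0f)
        scene.addChild(light)

        val cam = DetachedHeadCamera()
        cam.position = GLVector(0.0f, 0.0f, 5.0f)
        cam.perspectiveCamera(50.0f, windowWidth.toFloat(), windowHeight.toFloat())
        scene.addChild(cam)
    }
}

fun main() {
    MinimalExample().main()
}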

3 Highlighted features

3.1 Realtime rendering on the JVM

Historically, the JVM has not been the go-to target for realtime rendering: for a long time, it had the reputation of being slow and memory-hungry. However, since the HotSpot VM was introduced in Java 6, this is less true, and state-of-the-art just-in-time compilers like the ones used in Java 12 have become very good at generating automatically vectorized code (for this project, we have measured the timings of performance-critical parts of code, such as 4x4 matrix multiplication; compared to hand-tuned, vectorized AVX512 code, the native code generated by the JVM's JIT compiler is about a factor of 3-4 slower). The JVM is widely used, provides excellent dependency management via the Maven or Gradle build tools, and offers efficient, easy-to-use abstractions for, e.g., multithreading or UIs on different operating systems. Additionally, with the move to low-overhead APIs like Vulkan, pure-CPU performance is becoming less important. In the near future, Project Panama (see openjdk.java.net/projects/panama) will introduce JVM-native vectorization primitives to support CPU-heavy workloads. These primitives will work in a way similar to those provided by .NET.

Another convenience provided by the JVM is scripting: via the JVM's scripting extensions, scenery can be scripted using its REPL with third-party languages like Python, Ruby, and Clojure. In the future, GraalVM (see graalvm.org) will enable polyglot code on the JVM, e.g., by ingesting LLVM bytecode directly [bonetta2018]. scenery has already been tested with preview builds of both GraalVM and Project Panama.
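As a rough illustration of what JVM scripting support involves, the snippet below uses the standard javax.script (JSR-223) API to expose an object to a scripting language and evaluate a script against it. This is a generic sketch of the underlying JVM mechanism, not scenery's actual REPL implementation; the engine name and the bound variable are assumptions made for the example.

import javax.script.ScriptEngineManager

// Generic JSR-223 scripting sketch: expose a JVM object to a scripting language.
// Illustrates the mechanism only; scenery's REPL wiring differs in detail.
fun runScript(scene: Any) {
    // "python" resolves to e.g. Jython if it is on the classpath; JRuby, Clojure,
    // and other engines can be selected by name in the same way.
    val engine = ScriptEngineManager().getEngineByName("python")
        ?: error("no scripting engine named 'python' found on the classpath")

    // Make the scene object visible to the script under the name "scene".
    engine.put("scene", scene)

    // The script can now call into the JVM object graph directly.
    engine.eval("print(scene)")
}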

3.2 Out-of-core volume rendering

scenery supports volume rendering of multiple, potentially overlapping volumes that are placed into the scene via arbitrary affine transforms. For out-of-core direct volume rendering of large volumes, we develop and integrate the BigVolumeViewer library, which builds on the pyramidal image data structures and in-memory caching of large image data from BigDataViewer [pietzsch2015bigdataviewer]. We augment this with a GPU cache tier for volume blocks, implemented using a single large 3D texture. This cache texture is organized into small, uniformly sized blocks. Each texture block stores a particular block of the volume at a particular level in the resolution pyramid, padded by one voxel on each side to avoid bleeding from neighboring blocks during trilinear interpolation. The mapping between texture and volume blocks is maintained on the CPU.

To render a particular view of a volume, we determine a base resolution level such that screen resolution is matched for the nearest visible voxel. Then, we prepare a 3D lookup texture in which each voxel corresponds to a volume block at base resolution. Each voxel in this lookup texture stores the coordinates of a block in the cache texture, as well as its resolution level relative to base, encoded as an RGBA tuple. For each visible volume block, we determine the optimal resolution by its distance to the viewer. If the desired block is present in the cache texture, we encode its coordinates in the corresponding lookup texture voxel. Otherwise, we enqueue the missing cache block for asynchronous loading through the CPU cache layer of BigDataViewer. Newly loaded blocks are inserted into the cache texture, where the cache blocks to replace are determined by a least-recently-used strategy that is also maintained on the CPU. For rendering, currently missing blocks are substituted by lower-resolution data if it is available from the cache.
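The CPU-side bookkeeping described above, mapping volume blocks to cache-texture slots and evicting in least-recently-used order, can be sketched as follows. The data structures and names (BlockKey, CacheSlot, BlockCache) are hypothetical stand-ins chosen for this illustration, not the actual classes used by BigVolumeViewer or scenery.

// Hypothetical sketch of the CPU-side block cache bookkeeping (names are illustrative).
data class BlockKey(val x: Int, val y: Int, val z: Int, val level: Int)
data class CacheSlot(val x: Int, val y: Int, val z: Int)   // block coordinates in the cache texture

class BlockCache(slots: List<CacheSlot>) {
    private val freeSlots = ArrayDeque(slots)
    // An access-ordered LinkedHashMap iterates from least- to most-recently used.
    private val resident = LinkedHashMap<BlockKey, CacheSlot>(16, 0.75f, true)

    /** Returns the cache slot for [key] if resident, otherwise enqueues an asynchronous load. */
    fun lookup(key: BlockKey, enqueueLoad: (BlockKey) -> Unit): CacheSlot? {
        resident[key]?.let { return it }   // the access updates the LRU order
        enqueueLoad(key)                   // loaded via BigDataViewer's CPU cache layer
        return null                        // renderer falls back to lower-resolution data meanwhile
    }

    /** Called once a block has been uploaded to the cache texture; evicts the LRU block if full. */
    fun insert(key: BlockKey): CacheSlot {
        val slot = freeSlots.removeFirstOrNull() ?: run {
            val victim = resident.keys.first()   // least recently used key
            resident.remove(victim)!!
        }
        resident[key] = slot
        return slot
    }
}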

Once the lookup texture is prepared, volume rendering proceeds by raycasting and sampling volume values with varying step size along the ray, adapted to the viewer distance. To obtain each volume sample, we first downscale its coordinate to fall within the correct voxel in the lookup texture. A nearest-neighbor sample from the lookup texture yields a block offset and scale in the cache texture. The final value is then sampled from the cache texture with the accordingly translated and scaled coordinate. With this approach, it is straightforward to raycast through multiple volumes simultaneously, simply by using multiple lookup textures. It is also easy to mix in smaller volumes, which are simply stored as 3D textures and do not require indirection via lookup textures. To adapt to the varying number and type of visible volumes, we generate shader sources dynamically at runtime. Blending of volume and mesh data is achieved by reading scene depth from the depth buffer for early ray termination, thereby hiding volume values that are behind rendered geometry.
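Per sample, the address translation from volume coordinates into the cache texture thus amounts to a scale and a translation. The toy function below mirrors that arithmetic in Kotlin purely for illustration; in scenery this logic lives in the dynamically generated GLSL, and all names here are invented for the example.

// Toy illustration of the per-sample address translation (runs in GLSL in practice).
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun rem(s: Float) = Vec3(x % s, y % s, z % s)
}

// What the nearest-neighbor lookup-texture fetch would provide for the sample's block:
// the block's position in the cache texture and its resolution scale relative to base.
data class LookupEntry(val blockOffsetInCache: Vec3, val relativeScale: Float)

fun cacheCoordinate(sampleCoord: Vec3, blockSize: Float, entry: LookupEntry): Vec3 {
    val withinBlock = sampleCoord % blockSize   // position inside the base-resolution block
    return withinBlock * entry.relativeScale + entry.blockOffsetInCache
}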

3.3 Code-shader communication and reflection

In traditional OpenGL (before version 4.1), parameter data like vectors, matrices, etc. are communicated to shaders via uniforms, which are set one-by-one. In scenery, Uniform Buffer Objects (UBOs) are used instead of single uniforms. UBOs lead to lower API overhead and enable variable update rates. Custom properties defined for node classes that need to be communicated to the shader are annotated in the class definition with the @ShaderProperty annotation; scenery picks up annotated properties automatically and serializes them. See Listing 1 for an example of how properties can be communicated to the shader, and Listing 2 for the corresponding GLSL code defining the UBO in the shader. Procedurally generated shaders can use a hash map storing these properties.

For all values stored in shader properties, a hash is calculated, and they are only communicated to the GPU when the hash changes. Currently, all elementary types (ints, floats, etc.), as well as matrices and vectors thereof, are supported.

// Define a matrix and an integer property
@ShaderProperty var myMatrix: GLMatrix
@ShaderProperty var myIntProperty: Int
// For a dynamically generated shader: Store properties as hash map
@ShaderProperty val shaderProperties = HashMap<String, Any>()
Listing 1: Shader property example
layout(set = 5, binding = 0)
uniform ShaderProperties {
    int myIntProperty;
    mat4 myMatrix;
};
Listing 2: GLSL code example for shader properties
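The hash-based change detection described above can be pictured with the following sketch, which caches a combined hash of all property values and triggers an upload only when it changes. This is an illustration of the mechanism, not scenery's actual serialization code, and the class and function names are invented for the example.

// Illustrative sketch of hash-based change detection for shader properties (hypothetical names).
class ShaderPropertyUploader(private val upload: (Map<String, Any>) -> Unit) {
    private var lastHash: Int? = null

    /** Re-serializes the properties to the GPU only if their combined hash changed. */
    fun updateIfChanged(properties: Map<String, Any>) {
        val hash = properties.entries.fold(17) { acc, (name, value) ->
            31 * (31 * acc + name.hashCode()) + value.hashCode()
        }
        if (hash != lastHash) {
            upload(properties)   // e.g. write the values into the node's uniform buffer
            lastHash = hash
        }
    }
}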

Determination of the correct memory layout required by the shader is done by our Java wrapper for the shader reflection library SPIRV-cross and the GLSL reference compiler glslang (see github.com/KhronosGroup/SPIRV-cross, and github.com/scenerygraphics/spirvcrossj for our wrapper, spirvcrossj). This provides a user- and developer-friendly API.

Furthermore, scenery supports shader factories — classes that dynamically produce shaders to be consumed by the GPU — and uses them, e.g., when multiple volumetric datasets with arbitrary alignment need to be rendered in the same view.

3.4 Custom rendering pipelines

In scenery, the user can write custom shaders and assign them on a per-node basis in the scene graph. In addition, scenery allows the definition of fully customizable rendering pipelines. These pipelines are defined declaratively in a YAML file describing render targets, render passes, and their contents. Render passes can have properties that are adjustable at runtime, e.g., for adjusting the exposure of an HDR rendering pass. Rendering pipelines can be exchanged at runtime, and do not require a full reload of the renderer — e.g., already loaded textures do not need to be reloaded.
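To give a flavor of this declarative style, a pipeline description might look roughly like the sketch below. The keys, pass names, and attachment formats shown are illustrative only and are not meant to reproduce scenery's actual YAML schema.

# Illustrative sketch of a declarative rendering pipeline; field names are hypothetical.
rendertargets:
  HDRBuffer:
    attachments:
      color: RGBA_Float16
      depth: Depth32

renderpasses:
  Scene:
    type: geometry
    shaders: ["Default.vert.spv", "Default.frag.spv"]
    output: HDRBuffer
  Tonemap:
    type: quad
    shaders: ["FullscreenQuad.vert.spv", "Tonemap.frag.spv"]
    inputs: [HDRBuffer]
    output: Viewport
    properties:
      Exposure: 1.0        # runtime-adjustable pass property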

Custom rendering pipelines let the user or developer switch quickly between different pipelines, enabling rapid prototyping of new ones. We hope that this flexibility stimulates the creation of custom pipelines, e.g., for non-photorealistic rendering, or for novel applications such as Neural Scene (De)Rendering [Nalbach:2016wr, wu2017].

3.5 VR and preliminary AR support

Figure 2: A scientist interactively explores a 500 GiB multi-timepoint dataset of the development of an embryo of the fruit fly Drosophila melanogaster in the CAVE at the CSBD using a scenery-based application. Dataset courtesy of Loïc Royer, MPI-CBG/CZI Biohub, and Philipp Keller, HHMI Janelia Farm [Royer:2016fh].

scenery supports rendering to VR headsets via the OpenVR/SteamVR library, as well as rendering on distributed setups such as CAVEs or Powerwalls. The modules supporting different VR devices can be exchanged quickly and at runtime, as all of them implement a common interface.

In the case of distributed rendering, one machine is designated as the master, to which multiple clients can connect. We use the same hashing mechanism as described in Section 3.3 to determine which node changes need to be communicated over the network, use Kryo (see github.com/EsotericSoftware/Kryo) for fast serialization of the changes, and finally ZeroMQ for low-latency and resilient network communication. A CAVE usage example is shown in Fig. 2.
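The master-to-client path can be sketched as follows: the master serializes a changed node state with Kryo and publishes the bytes on a ZeroMQ PUB socket that clients subscribe to. The payload type, port, and topology shown here are illustrative assumptions, not scenery's actual wire format.

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.Output
import org.zeromq.SocketType
import org.zeromq.ZContext
import java.io.ByteArrayOutputStream

// Hypothetical payload for illustration; scenery's actual wire format differs.
data class NodeUpdate(val name: String = "", val position: FloatArray = floatArrayOf())

fun main() {
    val kryo = Kryo()
    kryo.register(NodeUpdate::class.java)
    kryo.register(FloatArray::class.java)

    ZContext().use { context ->
        // Master side: publish serialized scene changes; clients connect with SUB sockets.
        val publisher = context.createSocket(SocketType.PUB)
        publisher.bind("tcp://*:6666")

        val update = NodeUpdate("box", floatArrayOf(0.0f, 1.0f, 0.0f))

        // Serialize with Kryo and send the raw bytes over ZeroMQ.
        val buffer = ByteArrayOutputStream()
        Output(buffer).use { output -> kryo.writeObject(output, update) }
        publisher.send(buffer.toByteArray())
    }
}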

We have also developed an experimental compositor that enables scenery to render to the Microsoft HoloLens.

3.6 Remote rendering and headless rendering

To support downstream image analysis and usage settings where rendering happens on a powerful but non-local computer, scenery can stream rendered images out, either as raw data or as an H.264 stream, which can be saved to disk or streamed over the network via RTP. All produced frames are buffered and processed in a separate coroutine, such that rendering performance is not impacted.

scenery can run in headless mode, creating no windows, which enables both remote rendering on machines without a screen (e.g., in a cluster setup) and easier integration testing. Most examples provided with scenery can be run automatically (see the ExampleRunner test) and store screenshots for comparison. In the future, broken builds will be automatically identified by comparison against known good images.

4 Example Applications

4.1 VR control for microscopy

We also investigated whether users tend to suffer from motion sickness when using our interfaces and found a very low average SSQ score [kennedy1993].

4.2 Agent-based simulations

We have utilized scenery to visualize agent-based simulations with large numbers of agents. By adapting an existing agent- and physics-based simulation toolkit [brevis], we have increased the number of agents that can be efficiently visualized by a factor of 10. This improvement allows previous studies of swarms with evolving behaviors [harrington2017competitive, gold2014feedback] to be revisited under conditions that may give rise to new levels of emergent behavior. In Figure 3, we show 10,000 agents using flocking rules inspired by [reynolds1987flocks] to collectively form a sphere.

Figure 3: Agent-based simulation with 10,000 agents collectively forming a sphere.

4.3 sciview

Figure 4: Out-of-core dataset of a D. melanogaster embryo visualised with scenery/sciview. The image is a composite of three different volumetric views, shown in different colors. The transfer function on the left was adjusted to highlight volume boundaries. Dataset courtesy of Michael Weber, Huisken Lab, MPI-CBG/Morgridge Institute.

On top of scenery, we have developed sciview, a plugin for embedding in Fiji/ImageJ2 [Rueden:2017ij2]. We hope it will boost the use of VR technology in the life sciences by enabling the user to quickly prototype visualizations and add new functionality. In sciview, many aspects of the UI are automatically generated, including the node property inspector and the list of Fiji plugins and commands applicable to the currently active dataset. sciview has been used in a recent lightsheet microscopy pipeline [daetwyler2019multi]. In Supplementary Video 2, we show sciview rendering three overlaid volumes of a fruit fly embryo, a still frame of which is shown in Figure 4.

5 Conclusions and Future Work

We have introduced scenery, an extensible, user- and developer-friendly rendering framework for geometric and volumetric data, and demonstrated its applicability in several use cases.

In the future, we will introduce better volume rendering algorithms (e.g. [Kroes:2012bo, igouchkine2017]) and investigate their applicability to VR settings. Furthermore, we are looking into providing support for out-of-core mesh data, e.g. using sparse voxel octrees [Kampe:2013dp, Laine:EffectiveSVO]. On the application side, we are driving forward projects in microscope control (see Section 4.1) and VR/AR augmentation of laboratory experiments.

6 Software and Code Availability

scenery, its source code, and a variety of examples are available at github.com/scenerygraphics/scenery and are licensed under the LGPL 3.0 license. A preview of the Fiji plugin sciview is available at github.com/scenerygraphics/sciview.

Acknowledgements

The authors thank C. Rueden, M. Weigert, R. Haase, V. Ulman, P. Hanslovsky, W. Büschel, V. Leite, and G. Barbieri for additional contributions, and L. Royer, P. Keller, N. Maghelli, and M. Weber for allowing use of their datasets.

References