CAVE is a room-sized virtual reality (VR) system that was developed in the early 1990s at the University of Illinois at Chicago. In a CAVE room, the viewer is surrounded by wall screens and a floor screen. Stereo images are projected onto these surfaces. Tracking systems capture the viewer's head position and direction. The wide viewing angle provided by the surrounding screens on the walls and floor generates a high-quality immersive VR experience. The viewer can interact with three-dimensional (3D) virtual objects using a portable controller known as a wand, which is also tracked by the tracking system.
CAVE systems have been used for scientific visualization from the first system until the latest generation (StarCAVE). For example, visualization applications in CAVE systems have been developed to analyze general computational fluid dynamics (CFD), turbulence simulations, CFD of molten iron, CFD of wind turbines, seismic simulations, meteorological simulations, biomedical fluid simulations, magnetic resonance imaging, geomagnetic fields, archaeological studies, and geophysical surveys.
Recently, a new CAVE system was installed at the Integrated Research Center (IRC) at Kobe University. This CAVE system was named "π-CAVE" after the IRC's location on Port Island (PI). Fig. 1 shows a front view of the π-CAVE, while Fig. 2 shows the configuration of its projectors and mirrors.
The original CAVE system had a cubic geometry with a side length of 3 m. A straightforward extension to enlarge the VR space of a CAVE is to use a rectangular parallelepiped shape. More sophisticated configurations have been proposed for advanced CAVE systems, such as StarCAVE, but we used the rectangular parallelepiped approach for π-CAVE to maximize the VR volume in the space allowed in the IRC building. The side lengths of π-CAVE are 3 m × 3 m × 7.8 m. As far as we know, this is the largest CAVE system in Japan.
We have developed several VR applications for the scientific visualization of large-scale simulation data. Of these, Virtual LHD was our first VR visualization application. This application was developed for the CompleXcope CAVE system installed at the National Institute for Fusion Science, Japan. Currently, Virtual LHD is used to visualize the magnetohydrodynamic (MHD) equilibrium state of a nuclear fusion experiment. We also developed a general-purpose visualization application, VFIVE [15, 16, 17], for 3D scalar/vector field data. Recently, we added a new visualization method to VFIVE at π-CAVE for visualizing magnetic field lines frozen into a fluid. The original VFIVE accepted only a structured grid data format as input, but an extension of VFIVE for unstructured grid data was developed at Chuo University. The development of VFIVE and its applications are summarized in our recent papers [20, 21].
In addition to improvements of VFIVE, we also developed the following four types of novel CAVE visualization applications for π-CAVE: (1) IonJetEngine, for VR visualization of particle-in-cell (PIC) plasma simulations of an ion jet engine in space probes; (2) RetinaProtein, for molecular dynamics (MD) simulations of proteins; (3) SeismicWave, for simulations of seismic wave propagation; and (4) CellDivision, for three-dimensional time-sequence microscope images of mouse embryos. All of these new CAVE visualization programs were written using OpenGL and CAVElib. We started developing these visualization applications while the construction of π-CAVE was underway.
Several problems occur if multiple CAVE visualization applications are executed one after another. First, the command to launch the first application has to be typed in using the keyboard beside the CAVE room. The user then enters the CAVE room wearing stereo glasses. After analyzing the data from the first application in the CAVE, the user leaves the CAVE room and takes off the glasses. Next, the user types in the command to launch the second application and re-enters the CAVE room wearing the stereo glasses. These steps have to be repeated if there are many applications. This inconvenience occurs because the CAVE is used as a single-task system.
To resolve this inconvenience, we developed an application launcher for CAVE. This program, Multiverse, is a CAVE application written with CAVElib and OpenGL. Multiverse can launch and control other VR applications. These sub-applications are depicted in the CAVE's VR space as 3D icons or panels. If the user in the CAVE room touches one of the panels with the wand, they are "teleported" to the corresponding VR application.
2 π-CAVE system
π-CAVE has a rectangular parallelepiped configuration with side lengths of 3 m × 3 m × 7.8 m (Fig. 3). The large width (7.8 m) is one of the characteristic features of this CAVE system. The large volume of π-CAVE allows several people to stand on the floor at the same time, without any mutual occlusion of the screen views in the room.
Like many other CAVE systems, π-CAVE has four screens: three wall screens (front, right, and left) and a floor screen. Soft, semi-transparent screens are used on the walls, and the images are rear-projected onto them. The floor is a hard screen onto which the stereo image is projected from the ceiling. Two projectors are used to generate the front wall image (Fig. 4), with an edge blending technique applied to the interface between the two images. Another pair of projectors is used for the floor screen. Each side wall screen (right and left) is served by a single projector. In total, six projectors are used.
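Conceptually, edge blending attenuates each projector's brightness across the shared strip so that the two overlapped images sum to a uniform intensity. The following is a minimal sketch of such a blend weight, assuming a smoothstep ramp and an illustrative overlap width; the actual π-CAVE calibration (ramp shape, gamma correction, overlap size) is not described here.

```cpp
#include <cassert>
#include <cmath>

// Brightness weight for one projector at horizontal position x (0..1 across
// its image), where the rightmost `overlap` fraction is shared with the
// neighboring projector. A smoothstep fade keeps the summed intensity of the
// two overlapping images approximately constant. Illustrative assumption,
// not the actual -CAVE blending function.
double blendWeight(double x, double overlap) {
    if (x <= 1.0 - overlap) return 1.0;           // outside the overlap zone
    double t = (x - (1.0 - overlap)) / overlap;   // 0..1 across the overlap
    return 1.0 - (3.0 * t * t - 2.0 * t * t * t); // smoothstep fade-out
}
```

The mirrored weight on the neighboring projector (fading in with the same smoothstep) then makes the two contributions sum to one everywhere in the strip.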
Each projector (Christie WU12K-M), shown in Fig. 5 with its counterpart mirror, has a resolution of 1920 × 1200 pixels and a brightness of 10,500 lumens. An optical motion tracking system (Vicon) is used for head and wand tracking. Ten cameras with 640 × 480 resolution are installed on top of the wall screens. A commonly used API (Trackd) provides the interface to CAVElib.
Two computer systems are used for computation and rendering in π-CAVE. One is a Linux PC (HP Z800) with 192 GB of shared memory. Three sets of GPUs (NVIDIA QuadroPlex) are used for real-time stereoscopic image generation for the six projectors. The other is a Windows PC cluster system.
We developed an application launcher, Multiverse, for the π-CAVE system. At the start of the Multiverse environment, the viewer in π-CAVE stands in a virtual model of the IRC building in which π-CAVE is installed. The 3D CAD model data of the IRC building (Fig. 6) are loaded into Multiverse and rendered in 3D in real time. This is Multiverse's start-up environment, known as World. In the World mode of Multiverse, the viewer can walk through the building. Fig. 7(a) shows a snapshot in which the user is approaching the IRC building. In Fig. 7(b), the viewer is (literally) walking into the (virtual) IRC building. Some fine structures of the building, including the virtual π-CAVE shown in Fig. 7(c) and (d), are also loaded from CAD data files.
There are two methods of showing the list of applications loaded into Multiverse. The first is to use "ribbons" that connect the wand to the application icons. In the ribbons mode, the user in the World finds one or more curved wires that start from the wand tip. Each wire is a guide that leads the user to a Gate.
A Gate is an entrance to the VR world of the corresponding application. If multiple visualization applications are loaded, Multiverse automatically generates the corresponding number of Gates, all connected to the user (or the wand) by guide wires (Fig. 8). If the user walks or "flies" to a place in front of a Gate, they will find an explanatory movie near the Gate (see the rectangular panel at the center of the blue, torus-shaped Gate in Fig. 8). This movie explains the application that will be executed when the user selects the Gate. To select an application, the user (literally) walks through its Gate, whereupon the corresponding VR application program is loaded and the user feels as if they have been "teleported" to its visualization space. Each such VR world is known as a Universe in Multiverse.
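Detecting that the user has "walked through" a Gate amounts to testing whether the tracked head position crossed the disc spanned by the Gate's opening between two frames. The following is a sketch under assumed geometry (Gate center, unit normal, and opening radius); the actual Multiverse trigger condition may differ.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// True if the tracked head moved from p0 to p1 through the disc spanned by a
// torus-shaped Gate (center c, unit normal n, opening radius r).
// Illustrative geometry test, not the published Multiverse code.
bool crossedGate(Vec3 p0, Vec3 p1, Vec3 c, Vec3 n, double r) {
    double d0 = dot(sub(p0, c), n);              // signed distance of p0 to plane
    double d1 = dot(sub(p1, c), n);              // signed distance of p1 to plane
    if (d0 * d1 >= 0.0) return false;            // both positions on same side
    double t = d0 / (d0 - d1);                   // plane-crossing parameter
    Vec3 hit = {p0.x + t*(p1.x-p0.x), p0.y + t*(p1.y-p0.y), p0.z + t*(p1.z-p0.z)};
    Vec3 off = sub(hit, c);
    double radial2 = dot(off, off) - dot(off, n)*dot(off, n);
    return radial2 <= r * r;                     // inside the ring's opening
}
```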
Another method of showing the list of applications loaded into Multiverse is to use a virtual elevator. When the user enters the elevator in the (virtual) IRC building, they are automatically taken upward by the elevator into the sky above the IRC building. The spatial scale of the view changes rapidly from the building, to the city, country, and finally the globe. The user finds that they are "floating" in space surrounded by stars. Several panels then appear in front of the viewer. Each panel represents a visualization application (Fig. 9).
When the user touches one of the panels, the corresponding VR application is launched and the user is “teleported” to the selected visualization Universe.
In short, Multiverse is composed of the World and several Universes. World is a type of 3D desktop environment, and a Universe is a visualization application loaded into Multiverse.
In the program code, each Universe is simply a standard CAVE application with a unified interface to the Multiverse class.
A Universe is an instance of a class derived from a virtual class known as Vacuum. Vacuum represents an empty space and has only an interface to the Multiverse class, through member functions whose names convey their roles to readers who are familiar with CAVElib programming.
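This architecture can be pictured as follows. The member-function names used here (init, update, draw) are our assumptions, modeled on the usual CAVElib callback structure (one-time initialization, per-frame computation, per-eye/per-wall display); the paper does not list the actual names, and the real Multiverse code will differ.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical sketch of the Vacuum interface: an empty space that only
// exposes CAVElib-style callbacks to Multiverse.
class Vacuum {
public:
    virtual ~Vacuum() {}
    virtual void init() {}    // called once, when the Universe is entered
    virtual void update() {}  // per-frame computation (simulation, tracking)
    virtual void draw() {}    // rendering, called for each eye/wall
};

// A Universe is an ordinary CAVE application wrapped in this interface.
class GeomagField : public Vacuum {
public:
    void init() override   { frames = 0; }
    void update() override { ++frames; }
    void draw() override   { /* OpenGL calls would go here */ }
    int frames = 0;
};

// Multiverse holds the loaded Universes and forwards the CAVE callbacks to
// whichever one is currently active.
class Multiverse {
public:
    void load(std::unique_ptr<Vacuum> u) { universes.push_back(std::move(u)); }
    void enter(std::size_t i) { active = i; universes[i]->init(); }
    void frame() { universes[active]->update(); universes[active]->draw(); }
private:
    std::vector<std::unique_ptr<Vacuum>> universes;
    std::size_t active = 0;
};
```

The design point is that "teleporting" between Universes is then just a matter of switching which object receives the callbacks.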
In this section, we describe five applications, or Universes, which we developed as the first applications for the Multiverse environment.
We converted VFIVE, described in section 1, into a Universe class. VFIVE is a general-purpose visualization tool, so any vector/scalar field can be visualized in the Multiverse framework, provided the data conform to VFIVE's input data format.
Fig. 10 shows a snapshot of an example of a Universe based on VFIVE, known as GeomagField. The input data used by GeomagField were from a geodynamo simulation performed by one of the authors and his colleagues [24, 25, 26]. The purpose of this simulation was to understand the mechanism that generates the Earth's magnetic field (the geomagnetic field).
Fig. 11 shows another snapshot of GeomagField, in which two VFIVE visualization methods were applied. The temperature distribution was visualized by volume rendering (colored orange to yellow). The 3D arrow glyphs show the flow velocity vectors around the wand position. The arrows followed the wand's motion, changing their directions and lengths (vector amplitudes) in real time. The white balls are tracer particles that also visualized the flow velocity. These balls were highlighted in a spotlight, a cone-shaped region whose apex was at the wand. This visualization method is known as Snowflakes in VFIVE. The viewer can change the focus of the flow visualization by redirecting the spotlight via wand movements.
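The spotlight highlighting reduces to an angle test of each tracer particle against the wand's axis. The sketch below uses hypothetical names and omits details the actual VFIVE code may add (such as a maximum range for the cone).

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

// True if particle p lies inside the cone-shaped "spotlight" with apex at the
// wand position, axis along the (unit) wand direction, and the given
// half-angle in radians. Illustrative sketch of the Snowflakes selection.
bool inSpotlight(P3 p, P3 apex, P3 axis, double halfAngle) {
    P3 d = {p.x - apex.x, p.y - apex.y, p.z - apex.z};
    double len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len == 0.0) return true;                   // particle at the apex
    double cosTo = (d.x*axis.x + d.y*axis.y + d.z*axis.z) / len;
    return cosTo >= std::cos(halfAngle);           // within the cone angle
}
```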
The second example of a Universe is known as IonJetEngine and a snapshot is shown in Fig. 12.
This Universe visualized a plasma PIC simulation of the ion jet engine of a space probe. The positions of the particles (ions and electrons) were represented by balls (yellow for ions and blue for electrons). The velocity distribution of the jet was visualized as the set of individual particle motions. A 3D model of the virtual space probe, from which the plasma jet beams were ejected, is also shown in Fig. 12.
Fig. 13 shows a Universe known as RetinaProtein, which visualized a molecular dynamics simulation of rhodopsin, a protein in the human retina. At the start of this Universe, the viewer observed a 3D model of a human (see the top panel of Fig. 13). As the viewer approached the model's face, the fine structures of the eyes became visible until the MD simulation visualization appeared.
The fourth Universe is SeismicWave. It visualized a simulation of seismic wave propagation, performed by Prof. Furumura of the University of Tokyo, by animated volume rendering (see Fig. 14). For this Universe, we implemented rapid volume rendering in CAVEs based on the 3D texture mapping technique. The full details of this implementation will be reported elsewhere.
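In texture-based volume rendering, the volume is uploaded as a 3D texture and sampled on a stack of view-aligned planes composited back to front with alpha blending. As a hedged sketch of only the slice placement (the GL setup with glTexImage3D, texture coordinates, and blending is omitted, and the slice count here is illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Depths of nSlices view-aligned sampling planes between zNear and zFar,
// ordered back to front so that standard alpha blending composites them
// correctly. Each plane would then be drawn as a quad carrying 3D texture
// coordinates into the volume.
std::vector<double> sliceDepths(double zNear, double zFar, std::size_t nSlices) {
    std::vector<double> depths(nSlices);
    for (std::size_t i = 0; i < nSlices; ++i) {
        double t = (i + 0.5) / nSlices;          // cell-centered sampling
        depths[i] = zFar + t * (zNear - zFar);   // farthest slice first
    }
    return depths;
}
```

Because the GPU interpolates the 3D texture trilinearly across each slice, this approach is much faster than CPU ray casting, which matters for the real-time frame rates a CAVE requires.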
The final Universe described here is CellDivision; a snapshot is shown in Fig. 15. The target data for this visualization were not simulation data but microscope images of live mouse embryos, provided by Dr. Yamagata of Osaka University. The time sequence of microscope images was visualized as an animated volume rendering using the same tool as SeismicWave in the previous subsection.
In many CAVE systems, VR applications are executed as single tasks, so the user has to type in commands one after another outside the CAVE room. To make the CAVE a more convenient tool for scientific visualization, we developed an application launcher known as Multiverse. Multiverse comprises a World and several Universes. World corresponds to the desktop of a PC operating system; in it, the user can select visualization applications by touching icons floating in the World. Through this virtual touch interface, the specified application program is launched and the user is "teleported" to another VR space containing the corresponding visualization application, known as a Universe. We developed five Universes that can be launched from the Multiverse environment. Multiverse was designed as a general application framework, so it can load and control other Universes. A user can jump back to the World and switch to another Universe at any time, from any Universe.
During the implementation of Multiverse, we developed several new fundamental tools and methods for the CAVE environment, such as a fast volume renderer, a 3D model (CAD) data loader/renderer, and a 2D movie file loader/renderer. Details of these tools and methods will be reported elsewhere.
We thank the undergraduate students at our laboratory at Kobe University (Toshiaki Morimoto, Yasuhiro Nishida, Yuta Ohno, Tomoki Yamada, and Mana Yuki) for contributing to the development of Multiverse. The plasma particle simulation data were provided by Prof. H. Usui, Dr. Y. Miyake, and Mr. A. Hashimoto (Kobe University). The MD simulation data were provided by Prof. S. Ten-no and Dr. Y. Akinaga. The simulation data for seismic wave propagation were provided by Prof. T. Furumura (University of Tokyo). The microscope images were provided by Dr. K. Yamagata (Osaka University).
This work was supported by JSPS KAKENHI Grant Numbers 23340128 and 30590608, and also by the Takahashi Industrial and Economic Research Foundation.
-  Cruz-Neira C, Sandin D J and DeFanti T A 1993 Proc. SIGGRAPH '93 pp 135–142
-  DeFanti T, Dawe G, Sandin D, Schulze J, Otto P, Girado J, Kuester F, Smarr L and Rao R 2009 Future Generation Computer Systems 25 pp 169–178
-  Jaswal V 1997 Proc. Visualization ’97 pp 301–308
-  Tufo H M, Fischer P F, Papka M E and Blom K 1999 Proc. ACM/IEEE Conf. Supercomputing 1999 pp 62–76
-  Fu D, Wu B, Chen G, Moreland J, Tian F, Hu Y and Zhou C Q 2010 Proc. 14th Int. Heat Transfer Conf., pp 1–8
-  Yan N, Okosun T, Basak S K, Fu D, Moreland J and Zhou C Q 2011 Proc. ASME 2011 Int. Design Engineering Technical Conf.& Computers and Information in Engineering Conf., pp 1–8
-  Chopra P, Meyer J and Fernandez A 2002 IEEE Visualization, 2002, pp 497–500
-  Ziegeler S, Moorhead R J, Croft P J and Lu D 2001 Proc. Conf. Visualization ’01, pp 489–493
-  Forsberg A, Laidlaw D, van Dam A, Kirby R, Karniadakis G and Elion J 2000 Proc. Conf. Visualization '00, pp 457–460
-  Zhang S, Demiralp C, Keefe D, DaSilva M, Laidlaw D, Greenberg B, Basser P, Pierpaoli C, Chiocca E and Deisboeck T 2001 Proc. Visualization, 2001, pp 437–584
-  Bidasaria H B 2005 Proc. 43rd Annual Southeast Regional Conf. on ACM-SE 43, p 355
-  Acevedo D, Vote E, Laidlaw D and Joukowsky M 2001 Proc. Visualization, pp 493–597
-  Lin A Y m, Novo A, Weber P P, Morelli G and Goodman D 2011 Advances in Visual Computing (Springer Berlin Heidelberg) pp 229–238
-  Kageyama A, Hayashi T, Horiuchi R, Watanabe K and Sato T 1998 Proc. 16th Int. Conf. Numerical Simulation Plasmas (Santa Barbara, CA, USA) pp 138–142
-  Kageyama A, Tamura Y and Sato T 2000 Prog. Theor. Phys., Suppl. 138 pp 665–673
-  Ohno N and Kageyama A 2007 Phys. Earth Planet. Inter. 163 pp 305–311
-  Ohno N and Kageyama A 2010 Comput. Phys. Comm. 181 pp 720–725
-  Murata K and Kageyama A 2011 Plasma Fusion Res. 6 2406023–1–5
-  Kashiyama K, Takada T, Yamazaki T, Kageyama A, Ohno N and Miyachi H 2009 Proc. 9th Int. Conf. Construction Applications of Virtual Reality (Sydney) pp 1–6
-  Kageyama A and Ohno N submitted to Int. J. Modeling Simulation & Scientific Comput.
-  Kageyama A, Ohno N, Kawahara S, Kashiyama K and Ohtani H submitted to Int. J. Modeling Simulation & Scientific Comput.
-  Bierbaum A, Just C, Hartling P, Meinert K, Baker A and Cruz-Neira C 2001 Proc. IEEE Virtual Reality 2001, pp 89–96
-  Meno D, Kageyama A and Masada Y 2012 Proc. Int. Conf. Simulation Technology pp 387–389
-  Kageyama A, Miyagoshi T and Sato T 2008 Nature 454 pp 1106–1109
-  Miyagoshi T, Kageyama A and Sato T 2010 Nature 463 pp 793–796
-  Miyagoshi T, Kageyama A and Sato T 2011 Phys. Plasmas 18 p 072901
-  Akinaga Y, Jung J and Ten-no S 2011 Phys. Chem. Chem. Phys. 13 pp 14490–14499
-  Furumura T, Kennett B L N and Koketsu K 2003 Bull. Seismological Soc. America 93 pp 870–881