Connecting X3D to a State of the Art Rendering Engine
We connect X3D to the state of the art OGRE renderer using our prototypical x3ogre implementation. In doing so, we compare both on a conceptual level, highlighting similarities and differences. Our implementation allows swapping X3D concepts for OGRE concepts and vice versa. We take advantage of this to analyse current shortcomings in X3D and propose X3D extensions to overcome them.
X3D (Daly and Brutzman, 2007) is an open standard for 3D graphics with precisely defined semantics. Scenes stored in the X3D format can be parsed using standard XML parsers, and the files are usually self-contained, which makes X3D a good choice for interchange. However, the standardization process means that new rendering techniques and concepts appear in X3D only with considerable delay. The available X3D-based rendering engines like X3DOM (Behr et al., 2009) or InstantReality (Fraunhofer IGD, 2016) therefore offer custom extensions to overcome this. Yet those are only scarcely used, as they impede interchange.
This paper therefore takes a different approach and instead connects X3D to an existing state of the art rendering engine. This allows using the file formats of the underlying renderer where X3D falls short. As these formats are neither standardized nor required to provide legacy compatibility, they can evolve faster and at the same time are better optimized for rendering. Compared with creating an X3D extension, this approach is more flexible, as it allows replacing even core X3D concepts. This flexibility in turn gives better insights into how to improve X3D itself.
Here, we focus on the presentation aspect of X3D, as it is arguably the most important part of a 3D file format. When using the X3D DOM profile (Behr et al., 2009), most application logic resides outside of the X3D format, and recent extensions (e.g. (Schwenk et al., 2012) and (Sturm et al., 2016)) specifically target material representation. Therefore, we neglect the Scripting and Sensor parts of X3D and concentrate on the Rendering and Geometry components.
For the underlying renderer we chose the Object-Oriented Graphics Rendering Engine (OGRE) (Streeting, 2012). While other rendering engines like Unreal 4 and Unity recently gained more popularity, OGRE is available royalty-free under a permissive open-source license, making it a better fit for research as well as allowing deeper inspection.
OGRE is not bound to a specific rendering API like OpenGL or DirectX, but rather provides high-level concepts that map to any of those. It has been under development since 1999 and was used in AAA games like Torchlight 1/2 as well as in industrial training applications. The rendering concepts are therefore mature and proven, while having been developed independently of X3D. This makes a comparison especially interesting.
The comparison is performed using our prototypical implementation called "x3ogre", which allows loading X3D scenes in OGRE as well as using OGRE concepts inside X3D. Besides the comparison, one use-case of the prototype is to visually enhance existing X3D scenes without requiring invasive changes.
By utilizing Emscripten (Zakai, 2011), OGRE-based applications can be deployed on the Web (https://ogrecave.github.io/ogre/emscripten/). Internally, a GLES3 (Khronos, 2016) based renderer is used, which supports WebGL (Khronos, 2017) while also providing forward compatibility with WebGL2. Therefore, the results are not limited to the C/C++ ecosystem, but can also be transferred to the web context.
This paper is structured as follows: in section 2 the mapping of X3D concepts to the corresponding OGRE concepts is discussed, while section 3 takes the reverse direction, describing the integration of OGRE concepts into X3D. Based on the preceding discussions, we then propose several X3D extensions in section 4. Finally, we conclude with section 5, giving a summary of our results and discussing limitations and future directions.
As the first step we identify the OGRE concepts corresponding to X3D. Here we focus on a subset of the X3D Interchange profile (ISO/IEC 19775-1, 2012) which roughly corresponds to the X3D DOM profile introduced by (Behr et al., 2009).
We will briefly discuss the high level objects and then focus on the compound geometry and material objects in more detail.
The correspondences for the core X3D objects are:
the X3D Scene node corresponds to the SceneManager. In X3D only one Scene node per file is allowed, while OGRE supports several active SceneManager instances, of which only one is rendered by a given Camera.
the X3D Transform node corresponds to two nested SceneNodes. SceneNodes only store a translation vector, an orientation quaternion and a scaling vector. Therefore two SceneNodes are needed to support the center property of a Transform. The X3D scaleOrientation property can only be implemented for rotations by multiples of 90°, as there is no shearing component in OGRE.
Note that OGRE does not have a native serialization format for SceneNodes and thus benefits by using X3D here.
the X3D Geometry node corresponds to the Mesh object. As in X3D this is a compound object. It can be serialized either as XML or in a binary file format.
the X3D Appearance node corresponds to the Material object. Again this is a compound object. It is serialized in a custom file format resembling the VRML97 encoding.
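The two-SceneNode mapping for the Transform node above can be made explicit. Writing the X3D Transform (ignoring scaleOrientation) as a product of translation T, center offset C, rotation R and scale S matrices, the product regroups into exactly two translate-rotate-scale nodes. This is one possible decomposition, assuming the inner node inherits the outer node's orientation and scale (OGRE's default inheritance):

```latex
M = T(t)\,T(c)\,R(r)\,S(s)\,T(-c)
  = \underbrace{T(t+c)\,R(r)\,S(s)}_{\text{outer SceneNode}}
    \cdot
    \underbrace{T(-c)}_{\text{inner SceneNode}}
```

since T(t)T(c) = T(t+c); the geometry is then attached to the inner node.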
Additionally we identified the following correspondences needed to support animations. These are given for completeness as the concepts mostly map one-to-one:
the X3D TimeSensor maps to AccumulateControllerFunction.
the X3D ScalarInterpolator maps to LinearControllerFunction.
the X3D CoordinateInterpolator maps to VertexAnimationTrack.
the X3D PositionInterpolator and OrientationInterpolator correspond to NodeAnimationTrack.
Note that for supporting animations in X3D one does not need to implement the full X3D event model. Instead it is sufficient to rewrite specific ROUTE statements in a compositional way as shown in listing 1.
This approach requires allowing specific source nodes as children of the target nodes and is therefore not compatible with arbitrary ROUTE statements. However, it enabled the correct loading of all existing X3D scenes we tried.
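To sketch the idea (the containerField spellings in the compositional form are assumptions for illustration, not the exact syntax of Listing 1), compare a conventional ROUTE-based animation with its compositional rewrite:

```xml
<!-- Conventional X3D: animation wired up via ROUTE statements -->
<TimeSensor DEF='Timer' cycleInterval='2' loop='true'/>
<PositionInterpolator DEF='Mover' key='0 0.5 1'
    keyValue='0 0 0  0 2 0  0 0 0'/>
<Transform DEF='Target'> ... </Transform>
<ROUTE fromNode='Timer' fromField='fraction_changed'
       toNode='Mover' toField='set_fraction'/>
<ROUTE fromNode='Mover' fromField='value_changed'
       toNode='Target' toField='set_translation'/>

<!-- Hypothetical compositional rewrite: the source nodes become
     children of their target nodes, so no event model is needed -->
<Transform>
  <PositionInterpolator containerField='translation'
      key='0 0.5 1' keyValue='0 0 0  0 2 0  0 0 0'>
    <TimeSensor containerField='fraction'
        cycleInterval='2' loop='true'/>
  </PositionInterpolator>
  ...
</Transform>
```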
OGRE uses the binary .mesh file format, which can be transparently converted to and from XML for inspection and debugging.
The Shape node corresponds to a submesh node and the IndexedTriangleSet node corresponds to a geometry node. Note that OGRE allows interleaved storage of the position and normal vertex attributes, while X3D does not. This is useful, as OGRE also stores additional vertex attributes like bone assignments for skeletal animation (the HAnimSegment node in X3D) in the .mesh file.
Listing 3 shows a single submesh definition, but generally a .mesh file stores multiple submeshes. This allows the definition of multi-material meshes, where each submesh is rendered with one material. In X3D one has to use multiple Shapes and then use a Group node for linking them.
Furthermore, the submeshes can reference the same vertices to avoid data duplication. This representation maps directly to the low-level graphics APIs (in OpenGL: glDrawRangeElements and glBufferData).
For X3D this concept can be reproduced by DEF/USE of the Coordinate node. Yet due to the complexity of the system it is not obvious whether X3D viewers will share memory between the corresponding Shapes.
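For reference, sharing vertex data between two Shapes via DEF/USE looks as follows in standard X3D; whether the viewer actually shares the underlying GPU buffers is implementation-dependent:

```xml
<Shape>
  <Appearance><Material diffuseColor='1 0 0'/></Appearance>
  <IndexedTriangleSet index='0 1 2'>
    <Coordinate DEF='SharedVerts'
        point='0 0 0  1 0 0  0 1 0  1 1 0'/>
  </IndexedTriangleSet>
</Shape>
<Shape>
  <Appearance><Material diffuseColor='0 0 1'/></Appearance>
  <IndexedTriangleSet index='1 3 2'>
    <!-- references the same Coordinate node instead of copying it -->
    <Coordinate USE='SharedVerts'/>
  </IndexedTriangleSet>
</Shape>
```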
In OGRE, the used material can only be referenced from the .mesh file, while in X3D it is typically defined inline with the geometry data. This separation is similar to the concept in glTF (Robinet and Cozzi, 2013) or the X3D BinaryGeometry node (Behr et al., 2012). Mesh files do not specify any compression. However, OGRE assets are usually distributed in zipped packs containing geometry, materials and textures, which offer compression at a higher level.
OGRE uses the custom .material script format for material definition, which resembles the classic VRML encoding. For comparability we will use the VRML encoding for the following X3D material examples.
OGRE materials support multiple techniques, which are in turn composed of several rendering passes. The list of techniques allows picking the appropriate one at runtime based on hardware support, LOD level etc., while defining multiple passes can be useful for advanced rendering techniques like rendering hair.
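This structure can be sketched as a minimal .material script (an illustrative example following the OGRE 1.x fixed-function syntax, not taken from the paper):

```
material ExampleMaterial
{
    // picked at runtime when the hardware supports it
    technique
    {
        pass
        {
            diffuse  0.8 0.1 0.1 1.0
            specular 0.5 0.5 0.5 32
        }
        // a second pass could add e.g. an extra shading layer
    }
    // a simpler fallback technique could follow here
}
```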
Both OGRE and X3D material definitions reflect the simple Blinn-Phong shading model (Blinn, 1977). However state of the art rendering usually involves more sophisticated lighting models like the Cook-Torrance microfacet reflection model (Cook and Torrance, 1981) — optionally combined with normal mapping and deferred shading.
This requires a more flexible material definition. (Schwenk et al., 2010) therefore introduced the X3D CommonSurfaceShader node that uses the uber-shader (Hargreaves, 2005) approach. While offering more flexibility than the traditional materials, its monolithic nature requires changing existing materials whenever a new rendering technique must be incorporated; the CommonSurfaceShader had to be updated to incorporate Physically Based Shading (PBS) (Schwenk et al., 2012).
In contrast, OGRE provides the High Level Material System (HLMS) for defining custom materials. This system is built around the idea of passing opaque properties to a named template shader.
However, while (Sturm et al., 2016) rely on a predefined shader, OGRE just forwards the given parameters to a shader named "PBS" (HLSL on DirectX, GLSL on OpenGL). This allows users to define custom materials with arbitrary parameters. Figure (a) shows a grid of spheres rendered with custom PBS shading, while the ground plane is rendered using classical Blinn-Phong shading.
In Section 2 we identified the X3D concepts inside OGRE, which allowed loading X3D files. This section, on the other hand, focuses on bringing OGRE concepts into X3D. First we describe the connection for concepts which exist in both X3D and OGRE, where the OGRE counterparts usually offer more flexibility. Then we describe the mapping of the OGRE compositor system, for which X3D has no counterpart.
The general approach taken here is to redirect existing X3D concepts to their OGRE counterparts as identified in section 2. In contrast to creating new X3D nodes, this simplifies the implementation and allows first evaluating the benefits before introducing new concepts into X3D. For instance, to use interleaved vertex attribute storage in X3D, we just redirect the Geometry node to an OGRE .mesh file and override the material with the X3D Appearance. Listing 8 shows the different redirection options we support in x3ogre.
Both the X3D DEF/USE system and the OGRE resource system rely on strings for identifying resources. The implementation is therefore straightforward, and we can maintain compatibility with existing X3D files by only using OGRE resources if the USE lookup inside the current X3D file fails.
Compared to the ExternalGeometry node proposed by (Limper et al., 2014), this approach is easier to implement and does not require invasive changes to existing X3D files. However, it implicitly relies on an externally defined resource pool, and overriding individual mesh properties (e.g. color) is not possible.
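Such a redirection could look like the following sketch; the url attribute spelling is a hypothetical illustration (the actual options are given in Listing 8):

```xml
<Shape>
  <!-- the material still comes from X3D as usual -->
  <Appearance><Material diffuseColor='0.2 0.6 0.2'/></Appearance>
  <!-- the geometry is redirected to an OGRE .mesh resource;
       the attribute name here is an assumed spelling -->
  <Geometry url='"Sphere.mesh"'/>
</Shape>
```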
A compositing system allows specifying full-screen effects to be applied to the image after a scene has been rendered, similar to layer effects in image editing software. A simple example would be desaturation to create a black & white image without having to change the scene materials. Figure (b) shows a more complex effect that involves adding noise and vignetting at the image borders.
X3D does not have an explicit compositing concept. Post-processing is usually implemented by redirecting the rendering to a RenderedTexture, which is then rendered using a custom Appearance on a full-screen quad. Combining multiple layers is only possible using the MultiTexture node. However, it offers only a fixed set of blending modes that correspond to an OpenGL 1.2 extension (Khronos, 2006). Chaining post-processing effects is not feasible, as there is no mechanism in X3D to specify in which order RenderedTextures should be processed.
OGRE, on the other hand, uses the explicit .compositor file format (see listing 9), which resembles the .material format but instead specifies the rendering and routing of full-screen render targets.
Using this format it is possible to describe simple effects more concisely than in X3D, while also allowing complex effects like prepending an invert effect to "Night Vision" (figure (b)) or even implementing deferred shading.
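A minimal .compositor script (an illustrative sketch in the OGRE 1.x syntax, not the paper's Listing 9; the material name is a placeholder) renders the scene into an intermediate target and then draws it through a post-processing material:

```
compositor Desaturate
{
    technique
    {
        // intermediate full-screen render target
        texture rt0 target_width target_height PF_R8G8B8

        // route the scene rendering into rt0
        target rt0
        {
            input previous
        }

        // draw rt0 through a grayscale material onto the screen
        target_output
        {
            input none
            pass render_quad
            {
                material Example/GrayscaleMat
                input 0 rt0
            }
        }
    }
}
```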
To enable compositing effects in X3D, we added a new MFString field compositors to the Viewpoint node, which allows specifying a compositor chain for that specific view (see Listing 10).
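With this field, enabling a chain of compositors becomes a one-line change to the scene; the compositor names below are placeholders:

```xml
<Viewpoint position='0 0 10'
    compositors='"Invert" "NightVision"'/>
```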
Based on the discussion in the preceding sections we now propose two conceptual extensions to X3D that allow implementing the respective OGRE concepts directly instead of merely referencing them.
The first extension is the explicit notion of compositing by introducing a Compositing System inside X3D. The second extension is the definition of user defined appearances.
Following the OGRE notion of a compositor, we propose adding the following X3D nodes to allow the definition of a compositing effect directly inside a X3D file:
Compositor for defining a named Compositor and the according scope. For defining the intermediate render layers, we use the RenderedTexture extension.
CompositorPass for explicitly stating the rendering order of RenderTextures. For specifying the shader and referencing input textures, we can use the Appearance node without modifications.
CompositorOutput for explicitly marking the sink of a compositor graph. While one could use a special target on a CompositorPass, this makes automated error checking easier.
Listing 10 shows a sample usage of the above nodes. The implemented effect is a separated Gaussian Blur filter which requires two render passes to be executed in the correct order.
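The two-pass blur could be expressed with the proposed nodes roughly as follows; the node names follow the proposal above, but the field spellings and nesting are assumptions for illustration, not the paper's Listing 10:

```xml
<Compositor DEF='GaussianBlur'>
  <!-- pass 1: blur the scene horizontally into an intermediate layer -->
  <CompositorPass DEF='Horizontal'>
    <RenderedTexture DEF='BlurH' dimensions='512 512'/>
    <Appearance>
      <ComposedShader USE='HBlurShader'/>
    </Appearance>
  </CompositorPass>
  <!-- pass 2: blur the intermediate layer vertically -->
  <CompositorPass DEF='Vertical'>
    <RenderedTexture USE='BlurH'/>
    <Appearance>
      <ComposedShader USE='VBlurShader'/>
    </Appearance>
  </CompositorPass>
  <!-- explicitly marks the sink of the compositor graph -->
  <CompositorOutput USE='Vertical'/>
</Compositor>
```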
Bringing user-defined Appearances to X3D eases using specialized rendering techniques and allows bringing together the proposed material extensions (Schwenk et al., 2012) (Sturm et al., 2016) under a unified concept.
To this end we propose the new CustomApperance node that references a named ComposedShader and can be used instead of the classical Appearance node. The CustomApperance node contains any number of Texture and field nodes that are forwarded to the referenced shader.
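Usage could then look like the following sketch; the shader reference, texture and field names are placeholders, not defined by the proposal text:

```xml
<Shape>
  <CustomApperance>
    <!-- references a named ComposedShader defining the technique -->
    <ComposedShader USE='PBS'/>
    <!-- textures and fields are forwarded to the referenced shader -->
    <ImageTexture url='"albedo.png"'/>
    <field name='roughness' type='SFFloat' value='0.4'/>
  </CustomApperance>
  <Sphere/>
</Shape>
```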
We have connected X3D to OGRE in a bidirectional manner allowing X3D scenes to be loaded by OGRE as well as using OGRE resources in X3D scenes. By comparing both on a conceptual level we could identify shortcomings in X3D and propose extensions to overcome those.
However, we only extended X3D at a coarse level; one could improve the existing X3D concepts by comparing the implementations in detail. For instance, one could improve the geometry representation in X3D by allowing interleaved storage of vertex attributes and explicit buffer sharing via a notion of submeshes.
Finally, it should be evaluated to what extent the identified shortcomings also apply to the glTF format.
The implementation presented in this work is available open source at https://github.com/paroj/x3ogre.