User guide to TIM, a ray-tracing program for forbidden ray optics

03/25/2011 · Dean Lambert et al., University of Glasgow

This user guide outlines the use of TIM, an interactive ray-tracing program with a number of special powers. TIM can be customised and embedded into internet pages, making it suitable not only for research but also for its dissemination.


1 Introduction

TIM is a ray-tracing program that can model very general light-ray-direction changes, so general, in fact, that they can lead to physically impossible light-ray fields [1]. We wrote TIM as a tool for our research on windows that perform precisely those light-ray-direction changes. They can do this by compromising other aspects of the light field, but this compromise can be unnoticeably small. We call such components METATOYs [1, 2]. The name TIM started off as an acronym for The Interactive METATOY, but is now beginning to morph into the name of the head with the luscious lips that appears when TIM starts up (Fig. 1).

Figure 1: Content of TIM’s Java-application window after startup. This screen shot shows TIM running on a 2.4 GHz Intel MacBook.

We built a number of other special features into TIM. These include the abilities to

  • focus on non-planar surfaces,

  • model “teleporting” surfaces,

  • visualise light-ray trajectories,

  • render scenes for 3D viewing.

A more detailed explanation of these specialities, and of TIM’s more common features, can be found in Ref. [3].

This user guide introduces TIM’s interactive user interface. A different way of using TIM, namely by writing code that calls its Java methods, is described in Ref. [3], which also covers extending TIM’s capabilities and, more generally, its underlying code structure and algorithms. The remainder of this user guide is exclusively about controlling TIM through the interactive user interface, and any mention of “TIM” refers to the interactive version.

TIM (in its interactive incarnation) can be run as an applet or as a Java application. The TIM applet can be embedded in web pages (see http://www.physics.gla.ac.uk/Optics/play/TIM/). The Java application comes packaged as a JAR file, which can be downloaded from http://www.physics.gla.ac.uk/Optics/play/TIM/, and which can be run directly in any good operating system. The interface of the Java-application version is almost identical to that of the applet version, but additionally allows saving of the calculated images, something the applet version cannot do because of security restrictions designed to protect users from malicious applets.

When it starts up, TIM renders a default scene with default view parameters (position, aperture size, quality, etc.) (Fig. 1). Section 2 explains how to alter this scene, and then render it. Section 3 outlines TIM’s coordinate system, and how to select and interpret the different view positions. Section 4 is about focussing in general, and section 5 is specifically about focussing on non-planar surfaces. Section 6 explains how to visualise light-ray trajectories. Section 7 describes the way scene objects are parametrised in TIM. Finally, section 8 discusses the “Teleporting” surface property, which allows, amongst other things, simulation of optical geometrical transformations [4].

TIM requires version 1.6 or higher of the Java Virtual Machine (JVM).

2 Defining the scene and rendering

After TIM has started up, it spends a few moments rendering the default scene and then displays the rendered view (Fig. 1). In addition to the chequered floor and blue sky, the default scene consists of Tim’s distinctive head. The default scene also contains an eye at the position of the default camera (section 4); by default, this eye is invisible.

Figure 2: “Edit scene” dialog. The central list of scene objects contains the scene objects in the default scene.

The scene can be altered by clicking on the “Edit scene” button. TIM now shows a list of scene objects (Fig. 2).

Figure 3: Default lattice behind a window that rotates the light-ray direction through 90° around the window normal.

As an example of defining a scene in TIM, we alter the scene in a way that helps us understand the effect of an example of a METATOY, namely a ray-rotating window [5]. First we remove the head by clicking on “Tim’s head” in the list of scene objects and then clicking the “Remove” button. Similarly, we remove the scene object “invisible wall”. We add a rectangular window to the scene by selecting “Rectangle” in the “Create new…” drop-down menu. This brings up a dialog in which the rectangle’s geometrical and optical parameters can be edited; by clicking the “OK” button without altering any of the values, we accept the default values of the parameters, which are suitable for our current purposes. The default optical properties are those of an idealised METATOY that rotates the direction of light rays through 90° around the surface normal [5]; each light ray emerges from the METATOY with the new direction from the same position where it intersected the surface, but on the other side of it. Next, we add a suitable three-dimensional lattice behind the window. We do this by selecting “Cylinder lattice” from the “Create new…” drop-down menu and then clicking the “OK” button at the bottom right of the dialog that appears, thereby again accepting the default parameter values, which are also suitable for our purposes. We are now back in the “Edit scene” dialog, but the list of scene objects is different from that shown in Fig. 2. By clicking “OK”, we accept the altered scene. Now we are back in TIM’s main screen, with the status line at the bottom saying “Ready to render.” Clicking on the “Render” button renders the altered scene; the result is shown in Fig. 3.
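To make the direction change concrete, here is a minimal, self-contained Java sketch (our own illustration with made-up names, not TIM’s actual code) that rotates a ray direction through an angle alpha around the local surface normal, using Rodrigues’ rotation formula; this is the direction mapping an idealised ray-rotating window performs on every transmitted ray.

    // Illustration only: rotate a ray direction d around a unit surface
    // normal n by an angle alpha, via Rodrigues' rotation formula:
    // d' = d cos(alpha) + (n x d) sin(alpha) + n (n . d) (1 - cos(alpha)).
    public class RayRotation {
        static double[] rotateAboutNormal(double[] d, double[] n, double alpha) {
            double c = Math.cos(alpha), s = Math.sin(alpha);
            double dot = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
            double[] cross = {
                n[1]*d[2] - n[2]*d[1],
                n[2]*d[0] - n[0]*d[2],
                n[0]*d[1] - n[1]*d[0]
            };
            return new double[] {
                d[0]*c + cross[0]*s + n[0]*dot*(1 - c),
                d[1]*c + cross[1]*s + n[1]*dot*(1 - c),
                d[2]*c + cross[2]*s + n[2]*dot*(1 - c)
            };
        }

        public static void main(String[] args) {
            double[] d = {0.3, -0.2, 1.0};   // incident ray direction
            double[] n = {0.0, 0.0, 1.0};    // window normal (unit vector)
            double[] r = rotateAboutNormal(d, n, Math.toRadians(90));
            System.out.printf("rotated direction: (%.2f, %.2f, %.2f)%n", r[0], r[1], r[2]);
        }
    }

For a 90° rotation around the normal (0, 0, 1), the transverse components (dx, dy) of the direction map to (−dy, dx), while the component along the normal is unchanged; the example above prints (0.20, 0.30, 1.00).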

3 Coordinates and views

The tabs at the top of the TIM applet area (Fig. 1) allow the selection of different views. Only one of them, the “Eye view”, is rendered automatically when TIM starts up. To render any view, or to re-render it after the scene and/or view parameters have been changed, select the appropriate tab and click on the “Render” button.

The standard view is “Eye view”. This simulates a camera with its aperture centred at the origin, (0, 0, 0), pointing in the positive z direction. The eye-view camera can be either a pinhole camera or a camera with a finite aperture, leading to blurring of out-of-focus objects (section 4). The camera is configured such that greater x values appear further right, and greater y values appear higher up. As the z axis points away from the camera, the coordinate system is left-handed. TIM uses dimensionless coordinates. Some sense of scale can be gained from the default floor position (in the plane y = −1) and patterning (the tiles are squares of side length 1).

In “Top view”, everything is seen from directly overhead. In other words, “Top view” shows an orthographic projection into the horizontal, i.e. (x, z), plane. Similarly, in “Side view” everything is seen from the positive x direction, so it shows an orthographic projection into the (y, z) plane.

Figure 4: Collection of scene objects, rendered in “Eye view” (top) and in “Anaglyph 3D” view.

The view “Anaglyph 3D” creates an anaglyph image of the scene, designed for viewing with standard red/blue (or red/cyan) anaglyph glasses [6]. An anaglyph image combines two different views of the scene, colour-coded so that each eye sees one view. In TIM, the left and right eyes see views from positions to the left and right of the “Eye view” position, (0, 0, 0). Fig. 4 shows an example of the same scene rendered with the standard “Eye view” and in “Anaglyph 3D” view.
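As an illustration of this colour coding, the following sketch (standard Java imaging, not TIM’s internal code) assembles a red/cyan anaglyph by taking the red channel from the left-eye rendering and the green and blue channels from the right-eye rendering.

    import java.awt.image.BufferedImage;

    public class Anaglyph {
        // Combine two equally-sized renderings into a red/cyan anaglyph.
        static BufferedImage combine(BufferedImage left, BufferedImage right) {
            int w = left.getWidth(), h = left.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int rgb = (left.getRGB(x, y) & 0x00FF0000)    // red from the left eye
                            | (right.getRGB(x, y) & 0x0000FFFF);  // green and blue from the right eye
                    out.setRGB(x, y, rgb);
                }
            }
            return out;
        }
    }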

Figure 5: Random-dot autostereogram of the default scene shown in Fig. 1. The plane behind Tim’s head, which is invisible in the other views, makes it easier to perceive the scene’s depth.

Finally, the view “Autostereogram 3D” creates a random-dot autostereogram (“Magic eye”) image [7] of the scene. Details of how random-dot autostereograms work can be found elsewhere [3]. Note that autostereograms often work better for scenes that are not too complex and in which the depth range is quite limited. For this latter reason the default scene contains a transparent plane (“invisible wall”) behind Tim’s head that is invisible in the other views (it is not only transparent, but also throws no shadow), but which is nevertheless visible in the “Autostereogram 3D” view, as this view ignores surface properties (see Fig. 5).

The parameters of a specific view can be altered by clicking on the tab corresponding to that view and then clicking on the “Edit view” button. All views allow the “Anti-aliasing quality” to be selected. TIM implements anti-aliasing by calculating the image at a size greater than its screen size. Before displaying the image on the screen, the colour of each screen pixel is calculated by averaging over a number of pixels of the calculated image. Table 1 lists the size at which the image is calculated for the different anti-aliasing qualities. Note that anti-aliasing quality “Normal” calculates one image pixel per screen pixel, and so produces a sharp but aliased image. The better anti-aliasing-quality settings calculate 2 × 2 (“Good”) or 4 × 4 (“Great”) image pixels per screen pixel, and respectively take 4 and 16 times longer to render. There are also settings (“Bad” and “Rubbish”) which calculate the image at a reduced size; these are intended to provide previews.
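The averaging step can be illustrated with a short sketch (our code, not TIM’s): the image is calculated at n times the screen size in each dimension, and each screen pixel is the average of the corresponding n × n block of calculated pixels (n = 2 for “Good”, n = 4 for “Great”).

    // Sketch of the supersampling idea described above (illustration only).
    public class AntiAliasing {
        // calc holds the oversized calculated image, packed as 0xRRGGBB ints.
        static int[] downsample(int[] calc, int calcWidth, int calcHeight, int n) {
            int w = calcWidth / n, h = calcHeight / n;
            int[] screen = new int[w * h];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int r = 0, g = 0, b = 0;
                    for (int dy = 0; dy < n; dy++) {
                        for (int dx = 0; dx < n; dx++) {
                            int rgb = calc[(y * n + dy) * calcWidth + (x * n + dx)];
                            r += (rgb >> 16) & 0xFF;
                            g += (rgb >> 8) & 0xFF;
                            b += rgb & 0xFF;
                        }
                    }
                    int m = n * n;   // number of averaged image pixels
                    screen[y * w + x] = ((r / m) << 16) | ((g / m) << 8) | (b / m);
                }
            }
            return screen;
        }
    }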

“Anaglyph 3D” allows editing of additional parameters: “Centre of view”, the position where the centres of the fields of view of the left and right eyes intersect; “Eye separation”, a vector along the separation between the positions of the left and right eyes; and “Colour”, which switches between different algorithms for calculating the anaglyph image, one giving a colour impression, the other a monochromatic impression.

anti-aliasing quality           Rubbish   Bad   Normal   Good    Great
image pixels per screen pixel   < 1       < 1   1        2 × 2   4 × 4
Table 1: Anti-aliasing-quality settings and the size at which the image is calculated before being displayed, expressed as calculated image pixels per screen pixel. “Bad” and “Rubbish” calculate reduced-size preview images.

4 Focussing and aperture

In top view and side view, all objects are sharp (“in focus”). These views can be understood as limiting cases of pinhole-camera views. (As a pinhole camera moves further away, and the field of view becomes correspondingly smaller, it becomes “more orthographic”. In the limit of infinite camera distance and correspondingly zero field of view, the camera is orthographic.)

aperture size   Pinhole   Small   Medium   Large   Huge
radius          0         0.025   0.05     0.1     0.2
Table 2: Aperture-size settings and corresponding aperture radii.
Figure 6: Dialog for editing the “Eye view”. This can be accessed by selecting the “Eye view” tab and then clicking on the “Edit view” button.

“Eye view” can also simulate a pinhole camera. Unlike “Top view” and “Side view”, it can additionally simulate a camera with a lens that has a circular aperture of finite size. The aperture size can be adjusted in the dialog for changing the parameters of “Eye view” (Fig. 6). Table 2 lists the possible aperture sizes. TIM can directly visualise the aperture size as the pupil size of a stylised eye (Fig. 7).

Figure 7: The eye, seen in a mirror (a “Rectangle” scene object with surface type “Reflective”), with different aperture (pupil) sizes, namely “Small” (top) and “Huge” (bottom). In both cases, the eye was focussed on a plane chosen such that the pupil plane, which lies at an optical path length of twice the eye-mirror distance from the eye, was in focus. To increase the contrast between the eye’s iris and pupil, the ambient brightness (see Ref. [3]) was temporarily increased in the TIM source code for this calculation.
blur quality     Rubbish   Bad   Normal   Good   Great
rays per pixel   1         3     10       32     100
Table 3: Blur-quality settings and corresponding number of rays used to calculate the colour of a single pixel.

TIM simulates the effect of a finite-size aperture as follows. Consider the calculation of the colour of light that hits a specific detector pixel. The camera lens creates an image of the detector pixel (see section 5), which means that any light ray that comes from the direction of the 3D position of this image and passes through the lens is redirected by the lens into the detector pixel. When TIM traces a ray from the detector pixel backwards, it utilises the imaging property of the lens by tracing a ray that starts from a random position within the lens’s aperture opening and that points in the direction of the 3D position of the pixel’s image. This is repeated for a number of rays starting at different aperture points, and the RGB brightnesses due to the different rays are averaged.
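The following sketch shows the structure of this procedure (class and method names are our own, and the ray tracer itself is stubbed out, since that is the part TIM provides): sample a uniformly random point on the circular aperture, aim the backwards-traced ray at the 3D image of the pixel, and average the resulting RGB brightnesses.

    import java.util.Random;

    public class ApertureBlur {
        // Backwards ray tracing through the scene; stubbed here.
        static double[] traceRay(double[] start, double[] dir) {
            return new double[] {0.5, 0.5, 0.5};   // placeholder RGB
        }

        // Average raysPerPixel backwards-traced rays, each starting at a
        // uniformly random point on the circular aperture (lens plane z = 0)
        // and aimed at the 3D image position of the detector pixel.
        static double[] pixelColour(double[] imageOfPixel, double apertureRadius,
                                    int raysPerPixel, Random rng) {
            double[] sum = new double[3];
            for (int i = 0; i < raysPerPixel; i++) {
                double r = apertureRadius * Math.sqrt(rng.nextDouble()); // uniform on a disc
                double phi = 2 * Math.PI * rng.nextDouble();
                double[] start = {r * Math.cos(phi), r * Math.sin(phi), 0};
                double[] dir = new double[3];
                double len = 0;
                for (int c = 0; c < 3; c++) {
                    dir[c] = imageOfPixel[c] - start[c];
                    len += dir[c] * dir[c];
                }
                len = Math.sqrt(len);
                for (int c = 0; c < 3; c++) dir[c] /= len;
                double[] rgb = traceRay(start, dir);
                for (int c = 0; c < 3; c++) sum[c] += rgb[c] / raysPerPixel;
            }
            return sum;
        }
    }

Because every sampled ray is aimed at the same image position, scene points at that position are rendered sharply, while points elsewhere contribute different colours to different rays and so appear blurred.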

To understand how this leads to focussing or blurring, consider a scene that consists exclusively of coloured surfaces that are neither reflective nor transmissive. If the backwards-traced rays from one particular detector pixel all end on the same point on the surface of one of the scene objects (they all intersect in one point only if that point is the image of the detector pixel), then the averaged RGB brightness is the colour of this surface point. If the rays do not all end on the same point, then the averaged RGB brightness is the average of the colours of the various surface points where the different rays end. The former case describes the creation of a sharp image of the intersection point; the latter describes blurring.

Ideally, the contribution of every point within the aperture opening is taken into account, but this requires tracing of infinitely many rays. In practice, a finite number of rays are used; the greater the number of rays, the higher the quality of the blur in the rendered image, which is why the number of rays per pixel can be adjusted by altering the “Blur quality” setting in the “Eye view” dialog. Table 3 lists all possible “Blur quality” settings and the corresponding number of rays per pixel.

5 Focussing on non-planar surfaces

One of TIM’s unique features is the ability to focus on non-planar surfaces. We call the surface onto which a camera is focussed its focus surface.

The camera lens forms an image of each detector pixel, and the focus surface is determined by the positions of the images of all detector pixels (section 4). With a standard camera lens, because the detector pixels lie in a plane perpendicular to the lens’s optical axis, the images of all these pixels also lie in a plane perpendicular to the lens’s optical axis (strictly speaking, the images of the detector pixels lie in a plane only approximately, due to lens aberrations [8]). This can be generalised with a perspective-control lens (also called a tilt-shift lens) [9], in which the lens, and with it its optical axis, can be tilted with respect to the plane of the detector, so that the plane of the image of the detector is also tilted with respect to the lens’s optical axis.

TIM can focus on considerably more general surfaces. This works as follows. In section 4 we discussed how blurring is related to imaging by the camera lens. Specifically, the RGB brightness recorded by a specific detector pixel is calculated as the average of the RGB brightnesses of a number of rays that hit this pixel. This involves tracing rays backwards, starting from different positions on the lens aperture and travelling in the direction of the 3D position of the image of the detector pixel. TIM uses this procedure, but with a twist: the 3D position of the image of a detector pixel is not determined by standard imaging with a lens. Instead, TIM defines an additional collection of scene objects, the “focus scene”; establishes the 3D position on the surface of the focus-scene object that a backwards-traced ray would first intersect if the camera was a pinhole camera; and then uses this position as the position of the image of the detector pixel. If the focus scene contains only a plane a distance d in front of the camera, then focussing is the same as with a traditional lens focussed at distance d. Note that TIM is not concerned with how an actual lens (or other imaging element) would achieve this, if indeed it can be achieved optically at all. It should be possible to achieve it using computational imaging with a plenoptic camera (or light-field camera) [10, 11].
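A sketch of this twist, under assumed, simplified types (not TIM’s API): the “image” of a detector pixel is the first point at which the corresponding pinhole ray intersects any focus-scene object.

    import java.util.ArrayList;
    import java.util.List;

    public class FocusScene {
        interface FocusSurface {
            // Distance t > 0 along the ray start + t * dir to the first
            // intersection, or Double.POSITIVE_INFINITY if there is none.
            double firstIntersection(double[] start, double[] dir);
        }

        final List<FocusSurface> surfaces = new ArrayList<>();

        // 3D position used as the image of the detector pixel whose
        // backwards-traced pinhole ray has the given start and direction.
        double[] imageOfPixel(double[] start, double[] dir) {
            double tMin = Double.POSITIVE_INFINITY;
            for (FocusSurface s : surfaces) {
                tMin = Math.min(tMin, s.firstIntersection(start, dir));
            }
            return new double[] {
                start[0] + tMin * dir[0],
                start[1] + tMin * dir[1],
                start[2] + tMin * dir[2]
            };
        }
    }

A focus scene containing a single plane at distance d then reproduces a conventional lens focussed at d, while spheres or inclined planes in the focus scene reproduce the behaviour shown in Fig. 8.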

Figure 8: Focussing on different surfaces. The scene consists of four spheres of unit radius located at different distances from the camera. In the top image, the focus scene contains two of these spheres and the sky; in the bottom image it consists of a plane that passes through the centre of one of the spheres and that is not perpendicular to the z axis (the floor tiles are in focus along the line where this plane intersects the floor). All four spheres have identical tiled surfaces (edit each sphere, select surface type “Tiled”, and set the “Direction from centre to north pole” in the “Parametrisation” tab; see Fig. 11). The focus plane in the bottom frame is vertical and inclined with respect to the z direction. The image was calculated with “Large” aperture size and “Good” blur quality and anti-aliasing quality.

The following example illustrates how to render scenes focussed on non-planar surfaces. We assume that we have already defined a scene consisting of four spheres at different distances from the camera. For the focus scene to have any effect we need a finite-size aperture; this can be achieved by selecting any aperture size other than “Pinhole”, for example “Large” (see section 4). We can now focus on two of the spheres and the sky as follows. (TIM needs to be able to find an image position for every detector pixel, so we need to define a focus-scene object for those pixels whose image does not lie on one of the two in-focus spheres. This is easily achieved by including the sky, a large sphere completely surrounding the camera, in the focus scene.) First we copy the scene into the focus scene; this is done by clicking on the “Focus on scene” button in the dialog for editing the “Eye view” parameters. Then we remove all objects we do not want to focus on from the focus scene; we do this by clicking on the “Edit focus scene” button and either removing them or making them invisible (by unchecking the “Visible” checkbox). Clicking on “OK” twice takes us back to TIM’s main screen, from where we can now render the image (by clicking on the “Render” button). The result is shown in the top frame of Fig. 8.

It is perhaps worth noting that the focus scene can be edited just like the scene itself. Specifically, we can add objects to the focus scene, for example to focus on an inclined plane, like a camera with a tilt-shift lens (bottom frame of Fig. 8). Only scene objects for which “Visible” is checked are part of the focus scene. TIM ignores the surface properties of focus-scene objects.

6 Visualising light-ray trajectories

TIM has the capability to visualise light-ray trajectories. This is implemented by first tracing those light rays through the scene, recording the start and end points of each straight-line trajectory segment; then adding to the scene a cylinder along each of these segments; and finally rendering the extended scene.
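The bookkeeping behind this can be sketched as follows (the types are our assumptions, not TIM’s classes): trace the ray from intersection to intersection, recording one segment per straight-line piece; each recorded segment then becomes one cylinder in the extended scene.

    import java.util.ArrayList;
    import java.util.List;

    public class TrajectoryTrace {
        static class Intersection {
            double[] point;          // where the ray hits a surface
            double[] newDirection;   // direction after reflection, refraction, or a METATOY
            boolean absorbed;        // true if the trajectory ends here
        }

        interface Scene {
            // First intersection of the ray start + t * dir (t > 0), or null.
            Intersection firstIntersection(double[] start, double[] dir);
        }

        // Returns a list of {start, end} pairs, one per straight-line segment;
        // a thin cylinder is later added to the scene along each of them.
        static List<double[][]> traceSegments(Scene scene, double[] start,
                                              double[] dir, int maxSegments) {
            List<double[][]> segments = new ArrayList<>();
            double[] p = start, d = dir;
            for (int i = 0; i < maxSegments; i++) {
                Intersection hit = scene.firstIntersection(p, d);
                if (hit == null) break;                     // ray leaves the scene
                segments.add(new double[][] {p, hit.point});
                if (hit.absorbed) break;
                p = hit.point;
                d = hit.newDirection;                       // may have been bent by a METATOY
            }
            return segments;
        }
    }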

Figure 9: Visualisation of the trajectories of a cone of light rays through a complex scene that includes a ray-rotating METATOY window and reflecting spheres. (Top) Top view; (bottom) eye view. The frames were calculated with anti-aliasing quality “Good”.

Fig. 9 shows the trajectories of a cone of light rays launched into one of TIM’s standard scenes, which can be chosen by selecting “Original (shiny spheres behind ray-rotating window)” in the “Edit scene” dialog’s “Initialise scene to…” drop-down menu (see Fig. 2). The cone of light rays (“Ray-trajectory cone”) was simply added to the scene with all parameters taking their default values, apart from the start point and the cone angle. Clicking on the “Render” button on the main screen then renders the scene and the light-ray trajectories. Note that this works not only in “Eye view”, but also in the other views (see top frame of Fig. 9). Note also that segments of light-ray trajectories that are seen through METATOYs, reflected off curved mirrors, etc., are distorted accordingly. This can clearly be seen in the bottom frame of Fig. 9, in which many of the straight-line trajectory segments appear bent when seen through the ray-rotating METATOY.

7 Parametrisation of scene objects

Scene objects in TIM all have associated with them coordinate systems that describe their surface. Each point on the surface of a scene object is described by the corresponding values of these coordinates, the point’s surface coordinates. The surface-coordinate systems were chosen to be the most “natural” choice; for example, the surface of a sphere is parametrised in terms of spherical polar coordinates [12], i.e. the polar angle θ and the azimuthal angle φ.

Figure 10: Parametrisation of a scene object. Each point on the surface is described by a pair of surface coordinates, in the picture the spherical coordinates θ and φ, defined with respect to an arbitrary zenith direction and an arbitrary direction of the azimuth axis, i.e. the direction from the centre to the meridian. This parametrisation of the surface has been indicated by covering it in a chequerboard pattern with tiles of side length 1 in both θ and φ. The local surface-coordinate axes, θ̂ and φ̂, together with the local surface normals, n̂, are shown for two points. The sphere has radius 1.
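As a concrete illustration of this parametrisation (our own sketch, not TIM’s code), the surface coordinates (θ, φ) of a point on a sphere can be computed from a chosen zenith direction and azimuth axis as follows.

    public class SphereCoordinates {
        // Surface coordinates (theta, phi) of a point on a sphere, measured
        // with respect to a unit zenith direction and a unit azimuth axis
        // perpendicular to it.
        static double[] surfaceCoordinates(double[] point, double[] centre,
                                           double[] zenith, double[] azimuth) {
            double[] v = normalise(subtract(point, centre));    // centre -> point
            double theta = Math.acos(dot(v, zenith));           // polar angle
            double[] third = cross(zenith, azimuth);            // completes the basis
            double phi = Math.atan2(dot(v, third), dot(v, azimuth)); // azimuthal angle
            return new double[] {theta, phi};
        }

        static double[] subtract(double[] a, double[] b) {
            return new double[] {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
        }
        static double dot(double[] a, double[] b) {
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
        }
        static double[] cross(double[] a, double[] b) {
            return new double[] {a[1]*b[2] - a[2]*b[1],
                                 a[2]*b[0] - a[0]*b[2],
                                 a[0]*b[1] - a[1]*b[0]};
        }
        static double[] normalise(double[] a) {
            double l = Math.sqrt(dot(a, a));
            return new double[] {a[0]/l, a[1]/l, a[2]/l};
        }
    }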

Instead of explaining in detail how surfaces are parametrised, we have built into TIM a number of features that encourage interactive exploration of this parametrisation once a scene has been rendered:

  • hovering with the mouse cursor over any point of the rendered image displays the global coordinates (section 3) of the point P on the surface of the scene object that the corresponding backwards-traced light ray first intersects;

  • clicking on any point of the rendered image displays the surface coordinates of P;

  • right-clicking brings up a popup menu that allows the addition of the axes of the local surface-coordinate system at P; the scene then needs to be re-rendered for the axes to be shown.

Fig. 10 shows an example of a scene object with sets of surface-coordinate-system axes added at two points on its surface.

The parametrisation of surfaces can also be glimpsed by rendering scene objects with a “Tiled” surface. The sphere in Fig. 10 is one example; several other examples are scattered throughout this document, the most useful of which is perhaps Fig. 4.

Figure 11: Panel controlling the parametrisation of a (newly created) sphere.

Most scene objects allow the parametrisation of their surface to be controlled. Fig. 11, for example, shows the panel that controls the parametrisation of a sphere. This panel is accessed by clicking on the “Parametrisation” tab when editing a sphere, which is parametrised in terms of the polar angle θ and the azimuthal angle φ. It allows the directions of different axes to be altered, specifically the “Zenith direction”, which is the direction for which θ = 0, and the “Direction of azimuth axis”, which points from the centre towards φ = 0. It is worth noting that clicking on “OK” performs a few checks on the entered parameters and corrects them if necessary. Specifically, to ensure that the direction of the azimuth axis is perpendicular to the zenith direction, the vector made up of the entered component values is projected into the plane perpendicular to the zenith direction. Furthermore, both vectors are normalised. It is also worth noting that the range of both coordinates can be scaled.
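A sketch of the correction just described (our code, reusing the dot() and normalise() helpers from the sphere-parametrisation sketch in section 7): the entered azimuth axis is projected into the plane perpendicular to the zenith direction, and both vectors are normalised.

    // Project the entered azimuth axis into the plane perpendicular to the
    // zenith direction, then normalise both vectors.
    static double[][] correctAxes(double[] zenithIn, double[] azimuthIn) {
        double[] zenith = normalise(zenithIn);
        double d = dot(azimuthIn, zenith);
        double[] azimuth = normalise(new double[] {
            azimuthIn[0] - d * zenith[0],
            azimuthIn[1] - d * zenith[1],
            azimuthIn[2] - d * zenith[2]
        });
        return new double[][] {zenith, azimuth};
    }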

Additional details on coordinates can be found in Ref. [3].

8 Teleporting surface property

The teleporting surface property allows a light ray to hit the surface of an object and then emerge from the surface of another, linked, object at the corresponding point (the point with the same values of the surface coordinates; see section 7) and with a corresponding direction (calculated according to wave optics [3]). We call the former object the “origin object” and the latter the “destination object”. Note that the link is one-way: if a light ray hits the surface of the destination object, it does not automatically emerge again from the surface of the origin object. (It could do this, i.e. the link could be made two-way, if the origin object was in turn made the destination object’s destination object.) Amongst other effects, the teleporting surface property can be used to simulate geometrical optical transformations [4] such as a polar-to-Cartesian converter [13] (Fig. 12).

Figure 12: View through a polar-to-Cartesian converter. A view through a Cartesian-to-polar converter can be found in Ref. [3], and can, of course, be rendered with TIM.

To make a scene object’s surface teleporting, simply set its “Surface type” to “Teleporting”, and select the destination object from the drop-down list that appears. In the example shown in Fig. 12, the origin object is a square “Rectangle” scene object. Its “Teleporting” surface’s destination object is a circular disc immediately behind the rectangle. The disc’s surface parameters, the polar coordinates r and φ, are both scaled to the range 0 to 1, matching the range of the origin object’s surface parameters.
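The point mapping can be sketched as follows (the interfaces are our assumptions, not TIM’s API); the direction of the emerging ray follows a separate rule (see Ref. [3]) and is omitted here.

    public class Teleport {
        interface ParametrisedSurface {
            double[] surfaceCoordinates(double[] point);        // 3D point -> (u, v)
            double[] pointForCoordinates(double u, double v);   // (u, v) -> 3D point
        }

        // A ray hitting the origin object at surface coordinates (u, v)
        // re-emerges from the destination object at the same (u, v).
        static double[] emergencePoint(ParametrisedSurface origin,
                                       ParametrisedSurface destination,
                                       double[] hitPoint) {
            double[] uv = origin.surfaceCoordinates(hitPoint);
            return destination.pointForCoordinates(uv[0], uv[1]);
        }
    }

Such a mapping only makes sense if the origin and destination surfaces use matching coordinate ranges, which is why, in the example of Fig. 12, both of the disc’s surface coordinates are scaled to the range 0 to 1.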

9 Conclusions

We hope you find TIM useful, (reasonably) user-friendly, and fun. Perhaps you even feel a warm glow in your heart whenever you set eyes on those luscious lips.

References

  • [1] A. C. Hamilton and J. Courtial, “Metamaterials for light rays: ray optics without wave-optical analog in the ray-optics limit,” New J. Phys. 11, 013042 (2009).
  • [2] Wikipedia, “METATOY,” http://en.wikipedia.org/wiki/METATOY.
  • [3] D. Lambert, A. C. Hamilton, G. Constable, H. Snehanshu, S. Talati, and J. Courtial, “TIM, a ray-tracing program for forbidden optics,” submitted for publication (2011).
  • [4] O. Bryngdahl, “Geometrical transformations in optics,” J. Opt. Soc. Am. 64, 1092–1099 (1974).
  • [5] A. C. Hamilton, B. Sundar, J. Nelson, and J. Courtial, “Local light-ray rotation,” J. Opt. A: Pure Appl. Opt. 11, 085705 (2009).
  • [6] Wikipedia, “Anaglyph image,” http://en.wikipedia.org/wiki/Anaglyph_image.
  • [7] C. W. Tyler and M. B. Clarke, “The autostereogram,” in Stereoscopic Displays and Applications, vol. 1258 of SPIE Proceedings Series (SPIE - The International Society for Optical Engineering, Bellingham, Washington, 1990), pp. 182–196.
  • [8] Wikipedia, “Petzval field curvature,” http://en.wikipedia.org/wiki/Petzval_field_curvature.
  • [9] Wikipedia, “Perspective control lens,” http://en.wikipedia.org/wiki/Perspective_control_lens.
  • [10] E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 99–106 (1992).
  • [11] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. rep., Stanford Tech Report CTSR 2005-02 (2005).
  • [12] Wikipedia, “Spherical coordinate system,” http://en.wikipedia.org/wiki/Spherical_coordinate_system.
  • [13] G. C. G. Berkhout, M. P. J. Lavery, J. Courtial, M. W. Beijersbergen, and M. J. Padgett, “Efficient sorting of orbital angular momentum states of light,” Phys. Rev. Lett. 105, 153601 (2010).