## 1 Introduction

Light fields have emerged as an imaging modality that enables capture of all light rays passing through a given region of 3D space [Levoy1996]. Compared to traditional 2D images, which describe only the spatial intensity of light rays, captured light fields also describe the angle at which rays arrive at the detection plane. A light field can be represented as a 4D function $L(s, t, u, v)$, where the plane $\Pi_{st}$ represents the spatial distribution of rays, indexed by $(s, t)$, while $\Pi_{uv}$ corresponds to their angular distribution, indexed by $(u, v)$. 3D world coordinates are denoted by $(x, y, z)$, and for simplicity and without loss of generality, we assume that planes $\Pi_{st}$ and $\Pi_{uv}$ are parallel to the plane of the $x$-$y$ axes.

A common light field operation is to simulate a change of focus for a traditional photographic camera with a narrow depth of field. A “refocus image” can be produced through use of the well-known shift-and-sum method [Ng2005] (note that Equation 1 is a re-parameterization of the shift-and-sum equation presented in [Ng2005], and describes the same operations despite changes in notation), in which the refocus image is obtained as a linear combination of shifted light field views:

$$R(s, t) = \sum_{u, v} A(u, v)\, L\big(s + d\,(u - u_0),\; t + d\,(v - v_0),\; u,\, v\big) \qquad (1)$$

where $(u_0, v_0)$ are the indices of the view for which refocus will be performed (the “reference view”), $A(u, v)$ is a filter that defines the synthetic aperture, and $d$ is a disparity value, which is related to the refocus distance.
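To make the operation concrete, the sum above can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's implementation: function and variable names are our own, views are assumed grayscale and keyed by their angular index, and subpixel shifts use bilinear interpolation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_and_sum(views, aperture, d, ref=(0, 0)):
    """Frontoparallel refocus in the spirit of Equation 1 (sketch).

    views    : dict mapping angular index (u, v) -> 2D grayscale view
    aperture : dict mapping (u, v) -> scalar weight A(u, v)
    d        : disparity of the desired refocus plane
    ref      : angular index (u0, v0) of the reference view
    """
    u0, v0 = ref
    acc = np.zeros_like(next(iter(views.values())), dtype=float)
    wsum = 0.0
    for (u, v), view in views.items():
        w = aperture.get((u, v), 0.0)
        if w == 0.0:
            continue
        # Shift each view by its disparity-scaled baseline before summing.
        shifted = nd_shift(view.astype(float), (d * (v - v0), d * (u - u0)), order=1)
        acc += w * shifted
        wsum += w
    return acc / wsum
```

Varying `d` sweeps the synthetic focal plane through the scene; objects whose disparity matches `d` align across views and appear sharp, while everything else is averaged into blur.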

To perform shift-and-sum refocus, it is only necessary to specify the disparity $d$ as an input parameter. Because the relationship between disparity and refocus distance is not necessarily known, a user may need to rely on guess-and-check to refocus to a specific plane: entering a disparity, viewing the refocus result, and repeating until the desired result is achieved. More intuitive interfaces instead incorporate disparity information, computed from the light field, to allow the user to specify the disparity by clicking on the object to be focused on.

When using the shift-and-sum method described above, the refocus plane is parallel to the light field planes $\Pi_{st}$ and $\Pi_{uv}$ (or “frontoparallel”). Other refocus methods have been described that allow for non-frontoparallel, planar refocus [Isaksen2000, Vaish2005, Sugimoto2008, Xiao2018]. With these, the results of traditional tilt-shift photography can be simulated, which we refer to as “tilt-shift refocus.” Most of these methods utilize planar homographies [Isaksen2000, Vaish2005, Sugimoto2008], and most require guess-and-check placement of the refocus plane through manually specified input parameters [Isaksen2000, Vaish2005, Xiao2018].

A tilted refocus plane has more degrees of freedom than a frontoparallel plane, so it is easier for a user to become confused about the plane’s placement with respect to the scene geometry. The method of Sugimoto and Okutomi allows the user to select a region of interest to focus on [Sugimoto2008]. However, the tilted refocus plane is a side effect of applying the homography that optimizes sharpness within the region. As such, the results outside the region can be unpredictable. Overall, the existing literature does not provide an instructive description of how homographies can be applied to perform tilt-shift refocus in the contemporary light field framework, and it lacks an intuitive user specification of the refocus plane.

In this paper we describe a generalization of shift-and-sum that allows for non-frontoparallel refocus planes, with frontoparallel refocus as a specific case. This generalized refocus is applied to create a tilt-shift refocus tool that allows for intuitive specification of the refocus plane, visualizing it relative to a point cloud of the scene geometry. These visualizations are enabled by the inclusion of depth information and camera calibration parameters. With this tool, the user can specify a tilted refocus plane through mouse selection and adjust its parameters with keyboard commands. An example of this can be seen in Figure 1. Finally, we show that interactive perspective shift can be performed for intermediate views within the generalized shift-and-sum framework.

## 2 Theory

### 2.1 Generalized Shift-and-Sum

To generalize the basic shift-and-sum method, the disparity terms are replaced by a generalized transformation $T^{u,v}_{\Phi}$, specific to the refocus surface $\Phi$. Then, the refocus image can be expressed as

$$R(s, t) = \sum_{u, v} A(u, v)\, L\big(T^{u,v}_{\Phi}(s, t),\, u,\, v\big) \qquad (2)$$

This is the result of reprojecting pixel coordinates $(s, t)$ of the reference view $(u_0, v_0)$ onto the refocus surface $\Phi$, followed by projection into view $(u, v)$ to obtain the transformed coordinates $(s', t')$. This is represented visually in Figure 2a. Note that primes on variables do not have the same meaning as in [Ng2005].
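The generalized sum can be sketched in the same way as before, with the transformation passed in as a callable. This is an illustrative sketch under our own naming, with bilinear sampling standing in for whatever interpolation an implementation would use:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def generalized_refocus(views, aperture, transform, ref=(0, 0)):
    """Generalized shift-and-sum in the spirit of Equation 2 (sketch).

    transform(s, t, u, v) maps reference-view pixel grids (s, t) to the
    coordinates (s', t') at which view (u, v) is sampled, for a given
    refocus surface. `views` and `aperture` are keyed by angular index.
    """
    h, w = next(iter(views.values())).shape
    t, s = np.mgrid[0:h, 0:w].astype(float)  # pixel grid of the reference view
    acc, wsum = np.zeros((h, w)), 0.0
    for (u, v), view in views.items():
        a = aperture.get((u, v), 0.0)
        if a == 0.0:
            continue
        s2, t2 = transform(s, t, u, v)
        # Bilinear sampling of view (u, v) at the transformed coordinates.
        acc += a * map_coordinates(view.astype(float), [t2, s2], order=1)
        wsum += a
    return acc / wsum
```

Frontoparallel refocus corresponds to `transform = lambda s, t, u, v: (s + d * (u - u0), t + d * (v - v0))`, recovering the sketch of Equation 1.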

### 2.2 Tilt-Shift Refocus

While the application of homographies to produce tilted refocus planes has been mentioned previously [Isaksen2000, Vaish2005, Sugimoto2008], we provide the details below, in the generalized shift-and-sum formalism, for completeness and as an instructive reference.

For the specific case of planar refocus, we consider a plane $\Pi$, described by a point $P_{\Pi}$ and a normal vector $\mathbf{n}$. Then, the projection can be described in terms of a planar homography and the camera intrinsic matrices $K_{u,v}$ through

$$\begin{pmatrix} s' \\ t' \\ 1 \end{pmatrix} \sim H_{u,v} \begin{pmatrix} s \\ t \\ 1 \end{pmatrix} \qquad (3)$$

with

$$H_{u,v} = K_{u,v}\, G_{u,v}\, K_{u_0,v_0}^{-1} \qquad (4)$$

and

$$G_{u,v} = R_{u,v} - \frac{\mathbf{t}_{u,v}\,\mathbf{n}^{\top}}{h} \qquad (5)$$

where $R_{u,v}$ is the rotation matrix of camera $(u, v)$; $\mathbf{t}_{u,v}$ is the translation vector between cameras $(u_0, v_0)$ and $(u, v)$; and $h$ is the distance from the reference camera to plane $\Pi$. A visual representation of this transformation is shown in Figure 2b. It is assumed that all camera parameters, intrinsic and extrinsic, are known. The refocus plane normal vector $\mathbf{n}$ and point $P_{\Pi}$ can be specified directly or, in our case, interactively by the user.
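The composition of Equations 3–5 translates directly into a few lines of numpy. The sketch below uses hypothetical argument names and assumes the convention that $R_{u,v}$ and $\mathbf{t}_{u,v}$ take reference-camera coordinates to view-camera coordinates:

```python
import numpy as np

def planar_homography(K_ref, K_view, R, t, n, h):
    """Pixel-to-pixel homography in the spirit of Equations 3-5 (sketch).

    K_ref, K_view : 3x3 intrinsics of the reference camera and of view (u, v)
    R, t          : rotation and translation taking reference-camera
                    coordinates to view-camera coordinates
    n             : unit normal of the refocus plane, in reference coordinates
    h             : distance from the reference camera centre to the plane
    """
    G = R - np.outer(t, n) / h                 # Eq. 5: plane-induced homography
    return K_view @ G @ np.linalg.inv(K_ref)   # Eqs. 3-4: conjugate by intrinsics
```

For two identical, co-located cameras the result reduces to the identity, and views need no warping at all, which is a useful sanity check on sign conventions.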

In practice, the refocus is not done in a pointwise manner as implied by Equation 2. Instead, the transformation can be applied to warp a whole view at once, because the homography $H_{u,v}$ does not depend on the pixel coordinates $(s, t)$. Then, similar to shift-and-sum, the weighted average of all views is taken, with a mask applied to each view to avoid contributions from empty pixels outside the bounds of the parallelogram containing the warped image.
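This whole-view warping with masked averaging can be sketched as follows. Names are hypothetical, and inverse-mapped bilinear sampling stands in for a library warp routine:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_and_accumulate(views, aperture, homographies, shape):
    """Whole-view warping variant of the refocus sum (sketch).

    homographies[(u, v)] maps reference pixels (s, t, 1) to pixels in
    view (u, v); each view is sampled at the mapped coordinates, and a
    per-view mask excludes samples that fall outside the view bounds.
    """
    t, s = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    ones = np.ones_like(s)
    acc, wacc = np.zeros(shape), np.zeros(shape)
    for (u, v), view in views.items():
        a = aperture.get((u, v), 0.0)
        if a == 0.0:
            continue
        H = homographies[(u, v)]
        # Apply the homography to homogeneous reference-pixel coordinates.
        p = np.tensordot(H, np.stack([s, t, ones]), axes=1)
        s2, t2 = p[0] / p[2], p[1] / p[2]
        # Mask out samples that land outside this view.
        mask = (s2 >= 0) & (s2 <= view.shape[1] - 1) & \
               (t2 >= 0) & (t2 <= view.shape[0] - 1)
        acc += a * mask * map_coordinates(view.astype(float), [t2, s2], order=1)
        wacc += a * mask
    return np.where(wacc > 0, acc / np.maximum(wacc, 1e-12), 0.0)
```

Normalizing by the accumulated per-pixel weight, rather than the total aperture weight, is what keeps image borders from darkening where some warped views fall outside the frame.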

It can be shown that frontoparallel refocus is a special case of tilt-shift refocus. If we assume that all camera focal lengths are identical, that the cameras have parallel optical axes ($R_{u,v} = I$), and that the normal is parallel to the optical axes ($\mathbf{n} = (0, 0, 1)^{\top}$), Equation 2 takes the form of Equation 1 when $T^{u,v}_{\Pi}$ is of the form in Equation 3.
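As a sketch of this reduction (with hypothetical symbols: identical intrinsics $K$ with focal length $f$, and an in-plane baseline $\mathbf{t}_{u,v} = (b_u, b_v, 0)^{\top}$), note that $\mathbf{n}^{\top} K^{-1} = (0, 0, 1)$, so the homography of Equations 4 and 5 collapses to

$$H_{u,v} = K\left(I - \frac{\mathbf{t}_{u,v}\,\mathbf{n}^{\top}}{h}\right)K^{-1} = I - \frac{1}{h}\begin{pmatrix} 0 & 0 & f\, b_u \\ 0 & 0 & f\, b_v \\ 0 & 0 & 0 \end{pmatrix},$$

giving $s' = s - f b_u / h$ and $t' = t - f b_v / h$: a pure shift proportional to the baseline, with disparity of magnitude $f/h$ per unit baseline (up to sign convention), matching the form of Equation 1.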

### 2.3 Intermediate View Perspective Shift

As is the case for the original shift-and-sum algorithm, the angular coordinates $(u_0, v_0)$ of the reference view do not need to coincide with the angular coordinates of a discrete light field view [Ng2005]. Choosing an intermediate $(u_0, v_0)$ can thus produce a refocus image simulating capture by a camera at a position between the real cameras. However, this effect will only be compelling if, when moving the virtual camera position $(u_0, v_0)$, the set of light field cameras included in the aperture is updated, i.e., new angular information is taken into account.
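The view-selection step can be sketched with a hypothetical helper; the aperture filter would then be restricted to the selected cameras:

```python
import numpy as np

def virtual_aperture(camera_uv, centre, radius):
    """Select cameras inside a virtual aperture (sketch).

    camera_uv : (N, 2) array of angular positions of the real cameras
    centre    : (u, v) position of the virtual (intermediate) camera
    radius    : aperture radius in angular units

    Returns a boolean mask over the cameras; moving `centre` changes
    which views contribute, bringing new angular information into the sum.
    """
    d = np.linalg.norm(np.asarray(camera_uv, float) - np.asarray(centre, float), axis=1)
    return d <= radius
```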

## 3 Methods

We propose three different methods to intuitively and interactively define the refocus plane parameters $P_{\Pi}$ and $\mathbf{n}$. Two of these methods require a depth map, and all of them use depth information to provide the user with a visualization of the position of the refocus plane relative to the scene geometry.

To create a point cloud of a scene, disparity is converted to metric depth $z$ as per [Wanner2013]. The pixels $(s, t)$ in each view are converted from pixel coordinates to homogeneous image coordinates by reprojection via $K^{-1}$. Then, these coordinates are converted to 3D world coordinates by multiplying by the depth $z$. This is expressed as

$$P = z\, K^{-1} \begin{pmatrix} s \\ t \\ 1 \end{pmatrix} \qquad (6)$$
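Applied per pixel of a depth map, this backprojection can be sketched as follows (names are our own):

```python
import numpy as np

def backproject(K, depth):
    """Depth map -> point cloud in the spirit of Equation 6 (sketch).

    Each pixel (s, t) is reprojected to a homogeneous ray by K^{-1} and
    scaled by its metric depth z(s, t) to give a 3D point.
    """
    h, w = depth.shape
    t, s = np.mgrid[0:h, 0:w].astype(float)
    pix = np.stack([s.ravel(), t.ravel(), np.ones(h * w)])  # homogeneous pixels
    rays = np.linalg.inv(K) @ pix                            # K^{-1} (s, t, 1)
    return (rays * depth.ravel()).T                          # z * K^{-1} x, shape (h*w, 3)
```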

### 3.1 Single-Click Definition

In the simplest interactive refocus plane definition method, the user defines the refocus plane by selecting, with a mouse, a single point in the reference view (Figure 3, left). Invisible to the user, a normal map of the scene has been created from the point cloud using the method of [Hoppe1992]. The point $P_{\Pi}$ is calculated from the selected pixel using Equation 6, and the corresponding normal vector $\mathbf{n}$ is read from the normal map. Once these refocus plane parameters have been obtained, $\Pi$ can be visualized in the point cloud.
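The plane-fitting step of the normal estimation can be sketched as PCA over a local neighborhood, in the spirit of [Hoppe1992]; normal-sign orientation and the actual data structures used are omitted:

```python
import numpy as np

def estimate_normal(points, query, k=16):
    """Local surface normal by PCA over k nearest neighbors (sketch).

    points : (N, 3) point cloud; query : index of the selected point.
    """
    p = points[query]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]]
    # The eigenvector of the smallest eigenvalue of the local covariance
    # is the normal of the best-fit plane through the neighborhood.
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, V = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return V[:, 0]
```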

### 3.2 Three-Click Definition

A second refocus plane definition method has the user select three points to define a plane. The user is presented with a rendering of the point cloud, which they can manipulate as needed before selecting three points (Figure 4, left). Any of the selected points can serve as $P_{\Pi}$, and the normal $\mathbf{n}$ is found from the cross product of the vectors between the points. Though this method has a few more steps than the single-click method, it can be used to force the refocus plane through multiple, disparate objects, whereas the single-click method is limited to refocus planes with normals present in the normal map.
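This construction is essentially a one-liner; a sketch with hypothetical names:

```python
import numpy as np

def plane_from_points(p0, p1, p2):
    """Refocus plane from three selected points (sketch).

    Returns (point on plane, unit normal); the normal is the cross
    product of the two in-plane vectors spanned by the selections.
    """
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    n = np.cross(p1 - p0, p2 - p0)
    return p0, n / np.linalg.norm(n)
```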

### 3.3 Keyboard Definition

Where the methods given above require depth information to function, the final method has the user define the refocus plane manually with keyboard commands; the point cloud is only used to visualize the placement. The point $P_{\Pi}$ is assumed to lie on the optical axis of the reference camera, with the user specifying the distance $h$. The normal vector $\mathbf{n}$ is not specified directly. Instead, the user specifies the plane’s rotation about the Cartesian axes, since this is more intuitive. The visual representation of the plane is updated as the user steps through different values of distance and angle, so that it is easy to see how the refocus result will relate to the geometry of the scene, via the point cloud.
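A sketch of this parameterization (function names, and the choice of radians, are our own):

```python
import numpy as np

def keyboard_plane(h, rx, ry, axis=np.array([0.0, 0.0, 1.0])):
    """Refocus plane from keyboard parameters (sketch).

    h      : distance along the reference camera's optical axis
    rx, ry : user-specified rotations (radians) about the x and y axes
    Returns (point on plane, unit normal).
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    point = h * axis           # on the optical axis at distance h
    normal = Ry @ Rx @ axis    # rotate the frontoparallel normal
    return point, normal
```

Stepping `h`, `rx`, and `ry` with key presses and redrawing the plane in the point cloud gives the interactive behaviour described above.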

While keyboard definition may require more steps to produce the desired refocus plane placement, it can define any plane. In contrast, the click-based methods described in Sections 3.1 & 3.2 are limited to points that can be selected from the scene. For this reason, it is useful to include keyboard definition as a second step following either of the mouse-based methods, allowing for fine adjustment in case the original result was not exactly as desired.

## 4 Results

Example tilt-shift refocus results are shown in Figures 1, 3, 4, and 5. Additional results can be found online at https://v-sense.scss.tcd.ie/research/tilt-shift/.

Figure 3 shows results of the single-click interactive definition discussed in Section 3.1. For simulated scenes, such as the one shown, estimated surface normals are quite clean. This means that clicking on a planar surface, as shown in the upper-left, will produce a refocus plane across that surface. Results away from the selected point can be more unpredictable for complex surfaces and for real scenes, where the normal maps are noisy. The middle and bottom rows of Figure 3 show how a small difference in the selected point can produce quite different refocus planes.

Figure 4 shows results of the three-click method, discussed in Section 3.2, for a scene captured with a calibrated plenoptic camera. Though requiring more input from the user, the three-click method is more resilient to noise and allows for planes spanning multiple surfaces to be defined more easily, such as in the bottom row of Figure 4. Where the single-click method is subject to errors in the depth map and normal estimation, the three-click method is only subject to errors in the depth map.

Figures 1 & 5 show results of the interactive keyboard definition discussed in Section 3.3. While it has the most complicated controls and requires more steps to achieve the desired result compared to the other methods, this method is the most robust: its only dependence on the depth map is in the visualization of the refocus plane relative to the point cloud. As mentioned previously, it can be beneficial to make a first estimate with the single- or three-click methods and then refine with keyboard definition. Control of the virtual aperture size and position (Section 2.3) can be provided with the keyboard interface, with an example shown in Figure 5.

## 5 Conclusion

Here we have provided formalism for light field tilt-shift refocus in a generalized shift-and-sum framework. We have demonstrated interactive capabilities, enabled by inclusion of depth information, that had not been previously considered and provided a qualitative analysis of the benefits and drawbacks of three different refocus plane definition methods.

Currently, refocus images from light fields with large separation between cameras contain significant angular aliasing artifacts (ex: Figure 1). Addressing this problem, either through filtering or view interpolation, is the focus of current work.

While we have discussed only one specific case of generalized shift-and-sum (planar refocus), it should be possible to simulate other refocus surfaces in this framework, similar to [Xiao2018]. We are also investigating refocus surfaces composed of multiple planes, as a hybrid of tilt-shift refocus and generalized refocus surfaces.
