Towards Building an RGBD-M Scanner

03/12/2016 · Zhe Wu et al. · Simon Fraser University and Singapore University of Technology and Design

We present a portable device to capture both the shape and reflectance of an indoor scene. Consisting of a Kinect, an IR camera and several IR LEDs, our device allows the user to acquire data in much the same way as scanning with a single Kinect. Scene geometry is reconstructed by KinectFusion. To estimate reflectance from incomplete and noisy observations, 3D vertices of the same material are identified by our material segmentation propagation algorithm. BRDF observations at these vertices are then merged into a more complete and accurate BRDF for the material. The effectiveness of our device is demonstrated by quality results on real-world scenes.







1 Introduction

Appearance capture involves the simultaneous acquisition of both the 3D shape and reflectance of an object or a scene. Given such information, it is possible to produce photo-realistic images for movies and games; it has also seen wide application in reverse engineering and cultural heritage preservation. Because of its importance, various methods on this topic have been proposed, some of which [17, 26, 9] achieve highly accurate results. However, these methods typically rely on bulky and expensive hardware setups, and the data acquisition process requires a certain amount of expertise. Both requirements put appearance capture out of reach for average users who wish to digitize objects or scenes in everyday life. Another major issue of existing systems is that they target only single movable objects of small scale (a few centimeters in diameter), which further prohibits their use when a static and relatively large indoor scene is of interest. Thus, a novel acquisition device which addresses these limitations is highly desirable for casual appearance capture.

Compared with appearance capture, acquiring only the 3D shape is a relatively easy task with the aid of Microsoft Kinect. Kinect is a portable and easy-to-use device which provides streams of depth images and RGB images of a scene. By making use of the depth stream, [10] achieved real-time 3D reconstruction. Per-vertex color information can be estimated by taking the RGB stream into account [25]. Further efforts have been made by [19, 3, 7, 22] to improve the quality of depth images or of the reconstructed shape via photometric techniques using the RGB stream. However, none of these methods can reveal reflectance information, which is of great importance for a computer to better analyze and understand the scene.

In view of the above-mentioned gaps in both fields, we present in this paper a novel device for appearance capture. Specifically, we equip an ASUS Xtion Pro Live (a product similar to Kinect) with an additional infrared camera and several infrared LEDs, which are synchronized by a customized circuit. After acquiring data with this portable device, we first reconstruct the 3D scene from depth images by KinectFusion and then collect reflectance information for each 3D vertex by projecting it onto the images of the IR camera under IR illumination. While the BRDF observations for each 3D vertex are sparse and severely corrupted, we manage to group together vertices of the same material through our proposed material segmentation propagation algorithm. By concatenating the BRDF observations of vertices of the same material, we obtain a more complete and accurate estimation of the BRDF for each material in the scene. Our system can be regarded as an RGBD-M(aterial) scanner due to its capability of both depth sensing and reflectance sensing.

In summary, the contributions of this paper are twofold.

1. We prototype a novel appearance acquisition device featuring portability and ease of use. Moreover, this device is able to capture the shape and reflectance of a relatively large indoor scene instead of a single object;

2. We present a novel method for automatic classification of 3D vertices based solely on their sparse and corrupted BRDF observations.

2 Related Work

The system presented in this paper is aimed at the simultaneous acquisition of shape and reflectance with the aid of customized hardware. It is related to the fields reviewed below.

Appearance Capture

Appearance capture methods can be classified into two categories based on their input data. Methods in the first category [11, 16] rely on a range sensor to first reconstruct a precise 3D shape. By collecting observations from an additional RGB camera and fitting parametric BRDFs, it is possible to reconstruct the reflectance information at each surface point. Methods in the second category involve only RGB cameras to reconstruct both 3D shapes and BRDFs. [8, 6, 2, 26] made various assumptions on BRDFs to produce high-quality results, while [9, 17] achieved the same goal by smartly designing the acquisition system, using either a coaxial optical scanner or spherical harmonic illumination. The major advantage of such systems is accuracy. However, accuracy comes at the expense of bulky and complicated hardware setups in a controlled environment. Besides, these systems work in the visible spectrum and require a darkroom environment when capturing data. A third issue is that they work only on small-scale objects which can be placed on a turntable, which prohibits them from being used to acquire the appearance of an indoor scene.

Dense Scene Reconstruction

3D reconstruction has been one of the key problems in computer vision. Using RGB images alone, multi-view stereo methods such as [5] and structure-from-motion (SfM) methods like [12] can produce plausible 3D shape models. The emergence of Microsoft Kinect, which has a consumer-level range sensor, has enabled novel approaches [10, 24] to dense 3D reconstruction. Color information can also be estimated with an additional RGB camera [25].

3D reconstruction by Kinect cannot preserve fine surface details. This problem has been tackled with the help of the RGB camera on Kinect: [22, 7, 3, 19] either refined each depth image or refined the final triangular mesh by photometric stereo approaches. One recent work [27] refined geometry directly on the volumetric representation of the shape instead of the explicit mesh representation. While these methods achieve plausible results in revealing detailed shape information, BRDFs, which play an important role in relighting and scene understanding, remain missing.

Computational Sensing Traditional RGB cameras provide users with a color image. In recent years, there has been a trend of modifying the RGB camera or augmenting it with additional hardware to enhance its imaging capability. [raskar2004non] made a non-photorealistic camera out of an ordinary RGB camera by augmenting it with synchronized flashes. [liu2014discriminative] enabled an RGB camera to differentiate materials by adding computer-controlled active illumination sources. A recent work [tang2014high] extended the traditional RGB mosaic to let IR light reach the camera sensor, and the additional IR channel enabled high-resolution photography. Kinect can also be viewed as a new computational sensing device because of its depth-sensing capability. Compared with Kinect, our device goes one step further to recover not only the shape, but also the BRDFs of a scene. Because of this capability, our device can be seen as an RGBD-M(aterial) scanner in a general sense.

The work closest to ours is [20], which made use of a single Kinect for appearance capture. While both systems feature portability and ease of use, ours differs from that of [20] in the following ways.

1. [20] worked on a single object because of its reliance on environmental lighting, while our device works on scenes, such as a room’s corner, thanks to its active illumination in the IR spectrum;

2. [20] assumed a parametric BRDF model, while we use a bivariate BRDF model which is deduced from reflectance symmetries and is represented as a 2D table. Compared with a parametric model, the bivariate model is applicable to a wider range of real-world materials;

3. [20] required illumination calibration, done by placing a mirror sphere into the scene, whenever the visible illumination changes, while our device requires no such calibration;

4. [20] required a user-specified number of materials, which is hard for average users to decide, while our system does not require additional input from the user.

3 Hardware Description

In this section, our customized device is first introduced. Then we briefly mention various calibrations involved before using the device. Finally, the data acquisition process is presented to illustrate the device’s ease of use.

3.1 Device Setup

Figure 1: (a) shows our prototype device; (b) shows the spectra of Xtion and the IR LEDs
Figure 2: System pipeline

As shown in Figure 1(a), our customized device involves three key parts. The first is the ASUS Xtion PRO LIVE, or Xtion for short. Similar to Microsoft Kinect, Xtion streams depth and RGB images of equal resolution at 30 fps.

However, Xtion is preferred in our setup because of its smaller form factor and because it uses a single USB port to transmit both data and power, unlike Kinect, which requires a dedicated cable for power supply.

The second part is an infrared (IR) camera sensitive in the near-IR spectrum. The IR camera and Xtion are fixed on opposite sides of a thick aluminium plate so that the baseline between them is minimized. The IR camera’s lens was chosen so that it and Xtion have similar fields of view.

The third part is a set of 10 IR LEDs. These LEDs are synchronized with the IR camera by a customized control circuit in such a way that the LEDs are switched on and off sequentially while the IR camera acquires an IR image whenever a single LED is on. To reduce the effects of underexposure and overexposure, we capture two images under different exposures for each LED. The frame rate of the IR camera is around 20 fps. The LEDs are fixed on a circular plastic plate, with eight of them evenly distributed on a circle centered at the IR camera and the remaining two placed at two different distances from the IR camera. This lighting design facilitates robust BRDF estimation.

Since Xtion’s depth camera also works in the IR spectrum, we have to carefully avoid interference between Xtion and the IR LEDs. We measured the spectrum of Xtion’s depth camera with a spectrometer, which shows a single near-IR peak. We therefore chose IR LEDs whose spectral peak is well separated from it, as shown in Figure 1(b), so that Xtion’s depth camera is not affected by the additional IR LEDs. However, the IR camera collects photons from the visible spectrum up to the near-IR spectrum. Thus, we placed a bandpass filter in front of the IR camera so that it only receives light around the LEDs’ peak wavelength.


Both Xtion and the IR camera are connected to a desktop computer and controlled by a data capture program.

3.2 Device Calibration

Our device was calibrated in the following aspects.

1. We assume a perspective camera model for the IR camera and for Xtion’s depth and RGB cameras, and we calibrated their intrinsic matrices as well as the relative poses between each pair of the three;

2. Each LED is modeled as a point light source and its position relative to the IR camera was calibrated;

3. The vignetting effect observed in IR images was calibrated;

4. The relative brightnesses of the LEDs were calibrated.

Another issue is temporal synchronization between Xtion and the IR camera. We simply record a timestamp for every image received by the computer. These timestamps are used in Section 4.2.3.

3.3 Data Acquisition

Our device is intended to be used in a similar way as we use a single Kinect: the user holds the device and points it to the scene of interest. Moving around the scene, our device generates three image streams: depth stream, RGB stream and IR stream. These images are transmitted back to a computer and will later be used to reconstruct the appearance model of the scene.

4 Appearance Capture

Given the acquired depth, RGB and IR images, our system estimates both the shape and reflectance of the scene. Figure 2 shows the processing pipeline of our system. In this section, we detail each step.

4.1 Geometry Reconstruction

We first obtain the 3D shape of the scene by KinectFusion on the sequence of depth images. The output is a triangular mesh representing the shape, together with a camera pose for every depth image.

4.2 Reflectance Estimation

For a surface point in the scene, we try to obtain its reflectance information, which is described by a BRDF (bidirectional reflectance distribution function) ρ(ω_i, ω_o, λ), where ω_i is the incoming lighting direction, ω_o is the outgoing light direction, and λ is the wavelength of the light.

4.2.1 Reflectance Models

While one material may not exhibit exactly the same reflectance characteristics in different ranges of the light spectrum, the difference is negligible for dielectrics in the visible and near-infrared range according to [20]. Thus, we simplify the BRDF model as ρ(ω_i, ω_o, λ) = ρ_IR(ω_i, ω_o) · c, where ρ_IR is the BRDF estimated in the IR spectrum and c is the normalized color vector estimated in the visible spectrum.

A general BRDF is a 4D function and requires much effort to be fully captured. Instead of adopting oversimplified parametric models, we assume isotropy and half-vector symmetry of BRDFs. Then the 4D function can be reduced to a bivariate BRDF

ρ_IR(ω_i, ω_o) = ρ_IR(θ_h, θ_d),

where θ_h and θ_d are the half angle and difference angle defined in [15]. It is worth noting that isotropy and/or half-vector symmetry have been used extensively in previous work on photometric stereo [2, 14, 21].
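As a concrete illustration, the reduction from a direction pair (ω_i, ω_o) to the half and difference angles can be sketched as follows (a minimal numpy sketch of the parameterization of [15]; the function and variable names are ours, not from the paper):

```python
import numpy as np

def half_diff_angles(n, wi, wo):
    """Reduce a lighting/viewing direction pair (wi, wo) to the half angle
    theta_h and difference angle theta_d used by the bivariate BRDF.
    All inputs are 3-vectors in the local frame of the surface point."""
    n, wi, wo = (np.asarray(v) / np.linalg.norm(v) for v in (n, wi, wo))
    h = wi + wo
    h /= np.linalg.norm(h)                            # half vector
    theta_h = np.arccos(np.clip(n @ h, -1.0, 1.0))    # angle between n and h
    theta_d = np.arccos(np.clip(wi @ h, -1.0, 1.0))   # angle between wi and h
    return theta_h, theta_d
```

Mirror-reflection configurations, where ω_i and ω_o are symmetric about the normal, always map to θ_h = 0, which is why the parameterization concentrates specular behavior along one axis of the table.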

We further assume that the scene contains only a few different BRDFs. This assumption is a good approximation of an indoor scene consisting of artificial furniture and articles [23], which is exactly the target scene of our system.

Note that we do not adopt the dichromatic BRDF model because it increases the complexity of our BRDF model, making it prone to the large noise observed in our data.

BRDF Representation We represent a BRDF as a 2D lookup table: θ_h is discretized into 45 bins of equal width while θ_d is discretized into 48 bins. Thus, the BRDF representation is a 45×48 table with each entry containing a 3-vector ρ_IR · c. We refer to an entry of this table as a BRDF cell. Each vertex is associated with such a BRDF table.
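Under this equal-width binning (assuming, as the paper leaves implicit, that both angles range over [0, π/2]), the cell lookup might be implemented as:

```python
import numpy as np

N_H, N_D = 45, 48                       # bin counts from the paper

def brdf_cell(theta_h, theta_d):
    """Map angles in [0, pi/2] to the (row, column) index of the 45 x 48
    BRDF lookup table, clamping to the last bin at the upper boundary."""
    i = min(int(theta_h / (np.pi / 2) * N_H), N_H - 1)
    j = min(int(theta_d / (np.pi / 2) * N_D), N_D - 1)
    return i, j

# a vertex's table of RGB 3-vectors; NaN marks an empty BRDF cell
table = np.full((N_H, N_D, 3), np.nan)
```

Using NaN as the empty-cell marker makes the sparsity of a single vertex's table explicit, which matters later when tables of the same material are merged.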

4.2.2 Per-Vertex Color

By projecting a 3D vertex onto an RGB image, the RGB values at the projected 2D pixel give an estimation of the 3D vertex’s color.

Given the pre-calibrated relative pose between the depth camera and the RGB camera, we can easily calculate the camera pose of every RGB image as

P_rgb(t) = P_d(t) · ΔP_rgb,

where P_rgb(t) and P_d(t) are the poses of the RGB camera and the depth camera for a pair of RGB and depth images captured at the same timestamp t, and ΔP_rgb is the pre-calibrated relative pose. With the aid of the RGB camera’s pose, a vertex can be projected into an RGB image, where the corresponding pixel’s RGB values give an estimation of the vertex color.

After collecting a set of RGB values from a fraction of all RGB images, a 3D vertex’s color is calculated as the median of these RGB values on a per-channel basis. Note that we discard saturated RGB values as well as estimates obtained when viewing at a grazing angle. The normalized per-vertex color c is the result of normalizing the estimated color.
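The per-channel median with outlier rejection can be sketched as follows (the saturation threshold here is illustrative; the paper's exact thresholds are elided):

```python
import numpy as np

def vertex_color(rgb_samples, saturation=250):
    """Estimate a vertex's normalized color as the per-channel median of its
    RGB observations, discarding saturated samples first."""
    samples = np.asarray(rgb_samples, dtype=float)
    keep = (samples < saturation).all(axis=1)     # drop saturated pixels
    if not keep.any():
        return None                               # no reliable observation
    c = np.median(samples[keep], axis=0)          # robust per-channel median
    return c / np.linalg.norm(c)                  # normalized color vector
```

The median is preferred over the mean here because a few pixels corrupted by misregistration or specular highlights would otherwise bias the color estimate.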

4.2.3 Per-Vertex IR BRDF

The BRDF of a vertex in the IR spectrum is estimated from IR images. For a vertex x on a surface of the scene, we can estimate one of its BRDF values from an IR image acquired at time t based on the following image formation model

I(p) = v(p) · V(x) · (b / d²) · ρ_IR(θ_h, θ_d) · max(n · l, 0),

where n is the vertex x’s normal, l is the light direction for x at time t, b is the LED’s relative brightness, d is the distance between the LED and x, and p is the image coordinate when projecting x onto the IR image. The function v models the vignetting effect while V encodes both visibility and shadow information.

It is worth noting that since we adopt a perspective camera model and a point light source model, both ω_i and ω_o vary as our device moves. Thus, we are able to collect BRDF values at different (θ_h, θ_d).
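A single BRDF observation is then recovered by inverting the formation model; a minimal sketch (symbol names follow the model above; the rejection thresholds are ours):

```python
import numpy as np

def ir_brdf_value(I, n, l, d, b, vignette, visible=True):
    """Invert I = v(p) * V(x) * (b / d^2) * rho * max(n.l, 0) for rho.
    Returns None when the observation is unreliable (occluded/shadowed
    vertex, grazing light, or heavy vignetting)."""
    if not visible:
        return None
    ndotl = float(np.dot(n, l))
    if ndotl <= 1e-3 or vignette <= 1e-3:
        return None                       # grazing light or vignetted out
    return I * d * d / (vignette * b * ndotl)
```

Observations rejected here are simply absent from the vertex's BRDF table; they are recovered later by merging tables across vertices of the same material.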

To facilitate the estimation, we first need to know the IR camera’s pose for each IR image. In a similar way as we estimate an RGB image’s pose, it can be written as

P_ir(t) = P_d(t) · ΔP_ir,

where ΔP_ir is the pre-calibrated relative pose between the IR camera and the depth camera.

However, since the IR stream and the depth stream do not have a frame-to-frame correspondence, P_d(t) does not exist for an IR timestamp t and can only be obtained through interpolation of the two temporally neighboring depth camera poses P_d(t1) and P_d(t2), where t1 ≤ t ≤ t2.

A camera pose is composed of a translational component and a rotational component, which are interpolated separately: the translation by simple linear interpolation, and the rotation by spherical linear interpolation (slerp) of the corresponding quaternions. Details of quaternion slerp can be found in [4].
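This interpolation can be sketched with scipy's rotation utilities (a sketch under our own conventions: poses as a rotation plus a translation, timestamps as floats):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_depth_pose(t, t1, R1, p1, t2, R2, p2):
    """Interpolate the depth camera's pose at IR timestamp t (t1 <= t <= t2):
    linear interpolation for the translation, quaternion slerp for the
    rotation [4]. R1, R2 are scipy Rotation objects; p1, p2 translations."""
    a = (t - t1) / (t2 - t1)
    p = (1.0 - a) * np.asarray(p1) + a * np.asarray(p2)
    key_rots = Rotation.from_quat(np.vstack([R1.as_quat(), R2.as_quat()]))
    R = Slerp([t1, t2], key_rots)([t])[0]
    return R, p   # compose with the IR-to-depth relative pose afterwards
```

Slerp keeps the interpolated rotation on the unit-quaternion manifold, which naive component-wise interpolation of rotation matrices would not.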

After obtaining the camera pose of an IR image, the position of its corresponding LED can easily be obtained since all LEDs’ positions in the IR camera’s coordinate system are pre-calibrated. Thus, the IR BRDF value can be calculated based on the image formation model above.

Given a vertex’s normalized color c from Section 4.2.2 and an IR BRDF value ρ_IR from this section, we can calculate its full BRDF vector at (θ_h, θ_d) in the visible spectrum as ρ_IR · c. Multiple BRDF vectors at the same (θ_h, θ_d) are averaged before being placed into the vertex’s associated BRDF table.

4.2.4 Material Segmentation

Motivation Due to limited variation in the viewing/lighting directions, every single vertex has only a sparse set of non-empty BRDF cells in its associated BRDF table. Besides, unlike traditional BRDF acquisition systems [18, 1] where almost perfect calibration and registration are available, the BRDF values obtained with our device are severely corrupted by less accurate camera poses, imperfect shape-image registration and noisy surface normals. Thus, if we can identify 3D vertices of the same material, their BRDF tables can be merged into a more complete and accurate BRDF estimation for that material.

If we consider the BRDF table as a feature vector for each vertex, this material segmentation problem could simply be cast as spectral clustering by defining an affinity matrix over all vertices based on their BRDF tables. However, the affinity matrix is difficult to define because the feature vectors have many different missing entries, and the remaining entries are noisy and unreliable.

Another straightforward approach is to cluster vertices based on their colors. Mathematically, this amounts to feature dimension reduction, where the RGB vector is an empirical reduction of the whole BRDF table. Obviously, the RGB vector is not guaranteed to be the ‘optimal’ reduction for clustering. In fact, different materials might share the same color, and even the same material might exhibit different colors as the lighting/viewing configuration changes (see the highlight on the gymball in Figure 5(a)).


Algorithm In view of this, we propose a novel method for clustering vertices by utilizing all the incomplete and noisy BRDF tables. We notice that while clustering all vertices at once is challenging, partial clustering within a single BRDF cell is relatively easy and more reliable even with large noise. The basic idea of our algorithm is to segment vertices into different material groups based on their samples in the same BRDF cell, and then propagate this segmentation information into other BRDF cells to facilitate further segmentation of other vertices.

While a real scene usually contains an unknown number of materials, we detail our method for the 2-material segmentation case here to illustrate our idea. The algorithm is presented in Algorithm 1, with each step detailed below.

Multi-material segmentation shares the same idea and its implementation is omitted here for brevity. It is worth noting that our method does not require the user to specify the number of materials in the scene.

Input: BRDF tables of all vertices
Output: Every vertex’s material label
1: Set up T with initial clustering within each of its cells
2: Select the most separable cell
3: Separate it to initialize material groups V1 and V2
4: while there are cells to be separated do
5:   Select the currently most separable cell
6:   Separate it to update V1 and V2
7: end while
Algorithm 1: Two-material segmentation propagation

Line 1 We set up a new BRDF table T. A cell at (θ_h, θ_d) of T is associated with a list of BRDF samples collected from every vertex’s corresponding BRDF cell, if that cell is non-empty. Figure 3(a) is a visualization of the samples at one cell in the RGB space. Note that only a random subset of the mesh’s vertices is used here for efficiency.

Ideally, BRDF samples in a cell of T would merely be repetitions of a few isolated points in the RGB space, each corresponding to a different material. Due to various noises, however, these samples form a few clusters. We run the meanshift algorithm to automatically cluster these samples, with its bandwidth empirically set from the variances of the samples along the three color dimensions. Clusters containing less than a small fraction of all samples in the cell are discarded.

Note that in the 2-material case, there are no more than 2 clusters for each BRDF cell.
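The clustering step of Line 1 can be sketched with a minimal flat-kernel meanshift (numpy only; the bandwidth and merge radius here are illustrative, since the paper's exact bandwidth formula is elided):

```python
import numpy as np

def meanshift(points, bandwidth, iters=30):
    """Minimal flat-kernel meanshift: shift each point's mode to the mean of
    its bandwidth neighborhood, then merge converged modes into labels."""
    points = np.asarray(points, dtype=float)
    modes = points.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            nbr = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
            modes[i] = nbr.mean(axis=0)
    labels = np.empty(len(points), dtype=int)
    centers = []
    for i, m in enumerate(modes):       # merge modes closer than bandwidth/2
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels
```

Unlike k-means, meanshift does not need the number of clusters in advance, which is exactly why it suits a scene with an unknown number of materials.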

Line 2 We choose the most ‘separable’ cell in T to initialize the material segmentation. We first model each cluster in every cell of T by a Gaussian distribution (Figure 4(a)). If there are two clusters in a BRDF cell, we define the following ‘separability score’

s = min( d_M(μ1; μ2, Σ2), d_M(μ2; μ1, Σ1) ),

where μ1, μ2 and Σ1, Σ2 are the mean vectors and covariance matrices of the two Gaussians respectively, and d_M is the Mahalanobis distance defined as

d_M(x; μ, Σ) = sqrt( (x − μ)^T Σ^{-1} (x − μ) ).

A cell with a single cluster or no cluster has a score of zero. The cell in T with the highest score is selected for initial segmentation.
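The score and distance can be written directly (the min-of-cross-distances combination is our reading of the paper's elided formula, consistent with its later use of nearest-cluster Mahalanobis distances):

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """d_M(x; mu, Sigma) = sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    d = np.asarray(x, float) - np.asarray(mu, float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

def separability(mu1, cov1, mu2, cov2):
    """Separability of two Gaussian clusters in a BRDF cell: the smaller of
    the two cross Mahalanobis distances between the cluster means."""
    return min(mahalanobis(mu1, mu2, cov2), mahalanobis(mu2, mu1, cov1))
```

Taking the minimum makes the score conservative: a cell counts as separable only if each cluster mean is far from the other cluster under that cluster's own covariance.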

Line 3 For each sample in the selected cell, we calculate its Mahalanobis distances d1 and d2 to the two Gaussians. If d1 < 3 and d2 > 3, the sample’s corresponding vertex is placed into an initially empty vertex set V1. If d2 < 3 and d1 > 3, the vertex goes into another initially empty vertex set V2. The two vertex sets correspond to two materials in the scene. Figure 3(b) shows V1 and V2 on the original mesh.

We note that the value 3 is chosen as the threshold because it corresponds to the 3σ rule of a 1D Gaussian: if a sample is drawn from a Gaussian, there is a high probability that the Mahalanobis distance between them is less than 3.

Figure 3: Illustration of segmentation propagation. Note that while the scene contains more than two materials, it can still be used for demonstration because the initially segmented BRDF cell contains only two clusters

Line 5 In this step, we find a new cell in T to be separated. The selection is based on a new separability score.

For a cell C, we denote the set of all its BRDF samples’ corresponding vertices as V_C and find the intersections V_C ∩ V1 and V_C ∩ V2. If V_C ∩ V1 (or V_C ∩ V2) contains less than half of the vertices in V1 (or V2), the new separability score of this BRDF cell is set to zero. Otherwise, we estimate two Gaussian distributions in C from the samples of the vertices in V_C ∩ V1 and V_C ∩ V2, and C’s new separability score is calculated from the two new Gaussians using the separability score defined above. The cell in T with the highest new separability score is selected.

Line 6 We propagate the segmentation information of V1 and V2 to the selected cell C.

From the last step, we have two Gaussian distributions estimated from V_C ∩ V1 and V_C ∩ V2. For every BRDF sample in C whose corresponding vertex belongs to neither V1 nor V2, we calculate its Mahalanobis distances d1 and d2 to the two Gaussians. If d1 < 3 and d2 > 3, the vertex is added to V1. If d2 < 3 and d1 > 3, it goes into V2. See Figure 4.

During this step, the material segmentation information encoded in V1 and V2 is propagated to C, which in turn improves the segmentation by adding additional vertices into V1 and V2.

The selection of the next BRDF cell (Line 5) and segmentation propagation (Line 6) are repeated until no BRDF cell can be separated, either because it has already been segmented or because it contains fewer than two clusters.

Figure 3(c-f) shows how the sets V1 and V2 grow as the above algorithm proceeds.

Due to data noise and incomplete BRDF tables, some vertices are never classified into either V1 or V2. We do not force such vertices into either set because there is a lack of information for us to make a confident assertion about them.
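Putting Lines 1-6 together, here is a toy, self-contained 1-D sketch of the propagation idea. Largest-gap splitting stands in for meanshift, nearest-mean assignment for the Mahalanobis tests, and a single sweep over cells for the repeated best-cell selection; it illustrates the propagation mechanism only, not the paper's full algorithm:

```python
import numpy as np

def two_material_propagation(samples):
    """Toy two-material segmentation propagation on scalar BRDF samples.
    samples[v][cell] is vertex v's 1-D sample in that BRDF cell."""
    cells = sorted({c for s in samples.values() for c in s})

    def values(cell, vset=None):
        return [(v, samples[v][cell]) for v in samples
                if cell in samples[v] and (vset is None or v in vset)]

    # Lines 2-3: initialize from the cell with the widest gap (most separable)
    def gap(cell):
        xs = sorted(x for _, x in values(cell))
        return max(b - a for a, b in zip(xs, xs[1:])) if len(xs) > 1 else 0.0
    c0 = max(cells, key=gap)
    xs = sorted(values(c0), key=lambda t: t[1])
    k = max(range(1, len(xs)), key=lambda i: xs[i][1] - xs[i - 1][1])
    V1 = {v for v, _ in xs[:k]}
    V2 = {v for v, _ in xs[k:]}

    # Lines 4-7: propagate labels through the remaining cells
    for c in cells:
        g1, g2 = values(c, V1), values(c, V2)
        if not g1 or not g2:
            continue                      # cell carries no label information
        m1 = np.mean([x for _, x in g1])
        m2 = np.mean([x for _, x in g2])
        for v, x in values(c):
            if v not in V1 and v not in V2:
                (V1 if abs(x - m1) < abs(x - m2) else V2).add(v)
    return V1, V2
```

Note how a vertex labeled in one cell anchors the group means in every other cell it appears in, which is exactly how the segmentation spreads to vertices that never share a cell with the initial seeds.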

Multi-Material Extension In the case of multi-material segmentation, the meanshift algorithm in Line 1 can produce more than two clusters in a cell of T. Separability can then be defined for each cluster as its Mahalanobis distance to the nearest cluster within the same cell. Segmentation starts from the most separable cluster and is propagated through every cell of T. After each round, a new material is segmented out. The process proceeds until no new material is identified. Clearly, the user does not need to specify the number of materials in advance.

Figure 4: 1D illustration of the algorithm. (a) shows the histogram of BRDF samples in the initially selected cell. Two clusters are fitted to Gaussians, shown by the red and green curves. These samples’ corresponding vertices can be classified into two material groups V1 and V2 based on the two curves. (b) shows the histogram of another cell C to be separated. (c) and (d) show the histograms of BRDF samples in C whose corresponding vertices belong to V1 and V2 respectively. Unclassified vertices associated with C can be classified based on the curves in (c) and (d).

4.2.5 Post-processing

After identifying the vertices of the same material, we can merge their associated BRDF tables and obtain a more complete and accurate representation of the material’s BRDF. Even so, there are still empty cells in the BRDF table, mainly for two reasons. First, we discard BRDF values observed at grazing viewing or lighting angles. Second, because of the fixed baseline between the IR camera and the LEDs, θ_d varies only within a limited range, and there are no BRDF samples beyond it. Thus, our system can only provide a portion of the full BRDF. While we could fit the available BRDF samples to either a parametric model or a set of measured real-world BRDFs as in [13], we adopt the simple approach of linear interpolation and extrapolation to complete the BRDF table. Note that this completion is only for rendering purposes and should not be considered a valid measurement.
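The interpolation/extrapolation completion can be sketched with scipy (nearest-neighbor fill outside the convex hull of observed cells is our choice of extrapolation; the paper does not specify one):

```python
import numpy as np
from scipy.interpolate import griddata

def complete_table(table):
    """Fill empty (NaN) cells of a merged BRDF table channel by channel:
    linear interpolation inside the convex hull of observed cells, nearest
    neighbor outside it. For rendering only, not a valid measurement."""
    h, d = np.indices(table.shape[:2])
    known = ~np.isnan(table[..., 0])
    pts = np.stack([h[known], d[known]], axis=1)
    out = np.empty_like(table)
    for ch in range(table.shape[2]):
        vals = table[..., ch][known]
        lin = griddata(pts, vals, (h, d), method='linear')
        near = griddata(pts, vals, (h, d), method='nearest')
        out[..., ch] = np.where(np.isnan(lin), near, lin)
    return out
```

The piecewise-linear fill is continuous but not smooth, which is consistent with the circular rendering artifacts reported in the experiments.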

5 Experiments

Figure 5: (a) shows an input RGB image. (b) shows the mesh and camera trajectory. (c-d) are material segmentation results: (c) is the result on 100,000 sampled vertices while (d) is the final result. Note that not all vertices are classified into a material group. (e-f) are a pair of an original IR image and a rendering under the same condition using the estimated BRDFs. (g) shows renderings of the 3 dominant materials: the wall, the gymball and the board under the gymball. (h) shows a rendering of the scene under novel lighting.

We demonstrate our system with results on three real-world scenes. Note that the triangular meshes were produced by KinectFusion without any manual cropping.

Figure 5 shows results on the scene Gymball. (a) is an input RGB image and (b) shows the reconstructed 3D shape as well as the depth camera’s trajectory produced by KinectFusion.

Our material segmentation algorithm in Section 4.2.4 can be slow because of the meanshift clustering during initialization. Thus, we ran it on a set of 100,000 randomly sampled vertices instead of all vertices in the scene. (c) shows the color-coded segmentation result of the sampled vertices. Each unsampled vertex is simply assumed to be of the same material as the nearest sampled vertex within a small distance. (d) shows the final segmentation result for all vertices. Note that many vertices (in white) near the boundary of the scene are not classified into any material group, due to a lack of valid BRDF observations or a large discrepancy from other vertices’ BRDF observations.

In total, our system generated 6 material groups, containing 7450, 33089, 8655, 1389, 269 and 140 vertices respectively. The three largest material groups corresponded to the three dominant materials in the scene: the rubber gymball, the wall and the board beneath the gymball. The remaining three groups either corresponded to less dominant materials or were due to noise and large errors in surface normals. We rendered the three dominant materials in (g), where a sphere is rendered with each estimated BRDF under directional illumination. Our system successfully captured the highlight of the gymball. (e) shows one of the input IR images and (f) is a rendering of the scene under the same condition as (e). Clearly, (f) is a colored version of (e); this resemblance suggests the success of our reflectance estimation. (h) is a rendering of the scene under a novel lighting/viewing condition.

Please note that, as mentioned earlier, some vertices are not associated with any material. For these vertices, we assume a Lambertian BRDF to facilitate rendering. Also note that we mask out the four corners of (e) and (f), which were not used due to strong vignetting.

Figure 6 shows results on another scene, Pot. As can be seen, the different materials in the scene are well segmented by our algorithm, and the renderings of a sphere and of the original scene demonstrate successful capture of the highlight observed on the brown plastic pot. However, our system failed to capture the highlight of the blue pot, which is also made of plastic, because our sampling of vertices was too sparse to observe a highlight on it. For this scene, our system produced 19 material groups, 6 of which are dominant. (g) shows renderings of the 6 dominant materials, which correspond respectively to the brown pot, chair, curtain, blue pot, ground and wall.

Figure 7 shows results on the scene Spiderman. Our system again successfully segmented vertices of different materials, as shown in (c-d). Out of all 13 material groups produced by our system, 8 dominant groups have considerably more vertices than the rest. Renderings of the 8 materials are shown in (g). Notice the circular artifact in some of the renderings: for those materials, our system acquired only a portion of the BRDF, and the naive interpolation/extrapolation used to complete the BRDF does not guarantee smooth rendering.


Figure 6: This figure shows results of the scene Pot. The arrangement of subfigures is the same as in Figure 5. Note that the 6 materials in (g) correspond to the brown pot, chair, curtain, blue pot, ground and wall, respectively.
Figure 7: This figure shows results of the scene Spiderman, arranged in the same way as Figures 5 and 6. The 8 materials in (g) correspond to the pumpkin, the red region on Spiderman, the blue region on Spiderman, the board behind Spiderman, the box behind Spiderman's legs, the box in front of Spiderman, the wall and the supporting board under Spiderman.

Limitations As can be seen in Figure 6(e-f), the highlight of the brown pot in the original IR image is smoothed out in the re-rendering. Sharp specularity can only be faithfully captured with highly accurate calibration of surface normals, lighting and viewing directions, which is beyond the reach of our system. For the same reason, our system cannot handle heavily textured surfaces well.

6 Conclusion and Future Work

In this paper, we present a novel device for capturing both shape and BRDFs. Unlike traditional appearance capture systems, our device features not only portability and ease of use, but also the capability of acquiring the appearance of an entire scene rather than a single object.

The major difficulty in building such a system lies in handling incomplete and severely corrupted BRDF observations, a problem seldom encountered in previous systems thanks to their highly controlled environments. To solve it, we propose the material segmentation propagation algorithm, which automatically segments vertices into material groups using only sparse and noisy BRDF samples. We see our device as a stepping stone towards more capable RGBD-M(aterial) scanners.
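To make concrete why grouping helps, the toy sketch below greedily merges vertices whose sparse BRDF samples agree on the angle bins they share, pooling observations per group. This is NOT the paper's material segmentation propagation algorithm; the data layout (a dict mapping each vertex to its observed angle-bin samples) and the agreement threshold are assumptions for illustration only.

```python
import numpy as np

def group_by_brdf(samples, thresh=0.2):
    """Toy greedy grouping: a vertex joins the first existing group whose
    pooled samples agree with its own on every shared angle bin; otherwise
    it starts a new group. Pooling many vertices' samples yields a more
    complete, less noisy per-material BRDF.
    samples: dict vertex_id -> {angle_bin: reflectance}."""
    groups = []   # each group: dict angle_bin -> list of observed values
    assign = {}   # vertex_id -> group index
    for v, s in samples.items():
        placed = False
        for gi, merged in enumerate(groups):
            shared = set(s) & set(merged)
            if shared and all(abs(s[b] - np.mean(merged[b])) < thresh
                              for b in shared):
                for b, val in s.items():
                    merged.setdefault(b, []).append(val)
                assign[v] = gi
                placed = True
                break
        if not placed:
            groups.append({b: [val] for b, val in s.items()})
            assign[v] = len(groups) - 1
    return assign, groups
```

The real algorithm additionally propagates segmentation over the mesh and must cope with normal errors, which this sketch ignores.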

Currently our system processes data offline. We hope to make it run and provide feedback in real time, which is important for guiding the user toward better scans.


  • [1] M. Aittala, T. Weyrich, and J. Lehtinen. Practical SVBRDF capture in the frequency domain. ACM Trans. Graph., 32(4):110, 2013.
  • [2] N. Alldrin, T. Zickler, and D. Kriegman. Photometric stereo with non-parametric and spatially-varying reflectance. In Proc. CVPR, pages 1–8. IEEE, 2008.
  • [3] G. Choe, J. Park, Y.-W. Tai, and I. Kweon. Exploiting shading cues in Kinect IR images for geometry refinement. In Proc. CVPR, 2014.
  • [4] E. B. Dam, M. Koch, and M. Lillholm. Quaternions, interpolation and animation. Datalogisk Institut, Københavns Universitet, 1998.
  • [5] Y. Furukawa and J. Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell., 32(8):1362–1376, 2010.
  • [6] D. B. Goldman, B. Curless, A. Hertzmann, and S. M. Seitz. Shape and spatially-varying BRDFs from photometric stereo. IEEE Trans. Pattern Anal. Mach. Intell., 32(6):1060–1071, 2010.
  • [7] S. Haque, A. Chatterjee, V. M. Govindu, et al. High quality photometric reconstruction using a depth camera. In Proc. CVPR, pages 2283–2290. IEEE, 2014.
  • [8] C. Hernández, G. Vogiatzis, and R. Cipolla. Multiview photometric stereo. IEEE Trans. Pattern Anal. Mach. Intell., 30(3):548–554, 2008.
  • [9] M. Holroyd, J. Lawrence, and T. Zickler. A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance. ACM Trans. Graph. (Proc. of SIGGRAPH), 29(4):99, 2010.
  • [10] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, et al. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Proceedings of the 24th annual ACM symposium on User interface software and technology, pages 559–568. ACM, 2011.
  • [11] H. Lensch, J. Kautz, M. Goesele, W. Heidrich, and H.-P. Seidel. Image-based reconstruction of spatial appearance and geometric detail. ACM Trans. Graph. (Proc. of SIGGRAPH), 22(2):234–257, 2003.
  • [12] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison. DTAM: Dense tracking and mapping in real-time. In Proc. ICCV, pages 2320–2327. IEEE, 2011.
  • [13] T. Nöll, J. Köhler, and D. Stricker. Robust and accurate non-parametric estimation of reflectance using basis decomposition and correction functions. In Proc. ECCV, pages 376–391. Springer, 2014.
  • [14] F. Romeiro, Y. Vasilyev, and T. Zickler. Passive reflectometry. In Proc. ECCV, pages 859–872. Springer, 2008.
  • [15] S. M. Rusinkiewicz. A new change of variables for efficient BRDF representation. In Rendering Techniques ’98, pages 11–22. Springer, 1998.
  • [16] Y. Sato, M. D. Wheeler, and K. Ikeuchi. Object shape and reflectance modeling from observation. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 379–387. ACM Press/Addison-Wesley Publishing Co., 1997.
  • [17] B. Tunwattanapong, G. Fyffe, P. Graham, J. Busch, X. Yu, A. Ghosh, and P. Debevec. Acquiring reflectance and shape from continuous spherical harmonic illumination. ACM Trans. Graph. (Proc. of SIGGRAPH), 32(4):109, 2013.
  • [18] J. Wang, S. Zhao, X. Tong, J. Snyder, and B. Guo. Modeling anisotropic surface reflectance with example-based microfacet synthesis. In ACM Transactions on Graphics (TOG), volume 27, page 41. ACM, 2008.
  • [19] C. Wu, M. Zollhöfer, M. Nießner, M. Stamminger, S. Izadi, and C. Theobalt. Real-time shading-based refinement for consumer depth cameras. Proc. SIGGRAPH Asia, 2014.
  • [20] H. Wu and K. Zhou. AppFusion: Interactive appearance acquisition using a Kinect sensor. In Computer Graphics Forum. Wiley Online Library, 2015.
  • [21] Z. Wu and P. Tan. Calibrating photometric stereo by holistic reflectance symmetry analysis. In Proc. CVPR, pages 1498–1505. IEEE, 2013.
  • [22] L.-F. Yu, S.-K. Yeung, Y.-W. Tai, and S. Lin. Shading-based shape refinement of RGB-D images. In Proc. CVPR, pages 1415–1422. IEEE, 2013.
  • [23] Y. Yu, P. Debevec, J. Malik, and T. Hawkins. Inverse global illumination: Recovering reflectance models of real scenes from photographs. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 215–224. ACM Press/Addison-Wesley Publishing Co., 1999.
  • [24] Q.-Y. Zhou and V. Koltun. Dense scene reconstruction with points of interest. ACM Trans. Graph. (Proc. of SIGGRAPH), 32(4):112, 2013.
  • [25] Q.-Y. Zhou and V. Koltun. Color map optimization for 3d reconstruction with consumer depth cameras. ACM Trans. Graph. (Proc. of SIGGRAPH), 33(4):155, 2014.
  • [26] Z. Zhou, Z. Wu, and P. Tan. Multi-view photometric stereo with spatially varying isotropic materials. In Proc. CVPR, pages 1482–1489. IEEE, 2013.
  • [27] M. Zollhöfer, A. Dai, M. Innmann, C. Wu, M. Stamminger, C. Theobalt, and M. Nießner. Shading-based refinement on volumetric signed distance functions. Proc. SIGGRAPH, 2015.