ABO: Dataset and Benchmarks for Real-World 3D Object Understanding

10/12/2021, by Jasmine Collins et al.

We introduce Amazon-Berkeley Objects (ABO), a new large-scale dataset of product images and 3D models corresponding to real household objects. We use this realistic, object-centric 3D dataset to measure the domain gap for single-view 3D reconstruction networks trained on synthetic objects. We also use multi-view images from ABO to measure the robustness of state-of-the-art metric learning approaches to different camera viewpoints. Finally, leveraging the physically-based rendering materials in ABO, we perform single- and multi-view material estimation for a variety of complex, real-world geometries. The full dataset is available for download at https://amazon-berkeley-objects.s3.amazonaws.com/index.html.


1 Introduction

Progress in 2D image recognition has been driven by large-scale datasets [29, 10, 36]. The ease of collecting 2D annotations (such as class labels or segmentation masks) has enabled the scale of these diverse, in-the-wild datasets, which in turn has enabled the development of 2D computer vision systems that work in the real world. Ideally, progress in 3D computer vision should follow from equally large-scale datasets of 3D objects. However, collecting large amounts of high-quality 3D annotations (such as voxels or meshes) for individual real-world objects poses a challenge. One way around the difficulty of obtaining 3D annotations for real images is to focus only on synthetic, computer-aided design (CAD) models [5, 67, 27]. This approach has the advantage of scale (many 3D CAD models are available for download online), but most objects are untextured and there is no guarantee that they exist in the real world; furniture items, for example, tend to be outdated or unrealistic. As a result, the models tend to have abstract or uncommon geometries, and the models that are textured are quite simplistic. This has led to a variety of 3D reconstruction methods that work well on clear-background renderings of synthetic objects [7, 19, 61, 41] but do not necessarily generalize to real images or more complex object geometries.

To enable better real-world transfer, another class of 3D datasets aims to link existing 3D models with real-world images [60, 59]. These datasets find the closest matching CAD model for the objects in an image and have human annotators align the pose of the model to best match the image. While this has enabled the evaluation of 3D reconstruction methods in-the-wild, the shape (and thus pose) matches are approximate. Because this approach relies on matching CAD models to images, it inherits the limitations of the existing CAD model datasets (i.e. poor coverage of real-world objects, basic geometries and simplistic or no textures).

The IKEA [34] and Pix3D [53] datasets sought to improve upon this by annotating real images with exact, pixel-aligned 3D models. The exact nature of such datasets has allowed them to be used as training data for single-view reconstruction [16] and has bridged some of the synthetic-to-real domain gap. However, the sizes of these datasets are relatively small (90 and 395 unique 3D models, respectively), likely due to the difficulty of finding images that correspond to exact matches of 3D models, and the larger of the two datasets [53] covers only 9 object categories. The provided 3D models are also untextured, so the annotations in these datasets are largely used for shape- or pose-based tasks rather than tasks such as material prediction.

Rather than trying to match images to synthetic 3D models, another approach to collecting 3D datasets is to start with real images (or video) and reconstruct the scene using classical techniques such as structure from motion, multi-view stereo and texture mapping [6, 49, 47]. The benefit of these methods is that the reconstructed geometry faithfully represents a real-world object. However, the collection process requires significant manual effort, so datasets of this nature also tend to be quite small (398, 125, and 1,032 unique 3D models, respectively). The objects are also typically imaged in a controlled lab setting and do not have corresponding real images of the object “in context”. Further, the reconstructed textures assume Lambertian surfaces and therefore do not exhibit realistic reflectance properties.

Motivated by the lack of large-scale datasets with realistic 3D objects from a diverse set of categories and corresponding real-world multi-view images, we introduce Amazon-Berkeley Objects (ABO). This dataset is derived from Amazon.com product listings, and as a result contains imagery and 3D models that correspond to modern, real-world, household items. Overall, ABO contains 147,702 product listings associated with 398,212 unique images, and up to 18 unique attributes (category, color, material, weight, dimensions, etc.) per product. ABO also includes “360º View” turntable-style images for a subset of products, as well as 7,953 products with corresponding artist-designed 3D meshes. Because the 3D models are designed by artists, they are equipped with high-resolution, physically-based materials that allow for photorealistic rendering. Examples of the kinds of data in ABO can be found in Figure 1.

In this work, we present the different properties of ABO and compare it to other existing 3D datasets as well as datasets for metric learning. We also use ABO to benchmark the performance of state-of-the-art algorithms for shape understanding tasks such as single-view 3D reconstruction (Section 4.1), multi-view image retrieval (Section 4.2), and material estimation (Section 4.3).

Dataset | # Objects | # Classes | Real images | Full 3D | PBR
ShapeNet [5] | 51.3K | 55 | ✗ | ✓ | ✗
Pix3D [53] | 395 | 9 | ✓ | ✓ | ✗
Google Scans [47] | 1K | - | ✗ | ✓ | ✗
3D-Future [14] | 16.6K | 8 | ✗ | ✓ | ✗
CO3D [46] | 18.6K | 50 | ✓ | ✗ | ✗
PhotoShape [45] | 5.8K | 1 | ✗ | ✓ | ✓
ABO (Ours) | 8K | 98 | ✓ | ✓ | ✓
Table 1: A comparison of the 3D models in ABO and other commonly used object-centric 3D datasets. ABO contains nearly 8K 3D models with physically-based rendering (PBR) materials and corresponding real-world catalog images.

2 Related Work

Datasets While 2D and single-view in nature, UT Zappos50K [64] is also a product listing dataset, containing 50K images of shoes with relative attribute annotations. By scraping Amazon, [51] obtained 150K single images of products with corresponding weights and dimensions. [40] gathered 84 million unique product reviews and ratings, as well as metadata such as category and price. The Online Products dataset [44] is similar to ours in that it contains images mined from product listings (eBay); however, it contains only 20K products from 12 categories and has no associated metadata or 3D models.

ShapeNet [5] is a large-scale database of synthetic 3D CAD models commonly used for training single- and multi-view reconstruction models. IKEA Objects [35] and Pix3D [53] are image collections with 2D-3D alignment between CAD models and real images, however these images are limited to objects for which there is an exact CAD model match. Similarly, Pascal3D+ [60] and ObjectNet3D [59] provide 2D-3D alignment for images and provide more instances and categories, however the 3D annotations are only approximate matches. Multi-View Cars [24] and BigBIRD [49] are real-world object-focused multi-view datasets but have limited numbers of instances and categories. The Object Scans dataset [6] and Objectron [1] are both video datasets that have the camera operator walk around various objects, but are similarly limited in the number of categories represented. CO3D [46] also offers videos of common objects from 50 different categories, however it does not provide full 3D mesh reconstructions. There also exist many high-quality datasets specifically made for 3D scene understanding [50, 52, 58, 48] rather than individual objects.

Existing 3D datasets assume very simplistic texture models that are not physically realistic. To improve on this, PhotoShape [45] augmented ShapeNet CAD models by automatically mapping bidirectional reflectance distribution functions (BRDFs) to meshes, yet the dataset consists only of chairs. The works in [12, 15] provide high-quality spatially-varying BRDF maps, but only for planar surfaces. The dataset used in [25] contains only homogeneous BRDFs. [32] and [2] introduce datasets containing full spatially-varying BRDFs, however their models are procedurally generated shapes that do not correspond to real objects. In contrast, ABO provides shapes and spatially-varying BRDFs created by professional artists for real-life objects that can be directly used for photorealistic rendering.

Table 1 compares ABO with other commonly used 3D datasets in terms of size (number of objects and classes) and properties such as the presence of real images, full 3D meshes and physically-based rendering (PBR) materials. ABO is the only dataset that contains all of these properties and is much more diverse in number of categories than existing 3D datasets.

3D Shape Reconstruction Recent methods for single-view 3D reconstruction differ mainly in the type of supervision and 3D representation used, whether it be voxels, point clouds, meshes, or implicit functions. Methods that require full shape supervision in the single-view [13, 66, 53, 41, 17] and multi-view [23, 7, 61] case are often trained using ShapeNet. Other approaches use more natural forms of multi-view supervision, such as images, depth maps, and silhouettes [62, 57, 23, 55], with known cameras. Of course, multi-view 3D reconstruction has long been studied with classical computer vision techniques [20] such as multi-view stereo and visual hull reconstruction. Learning-based methods are typically trained in a category-specific way and evaluated on new instances from the same categories. Of the works mentioned, only [66] claims to be category-agnostic.

As existing methods are largely trained in a fully supervised manner using ShapeNet [5], we are interested in how well they will transfer to more real-world objects. To measure how well these models transfer to real object instances, we evaluate the performance of a variety of these methods on our new dataset. Specifically we evaluate 3D-R2N2 [7], GenRe [66], Occupancy Networks [41], and Mesh R-CNN [17]. We selected these methods because they capture some of the top-performing single-view 3D reconstruction methods from the past few years and are varied in the type of 3D representation that they use (voxels in [7], spherical maps in [66], implicit functions in [41], and meshes in [17]) and the coordinate system used (canonical vs. view-space).

Chamfer Distance (↓), reported as ABO / ShapeNet:

Method | bench | chair | couch | dresser | lamp | table
3D R2N2 [7] | 2.10 / 0.85 | 1.45 / 0.77 | 1.26 / 0.59 | 1.82 / 0.25 | 3.78 / 2.02 | 2.78 / 0.66
Occ Nets [41] | 1.61 / 0.51 | 0.74 / 0.39 | 0.93 / 0.30 | 0.78 / 0.23 | 2.52 / 1.66 | 1.75 / 0.41
GenRe [66] | 1.55 / 2.86 | 0.92 / 0.79 | 1.19 / 2.18 | 1.49 / 2.03 | 3.73 / 2.47 | 2.22 / 2.37
Mesh R-CNN [17] | 1.31 / 0.09 | 0.78 / 0.13 | 0.54 / 0.10 | 0.69 / 0.11 | 2.12 / 0.24 | 1.16 / 0.12

Absolute Normal Consistency (↑), reported as ABO / ShapeNet:

Method | bench | chair | couch | dresser | lamp | table
3D R2N2 [7] | 0.53 / 0.55 | 0.59 / 0.61 | 0.57 / 0.62 | 0.54 / 0.67 | 0.51 / 0.54 | 0.51 / 0.65
Occ Nets [41] | 0.66 / 0.68 | 0.73 / 0.76 | 0.71 / 0.77 | 0.72 / 0.77 | 0.65 / 0.69 | 0.67 / 0.78
GenRe [66] | 0.63 / 0.56 | 0.68 / 0.67 | 0.66 / 0.60 | 0.62 / 0.59 | 0.59 / 0.57 | 0.61 / 0.59
Mesh R-CNN [17] | 0.62 / 0.65 | 0.62 / 0.70 | 0.63 / 0.72 | 0.65 / 0.74 | 0.58 / 0.66 | 0.62 / 0.74
Table 2: Single-view 3D reconstruction generalization from ShapeNet to ABO. Chamfer distance and absolute normal consistency of predictions made on ABO objects from common ShapeNet classes. We also report these metrics for ShapeNet objects (the value after the slash), following the same evaluation protocol. All methods, with the exception of GenRe, are trained on all of the ShapeNet categories listed.

Representation and Metric Learning Learning to represent 3D shapes and natural images of products in a single embedding space has been tackled by [31]. They consider various relevant tasks, including cross-view image retrieval, shape-based image retrieval and image-based shape retrieval, but all are inherently constrained by the limitations of ShapeNet [5] (cross-view image retrieval is only considered for chairs and cars). [28] introduced 3D object representations for fine-grained recognition and a dataset of cars with real-world 2D imagery (CARS-196), which is now widely used for metric learning evaluation. Similarly, other datasets for evaluating image-based representation/metric learning approaches typically focus on a single object type, such as birds [56] or clothes [37]. In contrast, we derive from ABO a challenging benchmark dataset with 576 unique categories and known azimuths for test query images, to measure the robustness of representations with respect to viewpoint changes.

Designs of losses for representation and metric learning in the recent literature [43] can be broadly summarized into: (i) pair-based losses [22, 44], where losses are computed from 2, 3 or more distances between samples in the batch, with the goal of minimizing distances between sample pairs from the same class while pushing apart sample pairs from different classes; (ii) classification losses [65], which use instance labels as classes to optimize the feature extraction backbone; and (iii) proxy losses [42, 26, 54], where data samples are summarized into proxies that are learnt jointly with the representation. In our experiments, we compare NormSoftmax [65] (classification), CGD [22] (classification and triplet-based ranking loss) and ProxyNCA++ [54] (proxy), thus covering the major trends in deep metric learning.
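To make the classification-based family concrete, the following is a minimal sketch of a NormSoftmax-style loss (cosine logits against learned class proxies, scaled by a temperature). It is a simplified illustration, not the reference implementation of [65], and the initialization scale and temperature are assumed values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormSoftmaxLoss(nn.Module):
    """Classification-style metric learning loss in the spirit of NormSoftmax:
    cosine similarities between L2-normalized embeddings and L2-normalized
    class proxies, scaled by a temperature and fed to cross-entropy."""

    def __init__(self, embed_dim, num_classes, temperature=0.05):
        super().__init__()
        # One learnable proxy (weight vector) per class; scale is an assumption.
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim) * 0.01)
        self.temperature = temperature

    def forward(self, embeddings, labels):
        embeddings = F.normalize(embeddings, dim=1)
        proxies = F.normalize(self.proxies, dim=1)
        logits = embeddings @ proxies.t() / self.temperature
        return F.cross_entropy(logits, labels)

# Tiny usage example with random features and labels.
loss_fn = NormSoftmaxLoss(embed_dim=128, num_classes=10)
print(loss_fn(torch.randn(8, 128), torch.randint(0, 10, (8,))))
```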

Material Estimation Several works have focused on modeling object appearance from a single image. [30] use two networks to estimate a homogeneous BRDF and a spatially-varying BRDF of a flat surface from a single image, employing a self-augmentation scheme to cope with the need for a large training set. Their work, however, is limited to a specific family of materials, and each material requires a separately trained network. [63] extend this idea of self-augmentation by using similar CNN structures to train with unlabeled data, but are subject to the same constraints as [30]. [11] propose a modified U-Net with a rendering loss to predict the spatially-varying BRDF of a flash-lit photograph of a flat surface. This method achieves promising results for flat surfaces with different kinds of materials, but it cannot handle objects of arbitrary shape. [32] propose a cascaded CNN architecture with a single encoder and a separate decoder for each spatially-varying BRDF parameter to handle complex shapes. They include an additional side network to predict global illumination and use the cascading design to refine the results. This method performs well in semi-uncontrolled lighting environments, but it requires using the intermediate bounces of global illumination rendering as supervision.

More recent works use multiple images to improve spatially-varying BRDF estimation. For instance, [12] and [15] use multiple flash-lit input images, but still only for a single planar surface. [2] estimate the spatially-varying BRDF of an object with complex shape from six wide-baseline multi-view images with collocated point lighting, with a network trained on procedurally generated shapes. [3] and [33] both manipulate the lighting setup, using either flash/no-flash image pairs or multiple images with different incident light directions to estimate spatially-varying BRDFs. In this work, we propose a baseline method that can handle complex, real-world shapes.

3 The ABO dataset

The ABO dataset originates from worldwide product listings, metadata, images and 3D models provided directly by Amazon.com. This data consists of 147,702 listings of Amazon products from 576 product categories and 50 shops (e.g. Amazon, PrimeNow, Whole Foods). Each listing is identified by an item ID and the shop in which it is sold, and is provided with structured metadata corresponding to information that is publicly available on the listing’s main webpage (such as category, manufacturer, material, color, dimensions, …) as well as the media available for that product. This media includes the high-resolution catalog images (398,212 images, with up to 7M pixels and 90% above 1M pixels), the turntable images used for the “360º View” feature, which shows the product imaged at 5º or 15º azimuth intervals, and the high-quality 3D models in glTF 2.0 format used for generating photo-realistic images (7,953 models), when available for the listing. Further, the 3D models are oriented in a canonical coordinate system in which the “front” (when well defined) of all objects is aligned. For each listing, a single image is marked as the one chosen to illustrate the product on its webpage, which we refer to below as the main image.

4 Experiments

Figure 2: Qualitative 3D reconstruction results for R2N2, Occupancy Networks, GenRe, and Mesh R-CNN. Most methods tend to reconstruct stereotypical examples from the training classes well, but struggle with outlier objects such as the lamp in the 3rd row.

4.1 Evaluating Single-View 3D Reconstruction

While ShapeNet has enabled tremendous progress in methods for 3D reconstruction, the evaluation of these methods is often limited to the commonly used ShapeNet classes (airplanes, chairs, tables, …) and the subset of geometries for which CAD models exist. Evaluations on real-world images are thus strictly qualitative or performed on datasets such as Pix3D [53]. Here, we are interested in the ability of state-of-the-art single-image shape reconstruction methods to generalize to the real objects in ABO. To study this (irrespective of the question of cross-category generalization), we consider only the subset of the 7,953 3D models that fall into ShapeNet training categories. Unlike models in ShapeNet, the 3D models in ABO can be photo-realistically rendered due to their detailed textures and non-Lambertian BRDFs. We render each 3D mesh using Blender [8] into 16-bit 1024x1024 RGB-alpha images from 30 random cameras, each with a fixed field of view and placed sufficiently far away for the entire object to be visible. Camera azimuth and elevation are sampled uniformly on the surface of a unit sphere, with a lower limit on elevation to avoid uncommon bottom views. We use a publicly available HDRI downloaded from hdrihaven.com [18] for scene lighting.
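The camera sampling can be sketched as follows. This is an illustrative reimplementation rather than our actual rendering script; the elevation floor and camera radius are assumed values:

```python
import numpy as np

def sample_cameras(n_views=30, min_elevation_deg=10.0, radius=2.5, seed=0):
    """Sample camera positions roughly uniformly on a sphere around the object,
    rejecting low elevations (uncommon bottom views)."""
    rng = np.random.default_rng(seed)
    cams = []
    while len(cams) < n_views:
        # Uniform direction on the unit sphere via a normalized Gaussian sample.
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        elevation = np.degrees(np.arcsin(v[2]))
        if elevation < min_elevation_deg:  # reject bottom views
            continue
        azimuth = np.degrees(np.arctan2(v[1], v[0]))
        cams.append({"position": radius * v,
                     "azimuth_deg": azimuth,
                     "elevation_deg": elevation})
    return cams

if __name__ == "__main__":
    for cam in sample_cameras(5):
        print(round(cam["azimuth_deg"], 1), round(cam["elevation_deg"], 1))
```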

Out of the 98 categories in ABO with 3D models, we are able to map 13 of them to 6 commonly used ShapeNet classes (e.g. “table” and “desk” to “table”), capturing 4,193 of the 7,953 objects. Some common ShapeNet classes, such as “airplane”, have no matching category; similarly, some ABO categories, such as “kitchen items”, do not map well to any ShapeNet class.

View-Space Evaluations We consider two methods for single-view 3D reconstruction that make predictions in “view-space” (i.e. pose-aligned to the image view): GenRe [66] and Mesh R-CNN [17]. Since view-space predictions can be simultaneously scaled and translated in depth while preserving their projection onto the camera, we must resolve this depth ambiguity to align the predicted and ground truth (GT) meshes for benchmarking. We use known camera extrinsics to transform the GT mesh into view-space, and align it with the predicted mesh by solving for a Chamfer-distance-minimizing depth. In practice, we normalize the average vertex depth of each mesh independently and then search through candidate depths. All models we consider are pre-trained on ShapeNet but, unlike the other methods, GenRe trains on a different set of classes and takes a silhouette mask as input at train and test time. We compare the predicted and GT meshes after alignment following the evaluation protocol in [13, 17] and report Chamfer distance and Absolute Normal Consistency. Since Chamfer distance varies with the scale of the mesh, we follow [13, 17] and scale the meshes such that the longest edge of the GT mesh bounding box has length 10.
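A minimal sketch of this depth-alignment step is shown below. The candidate depth range and step size are assumptions, and the Chamfer distance is a brute-force point-set implementation intended only for illustration:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def align_depth(pred_pts, gt_pts, candidates=np.linspace(-1.0, 1.0, 41)):
    """Search for the depth offset along the viewing axis (+z here) that best
    aligns a view-space prediction with the ground truth, after normalizing
    the mean vertex depth of each point set independently."""
    pred = pred_pts - np.array([0.0, 0.0, pred_pts[:, 2].mean()])
    gt = gt_pts - np.array([0.0, 0.0, gt_pts[:, 2].mean()])
    best_dz = min(candidates, key=lambda dz: chamfer(pred + np.array([0.0, 0.0, dz]), gt))
    return pred + np.array([0.0, 0.0, best_dz]), gt, best_dz

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(200, 3))
    pred = gt + np.array([0.0, 0.0, 0.3])  # same shape, shifted in depth
    _, _, dz = align_depth(pred, gt)
    print("recovered depth offset:", round(dz, 2))
```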

Canonical-Space Evaluations We consider two more methods, R2N2 [7] and Occupancy Networks [41], that make predictions in canonical space; that is, the predictions are made in the same category-specific, canonical pose regardless of the object’s pose in the image. To evaluate shapes predicted in canonical space, we must first align them with the GT shapes. Relying on the cross-category semantic alignment of models in both ShapeNet and ABO, we use a single (manually set) rotation alignment for the entire dataset. We then solve for the relative translation and scale, which remain inherently ambiguous, to minimize the Chamfer distance between the two meshes. In practice, we search over a grid of candidate scales and translations after mean-centering (to the vertex centroid) and re-scaling (via the standard deviation of vertex distances to the centroid) the two meshes independently. Note that R2N2 [7] predicts a voxel grid, so we convert it to a mesh for benchmarking. Marching Cubes [38] is one way to achieve this; however, we follow the more efficient and easily batched protocol of [17], which replaces every occupied voxel with a cube, merges vertices, and removes internal faces.
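For illustration, the simpler Marching Cubes route can be sketched as follows (our experiments instead follow the cubify-style protocol of [17]); the grid size and iso-level in this example are arbitrary:

```python
import numpy as np
from skimage import measure  # scikit-image

def voxels_to_mesh(occupancy, threshold=0.5, voxel_size=1.0):
    """Convert a predicted occupancy grid (D, H, W) into a triangle mesh
    with marching cubes; returns vertex positions and face indices."""
    verts, faces, _normals, _values = measure.marching_cubes(
        occupancy.astype(np.float32), level=threshold,
        spacing=(voxel_size,) * 3)
    return verts, faces

# Example: a solid 8x8x8 block inside a 32^3 grid.
grid = np.zeros((32, 32, 32), dtype=np.float32)
grid[12:20, 12:20, 12:20] = 1.0
v, f = voxels_to_mesh(grid)
print(v.shape, f.shape)
```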

Results A quantitative comparison of the four methods can be found in Table 2. We evaluated each method separately for categories found in both the ShapeNet training set and in ABO (bench, chair, couch, dresser, lamp, table) to assess category-specific performance. We also re-evaluated each method’s predictions on the ShapeNet test set from R2N2 [7] with our evaluation protocol and report those metrics. As can be seen, there is a large performance gap between the ShapeNet and ABO predictions for most categories and all methods, with the exception of GenRe (which was only trained on chairs, cars and airplanes). This suggests that shapes and textures from ABO, while derived from the same categories but coming from the real world, are out of distribution for models trained on ShapeNet. Further, we notice that the lamp category has a particularly large performance drop from ShapeNet to ABO. We highlight some qualitative results in Figure 2, including one particularly challenging lamp instance.

4.2 Multi-View Object Retrieval

Dataset Curation In this work we are interested in multi-view 3D understanding, and thus we focused on rigid objects as determined by their categories. We manually filtered categories, removing garments (shirts, pants, sweaters, …), home linens (bed linen, towels, rugs, …) and some accessories (cellphone accessories, animal harnesses, cables, …). Despite this filtering, 145,825 listings remain (spanning 565 different categories), with 6,183 products containing 360-views.

To build a meaningfully challenging benchmark for metric learning from online product listings, we must avoid having near-duplicate objects end up in both the train and test splits. For example, the same physical product can be sold in different shops but be given a different item ID and a different (e.g., translated) title. Other examples are the variations in available sizes, patterns or colors for furniture, or the variations in specifications for electronics (e.g., amount of internal memory). Although a metadata attribute indicates which listings are variations of each other, only a subset of variations correspond to visually distinct products. To get around this and build meaningful product groups, we identified connected components in the graph of listing IDs and their main product images using the Union-Find algorithm. This process yielded 51,250 product groups as connected components.
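A minimal sketch of this grouping step is given below, assuming a simple mapping from listing IDs to main-image IDs; the data layout is illustrative, not ABO’s actual schema:

```python
class UnionFind:
    """Minimal union-find (disjoint set) with path compression."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def group_listings(listing_to_main_image):
    """Group listing IDs that share the same main product image by taking
    connected components of the listing-image graph."""
    uf = UnionFind()
    for listing_id, image_id in listing_to_main_image.items():
        uf.union(("listing", listing_id), ("image", image_id))
    groups = {}
    for listing_id in listing_to_main_image:
        root = uf.find(("listing", listing_id))
        groups.setdefault(root, []).append(listing_id)
    return list(groups.values())

print(group_listings({"A1": "img1", "A2": "img1", "B1": "img2"}))
# [['A1', 'A2'], ['B1']]
```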

To create training splits for our benchmarks, we ensure that groups sharing any image are assigned to the same splits by identifying connected components of product groups and all of their images (not just the main image). In our multi-view retrieval benchmark, all the instances of each cluster are assigned together to either train, val or test splits. As the intent of this benchmark is to use known azimuth information of 360-view images to measure the robustness of state-of-the-art object retrieval methods with respect to viewpoint changes, we kept product clusters with any instance containing 360-view images for the test set.

We assigned the remaining clusters randomly between train (90%) and val (10%) sets. The images in the val set are split into val-query, with 1 random catalog image per instance, and val-target, containing the remaining images of each instance. The validation performance, which is used to tune model hyperparameters (e.g. the best epoch), is measured using val-query as queries (4,294 images) against the union of train and val-target (162,386 images).

Finally, the test set is composed of test-target, containing all the 33,424 catalog images of the test instances, and test-query, which is built from 4 randomly chosen views (i.e., 4 different azimuths) from the 360-spin images for each test instance (23,860 images). The test performance is obtained by measuring the instance-level retrieval metrics (recall at 1, 2, 4 and 8) of the test-query images against the union of train, val and test-target (i.e., 200,103 images in total), as an aggregate over all queries but also per azimuth increments. Table 3 summarizes the statistics of this new retrieval benchmark in comparison to the most commonly used benchmark datasets for object retrieval in the literature.
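For reference, instance-level Recall@k over a query set and gallery can be computed as in the following generic sketch (not the benchmark’s exact evaluation code); labels are assumed to be NumPy integer arrays identifying the instance of each image:

```python
import numpy as np

def recall_at_k(query_emb, query_labels, gallery_emb, gallery_labels,
                ks=(1, 2, 4, 8)):
    """Fraction of queries whose k nearest gallery embeddings (by cosine
    similarity) contain at least one image of the same instance."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                       # (num_queries, num_gallery)
    ranked = np.argsort(-sims, axis=1)   # best matches first
    results = {}
    for k in ks:
        topk_labels = gallery_labels[ranked[:, :k]]
        hits = (topk_labels == query_labels[:, None]).any(axis=1)
        results[k] = hits.mean()
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, g = rng.normal(size=(10, 8)), rng.normal(size=(50, 8))
    ql, gl = rng.integers(0, 5, 10), rng.integers(0, 5, 50)
    print(recall_at_k(q, ql, g, gl))
```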

Method

To evaluate their performance on our multi-view retrieval benchmark, we use the available PyTorch implementations of three recent works that are state-of-the-art for their respective approach to deep metric learning: NormSoftmax (classification-based), ProxyNCA++ (proxy-based) and CGD (triplet-based). Each model is configured to produce 2048-D embeddings from a ResNet-50 [21] backbone and is trained for 60 epochs using the respective optimization parameters designed for the InShop dataset. To ensure a fair comparison, we standardized batch generation and the initial steps of image pre-processing across approaches: we used an initial image padding transformation to obtain undistorted square images before resizing to 256x256, and used class-balanced batches of 15 classes with 5 samples each. We chose the best checkpoint for each model based on the Recall@1 metric on the validation setup, computing it only every 5 epochs.

For test images, we also padded images to square but resized them to 224x224 directly, skipping the common center-crop step. This small modification of pre-processing takes advantage of the good imaging conditions of the query images (360-views), yielding performance improvements of about +3% in R@1 across the board.
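A sketch of these pre-processing pipelines using torchvision is shown below; the padding fill value and the normalization statistics are assumptions, as they are not specified above:

```python
from PIL import Image
import torchvision.transforms as T
import torchvision.transforms.functional as F

def pad_to_square(img: Image.Image) -> Image.Image:
    """Pad the shorter side so the image becomes square without distortion."""
    w, h = img.size
    side = max(w, h)
    pad_left = (side - w) // 2
    pad_top = (side - h) // 2
    # White fill is an assumption (catalog images have white backgrounds).
    return F.pad(img, [pad_left, pad_top,
                       side - w - pad_left, side - h - pad_top], fill=255)

imagenet_norm = T.Normalize(mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225])

# Training-time pipeline: pad to square, then resize to 256x256.
train_tf = T.Compose([T.Lambda(pad_to_square), T.Resize((256, 256)),
                      T.ToTensor(), imagenet_norm])

# Test-time pipeline: pad to square and resize directly to 224x224,
# skipping the usual center crop.
test_tf = T.Compose([T.Lambda(pad_to_square), T.Resize((224, 224)),
                     T.ToTensor(), imagenet_norm])
```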

Benchmark | Domain | # Classes | # Instances (train / val / test) | # Images (train / val / test-target / test-query) | Structure | SOTA R@1
CUB-200-2011 | Birds | 200 | - / - / - | 5,994 / 0 / - / 5,794 | 15 parts | 79.2% [22]
Cars-196 | Cars | 196 | - / - / - | 8,144 / 0 / - / 8,041 | - | 94.8% [22]
In-Shop | Clothes | 25 | 3,997 / 0 / 3,985 | 25,882 / 0 / 12,612 / 14,218 | Landmarks, Pose, Segm. | 92.6% [26]
SOP | Ebay | 12 | 11,318 / 0 / 11,316 | 59,551 / 0 / - / 60,502 | - | 84.2% [22]
ABO (MVR) | Amazon | 565 | 39,509 / 4,294 / 5,687 | 150,214 / 16,465 / 33,424 / 23,860 | Azimuth for queries | 58.0% [65]
Table 3: Statistics for retrieval datasets commonly used to benchmark metric learning approaches. Our proposed multi-view retrieval benchmark based on ABO is significantly larger, more diverse and challenging than existing benchmarks, and exploits azimuth available for test queries.

Results As shown in Table 4, state-of-the-art deep metric learning methods provide significant improvements over a ResNet-50 baseline trained on ImageNet. NormSoftmax performs significantly better than the alternatives, confirming that classification is a strong approach for multi-view object retrieval. Moreover, it is worth noting that the performance of these models is significantly lower than what they achieve on existing metric learning benchmarks (see Table 3). This confirms the risks of saturation and unfair evaluations in existing benchmarks [43] and the need for novel metric learning approaches to handle the large scale and unique challenges of our new benchmark.

Further, the varying azimuths available for test queries allow us to measure how the performance of state-of-the-art approaches degrades as the azimuth diverges from the typical product viewpoints in Amazon.com’s catalog images. Figure 3 highlights two main azimuth regimes, with a consistently large gap, for all approaches, in favor of views closer to typical catalog viewpoints. Particularly challenging are perfect frontal, perfect back and side views, which show relatively lower performance within their regimes. Closing this gap is an interesting direction of future research for multi-view object retrieval.

Recall@k (%)

Method | R@1 | R@2 | R@4 | R@8
ResNet-50 (ImageNet) | 37.6 | 47.6 | 55.6 | 61.9
NormSoftmax [65] | 58.0 | 69.4 | 76.8 | 81.8
ProxyNCA++ [54] | 47.3 | 58.6 | 67.8 | 74.6
CGD [22] | 48.8 | 60.0 | 68.4 | 74.5
Table 4: Test performance of the state-of-the-art deep metric learning methods on the ABO retrieval benchmark. NormSoftmax outperforms all other methods.
Figure 3: Recall@1 as a function of the azimuth of the product view. Dashed lines correspond to the averages of the methods with the same color, as reported in Table 4. We find retrieval performance is sensitive to object pose.

4.3 Material Prediction

There exist a handful of publicly available datasets with large collections of 3D objects [5, 6, 14], but they do not contain physically accurate reflectance parameters that can be used for physically based rendering to generate photorealistic images. On the other hand, the realistic 3D models in ABO are artist-created and have highly varied shapes and spatially-varying BRDFs that enable material estimation for complex geometries.

Dataset Curation We use 7,695 3D models in ABO to create a benchmark for estimating per-pixel material maps. The BRDF factorization we use is the Cook-Torrance microfacet BRDF model [9] with the Disney [4] basecolor-metallic-roughness parameterization specified in the glTF 2.0 specification. We use Blender [8] and its path tracer Cycles to render each model into 512x512 images from 91 camera positions along an upper icosphere around the object, with a fixed camera field of view. To ensure diverse, realistic lighting conditions, we illuminate the scene using 3 random environment maps out of 108 indoor HDRIs. We also generate the ground-truth per-view base color, roughness, metallic, normal, segmentation and depth maps under the same camera parameters. We split the 3D models into non-overlapping train and test sets of 6,922 and 773 models, respectively. To test the generalizability of estimating spatially-varying BRDFs under different lighting, we reserve 10 of the 108 environment maps for the test set only. The resulting dataset consists of 2.1 million rendered images of objects with global illumination.
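For reference, this reflectance model combines a diffuse lobe with a Cook-Torrance specular lobe; in the standard glTF 2.0 convention (a textbook formulation, not an ABO-specific detail) it can be written as

$$
f_r(\mathbf{l},\mathbf{v}) \;=\; (1 - F)\,\frac{c_{\mathrm{diff}}}{\pi}
\;+\; \frac{D(\mathbf{h},\alpha)\,F(\mathbf{v}\cdot\mathbf{h})\,G(\mathbf{l},\mathbf{v},\alpha)}
{4\,(\mathbf{n}\cdot\mathbf{l})\,(\mathbf{n}\cdot\mathbf{v})},
\qquad
c_{\mathrm{diff}} = (1 - m)\,b,\quad
F_0 = 0.04\,(1 - m) + m\,b,\quad
\alpha = r^2,
$$

where $b$, $m$ and $r$ are the per-pixel base color, metallic and roughness values, $\mathbf{h}$ is the half vector, $D$ is the GGX normal distribution, $G$ the geometric shadowing term and $F$ the Fresnel term with reflectance $F_0$ at normal incidence.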

Figure 4: Qualitative results of our single-view and multi-view networks. For a reference view of an object on the left, we show spatially-varying BRDF properties in reference view domain, estimated using each network.

Method

To evaluate single-view and multi-view material prediction and establish a baseline approach, we use a U-Net-based model with a ResNet34 backbone to estimate spatially-varying BRDFs from a single viewpoint. The U-Net has a common encoder that takes an RGB image as input and a multi-head decoder that outputs each component of the spatially-varying BRDF separately. To enable an analogous approach for a multi-view network, we align images from multiple viewpoints by projection using the depth data and bundle pairs of an original image and a projected image as the input to the network. We reuse the single-view architecture for the multi-view network and use global max pooling to handle an arbitrary number of input images. Similar to [11], we use a differentiable rendering layer to render the flash-illuminated ground truth and compare it to similarly rendered images from our predictions, which regularizes the network and guides the training process.
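The following is a minimal sketch of the multi-view baseline’s structure (a shared per-view encoder, global max pooling over views, and one decoder head per SVBRDF map). The tiny convolutional encoder stands in for the full ResNet34-based U-Net, and all layer sizes are illustrative rather than the settings used in our experiments:

```python
import torch
import torch.nn as nn

class MultiViewSVBRDFNet(nn.Module):
    """Shared per-view encoder, max-pooling fusion across views, and separate
    decoder heads for base color, roughness, metallic and normal maps."""

    def __init__(self, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.heads = nn.ModuleDict({
            "base_color": nn.Conv2d(feat, 3, 1),
            "roughness": nn.Conv2d(feat, 1, 1),
            "metallic": nn.Conv2d(feat, 1, 1),
            "normal": nn.Conv2d(feat, 3, 1)})

    def forward(self, views):
        # views: (B, V, 3, H, W) -- the reference view plus neighboring views
        # already projected into the reference frame.
        b, v, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1, h, w)
        fused = feats.max(dim=1).values  # global max pool over views
        return {name: head(fused) for name, head in self.heads.items()}

net = MultiViewSVBRDFNet()
out = net(torch.rand(2, 5, 3, 64, 64))
print({k: tuple(o.shape) for k, o in out.items()})
```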

Our model takes as input 256x256 rendered images. For training, we randomly subsample 40 views on the icosphere for each object. For the multi-view network, for each reference view we select its 4 immediately adjacent views as neighboring views. We use mean squared error as the loss function for the base color, roughness, metallicness, normal and render losses. We use the AdamW optimizer [39] with a learning rate of 0.001 and weight decay of 1e-4, and train each network for 15 epochs.
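A sketch of the corresponding training objective and optimizer configuration follows; the equal weighting of the loss terms is an assumption, while the AdamW settings match the text above:

```python
import torch
import torch.nn.functional as F

def svbrdf_loss(pred, target):
    """Sum of per-map MSE terms (base color, roughness, metallic, normal);
    a render-loss term would be added when a differentiable renderer is
    available. Equal weights are an assumption."""
    keys = ["base_color", "roughness", "metallic", "normal"]
    return sum(F.mse_loss(pred[k], target[k]) for k in keys)

# Placeholder model standing in for the SVBRDF network defined above.
model = torch.nn.Conv2d(3, 8, 3)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
```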

Results Results for the single-view network (SV-net) and multi-view network (MV-net) can be found in Table 5. The multi-view network performs similarly to or better than the single-view network on the roughness, metallicness and normal prediction tasks, but worse on base color. This is similar to what is reported in [2], where a naive U-Net outperforms the multi-view approach for base color. One interpretation is that the multi-view inputs confuse the network about the true base color of the object when it is viewed from different viewpoints.

We also run an ablation of our multi-view network without using 3D structure to align neighboring views to the reference view (denoted MV-net: no projection). First, we observe that even without 3D structure-based alignment, the network performs similarly to or better than the single-view network (except for base color). Compared to the multi-view network that uses 3D structure-based alignment, we see that the structure information leads to better performance on all parameters. This shows that the extra projection step to align the input views indeed helps the inference process. Qualitative results comparing the single- and multi-view networks can be found in Figure 4.

Metric | SV-net | MV-net (no proj.) | MV-net
Base Color (↓) | 0.132 | 0.136 | 0.134
Roughness (↓) | 0.156 | 0.148 | 0.117
Metallicness (↓) | 0.164 | 0.163 | 0.154
Normals (↑) | 0.947 | 0.948 | 0.974
Render (↓) | 0.091 | 0.091 | 0.089
Table 5: Material estimation results on ABO for a single-view (SV-net), multi-view (MV-net), and multi-view network without projection (MV-net no proj.) baseline. Base color, roughness, metallicness and rendering are measured using RMSE (lower is better); normal similarity is measured using cosine similarity (higher is better).

5 Conclusion

In this work we introduced the ABO dataset and performed various experiments and benchmarks to highlight its unique properties. We demonstrated that the real-world-derived 3D models in ABO are a challenging test set for ShapeNet-trained 3D reconstruction approaches, and that both view-space and canonical-space methods do not generalize well to ABO meshes despite the meshes coming from the same distribution of training classes. Utilizing the larger set of product images, we proposed a challenging multi-view retrieval task and benchmarked the performance of state-of-the-art metric learning methods with respect to the azimuth of query images. Finally, we trained both single-view and multi-view networks for spatially-varying BRDF estimation on complex, real-world geometries - a task that is uniquely enabled by the nature of our 3D dataset. We found that incorporating multiple views leads to more accurate disentanglement of spatially-varying BRDF properties.

While not considered in this work, the large amounts of text annotations (product descriptions and keywords) and non-rigid products (apparel, home linens, accessories) enable a wide array of possible language and vision tasks, such as predicting styles, patterns, captions or keywords from product images. Furthermore, the 3D objects in ABO correspond to items that naturally occur in a home, and have associated object weight and dimensions. This can benefit robotics research and support simulations of manipulation and navigation.

References

  • [1] Adel Ahmadyan, Liangkai Zhang, Jianing Wei, Artsiom Ablavatski, and Matthias Grundmann. Objectron: A large scale dataset of object-centric videos in the wild with pose annotations. arXiv preprint arXiv:2012.09988, 2020.
  • [2] Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, and Ravi Ramamoorthi. Deep 3d capture: Geometry and reflectance from sparse multi-view images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5960–5969, 2020.
  • [3] Mark Boss, Varun Jampani, Kihwan Kim, Hendrik Lensch, and Jan Kautz. Two-shot spatially-varying brdf and shape estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3982–3991, 2020.
  • [4] Brent Burley and Walt Disney Animation Studios. Physically-based shading at disney. In ACM SIGGRAPH Course Notes, volume 2012, pages 1–7, 2012.
  • [5] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
  • [6] Sungjoon Choi, Qian-Yi Zhou, Stephen Miller, and Vladlen Koltun. A large dataset of object scans. arXiv preprint arXiv:1602.02481, 2016.
  • [7] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision, pages 628–644. Springer, 2016.
  • [8] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018.
  • [9] Robert L Cook and Kenneth E. Torrance. A reflectance model for computer graphics. ACM Transactions on Graphics (ToG), 1(1):7–24, 1982.
  • [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
  • [11] Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, and Adrien Bousseau. Single-image svbrdf capture with a rendering-aware deep network. ACM Transactions on Graphics (TOG), 37(4):128, 2018.
  • [12] Valentin Deschaintre, Miika Aittala, Frédo Durand, George Drettakis, and Adrien Bousseau. Flexible svbrdf capture with a multi-image deep network. In Computer Graphics Forum, volume 38, pages 1–13. Wiley Online Library, 2019.
  • [13] Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 605–613, 2017.
  • [14] Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve Maybank, and Dacheng Tao. 3d-future: 3d furniture shape with texture. arXiv preprint arXiv:2009.09633, 2020.
  • [15] Duan Gao, Xiao Li, Yue Dong, Pieter Peers, Kun Xu, and Xin Tong. Deep inverse rendering for high-resolution svbrdf estimation from an arbitrary number of images. ACM Transactions on Graphics (TOG), 38(4):134, 2019.
  • [16] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In ICCV, 2019.
  • [17] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 9785–9795, 2019.
  • [18] Greg Zaal, Sergej Majboroda, and Andreas Mischok. HDRI Haven. https://hdrihaven.com/. Accessed: 2010-09-30.
  • [19] Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. A papier-mâché approach to learning 3d surface generation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 216–224, 2018.
  • [20] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
  • [21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [22] HeeJae Jun, ByungSoo Ko, Youngjoon Kim, Insik Kim, and Jongtack Kim. Combination of multiple global descriptors for image retrieval. arXiv preprint arXiv:1903.10663, 2019.
  • [23] Abhishek Kar, Christian Häne, and Jitendra Malik. Learning a multi-view stereo machine. In Advances in neural information processing systems, pages 365–376, 2017.
  • [24] Abhishek Kar, Shubham Tulsiani, Joao Carreira, and Jitendra Malik. Category-specific object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1966–1974, 2015.
  • [25] Kihwan Kim, Jinwei Gu, Stephen Tyree, Pavlo Molchanov, Matthias Nießner, and Jan Kautz. A lightweight approach for on-the-fly reflectance estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 20–28, 2017.
  • [26] Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy anchor loss for deep metric learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  • [27] Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo. Abc: A big cad model dataset for geometric deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9601–9611, 2019.
  • [28] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
  • [29] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009.
  • [30] Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Modeling surface appearance from a single photograph using self-augmented convolutional neural networks. ACM Transactions on Graphics (TOG), 36(4):45, 2017.
  • [31] Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J. Guibas. Joint embeddings of shapes and images via cnn image purification. ACM Trans. Graph., 2015.
  • [32] Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Kalyan Sunkavalli, and Manmohan Chandraker. Learning to reconstruct shape and spatially-varying reflectance from a single image. In SIGGRAPH Asia 2018 Technical Papers, page 269. ACM, 2018.
  • [33] Daniel Lichy, Jiaye Wu, Soumyadip Sengupta, and David W Jacobs. Shape and material capture at home. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6123–6133, 2021.
  • [34] Joseph J Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing ikea objects: Fine pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2992–2999, 2013.
  • [35] Joseph J Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing ikea objects: Fine pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2992–2999, 2013.
  • [36] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
  • [37] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [38] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. ACM siggraph computer graphics, 21(4):163–169, 1987.
  • [39] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
  • [40] Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. Image-based recommendations on styles and substitutes. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 43–52, 2015.
  • [41] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4460–4470, 2019.
  • [42] Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No fuss distance metric learning using proxies. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [43] Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. A metric learning reality check. In European Conference on Computer Vision, pages 681–699. Springer, 2020.
  • [44] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4004–4012, 2016.
  • [45] Keunhong Park, Konstantinos Rematas, Ali Farhadi, and Steven M Seitz. Photoshape: Photorealistic materials for large-scale shape collections. arXiv preprint arXiv:1809.09761, 2018.
  • [46] Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, and David Novotny. Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction. arXiv preprint arXiv:2109.00512, 2021.
  • [47] Google Research. Google scanned objects, August.
  • [48] Mike Roberts and Nathan Paczan. Hypersim: A photorealistic synthetic dataset for holistic indoor scene understanding. arXiv preprint arXiv:2011.02523, 2020.
  • [49] Arjun Singh, James Sha, Karthik S Narayan, Tudor Achim, and Pieter Abbeel. Bigbird: A large-scale 3d database of object instances. In 2014 IEEE international conference on robotics and automation (ICRA), pages 509–516. IEEE, 2014.
  • [50] Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1746–1754, 2017.
  • [51] Trevor Standley, Ozan Sener, Dawn Chen, and Silvio Savarese. image2mass: Estimating the mass of an object from its image. In Conference on Robot Learning, pages 324–333. PMLR, 2017.
  • [52] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019.
  • [53] Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Tianfan Xue, Joshua B Tenenbaum, and William T Freeman. Pix3d: Dataset and methods for single-image 3d shape modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2974–2983, 2018.
  • [54] Eu Wern Teh, Terrance DeVries, and Graham W Taylor. Proxynca++: Revisiting and revitalizing proxy neighborhood component analysis. arXiv preprint arXiv:2004.01113, 2020.
  • [55] Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2626–2634, 2017.
  • [56] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
  • [57] Olivia Wiles and Andrew Zisserman. Silnet: Single-and multi-view reconstruction by learning from silhouettes. arXiv preprint arXiv:1711.07888, 2017.
  • [58] Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson env: Real-world perception for embodied agents. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9068–9079, 2018.
  • [59] Yu Xiang, Wonhui Kim, Wei Chen, Jingwei Ji, Christopher Choy, Hao Su, Roozbeh Mottaghi, Leonidas Guibas, and Silvio Savarese. Objectnet3d: A large scale database for 3d object recognition. In European Conference on Computer Vision, pages 160–176. Springer, 2016.
  • [60] Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond pascal: A benchmark for 3d object detection in the wild. In IEEE winter conference on applications of computer vision, pages 75–82. IEEE, 2014.
  • [61] Haozhe Xie, Hongxun Yao, Xiaoshuai Sun, Shangchen Zhou, and Shengping Zhang. Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE International Conference on Computer Vision, pages 2690–2698, 2019.
  • [62] Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In Advances in neural information processing systems, pages 1696–1704, 2016.
  • [63] Wenjie Ye, Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Single image surface appearance modeling with self-augmented cnns and inexact supervision. In Computer Graphics Forum, volume 37, pages 201–211. Wiley Online Library, 2018.
  • [64] A. Yu and K. Grauman. Fine-grained visual comparisons with local learning. In Computer Vision and Pattern Recognition (CVPR), Jun 2014.
  • [65] Andrew Zhai and Hao-Yu Wu. Classification is a strong baseline for deep metric learning. In British Machine Vision Conference (BMVC), 2019.
  • [66] Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Josh Tenenbaum, Bill Freeman, and Jiajun Wu. Learning to reconstruct shapes from unseen classes. In Advances in Neural Information Processing Systems, pages 2257–2268, 2018.
  • [67] Qingnan Zhou and Alec Jacobson. Thingi10k: A dataset of 10,000 3d-printing models. arXiv preprint arXiv:1605.04797, 2016.