Deep learning has led to unprecedented performance in many computer vision and machine learning tasks. The success of deep networks is largely attributed to their capability to learn representations automatically from sheer amounts of data. Despite the pleasing results, the learned representations, especially at a more global level, are in many cases not explainable. Given a landscape image like the one shown in Fig. 1, existing feature-learning approaches focus on producing a global representation for the whole image, in which the features of the scene objects, like the tree and bird, are intertwined. Such object-tangled representations, in many cases, make a computer vision task like image editing cumbersome to deploy.
We study in this paper a novel task, termed disassembling object representations, towards interpretable representation learning. Unlike the prior disentangling task, which isolates attributes of different natures like color and lightness, disassembling aims at learning object-specific features. More specifically, given an image depicting multiple objects, disassembling attempts to separate the features of objects from different categories into distinct parts of a latent representation. In this way, each such part encodes only the features of objects in one specific category. In Fig. 1, for example, one part of the disassembled representation corresponds to the tree, and another corresponds to the bird.
Disassembling may therefore potentially serve as an instrumental step for machine learning tasks, including but not limited to image editing, few- or zero-shot learning [akata2015evaluation, song2018transductive, xian2017zero-shot], and image classification [nam2014large, read2010scalable]. In the case of image editing, disassembling makes it possible to alter the appearance of designated objects by directly working on their isolated and thus filtered latent representation. In the case of few- or zero-shot learning, disassembling allows us to potentially extract pure and intact features from foreground objects and meanwhile suppress the irrelevant ones from the background.
Towards solving the proposed disassembling task, we propose an unsupervised approach, which we name Unsupervised Disassembling Object Representation (UDOR). UDOR is motivated by visual integrity, referring to the fact that a scene should remain visually plausible if one or multiple scene objects are removed. Given a collection of images, UDOR extracts disassembled object representations, of which each part corresponds to one category of objects.
The proposed UDOR comprises three major components: a double AutoEncoder (AE), Fuzzy Classification, and an Object-removing Operation, as shown in Fig. 2. UDOR follows a double AE architecture, which consists of an Image Reconstruction AE (IR-AE) and a Representation Reconstruction AE (RR-AE). The IR-AE and RR-AE are used to reconstruct the input image and the latent representation, respectively. The Fuzzy Classification component, on the other hand, is devised to constrain each part of the latent representation to encode features of at most one object category. As will be discussed later, it explicitly accounts for the fact that features of objects from the same category should be similar, while those from different categories should be distinct. The Object-removing Operation enforces the modularity of the derived representation. Specifically, we randomly reset parts of the representation to empty vectors, meaning that we remove the corresponding objects from the scene, and then use a WGAN-GP [Gulrajani2017Improved] to produce a visually plausible reconstructed image so as to preserve visual integrity.
To evaluate the disassembling performance, we also propose two metrics, one on modularity and the other on integrity. The former measures the modularity and portability of the latent representation, while the latter evaluates the visual quality of the reconstructed image. As will be demonstrated in our experiments, the proposed UDOR, despite its unsupervised nature, achieves truly promising results that closely approach those of supervised methods.
Our main contributions therefore include introducing the disassembling task and an unsupervised approach, termed UDOR, towards solving it. UDOR follows a double AE architecture, with a dedicated Fuzzy Classification component and an Object-removing Operation combined with a WGAN-GP, to enforce the modularity of the learned latent representation. We also introduce two disassembling metrics, on which the proposed UDOR achieves truly encouraging results, almost on par with those of the supervised methods.
2 Related Work
There are, to the best of our knowledge, few methods tailored for learning disassembled object representations. The most related works are disentangled representation-learning methods, which aim at learning dimension-wise interpretable attribute representations from image data.
Existing disentangling methods can be broadly classified into two categories: unsupervised and supervised approaches. Most of the existing unsupervised methods [Burgess2017Understanding, Chen2018Isolating, Dupont2018Joint, Gao2018Auto, Kim2018Disentangling] are based on the two most prominent methods, β-VAE [Higgins2016beta] and InfoGAN [Chen2016InfoGAN]. They impose an independence assumption on the different dimensions of the latent representation to achieve disentangling. However, those methods can only disentangle an image's attribute features, such as color, lightness, and style. On the other hand, supervised methods [Banijamali2017JADE, Feng2018Dual, Kingma2014Semi, Perarnau2016Invertible, Wang2017Tag] focus on utilizing annotated data to supervise the input-to-attribute mapping explicitly. The original aim of supervised methods is to learn disentangled attribute representations. By annotating object information as labels, a few supervised representation-disentangling methods can also be transferred to learn disassembled object representations. However, this requires a large amount of annotated samples. There are also some scene decomposition methods [burgess2019monet:, eslami2016attend, greff2019multi-object, van2018relational] that can learn object-related features; however, those methods can only handle toy datasets.
We also give a brief review here of the double AE, Fuzzy Classification, and Object-removing Operation, which relate to our UDOR. For the double AE, Feng et al. [Feng2018Dual] and Gonzalezgarcia et al. [gonzalezgarcia2018image-to-image] propose the Dual Swapping Disentangling (DSD) model and cross-domain autoencoders, respectively, to disentangle attribute representations with multiple autoencoders. Unlike our UDOR, DSD requires two images as input simultaneously, which fixes its framework to four autoencoders.
For the Fuzzy Classification, there is, to our knowledge, no directly related work so far. The most similar methods are multi-label classification works [Goutsu2018Classification, Rothe2016Deep, wang2016cnn, wu2015weakly], which transform multi-label classification into multiple single-label classification tasks. However, those methods are all supervised by annotated labels, so the unsupervised fuzzy classification problem does not arise there.
For the Object-removing Operation, Arandjelovic et al. [arandjelovic2019object] propose the copy-pasting GAN, which copies and pastes object-related parts of an image into a new image; it is devised to learn object masks. Different from that method, our Object-removing Operation replaces part of the latent representation with an empty vector. Besides, GANs [Goodfellow2014Generative] have been applied extensively. In our method, WGAN-GP [Gulrajani2017Improved] is adopted to improve the quality of the image reconstructed from the object-removed representation.
3 The Disassembling Task
The definition of the disassembling object representation task is given as follows. It is assumed that we are given a dataset that contains categories of objects. Each sample, in our case taking the form of an image, is composed of () categories of objects. Objects of the same category may appear multiple times in one sample. The granularity of the categories is determined by the original dataset and the requirements of the application.
For each sample in the dataset, the disassembled object representation is expected to meet two criteria: it should contain the features of all objects in the sample, and each part of the disassembled representation should contain only the entire features of one category of objects in the sample.
4 The Proposed Method
In this section, we give more details of our proposed UDOR (Fig. 2). We start by introducing the basic architecture double AE, then describe the Fuzzy Classification component, and finally expound the Object-removing Operation.
4.1 Double AE
The goal of our proposed UDOR is to train an autoencoder that can disassemble the features of different objects in an image into different parts of a latent representation. To obtain a proper initial representation, we adopt a double AE architecture, including an Image Reconstruction AE (IR-AE) and a Representation Reconstruction AE (RR-AE), as shown in Fig. 2. The IR-AE is composed of an encoder and a decoder. By reconstructing the input image, the IR-AE ensures that the latent representation contains all the features of the input image. With the latent representation as input, the RR-AE reconstructs it, which enhances the consistency between the input image and the representation. Therefore, the basic reconstruction loss is defined as:
where , , and is the balance parameter. Note that all the encoders share the same parameters, as do all the decoders.
The latent representation is devised to be composed of parts , where each part contains multiple units. This hyper-parameter is determined by the total number of object categories in the image dataset. By the same token, the reconstructed representation is also split into parts .
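To make the data flow concrete, below is a minimal NumPy sketch of the double AE with linear stand-ins for the shared encoder and decoder; the sizes `D`, `M`, `d` and the balance weight `lam` are hypothetical choices of ours, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened image of D pixels, M latent parts of d units each.
D, M, d = 64, 3, 4
Z = M * d

# Linear stand-ins for the shared encoder E and decoder G.
We = rng.normal(scale=0.1, size=(Z, D))
Wg = rng.normal(scale=0.1, size=(D, Z))

def encode(x):       # E: image -> latent representation
    return We @ x

def decode(z):       # G: latent representation -> image
    return Wg @ z

def split_parts(z):  # the representation is partitioned into M contiguous parts
    return z.reshape(M, d)

x = rng.random(D)
z = encode(x)              # IR-AE path: x -> z -> x_hat
x_hat = decode(z)
z_hat = encode(x_hat)      # RR-AE path: z -> x_hat -> z_hat

loss_ir = np.mean((x - x_hat) ** 2)   # image reconstruction term
loss_rr = np.mean((z - z_hat) ** 2)   # representation reconstruction term
lam = 0.5                             # hypothetical balance parameter
loss_rec = loss_ir + lam * loss_rr
```

Because the encoder and decoder are shared between the two AEs, the RR-AE adds no parameters; it only adds the second reconstruction term.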
4.2 Fuzzy Classification
The features of the objects are still intertwined in the initial representation. We thus propose the Fuzzy Classification component to disassemble each object's features into different parts of the representation. In the ideal object representation, each part should contain only the entire features of one category of objects, or empty features. For the nonempty parts, features of objects from the same category should be similar, and those from different categories should be distinct. When one category of object is absent from the scene, the corresponding part of the representation should be empty. How to supervise each part of the representation to contain either object features or empty features is therefore a fuzzy classification problem.
To solve the above fuzzy classification problem, we propose the fuzzy classification loss. For a dataset with object categories, there are kinds of features (features of the object categories and the empty features). To classify unlabeled objects in samples, we predefine the ground-truth label of the representation's -th part with label and label . Meanwhile, the ground-truth label of the empty features is defined with label , which is used to strengthen the classifier's ability to identify empty features. For the ideal object representation, the -th part of the representation is expected to contain features of the -th object category or the empty features. With this part as input, the object classifier predicts the class probability, which should be equal to the -th category's label or the empty label. The fuzzy classification loss is thus defined as:
where is a one-hot label vector, , , denotes summation over a multi-dimensional vector, and is the balance parameter.
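Since the paper's equation is not reproduced here, the following NumPy sketch shows one plausible instantiation of the fuzzy classification loss: each part may legally carry either its own category label or the empty label, so we charge the smaller of the two cross-entropies. The linear classifier, the sizes, and the min-of-two-losses formulation are our own assumptions, not the paper's exact definition.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def cross_entropy(probs, label, n_classes):
    onehot = np.eye(n_classes)[label]
    return -np.sum(onehot * np.log(probs + 1e-12))

M, d = 3, 4                    # hypothetical: M object parts of d units each
n_classes = M + 1              # M object categories plus one "empty" class
rng = np.random.default_rng(0)
Wc = rng.normal(scale=0.1, size=(n_classes, d))  # linear stand-in classifier

def fuzzy_cls_loss(z_parts):
    """Part i should be classified as category i OR as empty (class M)."""
    loss = 0.0
    for i, part in enumerate(z_parts):
        probs = softmax(Wc @ part)
        # fuzzy target: accept whichever admissible label fits better
        loss += min(cross_entropy(probs, i, n_classes),
                    cross_entropy(probs, M, n_classes))
    return loss / len(z_parts)
```

The explicit "empty" class is what lets the same classifier also be trained on the empty vector itself, strengthening its ability to recognize absent objects.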
4.3 Object-removing Operation
With the above Fuzzy Classification, each part of the representation will contain the relevant features of a specific object. However, irrelevant features of other objects may remain in it. To enhance the modularity of the latent representation, we propose the Object-removing Operation. As described above, when some objects are removed, the image should remain reasonable and integrated, which we call visual integrity. Based on visual integrity, we randomly reset parts of the representation to the empty vector, which generates the object-removed representation. The empty vector is a part of the representation extracted from an empty image, such as a full-white image, a full-black image, or another kind of image, chosen according to each training dataset.
If the reset part contains independent and complete features of one object category, the RR-AE should reconstruct the object-removed representation perfectly. The object-removing loss is thus devised to reconstruct the object-removed representation and the empty image :
where is the generated image with object-removed representation , , , and is the balance parameter.
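The Object-removing Operation itself is simple to sketch: randomly chosen parts of the representation are overwritten with the empty vector. In this hypothetical NumPy sketch the empty vector is all zeros; in the paper it is the corresponding part encoded from an empty image.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 3, 4                      # hypothetical part count / part length
empty_part = np.zeros(d)         # stand-in for the empty vector e

def remove_objects(z_parts, drop_mask):
    """Reset the parts selected by drop_mask to the empty vector."""
    out = z_parts.copy()
    out[drop_mask] = empty_part
    return out

z_parts = rng.random((M, d))
drop_mask = rng.random(M) < 0.5  # randomly choose which objects to remove
z_removed = remove_objects(z_parts, drop_mask)
```

The removed representation is then decoded, and the reconstruction is penalized for any trace of the removed objects.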
To keep the visual integrity of the object-removed image, the WGAN-GP [Gulrajani2017Improved] is adopted to discriminate whether the corresponding objects have been removed from the reconstruction. Similar to [Gulrajani2017Improved], the generative adversarial loss is given as:
where is the generator distribution, is the original data distribution, is the distribution of the generated images during training, and is the balance parameter. This loss constrains the -th part to contain only the entire features of a specific category of objects or the empty features.
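For reference, the standard WGAN-GP critic objective from [Gulrajani2017Improved], which the adversarial term follows, has the form (notation ours, since the paper's equation is not reproduced here):

```latex
\mathcal{L}_{gan} \;=\;
\mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\!\left[D(\tilde{x})\right]
\;-\; \mathbb{E}_{x\sim\mathbb{P}_r}\!\left[D(x)\right]
\;+\; \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\!\left[
\left(\left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert_2 - 1\right)^2
\right]
```

where \(\mathbb{P}_g\) is the generator distribution, \(\mathbb{P}_r\) the data distribution, and \(\hat{x}\) is sampled uniformly along straight lines between pairs of real and generated images, enforcing the 1-Lipschitz constraint through the gradient penalty.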
In summary, the total loss contains four parts: the basic reconstruction loss , the fuzzy classification loss , the object-removing loss , and the generative adversarial loss . The first ensures the consistency of features between the input image and the latent representation. The second ensures that each part of the representation contains features of a specific object category or the empty features. The third is devised to enhance the modularity of the latent representation. The last is adopted to improve the quality of the object-removed image reconstructed from the object-removed representation, which ensures the object-removed image's visual integrity. The total loss is therefore given as follows:
where , , and are the balance parameters.
5 Disassembling Metric
It is essential to measure the disassembling performance of different methods. To do so effectively and fairly, we begin by defining the properties that we expect a disassembled representation to have. Then we describe the two metrics we devise for quantitatively comparing disassembling performance.
As described above, an image is usually composed of several objects, which are removable from the scene. We therefore assume that each sample in the dataset is generated by a ground-truth simulation process that randomly uses different kinds of objects. In this paper, we generate the Multi-MNIST dataset with handwritten digits [lecun1998gradient-based] as objects. For each handwritten digit image, we first downsample it, and then create an empty black image. Into the top-left, top-right, and left-bottom of the black image, we insert digit zero / nothing, digit one / nothing, and digit two / nothing, respectively. As a result, a generated sample may contain nothing or one, two, or three digits. For the Multi-MNIST dataset, the ideal disassembled representation is that each part of the representation contains only the entire features of a specific digit or empty features. When a part of the representation that contains a digit's features is removed, the corresponding digit should disappear completely.
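The Multi-MNIST construction described above can be sketched as follows. The 28x28 patch and 64x64 canvas sizes are illustrative assumptions (the paper's resolutions are not stated here), and random arrays stand in for the actual digit patches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 28x28 digit patches placed on a 64x64 black canvas.
PATCH, CANVAS = 28, 64
# Slot index -> top-left corner: top-left, top-right, left-bottom.
slots = {0: (0, 0), 1: (0, CANVAS - PATCH), 2: (CANVAS - PATCH, 0)}

def make_sample(digit_patches):
    """digit_patches: {slot_index: patch or None}; None leaves the slot empty."""
    img = np.zeros((CANVAS, CANVAS))
    for slot, patch in digit_patches.items():
        if patch is not None:
            r, c = slots[slot]
            img[r:r + PATCH, c:c + PATCH] = patch
    return img

# Each slot independently holds its designated digit or nothing.
patches = {i: (rng.random((PATCH, PATCH)) if rng.random() < 0.5 else None)
           for i in range(3)}
sample = make_sample(patches)
```

Because each slot is independently empty or filled, the dataset covers every subset of the three digits, which is exactly what the empty-feature label in the fuzzy classification relies on.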
Therefore, we devise two disassembling metrics to measure the modularity of the latent representation and the integrity of the reconstructed image, respectively. For the modularity, we run inference on images that are generated by fixing one object while randomly sampling all other objects or leaving them empty. If the modularity property holds for the inferred representations, there will be little variance among the inferred latent parts that correspond to the fixed object. In this paper, we randomly choose fixed digits. For the -th fixed digit, images are generated by randomly sampling all other digits or leaving them empty. Thus, we obtain test images . For each group of digit-fixed samples, we can get the disassembled digit representation parts that correspond to the -th fixed digit. Then, the Modularity Score , which measures the average difference among them, is calculated as follows:
where denotes summation over a multi-dimensional vector. For the integrity, we run reconstruction on the same test images by resetting the corresponding part to an empty vector, which reconstructs images with the fixed digit removed. Meanwhile, the corresponding ground-truth images are generated by replacing the fixed digit with a black patch. Then, the Integrity Score , which measures the visual integrity of the reconstructed images, is defined as follows:
where is the number of pixels in the image, and denotes the summation of pixel-wise difference values.
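A minimal NumPy sketch of the two metrics, under our own notational assumptions (K fixed digits, N samples per digit, parts of length d; the paper's exact formulas are not reproduced here): the Modularity Score averages the variance of the parts that should encode the fixed digit, and the Integrity Score averages pixel-wise differences between object-removed reconstructions and their ground truths. Lower is better for both.

```python
import numpy as np

def modularity_score(fixed_parts):
    """fixed_parts: (K, N, d) array -- for each of K fixed digits, the N
    inferred latent parts that should encode that digit.
    Lower variance within a group means better modularity."""
    score = 0.0
    for group in fixed_parts:                       # one group per fixed digit
        mean = group.mean(axis=0)
        score += np.mean(np.sum((group - mean) ** 2, axis=1))
    return score / len(fixed_parts)

def integrity_score(removed_imgs, gt_imgs):
    """Mean per-pixel absolute difference between object-removed
    reconstructions and ground-truth images with the digit blacked out."""
    diff = np.abs(removed_imgs - gt_imgs)
    return diff.sum() / removed_imgs.size
```

If a part truly encodes only the fixed digit, varying everything else leaves it unchanged and the modularity score approaches zero; if removing that part erases exactly that digit, the integrity score approaches zero as well.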
6 Experiments
In this section, we first introduce implementation details and qualitatively compare the results of our UDOR with those of other methods. Then, we adopt the Modularity Score and the Integrity Score to evaluate the performance of the different methods quantitatively. Next, we examine how the object position influences the disassembled object representation. We also conduct an ablation study to verify the effects of the different loss terms. Moreover, we demonstrate the application performance on image editing and image classification tasks. Lastly, some failure cases are expounded, and some interesting and challenging directions are discussed. (More results, experimental details, and source code are given in the supplementary material.)
6.1 Experimental Setting
Dataset. In the experiments, we compare UDOR with other methods on six datasets. For Multi-MNIST, each sample is generated by randomly filling downsampled handwritten images of digits zero, one, and two, or an empty black patch, into the top-left, top-right, and left-bottom of a black image, respectively. For Pattern-Design, we first collect a basic pattern and cut out the basic part, from which the pattern can be recombined through repetitive tiling. The basic part contains three kinds of objects: a flower, a leaf, and a tree. Then, we resize the basic part. From the resized basic part, we generate basic parts by rendering the objects with different colors. For Multi-Fashion, each sample is generated by randomly choosing and combining a t-shirt, pants, a bag, and shoes from Fashion-MNIST [xiao2017fashion-mnist:]. The positions for the t-shirt, pants, bag, and shoes are the top-left, left-bottom, top-right, and right-bottom of the generated outfit image, respectively. In an outfit, some clothing items are allowed to be absent, which matches real-life scenarios. For Mugshot [Feng2018Dual], the dataset contains selfie images of different subjects with different backgrounds. Moreover, some selfie images with a white background are added to the training dataset. In total, there are samples in the dataset. For Outfit [feng2018interpretable], each sample is generated by an outfit composition algorithm with real clothing items as input. There are up to five clothing items in each outfit image. The Outfit dataset contains samples. HAM [tschandl2018the] is a large collection of multi-source dermatoscopic images of common pigmented skin lesions. There are dermatoscopic images in the dataset.
Network Architectures. In the experiments, we adopt two kinds of network architectures for the two image sizes, respectively. The encoders and decoders have the same architecture as ResNet [eastwood2018a]. More details are given in supplementary material B.
6.2 Qualitative Evaluation
As described above, there are few disassembling object representation methods so far. We therefore compare our unsupervised method with the two most closely related (semi-)supervised methods, which validates the performance of UDOR intuitively. The first compared supervised method is a Supervised Auto-Encoder (S-AE), which adopts annotated object labels to supervise, through a classifier, each part of the representation learned by a basic auto-encoder. The encoder, decoder, and classifier have the same network architectures as UDOR's. The detailed network architecture of S-AE is given in supplementary material A. Moreover, UDOR is also compared with the semi-supervised method DSD [Feng2018Dual], which can be transferred into an object-representation disassembling method by replacing the annotated attribute input with annotated object samples.
Fig. 3 gives some visualization results of the above methods on the six datasets. For each dataset, we show an input sample and a swapping candidate image (S-C). Meanwhile, we demonstrate the object-removed images and object-swapped images, which compare the disassembling performance of the different methods qualitatively. The object-removed images are reconstructed by resetting different parts of the representation to an empty vector, which is a part of the empty image's representation. The object-swapped images are reconstructed by swapping part of the swapping candidate image's representation into the representation of the input image.
From Fig. 3 (d-f), we see that S-AE fails to remove the corresponding objects in the object-removed results, which demonstrates that class labels alone are not enough for directly disassembling representations on complicated color datasets. Moreover, S-AE achieves the worst visual results. The reason is that the label only supervises S-AE to extract the relevant features of the corresponding object into each part of the representation; it cannot prevent each part from also extracting irrelevant features. The Object-removing Operation of our method and the swapping module of DSD can restrict irrelevant features of other objects from being extracted into the corresponding part. From Fig. 3 (a-f), we can see that the corresponding objects are successfully removed in the results of DSD and UDOR. On average, DSD achieves the best visual results on the six datasets; however, the results of UDOR are sharper and clearer than those of the other methods. It should be noted that the paired samples of DSD are generated with hand-cut center patches, which leads to the mosaic results in Fig. 3 (f). In summary, the proposed UDOR, despite being unsupervised, achieves truly encouraging results on par with those of the (semi-)supervised methods. More visual results are provided in supplementary material C.
6.3 Quantitative Evaluation
To evaluate the disassembling performance quantitatively, we compare the proposed UDOR with S-AE and DSD [Feng2018Dual] on Multi-MNIST using the Modularity Score and the Integrity Score at different part-lengths. In the experiment, and are set to and , respectively. We sample several part-lengths () and test all methods under those part-length settings.
Fig. 4 plots the modularity scores and integrity scores of the different methods at different part-lengths. In Fig. 4(a), the modularity scores of UDOR are smaller than DSD's and S-AE's at almost all part-lengths, which demonstrates that our unsupervised method achieves better disassembling performance on modularity than the (semi-)supervised methods (S-AE and DSD). The primary cause is that the Object-removing Operation can effectively reduce the correlation between the parts of the representation. This can be verified by the modularity score of UDOR[-Rem], which is higher than that of the other methods at almost all part-lengths; that is, UDOR without the object-removing loss achieves the worst modularity performance on the disassembled object representation.
For the integrity score, UDOR achieves a smaller score than S-AE and DSD at small part-lengths, as shown in Fig. 4(b). However, S-AE and DSD achieve smaller scores than our method at large part-lengths, which means that the (semi-)supervised methods achieve better visual performance at large part-lengths. In general, each part of the representation tends to contain more irrelevant features of other objects as the part-length increases. For the (semi-)supervised methods, the label supervises each part to contain more relevant features of the corresponding object, which leads to better visual results. In summary, compared with the (semi-)supervised methods (S-AE and DSD), our unsupervised method achieves better modularity performance at almost all part-lengths and better visual results at small part-lengths.
6.4 Ablation Study
In the UDOR, the total loss (Eqn. (5)) is composed of four parts: the basic reconstruction loss , the fuzzy classification loss , the object-removing loss , and the generative adversarial loss . The first (Eqn. (1)) is the basic reconstruction loss of the double AE, which ensures that all the features of the image are encoded into the latent representation entirely. The fuzzy classification loss (Eqn. (2)) is the core component for disassembling; without it and the basic reconstruction loss, our framework fails to disassemble object representations. The object-removing loss (Eqn. (3)) and the generative adversarial loss (Eqn. (4)) are devised to improve the modularity of the latent representation and the visual integrity of the reconstructed image, respectively.
We then conduct an ablation study by removing the object-removing loss and the adversarial loss from the framework. Fig. 4 gives the modularity and integrity scores of the methods without the object-removing loss (UDOR[-Rem]) and without the generative adversarial loss (UDOR[-GAN]). In Fig. 4(a), UDOR[-Rem] achieves a larger modularity score than UDOR, which demonstrates the effectiveness of the object-removing loss in improving the modularity of the latent representation. However, UDOR[-GAN] has a smaller modularity score than UDOR, which shows that the adversarial loss affects the modularity of the latent representation negatively. From Fig. 4(b), we can see that UDOR[-GAN] has larger integrity scores than UDOR and UDOR[-Rem], which verifies that the WGAN-GP module can effectively improve the visual integrity of the reconstructed images.
6.5 Object Position
The object position is an important factor for disassembling object representations. We generate 12 datasets with different offset settings for the object position (). In these datasets, each sample is composed of one or two digit objects. With those datasets, we train 12 models with training samples and evaluate them on test samples. The corresponding quantitative and qualitative results are shown in Fig. 5. The integrity scores and modularity scores along different offsets are shown in Fig. 5(a), where we can see that the integrity score becomes large when the offset grows beyond a certain point. The corresponding object-removed images on the datasets with different offset settings are shown in Fig. 5(b). Consistent with the integrity scores, the corresponding objects cannot be removed entirely when the offset is large. In summary, UDOR is robust to small object-position offsets but fails under large offset variance, which is a primary direction of our future research.
Table 1: Classification scores of different methods on the Multi-MNIST dataset and the HAM dataset [tschandl2018the]. The scores describe the geometrical average of the precision and recall scores.
6.6 Applications
As described above, our method can be applied to many machine learning tasks, including image editing, few- or zero-shot learning, classification, and so on. In this section, we test the performance on two basic applications: image editing and classification. For image editing, Fig. 3 gives the object-removed and object-swapped results on the six datasets. In the object-removed results, the corresponding objects are effectively removed, while in the object-swapped results the corresponding objects are swapped. This validates that our method can be well applied to the editing of many image scenes. More editing results can be found in supplementary material C.
For the classification, we compare UDOR with S-AE and DSD on Multi-MNIST and HAM [tschandl2018the]. After obtaining the disassembled representations, we adopt a simple linear SVM (details are given in supplementary material A) to train and test the classification performance. The simple classifier reduces its own influence on the final classification performance, which helps better measure the classification merit of the different object representations. It should be noted that the classification performance can be improved with more powerful classifiers and feature extractors. From Table 1, we can see that UDOR achieves a higher score than DSD on the Multi-MNIST dataset and the best performance on the HAM dataset [tschandl2018the], which demonstrates that the object features extracted by our method are more intact and independent. S-AE achieves the best performance on the Multi-MNIST dataset but the worst on the HAM dataset, which demonstrates that class labels alone are directly effective only for simple datasets. Meanwhile, UDOR[-Rem] and UDOR[-GAN] achieve lower scores than UDOR, which verifies the effectiveness of the Object-removing Operation and the WGAN-GP module once again. Notably, UDOR[-GAN] achieves the worst performance on the classification task, which indicates that the WGAN-GP module can effectively enhance the integrity and independence of the object features.
6.7 Failure Case and Future Works
The above experiments show that UDOR achieves disassembling performance close to that of the (semi-)supervised methods. However, there are still some shortcomings. First, there are many balance parameters in the total loss (Eqn. (5)). For different datasets, the balance parameters need to be fine-tuned; ill-suited settings may lead to overfitting or non-convergence. The detailed balance parameters for all the datasets can be found in supplementary material B.
Some failure cases are shown in Fig. 6, where the digit zero fails to be entirely disassembled in case ‘c1’ of UDOR. The cause of this failure is that the model converges to a local minimum. In case ‘c2’, digit one and digit two are not well disassembled, which leads to ghosts of those digits in the reconstructed images. It is noticeable that these failure cases also occur for S-AE and DSD. The primary cause of the ghosting is the stability of convergence in the training stage, which is a major direction of our future research. From Fig. 6(b3, c3), we can see that the failure cases of DSD and S-AE are more serious when handling the complicated Outfit dataset. Meanwhile, S-AE fails to disassemble object representations on complicated image datasets, as illustrated in Fig. 6(a3) and Fig. 3(d, e, f). In Section 6.5, the experiments on object-position offsets are expounded in detail, demonstrating that UDOR fails under large offset variance. Therefore, combining position information into the disassembled representation is a significant direction of our future research.
In this paper, we study a new representation-learning task, termed disassembling object representation, whose goal is to disassemble the features of different objects within an image into distinct parts of a latent representation. Towards solving this task, we propose an unsupervised strategy, termed Unsupervised Disassembling Object Representation (UDOR). UDOR follows a double AE architecture, in which Fuzzy Classification and an Object-removing Operation are imposed to achieve modularity and visual integrity. Furthermore, we devise two disassembling metrics to measure the modularity of representations and the integrity of images, respectively. Experiments on several datasets show that the proposed UDOR accomplishes favorable results, on par with those of supervised methods. In future work, we will focus on stabilizing the convergence of UDOR and handling large changes in object position.