Visual Tactile Fusion Object Clustering

11/21/2019
by   Tao Zhang, et al.
Indiana University

Object clustering, which aims at grouping similar objects into one cluster with an unsupervised strategy, has been extensively studied across various data-driven applications. However, most existing state-of-the-art object clustering methods (e.g., single-view or multi-view clustering methods) only explore visual information, while ignoring one of the most important sensing modalities, i.e., tactile information, which can help capture different object properties and further boost the performance of the object clustering task. To effectively exploit both visual and tactile modalities for object clustering, in this paper, we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering. Specifically, deep matrix factorization constrained by an under-complete Auto-Encoder-like architecture is employed to jointly learn hierarchical expressions of the visual-tactile fusion data and preserve the local structure of the data-generating distributions of the visual and tactile modalities. Meanwhile, a graph regularizer is introduced to capture the intrinsic relations of data samples within each modality. Furthermore, we propose a modality-level consensus regularizer to effectively align the visual and tactile data in a common subspace in which the gap between visual and tactile data is mitigated. For the model optimization, we present an efficient alternating minimization strategy to solve our proposed model. Finally, we conduct extensive experiments on public datasets to verify the effectiveness of our framework.


Introduction

Figure 1: Illustration of the proposed visual-tactile fusion object clustering framework, where an under-complete Auto-Encoder-like structure is used to preserve the local structure of the data-generating distributions of the visual and tactile modalities. With the consensus regularization, the gap between the visual and tactile modalities can be well mitigated.

Grouping a set of objects in an unsupervised way such that objects in the same group (called a cluster) are more similar to each other than to those in other groups (i.e., object clustering) has attracted a lot of attention in both academic and industrial communities over the past decades. Most current object clustering works [1, 29, 27, 28, 24, 4] aim at recognizing "similar behavior" based on visual information captured by a visual camera (e.g., an RGB or depth camera) or represented by different description methods (e.g., SURF, LBP or deep features). These methods have been successfully applied in statistics, computer vision, biology and psychology [23, 10, 11, 18, 20].

However, most existing object clustering works ignore one of the most important sensing modalities, i.e., tactile information (e.g., hardness, force, and temperature), which can compensate for visual information in many practical manipulation tasks [16, 26]. For example, when a robot grasps an apple, the visual information of the apple becomes unobservable due to the occlusion of the robot hand, while the tactile information can be easily obtained. Objects whose appearances are visually similar can hardly be distinguished using visual information alone (e.g., ripe versus unripe fruits), yet they can be easily distinguished by tactile properties (e.g., hardness). Besides, some objects cannot be well distinguished by either visual or tactile information alone. For instance, it is hard to differentiate three visually similar bottles, where two bottles are empty and the remaining one is full of water. Hence, it is beneficial to perform object clustering by fusing the visual and tactile modalities.

To integrate visual with tactile information, a naive solution is to treat the visual and tactile data as single-view data and directly apply existing multi-view clustering methods to the visual-tactile object clustering task. However, the gap between the visual and tactile modalities is very large [15]. On the one hand, the devices used to collect tactile and visual data are different: a tactile sensor obtains tactile data through constant physical contact, while the visual modality can simultaneously generate multiple different features of an object at a distance. Moreover, the format, frequency and receptive field are diverse, since a visual sensor usually perceives color, global shape and rough texture, while a touch sensor is usually used to acquire detailed texture, hardness and temperature. Therefore, how to establish a novel visual-tactile fusion object clustering model that can tackle the intrinsic gap between visual and tactile data is the focus of this work.

To address the challenges mentioned above, in this paper, we propose a deep Auto-Encoder-like Non-negative Matrix Factorization (NMF) framework for visual-tactile fusion object clustering. More specifically, deep NMF constrained with an under-complete Auto-Encoder-like structure is adopted to learn the hierarchical semantics while preserving the local data structure of the visual and tactile data in a layer-wise manner. Then, we introduce a graph regularizer to reduce the differences between similar points inside each modality. Furthermore, as a non-trivial contribution, we carefully design a sparse consensus regularizer to tackle the intrinsic gap problem between visual and tactile data. We explore a consensus constraint that interacts the individual components of different modalities with the final consensus representation to align the two modalities. Thus, it acts as a modality-level constraint to supervise the generation of a common subspace, in which the mutual information between visual and tactile data is maximized. To optimize our proposed framework, an efficient alternating minimization strategy is presented. Finally, we conduct extensive experiments on public datasets to evaluate the effectiveness of our framework, where ours outperforms the state-of-the-art methods. The contributions are summarized as:

  • We propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion object clustering. To the best of our knowledge, this is a pioneering work that incorporates the visual modality with the tactile modality in the object clustering task.

  • We develop an under-complete Auto-Encoder-like structure to jointly learn the hierarchical semantics and preserve the local data structure. Meanwhile, we design a sparse consensus regularization to seek a common subspace, in which the gap between visual and tactile modalities is mitigated and the mutual information is maximized.

  • To solve our proposed framework, an efficient solution based on an alternating direction minimization method is provided. Extensive experimental results verify the effectiveness of our proposed framework.

Related Work

The work in this paper is related to visual-tactile sensing and multi-view clustering; we briefly review both lines of related work in this section.

Visual-Tactile Sensing

Vision and touch are the most important sensing modalities for both robots and humans, and they are widely applied in robotic tasks [6, 16, 26, 3]. Generally, visual-tactile sensing can be mainly divided into three categories: object recognition, 3D reconstruction and cross-modal matching.

Amongst the fields mentioned above, Liu et al. propose a visual-tactile fusion framework to recognize household objects based on a kernel sparse coding method [16]. Yuan et al. propose a deep learning framework for clothing material perception by fusing visual and tactile information [25]. Ilonen et al. reconstruct 3D models of unknown symmetric objects by fusing visual and tactile sensing [6]. Wang et al. perceive accurate 3D object shapes with a monocular camera and a high-resolution tactile sensor [22]. Yuan et al. propose a multi-input network to connect the visual and tactile properties of fabrics [26]. Li et al. introduce a conditional generative adversarial network based prediction model to connect visual and tactile measurements [13]. Although these models have been successfully applied to supervised learning in the visual-tactile sensing field, their application to object clustering remains insufficiently explored.

Multi-View Clustering

Multi-view clustering has shown remarkable success in many real-world applications. Based on standard spectral clustering [19], co-training [7] and co-regularization [8] are performed to enforce the consistency of different views. Based on the subspace clustering strategy, Cao et al. and Zhang et al. try to capture complementary information from different views in the form of subspace representations [1, 27]. Based on the framework of non-negative matrix factorization and its variants [21], Li et al. propose consensus clustering and semi-supervised clustering methods based on Semi-NMF [12], and Zhao et al. propose a deep Semi-NMF method for multi-view clustering [29].

The Proposed Method

NMF Revisit

NMF and its variants [9, 14] have previously been shown to be promising in the field of multi-view clustering. The objective of NMF can be defined as:

\min_{Z \ge 0,\ H \ge 0} \| X - Z H \|_F^2,   (1)

where X is the input feature matrix, Z is the basis matrix and H is the compact representation, respectively. We can obtain the final clustering result by performing standard spectral clustering [19] on H. However, in real-world applications, a single-layer NMF is not enough to learn the intrinsic data structure due to complex data structures and data noise. Zhao et al. show that a deep NMF model has an appealing performance in data representation [29]. The deep NMF can be formulated as:

\min_{Z_1, \ldots, Z_m,\ H_m \ge 0} \| X - Z_1 Z_2 \cdots Z_m H_m \|_F^2,   (2)

where Z_i and H_i represent the basis matrix and the representation of the i-th layer, respectively. Inspired by this idea, we explore the deep NMF architecture in our visual-tactile object clustering framework.
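As a concrete illustration of Eq. (1), the following minimal sketch (a hedged example, not the paper's implementation) runs the standard Lee-Seung multiplicative updates in NumPy; the function name, rank and iteration count are illustrative.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Minimal sketch of Eq. (1): standard NMF via Lee-Seung multiplicative updates.

    X is a (d, n) non-negative feature matrix whose columns are samples.
    Returns a basis Z of shape (d, rank) and a representation H of shape (rank, n)
    such that X is approximately Z @ H.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    Z = rng.random((d, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Multiplicative updates keep Z and H non-negative at every step.
        H *= (Z.T @ X) / (Z.T @ Z @ H + eps)
        Z *= (X @ H.T) / (Z @ H @ H.T + eps)
    return Z, H
```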

The Proposed Framework

In the setting of the visual-tactile fusion object clustering framework, we use X = {X^(1), ..., X^(V)} as the input data, where V is the number of modalities (V = 2 for the visual-tactile clustering task in this work) and X^(v) denotes the feature matrix of the v-th modality, with d_v the dimension of its features and n the number of data samples. Then, we propose our deep visual-tactile fusion object clustering model as follows:

(3)

where m is the number of layers and the two regularization parameters balance the graph and consensus terms. H_m^(v) represents the high-level hierarchical semantics of the v-th modality.

Moreover, the first and second terms denote the NMF constrained by an under-complete Auto-Encoder-like structure, which is designed to learn the hierarchical semantics while preserving the local structure of the input visual and tactile data. The first term denotes an under-complete decoder process that keeps the dimension of the deepest representation lower than that of the input and thus forces NMF to learn a more salient feature representation. The second term denotes an encoder process that implicitly maintains the local data structure by recovering the deepest representation from the input. Furthermore, we provide the following remarks on the adopted regularization terms.

Remark 1

The graph regularization in the third term is designed to pull together nearby points inside each modality. L^(v) denotes the graph Laplacian matrix of the v-th modality, constructed in a k-nearest-neighbor manner. By applying the eigen-decomposition L^(v) = U^(v) Σ^(v) (U^(v))^T, the trace-form graph regularizer can be rewritten as a Frobenius-norm term. However, the process of collecting tactile or visual data is easily contaminated by environmental changes, which leads to noise and outliers in the source data. Since the Frobenius norm is sensitive to such noise and outliers, we replace it with the l2,1-norm, which can jointly remove outliers and uncover more shared representation across nearby points inside each modality.
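As a small illustration of how such a graph regularizer can be set up, the sketch below (an assumption-laden example, not the authors' code) builds a symmetric k-nearest-neighbor graph on the samples of one modality with scikit-learn and returns the unnormalized Laplacian; binary edge weights are an assumption, and heat-kernel weights are a common alternative.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_graph_laplacian(H, k=5):
    """Build a symmetric k-nearest-neighbor graph on the samples (columns of H)
    and return the unnormalized graph Laplacian L = D - W.
    """
    W = kneighbors_graph(H.T, n_neighbors=k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)              # symmetrize the directed k-NN graph
    W = W.toarray()
    D = np.diag(W.sum(axis=1))       # degree matrix
    return D - W
```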

Remark 2

The last term is the consensus regularization, which is designed to tackle the intrinsic gap problem between visual and tactile data. This term directly measures the dissimilarity between each mapped modality-specific representation and the consensus representation, where a mapping matrix is learned to best align each modality to the consensus. After the alignment, the l2,1-norm constraint calculates this dissimilarity in an efficient and robust way. Therefore, this term acts as a modality-level constraint and learns a projection matrix that projects each modality into the common subspace, in which the mutual information of each modality is maximized, which ultimately contributes to the object clustering.
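For intuition, the following hedged sketch evaluates a consensus penalty of this kind: each modality representation is mapped by its own matrix into the common space and its l2,1 distance to the consensus representation is accumulated. The exact placement of the mapping matrices and any weighting used in Eq. (3) are assumptions here.

```python
import numpy as np

def l21_norm(M):
    """l2,1 norm: sum of the l2 norms of the rows of M."""
    return np.sqrt((M ** 2).sum(axis=1)).sum()

def consensus_penalty(H_list, P_list, H_star):
    """Accumulate, over modalities, the l2,1 distance between each mapped
    modality representation P_v @ H_v and the consensus H_star."""
    return sum(l21_norm(P @ H - H_star) for P, H in zip(P_list, H_list))
```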

Then the objective function Eq. (3) is further reformulated as:

(4)

Optimization

To efficiently solve the optimization problem in Eq. (4), we propose a solution based on the alternating direction minimization algorithm. To reduce the training time, we pre-train each layer to approximate the factor matrices. In the pre-training process, we first decompose the input data matrix of each modality into a basis matrix and a representation matrix by minimizing a single-layer NMF objective, and then further decompose the obtained representation in the same way, layer by layer, where the dimension of each layer (i.e., the layer size) is specified in advance. This process is repeated until all layers have been pre-trained. Then each layer is fine-tuned by alternating minimization of the proposed framework in Eq. (4). Specifically, the update rules for each variable are as follows.
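A minimal sketch of this greedy layer-wise pre-training is given below, assuming plain NMF from scikit-learn as the per-layer factorizer (the paper's exact per-layer objective and layer sizes may differ; the sizes shown are illustrative).

```python
import numpy as np
from sklearn.decomposition import NMF

def pretrain_deep_nmf(X, layer_sizes, seed=0):
    """Greedy layer-wise pre-training sketch: X ~ Z1 H1, then H1 ~ Z2 H2, ...

    X is a (d, n) non-negative matrix with columns as samples; layer_sizes
    (e.g. [100, 50]) are illustrative.
    """
    Z_list, H = [], X
    for size in layer_sizes:
        model = NMF(n_components=size, init='nndsvda', max_iter=500, random_state=seed)
        Z = model.fit_transform(H)    # current matrix ~ Z @ next representation
        H = model.components_         # becomes the input of the next layer
        Z_list.append(Z)
    return Z_list, H                  # per-layer bases and the deepest representation
```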

Update rule for :

With other variables fixed, we can have the following Lagrangian objective function:

(5)

where , and is set as when . Setting the derivative to zero and applying the Karush-Kuhn-Tucker (KKT) conditions, we have:

(6)

Since this is a fixed-point equation, the iterative process converges, and we obtain the update rule as:

(7)

where ⊙ represents the element-wise (Hadamard) product.
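The resulting rule has the familiar multiplicative form; the sketch below is a generic, hedged illustration of the KKT-derived pattern rather than the paper's exact Eq. (7): the gradient is split into non-negative parts and the variable is rescaled element-wise.

```python
import numpy as np

def kkt_multiplicative_step(H, grad_pos, grad_neg, eps=1e-9):
    """Generic KKT-derived multiplicative step: write the gradient w.r.t. H as
    grad_pos - grad_neg with both parts element-wise non-negative, then rescale
    H <- H * grad_neg / grad_pos. Non-negativity of H is preserved and fixed
    points of the objective are left unchanged.
    """
    return H * (grad_neg / (grad_pos + eps))
```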

Update rule for :

By utilizing a proof similar to that in [29], we can formulate the corresponding update rule as follows:

(8)

Update rule for and :

Solving for these variables is challenging since it is hard to directly obtain explicit solutions. We thus introduce two auxiliary variables to transform the optimization problem in Eq. (4), and obtain the following objective function:

(9)

After converting Eq. (9) to an augmented Lagrangian function, we obtain the following expression:

(10)

where the Lagrangian multipliers are initialized with zero matrices, the penalty parameters control the augmented terms, and a slackness variable is introduced to satisfy the non-negativity constraint. We then employ the alternating direction method of multipliers (ADMM) to solve this equation, and the update rules are as follows.

Update rule for : With other variables fixed, we can have the following Lagrangian objective function:

(11)

Setting the derivative to zero, we obtain:

(12)

Since Eq. (12) is a standard Sylvester equation, it can be efficiently solved by the Bartels-Stewart algorithm.
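For illustration, a Sylvester equation of the form AX + XB = C can be solved directly with SciPy's wrapper around the Bartels-Stewart algorithm; the matrices below are made-up toy values, not quantities from Eq. (12).

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy Sylvester equation A X + X B = C solved by SciPy's Bartels-Stewart wrapper.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[1.0, 0.5], [0.0, 1.5]])
C = np.ones((2, 2))
X = solve_sylvester(A, B, C)                 # solves A @ X + X @ B = C
assert np.allclose(A @ X + X @ B, C)
```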

Update rule for and : With other variables fixed except for , we can have the following Lagrangian objective function:

(13)

Setting the derivative to zero, we obtain the following update rule:

(14)

where (·)† denotes the Moore-Penrose pseudo-inverse.

Similarly, can be updated with the following rule:

(15)
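Both Eq. (14) and Eq. (15) are closed-form least-squares solutions of this kind; the hedged sketch below shows only the generic pattern of minimizing a Frobenius-norm fitting term and reading off the solution through a pseudo-inverse (the actual matrices entering the paper's updates are not reproduced here).

```python
import numpy as np

def least_squares_update(A, B):
    """Closed-form minimizer of ||P A - B||_F^2 over P: setting the gradient to
    zero gives P A A^T = B A^T, hence P = B @ pinv(A).
    """
    return B @ np.linalg.pinv(A)
```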

Update rule for and : these two variables are solved in a similar way, and we thus obtain the following update rules. The first update rule is written as follows:

(16)

where D is a diagonal matrix whose i-th diagonal element is 1/(2||m_i||_2), with m_i denoting the i-th row of the corresponding residual matrix, and I is the identity matrix.

The update rule for can be written as follows:

(17)
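This reweighting is the standard device for minimizing an l2,1-norm term; a hedged sketch of building such a diagonal matrix is given below (the residual matrix passed in and the small epsilon guard are illustrative assumptions).

```python
import numpy as np

def l21_reweighting(E, eps=1e-9):
    """Diagonal reweighting matrix used when minimizing an l2,1-norm term:
    the i-th diagonal entry is 1 / (2 * ||e_i||_2), where e_i is the i-th row
    of the current residual E (eps guards against zero rows). Alternating this
    with the weighted least-squares solve gives the usual IRLS-style scheme.
    """
    row_norms = np.sqrt((E ** 2).sum(axis=1))
    return np.diag(1.0 / (2.0 * row_norms + eps))
```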

We have now obtained all the update rules, and we summarize the overall update process of the proposed framework in Algorithm 1.

0:  Visual-tactile data , layer size , hyper-parameter , the number of clusters
1:  Initialize:
2:  for all layers in each modality do
3:     
4:      -NN graph construction on
5:  end for
6:  while not converged do
7:     for all layers in each modality do
8:        if  then
9:           Update via Eq. (8).
10:        else
11:           Update (i.e., ) via Eq. (12).
12:           Update via Eq. (14).
13:           Update via Eq. (15).
14:           Update via Eq. (16).
15:           Update via Eq. (17).
16:           Update Lagrangian multipliers , , .
17:        end if
18:        Update according to Eq. (7).
19:     end for
20:  end while
21:  return  .
Algorithm 1 Optimization of Problem (4)

After obtaining the optimized consensus representation, we obtain the final clustering result by performing standard spectral clustering on it.
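As a minimal sketch of this last step, assuming the learned consensus representation is available as a matrix with one column per sample and using scikit-learn's spectral clustering with a nearest-neighbors affinity (an assumption, since the paper only states that standard spectral clustering is applied):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_consensus(H_star, n_clusters, seed=0):
    """Run standard spectral clustering on the learned consensus representation
    (columns of H_star are samples) and return the cluster labels.
    """
    sc = SpectralClustering(n_clusters=n_clusters,
                            affinity='nearest_neighbors',
                            random_state=seed)
    return sc.fit_predict(H_star.T)
```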

Time Complexity

For the computational complexity, our proposed model consists of two stages, i.e., the pre-training stage and the fine-tuning stage. To simplify the analysis, we suppose that all layers have the same number of hidden units. The pre-training cost grows with the number of modalities, the number of layers, the layer size, the feature dimension, the number of samples and the number of iterations needed for the pre-training to converge. The fine-tuning stage scales with the same quantities and its own number of iterations, and the total time complexity is the sum of the two stages.

Experiments

In this section, we evaluate the performance of our proposed model via several empirical comparisons. We first describe the datasets used and the experimental results, followed by some analyses of our model.

Experimental Setting

Extensive experiments are conducted on two visual-tactile fusion datasets and one benchmark dataset to evaluate our proposed model: 1) PHAC-2 dataset (http://people.eecs.berkeley.edu/~yg/icra2016): it contains color images and tactile signals of household objects. In this paper, we utilize all images and the first 8 tactile signals; 4096-D visual and 2048-D tactile features are extracted in a similar way as [5]. 2) GelFabric dataset (http://people.csail.mit.edu/yuan_wz/fabric-perception.htm): it contains color images and tactile images of various kinds of fabrics; more details about this dataset can be found in [26]. In this paper, we use the pre-trained VGG-19 net to extract 4096-D features for both the tactile and visual images. 3) Yale dataset (http://vision.ucsd.edu/content/yale-face-database): it is employed to evaluate the performance of the proposed framework when the modality number of the input data is more than 2, and it contains face images of 15 subjects. Similar to [29], three kinds of features (i.e., 3304-D LBP, 4096-D intensity, 6750-D Gabor) are extracted as different views.
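As an illustration of the VGG-19 feature extraction described for the GelFabric images, the hedged sketch below pulls a 4096-D descriptor from a pre-trained torchvision VGG-19; the choice of the second fully-connected (fc7) layer and the preprocessing pipeline are assumptions, since the paper does not specify the layer.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained VGG-19; the 4096-D output of the second fully-connected (fc7)
# layer is assumed here, since the exact layer is not named in the paper.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
fc7_head = torch.nn.Sequential(*list(vgg.classifier.children())[:5])

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(path):
    """Return a 4096-D descriptor for one visual or tactile image file."""
    x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        conv = vgg.avgpool(vgg.features(x)).flatten(1)   # (1, 25088) conv features
        return fc7_head(conv).squeeze(0).numpy()         # (4096,) descriptor
```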

Method ACC NMI AR F-score Precision Recall
Vision 35.14±1.89 64.73±1.35 10.81±2.68 13.18±2.49 8.35±0.24 13.24±1.48
Touch 26.25±1.03 55.97±0.79 7.52±0.80 9.34±0.72 7.91±0.85 11.48±0.63
ConcatFea 46.93±1.28 68.06±0.39 25.35±0.25 26.66±1.26 25.10±0.98 27.94±1.01
ConcatPCA 47.19±0.81 68.01±0.33 26.13±0.69 27.41±0.67 26.06±0.78 28.92±0.59
Co-Reg 50.98±0.20 61.05±0.51 15.31±0.63 16.81±0.62 15.75±0.58 18.04±0.66
Co-Training 52.30±1.70 72.36±1.30 32.37±1.90 32.30±3.00 33.52±2.90 36.74±2.90
Min-D 47.98±2.77 67.85±3.50 25.14±5.20 26.47±5.10 24.51±5.00 28.80±4.60
Multi-NMF 51.98±0.82 70.81±0.32 30.12±0.94 32.13±0.92 30.67±0.93 33.74±1.00
DiMSC 36.99±1.17 65.69±0.77 18.21±0.97 17.86±0.92 15.63±1.10 19.02±0.70
DMF-MVC 55.02±0.96 72.96±0.31 34.39±0.55 35.53±0.53 33.86±0.66 37.83±0.53
GLMSC 37.50±3.34 61.97±1.84 16.37±2.87 17.83±2.81 16.97±2.77 18.79±2.86
Ours 59.17±1.40 75.27±0.54 38.97±1.13 40.03±1.11 38.12±1.29 42.15±0.96
Table 1: Performance (%) comparison of different metrics (mean ± standard deviation) on the PHAC-2 dataset.
Method ACC NMI AR F-score Precision Recall
Vision 35.46±1.08 65.91±0.70 17.30±1.26 17.96±1.25 16.87±1.17 19.21±1.34
Touch 33.92±1.05 65.00±0.52 15.71±0.92 16.39±0.91 15.42±0.85 17.48±1.00
ConcatFea 36.56±0.82 66.95±0.27 18.53±0.58 19.19±0.58 18.02±0.48 20.53±0.77
ConcatPCA 37.15±1.20 67.28±0.61 19.13±1.35 19.78±1.34 18.57±1.18 21.15±1.55
Co-Reg 45.80±1.28 55.33±0.47 36.09±0.68 36.54±0.70 33.39±0.88 39.63±0.78
Co-Training 37.85±0.78 45.85±0.78 35.14±1.70 35.59±1.74 32.43±2.00 39.27±1.62
Min-D 43.13±2.49 45.92±0.98 34.94±2.30 35.39±2.28 32.47±2.21 38.73±2.30
Multi-NMF 52.01±0.99 75.30±0.36 34.69±0.17 35.18±0.95 33.27±1.17 37.08±0.72
DiMSC 37.73±0.77 66.97±0.47 18.35±0.77 18.03±0.76 17.08±0.85 20.11±0.62
DMF-MVC 53.03±0.82 76.60±0.36 36.50±0.98 36.61±0.76 34.71±0.87 39.02±0.92
GLMSC 55.92±1.49 78.35±0.28 39.70±0.52 40.19±0.51 37.56±0.29 43.22±0.81
Ours 62.19±0.55 80.73±0.24 45.86±0.65 46.25±1.02 44.13±0.93 49.49±0.66
Table 2: Performance (%) comparison of different metrics (mean ± standard deviation) on the GelFabric dataset.
Method ACC NMI AR F-score Precision Recall
BestSV 61.60±3.00 65.40±0.90 44.00±1.10 47.50±1.10 45.70±1.10 49.50±1.00
ConcatFea 54.40±3.80 64.10±0.60 39.20±0.90 43.10±0.80 41.50±0.70 44.80±0.80
ConcatPCA 57.80±3.80 66.50±3.70 39.60±1.10 43.40±1.10 41.90±1.20 45.00±0.90
Co-Reg 56.40±0.20 64.80±0.20 43.60±0.20 46.60±0.00 45.50±0.40 49.10±0.30
Co-Training 63.00±0.10 67.20±0.60 45.20±1.00 48.70±0.09 47.00±1.00 50.50±1.62
Min-D 61.50±4.30 64.50±0.50 43.30±0.60 47.00±0.60 44.60±0.50 49.60±0.60
Multi-NMF 67.30±0.10 69.00±0.10 49.50±0.10 52.70±0.00 51.20±0.03 54.30±0.02
DiMSC 70.90±0.30 72.70±1.00 53.50±0.10 56.40±0.20 54.30±0.10 58.60±0.30
DMF-MVC 74.50±1.10 78.20±1.00 57.90±0.20 60.10±0.20 59.80±0.10 61.30±0.20
GLMSC 75.45±3.86 78.43±2.93 54.00±0.50 57.09±0.95 51.81±2.23 63.76±3.60
Ours 80.73±0.63 82.09±0.94 64.51±0.69 63.35±0.66 62.25±0.73 65.09±1.17
Table 3: Performance (%) comparison of different metrics (mean ± standard deviation) on the Yale dataset.

Comparison Models and Evaluation

We compare our proposed framework with the following models, including 7 multi-view baselines and 4 related single-view baselines. Related single-view clustering competitors: Vision (Touch) performs standard spectral clustering [19] on the visual (tactile) features; ConcatFea concatenates all features first and then carries out standard spectral clustering; ConcatPCA concatenates all the features and applies PCA to project the concatenated features into a low-dimensional subspace, and then performs standard spectral clustering on the projected features. Multi-view clustering competitors: Co-Reg [8] enforces agreement between different views by co-regularizing the clustering hypotheses; Co-Training [7] works on the hypothesis that the true underlying clustering would assign a point to the same cluster irrespective of the view; Min-D [2] creates a bipartite graph based on the "minimizing-disagreement" idea; Multi-NMF [17] utilizes non-negative matrix factorization to seek the common latent subspace of multi-view input data; DiMSC [1] utilizes a diversity term to explore the complementary information of multi-view data; DMF-MVC [29] proposes a deep non-negative matrix factorization framework to capture the mutual information of multi-view data; GLMSC [27] simultaneously seeks the underlying representation and explores the complementary information of multi-view data.

Similar to [1, 29], six different metrics, i.e., accuracy (ACC), normalized mutual information (NMI), precision, F-score, recall and adjusted Rand index (AR), are adopted to evaluate the clustering performance; a higher value indicates better performance for all metrics. We run all algorithms multiple times and report the mean values along with standard deviations. Table 1 and Table 2 show the object clustering results on the PHAC-2 dataset and the GelFabric dataset, respectively, and Table 3 shows the results on the Yale dataset. BestSV performs standard spectral clustering on the features of each view and reports the best performance. To avoid overfitting, the maximum number of iterations is set to 150 for all experiments.
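For reference, the hedged sketch below shows one common way to compute these metrics: ACC via Hungarian matching between predicted clusters and ground-truth labels, and NMI and adjusted Rand index via scikit-learn (the exact implementations used in the paper are not specified).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Clustering ACC: match predicted clusters to ground-truth labels with the
    Hungarian algorithm, then report the fraction of correctly assigned samples.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1
    rows, cols = linear_sum_assignment(-count)   # maximize matched counts
    return count[rows, cols].sum() / y_true.size

# NMI and adjusted Rand index come directly from scikit-learn:
#   normalized_mutual_info_score(y_true, y_pred)
#   adjusted_rand_score(y_true, y_pred)
```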

From the presented results, we obtain the following observations. Our framework achieves very competitive performance compared with all the competing models, which reveals its remarkable effectiveness in the object clustering task. Specifically, the results in Table 1 and Table 2 reveal the importance of fusing visual and tactile information, compared with the models using visual (or tactile) information alone. This observation also shows that our framework utilizes the visual and tactile information more effectively than the state-of-the-art methods. The results in Table 3 further reveal that our framework is not limited to the 2-modality (i.e., visual-tactile fusion) case and can be applied to other applications whose modality number is more than 2.

Ablation Study and Convergence Analysis

In this subsection, we analyze the proposed framework from three perspectives. First, we analyze the effectiveness of the proposed Auto-Encoder-like structure, the graph regularization and the consensus regularization. Then, we analyze the parameter settings, followed by the convergence analysis.

Effectiveness of the Auto-Encoder-like Structure, Graph Regularization and Consensus Regularization: Figure 2 presents the effectiveness of the individual components, from which we can draw the following conclusions. Overall, "Ours" achieves the best performance, revealing that all the regularization terms and the Auto-Encoder-like structure proposed in this paper contribute to learning the rich information shared between the multi-modality data, which further boosts the clustering performance. Specifically, "AE" achieving better performance than "None" indicates that the proposed Auto-Encoder-like structure, which takes local data structure preservation into account, yields a better representation of the source data. "GR" achieving better performance than "None" reveals the effectiveness of the graph regularization, which pulls nearby points together and removes outliers inside each modality. "CR" achieving better performance than "None" reveals that the proposed consensus regularization can bridge the gap between visual and tactile data and ultimately boost the clustering performance.

Figure 2: Effects of the Auto-Encoder-like structure, graph regularization and consensus regularization. “None” denotes that all items are not used while “Ours” denotes that all items are used. “AE”, “GR” and “CR” denote the models which only use the Auto-Encoder-like structure, the graph regularization, and the consensus regularization term, respectively.

Parameter Analysis: To explore the effect of the parameters, i.e., the two regularization parameters and the layer size, we use the PHAC-2 dataset in this subsection. Specifically, Figure 3 shows the ACC and NMI results w.r.t. the first regularization parameter under different layer sizes. As can be seen, under the three different layer sizes, the framework performs best in both ACC and NMI at the same setting of this parameter, which we therefore adopt as the default in this paper. Figure 4 explores the sensitivity of the proposed framework w.r.t. the second regularization parameter under different layer sizes, with the first parameter fixed to its default value; the best-performing setting is again adopted as the default. Figure 3 and Figure 4 also explore the influence of the layer size on model performance, and one particular layer size always leads to the best performance. When the layer size is too small, the framework is insufficient to learn the rich information behind the input data, and when the layer size is too large, it might introduce undesirable noise. This is a possible reason why the red curves (the chosen layer size) perform better than the blue and green curves (the other two layer sizes).

Figure 3: ACC (%) and NMI (%) curves w.r.t. the first regularization parameter on the PHAC-2 dataset with different layer sizes (the second parameter is fixed).
Figure 4: ACC (%) and NMI (%) curves w.r.t. the second regularization parameter on the PHAC-2 dataset with different layer sizes (the first parameter is fixed).
Figure 5: Convergence analysis on the PHAC-2 dataset (a) and the GelFabric dataset (b): ACC (%) (blue line) and objective function value (red line) w.r.t. the number of iterations.

Convergence Analysis: Even though we have not proved that the proposed framework theoretically converges, we present its convergence property empirically in Figure 5. The objective value and ACC are plotted using the default parameters and layer size in this experiment. Notice that the objective value gradually decreases until it converges after a number of iterations. ACC exhibits two stages: in the first stage, ACC increases rapidly; in the second stage, ACC grows slowly with slight fluctuations until reaching the best performance.

Conclusion

In this paper, we propose a deep Auto-Encoder-like NMF framework for visual-tactile fusion object clustering. By constraining the deep NMF architecture with an under-complete Auto-Encoder-like structure, our framework can jointly learn the hierarchical semantics of visual-tactile data and maintain the local structure of the source data. For each modality, a graph regularizer is adopted to pull nearby points together and remove outliers inside the modality. To create a common subspace in which the gap between visual and tactile data is bridged and the mutual information between visual and tactile data is maximized, a sparse consensus regularizer is developed in this paper. Extensive experimental results on two visual-tactile fusion datasets and one benchmark dataset confirm the effectiveness of our framework compared with existing state-of-the-art works.

References

  • [1] X. Cao, C. Zhang, H. Fu, S. Liu, and H. Zhang (2015) Diversity-induced multi-view subspace clustering. In CVPR, pp. 586–594. Cited by: Introduction, Multi-View Clustering, Comparison Models and Evaluation, Comparison Models and Evaluation.
  • [2] V. R. De Sa (2005) Spectral clustering with two views. In ICML Workshop, pp. 20–27. Cited by: Comparison Models and Evaluation.
  • [3] J. Dong, Y. Cong, G. Sun, and D. Hou (2019) Semantic-transferable weakly-supervised endoscopic lesions segmentation. In ICCV, pp. 2304–2310. Cited by: Visual-Tactile Sensing.
  • [4] S. Gan, C. Yang, W. Qianqian, L. Jun, and Y. Fu (2020) Lifelong spectral clustering. In AAAI, Cited by: Introduction.
  • [5] Y. Gao, L. A. Hendricks, K. J. Kuchenbecker, and T. Darrell (2016) Deep learning for tactile understanding from visual and haptic data. In ICRA, pp. 536–543. Cited by: Experimental Setting.
  • [6] J. Ilonen, J. Bohg, and V. Kyrki (2014) Three-dimensional object reconstruction of symmetric objects by fusing visual and tactile sensing. IJRR 33 (2), pp. 321–341. Cited by: Visual-Tactile Sensing, Visual-Tactile Sensing.
  • [7] A. Kumar and H. Daumé (2011) A co-training approach for multi-view spectral clustering. In ICML, pp. 393–400. Cited by: Multi-View Clustering, Comparison Models and Evaluation.
  • [8] A. Kumar, P. Rai, and H. Daumé (2011) Co-regularized multi-view spectral clustering. In NeurIPS, pp. 1413–1421. Cited by: Multi-View Clustering, Comparison Models and Evaluation.
  • [9] D. D. Lee and H. S. Seung (2001) Algorithms for non-negative matrix factorization. In NeurIPS, pp. 556–562. Cited by: NMF Revisit.
  • [10] J. Li, Y. Kong, and Y. Fu (2017) Sparse subspace clustering by learning approximation ℓ0 codes. In AAAI, Cited by: Introduction.
  • [11] J. Li and H. Liu (2017) Projective low-rank subspace clustering via learning deep encoder. In IJCAI, Cited by: Introduction.
  • [12] T. Li, C. Ding, and M. I. Jordan (2007) Solving consensus and semi-supervised clustering problems using nonnegative matrix factorization. In ICDM, pp. 577–582. Cited by: Multi-View Clustering.
  • [13] Y. Li, J. Zhu, R. Tedrake, and A. Torralba (2019) Connecting touch and vision via cross-modal prediction. In CVPR, pp. 10609–10618. Cited by: Visual-Tactile Sensing.
  • [14] H. Liu, Z. Wu, X. Li, D. Cai, and T. S. Huang (2011) Constrained nonnegative matrix factorization for image representation. TPAMI 34 (7), pp. 1299–1311. Cited by: NMF Revisit.
  • [15] H. Liu and F. Sun (2018) Robotic tactile perception and understanding: a sparse coding method. Springer. Cited by: Introduction.
  • [16] H. Liu, Y. Yu, F. Sun, and J. Gu (2016) Visual–tactile fusion for object recognition. TASE 14 (2), pp. 996–1008. Cited by: Introduction, Visual-Tactile Sensing, Visual-Tactile Sensing.
  • [17] J. Liu, C. Wang, J. Gao, and J. Han (2013) Multi-view clustering via joint nonnegative matrix factorization. In ICDM, pp. 252–260. Cited by: Comparison Models and Evaluation.
  • [18] L. Liu, F. Nie, A. Wiliem, Z. Li, T. Zhang, and B. C. Lovell (2018) Multi-modal joint clustering with application for unsupervised attribute discovery. TIP 27 (9), pp. 4345–4356. Cited by: Introduction.
  • [19] A. Y. Ng, M. I. Jordan, and Y. Weiss (2002) On spectral clustering: analysis and an algorithm. In NeurIPS, pp. 849–856. Cited by: Multi-View Clustering, NMF Revisit, Comparison Models and Evaluation.
  • [20] G. Sun, Y. Cong, Q. Wang, B. Zhong, and Y. Fu (2019) Representative task self-selection for flexible clustered lifelong learning. arXiv preprint. Cited by: Introduction.
  • [21] G. Trigeorgis, K. Bousmalis, S. Zafeiriou, and B. Schuller (2014) A deep semi-NMF model for learning hidden representations. In ICML, pp. 1692–1700. Cited by: Multi-View Clustering.
  • [22] S. Wang, J. Wu, X. Sun, W. Yuan, W. T. Freeman, J. B. Tenenbaum, and E. H. Adelson (2018) 3d shape perception from monocular vision, touch, and shape priors. In IROS, pp. 1606–1613. Cited by: Visual-Tactile Sensing.
  • [23] B. Wu, Y. Zhang, B. Hu, and Q. Ji (2013) Constrained clustering and its application to face clustering in videos. In CVPR, pp. 3507–3514. Cited by: Introduction.
  • [24] X. Yang, C. Deng, F. Zheng, J. Yan, and W. Liu (2019) Deep spectral clustering using dual autoencoder network. In CVPR, pp. 4066–4075. Cited by: Introduction.
  • [25] W. Yuan, Y. Mo, S. Wang, and E. H. Adelson (2018) Active clothing material perception using tactile sensing and deep learning. In ICRA, pp. 1–8. Cited by: Visual-Tactile Sensing.
  • [26] W. Yuan, S. Wang, S. Dong, and E. Adelson (2017) Connecting look and feel: associating the visual and tactile properties of physical materials. In CVPR, pp. 5580–5588. Cited by: Introduction, Visual-Tactile Sensing, Visual-Tactile Sensing, Experimental Setting.
  • [27] C. Zhang, H. Fu, Q. Hu, X. Cao, Y. Xie, D. Tao, and D. Xu (2018) Generalized latent multi-view subspace clustering. TPAMI. Cited by: Introduction, Multi-View Clustering, Comparison Models and Evaluation.
  • [28] Z. Zhang, L. Liu, F. Shen, H. T. Shen, and L. Shao (2018) Binary multi-view clustering. TPAMI 41 (7), pp. 1774–1782. Cited by: Introduction.
  • [29] H. Zhao, Z. Ding, and Y. Fu (2017) Multi-view clustering via deep matrix factorization. In AAAI, pp. 11108–1113. Cited by: Introduction, Multi-View Clustering, NMF Revisit, Update rule for :, Experimental Setting, Comparison Models and Evaluation, Comparison Models and Evaluation.