Dijun Luo

  • Label Aware Graph Convolutional Network -- Not All Edges Deserve Your Attention

    Graph classification is practically important in many domains. To solve this problem, one usually calculates a low-dimensional representation for each node in the graph with supervised or unsupervised approaches. Most existing approaches consider all the edges between nodes while overlooking whether an edge brings positive or negative influence to the node representation learning. In many real-world applications, however, some connections among the nodes can be noisy for graph convolution, and not all the edges deserve your attention. In this work, we distinguish the positive and negative impacts of a node's neighbors in graph node classification, and propose to enhance the graph convolutional network by taking the labels of neighboring edges into account. We present a novel GCN framework, called Label-aware Graph Convolutional Network (LAGCN), which combines supervised and unsupervised learning by introducing an edge label predictor. As a general model, LAGCN can be easily adapted to various existing GCNs and enhance their performance with some theoretical guarantees. Experimental results on multiple real-world datasets show that LAGCN is competitive against various state-of-the-art methods in graph classification.

    07/10/2019 ∙ by Hao Chen, et al.

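    The abstract above only sketches the idea at a high level. Below is a minimal NumPy sketch of the general notion of scoring edges and down-weighting "negative" ones before a standard GCN propagation step; it is an illustration under assumed names (edge_score, label_aware_gcn_layer, threshold) and is not the authors' LAGCN implementation.

    ```python
    # A minimal NumPy sketch (not the authors' LAGCN): score each edge with a
    # toy predictor and drop edges that look "negative" before a standard GCN
    # propagation step. All names and the threshold are illustrative assumptions.
    import numpy as np

    def edge_score(x_i, x_j):
        """Toy edge predictor: cosine similarity of the endpoint features.
        A learned edge-label classifier would replace this in a real system."""
        return float(x_i @ x_j) / (np.linalg.norm(x_i) * np.linalg.norm(x_j) + 1e-8)

    def label_aware_gcn_layer(A, X, W, threshold=0.0):
        """One GCN propagation step that keeps only edges whose predicted score
        exceeds `threshold`, then applies the usual symmetric normalization."""
        n = A.shape[0]
        A_kept = np.zeros_like(A, dtype=float)
        for i in range(n):
            for j in range(n):
                if A[i, j] and edge_score(X[i], X[j]) > threshold:
                    A_kept[i, j] = 1.0
        A_hat = A_kept + np.eye(n)                       # add self-loops
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

    # Tiny usage example on random data.
    rng = np.random.default_rng(0)
    A = (rng.random((6, 6)) < 0.4).astype(float)
    A = np.triu(A, 1); A = A + A.T                       # symmetric, no self-loops
    X = rng.standard_normal((6, 8))
    W = rng.standard_normal((8, 4))
    print(label_aware_gcn_layer(A, X, W).shape)          # (6, 4)
    ```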

  • Are Tensor Decomposition Solutions Unique? On the global convergence of HOSVD and ParaFac algorithms

    For tensor decompositions such as HOSVD and ParaFac, the objective functions are nonconvex. This implies, theoretically, that there exists a large number of local optima: starting from different initial points, the iteratively improved solution will converge to different local solutions. This non-uniqueness presents a stability and reliability problem for image compression and retrieval. In this paper, we present the results of a comprehensive investigation of this problem. We found that although all tensor decomposition algorithms fail to reach a unique global solution on random data and severely scrambled data, surprisingly, on all of several real-life data sets (even with substantial scrambling and occlusions), HOSVD always produces the unique global solution in the parameter region suitable for practical applications, while ParaFac produces non-unique solutions. We provide an eigenvalue-based rule for assessing the solution uniqueness.

    02/26/2009 ∙ by Dijun Luo, et al.

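    As an illustration of the kind of computation involved, the sketch below implements a truncated HOSVD in NumPy together with a simple singular-value-gap check; the gap rule is an assumed stand-in for the paper's eigenvalue-based uniqueness criterion, and all names (unfold, hosvd, spectral_gap) are hypothetical.

    ```python
    # A minimal NumPy sketch of truncated HOSVD plus a simple singular-value-gap
    # check. The gap rule is an illustrative stand-in for the paper's
    # eigenvalue-based uniqueness criterion, not its exact formulation.
    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding: move `mode` to the front and flatten the rest."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd(T, ranks):
        """Truncated HOSVD: factors from the left singular vectors of each
        mode unfolding, followed by projection to obtain the core tensor."""
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for mode, U in enumerate(factors):
            core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    def spectral_gap(T, mode, r):
        """Relative gap between the r-th and (r+1)-th singular values of a mode
        unfolding; a large gap suggests a stable (unique) truncated subspace,
        while a tiny gap suggests the solution is not unique."""
        s = np.linalg.svd(unfold(T, mode), compute_uv=False)
        return (s[r - 1] - s[r]) / s[0]

    # Tiny usage example on random data.
    rng = np.random.default_rng(0)
    T = rng.standard_normal((10, 12, 14))
    core, factors = hosvd(T, ranks=(3, 3, 3))
    print(core.shape, [U.shape for U in factors])        # (3, 3, 3) and factor shapes
    print([round(spectral_gap(T, m, 3), 3) for m in range(3)])
    ```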