Liang Zhao


Assistant Professor at Lehman College of The City University of New York

  • Efficient two step optimization for large embedded deformation graph based SLAM

    Formulations based on embedded deformation nodes have been widely applied to deformable geometry and graphics problems. Although promising for stereo (or RGB-D) sensor based SLAM, it remains challenging to keep the parameter estimation of the deformation nodes running at constant speed as the model grows larger: in practice, processing time increases rapidly as the map expands. In this paper, we propose an approach that decouples the nodes of the deformation graph in large-scale dense deformable SLAM and keeps the estimation time constant. We observe that only a subset of the deformation nodes in the graph is connected to visible points. Based on this fact, the sparsity of the original Hessian matrix is exploited to split parameter estimation into two independent steps. With this technique, we achieve faster parameter estimation, reducing the amortized computational complexity from O(n^2) to nearly O(1). As a result, the computational cost barely increases as the map grows, greatly mitigating the bottleneck in large-scale embedded deformation graph based applications. The effectiveness is validated by experiments featuring large-scale deformation scenarios.

    06/20/2019 ∙ by Jingwei Song, et al.
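
    The split described above can be pictured with a small Gauss-Newton style solve. The sketch below is only an illustration of one way a Hessian could be partitioned into visible-node and fixed-node blocks and solved in two independent steps; the block names, the damping term, and the propagation rule in step 2 are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def two_step_update(H, g, visible_mask):
    """Illustrative two-step solve over deformation-node parameters.

    H            : (n, n) Gauss-Newton Hessian over all node parameters
    g            : (n,)   gradient vector
    visible_mask : (n,)   True for parameters of nodes connected to visible points
    """
    v = np.flatnonzero(visible_mask)       # parameters touched by current observations
    f = np.flatnonzero(~visible_mask)      # parameters of nodes with no visible points

    # Step 1: solve only the (small) visible block, keeping the other nodes fixed.
    dx = np.zeros_like(g)
    H_vv = H[np.ix_(v, v)]
    dx[v] = np.linalg.solve(H_vv + 1e-6 * np.eye(len(v)), -g[v])

    # Step 2: propagate the visible-node update to the remaining nodes through the
    # coupling block H_fv (e.g., smoothness terms), again as an independent solve.
    if len(f) > 0:
        H_ff = H[np.ix_(f, f)]
        H_fv = H[np.ix_(f, v)]
        dx[f] = np.linalg.solve(H_ff + 1e-6 * np.eye(len(f)), -(g[f] + H_fv @ dx[v]))
    return dx
```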

  • Occlusion Aware Unsupervised Learning of Optical Flow

    It has recently been shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, unsupervised methods still perform with a relatively large gap compared to their supervised counterparts. Occlusion and large motion are among the major factors that limit current unsupervised optical flow methods. In this work, we introduce a new method that models occlusion explicitly and a new warping scheme that facilitates the learning of large motion. Our method shows promising results on the Flying Chairs, MPI-Sintel, and KITTI benchmark datasets. On the KITTI dataset in particular, where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.

    11/16/2017 ∙ by Yang Wang, et al.
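
    The core idea of masking occluded pixels out of the photometric loss can be sketched in a few lines. The snippet below is a minimal, hedged illustration: the warping uses standard bilinear sampling, and the occlusion mask is simply assumed to be given (the paper derives it explicitly, by a mechanism the abstract does not spell out).

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp `img` (B,3,H,W) with optical `flow` (B,2,H,W) via bilinear sampling."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img.device)      # (2, H, W)
    coords = grid.unsqueeze(0) + flow                                # sampling coordinates
    # Normalize to [-1, 1] as expected by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)            # (B, H, W, 2)
    return F.grid_sample(img, grid_norm, align_corners=True)

def masked_photometric_loss(img1, img2, flow_fw, occ_mask):
    """Photometric loss that ignores occluded pixels (occ_mask ~ 0 where occluded)."""
    img2_warped = warp(img2, flow_fw)
    diff = (img1 - img2_warped).abs()
    return (diff * occ_mask).sum() / (3.0 * occ_mask.sum() + 1e-8)
```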

  • Unsupervised Learning of Geometry with Edge-aware Depth-Normal Consistency

    Learning to reconstruct depth from a single image by watching unlabeled videos via a deep convolutional network (DCN) has attracted significant attention in recent years. In this paper, we introduce a surface normal representation for the unsupervised depth estimation framework. Our estimated depths are constrained to be compatible with predicted normals, yielding more robust geometry results. Specifically, we formulate an edge-aware depth-normal consistency term and solve it by constructing a depth-to-normal layer and a normal-to-depth layer inside the DCN. The depth-to-normal layer takes estimated depths as input and computes normal directions using cross products over neighboring pixels. Given the estimated normals, the normal-to-depth layer then outputs a regularized depth map through local planar smoothness. Both layers are computed with awareness of edges inside the image, which helps address depth/normal discontinuities and preserve sharp edges. Finally, to train the network, we apply a photometric error and gradient smoothness to both depth and normal predictions. We conducted experiments on both outdoor (KITTI) and indoor (NYUv2) datasets and show that our algorithm vastly outperforms the state of the art, demonstrating the benefits of our approach.

    11/10/2017 ∙ by Zhenheng Yang, et al.
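
    The geometry behind a depth-to-normal computation is standard: back-project pixels with the camera intrinsics and take cross products of neighboring difference vectors. The NumPy sketch below illustrates only that computation; the paper's actual layer is differentiable, edge-aware, and paired with a normal-to-depth counterpart, none of which is reproduced here.

```python
import numpy as np

def depth_to_normals(depth, fx, fy, cx, cy):
    """Per-pixel surface normals from a depth map and pinhole intrinsics (illustrative)."""
    H, W = depth.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    X = (xs - cx) / fx * depth
    Y = (ys - cy) / fy * depth
    pts = np.stack((X, Y, depth), axis=-1)           # (H, W, 3) camera-frame points

    dx = pts[:, 1:, :] - pts[:, :-1, :]              # horizontal neighbor differences
    dy = pts[1:, :, :] - pts[:-1, :, :]              # vertical neighbor differences
    n = np.cross(dx[:-1, :, :], dy[:, :-1, :])       # (H-1, W-1, 3) unnormalized normals
    n /= (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
    return n
```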

  • A Generic Framework for Interesting Subspace Cluster Detection in Multi-attributed Networks

    Detection of interesting (e.g., coherent or anomalous) clusters has been studied extensively on plain or univariate networks, with various applications. Recently, algorithms have been extended to real-world networks with multiple attributes per node. In a multi-attributed network, a cluster of nodes is often interesting only for a subset (subspace) of attributes, and such clusters are called subspace clusters. However, in the current literature few methods are capable of detecting subspace clusters, a task that involves concurrent feature selection and network cluster detection. The relevant existing methods are mostly heuristic-driven and customized for specific application scenarios. In this work, we present a generic and theoretically grounded framework for the detection of interesting subspace clusters in large multi-attributed networks. Specifically, we propose a subspace graph-structured matching pursuit algorithm, namely SG-Pursuit, to address a broad class of such problems for different score functions (e.g., coherence or anomaly functions) and topology constraints (e.g., connected subgraphs and dense subgraphs). We prove that our algorithm 1) runs in nearly linear time in the network size and the total number of attributes and 2) enjoys rigorous guarantees (a geometric convergence rate and a tight error bound) analogous to those of state-of-the-art algorithms for sparse feature selection and subgraph detection. As a case study, we specialize SG-Pursuit to optimize a number of well-known score functions for two typical tasks: detection of coherent dense and of anomalous connected subspace clusters in real-world networks. Empirical evidence demonstrates that our generic algorithm SG-Pursuit outperforms state-of-the-art methods designed specifically for these two tasks.

    09/15/2017 ∙ by Feng Chen, et al.
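
    At a high level, matching-pursuit style methods of this kind alternate a gradient (pursuit) step with a projection back onto the sparsity and topology constraints. The sketch below is deliberately schematic: the projection routine, step size, and stopping rule are placeholders supplied by the caller, and nothing here reproduces SG-Pursuit's actual update rules or guarantees.

```python
import numpy as np

def graph_structured_pursuit_sketch(score_grad, project_topology, x0, y0, k, steps=20):
    """Schematic loop in the spirit of a graph-structured matching pursuit.

    score_grad(x, y)    -> gradients of the score w.r.t. node vector x and feature vector y
    project_topology(x) -> projection of a node-indicator vector onto the topology
                           constraint (e.g., a connected subgraph), assumed given
    x0, y0              : initial node / feature indicator vectors (NumPy arrays)
    k                   : sparsity budget for the feature subspace
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(steps):
        gx, gy = score_grad(x, y)
        # Pursue the most promising directions, then re-project onto the constraints.
        x = project_topology(x + 0.1 * gx)         # structured projection on the graph side
        y = y + 0.1 * gy
        keep = np.argsort(np.abs(y))[-k:]          # hard-threshold to the top-k features
        mask = np.zeros_like(y)
        mask[keep] = 1.0
        y = y * mask
    return x, y
```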

  • Online and Distributed Robust Regressions under Adversarial Data Corruption

    In today's era of big data, robust least-squares regression becomes more challenging when adversarial corruption is considered alongside the explosive growth of datasets. Traditional robust methods can handle noise but face several challenges when applied to huge datasets, including 1) the computational infeasibility of handling an entire dataset at once, 2) the existence of heterogeneously distributed corruption, and 3) the difficulty of estimating corruption when the data cannot be entirely loaded. This paper proposes online and distributed robust regression approaches, both of which address all of the above challenges concurrently. Specifically, the distributed algorithm optimizes the regression coefficients of each data block via heuristic hard thresholding and combines all the estimates in a distributed robust consolidation. Furthermore, an online version of the distributed algorithm is proposed to incrementally update the existing estimates with new incoming data. We also prove that our algorithms enjoy strong robustness guarantees for regression coefficient recovery, with a constant upper bound relative to the error of state-of-the-art batch methods. Extensive experiments on synthetic and real datasets demonstrate that our approaches are superior to existing methods in effectiveness, with competitive efficiency.

    10/02/2017 ∙ by Xuchao Zhang, et al.
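
    The per-block step can be pictured as an alternation between an ordinary least-squares fit and hard thresholding of the largest residuals. The sketch below shows only that single-block loop, with the corruption fraction assumed known; the distributed consolidation and the online update from the paper are not reproduced.

```python
import numpy as np

def robust_block_regression(X, y, corrupt_frac=0.2, iters=10):
    """Robust least squares on one data block via heuristic hard thresholding (illustrative):
    alternately fit OLS on the points currently believed clean, then discard the points
    with the largest residuals as suspected corruptions."""
    n = len(y)
    keep = np.arange(n)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(y - X @ beta)
        # Keep the (1 - corrupt_frac) fraction of points with the smallest residuals.
        keep = np.argsort(resid)[: int(np.ceil((1 - corrupt_frac) * n))]
    return beta
```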

  • Spatial Neural Networks and their Functional Samples: Similarities and Differences

    Models of neural networks have proven their utility in the development of learning algorithms in computer science and in the theoretical study of brain dynamics in computational neuroscience. In this paper, we propose a spatial neural network model to analyze functional networks, an important class of networks commonly employed in computational studies of clinical brain-imaging time series. We developed a simulation framework inspired by multichannel brain surface recordings (more specifically, the EEG, or electroencephalogram) in order to link the mesoscopic network dynamics (represented by sampled functional networks) and the microscopic network structure (represented by an integrate-and-fire neural network located in 3D space, hence the term spatial neural network). Functional networks are obtained by computing pairwise correlations between time series of mesoscopic electric potential dynamics, which allows the construction of a graph in which each node represents one time series. The spatial neural network model is central to this study in the sense that it allowed us to characterize sampled functional networks in terms of the features they are able to reproduce from the underlying spatial network. Our modeling approach shows that, under specific conditions of sample size and edge density, it is possible to precisely estimate several network measurements of spatial networks by observing only functional samples.

    05/03/2014 ∙ by Lucas Antiqueira, et al.
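
    The functional-network construction itself is simple to illustrate: correlate every pair of channels and connect the strongly correlated ones. The sketch below fixes Pearson correlation and a hard threshold purely for illustration; the simulation of the underlying spatial (integrate-and-fire) network is not shown.

```python
import numpy as np
import networkx as nx

def functional_network(signals, threshold=0.5):
    """Build a functional network from multichannel time series (illustrative).

    signals : (n_channels, n_samples) array; each row is one potential time series.
    Returns a graph with one node per channel and edges between strongly correlated channels.
    """
    corr = np.corrcoef(signals)
    G = nx.Graph()
    G.add_nodes_from(range(len(corr)))
    for i in range(len(corr)):
        for j in range(i + 1, len(corr)):
            if abs(corr[i, j]) >= threshold:
                G.add_edge(i, j, weight=float(corr[i, j]))
    return G
```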

  • Unsupervised Learning Layers for Video Analysis

    This paper presents two unsupervised learning layers (UL layers) for label-free video analysis: one for fully connected layers and one for convolutional layers. The proposed UL layers can play two roles: they can serve as the cost-function layer providing a global training signal; meanwhile, they can be attached to any regular neural network layer to provide local training signals, which are combined with the training signals backpropagated from upper layers to extract both slowly and quickly changing features at layers of different depths. The UL layers can therefore be used in either purely unsupervised or semi-supervised settings. Both a closed-form solution and an online learning algorithm for the two UL layers are provided. Experiments with unlabeled synthetic and real-world videos demonstrate that neural networks equipped with UL layers and trained with the proposed online learning algorithm can extract shape and motion information from video sequences of moving objects. The experiments also demonstrate potential applications of the UL layers and the online learning algorithm to head-orientation estimation and moving-object localization.

    05/24/2017 ∙ by Liang Zhao, et al.
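
    The abstract does not spell out the UL-layer cost itself, so the snippet below is only an assumed stand-in: a temporal-coherence (slowness) penalty with a small decorrelation term, attached to any layer's features as a label-free, local training signal. It illustrates the role such a layer could play rather than the paper's actual formulation.

```python
import torch
import torch.nn as nn

class TemporalCoherenceLoss(nn.Module):
    """Illustrative label-free training signal for features of consecutive video frames:
    penalize fast feature changes (slowness) while discouraging collapse to a constant."""

    def __init__(self, decorrelation_weight=1e-2):
        super().__init__()
        self.decorrelation_weight = decorrelation_weight

    def forward(self, feat_t, feat_tp1):
        # feat_t, feat_tp1: (B, D) features of the same layer at frames t and t+1.
        slowness = ((feat_tp1 - feat_t) ** 2).mean()
        z = feat_t - feat_t.mean(dim=0, keepdim=True)
        cov = (z.T @ z) / max(feat_t.shape[0] - 1, 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        anti_collapse = (off_diag ** 2).mean() + ((torch.diag(cov) - 1.0) ** 2).mean()
        return slowness + self.decorrelation_weight * anti_collapse
```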

  • Network Unfolding Map by Edge Dynamics Modeling

    The emergence of collective dynamics in neural networks is a mechanism used by the animal and human brain for information processing. In this paper, we develop a computational technique based on distributed processing elements, which we call particles. We observe the collective dynamics of particles in a complex network to perform transductive inference on semi-supervised learning problems. Three actions govern the particles' dynamics: walking, absorption, and generation. Labeled vertices generate new particles that compete against rival particles for edge domination. Active particles walk randomly in the network until they are absorbed by either a rival vertex or an edge currently dominated by rival particles. The result of the model simulation consists of sets of edges sorted by label dominance. Each set tends to form a connected subnetwork representing a data class. Although the intrinsic dynamics of the model is stochastic, we prove that there exists a deterministic version with greatly reduced computational complexity; specifically, with subquadratic growth. Furthermore, the edge domination process corresponds to an unfolding map: intuitively, edges "stretch" and "shrink" according to the edge dynamics, and this effect summarizes the relevant relationships between vertices and the uncovered data classes. The proposed model captures important details of connectivity patterns over the evolution of the edge dynamics, in contrast to previous approaches focused on vertex dynamics. Computer simulations reveal that our model can identify nonlinear features in both real and artificial data, including boundaries between distinct classes and the overlapping structure of data.

    03/03/2016 ∙ by Filipe Alves Neto Verri, et al.
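
    A toy version of the edge-domination bookkeeping is sketched below: labeled vertices launch short random walks, and every traversed edge counts how often each label passes through it. The absorption and particle-generation rules of the actual model are deliberately left out; the snippet only conveys the flavor of label competition over edges.

```python
import random
import networkx as nx

def particle_edge_domination(G, labeled, steps=2000, walk_len=5, seed=0):
    """Toy sketch of label competition over edges (not the paper's full model).

    G       : undirected networkx graph
    labeled : dict mapping a few labeled vertices to their class labels
    Returns a dict mapping each visited edge to the label that dominates it most often.
    """
    rng = random.Random(seed)
    dom = {e: {} for e in G.edges()}
    for _ in range(steps):
        v, label = rng.choice(list(labeled.items()))      # a labeled vertex launches a particle
        for _ in range(walk_len):
            u = rng.choice(list(G.neighbors(v)))
            e = (v, u) if (v, u) in dom else (u, v)
            dom[e][label] = dom[e].get(label, 0) + 1      # edge accumulates domination counts
            v = u
    return {e: max(counts, key=counts.get) for e, counts in dom.items() if counts}
```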

  • Theoretical Properties for Neural Networks with Weight Matrices of Low Displacement Rank

    Recently, low displacement rank (LDR) matrices, or so-called structured matrices, have been proposed to compress large-scale neural networks. Empirical results have shown that neural networks whose weight matrices are LDR matrices, referred to as LDR neural networks, can achieve significant reductions in space and computational complexity while retaining high accuracy. In this paper, we formally study LDR matrices in deep learning. First, we prove the universal approximation property of LDR neural networks under a mild condition on the displacement operators. We then show that the error bounds of LDR neural networks are as efficient as those of general neural networks, for both single-layer and multi-layer structures. Finally, we propose a back-propagation based training algorithm for general LDR neural networks.

    03/01/2017 ∙ by Liang Zhao, et al.
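
    Circulant matrices are one classic low displacement rank family and make the space/time savings concrete: n parameters instead of n^2, and matrix-vector products in O(n log n) via the FFT. The sketch below uses that family purely for illustration; it is not the specific displacement operators analyzed in the paper.

```python
import numpy as np

def circulant_layer(x, c):
    """y = C x, where C = circ(c) is the circulant matrix with first column c.
    Uses the circular-convolution theorem: O(n log n) instead of O(n^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Quick check against the dense circulant matrix for a small example.
n = 8
c = np.random.randn(n)
x = np.random.randn(n)
C = np.stack([np.roll(c, k) for k in range(n)], axis=1)   # dense n x n circulant matrix
assert np.allclose(C @ x, circulant_layer(x, c), atol=1e-8)
```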

  • High Level Pattern Classification via Tourist Walks in Networks

    Complex networks are large-scale graphs with nontrivial connection patterns. The salient features that complex network studies offer, in comparison to classical graph theory, are the emphasis on the dynamical properties of networks and the ability to inherently uncover pattern formation among the vertices. In this paper, we present a hybrid data classification technique combining a low-level and a high-level classifier. The low-level term can be equipped with any traditional classification technique, which performs the classification task considering only physical features (e.g., geometrical or statistical features) of the input data. The high-level term, in turn, is able to detect data patterns with semantic meaning: classification is realized by extracting features of an underlying network constructed from the input data. As a result, the high-level classification process measures the compliance of test instances with the pattern formation of the training data. Among the various high-level perspectives that could be used to capture semantic meaning, we employ the dynamical features generated by a tourist walker in a networked environment. Specifically, a weighted combination of the transient and cycle lengths generated by the tourist walk is employed to that end. Interestingly, our study shows that the proposed technique is able to further improve the already optimized performance of traditional classification techniques.

    05/07/2013 ∙ by Thiago Christiano Silva, et al.
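
    A tourist walk itself is easy to state: from the current point, always move to the nearest point not visited in the last few steps, and record how long the walk wanders (the transient) before it falls into a repeating cycle. The sketch below computes those two quantities for one starting point; the weighting of transient and cycle lengths and the combination with the low-level classifier are not reproduced.

```python
import numpy as np

def tourist_walk(dist, start, memory):
    """Deterministic tourist walk with memory window `memory` (illustrative).

    dist  : (n, n) pairwise distance matrix
    start : starting index
    Returns (transient length, cycle length); cycle length 0 means the walk got stuck.
    """
    path = [start]
    seen_states = {}
    while True:
        state = tuple(path[-(memory + 1):])            # walker state: recent window of visits
        if state in seen_states:
            t = seen_states[state]
            return t, len(path) - t                    # state repeats -> cycle detected
        seen_states[state] = len(path)
        recent = set(path[-memory:]) if memory > 0 else set()
        order = np.argsort(dist[path[-1]])
        nxt = next((int(j) for j in order if j not in recent and j != path[-1]), None)
        if nxt is None:                                # no admissible move left
            return len(path), 0
        path.append(nxt)
```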

  • Time Series Clustering via Community Detection in Networks

    In this paper, we propose a technique for time series clustering using community detection in complex networks. First, we present a method to transform a set of time series into a network using different distance functions, where each time series is represented by a vertex and the most similar ones are connected. We then apply community detection algorithms to identify groups of strongly connected vertices (communities) and, consequently, time series clusters. We also provide a comprehensive analysis of the influence of various combinations of time series distance functions, network generation methods, and community detection techniques on the clustering results. An experimental study shows that the proposed network-based approach achieves better results than the various classic and up-to-date clustering techniques under consideration. Statistical tests confirm that the proposed method outperforms classic clustering algorithms such as k-medoids, DIANA, median-linkage, and centroid-linkage on various data sets. Interestingly, the proposed method can effectively detect shape patterns present in time series, thanks to the topological structure of the underlying network constructed in the clustering process, while the other techniques fail to identify such patterns. Moreover, the proposed method is robust enough to group time series that present similar patterns but with time shifts and/or amplitude variations. In summary, the main point of the proposed method is the transformation of time series from the time domain to the topological domain. We therefore hope that our approach contributes not only to time series clustering but also to general time series analysis tasks.

    08/19/2015 ∙ by Leonardo N. Ferreira, et al.
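
    The pipeline described above can be sketched end to end in a few lines: build a similarity graph over the series, then read clusters off the detected communities. The snippet below fixes one arbitrary choice at each stage (Euclidean distance, a k-nearest-neighbor graph, and greedy modularity maximization); the paper systematically compares many such combinations.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_time_series(series, k=5):
    """Cluster equal-length time series via a k-NN similarity graph and community detection.

    series : list of 1-D NumPy arrays of equal length
    Returns an integer cluster label per series (illustrative choices throughout).
    """
    n = len(series)
    dist = np.array([[np.linalg.norm(series[i] - series[j]) for j in range(n)] for i in range(n)])
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:         # skip self (distance 0)
            G.add_edge(i, int(j), weight=1.0 / (1.0 + dist[i, j]))
    communities = greedy_modularity_communities(G, weight="weight")
    labels = np.empty(n, dtype=int)
    for c, members in enumerate(communities):
        labels[list(members)] = c
    return labels
```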