Tao Hu

  • Online computation of sparse representations of time varying stimuli using a biologically motivated neural network

    Natural stimuli are highly redundant, possessing significant spatial and temporal correlations. While sparse coding has been proposed as an efficient strategy employed by neural systems to encode sensory stimuli, the underlying mechanisms are still not well understood. Most previous approaches model the neural dynamics with the sparse representation dictionary itself and compute the representation coefficients offline. In reality, faced with constantly changing stimuli, neurons must compute sparse representations dynamically, in an online fashion. Here, we describe a leaky linearized Bregman iteration (LLBI) algorithm which computes time-varying sparse representations using a biologically motivated network of leaky rectifying neurons. Compared to previous attempts at dynamic sparse coding, LLBI exploits the temporal correlation of stimuli and demonstrates better performance in both representation error and the smoothness of the temporal evolution of the sparse coefficients.

    10/13/2012 ∙ by Tao Hu, et al.
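
    The abstract does not spell out the LLBI update itself, so the following is only a minimal numpy sketch of how a leaky linearized Bregman iteration over rectifying units might look; all names (Phi, u, lam, delta, eta) and the exact update form are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def llbi_step(u, x_t, Phi, lam=0.1, delta=0.05, eta=0.01):
        """One assumed leaky-linearized-Bregman step for stimulus x_t.

        u   : internal (membrane-like) state, shape (n_neurons,)
        Phi : representation dictionary, shape (dim_stimulus, n_neurons)
        """
        a = np.maximum(u - lam, 0.0)                      # rectified sparse code
        residual = x_t - Phi @ a                          # current representation error
        u = (1.0 - eta) * u + delta * (Phi.T @ residual)  # leaky gradient-like update
        return u, a

    # toy usage on a slowly drifting stimulus
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 256)) / 8.0
    u = np.zeros(256)
    x = rng.standard_normal(64)
    for t in range(200):
        x += 0.01 * rng.standard_normal(64)               # time-varying stimulus
        u, a = llbi_step(u, x, Phi)
    ```

    Because the internal state u is carried over between time steps, the code for the next stimulus starts from the previous solution, which is how such a sketch would exploit temporal correlation.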

  • A network of spiking neurons for computing sparse representations in an energy efficient way

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/sqrt(t) for Gaussian white noise.

    10/04/2012 ∙ by Tao Hu, et al.
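
    As a rough reading of the HDA description (analog gradient-like integration plus quantized, spike-like communication), here is a hedged integrate-and-fire sketch; the variable names, thresholding scheme and rate readout are assumptions for illustration only.

    ```python
    import numpy as np

    def hda_encode(x, D, threshold=0.5, dt=0.01, steps=5000):
        """Assumed sketch: integrate-and-fire nodes exchanging quantized spikes.

        x : input signal, shape (m,)
        D : dictionary with unit-norm columns, shape (m, n)
        Returns an estimate of the sparse code from average firing rates.
        """
        n = D.shape[1]
        G = D.T @ D - np.eye(n)          # lateral couplings between overlapping nodes
        b = D.T @ x                      # feedforward drive to each node
        V = np.zeros(n)                  # analog internal (membrane-like) variables
        counts = np.zeros(n)             # quantized external variables: spike counts
        for step in range(1, steps + 1):
            rate = counts / (step * dt)  # running firing-rate estimate
            V += dt * (b - G @ rate)     # gradient-descent-like analog update
            fired = V >= threshold
            counts[fired] += 1.0         # coordinate-descent-like quantized update
            V[fired] -= threshold        # reset fired nodes
        return counts / (steps * dt)     # firing rates approximate the sparse code
    ```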

  • A Hebbian/Anti-Hebbian Network for Online Sparse Dictionary Learning Derived from Symmetric Matrix Factorization

    Olshausen and Field (OF) proposed that neural computations in the primary visual cortex (V1) can be partially modeled by sparse dictionary learning. By minimizing the regularized representation error they derived an online algorithm, which learns Gabor-filter receptive fields from a natural image ensemble in agreement with physiological experiments. Whereas the OF algorithm can be mapped onto the dynamics and synaptic plasticity in a single-layer neural network, the derived learning rule is nonlocal - the synaptic weight update depends on the activity of neurons other than just the pre- and postsynaptic ones - and hence biologically implausible. Here, to overcome this problem, we derive sparse dictionary learning from a novel cost function - a regularized error of the symmetric factorization of the input's similarity matrix. Our algorithm maps onto a neural network with the same architecture as OF but using only biologically plausible local learning rules. When trained on natural images, our network learns Gabor-filter receptive fields and reproduces the correlation among synaptic weights hard-wired in the OF network. Therefore, online symmetric matrix factorization may serve as an algorithmic theory of neural computation.

    03/02/2015 ∙ by Tao Hu, et al.
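
    The abstract names the network architecture (feedforward plus lateral connections, local rules) but not the equations; the sketch below shows one plausible form, with a rectified fixed-point iteration for the activity and Hebbian/anti-Hebbian weight updates. The specific rule shapes and decay terms are illustrative assumptions, not the paper's derivation.

    ```python
    import numpy as np

    def sparse_activity(x, W, M, lam=0.1, n_iter=100):
        """Assumed recurrent dynamics producing a sparse code y for input x."""
        y = np.zeros(W.shape[0])
        for _ in range(n_iter):
            y = np.maximum(W @ x - M @ y - lam, 0.0)   # feedforward drive minus lateral inhibition
        return y

    def local_plasticity(x, y, W, M, lr=0.01):
        """Assumed local rules: Hebbian feedforward, anti-Hebbian lateral."""
        W += lr * (np.outer(y, x) - y[:, None] ** 2 * W)   # Hebbian with Oja-like decay
        M += lr * (np.outer(y, y) - y[:, None] ** 2 * M)   # anti-Hebbian lateral rule
        np.fill_diagonal(M, 0.0)                           # no self-inhibition
        return W, M
    ```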

  • A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data

    Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis (PCA), by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling (MDS) cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step towards an algorithmic theory of neural computation.

    03/02/2015 ∙ by Cengiz Pehlevan, et al.
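
    To make the MDS-derived network concrete, here is a hedged sketch of a single online step: project the input, let the lateral circuit settle, then apply Hebbian and anti-Hebbian updates. The learning-rate handling and the closed-form relaxation of the lateral dynamics are simplifications assumed for illustration.

    ```python
    import numpy as np

    def subspace_step(x, W, M, lr=0.01):
        """Assumed online step of a Hebbian/anti-Hebbian subspace network.

        x : input sample, shape (n,)
        W : feedforward weights, shape (k, n)  (Hebbian)
        M : lateral weights,     shape (k, k)  (anti-Hebbian, zero diagonal)
        """
        k = W.shape[0]
        y = np.linalg.solve(np.eye(k) + M, W @ x)  # fixed point of y = W x - M y
        W += lr * (np.outer(y, x) - W)             # Hebbian: correlate output with input
        M += lr * (np.outer(y, y) - M)             # anti-Hebbian: decorrelate outputs
        np.fill_diagonal(M, 0.0)                   # no self-coupling
        return y, W, M
    ```

    Iterating this step over a stream of samples is intended to mirror the convergence behavior described in the abstract, with the rows of W spanning the principal subspace.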

  • A Neuron as a Signal Processing Device

    A neuron is a basic physiological and computational unit of the brain. While much is known about the physiological properties of a neuron, its computational role is poorly understood. Here we propose to view a neuron as a signal processing device that represents the incoming streaming data matrix as a sparse vector of synaptic weights scaled by an outgoing sparse activity vector. Formally, a neuron minimizes a cost function comprising a cumulative squared representation error and regularization terms. We derive an online algorithm that minimizes this cost function by alternating between minimization with respect to activity and minimization with respect to synaptic weights. The steps of this algorithm reproduce well-known physiological properties of a neuron, such as weighted summation and leaky integration of synaptic inputs, as well as an Oja-like, but parameter-free, synaptic learning rule. Our theoretical framework makes several predictions, some of which can be verified with existing data, while others require further experiments. Such a framework should allow the function of neuronal circuits to be modeled without necessarily measuring all the microscopic biophysical parameters, and should facilitate the design of neuromorphic electronics.

    05/12/2014 ∙ by Tao Hu, et al.
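
    The alternating minimization described above suggests a very small online loop; the following sketch is an assumed illustration only (the threshold, the initialization and the cumulative-activity learning rate are guesses, not the paper's exact derivation).

    ```python
    import numpy as np

    def single_neuron_online(X, thresh=0.5):
        """Assumed sketch: one neuron alternately updating activity and weights.

        X : streaming data, shape (T, n) -- each row is one input vector.
        Returns the output activities and the learned synaptic weight vector.
        """
        T, n = X.shape
        w = np.zeros(n)                         # synaptic weights
        cum_y2 = 1e-6                           # cumulative squared activity
        ys = np.zeros(T)
        for t in range(T):
            x = X[t]
            y = max(w @ x - thresh, 0.0)        # weighted summation + rectification
            cum_y2 += y ** 2
            w += (y / cum_y2) * (x - y * w)     # Oja-like, parameter-free update
            ys[t] = y
        return ys, w
    ```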

  • Sparse LMS via Online Linearized Bregman Iteration

    We propose a version of the least-mean-square (LMS) algorithm for sparse system identification. Our algorithm, called online linearized Bregman iteration (OLBI), is derived by minimizing the cumulative squared prediction error along with an l1-l2 norm regularizer. By systematically treating the non-differentiable regularizer we arrive at a simple two-step iteration. We demonstrate that OLBI is bias free and compare its operation with existing sparse LMS algorithms by rederiving them in the online convex optimization framework. We perform a convergence analysis of OLBI for white input signals and derive theoretical expressions for both the steady-state and instantaneous mean square deviations (MSD). We demonstrate numerically that OLBI improves the performance of LMS-type algorithms for signals generated from sparse tap weights.

    10/01/2012 ∙ by Tao Hu, et al.
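
    The two-step structure (an LMS-like gradient step on an auxiliary variable followed by shrinkage) can be sketched as below; the step sizes, the plain soft threshold and the tapped-delay-line setup are assumptions chosen for illustration rather than the exact OLBI recursion.

    ```python
    import numpy as np

    def soft_threshold(u, lam):
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    def sparse_lms(x_stream, d_stream, n_taps, mu=0.05, lam=1.0):
        """Assumed sketch of a two-step, Bregman-style sparse LMS filter.

        x_stream : input samples (1-D array)
        d_stream : desired output samples (same length)
        Returns the estimated sparse tap-weight vector.
        """
        u = np.zeros(n_taps)                         # auxiliary variable
        w = np.zeros(n_taps)                         # sparse tap-weight estimate
        for n in range(n_taps, len(x_stream)):
            x = x_stream[n - n_taps:n][::-1]         # tapped delay line
            err = d_stream[n] - w @ x                # prediction error
            u += mu * err * x                        # LMS-like gradient step
            w = soft_threshold(u, lam)               # sparsifying shrinkage step
        return w
    ```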

  • Super-resolution using Sparse Representations over Learned Dictionaries: Reconstruction of Brain Structure using Electron Microscopy

    A central problem in neuroscience is reconstructing neuronal circuits at the synapse level. Due to the wide range of scales in brain architecture, such reconstruction requires imaging that is both high-resolution and high-throughput. Existing electron microscopy (EM) techniques possess the required resolution in the lateral plane and offer either high throughput or high depth resolution, but not both. Here, we exploit recent advances in unsupervised learning and signal processing to obtain high depth-resolution EM images computationally without sacrificing throughput. First, we show that brain tissue can be represented as a sparse linear combination of localized basis functions that are learned using high-resolution datasets. We then develop compressive-sensing-inspired techniques that can reconstruct the brain tissue from very few (typically 5) tomographic views of each section. This enables tracing of neuronal processes and, hence, high-throughput reconstruction of neural circuits at the level of individual synapses.

    10/01/2012 ∙ by Tao Hu, et al.
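
    The reconstruction step described above (a sparse code over a learned dictionary, constrained by a few tomographic views) can be posed as an l1-regularized inverse problem; the ISTA solver below is a generic stand-in for the paper's compressive-sensing-inspired techniques, with all operators and parameters assumed.

    ```python
    import numpy as np

    def reconstruct_patch(y, P, D, lam=0.1, n_iter=200):
        """Assumed sketch: recover a tissue patch from a few tomographic views.

        y : stacked measurements from the views, shape (m,)
        P : projection operator mapping a patch to the views, shape (m, p)
        D : learned dictionary of localized basis functions, shape (p, k)
        Returns the reconstructed patch D @ alpha.
        """
        A = P @ D                               # measurements are linear in the sparse code
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        alpha = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ alpha - y)        # gradient of the data-fit term
            z = alpha - grad / L
            alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
        return D @ alpha                        # high depth-resolution estimate
    ```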

  • EASM: Efficiency-Aware Switch Migration for Balancing Controller Loads in Software-Defined Networking

    Distributed multi-controller deployment is a promising method for achieving a scalable and reliable control plane in Software-Defined Networking (SDN). However, it brings a new challenge: balancing the loads on the distributed controllers as the network traffic dynamically changes. An unbalanced load distribution on the controllers increases the response delay for processing flows and reduces the controllers' throughput. Switch migration is an effective approach to this problem. However, existing schemes focus only on load balancing performance and ignore migration efficiency, which may result in high migration costs and unnecessary control overhead. This paper proposes Efficiency-Aware Switch Migration (EASM) to balance the controllers' loads and improve migration efficiency. We introduce a load difference matrix and a trigger factor to measure load balancing across controllers. We also formulate the migration efficiency problem, which considers the load balancing rate and the migration cost simultaneously in order to migrate switches optimally, and propose EASM to solve it efficiently. Simulation results show that EASM outperforms baseline schemes, reducing the controller response time by about 21.9% while maintaining a good load balancing rate, low migration costs and low migration time as the network scale changes.

    11/23/2017 ∙ by Tao Hu, et al.
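
    The abstract outlines the decision logic (a trigger for imbalance, then a migration chosen for its balancing gain per unit cost); the greedy selection below is a hedged sketch of that idea, with the data structures, trigger rule and cost callback all assumed for illustration.

    ```python
    from itertools import product

    def select_migration(loads, switch_load, switch_ctrl, cost, trigger=1.5):
        """Assumed sketch of an efficiency-aware switch-migration decision.

        loads       : {controller: current load}
        switch_load : {switch: load it contributes}
        switch_ctrl : {switch: controller currently managing it}
        cost        : callable (switch, target_controller) -> migration cost
        trigger     : migrate only when max/min controller load exceeds this ratio
        """
        if max(loads.values()) <= trigger * min(loads.values()):
            return None                            # loads already balanced enough
        imbalance = max(loads.values()) - min(loads.values())
        best, best_eff = None, 0.0
        for s, c in product(switch_load, loads):
            if c == switch_ctrl[s]:
                continue                           # not a migration
            new = dict(loads)
            new[switch_ctrl[s]] -= switch_load[s]
            new[c] += switch_load[s]
            gain = imbalance - (max(new.values()) - min(new.values()))
            eff = gain / max(cost(s, c), 1e-9)     # balancing gain per unit cost
            if eff > best_eff:
                best, best_eff = (s, c), eff
        return best                                # (switch, target controller) or None
    ```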

  • Facial Landmarks Detection by Self-Iterative Regression based Landmarks-Attention Network

    Cascaded Regression (CR) based methods have been proposed to solve the facial landmark detection problem; they learn a series of descent directions through multiple cascaded regressors separately trained for coarse and fine stages, and outperform traditional gradient-descent-based methods in both accuracy and running speed. However, cascaded regression is not robust enough because each regressor's training data comes from the output of the previous regressor. Moreover, training multiple regressors requires substantial computing resources, especially for deep-learning-based methods. In this paper, we develop a Self-Iterative Regression (SIR) framework to improve model efficiency. Only one self-iterative regressor is trained to learn the descent directions for samples from coarse to fine stages, and its parameters are iteratively updated by the same regressor. Specifically, we propose a Landmarks-Attention Network (LAN) as our regressor, which concurrently learns features around each landmark and obtains the holistic location increment. By doing so, not only are the remaining regressors removed, simplifying the training process, but the number of model parameters is also significantly decreased. Experiments demonstrate that, with only 3.72M model parameters, our proposed method achieves state-of-the-art performance.

    03/18/2018 ∙ by Tao Hu, et al.
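
    The core of SIR, as described above, is applying a single regressor repeatedly rather than training a cascade; the loop below sketches that interface, with the regressor treated as an opaque callable (a Landmarks-Attention style network is assumed, not reproduced).

    ```python
    import numpy as np

    def self_iterative_regression(image, init_landmarks, regressor, n_steps=5):
        """Assumed sketch of Self-Iterative Regression with one shared regressor.

        image          : face image array, e.g. shape (H, W, 3)
        init_landmarks : initial landmark coordinates, shape (num_points, 2)
        regressor      : callable (image, landmarks) -> coordinate increment
        """
        landmarks = init_landmarks.astype(float)
        for _ in range(n_steps):
            delta = regressor(image, landmarks)   # the same regressor at every step
            landmarks = landmarks + delta         # move from coarse toward fine
        return landmarks
    ```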

  • Weakly Supervised Local Attention Network for Fine-Grained Visual Classification

    In the fine-grained visual classification task, objects usually share similar geometric structure but present different part distributions and varying local features. Therefore, localizing and extracting discriminative local features play a crucial role in obtaining accurate performance. Existing work that first locates several specific object parts and then extracts further local features either requires additional location annotations or needs to train multiple independent networks. In this paper, we propose the Weakly Supervised Local Attention Network (WS-LAN) to solve this problem; it jointly generates a large number of attention maps (region-of-interest maps) to indicate the locations of object parts and extracts sequential local features by Local Attention Pooling (LAP). Besides, we adopt an attention center loss and attention dropout so that each attention map focuses on a unique object part. WS-LAN can be trained end-to-end and achieves state-of-the-art performance on multiple fine-grained classification datasets, including CUB-200-2011, Stanford Car and FGVC-Aircraft, which demonstrates its effectiveness.

    08/06/2018 ∙ by Tao Hu, et al.
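
    Local Attention Pooling is described only by name above; a common way to realize attention-weighted part pooling is sketched below, with the averaging and normalization choices being assumptions rather than the paper's exact definition.

    ```python
    import numpy as np

    def local_attention_pooling(features, attention):
        """Assumed sketch of Local Attention Pooling (LAP).

        features  : convolutional feature maps, shape (C, H, W)
        attention : attention (region-of-interest) maps, shape (M, H, W)
        Returns part features, shape (M, C): one pooled feature per attention map.
        """
        C, H, W = features.shape
        M = attention.shape[0]
        F = features.reshape(C, H * W)            # flatten spatial dimensions
        A = attention.reshape(M, H * W)
        parts = A @ F.T / (H * W)                 # attention-weighted average pooling
        return parts                              # rows are concatenated for the classifier
    ```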

  • See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification

    Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image cropping, is limited since it might introduce much uncontrolled background noise. In this paper, we propose the Weakly Supervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object's discriminative parts by weakly supervised learning. Next, we randomly choose one attention map to augment this image, through either attention cropping or attention dropping. Weakly supervised data augmentation improves classification accuracy in two ways. On the one hand, images can be seen better, since multiple object parts can be activated. On the other hand, attention regions provide spatial information about objects, which allows the images to be looked at more closely to further improve performance. Comprehensive experiments on common fine-grained visual classification datasets show that our method surpasses state-of-the-art methods by a large margin, demonstrating the effectiveness of the proposed method.

    01/26/2019 ∙ by Tao Hu, et al.
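
    Attention cropping and attention dropping, as described above, amount to masking the image with a thresholded attention map; the numpy sketch below illustrates that idea, with the resizing scheme and threshold being assumptions for illustration.

    ```python
    import numpy as np

    def attention_augment(image, attention, mode="crop", threshold=0.5):
        """Assumed sketch of attention-guided augmentation (crop / drop).

        image     : input image, shape (H, W, C)
        attention : one randomly chosen attention map, shape (h, w)
        """
        H, W = image.shape[:2]
        # nearest-neighbour resize of the attention map to the image size (illustrative)
        ys = (np.arange(H) * attention.shape[0] // H).clip(0, attention.shape[0] - 1)
        xs = (np.arange(W) * attention.shape[1] // W).clip(0, attention.shape[1] - 1)
        mask = attention[np.ix_(ys, xs)] > threshold * attention.max()
        if mode == "drop":
            out = image.copy()
            out[mask] = 0                          # erase the attended part ("drop")
            return out
        rows, cols = np.where(mask)                # bounding box of the attended region
        if rows.size == 0:
            return image
        return image[rows.min():rows.max() + 1,    # attention crop; caller resizes
                     cols.min():cols.max() + 1]    # back to (H, W) for training
    ```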