Bin Wang

Professor at Chinese Academy of Sciences

  • Sequential Click Prediction for Sponsored Search with Recurrent Neural Networks

    Click prediction is one of the fundamental problems in sponsored search. Most existing studies have used machine learning approaches to predict the click for each ad-view event independently. However, as observed in real-world sponsored search systems, a user's behavior on ads depends strongly on how that user behaved in the past, especially in terms of what queries she submitted, what ads she clicked or ignored, and how long she spent on the landing pages of clicked ads. Inspired by these observations, we introduce a novel framework based on Recurrent Neural Networks (RNNs). Compared to traditional methods, this framework directly models the dependency on the user's sequential behaviors in the click prediction process through the recurrent structure of the RNN. Large-scale evaluations on the click-through logs of a commercial search engine demonstrate that our approach significantly improves click prediction accuracy compared to sequence-independent approaches.

    04/23/2014 ∙ by Yuyu Zhang, et al.

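    As an illustration of the sequence-dependent formulation, here is a minimal PyTorch sketch (feature and hidden sizes are hypothetical, and the paper's exact input features are not reproduced): the recurrent hidden state carries each user's past behavior into every subsequent click prediction.

    ```python
    import torch
    import torch.nn as nn

    class SequentialClickModel(nn.Module):
        """Predict a click probability for every ad view in a user's
        chronological event sequence; earlier behavior influences later
        predictions through the recurrent hidden state."""
        def __init__(self, feat_dim=32, hidden_dim=64):
            super().__init__()
            self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)

        def forward(self, events):           # events: (batch, seq_len, feat_dim)
            states, _ = self.rnn(events)     # one hidden state per ad-view event
            return torch.sigmoid(self.head(states)).squeeze(-1)

    model = SequentialClickModel()
    x = torch.randn(4, 10, 32)               # 4 users, 10 ad views each
    clicks = torch.randint(0, 2, (4, 10)).float()
    loss = nn.BCELoss()(model(x), clicks)    # per-event click labels
    loss.backward()
    ```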

  • Multi-stage Multi-recursive-input Fully Convolutional Networks for Neuronal Boundary Detection

    In the field of connectomics, neuroscientists seek to identify cortical connectivity comprehensively. Neuronal boundary detection from Electron Microscopy (EM) images is often performed to assist the automatic reconstruction of neuronal circuits. But segmenting EM images is a challenging problem, as it requires the detector to recognize both thin filament-like and thick blob-like membranes while suppressing ambiguous intracellular structures. In this paper, we propose multi-stage multi-recursive-input fully convolutional networks to address this problem. The multiple recursive inputs to one stage, i.e., the multiple side outputs with different receptive field sizes learned by the lower stage, provide multi-scale contextual boundary information for the subsequent learning. This design is biologically plausible, as it resembles the way the human visual system compares different possible segmentation solutions to resolve ambiguous boundaries. Our multi-stage networks are trained end-to-end and achieve promising results on two publicly available EM segmentation datasets: the mouse piriform cortex dataset and the ISBI 2012 EM dataset.

    03/24/2017 ∙ by Wei Shen, et al.

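    A toy PyTorch sketch of the multi-recursive-input idea (channel counts and layer shapes are invented for illustration): each stage emits side outputs at different depths, and the next stage receives the image concatenated with those side outputs as multi-scale boundary context.

    ```python
    import torch
    import torch.nn as nn

    class Stage(nn.Module):
        """One stage: a small FCN trunk with two side outputs taken at
        different depths, hence with different receptive field sizes."""
        def __init__(self, in_ch):
            super().__init__()
            self.b1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
            self.b2 = nn.Sequential(nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU())
            self.s1 = nn.Conv2d(16, 1, 1)    # shallow side output
            self.s2 = nn.Conv2d(16, 1, 1)    # deeper side output, larger receptive field

        def forward(self, x):
            f1 = self.b1(x)
            f2 = self.b2(f1)
            return self.s1(f1), self.s2(f2)

    img = torch.randn(1, 1, 128, 128)        # a single-channel EM image
    side1a, side1b = Stage(in_ch=1)(img)
    # The second stage takes the image plus both stage-1 side outputs
    # as recursive inputs, i.e. multi-scale contextual boundary maps.
    out_a, out_b = Stage(in_ch=3)(torch.cat([img, side1a, side1b], dim=1))
    ```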

  • Learning neural trans-dimensional random field language models with noise-contrastive estimation

    Trans-dimensional random field language models (TRF LMs), in which sentences are modeled as a collection of random fields, have shown performance close to LSTM LMs in speech recognition while being computationally more efficient at inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits their scalability to large training corpora. In this paper, several techniques covering both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated as an exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend neural TRF LMs by combining a deep convolutional neural network (CNN) and a bidirectional LSTM in the potential function to extract deep hierarchical features and bidirectional sequential features. Together, these techniques enable the successful and efficient training of neural TRF LMs on a 40x larger training set in only 1/3 of the training time, and further reduce the WER by a relative 4.7%.

    10/30/2017 ∙ by Bin Wang, et al.

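    The NCE step can be sketched as binary classification between data sentences and noise sentences, with the normalization constant treated as a jointly learned parameter; below is a simplified PyTorch version (it ignores, e.g., the per-length normalization constants a full TRF would carry).

    ```python
    import torch
    import torch.nn.functional as F

    def nce_loss(phi_data, phi_noise, log_pn_data, log_pn_noise, log_z, nu):
        """NCE for an unnormalized LM: phi(x) is the model potential, log_z a
        learned normalization constant, nu the noise-to-data ratio."""
        log_p_data = phi_data - log_z            # log p_model on data samples
        log_p_noise = phi_noise - log_z          # log p_model on noise samples
        log_nu = torch.log(torch.tensor(nu))
        # logits of P(sample came from data | sample)
        logit_d = log_p_data - (log_pn_data + log_nu)
        logit_n = log_p_noise - (log_pn_noise + log_nu)
        return (F.binary_cross_entropy_with_logits(logit_d, torch.ones_like(logit_d))
                + nu * F.binary_cross_entropy_with_logits(logit_n, torch.zeros_like(logit_n)))

    phi_data = torch.randn(8, requires_grad=True)    # potentials of data sentences
    phi_noise = torch.randn(32)                      # potentials of noise sentences
    log_z = torch.zeros(1, requires_grad=True)       # estimated jointly with the model
    loss = nce_loss(phi_data, phi_noise,
                    log_pn_data=torch.full((8,), -20.0),
                    log_pn_noise=torch.full((32,), -20.0),
                    log_z=log_z, nu=4.0)
    loss.backward()
    ```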

  • Language modeling with Neural trans-dimensional random fields

    Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to be computationally more efficient at inference than LSTM LMs with close performance, and to be able to flexibly integrate rich features. In this paper we propose neural TRFs, going beyond the previous discrete TRFs that use only linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embeddings, nonlinear feature learning, and larger-context modeling are inherited from the use of NNs, while the strength of efficient inference, by avoiding the expensive softmax, is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy into the training algorithm, are developed in this work; these enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of the WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of the parameters and 16x faster inference.

    07/23/2017 ∙ by Bin Wang, et al.

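    A minimal sketch of a CNN-defined potential function (vocabulary and channel sizes are hypothetical): it maps a sentence to a single unnormalized score, so rescoring an n-best list requires no softmax over the vocabulary.

    ```python
    import torch
    import torch.nn as nn

    class CNNPotential(nn.Module):
        """Unnormalized sentence potential phi(x): embed tokens, apply a
        1-D convolution, pool over time, and output one scalar."""
        def __init__(self, vocab=10000, emb=64, ch=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.conv = nn.Conv1d(emb, ch, kernel_size=3, padding=1)
            self.out = nn.Linear(ch, 1)

        def forward(self, tokens):                # tokens: (batch, seq_len)
            h = self.emb(tokens).transpose(1, 2)  # (batch, emb, seq_len)
            h = torch.relu(self.conv(h)).max(dim=2).values
            return self.out(h).squeeze(-1)        # one unnormalized score per sentence

    phi = CNNPotential()
    hyps = torch.randint(0, 10000, (2, 12))       # two hypotheses from an n-best list
    print(phi(hyps))                              # scores usable for rescoring
    ```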

  • Model Interpolation with Trans-dimensional Random Field Language Models for Speech Recognition

    Dominant language models (LMs) such as n-gram and neural network (NN) models represent sentence probabilities in terms of conditionals. In contrast, the recently introduced trans-dimensional random field (TRF) LM, in which the whole sentence is modeled as a random field, has shown superior performance. In this paper, we examine how TRF models can be interpolated with NN models, and obtain 12.1% and 17.9% relative error rate reductions over 6-gram LMs for English and Chinese speech recognition respectively through log-linear combination.

    03/30/2016 ∙ by Bin Wang, et al.

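    The log-linear combination itself is simple enough to state directly; a sketch for n-best rescoring (the interpolation weight is a placeholder to be tuned on held-out data):

    ```python
    import numpy as np

    def log_linear_interpolate(log_p_trf, log_p_nn, alpha=0.5):
        """score(x) = alpha * log p_TRF(x) + (1 - alpha) * log p_NN(x)."""
        return alpha * np.asarray(log_p_trf) + (1 - alpha) * np.asarray(log_p_nn)

    # Rescore three recognition hypotheses and pick the best one.
    scores = log_linear_interpolate([-42.1, -40.3, -44.8], [-39.5, -41.0, -38.9])
    best = int(np.argmax(scores))
    ```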

  • Knowledge Graph Embedding with Iterative Guidance from Soft Rules

    Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature of embedding learning and logical inference. They also focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are to be predicted iteratively, and 3) soft rules of various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries the rules to obtain soft labels for unlabeled triples, and integrates the newly labeled triples to update the embedding model. Through this iterative procedure, the knowledge embodied in logic rules can be better transferred into the learned embeddings. We evaluate RUGE on link prediction over Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainty, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.

    11/30/2017 ∙ by Shu Guo, et al.

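    A heavily simplified, self-contained sketch of the iterative loop (toy DistMult-style scorer and invented data; the paper builds on ComplEx with a proper optimization scheme): each iteration derives soft labels for unlabeled triples from the rules and the current model, then updates the embeddings on both hard and soft labels.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    E = rng.normal(scale=0.1, size=(50, 16))       # entity embeddings
    R = rng.normal(scale=0.1, size=(4, 16))        # relation embeddings

    def score(h, r, t):                            # toy DistMult-style score in (0, 1)
        return 1.0 / (1.0 + np.exp(-(E[h] * R[r] * E[t]).sum()))

    observed = [(0, 0, 1), (1, 0, 2), (2, 0, 3)]   # labeled triples, all relation r0
    soft_rules = [(0, 2, 0.9)]                     # rule r0(x,y) => r2(x,y), confidence 0.9

    lr = 0.5
    for _ in range(50):
        # 1) query the rules: soft-label each consequent triple using the
        #    rule confidence and the model's current belief in the premise
        soft = [((h, rc, t), conf * score(h, rp, t))
                for (rp, rc, conf) in soft_rules
                for (h, r, t) in observed if r == rp]
        # 2) update embeddings toward hard labels (1.0) and soft labels
        for (h, r, t), y in [(tr, 1.0) for tr in observed] + soft:
            g = score(h, r, t) - y                 # squared-error gradient direction
            eh, rr, et = E[h].copy(), R[r].copy(), E[t].copy()
            E[h] -= lr * g * rr * et
            E[t] -= lr * g * rr * eh
            R[r] -= lr * g * eh * et

    print(score(0, 2, 1))                          # model's belief in an inferred triple
    ```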

  • Electric Vehicle Driver Clustering using Statistical Model and Machine Learning

    Electric Vehicles (EVs) play a significant role in distribution energy management systems, since their power consumption is much higher than that of other regular home appliances. The randomness of EV driver behavior makes optimal charging or discharging scheduling even more difficult due to uncertain charging session parameters. To minimize the impact of behavioral uncertainties, it is critical to develop effective methods of predicting EV load for smart EV energy management. Using the EV smart charging infrastructure on the UCLA campus and in the city of Santa Monica as testbeds, we have collected real-world datasets of EV charging behavior, based on which we propose an EV user modeling technique that combines statistical analysis and machine learning. Specifically, an unsupervised clustering algorithm and a multilayer perceptron are applied to historical charging records to make day-ahead EV parking and load predictions. Experimental results with cross-validation show that our model achieves good performance for charging control scheduling and online EV load forecasting.

    02/12/2018 ∙ by Yingqi Xiong, et al.

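    A minimal scikit-learn sketch of the two-step pipeline on synthetic stand-in data (features and targets are hypothetical; the real records come from the UCLA and Santa Monica testbeds):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Stand-in charging records: [arrival hour, stay duration (h), energy (kWh)]
    records = np.column_stack([rng.uniform(6, 22, 500),
                               rng.uniform(0.5, 10, 500),
                               rng.uniform(2, 40, 500)])

    # 1) Unsupervised clustering groups drivers into behavior types.
    cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(records)

    # 2) A multilayer perceptron predicts day-ahead load from the session
    #    features plus the behavior-cluster assignment.
    X = np.column_stack([records, cluster])
    y = records[:, 2] * rng.uniform(0.9, 1.1, 500)   # stand-in next-day energy
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
    print(mlp.predict(X[:3]))
    ```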

  • Leveraging Text and Knowledge Bases for Triple Scoring: An Ensemble Approach - The BOKCHOY Triple Scorer at WSDM Cup 2017

    We present our winning solution for the WSDM Cup 2017 triple scoring task. We devise an ensemble of four base scorers so as to leverage the power of both text and knowledge bases for this task. We then further refine the ensemble's outputs by trigger word detection, achieving even better predictive accuracy. The code is available at https://github.com/wsdm-cup-2017/bokchoy.

    12/22/2017 ∙ by Boyang Ding, et al.

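    An illustrative skeleton of the two-step scoring (the base scorers, weights, and trigger-word rule are stand-ins; the actual components live in the linked repository):

    ```python
    # Blend four base scorers, then refine with trigger word detection.
    def ensemble_score(entity, prop, base_scorers, weights, trigger_words, text):
        s = sum(w * f(entity, prop) for f, w in zip(base_scorers, weights))
        if any(word in text for word in trigger_words):
            s = max(s, 5)              # a trigger word signals high relevance
        return round(s)

    score = ensemble_score(
        "Johnny Depp", "profession:actor",
        base_scorers=[lambda e, p: 6, lambda e, p: 5,
                      lambda e, p: 7, lambda e, p: 4],   # stand-in base scorers
        weights=[0.25] * 4,
        trigger_words=["starred", "film"],
        text="Johnny Depp starred in over fifty films.",
    )
    ```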

  • Approximate Analytical Solutions of Power Flow Equations Based on Multi-Dimensional Holomorphic Embedding Method

    It is well known that closed-form analytical solutions to the AC power flow equations do not exist in general. This paper proposes a multi-dimensional holomorphic embedding method (MDHEM) to obtain an explicit approximate analytical AC power-flow solution by finding a physical germ solution and arbitrarily embedding each power, each load, or groups of loads with respective scales. Based on the MDHEM, complete approximate analytical solutions to the power flow equations in high-dimensional space become achievable, since the voltage of each bus can be explicitly expressed as a convergent multivariate power series of all the loads. Unlike traditional iterative methods for power flow calculation and the inaccurate sensitivity analysis method for voltage control, the algebraic variables of a power system under all operating conditions can be prepared offline and evaluated online simply by plugging the values of any operating condition into the scales of the nonlinear multivariate power series. Case studies implemented on a 4-bus test system and the IEEE 14-bus standard system confirm the effectiveness of the proposed method.

    06/20/2017 ∙ by Chengxi Liu, et al.

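    The mechanics of holomorphic embedding can be shown on the simplest possible case, a two-bus DC network whose load-bus voltage satisfies V^2 - V + r*p = 0 (this toy case is ours, not the paper's multi-dimensional formulation): embedding the load as s*p yields a physical germ at s = 0 and an explicit recursion for the series coefficients.

    ```python
    import numpy as np

    def hem_coefficients(r, p, order=30):
        """Coefficients c_n of V(s) = sum_n c_n s^n solving
        V(s)^2 - V(s) + s*r*p = 0, matched order by order in s."""
        c = np.zeros(order + 1)
        c[0] = 1.0                          # physical germ: V = 1 at zero load
        c[1] = -r * p
        for n in range(2, order + 1):       # from the s^n terms of V(s)^2 - V(s)
            c[n] = -sum(c[k] * c[n - k] for k in range(1, n))
        return c

    c = hem_coefficients(r=0.1, p=2.0)
    v_series = c.sum()                      # evaluate the power series at s = 1
    v_exact = (1 + np.sqrt(1 - 4 * 0.1 * 2.0)) / 2
    print(v_series, v_exact)                # the two agree closely
    ```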

  • Improving Knowledge Graph Embedding Using Simple Constraints

    Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts have focused on either designing more complicated triple scoring models or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs on the structure of the embedding space, without negative impact on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The imposed constraints indeed improve model interpretability, leading to substantially more structured embedding spaces. Code and data are available at https://github.com/iieir-km/ComplEx-NNE_AER.

    05/07/2018 ∙ by Boyang Ding, et al.

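    A toy sketch of the two constraint types (the clipping projection and hinge-style penalty are illustrative choices; the paper formulates both constraints for a ComplEx-style model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Non-negativity: after each gradient step, project entity embeddings
    # back onto [0, 1]^d; clipping is one simple projection.
    def project_entities(E):
        return np.clip(E, 0.0, 1.0)

    # Approximate entailment: if r1 => r2 with confidence lam, penalize
    # dimensions where r1's vector exceeds r2's, so that r1 never scores
    # an entity pair much higher than r2 does.
    def entailment_penalty(r1, r2, lam):
        return lam * np.maximum(0.0, r1 - r2).sum()

    E = project_entities(rng.normal(size=(100, 16)))
    r_born_in, r_lived_in = rng.uniform(0, 1, 16), rng.uniform(0, 1, 16)
    print(entailment_penalty(r_born_in, r_lived_in, lam=0.9))
    ```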

  • Evolving Deep Convolutional Neural Networks by Variable-length Particle Swarm Optimization for Image Classification

    Convolutional neural networks (CNNs) are among the most effective deep learning methods for image classification, but the best CNN architecture for a specific problem can be extremely complicated and hard to design. This paper focuses on utilising Particle Swarm Optimisation (PSO) to automatically search for the optimal CNN architecture without any manual work involved. To achieve this goal, three improvements are made to traditional PSO. First, a novel encoding strategy inspired by computer networks, which empowers particle vectors to easily encode CNN layers, is proposed. Second, to allow the proposed method to learn variable-length CNN architectures, a disabled layer is designed to hide some dimensions of the particle vector, achieving variable-length particles. Third, since the learning process on large data is slow, partial datasets are randomly picked for evaluation to dramatically speed it up. The proposed algorithm is examined and compared with 12 existing algorithms, including state-of-the-art methods, on three widely used image classification benchmark datasets. The experimental results show that the proposed algorithm is a strong competitor to the state-of-the-art algorithms in terms of classification error. This is the first work using PSO to automatically evolve CNN architectures.

    03/17/2018 ∙ by Bin Wang, et al.

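    A bare-bones PSO skeleton showing the disabled-layer trick (the encoding, threshold, and fitness function are placeholders; the real fitness trains the decoded CNN on randomly sampled partial datasets):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    DISABLE, MAX_LAYERS, SWARM = 0.2, 8, 10

    def decode(particle):
        # Positions below the threshold are "disabled" layers, so a
        # fixed-length particle can represent a variable-length CNN.
        return [f"conv{int(v * 10)}" for v in particle if v >= DISABLE]

    def fitness(particle):                   # placeholder: lower is better
        return abs(len(decode(particle)) - 5) + rng.normal(scale=0.1)

    pos = rng.uniform(0, 1, (SWARM, MAX_LAYERS))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(50):                      # standard PSO velocity update
        r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print(decode(gbest))                     # best variable-length architecture found
    ```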