
Matrix Product Operator Restricted Boltzmann Machines
A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses such as dimensionality reduction, classification, and generative modeling. Conventional RBMs accept vectorized data, which discards potentially important structural information in the original tensor (multiway) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed, but both are restrictive by model construction, which limits their expressive power. This work presents the matrix product operator RBM (MPORBM), a tensor network generalization of MvRBM and TvRBM that preserves the input format in both the visible and hidden layers and attains higher expressive power. A novel training algorithm integrating contrastive divergence with an alternating optimization procedure is also developed. Numerical experiments compare the MPORBM with the traditional RBM and MvRBM on data classification, image completion, and image denoising tasks. The expressive power of the MPORBM as a function of the MPO-rank is also investigated.
11/12/2018 ∙ by Cong Chen, et al.
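As a rough illustration of the idea (not the paper's actual parameterization), a matrix-shaped visible layer can interact with a matrix-shaped hidden layer through MPO cores whose contraction stands in for the full weight matrix of a vectorized RBM; all sizes and variable names below are hypothetical:

```python
import numpy as np

# Hypothetical sizes: 4x5 visible matrix, 3x2 hidden matrix, MPO rank 2.
m1, m2, n1, n2, r = 4, 5, 3, 2, 2
rng = np.random.default_rng(0)

# Two MPO cores jointly parameterize the 4-way weight tensor
# W[i1, j1, i2, j2] = sum_k A[i1, j1, k] * B[k, i2, j2].
A = rng.standard_normal((m1, n1, r))
B = rng.standard_normal((r, m2, n2))

V = rng.standard_normal((m1, m2))          # matrix-shaped visible layer

# Hidden pre-activation keeps the matrix format: contract V with the MPO.
H = np.einsum('ijk,klm,il->jm', A, B, V)   # shape (n1, n2)

# Parameter count: MPO cores vs the full weight matrix of a vectorized RBM.
mpo_params = A.size + B.size               # 24 + 20 = 44
full_params = (m1 * m2) * (n1 * n2)        # 20 * 6 = 120
```

The compression grows with the number of tensor modes: the full weight scales with the product of all dimensions, while the MPO cores scale with their sum (times the rank).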

Selfish Jobs with Favorite Machines: Price of Anarchy vs Strong Price of Anarchy
We consider the well-studied game-theoretic version of machine scheduling in which jobs correspond to self-interested users and machines correspond to resources. Each user chooses a machine trying to minimize her own cost, and such selfish behavior typically results in an equilibrium that is not globally optimal: an equilibrium is an allocation where no user can reduce her own cost by moving to another machine, and in general it need not minimize the makespan, i.e., the maximum load over the machines. We provide tight bounds on two well-studied notions in algorithmic game theory, namely the price of anarchy and the strong price of anarchy, in a machine scheduling setting that lies between the related-machine and unrelated-machine cases. Both notions compare the social cost (makespan) of the worst equilibrium to the optimum, with the strong price of anarchy restricting attention to a stronger form of equilibria. Our results extend a prior study comparing the price of anarchy to the strong price of anarchy for two related machines (Epstein, Acta Informatica 2010), thus providing further insight into the relation between these concepts. Our exact bounds give a qualitative and quantitative comparison between the two models. The bounds also show that our setting is indeed easier than that of two unrelated machines: in the latter, the strong price of anarchy is 2, while in ours it is strictly smaller.
09/19/2017 ∙ by Cong Chen, et al.
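The central definitions (makespan, pure Nash equilibrium) can be checked on a toy instance. The example below uses two identical machines and is illustrative only, not taken from the paper:

```python
from itertools import product

def makespan(alloc, sizes, m):
    """Maximum machine load under allocation alloc[job] -> machine."""
    loads = [0.0] * m
    for job, machine in enumerate(alloc):
        loads[machine] += sizes[job]
    return max(loads)

def is_nash(alloc, sizes, m):
    """Pure Nash equilibrium: no job can strictly lower its own cost
    (its machine's load) by unilaterally moving to another machine."""
    loads = [0.0] * m
    for job, machine in enumerate(alloc):
        loads[machine] += sizes[job]
    for job, machine in enumerate(alloc):
        for other in range(m):
            if other != machine and loads[other] + sizes[job] < loads[machine]:
                return False
    return True

# Toy instance: 2 identical machines, job sizes 2, 2, 1, 1.
sizes, m = [2, 2, 1, 1], 2
equil = [0, 0, 1, 1]   # loads {4, 2}: an equilibrium with makespan 4
opt = min(makespan(a, sizes, m) for a in product(range(m), repeat=len(sizes)))
```

Here the equilibrium has makespan 4 while the optimum is 3 (one size-2 and one size-1 job per machine), exhibiting the classic price-of-anarchy gap of 4/3 for two identical machines.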

Deep Compression of Sum-Product Networks on Tensor Networks
Sum-product networks (SPNs) represent an emerging class of neural networks with clear probabilistic semantics and superior inference speed over graphical models. This work reveals a strikingly intimate connection between SPNs and tensor networks, leading to a highly efficient representation that we call tensor SPNs (tSPNs). For the first time, by mapping an SPN onto a tSPN and employing novel optimization techniques, we demonstrate remarkable parameter compression with negligible loss in accuracy.
11/09/2018 ∙ by Ching-Yun Ko, et al.
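For readers unfamiliar with SPNs, a minimal sum-product network over two binary variables (a toy mixture model, unrelated to the paper's compression technique) evaluates as follows:

```python
# Toy SPN: root sum node (mixture) over two product nodes; each product
# node multiplies independent Bernoulli leaf distributions for X1 and X2.
leaves = {
    0: {'x1': 0.9, 'x2': 0.2},  # P(Xi = 1) in component 0
    1: {'x1': 0.1, 'x2': 0.7},  # P(Xi = 1) in component 1
}
weights = [0.4, 0.6]  # sum-node weights; must sum to 1 for a valid SPN

def spn_prob(x1, x2):
    """Evaluate the SPN: weighted sum of products of leaf probabilities."""
    total = 0.0
    for c, w in enumerate(weights):
        p1 = leaves[c]['x1'] if x1 else 1 - leaves[c]['x1']
        p2 = leaves[c]['x2'] if x2 else 1 - leaves[c]['x2']
        total += w * p1 * p2   # product node, then weighted sum at the root
    return total

# A complete and decomposable SPN defines a normalized distribution.
Z = sum(spn_prob(a, b) for a in (0, 1) for b in (0, 1))
```

Inference (any marginal or conditional query) costs one bottom-up pass over the network, which is the speed advantage the abstract refers to.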

Online scheduling of jobs with favorite machines
This work introduces a natural variant of the online machine scheduling problem on unrelated machines, which we refer to as the favorite machine model. In this model, each job has a minimum processing time on a certain set of machines, called its favorite machines, and longer processing times on the other machines. This type of cost (processing time) arises quite naturally in many practical problems. In the online version, jobs arrive one by one and must be allocated irrevocably upon arrival, without knowledge of future jobs. We consider online algorithms for allocating jobs so as to minimize the makespan. We obtain tight bounds on the competitive ratio of the greedy algorithm and characterize the optimal competitive ratio for the favorite machine model. Our bounds generalize previous results on the greedy algorithm and the optimal algorithm for unrelated machines and for identical machines. We also study a further restriction of the model, called the symmetric favorite machine model, where the machines are partitioned equally into two groups and each job has one of the groups as its favorite machines. We obtain a 2.675-competitive algorithm for this case, and the best possible algorithm for the two-machine case.
12/04/2018 ∙ by Cong Chen, et al.
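The greedy rule analyzed in such online settings assigns each arriving job to the machine that minimizes its resulting load. A minimal sketch on a hypothetical favorite-machine instance (each job runs in time 1 on its favorite machine and time 3 elsewhere):

```python
def greedy_online(jobs, m):
    """Greedy list scheduling: each arriving job goes to the machine
    minimizing its resulting load. jobs[j] is job j's per-machine
    processing-time list; returns (assignment, makespan)."""
    loads = [0.0] * m
    assignment = []
    for times in jobs:
        best = min(range(m), key=lambda i: loads[i] + times[i])
        loads[best] += times[best]
        assignment.append(best)
    return assignment, max(loads)

# Hypothetical instance: jobs alternate favorites between two machines.
jobs = [[1, 3], [3, 1], [1, 3], [3, 1]]
assignment, ms = greedy_online(jobs, 2)
```

On this easy instance greedy places every job on its favorite machine; the tight competitive-ratio bounds in the paper come from adversarial arrival orders, which this sketch does not reproduce.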

A Support Tensor Train Machine
There has been growing interest in extending traditional vector-based machine learning techniques to their tensor forms. An example is the support tensor machine (STM), which utilizes a rank-one tensor to capture the data structure, thereby alleviating the overfitting and curse-of-dimensionality problems of the conventional support vector machine (SVM). However, the expressive power of a rank-one tensor is too restrictive for many real-world data. To overcome this limitation, we introduce the support tensor train machine (STTM) by replacing the rank-one tensor in an STM with a tensor train. Experiments confirm the superiority of the STTM over the SVM and STM.
04/17/2018 ∙ by Cong Chen, et al.
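The tensor-train format that replaces the STM's rank-one tensor can be sketched as a chain of 3-way cores; the shapes and TT-ranks below are arbitrary illustrations:

```python
import numpy as np

# Hypothetical 4x5x6 tensor in tensor-train (TT) format with TT-ranks
# (1, 2, 3, 1): T[i,j,k] = G1[0,i,:] @ G2[:,j,:] @ G3[:,k,0].
rng = np.random.default_rng(1)
G1 = rng.standard_normal((1, 4, 2))
G2 = rng.standard_normal((2, 5, 3))
G3 = rng.standard_normal((3, 6, 1))

# Contract the chain of cores back into the full tensor.
T = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)

# Storage: TT cores vs the full tensor's entries.
tt_params = G1.size + G2.size + G3.size    # 8 + 30 + 18 = 56
full_params = 4 * 5 * 6                    # 120

# A rank-one tensor (as in the STM) is the special case with all
# internal TT-ranks equal to 1: an outer product of three vectors.
u, v, w = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
rank_one = np.einsum('i,j,k->ijk', u, v, w)
```

Raising the TT-ranks interpolates between the rank-one STM and a fully general tensor, which is the source of the STTM's added expressive power.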