Learning Networked Exponential Families with Network Lasso
The data arising in many important big-data applications, ranging from social networks to network medicine, consist of high-dimensional data points related by an intrinsic (complex) network structure. In order to jointly leverage the information conveyed by the network structure and the statistical power contained in the high-dimensional data points, we propose networked exponential families. We apply the network Lasso to learn networked exponential families as a probabilistic model for heterogeneous datasets with an intrinsic network structure. To allow for accurate learning from high-dimensional data, we borrow statistical strength across the dataset via the intrinsic network structure. The resulting method amounts to regularized empirical risk minimization with the total variation of the model parameters as the regularizer. This minimization problem is a non-smooth convex optimization problem, which we solve using a primal-dual splitting method. This method is appealing for big-data applications, as it can be implemented as a highly scalable message-passing algorithm.
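To make the optimization concrete, the following is a minimal sketch of TV-regularized empirical risk minimization on a graph, solved with a Chambolle-Pock-style primal-dual splitting. It is not the paper's implementation: it uses a scalar parameter per node and a squared-error loss on labeled nodes (a fused-lasso special case of the network Lasso), and the function name, toy chain graph, and step-size choices are illustrative assumptions.

```python
import numpy as np

def network_lasso_tv(edges, y, labeled, lam, n, num_iter=5000):
    """Solve min_w sum_{i in labeled} (w_i - y_i)^2 + lam * sum_{(i,j) in E} |w_i - w_j|
    with a primal-dual (Chambolle-Pock) splitting. Illustrative sketch, not the
    paper's message-passing implementation."""
    # Graph incidence operator D: (D w)_e = w_i - w_j for each edge e = (i, j).
    D = np.zeros((len(edges), n))
    for e, (i, j) in enumerate(edges):
        D[e, i], D[e, j] = 1.0, -1.0
    # Step sizes must satisfy tau * sigma * ||D||^2 < 1 for convergence.
    L2 = np.linalg.norm(D, 2) ** 2
    tau = sigma = 0.9 / np.sqrt(L2)
    mask = np.zeros(n, dtype=bool)
    mask[labeled] = True
    w = np.zeros(n)
    w_bar = w.copy()
    u = np.zeros(len(edges))
    for _ in range(num_iter):
        # Dual step: gradient ascent on u, then prox of the conjugate of
        # lam * |.|_1, which is clipping to [-lam, lam].
        u = np.clip(u + sigma * (D @ w_bar), -lam, lam)
        # Primal step: gradient descent on w, then prox of the labeled
        # squared loss (closed form; unlabeled nodes are left unchanged).
        v = w - tau * (D.T @ u)
        w_new = v.copy()
        w_new[mask] = (v[mask] + 2 * tau * y[mask]) / (1 + 2 * tau)
        # Over-relaxation of the primal iterate.
        w_bar = 2 * w_new - w
        w = w_new
    return w

# Toy example: a chain graph with two clusters of labels (0s then 1s).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
w = network_lasso_tv(edges, y, labeled=range(6), lam=0.3, n=6)
# w is approximately piecewise constant: near 0.05 on the first cluster
# and near 0.95 on the second (the analytic fused-lasso minimizer here).
```

The clipping step is where the total-variation regularizer enters: the dual variable on each edge is confined to [-lam, lam], which is exactly the proximal map of the conjugate of the edge-wise absolute-difference penalty. Because each update only touches a node and its incident edges, the same iteration can be organized as the scalable message-passing scheme the abstract mentions.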