
The Network Nullspace Property for Compressed Sensing of Big Data over Networks

by Alexander Jung, et al.

We adapt the nullspace property of compressed sensing for sparse vectors to semi-supervised learning of labels for network-structured datasets. In particular, we derive a sufficient condition, which we term the network nullspace property, for convex optimization methods to accurately learn labels which form smooth graph signals. The network nullspace property involves both the network topology and the sampling strategy and can be used to guide the design of efficient sampling strategies, i.e., the selection of those data points whose labels provide the most information for the learning task.
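The abstract describes recovering labels that form a smooth graph signal from the labels of a few sampled nodes via convex optimization. As a loose illustration only (the paper uses total variation minimization and a nullspace-type recovery condition; this sketch instead uses quadratic Laplacian regularization for simplicity, and the toy graph, sampling set, and parameter `lam` are invented for the example):

```python
import numpy as np

# Toy graph: two 4-node clusters joined by a single bridge edge.
# (Hypothetical example, not from the paper.)
edges = [(0, 1), (0, 2), (1, 3), (2, 3),   # cluster A
         (4, 5), (4, 6), (5, 7), (6, 7),   # cluster B
         (3, 4)]                           # bridge
n = 8
L = np.zeros((n, n))                       # graph Laplacian L = D - W
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

# True labels form a piecewise-constant (smooth) graph signal.
x_true = np.array([1., 1., 1., 1., -1., -1., -1., -1.])

# Sampling set M: one labeled node per cluster.
M = [0, 7]
b = np.zeros(n)
b[M] = x_true[M]

# Laplacian-regularized least squares:
#   min_x  sum_{i in M} (x_i - y_i)^2 + lam * x^T L x
S = np.zeros((n, n))
S[M, M] = 1.0                              # selects the sampled nodes
lam = 0.1
x_hat = np.linalg.solve(S + lam * L, b)
```

With only two labels, the solver propagates them along the edges, so the sign of every recovered entry matches the true cluster label; which nodes must be sampled for such recovery to succeed is exactly what the network nullspace property characterizes.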




Semi-supervised Learning in Network-Structured Data via Total Variation Minimization

We propose and analyze a method for semi-supervised learning from partia...

Anisotropic compressed sensing for non-Cartesian MRI acquisitions

In the present note we develop some theoretical results in the theory of...

Randomness and isometries in echo state networks and compressed sensing

Although largely different concepts, echo state networks and compressed ...

Recovery Conditions and Sampling Strategies for Network Lasso

The network Lasso is a recently proposed convex optimization method for ...

Random Walk Sampling for Big Data over Networks

It has been shown recently that graph signals with small total variation...

Granger Causality for Compressively Sensed Sparse Signals

Compressed sensing is a scheme that allows for sparse signals to be acqu...

Analysis of Network Lasso For Semi-Supervised Regression

We characterize the statistical properties of network Lasso for semi-sup...