Learning from networked examples in a k-partite graph

06/03/2013
by Yuyi Wang, et al.

Many machine learning algorithms are based on the assumption that training examples are drawn independently. However, this assumption no longer holds when learning from a networked sample, where two or more training examples may share common features. We propose an efficient weighting method for learning from networked examples and prove a sample error bound that improves on previous work.
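To make the idea concrete, here is a minimal sketch of one plausible weighting scheme for networked examples. It assumes each example is represented as the set of feature objects (vertices of the graph) it depends on, and it enforces the constraint that the total weight of examples touching any shared feature object stays at most 1, so that correlated examples cannot dominate the weighted sample. The greedy assignment below is an illustration only; the function name and greedy strategy are hypothetical, and an optimal weighting would typically be found by solving a linear program rather than greedily.

```python
def greedy_example_weights(examples):
    """Assign each networked example a weight in [0, 1] such that, for
    every shared feature object, the combined weight of the examples
    that depend on it never exceeds 1 (a feasible fractional matching).

    `examples` is a list of sets; each set holds the identifiers of the
    feature objects (graph vertices) that the example is built from.
    """
    capacity_used = {}  # weight already consumed per feature object
    weights = []
    for example in examples:
        # An example can take no more weight than its most-contended
        # feature object still allows.
        w = min(1.0 - capacity_used.get(v, 0.0) for v in example)
        w = max(w, 0.0)
        for v in example:
            capacity_used[v] = capacity_used.get(v, 0.0) + w
        weights.append(w)
    return weights


# Three examples sharing feature objects along a chain: the middle
# example overlaps both neighbours, so the greedy pass down-weights it.
weights = greedy_example_weights([{"a", "b"}, {"b", "c"}, {"c", "d"}])
print(weights)
```

Note that this greedy pass is order-dependent and only yields a feasible, not necessarily optimal, weighting; its purpose is to illustrate the constraint that drives the improved error bound, namely that no shared feature object contributes more than unit weight to the effective sample.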


