Distribution-Independent Evolvability of Linear Threshold Functions

03/25/2011
by Vitaly Feldman, et al.

Valiant's (2007) model of evolvability views the evolutionary process of acquiring useful functionality as a restricted form of learning from random examples. Linear threshold functions and their various subclasses, such as conjunctions and decision lists, play a fundamental role in learning theory, and hence their evolvability has been the primary focus of research on Valiant's framework. One of the main open problems regarding the model is whether conjunctions are evolvable distribution-independently (Feldman and Valiant, 2008). We show that the answer is negative. Our proof is based on a new combinatorial parameter of a concept class that lower-bounds the complexity of learning from correlations. We contrast the lower bound with a proof that linear threshold functions having a non-negligible margin on the data points are evolvable distribution-independently via a simple mutation algorithm. Our algorithm relies on a non-linear loss function being used to select hypotheses, in place of the 0-1 loss of Valiant's (2007) original definition. The proof of evolvability requires that the loss function satisfy several mild conditions that are, for example, satisfied by the quadratic loss function studied in several other works (Michael, 2007; Feldman, 2009; Valiant, 2010). An important property of our evolution algorithm is monotonicity; that is, the algorithm guarantees evolvability without any decreases in performance. Previously, monotone evolvability was only shown for conjunctions with quadratic loss (Feldman, 2009) or when the distribution on the domain is severely restricted (Michael, 2007; Feldman, 2009; Kanade et al., 2010).
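To make the mutation-and-selection idea concrete, here is a minimal sketch of a single generation for a linear threshold hypothesis under quadratic loss. This is not the paper's algorithm: the hypothesis representation (a linear form clipped to [-1, 1]), the Gaussian weight perturbations, and the step, num_mutations, and tolerance parameters are all hypothetical choices for illustration. The clipped real-valued output is what makes the quadratic loss genuinely different from 0-1 loss; on purely Boolean outputs the two coincide up to scaling.

```python
import numpy as np

def hypothesis(w, X):
    """Real-valued hypothesis: the linear form X @ w clipped to [-1, 1].
    (Hypothetical representation; chosen so quadratic loss rewards margin.)"""
    return np.clip(X @ w, -1.0, 1.0)

def quadratic_loss(w, X, y):
    """Empirical quadratic loss against labels y in {-1, +1}."""
    return np.mean((y - hypothesis(w, X)) ** 2)

def evolve_step(w, X, y, num_mutations=50, step=0.1, tolerance=1e-3, rng=None):
    """One generation: sample random mutations of w and pick one that
    strictly decreases empirical loss. If none exists, keep w unchanged,
    so performance never decreases (monotone selection)."""
    rng = np.random.default_rng() if rng is None else rng
    current_loss = quadratic_loss(w, X, y)
    candidates = [w + step * rng.standard_normal(w.shape)
                  for _ in range(num_mutations)]
    beneficial = [c for c in candidates
                  if quadratic_loss(c, X, y) < current_loss - tolerance]
    if beneficial:
        return beneficial[rng.integers(len(beneficial))]
    return w
```

In this sketch, selection among only loss-decreasing mutations is what gives the monotonicity property mentioned above: the empirical performance of the evolving hypothesis never drops from one generation to the next.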
