MAP- and MLE-Based Teaching

07/11/2023
by Hans Ulrich Simon, et al.

Imagine a learner L who tries to infer a hidden concept from a collection of observations. Building on the work [4] of Ferri et al., we assume the learner to be parameterized by priors P(c) and by c-conditional likelihoods P(z|c), where c ranges over all concepts in a given class C and z ranges over all observations in an observation set Z. L is called a MAP-learner (resp. an MLE-learner) if it treats a collection S of observations as a random sample and returns the concept with the maximum a posteriori probability (resp. the concept that maximizes the c-conditional likelihood of S). Depending on whether L assumes that S is obtained from ordered or unordered sampling, and from sampling with or without replacement, we distinguish four sampling modes. Given a target concept c in C, a teacher for a MAP-learner L aims at finding a smallest collection of observations that causes L to return c. This approach leads in a natural manner to various notions of a MAP- or MLE-teaching dimension of a concept class C.

Our main results are as follows. We show that this teaching model has some desirable monotonicity properties, and we clarify how the four sampling modes are related to each other. For the important special case in which concepts are subsets of a domain and observations are 0,1-labeled examples, we obtain additional results. First, we give a graph-theoretic characterization of the MAP- and MLE-teaching dimension associated with an optimally parameterized MAP-learner. From this central result, several others are easy to derive. It is shown, for instance, that the MLE-teaching dimension either equals the MAP-teaching dimension or exceeds it by 1. It is shown furthermore that these dimensions can be bounded from above by the so-called antichain number, the VC-dimension, and related combinatorial parameters. Moreover, they can be computed in polynomial time.
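To make the definitions above concrete, here is a minimal Python sketch, under assumptions of our own that the paper does not fix: concepts are the subsets of a three-element domain, the prior is uniform, and the c-conditional likelihood of a labeled example is uniform over the examples consistent with c (and 0 otherwise), which corresponds to ordered sampling with replacement. It implements a MAP-learner and a brute-force search for a smallest sample that makes the learner return a given target; with these particular uniform parameters the search coincides with computing a classical teaching set.

```python
from itertools import chain, combinations

# Toy setup (our assumption, not the paper's): concepts are subsets of a
# 3-element domain; observations are 0/1-labeled examples (x, label).
domain = [0, 1, 2]
concepts = [frozenset(s) for s in chain.from_iterable(
    combinations(domain, r) for r in range(len(domain) + 1))]

# Uniform prior P(c) over the concept class.
prior = {c: 1.0 / len(concepts) for c in concepts}

def likelihood(obs, c):
    """c-conditional likelihood P(z|c) of one labeled example: uniform over
    the |domain| examples consistent with c, and 0 if z contradicts c."""
    x, label = obs
    consistent = (x in c) == bool(label)
    return 1.0 / len(domain) if consistent else 0.0

def map_learner(sample):
    """Return the concept with maximum a posteriori probability given the
    sample (ordered sampling with replacement), or None if the maximizer
    is not unique -- an ambiguous sample teaches nothing."""
    def posterior(c):
        p = prior[c]
        for z in sample:
            p *= likelihood(z, c)
        return p
    best = max(concepts, key=posterior)
    unique = sum(1 for c in concepts if posterior(c) == posterior(best)) == 1
    return best if unique else None

def map_teaching_size(target):
    """Brute-force the size of a smallest set of labeled examples that
    causes the MAP-learner to return `target`."""
    examples = [(x, b) for x in domain for b in (0, 1)]
    for k in range(len(examples) + 1):
        for sample in combinations(examples, k):
            if map_learner(sample) == target:
                return k
    return None
```

For instance, a single positive example (0, 1) leaves four concepts tied at maximal posterior, so the learner returns None; for this powerset class every concept needs all three examples, so `map_teaching_size` returns 3 for each target.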

