Generalizing the theory of cooperative inference

10/04/2018
by Pei Wang, et al.

Cooperative information sharing is important to theories of human learning and has potential implications for machine learning. Prior work derived conditions for achieving optimal Cooperative Inference under strong, relatively restrictive assumptions. We relax these assumptions by demonstrating convergence for any discrete joint distribution, robustness through equivalence classes and stability under perturbation, and effectiveness via bounds derived from structural properties of the original joint distribution. We provide geometric interpretations, connections to and implications for optimal transport, and connections to importance sampling. We conclude by outlining open questions and challenges to realizing the promise of Cooperative Inference.
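The iteration behind these results is, in prior formulations of Cooperative Inference, an alternating normalization of the hypothesis-by-data joint matrix: the teacher's data-selection distributions and the learner's posteriors are rescaled in turn until they are mutually consistent, which is also what ties the theory to optimal transport. A minimal sketch of that fixed-point iteration, assuming rows index hypotheses and columns index data (the function name, tolerance, and toy matrix are illustrative, not from the paper):

```python
import numpy as np

def cooperative_inference(joint, tol=1e-10, max_iters=10_000):
    """Cooperative Inference as alternating row/column normalization
    of a nonnegative hypothesis-by-data joint matrix.

    Assumes `joint` has no all-zero row or column, e.g. P(d|h)
    weighted by a prior over hypotheses.
    """
    M = np.asarray(joint, dtype=float).copy()
    for _ in range(max_iters):
        prev = M.copy()
        # Teacher step: each hypothesis' teaching distribution
        # over data sums to 1.
        M /= M.sum(axis=1, keepdims=True)
        # Learner step: each datum's posterior over hypotheses
        # sums to 1.
        M /= M.sum(axis=0, keepdims=True)
        if np.abs(M - prev).max() < tol:  # fixed point reached
            break
    return M

# Toy 3x3 joint distribution over (hypothesis, datum) pairs.
M = cooperative_inference(np.array([[0.20, 0.10, 0.05],
                                    [0.10, 0.30, 0.10],
                                    [0.05, 0.10, 0.10]]))
print(M.round(3))
```

For a square matrix with suitable support the limit is doubly stochastic, so the teaching and learning distributions agree exactly; the abstract's contribution is extending such convergence and robustness guarantees to any discrete joint distribution.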


Related research:

02/13/2020 · Sequential Cooperative Bayesian Inference
Cooperation is often implicitly assumed when learning from other agents...

10/07/2019 · A mathematical theory of cooperative communication
Cooperative communication plays a central role in theories of human cogn...

11/11/2019 · Optimal partitions and Semi-discrete optimal transport
In the current book I suggest an off-road path to the subject of optimal...

05/08/2023 · Earth Movers in The Big Data Era: A Review of Optimal Transport in Machine Learning
Optimal Transport (OT) is a mathematical framework that first emerged in...

05/25/2023 · Characterizing Out-of-Distribution Error via Optimal Transport
Out-of-distribution (OOD) data poses serious challenges in deployed mach...

12/14/2020 · Efficient Querying for Cooperative Probabilistic Commitments
Multiagent systems can use commitments as the core of a general coordina...

12/21/2020 · Making transport more robust and interpretable by moving data through a small number of anchor points
Optimal transport (OT) is a widely used technique for distribution align...