Knowledge Aggregation via Epsilon Model Spaces

05/20/2018
by Neel Guha, et al.

In many practical applications, machine learning is distributed across multiple agents, where each agent learns a different task and/or learns from a different dataset. We present Epsilon Model Spaces (EMS), a framework for learning a global model by aggregating the local models learned by each agent. Our approach forgoes sharing of data between agents, makes no assumptions about the distribution of data across agents, and requires minimal communication between agents. We empirically validate our techniques on MNIST experiments and discuss how EMS can generalize to a wide range of problem settings, including federated averaging and catastrophic forgetting. We believe our framework to be among the first to lay out a general methodology for "combining" distinct models.
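To make the aggregation setting concrete, here is a minimal sketch of the federated-averaging baseline the abstract mentions: each agent trains locally and communicates only its parameters, which a coordinator averages coordinate-wise. This is an illustrative baseline under assumed vector-valued parameters, not the paper's EMS construction; the function name `average_models` and the toy agent weights are hypothetical.

```python
import numpy as np

def average_models(local_weights):
    """Coordinate-wise average of per-agent parameter vectors.

    local_weights: list of 1-D numpy arrays of identical shape,
    one per agent. Only parameters are shared, never raw data.
    """
    return np.stack(local_weights).mean(axis=0)

# Three hypothetical agents' locally learned parameters
agents = [np.array([1.0, 2.0]),
          np.array([3.0, 4.0]),
          np.array([5.0, 6.0])]
global_model = average_models(agents)
print(global_model)  # → [3. 4.]
```

Each round costs one parameter vector per agent in communication, which is what "minimal communication" means in this setting: cost scales with model size, not dataset size.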
