ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations

10/09/2019
by Vivek Sharma, et al.

Split Learning, a framework for distributed computation in which model components are split between client and server, has recently been developed (Vepakomma et al., 2018b). As Split Learning scales to many different model components, a method is needed for matching client-side model components with the best server-side model components. A solution to this problem was introduced in the ExpertMatcher framework (Sharma et al., 2019), which uses autoencoders to match raw data to models. In this work, we propose an extension of ExpertMatcher in which matching is performed without sharing the client's raw data representation. The technique applies to settings with local clients and centralized expert ML models where sharing raw data is constrained.
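The matching step can be illustrated with a short sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes each server-side expert publishes a centroid of hidden representations produced by the autoencoder trained on its domain, and that the client summarizes its local data with the mean hidden representation from its own encoder; `client_summary`, `match_expert`, and `expert_centroids` are illustrative names. The server selects the expert whose centroid is most similar to the client's summary, so no raw data leaves the client.

```python
import numpy as np

# Illustrative sketch only (not the authors' code). Assumptions: each
# server-side expert exposes a centroid of hidden representations from the
# autoencoder trained on its domain, and the client holds a local encoder.
# Only a compact summary of hidden activations is shared, never raw data.

def client_summary(encoder, local_data):
    """Mean hidden representation of the client's local data."""
    hidden = encoder(local_data)   # shape: (n_samples, hidden_dim)
    return hidden.mean(axis=0)     # shape: (hidden_dim,)

def match_expert(summary, expert_centroids):
    """Return the expert whose centroid is most cosine-similar to the summary."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(expert_centroids, key=lambda name: cosine(summary, expert_centroids[name]))

# Toy usage with a random nonlinear "encoder" standing in for a trained one.
rng = np.random.default_rng(0)
toy_encoder = lambda x: np.tanh(x @ rng.normal(size=(16, 8)))
centroids = {f"expert_{i}": rng.normal(size=8) for i in range(3)}
summary = client_summary(toy_encoder, rng.normal(size=(32, 16)))
print(match_expert(summary, centroids))
```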
