Earning Extra Performance from Restrictive Feedbacks

04/28/2023
by   Jing Li, et al.

Many machine learning applications encounter situations where a model provider must further refine a previously trained model to satisfy the specific needs of local users. This problem reduces to the standard model tuning paradigm if the target data can be fed to the model. However, in a wide range of practical cases the target data is not shared with the model provider, while some evaluations of the model remain accessible. In this paper, we formally set up a challenge named Earning eXtra PerformancE from restriCTive feEDbacks (EXPECTED) to describe this form of model tuning problem. Concretely, EXPECTED allows a model provider to access the operational performance of a candidate model multiple times via feedback from a local user (or a group of users). The goal of the model provider is to eventually deliver a satisfactory model to the local user(s) by utilizing this feedback. Unlike existing model tuning methods, where the target data is always available for computing model gradients, a model provider in EXPECTED sees only feedback, which can be as simple as a scalar such as inference accuracy or usage rate. To enable tuning under this restriction, we propose to characterize the geometry of the model performance with respect to the model parameters by exploring the parameters' distribution. In particular, for deep models, whose parameters are distributed across multiple layers, we further tailor a more query-efficient algorithm that tunes layerwise, devoting more attention to the layers that pay off better. Our theoretical analyses justify the proposed algorithms in terms of both efficacy and efficiency. Extensive experiments on different applications demonstrate that our work forges a sound solution to the EXPECTED problem.
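The abstract does not spell out the algorithm, but the setting — tuning parameters when only a scalar score is returned per query — is the classic zeroth-order regime. The sketch below illustrates the general idea with a natural-evolution-strategies-style update: sample perturbations of the parameters, query the scalar feedback for each, and move along the feedback-weighted average perturbation. The toy quadratic `feedback` function and all names are illustrative stand-ins, not the paper's method or evaluation.

```python
import numpy as np

def feedback(theta):
    """Stand-in for the local user's opaque evaluation (e.g. inference
    accuracy): the provider sees only this scalar, never the data.
    Here, a toy quadratic with its optimum at `target`."""
    target = np.array([0.5, -1.2, 2.0])
    return -np.sum((theta - target) ** 2)

def tune_from_feedback(theta, n_rounds=200, pop=20, sigma=0.1, lr=0.05, seed=0):
    """Zeroth-order tuning from scalar feedback (illustrative sketch):
    estimate an ascent direction from `pop` random perturbations per round."""
    rng = np.random.default_rng(seed)
    for _ in range(n_rounds):
        eps = rng.standard_normal((pop, theta.size))       # perturbation directions
        scores = np.array([feedback(theta + sigma * e) for e in eps])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # rank-free normalization
        theta = theta + lr / (pop * sigma) * eps.T @ scores  # feedback-weighted step
    return theta

initial = np.zeros(3)
tuned = tune_from_feedback(initial)
```

After tuning, `feedback(tuned)` should be substantially higher than `feedback(initial)`, even though no gradient of the user's objective was ever computed. The paper's layerwise variant would additionally partition the parameter vector by layer and spend more queries on the layers whose perturbations yield larger feedback gains.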


