On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning

08/06/2022
by Congyu Fang, et al.

Proof-of-learning (PoL) proposes that a model owner use machine learning training checkpoints to establish a proof of having expended the compute necessary for training. The authors of PoL forgo cryptographic approaches, trading rigorous security guarantees for scalability to deep learning: the protocol applies directly to stochastic gradient descent and its adaptive variants. This lack of formal analysis leaves open the possibility that an attacker could spoof a proof for a model they did not train. We contribute a formal analysis of why the PoL protocol cannot be formally (dis)proven to be robust against spoofing adversaries. To do so, we disentangle the two roles of proof verification in PoL: (a) efficiently determining whether a proof is a valid gradient descent trajectory, and (b) establishing precedence by making it more expensive to craft a proof after training completes (i.e., to spoof). We show that efficient verification entails a tradeoff between accepting legitimate proofs and rejecting invalid ones, because deep learning necessarily involves noise; without a precise analytical model of how this noise affects training, we cannot formally guarantee that a PoL verification algorithm is robust. We then demonstrate that establishing precedence robustly also reduces to an open problem in learning theory: spoofing a PoL after training completes is akin to finding a different trajectory with the same endpoint in non-convex learning, and we do not rigorously know whether a priori knowledge of the final model weights aids the discovery of such trajectories. We conclude that, until these open problems are addressed, relying more heavily on cryptography is likely needed to formulate a new class of PoL protocols with formal robustness guarantees; in particular, this would help establish precedence. As a by-product of insights from our analysis, we also demonstrate two novel attacks against PoL.
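To make the two roles of verification concrete, below is a minimal sketch of a PoL-style record-and-replay loop on a toy linear model trained with plain SGD in NumPy. The checkpoint interval k, the L2 distance check, the tolerance delta, and all function names here are illustrative assumptions, not the protocol's exact specification. Because this toy replay is fully deterministic, verification succeeds with delta near zero; real deep learning training is not bitwise replayable, which is precisely why delta must grow and the accept/reject tradeoff described above arises.

```python
# Sketch of a PoL record/verify loop on a toy model. The interval k,
# tolerance delta, and L2 metric are illustrative choices, not the
# paper's exact parameters.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                      # toy dataset
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=256)

def sgd_step(w, batch, lr=0.01):
    """One SGD step on squared loss over the batch indices `batch`."""
    xb, yb = X[batch], y[batch]
    grad = 2 * xb.T @ (xb @ w - yb) / len(batch)
    return w - lr * grad

def train_with_proof(steps=100, k=10, batch_size=32):
    """Prover: train, logging a checkpoint plus its batch indices every k steps."""
    w = np.zeros(8)
    proof = [(0, w.copy(), None)]
    batches = []
    for t in range(1, steps + 1):
        batch = rng.choice(len(X), size=batch_size, replace=False)
        batches.append(batch)
        w = sgd_step(w, batch)
        if t % k == 0:
            proof.append((t, w.copy(), batches[-k:]))
    return w, proof

def verify(proof, delta=1e-6):
    """Verifier: replay each k-step segment and accept if the recomputed
    checkpoint lands within `delta` of the claimed one. Under real training
    noise, delta must be larger, trading false rejects for false accepts."""
    for (t0, w0, _), (t1, w1, seg) in zip(proof, proof[1:]):
        w = w0.copy()
        for batch in seg:
            w = sgd_step(w, batch)
        if np.linalg.norm(w - w1) > delta:
            return False, t1   # segment ending at step t1 failed to replay
    return True, None

w_final, proof = train_with_proof()
print(verify(proof))   # (True, None) here; real DL training is noisier
```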

