Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks

06/20/2020 ∙ by Lixin Fan, et al.

This paper investigates the capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks. First, we propose to quantitatively measure the trade-off between model accuracy and the privacy losses incurred by reconstruction, tracing and membership attacks. Second, we formulate reconstruction attacks as solving a noisy system of linear equations, and prove that such attacks are guaranteed to be defeated if condition (2) is not fulfilled. Third, based on this theoretical analysis, a novel Secret Polarization Network (SPN) is proposed to thwart privacy attacks that pose serious challenges to existing PPDL methods. Extensive experiments show that model accuracies are improved on average by 5-20% in regimes where data privacy is satisfactorily protected.
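To make the "noisy system of linear equations" formulation concrete, the sketch below (our own illustration, not the paper's code; all variable names and the noise scale are assumptions) shows how the shared gradients of a single fully connected layer already determine the private input through linear equations, and how defensive noise turns that into a noisy system the attacker must solve.

# Illustrative sketch only: gradient-based reconstruction of the private input
# of one fully connected layer, posed as a (noisy) linear system.
import numpy as np

rng = np.random.default_rng(0)

# Private record and a linear layer y = W x + b trained with a squared loss.
x_true = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

# Victim computes and shares gradients (as in federated learning).
y = W @ x_true + b
delta = 2.0 * (y - target)        # dL/dy for L = ||y - target||^2
grad_W = np.outer(delta, x_true)  # dL/dW = delta x^T
grad_b = delta                    # dL/db = delta

# Attacker's view: grad_W[i, :] = grad_b[i] * x, i.e. one linear equation in x
# per output unit. Stacking them gives A x = c; noise added by a defence
# (e.g. a DP-style perturbation, scale assumed here) makes the system noisy.
noise = rng.normal(scale=1e-3, size=grad_W.shape)
A = np.kron(grad_b.reshape(-1, 1), np.eye(x_true.size))  # shape (3*4, 4)
c = (grad_W + noise).reshape(-1)
x_rec, *_ = np.linalg.lstsq(A, c, rcond=None)

print("true x:     ", np.round(x_true, 3))
print("recovered x:", np.round(x_rec, 3))

Whether the attacker can still solve this perturbed system accurately depends on how strongly the noise corrupts the equations, which is the kind of condition the paper's analysis makes precise.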
