Data-driven Regularized Inference Privacy
Data is widely used by service providers as input to inference systems that perform decision making for authorized tasks. The raw data, however, allows a service provider to infer other sensitive information it has not been authorized to obtain. We propose a data-driven, privacy-preserving inference framework that sanitizes data so as to prevent leakage of sensitive information present in the raw data, while ensuring that the sanitized data remains compatible with the service provider's legacy inference system. We develop an inference privacy framework based on the variational method and incorporate maximum mean discrepancy and domain adaptation as techniques to regularize the domain of the sanitized data and ensure its legacy compatibility. However, the variational method yields weak privacy when the underlying data distribution is hard to approximate, and it may also face difficulties in handling continuous private variables. To overcome this, we propose an alternative formulation of the privacy metric using maximal correlation, and we present empirical methods to estimate it. Finally, we develop a deep learning model as an example of the proposed inference privacy framework. Numerical experiments verify the feasibility of our approach.
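As a rough illustration of the maximum mean discrepancy regularizer mentioned above (a minimal sketch, not the paper's exact formulation), the snippet below computes a biased RBF-kernel estimate of the squared MMD between a batch of sanitized samples and a batch of samples from the legacy data domain; the names `sanitized`, `legacy`, the bandwidth `sigma`, and the weight `lam` are illustrative assumptions.

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy
    # between the empirical distributions of x and y.
    k_xx = rbf_kernel(x, x, sigma).mean()
    k_yy = rbf_kernel(y, y, sigma).mean()
    k_xy = rbf_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Hypothetical usage: penalize the sanitizer when its output
# distribution drifts away from the legacy input domain.
sanitized = torch.randn(64, 32)  # stand-in for sanitizer outputs
legacy = torch.randn(64, 32)     # stand-in for raw-domain samples
lam = 0.1
domain_penalty = lam * mmd2(sanitized, legacy)
```

In a framework of this kind, such a penalty would typically be added to the training loss of the sanitizer so that a small MMD keeps the sanitized data statistically close to the domain the legacy inference system was built for.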