Privacy-Preserving Mechanisms on Data-Driven Deep Learning Applications

Date:

Download slides here

With the advent of the deep learning era, ubiquitous data-driven applications, such as medical diagnosis recognition, human attribute recognition, and retail checkout recognition, are powered by advanced deep learning models. However, because these applications rely on massive amounts of data uploaded to third-party platforms, the underlying deep learning models face a serious risk of privacy leakage. For example, attackers can infer private information from extracted features and/or the victim model's weights, causing substantial economic losses for individuals and institutions. More problematically, attackers can even design attack mechanisms against black-box applications (APIs) using only the distribution of the model's outputs. Although a number of privacy-preserving approaches have been proposed, their inherent drawbacks limit how effectively they protect privacy in real applications. In this talk, we share our novel designs of privacy-preserving mechanisms that seek a tradeoff between data privacy protection and data utility in applications.
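To make the black-box threat concrete, here is a minimal, hypothetical sketch (not the speaker's method) of a confidence-based membership-inference attack: an attacker who only sees the softmax output distribution from an API can guess whether a sample was in the training set, because trained-on samples tend to receive more peaked, confident predictions. The threshold value and the simulated API outputs below are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to a probability distribution per row."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def membership_guess(probs, threshold=0.9):
    """Guess 'training member' when the model's top confidence exceeds a threshold.
    The attacker needs only the output distribution, nothing from inside the model."""
    return probs.max(axis=1) >= threshold

rng = np.random.default_rng(0)
# Simulated black-box API outputs (an assumption for illustration):
# members get sharply peaked logits, non-members get flatter ones.
member_logits = rng.normal(0.0, 1.0, (100, 10))
member_logits[:, 0] += 6.0
nonmember_logits = rng.normal(0.0, 1.0, (100, 10))
nonmember_logits[:, 0] += 1.0

tpr = membership_guess(softmax(member_logits)).mean()     # fraction of members caught
fpr = membership_guess(softmax(nonmember_logits)).mean()  # fraction of non-members misflagged
print(f"true-positive rate {tpr:.2f}, false-positive rate {fpr:.2f}")
```

The gap between the two rates is exactly the leakage the talk's privacy-preserving mechanisms aim to close while preserving the utility of the model's outputs.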