Prof. Esfandiar Mohammadi

Presentation:
Tight composition bounds for strong utility-privacy tradeoffs in Privacy-Preserving Machine Learning

Recent privacy attacks show that widely deployed machine learning methods can leak information about their training data: an adversary can extract information about individual training examples from the resulting models. As a remedy, Privacy-Preserving Machine Learning (PPML) methods protect the training data by limiting the influence of single data points and by randomizing the learning procedure. These modifications perturb training and reduce model accuracy. PPML methods therefore strive for strong utility-privacy tradeoffs, i.e., they optimize how much utility (e.g., classification accuracy) can be achieved while guaranteeing a required degree of privacy. While utility is an expected-case property that can be meaningfully estimated experimentally, privacy guarantees must cover worst-case scenarios, which cannot be estimated experimentally and instead require rigorous analytical proofs. For non-convex optimization problems, such as training MLPs, CNNs, or ResNets via SGD, one key source of suboptimal utility-privacy tradeoffs in practice is loose privacy guarantees caused by the suboptimal composition bounds used in the privacy proofs. Improving the tightness of composition bounds directly improves the achievable utility-privacy tradeoffs. In this talk, I will present the challenges in designing PPML methods with strong utility-privacy tradeoffs, illustrate some PPML methods from the literature that guarantee a notion called differential privacy, and discuss our own work on achieving tight composition bounds and thereby improving utility-privacy tradeoffs.
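
For context, the privacy notion referenced in the abstract and the composition bounds whose tightness the talk targets can be stated as follows. This is a standard textbook formulation of (ε, δ)-differential privacy and of basic versus advanced composition (after Dwork et al.), not the speaker's own results:

```latex
% A randomized mechanism M is (\varepsilon, \delta)-differentially private
% if, for all neighboring datasets D, D' (differing in one record) and all
% sets S of outputs:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta .
\]
% Basic composition: running k adaptively chosen (\varepsilon, \delta)-DP
% mechanisms is (k\varepsilon,\, k\delta)-DP.
% Advanced composition (Dwork, Rothblum, Vadhan): for any \delta' > 0, the
% same k-fold composition is (\varepsilon',\, k\delta + \delta')-DP with
\[
  \varepsilon' = \sqrt{2k \ln(1/\delta')}\,\varepsilon
               + k\,\varepsilon\,(e^{\varepsilon} - 1).
\]
% Even this bound is not tight for concrete mechanisms such as the Gaussian
% mechanism, which is why tighter, mechanism-specific composition analyses
% can directly improve utility-privacy tradeoffs.
```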
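To make concrete what "limiting the influence of single data points and randomizing the learning procedure" looks like in practice, here is a minimal NumPy sketch of one DP-SGD-style step (per-example gradient clipping plus Gaussian noise, in the spirit of Abadi et al. 2016). The function and parameter names are illustrative, not taken from the talk:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noisy gradient step in the style of DP-SGD (Abadi et al. 2016).

    per_example_grads: array of shape (batch_size, num_params), one gradient
    per training example. Names and defaults here are illustrative.
    """
    if rng is None:
        rng = np.random.default_rng()

    # 1. Clip each example's gradient to L2 norm <= clip_norm, bounding the
    #    influence any single data point can have on the update.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # 2. Sum the clipped gradients and add Gaussian noise whose scale is
    #    calibrated to the clipping norm (the randomization step).
    noise = rng.normal(scale=noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    noisy_sum = clipped.sum(axis=0) + noise

    # 3. Average over the batch. Each such step is one differentially
    #    private mechanism; training runs thousands of them, so the overall
    #    privacy guarantee hinges on how tightly the steps compose.
    return noisy_sum / per_example_grads.shape[0]

# Toy usage: a batch of 32 per-example gradients over 10 parameters.
grads = np.random.default_rng(0).normal(size=(32, 10))
update = dp_sgd_step(grads)
```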

Bio

Esfandiar Mohammadi works on Privacy-Preserving Machine Learning and anonymous communication. He received his doctoral degree from Saarland University in 2015 under the supervision of Michael Backes, was a ZISC fellow in the group of David Basin at ETH Zürich from 2016 to 2019, and has been a tenured W2 (i.e., associate) professor at the Institute for IT-Security at the University of Lübeck since 2019.


Funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung) and the Free State of Saxony (Freistaat Sachsen).