Heterogeneous Gaussian Mechanism:
Preserving Differential Privacy in Deep Learning with Provable Robustness
Abstract
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint on the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound on the noise scale that preserves differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, we apply our HGM to inject Gaussian noise into the first hidden layer of the network. We then propose a tighter robustness bound. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks over baseline approaches, under a variety of model attacks.
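For context, the constraint being relaxed is the one in the standard Gaussian Mechanism result (Dwork and Roth), which calibrates the noise scale σ to the privacy budget ε but only holds for small ε; the paper's extended bound for ε ∈ (0, ∞) is developed in the body and is not reproduced here.

```latex
% Traditional Gaussian Mechanism: releasing f(D) + \mathcal{N}(0, \sigma^2 I)
% satisfies (\epsilon, \delta)-differential privacy, where \Delta_2 f is the
% L2-sensitivity of f, provided
\sigma \;\ge\; \frac{\Delta_2 f \,\sqrt{2 \ln(1.25/\delta)}}{\epsilon},
\qquad \epsilon \in (0, 1]
```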
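The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the idea described in the abstract: Gaussian noise injected into the first hidden layer, with its variance redistributed non-uniformly across features. It assumes the base scale sigma has already been calibrated by the mechanism's noise-scale bound, and that redistribution weights rescale per-feature variance while keeping the total noise budget fixed; the function and parameter names are invented for this example.

```python
import numpy as np

def heterogeneous_gaussian_noise(h, sigma, r):
    """Inject non-uniform Gaussian noise into a hidden-layer activation h.

    h     : activations of the first hidden layer (1-D array).
    sigma : base noise scale, assumed already calibrated to the privacy
            budget by the mechanism's noise-scale bound.
    r     : per-feature redistribution weights; normalized so they average
            to 1, shifting noise across features without changing the
            total variance budget.
    """
    r = np.asarray(r, dtype=float)
    r = r * (len(r) / r.sum())          # normalize weights to mean 1
    noise = np.random.normal(0.0, sigma, size=h.shape) * np.sqrt(r)
    return h + noise

# Toy usage: concentrate more noise on the first half of the features.
h = np.ones(8)                           # stand-in for first-hidden-layer output
r = np.array([2.0] * 4 + [0.5] * 4)      # hypothetical redistribution vector
print(heterogeneous_gaussian_noise(h, sigma=1.0, r=r))
```

Under these assumptions, features with larger weights absorb more of the privacy-preserving noise, which is one way to realize the utility/privacy trade-off the abstract attributes to arbitrary noise redistribution.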