Automating Differentially-Private Mechanism Design
Boriana Gjura and I published a paper at the Theory and Practice of Differential Privacy workshop at CCS 2019! You can find the paper here.
The gist of the work is that designing differentially-private mechanisms is an expert-labor-intensive task, and that it would be great if we could come up with a new mechanism automatically, given three ingredients: query access to a new query type, a distribution over datasets similar to the true datasets, and a penalty measuring the cost of releasing a particular answer in place of the true answer.
For example, in the case of mean release, we might want to use an L2 penalty – so if the true answer were 0.3 and we released 0.5, the penalty would be (0.5 − 0.3)² = 0.04. Alternatively, if we were doing median release, the penalty might be the number of elements between the released value and the true median.

The point is that the penalty can be any function of the released answer, the true answer, and the original dataset – and unlike the exponential mechanism, we do not require that it have bounded sensitivity.
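To make that concrete, here is a minimal sketch of what those two penalties might look like as functions of the released answer, the true answer, and the dataset. The function names and exact conventions are my own for illustration, not from the paper:

```python
import numpy as np

def mean_penalty(released, true_answer, dataset=None):
    # L2 penalty for mean release: squared error between the released
    # and true answers, e.g. (0.5 - 0.3) ** 2 == 0.04.
    return (released - true_answer) ** 2

def median_penalty(released, true_answer, dataset):
    # Rank penalty for median release: the number of dataset elements
    # falling strictly between the released value and the true median.
    # Note this depends on the dataset itself, and its sensitivity is
    # not bounded a priori.
    lo, hi = min(released, true_answer), max(released, true_answer)
    return int(np.sum((dataset > lo) & (dataset < hi)))
```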
Our method learns a release mechanism: a function that, given a dataset, approximates the true query answer while minimizing the expected penalty incurred by responding according to the mechanism instead of with the true answer. To ensure that this learned release mechanism has bounded sensitivity (and thus that we can add a bounded amount of noise to render it private), we optimize a Lipschitz Neural Network, a class of functions with a pre-specified Lipschitz constant, and ensure that our original inputs lie in a fixed range.
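As an illustration (this is my sketch of the idea, not our actual architecture or code), one standard way to build such a network is to spectrally normalize each layer, which caps the whole network's Lipschitz constant at a chosen output scale, and then calibrate Laplace noise to that constant. The class and function names, the [0, 1] input range, and the assumption that neighboring datasets differ in a single coordinate are all mine:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

class LipschitzReleaseNet(nn.Module):
    # Spectral normalization makes each linear layer 1-Lipschitz, and
    # ReLU is 1-Lipschitz, so their composition is 1-Lipschitz; scaling
    # the output by lip_const bounds the network's Lipschitz constant
    # by lip_const.
    def __init__(self, n_inputs, hidden=64, lip_const=1.0):
        super().__init__()
        self.lip_const = lip_const
        self.body = nn.Sequential(
            spectral_norm(nn.Linear(n_inputs, hidden)),
            nn.ReLU(),
            spectral_norm(nn.Linear(hidden, hidden)),
            nn.ReLU(),
            spectral_norm(nn.Linear(hidden, 1)),
        )

    def forward(self, x):
        return self.lip_const * self.body(x)

def private_release(net, dataset, epsilon):
    # With inputs clipped to [0, 1], neighboring datasets differ by at
    # most 1 in one coordinate, so the network's output changes by at
    # most lip_const; Laplace(lip_const / epsilon) noise then makes the
    # release epsilon-differentially private.
    x = dataset.clamp(0.0, 1.0).unsqueeze(0)
    answer = net(x).squeeze()
    noise = torch.distributions.Laplace(0.0, net.lip_const / epsilon).sample()
    return answer + noise
```

Training such a network by gradient descent on the expected penalty over sampled datasets is exactly where the convergence issues mentioned below show up.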
We ran a few experiments in real but simple settings, and saw mixed performance. We think the direction of automatically learning mechanisms certainly merits further exploration, though our method in particular ran into convergence issues (Lipschitz NNs are hard to train). Our approach would benefit from a more concrete domain, and from an improved optimization strategy beyond gradient descent. Future work ;)
-Yo