News

New Paper Accepted to AAAI 2021!

Happy to announce that our paper, “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images”, has been accepted to AAAI 2021!

In this paper, we examine state-of-the-art privacy-preserving generative adversarial networks (PP-GANs), which remove sensitive attributes from images while preserving information useful for other tasks. Such PP-GANs do not offer formal proofs of privacy; instead, they measure information leakage experimentally, using the classification accuracy of deep learning (DL)-based discriminators on the sensitive attributes. We question the rigor of such checks by subverting existing PP-GANs for facial expression recognition. We show that it is possible to hide sensitive identification data in the sanitized output images of such PP-GANs for later extraction, which can even allow reconstruction of the entire input image, all while passing the privacy checks. Our experimental results raise fundamental questions about the need for more rigorous privacy checks of PP-GANs, and we provide insights into the social impact of these findings.
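To give a flavor of why discriminator-based privacy checks can be fooled, here is a toy least-significant-bit steganography sketch, not the attack from the paper: a secret bit string is hidden in a "sanitized" image with at most a one-level change per pixel, so any check tolerant of small pixel noise still passes while the secret remains fully recoverable.

```python
import numpy as np

def embed_bits(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of pixel values."""
    flat = image.flatten().astype(np.uint8)  # flatten() returns a copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n hidden bits from the carrier image."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(0)
sanitized = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in for a PP-GAN output
secret = rng.integers(0, 2, size=16, dtype=np.uint8)           # stand-in for identity data

stego = embed_bits(sanitized, secret)
recovered = extract_bits(stego, 16)
# Each pixel changes by at most 1, yet the secret survives intact.
```

The paper's actual encoding is learned by the subverted GAN itself; this sketch only illustrates the underlying point that imperceptible changes can carry recoverable information.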

Launched CSAW’20 HackML Challenge

We are pleased to let you know that the CSAW’20 HackML competition is now live.

As you may know, recent work has shown that neural networks can contain stealthy backdoors, much like traditional software; these backdoors allow attackers to trigger malicious behavior. The HackML competition will challenge teams to develop novel, more powerful backdoor defense and detection techniques.
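The backdoor behavior described above can be sketched with a toy classifier (the trigger patch and target label here are hypothetical, purely for illustration): the model behaves normally on clean inputs, but a small attacker-chosen patch forces a fixed output.

```python
import numpy as np

TRIGGER = np.full((3, 3), 255, dtype=np.uint8)  # hypothetical 3x3 white-patch trigger
TARGET_LABEL = 7                                # attacker-chosen label

def backdoored_model(image: np.ndarray) -> int:
    """Toy classifier: acts 'normally' unless the trigger patch
    appears in the top-left corner, in which case it returns the
    attacker's target label."""
    if np.array_equal(image[:3, :3], TRIGGER):
        return TARGET_LABEL           # malicious behavior fires
    return int(image.mean()) % 10     # stand-in for the clean behavior

clean = np.zeros((28, 28), dtype=np.uint8)
poisoned = clean.copy()
poisoned[:3, :3] = TRIGGER            # stamping the trigger flips the prediction
```

In a real attack the trigger logic is hidden in the learned weights rather than an explicit `if`, which is what makes detection hard and motivates the competition.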
 
The upcoming second edition of the HackML challenge will present an opportunity for the selected teams to exhibit their defense strategies during the annual CSAW’20 conference, which will be a virtual event this year.

New Paper Accepted to NeurIPS 2020!

Happy to announce that our paper, “CryptoNAS: Private Inference on a ReLU budget”, has been accepted to NeurIPS 2020! This is collaborative work with Prof. Brandon Reagen.

In this paper, we look at private inference for deep neural networks and argue that existing models are not well suited for this task. In private inference, non-linear operations dominate latency, while linear layers become effectively free. Based on this insight, we introduce the idea of a ReLU budget as a proxy for inference latency and develop CryptoNAS to build models that maximize accuracy within a given budget.
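Under the ReLU-budget view, a network's private-inference cost is the number of ReLU evaluations rather than FLOPs, since each output element of an activation layer is one expensive non-linear operation. A minimal sketch of counting ReLUs for a small VGG-style CNN on 32x32 inputs (the layer sizes are hypothetical, not CryptoNAS's actual search space):

```python
# Each tuple is a hypothetical conv block: (out_channels, spatial size
# of the feature map after the block). One ReLU fires per output element.
layers = [
    (64, 32),
    (128, 16),
    (256, 8),
]

relu_counts = [c * s * s for c, s in layers]
total = sum(relu_counts)
print(relu_counts)  # per-layer ReLU counts
print(total)        # the quantity a ReLU budget would cap
```

A NAS procedure like CryptoNAS can then search over architectures, rejecting any candidate whose `total` exceeds the budget, while standard FLOP-based searches would misjudge the cost since linear layers are nearly free under private inference.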