Assorted Papers
Differential Privacy
Frank McSherry and Kunal Talwar.
Mechanism Design via Differential Privacy.
FOCS 2007.
Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy Rothblum.
Differential Privacy under Continual Observation.
STOC 2010.
T.-H. Hubert Chan, Elaine Shi, and Dawn Song.
Private and Continual Release of Statistics.
ICALP 2010.
Ilya Mironov.
On Significance of the Least Significant Bits for Differential Privacy.
CCS 2012.
Moritz Hardt, Katrina Ligett, and Frank McSherry.
A Simple and Practical Algorithm for Differentially Private Data Release.
NIPS 2012.
Daniel Kifer and Ashwin Machanavajjhala.
A Rigorous and Customizable Framework for Privacy.
PODS 2012.
Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova.
RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response.
CCS 2014.
Cynthia Dwork, Moni Naor, Omer Reingold, and Guy N. Rothblum.
Pure Differential Privacy for Rectangle Queries via Private Partitions.
ASIACRYPT 2015.
Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.
Deep Learning with Differential Privacy.
CCS 2016.
Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, and Li Zhang.
On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches.
CSF 2017.
Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar.
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data.
ICLR 2017.
Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson.
Scalable Private Learning with PATE.
ICLR 2018.
Matthew Joseph, Aaron Roth, Jonathan Ullman, and Bo Waggoner.
Local Differential Privacy for Evolving Data.
NeurIPS 2018.
Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, and Maxim Zhilyaev.
Distributed Differential Privacy via Shuffling.
EUROCRYPT 2019.
Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta.
Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity.
SODA 2019.
Jingcheng Liu and Kunal Talwar.
Private Selection from Private Candidates.
STOC 2019.
Adversarial ML
Battista Biggio, Blaine Nelson, and Pavel Laskov.
Poisoning Attacks against Support Vector Machines.
ICML 2012.
Battista Biggio, Ignazio Pillai, Samuel Rota Bulò, Davide Ariu, Marcello Pelillo, and Fabio Roli.
Is Data Clustering in Adversarial Settings Secure?
AISec 2013.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus.
Intriguing Properties of Neural Networks.
ICLR 2014.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.
Explaining and Harnessing Adversarial Examples.
ICLR 2015.
Matt Fredrikson, Somesh Jha, and Thomas Ristenpart.
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures.
CCS 2015.
Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples.
arXiv 2016.
Nicholas Carlini and David Wagner.
Towards Evaluating the Robustness of Neural Networks.
S&P 2017.
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov.
Membership Inference Attacks against Machine Learning Models.
S&P 2017.
Nicholas Carlini and David Wagner.
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods.
AISec 2017.
Jacob Steinhardt, Pang Wei Koh, and Percy Liang.
Certified Defenses for Data Poisoning Attacks.
NIPS 2017.
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.
Robust Physical-World Attacks on Deep Learning Models.
CVPR 2018.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
Towards Deep Learning Models Resistant to Adversarial Attacks.
ICLR 2018.
Aditi Raghunathan, Jacob Steinhardt, and Percy Liang.
Certified Defenses against Adversarial Examples.
ICLR 2018.
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel.
Ensemble Adversarial Training: Attacks and Defenses.
ICLR 2018.
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein.
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks.
NeurIPS 2018.
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song.
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks.
USENIX Security 2019.
Vitaly Feldman.
Does Learning Require Memorization? A Short Tale about a Long Tail.
arXiv 2019.
Applied Cryptography
Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish.
Verifying Computations with State.
SOSP 2013.
Bryan Parno, Jon Howell, Craig Gentry, and Mariana Raykova.
Pinocchio: Nearly Practical Verifiable Computation.
S&P 2013.
Aseem Rastogi, Matthew A. Hammer, and Michael Hicks.
Wysteria: A Programming Language for Generic, Mixed-Mode Multiparty Computations.
S&P 2014.
Shai Halevi and Victor Shoup.
Algorithms in HElib.
CRYPTO 2014.
Shai Halevi and Victor Shoup.
Bootstrapping for HElib.
EUROCRYPT 2015.
Léo Ducas and Daniele Micciancio.
FHEW: Bootstrapping Homomorphic Encryption in Less than a Second.
EUROCRYPT 2015.
Peter Kairouz, Sewoong Oh, and Pramod Viswanath.
Secure Multi-party Differential Privacy.
NIPS 2015.
Arjun Narayan, Ariel Feldman, Antonis Papadimitriou, and Andreas Haeberlen.
Verifiable Differential Privacy.
EUROSYS 2015.
Henry Corrigan-Gibbs and Dan Boneh.
Prio: Private, Robust, and Scalable Computation of Aggregate Statistics.
NSDI 2017.
Zahra Ghodsi, Tianyu Gu, and Siddharth Garg.
SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud.
NIPS 2017.
Valerie Chen, Valerio Pastro, and Mariana Raykova.
Secure Computation for Machine Learning With SPDZ.
NeurIPS 2018.
Jialong Zhang, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph. Stoecklin, Heqing Huang, and Ian Molloy.
Protecting Intellectual Property of Deep Neural Networks with Watermarking.
AsiaCCS 2018.
Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, and Joseph Keshet.
Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring.
USENIX Security 2018.
Wenting Zheng, Raluca Ada Popa, Joseph E. Gonzalez, and Ion Stoica.
Helen: Maliciously Secure Coopetitive Learning for Linear Models.
S&P 2019.
Bita Darvish Rouhani, Huili Chen, and Farinaz Koushanfar.
DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models.
ASPLOS 2019.
Roshan Dathathri, Olli Saarikivi, Hao Chen, Kim Laine, Kristin Lauter, Saeed Maleki, Madanlal Musuvathi, and Todd Mytkowicz.
CHET: An Optimizing Compiler for Fully-Homomorphic Neural-Network Inferencing.
PLDI 2019.
Algorithmic Fairness
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel.
Fairness through Awareness.
ITCS 2012.
Moritz Hardt, Eric Price, and Nathan Srebro.
Equality of Opportunity in Supervised Learning.
NIPS 2016.
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai.
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.
NIPS 2016.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints.
EMNLP 2017.
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan.
Inherent Trade-Offs in the Fair Determination of Risk Scores.
ITCS 2017.
Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf.
Avoiding Discrimination through Causal Reasoning.
NIPS 2017.
Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva.
Counterfactual Fairness.
NIPS 2017.
Razieh Nabi and Ilya Shpitser.
Fair Inference on Outcomes.
AAAI 2018.
Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum.
Multicalibration: Calibration for the (Computationally-Identifiable) Masses.
ICML 2018.
Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness.
ICML 2018.
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach.
A Reductions Approach to Fair Classification.
ICML 2018.
Ben Hutchinson and Margaret Mitchell.
50 Years of Test (Un)fairness: Lessons for Machine Learning.
FAT* 2019.
PL and Verification
Martín Abadi and Andrew D. Gordon.
A Calculus for Cryptographic Protocols: The Spi Calculus.
Information and Computation, 1999.
Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum.
Church: A Language for Generative Models.
UAI 2008.
Frank McSherry.
Privacy Integrated Queries.
SIGMOD 2009.
Marta Kwiatkowska, Gethin Norman, and David Parker.
Advances and Challenges of Probabilistic Model Checking.
Allerton 2010.
Jason Reed and Benjamin C. Pierce.
Distance Makes the Types Grow Stronger: A Calculus for Differential Privacy.
ICFP 2010.
Daniel B. Giffin, Amit Levy, Deian Stefan, David Terei, David Mazières, John C. Mitchell, and Alejandro Russo.
Hails: Protecting Data Privacy in Untrusted Web Applications.
OSDI 2012.
Danfeng Zhang, Aslan Askarov, and Andrew C. Myers.
Language-Based Control and Mitigation of Timing Channels.
PLDI 2012.
Andrew Miller, Michael Hicks, Jonathan Katz, and Elaine Shi.
Authenticated Data Structures, Generically.
POPL 2014.
Andrew D. Gordon, Thomas A. Henzinger, Aditya V. Nori, and Sriram K. Rajamani.
Probabilistic Programming.
ICSE 2014.
Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, and Pierre-Yves Strub.
Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy.
POPL 2015.
Samee Zahur and David Evans.
Obliv-C: A Language for Extensible Data-Oblivious Computation.
IACR ePrint 2015.
Chang Liu, Xiao Shaun Wang, Kartik Nayak, Yan Huang, and Elaine Shi.
ObliVM: A Programming Framework for Secure Computation.
S&P 2015.
Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub.
A Program Logic for Union Bounds.
ICALP 2016.
Christian Albert Hammerschmidt, Sicco Verwer, Qin Lin, and Radu State.
Interpreting Finite Automata for Sequential Data.
NIPS 2016.
Joost-Pieter Katoen.
The Probabilistic Model Checking Landscape.
LICS 2016.
Andrew Ferraiuolo, Rui Xu, Danfeng Zhang, Andrew C. Myers, and G. Edward Suh.
Verification of a Practical Hardware Security Architecture Through Static Information Flow Analysis.
ASPLOS 2017.
Frits Vaandrager.
Model Learning.
CACM 2017.
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev.
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation.
S&P 2018.
Matthew Mirman, Timon Gehr, and Martin Vechev.
Differentiable Abstract Interpretation for Provably Robust Neural Networks.
ICML 2018.
Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind.
Automatic Differentiation in Machine Learning: A Survey.
JMLR 2018.
Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin T. Vechev.
An Abstract Domain for Certifying Neural Networks.
POPL 2019.
Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, and Martin Vechev.
DL2: Training and Querying Neural Networks with Logic.
ICML 2019.
Abhinav Verma, Hoang M. Le, Yisong Yue, and Swarat Chaudhuri.
Imitation-Projected Programmatic Reinforcement Learning.
NeurIPS 2019.