Computing over Distributed Sensitive Data: Publications

2019
Victor Balcer and Salil Vadhan. 9/2019. “Differential Privacy on Finite Computers.” Journal of Privacy and Confidentiality; also presented at the 9th Innovations in Theoretical Computer Science Conference (ITCS 2018) and the Theory and Practice of Differential Privacy Workshop (TPDP 2017).

We consider the problem of designing and analyzing differentially private algorithms that can be implemented on discrete models of computation in strict polynomial time, motivated by known attacks on floating point implementations of real-arithmetic differentially private algorithms (Mironov, CCS 2012) and the potential for timing attacks on expected polynomial-time algorithms. As a case study, we examine the basic problem of approximating the histogram of a categorical dataset over a possibly large data universe X. The classic Laplace Mechanism (Dwork, McSherry, Nissim, Smith, TCC 2006 and J. Privacy & Confidentiality 2017) does not satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such as the Geometric Mechanism (Ghosh, Roughgarden, Sundararajan, STOC 2009 and SICOMP 2012), take time at least linear in |X|, which can be exponential in the bit length of the input.

In this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the Laplace Mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee. One of our algorithms produces a sparse histogram as output. Its "per-bin accuracy" (the error on individual bins) is worse than that of the Laplace Mechanism by a factor of log |X|, but we prove a lower bound showing that this is necessary for any algorithm that produces a sparse histogram. A second algorithm avoids this lower bound, and matches the per-bin accuracy of the Laplace Mechanism, by producing a compact and efficiently computable representation of a dense histogram; it is based on an (n + 1)-wise independent implementation of an appropriately clamped version of the Discrete Geometric Mechanism.
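As a rough illustration of the ingredients named above (two-sided geometric noise, clamping, and sparsity via thresholding), here is a short Python sketch. It is not the paper's algorithm: it samples with floating-point arithmetic, which is precisely what the paper avoids, and it ignores the empty bins of X, whose correct treatment is part of the paper's contribution. The function names and the threshold parameter are ours, not the paper's.

```python
import math
import random

def two_sided_geometric(alpha: float) -> int:
    """Sample Z with Pr[Z = z] proportional to alpha**abs(z), as the
    difference of two i.i.d. one-sided geometric draws on {0, 1, 2, ...}."""
    u1 = 1.0 - random.random()  # uniform in (0, 1], avoids log(0)
    u2 = 1.0 - random.random()
    g1 = int(math.log(u1) / math.log(alpha))
    g2 = int(math.log(u2) / math.log(alpha))
    return g1 - g2

def sparse_noisy_histogram(counts, n, eps, threshold):
    """Add discrete noise to each nonempty bin, clamp to [0, n], and keep
    only bins whose noisy count clears the threshold. NOTE: releasing only
    the nonempty bins this way is NOT differentially private by itself --
    handling the (possibly exponentially many) empty bins correctly is
    exactly one of the problems the paper solves."""
    alpha = math.exp(-eps)  # noise parameter for sensitivity-1 counts
    out = {}
    for x, c in counts.items():
        noisy = max(0, min(n, c + two_sided_geometric(alpha)))  # clamp to [0, n]
        if noisy >= threshold:
            out[x] = noisy
    return out
```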

Kobbi Nissim and Uri Stemmer. 3/2019. “Concentration Bounds for High Sensitivity Functions Through Differential Privacy.” Journal of Privacy and Confidentiality, 9, 1.

A new line of work [6, 9, 15, 2] demonstrates how differential privacy [8] can be used as a mathematical tool for guaranteeing generalization in adaptive data analysis. Specifically, if a differentially private analysis is applied on a sample S of i.i.d. examples to select a low-sensitivity function f, then w.h.p. f(S) is close to its expectation, although f is being chosen based on the data. Very recently, Steinke and Ullman [16] observed that these generalization guarantees can be used for proving concentration bounds in the non-adaptive setting, where the low-sensitivity function is fixed beforehand. In particular, they obtain alternative proofs for classical concentration bounds for low-sensitivity functions, such as the Chernoff bound and McDiarmid’s Inequality. In this work, we set out to examine the situation for functions with high sensitivity, for which differential privacy does not imply generalization guarantees under adaptive analysis. We show that differential privacy can be used to prove concentration bounds for such functions in the non-adaptive setting.
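For orientation, the classical statement being re-derived: a function f has sensitivity Δ if changing any single example changes its value by at most Δ; for a sample S of n i.i.d. examples, McDiarmid's inequality then gives

$$
\Pr\Big[\,\big|f(S) - \mathbb{E}[f(S)]\big| \ge t\,\Big] \;\le\; 2\exp\!\Big(-\frac{2t^2}{n\Delta^2}\Big).
$$

The high-sensitivity regime studied in this paper is the one where Δ is large, so this bound becomes vacuous for the deviations t of interest.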

Anitha Gollamudi, Owen Arden, and Stephen Chong. 2019. “Information Flow Control for Distributed Trusted Execution Environments.” 32nd IEEE Computer Security Foundations Symposium (CSF 2019).
2018
Kobbi Nissim and Uri Stemmer. 4/2018. “Clustering Algorithms for the Centralized and Local Models.” Conference on Algorithmic Learning Theory (ALT 2018).
Mitali Bafna, Jack Murtagh, and Nikhil Vyas. 2018. “Thwarting Adversarial Examples: An L_0-Robust Sparse Fourier Transform.” Advances in Neural Information Processing Systems (NeurIPS 2018), 31, pp. 10096–10106.
2017
Shiva Kasiviswanathan, Kobbi Nissim, and Hongxia Jin. 2017. “Private Incremental Regression.” ACM SIGMOD/PODS Conference (PODS 2017).
2016
G. Barthe, M. Gaboardi, E. J. Gallego Arias, J. Hsu, A. Roth, and P.-Y. Strub. 12/2016. “Computer-Aided Verification in Mechanism Design.” Conference on Web and Internet Economics (WINE 2016).
G. Barthe, N. Fong, M. Gaboardi, B. Gregoire, J. Hsu, and P.-Y. Strub. 10/2016. “Advanced Probabilistic Couplings for Differential Privacy.” 23rd ACM Conference on Computer and Communications Security (CCS 2016).
Georgios Kellaris, George Kollios, Kobbi Nissim, and Adam O'Neill. 10/2016. “Generic Attacks on Secure Outsourced Databases.” 23rd ACM Conference on Computer and Communications Security (CCS 2016).

Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with “full-fledged” security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and communication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources. We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about N^4 queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way. Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.
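To see why volume leakage alone is worrying, here is a toy Python sketch of reconstruction by prefix differencing. It deliberately simplifies the paper's setting: this observer also sees the endpoints of each query, whereas the attack in the paper works from volumes and the query distribution alone (hence the roughly N^4 queries it needs). All names below are ours, for illustration only.

```python
import random

def reconstruct_from_volumes(secret_values, N, max_queries=1_000_000):
    """Toy reconstruction from communication volume on range queries
    over the domain {1, ..., N}."""
    def volume(a, b):
        # The only thing the server leaks: the size of the response.
        return sum(1 for v in secret_values if a <= v <= b)

    prefix = {0: 0}  # prefix[k] = number of records with value <= k
    for _ in range(max_queries):
        a = random.randint(1, N)           # uniform random range [a, b]
        b = random.randint(a, N)
        if a == 1:
            prefix[b] = volume(a, b)       # volumes are exact counts
        if len(prefix) == N + 1:           # every prefix range observed
            break
    if len(prefix) < N + 1:
        raise RuntimeError("increase max_queries: some prefix range was never issued")
    # Differencing adjacent prefix volumes recovers the exact histogram.
    return {k: prefix[k] - prefix[k - 1] for k in range(1, N + 1)}

# Example: five hidden records over the domain {1, ..., 10}.
print(reconstruct_from_volumes([3, 3, 7, 9, 1], N=10))
```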

G. Barthe, P.-Y. Strub, J. Hsu, A. D. Gordon, E. J. Gallego Arias, M. Gaboardi, and G. P. Farina. 2016. “Differentially Private Bayesian Programming.” 23rd ACM Conference on Computer and Communications Security (CCS 2016).