%0 Conference Paper %B Theory and Practice of Differential Privacy 2022 %D 2022 %T Widespread Underestimation of Sensitivity in Differentially Private Libraries and How to Fix It %A Sílvia Casacuberta %A Michael Shoemate %A Salil Vadhan %A Connor Wagaman %X We identify a new class of vulnerabilities in implementations of differential privacy. Specifically, they arise when computing basic statistics such as sums, thanks to discrepancies between the implemented arithmetic using finite data types (namely, ints or floats) and idealized arithmetic over the reals or integers. These discrepancies cause the sensitivity of the implemented statistics (i.e., how much one individual's data can affect the result) to be much higher than the sensitivity we expect. Consequently, essentially all differential privacy libraries fail to introduce enough noise to hide individual-level information as required by differential privacy, and we show that this may be exploited in realistic attacks on differentially private query systems. In addition to presenting these vulnerabilities, we also provide a number of solutions, which modify or constrain the way in which the sum is implemented in order to recover the idealized or near-idealized bounds on sensitivity. %B Theory and Practice of Differential Privacy 2022 %G eng %U https://arxiv.org/abs/2207.10635 %0 Conference Paper %B Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS ‘22) %D 2022 %T Hypothesis testing for differentially private linear regression %A Daniel Alabi %A Salil Vadhan %X In this work, we design differentially private hypothesis tests for the following problems in the general linear model: testing a linear relationship and testing for the presence of mixtures. The majority of our hypothesis tests are based on differentially private versions of the F-statistic for the general linear model framework, which are uniformly most powerful unbiased in the nonprivate setting. We also present other tests for these problems, one of which is based on the differentially private nonparametric tests of Couch, Kazan, Shi, Bray, and Groce (CCS 2019), which is especially suited for the small dataset regime. We show that the differentially private F-statistic converges to the asymptotic distribution of its non-private counterpart. As a corollary, the statistical power of the differentially private F-statistic converges to the statistical power of the non-private F-statistic. Through a suite of Monte Carlo based experiments, we show that our tests achieve desired significance levels and have a high power that approaches the power of the non-private tests as we increase sample sizes or the privacy-loss parameter. We also show when our tests outperform existing methods in the literature. %B Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS ‘22) %G eng %U https://arxiv.org/pdf/2206.14449.pdf %0 Conference Paper %B In Foundations of Responsible Computing (FORC 2022) %D 2022 %T Controlling Privacy Loss in Sampling Schemes: An Analysis of Stratified and Cluster Sampling. %A Mark Bun %A Jörg Drechsler %A Marco Gaboardi %A Audra McMillan %A Jayshree Sarathy %X

Sampling schemes are fundamental tools in statistics, survey design, and algorithm design. A fundamental result in differential privacy is that a differentially private mechanism run on a simple random sample of a population provides stronger privacy guarantees than the same algorithm run on the entire population. However, in practice, sampling designs are often more complex than the simple, data-independent sampling schemes that are addressed in prior work. In this work, we extend the study of privacy amplification results to more complex, data-dependent sampling schemes. We find that not only do these sampling schemes often fail to amplify privacy, they can actually result in privacy degradation. We analyze the privacy implications of the pervasive cluster sampling and stratified sampling paradigms, as well as provide some insight into the study of more general sampling designs.

%B In Foundations of Responsible Computing (FORC 2022) %G eng %0 Journal Article %J Harvard Data Science Review %D 2022 %T Harnessing the Known Unknowns: Differential Privacy and the 2020 Census (co-editors’ forward) %A Ruobin Gong %A Erica L. Groshen %A Salil Vadhan %B Harvard Data Science Review %G eng %U https://doi.org/10.1162/99608f92.cb06b469 %N Special Issue 2 %0 Journal Article %J To appear in the Harvard Data Science Review (HDSR) %D 2022 %T Differential Perspectives: Epistemic Disconnects Surrounding the US Census Bureau’s Use of Differential Privacy %A danah boyd %A Jayshree Sarathy %X When the U.S. Census Bureau announced its intention to modernize its disclosure
avoidance procedures for the 2020 Census, it sparked a controversy that is still underway. The move to differential privacy introduced technical and procedural uncertainties, leaving stakeholders unable to evaluate the quality of the data. More importantly, this transformation exposed the statistical illusions and limitations of census data, weakening stakeholders’ trust in the data and in the Census Bureau itself. This essay examines the epistemic currents of this controversy. Drawing on theories from Science and Technology Studies (STS) and ethnographic fieldwork, we analyze the current controversy over differential privacy as a battle over uncertainty, trust, and legitimacy of the Census. We argue that rebuilding trust will require more than technical repairs or improved communication; it will require reconstructing what we identify as a ‘statistical imaginary.’ %B To appear in the Harvard Data Science Review (HDSR) %G eng %U https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4077426 %0 Conference Paper %B In The 25th International Conference on Artificial Intelligence and Statistics (AISTATS) %D 2022 %T Private sequential hypothesis testing for statisticians: Privacy, error rates, and sample size. %A Rachel Cummings %A Yajun Mei %A Wanrong Zhang %X The sequential hypothesis testing problem is a class of statistical analyses where the sample size is not fixed in advance, and the analyst must make real-time decisions until a stopping criterion is reached. In this work, we study the sequential hypothesis testing problem under the constraint of Renyi differential privacy for the sample. Unlike previous work in private hypothesis testing that focuses on the classical fixed sample setting, our results may allow a conclusion to be reached much earlier, and thus saves the cost of collecting these additional samples. We present a new private algorithm based on Wald's Sequential Probability Ratio Test (SPRT) that gives strong theoretical privacy guarantees. We provide theoretical analysis on statistical performance measured by Type I and Type II error as well as the expected sample size, and we also provide empirical validation of our results. %B In The 25th International Conference on Artificial Intelligence and Statistics (AISTATS) %G eng %0 Report %D 2022 %T From Algorithmic to Institutional Logics: The Politics of Differential Privacy %A Jayshree Sarathy %X Over the past two decades, we have come to see that traditional de-anonymization techniques fail to protect the privacy of individuals in sensitive datasets. To address this problem, computer scientists introduced differential privacy, a strong statistical notion of privacy that bounds the amount of information a statistical release leaks about any individual. Differential privacy has become a gold standard for privacy protection: organizations from Google to the U.S. Census Bureau have adopted differentially private methods, and the MIT Technology Review named it as one of the top ten technologies expected to have “widespread consequences for human life.” Yet, while differential privacy offers rigorous statistical guarantees, we must also examine how these guarantees interact with social and contextual factors. In this paper, I investigate the political dimensions of differential privacy. What does the adoption of this standard reveal or obscure about the privacy practices within our sociotechnical systems? And how might a reliance on this standard impact our progress towards broader notions of privacy? 
Drawing on scholarship from sociology, law, computer science, and science and technology studies, I describe the entanglements between algorithmic privacy and institutional logics, highlighting disempowering practices that may emerge despite, or in response to, the adoption of differential privacy. The goal of this work is not to discourage the use of differential privacy, which I argue is necessary and beneficial in a wide range of settings, but to examine where it may have unintended consequences. I conclude with recommendations on how the privacy community can continue to develop formal privacy standards while elevating broader visions of privacy. %G eng %U https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4079222 %0 Journal Article %J To appear in the Journal of Survey Statistics and Methodology (JSSAM) %D 2022 %T Nonparametric Differentially Private Confidence Intervals for the Median. %A Jörg Drechsler %A Ira Globus-Harris %A Audra McMillan %A Jayshree Sarathy %A Smith, Adam %X Differential privacy is a restriction on data processing algorithms that provides strong confidentiality guarantees for individual records in the data. However, research on proper statistical inference, that is, research on properly quantifying the uncertainty of the (noisy) sample estimate regarding the true value in the population, is currently still limited. This paper proposes and evaluates several strategies to compute valid differentially private confidence intervals for the median. Instead of computing a differentially private point estimate and deriving its uncertainty, we directly estimate the interval bounds and discuss why this approach is superior if ensuring privacy is important. We also illustrate that addressing both sources of uncertainty--the error from sampling and the error from protecting the output--simultaneously should be preferred over simpler approaches that incorporate the uncertainty in a sequential fashion. We evaluate the performance of the different algorithms under various parameter settings in extensive simulation studies and demonstrate how the findings could be applied in practical settings using data from the 1940 Decennial Census. %B To appear in the Journal of Survey Statistics and Methodology (JSSAM) %G eng %U https://arxiv.org/pdf/2106.10333.pdf %0 Journal Article %J arXiv:2007.05157 %D 2022 %T Differentially Private Simple Linear Regression %A Daniel Alabi %A Smith, Adam %A Audra McMillan %A Salil Vadhan %A Jayshree Sarathy %X Economics and social science research often require analyzing datasets of sensitive personal information at fine granularity, with models fit to small subsets of the data. Unfortunately, such fine-grained analysis can easily reveal sensitive individual information. We study algorithms for simple linear regression that satisfy differential privacy, a constraint which guarantees that an algorithm's output reveals little about any individual input data record, even to an attacker with arbitrary side information about the dataset. We consider the design of differentially private algorithms for simple linear regression for small datasets, with tens to hundreds of datapoints, which is a particularly challenging regime for differential privacy. Focusing on a particular application to small-area analysis in economics research, we study the performance of a spectrum of algorithms we adapt to the setting. 
We identify key factors that affect their performance, showing through a range of experiments that algorithms based on robust estimators (in particular, the Theil-Sen estimator) perform well on the smallest datasets, but that other more standard algorithms do better as the dataset size increases. %B arXiv:2007.05157 %G eng %U http://arxiv.org/abs/2007.05157 %0 Conference Paper %B In Proceedings of the 2022 AAAI Conference on Artificial Intelligence %D 2022 %T Private rank aggregation in central and local models. %A Daniel Alabi %A Badih Ghazi %A Kumar, Ravi %A Pasin Manurangsi %X In social choice theory, (Kemeny) rank aggregation is a well-studied problem where the goal is to combine rankings from multiple voters into a single ranking on the same set of items. Since rankings can reveal preferences of voters (which a voter might like to keep private), it is important to aggregate preferences in such a way to preserve privacy. In this work, we present differentially private algorithms for rank aggregation in the pure and approximate settings along with distribution-independent utility upper and lower bounds. In addition to bounds in the central model, we also present utility bounds for the local model of differential privacy. %B In Proceedings of the 2022 AAAI Conference on Artificial Intelligence %G eng %U https://arxiv.org/pdf/2112.14652.pdf %0 Conference Paper %B Privacy-Preserving Machine Learning (PPML '21), Privacy in Machine Learning (PriML '21) %D 2021 %T Canonical Noise Distributions and Private Hypothesis Tests %A Jordan Awan %A Salil Vadhan %X f-DP has recently been proposed as a generalization of classical definitions of differential privacy allowing a lossless analysis of composition, post-processing, and privacy amplification via subsampling. In the setting of f-DP, we propose the concept canonical noise distribution (CND) which captures whether an additive privacy mechanism is appropriately tailored for a given f, and give a construction that produces a CND given an arbitrary tradeoff function f. We show that private hypothesis tests are intimately related to CNDs, allowing for the release of private p-values at no additional privacy cost as well as the construction of uniformly most powerful (UMP) tests for binary data.
We apply our techniques to the problem of difference of proportions testing, and construct a UMP unbiased "semi-private" test which upper bounds the performance of any DP test. Using this as a benchmark, we propose a private test, based on the inversion of characteristic functions, which allows for optimal inference for the two population parameters and is nearly as powerful as the semi-private UMPU. When specialized to the case of (ϵ,0)-DP, we show empirically that our proposed test is more powerful than any (ϵ/√2)-DP test and has more accurate type I errors than the classic normal approximation test. %B Privacy-Preserving Machine Learning (PPML '21), Privacy in Machine Learning (PriML '21) %8 9 Aug, 2021 %G eng %U https://arxiv.org/abs/2108.04303 %0 Conference Paper %B Advances in Neural Information Processing Systems 34 (NeurIPS 2021) %D 2021 %T Multiclass versus Binary Differentially Private PAC Learning %A Mark Bun %A Marco Gaboardi %A Satchit Sivakumar %X We show a generic reduction from multiclass differentially private PAC learning to binary private PAC learning. We apply this transformation to a recently proposed binary private PAC learner to obtain a private multiclass learner with sample complexity that has a polynomial dependence on the multiclass Littlestone dimension and a poly-logarithmic dependence on the number of classes. This yields a doubly exponential improvement in the dependence on both parameters over learners from previous work. Our proof extends the notion of Ψ-dimension defined in work of Ben-David et al. [5] to the online setting and explores its general properties. %B Advances in Neural Information Processing Systems 34 (NeurIPS 2021) %G eng %U https://proceedings.neurips.cc/paper/2021/file/c1d53b7a97707b5cd1815c8d228d8ef1-Paper.pdf %0 Conference Proceedings %B 53rd Annual ACM Symposium on Theory of Computing (STOC 2021) %D 2021 %T Legal Theorems: Bridging Computer Science and Privacy Law %A Kobbi Nissim %B 53rd Annual ACM Symposium on Theory of Computing (STOC 2021) %G eng %0 Conference Paper %B 40th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS 2021) %D 2021 %T Privacy: From Database Reconstruction to Legal Theorems %A Kobbi Nissim %X

 

There are significant gaps between legal and technical thinking around data privacy. Technical standards are described using mathematical language whereas legal standards are not rigorous from a mathematical point of view and often resort to concepts which they only partially define. As a result, arguments about the adequacy of technical privacy measures for satisfying legal privacy often lack rigor, and their conclusions are uncertain. The uncertainty is exacerbated by a litany of successful privacy attacks on privacy measures thought to meet legal expectations but then shown to fall short of doing so.

As computer systems manipulating individual privacy-sensitive data become integrated into almost every aspect of society, and as such systems increasingly make decisions of legal significance, the need to bridge the diverging, and sometimes conflicting, legal and technical approaches becomes urgent.

We formulate and prove formal claims – “legal theorems” – addressing legal questions such as whether the use of technological measures satisfies the requirements of a legal privacy standard. In particular, we analyze the notion of singling out from the GDPR and whether technologies such as k-anonymity and differential privacy prevent singling out.

Our long-term goal is to develop concepts that are, on the one hand, technical, so they can be integrated into the design of computer systems, and, on the other hand, usable in legal reasoning and policymaking.

 

%B 40th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS 2021) %G eng %0 Journal Article %J Proceedings of the 19th Theory of Cryptography Conference (TCC '21) %D 2021 %T Concurrent Composition of Differential Privacy %A Salil Vadhan %A Tianhao Wang %X

We initiate a study of the composition properties of interactive differentially private mechanisms. An interactive differentially private mechanism is an algorithm that allows an analyst to adaptively ask queries about a sensitive dataset, with the property that an adversarial analyst's view of the interaction is approximately the same regardless of whether or not any individual's data is in the dataset. Previous studies of composition of differential privacy have focused on non-interactive algorithms, but interactive mechanisms are needed to capture many of the intended applications of differential privacy and a number of the important differentially private primitives.

We focus on concurrent composition, where an adversary can arbitrarily interleave its queries to several differentially private mechanisms, which may be feasible when differentially private query systems are deployed in practice. We prove that when the interactive mechanisms being composed are pure differentially private, their concurrent composition achieves privacy parameters (with respect to pure or approximate differential privacy) that match the (optimal) composition theorem for noninteractive differential privacy. We also prove a composition theorem for interactive mechanisms that satisfy approximate differential privacy. That bound is weaker than even the basic (suboptimal) composition theorem for noninteractive differential privacy, and we leave closing the gap as a direction for future research, along with understanding concurrent composition for other variants of differential privacy.

%B Proceedings of the 19th Theory of Cryptography Conference (TCC '21) %V 13043 %G eng %U https://arxiv.org/abs/2105.14427 %0 Journal Article %J BU Journal of Science Technology and Law %D 2021 %T What a Hybrid Legal-Technical Analysis Teaches Us About Privacy Regulation: The Case of Singling Out %A Micah Altman %A Aloni Cohen %A Kobbi Nissim %A Alexandra Wood %X

This article advocates a hybrid legal-technical approach to the evaluation of technical measures designed to render information anonymous in order to bring it outside the scope of data protection regulation. The article demonstrates how such an approach can be used for instantiating a key anonymization concept appearing in the EU General Data Protection Regulation (GDPR) -- singling out. The analysis identifies and addresses a tension between a common, compelling theory of singling out and a mathematical analysis of this theory, and it demonstrates how to make determinations regarding the sufficiency of specific technologies for satisfying regulatory requirements for anonymization.

Doubts about the feasibility of effective anonymization and de-identification have gained prominence in recent years in response to high-profile privacy breaches enabled by scientific advances in privacy research, improved analytical capabilities, the wider availability of personal data, and the unprecedented richness of available data sources. At the same time, privacy regulations recognize the possibility, at least in principle, of data anonymization that is sufficiently protective so as to free the resulting (anonymized) data from regulation. As a result, practitioners developing privacy enhancing technologies face substantial uncertainty as to the legal standing of these technologies. More fundamentally, it is not clear how to make a determination of compliance even when the tool is fully described and available for examination.

This gap is symptomatic of a more general problem: Legal and technical approaches to data protection have developed in parallel, and their conceptual underpinnings are growing increasingly divergent. When lawmakers rely on purely legal concepts to engage areas that are affected by rapid scientific and technological change, the resulting laws, when applied in practice, frequently create substantial uncertainty for implementation; provide contradictory recommendations in important cases; disagree with current scientific technical understanding; and fail to scale to the rapid pace of technological development. This article argues that new hybrid concepts, created through technical and legal co-design, can inform practices that are practically complete, coherent, and scalable.

As a case study, the article focuses on a key privacy-related concept appearing in Recital 26 of the General Data Protection Regulation (GDPR) called singling out. We identify a compelling theory of singling out that is implicit in the most persuasive guidance available, and demonstrate that the theory is ultimately incomplete. We then use that theory as the basis for a new and mathematically rigorous privacy concept called predicate singling-out. Predicate singling-out sheds light on the notion of singling out in the GDPR, itself inextricably linked to anonymization. We argue that any data protection tool that purports to anonymize arbitrary personal data under the GDPR must prevent predicate singling-out. This enables, for the first time, a legally- and mathematically-grounded analysis of the standing of supposed anonymization technologies like k-anonymity and differential privacy. The analysis in this article is backed by a technical-mathematical analysis previously published by two of the authors.

Conceptually, our analysis demonstrates that a nuanced understanding of baseline risk is unavoidable for a theory of singling out based on current regulatory guidance. Practically, it identifies previously unrecognized failures of anonymization. In particular, it demonstrates that some k-anonymous mechanisms may allow singling out, challenging the prevailing regulatory guidance.

The article concludes with a discussion of specific recommendations for both policymakers and scholars regarding how to conduct a hybrid legal-technical analysis. Rather than formalizing or mathematizing the law, the article provides approaches for wielding formal tools in the service of practical regulation.

 

 
%B BU Journal of Science Technology and Law %V Vol 27 %G eng %U https://www.bu.edu/jostl/files/2021/06/1-Altman.pdf %N 1 %0 Thesis %D 2021 %T Differentially Private Ridge Regression %A Tyler Piazza %X

Studying problems of interest, like finding trends in medical data, can require analyzing data which contains sensitive and personally identifying information. As a result, it is often infeasible to release these datasets to researchers or to the general public. In this paper, we study algorithms that are differentially private, where there are theoretical guarantees that the mechanisms studied will reveal only limited amounts of information about individual people while still providing insights about large groups. This thesis discusses various forms of linear and ridge regression in this differentially private setting, with the goal of studying sensitive data to make predictions about future sensitive data. In particular, we will discuss the internal privacy-loss budgeting of the differentially private ridge regression technique adaSSP. This thesis provides 3 contributions. First, we discuss the existing SSP and adaSSP algorithms, and provide detailed proofs that they are each differentially private. Second, we introduce the two new algorithms adaSSPbudget and constSSPfull and prove that these are each differentially private. Third, we conduct experiments using synthetic and real world data to explore whether the precise privacy-loss budgeting used within these algorithms could improve their performance.

These experiments will explore the tradeoff between the accuracy of a hyperparameter and the accuracy of the other releases. Through the experimental results, we find that the performance is often insensitive to the particular privacy-loss budgeting and that for certain datasets, no choice of privacy-loss budget allows for the adaptive adaSSPbudget to outperform the standard SSP algorithm.

%G eng %0 Generic %D 2021 %T Differential Privacy for Government Agencies – Are We There Yet? %A Jörg Drechsler %X Government agencies always need to carefully consider potential risks of disclosure whenever they publish statistics based on their data or give external researchers access to the collected data. For this reason, research on disclosure-avoiding techniques has a long tradition at statistical agencies. In this context, the promise of formal privacy guarantees offered by concepts such as differential privacy seems to be the panacea enabling the agencies to exactly quantify and control the privacy loss incurred by any data release. Still, despite the excitement in academia and industry, most agencies – with the prominent exception of the U.S. Census Bureau – have been reluctant to even consider the concept for their data release strategy.
This paper aims to shed some light on potential reasons for this. We argue that the requirements when implementing differential privacy approaches at government agencies are often fundamentally different from the requirements in industry. This raises many challenging problems and open questions that still need to be addressed before the concept might be used as an overarching principle when sharing data with the public. The paper will not offer any solutions to these challenges. Instead, we hope to stimulate some collaborative research efforts, as we believe that many of the problems can only be addressed by inter-disciplinary collaborations. %G eng %U https://arxiv.org/abs/2102.08847 %0 Conference Paper %B In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA) %D 2021 %T Connecting Robust Shuffle Privacy and Pan-Privacy %A Victor Balcer %A Albert Cheu %A Matthew Joseph %A Jieming Mao %X In the shuffle model of differential privacy, data-holding users send randomized messages to a secure shuffler, the shuffler permutes the messages, and the resulting collection of messages must be differentially private with regard to user data. In the pan-private model, an algorithm processes a stream of data while maintaining an internal state that is differentially private with regard to the stream data. We give evidence connecting these two apparently different models.
Our results focus on robustly shuffle private protocols, whose privacy guarantees are not greatly affected by malicious users. First, we give robustly shuffle private protocols and upper bounds for counting distinct elements and uniformity testing. Second, we use pan-private lower bounds to prove robustly shuffle private lower bounds for both problems. Focusing on the dependence on the domain size k, we find that robust approximate shuffle privacy and approximate pan-privacy have additive error Θ(√k) for counting distinct elements. For uniformity testing, we give a robust approximate shuffle private protocol with sample complexity Õ(k^{2/3}) and show that an Ω(k^{2/3}) dependence is necessary for any robust pure shuffle private tester. Finally, we show that this connection is useful in both directions: we give a pan-private adaptation of recent work on shuffle private histograms and use it to recover further separations between pan-privacy and interactive local privacy. %B In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA) %P 2384-2403 %G eng %U https://arxiv.org/abs/2004.09481 %0 Conference Paper %B 6th Workshop on the Theory and Practice of Differential Privacy (TPDP 2020) %D 2021 %T A Programming Framework for OpenDP (extended abstract) %A Marco Gaboardi %A Michael Hay %A Salil Vadhan %X

In this paper, we propose a programming framework for the library of differentially private algorithms that will be at the core of the new OpenDP open-source software project (http://opendp.io/).

%B 6th Workshop on the Theory and Practice of Differential Privacy (TPDP 2020) %G eng %0 Conference Paper %B 47th International Colloquium on Automata, Languages and Programming (To appear - ICALP 2020) %D 2020 %T The Complexity of Verifying Loop-free Programs as Differentially Private %A Marco Gaboardi %A Kobbi Nissim %A David Purser %X

We study the problem of verifying differential privacy for loop-free programs with probabilistic choice. Programs in this class can be seen as randomized Boolean circuits, which we will use as a formal model to answer two different questions: first, deciding whether a program satisfies a prescribed level of privacy; second, approximating the privacy parameters a program realizes. We show that the problem of deciding whether a program satisfies ε-differential privacy is coNP^{#P}-complete. In fact, this is the case when either the input domain or the output range of the program is large.

Further, we show that deciding whether a program is (ε, δ)-differentially private is coNP^{#P}-hard, and in coNP^{#P} for small output domains, but always in coNP^{#P^{#P}}. Finally, we show that the problem of approximating the level of differential privacy is both NP-hard and coNP-hard. These results complement previous results by Murtagh and Vadhan [34] showing that deciding the optimal composition of differentially private components is #P-complete, and that approximating the optimal composition of differentially private components is in P.

%B 47th International Colloquium on Automata, Languages and Programming (To appear - ICALP 2020) %8 7 July 2020 %G eng %U https://arxiv.org/abs/1911.03272 %0 Conference Paper %B 20th Privacy Enhancing Technologies Symposium (To appear - PET 2020) %D 2020 %T Formalizing Privacy Laws for License Generation and Data Repository Decision Automation %A Micah Altman %A Stephen Chong %A Alexandra Wood %X In this paper, we summarize work-in-progress on expert system support to automate some data deposit and release decisions within a data repository, and to generate custom license agreements for those data transfers.
Our approach formalizes via a logic programming language the privacy-relevant aspects of laws, regulations, and best practices, supported by legal analysis documented in legal memoranda. This formalization enables automated reasoning about the conditions under which a repository can transfer data, through interrogation of users, and the application of formal rules to the facts obtained from users. The proposed system takes the specific conditions for a given data release and produces a custom data use agreement that accurately captures the relevant restrictions on data use. This enables appropriate decisions and accurate licenses, while removing the bottleneck of lawyer effort per data transfer. The operation of the system aims to be transparent, in the sense that administrators, lawyers, institutional review boards, and other interested parties can evaluate the legal reasoning and interpretation embodied in the formalization, and the specific rationale for a decision to accept or release a particular dataset. %B 20th Privacy Enhancing Technologies Symposium (To appear - PET 2020) %G eng %U https://arxiv.org/abs/1910.10096 %0 Conference Paper %B 52nd ACM Symposium on Theory of Computing (To appear - STOC 2020) %D 2020 %T Better Secret-Sharing via Robust Conditional Disclosure of Secrets %A Benny Applebaum %A Amos Beimel %A Oded Nir %A Naty Peter %X

A secret-sharing scheme allows one to distribute a secret s among n parties such that only some predefined “authorized” sets of parties can reconstruct the secret, and all other “unauthorized” sets learn nothing about s. The collection of authorized sets is called the access structure. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size 2^{n-o(n)}, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) have reduced the share size to 2^{0.994n+o(n)}, which was later improved to 2^{0.892n+o(n)} by Applebaum et al. (EUROCRYPT 2019).

In this paper we improve the exponent of general secret-sharing down to 0.637. For the special case of linear secret-sharing schemes, we get an exponent of 0.762 (compared to 0.942 of Applebaum et al.).

As our main building block, we introduce a new robust variant of conditional disclosure of secrets (robust CDS) that achieves unconditional security even under a limited form of re-usability. We show that the problem of general secret-sharing reduces to robust CDS with sub-exponential overhead and derive our main result by implementing robust CDS with a non-trivial exponent. The latter construction follows by presenting a general immunization procedure that turns standard CDS into a robust CDS.

%B 52nd ACM Symposium on Theory of Computing (To appear - STOC 2020) %G eng %U https://eprint.iacr.org/2020/080 %0 Conference Paper %B Information-Theoretic Cryptography (To appear - ITC 2020) %D 2020 %T The Power of Synergy in Differential Privacy: Combining a Small Curator with Local Randomizers %A Amos Beimel %A Aleksandra Korolova %A Kobbi Nissim %A Or Sheffet %A Uri Stemmer %X

Motivated by the desire to bridge the utility gap between local and trusted curator models of differential privacy for practical applications, we initiate the theoretical study of a hybrid model introduced by “Blender” [Avent et al., USENIX Security ’17], in which differentially private protocols of n agents that work in the local-model are assisted by a differentially private curator that has access to the data of m additional users. We focus on the regime where m ≪ n and study the new capabilities of this (m, n)-hybrid model. We show that, despite the fact that the hybrid model adds no significant new capabilities for the basic task of simple hypothesis testing, there are many other tasks (under a wide range of parameters) that can be solved in the hybrid model yet cannot be solved either by the curator or by the local-users separately. Moreover, we exhibit additional tasks where at least one round of interaction between the curator and the local-users is necessary – namely, no hybrid model protocol without such interaction can solve these tasks. Taken together, our results show that the combination of the local model with a small curator can become part of a promising toolkit for designing and implementing differential privacy.

%B Information-Theoretic Cryptography (To appear - ITC 2020) %G eng %U https://arxiv.org/abs/1912.08951 %0 Journal Article %J Proceedings of the National Academy of Sciences %D 2020 %T Towards formalizing the GDPR’s notion of singling out %A Aloni Cohen %A Kobbi Nissim %X There is a significant conceptual gap between legal and mathematical thinking around data privacy. The effect is uncertainty as to which technical offerings meet legal standards. This uncertainty is exacerbated by a litany of successful privacy attacks demonstrating that traditional statistical disclosure limitation techniques often fall short of the privacy envisioned by regulators. We define “predicate singling out,” a type of privacy attack intended to capture the concept of singling out appearing in the General Data Protection Regulation (GDPR). An adversary predicate singles out a dataset x using the output of a data-release mechanism M(x) if it finds a predicate p matching exactly one row in x with probability much better than a statistical baseline. A data-release mechanism that precludes such attacks is “secure against predicate singling out” (PSO secure). We argue that PSO security is a mathematical concept with legal consequences. Any data-release mechanism that purports to “render anonymous” personal data under the GDPR must prevent singling out and, hence, must be PSO secure. We analyze the properties of PSO security, showing that it fails to compose. Namely, a combination of more than logarithmically many exact counts, each individually PSO secure, facilitates predicate singling out. Finally, we ask whether differential privacy and k-anonymity are PSO secure. Leveraging a connection to statistical generalization, we show that differential privacy implies PSO security. However, and in contrast with current legal guidance, k-anonymity does not: There exists a simple predicate singling out attack under mild assumptions on the k-anonymizer and the data distribution. %B Proceedings of the National Academy of Sciences %G eng %U https://doi.org/10.1073/pnas.1914598117 %0 Thesis %D 2020 %T Certifiably Accurate Private Data Release with Generative Adversarial Networks %A Michael Fine %X

Society is caught in a vise. The exponential growth in the power and ubiquity of computing devices has enabled the collection and analysis of data at an unprecedented scale. This Cambrian explosion in data collection promises enormous benefits across commercial, scientific, and policy fields. Unfortunately, this collection and analysis of increasingly personal data has also proved to be a grave threat to individuals' privacy.

%G eng %9 Bachelor of Arts Thesis %0 Conference Paper %B Information-Theoretic Cryptography (To appear - ITC 2020) %D 2020 %T Separating Local & Shuffled Differential Privacy via Histograms %A Victor Balcer %A Albert Cheu %X

Recent work in differential privacy has highlighted the shuffled model as a promising avenue to compute accurate statistics while keeping raw data in users’ hands. We present a protocol in this model that estimates histograms with error independent of the domain size. This implies an arbitrarily large gap in sample complexity between the shuffled and local models. On the other hand, we show that the models are equivalent when we impose the constraints of pure differential privacy and single-message randomizers.

%B Information-Theoretic Cryptography (To appear - ITC 2020) %G eng %U https://arxiv.org/abs/1911.06879 %0 Conference Paper %B In First Information-Theoretic Cryptography Conference (ITC) %D 2020 %T Separating Local & Shuffled Differential Privacy via Histograms %A Victor Balcer %A Albert Cheu %X Recent work in differential privacy has highlighted the shuffled model as a promising avenue to compute accurate statistics while keeping raw data in users' hands. We present a protocol in this model that estimates histograms with error independent of the domain size. This implies an arbitrarily large gap in sample complexity between the shuffled and local models. On the other hand, the models are equivalent when we impose the constraints of pure differential privacy and single-message randomizers. %B In First Information-Theoretic Cryptography Conference (ITC) %G eng %U https://arxiv.org/abs/1911.06879 %0 Journal Article %J The Proceedings of the National Academy of Sciences (PNAS) %D 2020 %T Towards formalizing the GDPR’s notion of singling out %A Aloni Cohen %A Kobbi Nissim %X

There is a significant conceptual gap between legal and mathematical thinking around data privacy. The effect is uncertainty as to which technical offerings meet legal standards. This uncertainty is exacerbated by a litany of successful privacy attacks demonstrating that traditional statistical disclosure limitation techniques often fall short of the privacy envisioned by regulators.

We define “predicate singling out,” a type of privacy attack intended to capture the concept of singling out appearing in the General Data Protection Regulation (GDPR). An adversary predicate singles out a dataset x using the output of a data-release mechanism M(x) if it finds a predicate p matching exactly one row in x with probability much better than a statistical baseline. A data-release mechanism that precludes such attacks is “secure against predicate singling out” (PSO secure). We argue that PSO security is a mathematical concept with legal consequences. Any data-release mechanism that purports to “render anonymous” personal data under the GDPR must prevent singling out and, hence, must be PSO secure. We analyze the properties of PSO security, showing that it fails to compose. Namely, a combination of more than logarithmically many exact counts, each individually PSO secure, facilitates predicate singling out. Finally, we ask whether differential privacy and k-anonymity are PSO secure. Leveraging a connection to statistical generalization, we show that differential privacy implies PSO security. However, and in contrast with current legal guidance, k-anonymity does not: There exists a simple predicate singling out attack under mild assumptions on the k-anonymizer and the data distribution.

%B The Proceedings of the National Academy of Sciences (PNAS) %V 117 %P 8344-8352 %G eng %N 15 %0 Journal Article %J ACM Transactions on Economics and Computation %D 2020 %T Privacy Games %A Yiling Chen %A Or Sheffet %A Salil Vadhan %X

Version History: 

Previously published as: Yiling Chen, Or Sheffet, and Salil Vadhan. Privacy games. In Proceedings of the 10th International Conference on Web and Internet Economics (WINE ‘14), volume 8877 of Lecture Notes in Computer Science, pages 371–385. Springer-Verlag, 14–17 December 2014. (WINE Publisher's Version linked here: https://link.springer.com/chapter/10.1007/978-3-319-13129-0_30); PDF attached as WINE2014.

The problem of analyzing the effect of privacy concerns on the behavior of selfish utility-maximizing agents has received much attention lately. Privacy concerns are often modeled by altering the utility functions of agents to consider also their privacy loss. Such privacy aware agents prefer to take a randomized strategy even in very simple games in which non-privacy aware agents play pure strategies. In some cases, the behavior of privacy aware agents follows the framework of Randomized Response, a well-known mechanism that preserves differential privacy. 


Our work is aimed at better understanding the behavior of agents in settings where their privacy concerns are explicitly given. We consider a toy setting where agent A, in an attempt to discover the secret type of agent B, offers B a gift that one type of B agent likes and the other type dislikes. As opposed to previous works, B's incentive to keep her type a secret isn't the result of "hardwiring" B's utility function to consider privacy, but rather takes the form of a payment between B and A. We investigate three different types of payment functions and analyze B's behavior in each of the resulting games. As we show, under some payments, B's behavior is very different than the behavior of agents with hardwired privacy concerns and might even be deterministic. Under a different payment we show that B's BNE strategy does fall into the framework of Randomized Response.

%B ACM Transactions on Economics and Computation %V 8 %P Article 9 %G eng %U https://dl.acm.org/doi/10.1145/3381533 %N 2 %0 Journal Article %J Journal of Computer Security %D 2020 %T A Calculus for Flow-Limited Authorization %A Owen Arden %A Anitha Gollamudi %A Ethan Cecchetti %A Stephen Chong %A Andrew C. Myers %X

Real-world applications routinely make authorization decisions based on dynamic computation. Reasoning about dynamically computed authority is challenging. Integrity of the system might be compromised if attackers can improperly influence the authorizing computation. Confidentiality can also be compromised by authorization, since authorization decisions are often based on sensitive data such as membership lists and passwords. Previous formal models for authorization do not fully address the security implications of permitting trust relationships to change, which limits their ability to reason about authority that derives from dynamic computation. Our goal is a way to construct dynamic authorization mechanisms that do not violate confidentiality or integrity.

We introduce the Flow-Limited Authorization Calculus (FLAC), which is both a simple, expressive model for reasoning about dynamic authorization and also an information flow control language for securely implementing various authorization mechanisms. FLAC combines the insights of two previous models: it extends the Dependency Core Calculus with features made possible by the Flow-Limited Authorization Model. FLAC provides strong end-to-end information security guarantees even for programs that incorporate and implement rich dynamic authorization mechanisms. These guarantees include noninterference and robust declassification, which prevent attackers from influencing information disclosures in unauthorized ways. We prove these security properties formally for all FLAC programs and explore the expressiveness of FLAC with several examples.

%B Journal of Computer Security %G eng %0 Conference Paper %D 2020 %T The OpenDP White Paper %A The OpenDP Team %X

OpenDP is a community effort to build a trustworthy suite of open-source tools for enabling privacy-protective analysis of sensitive personal data, focused on a library of algorithms for generating differentially private statistical releases. The target use cases for OpenDP are to enable government, industry, and academic institutions to safely and confidently share sensitive data to support scientifically oriented research and exploration in the public interest. We aim for OpenDP to flexibly grow with the rapidly advancing science of differential privacy, and be a pathway to bring the newest algorithmic developments to a wide array of practitioners.

OpenDP is led by Faculty Directors Gary King and Salil Vadhan and an Executive Committee at Harvard University, funded in part by a grant from the Sloan Foundation. Its efforts so far have included implementing a differentially private curator application in collaboration with Microsoft, and developing a framework for a community-driven OpenDP Commons through the work of an Ad Hoc Design Committee including external experts. Going forward, the project plans to engage with a wide community of stakeholders, establish partnerships with a wide variety of groups from industry, academia, and government, and adopt increasing levels of community governance.

%G eng %U https://projects.iq.harvard.edu/files/opendp/files/opendp_white_paper_11may2020.pdf %0 Book %D 2020 %T Using Administrative Data for Research and Evidence-based Policy - A Handbook %A Micah Altman %A Kobbi Nissim %A Salil Vadhan %A Alexandra Wood %X Webinar %I Abdul Latif Jameel Poverty Action Lab (J-PAL) %C Cambridge %P 173 - 242 %G eng %U https://admindatahandbook.mit.edu/book/v1.0/diffpriv.html %0 Journal Article %J INSS Cyber Intelligence and Security [Internet] %D 2019 %T Social Change Through Computerized Accessibility of Legal Rules %A Tadjer, Michel %A Bar-Sinai, Michael %A Mor Vilozni %X

This article presents a self-help software system that makes rights accessible through an on-line interview. The interview is based on a formal model of the relevant jurisprudence and does not require the involvement of a service representative, only a user who wants to understand his or her rights. The article also provides a methodology for building models and interviews for similar social contexts, and describes building a model for workers’ rights under Israeli law upon termination of their employment. Beyond conducting interviews, these models can be used to create diagrams and perform legal queries. This kind of system can fulfill a central role in empowering disadvantaged populations, as it enables people to assess their rights in a user-friendly manner that is personalized to the situation of the interviewee and not overburdened with large amounts of information that is difficult to navigate.

%B INSS Cyber Intelligence and Security [Internet] %V 3 %P 81-98 %G eng %U https://www.inss.org.il/publication/social-change-through-computerized-accessibility-of-legal-rules/ %N 2 %0 Conference Paper %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019 %D 2019 %T Exploring Differential Obliviousness %A Amos Beimel %A Kobbi Nissim %A Mohammad Zaherix %X

In a recent paper, Chan et al. [SODA ’19] proposed a relaxation of the notion of (full) memory obliviousness, which was introduced by Goldreich and Ostrovsky [J. ACM ’96] and extensively researched by cryptographers. The new notion, differential obliviousness, requires that any two neighboring inputs exhibit similar memory access patterns, where the similarity requirement is that of differential privacy.

Chan et al. demonstrated that differential obliviousness allows achieving improved efficiency for several algorithmic tasks, including sorting, merging of sorted lists, and range query data structures.

In this work, we continue the exploration of differential obliviousness, focusing on algorithms that do not necessarily examine all their input. This choice is motivated by the fact that the existence of logarithmic overhead ORAM protocols implies that differential obliviousness can yield at most a logarithmic improvement in efficiency for computations that need to examine all their input. In particular, we explore property testing, where we show that differential obliviousness yields an almost linear improvement in overhead in the dense graph model, and at most quadratic improvement in the bounded degree model.

We also explore tasks where a non-oblivious algorithm would need to explore different portions of the input, where the latter would depend on the input itself, and where we show that such a behavior can be maintained under differential obliviousness, but not under full obliviousness. Our examples suggest that there would be benefits in further exploring which class of computational tasks are amenable to differential obliviousness.

%B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019 %G eng %U https://drops.dagstuhl.de/opus/frontdoor.php?source_opus=11280 %0 Conference Paper %B 21st International Symposium on Principles and Practice of Declarative Programming (PPDP 2019) %D 2019 %T Relational Symbolic Execution %A Gian Pietro Farina %A Stephen Chong %A Marco Gaboardi %X

Symbolic execution is a classical program analysis technique used to show that programs satisfy or violate given specifications. In this work we generalize symbolic execution to support program analysis for relational specifications in the form of relational properties - these are properties about two runs of two programs on related inputs, or about two executions of a single program on related inputs. Relational properties are useful to formalize notions in security and privacy, and to reason about program optimizations. We design a relational symbolic execution engine, named RelSym, which supports interactive refutation, as well as proving of relational properties for programs written in a language with arrays and for-like loops.

%B 21st International Symposium on Principles and Practice of Declarative Programming (PPDP 2019) %G eng %U https://doi.org/10.1145/3354166.3354175 %0 Journal Article %J Journal of Privacy and Confidentiality %D 2019 %T Differential Privacy on Finite Computers %A Victor Balcer %A Salil Vadhan %X

Version History: 

Also presented at TPDP 2017; preliminary version posted as arXiv:1709.05396 [cs.DS].

2018: Published in Anna R. Karlin, editor, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), volume 94 of Leibniz International Proceedings in Informatics (LIPIcs), pp 43:1-43:21. http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=8353

We consider the problem of designing and analyzing differentially private algorithms that can be implemented on discrete models of computation in strict polynomial time, motivated by known attacks on floating point implementations of real-arithmetic differentially private algorithms (Mironov, CCS 2012) and the potential for timing attacks on expected polynomial-time algorithms. As a case study, we examine the basic problem of approximating the histogram of a categorical dataset over a possibly large data universe \(X\). The classic Laplace Mechanism (Dwork, McSherry, Nissim, Smith, TCC 2006 and J. Privacy & Confidentiality 2017) does not satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such as the Geometric Mechanism (Ghosh, Roughgarden, Sundararajan, STOC 2009 and SICOMP 2012), take time at least linear in \(|X|\), which can be exponential in the bit length of the input.

In this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the Laplace Mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee. One of our algorithms produces a sparse histogram as output. Its "per-bin accuracy" (the error on individual bins) is worse than that of the Laplace Mechanism by a factor of \(\log |X|\), but we prove a lower bound showing that this is necessary for any algorithm that produces a sparse histogram. A second algorithm avoids this lower bound, and matches the per-bin accuracy of the Laplace Mechanism, by producing a compact and efficiently computable representation of a dense histogram; it is based on an \((n+1)\)-wise independent implementation of an appropriately clamped version of the Discrete Geometric Mechanism.

 

%B Journal of Privacy and Confidentiality %V 9 %G eng %U https://journalprivacyconfidentiality.org/index.php/jpc/article/view/679 %N 2 %0 Conference Paper %B International Cryptology Conference (CRYPTO 2019) %D 2019 %T The Privacy Blanket of the Shuffle Model %A Borja Balle %A James Bell %A Adria Gascon %A Kobbi Nissim %X This work studies differential privacy in the context of the recently proposed shuffle model. Unlike in the local model, where the server collecting privatized data from users can track back an input to a specific user, in the shuffle model users submit their privatized inputs to a server anonymously. This setup yields a trust model which sits in between the classical curator and local models for differential privacy. The shuffle model is the core idea in the Encode, Shuffle, Analyze (ESA) model introduced by Bittau et al. (SOPS 2017). Recent work by Cheu et al. (EUROCRYPT 2019) analyzes the differential privacy properties of the shuffle model and shows that in some cases shuffled protocols provide strictly better accuracy than local protocols. Additionally, Erlingsson et al. (SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user. In this context, we make three contributions. First, we provide an optimal single message protocol for summation of real numbers in the shuffle model. Our protocol is very simple and has better accuracy and communication than the protocols for this same problem proposed by Cheu et al. Optimality of this protocol follows from our second contribution, a new lower bound for the accuracy of private protocols for summation of real numbers in the shuffle model. The third contribution is a new amplification bound for analyzing the privacy of protocols in the shuffle model in terms of the privacy provided by the corresponding local randomizer. Our amplification bound generalizes the results by Erlingsson et al. to a wider range of parameters, and provides a whole family of methods to analyze privacy amplification in the shuffle model. %B International Cryptology Conference (CRYPTO 2019) %G eng %U https://link.springer.com/chapter/10.1007/978-3-030-26951-7_22 %0 Conference Paper %B Computer Security Foundations %D 2019 %T Information Flow Control for Distributed Trusted Execution Environments %A Anitha Gollamudi %A Owen Arden %A Stephen Chong %X Distributed applications cannot assume that their security policies will be enforced on untrusted hosts. Trusted execution environments (TEEs) combined with cryptographic mechanisms enable execution of known code on an untrusted host and the exchange of confidential and authenticated messages with it. TEEs do not, however, establish the trustworthiness of code executing in a TEE. Thus, developing secure applications using TEEs requires specialized expertise and careful auditing. This paper presents DFLATE, a core security calculus for distributed applications with TEEs. DFLATE offers high-level abstractions that reflect both the guarantees and limitations of the underlying security mechanisms they are based on. The accuracy of these abstractions is exhibited by asymmetry between confidentiality and integrity in our formal results: DFLATE enforces a strong form of noninterference for confidentiality, but only a weak form for integrity. 
This reflects the asymmetry of the security guarantees of a TEE: a malicious host cannot access secrets in the TEE or modify its contents, but they can suppress or manipulate the sequence of its inputs and outputs. Therefore DFLATE cannot protect against the suppression of high-integrity messages, but when these messages are delivered, their contents cannot have been influenced by an attacker. %B Computer Security Foundations %G eng %U https://ieeexplore.ieee.org/document/8823697 %0 Conference Paper %B Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT 2019) %D 2019 %T Secret-Sharing Schemes for General and Uniform Access Structures %A Benny Applebaum %A Amos Beimel %A Oriol Farràs %A Oded Nir %A Naty Peter %X A secret-sharing scheme allows some authorized sets of parties to reconstruct a secret; the collection of authorized sets is called the access structure. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size \(2^{n-o(n)}\), and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) have reduced the share size to \(O(2^{0.994n})\). Our first contribution is improving the exponent of secret sharing down to 0.892. For the special case of linear secret-sharing schemes, we get an exponent of 0.942 (compared to 0.999 of Liu and Vaikuntanathan). Motivated by the construction of Liu and Vaikuntanathan, we study secret-sharing schemes for uniform access structures. An access structure is k-uniform if all sets of size larger than k are authorized, all sets of size smaller than k are unauthorized, and each set of size k can be either authorized or unauthorized. The construction of Liu and Vaikuntanathan starts from protocols for conditional disclosure of secrets, constructs secret-sharing schemes for uniform access structures from them, and combines these schemes in order to obtain secret-sharing schemes for general access structures. Our second contribution in this paper is constructions of secret-sharing schemes for uniform access structures. We achieve the following results: A secret-sharing scheme for k-uniform access structures for large secrets in which the share size is \(O(k^2)\) times the size of the secret. A linear secret-sharing scheme for k-uniform access structures for a binary secret in which the share size is \(\tilde{O}(2^{h(k/n)n/2})\) (where \(h\) is the binary entropy function). By counting arguments, this construction is optimal (up to polynomial factors). A secret-sharing scheme for k-uniform access structures for a binary secret in which the share size is \(2^{\tilde{O}(\sqrt{k\log n})}\). Our third contribution is a construction of ad-hoc PSM protocols, i.e., PSM protocols in which only a subset of the parties will compute a function on their inputs. This result is based on ideas we used in the construction of secret-sharing schemes for k-uniform access structures for a binary secret. %B Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT 2019) %G eng %U https://link.springer.com/chapter/10.1007/978-3-030-17659-4_15 %0 Conference Paper %B 2019 Symposium on the Theory of Computation %D 2019 %T The Structure of Optimal Private Tests for Simple Hypotheses %A Clement L. Canonne %A Gautam Kamath %A Audra McMillan %A Smith, Adam %A Jonathan Ullman %X Hypothesis testing plays a central role in statistical inference, and is used in many settings where privacy concerns are paramount.
This work answers a basic question about privately testing simple hypotheses: given two distributions P and Q, and a privacy level ε, how many i.i.d. samples are needed to distinguish P from Q subject to ε-differential privacy, and what sort of tests have optimal sample complexity? Specifically, we characterize this sample complexity up to constant factors in terms of the structure of P and Q and the privacy level ε, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test. Our result is an analogue of the classical Neyman–Pearson lemma in the setting of private hypothesis testing. We also give an application of our result to private change-point detection. Our characterization applies more generally to hypothesis tests satisfying essentially any notion of algorithmic stability, which is known to imply strong generalization bounds in adaptive data analysis, and thus our results have applications even when privacy is not a primary concern. %B 2019 Symposium on the Theory of Computation %G eng %U https://arxiv.org/pdf/1811.11148.pdf %0 Journal Article %J Journal of Privacy and Confidentiality %D 2019 %T Concentration Bounds for High Sensitivity Functions Through Differential Privacy %A Kobbi Nissim %A Uri Stemmer %X

A new line of work [6, 9, 15, 2] demonstrates how differential privacy [8] can be used as a mathematical tool for guaranteeing generalization in adaptive data analysis. Specifically, if a differentially private analysis is applied on a sample S of i.i.d. examples to select a low-sensitivity function f, then w.h.p. f(S) is close to its expectation, although f is being chosen based on the data. Very recently, Steinke and Ullman [16] observed that these generalization guarantees can be used for proving concentration bounds in the non-adaptive setting, where the low-sensitivity function is fixed beforehand. In particular, they obtain alternative proofs for classical concentration bounds for low-sensitivity functions, such as the Chernoff bound and McDiarmid’s Inequality. In this work, we set out to examine the situation for functions with high sensitivity, for which differential privacy does not imply generalization guarantees under adaptive analysis. We show that differential privacy can be used to prove concentration bounds for such functions in the non-adaptive setting.
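
For reference, the classical statement recovered in the works cited above (the standard McDiarmid inequality, not a result of this paper): if \(f\) has sensitivity at most \(\Delta\) in each of \(n\) independent coordinates of the sample \(S\), then

\[
\Pr\bigl[\,|f(S) - \mathbb{E}[f(S)]| \ge t\,\bigr] \;\le\; 2\exp\!\left(-\frac{2t^2}{n\Delta^2}\right).
\]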

%B Journal of Privacy and Confidentiality %V 9 %G eng %U https://journalprivacyconfidentiality.org/index.php/jpc/article/view/658 %N 1 %0 Conference Paper %B 32nd Annual Conference on Learning Theory %D 2019 %T Private Center Points and Learning of Halfspaces %A Amos Beimel %A Shay Moran %A Kobbi Nissim %A Uri Stemmer %X We present a private learner for halfspaces over an arbitrary finite domain \(X \subset \mathbb{R}^d\) with sample complexity \(\mathrm{poly}(d, 2^{\log^* |X|})\). The building block for this learner is a differentially private algorithm for locating an approximate center point of \(m > \mathrm{poly}(d, 2^{\log^* |X|})\) points -- a high-dimensional generalization of the median function. Our construction establishes a relationship between these two problems that is reminiscent of the relation between the median and learning one-dimensional thresholds [Bun et al., FOCS '15]. This relationship suggests that the problem of privately locating a center point may have further applications in the design of differentially private algorithms.
We also provide a lower bound on the sample complexity for privately finding a point in the convex hull. For approximate differential privacy, we show a lower bound of \(m = \Omega(d + \log|X|)\), whereas for pure differential privacy \(m = \Omega(d \log|X|)\). %B 32nd Annual Conference on Learning Theory %V 99 %P 1–14 %G eng %U http://proceedings.mlr.press/v99/beimel19a.html %0 Journal Article %J European Data Protection Law Review %D 2019 %T Data Protection's Composition Problem %A Aaron Fluitt %A Aloni Cohen %A Micah Altman %A Kobbi Nissim %A Salome Viljoen %A Alexandra Wood %X

Is it possible to piece together the confidential data of almost everyone in the US from statistics published by the Census Bureau—without breaching Census security or policy? Could someone—a doctor, a nosy neighbor, or a foreign state actor—determine whether a particular person participated in a genetic study of hundreds of individuals, when each individual contributed only tiny trace amounts of DNA to a highly complex and aggregated genetic mixture? Could police detectives re-trace a suspect’s every movement over the course of many months and thereby learn intimate details about the suspect’s political, religious, and sexual associations—without having to deploy any sort of surveillance or tracking devices? Could someone reliably deduce the sexual preferences of a Facebook user without looking at any content that user has shared?

Until recently, most people probably never imagined that their highly sensitive personal data could be so vulnerable to discovery from seemingly innocuous sources. Many continue to believe that the privacy risks from purely public, statistical, and anonymised data are merely theoretical, and that the practical risks are negligibly small. Yet all of the privacy violations described above are not only theoretically possible—they have already been successfully executed.

The foregoing examples of real-world privacy attacks all leverage one particular vulnerability that we refer to as composition effects. This vulnerability stems from the cumulative erosions of privacy that inhere in every piece of data about people. These erosions occur no matter how aggregated, insignificant, or anonymised the data may seem, and even small erosions can combine in unanticipated ways to create big risks.

Privacy and data protection failures from unanticipated composition effects reflect a type of data myopia—a short-sighted approach toward addressing increasingly-ubiquitous surveillance and privacy risks from Big Data analytics, characterized by a near-total focus on individual data processors and processes and by pervasive underestimation of systemic risks accumulating from independent data products. The failure to recognize accumulation of risk in the information ecosystem reflects a more general societal blind spot to cumulative systemic risks, with parallels in collective failures to foresee or forestall global financial crises, and to adequately address mounting risks to the natural environment.

As the volume and complexity of data uses and publications grow rapidly across a broad range of contexts, the need to develop frameworks for addressing cumulative privacy risks is likely to become an increasingly urgent and widespread problem. Threats to privacy are growing due to the accelerating abundance, and richness, of data about individuals being generated and made publicly available. Furthermore, substantial increases in computing power and algorithmic improvements are making the execution of such attacks more technically feasible. These threats will be impossible to overcome unless regulations are designed to explicitly regulate cumulative risk in a manner that is consistent with the science of composition effects.

 

 
%B European Data Protection Law Review %V 5 %P 285-292 %G eng %U https://edpl.lexxion.eu/article/EDPL/2019/3/4 %N 3 %0 Generic %D 2018 %T Bootstrap Inference and Differential Privacy: Standard Errors for Free %A Thomas Brawner %A James Honaker %X The bootstrap is a common and powerful statistical tool for numerically computing the standard error of estimators, that is, a calculation of the uncertainty of functions computed on sample data so as to make an inference back to the original population from which the sample was drawn. Understanding uncertainty, and inferential questions, in the context of private data is an increasingly important task within the literature of differential privacy [7, 20, 15]. We show how to construct an implementation of the bootstrap within differential privacy. Most importantly, we show that, for a broad class of functions under zero concentrated differential privacy, the bootstrap can be implemented at no cost. That is, for a given choice of privacy parameter and associated expected error of some query, the bootstrap can be implemented for the exact same privacy guarantee, resulting in the same expected error (or sometimes less) in the desired query, but additionally provide the standard error of that query. In section 2 we provide a brief overview of differential privacy. Then to describe these results on bootstrap inference, in section 3 we describe some foundational results on the aggregation of repeated queries under contrasting privacy and composition definitions. This leads to a tangential result in section 4 on a low-noise Gaussian mechanism for pure differential privacy. Next we provide a brief foundation on the bootstrap algorithm in statistics in section 5, before showing our algorithmic construction of the bootstrap using the mechanisms of differential privacy in section 6. In section 7 we describe how to use the differentially private estimate of the standard error in the construction of confidence intervals and hypothesis tests, and then demonstrate this in section 8 with examples using published Census microdata in the style of privacy sensitive data. %B Summer Meetings of the Society for Political Methodology, Provo, UT %G eng %0 Conference Paper %B ACM SIGMOD/PODS Conference International Conference on Management of Data (PODS 2018) %D 2018 %T Heavy Hitters and the Structure of Local Privacy %A Mark Bun %A Jelani Nelson %A Uri Stemmer %B ACM SIGMOD/PODS Conference International Conference on Management of Data (PODS 2018) %8 2018 %G eng %0 Journal Article %J Conference on Algorithmic Learning Theory (ALT 2018) %D 2018 %T Clustering Algorithms for the Centralized and Local Models %A Kobbi Nissim %A Uri Stemmer %B Conference on Algorithmic Learning Theory (ALT 2018) %G eng %0 Government Document %D 2018 %T Comments to the U.S. Office of Management and Budget, Re: Request for information (New Techniques and Methodologies Based on Combining Data from Multiple Sources) %A Micah Altman %A Aloni Cohen %A Aaron Fluitt %A James Honaker %A Kobbi Nissim %A Michael Washington %A Alexandra Wood %G eng %0 Thesis %B Applied Mathematics %D 2018 %T Towards Differentially Private Inference on Network Data %A Alexander Koujianos Goldberg %X Statistical analysis of network data, while popular in a broad range of fields, can also be highly problematic from a privacy standpoint. In this thesis, we study privacy-preserving inference on network data using the rigorous notion of differential privacy. 
We propose new methods for differentially private inference using a common class of models known as Exponential Random Graph Models (ERGMs). The goal of our work is to accurately estimate the parameters of an ERGM applied to a network dataset, while offering meaningful privacy guarantees to participants. We propose methods that provably guarantee differential privacy at two different granularities: edge-level privacy, which protects the privacy of any single relationship in the network and node-level privacy, which protects all of the relationships of a participant. Specifically, using the framework of "restricted sensitivity," we take advantage of the sparsity of real-world networks to perturb data much less than prior work while guaranteeing differential privacy. We empirically evaluate the accuracy of inference in a series of experiments on both synthetic networks and a real network dataset. Experimental results suggest that our proposed methods enable accurate inference under meaningful privacy guarantees in settings where current methods do not, moving us closer to the goal of useful differentially private statistical modeling of network data. %B Applied Mathematics %G eng %9 Undergraduate thesis %0 Journal Article %J International Data Privacy Law %D 2018 %T Practical Approaches to Big Data Privacy Over Time %A Micah Altman %A Alexandra Wood %A O'Brien, David %A Gasser, Urs %B International Data Privacy Law %V 8 %P 29-51 %G eng %U https://academic.oup.com/idpl/advance-article/doi/10.1093/idpl/ipx027/4930711 %N 1 %0 Conference Paper %D 2018 %T Bridging the Gap between Computer Science and Legal Approaches to Privacy %A K. Nissim %A A Bembenek %A A Wood %A M Bun %A M Gaboardi %A Gasser, U. %A D O'Brien %A T Steinke %A S. Vadhan %X
The fields of law and computer science incorporate contrasting notions of the privacy risks associated with the analysis and release of statistical data about individuals and groups of individuals. Emerging concepts from the theoretical computer science literature provide formal mathematical models for quantifying and mitigating privacy risks, where the set of risks they take into account is much broader than the privacy risks contemplated by many privacy laws. An example of such a model is differential privacy, which provides a provable guarantee of privacy against a wide range of potential attacks, including types of attacks currently unknown or unforeseen. The subject of much theoretical investigation, new privacy technologies based on formal models have recently been making significant strides towards practical implementation. For these tools to be used with sensitive personal information, it is important to demonstrate that they satisfy relevant legal requirements for privacy protection. However, making such an argument is challenging due to the conceptual gaps between the legal and technical approaches to defining privacy. Notably, information privacy laws are generally subject to interpretation and some degree of flexibility, which creates uncertainty for the implementation of more formal approaches. This Article articulates the gaps between legal and technical approaches to privacy and presents a methodology for rigorously arguing that a technological method for privacy protection satisfies the requirements of a particular law. The proposed methodology has two main components: (i) extraction of a formal mathematical requirement of privacy based on a legal standard found in an information privacy law, and (ii) construction of a rigorous mathematical proof for establishing that a technological privacy solution satisfies the mathematical requirement derived from the law. To handle ambiguities that can lead to different interpretations of a legal standard, the methodology takes a conservative “worst-case” approach and attempts to extract a mathematical requirement that is robust to potential ambiguities. Under this approach, the mathematical proof demonstrates that a technological method satisfies a broad range of reasonable interpretations of a legal standard. The Article demonstrates the application of the proposed methodology with an example bridging between the requirements of the Family Educational Rights and Privacy Act of 1974 and differential privacy.
%7 2 %I Harvard Journal of Law & Technology %V 31 %P 687-780 %8 2016 %G eng %U https://jolt.law.harvard.edu/assets/articlePDFs/v31/02.-Article-Wood-7.21.pdf %0 Conference Paper %B Theory of Cryptography Conference (TCC 2016) %D 2018 %T The Complexity of Computing the Optimal Composition of Differential Privacy %A Jack Murtagh %A Salil Vadhan %X

In the study of differential privacy, composition theorems (starting with the original paper of Dwork, McSherry, Nissim, and Smith (TCC'06)) bound the degradation of privacy when composing several differentially private algorithms. Kairouz, Oh, and Viswanath (ICML'15) showed how to compute the optimal bound for composing k arbitrary \((\epsilon, \delta)\)-differentially private algorithms. We characterize the optimal composition for the more general case of k arbitrary \((\epsilon_1, \delta_1), \ldots, (\epsilon_k, \delta_k)\)-differentially private algorithms where the privacy parameters may differ for each algorithm in the composition. We show that computing the optimal composition in general is #P-complete. Since computing optimal composition exactly is infeasible (unless FP = #P), we give an approximation algorithm that computes the composition to arbitrary accuracy in polynomial time. The algorithm is a modification of Dyer's dynamic programming approach to approximately counting solutions to knapsack problems (STOC'03).
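
For context (these are standard baselines, not contributions of this paper): basic composition says that running mechanisms that are \((\epsilon_1,\delta_1), \ldots, (\epsilon_k,\delta_k)\)-differentially private is \((\sum_i \epsilon_i, \sum_i \delta_i)\)-differentially private, while advanced composition says that the k-fold composition of \((\epsilon,\delta)\)-DP mechanisms is, for every \(\delta' > 0\),

\[
\Bigl(\,\epsilon\sqrt{2k\ln(1/\delta')} + k\epsilon(e^{\epsilon}-1),\; k\delta + \delta'\,\Bigr)\text{-differentially private.}
\]

The optimal composition bound computed in the paper improves on both, and the #P-hardness result explains why evaluating it exactly is expensive.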

%B Theory of Cryptography Conference (TCC 2016) %7 8 %I Theory of Computing (2018) %V 14 %P 1-35 %G eng %U http://theoryofcomputing.org/articles/v014a008/ %0 Generic %D 2018 %T Composable and Versatile Privacy via Truncated CDP (Poster) %A Mark Bun %A Cynthia Dwork %A Guy Rothblum %A Thomas Steinke %B ACM Symposium on the Theory of Computing (STOC 2018) %G eng %0 Journal Article %D 2018 %T Computational Two-Party Correlation: A Dichotomy for Key-Agreement Protocols %A Iftach Haitner %A Ronen Shaltiel %A Jad Silbak %A Kobbi Nissim %A Eran Omri %G eng %U https://www.irif.fr/~focs2018/accepted/ %0 Journal Article %J Vanderbilt Journal of Entertainment and Technology Law %D 2018 %T Differential Privacy: A Primer for a Non-technical Audience %A Kobbi Nissim %A Thomas Steinke %A Alexandra Wood %A Micah Altman %A Aaron Bembenek %A Mark Bun %A Marco Gaboardi %A O'Brien, David %A Salil Vadhan %X

This document is a primer on differential privacy, which is a formal mathematical framework for guaranteeing privacy protection when analyzing or releasing statistical data. Recently emerging from the theoretical computer science literature, differential privacy is now in initial stages of implementation and use in various academic, industry, and government settings. Using intuitive illustrations and limited mathematical formalism, this document provides an introduction to differential privacy for non-technical practitioners, who are increasingly tasked with making decisions with respect to differential privacy as it grows more widespread in use. In particular, the examples in this document illustrate ways in which social scientists can conceptualize the guarantees provided by differential privacy with respect to the decisions they make when managing personal data about research subjects and informing them about the privacy protection they will be afforded. 
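
For readers who want the single formula behind the guarantees the primer describes, the standard definition (stated here for reference, not specific to this article) is: a randomized algorithm \(M\) is \((\epsilon, \delta)\)-differentially private if for all datasets \(x, x'\) differing in one individual's record and all sets \(S\) of outputs,

\[
\Pr[M(x) \in S] \;\le\; e^{\epsilon}\,\Pr[M(x') \in S] + \delta.
\]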

 

%B Vanderbilt Journal of Entertainment and Technology Law %I a product of the "Bridging Privacy Definitions" working group, part of the Privacy Tools for Sharing Research Data project at Harvard University %C Cambridge, MA %V 21 %P 209-276 %8 2018 %G eng %N 1 %0 Journal Article %J 9th Innovations in Theoretical Computer Science Conference (ITCS 2018) %D 2018 %T Finite Sample Differentially Private Confidence Intervals %A Vishesh Karwa %A Salil Vadhan %X We study the problem of estimating finite sample confidence intervals of the mean of a normal population under the constraint of differential privacy. We consider both the known and unknown variance cases and construct differentially private algorithms to estimate confidence intervals. Crucially, our algorithms guarantee a finite sample coverage, as opposed to an asymptotic coverage. Unlike most previous differentially private algorithms, we do not require the domain of the samples to be bounded. We also prove lower bounds on the expected size of any differentially private confidence set showing that our the parameters are optimal up to polylogarithmic factors. %B 9th Innovations in Theoretical Computer Science Conference (ITCS 2018) %G eng %U https://arxiv.org/abs/1711.03908 %0 Journal Article %J IEEE Security & Privacy %D 2018 %T A Harm-Reduction Framework for Algorithmic Accountability over Personal Information %A Micah Altman %A Alexandra Wood %A Effy Vayena %B IEEE Security & Privacy %V 16 %P 34-45 %G eng %U https://ieeexplore.ieee.org/document/8395114/ %N 3 %0 Journal Article %J Philosophical Transaction of the Royal Society A %D 2018 %T Is Privacy Privacy? %A Kobbi Nissim %A Alexandra Wood %X This position paper observes how different technical and normative conceptions of privacy have evolved in parallel and describes the practical challenges that these divergent approaches pose. Notably, past technologies relied on intuitive, heuristic understandings of privacy that have since been shown not to satisfy expectations for privacy protection. With computations ubiquitously integrated in almost every aspect of our lives, it is increasingly important to ensure that privacy technologies provide protection that is in line with relevant social norms and normative expectations. Similarly, it is also important to examine social norms and normative expectations with respect to the evolving scientific study of privacy. To this end, we argue for a rigorous analysis of the mapping from normative to technical concepts of privacy and vice versa. We review the landscape of normative and technical definitions of privacy and discuss specific examples of gaps between definitions that are relevant in the context of privacy in statistical computation. We then identify opportunities for overcoming their differences in the design of new approaches to protecting privacy in accordance with both technical and normative standards. 
%B Philosophical Transaction of the Royal Society A %V 376 %G eng %U http://rsta.royalsocietypublishing.org/content/376/2128/20170358 %N 2128 %0 Journal Article %J Advances in Neural Information Processing Systems %D 2018 %T Thwarting Adversarial Examples: An L_0-Robust Sparse Fourier Transform %A Mitali Bafna %A Jack Murtagh %A Nikhil Vyas %B Advances in Neural Information Processing Systems %V 31 %P 10096--10106 %G eng %U http://papers.nips.cc/paper/8211-thwarting-adversarial-examples-an-l_0-robust-sparse-fourier-transform %0 Journal Article %D 2018 %T Usable Differential Privacy: A Case Study with PSI %A Jack Murtagh %A Taylor, Kathryn %A George Kellaris %A Salil Vadhan %G eng %U https://arxiv.org/abs/1809.04103 %0 Report %D 2017 %T Comments on the City of Seattle Open Data Risk Assessment %A Alexandra Wood %A Micah Altman %A Suso Baleato %A Salil Vadhan %X

The transparency goals of the open data movement serve important social, economic, and democratic functions in cities like Seattle. At the same time, some municipal datasets about the city and its citizens’ activities carry inherent risks to individual privacy when shared publicly. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy. To ensure its Open Data program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked the Future of Privacy Forum (FPF) with creating and deploying an initial privacy risk assessment methodology for open data.

This Draft Report provides tools and guidance to the City of Seattle and other municipalities navigating the complex policy, operational, technical, organizational, and ethical standards that support privacy-protective open data programs. Although there is a growing body of research on open data privacy, open data managers and departmental data owners need to be able to employ a standardized methodology for assessing the privacy risks and benefits of particular datasets internally, without a bevy of expert statisticians, privacy lawyers, or philosophers. By following a flexible, risk-based assessment process, the City of Seattle – and other municipal open data programs – can maximize the utility and openness of civic data while minimizing privacy risks to individuals and community concerns about ethical challenges, fairness, and equity.

This Draft Report first describes inherent privacy risks in an open data landscape, with an emphasis on potential harms related to re-identification, data quality, and fairness. Accompanying this, the Draft Report includes a Model Open Data Benefit Risk Analysis (MODBRA). The model template evaluates the types of data contained in a proposed open dataset, the potential benefits – and concomitant risks – of releasing the dataset publicly, and strategies for effective de-identification and risk mitigation. This holistic assessment guides city officials to determine whether to release the dataset openly, in a limited access environment, or to withhold it from publication (absent countervailing public policy considerations). The Draft Report methodology builds on extensive work done in this field by experts at the National Institute of Standards and Technology, the University of Washington, the Berkman Klein Center for Internet & Society at Harvard University, and others, and adapts existing frameworks to the unique challenges faced by cities as local governments, technological system integrators, and consumer facing service providers.

%B Comments to the Future of Privacy Forum %G eng %U https://fpf.org/wp-content/uploads/2018/01/Wood-Altman-Baleato-Vadhan_Comments-on-FPF-Seattle-Open-Data-Draft-Report.pdf %0 Journal Article %J Symposium on the Principle of Programming Languages, ACM %D 2017 %T Relational Cost Analysis %A Ezgi Cicek %A Gilles Barthe %A Marco Gaboardi %A Deepak Garg %A Jan Hoffmann %X Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security and privacy. Techniques based on program analysis, type systems and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention. Naively combining the worst and best case execution costs of the two programs does not work well in many cases because such analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop RelCost, a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing—relational typing for reasoning about similar computations/inputs and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples. %B Symposium on the Principle of Programming Languages, ACM %8 Jan 2017 %G eng %0 Journal Article %J Symposium on the Principle of Programming Languages, ACM %D 2017 %T A Semantic Account of Metric Preservation %A Arthur Azevedo de Amorim %A Marco Gaboardi %A Justin Hsu %A Shin-ya Katsumata %A Ikram Cherigui %X Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r⋅d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz. 
%B Symposium on the Principle of Programming Languages, ACM %8 Jan 2017 %G eng %U https://arxiv.org/abs/1702.00374 %0 Journal Article %J Journal of Privacy and Confidentiality %D 2017 %T Between Pure and Approximate Differential Privacy %A Thomas Steinke %A Jonathan Ullman %X We show a new lower bound on the sample complexity of (ε, δ)-differentially private algorithms that accurately answer statistical queries on high-dimensional databases. The novelty of our bound is that it depends optimally on the parameter δ, which loosely corresponds to the probability that the algorithm fails to be private, and is the first to smoothly interpolate between approximate differential privacy (δ > 0) and pure differential privacy (δ = 0). %B Journal of Privacy and Confidentiality %G eng %0 Book Section %B Tutorials on the Foundations of Cryptography %D 2017 %T The Complexity of Differential Privacy %A Salil Vadhan %X

Version History: 

August 2016: Manuscript v1 (see files attached)

March 2017: Manuscript v2 (see files attached); Errata

April 2017: Published Version (in Tutorials on the Foundations of Cryptography; see above)

Differential privacy is a theoretical framework for ensuring the privacy of individual-level data when performing statistical analysis of privacy-sensitive datasets. This tutorial provides an introduction to and overview of differential privacy, with the goal of conveying its deep connections to a variety of other topics in computational complexity, cryptography, and theoretical computer science at large. This tutorial is written in celebration of Oded Goldreich’s 60th birthday, starting from notes taken during a minicourse given by the author and Kunal Talwar at the 26th McGill Invitational Workshop on Computational Complexity [1].

%B Tutorials on the Foundations of Cryptography %I Springer, Yehuda Lindell, ed. %P 347-450 %G eng %U https://link.springer.com/chapter/10.1007/978-3-319-57048-8_7 %0 Journal Article %J Research Data Alliance (RDA) 8th Plenary on Privacy Implications of Research Data Sets, during International Data Week 2016 %D 2017 %T The DataTags System: Sharing Sensitive Data with Confidence %A Merce Crosas %B Research Data Alliance (RDA) 8th Plenary on Privacy Implications of Research Data Sets, during International Data Week 2016 %G eng %0 Conference Paper %B Fairness and Transparency in Machine Learning Conference (FATML) %D 2017 %T Decoupled Classifiers for Fair and Efficient Machine Learning %A Cynthia Dwork %A Nicole Immorlica %A Adam Kalai %A Max Leiserson %X

When it is ethical and legal to use a sensitive attribute (such as gender or race) in machine learning systems, the question remains how to do so. We show that the naive application of machine learning algorithms using sensitive features leads to an inherent tradeoff in accuracy between groups. We provide a simple and efficient decoupling technique, that can be added on top of any black-box machine learning algorithm, to learn different classifiers for different groups. Transfer learning is used to mitigate the problem of having too little data on any one group. 

The method can apply to a range of fairness criteria. In particular, we require the application designer to specify a joint loss function that makes explicit the trade-off between fairness and accuracy. Our reduction is shown to efficiently find the minimum loss as long as the objective has a certain natural monotonicity property, which may be of independent interest in the study of fairness in algorithms.
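
A minimal sketch of the decoupling idea described above, assuming scikit-learn-style estimators with fit/predict methods; it trains one black-box model per group and omits the paper's transfer-learning step and joint-loss minimization (the class and method names here are illustrative):

class DecoupledClassifier:
    """Fit one black-box classifier per value of a sensitive attribute."""

    def __init__(self, base_learner_factory):
        self.factory = base_learner_factory   # e.g. lambda: LogisticRegression()
        self.models = {}

    def fit(self, X, y, group):
        for g in set(group):
            idx = [i for i, gi in enumerate(group) if gi == g]
            model = self.factory()
            model.fit([X[i] for i in idx], [y[i] for i in idx])
            self.models[g] = model
        return self

    def predict(self, x, g):
        # Route each example to the classifier trained on its own group.
        return self.models[g].predict([x])[0]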

%B Fairness and Transparency in Machine Learning Conference (FATML) %G eng %0 Journal Article %J In Proceedings of the 34th International Conference on Machine Learning (ICML 2017) %D 2017 %T Differentially Private Submodular Maximization: Data Summarization in Disguise %A Marko Mitrovic %A Mark Bun %A Andreas Krause %A Amin Karbasi %X Many data summarization applications are captured by the general framework of submodular maximization. As a consequence, a wide range of efficient approximation algorithms have been developed. However, when such applications involve sensitive data about individuals, their privacy concerns are not automatically addressed. To remedy this problem, we propose a general and systematic study of differentially private submodular maximization. We present privacy-preserving algorithms for both monotone and non-monotone submodular maximization under cardinality, matroid, and p-extendible system constraints, with guarantees that are competitive with optimal. Along the way, we analyze a new algorithm for non-monotone submodular maximization, which is the first (even non-privately) to achieve a constant approximation ratio while running in linear time. We additionally provide two concrete experiments to validate the efficacy of these algorithms. %B In Proceedings of the 34th International Conference on Machine Learning (ICML 2017) %G eng %0 Journal Article %J Annual Review of Statistics and Its Application (2017) %D 2017 %T Exposed! A Survey of Attacks on Private Data %A Cynthia Dwork %A Smith, Adam %A Thomas Steinke %A Jonathan Ullman %X Privacy-preserving statistical data analysis addresses the general question of protecting privacy when publicly releasing information about a sensitive dataset. A privacy attack takes seemingly innocuous released information and uses it to discern the private details of individuals, thus demonstrating that such information compromises privacy. For example, re-identification attacks have shown that it is easy to link supposedly de-identified records to the identity of the individual concerned. This survey focuses on attacking aggregate data, such as statistics about how many individuals have a certain disease, genetic trait, or combination thereof. We consider two types of attacks: reconstruction attacks, which approximately determine a sensitive feature of all the individuals covered by the dataset, and tracing attacks, which determine whether or not a target individual's data are included in the dataset.Wealso discuss techniques from the differential privacy literature for releasing approximate aggregate statistics while provably thwarting any privacy attack. %B Annual Review of Statistics and Its Application (2017) %G eng %0 Journal Article %J National Academies of Sciences, Engineering, and Medicine paper %D 2017 %T Innovations in Federal Statistics: Combining Data Sources While Protecting Privacy %A Robert M Groves %A Michael E Chernew %A Piet Daas %A Cynthia Dwork %A Ophir Frieder %A Hosagrahar V Jagadish %A Frauke Kreuter %A Sharon Lohr %A James P Lynch %A Colm O'Muircheartaigh %A Trivellore Raghunathan %A Rigobon, Roberto %A Marc Rotenberg %X Federal government statistics provide critical information to the country and serve a key role in a democracy. For decades, sample surveys with instruments carefully designed for particular data needs have been one of the primary methods for collecting data for federal statistics. 
However, the costs of conducting such surveys have been increasing while response rates have been declining, and many surveys are not able to fulfill growing demands for more timely information and for more detailed information at state and local levels. %B National Academies of Sciences, Engineering, and Medicine paper %G eng %0 Journal Article %J Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) %D 2017 %T Make Up Your Mind: The Price of Online Queries in Differential Privacy. %A Mark Bun %A Thomas Steinke %A Jonathan Ullman %X

We consider the problem of answering queries about a sensitive dataset subject to differential privacy. The queries may be chosen adversarially from a larger set Q of allowable queries in one of three ways, which we list in order from easiest to hardest to answer:

• Offline: The queries are chosen all at once and the differentially private mechanism answers the queries in a single batch.

• Online: The queries are chosen all at once, but the mechanism only receives the queries in a streaming fashion and must answer each query before seeing the next query.

• Adaptive: The queries are chosen one at a time and the mechanism must answer each query before the next query is chosen. In particular, each query may depend on the answers given to previous queries.

Many differentially private mechanisms are just as efficient in the adaptive model as they are in the offline model. Meanwhile, most lower bounds for differential privacy hold in the offline setting. This suggests that the three models may be equivalent. We prove that these models are all, in fact, distinct. Specifically, we show that there is a family of statistical queries such that exponentially more queries from this family can be answered in the offline model than in the online model. We also exhibit a family of search queries such that exponentially more queries from this family can be answered in the online model than in the adaptive model. We also investigate whether such separations might hold for simple queries like threshold queries over the real line.
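
To make the adaptive setting concrete, here is a minimal sketch (not one of the mechanisms analyzed in the paper) of answering adaptively chosen counting queries with the Laplace mechanism, splitting an overall budget across k queries by basic composition; the class name and interface are illustrative.

import numpy as np

class AdaptiveLaplaceAnswerer:
    """Answer up to k adaptively chosen counting queries on n records,
    each with the Laplace mechanism at budget epsilon_total / k."""

    def __init__(self, data, epsilon_total, k, rng=None):
        self.data = list(data)
        self.eps_per_query = epsilon_total / k
        self.remaining = k
        self.rng = rng or np.random.default_rng()

    def answer(self, predicate):
        if self.remaining == 0:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= 1
        n = len(self.data)
        true_frac = sum(1 for x in self.data if predicate(x)) / n   # in [0, 1], sensitivity 1/n
        return true_frac + self.rng.laplace(scale=1.0 / (n * self.eps_per_query))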

%B Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) %G eng %U https://arxiv.org/abs/1604.04618 %0 Journal Article %J Proceedings of The 30th Conference on Learning Theory Conference (COLT 2017) %D 2017 %T The Price of Selection in Differential Privacy %A Mitali Bafna %A Jonathan Ullman %B Proceedings of The 30th Conference on Learning Theory Conference (COLT 2017) %G eng %U https://arxiv.org/abs/1702.02970 %0 Journal Article %J in the ACM SIGMOD/PODS Conference (PODS 2017) %D 2017 %T Private Incremental Regression %A Kasiviswanathan, Shiva %A Kobbi Nissim %A Hongxia Jin %B in the ACM SIGMOD/PODS Conference (PODS 2017) %G eng %0 Conference Paper %B European Social Policy Analysis network (ESPAnet). Israel %D 2017 %T Public Policy Modeling using the DataTags Toolset %A Bar-Sinai, Michael %A Rotem Medzini %X We apply Tags, a framework for modeling data handling policies, to a welfare policy. The generated model is useful for assessing entitlements of specific cases, and for gaining insights into the modeled policy as a whole. %B European Social Policy Analysis network (ESPAnet). Israel %G eng %0 Journal Article %J in Technology Science %D 2017 %T Reidentification Risks in HIPAA Safe Harbor Data: A study of data from one environmental health study %A Sweeney, Latanya %A Ji Su Yoo %A Laura Perovich %A Katherine E Boronow %A Brown, Phil %A Julia Green Brody %B in Technology Science %G eng %0 Journal Article %J in the IEEE Secure Development Conference (IEEE 2017) %D 2017 %T Securing Dataverse with an Adapted Command Design Pattern %A Gustavo Durand %A Bar-Sinai, Michael %A Merce Crosas %B in the IEEE Secure Development Conference (IEEE 2017) %G eng %0 Journal Article %J in the 58th Annual Symposium on Foundations of Computer Science (FOCS 2017) %D 2017 %T Tight Lower Bounds for Differentially Private Selection %A Thomas Steinke %A Jonathan Ullman %B in the 58th Annual Symposium on Foundations of Computer Science (FOCS 2017) %G eng %U https://arxiv.org/abs/1704.03024 %0 Journal Article %J in Technology Science %D 2017 %T Voter Identity Theft: Submitting Changes to Voter Registrations Online to Disrupt Elections %A Sweeney, Latanya %A Ji Su Yoo %A Jinyan Zang %B in Technology Science %G eng %0 Journal Article %J in the 23rd SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2017) %D 2017 %T What's Fair? %A Cynthia Dwork %B in the 23rd SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2017) %G eng %0 Journal Article %J Harvard Law Review %D 2016 %T Recoding Privacy Law: Reflections on the Future Relationship Among Law, Technology, and Privacy %A Gasser, Urs %B Harvard Law Review %8 12 Dec, 2016 %G eng %U http://harvardlawreview.org/2016/12/recoding-privacy-law-reflections-on-the-future-relationship-among-law-technology-and-privacy/ %0 Journal Article %J Conference on Internet and Economics, WINE %D 2016 %T Computer-Aided Verification in Mechanism Design %A G. Barthe %A M. Gaboardi %A E. J. Gallego Arias %A J. Hsu %A A. Roth %A P.-Y. Strub %B Conference on Internet and Economics, WINE %8 Dec 2016 %G eng %0 Journal Article %J 23rd ACM Conference on Computer and Communications Security, CCS %D 2016 %T Advanced Probabilistic Couplings for Differential Privacy %A G. Barthe %A N. Fong %A M. Gaboardi %A B. Gregoire %A J. Hsu %A P.-Y. 
Strub %B 23rd ACM Conference on Computer and Communications Security, CCS %8 Oct 2016 %G eng %0 Journal Article %J 23rd ACM Conference on Computer and Communications Security %D 2016 %T Generic Attacks on Secure Outsourced Databases %A Georgios Kellaris %A George Kollios %A Kobbi Nissim %A Adam O'Neill %X

Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with “full-fledged” security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and communication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources. We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about \(N^4\) queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way. Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.

%B 23rd ACM Conference on Computer and Communications Security %8 Oct 2016 %G eng %0 Conference Paper %B Security and Privacy Workshops (SPW), 2016 IEEE %D 2016 %T DataTags, Data Handling Policy Spaces and the Tags Language %A Bar-Sinai, Michael %A Sweeney, Latanya %A Merce Crosas %X

Widespread sharing of scientific datasets holds great promise for new scientific discoveries and great risks for personal privacy. Dataset handling policies play the critical role of balancing privacy risks and scientific value. We propose an extensible, formal, theoretical model for dataset handling policies. We define binary operators for policy composition and for comparing policy strictness, such that propositions like "this policy is stricter than that policy" can be formally phrased. Using this model, policies are described in a machine-executable and human-readable way. We further present the Tags programming language and toolset, created especially for working with the proposed model. Tags allows composing interactive, friendly questionnaires which, when given a dataset, can suggest a data handling policy that follows legal and technical guidelines. Currently, creating such a policy is a manual process requiring access to legal and technical experts, who are not always available. We present some of Tags' tools, such as interview systems, visualizers, a development environment, and questionnaire inspectors. Finally, we discuss methodologies for questionnaire development. Data for this paper include a questionnaire for suggesting a HIPAA-compliant data handling policy, and a formal description of the set of data tags proposed by the authors in a recent paper.
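
A toy illustration (not the Tags language itself) of the kind of partial order the abstract describes: a policy assigns each handling dimension a level on an ordered scale, composition takes the pointwise-strictest level, and strictness comparison is pointwise; the dimension and level names below are invented for the example.

# Toy sketch: each dimension maps to an index into an ordered scale (0 = most open).
LEVELS = {"storage": ["clear", "encrypted", "double_encrypted"],
          "transit": ["clear", "encrypted"]}

def compose(p, q):
    """Pointwise-strictest composition of two policies."""
    return {d: max(p[d], q[d]) for d in LEVELS}

def stricter_or_equal(p, q):
    """True iff p is at least as strict as q in every dimension."""
    return all(p[d] >= q[d] for d in LEVELS)

p1 = {"storage": 2, "transit": 1}
p2 = {"storage": 1, "transit": 1}
assert compose(p1, p2) == {"storage": 2, "transit": 1}
assert stricter_or_equal(p1, p2) and not stricter_or_equal(p2, p1)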

%B Security and Privacy Workshops (SPW), 2016 IEEE %I IEEE %C San Jose, California %8 22-26 May %G eng %U http://ieeexplore.ieee.org/document/7527746/?reload=true&arnumber=7527746 %0 Conference Paper %B Theory and Practice of Differential Privacy %D 2016 %T PSI (Ψ): a Private data Sharing Interface %A Marco Gaboardi %A James Honaker %A Gary King %A Kobbi Nissim %A Jonathan Ullman %A Salil Vadhan %A Jack Murtagh %X

We provide an overview of PSI (“a Private data Sharing Interface”), a system we are developing to enable researchers in the social sciences and other fields to share and explore privacy-sensitive datasets with the strong privacy protections of differential privacy.

Poster presented at Theory and Practice of Differential Privacy (TPDP 2016).

%B Theory and Practice of Differential Privacy %C New York, NY %8 2016 %G eng %U https://arxiv.org/abs/1609.04340 %0 Journal Article %J ACM Transactions on Economics and Computation (TEAC) %D 2016 %T Truthful Mechanisms for Agents that Value Privacy %A Yiling Chen %A Stephen Chong %A Ian Kash %A Tal Moran %A Salil Vadhan %X

Recent work has constructed economic mechanisms that are both truthful and differentially private. In these mechanisms, privacy is treated separately from truthfulness; it is not incorporated in players’ utility functions (and doing so has been shown to lead to nontruthfulness in some cases). In this work, we propose a new, general way of modeling privacy in players’ utility functions. Specifically, we only assume that if an outcome o has the property that any report of player i would have led to o with approximately the same probability, then o has a small privacy cost to player i. We give three mechanisms that are truthful with respect to our modeling of privacy: for an election between two candidates, for a discrete version of the facility location problem, and for a general social choice problem with discrete utilities (via a VCG-like mechanism). As the number n of players increases, the social welfare achieved by our mechanisms approaches optimal (as a fraction of n).

%B ACM Transactions on Economics and Computation (TEAC) %V 4 %G eng %U http://dl.acm.org/citation.cfm?doid=2905047.2892555 %N 3 %0 Journal Article %J Annals of Statistics %D 2016 %T Asymptotic and Finite-Sample Properties of Estimators based on Stochastic Gradients %A Toulis Panagiotis %A Edoardo Airoldi %X

Stochastic gradient descent procedures have gained popularity for parameter estimation from large data sets. However, their statistical properties are not well understood in theory, and in practice avoiding numerical instability requires careful tuning of key parameters. Here, we introduce implicit stochastic gradient descent procedures, which involve parameter updates that are implicitly defined. Intuitively, implicit updates shrink standard stochastic gradient descent updates. The amount of shrinkage depends on the observed Fisher information matrix, which does not need to be explicitly computed; thus, implicit procedures increase stability without increasing the computational burden. Our theoretical analysis provides the first full characterization of the asymptotic behavior of both standard and implicit stochastic gradient descent-based estimators, including finite-sample error bounds. Importantly, analytical expressions for the variances of these stochastic gradient-based estimators reveal their exact loss of efficiency. We also develop new algorithms to compute implicit stochastic gradient descent-based estimators in practice for generalized linear models, Cox proportional hazards models, and M-estimators, and perform extensive experiments. Our results suggest that implicit stochastic gradient descent procedures are poised to become a workhorse for approximate inference from large data sets.
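
Schematically, for a generic log-likelihood \(\log f(y_n; x_n, \theta)\) and learning rates \(\gamma_n\), the two update rules discussed above differ only in where the new iterate appears (this is the generic form, not tied to a particular model in the paper):

\[
\theta_n^{\mathrm{sgd}} = \theta_{n-1}^{\mathrm{sgd}} + \gamma_n \nabla_\theta \log f\bigl(y_n; x_n, \theta_{n-1}^{\mathrm{sgd}}\bigr),
\qquad
\theta_n^{\mathrm{im}} = \theta_{n-1}^{\mathrm{im}} + \gamma_n \nabla_\theta \log f\bigl(y_n; x_n, \theta_n^{\mathrm{im}}\bigr),
\]

where the implicit update is defined implicitly because \(\theta_n^{\mathrm{im}}\) appears on both sides.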

%B Annals of Statistics %G eng %U https://arxiv.org/abs/1408.2923 %0 Journal Article %J NIPS 2015 Workshop on Learning and Privacy with Incomplete Data and Weak Supervision %D 2016 %T Private Posterior Distributions from Variational Approximations %A Vishesh Karwa %A Dan Kifer %A Aleksandra Slavkovic %X

Privacy-preserving mechanisms such as differential privacy inject additional randomness in the form of noise in the data, beyond the sampling mechanism. Ignoring this additional noise can lead to inaccurate and invalid inferences. In this paper, we incorporate the privacy mechanism explicitly into the likelihood function by treating the original data as missing, with an end goal of estimating posterior distributions over model parameters. This leads to a principled way of performing valid statistical inference using private data; however, the corresponding likelihoods are intractable. In this paper, we derive fast and accurate variational approximations to tackle such intractable likelihoods that arise due to privacy. We focus on estimating posterior distributions of parameters of the naive Bayes log-linear model, where the sufficient statistics of this model are shared using a differentially private interface. Using a simulation study, we show that the posterior approximations outperform the naive method of ignoring the noise addition mechanism.
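
Schematically, if the sufficient statistics \(t(x)\) are released through a Laplace interface as \(\tilde{t} = t(x) + \mathrm{Lap}(\Delta/\epsilon)\), the idea described above is to base inference on the marginal likelihood that sums over the unobserved statistics (a generic form, not the paper's exact notation):

\[
p(\tilde{t} \mid \theta) \;=\; \sum_{t} p_{\mathrm{Lap}}\bigl(\tilde{t} - t\bigr)\, p(t \mid \theta),
\qquad
p(\theta \mid \tilde{t}) \;\propto\; p(\theta)\, p(\tilde{t} \mid \theta),
\]

which is the intractable posterior that the variational approximation targets.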

%B NIPS 2015 Workshop on Learning and Privacy with Incomplete Data and Weak Supervision %G eng %U https://arxiv.org/abs/1511.07896 %0 Journal Article %J The Ethics of Biomedical Big Data %D 2016 %T Strictly Biomedical? Sketching the Ethics of the Big Data Ecosystem in Biomedicine %A Effy Vayena %A Gasser, Urs %X

In today’s ever evolving data ecosystem it is evident that data generated for a wide range of purposes unrelated to biomedicine possess tremendous potential value for biomedical research. Analyses of our Google searches, social media content, loyalty card points and the like are used to draw a fairly accurate picture of our health, our future health, our attitudes towards vaccination, disease outbreaks within a county and epidemic trajectories in other continents. These data sets are different from traditional biomedical data, if a biomedical purpose is the categorical variable. Yet the results their analyses yield are of serious biomedical relevance. This paper discusses important but unresolved challenges within typical biomedical data, and it explores examples of non-biomedical Big Data with high biomedical value, including the specific conundrums these engender, especially when we apply biomedical data concepts to them. It also highlights the “digital phenotype” project, illustrating the Big Data ecosystem in action and an approach believed as likely to yield biomedical and health knowledge. We argue that to address the challenges and make full use of the opportunities that Big Data offers to biomedicine, a new ethical framework taking a data ecosystem approach is urgently needed. We conclude by discussing key components, design requirements and substantive normative elements of such a framework.

%B The Ethics of Biomedical Big Data %G eng %U http://link.springer.com/chapter/10.1007/978-3-319-33525-4_2 %0 Journal Article %J Conference on Learning Theory (COLT) %D 2016 %T Adaptive Learning with Robust Generalization Guarantees %A Rachel Cummings %A Katrina Ligett %A Kobbi Nissim %A Aaron Roth %A Zhiwei Steven Wu %X

The traditional notion of generalization---i.e., learning a hypothesis whose empirical error is close to its true error---is surprisingly brittle. As has recently been noted in [DFH+15b], even if several algorithms have this guarantee in isolation, the guarantee need not hold if the algorithms are composed adaptively. In this paper, we study three notions of generalization---increasing in strength---that are robust to postprocessing and amenable to adaptive composition, and examine the relationships between them. We call the weakest such notion Robust Generalization. A second, intermediate, notion is the stability guarantee known as differential privacy. The strongest guarantee we consider we call Perfect Generalization. We prove that every hypothesis class that is PAC learnable is also PAC learnable in a robustly generalizing fashion, with almost the same sample complexity. It was previously known that differentially private algorithms satisfy robust generalization. In this paper, we show that robust generalization is a strictly weaker concept, and that there is a learning task that can be carried out subject to robust generalization guarantees, yet cannot be carried out subject to differential privacy. We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.

%B Conference on Learning Theory (COLT) %G eng %U https://arxiv.org/abs/1602.07726 %0 Journal Article %J 48th Annual Symposium on the Theory of Computing %D 2016 %T Algorithmic Stability for Adaptive Data Analysis %A Raef Bassily %A Kobbi Nissim %A Smith, Adam %A Thomas Steinke %A Uri Stemmer %A Jonathan Ullman %X

Adaptivity is an important feature of data analysis---the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated the formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set of n independent samples x is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen queries about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions: (i) We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. (STOC, 2015), and have been applied in subsequent work by those authors (Science, 2015, NIPS, 2015). (ii) We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries. As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that stability implies low generalization error. We also study weaker stability guarantees such as bounded KL divergence and total variation distance.

%B 48th Annual Symposium on the Theory of Computing %G eng %U https://arxiv.org/abs/1511.02513v1 %0 Journal Article %J PLoS Med %D 2016 %T Between Openness and Privacy in Genomics. %A Effy Vayena %A Gasser, Urs %X


%B PLoS Med %V 13 %G eng %U http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001937 %N 1 %0 Journal Article %J 14th Theory of Cryptography Conference %D 2016 %T Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds %A Mark Bun %A Thomas Steinke %X

"Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy."

%B 14th Theory of Cryptography Conference %G eng %U https://arxiv.org/abs/1605.02065 %0 Journal Article %J 23rd ACM Conference on Computer and Communications Security, CCS %D 2016 %T Differentially Private Bayesian Programming %A G. Barthe %A P.­Y. Strub %A J. Hsu %A A. D. Gordon %A E. J. Gallego Arias %A M. Gaboardi %A G. P. Farina %B 23rd ACM Conference on Computer and Communications Security, CCS %G eng %0 Journal Article %J Proceedings of The 33rd International Conference on Machine Learning, PMLR %D 2016 %T Differentially Private Chi-Squared Hypothesis Testing: Goodness of Fit and Independence Testing %A Marco Gaboardi %A Hyun woo Lim %A Ryan Rogers %A Salil Vadhan %X

Hypothesis testing is a useful statistical tool in determining whether a given model should be rejected based on a sample from the population. Sample data may contain sensitive information about individuals, such as medical information. Thus it is important to design statistical tests that guarantee the privacy of subjects in the data. In this work, we study hypothesis testing subject to differential privacy, specifically chi-squared tests for goodness of fit for multinomial data and independence between two categorical variables.
We propose new tests for goodness of fit and independence testing that like the classical versions can be used to determine whether a given model should be rejected or not, and that additionally can ensure differential privacy. We give both Monte Carlo based hypothesis tests as well as hypothesis tests that more closely follow the classical chi-squared goodness of fit test and the Pearson chi-squared test for independence. Crucially, our tests account for the distribution of the noise that is injected to ensure privacy in determining significance.
We show that these tests can be used to achieve desired significance levels, in sharp contrast to direct applications of classical tests to differentially private contingency tables which can result in wildly varying significance levels. Moreover, we study the statistical power of these tests. We empirically show that to achieve the same level of power as the classical non-private tests our new tests need only a relatively modest increase in sample size.
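In the same spirit as, but much simpler than, the Monte Carlo tests described above, the hedged Python sketch below runs a goodness-of-fit test on Laplace-noised cell counts and calibrates its critical value against a null distribution that includes the privacy noise. The sensitivity convention, the assumption that the total count is public, and the example numbers are illustrative choices, not the paper's exact tests.

```python
# Hedged sketch: Monte Carlo goodness-of-fit test on Laplace-noised cell counts,
# with the null distribution simulated *including* the privacy noise.
import numpy as np

rng = np.random.default_rng(0)

def chisq_stat(noisy_counts, expected):
    return float(np.sum((noisy_counts - expected) ** 2 / expected))

def dp_gof_test(counts, p0, epsilon, n_sim=20000):
    """Assumes the total n is public and that one individual's record can change
    two cells (substitution neighbors), so the L1 sensitivity of the counts is 2."""
    counts = np.asarray(counts, dtype=float)
    p0 = np.asarray(p0, dtype=float)
    n, d = counts.sum(), len(counts)
    scale = 2.0 / epsilon
    expected = n * p0

    noisy = counts + rng.laplace(0.0, scale, size=d)
    observed_stat = chisq_stat(noisy, expected)

    # Null distribution: multinomial sampling under H0 plus the same privacy noise.
    sims = rng.multinomial(int(n), p0, size=n_sim) + rng.laplace(0.0, scale, size=(n_sim, d))
    null_stats = np.sum((sims - expected) ** 2 / expected, axis=1)
    p_value = float(np.mean(null_stats >= observed_stat))
    return noisy, observed_stat, p_value

noisy, stat, p = dp_gof_test([48, 52, 95, 105], [0.15, 0.15, 0.35, 0.35], epsilon=0.5)
print(f"noisy counts={np.round(noisy, 1)}, statistic={stat:.2f}, p-value={p:.3f}")
```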

%B Proceedings of The 33rd International Conference on Machine Learning, PMLR %G eng %U http://proceedings.mlr.press/v48/rogers16.html %0 Journal Article %J Washington and Lee Law Review %D 2016 %T Elements of a New Ethical Framework for Big Data Research %A Effy Vayena %A Gasser, Urs %A Alexandra Wood %A David R. O'Brien %A Micah Altman %X

Emerging large-scale data sources hold tremendous potential for new scientific research into human biology, behaviors, and relationships. At the same time, big data research presents privacy and ethical challenges that the current regulatory framework is ill-suited to address. In light of the immense value of large-scale research data, the central question moving forward is not whether such data should be made available for research, but rather how the benefits can be captured in a way that respects fundamental principles of ethics and privacy.

In response, this Essay outlines elements of a new ethical framework for big data research. It argues that oversight should aim to provide universal coverage of human subjects research, regardless of funding source, across all stages of the information lifecycle. New definitions and standards should be developed based on a modern understanding of privacy science and the expectations of research subjects. In addition, researchers and review boards should be encouraged to incorporate systematic risk-benefit assessments and new procedural and technological solutions from the wide range of interventions that are available. Finally, oversight mechanisms and the safeguards implemented should be tailored to the intended uses, benefits, threats, harms, and vulnerabilities associated with a specific research activity.

Development of a new ethical framework with these elements should be the product of a dynamic multistakeholder process that is designed to capture the latest scientific understanding of privacy, analytical methods, available safeguards, community and social norms, and best practices for research ethics as they evolve over time. Such a framework would support big data utilization and help harness the value of big data in a sustainable and trust-building manner.

%B Washington and Lee Law Review %V 72 %8 31 Mar, 2016 %G eng %U http://lawreview.journals.wlu.io/elements-of-a-new-ethical-framework-for-big-data-research/ %N 3 %0 Journal Article %J Annals of Statistics %D 2016 %T Inference Using Noisy Degrees: Differentially Private Beta-Model and Synthetic Graphs %A Vishesh Karwa %A Aleksandra Slavković %X

The β-model of random graphs is an exponential family model with the degree sequence as a sufficient statistic. In this paper, we contribute three key results. First, we characterize conditions that lead to a quadratic time algorithm to check for the existence of MLE of the β-model, and show that the MLE never exists for the degree partition β-model. Second, motivated by privacy problems with network data, we derive a differentially private estimator of the parameters of β-model, and show it is consistent and asymptotically normally distributed - it achieves the same rate of convergence as the nonprivate estimator. We present an efficient algorithm for the private estimator that can be used to release synthetic graphs. Our techniques can also be used to release degree distributions and degree partitions accurately and privately, and to perform inference from noisy degrees arising from contexts other than privacy. We evaluate the proposed estimator on real graphs and compare it with a current algorithm for releasing degree distributions and find that it does significantly better. Finally, our paper addresses shortcomings of current approaches to a fundamental problem of how to perform valid statistical inference from data released by privacy mechanisms, and lays a foundational groundwork on how to achieve optimal and private statistical inference in a principled manner by modeling the privacy mechanism; these principles should be applicable to a class of models beyond the β-model.
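As a hedged sketch of only the first step (releasing noisy degrees; the paper's contribution lies in the downstream denoising and β-model estimation), the snippet below adds Laplace noise calibrated to the edge-level sensitivity of the degree sequence. The graph model and parameter values are illustrative, and the snippet assumes the third-party networkx library.

```python
# Illustrative sketch: noisy degree sequence under edge-level differential privacy.
# Adding or removing one edge changes two degrees by 1, so the L1 sensitivity of
# the degree sequence is 2 and Laplace(2/epsilon) noise per coordinate suffices.
import numpy as np
import networkx as nx  # assumed available for the toy graph

rng = np.random.default_rng(0)

def noisy_degrees(graph, epsilon):
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    return degrees + rng.laplace(0.0, 2.0 / epsilon, size=len(degrees))

G = nx.erdos_renyi_graph(200, 0.05, seed=1)
print(np.round(noisy_degrees(G, epsilon=1.0)[:10], 1))
```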

%B Annals of Statistics %V 44 %P 87-112 %G eng %N 1 %0 Journal Article %J Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS) %D 2016 %T Lipschitz Extensions for Node­Private Graph Statistics and the Generalized Exponential Mechanism %A Raskhodnikova, Sofya %A Smith, Adam %B Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS) %G eng %0 Conference Paper %B PODS 2016 %D 2016 %T Locating a Small Cluster Privately %A Kobbi Nissim %A Uri Stemmer %A Salil Vadhan %X

We present a new algorithm for locating a small cluster of points with differential privacy [Dwork, McSherry, Nissim,and Smith, 2006]. Our algorithm has implications to private data exploration, clustering, and removal of outliers. Furthermore, we use it to significantly relax the requirements of the sample and aggregate technique [Nissim, Raskhodnikova,and Smith, 2007], which allows compiling of "off the shelf" (non-private) analyses into analyses that preserve differential privacy.

%B PODS 2016 %C ACM SIGMOD/PODS Conference, San Francisco, USA, 2016 %8 June 26 2016 %G eng %0 Conference Paper %B Proceedings of the 12th Theory of Cryptography Conference (TCC 2016) %D 2016 %T Order revealing encryption and the hardness of private learning %A Mark Bun %A Mark Zhandry %X

An order-revealing encryption scheme gives a public procedure by which two ciphertexts can be compared to reveal the ordering of their underlying plaintexts. We show how to use order-revealing encryption to separate computationally efficient PAC learning from efficient (ϵ,δ)-differentially private PAC learning. That is, we construct a concept class that is efficiently PAC learnable, but for which every efficient learner fails to be differentially private. This answers a question of Kasiviswanathan et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest.

%B Proceedings of the 12th Theory of Cryptography Conference (TCC 2016) %C Tel-Aviv, Israel %8 1/10-13/2016 %G eng %0 Unpublished Work %D 2016 %T Practical Approaches to Big Data Privacy Over Time %A Micah Altman %A Alexandra Wood %A David R. O'Brien %A Gasser, Urs %X

Increasingly, governments and businesses are collecting, analyzing, and sharing detailed information about individuals over long periods of time. Vast quantities of data from new sources and novel methods for large-scale data analysis promise to yield deeper understanding of human characteristics, behavior, and relationships and advance the state of science, public policy, and innovation. At the same time, the collection and use of fine-grained personal data over time is associated with significant risks to individuals, groups, and society at large. In this article, we examine a range of long-term data collections, conducted by researchers in social science, in order to identify the characteristics of these programs that drive their unique sets of risks and benefits. We also examine the practices that have been established by social scientists to protect the privacy of data subjects in light of the challenges presented in long-term studies. We argue that many uses of big data, across academic, government, and industry settings, have characteristics similar to those of traditional long-term research studies. In this article, we discuss the lessons that can be learned from longstanding data management practices in research and potentially applied in the context of newly emerging data sources and uses.

%B Brussels Privacy Symposium %C Brussels, Belgium %8 2016 %G eng %0 Journal Article %J Neural Information Processing Systems (NIPS) %D 2016 %T Privacy Odometers and Filters: Pay-as-you-Go Composition %A Ryan Rogers %A Aaron Roth %A Jonathan Ullman %A Salil Vadhan %X

In this paper we initiate the study of adaptive composition in differential privacy when the length of the composition, and the privacy parameters themselves can be chosen adaptively, as a function of the outcome of previously run analyses. This case is much more delicate than the setting covered by existing composition theorems, in which the algorithms themselves can be chosen adaptively, but the privacy parameters must be fixed up front. Indeed, it isn't even clear how to define differential privacy in the adaptive parameter setting. We proceed by defining two objects which cover the two main use cases of composition theorems. A privacy filter is a stopping time rule that allows an analyst to halt a computation before his pre-specified privacy budget is exceeded. A privacy odometer allows the analyst to track realized privacy loss as he goes, without needing to pre-specify a privacy budget. We show that unlike the case in which privacy parameters are fixed, in the adaptive parameter setting, these two use cases are distinct. We show that there exist privacy filters with bounds comparable (up to constants) with existing privacy composition theorems. We also give a privacy odometer that nearly matches non-adaptive private composition theorems, but is sometimes worse by a small asymptotic factor. Moreover, we show that this is inherent, and that any valid privacy odometer in the adaptive parameter setting must lose this factor, which shows a formal separation between the filter and odometer use-cases.
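As a toy illustration of the filter use case only, the hedged sketch below tracks adaptively chosen (ϵ, δ) requests under basic composition and halts before a pre-specified budget would be exceeded. The paper's filters achieve bounds comparable to advanced composition; this simplified version does not attempt that.

```python
# Hedged sketch: a "privacy filter" in the simplest possible sense, using basic
# composition (sum the epsilons and deltas) rather than the paper's sharper bounds.
class BasicPrivacyFilter:
    def __init__(self, eps_budget, delta_budget):
        self.eps_budget = eps_budget
        self.delta_budget = delta_budget
        self.eps_spent = 0.0
        self.delta_spent = 0.0

    def request(self, eps, delta):
        """Return True (and record the spend) if the adaptively chosen (eps, delta)
        still fits within the budget; otherwise refuse, i.e. HALT."""
        if (self.eps_spent + eps > self.eps_budget
                or self.delta_spent + delta > self.delta_budget):
            return False
        self.eps_spent += eps
        self.delta_spent += delta
        return True

f = BasicPrivacyFilter(eps_budget=1.0, delta_budget=1e-6)
print(f.request(0.4, 0.0), f.request(0.4, 0.0), f.request(0.4, 0.0))  # True True False
```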

%B Neural Information Processing Systems (NIPS) %8 2016 %G eng %0 Journal Article %J 43rd International Colloquium on Automata, Languages, and Programming %D 2016 %T Sensitivity of Counting Queries %A Myrto Arapinis %A Diego Figueira %A Marco Gaboardi %X

In the context of statistical databases, the release of accurate statistical information about the collected data often puts at risk the privacy of the individual contributors. The goal of differential privacy is to maximize the utility of a query while protecting the individual records in the database. A natural way to achieve differential privacy is to add statistical noise to the result of the query. In this context, a mechanism for releasing statistical information is thus a trade-off between utility and privacy. In order to balance these two "conflicting" requirements, privacy preserving mechanisms calibrate the added noise to the so-called sensitivity of the query, and thus a precise estimate of the sensitivity of the query is necessary to determine the amplitude of the noise to be added. In this paper, we initiate a systematic study of sensitivity of counting queries over relational databases. We first observe that the sensitivity of a Relational Algebra query with counting is not computable in general, and that while the sensitivity of Conjunctive Queries with counting is computable, it becomes unbounded as soon as the query includes a join. We then consider restricted classes of databases (databases with constraints), and study the problem of computing the sensitivity of a query given such constraints. We are able to establish bounds on the sensitivity of counting conjunctive queries over constrained databases. The kind of constraints studied here are: functional dependencies and cardinality dependencies. The latter is a natural generalization of functional dependencies that allows us to provide tight bounds on the sensitivity of counting conjunctive queries.
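To make the calibration described above concrete, here is a hedged Python sketch of the standard Laplace mechanism, with noise scaled to the query's global sensitivity. The example queries and the cardinality-constraint bound of 20 are illustrative assumptions, not results from the paper.

```python
# Illustrative sketch: Laplace mechanism with noise calibrated to global sensitivity.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    # Adding Laplace(sensitivity / epsilon) noise yields epsilon-differential privacy.
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)

# A counting query such as "number of rows with age >= 65" has sensitivity 1:
print(laplace_mechanism(true_answer=412, sensitivity=1, epsilon=0.1))

# A counting query over a join may only have bounded sensitivity under a
# cardinality constraint, e.g. "each patient has at most 20 visits":
print(laplace_mechanism(true_answer=5831, sensitivity=20, epsilon=0.1))
```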

%B 43rd International Colloquium on Automata, Languages, and Programming %G eng %0 Journal Article %J Proceedings of the 14th Theory of Cryptography Conference (TCC 2016-B) %D 2016 %T Separating Computational and Statistical Differential Privacy in the Client-Server Model %A Mark Bun %A Yi Hsiu Chen %A Salil Vadhan %X

Differential privacy is a mathematical definition of privacy for statistical data analysis. It guarantees that any (possibly adversarial) data analyst is unable to learn too much information that is specific to an individual. Mironov et al. (CRYPTO 2009) proposed several computational relaxations of differential privacy (CDP), which relax this guarantee to hold only against computationally bounded adversaries. Their work and subsequent work showed that CDP can yield substantial accuracy improvements in various multiparty privacy problems. However, these works left open whether such improvements are possible in the traditional client-server model of data analysis. In fact, Groce, Katz and Yerukhimovich (TCC 2011) showed that, in this setting, it is impossible to take advantage of CDP for many natural statistical tasks. Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness indistinguishable proofs (zaps) for NP, there is in fact a computational task in the client-server model that can be efficiently performed with CDP, but is infeasible to perform with information-theoretic differential privacy.

%B Proceedings of the 14th Theory of Cryptography Conference (TCC 2016-B) %8 Aug 2016 %G eng %0 Journal Article %J Berkeley Technology Law Journal %D 2016 %T Towards a Modern Approach to Privacy-Aware Government Data Releases %A Micah Altman %A Alexandra Wood %A O'Brien, David %A Salil Vadhan %A Gasser, Urs %X

This article summarizes research exploring various models by which governments release data to the public and the interventions in place to protect the privacy of individuals in the data. Applying concepts from the recent scientific and legal literature on privacy, the authors propose a framework for a modern privacy analysis and illustrate how governments can use the framework to select appropriate privacy controls that are calibrated to the specific benefits and risks in individual data releases.

%B Berkeley Technology Law Journal %V 30 %G eng %N 3 %0 Thesis %B Computer Science Department at Harvard University %D 2016 %T Upper and Lower Bounds for Privacy and Adaptivity in Algorithmic Data Analysis %A Thomas Steinke %B Computer Science Department at Harvard University %G eng %9 PhD Thesis %0 Journal Article %J Proceedings of the 32nd International Conference on Machine Learning %D 2015 %T Consistent Estimation of Dynamic and Multi-Layer Block Models %A Qiuyi Han %A Kevin Xu %A Edoardo Airoldi %X

Significant progress has been made recently on theoretical analysis of estimators for the stochastic block model (SBM). In this paper, we consider the multi-graph SBM, which serves as a foundation for many application settings including dynamic and multi-layer networks. We explore the asymptotic properties of two estimators for the multi-graph SBM, namely spectral clustering and the maximum-likelihood estimate (MLE), as the number of layers of the multi-graph increases. We derive sufficient conditions for consistency of both estimators and propose a variational approximation to the MLE that is computationally feasible for large networks. We verify the sufficient conditions via simulation and demonstrate that they are practical. In addition, we apply the model to two real data sets: a dynamic social network and a multi-layer social network with several types of relations.

%B Proceedings of the 32nd International Conference on Machine Learning %G eng %U https://arxiv.org/pdf/1410.8597v3.pdf %0 Journal Article %J Encyclopedia of Algorithms %D 2015 %T Differentially Private Analysis of Graphs %A Sofya Raskhodnikova and Adam Smith %B Encyclopedia of Algorithms %G eng %0 Journal Article %J ZSR %D 2015 %T Perspectives on the Future of Digital Privacy %A Gasser, Urs %B ZSR %P 339-448 %G eng %0 Generic %D 2015 %T All the Data on All the People %A Sweeney, Latanya %B UC Berkeley Law School & GWU Law School (Berkeley Center for Law & Technology). The Privacy Law Scholars Conference (PLSC). Berkeley, CA. %G eng %0 Journal Article %J The ANNALS of the American Academy of Political and Social Science %D 2015 %T Automating Open Science for Big Data %A Mercè Crosas %A Gary King %A James Honaker %A Sweeney, Latanya %X

The vast majority of social science research uses small (megabyte- or gigabyte-scale) datasets. These fixed-scale datasets are commonly downloaded to the researcher’s computer where the analysis is performed. The data can be shared, archived, and cited with well-established technologies, such as the Dataverse Project, to support the published results. The trend toward big data—including large-scale streaming data—is starting to transform research and has the potential to impact policymaking as well as our understanding of the social, economic, and political problems that affect human societies. However, big data research poses new challenges to the execution of the analysis, archiving and reuse of the data, and reproduction of the results. Downloading these datasets to a researcher’s computer is impractical, leading to analyses taking place in the cloud, and requiring unusual expertise, collaboration, and tool development. The increased amount of information in these large datasets is an advantage, but at the same time it poses an increased risk of revealing personally identifiable sensitive information. In this article, we discuss solutions to these new challenges so that the social sciences can realize the potential of big data.

%B The ANNALS of the American Academy of Political and Social Science %V 659 %P 260-273 %G eng %U http://ann.sagepub.com/content/659/1/260.abstract %N 1 %0 Generic %D 2015 %T Between Pure and Approximate Differential Privacy %A Thomas Steinke %A Jon Ullman %B Theory and Practice of Differential Privacy (TPDP 2015), London, UK %G eng %U http://tpdp.computing.dundee.ac.uk/abstracts/TPDP_2015_3.pdf %0 Journal Article %J Proceedings of the 28th IEEE Computer Security Foundations Symposium (CSF) %D 2015 %T Cryptographic Enforcement of Language-Based Erasure %A A Askarov %A Moore, S. %A C Dimoulas %A S Chong %X

Information erasure is a formal security requirement that stipulates when sensitive data must be removed from computer systems. In a system that correctly enforces erasure requirements, an attacker who observes the system after sensitive data is required to have been erased cannot deduce anything about the data. Practical obstacles to enforcing information erasure include: (1) correctly determining which data requires erasure; and (2) reliably deleting potentially large volumes of data, despite untrustworthy storage services.

In this paper, we present a novel formalization of language- based information erasure that supports cryptographic enforcement of erasure requirements: sensitive data is encrypted be- fore storage, and upon erasure, only a relatively small set of decryption keys needs to be deleted. This cryptographic technique has been used by a number of systems that implement data deletion to allow the use of untrustworthy storage services. However, these systems provide no support to correctly determine which data requires erasure, nor have the formal semantic properties of these systems been explained or proven to hold. We address these shortcomings. Specifically, we study a programming language extended with primitives for public- key cryptography, and demonstrate how information-flow control mechanisms can automatically track data that requires erasure and provably enforce erasure requirements even when programs employ cryptographic techniques for erasure. 

%B Proceedings of the 28th IEEE Computer Security Foundations Symposium (CSF) %G eng %0 Generic %D 2015 %T Data and Privacy %A Robert Faris %A David R. O’Brien %X

This essay first appeared in the Internet Monitor project’s second annual report, Internet Monitor 2014: Reflections on the Digital World. The report, published by the Berkman Center for Internet & Society, is a collection of roughly three dozen short contributions that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.

%B Internet Monitor 2014: Data and Privacy %G eng %U https://medium.com/internet-monitor-2014-data-and-privacy/data-and-privacy-f7bfa24bbddc %0 Report %D 2015 %T The Differential Privacy of Bayesian Inference %A Shijie Zheng %X

Differential privacy is one recent framework for analyzing and quantifying the amount of privacy lost when data is released. Meanwhile, multiple imputation is an existing Bayesian-inference based technique from statistics that learns a model using real data, then releases synthetic data by drawing from that model. Because multiple imputation does not directly release any real data, it is generally believed to protect privacy.

In this thesis, we examine that claim. While there exist newer synthetic data algorithms specifically designed to provide differential privacy, we evaluate whether multiple imputation already includes differential privacy for free. Thus, we focus on several method variants for releasing the learned model and releasing the synthetic data, and how these methods perform for models taking on two common distributions: the Bernoulli and the Gaussian with known variance. 

We prove a number of new or improved bounds on the amount of privacy afforded by multiple imputation for these distributions. We find that while differential privacy is ostensibly achievable for most of our method variants, the conditions needed for it to do so are often not realistic for practical usage. At least in theory, this is particularly true if we want absolute privacy (ε-differential privacy), but that the methods are more practically compatible with privacy when we allow a small probability of a catastrophic data leakage ((ε, δ)-differential privacy).
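As a hedged illustration of the release scheme analyzed in the thesis, restricted to the Bernoulli case, the sketch below fits a Beta posterior on the confidential bits and releases synthetic data drawn from the fitted model. The prior, sizes, and parameter values are illustrative, and no differential privacy guarantee is claimed for this snippet.

```python
# Illustrative sketch of model-based synthetic data release (multiple-imputation
# style): learn a model from the real data, then release draws from that model.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_release(private_bits, m_synthetic, a0=1.0, b0=1.0):
    k, n = int(np.sum(private_bits)), len(private_bits)
    theta = rng.beta(a0 + k, b0 + n - k)             # draw the parameter from its posterior
    return rng.binomial(1, theta, size=m_synthetic)  # synthetic data drawn from the model

private = rng.binomial(1, 0.3, size=500)
print(synthetic_release(private, m_synthetic=500)[:20])
```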
%I Bachelor's thesis, Harvard College %G eng %U http://dash.harvard.edu/handle/1/14398533 %9 Computer Science and Mathematics %0 Conference Paper %B 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 15) %D 2015 %T Differentially Private Release and Learning of Threshold Functions %A Mark Bun %A Kobbi Nissim %A Uri Stemmer %A Salil Vadhan %X

We prove new upper and lower bounds on the sample complexity of (ϵ,δ) differentially private algorithms for releasing approximate answers to threshold functions. A threshold function c_x over a totally ordered domain X evaluates to c_x(y) = 1 if y ≤ x, and evaluates to 0 otherwise. We give the first nontrivial lower bound for releasing thresholds with (ϵ,δ) differential privacy, showing that the task is impossible over an infinite domain X, and moreover requires sample complexity n ≥ Ω(log*|X|), which grows with the size of the domain. Inspired by the techniques used to prove this lower bound, we give an algorithm for releasing thresholds with n ≤ 2^((1+o(1)) log*|X|) samples. This improves the previous best upper bound of 8^((1+o(1)) log*|X|) (Beimel et al., RANDOM '13).
Our sample complexity upper and lower bounds also apply to the tasks of learning distributions with respect to Kolmogorov distance and of properly PAC learning thresholds with differential privacy. The lower bound gives the first separation between the sample complexity of properly learning a concept class with (ϵ,δ) differential privacy and learning without privacy. For properly learning thresholds in ℓ dimensions, this lower bound extends to n ≥ Ω(ℓ · log*|X|).
To obtain our results, we give reductions in both directions from releasing and properly learning thresholds and the simpler interior point problem. Given a database D of elements from X, the interior point problem asks for an element between the smallest and largest elements in D. We introduce new recursive constructions for bounding the sample complexity of the interior point problem, as well as further reductions and techniques for proving impossibility results for other basic problems in differential privacy.
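A small numeric aside on the 2^((1+o(1)) log*|X|) upper bound quoted above: the iterated logarithm grows extremely slowly, as the hedged sketch below, which ignores the o(1) term, illustrates for a few domain sizes.

```python
# Illustrative only: how slowly log*|X| (and hence 2^{log*|X|}) grows with |X|.
import math

def log_star(x, base=2.0):
    """Iterated logarithm: number of times log must be applied before the value is <= 1."""
    count = 0
    while x > 1:
        x = math.log(x, base)
        count += 1
    return count

for bits in (8, 32, 64, 128):
    domain = 2 ** bits  # |X| for a totally ordered domain of `bits`-bit values
    ls = log_star(domain)
    print(f"|X| = 2^{bits:3d}:  log*|X| = {ls},  2^(log*|X|) = {2 ** ls}")
```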

%B 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 15) %C Berkeley, California %G eng %U http://arxiv.org/abs/1504.07553 %0 Generic %D 2015 %T Efficient Use of Differentially Private Binary Trees %A James Honaker %B Theory and Practice of Differential Privacy (TPDP 2015), London, UK %G eng %0 Conference Paper %B Future of Privacy Forum Workshop: Beyond IRBs: Designing Ethical Review Processes for Big Data %D 2015 %T Elements of a new Ethical Framework for Big Data Research %A Effy Vayena %A Gasser, Urs %A Alexandra Wood %A David R. O'Brien %A Micah Altman %X

Emerging large-scale data sources hold tremendous potential for new scientific research into human biology, behaviors, and relationships. At the same time, big data research presents privacy and ethical challenges that the current regulatory framework is ill-suited to address. In light of the immense value of large-scale research data, the central question moving forward is not whether such data should be made available for research, but rather how the benefits can be captured in a way that respects fundamental principles of ethics and privacy. 

%B Future of Privacy Forum Workshop: Beyond IRBs: Designing Ethical Review Processes for Big Data %C Washington D.C. %8 Dec 2015 %G eng %0 Conference Paper %B AAI Conference on Artificial Intelligence %D 2015 %T Fair Information Sharing for Treasure Hunting. %A Chen, Y. %A K. Nissim %A B. Waggoner %X

In a search task, a group of agents compete to be the first to find the solution. Each agent has different private information to incorporate into its search. This problem is inspired by settings such as scientific research, Bitcoin hash inversion, or hunting for some buried treasure. A social planner such as a funding agency, mining pool, or pirate captain might like to convince the agents to collaborate, share their information, and greatly reduce the cost of searching. However, this cooperation is in tension with the individuals' competitive desire to each be the first to win the search. The planner's proposal should incentivize truthful information sharing, reduce the total cost of searching, and satisfy fairness properties that preserve the spirit of the competition. We design contract-based mechanisms for information sharing without money. The planner solicits the agents' information and assigns search locations to the agents, who may then search only within their assignments. Truthful reporting of information to the mechanism maximizes an agent's chance to win the search. Epsilon-voluntary participation is satisfied for large search spaces. In order to formalize the planner's goals of fairness and reduced search cost, we propose a simplified, simulated game as a benchmark and quantify fairness and search cost relative to this benchmark scenario. The game is also used to implement our mechanisms. Finally, we extend to the case where coalitions of agents may participate in the mechanism, forming larger coalitions recursively.

%B AAI Conference on Artificial Intelligence %I Association for the Advancement of Artificial Intelligence (AAAI) %C North America %8 February %G eng %U http://people.seas.harvard.edu/~bwaggoner/papers/2015/fair-information-sharing-for-treasure-hunting--2015--chen,nissim,waggoner.pdf %0 Conference Proceedings %D 2015 %T On the Generalization Properties of Differential Privacy %A Kobbi Nissim %A Uri Stemmer %X

A new line of work, started with Dwork et al., studies the task of answering statistical queries using a sample and relates the problem to the concept of differential privacy. By the Hoeffding bound, a sample of size O(log k / α²) suffices to answer k non-adaptive queries within error α, where the answers are computed by evaluating the statistical queries on the sample. This argument fails when the queries are chosen adaptively (and can hence depend on the sample). Dwork et al. showed that if the answers are computed with (ϵ,δ)-differential privacy then O(ϵ) accuracy is guaranteed with probability 1 − O(δ^ϵ). Using the Private Multiplicative Weights mechanism, they concluded that the sample size can still grow polylogarithmically with k.
Very recently, Bassily et al. presented an improved bound and showed that (a variant of) the private multiplicative weights algorithm can answer k adaptively chosen statistical queries using sample complexity that grows logarithmically in k. However, their results no longer hold for every differentially private algorithm, and require modifying the private multiplicative weights algorithm in order to obtain their high probability bounds.
We greatly simplify the results of Dwork et al. and improve on the bound by showing that differential privacy guarantees O(ϵ) accuracy with probability 1 − O(δ·log(1/ϵ)/ϵ). It would be tempting to guess that an (ϵ,δ)-differentially private computation should guarantee O(ϵ) accuracy with probability 1 − O(δ). However, we show that this is not the case, and that our bound is tight (up to logarithmic factors).
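A quick worked check of the non-adaptive O(log k / α²) bound quoted above, using Hoeffding's inequality plus a union bound; the failure probability β below is an added assumption for the sake of a concrete number.

```python
# Non-adaptive case only: by Hoeffding, P(|empirical - true| > alpha) <= 2*exp(-2*n*alpha^2)
# per statistical query; a union bound over k fixed queries gives n = O(log k / alpha^2).
import math

def nonadaptive_sample_size(k, alpha, beta=0.05):
    # Solve k * 2 * exp(-2 * n * alpha^2) <= beta for n.
    return math.ceil(math.log(2 * k / beta) / (2 * alpha ** 2))

for k in (10, 1_000, 1_000_000):
    print(f"k = {k:>9}: n >= {nonadaptive_sample_size(k, alpha=0.05)}")
```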

%G eng %U http://arxiv.org/abs/1504.05800 %0 Journal Article %J The 22nd ACM Conference on Computer and Communications Security %D 2015 %T Grecs: Graph Encryption for Approximate Shortest Distance Queries %A Xianrui Meng %A Seny Kamara %A Kobbi Nissim %A George Kollios %X

We propose graph encryption schemes that efficiently support approximate shortest distance queries on large-scale encrypted graphs. Shortest distance queries are one of the most fundamental graph operations and have a wide range of applications. Using such graph encryption schemes, a client can outsource large-scale privacy-sensitive graphs to an untrusted server without losing the ability to query it. Other applications include encrypted graph databases and controlled disclosure systems. We propose GRECS (stands for GRaph EnCryption for approximate Shortest distance queries) which includes three schemes that are provably secure against any semi-honest server. Our first construction makes use of only symmetric-key operations, resulting in a computationally-efficient construction. Our second scheme, makes use of somewhat-homomorphic encryption and is less computationally-efficient but achieves optimal communication complexity (i.e., uses a minimal amount of bandwidth). Finally, our third scheme is both computationally-efficient and achieves optimal communication complexity at the cost of a small amount of additional leakage. We implemented and evaluated the efficiency of our constructions experimentally. The experiments demonstrate that our schemes are efficient and can be applied to graphs that scale up to 1.6 million nodes and 11 million edges.

%B The 22nd ACM Conference on Computer and Communications Security %G eng %U https://eprint.iacr.org/2015/266 %0 Journal Article %J International Colloquium on Automata, Languages, and Programming (ICALP 2015) BG %D 2015 %T Hardness Amplification and the Approximate Degree of Constant-Depth Circuits %A Mark Bun %A Justin Thaler %X

We establish a generic form of hardness amplification for the approximability of constant-depth Boolean circuits by polynomials. Specifically, we show that if a Boolean circuit cannot be pointwise approximated by low-degree polynomials to within constant error in a certain one-sided sense, then an OR of disjoint copies of that circuit cannot be pointwise approximated even with very high error. As our main application, we show that for every sequence of degrees d(n), there is an explicit depth-three circuit F : {−1,1}^n → {−1,1} of polynomial size such that any degree-d polynomial cannot pointwise approximate F to error better than 1 − exp(−Ω~(n·d^(−3/2))). As a consequence of our main result, we obtain an exp(−Ω~(n^(2/5))) upper bound on the discrepancy of a function in AC0, and an exp(Ω~(n^(2/5))) lower bound on the threshold weight of AC0, improving over the previous best results of exp(−Ω(n^(1/3))) and exp(Ω(n^(1/3))) respectively. Our techniques also yield a new lower bound of Ω(n^(1/2) / log^((d−2)/2)(n)) on the approximate degree of the AND-OR tree of depth d, which is tight up to polylogarithmic factors for any constant d, as well as new bounds for read-once DNF formulas. In turn, these results imply new lower bounds on the communication and circuit complexity of these classes, and demonstrate strong limitations on existing PAC learning algorithms.

%B International Colloquium on Automata, Languages, and Programming (ICALP 2015) BG %G eng %U http://arxiv.org/abs/1311.1616 %0 Generic %D 2015 %T In the Age of the Web, What Does “Public” Mean? %A Rob Faris %A O'Brien, David %B Internet Monitor 2014: Data and Privacy %G eng %U https://medium.com/internet-monitor-2014-data-and-privacy/in-the-age-of-the-web-what-does-public-mean-ee74df403174 %0 Journal Article %J Social Science Research Network %D 2015 %T Integrating Approaches to Privacy Across the Research Lifecycle: When is Information Purely Public? %A O'Brien, David %A Jonathan Ullman %A Micah Altman %A Gasser, Urs %A Bar-Sinai, Michael %A Kobbi Nissim %A Salil Vadhan %A Michael Wojcik %A Alexandra Wood %X

On September 24-25, 2013, the Privacy Tools for Sharing Research Data project at Harvard University held a workshop titled "Integrating Approaches to Privacy across the Research Data Lifecycle." Over forty leading experts in computer science, statistics, law, policy, and social science research convened to discuss the state of the art in data privacy research. The resulting conversations centered on the emerging tools and approaches from the participants’ various disciplines and how they should be integrated in the context of real-world use cases that involve the management of confidential research data.

Researchers are increasingly obtaining data from social networking websites, publicly-placed sensors, government records and other public sources. Much of this information appears public, at least to first impressions, and it is capable of being used in research for a wide variety of purposes with seemingly minimal legal restrictions. The insights about human behaviors we may gain from research that uses this data are promising. However, members of the research community are questioning the ethics of these practices, and at the heart of the matter are some difficult questions about the boundaries between public and private information. This workshop report, the second in a series, identifies selected questions and explores issues around the meaning of “public” in the context of using data about individuals for research purposes.

%B Social Science Research Network %G eng %U http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2586158 %0 Journal Article %J JMLR: Workshop and Conference Proceedings %D 2015 %T Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery %A Thomas Steinke %A Jonathan Ullman %X

We show an essentially tight bound on the number of adaptively chosen statistical queries that a computationally efficient algorithm can answer accurately given n samples from an unknown distribution. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is accurate if it is “close” to the correct expectation over the distribution. This question was recently studied by Dwork et al. (2015), who showed how to answer Ω~(n²) queries efficiently, and also by Hardt and Ullman (2014), who showed that answering O~(n³) queries is hard. We close the gap between the two bounds and show that, under a standard hardness assumption, there is no computationally efficient algorithm that, given n samples from an unknown distribution, can give valid answers to O(n²) adaptively chosen statistical queries. An implication of our results is that computationally efficient algorithms for answering arbitrary, adaptively chosen statistical queries may as well be differentially private. We obtain our results using a new connection between the problem of answering adaptively chosen statistical queries and a combinatorial object called an interactive fingerprinting code (Fiat and Tassa, 2001). In order to optimize our hardness result, we give a new Fourier-analytic approach to analyzing fingerprinting codes that is simpler, more flexible, and yields better parameters than previous constructions.

%B JMLR: Workshop and Conference Proceedings %V 40 %P 1-41 %G eng %U http://jmlr.org/proceedings/papers/v40/Steinke15.pdf %N 201 %0 Journal Article %J Accepted for publication, SODA 2015 %D 2015 %T Learning Privately with Labeled and Unlabeled Examples %A A. Beimel %A K. Nissim %A U. Stemmer %X

A private learner is an algorithm that given a sample of labeled individual examples outputs a generalizing hypothesis while preserving the privacy of each individual. In 2008, Kasiviswanathan et al. (FOCS 2008) gave a generic construction of private learners, in which the sample complexity is (generally) higher than what is needed for non-private learners. This gap in the sample complexity was then further studied in several followup papers, showing that (at least in some cases) this gap is unavoidable. Moreover, those papers considered ways to overcome the gap, by relaxing either the privacy or the learning guarantees of the learner.
We suggest an alternative approach, inspired by the (non-private) models of semi-supervised learning and active-learning, where the focus is on the sample complexity of labeled examples whereas unlabeled examples are of a significantly lower cost. We consider private semi-supervised learners that operate on a random sample, where only a (hopefully small) portion of this sample is labeled. The learners have no control over which of the sample elements are labeled. Our main result is that the labeled sample complexity of private learners is characterized by the VC dimension.
We present two generic constructions of private semi-supervised learners. The first construction is of learners where the labeled sample complexity is proportional to the VC dimension of the concept class, however, the unlabeled sample complexity of the algorithm is as big as the representation length of domain elements. Our second construction presents a new technique for decreasing the labeled sample complexity of a given private learner, while only slightly increasing its unlabeled sample complexity. In addition, we show that in some settings the labeled sample complexity does not depend on the privacy parameters of the learner.

%B Accepted for publication, SODA 2015 %G eng %U http://arxiv.org/abs/1407.2662 %0 Book Section %B Encyclopedia of Algorithms %D 2015 %T Mechanism Design and Differential Privacy %A Kobbi Nissim %A Xiao, David %B Encyclopedia of Algorithms %I Springer Berlin Heidelberg %C New York, NY %P 1-12 %G eng %U http://link.springer.com/referenceworkentry/10.1007/978-3-642-27848-8_548-1 %0 Journal Article %J Arxiv.org Computer Science, Computers and Society [Internet] %D 2015 %T An Open Science Platform for the Next Generation of Data %A Sweeney, Latanya %A Merce Crosas %X

Imagine an online work environment where researchers have direct and immediate access to myriad data sources and tools and data management resources, useful throughout the research lifecycle. This is our vision for the next generation of the Dataverse Network: an Open Science Platform (OSP). For the first time, researchers would be able to seamlessly access and create primary and derived data from a variety of sources: prior research results, public data sets, harvested online data, physical instruments, private data collections, and even data from other standalone repositories. Researchers could recruit research participants and conduct research directly on the OSP, if desired, using readily available tools. Researchers could create private or shared workspaces to house data, access tools, and computation and could publish data directly on the platform or publish elsewhere with persistent, data citations on the OSP. This manuscript describes the details of an Open Science Platform and its construction. Having an Open Science Platform will especially impact the rate of new scientific discoveries and make scientific findings more credible and accountable.

%B Arxiv.org Computer Science, Computers and Society [Internet] %G eng %U http://arxiv.org/abs/1506.05632 %0 Generic %D 2015 %T Privacy as a Sword and Shield in Public Health %A Sweeney, Latanya %B New York City Department of Public Health. New York, NY. %G eng %0 Generic %D 2015 %T Privacy Principles (framing talk) %A Micah Altman %B United Nations Global Pulse Workshop on ICT4D Principle 8: Address Privacy & Security In Development Programs. New York, USA %G eng %U https://www.slideshare.net/drmaltman/un-global-pulse-privacy-framing %0 Journal Article %D 2015 %T Private Approximations of the 2nd-Moment Matrix Using Existing Techniques in Linear Regression %A Or Sheffet %X

We introduce three differentially-private algorithms that approximate the 2nd-moment matrix of the data. These algorithms, which in contrast to existing algorithms output positive-definite matrices, correspond to existing techniques in the linear regression literature. Specifically, we discuss the following three techniques. (i) For Ridge Regression, we propose setting the regularization coefficient so that by approximating the solution using the Johnson-Lindenstrauss transform we preserve privacy. (ii) We show that adding a small batch of random samples to our data preserves differential privacy. (iii) We show that sampling the 2nd-moment matrix from a Bayesian posterior inverse-Wishart distribution is differentially private provided the prior is set correctly. We also evaluate our techniques experimentally and compare them to the existing "Analyze Gauss" algorithm of Dwork et al.

%G eng %U http://arxiv.org/abs/1507.00056 %0 Conference Paper %B IEEE Symposium on Foundations of Computer Science (FOCS 2015) %D 2015 %T Robust Traceability from Trace Amounts %A C. Dwork %A Smith, A %A T Steinke %A J Ullman %A S. Vadhan %X

The privacy risks inherent in the release of a large number of summary statistics were illustrated by Homer et al. (PLoS Genetics, 2008), who considered the case of 1-way marginals of SNP allele frequencies obtained in a genome-wide association study: Given a large number of minor allele frequencies from a case group of individuals diagnosed with a particular disease, together with the genomic data of a single target individual and statistics from a sizable reference dataset independently drawn from the same population, an attacker can determine with high confidence whether or not the target is in the case group. In this work we describe and analyze a simple attack that succeeds even if the summary statistics are significantly distorted, whether due to measurement error or noise intentionally introduced to protect privacy. Our attack only requires that the vector of distorted summary statistics is close to the vector of true marginals in ℓ1 norm. Moreover, the reference pool required by previous attacks can be replaced by a single sample drawn from the underlying population. The new attack, which is not specific to genomics and which handles Gaussian as well as Bernoulli data, significantly generalizes recent lower bounds on the noise needed to ensure differential privacy (Bun, Ullman, and Vadhan, STOC 2014; Steinke and Ullman, 2015), obviating the need for the attacker to control the exact distribution of the data.
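The hedged simulation below is in the spirit of the attack described above: it scores a target by the inner product of (target minus one fresh reference sample) with the distorted 1-way marginals, so that members of the case group receive systematically higher scores. The population model, noise level, threshold-free presentation, and parameters are illustrative assumptions rather than the paper's exact setting or analysis.

```python
# Illustrative simulation of a membership (tracing) score against noisy marginals.
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 5000, 100, 0.05              # attributes, case-group size, noise std (assumed)

p = rng.uniform(0.05, 0.95, size=d)        # population attribute frequencies
case = rng.binomial(1, p, size=(n, d))     # confidential case-group data
q = case.mean(axis=0) + rng.normal(0.0, sigma, size=d)   # distorted 1-way marginals

def score(y):
    z = rng.binomial(1, p)                 # a single fresh reference sample
    return float((y - z) @ q)

member_scores = [score(case[i]) for i in range(n)]
nonmember_scores = [score(rng.binomial(1, p)) for _ in range(n)]
# Members score higher on average; with more attributes the per-individual gap sharpens.
print("mean score, case-group member :", round(float(np.mean(member_scores)), 1))
print("mean score, non-member        :", round(float(np.mean(nonmember_scores)), 1))
```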

%B IEEE Symposium on Foundations of Computer Science (FOCS 2015) %C Berkeley, California %8 10/18-20/2015 %G eng %0 Journal Article %J Technology Science %D 2015 %T Sharing Sensitive Data with Confidence: The Datatags System %A Sweeney, Latanya %A Mercè Crosas %A Bar-Sinai, Michael %X

Society generates data on a scale previously unimagined. Wide sharing of these data promises to improve personal health, lower healthcare costs, and provide a better quality of life. There is a tendency to want to share data freely. However, these same data often include sensitive information about people that could cause serious harms if shared widely. A multitude of regulations, laws and best practices protect data that contain sensitive personal information. Government agencies, research labs, and corporations that share data, as well as review boards and privacy officers making data sharing decisions, are vigilant but uncertain. This uncertainty creates a tendency not to share data at all. Some data are more harmful than other data; sharing should not be an all-or-nothing choice. How do we share data in ways that ensure access is commensurate with risks of harm?

%B Technology Science %G eng %U http://techscience.org/a/2015101601/ %0 Conference Paper %D 2015 %T Simultaneous private learning of multiple concepts %A Mark Bun %A Kobbi Nissim %A Uri Stemmer %X

We investigate the direct-sum problem in the context of differentially private PAC learning: What is the sample complexity of solving k learning tasks simultaneously under differential privacy, and how does this cost compare to that of solving k learning tasks without privacy? In our setting, an individual example consists of a domain element x labeled by k unknown concepts (c1, …, ck). The goal of a multi-learner is to output k hypotheses (h1, …, hk) that generalize the input examples.
Without concern for privacy, the sample complexity needed to simultaneously learn k concepts is essentially the same as needed for learning a single concept. Under differential privacy, the basic strategy of learning each hypothesis independently yields sample complexity that grows polynomially with k. For some concept classes, we give multi-learners that require fewer samples than the basic strategy. Unfortunately, however, we also give lower bounds showing that even for very simple concept classes, the sample cost of private multi-learning must grow polynomially in k.

%G eng %0 Journal Article %J Report for the Computing Community Consortium (CCC) %D 2015 %T Towards a Privacy Research Roadmap for the Computing Community. %A L. Cranor %A T. Rabin %A V. Shmatikov %A S. Vadhan %A D. Weitzner %B Report for the Computing Community Consortium (CCC) %G eng %U http://www.cccblog.org/2015/05/11/ccc-community-report-for-a-national-privacy-research-strategy/ %0 Journal Article %J New England Law Review %D 2015 %T What Stays in Vegas: The Road to “Zero Privacy" %A David Abrams %B New England Law Review %V 49 %G eng %U http://newenglrev.com/current-issue-2/abrams-what-stays-in-vegas/ %N 4 %0 Government Document %D 2014 %T Consumer Privacy Bill of Rights and Big Data: Response to White House Office of Science and Technology Policy Request for Information %A Daniel J. Weitzner %A Hal Abelson %A Cynthia Dwork %A Cameron Kerry %A Daniela Rus %A Sandy Pentland %A Salil Vadhan %X

In response to the White House Office of Science and Technology Policy Request for Information on Big Data Privacy we offer these comments based on presentations and discussions at the White House-MIT Workshop “Big Data Privacy Workshop: Advancing the State of the Art in Technology and Practice” and subsequent workshops co-sponsored with Data & Society and NYU Information Law Institute and the UC Berkeley iSchool.

%G eng %0 Government Document %D 2014 %T Comment to The White House Office of Science and Technology Policy (OSTP): Big Data Study, Request for Information %A Micah Altman %A David O’Brien %A Salil Vadhan %A Alexandra Wood %X
On January 23, 2014, President Barack Obama asked John Podesta to perform a comprehensive review of big data and privacy. During this review, the White House Office of Science and Technology Policy issued a request for public comment on questions related to the public policy implications of big data.
 
 
Micah Altman, David O’Brien, Salil Vadhan, and Alexandra Wood submitted a response on behalf of the Privacy Tools for Sharing Research Data project. Their comments outline a broad, comprehensive, and systematic framework for privacy analysis and provide a taxonomy of modern technological, statistical, and cryptographic approaches to preserving both data privacy and utility. They argue that an analysis of information privacy should address the scope of information covered, the sensitivity of that information, the risk that sensitive information will be disclosed, the availability of control and accountability mechanisms, and the suitability of existing data sharing models, as applied across the entire lifecyle of information use, from collection through dissemination and reuse.
 
 
With this submission, the authors discuss the inadequacy of traditional approaches to privacy protection and recommend a modern approach that considers three principles. First, the risks of informational harm are generally not a simple function of the presence or absence of specific fields, attributes, or keywords in the released set of data. Second, redaction, pseudonymization, coarsening, and hashing are often neither an adequate nor an appropriate practice, nor is releasing less information necessarily more privacy protective. Third, a thoughtful analysis with expert consultation is necessary in order to evaluate the sensitivity of the data collected, to quantify the associated re-identification risks, and to design useful and safe release mechanisms.
%G eng %0 Government Document %D 2014 %T Comment to the Federal Trade Commission on Mobile Device Tracking %A Micah Altman %G eng %0 Government Document %D 2014 %T Comment on the Occupational Safety and Health Administration (OSHA) Proposed Rule: Improve Tracking of Workplace Injuries and Illnesses; Extension of Comment Period %A Micah Altman %A David O’Brien %A Salil Vadhan %A Alexandra Wood %G eng %U http://www.regulations.gov/#%21documentDetail;D=OSHA-2013-0023-1207 %0 Conference Paper %B Proceedings of the 27th {IEEE} Computer Security Foundations Symposium %D 2014 %T Declarative Policies for Capability Control %A Christos Dimoulas %A Scott Moore %A Aslan Askarov %A Stephen Chong %X

In capability-safe languages, components can access a resource only if they possess a capability for that resource. As a result, a programmer can prevent an untrusted component from accessing a sensitive resource by ensuring that the component never acquires the corresponding capability. In order to reason about which components may use a sensitive resource it is necessary to reason about how capabilities propagate through a system. This may be difficult, or, in the case of dynamically composed code, impossible to do before running the system.

To counter this situation, we propose extensions to capability-safe languages that restrict the use of capabilities according to declarative policies. We introduce two independently useful semantic security policies to regulate capabilities and describe language-based mechanisms that enforce them. Access control policies restrict which components may use a capability and are enforced using higher-order contracts. Integrity policies restrict which components may influence (directly or indirectly) the use of a capability and are enforced using an information-flow type system. Finally, we describe how programmers can dynamically and soundly combine components that enforce access control or integrity policies with components that enforce different policies or even no policy at all.

%B Proceedings of the 27th {IEEE} Computer Security Foundations Symposium %I IEEE Press %C Piscataway, NJ, USA %G eng %0 Conference Paper %B Proceedings of the 5th Conference on Innovations in Theoretical Computer Science %D 2014 %T Faster Private Release of Marginals on Small Databases %A Karthekeyan Chandrasekaran %A Justin Thaler %A Jonathan Ullman %A Andrew Wan %K approximation theory %K differential privacy %K marginal queries %B Proceedings of the 5th Conference on Innovations in Theoretical Computer Science %S ITCS '14 %I ACM %C New York, NY, USA %P 387–402 %@ 978-1-4503-2698-8 %G eng %U http://doi.acm.org/10.1145/2554797.2554833 %R 10.1145/2554797.2554833 %0 Conference Paper %B SIAM Journal on Computing Special Issue on STOC `14. %D 2014 %T Fingerprinting Codes and the Price of Approximate Differential Privacy %A Mark Bun %A Jonathan Ullman %A Salil Vadhan %K differential privacy %K fingerprinting codes %B SIAM Journal on Computing Special Issue on STOC `14. %S STOC '14 %P 1–10 %@ 978-1-4503-2710-7 %G eng %R 10.1145/2591796.2591877 %0 Online Database %D 2014 %T In the Age of the Web, What Does ''Public' Mean? %A O'Brien, David %B Internet Monitor 2014: Reflections on the Digital World, Pp. 87-89 %G eng %U http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2538813. %N Berkman Center Research Publication 2014-17 %0 Report %D 2014 %T Integrating Approaches to Privacy Across the Research Lifecycle: Long-Term Longitudinal Studies %A Alexandra Wood %A O'Brien, David %A Micah Altman %A Karr, Alan %A Gasser, Urs %A Bar-Sinai, Michael %A Kobbi Nissim %A Jonathan Ullman %A Salil Vadhan %A Wojcik, Michael John, %X

On September 24-25, 2013, the Privacy Tools for Sharing Research Data project at Harvard University held a workshop titled "Integrating Approaches to Privacy across the Research Data Lifecycle." Over forty leading experts in computer science, statistics, law, policy, and social science research convened to discuss the state of the art in data privacy research. The resulting conversations centered on the emerging tools and approaches from the participants’ various disciplines and how they should be integrated in the context of real-world use cases that involve the management of confidential research data.

This workshop report, the first in a series, provides an overview of the long-term longitudinal study use case. Long-term longitudinal studies collect, at multiple points over a long period of time, highly-specific and often sensitive data describing the health, socioeconomic, or behavioral characteristics of human subjects. The value of such studies lies in part in their ability to link a set of behaviors and changes to each individual, but these factors tend to make the combination of observable characteristics associated with each subject unique and potentially identifiable.

Using the research information lifecycle as a framework, this report discusses the defining features of long-term longitudinal studies and the associated challenges for researchers tasked with collecting and analyzing such data while protecting the privacy of human subjects. It also describes the disclosure risks and common legal and technical approaches currently used to manage confidentiality in longitudinal data. Finally, it identifies urgent problems and areas for future research to advance the integration of various methods for preserving confidentiality in research data.

%B Social Science Research Network %I Harvard University %C Cambridge %8 07/22 %G eng %U http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2469848## %0 Journal Article %D 2014 %T Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery %A Thomas Steinke %A Jonathan Ullman %X

We show a tight bound on the number of adaptively chosen statistical queries that a computationally efficient algorithm can answer accurately given n samples from an unknown distribution. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is accurate if it is "close" to the correct expectation over the distribution. This question was recently considered by Dwork et al., who showed that Ω̃(n²) queries can be answered efficiently, and also by Hardt and Ullman, who showed that answering Õ(n³) queries is computationally hard. We close the gap between the two bounds by proving a new, nearly-optimal hardness result. Specifically, we show that, under a standard hardness assumption, there is no computationally efficient algorithm that, given n samples from an unknown distribution, can give valid answers to O(n²) adaptively chosen statistical queries. An implication of our results is that computationally efficient algorithms for answering arbitrary, adaptively chosen statistical queries may as well be differentially private. We obtain our results via an optimal construction of a new combinatorial object that we call an interactive fingerprinting code, which may be of independent interest.
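
To ground the terminology, the sketch below shows what a single statistical query looks like and how the naive mechanism (answering with the empirical mean on the sample) behaves. It is a toy illustration of the model only and does not implement the paper's interactive fingerprinting codes; the distribution, predicate, and sample size are arbitrary choices for the example.

```python
# Toy illustration of the statistical-query model: a query asks for the
# expectation of a 0/1 predicate over an unknown distribution, and an answer is
# "accurate" if it is close to that expectation. Not the paper's construction.
import random

random.seed(0)

def population_expectation(predicate, draws=200_000):
    """Monte Carlo estimate of E[predicate(x)] under the unknown distribution."""
    return sum(predicate(random.gauss(0.0, 1.0)) for _ in range(draws)) / draws

def empirical_answer(predicate, sample):
    """Naive mechanism: answer with the empirical mean over the n samples."""
    return sum(predicate(x) for x in sample) / len(sample)

n = 1_000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]
query = lambda x: x > 0.5                  # an example predicate

truth = population_expectation(query)
answer = empirical_answer(query, sample)
print(f"population: {truth:.3f}  empirical: {answer:.3f}  error: {abs(truth - answer):.3f}")
```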

%G eng %U http://arxiv.org/abs/1410.1228 %0 Book %D 2014 %T Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse %A Gasser, Urs %A Jonathan Zittrain %A R Faris %A R.H Jones %X

This publication is the second annual report of the Internet Monitor project at the Berkman Center for Internet & Society at Harvard University. As with the inaugural report, this year’s edition is a collaborative effort of the extended Berkman community. Internet Monitor 2014: Reflections on the Digital World includes nearly three dozen contributions from friends and colleagues around the world that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.

The result, intended for a general interest audience, brings together reflection and analysis on a broad range of issues and regions — from an examination of Europe’s “right to be forgotten” to a review of the current state of mobile security to an exploration of a new wave of movements attempting to counter hate speech online — and offers it up for debate and discussion. Our goal remains not to provide a definitive assessment of the “state of the Internet” but rather to provide a rich compendium of commentary on the year’s developments with respect to the online space.

Last year’s report examined the dynamics of Internet controls and online activity through the actions of government, corporations, and civil society. We focus this year on the interplay between technological platforms and policy; growing tensions between protecting personal privacy and using big data for social good; the implications of digital communications tools for public discourse and collective action; and current debates around the future of Internet governance.

The report reflects the diversity of ideas and input the Internet Monitor project seeks to invite. Some of the contributions are descriptive; others prescriptive. Some contain purely factual observations; others offer personal opinion. In addition to those in traditional essay format, contributions this year include a speculative fiction story exploring what our increasingly data-driven world might bring, a selection of “visual thinking” illustrations that accompany a number of essays, a “Year in Review” timeline that highlights many of the year’s most fascinating Internet-related news stories (and an interactive version of which is available at the netmonitor.org), and a slightly tongue-in-cheek “By the Numbers” section that offers a look at the year’s important digital statistics. We believe that each contribution offers insights, and hope they provoke further reflection, conversation, and debate in both offline and online settings around the globe.

%7 17 %I Berkman Center Research Publication %C Cambridge, MA %G eng %U http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2538813 %0 Conference Paper %B Proceedings of the 5th Conference on Innovations in Theoretical Computer Science %D 2014 %T Mechanism Design in Large Games: Incentives and Privacy %A Michael Kearns %A Mallesh Pai %A Aaron Roth %A Jonathan Ullman %K differential privacy %K equilibrium selection %K game theory %K mechanism design %B Proceedings of the 5th Conference on Innovations in Theoretical Computer Science %S ITCS '14 %I ACM %C New York, NY, USA %P 403–410 %@ 978-1-4503-2698-8 %G eng %U http://doi.acm.org/10.1145/2554797.2554834 %R 10.1145/2554797.2554834 %0 Journal Article %J CoRR %D 2014 %T

An Anti-Folk Theorem for Large Repeated Games with Imperfect Monitoring

%A Mallesh M. Pai %A Aaron Roth %A Jonathan Ullman %B CoRR %V abs/1402.2801 %G eng %0 Conference Paper %B 10th Conference on Web and Internet Economics (WINE) %D 2014 %T Privacy Games %A Yiling Chen %A Or Sheffet %A Salil Vadhan %B 10th Conference on Web and Internet Economics (WINE) %C Beijing, China %8 December %G eng %0 Conference Paper %B Proceedings of the 2014 International Workshop on Privacy & Security in Programming (PSP '14) %D 2014 %T Privacy Integrated Data Stream Queries %A Lucas Waye %B Proceedings of the 2014 International Workshop on Privacy & Security in Programming (PSP '14) %I ACM %C New York, NY %G eng %U http://delivery.acm.org/10.1145/2690000/2687150/p19-waye.pdf?ip=65.112.10.216&id=2687150&acc=OPEN&key=4D4702B0C3E38B35.4D4702B0C3E38B35.4D4702B0C3E38B35.6D218144511F3437&CFID=469747432&CFTOKEN=41625734&__acm__=1420474755_45ce38ca730887129ca49b667ac6e0aa %0 Conference Paper %B ICML 2014 Workshop on Learning, Security and Privacy %D 2014 %T Private Empirical Risk Minimization, Revisited %A Raef Bassily %A Smith, Adam %A Abhradeep Thakurta %X

In this paper, we initiate a systematic investigation of differentially private algorithms for convex empirical risk minimization. Various instantiations of this problem have been studied before. We provide new algorithms and matching lower bounds for private ERM assuming only that each data point's contribution to the loss function is Lipschitz bounded and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. 

Our algorithms run in polynomial time, and in some cases even match the optimal non-private running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ϵ,0)- and (ϵ,δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different.

Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contribution of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
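
For orientation, the sketch below shows the general shape of gradient-perturbation approaches to private ERM for Lipschitz losses: bound each data point's contribution to the gradient, then add noise. It is not the paper's algorithm, and the noise scale is a placeholder rather than a calibrated (ϵ, δ) guarantee; all function and parameter names are ours.

```python
# Shape of a gradient-perturbation approach to private ERM (illustrative only;
# the noise below is NOT calibrated to any formal privacy guarantee).
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient_descent(X, y, steps=200, lr=0.1, clip=1.0, noise_scale=0.1):
    """Logistic-loss ERM with per-example gradient clipping plus Gaussian noise."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        per_example = (-y / (1.0 + np.exp(margins)))[:, None] * X   # per-example gradients
        scales = np.maximum(1.0, np.linalg.norm(per_example, axis=1) / clip)
        clipped = per_example / scales[:, None]                     # bound each point's influence
        grad = clipped.mean(axis=0) + rng.normal(0.0, noise_scale * clip / n, size=d)
        w -= lr * grad
    return w

# Toy data: labels in {-1, +1} generated from a ground-truth direction.
X = rng.normal(size=(500, 5))
w_true = np.ones(5) / np.sqrt(5)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=500))
w_hat = noisy_gradient_descent(X, y)
print("cosine with ground truth:", float(w_hat @ w_true / (np.linalg.norm(w_hat) + 1e-12)))
```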

%B ICML 2014 Workshop on Learning, Security and Privacy %C Beijing, China %8 25 Jun %G eng %U http://arxiv.org/abs/1405.7085 %0 Book Section %B Automata, Languages, and Programming %D 2014 %T Privately Solving Linear Programs %A Justin Hsu %A Aaron Roth %A Roughgarden, Tim %A Jonathan Ullman %A Esparza, Javier %A Fraigniaud, Pierre %A Husfeldt, Thore %A Koutsoupias, Elias %B Automata, Languages, and Programming %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %V 8572 %P 612-624 %@ 978-3-662-43947-0 %G eng %U http://dx.doi.org/10.1007/978-3-662-43948-7_51 %R 10.1007/978-3-662-43948-7_51 %0 Conference Paper %B Proceedings of the 5th Conference on Innovations in Theoretical Computer Science %D 2014 %T Redrawing the Boundaries on Purchasing Data from Privacy-sensitive Individuals %A Kobbi Nissim %A Salil Vadhan %A Xiao, David %K differential privacy %K mechanism design %B Proceedings of the 5th Conference on Innovations in Theoretical Computer Science %S ITCS '14 %I ACM %C New York, NY, USA %P 411–422 %@ 978-1-4503-2698-8 %G eng %U http://doi.acm.org/10.1145/2554797.2554835 %R 10.1145/2554797.2554835 %0 Conference Proceedings %B Proceedings of The 27th Conference on Learning Theory (COLT 2014) %D 2014 %T Sample Complexity Bounds on Differentially Private Learning via Communication Complexity %A Vitaly Feldman %A Xiao, David %X

In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy introduced by Dwork et al. (2006) that ensures that the output of an algorithm leaks little information about the data point provided by any of the participating individuals. Sample complexity of private PAC and agnostic learning was studied in a number of prior works starting with (Kasiviswanathan et al., 2008) but a number of basic questions still remain open (Beimel et al. 2010; Chaudhuri and Hsu, 2011; Beimel et al., 2013ab). 

Our main contribution is an equivalence between the sample complexity of differentially-private learning of a concept class C (or SCDP(C)) and the randomized one-way communication complexity of the evaluation problem for concepts from C. Using this equivalence we prove the following bounds:

We also obtain simpler proofs for a number of known related results. Our equivalence builds on a characterization of sample complexity by Beimel et al. (2013a), and our bounds rely on a number of known results from communication complexity.
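
Schematically, and only as a paraphrase of the equivalence described above (the precise quantifiers and constants are in the paper), the main result can be displayed as
\[
\mathrm{SCDP}(C) \;=\; \Theta\!\left(R^{\rightarrow}\big(\mathrm{Eval}_C\big)\right),
\]
where $R^{\rightarrow}(\mathrm{Eval}_C)$ denotes the randomized one-way communication complexity of the evaluation problem in which one party holds a concept $c \in C$, the other holds an input $x$, and the goal is to compute $c(x)$.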

%B Proceedings of The 27th Conference on Learning Theory (COLT 2014) %I JMLR Workshop and Conference Proceedings %C Barcelona, Spain %V 35 %P 1-20 %G eng %U http://jmlr.org/proceedings/papers/v35/feldman14b.pdf %0 Book %D 2014 %T What Stays in Vegas: The World of Personal Data -- Lifeblood of Big Business -- and the End of Privacy as We Know It. %A Adam Tanner %X

The greatest threat to privacy today is not the NSA, but good-old American companies. Internet giants, leading retailers, and other firms are voraciously gathering data with little oversight from anyone.

In Las Vegas, no company knows the value of data better than Caesars Entertainment. Many thousands of enthusiastic clients pour through the ever-open doors of their casinos. The secret to the company’s success lies in their one unrivaled asset: they know their clients intimately by tracking the activities of the overwhelming majority of gamblers. They know exactly what games they like to play, what foods they enjoy for breakfast, when they prefer to visit, who their favorite hostess might be, and exactly how to keep them coming back for more.

Caesars’ dogged data-gathering methods have been so successful that they have grown to become the world’s largest casino operator, and have inspired companies of all kinds to ramp up their own data mining in the hopes of boosting their targeted marketing efforts. Some do this themselves. Some rely on data brokers. Others clearly enter a moral gray zone that should make American consumers deeply uncomfortable.

We live in an age when our personal information is harvested and aggregated whether we like it or not. And it is growing ever more difficult for those businesses that choose not to engage in more intrusive data gathering to compete with those that do. Tanner’s timely warning resounds: Yes, there are many benefits to the free flow of all this data, but there is a dark, unregulated, and destructive netherworld as well.

%I Public Affairs %C New York, NY %G eng %U http://www.amazon.com/What-Stays-Vegas-Personal-Lifeblood/dp/1610394186 %0 Generic %D 2013 %T Response to the National Institute of Health Request Information: Input on Development of NIH Data Catalog %A George Alter %A Micah Altman %A Mark Abrahamson %A Roper Center %A Merce Crosas %A Jon Crabtree %A Ted Hull %G eng %0 Conference Paper %B Proceedings of the 10th Theory of Cryptography Conference on Theory of Cryptography %D 2013 %T Analyzing Graphs with Node Differential Privacy %A Kasiviswanathan, ShivaPrasad %A Kobbi Nissim %A Raskhodnikova, Sofya %A Smith, Adam %B Proceedings of the 10th Theory of Cryptography Conference on Theory of Cryptography %S TCC'13 %I Springer-Verlag %C Berlin, Heidelberg %P 457–476 %@ 978-3-642-36593-5 %G eng %U http://dx.doi.org/10.1007/978-3-642-36594-2_26 %R 10.1007/978-3-642-36594-2_26 %0 Conference Paper %B Proceedings of the 45th annual ACM symposium on Symposium on theory of computing %D 2013 %T Answering n{2+o(1)} counting queries with differential privacy is hard %A Jonathan Ullman %X A central problem in differentially private data analysis is how to design efficient algorithms capable of answering large numbers of counting queries on a sensitive database. Counting queries are of the form "What fraction of individual records in the database satisfy the property q?" We prove that if one-way functions exist, then there is no algorithm that takes as input a database db ∈ dbset, and k = ~Θ(n2) arbitrary efficiently computable counting queries, runs in time poly(d, n), and returns an approximate answer to each query, while satisfying differential privacy. We also consider the complexity of answering "simple" counting queries, and make some progress in this direction by showing that the above result holds even when we require that the queries are computable by constant-depth (AC0) circuits. Our result is almost tight because it is known that ~Ω(n2) counting queries can be answered efficiently while satisfying differential privacy. Moreover, many more than n2 queries (even exponential in n) can be answered in exponential time. We prove our results by extending the connection between differentially private query release and cryptographic traitor-tracing schemes to the setting where the queries are given to the sanitizer as input, and by constructing a traitor-tracing scheme that is secure in this setting. %B Proceedings of the 45th annual ACM symposium on Symposium on theory of computing %I ACM %C Palo Alto, California, USA %P 361-370 %@ 978-1-4503-2029-0 %G eng %U http://dx.doi.org/10.1145/2488608.2488653 %1 2488653 %R 10.1145/2488608.2488653 %0 Conference Paper %B Proceedings of the 4th conference on Innovations in Theoretical Computer Science %D 2013 %T Characterizing the sample complexity of private learners %A Amos Beimel %A Kobbi Nissim %A Uri Stemmer %X In 2008, Kasiviswanathan el al. defined private learning as a combination of PAC learning and differential privacy [16]. Informally, a private learner is applied to a collection of labeled individual information and outputs a hypothesis while preserving the privacy of each individual. Kasiviswanathan et al. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class. 
This sample complexity is higher than what is needed for non-private learners, hence leaving open the possibility that the sample complexity of private learning may be sometimes significantly higher than that of non-private learning. We give a combinatorial characterization of the sample size sufficient and necessary to privately learn a class of concepts. This characterization is analogous to the well known characterization of the sample complexity of non-private learning in terms of the VC dimension of the concept class. We introduce the notion of probabilistic representation of a concept class, and our new complexity measure RepDim corresponds to the size of the smallest probabilistic representation of the concept class. We show that any private learning algorithm for a concept class C with sample complexity m implies RepDim(C) = O(m), and that there exists a private learning algorithm with sample complexity m = O(RepDim(C)). We further demonstrate that a similar characterization holds for the database size needed for privately computing a large class of optimization problems and also for the well studied problem of private data release. %B Proceedings of the 4th conference on Innovations in Theoretical Computer Science %I ACM %C Berkeley, California, USA %P 97-110 %@ 978-1-4503-1859-4 %G eng %U http://dx.doi.org/10.1145/2422436.2422450 %1 2422450 %R 10.1145/2422436.2422450 %0 Conference Paper %B Proceedings of the 45th annual ACM symposium on Symposium on theory of computing %D 2013 %T Differential privacy for the analyst via private equilibrium computation %A Justin Hsu %A Aaron Roth %A Jonathan Ullman %X We give new mechanisms for answering exponentially many queries from multiple analysts on a private database, while protecting differential privacy both for the individuals in the database and for the analysts. That is, our mechanism's answer to each query is nearly insensitive to changes in the queries asked by other analysts. Our mechanism is the first to offer differential privacy on the joint distribution over analysts' answers, providing privacy for data analysts even if the other data analysts collude or register multiple accounts. In some settings, we are able to achieve nearly optimal error rates (even compared to mechanisms which do not offer analyst privacy), and we are able to extend our techniques to handle non-linear queries. Our analysis is based on a novel view of the private query-release problem as a two-player zero-sum game, which may be of independent interest. %B Proceedings of the 45th annual ACM symposium on Symposium on theory of computing %I ACM %C Palo Alto, California, USA %P 341-350 %@ 978-1-4503-2029-0 %G eng %U http://dx.doi.org/10.1145/2488608.2488651 %1 2488651 %R 10.1145/2488608.2488651 %0 Journal Article %J Commun. ACM %D 2013 %T Discrimination in online ad delivery %A Sweeney, Latanya %X Google ads, black names and white names, racial discrimination, and click advertising. %B Commun. ACM %I ACM %C New York, NY, USA %V 56 %P 44–54 %G eng %U http://doi.acm.org/10.1145/2447976.2447990 %N 5 %R 10.1145/2447976.2447990 %0 Journal Article %J Automata, Languages, and Programming %D 2013 %T Dual Lower Bounds for Approximate Degree and Markov-Bernstein Inequalities %A Mark Bun %A Justin Thaler %E Fomin, Fedor V. %E Freivalds, Rūsiņš %E Kwiatkowska, Marta %E Peleg, David %X The ε-approximate degree of a Boolean function f: {−1, 1}^n → {−1, 1} is the minimum degree of a real polynomial that approximates f to within ε in the ℓ∞ norm.
We prove several lower bounds on this important complexity measure by explicitly constructing solutions to the dual of an appropriate linear program. Our first result resolves the ε-approximate degree of the two-level AND-OR tree for any constant ε > 0. We show that this quantity is Θ(√n), closing a line of incrementally larger lower bounds [3,11,21,30,32]. The same lower bound was recently obtained independently by Sherstov using related techniques [25]. Our second result gives an explicit dual polynomial that witnesses a tight lower bound for the approximate degree of any symmetric Boolean function, addressing a question of Špalek [34]. Our final contribution is to reprove several Markov-type inequalities from approximation theory by constructing explicit dual solutions to natural linear programs. These inequalities underlie the proofs of many of the best-known approximate degree lower bounds, and have important uses throughout theoretical computer science. %B Automata, Languages, and Programming %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %V 7965 %P 303-314 %@ 978-3-642-39205-4 %G eng %U http://dx.doi.org/10.1007/978-3-642-39206-1_26 %0 Conference Paper %B Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE %D 2013 %T Estimation of exchangeable graph models by stochastic blockmodel approximation %A Chan, S.H. %A Costa, T.B. %A Airoldi, E.M. %K Abstracts %K Approximation algorithms %K Approximation methods %K Arrays %K Data models %K exchangeable graph model estimation %K exchangeable random arrays %K exchangeable random graph model %K graph theory %K graphlet %K graphon %K Indexes %K Network analysis %K network theory (graphs) %K non-parametric estimation %K stochastic blockmodel %K stochastic blockmodel approximation %K Stochastic Processes %B Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE %P 293-296 %8 Dec %G eng %R 10.1109/GlobalSIP.2013.6736873 %0 Journal Article %J CoRR %D 2013 %T Faster Private Release of Marginals on Small Databases %A Karthekeyan Chandrasekaran %A Justin Thaler %A Jonathan Ullman %A Andrew Wan %X We study the problem of answering \emph{$k$-way marginal} queries on a database $D \in (\{0,1\}^d)^n$, while preserving differential privacy. The answer to a $k$-way marginal query is the fraction of the database's records $x \in \{0,1\}^d$ with a given value in each of a given set of up to $k$ columns. Marginal queries enable a rich class of statistical analyses on a dataset, and designing efficient algorithms for privately answering marginal queries has been identified as an important open problem in private data analysis. For any $k$, we give a differentially private online algorithm that runs in time $$\min\{\exp(d^{1-\Omega(1/\sqrt{k})}), \exp(d / \log^{.99} d)\}$$ per query and answers any (possibly superpolynomially long and adaptively chosen) sequence of $k$-way marginal queries up to error at most $\pm .01$ on every query, provided $n \gtrsim d^{.51}$. To the best of our knowledge, this is the first algorithm capable of privately answering marginal queries with a non-trivial worst-case accuracy guarantee on a database of size $\mathrm{poly}(d, k)$ in time $\exp(o(d))$. Our algorithms are a variant of the private multiplicative weights algorithm (Hardt and Rothblum, FOCS '10), but using a different low-weight representation of the database. We derive our low-weight representation using approximations to the OR function by low-degree polynomials with coefficients of bounded $L_1$-norm.
We also prove a strong limitation on our approach that is of independent approximation-theoretic interest. Specifically, we show that for any $k = o(\log d)$, any polynomial with coefficients of $L_1$-norm $poly(d)$ that pointwise approximates the $d$-variate OR function on all inputs of Hamming weight at most $k$ must have degree $d^{1-O(1/\sqrt{k})}$. %B CoRR %V abs/1304.3754 %G eng %U http://arxiv.org/abs/1304.3754 %0 Unpublished Work %D 2013 %T Identifying Participants in the Personal Genome Project by Name %A Sweeney, Latanya %A Abu, Akua %A Winn, Julia %B Data Privacy Lab, IQSS, Harvard University %8 04/24/2013 %G eng %U http://dataprivacylab.org/projects/pgp/ %9 White paper %0 Conference Paper %B Proceedings of the 45th annual ACM symposium on Symposium on theory of computing %D 2013 %T Interactive proofs of proximity: delegating computation in sublinear time %A Guy N. Rothblum %A Salil Vadhan %A Avi Wigderson %X

We study interactive proofs with sublinear-time verifiers. These proof systems can be used to ensure approximate correctness for the results of computations delegated to an untrusted server. Following the literature on property testing, we seek proof systems where with high probability the verifier accepts every input in the language, and rejects every input that is far from the language. The verifier's query complexity (and computation complexity), as well as the communication, should all be sublinear. We call such a proof system an Interactive Proof of Proximity (IPP). On the positive side, our main result is that all languages in NC have Interactive Proofs of Proximity with roughly √n query and communication complexities, and polylog(n) communication rounds. This is achieved by identifying a natural language, membership in an affine subspace (for a structured class of subspaces), that is complete for constructing interactive proofs of proximity, and providing efficient protocols for it. In building an IPP for this complete language, we show a tradeoff between the query and communication complexity and the number of rounds. For example, we give a 2-round protocol with roughly n^{3/4} queries and communication. On the negative side, we show that there exist natural languages in NC^1, for which the sum of queries and communication in any constant-round interactive proof of proximity must be polynomially related to n. In particular, for any 2-round protocol, the sum of queries and communication must be at least Ω̃(√n). Finally, we construct much better IPPs for specific functions, such as bipartiteness on random or well-mixing graphs, and the majority function. The query complexities of these protocols are provably better (by exponential or polynomial factors) than what is possible in the standard property testing model, i.e., without a prover.

%B Proceedings of the 45th annual ACM symposium on Symposium on theory of computing %I ACM %C Palo Alto, California, USA %P 793-802 %@ 978-1-4503-2029-0 %G eng %U http://dx.doi.org/10.1145/2488608.2488709 %1 2488709 %R 10.1145/2488608.2488709 %0 Unpublished Work %D 2013 %T Matching Known Patients to Health Records in Washington State Data %A Sweeney, Latanya %B Data Privacy Lab, IQSS, Harvard University %8 06/01/2013 %G eng %U http://thedatamap.org/risks.html %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %D 2013 %T Private Learning and Sanitization: Pure vs. Approximate Differential Privacy %A Amos Beimel %A Kobbi Nissim %A Uri Stemmer %A Raghavendra, Prasad %A Raskhodnikova, Sofya %A Jansen, Klaus %A Rolim, JoséD.P. %K differential privacy %K Private Learning %K Sanitization %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %V 8096 %P 363-378 %@ 978-3-642-40327-9 %G eng %U http://dx.doi.org/10.1007/978-3-642-40328-6_26 %R 10.1007/978-3-642-40328-6_26 %0 Journal Article %J JAMA %D 2013 %T Putting Health IT on the Path to Success %A Sweeney, Latanya %A Yasnoff, William A. %A Shortliffe, Edward H. %X The promise of health information technology (HIT) is comprehensive electronic patient records when and where needed, leading to improved quality of care at reduced cost. However, physician experience and other available evidence suggest that this promise is largely unfulfilled. Serious flaws in current approaches to health information exchanges: (1) complex and expensive; (2) prone to error and insecurity; (3) increase liability; (4) not financially sustainable; (5) unable to protect privacy; (6) unable to ensure stakeholder cooperation; and, (7) unable to facilitate robust data sharing. The good news is that personal health record banks pose a viable alternative that is: (a) simpler; (b) scalable; (c) less expensive; (d) more secure; (e) community oriented to ensure stakeholder participation; and, (e) capable of providing the most comprehensive records. The idea of patient controlled records is not new, but what is new is how personally controlled records can help achieve the HIT vision. %B JAMA %V 309 %P 989-990 %G eng %U http://dx.doi.org/10.1001/jama.2013.1474 %N 10 %0 Unpublished Work %D 2013 %T Survey of Publicly Available State Health Databases %A Hooley, Sean %A Sweeney, Latanya %B Data Privacy Lab, IQSS, Harvard University %8 06/01/2013 %G eng %U http://thedatamap.org/states.html %0 Conference Paper %B Proceedings of the fourteenth ACM conference on Electronic commerce %D 2013 %T Truthful mechanisms for agents that value privacy %A Yiling Chen %A Stephen Chong %A Ian A. Kash %A Tal Moran %A Salil Vadhan %X Recent work has constructed economic mechanisms that are both truthful and differentially private. In these mechanisms, privacy is treated separately from the truthfulness; it is not incorporated in players' utility functions (and doing so has been shown to lead to non-truthfulness in some cases). In this work, we propose a new, general way of modelling privacy in players' utility functions. Specifically, we only assume that if an outcome o has the property that any report of player i would have led to o with approximately the same probability, then o has small privacy cost to player i. 
We give three mechanisms that are truthful with respect to our modelling of privacy: for an election between two candidates, for a discrete version of the facility location problem, and for a general social choice problem with discrete utilities (via a VCG-like mechanism). As the number n of players increases, the social welfare achieved by our mechanisms approaches optimal (as a fraction of n). %B Proceedings of the fourteenth ACM conference on Electronic commerce %I ACM %C Philadelphia, Pennsylvania, USA %P 215-232 %@ 978-1-4503-1962-1 %G eng %U http://dx.doi.org/10.1145/2482540.2482549 %1 2482549 %R 10.1145/2482540.2482549 %0 Conference Paper %B Proceedings of the 32nd International Cryptology Conference (CRYPTO `12) %D 2012 %T Differential Privacy with Imperfect Randomness %A Yevgeniy Dodis %A Adriana López-Alt %A Ilya Mironov %A Salil Vadhan %X

In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC’07) showed that if a source of randomness R is “good enough” to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) requires an “extractable” source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific “non-extractable” sources of randomness, such as the γ-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias γ < 1 (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC’06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary “low sensitivity” functions that works even with randomness coming from a γ-Santha-Vazirani source, for any γ < 1. This provides a somewhat surprising “separation” between traditional privacy and differential privacy with respect to imperfect randomness. Interestingly, the design of our mechanism is quite different from the traditional “additive-noise” mechanisms (e.g., Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (accurate and private) “SV-robust” mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.
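
For contrast with the "additive-noise" mechanisms mentioned at the end of the abstract, the sketch below is the standard Laplace mechanism for a sensitivity-1 count, which presumes access to perfect (uniform) randomness. It is the baseline being compared against, not the paper's SV-robust mechanism; the example query and parameters are ours.

```python
# Baseline additive-noise mechanism (assumes perfect randomness): release a
# sensitivity-1 count plus Laplace(sensitivity/epsilon) noise. This is NOT the
# paper's SV-robust mechanism; it is the standard mechanism it is contrasted with.
import math
import random

random.seed(1)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus a Laplace(sensitivity/epsilon) noise sample."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                       # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release a count (each individual changes it by at most 1).
records = [1, 0, 1, 1, 0, 1]
print(laplace_mechanism(sum(records), sensitivity=1.0, epsilon=0.5))
```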

%B Proceedings of the 32nd International Cryptology Conference (CRYPTO `12) %S Lecture Notes on Computer Science %7 Lecture Notes on Computer Science %I Springer-Verlag %C Santa Barbara, CA %V 7417 %P 497–516 %8 19–23 August %G eng %U http://link.springer.com/chapter/10.1007%2F978-3-642-32009-5_29 %0 Conference Paper %B Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012 %D 2012 %T Faster Algorithms for Privately Releasing Marginals %A Justin Thaler %A Jonathan Ullman %A Salil P. Vadhan %X

We study the problem of releasing k-way marginals of a database D ∈ ({0,1}^d)^n, while preserving differential privacy. The answer to a k-way marginal query is the fraction of D’s records x ∈ {0,1}^d with a given value in each of a given set of up to k columns. Marginal queries enable a rich class of statistical analyses of a dataset, and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data analysis (cf. Barak et al., PODS '07). We give an algorithm that runs in time d^{O(√k)} and releases a private summary capable of answering any k-way marginal query with at most ±.01 error on every query as long as n ≥ d^{O(√k)}. To our knowledge, ours is the first algorithm capable of privately releasing marginal queries with non-trivial worst-case accuracy guarantees in time substantially smaller than the number of k-way marginal queries, which is d^{Θ(k)} (for k ≪ d).
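
To make the definition concrete, the sketch below evaluates a single k-way marginal on a binary database and adds Laplace noise scaled to its sensitivity. This only illustrates what one such query is; the paper's contribution is a summary that answers all of the roughly d^{Θ(k)} marginal queries in far less time, which this sketch does not attempt. The dataset and parameters are arbitrary.

```python
# One k-way marginal query, evaluated exactly and then with Laplace noise.
# Illustrative only; this is not the paper's private-summary algorithm.
import numpy as np

rng = np.random.default_rng(42)

def marginal(D, columns, values):
    """Fraction of rows of the 0/1 database D matching `values` on `columns`."""
    return float(np.mean(np.all(D[:, columns] == values, axis=1)))

def noisy_marginal(D, columns, values, epsilon):
    """Changing one row moves the fraction by at most 1/n, hence the noise scale."""
    n = D.shape[0]
    return marginal(D, columns, values) + rng.laplace(scale=1.0 / (epsilon * n))

D = rng.integers(0, 2, size=(1000, 8))              # n = 1000 records, d = 8 binary attributes
q_cols, q_vals = [0, 3, 5], [1, 0, 1]               # a 3-way marginal query
print("exact:", marginal(D, q_cols, q_vals))
print("noisy:", noisy_marginal(D, q_cols, q_vals, epsilon=0.5))
```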

%B Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012 %7 Lecture Notes in Computer Science %I Springer %C Warwick, UK %V 7391 %8 9-13 Jul. %G eng %U http://dx.doi.org/10.1007/978-3-642-31594-7_68 %0 Conference Paper %B Theory of Cryptography - 9th Theory of Cryptography Conference, TCC 2012 %D 2012 %T Iterative Constructions and Private Data Release %A Anupam Gupta %A Aaron Roth %A Jonathan Ullman %X

In this paper we study the problem of approximately releasing the cut function of a graph while preserving differential privacy, and give new algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings. Our algorithms in the interactive setting are achieved by revisiting the problem of releasing differentially private, approximate answers to a large number of queries on a database. We show that several algorithms for this problem fall into the same basic framework, and are based on the existence of objects which we call iterative database construction algorithms. We give a new generic framework in which new (efficient) IDC algorithms give rise to new (efficient) interactive private query release mechanisms. Our modular analysis simplifies and tightens the analysis of previous algorithms, leading to improved bounds. We then give a new IDC algorithm (and therefore a new private, interactive query release mechanism) based on the Frieze/Kannan low-rank matrix decomposition. This new release mechanism gives an improvement on prior work in a range of parameters where the size of the database is comparable to the size of the data universe (such as releasing all cut queries on dense graphs). We also give a non-interactive algorithm for efficiently releasing private synthetic data for graph cuts with error O(|V|1.5). Our algorithm is based on randomized response and a non-private implementation of the SDP-based, constant-factor approximation algorithm for cut-norm due to Alon and Naor. Finally, we give a reduction based on the IDC framework showing that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts. We leave finding such an algorithm as our main open problem.
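
The randomized-response idea mentioned above, in its generic textbook form, is sketched below: flip each potential edge independently, then answer a cut query from the flipped graph after a simple bias correction. The paper combines this with an SDP-based approximation of the cut norm, which is not reproduced here; the graph, flip probability, and query are illustrative choices.

```python
# Generic randomized response on a graph plus a debiased cut estimate
# (illustrative; the paper's SDP-based cut-norm step is omitted).
import numpy as np

rng = np.random.default_rng(7)

def randomized_response_graph(A, p):
    """Flip each upper-triangle entry of the 0/1 adjacency matrix A with probability p."""
    n = A.shape[0]
    flips = np.triu(rng.random((n, n)) < p, k=1).astype(int)
    noisy = np.triu(A, k=1).astype(int) ^ flips
    return noisy + noisy.T

def cut_estimate(noisy_A, S, T, p):
    """Debiased estimate of the number of edges between disjoint vertex sets S and T."""
    observed = noisy_A[np.ix_(S, T)].sum()
    pairs = len(S) * len(T)
    # Each reported bit equals the true bit with probability 1 - p, so
    # E[observed] = (1 - 2p) * true + p * pairs; invert that affine map.
    return (observed - p * pairs) / (1.0 - 2.0 * p)

n = 200
A = np.triu((rng.random((n, n)) < 0.1).astype(int), k=1)
A = A + A.T                                          # a random undirected graph
S, T = list(range(0, 50)), list(range(100, 160))     # a cut query on disjoint sets
noisy = randomized_response_graph(A, p=0.25)
print("true cut:", A[np.ix_(S, T)].sum(), " estimate:", round(cut_estimate(noisy, S, T, 0.25), 1))
```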

%B Theory of Cryptography - 9th Theory of Cryptography Conference, TCC 2012 %7 Lecture Notes in Computer Science %I Springer %C Taormina, Sicily, Italy %V 7194 %P 339-356 %8 19-21 Mar. %G eng %U http://dx.doi.org/10.1007/978-3-642-28914-9_19 %0 Conference Paper %B Proceedings of the 53rd Annual {IEEE} Symposium on Foundations of Computer Science (FOCS `12) %D 2012 %T The Privacy of the Analyst and the Power of the State %A Cynthia Dwork %A Moni Naor %A Salil Vadhan %X

We initiate the study of "privacy for the analyst" in differentially private data analysis. That is, not only will we be concerned with ensuring differential privacy for the data (i.e. individuals or customers), which are the usual concern of differential privacy, but we also consider (differential) privacy for the set of queries posed by each data analyst. The goal is to achieve privacy with respect to other analysts, or users of the system. This problem arises only in the context of stateful privacy mechanisms, in which the responses to queries depend on other queries posed (a recent wave of results in the area utilized cleverly coordinated noise and state in order to allow answering privately hugely many queries). We argue that the problem is real by proving an exponential gap between the number of queries that can be answered (with non-trivial error) by stateless and stateful differentially private mechanisms. We then give a stateful algorithm for differentially private data analysis that also ensures differential privacy for the analyst and can answer exponentially many queries.

%B Proceedings of the 53rd Annual {IEEE} Symposium on Foundations of Computer Science (FOCS `12) %I IEEE %C New Brunswick, NJ %P 400–409 %8 20–23 October %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6375318&tag=1 %0 Manuscript %D 2012 %T Private Equilibrium Release, Large Games, and No-Regret Learning %A Michael Kearns %A Mallesh Pai %A Aaron Roth %A Jonathan Ullman %X

We give mechanisms in which each of n players in a game is given their component of an (approximate) equilibrium in a way that guarantees differential privacy---that is, the revelation of the equilibrium components does not reveal too much information about the utilities of the other players. More precisely, we show how to compute an approximate correlated equilibrium (CE) under the constraint of differential privacy (DP), provided n is large and any player's action affects any other's payoff by at most a small amount. Our results draw interesting connections between noisy generalizations of classical convergence results for no-regret learning, and the noisy mechanisms developed for differential privacy. Our results imply the ability to truthfully implement good social-welfare solutions in many games, such as games with small Price of Anarchy, even if the mechanism does not have the ability to enforce outcomes. We give two different mechanisms for DP computation of approximate CE. The first is computationally efficient, but has a suboptimal dependence on the number of actions in the game; the second is computationally inefficient, but allows for games with exponentially many actions. We also give a matching lower bound, showing that our results are tight up to logarithmic factors.

%G eng %U http://arxiv.org/abs/1207.4084 %0 Government Document %D 2011 %T Comments on Advance Notice of Proposed Rulemaking: Human Subjects Research Protections: Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators, Docket ID number HHS-OPHS-2011-0005 %A Salil Vadhan %A David Abrams %A Micah Altman %A Cynthia Dwork %A Paul Kominers %A Scott Duke Kominers %A Harry R. Lewis %A Tal Moran %A Guy Rothblum %X

Comments by Salil Vadhan, David Abrams, Micah Altman, Cynthia Dwork, Scott Duke Kominers, Paul Kominers, Harry Lewis, Tal Moran, Guy Rothblum, and Jon Ullman (at Harvard, Microsoft Research, the University of Chicago, MIT, and the Herzliya Interdisciplinary Center). These comments address the issues of data privacy and de-identification raised in the ANPRM. Our perspective is informed by substantial advances in privacy science that have been made in the computer science literature.

%8 October %G eng %U http://www.regulations.gov/#!documentDetail;D=HHS-OPHS-2011-0005-1101 %0 Conference Paper %B Proceedings of the 8th IACR Theory of Cryptography Conference (TCC `11) %D 2011 %T PCPs and the Hardness of Generating Synthetic Data %A Jon Ullman %A Salil Vadhan %E Yuval Ishai %X

Assuming the existence of one-way functions, we show that there is no polynomial-time, differentially private algorithm A that takes a database D ∈ ({0,1}^d)^n and outputs a "synthetic database" D' all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x ∈ {0,1}^d with a given pair of values in a given pair of columns.) This answers a question of Barak et al. (PODS '07), who gave an algorithm running in time poly(n,2^d). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by Dwork et al., STOC '09) with PCP-based Levin-reductions from NP search problems to finding approximate solutions to CSPs.
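
To make "two-way marginals" concrete, the sketch below computes the full table of two-way marginals that a useful synthetic database D' would need to approximately reproduce. It illustrates the utility target only and has nothing to do with the hardness construction itself; the toy database is ours.

```python
# The table of two-way marginals of a 0/1 database: for every pair of columns
# (i, j) and bits (a, b), the fraction of rows with value a in column i and b in
# column j. Illustrative only; the hardness construction is not reproduced here.
import itertools
import numpy as np

rng = np.random.default_rng(3)

def two_way_marginals(D):
    """Map (i, j, a, b) -> fraction of rows with D[:, i] == a and D[:, j] == b."""
    _, d = D.shape
    table = {}
    for i, j in itertools.combinations(range(d), 2):
        for a, b in itertools.product((0, 1), repeat=2):
            table[(i, j, a, b)] = float(np.mean((D[:, i] == a) & (D[:, j] == b)))
    return table

D = rng.integers(0, 2, size=(500, 6))                # n = 500 rows, d = 6 binary columns
marginals = two_way_marginals(D)
print(len(marginals), "two-way marginal values; e.g. columns (0, 1), values (1, 1):",
      marginals[(0, 1, 1, 1)])
```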

%B Proceedings of the 8th IACR Theory of Cryptography Conference (TCC `11) %S Lecture Notes on Computer Science %7 Lecture Notes on Computer Science %I Springer-Verlag %C Providence, RI %V 5978 %P 572–587 %8 28–30 March %G eng %U http://link.springer.com/chapter/10.1007%2F978-3-642-19571-6_24 %0 Conference Paper %B Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011 %D 2011 %T Privately releasing conjunctions and the statistical query barrier %A Anupam Gupta %A Moritz Hardt %A Aaron Roth %A Jonathan Ullman %X Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ)model. This gives a complete answer to the question when running time is not a concern. We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to an algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1% average error. This presents progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms. %B Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011 %I ACM %C San Jose, CA, USA %P 803-812 %8 6-8 June %G eng %U http://dl.acm.org/citation.cfm?id=1993742 %0 Conference Paper %B Proceedings of the 51st Annual {IEEE} Symposium on Foundations of Computer Science (FOCS `10) %D 2010 %T Boosting and Differential Privacy %A Cynthia Dwork %A Guy Rothblum %A Salil Vadhan %X

Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the responses to every query in Q, even when the number of queries is much larger than the number of rows in the database. Given a base synopsis generator that takes a distribution on Q and produces a "weak" synopsis that yields "good" answers for a majority of the weight in Q, our Boosting for Queries algorithm obtains a synopsis that is good for all of Q. We ensure privacy for the rows of the database, but the boosting is performed on the queries. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries, i.e., queries whose answers do not vary much under the addition or deletion of a single row. In the execution of our algorithm, certain tasks, each incurring some privacy loss, are performed many times. To analyze the cumulative privacy loss, we obtain an O(ε²) bound on the expected privacy loss from a single ε-differentially private mechanism. Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides ε-differential privacy or one of its relaxations, and each of which operates on (potentially) different, adaptively chosen, databases.
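
The "stronger bounds on the expected cumulative privacy loss" referred to above are what is now usually cited as advanced composition. In the form commonly quoted in the differential privacy literature (the paper's own statement may differ in constants and formulation), the composition of $k$ mechanisms, each $(\varepsilon, \delta)$-differentially private, is $(\varepsilon', k\delta + \delta')$-differentially private for every $\delta' > 0$, where
\[
\varepsilon' \;=\; \sqrt{2k \ln(1/\delta')}\,\varepsilon \;+\; k\,\varepsilon\,(e^{\varepsilon} - 1).
\]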

%B Proceedings of the 51st Annual {IEEE} Symposium on Foundations of Computer Science (FOCS `10) %I IEEE %C Las Vegas, NV %P 51–60 %8 23–26 October %G eng %U http://dx.doi.org/10.1109/FOCS.2010.12 %0 Conference Paper %B Proceedings of the 51st Annual {IEEE} Symposium on Foundations of Computer Science (FOCS `10) %D 2010 %T The Limits of Two-Party Differential Privacy %A Andrew McGregor %A Ilya Mironov %A Toniann Pitassi %A Omer Reingold %A Kunal Talwar %A Salil Vadhan %X

We study differential privacy in a distributed setting where two parties would like to perform analysis of their joint data while preserving privacy for both datasets. Our results imply almost tight lower bounds on the accuracy of such data analyses, both for specific natural functions (such as Hamming distance) and in general. Our bounds expose a sharp contrast between the two-party setting and the simpler client-server setting (where privacy guarantees are one-sided). In addition, those bounds demonstrate a dramatic gap between the accuracy that can be obtained by differentially private data analysis versus the accuracy obtainable when privacy is relaxed to a computational variant of differential privacy. The first proof technique we develop demonstrates a connection between differential privacy and deterministic extraction from Santha-Vazirani sources. A second connection we expose indicates that the ability to approximate a function by a low-error differentially private protocol is strongly related to the ability to approximate it by a low communication protocol. (The connection goes in both directions).

%B Proceedings of the 51st Annual {IEEE} Symposium on Foundations of Computer Science (FOCS `10) %I IEEE %C Las Vegas, NV %P 81–90 %8 23–26 October %G eng %U http://dx.doi.org/10.1109/FOCS.2010.14 %0 Conference Paper %B Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC `09) %D 2009 %T On the Complexity of Differentially Private Data Release: Efficient Algorithms and Hardness Results %A Cynthia Dwork %A Moni Naor %A Omer Reingold %A Guy Rothblum %A Salil Vadhan %X

We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a "synthetic data set" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role. For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.

%B Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC `09) %C Bethesda, MD %P 381–390 %8 31 May–2 June %G eng %U http://dl.acm.org/citation.cfm?id=1536467 %0 Conference Paper %B Advances in Cryptology–-CRYPTO `09 %D 2009 %T Computational Differential Privacy %A Ilya Mironov %A Omkant Pandey %A Omer Reingold %A Salil Vadhan %X

The definition of differential privacy has recently emerged as a leading standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficient—i.e., computationally-bounded—adversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by Reingold et al.[1]. We extend the dense model theorem of Reingold et al. to demonstrate equivalence between two definitions (indistinguishability- and simulatability-based) of computational differential privacy. Our computational analogues of differential privacy seem to allow for more accurate constructions than the standard information-theoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentially-private protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private.
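
For reference, the information-theoretic definition being relaxed requires, for all neighboring databases $D, D'$ and all sets of outputs $S$,
\[
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta,
\]
whereas the indistinguishability-based computational variant discussed in the abstract, stated schematically here (see the paper for the precise formulation), only asks this of tests run by computationally bounded adversaries $A$:
\[
\Pr[A(M(D)) = 1] \;\le\; e^{\varepsilon}\,\Pr[A(M(D')) = 1] + \mathrm{negl}(\kappa),
\]
with $\kappa$ a security parameter.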

%B Advances in Cryptology–-CRYPTO `09 %S Lecture Notes in Computer Science %I Springer-Verlag %C Santa Barbara, CA %V 5677 %P 126–142 %8 16–20 August %G eng %U http://link.springer.com/chapter/10.1007%2F978-3-642-03356-8_8 %0 Unpublished Work %D 2000 %T Simple Demographics Often Identify People Uniquely %A Sweeney, Latanya %B Carnegie Mellon University, Data Privacy %G eng %U http://dataprivacylab.org/projects/identifiability/ %9 Working paper