M. Bun, K. Nissim, and U. Stemmer, “Simultaneous private learning of multiple concepts,” in Innovations in Theoretical Computer Science (ITCS 2016), Submitted.
M. Bun, J. Ullman, and S. Vadhan, “Fingerprinting Codes and the Price of Approximate Differential Privacy,” SIAM Journal on Computing (SICOMP), Submitted.
M. Altman, A. Wood, D. O'Brien, S. Vadhan, and U. Gasser, “Towards a Modern Approach to Privacy-Aware Government Data Releases,” Berkeley Technology Law Journal, Forthcoming.
M. Bun and M. Zhandry, “Order revealing encryption and the hardness of private learning,” in Proceedings of the 12th Theory of Cryptography Conference (TCC 2016), Tel-Aviv, Israel, 2016. ArXiv Version
J. Murtagh and S. Vadhan, “The Complexity of Computing the Optimal Composition of Differential Privacy,” in Theory of Cryptography Conference (TCC 2016), 2016. ArXiv Version
Y. Chen, K. Nissim, and B. Waggoner, “Fair Information Sharing for Treasure Hunting,” in AAAI Conference on Artificial Intelligence, 2015. PDF. Abstract:
In a search task, a group of agents compete to be the first to find the solution. Each agent has different private information to incorporate into its search. This problem is inspired by settings such as scientific research, Bitcoin hash inversion, or hunting for some buried treasure. A social planner such as a funding agency, mining pool, or pirate captain might like to convince the agents to collaborate, share their information, and greatly reduce the cost of searching. However, this cooperation is in tension with the individuals' competitive desire to each be the first to win the search. The planner's proposal should incentivize truthful information sharing, reduce the total cost of searching, and satisfy fairness properties that preserve the spirit of the competition. We design contract-based mechanisms for information sharing without money. The planner solicits the agents' information and assigns search locations to the agents, who may then search only within their assignments. Truthful reporting of information to the mechanism maximizes an agent's chance to win the search. Epsilon-voluntary participation is satisfied for large search spaces. In order to formalize the planner's goals of fairness and reduced search cost, we propose a simplified, simulated game as a benchmark and quantify fairness and search cost relative to this benchmark scenario. The game is also used to implement our mechanisms. Finally, we extend to the case where coalitions of agents may participate in the mechanism, forming larger coalitions recursively.
C. Dwork, A. Smith, T. Steinke, J. Ullman, and S. Vadhan, “Robust Traceability from Trace Amounts,” in IEEE Symposium on Foundations of Computer Science (FOCS 2015), Berkeley, California, 2015.
S. Zheng, The Differential Privacy of Bayesian Inference. Bachelor's thesis, Harvard College, 2015. DASH Version
M. Bun, K. Nissim, U. Stemmer, and S. Vadhan, “Differentially Private Release and Learning of Threshold Functions,” in 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015), Berkeley, California, 2015. ArXiv Version. Abstract:
We prove new upper and lower bounds on the sample complexity of (ϵ,δ) differentially private algorithms for releasing approximate answers to threshold functions. A threshold function c_x over a totally ordered domain X evaluates to c_x(y)=1 if y≤x, and evaluates to 0 otherwise. We give the first nontrivial lower bound for releasing thresholds with (ϵ,δ) differential privacy, showing that the task is impossible over an infinite domain X, and moreover requires sample complexity n≥Ω(log*|X|), which grows with the size of the domain. Inspired by the techniques used to prove this lower bound, we give an algorithm for releasing thresholds with n≤2^((1+o(1))·log*|X|) samples. This improves the previous best upper bound of 8^((1+o(1))·log*|X|) (Beimel et al., RANDOM '13). Our sample complexity upper and lower bounds also apply to the tasks of learning distributions with respect to Kolmogorov distance and of properly PAC learning thresholds with differential privacy. The lower bound gives the first separation between the sample complexity of properly learning a concept class with (ϵ,δ) differential privacy and learning without privacy. For properly learning thresholds in ℓ dimensions, this lower bound extends to n≥Ω(ℓ·log*|X|). To obtain our results, we give reductions in both directions between releasing and properly learning thresholds and the simpler interior point problem. Given a database D of elements from X, the interior point problem asks for an element between the smallest and largest elements in D. We introduce new recursive constructions for bounding the sample complexity of the interior point problem, as well as further reductions and techniques for proving impossibility results for other basic problems in differential privacy.
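As a point of reference (not taken from the paper), the following minimal Python sketch concretizes the two objects the abstract relates: the threshold function c_x and the interior point problem. The paper's contribution is solving these tasks with differential privacy; no privacy mechanism is shown here.

```python
def threshold(x):
    """c_x(y) = 1 if y <= x, else 0, over a totally ordered domain."""
    return lambda y: 1 if y <= x else 0

def is_interior_point(D, z):
    """z solves the interior point problem for database D if min(D) <= z <= max(D)."""
    return min(D) <= z <= max(D)

c = threshold(7)
assert c(5) == 1 and c(9) == 0
assert is_interior_point([3, 8, 15], 8)
```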
O. Sheffet, “Private Approximations of the 2nd-Moment Matrix Using Existing Techniques in Linear Regression,” in Neural Information Processing Systems (NIPS 2015), 2015. ArXiv Version. Abstract:
We introduce three differentially private algorithms that approximate the 2nd-moment matrix of the data. These algorithms, which in contrast to existing algorithms output positive-definite matrices, correspond to existing techniques in the linear regression literature. Specifically, we discuss the following three techniques. (i) For Ridge Regression, we propose setting the regularization coefficient so that by approximating the solution using the Johnson-Lindenstrauss transform we preserve privacy. (ii) We show that adding a small batch of random samples to our data preserves differential privacy. (iii) We show that sampling the 2nd-moment matrix from a Bayesian posterior inverse-Wishart distribution is differentially private provided the prior is set correctly. We also evaluate our techniques experimentally and compare them to the existing "Analyze Gauss" algorithm of Dwork et al.
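To illustrate the idea behind technique (ii) only (this is a toy sketch, not the paper's calibrated construction), one can append a small batch of random rows to the data before forming the 2nd-moment matrix; the number of rows and their scale below are placeholder values, whereas the paper derives the settings needed for differential privacy.

```python
import numpy as np

def noisy_second_moment(X, num_fake_rows=50, scale=1.0, rng=None):
    """Compute X^T X after appending random 'dummy' rows to X."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    fake = rng.normal(0.0, scale, size=(num_fake_rows, d))  # random batch of samples
    X_aug = np.vstack([X, fake])
    return X_aug.T @ X_aug  # positive semi-definite; positive definite once the fake rows span R^d

# Example: 1000 rows of 5-dimensional data
X = np.random.default_rng(0).normal(size=(1000, 5))
M = noisy_second_moment(X)
print(np.linalg.eigvalsh(M).min() > 0)  # strictly positive with high probability
```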
M. Bun and J. Thaler, “Hardness Amplification and the Approximate Degree of Constant-Depth Circuits,” in International Colloquium on Automata, Languages, and Programming (ICALP 2015), 2015. ArXiv Version
A. Askarov, S. Moore, C. Dimoulas, and S. Chong, “Cryptographic Enforcement of Language-Based Erasure,” in Proceedings of the 28th IEEE Computer Security Foundations Symposium (CSF), 2015.
L. Sweeney and M. Crosas, “An Open Science Platform for the Next Generation of Data,” Computer Science, Computers and Society [Internet], 2015. ArXiv Version. Abstract:
Imagine an online work environment where researchers have direct and immediate access to myriad data sources and tools and data management resources, useful throughout the research lifecycle. This is our vision for the next generation of the Dataverse Network: an Open Science Platform (OSP). For the first time, researchers would be able to seamlessly access and create primary and derived data from a variety of sources: prior research results, public data sets, harvested online data, physical instruments, private data collections, and even data from other standalone repositories. Researchers could recruit research participants and conduct research directly on the OSP, if desired, using readily available tools. Researchers could create private or shared workspaces to house data, access tools, and computation and could publish data directly on the platform or publish elsewhere with persistent, data citations on the OSP. This manuscript describes the details of an Open Science Platform and its construction. Having an Open Science Platform will especially impact the rate of new scientific discoveries and make scientific findings more credible and accountable.
L. Sweeney and J. S. Yoo, “De-anonymizing South Korean Resident Registration Numbers Shared in Prescription Data,” Technology Science, 2015. Publisher's Version. Abstract:
When marketing and data analytic companies purchase medical data, sensitive medical facts about patients can leak if the information includes national identifiers. In some cases, patient information is at risk of exposure even when national identifiers are encrypted to provide privacy. We examined prescription data with encrypted national identifiers from South Korean decedents. Because the data did not include the patient’s name or address and encrypted the patient’s national identifier (Resident Registration Number, or RRN), the data was presumed to be anonymous and to resemble data shared with IMS Health, a large multinational corporation headquartered in the United States that collects these kinds of prescription data on millions of living South Koreans. However, weakly encrypted RRNs may be vulnerable to de-anonymization. They have demographics embedded within their digits in a publicly known pattern and, like credit card numbers, a last digit that is a weighted sum of the prior digits, which allows for arithmetic inspection of the accuracy of a re-identification. We conducted two experiments on the prescription data that demonstrate the vulnerability of its RRN encryption method. This work is timely because South Korea is debating a redesign of its national identifier, the United States is discussing a new universal patient identifier, and countries worldwide are struggling with breaches of personal information that rely on national identification systems.
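For illustration only (not a reproduction of the paper's experiments), the "arithmetic inspection" the abstract refers to can be sketched as a check-digit test: using the commonly documented RRN checksum weights, a candidate re-identification can be sanity-checked without any decryption key.

```python
# Commonly documented RRN check-digit weights (assumed here for illustration).
RRN_WEIGHTS = [2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5]

def rrn_checksum_valid(rrn: str) -> bool:
    """Check whether the last digit of a 13-digit RRN candidate matches the weighted sum of the prior digits."""
    digits = [int(c) for c in rrn if c.isdigit()]
    if len(digits) != 13:
        return False
    s = sum(w * d for w, d in zip(RRN_WEIGHTS, digits[:12]))
    return (11 - s % 11) % 10 == digits[12]
```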
L. Sweeney, “All the Data on All the People,” UC Berkeley Law School & GWU Law School (Berkeley Center for Law & Technology), The Privacy Law Scholars Conference (PLSC), Berkeley, CA, 2015.
L. Sweeney, “Privacy as a Sword and Shield in Public Health,” New York City Department of Public Health, New York, NY, 2015.
M. Altman, “Privacy Principles (framing talk),” United Nations Global Pulse Workshop on ICT4D Principle 8: Address Privacy & Security In Development Programs. New York, USA. 2015.
L. Cranor, T. Rabin, V. Shmatikov, S. Vadhan, and D. Weitzner, “Towards a Privacy Research Roadmap for the Computing Community,” Report for the Computing Community Consortium (CCC), 2015. CCC Version
M. Crosas, G. King, J. Honaker, and L. Sweeney, “Automating Open Science for Big Data,” The ANNALS of the American Academy of Political and Social Science, vol. 659, no. 1, pp. 260-273, 2015. Publisher's Version. Abstract:
The vast majority of social science research uses small (megabyte- or gigabyte-scale) datasets. These fixed-scale datasets are commonly downloaded to the researcher’s computer where the analysis is performed. The data can be shared, archived, and cited with well-established technologies, such as the Dataverse Project, to support the published results. The trend toward big data—including large-scale streaming data—is starting to transform research and has the potential to impact policymaking as well as our understanding of the social, economic, and political problems that affect human societies. However, big data research poses new challenges to the execution of the analysis, archiving and reuse of the data, and reproduction of the results. Downloading these datasets to a researcher’s computer is impractical, leading to analyses taking place in the cloud, and requiring unusual expertise, collaboration, and tool development. The increased amount of information in these large datasets is an advantage, but at the same time it poses an increased risk of revealing personally identifiable sensitive information. In this article, we discuss solutions to these new challenges so that the social sciences can realize the potential of big data.
K. Nissim and U. Stemmer, “On the Generalization Properties of Differential Privacy,” 2015. ArXiv Version. Abstract:
A new line of work, started with Dwork et al., studies the task of answering statistical queries using a sample and relates the problem to the concept of differential privacy. By the Hoeffding bound, a sample of size O(log k/α²) suffices to answer k non-adaptive queries within error α, where the answers are computed by evaluating the statistical queries on the sample. This argument fails when the queries are chosen adaptively (and can hence depend on the sample). Dwork et al. showed that if the answers are computed with (ϵ,δ)-differential privacy then O(ϵ) accuracy is guaranteed with probability 1−O(δ^ϵ). Using the Private Multiplicative Weights mechanism, they concluded that the sample size can still grow polylogarithmically with k. Very recently, Bassily et al. presented an improved bound and showed that (a variant of) the private multiplicative weights algorithm can answer k adaptively chosen statistical queries using sample complexity that grows logarithmically in k. However, their results no longer hold for every differentially private algorithm, and require modifying the private multiplicative weights algorithm in order to obtain their high-probability bounds. We greatly simplify the results of Dwork et al. and improve on the bound by showing that differential privacy guarantees O(ϵ) accuracy with probability 1−O(δ·log(1/ϵ)/ϵ). It would be tempting to guess that an (ϵ,δ)-differentially private computation should guarantee O(ϵ) accuracy with probability 1−O(δ). However, we show that this is not the case, and that our bound is tight (up to logarithmic factors).
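A toy simulation (our own illustration, not taken from the paper) of why the non-adaptive Hoeffding argument fails under adaptivity: every attribute is independent of the label, yet a query constructed from the sample's observed correlations looks strongly correlated on that same sample, while its true population correlation is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 500
X = rng.choice([-1, 1], size=(n, d))   # attributes, drawn independently of the label
y = rng.choice([-1, 1], size=n)        # labels: no attribute truly predicts them

signs = np.sign(X.T @ y)               # adaptive step: inspect per-attribute correlations on the sample
q = np.sign(X @ signs)                 # query (predictor) chosen as a function of those answers

print("accuracy on the sample:", np.mean(q == y))        # well above 1/2
print("accuracy on the population: 0.5 (labels are independent of X)")
```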