Publications by Year: 2014

Daniel J. Weitzner, Hal Abelson, Cynthia Dwork, Cameron Kerry, Daniela Rus, Sandy Pentland, and Salil Vadhan. 4/4/2014. “Consumer Privacy Bill of Rights and Big Data: Response to White House Office of Science and Technology Policy Request for Information”. Abstract

In response to the White House Office of Science and Technology Policy Request for Information on Big Data Privacy, we offer these comments based on presentations and discussions at the White House-MIT workshop “Big Data Privacy Workshop: Advancing the State of the Art in Technology and Practice” and subsequent workshops co-sponsored with Data & Society, the NYU Information Law Institute, and the UC Berkeley iSchool.

PDF
Micah Altman, David O’Brien, Salil Vadhan, and Alexandra Wood. 3/31/2014. “Comment to The White House Office of Science and Technology Policy (OSTP): Big Data Study, Request for Information”. Abstract
On January 23, 2014, President Barack Obama asked John Podesta to perform a comprehensive review of big data and privacy. During this review, the White House Office of Science and Technology Policy issued a request for public comment on questions related to the public policy implications of big data.
 
 
Micah Altman, David O’Brien, Salil Vadhan, and Alexandra Wood submitted a response on behalf of the Privacy Tools for Sharing Research Data project. Their comments outline a broad, comprehensive, and systematic framework for privacy analysis and provide a taxonomy of modern technological, statistical, and cryptographic approaches to preserving both data privacy and utility. They argue that an analysis of information privacy should address the scope of information covered, the sensitivity of that information, the risk that sensitive information will be disclosed, the availability of control and accountability mechanisms, and the suitability of existing data sharing models, as applied across the entire lifecycle of information use, from collection through dissemination and reuse.
 
 
With this submission, the authors discuss the inadequacy of traditional approaches to privacy protection and recommend a modern approach that considers three principles. First, the risks of informational harm are generally not a simple function of the presence or absence of specific fields, attributes, or keywords in the released set of data. Second, redaction, pseudonymization, coarsening, and hashing are often neither adequate nor appropriate practices, nor is releasing less information necessarily more privacy protective. Third, a thoughtful analysis with expert consultation is necessary in order to evaluate the sensitivity of the data collected, to quantify the associated re-identification risks, and to design useful and safe release mechanisms.
PDF
Micah Altman. 3/19/2014. “Comment to the Federal Trade Commission on Mobile Device Tracking”. PDF
Micah Altman, David O’Brien, Salil Vadhan, and Alexandra Wood. 3/8/2014. “Comment on the Occupational Safety and Health Administration (OSHA) Proposed Rule: Improve Tracking of Workplace Injuries and Illnesses; Extension of Comment Period”. Full Text at Regulations.gov PDF
Christos Dimoulas, Scott Moore, Aslan Askarov, and Stephen Chong. 2014. “Declarative Policies for Capability Control.” In Proceedings of the 27th IEEE Computer Security Foundations Symposium. Piscataway, NJ, USA: IEEE Press. Abstract

In capability-safe languages, components can access a resource only if they possess a capability for that resource. As a result, a programmer can prevent an untrusted component from accessing a sensitive resource by ensuring that the component never acquires the corresponding capability. In order to reason about which components may use a sensitive resource it is necessary to reason about how capabilities propagate through a system. This may be difficult, or, in the case of dynamically composed code, impossible to do before running the system.

To counter this situation, we propose extensions to capability-safe languages that restrict the use of capabilities according to declarative policies. We introduce two independently useful semantic security policies to regulate capabilities and describe language-based mechanisms that enforce them. Access control policies restrict which components may use a capability and are enforced using higher-order contracts. Integrity policies restrict which components may influence (directly or indirectly) the use of a capability and are enforced using an information-flow type system. Finally, we describe how programmers can dynamically and soundly combine components that enforce access control or integrity policies with components that enforce different policies or even no policy at all.
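
The access-control half of this design can be pictured with a small sketch: a sensitive capability is wrapped so that only components named in a policy may invoke it. The sketch below is illustrative Python, not the paper's capability-safe language or its higher-order contracts, and all names in it are hypothetical.

```python
# Illustrative sketch only: a capability (here, a file-appending function) is wrapped
# so that only components named in an access-control policy may invoke it.
# This mimics the *intent* of restricting capability use, not the paper's mechanism.

class PolicyViolation(Exception):
    pass

def restrict(capability, allowed_components):
    """Return a wrapped capability usable only by the allowed components."""
    def guarded(component, *args, **kwargs):
        if component not in allowed_components:
            raise PolicyViolation(f"{component} may not use this capability")
        return capability(*args, **kwargs)
    return guarded

# A sensitive capability: appending a line to an audit log.
def append_log(line):
    with open("audit.log", "a") as f:
        f.write(line + "\n")

guarded_log = restrict(append_log, allowed_components={"auditor"})

guarded_log("auditor", "ok: auditor wrote an entry")       # permitted
try:
    guarded_log("plugin", "untrusted component writes")    # rejected
except PolicyViolation as e:
    print(e)
```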

PDF
Karthekeyan Chandrasekaran, Justin Thaler, Jonathan Ullman, and Andrew Wan. 2014. “Faster Private Release of Marginals on Small Databases.” In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, Pp. 387–402. New York, NY, USA: ACM. Publisher's Version PDF
Mark Bun, Jonathan Ullman, and Salil Vadhan. 2014. “Fingerprinting Codes and the Price of Approximate Differential Privacy.” In SIAM Journal on Computing, Special Issue on STOC '14, Pp. 1–10. PDF arXiv Version
David O'Brien. 2014. “In the Age of the Web, What Does 'Public' Mean?” Internet Monitor 2014: Reflections on the Digital World, Pp. 87-89 (Berkman Center Research Publication 2014-17). SSRN Version PDF
Alexandra Wood, David O'Brien, Micah Altman, Alan Karr, Urs Gasser, Michael Bar-Sinai, Kobbi Nissim, Jonathan Ullman, Salil Vadhan, and Michael John Wojcik. 2014. Integrating Approaches to Privacy Across the Research Lifecycle: Long-Term Longitudinal Studies. Social Science Research Network. Cambridge: Harvard University. Publisher's Version Abstract

On September 24-25, 2013, the Privacy Tools for Sharing Research Data project at Harvard University held a workshop titled "Integrating Approaches to Privacy across the Research Data Lifecycle." Over forty leading experts in computer science, statistics, law, policy, and social science research convened to discuss the state of the art in data privacy research. The resulting conversations centered on the emerging tools and approaches from the participants’ various disciplines and how they should be integrated in the context of real-world use cases that involve the management of confidential research data.

This workshop report, the first in a series, provides an overview of the long-term longitudinal study use case. Long-term longitudinal studies collect, at multiple points over a long period of time, highly specific and often sensitive data describing the health, socioeconomic, or behavioral characteristics of human subjects. The value of such studies lies in part in their ability to link a set of behaviors and changes to each individual, but these factors tend to make the combination of observable characteristics associated with each subject unique and potentially identifiable.

Using the research information lifecycle as a framework, this report discusses the defining features of long-term longitudinal studies and the associated challenges for researchers tasked with collecting and analyzing such data while protecting the privacy of human subjects. It also describes the disclosure risks and common legal and technical approaches currently used to manage confidentiality in longitudinal data. Finally, it identifies urgent problems and areas for future research to advance the integration of various methods for preserving confidentiality in research data.

PDF
Thomas Steinke and Jonathan Ullman. 2014. “Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery”. arXiv.org. Abstract

We show a tight bound on the number of adaptively chosen statistical queries that a computationally efficient algorithm can answer accurately given n samples from an unknown distribution. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is accurate if it is "close" to the correct expectation over the distribution. This question was recently considered by Dwork et al., who showed that Ω̃(n²) queries can be answered efficiently, and also by Hardt and Ullman, who showed that answering Õ(n³) queries is computationally hard. We close the gap between the two bounds by proving a new, nearly-optimal hardness result. Specifically, we show that, under a standard hardness assumption, there is no computationally efficient algorithm that, given n samples from an unknown distribution, can give valid answers to O(n²) adaptively chosen statistical queries. An implication of our results is that computationally efficient algorithms for answering arbitrary, adaptively chosen statistical queries may as well be differentially private. We obtain our results via an optimal construction of a new combinatorial object that we call an interactive fingerprinting code, which may be of independent interest.
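
For readers new to the model, a statistical query simply asks for the average of a 0/1 predicate over the data. The short Python sketch below contrasts the plain empirical answer with a Laplace-noised answer of the kind suggested by the abstract's connection to differential privacy; the predicate, data, and noise scale are illustrative assumptions, not the paper's construction.

```python
import random

# Minimal sketch of the statistical-query model: a query is a 0/1 predicate,
# and its answer is the average of that predicate over the sample.
# The noisy variant adds Laplace noise at the standard mean-query calibration
# 1/(epsilon * n); it only illustrates the differential-privacy connection.

def empirical_answer(sample, predicate):
    return sum(predicate(x) for x in sample) / len(sample)

def noisy_answer(sample, predicate, epsilon=0.5):
    # Laplace(0, 1) noise as a difference of two Exp(1) draws.
    noise = random.expovariate(1) - random.expovariate(1)
    return empirical_answer(sample, predicate) + noise / (epsilon * len(sample))

# Example: n samples from an unknown distribution over {0, ..., 9}.
sample = [random.randint(0, 9) for _ in range(1000)]
query = lambda x: x >= 5          # predicate: "is the value at least 5?"
print(empirical_answer(sample, query))
print(noisy_answer(sample, query))
```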

PDF
Urs Gasser, Jonathan Zittrain, R. Faris, and R. H. Jones. 2014. Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse. 17th ed. Cambridge, MA: Berkman Center Research Publication. SSRN Version Abstract

This publication is the second annual report of the Internet Monitor project at the Berkman Center for Internet & Society at Harvard University. As with the inaugural report, this year’s edition is a collaborative effort of the extended Berkman community. Internet Monitor 2014: Reflections on the Digital World includes nearly three dozen contributions from friends and colleagues around the world that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.

The result, intended for a general interest audience, brings together reflection and analysis on a broad range of issues and regions — from an examination of Europe’s “right to be forgotten” to a review of the current state of mobile security to an exploration of a new wave of movements attempting to counter hate speech online — and offers it up for debate and discussion. Our goal remains not to provide a definitive assessment of the “state of the Internet” but rather to provide a rich compendium of commentary on the year’s developments with respect to the online space.

Last year’s report examined the dynamics of Internet controls and online activity through the actions of government, corporations, and civil society. We focus this year on the interplay between technological platforms and policy; growing tensions between protecting personal privacy and using big data for social good; the implications of digital communications tools for public discourse and collective action; and current debates around the future of Internet governance.

The report reflects the diversity of ideas and input the Internet Monitor project seeks to invite. Some of the contributions are descriptive; others prescriptive. Some contain purely factual observations; others offer personal opinion. In addition to those in traditional essay format, contributions this year include a speculative fiction story exploring what our increasingly data-driven world might bring, a selection of “visual thinking” illustrations that accompany a number of essays, a “Year in Review” timeline that highlights many of the year’s most fascinating Internet-related news stories (an interactive version of which is available at netmonitor.org), and a slightly tongue-in-cheek “By the Numbers” section that offers a look at the year’s important digital statistics. We believe that each contribution offers insights, and hope they provoke further reflection, conversation, and debate in both offline and online settings around the globe.

PDF
Michael Kearns, Mallesh Pai, Aaron Roth, and Jonathan Ullman. 2014. “Mechanism Design in Large Games: Incentives and Privacy.” In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, Pp. 403–410. New York, NY, USA: ACM. Publisher's Version PDF
Mallesh M. Pai, Aaron Roth, and Jonathan Ullman. 2014. “An Anti-Folk Theorem for Large Repeated Games with Imperfect Monitoring.” CoRR, abs/1402.2801. PDF
Yiling Chen, Or Sheffet, and Salil Vadhan. 2014. “Privacy Games.” In 10th Conference on Web and Internet Economics (WINE). Beijing, China. PDF
Lucas Waye. 2014. “Privacy Integrated Data Stream Queries.” In Proceedings of the 2014 International Workshop on Privacy & Security in Programming (PSP '14). New York, NY: ACM. ACM Digital Library Version PDF
Raef Bassily, Adam Smith, and Abhradeep Thakurta. 2014. “Private Empirical Risk Minimization, Revisited.” In ICML 2014 Workshop on Learning, Security and Privacy. Beijing, China. Publisher's Version Abstract

In this paper, we initiate a systematic investigation of differentially private algorithms for convex empirical risk minimization. Various instantiations of this problem have been studied before. We provide new algorithms and matching lower bounds for private ERM assuming only that each data point's contribution to the loss function is Lipschitz bounded and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. 

Our algorithms run in polynomial time, and in some cases even match the optimal non-private running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ϵ,0)- and (ϵ,δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different.

Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contribution of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
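
One common template for (ϵ,δ)-private ERM is gradient descent in which each averaged, norm-clipped gradient is perturbed with Gaussian noise. The sketch below illustrates that generic template for logistic loss; it is not a reproduction of the paper's algorithms, and the step size, clipping norm, and noise scale are placeholder assumptions.

```python
import numpy as np

# Illustrative noisy gradient descent for logistic regression.
# Generic "clip each example's gradient, average, add Gaussian noise" template;
# sigma is a placeholder, not the calibration derived in the paper.

def noisy_gradient_descent(X, y, steps=200, lr=0.1, clip=1.0, sigma=0.5):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        grads = -(y / (1 + np.exp(margins)))[:, None] * X   # per-example logistic gradients
        norms = np.maximum(1.0, np.linalg.norm(grads, axis=1) / clip)
        grads = grads / norms[:, None]                       # bound each example's influence
        g = grads.mean(axis=0) + np.random.normal(0, sigma * clip / n, size=d)
        w -= lr * g
    return w

# Toy data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + 0.1 * rng.normal(size=500))
print(noisy_gradient_descent(X, y))
```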

PDF
Justin Hsu, Aaron Roth, Tim Roughgarden, and Jonathan Ullman. 2014. “Privately Solving Linear Programs.” In Automata, Languages, and Programming (eds. Javier Esparza, Pierre Fraigniaud, Thore Husfeldt, and Elias Koutsoupias), 8572: Pp. 612-624. Springer Berlin Heidelberg. Publisher's Version PDF
Kobbi Nissim, Salil Vadhan, and David Xiao. 2014. “Redrawing the Boundaries on Purchasing Data from Privacy-sensitive Individuals.” In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, Pp. 411–422. New York, NY, USA: ACM. Publisher's Version PDF
Vitaly Feldman and David Xiao. 2014. “Sample Complexity Bounds on Differentially Private Learning via Communication Complexity.” Proceedings of The 27th Conference on Learning Theory (COLT 2014) 35, Pp. 1-20. Barcelona, Spain: JMLR Workshop and Conference Proceedings. Publisher's Version Abstract

In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy introduced by Dwork et al. (2006) that ensures that the output of an algorithm leaks little information about the data point provided by any of the participating individuals. Sample complexity of private PAC and agnostic learning was studied in a number of prior works starting with (Kasiviswanathan et al., 2008) but a number of basic questions still remain open (Beimel et al. 2010; Chaudhuri and Hsu, 2011; Beimel et al., 2013ab). 

Our main contribution is an equivalence between the sample complexity of differentially-private learning of a concept class C (or SCDP(C)) and the randomized one-way communication complexity of the evaluation problem for concepts from C. Using this equivalence we prove the following bounds:

  • SCDP(C) = Ω(LDim(C)), where LDim(C) is Littlestone's (1987) dimension, which characterizes the number of mistakes in the online-mistake-bound learning model. This result implies that SCDP(C) is different from the VC-dimension of C, resolving one of the main open questions from prior work.
  • For any t, there exists a class C such that LDim(C) = 2 but SCDP(C) ≥ t.
  • For any t, there exists a class C such that the sample complexity of (pure) α-differentially private PAC learning is Ω(t/α) but the sample complexity of the relaxed (α,β)-differentially private PAC learning is O(log(1/β)/α). This resolves an open problem from (Beimel et al., 2013b). 

We also obtain simpler proofs for a number of known related results. Our equivalence builds on a characterization of sample complexity by Beimel et al. (2013a), and our bounds rely on a number of known results from communication complexity.
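
To see why the first bound above speaks to the VC-dimension question, it helps to recall a textbook class whose Littlestone dimension far exceeds its VC dimension. The example below (thresholds over an ordered domain, chosen here for illustration and not taken from the paper) combines that standard fact with the bound SCDP(C) = Ω(LDim(C)).

```latex
% Thresholds over an ordered domain of size 2^t (a textbook class, assumed here for
% illustration): VC dimension 1, Littlestone dimension Theta(t) via binary search.
\[
  C_t = \{\, c_a : a \in \{1,\dots,2^t\} \,\}, \qquad c_a(x) = \mathbf{1}[x \ge a],
\]
\[
  \mathrm{VC}(C_t) = 1, \qquad \mathrm{LDim}(C_t) = \Theta(t)
  \;\Longrightarrow\; \mathrm{SCDP}(C_t) = \Omega(t) \gg \mathrm{VC}(C_t).
\]
```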

PDF
Adam Tanner. 2014. What Stays in Vegas: The World of Personal Data -- Lifeblood of Big Business -- and the End of Privacy as We Know It. New York, NY: Public Affairs. Available on Amazon Abstract

The greatest threat to privacy today is not the NSA, but good-old American companies. Internet giants, leading retailers, and other firms are voraciously gathering data with little oversight from anyone.

In Las Vegas, no company knows the value of data better than Caesars Entertainment. Many thousands of enthusiastic clients pour through the ever-open doors of their casinos. The secret to the company’s success lies in their one unrivaled asset: they know their clients intimately by tracking the activities of the overwhelming majority of gamblers. They know exactly what games they like to play, what foods they enjoy for breakfast, when they prefer to visit, who their favorite hostess might be, and exactly how to keep them coming back for more.

Caesars’ dogged data-gathering methods have been so successful that the company has grown to become the world’s largest casino operator, and they have inspired companies of all kinds to ramp up their own data mining in the hopes of boosting their targeted marketing efforts. Some do this themselves. Some rely on data brokers. Others clearly enter a moral gray zone that should make American consumers deeply uncomfortable.

We live in an age when our personal information is harvested and aggregated whether we like it or not. And it is growing ever more difficult for those businesses that choose not to engage in more intrusive data gathering to compete with those that do. Tanner’s timely warning resounds: Yes, there are many benefits to the free flow of all this data, but there is a dark, unregulated, and destructive netherworld as well.