Privacy Tools for Sharing Research Data: Publications

2016. "Differentially Private Chi-Squared Hypothesis Testing: Goodness of Fit and Independence Testing." Proceedings of The 33rd International Conference on Machine Learning, PMLR. Publisher's Version

Abstract: Hypothesis testing is a useful statistical tool for determining whether a given model should be rejected based on a sample from the population. Sample data may contain sensitive information about individuals, such as medical records, so it is important to design statistical tests that guarantee the privacy of subjects in the data. In this work, we study hypothesis testing subject to differential privacy, specifically chi-squared tests of goodness of fit for multinomial data and of independence between two categorical variables.
We propose new tests for goodness of fit and independence that, like the classical versions, can be used to determine whether a given model should be rejected, and that additionally ensure differential privacy. We give both Monte Carlo-based hypothesis tests and tests that more closely follow the classical chi-squared goodness-of-fit test and the Pearson chi-squared test for independence. Crucially, when determining significance, our tests account for the distribution of the noise injected to ensure privacy.
We show that these tests achieve the desired significance levels, in sharp contrast to direct applications of classical tests to differentially private contingency tables, which can produce wildly varying significance levels. Moreover, we study the statistical power of these tests and show empirically that, to achieve the same power as the classical non-private tests, our new tests require only a modest increase in sample size.
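The privatize-then-calibrate idea described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's exact algorithm: the function name `dp_gof_test`, the use of the Laplace mechanism with scale 2/ε per count, and the assumption that the total sample size n is public are all ours. The key point it illustrates is that the Monte Carlo null distribution is built from *noisy* simulated counts, so the significance threshold accounts for the privacy noise.

```python
import math
import random

def dp_gof_test(counts, probs, epsilon, n_sims=1000, seed=None):
    """Monte Carlo goodness-of-fit test on Laplace-noised counts (a sketch).

    Privatize the observed multinomial counts with Laplace noise, compute
    the chi-squared statistic on the noisy counts, then calibrate the
    p-value by simulating the same noisy statistic under the null, so the
    reference distribution accounts for the injected noise.
    Assumes the total sample size n is public.
    """
    rng = random.Random(seed)
    n = sum(counts)
    expected = [n * p for p in probs]
    scale = 2.0 / epsilon  # L1 sensitivity of a histogram release is 2

    def laplace():
        # Inverse-CDF sampling of Laplace(0, scale); rng.random() is in [0, 1).
        u = rng.random()
        while u == 0.0:
            u = rng.random()
        return scale * math.log(2 * u) if u < 0.5 else -scale * math.log(2 * (1 - u))

    def noisy_chi_sq(cs):
        # Chi-squared statistic computed on noise-perturbed counts.
        return sum((c + laplace() - e) ** 2 / e for c, e in zip(cs, expected))

    observed = noisy_chi_sq(counts)
    null_stats = []
    for _ in range(n_sims):
        # Fresh multinomial sample under the null, plus fresh privacy noise.
        draws = rng.choices(range(len(probs)), weights=probs, k=n)
        sim = [0] * len(probs)
        for i in draws:
            sim[i] += 1
        null_stats.append(noisy_chi_sq(sim))
    # Standard Monte Carlo p-value with the +1 correction.
    p_value = (1 + sum(s >= observed for s in null_stats)) / (1 + n_sims)
    return observed, p_value
```

For example, `dp_gof_test([100] * 6, [1/6] * 6, epsilon=1.0)` tests a die for fairness under ε = 1 differential privacy; a naive approach that fed the noisy counts into a standard chi-squared table would ignore the noise and misreport significance.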
2018. "The Complexity of Computing the Optimal Composition of Differential Privacy." In Theory of Cryptography Conference (TCC 2016). Theory of Computing, 14, 8, Pp. 1-35 (2018). TOC's Version
2016. "Order revealing encryption and the hardness of private learning." In Proceedings of the 12th Theory of Cryptography Conference (TCC 2016). Tel-Aviv, Israel.
2015. "Robust Traceability from Trace Amounts." In IEEE Symposium on Foundations of Computer Science (FOCS 2015). Berkeley, California.
2016. "Towards a Modern Approach to Privacy-Aware Government Data Releases." Berkeley Technology Law Journal, 30, 3.
2015. "Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery." JMLR: Workshop and Conference Proceedings, 40, Pp. 1-41. PDF
2015. "Automating Open Science for Big Data." The ANNALS of the American Academy of Political and Social Science, 659, 1, Pp. 260-273. Publisher's Version
2015. "Differentially Private Release and Learning of Threshold Functions." In 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 15). Berkeley, California. ArXiv Version
2015. "Integrating Approaches to Privacy Across the Research Lifecycle: When is Information Purely Public?" Social Science Research Network. SSRN Version
2015. "In the Age of the Web, What Does “Public” Mean?" Internet Monitor 2014: Data and Privacy. Online Version
2015. "Fair Information Sharing for Treasure Hunting." In AAAI Conference on Artificial Intelligence. North America: Association for the Advancement of Artificial Intelligence (AAAI). Publisher's Version
2015. "Learning Privately with Labeled and Unlabeled Examples." Accepted for publication, SODA 2015. arXiv.org
2014. "Privacy Games." In 10th Conference on Web and Internet Economics (WINE). Beijing, China.