[1] Rakesh Agrawal, Tomasz Imieliński, and Arun Swami. Mining association rules between sets of items in large databases. In ACM SIGMOD Record, volume 22, pages 207–216. ACM, 1993.
[2] Rakesh Agrawal, Ramakrishnan Srikant, et al. Fast algorithms for mining association rules. In Proceedings of the 20th International Conference on Very Large Data Bases (VLDB), volume 1215, pages 487–499, 1994.
[3] Brendan Avent, Aleksandra Korolova, David Zeber, Torgeir Hovden, and Benjamin Livshits. BLENDER: Enabling local search with a hybrid differential privacy model. In Proceedings of the 26th USENIX Security Symposium, pages 747–764, 2017.
[4] Raghav Bhaskar, Srivatsan Laxman, Adam Smith, and Abhradeep Thakurta. Discovering frequent patterns in sensitive data. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 503–512. ACM, 2010.
[5] Ferenc Bodon. A fast APRIORI implementation. In FIMI, volume 3, page 63, 2003.
[6] Tom Brijs, Gilbert Swinnen, Koen Vanhoof, and Geert Wets. Using association rules for product assortment decisions: A case study. In Knowledge Discovery and Data Mining, pages 254–260, 1999.
[7] Wei-Yen Day and Ninghui Li. Differentially private publishing of high-dimensional data using sensitivity control. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, pages 451–462. ACM, 2015.
[8] Apple Differential Privacy Team. Learning with privacy at scale. Apple Machine Learning Journal, 1(8), 2017.
[9] Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. In Advances in Neural Information Processing Systems, pages 3571–3580, 2017.
[10] Cynthia Dwork. Differential privacy. In Automata, Languages and Programming, pages 1–12. Springer, 2006.
[11] Cynthia Dwork and Moni Naor. On the difficulties of disclosure prevention in statistical databases or the case for differential privacy. Journal of Privacy and Confidentiality, 2(1), 2010.
[12] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 486–503. Springer, 2006.
[13] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer, 2006.
[14] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
[15] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. Journal of Privacy and Confidentiality, 7(3):17–51, 2017.
[16] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pages 1054–1067. ACM, 2014.
[17] Jiawei Han, Jian Pei, and Yiwen Yin. Mining frequent patterns without candidate generation. In ACM SIGMOD Record, volume 29, pages 1–12. ACM, 2000.
[18] Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. Mining frequent patterns without candidate generation: A frequent-pattern tree approach. Data Mining and Knowledge Discovery, 8(1):53–87, 2004.
[19] F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2016.
[20] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011.
[21] Daniel Kifer and Ashwin Machanavajjhala. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, pages 193–204. ACM, 2011.
[22] Jing Lei. Differentially private M-estimators. In Advances in Neural Information Processing Systems, pages 361–369, 2011.
[23] Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. t-closeness: Privacy beyond k-anonymity and l-diversity. In 2007 IEEE 23rd International Conference on Data Engineering (ICDE), pages 106–115. IEEE, 2007.
[24] Ninghui Li, Wahbeh Qardaji, Dong Su, and Jianneng Cao. PrivBasis: Frequent itemset mining with differential privacy. Proceedings of the VLDB Endowment, 5(11):1340–1351, 2012.
[25] Ninghui Li, Min Lyu, Dong Su, and Weining Yang. Differential privacy: From theory to practice. Synthesis Lectures on Information Security, Privacy, & Trust, 8(4):1–138, 2016.
[26] Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, and Steven Z. Wu. Accuracy first: Selecting a differential privacy level for accuracy constrained ERM. In Advances in Neural Information Processing Systems, pages 2566–2576, 2017.
[27] Bing Liu, Wynne Hsu, and Yiming Ma. Mining association rules with multiple minimum supports. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 337–341. ACM, 1999.
[28] Ashwin Machanavajjhala, Johannes Gehrke, Daniel Kifer, and Muthuramakrishnan Venkitasubramaniam. l-diversity: Privacy beyond k-anonymity. In 22nd International Conference on Data Engineering (ICDE'06), page 24. IEEE, 2006.
[29] Mihai Maruseac and Gabriel Ghinita. Differentially-private mining of moderately-frequent high-confidence association rules. In Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, pages 13–24. ACM, 2015.
[30] Mihai Maruseac and Gabriel Ghinita. Precision-enhanced differentially-private mining of high-confidence association rules. IEEE Transactions on Dependable and Secure Computing, 2018.
[31] Viktor Mayer-Schönberger and Kenneth Cukier. Big Data: The Essential Guide to Work, Life and Learning in the Age of Insight. Hachette UK, 2013.
[32] Frank D. McSherry. Privacy integrated queries: An extensible platform for privacy-preserving data analysis. In Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data, pages 19–30. ACM, 2009.
[33] Chirag N. Modi, Udai Pratap Rao, and Dhiren R. Patel. Maintaining privacy and data quality in privacy preserving association rule mining. In 2010 International Conference on Computing, Communication and Networking Technologies (ICCCNT), pages 1–6. IEEE, 2010.
[34] Arvind Narayanan and Vitaly Shmatikov. Robust de-anonymization of large sparse datasets. In 2008 IEEE Symposium on Security and Privacy (SP), pages 111–125. IEEE, 2008.
[35] Kobbi Nissim and Alexandra Wood. Is privacy privacy? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 2018.
[36] David Reinsel, John Gantz, and John Rydning. Data Age 2025: The evolution of data to life-critical. Don't Focus on Big Data, 2017.
[37] Ryan M. Rogers, Aaron Roth, Jonathan Ullman, and Salil Vadhan. Privacy odometers and filters: Pay-as-you-go composition. In Advances in Neural Information Processing Systems, pages 1921–1929, 2016.
[38] Latanya Sweeney. k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(05):557–570, 2002.
[39] Salil Vadhan. The complexity of differential privacy. In Tutorials on the Foundations of Cryptography, pages 347–450. Springer, 2017.
[40] Mohammed Javeed Zaki. Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, 12(3):372–390, 2000.
[41] Chen Zeng, Jeffrey F. Naughton, and Jin-Yi Cai. On differentially private frequent itemset mining. Proceedings of the VLDB Endowment, 6(1):25–36, 2012.
[42] Zijian Zheng, Ron Kohavi, and Llew Mason. Real world performance of association rule algorithms. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 401–406. ACM, 2001.