Privacy-Preserving Distributed Multi-Task Learning against Inference Attack in Cloud Computing

Published: 22 October 2021

Abstract

Because of the powerful computing and storage capabilities of cloud computing, machine learning as a service (MLaaS) has recently attracted attention from organizations seeking to train models over representative datasets. When these datasets are collected from different organizations and follow different distributions, multi-task learning (MTL) is usually used to improve generalization performance by scheduling the related training tasks onto virtual machines in MLaaS and transferring related knowledge between those tasks. However, because of concerns about privacy breaches (e.g., property inference attacks and model inversion attacks), organizations cannot directly outsource their training data to MLaaS or share their extracted knowledge in plaintext, especially organizations in sensitive domains. In this article, we propose a novel privacy-preserving mechanism for distributed MTL, namely NOInfer, that allows several task nodes to train their models locally and transfer their shared knowledge privately. Specifically, we construct a single-server architecture to achieve private MTL, which protects each task node's local data even if \(n-1\) out of \(n\) nodes collude. Then, a new protocol for the Alternating Direction Method of Multipliers (ADMM) is designed to perform privacy-preserving model training, which resists inference attacks launched through intermediate results and ensures that the training efficiency is independent of the number of training samples. When releasing the trained model, we also design a differentially private model-releasing mechanism to resist membership inference attacks. Furthermore, we analyze the privacy preservation and efficiency of NOInfer in theory. Finally, we evaluate NOInfer on two test datasets; the evaluation results demonstrate that NOInfer achieves distributed MTL efficiently and effectively.
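The two building blocks the abstract names — consensus-style ADMM training across task nodes and a differentially private model release — can be illustrated with a minimal plaintext sketch. This is not NOInfer's actual protocol (it omits the single-server architecture, the encrypted knowledge transfer, and the paper's specific noise calibration); the three-node setup, the regression objective, and the `sensitivity` value below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "task nodes", each holding a private local dataset (A_i, b_i)
# drawn from a shared linear model (illustrative setup, not the paper's tasks).
d = 5
true_x = rng.normal(size=d)
nodes = []
for _ in range(3):
    A = rng.normal(size=(40, d))
    b = A @ true_x + 0.01 * rng.normal(size=40)
    nodes.append((A, b))

rho = 1.0                      # ADMM penalty parameter
z = np.zeros(d)                # shared (consensus) model
xs = [np.zeros(d) for _ in nodes]   # local models
us = [np.zeros(d) for _ in nodes]   # scaled dual variables

for _ in range(100):
    # Local x-update: each node solves its own regularized least squares,
    # so raw training data never leaves the node.
    for i, (A, b) in enumerate(nodes):
        lhs = A.T @ A + rho * np.eye(d)
        rhs = A.T @ b + rho * (z - us[i])
        xs[i] = np.linalg.solve(lhs, rhs)
    # Consensus z-update: only the intermediate vectors (x_i + u_i) would
    # need to be exchanged (in NOInfer these exchanges are protected).
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
    # Dual update.
    for i in range(len(nodes)):
        us[i] += xs[i] - z

# Reference solution: ordinary least squares on the pooled data.
A_all = np.vstack([A for A, _ in nodes])
b_all = np.concatenate([b for _, b in nodes])
x_star, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)

# Differentially private release via Laplace output perturbation.
# The sensitivity bound here is a placeholder; a real mechanism must
# derive it from the loss, regularization, and data domain.
eps = 1.0
sensitivity = 0.1
z_private = z + rng.laplace(scale=sensitivity / eps, size=d)
```

The consensus iterate `z` converges to the pooled least-squares solution even though each node only ever touches its own data; adding calibrated noise before release is the standard output-perturbation route to limiting membership inference, though the paper's mechanism may differ in its details.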



• Published in: ACM Transactions on Internet Technology, Volume 22, Issue 2 (May 2022), 582 pages
• ISSN: 1533-5399; EISSN: 1557-6051
• DOI: 10.1145/3490674
• Editor: Ling Liu


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 June 2020
• Revised: 1 September 2020
• Accepted: 1 September 2020
• Published: 22 October 2021

Qualifiers: research-article; refereed
