DOI: 10.1145/3459637.3482249
research-article

Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm

Published: 30 October 2021

ABSTRACT

Vertical federated learning (VFL) is attracting increasing attention due to the emerging demand for multi-party collaborative modeling and concerns about privacy leakage. A complete set of metrics for evaluating VFL algorithms should include model applicability, privacy security, communication cost, and computation efficiency, where privacy security is especially important to VFL. However, to the best of our knowledge, no existing VFL algorithm satisfies all of these criteria well. To address this challenging problem, we reveal in this paper that zeroth-order optimization (ZOO) is a desirable companion for VFL. Specifically, ZOO can 1) improve the model applicability of the VFL framework, 2) protect the VFL framework from privacy leakage under curious, colluding, and malicious threat models, and 3) support inexpensive communication and efficient computation. Based on these properties, we propose a novel and practical VFL framework with black-box models, which is inseparably tied to the promising properties of ZOO. We believe it takes a stride towards designing a practical VFL framework that matches all the criteria. Under this framework, we propose two novel asynchronous zeroth-order algorithms for vertical federated learning (AsyREVEL) with different smoothing techniques. We theoretically derive the convergence rates of the AsyREVEL algorithms in the nonconvex setting. More importantly, we prove the privacy security of the proposed framework under existing VFL attacks at different levels. Extensive experiments on benchmark datasets demonstrate the favorable model applicability, satisfactory privacy security, inexpensive communication, efficient computation, scalability, and losslessness of our framework.
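For readers unfamiliar with ZOO, the snippet below is a minimal sketch of a standard two-point zeroth-order gradient estimator with Gaussian smoothing, the generic black-box technique that frameworks of this kind build on. It is illustrative only and is not the authors' AsyREVEL update rule; the function name zo_gradient, the smoothing radius mu, the number of sampled directions, and the toy quadratic loss are assumptions made for the example.

import numpy as np

def zo_gradient(loss, w, mu=1e-3, num_dirs=10, rng=None):
    # Two-point zeroth-order estimate of the gradient of `loss` at `w` via
    # Gaussian smoothing. Only loss values are queried, never analytic
    # gradients, which is what lets a party treat its local model as a black box.
    rng = np.random.default_rng() if rng is None else rng
    d = w.shape[0]
    grad = np.zeros(d)
    base = loss(w)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)                  # random Gaussian direction
        grad += (loss(w + mu * u) - base) / mu * u  # finite difference along u
    return grad / num_dirs

# Toy usage: minimize a simple quadratic using only function-value queries.
quad = lambda w: float(w @ np.diag([1.0, 5.0]) @ w)
w = np.array([3.0, -2.0])
for _ in range(300):
    w = w - 0.05 * zo_gradient(quad, w)
print(w)  # approaches the minimizer [0., 0.]

Intuitively, exchanging only such scalar loss evaluations, rather than exact gradients or raw features, is what underlies the communication and privacy benefits claimed in the abstract.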


Published in

CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management
October 2021, 4966 pages
ISBN: 9781450384469
DOI: 10.1145/3459637
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
