DOI: 10.1145/3580305.3599390
Research Article · Free Access

Improving the Expressiveness of K-hop Message-Passing GNNs by Injecting Contextualized Substructure Information

Published: 04 August 2023

ABSTRACT

Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs and have achieved state-of-the-art performance on many graph-related tasks; however, it has been shown that the expressive power of standard GNNs is bounded above by the 1-dimensional Weisfeiler-Lehman (1-WL) test. Recently, a number of works have aimed to enhance the expressive power of graph neural networks. One line of work develops K-hop message-passing GNNs, in which a node's representation is updated by aggregating information not only from its direct neighbors but from all neighbors within K hops. Another line of work leverages subgraph information and is provably strictly more powerful than the 1-WL test. In this work, we discuss the limitations of K-hop message-passing GNNs and propose a substructure encoding function to uplift the expressive power of any K-hop message-passing GNN. We further inject contextualized substructure information to enhance the expressiveness of K-hop message-passing GNNs. Our method is provably more powerful than previous K-hop graph neural networks and than 1-WL subgraph GNNs, a specific type of subgraph-based GNN model, and is no less powerful than the 3-WL test. Empirically, our proposed method sets new state-of-the-art performance or achieves comparable performance on a variety of datasets. Our code is available at https://github.com/tianyao-aka/Expresive_K_hop_GNNs.
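
To make the core mechanism concrete, the following is a minimal, illustrative Python sketch. It is not the authors' implementation, and every function and variable name below is hypothetical. The sketch gathers a node's neighbors grouped by hop distance via breadth-first search, augments each node's features with a simple substructure statistic (its triangle count), and sums the augmented features hop by hop into the node's representation, mirroring in spirit a K-hop message-passing update with injected substructure information.

```python
from collections import deque
from itertools import combinations

def neighbors_by_hop(adj, source, k):
    """BFS from `source`, grouping nodes by hop distance 1..k.

    `adj` maps each node to a list of its direct neighbors.
    Returns a dict: hop distance -> list of nodes at that distance.
    """
    dist = {source: 0}
    hops = {h: [] for h in range(1, k + 1)}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand beyond K hops
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                hops[dist[v]].append(v)
                queue.append(v)
    return hops

def triangle_count(adj, u):
    """Toy substructure statistic: number of triangles through node u."""
    return sum(1 for a, b in combinations(adj[u], 2) if b in adj[a])

def k_hop_update(adj, features, source, k):
    """Toy K-hop aggregation over substructure-augmented features.

    Each node's feature vector is extended with its triangle count, then
    the augmented features are summed hop by hop into the source node's
    representation.
    """
    aug = {v: list(x) + [float(triangle_count(adj, v))]
           for v, x in features.items()}
    combined = list(aug[source])
    for h, nodes in neighbors_by_hop(adj, source, k).items():
        for v in nodes:
            for i, x in enumerate(aug[v]):
                combined[i] += x
    return combined

# Tiny example: a triangle 0-1-2 with a pendant node 3, scalar input features.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
features = {0: [1.0], 1: [2.0], 2: [3.0], 3: [4.0]}
print(k_hop_update(adj, features, 3, k=2))  # -> [10.0, 3.0]
```

A learned K-hop GNN layer would replace the plain sums with per-hop learnable transformations and apply the update to every node in parallel; the sketch only shows how hop-wise neighborhoods and a substructure count enter the aggregation.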


Supplemental Material

rtfp0298-2min-promo.mp4 (MP4, 5.4 MB)


Published in

KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2023, 5996 pages
ISBN: 9798400701030
DOI: 10.1145/3580305
Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rate: 1,133 of 8,635 submissions, 13% (overall)

