ABSTRACT
Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs and have achieved state-of-the-art performance on many graph-related tasks; however, the expressive power of standard GNNs has been shown to be at most equivalent to the 1-dimensional Weisfeiler-Lehman (1-WL) test. Recently, several lines of work have aimed to enhance the expressive power of graph neural networks. One line develops K-hop message-passing GNNs, in which a node's representation is updated by aggregating information not only from its direct neighbors but from all neighbors within K hops. Another line leverages subgraph information and is provably strictly more powerful than the 1-WL test. In this work, we discuss the limitations of K-hop message-passing GNNs and propose a substructure encoding function to uplift the expressive power of any K-hop message-passing GNN. We further inject contextualized substructure information to enhance the expressiveness of K-hop message-passing GNNs. Our method is provably more powerful than previous work on K-hop graph neural networks and than 1-WL subgraph GNNs, a specific type of subgraph-based GNN model, and is no less powerful than 3-WL. Empirically, our proposed method sets new state-of-the-art performance or achieves comparable performance on a variety of datasets. Our code is available at https://github.com/tianyao-aka/Expresive_K_hop_GNNs.
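To make the K-hop message-passing scheme described above concrete, here is a minimal, illustrative sketch in plain Python. It uses the shortest-path-distance notion of a K-hop neighborhood, aggregates each hop separately by an elementwise sum, and concatenates the per-hop aggregates; these are common design choices in the K-hop GNN literature, not necessarily the exact update used in the paper, and helper names such as `k_hop_update` are hypothetical.

```python
# Sketch of one K-hop message-passing update on an unweighted graph.
# Assumes an adjacency list `adj` and per-node feature vectors `features`.
from collections import deque

def neighbors_within_k(adj, source, k):
    """BFS that buckets nodes by shortest-path distance 1..k from `source`."""
    dist = {source: 0}
    buckets = [[] for _ in range(k)]
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == k:          # do not expand past the K-th hop
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                buckets[dist[v] - 1].append(v)
                queue.append(v)
    return buckets                # buckets[h-1] holds the h-th hop neighbors

def k_hop_update(adj, features, node, k):
    """Aggregate each hop separately (sum), then combine with the node itself."""
    dim = len(features[node])
    combined = list(features[node])
    for hop_nodes in neighbors_within_k(adj, node, k):
        agg = [0.0] * dim
        for v in hop_nodes:
            agg = [a + x for a, x in zip(agg, features[v])]
        combined.extend(agg)      # concatenate per-hop aggregates across hops
    return combined

# Toy usage: a 4-cycle with one-hot node features and K = 2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
feats = {i: [1.0 if j == i else 0.0 for j in range(4)] for i in range(4)}
print(k_hop_update(adj, feats, 0, k=2))
```

In a learned model, each per-hop aggregate would additionally pass through a hop-specific transformation before being combined; the sketch keeps only the neighborhood structure that distinguishes K-hop from standard 1-hop message passing.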