ARIEL: Adversarial Graph Contrastive Learning

Contrastive learning is an effective unsupervised method in graph representation learning, and the key component of contrastive learning lies in the construction of positive and negative samples. Previous methods usually utilize the proximity of nodes in the graph as the principle. Recently, data-augmentation-based contrastive learning has shown great power in the visual domain, and some works have extended this method from images to graphs. However, unlike data augmentation on images, data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples, which leaves much space for improvement. In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints. We develop a new technique called information regularization for stable training and use subgraph sampling for scalability. We generalize our method from node-level contrastive learning to the graph level by treating each graph instance as a super-node. ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks on real-world datasets. We further demonstrate that ARIEL is more robust in the face of adversarial attacks.


INTRODUCTION
Contrastive learning is a widely used technique in various graph representation learning tasks. In contrastive learning, the model tries to minimize the distances among positive pairs and maximize the distances among negative pairs in the embedding space [10,17,20,21,28,51,52,59,62,65]. The definition of positive and negative pairs is the key component in contrastive learning. Earlier methods like DeepWalk [37] and node2vec [9] define positive and negative pairs based on the co-occurrence of node pairs in random walks. For knowledge graph embedding, it is common practice to define positive and negative pairs based on translations [3,15,29,40,53,55,60].
Recently, the breakthroughs of contrastive learning in computer vision have inspired some works to apply similar ideas from visual representation learning to graph representation learning. To name a few, Deep Graph Infomax (DGI) [51] extends Deep InfoMax [12] and achieves significant improvements over previous random-walk-based methods. Graphical Mutual Information (GMI) [36] uses the same framework as DGI but generalizes the concept of mutual information from vector space to the graph domain. Contrastive multi-view graph representation learning (referred to as MVGRL in this paper) [10] further improves DGI by introducing graph diffusion into the contrastive learning framework. The more recent works often follow the data-augmentation-based contrastive learning methods [5,11], which treat the data-augmented samples from the same instance as positive pairs and those from different instances as negative pairs. Graph Contrastive Coding (GCC) [38] uses random walks with restart [48] to generate two subgraphs for each node as two data-augmented samples. Graph Contrastive learning with Adaptive augmentation (GCA) [65] introduces an adaptive data augmentation method that perturbs both the node features and edges according to their importance, and it is trained in a similar way to the famous visual contrastive learning framework SimCLR [5]. Its preliminary work, which uses uniform random sampling rather than adaptive sampling, is referred to as GRACE [64] in this paper. Robinson et al. [39] propose a way to select hard negative samples based on the distances in the embedding space, and use it to obtain high-quality graph embeddings. There are also many works [62,63] systematically studying data augmentation on graphs.
However, unlike the rotation and color jitter operations on images, the transformations on graphs, such as edge dropping and feature masking, are far less intuitive to human beings. The data augmentation on a graph could be either too similar to or totally different from the original graph. This, in turn, leads to a crucial question: how do we generate a new graph that is hard enough for the model to discriminate from the original one, while also maintaining the desired properties?
Inspired by some recent works [13,16,24,26,47], we introduce adversarial training into graph contrastive learning and propose a new framework called Adversarial GRaph ContrastIvE Learning (ArieL). Through an adversarial attack on both the topology and the node features, we generate an adversarial sample from the original graph. On the one hand, since the perturbation is under a constraint, the adversarial sample still stays close enough to the original one. On the other hand, the adversarial attack makes sure the adversarial sample is hard to discriminate from the other view by increasing the contrastive loss. On top of that, we propose a new constraint called information regularization, which stabilizes the training of ArieL and prevents collapse. We bridge the gap between node-level graph contrastive learning and graph-level contrastive learning by treating each graph instance as a super-node in node-level graph contrastive learning, and thus make ArieL a universal graph representation learning framework. We demonstrate that the proposed ArieL outperforms existing graph contrastive learning frameworks on node classification and graph classification tasks on both real-world graphs and adversarially attacked graphs.
In summary, we make the following contributions. First, we introduce an adversarial view as a new form of data augmentation in graph contrastive learning, which makes the data augmentation more informative under mild perturbations.
Second, we propose a new technique called information regularization to stabilize the training of adversarial graph contrastive learning by regularizing the mutual information among positive pairs. Furthermore, we bridge the gap between node-level graph contrastive learning and graph-level contrastive learning and unify their formulation under our framework.
Finally, we empirically demonstrate that ArieL can achieve better performance and higher robustness compared with previous graph contrastive learning methods.
The rest of the paper is organized as follows. Section 2 gives the problem definition of graph representation learning and the preliminaries. Section 3 describes the proposed algorithm. The experimental results are presented in Section 4. After reviewing related work in Section 5, we conclude the paper in Section 6.

PROBLEM DEFINITION
In this section, we will introduce all the notations used in this paper and give a formal definition of our problem. Besides, we briefly introduce the preliminaries of our method.

Graph Representation Learning
For graph representation learning, let G = {V, E, X} be an attributed graph, where V = {v_1, v_2, ..., v_n} denotes the set of nodes, E ⊆ V × V denotes the set of edges, and X ∈ R^{n×d} denotes the feature matrix. Each node v_i has a d-dimensional feature X[i, :], and all edges are assumed to be unweighted and undirected. We use a binary adjacency matrix A ∈ {0, 1}^{n×n} to represent the information of nodes and edges, where A[i, j] = 1 if and only if the node pair (v_i, v_j) ∈ E. In the following text, we will use G = {A, X} to represent the graph.
The objective of graph representation learning is to learn an encoder f: R^{n×n} × R^{n×d} → R^{n×d'}, which maps the nodes in the graph into low-dimensional embeddings. Denote the node embedding matrix H = f(A, X), where H[i, :] ∈ R^{d'} is the embedding of node v_i. This representation can be used for downstream tasks like node classification. Based on the node embedding matrix, we can further obtain the graph embedding through an order-invariant readout function g(·), which generates the graph representation as g(H) ∈ R^{d''}.

InfoNCE Loss
InfoNCE loss [49] is the predominant workhorse of contrastive learning, which maximizes a lower bound of the mutual information between two random variables. For each positive pair (x, x^+) associated with N negative samples of x, denoted as {x_k^-}_{k=1}^N, the InfoNCE loss can be written as

L_N = -E[ log( f(x, x^+) / ( f(x, x^+) + Σ_{k=1}^N f(x, x_k^-) ) ) ],   (1)

where f(·, ·) is the density ratio with the property that f(a, b) ∝ p(a | b) / p(a), and ∝ stands for proportional to. It has been shown by [49] that -L_N actually serves as a lower bound of the mutual information I(x; x^+), with

I(x; x^+) ≥ log(N) - L_N.   (2)
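The loss above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it uses exp(cosine/τ) as the density ratio f, a common instantiation, and shows that a positive sample close to the anchor yields a lower loss than a dissimilar one.

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, tau=0.5):
    """Toy InfoNCE estimate for one anchor x with one positive x+ and
    N negatives, using exp(cosine/tau) as the density ratio f."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
x = rng.normal(size=16)
loss_easy = infonce_loss(x, x + 0.01 * rng.normal(size=16),
                         rng.normal(size=(8, 16)))
loss_hard = infonce_loss(x, -x, rng.normal(size=(8, 16)))
assert loss_easy < loss_hard  # near-identical positive gives a lower loss
```

With N = 8 negatives, -loss plus log(8) gives the mutual-information lower bound from Equation (2).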

Graph Contrastive Learning
We build the proposed method upon the framework of SimCLR [5], which is also the basic framework that GCA [65] and GraphCL [62] are built on.

Node-level Contrastive Learning.
Given a graph G, two views of the graph G_1 = {A_1, X_1} and G_2 = {A_2, X_2} are first generated. This step can be treated as data augmentation on the original graph, and various augmentation methods can be used herein. We use random edge dropping and feature masking as GCA does. The node embedding matrix of each view is computed as H_1 = f(A_1, X_1) and H_2 = f(A_2, X_2). The corresponding node pairs in the two graph views are the positive pairs, and all other node pairs are negative. Define θ(u, v) to be the similarity function between vectors u and v; in practice, it is usually chosen as the cosine similarity on the projected embedding of each vector, using a two-layer neural network as the projection head. Denote u_i = H_1[i, :] and v_i = H_2[i, :]. The contrastive loss is defined as

L_con(G_1, G_2) = (1 / 2n) Σ_{i=1}^n [ ℓ(u_i, v_i) + ℓ(v_i, u_i) ],   (3)

where

ℓ(u_i, v_i) = -log( exp(θ(u_i, v_i)/τ) / ( exp(θ(u_i, v_i)/τ) + Σ_{k≠i} exp(θ(u_i, v_k)/τ) + Σ_{k≠i} exp(θ(u_i, u_k)/τ) ) ),

τ is a temperature parameter, and ℓ(v_i, u_i) is symmetrically defined by exchanging the variables in ℓ(u_i, v_i). This loss is basically a variant of the InfoNCE loss that is symmetrically defined instead.
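A vectorized sketch of this symmetric node-level loss is given below. It is a simplification under stated assumptions: plain cosine similarity is used in place of the projection head mentioned above, and both inter-view and intra-view pairs (other than the positive) act as negatives.

```python
import numpy as np

def node_contrastive_loss(H1, H2, tau=0.5):
    """GRACE-style symmetric contrastive loss: corresponding rows of
    H1/H2 are positives; all other rows in either view are negatives.
    Simplified sketch: cosine similarity without a projection head."""
    def norm_rows(H):
        return H / np.linalg.norm(H, axis=1, keepdims=True)
    U, V = norm_rows(H1), norm_rows(H2)
    inter = np.exp(U @ V.T / tau)            # inter-view similarities
    intra_u = np.exp(U @ U.T / tau)          # intra-view similarities
    intra_v = np.exp(V @ V.T / tau)
    pos = np.diag(inter)
    denom_u = inter.sum(axis=1) + intra_u.sum(axis=1) - np.diag(intra_u)
    denom_v = inter.sum(axis=0) + intra_v.sum(axis=1) - np.diag(intra_v)
    l_uv = -np.log(pos / denom_u)            # l(u_i, v_i)
    l_vu = -np.log(pos / denom_v)            # symmetric term l(v_i, u_i)
    return 0.5 * (l_uv + l_vu).mean()

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 16))
perm = np.roll(np.arange(8), 1)
# Aligned views score lower than views with shuffled correspondence.
assert node_contrastive_loss(H, H) < node_contrastive_loss(H, H[perm])
```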

Graph-level Contrastive Learning.
Graph-level contrastive learning is closer to contrastive learning in the visual domain. For a batch of graphs B = {G_1, ..., G_N}, we obtain the augmentation of each graph as B^+ = {G_1^+, ..., G_N^+} through node dropping, subgraph sampling, edge perturbation, and feature masking as in GraphCL [62]. The loss function is thus defined on these two batches of graphs as

L_con(B, B^+) = (1 / 2N) Σ_{i=1}^N [ ℓ(R_i, R_i^+) + ℓ(R_i^+, R_i) ],

where R_i = g(H_i) and R_i^+ = g(H_i^+). By abuse of notation, we also use L_con to denote the loss function for graph-level contrastive learning; the actual meaning of L_con depends on the input type, graph or set, in the following text.
Specifically, we notice that a set of graphs with G_i = {A_i, X_i} can be combined into one graph G* = {A*, X*}, where A* = diag(A_1, ..., A_N) is the block-diagonal adjacency matrix and X* = [X_1; ...; X_N] stacks the feature matrices. Under this transformation, the graph embedding of G_i can be treated as the embedding of a super-node in G*. This observation helps us bridge the gap between node-level contrastive learning and graph-level contrastive learning, where the only difference between them is the granularity of the instance in the contrastive learning loss. Therefore, we can build a universal framework for graph contrastive learning which can be used for both node-level and graph-level downstream tasks.
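The super-node combination can be sketched directly: stack the adjacency matrices block-diagonally and the feature matrices row-wise. This is a minimal illustration (function name and dense-matrix representation are our own, not the paper's code).

```python
import numpy as np

def combine_graphs(adjs, feats):
    """Combine a batch of graphs {A_i, X_i} into one graph G* with a
    block-diagonal adjacency matrix and row-stacked features, so each
    original graph becomes one 'super-node' segment."""
    n_total = sum(a.shape[0] for a in adjs)
    A = np.zeros((n_total, n_total))
    offset = 0
    for a in adjs:
        k = a.shape[0]
        A[offset:offset + k, offset:offset + k] = a  # place block
        offset += k
    X = np.vstack(feats)
    return A, X

A1 = np.array([[0., 1.], [1., 0.]])                        # 2-node graph
A2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path
X1, X2 = np.ones((2, 4)), np.zeros((3, 4))
A, X = combine_graphs([A1, A2], [X1, X2])
assert A.shape == (5, 5) and X.shape == (5, 4)
assert A[0, 2] == 0  # no edges across graphs in the batch
```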

Graph Encoder.
In principle, our framework can be applied to any graph neural network (GNN) architecture for the encoder, as long as it can be attacked. For simplicity, we employ a two-layer Graph Convolutional Network (GCN) [27] for node-level contrastive learning and a three-layer Graph Isomorphism Network (GIN) [58] for graph-level contrastive learning in this work. Define the symmetrically normalized adjacency matrix Â = D̃^{-1/2} Ã D̃^{-1/2}, where Ã = A + I_n is the adjacency matrix with self-connections added, I_n is the identity matrix, and D̃ is the diagonal degree matrix of Ã with D̃[i, i] = Σ_j Ã[i, j]. The two-layer GCN is given as f(A, X) = σ(Â σ(Â X W^(1)) W^(2)), where W^(1) and W^(2) are the weights of the first and second layer respectively, and σ(·) is the activation function.
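The two-layer GCN formula above can be written out as a short forward pass. This is a dense numpy sketch for illustration (ReLU is assumed as the activation; real implementations use sparse operations and learned weights).

```python
import numpy as np

def gcn_two_layer(A, X, W1, W2):
    """Minimal two-layer GCN forward pass:
    f(A, X) = sigma(Ahat @ sigma(Ahat @ X @ W1) @ W2),
    with Ahat = Dtilde^{-1/2} (A + I) Dtilde^{-1/2}."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                   # add self-connections
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    relu = lambda Z: np.maximum(Z, 0.0)
    return relu(A_hat @ relu(A_hat @ X @ W1) @ W2)

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = rng.normal(size=(3, 4))
H = gcn_two_layer(A, X, rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
assert H.shape == (3, 2) and (H >= 0).all()
```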
The Graph Isomorphism operator can be defined as X' = h( (A + (1 + ε) I_n) X ), where h(·) is a neural network such as a multi-layer perceptron (MLP) and ε is a non-negative scalar.
A three-layer GIN is the stack of three Graph Isomorphism operators. In this work, h(·) is a two-layer MLP followed by an activation function and Batch Normalization [14], and ε is set to 0 for all operators. Using X^(l) to denote the node embeddings after the l-th operator, the final node embeddings are the concatenation of the X^(l), H = Concat(X^(l) | l = 1, 2, 3), and the graph embedding is the concatenation of the node embeddings after mean-pooling, g(H) = Concat(Mean(X^(l)) | l = 1, 2, 3).

Projected Gradient Descent Attack
Projected Gradient Descent (PGD) attack [32] is an iterative attack method that projects the perturbation onto the ball of interest at the end of each iteration. Assuming that the loss L(·) is a function of the input matrix Z ∈ R^{n×d}, at the t-th iteration, the perturbation matrix ∆^(t) ∈ R^{n×d} under an ℓ∞-norm constraint can be written as

∆^(t) = Π_{‖∆‖∞ ≤ ε}( ∆^(t-1) + α · sgn(∇_Z L(Z + ∆^(t-1))) ),

where α is the step size, sgn(·) takes the sign of each element of the input matrix, and Π_{‖∆‖∞ ≤ ε} projects the perturbation onto the ε-ball in the ℓ∞-norm.
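The iteration above can be sketched in a few lines. This is a generic ℓ∞ PGD step under a toy loss whose gradient we supply by hand; a real attack would obtain the gradient from the contrastive loss by automatic differentiation.

```python
import numpy as np

def pgd_linf(Z, grad_fn, eps=0.1, alpha=0.02, steps=5):
    """L_inf PGD: at each step, move the perturbation along the sign of
    the loss gradient and project back onto the eps-ball.
    grad_fn(Z) returns dL/dZ for a loss we want to maximize."""
    delta = np.zeros_like(Z)
    for _ in range(steps):
        g = grad_fn(Z + delta)
        delta = delta + alpha * np.sign(g)   # gradient-sign ascent step
        delta = np.clip(delta, -eps, eps)    # projection onto the ball
    return delta

# Toy loss L(Z) = sum(Z): dL/dZ = 1 everywhere, so the maximizing
# perturbation saturates at +eps in every entry.
Z = np.zeros((2, 3))
delta = pgd_linf(Z, lambda Z: np.ones_like(Z), eps=0.1, alpha=0.03, steps=5)
assert np.allclose(delta, 0.1)
```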

METHOD
In this section, we first investigate the vulnerability of graph contrastive learning and then discuss each part of ArieL in detail. Based on the connection we build between node-level and graph-level contrastive learning, we illustrate our method from the perspective of node-level contrastive learning and extend it to the graph level.

Vulnerability of the Graph Contrastive Learning
Many GNNs are known to be vulnerable to adversarial attacks [2,66], so we first investigate the vulnerability of graph neural networks trained with the contrastive learning objective in Equation (3). We generate a sequence of 60 graphs by iteratively dropping edges and masking features. Let G_0 = G; at the t-th iteration, we generate G_t from G_{t-1} by randomly dropping the edges in G_{t-1} and randomly masking the unmasked features, both with probability p = 0.03. Since G_t is guaranteed to contain less information than G_{t-1}, G_t should be less similar to G_0 than G_{t-1}, on both the graph and the node level. Denoting the node embeddings of G_t as H_t, we measure the similarity θ(H_t[i, :], H_0[i, :]), and we expect this similarity to decrease as the iterations go on. We generate the sequences on two datasets, Amazon-Computers and Amazon-Photo [43], and the results are shown in Figure 1. At the 30-th iteration, with 0.97^30 ≈ 40.10% of the edges and features left, the average similarity of the positive samples is under 0.5 on Amazon-Photo. At the 60-th iteration, with 0.97^60 ≈ 16.08% of the edges and features left, the average similarity drops under 0.2 on both Amazon-Computers and Amazon-Photo. Additionally, starting from the 30-th iteration, the cosine similarity has a standard deviation of around 0.3 on both datasets, which indicates that many nodes are very sensitive to external perturbations, even though we do not add any adversarial component but simply mask out some information. These results demonstrate that the current graph contrastive learning framework is not trained on enough high-quality contrastive samples and is not robust to adversarial attacks.

[Figure 2 caption: The objective of ArieL is to minimize the contrastive loss (grey arrows) between the augmented views, and between the adversarial view and the corresponding augmented view, plus the information regularization; the similarities of the corresponding nodes (dashed lines) are penalized by the information regularization if they exceed the estimated upper bound. Best viewed in color.]
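The degradation schedule used in this experiment can be sketched as follows. The function is an illustrative assumption about the sampling procedure (independent per-edge and per-entry drops); the asserts check the retention fractions quoted in the text.

```python
import numpy as np

def degrade(A, X, p=0.03, rng=None):
    """One degradation step: independently drop each remaining edge and
    mask each remaining feature entry with probability p."""
    rng = rng or np.random.default_rng()
    keep_e = np.triu(rng.random(A.shape) >= p, 1)
    keep_e = keep_e | keep_e.T                 # keep the graph symmetric
    keep_f = rng.random(X.shape) >= p
    return A * keep_e, X * keep_f

# Expected information remaining after t steps is (1 - p)^t, matching
# the figures quoted in the text.
assert abs(0.97 ** 30 - 0.4010) < 1e-3
assert abs(0.97 ** 60 - 0.1608) < 1e-3
```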
Given this observation, we are motivated to build an adversarial graph contrastive learning framework that could improve the performance and robustness of the previous graph contrastive learning methods.The overview of our framework is shown in Figure 2.

Adversarial Training
Adversarial training uses samples generated through adversarial attack methods to improve the generalization ability and robustness of the original method during training. Although most existing attack frameworks target supervised learning, it is natural to generalize these methods to contrastive learning by replacing the classification loss with the contrastive loss. The goal of the adversarial attack on graph contrastive learning is to maximize the contrastive loss by adding a small perturbation to the contrastive samples, which can be formulated as

G_adv = argmax_{G'} L_con(G_1, G'),   (11)

where G' = {A', X'} is generated from the original graph G, and the change is constrained by the budgets Δ_A and Δ_X. We treat the adversarial attack as one kind of data augmentation. Although we find it effective to make the adversarial attack on one or both augmented views as well, we follow the typical contrastive learning procedure as in SimCLR [5] and attack the original graph in this work. Besides, it does not matter whether G_1, G_2, or G is chosen as the anchor for the adversary; each choice works in our framework, and the anchor can also be sampled as a third view. In our experiments, we use the PGD attack [32] as our attack method. We generally follow the method proposed by Xu et al. [57] to apply the PGD attack to the graph structure and apply the regular PGD attack to the node features. Define the complement of the adjacency matrix as Ā = 1_{n×n} - I_n - A, where 1_{n×n} is the all-ones matrix of size n × n. The perturbed adjacency matrix can be written as

A' = A + (Ā - A) ∘ L_A,

where ∘ is the element-wise product and L_A ∈ {0, 1}^{n×n} is a symmetric matrix with each element L_A[i, j] corresponding to the modification (i.e., add, delete, or no modification) of the edge between the node pair (v_i, v_j). The perturbation on X follows the regular PGD attack procedure, and the perturbed feature matrix can be written as

X' = X + L_X,

where L_X ∈ R^{n×d} is the perturbation on the feature matrix.
For ease of optimization, L_A is relaxed to its convex hull L̃_A ∈ [0, 1]^{n×n}, which satisfies Σ_{i,j} L̃_A[i, j] ≤ Δ_A; for the features, we directly treat ε_X as the constraint on the feature perturbation, i.e., ‖L_X‖∞ ≤ ε_X. In each iteration, we make the updates

L̃_A^(t) = Π_{S_A}( L̃_A^(t-1) + α_A · g_A^(t) ),
L_X^(t) = Π_{S_X}( L_X^(t-1) + α_X · sgn(g_X^(t)) ),

where t denotes the current iteration, and g_A^(t) and g_X^(t) denote the gradients of the loss with respect to L̃_A at L̃_A^(t-1) and with respect to L_X at L_X^(t-1), respectively. Here S_A = { L̃_A ∈ [0, 1]^{n×n} : Σ_{i,j} L̃_A[i, j] ≤ Δ_A } and S_X = { L_X : ‖L_X‖∞ ≤ ε_X }. The projection operation Π_{S_X} simply clips L_X into the range [-ε_X, ε_X] element-wise. The projection operation Π_{S_A} first clips L̃_A into [0, 1]; if the budget constraint is still violated, it instead clips L̃_A - μ into [0, 1], where μ > 0 is chosen such that the clipped matrix sums exactly to Δ_A. We use the bisection method [4] to solve this equation for μ. To finally obtain L_A from L̃_A, each element is independently sampled from a Bernoulli distribution as L_A[i, j] ~ Bernoulli(L̃_A[i, j]). To obtain a symmetric matrix, we only sample the upper triangular part (the elements on the diagonal are known to be 0 in our formulation) and obtain the lower triangular part through transposition.
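The budget projection with bisection can be sketched as follows. This is an illustrative implementation under the description above (the function name and iteration count are ours), applied to a small matrix whose clipped sum exceeds the budget.

```python
import numpy as np

def project_budget(L, budget, iters=50):
    """Project L onto {L in [0,1]^{n x n} : sum(L) <= budget}: clip into
    [0, 1]; if the total still exceeds the budget, shift by a scalar mu
    found via bisection so the clipped sum meets the budget."""
    clipped = np.clip(L, 0.0, 1.0)
    if clipped.sum() <= budget:
        return clipped
    lo, hi = 0.0, L.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.clip(L - mu, 0.0, 1.0).sum() > budget:
            lo = mu          # shift not large enough yet
        else:
            hi = mu          # shift satisfies the budget
    return np.clip(L - hi, 0.0, 1.0)

L = np.array([[0.9, 0.8], [0.7, -0.2]])
P = project_budget(L, budget=1.0)
assert abs(P.sum() - 1.0) < 1e-6 and (P >= 0).all() and (P <= 1).all()
```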

Adversarial Graph Contrastive Learning
To assimilate graph contrastive learning and adversarial training, we treat the adversarial view G_adv obtained from Equation (11) as another view of the graph. We define the adversarial contrastive loss as the contrastive loss between G_1 and G_adv. The adversarial contrastive loss is added to the original contrastive loss in Equation (3), which becomes

L = L_con(G_1, G_2) + ε_1 · L_con(G_1, G_adv),

where ε_1 > 0 is the adversarial contrastive loss coefficient. We further adopt two additional subtleties on top of this basic framework: subgraph sampling and curriculum learning. In each iteration, a subgraph G_s of a fixed size is first sampled from the original graph G, and both the data augmentation and the adversarial attack are conducted on this subgraph. Subgraph sampling avoids computing gradients over the whole graph, which would lead to heavy computation on a large network. Besides, we also observe that subgraph sampling increases the randomness of the samples and can sometimes boost performance. To avoid imbalanced sampling of isolated nodes, we uniformly sample a random set of nodes and then construct the subgraph on top of them. Every T epochs, the adversarial contrastive loss coefficient is multiplied by a weight γ. When γ > 1, the portion of the adversarial contrastive loss gradually increases and the contrastive learning becomes harder as training goes on.
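The two subtleties can be sketched together. The snippet below is a schematic of the subgraph sampling and the curriculum schedule only (names and the 60-epoch horizon are illustrative assumptions), not the full training loop in Algorithm 1.

```python
import numpy as np

def sample_subgraph(A, X, k, rng):
    """Uniformly sample k nodes and take the induced subgraph, as in the
    subgraph-sampling step described above."""
    idx = rng.choice(A.shape[0], size=k, replace=False)
    return A[np.ix_(idx, idx)], X[idx]

# Curriculum schedule: every T epochs, the adversarial loss coefficient
# eps1 is multiplied by gamma > 1, hardening the task over time.
eps1, gamma, T = 1.0, 1.1, 20
for epoch in range(1, 61):
    if epoch % T == 0:
        eps1 *= gamma
assert abs(eps1 - 1.1 ** 3) < 1e-12  # three multiplications in 60 epochs

rng = np.random.default_rng(0)
A, X = np.eye(10), np.ones((10, 4))
A_s, X_s = sample_subgraph(A, X, 5, rng)
assert A_s.shape == (5, 5) and X_s.shape == (5, 4)
```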

Information Regularization
Adversarial training can effectively improve the model's robustness to perturbations; nonetheless, we find that these hard training samples impose an additional risk of training collapse, i.e., the model may land in a bad parameter region at an early stage of training, assigning a higher probability to a highly perturbed sample than to a mildly perturbed one. In our experiments, we find this vanilla adversarial training may fail to converge in some cases (e.g., on the Amazon-Photo dataset). To stabilize the training, we add a constraint termed information regularization, whose main goal is to regularize the instance similarity in the feature space. The data processing inequality [6] states that for three random variables Z_1, Z_2, and Z_3 satisfying the Markov relation Z_1 → Z_2 → Z_3, the mutual information satisfies I(Z_1; Z_2) ≥ I(Z_1; Z_3). As proved by Zhu et al. [65], since the node embeddings of the two views, H_1 and H_2, are conditionally independent given the node embeddings of the original graph H, they satisfy the Markov relation H_1 → H → H_2, and vice versa. Therefore, we can derive the following properties over their mutual information:

I(H_1; H) ≥ I(H_1; H_2) and I(H; H_2) ≥ I(H_1; H_2).

In fact, this inequality holds for each node. A sketch of the proof is that the embedding of each node v_i is determined by the nodes in its l-hop neighborhood if an l-layer GNN is used as the encoder, and this subgraph composed of the l-hop neighborhood also satisfies the Markov relation. Therefore, we can derive the stricter per-node inequalities

I(u_i; h_i) ≥ I(u_i; v_i) and I(h_i; v_i) ≥ I(u_i; v_i),

where h_i = H[i, :]. Since -L_con serves as a lower bound of the mutual information, we instantiate these inequalities with the contrastive objective and penalize the cross-view similarity of a positive pair whenever it exceeds the estimated upper bound given by its similarity to the original graph. Specifically, information regularization could be defined over any three graphs that satisfy the Markov relation, but in our framework, to save memory and time, we avoid additional sampling and directly ground the information regularization on the existing graphs. It is also fine to apply information regularization on G, G_1, and G_adv, or on G, G_2, and G_adv.
The final loss of ArieL can be written as

L_final = L_con(G_1, G_2) + ε_1 · L_con(G_1, G_adv) + ε_2 · L_I,

where L_I denotes the information regularization term and ε_2 > 0 controls its strength.
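The regularization can be illustrated with a hinge penalty. This is a loose sketch of the idea only: it treats the negative contrastive loss as a mutual-information estimate and penalizes violations of the upper bound; the exact form of the paper's regularization term may differ.

```python
def info_regularization(l_g_g1, l_g_g2, l_g1_g2):
    """Hedged sketch: with -L_con as a mutual-information lower bound,
    the Markov relation H1 -> H -> H2 implies the estimated I(H1; H2)
    should not exceed the estimated I(H; H1) or I(H; H2). Violations
    are penalized with a hinge. (Illustrative form, not the paper's
    exact equation.)"""
    est = lambda l: -l   # MI estimate from a contrastive loss value
    bound = min(est(l_g_g1), est(l_g_g2))
    return max(0.0, est(l_g1_g2) - bound)

# Cross-view pair looks "too similar" relative to the original graph:
assert abs(info_regularization(1.0, 1.2, 0.8) - 0.4) < 1e-9
# Bounds satisfied, so no penalty:
assert info_regularization(1.0, 1.2, 1.5) == 0.0
```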
The entire algorithm of ArieL is summarized in Algorithm 1.

Extension to Graph-Level Contrastive Learning
For a batch of graphs B and the batch of their augmented views B^+, we aim to generate a batch of adversarial views, which we denote as B_adv. Denote the combined graphs of the batches as G*, G*^+, and G*_adv. The objective of adversarial graph contrastive learning at the graph level can be formulated as

B_adv = argmax_{B'} L_con(B, B'),

subject to the total perturbation budgets, summed over all graphs in the batch, on the structure (Δ_A) and on the features (ε_X). It is worth noting that the constraints here are applied to the batch rather than to each graph, i.e., we only constrain the total perturbation over all graphs rather than the perturbation on each graph. This greatly reduces the computational cost of solving the above constrained maximization problem, as it reduces the number of constraints from twice the batch size to two. However, it also introduces the additional risk that the perturbations could be severely imbalanced among the graphs in the batch, e.g., one graph could be heavily perturbed while the others remain almost unchanged.
In our experiments, we do not observe this problem, but it could theoretically happen. A good practice is to start from this simple form and gradually add constraints on the vulnerable graphs in the batch if imbalanced perturbations are observed.
During the attack stage, the perturbation matrix L_A and its convex hull L̃_A are further subject to the constraint that they be block-diagonal, with 0 at position (i, j) whenever node i and node j come from two different graphs in the batch, where n_i is the number of nodes in graph G_i. This can easily be implemented by using a block-diagonal mask to zero out the gradients during forward propagation. With this processing, the projection operation on the adjacency matrix remains the same as in Equation (21). In case we need to apply the constraints to each graph in the batch, we simply apply the projection operation defined in Equation (21) to the adjacency matrix of each graph, using the bisection method to solve for μ separately per graph. The projection operation on the feature perturbation matrix is unaffected at the graph level and still clips L_X into the range [-ε_X, ε_X] element-wise. Furthermore, information regularization also applies to graph-level contrastive learning, where we only need to replace the node embeddings with the graph embeddings in Equation (30). Hence, we can derive the bounds atop the different views of the same graph in B, B^+, and B_adv. The final loss of ArieL for graph-level contrastive learning can be written analogously to the node-level loss, and graph-level adversarial contrastive learning can also follow the steps outlined in Algorithm 1 for training, by simply replacing the input graph with the input batch in the loss functions.
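The block-diagonal mask can be constructed as follows. A minimal sketch (function name is ours): entries linking nodes from different graphs in the batch are zeroed, so multiplying gradients by this mask confines perturbations within each graph.

```python
import numpy as np

def block_diag_mask(sizes):
    """Mask with 1 where nodes i and j belong to the same graph in the
    batch and 0 elsewhere; used to zero out cross-graph perturbations."""
    n = sum(sizes)
    M = np.zeros((n, n))
    off = 0
    for k in sizes:
        M[off:off + k, off:off + k] = 1.0   # within-graph block
        off += k
    return M

M = block_diag_mask([2, 3])
assert M[0, 1] == 1 and M[1, 2] == 0      # within-graph vs cross-graph
assert M.sum() == 2 * 2 + 3 * 3           # only the diagonal blocks are 1
```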

EXPERIMENTS
In this section, we conduct empirical evaluations designed to answer the following three questions: RQ1. How effective is the proposed ArieL in comparison with previous graph contrastive learning methods on the node classification and graph classification tasks? RQ2. To what extent does ArieL gain robustness on attacked graphs? RQ3. How does each part of ArieL contribute to its performance?
We evaluate our method on the node classification and graph classification tasks on real-world graphs, and further evaluate its robustness with the node classification task on attacked graphs. The node/graph embeddings are first learned by the proposed ArieL algorithm; the embeddings are then fixed, and a simple classifier is trained over them to perform classification. All our experiments are conducted on NVIDIA Tesla V100S GPUs with 32G memory.

Datasets.
For node-level contrastive learning, we use eight datasets for the evaluation: Cora, CiteSeer, Amazon-Computers, Amazon-Photo, Coauthor-CS, Coauthor-Physics, Facebook, and LastFM Asia. Cora and CiteSeer [61] are citation networks, where nodes represent documents and edges correspond to citations. Amazon-Computers and Amazon-Photo [43] are extracted from the Amazon co-purchase graph. In these graphs, nodes are goods, and two goods are connected by an edge if they are frequently bought together. Coauthor-CS and Coauthor-Physics [43] are co-authorship graphs, where each node is an author and an edge indicates co-authorship of a paper. Facebook [41] is a page-page graph of verified Facebook pages, where edges correspond to mutual likes. LastFM Asia [42] is a social network of Asian users; each node represents a user, and users are connected via friendship.
For graph-level contrastive learning, we evaluate ArieL on four datasets from the benchmark TUDataset [33], including the biochemical molecule graphs NCI1, PROTEINS, DD, and MUTAG.

A summary of the dataset statistics is in Table 1 and Table 2.

Baselines.
We consider seven graph contrastive learning methods for node-level contrastive learning, including DeepWalk [37], DGI [51], Robust DGI (abbreviated as RDGI) [56], GMI [36], MVGRL [10], GRACE [64], and GCA [65]. Since DeepWalk only generates embeddings for the graph topology, we concatenate the node features to the generated embeddings for evaluation so that the final embeddings incorporate both topology and attribute information. Besides, we also compare our method with two supervised methods, Graph Convolutional Network (GCN) [27] and Graph Attention Network (GAT) [50].

Evaluation protocol.
For each dataset, we randomly select 10% of the nodes/graphs for training, 10% for validation, and the remaining for testing. For the contrastive learning methods, a logistic regression classifier is trained to do node classification over the node embeddings, while a support vector machine is trained to do graph classification over the graph embeddings. Accuracy is used as the evaluation metric.
For node-level contrastive learning, we search each method over 6 different random seeds, including 5 random seeds of our own and the best random seed of GCA on each dataset. For each seed, we evaluate the method on 20 random training-validation-testing dataset splits and report the mean and the standard deviation of the accuracy on the best seed. Specifically, for the supervised learning methods, we abandon the existing splits, for example on Cora and CiteSeer, and instead do a random split before training, reporting the results over 20 splits.
For graph-level contrastive learning, we keep the evaluation protocol the same as the setting in [44] and [62], where the experiments are conducted on 5 random seeds, each corresponding to a 10-fold evaluation.
Besides testing on the original, clean graphs, we also evaluate our method on attacked graphs for node-level contrastive learning. We use Metattack [67] to perform the poisoning attack. Since Metattack targets the graph structure only and is computationally inefficient on large graphs, we first randomly sample a subgraph of 5000 nodes if the number of nodes in the original graph is greater than 5000, then randomly mask out 20% of the node features, and finally use Metattack to perturb 20% of the edges to generate the final attacked graph. For ArieL, we use the hyperparameters of the best models we obtain on the clean graphs for evaluation. For GCA, we report the performance of its three variants, GCA-DE, GCA-PR, and GCA-EV, in our main results, which correspond to the degree, PageRank, and eigenvector centrality measures [25,35], respectively, and use the best variant on each dataset for the evaluation on the attacked graphs.

Hyperparameters
For node-level contrastive learning, we use the same parameters and design choices for ArieL's network architecture, optimizer, and training scheme as in GRACE and GCA on each dataset. However, we find that GCA does not behave well on Cora, with a significant performance drop, so we re-search the parameters for GCA on Cora separately and use a different temperature for it. Other contrastive-learning-specific parameters are kept the same across GRACE, GCA, and ArieL. For graph-level contrastive learning, we keep ArieL's hyperparameters the same as those used by GraphCL, except for ArieL's own parameters.
All GNN-based baselines for node-level contrastive learning use a two-layer GCN as the encoder. For each method, we compare its default hyperparameters and the ones used by ArieL, and use whichever hyperparameters lead to better performance. Other algorithm-specific hyperparameters all respect the default settings in the official implementations. For graph-level contrastive learning, ArieL uses a three-layer GIN as the encoder, and we take the results for each baseline from its original paper under the same experimental setting.
Other hyperparameters of ArieL are summarized as follows:
• Adversarial contrastive loss coefficient ε_1 and information regularization strength ε_2. We search them over {0.5, 1, 1.5, 2} and use the value with the best performance on the validation set of each dataset. Specifically, we first fix ε_2 as 0 and decide the optimal values for all other parameters, then we search ε_2 on top of the model with the other hyperparameters fixed.
• Number of attack steps and perturbation constraints. These parameters are fixed on all datasets. For node-level contrastive learning, we set the number of attack steps to 5, the edge perturbation constraint to Δ_A = 0.1 · Σ_{i,j} A[i, j], and the feature perturbation constraint to ε_X = 0.5. For graph-level contrastive learning, we set the number of attack steps to 5, the edge perturbation constraint to Δ_A = 0.05 · Σ_{i,j} A[i, j], and the feature perturbation constraint to ε_X = 0.04.
• Curriculum learning weight γ and change period T. In our experiments, we simply fix γ = 1.1 and T = 20 for node-level contrastive learning, and γ = 1 for graph-level contrastive learning.
• Graph perturbation rate and feature perturbation rate. We search both over {0.001, 0.01, 0.1} and take the best value on the validation set of each dataset.
• Subgraph size. For node-level contrastive learning, we keep the subgraph size at 500 for ArieL on all datasets except Facebook and LastFM Asia, where we use a subgraph size of 3000. We do not do subgraph sampling for graph-level contrastive learning; instead, we control the batch size N, with N = 32 for DD and N = 128 for the other three datasets.

Main Results
The comparison results of node classification on all eight datasets are summarized in Table 3. Our method ArieL outperforms the baselines on all datasets except Cora and Facebook, where it is only 0.11% and 0.40% lower in accuracy than MVGRL, respectively. It can be seen that the previous state-of-the-art method GCA does not bring significant improvements over earlier methods. In contrast, ArieL achieves consistent improvements over GRACE and GCA on all datasets, especially on Amazon-Computers with an almost 3% gain.
Table 3. Node classification accuracy in percentage on eight real-world datasets. We bold the results with the best mean accuracy. The methods above the line are supervised, and the ones below the line are unsupervised. OOM stands for Out-of-Memory on our 32G GPUs.
Besides, we find MVGRL a solid baseline whose performance is close to or even better than GCA's on these datasets. It achieves the highest score on Cora and Facebook, and the second-highest on Amazon-Computers and Amazon-Photo. However, it does not behave well on CiteSeer, where GCA effectively increases the score of GRACE. To sum up, previous modifications of the underlying frameworks are mostly based on specific domain knowledge, for example, MVGRL introduces the diffusion matrix to DGI and GCA defines importance scores on the edges and features, and they cannot consistently take effect on all datasets. In contrast, ArieL uses the adversarial attack to automatically construct high-quality contrastive samples and achieves more stable performance improvements.
In comparison with the supervised methods, ArieL also achieves a clear advantage over all of them. Although it would be premature to conclude that ArieL is more powerful than these supervised methods, since they are usually tested under a specific training-testing split, these results do demonstrate that ArieL can generate highly expressive node embeddings for the node classification task, achieving performance comparable to the supervised methods.
The graph classification results are summarized in Table 4. Compared with our base framework GraphCL, which uses naive augmentation methods, ArieL achieves even stronger performance on all datasets. GraphCL does not show a clear advantage over previous baselines such as InfoGraph, and it does not behave well on datasets with small graph sizes (e.g., NCI1 and MUTAG). However, ArieL takes the lead on three of the datasets and greatly reduces the performance gap with the graph kernel methods on NCI1. It can be clearly seen that ArieL behaves better than GraphCL on NCI1 and MUTAG, with at least 1% improvement in accuracy. In comparison with another graph contrastive learning method, InfoGraph, ArieL takes an overall lead on all datasets, even on MUTAG where InfoGraph shows a dominant advantage over other baselines. The above empirical results on the node classification and graph classification tasks clearly demonstrate the advantage of ArieL on real-world graphs, which indicates that ArieL has a better augmentation strategy.

Results under Attack
The results on attacked graphs are summarized in Table 5. Specifically, we evaluate all these methods on the attacked subgraphs of Amazon-Computers, Amazon-Photo, Coauthor-CS, Coauthor-Physics, Facebook, and LastFM Asia, so their results are not directly comparable to the results in Table 3. To compare with the previous results, we look at the datasets where ArieL takes the lead, and then find the performance of the second-best method on each dataset for both the original graph and the attacked one. If ArieL outperforms the second-best method by a much larger margin on the attacked graph than on the original graph, we say that ArieL is significantly robust on that dataset.
Table 5. Node classification accuracy in percentage on the graphs under Metattack, where subgraphs of Amazon-Computers, Amazon-Photo, Coauthor-CS, Coauthor-Physics, Facebook, and LastFM Asia are used for the attack, so their results are not directly comparable to those in Table 3. We bold the results with the best mean accuracy. GCA is evaluated with its best variant on each clean graph.
Under this principle, we can see that ArieL is significantly robust on CiteSeer, with the margin over the second-best method increasing from 0.58% to 3.96%; on Amazon-Computers, with the margin increasing from 2.11% to 3.56%; and on Coauthor-CS, with the margin increasing from 0.74% to 1.71%. On Coauthor-Physics, ArieL and GCA both show clear robustness over the remaining methods.
Although some baselines are robust on specific datasets, for example, MVGRL on Cora, GMI on CiteSeer, GCN on Facebook, and GCA on Coauthor-CS and Coauthor-Physics, they fail to achieve consistent robustness over all datasets. Although GCA indeed makes GRACE more robust on most datasets, it is still vulnerable on Cora, CiteSeer, and Amazon-Computers, with final accuracy more than 3% lower than ArieL's.
Besides, ArieL still shows high robustness on the datasets where it does not take the lead. On Cora and Facebook, ArieL is less than 1% lower in accuracy than the best method and is still better than most baselines. It does not show a sudden performance drop on any dataset, unlike MVGRL on CiteSeer and GCA on Facebook.
Basically, MVGRL and GCA can improve the robustness of their respective base frameworks on specific datasets, but we find this kind of improvement relatively minor. In contrast, ArieL brings more significant improvements and greatly increases robustness. It is worth noting that although RDGI is also designed to improve the robustness of graph representation learning, it does not show a clear advantage over DGI in our evaluation. This is mainly because the original RDGI considers attacks at test time, while we evaluate robustness against attacks at training time, which is more common in graph learning tasks [2,66,67]. Based on these comparative results, we claim that ArieL is more robust than previous graph contrastive learning methods in the face of adversarial attacks.

Ablation Study
For this section, we first set λ₂ to 0 and investigate the role of the adversarial contrastive loss. The adversarial contrastive loss coefficient λ₁ controls the portion of the adversarial contrastive loss in the final loss. When λ₁ = 0, the final loss reduces to the regular contrastive loss in Equation (3). To explore the effect of the adversarial contrastive loss, we fix the other parameters of our best models on Cora and CiteSeer and gradually increase λ₁ from 0 to 2. The changes in the final performance are shown in Figure 3.
The dashed line represents the performance of GRACE with subgraph sampling, i.e., λ₁ = 0. Although there exist some variations, ArieL always stays above the baseline under a positive λ₁, with around 2% improvement. The subgraph sampling trick may sometimes help the model, for example, it improves GRACE without subgraph sampling by 1% on CiteSeer, but it can be detrimental as well, as on Cora. This is understandable since subgraph sampling simultaneously enriches the data augmentation and reduces the number of negative samples, both critical to contrastive learning. The adversarial contrastive loss, in contrast, brings a stable and significant improvement over GRACE with subgraph sampling, which demonstrates that the performance improvement of ArieL mainly stems from the adversarial loss rather than the subgraph sampling.
Next, we fix all other parameters and check the behavior of λ₂. Information regularization is mainly designed to stabilize the training of ArieL. We find that ArieL can experience collapse at the early training stage, and information regularization can mitigate this issue. We choose the best run on Amazon-Photo, where the collapse frequently occurs, and, similar to before, gradually increase λ₂ from 0 to 2; the results are shown in Figure 4 (left). As can be seen, without information regularization, ArieL can collapse without learning anything, while setting λ₂ greater than 0 effectively avoids this situation. To further illustrate this, we plot the training curve of the regular contrastive loss in Figure 4 (right), for the best ArieL model on Amazon-Photo and the same model with the information regularization simply removed. Without information regularization, the model can get stuck in a bad parameter region and fail to converge, while information regularization resolves this issue.
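A minimal NumPy sketch of how λ₁ and λ₂ enter the objective studied in this ablation. The function names are ours; the paper's contrastive loss (Equation (3)) also uses intra-view negatives, and the information regularization term is represented here only as a precomputed scalar `reg`.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # Simplified InfoNCE: positives on the diagonal, inter-view negatives only.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)
    return float(np.mean(-np.log(np.diag(sim) / sim.sum(axis=1))))

def ariel_objective(z1, z2, z_adv, lam1=1.0, lam2=1.0, reg=0.0):
    # Regular contrastive loss between the two views, plus the
    # lam1-weighted adversarial contrastive loss against the adversarial
    # view, plus lam2-weighted information regularization.
    return info_nce(z1, z2) + lam1 * info_nce(z1, z_adv) + lam2 * reg
```

With λ₁ = 0 and λ₂ = 0 this reduces to the regular contrastive loss, matching the baseline setting in the ablation.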

Training Analysis
Here we compare the training of ArieL on node-level contrastive learning with other methods on our NVIDIA Tesla V100S GPU with 32G memory. Adversarial attacks on graphs tend to be highly computationally expensive since the attack requires gradient calculation over the entire adjacency matrix, which is of size O(n²). For ArieL, we resolve this bottleneck with subgraph sampling on large graphs and empirically show that adversarial training on the subgraph still yields significant improvements, without increasing the number of training iterations. In our experiments, we find GMI the most memory-inefficient: it cannot be trained on Coauthor-CS, Coauthor-Physics, or Facebook. For DGI, MVGRL, GRACE, and GCA, training takes up to 30G of GPU memory on Coauthor-Physics, while training ArieL requires no more than 8G. In terms of training time, DGI and MVGRL are the fastest to converge, but it takes MVGRL a long time to compute the diffusion matrix on large graphs. ArieL is slower than GRACE and GCA on Cora and CiteSeer, but it is faster on large graphs like Coauthor-CS and Coauthor-Physics, with the training time per iteration invariant to the graph size due to the subgraph sampling. On the largest graph, Coauthor-Physics, each iteration takes GRACE 0.875 seconds and GCA 1.264 seconds, while it takes ArieL only 0.082 seconds. This demonstrates that ArieL has even better scalability than GRACE and GCA.
Subgraph sampling, under some mild assumptions, can be an efficient way to reduce the computational cost of any node-level contrastive learning algorithm. Besides this general trick, we point out that the attack does not always need to be performed on the whole graph to generate the adversarial view. Another solution to avoid exploding memory is to select some anchor nodes and only perturb the edges among these anchor nodes and their features. Since the scalability issue has been resolved by subgraph sampling on all datasets in this work, we do not further discuss the details of this method or empirically verify its effectiveness here, but leave it for future work.
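A minimal version of the subgraph trick described above (uniform node sampling is used here purely for illustration; the paper's sampler may differ, e.g., it could be random-walk-based):

```python
import numpy as np

def sample_subgraph(A, X, size, rng=None):
    # Induced subgraph on `size` uniformly sampled nodes: the attack's
    # gradient is then over a size x size adjacency instead of n x n.
    rng = np.random.default_rng(0) if rng is None else rng
    idx = rng.choice(A.shape[0], size=min(size, A.shape[0]), replace=False)
    return A[np.ix_(idx, idx)], X[idx]
```

Because the per-iteration cost depends only on `size`, the training time per iteration stays constant as the full graph grows.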

RELATED WORK
In this section, we review the related work in the following three categories: graph contrastive learning, adversarial attack on graphs, and adversarial contrastive learning.

Graph Contrastive Learning
Contrastive learning is known for its simplicity and strong expressivity. Traditional methods ground the contrastive samples in node proximity in the graph, such as DeepWalk [37] and node2vec [9], which use random walks to generate node sequences and approximate the co-occurrence probability of node pairs. However, these methods can only learn embeddings from the graph structure while ignoring the node features.
GNNs [27,50] can easily capture local proximity and node features [19,22,51,62,63]. To further improve the performance, the Information Maximization (InfoMax) principle [30] has been introduced. DGI [51] is adapted from Deep InfoMax [12] to maximize the mutual information between the local and global features. It generates negative samples with a corrupted graph and contrasts the node embeddings with the original graph embedding and the corrupted one. Based on a similar idea, GMI [36] generalizes the concept of mutual information to the graph domain by separately defining the mutual information on the features and edges. Graph Community Infomax (GCI) [45] instead tries to maximize the mutual information between the community representation and the node representation for positive pairs. Another follow-up work of DGI, MVGRL [10], maximizes the mutual information between the first-order neighbors and the graph diffusion. On the graph level, InfoGraph [44] makes use of a similar idea to maximize the mutual information between the global representation and patch representations from the same graph. HDI [18] introduces high-order mutual information to consider both intrinsic and extrinsic training signals. However, mutual-information-based methods generate the corrupted graphs by simply shuffling the node features at random. Recent methods exploit the graph topology and features to generate better augmented graphs. GCC [38] adopts a random-walk-based strategy to generate different views of a node's context graph, but it ignores augmentation at the feature level. GCA [64], instead, considers data augmentation at both the topology and the feature level, and introduces adaptive augmentation by considering the importance of each edge and feature dimension. To investigate the power of different data augmentations in graph domains, GraphCL [62] systematically studies different combinations of graph augmentation strategies and applies them to different graph learning settings. Unlike the above methods, which construct the augmented samples based on domain knowledge, ArieL uses an adversarial attack to construct the view that maximizes the contrastive loss, which is more informative and has broader applicability.
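For concreteness, the DGI-style corruption mentioned above, which keeps the topology but randomly shuffles node features, can be sketched as follows (a simplified illustration, not DGI's exact implementation):

```python
import numpy as np

def corrupt(A, X, rng=None):
    # Keep the adjacency, permute feature rows so that features no
    # longer align with the graph structure (DGI-style negatives).
    rng = np.random.default_rng(0) if rng is None else rng
    return A, X[rng.permutation(X.shape[0])]
```

The corrupted pair (A, X') preserves the feature distribution but destroys the feature-topology alignment, which is exactly what makes it a negative sample under the InfoMax objective.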

Adversarial Attack on Graphs
Deep learning methods are known to be vulnerable to adversarial attacks, and this is also the case in the graph domain. As shown by Bojchevski et al. [2], both random-walk-based methods and GNN-based methods can be attacked by flipping a small portion of edges. Xu et al. [57] propose a PGD attack and a min-max attack on the graph structure from the optimization perspective. NETTACK [66] is the first to attack GNNs using both structure and feature attacks, causing a significant performance drop of GNNs on the benchmarks. After that, Metattack [67] formulates the poisoning attack on GNNs as a meta-learning problem and achieves remarkable performance by only perturbing the graph structure. Node Injection Poisoning Attacks [46] uses a hierarchical reinforcement learning approach to sequentially manipulate the labels and links of injected nodes. Recently, InfMax [31] formulated the adversarial attack on GNNs as an influence maximization problem.

Adversarial Contrastive Learning
The concept of adversarial contrastive learning was first proposed in the visual domain [13,16,26]. These works all propose a similar idea: using the adversarial sample as a form of data augmentation in contrastive learning, which brings better downstream task performance and higher robustness. ACL [26] studies different paradigms of adversarial contrastive learning by replacing one or two of the augmentation views with the adversarial view generated by the PGD attack [32]. CLAE [13] and RoCL [16] use FGSM [8] to generate an additional adversarial view atop the two standard augmentation views. RDGI [56] and AD-GCL [47] are the most relevant works to ours in graph domains. RDGI quantifies the robustness of node representations as the decrease in mutual information between the graph and its embedding under adversarial attacks. It learns a robust node representation by simultaneously minimizing the standard contrastive learning loss and improving the robustness. Nonetheless, its objective sacrifices the expressiveness of the node representation for robustness, while ArieL can improve both. AD-GCL formulates adversarial graph contrastive learning in a min-max form and uses a parameterized network for edge dropping. However, AD-GCL is designed for the graph classification task only and does not explore the robustness of graph contrastive learning. Finally, none of the previous adversarial contrastive learning methods take scalability into consideration, with visual models and AD-GCL dealing with independent instances and RDGI only working on small graphs, while ArieL works for both interconnected instances (node embedding) and independent instances (graph embedding) at a large scale.
Some recent theoretical analyses further reveal the vulnerability of contrastive learning. Jing et al. [23] show that dimensional collapse can happen if the variation of the data augmentation exceeds the variation of the data itself in contrastive learning. Wang et al. [54] prove that contrastive learning can cluster instances from the same class only when the supports of different intra-class samples overlap under data augmentation. The representations learned by contrastive learning may fail in downstream tasks when either under-overlapping or over-overlapping happens. From these perspectives, searching for adversarial contrastive samples in a safe area is more likely to generate useful representations for downstream tasks.

CONCLUSION
In this paper, we propose a universal framework for graph contrastive learning by introducing an adversarial view, scaling it through subgraph sampling, and stabilizing it through information regularization. It consistently outperforms the state-of-the-art graph contrastive learning methods in node classification and graph classification tasks and exhibits a higher degree of robustness to adversarial attacks. Our framework is not limited to the graph contrastive learning frameworks we build on in this paper and can be naturally extended to other graph contrastive learning methods as well. In the future, we plan to further investigate (1) adversarial attacks on graph contrastive learning and (2) the integration of graph contrastive learning and supervised methods.

Fig. 1. Average cosine similarity between the node embeddings of the original graph and the perturbed graph, on the Amazon-Computers and Amazon-Photo datasets. The shaded area represents the standard deviation.

Fig. 3. Effect of the adversarial contrastive loss coefficient λ₁ on Cora and CiteSeer. The dashed line represents the performance of GRACE with subgraph sampling.

Fig. 4. Effect of information regularization on Amazon-Photo. The left figure shows the model performance under different λ₂, and the right figure plots the training curve of ArieL under λ₂ = 0 and λ₂ = 1.0.
Since I(H₁, H₂) is only a lower bound of the mutual information, directly applying the above constraints is hard, so we only consider the constraints on the density ratio. Using the Markov relation for each node, we give the following theorem:

Theorem 1. For two graph views G₁ and G₂ independently transformed from the graph G, the density ratio of their node embeddings H₁ and H₂ satisfies f(H₂[i, :], H₁[i, :]) ≤ f(H₂[i, :], H[i, :]) and f(H₁[i, :], H₂[i, :]) ≤ f(H₁[i, :], H[i, :]), where H is the node embedding matrix of the original graph.

Proof. Following the Markov relation of each node, we get

Algorithm 1 Algorithm of ArieL
Input data: graph G = (A, X)
Input parameters: α, β, Δ_A, δ_X, λ₁, λ₂, γ, τ
Randomly initialize the graph encoder f
for iteration t = 0, 1, ... do
    Sample a subgraph G_t from G
    Generate two views G₁ and G₂ from G_t
    Generate the adversarial view G_adv according to Equations (18) and (17)
    Update the model f to minimize L(G₁, G₂, G_adv) in Equation (32)
    if (t + 1) mod τ = 0 then
        λ₁ ← γ · λ₁
    end if
end for
return: node embedding matrix H = f(A, X)
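The curriculum update in the last branch of Algorithm 1 (multiplying λ₁ by γ every τ iterations) can be traced with a small helper (our own illustration; the function name is hypothetical):

```python
def curriculum_lam1(lam1_init, gamma, tau, total_steps):
    # Returns the value of lam1 used at each training iteration;
    # lam1 is scaled by gamma whenever (t + 1) mod tau == 0.
    lam1, trace = lam1_init, []
    for t in range(total_steps):
        trace.append(lam1)
        if (t + 1) % tau == 0:
            lam1 *= gamma
    return trace
```

With the node-level setting γ = 1.1 and τ = 20, λ₁ grows by 10% every 20 iterations; with γ = 1 (the graph-level setting) it stays constant.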

Table 1. Node-level contrastive learning dataset statistics: the number of nodes, edges, node feature dimensions, and classes.

Table 2. Graph-level contrastive learning dataset statistics: the number of graphs, the average number of nodes and average degree, and the number of node feature dimensions and classes.

Table 4. Graph classification accuracy in percentage on four real-world datasets. We bold the results with the best mean accuracy. The methods above the double line are graph kernel methods, and the ones below the double line are unsupervised representation learning methods. The compared numbers are from the original papers under the same experimental setting.