Node Embedding Preserving Graph Summarization

Graph summarization is a useful tool for analyzing large-scale graphs. Some works have tried to preserve, on the summary graph, the original node embeddings that encode rich structural information about nodes. However, their algorithms are designed heuristically and come with no theoretical guarantee. In this article, we theoretically study the problem of preserving node embeddings on summary graphs. We prove that the node embeddings produced by three matrix-factorization-based methods on the original graph can be approximated from those of the summary graph, and we propose a novel graph summarization method, named HCSumm, based on this analysis. Extensive experiments on real-world datasets show that our method outperforms the state-of-the-art methods in preserving node embeddings.


INTRODUCTION
Graphs are widely used to represent various real-world objects and the relationships among them, including social networks, computer networks, and transportation networks. Graph-related applications have been widely studied in various fields [21,38]. Recent years have witnessed explosive growth in data size, and such large scale brings great challenges to processing, analyzing, and understanding graph data. To tackle this problem, some researchers resort to graph summarization. Given a graph G, graph summarization finds a compact representation of it. The typical form is a summary graph, obtained by grouping nodes in G into supernodes and aggregating edges in G into superedges. Figure 1 shows a small example of graph summarization: an original graph with nine nodes is summarized into a summary graph with three supernodes and six superedges. The summary graph is smaller and easier to process and analyze than the original graph and thus can be used to analyze the original graph [8,23,26,29,36].
Generally, a good summary graph is expected to keep the properties of the original graph. Most graph summarization methods aim to preserve the adjacency matrix. However, the adjacency matrix is only the most fundamental representation of a graph and fails to capture its high-order properties. Node embedding methods, in contrast, have shown great power in capturing structural properties and have become a fundamental tool in graph mining. Typically, node embedding methods learn low-dimensional representations of nodes in a graph, which can be used for various downstream tasks, such as link prediction, node classification, and anomaly detection. Moreover, graph summarization may capture high-order relations and help learn high-quality node embeddings [3,22]. Thus, it is important to preserve the node embeddings of the original graph in the summary graph.
Several studies have attempted to learn node embeddings for large-scale graphs by combining graph summarization and node embedding methods. They first summarize input graphs into smaller summary graphs and then learn summary embeddings on them, which are subsequently restored to approximate the original node embeddings. The main objective of these approaches is to preserve the node embeddings of the original graph in the summary graph. Despite their empirical success, these methods share a key limitation: they summarize input graphs heuristically and do not investigate the theoretical connection between input graphs and summary graphs.
In this work, we study this theoretical connection for node embedding methods. We analyze three matrix-factorization-based node embedding methods, namely, NetMF [33], DeepWalk [32], and LINE [40]. These methods learn node embeddings by factorizing the proximity matrix [33] of the input graph. By showing that the proximity matrix in these methods can be approximated by that of the summary graph, we provide a theoretical foundation for learning node embeddings via summary graphs. We further analyze the error introduced by summarization and relate it to a trace optimization problem. Based on this analysis, we propose a novel graph summarization method based on hierarchical clustering, named HCSumm, to minimize the error. We conduct extensive experiments on several real-world datasets and show that our method outperforms several state-of-the-art methods with better summaries.
In summary, our contributions include:
-Theory: We reveal the theoretical connection between the proposed scheme and three node-embedding learning methods, which provides a theoretical foundation for learning node embeddings via summary graphs.
-Method: Based on the theoretical analysis, we propose a graph summarization method, HCSumm, based on hierarchical clustering.
-Effectiveness: We perform extensive experiments on several real-world datasets, and the results show that our HCSumm algorithm outperforms the state-of-the-art methods with better node embedding preservation.
-Scalability: Our HCSumm algorithm runs fast and scales linearly in the size of graphs.

RELATED WORK

2.1 Graph Summarization
Graph summarization methods can be categorized along many dimensions. Here, we categorize them according to their objectives; see the comprehensive survey [25] for more on this topic.
Error of adjacency matrix: These methods try to minimize some error metric between the original and reconstructed adjacency matrices and are the main focus of this article. k-Gs [20] aims to find a summary graph with at most k supernodes such that the L1 reconstruction error is minimized. Riondato et al. [35] revealed the connection between the geometric clustering problem and the graph summarization problem under multiple error metrics (including the L1, L2, and cut-norm errors), and they proposed a polynomial-time approximate graph summarization method based on geometric clustering algorithms. Beg et al. [2] developed a randomized algorithm, SAA-Gs, using weighted sampling and count-min sketch [4] techniques to find promising node pairs efficiently. SpecSumm [27] reformulates the graph summarization problem as a trace optimization problem and proposes a spectral algorithm based on k-means clustering of the eigenvectors of the adjacency matrix.
Total edge number: In this kind of method, the objective function is defined as the number of edges in the summary graph plus edge corrections. In Reference [28], Navlakha et al. proposed two algorithms: Greedy and Randomized. The former considers all possible node pairs at each step and merges the best pair (u, v), i.e., the one yielding the greatest decrease in the total edge number. The latter samples a supernode u randomly at each step, checks all other supernodes, finds the best v, and merges the two. This process continues until the summary graph becomes smaller than a given size. However, both algorithms are computationally expensive. To address this problem, SWeG [39] reduces the search space by grouping supernodes according to their shingle values and only considers merging node pairs within the same group. Reference [44] further uses weighted LSH and scales to large graphs with tens of billions of edges.
Encoding length: These methods often adopt the MDL principle and use the total encoding length as the objective function, typically optimizing the total description length under their proposed encoding scheme. LeFevre and Terzi [20] formulated the graph summarization problem Gs based on the MDL principle and proposed three algorithms: Greedy, SamplePairs, and LinearCheck. Lee et al. [19] designed a dual-encoding scheme. Reference [18] adopted a vocabulary-based encoding scheme, which encodes the graph using patterns frequent in real-world graphs, such as cliques, stars, and bipartite cores. The methods mentioned above mainly focus on static simple graphs; other works aim to summarize further types of graphs, including dynamic graphs [1,34,37], attributed graphs [11,16,42], and streaming graphs [17,41].

Graph Summarization Preserving Node Embeddings
Some existing works aim to learn node embeddings via summary graphs [25,43]. The typical approach is to coarsen the original graph into a smaller summary graph and apply representation learning methods to it to obtain intermediate embeddings; the embeddings of the original nodes are then restored with a further refinement step. For example, HARP [3] finds a series of smaller graphs that preserve the global structure of the input graph and learns representations hierarchically. HSRL [9] learns embeddings on multi-level summary graphs and concatenates them to restore the original embeddings. MILE [22] repeatedly coarsens the input graph into smaller ones using a hybrid matching strategy and finally refines the embeddings via a GCN to obtain the original node embeddings. GPA [24] uses METIS [15] to partition the graphs and smooths the restored embeddings via a propagation process. GraphZoom [5] employs an extra graph fusion step to combine structural and feature information, and then uses a spectral coarsening method to merge nodes based on their spectral similarities; embeddings are then refined by a graph filter to ensure feature smoothness. Reference [7] learns embeddings of a given subset of nodes by coarsening the remaining nodes, and thus cannot learn embeddings of the remaining ones.

CR RECONSTRUCTION SCHEME
In this section, we first review some basic concepts of graph summarization and then introduce the configuration-based reconstruction (CR) scheme. We list the frequently used symbols in Table 1 for readability.

Graph Summarization and Reconstruction Scheme
Given an input graph G = (V, E) with n = |V| nodes, graph summarization aims to find a smaller summary graph G_s = (V_s, E_s) (with n_s = |V_s| nodes) that preserves the structural information of the original graph. The supernode set V_s forms a partition of the original node set V such that every node v ∈ V belongs to exactly one supernode S ∈ V_s. The supernodes are connected via superedges E_s, which are weighted by the sum of original edges between the constituent nodes. That is, the superedge weight A_s(k, l) between supernodes S_k and S_l is defined as A_s(k, l) = Σ_{i∈S_k} Σ_{j∈S_l} A(i, j). The degree of a supernode S_p is defined as the sum of node degrees within it, i.e., d^(s)_p = Σ_{i∈S_p} d_i. The adjacency matrix of the summary graph can be formulated using a membership matrix P ∈ R^{n_s×n} as A_s = P A P^T, where P(p, i) = 1 if v_i ∈ S_p and 0 otherwise. One could find a good summary graph by making it close to the original graph, for example, minimizing dis(G, G_s) for some distance metric dis. However, the summary graph and the original graph have different sizes, and it is difficult to directly compare two graphs of different sizes.
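As a concrete illustration, here is a minimal NumPy sketch of the aggregation step described above; the 6-node graph and its 2-supernode partition are made-up example data.

```python
import numpy as np

# Adjacency matrix of a small made-up graph with 6 nodes.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Partition into n_s = 2 supernodes: S_0 = {0, 1, 2}, S_1 = {3, 4, 5}.
membership = np.array([0, 0, 0, 1, 1, 1])
n, n_s = A.shape[0], 2

# Membership matrix P (n_s x n): P(p, i) = 1 iff v_i belongs to S_p.
P = np.zeros((n_s, n))
P[membership, np.arange(n)] = 1.0

# Superedge weights aggregate the original edges: A_s = P A P^T.
A_s = P @ A @ P.T
print(A_s)  # [[6. 1.] [1. 6.]]
```

Supernode degrees then follow as `P @ A.sum(axis=1)`, which matches the row sums of `A_s`.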
This issue can be avoided by introducing a reconstructed graph. Given the summary graph G_s, the original graph G can be approximated by the reconstructed graph G_r, whose adjacency matrix A_r is defined as A_r = Q A_s Q^T, where Q ∈ R^{n×n_s} is the reconstruction matrix. The reconstructed graph G_r has the same size as the original graph G and is directly comparable to it; for example, one can minimize the difference A − A_r under some matrix norm. Note that A_r can be seen as a low-rank approximation of the original A.
A simple and intuitive reconstruction method is the uniform reconstruction scheme, which is widely applied in current works. The corresponding Q and A_r are Q(i, k) = 1/|S_k| if v_i ∈ S_k (and 0 otherwise) and A_r(i, j) = A_s(k, l) / (|S_k| |S_l|), where S_k and S_l are the supernodes to which node i and node j belong, respectively. It can be seen from Equation (5) that the edges between two supernodes S_k and S_l, i.e., A_s(k, l), are equally assigned to each node pair between them, and each node pair has the same connection weight. Thus, this approach implicitly assumes the G(n, p) random graph model (equivalently, the Erdős-Rényi model) [6] and the SBM (stochastic block model) [12]. However, real-world graphs have highly skewed degree distributions; therefore, the uniform reconstruction scheme is not suitable for real-world graphs.
Thus, we introduce the configuration-based reconstruction scheme [45]. Different from the uniform scheme, it reconstructs A_r based on node degrees: A_r(i, j) = (d_i d_j / (D_k D_l)) A_s(k, l), where S_k and S_l are the supernodes to which node i and node j belong, respectively; d_i and d_j denote the degrees of nodes i and j; and D_k and D_l denote the degrees of supernodes S_k and S_l. The corresponding Q matrix is Q(i, k) = d_i / D_k if v_i ∈ S_k (and 0 otherwise). In this way, the reconstructed edge weight A_r(i, j) is proportional to the product of its endpoints' degrees. This approach is based on the configuration model [30] and the DC-SBM (degree-corrected stochastic block model) [14], which has proved successful in modularity-based community detection [31].
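To make the scheme concrete, the following NumPy sketch builds the configuration-based Q and A_r for a made-up toy graph and checks the degree-preservation claim numerically.

```python
import numpy as np

# Made-up toy graph and partition: S_0 = {0, 1, 2}, S_1 = {3, 4, 5}.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
membership = np.array([0, 0, 0, 1, 1, 1])
n, n_s = A.shape[0], 2

P = np.zeros((n_s, n))
P[membership, np.arange(n)] = 1.0
A_s = P @ A @ P.T               # summary adjacency

d = A.sum(axis=1)               # node degrees d_i
D = P @ d                       # supernode degrees D_k

# Reconstruction matrix Q (n x n_s): Q(i, k) = d_i / D_k iff v_i in S_k.
Q = np.zeros((n, n_s))
Q[np.arange(n), membership] = d / D[membership]

# A_r(i, j) = d_i d_j / (D_k D_l) * A_s(k, l), i.e., A_r = Q A_s Q^T.
A_r = Q @ A_s @ Q.T

# The CR scheme preserves node degrees exactly (Property 1 below).
print(np.allclose(A_r.sum(axis=1), d))  # True
```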
Note that the proposed CR scheme preserves the degrees of nodes, as shown below.

Property 1 (Degree Preservation). The reconstructed graph G_r has the same node degrees as the original graph G, i.e., Σ_j A_r(i, j) = d_i for every node i.

Proof. Suppose v_i ∈ S_k. Then Σ_j A_r(i, j) = Σ_l Σ_{j∈S_l} (d_i d_j / (D_k D_l)) A_s(k, l) = Σ_l (d_i / D_k) A_s(k, l) = d_i, since Σ_{j∈S_l} d_j = D_l and Σ_l A_s(k, l) = D_k.

Thus, we also call the CR scheme the degree-preserving scheme.

CONNECTION WITH NODE EMBEDDING METHODS
In this section, we present the connection between the proposed CR scheme and three matrix-factorization-based node embedding methods: DeepWalk [32], LINE [40], and NetMF [33]. In short, we show that learning node embeddings on a summary graph with restoration is equivalent to learning embeddings on the reconstructed graph under the CR scheme.

Matrix-factorization-based Node Embedding Methods
DeepWalk. DeepWalk [32] is an unsupervised graph representation learning method inspired by the success of word2vec in text embedding. It generates random walk sequences and treats them as sentences, which are then fed into a skip-gram model with negative sampling to learn latent node representations.

LINE. LINE [40] learns embeddings by optimizing a carefully designed objective function that aims to preserve both first-order and second-order proximity.
NetMF. NetMF aims to unify some node embedding methods into a matrix factorization framework [33]. It shows that DeepWalk is implicitly approximating and factorizing the following proximity matrix:
M = log( (vol(G) / (bT)) Σ_{τ=1}^{T} (D^{-1} A)^τ D^{-1} ),
where T and b are the context window size and the number of negative samples in DeepWalk, respectively, and vol(G) = Σ_i d_i is the volume of the graph.
Similarly, LINE is equivalent to factorizing a matrix of the same form and is a special case of DeepWalk with T = 1. Dropping the element-wise log and the constant factors, we extract a kernel matrix defined as
K_τ(G) = (D^{-1} A)^τ D^{-1},
where τ is a positive integer, and A and D are the adjacency matrix and degree matrix of G, respectively. We omit the subscript τ when there is no ambiguity.
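A direct NumPy transcription of this kernel, using the definition above; the 4-node path graph is made-up example data:

```python
import numpy as np

def kernel_matrix(A, tau):
    """K_tau(G) = (D^{-1} A)^tau D^{-1}: the kernel left after dropping
    the element-wise log and constant factors from the NetMF matrix."""
    d = A.sum(axis=1)
    D_inv = np.diag(1.0 / d)
    return np.linalg.matrix_power(D_inv @ A, tau) @ D_inv

# Made-up 4-node path graph.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

K1 = kernel_matrix(A, 1)   # tau = 1 corresponds to the LINE-style matrix
K2 = kernel_matrix(A, 2)   # higher tau covers longer random-walk contexts
```

Summing `kernel_matrix(A, t)` over t = 1..T and re-applying the scale and log recovers the full proximity matrix above.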

Approximating Kernel Matrix
Now, we show that, under the configuration-based reconstruction scheme (see Equations (6) and (7)), the kernel matrix on the original graph, K(G), can be approximated with the same kernel matrix on the summary graph, K(G_s), in closed form.

Theorem 1. Given A_r (reconstructed by the configuration-based scheme, see Equation (6)) as a low-rank approximation of the original adjacency matrix A, the kernel matrix of G can be approximated by the one on G_s as
K_τ(G) ≈ K_τ(G_r) = R K_τ(G_s) R^T,
where R ∈ R^{n×n_s} is the restoration matrix with R(i, p) = 1 if v_i ∈ S_p and 0 otherwise.

Proof. See Appendix.
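For intuition, the identity K_τ(G_r) = R K_τ(G_s) R^T behind Theorem 1 can be checked numerically; the toy graph and partition below are made-up data, and the kernel follows the definition given earlier in this section.

```python
import numpy as np

def kernel(A, tau):
    d = A.sum(axis=1)
    D_inv = np.diag(1.0 / d)
    return np.linalg.matrix_power(D_inv @ A, tau) @ D_inv

# Made-up toy graph with partition S_0 = {0, 1, 2}, S_1 = {3, 4, 5}.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
membership = np.array([0, 0, 0, 1, 1, 1])
n, n_s = A.shape[0], 2

P = np.zeros((n_s, n)); P[membership, np.arange(n)] = 1.0
A_s = P @ A @ P.T
d = A.sum(axis=1); D = P @ d

# CR reconstruction and the 0/1 restoration matrix R.
Q = np.zeros((n, n_s)); Q[np.arange(n), membership] = d / D[membership]
A_r = Q @ A_s @ Q.T
R = P.T.copy()                       # R(i, p) = 1 iff v_i in S_p

# Degrees are preserved by the CR scheme, so both sides use consistent degrees.
for tau in (1, 2, 3):
    assert np.allclose(kernel(A_r, tau), R @ kernel(A_s, tau) @ R.T)
print("Theorem 1 identity holds on the toy graph")
```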

Approximating Node Embeddings
Based on Theorem 1, we now discuss how to approximate the node embeddings of the original nodes. Since DeepWalk and LINE can be viewed as special cases of NetMF, we focus on NetMF in the following discussion.
Theorem 2. Embeddings E learned by NetMF on the original graph G can be approximated by embeddings E_s learned by NetMF on the summary graph G_s, using the restoration matrix R in Equation (12), i.e., E ≈ R E_s.

Proof. Consider A_r as a low-rank approximation of A and replace A by A_r in the NetMF matrix. By Theorem 1, the matrix M that DeepWalk factorizes on G can be approximated as M ≈ R M_s R^T, where M_s is the corresponding matrix that DeepWalk factorizes on the summary graph. Factorizing M_s yields the summary embeddings E_s, so the embeddings of the original graph G can be approximated with the restoration matrix R as E ≈ R E_s.

According to Theorem 2 and the definition of R (R(i, p) = 1 if v_i ∈ S_p), nodes in the same supernode get the same embeddings after restoration. This is exactly how related works (including HARP, MILE, and GraphZoom) restore embeddings. Thus, Theorem 2 provides a theoretical interpretation for the restoration step of existing methods.
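The restoration step E ≈ R E_s can be sketched as follows; the summary graph and the SVD-based "summary embeddings" are made-up stand-ins for embeddings actually learned by NetMF:

```python
import numpy as np

# Made-up setting: 6 nodes in 2 supernodes, with the summary adjacency below.
membership = np.array([0, 0, 0, 1, 1, 1])
n, n_s = 6, 2
A_s = np.array([[6.0, 1.0],
                [1.0, 6.0]])

# Stand-in for embeddings learned on the summary graph: factorize its kernel.
D_s_inv = np.diag(1.0 / A_s.sum(axis=1))
K_s = D_s_inv @ A_s @ D_s_inv
U, S, _ = np.linalg.svd(K_s)
E_s = U * np.sqrt(S)                 # summary embeddings, one row per supernode

# Restoration matrix R (n x n_s): R(i, p) = 1 iff v_i in S_p.
R = np.zeros((n, n_s))
R[np.arange(n), membership] = 1.0

E = R @ E_s                          # restored embeddings of the original nodes

# All members of a supernode receive that supernode's embedding.
assert np.allclose(E[0], E[1]) and np.allclose(E[3], E[5])
```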

PROPOSED METHODS
In this section, we first reveal that the error of the kernel matrix is closely related to the error of the normalized adjacency matrix. Then, by showing that the latter error is bounded by a trace maximization objective, we propose the summarization method HCSumm based on hierarchical clustering in the spectral domain.

Kernel Matrix Error Analysis
From the previous section, we know that the kernel matrix is closely related to many graph properties and graph mining tasks; hence, it is important to preserve the kernel matrix of the original graph. One may ask: how much error in the kernel matrix is introduced by replacing A with A_r? The following theorem gives a bound.

Theorem 3. By replacing A with A_r, the error of the kernel matrix is bounded by
||K_τ(G) − K_τ(G_r)||_F ≤ τ c ||Â − Â_r||_F,
where Â = D^{-1/2} A D^{-1/2} and Â_r = D^{-1/2} A_r D^{-1/2} are the normalized adjacency matrices, and c is a constant depending only on the input graph.

Proof. Note that the kernel matrix can be rewritten as K_τ(G) = D^{-1/2} Â^τ D^{-1/2}. Decomposing the difference of the τ-th powers, we have
||Â^τ − Â_r^τ||_F ≤ ||Â||_2 ||Â^{τ−1} − Â_r^{τ−1}||_F + ||Â − Â_r||_F ||Â_r^{τ−1}||_2.

Applying this recursively, and using the fact that the spectral norms of Â and Â_r are at most 1, we have
||Â^τ − Â_r^τ||_F ≤ τ ||Â − Â_r||_F.
The constant c absorbs the norms of the outer D^{-1/2} factors, which depend only on the input graph.

HCSumm
Theorem 3 states that the error of the τ-order kernel matrix is bounded by τ times the error of the normalized adjacency matrix. Hence, we aim to design algorithms minimizing ||Â − Â_r||_F to preserve the kernel matrix.
Lemma 1. Let Â_r = D^{-1/2} A_r D^{-1/2} be the normalized adjacency matrix of G_r. Then Â_r can be written as Â_r = Π Â Π, where Π = Y Y^T is the projection matrix onto the column space of D^{1/2} P^T, with Y = D^{1/2} P^T (P D P^T)^{-1/2}.

Proof. By definition, Â_r = D^{-1/2} A_r D^{-1/2} = D^{-1/2} Q A_s Q^T D^{-1/2}. Substituting A_s = P A P^T and the definition of Q, and given the definition of Π in Equation (19), we have Â_r = Π Â Π.

From the above lemma, the error of the normalized adjacency matrix can be formulated as ||Â − ΠÂΠ||_F, which is further bounded by
||Â − ΠÂΠ||_F ≤ ||Â − ΠÂ||_F + ||ΠÂ − ΠÂΠ||_F ≤ 2 ||Â − ΠÂ||_F.
Although there is a factor of 2, we find that the two terms are very close in practice. Hence, it is a good choice to use ||Â − ΠÂ||_F as an approximation of ||Â − ΠÂΠ||_F.
||Â − ΠÂ||_F is easier to analyze and equivalent to a trace optimization problem:
||Â − ΠÂ||²_F = tr(Â²) − tr(Y^T Â² Y).
Since tr(Â²) is a constant, minimizing ||Â − ΠÂ||²_F is equivalent to maximizing tr(Y^T Â² Y) subject to Y^T Y = I, which is a trace maximization problem. If we relax the constraint that Y must be a discrete solution obtained from a summary graph, then this trace maximization problem can be solved by computing the k leading eigenvectors of Â² using the Rayleigh-Ritz theorem. Since Â² and Â share eigenvectors, we can use the first k singular vectors of Â instead to avoid forming Â².
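The trace identity above holds for any Y with orthonormal columns and any symmetric matrix in place of the normalized adjacency; a quick numeric check with random made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Random symmetric matrix standing in for the normalized adjacency matrix.
M = rng.standard_normal((n, n))
A_hat = (M + M.T) / 2.0

# Random Y with orthonormal columns (Y^T Y = I) and the projection Pi = Y Y^T.
Y, _ = np.linalg.qr(rng.standard_normal((n, k)))
Pi = Y @ Y.T

lhs = np.linalg.norm(A_hat - Pi @ A_hat, "fro") ** 2
rhs = np.trace(A_hat @ A_hat) - np.trace(Y.T @ A_hat @ A_hat @ Y)
assert np.isclose(lhs, rhs)   # minimizing the error = maximizing tr(Y^T A^2 Y)
```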
To obtain the discrete solution from the continuous one, the typical way is to use the k-means algorithm to partition the rows of Y into k clusters. However, the cluster number in k-means is usually small compared to the summary graph size in the graph summarization problem, which makes k-means insufficient in our scenario. Thus, we use hierarchical clustering with Ward linkage (also known as Ward's method) instead. Ward's method is a hierarchical clustering algorithm sharing the same objective function as k-means but working bottom-up: it starts with each data point as its own cluster and iteratively merges the cluster pair with the minimal cost increment.
Based on the above analysis, we propose a graph summarization algorithm, HCSumm, using hierarchical clustering, described in Algorithm 1. First, it computes the first d singular vectors of the normalized adjacency matrix; to enhance efficiency, we use randomized SVD [10] instead of eigen-decomposition to calculate the singular vectors. Then, it clusters the rows of the singular vector matrix Z into k clusters using Ward's method. Finally, the summary graph is constructed according to the resulting partition P and returned.
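A compact Python sketch of this pipeline follows; `scipy.sparse.linalg.svds` stands in for the randomized SVD of [10], and the 12-node ring graph is made-up input:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds
from scipy.cluster.hierarchy import linkage, fcluster

def hcsumm(A, k, d):
    """Sketch of HCSumm (Algorithm 1): spectral features + Ward clustering."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt          # normalized adjacency
    U, _, _ = svds(A_hat, k=d)                   # d leading singular vectors
    part = fcluster(linkage(U, method="ward"),   # bottom-up Ward clustering
                    t=k, criterion="maxclust") - 1
    n = A.shape[0]
    P = sp.csr_matrix((np.ones(n), (part, np.arange(n))), shape=(k, n))
    return (P @ A @ P.T).toarray(), part         # summary adjacency + partition

# Made-up input: a 12-node ring graph summarized into k = 3 supernodes.
n = 12
rows = np.arange(n); cols = (rows + 1) % n
A = sp.csr_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = A + A.T
A_s, part = hcsumm(A, k=3, d=2)
print(A_s.shape)
```

Note that the total edge weight is conserved by construction: `A_s.sum()` equals `A.sum()` for any partition.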
Algorithm 1 still bears an efficiency problem on large input graphs, since Ward's method needs to keep track of all pairwise distances between clusters. Thus, we propose HCSumm-Large (Algorithm 2) for large-scale graphs using a degree heuristic. At each step, it chooses a node x with the minimum degree and finds the node y nearest to x; to find the closest node to x, we use the faiss [13] library and build a simple IVF index on Z. Then, it merges the two nodes together and repeats this process until all nodes are merged into k supernodes.
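The merging loop of HCSumm-Large can be sketched as below; brute-force nearest-neighbor search is a made-up stand-in for the faiss IVF index, and the embedding-update rule on merge is our own simple choice (the paper's exact update is not specified here):

```python
import numpy as np

def hcsumm_large(Z, deg, k):
    """Degree-heuristic merging: repeatedly merge the minimum-degree
    supernode with its nearest neighbor in the spectral embedding Z."""
    n = Z.shape[0]
    Z = Z.astype(float).copy()
    deg = deg.astype(float).copy()
    members = {i: [i] for i in range(n)}   # supernode -> original nodes
    alive = set(range(n))
    while len(alive) > k:
        x = min(alive, key=lambda v: deg[v])        # minimum-degree supernode
        others = [v for v in alive if v != x]
        dists = np.linalg.norm(Z[others] - Z[x], axis=1)
        y = others[int(np.argmin(dists))]           # nearest supernode to x
        deg[y] += deg[x]                            # merge x into y
        Z[y] = (Z[y] + Z[x]) / 2.0                  # simple embedding update
        members[y].extend(members.pop(x))
        alive.remove(x)
    return [members[v] for v in alive]

# Made-up spectral features and degrees for 10 nodes, merged down to k = 3.
rng = np.random.default_rng(1)
Z = rng.standard_normal((10, 2))
deg = rng.integers(1, 20, size=10)
supernodes = hcsumm_large(Z, deg, k=3)
print(len(supernodes))  # 3
```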

EXPERIMENTS
In this section, we design experiments to answer the following research questions:
-Summary Quality: How well does HCSumm preserve the normalized adjacency matrix of input graphs?
-Node Embedding Preservation: How well does HCSumm preserve the node embeddings of input graphs?
-Scalability: How does HCSumm scale with the input graph size?

Experimental Setup
Datasets. We evaluate HCSumm on four real-world social network datasets frequently used in node embedding learning. The statistics of these datasets are shown in Table 2. Cora is a citation network of machine learning papers, where labels are the research areas of the papers. BlogCatalog is a social network of bloggers on the BlogCatalog website, where labels are the interests of bloggers.

Baselines. We compare HCSumm with two baselines, GraphZoom and SpecSumm. GraphZoom is the state-of-the-art graph summarization method for learning node embeddings and shows significantly better performance than earlier methods such as HARP and MILE. SpecSumm shares a similar approach with HCSumm but aims to minimize the reconstruction error of the adjacency matrix: it computes the first d eigenvectors of the adjacency matrix and uses mini-batch k-means to obtain the summary graph.

Summary sizes.
We summarize input graphs with different summary sizes and evaluate their quality. To make a fair comparison, we should evaluate the summary quality of different methods at the same summary sizes. For HCSumm and SpecSumm, the summary size is a parameter that can be set by users. GraphZoom, however, is a multi-level summarization method that produces a summary graph of a fixed size at each level; users can only set the number of levels, not the summary sizes (see the original paper [5] for details). Thus, to make a fair comparison, we set the summary sizes in HCSumm and SpecSumm to the same values as GraphZoom's summary sizes at different levels.
Implementation details. We implement HCSumm in Python 3. For SpecSumm and GraphZoom, we use the source code released by the authors. All experiments are performed on a machine with a 2.4 GHz Intel Xeon E5-2640 CPU and 128 GB memory. GraphZoom has a variant that utilizes node features; we use this feature-fusion version on Flickr, which has node features, and the vanilla version on the other datasets, which do not. For our method, we run the vanilla HCSumm (Algorithm 1) on the BlogCatalog dataset and HCSumm-Large (Algorithm 2) on the other two datasets.

Summary Quality
We first evaluate the summary graph quality of different methods. Two metrics, ||A − ΠAΠ||_F and ||A − ΠA||_F, are used to measure the quality. The former is the Frobenius norm of the difference between the original normalized adjacency matrix and the reconstructed one, and the latter is the objective function in the trace optimization problem (see Equation (20)). Due to the memory limit, we only evaluate on the BlogCatalog and Flickr datasets.
Experimental Results. The results are listed in Table 4. From the table, we notice that the error 1 and error 2 terms, i.e., ||A − ΠA||_F and ||A − ΠAΠ||_F, are very close, and their ratio is far from the theoretical bound of 2. Thus, it is reasonable to use ||A − ΠA||_F as a surrogate for ||A − ΠAΠ||_F. On the BlogCatalog and Cora datasets, our method achieves the smallest error measures. On the Flickr dataset, HCSumm always outperforms SpecSumm. Compared to GraphZoom, HCSumm does not achieve the smallest error when the summary size is 22,525; as the summary size decreases, the error of HCSumm gradually approaches that of GraphZoom and outperforms it when the summary size is 2,954.

Kernel Matrix Error.
We also calculate the kernel matrix error (see Equation (9)) at different summarization ratios. The kernel matrix error is defined as the Frobenius norm of the difference between the original kernel matrix and the restored kernel matrix of the summary graph. Since the node embeddings are derived directly from the kernel matrix, this error reflects how well the node embeddings are preserved by different methods. Because the kernel matrix is dense, memory limits restrict this evaluation to the BlogCatalog dataset. The results are shown in Figure 2: HCSumm achieves the smallest kernel matrix error and thus preserves the node embeddings best. This result is consistent with the node classification performance in the next section.

Node Embedding Preservation
In this experiment, we evaluate how well HCSumm preserves node embeddings, using downstream node classification tasks. We run NetMF and DeepWalk on the summary graphs and restore the embeddings of the original nodes (Equation (15)). Then, we use the restored embeddings to train a logistic regression classifier and evaluate its performance. We set the training ratios to {0.20, 0.40, 0.60, 0.80} on the BlogCatalog and Cora datasets and {0.02, 0.04, 0.06, 0.08} on the Flickr and YouTube datasets, and we report the mean accuracy (for Cora) and the micro-f1 and macro-f1 scores (for the other three datasets) averaged over five runs. The dimension of the embeddings is set to 128 in all experiments. We do not run SpecSumm on the YouTube dataset due to its long running time on such a large graph.

Experimental Results (NetMF). Mean micro-f1 and macro-f1 scores over five runs are shown in Figure 3. Both scores drop after summarization. Compared to the baselines, our HCSumm method suffers the smallest drop on the Cora, BlogCatalog, and Flickr datasets. On YouTube, although GraphZoom outperforms HCSumm at one summary size, its performance is unstable across summary sizes: for example, its micro-f1 score drops below 0.28 when the summary size is 48,532 and goes up to 0.34 when the summary size is 21,669. In contrast, HCSumm achieves stable and relatively good performance at all summary sizes. Overall, HCSumm preserves the node embedding information better than the baselines, consistent with the results in the previous section.

Experimental Results (DeepWalk). Similar to NetMF, we report the mean micro-f1 and macro-f1 scores over five runs in Figure 4. HCSumm outperforms the baselines on the Cora, BlogCatalog, and Flickr datasets and is slightly worse than GraphZoom only on YouTube. In general, HCSumm preserves the node embeddings better than the baselines, consistent with the results in the previous section.

Scalability
In this experiment, we evaluate the efficiency and scalability of HCSumm. We sample graphs with sizes ranging from 1,000 to 1 million nodes from the largest dataset, YouTube, and record the running time of HCSumm on these graphs. The average running time over 5 runs is reported in Figure 5. It can be seen that the running time of HCSumm grows linearly with the graph size.

CONCLUSION
In this work, we study the connection between graph summarization and node embedding learning. We reveal that three matrix-factorization-based node embeddings (DeepWalk, LINE, and NetMF) of the original graph and the summary graph are closely related via a configuration-based reconstructed graph. We analyze the upper bound of the node embedding error and propose HCSumm to summarize input graphs while preserving node embeddings. Extensive experiments on real-world datasets show that HCSumm preserves node embeddings better than the baselines. Overall, our study helps in understanding existing works on learning node embeddings via graph summarization and provides theoretical insights for future work on this problem.

APPENDIX

Proof of Lemma 2. The (p, q)th entry of Q^T D^{-1} Q is Σ_i Q(i, p) (1/d_i) Q(i, q). It is easy to see that the result is nonzero only when p = q, since a node v_i cannot belong to two supernodes S_p and S_q simultaneously. The diagonal entries are (note that d^(s)_p = Σ_{v_i∈S_p} d_i):
Σ_{v_i∈S_p} (d_i/D_p)(1/d_i)(d_i/D_p) = Σ_{v_i∈S_p} d_i / D_p² = 1/D_p.
Thus, Q^T D^{-1} Q = D_s^{-1}.

Proof of Lemma 3. Suppose v_i ∈ S_k. Then the (i, k)th entry of R D_s^{-1} is 1/D_k, and the (i, k)th entry of D^{-1} Q is (1/d_i)(d_i/D_k) = 1/D_k; all other entries of both matrices are zero. Thus, R D_s^{-1} = D^{-1} Q.

Now, we prove Theorem 1. Denote K_τ(G_r) = (D^{-1} A_r)^τ D^{-1} for convenience.
We prove by induction. When τ = 1,
K_1(G_r) = D^{-1} A_r D^{-1} = D^{-1} Q A_s Q^T D^{-1} = R D_s^{-1} A_s D_s^{-1} R^T = R K_1(G_s) R^T,
using Lemma 3 twice. Suppose the claim holds for τ = i, i.e., K_i(G_r) = R K_i(G_s) R^T. For the case τ = i + 1,
K_{i+1}(G_r) = D^{-1} A_r K_i(G_r) = D^{-1} Q A_s Q^T R K_i(G_s) R^T = R D_s^{-1} A_s K_i(G_s) R^T = R K_{i+1}(G_s) R^T,
where the last step uses Lemmas 2 and 3 (in particular, Q^T R = Q^T D^{-1} Q D_s = I). Applying the principle of induction finishes the proof.

In Table 4, r stands for the summary ratio; error 1 and error 2 refer to ||A − ΠA||_F and ||A − ΠAΠ||_F, respectively.

Fig. 2. Kernel matrix error on the BlogCatalog dataset. The x-axis is the summary graph size. Our method achieves the smallest kernel matrix error.

6.3.2 DeepWalk. Parameter Settings. The window size T and the number of negative samples b are set to 10 and 1, respectively. The number of walks per node is set to 10 and the length of each walk is 80.

Before we prove Theorem 1, we first introduce Lemmas 2 and 3.

Lemma 2. Q^T D^{-1} Q = D_s^{-1}, where Q is the reconstruction matrix (Equation (7)), and D and D_s are the degree matrices of the original graph and the summary graph, respectively.

Lemma 3. R D_s^{-1} = D^{-1} Q, where R is the restoration matrix (Equation (12)).

Table 1. Major Symbols and Definitions

Algorithm 1 HCSumm
Input: Graph G = (V, E, A), summary graph size k, singular vector number d
Output: Summary graph G_s
1: Â ← normalized adjacency matrix of G
2: Z ← randomizedSVD(Â, d)
3: P ← partition the rows of Z into k clusters using Ward's method
4: G_s ← construct the summary graph using P
5: return G_s

Algorithm 2 HCSumm-Large
Input: Graph G = (V, E, A), summary graph size k, singular vector number d
Output: Summary graph G_s
1: Â ← normalized adjacency matrix of G
2: Z ← randomizedSVD(Â, d)
3: n ← |V|
4: while n > k do
5:   x ← arg min_v deg(v)
6:   y ← the node nearest to x, found via the index on Z
7:   Merge x and y
8:   n ← n − 1
9: end while
10: return G_s

Table 2. Dataset Statistics. Flickr is a user social network on the Flickr website, where labels are user interest groups. YouTube is a network of users on the YouTube website, where labels are user interest tags.

Table 4. Error Measures of Summary Graphs