On the Maximal Independent Sets of k-mers with the Edit Distance

In computational biology, k-mers and edit distance are fundamental concepts. However, little is known about the metric space of all k-mers equipped with the edit distance. In this work, we explore the structure of the k-mer space by studying its maximal independent sets (MISs). An MIS is a sparse sketch of all k-mers with nice theoretical properties, and therefore admits critical applications in clustering, indexing, hashing, and sketching large-scale sequencing data, particularly those with high error-rates. Finding an MIS is a challenging problem, as the size of a k-mer space grows geometrically with respect to k. We propose three algorithms for this problem. The first and the most intuitive one uses a greedy strategy. The second method implements two techniques to avoid redundant comparisons by taking advantage of the locality-property of the k-mer space and the estimated bounds on the edit distance. The last algorithm avoids expensive calculations of the edit distance by translating the edit distance into the shortest path in a specifically designed graph. These algorithms are implemented and the calculated MISs of k-mer spaces and their statistical properties are reported and analyzed for k up to 15. Source code is freely available at https://github.com/Shao-Group/kmerspace.


Introduction
Given an alphabet Σ, we denote by S_k the set of all sequences of length k over Σ. Clearly, |S_k| = |Σ|^k. In this work, we call a sequence of length k a k-mer, and call S_k the k-mer space, for the sake of convenience. The Levenshtein distance [9], also known as the edit distance, between two sequences u and v, denoted as edit(u, v), is defined as the minimum number of insertions, deletions, and substitutions needed to transform u into v. The edit distance is a metric; the k-mer space together with the edit distance, (S_k, edit), forms a metric space.
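For concreteness, edit(u, v) can be computed with the standard dynamic-programming recurrence; a minimal Python sketch (the function name edit is ours):

```python
def edit(u: str, v: str) -> int:
    """Levenshtein distance between u and v via dynamic programming.

    Keeps a single row of the DP table; dp[j] holds the distance
    between the current prefix of u and v[:j].
    """
    m, n = len(u), len(v)
    dp = list(range(n + 1))  # distances from the empty prefix of u
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev is the diagonal entry dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # delete u[i-1]
                        dp[j - 1] + 1,                      # insert v[j-1]
                        prev + (u[i - 1] != v[j - 1]))      # substitute (or match)
            prev = cur
    return dp[n]
```

For instance, edit("TGATT", "ATTGA") = 4, an example revisited in the BFS-based algorithm below.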
Despite the ubiquitous use of k-mers and the edit distance in computational biology, the properties and structures of the metric space (S_k, edit) remain largely unknown. It is worth noting that (S_k, edit) is intrinsically different from the well-studied metric spaces on sequences such as the Hamming distance space or other normed spaces (when a k-mer is represented as a vector of dimension k), as evidenced by the fact that embedding the edit distance on {0, 1}^k into the L_1 space requires a distortion of Ω(log k) [14, 8].
In this work, we study the maximal independent sets (MISs) of a k-mer space, parameterized with an integer d ≥ 0. Formally, we say a subset M ⊂ S_k is independent if edit(u, v) > d for every two k-mers u and v in M, and we say an independent subset M is maximal if there does not exist another independent subset M′ such that M ⊊ M′. These definitions can be equivalently stated in the language of graphs. Given an integer d ≥ 0, we define an undirected graph G_k^d = (S_k, E_k^d) where there is an edge (u, v) ∈ E_k^d if and only if edit(u, v) ≤ d. The MISs of the k-mer space with respect to d as defined above are exactly the MISs of the graph G_k^d. We address the problem of finding an MIS of a k-mer space. Solving this problem helps understand the structure of the metric space (S_k, edit), as an MIS indicates how dense a k-mer space is under the edit distance. More importantly, finding MISs of G_k^d has critical applications in large-scale sequence analysis thanks to their nice properties. We elaborate on a few below.
First, an MIS M naturally can be used as a set of "centers" for clustering k-mers. The centers (k-mers in M) are distant from each other, which means that clusters are not crammed together. In addition, every k-mer v can find a "nearby" center, as there exists a k-mer u ∈ M such that edit(u, v) ≤ d (otherwise M ∪ {v} is an independent set that strictly contains M). These properties make an MIS an ideal set of centers for clustering k-mers.
Second, k-mers of an MIS can be used to design a new seeding scheme for aligning error-prone sequencing reads. Modern fast aligners often use exact k-mer matches as anchors (or seeds) [3, 10]. However, such methods exhibit low sensitivity when aligning reads with high error rates (e.g., long-read data generated by PacBio [16] and Oxford Nanopore [5] protocols), because under a high error rate, two biologically related sequences hardly share any identical k-mers (for a reasonable choice of k such as k = 15 and an error rate of 15%). One can choose a smaller k, but this leads to high false positives, as unrelated sequences can, by chance, share many short k-mers. A new seeding scheme can be designed by first mapping each k-mer to its nearest k-mer u in an MIS M: two k-mers form an anchor if they are mapped to the same k-mer in M. The advantage of this MIS-based seeding scheme is that it tolerates edits in k-mers and is therefore more sensitive for error-prone sequencing data. Compared to other k-mer-alternative seeding methods such as spaced seeds [2, 11] and indel seeds [12], where only restricted types of edits are allowed (for example, spaced seeds cannot model indels), this MIS-based scheme can recognize similar k-mers with both indels and substitutions. Moreover, the intrinsic properties of an MIS also control the false positives: by the triangle inequality, two k-mers that are more than 2d edits apart will never be mapped to the same k-mer.
Third, an MIS of the k-mer space can be used to improve sketching approaches. Sketching enables scaling to large datasets by selecting only a subset of representative k-mers in a sequence. Existing k-mer sketching methods such as FracMinHash [1, 4] often use a random permutation of k-mers and pick the top fraction as representative k-mers. By using an MIS as the representative subset instead, the selected k-mers are guaranteed to be more than d edits apart from each other, and hence form a less redundant sketch.
All of the above applications require finding an MIS in the first place. However, computing an MIS of a k-mer space is difficult, simply because the number of k-mers |S_k| = |Σ|^k (equivalently, the number of vertices in the graph G_k^d) grows exponentially with k, and the number of edges |E_k^d| grows even faster. Existing algorithms for finding an MIS in a general graph, for example, the greedy algorithm that takes O(m) time for a graph with m edges, are therefore not suitable for our problem. In fact, even constructing E_k^d explicitly is often unaffordable. In this work, we design three algorithms for finding an MIS of a k-mer space. Our algorithms take into account the special properties of k-mers and the edit distance so that the underlying graph G_k^d does not need to be explicitly constructed. Our algorithms are able to scale to instances with k = 15. We implemented these algorithms, calculated an MIS for all combinations of k and d with k ≤ 15, and reported their statistics. We also analyzed which algorithm to use for different combinations of k and d.

Algorithms
We design three algorithms for finding an MIS of all k-mers, given k and d. The first one is a greedy algorithm that is similar to the greedy algorithm for computing an MIS on general graphs, but without explicitly building all edges of the underlying graph. The second algorithm improves the first one by reducing redundant comparisons, through recognizing the locality properties of k-mers and incorporating estimated bounds on the edit distance. The last algorithm transforms calculating the edit distance into finding the shortest path in a specifically constructed graph; finding an MIS is then reduced to efficient graph traversal together with accompanying data structures for speed.

A Simple Greedy Algorithm
This algorithm maintains a (dynamic) array M that stores the current MIS, initialized as an empty array. The algorithm examines each k-mer in S_k: for the current k-mer v, it compares v with all k-mers in M, and adds v to M if edit(u, v) > d for every u ∈ M. We refer to this procedure as Algorithm 1.
We now show that this algorithm is correct, i.e., the returned M is indeed an MIS. According to the algorithm, a k-mer v is added to M only if edit(u, v) > d for all u ∈ M. This means that any two k-mers in M have an edit distance larger than d, i.e., M is independent. On the other hand, for any k-mer v not included in M, the algorithm guarantees that there exists a k-mer u in M such that edit(u, v) ≤ d; this implies that M is maximal.
The running time is output-sensitive: the algorithm favors instances with a small MIS, since each examined k-mer is compared against at most |M| selected k-mers.
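A minimal Python sketch of this greedy procedure (not the authors' implementation; the edit-distance routine and function names are ours, and the alphabet is assumed to be {A, C, G, T}):

```python
from itertools import product

def edit(u: str, v: str) -> int:
    """Levenshtein distance via dynamic programming (one row at a time)."""
    n = len(v)
    row = list(range(n + 1))
    for i, a in enumerate(u, 1):
        prev, row[0] = row[0], i
        for j, b in enumerate(v, 1):
            prev, row[j] = row[j], min(row[j] + 1,        # deletion
                                       row[j - 1] + 1,    # insertion
                                       prev + (a != b))   # substitution/match
    return row[n]

def greedy_mis(k: int, d: int, alphabet: str = "ACGT") -> list:
    """Greedily build an MIS of the k-mer space under the edit distance."""
    M = []
    for tup in product(alphabet, repeat=k):
        v = "".join(tup)
        # add v only if it is more than d edits from every chosen k-mer
        if all(edit(u, v) > d for u in M):
            M.append(v)
    return M
```

For example, greedy_mis(3, 2) returns a set of 3-mers that are pairwise more than 2 edits apart, while every other 3-mer lies within 2 edits of some member.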

An Improved Greedy Algorithm
We say a k-mer v is mapped to a k-mer u in the (partially) constructed MIS M if edit(u, v) ≤ d. Such a mapping explains why v is not selected into the MIS. Algorithm 1 essentially finds a mapping for each k-mer v ∈ S_k \ M using an iterative, exhaustive search. As the size of an MIS increases, this can significantly impair the performance of the algorithm. Observe that the search order matters: if a mapping of a k-mer is found early, all the following comparisons can be avoided. Furthermore, based on the results of previous comparisons, it can be inferred that some k-mers in the MIS are close to v while some others are too far away to serve as mappings. We design two techniques that allow us to quickly determine if v can be mapped to some k-mer in M, by utilizing the locality properties, and to quickly filter out those k-mers in M that v cannot be mapped to, by estimating bounds on the edit distance. These two techniques serve as extensions to the first algorithm to avoid unnecessary comparisons.
The first technique is based on the observation that a k-mer is likely to have a shared mapping with its "neighboring" k-mers. For a k-mer v, we define its nearest neighbors to be the subset of S_k in which each k-mer has edit distance 1 from v, i.e., differs by a single substitution. For any given k-mer, clearly, its nearest neighbor set contains k(|Σ| − 1) k-mers. For example, the set of nearest neighbors of the 3-mer AAA is {AAC, AAG, AAT, ACA, AGA, ATA, CAA, GAA, TAA}. If AAA is selected into the MIS, all of the above 9 k-mers will have it as a shared mapping. This locality-property suggests a modification to Algorithm 1: store the mapping information of k-mers (once found) so that potential shared mappings can be examined first. Note that a k-mer can have multiple valid mappings in the MIS, but we only keep record of one to save both time and space. When checking whether a k-mer v should be added to the MIS or not, we first compute the nearest neighbors of v and check if v shares a mapping with them. If a shared mapping is found, we store this mapping information for v and directly conclude that v cannot be included in the MIS.
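Enumerating the nearest neighbors of a k-mer is straightforward; a sketch (the helper name is ours):

```python
def nearest_neighbors(v: str, alphabet: str = "ACGT") -> list:
    """All k-mers at edit distance exactly 1 from v via a single substitution."""
    out = []
    for i, c in enumerate(v):
        for s in alphabet:
            if s != c:
                out.append(v[:i] + s + v[i + 1:])
    return out
```

As stated above, the result always contains k(|Σ| − 1) k-mers; for v = AAA over the DNA alphabet that is 3 × 3 = 9 neighbors.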
The second technique uses estimated bounds on the edit distance. Recall that (S_k, edit) is a metric space; in particular, the triangle inequality holds. Let σ^k be the k-mer that is purely composed of the character σ. For any two k-mers u, v ∈ S_k and any σ ∈ Σ, we have

edit(u, v) ≤ edit(u, σ^k) + edit(σ^k, v),    (1)
edit(u, v) ≥ |edit(u, σ^k) − edit(σ^k, v)|.    (2)

Note that the calculation of edit(u, σ^k) is simple: all characters in u that are not σ should be substituted by σ (insertions and deletions cannot reduce the number of edits needed). Hence, the upper and lower bounds provided by inequalities (1) and (2) can be used as a filter before performing the expensive calculation of edit(u, v). Using this idea, we modify the algorithm as follows. For each k-mer u in the constructed MIS, we store the additional values edit(u, σ^k) for each σ ∈ Σ. When searching for a mapping of a k-mer v, for each k-mer u in the MIS, we check if the parameter d is within the range given by inequalities (1) and (2). If so, we calculate the exact distance edit(u, v) and compare it with d. Otherwise, if d is less than max_{σ∈Σ} |edit(u, σ^k) − edit(σ^k, v)|, we directly conclude that edit(u, v) > d and therefore u cannot be a valid mapping of v; if d is greater than min_{σ∈Σ} (edit(u, σ^k) + edit(σ^k, v)), then we know that edit(u, v) < d and v cannot be included in the MIS. Algorithm 2 incorporates the above two techniques. Note that Algorithm 2 is also correct, i.e., the returned M is guaranteed to be an MIS, following the correctness of Algorithm 1 and the above analysis. The worst-case running time of Algorithm 2 is the same as that of Algorithm 1, but it runs faster in practice (see Section 3).
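This filter can be sketched as follows, where edit(u, σ^k) is computed by counting the characters of u that differ from σ (function names are ours):

```python
def dist_to_homopolymer(u: str, sigma: str) -> int:
    """edit(u, sigma^k): substitute every character of u that is not sigma."""
    return sum(c != sigma for c in u)

def bound_filter(u: str, v: str, d: int, alphabet: str = "ACGT") -> str:
    """Classify the pair (u, v) against threshold d without computing
    edit(u, v) when the triangle-inequality bounds already decide it."""
    # lower bound (2): edit(u, v) >= |edit(u, s^k) - edit(s^k, v)| for all s
    lo = max(abs(dist_to_homopolymer(u, s) - dist_to_homopolymer(v, s))
             for s in alphabet)
    # upper bound (1): edit(u, v) <= edit(u, s^k) + edit(s^k, v) for all s
    hi = min(dist_to_homopolymer(u, s) + dist_to_homopolymer(v, s)
             for s in alphabet)
    if d < lo:
        return "far"      # edit(u, v) >= lo > d: u cannot be a mapping of v
    if d > hi:
        return "close"    # edit(u, v) <= hi < d: v cannot join the MIS
    return "unknown"      # bounds inconclusive: compute edit(u, v) exactly
```

Only the "unknown" case falls through to the exact (and expensive) edit-distance computation.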

Algorithm 2 Improved Greedy Algorithm
initialize empty arrays M, mapping, and dσ // dσ[u] is later used to store edit(u, σ^k) for each σ ∈ Σ

A BFS-based Algorithm
In the third algorithm, we build a graph in which vertices correspond to all k-mers and (k − 1)-mers, and edges correspond to pairs of vertices with edit distance exactly 1, except that there is no edge between any two (k − 1)-mers. For example, if we take k = 4, the graph contains all the 4-mers and 3-mers. There is no edge between any pair of 3-mers. Two 4-mers are connected if one can be obtained from the other by a single substitution. A 4-mer and a 3-mer are connected if the 4-mer can be obtained by inserting a character into the 3-mer.
The key property of this graph is that two k-mers u and v have edit(u, v) = d if and only if the distance (the length of the shortest path) between u and v in this graph is d. To see this formally, note that by the construction of the graph, every path between two k-mers gives a valid sequence of edits transforming one into the other. Hence, we only need to show that, given a sequence of d edits that transforms a k-mer u into a k-mer v, the edits can be rearranged so that they correspond to a path of length d from u to v in the graph. The only issue comes from the fact that the given sequence of edits may contain several insertions (or deletions) in a row, so the intermediate strings may not be k-mers or (k − 1)-mers. However, because both u and v are k-mers, the sequence of edits must contain the same number of insertions and deletions. Thus the sequence can be rearranged so that each deletion is followed immediately by an insertion. Such a sequence has a direct representation in the graph. For example, the transformation from the 5-mer TGATT to the 5-mer ATTGA (edit distance 4) can be represented by a shortest path of length 4 such as TGATT → TGAT → ATGAT → ATGA → ATTGA. Following the above property, the calculation of all k-mers within an edit distance of at most d from a k-mer u can be (efficiently) achieved by exploring from u with breadth-first search, i.e., traversing all vertices reachable from u within a distance of d. Again, if u is already added to the MIS, then all vertices found during the exploration can be marked as mapped to u.
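A sketch of this d-neighborhood exploration, generating the edges of the implicit graph on the fly (names are ours; this is a plain BFS without the distance-reuse optimization described below):

```python
from collections import deque

def neighbors(s: str, k: int, alphabet: str = "ACGT") -> set:
    """Edges of the implicit graph: substitutions among k-mers,
    deletions k-mer -> (k-1)-mer, insertions (k-1)-mer -> k-mer.
    There are no edges between (k-1)-mers."""
    out = set()
    if len(s) == k:                         # k-mer: substitutions and deletions
        for i, c in enumerate(s):
            for a in alphabet:
                if a != c:
                    out.add(s[:i] + a + s[i + 1:])
            out.add(s[:i] + s[i + 1:])      # delete position i
    else:                                   # (k-1)-mer: insertions only
        for i in range(len(s) + 1):
            for a in alphabet:
                out.add(s[:i] + a + s[i:])
    return out

def ball(u: str, d: int, alphabet: str = "ACGT") -> set:
    """All k-mers within edit distance d of the k-mer u (BFS to depth d)."""
    k = len(u)
    dist = {u: 0}
    q = deque([u])
    while q:
        s = q.popleft()
        if dist[s] == d:                    # do not expand beyond depth d
            continue
        for t in neighbors(s, k, alphabet):
            if t not in dist:
                dist[t] = dist[s] + 1
                q.append(t)
    return {s for s in dist if len(s) == k}
```

By the shortest-path property above, ball(u, d) is exactly the set of k-mers v with edit(u, v) ≤ d; e.g., ATTGA (at edit distance 4 from TGATT) appears in ball("TGATT", 4) but not in ball("TGATT", 3).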
We do not need to fully explore every single k-mer. Instead, we can reuse the information stored while exploring previous vertices to stop early. Specifically, for each vertex u (not in the current MIS), we store its distance to the MIS, and update it during the exploration. See Algorithm 3 for the complete pseudocode. The exploration step (a variant of BFS) guarantees that all the unexplored k-mers must have an edit distance greater than d from all vertices in the MIS, and all the explored k-mers must have an edit distance less than or equal to d from some vertex in the MIS. Hence, k-mers in the returned MIS are at least d + 1 edits apart (i.e., independent), and no other k-mer can be added to the resulting MIS (i.e., maximal). This shows that Algorithm 3 is correct.
To see its time complexity, note that the initial distance of a vertex is at most d; hence it can be updated (decreased) at most d − 1 times by subsequent explorations. In other words, each vertex can be explored at most d times. Each time a k-mer is explored, we compute its neighbors at distance 1. The number of neighbors grows linearly with respect to k (for a constant-size alphabet). Thus, the running time of this algorithm is O(d · k · |V|), where |V| = |Σ|^k + |Σ|^(k−1) is the total number of k-mers and (k − 1)-mers. The space complexity of this algorithm is O(|V|). For large values of k, the graph is dense and too large to be stored in memory, so we compute the edges of the graph on the fly. The space is mainly used to store the distance array, which grows linearly with the number of vertices.

Results
We implemented all three algorithms described above and conducted experiments for 2 ≤ k ≤ 15 and d < k (with the DNA alphabet Σ = {A, C, G, T}). Table 1 reports the sizes of the resulting MISs, which are depicted in Fig. 1. For each row (i.e., with the same d value), the size of the MIS increases (approximately) geometrically with respect to k, which is consistent with the geometric growth of the size of the k-mer space S_k. For each column (i.e., with the same k value), the size of the MIS decreases geometrically as d grows. As discussed before, d specifies the sparsity of the resulting MIS: with a larger d value, fewer vertices can be selected into the MIS.
Table 2 reports the fastest algorithm to calculate the MIS for each combination of k and d. Table 3 and Table 4 record the corresponding time and memory usage, respectively. The rows of Table 3 show that the running time generally increases geometrically with respect to the parameter k. The current computation bottleneck is the case with k = 15 and d = 5, which takes approximately 15 hours using Algorithm 2. Algorithm 2 always has the largest memory overhead compared with the other two. The extra memory is mainly used to store the mapping information of all k-mers. The peak memory usage is approximately 2 GB for k = 15 and d ∈ {5, 6, . . ., 10}.

Conclusion and Discussion
We studied the problem of extracting an MIS as a representative substructure from a k-mer space. Three algorithms are designed to efficiently solve the problem for different ranges of the parameters k and d. For future work, we would like to extend this study to larger k-mer spaces. Considering the current computation bottleneck, one potential improvement is to design more heuristics to partition the k-mer space so that only a small subset of k-mers will be involved in the expensive pairwise edit distance computation.
Another idea is to revise the BFS-based algorithm. The extended graph with both k-mers and (k − 1)-mers is highly symmetric with respect to permutations of the alphabet. For example, the search trees for the 5-mers AGAAC and TATTG are isomorphic, where A, G, and C are replaced with T, A, and G, respectively. The algorithm may take advantage of such symmetries to explore the graph more efficiently. Identifying and utilizing symmetric substructures of the k-mer space is of independent interest and has applications in other related fields [17].
Regarding the practical applications of the constructed MISs, one potential use is in k-mer-based read clustering problems [6, 7], where the choice of centers is crucial. Because an MIS contains a group of representatives for the k-mer space that are guaranteed to be a certain edit distance apart, it is ideal for generating even-sized clusters. Moreover, the MISs can also be used to generate keys for k-mer-based read hashing problems [13, 15], where nearby k-mers are desired to share a hash value (i.e., locality-sensitive hashing), which is particularly useful in aligning error-prone long reads, a challenging yet unsolved problem. Last, it is interesting to apply the constructed MISs as the set of representative k-mers in sketching large-scale sequences.
Table 2 suggests a general strategy for choosing the best algorithm for a specific combination of k and d. Algorithm 1 is efficient for d values within the interval [k − 4, k − 1], since these d values result in an MIS with fewer than roughly 15 vertices. The BFS-based algorithm (Algorithm 3) is the best for d values between 1 and 4, because such a small d leads to an MIS that is too large for efficient pairwise comparisons. The remaining d values are best handled by Algorithm 2.

Figure 1 :
Figure 1: The size of an MIS (found by the fastest algorithm) with respect to parameters k and d.

Table 1 :
The size of the calculated MIS (by the fastest algorithm) for different k and d.
The first one is a simple greedy algorithm similar to the greedy algorithm for general graphs. The second algorithm extends the first by implementing two techniques to avoid redundant comparisons. The third algorithm represents the edit distance as a shortest path in an extended graph and uses a variant of BFS. Experiments are done for k up to 15. The computation bottleneck occurs at k = 15 and d = 5, where the second algorithm performs the best. The corresponding peak running time is approximately 15 hours, and the peak memory usage is about 2 GB.

Table 3 :
Running time (seconds) of the fastest algorithm for each (k, d).

Table 4 :
Memory usage (kB) of the fastest algorithm for each (k, d).