Ghost Value Augmentation for k-Edge-Connectivity

We give a poly-time algorithm for the k-edge-connected spanning subgraph (k-ECSS) problem that returns a solution of cost no greater than the cheapest (k+10)-ECSS solution on the same graph. Our approach enhances the iterative relaxation framework with a new ingredient, which we call ghost values, that allows for high sparsity in intermediate problems. Our guarantees improve upon the best-known approximation factor of 2 for k-ECSS whenever the optimal value of (k+10)-ECSS is close to that of k-ECSS. This is a property that holds for the closely related problem k-edge-connected spanning multi-subgraph (k-ECSM), which is identical to k-ECSS except that edges can be selected multiple times at the same cost. As a consequence, we obtain a (1+O(1/k))-approximation algorithm for k-ECSM, which resolves a conjecture of Pritchard and improves upon a recent (1+O(1/√k))-approximation algorithm of Karlin, Klein, Oveis Gharan, and Zhang. Moreover, we present a matching lower bound for k-ECSM, showing that our approximation ratio is tight up to the constant factor in O(1/k), unless P=NP.


Introduction
Computing k-edge-connected subgraphs of minimum cost is a fundamental problem in combinatorial optimization. For k = 1, this is the famous minimum spanning tree (MST) problem. For k ≥ 2, this problem is known as the k-edge-connected spanning subgraph (k-ECSS) problem. Formally, in k-ECSS we are given a multi-graph G = (V, E) with an edge cost function c : E → R_{≥0}, and our goal is to find a set of edges F ⊆ E where (V, F) is k-edge-connected while minimizing c(F) := ∑_{e∈F} c(e).
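For intuition, k-edge-connectivity of a small candidate subgraph can be checked by brute force over all cuts avoiding a fixed root, mirroring the cut view used throughout the paper. This sketch and its function names are ours; it is exponential in |V| and serves only as a toy feasibility check, not as part of any algorithm in the paper.

```python
from itertools import combinations

def cut_value(edges, side):
    """Number of edges crossing the cut (side, V \\ side); multi-edges count."""
    return sum(1 for u, v in edges if (u in side) != (v in side))

def is_k_edge_connected(vertices, edges, k):
    """Brute-force check that every nonempty proper cut has >= k edges.

    Exponential in |V|: for tiny sanity-check instances only.
    """
    vs = list(vertices)
    if len(vs) <= 1:
        return True
    rest = vs[1:]  # fix vs[0] as the root; enumerate sides avoiding it
    for size in range(1, len(rest) + 1):
        for side in combinations(rest, size):
            if cut_value(edges, set(side)) < k:
                return False
    return True
```

For example, a 4-cycle is 2-edge-connected but not 3-edge-connected, while doubling every edge (the multi-subgraph view relevant to k-ECSM) makes it 4-edge-connected.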
In this work, we show that one can achieve a close-to-1 approximation for k-ECSS whenever the cost of the optimal (k + 10)-edge-connected solution is close to the cost of the optimal k-edge-connected solution. Specifically, we give a resource augmentation result for k-ECSS whereby we compare the quality of our algorithm's output to an adversary that has fewer resources (namely, the resource of cost budget): we show that one can compute, in poly-time, a k-edge-connected graph with cost no greater than that of the optimal solution which (k + 10)-edge-connects the graph.
Similar resource augmentation results are known for other well-studied network design problems, such as the minimum cost bounded degree spanning tree problem [Goe06; CRRT09; KR00; KR03; RS06]; here, the state-of-the-art is an algorithm of Singh and Lau [SL15], which shows that one can find, in poly-time, a spanning tree of maximum degree d with cost no greater than that of the optimal spanning tree of maximum degree d − 1. Likewise, resource augmentation is often considered in online and scheduling algorithms [ST85; PSTW97; JW08; CGKK04; Rou20]. However, to our knowledge, no similar results are known for k-connectivity-type problems, and standard techniques [Nas61] would require augmenting the budget by k rather than O(1).
Our result is cost-competitive with the optimal LP solution, defined as follows. For ease of presentation, we fix an arbitrary vertex r ∈ V, called the root, and represent each cut (S, V \ S) by the side that does not contain r. Letting δ(S) be the set of edges crossing a cut S ⊆ V and x(F) := ∑_{e∈F} x_e for F ⊆ E, we have the following LP for k-ECSS.
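Written out, the LP takes the following standard form (a reconstruction matching the conventions above, with cuts ranging over the nonempty sets avoiding the root r):

```latex
\begin{align*}
\min\ \sum_{e \in E} c(e)\, x_e \qquad \text{s.t.}\quad
& x(\delta(S)) \ge k && \forall\, \emptyset \neq S \subseteq V \setminus \{r\}, \\
& 0 \le x_e \le 1 && \forall\, e \in E.
\end{align*}
```

(k−ECSS LP)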
Denoting by LPOPT_{k−ECSS} the cost of an optimal solution to the k−ECSS LP and by OPT_{k−ECSS} the cost of the optimal k-ECSS solution, our main result for k-ECSS is as follows.
Equivalently, we show that one can find in poly-time a (k − 10)-edge-connected subgraph of cost at most OPT_{k−ECSS}. Thus, in some sense, our result demonstrates that achieving the last small constant amount of connectivity is the NP-hard part of k-ECSS.
One might reasonably wonder if it is often the case that the optimal k-edge-connected solution has cost close to the optimal (k + 10)-edge-connected solution. In fact, a well-studied problem closely related to k-ECSS, called the k-edge-connected spanning multi-subgraph (k-ECSM) problem, satisfies exactly this property. k-ECSM is the same as k-ECSS except that our solution F is a multiset that can include each edge of E as many times as we want (where we pay for every copy of an edge). The canonical LP for k-ECSM is the same as that of k-ECSS but has no upper bound on how many times we choose an edge.
(k−ECSM LP) We notate by LPOPT_{k−ECSM} the cost of an optimal solution to the k−ECSM LP and by OPT_{k−ECSM} the cost of an optimal k-ECSM solution. It is easy to see that scaling the optimal k-ECSM LP solution by (k + 10)/k results in a (k + 10)-ECSM LP solution, so we have the following relation between the costs of the optimal k-edge-connected and (k + 10)-edge-connected LP solutions:

LPOPT_{(k+10)−ECSM} ≤ (1 + 10/k) · LPOPT_{k−ECSM}.   (1)

Theorem 1.4. There exists a constant ϵ_TAP > 0 (given by Theorem 5.2) such that there does not exist a poly-time algorithm which, given an instance of (unweighted) k-ECSM where k is part of the input, always returns a (1 + ϵ_TAP/(9k))-approximate solution, unless P = NP.

Such a hardness result was also identified as an open question by Pritchard [Pri11].
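The scaling observation can be spelled out in one line: if x is feasible for the k−ECSM LP, then ((k+10)/k)·x is feasible for the (k+10)−ECSM LP (recall the ECSM LPs have no upper bounds on variables), since for every cut S

```latex
\left(\tfrac{k+10}{k}\, x\right)(\delta(S)) \;=\; \tfrac{k+10}{k}\, x(\delta(S)) \;\ge\; \tfrac{k+10}{k}\cdot k \;=\; k+10,
```

and the cost scales by exactly (k+10)/k = 1 + 10/k.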
We first give a proof sketch in Section 1.1. In Section 2, we then describe our main technical rounding theorem and how it implies our results on k-ECSS and k-ECSM. In particular, we show it allows us to compute a k-ECSS solution with cost at most that of the optimal (k + 10)-ECSS solution, and we also show it implies a (1 + O(1/k))-approximate k-ECSM algorithm, settling Pritchard's k-ECSM conjecture. Later, in Section 5, we show a matching 1 + Ω(1/k) hardness of approximation for k-ECSM. In Section 3, we describe our algorithm and give further intuition on why it gives strong guarantees before formally analyzing its performance in Section 4.

Iterative Relaxation Barriers and Ghost Value Augmentations to Overcome Them
First note that for the rest of the paper, to slightly simplify notation, we will work with a solution to the k-ECSS LP and round it to a (k − 10)-edge-connected graph. This is of course equivalent to working with k-edge-connectivity and the (k + 10)-ECSS LP.
Our approach to showing Theorem 1.1 is to apply iterative LP relaxation methods to the k-ECSS LP and obtain a (k − 10)-edge-connected graph. Iterative LP methods are a well-studied approach first pioneered by Jain [Jai01]. See Lau, Ravi, and Singh [LRS11] for a comprehensive overview. Generally, iterative relaxation algorithms repeatedly compute an optimal solution to a suitable LP and then make progress towards producing a desired output by either finding an edge with LP value 0 that can be deleted, a newly integral edge that can be frozen at its integral value, or some LP constraint which is nearly satisfied and can be dropped from future LP recomputations while approximately preserving feasibility.
Applying such an iterative relaxation approach to the k-ECSS LP is naturally suited to proving Theorem 1.1 since we are allowed O(1) slack in connectivity. Specifically, one might hope to argue that if no edge can be frozen, then there is some constraint of the k-ECSS LP corresponding to a cut which has at least k − O(1) frozen edges crossing it. Such a constraint can be safely dropped from future recomputations since our ultimate goal allows for slack O(1) in connectivity. However, it is not too hard to see that such a natural approach faces a significant barrier, and so a non-standard idea is required.
In what follows, we describe this approach and barrier in more detail and how our key non-standard idea of "ghost value augmentations" allows us to overcome this barrier.

A Standard Iterative Relaxation Approach.
A first attempt to prove Theorem 1.1 is as follows, where we replace the constant 10 with an arbitrary constant c ∈ Z_{≥1}. Repeat the following until there are no remaining variables.
Let y be an extreme point solution to the k-ECSS LP. For each edge e with y_e = 0, delete e from the LP. For each edge e with y_e = 1, add the constraint x_e = 1 to the LP and call e "frozen." We let F be all edges we have frozen so far. Now suppose that whenever we cannot delete or freeze a new edge, there is always a cut S ⊆ V \ {r} such that |δ(S) ∩ F| ≥ k − c. We call this the light cut property. In this case, we can delete the constraint x(δ(S)) ≥ k from the LP. This is a safe operation since δ(S) already has at least k − c frozen edges.
Once there are no remaining variables, we have an integral solution and can return the set of frozen edges. Provided one can still efficiently solve the LP even after dropping constraints, and so long as the light cut property always holds when we cannot delete or freeze a new edge, standard arguments would demonstrate that the above algorithm always returns an integral (k − c)-ECSS solution of cost no more than LPOPT_{k−ECSS}.
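As a schematic illustration (not the paper's final algorithm), the delete/freeze/drop loop above can be sketched as follows. Here `solve_lp` and `light_cut` are hypothetical callbacks standing in for the LP solver and for finding a cut with at least k − c frozen edges; all names are ours.

```python
def iterative_relaxation(edges, solve_lp, light_cut):
    """Skeleton of the standard iterative relaxation loop.

    solve_lp(active, frozen, dropped) -> {edge: value}, an extreme point
    of the current LP; light_cut(frozen, dropped) -> a cut whose
    constraint is safe to drop, or None.
    """
    frozen = {}            # edges fixed to 1 in the final solution
    active = set(edges)    # variables still in the LP
    dropped = []           # cut constraints removed so far
    while active:
        y = solve_lp(active, frozen, dropped)
        zeros = {e for e in active if y[e] == 0}
        ones = {e for e in active if y[e] == 1}
        if zeros or ones:
            active -= zeros                       # delete 0-edges
            frozen.update({e: 1 for e in ones})   # freeze 1-edges
            active -= ones
            continue
        # No newly integral edge: rely on the light cut property.
        S = light_cut(frozen, dropped)
        if S is None:
            raise RuntimeError("light cut property failed")
        dropped.append(S)  # drop the constraint x(delta(S)) >= k
    return frozen
```

The whole difficulty, as discussed next, lies in proving that `light_cut` can always succeed when no edge is deletable or freezable.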
A Barrier to the Standard Approach. We do not know if the light cut property is true or not, and proving or disproving it is a very interesting open problem. However, there is a major barrier to proving it using known techniques for iterative relaxation, which we detail here.
To prove results like the light cut property, one generally first demonstrates that the set of tight constraints at any stage of the algorithm's execution can be "uncrossed" to obtain a laminar family. In our case, a set S ⊆ V is tight if y(δ(S)) = k, and one wants to show that there is a laminar family L ⊆ 2^V such that every constraint x(δ(S)) ≥ k where S is tight is spanned by the constraints {x(δ(S)) ≥ k : S ∈ L}. The major barrier to the standard iterative relaxation approach is that it does not appear that uncrossing is possible. One sufficient criterion for uncrossing is that the constraints can be expressed as x(δ(S)) ≥ f(S) where the "requirement function" f : 2^V → Z_{≥0} is skew supermodular [Jai01].
A function f : 2^V → Z_{≥0} is called skew supermodular if for all S, T ⊆ V we have either

f(S) + f(T) ≤ f(S ∪ T) + f(S ∩ T)   or   f(S) + f(T) ≤ f(S ∖ T) + f(T ∖ S).

At the beginning of our process, f(S) = k for all S ⊆ V, and thus both of these inequalities trivially hold with equality. However, once we drop a constraint S for which we had |F ∩ δ(S)| ≥ k − c, this property fails to hold. In particular, we may have dropped the constraints for S ∩ T and S ∖ T so that f(S ∩ T) = f(S ∖ T) = 0. Thus, in this situation, the left-hand side of each inequality would be 2k and the right-hand side k. For example, the situation in Figure 1 could occur, where we dropped cuts with at least k − c frozen edges.
Figure 1 shows that we cannot apply the result of Jain as a black box. However, one can still successfully uncross in this situation. The reason is that the constraints S ∖ T, T ∖ S, S ∩ T, S ∪ T are all still tight. Therefore, even though the constraints are not present in the LP, one can still replace the constraints S and T with the constraints S ∖ T and T ∖ S (or S ∩ T and S ∪ T).
The true issue arises when the connectivity of some sets drops below k, as in Figure 2. In Figure 2, none of S ∖ T, T ∖ S, S ∩ T, and S ∪ T are tight constraints. One could still consider adding them to the family. However, if we did this, we would face two major issues.
1. The values of the constraints may not be integers, as is the case for all of the sets here. This leads to problems in the next phase of the iterative relaxation argument, which uses integrality of the constraints to argue that if S, T are two tight sets in the family, then the symmetric difference δ(S) ∆ δ(T) has size at least 2.
2. Unlike in the case in which S and T are minimum cuts of the graph (in which we can apply standard uncrossing), it is not necessarily the case that

χ_{δ(S)} + χ_{δ(T)} = χ_{δ(S∖T)} + χ_{δ(T∖S)},

where χ_F for F ⊆ E is the vector in {0, 1}^E that is 1 at the edges in F and 0 elsewhere. In this example this equality does not hold due to the edge with value a. Similarly, it is not necessarily the case that

χ_{δ(S)} + χ_{δ(T)} = χ_{δ(S∩T)} + χ_{δ(S∪T)}.

To see this, one can extend the example by adding a small fractional edge between S ∖ T and T ∖ S and adjusting other edges accordingly (since only S, T are tight, this is not difficult).
However, relations such as the two highlighted above are central to classical uncrossing arguments.
Therefore, to prove the light cut property, one would likely have to deal with a fairly complicated family of tight sets.

Overcoming the Barrier with Ghost Values. Instead of working with this uncrossable family, we introduce a relaxation approach we call "ghost value augmentation." We consider the LP solution y together with a ghost vector g that augments y, so that the LP constraints are now of the form x(δ(S)) + g(δ(S)) = (x + g)(δ(S)) ≥ k. We say such a constraint is tight with respect to solution y if (y + g)(δ(S)) = k. This ghost vector will help us achieve uncrossing of tight sets. However, crucially, it will never be used in the final solution. This ensures that we never increase the cost of the solution compared to the LP.
We will still follow the general framework of iterative relaxation. Given an extreme point solution y, we will delete edges with y_e = 0 and freeze edges with y_e = 1. And, as before, we will drop constraints corresponding to tight sets S with the property that |δ(S) ∩ F| ≥ k − O(1) (where F is the set of frozen edges). However, we will only drop such a set S if:

(i) S is a minimal tight set corresponding to an LP constraint. That is, we restrict ourselves to tight sets for which there is no T ⊊ S such that the cut constraint corresponding to T is tight and still in the LP. Such sets S are desirable because they have the additional property that they are either vertices or all edges with both endpoints inside them are frozen.
(ii) δ(S) has only O(1) fractional edges. This allows us to ensure that the cut δ(S) does not change much over the course of the remainder of the algorithm. In particular, this will let us derive upper bounds on the number of edges crossing a dropped set, which in turn helps to lower bound the value of other cuts.
Of course, point (ii) implies that there are k − O(1) frozen edges crossing S, so it is sufficient to check (i) and (ii) for all sets. These points together allow us to argue that after dropping such a constraint S with |S| ≥ 2, it is safe to contract S to a vertex. This is because cuts contained in S will not change by more than O(1) throughout the execution of the entire algorithm, as they only contain edges inside S, which are all frozen by (i), and edges in δ(S), of which all but O(1) are frozen by (ii). See Figure 3.
Figure 3: The blue edges are frozen, and the dotted edges are fractional. Consider the red set S with 2 incident fractional edges. Since δ(S) has only 2 fractional edges, and the edges inside S are all frozen, any set contained in S is already safe, as is the case with the blue vertex.
(Contracting a vertex set S ⊆ V in G = (V, E) means that we remove S from V and add a single new vertex S. An edge (u, v) ∈ E before contraction with u ∈ S and v ∈ V \ S will become an edge between S and v; moreover, edges with both endpoints in S will be removed in the contracted graph. As is common, when an edge e = (u, v) ∈ E before contraction gets transformed into an edge (S, v) after contraction, we still consider it to be the same edge. In particular, any LP value or constraint on e before contraction will be interpreted as a value or constraint, respectively, on the new edge after contraction.)

Contraction now gives us the crucial property that at every iteration of the algorithm, operating on a graph G = (V, E) resulting from contracting some number of sets, we have y(δ(S)) ≥ k for all S ⊆ V ∖ {r} with 2 ≤ |S|. For vertices v ∈ V, we only have the weaker guarantee that y(δ(v)) ≥ k − O(1), where we use the notation δ(v) = δ({v}). Even though the connectivity of the graph is not uniform, the fact that the only cuts below k are the vertices allows us to show that either:
(a) There is a pair of vertices u, v with y(E(u, v)) ≥ k/2 − O(1) (where E(u, v) is the set of edges between u and v). In this case, we perform a ghost value augmentation: we artificially increase the value of (y + g)(E(u, v)) to at least k/2 by increasing g_e by O(1) for an edge in E(u, v).
(b) Otherwise, the set of tight constraints can be successfully uncrossed.In this case, we argue that there must be a set S that is safe to drop that has no tight children and at most 3 fractional edges.If S is not a singleton, then we additionally contract it.
At first, (a) may look like a strange property to expect for k-ECSS solutions. Indeed, the underlying input graph may not have any multi-edges, so that at the first iteration y(E(u, v)) ≤ 1 for all vertices u, v. However, as sets are contracted, these structures may begin to emerge as barriers for uncrossing. For example, in Fig. 2, since S ∖ T and S ∩ T have been dropped, they are now singletons. This allows us to study y(E(u, v)) for {u} = S ∖ T and {v} = S ∩ T, and one can see that y(E(u, v)) = k/2 − O(1), giving us a candidate for ghost value augmentation. For more intuition about why these structures should appear, one can study the cactus representation of minimum cuts (see [FF09] for a nice overview of this representation).
In both cases (a) and (b) we make progress: we either drop a constraint that is safe to drop (and possibly contract its corresponding set), or we raise the (y + g)-value between a pair of vertices to at least k/2. We describe this process in more detail in Section 3. In Section 4 we show that one of these two cases must occur and that at the end of the algorithm the frozen edges (k − O(1))-connect the graph.
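The contraction operation used in case (b) (internal edges disappear; boundary edges keep their identity and hence their LP values, as described above) can be sketched as follows. The representation of edges as (id, u, v) triples and the function name are ours.

```python
def contract(vertices, edges, S, name):
    """Contract vertex set S into a single new vertex `name`.

    Edges are (eid, u, v) triples: eid preserves edge identity (and hence
    any LP value or constraint on it); parallel edges are allowed.
    """
    S = set(S)
    new_vertices = (set(vertices) - S) | {name}
    new_edges = []
    for eid, u, v in edges:
        if u in S and v in S:
            continue  # edge with both endpoints in S: removed
        new_edges.append((eid,
                          name if u in S else u,
                          name if v in S else v))
    return new_vertices, new_edges
```

For instance, contracting two vertices of a triangle removes the edge between them and reroutes the other two edges, which then become parallel edges at the new vertex.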

Main Rounding Theorem
Our main result will follow immediately from a general rounding theorem.
Theorem 1.1. There is a poly-time algorithm that, for any k-ECSS instance with k ∈ Z_{≥1}, returns a k-ECSS solution of cost at most LPOPT_{(k+10)−ECSS} (and hence at most OPT_{(k+10)−ECSS}).

Proof. The result is immediate from applying Theorem 2.1 to any optimal solution y to the (k + 10)-ECSS LP. y is poly-time computable by the Ellipsoid Method [GLS81] and standard separation oracles.
We note that in fact the same proof shows we can get the same guarantee for a slightly more general problem than k-ECSS.In particular, edges can be given arbitrary lower and upper bounds, and we can still obtain a solution in polynomial time (this includes the case in which k and possibly the lower and upper bounds are exponentially large compared to the input size).
Theorem 1.3. There is a poly-time algorithm for k-ECSM that, for any k-ECSM instance with k ∈ Z_{≥1}, returns a k-ECSM solution of cost at most (1 + 10/k) · OPT_{k−ECSM}.

Proof of Theorem 1.3 without assuming that k is given in unary. Let y be an optimal solution to the (k + 10)-ECSS LP of cost at most LPOPT_{(k+10)−ECSM}. Next, apply Theorem 2.1 to y to compute a solution z. Return (the edge multiset naturally corresponding to) z as our solution. y is computable by the Ellipsoid Method [GLS81] and standard separation oracles. Furthermore, by the guarantees of Theorem 2.1, our solution is feasible for k-ECSM and computable in poly-time. Lastly, by Theorem 2.1 and Equation (1), its cost is upper bounded by (1 + 10/k) · LPOPT_{k−ECSM} ≤ (1 + 10/k) · OPT_{k−ECSM}.

Thus, in what follows, we focus on showing Theorem 2.1. Furthermore, observe that in order to do so it suffices to show the result for even k since, if we are given an odd k, applying the result to (the even number) k − 1 immediately gives the result for (the odd number) k. Thus, in what follows we assume that k is even.

Algorithm for the Rounding Theorem
Having reduced our problem to showing Theorem 2.1, we proceed to describe the algorithm for Theorem 2.1. As discussed earlier, our algorithm for Theorem 2.1 uses iterative relaxation techniques. Informally, our algorithm is as follows. We repeatedly solve an LP that tries to round y. Each time we solve our LP, either our solution is integral, in which case we return our solution; it has a newly integral edge, which we freeze at its current value; it has an edge with value 0, which we delete; or we perform one of the following two relaxations of our LP:

1. Ghost Value Augmentation: we costlessly increase the LP value between two carefully chosen vertices. Specifically, if there are two vertices u and v that have nearly k/2 total LP value on edges between them (namely, total edge value in [k/2 − 2, k/2)), then we add "ghost values" by increasing the LP value between u and v by 2 at cost 0.
2. Drop/Contract: we drop a carefully chosen set's constraint and contract it. Specifically, if there is a tight set S (i.e., a set with exactly k total LP edge mass leaving it) corresponding to a constraint in the LP, such that S is (i) minimal in the sense that it contains no strict subset corresponding to a tight LP constraint, and (ii) has at most 3 fractional edges incident to it, then we remove the constraint corresponding to S from our LP and (if the set contains 2 or more vertices) we contract it into a single vertex.
As we later argue, we will always be in one of the above cases. The final solution we return will ignore the ghost values, and we will show that even after ignoring the ghost values our solution is well-connected.
We now more precisely define our LP given an input vector y ∈ R^E_{≥0} which we would like to round. We denote by ⌊y⌋ and ⌈y⌉ the integral vectors obtained from y by taking the floor and ceiling of each component, respectively. We let x ∈ [⌊y⌋, ⌈y⌉] denote that ⌊y_e⌋ ≤ x_e ≤ ⌈y_e⌉ for each e ∈ E. To achieve a clean and elegant bookkeeping of our ghost values, we will save them in a separate vector g ∈ Z^E_{≥0}. Thus, for a given y and g, the LP we will solve is the following (potentially with extra constraints for frozen coordinates).
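In display form (a reconstruction from the g-cut constraints defined next and the box [⌊y⌋, ⌈y⌉] above), the LP reads:

```latex
\begin{align*}
\min\ \sum_{e \in E} c(e)\, x_e \qquad \text{s.t.}\quad
& x(\delta(S)) \ge k - g(\delta(S)) && \forall\, \emptyset \neq S \subseteq V \setminus \{r\}, \\
& \lfloor y_e \rfloor \le x_e \le \lceil y_e \rceil && \forall\, e \in E.
\end{align*}
```

(k−EC Ghost LP)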
For a ghost value vector g ∈ Z^E_{≥0}, we call the constraint x(δ(S)) ≥ k − g(δ(S)) the g-cut constraint for S. We say that such a constraint is y-tight if y(δ(S)) = k − g(δ(S)). Analogously, we will say that a constraint x_e = p_e for p_e ∈ Z_{≥1} on a single edge e is y-tight if y_e = p_e.
Our algorithm is formally described in Algorithm 1 and uses the following notation. For any vector y ∈ R^E, we denote by frac(y) := {e ∈ E : y_e ∉ Z} all edges with a fractional y-value. To clearly distinguish between the original input graph and the graph at each iteration of our algorithm, we will denote the original input graph by G = (V, E) and the graph used in each iteration of the algorithm (i.e., G after some vertex contractions and edge deletions) by G = (V, E). We will be explicit about the specific graph considered. For any two vertex sets S, T ⊆ V, we denote by E(S, T) ⊆ E all edges with one endpoint in S and one in T. For vertices u, v and S ⊆ V, we also use the shorthand E(u, v) := E({u}, {v}) and E(u, S) := E({u}, S). For S ⊆ V, we let E[S] := {{u, v} ∈ E : u, v ∈ S} be all edges with both endpoints in S. We note that even though one could perform multiple ghost value augmentations or multiple cut relaxations in a single iteration of the while loop, Line 9 only performs a single such operation per iteration for simplicity.
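The notation above translates directly into small helpers (a sketch with our own names, over edges given as unordered pairs):

```python
def delta(edges, S):
    """delta(S): edges with exactly one endpoint in S."""
    S = set(S)
    return [(u, v) for u, v in edges if (u in S) != (v in S)]

def E_between(edges, S, T):
    """E(S, T): edges with one endpoint in S and one in T (S, T disjoint)."""
    S, T = set(S), set(T)
    return [(u, v) for u, v in edges
            if (u in S and v in T) or (u in T and v in S)]

def E_inside(edges, S):
    """E[S]: edges with both endpoints in S."""
    S = set(S)
    return [(u, v) for u, v in edges if u in S and v in S]

def frac(y):
    """frac(y): edges whose y-value is fractional."""
    return {e for e, val in y.items() if val != int(val)}
```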
Algorithm Intuition. We summarize the intuition for our algorithm. As discussed earlier, a natural approach to rounding our vector y would be to argue that whenever we re-solve our LP and it does not have a newly integral edge, there must be a constraint of our LP corresponding to a set that has at most O(1) fractional edges crossing it. Such a constraint can be safely dropped from our LP, and doing so corresponds to our drop/contract relaxation. However, standard arguments that the non-existence of such a set implies a newly integral edge would require showing that the relevant cut constraints can be uncrossed into a laminar family. This uncrossing is not necessarily possible if we have already dropped some of our cut constraints. The somewhat unusual operation of ghost value augmentations will rescue us from this situation. In particular, we will see that the cases where uncrossing fails are exactly those when we are able to perform a ghost value augmentation. Indeed, a ghost value augmentation can be thought of as its own sort of cut relaxation: putting a ghost value of 2 units on some edge e ∈ E simply corresponds to replacing the cut constraints x(δ(S)) ≥ k by x(δ(S)) ≥ k − 2 for all S ⊆ V with e ∈ δ(S).

Analysis of Our Algorithm
We proceed to analyze our algorithm. Doing so will require addressing several non-trivial issues. First, it is not a priori clear that our algorithm terminates and, in particular, that we can always make progress by some relaxation or edge deletion or freezing. Second, even if the algorithm terminates, it is not clear that the returned vector z is integral, much less that it has value at least k − O(1) on every cut. For integrality, we will need to argue that, whenever we contract a set, all of its internal edges are integral. For near-k-edge-connectivity, it is particularly not clear that we do not end up with a single cut across which we have performed many ghost value augmentations, and so we will need to argue that this does not happen. Lastly, there are several efficiency concerns to address, including how to find the set S ⊆ V of Line 13 if there is one satisfying the stated criteria.

Algorithm 1: Main Algorithm. (k is even.)
 1  Initialization
 2    z_e = 0 for all e ∈ E.    // z ∈ Z^E_{≥0} will be the returned solution.
 3    g_e = 0 for all e ∈ E.    // Start with all ghost values being set to 0.
      ⋮
 5    Let LP_Alg be k−EC Ghost LP using y, and update y to an optimal vertex solution.
 6    Delete from G all edges e ∈ E with y_e = 0.
      ⋮
      Ghost Value Augmentation: set g_e = g_e + 2 for an arbitrary edge e ∈ E(u, v).
      ⋮
14    Compute an optimal vertex solution y to LP_Alg (with ghost values g).
15    Delete from G (and from LP_Alg) all edges e ∈ E with y_e + g_e = 0.
16    For all e ∈ E with y_e ∈ Z, we add the constraint x_e = y_e to LP_Alg.
17  z_e = y_e for all e ∈ E.
18  Return z.

Before addressing these challenges, we introduce some notation we use throughout our analysis. For brevity, when saying that a property holds at any iteration of the algorithm, we mean that it holds at the beginning of any iteration of the while loop in Algorithm 1. As in Algorithm 1, we will let LP_Alg be the LP used by our algorithm at the beginning of an iteration. Note that although we compute our vertex solution y at the end of an iteration and then possibly delete edges from G and LP_Alg, even after deleting said edges y remains a vertex of the resulting LP_Alg. In other words, y is a vertex solution to LP_Alg at the beginning of each iteration. Likewise, we denote by G = (V, E), y ∈ R^E_{≥0}, and g ∈ Z^E_{≥0} the current graph, the current vertex solution to LP_Alg, and the current ghost values, respectively, at that iteration. Throughout our analysis, we will represent by R ⊆ 2^{V∖{r}} all sets that are contracted throughout the algorithm. More precisely, for some set R ⊆ V ∖ {r}, we have R ∈ R if R is not a singleton and during some point in the algorithm we had a vertex that, when undoing the contractions, corresponds to the vertex set R.
Because we perform contractions consecutively, the family R is laminar. At any iteration of the algorithm, each vertex v ∈ V of the current graph G = (V, E) is either an original vertex, i.e., v ∈ V, or it corresponds to a set R ∈ R that was contracted in a prior iteration of the algorithm.

Termination of Algorithm
We start by showing that Algorithm 1 terminates.
To show that Algorithm 1 terminates, we show that, at any iteration of the while loop, if y is not yet integral, and we cannot perform a ghost value augmentation, then we can perform a cut relaxation.Because cut relaxations require cut constraints with high integrality, i.e., the number of fractional edges must be at most 3, we will derive sparsity results in the following.These results provide upper bounds on the number of y-fractional edges, or show that certain edges must have integral y-values.
We start with a basic property on the (y + g)-load of each cut. A consequence is that at any iteration, we have (y + g)(δ(v)) ≥ k − 2 for any vertex v, and for every S ⊆ V with 2 ≤ |S| ≤ n − 2 we have (y + g)(δ(S)) ≥ k. Thus our graph is close to being fractionally k-edge-connected.

Lemma 4.1. At any iteration of the algorithm, we have (y + g)(δ(S)) ≥ k − 2 for every nonempty S ⊆ V ∖ {r}. Moreover, if (y + g)(δ(S)) = k − 2, then y_e ∈ Z_{≥0} for all e ∈ δ(S).
Proof. If the g-cut constraint corresponding to S is still in LP_Alg, then we even have (y + g)(δ(S)) ≥ k. Otherwise, let ȳ and ḡ be the LP_Alg solution and ghost values, respectively, at the iteration when the g-cut constraint corresponding to S got relaxed. Hence, (ȳ + ḡ)(δ(S)) ≥ k, because, as before, the g-cut constraint corresponding to S is part of LP_Alg at the beginning of the iteration when S gets relaxed. Moreover, |frac(ȳ) ∩ δ(S)| ≤ 3, which implies

⌊ȳ⌋(δ(S)) + ḡ(δ(S)) > (ȳ + ḡ)(δ(S)) − 3 ≥ k − 3.

Because the left-hand side is integral, we get

⌊ȳ⌋(δ(S)) + ḡ(δ(S)) ≥ k − 2.

The first statement now follows by observing that integral y-values get fixed, and hence y ≥ ⌊ȳ⌋, and that g ≥ ḡ, because ghost values are non-decreasing. Moreover, this reasoning shows that to get (y + g)(δ(S)) = k − 2, we need y(δ(S)) = ⌊ȳ⌋(δ(S)). Because y ≥ ⌊ȳ⌋, this implies, as desired, y_e = ⌊ȳ_e⌋ ∈ Z_{≥0} for all e ∈ δ(S).
The following lemma formalizes that y has low fractionality on any set of parallel edges. Below, recall that y is a vertex solution to the relevant LP in each iteration.

Lemma 4.2. At any iteration of the algorithm, we have |frac(y) ∩ E(u, v)| ≤ 1 for all u, v ∈ V.

Proof. Assume for the sake of deriving a contradiction that there is a pair of vertices u, v ∈ V with two distinct edges e_1, e_2 ∈ frac(y) ∩ E(u, v). Then, for sufficiently small ε > 0, we have that both y + ε(χ_{{e_1}} − χ_{{e_2}}) and y − ε(χ_{{e_1}} − χ_{{e_2}}) are feasible for the LP. This holds because there is no constraint that contains e_1 and not e_2 or vice-versa, except for integral bounds (lower/upper bounds, or equality constraints) on the values of y_{e_1} and y_{e_2}, and these are not tight because e_1 and e_2 are fractional. This contradicts that y is a vertex solution to the LP.
To obtain strong enough sparsity to show that a cut relaxation is possible, we use a classic strategy. Namely, we first show that, at any iteration of Algorithm 1, a vertex solution can be described as the unique solution to a very structured system of y-tight LP constraints. More precisely, the y-tight g-cut constraints in this system correspond to sets that form a laminar family. This is where we crucially exploit the use of ghost values, without which a statement as below would be wrong, as demonstrated in Fig. 4.
Figure 4: A situation in which ghost value augmentation is necessary.All dotted edges have value 1/2, so S and T are tight but not dropped.S ∩ T and S ∖ T, however, have been dropped and thus are allowed to drop below connectivity k.Therefore it is not possible to uncross S and T.
Note that the integrality of the rightmost blue edge leaving T is not necessary.In particular, this rightmost blue edge could be any number of fractional edges and the instance would have the same behavior.
Lemma 4.3. Consider any iteration of Algorithm 1 where no ghost value augmentation can be performed. Let L ⊆ 2^{V∖{r}} be a maximal laminar family corresponding to y-tight g-cut constraints in LP_Alg. Then, the linear equation system consisting of

x(δ(S)) = k − g(δ(S))  for all S ∈ L,

together with all y-tight constraints on single edges in LP_Alg, i.e., constraints of type x_e = p_e for some e ∈ E and p_e ∈ Z, forms a full column rank system of y-tight constraints of LP_Alg. Thus, because y is a vertex solution to LP_Alg, it is the unique solution to this system.
Proof. Assume for the sake of contradiction that there is an iteration of the algorithm where no ghost value augmentation can be performed, and such that there is a maximal laminar family L ⊆ 2^{V∖{r}} corresponding to y-tight g-cut constraints in LP_Alg fulfilling the following: the linear equation system consisting of all equations corresponding to y-tight g-cut constraints of sets in L, together with equations corresponding to all y-tight constraints on single edges in LP_Alg, does not have full column rank. We call this linear equation system the reference system.
The reference system not having full column rank implies that there must be a y-tight g-cut constraint x(δ(S)) ≥ k − g(δ(S)) not implied by it. Among all such sets S ⊆ V ∖ {r}, we choose one where L_S := {L ∈ L : L and S are crossing} has smallest cardinality. Because the constraint x(δ(S)) ≥ k − g(δ(S)) is not implied by the reference system, and we did not add that constraint to it, it must be that S crosses some set in L.
Hence, |L_S| ≥ 1. Let L ∈ L_S. We continue by showing that the g-cut constraints corresponding to certain sets must have been dropped, allowing us to reduce to a setting similar to Fig. 4.
Claim 4.4. We have:

(i) The g-cut constraint corresponding to S ∩ L has been dropped from LP_Alg, the equation x(δ(S ∩ L)) = k − g(δ(S ∩ L)) is not implied by the reference system, and (y + g)(δ(S ∩ L)) ≤ k.

(ii) There is a set Q_1 ∈ {S ∖ L, L ∖ S} such that the g-cut constraint corresponding to Q_1 has been dropped from LP_Alg, the equation x(δ(Q_1)) = k − g(δ(Q_1)) is not implied by the reference system, and (y + g)(δ(Q_1)) ≤ k.

Proof. We start by proving (i). If (y + g)(δ(S ∩ L)) < k, then the g-cut constraint corresponding to S ∩ L must have been dropped, and all conditions of (i) are fulfilled. Hence, assume from now on (y + g)(δ(S ∩ L)) ≥ k.
We use the following well-known basic relation, which can be verified by checking that the shown equation holds for each coordinate (note that every coordinate corresponds to an edge):

χ_{δ(S)} + χ_{δ(L)} = χ_{δ(S∩L)} + χ_{δ(S∪L)} + 2χ_{E(S\L, L\S)}. (2)

By taking the scalar product of the above equation with y + g, we get

(y + g)(δ(S)) + (y + g)(δ(L)) = (y + g)(δ(S∩L)) + (y + g)(δ(S∪L)) + 2(y + g)(E(S\L, L\S)). (3)

Note that the g-cut constraint corresponding to S ∪ L has not been dropped yet because |S ∪ L| ≥ 2. Hence, (y + g)(δ(S∪L)) ≥ k. Together with (y + g)(δ(S)) = (y + g)(δ(L)) = k and the assumption (y + g)(δ(S∩L)) ≥ k, equation (3) yields (y + g)(E(S\L, L\S)) = 0, and therefore E(S\L, L\S) = ∅, which holds because we deleted all edges with (y + g)-value zero. Hence, (3) simplifies to

(y + g)(δ(S∩L)) = (y + g)(δ(S∪L)) = k.

Because χ_{δ(L)} is a row of our reference system and χ_{δ(S)} is not spanned by the rows of our reference system, we have that either the equation x(δ(S∩L)) = k − g(δ(S∩L)) or the equation x(δ(S∪L)) = k − g(δ(S∪L)) (or both) is not implied by our reference system. Note that both equations correspond to y-tight g-cut constraints as shown above. Because the g-cut constraint corresponding to S ∪ L is still part of LP Alg as |S ∪ L| ≥ 2, and L_{S∪L} ⊊ L_S, this g-cut constraint must be implied by the reference system. For otherwise, we could have chosen S ∪ L instead of S, which violates our choice of S. Hence, the g-cut constraint corresponding to S ∩ L is not implied by our reference system. Because also L_{S∩L} ⊊ L_S, the g-cut constraint corresponding to S ∩ L cannot be part of LP Alg anymore, as this would again violate our choice of S.
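Both this identity and the analogous relation used for point (ii) below are coordinate-wise equalities, so they can be sanity-checked numerically. The following is a small illustrative sketch (not part of the paper's algorithm), using an arbitrary example graph and a crossing pair of sets:

```python
# Coordinate-wise check of the two uncrossing identities for crossing sets S, L:
#   chi_{d(S)} + chi_{d(L)} = chi_{d(S&L)} + chi_{d(S|L)} + 2*chi_{E(S-L, L-S)}
#   chi_{d(S)} + chi_{d(L)} = chi_{d(S-L)} + chi_{d(L-S)} + 2*chi_{E(S&L, V-(S|L))}

def cut(edges, S):
    # characteristic vector of delta(S): edges with exactly one endpoint in S
    return [1 if (u in S) != (v in S) else 0 for (u, v) in edges]

def between(edges, A, B):
    # characteristic vector of E(A, B): edges with one endpoint in each of A, B
    return [1 if (u in A and v in B) or (u in B and v in A) else 0
            for (u, v) in edges]

n = 8
V = set(range(n))
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]  # complete graph

S = {0, 1, 2, 3}
L = {2, 3, 4, 5}  # S and L cross: S-L, S&L, L-S, V-(S|L) are all non-empty

lhs = [a + b for a, b in zip(cut(edges, S), cut(edges, L))]
rhs1 = [a + b + 2 * c for a, b, c in zip(cut(edges, S & L), cut(edges, S | L),
                                         between(edges, S - L, L - S))]
rhs2 = [a + b + 2 * c for a, b, c in zip(cut(edges, S - L), cut(edges, L - S),
                                         between(edges, S & L, V - (S | L)))]
assert lhs == rhs1 and lhs == rhs2
```

Since the identities hold coordinate by coordinate, they also hold after taking scalar products with any vector, such as y + g above.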
For point (ii) we can follow an analogous approach as for (i). If there is a set Q ∈ {S \ L, L \ S} with (y + g)(δ(Q)) < k, then we choose Q_1 := Q. Indeed, the g-cut constraint corresponding to Q_1 must have been dropped from LP Alg because (y + g)(δ(Q_1)) < k. Thus, Q_1 fulfills all conditions of point (ii). Hence, from now on assume (y + g)(δ(S \ L)) ≥ k and (y + g)(δ(L \ S)) ≥ k.
We use the following well-known basic relation:

χ_{δ(S)} + χ_{δ(L)} = χ_{δ(S\L)} + χ_{δ(L\S)} + 2χ_{E(S∩L, V\(S∪L))}.

By taking the scalar product of this equation with y + g, and using (y + g)(δ(S)) = (y + g)(δ(L)) = k together with the assumptions (y + g)(δ(S\L)) ≥ k and (y + g)(δ(L\S)) ≥ k, we obtain (y + g)(δ(S\L)) = (y + g)(δ(L\S)) = k. Because χ_{δ(L)} is a row in our reference system and χ_{δ(S)} is not spanned by the rows of our reference system, we have that either the equation x(δ(S\L)) = k − g(δ(S\L)) or x(δ(L\S)) = k − g(δ(L\S)) (or both) is not implied by the reference system. Note that both equations correspond to y-tight g-cut constraints as shown above. Let Q_1 ∈ {S \ L, L \ S} be such that x(δ(Q_1)) = k − g(δ(Q_1)) is not implied by the reference system. Because L_{Q_1} ⊊ L_S, the g-cut constraint corresponding to Q_1 must have been dropped from LP Alg. For otherwise, we could have chosen Q_1 instead of S, which violates our choice of S. Hence, Q_1 fulfills all conditions of point (ii).
(Claim 4.4)

By Claim 4.4 (i), the g-cut constraint corresponding to S ∩ L got relaxed/dropped from LP Alg and x(δ(S ∩ L)) = k − g(δ(S ∩ L)) is not implied by the reference system. Moreover, Claim 4.4 (ii) implies that the constraint corresponding to at least one of the sets S \ L or L \ S got dropped. Recall that dropped sets got contracted and therefore correspond to singletons. We finish the proof by showing that a ghost value augmentation could have been performed with respect to the singleton S ∩ L and either S \ L or L \ S.
To this end, consider the different (y + g)-loads between the four sets S \ L, S ∩ L, L \ S, and V \ (S ∪ L). For brevity, let a := (y + g)(E(S \ L, S ∩ L)) and b := (y + g)(E(L \ S, S ∩ L)). (We also use the following basic fact about crossing sets, which justifies L_{S∪L} ⊊ L_S above: let L be a laminar family over some finite ground set N, and let S ⊆ N and L ∈ L be such that S crosses L. Then any set in L that crosses S ∪ L also crosses S. However, the set L, which crosses S, does not cross S ∪ L.)
The above definitions immediately lead to the following basic relations (4), where the equalities at the start of the first two lines hold because S and L correspond to y-tight g-cut constraints, the inequality in the third line follows from Claim 4.4 (i), and the inequalities at the start of the last three lines hold by Lemma 4.1.
By subtracting the first relation above from the sum of the fourth and fifth one, we get

a ≥ k/2 − 2. (5)

Analogously, by subtracting the second relation in (4) from the sum of the fourth and sixth one, we get

b ≥ k/2 − 2. (6)

Let Q_1 ∈ {S \ L, L \ S} be the set as described in Claim 4.4 (ii). We will show that a ghost value augmentation could have been performed between Q_1 and S ∩ L. Note that these two sets correspond to singletons, and (5)/(6) show that the (parallel) edges between them fulfill the lower bound condition necessary to apply a ghost value augmentation. It remains to show that, if Q_1 = S \ L, we have a < k/2, and analogously, if Q_1 = L \ S, we have b < k/2. Then all conditions are fulfilled to apply a ghost value augmentation, which is the desired contradiction.
The proof for the two cases is identical; hence, we assume from now on Q_1 = S \ L. We have

a = ½ ((y + g)(δ(S \ L)) + (y + g)(δ(S ∩ L)) − (y + g)(δ(S))) ≤ ½ (k + k − k) = k/2,

where the inequality follows from the first and third relation in (4), and from (y + g)(δ(S \ L)) ≤ k, which holds by Claim 4.4 (ii) and the assumption Q_1 = S \ L. It remains to show that a ≠ k/2. Assume for the sake of deriving a contradiction that a = k/2. We therefore must have (y + g)(δ(S ∩ L)) = k and (y + g)(δ(S \ L)) = k, which implies by Claim 4.4 (ii) in particular (y + g)(δ(S ∪ L)) = k, (y + g)(δ(L \ S)) = k, and E(S ∩ L, V \ (S ∪ L)) = ∅. The contradiction we derive will be that the equation x(δ(S ∩ L)) = k − g(δ(S ∩ L)) is implied by the reference system. Because E(S ∩ L, V \ (S ∪ L)) = ∅, we have

χ_{δ(S∩L)} = χ_{E(S\L, S∩L)} + χ_{E(L\S, S∩L)}. (7)

Moreover, a = k/2 ∈ Z (recall we are assuming k is even) implies y_e ∈ Z for e ∈ E(S \ L, S ∩ L) because of Lemma 4.2. Hence, the y-values on the edges E(S \ L, S ∩ L) are fixed by the integrality constraints, which are part of the reference system. Moreover, we also have b = (y + g)(δ(S ∩ L)) − a = k/2, where the first equality holds because E(S ∩ L, V \ (S ∪ L)) = ∅. Hence, if also the g-cut constraint corresponding to L \ S got dropped from LP Alg, then we have analogously y_e ∈ Z for e ∈ E(L \ S, S ∩ L). However, then (7) implies that x(δ(S ∩ L)) = k − g(δ(S ∩ L)) is implied by LP Alg, which is a contradiction. Thus, it remains to consider the case where the g-cut constraint corresponding to L \ S is still part of LP Alg. Note that E(S \ L, L \ S) = ∅, and hence

χ_{E(L\S, S∩L)} = ½ (χ_{δ(L\S)} + χ_{δ(S)} − χ_{δ(S∪L)}).

Observe that χ_{δ(L\S)}, χ_{δ(S∪L)}, and χ_{δ(S)} are all row vectors of our reference system. Hence, also the row vector χ_{E(L\S, S∩L)} is implied by our reference system. This in turn implies by (7) that the equation x(δ(S ∩ L)) = k − g(δ(S ∩ L)) is implied by our reference system, which leads to the desired contradiction. (Lemma 4.3)

For completeness, we now present a classic reasoning, adjusted to our context, showing that the sparsity of LP Alg at any iteration of the algorithm can be bounded by the number of linearly independent y-tight g-cut constraints in a full column rank equation system of tight LP constraints. We state the result for an
equation system defining the LP Alg vertex y with an arbitrary family L corresponding to y-tight g-cut constraints. We later apply the statement with an equation system of y-tight constraints coming from Lemma 4.3, where the family L is laminar.

Lemma 4.5. Consider an iteration of Algorithm 1 where y is not yet integral, and no ghost value augmentation can be performed. Consider a full column rank system of equations corresponding to y-tight constraints of LP Alg, and let L ⊆ 2^{V\{r}} be all cuts of y-tight g-cut constraints that correspond to an equation in the equation system. Then |frac(y)| ≤ |L|.
Proof. First, we can assume that the considered equation system, for simplicity we call it the reference system, is a square system. Indeed, if the reference system is not square, then we can successively remove equations from the system that are implied by the other equations of the system until we get a square system. Moreover, the implication of the statement for the square system implies the statement for the original system. The square reference system has two types of equations:
• equations corresponding to y-tight g-cut constraints, i.e., x(δ(S)) = k − g(δ(S)) for S ∈ L, and
• equations corresponding to y-tight constraints on single edges, i.e., these are of the form x_e = p_e, where e ∈ E and p_e ∈ Z_{≥1}.
Let F ⊆ E be all edges for which an equation of the second type is in the system. Because the reference system is square, we have |E| = |L| + |F|. Moreover, all edges in F have integral y-values, which implies the result because |frac(y)| ≤ |E| − |F| = |L|.

Finally, the following lemma shows that we make progress in each iteration of Algorithm 1.

Lemma 4.6. At any iteration of Algorithm 1, if y is not integral and no ghost value augmentation can be applied (i.e., the algorithm is at an iteration where it reaches Line 10), then there is a g-cut constraint of LP Alg that can be relaxed.
Proof. Let L ⊆ {S ⊆ V \ {r} : x(δ(S)) = k − g(δ(S))} be a maximal laminar family of sets corresponding to y-tight g-cut constraints of LP Alg. By Lemma 4.3, these constraints, together with all y-tight constraints of LP Alg on single edges, correspond to a full column rank equation system with y being its unique solution. We successively remove from L constraints that are redundant in that system, until no g-cut constraint corresponding to a set in L is redundant.
Assume for the sake of deriving a contradiction that no cut relaxation can be applied. We will derive a contradiction by showing that this would imply |frac(y)| > |L|, which contradicts Lemma 4.5. To this end we use a token counting argument, where we assign two tokens to each edge in frac(y), and reassign those tokens to the sets in L such that each set in L gets at least 2 tokens, and at least one set gets strictly more than 2 tokens.
First, each edge {u, v} ∈ frac(y) assigns one token to the smallest set in L that contains u (if there is such a set) and one to the smallest set in L that contains v (if there is such a set). We then consider the sets in L in any smallest-to-largest order, and reassign excess tokens from children to their parent. With a smallest-to-largest order, we mean any order such that when considering L ∈ L, all descendants of L in L have already been considered. We maintain the following invariant: after considering a set L ∈ L, we have reassigned the tokens of L and its descendants in a way that L has at least 4 tokens, and each of its descendants has 2 tokens. By showing that this invariant can be maintained, the result follows because, at the end of the procedure, the maximal sets in L will have obtained at least 4 tokens, and all other sets in L obtained 2 tokens; hence 2|frac(y)| ≥ 2|L| + 2, i.e., |frac(y)| > |L|.
We start by showing that the invariant holds for each minimal set L in L. If the edges in frac(y) have at least 4 endpoints in L, then the invariant holds. Otherwise, we have |frac(y) ∩ E(L, V)| ≤ 3, and because we assumed that we cannot apply a cut relaxation to L, there must be a smaller set S ⊆ L corresponding to a y-tight g-cut constraint. We choose S to be a minimal such set. However, because δ(S) ⊆ E(L, V), we have |frac(y) ∩ δ(S)| ≤ |frac(y) ∩ E(L, V)| ≤ 3, i.e., the set δ(S) contains at most 3 edges with fractional y-values. This implies that we could have applied a cut relaxation to S, which contradicts our assumption that no cut relaxation was possible.
Consider now a non-minimal set L in L, and assume that the invariant holds for all of its children. If L has at least two children in L, then it can get 2 tokens from each of them, and the invariant holds for L. Hence, assume that L has only one child C ∈ L in L. In this case, L can get two tokens from C, which has 4 tokens. We complete the proof by showing that there are at least two edges of frac(y) with one endpoint in L \ C, which will give an additional 2 tokens to L, to obtain the 4 tokens required by the invariant. First observe that we must have frac(y) ∩ δ(C) ≠ frac(y) ∩ δ(L); since otherwise the y-tight g-cut constraint corresponding to L is implied by the y-tight g-cut constraint that corresponds to C and the y-tight equality constraints on single edges. This implies frac(y) ∩ (δ(L) △ δ(C)) ≠ ∅, and already shows that L obtains at least one more token due to a fractional edge with an endpoint in L \ C. Because both L and C correspond to y-tight g-cut constraints, we have (y + g)(δ(L)) = k and (y + g)(δ(C)) = k. Due to integrality of the ghost values g, this implies y(δ(L)) ∈ Z and y(δ(C)) ∈ Z. Thus, y(δ(L)) − y(δ(C)) ∈ Z, and because every edge in δ(L) △ δ(C) has an endpoint in L \ C, the set frac(y) ∩ (δ(L) △ δ(C)) cannot consist of a single edge; hence there are at least two edges of frac(y) with an endpoint in L \ C, as desired.

Note that Lemma 4.6 readily implies that Algorithm 1 terminates. Indeed, the number of ghost value augmentations one can perform on an edge e is no more than ⌈k/4⌉, because then just the ghost values alone already provide a load of at least k/2 on e, and e therefore does not qualify anymore for a ghost value augmentation. Moreover, the sets we relax correspond to the laminar family R, which can have size at most O(|V|).
However, bounding the number of ghost value augmentations per edge by ⌈k/4⌉ turns out to be very loose. Actually, as we will show next, we can perform at most one ghost value augmentation per edge, which is a result that is also helpful later on. This holds because whenever a ghost value augmentation is performed on an edge with endpoints u and v, a large part of the y-value on E(u, v) is integral and, because we fix/freeze integral values, we will have a load of at least ⌊y⌋(E(u, v)) between the vertices u and v in any future iteration where u and v did not yet get contracted through a cut relaxation. This load will be too high for another ghost value augmentation to be performed on an edge with endpoints u and v.
The following lemma implies in particular that at most one ghost value augmentation can be applied per edge.

Lemma 4.7. At any iteration of the algorithm when a ghost value augmentation is performed between two vertices u and v, no edge in E(u, v) has strictly positive ghost value, i.e., g_f = 0 for all f ∈ E(u, v).
Proof. Consider an iteration of the algorithm, with solution y to LP Alg and ghost values g, and let e ∈ E be an edge with endpoints u and v and strictly positive ghost value, i.e., g_e > 0. We show the statement by showing that no ghost value augmentation can be applied in the current iteration to any edge in E(u, v). Consider a prior iteration when a ghost value augmentation was applied to e. (Such an iteration must exist because g_e > 0.) Let F be the set of edges parallel to e at the moment when this ghost value augmentation was applied to e, and let ȳ and ḡ be the LP Alg solution and ghost values, respectively, at that iteration. Thus, ḡ are the ghost values right before the ghost value augmentation to e happened. Hence, F ⊆ E(u, v), and (ȳ + ḡ)(F) ≥ k/2 − 2. Because ghost values are always integral, k is even by assumption, and, by Lemma 4.2, ȳ has at most one fractional value, we have (⌊ȳ⌋ + ḡ)(F) ≥ k/2 − 2. Moreover, as the ghost value of e gets augmented by two units, and because integral y-values are fixed, we have (y + g)(E(u, v)) ≥ (⌊ȳ⌋ + ḡ)(F) + 2 ≥ k/2, showing as desired that no ghost value augmentation can be applied to any edge in E(u, v) at the current iteration.
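The rounding step used here (and again in Lemma 4.9 below) is that a sum with at most one fractional term that meets an integer lower bound still meets it after flooring. A tiny illustrative check, with hypothetical numbers:

```python
# Illustrative check (not from the paper): if a vector has at most one
# fractional entry and its sum is at least k/2 - 2 (an integer, since k is
# even), then the floored vector still sums to at least k/2 - 2. Indeed,
# floor_sum > sum - 1 >= target - 1, and floor_sum is an integer.
import math

def floor_sum(values):
    return sum(math.floor(v) for v in values)

k = 10                    # assume k even
target = k // 2 - 2       # = 3
examples = [
    [1, 1, 1],            # no fractional entry, sum 3
    [1, 2, 0.5],          # one fractional entry, sum 3.5
    [3.2],                # one fractional entry, sum 3.2
]
for vals in examples:
    assert sum(vals) >= target
    assert floor_sum(vals) >= target
```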
Using the above lemmas, we can now provide a better bound on the number of iterations of Algorithm 1 than the ⌈k/4⌉ · |E| + O(|V|) mentioned above. This lemma will also be useful again later on.

Lemma 4.8. Algorithm 1 terminates within O(|V|) iterations.

Proof. We first bound the number of ghost value augmentations. By Lemma 4.7, at most one ghost value augmentation is applied per edge, and only edges in the support of y are relevant; hence it suffices to bound |supp(y)|. This follows from standard combinatorial uncrossing techniques, which imply that the vector y computed during the initialization of the algorithm is the unique solution to a square full rank system corresponding to tight cut constraints for cut sets in a laminar family L. Hence, because the system is square, we have |supp(y)| ≤ |L|. Finally, a laminar family L over V can have size at most O(|V|). Thus, the number of ghost value augmentations is bounded by O(|V|).
Moreover, the number of cut relaxations is bounded by |R| = O(|V|), because R is a laminar family over V.
The result now follows because, by Lemma 4.6, each iteration of the algorithm either performs a ghost value augmentation, a cut relaxation, or we have that y is integral in which case the algorithm terminates.

Guarantees on Cut Constraints
Our goal now is to show that the solution z returned by Algorithm 1 satisfies z(δ(S)) ≥ k − 9 for all S ⊆ V \ {r} with S ≠ ∅ (again, recall we are assuming that k is even; hence the 9 instead of 10). To achieve guarantees on z, we crucially and repeatedly exploit that integral y-values get frozen/fixed. This guarantees that at any iteration of Algorithm 1, the solution y to LP Alg provides the following lower bound on the entries of z, i.e., the solution returned by the algorithm:

⌊y⌋_e ≤ z_e for all e ∈ E.
We recall that z in the above inequality is the final solution returned by our algorithm (not an intermediate value of z).
We therefore start by lower bounding the ⌊y⌋-values on different edge sets, starting with edges parallel to an edge to which a ghost value augmentation was applied.

Lemma 4.9. At any iteration of Algorithm 1 where we apply a ghost value augmentation, say on an edge e with endpoints u and v, we have ⌊y⌋(E(u, v)) ≥ k/2 − 2.

Proof. By Lemma 4.7, right before applying a ghost value augmentation on edge e between vertices u and v, we have g_f = 0 for all f ∈ E(u, v). Because a ghost value augmentation can now be applied to E(u, v), we have y(E(u, v)) = (y + g)(E(u, v)) ≥ k/2 − 2. Finally, as y has at most one fractional value in E(u, v) due to Lemma 4.2 and k is even, we obtain ⌊y⌋(E(u, v)) > y(E(u, v)) − 1 ≥ k/2 − 3, and hence, by integrality, ⌊y⌋(E(u, v)) ≥ k/2 − 2, as desired.

The following lemma shows that y-values within a relaxed set are integral. This shows in particular that our algorithm does return an integral vector z.

Lemma 4.10. Consider any iteration of the algorithm that will relax a cut S ⊆ V \ {r}. Then y_e ∈ Z for e ∈ E[S].
Proof. Let L ⊆ 2^{V\{r}} be a maximal laminar family of y-tight g-cut constraints that contains the set S. Consider a reference equation system containing an equation for each g-cut constraint for cut sets in L and all y-tight constraints on single edges. By Lemma 4.3, this reference system has full column rank and its unique solution is thus y. Because S will be relaxed, there is no y-tight g-cut constraint x(δ(T)) ≥ k − g(δ(T)) with T ⊊ S in the reference system. Hence, the reference system cannot contain an equation corresponding to any y-tight g-cut constraint of this type. This implies in particular that no edge e ∈ E[S] is part of an equation in the reference system that corresponds to a y-tight g-cut constraint. However, if there was an edge e ∈ E[S] with y_e ∉ Z, then such an edge would also not be part of any y-tight constraint on single edges, because these are constraints requiring that y-values on certain edges are integral, and y_e is not integral. This violates that each edge in the support of y must be contained in at least one equation of any full column rank equation system over the edges, thus implying the statement.
Before making a statement about the z-loads on cuts, we observe that we have high ⌊y⌋-loads on cuts right before the algorithm either relaxes a set containing them or terminates.

Lemma 4.11. Consider an iteration of Algorithm 1. Let S ⊆ V \ {r} with S ≠ ∅, and assume that at this iteration of the algorithm either a cut R ⊆ V with S ⊆ R gets relaxed, or the algorithm terminates. Then ⌊y⌋(δ(S)) ≥ k − 4.
Proof. We start by observing that, without loss of generality, we can assume that the g-cut constraint corresponding to S is still part of LP Alg during the considered iteration. If this is not the case, then we consider the iteration where S got relaxed, and apply the result to that iteration, during which the above assumption holds. If we denote by ȳ the y-values of the iteration when S got relaxed, then we get ⌊ȳ⌋(δ(S)) ≥ k − 4, which implies ⌊y⌋(δ(S)) ≥ k − 4, because integral y-values get fixed.
Hence, we assume from now on that the g-cut constraint corresponding to S is still part of LP Alg at the considered iteration. We make a case distinction based on the number of edges e ∈ δ(S) to which a ghost value augmentation was applied, i.e., the number of edges e ∈ δ(S) with g_e = 2. If there are at least two distinct edges e_1, e_2 ∈ δ(S) with g_{e_1} = g_{e_2} = 2, then let F_j ⊆ E be the edges that were parallel to e_j, for j ∈ {1, 2}, when the ghost value augmentation was applied to e_j. Lemma 4.7 implies F_1 ∩ F_2 = ∅. Moreover, by Lemma 4.9 and the fact that integral y-values get fixed, we have ⌊y⌋(F_j) ≥ k/2 − 2 for j ∈ {1, 2}. Hence, ⌊y⌋(δ(S)) ≥ ⌊y⌋(F_1) + ⌊y⌋(F_2) ≥ k − 4, as desired.
Assume now that a ghost value augmentation was applied to at most one edge in δ(S), i.e., g(δ(S)) ≤ 2. Because the g-cut constraint corresponding to S is still part of LP Alg, we get

(y + g)(δ(S)) ≥ k. (11)

If the algorithm terminates at this moment, we have that y is integral, and hence, by (11), ⌊y⌋(δ(S)) = y(δ(S)) ≥ k − g(δ(S)) ≥ k − 2. Otherwise, the cut R with S ⊆ R gets relaxed at this iteration. By Lemma 4.10, all edges in E[R] have integral y-values; hence all edges of δ(S) with fractional y-value lie in δ(R). Because R can be relaxed, we have |frac(y) ∩ δ(R)| ≤ 3, and thus ⌊y⌋(δ(S)) > y(δ(S)) − 3 ≥ k − g(δ(S)) − 3 ≥ k − 5, which implies ⌊y⌋(δ(S)) ≥ k − 4.

Whereas the previous statement provided guarantees for the current solution y at some iteration of the algorithm, we now derive from this guarantees for the final solution z returned by Algorithm 1. Recall that R is the set family whose sets correspond to non-singleton sets we relaxed (by contraction and dropping the corresponding cut from LP Alg); see the beginning of Section 4 for a definition. We start by providing a guarantee for cut sets S ⊆ V \ {r} that are "compatible" with the laminar family R, i.e., adding S to R preserves laminarity.

Lemma 4.12. For any S ⊆ V \ {r} such that R ∪ {S} is laminar, we have z(δ(S)) ≥ k − 4.
Proof. If there is no set R ∈ R with S ⊆ R, then the g-cut constraint corresponding to S is in LP Alg until the last iteration, and we can apply Lemma 4.11 to the last iteration of the algorithm to obtain ⌊y⌋(δ(S)) ≥ k − 4. The result follows because the z-values on the edges in δ(S) are the same as the y-values during the last iteration.

Otherwise, let R ∈ R with S ⊆ R be the smallest set in R containing S, and consider the iteration where R gets relaxed. By Lemma 4.11, we obtain ⌊y⌋(δ(S)) ≥ k − 4, and the result follows because integer values get fixed, and hence z ≥ ⌊y⌋. Indeed, this implies z(δ(S)) ≥ ⌊y⌋(δ(S)) ≥ k − 4.
We now turn to showing that z has a large load on any cut set S ⊆ V \ {r}, even if S is not compatible with R. Such a non-compatible set S will cross at least one set R ∈ R. We first show that there is a large z-value on the edges between R \ S and R ∩ S. (The set S in the statement below corresponds to the set S ∩ R when S is a cut set crossing R.)

Lemma 4.13. Let R ∈ R and S ⊊ R with S ≠ ∅. Then z(E(S ∩ R, R \ S)) ≥ k/2 − 4.
Proof. We first observe that we can assume the following minimality condition for the pair R and S: there is no set R̃ ∈ R with R̃ ⊊ R so that S and R̃ are crossing. Assume that we proved the statement only for such sets R and S fulfilling this minimality condition. Given an arbitrary R ∈ R and S ⊊ R with S ≠ ∅, we define R̃ to be the smallest set in R contained within R that crosses S. Moreover, we set S̃ := S ∩ R̃. This pair R̃ and S̃ fulfills the above-mentioned minimality condition. Hence, z(E(S̃, R̃ \ S̃)) ≥ k/2 − 4. Moreover, because E(S̃, R̃ \ S̃) ⊆ E(S, R \ S), we also get z(E(S, R \ S)) ≥ k/2 − 4, as desired.

Hence, assume from now on that R and S fulfill the above-mentioned minimality condition. This minimality condition implies
1. R ∪ {S} is laminar, and
2. R ∪ {R \ S} is laminar.
Consider now the iteration of the algorithm when the set R gets relaxed. Due to (1) and (2), there are vertex sets A, B ⊆ V \ {r} of the current graph G = (V, E) that correspond to S and R \ S, respectively, i.e., δ(A) = δ(S) and δ(B) = δ(R \ S). In other words, when undoing all contractions within A we get S, and when undoing all contractions within B we get R \ S. Hence, by Lemma 4.11, we have

y(δ(S)) = y(δ(A)) ≥ k − 4 and y(δ(R \ S)) = y(δ(B)) ≥ k − 4. (12)

The desired result now follows from

z(E(S, R \ S)) = z(E(A, B)) = y(E(A, B)) = ½ (y(δ(A)) + y(δ(B)) − y(δ(A ∪ B))) ≥ ½ ((k − 4) + (k − 4) − k) = k/2 − 4.

The first equation holds because at the iteration when R gets contracted, no edge with one endpoint in S and one in R \ S has been contracted yet. The second equation holds because when R gets relaxed, Algorithm 1 sets the z-values to the y-values for edges in E[R]. Finally, the inequality follows from (12) and y(δ(A ∪ B)) = y(δ(R)) ≤ (y + g)(δ(R)) = k, which holds because R corresponds to a y-tight g-cut constraint as it gets relaxed at the considered iteration.
Before proving our final guarantee for z-values of any cut set S ⊆ V \ {r} with S ≠ ∅, we show that the edges crossing a set R ∈ R will have a z-load of at most k + 2. This is the last ingredient we need to provide a lower bound on the z-value of the edges crossing any cut set.

Lemma 4.14. For any R ∈ R, we have z(δ(R)) ≤ k + 2.
Proof. Consider the iteration when R gets relaxed. At this iteration we have (y + g)(δ(R)) = k, because the g-cut constraint corresponding to R is y-tight; in particular, y(δ(R)) ≤ k. If frac(y) ∩ δ(R) = ∅, then all edges of δ(R) have integral y-values, which get fixed, and hence z(δ(R)) = y(δ(R)) ≤ k.
On the other hand, suppose |frac(y) ∩ δ(R)| > 0 in the iteration in which we relax R. It follows that ⌊y⌋(δ(R)) ≤ k − 1. However, since R is about to be relaxed, we know |frac(y) ∩ δ(R)| ≤ 3, and so it follows that in any future iteration, the load on δ(R) is at most ⌊y⌋(δ(R)) + 3 ≤ (k − 1) + 3 = k + 2; in particular, z(δ(R)) ≤ k + 2.

Finally, Lemma 4.15 shows that z has a large load on the edges crossing any cut set S ⊆ V \ {r}. More precisely, the load will be at least k − 9 (for k even), as claimed by our main rounding theorem, Theorem 2.1.

Lemma 4.15. For all S ⊆ V \ {r} where S ≠ ∅ we have z(δ(S)) ≥ k − 9.
Proof. The result follows from Lemma 4.12 if R ∪ {S} is laminar. Hence, assume from now on that at least one set R ∈ R crosses S.
If there are at least two disjoint sets R_1, R_2 ∈ R that cross S, then we have

z(δ(S)) ≥ z(E(S ∩ R_1, R_1 \ S)) + z(E(S ∩ R_2, R_2 \ S)) ≥ 2 (k/2 − 4) = k − 8 ≥ k − 9

by Lemma 4.13. Hence, assume in what follows that there are no two disjoint sets in R that cross S. Let R ∈ R be the maximal set in R that crosses S. Maximality of R and the fact that there is no set in R disjoint from R that crosses S implies:
• R ∪ {S ∪ R} is laminar, and
• R ∪ {S \ R} is laminar.
By a simple counting of edges leaving the relevant sets, we have, coordinate-wise over the edges,

2 χ_{δ(S)} ≥ χ_{δ(S\R)} + χ_{δ(S∪R)} + 2 χ_{E(S∩R, R\S)} − χ_{δ(R)}.

The result now follows by using
• z(E(S ∩ R, R \ S)) ≥ k/2 − 4 by Lemma 4.13,
• z(δ(S \ R)) ≥ k − 4 by Lemma 4.12, which applies due to laminarity of R ∪ {S \ R},
• z(δ(S ∪ R)) ≥ k − 4 by Lemma 4.12, which applies due to laminarity of R ∪ {S ∪ R}, and
• z(δ(R)) ≤ k + 2, which holds because of Lemma 4.14,
which together give z(δ(S)) ≥ ½ ((k − 4) + (k − 4) + 2 (k/2 − 4) − (k + 2)) = k − 9.
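The edge-counting step of the proof is a coordinate-wise inequality between characteristic vectors, so it can be sanity-checked numerically. The following is a small illustrative sketch (not from the paper), with an arbitrary example graph and a crossing pair of sets:

```python
# Coordinate-wise check: for crossing sets S and R, over all edges,
#   2*chi_{d(S)} >= chi_{d(S-R)} + chi_{d(S|R)} + 2*chi_{E(S&R, R-S)} - chi_{d(R)}.
# Plugging in the four bullet bounds then gives
#   z(d(S)) >= ((k-4) + (k-4) + 2*(k/2-4) - (k+2)) / 2 = k - 9.

def cut(edges, S):
    # characteristic vector of delta(S)
    return [1 if (u in S) != (v in S) else 0 for (u, v) in edges]

def between(edges, A, B):
    # characteristic vector of E(A, B)
    return [1 if (u in A and v in B) or (u in B and v in A) else 0
            for (u, v) in edges]

n = 8
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]  # complete graph

S = {0, 1, 2, 3}
R = {2, 3, 4, 5}  # S and R cross

lhs = [2 * a for a in cut(edges, S)]
rhs = [a + b + 2 * c - d
       for a, b, c, d in zip(cut(edges, S - R), cut(edges, S | R),
                             between(edges, S & R, R - S), cut(edges, R))]
assert all(x >= y for x, y in zip(lhs, rhs))
```

A quick arithmetic check of the final bound: ((k − 4) + (k − 4) + 2(k/2 − 4) − (k + 2))/2 = (2k − 18)/2 = k − 9.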

Cost of Returned Solution
The required cost bound is easily proven.
Observation 4.16. The solution z returned by Algorithm 1 has cost bounded by c^⊤ y, where y is the input vector of Theorem 2.1.
allow for verifying all constraints of (14) of the form x(δ(v)) ≥ k − g(δ(v)) for v ∈ V \ (W ∪ {r}), and all constraints of the first constraint family of (14) for any S ⊆ V \ {r} containing at least one vertex of V \ W. For the remaining constraints, which correspond to constraints of the first constraint family of (14) for sets S ⊆ W with |S| ≥ 2, we compute, for each subset Q ⊆ W with |Q| = 2, a minimum Q-r cut with edge capacities given by x + g. This allows for checking all remaining constraints and for identifying a violated constraint if there is one, which completes the separation procedure.

Lemma 4.18. At any iteration of Algorithm 1, one can in poly-time determine a set S ⊆ V \ {r} corresponding to a y-tight g-cut constraint to which the cut relaxation operation (Line 13) can be applied, if there is such a set.
Proof. Consider an iteration of the algorithm and let G = (V, E) be the current graph and y the current solution to LP Alg, as usual. Because a cut relaxation always contracts the corresponding vertex set, the g-cut constraints in the LP contain a constraint for each set S of size at least 2, and also for a subset of the singleton sets, namely those that correspond to an original vertex of the input graph and are therefore not a set obtained through contractions. Let W ⊆ V be all singletons that correspond to contracted vertices. Hence, the family of all g-cut constraints of the LP is the following:

x(δ(S)) ≥ k − g(δ(S)) for all S ⊆ V \ {r} with |S| ≥ 2 or S = {v} for some v ∈ V \ (W ∪ {r}).

We now first describe the procedure we use to identify a set S ⊆ V \ {r} to which the cut relaxation operation can be performed (if there is one) and then discuss why it is correct.
We start by checking whether there is any singleton set S = {v} for v ∈ V \ (W ∪ {r}) that can be relaxed. This is the case if x(δ(v)) = k − g(δ(v)) and |frac(y) ∩ δ(v)| ≤ 3, which can be checked in poly-time. If no such singleton exists, we consider all sets U ⊆ frac(y) with |U| ≤ 3. For each such U, consider all sets W ⊆ V(U), where V(U) ⊆ V are all endpoints of edges in U, such that W contains exactly one endpoint of each edge of U. If |W| < 2, consider all sets Z ⊆ V with W ⊆ Z and |Z| = 2; otherwise, we only consider Z = W for the set W. Let Z be the family of all considered sets Z. Note that |Z| = O(|V|^3) because each considered set Z has |Z| ≤ 3.
For each Z ∈ Z we do the following. We compute a minimal minimum Z-({r} ∪ (V(frac(y)) \ Z)) cut in G with edge capacities y + g. Let S_Z ⊆ V \ {r} be the computed cut for Z ∈ Z. We now focus only on the minimal cuts within the family {S_Z}_{Z∈Z}. If there is a minimal cut S_Z in the family with (y + g)(δ(S_Z)) = k, then we claim that a cut relaxation can be applied to S_Z. If there is no such cut, we claim that no cut relaxation can be applied at the current iteration.

Hardness of Approximation for k-ECSM

To construct our instance of k-ECSM, we begin with our TAP instance and then convert each edge of our TAP instance into two parallel and bisected edges. See Figure 5d. More formally, we construct the following unit-cost instance of k-ECSM on an auxiliary graph H = (W, B).
1. Vertices: Let V_E := {w_e, w′_e}_{e∈E} be a set of vertices, two for each edge of E. The vertex set of our k-ECSM instance is W := V ∪ V_E.
2. Edges: For each edge e = {u, v} ∈ E, we have 4 edges in our k-ECSM instance, namely {u, w_e}, {w_e, v}, {u, w′_e}, and {w′_e, v}. Let E_Gadget be all such edges. The edge set of our k-ECSM instance is B := E_Gadget ∪ L.

3. Costs: The cost of each edge b ∈ B in our k-ECSM instance is 1, i.e., c_b = 1.
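The construction above can be sketched in code as follows. This is an illustrative sketch with an assumed data representation (tuples for gadget vertices, pairs for edges), not an implementation from the paper:

```python
# Build the auxiliary k-ECSM instance H = (W, B) from a TAP instance:
# each tree edge e = {u, v} is replaced by two parallel two-edge paths
# through fresh vertices w_e and w'_e, and all TAP links are kept.
def build_kecsm_instance(V, E, links):
    """V: tree vertices, E: tree edges as (u, v) tuples, links: TAP links."""
    W = set(V)
    B = []
    for (u, v) in E:
        we, wpe = ("w", u, v), ("w'", u, v)   # the two gadget vertices for e
        W.update({we, wpe})
        # the 4 gadget edges {u, w_e}, {w_e, v}, {u, w'_e}, {w'_e, v}
        B += [(u, we), (we, v), (u, wpe), (wpe, v)]
    B += list(links)                          # links are kept as-is
    cost = {b: 1 for b in B}                  # unit costs
    return W, B, cost

# Tiny example: a path on 3 vertices with one link closing it into a cycle.
W, B, cost = build_kecsm_instance({0, 1, 2}, [(0, 1), (1, 2)], [(0, 2)])
assert len(W) == 3 + 2 * 2   # |V| plus two gadget vertices per tree edge
assert len(B) == 4 * 2 + 1   # four gadget edges per tree edge plus one link
```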
If S is a tree cut of G, then we say that the following cut in G′ corresponds to S:

S′ := S ∪ {w_e, w′_e : e ∈ E[S]},

where E[S] ⊆ E denotes all edges of E with both endpoints in S. In words, S′ consists of S and all gadget vertices created by edges internal to S. See Figure 5f.

A rough sketch of our analysis is as follows. By simple modifications to our k-ECSM solution, we can guarantee that for each vertex of V_E, one incident edge has value ⌊k/2⌋ and one incident edge has value ⌈k/2⌉. It follows that there must be some link in the support of our k-ECSM solution that covers e (in the TAP sense), and so all links in the support of our k-ECSM solution form a feasible TAP solution. See Figure 5e. On the other hand, one can show that the links in the support of our k-ECSM solution must make up at most an O(1/k) fraction of the total cost of our k-ECSM solution (on links and edges). Hence, if our k-ECSM solution is much better than (1 + ϵ_TAP/k)-approximate, the links in the support of our k-ECSM solution must be much better than (1 + ϵ_TAP)-approximate as compared to the optimal TAP solution.
The following summarizes the simple modifications we will make to guarantee that the incident edges of each w ∈ V_E have values ⌊k/2⌋ and ⌈k/2⌉.

Lemma 5.3. Given a feasible and integral solution z for the above instance of k-ECSM on H for k ∈ Z_{≥1} odd, one can in poly-time compute a feasible and integral z′ of equal cost such that for each w ∈ V_E, one of the two edges incident to w has z′-value ⌊k/2⌋ and one has z′-value ⌈k/2⌉.

Proof. We initially set z′_ℓ = z_ℓ for all ℓ ∈ L. (The other entries of z′, i.e., z′_b for b ∈ E_Gadget, will be set directly later and do not need to be initialized.) For each edge e = {u, v} ∈ E of the TAP graph G (in an arbitrary order), we do the following:
• For each w ∈ {w_e, w′_e}, we assign a z′-value of ⌈k/2⌉ to one of the two edges incident to w and a z′-value of ⌊k/2⌋ to the other edge incident to w.
• We select an arbitrary link ℓ ∈ cov(e) and increase its z′-value by z(δ_H(w_e)) + z(δ_H(w′_e)) − 2k. (Because z(δ_H(w_e)), z(δ_H(w′_e)) ≥ k, as z is a k-ECSM solution, this increase is non-negative.)
Note that z′ can clearly be computed in poly-time, and it fulfills by construction that each vertex w ∈ V_E is incident to one edge of value ⌈k/2⌉ and one of value ⌊k/2⌋. Hence, it remains to show that z′ is a k-ECSM solution.
Any cut Q ⊆ W such that |δ_H(Q) ∩ E_Gadget| ≥ 3 has z′-value z′(δ_H(Q)) ≥ 3⌊k/2⌋ ≥ k, as desired. The other cuts of H contain precisely 2 gadget edges, which must both be within the same cycle. Hence, let Q ⊆ W be such a cut in H that contains precisely two edges of E_Gadget, say within the cycle/gadget corresponding to e ∈ E. If either δ_H(w_e) ⊆ δ_H(Q) or δ_H(w′_e) ⊆ δ_H(Q), then we again obtain z′(δ_H(Q)) ≥ k, because the two edges incident with w_e (or w′_e) have z′-values that sum up to k. Thus, we can assume that δ_H(Q) contains one edge incident with w_e and one incident with w′_e. These two edges already lead to a z′-value of at least 2⌊k/2⌋ = k − 1. We complete the proof by showing that z′(δ(Q) ∩ L) ≥ 1. Note that Q ∩ V corresponds to a tree cut in G, with e being the only edge of E with one endpoint in Q and one outside. Hence, δ(Q) ∩ L = cov(e), and it suffices to show that z′(cov(e)) ≥ 1. If z(cov(e)) ≥ 1, then we are done because the z′-value on any link is no less than its z-value. Otherwise, we have z(cov(e)) = 0, and observe that z(δ_H(w_e)) + z(δ_H(w′_e)) > 2k; indeed, if z(δ_H(w_e)) = k and z(δ_H(w′_e)) = k, then there is an edge in both δ_H(w_e) and δ_H(w′_e) with z-value at most ⌊k/2⌋, and the cut in H that contains these two edges and the links in cov(e) has z-value strictly below k, which violates that z is a k-ECSM solution. Hence, in our construction of z′, we increased the value of at least one link ℓ ∈ cov(e) by z(δ_H(w_e)) + z(δ_H(w′_e)) − 2k ≥ 1, which finishes the proof.
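The per-gadget rebalancing in the proof of Lemma 5.3 can be sketched as follows. The data representation (dicts keyed by edge names, a `cov` map from tree edges to covering links) is assumed for illustration only:

```python
# Sketch of the Lemma 5.3 modification: for each tree edge e, reset the four
# gadget-edge values around w_e and w'_e to floor(k/2)/ceil(k/2) per vertex,
# and move the (non-negative) surplus onto an arbitrary link covering e.
def rebalance(z, gadgets, cov, k):
    """z: dict edge -> value; gadgets: dict tree-edge e -> (edges of w_e,
    edges of w'_e), each a pair of gadget edges; cov: dict e -> covering links."""
    zp = dict(z)  # start from z; gadget entries are overwritten below
    for e, (we_edges, wpe_edges) in gadgets.items():
        surplus = sum(z[b] for b in we_edges + wpe_edges) - 2 * k
        for (b1, b2) in (we_edges, wpe_edges):
            zp[b1], zp[b2] = k // 2, k - k // 2  # floor(k/2) and ceil(k/2)
        zp[cov[e][0]] += surplus                 # surplus >= 0 if z is feasible
    return zp

# Example with k = 5: one tree edge e, gadget edges g1..g4, one covering link.
k = 5
z = {"g1": 3, "g2": 3, "g3": 2, "g4": 3, "l1": 0}
zp = rebalance(z, {"e": (("g1", "g2"), ("g3", "g4"))}, {"e": ["l1"]}, k)
assert zp["g1"] + zp["g2"] == k and zp["g3"] + zp["g4"] == k
assert sum(zp.values()) == sum(z.values())  # total cost (unit costs) preserved
```

As in the proof, each gadget vertex ends up incident to one edge of value ⌊k/2⌋ and one of value ⌈k/2⌉, while the surplus moved to the link keeps the total cost unchanged.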
We can now conclude our hardness-of-approximation result.
Theorem 1.4. There exists a constant ε_TAP > 0 (given by Theorem 5.2) such that, unless P = NP, there is no poly-time algorithm which, given an instance of (unweighted) k-ECSM where k is part of the input, always returns a (1 + ε_TAP/(9k))-approximate solution.

Proof. As discussed above, we reduce from unweighted TAP. Suppose that we are given an instance of unweighted TAP on a graph G = (V, E) with links L and optimal value OPT_TAP ≥ (1/4)·|E|. Assume for the sake of deriving a contradiction that one can always (1 + ε_TAP/(9k))-approximate k-ECSM, and let H be the instance of k-ECSM resulting from the above reduction. Let z′ be a (1 + ε_TAP/(9k))-approximate solution to this k-ECSM instance and let z be the result of applying Lemma 5.3 to z′. Lastly, let F be the set of all links in the support of z. Clearly, constructing F takes poly-time.
We claim that F is a feasible solution for TAP on G. Consider a tree cut S ⊆ V cutting e in G, and let S′ ⊆ W be the corresponding cut in H = (W, B). By the guarantees of Lemma 5.3, some S̄ ∈ {S′, S′ + w_e, S′ + w′_e, S′ + w_e + w′_e} satisfies z(δ(S̄) \ L) = k − 1. (For example, in Figure 5f it is S′ + w_e.) However, since z is feasible, some link incident to S̄, and therefore incident to S, must be in the support of z. Thus, F is feasible.
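Feasibility of a link set for unweighted TAP amounts to checking that every tree edge lies on the tree path of some link. A minimal checker along these lines (a hypothetical representation, with the tree given as an adjacency dict and links as vertex pairs; not the paper's notation):

```python
from collections import deque

def tree_path_edges(adj, u, v):
    """Edges on the unique u-v path in a tree given as an adjacency dict."""
    parent = {u: None}
    q = deque([u])
    while q:  # BFS from u to record parents
        x = q.popleft()
        for nb in adj[x]:
            if nb not in parent:
                parent[nb] = x
                q.append(nb)
    path = set()
    while v != u:  # walk back from v to u
        path.add(frozenset((v, parent[v])))
        v = parent[v]
    return path

def is_feasible_tap(adj, links):
    """True iff every tree edge is covered by the tree path of some link."""
    covered = set()
    for (u, v) in links:
        covered |= tree_path_edges(adj, u, v)
    tree_edges = {frozenset((x, nb)) for x in adj for nb in adj[x]}
    return tree_edges <= covered

# Toy usage: path 1-2-3; the link (1,3) covers both tree edges.
path_tree = {1: [2], 2: [1, 3], 3: [2]}
```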
Lastly, we argue that F has small cost, resulting in a contradiction to Theorem 5.2. Roughly, we observe that the gadget edges we add make up at most a 1 − 1/k fraction of the total cost of our solution, so (1 + ε_TAP/(9k))-approximating k-ECSM translates to (1 + ε_TAP)-approximating TAP. More formally, let OPT_{k-ECSM} be the value of the optimal k-ECSM solution on H. It is easy to verify that the k-ECSM solution which, for each w ∈ V_E, sets one incident edge to ⌊k/2⌋ and the other to ⌈k/2⌉, sets each edge in the support of the optimal TAP solution to 1, and sets all other edges to 0, is feasible for k-ECSM. It follows that

OPT_{k-ECSM} ≤ OPT_TAP + 2k · |E|. (16)

On the other hand, every link in the support of z has value at least 1 and the gadget edges have total z-value 2k · |E|, so the fact that c⊤z′ = c⊤z and the definition of F give

|F| ≤ c⊤z − 2k · |E| = c⊤z′ − 2k · |E|. (17)

Combining Equation (16), Equation (17), and the fact that z′ is (1 + ε_TAP/(9k))-approximate, and using OPT_TAP ≥ (1/4)·|E|, we obtain

|F| ≤ (1 + ε_TAP/(9k)) · (OPT_TAP + 2k · |E|) − 2k · |E| = OPT_TAP + (ε_TAP/(9k)) · OPT_TAP + (2ε_TAP/9) · |E| ≤ (1 + ε_TAP/9 + 8ε_TAP/9) · OPT_TAP ≤ (1 + ε_TAP) · OPT_TAP,

contradicting Theorem 5.2.
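The final cost calculation can be checked numerically. The sketch below (hypothetical free parameters, not the paper's code) confirms that a (1 + ε/(9k))-approximate k-ECSM value of at most (1 + ε/(9k))·(OPT_TAP + 2k|E|) leaves a link cost of at most (1 + ε)·OPT_TAP once the gadget cost 2k|E| is subtracted, provided OPT_TAP ≥ |E|/4:

```python
import itertools

def link_cost_bound(opt_tap: float, num_edges: int, k: int, eps: float) -> float:
    """Upper bound on c^T z - 2k|E| when c^T z <= (1 + eps/(9k)) * (OPT_TAP + 2k|E|)."""
    kecsm_cost = (1 + eps / (9 * k)) * (opt_tap + 2 * k * num_edges)
    return kecsm_cost - 2 * k * num_edges

# Check the chain over a grid of parameters, at the extreme OPT_TAP = |E|/4
# (larger OPT_TAP only makes the inequality easier).
for k, eps, m in itertools.product([2, 3, 10, 100], [0.01, 0.1, 0.5], [4, 40, 400]):
    opt_tap = m / 4
    assert link_cost_bound(opt_tap, m, k, eps) <= (1 + eps) * opt_tap + 1e-9
```

The slack comes from eps/(9k) ≤ eps/9 and (2·eps/9)·|E| ≤ (8·eps/9)·OPT_TAP, which together stay below an additive eps·OPT_TAP.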

Figure 1: An example where skew supermodularity of the requirement function fails. The solid blue edges represent collections of frozen edges between sets. The dashed edges represent collections of non-frozen edges. Here, S ∖ T and S ∩ T have been dropped as they have at least k − c frozen edges. However, S, T, T ∖ S, and S ∪ T have fewer than k − c frozen edges and thus remain. So, f(S) = f(T) = k but f(S ∖ T) = f(S ∩ T) = 0.

Figure 2: An example of where uncrossing breaks down; in the top figure we let ℓ = ⌈(c+1)/2⌉ and in the bottom figure we assume c = 10 for concreteness. The blue edges represent collections of frozen edges and the dotted edges represent collections of fractional edges. To ensure feasibility we let 0 < a < 1. In this example, we drop cuts when there are k − c frozen edges (or k − 10 in the bottom picture). Then, in both examples we have dropped S ∖ T and S ∩ T, but we have not yet dropped S and T.

10 else if there is a y-tight g-cut constraint x(δ(S)) ≥ k − g(δ(S)) in LP_Alg such that (i) LP_Alg does not contain a y-tight g-cut constraint x(δ(T)) ≥ k − g(δ(T)) for T ⊊ S, and (ii) |frac(y) ∩ δ(S)| ≤ 3,
11 then
12     z_e = y_e for all e ∈ E[S].
13     Drop/Contract: Contract set S in G and remove the g-cut constraint for S from LP_Alg.
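The selection condition in lines 10-11 can be sketched in code. Below is a toy implementation with hypothetical data structures (cuts as frozensets of vertices, δ(S) and g(δ(S)) given as dictionaries); it is an illustration of the two conditions, not the paper's algorithm:

```python
def find_droppable_cut(cuts, delta, y, g, k, eps=1e-9):
    """Return a y-tight g-cut S with at most 3 fractional edges in delta(S)
    such that no y-tight g-cut is a proper subset of S, or None if none exists.

    `cuts`  : iterable of frozensets of vertices (the g-cut constraints in LP_Alg)
    `delta` : delta[S] = list of edges crossing S
    `y`     : current LP values, y[e] in [0, 1]
    `g`     : g[S] = value of g(delta(S))
    """
    def tight(S):
        # y-tight: the constraint x(delta(S)) >= k - g(delta(S)) holds with equality
        return abs(sum(y[e] for e in delta[S]) - (k - g[S])) <= eps

    tight_cuts = [S for S in cuts if tight(S)]
    for S in tight_cuts:
        if any(T < S for T in tight_cuts):   # (i) no y-tight g-cut strictly inside S
            continue
        frac = [e for e in delta[S] if eps < y[e] < 1 - eps]
        if len(frac) <= 3:                   # (ii) sparsity of fractional edges
            return S
    return None
```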

Lemma 4.1. At any iteration of Algorithm 1, we have

9 L_{S∪L} ⊊ L_S (and analogously L_{S∩L}, L_{S\L}, L_{L\S} ⊊ L_S) follows from the following observation on laminar families.
and (10) thus implies that δ(L \ C) cannot have a single edge with fractional y-value. Because frac(y) ∩ δ(L \ C) ≠ ∅, we have |frac(y) ∩ δ(L \ C)| ≥ 2, showing as desired that L gets at least two tokens from edges of frac(y) with one endpoint in L \ C.


Figure 5: Our reduction from TAP to k-ECSM. 5a: TAP instance on G with links L dashed and edge e labeled. 5b: a feasible TAP solution in blue. 5c: a tree cut S. 5d: the k-ECSM instance on H with w_e, w′_e labeled; V_E are the small nodes. 5e: a feasible k-ECSM solution for H, with non-zero links in blue and the values of the edges incident to w_e and w′_e labeled. 5f: the cut S′ in H corresponding to S in G.