Near-Optimal Dynamic Rounding of Fractional Matchings in Bipartite Graphs

We study dynamic $(1-\epsilon)$-approximate rounding of fractional matchings -- a key ingredient in numerous breakthroughs in the dynamic graph algorithms literature. Our first contribution is a surprisingly simple deterministic rounding algorithm in bipartite graphs with amortized update time $O(\epsilon^{-1} \log^2 (\epsilon^{-1} \cdot n))$, matching an (unconditional) recourse lower bound of $\Omega(\epsilon^{-1})$ up to logarithmic factors. Moreover, this algorithm's update time improves provided the minimum (non-zero) weight in the fractional matching is lower bounded throughout. Combining this algorithm with novel dynamic \emph{partial rounding} algorithms to increase this minimum weight, we obtain several algorithms that improve this dependence on $n$. For example, we give a high-probability randomized algorithm with $\tilde{O}(\epsilon^{-1}\cdot (\log\log n)^2)$-update time against adaptive adversaries. (We use Soft-Oh notation, $\tilde{O}$, to suppress polylogarithmic factors in the argument, i.e., $\tilde{O}(f)=O(f\cdot \mathrm{poly}(\log f))$.) Using our rounding algorithms, we also round known $(1-\epsilon)$-decremental fractional bipartite matching algorithms with no asymptotic overhead, thus improving on state-of-the-art algorithms for the decremental bipartite matching problem. Further, we provide extensions of our results to general graphs and to maintaining almost-maximal matchings.


Introduction
Dynamic matching is one of the most central and well-studied dynamic algorithm problems. Here, a graph undergoes edge insertions and deletions, and we wish to quickly update a large matching (a vertex-disjoint set of edges) following each such change to the graph.
A cornerstone of numerous dynamic matching results is the dynamic relax-and-round approach: the combination of dynamic fractional matching algorithms [BHI15, BHN16, BHN17, BK19, BCH20] with dynamic rounding algorithms [ACC+18, Waj20, BK21, Kis22, BKSW23]. The dynamic fractional matching problem asks to maintain a vector x ∈ ℝ^E_{≥0} such that x(v) := Σ_{e∋v} x_e satisfies the fractional degree constraint x(v) ≤ 1 for all vertices v ∈ V and ‖x‖ := Σ_e x_e is large compared to the size of the largest (fractional) matching in the dynamic graph G = (V, E). The goal typically is to solve this problem while minimizing the amortized or worst-case time per edge update in G. For the rounding problem (the focus of this work), an abstract interface can be defined as follows.
Definition 1.1. A dynamic rounding algorithm (for fractional matchings) is a data structure supporting the following operations:
• init(G = (V, E), x ∈ ℝ^E_{≥0}, ε ∈ (0, 1)): initializes the data structure for undirected graph G with vertices V and edges E, current fractional matching x in G, and target error ε.
• update(e ∈ E, ν ∈ [0, 1]): sets x_e ← ν under the promise that the resulting x is a fractional matching in G.ᵃ
The algorithm must maintain a matching M in the support of x, supp(x) := {e ∈ E | x_e > 0}, such that M is a (1 − ε)-approximation with respect to ‖x‖ := Σ_e x_e, i.e., M ⊆ supp(x), M is a matching, and |M| ≥ (1 − ε) · ‖x‖.
ᵃ Invoking update(e, 0) essentially deletes e, and subsequently invoking update(e, ν) for ν > 0 essentially adds e back. So, G might as well be the complete graph on V. However, we find the notation G = (V, E) convenient.
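To make this interface concrete, here is a minimal Python sketch of Definition 1.1's two operations. The class is our own illustration: the greedy recompute-from-scratch rounding inside it is only a placeholder (far from (1 − ε)-approximate and far from the update times discussed in this paper), meant to show the contract, namely that M stays a matching inside supp(x).

```python
from typing import Dict, Tuple

Edge = Tuple[int, int]

class DynamicRounder:
    """Toy illustration of the init/update interface of Definition 1.1."""

    def init(self, G, x: Dict[Edge, float], eps: float) -> None:
        # G is implicit in supp(x) here, as footnote (a) above allows
        self.x, self.eps = dict(x), eps
        self.M = self._round()

    def update(self, e: Edge, nu: float) -> None:
        # promise: the resulting x is still a fractional matching
        self.x[e] = nu
        self.M = self._round()  # a real algorithm updates M quickly instead

    def _round(self):
        # greedy placeholder: scan edges by decreasing value, keep a matching
        matched, M = set(), set()
        for e, xe in sorted(self.x.items(), key=lambda kv: -kv[1]):
            if xe > 0 and e[0] not in matched and e[1] not in matched:
                M.add(e)
                matched.update(e)
        return M

r = DynamicRounder()
r.init(None, {(0, 1): 0.5, (1, 2): 0.5, (2, 3): 0.5}, eps=0.1)
```

The invariants to notice are exactly those of the definition: M ⊆ supp(x) and M vertex-disjoint, maintained across update calls.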
The combination of fast fractional algorithms with fast dynamic rounding algorithms plays a key role in state-of-the-art time/approximation trade-offs for the dynamic matching problem against an adaptive adversary [Waj20, BK21, Kis22], including the recent breakthroughs of [BKSW23, Beh23]. Here, a randomized algorithm works against an adaptive adversary (or is adaptive, for short) if its guarantees hold even when future updates depend on the algorithm's previous output and its internal state. Slightly weaker are output-adaptive algorithms, which allow updates to depend only on the algorithm's output. Note that deterministic algorithms are automatically adaptive. A major motivation to study output-adaptive dynamic algorithms is their black-box use as subroutines within other algorithms. (See discussions in, e.g., [NS17, BKM+22, CK19].) Despite significant effort and success in designing and applying dynamic rounding algorithms, the update time of current (1 − ε)-approximate dynamic rounding approaches is slower by large poly(ε⁻¹, log n) factors than an unconditional recourse (changes per update) lower bound of Ω(ε⁻¹) (Fact 2.3). Consequently, rounding is a computational bottleneck for the running time of many state-of-the-art dynamic matching algorithms [Waj20, BK21, Kis22, BKSW23, Beh23, ABR24] and decremental (only allowing deletions) matching algorithms [BGS20, JJST22].
The question thus arises: can one design (output-adaptive) optimal dynamic rounding algorithms for fractional matchings? We answer this question in the affirmative in a strong sense.

Our Contributions
Our main results are deterministic and randomized dynamic fractional matching rounding algorithms for bipartite graphs that match the aforementioned simple recourse lower bound of Ω(ε⁻¹) up to logarithmic factors in ε and (sub-)logarithmic factors in n := |V|. These results are summarized by the following theorem.

Theorem 1.2. The dynamic bipartite matching rounding problem admits:
1. A deterministic algorithm with Õ(ε⁻¹ log n) update time.
The init(G, x, ε) time of each of these algorithms is O(ε • |supp(x)|) times its update time.
In contrast, prior approaches have update time at least Ω(ε⁻⁴) (see Section 1.2). Moreover, all previous adaptive algorithms with high-probability (w.h.p.) or deterministic guarantees have at least (poly)logarithmic dependence on n, as opposed to our (sub-)logarithmic dependence on n.
Theorem 1.4 (Informal version of Theorem 6.6). There exist dynamic algorithms maintaining an ε-AMM in general graphs in update time Õ(ε⁻³) + O(t_f + u_f · t_r), where t_f and u_f are the update time and number of calls to update of any "structured" dynamic fractional matching algorithm, and t_r is the update time for "partial" rounding. Furthermore, there exist dynamic partial rounding algorithms with the same update times and adaptivity as those of Theorem 1.2.

Applications
Applying our rounding algorithms to known fractional algorithms yields a number of new state-of-the-art dynamic matching results.
For example, by a black-box application of Theorem 1.2, we deterministically round known decremental (fractional) bipartite matching algorithms [JJST22, BGS20] with no asymptotic overhead, yielding faster (1 − ε)-approximate decremental bipartite matching algorithms. We also discuss how a variant of Theorem 1.4, together with the general-graph decremental algorithm of [ABD22], leads to a conjecture regarding the first deterministic sub-polynomial update time (1 − ε)-approximate decremental matching algorithm in general graphs.
Our main application is obtained by applying our rounding algorithm for general graphs of Theorem 1.4 to the O(ε −2 )-time fractional matching algorithm of [BK19], yielding the following.
In contrast, all prior non-oblivious (1/2 − ε)-approximate matching algorithms had at least quartic dependence on ε⁻¹, which the above result improves to cubic. Moreover, this result yields the first deterministic O(log n)-time and adaptive o(log n)-time high-probability algorithms for this widely-studied approximation range and for near-maximal matchings. This nearly concludes a long line of work on deterministic/adaptive dynamic matching algorithms for the (1/2 − ε) approximation regime [BHI15, BHN16, BDH+19, BK19, Waj20, BK21, Kis22, BKSW23].

Our Approach in a Nutshell
Here we outline our approach, focusing on the key ideas behind Theorem 1.2. To better contrast our techniques with those of prior work, we start by briefly overviewing the latter.
Previous approaches. Prior dynamic rounding algorithms [ACC+18, Waj20, BK21, Kis22] all broadly work by partially rounding the fractional matching x to obtain a matching sparsifier S (a sparse subgraph approximately preserving the fractional matching size compared to x). Then, they periodically compute a (1 − ε)-approximate matching in this sparsifier S using a static Õ(|S| · ε⁻¹)-time algorithm (e.g., [DP14]) whenever x changes by ε · ‖x‖, i.e., every Ω(ε · ‖x‖) updates. This period length guarantees that the computed matching remains a good approximation of the current fractional matching during the period, with as good an approximation ratio as the sparsifier S. Now, for sparsifier S to be O(1)-approximate, it must have size |S| = Ω(‖x‖), and so this approach results in an update time of at least Ω(ε⁻²). Known dynamic partial rounding approaches all result in even larger sparsifiers, resulting in large poly(ε⁻¹, log n) update times.
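For intuition, the update-time bound of this framework follows from a back-of-the-envelope amortization over one period (using the quantities above, with |S| = Θ(‖x‖) for an O(1)-approximate sparsifier):

    amortized time per update = Õ(|S| · ε⁻¹) / Ω(ε · ‖x‖) = Õ(|S| / (ε² · ‖x‖)) = Õ(ε⁻²).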
Direct to integrality. Our first rounding algorithm for bipartite graphs breaks from this framework and directly rounds to integrality. This avoids the overhead of periodically recomputing near-maximum matchings with static algorithms, which is necessary to avoid super-linear-in-ε⁻¹ update time (or n^{o(1)} factors, if we substitute the static approximate algorithms with the breakthrough near-linear-time max-flow algorithm of [CKL+22]). The key idea is that, by encoding each edge's weight in binary, we can round the fractional matching "bit by bit", deciding for each edge whether to round a component of value 2^{−i} to a component of value 2^{−i+1}. This can be done statically in near-linear time by variants of standard degree splitting algorithms, which decrease the degree of each node in a multigraph by a factor of two (see Section 2). Letting L := log((min_{e: x_e ≠ 0} x_e)⁻¹), we show that by buffering updates of total value at most O(ε · ‖x‖/L) for each power of 2, we can efficiently dynamize this approach, obtaining a dynamic rounding algorithm with update time Õ(ε⁻¹ · L²). As we can assume that min_{e: x_e ≠ 0} x_e ≥ ε/n² (Observation 2.2), this gives our bipartite Õ(ε⁻¹ · log² n)-time algorithm.
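For illustration, the bit decomposition above can be computed as follows. This is a small helper of ours (the name binary_supports is hypothetical); it extracts the sets supp_i(x) of edges whose i-th bit is 1 in the expansion x_e = Σ_i (x_e)_i · 2^{−i}, truncated at L bits:

```python
def binary_supports(x, L):
    """Return supp_i(x) for i = 1..L: the edges whose i-th bit is 1."""
    supp = {i: set() for i in range(1, L + 1)}
    for e, xe in x.items():
        frac = xe
        for i in range(1, L + 1):
            frac *= 2  # shift next bit to the left of the binary point
            bit, frac = int(frac), frac - int(frac)
            if bit:
                supp[i].add(e)
    return supp

# 0.75 = 2^-1 + 2^-2 and 0.25 = 2^-2, so bit 1 holds one edge, bit 2 all three
x = {("u1", "v1"): 0.75, ("u1", "v2"): 0.25, ("u2", "v1"): 0.25}
supp = binary_supports(x, L=2)
```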
Faster partial rounding. The second ingredient needed for Theorem 1.2 is a number of algorithms for "partially rounding" fractional matchings, increasing min_{e: x_e ≠ 0} x_e while approximately preserving the value of the fractional matching. (The output is not quite a fractional matching, but in a sense is close to one; see Definition 4.1.) Our partial rounding algorithms draw on a number of techniques, including fast algorithms for partitioning a fractional matching's support into multiple sparsifiers (as opposed to a single such sparsifier in prior work) and a new output-adaptive sampling data structure of possible independent interest (Appendix B). Combining these partial rounding algorithms with our simple algorithm underlies all our bipartite rounding results of Theorem 1.2, as well as our general-graph rounding results of Theorem 1.4 (with further work; see Section 6).

Related Work
The dynamic matching literature is vast, and so we only briefly discuss it here. For a more detailed discussion, see, e.g., the recent papers [Beh23, BKSW23, BKS23a, ABKL23].

Paper Outline
Following some preliminaries in Section 2, we provide our first simple bipartite rounding algorithm in Section 3. In Section 4, we introduce the notion of partial roundings that we study and show how such partial rounding algorithms (which we provide and analyze in Section 5) can be combined with our simple algorithm to obtain the (bipartite) rounding algorithms of Theorem 1.2. In Section 6, we analyze our rounding algorithms when applied to known fractional matchings in general graphs, from which we obtain Theorems 1.4 and 1.5. We conclude with our decremental results in Section 7.

Preliminaries
Assumptions and Model. Throughout, we assume that ‖x‖ ≥ 1, as otherwise it is trivial to round x within a factor of 1 − ε by maintaining a pointer to any edge in supp(x) whenever the latter is not empty. In this paper, we work in the word RAM model of computation with words of size w := Θ(log n), allowing us to index any of 2^{O(w)} = poly(n) memory addresses, perform arithmetic on w-bit words, and draw w-bit random variables, all in constant time. We will perform all of the above operations on O(log(ε⁻¹ · n))-bit words, which is still O(w) provided ε⁻¹ = poly(n). If ε is much smaller, all stated running times trivially increase by a factor of O(log(ε⁻¹)).
Notation. For multisets S₁ and S₂, we denote by S₁ ⊎ S₂ the "union" multiset, in which each element has multiplicity equal to the sum of its multiplicities in S₁ and S₂. A vector x is λ-uniform if x_e = λ for all e ∈ supp(x), and is uniform if it is λ-uniform for some λ. Given a fractional matching x, we call an integral matching … Finally, we use the following notion of distance and its monotonicity.
… for all ε ≥ ε′. Moreover, by the triangle inequality and the basic fact that (a + b)⁺ ≤ a⁺ + b⁺ for all real a, b, we have … for all ε₁, ε₂ ≥ 0 and vectors x, y, z ∈ ℝ^E.

Support and binary encoding. We denote the binary encoding of each edge e's fractional value by x_e := Σ_i (x_e)_i · 2^{−i}. We further let supp_i(x) := {e ∈ E | (x_e)_i = 1} denote the set of coordinates of x whose i-th bit is one. So, supp(x) = ⋃_i supp_i(x). Next, we let x_min := min_{e∈supp(x)} x_e. The following observation allows us to restrict our attention to a small number of bits when rounding bipartite fractional matchings x. (In Section 6 we extend this observation to the structured fractional matchings in general graphs that interest us there.)

Observation 2.2. For rounding bipartite fractional matchings, by decreasing ε by a constant factor, it is without loss of generality that x_min ≥ ε/n²; moreover, if ∆ ≤ x_min and L := 1 + ⌈log(ε⁻¹∆⁻¹)⌉, we may safely assume that (x_e)_i = 0 for all i > L.
Proof. Let ε′ = ε/3. Consider the vector x′ obtained by zeroing out all entries x_e of x with x_e < ε′/n² and setting (x_e)_i = 0 for all edges e and indices i > L. Clearly, supp(x′) ⊆ supp(x) and x′ is a fractional matching, as x′ ≤ x. The following shows that ‖x′‖ is not much smaller than ‖x‖ ≥ 1.
Recourse Lower Bound. We note that the number of changes to M per update (a.k.a. the rounding algorithm's recourse) is at least Ω(ε⁻¹) in the worst case: … and then invoking update(·, 1/2) for these two edges sufficiently many times implies that the matching M maintained by A must change by an average of Ω(ε⁻¹) edges per update.

The degree-split subroutine
Throughout the paper, we use the following subroutine to partition a graph into two subgraphs of roughly equal sizes while roughly halving all vertices' degrees. Such subroutines, obtained by, e.g., computing maximal walks and partitioning them into odd/even-indexed edges, have appeared in various places in the literature. For completeness, we provide this algorithm in Appendix A.
Proposition 2.4.There exists an algorithm degree-split, which on multigraph G = (V, E) with maximum edge multiplicity at most two (i.e., no edge has more than two copies) computes in O(|E|) time two (simple) edge-sets E 1 and E 2 of two disjoint sub-graphs of G, such that E 1 , E 2 and the degrees d G (v) and

Simple Rounding for Bipartite Matchings
In this section we use the binary encoding of x to approximately round fractional bipartite matchings in a "linear" manner, rounding from the least significant to the most significant bit of the encoding. We first illustrate this approach in a static setting in Section 3.1. This will serve as a warm-up for our first dynamic rounding algorithm, provided in Section 3.2, which is essentially a dynamic variant of the static algorithm (with its init procedure being essentially the static algorithm).

Warm-up: Static Bipartite Rounding
In this section, we provide a simple static bipartite rounding algorithm for fractional matchings. Specifically, we prove the following Theorem 3.1, analyzing our rounding algorithm, Algorithm 1. The algorithm simply considers, for all i, E_i := supp_i(x), i.e., the edges whose i-th bit is set to one in x. Starting from F_L = ∅, for i = L, …, 1, the algorithm applies degree-split to the multigraph G[F_i ⊎ E_i] and sets F_{i−1} to be the first edge-set output by degree-split (by induction, E_i and F_i are simple sets, and so G[F_i ⊎ E_i] has maximum multiplicity two). Overloading notation slightly, we denote this by F_{i−1} ← degree-split(F_i ⊎ E_i).

Theorem 3.1. On fractional bipartite matching x and error parameter ε ∈ (0, 1), Algorithm 1 outputs an integral matching …

By Observation 2.2, L = O(log(ε⁻¹ · n)), and so Theorem 3.1 implies an O(|supp(x)| · log(ε⁻¹ · n)) runtime for Algorithm 1. We prove this theorem in several steps. Key to our analysis is the following sequence of vectors (which we will soon show are fractional matchings if supp(x) is bipartite).

Definition 3.2. Letting F_i(e) := 1[e ∈ F_i] and E_i(e) := 1[e ∈ E_i] = (x_e)_i, we define a sequence of vectors x^{(i)} ∈ ℝ^E_{≥0} for i = 0, 1, …, L as follows:

    x^{(i)}_e := F_i(e) · 2^{−i} + Σ_{j=1}^{i} E_j(e) · 2^{−j}.    (1)
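To make the bit-by-bit procedure concrete, the following self-contained Python sketch mirrors Algorithm 1 under simplifying assumptions of ours: degree_split here alternates the edges of maximal walks between the two halves, a naive stand-in that halves degrees up to ±1 per walk but is not guaranteed to satisfy all properties of Proposition 2.4 (e.g., simplicity of both halves on multigraphs). All names are ours, not the paper's.

```python
from collections import defaultdict

def degree_split(edges):
    """Alternate the edges of maximal walks between two halves (naive version)."""
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append(idx)
        adj[v].append(idx)
    used = [False] * len(edges)
    halves = ([], [])
    for start in list(adj):
        while adj[start]:
            cur, parity = start, 0
            while adj[cur]:
                idx = adj[cur].pop()
                if used[idx]:
                    continue  # edge already consumed from its other endpoint
                used[idx] = True
                u, v = edges[idx]
                halves[parity].append(edges[idx])
                parity ^= 1
                cur = v if u == cur else u
    return halves

def round_bipartite(x, L):
    """Round x (edge values: multiples of 2^-L) bit by bit, as in Algorithm 1."""
    # E_i := supp_i(x), the edges whose i-th bit (after the binary point) is 1
    E = {i: [e for e, xe in x.items()
             if (int(round(xe * 2 ** L)) >> (L - i)) & 1]
         for i in range(1, L + 1)}
    F = []  # F_L = empty set
    for i in range(L, 0, -1):
        # keep the first half of degree-split(F_i ⊎ E_i); weights double to 2^-(i-1)
        F, _ = degree_split(F + E[i])
    return F  # F_0, an integral matching (for bipartite supports)

# demo: a 4-cycle with all edge values 1/2 rounds to a perfect matching
x = {("a", "b"): 0.5, ("b", "c"): 0.5, ("c", "d"): 0.5, ("d", "a"): 0.5}
M = round_bipartite(x, L=1)
```

On this example ‖x‖ = 2, and the sketch returns two vertex-disjoint edges of the cycle; the dynamic algorithm of Section 3.2 maintains the same structures E_i, F_i lazily instead of rebuilding them from scratch.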
So, … This allows us to prove the following lower bound on the size of the output.
Proof. By Property (P1), we have that … Therefore, repeatedly invoking the above bound and appealing to Observation 2.2, we have that indeed, for all i ∈ {0, 1, …, L}, …

A simple proof by induction shows that if supp(x) is bipartite, then the above procedure preserves all vertices' fractional degree constraints, i.e., the vectors x^{(i)} are all fractional matchings.

Lemma 3.4. If x is a fractional bipartite matching, then x^{(i)}(v) ≤ 1 for every vertex v ∈ V and i ∈ {0, 1, …, L}.
Proof. By reverse induction on i ≤ L. The base case holds since x^{(L)} ≤ x is a fractional matching. To prove the inductive step for i − 1 assuming the inductive hypothesis …: by Property (P3), we have the following upper bound on v's fractional degree under x^{(i−1)}.
If d_{G_i}(v) is even, then we are done by the inductive hypothesis, since each x^{(i)}_e is evenly divisible by 2^{−i} and therefore the same holds for x^{(i)}(v). By the same token, d_{G_i}(v) is odd if and only if x^{(i)}(v) is not evenly divisible by 2^{−i+1}. However, since x^{(i)}(v) is evenly divisible by 2^{−i} and it is at most one, this implies that x^{(i)}(v) ≤ 1 − 2^{−i}. Combined with Equation (2), we obtain the desired inequality when d_{G_i}(v) is odd as well, since …

Now, since the vector x^{(0)} is integral, the preceding lemmas imply that if x is a bipartite fractional matching, then M is a large integral matching.
Proof. By Lemma 3.4, the (binary) vector x^{(0)} (the characteristic vector of M) is a feasible fractional matching, and so M is indeed a matching. That M ⊆ supp(x) follows since degree-split outputs a sub(multi)set of the edges of its input, whence a simple proof by induction shows that supp(x) ⊇ supp(x^{(L)}) ⊇ ··· ⊇ supp(x^{(0)}) = M. Thus, the algorithm runs in the desired time of O(mL + L) = O(mL).
Theorem 3.1 follows by combining the two preceding lemmas. We now turn to the dynamic counterpart of Algorithm 1.

A Simple Dynamic Bipartite Rounding Algorithm
In this section we dynamize the preceding warm-up static algorithm, obtaining the following result.
Theorem 3.7. Algorithm 2 is a deterministic dynamic bipartite matching rounding algorithm. Under the promise that the dynamic input vector x satisfies x_min ≥ δ throughout, its amortized update time is Õ(ε⁻¹ · L²) for L := 1 + ⌈log(ε⁻¹δ⁻¹)⌉, and its init time on vector x is …

Since δ ≥ ε/n² by Observation 2.2, Theorem 3.7 yields an Õ(ε⁻¹ · log² n) update time algorithm. Our dynamic algorithm follows the preceding static approach. For example, its initialization is precisely the static Algorithm 1 (and so the init time follows from Theorem 3.1). In particular, the algorithm considers a sequence of graphs G_i := G[E_i ⊎ F_i] and fractional matchings x^{(i)} defined by G_i and the i most significant bits of x_e, as in Equation (1). However, to allow for low (amortized) update time, we allow a small number of unprocessed changed or deleted edges for each i, denoted by c_i. When such a number c_i becomes large, we rebuild the solution defined by F_i and supp_i(x), …, supp_0(x) as in the static algorithm. Formally, our algorithm is given in Algorithm 2.
Conventions and notation. Most of our lemmas concerning Algorithm 2 hold for arbitrary non-negative vectors x ∈ ℝ^E_{≥0}, a fact that will prove useful in later sections. We state explicitly which lemmas hold if x is a fractional bipartite matching. In the analysis of Algorithm 2, we let x^{(i)} be as defined in Equation (1), but with the E_i and F_i of the dynamic algorithm. Furthermore, we prove all structural properties of Algorithm 2 for any time after init and any number of update operations, and so for brevity we avoid stating this in these lemmas' statements. Next, we use the shorthand S_i := supp_i(x), and note that, unlike in the static algorithm, due to deletions from E_i before the next rebuild(i), the containment E_i ⊆ S_i may be strict.

    // In init we assume that the algorithm knows δ, a lower bound on x_min for all nonzero x encountered after an operation
    function init(G = (V, E), x ∈ ℝ^E_{≥0}, ε ∈ (0, 1)):
        Save x and ε as global variables;
        Initialize L ← 1 + ⌈log₂(ε⁻¹δ⁻¹)⌉, c_i ← 0, and F_i ← ∅, for all i ∈ {0, 1, …, L};
        Call rebuild(L);
    function update(e ∈ E, ν ∈ [0, 1]):
        x_e ← ν; … else remove one edge adjacent to each endpoint of e from F_{i−1} (if there is one); … if … then call rebuild(i) and return;
    end

First, we prove that M is a matching if x is a bipartite fractional matching. More generally, we prove that each x^{(i)}, and in particular x^{(0)}, is a fractional matching, implying the above.
Proof. Fix vertex v, and let F_i(v) and S_i(v) be the number of edges of v in F_i and S_i, respectively, for all i ∈ {0, 1, …, L}. To upper bound x^{(i)}(v), we start by upper bounding F_i(v), as follows.
We prove the above by induction on the number of operations and by reverse induction on i ∈ {0, 1, …, L}, as follows. The base case i = L is trivial, as F_L(v) = 0 throughout and the RHS is non-negative. Next, for i < L, consider the effect on F_i(v) of an update resulting in a call to rebuild(i + 1) (e.g., after calling init), at which point E_{i+1} ← S_{i+1}.
…, where the last inequality follows from the basic fact that, for non-negative y, z with y an integer, … Next, it remains to prove the inductive step for index i and a call to update for which rebuild(i + 1) is not called: such an update only decreases the left-hand side of Equation (3), while it causes a decrease in the right-hand side (by one) only if an edge of v was updated in this call to update, in which case we delete at least one edge incident to v in F_i, if any exist, and so the left-hand side also decreases by one (or is already zero).
Above, the first inequality follows from Equation (1) and …; the second inequality follows from Equation (3). Finally, the final inequality relies on Σ_j S_j(v) · 2^{−j} …

Remark 3.9. The same proof approach, using Property (P1) of degree-split (for possibly non-bipartite graphs), implies the global upper bound …

Next, we prove the second property of a rounding algorithm, namely that M ⊆ supp(x).
Proof. We prove, by induction on the number of operations of Algorithm 2 and by reverse induction on i ∈ {0, 1, …, L}, the stronger claim that supp(x^{(i)}) ⊆ supp(x). Indeed, E_i ⊆ supp(x) when rebuild(i) is called (and in particular after init was called), and subsequently all edges e ∈ E_i that are updated (and in particular each edge whose x_e value is set to zero) are removed from E_i. Therefore, E_i ⊆ supp(x) throughout, and in particular supp(x^{(L)}) = E_L ⊆ supp(x). Similarly, by the properties of degree-split and the inductive hypothesis, we have that F_i ⊆ supp(x) after rebuild(i) is called, and each edge e updated since is subsequently deleted from F_i (as are some additional edges). Therefore, F_i ⊆ supp(x) throughout. We conclude that supp(x^{(i)}) ⊆ supp(x) for all i, as desired.
We now argue that the unprocessed edges have a negligible effect on the values of x^{(i)} compared to their counterparts obtained by running the static algorithm on the entire input x.
Proof. As in the proof of Lemma 3.3, by Property (P1), after init or any update(·, ·) triggering a call to rebuild(i) we have that …, and so ‖x^{(i−1)}‖ ≥ ‖x^{(i)}‖. On the other hand, between calls to rebuild(i) there are at most … In contrast, by Equation (1), any changes in E_j for j ≠ i have no effect on ‖x^{(i−1)}‖ − ‖x^{(i)}‖. On the other hand, until the next rebuild(i) is triggered, E_i and F_i can only decrease (contributing to an increase in ‖x^{(i−1)}‖ − ‖x^{(i)}‖); E_i can only decrease since edges are only added to E_i when rebuild(i) is called, and F_i only decreases until rebuild(i + 1) is called, which triggers a call to rebuild(i). Therefore, ‖x^{(i−1)}‖ − ‖x^{(i)}‖ decreases by at most ε‖x‖/L during updates until the next call to rebuild(i), and so after init and after every update of Algorithm 2, we have that … Invoking the above inequality L times, and using that ‖x^{(L)}‖ ≥ (1 − ε) · ‖x‖ by Observation 2.2, we obtain the desired inequality.
Remark 3.12. The latter is nearly tight, as …

Proof (sketch). The proof follows that of Lemma 3.11, with the following changes. By Property (P1), after init or any update(·, ·) triggering a call to rebuild(i), we have the upper bound …, and so ‖x^{(i−1)}‖ ≤ ‖x^{(i)}‖ + 2^{−i+1}. On the other hand, the increase in … (similarly to the decrease in …) is at most the same. The proof then concludes similarly to that of Lemma 3.11, also using that ‖x^{(L)}‖ ≤ ‖x‖.
Finally, we turn to analyzing the algorithm's update time.
Lemma 3.13. The (amortized) time per update of Algorithm 2 is …

Proof. … Summing over all i ∈ {0, 1, …, L}, we find that indeed, the amortized time per update operation, which is O(L) (due to deleting O(1) edges from each E_i and F_i for each i) plus its contribution to periodic calls to rebuild, is …

We are finally ready to prove Theorem 3.7.
Proof of Theorem 3.7. Algorithm 2 is a dynamic rounding algorithm for bipartite fractional matchings, since M is a matching contained in supp(x) …

To (nearly) conclude: this section provides a simple bipartite rounding algorithm with near-optimal ε-dependence. In the following section, we show how partially rounding the fractional matching allows us to dynamically guarantee that x_min be sub-polynomial in ε/n, thus allowing us to decrease L and obtain speedups (improved n-dependence) when combined with Algorithm 2.
Algorithm 2 in general graphs. Before continuing to the next section, we mention that the alluded-to notion of partial rounding will also be useful when rounding (well-structured) fractional matchings in general graphs (see Section 6). With this in mind, we provide the following lemma, which is useful for analyzing Algorithm 2 when rounding general-graph matchings.
Proof. First, we verify that the inequality holds (with some extra slack) right after rebuild(i) (and in particular right after init). Indeed, by Property (P2) of degree-split, during the invocation of which …, and therefore, by Equation (1), for each vertex v we have after rebuild(i) that … On the other hand, before an update with current input x, there are at most L many edges being added to or deleted from E_i ∪ F_i. Therefore, during the updates between calls to rebuild(i), the total variation distance between x^{(i)} and x^{(i+1)} changes by at most ε‖x‖/L, and so after init and after any update, … Now, using the basic fact that (a + b)⁺ ≤ a⁺ + b⁺ for all real a, b, summing the above difference over all i, and using that x = x^{(L)} by Observation 2.2, we obtain the desired inequality, as follows. …
Partial Rounding: a Path to Speedups

So far, we have provided a rounding algorithm with near-optimal dependence on ε (by Fact 2.3) and polylogarithmic dependence on x_min⁻¹ = poly(ε⁻¹ n) of the fractional matching x. To speed up our algorithm, we thus wish to dynamically maintain a "coarser" fractional matching x′ (i.e., one with larger x′_min than x_min) that approximately preserves the value of x. The following definition captures the notion of coarser fractional matchings that we will use.

Definition 4.1.
… x′_e = x_e otherwise. The coarsening x′ is bounded if it also satisfies the following property: …

We briefly motivate the above definition. As we shall see, Properties (C1) and (C2) imply that x′ (after mild post-processing) is a (1 − ε)-approximation of x, and so rounding x′ … The less immediately intuitive Property (C2) will also prove useful when rounding in general graphs, in Section 6. For now, we will use this property when combining coarsenings of disjoint parts of the support of x. Property (C3) then allows us to round such a coarsening x′ efficiently, with only a polylogarithmic dependence on δ⁻¹, using Algorithm 2 (by Theorem 3.7). Finally, Property (C4) guarantees that x′/(1 + ε) is a fractional matching.
A key ingredient for subsequent sections is thus a dynamic coarsening algorithm, as follows.
Definition 4.2. A dynamic (ε, δ)-coarsening algorithm is a data structure supporting the following operations:
• init(G = (V, E), x ∈ ℝ^E_{≥0}): initializes the data structure for undirected graph G with vertices V and edges E, and current vector x.
• update(e ∈ E, ν ∈ [0, 1]): …
The algorithm must maintain an (ε, δ)-coarsening x ′ of (the current) x.
As we show in Section 6, the internal state of Algorithm 2 yields a dynamic coarsening algorithm. In this section, we state bounds for a number of dynamic coarsening algorithms (analyzed in Section 5), with the objective of using their output as the input to Algorithm 2, from which we obtain faster dynamic bipartite rounding algorithms than when using the latter algorithm in isolation. The following lemma, proved in Section 4.2, captures the benefit of this approach.
Lemma 4.3 (From coarsening to rounding). Let C be a dynamic (ε, δ)-coarsening algorithm with update time t_C^U … Let R be a dynamic rounding algorithm for fractional matchings x with x_min ≥ δ, with update time t_R^U … Then …

In our invocations of Lemma 4.3, we will use Algorithm 2 to play the role of algorithm R. In Section 5, we provide a number of coarsening algorithms, whose properties we state in this section, together with the obtained rounding algorithms' guarantees.
A number of our coarsening algorithms will make use of subroutines for splitting (most of) the fractional matching's support into numerous disjoint coarsenings, as in the following.

Definition 4.4. An (ε, δ)-split of fractional matching z ∈ ℝ^E_{≥0} with z_max ≤ δ consists of (ε, δ)-coarsenings z^{(1)}, …, z^{(k)} with disjoint supports, together covering at least half of supp(z), i.e., …

The following lemma, combined with Lemma 4.3, motivates our interest in such splits.
Lemma 4.5. … Then there exists a dynamic algorithm C which, for any (possibly non-uniform) fractional matching, …

Section outline. We prove Lemmas 4.3 and 4.5 in Sections 4.2 and 4.3, respectively. Before that, we state the bounds of a number of such partial rounding algorithms (presented and analyzed in Section 5), together with the rounding algorithms we obtain from these, yielding Theorem 1.2.

Partial Rounding Algorithms, with Applications
Here we state the properties of our coarsening and splitting algorithms presented in Section 5, together with their implications for dynamic rounding, as stated in Theorem 1.2. In Section 5.1, we provide a deterministic static split algorithm, as stated in the following lemma.
Lemma 4.6. For any ε > 0, there exists a deterministic static (4ε, ε)-split algorithm which, on input uniform fractional matchings, …

Combining the above lemma with Lemmas 4.3 and 4.5 yields the first result of Theorem 1.2.
Corollary 4.7. There exists a deterministic dynamic bipartite rounding algorithm with update time …

Proof. By Lemma 4.6, there exists a deterministic static (4ε/(3 log n), ε/(3 log n))-split algorithm that on uniform fractional matching … Moreover, by Theorem 3.7, there exists a deterministic dynamic bipartite matching rounding algorithm R for fractional bipartite matchings x with x_min = Ω(poly(ε/ log n)) … Plugging these algorithms into Lemmas 4.5 and 4.3, we obtain a deterministic algorithm with update time … The last equality holds for all ranges of n and ε, whether …

Next, in Section 5.2, we provide a simple linear-time subsampling-based randomized split algorithm with the following properties.
Lemma 4.8. For any ε > 0, there exists a static randomized algorithm that on uniform fractional matchings x computes an (ε, ε⁴/(24 log² n))-split in O(|supp(x)|) time, and succeeds w.h.p.

Combining the above lemma with Lemmas 4.3 and 4.5 yields the w.h.p. result of Theorem 1.2.
Corollary 4.9. There exists an adaptive dynamic bipartite rounding algorithm that succeeds w.h.p., with update time

Proof. By Lemma 4.8, there exists a static randomized split algorithm as stated above. On the other hand, by Theorem 3.7, there exists a deterministic dynamic bipartite matching rounding algorithm R for fractional matchings x with x_min = Ω(poly(ε)). Plugging these algorithms into Lemmas 4.5 and 4.3, we obtain a randomized adaptive algorithm which works with high probability and has the claimed update time.

Finally, in Section 5.3, building on an output-adaptive dynamic set sampling algorithm which we provide in Appendix B, we give an output-adaptive coarsening algorithm with constant (and in particular independent of n) expected amortized update time. Combining this coarsening algorithm with Lemma 4.3 yields the third result of Theorem 1.2.
Corollary 4.11. There exists an output-adaptive dynamic bipartite rounding algorithm with expected update time

Proof. By Lemma 4.10, there exists a dynamic (ε, O(ε³))-coarsening algorithm C with expected update time t^C_U = O(1) and init time O(|supp(x)|). On the other hand, by Theorem 3.7, there exists a deterministic (hence output-adaptive) dynamic bipartite matching rounding algorithm R for fractional matchings x with x_min = Ω(ε³). Plugging these algorithms into Lemma 4.3, we obtain an output-adaptive algorithm with the claimed expected update time.

Proof of Lemma 4.3: Reducing Rounding to Coarsening
The following lemma allows us to efficiently convert coarsenings to bounded coarsenings.

Lemma 4.12. There exists a deterministic algorithm which, given an (ε, δ)-coarsening x′ of x, computes a bounded (3(ε + δ), δ)-coarsening x′′ of x in O(|supp(x′)|) time.

Proof. The algorithm: Initialize x′′ ← x′. For any vertex v ∈ V such that x′′(v) > x(v) + ε + 2δ, remove arbitrary edges e incident on v in supp(x′′) with x_e ≤ δ, until x′′(v) ≤ x(v) + ε + 2δ.
Note that edges e with x_e ≥ δ have x′_e = x_e, and so the above process must terminate: once all edges of weight at most δ incident on v are removed, we have x′′(v) ≤ x(v) for every vertex v ∈ V. Finally, return x′′.

Running time: Each edge of supp(x′′) has its value decreased at most once, hence the running time of the algorithm is at most O(|supp(x′)|).
Correctness: By construction and by Property (C0) of the coarsening x′ of x, we have supp(x′′) ⊆ supp(x′) ⊆ supp(x), and so x′′ satisfies Property (C0) of a coarsening of x. Next, since x′ is an (ε, δ)-coarsening of x and x′′ agrees with x′ on supp(x′′), we find that x′′ also satisfies Property (C3). Moreover, by definition, the algorithm ensures that x′′(v) ≤ x(v) + ε + 2δ for every vertex v ∈ V, and so x′′ satisfies Property (C4) of bounded (3(ε + δ), δ)-coarsenings of x. To prove the remaining properties we leverage some minor calculations, to which we now turn.
Consider some vertex v that had one of its edges in supp(x′′) deleted. Since before the deletion we must have had x′′(v) > x(v) + ε + 2δ, and the deleted edge had weight at most δ, after the deletion we must still have x′′(v) ≥ x(v) + ε. This implies that the total weight of deleted edges is bounded. Therefore, by Observation 2.1 and the above, we find that x′′ also satisfies Property (C2) of (3(ε + δ), δ)-coarsenings of x.
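The trimming procedure of Lemma 4.12 can be sketched as follows. This is a minimal illustration under our own representation (edges as dicts mapping vertex pairs to weights), not the paper's implementation; the key point is that only edges of original weight at most δ are removed, and only at overloaded vertices.

```python
from collections import defaultdict

def trim_to_bounded(x, x_prime, eps, delta):
    """Sketch of Lemma 4.12's procedure: given a coarsening x_prime of x
    (both dicts mapping edges (u, v) to weights), repeatedly drop edges
    of original weight x_e <= delta at overloaded vertices, until every
    vertex v satisfies x''(v) <= x(v) + eps + 2*delta."""
    x2 = dict(x_prime)

    def loads(weights):
        w = defaultdict(float)
        for (u, v), val in weights.items():
            w[u] += val
            w[v] += val
        return w

    load_x, load_x2 = loads(x), loads(x2)
    for v in list(load_x2):
        # removals only decrease loads, so a single pass over vertices suffices
        while load_x2[v] > load_x[v] + eps + 2 * delta:
            light = next((e for e in x2 if v in e and x.get(e, 0) <= delta), None)
            if light is None:  # no light edge left at v; ruled out by the lemma
                break
            val = x2.pop(light)
            for u in light:
                load_x2[u] -= val
    return x2
```

Note that since removals only decrease vertex loads, a vertex that satisfies its bound never violates it again, matching the one-pass running-time argument above.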
With Lemma 4.12 established, we are now ready to prove Lemma 4.3.
Proof of Lemma 4.3. We will describe the algorithm R*. The input dynamic fractional matching of R* is denoted by x. Recall that x_{≤2δ} and x_{>2δ} refer to x restricted to edges of weight at most 2δ and greater than 2δ, respectively. Algorithm R* uses C to maintain, at all times, a fractional matching x_{A*}, an (ε, 2δ)-coarsening of x. Define the scaling constant α = (1 + 3(ε + 2δ))^{−1}, and assume α ≥ 1/2.
In addition, R* performs the following operations.

Initialization: R* uses the algorithm of Lemma 4.12 to obtain a bounded (3(ε + 2δ), 2δ)-coarsening x_N of x, and sets x_Small ← x_N restricted to edges of supp(x_{≤2δ}) and x_Large ← x_N restricted to edges of supp(x_{>2δ}). The algorithm initializes its output x′ by providing x̂ = (x_Small + x_Large) · α as input to R. We will maintain throughout the algorithm that x̂ = (x_Small + x_Large) · α and that x′ is the output of R when given input fractional matching x̂. The algorithm furthermore sets a counter C ← 0. Let x̂_0 stand for the state of x̂ at initialization.
Handling an edge update: For the sake of simplicity, we assume every update either changes the weight of some edge e = (u, v) from 0 to some positive value or vice versa, thus inserting or deleting an edge to/from supp(x). If x_e > 2δ, algorithm R* removes e (if present) from supp(x_Large) or adds e to supp(x_Large) with weight x_e, and then updates x′ using R. Note that by assumption α ≥ 1/2, hence if e was inserted into x then the input x̂ = (x_Small + x_Large) · α fed to R undergoes an edge update of weight at least 2δ · α ≥ δ, and hence R can handle this update correctly.
If x_e ≤ 2δ, the algorithm removes one arbitrary edge incident on u and one incident on v from supp(x_Small) (if any are present) and propagates the changes this makes to R's input and to x′ using R. Furthermore, if e was deleted from supp(x), then it removes e from supp(x_Small) and updates x′ with R. If e was inserted, then its effect (apart from the updates to its endpoints) is ignored. Either way, the algorithm sets C ← C + 12 · δ and re-initializes the data structure if C > ‖x̂_0‖ · ε.
Running time: First note that at initialization the algorithm needs to initialize R on its input x̂. Observe that unless the algorithm re-initializes after an update, it only uses R to handle O(1) updates on the support of x̂, which takes O(t^R_U) update time. The algorithm has to run C on the dynamic fractional matching x at all times, which adds an update time of t^C_U. Furthermore, the algorithm sometimes re-initializes.
Consider the cost of initialization at time 0, which took O(|supp(x̂_0)| · t^R_U) time. As x̂ has edge weights at least 2δ · α ≥ δ, we know that |supp(x̂_0)| = O(‖x̂_0‖/δ). By the next initialization, C > ‖x̂_0‖ · ε, hence at least Ω(‖x̂_0‖ · ε/δ) updates must have occurred. By Claim 4.13 we have that ‖x̂‖ = Θ(‖x‖). Amortizing the cost of the initialization over these updates yields an additional expected update time cost of O(t^R_U · ε^{−1}), and the total expected amortized update time of the algorithm follows.

Adaptivity: Note that other than the inner operations of C and R, the algorithm's actions are deterministic, and hence it is deterministic/adaptive/output-adaptive if both C and R are.

Proof (of Claim 4.13). We first argue that x̂ remains a valid fractional matching throughout the sequence of updates. By definition, throughout the run of the algorithm, x̂ = (x_Small + x_Large) · α. Fix some vertex v ∈ V. As x_Large(v) = x_{>2δ}(v) and x is a valid fractional matching, it suffices to argue that x_Small(v) ≤ x_{≤2δ}(v) + 3 · (ε + 2δ) at all times. Note that this holds at initialization due to Property (C4) of bounded coarsenings. Suppose an edge update occurs to x_{≤2δ} at edge (u, v). As all edges of x_{≤2δ} have weight at most 2δ, this update may decrease x_{≤2δ}(u) and x_{≤2δ}(v) by at most 2δ. The algorithm compensates for this by deleting an edge incident on u and one on v, if any are present in supp(x_Small) (all of which have weight at least 2δ in x_Small). Hence, after these deletions, x_Small(u) ≤ x_{≤2δ}(u) + 3 · (ε + 2δ) and x_Small(v) ≤ x_{≤2δ}(v) + 3 · (ε + 2δ), and so x̂ remains a valid fractional matching.
It remains to argue that ‖x̂‖ ≥ ‖x‖ · (1 − O(ε + δ)). Observe that at initialization the inequality holds, as x_Small is a (3(ε + 2δ), 2δ)-coarsening of x_{≤2δ} and ‖x_{>2δ}‖ ≤ ‖x_Large‖/α. As edges of x_Small have weight in [2δ, 4δ], by definition we must also have (‖x_Small‖ + C) · α ≥ ‖x_{≤2δ}‖ throughout the run of the algorithm, as C is increased by 12 · δ whenever an edge update occurs to x. Since we re-initialize whenever C > ‖x̂_0‖ · ε, we must have ‖x̂‖ ≥ ‖x‖ · (1 − O(ε + δ)) at all times. This concludes the proof of Lemma 4.3.

Proof of Lemma 4.5: Reducing Dynamic Coarsening to Static Splitting
We start with a lemma concerning the "stability" of coarsenings under (few) updates.
Lemma 4.14. Let x^(0) and x^(t) be two fractional matchings with maximum edge weight δ which differ on at most |supp(x^(0))| · γ edges.

Proof. Property (C0) of coarsenings follows trivially, as x^(t)′ is restricted to supp(x^(0)) by definition. As x′ is a (γ, δ)-coarsening of x^(0), we are guaranteed by Property (C3) that edges of x′ take weight in [δ, 2δ), implying Property (C3).
To conclude Property (C1) of coarsenings, consider the following chain of inequalities. Inequality 4 follows from the definitions of the distance d^ε_V and of norms. Inequality 5 follows from the observation that ‖x^(0)‖ ≤ ‖x^(t)‖ · (1 + 2ε).
Property (C2) follows from a similar set of inequalities.

Corollary 4.15. The statement of Lemma 4.14 holds assuming x^(0) and x^(t) are uniform fractional matchings (of the same uniform value) which differ on at most |supp(x^(0))| · γ edges, while x^(0)′ and x^(t)′ differ on at most |supp(x^(t)′)| · γ edges. Furthermore, x^(t)′ satisfies slightly stronger slack properties.

To observe Corollary 4.15, note that all the inequalities in the proof of Lemma 4.14 hold in this slightly modified setting. In Lemma 4.16, we show that if the input fractional matching x is uniform, then we can maintain a coarsening of x efficiently as it undergoes updates. Afterwards, we show how to extend the argument to general fractional matchings.

Lemma 4.16. Let S be a static (γ, δ)-split algorithm that on uniform fractional matching x takes time O(|supp(x)| · t_s), for t_s = t_s(n, γ, δ). Then there exists a dynamic (ε + γ, δ)-coarsening algorithm C_U for uniform fractional matchings, whose output (ε + γ, δ)-coarsening x′ of x satisfies slightly stronger slack properties.

Proof. The algorithm C_U works as follows.
Initialization: C_U calls S to compute a (γ/4, δ)-split x_1, x_2, ... of x. Next, C_U sets its output x′ to be x_1. Let x^(0) and x^(0)_i respectively denote the states of x and x_i at initialization. Note that this implies an init time of O(|supp(x)| · t_s).
Handling an update: If an edge e gets deleted from supp(x), then C_U removes e from supp(x′) as well (provided e ∈ supp(x′)). Once |supp(x^(0))| · ε/64 updates have occurred to x since the last initialization, C_U re-initializes. Further, once more than |supp(x′)| · ε/32 edges have been deleted from supp(x′), C_U discards x′ from memory and switches its output to another coarsening of the split, one in which at most an ε/8 fraction of the support has been deleted so far. The effects of insertions are ignored, except that they contribute to the counter timing the next re-initialization.
Update time: Observe that the algorithm re-initializes every Ω(ε · |supp(x^(0))|) updates. Hence, the re-initializations have an amortized update time of O(t_s · ε^{−1}). If at some point in time, for some coarsening x_i, at least |supp(x^(0)_i)| · ε/32 edges have been deleted from supp(x^(0)_i), then x_i can be discarded from memory, as it will never again enter the output. Hence, C_U adds/removes each edge of the initially computed split exactly once to/from the output. Accordingly, the total work of C_U between two re-initializations is proportional to O(|∪_i supp(x^(0)_i)|), which is at most O(|supp(x^(0))|) by definition. We similarly amortize this cost over Ω(ε · |supp(x^(0))|) updates, which leads to an update time of O(ε^{−1}).
Observe that at all times x′ is a (γ/4, δ)-coarsening of x's state at the last re-initialization, restricted to edges which have not been deleted since. This implies both Properties (C0) and (C3).
Correctness: Let x^(t) denote the state of x at some time t after initialization (but before the next re-initialization). Then x^(t) and x^(0) may differ on at most |supp(x^(0))| · ε/64 edges. Suppose that the coarsening x′ is the output of the algorithm at time t, and let x^(0)′ and x^(t)′ respectively denote the states of the coarsening x′ at initialization and at time t. Since the algorithm would have discarded x′ from memory if more than |supp(x^(0)′)| · ε/32 edges of supp(x′) had been deleted, we know that x^(0)′ and x^(t)′ differ on at most |supp(x^(0)′)| · ε/32 edges. The correctness of the algorithm's output then follows from Corollary 4.15.
It now remains to argue that C_U always maintains an output, i.e., that the algorithm does not discard all coarsenings present in the initial split before re-initializing. By definition of splits, we know that |∪_i supp(x^(0)_i)| ≥ |supp(x^(0))|/2 and that the supports of {x^(0)_i} are disjoint. The algorithm re-initializes after |supp(x^(0))| · ε/64 updates. Hence, less than an ε/32 fraction of ∪_i supp(x^(0)_i) gets deleted between successive re-initializations. Thus, by the pigeonhole principle, there is always a coarsening in the split with less than an ε/32 fraction of deleted edges, and so C_U maintains a correct output throughout.
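The bookkeeping of C_U can be sketched as follows, tracking edge supports only. This is a hedged illustration: the class name `SplitCoarsener` and the placeholder `split_fn` (standing in for the static split algorithm S) are our own, and weights are omitted for brevity.

```python
class SplitCoarsener:
    """Sketch of C_U (Lemma 4.16): statically split supp(x) into disjoint
    parts, output one part, switch parts once the current one loses more
    than an eps/32 fraction of its initial support, and re-initialize
    after eps/64 * |supp(x)| updates."""

    def __init__(self, support, eps, split_fn):
        self.eps = eps
        self.split_fn = split_fn
        self._reinit(support)

    def _reinit(self, support):
        self.support = set(support)
        self.parts = [set(p) for p in self.split_fn(self.support)]
        self.init_sizes = [max(1, len(p)) for p in self.parts]
        self.deleted = [0] * len(self.parts)
        self.cur = 0
        self.updates = 0
        self.budget = len(self.support) * self.eps / 64

    def output(self):
        return self.parts[self.cur]

    def update(self, edge, inserted):
        self.updates += 1
        if inserted:
            self.support.add(edge)  # insertions only advance the re-init timer
        else:
            self.support.discard(edge)
            for i, part in enumerate(self.parts):
                if edge in part:
                    part.discard(edge)
                    self.deleted[i] += 1
        if self.updates > self.budget:
            self._reinit(self.support)
        elif self.deleted[self.cur] > self.init_sizes[self.cur] * self.eps / 32:
            # switch to the least-damaged surviving part (pigeonhole: one exists)
            self.cur = min(range(len(self.parts)),
                           key=lambda i: self.deleted[i] / self.init_sizes[i])
```

Since each part's edges enter and leave the output at most once between re-initializations, the amortized cost matches the O(ε^{−1}) bound argued above.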
Proof of Lemma 4.5: We now show how to extend Lemma 4.16 to non-uniform fractional matchings, and thus prove Lemma 4.5. Assume that x is a fractional matching. Let φ = γ/n². For the sake of convenience, assume that δ = φ · (1 + ε)^L for some integer L ≤ 2(log n + log(γ^{−1})) · ε^{−1}. Define the fractional matching x̃_i to be a uniform fractional matching of weight φ · (1 + ε)^{i−1} whose support consists of the edges e with x_e ∈ [φ · (1 + ε)^{i−1}, φ · (1 + ε)^i), and define x̃ = Σ_{i∈[L]} x̃_i. First note that, as the graph has at most n² edges, ‖x_{<φ}‖ ≤ γ. Furthermore, for any edge e ∈ supp(x_{≥φ}) we must have x̃_e ≤ x_e ≤ x̃_e · (1 + ε).

Algorithm C will use algorithm C_U from Lemma 4.16 to maintain (ε + γ, δ)-coarsenings x′_i of the fractional matchings x̃_i in parallel. The output of C will be x′ = Σ_{i∈[L]} x′_i + x_{>δ}. To conclude the amortized update time bound, observe that an edge update to G may only affect a single fractional matching x̃_i. Note that at all times supp(x′_i) is a subset of the support of one of the initially calculated coarsenings restricted to non-deleted edges, hence supp(x′) ⊆ supp(x). Note also that x′ and x agree on edges e with x_e > δ. We now turn to proving the remaining properties of a coarsening.

Property (C1), global slack: This follows by applying Lemma 4.16 to each x̃_i.

Property (C2), vertex slack: d⁰_V(x, x̃) ≤ 2γ + 2ε · ‖x‖, as the two matchings differ only on the edges of x_{<φ}, or by a multiplicative factor of at most (1 + ε).

Coarsening and Splitting Algorithms
So far, we have provided Lemmas 4.3 and 4.5 which give a reduction from (faster) dynamic rounding to dynamic coarsening, and from dynamic coarsening to static splitting.We further stated a number of such dynamic coarsening and static splitting algorithms, as well as their corollaries for faster rounding algorithms.In this section we substantiate and analyze these stated coarsening and splitting algorithms.

Deterministic Static Splitting
In this section we prove Lemma 4.6, restated below for ease of reference.
Lemma 4.6. For any ε > 0, there exists a deterministic static (4ε, ε)-split algorithm for uniform fractional matchings x.

Proof. Assume x is λ-uniform. First note that if λ > ε, then we may return {x} as the split trivially. At the other extreme, if λ ≤ ε² · n^{−2}, then we may return a split consisting of |supp(x)| many ε-uniform fractional matchings, each supported on a single edge of supp(x) (the properties of a coarsening follow trivially). Hence, we may assume that ε² · n^{−2} < λ ≤ ε.

The algorithm: Our algorithm inductively constructs sets of vectors F_i for i ∈ {0, 1, ..., L}. As our base case, we let F_0 := {x}. Next, for any vector x′ ∈ F_{i−1}, using degree-split (see Proposition 2.4) on supp(x′), we compute two edge sets E¹_{x′}, E²_{x′}, and add to F_i a (λ · 2^i)-uniform vector on each of these two edge sets. The algorithm outputs F_L.
Running time: By Proposition 2.4, each edge in supp(x) belongs to the support of exactly one vector in each F_i, and so each level is computed in time linear in |supp(x)|.

Correctness: It remains to show that F_L is a (4ε, ε)-split of x. First, as noted above, each edge in supp(x) belongs to the support of precisely one vector in F_L, and so these vectors individually satisfy Property (C0) of coarsenings, and together they satisfy the covering property of splits. We now show that every vector x_L ∈ F_L satisfies the remaining properties of a (4ε, ε)-coarsening of x. For i ∈ {1, ..., L − 1}, inductively define x_{i−1} to be the fractional matching in F_{i−1} that was split using degree-split to generate x_i (hence x_0 = x). By Property (P1) of Proposition 2.4, and as x_i and x_{i−1} are respectively (λ · 2^i)- and (λ · 2^{i−1})-uniform, Property (C1) follows from the triangle inequality.
Similarly, this time by Property (P2) of Proposition 2.4, we have that d^ε_V(x, x_L) = 0, and so x_L satisfies Property (C2) of a (4ε, ε)-coarsening of x.
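One plausible linear-time implementation of the degree-split primitive invoked above (a hedged sketch, not Proposition 2.4's guaranteed algorithm): pair up odd-degree vertices with dummy edges, walk each component's Euler circuit, and 2-color edges alternately along it, so every vertex's degree is split nearly evenly between the two sides.

```python
from collections import defaultdict

def degree_split(edges):
    """Hedged sketch of a degree-split (cf. Proposition 2.4): partition the
    edge list into (E1, E2) so that each vertex's degree is divided nearly
    evenly (within a small additive constant). Dummy edges pair up
    odd-degree vertices, each component's Euler circuit is extracted
    (iterative Hierholzer), and edges are 2-colored alternately along it."""
    edges = list(edges)
    n_real = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = [v for v in deg if deg[v] % 2 == 1]
    for i in range(0, len(odd), 2):  # even count, by the handshake lemma
        edges.append((odd[i], odd[i + 1]))
    adj = defaultdict(list)  # vertex -> incident edge indices
    for idx, (u, v) in enumerate(edges):
        adj[u].append(idx)
        adj[v].append(idx)
    used = [False] * len(edges)
    E1, E2 = [], []
    for start in list(adj):
        stack, circuit = [(start, None)], []
        while stack:  # iterative Hierholzer
            v, via = stack[-1]
            while adj[v] and used[adj[v][-1]]:
                adj[v].pop()
            if adj[v]:
                idx = adj[v].pop()
                used[idx] = True
                u, w = edges[idx]
                stack.append((w if v == u else u, idx))
            else:
                stack.pop()
                if via is not None:
                    circuit.append(via)
        for pos, idx in enumerate(circuit):
            if idx < n_real:  # drop dummy edges
                (E1 if pos % 2 == 0 else E2).append(edges[idx])
    return E1, E2
```

Interior occurrences of a vertex on a circuit contribute consecutive, oppositely colored edge pairs, so imbalance arises only at circuit starts and at dropped dummy edges.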

Randomized Static Splitting
We now turn to proving Lemma 4.8, restated below for ease of reference.
Proof. Assume x is λ-uniform. First note that if λ > ε⁴/(24 log² n), then we may return {x} as the split trivially. Note that if λ ≤ ε/n², then ‖x‖ ≤ ε. In this case, it is sufficient to return a split consisting of |supp(x)| coarsenings, where each coarsening is a single edge of supp(x) with weight ε⁴/log² n. The same split can also be returned if ‖x‖ ≤ ε²/log n. Thus, from now on we assume that ε⁴/(24 log² n) ≥ λ ≥ ε/n² and ‖x‖ ≥ ε²/log n. We start by defining the following two parameters: δ := ε⁴/(24 log² n) and k := 2^⌈log₂(δ/λ)⌉.
The algorithm: We (implicitly) initialize k zero vectors x^(1), ..., x^(k). Next, for each edge e ∈ supp(x), we roll a k-sided die i ∼ Uni([k]), and set x^(i)_e ← δ′ := λ · k.

Correctness: By construction, {supp(x^(i))}_{i=1}^k is a partition of supp(x), and so each of these vectors satisfies Property (C0). Since δ′ = λ · k ∈ [δ, 2δ) by our choice of k, each of the vectors x^(i) also satisfies Property (C3) of an (ε, δ)-coarsening. It remains to prove the two other properties of a coarsening.
For each edge e ∈ supp(x) and i ∈ [k], we have E[x^(i)_e] = x_e. Thus, by linearity of expectation, we get E[‖x^(i)‖] = ‖x‖. Since ‖x‖ ≥ ε²/log n, by a standard Chernoff bound and our choices of δ = ε⁴/(24 log² n) and δ′ ≤ 2δ, we have that Property (C1) holds w.h.p.
Similarly, E[x^(i)(v)] = x(v) for each vertex v, and so again a Chernoff bound implies concentration of the vertex weights. Therefore, by taking a union bound over the (n + 2) bad events for each of the k ≤ 2δ/λ ≤ (2ε⁴/(24 log² n)) · (n²/ε) ≤ 2n² vectors (i.e., O(n³) bad events), we have that d^ε_V(x, x^(i)) = 0. Therefore Property (C2) also holds w.h.p., and so by a union bound {x^(i)} satisfies all the properties of an (ε, δ)-split w.h.p.
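The die-roll subsampling above can be sketched as follows. This is an illustrative sketch under our own conventions (in particular, we take log to be log₂), not the paper's implementation; `support`, `lam`, `eps`, `n` play the roles of supp(x), λ, ε, n in Lemma 4.8.

```python
import math
import random
from collections import defaultdict

def random_split(support, lam, eps, n):
    """Sketch of the randomized split (Lemma 4.8): for a lam-uniform
    fractional matching on edge set `support`, assign each edge to one of
    k parts uniformly at random; each part becomes a (lam * k)-uniform
    vector, with lam * k in [delta, 2 * delta)."""
    delta = eps ** 4 / (24 * math.log2(n) ** 2)
    assert lam <= delta, "the trivial case lam > delta is handled separately"
    k = 2 ** math.ceil(math.log2(delta / lam))
    parts = defaultdict(list)
    for e in support:
        parts[random.randrange(k)].append(e)  # roll a k-sided die
    weight = lam * k
    return [(weight, parts[i]) for i in range(k)], delta
```

Each edge keeps its expected weight (probability 1/k of receiving weight λ·k), which is exactly the unbiasedness used in the Chernoff arguments above.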

Output-Adaptive Dynamic Coarsening
In this section we prove Lemma 4.10, using the following kind of set sampling data structure.
Definition 5.1. A dynamic set sampler is a data structure supporting the following operations: • init(n, p ∈ [0, 1]^n): initialize the data structure for the n-element set S and probability vector p.
• sample(): return T ⊆ S containing each i ∈ S independently with probability p_i.
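A naive stand-in matching this interface can be sketched as follows (the efficient structure of Theorem 5.2 is given in Appendix B; the `update` method here is an assumed auxiliary operation, not quoted from the definition):

```python
import random

class NaiveSetSampler:
    """Naive set sampler matching Definition 5.1's interface: sample()
    returns a subset T of S = {0, ..., n-1} containing each i
    independently with probability p[i]. Every call costs O(n); the data
    structure of Theorem 5.2 achieves far stronger bounds."""

    def __init__(self, n, p):
        assert len(p) == n and all(0 <= pi <= 1 for pi in p)
        self.p = list(p)

    def update(self, i, p_i):  # assumed auxiliary operation
        self.p[i] = p_i

    def sample(self):
        return {i for i, pi in enumerate(self.p) if random.random() < pi}
```

The point of Theorem 5.2 is precisely to replace the O(n) scan per sample() call with time roughly proportional to the output size.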
Theorem 5.2 shows that there exists a dynamic output-adaptive set sampler with optimal properties. Concurrently with our work, another algorithm with similar guarantees was given by [YWW23]. As that work did not address the adaptivity of their algorithm, we present our (somewhat simpler) algorithm and its analysis in Appendix B.

Proof (of Lemma 4.10). Initialization: The algorithm maintains a set sampler as in Definition 5.1, using the algorithm of Theorem 5.2 over n² elements. Each element with non-zero probability corresponds to an edge e ∈ supp(x_{≤ε³}), and receives probability x_e · ε^{−3} within the sampler. The algorithm initializes a counter C ← 0. Afterwards, the algorithm draws a sample E from its set sampler and defines its output x′ to have support E ∪ supp(x_{>ε³}), with weight ε³ on edges of E and weight x_e on edges e ∈ supp(x_{>ε³}). The algorithm repeats this process until x′_{≤ε³} is a (100ε, ε³)-coarsening of x_{≤ε³}. Define x^(0) to be the state of x at the last initialization.
Handling updates: If an update occurs to x_{>ε³}, the algorithm simply updates x′ accordingly. If an update occurs to an edge e ∈ supp(x_{≤ε³}), then the algorithm first updates e's weight within the set sampler appropriately. If e was deleted from supp(x) (equivalently, x_e was set to 0), it is deleted from supp(x′). If e was inserted into supp(x) (equivalently, x_e was set to some positive value from 0), it is ignored. The algorithm then increases the counter C to C + ε³. If C reaches ‖x^(0)‖ · ε, the algorithm re-initializes x′ (by repeatedly sampling E from its sampler until x′_{≤ε³} is a (100ε, ε³)-coarsening of x_{≤ε³}) and resets C to 0.
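The counter-driven re-initialization above can be sketched as follows; a schematic of the bookkeeping only, where the placeholder `resample` stands in for drawing E from the set sampler and verifying the coarsening property (returning None on failure).

```python
class LazyCoarsener:
    """Sketch of Lemma 4.10's control flow for the small edges: each
    update to the small part advances a counter by eps**3, and once the
    counter reaches eps times the small weight at the last
    initialization, the output is rebuilt by resampling."""

    def __init__(self, small_weight, eps, resample):
        self.eps = eps
        self.resample = resample
        self._reinit(small_weight)

    def _reinit(self, small_weight):
        self.counter = 0.0
        self.base = small_weight
        self.output = None
        while self.output is None:  # O(1) attempts in expectation
            self.output = self.resample()

    def on_small_update(self, small_weight):
        self.counter += self.eps ** 3
        if self.counter >= self.base * self.eps:
            self._reinit(small_weight)
```

Since the counter advances by ε³ per update and the threshold is ‖x^(0)‖ · ε, re-initializations are Ω(‖x^(0)‖ · ε^{−2}) updates apart, matching the amortization below.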
Update time: Observe that apart from the cost of re-initialization steps, the algorithm takes O(1) time to update its output. During a re-initialization, the algorithm has to repeatedly draw edge sets from its sampler. Note that x′_{>ε³} remains unaffected during this process. Whenever the algorithm draws an edge sample E, by definition we have E[|E| · ε³] = ‖x_{≤ε³}‖. We will later show that the algorithm only needs to draw samples O(1) times in expectation. As re-initialization occurs after Ω(‖x_{≤ε³}‖ · ε^{−2}) updates, and each sampled edge set is drawn in time linear in its size, the algorithm has O(ε^{−1}) expected amortized update time. It remains to show that at each re-initialization the algorithm makes O(1) calls to its sampler in expectation. We will show this by arguing that, every time a sample is drawn, x′_{≤ε³} is a (100ε, ε³)-coarsening of x_{≤ε³} with constant probability. Note that the algorithm can trivially check whether this is indeed the case in O(|supp(x_{≤ε³})|) time.
First note that Properties (C0) and (C3) (of coarsenings) follow trivially from the definition of x′. We will first argue that Property (C1) holds with constant probability at each attempt. This will also imply that the algorithm has init time O(|supp(x)|) in expectation.
For any e ∈ supp(x), let X_e stand for the indicator variable of the event that e ∈ E (where E stands for the random edge sample drawn by the algorithm), and let X = Σ_e X_e. Note that the X_e are independently distributed random variables and E[X] = ‖x_{≤ε³}‖ · ε^{−3} by definition. Furthermore, ‖x′_{≤ε³}‖ = X · ε³, so standard Chernoff bounds apply. Given E[X] ≥ ε^{−2}/8 (equivalently, ‖x_{≤ε³}‖ ≥ ε/8), the probability in the last inequality is Ω(1). If, conversely, E[X] ≤ ε^{−2}/8 (equivalently, ‖x_{≤ε³}‖ ≤ ε/8), then a simple application of Markov's inequality suffices. Either way, Property (C1) holds with constant probability.

It remains to prove Property (C2). Let the random variable Y_v stand for d^{100ε}_{{v}}(x_{≤ε³}, x′_{≤ε³}). Consider any edge e ∈ E incident on v, and let y_e be the indicator variable of the event that e was sampled by the dynamic sampler on query. Hence, we have E[y_e] = x_e · ε^{−3} ≤ 1, and the y_e are independently distributed random variables. Let y_v = Σ_{e∋v} y_e, and apply Chernoff bounds to y_v. Next, if ε² ≤ x_{≤ε³}(v) and k ≤ 1/ε, then a similar application of the Chernoff bound applies. Summing over all the possible values of k concludes the proof.

General Graphs
In this section, we extend our algorithms for rounding dynamic fractional matchings from bipartite to general graphs.Our main result in this section is a formalization of the informal Theorem 1.4.Formalizing this theorem and its key subroutine requires some build up, which we present in Section 6.1.For now, we restate the main application of this theorem, given by Theorem 1.5.
The rest of this section is organized as follows.In Section 6.1, we introduce some known tools from the dynamic matching literature, as well as one new lemma (Lemma 6.5) which motivates us to coarsen known fractional matching algorithms.In Section 6.2 we provide our general-graph coarsening algorithms for structured fractional matchings.Finally, in the last two subsections, we provide applications of these dynamic coarsening algorithms: computing AMMs, in Section 6.3, and rounding ε-restricted fractional matchings (see Definition 6.13), in Section 6.4.

Section Preliminaries
At a high level, our approach for rounding fractional matchings in general graphs is very close to our approach for the same task in bipartite graphs. However, as mentioned in Section 1, since the fractional matching relaxation studied in prior works is not integral in general graphs, we cannot hope to round arbitrary fractional matchings in general graphs. Therefore, in general graphs we focus our attention on a particular structured family of fractional matchings, introduced by [ACC + 18] and maintained dynamically by [BHN17, BK19].
For the sake of convenience, and similarly to Observation 2.2, we will assume that x has minimum edge weight of x min ≥ εδ/n.Observe that if all edges with weight below εδ/n are decreased to zero, then vertex weights change by at most ε, resulting in a (2ε, δ)-AMFM, which for our needs is as good as an (ε, δ)-AMFM, as we soon illustrate.For similar reasons, we will assume throughout that ε is a power of two.
Any kernel of a graph G contains a (1/2 − ε)-approximate matching with respect to G, for d sufficiently large [ACC + 18]. An alternative proof of this fact was given by [BKSW23], who show how to efficiently compute an ε-AMM (almost maximal matching, itself a (1/2 − ε)-approximate matching) of the host graph in a kernel. Specifically, they show the following.
Proof. The proof is given in [BKSW23, Lemma 5.5]. The only missing detail is externalizing the dependence on ε of that lemma, which is that of the best static linear-time (1 − ε)-approximate maximum weight matching algorithm, currently O(m · ε^{−1} · log(ε^{−1})) for m-edge graphs [DP14].

Being able to periodically compute ε-AMMs lends itself to dynamically maintaining ε-AMMs, due to these matchings' natural stability, as in the following proposition of [BKSW23].

Proposition 6.4. Let ε ∈ (0, 1/2). If M is an ε-AMM in G, then the non-deleted edges of M during any sequence of at most ε · µ(G) updates constitute a 6ε-AMM in G.
Previous work [ACC + 18, Waj20] shows that the support of any (ε, δ)-AMFM contains a kernel, for δ sufficiently small. We now prove the simple fact that the support of an (ε, δ)-AMFM x is itself a kernel, provided x_min ≥ δ.

Lemma 6.5 (When AMFMs are kernels). Let x be an (ε, δ)-AMFM with x_min ≥ δ. Then supp(x) is a kernel.

Proof. The degree upper bound follows from the condition x_min ≥ δ together with the fractional matching constraint, implying the required degree bound for each vertex v ∈ V. For the lower bound, fix an edge e ∈ E \ supp(x), which thus satisfies x_e = 0 < δ. Therefore, by the (ε, δ)-AMFM property, some vertex v ∈ e has x(v) ≥ 1 − ε and max_{f∋v} x_f ≤ δ. But since x_min ≥ δ, this implies that each edge f ∋ v in supp(x) is assigned value exactly x_f = δ. Therefore, any edge e ∈ E \ supp(x) has an endpoint v of high degree.
We are now ready to state this section's main result (the formal version of Theorem 1.4), proved in Section 6.3. The key idea behind this theorem is that coarsenings of AMFMs yield kernels of a slightly larger graph, obtained by adding O(ε) · µ(G) many dummy vertices, and therefore allow us to periodically compute an O(ε)-AMM M in this larger graph, which is then an O(ε)-AMM in the current graph.

Theorem 6.6. Let ε ∈ (0, 1). Let F be a dynamic (ε, ε/16)-AMFM algorithm with update time t_f and output recourse u_f. Let C be a dynamic (ε, ε/16)-coarsening algorithm with update time t_c for vectors x for which x_e ≥ ε/16 implies that (x_e)_i = 0 for i > k + 4, and whose output x′ on x satisfies x′_e ∈ {0, ε/16} if x_e < ε/16. Then there exists a dynamic ε-AMM algorithm A with update time O(ε^{−3} · log(ε^{−1}) + t_f + u_f · t_c).

While the above stipulations about C and x may seem restrictive, as we shall later see, they are satisfied by our coarsening algorithms, and by known fractional matchings in general graphs (up to minor modifications).

Proof of Lemma 6.7: Coarsening in General Graphs
In this section we show how to leverage our rounding and coarsening algorithms of previous sections to efficiently coarsen AMFMs, giving Lemma 6.7.First, we show that (the internal state of) Algorithm 2 yields a dynamic coarsening algorithm.
Unfortunately, Algorithm 2 is too slow to yield speedups over the state of the art via Theorem 6.6. However, the coarsening algorithms of Section 5 are also insufficient for that theorem's needs on their own, as they only maintain (poly(ε^{−1} · log n), poly(ε^{−1} · log n))-coarsenings; to be relevant for Theorem 6.6, they must be run with a much smaller error parameter ε′ = poly(ε/log n), and would again be too slow to yield any speedups.
Given Lemma 6.9, an algorithm reminiscent of the rounding algorithm of Lemma 4.3 allows us to combine two dynamic coarsening algorithms, as in the following dynamic composition lemma: Lemma 6.10.(Composing dynamic coarsenings).Let ε 1 , ε 2 ∈ [0, 1] and δ 1 ≤ δ 2 .For i = 1, 2, let C i be a dynamic (ε i , δ i )-coarsening algorithms with update times t )), which is deterministic/adaptive/output-adaptive if both C 1 and C 2 are.Moreover, it suffices for C 2 to be a coarsening algorithm only for inputs that are a subset of the support of an output of C 1 .
As the proof is near-identical to that of Lemma 4.3, we only outline one salient difference.
Proof (sketch). The algorithm broadly follows the logic of Lemma 4.3, with C_1, C_2, ε_1 and δ_1 playing the roles of C, R, ε and δ in that lemma. The outline of the algorithm and the resulting running time and determinism/adaptivity are as in Lemma 4.3. The only difference in the correctness analysis is that, by Lemma 4.14, periodically taking the non-deleted edges in supp(x^(1)) ∩ {e | x_e < δ_1} (i.e., after every ε_1 · ‖x‖/δ_1 updates) and using them as input to C_2 allows us to maintain an (O(ε_1), δ_1)-coarsening of x as the (dynamic) input for C_2 throughout. (Note that C_2 is a dynamic coarsening algorithm for the resultant vector dominated by x^(1), by the lemma's hypothesis.) That the dynamic output of C_2 is an (O(ε_1 + ε_2), δ_2)-coarsening of x then follows from Lemma 6.9.

Lemma 6.7 then follows from Lemma 6.10 in much the same way that Theorem 1.2 follows from Lemma 4.3 in Section 4.1, by taking the coarsening algorithms of Section 5 as C_1 and Algorithm 2 as the second coarsening algorithm C_2 (here, relying on Lemma 6.8). We omit the details to avoid repetition.

Proof of Theorem 6.6: Maintaining AMMs
In this section we show how to maintain O(ε)-AMMs, by dynamically coarsening AMFMs (motivated by Lemma 6.5), and periodically using these coarsened AMFMs to compute such AMMs.
Our first lemma provides a static algorithm substantiating the intuition that a coarsening of an AMFM allows us to efficiently compute an AMM (under mild conditions, which we address below).
The above lemma allows us to compute an AMM quickly when µ(G) is large. The following simple complementary lemma allows us to compute a maximal matching quickly when µ(G) is small.

Lemma 6.12. There exists a static deterministic algorithm that, given an O(1)-approximate vertex cover U of a graph G = (V, E), computes a maximal matching in G in time O(µ(G)²).
Proof. First, the algorithm computes a maximal matching M in G[U] in O(µ(G)²) time, since an O(1)-approximate vertex cover has size O(µ(G)). Then, we extend M to a maximal matching in all of G as follows: for each unmatched vertex u ∈ U, we scan the neighbors of u until we find an unmatched neighbor v (in which case we add (u, v) to M) or until we run out of neighbors of u. As |M| ≤ µ(G) at all times by definition of µ(G), at most 2µ(G) vertices are ever matched, and so we scan at most 2µ(G) + 1 neighbors per node u ∈ U until we match u (or determine that all of its neighbors are matched), for a total running time of O(|U| · µ(G)) = O(µ(G)²). Since each edge in G has an endpoint in the vertex cover U, this results in a maximal matching.
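The extension step of this proof can be sketched as follows. The adjacency-dict representation `adj` is our own choice, and the input matching M is assumed to already be maximal within G[U], as in the first step of the proof.

```python
def extend_to_maximal(adj, U, M):
    """Sketch of Lemma 6.12's extension step: given a matching M (assumed
    maximal inside G[U]) and a vertex cover U, scan each unmatched u in U
    for an unmatched neighbor; since every edge of G has an endpoint in U,
    the result is maximal in all of G. adj: vertex -> iterable of neighbors."""
    matched = {v for e in M for v in e}
    M = list(M)
    for u in U:
        if u in matched:
            continue
        for v in adj.get(u, ()):
            if v not in matched:
                M.append((u, v))
                matched.update((u, v))
                break
    return M
```

As argued above, each scan can stop after 2µ(G) + 1 neighbors, since at most 2µ(G) vertices are ever matched; the sketch omits this cutoff for brevity.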
We are finally ready to prove Theorem 6.6, restated for ease of reference.

Theorem 6.6. Let ε ∈ (0, 1). Let F be a dynamic (ε, ε/16)-AMFM algorithm with update time t_f and output recourse u_f. Let C be a dynamic (ε, ε/16)-coarsening algorithm with update time t_c for vectors x for which x_e ≥ ε/16 implies that (x_e)_i = 0 for i > k + 4, and whose output x′ on x satisfies x′_e ∈ {0, ε/16} if x_e < ε/16. Then there exists a dynamic ε-AMM algorithm A with update time O(ε^{−3} · log(ε^{−1}) + t_f + u_f · t_c). Moreover, A is deterministic/adaptive/output-adaptive if both F and C are.

Proof. Our algorithm dynamically maintains data structures, using which it periodically computes an ε-AMM, the non-deleted edges of which serve as its matching during the subsequent period. A period at whose start the algorithm computes an ε-AMM M consists of ⌊ε · |M|⌋ ≤ ε · µ(G) updates, and so by Proposition 6.4, the undeleted edges of M are a 6ε-AMM throughout the period. We turn to describing the necessary data structures and analyzing their update time, as well as the (amortized) update time of the periodic AMM computations. Throughout, we assume without loss of generality that ε = 2^{−k} for some integer k ≥ 0.
The data structures. First, we maintain an (ε, ε/16)-AMFM x using update time t_f and update recourse u_f. For each edge e with x_e < ε/16, we set x_e = ε/16, noting that this does not affect the salient properties of an (ε, ε/16)-AMFM. The second assumption on C then implies that C is a coarsening algorithm for the dynamic AMFM we wish to coarsen, and so C can be run on x to maintain an (ε, ε/16)-coarsening x′ of x, where each change to x incurs update time t_c, for a total update time of O(t_f + u_f · t_c). In addition, using [BK19], we deterministically maintain an O(1)-approximate vertex cover U ⊆ V in O(1) update time, which is dominated by the above update time.
The end of a period. When a period ends (and a new one begins), both the current matching M and the new matching M′ that we compute for the next period are O(ε)-AMMs, and so |M| = Θ(µ(G)) and |M′| = Θ(µ(G)). We compute M′ as follows. If |M| ≤ ε^{-2}, we run the algorithm of Lemma 6.12, using the O(1)-approximate vertex cover U; otherwise, we use x and x′ together with Lemma 6.11 to compute M′. To summarize, the above algorithm maintains an O(ε)-AMM within the claimed running time. As this algorithm consists of deterministic steps (including those of the vertex-cover algorithm of [BK19]) and runs algorithms F and C, the combined algorithm is deterministic/adaptive/output-adaptive if the latter two algorithms are.

Restricted Fractional Matchings
To conclude this section, we briefly note that the same approach underlying Theorem 6.6 also allows us to round other known structured fractional matchings in general graphs. In particular, we can also round the following fractional matchings, introduced by [BS16].

Definition 6.13. A fractional matching x is ε-restricted if every edge e ∈ E has x_e = 1 or x_e ≤ ε.

The interest in restricted fractional matchings (in general graphs) is that their integrality gap is low, in the following sense (see, e.g., [BS16, ABD22]).

Proposition 6.14. Let x be an α-approximate fractional matching in G (i.e., Σ_e x_e ≥ α · µ(G)) that is also ε-restricted. Then supp(x) contains an α · (1 − ε)-approximate integral matching.

Theorem 6.15. Let ε = 2^{-k} for k ≥ 11 an integer. Let F be a dynamic α-approximate ε-restricted fractional matching algorithm with update time t_f and output recourse u_f. Let C be a dynamic (16ε, ε)-coarsening algorithm with update time t_c for vectors x satisfying that x_e ≥ ε/16 implies (x_e)_i = 0 for i > k + 4. Then there exists a dynamic α(1 − O(ε))-approximate matching algorithm A with update time O(ε^{-3} + t_f + u_f · t_c). Moreover, A is deterministic/adaptive/output-adaptive if both F and C are.
The proof outline again mirrors that of Theorem 6.6 (and Lemma 4.3 before it), maintaining essentially the same data structures, and so we only discuss the key difference.
By combining Theorem 6.15 with Lemma 6.7, we obtain efficient dynamic rounding algorithms for ε-restricted fractional matchings, with update times and determinism/adaptivity as stated in Theorem 1.5.

Decremental Matching
In this section we discuss applications of our rounding algorithms to speeding up decremental matching algorithms. Prior results are obtained by a number of fractional algorithms [BGS20, JJST22, BKS23b, ABD22], combined with known rounding algorithms (or variants thereof [ABD22]). We show how our (partial) rounding algorithms yield speed-ups over known decremental bipartite matching algorithms, and bring us within touching distance of similar deterministic results in general graphs.
Robust fractional matchings. The approach underlying [BGS20, JJST22, ABD22] is to periodically compute a robust fractional matching x, in the sense that the value of x restricted to non-deleted edges remains (1 − ε)-approximate with respect to the current maximum (integral) matching in the subgraph induced by non-deleted edges, unless the latter decreases by a (1 − O(ε)) factor. For example, in a complete graph, a uniform fractional matching is robust, while an integral matching is not: deleting the latter's edges would yield a 0-approximation to the maximum matching, which would be essentially unaffected by such deletions. Formally, the above works implement the following.

Definition 7.1. An ε-robust decremental fractional matching algorithm partitions the sequence of deletions into P phases of consecutive deletions (determined during the algorithm's run), and computes a fractional matching x_i at the start of phase i, where x_i restricted to non-deleted edges is (1 − ε)-approximate throughout phase i (i.e., until x_{i+1} is computed).
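The complete-graph example above can be made concrete with a quick calculation (our own illustration, not from the paper): in K_n the uniform fractional matching retains almost all of its value even after an adversary deletes an entire perfect matching, while an integral matching would lose everything.

```python
# K_n with n even: mu(K_n) = n/2, and the uniform fractional matching
# x_e = 1/(n-1) has total value (n choose 2) / (n-1) = n/2, i.e., it is
# a 1-approximate fractional matching.
n = 100
num_edges = n * (n - 1) // 2
x_uniform = 1.0 / (n - 1)
value = num_edges * x_uniform            # = n/2

# Adversarially delete the n/2 edges of a perfect matching.  mu(G) is
# still n/2 (K_n minus a perfect matching still contains one for even
# n >= 4), yet the uniform x loses only (n/2) * 1/(n-1) of its value:
remaining_value = value - (n // 2) * x_uniform
ratio = remaining_value / (n / 2)        # stays close to 1
```

In contrast, deleting the n/2 edges of an integral perfect matching leaves that matching with value 0, while µ(G) is essentially unchanged.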
As for dynamic algorithms, we may consider ε-robust decremental fractional matching algorithms that are either deterministic or adaptive.All currently known such algorithms are adaptive, and even deterministic.
In what follows, we show how our rounding algorithms can be used to round known ε-robust decremental fractional matching algorithms, obtaining new state-of-the-art decremental (integral) matching algorithms, starting with bipartite graphs.

Bipartite Graphs
Applying the rounding algorithms of Theorem 1.2, one immediately obtains a framework for rounding such decremental fractional bipartite matching algorithms.
Theorem 7.2. Let F be an ε-robust decremental fractional bipartite matching algorithm, using total time t_F and at most p_F phases. Let R be an ε-approximate bipartite rounding algorithm with update time t_U^R and init time t_I^R(x). Then, there exists a (1 − 2ε)-approximate decremental bipartite matching algorithm A which, on a graph starting with m edges, takes total time O(t_F + Σ_{i=1}^{p_F} t_I^R(x_i) + m · t_U^R). Algorithm A is deterministic/adaptive/output-adaptive if both F and R are.
Proof. The combined algorithm A is direct. Whenever F computes a new fractional matching x_i, we run init(G, x_i, ε) in algorithm R. Between computations of fractional matchings x_i and x_{i+1}, we call update(e, 0) for each edge e deleted. The running time is trivially O(t_F + Σ_{i=1}^{p_F} t_I^R(x_i) + m · t_U^R). As for the approximation ratio, the fractional matching x throughout a phase (i.e., the x_i values of undeleted edges between the computations of x_i and x_{i+1}) is (1 − ε)-approximate with respect to the current maximum matching size µ(G), and the rounding algorithm R maintains a matching M ⊆ supp(x) with the desired approximation ratio. Finally, as algorithm A only uses algorithms F and R, the former is deterministic/adaptive/output-adaptive if the latter two are.

Theorem 7.2 in conjunction with Theorem 1.2 yields a number of improvements for the decremental bipartite matching problem (see Table 1). In particular, these theorems yield decremental (integral) bipartite matching algorithms with the same update time as the current best fractional algorithms for the same problems [BCH20, JJST22]. Our improvements compared to prior work follow both from our faster update times (previously of the order Ω(ε^{-4}), some with high polylog dependencies), and from our rounding algorithms' initialization times, which save us from paying the update time for each of the potentially p_F · m many edges in the supports of x_1, x_2, ..., x_{p_F}. We discuss the fractional algorithms of [BGS20, JJST22], substantiating the results in the subsequent table, in Appendix C.
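The glue in the proof of Theorem 7.2 can be sketched as a short driver loop. This is our own illustration: the interfaces `F.new_phase`, `R.matching`, and the set-like graph `G` are hypothetical naming conventions, not the paper's; only `init`/`update` mirror the rounding interface from the source.

```python
def decremental_matching(G, deletions, F, R, eps):
    """Sketch of algorithm A from the proof of Theorem 7.2.
    F.new_phase(G) is assumed to return the next fractional matching x_i
    (a dict edge -> value) when a new phase starts, and None otherwise.
    R exposes init/update as in the rounding interface, plus R.matching()
    returning the currently maintained integral matching."""
    x = F.new_phase(G)       # x_1 at the start of the first phase
    R.init(G, x, eps)
    for e in deletions:
        G.remove(e)          # delete the edge from the (set-like) graph
        x_new = F.new_phase(G)
        if x_new is not None:
            R.init(G, x_new, eps)   # new phase: re-initialize R on x_{i+1}
        else:
            R.update(e, 0.0)        # same phase: zero out the deleted edge
        yield R.matching()
```

The total time is exactly the theorem's bound: t_F for F, one init per phase, and one update per deletion.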

Table 1: New results for (1 − ε)-approximate decremental bipartite matching. (D) and (A) stand for deterministic and (randomized) adaptive, respectively. Columns: New Time Per Edge, Fractional Algorithm, Closest Prior Best, Reference.

Potential Future Applications: General Graphs
In [ABD22], Assadi, Bernstein and Dudeja follow the same robust fractional matching approach of [BGS20], and show how to implement it in general graphs. Assuming that µ(G) ≥ ε · n throughout, which by known vertex sparsification results [AKLY16] only incurs a (randomized) poly(log n) slowdown, they show how to maintain an ε-robust fractional matching that is ε-restricted (Definition 6.13) in total time m · 2^{O(ε^{-1})}. By adapting the rounding scheme of [Waj20], they then round this fractional matching randomly (but adaptively). By appealing to Theorem 6.15 and Lemma 6.7, we can similarly round ε-restricted fractional matchings deterministically, with a log n · poly(ε^{-1}) update time. Combining with deterministic vertex sparsification with an n^{o(1)} slowdown [Kis22], we obtain almost all ingredients necessary for the first sub-polynomial-time deterministic (1 − ε)-approximate decremental matching algorithm in general graphs. Unfortunately, the fractional algorithm of [ABD22] relies on randomization in one additional crucial step to compute an ε-robust fractional matching, namely in the subroutine M-or-E*(), which outputs either a large fractional matching respecting some edge capacity constraints, or a minimum cut (whose capacities are then increased). For bipartite graphs, [BGS20] gave a deterministic implementation of this subroutine, though for general graphs this seems more challenging, and so [ABD22] resorted to randomization to implement this subroutine. Our work therefore leaves the derandomization of this subroutine as a concrete last step toward resolving the following conjecture in the affirmative.

A A degree-split Algorithm
In this section we provide an implementation and analysis of the degree-split algorithm guaranteed by Proposition 2.4, restated below.

Proposition 2.4. There exists an algorithm degree-split which, on a multigraph G = (V, E) with maximum edge multiplicity at most two (i.e., no edge has more than two copies), computes in O(|E|) time the (simple) edge sets E_1 and E_2 of two disjoint subgraphs of G, such that E_1, E_2 and the degrees d_G(v) and d_i(v) of each vertex v in G and H_i := (V, E_i) satisfy Properties (P1)–(P3).
Proof of Proposition 2.4. First, suppose G is a simple graph (i.e., it contains no parallel edges). We later show how to easily extend this to a multigraph with maximum multiplicity two.
Recall that a walk W in a graph is a sequence of edges (e_1, ..., e_k), with e_i = {v_i, v_{i+1}}. We call v_1 and v_{k+1} the extreme vertices of W. We call odd-indexed and even-indexed edges odd and even edges for short. The walk W is a cycle if v_1 = v_{k+1}, and the walk is maximal if it cannot be extended, i.e., if there exists no edge e = {v_1, v} or e = {v_{k+1}, v} outside of W.
While E ≠ ∅, Algorithm degree-split repeatedly computes maximal walks W, removes these walks' edges from E, and adds all odd edges (resp., even edges) of W to the smaller (resp., larger) of E_1 and E_2. Finally, fix a vertex v and i ∈ {1, 2}. By maximality of the walks computed, v is an extreme vertex of at most one walk. Thus, all but at most two edges of v are paired into successive odd/even edges in some walks computed. Consequently, the number of odd and even edges of v can differ by 0, in which case d_i(v) = d_G(v)/2; they may differ by 1, in which case d_i(v) ∈ {⌊d_G(v)/2⌋, ⌈d_G(v)/2⌉}; or they may differ by 2, in which case v has two more odd edges than even edges, and the last walk v belongs to is an odd-length cycle (and hence G is not bipartite). Properties (P2) and (P3) follow. Now, to address the (slightly) more general case where some edges have two parallel copies, we let G′ be the simple graph obtained by removing all parallel edges from G, and let E′_1 and E′_2 be the edge sets computed by running degree-split on G′. Then, for each pair of parallel edges (e, e′) in G, we arbitrarily add one of the pair to E_1 and the other to E_2. It is easy to see that the resulting sets E_1 and E_2 are simple (by construction), and that these sets satisfy all three desired properties with respect to G, since the edge sets E′_1 and E′_2 satisfy these properties with respect to G′, and every vertex v has all of its (parallel) edges in E \ E′ evenly divided between E_1 and E_2. The linear running time is trivial given the linear time to compute E′_1, E′_2.
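The walk-peeling procedure above can be sketched in a few lines. This is our own simplified illustration for simple graphs: it alternates each walk's edges between the two sets but omits the paper's routing of odd edges to the smaller of E_1 and E_2 (used for Property (P1)); the helper `extend` and the 0-indexed vertices are our conventions.

```python
def degree_split(n, edges):
    """Simplified sketch of degree-split on a simple graph with vertices
    0..n-1: repeatedly peel off maximal walks and alternate their edges
    between the two output sets, so each vertex's degree is split
    roughly in half."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    E = [set(), set()]

    def extend(v):
        # Follow remaining edges out of v until stuck, removing them as we go.
        while adj[v]:
            u = next(iter(adj[v]))
            adj[v].discard(u)
            adj[u].discard(v)
            yield (v, u)
            v = u

    for s in range(n):
        while adj[s]:
            forward = list(extend(s))                 # maximal in one direction
            back = [(b, a) for (a, b) in extend(s)]   # and in the other
            walk = back[::-1] + forward               # maximal walk through s
            for i, (u, v) in enumerate(walk):
                E[i % 2].add(frozenset((u, v)))
    return E
```

On an even cycle, for instance, a single maximal walk covers all edges, and the alternation gives every vertex exactly one edge in each set.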

B Dynamic Set Sampling
In this section we provide an output-adaptive data structure for the dynamic set sampling problem (restated below). Recall that this is the basic problem of maintaining a dynamic subset of [n], where every element is included in the subset independently with probability p_i, under dynamic changes to the p_i and re-sampling. This basic problem was studied by [TWLH10, BP17, YWW23].
Definition 5.1. A dynamic set sampler is a data structure supporting the following operations:
• init(n, p ∈ [0, 1]^n): initialize the data structure for the n-size set S = [n] and probability vector p.
• set(i, p_i): update the sampling probability of element i to p_i.
• sample(): return T ⊆ S containing each i ∈ S independently with probability p_i.
Our main result of this section is that we can implement each operation in total time linear in the number of operations, n, and the sizes of the output sets T, in a word RAM model with word size w = Ω(log(p_min^{-1})), under the promise that p_i ≥ p_min for all i ∈ [n] throughout. These guarantees hold even if the input is chosen by an output-adaptive adversary.
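The interface of Definition 5.1 can be illustrated with the following naive sketch (our own simplified take, not the paper's Algorithm 3): `sample()` skips ahead by Geometric(p_max) steps and accepts index i with probability p_i / p_max, so each i is included independently with probability p_i. Its expected sample time is O(1 + n · p_max), which matches the O(1 + |T|) bound only when all p_i are within a constant factor of p_max; the paper's data structure removes this caveat.

```python
import math
import random

class NaiveSetSampler:
    """Naive dynamic set sampler via geometric skipping + rejection."""

    def init(self, n, p):
        self.n = n
        self.p = list(p)
        self.p_max = max(p, default=0.0)

    def set(self, i, p_i):
        self.p[i] = p_i
        self.p_max = max(self.p_max, p_i)  # sketch: p_max never shrinks

    def _geometric(self):
        # Number of failures before the first success of a p_max-biased coin.
        if self.p_max >= 1.0:
            return 0
        u = 1.0 - random.random()  # uniform in (0, 1]
        return int(math.log(u) / math.log(1.0 - self.p_max))

    def sample(self):
        T = []
        if self.p_max <= 0.0:
            return T
        i = self._geometric()
        while i < self.n:
            # i was "hit" with probability p_max; thin to probability p[i].
            if random.random() < self.p[i] / self.p_max:
                T.append(i)
            i += 1 + self._geometric()
        return T
```

Each index is hit by the geometric skips independently with probability p_max, and the rejection step thins this to exactly p_i, so the output distribution is correct even though the running time falls short of Theorem 5.2.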

Definition 1.3. A matching M in G is an ε-almost maximal matching (ε-AMM) if M is maximal with respect to some subgraph G[V \ U] obtained by removing a set U of at most |U| ≤ ε · µ(G) vertices from G, where µ(G) is the maximum matching size in G.

Theorem 5.2. Algorithm 3 is a set sampler data structure using O(n) space that implements init in O(n) time, set in O(1) time, and T = sample() in expected O(1 + |T|) time, in a word RAM model with word size w = Ω(log(p_min^{-1})), under the promise that p_i ≥ p_min for all i ∈ [n] throughout. These guarantees hold even if the input is chosen by an output-adaptive adversary.

Equipped with Theorem 5.2, we are ready to prove Lemma 4.10.

Lemma 4.10. There exists an output-adaptive dynamic (O(ε), ε^3)-coarsening algorithm for dynamic fractional matchings x with expected update time O(ε^{-1}) and expected init time O(|supp(x)|).

... by definition and Observation 2.2. Moreover, each (copy of) edge e output/discarded by degree-split(G[E_i ⊎ F_i]) corresponds to adding/subtracting 2^{-i} to/from x_e.

To analyze the runtime of the algorithm, note that it runs in time O(L + Σ_{i=0}^{L}(|F_i| + |E_i|)). Further, |F_L| = 0 and by Property (P1) we have that |F_i| ≤ (1/2)|F_{i+1}| + (1/2)|E_{i+1}| + 1 for all i ∈ {0, 1, ..., L − 1}. Letting m := |supp(x)|, we know that |E_i| ≤ m for all i, and so by induction ...

Proof. By Lemma 4.8, there exists a randomized static algorithm that computes an (ε^3/log(n), ε^{12}/(24 · log^6(n)))-split of any uniform fractional matching x in O(|supp(x)|) time, succeeding w.h.p. Plugging this algorithm into Lemma 4.5, we obtain a randomized (with high probability) dynamic (O(ε), ε^{12}/(24 · log^6(n)))-coarsening algorithm C with update time t_U^C = O(ε^{-1}) and init time O(|supp(x)|).