Knapsack with Small Items in Near-Quadratic Time

The Knapsack problem is one of the most fundamental NP-complete problems at the intersection of computer science, optimization, and operations research. A recent line of research worked towards understanding the complexity of pseudopolynomial-time algorithms for Knapsack parameterized by the maximum item weight $w_{\mathrm{max}}$ and the number of items $n$. A conditional lower bound rules out that Knapsack can be solved in time $O((n+w_{\mathrm{max}})^{2-\delta})$ for any $\delta>0$ [Cygan, Mucha, Wegrzycki, Wlodarczyk'17, K\"unnemann, Paturi, Schneider'17]. This raised the question whether Knapsack can be solved in time $\tilde O((n+w_{\mathrm{max}})^2)$. This was open both for 0-1-Knapsack (where each item can be picked at most once) and Bounded Knapsack (where each item comes with a multiplicity). The quest of resolving this question led to algorithms that solve Bounded Knapsack in time $\tilde O(n^3 w_{\mathrm{max}}^2)$ [Tamir'09], $\tilde O(n^2 w_{\mathrm{max}}^2)$ and $\tilde O(n w_{\mathrm{max}}^3)$ [Bateni, Hajiaghayi, Seddighin, Stein'18], $O(n^2 w_{\mathrm{max}}^2)$ and $\tilde O(n w_{\mathrm{max}}^2)$ [Eisenbrand and Weismantel'18], $O(n + w_{\mathrm{max}}^3)$ [Polak, Rohwedder, Wegrzycki'21], and very recently $\tilde O(n + w_{\mathrm{max}}^{12/5})$ [Chen, Lian, Mao, Zhang'23]. In this paper we resolve this question by designing an algorithm for Bounded Knapsack with running time $\tilde O(n + w_{\mathrm{max}}^2)$, which is conditionally near-optimal. This resolves the question both for the classic 0-1-Knapsack problem and for the Bounded Knapsack problem.


Introduction
Knapsack is one of the most fundamental problems at the intersection of computer science, optimization, and operations research. It appeared as one of Karp's original 21 NP-complete problems [30] and has been subject to an extensive amount of research, see, e.g., the book [32].
In the 0-1-Knapsack problem we are given a weight budget $W$ and $n$ items, and for each item $i$ we are given its weight $w_i$ and its profit $p_i$. The goal is to select a set of items of total weight at most $W$ and maximum total profit. Formally, 0-1-Knapsack is the following problem: $\max\{p^T x : w^T x \le W,\ x \in \{0,1\}^n\}$.
A generalization of 0-1-Knapsack is the Bounded Knapsack problem, in which each item $i$ also comes with a multiplicity $u_i$, and item $i$ can be selected up to $u_i$ times. This can be viewed as a compressed representation of a 0-1-Knapsack instance. In particular, any algorithm for Bounded Knapsack also solves 0-1-Knapsack. Formally, Bounded Knapsack is the following problem: $\max\{p^T x : w^T x \le W,\ 0 \le x \le u,\ x \in \mathbb{Z}^n\}$.
There is a vast amount of literature on both problems. When the input integers are small, pseudopolynomial-time algorithms are of particular importance, i.e., algorithms whose running time depends polynomially on $n$ and the input integers. A well-known example is Bellman's dynamic programming algorithm from 1957 [5] that solves 0-1-Knapsack in time $O(nW)$ and Bounded Knapsack in time $\tilde O(nW)$.¹ In the last few years, research on pseudopolynomial-time algorithms for Knapsack was driven by developments in fine-grained complexity theory (e.g., [18, 33, 1]) and in proximity bounds for integer programming (e.g., [20, 25]). In particular, fine-grained complexity contributed conditional lower bounds for 0-1-Knapsack and Bounded Knapsack that rule out time $O((n + W)^{2-\delta})$ [18, 33] and $W^{1-\delta} \cdot 2^{o(n)}$ [1] for any constant $\delta > 0$. That is, by now we have evidence that the running time $\tilde O(nW)$ is near-optimal. To cope with these hardness results, recent work studied the maximum weight of any item, $w_{\max}$, as a parameter. This is especially interesting for Bounded Knapsack, where the parameter $W$ can be much larger than $w_{\max}$ and $n$ due to the multiplicities. For this reason, it is far from obvious that Bounded Knapsack can be solved in time polynomial in $n$ and $w_{\max}$. The first such algorithm was designed by Tamir [39], achieving time $\tilde O(n^3 w_{\max}^2)$. This raised the question: What is the optimal running time for Bounded Knapsack in terms of $n$ and $w_{\max}$? More precisely, what is the smallest constant $c$ such that Bounded Knapsack can be solved in time $\tilde O((n + w_{\max})^c)$? The same conditional lower bound that rules out time $O((n + W)^{2-\delta})$ also rules out time $O((n + w_{\max})^{2-\delta})$ for any constant $\delta > 0$ [18, 33]. Thus, we have $c \ge 2$, and the driving question becomes: Can Bounded Knapsack be solved in time $\tilde O((n + w_{\max})^2)$?
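For concreteness, Bellman's dynamic program for 0-1-Knapsack can be sketched as follows (a minimal Python sketch of the classic recurrence; the function and variable names are ours):

```python
def knapsack_01(weights, profits, W):
    """Bellman's dynamic program: after processing a prefix of the items,
    best[c] is the maximum profit achievable with total weight at most c."""
    best = [0] * (W + 1)
    for w, p in zip(weights, profits):
        # iterate capacities downwards so each item is used at most once
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[W]
```

Each of the $n$ items is processed in time $O(W)$, giving the $O(nW)$ bound from the text.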
This question motivated a line of research developing Bounded Knapsack algorithms with better and better dependence on $n$ and $w_{\max}$, see Table 1. In particular, after Tamir's first polynomial algorithm with time complexity $\tilde O(n^3 w_{\max}^2)$ [39], a line of research developed and refined proximity bounds by exchange arguments, leading to running times $\tilde O(n^2 w_{\max}^2)$ and $\tilde O(n w_{\max}^3)$ [4], $O(n^2 w_{\max}^2)$ and $\tilde O(n w_{\max}^2)$ [20], and $O(n + w_{\max}^3)$ [36]. Very recently, Chen et al. used exchange arguments based on additive combinatorics to obtain time $\tilde O(n + w_{\max}^{2.4})$ [16]. However, despite extensive research, our understanding of the optimal running time in terms of $n$ and $w_{\max}$ is still incomplete. In particular, the above driving question remained open.
For 0-1-Knapsack the state of the art is very similar to its generalization Bounded Knapsack. In addition to the algorithms listed above, the following algorithms were developed specifically for 0-1-Knapsack. Note that instances with $W \ge n w_{\max}$ are trivial for 0-1-Knapsack. Instances with $W < n w_{\max}$ can be solved in time $O(nW) \le O(n^2 w_{\max})$ by Bellman's classic dynamic programming algorithm [5], in time $O(n + w_{\max} W) \le O(n w_{\max}^2)$ [4, 3] (also implicit in [31]), and, since recently, in time $\tilde O(n + w_{\max}^{2.5})$ [28]. The same conditional lower bound as for Bounded Knapsack also applies to 0-1-Knapsack, and thus for both problems our understanding had the same gap.
In particular, the driving question posed above was open not only for Bounded Knapsack but also for 0-1-Knapsack. We remark that this driving question has been asked repeatedly (for Bounded Knapsack or 0-1-Knapsack), see, e.g., [36, 8, 28, 16].

Our Results
In this paper, we resolve the driving question by designing an algorithm for Bounded Knapsack that runs in time $\tilde O(n + w_{\max}^2)$.
¹ By $\tilde O$-notation we hide logarithmic factors, i.e., $\tilde O(T) = \bigcup_{c \ge 0} O(T \log^c T)$.
Table 1: Pseudopolynomial-time algorithms for 0-1-Knapsack and Bounded Knapsack parameterized by the number of items $n$ and the maximum weight of any item $w_{\max}$.
Alternatively, in Theorem 1 we can replace $w_{\max}$ by the maximum profit of any item, $p_{\max}$.
Theorem 2. Bounded Knapsack can be solved by a deterministic algorithm in time $\tilde O(n + p_{\max}^2)$.
Independent Work. Jin [27] independently also achieved running time $\tilde O(n + w_{\max}^2)$ as well as $\tilde O(n + p_{\max}^2)$. Let us give a brief comparison: In Jin's favor, his $\tilde O$ hides only 4 log factors, while our $\tilde O$ hides at least 7 log factors. In our favor, (1) we handle Bounded Knapsack, while Jin's algorithm only works for 0-1-Knapsack, and (2) our algorithm seems to be significantly simpler.

Technical Overview
Here we describe the tools used in our algorithm and their history.
Classic Proximity Bound. Let $(w, p, W)$ be a 0-1-Knapsack instance sorted by non-increasing profit-to-weight ratios $p_1/w_1 \ge \ldots \ge p_n/w_n$. Let $g$ be the maximal prefix solution, i.e., $g$ picks the maximal prefix of the items that fits into the weight budget $W$. Intuitively, an optimal solution $x^*$ should not deviate too much from $g$, since any deviation means replacing an item with higher profit-to-weight ratio by an item with lower profit-to-weight ratio. Formally, a classic proximity bound shows that there exists an optimal solution $x^*$ that differs from the maximal prefix solution $g$ in $O(w_{\max})$ entries. That is, we have
$$\sum_{i \in [n]} |x^*_i - g_i| = O(w_{\max}) \quad\text{and thus}\quad \sum_{i \in [n]} w_i \cdot |x^*_i - g_i| = O(w_{\max}^2). \tag{CP}$$
This proximity bound was pioneered by Eisenbrand and Weismantel [20]. Their proof also works for integer linear programs with more than one constraint and is based on Steinitz' lemma; for Knapsack, a simplified proof using a simple exchange argument can be found in [36]. This proximity bound has been used in almost all recent work on pseudopolynomial-time and approximation algorithms for Knapsack. We also make use of it, in order to reduce the Bounded Knapsack problem to the 0-1-Knapsack problem, as we explain next.
Reduction from Bounded Knapsack to 0-1-Knapsack. We show the following reduction between the two Knapsack problems: If 0-1-Knapsack on $O(w_{\max}^2)$ items can be solved in time $\tilde O(w_{\max}^2)$, then Bounded Knapsack can be solved in time $\tilde O(n + w_{\max}^2)$. After establishing this reduction, to show our main result that Bounded Knapsack can be solved in time $\tilde O(n + w_{\max}^2)$ it suffices to show that 0-1-Knapsack can be solved in time $\tilde O(n + w_{\max}^2)$. We obtain this reduction along the lines of previous Bounded Knapsack algorithms, e.g., [20, 36]. In the following description, for simplicity think of the starting point being a 0-1-Knapsack instance (the same reduction can also be efficiently implemented when starting from Bounded Knapsack). Given a Knapsack instance, we construct the maximal prefix solution $g$. For any weight $\hat w \in [w_{\max}]$, among the items of weight $\hat w$ that are picked by $g$, we greedily pick all but the $\Theta(w_{\max})$ least profitable items, and we remove the picked items from the instance. Similarly, among the items of weight $\hat w$ that are not picked by $g$, we remove all but the $\Theta(w_{\max})$ most profitable items from the instance. The classic proximity bound (CP) implies that some optimal solution is consistent with these choices. The remaining instance has at most $\Theta(w_{\max})$ items of each weight in $[w_{\max}]$, so we reduced the number of items to $O(w_{\max}^2)$. So from now on we focus on 0-1-Knapsack with few items.
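The pruning step of this reduction can be sketched as follows (a simplified Python sketch for the 0-1 case; the parameter C standing in for the hidden constant in $\Theta(w_{\max})$ is a placeholder, and all names are ours):

```python
from collections import defaultdict

def prune(items, g_picked, w_max, C):
    """items: list of (weight, profit); g_picked: indices selected by the
    maximal prefix solution g. Per weight class, commit to the solution all
    picked items except the C*w_max least profitable ones, and keep only
    the C*w_max most profitable unpicked items."""
    committed_weight = committed_profit = 0
    remaining = []
    by_class = defaultdict(lambda: ([], []))  # weight -> (picked, unpicked)
    for i, (w, p) in enumerate(items):
        by_class[w][0 if i in g_picked else 1].append(p)
    keep = C * w_max
    for w, (picked, unpicked) in by_class.items():
        picked.sort()  # ascending profit
        for p in picked[keep:]:  # commit all but the `keep` least profitable
            committed_weight += w
            committed_profit += p
        remaining += [(w, p) for p in picked[:keep]]
        unpicked.sort(reverse=True)
        remaining += [(w, p) for p in unpicked[:keep]]  # most profitable only
    return committed_weight, committed_profit, remaining
```

The remaining instance keeps $O(w_{\max})$ items per weight class, hence $O(w_{\max}^2)$ items in total, as claimed in the text.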
Additive Combinatorics. A fundamental result in additive combinatorics shows that for any set $A \subseteq [n]$ of size $|A| = \Omega(\sqrt{n})$ the set of all subset sums of $A$ contains an arithmetic progression of length $n$. This result was pioneered by Freiman [21] and Sárközy [37] and further improved by Szemerédi and Vu [38] and recently by Conlon, Fox, and Pham [17]. A long line of work used variations of this result to design algorithms for dense cases of the Subset Sum problem [12, 13, 22, 23, 24, 11]. Notably, the majority of this line of research assumes that the input is a set, and only recently has this work been generalized to multi-sets [11]. In this paper we distill the following tool from [11], which roughly states that two multi-sets $X, Y$ of small integers have two subsets of equal sum if both the size of $X$ and the sum of all elements of $Y$ are sufficiently large.
Lemma 3 (Informal Version of Lemma 7). Let $X, Y$ be multi-sets consisting of integers in $[w_{\max}]$, where each number in $X$ has multiplicity at most $\mu$. If $|X| = \tilde\Omega((\mu \cdot w_{\max})^{1/2})$ and $\sum_{y \in Y} y = \tilde\Omega(\mu \cdot w_{\max}^2 / |X|)$ then there exist non-empty subsets of $X$ and $Y$ that have the same sum.
Proximity Bounds based on Additive Combinatorics. Results from additive combinatorics such as Lemma 3 can be used in exchange arguments to show proximity bounds. For Knapsack, this was first used by Deng, Jin and Mao to design approximation schemes [19], and recently by Jin [28] and Chen, Lian, Mao and Zhang [16] to design pseudopolynomial-time algorithms. Our arguments follow the approach of Chen et al. [16].
The basic exchange argument by Chen et al. is as follows. Assume for simplicity that all items have distinct profit-to-weight ratios, i.e., $p_1/w_1 > \ldots > p_n/w_n$. Fix an optimal solution $x^*$, and consider a set of items $I_X$ that are picked by $x^*$ and a set of items $I_Y$ that are not picked by $x^*$.
Consider the multi-set of all weights of items in $I_X$, i.e., $X = \{w_i : i \in I_X\}$, and similarly define $Y = \{w_i : i \in I_Y\}$. If $X$ and $Y$ satisfy the constraints of Lemma 3, then there exist subsets $I'_X \subseteq I_X$ and $I'_Y \subseteq I_Y$ of equal total weight. Then removing $I'_X$ from and adding $I'_Y$ to $x^*$ maintains the total weight, and thus yields a new feasible solution $x'$. Now suppose that all items in $I_Y$ have a higher profit-to-weight ratio than all items in $I_X$; since we sorted by decreasing profit-to-weight ratio, this condition is equivalent to $\max(I_Y) < \min(I_X)$. Then this exchange strictly increases the profit, contradicting the choice of $x^*$ as an optimal solution. We can use this contradiction to argue that one of the assumptions of Lemma 3 must not be satisfied, and by ensuring that $|X| = |I_X|$ is large, we can conclude that the sum of all weights of the items in $I_Y$ must be small. By picking $I_Y$ as a set of items picked by $g$ but not by $x^*$, we obtain that these items have a small total weight. By a symmetric argument on the items picked by $x^*$ but not by $g$, we obtain that $x^*$ and $g$ differ by a small total weight. This can yield a proximity bound similar to (CP).
The question now is how to choose the set $I_Y$ and ensure that there always exists a corresponding set $I_X$ so that the basic exchange argument can be applied. To this end, Chen et al. partition the items into 6 parts and show applicability on each part. We devise a more intricate partitioning into a polylogarithmic number of parts, proving the following novel proximity bound.
Theorem 4 (Informal Version of Theorem 12). Given a 0-1-Knapsack instance $(w, p, W)$ of size $n$ with maximal prefix solution $g$, in time $\tilde O(n)$ we can construct a partitioning $I_1 \cup \ldots \cup I_k = [n]$ into $k = O(\log^2 n)$ parts together with a proximity bound $\Delta_j$ for each part $I_j$ such that for every $j$ we have (1) any optimal solution $x^*$ differs from $g$ by total weight at most $\Delta_j$ among the items $I_j$, and (2) the $\Delta_j$'s are small; more precisely, the product of $\Delta_j$ and the number of distinct weights in $I_j$ is $\tilde O(w_{\max}^2)$.
This theorem is the main novelty of our algorithm. For comparison, note that the unique partitioning consisting of $k = 1$ part satisfies $|\operatorname{supp}(w(I_1))| \cdot \Delta_1 \le \tilde O(w_{\max}^3)$, since $|\operatorname{supp}(w(I_1))| \le w_{\max}$ holds trivially and $\Delta_1 := \Theta(w_{\max}^2)$ works by the classic proximity bound (CP). This bound for $k = 1$ was recently improved to $\tilde O(w_{\max}^{2.5})$ by Jin [28]. Chen et al. were the first to study partitionings into $k > 1$ parts, constructing a partitioning into 6 parts with $|\operatorname{supp}(w(I_j))| \cdot \Delta_j \le \tilde O(w_{\max}^{2.4})$ [16, Lemmas 13, 16 and 17]. We improve this bound to $\tilde O(w_{\max}^2)$, while using more parts.
The Knapsack Algorithm. Given Theorem 4, designing an algorithm that solves 0-1-Knapsack in time $\tilde O(n + w_{\max}^2)$ follows from standard techniques. We split each part $I_j$ into the items $I_j^-$ picked by $g$ and the items $I_j^+$ not picked by $g$. By Theorem 4, any optimal solution $x^*$ for the whole instance $(w, p, W)$ uses weight at most $\Delta_j$ in the set $I_j^+$, and thus we can restrict our attention to weights $0 \le W' \le \Delta_j$ in the set $I_j^+$. Similarly, $g - x^*$ uses weight at most $\Delta_j$ in the set $I_j^-$, and thus we can restrict our attention to weights $0 \le W' \le \Delta_j$ in the set $I_j^-$. For simplicity, in this overview we focus on the set $I_j^+$. We further split each set $I_j^+$ according to item weights; this splits $I_j^+$ into sets $I_{j,1}, \ldots, I_{j,\ell_j}$ where $\ell_j$ is the number of distinct weights among the items in $I_j^+$. All items in $I_{j,\ell}$ have the same weight, so a standard greedy algorithm computes the optimal profit for each weight $0 \le W' \le \Delta_j$ in time $O(\Delta_j)$. The total time to run the greedy algorithm on all sets $I_{j,1}, \ldots, I_{j,\ell_j}$ is $O(\Delta_j)$ times the number of distinct weights of items in $I_j^+$, which by guarantee (2) of Theorem 4 is $\tilde O(w_{\max}^2)$. Repeating this for all $1 \le j \le k = O(\log^2 n)$ still runs in time $\tilde O(w_{\max}^2)$. It remains to combine the results of the greedy algorithm. Here we use that the result of the greedy algorithm is a concave sequence, and the $(\max,+)$-convolution of an arbitrary sequence and a concave sequence can be computed in linear time. It follows that combining the results of the greedy algorithm can be done in roughly the same running time as it took to compute them, up to logarithmic factors. Hence, in total the running time is still $\tilde O(n + w_{\max}^2)$. We remark that (apart from Theorem 4) this algorithm is quite standard: For $k = 1$ the same algorithm was described in [36, Lemma 2.2], for larger $k$ essentially the same algorithm was described in [16, Lemma 6], and its ingredients (i.e., solving equal weights by a greedy algorithm and combining concave sequences) have been used repeatedly, see, e.g., [31, 14, 3, 36, 16].
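The two ingredients just mentioned can be sketched as follows (a simplified Python sketch with names of our choosing; for clarity the convolution is written quadratically, whereas the algorithm relies on the linear-time version applicable when one argument is concave):

```python
def equal_weight_profile(w, profits, cap):
    """All items share weight w. Greedy: take items in order of decreasing
    profit; best[c] = max profit with total weight <= c. The profit as a
    function of the number of items taken is concave."""
    profits = sorted(profits, reverse=True)
    best = [0] * (cap + 1)
    total, taken = 0, 0
    for c in range(1, cap + 1):
        if taken < len(profits) and (taken + 1) * w <= c:
            taken += 1
            total += profits[taken - 1]
        best[c] = total
    return best

def maxplus(a, b):
    """(max,+)-convolution, quadratic for clarity. When one argument is
    concave it can be computed in linear time, which is what the running
    time analysis in the text relies on."""
    out = [float("-inf")] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = max(out[i + j], x + y)
    return out
```

Combining the per-weight-class profiles with such convolutions yields the optimal profit for every restricted budget, as described above.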

Summary. We described an algorithm that solves Bounded Knapsack in time $\tilde O(n + w_{\max}^2)$ and thus resolves the driving question of a long line of research [39, 4, 3, 20, 36, 28, 16]. Our main novelty is a proximity bound that splits the set of items into a polylogarithmic number of parts, so that within each part the optimal solution cannot deviate much from the maximal prefix solution. We highlight that additive combinatorics is a crucial tool in our algorithm, thus showing once more that the recent trend of using additive combinatorics for algorithm design is very fruitful.

Further Related Work
Further Parametrizations. So far we discussed Knapsack algorithms that parametrize by the number of items $n$ and the weight parameters $W$ or $w_{\max}$. The analogous parameters for profits are OPT (the profit of an optimal solution) and $p_{\max}$ (the maximum profit of any item). It is essentially symmetric to parametrize by weight or by profit parameters, see, e.g., [36, Section 4]. However, interesting new algorithms are possible when parametrizing by both weight and profit parameters, see the line of work [35, 4, 8, 9]. To mention an exemplary result, 0-1-Knapsack can be solved in time $\tilde O(n w_{\max} p_{\max}^{2/3})$ [9]. What we present in this paper is superior in case $w_{\max} \ll p_{\max}$, but worse in case $w_{\max} \approx p_{\max} = n$.

Unbounded Knapsack
The Unbounded Knapsack problem is the same as Bounded Knapsack but with $u_i = \infty$ for each item $i$. That is, every item can be selected any number of times, which makes the problem much simpler. The same conditional lower bound as for Bounded Knapsack, ruling out time $O((n + w_{\max})^{2-\delta})$ for any $\delta > 0$, also applies to this problem [18, 33]. Running time $\tilde O(n + w_{\max}^2)$ has been achieved for Unbounded Knapsack independently by Jansen and Rohwedder [25] and Axiotis and Tzamos [3]. Their algorithms are significantly simpler than our algorithm for Bounded Knapsack; in particular, they do not require additive combinatorics. Chan and He further improved the running time to $\tilde O(n w_{\max})$ [15].

Subset Sum
The Subset Sum problem is the special case of 0-1-Knapsack with $w_i = p_i$ for each item $i$. Here the weight budget $W$ is typically called the target $t$. Bellman's classic dynamic programming algorithm solves Subset Sum in time $O(nt)$ [5], which has been improved to $\tilde O(n + t)$ [7, 29]. A conditional lower bound rules out time $t^{1-\delta} \cdot 2^{o(n)}$ for any constant $\delta > 0$ [1].
Subset Sum with parameter $w_{\max}$ has also been studied recently. The problem can be solved in time $\tilde O(n + w_{\max}^{1.5})$, which was first known when the input is a set (by combining [11] and [7]), and was recently shown when the input is allowed to be a multi-set [28, 16]. There remains a gap to the best known conditional lower bound, which rules out time $w_{\max}^{1-\delta} \cdot 2^{o(n)}$ for any constant $\delta > 0$ [1].

Organization
After a short preliminaries section, in Section 3 we present tools from additive combinatorics. The main novelty of this paper can be found in Section 4, where we develop our proximity bounds. Then in Section 5 we present our algorithmic ingredients. Note that Sections 4 and 5 are written for 0-1-Knapsack. We justify this by presenting a reduction from Bounded Knapsack to 0-1-Knapsack in Section 6. Finally, in Section 7 we combine these ingredients to prove our main result.

Preliminaries
Throughout the paper we use the notation $\mathbb{N} = \{1, 2, \ldots\}$ and $\mathbb{N}_0 = \{0, 1, 2, \ldots\}$. For $n \in \mathbb{N}$ we write $[n] = \{1, 2, \ldots, n\}$. We use $\tilde O$-notation to hide logarithmic factors, where $\tilde O(T) := \bigcup_{c \ge 0} O(T \log^c T)$. Formally, the problems studied in this paper are defined as follows.
Problem 5 (Bounded Knapsack). Given profits $p \in \mathbb{N}^n$, weights $w \in \mathbb{N}^n$, and multiplicities $u \in \mathbb{N}^n$, as well as a weight budget $W \in \mathbb{N}$, compute $\max\{p^T x : w^T x \le W,\ x \in \mathbb{Z}^n,\ 0 \le x \le u\}$. 0-1-Knapsack is the same as Bounded Knapsack but with all multiplicities equal to 1, i.e., $u_i = 1$ for all $i$.
Problem 6 (0-1-Knapsack). Given profits $p \in \mathbb{N}^n$, weights $w \in \mathbb{N}^n$, and a weight budget $W \in \mathbb{N}$, compute $\max\{p^T x : w^T x \le W,\ x \in \{0,1\}^n\}$.
Throughout the paper, we denote by $w_{\max}$ the maximum weight $w_{\max} = \max_i w_i$. For a given 0-1-Knapsack instance $(w, p, W)$ of size $n$ we usually assume that the items are sorted according to non-increasing profit-to-weight ratio $p_1/w_1 \ge \ldots \ge p_n/w_n$. Having fixed such an ordering, an important object in this paper is the maximal prefix solution $g \in \{0,1\}^n$ that selects a maximal feasible prefix of all items. More precisely, $g$ is defined as follows. Let $t \in [n+1]$ be maximal such that $w_1 + w_2 + \ldots + w_{t-1} \le W$. Then let $g_1 = \ldots = g_{t-1} = 1$ and $g_t = \ldots = g_n = 0$. Note that $g$ is a feasible solution, since it has weight $w^T g = w_1 + w_2 + \ldots + w_{t-1} \le W$. It is also maximal, in the sense that adding item $t$ would make it infeasible. In particular, the maximal prefix solution has weight $w^T g \in (W - w_{\max}, W]$ (unless $W \ge w_1 + \ldots + w_n$, in which case we have $t = n+1$). Clearly, the maximal prefix solution can be computed in time $\tilde O(n)$. For a given Bounded Knapsack instance $(w, p, u, W)$ of size $n$ we also usually assume non-increasing profit-to-weight ratios $p_1/w_1 \ge \ldots \ge p_n/w_n$. The maximal prefix solution $g \in \mathbb{N}_0^n$ can then be defined analogously to the 0-1 case, by letting $t \in [n+1]$ be maximal such that $\sum_{i=1}^{t-1} u_i \cdot w_i \le W$ and setting $g_1 := u_1, \ldots, g_{t-1} := u_{t-1}$ and $g_{t+1} = \ldots = g_n := 0$ and $g_t := \lfloor (W - \sum_{i=1}^{t-1} u_i \cdot w_i)/w_t \rfloor$ (if $t \le n$). Again, $g$ is feasible and maximal and can be easily computed in time $\tilde O(n)$.
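The computation of the maximal prefix solution can be sketched as follows (a Python sketch for the bounded case, assuming items are already sorted by non-increasing profit-to-weight ratio; the 0-1 case is the special case of all multiplicities 1):

```python
def maximal_prefix_solution(weights, mults, W):
    """g_i = u_i for a maximal prefix of items, a partial multiplicity
    for the breaking item t, and 0 afterwards."""
    g = [0] * len(weights)
    budget = W
    for i, (w, u) in enumerate(zip(weights, mults)):
        take = min(u, budget // w)
        g[i] = take
        budget -= take * w
        if take < u:  # item i is the breaking item t
            break
    return g
```

A single pass over the sorted items suffices, matching the $\tilde O(n)$ bound stated above (the sorting dominates).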

Tools from Additive Combinatorics
A long line of work used additive combinatorics to design algorithms for dense cases of the Subset Sum problem [12, 13, 22, 23, 24, 11]. The majority of this line of research works under the assumption that the input is a set; only recently has this work been generalized to multi-sets [11]. In this section, we distill a tool from [11] that we will later use in an exchange argument. In what follows, after introducing the necessary notation, we phrase this tool in Lemma 7, and then we show how to obtain Lemma 7 from [11].
Let $X$ be a finite multi-set of positive integers, or multi-set for short. For an integer $x$ we denote by $\mu(x; X)$ the multiplicity of $x$ in $X$, indicating that the number $x$ appears $\mu(x; X)$ times in $X$. The usual set notation generalizes naturally to multi-sets, e.g., we write $X \subseteq Y$ if $\mu(x; X) \le \mu(x; Y)$ holds for all $x$. Furthermore, we define the following concepts:
• Support $\operatorname{supp}(X)$: The set of all distinct integers contained in $X$, i.e., $\operatorname{supp}(X) = \{x \in \mathbb{N} \mid \mu(x; X) \ge 1\}$.
• Maximum multiplicity $\mu(X)$: The largest multiplicity of any integer in $X$, i.e., $\mu(X) = \max_x \mu(x; X)$.
• Sum $\Sigma(X)$: The sum of all integers in $X$, counted with multiplicity, i.e., $\Sigma(X) = \sum_x x \cdot \mu(x; X)$.
We remark that [16, Lemmas 11 and 12] are similar to Lemma 7 and are also derived from [11], but the specific formulations differ. The result is also similar in spirit to [28, Lemma 3.4].
The proof builds on the following definitions and theorems by Bringmann and Wellnitz [11].
Definition 8 (Definitions 3.1 and 3.2 in [11]). Let $X$ be a multi-set. Write $X(d) := X \cap d\mathbb{N}$ to denote the multi-set of all numbers in $X$ that are divisible by $d$. Further, write $\overline{X}(d) := X \setminus X(d)$ to denote the multi-set of all numbers in $X$ not divisible by $d$. An integer $d > 1$ for which $\overline{X}(d)$ is non-empty but small (in the precise quantitative sense of [11]) is called an almost divisor of $X$. If all integers in $X$ are divisible by $d$, we denote by $X/d$ the multi-set where we divide each element of $X$ by $d$, i.e., $\mu(x; X/d) = \mu(d \cdot x; X)$ for all $x$.
We are now ready to prove our main additive combinatorics tool, Lemma 7.
Proof of Lemma 7. Note that the assumption $|X| \ge 1500 \cdot (\log^3(2|X|)\,\mu(X)\,w_{\max})^{1/2}$ implies that $|X| \ge \sqrt{C_\delta(X)\,\mu(X)\,w_{\max}}$, since $\sqrt{1699200} < 1500$ and $\mu(X) \le |X|$. Hence, $X$ is $C_\delta(X)$-dense. By Theorem 9, there exists an integer $d \ge 1$ such that $X' := X(d)/d$ is $C_\delta(X)$-dense and has no $C_\alpha(X)$-almost divisor. Since $|X'| \le |X|$ and $\mu(X') \le \mu(X)$, it follows that $X'$ is also $C_\delta(X')$-dense and has no $C_\alpha(X')$-almost divisor. It then follows from Theorem 10 that every integer in the range $[\lambda(X'), \Sigma(X') - \lambda(X')]$ is a subset sum of $X'$. Hence, every multiple of $d$ in the range $[d \cdot \lambda(X'), d(\Sigma(X') - \lambda(X'))]$ is a subset sum of $X$. In what follows we need bounds on $d \cdot \lambda(X')$ and $d(\Sigma(X') - \lambda(X'))$, which we prove in the next claim.
Finally, Theorem 9 yields $d \le 4\mu(X)\Sigma(X)/|X|^2$ and thus $d(\Sigma(X') - 2\lambda(X')) \ge d\,w_{\max}$. Now consider the multi-set $Y$. Suppose that $|Y| \ge d$ and pick any elements $y_1, \ldots, y_d \in Y$. Consider the prefix sums $s_i := \sum_{j=1}^i y_j$ for $i = 0, 1, \ldots, d$. By the pigeonhole principle, two prefix sums are equal modulo $d$; say we have $s_i \equiv s_j \pmod{d}$ for $i < j$. Then $s_j - s_i = y_{i+1} + y_{i+2} + \ldots + y_j$ is divisible by $d$. Moreover, we have $y_{i+1} + y_{i+2} + \ldots + y_j \le (j - i)\,w_{\max} \le d\,w_{\max}$. We have thus shown that there exists $Y' \subseteq Y$ whose sum $\Sigma(Y')$ is divisible by $d$ and bounded from above by $d\,w_{\max}$, assuming that $|Y| \ge d$.
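The pigeonhole step in this argument can be illustrated as follows (a Python sketch; the function name is ours):

```python
def subset_divisible_by(ys, d):
    """Pigeonhole on prefix sums modulo d: among the first d elements,
    two prefix sums collide modulo d, so the consecutive block between
    them has sum divisible by d (and sum at most d * max(ys))."""
    seen = {0: 0}  # prefix-sum residue -> prefix length
    s = 0
    for i, y in enumerate(ys[:d], start=1):
        s = (s + y) % d
        if s in seen:
            return ys[seen[s]:i]  # block with sum divisible by d
        seen[s] = i
    return None  # happens only if len(ys) < d
```

This mirrors the proof exactly: $d$ elements give $d + 1$ prefix sums, so two residues must collide.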
Repeatedly find such a multi-set $Y' \subseteq Y$, remove its elements from $Y$, and continue on the remaining multi-set $Y$. This works until $Y$ has fewer than $d$ elements remaining. This process yields multi-sets $Y_1, \ldots, Y_k \subseteq Y$ whose sums $\Sigma(Y_1), \ldots, \Sigma(Y_k)$ are divisible by $d$ and bounded from above by $d\,w_{\max}$. Moreover, since fewer than $d$ elements remain, their total sum is $\Sigma(Y_1) + \ldots + \Sigma(Y_k) \ge \Sigma(Y) - d\,w_{\max} \ge d \cdot \lambda(X')$ by Claim 11. Since by Claim 11 the interval $[d \cdot \lambda(X'), d(\Sigma(X') - \lambda(X'))]$ has length at least $d\,w_{\max}$, it follows that one of the prefix sums $\Sigma(Y_1) + \ldots + \Sigma(Y_\ell)$ for some $\ell$ lies in the interval $[d\lambda(X'), d(\Sigma(X') - \lambda(X'))]$ and is divisible by $d$. Since $\Sigma(Y_1) + \ldots + \Sigma(Y_\ell)$ is a subset sum of $Y$, and since every integer in $[d \cdot \lambda(X'), d(\Sigma(X') - \lambda(X'))]$ that is divisible by $d$ is a subset sum of $X$, it follows that there exists a number that is a subset sum of both $X$ and $Y$.

Proximity Bound
The main result of this section is the following theorem, which provides a partitioning of the items of a 0-1-Knapsack instance into few parts, along with a certain proximity bound for each part. In more detail, the theorem states that we can partition the items into a polylogarithmic number of sets $I_1, \ldots, I_k$ with corresponding proximity bounds $\Delta_1, \ldots, \Delta_k$ such that (1) inside each part $I_j$ the optimal solution deviates from the maximal prefix solution by a total weight of at most $\Delta_j$, and (2) the $\Delta_j$'s are not too large; in particular, the product of $\Delta_j$ and the number of distinct weights in $I_j$ is bounded by $\tilde O(w_{\max}^2)$. In what follows, for a 0-1-Knapsack instance $(w, p, W)$ and an index set $I \subseteq [n]$ we denote by $w(I)$ the multi-set $\{w_i : i \in I\}$, and by $\operatorname{supp}(w(I))$ the support of this multi-set, i.e., the set of distinct weights appearing among the items in $I$.
Theorem 12 (Item Partitioning with Proximity Bounds). For a 0-1-Knapsack instance $(w, p, W)$ of size $n$ with distinct profit-to-weight ratios $p_1/w_1 > \ldots > p_n/w_n$, let $g$ be the maximal prefix solution, and let $x^*$ be an arbitrary optimal solution. Given $w, p, W$, in time $\tilde O(n)$ we can compute a partitioning $I_1 \cup \ldots \cup I_k = [n]$ and proximity bounds $\Delta_1, \ldots, \Delta_k$ with $k = O(\log^2 n)$ such that for each $1 \le j \le k$ we have
$$\sum_{i \in I_j} w_i\,|g_i - x^*_i| \le \Delta_j \quad\text{and}\quad |\operatorname{supp}(w(I_j))| \cdot \Delta_j \le 300000000 \log^3(2n) \cdot w_{\max}^2.$$
We remark that this theorem is the main novelty of our algorithm. The classic proximity bound (CP) can be used to show that the unique partitioning consisting of $k = 1$ part satisfies $|\operatorname{supp}(w(I_j))| \cdot \Delta_j \le \tilde O(w_{\max}^3)$. This bound was recently improved to $\tilde O(w_{\max}^{2.5})$ [28]. Chen et al. were the first to study partitionings into $k > 1$ parts. They constructed a partitioning into at most 6 parts with $|\operatorname{supp}(w(I_j))| \cdot \Delta_j \le \tilde O(w_{\max}^{2.4})$ [16, Lemmas 13, 16 and 17]. We improve this construction to $\tilde O(w_{\max}^2)$. The rest of this section is devoted to the proof of Theorem 12. We start by presenting algorithm SingleStep$(U)$, which is given a set $U \subseteq [n]$ and constructs a set $I \subseteq U$ and a distance bound $\Delta$ with the same guarantees as in Theorem 12 (see Lemma 16 below). Later we will repeatedly apply algorithm SingleStep to obtain the partitioning promised by Theorem 12. Algorithm SingleStep$(U)$ works as follows:

Algorithm 1 SingleStep$(U)$, with input $U \subseteq [n]$
1: We consider the class of weights of largest multiplicity, defined as follows. For every $m \in [2n]$ that is a power of 2, we define $\mathcal{W}_m := \{\hat w \in [w_{\max}] : m/2 < |\{i \in U : w_i = \hat w\}| \le m\}$. Let $m = m_U$ be the largest power of 2 such that $\mathcal{W}_m$ is non-empty, and let $\mathcal{W} := \mathcal{W}_m$.
2: We let $J = J_U = \{i \in U : w_i \in \mathcal{W}\}$ be the items with weights in $\mathcal{W}$. We split $J$ into the items picked by $g$ and the items unpicked by $g$: $J^- := \{i \in J : g_i = 1\}$, $J^+ := \{i \in J : g_i = 0\}$.

3: We define $I^- \subseteq J^-$ to consist of the smallest $\lceil |J^-|/2 \rceil$ indices in $J^-$ (i.e., the items with largest profit-to-weight ratio), and $I^+ \subseteq J^+$ to consist of the largest $\lceil |J^+|/2 \rceil$ indices in $J^+$ (i.e., the items with smallest profit-to-weight ratio). Finally, we define $I = I_U$ to be the larger of the sets $I^-$ and $I^+$.
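The steps of SingleStep can be sketched as follows (a simplified Python sketch with names of our choosing; the returned bound uses a symbolic constant C, since the analysis fixes a concrete large constant that we do not reproduce here):

```python
import math
from collections import Counter

def single_step(U, weights, g_picked, w_max, n, C=1.0):
    """U: item indices sorted by non-increasing profit-to-weight ratio;
    g_picked: indices selected by the maximal prefix solution g."""
    # Step 1: the weight class of largest multiplicity, rounded to powers of 2.
    counts = Counter(weights[i] for i in U)
    cmax = max(counts.values())
    m = 1 << (cmax - 1).bit_length()  # smallest power of 2 with cmax <= m
    W_m = {w for w, c in counts.items() if m // 2 < c <= m}
    # Step 2: items with weights in W_m, split by whether g picks them.
    J = [i for i in U if weights[i] in W_m]
    J_minus = [i for i in J if i in g_picked]      # picked by g
    J_plus = [i for i in J if i not in g_picked]   # not picked by g
    # Step 3: best-ratio half of J_minus, worst-ratio half of J_plus.
    I_minus = J_minus[: (len(J_minus) + 1) // 2]
    I_plus = J_plus[len(J_plus) // 2 :]
    I = I_minus if len(I_minus) >= len(I_plus) else I_plus
    delta = C * math.log(2 * n) * m * w_max ** 2 / max(len(I), 1)
    return I, delta
```

Note that taking the largest power of 2 with non-empty $\mathcal{W}_m$ is the same as rounding the maximum multiplicity up to a power of 2, which is what the sketch does.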
In what follows we analyze this algorithm. We will use the notation introduced in the above algorithm throughout the rest of this section. We start with a simple observation on the size of $I$.
where we first used that $I$ is the larger of $I^-$ and $I^+$, then that $I^-$ contains at least half of $J^-$ and $I^+$ contains at least half of $J^+$, and finally that $J^- \cup J^+ = J$ is a partitioning.
We now observe the following upper bound on $\Delta$ times the number of distinct weights among the items in $I$.
Proof. Note that $|J| \ge \frac{m}{2} \cdot |\mathcal{W}|$ holds since all weights from $\mathcal{W}$ appear at least $m/2$ times in $U$, and thus also in $J$.

The following auxiliary lemma prepares the proof of the main proximity guarantee.
Lemma 15. There are no non-empty sets $A \subseteq \{i \in [n] : x^*_i = 0\}$ and $B \subseteq \{i \in [n] : x^*_i = 1\}$ with $\max(A) < \min(B)$ that have the same sum of weights $\Sigma(w(A)) = \Sigma(w(B))$.
Proof. Towards a proof by contradiction, assume that sets $A, B$ as in the lemma statement exist. Let $x'$ be the solution obtained from $x^*$ by removing the items $B$ and adding the items $A$, i.e., $\{i \in [n] : x'_i = 1\} = (\{i \in [n] : x^*_i = 1\} \setminus B) \cup A$. Since $\Sigma(w(A)) = \Sigma(w(B))$, we maintain the total weight $w^T x^* = w^T x'$, so the solution $x'$ is feasible. Since $\max(A) < \min(B)$, the profit-to-weight ratio $p_i/w_i$ of any item $i \in A$ is strictly larger than the profit-to-weight ratio $p_j/w_j$ of any item $j \in B$, which yields $p^T x' > p^T x^*$. This contradicts $x^*$ being an optimal solution.
We can now prove the main guarantee of algorithm SingleStep$(U)$: On the items in $I$, the optimal solution $x^*$ cannot deviate too much from the maximal prefix solution $g$. The proof of this proximity result is based on the additive combinatorics tool from the previous section.
Lemma 16. We have $\sum_{i \in I} w_i\,|g_i - x^*_i| \le \Delta$.

Proof. We split the proof into several cases.
Let $I_Y := \{i \in I^+ : x^*_i = 1\}$ and consider its multi-set of weights $Y = w(I_Y)$. Suppose that $\Sigma(Y)$ satisfies the assumption of Lemma 7. Then we can apply Lemma 7 to $X$ and $Y$, which yields non-empty $X' \subseteq X$ and $Y' \subseteq Y$ with equal sum $\Sigma(X') = \Sigma(Y')$. The corresponding subsets $I_{X'} \subseteq I_X$ and $I_{Y'} \subseteq I_Y$ have equal total weight $\Sigma(w(I_{X'})) = \Sigma(w(I_{Y'}))$. Since $\max(I_X) < \min(I_Y)$, this contradicts Lemma 15. Therefore, $\Sigma(Y)$ cannot satisfy the assumption of Lemma 7, which yields $\Sigma(Y) \le \Delta$.

Note that since $I = I^+$, all items $i \in I$ have $g_i = 0$ and thus $\Sigma(Y) = \Sigma(w(I_Y)) = \sum_{i \in I} w_i\,|g_i - x^*_i|$. We thus obtain $\sum_{i \in I} w_i\,|g_i - x^*_i| \le \Delta$, as desired.
Case 2.1.2: Let $I_X = \{i \in J^+ \setminus I^+ : x^*_i = 1\}$ and consider its multi-set of weights $X = w(I_X)$. We claim that $X$ satisfies the assumption of Lemma 7, i.e., $|X| \ge 1500(\log^3(2|X|) \cdot \mu(X)\,w_{\max})^{1/2}$. We prove this claim in the remainder of this paragraph. Note that $|X| = |I_X| \ge |J^+ \setminus I^+|/2$, since all items $i \in J^+ \setminus I^+$ have $g_i = 0$ and thus, by the assumption of Case 2.1.2, at least half of these items have $x^*_i = 1$. The rest of the argument is exactly as in the previous case, and we arrive at $|X| \ge 1500(\log^3(2|X|) \cdot \mu(X)\,w_{\max})^{1/2}$, so $X$ satisfies the assumption of Lemma 7.
Let $I_Y := \{i \in [n] : g_i = 1, x^*_i = 0\}$ and consider its multi-set of weights $Y = w(I_Y)$. Suppose that $\Sigma(Y)$ satisfies the assumption of Lemma 7. Then we can apply Lemma 7 to $X$ and $Y$, which yields $X' \subseteq X$ and $Y' \subseteq Y$ with equal sum $\Sigma(X') = \Sigma(Y')$. The corresponding subsets $I_{X'} \subseteq I_X$ and $I_{Y'} \subseteq I_Y$ have equal total weight $\Sigma(w(I_{X'})) = \Sigma(w(I_{Y'}))$. Since $\max(I_Y) < \min(I_X)$, this contradicts Lemma 15.
Therefore, $\Sigma(Y)$ cannot satisfy the assumption of Lemma 7. By the same calculations as in the previous case we arrive at $\Sigma(Y) \le 1360000 \log(2n) \cdot m\, w_{\max}^2 / |I| \le \Delta$.
It remains to relate $\Sigma(Y)$ to $\sum_{i \in I} w_i |g_i - x^*_i|$. Note that $x^*$ is maximal in the sense that no item can be added to it, and $g$ is maximal in the sense that it selects a maximal prefix of the items. It follows that either $w^T x^* = w^T g = \sum_{i \in [n]} w_i$ or $w^T x^*, w^T g \in (W - w_{\max}, W]$. Both yield $|w^T x^* - w^T g| < w_{\max}$. Moreover, we have $w^T x^* - w^T g = \sum_{i \in [n], g_i = 0} w_i x^*_i - \sum_{i \in [n], g_i = 1} w_i (1 - x^*_i)$ and thus $\sum_{i \in I} w_i |g_i - x^*_i| \le \sum_{i \in [n], g_i = 0} w_i x^*_i < \sum_{i \in [n], g_i = 1} w_i (1 - x^*_i) + w_{\max}$. Finally, we note that $\Sigma(Y) = \Sigma(w(I_Y)) = \sum_{i \in [n], g_i = 1} w_i (1 - x^*_i)$, which implies the desired $\sum_{i \in I} w_i |g_i - x^*_i| \le \Sigma(Y) + w_{\max} \le \Delta$ (where we used $m\, w_{\max} \ge |I|$).
Case 2.2: $I = I^-$. This is symmetric to Case 2.1. To make sure that no subtle details differ, we provide the full proof in Appendix A.

Lemma 17. Algorithm SingleStep$(U)$ runs in time $\tilde O(n)$.
Proof. This is straightforward. With one pass over all items in $U$ we can determine the set of distinct weights of all items. With another pass we can count, for each distinct weight, the number of items having that weight. This yields the set $\mathcal{W}$. With another pass over all items in $U$ we can then determine $J^-, J^+$, and from these sets we can easily read off $I^-, I^+$, which yields $I$ and $\Delta$.
We are now ready to prove our main proximity result, Theorem 12.
Proof of Theorem 12. Starting with $U_1 = [n]$, we repeatedly run SingleStep$(U_j)$ to obtain $(I_j, \Delta_j)$ and remove $I_j$ from $U_j$ to obtain $U_{j+1}$, until $U_{j+1} = \emptyset$. For details, see the following pseudocode.
Algorithm 2 Construction of the partitioning guaranteed by Theorem 12
1: $U_1 := [n]$
2: for $j = 1, 2, \dots$ do:
3:   $(I_j, \Delta_j) :=$ SingleStep$(U_j)$
4:   $U_{j+1} := U_j \setminus I_j$
5:   if $U_{j+1} = \emptyset$ then return $(I_1, \Delta_1), \dots, (I_j, \Delta_j)$

To see that this procedure terminates, observe that for non-empty $U$ algorithm SingleStep$(U)$ computes a non-empty set $I$. Thus, the procedure terminates after at most $n$ iterations. We denote the number of iterations until termination by $k$.
It is easy to see that $I_1, \dots, I_k$ form a partitioning of $[n]$, since (1) in each iteration $j$ the items $I_j$ are selected from the remaining items $U_j = [n] \setminus (I_1 \cup \dots \cup I_{j-1})$, and (2) in the end there are no more remaining items, so we have $I_1 \cup \dots \cup I_k = [n]$. The proximity bounds for each $I_j$ follow directly from Lemmas 14 and 16. By Lemma 17 the total running time of this procedure is $\tilde O(kn)$. It remains to show that $k = O(\log^2 n)$. To this end, for any non-empty set $U \subseteq [n]$ we define the potential $\phi(U) := \log_2(m_U) \cdot 2\lceil \log_{4/3}(n) \rceil + \lceil \log_{4/3}(|J_U|) \rceil$ (where $m_U, J_U$ are defined in lines 1 and 2 of algorithm SingleStep$(U)$). Observe that $\phi(U) \in \mathbb{N}_0$, since $m_U$ is a power of 2. Also observe that $\phi(U) \le O(\log^2 n)$, since $m_U \le 2n$ and $|J_U| \le |U| \le n$. We claim that $\phi(U_{j+1}) < \phi(U_j)$ holds for any $1 \le j < k$, see Claim 18. This yields $0 \le \phi(U_k) < \phi(U_{k-1}) < \dots < \phi(U_1) \le O(\log^2 n)$, and thus $k \le O(\log^2 n)$. It remains to prove the claim.
Proof. We use the notation $m_j := m_{U_j}$ and $J_j := J_{U_j}$ for any $j$. Since $U_{j+1} \subseteq U_j$ we have $m_{j+1} \le m_j$. We consider two cases: in one case $m_j$ decreases by a factor 2, and in the other case $m_j$ stays the same and $|J_j|$ decreases by a factor $3/4$. Thus, in both cases the potential decreases by at least 1. More details follow.
In both cases we have $\phi(U_{j+1}) < \phi(U_j)$, which finishes the proof. Having proved Claim 18, we have finished the proof of Theorem 12.

Algorithmic Ingredients
In this section, after gathering some algorithmic ingredients in Sections 5.1 and 5.2, in Section 5.3 we present an algorithm for 0-1-Knapsack that is given a partitioning and proximity bounds as in Theorem 12. Combining the algorithm in Section 5.3 with Theorem 12 then proves the main result for 0-1-Knapsack (as we will discuss in Section 7).

MaxPlusConv with a Concave Sequence
A fundamental subroutine of all recent Knapsack algorithms is the $(\max,+)$-convolution operation, or MaxPlusConv for short, which is defined as follows.
Problem 19 (MaxPlusConv). Given two sequences $x = x[0..n]$ and $y = y[0..m]$ with entries in $\mathbb{Z} \cup \{-\infty\}$, compute the sequence $z = z[0..n+m]$ with $z[k] = \max\{x[i] + y[k-i] : 0 \le i \le k\}$. Here out-of-bounds entries of $x$ and $y$ are interpreted as $-\infty$. We denote this operation by $z = x \star y$.
MaxPlusConv is equivalent to the analogous problem MinPlusConv (with $\min$ replacing $\max$ and $\infty$ replacing $-\infty$ in the above definition). MaxPlusConv and MinPlusConv are central problems in the area of fine-grained complexity theory [40]. On two sequences of length $n$ they can be solved naively in time $O(n^2)$, and a popular conjecture postulates that they cannot be solved in time $O(n^{2-\delta})$ for any constant $\delta > 0$ [18,33].
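To make the definition concrete, here is a minimal Python sketch of the naive quadratic-time algorithm (the function name and the list-based encoding of $-\infty$ are our own illustration, not from the paper):

```python
def maxplus_conv(x, y):
    """Naive (max,+)-convolution: z[k] = max_i x[i] + y[k-i].

    Out-of-bounds entries are treated as -infinity, as in Problem 19.
    Runs in time O(len(x) * len(y)).
    """
    NEG = float('-inf')
    z = [NEG] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        if xi == NEG:
            continue  # -inf entries never contribute to a maximum
        for j, yj in enumerate(y):
            if yj != NEG and xi + yj > z[i + j]:
                z[i + j] = xi + yj
    return z
```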
While in general MaxPlusConv is conjectured to require essentially quadratic time, faster algorithms are known for certain structured instances. Here we make use of the special case where one of the sequences is concave, as has also been used, e.g., in [36,14,3,31,16].

Definition 20. We say that a sequence $y \in (\mathbb{Z} \cup \{-\infty\})^m$ is concave if there exist an offset $h \in \mathbb{N}$ and a length $\ell \in \mathbb{N}_0$ such that (1) the entries $y[0], y[h], \dots, y[\ell \cdot h]$ are finite, (2) all other entries of $y$ are $-\infty$, and (3) we have $y[i \cdot h] - y[i \cdot h - h] \ge y[i \cdot h + h] - y[i \cdot h]$ for all $1 \le i < \ell$.
Lemma 21 (MaxPlusConv with Concave Sequence). Given an arbitrary sequence $x \in (\mathbb{Z} \cup \{-\infty\})^n$ and a concave sequence $y \in (\mathbb{Z} \cup \{-\infty\})^m$, we can compute their MaxPlusConv $x \star y$ in time $O(n+m)$.
Proof. This is a standard application of the SMAWK algorithm [2]. For completeness we present the argument, following the presentation in [36]. For each remainder $r \in \{0, 1, \dots, h-1\}$ we separately compute the entries of $z = x \star y$ at indices that have remainder $r$ modulo $h$ (i.e., for a fixed $r$ our goal is to compute the entries $z[r], z[r+h], z[r+2h], \dots$ of $z$). We define the matrix $M \in (\mathbb{Z} \cup \{-\infty\})^{\lceil (n+m)/h \rceil \times \lceil n/h \rceil}$ with $M[i,j] = x[j \cdot h + r] + y[(i-j)h]$, where out-of-bounds entries of $x$ and $y$ are interpreted as $-\infty$. We do not explicitly construct the matrix, but we can compute every entry of $M$ in constant time. Observe that the matrix $M$ is inverse-Monge, i.e.,
$M[i,j] + M[i+1,j+1] = x[j \cdot h + r] + y[(i-j)h] + x[(j+1) \cdot h + r] + y[(i-j)h] \ge x[j \cdot h + r] + y[(i-j)h + h] + x[(j+1) \cdot h + r] + y[(i-j)h - h] = M[i+1,j] + M[i,j+1]$.
Therefore, the SMAWK algorithm [2] is applicable and computes all row maxima of $M$ in total time $O((n+m)/h)$. Observe that the maximum of the $i$-th row of $M$ is equal to $z[i \cdot h + r]$. Running this algorithm for all $r \in \{0, 1, \dots, h-1\}$ yields all entries of $z$ and takes total time $O(n+m)$.
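The following Python sketch illustrates the structure of this lemma in the special case $h = 1$ with all entries finite. Instead of SMAWK it uses the simpler divide-and-conquer for row maxima of an inverse-Monge matrix, which costs $O((n+m)\log(n+m))$ rather than $O(n+m)$; this simplification and the function name are ours, not the paper's.

```python
def concave_maxplus_conv(x, y):
    """(max,+)-convolution of arbitrary x with concave y (offset h = 1).

    Concavity of y means y[t] - y[t-1] is non-increasing, so the matrix
    M[k][j] = x[j] + y[k-j] is inverse-Monge and its leftmost row-maximum
    column is non-decreasing in k.  We exploit this monotonicity by divide
    and conquer over the output indices k.
    """
    n, m = len(x), len(y)
    NEG = float('-inf')
    z = [NEG] * (n + m - 1)

    def solve(lo, hi, jlo, jhi):
        if lo > hi:
            return
        mid = (lo + hi) // 2
        best, bestj = NEG, jlo
        # valid columns j must satisfy 0 <= j < n and 0 <= mid - j < m
        for j in range(max(jlo, mid - m + 1), min(jhi, mid, n - 1) + 1):
            v = x[j] + y[mid - j]
            if v > best:
                best, bestj = v, j
        z[mid] = best
        solve(lo, mid - 1, jlo, bestj)   # argmax of earlier rows is <= bestj
        solve(mid + 1, hi, bestj, jhi)   # argmax of later rows is >= bestj

    solve(0, n + m - 2, 0, n - 1)
    return z
```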

Greedy Algorithm for Equal Weights
0-1-Knapsack is particularly easy to solve if all items have the same weight, because then the greedy algorithm yields an optimal solution. In this situation we can even solve the instance for every weight budget $0 \le W' \le W$ in total time $O(W)$, as shown by the following well-known lemma.
Lemma 22 (0-1-Knapsack with Equal Weights). There is an algorithm EqualWeights$(w, p, W)$ that, given a 0-1-Knapsack instance $(w, p, W)$ of size $n$ with $w_1 = \dots = w_n$ and $p_1 \ge \dots \ge p_n$, computes in time $O(W)$ the sequence $y = y[0..W]$ whose entry $y[W']$ is the maximum profit achievable with weight budget $W'$.

Proof. The following simple greedy strategy is optimal in the case of equal weights, and it clearly runs in time $O(W)$.
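A minimal Python sketch of such a greedy routine (our illustration of the idea, not the paper's pseudocode): with budget $B$ we can afford $\lfloor B/w \rfloor$ items, and taking the most profitable ones is optimal.

```python
def equal_weights(w, profits, W):
    """0-1-Knapsack where every item has the same weight w.

    profits is assumed sorted non-increasingly.  Returns y[0..W], where
    y[B] is the best total profit achievable with weight budget B: we can
    afford floor(B / w) items, and greedily taking the most profitable
    ones (a prefix) is optimal.  Runs in time O(n + W).
    """
    prefix = [0]
    for p in profits:
        prefix.append(prefix[-1] + p)  # prefix[k] = sum of k largest profits
    y = []
    for budget in range(W + 1):
        k = min(budget // w, len(profits))
        y.append(prefix[k])
    return y
```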

Algorithm for 0-1-Knapsack with given Proximity Bounds
We are now ready to present an algorithm for 0-1-Knapsack that is given a partitioning of the items and, for each part, a proximity bound, as in Theorem 12. For more details see the following lemma. This result for $k = 1$ was proven in [36, Lemma 2.2]. Our lemma is very similar to [16, Lemma 6], except that they partition the set of weights instead of the set of items.
Lemma 23. Suppose we are given a 0-1-Knapsack instance $(w, p, W)$ of size $n$ with distinct profit-to-weight ratios $p_1/w_1 > \dots > p_n/w_n$, a partitioning $I_1 \cup \dots \cup I_k = [n]$, and integers $\Delta_1 \le \dots \le \Delta_k$. Denote the maximal prefix solution by $g$. Assume that there exists an optimal solution $x^*$ such that $\sum_{i \in I_j} w_i \cdot |g_i - x^*_i| \le \Delta_j$ holds for all $j$. Then we can solve the 0-1-Knapsack instance in time $O(n + k \cdot \sum_{j=1}^{k} |\mathrm{supp}(w(I_j))| \cdot \Delta_j)$.
Claim 25. At the beginning of line 14, the optimal profit $p^T x^*$ satisfies
$p^T x^* = p^T g + \max\{z^+[a] + z^-[b] : 0 \le a, b \le \sum_{j \le k} \Delta_j,\ a - b \le W - w^T g\}$. (2)

Proof. For one direction, set $a := w^T x^+$ and $b := w^T x^-$ and use Claim 24 to obtain $p^T g + z^+[a] + z^-[b] \ge p^T g + p^T x^+ - p^T x^- = p^T x^*$, where in the last equality we used that $p^T x^+$ is the profit of all items in $I^+$ selected by $x^*$, and $p^T x^-$ is the profit of items removed from $g$ to obtain $x^*$, so $p^T g - p^T x^-$ is the profit of all items in $I^-$ selected by $x^*$. Claim 24 promises that $a, b$ are valid indices of $z^+, z^-$, and thus $0 \le a, b \le \sum_{j \le k} \Delta_j$. Moreover, we have $W \ge w^T x^* = w^T x^+ + w^T g - w^T x^- = a + w^T g - b$, so the constraint $a - b \le W - w^T g$ is satisfied.
For the other direction, for any $a, b$ we use that $z^+[a]$ is the profit of some set of items in $I^+$ with total weight $a$ (or $-\infty$) and $z^-[b]$ is the negated profit of some set of items in $I^-$ with total weight $b$ (or $-\infty$), so $p^T g + z^-[b]$ is the profit of some set of items in $I^-$ with total weight $w^T g - b$ (or $-\infty$). Thus, $z^+[a] + p^T g + z^-[b]$ is the profit of some set of items in $I^+ \cup I^- = [n]$ with total weight $a + w^T g - b$ (or $-\infty$). Hence, for all $a, b$ with $a - b \le W - w^T g$ the value $z^+[a] + p^T g + z^-[b]$ is the profit of some feasible solution (or $-\infty$), so we have $z^+[a] + p^T g + z^-[b] \le p^T x^*$.
It remains to argue that lines 14-16 of the algorithm compute the right hand side of equation (2). To this end, note that the sequence $s^+$ stores the maximum of every prefix of $z^+$, i.e., $s^+[a] := \max\{z^+[a'] : 0 \le a' \le a\}$, for every $0 \le a \le \sum_{j \le k} \Delta_j$. Note that in the right hand side of equation (2) the maximum allowed value for $a$ is $\min\{b + W - w^T g, \sum_{j \le k} \Delta_j\}$. We can replace the maximization over $z^+[a]$ by $s^+$ evaluated at the maximum allowed value for $a$. This yields
$p^T x^* = p^T g + \max\{s^+[\min\{b + W - w^T g, \sum_{j \le k} \Delta_j\}] + z^-[b] : 0 \le b \le \sum_{j \le k} \Delta_j\}$.
This shows that in line 16 the algorithm computes the optimal profit $p^T x^*$, proving correctness.
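The constrained maximization in equation (2) can be evaluated in linear time with the prefix-maxima trick just described. A small Python sketch (the function name is ours; $-\infty$ marks unattainable weights):

```python
def combine(z_plus, z_minus, cap):
    """max over a, b of z_plus[a] + z_minus[b] subject to a - b <= cap.

    s_plus[a] stores the maximum of the prefix z_plus[0..a], so for each b
    the best admissible a is handled by a single lookup at index
    min(b + cap, len(z_plus) - 1).  Runs in linear time.
    """
    NEG = float('-inf')
    s_plus, best = [], NEG
    for v in z_plus:
        best = max(best, v)
        s_plus.append(best)
    ans = NEG
    for b, vb in enumerate(z_minus):
        a_max = min(b + cap, len(z_plus) - 1)
        if a_max >= 0 and vb != NEG:
            ans = max(ans, s_plus[a_max] + vb)
    return ans
```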
Running time. Lines 1-2 run in time $O(n)$. Lines 4-6 can be implemented by a scan over the set $I_j$, and thus run in total time $O(|I_1| + \dots + |I_k|) = O(n)$. Lines 14-16 run in time $O(\sum_{j \le k} \Delta_j)$ (since after precomputing $w^T g$ and $\sum_{j \le k} \Delta_j$ each evaluation in line 16 takes constant time). Note that in phase $j$ the length of $z^+, z^-$ is equal to $\sum_{j' \le j} \Delta_{j'} \le j \cdot \Delta_j$. Lines 8-13 thus all run in time $O(j \cdot \Delta_j)$. The loop in line 7 has $\ell_j = |\mathrm{supp}(w(I_j))|$ iterations. Hence, the total running time is $O(n + \sum_{j=1}^{k} \ell_j \cdot j \cdot \Delta_j) = O(n + k \cdot \sum_{j=1}^{k} |\mathrm{supp}(w(I_j))| \cdot \Delta_j)$. This finishes the proof of Lemma 23.
6 Reduction to 0-1-Knapsack

In this section we use the classic proximity bound [20] to reduce Bounded Knapsack to 0-1-Knapsack on $O(w_{\max}^2)$ items. The techniques used in this reduction are similar to prior algorithms for Bounded Knapsack, e.g., [20,36], but the formulation as a reduction is new.
Lemma 26 (Reduction). If 0-1-Knapsack on $O(w_{\max}^2)$ items with distinct profit-to-weight ratios can be solved in time $\tilde O(w_{\max}^2)$, then Bounded Knapsack can be solved in time $\tilde O(n + w_{\max}^2)$.
Proof. We start with a proof overview. We first present a reduction from 0-1-Knapsack to 0-1-Knapsack with $O(w_{\max}^2)$ items. To this end, consider a given 0-1-Knapsack instance $(w, p, W)$ and its maximal prefix solution $g$. For any weight $\hat w \in [w_{\max}]$, among the items of weight $\hat w$ that are picked by $g$, we greedily pick all but the $2w_{\max}$ least profitable items. Similarly, among the items of weight $\hat w$ that are not picked by $g$, we remove all but the $2w_{\max}$ most profitable items. The classic proximity bound [20] shows that there exists an optimal solution that is consistent with these choices. The remaining instance has at most $4w_{\max}$ items of each weight in $[w_{\max}]$, so we have a reduction to $O(w_{\max}^2)$ items.
A Bounded Knapsack instance can be converted to a 0-1-Knapsack instance by replacing each item $i$ with multiplicity $u_i$ by $u_i$ copies of item $i$. Therefore, the same reduction as above also works starting from Bounded Knapsack. We show that this reduction can be implemented efficiently, without explicitly blowing up the original Bounded Knapsack instance to a 0-1-Knapsack instance. The result of the reduction then is a Bounded Knapsack instance with total multiplicity of all items $O(w_{\max}^2)$. Now we can afford to explicitly list all copies of items to arrive at 0-1-Knapsack. Finally, we scale up all profits by a large factor and then add small noise terms. This maintains the optimal solution and makes all profit-to-weight ratios distinct, showing the lemma.
In what follows we give the details of this reduction.
Reduction from 0-1-Knapsack. We first give a reduction from 0-1-Knapsack to 0-1-Knapsack with $O(w_{\max}^2)$ items. Consider a 0-1-Knapsack instance $(w, p, W)$ of size $n$, and sort it by non-increasing profit-to-weight ratio $p_1/w_1 \ge \dots \ge p_n/w_n$. We iterate over all weights $\hat w \in [w_{\max}]$, so in what follows fix the weight $\hat w$. Denote by $i_1 < \dots < i_\ell$ all indices of items of weight $\hat w$. Note that $p_{i_1} \ge \dots \ge p_{i_\ell}$. The maximal prefix solution $g$ picks a prefix of these items, i.e., for some index $t$ we have $g_{i_j} = 1$ for all $1 \le j \le t$ and $g_{i_j} = 0$ for all $t < j \le \ell$. Among the items of weight $\hat w$ not picked by $g$ we want to remove all but the $2w_{\max}$ most profitable items, i.e., we want to remove the items $i_{t+2w_{\max}+1}, i_{t+2w_{\max}+2}, \dots, i_\ell$. Formally, we add all items $i_{t+2w_{\max}+1}, i_{t+2w_{\max}+2}, \dots, i_\ell$ to an initially empty set $I^{(0)}$ that stores all removed items. Similarly, among the items of weight $\hat w$ picked by $g$ we want to pick all but the $2w_{\max}$ least profitable items, i.e., we want to pick the items $i_1, \dots, i_{t-2w_{\max}}$. Formally, we add all items $i_1, \dots, i_{t-2w_{\max}}$ to an initially empty set $I^{(1)}$ that stores all picked items. After performing this process for each weight $\hat w \in [w_{\max}]$, we arrive at some set of items $I^{(0)}$ that are to be removed and some set of items $I^{(1)}$ that are to be picked. We remember the total profit of the picked items $P := \sum_{i \in I^{(1)}} p_i$ and reduce the weight bound accordingly to $\bar W := W - \sum_{i \in I^{(1)}} w_i$. Then we remove the coordinates in $I^{(0)} \cup I^{(1)}$ from $w$ and $p$. Call the resulting 0-1-Knapsack instance $(\bar w, \bar p, \bar W)$, and denote its size by $\bar n := n - |I^{(0)}| - |I^{(1)}|$. Note that at most $4w_{\max}$ items of any weight in $[w_{\max}]$ remain, so we have $\bar n \le 4w_{\max}^2$. We claim that the choices we have made are without loss of generality, i.e., there is an optimal solution $x^* \in \{0,1\}^n$ of the original 0-1-Knapsack instance $(w, p, W)$ that selects the items in $I^{(1)}$ and does not select the items in $I^{(0)}$. This implies
$\max\{\bar p^T \bar x : \bar w^T \bar x \le \bar W,\ \bar x \in \{0,1\}^{\bar n}\} + P = \max\{p^T x : w^T x \le W,\ x \in \{0,1\}^n\}$.
To prove the claim, consider all optimal solutions $x^*$ that minimize $\sum_{i=1}^{n} |g_i - x^*_i|$, and among all such solutions pick an optimal solution $x^*$ minimizing $\sum_{i \in I^{(0)}} x^*_i - \sum_{i \in I^{(1)}} x^*_i$. Suppose for the sake of contradiction that $x^*$ selects one of the items $i \in I^{(0)}$ that we have removed. We consider two cases.
Case 1: There is an item $j \in [n] \setminus (I^{(0)} \cup I^{(1)})$ of weight $w_j = w_i$ that is not selected by $x^*$. Then we can exchange item $i$ by item $j$ in $x^*$, i.e., we set $x^*_i = 0$ and $x^*_j = 1$. Since both items have the same weight, this exchange maintains the weight $w^T x^*$. Since we removed items of weight $w_i$ with the smallest profits, we have $p_j \ge p_i$. Thus, after this exchange $x^*$ is still an optimal solution. Since this change does not increase $\sum_{i=1}^{n} |g_i - x^*_i|$ and decreases $\sum_{i \in I^{(0)}} x^*_i - \sum_{i \in I^{(1)}} x^*_i$, we obtain a contradiction to the choice of $x^*$.
Case 2: All items $j \in [n] \setminus (I^{(0)} \cup I^{(1)})$ of weight $w_j = w_i$ are selected by $x^*$. Let $i_1 < \dots < i_\ell$ be all indices of items with weight $w_i$, and let $t$ be such that $g$ picks items $i_1, \dots, i_t$ but not $i_{t+1}, \dots, i_\ell$. Since $i_{t+1}, \dots, i_{t+2w_{\max}} \in [n] \setminus (I^{(0)} \cup I^{(1)})$, it follows that $x^*$ selects all items $i_{t+1}, \dots, i_{t+2w_{\max}}$, and thus $\sum_{i=1}^{n} |g_i - x^*_i| \ge 2w_{\max}$. However, this contradicts the following classic proximity bound.
Lemma 27 ([20, 36]). We have $\sum_{i \in [n]} |x^*_i - g_i| \le 2w_{\max} - 1$.

Proof. For completeness, we repeat the proof of this proximity bound as presented in [36]. Note that $x^*$ is maximal in the sense that no item can be added to it, and $g$ is maximal in the sense that it selects a maximal prefix of the items. It follows that either $w^T x^* = w^T g = \sum_{i \in [n]} w_i$ or $w^T x^*, w^T g \in (W - w_{\max}, W]$. Both yield
$-w_{\max} < w^T (x^* - g) < w_{\max}$. (3)
Now consider the following process. Start with the vector $x^* - g$. We will move its entries to 0, while maintaining the inequalities (3). That is, in each step of the process, if the current sum $w^T (x^* - g)$ is positive we reduce an arbitrary positive entry of $x^* - g$ by 1, otherwise we increase an arbitrary negative entry by 1. During this process, no two steps can have the same sum, as otherwise we could apply to $x^*$ the additions and removals performed between these two steps, obtaining another solution $x'$. Because at both steps we had the same sum, the weight does not change, i.e., $w^T x' = w^T x^*$, so $x'$ is feasible. Since every item selected by $g$ but not $x^*$ has no lower profit-to-weight ratio than any item selected by $x^*$ but not $g$, the additions performed to obtain $x'$ from $x^*$ have a higher average profit-to-weight ratio than the removals. Since both have the same total weight, the total profit does not decrease, i.e., $p^T x' \ge p^T x^*$, so $x'$ is also an optimal solution. Thus, the new solution $x'$ is closer to $g$ and still optimal, contradicting the choice of $x^*$.
Hence, the number of steps of this process is at most $2w_{\max} - 1$. Observing that the number of steps is equal to $\sum_{i \in [n]} |x^*_i - g_i|$ finishes the proof.
In both cases we arrived at a contradiction, so $x^*$ cannot select any item in $I^{(0)}$. The proof that $x^*$ selects all items in $I^{(1)}$ is symmetric. This finishes the reduction from 0-1-Knapsack to 0-1-Knapsack with $O(w_{\max}^2)$ items. The reduction itself runs in time $\tilde O(n)$, and writing down the resulting 0-1-Knapsack instance with $O(w_{\max}^2)$ items takes time $O(w_{\max}^2)$.
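The per-weight-class pruning above can be sketched in Python as follows. This is a hedged illustration of the bookkeeping only; the function name `reduce_01` is ours, and items are assumed pre-sorted by non-increasing profit-to-weight ratio.

```python
from collections import defaultdict

def reduce_01(items, W):
    """Prune a 0-1-Knapsack instance to at most 4*w_max items per weight.

    items: list of (weight, profit) pairs sorted by non-increasing p/w.
    Returns (kept_items, reduced_W, committed_profit): per weight class,
    all but the 2*w_max boundary items around the maximal prefix solution
    g are committed (picked or discarded), as justified by Lemma 27.
    """
    wmax = max(w for w, _ in items)
    # maximal prefix solution g: longest prefix with total weight <= W
    g, budget = [0] * len(items), W
    for i, (w, _) in enumerate(items):
        if w > budget:
            break
        g[i], budget = 1, budget - w
    classes = defaultdict(list)
    for i, (w, _) in enumerate(items):
        classes[w].append(i)
    keep, profit, red_W = [], 0, W
    for idxs in classes.values():
        picked = [i for i in idxs if g[i] == 1]    # most profitable prefix
        unpicked = [i for i in idxs if g[i] == 0]
        cut = max(0, len(picked) - 2 * wmax)
        for i in picked[:cut]:                     # commit to picking these
            profit += items[i][1]
            red_W -= items[i][0]
        keep += picked[cut:]
        keep += unpicked[:2 * wmax]                # discard the rest
    return [items[i] for i in sorted(keep)], red_W, profit
```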
Distinct Profit-to-Weight Ratios. It remains to make the profit-to-weight ratios distinct. After sorting we can assume that $\bar p_1/\bar w_1 \ge \dots \ge \bar p_{\bar n}/\bar w_{\bar n}$. We replace $\bar p$ by $\tilde p \in \mathbb{N}^{\bar n}$ with $\tilde p_i = M \cdot \bar p_i + (\bar n - i) \cdot \bar w_i$ for a sufficiently large integer $M$. Note that for any $i < j$ we have $\tilde p_i/\bar w_i = M \bar p_i/\bar w_i + (\bar n - i) > M \bar p_j/\bar w_j + (\bar n - j) = \tilde p_j/\bar w_j$, where we used $\bar p_i/\bar w_i \ge \bar p_j/\bar w_j$ and $i < j$. Therefore, all new profit-to-weight ratios are distinct. It remains to show that any optimal solution $\bar x \in \{0,1\}^{\bar n}$ for the 0-1-Knapsack instance $(\bar w, \tilde p, \bar W)$ is also an optimal solution for the instance $(\bar w, \bar p, \bar W)$. To see this, note that for any $\bar x \in \{0,1\}^{\bar n}$ we have $\tilde p^T \bar x = M \cdot \bar p^T \bar x + \sum_{i \in [\bar n], \bar x_i = 1} (\bar n - i) \cdot \bar w_i$.
Since profits are integers and $M$ is sufficiently large, any increase in $\bar p^T \bar x$ outweighs any change in the second summand. Therefore, a maximizer for $\tilde p^T \bar x$ is also a maximizer for $\bar p^T \bar x$. (It suffices to set $M > \bar n^2 w_{\max}$ for this argument.) Note that the reduction runs in time $\tilde O(n + w_{\max}^2)$. Hence, if 0-1-Knapsack on $O(w_{\max}^2)$ items with distinct profit-to-weight ratios can be solved in time $\tilde O(w_{\max}^2)$ then Bounded Knapsack can be solved in time $\tilde O(n + w_{\max}^2)$. This finishes the proof of Lemma 26.
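The perturbation step can be illustrated in a few lines of Python (our sketch; 0-indexed, with $M = \bar n^2 w_{\max} + 1$ as suggested by the argument above):

```python
def distinct_ratios(w, p):
    """Perturb profits so that all profit-to-weight ratios become distinct.

    Assumes p[0]/w[0] >= ... >= p[-1]/w[-1].  Replaces p_i by
    M*p_i + (n-1-i)*w_i with M > n^2 * w_max, so the new ratio is
    M*(p_i/w_i) + (n-1-i): ties in p_i/w_i are broken by the index,
    while optimal knapsack solutions are preserved.
    """
    n, wmax = len(w), max(w)
    M = n * n * wmax + 1
    return [M * p[i] + (n - 1 - i) * w[i] for i in range(n)]
```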

Putting Everything Together
In this section we put together the tools that we gathered in the previous sections to prove our main result. We start by showing that 0-1-Knapsack can be solved in time $\tilde O(n + w_{\max}^2)$.
Theorem 28. 0-1-Knapsack on $n$ items with distinct profit-to-weight ratios can be solved in time $\tilde O(n + w_{\max}^2)$.
Now we can prove our main theorem, restated here for convenience.
Theorem 1. Bounded Knapsack can be solved by a deterministic algorithm in time $\tilde O(n + w_{\max}^2)$.
Proof. This is immediate from combining Theorem 28 and Lemma 26.
Alternatively, we can parametrize by the maximum profit $p_{\max}$.
Theorem 2. Bounded Knapsack can be solved by a deterministic algorithm in time $\tilde O(n + p_{\max}^2)$.
Proof. This result follows from Theorem 1 by a simple reduction described in [36, Section 4].

Open Problems
In this paper we presented an algorithm that solves Bounded Knapsack in time $\tilde O(n + w_{\max}^2)$, which matches a conditional lower bound. This raises several open problems.
• Our algorithm has an impractical number of logarithmic factors. Indeed, the $\tilde O$ in our running time hides at least a factor $\log^7 n$, where a factor $k^2 = \log^4 n$ comes from the number of parts in our partitioning, and a factor $\log^3 n$ comes from the additive combinatorics bounds. The independent work by Jin [27] has a factor $\log^4 w_{\max}$. Can these factors be improved?
• Our algorithm has impractical constant factors. Indeed, in Theorem 12 we have a factor 300000000. As this is only slightly larger than the constant $\approx 17000000$ in Theorem 10, the main bottleneck is the constant factors in [11]. Reducing the constant to a practical value might require a significant change in the proof approach of [11].
• MaxPlusConv is conjectured to require essentially quadratic time in the worst case, but there are lower order improvements to time $n^2/2^{\Omega(\sqrt{\log n})}$ (by combining a reduction to All-Pairs Shortest Paths [6] with an algorithm for the latter [41]). Are similar improvements possible for Bounded Knapsack or 0-1-Knapsack? That is, can these problems be solved in time $\tilde O(n + w_{\max}^2/2^{\Omega(\sqrt{\log w_{\max}})})$?
• Can Bounded Knapsack or 0-1-Knapsack be solved in time $\tilde O(n w_{\max})$? Our approach does not seem able to give such a running time bound.
• Can Bounded Knapsack or 0-1-Knapsack be solved in time $\tilde O(n + (w_{\max} + p_{\max})^{2-\delta})$ for some $\delta > 0$? Time $\tilde O(n + (w_{\max} + p_{\max})^{1.5})$ was recently shown for Unbounded Knapsack [8].
• For Subset Sum the same parametrization by the number of items $n$ and the largest item $w_{\max}$ is well studied. Recent algorithms run in time $\tilde O(n + w_{\max}^{1.5})$ [28,16], and a conditional lower bound rules out time $w_{\max}^{1-\delta} \cdot 2^{o(n)}$ for any $\delta > 0$ [1], which leaves a gap between $w_{\max}$ and $w_{\max}^{1.5}$. Can the ideas in this paper help to close this gap?