Tag Archives: Knapsack

Partially Ordered Knapsack

Continuing our “Variants on the Knapsack Problem” theme…

The problem: Partially Ordered Knapsack.  This is problem MP12 in the appendix.

The description: Like usual, we’re given a set U, where each element u in U has a positive integer size s(u) and value v(u), a maximum total size B, and a target total value K.  We are also given a partial order \lessdot on the elements of U, which we can represent as a directed acyclic graph.  Can we find a subset U’ of U where the total size of all elements in U’ is at most B, the total value of all elements in U’ is at least K, and for each element u in U’, every element u’ with u’ \lessdot u is also in U’?

Example: Suppose we have the following 3 elements:

Item    Profit  Weight
1       3       3
2       3       3
3       5       5

If B=6 and K=6, and there is no partial order, then taking items 1 and 2 makes a valid solution.  But suppose we add a partial order in which item 3 comes before (\lessdot) both item 1 and item 2.

Then taking either item 1 or item 2 means we also have to take item 3, which makes our total size too big.  So this version of the instance can’t be solved.
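
To make the constraint concrete, here’s a minimal checker for this example (the function name, the preds dictionary, and the encoding of the order are my own illustration, reading \lessdot so that taking an element forces taking its predecessors):

def is_valid_pok_solution(chosen, size, value, preds, B, K):
    # A chosen set is valid if it is closed under predecessors (every
    # element that precedes a chosen element is also chosen), fits in
    # the size bound B, and reaches the value target K.
    chosen = set(chosen)
    if any(p not in chosen for u in chosen for p in preds.get(u, [])):
        return False
    return (sum(size[u] for u in chosen) <= B and
            sum(value[u] for u in chosen) >= K)

size = {1: 3, 2: 3, 3: 5}
value = {1: 3, 2: 3, 3: 5}
preds = {1: [3], 2: [3]}   # item 3 precedes items 1 and 2

print(is_valid_pok_solution({1, 2}, size, value, {}, 6, 6))        # True: no order
print(is_valid_pok_solution({1, 2}, size, value, preds, 6, 6))     # False: 3 is missing
print(is_valid_pok_solution({1, 2, 3}, size, value, preds, 6, 6))  # False: size 11 > 6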

Reduction: G&J have this as a “no reference” reduction, but I found a paper by Johnson and Niemi that has it, and I’m glad I did, because it’s pretty slick.  The reduction is from Clique, so we’re given an undirected graph G=(V, E) and an integer K.  We need to build a partial order DAG (whose vertices will be the knapsack items), along with a size and value for each item.

The DAG has a vertex for each vertex and each edge in G (I’ll be talking about these as “vertices from V” and “vertices from E”).  Edges in the DAG go from each vertex from V to the vertices from E corresponding to the edges of G incident on it.  So there is no path of length more than 1 in the partial order (no edges come out of the vertices from E).

Both the value and size of each vertex from V are |E|+1, and the value and size of each vertex from E are 1.

B’ and K’ will both be set to K(|E|+1) + K(K-1)/2.  It’s worth noting that the first term is the total profit and size of K vertices from V, and the second term is the total profit and size of the edges connecting those vertices if they form a clique.
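
Here’s a small sketch of that construction (the function name and the representation of edges as frozensets are my own choices, not anything from the paper):

def clique_to_pok(V, E, K):
    # Build the Partially Ordered Knapsack instance described above:
    # every vertex and every edge of G becomes an item, and an edge-item
    # has both of its endpoints as predecessors in the partial order.
    big = len(E) + 1
    size, value, preds = {}, {}, {}
    for v in V:
        size[v] = value[v] = big          # vertices from V: size = value = |E|+1
    for e in E:                           # e is a frozenset {u, v}
        size[e] = value[e] = 1            # vertices from E: size = value = 1
        preds[e] = list(e)                # must take both endpoints first
    bound = K * big + K * (K - 1) // 2    # B' = K'
    return size, value, preds, bound, bound

# A triangle on {1, 2, 3} plus a pendant vertex 4, asking for a 3-clique:
V = [1, 2, 3, 4]
E = [frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (3, 4)]]
size, value, preds, B_prime, K_prime = clique_to_pok(V, E, 3)
print(B_prime)   # 3*5 + 3 = 18, hit exactly by vertices 1, 2, 3 and their three edges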

Suppose G has a clique of size K.  Then the K vertices of the clique, plus the K(K-1)/2 edges between them, form a solution to the Partially Ordered Knapsack instance as described above: their total size and total value are exactly K(|E|+1) + K(K-1)/2 = B’ = K’, and every chosen edge has both of its endpoints chosen.

If we have a solution to the Partially Ordered Knapsack instance, that means we have a set of items that respects the partial order with total value at least K’ and total size at most B’.  Since the value and size of each item are equal, the total value = the total size = K’ = B’.  The only way to hit that total exactly is with K vertices from V and K(K-1)/2 vertices from E (we can assume K(K-1)/2 ≤ |E|, since otherwise G trivially has no K-clique): taking more than K vertices from V already costs more than B’, and taking fewer can’t reach K’ even with every vertex from E.  The partial order is built so that we can’t take a vertex from E without taking both vertices from V that are its endpoints, so we have K(K-1)/2 edges of E whose endpoints all lie among the same K chosen vertices of V, which means those K vertices form a clique.

Difficulty: 5.  I like how the formula for B’ in the reduction is one where you can easily see what all of the pieces of it are doing, and where they all come from.

Continuous Multiple Choice Knapsack

This week we have a variant of the Knapsack problem that feels like greedy methods should work on it.  G&J actually list several variants of this problem for which a greedy approach does give a polynomial-time solution.

The problem: Continuous Multiple Choice Knapsack.  This is problem MP11 in the appendix.

The description: Given a set of elements U, where each element u in U has a positive size s(u) and value v(u), a partition of U into m disjoint sets U_1 through U_m, a maximum capacity B, and a goal value K.  Can we pick a single element u_i from each U_i and assign it a rational number r_i between 0 and 1, so that the sum of all r_i*s(u_i) is ≤ B and the sum of all r_i*v(u_i) is ≥ K?

Example: This seems like a problem where something like the greedy solution to fractional Knapsack should work- take as much as you can of the items with the highest value per size.  That fails here because you can take at most one item from each set, so you may need to take an item with a worse value-to-size ratio but a higher total value.

For example, if partition 1 was:

Item    Value   Size
1       8       7
2       10      10
3       6       8

and partition 2 was:

Item    Value   Size
1       3       9
2       2       2
3       1       6

If K is 10, we can get 10 profit using only 9 size by taking all of item 1 from partition 1 and all of item 2 from partition 2.  But if K is 12, the only way to get there is to take item 2 from partition 1, which brings our total size up to 12.

Reduction: G&J say to use Partition, but the paper by Ibaraki that I have uses 0/1 Knapsack.  Define R to be a number greater than (the maximum of all of the values) * (the maximum of all of the weights - 1).  We construct a set U_i for each element i of the initial Knapsack instance.  Each U_i has 2 elements: u_{i1} has value R+v(i) and size s(i), and u_{i2} has value R and size 0.  We keep the same B.  Our K’ will be n*R+K.  (He doesn’t explicitly say this, instead proving that the optimal solution to this problem exists if and only if the corresponding items are taken in the 0/1 Knapsack problem.  But each set contributes at least R to the total value, whether we take u_{i1} or u_{i2}, and we then either add v(i) to our value (if we take u_{i1}) or 0 (if we take u_{i2}).)
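
Here’s a sketch of that construction (my own code and names; the R below is just one value satisfying the “bigger than the product” condition):

def knapsack_to_cmck(values, sizes, B, K):
    # Turn a 0/1 Knapsack instance into a Continuous Multiple Choice
    # Knapsack instance as described above: item i becomes a two-element
    # set U_i = { (R + v(i), s(i)), (R, 0) }, given as (value, size) pairs.
    n = len(values)
    R = max(values) * (max(sizes) - 1) + 1    # strictly bigger than the product
    sets = [[(R + v, s), (R, 0)] for v, s in zip(values, sizes)]
    return sets, B, n * R + K                 # capacity stays B; K' = n*R + K

sets, B_prime, K_prime = knapsack_to_cmck([8, 10, 6], [7, 10, 8], B=17, K=18)
print(K_prime)   # R = 10*(10-1)+1 = 91, so K' = 3*91 + 18 = 291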

Ibaraki shows that optimal solutions to this problem only ever take at most one fractional item- they take 0 or 1 of every other item.  He then shows that in this specific instance, you won’t take any fractional items.

He then proves that taking u_{i1} from a set U_i corresponds to taking item i in the original Knapsack, which is pretty straightforward.

Difficulty: 6.  This is actually not that hard of a reduction, except for the need to pick a “big enough” R.  I’m not entirely sure why the chosen R is big enough, but other values aren’t.  The rating also assumes that you tell the students about the “maximum one fractional value” fact.

Integer Knapsack

(MP9 is Knapsack.)

The next problem seems like an easy variant of Knapsack, and I tried for a while to find a quick easy reduction.  When I finally caved and looked up the reference, I found that it wasn’t easy at all.  Which made me feel good, at least, that I wasn’t missing something, but I do still feel like there should be an easier way to do this.

The problem: Integer Knapsack.  This is problem MP10 in the appendix.

The description: Given a set of elements, where each item i has weight w_i and profit p_i, and two positive integers B and K, can we assign a non-negative integer c_i to each item i such that:

  • \sum_{i=1}^{n} c_i*w_i ≤ B
  • \sum_{i=1}^{n} c_i*p_i ≥ K?

Example: This is just the classic Knapsack problem where we are allowed to take more than one of each item.   So, suppose we have 3 items:

  1. Profit 4, Weight 1
  2. Profit 6, Weight 2
  3. Profit 3, Weight 3

No matter what our B is, in the original 0/1 problem we could get a profit of at most 13.  But here, if we say that B is 10, we can get a profit of 40 by taking the first item 10 times.
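
Since the integer version still has a pseudo-polynomial dynamic program (which of course doesn’t contradict its NP-completeness), the example above is easy to check with a quick sketch like this (nothing to do with the reduction; just my illustration of the problem):

def integer_knapsack_best(profits, weights, B):
    # best[b] = best total profit achievable with total weight <= b,
    # taking any non-negative number of copies of each item.
    best = [0] * (B + 1)
    for b in range(1, B + 1):
        for p, w in zip(profits, weights):
            if w <= b:
                best[b] = max(best[b], best[b - w] + p)
    return best[B]

print(integer_knapsack_best([4, 6, 3], [1, 2, 3], 10))   # 40 (ten copies of item 1)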

Reduction: It really seems very close to regular Knapsack, doesn’t it?  G&J even say that the reduction is from Sum of Subsets, just like our Knapsack reduction did.  But the reference from Lueker is actually pretty complicated:

We’re given a Sum of Subsets instance- a set S of n elements s_1 through s_n, and a value K to try to hit exactly.

Define M to be 1 + the maximum of K, n*\sum_{i=1}^{n} s_i, and 2n.  We’ll now create a set U for our Integer Knapsack instance with 2n elements:

Each element will be labeled u_{ij}, where i runs from 1 to n and j is 0 or 1.  Element u_{ij} will have profit and weight M^{n+1} + M^i + j*s_i.

Our K’ (and B’) will be: n*M^{n+1} + \sum_{i=1}^{n} M^i + K.
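
Here is the construction as code (using the formulas as I’ve written them above; the function name and the tiny usage example are mine).  The numbers get enormous, but that’s fine- it’s only a polynomial blowup in the number of bits:

def subset_sum_to_integer_knapsack(s, K):
    # Build the Integer Knapsack instance described above.  Each element
    # s_i of the Sum of Subsets instance becomes two items u_{i0}, u_{i1}
    # whose weight (= profit) is M^(n+1) + M^i + j*s_i.
    n = len(s)
    M = 1 + max(K, n * sum(s), 2 * n)
    items = [M ** (n + 1) + M ** i + j * s[i - 1]
             for i in range(1, n + 1) for j in (0, 1)]
    bound = n * M ** (n + 1) + sum(M ** i for i in range(1, n + 1)) + K
    return items, bound          # weights = profits = items; B' = K' = bound

items, bound = subset_sum_to_integer_knapsack([1, 2, 3], 3)
print(len(items), bound)   # 6 items; taking u_{1,1}, u_{2,1}, u_{3,0} hits bound exactly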

If S has a subset S’ that sums to K, then for each element i, if s_i is in S’ we take u_{i1} in our Integer Knapsack solution; otherwise, we take u_{i0}.  The total profit and weight is then n*M^{n+1} + \sum_{i=1}^{n} M^i + K, which is exactly K’ = B’.

If U has an Integer Knapsack solution, we know that we have non-negative integer values c_{ij} such that \sum_{i=1}^{n} \sum_{j=0}^{1} c_{ij}*u_{ij} = K’.  Because of the way we chose M, if we divide K’ by M^{n+1} and round down, we get n.  Because each u_{ij} has an M^{n+1} term in it, dividing the summation side by M^{n+1} gives us a number at least as big as the sum of the c_{ij} values.  So we can say that \sum_{i=1}^{n}\sum_{j=0}^{1} c_{ij} ≤ n: at most n items are taken in total.

Now we can go back to the original equation \sum_{i=1}^{n} \sum_{j=0}^{1} c_{ij}*u_{ij} = K’ and do algebra.  We learn that, in fact, each c_{ij} is 0 or 1, that c_{i0} + c_{i1} = 1 for all i (so we take exactly one of u_{i0} and u_{i1}), and that the values of c_{i1} form a 0/1 solution to the corresponding elements of the Sum of Subsets problem.
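
For what it’s worth, here is how I read that algebra step, using the formulas as written above (so treat this as my sketch rather than Lueker’s exact argument).  Plugging the weights into the equation gives

\sum_{i=1}^{n}\sum_{j=0}^{1} c_{ij}(M^{n+1} + M^i + j*s_i) = n*M^{n+1} + \sum_{i=1}^{n} M^i + K

The left side is at least (\sum_{i,j} c_{ij})*M^{n+1} and at most (\sum_{i,j} c_{ij})*(M^{n+1} + M^n + \max_i s_i), and M is big enough that this pins \sum_{i,j} c_{ij} to exactly n.  Subtracting n*M^{n+1} from both sides leaves

\sum_{i=1}^{n} (c_{i0} + c_{i1})*M^i + \sum_{i=1}^{n} c_{i1}*s_i = \sum_{i=1}^{n} M^i + K

Every “digit” on the left (each c_{i0}+c_{i1}, and the leftover \sum_{i} c_{i1}*s_i) is smaller than M, so by uniqueness of base-M representations we get c_{i0}+c_{i1} = 1 for every i and \sum_{i} c_{i1}*s_i = K, which is exactly a subset of S summing to K.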

Difficulty: 7.  There seems to be no natural reason for this definition of M except “that’s what made it work”.  I wish there was an easier reduction.
