## Capacity Assignment

This problem is from the same “unpublished manuscript” as last week’s.

The problem: Capacity Assignment.  This is problem SR7 in the appendix.

The description: Given a set C of “communication links”, and a set M of positive capacities.  Each pair of a link c and a capacity m has a cost g(c,m) and a delay penalty d(c,m) with the following properties:

• If i < j ∈ M, then g(c,i) ≤ g(c,j)
• If i < j ∈ M, then d(c,i) ≥ d(c,j)

We’re also given positive integers K and J.  The problem is: Can we assign a capacity to each link such that the total g cost of all of our assignments is ≤ K and the total d cost of all of our assignments is ≤ J?

Example: There’s a lot to parse in that problem description.  The first thing to notice is that the set of links C doesn’t necessarily have to link anything together (it’s not like it has to apply to an underlying graph).  So we can just give them names:

C={a,b,c,d,e}

Next, there is no reason why the capacities have to be assigned to C as a bijection; the set M can be an entirely different size than C:

M={1,2}

The cost function has to have the property that if we assign a 2 to a link, it has to cost at least as much as assigning a 1 to that link:

g(c,1) = 3 for all c

g(c,2) = 4 for all c

The delay function has to have the property that if we assign a higher capacity to a link, the delay can’t be larger than assigning a lower capacity:

d(c,1) = 6 for all c

d(c,2) = 5 for all c

In this case, if we assign the capacity of 1 to all links, we get a total cost of 15 and a total delay of 30.  If we assign the capacity of 2 to all links, we get a total cost of 20 and a total delay of 25.     If we have K = 18, and J = 27, we can achieve that by setting 2 links to have capacity 1 and 3 links to have capacity 2.
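If you want to check that arithmetic, here’s a quick Python sketch.  The dictionary encoding of g and d is just my choice, not anything from the problem:

```python
def totals(assignment, g, d):
    """Total cost and total delay of a capacity assignment."""
    cost = sum(g[c][m] for c, m in assignment.items())
    delay = sum(d[c][m] for c, m in assignment.items())
    return cost, delay

links = "abcde"
g = {c: {1: 3, 2: 4} for c in links}  # g(c,1)=3, g(c,2)=4 for all c
d = {c: {1: 6, 2: 5} for c in links}  # d(c,1)=6, d(c,2)=5 for all c

# Two links at capacity 1 and three at capacity 2, as in the example:
print(totals({"a": 1, "b": 1, "c": 2, "d": 2, "e": 2}, g, d))  # (18, 27)
```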

The reduction: The example above is pretty close to how the reduction will work.  We will reduce from Sum of Subsets, so we start with a set S of integers and a target B.   Our set C will have one element for each element in S.  Our set M will be {1,2}.  Assigning a capacity of 1 will imply we don’t want to take this element in S’, and assigning a capacity of 2 will imply that we do.  (This makes more sense if I can use the set {0,1} for M, but the problem description says the elements of M have to be positive)

We will define our g function so that g(c,1) = 1 for all c, and g(c,2) will be s(c)+1 (where s(c) is the size of the element in S that corresponds to c).

Our d function will work similarly:  d(c,1) = s(c)+1 for all c, and d(c,2) = 1 for all c.  These functions both follow the restrictions for how g and d work.

Set K = |S| + B.  Since each cost is either s(c)+1 or 1, every link contributes at least 1 to the total cost, and a link assigned capacity 2 contributes s(c) on top of that.  So this is saying that the total size of the elements assigned a 2 (the elements we “take”) cannot exceed B.

Let T = the sum of the sizes of all of the elements in S.  Then let J = |S| + T – B.  Again, each d value always includes 1, and a link assigned capacity 1 contributes s(c) on top of that.  So this is saying that the total size of the elements assigned a 1 (the elements we don’t take) cannot exceed T – B.
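Here’s what that construction looks like in Python (the representation is mine; a capacity of 2 means we take the element):

```python
def capacity_instance(S, B):
    """Sum of Subsets -> Capacity Assignment, as described above."""
    T = sum(S)
    g = [{1: 1, 2: s + 1} for s in S]  # cost pays s(c) extra when taken
    d = [{1: s + 1, 2: 1} for s in S]  # delay pays s(c) extra when skipped
    return g, d, len(S) + B, len(S) + T - B   # g, d, K, J
```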

If S has a SOS solution S’, then assigning a capacity of 2 to all elements in S’ and a 1 to all elements not in S’ gives us a cost value of exactly K, and a delay value of exactly J.

If we have a Capacity Assignment solution, then notice that K+J = 2|S| + T, which is the sum of all costs and delays no matter what assignment is chosen (g(c,m) + d(c,m) = s(c)+2, for all c, no matter which m we use).  So if the sum of the costs (or delays) were strictly less than K (or J), the sum of the delays (or costs) would have to be strictly more than J (or K).  The only way to satisfy both the K and J constraints is to make the sums exactly equal, which gives us a SOS solution.

Difficulty: 4.  I think the algebra for this problem is a little easier than last week’s, but it does take some work to understand what the problem is asking.  Changing the problem slightly to allow capacities, costs, and delays to be 0 instead of requiring them all to be positive integers would make the reduction easier too.

## Multiple Copy File Allocation

These next two problems are from an “unpublished manuscript” that I can’t find.  Assuming I did them correctly, I think they are also good problems for students to work on.

The problem: Multiple Copy File Allocation.  This is problem SR6 in the appendix.

The description: Given a graph G=(V,E), where each vertex v ∈ V has a “usage” u(v) and a “storage cost” s(v) (both positive integers), and a positive integer K.  Can we find a subset V’ of V such that, defining d(v) to be the length of the shortest path from v to the nearest member of V’, the sum of:

• The s(v) values for all vertices in V’ plus
• The d(v)*u(v) values for all vertices not in V’ is ≤ K?

Example: The idea is that each vertex in V’ pays its s(v) cost, and each vertex not in V’ pays its u(v) cost for each edge it has to go through to get to something in V’.  Here’s a simple example:

Each node lists its s value first and u value second.  If we take just the “1,10” node into V’, our total cost is:

• 1 from the “1,10” node (its s value)
• 2 + 3 + 4 from each of its neighbors (their u values * a distance of 1)
• 2 from the “90,1” vertex.  (u cost of 1, times a distance of 2 edges from the vertex in V’)

..for a total of 12.  I think that’s minimal.
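Here’s a little Python sketch that computes the cost of a choice of V’.  Since the figure isn’t reproduced here, the graph below is my reconstruction of it from the text, and the neighbors’ s values are made-up placeholders (they never get used when V’ is just the hub):

```python
from collections import deque

def mcfa_cost(graph, s, u, Vp):
    """Sum of s(v) over V', plus d(v)*u(v) for everyone else, where
    d(v) is the BFS distance to the nearest member of V'."""
    dist = {v: 0 for v in Vp}
    queue = deque(Vp)
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return sum(s[v] if v in Vp else dist[v] * u[v] for v in graph)

# My reconstruction of the figure: a hub (s=1, u=10) with three
# neighbors (u = 2, 3, 4), and the "90,1" vertex two edges away.
graph = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"],
         "c": ["hub", "far"], "far": ["c"]}
s = {"hub": 1, "a": 50, "b": 50, "c": 50, "far": 90}  # neighbor s values are guesses
u = {"hub": 10, "a": 2, "b": 3, "c": 4, "far": 1}
print(mcfa_cost(graph, s, u, {"hub"}))  # 1 + (2+3+4) + 2 = 12
```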

Reduction: G&J say to use Vertex Cover, and mention that we can make all s values equal to each other and all u values equal to each other.  The phrasing implies that we shouldn’t make the s values and the u values all the same, which is a good hint.

So we start with our VC instance: a graph G=(V,E) and an integer K.  We will use the same graph.  Let s = the s(v) for each vertex = |V|.  Let u = the u(v) for each vertex = the smallest integer larger than |V|/2.

Set K’ = K*s + (|V|-K)*u.  Hopefully it’s clear how this number lets us take a cover that solves the VC problem and turn it into a solution to the MCFA problem.
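In code, the parameter choices are tiny (the function name is mine):

```python
def mcfa_from_vc(V, E, K):
    """The parameter choices above: uniform s and u, and the K' that
    a size-K cover achieves exactly."""
    s = len(V)               # s(v) = |V| for every vertex
    u = len(V) // 2 + 1      # smallest integer larger than |V|/2
    K_prime = K * s + (len(V) - K) * u
    return s, u, K_prime
```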

If we have an MCFA solution, first notice that any vertices that are 2 (or more) edges away from something in V’ are contributing 2*u (or more) to the total cost.  Since 2*u > s, we can add a vertex to V’ at distance 1 away from these distant vertices and keep our total cost ≤ K’.  This means we can transform any MCFA solution into one that is still a solution but where V’ is a cover of V.

Also notice that, even after making these changes, if we were to have K+1 (or more) vertices in V’, our total cost would be at least (K+1)*s + (|V|-K-1)*u.  This is more than K’ (it adds an extra s, which costs |V|, and removes a u, which saves only about |V|/2+1).

So, we can construct an MCFA solution where there are K or less vertices in V’ that form a cover.  This gives us a solution to the VC instance.

Difficulty: 5.  It takes a little playing around with the algebra to get the correct values, but the idea that we can make the s and u values the same for all vertices is helpful, and gives a clear path to how you’d reduce this problem from VC.

## Rooted Tree Storage Assignment

This whole section is filled with problem descriptions that confuse me.  Here’s another one:

The problem: Rooted Tree Storage Assignment.  This is problem SR5 in the appendix.

G&J’s definition: Given a set X, and a collection C= {X1..Xn} of n subsets of X, and an integer K.  Can we find a collection C’ = {X1‘..Xn‘} of n subsets of X, where:

• Each Xi‘ contains all of the elements of its corresponding Xi, plus possibly some extra elements.
• There are at most K new elements added across all Xi
• We can treat the elements of X as vertices and add directed edges to form a directed tree where the elements of each Xi form a directed path.

Example: Suppose X = {1,2,3,4,5}, and X1 = {1,2,3,4}, X2 = {1,3,4}, X3 = {1,3,5}, X4 = {2,5}.

Then we can set each Xi‘ to be equal to Xi, except for X3‘, where we’ll add the element 2.  This gives us the arrangement:

The way to think of the Xi is as “required elements along a path from the root”, and the Xi‘ as “the actual elements along the path from the root”.

But the paper from Gavril that has the reduction gives what I think is the better definition:

Gavril’s Definition: (He calls this problem “Augmented directed tree arrangement”) Given a bipartite graph B, whose vertices can be split into two sets X and Y, and an integer K, can we add K or less edges to B to form a new bipartite graph such that:

• The elements in X can be arranged as a directed tree
• For each subset S of Y (I think; it’s not entirely clear whether this is a subset or just a single vertex), the vertices in X that are adjacent to S can be arranged to form a directed path in the directed tree of X.

I like this better because you can see that the vertices that need to be in order in the tree  are “pointed to” by edges from Y.  The extra edges that you get in Xi‘ are created by adding more edges between X and Y.

Example: The example above would be represented by the following bipartite graph:

Notice how the vertices from X are arranged to almost form a directed path with no gaps, but we need to add an edge from the Y vertex for X3 to the X vertex for element 2 to fix the arrangement.

Reduction: The Gavril paper uses Rooted Tree Arrangement.  So we’re given a graph G=(V,E) and an integer K.  Our bipartite graph will have X=V, and Y=E, and each vertex in Y will connect to the 2 vertices in X that correspond to the endpoints of the edge in G. K’ = K-|E|.
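Here’s a sketch of that construction, with E given as a list of vertex pairs (the labels for the Y vertices are my own convention):

```python
def bipartite_instance(V, E, K):
    """Gavril's construction as I read it: X = V, Y = E, each
    edge-vertex adjacent to its two endpoints, budget K' = K - |E|."""
    X = list(V)
    Y = [("edge", i) for i in range(len(E))]  # one Y-vertex per edge of G
    B = [(("edge", i), v) for i, e in enumerate(E) for v in e]
    return X, Y, B, K - len(E)
```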

If you’re worried (like I was) that this might possibly make K’ negative, keep in mind that the K for the Rooted Tree Arrangement problem was the sum of the path lengths between pairs of vertices in G that are connected by an edge.  So for K to be meaningful it has to be ≥ |E|.

If we have a solution for the Rooted Tree Arrangement problem, then we have a path between each pair of vertices connected by an edge in E.  So, for each vertex in Y, look at the corresponding path in the solution.  If the path goes through additional vertices besides the endpoints of the edge, add an edge to our bipartite graph from the vertex in Y to each vertex passed through in X.  In doing so, we will add at most K-|E| edges to the bipartite graph, and the arrangement itself becomes a solution to the Storage Assignment problem.

If we have a solution to the Storage Assignment problem, the edges that were added to the bipartite graph mark the vertices traveled along a path that solves the Rooted Tree Arrangement problem, and so tells us how to build a satisfying arrangement.

Difficulty: 6.  This isn’t the hardest reduction to understand, but it does require you to know the Rooted Tree Arrangement problem really well, and to understand just what the job of the bipartite graph is.

## Expected Retrieval Cost

Here’s another problem where the G&J definition confused me for a bit.

The problem: Expected Retrieval Cost.  This is problem SR4 in the appendix.

The description: Given a set R of records, each with a probability of being accessed between 0 and 1 (where the probabilities all sum to 1), some number m of sectors to place records on, and a positive integer K.  Can we partition R into m disjoint subsets R1..Rm such that:

• The “latency” cost of 2 sectors i and j, called d(i,j) is j-i-1  (if i < j) or m-i+j-1 (if i >=j)
• The probability of a sector, called  p(Ri), is the sum of the probabilities of the records on that sector
• The sum over all pairs of sectors i and j of p(Ri) * p(Rj) * d(i,j) is K or less?

Example: The thing that was the hardest for me to understand was the definition of d.  The way it’s written, the distance between 2 adjacent sectors (for example d(2,3)) is 0.  The distance between a sector and itself (for example d(2,2)) is m-1.  The paper by Cody and Coffman does a better job of explaining the motivation: what we’re looking at is the time (in sectors traversed) for a disk to read sector j after finishing reading sector i.  So if we read sector 2 right before reading sector 3, the disk has no traversal time to go from the end of sector 2 to the beginning of sector 3.  But if we read sector 2 twice in a row, the disk reader (in this model) needs to scan to the end of all of the sectors, then return to the beginning, then scan all the way to the beginning of sector 2 to read it again.

So, suppose we have m=2, and 4 records, each with probability .25.  Notice that with m=2, d(1,1) = d(2,2) = 1, and d(1,2) = d(2,1) = 0.  If we put all of the records in the same sector, say R1, then p(R1) = 1 and p(R2) = 0.  So our sum is:

• p(R1)*p(R1)* d(1,1) = 1*1*1 = 1, plus
• p(R1) * p(R2) * d(1,2) = 1*0*0 = 0, plus
• p(R2) * p(R1)* d(2,1) = 0*1*0 = 0, plus
• p(R2)* p(R2) * d(2,2) = 0*0*1 = 0

..for a total of 1.

If we put 2 records in sector 1, and 2 records in sector 2, then p(R1) = p(R2) = .5.  So our sum is:

• p(R1)*p(R1)* d(1,1) = .5*.5*1 = .25, plus
• p(R1) * p(R2) * d(1,2) = .5*.5*0 = 0, plus
• p(R2) * p(R1)* d(2,1) = .5*.5*0 = 0, plus
• p(R2)* p(R2) * d(2,2) = .5*.5*1 = .25

..for a total of .5.
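Here’s a quick Python check of both calculations, using the 1-indexed d from the definition:

```python
def expected_cost(parts):
    """The objective above: probabilities come from summing each part,
    and d(i,j) follows the definition for m sectors."""
    m = len(parts)
    def d(i, j):
        return j - i - 1 if i < j else m - i + j - 1
    p = [sum(part) for part in parts]
    return sum(p[i - 1] * p[j - 1] * d(i, j)
               for i in range(1, m + 1) for j in range(1, m + 1))

records = [0.25] * 4
print(expected_cost([records, []]))               # 1.0
print(expected_cost([records[:2], records[2:]]))  # 0.5
```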

The reduction: Hopefully the example using m=2 helps to show why using Partition is a good choice.  So we start with a set S of elements.  We will turn each element of S into a value between 0 and 1 reflecting its proportion of the sum of all of the elements.  For example, if S={1,2,3,4,5}, then we would create a set R of values {1/15, 2/15, 3/15, 4/15, 5/15}.  These probabilities will all be between 0 and 1 and will sum to 1.

We will set m=2, K = 1/2.  Notice that d(1,2) = d(2,1) = 0.  So the only d values that will count for our sum are d(1,1) and d(2,2) (which are both 1).  So by our formula we need p(R1) * p(R1) + p(R2) * p(R2) to be at most .5.

Some algebra (using the fact that p(R1) + p(R2) = 1) tells us that p(R1)² + p(R2)² is always at least .5, and is exactly .5 only when p(R1) = p(R2) = .5.  Or, we have an Expected Retrieval Cost solution for R exactly when we have a partition of S.
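Since the m=2 cost collapses to p(R1)² + p(R2)², a brute-force sanity check of the equivalence is easy to write (the tolerance constant is just to dodge floating-point noise):

```python
from itertools import combinations

def partition_exists(S):
    """The cost hits K = 1/2 exactly when p(R1) = p(R2) = 1/2,
    i.e. when S can be partitioned into two equal-sum halves."""
    probs = [s / sum(S) for s in S]
    for r in range(len(S) + 1):
        for chosen in combinations(probs, r):
            p1 = sum(chosen)
            if abs(p1 ** 2 + (1 - p1) ** 2 - 0.5) < 1e-9:
                return True
    return False

print(partition_exists([1, 2, 3, 4]))     # True: {1,4} vs {2,3}
print(partition_exists([1, 2, 3, 4, 5]))  # False: total 15 is odd
```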

Difficulty: 4.  Cody and Coffman say the details of the above reduction are “routine” after defining K = 1/2.  It is pretty straightforward, but there are some tricky parts to worry about.

I will say though that the definition in G&J, where it’s not clear how distances to adjacent things can be 0, struck me as much harder, and is the reason I dug up the Cody and Coffman paper in the first place.  I’d say that definition makes the problem a 6 or 7.

## Pruned Trie Space Minimization

This problem is hard to explain, partially because the definition given by G&J doesn’t really map to the structure they are talking about easily.

The problem: Pruned Trie Space Minimization.  This is problem SR3 in the appendix.

The description in G&J: Given a finite set S, a collection F of functions mapping elements of S to positive integers, and a positive integer K.  Can we find a sequence of m distinct functions from F <f1 .. fm> such that:

• For each pair of elements a and b in S, there is some function fi in the sequence where fi(a) ≠ fi(b)
• For each i from 1 to m, define N(i) to be the number of distinct tuples X = (x1..xi) where more than one a in S has (f1(a), …, fi(a)) = X.  Is the sum of all of the N(i) values at most K?

A better description: G&J’s definition removes all knowledge of the “tries” from the problem.  The Comer and Sethi paper that is referred to in the appendix I think does a better job.

First, a trie is a tree that separates a set of strings by their letters.  The idea is that each string has a unique path through the tree.  Here is the trie used in the paper:

This trie shows the paths for the set of strings {back, bane, bank, bare, barn, band, bang, barb, bark, been}, built by considering the letters in each string from left to right.  By considering letters in different orders, we will get differently shaped tries, with different numbers of internal nodes.

A pruned trie recognizes that a long path of nodes with 1 child each doesn’t actually need to be represented.  For example, once you go down the “b-e” side, the only place you can end up is at “been”.  So the trie is pruned by removing all such chains (we would consider the “e” node a leaf).

What we are interested in doing is finding an ordering on the letters in the string (or, more generally, the “attributes” of an element we are trying to distinguish) in order to minimize the number of nonleaf nodes in the pruned trie.

The actual question we want to solve is: Given a set of strings S and an integer K, can we construct a trie that differentiates the S strings with K or less internal nodes?

I think the way this maps to the G&J definition is:

S is the set of strings.  F has one function per attribute, mapping each string to the value of that attribute.  The sequence of functions <f1, …, fm> is the order in which we choose attributes.  So f1(a) is the first node in the trie that we go to on the string a, f2(a) is the second node we go to, and so on.  The fi(a) ≠ fi(b) requirement says that we need to eventually differentiate each string from each other, and the N(i) numbers count the number of internal nodes at each height of the tree.

Example: For the picture shown above, we get the following pruned trie (also from the paper):

This trie has 5 internal nodes.
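One way to make that count concrete is to treat a node as internal exactly when it still has more than one string to separate.  This little sketch (my own formulation, not the paper’s) reproduces the count of 5 for the left-to-right letter order:

```python
def internal_nodes(strings, order):
    """Count internal nodes of the pruned trie that tests attributes
    (here, letter positions) in the given order.  A node survives
    pruning only while it still separates more than one string."""
    def count(group, depth):
        if len(group) <= 1 or depth == len(order):
            return 0
        children = {}
        for w in group:
            children.setdefault(w[order[depth]], []).append(w)
        return 1 + sum(count(g, depth + 1) for g in children.values())
    return count(strings, 0)

words = ["back", "bane", "bank", "bare", "barn",
         "band", "bang", "barb", "bark", "been"]
print(internal_nodes(words, [0, 1, 2, 3]))  # 5, matching the pruned trie
```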

Reduction: G&J say that the reduction goes from 3DM, but in the paper it goes from 3SAT.  So we’ll start with a formula in 3CNF form with n variables and m clauses.  The strings we’ll build will have 3n+3m attributes (you can think of this as strings of length 3n+3m).  The first 2n attributes will correspond to literals (one attribute for the positive setting of a variable, one attribute for the negative setting).  The next 3m attributes will correspond to clauses (3 attributes for the 3 possible positions a variable can appear in a clause), and the last n attributes correspond to variables (to combine the positive and negative literals of each variable).

We will have one string for each literal (a 1 in the attribute matching that literal, a 1 in the attributes matching that literal’s positions in clauses, and a 1 in the attribute matching that literal’s variable; 0’s everywhere else).  We will have one string for each clause (a 1 in the three position attributes of that clause, 0’s everywhere else).  Then we will have a sequence of “hard to distinguish” strings made of decreasing numbers of 2’s (with 0’s everywhere else).

Here’s the example construction from the paper (blank spaces are zeroes).  It’s a little confusing because they chose n=m=3, but you can see where the various pieces are:

K=2n+m.

If the formula is satisfiable, then the ordering of attributes where we put all of the literals that form the satisfying arrangement first, then all of the clauses, then the W attributes (for the variables) distinguishes the strings in L with 2n+m internal nodes.

In fact, all tries must have at least K internal nodes to distinguish the strings in L; that can be seen from the table, since we have K strings made up of decreasing numbers of 2’s.  We also have to distinguish those strings in order (the string with the most 2’s first, then the ones with less 2’s, all the way down to the last one with just one 2).  We need to choose one attribute for each variable (either its positive or its negative literal).  Suppose we choose an attribute Ui (or its negation).  That node in the trie has 3 children:

• A 2, which distinguishes the string in L.
• A 1, which distinguishes the string corresponding to that literal in J.
• A 0, for everything else.

What this means is that we have “distinguished off” the literal string (in J) from the rest (on a 1), which means that the 1 it has in the clause position will not interfere with the 1 in that position of the clause string (in K).  So each clause string will be able to be distinguished by a clause position belonging to a literal that satisfies that clause.

So, if we have a trie with “only” K internal nodes, the attributes must line up to allow us to have a setting of a variable to satisfy each clause.

Difficulty: 8, with the Comer and Sethi trie definition.  If you are going straight from G&J’s definitions, it’s at least a 9.

## Dynamic Storage Allocation

Since Bin Packing was a redo, here is the first real problem in the Storage and Retrieval section.

The problem: Dynamic Storage Allocation.  This is problem SR2 in the appendix.

The description: Given a set A of items.  Each item a in A has size s(a), arrival time r(a) and departure time d(a) (all positive integers).  We’re also given a storage size D.  Can we allocate the items to D “slots” of storage such that:

• Each item is stored in consecutive slots.  So an element a has to be contained in s(a) adjacent locations from 1 to D.
• No two items overlap the same slot during the time they are in storage.  In other words, if two items a and a’ are mapped to overlapping slots, they must not have any overlap between their arrival and departure times.

Example: Here’s a simple set of items:

| Item Number | Arrival | Departure | Size |
|-------------|---------|-----------|------|
| 1           | 1       | 2         | 4    |
| 2           | 2       | 3         | 4    |
| 3           | 1       | 3         | 2    |

If D=6, we can store these items by using slots 1-4 to hold both items 1 and 2 (notice that they don’t overlap in time, and having one item arrive right as the other departs is ok), and slots 5-6 to hold item 3.
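Here’s a quick checker for the two conditions.  The half-open treatment of times is my reading of “one item arriving right as the other departs is ok”:

```python
def valid_allocation(items, starts, D):
    """items are (arrival, departure, size); starts[i] is item i's
    first slot.  Checks that items fit in slots 1..D contiguously and
    that items sharing slots never overlap in time."""
    for (arr, dep, size), start in zip(items, starts):
        if start < 1 or start + size - 1 > D:
            return False
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (a1, d1, s1), (a2, d2, s2) = items[i], items[j]
            share_slots = starts[i] < starts[j] + s2 and starts[j] < starts[i] + s1
            share_time = a1 < d2 and a2 < d1  # touching endpoints is OK
            if share_slots and share_time:
                return False
    return True

items = [(1, 2, 4), (2, 3, 4), (1, 3, 2)]  # the table above
print(valid_allocation(items, [1, 1, 5], 6))  # True: items 1, 2 share slots 1-4
```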

Reduction: The reference to Stockmeyer in G&J is to a private communication.  I tried working out my own reduction from 3-Partition, but couldn’t make it work.  My approach was to make the sizes of the elements in the 3-Partition instance map to times in this problem, since G&J give the hint that you can make all sizes 1 or 2.  But I couldn’t figure out how to make it work.  I sort of expect there to be 3 possible sizes for a 3-Partition problem, instead of 2.

Eventually, I found a paper by Lim that uses regular Partition, treating the storage allocation problem as a special case of a problem involving berthing ships.  (The ship problem adds extra complications, like each ship needing a specified clearance between it and other ships.)  He starts with a set A of elements, and defines T to be the sum of all of the element sizes.  He then creates one item in the storage allocation problem for each element in A.  For a given s(a) in A, the new item has size s(a), arrival time 2, and departure time 3 (so each item exists for just one unit of time).  He also adds 9 new items that have the effect of creating only two sequences of storage slots that can hold the items from A, each of size T/2.  We can place the items in these slots if and only if there is a partition of A.

Difficulty: 7.  I don’t think the idea is too hard to understand, but the 9 items that are created are hard to come up with (even if you can understand what their purpose is, coming up with the items that actually accomplish that purpose is pretty hard).

## Ratio Clique

Last week it was pointed out to me that my reduction for Balanced Complete Bipartite Subgraph was wrong, and in my searches to fix it, I found that the real reduction (by Johnson) used a variant of Clique which states (without proof) that Clique is NP-Complete even if K is fixed to be |V|/2.  I looked up the Clique problem in G&J, and they say in the comments that it is NP-Complete for K equal to any fixed ratio of |V|.

I thought this was a neat easy problem that fit in the 3-6 difficulty range I mentioned last week and decided it was worth a post.  But thinking about this brings up some subtle issues relating to ratios and constants that are common sources of errors among students.  I’ll talk about that at the end.

The problem: I don’t know if there is an official name, so I’m calling it “Ratio Clique”.  It is mentioned in the comments to GT19 (Clique).

The description: For any fixed number r, 0 < r < 1: given a graph G=(V,E), does G have a clique of size r*|V| or more?

Example:  Here’s a graph we’ve used for a previous problem:

If r = .5, then r*|V| = 3.5.  So we’re asking if a clique of 3.5 or more vertices exists (which really means a clique of 4 or more vertices).  It does not exist in this graph.  If r ≤ 3/7, then we would be looking for a clique of size 3, which does exist in this graph (vertices b, c, and t).

The reduction: We will be reducing from the regular Clique problem.  Since we want to show this “for any fixed value of r”, we can’t change r inside our reduction.

So we’re given a graph G=(V, E) and a K as our instance of Clique. We need to build a graph G’=(V’, E’) that has a fixed K’ = ⌈r*|V’|⌉.

G’ will start with G, and will add new vertices to the graph.  The vertices we add depend on the ratio s of K to |V| (K = ⌈s*|V|⌉).  K’ is initially K, but may change as vertices are added to the graph.

If r > s, then we need to add vertices to V’ that connect to every other vertex in V’, each of which increases K’ by 1.  This increases the ratio of K’ to |V’|, and we keep adding vertices until that ratio is at least r.

If G has a clique of size K, then the K’-K extra vertices in G’ can be added to the clique to form a larger clique (since these new vertices connect to every other vertex).

If G’ has a clique of size K’, notice that it must contain at least K vertices that were initially in G. (We only added K’-K new vertices).  These vertices that exist in G are all connected to each other and so will form a clique in G.

If r < s, then we will add vertices to V’ that are isolated (have no edges connecting to them).  K’ will stay equal to K.  Each vertex we add will reduce the ratio of K’ to |V’|, and we keep adding vertices until K = ⌈r*|V’|⌉.

Since these new vertices can not be part of any clique in G’, any clique in G’ must consist only of vertices from G.  Since K=K’, this gives us a clique of size K in both graphs.
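Putting both cases together, the padding looks something like this sketch (which hand-waves the ceiling issues exactly as much as the prose above does):

```python
import math

def pad_for_ratio(n, edges, K, r):
    """Clique -> Ratio Clique padding.  Vertices are 0..n-1; edges is
    a set of frozensets.  Returns the padded graph and target K."""
    edges = set(edges)
    # r > K/n: add universal vertices; each one joins every clique,
    # so the target K grows along with the graph.
    while K < math.ceil(r * n):
        edges |= {frozenset((n, v)) for v in range(n)}
        n, K = n + 1, K + 1
    # r < K/n: add isolated vertices; no clique can use them, so K
    # stays put while the required ratio falls to meet it.
    while K > math.ceil(r * n):
        n += 1
    return n, edges, K
```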

It is probably also worth mentioning just how many vertices need to get added to the graph in each case, to make sure that we are adding a polynomial number.  If r>s, we will be adding w vertices to satisfy the equation: ⌈s*|V|⌉ + w = ⌈r*(|V|+w)⌉

(These are both ways of expressing K’)

Dropping the ceiling function (since it only leads to a difference of at most one vertex) and solving for w gets us w = (s|V| – r|V|)/(r – 1).  Since r > s, both the numerator and the denominator of that division are negative, so w ends up being positive, and polynomial in |V|.

If r < s, we will be adding w vertices to satisfy the equation:

⌈s*|V|⌉ = ⌈r(|V|+w)⌉

(These are both ways of expressing K)

This can similarly be solved to get w = (s|V| – r|V|)/r.  Since s > r, this is also a positive (and polynomial) number of new vertices.

A possible source of mistakes: I’m pretty sure this reduction works, but we need to be careful that there is a difference between “for any fixed ratio r of |V|” and “for any fixed K”.  Because for a fixed K (say, K=7), the “Does this graph have a 7-Clique?” question can be solved in polynomial time (by enumerating all subgraphs of size 7, for example.  There are (|V| choose 7) such subgraphs, which is O(|V|^7)).  By choosing a ratio instead of a constant K, we gain the ability to scale the size of K’ along with the size of the graph and avoid this issue.  But it is worth mentioning this to students as a possible pitfall.  It’s very easy to do things in a way that effectively treats r|V| as a constant K, which won’t work.

Difficulty: 3, but if you’re going to make students do the algebra to show the number of vertices that are added, bump it up to a 4.

## Bin Packing Take 2

[So WordPress’s search function has failed me.  A search for posts on Bin Packing didn’t turn up my original post, so I went ahead and wrote a whole second post for this problem.  Since this time my reduction uses 3-Partition instead of Partition (and so is a little less trivial for use as a homework problem), I figured I’d leave it for you as an alternate reduction.

I have been thinking off and on about whether it would be useful when I’m done with this project (years from now) to go back and try to find reductions that can be done easier (or harder) than what I’ve shown here, to give more options that are in the 3-6 difficulty range that I think is best for homework problems.  I’m not sure how feasible that task would be, but it’s something I’ll try to keep in mind as I go forward.

Anyway, here’s my post that talks about Bin Packing again:]

On to a new chapter! A4- “Storage and Retrieval”

This first one is a classic problem that I guess I haven’t done yet.

The problem: Bin Packing.  This is problem SR1 in the appendix.

The description: Given a finite set U of items, each with a positive integer size, and positive integers B and K.  Can we split U into K disjoint sets such that the sum of the elements in each set is B or less?

Example: Suppose U was {1,2,3,4,5,6}, K=4, and B= 6.  We want 4 disjoint sets that each sum to 6 or less.  For example:

• {1,5}
• {2,4}
• {3}
• {6}

Note that if K = 3, we would need 3 sets instead of 4, and this wouldn’t be solvable.
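Since the example is tiny, a brute-force check is easy to write (this is just for playing with examples, not part of the reduction):

```python
from itertools import product

def can_pack(U, K, B):
    """Brute force over all K^|U| assignments of items to bins."""
    for assignment in product(range(K), repeat=len(U)):
        loads = [0] * K
        for item, bin_index in zip(U, assignment):
            loads[bin_index] += item
        if max(loads) <= B:
            return True
    return False

print(can_pack([1, 2, 3, 4, 5, 6], 4, 6))  # True, e.g. {1,5},{2,4},{3},{6}
print(can_pack([1, 2, 3, 4, 5, 6], 3, 6))  # False: the items sum to 21 > 18
```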

The simple reduction: G&J on page 124 say that Bin Packing contains 3-Partition as a special case.  So let’s try reducing from there. Recall the definition of 3-Partition:

Given a set A of 3m elements and an integer B, such that the size of each element is strictly between B/4 and B/2 and the sum of all of the elements is m*B, can we split A into m disjoint sets that each sum to exactly B?

Keeping in mind that the bounds on the elements in A mean that there are exactly 3 elements in each set in the partition, we can see how this maps easily to the Bin Packing problem:

• U = A
• K = m
• Use the same B

While it is true that the Bin Packing problem allows the sums to be B or less, and the 3-Partition problem forces the sets to sum to exactly B, the fact that each set has to contain exactly 3 elements and the fact that the sum of all of the elements in U is m*B means that if any set in the Bin Packing answer summed to less than B, some other set would necessarily sum to more than B.

Difficulty: 3.  It is basically the same problem, but I think there is enough work needed to justify the reduction that it makes sense as a good easy homework problem.

## Kth Largest m-Tuple

I think I’m going to move the posts to Wednesday this semester since I teach two 2-hour classes on Tuesday/Thursday.

SP19 is Minimum Sum of Squares

SP20 is Kth Largest Subset and is very similar to our next problem.

The problem: Kth Largest m-Tuple.  This is problem SP21 in the appendix.

The description: Given sets X1 through Xm that all contain positive integers, and integers K and B, are there at least K m-tuples (x1, .., xm) from X1 x X2 x … x Xm for which the sum of the elements in each tuple is at least B?

Example: Let X1 = {1,2,3,4}, X2 = {5,10}, X3 = {1,2}. Notice that the X’s can be different sizes, and can repeat elements.

If B = 15, then the only tuples that sum to at least 15 are (3,10,2), (4,10,1), and (4,10,2).

Reduction: The paper by Johnson and Mizoguchi presents the reduction pretty densely, but here is what I think they are saying: we’re going to use a Turing Reduction to solve the Partition problem, which means we’re given an instance of Partition and assume we can call a subroutine that solves the Kth Largest m-tuple problem multiple times.  Recall that we can use binary search over multiple calls of the subroutine to determine (for example) how many tuples sum to some B or more.  (We need multiple calls because the subroutine is a boolean one, just saying yes or no to a given instance.)

Updated reduction:

This idea came from Said D. in the comments, and I wanted to make sure it got posted here because it’s so simple and elegant.  If we are given a partition instance S = {s1..sn} and need to know if a subset sums to B (= half the sum of all of the elements in S), then the sets we create for the Kth Largest m-tuple instance are:

{s1+1, 1}, {s2+1, 1}, …, {sn+1, 1}

And the target sum that the Kth Largest m-tuple instance looks for is B+n.  Each set will contribute 1 to the tuple sum no matter which element is chosen.  It may also contribute si as well, corresponding to whether or not we “take” that element in the partition instance.
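Here’s Said’s construction in Python, with a toy brute-force check on made-up numbers:

```python
from itertools import product

def tuple_instance(S, B):
    """One two-element set per s_i.  Every tuple collects a 1 from
    each set, plus s_i whenever the "take it" element is chosen."""
    return [(s + 1, 1) for s in S], B + len(S)

# Tuples summing to exactly B + n correspond to subsets of S
# summing to exactly B.
S, B = [3, 1, 4, 2], 5
sets, target = tuple_instance(S, B)
print(sum(1 for t in product(*sets) if sum(t) == target))  # 2: {3,2} and {1,4}
```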

That’s way better than what I was trying to do.  Thanks, Said!

The rest of my not very good reduction:

I’ll leave the rest up here for posterity, but this is way worse than Said’s idea above, and glosses over the fact that you can’t just use any elements you want as the sets; you need to set them up in a way that makes sure you don’t repeat elements when you choose the tuples.

So, in a manner similar to that used in the Kth Largest Subset reduction, we can find the number of tuples that sum to any B we want (including half the sum of all of the elements in the partition instance).  The only tricky part is that our “subsets” are tuples of different sizes.  So we need to run this algorithm multiple times:

• Are there any 1-tuples that solve the partition problem?  (There are O(n) 1-tuples, so a binary search requires O(log n) calls)
• Are there any 2 tuples that solve the partition problem?  (There are O(N^2) 2-tuples, so a binary search requires O(2* log n) calls)
• Are there any n-tuples that solve the partition problem?  (There are O(n!) n-tuples, so a binary search requires O(n * log n) calls)

Thus, we can do it in a polynomial number of calls to the K-th Largest m-tuple solver.

Difficulty: This is a little harder than the Kth largest subset reduction, which I gave a 5, so I’ll make this a 6.

## Expected Component Sum

Sometimes you need a nudge to see the right way to do a reduction.  The reduction to this problem is based on a reduction for a similar problem, which encouraged me to look at the problem in a way that I probably should have noticed myself.

The problem: Expected Component Sum.  This is problem SP18 in the appendix.

The description: Given a collection V of m-dimensional vectors, where each entry in each vector is a non-negative integer.  We’re also given positive integers K and B.  Can we partition V into K disjoint sets V1 through VK such that:

• For each Vi, we look at each position in each vector (from 1 to m), and we sum up the elements in that position in that Vi
• For each Vi,  we find the position with the largest sum
• We sum together the largest position sums of each Vi.  Is that sum at least B?

Example: Suppose we have 4 vectors, each with 5 elements.

• v1 = (1,2,3,4,5)
• v2 = (9,2,4,6,8)
• v3 = (3,7,1,1,4)
• v4 = (2,9,10,3,11)

If K=2, then we can create:

• V1 = {v1, v2}.  Column 5 has the highest sum (13)
• V2 = {v3, v4}.  Column 2 has the highest sum (16)

The total of the sums from each element of the partition is 13+16 = 29.
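Here’s a quick Python version of the objective, checking the example:

```python
def component_sum(partition):
    """The largest column sum of each part, added up across parts."""
    return sum(max(sum(col) for col in zip(*part)) for part in partition)

v1, v2 = (1, 2, 3, 4, 5), (9, 2, 4, 6, 8)
v3, v4 = (3, 7, 1, 1, 4), (2, 9, 10, 3, 11)
print(component_sum([[v1, v2], [v3, v4]]))  # 13 + 16 = 29
```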

Reduction: G&J say to use X3C, and also mention two important facts:

1. The problem is still NP-Complete even if the elements in all vectors are 0 and 1.  This implies to me that the vectors should be Boolean representations of participation in the sets.
2. The problem is no longer NP-Complete if we fix K.  This implies that the K value we choose in the reduction needs to be based on the X3C input somehow, and can’t be a simple number like 2 or 3.

My first pass at the reduction had me create a vector for each set in C, with positions in the vector corresponding to elements in X.  (Thus, each vector would have ones in exactly 3 positions and zeroes everywhere else.)  My natural inclination was to set K to 2 (one set for the cover, one set for “everything else”: the sets in C that were not in the cover).  But that ran afoul of the prohibition on a fixed value for K.

I toyed with the idea of inverting the sets but didn’t get very far.  Then in some web-searching for inspiration, I found a paper by Roy, Lakshmanan, and Liu that works on a similar problem.  They call their version the “Perfect Expected Component Sum” problem; it works similarly except that they fix B to be equal to |C|, and want the final sum to be exactly equal to B, instead of at least B.

The key idea is to have one vector for each element in X, and to have the dimensionality of each vector be |C|, the number of sets in C.  So each vector vi has a 1 in position j if set Cj contains element i.  Now each position in the vector (from 1 to |C|) has exactly three vectors with a 1 in that position (the three elements that make up the set corresponding to that position).  We set K=q and B=3q.
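Here’s a sketch of the construction itself (the tiny X3C instance at the bottom is made up for illustration):

```python
def ecs_from_x3c(X, C):
    """One 0/1 vector per element of X, one coordinate per set in C;
    K = q and B = 3q, as described above."""
    q = len(X) // 3
    vectors = [tuple(1 if x in Cj else 0 for Cj in C) for x in X]
    return vectors, q, 3 * q

# A tiny X3C instance: C' = {C1, C2} covers X exactly.
X = [1, 2, 3, 4, 5, 6]
C = [{1, 2, 3}, {4, 5, 6}, {2, 3, 4}]
vectors, K, B = ecs_from_x3c(X, C)
print(K, B)  # 2 6
```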

If a C’ exists that is an exact cover of X, then C’ consists of {C’1 .. C’q} – exactly q sets from C that contain each element in X exactly once.  Then we can partition V into sets of 3 vectors that correspond to the three elements of each C’i.  So the first partition has the 3 element vectors that correspond to the 3 elements in C’1, the second partition has the 3 element vectors that correspond to the 3 elements in C’2, and so on down.  Each partition will have one column that has a 1 in all 3 elements, and so the maximum sum of all columns will be 3.  Since we have q different sets of vectors in our partition of V, and each contributes 3 to the sum, our total sum is 3q=B.

If we have a partition of vectors whose sum is at least B, notice that no set in C has more than 3 elements, so no column sum of any set of vectors in our partition can be more than 3.  Thus, the only way to reach a sum of at least B is to have a sum of exactly 3 in each of the sets in the partition.  Since there are 3q vectors in V, this can only be accomplished by having exactly 3 vectors in each set in the partition.  Each of these sets of 3 vectors must have at least one column that has 1’s in all three entries.  That column tells us which set to choose for the cover of X.

Difficulty: 5.  I’m pretty embarrassed I didn’t come up with the idea of using elements as vectors and sets as boolean entries in the vectors.  It’s very similar to the graph theory reductions from 3SAT where we have vertices for literals and clauses, and an edge between a literal vertex and a clause vertex if the literal is in that clause.  There’s a similar property there, that each clause has degree 3, that you can exploit.