Pruned Trie Space Minimization

This problem is hard to explain, partially because the definition given by G&J doesn’t map easily onto the trie structure they are talking about.

The problem: Pruned Trie Space Minimization.  This is problem SR3 in the appendix.

The description in G&J: Given a finite set S, a collection F of functions mapping elements of S to positive integers, and a positive integer K.  Can we find a sequence of m distinct functions from F <f1 .. fm> such that:

  • For each pair of elements a and b in S, there is some function fi in the sequence where fi(a) ≠ fi(b)
  • For each i from 1 to m, define N(i) to be the number of distinct tuples X = (x1, .., xi) where more than one a in S has the tuple (f1(a), …, fi(a)) = X.  Is the sum of all of the N(i) values at most K?

A better description: G&J’s definition removes all knowledge of the “tries” from the problem.  I think the Comer and Sethi paper referred to in the appendix does a better job.

First, a trie is a tree that separates a set of strings by letters. The idea is that each string has a unique path through the tree.  Here is the tree used in the paper:


This trie shows the path for the set of strings: {back, bane, bank, bare, barn, band, bang, barb, bark, been} by building the tree by considering letters in the string from left to right.  By using different orders of considering letters, we will get differently shaped tries, with different numbers of internal nodes.

A pruned trie recognizes that long paths of nodes with 1 child don’t actually need to be represented.  For example, once you go down the “b-e” side, the only place you can end up is at “been”.  So the trie is pruned by removing all such chains (we would consider the “e” node a leaf).

What we are interested in doing is finding an ordering on the letters in the string (or, more generally, the “attributes” of an element we are trying to distinguish) in order to minimize the number of nonleaf nodes in the pruned trie.

The actual question we want to solve is: Given a set of strings S and an integer K, can we construct a trie that differentiates the strings in S with K or fewer internal nodes?
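To make the node-counting concrete, here is a small sketch (my own code, not from the paper or G&J) that counts the branching internal nodes of the pruned trie of a set of fixed-length strings, given an order in which to examine the character positions (the “attributes”):

```python
# Sketch (my own code): count branching internal nodes of the pruned trie
# for a set of fixed-length strings, given an attribute (position) order.
def pruned_trie_internal_nodes(strings, order):
    def count(group, depth):
        # A group of one string needs no more branching: its remaining
        # chain is pruned to a leaf, like the "been" chain in the example.
        if len(group) <= 1 or depth == len(order):
            return 0
        buckets = {}
        for s in group:
            buckets.setdefault(s[order[depth]], []).append(s)
        if len(buckets) == 1:
            # Every string agrees on this attribute: a one-child chain
            # node, which the pruned trie does not represent.
            return count(group, depth + 1)
        return 1 + sum(count(b, depth + 1) for b in buckets.values())
    return count(list(strings), 0)
```

On the example set above with the left-to-right order, this counts 4 branching nodes; counting conventions differ on whether chain nodes (like the shared “b” at the top) count, so a figure’s count can differ from this by a constant.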

I think the way this maps to the G&J definition is:

S is the set of strings.  F is the set of attributes (each attribute is a function mapping a string to the value it has at that attribute).  The sequence of functions <f1, …, fm> is the order in which we choose attributes.  So f1(a) gives us the first node in the trie that we go to on the string a, f2(a) the second node we go to, and so on.  The fi(a) ≠ fi(b) requirement says that we need to eventually differentiate each string from every other string, and the N(i) values count the number of internal nodes at each height of the tree.

Example: For the picture shown above, we get the following pruned trie (also from the paper):


This trie has 5 internal nodes.

Reduction: G&J say that the reduction goes from 3DM, but in the paper it goes from 3SAT. So we’ll start with a formula in 3CNF form with n variables and m clauses.  The strings we’ll build will have 3n+3m attributes (you can think of this as strings of length 3n+3m).  The first 2n attributes will correspond to literals (one attribute for the positive setting of a variable, one attribute for the negative setting).  The next 3m attributes will correspond to clauses (3 attributes for the 3 possible positions a variable can appear in a clause), and the last n attributes correspond to variables (to combine the positive and negative literals of each variable).

We will have one string for each literal (a 1 in the attribute matching that literal, a 1 in the attributes matching that literal’s positions in clauses, and a 1 in the attribute matching its variable, 0’s everywhere else).  We will have one string for each clause (a 1 in the attributes for that clause’s three positions, 0’s everywhere else).  Then we will have a sequence of “hard to distinguish” strings made of decreasing numbers of 2’s (with 0’s everywhere else).

Here’s the example construction from the paper (blank spaces are zeros).  It’s a little confusing because they chose n=m=3, but you can see where the various pieces are:


If the formula is satisfiable, then the ordering of attributes where we put all of the literals that form the satisfying arrangement first, then all of the clauses, then the W attributes (for the variables) distinguishes the strings in L with 2n+m internal nodes.

In fact, any trie must have at least K internal nodes to distinguish the strings in L; that can be seen from the table, since we have K strings made up of decreasing numbers of 2’s.  We also have to distinguish the strings in order (the strings with the most 2’s first, then the ones with fewer 2’s, all the way down to the last one with just one 2).  We need to choose one attribute for each variable (its positive or negative literal).  Suppose we choose an attribute Ui (or its negation).  That node in the trie has 3 children:

  • A 2, which distinguishes the string in L.
  • A 1, which distinguishes the string corresponding to that literal in J.
  • A 0, for everything else.

What this means is that we have “distinguished off” the literal string (in J) from the rest (on a 1), which means that the 1 it has in the clause position will not interfere with the 1 in that position of the clause string (in K).  So each clause string can be distinguished at the clause position corresponding to a literal that satisfies the clause.

So, if we have a trie with “only” K internal nodes, the attributes must line up to allow us to have a setting of a variable to satisfy each clause.

Difficulty: 8, with the Comer and Sethi trie definition.  If you are going straight from G&J’s definitions, it’s at least a 9.

Dynamic Storage Allocation

Since Bin Packing was a redo, here is the first real problem in the Storage and Retrieval section.

The problem: Dynamic Storage Allocation.  This is problem SR2 in the appendix.

The description: Given a set A of items.  Each item a in A has size s(a), arrival time r(a) and departure time d(a) (all positive integers).  We’re also given a storage size D.  Can we allocate the items to D “slots” of storage such that:

  • Each item is stored in consecutive slots.  So an element a has to be contained in s(a) adjacent locations from 1 to D.
  • No two items overlap the same slot during the time they are in storage. In other words, if two items a and a’ are mapped to the same slot in D, they must not have any overlap between their arrival and departure times.

Example: Here’s a simple set of items:

Item Number Arrival Departure Size
1 1 2 4
2 2 3 4
3 1 3 2

If D=6, we can store these items by using slots 1-4 to hold both items 1 and 2 (notice that they don’t overlap in time, and having one item arrive right as the other departs is ok), and slots 5-6 to hold item 3.
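A quick way to sanity-check an allocation like this is a brute-force checker (a sketch of my own, not from a source).  I use the convention that an item occupies its slots from arrival (inclusive) to departure (exclusive), which is why one item can arrive exactly as another departs:

```python
# Sketch (mine): verify a dynamic storage allocation.
# items: name -> (arrival, departure, size); start: name -> first slot (1-based)
def valid_allocation(items, start, D):
    used = {}
    for name, (arr, dep, size) in items.items():
        if start[name] < 1 or start[name] + size - 1 > D:
            return False  # item falls outside the D slots
        for t in range(arr, dep):          # departure time is exclusive
            for slot in range(start[name], start[name] + size):
                if (t, slot) in used:
                    return False           # two items overlap in time and space
                used[(t, slot)] = name
    return True
```

The example allocation passes: items 1 and 2 share slots 1-4 (they don’t overlap in time), and item 3 takes slots 5-6.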

Reduction: The reference to Stockmeyer in G&J is to a private communication.  I tried working out my own reduction from 3-Partition, but couldn’t make it work.  My approach was to make the sizes of the elements in the 3-Partition instance map to times in this problem, since G&J give the hint that you can make all sizes 1 or 2.  But I couldn’t figure out how to make it work.  I sort of expect there to be 3 possible sizes for a 3-partition problem, instead of 2.

Eventually, I found a paper by Lim that uses regular Partition, using the storage allocation problem as a special case of a problem involving berthing ships.  (The ship problem adds extra complications, like each ship needing a specified clearance between it and other ships.)  He starts with a set S of elements and defines T to be the sum of all of the element sizes.  He then creates one item in the storage allocation problem for each element of S: for an element a with size s(a), the new item has size s(a), arrival time 2, and departure time 3 (so it exists for just one unit of time).  He also adds 9 new items that have the effect of creating only two sequences of storage slots that can hold the items from S, each of size T/2.  We can place the items in these slots if and only if there is a partition of S.

Difficulty: 7.  I don’t think the idea is too hard to understand, but the 9 items that are created are hard to come up with (even if you understand what their purpose is, coming up with items that actually accomplish that purpose is pretty hard).

Ratio Clique

Last week it was pointed out to me that my reduction for Balanced Complete Bipartite Subgraph was wrong, and in my searches to fix it, I found that the real reduction (by Johnson) used a variant of Clique that stated (without proof) that Clique is NP-Complete even if K is fixed to be |V|/2.  I looked up the Clique problem in G&J, and they say in the comments that it is NP-Complete for K equal to any fixed ratio of |V|.

I thought this was a neat easy problem that fit in the 3-6 difficulty range I mentioned last week and decided it was worth a post.  But thinking about this brings up some subtle issues relating to ratios and constants that are common sources of errors among students.  I’ll talk about that at the end.

The problem: I don’t know if there is an official name, so I’m calling it “Ratio Clique”.  It is mentioned in the comments to GT19 (Clique).

The description: Given a graph G=(V,E) and a fixed number r, 0 < r < 1, does G have a clique of size r*|V| or more?

Example:  Here’s a graph we’ve used for a previous problem:

maximum fixed-length disjoint paths

If r = .5, then r*|V| = 3.5.  So we’re asking if a clique of 3.5 or more vertices exists (which really means a clique of 4 or more vertices).  It does not exist in this graph.  If r ≤ \frac{3}{7}, then we would be looking for a clique of size 3, which does exist in this graph (vertices b, c, and t).

The reduction: We will be reducing from the regular Clique problem.  Since we want to show this “for any fixed value of r”, we can’t change r inside our reduction.

So we’re given a graph G=(V, E) and a K as our instance of Clique. We need to build a graph G’=(V’, E’) that has a fixed K’ = ⌈r*|V’|⌉.

G’ will start with G, and will add new vertices to the graph.  The vertices we add depend on the ratio s of K to |V|    (K = ⌈s*|V|⌉).  K’ is initially K, but may change as vertices are added to the graph.

If r > s, then we need to add vertices to V’ that will connect to each other vertex in V’, and will increase K’ by 1.  This increases the ratio of \frac{K'}{|V'|}, and we keep adding vertices until that ratio is at least r.

If G has a clique of size K, then the extra vertices in K’ can be added to the clique to form a larger clique (since these new vertices connect to every other vertex).

If G’ has a clique of size K’, notice that it must contain at least K vertices that were initially in G. (We only added K’-K new vertices).  These vertices that exist in G are all connected to each other and so will form a clique in G.

If r < s, then we will add vertices to V’ that are isolated (have no edges connecting to them).  K’ will stay equal to K.  Each vertex we add will reduce the ratio of \frac{K'}{|V'|}, and we keep adding vertices until  K=⌈r*|V’|⌉.

Since these new vertices can not be part of any clique in G’, any clique in G’ must consist only of vertices from G.  Since K=K’, this gives us a clique of size K in both graphs.

It is probably also worth mentioning just how many vertices need to get added to the graph in each case, to make sure that we are adding a polynomial number.  If r>s, we will be adding w vertices to satisfy the equation: ⌈s*|V|⌉ + w = ⌈r*(|V|+w)⌉

(These are both ways of expressing K’)

Dropping the ceiling function (since it only leads to a difference of at most one vertex) and solving for w gets us w = \frac{(s|V|-r|V|)}{(r-1)}.  Since r > s and r < 1, both the numerator and denominator of that fraction are negative, so w ends up being positive, and polynomial in |V|.

If r < s, we will be adding w vertices to satisfy the equation:

⌈s*|V|⌉ = ⌈r(|V|+w)⌉

(These are both ways of expressing K)

This can similarly be solved to get w = \frac{(s|V|-r|V|)}{r}.  Since s > r, this is also a positive (and polynomial) number of new vertices.
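Putting the two cases together, here is a small sketch (my own, not from a source) that computes how many padding vertices each case adds, keeping the ceilings rather than dropping them:

```python
import math

# Sketch (mine): how many padding vertices the Ratio Clique reduction adds.
# n = |V|, K = clique bound, r = target ratio.  Returns the number of
# universal (connected-to-all) vertices, isolated vertices, and K'.
def ratio_clique_padding(n, K, r):
    universal = isolated = 0
    Kp = K
    # Case r > s: add universal vertices; each one bumps both K' and |V'|.
    while Kp < math.ceil(r * (n + universal)):
        universal += 1
        Kp += 1
    # Case r < s: add isolated vertices until K = ceil(r * |V'|).
    while Kp > math.ceil(r * (n + universal + isolated)):
        isolated += 1
    return universal, isolated, Kp
```

For instance, with n=7, K=3, r=.5 (the example graph), one universal vertex suffices, giving K’=4 = ⌈.5*8⌉.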

A possible source of mistakes: I’m pretty sure this reduction works, but we need to be careful that there is a difference between “for any fixed ratio r of |V|” and “for any fixed K”.  For a fixed K (say, K=7), the “Does this graph have a 7-Clique?” problem can be solved in polynomial time (by enumerating all subgraphs of size 7, for example.  There are n \choose 7 subgraphs, which is O(n^7)).  By choosing a ratio instead of a constant K, we gain the ability to scale the size of K’ along with the size of the graph and avoid this issue.  But it is worth mentioning this to students as a possible pitfall.  It’s very easy to do things in a way that effectively treats r|V| as a constant K, which won’t work.

Difficulty: 3, but if you’re going to make students do the algebra to show the number of vertices that are added, bump it up to a 4.

Bin Packing Take 2

[So WordPress’s search function has failed me.  A search for posts on Bin Packing didn’t turn up this post, so I went ahead and wrote a whole second post for this problem.  Since this time my reduction uses 3-Partition instead of Partition (and so is a little less trivial for use as a homework problem), I figured I’d leave it for you as an alternate reduction.

I have been thinking off and on about whether it would be useful when I’m done with this project (years from now) to go back and try to find reductions that can be done easier (or harder) than what I’ve shown here, to give more options that are in the 3-6 difficulty range that I think is best for homework problems.  I’m not sure how feasible that task would be, but it’s something I’ll try to keep in mind as I go forward.

Anyway, here’s my post that talks about Bin Packing again:]

On to a new chapter! A4- “Storage and Retrieval”

This first one is a classic problem that I guess I haven’t done yet.

The problem: Bin Packing.  This is problem SR1 in the appendix.

The description: Given a finite set U of items, each with a positive integer size, and positive integers B and K.  Can we split U into K disjoint sets such that the sum of the elements in each set is B or less?

Example: Suppose U was {1,2,3,4,5,6}, K=4, and B= 6.  We want 4 disjoint sets that each sum to 6 or less.  For example:

  • {1,5}
  • {2,4}
  • {3}
  • {6}

Note that if K = 3, we would need 3 sets instead of 4, and this wouldn’t be solvable.
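A small brute-force decision procedure (my own sketch; exponential, so only for checking tiny examples like this one) confirms both cases:

```python
# Sketch (mine): exponential brute-force Bin Packing decision procedure,
# for verifying small examples only.
def bin_packing(items, K, B):
    bins = [0] * K
    def place(i):
        if i == len(items):
            return True
        tried = set()
        for j in range(K):
            # Skip bins over capacity, and bins with a load we already tried
            # for this item (symmetry pruning).
            if bins[j] + items[i] > B or bins[j] in tried:
                continue
            tried.add(bins[j])
            bins[j] += items[i]
            if place(i + 1):
                return True
            bins[j] -= items[i]
        return False
    return place(0)
```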

The simple reduction: G&J on page 124 say that Bin Packing contains 3-Partition as a special case.  So let’s try reducing from there. Recall the definition of 3-Partition:

Given a set A of 3m elements and an integer B such that the size of each element is between B/4 and B/2 and the sum of all of the elements is m*B, can we split A into m disjoint sets that each sum to exactly B?

Keeping in mind that the bounds on the elements in A mean that there are exactly 3 elements in each set in the partition, we can see how this maps easily to the Bin Packing problem:

  • U = A
  • K = m
  • Use the same B

While it is true that the Bin Packing problem allows the sums to be B or less, and the 3-Partition problem forces the sets to sum to exactly B, the fact that all of the sets have to contain 3 elements and the fact that the sum of all of the elements in U is m*B means that if any set in the Bin Packing answer sums to less than B, some other set will necessarily sum to more than B.

Difficulty: 3.  It is basically the same problem, but I think there is enough work needed to justify the reduction that it makes sense as a good easy homework problem.

Kth Largest m-Tuple

I think I’m going to move the posts to Wednesday this semester since I teach two 2-hour classes on Tuesday/Thursday.

SP19 is Minimum Sum of Squares

SP20 is Kth Largest Subset and is very similar to our next problem.

The problem: Kth Largest m-Tuple.  This is problem SP21 in the appendix.

The description: Given sets X1 through Xm that all contain positive integers, and integers K and B, are there at least K m-tuples (x1, .., xm) from X1 x X2 x … x Xm for which the sum of the elements in each tuple is at least B?

Example: Let X1 = {1,2,3,4}, X2 = {5,10}, X3 = {1,2}. Notice that the X’s can be different sizes, and can repeat elements.

If B = 15, then the only tuples that sum to at least 15 are (3,10,2), (4,10,1), and (4,10,2).
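For small instances like this, the count can be checked by brute force (my own sketch, not part of any reduction):

```python
from itertools import product

# Sketch (mine): brute-force count of the m-tuples from X1 x ... x Xm
# whose element sum is at least B.
def count_tuples_at_least(sets, B):
    return sum(1 for t in product(*sets) if sum(t) >= B)
```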

Reduction: The paper by Johnson and Mizoguchi presents the reduction pretty densely, but here is what I think they are saying:  We’re going to use a Turing Reduction to solve the Partition problem, which means we’re given an instance of Partition, and assume we can call a subroutine to solve the Kth Largest m-tuple problem multiple times to solve the Partition instance.  Recall that we can use Binary Search on multiple calls of the subroutine to determine (for example) how many tuples sum to some B or more.  (We need multiple calls because the subroutine is a boolean one, just saying yes or no to a given instance).

Updated reduction:

This idea came from Said D. in the comments, and I wanted to make sure it got posted here because it’s so simple and elegant.  If we are given a Partition instance S and need to know whether a subset sums to B (= half the sum of all of the elements in S), then the sets we create for the Kth Largest m-tuple instance are:

{s1+1, 1}, {s2+1, 1}, …, {sn+1, 1}

And the number the Kth Largest m-tuple instance is looking for is B+n.  Each set will contribute 1 to the tuple sum no matter which element is chosen.  It may also contribute si, corresponding to whether we “take” that element in the partition instance or not.

That’s way better than what I was trying to do.  Thanks, Said!
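Here is a sketch of Said’s construction in code (my own; I simplify by checking the target sum directly rather than binary-searching threshold queries against a yes/no oracle, which is what the actual Turing reduction would do):

```python
from itertools import product

# Sketch (mine) of Said's construction: from a Partition instance S, build
# the sets {s_i + 1, 1}; a subset of S sums to half the total exactly when
# some n-tuple sums to B + n.
def partition_via_tuples(S):
    total = sum(S)
    if total % 2:
        return False          # odd total: no partition possible
    sets = [(s + 1, 1) for s in S]
    target = total // 2 + len(S)
    return any(sum(t) == target for t in product(*sets))
```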

The rest of my not very good reduction:

I’ll leave the rest up here for posterity, but this is way worse than Said’s idea above, and glosses over the fact that you can’t just use any elements you want as the sets- you need to set them up in a way that makes sure you don’t repeat elements when you choose the tuples.

So, in a manner similar to that used in the Kth Largest Subset reduction, we can find the number of tuples that sum to any B we want (including half the sum of all of the elements in the partition instance).  The only tricky part is that our “subsets” are tuples of different sizes.  So we need to run this algorithm multiple times:

  • Are there any 1-tuples that solve the partition problem?  (There are O(n)  1-tuples, so a binary search requires O(log n) calls)
  • Are there any 2 tuples that solve the partition problem?  (There are O(N^2) 2-tuples, so a binary search requires O(2* log n) calls)
  • Are there any m-tuples that solve the partition problem? (There are O(n^m) m-tuples, so a binary search requires O(m * log n) calls)

Thus, we can do it in a polynomial number of calls to the K-th Largest m-tuple solver.

Difficulty: This is a little harder than the Kth largest subset reduction, which I gave a 5, so I’ll make this a 6.

Expected Component Sum

Sometimes you need a nudge to see the right way to do a reduction.  The reduction to this problem is based on a reduction for a similar problem, which encouraged me to look at the problem in a way that I probably should have noticed myself.

The problem: Expected Component Sum.  This is problem SP18 in the appendix.

The description: Given a collection V of m-dimensional vectors, where each entry in each vector is a non-negative integer.  We’re also given positive integers K and B.  Can we partition V into K disjoint sets V1 through VK such that:

  • For each Vi, we look at each position in each vector (from 1 to m), and we sum up the elements in that position in that Vi
  • For each Vi,  we find the position with the largest sum
  • We sum together the largest position sums of each Vi.  Is that sum at least B?

Example: Suppose we have 4 vectors, each with 5 elements.

  • v1 = (1,2,3,4,5)
  • v2 = (9,2,4,6,8)
  • v3 = (3,7,1,1,4)
  • v4 = (2,9,10,3,11)

If K=2, then we can create:

  • V1 = v1 and v2.  Column 5 has the highest sum (13)
  • V2 = v3 and v4.  Column 2 has the highest sum (16)

The total of the sums from each element of the partition is 13+16 = 29.
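The objective is easy to recompute mechanically; here is a short sketch (my own code) that scores a given partition of the vectors:

```python
# Sketch (mine): score a partition of vectors for Expected Component Sum.
# For each part, take the largest column sum; add those maxima together.
def component_sum(partition):
    total = 0
    for part in partition:
        cols = [sum(v[j] for v in part) for j in range(len(part[0]))]
        total += max(cols)
    return total
```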

Reduction: G&J say to use X3C, and also mention two important facts:

  1. The problem is still NP-Complete even if the elements in all vectors are 0 and 1.  This implies to me that the vectors should be Boolean representations of participation in the sets.
  2. The problem is no longer NP-Complete if we fix K.  This implies that the K value we choose in the reduction needs to be based on the X3C input somehow, and can’t be a simple number like 2 or 3.

My first pass of working on the reduction had me create a vector for each set in C, with positions in the vector corresponding to elements in X.  (Thus, each vector would have ones in exactly 3 positions and zeroes everywhere else.)  My natural inclination was to set K to 2 (one set for the cover, one set for “everything else”- the sets in C that were not in the cover).  But that ran afoul of the prohibition of a fixed value for K.

I toyed with the idea of inverting the sets but didn’t get very far.  Then in some web-searching for inspiration, I found a paper by Roy, Lakshmanan, and Liu that worked on a similar problem.  They call their version the “Perfect Expected Component Sum” problem; it works similarly, except they fix B to be equal to |C| and want the final sum to be exactly equal to B, instead of at least B.

The key idea is to have one vector for each element in X, and to make the dimensionality of each vector the number of sets in C.  So vector vi has a 1 in position j if set Cj contains element i.  Now each position in the vector (from 1 to |C|) has exactly three vectors with a 1 in that position (the three elements that make up the set corresponding to that position).  We set K=q and B=3q.
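The construction itself is only a couple of lines; here is a sketch (my own code, numbering X’s elements 0 through 3q-1):

```python
# Sketch (mine): build the Expected Component Sum instance from an X3C
# instance.  X's elements are 0..3q-1; C is a list of 3-element sets.
def x3c_to_ecs(q, C):
    # Row i is the vector for element i; column j marks membership in C_j.
    V = [[1 if i in c else 0 for c in C] for i in range(3 * q)]
    return V, q, 3 * q   # the vectors, K = q, and B = 3q
```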

If a C’ exists that is an exact cover of X, then C’ consists of {C’1 .. C’q} – exactly q sets from C that contain each element in X exactly once.  Then we can partition V into sets of 3 vectors that correspond to the three elements of each C’i.  So the first partition has the 3 element vectors that correspond to the 3 elements in C’1, the second partition has the 3 element vectors that correspond to the 3 elements in C’2, and so on down.  Each partition will have one column that has a 1 in all 3 elements, and so the maximum sum of all columns will be 3.  Since we have q different sets of vectors in our partition of V, and each contributes 3 to the sum, our total sum is 3q=B.

If we have a partition of vectors that sums to at least B, notice that no set in C has more than 3 elements, so no column sum of any set of vectors in our partition can be more than 3.  Thus, the only way to reach a sum of at least B is to have a sum of 3 in each of the q sets in the partition.  Since there are 3q vectors in V, this can only be accomplished by having exactly 3 vectors in each set of the partition.  Each of these sets must have at least one column with 1’s in all three entries.  That column tells us which set to choose for the cover of X.

Difficulty: 5.  I’m pretty embarrassed I didn’t come up with the idea of using elements as vectors and sets as boolean entries in the vector.  It’s very similar to the graph theory reductions from 3SAT where we have vertices for edges and clauses, and an edge between a literal vertex and a clause vertex if the literal is in that clause.  There’s a similar property there that each clause has degree 3, that you can exploit.

Numerical Matching With Target Sums

In an effort to make my semesters easier, during breaks I do most of the research on the problems and write quick sketches of the reductions out.  This way when I get to the weekly post, most of the hard math work is done, and I don’t get surprised by a super hard problem.

(I’m doing something similar over our winter break at the present.  I’ve  got sketches up through the middle of April, and I’m currently working on problem SR13- “Sparse Matrix Compression”- which is an “unpublished manuscript”  problem that I’m having a lot of trouble with.  Keep your fingers crossed).

Anyway, I was looking through my notes today and I realized that I’d skipped this problem!  Luckily, I think the reduction is pretty easy.

The problem: Numerical Matching With Target Sums.  This is problem SP17 in the appendix.

The description: Given two sets X and Y, each with the same number (m) of elements, and each element with a positive integer size.  We’re also given a “target vector” B, also of m elements, consisting of positive integer entries.  Can we create m sets A1 through Am such that:

  • Each Ai has one element from X and one element from Y
  • Each element in X and Y appears exactly once in some Ai
  • The sum of the sizes of the elements in each Ai is exactly Bi?

Example: I’ll use an example derived from last week’s Numerical 3-Dimensional Matching example because I think it will illustrate how the reduction will work:

  • X = {12,11,7,5}
  • Y = {1,1,4,5}
  • B = {13,12,11,10}

(W from last week was {1,2,3,4}, and B was 14.)

Letting A1 be the first elements of X and Y, A2 being the second elements of X and Y, and so on down, gives us a solution.

Reduction: G&J say to use Numerical 3-Dimensional Matching, and don’t even bother to mark it as “unpublished results”, probably because they think it’s so easy.

Our Numerical 3DM instance is three sets: W, X, and Y, and a bound B.  We need 2 sets and a “bound vector” for the instance of the Numerical Matching problem.  So what we do is:

  • X’ = X
  • Y’ = Y
  • Each bi in the B vector will be set to B-wi.  This is the amount we need the element from X and Y to add up to, so that when we add in the element from W, we get B.

If we have a solution to the Numerical 3-Dimensional Matching instance, then each Ai in that solution consists of 3 elements: wi, xj and yk that sum to B.  Then in our Numerical Matching With Target Sums instance, we have a set Ai‘ where xj + yk sum to B-wi.  The same is true in the reverse direction.
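The translation is mechanical; a sketch in code (my own, not from a source):

```python
# Sketch (mine): Numerical 3DM -> Numerical Matching With Target Sums.
# Keep X and Y, and fold W and B into the target vector b_i = B - w_i.
def n3dm_to_target_sums(W, X, Y, B):
    return X, Y, [B - w for w in W]
```

Run on the example above (W={1,2,3,4}, B=14), this produces exactly the target vector {13,12,11,10}.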

Difficulty: 3, which may be too high.  I can see people getting confused by the fact that the sets in the 3DM instance can be taken in any order, but the B vector in the Target Sums problem needs Ai‘s elements to sum exactly to bi, and wondering how to force the correct W element into that spot.

(The answer is that you define it when you build B.  We set b1 to be “the sum that works when you use wi“, so it (or something with the exact same size, so we can swap the elements) has to go in that position in the vector).

Numerical 3-Dimensional Matching

SP15 is 3-Partition.

The next problem is one of G&J’s “unpublished results” problems.  I tried figuring out an elegant way to doing it, but couldn’t make it happen.

The problem: Numerical 3-Dimensional Matching.  This is problem SP16 in the appendix.

The description: Given three sets W, X, and Y, each containing the same number (m) of elements with positive “sizes”, and a positive bound B.  Can we create m sets A1 through Am (containing 3 elements each), such that:

  • Each Ai has exactly one element from W, X, and Y
  • The sum of the sizes of the elements in each Ai is exactly B
  • Each element in W, X and Y is in some Ai

Example: Suppose we have the following sets:

  • W has elements with sizes {1,2,3,4}
  • X has elements with sizes {12,11,7,5}
  • Y has elements with sizes {1,1,4,5}  (the ability to allow repeat numbers is why we define the sets as elements with sizes rather than sets of integers)

If B=14, then the partition where A1 is the first element in W, X, and Y, A2 is the second element in W, X, and Y, and so on gives each Ai set a sum of 14.  Obviously, we don’t need to choose corresponding elements from W, X, and Y to form the sets (for example, rearranging the elements in X to be in increasing order doesn’t change whether the problem can be solved, just the exact composition of the Ai sets)
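Small instances like this can be decided by brute force (my own sketch; it tries all m! ways of assigning X’s elements to W’s, since the Y element each triple then needs is forced):

```python
from itertools import permutations

# Sketch (mine): brute-force Numerical 3DM decision for small instances.
# For each pairing of W and X elements, the needed Y sizes are forced
# (B - w - x), so we just compare multisets.
def numerical_3dm(W, X, Y, B):
    m = len(W)
    target_Y = sorted(Y)
    for perm in permutations(range(m)):
        needed = sorted(B - W[i] - X[perm[i]] for i in range(m))
        if needed == target_Y:
            return True
    return False
```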

Reduction: I tried doing a reduction using 3-partition, but got stuck (I’ll show it below, in case you want to try to fix it).  G&J refer you to Theorem 4.4 in the book, which is the proof that 3-partition itself is NP-Complete.

We can follow that along and do similar steps to our problem:

  • Theorem 4.3 shows how to turn 3DM into 4-partition (a problem like 3-partition, but where each set in the solution has 4 elements instead of 3).  The sets that are created in the 4-partition solution come from 4 different places (page 98 calls them “a ui, a wi[·], an xj[·], and a yk[·]”).  Since these partitioned sets all add to the same total (B) and come from 4 disjoint parent sets, we can see how we could do basically the same reduction and show that the “numerical 4-dimensional matching problem” is NP-Complete.
  • Theorem 4.4 shows how to turn a 4-partition problem into a 3-partition problem.  The idea is to add enough “pairing” and “filler” elements to the 3-partition instance to make any 4-partition set be split into two 3-partition sets, each consisting of 2 elements from the 4-partition, plus the “pairing” element of one of the 2 elements chosen.  We can do something similar converting numerical 4-dimensional matching to numerical 3-dimensional matching.  (The difference is that we are given specifically which sets the elements are coming from.)  So, if we’re given W, X, Y, and Z in our numerical 4DM instance, we construct W’ to be elements from W and Y, X’ to be elements from X and Z, and Y’ to be the pairing elements of pairs from W’ and X’.  We then need to add enough filler elements to our 3 sets in a similar way to the 3-partition proof (again, the difference is that we have to specifically assign them to W’, X’, or Y’.  But that can be determined by how the 3-partition proof allocates the items)

Difficulty: If you have gone over the 3-partition reduction, this is probably a 6.  Lots of tricky math but doable in a (hard) homework.    But keep in mind you’re tacking it on to the difficulty 8 of understanding the 3-partition reduction in the first place.

My reduction I couldn’t get to work: I really want there to be an easier way to do this.  I tried reducing from 3-partition directly because the problems are so similar.  Here is where I got to:

We’re given a 3-partition instance S, and an integer B.  Our goal is to split S into sets of size 3 that all add up to B.

So, let’s use S to create 3 sets in our numerical 3DM instance:

  • W has all of the elements in S
  • X has all of the elements in S, but the sizes are each increased by 10B.
  • Y has all of the elements in S, but the sizes are each increased by 100B.

This would make B’ be 111B.

If S has a 3-partition, then for each set {si, sj, sk}, we take the three sets {wi, xj, yk}, {wj, xk, yi}, and {wk, xi, yj}.  This will solve the numerical 3DM instance.

My problem comes showing this in the other direction.  If we have a numerical 3DM solution, we can only construct the 3-partition instance if the sets in the 3DM solution arrange themselves nicely like they do above.  I need to show that if the 3DM solution has {wi, xj, yk}, then the set in the 3DM solution that contains wj also contains xi (or xk) and yk (or yi).  I think you can get there by using the rules about how the bounds of the elements in the 3-partition instance work, but the work you need to do to show that it’s true makes this way of doing things no longer “easier” than the Theorem 4.4 proof I sketched above, so I gave up on it.

I still wish there was a more elegant way to transform this problem, though.

Subset Product

I’m setting this to post automatically on the 27th.  Hopefully, it posts correctly.

Sp12 is Partition

Sp13 is Subset Sum

This next problem is related to those, but has a cool twist.

The problem: Subset Product.  This is problem SP14 in the appendix.

The description: Given a set A of positive integers, and a positive integer B, is there a subset A’ of A such that the product of the sizes of all elements in A’ is exactly B?

(The G&J definition of the problem defines A as a set of generic elements, each with a positive integer “size”.  This is more general, in that it allows two different elements in A to have the same size.  But most of the time this and similar problems (for example, Subset Sum and Partition) are encountered, it is with the definition above.)

Example: Let A = {1,2,3,4,5,6}.  If B = 60, then setting A’ to {3,4,5} solves the problem.  If B = 61, then no subset of A will multiply to B.  61 is easy to see since it’s prime, but some non-prime numbers also have no solution: 35 = 5×7, for example, and 7 is not in A.
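A quick brute-force check confirms the example (the helper name is mine; this exponential-time search is just for sanity-checking small instances, not part of any reduction):

```python
from itertools import combinations

def has_subset_product(A, B):
    """Return True if some nonempty subset of A multiplies to exactly B."""
    for r in range(1, len(A) + 1):
        for combo in combinations(A, r):
            prod = 1
            for x in combo:
                prod *= x
            if prod == B:
                return True
    return False

print(has_subset_product([1, 2, 3, 4, 5, 6], 60))   # True ({3,4,5} works)
print(has_subset_product([1, 2, 3, 4, 5, 6], 61))   # False (61 is prime)
print(has_subset_product([1, 2, 3, 4, 5, 6], 35))   # False (needs a factor of 7)
```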

The reduction: G&J say to use X3C, and I’ll admit that this idea came to me while I was wrestling with the reductions for Comparative Containment and its relatives, with all of their work creating sets based on prime numbers.

We start with an instance of X3C: a set X with 3q elements, and a collection C of 3-element subsets of X.

What we’re going to do is assign a prime number to each element in X, so the first 3q prime numbers will be allocated.

Each element in C will be represented by a number whose size is the product of the 3 prime numbers corresponding to its elements.  Notice that since each of the elements in X is represented by a distinct prime number, the only way for two elements of C to generate the same number is if the two elements of C were exactly the same set (in which case, the duplicate can be safely removed).

Our set A will be the collection of these C numbers, and our integer B will be the product of the first 3q primes.

So, if a cover C’ exists, multiplying all of the elements in C’ together will give us a number that is the product of the first 3q primes, because each element x in X appears in exactly one c in C’, and thus each x’s assigned prime appears exactly once in the product of all of the numbers that represent the sets in C’.

If a subset A’ of A that multiplies to B exists, then the prime factorization of B gets us each of the first 3q prime numbers exactly once.  Each of the elements in A’ corresponds to a set in C, and the prime factorization of that element in A’ will “cover” three elements in X.  The union of all such coverings will cover X entirely.

The only hard part that remains is to decide whether we can actually find the first 3q prime numbers in polynomial time.  The prime number theorem says that the nth prime number is approximately n log n, and trial division to check whether a number m is prime takes O(\sqrt{m}) steps.  Thus, we should be able to find the first 3q prime numbers in polynomial time just by checking each number individually, starting from 2.  Obviously, more efficient methods also exist.
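The whole construction can be sketched in a few lines.  The function names (`first_primes`, `x3c_to_subset_product`) are my own, and X3C triples are passed as plain lists:

```python
def first_primes(n):
    """Find the first n primes by trial division, starting from 2."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

def x3c_to_subset_product(X, C):
    """Build the Subset Product instance (A, B) from an X3C instance."""
    primes = first_primes(len(X))
    prime_of = dict(zip(X, primes))   # a distinct prime for each element of X
    A = []
    for triple in C:
        prod = 1
        for x in triple:
            prod *= prime_of[x]
        A.append(prod)                # one number per 3-set in C
    B = 1
    for p in primes:                  # B = product of the first 3q primes
        B *= p
    return A, B

# X = {1..6}, C = {{1,2,3}, {4,5,6}, {1,4,5}}: the first two sets cover X,
# and their numbers multiply to B.
A, B = x3c_to_subset_product([1, 2, 3, 4, 5, 6], [[1, 2, 3], [4, 5, 6], [1, 4, 5]])
print(A, B)          # [30, 1001, 154] 30030
print(A[0] * A[1])   # 30030, matching B
```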

Difficulty: 4.  It’s easy to go off in very different directions (for example, trying to compare this problem to Sum of Subsets by realizing that taking the logarithm of a product gets you a sum). Also the prime number theorem stuff isn’t obvious, and I don’t know how you mention its existence to students without spoiling the entire reduction.

Still, once they have that possibly obscure bit of knowledge, this is a good easy reduction that many students can follow.

3-Matroid Intersection

This is a problem that was really hard for me to understand and explain.  It may be less so for other people, though.

The problem: 3-Matroid Intersection.  This is problem SP11 in the appendix.

The description: Given three matroids (E,F1), (E, F2), (E,F3), and a positive integer K.  Can we find a subset E’ of E with K elements that is in the intersection of F1, F2, and F3?

A matroid (E,F) is a set of elements E, and a family F of subsets of E with the following properties:

  • If a set S is in the family F, then all subsets of S are also in F.
  • If S and S’ are two sets in F with |S| = |S’|+1, then there is some element e that is in S but not S’, and adding e to S’ gives us a set that is also in F.

A common example of matroids comes from graph theory: Given a graph G=(V,E), the matroid based on this graph is (E,F), where E is the set of edges in the graph, and F is the collection of sets of edges that contain no cycles (i.e., forests).  This follows the two rules above:

  • If S is a collection of edges that do not form a cycle, then any subset of S is also a collection of edges that do not form a cycle.
  • If I have two collections of edges S and S’, and S is larger than S’ by one element, we need to find an edge in S that connects two connected components (trees in the forest) of S’.  Since each tree on K vertices has exactly K-1 edges, S has at least one edge that is not contained within any single tree of S’ (adding such an edge to a tree causes a cycle).  That edge must either connect two trees together, connect a vertex in some tree to a vertex not used in S’, or connect two vertices not used in S’.  In any case, adding this edge to S’ gives us another forest.
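Testing membership in this graphic matroid is easy in code: a set of edges is independent exactly when it forms no cycle, which a union-find structure detects.  This is a sketch of my own, with edges as vertex pairs:

```python
def is_forest(edges):
    """Return True if the given undirected edges contain no cycle."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:         # u and v already connected: this edge closes a cycle
            return False
        parent[ru] = rv      # union the two trees
    return True

print(is_forest([('s', 'a'), ('a', 'b'), ('b', 'c')]))   # True: a path is a forest
print(is_forest([('s', 'a'), ('a', 'b'), ('b', 's')]))   # False: a triangle
```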

Example:  Since the reduction we’re going to use involves building three matroids from a basic graph, let’s do that.  Here is a graph:


We’re going to use this graph to induce 3 matroids.  The first will be the graph matroid (using the rules defined above) on the undirected version of this graph:


So F1 is the set of all forests in the above graph.

F2 is the collection of sets of edges in the directed graph in which each vertex has at most one incoming edge.  So, sets like {(s,a), (a,t)} or {(a,b), (b,c), (c,t)}.  The exception is that vertex s must have zero incoming edges in any set in the family.

F3 is built similarly, except we count outgoing edges, and t (instead of s) is the vertex restricted to have 0 outgoing edges.

If K=4, then the following set of edges is in all three families: {(s,a), (a,b), (b,c), (c,t)}.  It’s acyclic, so it’s in F1.  No vertex in that set has indegree or outdegree of more than 1 (and s has indegree 0 and t has outdegree 0), so it’s in F2 and F3.

Just to clarify, because it’s confusing to me: the set E of “elements” of the matroid is the set of edges in the graph, and the families F are collections of sets of edges.  We just found a set of 4 edges that is in all three families.
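Here is a sketch checking that example set against all three families.  The helper names are mine, the vertices s and t are hard-coded for this example, and the forest test for F1 ignores edge direction:

```python
def in_F1(edges):
    """No undirected cycle (graphic matroid independence)."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def in_F2(edges):
    """Each vertex has at most one incoming edge, and s has none."""
    heads = [v for (u, v) in edges]
    return 's' not in heads and len(heads) == len(set(heads))

def in_F3(edges):
    """Each vertex has at most one outgoing edge, and t has none."""
    tails = [u for (u, v) in edges]
    return 't' not in tails and len(tails) == len(set(tails))

E = [('s', 'a'), ('a', 'b'), ('b', 'c'), ('c', 't')]
print(in_F1(E) and in_F2(E) and in_F3(E))   # True: E is in the intersection
```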

The reduction: G&J don’t give a reference to this problem, but suggest using 3DM. In my searches, I found that Wikipedia and this set of lecture notes from MIT both want to use Hamiltonian path between two vertices.   (Notice that the “corrected” reduction for HP actually specifies the two vertices to connect between, so the HP reduction also proves this variant).  Here is my explanation of the reduction in the MIT lecture notes.  The class was taught by Michael Goemans, and I got the notes from his web site.

So, we start with an instance of HP: a possibly directed graph G=(V,E), and two special vertices s and t that we want to see if there is a Hamiltonian Path between.  We build the three matroids described in the examples based on the edge set E: one based on forests in E, one based on collections of edges that have indegree at most 1 on each vertex, and one based on collections of edges that have outdegree at most 1 on each vertex.  Set K = |V|-1.

Notice that an intersection of these three matroids has to be a collection of paths that do not encounter the same vertex twice.  If we can create a collection that has K edges in it, the path will start at s (because F2 ensures that no edges going into s will be part of the intersection), encounter each vertex once (because F1 ensures that we have no cycles, and thus hit each vertex at most once), and end at t (because F3 ensures that no edges leaving t will be part of the intersection).  Thus, we have an intersection of K edges if and only if G has a Hamiltonian path from s to t.

Difficulty: It’s an 8 for me.  Since this is taught in a relatively small part of one lecture of a class (yes, a class at MIT, but still just a class), presumably it may be less for other people.  But I have a lot of trouble thinking in terms of matroids, and even now, I’m not really convinced that the three families of edges we create are polynomial in size.