
Algebraic Equations over GF[2]

AN8 is Quadratic Diophantine Equations.

The problem: Algebraic Equations over GF[2].  This is problem AN9 in the appendix.

The description: Given a set P of m polynomials over n variables (x_1 through x_n), where each polynomial is a sum of terms, and each term is either 1 or a product of distinct x_i's.  Can we find a value u_i in {0,1} for each x_i that makes every polynomial 0, if we define 1+1=0 and 1*1=1?

Example: It helps to think of GF[2] as a boolean logic world, where + is XOR and * is AND.  So, suppose we have three variables, and the polynomials:

  • P_1 = 1 + x_1x_2 + x_2x_3
  • P_2 = x_1 + x_1x_2x_3

Then setting x_1=0, x_2=1, x_3=1 makes both polynomials 0.
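
Here's a quick way to check the example mechanically, using the XOR/AND reading (just a sketch; the function names are mine):

```python
# Evaluate the two example polynomials over GF(2): + is XOR (^), * is AND (&).
def p1(x1, x2, x3):
    return 1 ^ (x1 & x2) ^ (x2 & x3)

def p2(x1, x2, x3):
    return x1 ^ (x1 & x2 & x3)

print(p1(0, 1, 1), p2(0, 1, 1))   # prints "0 0", so the assignment works
```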

Reduction: G&J say that Fraenkel and Yesha use X3C, but the paper I found uses 3SAT.  We’re given a formula that has n variables and m clauses.  The variables of our polynomials will be the same as the variables in the 3SAT instance.  For each clause, we build a polynomial by:

  • Replacing a negated literal (~x) with the expression 1 + x.  (Remember, + means XOR in this system)
  • Replacing an OR of two subexpressions (A ∨ B) with the expression A + B + A*B
  • XORing the whole resulting expression with 1.

Notice that the first replacement makes ~x have the opposite truth value of x, the second replacement rule is logically equivalent to A∨B, and the third step makes the polynomial 0 if and only if the clause evaluates to 1.  So the polynomial is 0 if and only if the clause is satisfied, and all polynomials are 0 if and only if all clauses are satisfied.
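
Here's a minimal sketch of how those three replacement rules could be implemented, representing a GF[2] polynomial as a set of monomials (each monomial a frozenset of variable indices, with frozenset() standing for the constant 1).  The representation and names are mine, not from the paper:

```python
ONE = {frozenset()}                 # the constant term 1

def add(p, q):                      # + is XOR, so duplicate terms cancel
    return p ^ q

def mul(p, q):                      # multiply term by term; x*x = x over GF(2)
    out = set()
    for s in p:
        for t in q:
            out ^= {s | t}
    return out

def literal(v, negated):            # ~x becomes 1 + x
    p = {frozenset([v])}
    return add(ONE, p) if negated else p

def lor(p, q):                      # A or B becomes A + B + A*B
    return add(add(p, q), mul(p, q))

def clause_poly(lits):              # lits like [(1, False), (2, True), (3, False)]
    p = literal(*lits[0])
    for v, neg in lits[1:]:
        p = lor(p, literal(v, neg))
    return add(ONE, p)              # finally, XOR the whole thing with 1

# (x1 or ~x2 or x3): the result is the set {x2, x1*x2, x2*x3, x1*x2*x3},
# which is 0 exactly on the satisfying assignments of the clause.
print(clause_poly([(1, False), (2, True), (3, False)]))
```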

Difficulty: 5.  This is easy to follow.  It’s a little tricky to make students come up with the equivalence rules above, but I think if you can explain it right, it’s not that bad.

Non-Divisibility of a Product Polynomial

Now back to the problem we skipped over last week.

The problem: Non-Divisibility of a Product Polynomial.  This is problem AN6 in the appendix.

The description: Just like Non-Trivial Greatest Common Divisor, we’re given a set of m sequences of pairs of integers (so sequence A_i is <(a_i[1],b_i[1]) … (a_i[k],b_i[k])>), where each b element is ≥ 0.  We’re also given an integer N.

Just like Non-Trivial Greatest Common Divisor, we build the polynomials \sum_{j=1}^k a_i[j]*z^{b_i[j]}, for each i from 1 to m.  But now we want to multiply them all together.  If we do, is the product not divisible by z^N-1?

Example: Suppose A_1 was <(1,3), (2,2), (-1,1), (-2,0)> and A_2 was <(5,4), (3,2), (7,0)>.  Then A_1‘s polynomial is z^3 + 2z^2 - z - 2, and A_2‘s polynomial is 5z^4 + 3z^2 + 7.  The product of these is: 5z^7 + 10z^6 - 2z^5 - 4z^4 + 4z^3 + 8z^2 - 7z - 14.  If N=2, then it turns out that z^2-1 is a factor (it’s actually a factor of A_1‘s polynomial), so the decision problem would say “No”.
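
A small script (my own notation, not Plaisted’s) to double-check the example: build each polynomial from its (coefficient, exponent) pairs, multiply, and do synthetic division by the monic polynomial z^2 - 1.

```python
def poly_from_pairs(pairs):                     # coefficients listed from z^0 up
    coeffs = [0] * (max(b for _, b in pairs) + 1)
    for a, b in pairs:
        coeffs[b] += a
    return coeffs

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def divisible_by_monic(p, d):                   # d must be monic, e.g. z^N - 1
    p = p[:]
    for top in range(len(p) - 1, len(d) - 2, -1):
        factor = p[top]
        for i, c in enumerate(d):
            p[top - (len(d) - 1) + i] -= factor * c
    return not any(p[:len(d) - 1])              # zero remainder means divisible

A1 = poly_from_pairs([(1, 3), (2, 2), (-1, 1), (-2, 0)])
A2 = poly_from_pairs([(5, 4), (3, 2), (7, 0)])
print(divisible_by_monic(poly_mul(A1, A2), [-1, 0, 1]))   # True
```

Since z^2-1 does divide the product, this instance answers “no” to the non-divisibility question, as the example says.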

Reduction: The reduction for this is again by Plaisted, using his polynomials that come from logical statements.  For each clause C of a 3SAT formula, you build the polynomial P_C for that clause, and from that Q_C = (x^N-1)/P_C.  He mentions in his paper that it is “easy to verify” (but doesn’t actually prove) that the product of all of these Q_C‘s divided by (x^N-1) is analytic iff S is inconsistent.  I think the reason is that Q_C represents in his polynomial the ways to assign variables to make a formula false.  So if all assignments make it false, the product is equal to all assignments.  I still don’t quite see how the divisibility follows.

Difficulty: 8.  It’s about as hard as the last one.

Non-Trivial Greatest Common Divisor

We’ll do this next one out of order because I think this order is better explained by the paper.

The problem: Non-Trivial Greatest Common Divisor.  This is problem AN7 in the appendix.

The description: Given a set of m sequences of pairs of integers (so sequence A_i is <(a_i[1],b_i[1]) … (a_i[k],b_i[k])>), where each b element is ≥ 0.  I’m pretty sure k can be different in each sequence.  If we build the polynomials \sum_{j=1}^k a_i[j]*z^{b_i[j]}, for each i from 1 to m, does the greatest common divisor of those polynomials have degree greater than 0?

Example: I think the hardest part of this description is picturing the polynomials, so here’s a simple example showing how that works.  Suppose m=2, A_1 was <(20,3),(4,2),(30,1)>, and A_2 was <(2,4),(8,1)>.

(As an aside, I wish G&J didn’t use A for both the name of a sequence and lowercase a for the name of one of the entries of each pair.  They’re not related.)

The polynomial corresponding to A_1 is 20x^3 + 4x^2 + 30x, and the polynomial corresponding to A_2 is 2x^4 + 8x.  The polynomial 2x is a divisor of both, and I think it’s the GCD.  (If not, the GCD is larger, and so also has degree > 0).  So, this problem instance would have the answer “yes”.
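
If sympy is available, the example is easy to verify (this is just a check of the example, not part of the reduction):

```python
from sympy import symbols, gcd, degree

x = symbols('x')
g = gcd(20*x**3 + 4*x**2 + 30*x, 2*x**4 + 8*x)
print(g, degree(g, x) > 0)   # 2*x  True, so this is a "yes" instance
```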

Reduction:

This reduction is also by Plaisted, who did last week’s reduction, but it is in his 1984 paper.  His reduction is from 3SAT.  In the paper, he gives an algorithm that turns a logical formula into a polynomial.  The polynomial p that gets created has the property that p(z) = 0 if and only if z is an M-th root of unity (a solution to z^M=1).  This leads to a way of setting the truth values of the logical formula: if ω is such a root of unity, set variable P_j to true if ω^{M/q_j}=1, where q_j is the j-th prime number.  It turns out if you do that, you make the entire logical formula true.

So, for our problem, we’re handed a 3SAT instance with n variables.  Let M be the product of the first n primes.  The formula is satisfiable iff we can make all clauses true, which happens iff there is an M-th root of unity ω such that the polynomial of each clause is 0 (so we make a polynomial for each clause in the SAT formula).  He states an “easy” identity (without proof) that the polynomial of A∧B is the gcd of the polynomials of A and B.  Therefore, if such an ω exists for all of the clause polynomials, the gcd has degree greater than 0.

Difficulty: 8.  This is a little easier to follow because he states the properties of his polynomial up front.  I’d like to see more proofs of his “easy” properties and to be honest, I’m not sure how the last step happens (how having ω exist means that the gcd has degree > 0).  I also don’t know how you’d ask students to come up with these polynomials on their own.

Exponential Expression Divisibility

AN4 is Comparative Divisibility.

This next problem is related to one we’ve done before, but I think it merits its own entry.

The problem: Exponential Expression Divisibility.  This is problem AN5 in the appendix.

The description: Given two sequences A = a_1, …, a_n and B = b_1, …, b_m of positive integers, and an integer q. Does \prod_{i=1}^{n} (q^{a_{i}}-1) divide \prod_{i=1}^{m} (q^{b_{i}}-1)?

Example: Let A={2,3} and B={1,4,5}.  If q=4, then the first product comes to 15×63 = 945 and the second product comes to 3×255×1023 = 782,595.  945 does not divide 782,595, so this is a “no” instance.
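
Here's a quick brute-force check of the example (plain Python; the names are mine):

```python
from math import prod

def first_divides_second(A, B, q):
    lhs = prod(q**a - 1 for a in A)      # 15 * 63 = 945
    rhs = prod(q**b - 1 for b in B)      # 3 * 255 * 1023 = 782595
    return rhs % lhs == 0

print(first_divides_second([2, 3], [1, 4, 5], 4))   # False
```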

Reduction: Plaisted’s paper calls this a “refinement” of Polynomial Non-Divisibility.

The reduction for that built a polynomial called “Poly(C_j)” and showed that x^N-1 is a factor of \prod_{j=1}^{k}Poly(~C_j)(x) if and only if S is inconsistent.  As I said in that post, there was a lot I don’t understand about it, because he doesn’t give clear proofs or explanations.  He uses some more manipulations to say that the SAT formula is inconsistent iff 2^{mn}-1 divides \prod_{j=1}^{k} \frac{(2^{mn}-1) * Denom(C_j)(2^m)}{Num(C_j)(2^m)}.  (“Num” and “Denom” were used in the creation of “Poly”.)  This gives us a restatement of the Exponential Expression Divisibility problem where q = 2.  Since there was nothing special about the 2, it could work for any integer, and thus the reduction works in general.

Difficulty: 9.  Just like the Polynomial Non-Divisibility problem, I have a very hard time following what he’s doing here.  I wish I could find some better explanations.

Simultaneous Divisibility of Linear Polynomials

Sorry for missing last week- my grading backlog is getting out of hand.

The problem: Simultaneous Divisibility of Linear Polynomials.  This is problem AN3 in the appendix.

The description: Given two sets A and B, each containing n vectors, each vector having m+1 integer entries (so, for example, vector a_i has values a_i[0] through a_i[m]), can we find m positive integers x_1 through x_m such that for each pair of vectors a_i and b_i,

(a_i[0] + a_i[1]*x_1 + a_i[2]*x_2 + … + a_i[m]*x_m) divides (b_i[0] + b_i[1]*x_1 + b_i[2]*x_2 + … + b_i[m]*x_m)?

Example: Suppose a_1 was (1,2,3), b_1 was (2,4,6), a_2 was (4,2,4), and b_2 was (7,9,10).  Then if I pick x_1 = 3, x_2 = 2, a_1‘s equation evaluates to 13 and b_1‘s evaluates to 26.  a_2‘s equation evaluates to 18, and b_2‘s equation evaluates to 54.  Since both a values divide their corresponding b values, we have found an x_1 and x_2 that work.
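
A small check of the example (my own helper names):

```python
def value(vec, xs):      # vec = (v[0], v[1], ..., v[m]), xs = (x1, ..., xm)
    return vec[0] + sum(c * x for c, x in zip(vec[1:], xs))

A = [(1, 2, 3), (4, 2, 4)]
B = [(2, 4, 6), (7, 9, 10)]
xs = (3, 2)
# 13 | 26 and 18 | 54, so (x1, x2) = (3, 2) works
print(all(value(b, xs) % value(a, xs) == 0 for a, b in zip(A, B)))   # True
```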

Reduction: The paper I found by Lipschitz that has the reduction reduces from Quadratic Congruences.  He starts by assuming that m ≤ 2n, because at most 2n of the a_i and b_i are independent (and so the extra variables can be made equivalent to the first 2n variables by transformation).  He also uses a variation of Quadratic Congruences that states the problem is still NP-Complete if c ≤ b/2.  Then, given a, b, and c, we build our divisibility problem with 5 formulas over the variables x, y, u, and v:

  • x+u | c+1
  • 4y + v | b^2-1
  • x+b | y-b^2
  • x+b+1 | y-(b+1)^2
  • b | y-a

It turns out that we can find an x such that x^2 ≡ a (mod b) if and only if we can find x, y, u, and v to satisfy those divisibility problems.

If we can solve the quadratic congruence, then setting y = x^2, u = c-x+1, and v = b^2 - 4x^2 - 1 satisfies all of the divisibility equations (the first sum becomes c+1, and the second becomes b^2-1).
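
Here's a tiny sanity check of that forward direction on a made-up instance (a=4, b=12, c=5, so x=2 satisfies x^2 ≡ a (mod b) with x ≤ c ≤ b/2); the instance is mine, not from the paper:

```python
a, b, c = 4, 12, 5
x = 2                                       # x*x % b == a and x <= c
y, u, v = x**2, c - x + 1, b**2 - 4*x**2 - 1

checks = [
    (c + 1) % (x + u) == 0,                 # x+u | c+1
    (b**2 - 1) % (4*y + v) == 0,            # 4y+v | b^2-1
    (y - b**2) % (x + b) == 0,              # x+b | y-b^2
    (y - (b + 1)**2) % (x + b + 1) == 0,    # x+b+1 | y-(b+1)^2
    (y - a) % b == 0,                       # b | y-a
]
print(all(checks))   # True
```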

In the other direction, if we have found x,y,u, and v that works, we know:

  • x ≤ c (which is ≤ b/2) from the first equation
  • y ≤ b^2/4 from the second equation
  • y = x^2 + k(x+b)*(x+b+1) for some k ≥ 0 from the 3rd and 4th equations

Suppose k is not 0, and thus y is not equal to x^2.  Then y has to be > b^2, which violates the second fact above.  So k is really 0 and y = x^2.  Then the last equation is a restatement of the quadratic congruence problem, and y (= x^2) is congruent to a mod b.

Difficulty: 7.  I think the way this all works out is very slick, and I like how the equations give you the answer, but I have no idea how you’d come up with these equations on your own.

Simultaneous Incongruences

Back to the problems we’ve skipped over, starting with a cool take on the classic “Chinese Remainder Theorem”.

The problem: Simultaneous Incongruences.  This is problem AN2 in the appendix.

The description: Given n pairs of positive integers (a_1, b_1) … (a_n, b_n), can we find an integer x such that x \not\equiv a_i (mod b_i) for all i?

(G&J also add the rule that a_i ≤ b_i for all i, but we can easily make that happen by doing division)

Example: I think it’s easier to see this as the actual congruences:

Can we find an x such that:

  • x \not\equiv 1 mod 2
  • x \not\equiv 2 mod 3, and
  • x \not\equiv 3 mod 4?

If we choose x = 4, we’ll see that it works.  For a simple example of a case that fails (both examples are checked in the snippet after this list), we can do:

  • x \not\equiv 1 mod 3
  • x \not\equiv 2 mod 3, and
  • x \not\equiv 3 mod 3
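
Here's the brute-force check promised above (searching x up to the lcm of the moduli is enough, since everything is periodic with that period):

```python
from math import lcm

def has_solution(pairs):                  # pairs of (a_i, b_i)
    bound = lcm(*(b for _, b in pairs))
    return any(all(x % b != a % b for a, b in pairs) for x in range(bound))

print(has_solution([(1, 2), (2, 3), (3, 4)]))   # True  (x = 4 is one witness)
print(has_solution([(1, 3), (2, 3), (3, 3)]))   # False (every residue mod 3 is excluded)
```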

Reduction: I found this reduction in the Algorithmic Number Theory book by Bach and Shallit.  Their Theorem 5.5.7 calls this the “Anti-Chinese Remainder theorem”.  They reduce from 3SAT.

Our formula will have t variables, and each clause in our formula is made up of 3 literals, which we’ll represent as C_i = (z_{a_i} ∨ z_{b_i} ∨ z_{c_i}).  For each a_i, find p_{a_i}, the a_i-th prime number, and find p_{b_i} and p_{c_i} similarly.  Define a_i' to be 0 if z_{a_i} is a positive literal, and 1 if it’s a negative literal, and define b_i' and c_i' similarly.  Now for each clause, use the regular Chinese Remainder Theorem to find a value x_i where:

  • x_i \equiv a_i' (mod p_{a_i})
  • x_i \equiv b_i' (mod p_{b_i})
  • x_i \equiv c_i' (mod p_{c_i})

Our system of incongruences will be:

  • x \not\equiv 2 (mod 3)
  • x \not\equiv 2, 3, 4 (mod 5)
  • x \not\equiv 2, 3, …, p_i - 1 (mod p_i) for each prime p_i up to p_t   (p_t is the t-th prime number)

The above incongruences are there to force x to be 0 or 1 mod each p_i.  I think these correspond to the true-false values of the variables in the SAT instance (x being 0 mod p_i means setting that variable false, making it 1 mod p_i means setting it true).  We then add one more family of incongruences:

  • x \not\equiv x_i (mod p_{a_i}*p_{b_i}*p_{c_i}) for each i

This turns out to be O(n+t^3) incongruences by the Prime Number Theorem.

Each clause in the SAT instance is satisfied unless all 3 literals are false.  By the way we’ve created our a_i' (and b_i' and c_i'), this means that the variables can’t be set equal to all of a_i', b_i', and c_i'.  Because of how our x_i was chosen, this means that x is not congruent to x_i (mod p_{a_i}*p_{b_i}*p_{c_i}).
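
To illustrate the per-clause step, here's how an x_i could be built with an ordinary CRT routine (the clause, primes, and residues here are made-up, and I'm using sympy's crt just for brevity):

```python
from sympy.ntheory.modular import crt

# Clause (z1 or ~z2 or z3): variables 1, 2, 3 get primes 2, 3, 5; the residue
# is 0 for a positive literal and 1 for a negated one.
moduli   = [2, 3, 5]
residues = [0, 1, 0]
x_i, modulus = crt(moduli, residues)
print(x_i, modulus)   # x_i = 10, the value that encodes "all three literals false"
```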

Difficulty: 7.  This is a cool short reduction, but the way the x value works is something you don’t usually see.

Quadratic Diophantine Equations

Since this problem’s reduction is basically the same as last week’s, let’s skip ahead to it now.

The problem: Quadratic Diophantine Equations.  This is problem AN8 in the appendix.

The description: Given positive integers a, b, and c, can we find positive integers x and y such that ax^2 + by = c?

Example: Let a=1, b=2, and c=5.  Then we’re asking if positive integers x and y exist to solve x^2 + 2y = 5.  This is true when x=1 and y=2.  If we change b to 3, then there are no positive integers x and y where x^2 + 3y = 5.  (y has to be positive, so the only y that has a chance of working is y=1, which would require x to be √2)
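
Both cases in the example are small enough to brute-force (searching up to c works because ax^2 ≤ c and by ≤ c bound both variables):

```python
def solvable(a, b, c):
    return any(a * x * x + b * y == c
               for x in range(1, c + 1)
               for y in range(1, c + 1))

print(solvable(1, 2, 5), solvable(1, 3, 5))   # True False
```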

Reduction: This is in the same paper by Manders and Adleman that had the reduction for Quadratic Congruences.  In fact, the reduction is almost exactly the same.  We go through the same process of creating the Subset Sum problem, and build the same H and K at the end.  The only difference is the instance: we build the quadratic (K+1)^3 * 2 * 8^{m+1} * (H^2-x^2) + K(x^2-r^2) - 2K*8^{m+1}*y = 0.  We can multiply out the values to get a, b, and c.

The rest of the reduction uses similar crazy algebra to the last problem.

Difficulty: 9, for the same reasons as last week’s problem.

Quadratic Congruences

On to the next section!  The “Algebra and Number Theory” section should take us through the end of the year.

The problem: Quadratic Congruences.  This is problem AN1 in the appendix.

The description: Given positive integers a, b, and c, can we find a positive integer x < c such that x^2 ≡ a (mod b)?

Example: Let a = 3, b = 7, c = 13.  It turns out that for every x < c, x^2 is congruent to either 0, 1, 2, or 4 mod b.  So this isn’t solvable.  But if we change a to 2, then the first x that works is 3.

One thing I learned in doing this problem is that the squares mod b will always form a cycle with period b.  The proof is a pretty cool algebra problem if you set it up right.
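
A quick brute-force check of the example, plus a look at which residues are squares mod 7:

```python
def solvable(a, b, c):
    return any(x * x % b == a % b for x in range(1, c))

print(solvable(3, 7, 13), solvable(2, 7, 13))   # False True
print(sorted({x * x % 7 for x in range(7)}))    # [0, 1, 2, 4], as noted above
```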

Reduction: The paper by Manders and Adleman uses 3SAT.  Starting from the list of variables, they define a set Σ that holds all 3-element clauses of 3 different literals from that list of variables.  We won’t actually build this set (that’s exponential), but we can list out a number j for each of those clauses.  Then define τ_Φ to be -1 times the sum of 8^j over the j values that correspond to the clauses in our actual formula.

For each variable i, we define f_i^+ to be the sum of 8^j for each j in Σ whose clause has that variable positively, and f_i^- to be -1 times the sum of 8^j for each j in Σ whose clause has the variable negatively.

Let m be |Σ|, set n = 2m + the number of variables, and define a set C with values c_0 through c_n:

  • c_0 = 1
  • If j is odd (= 2k-1) and ≤ 2m, c_j = -8^k/2
  • If j is even (= 2k) and ≤ 2m, c_j = -8^k
  • If j is > 2m (= 2m+i), then c_j = (f_i^+ + f_i^-)/2

Set τ = τ_Φ + the sum of all of the c_j‘s + the sum of all of the f_i‘s.

This actually turns into a Subset Sum problem: Let C be our set, and count each value positively if we take it, and negatively if we don’t take it.

But we still need to prove that this Subset Sum problem is solvable exactly when the original problem is.

Create a set p_0 .. p_n of consecutive prime numbers where the first one (p_0) is 13.  Then define θ_j values, each chosen as the smallest number satisfying:

  • θ_j ≡ c_j (mod 8^{m+1})
  • θ_j ≡ 0 (mod the product of each (p_i)^{n+1}, except for i = j)
  • θ_j is NOT ≡ 0 (mod p_j)

Set H = the sum of all of the θ_j‘s.  Set L = the product of all of the (p_i)^{n+1}.  We are finally ready to create our Quadratic Congruence instance:

b = 2*8^{m+1}*K.

Let d = the inverse of (2*8^{m+1}+K), mod b.

Then a = d*(K*τ^2 + 2*8^{m+1}*H^2).

c = H.

From here, it’s a bunch of algebra to show that an x exists if and only if the original formula was satisfiable.

Difficulty: 9.  Maybe it should be 10.  I don’t know enough number theory to be convinced that the θ’s exist and can be computed in polynomial time.  The algebra I’m glossing over is pretty complicated as well.

Comparative Vector Inequalities

This problem has a typo in G&J.  I’ll explain it once I get through the problem description.

The problem: Comparative Vector Inequalities.  This is problem MP13 in the appendix.

The description: Given two sets of m-tuples, X and Y.  X and Y can possibly be of different sizes (but each tuple in X and Y is always of length m).  Can I find an m-tuple \bar{z} such that if there are K different tuples \bar{x_i} in X that satisfy \bar{x_i} \geq \bar{z}, then there are fewer than K tuples \bar{y_i} in Y that satisfy \bar{y_i} \geq \bar{z}?

An m-tuple \bar{u} is \geq an m-tuple \bar{v} if and only if each element of \bar{u} is \geq the corresponding element of \bar{v}.

The typo in G&J is that they say that it’s ok for there to be K or less (instead of strictly less) tuples in \bar{y} that satisfy the inequality.  But my book has an “update” written on the inside back cover that says it’s wrong.

Example: Suppose X is:

x1 1 2 3
x2 4 5 6
x3 7 8 9

…and Y is:

y1 1 1 1
y2 4 4 4
y3 7 7 7

If we choose \bar{z} as (4,4,5), then both x_2 and x_3 are ≥ it in X, but only y_3 is ≥ it in Y.
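
Checking the example mechanically (my own helper names):

```python
def dominates(u, z):                       # u >= z componentwise
    return all(a >= b for a, b in zip(u, z))

X = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
Y = [(1, 1, 1), (4, 4, 4), (7, 7, 7)]
z = (4, 4, 5)
k_x = sum(dominates(x, z) for x in X)
k_y = sum(dominates(y, z) for y in Y)
print(k_x, k_y, k_y < k_x)                 # 2 1 True, so z is a witness
```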

Reduction: G&J say to use Comparative Containment, which is a good choice.  Comparative Containment gives you two sets R and S that are subsets of some set U (we used X in that problem, but let’s call it U to reduce confusion with the X in this problem), each with weights.  In our Comparative Containment reduction, we reduced to a version where all of the weights were 1, so we’ll use that version here.  The problem then asks: can we find a subset U’ of U such that there are more sets in R that have U’ as a subset than there are sets in S that have U’ as a subset?

We’ll make X a set of |U|-tuples: each element of X is a 0/1 vector corresponding to an element of R.  So if U was the numbers from 1-10, and an element of R was {2,4,7}, the corresponding element of X would be:

{0,1,0,1,0,0,1,0,0,0}

We do something similar for Y, where tuples correspond to elements in S.

The \bar{z} we pick will correspond to our U’. For any tuple in X or Y to be ≥ \bar{z}, \bar{z} will have to consist of only 0’s and 1’s.  A tuple in X (or Y) is ≥ \bar{z} if it has 1’s in it in all of the places there are 1’s in \bar{z}, and possibly 1’s in other places too.  This means that the elements in U corresponding to the 1’s in \bar{z} are a subset of the corresponding element of R (or S).

So the \bar{z} vector corresponds to a subset U’ of U that is a solution to the Comparative Containment problem.
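
Here's a minimal sketch of that translation (the universe and the sets R and S below are made-up, just to show the indicator-vector idea):

```python
U = range(1, 11)                 # the universe {1, ..., 10}

def indicator(subset):           # a subset of U becomes a 0/1 tuple of length |U|
    return tuple(1 if i in subset else 0 for i in U)

R = [{2, 4, 7}, {2, 4}]          # hypothetical sets in R
S = [{2, 7}]                     # hypothetical sets in S
X = [indicator(r) for r in R]
Y = [indicator(s) for s in S]
print(X[0])                      # (0, 1, 0, 1, 0, 0, 1, 0, 0, 0), as in the example
```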

Difficulty: 4.  The reduction itself isn’t that hard since they’re basically the same problem.  But the problems themselves are a little hard to wrap your head around.

Partially Ordered Knapsack

Continuing our “Variants on the Knapsack Problem” theme..

The problem: Partially Ordered Knapsack.  This is problem MP12 in the appendix.

The description: Like usual, we’re given a set U, where each element u in U has positive integer size s(u) and value v(u), a maximum total size B, and a target total value K.  We are also given a partial order \lessdot on the elements of U, which we can represent as a directed acyclic graph.  Can we find a subset U’ of U where the total size of all elements in U’ is at most B, the total value of all elements in U’ is at least K, and for every element u in U’, if some other u’ is \lessdot u, then u’ is also in U’?

Example: Suppose we have the following 3 elements:

Item 1 2 3
Profit 3 3 5
Weight 3 3 5

If B=6 and K=6, and there is no partial order, then taking items 1 and 2 makes a valid solution.  But suppose we add a partial order graph in which item 3 is \lessdot both item 1 and item 2.  Then taking either item 1 or 2 means we have to also take item 3, which makes our size too big.  So this version can’t be solved.
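
The example is small enough to check by brute force; here's a sketch (my own encoding: the partial order is a dict mapping each item to the items that must come with it):

```python
from itertools import combinations

def best_value(sizes, values, B, required):
    best = 0
    items = list(sizes)
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            chosen = set(subset)
            closed = all(required.get(u, set()) <= chosen for u in chosen)
            if closed and sum(sizes[u] for u in chosen) <= B:
                best = max(best, sum(values[u] for u in chosen))
    return best

sizes  = {1: 3, 2: 3, 3: 5}
values = {1: 3, 2: 3, 3: 5}
print(best_value(sizes, values, 6, {}))                  # 6: items 1 and 2 fit
print(best_value(sizes, values, 6, {1: {3}, 2: {3}}))    # 5: K = 6 is unreachable
```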

Reduction: G&J have this as a “no reference” reduction, but I found a paper by Johnson and Niemi that has it, and I’m glad I did because it’s pretty slick.  The reduction is from Clique, so we’re given an undirected graph G=(V, E) and an integer K.  We need to build a partial order DAG (where the vertices will be knapsack items), and also sizes and values for each item.

The DAG has a vertex for each vertex and edge in G (I’ll be talking about these as “vertices from V” and “vertices from E”).  Edges in the DAG connect each vertex from V to the vertices from E that correspond to edges of G incident on it.  So there is no path of length more than 1 in the partial order (no edges come out of the vertices from E).

Both the value and size of each vertex from V are |E|+1, and the value and size of each vertex from E are 1.

B’ and K’ will both be set to K(|E|+1) + K(K-1)/2.  It’s worth noting that the first half of that expression is the total profit and size of K vertices from V, and the second half is the total profit and size of the edges connecting those vertices if they form a clique.

Suppose we have a clique in G.  Then all of the vertices in the clique, plus all of the edges in the clique form a solution to the partially ordered knapsack problem, as described above.

If we have a solution to the Partially Ordered Knapsack problem, that means we have a set of vertices that respects the partial order, with total value at least K’ and total size at most B’.  Since the value and size of each vertex are the same, that means that the total value = the total weight = K’ = B’.  The only way to get that is if we have K vertices from V and K(K-1)/2 vertices from E.  Since the partial order is built such that we can’t take a vertex from E without also taking both vertices from V that are its endpoints, this means that we have a set of K(K-1)/2 edges of G whose endpoints all lie among the K chosen vertices from V, which forms a clique.
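
Here's a short sketch of the construction itself (graph and variable names are mine):

```python
def clique_to_pok(V, E, K):
    size = {}                    # size(u) = value(u) for every item
    required = {}                # item -> items that must also be taken
    for v in V:
        size[('v', v)] = len(E) + 1
    for (a, b) in E:
        size[('e', (a, b))] = 1
        required[('e', (a, b))] = {('v', a), ('v', b)}   # an edge needs both endpoints
    bound = K * (len(E) + 1) + K * (K - 1) // 2          # this is both B' and K'
    return size, required, bound

size, required, bound = clique_to_pok([1, 2, 3], [(1, 2), (1, 3), (2, 3)], 3)
print(bound)   # 15 for a triangle with K = 3
```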

Difficulty: 5.  I like how the formula for B’ in the reduction is one where you can easily see what all of the pieces of it are doing, and where they all come from.