Tag Archives: 3SAT

Equilibrium Point

This next reduction is confusing to me, and I wonder if it’s because there is a typo in the paper.

The problem: Equilibrium Point.  This is problem AN15 in the appendix.

The description: Given a set X = {x_1, ..., x_n} of variables, a collection F = {F_1, ..., F_n} of integer product polynomials over those variables, and a set of "ranges" M = {M_1, ..., M_n}, each a subset of the integers.  Can we find a sequence Y = {y_1, ..., y_n}, where each y_i ∈ M_i, such that for each i and for all y ∈ M_i, F_i(y_1, ..., y_{i-1}, y_i, y_{i+1}, ..., y_n) ≥ F_i(y_1, ..., y_{i-1}, y, y_{i+1}, ..., y_n)?

Example: This concept of an "equilibrium point" is best thought of from the perspective of Game Theory.  The functions F are the utility functions for each player.  The sequence Y is the set of choices each player makes.  We are asking whether we can find a set of values in Y where no player i can improve their personal F_i score by changing their y_i value to something else.

So the classic "Prisoner's Dilemma" problem can be represented in these terms: there are 2 players, so n is 2.  Each range is {0,1}, where 0 means "stay silent" and 1 means "betray".  F_1 is defined by a table:

                       Player 2 stays silent   Player 2 betrays
Player 1 stays silent          -1                     -3
Player 1 betrays                0                     -2

F2 is defined similarly (the 0 and -3 scores switch places).

Notice that if we choose y_1 = y_2 = 0 (both sides stay silent), then F_1(0,0) = F_2(0,0) = -1.  But this is less than F_1(1,0) = 0, where player 1 betrays.  So this is not an equilibrium point.

y1=y2=1 is an equilibrium point, where both functions return -2.  Any player changing their choice from 1 to 0 will see their F function go from -2 to -3.
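To make the definition concrete, here is a small brute-force sketch in Python (all names are mine, not from any paper) that checks every choice profile of the game above and reports the equilibrium points:

from itertools import product

# Payoff tables for the Prisoner's Dilemma: 0 = stay silent, 1 = betray.
# payoffs[i](y) is player i's score for the choice profile y = (y1, y2).
payoffs = [
    lambda y: [[-1, -3], [0, -2]][y[0]][y[1]],  # F1: rows are player 1's choice
    lambda y: [[-1, 0], [-3, -2]][y[0]][y[1]],  # F2: the 0 and -3 switch places
]
ranges = [[0, 1], [0, 1]]

def is_equilibrium(y):
    # y is an equilibrium point iff no player can improve their own score
    # by unilaterally changing their choice.
    for i, F in enumerate(payoffs):
        for alt in ranges[i]:
            alternative = list(y)
            alternative[i] = alt
            if F(alternative) > F(y):
                return False
    return True

for y in product(*ranges):
    print(y, is_equilibrium(y))  # only (1, 1) prints True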

Reduction: Sahni does this in his "Computationally Related Problems" paper that we've used in the past to do some graph reductions.  This reduction is from 3SAT.   I'll just say now that he could have avoided a lot of manipulation if he'd used One-In-Three 3SAT.  From a 3SAT instance, we build a game where there is one player for each clause, and the range of choices for each player is {0,1}.  The goal is to make a function f_i for each clause C_i that is 1 if and only if the corresponding clause is true.  I'll skip over the manipulations he does because he's using a harder SAT problem than he needs to.

Define h_i(x') to be 2 times the product of all of the f_i(x') values (for some literal x': if x' is a positive literal, use the variable; if it's a negated literal, use 1 minus the variable).  F1(x') = h_1(x') for all players.  This means that if the formula was satisfiable, everyone could score 2, but if it wasn't, they'd always score 0.

Now it gets weird.  We're going to set up a second game G, a 2-player game with no equilibrium point (Matching Pennies is the classic example of such a game), then define a second payoff function for our original game, F2, where F2(x) = the payoff function of G applied to the first 2 players, but 0 for everyone else.

The paper says that the actual payoff for the actual game we're creating is: F(x) = F1(x) + F2(x) * 2 − F1(x)

The "2" is a payout of 2 for all players: since the above depends on matrix math, it's an n×1 vector of all 2's.  This formula is very weird to me because the F1 and −F1 should cancel out.  This is where I think there might be a typo.  I'm pretty convinced there is a typo on the previous page where he was building his little f_i function (he uses a + where there should be a −).  I'm thinking that there are missing parentheses in this formula, and it should be F(x) = F1(x) + F2(x) * (2 − F1(x))

Now two things can happen.  If the formula was satisfiable, then F1(x) is all 2's; the (2 − F1(x)) factor is all 0's, so F(x) = F1(x), everyone gets the maximum payout of 2, and that is an equilibrium point.  If the formula was not satisfiable, then F1(x) is all 0's, so F(x) = 2 * F2(x) and the scores in the F2 part determine the score for F.  But the F2 part has no equilibrium, so F doesn't either.

Difficulty: 8.  I think I found the typo though.

Permanent Evaluation

Going back to the problem we skipped over last week.

The problem: Permanent Evaluation.  This is problem AN13 in the appendix.

The description: Given an n×n matrix M of 0's and 1's, and a positive integer K, is the permanent of M equal to K?

Example: The permanent of M is \displaystyle \sum_\sigma \prod_{i=1}^n M_{i,\sigma(i)}, where \sigma ranges over all permutations of {1, ..., n}.

That is, for each permutation of the columns, we multiply down the main diagonal.  The sum of all of those products is the permanent.

So if M is:

1 2 3
4 5 6
7 8 9

…then the permanent is 1*5*9 + 1*6*8 + 2*4*9 + 2*6*7 + 3*5*7 + 3*4*8 = 450

Of course, we’re looking at 0/1 matrices, so I think what we’re really asking is how many permutations of the columns have 1’s on the main diagonal.
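Here's a minimal sketch of that definition in Python (exponential in n, so only for tiny matrices), which reproduces the example above and shows the counting interpretation for a 0/1 matrix:

from itertools import permutations
from math import prod

def permanent(M):
    # Sum, over every permutation s of the columns, of the product
    # M[0][s[0]] * M[1][s[1]] * ... * M[n-1][s[n-1]].
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

print(permanent([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 450
# For a 0/1 matrix, each permutation contributes 1 exactly when it hits all 1's:
print(permanent([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # 2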

(Wrong) Reduction: If I’m right above that all we’re doing is counting how many ways there are 1’s in the main diagonal, then this becomes a pretty easy Turing Reduction from Hamiltonian Path.  Given an adjacency matrix of a graph, we want to know if the permanent of the adjacency matrix is 1 or more.  (Any Hamiltonian Path will give a permutation that multiplies to 1, and any permutation of vertices that is not a Hamiltonian Path multiplies to 0).   Given how complicated the “actual” reduction is, I’m a little worried that I missed something, though.

Edit on 1/21: This isn't right.  The problem is that while you're permuting the columns, you're not permuting the rows.  So if we permute the columns so that vertex #2 becomes the second vertex in the Hamiltonian Path, the second row still lists the vertices adjacent to vertex #2 (which might not be the second vertex in the path).

That’s a shame.  I wonder if there is a way to manipulate the problem to make it work this way anyway.

(Correct) Reduction:

The reduction by Valiant that G&J point you to uses 3SAT.  He shows that if you have a formula F, and define t(F) to be twice the number of literals in F minus the number of clauses in F, then there is some function f, computable by a deterministic Turing Machine in polynomial time, that maps a formula to a matrix.  (The matrix has entries in {-1..3}; he does another transformation later to convert it to a 0/1 matrix.)  The permanent of that matrix is 4^{t(F)} * s(F), where s(F) is the number of ways to satisfy F.

Since one of the consequences of Cook’s Theorem is that we can take an arbitrary non-deterministic Turing Machine and turn it into a Satisfiability formula, we get the reduction.

The actual construction of that function f is complicated.  Given a formula, we construct a graph and use the matrix as the adjacency matrix of the graph.  The variables, literals, and clauses get mapped to subgraphs.

Difficulty: If my way was right, I’d give it a 4- I think it’s easier than most Turing Reductions.  The Valiant construction is an 8.

Periodic Solution Recurrence Relation

Probably the last post of the year- enjoy the holidays, everyone!

The problem: Periodic Solution Recurrence Relation.  This is problem AN12 in the appendix.

The description: Given a set of m ordered pairs (c_1,b_1) through (c_m,b_m), with each b_i > 0, can we find a sequence a_0 through a_{n-1} of integers such that the infinite sequence built by \displaystyle a_i = \sum_{j=1}^m c_j \cdot a_{i-b_j} is periodic: that is, a_i \equiv a_{i \bmod n} for all i?

Example: Here's a super simple example: m=2 and the pairs are (1,1) and (2,2).  This gives us the recurrence a_i = a_{i-1} + 2a_{i-2}.  If we start with 1,1, this gives the sequence 1, 1, 3, 5, 11, 21, 43, …, which is periodic mod 10 (the last digits repeat 1, 1, 3, 5).
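A few lines of Python (names mine) make it easy to check the periodic behavior of the example:

def sequence(pairs, start, length):
    # pairs: the (c_j, b_j) values; start: the initial terms a_0 .. a_{n-1}.
    a = list(start)
    while len(a) < length:
        a.append(sum(c * a[len(a) - b] for c, b in pairs))
    return a

a = sequence([(1, 1), (2, 2)], [1, 1], 16)
print(a[:8])                # [1, 1, 3, 5, 11, 21, 43, 85]
print([x % 10 for x in a])  # the last digits repeat 1, 1, 3, 5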

Reduction: This shows up in Plaisted's 1984 paper.  He mentions it as a corollary to his Theorem 5.1, which showed that Non-Trivial Greatest Common Divisor and Root of Modulus 1 were NP-complete.  Similar to the Root of Modulus 1 problem, we build from the set of clauses a polynomial that has zeros on the unit circle.  The polynomial also has a leading coefficient of 1.  This means, apparently, that the recurrence relation corresponding to the polynomial has a periodic solution if and only if the polynomial has a root on the complex unit circle, which only happens if the original 3SAT formula was satisfiable.

Difficulty: 8.

Number of Roots for a Product Polynomial

The problem: Number of Roots for a Product Polynomial.  This is problem AN11 in the appendix.

The description: Given a set of sequences A_1 through A_m, each A_i containing a sequence of k pairs (a_i[1],b_i[1]) through (a_i[k],b_i[k]), and an integer K.  If we build a polynomial for each A_i by \displaystyle \sum_{j=1}^k a_i[j] \cdot z^{b_i[j]}, and then multiply all of those polynomials together, does the resulting product polynomial have fewer than K complex roots?

Example:  Suppose A_1 was <(1,2), (2,1), (1,0)>, A_2 was <(3,3), (2,2), (1,1), (0,0)>, and A_3 was <(5,1), (7,0)>.  These represent the polynomials x^2+2x+1, 3x^3+2x^2+x, and 5x+7.  (I'm pretty sure it's ok for the sequences to be of different lengths, because we could always pad shorter sequences with (0,0) pairs.)  This multiplies out to 15x^6 + 61x^5 + 96x^4 + 76x^3 + 33x^2 + 7x, which factors as x(x+1)^2(3x^2+2x+1)(5x+7) and so has 5 distinct complex roots (x = -1 is a double root).
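A short numpy sketch to double-check the example (the distinct-root count is numeric, so I round to merge near-duplicate roots):

import numpy as np

p1 = [1, 2, 1]     # x^2 + 2x + 1 (coefficients, highest power first)
p2 = [3, 2, 1, 0]  # 3x^3 + 2x^2 + x
p3 = [5, 7]        # 5x + 7

product = np.polymul(np.polymul(p1, p2), p3)
print(product)     # [15 61 96 76 33  7  0]

roots = np.roots(product)
distinct = {complex(round(r.real, 6), round(r.imag, 6)) for r in roots}
print(len(distinct))  # 5 (x = -1 counts once even though it's a double root)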

Reduction: This is another one that uses Plaisted's 1977 paper.  (It's problem P4.)  He builds the polynomials P_C and Q_C in the same way that he did in the reduction for Non-Divisibility of a Product Polynomial.  One of the statements that he says is "easy to verify" is that the product of the Q polynomials for each clause has N (for us, K) zeroes in the complex plane if and only if the original 3SAT formula was inconsistent.

Difficulty: I’m giving all of these problems based on the polynomials that come from a formula an 8.

Root of Modulus 1

After taking a week off for Thanksgiving, we move on to another equation problem.

The problem: Root of Modulus 1.  This is problem AN10 in the appendix.

The description: Given a set of ordered pairs (a_1,b_1) through (a_n,b_n) of integers, where each b_i is non-negative, can we find a complex number q with \mid q \mid = 1 such that \displaystyle \sum_{i=1}^n a_i \cdot q^{b_i} = 0?

Example: It was hard for me to come up with an interesting example (where q is not just 1 or i), so thanks to this StackOverflow post for giving me something I could use.

Let our ordered pairs be (5,2), (-6,1), and (5,0).  This gives us the polynomial 5x^2 - 6x + 5.  Plugging the coefficients into the quadratic formula gets us the roots \frac{3}{5} \pm \frac{4}{5}i, which lie on the complex unit circle.
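This is easy to verify numerically with a couple of lines of numpy:

import numpy as np

roots = np.roots([5, -6, 5])    # coefficients of 5x^2 - 6x + 5
print(roots)                    # [0.6+0.8j  0.6-0.8j]
print([abs(r) for r in roots])  # both have modulus 1.0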

Reduction: This one is again from Plaisted’s 1984 paper.  It again uses his polynomial that we’ve seen in some other problems (most recently Non-Divisibility of a Product Polynomial).  So again, we start with a 3SAT instance and build the polynomial.  He starts by showing that if you have a polynomial with real coefficients p(z), then p(z)*p(1/z) is a real, non-negative polynomial on the complex unit circle, and it has zeros on the unit circle exactly where p(z) does.

Then we can do this for the sum of the polynomials made out of each clause, which means that this new polynomial has 0's on the unit circle exactly where the original one did.  So it has a 0 on the complex unit circle if and only if the formula was consistent.

Difficulty: 8.  I’m starting to appreciate the coolness of turning a formula into a polynomial, and how it makes a lot of problems easier.  I just wish it was clearer to see how it all works.


Algebraic Equations over GF[2]

AN8 is Quadratic Diophantine Equations.

The problem: Algebraic Equations over GF[2].  This is problem AN9 in the appendix.

The description: Given a set P of m polynomials over n variables (x_1 through x_n), where each polynomial is a sum of terms, each term being either 1 or a product of distinct x_i: can we find a value u_i in {0,1} for each x_i that makes every polynomial 0, if we define 1+1=0 and 1*1=1?

Example: It helps to think of GF[2] as a boolean logic world, where + is XOR and * is AND.  So, suppose we have three variables, and the polynomials:

  • P1 = 1 + x1x2 + x2x3
  • P2 = x1 + x1x2x3

Then setting x1=0, x2=1, x3=1 makes both polynomials 0.
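Here's a small Python sketch (the representation is mine) that evaluates polynomials in this system, where + is XOR and * is AND, and confirms the example:

def evaluate(poly, u):
    # poly is a list of terms; a term is a list of variable indices,
    # with [] standing for the constant 1. u is the 0/1 assignment.
    total = 0
    for term in poly:
        value = 1
        for i in term:
            value &= u[i]
        total ^= value
    return total

P1 = [[], [0, 1], [1, 2]]  # 1 + x1*x2 + x2*x3 (0-indexed variables)
P2 = [[0], [0, 1, 2]]      # x1 + x1*x2*x3

u = [0, 1, 1]  # x1=0, x2=1, x3=1
print(evaluate(P1, u), evaluate(P2, u))  # 0 0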

Reduction: G&J say that Fraenkel and Yesha use X3C, but the paper I found uses 3SAT.  We're given a formula that has n variables and m clauses.  The variables of our polynomials will be the same variables as in the 3SAT instance.  For each clause, we build a polynomial by:

  • Replacing a negated literal (~x) with the expression 1 + x.  (Remember, + means XOR in this system.)
  • Replacing an OR clause (A ∨ B) with the expression A + B + A*B.
  • XORing the whole thing with 1 (that is, adding 1).

Notice that the first replacement makes ~x have the opposite truth value of x, the second replacement rule is logically equivalent to A ∨ B, and the third part makes the polynomial 0 if and only if the clause evaluates to 1.  So the polynomial is 0 if and only if the clause is satisfied, and all polynomials are 0 if and only if all clauses are satisfied.
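Here's a sketch of those replacement rules in Python, working with evaluated truth values rather than symbolic polynomials (the helper names are mine):

def literal(u, var, negated):
    return (1 ^ u[var]) if negated else u[var]  # ~x becomes 1 + x

def gf2_or(a, b):
    return a ^ b ^ (a & b)  # A or B becomes A + B + A*B

def clause_value(u, lits):
    # OR the literals together, then XOR with 1, so the result is 0
    # exactly when the clause is satisfied.
    value = 0
    for var, negated in lits:
        value = gf2_or(value, literal(u, var, negated))
    return 1 ^ value

# Clause (x1 or ~x2 or x3) under x1=x2=x3=0: ~x2 is true, so the clause
# is satisfied and the polynomial evaluates to 0.
print(clause_value([0, 0, 0], [(0, False), (1, True), (2, False)]))  # 0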

Difficulty: 5.  This is easy to follow.  It’s a little tricky to make students come up with the equivalence rules above, but I think if you can explain it right, it’s not that bad.

Protected: Non-Divisibility of a Product Polynomial

Protected: Non-Trivial Greatest Common Divisor

Protected: Simultaneous Incongruences

Protected: Quadratic Congruences
