Tag Archives: Difficulty 8

Generalized Hex

This is the first of the problems in the “Games and Puzzles” section.  The way they express these problems as logical formulas is pretty neat.

The problem: Generalized Hex.  This is problem GP1 in the appendix.

The description: Given a graph G=(V,E) and two vertices s,t in V, players alternate coloring the vertices (except for s and t): player 1 colors vertices “blue” and player 2 colors them “red”.  Player 1 wins if a path of blue vertices from s to t ever exists; player 2 wins if that becomes impossible.  Does Player 1 have a forced win?

G&J mark this problem as “not known to be in NP” (like many of these problems), and the paper by Even and Tarjan mentions that it’s likely that this problem is even harder than NP-Complete problems, since we may not even be able to solve it in polynomial time with a non-deterministic algorithm.
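To make the rules concrete, here’s a tiny brute-force solver- my own sketch, nothing from the paper- that decides whether player 1 has a forced win by searching the entire game tree (so it’s exponential, and only usable on toy graphs):

def blue_connected(adj, s, t, blue):
    # Is there an s-t path whose interior vertices are all blue?
    stack, seen = [s], {s}
    while stack:
        v = stack.pop()
        if v == t:
            return True
        for w in adj[v]:
            if w not in seen and (w == t or w in blue):
                seen.add(w)
                stack.append(w)
    return False

def blue_wins(adj, s, t, blue, red, blues_turn):
    # Does player 1 (blue) have a forced win from this position?
    if blue_connected(adj, s, t, blue):
        return True
    free = [v for v in adj if v not in blue and v not in red and v not in (s, t)]
    if not free:
        return False  # board is full with no blue path, so red has won
    if blues_turn:
        return any(blue_wins(adj, s, t, blue | {v}, red, False) for v in free)
    return all(blue_wins(adj, s, t, blue, red | {v}, True) for v in free)

# Tiny example: two disjoint s-t paths, each through one vertex.
# Blue moving first wins: red can only ever block one of the paths.
adj = {'s': {'a', 'b'}, 'a': {'s', 't'}, 'b': {'s', 't'}, 't': {'a', 'b'}}
print(blue_wins(adj, 's', 't', set(), set(), True))  # True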

Example: Here’s a picture of the “standard” Hex game that I took from the Internet:

(from https://www.maths.ed.ac.uk/~csangwin/hex/index.html)

The hexagonal spaces are the vertices of the graph.  In the “standard” game, each cell is adjacent only to the cells that touch it, but the generalized game allows for arbitrary connections.  In the “standard” game of Hex, you can connect from any hex on one blue edge of the board to any hex on the other blue edge- we can capture that in the generalized game by adding an edge from every cell on one blue edge to s, and another set of edges from every cell on the other blue edge to t.

The other big difference is that in the standard game, either blue has a path connecting the 2 blue sides or red has a path connecting the 2 red sides (so blue not being able to create a path means red has a path).  In the generalized game, there are no “red sides”, so all we care about is whether blue can create a path or not.

In the above image, blue does not have a win (because red has already won).  But if we look at a slightly different board:

If the hex marked green is blue instead of red, then blue has a forced win, no matter whose turn it is.  Either of the two cells next to the left blue edge will win the game for blue.

Reduction: As I alluded to last week, many of these problems reduce from QBF.  So the question is: how do we take an arbitrary quantified formula and turn it into a game board that has a forced win for player 1 if and only if the formula is true?

To help us, Even and Tarjan create some subgraphs that have properties that will help us.  First is a tree where the root connects to s, and some leaves connect to t:

It turns out that if it’s blue’s turn, blue has a forced win if and only if all of the leaves connect directly to t: blue’s first move is the root (the vertex above s), and each move after that is the child of the previous move that is the root of a subtree red has not played in.

If there are leaves that are not connected to t, red can win by always picking a node in the subtree that is connected to t, herding blue toward a leaf that doesn’t reach t.  In the graph above, if blue picks the root, red would pick the left child, forcing blue to move right.  Then red picks the right child of blue’s node, and then the left child of blue’s next move.

(If blue doesn’t pick the root on their first move, or if blue doesn’t pick a child of their last move, red can disconnect the whole tree.)

The way to think of this graph is that it is similar to “and”.

Here’s another tree structure:

This graph has one child of the root connecting directly to t, and otherwise has pairs of grandchildren that may or may not connect directly to t- all of the a’s connect to t, and the b’s may or may not.  If it’s blue’s turn, they have a forced win if and only if there is at least one b vertex that connects to t.  (Blue picks the root, red has to pick the child that connects directly to t, then blue picks the node whose a and b children both connect to t, and red can’t stop it.)

The way to think of this graph is “or”.  (The caption in the figure in the paper says this represents a “conjunction” which I’m pretty sure is a typo).

The third graph is based on variables:

There are 2 copies of each variable and their negation.  If it’s blue’s move, they can pick either x1(1) or x1(2) or their negations.  Then red has to pick the negated vertex of the opposite number (if blue picks x1(1), then red has to pick ~x1(2)).  This will be used to force a variable to only have one truth value.  If it’s red’s move, they can decide whether to use the positive or negated version of x1, and blue’s moves are forced.  So, if it’s blue’s move when this ladder is reached, it works like a “there exists”- blue can choose any combination of literals they want.  If it’s red’s move, this works like a “for all”- red can force blue into whatever variable configuration they want.

So we’re given a QBF formula. We’ll assume the expression part of the formula is in CNF.  We create a graph that has s, t, and 4 vertices for each variable xi (xi(1), xi(2), ~xi(1), and ~xi(2)).  For each change in quantifier (between ∀ and ∃) we add a “quantifier” vertex qi.  If the first quantifier in the formula is a ∀, the first quantifier vertex is placed in such a way that blue needs to play on it (effectively wasting its turn and giving red the initiative to select the first part of the next set of vertices).  If the next quantifier is a ∃, we add the quantifier vertex such that blue will reclaim the initiative.

For each clause, we add an “and” tree, where the “b” leaves of the tree connect to the corresponding literals of the clause.

Here is the paper’s example for a small formula:

Some things I noticed to help me understand what is happening:

  • The formula starts with ∃, so blue can pick either x(1) or ~x(1).
  • The formula then moves into a ∀, so blue will have to pick q1, meaning red gets to decide whether blue will pick y(1) or ~y(1).
  • The formula then moves into a ∃, which means after blue’s choice, red has to pick q2, giving blue the choice about z(1) or ~z(1)
  • The l vertices denote the subtrees.  They are built out of the trees discussed earlier, and the paths through the “b” vertices (the ones next to the “a” vertices) connect to the literals that are in each clause.

There are a lot more details about how it all gets exactly connected, but I think this shows the main idea of how the game is set up.

Difficulty: 8.  This is a very cool transformation, but you need a lot of help to see the structures and forcing moves.  On top of that, there are a lot of fiddly details that I’ve glossed over that are important in an actual reduction.

Equilibrium Point

This next reduction is confusing to me, and I wonder if it’s because there is a typo in the paper.

The problem: Equilibrium Point.  This is problem AN15 in the appendix.

The description: Given a set X = {x1..xn} of variables, a collection F = {F1..Fn} of integer product polynomials over the variables, and a set of “ranges” M = {M1..Mn}, each Mi a subset of the integers.  Can we find a sequence Y = {y1..yn}, where each yi ∈ Mi, such that for each i and for all y ∈ Mi, Fi(y1, y2,…,yi-1, yi, yi+1,…,yn) ≥ Fi(y1, y2,…,yi-1, y, yi+1,…,yn)?

Example: This concept of “equilibrium point” is best thought of from the perspective of Game Theory.  The functions F are the utility functions for each player.  The sequence Y is the set of choices each player makes.  We are asking whether we can find a set of values in Y where no player i can improve their personal Fi score by changing their yi value to something else.

So the classic “Prisoner’s Dilemma” problem can be represented in these terms: There are 2 players, so n is 2.  Each range Mi is {0,1}, where 0 means “stay silent” and 1 means “betray”.  F1 is defined by a table:

                       Player 2 stays silent   Player 2 betrays
Player 1 stays silent          -1                     -3
Player 1 betrays                0                     -2

F2 is defined similarly (the 0 and -3 scores switch places).

Notice what happens if we choose y1=y2=0 (both sides stay silent): F1(0,0) = F2(0,0) = -1.  But this is less than F1(1,0) = 0, where player 1 betrays.  So this is not an equilibrium point.

y1=y2=1 is an equilibrium point, where both functions return -2.  Any player changing their choice from 1 to 0 will see their F function go from -2 to -3.
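Here’s a small sketch (mine, not Sahni’s) that checks the equilibrium condition from the problem statement against the tables above:

from itertools import product

# F[i][(y1, y2)] = payoff to player i; 0 = stay silent, 1 = betray.
F = [
    {(0, 0): -1, (0, 1): -3, (1, 0):  0, (1, 1): -2},  # player 1
    {(0, 0): -1, (0, 1):  0, (1, 0): -3, (1, 1): -2},  # player 2
]
M = [{0, 1}, {0, 1}]  # each player's range

def is_equilibrium(y):
    # No player i can improve F[i] by unilaterally changing y[i].
    for i in range(len(y)):
        for alt in M[i]:
            deviated = y[:i] + (alt,) + y[i+1:]
            if F[i][deviated] > F[i][y]:
                return False
    return True

for y in product(*M):
    print(y, is_equilibrium(y))  # only (1, 1), both betray, is an equilibrium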

Reduction: Sahni does this in his “Computationally Related Problems” paper that we’ve used in the past to do some graph reductions.  This reduction is from 3SAT.  I’ll just say now that he could have avoided a lot of manipulation if he’d used One-In-Three 3SAT.  From a 3SAT instance, we build a game with one player per clause, where the range of choices for each player is {0,1}.  The goal is to make a function fi for each clause Ci that is 1 iff the corresponding clause is true.  I’ll skip over the manipulations he does because he’s using a harder SAT problem than he needs to.

Define hi(x’) to be 2 times the product of all of the fi(x’) values (here x’ assigns a value to each literal: if it’s a positive literal, use the variable; if it’s a negated literal, use 1 minus the variable).  F1(x’) = h1(x’) for all players.  This means that if the formula was satisfiable, everyone could score 2, but if it wasn’t, they’d always score 0.

Now it gets weird.  We’re going to set up a second game G, a 2-player game (with payoff function g) that has no equilibrium point, then define a second payoff function for our original game, F2, where F2(x) = the g value of x for the first 2 players, and 0 for everyone else.

The paper says that the payoff for the actual game we’re creating is: F(x) = F1(x) + F2(x)*2 - F1(x)

The “2” is a payout of 2 for all players- since the above depends on matrix math, it’s an nx1 vector of all 2’s.  This formula is very weird to me because the F1 and -F1 should cancel out.  This is where I think there might be a typo.  I’m pretty convinced there is a typo on the previous page where he was building his little fi function (he uses a + where there should be a -).  I’m thinking that there are missing parentheses in this formula, and it should be F(x) = F1(x) + F2(x)*(2 - F1(x))

Now two things can happen.  If the formula was satisfiable, then F1(x) is all 2’s; that is the maximum payout for everyone, so it is an equilibrium point.  If the formula was not satisfiable, then F1(x) is all 0’s, so the scores in the F2 part drive the score for F, but the F2 part has no equilibrium, so F doesn’t either.
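Here’s a quick sanity check of that case analysis- made-up payoff vectors, with the (corrected) formula applied componentwise across the n players:

def F(F1, F2):
    # F(x) = F1(x) + F2(x) * (2 - F1(x)), componentwise
    return [a + b * (2 - a) for a, b in zip(F1, F2)]

F2_part = [5, -3, 0, 0]          # game G's payoffs for players 1 and 2; 0 for the rest
print(F([2, 2, 2, 2], F2_part))  # satisfiable:   [2, 2, 2, 2], the F2 term vanishes
print(F([0, 0, 0, 0], F2_part))  # unsatisfiable: [10, -6, 0, 0], F is just 2*F2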

Difficulty: 8.  I think I found the typo though.

Permanent Evaluation

Going back to the problem we skipped over last week.

The problem: Permanent Evaluation.  This is problem AN13 in the appendix.

The description: Given an nxn matrix M of 0’s and 1’s, and a positive integer K, is the permanent of M equal to K?

Example: The permanent of M is \displaystyle \sum_{\sigma} \prod_{i=1}^n M_{i,\sigma(i)}, where the sum is over all permutations \sigma of {1,…,n}.

That is, for each permutation of the columns, we multiply down the main diagonal.  The sum of all of those products is the permanent.  So if M is:

1 2 3
4 5 6
7 8 9

…then the permanent is 1*5*9 + 1*6*8 + 2*4*9 + 2*6*7 + 3*5*7 + 3*4*8 = 45 + 48 + 72 + 84 + 105 + 96 = 450

Of course, we’re looking at 0/1 matrices, so I think what we’re really asking is how many permutations of the columns put all 1’s on the main diagonal.
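Here’s the definition turned directly into code (my own snippet)- one term per permutation of the columns, so it’s O(n!) and only good for tiny matrices:

from itertools import permutations
from math import prod

def permanent(M):
    n = len(M)
    return sum(prod(M[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

print(permanent([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]]))  # 450, matching the example above

# On a 0/1 matrix this counts the permutations putting all 1's on the
# main diagonal (perfect matchings of the corresponding bipartite graph).
print(permanent([[1, 1],
                 [1, 1]]))  # 2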

(Wrong) Reduction: If I’m right above that all we’re doing is counting how many ways there are 1’s in the main diagonal, then this becomes a pretty easy Turing Reduction from Hamiltonian Path.  Given an adjacency matrix of a graph, we want to know if the permanent of the adjacency matrix is 1 or more.  (Any Hamiltonian Path will give a permutation that multiplies to 1, and any permutation of vertices that is not a Hamiltonian Path multiplies to 0).   Given how complicated the “actual” reduction is, I’m a little worried that I missed something, though.

Edit on 1/21: This isn’t right.  The problem is that while you’re permuting the columns, you’re not permuting the rows.  So if we permute a column to be the second vertex in the Hamiltonian Path, the second row still lists the vertices adjacent to vertex #2 (which might not be the second vertex in the path).

That’s a shame.  I wonder if there is a way to manipulate the problem to make it work this way anyway.

(Correct) Reduction:

The reduction by Valiant that G&J point you to uses 3SAT.  He shows that if you have a formula F, and define t(F) to be twice the number of literals in F minus the number of clauses in F, then there is some function f, computable by a deterministic Turing Machine in polynomial time, that maps a formula to a matrix.  (The matrix has entries in {-1..3}; he does another transformation later to convert it to a 0/1 matrix.)  The permanent of that matrix is 4^{t(F)} * s(F), where s(F) is the number of ways to satisfy F.

Since one of the consequences of Cook’s Theorem is that we can take an arbitrary non-deterministic Turing Machine and turn it into a Satisfiability formula, we get the reduction.

The actual construction of that function f is complicated.  Given a formula, we construct a graph and use the matrix as the adjacency matrix of the graph.  The variables, literals, and clauses get mapped to subgraphs.

Difficulty: If my way was right, I’d give it a 4- I think it’s easier than most Turing Reductions.  The Valiant construction is an 8.

Cosine Product Integration

I’m back from break, and we’ll go a little out of order at first since this is the last problem that’s similar to the ones we’ve been working on.

The problem: Cosine Product Integration.  This is problem AN14 in the appendix.

The description: Given a sequence of n integers (a_1,…,a_n), does \displaystyle \int_0^{2\pi} \left(\prod_{i=1}^n \cos(a_i\theta)\right) d\theta = 0?

Example:

Suppose our sequence was (1,2,3).  Then our integral is:

\displaystyle \int_0^{2\pi} \cos(\theta)\cos(2\theta)\cos(3\theta)\, d\theta

Applying the product-to-sum identity twice turns the integrand into \frac{1}{4}(1+\cos(2\theta)+\cos(4\theta)+\cos(6\theta)), and each cosine term integrates to 0 over a full period.  So the integral is \frac{1}{4} \cdot 2\pi = \frac{\pi}{2}, which is not 0.

Reduction: This one is in Plaisted’s 1976 paper. In it, he notes that if you look at the function \displaystyle \prod_{j=1}^{k} (x^{a_j}+x^{-a_j}), the constant term in the expansion of that product is 0 if and only if the cosine integral is 0.  (Setting x = e^{i\theta} makes each factor 2\cos(a_j\theta), and integrating over [0, 2\pi] kills every power of x except the constant term.)  The cooler thing is that the constant term is non-zero if and only if you can take the sequence of a_i elements and partition them into 2 sets with the same sum- each term in the expansion picks a + or - sign for each a_j, and a choice of signs summing to 0 is exactly such a partition.  So we can take an instance of the Partition problem, use the elements of the set as the elements of our sequence, and then feed them into the product formula.
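Here’s a numerical check of that equivalence on small sequences (my own sketch, not Plaisted’s).  The simple rectangle rule is very accurate for periodic integrands:

from itertools import product
from math import cos, pi

def cosine_integral(a, steps=100000):
    # Crude numerical integral of prod(cos(a_i * theta)) over [0, 2*pi].
    h = 2 * pi / steps
    total = 0.0
    for k in range(steps):
        theta = k * h
        term = 1.0
        for ai in a:
            term *= cos(ai * theta)
        total += term * h
    return total

def has_equal_sum_partition(a):
    # Some +/- signing of the a_i sums to 0 exactly when the sequence
    # splits into two subsets with the same sum.
    return any(sum(s * x for s, x in zip(signs, a)) == 0
               for signs in product((1, -1), repeat=len(a)))

for a in [(1, 2, 3), (1, 2, 4)]:
    print(a, round(cosine_integral(a), 4), has_equal_sum_partition(a))
# (1, 2, 3): integral pi/2 (nonzero), partition {1,2} / {3} exists
# (1, 2, 4): integral 0.0, no equal-sum partition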

Difficulty: It really depends on how easily you can see the connection between the cosine integral and the product formula (and, of course, how easily you could have thought of it). I find it hard, so I’m giving it an 8.

Periodic Solution Recurrence Relation

Probably the last post of the year- enjoy the holidays, everyone!

The problem: Periodic Solution Recurrence Relation.  This is problem AN12 in the appendix.

The description: Given a set of m ordered pairs (c_1,b_1) through (c_m,b_m), with each b_i > 0, can we find a sequence a_0 through a_{n-1} of integers such that the infinite sequence built by \displaystyle a_i = \sum^m_{j=1} c_j \cdot a_{i-b_j} is periodic: that is, a_i = a_{i \bmod n} for all i?

Example: Here’s a super simple example: m=2 and the pairs are (1,1) and (2,2).  This gives us the recurrence a_i = a_{i-1} + 2a_{i-2}.  If we start with 1,1, this gives the sequence 1,1,3,5,11,21,43,… which is periodic mod 10 (the last digits repeat 1,1,3,5 forever).
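A quick check of that example (my own snippet)- the values grow without bound, but their last digits (the sequence mod 10) cycle with period 4:

a = [1, 1]
for i in range(2, 16):
    a.append(a[i-1] + 2 * a[i-2])

print(a[:8])                # [1, 1, 3, 5, 11, 21, 43, 85]
print([x % 10 for x in a])  # 1, 1, 3, 5 repeating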

Reduction: This shows up in Plaisted’s 1984 paper.  He mentions it as a corollary to his Theorem 5.1, which showed that Non-Trivial Greatest Common Divisor and Root of Modulus 1 were NP-Complete.  Similar to the Root of Modulus 1 problem, we build a polynomial from a set of clauses that has 0’s on the unit circle exactly when the formula is satisfiable.  The polynomial also has a leading coefficient of 1.  This means, apparently, that the recurrence relation corresponding to the polynomial has a periodic solution if and only if the polynomial has a root on the complex unit circle, which happens if and only if the original 3SAT formula was satisfiable.

Difficulty: 8.

Number of Roots for a Product Polynomial

The problem: Number of Roots for a Product Polynomial.  This is problem AN11 in the appendix.

The description: Given a set of sequences A1 through Am, each Ai containing a sequence of k pairs (a_i[1],b_i[1]) through (a_i[k],b_i[k]), and an integer K.  If we build a polynomial for each Ai by \displaystyle \sum_{j=1}^k a_i[j] \cdot z^{b_i[j]}, and then multiply all of those polynomials together, does the resulting product polynomial have fewer than K complex roots?

Example:  Suppose A1 was <(1,2), (2,1), (1,0)>, A2 was <(3,3), (2,2), (1,1), (1,0)>, and A3 was <(5,1), (7,0)>.  These represent the polynomials x^2+2x+1, 3x^3+2x^2+x+1, and 5x+7.  (I’m pretty sure it’s ok for the sequences to be of different lengths, because we could always pad shorter sequences with (0,0) pairs.)  This multiplies out to 15x^6 + 61x^5 + 96x^4 + 81x^3 + 50x^2 + 26x + 7, which has 5 distinct complex roots (x = -1 is a double root, so there are 6 counting multiplicity).
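Here’s a quick numpy sketch (mine) that reproduces the example- coefficients are listed highest power first, the way np.polymul and np.roots expect:

import numpy as np

p1 = [1, 2, 1]     # x^2 + 2x + 1
p2 = [3, 2, 1, 1]  # 3x^3 + 2x^2 + x + 1
p3 = [5, 7]        # 5x + 7

product_poly = np.polymul(np.polymul(p1, p2), p3)
print(product_poly)  # [15 61 96 81 50 26  7]

roots = np.roots(product_poly)
distinct = []
for r in roots:
    if all(abs(r - d) > 1e-6 for d in distinct):
        distinct.append(r)
print(len(roots), len(distinct))  # 6 roots with multiplicity, 5 distinct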

Reduction: This is another one that uses Plaisted’s 1977 paper.  (It’s problem P4.)  He builds the polynomials PC and QC in the same way that he did in the reduction for Non-Divisibility of a Product Polynomial.  One of the statements that he says is “easy to verify” is that the product of the Q polynomials for each clause has N (for us, K) zeroes in the complex plane if and only if the original 3SAT formula was inconsistent.

Difficulty: I’m giving all of these problems based on the polynomials that come from a formula an 8.

Protected: Root of Modulus 1


Protected: Non-Divisibility of a Product Polynomial


Protected: Non-Trivial Greatest Common Divisor


Protected: K-Relevancy
