Variable Partition Truth Assignment

We’re going to dive into a couple of logic games for the next few posts before we come to what I think of as more “game-like” games.

The problem: Variable Partition Truth Assignment.  This is problem GP5 in the appendix.

The description: Given a set U of variables, and a set C of clauses over U, players play a game where they alternate choosing variables from U.  The variables Player 1 chooses will be set true, and the variables Player 2 chooses will be set to false.  Player 1 wins if all clauses in C are satisfied.  Can player 1 force a win?

Example: Here is a small example: The clauses will be: {(x1 ∨ x2 ∨ x3), (~x1 ∨ ~x2 ∨ ~x3), (~x4 ∨ x1 ∨ ~x2)}

Player 1 can force a win by choosing x1, which will be set to true.  This satisfies both clauses 1 and 3.  Player 2 does not want to choose x2 or x3 (any variable player 2 chooses will be set to false, which would satisfy clause 2), so they’ll choose x4.  Then player 1 picks either of x2 or x3 (clause 2 is still not satisfied, because the variable player 1 picks gets set to true), and player 2 must set the other one to false, satisfying clause 2.
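
Since the example is tiny, we can check this forced win with a quick brute-force search.  This is just a sketch; the encoding of clauses as signed integers (positive i for xi, negative i for ~xi) is mine:

```python
from functools import lru_cache

# The example's clauses: positive i means xi, negative i means ~xi.
CLAUSES = [(1, 2, 3), (-1, -2, -3), (-4, 1, -2)]
VARIABLES = frozenset([1, 2, 3, 4])

def satisfied(clause, true_vars):
    # A positive literal holds if its variable was chosen by player 1
    # (set true); a negated literal holds if it was chosen by player 2.
    return any((lit > 0 and lit in true_vars) or
               (lit < 0 and -lit not in true_vars)
               for lit in clause)

@lru_cache(maxsize=None)
def player1_wins(remaining, true_vars, player1_to_move):
    if not remaining:
        return all(satisfied(c, true_vars) for c in CLAUSES)
    for v in remaining:
        rest = remaining - {v}
        chosen = true_vars | {v} if player1_to_move else true_vars
        # The player to move wins if some choice leads to a win for them.
        if player1_wins(rest, chosen, not player1_to_move) == player1_to_move:
            return player1_to_move
    return not player1_to_move

print(player1_wins(VARIABLES, frozenset(), True))  # True
```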

Reduction: The paper by Schaefer that we’ve been going through calls this “GPOS (POS CNF)” and focuses on the subcase where we’re given a formula where all of the variables are positive.  If even that case is NP-Complete, then the generalized case where we allow negated literals is also NP-Complete.  The reduction in the paper is from Sequential Truth Assignment, specifically the case where the problem has 3 variables per clause.  So we’re given an input that is a set of 2n variables, which we can view as alternating between ∃ and ∀, and a set of m clauses with at most 3 literals per clause.  We’re going to build a new formula A’ out of many pieces.  The variables are all positive, but some of the variable names will correspond to negated literals.  We’ll have 2n “x” variables, 2n “~x” variables, and 2n “u” variables.  He then builds up a pretty crazy formula.  The point of the formula is to force a 6-move sequence between the players:

  • On move 6k+1 (so the first move of “round” k), player 1 chooses either x2n-2k or ~x2n-2k to become true.
  • The next move, player 2 chooses the other of that pair to become false.
  • The next move, player 1 chooses u2n-2k to become true.
  • On moves 6k+4 through 6k+6, we repeat the same sequence, but this time, player 2 can choose whether to make x2n-2k-1 or ~x2n-2k-1 false.  Then we end with player 2 choosing u2n-2k-1 to become false.

Notice how each set of 3 moves corresponds to one truth assignment of one variable in the Sequential Truth Assignment game (which alternates between player 1 and player 2 choosing the assignment).  Also, notice how player 1 choosing a variable xi corresponds to setting it true in the Sequential Truth Assignment game, and player 1 choosing a ~xi variable to be set to true corresponds to making the corresponding Sequential Truth Assignment variable false. (The opposite assignments occur when player 2 chooses)

The hard part of the proof (and of the formula) involves all of the various ways the other player can force a win if the sequence above is not followed.

Difficulty: 9.  The formula in the paper is very hard to follow, which is why I didn’t go into it here- I think the important part is how it all works out.  Since I couldn’t think of a way to explain it without going on for pages, I figured just explaining the idea was a better strategy.

Generalized Kayles

I hadn’t heard of this game before encountering this problem.  It feels a lot like Nim to me.

The problem: Generalized Kayles.  This is problem GP3 in the appendix.

The description: Given a graph G=(V,E), players take turns choosing a vertex, removing it and all vertices adjacent to it from the graph.  The first player who has no vertex left to choose loses.  Does Player 1 have a forced win?

Example: Here’s a graph:

Suppose player 1 chooses vertex c.  Then we also remove s, b, d, and t.  All that is left are vertices a and e.  Whichever vertex player 2 chooses, player 1 can choose the other one, and win.
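
Since the post’s picture didn’t survive, the edge list below is my reconstruction of a graph consistent with the description (c adjacent to s, b, d, and t; a and e non-adjacent once those are gone).  The solver itself is a straightforward memoized search:

```python
from functools import lru_cache

# Guessed edge list -- not the paper's figure, just consistent with the text.
EDGES = [("s", "a"), ("s", "c"), ("a", "b"), ("b", "c"),
         ("c", "d"), ("c", "t"), ("d", "e"), ("e", "t")]
VERTICES = frozenset(v for e in EDGES for v in e)

def neighbors(v):
    return {b if a == v else a for a, b in EDGES if v in (a, b)}

@lru_cache(maxsize=None)
def mover_wins(remaining):
    # The player to move picks a remaining vertex, removing it and its
    # neighbors.  A player with no vertex left to pick loses.
    for v in remaining:
        if not mover_wins(remaining - ({v} | neighbors(v))):
            return True
    return False

print(mover_wins(VERTICES))  # True: picking c leaves only a and e
```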

Reduction: This one is again by Schaefer, and again uses the same problem which is either Sequential Truth Assignment or QBF, depending on how you look at it.

Just like last time, we’re given a formula: (∃ x1) (∀ x2) (∃ x3) … (∃ xn) (A1 ∧ A2 ∧ … Am), where each Ai is a disjunction of literals.  We’ll further assume n is odd (so the last quantifier is ∃).  The graph is built as follows:

  • Each clause k in the formula gets a vertex x0,k.  These vertices are all connected to each other.
  • Each variable xi in the formula gets two vertices: xi and ~xi, that have an edge between them. We also get y vertices: yi,j for all j, 0 <= j < i.
  • We add an edge between xi and x0,k if xi appears as a literal in clause k.  Similarly, we add an edge between ~xi and x0,k if ~xi appears as a literal in clause k.
  • Each yi,j vertex connects to all xk vertices where k <= i.  Each yi,j vertex also connects to all ya,b vertices where a < i and b < a.
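
Here’s the construction as code.  The vertex naming scheme is mine, and I’m following the four bullets literally (the paper has more detail that I’m glossing over):

```python
from itertools import combinations

def build_kayles_graph(n, clauses):
    """Build the reduction graph from the bullets above.
    Clauses are lists of signed ints: +i for xi, -i for ~xi."""
    V, E = set(), set()
    def edge(u, v):
        E.add(frozenset((u, v)))
    # Bullet 1: one x0,k vertex per clause, all mutually adjacent.
    clause_vs = [f"x0,{k}" for k in range(1, len(clauses) + 1)]
    V.update(clause_vs)
    for u, v in combinations(clause_vs, 2):
        edge(u, v)
    # Bullet 2: xi and ~xi joined by an edge, plus the y vertices.
    for i in range(1, n + 1):
        V.update({f"x{i}", f"~x{i}"})
        edge(f"x{i}", f"~x{i}")
        V.update(f"y{i},{j}" for j in range(i))
    # Bullet 3: literal vertices connect to the clauses they appear in.
    for k, clause in enumerate(clauses, start=1):
        for lit in clause:
            name = f"x{lit}" if lit > 0 else f"~x{-lit}"
            edge(name, f"x0,{k}")
    # Bullet 4: y vertices connect to earlier x and y vertices.
    for i in range(1, n + 1):
        for j in range(i):
            for k in range(1, i + 1):
                edge(f"y{i},{j}", f"x{k}")
            for a in range(1, i):
                for b in range(a):
                    edge(f"y{i},{j}", f"y{a},{b}")
    return V, E

V, E = build_kayles_graph(3, [[1, -2, 3]])
print(frozenset(("x1", "~x1")) in E, frozenset(("~x2", "x0,1")) in E)
```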

Here’s a picture from the paper of what we get:

One main point that comes out of this construction is that players need to take their first n moves playing the x (or ~x) vertices in order.  If you go out of order, the opponent can play a y vertex and remove everything.  If you play a y vertex, the opponent can play an x vertex (or an x0 vertex) and remove everything.

If the original formula was satisfiable, player 1 (who is the ∃ player) starts by choosing either x1 or ~x1.  No matter which of x2 or ~x2 is chosen by the opponent (the ∀ player), player 1 will have a way to set x3 to keep the formula satisfiable.  This process continues for all variables.  Once player 1 plays the last variable (xn or ~xn), all vertices have been removed from the graph- most importantly, all of the x0 vertices have been removed, because each one has an edge to every literal appearing in its clause, and at least one such literal was chosen.  Thus, player 2 has no place to play, and player 1 wins.

If the formula is not satisfiable, then after taking turns choosing xi or ~xi for all i, there is still some x0 vertex in the graph (corresponding to a clause not satisfied by the variable choices made).  Player 2 can select that vertex, removing all x0 vertices from the graph, and will win.

Difficulty: 7.  I can see what the “jobs” of each vertex are: the xi vertices set the truth value of a variable, the x0 vertices ensure that each clause is satisfied by a variable setting, and the y vertices force players to choose the xi vertices in order.  I don’t think I could have come up with this, though.

 

Generalized Geography

It’s spring break here- I was expecting to take the week off, but got this post done anyway.  This might mean I skip next week though.

The problem: Generalized Geography.  This is problem GP2 in the appendix.

The description: Given a directed graph G=(V,A) and a starting vertex v0.  Players alternate choosing edges that leave the current vertex (starting with v0); the vertex at the head of the chosen edge becomes the new current vertex.  You can’t choose an edge that has already been chosen.  The first player to be unable to choose an edge loses.  Does Player 1 have a forced win?

Example: This is the “geography” game kids play in the car, where you have to think of a place that has as its first letter the last letter of the previous choice.  As a graph problem, vertices correspond to letters, and edges to locations:

Note that the vertices can have self-loops, and (at least in the actual game) could have multiple edges between pairs of vertices.  There is no requirement to have any edges leave a vertex either- I remember instituting the “Phoenix rule”  on car trips because nobody could think of a place that started with X.

Anyway, if we start at E, player 1 has a forced win- they have to take “Egypt” to T, and then player 2 has to take “Texas” to S, and if player 1 chooses “Singapore” we’re in vertex E, it’s player 2’s turn, and they have no place left to pick.  (The actual game has many more edges, of course)
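
We can verify that forced win with a small search over (current vertex, used edges) states.  The arc encoding (tail, head, place name) is mine, and only the three places mentioned above are included:

```python
from functools import lru_cache

# Arcs from the car-game example, labeled by the place that was named.
ARCS = (("E", "T", "Egypt"), ("T", "S", "Texas"), ("S", "E", "Singapore"))

@lru_cache(maxsize=None)
def mover_wins(current, used):
    # The player to move loses if every arc out of the current vertex
    # has already been chosen.
    for arc in ARCS:
        if arc[0] == current and arc not in used:
            if not mover_wins(arc[1], used | {arc}):
                return True
    return False

print(mover_wins("E", frozenset()))  # True: player 1 has a forced win
```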

Reduction:  Schaefer says he’s building off of the Sequential Truth Assignment problem from last time, but his instance looks more like an instance of QBF. (I think this is a result of his claim that they’re basically the same problem).  So we’re given a formula: (∃ x1) (∀ x2) (∃ x3) … (∃ xn) (A1 ∧ A2 ∧ … Am), where each Ai is a disjunction of literals.  We’ll further assume n is odd (so the last quantifier is ∃).

We then go on to build a graph.  Each variable xi gets 4 vertices in the graph:

  • xi corresponding to a positive literal
  • ~xi corresponding to a negative literal
  • 2 vertices ui and vi that control the “entrance” and “exit” for the setting of one of the 2 literal values.

We have a vertex for each clause (yi), and an additional vertex un+1 so the last vi has someplace to go.

Each set of these 4 vertices is set up in a diamond, with edges from ui to both xi and ~xi, and then from both of those to vi. vi then connects to ui+1, making a chain of diamonds. The final un+1 connects to each y vertex, and each y vertex connects to the literals that are in its clause.

Here’s the example picture from the paper:

Player 1 starts on vertex u1 and chooses whether to make variable x1 positive or negative by moving to that vertex (simulating a “there exists”).  Player 2 then has no choice but to move on to the v1 vertex, and player 1 has no choice but to move to the next u2.  Now it is player 2 who picks the literal vertex for x2 (simulating a “for all”).  This alternating process continues until we hit vertex un+1.  Since there are an odd number of variables, it is now player 2’s turn.

If the setting of the variables has not satisfied some clause, player 2 can move to the y vertex of that unsatisfied clause.  Then player 1 has to go to one of the 3 literals that appear in that clause.  Since the clause was not satisfied, none of those literal vertices were chosen during the simulation, so the edge from that literal to its v vertex is still available for player 2 to pick.  But after that, the edge from that v vertex to the next u vertex has already been chosen, so player 1 loses.

If the setting of the variables has satisfied every clause, then every y vertex player 2 could move to has some literal that was chosen to satisfy it.  When player 1 moves to that chosen literal, the only edge exiting it (to its v vertex) has already been used, so player 2 loses.

Difficulty: 7. I think this is very slick and elegant, but I don’t see a student getting there without lots of help.

Sequential Truth Assignment

A whole bunch of the proofs in this section are done in the same paper and mainly use this problem.  So we’ll go out of order here- but really this is a restatement of the QBF problem from a couple of weeks ago.

The problem: Sequential Truth Assignment.  This is problem GP4 in the appendix.

The description: Given a set of variables and a collection of clauses (like a Satisfiability instance), players take turns giving truth assignments to sequential variables.  Player 1 wins if and only if all clauses are satisfied.  Does player 1 have a forced win?

Example: (x1 ∨ x2 ∨ x3) ∧ (~x1 ∨ ~x2 ∨ ~x3) ∧ (x1 ∨ ~x2 ∨ x3).

Player 1 can win by choosing x1 = true, which satisfies both the first and third clauses.  Then player 2 has to set x2, and should set it to true (otherwise clause 2 is automatically satisfied).  But player 1 can then set x3 to false, satisfying the second clause.
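
A quick minimax sketch confirms the analysis (the clause encoding as signed integers is mine):

```python
# The example: positive i means xi, negative i means ~xi.
CLAUSES = [(1, 2, 3), (-1, -2, -3), (1, -2, 3)]
NUM_VARS = 3

def satisfied(clause, values):
    return any(values[abs(l)] == (l > 0) for l in clause)

def player1_wins(values=None):
    # Variables are assigned in order x1, x2, x3; player 1 sets the
    # odd-numbered variables, player 2 the even-numbered ones.
    values = values or {}
    i = len(values) + 1
    if i > NUM_VARS:
        return all(satisfied(c, values) for c in CLAUSES)
    results = [player1_wins({**values, i: b}) for b in (True, False)]
    p1_to_move = i % 2 == 1
    return any(results) if p1_to_move else all(results)

print(player1_wins())  # True, matching the analysis above
```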

Reduction: The paper by Schaefer that has many of the reductions we’ll be doing in this section refers to the paper by Stockmeyer and Meyer that showed how QBF was NP-Complete (or worse- you might want to check out the comments of the QBF problem for a discussion).  That paper defined a formula Bk to be a formula on k sets of variables, with sets alternating between “For All” and “There Exists”.  Schaefer points out that if you allow players to assign values to these sets of variables (instead of one at a time, like our problem describes), then player 1 is the “There Exists” player and player 2 is the “For All” player, and you have basically the same problem.

To get to our “one variable at a time” version of the problem, we need to build a problem instance with one variable per set.  We can start with an instance of 3SAT, and since the variables are set sequentially, we set up the quantifiers to be ∃ x1 ∀ x2 ∃ x3, and so on.

Difficulty: 5. I think it depends on how hard you see the intermediate step of going from QBF to the alternating of setting groups of variables.  Knowing you have to break the reduction into 2 steps is a bit hard as well.

Generalized Hex

This is the first of the problems in the “Games and Puzzles” section.  The way they express these problems as logical formulas is pretty neat.

The problem: Generalized Hex.  This is problem GP1 in the appendix.

The description: Given a graph G=(V,E) and two vertices s,t in V.  Players alternate coloring vertices (except for s and t) “blue” (by player 1) and “red” (by player 2).  Player 1 wins if a path of blue vertices from s to t ever exists, and player 2 wins if that becomes impossible.  Does Player 1 have a forced win?

G&J mark this problem as “not known to be in NP” (like many of these problems), and the paper by Even and Tarjan mentions that it’s likely that this problem is even harder than NP-Complete problems, since we may not even be able to solve it in polynomial time with a non-deterministic algorithm.

Example: Here’s a picture of the “standard” Hex game that I took from the Internet:

(from https://www.maths.ed.ac.uk/~csangwin/hex/index.html)

The hexagonal spaces are the vertices of the graph.  In the “standard” game, the vertices are only adjacent to their neighbors, but the generalized game allows for different kinds of connections.  In the “standard” game of hex, you can connect from any hex on one blue edge of the board to any hex on the other blue edge- we can add that in the generalized game by making an edge from every cell on one blue edge to s, and another set of edges from every cell on the other side to t.

The other big difference is that in the standard game, either blue has a path connecting the 2 blue sides or red has a path connecting the 2 red sides (so blue not being able to create a path means red has a path).  In the generalized game, there are no “red sides”, so all we care about is whether blue can create a path or not.

In the above image, blue does not have a win (because red has already won).  But if we look at a slightly different board:

If the hex marked green is blue instead of red, then blue has a forced win, no matter whose turn it is.  Either of the two cells next to the left blue edge will win the game for blue.

Reduction: As I alluded to last week, many of these problems reduce from QBF.  So the question is: How do we take an arbitrary formula and turn it into a game board that has a forced win for player 1 if and only if the formula is solvable?

To help us, Even and Tarjan create some subgraphs with useful properties.  First is a tree whose root connects to s, and some of whose leaves connect to t:

It turns out that if it’s blue’s turn, blue has a forced win if and only if all of the leaves connect directly to t: Your first move is the root (above s), and then each move after that you choose the child node that is the root of a subtree that red has not played in.

If there are leaves that are not connected to t, red can win by always picking a node in the subtree that is connected to t.  In the graph above, if blue picks the root, red would pick the left child, forcing blue to move right. Then red picks the right child of blue’s node, and then the left child of blue’s next move.

(If blue doesn’t pick the root on their first move, or if blue doesn’t pick a child of their last move, red can disconnect the whole tree.)

The way to think of this graph is that it is similar to “and”.
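
We can check the “all leaves must connect to t” claim on a depth-one version of the tree with a brute-force game solver.  This is a sketch; the two tiny graphs are my own, not from the paper:

```python
from functools import lru_cache

def make_game(edges):
    """Return a solver for generalized Hex on the given edge list.
    Blue wins iff a path of blue vertices connects s to t."""
    verts = frozenset(v for e in edges for v in e) - {"s", "t"}

    def nbrs(v):
        return {b if a == v else a for a, b in edges if v in (a, b)}

    def blue_path_exists(allowed):
        # DFS from s to t through 'allowed' internal vertices only.
        seen, stack = set(), ["s"]
        while stack:
            v = stack.pop()
            if v == "t":
                return True
            if v in seen:
                continue
            seen.add(v)
            stack.extend(w for w in nbrs(v) if w == "t" or w in allowed)
        return False

    @lru_cache(maxsize=None)
    def blue_wins(blue, red, blue_to_move):
        if blue_path_exists(blue):
            return True
        if not blue_path_exists(verts - red):
            return False  # even coloring everything blue can't make a path
        open_vs = verts - blue - red
        if blue_to_move:
            return any(blue_wins(blue | {v}, red, False) for v in open_vs)
        return all(blue_wins(blue, red | {v}, True) for v in open_vs)

    return blue_wins

# A depth-one "and" tree: root r below s, both leaves connected to t.
full = make_game([("s","r"), ("r","c1"), ("r","c2"), ("c1","t"), ("c2","t")])
print(full(frozenset(), frozenset(), True))   # True: blue has a forced win
# Drop one leaf's connection to t, and red can win.
broken = make_game([("s","r"), ("r","c1"), ("r","c2"), ("c1","t")])
print(broken(frozenset(), frozenset(), True))  # False
```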

Here’s another tree structure:

This graph has one child of the root connecting directly to t, and otherwise sets pairs of grandchildren that may or may not connect directly to t- all of the a’s connect to t, and the b’s may or may not connect to t.  If it’s blue’s turn, they have a forced win if and only if there is at least one b vertex that connects to t.  (Blue picks the root, red has to pick the child that connects directly to t, then blue picks the child that has both a and b connect to t, and red can’t stop it).

The way to think of this graph is “or”.  (The caption in the figure in the paper says this represents a “conjunction” which I’m pretty sure is a typo).

The third graph is based on variables:

There are 2 copies of each variable and their negation.  If it’s blue’s move, they can pick either x1(1) or x1(2) or their negations.  Then red has to pick the negated vertex of the opposite number (if blue picks x1(1) then red has to pick ~x1(2)).  This will be used to force a variable to only have one truth value.  If it’s red’s move, they can decide whether to use the positive or negated version of x1 and blue’s moves are forced.  So, if it’s blue’s move when this ladder is reached, it works like a “there exists”- blue can choose any combination of literals they want.  If it’s red’s move, this works like a “for all”- red can force blue into whatever variable configuration that they want.

So we’re given a QBF formula. We’ll assume the expression part of the formula is in CNF.  We create a graph that has s, t, and 4 vertices for each variable (xi(1), xi(2), ~xi(1) and ~xi(2) for each variable xi).  For each change in quantifier (between ∀ and ∃) we add a “quantifier” vertex qi.  We can assume the first quantifier in the formula is a ∀, so the first quantifier vertex is placed in such a way that blue needs to play on it (effectively wasting its turn and giving red the initiative to select the first part of the next set of vertices).  If the next quantifier is a ∃, we add the quantifier vertex such that blue will reclaim the initiative.

For each clause, we add an “and” tree, where the “b” leaves of the tree connect to the corresponding literals of the clause.

Here is the paper’s example for a small formula:

Some things I noticed to help me understand what is happening:

  • The formula starts with ∃, so blue can pick either x(1) or ~x(1).
  • The formula then moves into a ∀, so blue will have to pick q1, meaning red gets to decide whether blue will pick y(1) or ~y(1)
  • The formula then moves into a ∃, which means after blue’s choice, red has to pick q2, giving blue the choice about z(1) or ~z(1)
  • The l vertices denote the subtrees.  They are built out of the trees discussed earlier, and the paths through the “b” vertices (the ones next to the “a” vertices) connect to the literals that are in each clause.

There are a lot more details about how it all gets exactly connected, but I think this shows the main idea of how the game is set up.

Difficulty: 8.  This is a very cool transformation, but you need a lot of help to see the structures and forcing moves.  On top of that, there are a lot of fiddly details that I’ve glossed over that are important in an actual reduction.

Quantified Boolean Formulas

The next section in the appendix is the “Games and Puzzles” section.  But many of the reductions refer to this problem, so we’ll do this first.

The problem: Quantified Boolean Formulas (QBF).  This is problem LO11 in the appendix.

The description: Given a set U = {u1..un} of variables and a formula F that consists of a quantifier Qi for each ui (either ∀ or ∃) and a boolean expression E bound to the quantifiers, is F true?

Example: ∀ u1 ∀ u2 ∃ u3  (u1 ∨ u2 ∨ u3).  This is true because for each value of u1 and u2, we can find a value for u3 (namely “true”) that makes the expression true.

Reduction: G&J send you to the paper by Stockmeyer and Meyer that we did last week, and call it a “generic transformation”, which means it builds a Turing Machine, similar to how Cook’s Theorem does.

But I think you can do a regular reduction from 3SAT: Given a SAT instance, we add a ∃ quantifier for each variable and then use the SAT formula as our E expression in this problem.  Then we’re just asking whether there exist values for each variable that make the formula true.
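
Here’s a sketch of both a QBF evaluator and that reduction idea.  The representation of quantifiers and clauses is mine:

```python
def qbf_true(quantifiers, expr, values=None):
    """Evaluate a quantified boolean formula.  quantifiers is a list of
    ('A', var) / ('E', var) pairs; expr maps a values dict to a bool."""
    values = values or {}
    if not quantifiers:
        return expr(values)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    results = (qbf_true(rest, expr, {**values, var: b}) for b in (True, False))
    return all(results) if q == 'A' else any(results)

# The example from the post: ∀u1 ∀u2 ∃u3 (u1 ∨ u2 ∨ u3) is true.
print(qbf_true([('A', 1), ('A', 2), ('E', 3)],
               lambda v: v[1] or v[2] or v[3]))  # True

def sat_to_qbf(clauses, n):
    """The reduction sketched above: existentially quantify everything."""
    quants = [('E', i) for i in range(1, n + 1)]
    expr = lambda v: all(any(v[abs(l)] == (l > 0) for l in c) for c in clauses)
    return quants, expr

q, e = sat_to_qbf([(1, -2), (2, 3)], 3)
print(qbf_true(q, e))  # True: x1=x3=True, x2=False satisfies both clauses
```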

Difficulty: 3, assuming I’m not missing something.  I always worry when I find a “short cut” like this.

Integer Expression Membership

The last problem in this section!

The problem: Integer Expression Membership.  This is problem AN18 in the appendix.

The description: Given an integer expression e over the operations ∪ and +, which work on sets of integers (the ∪ operation unions two sets of integers together, and the + operation takes two sets of integers and creates the set of all sums of one element from each set), and an integer K, is K in the set of integers represented by e?

Example: Suppose A = {1,2,3}, B = {4,5,6}, C = {1,3,5}.

Then the expression (A∪C) would be {1,2,3,5}, and the set (A∪C) + B would be {5,6,7,8,9,10,11}.  So if we were given that expression and a K of 8, a solver for the problem would say “yes”, but if given a K of 3, the solver would say “no”.

Reduction: Stockmeyer and Meyer show this in their Theorem 5.1.  They reduce from Subset Sum (they call it Knapsack, but their version of Knapsack doesn’t have weights, so it’s our Subset Sum problem).  So from a Subset Sum instance (a set A = {a1, …, an} and a target B), we build an expression.  The expression is (a1 ∪ 0) + (a2 ∪ 0) + … + (an ∪ 0).  Then we set K=B.

Notice that the set of integers generated by this expression is the set of all sums of all subsets of A.  Asking if B is a member is just asking if there is a way to evaluate the above expression (for each element ai, either taking it or taking 0) so that the sum is exactly B.  This is exactly the Subset Sum problem.
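
The two operations and the reduction are easy to sketch in code (the representation of expressions as explicit set operations is mine):

```python
def union(s, t):
    return s | t

def sumset(s, t):
    # The '+' operation: all sums of one element from each side.
    return {a + b for a in s for b in t}

A, B, C = {1, 2, 3}, {4, 5, 6}, {1, 3, 5}
print(sumset(union(A, C), B))  # {5, 6, 7, 8, 9, 10, 11}

def subset_sum_expression(values):
    """The construction above: (a1 ∪ {0}) + ... + (an ∪ {0}) evaluates
    to the set of all subset sums of the input."""
    result = {0}
    for a in values:
        result = sumset(result, union({a}, {0}))
    return result

sums = subset_sum_expression([3, 5, 9])
print(12 in sums, 4 in sums)  # True False
```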

Difficulty: 4.  The reduction itself is easy.  It might be hard to explain the problem itself though.

Unification for Finitely Presented Algebras

The example below (using Boolean Algebra) is the kind of description of an “Algebra” that I wish I had when I learned this stuff in school- we did a lot of stuff with modular arithmetic, which I think was a lot harder.

The problem: Unification for Finitely Presented Algebras.  This is problem AN17 in the appendix.

The description: Given a description of an algebra as a set G of generator symbols, a set O of operator symbols (of possibly varying dimensions), and a set Γ of “defining relations” over G and O.  We’re also given two expressions x and y over G and O, and a set of variables V.   Can we find “interpretations” I(x) and I(y), formed by replacing variables in x and y, such that I(x) and I(y) represent the same element of the algebra?

Example: The paper by Kozen that has the reduction gives a pretty good example of an algebra: Suppose G was {0,1}, O was {∧, ∨, ¬}, and Γ was {0∧0 ≡ 0, 0 ∧ 1 ≡ 0, 1 ∧ 0 ≡ 0, 1 ∧ 1 ≡ 1, 0 ∨ 0 ≡ 0, 0 ∨ 1 ≡ 1, 1 ∨ 0 ≡ 1, 1 ∨ 1 ≡ 1, ¬0 ≡ 1, ¬1 ≡ 0}.  This is the normal Boolean Algebra we see everywhere.

Now we can ask questions like: If x = a ∨ b ∨ c and y = 0 ∨ 1 ∨ d, can we find a unification of the variables that makes the equations the same?  The answer is yes, with the replacements {a/0, b/1, d/c}.

I think the  “represent the same element” part of the problem statement means that we can also take advantage of unification rules, so we can ask questions like does ~x ∨ y ∨ ~z unify to 1?

Reduction: Kozen’s paper makes the point that if we use the Boolean Algebra that we used in the example, then the Satisfiability problem can be represented in Boolean Algebra using the rules above, and the question “Is this formula satisfiable” can be recast as “Does this formula unify to true?”

He doesn’t provide the proof, though, so I hope I’m representing him correctly.
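
Here’s the idea in code: over the two-element Boolean algebra, a formula unifies with the constant 1 exactly when some ground substitution makes it evaluate to 1, i.e. when it is satisfiable.  This is a brute-force sketch with my own encoding of formulas as Python functions:

```python
from itertools import product

def unifies_to_one(formula, variables):
    # Try every substitution of 0/1 for the variables and check whether
    # any of them makes the formula evaluate to 1.
    return any(formula(dict(zip(variables, values)))
               for values in product((0, 1), repeat=len(variables)))

# Does ~x ∨ y ∨ ~z unify to 1?  Yes: e.g. x = 0 works.
f = lambda v: (1 - v['x']) | v['y'] | (1 - v['z'])
print(unifies_to_one(f, ['x', 'y', 'z']))  # True

# A contradiction like x ∧ ~x does not.
print(unifies_to_one(lambda v: v['x'] & (1 - v['x']), ['x']))  # False
```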

Difficulty: 4.  Once you see how the Boolean Algebra relates, it’s pretty easy.

Unification With Commutative Operators

The next problem is one of the “Private Communication” problems.  I’m going to try to take a stab at it myself.

The problem: Unification With Commutative Operators.  This is problem AN16 in the appendix.

The description: Given a set V of variables, a set C of constants, a set of n ordered pairs (ei, fi) (1 ≤ i ≤ n) representing “expressions”.  Each expression is defined recursively as either a variable from V, a constant from C, or (e+f) where e and f are expressions.  Can we create a way to assign each variable v in V a variable-free expression I(v) such that, if I(e) denotes the expression obtained by making all such replacements of variables in an expression e, that I(ei) ≡ I(fi) for all i?  We define e ≡ f as either:

  • e = f  (the only possible way if e or f is a constant)
  • If e = (a+b) and f = (c + d) then either (a ≡ c and b ≡ d) or (a ≡ d and b ≡ c)

Example:  Suppose C was {0,1}, and V was {x,y,z}.  We will have pairs ((x+y), (0+1)) and ((y + z), (0 + 0)).  Then substituting x with 1, and y and z with 0, makes the substituted pairs equivalent.
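
Here’s a sketch of the commutative equivalence check and a brute-force search for the substitution.  The encoding of expressions as nested pairs (tuples) of strings is mine:

```python
from itertools import product

def equivalent(e, f):
    # e ≡ f: equal atoms, or sums whose operands match in either order.
    if isinstance(e, tuple) and isinstance(f, tuple):
        (a, b), (c, d) = e, f
        return (equivalent(a, c) and equivalent(b, d)) or \
               (equivalent(a, d) and equivalent(b, c))
    return e == f

def substitute(e, assignment):
    if isinstance(e, tuple):
        return (substitute(e[0], assignment), substitute(e[1], assignment))
    return assignment.get(e, e)

# The example pairs: ((x+y), (0+1)) and ((y+z), (0+0)).
PAIRS = [(("x", "y"), ("0", "1")), (("y", "z"), ("0", "0"))]

def solve(variables, constants):
    # Brute force over variable-free substitutions into the constants.
    for values in product(constants, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(equivalent(substitute(e, assignment), substitute(f, assignment))
               for e, f in PAIRS):
            return assignment
    return None

print(solve(["x", "y", "z"], ["0", "1"]))  # {'x': '1', 'y': '0', 'z': '0'}
```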

Reduction: I think this can be done with One-In-Three 3SAT.  The SAT instance defines variables x1..xk.  Each variable xi in the SAT instance will correspond to 2 variables in the set V in our unification instance: xi and ~xi.  Each clause in the SAT instance (l1, l2, l3) will turn into the pair (e,f) where the “e side” is (l1+l2+l3), where each li is the variable corresponding to the literal (i.e., either some xj or some ~xj).  The “f side” of all pairs is (1+0+0), which corresponds to exactly one of the literals being true.

We also need to add some extra pairs to make sure that we set each xi and ~xi to opposite signs.  So each variable xi in the SAT instance gets a pair whose “e side” is (xi + ~xi), and whose “f side” is (0+1).

I think that the unification instance we’ve constructed has a solution if and only if we can find a way to:

  1. Set exactly one literal in each clause to true
  2. Set exactly one of xi and ~xi to true, for all xi

..which is exactly when the original One-In-Three 3SAT instance was solvable.

I’ll admit to being a little worried about whether this is correct, because the comment in G&J talked about there being at most 7 constants or variables in an expression, and I did it with just 3.  Maybe it’s because I’m using One-In-Three 3SAT instead of regular 3SAT, or maybe I’m missing something.

Difficulty: 4 if I’m right.  I think starting from this version of 3SAT is a big hint.

Equilibrium Point

This next reduction is confusing to me, and I wonder if it’s because there is a typo in the paper.

The problem: Equilibrium Point.  This is problem AN15 in the appendix.

The description: Given a set X = {x1..xn} of variables, a collection F = {F1..Fn} of integer product polynomials over the variables, and a set of “ranges” M = {M1..Mn} of subsets of the integers.  Can we find a sequence Y = {y1..yn}, where each yi ∈ Mi, such that for each i and all y ∈ Mi, Fi(y1, y2,…yi-1, yi, yi+1, …yn) ≥ Fi(y1, y2,…yi-1, y, yi+1, …yn)?

Example: This concept of “Equilibrium point” is best thought of from the perspective of Game Theory.  The functions F are the utility functions for each player.  The sequence Y is the set of choices each player makes.  We are asking whether we can find a set of values in Y where any player i changing their yi value to something else will not improve their personal Fi score.

So the classic “Prisoner’s Dilemma” problem can be represented in these terms: There are 2 players, so n is 2.  Each range Mi is {0,1}, where 0 means “stay silent” and 1 means “betray”.  F1 is defined by a table:

                        Player 2 stays silent    Player 2 betrays
Player 1 stays silent            -1                     -3
Player 1 betrays                  0                     -2

F2 is defined similarly (the 0 and -3 scores switch places).

Notice that if we chose y1 = y2 = 0 (both sides stay silent), then F1(0,0) = F2(0,0) = -1.  But this is < F1(1,0) = 0, where player 1 betrays.  So this is not an equilibrium point.

y1=y2=1 is an equilibrium point, where both functions return -2.  Any player changing their choice from 1 to 0 will see their F function go from -2 to -3.
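
We can verify both claims with a brute-force equilibrium check.  The payoff tables are taken from the example above:

```python
from itertools import product

# Payoff tables for the Prisoner's Dilemma example.
# Choice 0 = stay silent, 1 = betray.
F1 = {(0, 0): -1, (0, 1): -3, (1, 0): 0, (1, 1): -2}
F2 = {(0, 0): -1, (0, 1): 0, (1, 0): -3, (1, 1): -2}
PAYOFFS = [F1, F2]
RANGES = [(0, 1), (0, 1)]

def is_equilibrium(profile):
    # No player can improve their own payoff by changing only their choice.
    for i, F in enumerate(PAYOFFS):
        for alt in RANGES[i]:
            deviation = profile[:i] + (alt,) + profile[i + 1:]
            if F[deviation] > F[profile]:
                return False
    return True

print([p for p in product(*RANGES) if is_equilibrium(p)])  # [(1, 1)]
```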

Reduction: Sahni does this in his “Computationally Related Problems” paper that we’ve used in the past to do some graph reductions.  This reduction is from 3SAT.   I’ll just say now that he could have avoided a lot of manipulation if he’d used One-In-Three 3SAT.  From a 3SAT instance, we build a game where there is one clause for each player, and the range of choices for each player is {0,1}.  The goal is to make a function fi for a clause Ci that is 1 iff the corresponding clause is true.  I’ll skip over the manipulations he does because he’s using a harder SAT problem than he needs to.

Define hi(x’) to be 2 times the product of all of the fi(x’) values (x’ here is the vector of choices; for a positive literal, use the variable’s value, and for a negated literal, use 1 minus the value).  F1(x’) = h1(x’) for all players.  This means that if the formula was satisfiable, everyone could score 2, but if it wasn’t, they’d always score 0.

Now it gets weird.  We’re going to set up a second game G, a 2 player game with no equilibrium point, then define a second payoff function for our original game F2 where F2 (x) = the g function of x for the first 2 players, but 0 for everyone else.

The paper says that the actual payoff for the actual game we’re creating is: F(X) = F1(x) + F2(x) * 2 – F1(x)

The “2” is a payout of 2 for all players- since the above depends on matrix math, it’s an nx1 vector of all 2’s. This formula is very weird to me because the F1 and -F1 should cancel out.  This is where I think there might be a typo.  I’m pretty convinced there is a typo on the previous page where he was building his little fi function (he uses a + where there should be a -).  I’m thinking that there are missing parentheses in this formula, and it should be F(X) = F1(x)+F2(x)*(2-F1(x))

Now two things can happen.  If the formula was satisfiable, then F1(x) is all 2’s, and that is the max payout for everyone and is an equilibrium point.  If the formula was not satisfiable, then F1(x) is all 0’s, and so the scores in the F2 part influence the score for F, but the F2 part has no equilibrium, so F doesn’t either.

Difficulty: 8.  I think I found the typo though.