Category Archives: Appendix-Logic

Predicate Logic Without Negation

(Looks like I had a problem I forgot to post.  I’ll let this count for this week while we’re on coronavirus lockdown.  We’ll see where we are in 2 weeks.)

We continue bouncing between really easy and really hard reductions with another easy one.

The problem: Predicate Logic Without Negation.  This is problem LO15 in the appendix.

The description: Given a predicate logic system, that is:

  • A set of variables U = {u1..un}
  • A set of “function symbols” F = {f1..fk}
  • A set of “relation symbols” R = {R1..Rj}

..and a well-formed predicate logic sentence A without negations using U, F, and R, is A true under all interpretations of F and R?  An interpretation (I think) means replacing each function symbol fi or relation symbol Ri with an actual function or relation of the correct arity.

Example:

Suppose we have the predicate logic sentence:

∀x,y R1(x,y) = R2(x,y)

..then this statement would not be true under all interpretations, because we can make R1 and R2 be different relations (say, R1 is always true but R2 is always false).

But if we do:

∀x,y R1(x,y) = R1(x,y)

..then no matter what relation we choose for R1, the statement is true.
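As a sanity check, here’s a brute-force sketch in Python (my own toy encoding, not from G&J: relations are Boolean functions, and a one-element domain is already enough to find a falsifying interpretation):

```python
from itertools import product

# Interpretations over the one-element domain {0}: a binary relation is
# just a choice of True/False for the single pair (0, 0).
domain = [0]
relations = [lambda x, y, v=v: v for v in (True, False)]

# Does "forall x,y: R1(x,y) = R2(x,y)" hold under this interpretation?
def valid_r1_eq_r2(r1, r2):
    return all(r1(x, y) == r2(x, y) for x in domain for y in domain)

# Not true under ALL interpretations: pick R1 always-true, R2 always-false.
assert any(not valid_r1_eq_r2(r1, r2)
           for r1, r2 in product(relations, repeat=2))

# "forall x,y: R1(x,y) = R1(x,y)" holds no matter how R1 is interpreted.
assert all(valid_r1_eq_r2(r1, r1) for r1 in relations)
```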

Reduction: G&J imply that the reduction is “trivial”, but I still went to Kozen’s paper to figure out how to get rid of the negations.  The idea is that we replace each positive instance of a literal xi with the equation “xi = a1“, and we replace each negative instance of a literal ~xj with the equation “xj = a0“.  The new values a0 and a1 are constants (or, more precisely, functions that take 0 parameters).

If the original SAT formula is satisfiable, then our new formula is also satisfiable by setting a1 to true and a0 to false.  (Or, technically, setting each variable xi to a1 if it is true in our satisfying assignment, and to a0 if it’s false in our satisfying assignment.)

If the new formula is satisfiable, then we have a way to assign the variables to a0 or a1 that makes the whole formula true.  This gives us a way to solve the original SAT instance.
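Here’s a small sketch of that transformation, using a hypothetical tuple-based formula representation of my own (strings for literals, with “~” marking negation):

```python
# Kozen-style negation removal: replace each positive literal x with the
# equation "x = a1" and each negated literal ~x with "x = a0", where
# a0/a1 are fresh 0-ary function symbols (constants).
def remove_negations(formula):
    if isinstance(formula, str):                # a literal
        if formula.startswith("~"):
            return ("=", formula[1:], "a0")     # ~x  ->  x = a0
        return ("=", formula, "a1")             #  x  ->  x = a1
    op, *args = formula                         # ("and"/"or", subformulas...)
    return (op, *[remove_negations(a) for a in args])

# (x1 or ~x2) and (~x1 or x2)
f = ("and", ("or", "x1", "~x2"), ("or", "~x1", "x2"))
print(remove_negations(f))
```

Note that the result contains no negation symbols at all: every literal has become an equation between a variable and one of the two constants.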

Difficulty: 4.  There isn’t much to the transformation, but I think it’s hard to see what’s happening- the fact that the a values are 0-parameter functions, and that the x values are given values in the domain {a0, a1} is a subtle point.  It’s hard right off to see the new formula with no negations and not say “But that’s always satisfiable if we make both a variables true!”.  The trick is that we’re not setting the a variables to true and false, we’re setting the x variables to constant values in {a0, a1}, which is different.

 

Conjunctive Satisfiability with Functions and Inequalities

The problem: Conjunctive Satisfiability with Functions and Inequalities.  This is problem LO16 in the appendix.

The description: Given a set U of variables, a set F of functions on one variable, and a collection C of clauses, each connecting two terms s and t with either:

  • >
  • =

.. where s and t are each either

  • A constant 0 or 1
  • f(0) or f(1) for some f in F
  • f(u) for some u in U, and some f in F

.. can we find integer values for all the variables in U that make all of the clauses in C true?

Example:

Let F = {f, g}, with f(x) = x² and g(x) = x³.  Let U = {a, b}.

Let our clauses be:

  • f(a) > g(b)
  • a ≤ b

We can satisfy the clauses by choosing a = -2 and b = 1.  If we had another clause that said a > 2 then the clauses would not be satisfiable.
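A quick check of the example (the exhaustive search over a small integer range for the “a > 2” case is my own shortcut, not a proof):

```python
# The example functions: f(x) = x^2, g(x) = x^3.
f = lambda x: x ** 2
g = lambda x: x ** 3

# a = -2, b = 1 satisfies both clauses.
a, b = -2, 1
assert f(a) > g(b)   # 4 > 1
assert a <= b        # -2 <= 1

# Adding "a > 2" makes it unsatisfiable: b >= a > 2 forces
# b^3 >= a^3 > a^2, so f(a) > g(b) can never hold.  A small search agrees:
assert not any(f(a) > g(b) and a <= b and a > 2
               for a in range(-10, 11) for b in range(-10, 11))
```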

Reduction: G&J point you to an “unpublished manuscript” by Pratt, which I found online.  Some people who reference this paper just call it a “manuscript” and some people call it a Tech Report from MIT.  I looked at the MIT tech report list and couldn’t find it.  But that doesn’t really mean anything, since not all of the tech reports from the 1970s have been indexed online.  Anyway, you can find the paper online.

I do wish the paper had gone through an editing pass because it has what to me look like bugs or ambiguities.  Maybe I’m not understanding things, though.

Here’s the basic idea: Given a Satisfiability formula P, build a formula H(P) in this new construction.  The clauses are:

  • Each variable has to be 0 or 1 (so, 0 ≤ u and u ≤ 1 for all u in U)
  •  Each literal i has a distinct function fi.  If the literal is positive then we will be using fi(0) in what’s to come.  If the literal is negative then we will be using fi(1) in what’s to come.
  • If 2 literals P and Q are connected by ∧, create clauses for the inequalities:
    • fp(0 or 1) ≤ fq(0 or 1)  (0 or 1 depending on whether the literal is positive or negative)
    • fp(P) ≤ fq(Q)
  • If 2 literals P and Q are connected by ∨ (the paper uses the | symbol for “or” which I’d never seen before), create a clause for the inequalities:
    • fp(P) ≤ fq (0 or 1)
  • f0 (0 or 1) > fP(P) for some(?) (maybe all?) variable P.

In the description, Pratt says that they’re building a graph to represent the formula, connected by inequalities.  If we hit a ∧ symbol, it will connect the graphs of the two parts of the formula together with a parallel connection, so that there is always a way to get from one subgraph to another.  If we hit a ∨ symbol, the subgraphs are connected by a series connection.  He says “We hope that this informal discussion will make a more formal argument unnecessary”.  I, for one, wish there were some more detail here.

The examples he uses to explain what’s happening are just on the trivial formulas “a” and “a ∧ ~a”.  I can sort of see how the clauses he creates come out of those formulas.  But since none of his examples use the ∨ operation, or use more complex clauses, it’s hard for me to see what happens there.  I’m probably missing something.

It also seems like he wants to let the definition of the functions be part of the satisfiability argument (i.e. not only do we have to find truth values for all of the variables, we also have to create functions that make all of the clauses become true), which seems to not be the definition provided in G&J.

Difficulty: 8.  There is probably a much simpler reduction out there.  I wish I knew what it was.

Modal Logic Provability

Here’s a much harder reduction based on the modal logic from last time.

The problem: Modal Logic Provability.  This is problem LO14 in the appendix.

The description: Given a modal system S that follows the “S4” modal logic rules, and a modal statement A, can A be proven in S?

Example: S4 modal logic is concerned with what different agents know.  I found a pretty good example of how it works from the notes of a lecture at CMU (Section 2, the story about the hats).

From our point of view, we would say that statement A would be □3B3 (the third person knows his hat is black), and the explanation of how we got there would be the proof of the statement.

Reduction: Ladner’s paper has the reduction from QBF.  It’s Theorem 3.1 in the paper if you want to look at it.  They start with the set Bω, which is like the Bk sets that the Stockmeyer and Meyer paper used in their QBF proof (I talked about it in my discussion of Sequential Truth Assignment), but with any number of quantifiers.  From a QBF formula A, they will build a modal formula B (sort of like the A in our problem definition) with the property that A is a satisfiable QBF formula if and only if B is S4-satisfiable.

(He actually proves it for any notion of satisfiability “between” K-satisfiability and S4-satisfiability.  K-satisfiability uses a slightly simpler set of rules.)

The way he builds B is pretty crazy and hard to follow.  I’m not sure there’s much I can say about it that’s not just requoting the entire paper.  The basic idea, though, is that he uses the modal operators in B to replace the quantifiers in the QBF formula.  The various levels of “for all” and “there exists” are replaced by worlds where agents “know” that a fact is true.

So the clauses in the formula form a tree, and every “for all” in A leads to a branch of worlds (one where the variable is true, and one where it is false).  Then he shows that this construction makes B S4-satisfiable exactly when A is satisfiable.

Difficulty: 8.  Maybe less if you’re comfortable with the modal logic rules.

Modal Logic S5-Satisfiability

One of the things that strikes me about the logic chapter is that the problems all seem to be either “trivial” or “really, really, hard”.  I just hope I’m classifying them correctly.  Here’s one that I think is a trivial one.

The problem: Modal Logic S5-Satisfiability.  This is problem LO13 in the appendix.

The description:  Given a set of variables, and a modal logic formula over those variables, is it “S5-Satisfiable”?  I think that “S5-Satisfiable” is just the regular notion of satisfiability extended to the modal operators.

Example/Reduction: I think all we need to know about Modal Logic for our purposes can be found in this Wikipedia article.  In it, they say that Modal Logic extends propositional or predicate logic by adding two new operators:

  • □, which means “Necessarily”.  So □P means “it is necessary that P is true”.
  • ◊, which means “Possibly”.  So ◊P means “it is possible that P is true”.

Since you can define ◊ in terms of □, G&J just have the diamond symbol in the language.

People use Modal logic to think about “worlds” in which facts are necessarily or possibly true, and different agents know different things.  I talk about it (vaguely) when I teach Artificial Intelligence because it can model things agents know, and things agents know that other agents know, and so on.  (The Russell and Norvig AI book has a fun example about how Superman knows that Clark Kent is Superman, but Lois Lane doesn’t know.  And Superman knows that Lois doesn’t know.  But he knows that Lois knows that someone is Superman).

Notice, though, that for our purposes we don’t need the modal operators.  So, for example, a regular 3SAT formula (x1 ∨ x2 ∨ x3) ∧ (~x1 ∨ x4 ∨ ~x2) is a modal logic formula (and, in this case, is satisfiable by setting all 4 variables to true).  So the reduction from 3SAT is just “Take a 3SAT formula, keep it the same, call it a modal logic formula.  Obviously, this is satisfiable iff the original (unchanged) formula is”.
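A tiny brute-force checker (my own encoding, not from the text: positive integers for variables, negative for negated literals) confirms the example formula is satisfiable:

```python
from itertools import product

# Brute-force satisfiability: try every assignment of n variables.
# A clause is a tuple of nonzero ints; literal l is satisfied when
# variable |l| has truth value (l > 0).
def sat(clauses, n):
    return any(all(any(v[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for v in product([True, False], repeat=n))

# (x1 or x2 or x3) and (~x1 or x4 or ~x2)
clauses = [(1, 2, 3), (-1, 4, -2)]
assert sat(clauses, 4)          # e.g. all four variables True works
assert not sat([(1,), (-1,)], 1)  # x1 and ~x1 together: unsatisfiable
```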

Difficulty: 2.  Maybe it should be 1, but you need to at least think a little about what you’re leaving out.

First Order Theory of Equality

LO11 is QBF

This next problem is related to QBF and is proved in a similar way.

The problem: First Order Theory of Equality.  This is problem LO12 in the appendix.

The description: Given a set U of variables {u1..un} and a sentence S over U in the first-order theory of equality, is S true in all models of the theory?

The first order theory of equality defines logical operations for equality (on variables), and the standard ∧, ∨, ¬, → operations (on expressions).  It also lets us define ∀ and ∃ quantifiers on variables outside of an expression.

Example: My lack of mathematical logic knowledge is again letting me down, as I’m not entirely sure what the difference is between “models of the theory” and “assignments of truth values”.  It seems to me that this is very similar to QBF, with the addition of equality.  So I think we can ask whether an expression like:

∀x ∀y ∀z ((x=y) ∧ (y=z)) → (x=z) is always true.  (It is.)

I think, at least, this is the meaning that the Stockmeyer and Meyer paper use.

Reduction: Stockmeyer and Meyer again do a “generic transformation” using Turing Machines.  What they do is the generic transformation for QBF (it’s Theorem 4.3 in the paper), and then show this problem is a corollary to it because it follows from the “well-known fact that a first-order formula with n quantifiers and no predicates other than equality is valid iff it is valid for domains of all cardinalities between 1 and n”.
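We can at least spot-check that fact on a transitivity sentence, ∀x ∀y ∀z ((x=y) ∧ (y=z)) → (x=z): it has 3 quantifiers and only equality, so (per the quoted fact) it is valid iff it holds over all domains of sizes 1 through 3:

```python
from itertools import product

# Check the transitivity sentence over a finite domain of a given size:
# forall x, y, z: ((x = y) and (y = z)) implies (x = z).
def valid_transitivity(size):
    dom = range(size)
    return all((not (x == y and y == z)) or x == z
               for x, y, z in product(dom, repeat=3))

# Valid for domain sizes 1 through 3, hence (by the quoted fact) valid.
assert all(valid_transitivity(n) for n in (1, 2, 3))
```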

It’s not a well-known fact to me, so I guess I’ll just trust them.

Difficulty: I guess it depends on just how “well-known” that fact is for your class.  If I gave the QBF reduction a 3, maybe this is a 5?  But if you have to tell them the fact, maybe it’s too obvious?  I don’t know.

Truth-Functionally Complete Connectives

This is another “private communication” problem that I couldn’t find an obvious reduction for online.  I’ve used the holidays as an excuse to not put up a post so I could work on this problem, but even with the extra time, this problem has stumped me.

The problem: Truth-Functionally Complete Connectives.  This is problem LO10 in the appendix.

The description: Given a set U of variables, and a set C of well-formed Boolean expressions over U, is C truth-functionally complete?  In other words, can we find a finite set of unary and binary operators D = {θ1..θk} such that for each θi we can find an expression E in C, and a substitution s: U->{a,b} such that s(E) ≡ aθib (if θi is binary) or s(E) ≡ θia (if θi is unary)?

Example: So, “Truth-Functionally Complete” means that you can use the operations in D to generate all possible truth tables.  So some possible candidates for D are {AND, NOT} or just {NAND}.

So, if U = {x,y} and C = {~x, x∧y}, then I can just choose D to be {NOT, AND} and use the obvious substitution.  I think the hard part here is that the problem asks if such a D exists- I think the substitutions are probably always straightforward once you find a functionally complete set D that maps onto C.
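One way to test completeness mechanically (a sketch of my own, not from any paper): represent each connective by its truth table over two variables a and b, then close the set {table of a, table of b} under pointwise application of the candidate connective and see whether all 16 binary truth tables show up:

```python
from itertools import product

# Truth tables over (a, b), as 4-tuples indexed by the input pairs
# (F,F), (F,T), (T,F), (T,T).
A = (False, False, True, True)   # the table of the variable a
B = (False, True, False, True)   # the table of the variable b

# All tables reachable by composing the binary connective `op` over a, b.
def closure(op):
    tables = {A, B}
    while True:
        new = {tuple(op(x, y) for x, y in zip(s, t))
               for s, t in product(tables, repeat=2)} | tables
        if new == tables:
            return tables
        tables = new

nand = lambda x, y: not (x and y)
assert len(closure(nand)) == 16   # {NAND} generates all 16: complete

conj = lambda x, y: x and y
assert len(closure(conj)) < 16    # {AND} alone only gives monotone tables
```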

Reduction: As I said above, this is a “Private Communication” problem, and so after a lot of searching, I found this paper by Statman, which I honestly am not even sure is the correct one.  It at least talks about substitutions and deterministic polynomial time algorithms and things, but it very quickly goes into advanced Mathematical Logic that I wasn’t able to follow.  If someone out there knows more about this than I do and wants to chime in, let me know!

Edited Jan 22, 2020: Thanks to Thinh Nguyen for the help in coming up with the idea for the reduction- please check it out in the comments!

Difficulty: 10.  This is why we have the level 10 difficulty- for problems I can’t even follow.

Minimum Disjunctive Normal Form

LO7 is Satisfiability of Boolean Expressions, which is what we now call Cook’s Theorem. It’s related to the “regular” (clause-based) Satisfiability we’ve done all along because it’s possible to turn any Boolean expression into CNF form.

LO8 is DNF Non-Tautology.

The next problem’s paper was written in 1965, before NP-Completeness was a concept.  As a result, the paper doesn’t line up exactly with what we want to do, and I think I’m misunderstanding something.

The problem: Minimum Disjunctive Normal Form.  This is problem LO9 in the appendix.

The description: Given a set u1 through un of variables, a set A of n-tuples containing T or F symbols (possible truth assignments of the variables), and an integer K, can we create a DNF form expression E that is true for the variable set exactly for the truth assignments in A (everything in A and nothing more) using K clauses or less?

Example: Suppose we had 3 variables, and A was {(T,T,F), (T,F,F)}.

Then the DNF expression (u1 ∧ u2 ∧ ~u3) ∨ (u1 ∧~u2 ∧ ~u3) will be true only on the truth assignments listed in A.

We can get away with fewer than “one clause for each element of A”, though.  For example, if A = {(T,T,T), (T,T,F), (T,F,T), (F,T,T), (T,F,F), (F,T,F)}, then we can just use u1 ∨ u2 to cover exactly those truth assignments.
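A quick brute-force confirmation that u1 ∨ u2 covers exactly those six assignments and nothing more:

```python
from itertools import product

# The six target assignments from A, as (u1, u2, u3) truth tuples.
A = {(True, True, True), (True, True, False), (True, False, True),
     (False, True, True), (True, False, False), (False, True, False)}

# Assignments over 3 variables where the DNF "u1 or u2" is true.
covered = {v for v in product([True, False], repeat=3) if v[0] or v[1]}
assert covered == A   # everything in A, and nothing more
```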

Reduction: The paper by Gimpel was written in 1965 (before Cook’s 1971 paper), so wasn’t conceived of as an NP-Completeness reduction.  But it does show how to translate from Set Covering.

The basic idea is that we will represent a covering instance as a 0/1 table A.  We start with an m by n table, with variables representing the rows and sets representing the columns (aij = 1 means that variable i is in set j)

We’re now going to build some boolean expressions over m+n variables:

pi = u1 ∧ u2 ∧ … ∧ ~ui ∧ … ∧ um+n

(All of the variables and-ed together, except ui, which is negated)

f1 = p1 ∨ p2 ∨ … ∨ pn

Fi = {j | aij = 0}  (The set of all variables not in set i)

Pi = un+i ∧ (the conjunction of all uj where j ∈ Fi)

f2 = P1 ∨ … ∨ Pn

Then he proves that f1 and f2 form a pair of functions for which A is the prime implicant table of those functions.  By which he means that aij = 1 if and only if pj ∈ Pi.

Then we build a table B that represents the prime implicant table of a related function.  If we remove the last r rows and 2r columns from B (I think this is related to K somehow), we get a table that is equivalent to our A.

Difficulty: 9.  I’m pretty sure I’m missing something here.  I don’t see how these tables relate.  I don’t see how you can take f1 and f2 and make a single DNF formula out of them.  I don’t see where K shows up- I guess it’s a limit on n?  But how does it relate to the covering matrix?

Generalized Satisfiability

We’re starting a new section and a lot of the introductory problems have been covered elsewhere.

LO1 is Satisfiability

LO2 is 3-Satisfiability

LO3 is Not all Equal 3SAT

LO4 is One-In-Three 3SAT

LO5 is Maximum 2SAT

After that we finally get to a new problem:

The Problem: Generalized Satisfiability.  This is problem LO6 in the appendix.

The description: Given a list of positive integers k1 through km and a sequence S = <R1..Rm>, where each Ri is a set of strings of ki “T” or “F” symbols.  We’re also given a set U of variables and collections C1 through Cm, where each Ci is a collection of ki-tuples of variables from U.

Can we assign a truth value (“T” or “F”) to each variable in U such that, for each tuple in each Ci, the sequence of truth values of the tuple’s variables is one of the strings in Ri?

Example: Let all ki = 3 and m = 2.  R1 = {(T,T,T), (F,F,F)}.  R2 = {(T,F,F), (F,T,T)}

U = {x1, x2, x3}.  C1 = {(x1, x2, x3)}.  C2 = {(x1, x2, x3)}

So, the question is: can we assign truth values to x1, x2, and x3 so that the tuple of values is a string in both R1 and R2?  Since both collections contain the same tuple of the same variables, and R1 and R2 have no strings in common, the answer is no.

But, suppose we change C2 to (x4, x2, x3).  Then we can set x1, x2, and x3 to true and x4 to false.  This will let the sequence in C1 map to the (T,T,T) sequence in R1, and the sequence in C2 map to the (F,T,T) sequence in R2.
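A brute-force checker for both versions of the example (my own encoding, with 0-based variable indices in place of x1..x4):

```python
from itertools import product

# The two R sets from the example, as sets of truth tuples.
R1 = {(True, True, True), (False, False, False)}
R2 = {(True, False, False), (False, True, True)}

# Each constraint is (tuple of 0-based variable indices, its R set).
# Try every assignment of n variables and check all constraints.
def satisfiable(constraints, n):
    return any(all(tuple(v[i] for i in idx) in R for idx, R in constraints)
               for v in product([True, False], repeat=n))

# Modified example: C1 = (x1, x2, x3), C2 = (x4, x2, x3) -- satisfiable.
assert satisfiable([((0, 1, 2), R1), ((3, 1, 2), R2)], 4)
# Original example: both constraints on (x1, x2, x3) -- unsatisfiable.
assert not satisfiable([((0, 1, 2), R1), ((0, 1, 2), R2)], 4)
```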

Reduction: This reduction is in Schaefer’s paper, and it reduces from 3SAT.  The idea is that we will build 4 R sets, thinking of them as functions.  So:

R0 = {R(x,y,z) | x or y or z is true} = {(T,T,T), (T,T,F), (T,F,T), (T,F,F), (F,T,T), (F,T,F), (F,F,T)}

R1 = Same idea, but “~x or y or z is true”

R2 = “~x or ~y or z is true”

R3 = “~x or ~y or ~z is true”

Once we’re given a CNF instance, we replace each clause in the formula with one of the Ri formulas.  Then build a C set that uses the variables in the clause.  So, for example, if a clause in the CNF formula was ~x2 ∨ x5 ∨ ~x7, we’d make the element in R corresponding to the clause R2, and we’d use the C formula (x2, x7, x5).  Notice that we changed the order of the variables in the C set because R2 expects its negated literals first.

Notice that if we can find a way to assign the variables truth values that makes each tuple in Ci map to a string in its R set, we have satisfied the corresponding clause.  So this new construction is satisfiable if and only if the 3SAT instance was satisfiable.
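The clause translation can be sketched in a few lines (my own string-based literal encoding, with “~” marking negation):

```python
# Schaefer-style clause translation: a 3SAT clause is assigned R_i where
# i is its number of negated literals, and its variables are reordered
# so the negated ones come first (matching R_i's convention).
def translate(clause):  # clause: e.g. ("~x2", "x5", "~x7")
    neg = [l[1:] for l in clause if l.startswith("~")]
    pos = [l for l in clause if not l.startswith("~")]
    return "R%d" % len(neg), tuple(neg + pos)

# The example from the text: ~x2 or x5 or ~x7 has two negations,
# so it gets R2 and the reordered tuple (x2, x7, x5).
assert translate(("~x2", "x5", "~x7")) == ("R2", ("x2", "x7", "x5"))
```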

Difficulty: Making the construction is probably a 5, mostly because you need a good way to explain how you’re making the R sets.  The hardest thing here though is understanding what the problem says.
