Tag Archives: 3SAT

Code Generation on a One-Register Machine

I’ve done a lot of traveling over the past month, so I’m sorry about missing posts.  Things should be back to normal now.

The problem: Code Generation on a One-Register Machine.  This is problem PO4 in the appendix.

The description: Given a directed acyclic graph G = (V,A), in which no vertex has an out-degree larger than 2, and a positive integer K.  The leaves of this graph (the vertices with out-degree 0) are our starting values, sitting in memory.  Can we compute the values of all root vertices (the vertices with in-degree 0) in K or fewer instructions, if our only instructions are:

  • Load a value from memory into our only register.
  • Store a value from the register into memory.
  • Do an operation combining the value in the register with a value in memory.  The operation must combine the two children of some vertex in the graph, and the “result” is the parent vertex.  The result value replaces the original value in the register.
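To make the model concrete, here is a minimal Python sketch of the machine (the class and all names are mine, not from G&J).  It only tracks which named values are available and counts instructions; it trusts the caller to only combine values that really are the children of the result vertex:

class OneRegisterMachine:
    def __init__(self, leaves):
        self.memory = set(leaves)    # names of values currently in memory
        self.register = None         # name of the value in the register
        self.instructions = 0

    def load(self, v):               # memory -> register
        assert v in self.memory
        self.register = v
        self.instructions += 1

    def store(self):                 # register -> memory
        self.memory.add(self.register)
        self.instructions += 1

    def op(self, result, operand=None):
        # Combine the register value with a value in memory; the result
        # (the parent vertex) replaces the value in the register.
        assert operand is None or operand in self.memory
        self.register = result
        self.instructions += 1

# Computing a single hypothetical parent node p of leaves x and y:
m = OneRegisterMachine({'x', 'y'})
m.load('x'); m.op('p', operand='y'); m.store()
print(m.instructions)                # 3: load, op, store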

Example:  Here’s a simple graph:

Here, we can compute the “+” node by:

  • Loading 1 into the register.
  • Doing the + operation between the register and 2
  • Storing the + result to memory (G&J’s definition of the problem says that the node is not computed until the value is stored)

We can compute the “-” node in another 3 instructions, and since the value of the “-” node is still in the register, compute the “*” node in 1 more instruction and store it with our last instruction, for 8 instructions in all.

Here’s a more complicated graph:

To do this one, we will have to load the value in node 2 more than once.  For example, here is the set of instructions I came up with to compute h:

  • load 1
  • op to create a (only 1 operand)
  • store a
  • load 4
  • op to create c (3 is in memory)
  • store c
  • load 2
  • op to create d (4 is in memory)
  • op to create f (c is in memory)
  • op to create g (a is in memory)
  • store g
  • load 2
  • op to create b (3 is in memory)
  • op to create e (c is in memory)
  • op to create h (g is in memory)
  • store h
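Since the figure isn’t reproduced here, the shape of the graph can be read off of the instruction list itself (for example, c is built from 4 and 3).  Here is a short self-contained Python check, with my own encoding, that replays the sequence, verifies that each operation only uses values that are actually available, and counts the instructions:

# Children of each computed vertex, read off of the instruction list:
children = {'a': ['1'], 'b': ['2', '3'], 'c': ['4', '3'],
            'd': ['2', '4'], 'e': ['b', 'c'], 'f': ['d', 'c'],
            'g': ['f', 'a'], 'h': ['e', 'g']}

program = [('load', '1'), ('op', 'a'), ('store', None),
           ('load', '4'), ('op', 'c'), ('store', None),
           ('load', '2'), ('op', 'd'), ('op', 'f'), ('op', 'g'),
           ('store', None), ('load', '2'), ('op', 'b'), ('op', 'e'),
           ('op', 'h'), ('store', None)]

memory, register = {'1', '2', '3', '4'}, None    # leaves start in memory
for kind, arg in program:
    if kind == 'load':
        assert arg in memory
        register = arg
    elif kind == 'store':
        memory.add(register)
    else:                            # an op computing vertex arg
        kids = children[arg]
        assert register in kids      # one child must be in the register
        assert all(k in memory for k in kids if k != register)
        register = arg
print(len(program), 'instructions')  # 16 instructions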

It’s possible that we can do this in fewer instructions, but hopefully you can see why this problem is hard: knowing what value to keep in the register is tricky.

Reduction: G&J point out that the reduction is from a paper by Bruno and Sethi, which uses 3SAT to do the reduction.  The instance they build is pretty complicated.  I also came across a paper by Aho, Johnson, and Ullman, who extend the result of the Bruno and Sethi paper with a nice reduction from Feedback Vertex Set.  I think this reduction is easier to follow, so we’ll go with that.

So, we are given an instance of FVS: a directed graph G and an integer K.  We are looking for a set F of K vertices such that every cycle in G goes through some element of F.

We are going to build our Code Generation graph D as follows:

  • For every vertex in G with outdegree d, build a “left chain” of d+1 vertices.  So if vertex a has 2 edges leaving it, we will create 3 vertices a0, a1, and a2.  a2 will connect to a1, and a1 will connect to a0.
  • Each of the “0” vertices at the bottom of these chains connects to 2 distinct memory values (they will be the leaves of the code graph)
  • If vertex v has outdegree d, each of the d non-“0” vertices in v’s chain will connect to the “0” vertex of a different out-neighbor of v in G (a sketch of the construction follows this list).
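Here is a sketch of the construction in Python.  The representation is mine, and in particular the rule that chain vertex i points at the “0” vertex of the i-th out-neighbor is my reading of the paper:

from itertools import count

def build_code_graph(G):
    # G maps each vertex to a list of its out-neighbors.
    D = {}                           # vertex of D -> list of children
    fresh = count()                  # supply of fresh leaf names
    for v, nbrs in G.items():
        d = len(nbrs)
        for i in range(1, d + 1):    # left chain v_d -> ... -> v_0
            D[(v, i)] = [(v, i - 1)]
        leaf1, leaf2 = ('leaf', next(fresh)), ('leaf', next(fresh))
        D[(v, 0)] = [leaf1, leaf2]   # v_0 gets 2 distinct leaves
        D[leaf1], D[leaf2] = [], []
        for i, w in enumerate(nbrs, start=1):
            D[(v, i)].append((w, 0)) # cross edge to the neighbor's chain
    return D

# A 2-cycle between a and b forces a re-load no matter what order we pick:
print(build_code_graph({'a': ['b'], 'b': ['a']}))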

Here is an example from the paper:

Notice that if we don’t have the edges between the chains, we can compute an entire chain with just 2 loads (of the leaves that start in memory).  So, the only loads needed to compute all of D happen at the leaves, or at some of the “0” vertices that are the parents of the leaves.  If we have to re-load one of those vertices, it is because no ordering of the chains avoids the re-load, which means the corresponding vertex is part of a cycle in G.

For example, look at the a and b chains in the picture above.  If we didn’t have any of the c or d vertices or edges in our graph, we could compute a1 and b1 without loading any vertex that is not a leaf: compute b0, b1, b2, then a0, then a1 (which uses a0 from the register and b0 from memory).  The reason we can do this is that while a1 depends on b0, none of the b vertices depend on anything in a, which gives us a chain to do first.  We need to reload a value when we have a circular dependency between chains (so there is no correct chain to do first).  That’s the relationship between the chains and the feedback vertex set.

This works in the other direction as well: if we are given the feedback vertex set in G, we can compute the chains of those vertices first in D, and then re-load their values as needed to compute the rest of D.

The paper says that in the example graph, the set {d} by itself is a Feedback Vertex Set, and the optimal computation ordering is: d0, c0, c1, b0, b1, b2, a0, a1, d1.  That final d1 needs a re-load of d0.  The 1 extra load corresponds to the 1 vertex in our Feedback Set.

Difficulty: 6.  Maybe 7.  I think this is right at the limit of what a student can figure out, but I would also want a more rigorous proof about the connection between the extra loads and the feedback set, which is probably tricky to come up with.

Tree Transducer Language Membership

Sorry for vanishing for so long: I was trying to track down the reference for this problem, which is from a Ph.D. thesis from 1977, so it was hard to get.  I probably could have (or should have) moved on to the next problem while we were working on that, but doing these problems in order is too ingrained in me to change now.

The problem: Tree Transducer Language Membership.  This is problem AL21 in the appendix.

The description: Given a description of a Top-Down Finite State Tree Transducer (see below) T and a string w in its output grammar, is w generated from some initial string by T?

A Top-Down Finite State Tree Transducer (abbreviated “t-fst” by Reiss in his thesis) defines a set of rules for rewriting trees (and so the strings they derive) into other trees.  Each rule replaces a tree (or a subtree) with a new tree (or subtree).

Example: Here’s an example Reiss uses:

What you’re seeing here is a set of rules that can rewrite strings of the form a^n into strings of the form a^(n^2).  The bottom part shows how this set of rewritings can turn the string “aa” into the string “aaaa”.  First, we apply the first rule, turning our starting “q1” tree into the second tree.  Then we have a second rule that replaces a q1 tree with a single child and a single grandchild with the same tree without the q1.  We have similar rules to remove the q2 symbols in the tree.  The final tree is a derivation for “aaaa”.

The reason the capital “A” symbols are in the trees is because these trees are parse trees for context-free grammars.  In particular, these trees come from the CFG:

A->Aa | a

Notice though that our tree rewriting rules only turn certain parse trees into other parse trees.

So, an instance of our problem is: given a result string (such as “aaaa”), does there exist some initial string (such as “aa”) from which our tree rewriting rules can generate it?  Reiss calls this “inverting” the t-fst.

Reduction: Reiss reduces from 3SAT.  Our 3SAT instance will have m variables and r clauses.  We will assume that each variable appears at most once in a clause, and that r is an exact power of 2 (r = 2^k).  We can add dummy clauses to ensure this.

First, he defines a “standard” set of tree rewriting rules.  These rules are always the same and do not depend on our SAT instance.  The rules will take a string of the form 1^k$<variable settings>$, where <variable settings> is a string of m “t” or “f” symbols corresponding to the settings of the variables.

The output of the transformations is a string built out of one substring for each clause: 0^(m+7)$<m a, b, or c symbols>.  The substrings for each clause are concatenated together.

Our problem instance is to start with a string in the form of this output transformation and see if an input string exists (and to show that one does if and only if the SAT instance was satisfiable).  Each variable contributes an a, b, or c symbol to the clause substring as follows:

  • If the variable does not appear in the clause, we choose a.
  • If the variable appears positively in the clause, we choose b.
  • If the variable appears negatively in the clause, we choose c.
  • We also reverse the ordering of the letters (so variable 1’s letter appears last)

So, suppose we have (v1, v2, ~v3) and (~v1, v3, v4) as two clauses, with m = 4 variables.  Our output string w would be: 00000000000$acbb00000000000$bbac

We’re looking for an input string like 1$tftt: k 1’s (the base-2 logarithm of the number of clauses), then a $, then the truth values of the variables.
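Here is a sketch in Python of building w from the clauses.  The clause representation, and the exact placement of the $’s, are my reading of the format above:

def clause_substring(clause, m):
    # clause maps a variable number to True (positive occurrence) or
    # False (negated occurrence); absent variables contribute an 'a'.
    letters = []
    for i in range(1, m + 1):
        if i not in clause:
            letters.append('a')
        else:
            letters.append('b' if clause[i] else 'c')
    return '0' * (m + 7) + '$' + ''.join(reversed(letters))

def output_string(clauses, m):
    return ''.join(clause_substring(c, m) for c in clauses)

# (v1, v2, ~v3) and (~v1, v3, v4), with m = 4 variables:
print(output_string([{1: True, 2: True, 3: False},
                     {1: False, 3: True, 4: True}], 4))
# 00000000000$acbb00000000000$bbac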

Here’s another example from the thesis.  Note that each variable appears in each clause, so there are no “a” symbols:

If the formula is satisfiable, then we have an input string (from the settings of the variables) that will hopefully generate the output string.  The way the rewriting rules work is that the 1’s at the start of the string generate a subtree for each clause (copying all of our truth values), and then from there, we generate the string for each clause.

In each clause, we need to generate m+7 0 symbols as well as the a’s, b’s, and c’s that correspond to the clause.  Each of the a’s eventually maps to a 0 in our rewriting, which will give us m-3 of the 0’s we need; we still need to generate 10 more.  Assigning a literal to true will get us 3 0’s, and setting it to false will get us 2 0’s.  So if the clause is satisfied, we will have between 7 and 9 0’s from the literals, and we will have only 6 0’s if the clause is not satisfied.  The replacement of the $ can generate 1-3 more 0’s.  So if the clause is satisfied, we will be able to get our 10 0’s, but if it’s not, we will not be able to.
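We can sanity-check that counting argument with a few lines of Python (m = 4 is just an example value):

from itertools import product

m = 4                     # example number of variables
absent = m - 3            # variables not in the clause: one 0 each
target = m + 7            # 0's needed in the clause substring

for lits in product([True, False], repeat=3):
    zeros = absent + sum(3 if t else 2 for t in lits)   # from literals
    reachable = {zeros + extra for extra in (1, 2, 3)}  # from the $
    status = 'satisfied' if any(lits) else 'unsatisfied'
    print(lits, status, 'can hit m+7:', target in reachable)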

In the other direction, if some string exists that can generate our clause string w, we know it has to start with k 1’s, then a $, then m t or f symbols.  The same logic shows that any string of that form that does not satisfy some clause will not generate the correct number of 0’s.  So whatever initial string generates w has to encode a way to satisfy our formula.

Difficulty: The hardest thing is to understand what is happening with the tree replacement.  Once you have that, the whole “figure out a way to handle 1-3 (but not 0) ways to satisfy a clause” idea is something that we’ve seen a few times.  So I guess I’d say an 8 for the actual reduction.

ETOL Language Membership

AL18 is Quasi-Realtime Language Membership.

Next up is another ET0L problem.

The problem: ETOL Language Membership.  This is problem AL19 in the Appendix.

The description: Given an ETOL Grammar G (See ETOL Grammar Non-Emptiness for the definition), and a string w, is w in the language generated by G?

It’s worth noting that in the paper by van Leeuwen that has the reduction, he says that the productions are from V into V*, which seems weird for several reasons:

  • The definition from last time was from V∪Σ into (V∪Σ)* (i.e. we could have terminals on the left side of a production)
  • V into V* means we never produce terminals
  • A context-free grammar production also is from a single element in V into a string in (V∪Σ)*.  So I don’t see how this definition isn’t equivalent to a Context-Free Grammar definition.  I know there are “tables” involved, but I haven’t yet seen a definition of an ETOL grammar where “tables” is not equivalent to “different options for productions”.  I’ve got a book on order through the library, so maybe I’ll come back to this when I learn more.

Example:  As I said above, I remain unclear as to the difference between a regular CFG and an ET0L grammar.  But here is a modified version of the grammar from last time (which still allows productions from terminals, because that’s what G&J’s definition allows):

S->A
A->B
B->a
a->ab

In this grammar, the string “ab” is in the language, and the string “bb” isn’t.
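One thing we can make concrete is how a table is applied: in one step, every symbol of the current string is rewritten simultaneously, with each symbol independently picking one of the table’s productions.  Here is a small Python sketch of that on the grammar above (the encoding is mine, and I treat symbols with no production as copied unchanged):

from itertools import product

table = {'S': ['A'], 'A': ['B'], 'B': ['a'], 'a': ['ab']}

def step(form):
    # Rewrite EVERY symbol of form at once, trying all combinations of
    # productions; symbols with no entry are copied unchanged.
    return {''.join(choice) for choice in
            product(*[table.get(sym, [sym]) for sym in form])}

forms, words = {'S'}, set()
for _ in range(5):                   # a few derivation steps
    forms = {g for f in forms for g in step(f)}
    words |= {f for f in forms if f.islower()}   # all-terminal strings
print(words)                         # {'a', 'ab', 'abb'}; never 'bb'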

Reduction: van Leeuwen works from 3SAT, and turns the input 3SAT formula into the string w to test by replacing all occurrences of variables with unary numbers (a different one for each variable).  So, to use the example from the paper, the formula (A ∨ B ∨ C) ∧ (~A ∨ B ∨ ~C) will turn into the input string:

(1 ∨ 11 ∨ 111) ∧ (~1 ∨ 11 ∨ ~111).
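Here is a quick Python sketch of that encoding (the clause representation is mine):

def unary_encode(formula):
    # A clause is a list of signed variable numbers: -1 means "not x1".
    def lit(l):
        return ('~' if l < 0 else '') + '1' * abs(l)
    return ' ∧ '.join('(' + ' ∨ '.join(lit(l) for l in c) + ')'
                      for c in formula)

# (A ∨ B ∨ C) ∧ (~A ∨ B ∨ ~C), with A = 1, B = 2, C = 3:
print(unary_encode([[1, 2, 3], [-1, 2, -3]]))
# (1 ∨ 11 ∨ 111) ∧ (~1 ∨ 11 ∨ ~111)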

He then makes a grammar that will generate all strings of this form that correspond to satisfiable formulas, and so our input string is generated by the grammar if and only if it is satisfiable.

He does this in several tables, but I think this is equivalent to a CFG.  We’ll see.

Our first table will use start symbol S (he uses Π) and will generate all of the possible ways to make each clause true:

S->(T ∨ T ∨ T)

S->(T ∨ T ∨ T) ∧ S

S-> (T ∨ T ∨ F)

S->(T ∨ T ∨ F) ∧ S

…and so on

Our second table replaces T’s and F’s with representations of the literals.  We can get a T from a true occurrence of a literal or a negated false occurrence of a literal.  We can get an F from a false occurrence of a literal, or a negated true occurrence of a literal.  The paper uses non-terminals like [., true] to mean a true occurrence of a literal and [~, true] to show a negated true occurrence.  I think it will be easier to use actual non-terminal letters:

T->TL  (true literal)

T->~NFL  (the ~ is a terminal symbol; the NFL stands for negated false literal, which evaluates to true)

F-> FL (false literal)

F->~NTL (negated true literal, which evaluates to false)

S-> $ ($ is a non-terminal that never gets replaced.  It’s here to force us to replace all S’s in the first table)

He then creates a third table that ends up being superfluous.  In the 4th table, we expand the literal non-terminals, and can terminate the literals that evaluate to true:

TL->1

NFL->1

TL->1TL

NTL->1NTL

FL->1FL

NFL->1NFL

S->$

The last table also lets us expand literals, or terminate the literals that evaluate to false:

NTL->1

FL->1

TL->1TL

NTL->1NTL

FL->1FL

NFL->1NFL

S->$

…this then gives us a grammar for all possible satisfiable strings.  So w is generated by this grammar if and only if the formula corresponding to w is satisfiable.
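To see the per-symbol choice within a table step, here is the same kind of sketch as in the example section, run on a fragment of the second table.  Sentential forms are lists of symbol names here, since non-terminals like TL are longer than one character (again, the encoding is mine):

from itertools import product

table2 = {'T': [['TL'], ['~', 'NFL']],
          'F': [['FL'], ['~', 'NTL']],
          'S': [['$']]}

def apply_table(table, form):
    # One table step: every symbol is rewritten simultaneously, each
    # independently choosing one of the table's productions; symbols
    # with no entry are copied unchanged.
    choices = [table.get(sym, [[sym]]) for sym in form]
    for combo in product(*choices):
        yield [sym for rhs in combo for sym in rhs]

for result in apply_table(table2, ['(', 'T', '∨', 'F', ')']):
    print(''.join(result))
# (TL∨FL)  (TL∨~NTL)  (~NFL∨FL)  (~NFL∨~NTL)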

This does feel like a Context-Free Grammar to me though.  I wonder if there is something specific about the progression through the tables that I don’t understand.

Difficulty: 6. The actual process (“Turn the SAT instance into a string and create a grammar that generates all legal strings”) is pretty straightforward and I like the idea.  I’m just sure I’m missing something about how the tables are applied and why this is not just a CFG.

 
