The previous problem, AL18, was Quasi-Realtime Language Membership. Next up is another ET0L problem.
The problem: ETOL Language Membership. This is problem AL19 in the Appendix.
The description: Given an ETOL Grammar G (see ETOL Grammar Non-Emptiness for the definition) and a string w, is w in the language generated by G?
It’s worth noting that in the paper by van Leeuwen that has the reduction, the productions are defined from V into V*, which seems weird for several reasons:
- The definition from last time was from V∪Σ into (V∪Σ)* (i.e. we could have terminals on the left side of a production)
- V into V* means we never produce terminals
- A context-free grammar production is also from a single element of V into a string in (V∪Σ)*, so I don’t see how this definition isn’t equivalent to a Context-Free Grammar definition. I know there are “tables” involved, but I haven’t yet seen a definition of an ET0L grammar where “tables” isn’t equivalent to “different options for productions”. I’ve got a book on order through the library, so maybe I’ll come back to this when I learn more.
Example: As I said above, I remain unclear as to the difference between a regular CFG and an ET0L grammar. But here is a modified version of the grammar from last time (which still allows productions from terminals, because that’s what G&J’s definition allows):
In this grammar, the string “ab” is in the generated language, and the string “bb” isn’t.
Reduction: van Leeuwen works from 3SAT, and turns the input instance of 3SAT into the string w by replacing each occurrence of a variable with a unary number (a different one for each variable). So, to use the example from the paper, the formula (A ∨ B ∨ C) ∧ (~A ∨ B ∨ ~C) turns into the input string:
(1 ∨ 11 ∨ 111) ∧ (~1 ∨ 11 ∨ ~111).
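Here is a quick sketch of that encoding step (my code, not the paper’s — the clause representation and the encode() function are made up for illustration):

```python
# Sketch of the encoding step (illustrative only).
# Each variable gets a distinct unary number, assigned by first appearance.

def encode(clauses):
    """clauses: list of clauses; each literal is a (name, negated) pair."""
    ids = {}
    def unary(name):
        if name not in ids:
            ids[name] = "1" * (len(ids) + 1)
        return ids[name]
    parts = []
    for clause in clauses:
        lits = [("~" if neg else "") + unary(name) for name, neg in clause]
        parts.append("(" + " ∨ ".join(lits) + ")")
    return " ∧ ".join(parts)

# The formula (A ∨ B ∨ C) ∧ (~A ∨ B ∨ ~C) from the paper:
formula = [[("A", False), ("B", False), ("C", False)],
           [("A", True), ("B", False), ("C", True)]]
print(encode(formula))  # → (1 ∨ 11 ∨ 111) ∧ (~1 ∨ 11 ∨ ~111)
```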
He then makes a grammar that will generate all possible satisfiable strings over these unary numbers, and so if our input string is generated by the grammar, the formula is satisfiable.
He does this in several tables, but I think this is equivalent to a CFG. We’ll see.
Our first table uses start symbol S (he uses Π) and generates all of the ways to assign truth values within each clause so that every clause comes out true:
S -> (T ∨ T ∨ T)
S -> (T ∨ T ∨ T) ∧ S
S -> (T ∨ T ∨ F)
S -> (T ∨ T ∨ F) ∧ S
…and so on, for every combination with at least one T.
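The table is small enough to enumerate mechanically. A sketch (mine, not the paper’s): every clause pattern over {T, F} with at least one T appears, once with and once without a trailing “∧ S” so the formula can either stop or continue with another clause.

```python
from itertools import product

# Enumerate the first table's productions (illustrative sketch).
prods = []
for combo in product("TF", repeat=3):
    if "T" not in combo:
        continue  # an all-F clause can never come out true
    clause = "(%s ∨ %s ∨ %s)" % combo
    prods.append(("S", clause))
    prods.append(("S", clause + " ∧ S"))
print(len(prods))  # → 14 (7 satisfiable patterns × 2)
```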
Our second table replaces the T’s and F’s with representations of the literals. We can get a T from a true occurrence of a literal, or from a negated occurrence whose negation comes out true. We can get an F from a false occurrence of a literal, or from a negated occurrence whose negation comes out false. The paper uses non-terminals like [., true] to mean a true occurrence of a literal and [~, true] to show a negated occurrence of a true literal. I think it will be easier to use actual non-terminal letters:
T -> TL (true literal)
T -> ~ NTL (the ~ is a terminal symbol; NTL stands for “negated true literal”)
F -> FL (false literal)
F -> ~ NFL (negated false literal)
S -> $ ($ is a non-terminal that never gets replaced. It’s here to force us to replace all of the S’s from the first table.)
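For what it’s worth, here is a sketch of what I understand a single derivation step to be under the usual L-systems convention (my code, and possibly the piece I’m missing about tables): a step chooses one table and rewrites every symbol of the string simultaneously with productions from that table, with symbols the table doesn’t mention copied unchanged.

```python
import random

# One parallel derivation step (illustrative sketch, assuming the
# standard L-systems convention of simultaneous rewriting).

def step(symbols, table):
    """symbols: list of symbols; table: dict symbol -> list of right-hand sides."""
    out = []
    for s in symbols:
        choices = table.get(s, [[s]])  # implicit identity production
        out.extend(random.choice(choices))
    return out

# A table that doubles every 'a'. A CFG derivation rewrites one
# non-terminal at a time and couldn't do this in a single step.
double = {"a": [["a", "a"]]}
print("".join(step(["a", "a", "a"], double)))  # → aaaaaa
```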
He then creates a third table that ends up being superfluous. In the fourth table, we replace these non-terminals with literals, possibly terminating the literals that evaluate to true. The last table lets us keep expanding literals, or terminate the false literals.
This gives us a grammar for all possible satisfiable strings, so w is generated by the grammar if and only if the formula corresponding to w is satisfiable.
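To make the membership question itself concrete, here is a brute-force sketch (mine, not van Leeuwen’s) that searches all parallel derivations of a toy table system up to a bound. It’s exponential, of course — the whole point of the problem is deciding whether w is derivable at all.

```python
from itertools import product

# Brute-force membership for a toy table-based parallel rewriting system
# (illustrative sketch). Each step picks one table and rewrites every
# symbol at once; we explore all derivations up to a length/step bound.

def member(w, axiom, tables, max_len=20, max_steps=10):
    seen = {axiom}
    frontier = [axiom]
    for _ in range(max_steps):
        nxt = []
        for s in frontier:
            for table in tables:
                # one right-hand side chosen per symbol, all applied in parallel
                options = [table.get(c, [c]) for c in s]
                for pick in product(*options):
                    t = "".join(pick)
                    if len(t) <= max_len and t not in seen:
                        seen.add(t)
                        nxt.append(t)
        frontier = nxt
    return w in seen

# A single CFG-like table: S -> aSb | ab.
tables = [{"S": ["aSb", "ab"]}]
print(member("aabb", "S", tables))  # → True
print(member("abab", "S", tables))  # → False
```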
This does feel like a Context-Free Grammar to me though. I wonder if there is something specific about the progression through the tables that I don’t understand.
Difficulty: 6. The actual process (“Turn the SAT instance into a string and create a grammar that generates all legal strings”) is pretty straightforward and I like the idea. I’m just sure I’m missing something about how the tables are applied and why this is not just a CFG.