Let’s see if I can build my own reduction for this one.
The problem: Inequivalence of Finite Memory Programs. This is problem PO13 in the appendix.
The description: Given a finite set X of variables, a finite set Σ of constant symbols, and two programs P1 and P2, each a sequence of instructions I1..Im (the two programs may have different lengths), where each instruction is one of:
- read xi
- write vi
- xi <- vj
- if vi = vj then goto Ik
- halt (“halt” tends to mean “reject”)
Each xi is a variable from X. Each vj is either a variable from X, a symbol from Σ, or a special “end marker” symbol $. Ik is an instruction in the program.
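To pin down the semantics for myself, here is a small interpreter sketch in Python. The read/write behavior is my guess at what the paper intends (read consumes the next input symbol into a variable, write appends a value to the output, and $ shows up when the input runs out); the tuple encoding of instructions is my own.

```python
# A sketch of an interpreter for the language, under my assumed semantics.
def run(program, env, inp, max_steps=10_000):
    """program: list of instruction tuples; env: dict giving the initial
    assignment of values to variables; inp: list of input symbols.
    Returns the list of written values."""
    inp = list(inp)
    out = []
    pc = 0  # 0-based index into the program; labels Ik are 1-based
    def val(v):
        # A variable name resolves to its contents; anything not in env
        # is treated as a constant symbol (assumes names don't collide).
        return env.get(v, v)
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op = program[pc]
        pc += 1
        if op[0] == "read":                       # read xi
            env[op[1]] = inp.pop(0) if inp else "$"  # $ marks end of input
        elif op[0] == "write":                    # write vi
            out.append(val(op[1]))
        elif op[0] == "assign":                   # xi <- vj
            env[op[1]] = val(op[2])
        elif op[0] == "if":                       # if vi = vj then goto Ik
            if val(op[1]) == val(op[2]):
                pc = op[3] - 1
        elif op[0] == "halt":                     # halt (i.e. reject)
            break
    return out
```

For instance, the two-instruction program below always writes "2" regardless of the initial assignment.

```python
p1 = [("assign", "X1", "2"), ("write", "X1")]
run(p1, {"X1": "0"}, [])   # -> ["2"]
```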
Is there an initial assignment of values to the variables (the same assignment for both programs) on which P1 and P2 produce different outputs?
Here are two programs:

P1:
I1: X1 <- 2
I2: write X1

P2:
I1: if X1 = X1 goto I4
I2: X1 <- 3
I3: if X1 = X1 goto I5
I4: X1 <- 2
I5: write X1
Despite the two programs being different, P2's first if statement always jumps to I4, and then both programs set X1 to 2, write the new value, and halt, so the two programs produce the same output on every input.
If we change P2’s first instruction to:
I1: if X1=1 goto I4
..then the programs will produce different outputs whenever X1 is not 1.
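Tracing the two versions by hand, the outputs as a function of the initial value of X1 come out like this (a sketch; p1 is the two-instruction program, p2_modified is P2 with the changed first instruction):

```python
def p1(x1):
    x1 = "2"            # I1: X1 <- 2
    return x1           # I2: write X1 (the initial value is ignored)

def p2_modified(x1):
    if x1 == "1":       # I1: if X1 = 1 goto I4
        x1 = "2"        # I4: X1 <- 2
        return x1       # I5: write X1
    x1 = "3"            # I2: X1 <- 3
    return x1           # I3: if X1 = X1 goto I5, then I5: write X1
```

So X1 = 1 is the only initial assignment on which the two outputs agree.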
Reduction (from the paper): The paper by Jones and Muchnick reduces from Linear Bounded Automaton Acceptance. They first show that a program in this format can be represented by a “Deterministic Generalized Sequential Machine” (DGSM), which works like a finite automaton that can output a string on each transition. (I’m getting the description of this from my old, 1979 copy of Hopcroft and Ullman, section 11.2.)
The DGSM has a state for each pair of:
- Labels of read statements in the program
- All possible strings of length |X| (one position per variable), where each position holds either a symbol from Σ or $.
In addition to all of those 2-tuples, we have an accepting state qa and a halting state qh.
The start state is the tuple (instruction of the first read in the program, whatever is in the variables at that point of the program).
The paper goes on to claim that once we have this, the “transition function can be defined in a totally finite manner”, and then claims that once we do this, we can use another result that says that inequivalence for DGSM’s is NSPACE(log(n))-complete.
I’m a little worried that the number of states in our DGSM is exponential in the number of variables. Maybe that doesn’t matter, because this is just proving the existence of the DGSM?
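To put a number on that worry: with r read labels, k = |X| variables, and s = |Σ| symbols, my reading of the construction gives r·(s+1)^k tuple states (each variable holds a symbol from Σ or $), plus qa and qh. The parameter names here are mine, not the paper's:

```python
# State count for the DGSM built from a program, under my reading:
# r read labels, k variables, s symbols in Sigma (plus the end marker $).
def dgsm_states(r, k, s):
    return r * (s + 1) ** k + 2   # tuple states, plus qa and qh

print(dgsm_states(r=5, k=3, s=2))    # 5 * 3**3 + 2 = 137
print(dgsm_states(r=5, k=10, s=2))   # 5 * 3**10 + 2 = 295247
```

Holding everything else fixed, the count grows exponentially as the number of variables k grows, which is the source of my worry above.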
Reduction (my try): So let’s see if I can try to create a reduction myself.
Suppose we have an LBA M with input x. We’ll build a program in our language to simulate its behavior: one variable for each tape cell, plus an extra variable “xt” holding the (integer) position of the tape head.
We can simulate looking at a tape cell with:
if xt = 1 then goto an instruction that looks at x1
if xt = 2 then goto an instruction that looks at x2
.. and so on
This increases the size of the program by only a linear factor (an LBA uses only a linear number of tape cells, so each dispatch block has linearly many lines).
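As a sanity check on the gadget size, here is a hypothetical generator for that dispatch block. The label scheme is made up; the point is just that each simulated tape access costs one line per cell:

```python
# Emit the head-dispatch gadget for a tape of length n: one
# "if xt = i then goto ..." line per cell. The convention that cell i's
# handler starts at label base + i is an assumption for illustration.
def dispatch_gadget(n, base):
    return [f"if xt = {i} then goto I{base + i}" for i in range(1, n + 1)]

for line in dispatch_gadget(3, 10):
    print(line)
```

For n = 3 and base 10 this prints three lines, "if xt = 1 then goto I11" through "if xt = 3 then goto I13", confirming the linear cost per access.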
So now we can simulate our machines with these programs, and if the resulting programs are inequivalent, then the LBAs we started with are inequivalent as well.
I don’t know. It feels too easy.
Difficulty: 8. It might be less if you can find a clear description of what the instructions in the language do. The paper describes a configuration of a program as the instruction to perform next plus two strings. One string represents the contents of the variables. The other, I think, is related to what the read and write instructions do, but it is hard for me to see exactly what those instructions are doing. So I may have missed something.