Strong Inequivalence of Ianov Schemes

I’d never heard of “Ianov schemes” before, but we’ll be using them for the next couple of problems, so let’s take a look!

The problem: Strong Inequivalence of Ianov Schemes.  This is problem PO16 of the appendix.

The description: Given two programs with a single variable x and instructions:

  1. x = f(x) (where f is some function from some given set of functions F)
  2. if p(x) goto I1 else goto I2 (p(x) is a boolean predicate from some given set of predicates P, I1 and I2 are instructions in the program)
  3. Halt

Are the two programs not strongly equivalent?  That is, can we find a function from F for each f, a predicate from P for each p, and an initial value for the variable that gives different final values for the variable in the two programs?

Example: Here is one similar to what we will need in a minute:

P1:

1: if p(x) then 4 else 2
2: x =  f(x)
3: Halt
4: Halt

P2:

1: Halt

Suppose the only choice for f is f(x) = x+1 (so, changing the variable).  Then the programs are different if we have a p(x) that will let us get to line 2 in P1.  If our only choice of p is something like “p(x) = true”, then the two programs are always equivalent.  If our set P is {p(x) = true, p(x) = false}, then we can choose a predicate that makes the programs inequivalent.

Reduction: The paper from Constable, Hunt, and Sahni basically does this in two parts.  G&J say to use 3SAT, but this looks to me like DNF Non-Tautology.

First (this is Proposition 2.11), they show how to take a DNF formula and turn it into a program.  The program works as a giant chain of if statements checking each literal in each clause.  If a literal is checked to be false, we move on to the next clause.  If all of the literals in a clause are true, we make it to the first (“+”) halting state.  If none of the clauses make it to the “+” state, we go to the second (“-“) halting state.  This means that the “-” state is only reachable if we do not start with a tautology.
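To make the chain-of-ifs concrete, here is a little Python sketch of how I picture the Proposition 2.11 program (the encoding and names are mine, not the paper’s).  A DNF formula is a list of clauses, and each clause is a list of (variable, negated?) literals:

def dnf_program(formula, assignment):
    # Simulate the chain of if statements: a false literal sends us
    # to the next clause; an all-true clause reaches the "+" halt.
    for clause in formula:
        if all(assignment[var] != negated for (var, negated) in clause):
            return '+'
    return '-'   # no clause was satisfied

# The "-" halt is reachable iff the formula is not a tautology:
formula = [[('x', False)], [('x', True), ('y', False)]]   # x OR (~x AND y)
print(any(dnf_program(formula, {'x': x, 'y': y}) == '-'
          for x in (False, True) for y in (False, True)))  # True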

Next (This is part iii of Corollary 2.14 of Theorem 2.13), they show that they can extend this into an Ianov scheme program by making the “+” state be just a halt command (so the variable never gets changed), and the “-” state be:

1: x = f(x)  (where f is some function that changes x)
2: halt.

So the only way this program is different from the trivial “just halt” program is if we can reach the “-” instruction, which only happens if the formula is not a tautology.

Difficulty: 5. The program to test the non-tautology is not that hard.  As usual, the hardest part is parsing all of the definitions in the problem.

Inequivalence of Simple Functions

This paper (also referenced in our last problem) is the first one in the Bibliography whose first author started with “T”.  It feels weird that it took so long to get that letter.  Now the only letters missing in the Bibliography are “Q” (though I have a direct proof from Maurice Queyranne in the K-Closure post),  “X”, and “Z”.  I’m not sure I’ll get any “X” or “Z” papers, we’ll see.

The problem: Inequivalence of Simple Functions.  This is problem PO15 in the appendix.

The description: A simple function is a composition of basic functions on non-negative integers from:

  • s(x) = x+1
  • plus(x,y) = x+y
  • sub(x) = x-1
  • selecti(x1..xn) = xi
  • div(x,t) = [x/t] (integer division where t is a constant positive integer)
  • w(x,y) = if(y == 0) then x else 0
  • mod(x,t) = The remainder of dividing x by t (t is a constant positive integer)

We are given two simple functions f and g over some set of initial variables X.  Can we provide starting values in X that give different outputs for f and g?

Example:

Let

f(x,y) = s(s(s(x)))

(This returns x+3)

g(x,y) = s(s(s(w(x,y))))

(This returns x+3 if y is 0, but 3 if y is not 0)

On any input pairs (x,y) where y is 0, f and g will return the same value.  But the functions are inequivalent because f(2,7) = 5 but g(2,7) = 3.
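Here is a quick Python sanity check of this example (the encoding is mine; I am assuming the usual convention that sub is “monus”, staying at 0 on non-negative integers):

def s(x): return x + 1
def plus(x, y): return x + y
def sub(x): return max(x - 1, 0)         # assuming x-1 bottoms out at 0
def select(i, *xs): return xs[i - 1]
def div(x, t): return x // t             # t is a constant positive integer
def w(x, y): return x if y == 0 else 0
def mod(x, t): return x % t

def f(x, y): return s(s(s(x)))           # x + 3
def g(x, y): return s(s(s(w(x, y))))     # x + 3 if y == 0, else 3

print(f(2, 7), g(2, 7))   # 5 3, so (2,7) witnesses the inequivalence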

Reduction: Tsichritzis uses Inequivalence of Loop Programs Without Nesting, which we did last time.  The idea is to show that all such programs actually define simple functions (and so, if we can find the simple function derived from a program, that is our reduction).

First, he shows that a program without loops either returns xi + k, or k, where xi is some variable and k is a constant.  The idea is that since we can only do three things, each command can be rewritten from the top down:

  • If the instruction you are looking at is an assignment X=Y, then replace that with X=A (where A is whatever value Y is.  Either it has been computed previously in the program, or it is Y’s starting value)
  • If the instruction you are looking at is an increment X=X+1, then replace that with X=A+1, (where A is whatever the previous value of X is.  Either it has been computed previously in the program, or is X’s starting value)
  • If the instruction is X=0, then remember that 0 is the most recent previous value of X.

At the end, we look at what the output variable has been updated to.  It can only have been changed by some amount of additions to some variable’s start value, or by some amount of additions to 0.
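Here is a sketch of that top-down rewriting in Python (my own formulation): each variable’s value is tracked as a pair (base, k), where base is a starting variable’s name (or None, meaning the constant 0) and k is the number of increments applied to it:

def eval_loop_free(program, variables):
    # program: list of ('copy', dest, src), ('inc', x), or ('zero', x)
    val = {v: (v, 0) for v in variables}   # each variable starts as itself
    for instr in program:
        if instr[0] == 'copy':             # X = Y: take Y's current value
            val[instr[1]] = val[instr[2]]
        elif instr[0] == 'inc':            # X = X + 1: bump the constant
            base, k = val[instr[1]]
            val[instr[1]] = (base, k + 1)
        else:                              # X = 0: forget the old value
            val[instr[1]] = (None, 0)
    return val

prog = [('zero', 'x0'), ('inc', 'x0'), ('copy', 'x1', 'x0'), ('inc', 'x1')]
print(eval_loop_free(prog, ['x0', 'x1'])['x1'])   # (None, 2): the constant 2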

So what is left is what goes on inside a loop.  Since loops cannot be nested, we know that the body of the loop only has the basic instructions.  Thus, all a loop can do is, for each variable Yi:

  1. Leave it alone, so, if yi was the contents of the variable at the beginning of the loop, Yi = yi at the end of the loop.
  2. Reset the value to some new constant k
  3. Set it to some other variable Yj‘s initial value yj plus some constant k.  We’ll say in this case that variable Yi links variable Yj

It’s possible for this linking to form a cycle.  It’s also possible for variables to depend on variables inside a cycle, but not to be in the cycle themselves.  The third possibility is for a variable to not care about a cycle at all and be independent.  The values of the independent variables are easy to express as simple functions.

If we have a cycle, the paper shows how the changes to the variable in one iteration can get added together by everything in the cycle, and then multiplied by the number of iterations the cycle goes.  These can also be represented as simple functions.

Difficulty: 6. The actual task we’re trying to do (“represent this program as a simple function”) is pretty easy to see.  The details of how to write the simple functions are pretty finicky, though.


Inequivalence of Loop Programs Without Nesting

This is another one of those “mainly ignore most of the problem” reductions.

The Problem: Inequivalence of Loop Programs Without Nesting.  This is problem PO14 of the appendix.

The description: Given:

  • a set of X variables,
  • a subset Y of X which are “input” variables,
  • a specific output variable x0
  • Two programs P1 and P2 with the commands:
    • x <- y (x gets a copy of y)
    • x <- 0  (x gets set to 0)
    • x<- x + 1 (x is incremented by 1)
    • DO xi {S1 S2..Sn} END  (see below)

..is there a way to assign the variables in Y so that P1 and P2 produce different outputs in our output variable x0?

Example: The semantics of the “DO” statement are (they’re better described by Tsichritzis in his paper) that we execute the loop body a number of times equal to the value of xi (if it is 0, we do not execute the loop at all).  The Si statements can be any program statement except another loop (since we are looking at programs without nesting- even allowing one level of nesting makes this problem undecidable).

So, here are two programs, where x1 is our input variable and x0 is our output variable:

P1:

x0 <- 0
DO x1
x0 <- x0 + 1
END

P2:

x0 <- x1

These programs are equivalent as long as we do not allow our input values to be negative (which seems to be the convention) because both programs will copy the integer value in x1 into x0 (though P1 does it more slowly).  Suppose we used the alternate P1

P1′:

x0 <- 0
DO x1
x0 <- x0 + 1
x0 <- x0 + 1
END

..this has the effect of making x0‘s value twice the value of x1.  While P1 and P2 will still have the same value in x0 if x1=0, any other value in x1 will result in different outputs, making our programs inequivalent.
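To pin down the semantics, here is a small interpreter for this fragment (a sketch, with my own program encoding; I am reading the DO count as the value of xi when the loop is entered):

def run(program, env):
    # program: list of ('copy', dest, src), ('zero', x), ('inc', x),
    # or ('do', x, body), where body contains no nested loops.
    for instr in program:
        op = instr[0]
        if op == 'copy':
            env[instr[1]] = env[instr[2]]
        elif op == 'zero':
            env[instr[1]] = 0
        elif op == 'inc':
            env[instr[1]] += 1
        else:                                  # ('do', x, body)
            for _ in range(env[instr[1]]):     # count fixed at loop entry
                run(instr[2], env)
    return env

p1_prime = [('zero', 'x0'),
            ('do', 'x1', [('inc', 'x0'), ('inc', 'x0')])]
print(run(p1_prime, {'x0': 0, 'x1': 3})['x0'])   # 6: twice x1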

Reduction: Constable, Hunt, and Sahni use 3SAT.  The basic idea is to use loops that execute at most once as if statements.  (I don’t think they have a loop that iterates more than once in the whole program).  So for example, they will set up a clause variable Ci that will be set to 1 if any of its literal variables are true with a subprogram like:

Ci <- 0
DO Ci1  (Ci1 is the first literal of clause i)
Ci <- 1 (Technically this is Ci <- 0 followed by Ci <- Ci +1)
END

DO Ci2
Ci <- 1
END

DO Ci3
Ci <- 1
END

..each of the loops will set Ci to 1 if the literal is true, and leave it alone if the literal is false.

So, with that in mind, we take a 3SAT formula and write a program:

  • Our input variables are the variables to the formula.
  • We create the negations of all of the variables in an if statement  (this way we have both positive and negative literals for each variable)
  • We compute all of the Ci variables as above
  • We compute the negations of all of the Ci variables. (we need this because our loop-based if statement can only do things if a variable is 1, not 0)
  • We create a variable P that is initially 1.  If any of the negated Ci variables are 1 (so the regular Ci variable was 0, so the clause was not satisfied), we reset P to 0.  P is our output variable.

This program sets our output variable P to 1 if its inputs form a satisfying assignment of the variables, and outputs 0 otherwise.  This is our P1 program.

Our P2 program is a program that always outputs 0.  P1 and P2 will be different if and only if the formula is satisfiable.
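Here is a sketch of the whole P1 construction in Python, reusing the run interpreter from the example above (the variable naming and the DIMACS-style clause encoding are mine):

def sat_program(clauses, n):
    # clauses: list of clauses, each a list of nonzero ints
    # (+i for variable x_i, -i for its negation).
    prog = []
    set_one = lambda v: [('zero', v), ('inc', v)]
    for i in range(1, n + 1):                  # negations of the inputs
        prog += set_one(f'nx{i}')
        prog.append(('do', f'x{i}', [('zero', f'nx{i}')]))
    for j, clause in enumerate(clauses):       # clause variables
        prog.append(('zero', f'c{j}'))
        for lit in clause:
            v = f'x{lit}' if lit > 0 else f'nx{-lit}'
            prog.append(('do', v, set_one(f'c{j}')))
        prog += set_one(f'nc{j}')              # negated clause variable
        prog.append(('do', f'c{j}', [('zero', f'nc{j}')]))
    prog += set_one('p')                       # P starts at 1...
    for j in range(len(clauses)):              # ...and any unsatisfied
        prog.append(('do', f'nc{j}', [('zero', 'p')]))   # clause resets it
    return prog

env = run(sat_program([[1, -2]], 2), {'x1': 1, 'x2': 1})
print(env['p'])   # 1: this assignment satisfies (x1 OR ~x2)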

Difficulty: 5. The reduction itself is pretty easy- “write a program that sees if the inputs are satisfiable”.  The hard part is realizing that you don’t really use the loop structure as a loop, and that you aren’t searching for an input to make it satisfiable, you’re just using whatever input the SAT instance gives you.

Inequivalence of Finite Memory Programs

Let’s see if I can build my own reduction for this one.

The problem: Inequivalence of Finite Memory Programs.  This is problem PO13 in the appendix.

The description: Given a finite set X of variables, a finite set of constant symbols Σ, and two programs P1 and P2, each made of a sequence of instructions I1..Im (the two programs can have different lengths):

  • read xi
  • write vi
  • xi <- vj
  • if vi = vj then goto Ik
  • accept
  • halt (“halt” tends to mean “reject”)

Each xi is a variable from X.  Each vj is either a variable from X, a symbol from Σ, or a special “end marker” symbol $.  Ik is an instruction in the program.

Is there an initial assignment of values to all variables in P1 and P2 (the same assignment to variables in both programs) that leads to different output values of variables in both programs?

Example:

Here are two programs:

P1:

I1: X1 <- 2
I2: write X1
I3: accept

P2:

I1: if X1 = X1  goto I4
I2: X1 <- 3
I3: if X1=X1 goto I5
I4: X1 <- 2
I5: write X1
I6: accept

Despite the two programs being different, P2 will always go to I4 in the first if statement, and then both programs set X1 to 2, write the new value, and accept, so the two programs produce the same output on all inputs.

If we change P2’s first instruction to:

I1: if X1=1 goto I4

..then the programs will produce different outputs whenever X1 is not 1.

Reduction (from the paper): The paper by Jones and Muchnick uses Linear Bounded Automaton Acceptance.  They first show that a program in this format can be represented by a “Deterministic Generalized Sequential Machine” (DGSM), which works like a finite automaton that can output any string on a transition.  (I’m getting the description for this from my (old, 1979) copy of Hopcroft and Ullman, section 11.2.).

The DGSM has a state for each pair of:

  • Labels of read statements in the program
  • All possible strings of length m (the number of variables) containing either a symbol from Σ, or $.

In addition to all of those 2-tuples, we have a state qa (the accepting state) and a state qh (the halting state).

The start state is the tuple (instruction of the first read in the program, whatever is in the variables at that point of the program).

The paper goes on to claim that once we have this, the “transition function can be defined in a totally finite manner”, and then claims that once we do this, we can use another result that says that inequivalence for DGSM’s is NSPACE(log(n))-complete.

I’m a little worried that the number of states in our DGSM is exponential in the size of the number of variables.  Maybe that doesn’t matter because this is just proving the existence of the DGSM?

Reduction (my try): So let’s see if I can try to create a reduction myself.

Suppose we have an LBA M, with its starting input x.  We’ll build a program in our language to simulate its behavior.  We’ll store one variable for each tape cell and an extra variable “xt” holding the (integer) position of the tape head.

We can simulate looking at a tape cell with:

if xt = 1 then goto an instruction that looks at x1
if xt = 2 then goto an instruction that looks at x2
.. and so on

This will increase the size of the program by a linear amount (because the tape cells only use a linear amount of space).
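A sketch of generating that dispatch (entirely my own, not from the paper; the integer constants would have to be symbols in Σ):

def dispatch_code(n, label_for_cell):
    # Emit the if-chain that routes control to the block of code
    # that inspects tape cell i, for an n-cell LBA tape.
    return [f'if xt = {i} then goto {label_for_cell(i)}'
            for i in range(1, n + 1)]

print('\n'.join(dispatch_code(3, lambda i: f'LOOK{i}')))
# if xt = 1 then goto LOOK1
# if xt = 2 then goto LOOK2
# if xt = 3 then goto LOOK3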

So now we can simulate our machines using these programs and if our programs are inequivalent, then the LBA’s we started with are as well.

I don’t know.  It feels too easy.

Difficulty: 8.  It might be less if you can provide a clear description of what the instructions in the language do.  The paper describes a configuration of a program as an instruction to perform next, plus two strings.  One of the strings represents the contents of the variables.  The other string I think is related to what the read and write instructions do, but it is hard for me to see what exactly those instructions are doing.  So I may have missed something.

Inequivalence of Programs with Arrays, Inequivalence of Programs with Assignments

Both of these problems are from the same paper and use basically the same proof, so we’ll do them together.

The Problems: Inequivalence of Programs With Arrays (PO11 in the appendix) and Inequivalence of Programs With Assignments (PO12 in the appendix)

The Description (for Inequivalence of Programs With Assignments): 

Given two programs P and Q that access a common set of variables S.  Each statement in the programs is of the form “x0 <- if x1 = x2 then x3 else x4“, where each xi is a variable from S.  Is there an initial variable configuration for which the two programs give different outputs?

The Description (for Inequivalence of Programs With Arrays):

Same idea, but the possible statements in the program are:

  • Select an element B of an array α: A <- α.B
  • Update an element B of an array α: α.B <- A
  • “Operate” with an operation Φ: A <- Φ(B1, B2, .., Bk).  Φ comes from a given finite set of operations and takes a fixed number of arguments (k) that we know ahead of time.

Example: The “outputs” of these programs are the final values of the variables.  They denote a “final” value by representing a program as a Directed Acyclic Graph, where the leaf nodes (of outdegree 0) are the variables with final values.  We will specify which variables we care about as output variables.

So for example, we might have the one-line program:

k <- if i=j then 2 else 3

Compared to the one-line program:

k <- if i=j then 3 else 2

If k is our output variable, then for all values of i and j both programs will produce different outputs.  If we change our second program to:

k<- if i=j then 2 else 2

…then if we assign i and j the same values, both programs will give the same output.

Reduction (for Inequivalence of Programs With Assignments): Downey and Sethi use 3SAT.  If we start with a formula with variables y1..yn, we will create a program with variables {T, F, A1..An, B1..Bn}.  Then we just write a program that checks if all clauses are true.  “D” will be our output variable and will be updated for each clause.  We start it as true initially:

D <- if T = T then T else T // D is true so far

Then we check the first clause.  A variable “C” will be used for each clause.  It starts out false and will be set to true once the clause is satisfied:

C <- if F = F then F else F // C is false so far

Then for each literal in the clause we check if it has been made true.  We use Ai if the literal is positive in the clause and Bi if the literal is negative in the clause.  So suppose our clause was (y1 ∨ ~y2 ∨ y4); we’d have the lines:

// does the first literal make the clause true?
C <- if A1 = T then T else C
// does the second literal make the clause true?
C <- if B2 = T then T else C
// does the third literal make the clause true?
C <- if A4= T then T else C

C now is true if and only if the clause is satisfied, so let’s update D:

// D becomes false if we don't satisfy that clause
D <- if C = T then D else F

We then repeat that process for each clause.  At the end, we add rules saying that a variable and its negation can’t both have the same value:

D <- if A1 = B1 then F else D

..repeated for each variable.

The final output of this program is D, which will be true if our inputs legally satisfied the formula, and false otherwise.

Our second program just outputs the constant “F”.  The programs will produce different outputs if and only if there is a way for the first program to produce a T, which happens when D is true, which happens when all clauses are true, which happens when the formula is satisfiable.
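Here is a sketch of the whole construction in Python, emitting the statement list as strings (the clause encoding is mine; the statements are exactly the ones above):

def assignments_program(clauses, n):
    # clauses: list of clauses, each a list of nonzero ints
    # (+i for y_i, -i for ~y_i).
    stmts = ['D <- if T = T then T else T']          # D starts true
    for clause in clauses:
        stmts.append('C <- if F = F then F else F')  # C starts false
        for lit in clause:
            v = f'A{lit}' if lit > 0 else f'B{-lit}'
            stmts.append(f'C <- if {v} = T then T else C')
        stmts.append('D <- if C = T then D else F')  # unmet clause kills D
    for i in range(1, n + 1):                        # consistency rules
        stmts.append(f'D <- if A{i} = B{i} then F else D')
    return stmts

print('\n'.join(assignments_program([[1, -2, 4]], 4)))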

Reduction (Inequivalence of Programs With Arrays): The same program works; we just need to replace our if-then-else assignments.  We do this by placing the elements we are comparing into an array, replacing the statement “E <- if A=B then C else D” with:

α.A <- D
α.B <- C
E <- α.A

..if A and B are the same, then we will fill that spot in the array with D then C, and get a C out.  Otherwise, the two assignments go to different locations, and E is set to what is in the array at position A, which is the value D.  Since we can go back to the previous reduction and replace all of its statements with 3 statements under the new rules, the reduction works in the same way.
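The trick is easy to check concretely; here is a sketch with a Python dict standing in for the array α:

def if_assign_via_array(a, b, c, d):
    # Compute 'E <- if A=B then C else D' with only select/update.
    alpha = {}
    alpha[a] = d      # alpha.A <- D
    alpha[b] = c      # alpha.B <- C (overwrites slot A exactly when A == B)
    return alpha[a]   # E <- alpha.A

print(if_assign_via_array(1, 1, 'C', 'D'))   # C  (A == B)
print(if_assign_via_array(1, 2, 'C', 'D'))   # D  (A != B)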

Difficulty: 6.  This isn’t a hard reduction to do; I think the hardest thing is to get students to think in terms of the restrictions these programs require.  There are no loops or anything (that would make this problem undecidable, at least in general), and representing variables as positions in an array is something we might be used to doing if all variables are integers, but since variables here are things like “B2”, some work might need to be done to convince students that the mapping from variables to integers is reasonable.  It’s not a thing we do as much with modern high-level languages as we might have done in the 1970s when the underlying machine was much more exposed.

Microcode Bit Optimization

The Problem: Microcode Bit Optimization.  This is problem PO10 in the appendix

The description: We have a set A of “microcommands”.  We take groups of these commands to create a set C of “microinstructions”.  We would like to partition our set A of commands into groups where each microinstruction uses at most one microcommand from each group.  Given an integer K, can we make such a split of A (into groups A1..An) such that \sum_{i=1}^n \lceil \log_2(|A_i|+1) \rceil \leq K?

(I’ll note here that the definition in G&J requires this split into groups to be a partition into disjoint sets, while the paper we will be using allows the split to be a covering where a microcommand can appear in multiple Ai)

Example: The log part is there because we can represent groups as binary numbers.  If we put all of the microcommands into a single group, then each microinstruction can be represented as a binary number naming the one microcommand it uses: 101 might mean microcommand #5.  This makes our microinstructions small, but each microinstruction can only represent one microcommand.

In the other extreme, we can make every microcommand its own group and represent a microinstruction as a binary string of which microcommands are being used.  In this model, the instruction “10101” says that we are using the first, third, and fifth microcommands, but not any of the others.  This lets us have longer microinstructions that use several microcommands at once.

So the problem is asking: how should we group the microcommands to minimize the representation of the microinstructions in a given program?  So, for example, suppose we had 8 microcommands and the following microinstructions (written as binary strings of which microcommands are used):

  • 1000 1111
  • 0100 0011
  • 0010 1100
  • 0001 0101

Notice that our 4 microinstructions never have 2 microcommands used from the 4 leftmost commands at the same time, so we can represent that chunk of the instruction as 2 bits (counting from 0-3, representing what command we are using).  We will need 4 bits to represent the rightmost 4 microcommands because all of those microcommands are in use in the first microinstruction.  But if we change the last 4 bits of the first instruction to 1011, then no instruction uses both the second and third commands out of those 4 bits, and we could represent those 2 commands using 1 bit (maybe a 0 means we use the second microcommand, and a 1 means we use the third one)
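Here is a small Python sketch for checking a grouping and computing its cost (my own encoding: each microinstruction is the set of indices of the microcommands it uses).  Note that the +1 in the cost formula apparently reserves a code for “this instruction uses no command from the group”, so by the formula the grouped left four cost ceil(log2(4+1)) = 3 bits rather than 2, which still beats the 4 bits of four singleton groups:

from math import ceil, log2

def width(groups, instructions):
    # Return the encoded width of a grouping, or None if some
    # microinstruction uses two commands from the same group.
    for ins in instructions:
        for g in groups:
            if len(ins & g) > 1:
                return None
    return sum(ceil(log2(len(g) + 1)) for g in groups)

instrs = [{0, 4, 5, 6, 7}, {1, 6, 7}, {2, 4, 5}, {3, 5, 7}]
print(width([{i} for i in range(8)], instrs))              # 8: all singletons
print(width([{0, 1, 2, 3}, {4}, {5}, {6}, {7}], instrs))   # 3 + 4 = 7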

Reduction: G&J say to use 3DM, but the paper I found by Robertson uses CNF-SAT.  We assume that no clause has both a literal and its negation (because if it did the clause is automatically true), and also we never have a literal appear both with a variable and its negation in different clauses (we can add extra variables to remove this situation).

Suppose our formula has clauses T1..Tp and variables x1..xk.  Each of these will be its own microcommand.

The microinstructions will be:

  • An instruction {xi, xj} for all i < j.
  • An instruction {xi, Tj} if xi (or its negation) is not in clause Tj
  • An instruction {Ti, Tj} if those 2 clauses do not share any literals (they might share variables)

Define a maximal compatibility class (MCC) as a set of mutually compatible instructions that cannot add any other instruction without breaking compatibility.  Our job is to make these classes (the bigger the classes, the more items we can put in each subset)

We can make an MCC Ri out of xi and all of the Tj‘s that have xi in the clause.  We can make a second MCC Si out of xi and all of the Tj‘s that have ~xi in the clause.  Our goal is to use just k of these classes (one for each variable) to cover all of the instructions.

So, suppose the formula is satisfiable.  Then for each variable that is set to true in the satisfying arrangement, we use the Ri class of the variable, and for each variable that is set to false, we use the Si class for that variable.  Since we have chosen either Ri or Si for each variable, each xi is covered.  Since the formula is satisfiable, each Tj is also covered by whatever variable makes that clause true.

In the other direction, if we have exactly k R’s and S’s and cover all of the T instructions, then the choice of R or S for each variable will give a consistent way to satisfy each clause.

We are not quite done, because of the requirement to use the log formula to measure the size of the sets, instead of just minimizing the number of the sets.  The paper goes into the algebraic details of manipulating these sets to satisfy those requirements.

Difficulty: 8. The log formula stuff is almost longer than the rest of the proof, which is annoying.  Other than that, the reduction is relatively straightforward, I think.

Ensemble Computation

This next one is done as an example of “Local Replacement” in the main part of the G&J book, but it’s pretty complicated, so I figured it was worth a post.

The problem: Ensemble Computation.  This is problem PO9 in the appendix and is done on pages 66-68 of G&J.

The description: Given a set A, a collection C of subsets of A, and a positive integer J.  Can we find a sequence z1..zJ of at most J elements, where:

  • Each zi is formed as the union of 2 disjoint sets xi and yi
  • Each xi and yi is either (the set containing) a single element of A, or a zk for some k < i
  • Every set in C appears as a zi someplace?

Example: G&J say that this problem comes out of a desire to minimize the number of multiplications needed to compute a set of products- the idea being that the union of 2 sets works like the multiplication of the sets with those terms.  So let’s suppose A = {a,b,c,d,e}, and C = {ab, bcd, ce}

(Really C should be a collection of sets like {a,b}, but I think that thinking of it as multiplication is helpful)

Then a sequence that might get everything in C might be:

  • z1 = ab
  • z2 = bc
  • z3 = bcd
  • z4 = ce

..which is the best we can do.  But now imagine that the second element of C was “abc” instead of “bcd”.  Then we can make the sequence:

  • z1 = ab
  • z2 = abc
  • z3 = ce

.. Using the z1 result as an operand to create our z2, enabling us to do it in fewer steps.
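Here is a quick Python checker for candidate sequences (a sketch; sets stand in for the products):

def valid_sequence(seq, A, C):
    # seq gives each z as its two operands; each operand must be a
    # singleton from A or an earlier z, and the operands must be disjoint.
    built = []
    for x, y in seq:
        ok = lambda s: (len(s) == 1 and next(iter(s)) in A) or s in built
        if not (ok(x) and ok(y)) or x & y:
            return False
        built.append(x | y)
    return all(c in built for c in C)      # every member of C must appear

f = frozenset
A = set('abcde')
C = [f('ab'), f('abc'), f('ce')]
seq = [(f('a'), f('b')), (f('ab'), f('c')), (f('c'), f('e'))]
print(valid_sequence(seq, A, C))   # True: three unions suffice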

Reduction: G&J use Vertex Cover.  So we’re given a graph G=(V,E) and an integer K.  Our set A will be the set of all vertices in V, plus a new element a0.  For each edge (u,v) in E, we will create a set {a0, u, v} in C.  Our bound J will be set to K + |E|.

Suppose G has a vertex cover of size K (we’ll call them v1 .. vk).  Then we know that each edge in E can be built using at least one of the VC vertices.  So the following sequence will work and be of size J:

  • z1 through zk are formed by unioning a0 with the VC vertex vi (i=1..k)
  • Each of the remaining unions will make an element of C (based on an edge in E) by unioning the z set that has the VC vertex of the edge with the element of A for the other endpoint of the edge.

In the other direction, suppose we have a sequence of unions z1..zj that solves the Ensemble Computation problem.  There may be multiple possible solutions; let’s look at a shortest sequence, and among those, one with the fewest operations that union 2 elements of V together (rather than using a0).  It turns out that since we have a minimum-length sequence, the only reason to union 2 elements u and v from V together would be because (u,v) is an edge in E, and we want to union in a0 later to make the set {a0, u, v} in C.  But then we can rearrange the order and union a0 with u first, and union that result with v to get the same element of C in the same number of steps.  So this means that our sequence has 2 kinds of operations:

  1. Unions of some vertex with a0
  2. Unions of one of the above sets with a vertex to make an element in C.

We know that there are |E| of the second kind of union (because there are |E| elements in C), and so there must be K of the first kind of union.  So the set of vertices we union directly with a0 has size K, and since one of them gets combined into each set in C (and thus into each edge in E), it must be a vertex cover.

Difficulty: 6. This is one of those “the description is harder than the actual problem” problems.  All of the talk of sequences and unions and things looks scary, but the actual problem isn’t that bad.  As for the actual reduction, I think it would be a good hard homework problem.  One trap I see is that the need for a0 might not be obvious (it’s there to force us to “choose” the VC vertices and commit them to be a step in our sequence).  So maybe that needs to be a hint.

Code Generation With Unfixed Variable Locations

Sorry for the long break.  I’m hoping to use this summer to get most of the rest of the way through this book.

The Problem: Code Generation With Unfixed Variable Locations.  This is problem PO8 in the appendix.

The Description: Given a program (a sequence of instructions I1..In), a set V of variables, a “short address span” B, and a desired program size M.  We want to assign addresses to both the instructions of the program and the variables.  If an instruction accesses a variable whose address is within B of the instruction, we can represent that instruction in one byte (using the “short address”).  Otherwise, the instruction is represented in two bytes (using the “long address”).  Can we place all instructions and variables in M addresses or fewer?

Example: Suppose I have the following instructions (A, B, and C are variables, “ref” just means the instruction refers to that variable)

ref A
ref A
ref B
ref C

If B=1 (so all short references have to be within one address of the variable), we can place all instructions and variables in 7 addresses, for example:

ref A
A
ref A
ref B
B
C
ref C

If instead that second A reference was at the end of the sequence of instructions:

ref A
ref B
ref C
ref A

..then (because we can’t change the order of the instructions in the program), we would have to make one of the references to A long, for example:

ref A
A
ref B 
B
C
ref C
ref A (long)

That second instruction that references A is a long reference, so this program needs 8 bytes of address space.
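Here is a sketch of a checker that, given a layout (instructions and variables in address order), computes the total size under the short/long rule.  Everything takes one byte except long references, which take two, matching the example above; since lengths and distances depend on each other, it iterates to a fixed point, only ever lengthening references:

def layout_size(layout, B):
    # layout: list of ('var', name) and ('ref', name) items in order.
    is_long = [False] * len(layout)
    while True:
        addr, pos, a = {}, [], 0
        for i, (kind, name) in enumerate(layout):
            pos.append(a)
            if kind == 'var':
                addr[name] = a
            a += 2 if is_long[i] else 1
        changed = False
        for i, (kind, name) in enumerate(layout):
            if kind == 'ref' and not is_long[i] and abs(pos[i] - addr[name]) > B:
                is_long[i] = changed = True    # out of range: go long
        if not changed:
            return a

layout = [('ref', 'A'), ('var', 'A'), ('ref', 'B'), ('var', 'B'),
          ('var', 'C'), ('ref', 'C'), ('ref', 'A')]
print(layout_size(layout, 1))   # 8: the final ref to A must be long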

Reduction: The paper by Robertson uses 3SAT, so let’s start with a 3SAT instance with n variables and m clauses; each clause Cj contains literals Aj1, Aj2, and Aj3.  (Having just 1 or 2 literals in a clause is OK too.)

Robertson notes that for any two literals V and W, if you have V and ~W appear as one (complete, two-element) clause, and also have ~V and W appear as a different clause, then V and W must both have the same truth value in order to satisfy both clauses.  This gives us a way to add new variables to the formula whose truth values must track some existing variable.  Specifically, if some variable V shows up in our formula more than 5 times, we can create a new variable W to replace three of those occurrences of V, and add two clauses to the end (the clauses (V ∨ ~W) and (~V ∨ W)).  This forces our new W to have the same truth value as V in any satisfying assignment, and reduces the number of occurrences of V.  We repeat this process until no variable appears more than 5 times in the formula.  So now we can say that each variable Vi appears in clauses Ci1 through Ci5 (or fewer, if the variable occurs fewer than 5 times).

The program we will build will consist of “modules” that can be moved independently.  Each variable will have two modules (“t” and “f” modules), and each clause will have one.  Here is a picture of a variable “t” module:

“l” is the paper’s version of B: the maximum distance at which we can still use a short reference to a variable.

What we’re seeing here is:

  • A “slot” we can put a variable (or more than one) into
  • 16 instructions that refer to a new variable ui
  • 3 instructions that reference each of the variables corresponding to the (at most 5) literals and clauses the variable can appear in.

The “f” module is similar, except that the definition of τr is negated (it’s the positive variable if ~Vi occurs as a literal).  The new variable ui appears in the t and f modules for variable i and nowhere else, so the only way references to it can be short is if it is placed into the “slot” in either the “t” module (meaning variable i is set to true) or the “f” module (meaning variable i is set to false).  If we put ui in the slot of (say) the “f” module, then we have enough room to also place the variables corresponding to the literals in the same block, making everything short.  But by putting the ui variable in the “f” module, all of the references to it from the “t” module will be long (adding 16 extra bytes of address space), making all of the literals too far away from the slot for the slot to be useful in that module.

We also have a module for the clause (the “c”-module):

Notice that the versions of the literals that actually appear in the clause are to the right of the slot.  These α variables are the same ones we had in the variable modules (so each α variable shows up once in a “c” module, but 3 times each in a “t” or “f” module).  Since there are more references to the α variables in our variable modules, the optimal placement should be to place those variables in the slot of the “t” or “f” modules (not in the “c” module).  The “remaining” α variables (the negations of the ones we put into the slot in either the “t” or “f” modules and which it didn’t help us to put in the slot of the other module) can go into the slot here and make one of the corresponding β references short.

We do have that extra γj cell to play around with, and here is where either the paper has a mistake, or I lose the ability to follow.  What is supposed to happen is that if we fill the slot with a variable that is referred to by one of the β variables, then that reference will be short, and the reference from γ to δ will be within the range of a short reference.  That’s fine, but the paper wants this to be the α variable that we did not place in the “t” or “f” module.  But if placing the α in the “t” module meant setting the variable to true, then what is left over is the negation of that literal, which feels like we are doing things backward: if we want to set the variable to true, we should have put ui in the “f” module, so that the literal that can be placed in our clause module corresponds to the true version of the literal.

I’m not sure it matters much to the reduction.  If I’m right, it just means we should have labeled the α variables the negations of what they really are.  But it does worry me.

The proof finishes with deciding values of B and M to make it all work.  B ends up being 31 (it comes from the distance we need to separate the 16 ui instructions from the τ instructions), and M ends up being 16n plus 4 times the total number of literals in the formula.

Difficulty: 7.  This has a lot of pieces, but (except for that end part) I think it’s pretty feasible to see how all of the pieces fit together.


Code Generation With Address Expressions

For this problem, I think it is easier to see what is happening if we use a problem definition that uses actual instructions (instead of the more abstract instructions G&J use in their definition).  So I am taking this problem description from the paper by Szymanski that has the reduction.

The problem: Code Generation With Address Expressions.  This is problem PO7 in the appendix.

The description: The PDP-11 has the following instructions:

  • Unconditional branch (“br”), which is two bytes long and can only be used if the branch target is within 256 bytes or so of the branch instruction (technically 256 larger or 254 smaller).
  • Unconditional Jump (“jmp”), which works similarly to “br”, but is 4 bytes long and has no limit to the distance to the target.
  • Conditional branches (“beq” and “bne”), which like “br” are two bytes long and require the target to be within a fixed distance of the instruction
  • Extended conditional branches (“jbr”, “jeq”, and “jne”).  These instructions are either translated into “br”, “beq”, and “bne” if the targets are close enough, or they are treated as jmp instructions or conditional branches to a jmp instruction if the destination is too far.

Given a program with a series of these extended conditional branch instructions, and a goal size s, can we implement the program (by replacing the extended conditional statements with branches and jumps as appropriate) in s bytes or less?

Example: First, let’s look at how these jumps can happen.

Suppose I have:

    jbr  L1
    (some number of bytes of instructions without
     branches or jumps)
L1:

.. if the number of bytes in between is small, I can replace that with:

    br L1
    (the instructions just like before)
L1:

But if the size of the intermediate instructions is too large to allow a branch, we need to do:

     jmp L1
     (the instructions just like before)
L1:

Since a jmp is 4 bytes long and a br is 2 bytes long, the second option leads to a larger number of bytes at the end.

Here’s another example from the paper:

X: jne Y
   252 bytes of instructions
   jbr X
Y:

If we make the jbr instruction into a jump, then there are 256 bytes (plus the length of the jne instruction) between the jne instruction and Y, so the jne will also have to be a jump.  But if we make the jbr instruction into a br, then there are 254 bytes of instructions between X and Y (so the distance between X and Y is 256 bytes), and the jne can become a branch as well, meaning the jbr was correctly used as a branch (if the jne were a jump, the branch back to X would have covered too much space).
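The circularity is the interesting part: both “everything short” and “everything long” are self-consistent here.  Here is a Python sketch that checks one long/short choice on this example, using the post’s byte counts and limits (the real PDP-11 offset rules differ slightly, and I am taking a long jne to be a conditional branch around a 4-byte jmp):

def consistent(jne_long, jbr_long):
    # X: jne Y / 252 bytes of straight-line code / jbr X / Y:
    jne = 6 if jne_long else 2     # long jne: 2-byte branch + 4-byte jmp
    jbr = 4 if jbr_long else 2     # long jbr: a plain jmp
    fwd_ok = jne_long or (jne + 252 + jbr <= 256)   # jne forward to Y
    back_ok = jbr_long or (jne + 252 <= 254)        # jbr back to X
    return fwd_ok and back_ok

for jne_l in (False, True):
    for jbr_l in (False, True):
        print(jne_l, jbr_l, consistent(jne_l, jbr_l))
# Only (short, short) and (long, long) come out consistent.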

Reduction: Szymanski uses 3SAT.  So we start with a 3SAT instance with n variables and m clauses.  For each variable xi, we add the following lines to our program:

Yi:  jbr
     254 bytes of non-branching or jumping code
Zi:

(I’m a little confused by this because he doesn’t list a destination for the jbr here.  I think we’re supposed to go to Zi, so that we can potentially replace the jbr with a br)

For every clause Cj = {lj,1, lj,2, lj,3} we add the following lines to our program:

Aj:  
    246 bytes of non-branching or jumping code
    jbr Tj1
    jbr Tj2
    jbr Tj3
Bj:

..where if lj,k is an un-negated version of variable xi, Tjk is an address calculated as the current address +Zi-Yi. If lj,k is a negated version of variable xi, Tjk is calculated as the current address +Zi-Yi-512.

We also add to the end of every clause n+3m+1 copies of a jbr statement to a line Bj-Aj bytes away from the current line.

The total length of this program is 256n + 254m + 2nm + 6m² + 2q, where q is the number of times we need to use a long jump instruction.  We have n+3m+m(n+3m+1) jbr instructions to deal with.  Our goal size s is 258n + 280m + 2nm + 6m², which we can only reach if we have at most n+3m jmp instructions (the rest being replaced by br’s).

Each of the Yi jumps can either be replaced by a “br” instruction (which we will interpret as setting variable i to true), making the distance between Zi and Yi 256 bytes, or by a jmp instruction (which we will interpret as setting variable i to false), making the distance between Zi and Yi 258 bytes.

The distance between Aj and Bj in each clause piece is 246 bytes, plus 2 bytes for each br we use or 4 bytes for each jmp we use.  So the total distance is 252, 254, 256, or 258 bytes.  Notice that we can only do a short translation if the corresponding literal is true, meaning that there are 256 or fewer bytes in the clause fragment if and only if the clause can be satisfied.

So, if the original formula is satisfiable, we will be able to find a way to satisfy each clause j to make the distance between the Aj and Bj labels 256 bytes or less.  So all of the “extra” jbr instructions at the end of each clause can be given short translations, so we will have our total program be of size s or less.

If the original formula is unsatisfiable, then no matter what we decide to do with our variables, there will be at least one clause where we have to give the long (258 byte) distance between Aj and Bj (this is the clause that is unsatisfied by whatever variable assignment we choose).  This means that the extra jbr Bj-Aj instructions at the end of that clause must turn into jmp instructions that are 4 bytes long each, making the overall size of the program too long.

Difficulty: 7.  It takes a while to understand the distances, and one of the consequences of using the real PDP-11 machine is the difference in distance between forward jumps that can be turned into branches (256 bytes) and backward ones (252 bytes), which makes the arithmetic and algebra a bit harder than I’d like.  But I think this is a neat reduction.

Code Generation for Parallel Assignments

The problem: Code Generation for Parallel Assignments.  This is problem PO6 in the appendix.

The description: Given a set of variables V= {v1 ..vn}, a set of assignments A= {A1..An}, where each assignment sets some variable vi to the result of some operation on a subset of parameters of V, and an integer K.

Can we order the assignments in such a way that at most K variables are referenced by assignments that come later in the ordering than the assignment to that variable?

Example: Sethi’s paper, which describes this problem, defines a parallel assignment as a statement like this:

i,x <- i+1, x+i

The idea is that both i and x get set to their values in the same assignment.

Sethi also defines a realization of an instruction, which is the ordering in which we do the assignments.  One realization of the above instruction is to compute i before x, and the other is to do x before i.

The cost of that realization is the number of variables that cause references to future instructions in the realization.  So, if we compute i before x, that realization has cost 1, because the x+i instruction has to wait for the change to i to happen (and we have to keep the “new” value of i around someplace to use in the x instruction).

Computing x first and i second has a cost of 0, because we don’t need to keep the value of x around for the second instruction.  Our job is to minimize this cost (or, technically, to see if the cost is <= K).
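A sketch of computing the cost of a realization in Python (my own encoding: each assignment is a (target, referenced-variables) pair, and a realization is an order over the assignments):

def realization_cost(order, assigns):
    # A variable costs 1 if its assignment runs before some assignment
    # that references it (its new value must be kept around for that
    # later use).  It counts once, however many later references exist.
    pos = {}
    for step, i in enumerate(order):
        pos[assigns[i][0]] = step
    costly = set()
    for step, i in enumerate(order):
        for v in assigns[i][1]:
            if v in pos and pos[v] < step:   # v was overwritten earlier
                costly.add(v)
    return len(costly)

assigns = [('i', {'i'}), ('x', {'x', 'i'})]   # i, x <- i+1, x+i
print(realization_cost([0, 1], assigns))      # 1: compute i first
print(realization_cost([1, 0], assigns))      # 0: compute x first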

What’s interesting to me, but not really discussed in the paper, is the fact that different realizations have different semantic meanings.  If we compute the i+1 first, x gets the updated value of i, but if we compute it second, then x gets the old value of i.  Maybe this kind of potential confusion is why we don’t typically see this kind of assignment operation in the present day.

Reduction: Sethi uses Feedback Node Set. So we start with a directed graph G.  We will map all of the vertices in the graph to variables.  For each vertex vi in the graph, if vi has edges connecting to w1..wk, we say that vi is computed from those vertices (the w1..wk are the parameters to create vi).  We will use the same K from the Feedback Node Set problem.

Suppose our graph has a set U of feedback vertices.  Sethi provides a separate proof about feedback vertex sets: a graph G has a feedback set U if and only if we can order the vertices in the graph y1..yn such that whenever there is a “backward” edge from yj to yi with j > i, yj is in U.

So, we use that ordering as our realization for our assignments. These “backward” edges correspond to “backward” references in our realization (which contribute to the cost).  Since each of these back edges must start with a vertex in U, and each one of these vertices costs 1 to our realization cost, this means that our realization cost is at most the size of U, which we know is K.

Difficulty: 6.  The sequence proof is a bit challenging to follow but other than that the hard part of this is understanding the problem.  One thing that tripped me up was that the cost of the realization was the number of variables that had backward references, not the number of references themselves.  (So, a variable that has 2 references to 2 different future instructions counts just once)