Appendix: Program Optimization

Inequivalence of Finite Memory Programs

Let’s see if I can build my own reduction for this one.

The problem: Inequivalence of Finite Memory Programs.  This is problem PO13 in the appendix.

The description: Given a finite set X of variables, a finite set Σ of constant symbols, and two programs P1 and P2, each a sequence of instructions I1..Im (the two programs may have different lengths), where each instruction is one of:

  • read xi
  • write vi
  • xi <- vj
  • if vi = vj then goto Ik
  • accept
  • halt (“halt” tends to mean “reject”)

Each xi is a variable from X.  Each vj is either a variable from X, a symbol from Σ, or a special “end marker” symbol $.  Ik is an instruction in the program.

Is there an initial assignment of values to the variables (the same assignment for both programs) on which P1 and P2 produce different outputs?

Example:

Here are two programs:

P1:

I1: X1 <- 2
I2: write X1
I3: accept

P2:

I1: if X1 = X1  goto I4
I2: X1 <- 3
I3: if X1=X1 goto I5
I4: X1 <- 2
I5: write X1
I6: accept

Despite the two programs being different, P2 will always go to I4 at the first if statement, and then both programs set X1 to 2, write the new value, and accept, so both programs will produce the same output on all inputs.

If we change P2’s first instruction to:

I1: if X1=1 goto I4

..then the programs will produce different outputs whenever X1 is not 1.

Reduction (from the paper): The paper by Jones and Muchnick uses Linear Bounded Automaton Acceptance.  They first show that a program in this format can be represented by a “Deterministic Generalized Sequential Machine” (DGSM), which works like a finite automaton that can output any string on a transition.  (I’m getting the description for this from my (old, 1979) copy of Hopcroft and Ullman, section 11.2.).

The DGSM has a state for each pair of:

  • Labels of read statements in the program
  • All possible strings of length |X| (one symbol per variable), where each symbol is either from Σ or is $.

In addition to all of those 2-tuples, we have a state qa (the accepting state) and a state qh (the halting state).

The start state is the tuple (instruction of the first read in the program, whatever is in the variables at that point of the program).

The paper goes on to claim that once we have this, the “transition function can be defined in a totally finite manner”, and then claims that once we do this, we can use another result that says that inequivalence for DGSM’s is NSPACE(log(n))-complete.

I’m a little worried that the number of states in our DGSM is exponential in the number of variables.  Maybe that doesn’t matter because this is just proving the existence of the DGSM?

Reduction (my try): So let’s see if I can try to create a reduction myself.

Suppose we have an LBA M, with its starting input x.  We’ll build a program in our language to simulate its behavior.  We’ll store one variable for each tape cell and an extra variable “xt” holding the (integer) position of the tape head.

We can simulate looking at a tape cell with:

if xt = 1 then goto an instruction that looks at x1
if xt = 2 then goto an instruction that looks at x2
.. and so on

This will increase the size of the program by a linear amount (because the tape cells only use a linear amount of space).
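To make the dispatch idea concrete, here is a quick Python sketch (mine, not from the paper) that just emits the chain of comparisons. I'm assuming the tape positions 1..n are available as constant symbols in Σ, and handle_label(i) is a hypothetical name for the label of the block of instructions that examines tape cell i.

def emit_head_dispatch(n, handle_label):
    """Emit the chain of comparisons that jumps to the code for the tape
    cell currently under the head (whose position is in variable xt)."""
    lines = []
    for i in range(1, n + 1):
        lines.append("if xt = " + str(i) + " then goto " + handle_label(i))
    return lines

# Example: a 4-cell tape, where cell i is examined starting at label READ_i.
print("\n".join(emit_head_dispatch(4, lambda i: "READ_" + str(i))))

Since there is one comparison per tape cell, this dispatch block really does only grow linearly with the tape.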

So now we can simulate our machines using these programs and if our programs are inequivalent, then the LBA’s we started with are as well.

I don’t know.  It feels too easy.

Difficulty: 8.  It might be less if you can provide a clear description of what the instructions in the language do.  The paper describes a configuration of a program as an instruction to perform next, plus two strings.  One string represents the contents of the variables.  The other string, I think, relates to what the read and write instructions do, but it is hard for me to see exactly what those instructions are doing.  So I may have missed something.

Inequivalence of Programs with Arrays, Inequivalence of Programs with Assignments

Both of these problems are from the same paper and use basically the same proof, so we’ll do them together.

The Problems: Inequivalence of Programs With Arrays (PO11 in the appendix) and Inequivalence of Programs with Assignments (PO12 in the appendix)

The Description (for Inequivalence of Programs With Assignments): 

Given two programs P and Q that access a common set of variables, S.  Each statement in the programs is of the form “x0 <- if x1 = x2 then x3 else x4“, where each xi is a variable from S.  Is there an initial assignment of values to the variables for which the two programs produce different outputs?

The Description (for Inequivalence of Programs With Arrays):

Same idea, but the possible statements in the program are:

  • Select an element B of an array α: A <- α.B
  • Update an element B of an array α: α.B <- A
  • “Operate” with an operation Φ: A <- Φ(B1, B2, .., Bk).  Φ comes from a given finite set of operations, each of which takes a fixed number of arguments (k) that we know ahead of time.

Example: The “outputs” of these programs are the final values of the variables.  They denote a “final” value by representing a program as a Directed Acyclic Graph, where the leaf nodes (of outdegree 0) are the variables with final values.  We will specify which variables we care about as output variables.

So for example, we might have the one-line program:

k <- if i=j then 2 else 3

Compared to the one-line program:

k <- if i=j then 3 else 2

If k is our output variable, then for all values of i and j both programs will produce different outputs.  If we change our second program to:

k <- if i=j then 2 else 2

…then if we assign i and j the same values, both programs will give the same output.

Reduction (for Inequivalence of Programs With Assignments): Downey and Sethi use 3SAT.  If we start with a formula with variables y1..yn, we will create a program with variables {T, F, A1..An, B1..Bn} (plus the working variables C and D used below).  Then we just write a program that checks whether all clauses are true.  “D” will be our output variable and will be updated for each clause.  We start it as true initially:

D <- if T = T then T else T // D is true so far

Then we check the first clause.  A variable “C” will be used for each clause.  It starts out false and will be set to true once the clause is satisfied:

C <- if F = F then F else F // C is false so far

Then for each of the (up to 3) literals in the clause, we check whether it has been made true.  We use Ai if the literal is yi (positive in the clause) and Bi if the literal is ~yi (negated in the clause).  So suppose our clause was (y1, ~y2, y4); we’d have the lines:

// does the first literal make the clause true?
C <- if A1 = T then T else C
// does the second literal make the clause true?
C <- if B2 = T then T else C
// does the third literal make the clause true?
C <- if A4 = T then T else C

C now is true if and only if the clause is satisfied, so let’s update D:

// D becomes false if we don't satisfy that clause
D <- if C = T then D else F

We then repeat that process for each clause.  At the end, we add checks saying that a variable and its negation can’t have the same value:

D <- if A1 = B1 then F else D

..repeated for each variable.

The final output of this program is D, which will be true if our inputs legally satisfied the formula, and false otherwise.

Our second program just outputs the constant “F”.  The programs will produce different outputs if and only if there is a way for the first program to produce a T, which happens when D is true, which happens when all clauses are true, which happens when the formula is satisfiable.
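Here is a small Python sketch (my own illustration of the construction above, not from the Downey and Sethi paper) that emits the first program.  A formula is given in a DIMACS-like form: each clause is a list of nonzero integers, where 3 means y3 and -3 means ~y3.

def emit_program(clauses):
    lines = ["D <- if T = T then T else T   // D is true so far"]
    for clause in clauses:
        lines.append("C <- if F = F then F else F   // C is false so far")
        for lit in clause:
            i = abs(lit)
            a_or_b = ("A" if lit > 0 else "B") + str(i)
            lines.append("C <- if " + a_or_b + " = T then T else C")
        lines.append("D <- if C = T then D else F   // D goes false if this clause was not satisfied")
    num_vars = max(abs(lit) for clause in clauses for lit in clause)
    for i in range(1, num_vars + 1):
        lines.append("D <- if A" + str(i) + " = B" + str(i) + " then F else D   // A" + str(i) + " and B" + str(i) + " must disagree")
    return lines

# Example: (y1 or ~y2 or y4) and (y2 or y3)
print("\n".join(emit_program([[1, -2, 4], [2, 3]])))

The emitted program is straight-line, with one block of statements per clause plus the consistency checks at the end, which is all the restricted statement form allows.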

Reduction (Inequivalence of Programs With Arrays): The same program works; we just need to replace the if-assignment statements with array operations.  We do this by using an array indexed by the values we are comparing, replacing the statement “E <- if A=B then C else D” with:

α.A <- D
α.B <- C
E <- α.A

..if A and B are the same, then we will fill that spot in the array with D then C, and get a C out.  Otherwise, the two assignments go to different locations, and E is set to what is in the array at position A, which is the value D.  Since we can go back to the previous reduction and replace all of its statements with 3 statements under the new rules, the reduction works in the same way.
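Here is a quick sanity check (mine) that the 3-statement array version agrees with the direct conditional assignment; variable values are treated as array indices, matching the discussion above.

def direct(a, b, c, d):
    return c if a == b else d        # E <- if A = B then C else D

def via_array(a, b, c, d):
    alpha = {}                       # the array, indexed by variable values
    alpha[a] = d                     # alpha.A <- D
    alpha[b] = c                     # alpha.B <- C (overwrites the same slot when A = B)
    return alpha[a]                  # E <- alpha.A

for a, b, c, d in [(1, 1, 7, 9), (1, 2, 7, 9), (5, 5, 0, 3), (5, 6, 0, 3)]:
    assert direct(a, b, c, d) == via_array(a, b, c, d)
print("the two versions agree on these test values")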

Difficulty: 6.  This isn’t a hard reduction to do; I think the hardest thing is to get students to think in terms of the restrictions these programs require.  There are no loops or anything (that would make this problem undecidable, at least in general), and representing variables as positions in an array is something we might be used to doing if all variables are integers, but since variables here are things like “B2” some work might need to be done to convince students that the mapping to integers for all variables is reasonable.  It’s not a thing we do as much with modern high-level languages as we might have done in the 1970s when the underlying machine was much more exposed.

Microcode Bit Optimization

The Problem: Microcode Bit Optimization.  This is problem PO10 in the appendix

The description: We have a set A of “microcommands”.  We take groups of these commands to create a set C of “microinstructions”.   We would like to partition our set A of commands into groups such that each microinstruction uses at most one command from each group. Given an integer K, can we make such a split of A (into groups A1..An) such that \sum_{i=1}^n \lceil \log_2(|A_i|+1) \rceil \leq K?

(I’ll note here that the definition in G&J requires this split into groups to be a partition into disjoint sets, while the paper we will be using allows the split to be a covering where a microcommand can appear in multiple Ai)

Example: The log part is there because we can represent each group as a binary number.  At one extreme, we put all of the microcommands into a single group.  Then a microinstruction can be represented as one binary number saying which microcommand it uses: 101 might mean microcommand #5.  This makes our microinstructions smaller, but each microinstruction can only represent one microcommand.

At the other extreme, we make every microcommand its own group, and a microinstruction becomes a binary string of which microcommands are being used. In this model, the instruction “10101” says that we are using the first, third, and fifth microcommands, but not any of the others.  This lets us have longer microinstructions that use several microcommands at once.

So the problem is asking: how should we group the microcommands so that the total number of bits needed to represent the microinstructions of a given program is as small as possible?  So, for example, suppose we had 8 microcommands and the following microinstructions (written as binary strings of which microcommands are used):

  • 1000 1111
  • 0100 0011
  • 0010 1100
  • 0001 0101

Notice that our 4 microinstructions never have 2 microcommands used from the 4 leftmost commands at the same time, so we can represent that chunk of the instruction as 2 bits (counting from 0-3, representing what command we are using).  We will need 4 bits to represent the rightmost 4 microcommands because all of those microcommands are in use in the first microinstruction.  But if we change the last 4 bits of the first instruction to 1011, then no instruction uses both the second and third commands out of those 4 bits, and we could represent those 2 commands using 1 bit (maybe a 0 means we use the second microcommand, and a 1 means we use the third one)
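To make the cost function concrete, here is a small Python sketch (mine): given microinstructions as bit strings and a proposed grouping of the command positions, it checks that no instruction uses two commands from the same group and adds up ⌈log2(|Ai|+1)⌉.  Note that the +1 in the formula reserves one extra code per group for “no command from this group used,” which is why the leftmost group comes out to 3 bits here rather than the 2 bits of the informal counting above.

from math import ceil, log2

def grouping_cost(instructions, groups):
    """instructions: bit strings; groups: lists of command positions (0-based)."""
    for instr in instructions:
        used = {i for i, bit in enumerate(instr) if bit == "1"}
        for group in groups:
            if len(used & set(group)) > 1:
                raise ValueError(instr + " uses two commands from the same group")
    return sum(ceil(log2(len(group) + 1)) for group in groups)

instructions = ["10001111", "01000011", "00101100", "00010101"]
# The 4 leftmost commands can share one group; the 4 rightmost cannot share at all.
groups = [[0, 1, 2, 3], [4], [5], [6], [7]]
print(grouping_cost(instructions, groups))   # 3 + 1 + 1 + 1 + 1 = 7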

Reduction: G&J say to use 3DM, but the paper I found by Robertson uses CNF-SAT.  We assume that no clause has both a literal and its negation (because if it did the clause is automatically true), and also we never have a literal appear both with a variable and its negation in different clauses (we can add extra variables to remove this situation).

Suppose our formula has clauses T1..Tp and variables x1..xk.  Each of these will be its own microcommand.

The microinstructions will be:

  • An instruction {xi, xj} for all i < j.
  • An instruction {xi, Tj} if xi (or its negation) is not in clause Tj
  • An instruction {Ti, Tj} if those 2 clauses do not share any literals (they might share variables)

Define a maximal compatibility class (MCC) as a set of mutually compatible microcommands (commands that never appear together in any microinstruction) to which we cannot add any other command without breaking compatibility.  Our job is to build these classes (the bigger the classes, the more commands we can put in each group).

We can make an MCC Ri out of xi and all of the Tj‘s that have xi in the clause.  We can make a second MCC Si out of xi and all of the Tj‘s that have ~xi in the clause.  Our goal is to use just k of these classes (one for each variable) to cover all of the microcommands.  (A small code sketch of these classes is below.)
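Here is a quick sketch (mine) of building the Ri and Si classes exactly as described above, just to make the construction concrete.  Clauses are sets of signed integers (3 for x3, -3 for ~x3), and clause names T1, T2, ... are strings.

def build_classes(clauses, num_vars):
    """clauses: list of sets of signed ints (3 means x3, -3 means ~x3)."""
    R, S = {}, {}
    for i in range(1, num_vars + 1):
        R[i] = {"x%d" % i} | {"T%d" % (j + 1) for j, c in enumerate(clauses) if i in c}
        S[i] = {"x%d" % i} | {"T%d" % (j + 1) for j, c in enumerate(clauses) if -i in c}
    return R, S

# Example: (x1 or ~x2) and (x2 or x3)
R, S = build_classes([{1, -2}, {2, 3}], 3)
print(R)   # e.g. {1: {'x1', 'T1'}, 2: {'x2', 'T2'}, 3: {'x3', 'T2'}}
print(S)   # e.g. {1: {'x1'}, 2: {'x2', 'T1'}, 3: {'x3'}}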

So, suppose the formula is satisfiable.  Then for all variables that are set to true in the satisfying assignment, we use the Ri class of the variable, and for all variables that are set to false, we use the Si class for that variable.  Since we have chosen either Ri or Si for each variable, each xi is covered.  Since the formula is satisfiable, each Tj is also covered by whatever variable makes that clause true.

In the other direction, if we use exactly k of the R’s and S’s and cover all of the Tj commands, then the choice of R or S for each variable gives a consistent way to satisfy each clause.

We are not quite done, because of the requirement to use the log formula to measure the size of the sets, instead of just minimizing the number of the sets.  The paper goes into the algebraic details of manipulating these sets to satisfy those requirements.

Difficulty: 8. The log formula stuff is almost longer than the rest of the proof, which is annoying.  Other than that, the reduction is relatively straightforward, I think.

Ensemble Computation

This next one is done as an example of “Local Replacement” in the main part of the G&J book, but it’s pretty complicated, so I figured it was worth a post.

The problem: Ensemble Computation.  This is problem PO9 in the appendix and is done on pages 66-68 of G&J.

The description: Given a set A, a collection C of subsets of A, and a positive integer J.  Can we find a sequence z1..zj of at most J sets, where:

  • Each zi is formed as the union of 2 disjoint sets xi and yi
  • Each xi and yi is either (the set containing) a single element of A, or a zk for some k < i
  • Every set in C appears as a zi someplace?

Example: G&J say that this problem comes out of a desire to minimize the number of multiplications needed to compute a set of products- the idea being that taking the union of 2 sets works like multiplying together the terms in those sets.  So let’s suppose A = {a,b,c,d,e}, and C = {ab, bcd, ce}

(Really C should be a collection of sets like {a,b}, but I think that thinking of it as multiplication is helpful)

Then a sequence that might get everything in C might be:

  • z1 = ab
  • z2 = bc
  • z3 = bcd
  • z4 = ce

..which is the best we can do.  But now imagine that the second element of C was “abc” instead of “bcd”.  Then we can make the sequence:

  • z1 = ab
  • z2 = abc
  • z3 = ce

.. Using the z1 result as an operand to create our z2, enabling us to do it in fewer steps.

Reduction: G&J use Vertex Cover.  So we’re given a graph G=(V,E) and an integer K.  Our set A will be the set of all vertices in V, plus a new element a0.  For each edge (u,v) in E, we will create a set {a0, u, v} in C.  Our bound J will be set to K + |E|.
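Just to show how mechanical the construction is, here is a tiny Python sketch of it (mine); vertices and elements are plain strings, with “a0” as the new element.

def vc_to_ensemble(vertices, edges, K):
    A = set(vertices) | {"a0"}
    C = [frozenset({"a0", u, v}) for (u, v) in edges]
    J = K + len(edges)
    return A, C, J

# Example: a triangle on {u, v, w} with K = 2.
print(vc_to_ensemble({"u", "v", "w"}, [("u", "v"), ("v", "w"), ("u", "w")], 2))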

Suppose G has a vertex cover of size K (we’ll call them v1 .. vk).  Then we know that each edge in E can be built using at least one of the VC vertices.  So the following sequence will work and be of size J:

  • z1 through zk are formed by unioning a0 with the VC vertex vi (i=1..k)
  • Each of the remaining unions will make an element of C (based on an edge in E) by unioning the z set that contains the VC vertex of that edge with the element of A corresponding to the other endpoint of the edge.

In the other direction, suppose we have a sequence of unions z1..zj that solves the Ensemble Computation problem.  There may be several such sequences, so let’s look at a shortest one, and among those, one with the fewest operations that union 2 elements of V together (rather than using a0).  It turns out that since we have a minimum-length sequence, the only reason to union 2 elements u and v from V together would be because (u,v) is an edge in E, and we want to union in a0 later to make the set {a0, u, v} in C.  But then we can rearrange the order: union a0 with u first, and union that result with v to get the same element of C in the same number of steps.  So this means that our sequence has 2 kinds of operations:

  1. Unions of some vertex with a0
  2. Unions of one of the above sets with a vertex to make an element in C.

We know that there are |E| of the second kind of union (because there are |E| elements in C), so there are at most K of the first kind of union.  The set of vertices we union directly with a0 therefore has size at most K, and since one of those unions is used to build each set in C (and thus contains an endpoint of each edge in E), those vertices form a vertex cover.

Difficulty: 6. This is one of those “the description is harder than the actual problem” problems.  All of the talk of sequences and unions and things looks scary, but the actual problem isn’t that bad.  As for the actual reduction, I think it would be a good hard homework problem.  One trap I see is that the need for a0 might not be obvious (it’s there to force us to “choose” the VC vertices and commit them to be a step in our sequence).  So maybe that needs to be a hint.

Code Generation With Unfixed Variable Locations

Sorry for the long break.  I’m hoping to use this summer to get most of the rest of the way through this book.

The Problem: Code Generation With Unfixed Variable Locations.  This is problem PO8 in the appendix.

The Description: Given a program (a sequence of instructions I1..In), a set V of variables, a “short address span” B, and a desired program size M.  We want to assign addresses to both the instructions of the program and the variables.  If an instruction accesses a variable whose address is within B of the instruction’s address, we can represent that instruction in one byte (using the “short address”).  Otherwise, the instruction is represented in two bytes (using the “long address”).  Can we place all instructions and variables in M addresses or fewer?

Example: Suppose I have the following instructions (A, B, and C are variables; “ref” just means the instruction refers to that variable):

ref A
ref A
ref B
ref C

If B=1 (so all short references have to be within one address of the variable), we can place all instructions and variables in 7 addresses, for example:

ref A
A
ref A
ref B
B
C
ref C

If instead that second A reference was at the end of the sequence of instructions:

ref A
ref B
ref C
ref A

..then (because we can’t change the order of the instructions in the program), we would have to make one of the references to A long, for example:

ref A
A
ref B 
B
C
ref C
ref A (long)

That second reference to A is a long reference, so this program needs 8 bytes of address space.
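Here is a little checker I wrote for layouts like the ones above (a sketch of mine, not from the paper).  It assumes a variable occupies one address, a short reference one address, a long reference two, and that “within B” is measured between the reference’s address and the variable’s address, which is my reading of the definition.

def layout_size(layout, B):
    """layout items are ("var", name) or ("ref", name, "short"/"long")."""
    addr, positions, refs = 0, {}, []
    for item in layout:
        if item[0] == "var":
            positions[item[1]] = addr
            addr += 1
        else:
            refs.append((addr, item[1], item[2]))
            addr += 1 if item[2] == "short" else 2
    for ref_addr, var, kind in refs:
        if kind == "short":
            assert abs(ref_addr - positions[var]) <= B, "short ref to " + var + " is too far"
    return addr

layout = [("ref", "A", "short"), ("var", "A"), ("ref", "B", "short"),
          ("var", "B"), ("var", "C"), ("ref", "C", "short"), ("ref", "A", "long")]
print(layout_size(layout, B=1))   # 8, matching the second example above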

Reduction: The paper by Robertson uses 3SAT, so let’s start with a 3SAT instance with n variables and m clauses, where each clause Cj contains literals Aj1, Aj2, and Aj3.  (Having just 1 or 2 literals in a clause is ok too.)

Robertson notes that for any two variables V and W, if the (complete, two-element) clause (V ∨ ~W) appears in the formula, and the clause (~V ∨ W) also appears, then V and W must have the same truth value in order to satisfy both clauses.  This gives us a way to add new variables to the formula that are forced to have the same truth value as some existing variable.  Specifically, if some variable V shows up in our formula more than 5 times, we can create a new variable W to replace three of those occurrences of V, and add the two clauses (V ∨ ~W) and (~V ∨ W) to the end.  This forces our new W to have the same truth value as V in any satisfying assignment, and reduces the number of occurrences of V.  We repeat this process until no variable appears more than 5 times in the formula.  So now we can say that each variable Vi appears in clauses Ci1 through Ci5 (or fewer, if the variable occurs fewer than 5 times).

The program we will build will consist of “modules” that can be moved independently.  Each variable will have two modules (“t” and “f” modules), and each clause will have one.  Here is a picture of a variable “t” module:

“l” is the paper’s version of B- the maximum distance at which a reference to a variable can still be short.

What we’re seeing here is:

  • A “slot” we can put a variable (or more than one) into
  • 16 instructions that refer to a new variable ui
  • 3 instructions that reference each of the variables corresponding to the (at most 5) literals and clauses the variable can appear in.

The “f” module is similar except that the definition of τr is negated (it’s the positive variable if ~Vi occurs as a literal).  The new variable ui appears in the t and f modules for variable i and nowhere else, so the only way references to it can be short is if it is placed into the “slot” in either the “t” module (meaning variable i is set to true) or the “f” module (meaning variable i is set to false).  If we put ui in the slot of (say) the “f” module, then we have enough room to also place the variables corresponding to the literals in the same block, making everything short.  But by putting the ui variable in the “f” module, all of the references to it from the “t” module will be long (adding 16 extra bytes of address space), making all of the literals too far away from the slot for the slot to be useful in that module.

We also have a module for the clause (the “c”-module):

Notice that the versions of the literals that actually appear in the clause are to the right of the slot.  These α variables are the same ones we had in the variable modules (so each α variable shows up once in a “c” module, but 3 times each in a “t” or “f” module).  Since there are more references to the α variables in our variable modules, the optimal placement should be to place those variables in the slot of the “t” or “f” modules (not in the “c” module).  The “remaining” α variables (the negations of the ones we put into the slot in either the “t” or “f” modules and which it didn’t help us to put in the slot of the other module) can go into the slot here and make one of the corresponding β references short.

We do have that extra γj cell to play around with, and here is where either the paper has a mistake, or I lose the ability to follow.  What is supposed to happen is that if we fill the slot with a variable that is referred to by one of the β variables, then that reference will be short, and the reference from γ to δ will be within the range of a short reference.  That’s fine, but the paper wants this to be the α variable that we did not place in the “t” or “f” module.  But if placing the α in the “t” module meant setting the variable to true, then what is left over is the negation of that literal, which feels like we are doing things backward: if we want to set the variable to true, we should have put ui in the “f” module, so that the literal that can be placed in our clause module corresponds to the true version of the literal.

I’m not sure it matters much to the reduction.  If I’m right, it just means we should have labeled the α variables the negations of what they really are.  But it does worry me.

The proof finishes with deciding values of B and M to make it all work.  B ends up being 31 (it comes from the distance we need to separate the 16 ui instructions from the τ instructions), and M ends up being 16n plus 4 times the total number of literals in the formula.

Difficulty: 7.  This has a lot of pieces, but (except for that end part) I think it’s pretty feasible to see how all of the pieces fit together.


Code Generation With Address Expressions

For this problem, I think it is easier to see what is happening if we use a problem definition that uses actual instructions (instead of the more abstract instructions G&J use in their definition).  So I am taking this problem description from the paper by Szymanski that has the reduction.

The problem: Code Generation With Address Expressions.  This is problem PO7 in the appendix.

The description: The PDP-11 has the following instructions:

  • Unconditional Branch (“br”) is two bytes long, and can only be used if the branch target is within 256 bytes or so of the branch instruction (technically 256 larger or 254 smaller).
  • Unconditional Jump (“jmp”), which works similarly to “br”, but is 4 bytes long and has no limit to the distance to the target.
  • Conditional branches (“beq” and “bne”), which like “br” are two bytes long and require the target to be within a fixed distance of the instruction
  • Extended conditional branches (“jbr”, “jeq”, and “jne”).  These instructions are either translated into “br”, “beq”, and “bne” if the targets are close enough, or they are treated as jmp instructions or conditional branches to a jmp instruction if the destination is too far.

Given a program with a series of these extended conditional branch instructions, and a goal size s, can we implement the program (by replacing the extended conditional statements with branches and jumps as appropriate) in s bytes or less?

Example: First, let’s look at how these jumps can happen.

Suppose I have:

    jbr  L1
    (some number of bytes of instructions without
     branches or jumps)
L1:

.. if the number of bytes in between is small, I can replace that with:

    br L1
    (the instructions just like before)
L1:

But if the size of the intermediate instructions is too large to allow a branch, we need to do:

     jmp L1
     (the instructions just like before)
L1:

Since a jmp is 4 bytes long and a br is 2 bytes long, the second option leads to a larger number of bytes at the end.

Here’s another example from the paper:

X: jne Y
   252 bytes of instructions
   jbr X
Y:

If we make the jbr instruction into a jump, then there are 256 bytes (plus the length of the jne instruction) between the jne instruction and Y, so the jne will also have to be a jump.  But if we make the jbr instruction into a br, then there are 254 bytes of instructions between X and Y, so the jne can become a branch as well (so the distance between X and Y is 256 bytes), meaning that the jbr was correctly used as a branch (if the jne were a jump, the jump back to X would have covered too much space).
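Here is a toy Python sketch (mine) of the mutual dependence in that example.  The exact PDP-11 branch limits are fiddly (and quoted slightly differently in different places), so FORWARD_LIMIT and BACKWARD_LIMIT below are stand-in values of my own, just to show the shape of the calculation.

FORWARD_LIMIT = 254    # assumed limit for a short forward branch (my stand-in value)
BACKWARD_LIMIT = 256   # assumed limit for a short backward branch (my stand-in value)
FILLER = 252           # bytes of straight-line code between the jne and the jbr

def program_size(jbr_is_branch):
    jne_to_Y = FILLER + (2 if jbr_is_branch else 4)      # forward distance the jne must cover
    jne_is_branch = jne_to_Y <= FORWARD_LIMIT
    if jbr_is_branch:                                     # the br back to X must also be in range
        jbr_to_X = (2 if jne_is_branch else 4) + FILLER
        assert jbr_to_X <= BACKWARD_LIMIT, "br back to X would be out of range"
    return (2 if jne_is_branch else 4) + FILLER + (2 if jbr_is_branch else 4)

print(program_size(jbr_is_branch=True))    # 256: both instructions stay short
print(program_size(jbr_is_branch=False))   # 260: the long jbr forces the jne to be long too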

Reduction: Szymanski uses 3SAT.  So we start with a 3SAT instance with n variables and m clauses.  For each variable xi, we add the following lines to our program:

Yi:  jbr
     254 bytes of non-branching or jumping code
Zi:

(I’m a little confused by this because he doesn’t list a destination for the jbr here.  I think we’re supposed to go to Zi, so that we can potentially replace the jbr with a br)

For every clause Cj = {lj,1, lj,2, lj,3} we add the following lines to our program:

Aj:
    246 lines of non-branching or jumping code
    jbr Tj1
    jbr Tj2
    jbr Tj3
Bj:

..where if lj,k is an un-negated version of variable xi, Tjk is an address calculated as the current address + (Zi - Yi). If lj,k is a negated version of variable xi, Tjk is calculated as the current address + (Zi - Yi) - 512.

We also add to the end of every clause n+3m+1 copies of a jbr statement to a line Bj-Aj bytes away from the current line.

The total length of this program is 256n + 254m + 2nm + 6m² + 2q, where q is the number of times we need to use a long jump instruction.  We have n + 3m + m(n + 3m + 1) jbr instructions to deal with.  Our goal size s is 258n + 280m + 2nm + 6m², which we can only reach if we have at most n + 3m jmp instructions (the rest being replaced by br’s).

Each of the Yi jumps can either be replaced by a “br” instruction (which we will interpret as setting variable i to true), making the distance between Zi and Yi 256 bytes, or by a jmp instruction (which we will interpret as setting variable i to false), making the distance between Zi and Yi 258 bytes.

The distance between each Aj and Bj in the clause pieces is 246 bytes, plus 2 bytes for each br we use or 4 bytes for each jmp we use.  So the total distance is 252, 254, 256, or 258 bytes.  Notice that we can only give a jbr a short translation if the corresponding literal is true, meaning that there are 256 or fewer bytes in the clause fragment if and only if the clause can be satisfied.

So, if the original formula is satisfiable, we can choose the variable translations so that each clause j has the distance between the Aj and Bj labels at 256 bytes or less.  All of the “extra” jbr instructions at the end of each clause can then be given short translations, so our total program will be of size s or less.

If the original formula is unsatisfiable, then no matter what we decide to do with our variables, there will be at least one clause where we have to give the long (258 byte) distance between Aj and Bj (this is the clause that is unsatisfied by whatever variable assignment we choose).  This means that the extra jbr Bj-Aj instructions at the end of that clause must turn into jmp instructions that are 4 bytes long each, making the overall size of the program too long.

Difficulty: 7.  It takes a while to understand the distances, and one of the consequences of using the real PDP-11 machine is the difference in distance between forward jumps that can be turned into branches (256 bytes) and backward ones (252 bytes), which makes the arithmetic and algebra a bit harder than I’d like.  But I think this is a neat reduction.

Code Generation for Parallel Assignments

The problem: Code Generation for Parallel Assignments.  This is problem PO6 in the appendix.

The description: Given a set of variables V = {v1..vn}, a set of assignments A = {A1..An}, where each assignment sets some variable vi to the result of some operation on a subset of the variables in V, and an integer K.

Can we order the assignments in such a way that at most K of them reference a variable whose new value was computed earlier in the ordering?

Example: Sethi’s paper, which describes this problem, defines a parallel assignment as a statement like this:

i,x <- i+1, x+i

The idea is that both i and x get set to their values in the same assignment.

Sethi also defines a realization of an instruction, which is the ordering in which we do the assignments.  One realization of the above instruction is to compute i before x, and the other is to do x before i.

The cost of that realization is the number of assignments that contain a “backward” reference: a reference to a variable whose new value was computed earlier in the realization.  So, if we compute i before x, that realization has cost 1, because the x+i instruction refers back to i after it has been changed (and we have to keep the right value of i around someplace to use in the x instruction).

Computing x first and i second has a cost of 0, because we don’t need to keep the value of x around for the second instruction.  Our job is to minimize this cost (or, technically, to see if the cost is <= K).

What’s interesting to me, but not really discussed in the paper, is the fact that different realizations have different semantic meanings.  If we compute the i+1 first, x gets the updated value of i, but if we compute it second, then x gets the old value of i.  Maybe this kind of potential confusion is why we don’t typically see this kind of assignment operation in the present day.
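Here is a tiny Python sketch (mine) of the cost of a realization using the counting rule above: params maps each variable to the variables its right-hand side reads, order is the sequence in which we perform the assignments, and an assignment is charged once if it reads any variable computed earlier.

def realization_cost(params, order):
    """params[v] = the variables read by v's assignment; order = execution order."""
    position = {v: i for i, v in enumerate(order)}
    cost = 0
    for v in order:
        # charge v once if its assignment refers back to any variable computed earlier
        if any(position[w] < position[v] for w in params[v] if w in position):
            cost += 1
    return cost

# The i,x <- i+1, x+i example: i reads i, x reads x and i.
params = {"i": ["i"], "x": ["x", "i"]}
print(realization_cost(params, ["i", "x"]))   # 1: the x assignment reads i, which was computed first
print(realization_cost(params, ["x", "i"]))   # 0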

Reduction: Sethi uses Feedback Node Set. So we start with a directed graph G.  We will map all of the vertices in the graph to variables.  For each vertex vi in the graph, if vi has edges connecting to w1..wk, we say that vi is computed from those vertices (the w1..wk are the parameters to create vi).  We will use the same K from the Feedback Node Set problem.

Suppose our graph has a set U of feedback vertices.  Sethi provides a separate proof about feedback vertex sets: a graph G has a feedback set U if and only if we can order the vertices in the graph y1..yn such that whenever we have a “backward” edge from yj to yi (with j > i), the vertex yj is in U.

So, we use that ordering as our realization for our assignments. These “backward” edges correspond to “backward” references in our realization (which contribute to the cost).  Since each of these back edges must start with a vertex in U, and each one of these vertices costs 1 to our realization cost, this means that our realization cost is at most the size of U, which we know is K.

Difficulty: 6.  The sequence proof is a bit challenging to follow, but other than that the hard part of this is understanding the problem.  One thing that tripped me up was that the cost of the realization is the number of assignments that contain backward references, not the number of references themselves.  (So, an assignment that refers back to 2 different earlier-computed variables still counts just once.)

Code Generation With Unlimited Registers

The reason I found the easier reduction for the previous problem is that it was in the paper that discussed this similar problem.

The problem: Code Generation With Unlimited Registers.  This is problem PO5 in the appendix

The description: Given a directed, acyclic graph G=(V, A), where no vertex has an out-degree larger than 2.    Just like the previous problem, the leaves of this graph (with out-degree 0) are our starting values, and the roots of the graph (with in-degree 0) are the final values we need to compute.

The operations that are allowed are:

  • ri <- rj (copy a value between registers)
  • ri <- ri op rj  (replace a value in ri with the combination of ri and rj.  This is how we “compute” nodes in the graph with 2 children)

(The problem definition in G&J makes a big deal about labeling the (at most) 2 edges leaving a vertex v as “Left” and “Right”, and so the ri comes from the “Left” side, and the rj comes from the “Right” side.  But G&J also say (as a “private communication”) that this problem remains NP-Complete if we allow commutative instructions- in other words, ri <- rj op ri is also allowed.  So I don’t think it matters.)

Can we compute the root vertices of G in K or fewer instructions, if we allow any number of values to be stored in registers?

Example: Here’s the complicated graph from last time:

We can start by loading the 1,2,3, and 4 into registers 1-4.  Then we can do:

  • R1 <- R1 op R2 (compute a)
  • R2 <- R2 op R4 (compute d)
  • R3 <- R3 op R4 (compute c)
  • R2 <- R2 op R3 (compute f)
  • R1 <- R1 op R2 (compute g)
  • R5 <- 2 (another copy)
  • R6 <- 3 (another copy)
  • R5 <- R5 op R6 (computing b)
  • R3 <- R3 op  R5 (computing e)
  • R1 <- R1 op R3 (computing h)

The reason why this is hard, even with as many registers as we want, is because the operations are “accumulator” operations- they always put the result of the operation back into one of the operand registers, killing the old value.  So if we want to use that value more than once, we either need to reload it or be very careful about the order in which we compute vertices.

If we allowed 3-register operations like rk = ri op rj, then this is easily solvable by computing values “up the graph” in basically any bottom-up order you want, putting each vertex’s value into a new register.
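To see why, here is a quick sketch (mine) of that easy case: with 3-address operations we can just walk the DAG bottom-up and give every vertex its own register.  The “load” pseudo-instruction for leaves is my own notation, not part of the problem’s instruction set.

def three_address_code(dag):
    """dag maps each vertex to its list of children (operands); leaves map to []."""
    order, visited, code, reg = [], set(), [], {}
    def visit(v):
        if v in visited:
            return
        visited.add(v)
        for child in dag[v]:
            visit(child)
        order.append(v)            # children first, so this is a bottom-up order
    for v in dag:
        visit(v)
    for i, v in enumerate(order):
        reg[v] = "r" + str(i)      # every vertex gets its own register
        if dag[v]:
            code.append(reg[v] + " <- " + " op ".join(reg[c] for c in dag[v]))
        else:
            code.append(reg[v] + " <- load " + v)
    return code

# a = 1 op 2, b = a op 2 (the leaf 2 is shared between the two operations)
print("\n".join(three_address_code({"1": [], "2": [], "a": ["1", "2"], "b": ["a", "2"]})))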

Reduction: The paper by Aho, Johnson, and Ullman that we used last time also talks about this variant of the problem.  They say that it’s basically the same situation: you just need to load the leaf values into registers right away, and you get a situation that is “similar to” the one-register machine.  I think that the basic structure of the chains we are building still holds in this new situation.

Difficulty: The one-register problem is a 6 or a 7; this is probably a 5 or 6 once you have that.  If you are going to make the students walk through the whole reduction step by step in this new situation, it might be a bit harder.

Code Generation on a One-Register Machine

I’ve done a lot of traveling over the past month, so I’m sorry about missing posts.  Things should be back to normal now.

The problem: Code Generation on a One-Register Machine.  This is problem PO4 in the appendix.

The description: Given a directed acyclic graph G = (V,A), in which no vertex has an out-degree larger than 2, and a positive integer K.  The leaves of this graph (the vertices with out-degree 0) are our starting values, sitting in memory.  Can we compute the values in all root vertices  (with in-degree 0) in K instructions or less, if our only instructions are:

  • Load a value from memory into our only register.
  • Store a value from the register into memory
  • Do an operation combining a value in a register and a value in memory.  The operation must connect two children of a vertex in the graph, and the “result” is the parent vertex.  The result value replaces the original value in the register.

Example:  Here’s a simple graph:

Here, we can compute the “+” node by:

  • Loading 1 into the register.
  • Doing the + operation between the register and 2
  • Storing the + operation to memory (G&J’s definition of the problem says that the node is not computed until the value is stored)

We can compute the “-” node in another 3 instructions, and since the value of the “-” node is still in the register, compute the “*” node in 1 more instruction, and store it with our last instruction.

Here’s a more complicated graph:

To do this one, we will have to load the value in node 2 lots of times.  For example, here is the set of instructions I came up with to compute h:

  • load 1
  • op to create a (only 1 operand)
  • store a
  • load 4
  • op to create c (3 is in memory)
  • store c
  • load 2
  • op to create d (4 is in memory)
  • op to create f (c is in memory)
  • op to create g (a is in memory)
  • store g
  • load 2
  • op to create b (3 is in memory)
  •  op to create e (c is in memory)
  • op to create h (g is in memory)
  • store h

It’s possible that we can do this in fewer instructions, but hopefully, you can see why this problem is hard- knowing what value to keep in the register is tricky.

Reduction: G&J point out that the reduction is from a paper by Bruno and Sethi, which uses 3SAT to do the reduction.  The instance they build is pretty complicated.  I also came across a paper by Aho, Johnson, and Ullman, who extend the result of the Bruno and Sethi paper with a nice reduction from Feedback Vertex Set.  I think this reduction is easier to follow, so we’ll go with that.

So, we are given an instance of FVS: a directed graph G and an integer K.  We are looking for a set F of K vertices such that every cycle in G goes through some element of F.

We are going to build our Code Generation graph D as follows:

  • For every vertex v in G with outdegree d, build a “left chain” of d+1 vertices.  So if vertex a has 2 edges leaving it, we create 3 vertices a0, a1, and a2, where a2 connects to a1 and a1 connects to a0.
  • Each of the “0” vertices at the bottom of these chains connects to 2 distinct memory values (they will be the leaves of the code graph).
  • If vertex v has outdegree d, the d non-“0” vertices in v’s chain each connect to the “0” vertex of a different neighbor of v in G.  (There is a rough sketch of this construction in code below.)
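Here is that rough sketch (mine, not from the paper).  I am assuming the graph is given as an adjacency list of the neighbors each vertex is computed from, and that the t-th vertex of v’s chain is the one that connects to v’s t-th neighbor; the paper may wire the cross edges slightly differently.

def build_code_graph(G):
    """G maps each vertex to the list of its neighbors (the parameters used to compute it)."""
    D, leaves = {}, 0
    for v in G:                                   # the chain v0, v1, ..., vd
        for t in range(len(G[v]) + 1):
            D[v + str(t)] = [v + str(t - 1)] if t > 0 else []
    for v in G:
        for _ in range(2):                        # two fresh memory leaves under each "0" vertex
            leaf = "leaf" + str(leaves)
            leaves += 1
            D[leaf] = []
            D[v + "0"].append(leaf)
        for t, w in enumerate(G[v], start=1):     # cross edges into the neighbors' chains
            D[v + str(t)].append(w + "0")
    return D

# The 2-cycle a -> b -> a from a Feedback Vertex Set instance:
for node, children in build_code_graph({"a": ["b"], "b": ["a"]}).items():
    print(node, "->", children)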

Here is an example from the paper:

Notice that if we don’t have the edges between the chains, we can compute the entire chain with just 2 loads (of the leaves that start in memory).  So, the only loads needed to compute all of D happen in the leaves, or in some of the “level 1” vertices that are parents of the leaves.   If we have to re-load one of those vertices, it is because there is no optimal strategy to avoid loading it, which means it’s part of a cycle.

For example, look at the a and b chains in the picture above.  If we didn’t have any of the c or d vertices or edges in our graph, we could compute a1 and b1 without loading any vertex that is not a leaf: compute b0, b1, b2, then a0, then a1 (which uses a0 from the register and b0 from memory).  The reason we can do this is that while a1 depends on b0, none of the b vertices depend on anything in a, which gives us a chain to do first.  We need to reload a value when we have a circular dependency between chains (so there is no correct chain to do first).  That’s the relationship between the chains and the feedback vertex set.

This works in the other direction as well- if we are given the feedback vertex set in G, we can compute those vertices first in D, and then load them all once as needed to compute D.

The paper says that in the example graph, the node {d} by itself is a Feedback Vertex Set, and the optimal computation ordering is: d0, c0, c1, b0, b1, b2, a0, a1, d1.  That final d1 needs a re-load of d0.  The 1 extra load corresponds to the 1 vertex in our Feedback Set.

Difficulty: 6.  Maybe 7.  I think this is right at the limit of what a student can figure out, but I would also want a more rigorous proof about the connection between the extra loads and the feedback set, which is probably tricky to come up with.

Register Sufficiency For Loops

We finally get to the book problem that was the basis for our diversion.

The problem: Register Sufficiency For Loops.   This is problem PO3 in the appendix.

The description: Given a set of variables to be used within a loop of N instructions, and a set of K registers.  Each variable has a “lifetime” defined by the instruction in the loop where we start needing its value and the number of instructions for which we need that value.  We can assign two variables to the same register if their lifetimes do not overlap (keeping in mind that “overlap” includes wrapping around to the start of the next loop iteration).  Can we assign all variables to the K registers such that there is no overlap?

Example: I think it’s easier to understand the concept of a “lifetime” of a variable by showing an example of a loop:

B
C
B
C
A
C
A
C
A
C
B
C
B

In this loop, we will need one register to hold C (because we’re basically using it the whole time), but the other register can alternate between holding A and B.  It holds A in the middle of the loop, but it can hold B from the end of one iteration through the start of the next.

If I added an extra B in the middle of the A instructions, I would need 3 registers (because we can’t have 1 register holding A and B because the “middle” use of B would conflict with the need to hold A).

Reduction: While this is the main result for us, this problem is mentioned as an aside in the Garey, Johnson, Miller, and Papadimitriou paper we have been working with for the past few weeks.

The idea is that this problem is very similar to Arc Coloring.  They show the problems are similar, which is sort of like doing the reduction backwards (“Look, any loop can be represented as a set of arcs!”).  But if we start from an Arc Coloring instance, we can create a loop with one statement for each point in the circle, and each arc connects the first and last uses of a variable (so the arc spans the lifetime of the variable).

The idea now is that a coloring of the arc graph is a way to color the arcs so that no two arcs of the same color overlap at any point.  This is exactly the same as saying we can assign a register to each variable’s lifetime (the span between its 2 uses) so that the same register is never in use during the overlapping lifetimes of two variables assigned to it.
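Here is a small sketch (mine) of that translation: each arc becomes a variable, and its lifetime is the set of loop positions it covers, wrapping around the end of the loop if needed.  Two variables can share a register exactly when these sets are disjoint.

def arcs_to_lifetimes(n, arcs):
    """n = points on the circle (loop length); arcs = (first, last) pairs, points 0..n-1."""
    lifetimes = {}
    for idx, (first, last) in enumerate(arcs):
        points, p = set(), first
        while True:
            points.add(p)
            if p == last:
                break
            p = (p + 1) % n            # wrap around into the next loop iteration
        lifetimes["v" + str(idx)] = points
    return lifetimes

# Three arcs on a 5-point circle; the last one wraps around the loop boundary.
print(arcs_to_lifetimes(5, [(0, 2), (3, 4), (4, 1)]))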

Difficulty: The actual reduction here is a 4.  (I think it’s a little tricky to understand the idea of a lifetime of a variable, and how it can be represented as just 2 points in the circle)  But it may be more complicated than that to explain what an arc graph is, and does.