Tag Archives: Difficulty 4

Register Sufficiency For Loops

We finally get to the book problem that was the basis for our diversion.

The problem: Register Sufficiency For Loops.   This is problem PO3 in the appendix.

The description: Given a set of variables to be used within a loop of N instructions, and a set of K registers.  Each variable has a “lifetime,” defined by the instruction in the loop at which we start needing its value and the number of instructions we will need that value for.  We can assign two variables to the same register if their lifetimes do not overlap (keeping in mind that “overlap” includes wrapping around to the start of the next loop iteration).  Can we assign all of the variables to registers so that no two variables sharing a register have overlapping lifetimes?

Example: I think it’s easier to understand the concept of a “lifetime” of a variable by showing an example of a loop:

B
C
B
C
A
C
A
C
A
C
B
C
B

In this loop, we will need one register to hold C (because we’re basically using it the whole time), but the other register can alternate between holding A and B.  It holds A in the middle of the loop, but it can hold B from the end of one iteration through the start of the next.

If I added an extra B in the middle of the A instructions, I would need 3 registers: no single register could hold both A and B, because that “middle” use of B would conflict with the need to hold A.
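
To make the lifetime bookkeeping concrete, here is a small brute-force sketch in Python (my own illustration, not part of the problem statement or the reduction).  It represents each variable’s lifetime as a (start instruction, length) pair that may wrap around the end of the loop, and tries every assignment of variables to K registers, accepting if no two variables sharing a register are ever live at the same instruction.  The specific (start, length) numbers are just my reading of the example loop above.

from itertools import product

def lifetime_positions(start, length, n):
    # Positions (mod n) occupied by a variable that becomes live at
    # instruction `start` and stays live for `length` instructions,
    # wrapping around into the next loop iteration if necessary.
    return {(start + i) % n for i in range(length)}

def registers_suffice(lifetimes, n, k):
    # Brute force: `lifetimes` maps each variable to a (start, length) pair.
    # Returns True if the variables can be packed into k registers so that
    # no register ever holds two variables that are live at the same time.
    names = list(lifetimes)
    spans = {v: lifetime_positions(s, l, n) for v, (s, l) in lifetimes.items()}
    for assignment in product(range(k), repeat=len(names)):
        ok = True
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                if assignment[i] == assignment[j] and spans[names[i]] & spans[names[j]]:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            return True
    return False

# The 13-instruction loop above (0-indexed): B C B C A C A C A C B C B.
lifetimes = {
    "A": (4, 5),   # live at instructions 4 through 8
    "B": (10, 6),  # live at 10, 11, 12, then wraps to 0, 1, 2
    "C": (1, 11),  # live at instructions 1 through 11
}
print(registers_suffice(lifetimes, 13, 2))  # True: A and B can share a register
print(registers_suffice(lifetimes, 13, 1))  # False: C overlaps both A and B

This check is exponential in the number of variables, which is fine for a three-variable example but of course not a real algorithm for the problem.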

Reduction: While this is the main result for us, this problem is mentioned as an aside in the Garey, Johnson, Miller, and Papadimitriou paper we have been working with for the past few weeks.

The idea is that this problem is very similar to Arc Coloring.  The paper shows the correspondence in the other direction, which is sort of like doing the reduction backwards (“Look, any loop can be represented as a set of arcs!”).  But if we start from an Arc Coloring instance, we can create a loop with one statement for each point on the circle, where each arc connects the first and last uses of a variable (so the arc spans the lifetime of that variable).

The idea now is that a K-coloring of the arcs is a way to color them so that no two arcs of the same color overlap at any point.  This is exactly the same as saying we can assign each variable to one of K registers (with the arc spanning the variable’s first and last uses) so that the same register is never needed during the overlapping lifetimes of two variables assigned to it.
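
Here is a hedged sketch of that direction of the correspondence in Python.  Given circular arcs written as (first point, last point) pairs on a circle with one loop statement per point, each arc becomes a variable whose lifetime runs from the arc’s first point to its last, wrapping around if the arc does.  The representation of an Arc Coloring instance here is my own assumption about how one would be written down; the variable names v0, v1, … are made up for the illustration.

def arcs_to_loop(arcs, num_points):
    # Turn an Arc Coloring instance (a list of circular arcs, each given as a
    # (first point, last point) pair on a circle with num_points points) into
    # a Register Sufficiency For Loops instance: a loop with one statement per
    # point, and one variable per arc whose lifetime runs from the arc's first
    # point to its last, wrapping around the circle if necessary.
    lifetimes = {}
    for idx, (first, last) in enumerate(arcs):
        length = (last - first) % num_points + 1  # inclusive span, wraps if last < first
        lifetimes["v" + str(idx)] = (first, length)
    return lifetimes, num_points

arcs = [(4, 8), (10, 2), (1, 11)]  # the A, B, C lifetimes above, written as arcs
print(arcs_to_loop(arcs, 13))      # ({'v0': (4, 5), 'v1': (10, 6), 'v2': (1, 11)}, 13)

Feeding that instance to the registers_suffice checker from the previous sketch with K registers answers the K-coloring question for the arcs.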

Difficulty: The actual reduction here is a 4.  (I think it’s a little tricky to understand the idea of the lifetime of a variable, and how it can be represented as just two points on the circle.)  But it may be more complicated than that if you also have to explain what an arc graph is and how arc coloring works.

Context-Sensitive Language Membership

Here is another grammar/machine problem reduction that is pretty straightforward.

The problem: Context-Sensitive Language Membership.  This is problem AL20 in the appendix.

The description: Given a context-sensitive grammar G = (N, Σ, Π, S) and a string w from Σ*.  Is w in the language generated by G?

Example: Lots of these grammars are complicated because you need to do some crazy productions to make it worth using the context-sensitivity of the grammar (as opposed to just using a “regular” context-free grammar).  But Martin’s Theory of Computation book has a pretty nice one for the language a^n b^n c^n:

S -> SABC | ABC
BA -> AB
CA -> AC
CB -> BC
A -> a
aA -> aa
aB -> ab
bB -> bb
bC -> bc
cC -> cc

So, if we wanted the string aabbcc, we could do:

S -> SABC -> ABCABC -> ABACBC -> AABCBC -> AABBCC -> aABBCC -> aaBBCC -> aabBCC -> aabbCC -> aabbcC -> aabbcc

So if we were handed this grammar and any string of the form a^n b^n c^n, we should get a “yes” response.  If we had any other string, we would get a “no” response.
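
Because context-sensitive productions never shrink the sentential form, membership of a particular string w can be decided by searching only over forms of length at most |w|, which is a finite (if exponentially large) space.  Here is a small breadth-first sketch of that idea in Python using the productions above.  It’s only an illustration of why the question is decidable at all, not an efficient algorithm, and the encoding of the grammar as string-rewriting pairs is my own.

from collections import deque

# The productions of the grammar above, written as (left side, right side) pairs.
PRODUCTIONS = [
    ("S", "SABC"), ("S", "ABC"),
    ("BA", "AB"), ("CA", "AC"), ("CB", "BC"),
    ("A", "a"), ("aA", "aa"), ("aB", "ab"),
    ("bB", "bb"), ("bC", "bc"), ("cC", "cc"),
]

def csg_member(w, start="S", productions=PRODUCTIONS):
    # Breadth-first search over sentential forms.  Because context-sensitive
    # productions never shrink the string, a derivation of w only passes
    # through forms of length at most len(w), so the search space is finite
    # (though exponentially large in general).
    seen = {start}
    queue = deque([start])
    while queue:
        form = queue.popleft()
        if form == w:
            return True
        for lhs, rhs in productions:
            i = form.find(lhs)
            while i != -1:
                new_form = form[:i] + rhs + form[i + len(lhs):]
                if len(new_form) <= len(w) and new_form not in seen:
                    seen.add(new_form)
                    queue.append(new_form)
                i = form.find(lhs, i + 1)
    return False

print(csg_member("aabbcc"))  # True: the derivation above finds it
print(csg_member("aabbc"))   # False: no derivation produces a string of length 5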

Reduction: Since we know that Linear Bounded Automata are the machines that accept context-sensitive languages, it seems straightforward to go from Linear Bounded Automaton Acceptance.  The good news is that the equivalence proofs showing that Linear Bounded Automata and context-sensitive grammars describe the same languages (for example, Theorems 8.14 and 8.20 in Martin’s Theory book) build grammars from machines in polynomial time.  So our reduction can just take an LBA and its input string, build the equivalent context-sensitive grammar in polynomial time, and ask whether the string is in the language that grammar generates.
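
Written as code, the reduction itself is almost nothing; all of the work hides inside the machine-to-grammar construction.  Here is a minimal sketch, where lba_to_csg is a hypothetical placeholder for that polynomial-time construction (the one from the equivalence proofs mentioned above), not something implemented here:

def reduce_lba_acceptance(machine, w, lba_to_csg):
    # Map an instance (machine, w) of Linear Bounded Automaton Acceptance to an
    # instance (grammar, w) of Context-Sensitive Language Membership.
    # `lba_to_csg` stands in for the polynomial-time textbook construction of a
    # grammar generating exactly the language the LBA accepts; the string itself
    # passes through unchanged.
    grammar = lba_to_csg(machine)  # polynomial-size, by the equivalence proof
    return grammar, w              # w is in L(grammar) iff the machine accepts w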

Difficulty: 4.  The hard part is convincing yourself that the conversion algorithm really is polynomial.  That’s not the sort of thing the theory books tend to worry about.  (Heck, context-sensitive languages in general are skipped over in most theory of computation classes).  But it’s a pretty easy consequence of the proof if you’re doing the proof in class anyway.

Protected: Linear Space Acceptance

Protected: Integer Expression Membership

Protected: Unification for Finitely Presented Algebras

Protected: Unification With Commutative Operators

Protected: Comparative Vector Inequalities

Protected: Staff Scheduling

Protected: Precedence Constrained Scheduling

Protected: Scheduling to Minimize Weighted Completion Time
