Tag Archives: 3SAT

Single Execution Time Scheduling With Variable Number of Processors

Sorry for skipping last week.  I got about halfway through the post for the next problem, but forgot that the reduction went through this other problem instead, and then ran out of time to fix it.

The problem: Single Execution Time Scheduling.  This problem is not in the appendix; it's problem “P4” in the paper by Ullman that contains both this reduction and next week's problem.

The description: Given a set S of N jobs arranged in a partial order, each taking 1 time unit; a deadline D; and a sequence of D “processor slot” counts, one for each time from 0 to D-1, whose sum is exactly N: can we create a feasible schedule for all of the jobs that respects the partial order?

Example: Here’s a simple example of a partial order: a comes before both b and c, and b and c each come before d.

If D=3 and the processor slot sequence is {1,2,1}, then this can be solved: schedule a at time 0, b and c at time 1, and d at time 2.  (So d is done by time 3.)

If the processor slot sequence is {1,1,2}, then at time 1 we can only schedule one of b and c, so we won’t be able to schedule d at time 2.
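
Since every job takes one time unit, checking whether a proposed schedule is legal is easy, even though finding one is the hard part.  Here’s a quick Python sketch of such a checker (the function name and input format are mine, not anything from the paper):

from collections import Counter

def is_feasible(schedule, precedes, slots):
    # schedule: job -> start time; precedes: (a, b) pairs where a must
    # finish before b starts; slots[t]: processors available at time t
    if any(not 0 <= t < len(slots) for t in schedule.values()):
        return False                      # some job misses the deadline window
    load = Counter(schedule.values())
    if any(load[t] > slots[t] for t in load):
        return False                      # too many jobs in one time slot
    # Unit-time jobs: a predecessor just has to start strictly earlier.
    return all(schedule[a] < schedule[b] for a, b in precedes)

order = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(is_feasible({"a": 0, "b": 1, "c": 1, "d": 2}, order, [1, 2, 1]))  # True
print(is_feasible({"a": 0, "b": 1, "c": 2, "d": 2}, order, [1, 1, 2]))  # False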

Reduction: The Ullman paper goes from 3SAT.  The original formula will have m variables and n clauses.  We will create jobs:

  • xi,j and ~xi,j where i goes from 1 to m and j goes from 0 to m.
  • yi and ~yi where i goes from 1 to m.
  • Di,j where i goes from 1 to n and j goes from 1 to 7.

The partial order on these jobs is:

  • Each xi,j comes before xi,j+1 and each ~xi,j comes before ~xi,j+1
  • Each xi,i-1 comes before yi and each ~xi,i-1 comes before ~yi
  • For each Di,j, think of the index j (from 1 to 7) as a 3-bit binary number.  The three literals in clause i have 7 combinations of truth settings that make the clause true, and each of those combinations can also be written as a 3-bit binary number.  Match Di,j with the combination whose binary value is j, and look at the setting (positive or negative) that combination gives each of the clause’s variables.  Then take the last job in the x (or ~x) chain of each of those variables and make it come before Di,j.  So if the combination sets variable k positively, we’d make xk,m come before the Di,j we’re looking at.

That last rule is just a way of saying “there are 7 ways of making the clause true, and you need to finish the jobs of the literal settings that make the clause true before you can do the clause’s job.”

The deadline is m+3.  The processor slots have:

  • m slots at time 0
  • 2m+1 slots at time 1
  • 2m+2 slots at each of times 2 through m
  • n+m+1 slots at time m+1
  • 6n slots at time m+2

The idea is that at time 0 we need to run one of either xi,0 or ~xi,0 for each i (the other is run at time 1).  This corresponds to setting variable i to true or false.  We need to commit at time 0 because the y jobs have to run as soon as they become available: y1 or ~y1 (whichever follows the chain we started at time 0) needs to be run at time 1, and so on down the line.  At time 1, we run either xi,1 or ~xi,1, depending on what we did at time 0, and also start the chains of the opposite polarity.  So at time m+1, we have one y job left over (the last y of the chain we started late), m x jobs left over (the xi,m or ~xi,m ending each chain we started at time 1), and hopefully enough finished x jobs to be able to run n D jobs (one for each clause).  This is the way you’ll satisfy each clause.  Then at time m+2, everything is done except for the other 6n D jobs.
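
To make the bookkeeping concrete, here’s a Python sketch of how I’d build the jobs, the partial order, and the slot profile from a 3SAT formula.  The tuple encoding of jobs and the clause format are my own, not Ullman’s; clauses are lists of three (variable, sign) pairs, with sign=True meaning a positive literal:

def ullman_instance(m, clauses):
    # m variables (named 1..m); n = number of clauses.
    n = len(clauses)
    jobs, precedes = [], []

    def x(i, j, pos):                     # j-th job in variable i's chain
        return ("x" if pos else "~x", i, j)

    for i in range(1, m + 1):
        for pos in (True, False):
            jobs += [x(i, j, pos) for j in range(m + 1)]
            # Each chain job comes before the next one in its chain.
            precedes += [(x(i, j, pos), x(i, j + 1, pos)) for j in range(m)]
            y = ("y" if pos else "~y", i)
            jobs.append(y)
            precedes.append((x(i, i - 1, pos), y))  # x_{i,i-1} before y_i

    for i, clause in enumerate(clauses, start=1):
        for j in range(1, 8):             # the 7 satisfying bit patterns
            d = ("D", i, j)
            jobs.append(d)
            for (var, sign), bit in zip(clause, [(j >> k) & 1 for k in range(3)]):
                # The variable setting that gives this literal truth value `bit`.
                precedes.append((x(var, m, bool(bit) == sign), d))

    slots = [m, 2 * m + 1] + [2 * m + 2] * (m - 1) + [n + m + 1, 6 * n]
    return jobs, precedes, slots, m + 3   # deadline is m+3

A quick sanity check that the slot counts add up to the number of jobs, which the problem definition requires:

jobs, prec, slots, deadline = ullman_instance(3, [[(1, True), (2, True), (3, False)]])
assert sum(slots) == len(jobs)            # 2m^2 + 4m + 7n on both sides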

Difficulty: 7.  I think Ullman does a very good job of explaining his method, which actually obscures a bit how complex this reduction is, with all of the moving parts and algebra going on.


Register Sufficiency

We’re jumping ahead again since this problem is used in the next reduction.

The problem: Register Sufficiency.  This is problem PO1 (“Program Optimization”) in the appendix.

The description: Given a directed acyclic graph G=(V,A) and a positive integer K, can we find a “computation” for G that uses K or fewer registers?  The idea behind a “computation” is that vertices represent values that need to be kept in registers, and edges show dependencies between those values.  So: can we perform the computation while keeping at most K values in registers at any one time?

Example: Here is the example graph from the paper by Sethi with the reduction (p.227):

So, for example, the “x” at the bottom is used twice: once at t1 for the c*x calculation, and once at t3 for the (b+c*x)*x calculation.  We’d like to keep x in the same register for both calculations.  The numbers on the vertices show the registers holding the values, and lead to the following assembly-like computation:

  1. Load c into register 2
  2. Load x into register 3
  3. Multiply registers 2 and 3, putting the result in register 2.
  4. Load b into register 1.
  5. Add registers 1 and 2, putting the result in register 2.
  6. Multiply registers 2 and 3, putting the result in register 2.
  7. Load a into register 3.
  8. Add registers 2 and 3, putting the result into register 1.
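
Reading the steps off, the tree is computing a+(b+c*x)*x.  Here’s a little Python simulation of that listing, just to confirm it really gets by with three registers (the instruction encoding is made up for this post, not anything from Sethi’s paper):

def run(program, values):
    regs = {}
    for op, *args in program:
        if op == "load":
            name, r = args
            regs[r] = values[name]
        else:                               # "add" / "mul": (src1, src2, dest)
            a, b, dest = args
            regs[dest] = regs[a] + regs[b] if op == "add" else regs[a] * regs[b]
    return regs

program = [
    ("load", "c", 2), ("load", "x", 3), ("mul", 2, 3, 2),
    ("load", "b", 1), ("add", 1, 2, 2), ("mul", 2, 3, 2),
    ("load", "a", 3), ("add", 2, 3, 1),
]
v = {"a": 1, "b": 2, "c": 3, "x": 4}
assert run(program, v)[1] == v["a"] + (v["b"] + v["c"] * v["x"]) * v["x"]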

A related problem: It’s worth mentioning that Sethi represents this computation problem as a “game” problem of placing stones on a graph.  The stones are like registers.  The possible moves are:

  1. Place a stone on a leaf (allocating that leaf’s value to a register)
  2. Pick up a stone from a node (reclaiming that register for another use)
  3. Place a stone on a non-leaf node if there is a stone on all of its children (generating a value and putting the result in a new register)
  4. Move a stone from a node to a parent node if all children of the parent have a stone (generating a value and putting the result in a register held by one of the children).

The problem then becomes: Can we get stones on all “root” nodes of G (nodes without parents) using K or fewer stones at all times?
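
Here’s a sketch of those four moves as a Python simulation; it checks that a proposed sequence of moves is legal and never uses more than k stones at once.  (The move names and encoding are mine, not Sethi’s.)

def plays_with_k_stones(children, moves, k):
    # children: node -> list of child nodes (leaves map to empty lists)
    stones = set()
    for move in moves:
        op, node = move[0], move[1]
        if op == "place":                   # move 1: stone on a leaf
            if children[node]:
                return False
            stones.add(node)
        elif op == "pickup":                # move 2: reclaim the register
            stones.discard(node)
        elif op == "compute":               # move 3: all children have stones
            if not all(c in stones for c in children[node]):
                return False
            stones.add(node)
        elif op == "shift":                 # move 4: slide a child's stone up
            parent = move[2]
            if node not in stones or not all(c in stones for c in children[parent]):
                return False
            stones.discard(node)
            stones.add(parent)
        if len(stones) > k:
            return False
    return True

# The t1 = c*x subtree from the example: two stones suffice.
g = {"c": [], "x": [], "t1": ["c", "x"]}
print(plays_with_k_stones(g, [("place", "c"), ("place", "x"), ("shift", "c", "t1")], 2))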

Reduction: Sethi’s reduction is from 3SAT.  The SAT instance has m clauses and n variables.  K will be 5n+3m+1.  The DAG that gets built has n+m “stages”.  The first n stages “assign” truth values to variables, and the rest check that each clause is satisfied.

The variable stages are a pretty crazy structure based on a vertex zi, which can only be computed after all of its ancestors are computed; when it is computed, there will be n-i+1 stones in hand, which is exactly the number of stones needed to compute xi and ~xi.  So the last stone in that computation is placed on either xi or ~xi, “setting” its truth value (and leaving no stones to compute the opposite value).  The clause stages are set up so that we only have enough stones to compute a clause vertex if the clause is satisfied (using the stone already sitting on the corresponding xi or ~xi vertex).

The actual reduction goes on for pages and pages and has a lot of details to worry about.

Difficulty: 9.  It’s a shame that the cool idea of the stones game still led to such a hard reduction.
