
Resource Constrained Scheduling

I spent a bit of time trying to see if there was an easy reduction between this problem and one of the other scheduling ones, but I couldn’t find anything.  So we’ll do it the hard way.

The problem: Given a set T of tasks, each with a length of 1, a set of m processors, and a set of r resources.  Each resource i has a “bound” bi of how many units of that resource are available at any one time, and each task t has a requirement ri(t) denoting how many units of resource i it needs.  We’re also given a deadline D.

Can we find an m-processor schedule for all tasks that:

1) Meets the deadline, and

2) For all time units, ensures that the tasks running in that time unit do not collectively exceed the bound of any resource?

Example: Suppose I have 4 tasks and 1 resource:

  • Task 1 uses 2 units of the resource
  • Task 2 uses 3 units of the resource
  • Task 3 uses 4 units of the resource
  • Task 4 uses 5 units of the resource

If there are 5 units of the resource available, we can get all of the tasks done in 3 time steps (Task 4, then 1 and 2, then 3).

If there are 7 units available, we can get them done in 2 time steps (1 and 4, then 2 and 3).

If there are 14 units available, we can do everything in one time step.
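The feasibility test itself is easy to state in code.  Here’s a minimal sketch in Python (the function and resource names are mine, not from the problem statement) that checks a proposed schedule against the deadline, the processor count, and the resource bounds, using the example above:

    # A schedule is a list of time steps; each time step is a list of tasks,
    # and each task is a dict mapping a resource name to the units it needs.
    def schedule_is_feasible(schedule, m, bounds, deadline):
        if len(schedule) > deadline:
            return False                  # fails condition 1: misses the deadline
        for step in schedule:
            if len(step) > m:
                return False              # more than m processors in one time unit
            for resource, bound in bounds.items():
                if sum(task.get(resource, 0) for task in step) > bound:
                    return False          # fails condition 2: resource bound exceeded
        return True

    # The example: tasks needing 2, 3, 4, and 5 units of one resource.
    t1, t2, t3, t4 = ({"r": u} for u in (2, 3, 4, 5))
    print(schedule_is_feasible([[t4], [t1, t2], [t3]], 2, {"r": 5}, 3))  # True
    print(schedule_is_feasible([[t1, t4], [t2, t3]], 2, {"r": 5}, 2))    # False: 2+5 > 5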

Reduction: The paper by Garey and Johnson proves several variants of the problem.  The one I’ll do is in Lemma 3.4, which has 5 processors and 8 resources.  They go on from there to show that the variant with 3 processors and 1 resource is also NP-Complete, but we only care about the general problem.

The reduction is from 3DM.  In this case, we’re assuming that W, X, and Y (the 3 sets in the 3DM problem) are all the same set T, and that S is our set of triples from T.  Define a value mk(i) to be the number of triples in S that have i in their kth position.  So m2(4) is the number of triples with a 4 in position 2.  Notice that for a legal matching to be possible, mk(i) always has to be at least 1.

So we build an instance of the resource constrained scheduling problem with 5 processors and 8 types of resources.  The tasks are:

  • One “S” task for each triple in S.
  • X0, Y0, and Z0 tasks for each of the q elements of T (where q = |T|).
  • Xij tasks, where i runs from 1 to q and j runs from 1 to m1(i)-1.  (Together with the q X0 tasks, that makes exactly |S| X-type tasks.)
  • Similar Yij (using m2(i)) and Zij (using m3(i)) tasks.
  • “F” tasks numbered from 1 to q.
  • |S|-q “G” tasks.

The deadline is |S|.

The resource requirements are set up so that during each time unit, we have 5 processors running:

  1. An S task
  2. An X or X0
  3. A Y or Y0
  4. A Z or Z0
  5. An F or G

Each X0/Y0/Z0 task corresponds to an element of T, and the resource requirements are set so that an X0, Y0, and Z0 can only run alongside an F task and the S task for the triple made up of those three elements.  The F tasks make sure that we have chosen a legal matching: the matching consists of the S triples that get scheduled along with an F task.  (The G tasks are there to fill the rest of the time steps.)
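The full construction pins all of this down with specific resource requirements, which I won’t reproduce here.  But just to see the counting work out, here’s a hedged sketch in Python (the names and representation are mine) that generates the task set from a 3DM instance and checks that there are exactly 5|S| tasks – one per processor per time unit:

    from collections import Counter

    def build_task_set(S, T):
        # S: a list of triples over the elements of T; q = |T|.
        q = len(T)
        m = [Counter(triple[k] for triple in S) for k in range(3)]  # m[k][i] = m_(k+1)(i)
        tasks = [("S",) + s for s in S]                             # one S task per triple
        for row, k in (("X", 0), ("Y", 1), ("Z", 2)):
            tasks += [(row + "0", i) for i in T]                    # X0/Y0/Z0, one per element
            tasks += [(row, i, j) for i in T for j in range(1, m[k][i])]  # j = 1 .. m_k(i)-1
        tasks += [("F", i) for i in range(1, q + 1)]                # q F tasks
        tasks += [("G", i) for i in range(1, len(S) - q + 1)]       # |S|-q G tasks
        assert len(tasks) == 5 * len(S)                             # 5 rows x |S| time units
        return tasks

    # A tiny instance: T = {1, 2} and 3 triples, with every m_k(i) at least 1.
    print(len(build_task_set([(1, 1, 1), (1, 2, 2), (2, 1, 2)], [1, 2])))  # 15 = 5*|S|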

Difficulty: 7.  More if you keep going and refine the problem down to 3 processors and 1 resource.

Precedence Constrained Scheduling

SS8 is “Multiprocessor Scheduling”, which is done in Section 3.2.1.

Here’s SS9, which uses the non-book problem from last week.

The problem: Precedence Constrained Scheduling.  This is problem SS9 in the appendix.

The description: Given a set T of tasks arranged in a partial order, each of length 1, a set of m processors, and a positive deadline D.  Can we schedule the tasks in T in a way that obeys the precedence constraints and finishes by deadline D?

Example:

Suppose we have a simple precedence graph in which tasks a, c, and d are independent of each other, and task b must come after all three of them.

With 2 processors, the earliest feasible schedule finishes at time 3 (timestep 1 schedules two of a, c, and d; timestep 2 schedules the remaining one; and then we can finally schedule b after that).  With 3 processors, we can finish at time 2 (schedule all of a, c, and d in timestep 1, then b right afterward).
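To make the conditions concrete, here’s a small Python sketch (the names are mine) that checks a proposed schedule against the processor count, the deadline, and the precedence pairs, verifying both schedules above:

    def meets_constraints(schedule, prec, m, deadline):
        # schedule: a list of sets of tasks, one set per time step.
        if len(schedule) > deadline:
            return False
        if any(len(step) > m for step in schedule):
            return False                  # too many tasks in one time step
        done_at = {task: t for t, step in enumerate(schedule) for task in step}
        return all(done_at[a] < done_at[b] for a, b in prec)

    prec = [("a", "b"), ("c", "b"), ("d", "b")]   # b comes after a, c, and d
    print(meets_constraints([{"a", "c"}, {"d"}, {"b"}], prec, m=2, deadline=3))  # True
    print(meets_constraints([{"a", "c", "d"}, {"b"}], prec, m=3, deadline=2))    # True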

Reduction: This is problem P2 in the paper by Ullman, and the reduction uses the “Single Execution Time with Variable Number of Processors” problem from last week.  The main difference between that problem and this one is that instead of having a varying number of slots at each time step, we now have a fixed number of processors for all time steps.

So we start with an instance of the variable-processor problem and add a bunch of new jobs Iij, where i runs through each time step and j runs from 0 to n – ci (n is the number of jobs, ci is the number of processors available at time i).  The precedence graph keeps all of the old precedences and also has Iij come before Ii+1,k for all j and k (so all I jobs at time i come before all I jobs at time i+1).  We have n+1 processors and the same deadline.  The idea is that the extra I jobs all have to run at their specific timestep, and they “fill” the empty processor slots between the ci processors available in the original and the n+1 processor slots available in this version of the problem: the n – ci + 1 filler jobs at time i, plus the ci real jobs that run then, take up all n+1 processors.
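Here’s a sketch of that padding step in Python (the representation is mine): given the slot counts ci from the variable-processor instance, it emits the filler jobs and the extra precedence pairs:

    def pad_instance(n, c, prec):
        # n: number of original jobs; c[i]: processors available at time i;
        # prec: the original precedence pairs.
        filler = {i: [("I", i, j) for j in range(n - c[i] + 1)] for i in range(len(c))}
        new_prec = list(prec)
        for i in range(len(c) - 1):
            # every I job at time i precedes every I job at time i+1
            new_prec += [(a, b) for a in filler[i] for b in filler[i + 1]]
        new_jobs = [job for step in filler.values() for job in step]
        return new_jobs, new_prec, n + 1      # n+1 processors; the deadline is unchanged

    # Filler counts for slot sequence {1,2,1} with n = 4 jobs: 4 + 3 + 4 = 11 I jobs.
    jobs, _, procs = pad_instance(4, [1, 2, 1], [])
    print(len(jobs), procs)                   # 11 5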

Difficulty: 4, mainly because the variable # of processors problem we’re reducing from is hard to understand.  But this is a classic “go from a variable number of things to a larger fixed number of things” technique that can be used for lots of problems.


Single Execution Time Scheduling With Variable Number of Processors

Sorry for skipping last week.  I got about halfway through the post for the next problem, but forgot that the reduction went through this other problem instead, and then ran out of time to fix it.

The problem: Single Execution Time Scheduling.  This problem is not in the appendix; it’s problem “P4” in the paper by Ullman that has this reduction and next week’s problem.

The description: Given a set S of N jobs arranged in a partial order, each taking 1 time unit, a deadline D, and a sequence of D “processor slot” counts, one for each time from 0 to D-1, where the slot counts sum to exactly N.  Can we create a feasible schedule for all tasks that respects the partial order?

Example: Here’s a simple partial order: a comes before both b and c, and b and c each come before d.

If D=3 and the processor slot sequence was {1,2,1}, then this can be solved: schedule a at time 0, b and c at time 1, and d at time 2.   (So d is done by time 3).

If the processor slot sequence was {1,1,2}, then at time 1 we can only schedule one of b and c, so we won’t be able to schedule d at time 2.
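Since the instances are tiny, a brute-force check (my own sketch, exponential in the number of jobs) can confirm both claims by trying every assignment of a time step to each job:

    from itertools import product

    def feasible(slots, prec, jobs):
        for times in product(range(len(slots)), repeat=len(jobs)):
            when = dict(zip(jobs, times))
            if any(times.count(t) > slots[t] for t in range(len(slots))):
                continue                      # overfills some processor slot
            if all(when[a] < when[b] for a, b in prec):
                return True                   # respects the partial order
        return False

    prec = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
    print(feasible([1, 2, 1], prec, "abcd"))  # True: a; then b and c; then d
    print(feasible([1, 1, 2], prec, "abcd"))  # False: only one of b, c fits at time 1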

Reduction: The Ullman paper goes from 3SAT.  The original formula will have m variables and n clauses.  We will create jobs:

  • xij and ~xij where i goes from 1 to m and j goes from 0 to m.
  • yi and ~yi where i goes from 1 to m
  • Dij where i goes from 1 to n and j goes from 1 to 7.

The partial order on these jobs is:

  • Each xij comes before xi,j+1 and each ~xij comes before ~xi,j+1
  • Each xi,i-1 comes before yi and each ~xi,i-1 comes before ~yi
  • For each Dij, represent the j index as a binary number (from 1 to 7).  The three literals in clause i have 7 combinations of truth settings that make the clause true, and each combination can also be represented as a binary number.  For the combination corresponding to j, look at the true or false setting each variable gets, take the last of the x (or ~x) jobs of the chain corresponding to that setting, and make it come before the Dij job.  So if the combination sets variable k true, we’d make xkm come before the Dij job we’re looking at.

That last rule is just a way of saying “there are 7 ways of making the clause true, and you need to execute the jobs of the literals that make the clause true before you do the clause job.”
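Here’s my reading of that step as a Python sketch (the clause representation and names are mine): enumerate the 7 satisfying assignments of the clause, and for each one the predecessors of the corresponding D job are the last chain jobs that match that assignment:

    from itertools import product

    def d_job_predecessors(clause, m):
        # clause: three (variable, is_positive) pairs, e.g. ((1, True), (2, False), (3, True)).
        preds = []
        for values in product([True, False], repeat=3):
            if not any(v == pos for (var, pos), v in zip(clause, values)):
                continue                      # the one assignment that falsifies the clause
            # For each variable, the predecessor is the last job (index m) of the
            # chain matching its assigned value: x_{k,m} if true, ~x_{k,m} if false.
            preds.append([("x" if v else "~x", var, m) for (var, _), v in zip(clause, values)])
        return preds                          # 7 predecessor lists, one per D_{i,j}

    for p in d_job_predecessors(((1, True), (2, False), (3, True)), m=3):
        print(p)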

The deadline is m+3.  The processor slots have:

  • m slots at time 0
  • 2m+1 slots at time 1
  • 2m+2 slots at each of times 2 through m
  • n+m+1 slots at time m+1
  • 6n slots at time m+2
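Since the problem requires the slot counts to sum to exactly the number of jobs, it’s worth checking the arithmetic: the x/~x chains give 2m(m+1) jobs, the y/~y jobs give 2m, and the D jobs give 7n, and the slots above add up to the same 2m^2 + 4m + 7n.  A quick sketch (assuming m is at least 2, so the “times 2 through m” range is nonempty):

    def job_and_slot_counts(m, n):
        jobs = 2 * m * (m + 1) + 2 * m + 7 * n   # x/~x chains + y/~y jobs + D jobs
        slots = (m                               # time 0
                 + (2 * m + 1)                   # time 1
                 + (2 * m + 2) * (m - 1)         # times 2 through m
                 + (n + m + 1)                   # time m+1
                 + 6 * n)                        # time m+2
        return jobs, slots

    print(job_and_slot_counts(4, 3))             # (69, 69): they match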

The idea is that at time 0 we need to run one of either xi0 or ~xi0 for each i (the other is run at time 1).  These choices correspond to whether variable i is set to true or false.  We need to do that because we need to run the y jobs as soon as they become available (y1 or ~y1 – whichever matches the chain for variable 1 that we started at time 0 – needs to be run at time 1, and so on down).  At time 1, we run either xi1 or ~xi1, depending on what we did at time 0.  So at time m+1, we have one y job left over (the last of the y’s in the sequence we started late), m x jobs left over (the xim or ~xim at the end of each chain we started at time 1), and hopefully enough x jobs finished to be able to run n D jobs (one for each clause).  This is the way you satisfy each clause.  Then at time m+2, everything is done except for the other 6n D jobs.

Difficulty: 7.  I think Ullman does a very good job of explaining his method, which actually obscures a bit how complex this reduction is, with all of its moving parts and algebra.


Sequencing to Minimize Maximum Cumulative Cost

I want to give credit to our Interlibrary Loan people who tracked this reference down; it’s a Ph.D. thesis that only existed on microfilm.  So I got to use the microfilm reader for the first time since elementary school.  Those things are very different now.

The problem: Sequencing to Minimize Maximum Cumulative Cost.  This is problem SS7 in the appendix.

The description: Given a set of tasks with a partial order defined on the tasks, a (possibly negative) integer cost c(t) for each task t, and a positive value K.  Can we create a one-processor schedule that obeys the partial order constraints such that, for each task t, the total cost of all tasks scheduled up through t is K or less?

Example: Suppose we have the following precedence graph of tasks (the number in the vertex is the cost of the task):

The schedule (d,e,a,c,b) has its maximum cumulative cost, 12, at the last task, b (the sum of the costs of all of the tasks before it).  Notice that if some task costs are negative, it’s possible that the last task that is done does not have the maximum total cost.  For example, if task c’s cost was -4, then the schedule (d,e,a,c,b) has its maximum cost of 10 after task a is scheduled.

Reduction: I got this from a Ph.D. thesis by Abdel-Wahab, and in it, he reduces from Register Sufficiency.  So we’re given an instance of register sufficiency: a directed graph G=(V,E), and a K.  The precedence graph G’=(V’, E’) that we’ll build has 2 copies of every vertex in G.  The edge set of G’ will have all of the edges in E, and additionally, if (i,j) is an edge in E, then (j’, i) is an edge in E’.  (That’s an edge from the second copy of j back to i.)  Each vertex costs 1 if it is in V, and -1 if it is a second copy.  Set K’ = K+1.
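The construction is easy to transcribe directly; here’s a Python sketch (my representation of the primed copies is arbitrary):

    def build_scheduling_instance(V, E, K):
        # Two copies of each vertex: v itself, and ("prime", v) standing in for v'.
        V2 = list(V) + [("prime", v) for v in V]
        E2 = list(E) + [(("prime", j), i) for (i, j) in E]  # edge (i,j) adds (j', i)
        cost = {v: 1 for v in V}                            # originals cost 1
        cost.update({("prime", v): -1 for v in V})          # second copies cost -1
        return V2, E2, cost, K + 1                          # K' = K + 1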

If the register sufficiency problem has a solution, then we build a sequence based on it.  Thinking in terms of Sethi’s “stone game”: each time we place a new stone on a vertex, we add that vertex to the end of the sequence.  If we pick up a stone from vertex x, we add x’ (the second copy) to the end of the sequence.  If we’re moving a stone from y to x, that’s like placing a stone on x and then removing the one on y, which momentarily uses one extra stone (this is where K’ = K+1 comes from).  You can prove by induction that the max cost of any task in this sequence is at most K’.
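As a sketch, the translation from stone-game moves to a schedule looks like this (the move encoding is mine, and it reuses the ("prime", v) representation from above):

    def stones_to_schedule(moves):
        # moves: ("place", v), ("remove", v), or ("move", y, x), in game order.
        seq = []
        for mv in moves:
            if mv[0] == "place":
                seq.append(mv[1])             # cost +1: a new stone
            elif mv[0] == "remove":
                seq.append(("prime", mv[1]))  # cost -1: schedule the copy v'
            else:
                _, y, x = mv                  # move y -> x: place on x, then lift y
                seq += [x, ("prime", y)]
        return seq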

If the scheduling task has a solution, then we build a solution to the stone game.  For each task in the schedule, if it is in V (the original set of vertices), place a stone on that vertex.  If it is not in V, then it is a second copy, so remove the stone from the corresponding vertex in V.  Since K’ = K+1, it’s possible that this sequence of steps uses K+1 stones, so we need to modify it to use K.  So, find a task in the sequence whose cumulative cost is exactly K’.  We know this task is in V (it was a “place a stone” move, since it took us up to K’ stones).  It can’t be the last task, since we have to end with one that costs -1 (all of the -1 vertices have no outgoing edges, and the ones that cost 1 all have an outdegree of at least 1).  So look at the task after it.  It can’t be a “place a stone” move; otherwise we’d use K’+1 stones.  So we know that the next task to be scheduled has cost -1.  Since that move will be to pick up a stone, just move the stone that’s about to be removed to our peak-cost vertex instead of allocating a new stone, and we solve the problem using 1 less stone at this point.  If multiple places in the schedule cost exactly K’, we can do this modification to all of them, creating a program sequence that uses at most K stones.

Difficulty: 6.  The reduction is not hard, but the stone game takes some work to understand, and turning a solution that costs K’ into a solution that costs K is a little tricky.  I wonder if there’s an alternate reduction that has K’ = K.