Staff Scheduling

This one has no reference in G&J, but I think it’s easy enough for me (or a student) to figure out.

The problem: Staff Scheduling.  This is problem SS20 in the appendix.

The description: We’re given a collection C of m-tuples, each consisting only of 0’s and 1’s, and each having the same number (k) of 1’s.  These represent possible schedules we can give a worker.  We’re also given a “requirement” m-tuple consisting of m non-negative integers, and a number N of workers.  Can we map each c in C to a non-negative integer (representing the number of workers we assign to that schedule) such that the total of all of those integers is N or less, and the requirement tuple is met (that is, each position is covered by at least the required number of workers)?

Example: I think this is easier to explain with an example.  Suppose we had 4-tuples (corresponding to 4 “work periods”) and C is the tuples:

0 0 1 1
1 1 0 0
1 0 0 1
0 1 1 0
0 1 0 1

Our requirement tuple is:

2 1 4 3

Then one possible schedule would be:

  • 3 people with the first schedule (0,0,1,1)
  • 2 people with the third schedule (1,0,0,1)
  • 1 person with the fourth schedule (0,1,1,0)

..for a total of 6 workers. I think that’s the best you can do, since the first and third time periods have no schedules in common, and there is a total requirement of 6 workers for those time periods.
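If you want to sanity-check an assignment like this, it’s only a few lines of code.  Here’s a minimal sketch in Python (the names are mine, and I’m reading “met” as “at least the required number of workers in every period”):

```python
def staff_schedule_ok(schedules, counts, requirement, n_workers):
    """schedules: list of 0/1 m-tuples; counts[i] = workers given schedules[i]."""
    if sum(counts) > n_workers:          # can't use more than N workers total
        return False
    for period, needed in enumerate(requirement):
        staffed = sum(c * s[period] for c, s in zip(counts, schedules))
        if staffed < needed:             # this work period is understaffed
            return False
    return True

C = [(0,0,1,1), (1,1,0,0), (1,0,0,1), (0,1,1,0), (0,1,0,1)]
print(staff_schedule_ok(C, [3, 0, 2, 1, 0], (2, 1, 4, 3), 6))  # True
```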

Reduction: G&J don’t give a reference, but they do suggest using X3C.   So we’re given a set X of 3q elements, and a collection C of 3-element subsets of X.  The collection C’ that we’ll build will be a set of 3q-tuples, one position corresponding to each element of X.  There will be one tuple for each triple in C.  Each tuple will have 1’s in the 3 positions corresponding to the elements of its triple, and 0’s everywhere else.  The requirement tuple will be all 1’s, and N will be equal to q.

The Staff Scheduling instance we’ve built needs to choose q different tuples (one worker each) to be successful, and that set of tuples will correspond to an exact cover in the original X3C instance.
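As code, the construction is short.  Here’s a sketch (my names; I’m numbering the elements of X from 0 to 3q-1):

```python
def x3c_to_staff_scheduling(q, triples):
    """triples: list of 3-element sets over {0, ..., 3q-1} (the X3C collection)."""
    m = 3 * q
    # One 0/1 m-tuple per triple: 1's at the triple's three positions.
    schedules = [tuple(1 if e in t else 0 for e in range(m)) for t in triples]
    requirement = (1,) * m    # every element of X must be covered
    return schedules, requirement, q    # q workers available
```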

Difficulty: 4, assuming I didn’t miss anything.  If you’ve previously explained X3C to your students, this would be a good homework problem.

Timetable Design

We’ve now entered the “Miscellaneous” section of the Sequencing and Scheduling chapter.

The problem: Timetable Design.  This is problem SS19 in the appendix.

The description: (I’m using the description in the paper by Even, Itai, and Shamir, because I think it’s clearer than G&J’s definition): We’re given a set H (of hours to schedule) and a collection of N “teachers”, where each teacher is available for some subset of the H hours.  We’re also given a collection of M “classes”, where each class is available to be taught during some subset of the H hours.  We’re also given an NxM matrix R of requirements: Rij is the number of hours teacher i needs to teach class j.  Can we create a function f: TxCxH->{0,1} such that f(t,c,h) is 1 if teacher t is teaching class c at time h (and 0 otherwise)?  The function needs to respect the following rules (sketched in code after the list):

  • If f(i,j,h) is 1, then h is in Ti and Cj  (we only schedule teachers and classes when they’re available)
  • The sum of f(i,j,h) over all h = Rij  (each teacher teaches each class the number of hours they’re supposed to)
  • The sum of f(i,j,h) over all i <= 1  (each class has at most 1 teacher in each time period)
  • The sum of f(i,j,h) over all j <= 1  (each teacher can only teach 1 class per time slot)
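Here’s how I’d code up those four rules as a checker, as a minimal sketch (f is any 0/1 function, and T_avail, C_avail, and R are my names for the availability sets and the requirements matrix):

```python
def timetable_ok(f, T_avail, C_avail, R, hours):
    N, M = len(T_avail), len(C_avail)
    for i in range(N):
        for j in range(M):
            # Rule 1: only schedule when both teacher and class are available.
            if any(f(i, j, h) and (h not in T_avail[i] or h not in C_avail[j])
                   for h in hours):
                return False
            # Rule 2: teacher i teaches class j for exactly R[i][j] hours.
            if sum(f(i, j, h) for h in hours) != R[i][j]:
                return False
    for h in hours:
        # Rule 3: each class has at most one teacher in each time period.
        if any(sum(f(i, j, h) for i in range(N)) > 1 for j in range(M)):
            return False
        # Rule 4: each teacher teaches at most one class in each time period.
        if any(sum(f(i, j, h) for j in range(M)) > 1 for i in range(N)):
            return False
    return True
```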

Example:  Suppose H was {1,2,3,4,5}

We have 3 teachers:

  • T1 is available at {1,2,3}
  • T2 is available at {3,4,5}
  • T3 is available at {1,3,5}

..and 3 classes:

  • C1 is available at {2,4}
  • C2 is available at {1,5}
  • C3 is available at {2,3}

If R was:

1 1 0
1 0 1
0 1 1

(Teachers are rows, so T1 needs 1 hour of C1, 1 hour of C2, and none of C3.)  This won’t be satisfiable, since neither T2 nor T3 can teach C3 during hour 2, and someone has to: C3 needs 2 total hours of teaching, but it’s only available for 2 hours (2 and 3), so one of those hours must be hour 2.  If we change C3 to be available at {2,3,5}, then this becomes satisfiable:

  • T1 can only teach C1 at time 2, and C2 at time 1
  • T2 can only teach C1 at time 4, and C3 can be either time 3 or 5.  Let’s pick 5.
  • T3 can teach C2 at times 1 or 5, but time 1 is taken by T1, so we schedule it at time 5.  That leaves T3’s hour of C3: C3 is now available at times 2, 3, and 5, and of those T3 is only free at time 3 (time 5 is now taken by C2), so we can schedule that class then.

Reduction: Even, Itai, and Shamir use 3SAT.  So we’re given a set X of literals (x1 through xl, and the negations ~x1 through ~xl), and a set D of k clauses (D1 through Dk).  Define pi to be the number of times variable i (so the literal xi or the literal ~xi) shows up in the formula.  Each variable i will get 5*pi classes denoted Ciab, where a runs from 1 to pi and b runs from 1 to 5.  There will be 3 hours to schedule the classes.   They organize the teachers for the classes in a widget that looks like this (from the paper, p. 693):

The idea is that the diagonal lines are teachers who have to teach the 2 classes connected by the line (so, Ci12 and Ci13, for example) and the crosses are the 2 ways they can teach it (so h2 and h3).  The bottom arrows are teachers who have 3 hours of availability and 3 classes to teach.  Each variable gets pi teachers who teach Ciq3 and Ciq4 (where q runs from 1 to pi) during h1 and h2.  If Ciq3 is taught at time h1, the variable is treated as being set to positive; otherwise, it’s negative.

We then need to add more jobs to make sure that we make one literal true in each clause.  This is done by creating a teacher with 3 classes (one for each literal in the clause).  If the literal is the qth occurrence of literal xi, the teacher is assigned to class Ciq2 if the literal is true, and Ciq5 if the literal is false.

If there is a satisfying assignment to the 3SAT instance, the literal that makes each clause true will define a way to set the teacher at times h1 and h2, and that starts a chain reaction that schedules everything.

If there is a feasible timetable, we look at the Ciq3 and Ciq4 classes to see how we have allocated the teachers to those tasks, which tells us how to set the variables to make each clause true.

Difficulty: 9.  I’m having a hard time following all of the classes and teachers here.  I wonder if there is a much easier way to do this if we reduce from some other problem.


Job-Shop Scheduling

Are you sick of all of these flow-shop problems where we have to split up the tasks of a job into exactly one thing on each processor, progressing through the processors in the same order?  Then today’s problem is for you!

The problem: Job-Shop Scheduling.  This is problem SS18 in the appendix.

Description: We’re given a set of m processors and a set J of jobs.  Each job j in J consists of a chain of tasks, and different jobs can have different numbers of tasks.  The tasks of a job need to be processed in order, each task has a processor assignment associated with it, and a job may use the same processor multiple times and/or skip some processors entirely.  The only requirement is that two consecutive tasks need to be on different processors (otherwise, we could just merge those tasks together).  If we’re given a deadline D, can we schedule all jobs to complete all of their tasks by time D?

Example: Suppose we have three processors and 4 jobs:

  1. Job 1: 1 time on P1, then 2 time on P2, then 3 time on P3, then 2 time on P2
  2. Job 2: 2 time on P1, then 3 time on P2, then 2 time on P1
  3. Job 3: 1 time on P2, then 2 time on P3, then 3 time on P1, then 2 time on P3.
  4. Job 4: 1 time on P3, and that’s it.

Then we can get everything done by time 8 with the following schedule:

P1 1 2 2 3 3 3 2 2
P2 3 1 1 2 2 2 1 1
P3 4 3 3 1 1 1 3 3

Reduction: Garey, Johnson, and Sethi use 3-Partition.  So we’re given a set A of 3n elements, and a bound B.  The Job-Shop instance will have 2 processors and 3n+1 jobs, labeled C0 through C3n.  Job C0 has 2n tasks, alternating B time on each processor (so, B time on P1, then another B time on P2, then another B time on P1, and so on).  Each remaining job Ci has a single task: 0 time on P1, then ai time on P2.  The deadline is 2nB, which is the time it takes to schedule all of C0.  The paper leaves out the “straightforward” task of showing this reduction actually solves the problem.  I believe them that it is straightforward, but let’s take a closer look:

If A has a 3-Partition, then we have a collection of n sets of 3 elements that each add up to B.  Each one of these sets can fit on P2 while C0 is running on P1 (each of C0’s tasks occupies P1 for exactly B time, leaving a B-sized hole on P2), so we can fit all of the element jobs into the “holes” left by C0.

If we have a schedule that meets the deadline, we know that C0 must be scheduled continuously, which leaves the B-sized holes on P2.  For all of the element jobs to be scheduled by the deadline, they have to fit exactly into those holes, which gives us a 3-partition of A.
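Here’s the whole construction in one place, as a sketch (my names; I’m writing each task as a (processor, time) pair):

```python
def three_partition_to_job_shop(A, B):
    """A: list of 3n positive integers; B: the 3-Partition bound (sum(A) == n*B)."""
    n = len(A) // 3
    # C0 alternates B time on P1 and B time on P2, for 2n tasks total.
    c0 = [(1 if k % 2 == 0 else 2, B) for k in range(2 * n)]
    # Each element a_i becomes a job that just needs a_i time on P2
    # (the paper phrases this as 0 time on P1, then a_i time on P2).
    element_jobs = [[(2, a)] for a in A]
    return [c0] + element_jobs, 2 * n * B    # jobs, deadline
```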

Difficulty: 5.  The hard part, of course, is coming up with the conversion.  3-Partition isn’t the easiest problem to work with, but the nice features of it (especially that you know that any subset of A that adds to B has exactly 3 elements) make it a good fit here.

I wonder if it’s necessary to have C0 spend B time on both processors.  I understand the need to have C0 running on P2 to separate the partition sets, but wouldn’t it also work (and to my mind, be easier) if C0 took just 1 time (instead of B) each time it ran on P2?

I also wonder if there isn’t an easier reduction that goes straight from Flow-Shop Scheduling.  The reduction can take the Flow-Shop instance, and map it directly to the Job-Shop instance (just because the Job-Shop problem allows you to reuse, reorder, or skip processors with tasks, doesn’t mean it’s required).  Unless I’m missing something, this is a difficulty 2 reduction.  I think the reason they do the 3-Partition reduction in the paper is because they want to show it’s NP-Complete in the case where there are only 2 processors, and the Flow-Shop Scheduling reduction uses 3 processors.

Two-Processor Flow-Shop With Bounded Buffer

Here’s the reduction using the problem from last week.

The problem: Two-processor Flow-Shop with Bounded Buffer.  This is problem SS17 in the Appendix.

The description: We’re given an instance of Flow-Shop Scheduling, but we fix the number of processors to 2.  We’re also given a non-negative integer buffer bound B.  Can we find a flow-shop schedule that meets the deadline such that the number of jobs finished with their first task, but yet to start their second task never exceeds B?

Example: Suppose I have three jobs:

Job  P1 Time  P2 Time
1    1        3
2    1        3
3    4        1

For ease of discussion in the reduction, we will write these 2 times as tuples. So job 1 will be written as (1,3).

One possible schedule would be:

P1 1 2 3 3 3 3
P2 1 1 1 2 2 2 3

This uses a buffer of size 1 (Job 2 finishes its first task at time 2 and starts its second task at time 5, and Job 3 finishes its first task at time 6 and starts its second task at time 8).  This finishes all three tasks at time 8. If we have a buffer size of 0, this wouldn’t be allowed.  The best we can do is:

P1 1 X X 2 3 3 3 3
P2 1 1 1 2 2 2 X 3

..finishing everything by time 9.

Reduction: The paper by Papadimitriou and Kanellakis uses the Three-Way Matching of Integers problem.  So we have a set A with n integers {a1..an} and a set B with 2n integers {b1..b2n}.  Let c be the desired sum of each matched triple.  We’re going to create 4n+1 jobs:

  • Each element ai creates a job (c/2, ai+c).
  • Each element bi creates a job (1,bi).
  • We have n+1 “K” jobs: K0 = (0,2), Kn = (3c/2, 0), and the intervening Ki = (3c/2, 2).

We have a buffer size of 1, and a deadline of n(2c+2).  Notice that this deadline is both the sum of all processor 1 tasks and the sum of all processor 2 tasks.  So there can be no idle time on either processor if we want to meet the deadline.  This means that K0 has to be scheduled first (since it jumps straight to processor 2) and Kn has to be scheduled last (so processor 1 is doing something while processor 2 is finishing).
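To keep all of the pieces straight, here’s the whole construction as a sketch (my names; I’m assuming c is even so that c/2 and 3c/2 are integers, which we can always force by doubling every element):

```python
def twmi_to_bounded_buffer_flow_shop(A, B):
    """A: n integers, B: 2n integers, from a Three-Way Matching of Integers instance."""
    n = len(A)
    c = (sum(A) + sum(B)) // n                        # the common triple sum
    jobs  = [(c // 2, a + c) for a in A]              # one job per a_i
    jobs += [(1, b) for b in B]                       # one job per b_i
    jobs += [(0, 2)]                                  # K_0
    jobs += [(3 * c // 2, 2) for _ in range(n - 1)]   # K_1 .. K_{n-1}
    jobs += [(3 * c // 2, 0)]                         # K_n
    return jobs, n * (2 * c + 2), 1                   # jobs, deadline, buffer bound
```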

If we have a feasible schedule, it has to start by scheduling 2 B jobs (say, bi and bj) on processor 1 while K0 is on processor 2.  Then, those 2 jobs move to processor 2 and will be done by time 2+bi+bj.  Meanwhile, an A job runs on processor 1 and moves to processor 2 as soon as both B jobs end.  That A job will finish on processor 2 at time 2c+2 (the end of the first “block” of the schedule, exactly when ai+bi+bj = c).  We run one of the K jobs on processor 1 at the same time.

What we get is this (from the paper): 

Notice how everything lines up in a giant chain of corresponding A and B elements, and that we have to choose A and B elements that add up to c to not have any wasted time on a processor.  The actual proof in the paper uses induction on n and is a very nicely presented argument.

If we have a matching of A and B, we can use that matching to build the above schedule.

Difficulty: 6. I think the combination of coming up with the steps above and realizing you need an inductive proof to make it happen makes this a hard problem for students.  Maybe this should be a 7, but I think giving them the Three-Way Matching of Integers problem (instead of Numerical 3DM, which is what G&J say in the Appendix) is a big hint.

Three-Way Matching of Integers

This is an interesting variant on the Three-Dimensional Matching problem that will be used in the next reduction.  Since I think it would make a good easy homework problem, I’m posting it mainly as a reminder to myself to use it.

The problem: Three-Way Matching of Integers.  This problem isn’t in the appendix, but the paper by Papadimitriou and Kanellakis that describes it refers to G&J for the reduction.  I think that it’s because it’s seen as a “trivial” extension of Numerical Three-Dimensional Matching.

The description: Given a set A with n integers {a1..an}, and a set B with 2n integers {b1..b2n}, can we split B into n pairs of elements, where each pair Pi = {bj, bk} is matched with ai, and ai + bj + bk equals the same constant c for all i?  (Notice that c will be 1/n of the sum of all of the elements in A and B.)

Example: I think the problem is simpler than the description.  Suppose A = {1,2,3} and B = {4,5,6,7,8,9}.  Then c will be 15, and P1 = {6,8}, P2 = {4,9}, P3 = {5,7}.

Reduction: This is very close to Numerical Three-Dimensional Matching, so we’ll start there.  So we’re given 3 sets X, Y, and Z, each with n elements, and a bound B.  We can assume that B is 1/n of the sum of all of the elements in X, Y, and Z.

We can’t just set A=X and B=Y∪Z, because we need to make sure that a solution to our problem doesn’t build a Pi that takes 2 elements from Y (or 2 from Z).  The fix is to add the bound B to each element of Y before putting it into our set B.  So now c will be 2B (the old B that a triple adds up to, plus the extra B we added to the element that came from Y).  This guarantees that each Pi will take exactly one element that came from Y and one element that came from Z.
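The whole transformation is tiny.  Here’s a sketch (names are mine):

```python
def n3dm_to_three_way_matching(X, Y, Z, B):
    """X, Y, Z: lists of n integers; B: the Numerical 3DM bound."""
    A = list(X)
    # Shift every Y element up by B.  A pair can now never hold two shifted
    # elements (the triple would exceed 2B), so by counting, every pair
    # takes exactly one element from Y and one from Z.
    new_B = [y + B for y in Y] + list(Z)
    return A, new_B, 2 * B    # the new target sum c is 2B
```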

Difficulty: 3.  I don’t think this is trivial since we have to change the original problem a bit in our reduction, but I think it is a change that’s not hard for people to figure out.  Though you might convince me this should be a 2 instead.

No-Wait Flow-Shop Scheduling

This problem is very similar to the last problem, but the reduction is much harder to understand.

The problem: No-Wait Flow-Shop Scheduling.  This is problem SS16 in the appendix.

The description: Given an instance of Flow-Shop Scheduling, can we create a schedule that meets the deadline with the additional property that no job waits between tasks?  (So, once a job completes a task, it immediately starts another task).

Example: Suppose I have 3 processors and 3 jobs:

Job  P1 time  P2 time  P3 time
1    2        4        3
2    2        2        2
3    2        3        1


A flow-shop schedule that allowed waiting could be:

P1 1 1 2 2 3 3
P2 1 1 1 1 2 2 3 3 3
P3 1 1 1 2 2 3

This works because Job 2 waits for one time unit between finishing on P2 and starting on P3.   In the no-wait instance, we’d have to start job 2 much later, so that there is no interference.  But it turns out that we could also have a no-wait schedule putting a different order of jobs on P1 (Job 2, then Job 1, then Job 3) that is just as fast:

P1 2 2 1 1 X X 3 3
P2 2 2 1 1 1 1 3 3 3
P3 2 2 X X 1 1 1 3

Since jobs can’t be interrupted, the finish time of a job is always its start time plus the sum of the times of all of its tasks.  What we are trying to do is find a permutation of the jobs that allows each job to start at a time such that all of the jobs will be finished by the deadline.

Reduction: I think this is Theorem 6 in the Lenstra, Rinnooy Kan, and Brucker paper.  They reduce from directed Hamiltonian Cycle, but I think the discussion beforehand talking about (directed) TSP is also useful.

The key insight is based on the permutation discussion in the example above.  The TSP problem is also looking for a permutation (of the vertices).  We can make the problems line up by making the cost of each edge (i,j) the value cij: the length of the time interval between when we can start job i and when we can start job j, assuming we start j as soon as possible after i.
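For what it’s worth, that cij value can be computed directly from the two jobs’ task lengths: job j must not reach any processor before job i has left it, so the gap is a max over processors of prefix-sum differences.  Here’s my sketch (my notation, not the paper’s):

```python
from itertools import accumulate

def no_wait_gap(times_i, times_j):
    """Minimum gap between the starts of jobs i and j in a no-wait flow shop.
    times_i, times_j: the two jobs' task lengths, in processor order."""
    prefix_i = list(accumulate(times_i))         # time for i to clear P1..Pk
    prefix_j = [0] + list(accumulate(times_j))   # time after its start for j to reach Pk
    # j reaches processor k at gap + prefix_j[k-1]; i leaves it at prefix_i[k].
    return max(pi - pj for pi, pj in zip(prefix_i, prefix_j))

print(no_wait_gap([2, 4, 3], [2, 2, 2]))   # 5: Job 2 can start 5 after Job 1
```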

The problem is that this is a reduction in the wrong direction (it reduces from our problem to TSP).  But since TSP is proven NP-Complete by reduction from directed HC, we can use a similar construction as we do in that proof to take an HC instance and build the correct jobs in the same fashion.  The paper actually does show you how to make that construction but it’s pretty complicated.

Difficulty: 8.  Not only is the conversion hard, I think the “oh, wait, this is based on a backward reduction” will confuse beginning students. For more advanced students, showing the details of this reduction may help add to a bag of tricks that can be used in similar “I have a reduction in the wrong direction” situations.

Flow-Shop Scheduling

The first couple of problems in this section, at least, are pretty tractable for students.

The problem: Flow-Shop Scheduling.  This is problem SS15 in the appendix.

The description: Exactly the same as Open-Shop scheduling, with the additional restriction that we have to schedule each job on the processors in strictly increasing order of processor number.

(Note that the description in G&J doesn’t specify that subtasks of a job are assigned to specific processors.  But it’s clear that’s how it works in the paper.)

Example:  Here is the set of jobs from last week:

Job  P1 time  P2 time  P3 time
1    1        2        3
2    3        1        2
3    2        3        1

If we have to schedule things in order, we can only schedule one job at time 0, and it has to go on P1.  The greedy choice would be to schedule Job 1 on P1 first, and then schedule it on P2 at time 1, once it’s finished.  Then we can schedule the beginning of Job 3 on P1, and so on.  The overall schedule looks like:

P1 1 3 3 2 2 2
P2 1 1 3 3 3 2
P3 1 1 1 3 2 2

For total time 9.

But, if we schedule Job 2 instead of Job 3 on P1, we get:

P1 1 2 2 2 3 3
P2 1 1 X 2 X 3 3 3
P3 1 1 1 2 2 X 3

For total time 10.

Reduction: The paper by Garey, Johnson, and Sethi uses 3-Partition.  So we’re given a set A with 3N elements, and a bound B.  We’ll build a Flow-Shop problem with 3 processors.  The jobs will be a set of  N+1 “C” jobs and 3N “E” jobs.

Job C0 has tasks of length <0, B, 2B>, job CN has tasks of length <2B, B, 0>, and the intervening Ci jobs have tasks of length <2B, B, 2B>.

Notice that scheduling the C jobs leaves “holes” of length B on processor 2: a Ci job takes 2B time on processor P1, and when it moves to P2, it finishes there in B time, so P2 waits idly for another B time units for the next job on P1 to finish.

Job Ej (where j runs from 1 to 3N) will take <0, aj, 0> time, where aj is the corresponding element of A.  The deadline is (2N+1)B.
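Here’s the whole construction in one place, as a sketch (my naming; each job is a <P1, P2, P3> triple of task lengths):

```python
def three_partition_to_flow_shop(A, B):
    """A: list of 3N elements with sum(A) == N*B; B: the 3-Partition bound."""
    N = len(A) // 3
    c_jobs  = [(0, B, 2 * B)]                            # C_0
    c_jobs += [(2 * B, B, 2 * B) for _ in range(N - 1)]  # C_1 .. C_{N-1}
    c_jobs += [(2 * B, B, 0)]                            # C_N
    e_jobs  = [(0, a, 0) for a in A]                     # one E job per element
    return c_jobs + e_jobs, (2 * N + 1) * B              # jobs, deadline
```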

If a 3-partition exists, then we can split A into N sets of 3 elements, each of which sums exactly to B.  Each of these sets of 3 elements fits exactly into one of the length-B “holes” left by the C jobs.

If a feasible schedule exists, notice that scheduling all of the C jobs as efficiently as possible gets us exactly to the deadline.  So the only thing we can do with the E jobs is put them into the holes on processor 2.  Which gives us a partition.

Difficulty: 5.  This is a pretty easy and straightforward application of 3-partition.  But I think 3-partition is hard to understand, so it’s still not an easy reduction.

Open-Shop Scheduling

On to the next section with a very nice reduction.

The problem: Open-Shop Scheduling.  This is problem SS14 in the appendix.

The description: Given a set of m processors and a set J of jobs.  Each job j has m tasks, t1[j] through tm[j].  Each task ti has a length l(ti) and needs to be run on processor i.   We are not allowed to schedule the same job at the same time on two processors, but the order in which a job visits the processors doesn’t matter.  Given an overall deadline D, can we finish all tasks by the deadline?

Example: The main difference between this problem and the scheduling problem is that each task gets broken into multiple sub-tasks.  I think of it like building a car- there is one “processor” that puts in the engine, one “processor” that does the body welding, one “processor” that puts on the wheels, and so on.  Since it’s all happening to the same car, we can’t be working on the same car in two places at the same time.

Suppose I have 3 jobs and 3 processors:

Job  P1 time  P2 time  P3 time
1    1        2        3
2    3        1        2
3    2        3        1

Obviously, the “right” way to do things is to schedule all of the tasks that have the same length on different processors at the same time.   Everything finishes by time 6.

If we get the first allocation of jobs wrong, though, perhaps by putting Job 1 on P1, Job 2 on P3, and Job 3 on P2, then at time 2 we have nothing to run on P1.  Trying to keep things as compact as possible, we can put Job1 on P3 and Job2 on P1 at time 3 (right after Job 2 finishes).  They both finish at time 5, and Job 3 has no place to go after it finishes at time 3.

Finishing the schedule, you get something like:

P1 J1 X J2 J2 J2 X J3 J3
P2 J3 J3 J3 X X J2 J1 J1
P3 J2 J2 J1 J1 J1 J3 X X

..finishing all of the jobs at time 8.

Reduction: The reduction by Gonzalez and Sahni uses Partition.  This threw me for a bit because G&J say that if you have only 2 processors, the problem is polynomial, and 2 processors seemed like the natural way to represent two partitions.  Suppose we have a partition set S, whose elements sum to T.  Each element in S is represented by 3 jobs, each with 3 tasks, one for each processor.  If the partition element is ai, then the first of its 3 jobs will take time ai on P1, and time 0 on P2 and P3.  The second job will take time ai on P2, and time 0 on P1 and P3.  The third job will take time ai on P3, and time 0 on P1 and P2.   We add one extra job with 3 tasks, each taking time T/2 (the size of each half of the partition of S).

Our deadline D will be 3T/2.  D also happens to be the total length of that last big job, as well as the sum of the lengths of all of the tasks put on each processor.  So the only way to meet the deadline is to have no idle time for any processor.
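Here’s the construction as a sketch (my naming; each job is a triple of task lengths for P1, P2, and P3, and I’m assuming the elements have an even sum T):

```python
def partition_to_open_shop(S):
    """S: list of positive integers with even sum T."""
    T = sum(S)
    jobs = []
    for a in S:
        # Three jobs per element: time a on exactly one processor each.
        jobs += [(a, 0, 0), (0, a, 0), (0, 0, a)]
    jobs.append((T // 2, T // 2, T // 2))   # the one big job
    return jobs, 3 * T // 2                 # jobs, deadline D
```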

If a partition exists, then we can split S into two halves of equal size.  Each processor then runs the tasks from one half, one of the 3 tasks of the big job, and the tasks from the other half, as 3 discrete chunks, staggering the big job’s tasks so that no two processors work on it at the same time.  This gives us a feasible schedule.

If a partition does not exist, then we can prove by contradiction that no feasible schedule exists:  Suppose we did have a schedule that finished by the deadline.  As noted above, there cannot be any idle time in the schedule for any processor, so each processor is running continuously from time 0 to time D.  Since processors can’t be working on the same job at the same time, that means that some processor (WLOG P1) starts the big final job’s first task at time 0 and finishes it at time T/2.  Then some other processor (WLOG P2) continues the big final job’s second task from time T/2 till time T.  Since P2 needs to finish all of its tasks at time 3T/2 and has T total work in other tasks, this must mean that the elements of S were split into a chunk of size T/2 done on P2 before the big job shows up, and a chunk of size T/2 done after the big job finishes there.  This means that we did have a partition of S, a contradiction.

Difficulty: 5.  I really like this reduction, and think it’s very elegant. It may not be obvious to students starting from scratch why you need 3 processors, but it’s pretty obvious why this reduction wouldn’t work with just 2 processors.  Maybe if you give the students a hint that the 2-processor case is polynomial, then they’ll be able to get it.

Scheduling to Minimize Weighted Completion Time

This reduction took me a while to follow because there is a lot of subtle algebra you have to do for yourself.

The problem: Scheduling to minimize weighted completion time.  This is problem SS13 in the appendix.

The description: Given a set T of tasks, where each task t has a length l(t) and a weight w(t), a set of m processors, and a bound K.  Can we find an m-processor schedule for all tasks where each task’s “completion penalty” is its weight multiplied by its completion time, and the total completion penalty for all tasks is K or less?

Example: Suppose I had 2 processors and 4 tasks:

Task  Length  Weight
1     5       4
2     1       6
3     3       3
4     3       3

Scheduling greedily by weight would put task 2 on one processor and task 1 on the other, then run tasks 3 and 4 on the first processor once task 2 finishes.  This has a total penalty of 20 for task 1 + 6 for task 2 + 12 for task 3 + 21 for task 4 = 59.

Scheduling task 2 then 1 on processor 1 and tasks 3 and 4 on processor 2 gives a total penalty of 57.

Reduction: The paper by Lenstra, Rinnooy Kan, and Brucker was hard for me to follow because they are discussing lots of different scheduling variants in the same paper, and do it by parameterizing the kinds of inputs.  I’m pretty sure our problem is Theorem 3b in the paper, which reduces from Partition.  So we start with a set S of elements and build 1 task for each element in S.  The processing time and weight of each task are both the size of its element.  K will be the sum of the products of all pairs of elements (counting each element paired with itself) minus 1/4 of the square of the sum of all of the elements.  We’ll have 2 processors.

They claim that since the processing time and weight are the same for each task, if you are given a set of these tasks on one processor, the order doesn’t matter, and we get the same total penalty no matter what.  It took me a while to buy that, but it comes down to the fact that the penalty on one processor works out to the sum of ai*aj over all pairs of tasks on that processor (with i <= j), which doesn’t depend on the order.

Given a subset T of S, define c to be the amount the sum of the elements in T is “off” from the partition (the difference between the sum of the elements in T and half of the sum of everything in S).  When c=0, T and its complement form a partition of S into equal halves.

The total completion penalty of the two processors is the completion penalty of scheduling all of the tasks on one processor (remember, the order doesn’t matter) minus the sum of the weights of the tasks on one processor times the sum of the weights of the tasks on the other processor.  That works out to K+c².  Which means we have a solution to the scheduling problem if and only if c is 0, which is exactly when we have a partition.
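Since the algebra is subtle, here’s a little script I’d use to convince myself of the K+c² claim on a toy instance (the code and the instance are mine):

```python
from itertools import combinations_with_replacement

def penalty(tasks):
    """Weighted completion time on one processor when weight == length.
    The total is the same for every ordering of the tasks."""
    done, total = 0, 0
    for a in tasks:
        done += a            # completion time of this task
        total += a * done    # weight * completion time
    return total

S = [3, 1, 2, 2]
K = sum(a * b for a, b in combinations_with_replacement(S, 2)) - sum(S) ** 2 // 4
for half1, half2 in [([3, 1], [2, 2]), ([3, 2], [1, 2])]:
    c = sum(half1) - sum(S) // 2
    print(penalty(half1) + penalty(half2), "==", K + c * c)   # 25 == 25, 26 == 26
```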

Difficulty: 6.  I’m not sure how much I’d trust students to be able to come up with the algebra themselves.  Maybe if you tell them to set up the problem with lengths and weights equal, and then make them prove that in that case the order doesn’t matter, it becomes more tractable for them.

Preemptive Scheduling

This is one of those reductions that is easy because you remove everything that makes the problem hard.  It always feels like cheating to do this, but I guess it shows that adding extra generality to a hard problem will never make it easier.

The problem: Preemptive Scheduling.  This is problem SS12 in the appendix.

The description: Given a set T of tasks arranged in a partial order,   each task t has a positive length l(t).  We have a set of m processors and an overall deadline D.  Can we create a preemptive schedule for T that obeys the precedence constraints and meets the overall deadline?  A preemptive schedule allows tasks to be partially run on a processor and then removed, and we can finish it at some later time.

Example: Here is a simple partial order:

The number next to each task is its length.  So task a takes 3 time units, for example.

A two-processor preemptive schedule could be something like P1: (a,a,a,b,b,d) and P2: (d,d,d,c,c,c), getting everything done at time 6.

If we did not allow preemption, we would not be able to fit task d in alongside task a, so we couldn’t do anything on P2 while P1 was running task a.

Reduction: The paper by Ullman that we used for Precedence Constrained Scheduling has the reduction.   He basically notes that Precedence Constrained Scheduling is a special instance of Preemptive Scheduling in which all job lengths are 1.  In that case, no preemption is useful (splitting a unit-length task can never help).  So the reduction is basically “use the exact same problem”.

Difficulty: 2.  I usually reserve Difficulty 1 for “this is the same problem”, but here, there is a little bit of insight to convince yourself that you can interpret the problem this way.