Open-Shop Scheduling

On to the next section with a very nice reduction.

The problem: Open-Shop Scheduling.  This is problem SS14 in the appendix.

The description: We are given a set of m processors and a set J of jobs.  Each job j has m tasks, t1[j] through tm[j].  Each task ti[j] has a length l(ti[j]) and needs to be run on processor i.   We are not allowed to work on the same job on two processors at the same time, but the tasks of a job can be done in any order.  Given an overall deadline D, can we finish all tasks by the deadline?
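To keep the definitions straight, here's a minimal Python sketch of the decision problem (my own encoding, not anything from G&J): an instance is a jobs-by-processors matrix of task lengths, a candidate schedule is a matrix of start times, and we check the two no-overlap rules plus the deadline.

```python
def meets_deadline(lengths, starts, D):
    """lengths[j][i] = length of job j's task on processor i.
       starts[j][i]  = proposed start time of that task.
       Returns True if the schedule is feasible and finishes by D."""
    jobs = len(lengths)
    procs = len(lengths[0])

    def overlaps(s1, l1, s2, l2):
        # Two tasks conflict only if both are non-empty and their intervals intersect.
        return l1 > 0 and l2 > 0 and s1 < s2 + l2 and s2 < s1 + l1

    for j in range(jobs):
        for i in range(procs):
            # Every task must finish by the deadline.
            if starts[j][i] + lengths[j][i] > D:
                return False
            # A job may not run on two processors at the same time.
            for i2 in range(i + 1, procs):
                if overlaps(starts[j][i], lengths[j][i],
                            starts[j][i2], lengths[j][i2]):
                    return False
            # A processor may not run two jobs at the same time.
            for j2 in range(j + 1, jobs):
                if overlaps(starts[j][i], lengths[j][i],
                            starts[j2][i], lengths[j2][i]):
                    return False
    return True
```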

Example: The main difference between this problem and the scheduling problem is that each job gets broken into multiple tasks.  I think of it like building a car: there is one “processor” that puts in the engine, one “processor” that does the body welding, one “processor” that puts on the wheels, and so on.  Since it’s all happening to the same car, we can’t be working on the same car in two places at the same time.

Suppose I have 3 jobs and 3 processors:

Job   P1 time   P2 time   P3 time
 1       1         2         3
 2       3         1         2
 3       2         3         1

Obviously, the “right” way to do things is to group tasks of the same length: run all of the length-1 tasks at the same time (each on a different processor), then all of the length-2 tasks, then all of the length-3 tasks.   Everything finishes by time 6.
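Here's that schedule in the same encoding, as a quick usage example of the meets_deadline sketch above (the start times are my own worked-out numbers):

```python
# The 3-job, 3-processor instance from the table above.
lengths = [[1, 2, 3],   # Job 1's task lengths on P1, P2, P3
           [3, 1, 2],   # Job 2
           [2, 3, 1]]   # Job 3

# Length-1 tasks run in [0,1), length-2 tasks in [1,3), length-3 tasks in [3,6).
starts = [[0, 1, 3],    # Job 1: P1 at 0, P2 at 1, P3 at 3
          [3, 0, 1],    # Job 2: P1 at 3, P2 at 0, P3 at 1
          [1, 3, 0]]    # Job 3: P1 at 1, P2 at 3, P3 at 0

print(meets_deadline(lengths, starts, 6))   # True: everything finishes by time 6
```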

If we get the first allocation of jobs wrong, though, perhaps by putting Job 1 on P1, Job 2 on P3, and Job 3 on P2, then P1 sits idle from time 1 to time 2: the jobs it still needs to run (Jobs 2 and 3) are busy on other processors.  Trying to keep things as compact as possible, we can put Job 1 on P3 and Job 2 on P1 at time 2 (right after Job 2 finishes on P3).  They both finish at time 5, and Job 3 has no place to go after it finishes on P2 at time 3.

Finishing the schedule, you get something like:

Time   1   2   3   4   5   6   7   8
P1    J1   X  J2  J2  J2   X  J3  J3
P2    J3  J3  J3   X   X  J2  J1  J1
P3    J2  J2  J1  J1  J1  J3   X   X

…finishing all of the jobs at time 8.
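For comparison, here are the start times I read off the grid above, again reusing lengths and meets_deadline from the earlier sketches: the schedule is feasible with deadline 8 but not with deadline 6.

```python
# The schedule from the grid above, written as start times.
starts_bad = [[0, 6, 2],    # Job 1: P1 at 0, P2 at 6, P3 at 2
              [2, 5, 0],    # Job 2: P1 at 2, P2 at 5, P3 at 0
              [6, 0, 5]]    # Job 3: P1 at 6, P2 at 0, P3 at 5

print(meets_deadline(lengths, starts_bad, 8))   # True: feasible by time 8...
print(meets_deadline(lengths, starts_bad, 6))   # ...but not by time 6
```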

Reduction: The reduction by Gonzalez and Sahni uses Partition.  This threw me for a bit because G&J say that if you have only 2 processors, the problem is polynomial, and 2 processors seemed like the natural way to represent the two halves of a partition.  Suppose we have a partition instance S, whose elements sum to T.  Each element in S is represented by 3 jobs, each with 3 tasks, one per processor.  If the partition element is ai, then the first of these jobs has a task that takes time ai on P1, and tasks of time 0 on P2 and P3.  The second job takes time ai on P2, and time 0 on P1 and P3.  The third job takes time ai on P3, and time 0 on P1 and P2.   We add one extra job with 3 tasks, each taking time T/2 (which is the target size of each half of the partition of S).

Our deadline D will be 3T/2.  D also happens to be the total length of the last job (its 3 tasks of length T/2 each), as well as the total length of the tasks assigned to each processor (T from the element jobs plus T/2 from the last job).  So the only way to meet the deadline is to have no idle time on any processor.
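Here's a sketch of the construction as I understand it (my own function name and encoding, not Gonzalez and Sahni's notation).  You can see the 3T/2 load per processor directly from the matrix it builds.

```python
def partition_to_open_shop(S):
    """S: list of partition elements summing to T.
       Returns (lengths, D), where lengths[j][i] is job j's task length on
       processor i and D is the deadline 3T/2."""
    T = sum(S)
    lengths = []
    for a in S:
        # Each element a becomes 3 jobs, each with one nonzero task of
        # length a on exactly one of the three processors.
        lengths.append([a, 0, 0])
        lengths.append([0, a, 0])
        lengths.append([0, 0, a])
    # One extra "big" job whose task takes T/2 on every processor.
    lengths.append([T / 2, T / 2, T / 2])
    D = 3 * T / 2
    return lengths, D
```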

If a partition exists, then we can split S into two halves A and B, each summing to T/2.  Each processor's schedule then consists of 3 discrete chunks of length T/2: one of the last job's 3 tasks, the element jobs for the elements of A, and the element jobs for the elements of B.  Staggering the last job's tasks (first chunk on P1, second chunk on P2, third chunk on P3) keeps it from running in two places at once, and since the element jobs on different processors are all distinct jobs, nothing else can conflict either.  That gives us a feasible schedule that finishes exactly at D.
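To make the forward direction concrete, here's a sketch of that chunked schedule, reusing partition_to_open_shop and meets_deadline from the sketches above (the split of S into halves A and B is assumed to be given, and the example instance at the bottom is my own).

```python
def schedule_from_partition(S, A):
    """A: set of indices into S forming one half of the partition.
       Returns a matrix of start times for the constructed instance."""
    T = sum(S)
    half = T / 2
    lengths, _ = partition_to_open_shop(S)
    # Zero-length tasks can "start" at time 0; they never conflict with anything.
    starts = [[0, 0, 0] for _ in lengths]

    for i in range(3):                     # processor i
        # The big job's task on processor i runs in the i-th chunk of length T/2.
        starts[-1][i] = i * half
        # The other two chunks on processor i hold halves A and B.
        slot_A, slot_B = [s * half for s in range(3) if s != i]
        for k, a in enumerate(S):
            j = 3 * k + i                  # the element job whose nonzero task is on processor i
            if k in A:
                starts[j][i] = slot_A
                slot_A += a
            else:
                starts[j][i] = slot_B
                slot_B += a
    return starts

# Hypothetical example: S = [1, 2, 3, 4] partitions as {1, 4} vs {2, 3}.
S = [1, 2, 3, 4]
lengths, D = partition_to_open_shop(S)
print(meets_deadline(lengths, schedule_from_partition(S, {0, 3}), D))   # True
```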

If a partition does not exist, then we can prove by contradiction that no feasible schedule exists.  Suppose we did have a schedule that finished by the deadline.  As noted above, there can't be any idle time on any processor, so each processor runs continuously from time 0 to time D.  The big final job's three tasks total 3T/2 = D and can't overlap each other, so they must tile the whole interval back-to-back: some processor (WLOG P1) runs its task of the big job from time 0 to time T/2, and some other processor (WLOG P2) runs its task of the big job from time T/2 to time T.  Since P2 must finish all of its work by time 3T/2 and has T total work from the element jobs, those element tasks must split into a chunk of total size T/2 done before the big job arrives and a chunk of total size T/2 done after it finishes.  The elements corresponding to that first chunk sum to T/2, so we did have a partition of S, a contradiction.
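And for the other direction, a tiny sanity check on a hypothetical no-instance of Partition (S = [1, 2, 4] is my own example): no subset sums to T/2, so by the argument above the constructed open-shop instance can't be scheduled by D either.

```python
# S = [1, 2, 4] sums to T = 7, and no subset sums to T/2, so no schedule
# for the constructed open-shop instance can finish by D = 3T/2.
from itertools import combinations

S = [1, 2, 4]
T = sum(S)
has_partition = any(2 * sum(c) == T
                    for r in range(len(S) + 1)
                    for c in combinations(S, r))
print(has_partition)   # False
```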

Difficulty: 5.  I really like this reduction, and I think it's very elegant.  It may not be obvious to students starting from scratch that you need 3 processors, but once you see the reduction it's pretty clear why it wouldn't work with just 2.  Maybe if you give the students a hint that the 2-processor case is polynomial, they'll be able to get it.
