Job-Shop Scheduling

Are you sick of all of these flow-shop problems where we have to split up the tasks of a job into exactly one thing on each processor, progressing through the processors in the same order?  Then today’s problem is for you!

The problem: Job-Shop Scheduling.  This is problem SS18 in the appendix.

Description: We’re given a set of m processors and a set J of jobs.  Each job j in J consists of a chain of tasks, and different jobs can have different numbers of tasks.  The tasks of a job need to be processed in order, and each task comes with a processor assignment.  A job may visit the same processor multiple times and/or skip some processors entirely.  The only requirement is that two consecutive tasks of a job must be on different processors (otherwise, we could just merge them into one task).  Given a deadline D, can we schedule all jobs so that every task finishes by time D?
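
To make the definition concrete, here is a minimal sketch (in Python, using my own made-up representation, not anything from Garey & Johnson): a job is just an ordered list of (processor, duration) pairs, and the one structural rule is that consecutive pairs name different processors.

```python
# A job is an ordered list of (processor, duration) tasks.
# The only structural requirement: consecutive tasks use different processors.

def is_valid_job(tasks):
    return all(p != q for (p, _), (q, _) in zip(tasks, tasks[1:]))

# Job 1 from the example below: 1 time on P1, then 2 on P2, then 3 on P3, then 2 on P2
job1 = [(1, 1), (2, 2), (3, 3), (2, 2)]
print(is_valid_job(job1))              # True
print(is_valid_job([(1, 2), (1, 3)]))  # False -- two consecutive tasks on P1
```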

Example: Suppose we have three processors and 4 jobs:

  1. Job 1: 1 time on P1, then 2 time on P2, then 3 time on P3, then 2 time on P2
  2. Job 2: 2 time on P1, then 3 time on P2, then 2 time on P1
  3. Job 3: 1 time on P2, then 2 time on P3, then 3 time on P1, then 2 time on P3.
  4. Job 4: 1 time on P3, and that’s it.

Then we can get everything done by time 8 with the following schedule (each column is one unit of time, and the number in each cell is the job running on that processor during that slot):

Time 1 2 3 4 5 6 7 8
P1   1 2 2 3 3 3 2 2
P2   3 1 1 2 2 2 1 1
P3   4 3 3 1 1 1 3 3
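
Here is a small sketch (my own encoding, nothing official) that writes down the four jobs and the slot table above and checks that the schedule runs each job’s tasks in order, on the right processors, finishing by time 8:

```python
jobs = {
    1: [(1, 1), (2, 2), (3, 3), (2, 2)],
    2: [(1, 2), (2, 3), (1, 2)],
    3: [(2, 1), (3, 2), (1, 3), (3, 2)],
    4: [(3, 1)],
}
schedule = {  # processor -> job occupying each of the 8 unit time slots
    1: [1, 2, 2, 3, 3, 3, 2, 2],
    2: [3, 1, 1, 2, 2, 2, 1, 1],
    3: [4, 3, 3, 1, 1, 1, 3, 3],
}

def check(jobs, schedule, deadline):
    for j, tasks in jobs.items():
        t = 0  # earliest slot where job j's next task may start
        for proc, length in tasks:
            # slots on this processor assigned to job j, at or after time t
            slots = [s for s in range(deadline) if s >= t and schedule[proc][s] == j]
            run = slots[:length]
            # the task needs `length` consecutive slots on its processor
            if len(run) < length or run != list(range(run[0], run[0] + length)):
                return False
            t = run[-1] + 1  # the next task can only start after this one finishes
    # every occupied slot must be accounted for by exactly the jobs' task lengths
    total = sum(length for tasks in jobs.values() for _, length in tasks)
    used = sum(1 for row in schedule.values() for s in row if s)
    return total == used

print(check(jobs, schedule, 8))  # True
```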

Reduction: Garey, Johnson, and Sethi use 3-Partition.  So we’re given a set A of 3n elements and a bound B.  The Job-Shop instance will have 2 processors and 3n+1 jobs, labeled C0 through C3n.  Job C0 has 2n tasks, alternating B time on each processor (so B time on P1, then another B time on P2, then another B time on P1, and so on).  Each of the remaining jobs Ci corresponds to an element ai of A and gets 0 time on P1 and ai time on P2.  The deadline is 2nB, which is exactly the time it takes to schedule all of C0.  The paper leaves out the “straightforward” task of showing this reduction actually solves the problem.  I believe them that it is straightforward, but let’s take a closer look:
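
First, here is the construction spelled out as a sketch (the function and variable names are mine, and I’m assuming the 0-time P1 task comes before the ai-time P2 task, which doesn’t matter either way):

```python
def build_jobshop(a, B):
    """Given a 3-Partition instance (3n elements a and bound B), build the
    2-processor Job-Shop instance and its deadline."""
    n = len(a) // 3
    # C0: 2n tasks alternating B time on P1 and P2, starting on P1
    c0 = [(1 if i % 2 == 0 else 2, B) for i in range(2 * n)]
    # C1..C3n: each element job takes 0 time on P1, then a_i time on P2
    element_jobs = [[(1, 0), (2, ai)] for ai in a]
    deadline = 2 * n * B
    return [c0] + element_jobs, deadline

# a toy 3-Partition instance with n = 2 and B = 60
jobs, D = build_jobshop([19, 20, 21, 18, 20, 22], 60)
print(D)        # 240
print(jobs[0])  # [(1, 60), (2, 60), (1, 60), (2, 60)]
```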

If A has a 3-Partition, then we have a collection of n sets of 3 elements each, all adding up to B.  Each one of these sets can fit on P2 while C0 is running on P1 (since C0 uses P1 for exactly B time during each of its P1 tasks), so we can fit all of the element jobs into the “holes” left by C0.
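
As a sketch of that direction (again with names of my own), given the triples of a 3-Partition we can write the whole schedule down explicitly: C0 runs back-to-back with no idle time, and triple i goes into the i-th hole on P2.  The 0-time P1 tasks of the element jobs are omitted here since they take no time.

```python
def schedule_from_partition(triples, B):
    """triples: a list of 3-element tuples, each summing to B.
    Returns (job, processor, start, length) entries for a deadline-2nB schedule."""
    n = len(triples)
    schedule = []
    # C0 runs with no idle time, alternating P1 / P2
    for i in range(2 * n):
        schedule.append(("C0", 1 if i % 2 == 0 else 2, i * B, B))
    # while C0 is on P1 during [2iB, (2i+1)B], P2 is free: pack triple i there
    for i, triple in enumerate(triples):
        t = 2 * i * B
        for a in triple:
            schedule.append((f"element {a}", 2, t, a))
            t += a
    return schedule

for entry in schedule_from_partition([(19, 20, 21), (18, 20, 22)], 60):
    print(entry)
```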

If we have a schedule that meets the deadline, we know that C0 must be scheduled continuously with no idle time, which leaves n B-sized holes on P2.  For all of the element jobs to be scheduled by the deadline, they have to fit exactly into those holes (their sizes total nB, the same as the total size of the holes), so each hole holds a set of elements summing to exactly B, which gives us a partition of A.

Difficulty: 5.  The hard part, of course, is coming up with the conversion.  3-Partition isn’t the easiest problem to work with, but its nice features (especially that you know any subset of A that adds up to B has exactly 3 elements) make it a good fit here.

I wonder if it’s necessary to have C0 spend B time on both processors.  I understand the need to have C0 running on P2 to separate the partition sets, but wouldn’t it also work (and to my mind, be easier) if C0 took just 1 time (instead of B) each time it ran on P2?

I also wonder if there isn’t an easier reduction that goes straight from Flow-Shop Scheduling.  The reduction can take the Flow-Shop instance and map it directly to the Job-Shop instance (just because the Job-Shop problem allows you to reuse, reorder, or skip processors doesn’t mean you’re required to), as sketched below.  Unless I’m missing something, this is a difficulty 2 reduction.  I think the reason they do the 3-Partition reduction in the paper is that they want to show the problem is NP-Complete even when there are only 2 processors, and the Flow-Shop Scheduling reduction uses 3 processors.
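
The mapping I have in mind is essentially the identity, something like this sketch (my own code, not from the paper): a Flow-Shop job is a processing time on each of the m processors in order, which is already a legal Job-Shop job with one task per processor, visited in order 1 through m, so consecutive tasks automatically land on different processors.

```python
def flowshop_to_jobshop(flow_jobs):
    """flow_jobs[j][i] = time job j needs on processor i (processors in order 1..m).
    Returns the same jobs as Job-Shop task chains: (processor, duration) pairs."""
    return [[(i + 1, t) for i, t in enumerate(times)] for times in flow_jobs]

# e.g. a 3-processor Flow-Shop instance with two jobs
print(flowshop_to_jobshop([[2, 1, 3], [1, 4, 2]]))
# [[(1, 2), (2, 1), (3, 3)], [(1, 1), (2, 4), (3, 2)]]
```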
