Tag Archives: Difficulty 6

Two-Processor Flow-Shop With Bounded Buffer

Here’s the reduction using the problem from last week.

The problem: Two-processor Flow-Shop with Bounded Buffer.  This is problem SS17 in the Appendix.

The description: We’re given an instance of Flow-Shop Scheduling with the number of processors fixed at 2.  We’re also given a non-negative integer buffer bound B.  Can we find a flow-shop schedule that meets the deadline such that the number of jobs that have finished their first task but not yet started their second task never exceeds B?

Example: Suppose I have three jobs:

Job   P1 Time   P2 Time
 1       1         3
 2       1         3
 3       4         1

For ease of discussion in the reduction, we will write these 2 times as tuples. So job 1 will be written as (1,3).

One possible schedule would be:

Time  1  2  3  4  5  6  7  8
P1    1  2  3  3  3  3
P2       1  1  1  2  2  2  3

This uses a buffer of size 1 (Job 2 finishes its first task at time 2 and starts its second task at time 5, and Job 3 finishes its first task at time 6 and starts its second task at time 8), and finishes all three jobs at time 8.  If we have a buffer size of 0, this wouldn’t be allowed.  The best we can do is:

Time  1  2  3  4  5  6  7  8  9
P1    1  X  X  2  3  3  3  3
P2       1  1  1  2  2  2  X  3

…finishing everything by time 9.
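
To make the schedule and buffer rules concrete, here's a small checker (my own sketch, not from the post or the paper) that validates an explicit two-processor flow-shop schedule against a buffer bound and returns its makespan.  Start times are 0-indexed, so the "time 1" slot above is the interval [0, 1).

```python
def check_schedule(jobs, s1, s2, buffer_bound):
    """Validate a two-processor flow-shop schedule and return its makespan.

    jobs: list of (p1, p2) task lengths.
    s1, s2: start time of each job's first and second task."""
    n = len(jobs)
    finish1 = [s1[i] + jobs[i][0] for i in range(n)]
    # A job's second task can only start once its first task is done.
    assert all(s2[i] >= finish1[i] for i in range(n))
    # Neither processor may run two tasks at once.
    for starts, which in ((s1, 0), (s2, 1)):
        ivals = sorted((starts[i], starts[i] + jobs[i][which]) for i in range(n))
        assert all(a_end <= b_start
                   for (_, a_end), (b_start, _) in zip(ivals, ivals[1:]))
    # A job occupies the buffer from finishing task 1 until starting task 2;
    # the count can only rise when some job finishes task 1, so checking at
    # those instants catches the peak.
    for t in finish1:
        in_buffer = sum(1 for i in range(n) if finish1[i] <= t < s2[i])
        assert in_buffer <= buffer_bound
    return max(s2[i] + jobs[i][1] for i in range(n))

jobs = [(1, 3), (1, 3), (4, 1)]
print(check_schedule(jobs, [0, 1, 2], [1, 4, 7], buffer_bound=1))  # 8
print(check_schedule(jobs, [0, 3, 4], [1, 4, 8], buffer_bound=0))  # 9
```

The two calls encode the two example schedules above, and they finish at times 8 and 9 respectively, matching the discussion.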

Reduction: The paper by Papadimitriou and Kanellakis uses the Three-Way Matching of Integers problem.  So we have a set A with n integers {a1..an} and a set B with 2n integers {b1..b2n}.  Let c be the desired sum of each triple (one element of A plus two elements of B); the goal of that problem is to split the elements into n such triples.  We’re going to create 4n+1 jobs:

  • Each element ai creates a job (c/2, ai+c).
  • Each element bi creates a job (1,bi).
  • We have n+1 “K” jobs: K0 = (0, 2), Kn = (3c/2, 0), and the intervening K1 through Kn-1 are each (3c/2, 2).

We have a buffer size of 1, and a deadline of n(2c+2).  Notice that this deadline is both the sum of all processor 1 tasks and the sum of all processor 2 tasks, so neither processor can ever sit idle if we want to meet the deadline.  This means that K0 has to be scheduled first (since it jumps straight to processor 2) and Kn has to be scheduled last (so processor 1 is doing something while processor 2 is finishing).
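
Here's a quick sanity check of that arithmetic on a small, hypothetical matching instance (the numbers are my own, not from the paper): build the 4n+1 jobs and confirm both processors' totals equal the deadline.

```python
# Hypothetical Three-Way Matching instance (my own numbers):
# A = {4, 5}, B = {2, 4, 1, 4}, n = 2, c = 10; triples (4,2,4) and (5,1,4).
n, c = 2, 10
A = [4, 5]
B = [2, 4, 1, 4]

jobs = [(c // 2, a + c) for a in A]               # one job per element of A
jobs += [(1, b) for b in B]                       # one job per element of B
jobs += [(0, 2)]                                  # K0
jobs += [(3 * c // 2, 2) for _ in range(n - 1)]   # K1 .. K(n-1)
jobs += [(3 * c // 2, 0)]                         # Kn

deadline = n * (2 * c + 2)
p1_total = sum(p1 for p1, _ in jobs)              # all processor 1 work
p2_total = sum(p2 for _, p2 in jobs)              # all processor 2 work
print(p1_total, p2_total, deadline)  # 44 44 44
```

All three quantities come out equal, which is why any idle time at all blows the deadline.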

If we have a feasible schedule, it has to start by scheduling two B jobs (say, bi and bj) on processor 1 while K0 runs on processor 2.  Then those two jobs move to processor 2 and will be done by time 2+bi+bj.  Meanwhile, an A job runs on processor 1 and moves to processor 2 as soon as both B jobs end.  The A job will finish on processor 2 at time 2c+2 (its second task takes ai+c time, it starts at time 2+bi+bj, and ai+bi+bj = c).  We run one of the K jobs on processor 1 at the same time.

What we get is the schedule pictured in the paper’s figure.

Notice how everything lines up in a giant chain of corresponding A and B elements, and that we have to choose A and B elements that add up to c to not have any wasted time on a processor.  The actual proof in the paper uses induction on N and is a very nicely presented argument.

If we have a matching of A and B, we can use that matching to build the above schedule.
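
Following that recipe on a tiny hypothetical matching instance (my own numbers, chosen so that each ai ≤ c/2 and the chain lines up), we can simulate the resulting job order and see that it exactly meets the deadline.  This sketch only verifies the makespan of a FIFO flow-shop schedule; it doesn't re-check the buffer bound.

```python
def flow_shop_finish(sequence):
    """FIFO two-processor flow shop: processor 2 runs jobs in the same
    order they come off processor 1, each as early as possible."""
    t1 = t2 = 0
    for p1, p2 in sequence:
        t1 += p1                 # job finishes its first task at t1
        t2 = max(t2, t1) + p2    # then runs its second task on processor 2
    return t2

# Hypothetical instance: c = 10, triples (4,2,4) and (5,1,4),
# i.e. A = {4, 5} and B = {2, 4, 1, 4}.
c, triples = 10, [(4, 2, 4), (5, 1, 4)]
n = len(triples)

sequence = [(0, 2)]                                   # K0 jumps straight to processor 2
for i, (a, b1, b2) in enumerate(triples):
    sequence += [(1, b1), (1, b2), (c // 2, a + c)]   # the block for this triple
    k = (3 * c // 2, 2) if i < n - 1 else (3 * c // 2, 0)  # Ki, or Kn at the end
    sequence.append(k)

print(flow_shop_finish(sequence), n * (2 * c + 2))  # 44 44
```

Each triple contributes one block of length 2c+2, so a valid matching makes the chain close up with no idle time.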

Difficulty: 6. I think the combination of coming up with the steps above and realizing you need an inductive proof to make it happen makes this a hard problem for students.  Maybe this should be a 7, but I think giving them the Three-Way Matching of Integers problem (instead of Numerical 3DM, which is what G&J say in the Appendix) is a big hint.

Scheduling to Minimize Weighted Completion Time

This reduction took me a while to follow because there is a lot of subtle algebra you have to do for yourself.

The problem: Scheduling to minimize weighted completion time.  This is problem SS13 in the appendix.

The description: Given a set T of tasks where each task t has a length l(t) and a weight w(t), a set of m processors, and a bound K.  Can we find an m-processor schedule for all tasks where each task’s “completion penalty” is its weight multiplied by its completion time, and the total completion penalty for all tasks is K or less?

Example: Suppose I had 2 processors and 4 tasks:

Task   Length   Weight
  1       5        4
  2       1        6
  3       3        3
  4       3        3

Scheduling greedily by weight (task 2, then 1, then 3, then 4, each going to the first free processor) puts task 1 on one processor and tasks 2, 3, and 4 on the other.  This has a total penalty of 6 for task 2 + 20 for task 1 + 12 for task 3 + 21 for task 4 = 59.

Scheduling task 2 then 1 on processor 1 and tasks 3 and 4 on processor 2 gives a total penalty of 57.
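
To check that arithmetic, here's a quick evaluator (my own sketch) for the two schedules above:

```python
def total_weighted_completion(machines):
    """machines: one list of (length, weight) tasks per processor,
    in the order they run."""
    total = 0
    for seq in machines:
        t = 0
        for length, weight in seq:
            t += length            # this task's completion time
            total += weight * t    # its completion penalty
    return total

tasks = {1: (5, 4), 2: (1, 6), 3: (3, 3), 4: (3, 3)}

# Greedy by weight: task 1 alone; tasks 2, 3, 4 in weight order together.
greedy = [[tasks[1]], [tasks[2], tasks[3], tasks[4]]]
# Better: task 2 then 1 on one processor, tasks 3 and 4 on the other.
better = [[tasks[2], tasks[1]], [tasks[3], tasks[4]]]
print(total_weighted_completion(greedy), total_weighted_completion(better))  # 59 57
```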

Reduction: The paper by Lenstra, Rinnooy Kan, and Brucker was hard for me to follow because they discuss lots of different scheduling variants in the same paper, and do it by parameterizing the kinds of inputs.  I’m pretty sure our problem is Theorem 3b in the paper, which reduces from Partition.  So we start with a set T of elements and build one task for each element of T.  The processing time and weight of each task are both the size of its element.  K will be the sum of the products of all pairs of elements (counting each element paired with itself), minus one quarter of the square of the sum of all of the elements.  We’ll have 2 processors.

They claim that since each task’s processing time and weight are equal, if you are given a set of these tasks on one processor, the order doesn’t matter, and we get the same total penalty no matter what.  It took me a while to buy that, but it comes down to the fact that swapping two adjacent tasks a and b changes the total penalty by w(b)·l(a) – w(a)·l(b), which is zero when every weight equals its length.
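
A quick brute-force check of that claim (my own sketch, with made-up task sizes): when every task's weight equals its length, every ordering of one processor's tasks costs the same.

```python
from itertools import permutations

def one_machine_cost(seq):
    """Total weighted completion time of one processor's task sequence."""
    t, cost = 0, 0
    for length, weight in seq:
        t += length
        cost += weight * t
    return cost

tasks = [(3, 3), (5, 5), (2, 2), (7, 7)]  # weight == length for every task
costs = {one_machine_cost(p) for p in permutations(tasks)}
print(costs)  # a single distinct value across all 24 orderings
```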

Given a subset T' of T, define c to be the amount T' is “off” from a perfect partition (the difference between the sum of the elements in T' and half of the sum of everything in T).  When c = 0, T' and its complement form a partition of T.

The completion penalty of the whole schedule is the completion penalty of all of the tasks as if they ran on one processor (remember, the order doesn’t matter), minus the sum of the weights of the tasks on one processor times the sum of the weights of the tasks on the other.  That works out to K+c2.  Which means we have a schedule with penalty K or less if and only if c is 0, which is exactly when we have a partition.
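
Here's a brute-force verification of that algebra on a small, hypothetical Partition instance (my own numbers; the formula for K is my reading of the reduction: the sum of products over all pairs, each element also paired with itself, minus a quarter of the square of the total sum).

```python
def one_machine_cost(vals):
    # Order is irrelevant when length == weight, so any order works here.
    t, cost = 0, 0
    for v in vals:
        t += v
        cost += v * t
    return cost

elems = [3, 1, 1, 2, 2, 1]   # hypothetical instance; total sum is 10
S = sum(elems)
pairs = sum(elems[i] * elems[j]
            for i in range(len(elems)) for j in range(i, len(elems)))
K = pairs - S * S // 4       # the bound from the reduction

# Every split of the elements across the two processors costs K + c^2.
for mask in range(2 ** len(elems)):
    side1 = [elems[i] for i in range(len(elems)) if mask >> i & 1]
    side2 = [elems[i] for i in range(len(elems)) if not mask >> i & 1]
    c = sum(side1) - S // 2  # how far side 1 is from half the total
    assert one_machine_cost(side1) + one_machine_cost(side2) == K + c * c
print(K)  # the cost hits K exactly when c == 0, i.e. when we have a partition
```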

Difficulty: 6.  I’m not sure how much I’d trust students to be able to come up with the algebra themselves.  Maybe if you tell them to set up the problem with lengths and weights equal, and then make them prove that in that case the order doesn’t matter, it becomes more tractable for them.
