This reduction took me a while to follow because there is a lot of subtle algebra you have to do for yourself.

**The problem: **Scheduling to minimize weighted completion time. This is problem SS13 in the appendix.

**The description: **Given a set T of tasks where each task t has a length l(t) and weight w(t), a set of m processors, and a bound K. Can we find an m-processor schedule for all tasks where each task's "completion penalty" is its weight multiplied by its completion time, and the total completion penalty for all tasks is K or less?

**Example: **Suppose I had 2 processors and 4 tasks:

| Task | Length | Weight |
|------|--------|--------|
| 1    | 5      | 4      |
| 2    | 1      | 6      |
| 3    | 3      | 3      |
| 4    | 3      | 3      |

Scheduling greedily by weight would put task 2 on one processor and task 1 on the other, then tasks 3 and 4 on the first processor after task 2 finishes. This has a total penalty of 20 for task 1 + 6 for task 2 + 12 for task 3 + 21 for task 4 = 59

Scheduling task 2 then 1 on processor 1 and tasks 3 and 4 on processor 2 gives a total penalty of 57.
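The penalty arithmetic above is easy to check mechanically. Here is a minimal sketch (the `total_penalty` helper is my own, not from the problem statement) that computes the total weighted completion time of both schedules from the example:

```python
# Lengths and weights from the example table above.
length = {1: 5, 2: 1, 3: 3, 4: 3}
weight = {1: 4, 2: 6, 3: 3, 4: 3}

def total_penalty(schedule):
    """schedule: a list of per-processor task orderings."""
    penalty = 0
    for tasks in schedule:
        t = 0  # current time on this processor
        for task in tasks:
            t += length[task]           # task finishes at time t
            penalty += weight[task] * t  # weight * completion time
    return penalty

print(total_penalty([[1], [2, 3, 4]]))  # first schedule: 59
print(total_penalty([[2, 1], [3, 4]]))  # second schedule: 57
```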

**Reduction: **The paper by Lenstra, Rinnooy Kan, and Brucker was hard for me to follow because they are discussing lots of different scheduling variants in the same paper, and do it by parameterizing the kinds of inputs. I'm pretty sure our problem is Theorem 3b in the paper, which reduces from Partition. So we start with a Partition instance: a set S of elements, each with a size. We build 1 task for each element of S, and make both the processing time and the weight of the task equal to the size of the element. K will be the sum of the products of all pairs of elements (counting each element paired with itself), minus 1/4 of the square of the sum of all of the elements. We'll have 2 processors.

They claim that since the processing time and weight are equal for each task, if you are given a set of these tasks on one processor, the order doesn't matter, and we get the same total penalty no matter what. It took me a while to buy that, but it comes down to the fact that swapping two adjacent tasks of sizes a and b changes the total penalty by ab − ba = 0, so any order can be turned into any other without changing the total.
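The order-independence claim can be checked by brute force. A small sketch, using an arbitrary list of sizes (my choice, not from the paper): every permutation of equal-weight-and-length tasks on one machine gives the same total.

```python
from itertools import permutations

# Arbitrary example sizes; each task has weight == length == its size.
sizes = [5, 1, 3, 3]

def one_machine_penalty(order):
    t, penalty = 0, 0
    for s in order:
        t += s            # task of size s finishes at time t
        penalty += s * t  # weight equals length, so penalty is s * t
    return penalty

totals = {one_machine_penalty(p) for p in permutations(sizes)}
print(totals)  # a set with a single value: every ordering ties
```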

Given a subset T of S, define c to be the amount the sum of the elements in T is "off" from a perfect partition (the difference between the sum of the elements in T and half of the sum of everything in S). When c=0, T and its complement partition S into two equal halves.

The completion penalty of the whole schedule is the completion penalty of putting all of the tasks on one processor (remember, the order doesn't matter) minus (the sum of the weights of the tasks on one processor) * (the sum of the weights of the tasks on the other processor). That works out to K+c^{2}. Since c^{2} is never negative, we have a schedule with total penalty K or less if and only if c is 0, which is exactly when we have a partition.
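This algebra can also be sanity-checked exhaustively. A sketch on a small, arbitrary Partition instance (my example values): for every way of splitting the tasks between the two processors, the total penalty comes out to exactly K + c^2.

```python
# Arbitrary Partition instance; each element becomes a task with
# weight = length = the element's size.
S = [3, 1, 4, 1, 5, 9]
A = sum(S)
# K = sum of products over all pairs i <= j, minus (1/4) * (sum)^2
K = sum(S[i] * S[j] for i in range(len(S))
        for j in range(i, len(S))) - A * A / 4

def side_penalty(tasks):
    t, penalty = 0, 0
    for s in tasks:   # order is irrelevant since weight == length
        t += s
        penalty += s * t
    return penalty

for bits in range(2 ** len(S)):
    T = [s for i, s in enumerate(S) if bits >> i & 1]
    rest = [s for i, s in enumerate(S) if not bits >> i & 1]
    c = sum(T) - A / 2
    assert side_penalty(T) + side_penalty(rest) == K + c * c
print("total penalty is K + c^2 for every split")
```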

**Difficulty: **6. I'm not sure how much I'd trust students to be able to come up with the algebra themselves. Maybe if you tell them to set up the problem with lengths and weights equal, and then make them prove that in that case the order doesn't matter, it becomes more tractable for them.

For students, a detour to Algorithms 101 would help. For the one-machine case, sorting in decreasing order of (w_i/l_i) yields the optimal solution.

HINT: Swapping any two consecutive items that violate this order results in a better schedule.

Then plug in the case from Lenstra et al., where the ratios (w_i/l_i) are all 1.

Thinh Nguyen

Right, that’s kind of the proof I was talking about. I still don’t know if they’d think of that being necessary on their own.
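For what it's worth, that single-machine ratio rule (Smith's rule) is also easy to check by brute force. A sketch with arbitrary (length, weight) pairs of my own choosing: sorting by decreasing weight/length matches the minimum found over all orderings.

```python
from itertools import permutations

# Arbitrary example tasks as (length, weight) pairs.
tasks = [(5, 4), (1, 6), (3, 3), (2, 5)]

def penalty(order):
    t, total = 0, 0
    for l, w in order:
        t += l           # task finishes at time t
        total += w * t   # weight * completion time
    return total

# Smith's rule: decreasing ratio of weight to length.
smith = sorted(tasks, key=lambda lw: lw[1] / lw[0], reverse=True)
best = min(penalty(p) for p in permutations(tasks))
assert penalty(smith) == best
print(penalty(smith))  # 83 for this example
```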