Tag Archives: Difficulty 3

Knapsack

It’s kind of amazing that I’ve gotten this far without needing such a classic problem.  But I was writing up the next post and needed to link to my Knapsack article, and it wasn’t there, so..

The problem: Knapsack.  (More technically, “0/1 Knapsack”).  This is problem MP9 in the appendix.

The description: Given a set U of items, where each item u in U has a profit p(u) and a weight w(u) (G&J call these “value” and “size”), and positive integers B and K. Can we create a subset U’ of U that has total profit at least K, but total weight at most B?

Example: Knapsack is one of my all-time favorite problems, which probably says something about me.  I inflict it on my students at all levels of the curriculum: at the low levels it’s a good motivation for introducing parallel arrays and building simple classes, the fractional version is a good example of a greedy algorithm, and the 0/1 version is a good example of where greedy fails.  It also fascinates me that a version of the problem with infinitely many candidate solutions (Fractional Knapsack, where you can take any proportion of an item) is easily solvable, while a version with fewer solutions to consider (0/1 Knapsack, where there are “just” 2 choices for each item) is intractable.

Anyway, here is my classic “Your Greedy Approach Won’t Work” example:

Item Profit Weight
1 5 4
2 3 3
3 3 3

If B=6, the best option is to choose items 2 and 3, for a total profit of 6.  But greedily picking items by profit/weight ratio (which works for the Fractional Knapsack problem) chooses item 1 first, after which nothing else fits, for a profit of only 5.
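If you want to see the failure concretely, here is a minimal Python sketch (the function names and instance representation are mine) that runs the ratio-greedy rule and a brute-force optimum on this instance:

from itertools import combinations

# The counterexample instance: (profit, weight) per item, capacity B = 6.
items = [(5, 4), (3, 3), (3, 3)]
B = 6

def greedy_01(items, capacity):
    # Greedy by profit/weight ratio (the rule that is optimal for Fractional Knapsack).
    total_profit, remaining = 0, capacity
    for profit, weight in sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True):
        if weight <= remaining:          # 0/1 rule: take the whole item or skip it
            total_profit += profit
            remaining -= weight
    return total_profit

def best_01(items, capacity):
    # Brute force over all subsets (fine for tiny instances).
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(p for p, _ in subset))
    return best

print(greedy_01(items, B))  # 5  (takes item 1, then nothing else fits)
print(best_01(items, B))    # 6  (items 2 and 3)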

Reduction: G&J reference Karp’s paper, and this is a classic reduction you see in most books that cover this problem.  You go from Subset Sum.  We’re given a set S and an integer K.  We create a set U with the same number of elements, and make the profit and weight of each element the same, equal to the size of the corresponding element in S.  We set the profit target K’ and the weight bound B both equal to K (so K’=B=K).

If the original set had a subset summing to K, then taking those elements will make us a profit of K’ and a weight of B.

If we have a Knapsack solution with profit K’ or more, then since the profit of each item equals its weight, the only way for the total weight to not exceed B is for the profit to be exactly K’.  So taking the corresponding items in S gives a sum of exactly K.
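As a sketch of how mechanical the transformation is, here is a small Python version (representing a Knapsack instance as a list of (profit, weight) pairs plus the two bounds is my own choice):

def subset_sum_to_knapsack(S, K):
    """Map a Subset Sum instance (S, K) to a 0/1 Knapsack instance.

    Each element s becomes an item with profit = weight = s; the weight
    bound B and the profit target K' are both set to K."""
    items = [(s, s) for s in S]    # (profit, weight) pairs
    B = K                          # weight bound
    K_prime = K                    # profit target
    return items, B, K_prime

# Example: S = {1, 2, 5, 7} with K = 8 has the subset {1, 7}, so the produced
# Knapsack instance has a subset with profit 8 and weight 8.
print(subset_sum_to_knapsack([1, 2, 5, 7], 8))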

Difficulty: 3.  I use this as an easy homework or test question all the time.

Sequencing to Minimize Tardy Tasks, Sequencing to Minimize Tardy Task Weight

I don’t want to constantly restate reductions that are done in the G&J book, so I’ll just state that problem SS2, “Sequencing to Minimize Tardy Tasks” is done in G&J on page 73.

The next problem has a very similar title, but is a pretty different problem:

The problem: Sequencing to Minimize Tardy Task Weight.  This is problem SS3 in the appendix.

The description: Given a set  T of tasks, each with a length, a deadline, and a weight (think “penalty”), and an integer K, can we find a one-processor schedule for T such that the sum of the weights of the tasks that are not completed before their deadline is K or less?

Example: Here’s a pretty simple example to show what the hard decision is:

Task Length Deadline Weight
1 1 1 3
2 2 3 4
3 3 3 5

Obviously, we can only do either task 3 by itself, or both of tasks 1 and 2.  The current weights favor doing tasks 1 and 2 (missing task 3 costs 5, while missing tasks 1 and 2 costs 7, and we want the total weight of missed tasks to be small), but changing the weight values can change the decision.

Reduction: This is done in Karp’s paper.  We reduce from Partition. Let S be the set we are trying to partition.   We’ll have a task for each element in S.  The length of each task and its weight will both be equal to the size of the element in S.  K = half of the sum of all of the elements in S.   The deadline for all tasks is also K.

Notice that the total weight of all tasks is 2K, so the only way to get total weight K (or less) after the deadline is to get total length K (or more) done before the deadline.  We can’t do more than K length before the deadline (since the deadline is K). So the only way to have a feasible schedule is to have a set of tasks whose lengths sum to exactly K, which is a partition.
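Here is a rough Python sketch of the transformation, assuming the Partition instance is just a list of positive integers (the task representation is mine):

def partition_to_tardy_weight(S):
    """Map a Partition instance S to a Sequencing to Minimize Tardy Task
    Weight instance.  Each element s becomes a task with length = weight = s,
    every deadline is K = sum(S)/2, and the tardy-weight bound is also K."""
    K = sum(S) // 2   # assumes sum(S) is even; otherwise Partition is trivially "no"
    tasks = [{"length": s, "deadline": K, "weight": s} for s in S]
    return tasks, K

# Example: S = {3, 1, 1, 2, 2, 1} sums to 10, so K = 5.
tasks, K = partition_to_tardy_weight([3, 1, 1, 2, 2, 1])
print(K, tasks)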

Difficulty: 3.  I think this is a little easier than last week’s since you don’t have to make up that boundary task in the middle.  Still, going from one set of things (the elements in S) to 3 values (length, deadline, and weight) gives opportunities for confusion.

Sequencing With Release Times and Deadlines

On to a new section!  This is the “Sequencing and Scheduling” section of the appendix.  The first problem is a bit weird because I think it appears with a different name elsewhere in G&J.

The problem: Sequencing with Release Times and Deadlines.  This is problem SS1 in the appendix.

Description: Given a set of tasks, where each task t has a “release time” r(t) (a non-negative integer), a “deadline” d(t), and a length l(t) (both positive integers), can we create a one-processor feasible schedule for these tasks that meets all deadlines?

The definition of “feasible” is pretty straightforward:

  • No task can start before its release time.
  • No two tasks can be running at the same time (and there is no preemption, so once a task starts running, it must complete)
  • All tasks finish at or before their deadline.

Example: Here’s an example that will relate to the reduction:

Suppose I have 5 tasks. The first 4 tasks are similar: all of them are released at time 0, all of them have a deadline at time 11, and the lengths are 1, 2, 3, and 4.  Our fifth task is released at time 5, has a length of 1, and a deadline of 6.  (So every feasible schedule must have this task own the processor between times 5 and 6.)

A feasible schedule would be: {1,4,5,2,3}.  Tasks 1 and 4 fill the 5 minutes before the 5th task (times 0 through 5), then tasks 2 and 3 fill the 5 minutes afterward (times 6 through 11).

Reduction: Like I said above, I think this is equivalent to the G&J “Sequencing Within Intervals” problem (at least I can’t see a difference).  The reduction for that problem is on page 70 of the G&J book, and it’s pretty elegant.

We reduce from Partition.  Let B = the sum of all elements in the Partition instance.  Each element in the set becomes a task with release time 0, deadline B+1, and a length equal to the value of the set element.  We have one extra task that is released at time B/2, has a length of 1, and a deadline of B/2 + 1 (this is like our 5th task in the example above).

The idea is that the only way to fit all of the tasks feasibly is to have some subset of the tasks start at time 0 and take (collectively) exactly B/2 time, then run our 1 extra task from B/2 to B/2+1, then fill the time from B/2+1 to B+1 with the rest of the tasks (which also take collectively exactly B/2 time).  The two subsets each sum to B/2, which forms a partition.
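A quick Python sketch of the construction, assuming the Partition instance is a list of positive integers with an even sum (the representation is mine):

def partition_to_sequencing(S):
    """Map a Partition instance S to a Sequencing with Release Times and
    Deadlines instance.  Each element becomes a task released at time 0 with
    deadline B + 1; one extra unit-length task is pinned to [B/2, B/2 + 1]."""
    B = sum(S)   # assumes B is even; otherwise Partition is trivially "no"
    tasks = [{"release": 0, "length": s, "deadline": B + 1} for s in S]
    tasks.append({"release": B // 2, "length": 1, "deadline": B // 2 + 1})  # the forcing task
    return tasks

# The example above: S = {1, 2, 3, 4}, B = 10, forcing task occupies [5, 6].
print(partition_to_sequencing([1, 2, 3, 4]))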

Difficulty: 3.  I think this is a good reduction students can come up with on their own.  I like the slickness of the extra element to force the partition.  If the problem we’re talking about is actually different than the one done in G&J, we’d have to see just where the differences lie.


Monotone EQSat

Today I’m posting 2 problems relating to my student Dan Thornton’s independent study work.  I’m out of town this week and next, so I have these posts set to automatically go up on Tuesday afternoon.

Dan’s independent study was based on the “Automaton Identification” problem, but to show that reduction, he needs to use a variant of 3SAT, which he shows here:

The problem: Monotone EQ SAT. This is a restricted version of Monotone SAT.

The description:
We are given a conjunction of clauses F = \wedge_{i=1}^{l} C_{i}, where each clause C_{i} of F contains either all negated or all non-negated variables z_{j}, and the number of clauses equals the number of variables (our instance has l of each). Is there an assignment of the variables so that F is satisfied?

Example:
Here is an F that has 4 variables and 4 clauses.

F =( \neg x_{1} \vee \neg x_{2} \vee \neg x_{3} \vee \neg x_{4} ) \wedge ( x_{1} \vee x_{2} \vee x_{4} ) \wedge ( \neg x_{2} \vee \neg x_{3} \vee \neg x_{4} ) \wedge ( x_{1} )

The above F may be satisfied by the following assignment:
x_{1} \rightarrow True
x_{2} \rightarrow False
x_{3} \rightarrow False
x_{4} \rightarrow True
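If you want to check that assignment mechanically, here is a tiny Python snippet (the DIMACS-style encoding of literals as signed integers is my own choice):

# Clauses as lists of signed variable indices (negative = negated).
F = [[-1, -2, -3, -4], [1, 2, 4], [-2, -3, -4], [1]]
assignment = {1: True, 2: False, 3: False, 4: True}

def satisfies(clauses, assignment):
    # A clause is satisfied if some literal evaluates to True under the assignment.
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

print(satisfies(F, assignment))   # True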

The reduction:
We will reduce from Monotone SAT. So we are given an instance of Monotone SAT with the clauses F = \wedge_{i=1}^{n} C_{i}, where each clause is of the form C_{i} = (x_{i1} \vee ... \vee x_{ik_i} ) and contains either all negated or all non-negated variables. This differs from Monotone EQ SAT in that we do not require the number of variables and clauses to be equal.
From this F we must build an instance of Monotone EQ SAT.
We may transform our instance of Monotone SAT, F, into one of Monotone EQ SAT by the following iterative procedure. New variables will be denoted by z'_{j} and new clauses by C'_{i}.

 F' = F;
 i = 1;
 j = 1;
 While{number of clauses in F' != number of variables in F'}{
   If{number of variables < number of clauses}{
     introduce two new variables z'_{j}, z'_{j+1};
     Create the new clause C'_{i} = (z'_{j} \vee z'_{j+1});
     F' = F' \wedge C'_{i};
     i = i+1;
   }
   else {
     introduce two new variables z'_{j}, z'_{j+1};
     Create three new clauses:
      C'_{i} = (z'_{j}),
      C'_{i+1} = (z'_{j+1}),
      C'_{i+2} = (z'_{j} \vee z'_{j+1});
     F' = F' \wedge C'_{i} \wedge C'_{i+1} \wedge C'_{i+2};
     i = i + 3;
   }
   j = j+2;
 }

The above algorithm will produce a formula F' that is an instance of Monotone EQ SAT. This may be shown by induction. Notice that in each iteration, if the number of variables is greater than the number of clauses, we add 2 new variables and 3 new clauses.

If the number of variables is less than the number of clauses, then we add 2 new variables but only a single new clause. Either way the difference between the number of clauses and the number of variables, dif = |number of clauses - number of variables|, decreases by 1. So in O(dif) iterations we will obtain a formula where dif = 0. Such a formula is an instance of Monotone EQ SAT.
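(Back to me for a moment.) Here is a rough Python version of the padding loop, assuming clauses are given as lists of signed variable indices (negative means negated); the representation and names are mine:

def pad_to_eq(clauses, num_vars):
    """Pad a Monotone SAT instance until #clauses == #variables, following
    the loop above.  The padding clauses are all-positive, so each new clause
    is still monotone."""
    clauses = list(clauses)
    while len(clauses) != num_vars:
        z1, z2 = num_vars + 1, num_vars + 2     # names for two fresh variables
        if num_vars < len(clauses):             # fewer variables: 2 new variables, 1 new clause
            clauses.append([z1, z2])
        else:                                   # more variables: 2 new variables, 3 new clauses
            clauses += [[z1], [z2], [z1, z2]]
        num_vars += 2
    return clauses, num_vars

# 5 clauses and 3 variables get padded to 7 and 7; the gap shrinks by 1 per iteration.
padded, n = pad_to_eq([[1, 2], [1, 2, 3], [-1, -3], [2], [1, 3]], 3)
print(len(padded), n)   # 7 7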

True{Monotone SAT} \Rightarrow True{Monotone EQ SAT}
Here we assume that there is a truth assignment function \theta: z_{j} \rightarrow \lbrace True,False \rbrace that maps every variable to a truth value, such that F is satisfied. After we perform the above algorithm we have an instance F', which will be of the form F \wedge ( \wedge_{i=1}^{k} C'_{i}) for some k \geq 1. Now notice that \theta will satisfy the F part of F', and we may trivially satisfy \wedge_{i=1}^{k} C'_{i} by simply assigning all new variables z'_{j} to true.
This gives us a new truth assignment function \theta' that satisfies F'.

True{Monotone EQ SAT} \Rightarrow True{Monotone SAT}
Here we assume that there is a truth assignment function \theta' that satisfies F'. Then, since F' = F \wedge ( \wedge_{i=1}^{k} C'_{i}), \theta' must also satisfy F.

(Back to me)

Difficulty: 3.  The logical manipulations aren’t hard, but it is possible to mess them up.  For example, it’s important that the algorithm above reduces the difference in variables and clauses by 1 each iteration.  If it can reduce by more, you run the risk of skipping over the EQ state.