# Monthly Archives: June 2018

## Cost-Parametric Linear Programming

This is a hard reduction that I’m not sure I understand.  I’m sure that if I were better at vector operations, I’d be more able to figure it out.

The problem: Cost-Parametric Linear Programming.  This is problem MP3 in the appendix.

The description: Given a set of integer constraints (x̄, b) as in Integer Programming (except that the inequalities are ≥ instead of ≤ for some reason, but we can always multiply by -1 to flip the constraints), and a set J that is a subset of the indices of the variables in x̄.  Can we find an m-tuple c̄ with rational values and a positive rational q such that, for the feasible non-negative m-tuples ȳ that satisfy the constraints, q times the minimum of c̄·ȳ is more than the sum of the y values at the indices in J?

Example: Ok, here’s a pretty simple situation.  Suppose we have 3 variables, with constraints:

(Or, technically, the same constraints multiplied through by -1 to put them in the ≥ form.)

Now, suppose we choose a c̄ of (1,1,0) and a J of {1,2}.  So in this case, the minimum of c̄·ȳ would be .3 + .4 = .7.  That, times q, needs to be more than the sum over J, which comes out to .5 + 0 = .5.  So, with this c̄, we need q × .7 > .5, so a q > 5/7 makes this solution correct.  There may be different values for c̄ that will also work on smaller q’s.

Reduction: G&J say that the reduction is from 3SAT, but I think the paper by Jeroslow uses DNF Non-Tautology. Given a logical formula in DNF form, he builds a linear inequality system. All variables in the formula become variables in the linear program.  For each clause, every variable appearing in it gets a constraint.  If the variable shows up in a literal positively, we add the constraint that it equals 1.  If it shows up negatively, we add the constraint that it equals 0.  This means that the constraints for a clause have a solution if and only if we set the variables to the truth values that make the clause true.  (Remember, we have a DNF formula, so each clause is a conjunction- we need to make every literal in it true.)
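As a sanity check on that per-clause building block, here is a quick brute-force sketch.  The encoding is my own, not Jeroslow’s: a clause is a list of signed integers (3 means “x3 true”, -3 means “x3 false”), and the formula is a tautology exactly when every 0/1 assignment satisfies the constraints of at least one clause.

```python
from itertools import product

def clause_constraints(clause):
    """The assignments forced by one DNF clause's constraints:
    x_i = 1 for a positive literal, x_i = 0 for a negative one.
    Returns None if the constraints contradict each other."""
    forced = {}
    for lit in clause:
        var, val = abs(lit), 1 if lit > 0 else 0
        if forced.get(var, val) != val:
            return None          # x_i = 1 and x_i = 0 can't both hold
        forced[var] = val
    return forced

def is_tautology(clauses, num_vars):
    """A DNF formula is a tautology iff every 0/1 assignment
    satisfies the constraints of at least one clause."""
    for bits in product([0, 1], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        satisfied = False
        for cl in clauses:
            forced = clause_constraints(cl)
            if forced is not None and all(assignment[v] == val
                                          for v, val in forced.items()):
                satisfied = True
                break
        if not satisfied:
            return False
    return True
```

For example, `is_tautology([[1], [-1]], 1)` is True (x1 OR NOT x1), while `is_tautology([[1, 2], [-1]], 2)` is False, since the assignment x1=1, x2=0 satisfies neither clause’s constraints.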

He then shows that the minimum-cost way to satisfy the constraints (since it’s in DNF, that means making at least one clause true) meets the required bound for every cost vector if and only if A (the original formula) is a tautology.  From there he goes on to show that our problem instance holds.

I’m pretty confused by the logic- especially since it looks like he’s saying that his problem instance is true if and only if the original formula is a tautology, when Non-Tautology is what we’re supposed to be reducing from.

Difficulty: 9.  This is a hard problem to understand, and the reduction is hard to follow as well.  It’s very possible I’m not understanding something in the paper.

Moving along in the Mathematical Programming section.

The problem: Quadratic Programming.  This is problem MP2 in the appendix.

The description: Given a set of pairs (x̄, b) of constraints, similar to those in Integer Programming, except that the numbers can be rationals instead of integers.  Instead of the single objective vector c̄, we’re also given a second vector d̄, both of rationals.  Our objective bound B is also a rational.

Can we find a vector x̄ of rationals that meets all of the constraints and makes the objective function c1*x1 + d1*x1² + … + cm*xm + dm*xm² ≥ B?

Example: I’ll again try to write a simple example in a format closer to what I think is used in practice:

Maximize x1² + x2 such that:

x1 ≤ 5

x2 ≤ 1

I’m pretty sure this has a trivial solution of x1 = 5 and x2 = 1, for an objective score of 26.

Reduction: It’s worth mentioning that this is a “Not known to be in NP” problem, I think because the range of possible rational solutions is too large to nondeterministically guess.  Sahni’s paper reduces from Sum of Subsets.  So we’re given a set S = {s1, …, sn} and a bound M.  Our QP instance will have the following objective function: x1*(x1-1) + … + xn*(xn-1) + s1*x1 + … + sn*xn.  Our constraints will be that each xi has to be between 0 and 1 (inclusive) and also s1*x1 + … + sn*xn ≤ M.  Our bound B will be equal to M.

The first half of the objective function makes its contribution to the sum negative if xi is not 0 or 1 (and 0 if it is 0 or 1).  So the only way to get the objective to exactly M is to choose 0/1 values for each xi.  The second half of the objective function and the summation constraint force us to create a sum that is as large as possible but not larger than M.  In this case the objective function value is the sum of the subset chosen, which is equal to B if and only if we have a Sum of Subsets solution.

Difficulty: 5.  The hack to make fractional variable assignments negative is very cool, and easy to explain, but hard to find on your own.  I also think that worrying about quadratics and fractional values for your variables makes this problem harder to handle.

## Integer Programming

On to a new section of the appendix!

The problem: Integer Programming.  This is problem MP1 in the appendix.

The description: Given a set X of pairs (x̄, b), where x̄ is an m-tuple of integers, and b is an integer.  We’re also given an m-tuple c̄ and an integer B.

Can we find an m-tuple ȳ of integers such that:

• x̄·ȳ ≤ b for each (x̄, b) pair
• c̄·ȳ ≥ B.

Example: The definition above kind of hides the way we commonly think of integer (or linear) programming.  The X set is the set of constraints, and c̄ and B make the objective function.  So, the pair ((1,2,3,4),100) really corresponds to the constraint 1x1 + 2x2 + 3x3 + 4x4 ≤ 100.

Here is a simple Integer Programming problem written in the way we’re probably more used to seeing:

Given constraints:

x1 + x2 ≤ 5

10x1 + 6x2 ≤ 45

Maximize 5x1 + 4x2 (Or, as a decision problem: can 5x1 + 4x2 be ≥ 23?)

The answer is yes: If x1=3 and x2 = 2, we satisfy both constraints and get an objective value of 23.

Notice that if we allowed non-integer values for our variables, we would have a linear program, and could get to an objective value of 23.75 by setting x1=3.75 and x2=1.25.
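The integer answer is easy to confirm by brute force- nothing fancy, just enumerating the small feasible region:

```python
from itertools import product

# Enumerate all non-negative integer points satisfying both constraints
# and take the best objective value.
best = max(
    (5 * x1 + 4 * x2, x1, x2)
    for x1, x2 in product(range(6), repeat=2)   # x1 + x2 <= 5 caps each at 5
    if x1 + x2 <= 5 and 10 * x1 + 6 * x2 <= 45
)
print(best)   # (23, 3, 2): objective 23, at x1 = 3, x2 = 2
```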

Reduction: Karp’s paper uses 3SAT, which is fine, but it’s hard to see how the objective function works there.  I think a much more natural reduction is from Knapsack.

So we’re given a set of n items, each item i has a profit pi and weight wi.  We’re given a knapsack capacity B and a goal profit K.

Our constraints will be:

x1*w1 + … + xn*wn ≤ B  (keep the total weight at B or less)

xi ≤ 1 for all i (I can take at most 1 of each item)

-1*xi ≤ 0 for all i (Each xi has to be at least 0)

The last sets of constraints force each xi variable to be either 0 or 1.

And our objective: can p1*x1 + … + pn*xn be ≥ K?

I think it’s pretty clear that this is exactly the rules for Knapsack written as an Integer Programming problem.
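To make that concrete, here is a small sketch (the function name and sample numbers are mine) that treats the IP exactly as written: for integer solutions, the xi ≤ 1 and -1*xi ≤ 0 constraints leave only 0/1 values, so enumerating those answers both problems at once.

```python
from itertools import product

def ip_says_yes(w, p, B, K):
    """Is there an integer tuple x with each 0 <= xi <= 1 (the last two
    constraint sets), total weight <= B, and total profit >= K?"""
    for x in product([0, 1], repeat=len(w)):
        weight = sum(wi * xi for wi, xi in zip(w, x))
        profit = sum(pi * xi for pi, xi in zip(p, x))
        if weight <= B and profit >= K:
            return True
    return False

# Items with weights [2, 3, 4] and profits [3, 4, 5], capacity B = 5:
# taking the first two items gives weight 5 and profit 7.
print(ip_says_yes([2, 3, 4], [3, 4, 5], 5, 7))   # True
print(ip_says_yes([2, 3, 4], [3, 4, 5], 5, 8))   # False
```

The IP says “yes” exactly when the Knapsack instance does, which is the whole proof.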

Difficulty: 3.  I think this might be too easy for a homework problem because the proof part of the reduction (Knapsack says “yes” if and only if  IP says “yes”) is trivial.  But the need for the last set of constraints (that each variable is >=0, written as a <= equation) is a fun wrinkle that will mess up some people.

I was out of town last week- sorry for the missed post.

The problem: Deadlock Avoidance.  This is problem SS22 in the appendix.

The description: (The definition in G&J is a little vague, so this comes more from the paper by Araki, Sugiyama, Kasami, and Okui).  Given a set {P1..PU} of processes (represented as directed acyclic graphs with a fixed “start node”), and set {t1..tT} of resource types, where resource type ti has ai equivalent units available.  Each process state may allocate or deallocate a resource type, with the stipulations that:

• Each process can only deallocate a resource it has allocated
• Each process must deallocate all allocated resources before completing (along any path through its DAG)

Given a state in this system (where each process is somewhere along its process flow), is the state of the system “safe”?  That is, for each control flow, can we find a sequence of allocations and deallocations that allows each process to finish?

Example: I had a hard time with this definition until I realized we were asking about a specific state of the system.  So here is the classic deadlock situation described in this terminology.  Suppose we have 2 processes, P1 and P2.  Each process’ start node is its name, the symbol a(X) means to allocate resource X and the symbol d(X) means to deallocate resource X.

The process flows look like:

If we set the starting state to be after the second node of each graph (so P1 holds R1, and P2 holds R2), then we have a deadlock.  But if we set the starting state to be after the first node of each graph (so, before any resources are allocated), then we would call that “safe”, because we can do all of P1, followed by all of P2, and have both processes finish.
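Here is a small sketch of that safety check for straight-line processes.  The encoding and the greedy run-to-completion strategy are my simplifications- the general problem allows branching DAGs and interleaved schedules, which this doesn’t explore- but they are enough for this two-process example.

```python
def can_finish(steps, free):
    """Can this process run its remaining ('a'/'d', resource) steps alone,
    given the currently free units?  ('d' of a held resource frees a unit.)"""
    free = dict(free)
    for op, r in steps:
        if op == 'a':
            if free.get(r, 0) < 1:
                return False
            free[r] -= 1
        else:                          # 'd': deallocate, returning a unit
            free[r] = free.get(r, 0) + 1
    return True

def is_safe(procs, free):
    """Greedy check: repeatedly finish any process that can run to
    completion.  By the problem's rules a finished process has released
    everything, so the free pool gains whatever it was holding."""
    procs, free = list(procs), dict(free)
    while procs:
        runnable = next((p for p in procs
                         if can_finish(p['steps'], free)), None)
        if runnable is None:
            return False               # nobody can finish: deadlock
        for r, k in runnable['holds'].items():
            free[r] = free.get(r, 0) + k
        procs.remove(runnable)
    return True

# The deadlocked state: P1 holds R1 and still needs R2, and vice versa.
deadlocked = [
    {'holds': {'R1': 1}, 'steps': [('a', 'R2'), ('d', 'R2'), ('d', 'R1')]},
    {'holds': {'R2': 1}, 'steps': [('a', 'R1'), ('d', 'R1'), ('d', 'R2')]},
]
print(is_safe(deadlocked, {}))                   # False

# The safe state: nothing allocated yet, both full flows remain.
fresh = [
    {'holds': {}, 'steps': [('a', 'R1'), ('a', 'R2'), ('d', 'R2'), ('d', 'R1')]},
    {'holds': {}, 'steps': [('a', 'R2'), ('a', 'R1'), ('d', 'R1'), ('d', 'R2')]},
]
print(is_safe(fresh, {'R1': 1, 'R2': 1}))        # True
```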

Reduction: The paper by Araki, Sugiyama, Kasami, and Okui has several reductions for several variants of the problem.  This is the first one (Theorem 2 in my copy of the paper) and uses 3SAT.  We’re given a formula with n variables and m clauses, and build a system that has 3m+2n+1 processes and 7m+3n+1 resource types (each with 1 unit available).  The resources are:

• Bi, Xi and ~Xi for each variable
• Cj for each clause
• Cjk and Djk for each literal position in each clause (k goes from 1 to 3).
• A single resource A, which will control the flow through processes.

The processes are:

• Pxi and P~xi for each variable
• Pjk for each literal position in each clause
• P0, which manages everything.

Here is the flow diagram for Pxi.  “a” means allocate a resource, and “d” means deallocate a resource:

Here is the flow diagram for P~xi.  Notice that because they share Bi and A, only one of these processes can complete at a time:

The three Pjk processes for a clause are all similar.  The resource “Yj1” refers to the resource that is the first literal in clause j (so it’s really an X resource):


It’s worth noticing that resource Cj is in common among all 3 processes, and that each process allocates two D resources in a circular fashion.

The main P0 process is:

The starting state is the second node of each process (so after each process acquires its first resource).

If the original formula was satisfiable, then we have a way of setting each variable that satisfies each clause.  Each variable has two processes: Pxi (corresponding to setting xi to true) and P~xi (corresponding to setting xi to false).  The process corresponding to the way we set each xi to satisfy the formula is the one we want to let proceed.  It will grab Bi and eventually wait on A (which is held by P0). This releases the xi (if the variable is set to true) or ~xi (if the variable is set to false) resource. This means that the Pjk process corresponding to that literal can run to completion (this is the literal that makes clause Cj true).  Once it’s done, we can run the other two Pjk processes to right before they each try to acquire Cj.

At this moment, none of the Cj‘s or Cjk resources are allocated, so process P0 can finish.  This releases A, so process Pxi (or P~xi) can finish.  This releases Bi, so the other of Pxi and P~xi can finish.  Then, finally, the other two Pjk in each clause can finish, in order.  Since we found an order to terminate all processes, our starting state was safe.

If the original formula was unsatisfiable, then for each Bi, we need to decide whether to allocate it to Pxi or P~xi.  The one we allocate it to can release its Xi (or ~Xi) resource.  That process is then waiting to get A (currently held by P0), and the other process for that variable is waiting to get Bi (currently held by the process we chose).  P0 can’t release A until all Cj and Cjk resources are free, which only happens if some Pjk process completes.  But these processes can’t complete until they get a resource corresponding to a literal in their clause (currently held by an Xi/~Xi process).  Since we know the formula is unsatisfiable, no matter how we decide to allocate the Bi’s, there will always be at least one clause where none of the literals in that clause were chosen to advance.  The three processes for this clause cannot complete, meaning P0 cannot advance.  Since we always reach a state where no process can finish, we have a deadlock.

Difficulty: 8.  Tracing through the different processes is very cool, but very complicated.  I wish I could teach a course one of these years on “cool hard reductions”.  This is way too hard to fit into the last couple of weeks of an Algorithms class, but I think it is something students could sink their teeth into and learn a lot from.