Sequencing With Release Times and Deadlines

On to a new section!  This is the “Sequencing and Scheduling” section of the appendix.  The first problem is a bit weird because I think it appears with a different name elsewhere in G&J.

The problem: Sequencing with Release Times and Deadlines.  This is problem SS1 in the appendix.

Description: Given a set of tasks, where each task t has a “release time” r(t), a “deadline” d(t), and a length l(t), all positive integers, can we create a feasible one-processor schedule for these tasks that meets all deadlines?

The definition of “feasible” is pretty straightforward:

  • No task can start before its release time.
  • No two tasks can be running at the same time (and there is no preemption, so once a task starts running, it must complete)
  • Every task finishes at or before its deadline.

Example: Here’s an example that will relate to the reduction:

Suppose I have 5 tasks.  The first 4 tasks are similar: all of them are released at time 0, all of them have a deadline of 11, and their lengths are 1, 2, 3, and 4.  Our fifth task is released at time 5, has a length of 1, and a deadline of 6.  (So every feasible schedule must have this task own the processor between times 5 and 6.)

A feasible schedule would be {1, 4, 5, 2, 3}: 5 time units of tasks before the fifth task (tasks 1 and 4 fill the time from 0 to 5), then 5 time units of tasks afterward (tasks 2 and 3 fill the time from 6 to 11).
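
To make the feasibility rules concrete, here’s a quick Python sketch (my own encoding, not from G&J) that checks whether running tasks in a given order, each as early as possible, meets every release time and deadline:

def is_feasible(tasks, order):
    # tasks: dict mapping task name -> (release, deadline, length).
    # Run the tasks back-to-back in the given order, each starting as
    # early as its release time allows, with no preemption.
    time = 0
    for name in order:
        release, deadline, length = tasks[name]
        time = max(time, release)   # can't start before the release time
        time += length              # run the task to completion
        if time > deadline:         # must finish at or before the deadline
            return False
    return True

# The five-task example above: tasks 1-4 released at 0 with deadline 11,
# task 5 pinned to the slot [5, 6].
tasks = {1: (0, 11, 1), 2: (0, 11, 2), 3: (0, 11, 3),
         4: (0, 11, 4), 5: (5, 6, 1)}
print(is_feasible(tasks, [1, 4, 5, 2, 3]))  # True
print(is_feasible(tasks, [1, 2, 3, 4, 5]))  # False: task 5 would end at 11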

Reduction: Like I said above, I think this is equivalent to the G&J “Sequencing Within Intervals” problem; at least I can’t see a difference.  The reduction for that problem is on page 70 of the G&J book, and it’s pretty elegant.

We reduce from Partition.  Let B = the sum of all elements in the Partition instance (if B is odd, the answer is trivially “no”).  Each element in the set becomes a task with release time 0, deadline B+1, and a length equal to the value of the set element.  We have one extra task that is released at time B/2, has a length of 1, and a deadline of B/2 + 1 (this is like our fifth task in the example above).

The idea is that the only way to fit all of the tasks feasibly is to have some subset of the tasks run from time 0 and take (collectively) exactly B/2 time, then run our one extra task, and then fill the time from B/2 + 1 to B+1 with the rest of the tasks (which also take, collectively, exactly B/2 time).  The two subsets form a partition of the original set.
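
Here’s a small sketch of that construction in Python (the triple format for tasks is my own):

def partition_to_scheduling(elements):
    # Build the scheduling instance from a Partition instance (a list of
    # positive integers).  Returns (release, deadline, length) triples.
    B = sum(elements)
    # one task per element: released at 0, due at B+1
    tasks = [(0, B + 1, e) for e in elements]
    # the "enforcer" task that owns the processor during [B/2, B/2+1]
    tasks.append((B // 2, B // 2 + 1, 1))
    return tasks

print(partition_to_scheduling([1, 2, 3, 4]))
# [(0, 11, 1), (0, 11, 2), (0, 11, 3), (0, 11, 4), (5, 6, 1)]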

Difficulty: 3.  I think this is a good reduction students can come up with on their own.  I like the slickness of the extra task that forces the partition.  If the problem we’re talking about is actually different from the one done in G&J, we’d have to see just where the differences lie.

Safety of File Protection Systems

This is the last problem in the “Storage and Retrieval” section of the appendix, and it uses the Linear Bounded Automaton Acceptance problem from last week.  So, since LBA Acceptance is “not known to be in NP”, the same is true of this problem.

The problem: Safety of File Protection Systems.  This is problem SR36 in the appendix.

The description: We’re given:

  • A set R of “generic rights” (intuitively, these are things like “read access” or “write access”)
  • A set O of “objects”
  • A set S ⊆ O of “subjects”
  • A matrix P defining the set of rights for each combination of subjects and objects.  That is: for all s∈S and o∈O, P[s][o] defines the rights (a subset of R) that subject s has on object o.
  • A set C of “commands”.  Each command takes a set of parameters X1 through Xm, where each Xi is an object (and many are subjects as well).
  • The commands begin with a series of if statements: “if ri ∈ P[Xsi][Xoi]”, where “si” and “oi” are numbers from 1 to m.  So the if statement is asking if a specific right exists for a specific pair of parameters to the command.  The command will have several of these if statements connected by “and” statements.
  • If all of the clauses in the if statement are true, then a list of several operations happens.  The two operations that can happen are “enter a right into P” and “delete a right from P”.  I think you can also delete objects and subjects, but you can’t create them (in the version of the problem we’re talking about), so I don’t know if it matters.

Can we find a sequence of commands in C where, at some point in the sequence, executing a command gives some entry P[s][o] a right r’ that it previously did not have?

Example: It’s pretty easy to come up with an example that is unsafe: just create a command that gives a right to someone.  I think where it gets harder is when you deal with commands that perform several operations.  Here is an example from the paper by Harrison, Ruzzo, and Ullman that discusses this problem:

command IREAD(s1, s2, o) {
  if "read" ∈ P[s2][o] and
     "iread" ∈ P[s1][s2]
  then
     enter "read" into P[s1][o]
     delete "read" from P[s1][o]
}

The right “iread” is meant to stand for “indirect read”.  So what this is saying is that s2 can read o, and s1 has the right to read what s2 reads.  So s1 gets (temporary) permission to read o.  Presumably, some time passes between the granting of the read permission to s1 and the removal of that right, during which s1 reads the data in o.  Notice that this is a safe procedure because, by the end, no new rights have been granted.
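
To make the command semantics concrete, here’s a small Python model of the rights matrix and this command (my own encoding, not the paper’s).  Following the reading above that safety is judged after each command finishes, IREAD leaves the matrix unchanged:

# The rights matrix P, as a dict from (subject, object) to a set of rights.
P = {
    ("s2", "o"):  {"read"},    # s2 can read o
    ("s1", "s2"): {"iread"},   # s1 can indirectly read through s2
    ("s1", "o"):  set(),
}

def iread(P, s1, s2, o):
    # The IREAD command: grant s1 a temporary read right on o, then revoke it.
    if "read" in P[(s2, o)] and "iread" in P[(s1, s2)]:
        P[(s1, o)].add("read")      # enter "read" into P[s1][o]
        # ... s1 would actually read o here ...
        P[(s1, o)].discard("read")  # delete "read" from P[s1][o]

before = {cell: set(rights) for cell, rights in P.items()}
iread(P, "s1", "s2", "o")
print(P == before)  # True: no new right persists, so no leak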

Reduction: The paper referenced above shows that a system in which we also have operations that can create objects and subjects is undecidable.  They do this by showing how such a system can simulate an arbitrary Turing Machine.  The reduction turns each transition into a command in the system: the current state, the current cell the head is over, and the contents of each cell are implemented as rights, and state transitions become commands.  The “if” parts of a command check that the machine is in the correct configuration, and the “then” parts change rights in a way that simulates the moving of the head, the output to the tape, and the transition to a new state.  Importantly for our purposes, the only time these commands use the “create” operations (which are in the undecidable version of the problem, but not in ours) is when the Turing Machine moves its head into a previously unexplored area of the tape.

They then say that in our problem, which doesn’t have “create” commands, a simulation “similar to that used” in the undecidability reduction can also be used to convert an LBA into a set of commands, since an LBA won’t be moving its head arbitrarily far into new space.  I think the LBA is still allowed to use some new space, but since that space is bounded by a polynomial in the input size, we can simulate it by having each cell get its own access rights, and that keeps the reduction polynomial-sized.

Difficulty: 8.  This is a hard problem to understand, and the reduction is done in a non-standard way (and depends on the LBA Acceptance reduction, which is also done in a non-standard way), so it may throw some people for a loop.  It is a cool example of a non-standard way to reduce things, though, if that’s what you’re looking for.

Linear Bounded Automaton Acceptance

The next problem in the appendix is a doozy, and it needs this reduction, which is in Karp’s paper.

The problem: Linear Bounded Automaton Acceptance.  This is problem AL3 in the appendix.

The description: Given a Linear Bounded Automaton L, and a string x, does L accept x?

Example: A well-known context-sensitive language is L = {a^n b^n c^n}.  If we had an LBA that accepts L, then on a string such as aabbccc the LBA should say “no”, but on the string aabbcc it would say “yes”.

Reduction: G&J say this is a “Generic reduction”, and I can see why.  Let me try to add some explanation and put it into something more like what we’re used to:

We start with a known NP-Complete (not NP-Hard) problem.  Let’s say 3SAT. 3SAT is in NP, which means we have a non-deterministic Turing Machine that takes an instance x and solves 3SAT in polynomial time, let’s say p(|x|).   Let’s assume that the alphabet of this Turing Machine is {0,1}.

Given this instance x of 3SAT, we build the following context-sensitive language (and it’s implied that we can go from there to an LBA in polynomial time): {#^{p(log |x|)} x #^{p(log |x|)}} over the alphabet {0,1,#}.  So we have a number of # symbols before and after x equal to the polynomial applied to the log of the input size.

To be honest, I’m not sure why you need the log here.  I think you can get away with just having p(|x|) symbols on each side, and the total length of the LBA acceptance instance is still polynomial in |x|.  The idea is that since we know the TM completes in time p(|x|), it can only ever move its head p(|x|) tape cells to the left or right before it runs out of time.  So we can use that as the “bound” on the LBA.
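
As a quick sketch (using p(|x|) pads per the simplification above, and my own encoding of the 3SAT instance as a binary string), the transformation itself is just padding:

def pad_for_lba(x, p):
    # Surround x with p(|x|) '#' symbols on each side: all the tape the
    # simulated machine could ever reach within its time bound.
    pads = "#" * p(len(x))
    return pads + x + pads

print(pad_for_lba("0110", lambda n: n ** 2))
# 16 '#' symbols, then 0110, then 16 more '#' symbols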

So, if our LBA uses the exact same states and transitions as the non-deterministic Turing Machine that solved 3SAT, we now have our LBA accept x exactly when x was satisfiable.

The reason this is a “generic” reduction is that nothing we did had anything to do with 3SAT specifically.  We could do this process for any problem in NP.  It’s more useful if we start with an NP-Complete problem, but we could do this for things in P as well since they are also in NP.

Difficulty: 5, mainly because this is a weird way of doing things, and Linear Bounded Automata are things that often get short shrift in Theory of Computation classes.

Consistency of Database Frequency Tables

The end of the Storage and Retrieval section is in sight!

The problem: Consistency of Database Frequency Tables.  This is problem SR35 in the appendix.

The description: I find G&J’s definition confusing, so this definition borrows a lot from the paper by Reiss that has the reduction. 

We have a set of “characteristics” (attributes) A.  Each attribute a in A has a domain Da.  A database is a set of “objects” (tuples) O1..On, where each object defines a value for each attribute.  (The result is a two-dimensional table where rows are objects and columns are attributes.)  We can define a frequency table for each pair of attributes a and b in A.  The table has |Da| rows and |Db| columns, and each entry (x,y) is “supposed” to represent the number of objects that have value x for attribute a and value y for attribute b.

What the problem is asking is: Given a set of tables and a database table V, can we find a way to map the attributes in A to the tables such that the tables actually represent the frequencies in the database?

Example:
Since you need a frequency table for each pair of attributes, here is an example with 3 attributes, each taking 2 possible values.  Attribute a’s domain is {0,1}, b’s is {a,b}, and c’s is {>, <}.  Our set of objects is:

  • (0,a,>)
  • (0,b,>)
  • (1,b,>)
  • (0,b,<)

If we are handed this set of 3 frequency tables (in each table, rows are indexed by the first attribute’s domain in the order listed above, and columns by the second’s):

C1 vs C2:

1 2
0 1

C1 vs C3:

2 1
1 0

C2 vs C3:

1 0
2 1

These frequency tables are accurate if C1 is attribute a, C2 is attribute b, and C3 is attribute c.
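
These tables are easy to compute mechanically.  Here’s a quick Python sketch (my own) that rebuilds all three tables from the object list:

from collections import Counter
from itertools import combinations

objects = [(0, "a", ">"), (0, "b", ">"), (1, "b", ">"), (0, "b", "<")]
domains = [(0, 1), ("a", "b"), (">", "<")]

def frequency_table(objects, i, j):
    # Entry (x, y): how many objects have value x in column i and y in column j.
    counts = Counter((obj[i], obj[j]) for obj in objects)
    return [[counts[(x, y)] for y in domains[j]] for x in domains[i]]

for i, j in combinations(range(3), 2):
    print(f"C{i+1} vs C{j+1}:", frequency_table(objects, i, j))
# C1 vs C2: [[1, 2], [0, 1]]
# C1 vs C3: [[2, 1], [1, 0]]
# C2 vs C3: [[1, 0], [2, 1]]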

The reduction: From 3SAT.  (Actually, this might be from regular CNF-SAT, but there is no reason not to restrict the input to 3 literals per clause.)  We take a SAT instance with p clauses and q variables and build a database with 2 objects per variable (one for positive, one for negative) and p*q extra objects.  We have one attribute per clause (“CLi“), one attribute per variable (“VRi“), and “VALUE” (which holds the truth value of the object in the final setting).  The frequency tables are set up to ensure that each variable gets exactly one value setting (true or false) and each clause is made true.  The way to make the database consistent with the tables is to find a way to map the variables in the database to the tables that makes the formula satisfiable.

Difficulty: 5, because the reduction isn’t that hard to follow, or come up with, once you get how the frequency tables work.  But unlike most 5’s, I don’t think I’d assign it as a homework problem, because of how much extra work it would take to explain the frequency tables in the first place.

Safety of Database Transaction Systems

This problem is related to last week’s but turns out to be much harder.

The problem: Safety of Database Transaction Systems.  This is problem SR34 in the appendix.

The description: Given a set of variables V and transactions T as defined in the Serializability of Database Histories problem, is every history H for T equivalent to some serial history?

Example: This is an extension of last week’s problem.  So last week’s example, which produced a history that is not serializable, means that that set of transactions is not safe.

The easiest way to produce transactions that are safe is to make them access different variables: Suppose I have 2 transactions (R1, W1) and (R2, W2).  If the first transaction reads and writes a variable x and the second transaction reads and writes a variable y, then any ordering of those transactions will be serializable.

Reduction: It’s in the same paper by Papadimitriou, Bernstein, and Rothnie as the Serializability of Database Histories problem.  It’s interesting that they couldn’t show that this problem is in NP.

They reduce from Hitting Set.  They show in the paper how to take a transaction system and build a graph where there is one vertex for each read and write operation in each transaction, and an edge between the two operations of each transaction.  There are also edges between operations Ri and Wj, or Wi and Wj, if those operations share a variable.  These edges mark places where changing the order changes the meaning of a transaction history.  They show that a transaction system is safe if and only if the graph has no cycle containing an (Rj, Wj) edge.  (Note that this means cycles can exist, as long as they contain only W-W edges.)
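
That condition is mechanically checkable.  Here’s my own sketch of the test, treating the conflict graph as undirected: a cycle runs through an (Ri, Wi) edge exactly when its endpoints stay connected after that edge is removed:

from collections import defaultdict

def has_unsafe_cycle(edges, rw_edges):
    # edges: undirected conflict edges (pairs of operation ids).
    # rw_edges: the subset that are the (Ri, Wi) edges inside one transaction.
    for r, w in rw_edges:
        adj = defaultdict(set)
        for u, v in edges:
            if {u, v} != {r, w}:       # drop the candidate edge itself
                adj[u].add(v)
                adj[v].add(u)
        seen, stack = {r}, [r]
        while stack:                   # DFS: can we still get from r to w?
            u = stack.pop()
            if u == w:
                return True            # yes: a cycle uses (r, w), so unsafe
            for nxt in adj[u]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return False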

So, given an instance of Hitting Set (a set S and a collection C of subsets of S), they build a transaction graph: one read vertex for each set in C, plus one write vertex at the end.  Between read vertices Ri and Ri+1 we add |Ci| edges (or, so that we still have a simple graph, |Ci| paths through vertices that don’t appear anyplace else).  At the end of this chain of paths is a single W vertex, with an edge back to R1.  The only unsafe cycle now starts at R1, goes through one of the paths connecting each consecutive pair of R vertices, reaches the final W vertex, and then returns to R1.

So far, so good.  But then they lose me when they say “We can embed the hitting set problem –among others– in safety by forcing (by the use of sets of singletons) each such choice to correspond to a hitting set”.  I think what they’re saying is that they will create a set of variables corresponding to the sets in C such that an unsafe cycle exists if and only if the instance has a hitting set.  But I’m not sure how they get there, especially in polynomial time.  I’m sure there’s a way, but it reads like “we set it up so that it all works”, which isn’t convincing to me.

Difficulty: 9, because I don’t see how they do that last step.  I’m sure a good explanation exists that would make this less difficult.  I’ll also say that the reduction claims the transaction system is unsafe “if and only if there exists an unsafe path–and therefore a hitting set”, which sounds like a co-NP proof to me.  I’m probably missing something.

Serializability of Database Histories

This is another problem with a cool elegant reduction once you get past the baggage you need to know to understand the problem.  This database section seems to be full of problems like these.

The problem: Serializability of Database Histories.  This is problem SR33 in the appendix.

The description: We have a set V of variables in our database, and a set T of transactions, where each transaction i has a read operation (Ri) that reads some subset of V, and a write operation Wi that writes some (possibly different) subset of V.  We’re also given a “history” H of T, which permutes the order of all reads and writes maintaining the property that for all i, Ri comes before Wi in the history.  Think of this as a set of parallel transactions that reach a central database.  H is the order the database processes these operations.

Can we find a serial history H’ of T, with the following properties:

  • Each Ri occurs immediately before its corresponding Wi.
  • A “live transaction” is a transaction (Ri, Wi) where either Wi is the last time a variable is written before the Rj of some other live transaction or the last time the variable is written at all.  The set of live transactions in H and H’ needs to be the same.
  • For each pair of live transactions (Ri, Wi) and (Rj, Wj), for any variable v in Wi∩Rj, Wi is the last write set to contain v before Rj in H if and only if Wi is the last write set to contain v before Rj in H’.  The paper says that this means transaction j “reads from” (or “reads v from”) transaction i.

Example: The paper by Papadimitriou, Bernstein, and Rothnie that has the reduction has a good simple example of a non-serializable history:

H = <R1, R2, W2, W1>, where R1 and W2 access a variable x, and R2 and W1 access a variable y.  Both transactions are live since they both write their variables for the last time.  Notice that in H neither transaction reads a variable from the other (both reads happen before either write).  But the two possible candidates for H’ are <R1, W1, R2, W2> (where R2 reads the y written by W1) and <R2, W2, R1, W1> (where R1 reads the x written by W2), so neither H’ candidate has the same set of transactions reading variables from each other.

Reduction: Is from Non-Circular Satisfiability.  Given a formula, they generate a “polygraph” of a database history.  A polygraph (N,A,B) is a directed graph (N,A) along with a set B of “bipaths” (paths that are 2 edges long).  If a bipath {(v,u), (u,w)} is in B, then the edge (w,v) is in A.  So, if a bipath exists in B from v to w, then an edge in A exists from w back to v.  This means that we can view a polygraph (N,A,B) as a family of directed graphs: each directed graph in the family has the same vertices and an edge set A’ that is a superset of A and contains at least one edge from each bipath in B.  They define an acyclic polygraph as a polygraph (represented as a family of directed graphs) where at least one directed graph in the family is acyclic.

In the paper, they relate database histories to polygraphs by letting the vertex set N be the set of live transactions.  We build an edge (u,v) in A from each transaction u that writes a variable to each transaction v that reads that same variable.  If some other transaction w also has that variable in its read set, then the bipath {(v,w), (w,u)} exists in B.  So an edge (u,v) in A means that u “happens before” v, since u writes a variable that v reads, and a bipath {(v,w), (w,u)} means that w also reads the same variable, so it must happen either before u or after v.  They show that a history is serializable if and only if the polygraph for the history is acyclic.
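
Here’s a literal Python transcription of that rule (my own simplification: it takes the read and write sets as given and skips the live-transaction refinement):

def build_polygraph(writes, reads):
    # writes/reads: dicts mapping a transaction id to its set of variables.
    # Returns (A, B): A is a set of directed edges, B a set of bipaths.
    A, B = set(), set()
    txns = set(writes) | set(reads)
    for u in txns:
        for x in writes.get(u, set()):
            for v in txns:
                if v != u and x in reads.get(v, set()):
                    A.add((u, v))                    # u writes x, v reads x
                    for w in txns:                   # any third reader of x
                        if w not in (u, v) and x in reads.get(w, set()):
                            B.add(((v, w), (w, u)))  # w before u or after v
    return A, B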

So, given a formula, they build a polygraph that is acyclic if and only if the formula is satisfiable.  The polygraph will have 3 vertices (aj, bj, and cj) for each variable xj in the formula.  Each a vertex connects by an edge in A to its corresponding b vertex.  We also have a bipath in B from the b vertex through the corresponding c vertex back to the a vertex.

Each literal Cik (literal #k of clause i) generates two vertices, yik and zik.  We add edges in A from each yik to zi((k+1) mod 3) (in other words, the y vertex of each literal connects to the “next” z vertex of the clause, wrapping around if necessary).  If literal Cik is a positive occurrence of variable xj, we add edges (cj, yik) and (bj, zik) to A, and the bipath {(zik, yik), (yik, bj)} to B.  If the literal is negative, we instead add (zik, cj) to A and the bipath {(aj, zik), (zik, yik)} to B.

If the polygraph is acyclic (and thus the history it represents is serializable), then there is some acyclic digraph in the family of directed graphs related to the polygraph.  For each variable, that digraph takes either the first edge of the bipath {(bj, cj), (cj, aj)}, from bj to cj (which we will represent as “false”), or the second edge, from cj to aj (which we will represent as “true”).  (The directed graph can’t have both, because its edges are a superset of A, which means it has the edge (aj, bj), and taking both halves of the bipath would close a cycle.)

If our acyclic directed graph has the “false” (b-c) version of the edge for a literal, then it also has to have the z-y edge of the bipath associated with the literal (otherwise there is a cycle).  If all of the literals in a clause were set to false, this would cause a cycle between these bipath edges and the y-z edges we added in A for each clause.  So at least one literal per clause must be true, which gives us a way to satisfy the formula.

If the formula is satisfiable, then build the digraph that starts with all of A and takes the bipath edges corresponding to the truth value of each variable, as defined above.  This forces which edges you need to take from the bipaths for the literals, to avoid cycles.  The only way now for the graph to have a cycle is for there to be a cycle among the y and z vertices’ edges and bipath edges, but that would imply that we’ve set all of the literals in some clause to false.  Since we know that every clause can be made true (the original formula is satisfiable), a way exists to make the directed graph acyclic.

Difficulty: 7.  It takes a lot of baggage to get to the actual reduction here, but once you do, I think it’s pretty easy and cool to see how the cycles arise from the definitions of the graphs and from the formula.

Non-Circular Satisfiability

The next reduction uses a new variant of Satisfiability, so I figured it would be worth making a separate post for it.

The problem: Non-Circular Satisfiability.  This problem is not in the appendix.

The description: A CNF-Satisfiability clause is mixed if it contains both variables and their negations.  A Satisfiability formula is non-circular if each variable occurs in a mixed clause at most once.  Given a non-circular satisfiability formula, is it satisfiable?

Example: This is actually a more general version of Monotone Satisfiability: in Monotone Sat, no mixed clauses can exist, so all monotone formulas are also non-circular.

Notice that we’re not necessarily restricting things to 3SAT clauses; we can have more or fewer than 3 variables per clause.  So here is a (satisfiable) instance:

F = (x1 ∨ x2) ∧ (~x1 ∨ ~x2) ∧ (~x1 ∨ x2)

This formula has just one mixed clause (the third one) and is satisfiable if x1 is false and x2 is true.  The formula becomes unsatisfiable if we add the clause (x1 ∨ ~x2), but then both variables would appear in a second mixed clause.
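
The definition is easy to check mechanically; here’s a short Python sketch (my own, with clauses in DIMACS-style signed-integer form) that tests the example:

from collections import Counter

def is_non_circular(formula):
    # A clause is mixed if it has both positive and negative literals; the
    # formula is non-circular if each variable is in at most one mixed clause.
    mixed_count = Counter()
    for clause in formula:
        if any(l > 0 for l in clause) and any(l < 0 for l in clause):
            for var in set(abs(l) for l in clause):
                mixed_count[var] += 1
    return all(c <= 1 for c in mixed_count.values())

F = [[1, 2], [-1, -2], [-1, 2]]        # the example formula above
print(is_non_circular(F))              # True: only (~x1 v x2) is mixed
print(is_non_circular(F + [[1, -2]]))  # False: both variables in 2 mixed clauses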

Reduction: The easy way is to just do this by restriction from Monotone Sat (a monotone formula is automatically non-circular, so we’re done).  But if we want an actual reduction, we can use the paper by Papadimitriou, Bernstein, and Rothnie, which has the next two reductions in G&J and uses this problem for the first one.  They also show the reduction for this problem in their paper, reducing from regular CNF-SAT.  So we have a formula F.  For each variable x in F, suppose x appears m times in F.  Create m new variables x1 through xm.  The first occurrence of x in F is replaced by x1, the second by ~x2, the third by x3, and so on down, alternating signs.  We then need to basically add the rules x1 ≡ ~x2 ≡ x3 ≡ … (forcing each new literal we replaced x with to carry the same truth value for x).  In CNF form, that’s equivalent to the chain of clauses (x1 ∨ x2) ∧ (~x2 ∨ ~x3) ∧ (x3 ∨ x4) ∧ … and so on down (I believe the chain also has to wrap back around to x1 at the end, so the one-directional implications force all of the copies to agree).  This means our new formula is equivalent to F (and is satisfiable exactly when F is).  Each of these new clauses is non-mixed thanks to the alternating signs.  The only other occurrence of each xi variable is the one time it replaces x in the original F, which may or may not be a mixed clause, but either way, each variable appears in at most one mixed clause.

Difficulty: 3.  This is about as hard as the Monotone Sat reduction.

Tableau Equivalence

This is another problem where I’m going to supplement G&J’s problem description with definitions from the paper by Aho, Sagiv, and Ullman that has the reduction:

The problem: Tableau Equivalence.  This is problem SR32 in the appendix.

Definitions: We’re given a set of attributes A and a set F of ordered pairs of subsets of A (functional dependencies).  Most database queries ask for certain attributes (these are the “distinguished variables” in the G&J definition) that fill requirements defined by other attributes and values (anything new added is an “undistinguished variable” in the G&J definition).

A tableau is a matrix that represents these attributes and variables.  The columns correspond to attributes, and the rows correspond to tuples that are returned by the query.  The first row of the tableau is the “summary” of the tableau; it holds the distinguished variables we want to return (and possibly some constants).

Tableau example: This example comes from p. 223 of the paper.  For the query:

Find all values for variables a1 and a2 such that we can find values for variables b1 through b4 such that the following strings are all in our database instance (called “I” in the paper).  By convention in the paper, a variables are distinguished variables and b variables are undistinguished:

  • a1b1b3
  • b2a21  (That’s the constant 1 at the end)
  • b2b1b4

If I were the set of strings {111, 222, 121}, then all assignments of 1’s and 2’s to a1 and a2 work:

  • If a1 and a2 are both 1, then assigning all b variables to 1 generates the string 111 in all 3 cases above, which is in I.
  • If a1 = 1 and a2 = 2, then we can assign 2 to b1 and 1 to all other b variables and get 121 in all cases, which is in I.

Before I get to the 2 harder cases, let me show the tableau:

A   B   C
a1  a2
a1  b1  b3
b2  a2  1
b2  b1  b4

The first row lists the attributes we’re considering.  The summary row holds our distinguished variables: a1 comes from attribute A and only occurs in the first spot of each result string, and a2 comes from B and only occurs in the second spot.  Our query doesn’t want any variables from C, so the summary has no entry in that column.

The rows below the summary show how to build legal strings (like my list above).

So now we can use the tableau to help see how to find legal values to the variables:

  • If a1 = 2 and a2 = 1, then we need to assign a 1 to b2 to make the second (non-summary) row 111.  We also need to assign a 2 to b1 and b3 to make the first row 222.  This means we need to assign a 1 to b4 to make the bottom row 121.
  • If a1 and a2 are both 2, we need to set b2 to 1 to make the middle row 121.  We need to set b1 and b3 to 2 to make the top row 222.  This means we need to set b4 to 1 to make the bottom row 121.

The problem: Given two tableaux T1 and T2 which share the same A, F, X, and Y sets (A and F as defined above; X and Y are G&J’s names for the sets of distinguished and undistinguished variables), are they equivalent?  That is, do they generate the same sets of legal values for their distinguished variables over all possible I sets?

Example: This gets tricky, partially because of the need to worry about “all possible I sets”, and partially because adding functional dependencies makes things equivalent in subtle ways.  Here is the example from page 230 of the paper:

A   B   C   D
a1  a2  a3  a4
a1  b1  b2  b3
b4  b1  a3  b5
a1  a2  b6  b7
a1  b8  b9  a4

Suppose the functional dependencies B→A and A→C hold.  Then all rows with b1 in the B column must have the same value (a1) in the A column.  Similarly, A→C implies that the entire third column can be replaced by a3.  This gives us the equivalent tableau:

A   B   C   D
a1  a2  a3  a4
a1  b1  a3  b3
a1  b1  a3  b5
a1  a2  a3  b7
a1  b8  a3  a4

Also, if the only difference between two rows is that one row has nondistinguished variables that don’t appear anywhere else, that row can be eliminated.  So we can get rid of the first (non-summary) row (with b3) because it is just like the second (with b5).  But then we realize that the a1 b1 a3 b5 row only differs from the row below it in b1 and b5, which now appear only in that row.  So we can remove that row too, to get the equivalent:

A   B   C   D
a1  a2  a3  a4
a1  a2  a3  b7
a1  b8  a3  a4


Reduction: From 3SAT.  Given a formula, we build 2 tableaux.  The set A will have one attribute for each clause and one for each variable in the formula.  The clause variables will be distinguished variables.  Our T1 tableau will be set up as follows:

For each clause’s row, we use a common undistinguished variable in the columns of the 3 variables in the clause (so every clause that shares a variable gets the same undistinguished variable in that variable’s column), and separate (only used once) undistinguished variables in the other columns.

The T2 tableau will have 7 rows for each row in T1.  In T2 we replace the common undistinguished variables with the 7 sets of constants (0 or 1) that set the clause’s three variables in the ways that make the clause true.
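
Those 7 rows are just the 2^3 − 1 assignments that satisfy a 3-literal clause.  A quick sketch of generating them (my own, with a clause given as three signed variables):

from itertools import product

def satisfying_rows(clause):
    # clause: e.g. [1, -2, 3] for (x1 or ~x2 or x3).  Yield the 7 of 8
    # 0/1 assignments to its three variables that make the clause true.
    for bits in product((0, 1), repeat=3):
        if any(bit == (lit > 0) for bit, lit in zip(bits, clause)):
            yield bits

print(list(satisfying_rows([1, -2, 3])))
# every triple except (0, 1, 0), the one assignment that falsifies the clause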

They prove a bunch of lemmas after this, but it boils down to this: if we have a truth assignment for the formula, we can map it to the tableaux by setting the common variables in T1 to the truth values in the assignment, and all clauses will be true in both T1 and T2.  If the tableaux are equivalent, then we must have found a way to set those common variables, and that gives us a truth assignment for the formula.

Difficulty: 7.  This isn’t that hard of a reduction.  Even the lemmas aren’t too hard, though they do depend on a paper’s worth of previous results in equivalences (like the functional dependency thing I did in the example).  But there is a ton of definitions to get through before you can start.

Conjunctive Boolean Query

This is another problem from the same paper as last week’s.  The paper, though, doesn’t really give a reduction this time.  Thankfully, I think it’s easy enough for us to build ourselves.

The problem: Conjunctive Boolean Query.  This is problem SR31 in the appendix.

The description: Given a conjunctive query in the format described last week, is the query true?  (That is, can we find any tuples to satisfy the query?)

Examples: Here is the “List all departments that sell pens and pencils” query from last week:

(x). Sales(x, pen) and Sales(x, pencil).

This problem would return true if there was an actual tuple in the database that could bind to x, and false otherwise.

Reduction: The paper by Chandra and Merlin that we used last week has the definition of this problem, but just says the NP-Completeness “follows from, say, the completeness of the clique problem for graphs”.

But I think the reduction is pretty easy to spell out.  If we’re given an instance of Clique (a graph G and an integer K), we can just build the query:

“Do there exist k vertices x1 through xk such that all edges between any two vertices in the set exist?”

We can implement this as a database by creating a relation holding all of the edges, and then the conjunctive query will have at most O(V^2) clauses.
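
Here’s a sketch of spelling the query out (my own formatting; I’m assuming the edge relation is stored in both orientations so the query doesn’t have to worry about order):

from itertools import combinations

def clique_query(k):
    # Build the conjunctive query asking whether a k-clique exists:
    # one Edge atom for each of the C(k,2) pairs of the k variables.
    vars_ = [f"x{i}" for i in range(1, k + 1)]
    atoms = [f"Edge({u},{v})" for u, v in combinations(vars_, 2)]
    return "(). ∃" + ",".join(vars_) + ". " + " ∧ ".join(atoms)

print(clique_query(3))
# (). ∃x1,x2,x3. Edge(x1,x2) ∧ Edge(x1,x3) ∧ Edge(x2,x3)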

Difficulty: 3, but it’s a lot of work to understand the database definitions to get to the actual problem.

Conjunctive Query Foldability

Back on track with more database problems.  This is another one of those “really hard to explain in NP-Complete language” problems.

The problem: Conjunctive Query Foldability.  This is problem SR30 in the appendix.

The description: I find G&J’s definition very confusing, I think because they don’t have room to define what the various variables mean in terms of databases (they actually send you to the paper, as if they knew it was confusing).  The definitions in the paper by Chandra and Merlin that has the reduction do a good job of giving names to things and building up slowly.  So:

They define a Relational Data Base as a finite domain D and a set of relations R1 through Rs.  Each relation is a set of ordered tuples of elements of D.

A first-order query is of the form (x1, x2, …, xk). Φ(x1, x2, …, xk), where Φ is a first-order formula whose only free variables are x1 through xk.

The result of a query on a database B with domain D is the set of k-tuples {(y1, …, yk) ∈ D^k such that Φ(y1, …, yk) is true in B}.

A conjunctive query is a first-order query of the form (x1, …, xk). ∃xk+1, xk+2, …, xm. A1 ∧ A2 ∧ … ∧ Ar, where each Ai is an atomic formula asking about a relation in the database, and each element in the relation is a variable from x1 to xm or a constant.

Examples of conjunctive queries: Here are some examples from the paper, so we can see how this relates to databases:

“List all departments that sell pens and pencils”.  If we have a relation Sales(Department, Item)  (“Department” and “Item” are placeholders for variables), the conjunctive query is:

(x). Sales(x, pen) and Sales(x, pencil).

In this case, there are no new variables (or ∃) added in the query, and “pen” and “pencil” are constants.

“List all second-level or higher managers in department K10” (people who are in K10 who manage someone who manages someone else).  If we have a relation Emp(Name, Salary, Manager, Dept), the conjunctive query is:

(x1).∃(x2, …, x9) .Emp(x2, x3, x4, x5) ∧Emp(x4, x6, x1, x7) ∧ Emp(x1, x8, x9, K10)

In this query, x1 is the “answer”, but it depends on the existence of the other x variables.  In particular, x4, who is the employee x1 manages, and x2, who is the employee x4 manages.

Ok, now the actual foldability definition (this is written less formally than either the paper or G&J):

Given two conjunctive queries Q1 and Q2, can we create a function σ mapping all constants and all variables x1 through xk to themselves (and possibly mapping the other variables to constants or to variables x1 through xk as well), such that if we replace each variable x in Q1 with σ(x), we get Q2?

Example: Here is a “worse” version of the first example above:

(x). Sales(x, xpen) ∧ Sales(x, xpencil) ∧ xpen = pen ∧ xpencil = pencil

This can be “folded” into:

(x). Sales(x, pen) ∧ Sales(x, pencil) ∧ pen = pen ∧ pencil = pencil

…it still has those redundant equalities at the end, though, and I don’t see a way to use the definition of foldability to get rid of them.  I think that the definition given in Chandra and Merlin’s paper might let you do it, because their function maps the “natural model” of a query, which I think includes relations.

Reduction: From Graph Coloring.  We start with a graph and an integer K.  We build one variable for each vertex, and K additional (separate) variables in a set called C.  The relation R relates vertices connected by an edge.

Q1 is: (). ∃V. ∃C. ( ⋀_{(u,v) ∈ E} R(u,v)  ∧  ⋀_{u,v ∈ C, u ≠ v} R(u,v) )

Q2 is: (). ∃C. ⋀_{u,v ∈ C, u ≠ v} R(u,v)

So what I think we’re trying to do is “fold” the variables in V into the variables in C, so that there is still a relation for each edge.  That only works if the vertices on each side of an edge map to different colors, because otherwise the relation won’t show up (because of the u ≠ v constraint).  But I’m not sure, and the proof in the paper stops after building this construction.
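
Here’s how I’d mechanize the construction (a sketch under my reading of it; only the atom lists matter):

def foldability_instance(edges, k):
    # Build the atom lists of Q1 and Q2 from a Graph Coloring instance
    # (edges as pairs of vertex names, k colors).  Q2 is a "clique" of
    # R-atoms over k fresh color variables; Q1 adds an R-atom per edge.
    colors = [f"c{i}" for i in range(1, k + 1)]
    color_atoms = [("R", u, v) for u in colors for v in colors if u != v]
    edge_atoms = [("R", u, v) for (u, v) in edges]
    return edge_atoms + color_atoms, color_atoms   # Q1's atoms, Q2's atoms

q1, q2 = foldability_instance([("v1", "v2"), ("v2", "v3"), ("v1", "v3")], 3)
# Folding Q1 onto Q2 must send each vertex variable to a color variable while
# keeping every R-atom present, i.e. it must properly 3-color the triangle.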

Difficulty: 9.  I’m pretty sure a lot of this is going over my head.