Minimum Inferred Finite State Automaton

Here’s the actual problem my student Dan Thornton spent most of his semester trying to understand and prove.  It’s pretty crazy.

The problem: Automaton Identification from Given Data.  We abbreviate it “AutID”.  G&J call this problem “Minimum Inferred Finite State Automaton” and it is problem AL8 in the appendix.

The description: Given a finite set of data, is it possible to build a deterministic finite automaton of n or fewer states that agrees with the data?
Before describing this problem we must introduce a few concepts and definitions. We want to construct a finite automaton; by this we mean a Mealy-model deterministic finite automaton. Formally, it is a 6-tuple of the form

    \[FA = \langle U,S,Y,f_{tr},f_{out},s_{0} \rangle\]

  • U is the input alphabet, where u \in U is an individual character, and \overline{u} \in U^{*} is a string.
  • S is the set of states in the model; s \in S refers to an individual state. S may equivalently be referred to as the set S_{[s]} of all equivalence classes [s]_{\overline{u}}, with representative element \overline{u}: each equivalence class is the set of all input strings \overline{u} \in U^{*} such that f_{tr}(\overline{u}) = s.
  • Y is the output alphabet, where y \in Y is an individual character and \overline{y} \in Y^{*} is a string.
  • f_{tr}: S \times U \rightarrow S is the transition function.
  • f_{out}: S \times U \rightarrow Y is the output function.
    • Both of the above functions may be extended in a natural way so that
      f_{tr}: S \times U^{*} \rightarrow S and f_{out}: S \times U^{*} \rightarrow Y^{*} respectively. Usage should be clear from context.
  • s_{0} \in S is the start state.

We use the standard convention that \phi refers to the empty string of no characters.

A black box is similar to an ‘unknown’ Mealy model with the restriction that for any input string \overline{u} we only get to see the last output character f_{out} returns. Consider the following black box, with input alphabet U = \lbrace 1 , 0 \rbrace and output alphabet Y = \lbrace A,B,C,D \rbrace.

Here the edges are labeled ( input / output )

So if the above black box were passed the string 0001, the complete Mealy model’s output function would give us back the string ADDC, whereas the black box would only give us back C.
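To make the full-output vs. black-box distinction concrete, here is a minimal Python sketch. The two-state machine below is a hypothetical one chosen only so that input 0001 produces ADDC; it is an assumption for illustration, not necessarily the machine in the figure.

```python
# Hypothetical Mealy machine over U = {0,1}, Y = {A,B,C,D}.
# Transitions and outputs are keyed by (state, input character).
F_TR = {("s0", "0"): "s1", ("s0", "1"): "s0",
        ("s1", "0"): "s1", ("s1", "1"): "s0"}
F_OUT = {("s0", "0"): "A", ("s0", "1"): "B",
         ("s1", "0"): "D", ("s1", "1"): "C"}

def mealy_output(u, s0="s0"):
    """Extended f_out: the full output string for input string u."""
    out, state = [], s0
    for ch in u:
        out.append(F_OUT[(state, ch)])
        state = F_TR[(state, ch)]
    return "".join(out)

def black_box(u, s0="s0"):
    """A black box only reveals the last character of the output."""
    return mealy_output(u, s0)[-1]
```

On input 0001 the full Mealy output is ADDC, while the black box reports only C.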

Our finite automaton is constructed from data we are given; here we assume that data is a finite set D of pairs of the form (input \ string, black \ box \ output),
D = \lbrace( \overline{u}_{1},y_{1} ) ,...,(\overline{u}_{k},y_{k}) \rbrace, such that each \overline{u}_{i} \in U^{*} is not empty and \overline{u}_{i} \neq \overline{u}_{j} for i \neq j.

An automaton A agrees with data D if for every datum d = (input \ string, output \ character) \in D, the final output character of f_{out}(input \ string) equals the output character.
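The agreement check can be sketched directly from this definition. The `agrees` helper and the one-state “echo” machine below are illustrative assumptions, not part of the reduction itself.

```python
def agrees(f_tr, f_out, s0, D):
    """Does the automaton agree with data D?  For every pair
    (input string, output character) in D, the final character the
    machine outputs on the input string must equal the given character."""
    for u, y in D:
        state, last = s0, None
        for ch in u:
            last = f_out[(state, ch)]
            state = f_tr[(state, ch)]
        if last != y:
            return False
    return True

# Demo machine (an assumption, not from the post): one state, echoes its input.
echo_tr = {("q", "0"): "q", ("q", "1"): "q"}
echo_out = {("q", "0"): "0", ("q", "1"): "1"}
```

For example, the echo machine agrees with {(01, 1), (10, 0)} but not with {(01, 0)}.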

Formal Problem Statement:
Given input alphabet U, output alphabet Y, and a finite set D of pairs, determine whether an automaton of n or fewer states exists that agrees with D.

The Reduction:

The reduction by Gold uses Monotone EQ SAT: given a conjunction of clauses F = \wedge_{i=1}^{l} C_{i}, where each clause in F contains only negated or only non-negated variables z_{j} and the number of clauses equals the number of variables, is there an assignment of the variables so that F is satisfied? Our instance will have l clauses and l variables. Clauses will be numbered C_{0} ... C_{l-1} and variables will be numbered z_{1} ... z_{l}.

In the above instance of Monotone EQ SAT we do not have a variable z_{0}. Indices in our reduction will sometimes force us to reference z_{0}; we may think of this variable as simply a placeholder for z_{l}.

The idea behind this reduction is that we may encode our instance of Monotone EQ SAT into data D in such a way that if we can create a finite automaton from our encoding, then we can use it to obtain a solution to the instance of Monotone EQ SAT if one exists.

Aut ID construction:
Here we transform our instance of Monotone EQ SAT into data D; to do so we need the following functions:

    \[I_{F}(i,k) = \left\{ \begin{array}{ll} "no-value" & \quad \text{ if }\ z_{l - k} \in C_{i} \\ 0 & \quad otherwise \end{array} \right. \\ \text{where } \\ 0 \leq i \leq l-1 \text{ and } \\ 1 \leq k \leq l\]

    \[\pi_{F}(i) = \left\{ \begin{array}{ll} 1 & \quad \text{if}\ C_{i} \text{ contains uncomplemented variables}\ \\ 0 & \quad \text{if}\ C_{i} \text{ contains complemented variables}\ \end{array} \right. \\ \text{where } \\ 0 \leq i \leq l-1\]

The bounds on the above functions are a result of our numbering of clauses C_{i} and z_{j}. Note that z_{0} is really a placeholder for z_{l}.
Given a propositional formula F which is an instance of Monotone EQ SAT, we encode it by the following:

  • The variables of F are encoded by the set
    D_{L} =\lbrace (1,0),(11,0),\ldots,(1^{l-1},0),(1^{l},1) \rbrace. Here each z_{j} corresponds to the element (1^{j},y), where y is an element of the output alphabet, Y.
  • We encode the clauses C_{i} of F by D_{C} = \lbrace (1^{i}01^{j}, I_{F}(i,j)) \rbrace for 0 \leq i \leq l-1 , 1 \leq j \leq l. Here each clause has an \textit{identifying tag}, 1^{i}0, and the 1^{j} term is used to specify a particular variable, z_{l - j}. Importantly, data in this set for which I_{F}(i,j) returns "no-value" are thrown out.
    • Each clause encoding 1^{i}01^{j} will have output 0 only if clause C_{i} does not contain variable z_{l - j}; otherwise I_{F}(i,j) is "no-value" and the datum is thrown out.
  • We then encode whether each clause is a set of all negated, or non-negated variables. D_{P} = \lbrace (1^{i}00, \pi_{F}(i)) \rbrace for 0 \leq i \leq l-1.

As a simple example we convert the following formula to data (here l = 4; note that for simplicity this small example has fewer clauses than variables):

    \[F = (z_{1} \vee z_{2} \vee z_{4}) \wedge (\neg z_{2} \vee \neg z_{3} \vee \neg z_{4})\]

So after our conversion we get:

  • D_{L} = \lbrace (1,0) , (11,0) , (111,0) , (1111,1) \rbrace
  • D_{C} = \lbrace (01,0),(10111,0) \rbrace
  • D_{P} = \lbrace (00,1),(100,0) \rbrace
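Assuming clauses are represented as (polarity, variable-index-set) pairs, the encoding above can be sketched as follows; `encode` is a hypothetical helper name, and z_0 is treated as a placeholder for z_l as in the text. Running it on the example formula reproduces the three sets above.

```python
def encode(clauses, l):
    """Encode a Monotone EQ SAT instance as the data sets D_L, D_C, D_P.

    `clauses` is a list of (positive, vars) pairs: `positive` is True for an
    all-un-negated clause, and `vars` is the set of variable indices in it."""
    # D_L: the variable encodings 1^j, output 1 only for 1^l.
    D_L = [("1" * j, int(j == l)) for j in range(1, l + 1)]
    D_C, D_P = [], []
    for i, (positive, zs) in enumerate(clauses):
        for j in range(1, l + 1):
            var = l - j if l - j > 0 else l   # z_{l-j}, with z_0 read as z_l
            if var not in zs:                 # I_F(i,j) = 0; "no-value" data are dropped
                D_C.append(("1" * i + "0" + "1" * j, 0))
        D_P.append(("1" * i + "00", int(positive)))   # pi_F(i)
    return D_L, D_C, D_P
```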

Aut ID instance
Now we create an instance of Aut ID by letting n be the number of variables l and D = D_{L} \cup D_{C} \cup D_{P}.

True{Aut ID} \Rightarrow True{Monotone EQ SAT}
We may now assume that we have a finite automaton A that agrees with the data in D, and use the transition function f_{tr} and the output function f_{out} to deduce properties of A.

Let s_{\overline{u}} be f_{tr}(\overline{u}), this is the state that A is in after receiving the input string \overline{u}.

A splitting string of \overline{u}_{1} and \overline{u}_{2} is a string \overline{u}_{s} \in U^{*} such that the final character of f_{out}(\overline{u}_{1} \overline{u}_{s}) differs from that of f_{out}(\overline{u}_{2} \overline{u}_{s}). If such a string exists then clearly s_{\overline{u}_{1}} \neq s_{\overline{u}_{2}}. If two states do not have a splitting string, then they are equivalent.
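When one exists among short strings, a splitting string can be found by brute force. This sketch assumes the machine is exposed as a function `run` returning the final output character; the cutoff `max_len` is an arbitrary assumption.

```python
from itertools import product

def find_splitting_string(run, u1, u2, max_len=4, alphabet="01"):
    """Brute-force search for a splitting string of u1 and u2.

    `run` maps an input string to the machine's final output character;
    a splitting string w makes run(u1 + w) differ from run(u2 + w)."""
    for n in range(1, max_len + 1):
        for w in map("".join, product(alphabet, repeat=n)):
            if run(u1 + w) != run(u2 + w):
                return w
    return None  # no splitting string up to max_len: the states look equivalent
```

For instance, against a machine whose final output is the parity of the 1s read so far, 1 and 11 are split by appending a single character, while 1 and 111 are not split at all.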

We define variable states as follows: a variable z_{j} has the encoding 1^{j}, so s_{1^{j}} is z_{j}‘s variable state; we will sometimes refer to this as s_{z_j}.

Clause states are defined in a manner similar to variable states. A clause C_{i} corresponds to the identifying tag in its encoding, 1^{i}0, so s_{1^{i}0} is C_{i}‘s clause state; we will sometimes refer to this as s_{C_i}.

In the following theorems we shall talk of assigning clause state s_{C_i} to variable state s_{z_j}; this means that these two states are equivalent, or alternatively that there does not exist a splitting string of s_{C_i} and s_{z_j}.

Theorem 1: All variable states are different.

Proof. Assume two variable states {s_z}_j and {s_z}_{j'} are equal, where j > j'. In the encoding of D the variables z_{j} and z_{j'} correspond to 1^{j} and 1^{j'} respectively. But now we may define the splitting string 1^{l-j}. The final character of f_{out}(1^{j} 1^{l-j}) = f_{out}(1^{l}) is defined in our data D by the entry (1^{l},1). By a similar argument, the final character of f_{out}(1^{j'} 1^{l-j}) = f_{out}(1^{l'}), where l' = l - (j - j') < l; in our data this corresponds to one of the entries (1^{l'},0). Our automaton A must agree with the data, so the states {s_z}_j and {s_z}_{j'} cannot be equal. Thus we have reached a contradiction. \square

Theorem 2: Each state in our finite automaton A corresponds to exactly one variable.

Proof. This follows by the pigeonhole principle: A has exactly l states, there are l variable states, and by Theorem 1 no two variable states are equal. \square

By the above we may talk about the states of A in terms of the variable each state corresponds to.

Theorem 3: If clause state {s_C}_i and variable state {s_z}_{l-j} are equal, then clause C_{i} contains variable z_{l-j}.

Proof. Assume {s_C}_i = {s_z}_{l-j} and z_{l-j} is not contained in C_{i}. Once again we have the encodings C_{i} = 1^{i}0 and z_{l-j} = 1^{l-j} that were defined in our construction of D. Now we may define a splitting string u_{s} = 1^{j}. When we append the splitting string to each of the above encodings we get C_{i} + u_{s} = 1^{i}01^{j} and z_{l-j} + u_{s} = 1^{l-j} 1^{j} = 1^{l}. Both of these strings are defined in our data D, as (1^{i}01^{j},I_{F}(i,j)) and (1^{l},1) respectively, and by our assumption that z_{l-j} is not contained in C_{i}, I_{F}(i,j) = 0. So by an argument identical to that of Theorem 1, {s_C}_i \neq {s_z}_{l-j}, and our theorem follows. \square

Theorem 4: No two clause states {s_C}_i and {s_C}_k may be equal if they are of opposite composition, meaning one has all negated variables and the other all un-negated variables.

Proof. Once again we proceed by contradiction: assume {s_C}_{i} = {s_C}_{k} and that {s_C}_{i}, {s_C}_{k} are of opposite composition. Once again we use each clause’s encoding, 1^{i}0 and 1^{k}0 respectively. Now we may define the splitting string 0. Looking at the final character output by f_{out}, we see f_{out}(1^{i}00) = \pi_{F}(i) and f_{out}(1^{k}00) = \pi_{F}(k). It follows that \pi_{F}(i) \neq \pi_{F}(k), as {s_C}_i and {s_C}_k are of opposite composition. Thus we arrive at a contradiction. \square

Theorem 5: A solution to our instance of Aut ID gives us a solution to F, our instance of Monotone EQ SAT.

Proof. By Theorem 2 every clause state must be assigned to a single variable state; notice this does not mean only a single assignment is possible, just that the given automaton A will give a single assignment. By Theorem 1 no clause may be assigned more than one variable state. By Theorem 3 a clause may be assigned to a variable state only if the clause contains the corresponding variable. So we see that our automaton A assigns clauses to variables that could potentially satisfy them. Furthermore, by Theorem 4 each variable state is given clauses of only one composition, so there is an obvious variable assignment that satisfies all of the clause states equal to a given variable state: set the variable true if its clauses are un-negated and false if they are negated (variables with no assigned clauses may be set arbitrarily). \square

True{Monotone EQ SAT} \Rightarrow True{Aut  ID}
Now we assume there exists a mapping \theta_{F}: z_{j} \rightarrow \lbrace True \ (or \ 1), False \ (or \ 0) \rbrace that satisfies our instance of Monotone EQ SAT, F, with l clauses and variables. From this mapping we build a finite automaton A_{F} that agrees with all of the data D.

We let A_{F}‘s 0-transitions refer to transitions on input 0. Similarly 1-transitions  refer to the transitions on input 1.

So first we define all 1-transitions as given in the following example automaton; such transitions clearly satisfy all data in D_{L}.


Now, for A_{F} to satisfy all of the data in D_{C}, Theorems 3 and 4 above suggest that mapping each clause state to the variable state of a variable that satisfies the clause is the natural thing to do. We may determine this mapping of clauses to the variables that satisfy them from \theta_{F}, our truth assignment that satisfies F.

Such mappings may be added to A_{F} by noticing the following: as soon as we encounter a 0 in the input we must transition to a clause state. So we may define a 0-transition out of each state s_{1^{i}} so that the clause state {s_C}_{i} (which, recall, corresponds to the string 1^{i}0) is the variable state {s_z}_j of the variable z_{j} that satisfies C_{i}. When adding 0-transitions we must be careful with indices: if clause C_{i} is satisfied by variable z_{j}, then the single 1-output reachable from {s_C}_{i} = {s_z}_j occurs on the input 1^{i}01^{l-j}, and in order to agree with the data in D_{C} this datum must be one of those thrown out — which is exactly what the definition of I_{F} guarantees, since z_{l-(l-j)} = z_{j} \in C_{i}.

Now we must assign an output to each of these 0-transitions. To do so we iterate through every clause state {s_C}_{i} by sequentially considering s_{1^{i}0}. At each clause state we assign the output of the 0-transition starting at this state to be \pi_{F}(i). Notice that if two clause states {s_C}_{i} and {s_C}_{k} are the same state, then they must both be satisfied by the same variable, and so \pi_{F}(i) = \pi_{F}(k); the output is therefore well defined.

Below is an example of adding such a 0-transition and assigning its output:


The blue edge is the 0-transition added that maps the clause state C_{2}, which corresponds to the string 110, to the variable state {s_z}_{2}; semantically this means that the variable z_{2} satisfies this clause.
The red edge shows us adding the output value of a 0-transition: from the above blue edge we know {s_z}_{2} serves as clause C_{2}‘s clause state, so the 0-transition from this clause state will have output \pi_{F}(2).
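Putting the construction together, here is a sketch for a tiny hypothetical instance F = (z_1 \vee z_2) \wedge (\neg z_1) with l = 2 and satisfying assignment \theta_F = \lbrace z_1: False, z_2: True \rbrace. The state numbering (state j plays s_{z_j}, with state l doubling as the start state, since z_0 is read as z_l) follows the Figure 1 convention described above; the instance, assignment, and hand-computed data set D are illustrative assumptions, not from the post.

```python
l = 2
clauses = [(True, {1, 2}), (False, {1})]   # (all-un-negated?, variables in clause)
theta = {1: False, 2: True}                # a satisfying assignment theta_F

# 1-transitions walk the cycle z_1 -> z_2 -> ... -> z_l -> z_1,
# outputting 1 only on the edge into z_l.
t1 = {j: j % l + 1 for j in range(1, l + 1)}
o1 = {j: int(j % l + 1 == l) for j in range(1, l + 1)}

# 0-transitions: from s_{1^i} (the state reached by the clause tag's 1s)
# jump to the variable state of the variable satisfying C_i under theta.
t0, o0 = {}, {}
for i, (positive, zs) in enumerate(clauses):
    src = i if i > 0 else l                       # s_{1^0} is the start state z_l
    t0[src] = next(z for z in zs if theta[z] == positive)
# The 0-transition leaving a clause state outputs pi_F(i) for the clause(s)
# mapped into it; Theorem 4 guarantees this is well defined.
for i, (positive, zs) in enumerate(clauses):
    src = i if i > 0 else l
    o0[t0[src]] = int(positive)
for j in range(1, l + 1):                         # unused 0-transitions: arbitrary
    t0.setdefault(j, j)
    o0.setdefault(j, 0)

def last_output(u):
    """Final character A_F's f_out emits on input string u."""
    state, out = l, None                          # start state is z_l
    for ch in u:
        out, state = (o1[state], t1[state]) if ch == "1" else (o0[state], t0[state])
    return out

# The encoding of F (D_L, D_C and D_P, computed by hand for this instance).
D = [("1", 0), ("11", 1),                         # D_L
     ("1011", 0),                                 # D_C
     ("00", 1), ("100", 0)]                       # D_P
```

Checking `last_output` against every datum in `D` confirms that this A_F agrees with all of the data, as Theorems 6-8 below argue in general.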

Theorem 6: Our constructed automaton A_{F} agrees with all of D_{L}.

Proof. This is clear from our example of A_{F} in Figure 1. We have defined all the 1-transitions, and notice that the final output character of f_{out}(1^{j}) is 0 for j < l, while the final output character of f_{out}(1^{l}) is 1; this exactly matches our data D_{L}. \square

Theorem 7: Our constructed automaton A_{F} agrees with all of D_{C}.
Proof. Here we consider the final character output by f_{out} for all encodings of a fixed clause C_{i}; these encodings are precisely the strings 1^{i}01^{j} for 1 \leq j \leq l. With the addition of the 0-transitions, we transition into s_{1^{i}0}, and from this state we only need to worry about 1-transitions. Recall that in our definition of D_{C}, all data are of the form (input \ string, 0). In A_{F} there is a single 1-transition that outputs the non-zero value 1. To agree with D_{C} it is sufficient to make sure that this non-zero output occurs only for an input string not in D_{C}.

From our assignment of 0-transitions defined above, this singular 1-output will occur for the input 1^{i}01^{l-j}, where z_{j} is the variable that satisfies C_{i}, so z_{j} must be in C_{i}. So A_{F} will give the datum (1^{i}01^{l-j},1). Now notice the corresponding entry in D_{C}, as defined by I_{F}(i,l-j), would be (1^{i}01^{l-j},"no-value"), since z_{l-(l-j)} = z_{j} \in C_{i}; but all of these data were removed from D_{C}. So the sufficiency condition above is satisfied. \square

Theorem 8: Our constructed automaton A_{F} agrees with all of D_{P}.

Proof. The manner in which we defined our 0-transitions explicitly involved a \pi_{F}(i) term: for any clause state {s_C}_{i} = s_{1^{i}0}, the 0-transition defined above will output \pi_{F}(i), which agrees with D_{P}. \square

By Theorem 6, Theorem 7, and Theorem 8, we conclude that A_{F} agrees with the data D.

(Back to me)

Difficulty: 9.  This is basically the reduction in the Gold paper, though Dan reorganized it a bunch to make it fit our format, and be more understandable.  But the formatting of the various variables, the fact that you need to define the I_F function on variable k to talk about z_{l-k} instead of z_k, and the three different kinds of test data all make this super hard to understand.  In some ways, I wonder if this should be a 10, given the work we did on it.  But I’m saving the 10’s for things that are truly impenetrable, and we did manage to figure this one out.

12 responses to “Minimum Inferred Finite State Automaton”

  1. Hi Prof.,
    I had a query related to another problem [AL2] “Two way Finite State Automaton Non-Emptiness” in G&J (I think you haven’t covered it yet).

    Essentially, it mentions that determining if a 2Way FSA on a Single Alphabet is empty is NP Complete.

    I am confused about this one. Why can’t we test strings of 1s of all lengths one by one till we find one that is accepted, or determine the automaton is empty? Unlike with a binary alphabet we don’t have an exponential number of possible input strings. And we would only have to test strings that are polynomial in size w.r.t. the size of the given automaton.

    So how is this problem NPComplete?

    • ** The automaton is Non-Deterministic (just to clarify).

      • Well, I haven’t got there, so I don’t know the problem too well. But the way the comments read, I’d say that this _specific_ problem is NP-Complete when sigma is 2, but not when it’s 1. They say that the _general_ problem is NP-Complete when sigma is 1.

        So the question is what are they generalizing in the “general” problem? I’m not sure. My guess is that they’re generalizing from “two-way” to “any-way”. So instead of mapping to Qx{-1,0,1}, you can map into QxZ.

        But I’m not sure. Have you read the Galil paper that they reference?

  2. No Prof. Unfortunately not.

    1. “INSTANCE: A two-way nondeterministic finite state automaton A =(……”
    2. “Comment: PSPACE-complete, even if |Σ| = 2 and A is deterministic. If |Σ| = 1 the general problem is NP-complete [Galil, 1976]. ”

    By general I suppose they mean the nondeterministic variant of the problem. They needed to specify it as in the previous line for a 2 alphabet version they mentioned that ‘even in deterministic case’ its PSPACE complete.

    • Maybe. I just downloaded the Galil paper and the way they phrase it is “The nonemptiness problem for two way nondeterministic finite automaton languages over one symbol is in NP (in fact it is complete in NP)”

      It’s weird that G&J used “general” to mean “nondeterministic” though…

  3. Thank you Prof. (I didn’t have access to the journal so couldn’t recheck myself). Yes its weird/unclear, took me some time too to get what they might have meant. 🙂

    I await your post on this as its still unclear why it should be NPComplete. On the face of it, doesn’t seem like it.

  4. Perhaps because of the exponential number of ways the head can move back and forth.

    • Yeah, the proof starts by saying that if you have a 2-way NFA with n states, there’s an equivalent NFA with n^(n+1) states. So that’s probably where you get the exponential time from.

  5. Hi Prof.

    I had some more doubts on yesterday’s discussion. As you explained “2-way NFA with N states, there’s an equivalent nfa with n^(n+1) states. So that’s probably where you get the exponential time from.”

    I am clear about that, but if I understand correctly the size of the string accepted by the 2Way NFA would still be (in the worst case) polynomial to number of states in the 2Way NFA. Cause if the smallest string accepted was exponential in size, the verification will also take exponential time and thus the problem by definition would not be NP Complete ? Or is there a mistake somewhere in understanding.

    Since I don’t have access to the paper I am not sure if an exponential sized accepting string can be verified in polynomial time (or what they mentioned about input size verification in general)

  6. Or instead of using an input string as witness they are using some other method to verify the non emptiness that can be done in polynomial time ?

    • It looks like they’re using it to build a nondeterministic algorithm. It’s not a regular reduction like we normally do. They use it to show that if the 2-way NFA accepts a string, it needs to be bounded by n^(n+1) in size (and thus the tape is bounded as well). Then they count configurations of states and heads and show how to build an algorithm to solve the problem. The paper was written before the “normal” ways of doing reductions really came about, and I don’t know a lot about how you prove things directly from the machines.

  7. Thank you Prof.
    I too was unable to find this Problem in any of the modern books or any related reduction in our modern form either. Seems a lot complicated as compared to our standardized form too.
