
CS/EE 5750/6750: Asynchronous Circuit Design

Chris J. Myers
Lecture 5: Huffman Style Synthesis
Chapter 5

Huffman Circuit

Figure

Huffman Circuit Design Method

Circuit Delay Model

  • Uses the bounded gate and wire delay model.
  • Environment must also be constrained:

    • Single-input change (SIC) - each input change must be separated by a minimum time interval.
    • SIC fundamental mode - the time interval is the maximum delay for the circuit to stabilize.
    • Multiple-input change (MIC) - several inputs are allowed to change together.
    • MIC fundamental mode - the environment waits for the circuit to stabilize before changing inputs again.

Solving Covering Problems

  • The last step of state minimization, state assignment, and logic synthesis is to solve a covering problem.
  • A covering problem exists whenever you must select a set of choices with minimum cost which satisfy a set of constraints.
  • Classic example: selection of the minimum number of prime implicants to cover all the minterms of a given function.

Formal Derivation of Covering Problem

  • Each choice is represented with a Boolean variable xi.
  • xi = 1 implies choice has been included in the solution.
  • xi = 0 implies choice has not been included in the solution.
  • Covering problem is expressed as a product-of-sums, F.
  • Each product (or clause) represents a constraint.
  • Each clause is sum of choices that satisfy the constraint.
  • Goal: find xi's which satisfy all constraints with minimum cost.
    cost = min Σ_{i=1..n} wi xi   (1)

Example Covering Problem

F = a b'(c' + d)(c' + e + f + g)(d' + e + f + g)(d' + e)
    (a' + e + f + g)(e' + g + h)(f' + g)
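To make the encoding concrete, here is a small Python sketch (not from the lecture) that stores each clause of the example F as a list of (variable, polarity) literals and checks a candidate selection; the helper names are illustrative only.

    # Each clause is a list of (variable, polarity) pairs; polarity True means
    # the literal appears uncomplemented.  This is the example
    # F = a b'(c'+d)(c'+e+f+g)(d'+e+f+g)(d'+e)(a'+e+f+g)(e'+g+h)(f'+g).
    F = [
        [("a", True)],
        [("b", False)],
        [("c", False), ("d", True)],
        [("c", False), ("e", True), ("f", True), ("g", True)],
        [("d", False), ("e", True), ("f", True), ("g", True)],
        [("d", False), ("e", True)],
        [("a", False), ("e", True), ("f", True), ("g", True)],
        [("e", False), ("g", True), ("h", True)],
        [("f", False), ("g", True)],
    ]

    def satisfies(assignment, clauses):
        """True if every clause has some literal made true by the assignment."""
        return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

    def cost(assignment, weights=None):
        """Number of variables set to 1 (unit weights unless weights are given)."""
        return sum((weights or {}).get(v, 1) for v, val in assignment.items() if val)

    # One feasible choice: a = 1, g = 1, everything else 0.
    x = {v: False for v in "abcdefgh"}
    x.update({"a": True, "g": True})
    print(satisfies(x, F), cost(x))   # True 2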

Unate versus Binate

  • Unate covering problem - choices appear only in their positive form (i.e., uncomplemented).
  • Binate covering problem - choices appear in both positive and negative form (i.e., complemented).
  • The algorithm presented here considers the more general case of the binate covering problem, but the solution applies to both.

Constraint Matrix

  • F is represented using a constraint matrix.
  • Includes a column for each xi variable.
  • Includes a row for every clause.
  • Each entry of the matrix fij is:

    • '-' if the variable xi does not appear in the clause,
    • '0' if the variable appears complemented, and
    • '1' otherwise.

  • ith row of F is denoted fi.
  • jth column is denoted by Fj.

Constraint Matrix Example

          a  b  c  d  e  f  g  h
     1 [  1  -  -  -  -  -  -  -  ]
     2 [  -  0  -  -  -  -  -  -  ]
     3 [  -  -  0  1  -  -  -  -  ]
     4 [  -  -  0  -  1  1  1  -  ]
F =  5 [  -  -  -  0  1  1  1  -  ]
     6 [  -  -  -  0  1  -  -  -  ]
     7 [  0  -  -  -  1  1  1  -  ]
     8 [  -  -  -  -  0  -  1  1  ]
     9 [  -  -  -  -  -  0  1  -  ]

Binate Covering Problem

  • The binate covering problem is to find a subset of columns, S, of minimum cost such that for every row fi either

    1. ∃j : (fij = 1) and (Fj ∈ S); or,
    2. ∃j : (fij = 0) and (Fj ∉ S).

BCP Algorithm

bcp(F, U, S) {
 (F, S) = reduce(F, S)
 if (terminalCase(F)) {
  if (F ≠ 0 and cost(S) < U) {
   U = cost(S)
   return (S) }
  else return (``no solution'') }
 L = lower_bound(F, S)
 if (L ≥ U) return (``no solution'')
 xi = choose_var(F)
 S1 = bcp(F_xi, U, S ∪ {xi})
 if (cost(S1) = L) return (S1)
 S0 = bcp(F_xi', U, S)
 return best_solution(S1, S0) }
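The routine above relies on reduce, lower_bound, and choose_var. As a rough illustration only, the following Python sketch keeps just the branch-and-keep-best skeleton (no reduction rules, no bounding) over the clause encoding used earlier; it is a toy, not a full implementation of the algorithm.

    def cofactor(clauses, var, value):
        """Set var to value: drop satisfied clauses, remove falsified literals.
        Returns None if some clause becomes empty (infeasible on this branch)."""
        out = []
        for clause in clauses:
            if any(v == var and pol == value for v, pol in clause):
                continue                       # clause satisfied, drop it
            reduced = [(v, p) for v, p in clause if v != var]
            if not reduced:
                return None                    # empty clause: no solution here
            out.append(reduced)
        return out

    def bcp(clauses, chosen=frozenset(), best=None):
        """Simplified branch-and-keep-best sketch of the bcp() routine above."""
        if clauses is None:
            return best
        if not clauses:                        # all constraints satisfied
            if best is None or len(chosen) < len(best):
                return chosen
            return best
        var = clauses[0][0][0]                 # branch on a variable of the first clause
        best = bcp(cofactor(clauses, var, True), chosen | {var}, best)
        best = bcp(cofactor(clauses, var, False), chosen, best)
        return best

    # With F from the previous sketch: sorted(bcp(F)) gives ['a', 'g'].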

Essential Rows

  • A row fi of F is essential when there exists exactly one j such that fij is not equal to '-'.
  • This corresponds to a clause consisting of a single literal.
  • If the literal is xj (i.e., fij = 1), then the variable is essential.
  • If the literal is xj' (i.e., fij = 0), then the variable is unacceptable.
  • The matrix F is cofactored with respect to the essential literal.
  • This means the variable is set to the value of the literal, the column is removed, and any row in which the variable appears with that same value is removed.

Essential Rows Example

          c  d  e  f  g  h
     3 [  0  1  -  -  -  -  ]
     4 [  0  -  1  1  1  -  ]
     5 [  -  0  1  1  1  -  ]
F =  6 [  -  0  1  -  -  -  ]
     7 [  -  -  1  1  1  -  ]
     8 [  -  -  0  -  1  1  ]
     9 [  -  -  -  0  1  -  ]

Row Dominance

  • A row fi dominates another row fj if fi is satisfied whenever fj is satisfied.
  • Dominating rows can be removed without affecting the set of solutions.
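Under the (variable, polarity) clause encoding used in the earlier sketches, row dominance is a simple literal-set inclusion; a minimal, hypothetical helper:

    def row_dominates(fi, fj):
        """fi dominates fj when fi is satisfied whenever fj is: every literal
        of fj also appears in fi, so the dominating row fi can be dropped."""
        return set(fj) <= set(fi)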

Row Dominance Example

          c  d  e  f  g  h
     3 [  0  1  -  -  -  -  ]
     4 [  0  -  1  1  1  -  ]
F =  6 [  -  0  1  -  -  -  ]
     7 [  -  -  1  1  1  -  ]
     8 [  -  -  0  -  1  1  ]
     9 [  -  -  -  0  1  -  ]

Column Dominance

  • A column Fj dominates another column Fk if for each clause fi of F, one of the following is true:

    • fij = 1;
    • fij = '-' and fik ≠ 1; or
    • fij = 0 and fik = 0.

  • Dominated columns can be removed without affecting the existence of a solution.
  • When removing a column, the variable is set to 0 which means any rows including that column with a 0 entry can be removed.
  • Must also check the weights of the columns before removing.
  • If wj is greater than wk, then Fk should not be removed.
  • A special case is when a column contains no 1's: it can always be removed (the variable is set to 0).
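A sketch of the three conditions, assuming the constraint matrix is stored as a dictionary of rows mapping variables to '0'/'1' (missing entries mean '-'); the representation is illustrative only.

    def column_dominates(F, j, k):
        """Column j dominates column k: mirrors the three conditions above."""
        for row in F.values():
            fij, fik = row.get(j, '-'), row.get(k, '-')
            if fij == '1':
                continue
            if fij == '-' and fik != '1':
                continue
            if fij == '0' and fik == '0':
                continue
            return False
        return True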

Column Dominance Example

          d  e  g
     6 [  0  1  -  ]
F =  7 [  -  1  1  ]
     8 [  -  0  1  ]

Termination

  • If F has no more rows, then all the constraints have been satisfied by the current solution, S, and we have reached a terminal case.
  • The algorithm checks if the cost of the current solution, S, is less than the best found so far, U.
  • If so, the cost of the best solution, U, is updated to reflect S's cost, and S is returned.

Infeasible Problems

F = (a + b)(a' + b)(a + b')(a' + b')

          a  b
     1 [  1  1  ]
F =  2 [  0  1  ]
     3 [  1  0  ]
     4 [  0  0  ]

Bounding

  • If F is not terminal, then the matrix is cyclic.
  • To find an exact minimal solution, must branch on alternatives.
  • First test whether or not a good solution can be derived from the partial solution found up to this point.
  • Determine lower bound of cost for current partial solution.
  • If lower bound is greater than the cost of the best solution found so far then there is no point to continue.
  • One method is to find a maximal independent set (MIS).
  • Two rows are independent when they cannot both be satisfied by setting a single variable to 1.
  • Ignore rows with a complemented variable.

MIS Quick

  • A heuristic to find the size of the MIS is to select the shortest row (i.e., the one with the fewest 1's) and remove all intersecting rows.
  • Select the shortest remaining row and delete intersecting.
  • Repeat this process until no rows remain.
  • Number of rows selected forms an estimate of the MIS.
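A small Python sketch of this heuristic over the clause encoding used earlier; rows with complemented literals are ignored, as stated above, and unit weights are assumed.

    def mis_lower_bound(clauses):
        """Repeatedly pick the shortest all-positive row, remove every row that
        shares a variable with it, and count how many rows were selected."""
        rows = [frozenset(v for v, pol in c) for c in clauses
                if all(pol for _, pol in c)]       # ignore rows with complemented literals
        count = 0
        while rows:
            shortest = min(rows, key=len)
            count += 1
            rows = [r for r in rows if not (r & shortest)]
        return count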

Bounding Example

        c1  c2  c3  c4  c5  c6  c7  c8  c9 c10 c11 c12
   1 [   1   -   -   -   -   -   -   -   -   -   1   -  ]
   2 [   1   1   -   -   1   -   -   -   -   -   -   -  ]
   3 [   -   1   1   -   1   1   1   1   -   -   -   -  ]
   4 [   1   1   -   1   -   1   -   -   -   1   -   -  ]
   5 [   1   -   -   1   -   -   -   -   -   -   -   -  ]
   6 [   -   -   1   -   -   -   1   -   1   -   -   1  ]
   7 [   -   -   1   -   -   -   -   1   1   -   1   -  ]
   8 [   -   -   -   1   -   -   -   -   -   1   -   -  ]
   9 [   1   0   -   -   -   -   -   -   -   -   -   -  ]

Branching

  • A column intersecting short rows is preferred for branching.
  • Assign a weight to each row that is the inverse of the row length.
  • Sum the weights of all the rows covered by a column.
  • The column xi with the highest value is chosen for case splitting.
  • xi is added to the solution and the constraint matrix is cofactored.
  • bcp is called recursively.
  • S1 is then checked to see whether it meets the lower bound L.
  • If so, a minimal solution has been found.
  • If not, cofactor with respect to xi' and call bcp.
  • S0 is compared to S1, and the best one is returned.
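A sketch of this weighting rule over the same clause encoding; only the 1-entries of a row count toward its length, which matches the weights tabulated on the next slide.

    def choose_var(clauses):
        """Weight each row by the inverse of its number of 1-entries and pick
        the variable whose positive appearances collect the largest total."""
        weight = {}
        for clause in clauses:
            ones = [v for v, pol in clause if pol]
            if not ones:
                continue
            for v in ones:
                weight[v] = weight.get(v, 0.0) + 1.0 / len(ones)
        return max(weight, key=weight.get)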

Branching Example

Row Weight
1 1/2
2 1/3
3 1/6
4 1/5
5 1/2
6 1/4
7 1/4
8 1/2
9 1
Column Weight
c1 2.53
c2 0.70
c3 0.66
c4 1.20
c5 0.50
c6 0.37
c7 0.42
c8 0.42
c9 0.50
c10 0.70
c11 0.75
c12 0.25

State Minimization Overview

  • Original flow table may contain redundant rows, or states.
  • Reducing the number of states reduces the number of state variables.
  • State minimization procedure:

    • Identify all compatible pairs of states.
    • Find all maximal compatibles.
    • Find the set of prime compatibles.
    • Set up a covering problem where the prime compatibles are the choices and the states are what must be covered.

  • For SIC fundamental mode, same as for synchronous FSMs.

Example Huffman Flow Table

x1 x2 x3 x4 x5 x6 x7
a a,0 - d,0 e,1 b,0 a,- -
b b,0 d,1 a,- - a,- a,1 -
c b,0 d,1 a,1 - - - g,0
d - e,- - b,- b,0 - a,-
e b,- e,- a,- - b,- e,- a,1
f b,0 c,- -,1 h,1 f,1 g,0 -
g - c,1 - e,1 - g,0 f,0
h a,1 e,0 d,1 b,0 b,- e,- a,1

Pair Chart

(Pair chart grid with rows b-h and columns a-g; its entries are filled in on the following slides.)

Unconditionally Compatible

  • Two states u and v are output compatible when for each input in which both are specified, they produce the same output.
  • Two states u and v are unconditionally compatible when output compatible and go to the same next states.
  • When two states u and v are unconditionally compatible, the (u,v) entry is marked with the symbol ~ .

Example after Marking Unconditional Compatibles

b
c ~
d
e ~
f
g ~
h ~
a b c d e f g

Incompatibles

  • When two states u and v are not output compatible, the states are incompatible.
  • When two states u and v are incompatible, the (u,v) entry is marked with the symbol ×.

Example after Marking Incompatibles

b
c × ~
d
e × ~
f × × ×
g ~ × ×
h × × × ~ × ×
a b c d e f g

Conditionally Compatible

  • Two states are conditionally compatible when there exist differences in their next-state entries.
  • If differing next states are merged, then they become compatible.
  • When two states u and v are compatible only when states s and t are merged then the (u,v) entry is marked with s,t.

Example after Marking Conditional Compatibles

b a,d
c × ~
d b,e a,b d,e d,e a,g
e a,b a,d d,e a,b a,e × ~
f × × c,e × c,e b,f e,g
g ~ × c,d f,g c,e b,e a,f × e,h
h × × × ~ a,b a,d × ×
a b c d e f g

Final Check

  • The final step is to check each pair of conditional compatibles.
  • If any pair of next states are known to be incompatible, then the states are also incompatible.
  • In this case, the (u,v) entry is marked with the symbol ×.

Final Pair Chart

b a,d
c × ~
d b,e a,b d,e d,e a,g
e a,b a,d d,e a,b a,e × ~
f × × c,d × ×
g ~ × c,d f,g × × e,h
h × × × ~ a,b a,d × ×
a b c d e f g

Maximal Compatibles

  • Next need to find larger sets of compatible states.
  • If S is compatible, then any subset of S is also compatible.
  • A maximal compatible is a compatible that is not a subset of any larger compatible.
  • From maximal compatibles, can determine all other compatibles.

Approach One

  • Initialize a compatible list (c-list) with the compatible pairs in the rightmost column of the pair chart having at least one non-× entry.
  • Examine the columns from right to left.
  • Set Si to states in column i which do not contain ×.
  • Intersect Si with each member of the current c-list.
  • If the intersection has more than one member, add to the c-list an entry composed of the intersection unioned with i.
  • Remove duplicate entries and those that are subset of others.
  • Add pairs which consist of i and any members of Si that did not appear in any of the intersections.
  • c-list plus states not contained in c-list are maximal compatibles.

Example for Approach One

First step: c = { fg }
Se = h: c = { fg, eh }
Sd = eh: c = { fg, deh }
Sc = dfg: c = { cfg, deh, cd }
Sb = cde: c = { cfg, deh, bcd, bde }
Sa = bdeg: c = { cfg, deh, bcd, abde, ag }

Approach Two

  • If si and sj have been found to be incompatible, we know that no maximal compatible can include both.
  • Write a Boolean formula that gives the conditions for a set of states to be compatible.
  • For each state si, xi = 1 means that si is in the set.
  • States si and sj are incompatible implies the clause (xi' + xj') is included.
  • Form conjunction of clauses for each incompatible pair.

Example for Approach Two

  • Initial Boolean formula for incompatibles:
    (a' + c')(a' + f')(a' + h')(b' + f')(b' + g')(b' + h')(c' + e')
    (c' + h')(d' + f')(d' + g')(e' + f')(e' + g')(f' + h')(g' + h')
  • Convert to sum-of-products:
    a'b'd'e'h' + a'b'c'f'g' + a'e'f'g'h' + c'f'g'h' + b'c'd'e'f'h'
  • Each term defines a maximal compatible: the states that do not occur in the term make up the maximal compatible.
    cfg, deh, bcd, abde, ag
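As a brute-force cross-check of this result (not the algebraic multiplication itself), a short Python sketch can enumerate compatibles directly from the incompatible pairs; this is feasible only because the example has eight states.

    from itertools import combinations

    def maximal_compatibles(states, incompatible_pairs):
        """A set of states is compatible when it contains no incompatible pair;
        keep only the maximal such sets (exponential in general)."""
        bad = [frozenset(p) for p in incompatible_pairs]
        compatible = [frozenset(c) for r in range(1, len(states) + 1)
                      for c in combinations(states, r)]
        compatible = [c for c in compatible if not any(b <= c for b in bad)]
        return [c for c in compatible if not any(c < other for other in compatible)]

    incompat = ["ac", "af", "ah", "bf", "bg", "bh", "ce", "ch",
                "df", "dg", "ef", "eg", "fh", "gh"]
    print(sorted("".join(sorted(c)) for c in maximal_compatibles("abcdefgh", incompat)))
    # ['abde', 'ag', 'bcd', 'cfg', 'deh'], matching the slide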

Prime Compatibles

  • Some states are compatible only if other pairs are merged.
  • The implied state set for each compatible is called its class set.
  • The implied compatibles must be selected to guarantee closure.
  • C1 and C2 are compatibles and G1 and G2 are their class sets.
  • If C1 ⊆ C2 then it may appear that C2 is better, but if G1 ⊆ G2 then C1 may be better.
  • The best compatibles may not be maximal.
  • A compatible C1 is prime iff there does not exist C2 ⊃ C1 such that G2 ⊆ G1.
  • An optimum solution can always be found using only prime compatibles.

Prime Compatible Algorithm

prime_compatibles(C, M) { done = ∅
 for (k = |largest(M)|; k ≥ 1; k--) {
  foreach (q ∈ M such that |q| = k) enqueue(P, q)
  foreach (p ∈ P such that |p| = k) {
   if (class_set(C, p) = ∅) continue
   foreach (s ∈ max_subsets(p)) {
    if (s ∈ done) continue
    Gs = class_set(C, s)
    prime = true
    foreach (q ∈ P such that |q| > |s|) {
     if (s ⊂ q) {
      Gq = class_set(C, q)
      if (Gq ⊆ Gs) {
       prime = false;
       break } } }
    if (prime) enqueue(P, s)
    done = done ∪ {s} } } } }

Prime Compatible Algorithm Example

Prime Compatibles

Prime compatibles Class set
1 abde
2 bcd {(a,b),(a,g),(d,e)}
3 cfg {(c,d),(e,h)}
4 deh {(a,b),(a,d)}
11 ag
5 bc
6 cd {(a,g),(d,e)}
7 cf {(c,d)}
8 cg {(c,d),(f,g)}
9 fg {(e,h)}
10 dh
12 f

Setting up the Covering Problem

  • A collection of prime compatibles forms a valid solution when it is a closed cover.
  • A collection of compatibles is a cover when all states are contained in some compatible in the set.
  • A collection is closed when all implied states are contained in some other compatible.
  • ci = 1 when the ith prime compatible is in the solution.
  • Using the ci variables, it is possible to write a Boolean formula that represents the conditions for a solution to be a closed cover.
  • The formula is a product-of-sums where each product is a covering or closure constraint.

Covering Constraints

  • There is one covering constraint for each state.
  • The product is simply a disjunction of the prime compatibles that include the state.
  • In other words, for the covering constraint to yield 1, one of the primes that includes the state must be in the solution. For example, the covering constraint for state a is:
    (c1 + c11)

Closure Constraints

  • There is a closure constraint for each implied compatible for each prime compatible.
  • For example, the prime bcd requires the following states to be merged: (a,b), (a,g), (d,e).
  • Therefore, if we include bcd in the cover (i.e., c2), then we must also select compatibles which will merge these other state pairs.
  • abde is the only prime compatible that merges a and b.
  • Therefore, we have a closure constraint of the form:
    c2 → c1

Closure Constraints

  • The prime ag is the only one that merges states a and g, so we also need a closure constraint of the form:
    c2 → c11
  • Finally, primes abde and deh both merge states d and e, so the resulting closure constraint is:
    c2 → (c1 + c4)
  • Converting the implication into disjunctions, we can express the complete set of closure constraints for bcd as follows:
    (c2' + c1)(c2' + c11)(c2' + c1 + c4)
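The closure clauses can be generated mechanically from the prime compatible table; the following Python sketch (data transcribed from the Prime Compatibles slide) reproduces the closure clauses used in the product-of-sums below.

    def closure_constraints(primes, class_sets):
        """For prime i with implied pair (s, t), emit the clause
        (ci' + sum of cj over primes j that contain both s and t)."""
        clauses = []
        for i, implied in class_sets.items():
            for s, t in implied:
                options = [j for j, p in primes.items() if s in p and t in p]
                clauses.append((i, options))
        return clauses

    primes = {1: "abde", 2: "bcd", 3: "cfg", 4: "deh", 5: "bc", 6: "cd",
              7: "cf", 8: "cg", 9: "fg", 10: "dh", 11: "ag", 12: "f"}
    class_sets = {2: [("a", "b"), ("a", "g"), ("d", "e")],
                  3: [("c", "d"), ("e", "h")],
                  4: [("a", "b"), ("a", "d")],
                  6: [("a", "g"), ("d", "e")],
                  7: [("c", "d")],
                  8: [("c", "d"), ("f", "g")],
                  9: [("e", "h")]}
    for i, opts in closure_constraints(primes, class_sets):
        print(f"(c{i}' + " + " + ".join(f"c{j}" for j in opts) + ")")
    # prints (c2' + c1), (c2' + c11), (c2' + c1 + c4), (c3' + c2 + c6), ...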

Product-of-Sums Formulation

(c1 + c11)(c1 + c2 + c5)(c2 + c3 + c5 + c6 + c7 + c8)
(c1 + c2 + c4 + c6 + c10)(c1 + c4)(c3 + c7 + c9 + c12)
(c3 + c8 + c9 + c11)(c4 + c10)(c2' + c1)(c2' + c11)(c2' + c1 + c4)
(c3' + c2 + c6)(c3' + c4)(c4' + c1)(c4' + c1)(c6' + c11)
(c6' + c1 + c4)(c7' + c2 + c6)(c8' + c2 + c6)(c8' + c3 + c9)(c9' + c4) = 1

In matrix form, and as bcp reduces and branches on the problem, the constraint matrix passes through the following forms:

        c1  c2  c3  c4  c5  c6  c7  c8  c9 c10 c11 c12
   1 [   1   -   -   -   -   -   -   -   -   -   1   -  ]
   2 [   1   1   -   -   1   -   -   -   -   -   -   -  ]
   3 [   -   1   1   -   1   1   1   1   -   -   -   -  ]
   4 [   1   1   -   1   -   1   -   -   -   1   -   -  ]
   5 [   1   -   -   1   -   -   -   -   -   -   -   -  ]
   6 [   -   -   1   -   -   -   1   -   1   -   -   1  ]
   7 [   -   -   1   -   -   -   -   1   1   -   1   -  ]
   8 [   -   -   -   1   -   -   -   -   -   1   -   -  ]
   9 [   1   0   -   -   -   -   -   -   -   -   -   -  ]
  10 [   -   0   -   -   -   -   -   -   -   -   1   -  ]
  11 [   1   0   -   1   -   -   -   -   -   -   -   -  ]
  12 [   -   1   0   -   -   1   -   -   -   -   -   -  ]
  13 [   -   -   0   1   -   -   -   -   -   -   -   -  ]
  14 [   1   -   -   0   -   -   -   -   -   -   -   -  ]
  15 [   1   -   -   0   -   -   -   -   -   -   -   -  ]
  16 [   -   -   -   -   -   0   -   -   -   -   1   -  ]
  17 [   1   -   -   1   -   0   -   -   -   -   -   -  ]
  18 [   -   1   -   -   -   1   0   -   -   -   -   -  ]
  19 [   -   1   -   -   -   1   -   0   -   -   -   -  ]
  20 [   -   -   1   -   -   -   -   0   1   -   -   -  ]
  21 [   -   -   -   1   -   -   -   -   0   -   -   -  ]

        c1  c2  c3  c4  c5  c6  c7  c8  c9 c10 c11 c12
   1 [   1   -   -   -   -   -   -   -   -   -   1   -  ]
   2 [   1   1   -   -   1   -   -   -   -   -   -   -  ]
   3 [   -   1   1   -   1   1   1   1   -   -   -   -  ]
   4 [   1   1   -   1   -   1   -   -   -   1   -   -  ]
   5 [   1   -   -   1   -   -   -   -   -   -   -   -  ]
   6 [   -   -   1   -   -   -   1   -   1   -   -   1  ]
   7 [   -   -   1   -   -   -   -   1   1   -   1   -  ]
   8 [   -   -   -   1   -   -   -   -   -   1   -   -  ]
   9 [   1   0   -   -   -   -   -   -   -   -   -   -  ]
  10 [   -   0   -   -   -   -   -   -   -   -   1   -  ]
  12 [   -   1   0   -   -   1   -   -   -   -   -   -  ]
  13 [   -   -   0   1   -   -   -   -   -   -   -   -  ]
  14 [   1   -   -   0   -   -   -   -   -   -   -   -  ]
  16 [   -   -   -   -   -   0   -   -   -   -   1   -  ]
  17 [   1   -   -   1   -   0   -   -   -   -   -   -  ]
  18 [   -   1   -   -   -   1   0   -   -   -   -   -  ]
  19 [   -   1   -   -   -   1   -   0   -   -   -   -  ]
  20 [   -   -   1   -   -   -   -   0   1   -   -   -  ]
  21 [   -   -   -   1   -   -   -   -   0   -   -   -  ]

        c2  c3  c4  c5  c6  c7  c8  c9 c10 c11 c12
   3 [   1   1   -   1   1   1   1   -   -   -   -  ]
   6 [   -   1   -   -   -   1   -   1   -   -   1  ]
   7 [   -   1   -   -   -   -   1   1   -   1   -  ]
   8 [   -   -   1   -   -   -   -   -   1   -   -  ]
  10 [   0   -   -   -   -   -   -   -   -   1   -  ]
  12 [   1   0   -   -   1   -   -   -   -   -   -  ]
  13 [   -   0   1   -   -   -   -   -   -   -   -  ]
  16 [   -   -   -   -   0   -   -   -   -   1   -  ]
  18 [   1   -   -   -   1   0   -   -   -   -   -  ]
  19 [   1   -   -   -   1   -   0   -   -   -   -  ]
  20 [   -   1   -   -   -   -   0   1   -   -   -  ]
  21 [   -   -   1   -   -   -   -   0   -   -   -  ]

        c2  c3  c5  c6  c7  c8  c9 c11
   3 [   1   1   1   1   1   1   -   -  ]
   6 [   -   1   -   -   1   -   1   -  ]
   7 [   -   1   -   -   -   1   1   1  ]
  10 [   0   -   -   -   -   -   -   1  ]
  12 [   1   0   -   1   -   -   -   -  ]
  16 [   -   -   -   0   -   -   -   1  ]
  18 [   1   -   -   1   0   -   -   -  ]
  19 [   1   -   -   1   -   0   -   -  ]
  20 [   -   1   -   -   -   0   1   -  ]

        c3  c5  c6  c7  c8  c9 c11
   6 [   1   -   -   1   -   1   -  ]
   7 [   1   -   -   -   1   1   1  ]
  10 [   -   -   -   -   -   -   1  ]
  16 [   -   -   0   -   -   -   1  ]
  20 [   1   -   -   -   0   1   -  ]

        c3  c5  c6  c7  c8  c9 c11
   3 [   1   1   1   1   1   -   -  ]
   6 [   1   -   -   1   -   1   -  ]
   7 [   1   -   -   -   1   1   1  ]
  12 [   0   -   1   -   -   -   -  ]
  16 [   -   -   0   -   -   -   1  ]
  18 [   -   -   1   0   -   -   -  ]
  19 [   -   -   1   -   0   -   -  ]
  20 [   1   -   -   -   0   1   -  ]

        c5  c6  c7  c8  c9 c11
  12 [   -   1   -   -   -   -  ]
  16 [   -   0   -   -   -   1  ]
  18 [   -   1   0   -   -   -  ]
  19 [   -   1   -   0   -   -  ]

        c5  c6  c7  c8  c9 c11
   3 [   1   1   1   1   -   -  ]
   6 [   -   -   1   -   1   -  ]
   7 [   -   -   -   1   1   1  ]
  16 [   -   0   -   -   -   1  ]
  18 [   -   1   0   -   -   -  ]
  19 [   -   1   -   0   -   -  ]
  20 [   -   -   -   0   1   -  ]

        c5  c6  c7  c8 c11
   3 [   1   1   1   1   -  ]
  16 [   -   0   -   -   1  ]
  18 [   -   1   0   -   -  ]
  19 [   -   1   -   0   -  ]
Reduced Flow Table

x1 x2 x3 x4 x5 x6 x7
1 1,0 {1,4},1 1,0 1,1 1,0 1,1 1,1
4 1,1 {1,4},0 1,1 {1,5},0 {1,5},0 {1,4},- 1,1
5 {1,5},0 {1,4},1 1,1 - 1,- 1,1 9,0
9 {1,5},0 5,1 -,1 4,1 9,1 9,0 9,0

Final Reduced Flow Table

x1 x2 x3 x4 x5 x6 x7
1 1,0 1,1 1,0 1,1 1,0 1,1 1,1
4 1,1 1,0 1,1 1,0 1,0 1,- 1,1
5 1,0 1,1 1,1 - 1,- 1,1 9,0
9 1,0 5,1 -,1 4,1 9,1 9,0 9,0

State Assignment

  • Each row must be encoded using a unique binary code.
  • In synchronous design, a correct encoding can be assigned arbitrarily using n bits for a flow table with 2^n rows or less.
  • In asynchronous design, more care must be taken to ensure that a circuit can be built that is independent of signal delays.

Critical Races

  • When present state equals next state, circuit is stable.
  • When codes differ in one bit, the circuit is in transition.
  • When the codes differ in multiple bits, the circuit is racing.
  • A race is critical when differences in delay can cause it to reach different stable states.
  • A state assignment is correct when it is free of critical races.

Minimum Transition Time State Assignment

  • A transition from state si to state sj is direct (denoted [si,sj]) when all state variables are excited to change at the same time.
  • [si, sj] races critically with [sk,sl] when unequal delays can cause these transitions to pass through a common state.
  • When all state transitions are direct, the state assignment is called a minimum transition time state assignment.
  • A flow table in which each unstable state leads directly to a stable state is called a normal flow table.

A Simple Huffman Flow Table

x1 x2 x3 x4
a a b d c
b c b b b
c c d b c
d a d d b
y1 y2 y1 y2 y3
a 00 000
b 01 011
c 10 110
d 11 101

Partition Theory

  • A partition p on a set S is a set of subsets of S such that their pairwise intersection is empty.
  • The disjoint subsets of p are called blocks.
  • A partition is completely specified if the union of the subsets is S.
  • Otherwise, the partition is incompletely specified.
  • Elements of S which do not appear in p are unspecified.

Partition Theory and State Assignment

  • n state variables y1, ..., yn induce n partitions p1, ..., pn.
  • States with y1 = 0 are in one block of p1 while those with y1 = 1 are in the other block.
  • Each partition is composed of only one or two blocks.
  • The order in which the blocks appear, and which block is assigned 0 or 1, is arbitrary.
  • Once we find one valid assignment, others can be found by complementing or reordering variables.

Partition Example

y1 y2 y1 y2 y3
a 00 000
b 01 011
c 10 110
d 11 101
For the two-variable assignment:
p1 = {(ab), (cd)}
p2 = {(ac), (bd)}
For the three-variable assignment:
p1 = {(ab), (cd)}
p2 = {(ad), (bc)}
p3 = {(ac), (bd)}

Partition List

  • p2 ≤ p1 iff all elements specified in p2 are specified in p1 and each block of p2 appears in a unique block of p1.
  • A partition list is a collection of partitions of the form:

    • {(sp sq); (sr ss)} where [sp,sq] and [sr,ss] are transitions in the same column.
    • {(sp sq); (st)} where [sp,sq] is a transition in the same column as the stable state st.

  • The t-partitions of an assignment are the set of two-block partitions t1, ..., tn induced by y1, ..., yn.
  • A state assignment for a normal flow table is a minimum transition time assignment free of critical races iff each partition in the partition list is ≤ some ti.

Tracey's Theorem

Theorem (Tracey, 1966) A row assignment allotting one y-state per row can be used for direct transition realization of normal flow tables without critical races if, and only if, for every transition [si,sj]

  1. if [sm,sn] is another transition in the same column, then at least one y-variable partitions the pair { si, sj } and the pair { sm, sn } into separate blocks and
  2. if sk is a stable state in the same column then at least one y-variable partitions the pair { si, sj } and the state sk into separate blocks and
  3. for i ≠ j, si and sj are in separate blocks of at least one y-variable partition.

Partition List Example

x1 x2 x3 x4
a a b d c
b c b b b
c c d b c
d a d d b
p1 = {(ad), (bc)}
p2 = {(ab), (cd)}
p3 = {(ad), (bc)}
p4 = {(ac), (bd)}

Larger Example

x1 x2 x3 x4
a a c d c
b a f c b
c f c c c
d - d d b
e a d c e
f f f - e
p1 = {(ab), (cf)}
p2 = {(ae), (cf)}
p3 = {(ac), (de)}
p4 = {(ac), (bf)}
p5 = {(bf), (de)}
p6 = {(ad), (bc)}
p7 = {(ad), (ce)}
p8 = {(ac), (bd)}
p9 = {(ac), (ef)}
p10 = {(bd), (ef)}

Boolean Matrix Example

p1 = {(ab), (cf)}
p2 = {(ae), (cf)}
p3 = {(ac), (de)}
p4 = {(ac), (bf)}
p5 = {(bf), (de)}
p6 = {(ad), (bc)}
p7 = {(ad), (ce)}
p8 = {(ac), (bd)}
p9 = {(ac), (ef)}
p10 = {(bd), (ef)}
a b c d e f
p1 0 0 1 - - 1
p2 0 - 1 - 0 1
p3 0 - 0 1 1 -
p4 0 1 0 - - 1
p5 - 0 - 1 1 0
p6 0 1 1 0 - -
p7 0 - 1 0 1 -
p8 0 1 0 1 - -
p9 0 - 0 - 1 1
p10 - 0 - 0 1 1

Intersection

  • Two rows of a Boolean matrix, Ri and Rj, have an intersection if Ri and Rj agree wherever both Ri and Rj are specified.
  • The intersection is formed by creating a row which has specified values taken from either Ri or Rj.
  • Entries where neither Ri or Rj are specified are left unspecified.
  • A row, Ri, includes another row, Rj, when Rj agrees with Ri wherever Ri is specified.
  • A row, Ri, covers another row, Rj, if Rj includes Ri or Rj includes the complement of Ri.
  • The complement of Ri is denoted Ri'.

Boolean Matrix and State Assignment

  • State assignment problem is to find a Boolean matrix C with a minimum number of rows such that each row in the original partition list matrix is covered by some row of C.
  • The rows of this reduced matrix represent the t-partitions.
  • The columns of this matrix represent a state assignment.
  • Number of rows is the same as the number of state variables.
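A sketch of the includes/covers tests on rows written as strings over {0,1,-}; the final check verifies that the (p3,p4,p8,p9) row of the minimal matrix below covers p8.

    def includes(ri, rj):
        """rj agrees with ri wherever ri is specified ('-' means unspecified)."""
        return all(a == '-' or a == b for a, b in zip(ri, rj))

    def complement(r):
        return "".join({'0': '1', '1': '0'}.get(a, a) for a in r)

    def covers(ri, rj):
        """ri covers rj if rj includes ri or rj includes ri's complement."""
        return includes(rj, ri) or includes(rj, complement(ri))

    print(covers("010111", "0101--"))   # True: row (p3,p4,p8,p9) covers p8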

Boolean Matrix Reduction

a b c d e f
(p1,p2) 0 0 1 - 0 1
(p3,p4,p8,p9) 0 1 0 1 1 1
(p5',p6) 0 1 1 0 0 1
(p7,p10) 0 0 1 0 1 1

Minimal Boolean Matrix

a b c d e f
(p1,p7,p10) 0 0 1 0 1 1
(p2,p5',p6) 0 1 1 0 0 1
(p3,p4,p8,p9) 0 1 0 1 1 1

Intersectables

  • If a set of rows, Ri, Rj, ..., Rp, have an intersection, they are called an intersectable (denoted (i, j, ..., p)).
  • If Ri, Rj', ..., Rp are intersectable, it is denoted by (i, j', ..., p).
  • An intersectable may be enlarged by adding a row Rq iff Rq has an intersection with every element in the set.
  • An intersectable which cannot be enlarged further is called a maximal intersectable.

Finding Pairwise Intersectables

  • For each pair for rows, Ri and Rj, check whether Ri and Rj have an intersection.
  • Also must check whether Ri and [`(Rj)] have an intersection.
  • The pairwise intersectables for our example are:
    (1,2) (1,7) (1,10) (2,5') (2,6) (3,4) (3,5) (3,8) (3,9) (4,5')
    (4,8) (4,9) (5',6) (6,7) (7,10) (8,9) (8,10') (9,10)

Finding Maximal Intersectables

First step: c = { (9,10) }
S8 = 9,10': c = { (8,9), (8,10'), (9,10) }
S7 = 10: c = { (7,10), (8,9), (8,10'), (9,10) }
S6 = 7: c = { (6,7), (7,10), (8,9), (8,10'), (9,10) }
S5' = 6: c = { (5',6), (6,7), (7,10), (8,9), (8,10'), (9,10) }
S4 = 5',8,9: c = { (4,8,9), (4,5'), (5',6), (6,7), (7,10), (8,10'), (9,10) }

Finding Maximal Intersectables (cont)

S3 = 4,5,8,9: c = { (3,4,8,9), (3,5), (4,5'), (5',6), (6,7), (7,10), (8,10'), (9,10) }
S2 = 5',6: c = { (2,5',6), (3,4,8,9), (3,5), (4,5'), (6,7), (7,10), (8,10'), (9,10) }
S1 = 2,7,10: c = { (1,2), (1,7,10), (2,5',6), (3,4,8,9), (3,5), (4,5'), (6,7), (8,10'), (9,10) }

Setting up the Covering Problem

        1,2  1,7,10  2,5',6  3,4,8,9  3,5  4,5'  6,7  8,10'  9,10
  p1  [  1     1       -        -      -     -    -     -      -  ]
  p2  [  1     -       1        -      -     -    -     -      -  ]
  p3  [  -     -       -        1      1     -    -     -      -  ]
  p4  [  -     -       -        1      -     1    -     -      -  ]
  p5  [  -     -       1        -      1     1    -     -      -  ]
  p6  [  -     -       1        -      -     -    1     -      -  ]
  p7  [  -     1       -        -      -     -    1     -      -  ]
  p8  [  -     -       -        1      -     -    -     1      -  ]
  p9  [  -     -       -        1      -     -    -     -      1  ]
  p10 [  -     1       -        -      -     -    -     1      1  ]
Reduced Covering Problem

        1,2  1,7,10  2,5',6  3,5  4,5'  6,7  8,10'  9,10
  p1  [  1     1       -      -     -    -     -      -  ]
  p2  [  1     -       1      -     -    -     -      -  ]
  p5  [  -     -       1      1     1    -     -      -  ]
  p6  [  -     -       1      -     -    1     -      -  ]
  p7  [  -     1       -      -     -    1     -      -  ]
  p10 [  -     1       -      -     -    -     1      1  ]
Hazard-free Logic Synthesis

  • For each next state and output signal:

    • Derive sum-of-products (SOP) implementation.
    • Transform SOP using laws of Boolean algebra into a multi-level logic implementation.
    • Map to gates found in the given gate library.

  • For asynchronous FSMs, must avoid hazards in SOP.
  • Some laws of Boolean algebra introduce hazards.
  • We first describe the approach for SIC fundamental mode.

Boolean Functions and Minterms

  • A Boolean function f of n variables x1, x2, ..., xn is a mapping f : {0,1}^n → {0,1,-}.
  • Each element m of {0,1}n is called a minterm.
  • The value of a variable xi in a minterm m is given by m(i).
  • The ON-set of f is the set of minterms which return 1.
  • The OFF-set of f is the set of minterms which return 0.
  • The DC-set of f is the set of minterms which return -.

Literals and Products

  • A literal is either the variable, xi, or its complement, xi'.
  • The literal xi evaluates to 1 in the minterm m when m(i) = 1.
  • The literal xi' evaluates to 1 when m(i) = 0.
  • A product is a conjunction (AND) of literals.
  • A product evaluates to 1 for m (i.e., the product contains m) if each literal evaluates to 1 in m.
  • X ⊆ Y if the minterms contained in X are a subset of those in Y.
  • Intersection of two products is the minterms contained in both.
  • A sum-of-products (SOP) is a set of products.
  • A SOP contains m when a product in the SOP contains m.

Implicants and Prime Implicants

  • An implicant is a product that contains none of the OFF-set.
  • A prime implicant is an implicant contained by no other.
  • A cover is a SOP which contains the entire ON-set and none of the OFF-set.
  • A cover may optionally include part of the DC-set.
  • The two-level logic minimization problem is to find a minimum-cost cover of the function.
  • For SIC fundamental-mode, a minimal cover is always composed of only prime implicants.

Two-Level Logic Minimization Example

wx
yz
00 01 11 10
00 1 1 1 1
01 0 1 1 -
11 0 1 1 0
10 0 - 0 0


ON-set  = { w'x'y'z', w'xy'z', wxy'z', wx'y'z', w'xy'z, wxy'z, w'xyz, wxyz }
OFF-set = { w'x'y'z, w'x'yz, wx'yz, w'x'yz', wxyz', wx'yz' }
DC-set  = { wx'y'z, w'xyz' }

Prime Implicant Generation

  • For functions of less than 4 variables, can use a Karnaugh map.
  • For more variables, Karnaugh maps too tedious.
  • Quine's tabular method is better but requires all minterms be explicitly listed.
  • Briefly describe a recursive procedure based on consensus and complete sums.

Consensus and Complete Sums

  • The consensus theorem states: xy + x'z = xy + x'z + yz.
  • The product yz is called the consensus of xy and x'z.
  • A complete sum is defined to be a SOP formula composed of all the prime implicants.
  • It can be proven that a SOP formula is a complete sum when:

    1. No term includes any other term.
    2. The consensus of any two terms of the formula either does not exist or is contained in some term of the formula.
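A small iterated-consensus sketch in Python (a different route to the same complete sum as the recursive procedure described next): keep adding consensus terms and dropping absorbed ones until the two conditions above hold. Products are sets of (variable, polarity) literals; applied to the cover of the two-level example (don't-cares treated as 1) it returns its five prime implicants.

    def consensus(p, q):
        """Consensus of two products: defined only when they clash in exactly
        one variable; the result drops that variable and unions the rest."""
        clashes = [v for v, pol in p if (v, not pol) in q]
        if len(clashes) != 1:
            return None
        v = clashes[0]
        return frozenset(l for l in p | q if l[0] != v)

    def complete_sum(terms):
        """Iterated consensus with absorption, yielding all prime implicants."""
        terms = {frozenset(t) for t in terms}
        changed = True
        while changed:
            changed = False
            # drop any term contained in another (fewer literals = larger product)
            terms = {t for t in terms if not any(o < t for o in terms)}
            for p in list(terms):
                for q in list(terms):
                    c = consensus(p, q)
                    if c is not None and not any(o <= c for o in terms):
                        terms.add(c)
                        changed = True
        return terms

    # f = y'z' + xz + w'xyz' + wx'y'z (True = uncomplemented literal)
    f = [{("y", False), ("z", False)}, {("x", True), ("z", True)},
         {("w", False), ("x", True), ("y", True), ("z", False)},
         {("w", True), ("x", False), ("y", False), ("z", True)}]
    print(len(complete_sum(f)))   # 5 primes: xz, y'z', xy', wy', w'x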

Recursive Prime Generation

  • If we have two complete sums f1 and f2 then we can obtain the complete sum for f1 ·f2 using the following two steps:

    1. Multiply out f1 and f2 using the following properties

      • x · x = x (idempotent)
      • x · (y + z) = x·y + x·z (distributive)
      • x · x' = 0 (complement)

    2. Eliminate all terms contained in some other term.

  • A recursive procedure for finding the complete sum for f:
    cs(f) = abs([x1 + cs(f(0,x2,...,xn))] · [x1' + cs(f(1,x2,...,xn))])
    where abs(f) removes absorbed terms from f (e.g., abs(a + ab) = a).

Recursive Prime Generation: Example

f(w,x,y,z) = y'z' + xz + w'xyz' + wx'y'z
f(w,x,y,0) = y' + w'xy
f(w,x,0,0) = 1
f(w,x,1,0) = w'x
CS(f(w,x,y,0)) = abs((y + 1)(y' + w'x)) = y' + w'x
f(w,x,y,1) = x + wx'y'
f(w,0,y,1) = wy'
f(w,1,y,1) = 1
CS(f(w,x,y,1)) = abs((x + wy')(x' + 1)) = x + wy'
CS(f(w,x,y,z)) = abs((z + y' + w'x)(z' + x + wy'))
             = abs(xz + wy'z + y'z' + xy' + wy' + w'xz' + w'x)
             = xz + y'z' + xy' + wy' + w'x

Recursion Tree for Example

Figure

Prime Implicant Selection

             xz  y'z'  xy'  wy'  w'x
w'x'y'z'  [   -    1    -    -    -  ]
w'xy'z'   [   -    1    1    -    1  ]
wxy'z'    [   -    1    1    1    -  ]
wx'y'z'   [   -    1    -    1    -  ]
w'xy'z    [   1    -    1    -    1  ]
wxy'z     [   1    -    1    1    -  ]
w'xyz     [   1    -    -    -    1  ]
wxyz      [   1    -    -    -    -  ]

Combinational Hazards

  • For asynchronous design, the two-level logic minimization problem is complicated by the fact that there can be no hazards.
  • Let us consider the design of a function f to implement either an output or next state variable.
  • Under SIC, when an input changes, the circuit moves from a minterm m1 to another minterm m2 that differs in value in exactly one xi.
  • During this transition, there are four possible transitions of f:

    1. Static 0 → 0 transition: f(m1) = f(m2) = 0.
    2. Static 1 → 1 transition: f(m1) = f(m2) = 1.
    3. Dynamic 0 → 1 transition: f(m1) = 0 and f(m2) = 1.
    4. Dynamic 1 → 0 transition: f(m1) = 1 and f(m2) = 0.

Static 0-Hazard

  • If during a static 0 → 0 transition, the cover of f can, due to differences in delays, momentarily evaluate to 1, then we say that there exists a static 0-hazard.
  • In a SOP cover of a function, no product term is allowed to include either m1 or m2 since they are members of the OFF-set.
  • A static 0-hazard exists only if some product includes both xi and xi'.
  • Such a product is not useful since it contains no minterms.
  • If we exclude such product terms from the cover, then the SOP cover can never produce a static 0-hazard.

Static 1-Hazard

  • If during a static 1 → 1 transition, the cover of f can evaluate to 0, then we say that there exists a static 1-hazard.
  • Consider case where one product p1 contains m1 but not m2 and another product p2 contains m2 but not m1.
  • If p1 is implemented with a faster gate than p2, then the gate for p1 can turn off faster than the gate for p2 turns on which can lead to the cover momentarily evaluating to a 0.
  • To eliminate all static 1-hazards, for each such transition [m1, m2], there must exist a product in the cover that includes both m1 and m2.
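A sketch of this check for a single SIC 1 → 1 transition, using the running example: the two-product cover xz + y'z' fails for the transition wxy'z' → wxy'z, and adding xy' repairs it (products and minterms are written as variable-to-value dictionaries; the helper names are illustrative).

    def product_contains(product, minterm):
        """product: dict var -> 0/1 for its literals; minterm: dict var -> 0/1."""
        return all(minterm[v] == val for v, val in product.items())

    def sic_static1_hazard_free(cover, m1, m2):
        """Free of static 1-hazards iff one product contains both minterms."""
        return any(product_contains(p, m1) and product_contains(p, m2) for p in cover)

    cover = [{"x": 1, "z": 1}, {"y": 0, "z": 0}]          # xz + y'z'
    m1 = {"w": 1, "x": 1, "y": 0, "z": 0}                 # wxy'z'
    m2 = {"w": 1, "x": 1, "y": 0, "z": 1}                 # wxy'z
    print(sic_static1_hazard_free(cover, m1, m2))                        # False
    print(sic_static1_hazard_free(cover + [{"x": 1, "y": 0}], m1, m2))   # True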

Static 1-Hazard: Example

  • Consider the transition from wxy'z' to wxy'z.
  • The function should maintain a constant 1 during this transition.
  • If the gate implementing y'z' changes to 0 faster than xz changes to 1, then the output of the function may momentarily yield a 0.
  • The result is a static 1-hazard.
  • If we include the prime xy' in the cover, then this hazard is eliminated since this product yields 1 during this transition.

Dynamic Hazards

  • If during a 0 → 1 transition, the cover can change from 0 to 1, back to 0, and finally stabilize at 1, we say the cover has a dynamic 0 → 1 hazard.
  • Assuming no useless product terms (ones that include both xi and xi'), this is impossible under the SIC assumption.
  • No product is allowed to include m1 since it is in the OFF-set.
  • Any product that includes m2 turns on monotonically.
  • Similarly, there are no dynamic 1 → 0 hazards.

Removing Hazards

  • A simple, inefficient approach to produce a hazard-free SOP cover is to include all prime implicants in the cover.
  • Since two minterms m1 and m2 in a transition are distance 1 apart, they must be included together in some prime.
  • An implicant exists which is made up of all literals that are equal in both m1 and m2.
  • This implicant must be part of some prime implicant.
  • For our example, the following cover is guaranteed to be hazard-free under SIC:
    f = xz + y'z' + xy' + wy' + w'x

Better Approach to Remove Hazards

  • Form an implicant out of each pair of states m1 and m2 involved in a static 1 → 1 transition which includes each literal that has the same value in both m1 and m2.
  • The covering problem is now to find the minimum number of prime implicants that cover each of these transition cubes.

2-Level Hazard-Free Synthesis: Example

           xz  y'z'  xy'  wy'  w'x
w'y'z'  [   -    1    -    -    -  ]
x'y'z'  [   -    1    -    -    -  ]
w'xy'   [   -    -    1    -    1  ]
wxy'    [   -    -    1    1    -  ]
wy'z'   [   -    1    -    1    -  ]
xy'z    [   1    -    1    -    -  ]
w'xz    [   1    -    -    -    1  ]
wxz     [   1    -    -    -    -  ]

Multi-Level Logic Synthesis

  • Two-level SOP implementations cannot be realized directly for most technologies.
  • AND or OR stages of arbitrarily large fan-in not practical.
  • In CMOS, gates with more than 3 or 4 inputs are too slow.
  • Two-level SOP implementations must be decomposed using Boolean algebra laws into multi-level implementations.
  • Care must be taken not to introduce hazards.
  • We present a number of hazard-preserving transformations.
  • If we begin with a hazard-free SOP implementation and only apply hazard-preserving transformations, then the resulting multi-level implementation is also hazard-free.

Hazard-Preserving Transformations

  • Hazard-preserving both ways:

    • A + (B + C) ⇔ A + B + C (associative law)
    • (A + B)' ⇔ A'B' (DeMorgan's theorem)
    • (A·B)' ⇔ A' + B' (DeMorgan's theorem)

  • Hazard-preserving one way:

    • AB + AC ⇒ A(B + C) (distributive law)
    • A + AB ⇒ A (absorptive law)
    • A + A'B ⇒ A + B

More Hazard-Preserving Transformations

  • Hazard exchanges:

    • Insertion or deletion of inverters at the output of a circuit only interchanges 0 and 1-hazards.
    • Insertion or deletion of inverters at the inputs only relocates hazards to duals of original transition.
    • The dual of a circuit (exchange AND and OR gates) produces dual function with dual hazards.

Extensions for MIC Operation

  • The preceding restricted the class of circuits to SIC.
  • Each input burst can have only a single transition.
  • Does not even allow parallel state bit changes.
  • Now extend the synthesis method to MIC.
  • Can synthesize any BM machine which satisfies the unique entry point and maximal set properties.

Transition Cubes

  • MIC Transitions begin in one minterm m1 and end in another m2 where the values of multiple variables may have changed.
  • m1 is called the start point while m2 is called the end point.
  • Machine may pass through other minterms between m1 and m2.
  • This set of minterms is called a transition cube (denoted [ m1, m2 ]).
  • Transition cube can be represented with a product which contains a literal for each xi in which m1(i) = m2(i).
  • An open transition cube [ m1, m2 ) includes all minterms in [ m1, m2 ] except m2.
  • An open transition cube is represented using a set of products.
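A sketch of the transition-cube construction for illustrative minterms (not the ones in the example below): keep exactly the variables that do not change between the start and end points.

    def transition_cube(m1, m2):
        """[m1, m2] as a product: one literal per variable with m1(i) = m2(i)."""
        return {v: m1[v] for v in m1 if m1[v] == m2[v]}

    def cube_contains(cube, m):
        return all(m[v] == val for v, val in cube.items())

    m1 = {"w": 1, "x": 1, "y": 0, "z": 0}
    m2 = {"w": 0, "x": 1, "y": 1, "z": 0}
    print(transition_cube(m1, m2))   # {'x': 1, 'z': 0}, i.e. the cube xz'
    print(cube_contains(transition_cube(m1, m2), {"w": 0, "x": 1, "y": 0, "z": 0}))  # True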

Transition Cube: Example

wx
yz
00 01 11 10
00 1 1 1 1
01 0 1 1 1
11 0 1 1 0
10 0 1 0 0


[wx yz, w x yz]
[wx y z, w x y z)

Function Hazards

  • If f does not change monotonically during a multiple-input change, then f has a function hazard for that transition.
  • A function f contains a static function hazard during a transition from m1 to m2 if there exists an m3 such that:

    • f(m1) = f(m2) ≠ f(m3)
    • m3 ∈ [m1, m2]

  • A function f contains a dynamic function hazard during a transition from m1 to m2 if there exist m3 and m4 such that:

    • f(m1) ≠ f(m2), f(m2) = f(m3), f(m1) = f(m4)
    • m3 ∈ [m1, m2], m4 ∈ [m3, m2].

Function Hazards: Example

wx
yz
00 01 11 10
00 1 1 1 1
01 0 1 1 1
11 0 1 1 0
10 0 1 0 0


[wxyz, wx yz]
[wx y z, w xy z]

Function Hazards

  • If a transition has a function hazard, there is no implementation of the function which avoids the hazard during the transition.
  • Fortunately, the synthesis method never produces a design with a transition that has a function hazard.

Combinational Hazards for State Variables

  • A minimum transition time state assignment introduces multiple-input changes, so MIC hazards must be considered.
  • Multiple changing next state variables may be fed back to the input of the FSM.
  • The circuit moves from one minterm m1 to another minterm m2, but multiple state variables may be changing concurrently.
  • For normal flow tables with outputs that change only in unstable states, only static transitions are possible.

MIC Static Hazards

  • There can be no static 0-hazards.
  • Since multiple variables may be changing concurrently, the cover may pass through other minterms between m1 and m2.
  • To be free of static 1-hazards, it is necessary that a single product in the cover include all these minterms.
  • Each [m1, m2] where f(m1) = f(m2) = 1, must be contained in some product in the cover to eliminate static 1-hazards.

MIC Static Hazards: Example

wx
yz
00 01 11 10
00 1 1 1 1
01 0 1 1 1
11 0 1 1 0
10 0 1 0 0


[wx yz, w x yz]

MIC Dynamic Hazards

  • For each 1 → 0 transition [m1, m2], if a product in the cover intersects [m1, m2], then it must include the start point, m1.
  • For each 0 → 1 transition [m1, m2], if a product in the cover intersects [m1, m2], then it must include the end point, m2.

MIC Dynamic Hazards: Example

wx
yz
00 01 11 10
00 1 1 1 1
01 0 1 1 1
11 0 1 1 0
10 0 1 0 0


[wx yz, wxyz]
[wx yz, w x y z]

Burst-Mode Transitions

  • In legal BM machines, types of transitions are restricted.
  • A function may only change value after every transition in the input burst has occurred.
  • [m1, m2] for a function f is a burst-mode transition if for every minterm mi ∈ [m1, m2), f(m1) = f(mi).
  • The result is that if a function f only has burst-mode transitions, then it is free of function hazards.
  • Also, any dynamic 0 → 1 transition is free of dynamic hazards.
  • For any legal BM machine, there exists a hazard-free cover for each output and next state variable before state minimization.

State Minimization

  • After state minimization, it is possible that no hazard-free cover exists for some variable in the design.

    Inputs a b c
    State
    000 001 011 010 110 111 101 100
    A A,1 C,0 - A,1 B,0 - - A,1
    ...
    D - - - - - - E,1 D,1
    Reduce to:
    AD A,1 C,0 - A,1 B,0 - E,1 A,1

DHF-Compatibles

  • To guarantee a hazard-free cover, we must restrict when two states are compatible.
  • Two states s and s' are dhf-compatible when they are compatible and for each output z, each transition [m1,m2] of s, and each transition [m1',m2'] of s':

    1. If z has a 1 → 0 transition in [m1,m2] and a 1 → 1 transition in [m1',m2'], then [m1,m2] ∩ [m1',m2'] = ∅ or m1 ∈ [m1',m2'].
    2. If z has a 1 → 0 transition in [m1,m2] and a 1 → 0 transition in [m1',m2'], then m1 = m1', [m1,m2] ⊆ [m1',m2'], or [m1',m2'] ⊆ [m1,m2].

State Assignment

Inputs a b
States
00 01 11 10
A A,0 - B,1 A,0
...
A A,0 C,0 B,1 A,0
...
A A,0 A,0 B,1 A,0

  • During a multiple input change, before all inputs have changed, the next state and output variables must remain constant.
  • For BM machines, guaranteed by the maximal set property.

Required Cubes

  • Transition cubes for each 1 → 1 transition are required cubes.
  • The end point of the transition cube for a 0 → 1 transition is a required cube.
  • The transition subcubes for each 1 → 0 transition are required cubes.
  • The transition subcubes for a 1 → 0 transition [m1, m2] are all cubes of the form [m1, m3] such that f(m3) = 1.
  • Can eliminate any subcube contained in another.
  • The union of the required cubes forms the ON-set.
  • Each of the required cubes must be contained in some product of the cover to ensure hazard freedom.

Required Cubes: Example

ab
cd
00 01 11 10
00 1 1 1 1
01 0 1 1 1
11 1 1 1 0
10 1 1 0 0

t1
=
[a bcd, a b cd]
t2
=
[a bc d, a bc d]
t3
=
[ab cd, abcd]
t4
=
[ab c d, a bc d]

Privileged Cubes

  • The transition cubes for each dynamic 1 → 0 transition are called privileged cubes.
  • They cannot be intersected unless the intersecting product also includes their start point.
  • If a cover includes a product that intersects a privileged cube without including its start point, then the cover is not hazard-free.

DHF-Prime Implicants

  • We may not be able to produce a SOP cover that is free of dynamic hazards using only prime implicants.
  • A dhf-implicant is an implicant which does not illegally intersect any privileged cube.
  • A dhf-prime implicant is a dhf-implicant that is contained in no other dhf-implicant.
  • A dhf-prime implicant may not be a prime implicant.
  • A minimal hazard-free cover includes only dhf-prime implicants.

Prime Implicants: Example

f(a,b,c,d) = a'c + a'c'd' + a'bc' + bcd + ac'
f(a,b,0,d) = a + a'd' + a'b
f(0,b,0,d) = d' + b
f(1,b,0,d) = 1
CS(f(a,b,0,d)) = abs((a + d' + b)(a' + 1)) = a + d' + b
f(a,b,1,d) = bd + a'
f(0,b,1,d) = 1
f(1,b,1,d) = bd
CS(f(a,b,1,d)) = abs((a + 1)(a' + bd)) = a' + bd
CS(f(a,b,c,d)) = abs((c + a + d' + b)(c' + a' + bd))
             = ac' + a'c + c'd' + a'd' + bc' + a'b + bd

DHF-Prime Implicants: Example

Setting up the Covering Problem

           ac'  a'c  c'd'  bc'  a'b  bcd
a'c     [   -    1    -     -    -    -  ]
a'c'd'  [   -    -    1     -    -    -  ]
a'bc'   [   -    -    -     1    1    -  ]
bcd     [   -    -    -     -    -    1  ]
ac'     [   1    -    -     -    -    -  ]

Summary

  • Binate covering problems
  • State minimization
  • State assignment
  • Hazard-free logic synthesis
  • Extensions for MIC operation

