Automatic Test Pattern Generation: ATPG for Path Delay Faults
ATPG for Path Delay Faults
The conventional approach for generating a pair of test patterns for a PDF P is a modification of an SSF ATPG tool. Initially, transitions are assigned on the lines of path P; this is called the path sensitization phase. Then, a modified stuck-at fault ATPG is executed to ensure that the constraints on the on-path inputs as well as on the off-path inputs (according to the PDF sensitization condition being followed) are justified; this is the line justification phase. Note that the sncv and scv constraints on some lines are also handled recursively: for the output of a gate to have a stable noncontrolled value, all of its inputs must have an sncv, whereas for it to have a stable controlled value it suffices that one of its inputs has an scv.
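To make the recursion concrete, the following Python sketch justifies a stable logic value on a gate output using exactly the rule above: a stable controlled value needs only one fan-in held at the controlling value, while a stable noncontrolled value needs every fan-in held at a noncontrolling value. The netlist encoding, gate set, and net names are illustrative assumptions, not part of any published tool.

```python
# Minimal sketch of recursive scv/sncv justification on a tiny gate-level netlist.
CONTROLLING = {"AND": 0, "NAND": 0, "OR": 1, "NOR": 1}
INVERTING   = {"AND": False, "NAND": True, "OR": False, "NOR": True}

# net -> (gate type, fan-in nets); nets not listed are primary inputs (assumed names)
NETLIST = {
    "g": ("OR",   ["a", "b"]),
    "h": ("AND",  ["c", "d"]),
    "z": ("NAND", ["g", "h"]),
}

def justify_stable(net, value, pi_values):
    """Justify a stable logic `value` on `net`, recording primary-input values."""
    if net not in NETLIST:                           # primary input
        if pi_values.get(net, value) != value:
            return False                             # conflicting requirement
        pi_values[net] = value
        return True
    gtype, fanins = NETLIST[net]
    cv = CONTROLLING[gtype]
    controlled_output = cv ^ INVERTING[gtype]        # output value forced by a controlling input
    if value == controlled_output:
        # scv case: one fan-in held at the controlling value suffices
        for f in fanins:
            trial = dict(pi_values)
            if justify_stable(f, cv, trial):
                pi_values.clear()
                pi_values.update(trial)
                return True
        return False
    # sncv case: every fan-in must hold a noncontrolling value
    return all(justify_stable(f, 1 - cv, pi_values) for f in fanins)

pis = {}
print(justify_stable("z", 0, pis), pis)              # e.g. True {'a': 1, 'c': 1, 'd': 1}
```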
The problem with this conventional approach is that the ATPG is executed as many times as there are PDFs, a quantity that can be exponential in the size of the circuit. Any practical ATPG tool must generate a polynomial number of pattern pairs. Thus, the ATPG process must be modified as follows: for each pair of patterns to be generated, the path sensitization phase must be able to sensitize multiple paths, and the line justification phase must then be able to justify the assigned line constraints so that many PDFs remain sensitized.
One way to construct an ATPG tool that does not enumerate PDFs (referred to as a nonenumerative tool) is to generate a pair of patterns that sensitizes and justifies the transitions on all the lines of a subcircuit that contains many paths. A necessary condition for the paths of a subcircuit to be simultaneously sensitized is that they be structurally compatible, i.e., that any two of them have the same parity (in the number of inverters) between any two reconvergent nodes of the subcircuit. This concept is illustrated in Figure 69.5.
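A minimal sketch of this compatibility test, under the assumption that each path is given as a node sequence together with a per-edge inversion flag (the encoding and node names are illustrative): two paths are declared compatible when the inversion parities of their subpaths agree between every pair of consecutive shared nodes.

```python
# Structural-compatibility check for two paths based on inversion parity.
def inversion_parity(path, inverting):
    """Map each node on `path` to the inversion parity of the prefix ending there."""
    parity, prefix = 0, {path[0]: 0}
    for u, v in zip(path, path[1:]):
        parity ^= 1 if inverting[(u, v)] else 0
        prefix[v] = parity
    return prefix

def structurally_compatible(p1, p2, inverting):
    par1, par2 = inversion_parity(p1, inverting), inversion_parity(p2, inverting)
    shared = [n for n in p1 if n in par2]        # shared (reconvergent) nodes, in path order
    for a, b in zip(shared, shared[1:]):
        if (par1[b] ^ par1[a]) != (par2[b] ^ par2[a]):
            return False                         # subpath parities disagree between a and b
    return True

# Two made-up paths from input "i" to output "z" that reconverge at node "o".
INV = {("i", "n1"): True, ("n1", "o"): False, ("o", "z"): False,
       ("i", "n2"): False, ("n2", "o"): True}
print(structurally_compatible(["i", "n1", "o", "z"], ["i", "n2", "o", "z"], INV))  # True
```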
Consider the circuit on the top portion of Figure 69.5. The subgraph induced by the thick edges consists of two structurally compatible paths. These two paths share two OR gates, and the two subpaths that have the same OR gate endpoints have even parity. Any graph that consists of structurally compatible paths is called a structurally compatible (SC) graph. The methods in Refs. [14,15] consider a special case of SC graphs with a single primary input and a single primary output. Such an SC graph is called a primary compatible (PC) graph.
For the same pair of primary input and output nodes in the circuit, there may be many different PC graphs, which are called sibling PC graphs. Sibling PC graphs contain mutually incompatible paths. The subgraph induced by the thick edges on the bottom portion of Figure 69.5 shows a PC graph that is a sibling of the one on the top portion. This graph also contains two paths (the ones induced by the thick edges).
The ATPG tool in Ref. [15] generates large sibling PC graphs for every pair of primary input and output nodes in the circuit. Robust PDF sensitization is considered. The size of each returned PC graph is measured in terms of the number of structurally compatible paths that satisfy the requirements for robust propagation described earlier. Experimentation in Ref. [15] shows that the line justification phase satisfies the constraints along a number of paths proportional to the size of the graph returned by the path sensitization phase.
Given a pair of primary input and primary output nodes, large sibling PC graphs are constructed as follows. Initially, a small number of lines in the circuit are removed so that the subcircuit between the selected primary input and output is a series–parallel graph. A polynomial-time algorithm is applied on the series–parallel graph to find the maximum number of structurally compatible paths that satisfy the conditions for robust propagation. An intermediate tree structure is maintained, which helps extract many such large sibling PC graphs for the same pair of primary input and output nodes. Finally, many of the previously deleted edges are reinserted so that the size of the sibling PC graphs is increased further by considering paths that do not necessarily belong to the previously constructed series–parallel graph.
Once a pair of patterns is generated, fault simulation must be performed to determine the number of robust paths detected by the generated pair. The fault simulation problem for the PDF model is not as easy as for the SSF model, and its methods are reviewed later. The difficulty lies in the fact that the number of PDFs is not necessarily a polynomial quantity.
Each generated pair of patterns targets robust PDFs in a particular sibling PC graph, and it is shown that these PDFs can be detected easily. The pair may, however, also detect robust PDFs in the portion of the circuit outside the targeted PC graph, which complicates the fault simulation process. Thus, faults are simulated only within the current PC graph, in which case a simple topological graph traversal suffices to detect them.
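As a sketch of that traversal (the adjacency-list encoding and node names are assumptions): if the pattern pair sensitizes every edge of the PC graph, the robust PDFs credited to it inside the graph are its source-to-sink paths, which one topological pass can count.

```python
# Count the source-to-sink paths of a (fully sensitized) PC graph with one
# topological traversal.
from collections import defaultdict, deque

def count_paths(edges, source, sink):
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    paths = defaultdict(int)
    paths[source] = 1                      # one (empty) path starts at the source
    queue = deque(n for n in nodes if indeg[n] == 0)
    while queue:                           # Kahn-style topological order
        u = queue.popleft()
        for v in succ[u]:
            paths[v] += paths[u]           # extend every path reaching u by edge u->v
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return paths[sink]

# A diamond-shaped PC graph: two structurally compatible paths from "a" to "z".
print(count_paths([("a", "g1"), ("a", "g2"), ("g1", "z"), ("g2", "z")], "a", "z"))  # 2
```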
Another ATPG approach for PDFs was proposed recently in Ref. [16]. In an initial step, static implication procedures are used to remove unsensitizable PDFs. This is illustrated in Figure 69.6. If line d receives value 1, then line g receives value 1 and line h receives value 0. However, 0 is the controlling value of the NAND gate and, therefore, no PDF with a rising transition on lines d and g can be sensitized robustly in the CUT. All such PDFs are then removed implicitly from further consideration. Such static implication rules were proposed in earlier works [17–19], but Ref. [16] implements them implicitly using functions. Although function-based SSF ATPG is slower than structural SSF ATPG methods, function-based fault-implicit PDF ATPG methods have been shown to outperform structural implicit PDF ATPG methods.
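The sketch below illustrates the static-implication idea on a tiny netlist that only loosely mirrors Figure 69.6 (the gate types and the extra nets e and f are assumptions): the assignment d = 1 is propagated forward to a fixpoint, exposing the implied controlling value 0 on the NAND side input h that rules out robust sensitization of rising transitions through d and g.

```python
# Static implications by forward propagation to a fixpoint.
CONTROLLING = {"AND": 0, "NAND": 0, "OR": 1, "NOR": 1}
INVERTING   = {"AND": False, "NAND": True, "OR": False, "NOR": True}

NETLIST = {                                # net -> (gate type, fan-in nets); all assumed
    "g": ("OR",   ["d", "e"]),
    "h": ("NOR",  ["d", "f"]),
    "i": ("NAND", ["g", "h"]),
}

def imply(assignment):
    """Propagate forced values forward until nothing changes."""
    values, changed = dict(assignment), True
    while changed:
        changed = False
        for out, (gtype, fanins) in NETLIST.items():
            cv, inv = CONTROLLING[gtype], INVERTING[gtype]
            ins = [values.get(f) for f in fanins]
            if cv in ins:                  # some input already at the controlling value
                forced = cv ^ inv
            elif all(v == 1 - cv for v in ins):
                forced = (1 - cv) ^ inv    # all inputs known and noncontrolling
            else:
                continue                   # output not yet determined
            if values.get(out) != forced:
                values[out] = forced
                changed = True
    return values

# d = 1 forces g = 1 and h = 0; since 0 controls the NAND, no PDF with a rising
# transition on d and g can be robustly sensitized, so all such PDFs are pruned.
print(imply({"d": 1}))                     # {'d': 1, 'g': 1, 'h': 0, 'i': 1}
```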
PDFs can be stored implicitly using canonical data structures. The most efficient approach appears to be a directed acyclic graph for storing and manipulating collections of sparse combination sets, called the zero-suppressed binary decision diagram (ZBDD). Another data structure is presented in Ref. [20], but the ZBDD-based method is reviewed here. Since the ZBDD is a decision diagram, each variable is represented by one or more nodes with two outgoing edges corresponding to a true and a false assignment of the variable, each function is a pointer to some variable node in the diagram, and its minterms are the paths that originate from that node and terminate at a designated node labeled 1. When using ROBDDs, a function is further reduced by removing variable nodes whose two outgoing edges point to the same node, thus suppressing don’t-cares. See the ROBDD in Figure 69.7 for the function a′b′cd′ + a′b′c′d. In contrast, the ZBDD represents the function by suppressing the absence of a variable in each minterm; see the ZBDD in Figure 69.7 for the same function. The collection of paths or PDFs in a CUT can be represented efficiently in a ZBDD since each path or PDF is a sparse set. Since ZBDDs are intended for sparse set manipulations, built-in operations such as set union, intersection, and difference have been implemented.
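The following toy ZBDD (not the implementation used in Ref. [16]) illustrates the data structure: hash-consed (variable, else-child, then-child) nodes, the zero-suppression rule that drops any node whose then-child is the empty family, and a union operation used here to store two example paths, adgi and bdgi, as sets of line names.

```python
# Toy ZBDD: ZERO is the empty family, ONE the family holding only the empty set.
ZERO, ONE = "0", "1"
_unique = {}                               # hash-consing table for canonicity

def node(var, lo, hi):
    if hi == ZERO:                         # zero-suppression rule
        return lo
    return _unique.setdefault((var, lo, hi), (var, lo, hi))

def union(f, g):
    if f == ZERO: return g
    if g == ZERO: return f
    if f == g:    return f
    if f == ONE:  f, g = g, f              # make sure f is a non-terminal node
    if g == ONE:                           # add the empty set to f's else-branch
        return node(f[0], union(f[1], ONE), f[2])
    if f[0] == g[0]:
        return node(f[0], union(f[1], g[1]), union(f[2], g[2]))
    if f[0] < g[0]:                        # f's top variable comes first in the order
        return node(f[0], union(f[1], g), f[2])
    return node(g[0], union(g[1], f), g[2])

def attach(f, var):
    """Add `var` to every set in family `f` (var must precede f's variables)."""
    return ZERO if f == ZERO else node(var, ZERO, f)

def sets(f):
    """Enumerate the stored sets (for small examples only)."""
    if f == ZERO: return []
    if f == ONE:  return [frozenset()]
    var, lo, hi = f
    return sets(lo) + [s | {var} for s in sets(hi)]

# Store the two paths {a, d, g, i} and {b, d, g, i} in a single diagram.
p1 = attach(attach(attach(attach(ONE, "i"), "g"), "d"), "a")
p2 = attach(attach(attach(attach(ONE, "i"), "g"), "d"), "b")
print(sets(union(p1, p2)))                 # the two paths adgi and bdgi
```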
All the PDFs in the CUT (sensitizable or not) are initially stored in a ZBDD using a topological traversal and built-in ZBDD operations. Subsequently, each static implication is applied by implicitly removing all the PDFs it proves unsensitizable, using built-in ZBDD operations during topological traversals of the CUT.
Next, the test functions for PDFs are derived via test functions for PDF portions called segments. A segment starts and ends at two topologically consecutive stems in the CUT, so there is a polynomial number of segments in the CUT. The test functions of all sensitized segments are stored in an ROBDD, and all PDFs that contain an unsensitizable segment are removed implicitly from the ZBDD. The algorithm then recursively manipulates the test functions of sensitized segments: if the concatenation of two or more segment test functions is empty, all PDFs that contain all of the respective segments are removed implicitly. For space efficiency, only the test functions of segments are kept in the ROBDD.
As an example, consider the circuit of Figure 69.6. Let F1 denote the test function for segment ad with a rising transition on line a, and F2 the test function for segment dgi with a rising transition on line d. The test function for PDF adgi with a rising transition on a is the concatenation of F1 and F2. However, if F2 is empty, then both PDFs adgi and bdgi with rising transitions on their input lines are untestable.
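A small sketch of this concatenation check, assuming purely for illustration that each segment test function is a set of primary-input vectors standing in for the vector pairs and ROBDDs of Ref. [16]; concatenation then becomes set intersection, and an empty result prunes every PDF built from the segments involved.

```python
# Toy model of segment test functions; the vectors and input order are made up.
def concatenate(*test_functions):
    """Conjoin segment test functions; an empty result means the composed PDFs are untestable."""
    return set.intersection(*map(set, test_functions))

F1 = {(1, 0, 0), (1, 0, 1)}                # hypothetical tests for segment ad, rising on a
F2 = {(1, 0, 1), (1, 1, 1)}                # hypothetical tests for segment dgi, rising on d

F_adgi = concatenate(F1, F2)               # test function for PDF adgi
print(F_adgi or "PDF adgi is untestable")  # {(1, 0, 1)}

# Were F2 empty, both adgi and bdgi would be pruned at once, since they share segment dgi.
```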
This step of the algorithm is not truly implicit since only unsensitizable PDFs are removed implicitly. However, in all existing benchmark circuits the majority of the PDFs are unsensitizable, and the approach is able to identify all sensitizable PDFs even for very large benchmarks [16]. The method can be accelerated by considering only critical PDFs, i.e., PDFs that contain a large number of gates and long interconnects. In this way, all sensitizable critical PDFs can be identified even for very large benchmarks.