Microprocessor Design Verification: Smart Simulation

Smart Simulation

Deterministic or smart simulation uses vectors that cover a specific aspect of the design’s behavior, derived from details of its implementation. We first describe ad hoc techniques such as hazard-pattern enumeration, which target specific blocks in the processor, and then describe more general techniques aimed at verifying the entire chip.

Hazard Pattern Enumeration

Ad hoc techniques typically target a specific block in the design such as a pipeline [18,19] or cache controller [20]. Errors in the pipeline mechanism represent only a small fraction of the total errors. In a study undertaken in Ref. [19], it was shown that only 2.79% of the total errors in a commercial 32-bit CPU design were related to the pipeline interlock controller. However, these errors are widely acknowledged as being the hardest to detect and are therefore targeted by ad hoc methods.

Pipeline hazards are situations that prevent the next instruction from executing in its designated clock cycle. These are classified as structural hazards, data hazards, and control hazards. Structural hazards occur when two instructions in different pipeline stages attempt to access the same physical resource simultaneously. Data hazards are of three types: read-after-write (RAW), write-after-write (WAW), and write-after-read (WAR) hazards. The most common are RAW hazards, in which the second instruction attempts to read the result of the first instruction before it is written. Control hazards are treated as RAW hazards in the program counter (PC). An algorithm that enumerates all the structural, data, and control hazard patterns for each common resource in the pipeline is presented in Ref. [18]. Test programs that include all the patterns that can cause the pipeline to stall are then generated.

Lee and Siewiorek [19] define the set of state variables read by an instruction as its read state and the set written by the instruction as its write state. A conflict exists between two instructions if at least one of them is a write and the intersection of their read/write or write/write states is not empty. A dependency graph is constructed with nodes representing all the possible read/write instructions and edges (or dependency arcs) representing conflicts between instructions. Test programs are generated to cover all the dependency arcs in the graph and the dependency arc coverage is calculated.
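The conflict test and dependency-arc coverage can be sketched as a small set computation; the toy instruction set below is invented for illustration, not taken from Ref. [19]:

```python
# Sketch of the conflict model described above: two instructions conflict
# if at least one of them writes, and the intersection of their read/write
# or write/write states is non-empty. Instruction names and state sets
# here are hypothetical.

def conflicts(inst1, inst2):
    """inst1, inst2 are (read_state, write_state) pairs of state-variable sets."""
    reads1, writes1 = inst1
    reads2, writes2 = inst2
    return bool(writes1 & reads2) or bool(reads1 & writes2) or bool(writes1 & writes2)

def dependency_arcs(instructions):
    """All ordered pairs of distinct instructions whose states conflict."""
    return {(a, b)
            for a in instructions for b in instructions
            if a != b and conflicts(instructions[a], instructions[b])}

# name -> (read state, write state)
insts = {
    "load":  ({"mem", "r1"}, {"r2"}),
    "add":   ({"r2", "r3"}, {"r4"}),
    "store": ({"r4"},       {"mem"}),
}
arcs = dependency_arcs(insts)                   # 6 dependency arcs (3 conflicting pairs, both orders)
covered = {("load", "add"), ("add", "store")}   # arcs exercised by the test programs so far
coverage = len(covered & arcs) / len(arcs)      # dependency arc coverage: 1/3
```

Test generation then amounts to emitting instruction sequences until `coverage` reaches 1, i.e., every dependency arc has been exercised.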

In Ref. [20], a cache controller is verified using a model of the memory hierarchy, a set of cache coherence protocols, and enumeration capabilities to generate test programs for the design.

The problem inherent in ad hoc techniques is that pipeline behavior after the detection of a hazard is usually not considered [21]. Test cases reachable only after a hazard has occurred are therefore not covered. We next discuss more general test generation techniques, which are applicable to a larger part of the design.

ATPG

An important class of verification techniques is based on the use of test programs generated by ATPG tools. Coverage is measured as the fraction of design errors detected. These methods have been used in industry to verify the equivalence between the gate- and transistor-level models, e.g., in the verification of PowerPC™ arrays [13,14]. In this approach, a gate-level model is created from the transistor-level implementation, and tests generated at the gate-level view are simulated at the transistor level to verify equivalence. However, these techniques, while effective at lower levels of abstraction, do not provide a good measure of the extent to which the design has been exercised.

State and Transition Traversal

Tests generated by traversing the design’s state space work on the principle that verification will be close to complete if the processor either visits all the states or exercises all the transitions of its state graph during simulation [15,17,22]. Since memory limitations make it impossible to examine the state graph of the entire design, the design behavior is usually abstracted in the form of a reduced state graph. Test sequences are then generated which cause this reduced FSM to exercise all the transitions. Figure 64.5 illustrates the verification flow for this technique. The first step is to extract the control logic of the design in the form of an FSM. The datapath is usually not considered because most designs have datapaths of substantial sizes, which can lead to an unmanageable state space. Furthermore, errors in the datapath usually result from incorrect implementation, not incorrect design, and can be easily tested by conventional simulation [17].

A method to extract the control logic of the design in the form of an FSM can be found in Ref. [17]. This is illustrated in Figure 64.6. The data variables in the design are made nondeterministic by including them in the set of primary inputs to the FSM. Since the datapath is to be excluded from consideration, the inputs to the data variables are excluded. This is represented by the dotted lines in Figure 64.6. The support set of the primary outputs and control state variables is now determined in terms of the primary inputs, control state variables, and data variables. This support set forms the new set of primary inputs to the FSM. Data variables that are not a part of the support set are excluded from the FSM. In this manner, the effect of the data variables on the control flow is taken into account, even though the data registers are abstracted.
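The support-set computation above reduces to a small set calculation. The sketch below is only illustrative: every signal name and dependence set is invented, and a real implementation of Ref. [17] would derive the support sets from the HDL, not from a hand-written table:

```python
# Hypothetical sketch of the FSM abstraction step: datapath registers are
# cut away and data variables become free inputs, but only the variables
# in the support set of a primary output or control state variable are
# kept. All names and dependence sets below are made up for this example.

# Each control state variable / primary output -> variables it depends on.
support = {
    "stall":      {"opcode", "valid", "reg_equal"},  # reg_equal is a data variable
    "next_stage": {"stall", "opcode"},
    "out_ack":    {"valid"},
}
control_state_vars = {"stall", "next_stage"}
data_vars = {"reg_equal", "alu_result", "addr"}      # alu_result, addr feed no control logic

# Union of all support sets; control state variables stay internal to the
# FSM, and data variables outside the support set are dropped entirely.
used = set().union(*support.values())
fsm_inputs = used - control_state_vars               # {'opcode', 'valid', 'reg_equal'}
dropped_data_vars = data_vars - fsm_inputs           # {'alu_result', 'addr'}
```

Note how `reg_equal` survives the abstraction: its effect on the control flow (e.g., a branch-taken decision) is preserved even though the data registers it compares are gone.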

After the FSM has been extracted, state enumeration is performed to determine the reachable states and a state graph which details the behavior of the FSM is generated. Since coverage is typically evaluated by the number of states visited or the number of transitions exercised, a state or transition tour of the state graph is found. A state (transition) tour of a directed state graph is a sequence of transitions that traverses every state (transition) at least once. Several polynomial-time algorithms have been developed for finding transition tours in nonsymmetric, strongly connected graphs, since this problem (the Chinese Postman problem) is frequently encountered in protocol conformance testing [22]. The transition tour is translated into an instruction sequence that will cause the FSM to exercise all transitions.
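A transition tour can be sketched with a greedy traversal: take an untraversed transition when one leaves the current state, otherwise walk (via BFS over already-known transitions) to the nearest state that still has one. The Chinese Postman algorithms of Ref. [22] minimize the tour length; this version, with an invented toy FSM, only guarantees that every transition is exercised at least once:

```python
# Illustrative (non-minimal) transition tour of a strongly connected
# state graph. graph: state -> list of (input, next_state).
from collections import deque

def transition_tour(graph, start):
    """Return a tour as a list of (state, input, next_state) transitions
    that traverses every transition of `graph` at least once."""
    remaining = {(s, i, t) for s, edges in graph.items() for i, t in edges}
    state, tour = start, []
    while remaining:
        untaken = [(i, t) for i, t in graph[state] if (state, i, t) in remaining]
        if untaken:
            i, t = untaken[0]
            remaining.discard((state, i, t))
            tour.append((state, i, t))
            state = t
        else:
            # BFS to the nearest state with an untraversed outgoing transition.
            prev, seen, queue, goal = {}, {state}, deque([state]), None
            while queue:
                s = queue.popleft()
                if any((s, i, t) in remaining for i, t in graph[s]):
                    goal = s
                    break
                for i, t in graph[s]:
                    if t not in seen:
                        seen.add(t)
                        prev[t] = (s, i)
                        queue.append(t)
            # Replay the connecting path (already-covered transitions).
            path, s = [], goal
            while s != state:
                p, i = prev[s]
                path.append((p, i, s))
                s = p
            tour.extend(reversed(path))
            state = goal
    return tour

# Toy reduced FSM: two states, three transitions.
fsm = {"IDLE": [("req", "BUSY")], "BUSY": [("done", "IDLE"), ("wait", "BUSY")]}
tour = transition_tour(fsm, "IDLE")
```

Each `input` label in the tour would then be translated back into the instruction (sequence) that triggers that transition, yielding the final test program.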

[Figure 64.5: Verification flow for state and transition traversal.]

[Figure 64.6: Extraction of the control logic of the design as an FSM.]

Cheng and Krishnakumar [15] use exhaustive coverage of the reduced FSM to generate test programs guaranteeing that all statements in the original HDL representation are exercised. A test generation technique based on visiting all states in the state graph is presented in Ref. [21]. Test cases are developed based on enumerating hazard patterns in the pipeline and are translated into sequences of states in the state graph. Simulation vectors that satisfy all test cases are generated. A more general transition-traversal technique is given in Ref. [22]. A translator is used to convert the HDL representation into a set of interacting FSMs. A full-state enumeration of the FSMs is performed to find all reachable states from reset. This produces a complete state graph, which is used to generate vectors that cause the processor to take a transition tour.

Finally, several classes of processors for which transition coverage is effective are identified in Ref. [5]. The authors demonstrate that under a given set of conditions, transition tours of the state graph can be used to completely validate a large class of processors.

State-space explosion is currently a key problem in computing state machine coverage. As designs get larger and considerably more complex, the maximum size of the state machine that can be handled is the major limiting factor in the use of formal methods. However, research is currently being undertaken to deal with state explosion, and we foresee an increasing use of formal coverage metrics in the future.
