Microprocessor Design Verification: Design Verification Environment
Introduction
The task of verifying that a microprocessor implementation conforms to its specification across various levels of design hierarchy is a major part of the microprocessor design process. Design verification is a complex process that involves a number of levels of abstraction (e.g., architectural, RTL, and gate), several aspects of design (e.g., timing, speed, functionality, and power), as well as different design styles [1]. With the high complexity of present-day microprocessors, verification often consumes more than 50% of the design cycle time.
The increasing complexity of designs has led to a number of approaches being used for verification. Simulation and formal verification are widely recognized as being at opposite ends of the design verification spectrum, as shown in Figure 64.1 [2]. Simulation is the process of exercising a software model of the design in an environment that models the actual hardware system. The values of internal and output signals are obtained for a given set of inputs and are compared with expected results to determine whether the design is behaving as specified. Formal verification, on the other hand, applies mathematical reasoning to an abstracted version of the design to prove that the design, or particular aspects of it, is correct.
Formal verification includes equivalence checking, model checking, and theorem proving. Equivalence checking verifies whether one description of a design is functionally equivalent to another. Model checking verifies that specified properties of a design are true, i.e., that certain aspects of the design always work as intended. In theorem proving, the entire design is expressed as a set of mathematical assumptions. Theorems are expressed using these assumptions and are then proven. Formal verification is particularly useful at lower levels of abstraction, e.g., to verify that a gate-level model matches its RTL specification. Formal verification is becoming popular as a means of achieving 100% coverage, at least for specific areas of the design, and is described more fully elsewhere in this book.
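To make the idea of equivalence checking concrete, the toy sketch below compares a behavioral description of a function with a gate-level expression of the same function by enumerating all input patterns. Both descriptions are hypothetical, and production equivalence checkers rely on BDD- or SAT-based reasoning rather than enumeration, which scales only to a handful of inputs.

```cpp
// Toy combinational equivalence check by exhaustive enumeration.
// Real equivalence checkers use canonical forms (BDDs) or SAT solvers;
// enumeration is shown only to make the idea concrete. Both descriptions
// below are hypothetical examples.
#include <cstdio>

// "Specification": carry-out of a 2-bit addition, written behaviorally.
bool spec_carry(bool a1, bool a0, bool b1, bool b0) {
    unsigned sum = (a1 << 1 | a0) + (b1 << 1 | b0);
    return sum > 3;
}

// "Implementation": the same function written as a gate-level expression.
bool impl_carry(bool a1, bool a0, bool b1, bool b0) {
    bool c0 = a0 && b0;                              // carry out of bit 0
    return (a1 && b1) || ((a1 ^ b1) && c0);          // carry out of bit 1
}

int main() {
    for (unsigned v = 0; v < 16; ++v) {              // all 2^4 input patterns
        bool a1 = v & 8, a0 = v & 4, b1 = v & 2, b0 = v & 1;
        if (spec_carry(a1, a0, b1, b0) != impl_carry(a1, a0, b1, b0)) {
            std::printf("mismatch at input %u\n", v);
            return 1;
        }
    }
    std::printf("descriptions are functionally equivalent\n");
    return 0;
}
```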
There are several problems inherent in applying formal verification to large microprocessor designs. While equivalence checking ensures that no functional errors are inserted from one design iteration to the next, it does not guarantee that the design meets the designer’s specifications. Model checking is useful to check consistency with specifications; however, the assertions to be verified must be manually written in most cases. The size of the circuit or state machine that can be formally verified is severely limited owing to the problem of state-space explosion. Finally, formal techniques cannot be used for performance validation because timing-dependent circuits such as oscillators rely on analog behavior that is not handled by mathematical representations.
Simulation is therefore the primary commercial design verification methodology in use, especially for large microprocessor designs. Simulation is performed at various levels in the design hierarchy, including at the register transfer, gate, transistor, and electrical levels, and is used for both functional verification and performance analysis. Timing simulation is becoming critical for ultradeep submicron designs because the problems of power grid IR drops, interconnect delays, clock skews, and signal electromigration intensify with shrinking process geometries and affect circuit performance adversely [3]. Timing verification involves performing 2D or 3D parasitic RC extraction on the layout followed by back-annotating the capacitance values obtained onto the netlist. A combination of static and dynamic timing analysis is performed to find critical paths in the circuit. Static analysis involves analyzing delays using a structural model of the circuit, while dynamic analysis uses vectors to simulate the design to locate critical paths [3]. Accurate measurements of the critical path delays can then be obtained using SPICE. Techniques for timing verification are described elsewhere in this book.
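As a rough illustration of the static portion of this flow, the sketch below propagates worst-case arrival times through a gate-level netlist modeled as a directed acyclic graph. The netlist topology and the delay values are hypothetical stand-ins for what back-annotated RC extraction would provide.

```cpp
// Minimal sketch of block-based static timing analysis, assuming the
// back-annotated netlist has been reduced to a DAG in which each edge
// carries a combined gate + interconnect delay in picoseconds. The
// netlist and delays below are hypothetical.
#include <cstdio>
#include <vector>

struct Edge { int to; double delay_ps; };

int main() {
    const int num_nodes = 5;                        // 0,1 = primary inputs, 4 = primary output
    std::vector<std::vector<Edge>> fanout(num_nodes);
    fanout[0] = {{2, 35.0}};                        // example annotated delays (ps)
    fanout[1] = {{2, 40.0}, {3, 55.0}};
    fanout[2] = {{4, 60.0}};
    fanout[3] = {{4, 30.0}};

    // Nodes are assumed to be numbered in topological order, so a single
    // forward pass propagates worst-case arrival times.
    std::vector<double> arrival(num_nodes, 0.0);
    std::vector<int> pred(num_nodes, -1);
    for (int n = 0; n < num_nodes; ++n)
        for (const Edge& e : fanout[n])
            if (arrival[n] + e.delay_ps > arrival[e.to]) {
                arrival[e.to] = arrival[n] + e.delay_ps;
                pred[e.to] = n;
            }

    // Report the critical path ending at the primary output; a SPICE run
    // on just this path would then refine the delay estimate.
    std::printf("worst-case arrival at output: %.1f ps\n", arrival[4]);
    for (int n = 4; n != -1; n = pred[n]) std::printf("  node %d\n", n);
    return 0;
}
```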
Pseudorandom vector generation is the most popular form of generating instruction sequences for functional simulation. Random test generators provide the ability to generate test programs that lead to multiple simultaneous events, which would be extremely time consuming to write by hand [4]. Furthermore, the amount of computation required to generate random instruction sequences is low. However, random simulation often requires a very long time to achieve a suitable level of confidence in the design. This has given rise to the use of a number of semiformal metrics to estimate and improve simulation coverage. These methods combine the advantages of simulation and formal verification to achieve a higher coverage, while avoiding the scaling and methodology problems inherent in formal verification. In this chapter, we focus on the tools and techniques used to generate instruction sequences for a simulation-based verification environment.
The chapter is organized as follows. We begin with a description of the design verification environment in Section 64.2. Random and biased-random instruction generation, which lie at the simulation end of the spectrum, are discussed in Section 64.3. Section 64.4 describes three popular correctness checking methods that are used to determine the success or failure of a simulation run. Coverage metrics, which are used to estimate simulation coverage, are presented in Section 64.5. In Section 64.6, we move to the middle of the design verification spectrum and discuss smart simulation, which is used to generate vectors satisfying semiformal metrics. Wide simulation, which refers to the use of formal assertions to derive vectors for simulation, is described in Section 64.7. Having covered the spectrum of semiformal verification methods, we conclude with a description of hardware emulation in Section 64.8. Emulation uses dynamically configured hardware to implement a design, which can be simulated at high speeds.
Design Verification Environment
In this section, we present a design verification environment that is representative of many commercial verification methodologies. This environment is illustrated in Figure 64.2, and the typical levels of design abstraction are shown in Figure 64.3. We describe the different parts of the environment and the role each part plays in the verification process.
Architectural Model
A high-level specification of the microprocessor is first derived from the product requirements and from the requirement of compatibility with previous generations. An architectural simulation model and an RTL model are then implemented based on the product specification. The architectural model, often written in C or C++, includes the programmer-visible registers and the capability to simulate the execution of an instruction sequence. This model emphasizes simulation speed and correctness over implementation detail and therefore does not represent pipeline stages, parallel functional units, or caches. This model is instruction accurate, but not clock cycle accurate [1]. A typical architectural model executes over 100 times faster than a detailed RTL model [4].
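A minimal sketch of such an instruction-accurate model, written in C++ under a hypothetical three-instruction ISA, is shown below. It captures only the programmer-visible registers and a step-per-instruction execution loop, with no notion of pipelines, caches, or clock cycles.

```cpp
// Minimal sketch of an instruction-accurate architectural model in C++.
// The three-instruction ISA (LOADI, ADD, HALT) and register count are
// hypothetical; a real model would cover the full instruction set,
// privileged state, and exceptions, but would still contain no pipeline,
// cache, or cycle timing.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

enum class Op { LOADI, ADD, HALT };
struct Insn { Op op; int rd, rs1, rs2; int64_t imm; };

struct ArchState {
    std::array<int64_t, 32> regs{};   // programmer-visible registers only
    uint64_t pc = 0;
};

// Execute one instruction per call: instruction accurate, not cycle accurate.
bool step(ArchState& s, const std::vector<Insn>& program) {
    const Insn& i = program[s.pc++];
    switch (i.op) {
        case Op::LOADI: s.regs[i.rd] = i.imm;                         break;
        case Op::ADD:   s.regs[i.rd] = s.regs[i.rs1] + s.regs[i.rs2]; break;
        case Op::HALT:  return false;
    }
    return true;
}

int main() {
    std::vector<Insn> program = {
        {Op::LOADI, 1, 0, 0, 5}, {Op::LOADI, 2, 0, 0, 7},
        {Op::ADD,   3, 1, 2, 0}, {Op::HALT,  0, 0, 0, 0}};
    ArchState s;
    while (step(s, program)) {}
    std::printf("r3 = %lld\n", static_cast<long long>(s.regs[3]));  // expected: 12
    return 0;
}
```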
RTL Model
The RTL model, implemented in a hardware description language (HDL) such as VHDL or Verilog, is more detailed than the architectural model. Data is stored in register variables, and transformations are represented by arithmetic and logical operators. Details of pipeline implementation are included. The RTL model is used to synthesize a gate-level model of the design, which may be used to formally verify equivalence between the RTL and transistor-level implementations or for automatic test pattern generation (ATPG) for manufacturing tests. Circuit extraction can also be performed to derive a gate-level model from the transistor-level implementation. In many methodologies, the RTL represents the golden model to which other models must conform. Equivalence checking is commonly used to verify the equivalence of RTL, gate-level, and transistor-level implementations.
Test Program Generator
The combination of simulation and formal methods is an emerging paradigm in design verification. A test program generator may therefore use a combination of random, hand-crafted, and deterministic instruction sequences generated to satisfy certain semiformal measures of coverage. These measures include the coverage of statements in the HDL description and coverage of transitions between control states in the design’s behavior. The RTL model is simulated with these test vectors using an HDL simulator and the results are compared with those obtained from the architectural simulation. Since the design specification (architectural-level) and design implementation (RTL or gate-level) are at different levels of abstraction, there can be no cycle-equivalent comparison. Instead, comparisons are made at special checkpoints such as at the completion of a set of instructions [5]. Sections 64.3, 64.6, and 64.7 discuss the most popular techniques used for test generation.
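The sketch below illustrates the biased-random portion of such a generator. The opcode set, bias weights, and assembly-style output are hypothetical; a real generator also models enough architectural state to keep operands, addresses, and branch targets legal.

```cpp
// Sketch of a biased-random instruction sequence generator. The opcode
// mix, bias weights, and assembly-like output are hypothetical; real
// generators also track architectural state so that generated operands,
// addresses, and branch targets remain valid.
#include <cstdio>
#include <random>
#include <string>
#include <vector>

struct OpClass { std::string mnemonic; double weight; };

int main() {
    // Bias the mix toward loads and stores to stress the memory pipeline.
    std::vector<OpClass> ops = {
        {"load", 0.35}, {"store", 0.25}, {"add", 0.20},
        {"mul", 0.10}, {"branch", 0.10}};

    std::vector<double> weights;
    for (const OpClass& o : ops) weights.push_back(o.weight);

    std::mt19937 rng(12345);                          // fixed seed: reproducible runs
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    std::uniform_int_distribution<int> reg(0, 31);

    for (int i = 0; i < 10; ++i) {                    // emit a short test program
        const OpClass& o = ops[pick(rng)];
        std::printf("%-6s r%d, r%d, r%d\n",
                    o.mnemonic.c_str(), reg(rng), reg(rng), reg(rng));
    }
    return 0;
}
```

The fixed seed makes a failing sequence reproducible, which matters when a mismatch between the RTL and architectural results must be rerun for debug.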
HDL Simulator
HDL simulation consists of two stages. In the compile stage, the design is checked for errors in syntax or semantics and is converted into an intermediate representation. The design representation is then reduced to a collection of signals and processes. In the execute stage, the model is simulated by initializing values on signals and executing the sequential statements belonging to the various processes. This can be achieved in two ways: event-driven simulation and cycle-based simulation. Event-driven simulation is based on determining changes (events) in the value of each signal in a clock cycle and may incorporate various timing models. A process is first simulated by assigning a change in value to one or more of its inputs. The process is then executed, and new values for other signals are calculated. If an event occurs on another signal, other processes that are sensitive to that signal are executed. Events are processed in the order of the time at which they are expected to occur according to the timing model used. In this manner, all events occurring in a clock cycle are calculated. Cycle-based simulators, in contrast, limit calculations by determining simulation results only at clock edges and ignoring interphase timing. Cycle-based simulators focus only on the design functionality by performing zero-delay, two-valued simulation (memory elements are assumed to be initialized to known values), and they offer an improvement in speed of up to 10X while utilizing one-fifth of the memory required for event-driven simulation. However, cycle-based simulators are inefficient in verifying asynchronous designs, and event-driven simulators must be used to derive initializing sequences and for timing calculations. Simulation techniques used at various levels of design abstraction are discussed more fully in this book.
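The heart of an event-driven simulator is a time-ordered event queue. The sketch below shows that scheduling loop for a hypothetical two-gate netlist with 1 ns gate delays, omitting the sensitivity-list bookkeeping, multi-valued logic, and timing models that a real HDL simulator provides.

```cpp
// Skeleton of an event-driven simulation kernel: events are processed in
// time order, and a change on a signal schedules re-evaluation of the
// gates in its fanout after their delay. The two-gate netlist and the
// 1 ns delays are hypothetical placeholders; signals start at a default
// value rather than the unknown state a real simulator would use.
#include <cstdio>
#include <queue>
#include <vector>

struct Event { double time_ns; int signal; bool value; };
struct Later { bool operator()(const Event& a, const Event& b) const {
    return a.time_ns > b.time_ns; } };

int main() {
    // Signals: 0 = a, 1 = b, 2 = n (a AND b), 3 = y (NOT n)
    std::vector<bool> val(4, false);
    std::priority_queue<Event, std::vector<Event>, Later> queue;

    // Stimulus: drive the primary inputs.
    queue.push({0.0, 0, true});
    queue.push({2.0, 1, true});

    while (!queue.empty()) {
        Event e = queue.top(); queue.pop();
        if (val[e.signal] == e.value) continue;       // no change -> no new event
        val[e.signal] = e.value;
        std::printf("t=%4.1f ns  signal %d -> %d\n", e.time_ns, e.signal, e.value);

        // Re-evaluate the gates sensitive to this signal, with 1 ns delay.
        if (e.signal == 0 || e.signal == 1)
            queue.push({e.time_ns + 1.0, 2, val[0] && val[1]});
        if (e.signal == 2)
            queue.push({e.time_ns + 1.0, 3, !val[2]});
    }
    return 0;
}
```

A cycle-based simulator would instead evaluate the whole netlist once per clock edge with zero delay, which is why it can dispense with the queue management shown here.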
Emulation Model
Hardware emulation is a means of embedding a dynamically configured prototype of the design in its final environment. This hardware prototype, known as the emulation model, is derived from the gate-level implementation of the design. The prototype can execute both random vectors and software application programs faster than conventional software logic simulators. It is also connected to a hardware environment, known as an in-circuit facility, to provide it with a high throughput of test vectors at appropriate speeds. Hardware emulation executes from three to six orders of magnitude faster than simulation and consequently requires considerably less verification time. However, hardware emulators have limitations on the sizes of the circuits they can handle.
Table 64.1 presents the results of a survey conducted by 0-In Design Automation on verification techniques currently used in industry [6]. Columns 1 and 3 in the table list the different techniques, while columns 2 and 4 show the percentage of surveyed engineers currently using a particular approach. While formal methods are becoming popular as a means to more exhaustively cover the design, pseudorandom simulation is still a vital part of the verification engineer’s repertoire. In Section 64.3 we review some conventional verification techniques that use pseudorandom and biased-random test programs for simulation.