Microprocessor Design Verification: Random and Biased-Random Instruction Generation
Random vector simulation is the primary verification methodology used for microprocessors today. New designs, as well as changes made to existing designs, are subjected to a battery of simulation and regression tests involving billions of pseudorandom vectors before focused testing is performed. Random test generation, also known as black-box testing, produces more complex combinations of instructions than a design verification engineer can write manually. A large number of test programs are generated randomly. Each test program consists of a set of register and memory initializations and a sequence of instructions; depending on the implementation, it may also contain the expected contents of the registers and memory after execution of the instructions, obtained from an architectural model of the design. The test programs are translated to assembler or machine-language code supported by the HDL simulator and are simulated on the RTL model. Purely random test programs are not ideal, however, because the generated instruction sequences may not exercise a sufficient number of corner cases; millions of vectors and days of simulation are therefore required before reasonable levels of coverage can be achieved. In addition, random vectors may violate constraints on memory addressing, causing invalid instruction execution.
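The structure described above can be sketched as follows. This is a minimal illustration only: the five-opcode register-to-register ISA, the register count, and the function names are all invented for the example, whereas a real generator would target a full architecture and pair each program with results from a reference model.

```python
import random

# Hypothetical toy ISA, used only for illustration.
OPCODES = ["ADD", "SUB", "AND", "OR", "XOR"]

def generate_test_program(num_instructions, num_regs=8, seed=None):
    """Generate one random test program: a set of register
    initializations followed by a sequence of instructions."""
    rng = random.Random(seed)
    # Random initial register state, applied before the instructions run.
    init = {f"r{i}": rng.randrange(0, 2**32) for i in range(num_regs)}
    program = []
    for _ in range(num_instructions):
        op = rng.choice(OPCODES)
        rd, rs1, rs2 = (rng.randrange(num_regs) for _ in range(3))
        program.append(f"{op} r{rd}, r{rs1}, r{rs2}")
    return init, program

init, prog = generate_test_program(5, seed=42)
```

In a full flow, the same `init` and `prog` would be run on both the architectural model (to compute expected register and memory contents) and the RTL model, and the results compared.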
Biased-Random Testing
Biasing is the manipulation of the probability of selecting instructions and operands during instruction generation. Biased-random instruction generation is used to create test programs that have a higher probability of leading to execution hazards for the processor. For example, the biasing scheme in Ref. [7] utilizes knowledge of the Alpha 21264 architecture to favor the generation of instructions that test architecture-specific corner cases, specifically those affecting control flow, out-of-order processing, superscalar structures, cache transactions, and illegal instructions.
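The simplest form of biasing is a weighted choice over instruction classes. The weight table below is purely illustrative (the instruction classes and percentages are assumptions, not taken from Ref. [7]); it shows how favoring branches and memory operations skews the instruction mix toward control-flow and cache corner cases.

```python
import random

# Assumed bias table (illustrative values): branches and memory
# operations are favored to stress control flow and cache behavior.
BIAS = {
    "BRANCH": 40,
    "LOAD":   25,
    "STORE":  20,
    "ADD":    10,
    "NOP":     5,
}

def pick_biased(rng):
    """Select one instruction class according to the bias weights."""
    ops = list(BIAS)
    weights = [BIAS[op] for op in ops]
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(0)
sample = [pick_biased(rng) for _ in range(1000)]
# BRANCH is drawn roughly 40% of the time, NOP roughly 5%.
```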
Constraint solving, another biasing technique, identifies output conditions or intermediate values that are important to verify [8]. The instruction generator identifies input values that would lead to these conditions and generates instructions that utilize these "biased" input values. Constraint solving is useful because it improves the probability of exercising certain corner cases. Both of these schemes have biases hard-coded into the test generation algorithm based on the instruction type.
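As a concrete example of working backward from an output condition to input values, consider forcing a carry-out from a 32-bit ADD. The sketch below is an assumption-laden toy (the function name and the target condition are chosen for illustration); a production generator would hand such constraints to a general constraint solver rather than solve them by construction.

```python
import random

MASK = 0xFFFFFFFF

def solve_add_overflow(rng):
    """Pick operands for a 32-bit ADD that is guaranteed to produce a
    carry-out -- the 'biased' input values that exercise the target
    output condition."""
    a = rng.randrange(1, 2**32)
    # Choose b so that a + b = 2**32 + k with 0 <= k < a, i.e., the sum
    # always overflows 32 bits while b itself stays representable.
    b = ((2**32 - a) + rng.randrange(0, a)) & MASK
    return a, b

rng = random.Random(7)
a, b = solve_add_overflow(rng)
instr = f"ADD r1, r2, r3  ; with r2 = {a:#x}, r3 = {b:#x}"
```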
Static and Dynamic Biasing
Biasing can be classified as either static or dynamic. Static biasing of test vectors involves randomly initializing the registers and memory, generating the biased-random test program, and applying it to the architectural and RTL models (e.g., the RIS tool from Motorola [9]). A major complication of this method is that the test generator must construct a test that does not violate the acceptable ranges for data and memory addresses. The solution to this problem is to constrain biasing within a restricted set of choices that define a constrained model of the environment, e.g., to reserve certain registers for indexed addressing [1].
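The "reserved registers" idea can be sketched as follows. The register names, data region, and helper functions are illustrative assumptions: two registers are set aside as base registers and always initialized inside a legal data region, so every generated load is guaranteed to address valid memory.

```python
import random

# Illustrative constrained environment: r6 and r7 are reserved as base
# registers for indexed addressing and are always initialized inside a
# legal data region.
DATA_BASE, DATA_SIZE = 0x1000, 0x1000
RESERVED_BASES = ["r6", "r7"]

def init_registers(rng):
    """Random initial state, except the reserved base registers, which
    are constrained to the lower half of the data region."""
    init = {f"r{i}": rng.randrange(0, 2**32) for i in range(8)}
    for reg in RESERVED_BASES:
        init[reg] = DATA_BASE + rng.randrange(0, DATA_SIZE // 2)
    return init

def gen_load(rng):
    """Generate a load whose effective address always stays in range:
    base is in the lower half, offset is bounded by the other half."""
    base = rng.choice(RESERVED_BASES)
    offset = rng.randrange(0, DATA_SIZE // 2)
    rd = rng.randrange(0, 6)  # never clobber a reserved base register
    return f"LOAD r{rd}, {offset}({base})"

rng = random.Random(3)
init = init_registers(rng)
load = gen_load(rng)
```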
Dynamically biased test generators use knowledge of the current processor state, memory state, and user bias preferences to generate more effective test programs. In dynamic instruction generation, the state of the programmer's model in the test generator is updated to reflect the execution of each instruction immediately after it is generated [8,10]. The test generator interacts with a tightly coupled functional model of the design to update current state information.
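A minimal sketch of this feedback loop is shown below. The class name, register file size, and the particular state-dependent bias (favoring source registers with large current values so the next ADD is likely to carry out) are all assumptions for illustration; the point is that the lightweight functional model is executed after every generated instruction, so each new choice sees the current state rather than the initial one.

```python
import random

MASK = 0xFFFFFFFF

class DynamicGenerator:
    """Illustrative dynamic biased generator: an internal functional
    model tracks register state, and each instruction is chosen
    against the *current* state."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.regs = [self.rng.randrange(0, 2**32) for _ in range(8)]

    def step(self):
        # State-dependent bias: prefer source registers whose current
        # value is large, raising the chance of a carry-out on the ADD.
        weights = [1 + (v >> 28) for v in self.regs]
        rs1, rs2 = self.rng.choices(range(8), weights=weights, k=2)
        rd = self.rng.randrange(8)
        # Execute on the internal model so the next step's bias
        # decisions reflect this instruction's result.
        self.regs[rd] = (self.regs[rs1] + self.regs[rs2]) & MASK
        return f"ADD r{rd}, r{rs1}, r{rs2}"

gen = DynamicGenerator(seed=1)
prog = [gen.step() for _ in range(10)]
```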
Drawbacks of random and biased-random testing include the vast amount of simulation time required to achieve acceptable levels of coverage and the lack of effective biasing methodologies. Determining when an acceptable level of coverage has been achieved is a major concern. Semiformal verification techniques have therefore become popular as a means to monitor simulation coverage as well as improve coverage by generating vectors to cover test cases that have not been exercised by random simulation.
In Section 64.4, we discuss several correctness checking techniques that are used to determine whether the simulation test was successful. Later, in Section 64.5, we review some of the common metrics used to evaluate the coverage of test programs.