Functional verification and 'e'

The e language enjoys popular use in the ASIC/VLSI industry for the specification, modeling, testing, and verification of hardware systems.

Features of e include a combination of object-oriented and constraint-oriented mechanisms for the specification of data formats and interdependencies, interesting mechanisms of inheritance, and an efficient combination of interpreted and compiled code. Since the language is also extensible, it serves as a living, industrial-scale implementation and application of the aspect-oriented programming paradigm.

In the following tour of the e language we will cover the language highlights, its novel features and their particular suitability to the task of hardware verification, and report on our experience of aspect-oriented programming in this intense commercial setting.

Objects have been a great success at facilitating the separation of concerns, but objects are limited in their ability to modularize systemic concerns that are not localized to a single module's boundaries. Rather than staying well localized within a class, these concerns tend to crosscut the system's class and module structure. Much of the complexity and brittleness in existing systems appears to stem from the way in which the implementation of these kinds of concerns comes to be intertwined throughout the code.

We observed that while object-oriented techniques have given the programmer excellent data abstraction mechanisms, objects themselves are cumbersome when it comes to expressing aspects of behavior that affect several data types. Moreover, OOP does not naturally provide non-invasive extension mechanisms for layering new functionality over existing code.
  • A typical verification problem: A functional verification program consists of a more or less detailed description of the functionality of a device, its operating environment, and the data transformations it performs. In general terms, functional verification is predicated on the assumption that a detailed simulation model of the device has been implemented in a suitable hardware description language. Such descriptions are simulated in software or emulated in configurable hardware in order to determine the precise timing properties of the design and to judge its functional correctness. Given such an implementation of the device under test (DUT), a suitable testbench needs to be erected around the DUT in order to subject it to a large number of tests.
    • Instructions: A key element in any verification environment is an adequate description of the data being manipulated—CPU instructions, in this case. Such descriptions typically form natural classes of structured data—thus CPU instructions will be defined by some common elements such as opcode and addressing mode, but differences emerge (say) in the operands present, causing a classification into immediate (e.g., the second operand, op2, is a two byte integer constant), memory (op2 is a two byte memory address), and register instructions (op2 is a four bit register index); a sketch of such a classification in e follows.
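      As a sketch only (the type and field names here are invented for illustration), such a classification might be captured in e using the when subtype mechanism discussed later in this article:

      type cpu_opcode: [ADD, LOAD, STORE, BRANCH_INDEXED];
      type addr_mode: [immediate, mem, reg];
      struct instruction {
          opcode: cpu_opcode;
          mode: addr_mode;
          when immediate instruction {
              op2: int (bits: 16);   -- a two byte integer constant
          };
          when mem instruction {
              op2: uint (bits: 16);  -- a two byte memory address
          };
          when reg instruction {
              op2: uint (bits: 4);   -- a four bit register index
          };
      };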
    • Test Generator: This software ultimately creates a sequence of test vectors (of bits) to stimulate the DUT, whether on the fly or as a prelude to running a test. Setting aside the question of how to (randomly) generate instances of the data classes involved, the test generator needs to determine which inputs are legal and which are not. To some extent a strong type system helps define legal ranges—it is easy, for instance, to generate a random four bit value for op2 in the register class. However, via types alone it is difficult to stipulate, for example, that since register zero never holds a branch address an indexed branch instruction cannot have op2 equal to zero. Constraints, in the form of Boolean relationships over the fields of class definitions, contribute the necessary flexibility, relieving the programmer (or test writer) of much unnecessary programming; the register-zero rule is sketched below.
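      Continuing the hypothetical instruction sketch, the register-zero rule might be stated as a constraint on the register subtype (BRANCH_INDEXED is an assumed opcode value):

      extend reg instruction {
          -- register zero never holds a branch address, so an
          -- indexed branch may not name it as its operand
          keep opcode == BRANCH_INDEXED => op2 != 0;
      };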
    • Reference model: Commonly, but not necessarily, a reference model will be used to predict correct responses from the DUT for each datum input during a test. Typically functional verification works at the level of whole transactions rather than clock cycles of the DUT—in this case a transaction is initiated by injecting an instruction into the running simulation, and terminated some time later by observing a result on one of the device's output channels. Reference models thus do not need to be cycle-accurate specifications of the hardware, just functionally accurate; a minimal shape for such a model is sketched below.
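      At transaction level, a reference model can be as simple as a struct with a prediction method. The sketch below reuses the hypothetical instruction struct; the body is elided since it depends entirely on the instruction set being modeled:

      struct ref_model {
          -- functionally accurate prediction of one instruction's
          -- result; no notion of clock cycles is needed
          predict(inst: instruction): uint is {
              result = 0;  -- placeholder for the real computation
          };
      };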
    • Checker: The testbench must obviously check the expected results of the test against the actual computation. In CPU verification there are typically two types of checker: a data checker that ensures that all instructions computed the correct results (sketched below), and a temporal checker that monitors how each instruction is executed by the DUT. This latter activity calls for the definition of behavioral rules (e.g., via executable temporal logic, or finite automata) that are run concurrently with the DUT, monitoring its state and progress.
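      A data checker can compare actual against predicted values with e's check that action; this sketch assumes the hypothetical ref_model above (temporal checking is not covered in this article):

      struct checker {
          model: ref_model;
          check_result(inst: instruction, actual: uint) is {
              var expected: uint = model.predict(inst);
              check that actual == expected else
                  dut_error("wrong result for opcode ", inst.opcode);
          };
      };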
    • Coverage: Metrics that help the verification engineer decide how well the verification is progressing have to be carefully designed with reference to a test plan. For instance it may be required to test that the CPU responded correctly to an interrupt when a branch instruction was being decoded. The ‘responds correctly’ part may be captured by a temporal rule invoked under such circumstances, but the fact that this scenario occurred during testing would be entered as a functional coverage point, as sketched below. In a simple case one might be content to count how many times this combination of circumstances occurred.
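      In e, such functional coverage points are declared as cover groups tied to events; the sketch below assumes a monitor struct and an event emitted when the scenario is observed:

      struct cpu_monitor {
          -- assumed: emitted when an interrupt arrives while
          -- a branch instruction is being decoded
          event int_during_branch;
          opcode: cpu_opcode;
          cover int_during_branch is {
              item opcode;  -- count occurrences, broken down by opcode
          };
      };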
      Given a functional verification environment such as that envisaged above, tests will be devised to exercise the design. Sometimes these need to be very deterministic (e.g., in the early phases of the verification effort when one is testing basic functionality), but better coverage of the state space is achieved through random testing, especially when the ‘randomness’ can be directed towards particular goals. Often such goals are expressed as corner cases, particularly where functions of the device interact with one another. It is principally for this purpose that the e language was developed: random, directed test generation. A small example follows.
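      Direction is typically achieved by loading a small test file whose constraints bias generation toward the corner of interest. In this sketch, soft constraints steer generation toward indexed branches without forbidding other instructions outright:

      extend instruction {
          keep soft mode == reg;
          keep soft opcode == BRANCH_INDEXED;
      };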
    • Factors influencing e's design: Since its initial conception in the early nineties, the e language has evolved to meet the needs of functional verification engineers. e is used to describe the DUT, its operating environment, its legal inputs, and its behavior over time. Specman, Verisity's flagship product implementing the language and runtime system, takes such a description and uses it to generate test inputs and drive them into the DUT, carry out temporal and data checking by monitoring the device, create coverage reports, and assist in debugging. Even though e is a general-purpose programming language (in fact most of Specman is written in e), its design has been geared towards the task of modeling and verifying hardware systems. This specific task imposed a number of important characteristics on the language.
      • Specialized language constructs: These include, for example, constraints, which provide an effective declarative mechanism for the specification of configurations and for guiding test generation, and temporal properties (also declarative), which are used to describe time-based phenomena. Inevitably there are many hardware-oriented primitive types and operators on them, such as bit-access and bit-slicing (common HDL functions), as well as mechanisms for specifying parallel execution; a flavor of these operators is sketched below.
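        For a flavor of the hardware-oriented operators, the sketch below exercises bit slicing and bit concatenation (the value is arbitrary):

        extend sys {
            run() is also {
                var r: uint = 0xdeadbeef;
                print r[15:0];             -- bit slicing, as in an HDL
                print r[31:28];            -- the top nibble
                print %{r[7:0], r[15:8]};  -- bit concatenation
            };
        };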
      • Simplified textual syntax: The rich toolset that e provides must be served by an easy-to-use, non-cryptic syntax. The design of the syntax and the semantics was also influenced by the reality that the principal users of the language are not software specialists but mainly hardware engineers who, in particular, may not be schooled in object-oriented languages.
      • Performance: The verification of hardware systems by means of simulation is, almost by definition, a slow process. Every hardware cycle, in which many operations may take place in parallel, is translated into a sequence of slow software steps. In addition, the quality of a verification process is highly dependent on its coverage level; even a non-exhaustive verification process may execute for months on dedicated, powerful servers. This is why e has a very efficient implementation: typically, an operation in e (such as a field access or function call) is implemented in a similar manner to the equivalent operation in C.
      • Compiled and interpreted code: For reasons discussed in subsequent sections, when building testbenches there is a need to be able to load files which add new features, constructs, and especially constraints, on top of an extant code base. There is also a need to mix those independently constructed additions in an unrestricted way.
        On the face of it, e is a lexically scoped, statically type-checked object-oriented language with single inheritance. A struct in e, just like a class in other programming languages, may declare fields and methods. Structs may also contain several unique declarative components, including constraints (affecting initial values assigned to fields), event definitions (for monitoring DUT behavior), and temporal properties (checking protocols, etc.). The temporal and concurrent features of e are not discussed further here.
        A simple example, drawn from a verification environment for a packet-switching device, demonstrates how constraints are used in e.
        type packet_kind: [empty, short, long];
        struct packet {
            i: int;
            j: int;
            kind: packet_kind;
            keep i < j + 1;
            keep j in [1..5];
        };
        The first of these two statements declares an enumerated type, the second declares a structured object with several scalar fields. The keeps are constraints that affect initial values assigned to the fields mentioned whenever an instance of this class is created—Specman resolves such constraints during a test run in order to generate a random, directed stream of data for the DUT. Constraints in e are linear functions over finite domains.
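        To see these constraints in action, one might generate packets on the fly; gen is the e action that invokes the constraint solver (a sketch, with the driving loop invented for illustration):

        extend sys {
            run() is also {
                var p: packet;
                for i from 1 to 4 do {
                    gen p;  -- i and j will satisfy the keep constraints
                    print p.i, p.j, p.kind;
                };
            };
        };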
        While the synthesis of constraint solving and object-oriented programming in e is an interesting subject in itself, it is not explored further in this article, which focuses instead on the language constructs that address separation of concerns. Thus, in addition to the simple inheritance mechanism (called like inheritance in e), the language provides a unique and powerful when inheritance mechanism. Moreover, any e struct can be extended in a later module: fields, methods, events, and constraints can be added to it, and method definitions can be modified or overridden. Interpreted files can be loaded on top of a compiled executable, possibly extending already-compiled structs. The extension capabilities, and the when inheritance mechanism, are discussed in the sections below; a foretaste is sketched here.
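        As that foretaste, the sketch below extends the packet struct from a separate module (possibly one loaded interpretively on top of a compiled executable), adding a field and constraints without touching the original source; the length field and its bounds are invented for illustration:

        extend packet {
            length: uint;
            keep kind == empty => length == 0;
            keep kind != empty => length in [1..64];
        };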
    • The aspect-oriented features of e
      • Motivation
      • Orthogonal extensions
      • Other uses of extension
        • Ease of debugging/analysis
        • Design exploration
        • Replacing callback registrations
      • Extending in-place
    • Extending extend
      • No pre-processor
      • The need for environmental acquisition
      • Using when inheritance
    • Conclusion
Contributed by Deepak et al.

To be completed..
