PULP-Platform Simulation Verification¶
Before discussing the verification strategy of the CV32E and CVA6, we need to consider the starting point provided to OpenHW by the RI5CY (CV32E) and Ariane (CVA6) cores from PULP-Platform. It is also informative to consider the on-going Ibex project, another open-source RISC-V core derived from the PULP-Platform ‘zero-riscy’ core.
For those without the need or interest to delve into the history of these projects, the Executive Summary below provides a (very) quick summary. Those wanting more background should read the RI5CY and Ariane sub-sections of this chapter, which review the status of the RI5CY and Ariane testbenches in sufficient detail to provide the necessary context for the CV32E40P Simulation Testbench and Environment and CVA6 Simulation Testbench and Environment chapters. Those chapters detail how the RI5CY and Ariane simulation environments will be migrated to the CV32E and CVA6 simulation environments.
In the case of the CV32E, we have an existing testbench developed for RI5CY. This testbench is useful, but insufficient to execute a complete, industrial grade pre-silicon verification and achieve the goal of ‘production ready’ RTL. Therefore, a two-pronged approach will be followed whereby the existing RI5CY testbench will be updated to create a CV32E40P “core” testbench. New testcases will be developed for this core testbench in parallel with the development of a single UVM environment capable of supporting the existing RI5CY testcases and fully verifying the CV32E cores. The UVM environment will be loosely based on the verification environment developed for the Ibex core and will also be able to run hand-coded code-segments (programs) such as those developed by the RISC-V Compliance Task Group.
In the case of CVA6, the existing verification environment developed for Ariane is not yet mature enough for OpenHW to use. The recommendation here is to build a UVM environment from scratch for the CVA6. This environment will re-use many of the components developed for the CV32E verification environment, and will have the same ability to run the RISC-V Compliance test-suite.
The following is a discussion of the verification environment, testbench and testcases developed for RI5CY.
The verification environment (testbench) for RI5CY is shown in Illustration 1. It is coded entirely in SystemVerilog. The core is instantiated in a wrapper that connects it to a memory model. A set of assertions embedded in the RTL catch things like out-of-range vectors and unknown values on control data. The testbench memory model supports I and D address spaces plus a memory-mapped address space for a set of virtual peripherals. The most useful of these is a virtual printer that provides something akin to a “hardware printf” capability: when the core writes ASCII data to a specific memory location, that data is written to stdout. In this way, programs running on the core can write human-readable messages to terminals and logfiles. Other virtual peripherals include external interrupt generators, a ‘perturbation’ capability that injects random (legal) cycle delays on the memory bus, and test-completion flags for the testbench.
Testcases are written as C and/or RISC-V assembly-language programs which are compiled and linked using a light SDK developed to support these tests. The SDK is often referred to as the “toolchain”. These testcases are all self-checking; that is, the pass/fail determination is made by the testcase itself, as the testbench lacks any real intelligence to find errors. The goal of each testcase is to demonstrate correct functionality of a specific instruction in the ISA. There are no specific testcases targeting features of the core’s micro-architecture.
A typical testcase is written using a set of macros similar to TEST_IMM_OP, as shown below:
```
# instruction under test: addi
#           result      op1         op2
TEST_IMM_OP(addi, 0x0000000a, 0x00000003, 0x007);
```
This macro expands to:
```
li   x1,  0x00000003;   # x1  = 0x3
addi x14, x1, 0x007;    # x14 = x1 + 0x7
li   x29, 0x0000000a;   # x29 = 0xA
bne  x14, x29, fail;    # if ([x14] != [x29]) fail
```
Note that the GPRs used by a given macro are fixed. That is, the TEST_IMM_OP macro will always use x1, x14 and x29: x1 holds the source operand, x14 is the destination, and x29 holds the expected result.
The testcases are broadly divided into two categories: riscv_tests and riscv_compliance_tests. In the RI5CY repository these were located in tb/core/riscv_tests and tb/core/riscv_compliance_tests respectively. By cloning the core-v-verif repository, these original RI5CY tests can be found at $PROJ_ROOT/cv32/tests/core/riscv_tests and $PROJ_ROOT/cv32/tests/core/riscv_compliance_tests. Updated versions of these tests for CV32 are located at $PROJ_ROOT/cv32/tests/core/cv32_riscv_tests and $PROJ_ROOT/cv32/tests/core/cv32_riscv_compliance_tests.
The riscv_tests directory has sub-directories for many of the instruction types supported by RISC-V cores. According to the README, only the testcases for integer instructions, compressed instructions and multiply/divide instructions are in active development. It is not clear how much coverage the PULP-defined ISA extensions have received.
Each of the sub-directories contains one or more assembly source programs to exercise a given instruction. For example, the code segments above were drawn from addi.S, a program that exercises the add-immediate instruction. That testcase exercises the addi instruction with a set of 24 calls to TEST_* macros like the one shown above.
There are 217 such tests in the repository. Of these, the integer, compressed and multiply/divide instruction tests total 65 unique tests.
RISC-V Compliance Tests¶
There are 56 assembly language tests in the **riscv_compliance_tests** directory. These appear to be a clone of a past version of the RISC-V compliance test-suite.
There is a small set of C programs in the firmware directory. The ability to compile small stand-alone programs in C and run them on an RTL model of the core is a valuable demonstration capability, and will be supported by the CORE-V verification environments. These tests will not be used for actual RTL verification, however, as it is difficult to attribute specific goals such as feature, functional or code coverage to such tests.
The verification environment for Ariane is shown in Illustration 2. It is coded entirely in SystemVerilog, using more modern syntax than the RI5CY environment. As such, open-source simulators such as Icarus Verilog and Verilator, which support only a subset of SystemVerilog, cannot be used with this core.
The Ariane testbench is much more complex than the RI5CY testbench. It appears that the Ariane project targets an FPGA implementation with several open and closed source peripherals and the testbench supports a verification environment that can be used to exercise the FPGA implementation, including peripherals as well as the Ariane core itself.
A quick review of the Ariane development tree in GitHub shows that there are no testcases for the Ariane core. In response to a query to Davide Schiavone, the following information was provided by Florian Zaruba, the current maintainer of Ariane:
There are no specific testcases for Ariane. The Ariane environment runs cloned versions of the official RISC-V test-suite in simulation. In addition, Ariane boots Linux on an FPGA prototype, including in a multi-core configuration.
So, the (very) good news is that the Ariane core has been subjected to basic verification and extensive exercising in the FPGA prototype. The not-so-good news is that CVA6 lacks a good starting point for its verification efforts.
Strictly speaking, the Ibex is not a PULP-Platform project. According to the README.md at the Ibex GitHub page, this core was initially developed as part of the PULP platform under the name “Zero-riscy”, and was contributed to lowRISC who now maintains and develops it. As of this writing, Ibex is under active development, with on-going code cleanups, feature additions, and verification planned for the future. From a verification perspective, the Ibex core is the most mature of the three cores discussed in this section.
Ibex is not a member of the CORE-V family of cores, and as such the OpenHW Group is not planning to verify this core on its own. However, the structure and implementation of the Ibex verification environment are the closest of the three to the UVM constrained-random, coverage-driven environment envisioned for CV32E and CVA6.
The documentation associated with the Ibex core, including that of its verification environment, is likewise the most mature of the three cores discussed, so that material is not repeated here.
IBEX Impact on CV32E and CVA6 Verification¶
Illustration 3 is a schematic of the Ibex UVM verification environment. The flow of the Ibex environment is very close to what you’d expect to see in a UVM environment: constraints define the instructions in the generated program which is fed to both the device-under-test (Ibex core RTL model) and an ISS reference model. The resultant output of the RTL and ISS are compared to produce a pass/fail result. Functional coverage (not shown in the Illustration) is applied to measure whether or not the verification goals have been achieved.
As shown in the Illustration, the Ibex verification environment is a set of five distinct processes that are stitched together by scripts to produce the flow described above:
- An SV/UVM simulation of the Instruction Set Generator. This produces a RISC-V assembly program in source format. The program is produced according to a set of input constraints.
- A compiler that translates the source into an ELF and then to a binary memory image that can be executed directly by the Core and/or ISS.
- An ISS simulation.
- A second SV/UVM simulation, this time of the core itself.
- Once the ISS and RTL complete their simulations, a comparison script is run to check for differences.
This is an excellent starting point for the CV32E verification environment, and our first step shall be to clone the Ibex environment and get it running against the CV32E. Immediately following, an effort will be undertaken to integrate the existing generator, compiler, ISS and RTL into a single UVM verification environment. It is known that the compiler and ISS are coded in C/C++, so these components will be integrated using the SystemVerilog DPI. A new scoreboarding component to compare results from the ISS and RTL models will be required. It is expected that the uvm_scoreboard base class from the UVM library will be sufficient to meet the requirements of the CV32E and CVA6 environments with little or no extension.
Refactoring the existing Ibex environment into a single UVM environment as above has many benefits:
- Run-time efficiency. Testcases running in the existing Ibex environment must run to completion, regardless of the pass/fail outcome and regardless of when an error occurs. In an integrated UVM environment, a typical simulation terminates after only a few errors (maybe only one), because once the environment has detected a failure it does not need to keep running. This matters for large regressions with many long tests, and for develop/debug cycles; in both cases the existing flow wastes simulation time on a run that has already failed.
- Easier to debug failing simulations:
- Informational and error messages can be added in-place and will react at the time an event or error occurs in the simulation.
- Simulations can be configured to terminate immediately after an error.
- Easier to maintain.
- Integrated testcases with single-point-of-control for all aspects of the simulation.
- Ability to add functional coverage to any point of the simulation, not just instruction generation.
- Ability to add checks/scoreboarding to any point of the RTL, not just the trace output.
Notes¶
1. These assertions are embedded directly in the RTL source code. That is, they are not bound into the RTL from the TB using cross-module references. There does not appear to be an automated mechanism that causes a testcase or regression to fail if one or more of these assertions fire.
2. Derived from the PULP platform SDK.
3. The macro and assembly code shown is for illustrative purposes. The actual macros and testcases are slightly more complex and support debug aids not shown here.
4. $PROJ_ROOT/cv32/tests/core/riscv_tests/rv64ui/addi.S in your local copy of the core-v-verif repository.
5. Anyone with access to GitHub will be able to see the coverage results of CORE-V regressions.
6. This does not change the recommendation made earlier in this document to continue developing new testcases on the existing RI5CY testbench in parallel.