This document explains the recommended checklist items to review when transitioning from one Development Stage to another, for the design (D), verification (V), and software device interface function (DIF, S) stages. It is expected that the items in each stage (D1, V1, S1, etc.) are completed.
For a transition from D0 to D1, the following items are expected to be completed.
The specification is 90% complete and all features are defined. The specification is checked into the repository as a markdown document. It is acceptable to make changes for further clarification or additional detail after the D1 stage.
The CSRs required to implement the primary programming model are defined. The Hjson file defining the CSRs is checked into the repository. It is acceptable to add or modify registers during the D2 stage in order to complete implementation.
Clock(s) and reset(s) are connected to all sub-modules.
The top-level `<ip>.sv` file exists and meets comportability requirements.
The unit is able to be instantiated and connected in top level RTL files. The design must compile and elaborate cleanly without errors. The unit must not break top level functionality such as propagating X through TL-UL interfaces, continuously asserting alerts or interrupts, or creating undesired TL-UL transactions.
All expected memories have been identified and representative macros instantiated. All other physical elements (analog components, pads, etc) are identified and represented with a behavioral model. It is acceptable to make changes to these physical macros after the D1 stage as long as they do not have a large impact on the expected resulting area (roughly “80% accurate”).
The mainline functional path is implemented to allow for a basic functionality test by verification. (“Feature complete” is the target for D2 status.)
All the outputs of the IP have `ASSERT_KNOWN` assertions.
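As an illustration, a minimal sketch of such output checks using the `prim_assert.sv` macros (the signal and assertion names here are hypothetical):

```systemverilog
`include "prim_assert.sv"

// Inside the IP's top-level module body: every output gets a known-value
// check so that X propagation out of the IP is caught in simulation.
`ASSERT_KNOWN(TlODValidKnown_A, tl_o.d_valid)
`ASSERT_KNOWN(AlertTxKnown_A,   alert_tx_o)
`ASSERT_KNOWN(IntrDoneKnown_A,  intr_done_o)
```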
A lint flow is setup which compiles and runs. It is acceptable to have lint warnings at this stage.
For a transition from D1 to D2, the following items are expected to be completed.
Any new features added since D1 are documented and reviewed with DV/SW/FPGA. The GitHub Issue, Pull Request, or RFC where the feature was discussed should be linked.
Block diagrams have been updated to reflect the current design.
All IP block interfaces that are not autogenerated are documented.
Any missing functionality is documented.
Feature requests for this IP version are frozen at this time.
All features specified are implemented.
An area check has been completed either on FPGA or using Synopsys Design Compiler.
All ports are implemented and their specification is frozen.
All architectural state (RAMs, CSRs, etc) is implemented and the specification frozen.
All TODOs have been reviewed and signed off.
The IP block conforms to the style guide regarding X usage.
The lint flow passes cleanly. Any lint waiver files have been reviewed.
A CDC checking run has been set up (if tooling is available). The CDC checking run shows no must-fix errors, and a waiver file has been created.
The IP block is synthesized as part of the continuous integration checking and meets timing there.
All CDC synchronization flops use behavioral synchronization macros (e.g. `prim_flop_2sync`), not manually created flops.
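For example, a single-bit signal crossing into a destination clock domain might be synchronized as follows (a sketch; the signal names are hypothetical):

```systemverilog
// Two-stage synchronizer from the prim library.
prim_flop_2sync #(
  .Width(1)
) u_req_sync (
  .clk_i (clk_dst_i),   // destination clock domain
  .rst_ni(rst_dst_ni),
  .d_i   (req_src),     // signal launched from the source domain
  .q_o   (req_dst)      // synchronized signal, safe to consume
);
```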
Any appropriate security countermeasures have been documented and reviewed. The implementation of security countermeasures can be delayed until D3 if:
- The addition of those countermeasures would not negatively impact PPA by more than 10%, and
- The software interface would not be materially affected by those countermeasures
Where the area impact of countermeasures can be reliably estimated, it is acceptable to insert dummy logic at D2 in order to meet the above criteria.
Compile-time random netlist constants (such as LFSR seeds or scrambling constants) are exposed to topgen via the `randtype` parameter mechanism in the comportable IP Hjson file. Default random seeds and permutations for LFSRs can be generated with the `gen-lfsr-seed.py` script.
See also the related GitHub issue #2229.
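On the RTL side, such a constant typically surfaces as a module parameter that topgen overrides with a per-build random value. A minimal sketch (module and parameter names are hypothetical):

```systemverilog
module my_ip #(
  // Overridden by topgen with a value generated via the randtype
  // mechanism; the default here is only a placeholder.
  parameter logic [63:0] RndCnstLfsrSeed = 64'h1234_5678_9ABC_DEF0
) (
  input logic clk_i,
  input logic rst_ni
);
  // The internal LFSR would be seeded from RndCnstLfsrSeed.
endmodule
```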
For a transition from D2 to D3, the following items are expected to be completed.
Any approved new features since D2 have been documented and reviewed with DV/SW/FPGA.
All TODOs are resolved.
The lint checking flow is clean. Any lint waiver files have been reviewed and signed off by the technical steering committee.
The CDC checking flow is clean. CDC waiver files have been reviewed and signed off by the technical steering committee.
A simple design review has been conducted by an independent designer.
Any deleted flops have been reviewed and signed off.
Any design changes which affect CSRs have been reviewed by the software team.
Any fatal error mechanisms have been reviewed by the software team.
Any other software-visible design changes have been reviewed by the software team.
All known “Won’t Fix” bugs and “Errata” have been reviewed by the software team.
Any appropriate security countermeasures are implemented. For redundantly encoded FSMs, the `sparse-fsm-encode.py` script must be used to generate the encoding.
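For illustration, a sparsely encoded state enum might look like the sketch below; a real encoding must come from `sparse-fsm-encode.py`, which guarantees a minimum Hamming distance between state values (the values here are made up):

```systemverilog
// Hypothetical sparse encoding: the wide, non-onehot values make it
// harder for a single fault to flip one valid state into another.
typedef enum logic [5:0] {
  StIdle  = 6'b100010,
  StBusy  = 6'b010101,
  StDone  = 6'b001001,
  StError = 6'b111100
} state_e;
```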
A review of sensitive security-critical storage flops was completed. Where appropriate, non-reset flops are used to store secure material.
Shadow registers are implemented for all appropriate storage of critical control functions.
To transition from V0 to V1, the following items are expected to be completed. The prefix “SIM” is applicable for simulation-based DV approaches, whereas the prefix “FPV” is applicable for formal property-based verification approaches.
A DV document has been drafted, indicating the overall DV goals, strategy, the testbench environment details with diagram(s) depicting the flow of data, UVCs, checkers, scoreboard, interfaces, assertions and the rationale for the chosen functional coverage plan. Details may be missing since most of these items are not expected to be fully understood at this stage.
A testplan has been written (in Hjson format) indicating:
- Testpoints (a list of planned tests), each mapping to a design feature, with a description highlighting the goal of the test and optionally, the stimulus and the checking procedure.
- The functional coverage plan captured as a list of covergroups, with a description highlighting which feature is expected to be covered by each covergroup. It may optionally contain additional details, such as coverpoints and crosses covering individual aspects of the feature.
A top level testbench has been created with the DUT instantiated. The following interfaces are connected (as applicable): TileLink, clocks and resets, interrupts and major DUT interfaces. Some minor interfaces may not be connected at this point. Inputs for which interfaces have not yet been created are tied off to 0.
All available interface assertion monitors are connected (example: tlul_assert).
A UVM environment has been created with major interface agents and UVCs connected and instantiated. TLM port connections have been made from UVC monitors to the scoreboard.
A RAL model is generated using regtool and instantiated in the UVM environment.
A CSR check is generated using regtool and bound in the TB environment.
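The generated checker is typically attached with a `bind` so the DUT itself is untouched. A sketch, assuming the generated checker is a module named `<ip>_csr_assert_fpv` (the names below are hypothetical):

```systemverilog
// Bind the regtool-generated CSR assertion checker into the DUT; the
// wildcard connection assumes the checker's ports match DUT signal names.
bind my_ip my_ip_csr_assert_fpv u_csr_assert_fpv (.*);
```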
Full testbench automation has been completed if applicable. This may be required for verifying multiple flavors of parameterized DUT designs.
A smoke test exercising the basic functionality of the main DUT datapath is passing. The functionality to test (and to what level) may be driven by higher level (e.g. chip) integration requirements. These requirements are captured when the testplan is reviewed by the key stakeholders, and the test(s) updated as necessary.
CSR test suites have been added for ALL interfaces (including, but not limited to, the DUT's SW device access port, JTAG access port, etc.) that have access to the system memory map:
- HW reset test (test all resets)
- CSR read/write
- Bit Bash
Memory test suites have been added for ALL interfaces that have access to the system memory map if the DUT has memories:
- Mem walk
All these tests should verify back-to-back accesses with zero delays, along with partial reads and partial writes.
Each input and each output of the module is part of at least one assertion. Assertions for the main functional path are implemented and proven.
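For example, a request/acknowledge output might be covered by one forward and one backward property, sketched here with the `prim_assert.sv` macros (the names, bounds, and the assumption that `req_i` stays high until `ack_o` are all hypothetical):

```systemverilog
`include "prim_assert.sv"

// Forward: every request is acknowledged within a bounded window.
`ASSERT(ReqAckFwd_A, req_i |-> ##[1:4] ack_o)
// Backward: an acknowledge never appears without a pending request
// (valid only because req_i is held until ack_o).
`ASSERT(AckHasReqBwd_A, ack_o |-> $past(req_i))
```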
The smoke regression passes cleanly (with no warnings) with one additional tool apart from the primary tool selected for signoff.
A small suite of tests has been identified as the smoke regression suite and is run regularly to check code health. If the testbench has more than one build configuration, then each configuration has at least one test added to the smoke regression suite.
A nightly regression for running all constrained-random tests with multiple random seeds (iterations) has been set up. Directed, non-random tests need not be run with multiple iterations. The number of iterations depends on the coverage, the mean time between failures, and the available compute resources. For starters, it is recommended to set the number of iterations to 100 for each test. It may be trimmed down once the test has stabilized and the same level of coverage is achieved with fewer iterations. The nightly regression should finish overnight so that the results are available the next morning for triage.
An FPV regression has been set up.
A structural coverage collection model has been checked in.
This is a simulator-specific file (i.e. proprietary format) that captures which hierarchies and what types of coverage are collected.
For example, pre-verified sub-modules (including some `prim` components pre-verified thoroughly with FPV) can be black-boxed - it is sufficient to only enable the IO toggle coverage of their ports.
A functional coverage shell object has been created - this may not contain coverpoints or covergroups yet, but it is primed for development post-V1 (a minimal shell is sketched after this list).
- For a constrained random testbench, an entry has been added for the shell object.
- For an FPV testbench, an entry has been added for the shell object.
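A minimal shell might look like the following sketch (the class and covergroup names are hypothetical; the base class would normally come from the DV library in use):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_ip_env_cov extends uvm_component;
  `uvm_component_utils(my_ip_env_cov)

  // Shell covergroup: coverpoints are filled in post-V1 as the
  // functional coverage plan is implemented.
  covergroup fifo_level_cg;
    option.per_instance = 1;
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    fifo_level_cg = new();
  endfunction
endclass
```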
Sub-modules that are pre-verified with their own testbenches have already reached V1 or a higher stage.
The design / micro-architecture specification has been reviewed and signed off. If a product requirements document (PRD) exists, then ensure that the design specification meets the product requirements.
The draft DV document (proposed testbench architecture) and the complete testplan have been reviewed with key stakeholders (as applicable):
- DUT designer(s)
- 1-2 peer DV engineers
- Software engineer (DIF developer)
- Chip architect / design lead
- Chip DV lead
- Security architect
The following categories of post-V1 tests have been focused on during the testplan review (as applicable):
- Security / leakage
- Error scenarios
The V2 checklist has been reviewed to understand the scope and estimate effort.
To transition from V1 to V2, the following items are expected to be completed. The prefix “SIM” is applicable for simulation-based DV approaches, whereas the prefix “FPV” is applicable for formal property-based verification approaches.
It is possible for the design to have undergone some changes since the DV document and testplan were reviewed in the V0 stage. All design deltas have been captured adequately and appropriately in the DV document and the testplan.
The DV document is fully complete.
The functional coverage plan is fully implemented. All covergroups have been created and sampled in the reactive components of the testbench (passive interfaces, monitors, and scoreboards).
For simulations, all interfaces are connected to all sidebands and exercised. For an FPV testbench, assertions have been added for all interfaces including sidebands.
All planned assertions have been written and enabled.
A UVM environment has been fully developed with end-to-end checks in the scoreboard enabled.
All tests in the testplan have been written and are passing with at least one random seed.
All assertions are implemented and are 90% proven. Each output of the module has at least one forward and one backward assertion check. The FPV proof run converges within reasonable runtime.
All assumptions have been implemented and reviewed.
Chip-level tests exist to verify example firmware code (DIFs) in simulation.
A nightly regression with multiple random seeds is 90% passing.
Line, toggle, fsm (state & transition), branch and assertion code coverage has reached 90%.
Functional coverage has reached 70%.
Branch, statement and functional code coverage for FPV testbenches has reached 90%.
COI coverage for FPV testbenches has reached 75%.
The lint checking flow for the testbench passes cleanly. Any waiver files have been reviewed.
Sub-modules that are pre-verified with their own testbenches have already reached V2 or a higher stage.
Security countermeasures are planned and documented.
- Common countermeasure features (such as shadowed registers, hardened counters, etc.) can be tested by importing the common sec_cm testplans and tests, and by adding the bind file (see the sketch after this list).
- Additional checks and sequences may be needed to verify those features. Document those in the individual testplan.
- Create testplan for non-common countermeasures.
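As a sketch, the bind file mentioned above might attach a countermeasure checker interface to the DUT as follows (the interface and instance names are hypothetical, not the actual common sec_cm collateral):

```systemverilog
// Attach a security countermeasure checker without modifying the DUT.
bind my_ip my_ip_sec_cm_if u_sec_cm_if (
  .clk_i (clk_i),
  .rst_ni(rst_ni)
);
```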
All high priority (tagged P0 and P1) design bugs have been addressed and closed. If the bugs were found elsewhere, ensure that they are reproduced deterministically in DV (through additional tests or by tweaking existing tests as needed) and have the design fixes adequately verified.
All low priority (tagged P2 and P3) design bugs have been root-caused. They may be deferred to post V2 for closure.
The DV document and testplan are complete and have been reviewed by key stakeholders (as applicable):
- DUT designer(s)
- 1-2 peer DV engineers
- Chip architect / design lead
- Chip DV lead
- Security architect
This review will focus on the design deltas captured in the testplan since the last review. In addition, the fully implemented functional coverage plan, the observed coverage, and the coverage exclusions are expected to be scrutinized to ensure there are no verification holes or gaps in achieving the required stimulus quality before work towards V3 can commence.
The V3 checklist has been reviewed to understand the scope and estimate effort.
Security countermeasures are verified.
To transition from V2 to V3, the following items are expected to be completed. The prefix “SIM” is applicable for simulation-based DV approaches, whereas the prefix “FPV” is applicable for formal property-based verification approaches.
Although rare, it is possible for the design to have undergone some last-minute changes since V2. All additional design deltas have been captured adequately and appropriately in the DV document and the testplan.
X Propagation analysis has been completed.
All assertions are implemented and 100% proven. There are no undetermined or unreachable properties.
A nightly regression with multiple random seeds is 100% passing (with 1 week minimum soak time).
Line, toggle, fsm (state & transition), branch and assertion code coverage has reached 100%.
Functional coverage has reached 100%.
Branch, statement and functional code coverage for FPV testbenches has reached 100%.
COI coverage for FPV testbenches has reached 100%.
There are no remaining TODO items anywhere in the testbench code, including common components and UVCs.
There are no compile-time or run-time warnings thrown by the simulator.
The lint flow for the testbench is clean. Any lint waiver files have been reviewed and signed off by the technical steering committee.
Sub-modules that are pre-verified with their own testbenches have already reached the V3 stage.
All design and testbench bugs have been addressed and closed.
For a transition from S0 to S1, the following items are expected to be completed.
`dif_<ip>.h` and, optionally, `dif_<ip>.c` exist in `sw/device/lib/dif`.
All existing non-production code in the tree which uses the device does so via the DIF or a production driver.
Software unit tests exist for the DIF.
Smoke tests exist for the DIF.
This should perform a basic test of the main datapath of the hardware module by the embedded core, via the DIF, and should be able to be run on all OpenTitan platforms (including FPGA, simulation, and DV). This test will be shared with DV.
Smoke tests are for diagnosing major issues in both software and hardware, and with this in mind, they should execute quickly.
Initially we expect this kind of test to be written by hardware designers for debugging issues during module development.
This happens long before a DIF is implemented, so there are no requirements on how these should work, though we suggest they are placed in `sw/device/tests/<ip>/<ip>.c`, as this has been the convention until now. Later, when a DIF is written, the DIF author is responsible for updating this test to use the DIF, and for moving it to live alongside the other DIF smoke tests.
For a transition from S1 to S2, the following items are expected to be completed.
The DIF has functions to cover all specified hardware functionality.
The DIF’s usage of its respective IP device has been reviewed by the device’s hardware designer.
The DIF’s respective device IP is at least stage D2.
The DIF uses automatically generated HW parameters and register definitions.
The HW IP Programmer’s guide references specific DIF APIs that can be used for operations.
The DIF follows the DIF-specific guidelines in `sw/device/lib/dif/README.md` and the OpenTitan C style guidelines.
Chip-level DV testing for the IP using DIFs has been started.
For a transition from S2 to S3, the following items are expected to be completed.
The DIF’s respective device IP is at least stage D3.
The DIF’s respective device IP is at least stage V3.
The C interface and its implementation have been fully re-reviewed, with a view to the interface not changing in future.
Unit tests exist to cover (at least):
- Device Initialization
- All Device FIFOs (including when empty, full, and adding data)
- All Device Registers
- All DIF Functions
- All DIF return codes
All DIF TODOs are complete.