• DV
    • Verify all TLUL XBAR IP features by running dynamic simulations with an SV/UVM based testbench
    • Develop and run all tests based on the testplan below towards closing code and functional coverage on the IP and all of its sub-modules
  • FPV
    • Verify TileLink device protocol compliance with an SVA based testbench

Current status

Design features

For detailed information on TLUL design features, please see the TLUL design specification.

Testbench architecture

The XBAR testbench has been constructed based on the hw/dv/sv/dv_lib testbench library.

Block diagram


Top level testbench

The top level testbench is located at hw/ip/tlul/dv/tb/. It instantiates the XBAR DUT module from hw/top_earlgrey/ip/xbar/rtl/autogen/. In addition, it instantiates the following interfaces, connects them to the DUT and sets their handles into uvm_config_db:

Common DV utility components

The following utilities provide generic helper tasks and functions to perform activities that are common across the project:

Global types & methods

All common types and methods defined at the package level can be found in xbar_param. Some of those in use are:

// 3 hosts can access the same device, so the upper 2 bits are reserved. If
// all hosts send the maximum number of outstanding requests to this device,
// the device needs 2 extra bits of source ID to accommodate all the requests.
parameter int VALID_HOST_ID_WIDTH = 6;

TL agent

The XBAR env instantiates a tl_agent for each XBAR host and device, which provides the ability to drive and independently monitor random traffic via the TL host/device interfaces.

  • For each host, the 2 MSBs of the source ID are tied to 0 and the maximum number of outstanding requests is 64
  • For each device, the maximum number of outstanding requests is 64 * the number of hosts that can access it. Devices also support out-of-order responses
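
The host/device limits above could be applied through the agent configuration objects. The following is only an illustrative sketch: the configuration field names (max_outstanding_req, out_of_order_rsp, valid_a_source_width) and the num_accessible_hosts helper are assumptions, not the actual tl_agent API.

```systemverilog
// Hypothetical env configuration sketch; field names are illustrative.
foreach (host_agent_cfg[i]) begin
  host_agent_cfg[i].if_mode             = dv_utils_pkg::Host;
  host_agent_cfg[i].max_outstanding_req = 64;                  // per-host limit
  // upper 2 source ID bits tied to 0 -> valid IDs fit in VALID_HOST_ID_WIDTH bits
  host_agent_cfg[i].valid_a_source_width = VALID_HOST_ID_WIDTH;
end
foreach (device_agent_cfg[j]) begin
  device_agent_cfg[j].if_mode = dv_utils_pkg::Device;
  // a device may be reached by several hosts, each with up to 64 outstanding requests
  device_agent_cfg[j].max_outstanding_req = 64 * num_accessible_hosts[j];
  device_agent_cfg[j].out_of_order_rsp    = 1;                 // allow out-of-order responses
end
```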

Stimulus strategy

Test sequences

All test sequences reside in hw/ip/tlul/dv/env/seq_lib. The xbar_base_vseq virtual sequence is extended from dv_base_vseq and serves as a starting point. All test sequences are extended from xbar_base_vseq. It provides commonly used handles, variables, functions and tasks that the test sequences can simply use / call. Some of the most commonly used tasks / functions are as follows:

  • seq_init: Creates and configures the host and device sequences; extended classes can override this function to control the host/device sequences
  • run_all_device_seq_nonblocking: Creates a passive response sequence for each device
  • run_all_host_seq_in_parallel: Creates host sequences and runs them in parallel
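
A derived test sequence might use these tasks roughly as sketched below. This is illustrative only, assuming the task/function signatures shown; the real sequences may differ.

```systemverilog
// Hypothetical test sequence extending xbar_base_vseq (names assumed).
class xbar_random_vseq extends xbar_base_vseq;
  `uvm_object_utils(xbar_random_vseq)

  virtual function void seq_init();
    super.seq_init();
    // override host/device sequence knobs here if needed
  endfunction

  virtual task body();
    seq_init();
    // devices respond passively in the background
    run_all_device_seq_nonblocking();
    // all hosts drive random traffic in parallel; returns when all finish
    run_all_host_seq_in_parallel();
  endtask
endclass
```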

Functional coverage

To ensure high quality constrained random stimulus, it is necessary to develop a functional coverage model. The following covergroups have been developed to prove that the test intent has been adequately met:

  • common covergroup from tl_agent: Covers each host/device reaching its maximum number of outstanding requests
  • same_device_access_cg: Covers each device being accessed by all of its hosts at the same time
  • same_source_access_cg: Covers all hosts using the same source ID at the same time, and all IDs being used within the sequence
  • max_delay_cg: Covers zero, small and large delays being used on every host and device
  • outstanding_cg: Covers each host/device hitting its maximum number of outstanding requests
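
As a rough illustration, a delay covergroup such as max_delay_cg could be structured as below. The bin boundaries and the sampled variable are assumptions for the sketch, not the actual implementation.

```systemverilog
// Minimal sketch of a per-instance delay covergroup (bins assumed).
covergroup max_delay_cg (string name) with function sample (int unsigned delay);
  option.per_instance = 1;
  cp_delay: coverpoint delay {
    bins zero  = {0};          // zero-delay handshakes
    bins small = {[1:10]};     // small delays
    bins large = {[11:1000]};  // large delays
  }
endgroup
```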

Self-checking strategy


The xbar_scoreboard is primarily used for end-to-end checking. It extends from scoreboard_pkg::scoreboard, which supports multiple queues and in-order/out-of-order comparison. The scoreboard checks each transaction twice:

  • On the a_channel, a host initiates a transaction and the scoreboard checks that it is received by the right device
  • On the d_channel, a device initiates a response and the scoreboard checks that it is returned to the right host

When a device receives a transaction, we can't predict which host drove it, and the XBAR DUT may not forward transactions from host to device in order. Due to these limitations, the scoreboard is designed as follows:

  • For the a_channel, each device has a transaction queue. Transactions from hosts are monitored and stored in a device queue based on the item address. When a device receives a transaction, the scoreboard checks whether there is a matching item in its queue; items are allowed to appear out of order.
  • For the d_channel, the same structure is used to check items from devices to hosts.
  • If a transaction is unmapped, it won't be sent to any device; the host will receive an error response with d_error = 1. Each host has a queue used only for unmapped items. It stores the unmapped item from the a_channel, then compares it with the response with the same source ID received on the d_channel.
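
The out-of-order queue check described above can be sketched as follows. The queue structure, item type and compare call are assumptions for illustration; the real scoreboard_pkg::scoreboard implementation may differ.

```systemverilog
// Hypothetical out-of-order check: when a device-side a_channel item
// arrives, search that device's expected queue for a matching host item.
function void check_a_chan_item(tl_seq_item actual, int device_idx);
  foreach (a_chan_q[device_idx][i]) begin
    if (a_chan_q[device_idx][i].compare(actual)) begin
      a_chan_q[device_idx].delete(i);  // matched out of order; retire it
      return;
    end
  end
  `uvm_error(`gfn, "a_channel item seen at device has no matching host item")
endfunction
```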

The following analysis FIFOs are created to retrieve the data monitored by the corresponding interface agents:

  • a_chan_host/device_name, d_chan_host/device_name: These FIFOs provide transaction items at the end of the address channel and data channel respectively for each host/device

The following item queues are created to store items for checking:

  • a_chan_device_name: stores items from all hosts that are sent to this device
  • d_chan_device_name: stores items from this device that are returned to the hosts

Another limitation of the scoreboard is that it doesn't check the conversion of the source ID from host to device. We set the source of the expected item to 0 before putting it into the scoreboard queue, and override the source of the actual item to 0 before comparison.
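
The source ID normalization described above amounts to something like the sketch below; the item field names are assumptions based on the TL-UL channel signals.

```systemverilog
// Hypothetical helper: force source fields to 0 on both expected and
// actual items so the host->device source ID conversion is excluded
// from the comparison.
function void normalize_source(tl_seq_item item);
  item.a_source = '0;
  item.d_source = '0;
endfunction
```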


  • TLUL assertions: The testbench binds the tlul_assert assertions to the IP to ensure TileLink interface protocol compliance.
  • Unknown checks on DUT outputs: The RTL has assertions to ensure all outputs are initialized to known values after coming out of reset.
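
For reference, binding the protocol checker could look roughly like the sketch below. The parameter value, instance name and port connections are assumptions for illustration; the actual bind file should be consulted for the real hookup.

```systemverilog
// Illustrative bind of tlul_assert onto one device port of the XBAR
// (port and signal names assumed, not taken from the actual RTL).
bind xbar_main tlul_assert #(
  .EndpointType("Host")
) tlul_assert_device0 (
  .clk_i  (clk_i),
  .rst_ni (rst_ni),
  .h2d    (tl_device_o[0]),
  .d2h    (tl_device_i[0])
);
```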

Building and running tests

We are using our in-house developed regression tool for building and running our tests and regressions. Please take a look at the link for detailed information on the usage, capabilities, features and known issues. Here's how to run a basic sanity test:

$ cd hw/ip/xbar/dv
$ make TEST_NAME=xbar_sanity


Testplan

  • V1 xbar_sanity
      • Sequentially test each host to access any device
      • Tests: xbar_main_sanity
  • V2 xbar_base_random_sequence
      • Enable all hosts to randomly send transactions to any device
      • Tests: xbar_main_random
  • V2 xbar_random_delay
      • Control delays through plusargs to create tests for the following types of delay:
          • Zero delay for sending a/d_valid and a/d_ready
          • Large delay (0-1000 cycles)
          • Small delay (0-10 cycles) for the a_channel, large delay (0-1000 cycles) for the d_channel
      • Tests: xbar_main_sanity_zero_delays, xbar_main_sanity_large_delays, xbar_main_sanity_slow_rsp, xbar_main_random_zero_delays, xbar_main_random_large_delays, xbar_main_random_slow_rsp
  • V2 xbar_unmapped_address
      • Hosts randomly drive transactions with mapped and unmapped addresses
      • Ensure the DUT returns d_error = 1 if the address is unmapped and the transaction isn't passed down to any device
      • Tests: xbar_main_unmapped_addr, xbar_main_error_and_unmapped_addr
  • V2 xbar_error_cases
      • Drive random values on size, mask and opcode in both channels
      • Ensure everything passes through unchanged from host to device and from device to host
      • Tests: xbar_main_error_random, xbar_main_error_and_unmapped_addr
  • V2 xbar_all_access_same_device
      • Randomly pick a device and make all hosts access it
      • If the device isn't accessible to a host, let that host randomly access the other devices
      • Tests: xbar_main_access_same_device, xbar_main_access_same_device_slow_rsp
  • V2 xbar_all_hosts_use_same_source_id
      • Test all hosts using the same source ID at the same time
      • Tests: xbar_main_same_source
  • V2 xbar_stress_all
      • Combine all sequences and run them in parallel
      • Add a random reset between iterations
      • Tests: xbar_main_stress_all, xbar_main_stress_all_with_error
  • V2 xbar_stress_with_reset
      • Inject reset while stress_all is running; after the reset completes, kill the stress sequence and start a new one
      • Run a few iterations to ensure reset doesn't break the design
      • Tests: xbar_main_stress_all_with_rand_reset, xbar_main_stress_all_with_reset_error