
Friday, December 31, 2010

Constrained random verification [CRV] - How to design and execute

Targeted at readers who want to understand the bigger picture and who would typically lead a verification project that intends to use CRV. Covers how to go about designing the CRV TB and how to execute and track it.

Introduction: What is verification?

From a verification point of view, a DUT can be treated as a transfer function. If the DUT has configurable parameters, they can be thought of as variables that go into making up the transfer function. The whole DUT operation can then be thought of as a system that translates inputs to outputs based on that transfer function.

Applying this to a real scenario: on power up we configure the parameters of the DUT, and thereafter provide different types of inputs to the DUT. Using the transfer function we predict the output and compare it with the actual output produced by the DUT.
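
To make this concrete, here is a tiny SystemVerilog sketch of the predict-and-compare idea. It is only an illustration; the mode field and the behavior of predict() are invented and do not correspond to any real DUT.

```systemverilog
// Hedged sketch: a toy "transfer function" that predicts the DUT output
// from its configuration and input. Fields and behavior are invented.
function automatic bit [7:0] predict(bit [1:0] mode, bit [7:0] data_in);
  case (mode)
    2'b00:   return data_in;           // pass-through
    2'b01:   return ~data_in;          // invert
    default: return data_in ^ 8'hA5;   // scramble
  endcase
endfunction

// Checker usage: compare the prediction against the observed DUT output.
// if (predict(cfg_mode, in_data) !== dut_out) $error("Output mismatch");
```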

Constrained random verification environment: System Spec re-orientation

No, I am not going to sell the CRV methodology in this article. I am going to talk about how you go about building one.

Unfortunately, most DUTs that get verified cannot be captured as a precise mathematical transfer function. Where they can be, even partially, then to that extent they can be verified using formal techniques.

Where it is not possible to capture the transfer function, we rely on the constrained random approach.

In constrained random verification, the following steps are to be followed (a small sketch follows the list):

1. Identify the variables in the configurable parameters and their allowed values.

2. Identify the variables in the input and the possible values these variables can take.

3. Based on #1 and #2, define how the output can be predicted.
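
As a rough sketch of how groups #1 and #2 might be captured in a SystemVerilog class-based TB (the field names, ranges and constraints below are invented purely for illustration):

```systemverilog
// Group #1: configurable parameters and their allowed values.
class dut_config;
  rand int unsigned num_channels;
  rand bit [1:0]    mode;
  constraint legal_cfg {
    num_channels inside {[1:4]};  // allowed values per the spec
    mode != 2'b11;                // reserved encoding, never programmed
  }
endclass

// Group #2: input variables and the values they can take.
class input_txn;
  rand bit [7:0] payload[];
  rand bit       is_last;
  constraint legal_len { payload.size() inside {[1:64]}; }
endclass

// Group #3: the expected output is derived from #1 and #2,
// typically inside a reference model feeding the scoreboard.
```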

Any DUT specification can come in the form of standard specifications and an internal architecture specification.

In order to build the TB micro-architecture for CRV, capture information from the system specification in the three groups noted above.

The DUT might be processing a single stream of inputs or multiple streams. Multiple streams may be due to genuinely multiple physical interfaces, or logical support for multiple data streams on a single physical interface.

Constrained random verification [CRV] environment: TB architecture

Once you have these details, it is time to get into the TB architecture.

Because of several pre-defined methodologies, it is very easy to get a grocery list of components required in the TB. Don't fall into the trap of just creating a quick grocery list and then building the schedules and todo lists from that grocery list alone. This might get you started but would be absolutely useless in real execution. You would end up incorrectly estimating timelines and resources.

You need to have a good understanding of the system you are going to verify. The methodology-driven component list is just guidance. Now, in the light of CRV and the system specification, you can go and build a similar todo list. It will be more accurate, reflecting the list that is actually relevant for this project.

All the system configuration parameters should be captured in a global config. This config should typically hold both the system parameters and the state of the system. Plan on passing this config object to all the components of the TB, since they need to work based on it.
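
A minimal sketch of that pattern, with hypothetical component names; the point is simply that the config is constructed once and the same handle is passed to every component:

```systemverilog
// Hypothetical global config shared by every TB component.
class global_config;
  // Static configuration programmed at power up.
  rand int unsigned num_channels;
  rand bit [1:0]    mode;
  // Dynamic state of the system, updated as the test runs.
  bit               link_up;
  int unsigned      pkts_in_flight;
endclass

class generator;
  global_config cfg;
  function new(global_config cfg); this.cfg = cfg; endfunction
endclass

class monitor;
  global_config cfg;
  function new(global_config cfg); this.cfg = cfg; endfunction
endclass

// Environment: build the config once and hand the same handle to everyone.
class env;
  global_config cfg;
  generator     gen;
  monitor       mon;
  function new();
    cfg = new();
    gen = new(cfg);
    mon = new(cfg);
  endfunction
endclass
```

Because all components hold the same handle, a state update made by one component (say the monitor setting link_up) is immediately visible to the rest.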

If there are multiple physical interfaces, a generator for each would be a good choice. A single physical interface carrying multiple logical streams can still be served by a single generator.

The level of abstraction at which the data transaction is created decides the number of transactors below it needed to process it. Don't make a single transactor overly complicated; if the abstraction is higher, plan for a layered transactor approach.
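
A sketch of the layered idea, assuming a hypothetical high-level "frame" that a layering transactor segments into bus-sized beats for the lower-level driver; the names and widths are illustrative only:

```systemverilog
// High-level item generated at the abstraction the test cares about.
class frame;
  rand bit [7:0] payload[];
  constraint len_c { payload.size() inside {[1:256]}; }
endclass

// Lower-level item consumed by the BFM/driver.
class bus_beat;
  bit [31:0] data;
  bit        last;
endclass

// Layering transactor: breaks one frame into many bus beats.
class frame_to_beats;
  function void convert(frame f, ref bus_beat beats[$]);
    int n = (f.payload.size() + 3) / 4;
    for (int i = 0; i < n; i++) begin
      bus_beat b = new();
      for (int j = 0; j < 4; j++)
        if (4*i + j < f.payload.size())
          b.data[8*j +: 8] = f.payload[4*i + j];
      b.last = (i == n - 1);
      beats.push_back(b);
    end
  endfunction
endclass
```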

I am not going to talk much about the bus functional models for the interfaces. That is a different game.

Based on the information in group #3, plan for a scoreboard. This is a vital component of the CRV system. Don't make the mistake of dumping all the checks into the scoreboard. Limit it to data integrity checks; the rest of the checks can be put in other transactors where they make more sense.
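
A minimal scoreboard sketch along those lines, limited to data integrity checking. The item type is hypothetical and an in-order DUT is assumed:

```systemverilog
// Hypothetical output item; in a real TB this would match the monitor's item.
class out_item;
  bit [7:0] data;
endclass

class scoreboard;
  out_item expected_q[$];  // predictions pushed by the reference model

  function void write_expected(out_item t);
    expected_q.push_back(t);
  endfunction

  // Called by the output monitor; assumes the DUT preserves ordering.
  function void write_actual(out_item t);
    out_item exp;
    if (expected_q.size() == 0) begin
      $error("Scoreboard: unexpected item from DUT");
      return;
    end
    exp = expected_q.pop_front();
    if (exp.data !== t.data)
      $error("Data integrity mismatch: expected %0h, got %0h", exp.data, t.data);
  endfunction
endclass
```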

Design the end of test very carefully. This is equally important.
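
One common shape for this, sketched here against the scoreboard above purely as an illustration (the names, the drain mechanism and the assumption that stimulus generation has already completed are mine, not a prescription):

```systemverilog
// Hedged sketch: end the test only after stimulus generation has finished,
// the scoreboard has no pending predictions, and a drain period has elapsed.
// 'clk' is assumed to be the TB clock visible where this task lives.
task automatic wait_for_end_of_test(scoreboard sb, int unsigned drain_cycles = 100);
  wait (sb.expected_q.size() == 0);      // everything predicted has been matched
  repeat (drain_cycles) @(posedge clk);  // allow late DUT output to surface
  if (sb.expected_q.size() != 0)
    $error("New activity observed during the drain period");
endtask
```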

Constrained random verification : Functional Coverage myth

Well, we have the test bench and now we are starting to execute. What do you need to take it to closure?

One more mistake that is typically made is overloading functional coverage with everything.

Verification should drive coverage for the most part, rather than being driven by it. In order for this to happen effectively, you need three sheets to track the overall constrained random methodology based verification project.

1. Infrastructure tracker: Based on the TB architecture, capture the infrastructure todo items. This can capture the TB type (if there are multiple TBs), broad feature group (transactions, interrupts, configuration etc.), general component (generators, data, xactors etc.), actual name of the component, and one line about the feature (the more granular the better). Add owners and status.

2. Prioritized feature list: The order in which different features are to be enabled, whether support exists in the TB, whether each feature is enabled in verification, and whether its coverage is enabled.

3. Coverage tracker: The various functional coverage items. Coverage should again be grouped feature-wise (see the sketch after this list).
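
As an example of feature-grouped functional coverage, here is a small SystemVerilog sketch; the features, fields and bins are invented for illustration:

```systemverilog
// Feature group: configuration coverage (sampled after the DUT is programmed).
covergroup cfg_cov with function sample(bit [1:0] mode, int unsigned num_channels);
  mode_cp     : coverpoint mode { bins legal[] = {[0:2]}; }
  chan_cp     : coverpoint num_channels { bins ch[] = {[1:4]}; }
  mode_x_chan : cross mode_cp, chan_cp;
endgroup

// Feature group: transaction coverage (sampled per generated input).
covergroup txn_cov with function sample(int unsigned len);
  len_cp : coverpoint len {
    bins short_pkt  = {[1:8]};
    bins medium_pkt = {[9:32]};
    bins long_pkt   = {[33:64]};
  }
endgroup

// Usage: cfg_cov cg = new(); ... cg.sample(cfg.mode, cfg.num_channels);
```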

Every week, based on the prioritized feature list [#2], decide the list of features to be enabled. Based on that list, decide the infrastructure items [#1] that need to be implemented. After the features are implemented and enabled, track the coverage [#3].

Good luck and Happy New Year 2011!