
Friday, December 31, 2010

Constrained random verification [CRV] - How to design and execute

Targeted at readers who want to understand the bigger picture and who would typically lead a verification project that intends to use CRV. Covers how to go about designing the CRV testbench and how to execute and track it.

Introduction: What is verification?

From a verification point of view, a DUT can be treated as a transfer function. If the DUT has configurable parameters, they can be thought of as variables that go into making up that transfer function. The whole DUT operation can be seen as a system that translates inputs to outputs based on this transfer function.

Now apply this to a real scenario: on power-up we configure the parameters of the DUT, and from then on we provide different types of inputs to the DUT. Using the transfer function we predict the output and compare it with the actual output produced by the DUT.
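To make that concrete, here is a minimal SystemVerilog sketch of the idea. The config fields and the transfer function itself are made-up placeholders, not a real DUT:

    // Hypothetical sketch: the DUT viewed as y = f(cfg, x).
    class dut_config;
      int gain;
      int offset;
    endclass

    class tf_checker;
      dut_config cfg; // configured once at "power-up"

      function new(dut_config cfg);
        this.cfg = cfg;
      endfunction

      // Reference model: predict the output for a given input.
      function int predict(int x);
        return cfg.gain * x + cfg.offset; // placeholder transfer function
      endfunction

      // Compare the prediction against what the DUT actually produced.
      function void check(int x, int dut_y);
        int exp_y = predict(x);
        if (exp_y !== dut_y)
          $error("Mismatch: in=%0d expected=%0d actual=%0d", x, exp_y, dut_y);
      endfunction
    endclass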

Constrained random verification environment: System Spec re-orientation

No, I am not going to sell the CRV methodology in this article. I am going to talk about how you go about building one.

Unfortunately, most DUTs that get verified cannot be captured as a precise mathematical transfer function. Where they can be, even partially, then to that extent they can be verified using formal techniques.

For the ones where it is not possible to capture the transfer function, we rely on the constrained random approach.

In constrained random verification, the following steps are to be followed:

1. Identify the variables among the configurable parameters and their allowed values.

2. Identify the variables in the input and the possible values these variables can take.

3. Define how the output can be derived from #1 and #2.

A DUT specification typically comes in the form of standard specifications plus an internal architecture specification.

In order to build the TB micro-architecture for CRV, capture information from the system specification in the three groups noted above.
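As a rough illustration, groups #1 and #2 can land directly in constrained-random classes, while group #3 becomes the prediction rule of the reference model. All names and ranges below are invented for the sketch:

    // Group #1: configurable parameters and their allowed values.
    class dut_cfg;
      rand bit [1:0]    mode;
      rand int unsigned pkt_limit;
      constraint legal { mode != 2'b11; pkt_limit inside {[1:256]}; }
    endclass

    // Group #2: input variables and the values they can take.
    class dut_input;
      rand int unsigned length;
      rand bit [7:0]    payload[];
      constraint legal { length inside {[1:64]}; payload.size() == length; }
    endclass

    // Group #3 is then the rule that derives the expected output from a
    // dut_cfg and a dut_input: the reference model behind the scoreboard.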

The DUT might process a single stream of inputs or multiple streams. Multiple streams may come from genuinely multiple physical interfaces, or from logical support for multiple data streams on a single physical interface.

Constrained random verification [CRV] environment: TB architecture

Once you have these details, it is time to get into the TB architecture.

Because of the several pre-defined methodologies, it is very easy to get a grocery list of components required in the TB. Don't fall into the trap of just creating a quick grocery list and then building the schedules and todo lists from that grocery list alone. It might get you started, but it will be absolutely useless in real execution: you will end up incorrectly estimating timelines and resources.

You need to have a good understanding of the system you are going to verify. The methodology-driven component list is just guidance. In the light of CRV and the system specification, you can then build a similar todo list that will be more accurate and reflect what is actually relevant for this project.

All the system configuration parameters should be captured in a global config. This config should typically hold both the system parameters and the state of the system. Plan on passing this config object to all the components of the testbench, since they need to work based on it.
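A minimal sketch of that pattern, with hypothetical field names:

    // One config object, built once, handed to every component.
    class global_config;
      // static system parameters
      int unsigned num_streams;
      bit          crc_enable;
      // dynamic state of the system
      bit          link_up;
    endclass

    class generator;
      global_config cfg;
      function new(global_config cfg);
        this.cfg = cfg; // same handle everywhere, so a state change
      endfunction       // made by one component is seen by all
    endclass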

If there are multiple physical interfaces, a generator for each would be a good choice. A single physical interface carrying multiple logical streams can still be served by a single generator.

The level of abstraction at which the data transaction is created decides the number of transactors below it needed to process it. Don't make a single transactor overly complicated; if the abstraction is higher, plan for a layered-transactor approach.

I am not going to talk much about the bus functional models for the interface. That is a different game altogether.

Based on the information in group #3, plan for a scoreboard. This is a vital component of the CRV system. Don't make the mistake of dumping all the checks into the scoreboard: limit it to data-integrity checks; the rest of the checks can be put in other transactors where they make more sense.
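A sketch of a scoreboard kept down to data integrity, in a byte-stream flavour with invented names:

    class scoreboard;
      bit [7:0] expected[$]; // queue of predicted output bytes

      // Called by the reference-model side.
      function void add_expected(bit [7:0] b);
        expected.push_back(b);
      endfunction

      // Called by the output monitor.
      function void check_actual(bit [7:0] b);
        if (expected.size() == 0)
          $error("Unexpected output byte 0x%0h", b);
        else if (expected.pop_front() !== b)
          $error("Data integrity failure on byte 0x%0h", b);
      endfunction
    endclass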

Design the end of test very carefully. This is equally important.
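One common shape for it, as a hedged sketch: declare the test over only when all stimulus is sent and all predictions are matched, then hold on for a drain window so late or extra DUT output still gets caught. The flags below are assumptions for the sketch, not a standard API:

    module tb_end_of_test;
      bit clk;
      bit stimulus_done; // set by the generators after the last item
      int outstanding;   // ++ on every prediction, -- on every match

      always #5 clk = ~clk;

      initial begin
        wait (stimulus_done);        // nothing more will be driven
        wait (outstanding == 0);     // every prediction was matched
        repeat (100) @(posedge clk); // drain window for stragglers
        $display("Clean end of test");
        $finish;
      end
    endmodule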

Constrained random verification: Functional Coverage myth

Well, we have the testbench and now we are starting to execute. What do you need to take it to closure?

One more mistake that is typically made is overloading functional coverage with everything.

Verification should, for the most part, drive coverage rather than be driven by it. For this to happen effectively, you need three sheets to track the overall constrained-random-methodology-based verification project.

1. Infrastructure tracker: Based on the TB architecture, capture the infrastructure todo items. This can record the TB type (if there are multiple TBs), the broad feature group (transactions, interrupts, configuration etc.), the general component (generators, data, xactors etc.), the actual name of the component, and one line about the feature (the more granular the better). Add owners and status.

2. Prioritized feature list: The order in which the different features are to be enabled, whether support exists in the TB, whether the feature is enabled in verification, and whether coverage is enabled.

3. Coverage tracker: The various functional coverage items. Coverage should again be grouped by feature.

Every week, based on the prioritized feature list [#2], decide the list of features to be enabled. Based on that list, decide the infrastructure items [#1] that need to be implemented. After the features are implemented and enabled, track the coverage [#3].

Good luck and Happy new year 2011 !

Thursday, September 30, 2010

Hazards of Copy/Paste

Copy/paste is a way of coding for many. The copying can be from websites or from another source within the company or project; I am not talking about that.

While writing code there are many times you have to repeat something with slight changes, and what you do is simply copy/paste. These are cases you cannot quite factor out into a subroutine. If the slight change is required in multiple places, it is quite likely that you will forget it in one of them: one's attention is lower while pasting code than while writing it.
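A tiny made-up illustration of the trap: the block for 'b' was pasted from the block for 'a', and one occurrence was not renamed. It compiles cleanly and fails only at run time:

    module copy_paste_demo;
      function automatic void scale_pair(ref int a, ref int b, input int k);
        a = a * k;
        if (a > 100) a = 100; // original block

        b = b * k;
        if (a > 100) b = 100; // pasted block: this 'a' should have been 'b'
      endfunction
    endmodule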

Now starts the most dangerous phase. Since the program has no syntax issues, it will compile nicely (or will have the same syntax error in two places :-)). As it goes in the programming world, it is not as much about the programming as it is about the debugging. The nightmare starts once you run the program and it fails with some unexpected signature. Since your debug might be totally focused on the conceptual change you made, you may have a tough time getting to the bug that crept in through copy/paste.

The time saved by copy/paste you may end up paying back in an ugly debug that takes far longer. Unless you are highly diligent and can stay so most of the time, it is not worth copy/pasting small chunks of code. Better to write them again: it will not only reduce silly bugs but also improve your familiarity with the language.

Say no to small copy/pastes, and even when you have to do one, pay full attention as if you were writing it fresh. You cannot take it easy. Be alert!

Wednesday, September 22, 2010

SystemVerilog’s Garbage Collection – Don’t forget Threads!

Humans generally like symmetry. C++ has a constructor and a destructor: memory allocated in the constructor is to be released in the destructor.

SystemVerilog is not so symmetrical in this respect: it has a constructor but no destructor. The memory allocated to an object is garbage collected automatically.

In the context of VMM, thousands of transactions are created but typically only a handful of transactors. While transactions don't last the entire simulation, transactors typically do.

The transactions are data and don't have their own threads, so garbage collection works perfectly for them.

SystemVerilog, in contrast to C++, has one more dimension: threads. Classes can have their own threads, and these threads are resources of the class that started them. If a thread is still active, then even if the object handle is no longer referenced by any component, garbage collection will not happen until all the threads are terminated.

One such scenario is hot-plug systems. There could be a model that has its own threads, and detaching and re-attaching with the same class handle can lead to issues, since the object is not really freed up until all of the class's threads have completed.
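Here is a small sketch of the situation and the cleanup. The class and names are invented; only process::self() and kill() are standard SystemVerilog:

    class monitor;
      event tick;               // stand-in for the monitor's work loop
      protected process main_p; // handle to the forked main thread

      task start();
        fork
          begin
            main_p = process::self();
            forever @(tick);    // the live thread references 'this',
          end                   // so the object can never be collected
        join_none
      endtask

      // Without calling this, 'mon = null' does not free the object.
      function void stop();
        if (main_p != null) main_p.kill();
      endfunction
    endclass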

VMM xactors have stop_xactor() to address this. However, if the xactor's main thread itself launched multiple threads with the dangerous join_none, there will still be issues when the transactor needs to be stopped.

Stopping the xactor will be my next post.

Friday, July 23, 2010

Debug Tip from "Magic's Biggest Secrets Finally Revealed on AXN" show

A while ago AXN ran the series "Magic's Biggest Secrets Finally Revealed". It was a rather interesting series. Let's take the example where the magician is able to move an egg, or himself, through a solid wall. Watch this: http://www.youtube.com/watch?v=pEIFYFqEyaw

What I learnt from this is that we see what the magician wants us to see. We shut off our attention to what is really happening to make the illusion possible. In this case, if the wall is solid, the only way he could move in and out is around the sides. So what one has to look for is whatever could help the magician move around the side.

How does this translate to debugging?
The programmer and the debugger are really two different personalities, even when both roles are played by a single person: the programmer is the magician and the debugger is the audience. The programmer had an algorithmic intent that he captured in his program, and it has failed. When the debugger starts looking at it, the programmer shows only his intent and does not let the debugger ask: yes, this is the intent, but what can break it? This is the programmer's magic trick on the debugger, creating the illusion that this failure is not possible.

As a smart audience, the debugger needs to think like this: let's accept that the failure has happened and stop insisting that it is not possible. What one needs to ask is: given this code snippet with its specific intent, what can lead to the failure? It may be that the program was not designed for a specific input that was nevertheless given, or that in the course of solving the main problem some subtle language-specific detail was missed.

Let me elaborate with a simple SystemVerilog constraint failure example. I had a simple constraint that the sum of two integers [A, B] must be less than a third integer [C]. The third integer [C] was an input, and the other two [A, B] were outputs of the constraint. I checked the input number and it was correct, but the generated integers [A, B], which I was printing as hex, showed huge values. I wondered how such a simple constraint could fail. What was happening was that the ints were taking negative values: I really wanted only positive numbers, and declaring them as int had caused it. I stared at the simple constraint for a long time assuming the bug was there, while it was actually hiding in the data type. The programmer would just show me the constraint and hide his trick in the int data type.
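Reconstructing the trick in code (class and variable names invented), with the one-line fix:

    class gen;
      rand int A, B; // signed! negative values also satisfy the constraint
      int      C;
      constraint sum_c { A + B < C; }
      // The fix: declare 'rand int unsigned A, B;' instead, or add
      // constraint pos_c { A >= 0; B >= 0; }
    endclass

    module demo;
      initial begin
        gen g = new();
        g.C = 100;
        void'(g.randomize());
        $display("A=%h B=%h", g.A, g.B); // negative A/B print as huge hex
      end
    endmodule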

Wednesday, July 21, 2010

Object oriented programming

Often the problem at hand falls into one of those buckets that map clearly to OOP concepts. Without much thought I jump into code and move on.

There are times when it does not. That's when I dive into the fundamentals again. I was asking myself: did object oriented programming exist even before formal object oriented languages were introduced? I went back to my software days and discussed it with a few of my friends, and the answer is YES. Look at the various components of the Linux kernel, for instance. Using structs and function pointers, people did do OOP in C. So these concepts, which had inherent advantages, were simply made explicit through dedicated constructs in object oriented programming languages.

So I thought of diving in and capturing the most essential concepts used in OOP and the value they bring.

Two key things OOP attempts to do are to bring programming closer to the problem domain and to ease the reuse of code. When done properly, the approach leads to simpler, more concrete, robust, flexible and modular programs.

Also note that the basic OOP concepts of abstraction, encapsulation, inheritance and polymorphism should not be viewed as independent things. They are all related: both inheritance and polymorphism build on top of abstraction and encapsulation.

A. Abstraction: "Abstraction is the elimination of the irrelevant and the amplification of the essential," according to Robert C. Martin in "Designing Object-Oriented C++ Applications Using the Booch Method" (ISBN 0-13-203837-4).

Abstraction is something we see in everyday life. Take any device: a car, an alarm clock or a mobile phone. There are only a few essential things one needs to know in order to use them; the other details are not relevant and hence are abstracted away.

In programming, abstraction can be data abstraction or control abstraction.

Miller’s law: humans can only keep 7 ± 2 things in their head at a time.

Key to managing complexity: abstraction.
An abstraction is a view or representation of an entity that includes only the attributes of significance in a particular context. Abstraction is about emphasis on what an object is or does rather than how it is represented or how it works.

While abstraction reduces complexity by hiding irrelevant detail, generalization reduces complexity by replacing multiple entities which perform similar functions with a single construct. Generalization is the broadening of application to encompass a larger domain of objects of the same or different type.

The value abstraction brings in:
1. It hides all irrelevant details, making the main program simpler to design, write and maintain when the system is complex.
2. It lets proven components be reused across applications.
3. It allows the internal implementation to keep improving without affecting the rest of the system.

B. Encapsulation: Generally viewed as the brother of abstraction. Abstraction is noble, but to get its benefits it has to be enforced. Abstraction is the way to achieve what is intended (the positive side), and encapsulation prevents anything that could break the abstraction (the negative side). Encapsulation is the mechanism for implementing abstraction so that its benefits can be realized.

The purpose is to achieve the potential for change: the internal mechanisms of the component can be improved without impact on other components, or the component can be replaced with a different one that supports the same public interface. Encapsulation also protects the integrity of the component, by preventing users from setting its internal data into an invalid or inconsistent state. Another benefit of encapsulation is that it reduces system complexity, and thus increases robustness, by limiting the interdependencies between software components.
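A minimal SystemVerilog illustration, with an example class invented for this post:

    class counter;
      local int unsigned count; // internal data: unreachable from outside

      function void increment();
        count++;                // the only legal way to change it
      endfunction

      function int unsigned get();
        return count;
      endfunction
    endclass

    // c.count = 5; // would be a compile error: 'count' is local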

C. Inheritance: Just as abstraction is closely related to generalization, inheritance is closely related to specialization. The specialization relationship is implemented using the principle called inheritance.

A new class can be defined in terms of "diffs" from another class. The new class is called the subclass or derived class; the old class is called the superclass or parent class.

A derived class inherits all the entities of its parent class, but this can be restricted by access controls; the details are language-specific. Subclasses can override methods and provide their own implementations that may differ from the superclass.

D. Polymorphism: Polymorphism is a generic term that means 'many shapes'. More precisely, polymorphism means the ability to request that the same operations be performed by a wide range of different types of things.

The primary usage of polymorphism is the ability of objects belonging to different types to respond to method, field, or property calls of the same name, each one according to an appropriate type-specific behavior. The programmer (and the program) does not have to know the exact type of the object in advance, and so the exact behavior is determined at run time (this is called late binding or dynamic binding).

The different objects involved only need to present a compatible interface to the clients. In principle, the object types may be unrelated, but since they share a common interface, they are often implemented as subclasses of the same superclass.

Polymorphism is only concerned with the application of specific implementations to an interface or a more generic base class. Polymorphism is not the same as method overloading or method overriding.
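A compact SystemVerilog sketch of inheritance plus run-time dispatch, with classes invented for the example:

    class packet;
      virtual function string kind();
        return "generic";
      endfunction
    endclass

    class err_packet extends packet; // inherits everything from packet
      virtual function string kind();
        return "error";              // overrides the parent behavior
      endfunction
    endclass

    module demo;
      initial begin
        err_packet ep = new();
        packet     p  = ep;       // base handle, derived object
        $display("%s", p.kind()); // prints "error": late binding picks
      end                         // the implementation at run time
    endmodule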

Friday, July 16, 2010

SystemVerilog Assertions

Although I have been using SystemVerilog for a while, I had not used SystemVerilog Assertions (SVA). We got to use formal verification to verify one of our arbitration logic blocks. That's when I realized that assertions are quite overloaded entities. Formal verification seems to be a good idea for logic structures of the "more of less" type, meaning small processing steps but lots of concurrent activity; writing a simulation environment for such a thing is a pain. I found formal verification to be complementary to simulation rather than a replacement. It is certainly interesting and should be carefully employed to cover certain parts of verification.

SVA is an integral part of the IEEE 1800 SystemVerilog language, focusing on the
- temporal aspects of the specification
- modeling
- verification
SVA allows sophisticated multi-cycle assertions and functional checks to be embedded into the HDL code.

The good thing is that it is multi-faceted: the same constructs can be used for assertions, functional coverage, debug and formal verification.
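A small taste of that multi-facetedness, with invented signal names: one property, asserted for checking and covered for functional coverage:

    module arb_checks(input logic clk, req, gnt);
      // Every request must be granted within 1 to 3 cycles.
      property p_req_gnt;
        @(posedge clk) req |-> ##[1:3] gnt;
      endproperty

      a_req_gnt: assert property (p_req_gnt)
        else $error("req not granted within 3 cycles");
      c_req_gnt: cover property (p_req_gnt); // doubles as coverage
    endmodule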

It makes it worthwhile for verification engineers to invest in SVA.

Communication Protocols -1

I was just looking back at the communication protocols that I have dealt with.

1. Started with a simple UART
2. Moved to a simple TDM switch
3. Telecom carrier line protocol E1, still a kind of TDM
4. From TDM to the packet-switched ISDN BRI and PRI, from the exchange side
5. TCP/IP
6. H.323 and RTP for VoIP

Aha... the TCP/IP Linux software stack interested me after I had looked at the STREAMS-based ISDN stacks. But what really fascinated me was how it would look if TCP/IP could be completely implemented in hardware: not using some dedicated processor, but real custom FSMs, processing the data at a huge rate. Maybe people have done it and are using it in the server space.

Although I never got to implement TCP/IP in hardware, I have seen several mini versions of it in the burgeoning high-speed serial communication interfaces.

1. RapidIO - the parallel version. We implemented the first version of the spec: an 8-bit physical and link layer with buffer-management logic for the protocol layer. It was fun.
2. HyperTransport - which was also called LDT
3. PCIe - the serial version of the same idea
4. USB3 - now.

It has been a while dealing with these protocols. During spec implementation, one thing that is often not given enough attention is why things are the way they are. Agreed, not everyone may need it, but it is certainly very important that at least a few of the top-level folks know about it. It is the spec's DNA.

In case of any ambiguity in the spec, knowing what intent a feature supports helps you decide which route to take.

I am going to do a post mortem of these protocols and try to get at the basic core problem each is trying to solve. I will start with USB3 since I am fresh on it; with the others I have lost touch for a while.

Friday, July 2, 2010

Apple

For the past few weeks I have had Steve Jobs fever. I read and watched quite a bit about him. His principles of design are simplicity, ease of use, allowing creative extensions and reducing pain.

I generally have a fever of something for a while. Different fevers last different durations, some longer and some shorter. Some of them I really enjoy, and a few have left a mark on my life. The significant ones have been Ayn Rand's and Jiddu Krishnamurthy's literature for a while, and thereafter the Madhwa philosophy, which I am still holding on to.

Coming back to Steve and Apple: it triggered a habit of asking myself, whenever I looked at anything, how it should be done. What's the Apple way to do it? What's the right way to do it?

The LPG gas stove in the house caught my attention. I certainly felt it lacked a basic feature: gas leakage detection. With so many fire accidents, I wonder why this feature is missing. Many Indian households would also appreciate it if the gas stove could double up as a partial microwave oven. If not full-featured, it should at least have basic electronics to turn the burner off on a timer, so that next time you could put milk on to heat and not have it boil over while you enjoyed your favourite TV show.

Friday, June 25, 2010

Shri Gurubhyoam Namaha...

Shri Gurubhyoam Namaha
Shri Param Gurubhyoam Namaha
ShriMadAnandaTeerthaiah Namaha

Harihi Om.

Starting off...