
Peer code review of RTL, Test bench, Test Cases for 100% Verification closure


"Peer code review" is a widely discussed topic in the context of design verification. I remember the times very early in my career when my code was reviewed: there were plenty of positives, along with the occasional negatives that I have improved on over time. What do people look for in a code review, and what is the value add? Is it the easiest and most powerful way of hunting down issues, and of preventing their recurrence by educating people, one that is too often neglected in favor of complex tools and methodologies that are never idiot-proof?

I still remember a comment from my first code reviewer: "Peer code reviews are like speed bumps on the highway, where the ultimate goal is not to impose a fine but to prevent speeding violations in general." Translating this to the way we code is the ultimate benefit for the whole team. The significant gain is that the person whose code is being reviewed puts in that extra effort to check for missed signals in sensitivity lists and to add default states in FSMs, because they know their code is going to be put under the spotlight in front of their peers. Many potential bugs get fixed before the code even reaches the review committee. Furthermore, this is the right forum to ensure that people are following the coding guidelines that should be in place. Not only does the code owner get feedback; the peers in the room generally apply the same lessons to their own code, resulting in an overall improvement and value add.
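To make the sensitivity-list and default-state points concrete, here is a minimal sketch of the kind of block a reviewer would inspect. The FSM and its signal names are hypothetical, invented for illustration only:

```systemverilog
// Hypothetical two-state request/grant FSM illustrating two common
// review findings: an incomplete sensitivity list and a missing
// default state.
module handshake_fsm (
  input  wire clk,
  input  wire rst_n,
  input  wire req,
  input  wire ack,
  output reg  grant
);
  localparam IDLE  = 1'b0,
             GRANT = 1'b1;

  reg state, next_state;

  // Before review, this block read "always @(req)", which silently
  // ignored changes on `ack` and `state` and caused a sim/synthesis
  // mismatch. Using @(*) keeps the sensitivity list complete.
  always @(*) begin
    case (state)
      IDLE:    next_state = req ? GRANT : IDLE;
      GRANT:   next_state = ack ? IDLE  : GRANT;
      default: next_state = IDLE;  // default state: recovers from
                                   // unreachable or X-propagated encodings
    endcase
  end

  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) state <= IDLE;
    else        state <= next_state;
  end

  always @(*) grant = (state == GRANT);
endmodule
```

Neither issue would fail a basic smoke test, which is exactly why a second pair of eyes (or the self-review done in anticipation of one) tends to catch them first.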

All that said, the main problem is always time: code reviews consume significant resources and valuable productive hours. Even so, in any organization, peer code reviews have to be part of the methodology, be it for design or verification.

Self code review is probably more important!

A problem with any code review is the lack of specific targets and the right audience. As noted above, in a peer code review people learn from each other; however, it is generally not 100% clear what specific targets are to be achieved. Coding guidelines are the easy targets, but they should not be the only ones. Based on my experience, code reviews ideally should come with spec and verification plan reviews. The reviewed spec should be complete and clear enough to define how to assure its correct implementation. The code review plan should be part of the verification plan, so that it is one piece of an integrated solution for assuring the implementation's correctness. Theoretically, the verification plan should include a complete set of conceptual properties to verify, complete enough to 100% assure the implementation's correctness. Some of those properties should be proven in code review, and the rest should be proven with other methods. As an industry, we do not know how to create a theoretically complete spec, and we do not know how to create a theoretically complete verification plan. However, we should at least start taking some steps in the right direction.
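One way to make a "conceptual property" from the verification plan concrete is to write it as an assertion, so it can be proven formally or checked in simulation rather than in the review itself. The signals, module context, and the four-cycle bound below are assumptions for illustration, not from the plan:

```systemverilog
// Hypothetical verification-plan entry, expressed as a concurrent
// SystemVerilog assertion: "every request is eventually granted".
module grant_checker (
  input wire clk,
  input wire rst_n,
  input wire req,
  input wire grant
);
  property req_eventually_granted;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] grant;  // assumed bound: grant within 4 cycles
  endproperty

  assert property (req_eventually_granted)
    else $error("request was not granted within the expected window");
endmodule
```

Properties written this way partition naturally: some are discharged by inspection during the review, while the rest are handed to simulation or formal tools, which is the integrated solution the verification plan should spell out.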