Showing posts with label Verification.

Intel Mobile Communications is looking for a Senior Verification Engineer


This opening is in Bangalore, India, and the company is looking to close it at the earliest.

Job Description:
1. About 5-7 years of experience in functional verification, with at least 3-4 years in HVLs such as the e language/Specman and SystemVerilog.
2. Good experience at both module and sub-system/SoC level verification.
3. Good knowledge of Verilog/VHDL.
4. Good knowledge of the UVM/eRM methodologies.
5. Should have developed complete testbench architectures, including the design and coding of testbench components such as UVCs/eVCs with checkers, monitors, scoreboards and BFMs.
6. Should have architected test plans, including functional coverage, and driven functional verification closure of complex DUTs.
7. Expertise in sequences and sequence libraries.
8. Working knowledge of register package models, regression tools such as eManager, and Perl scripting.
9. Should have working knowledge of ARM-based processors and AHB.

Desirable skill set:
1. Exposure to other object-oriented verification methodologies such as VMM/OVM/UVM and SystemVerilog.
2. Exposure to C++, TLM and co-verification.

Role:
1. Ownership and leadership of the verification activity.
2. Good coordination skills to work flexibly with multi-skilled teams on schedule-critical projects, with technical interaction with concept, system, program and design teams that are geographically distributed.

If you are interested, please contact using this link.

Peer code review of RTL, Test bench, Test Cases for 100% Verification closure


The topic of "peer code review" is widely discussed in the context of design verification. I remember the times very early in my career when my code was reviewed. There were lots of positives, occasionally with some negatives that I have improved on over time. What is it that people look for in a code review, and what is the value add? Is it the easiest and most powerful way of hunting down issues and avoiding their recurrence by educating people, one that is too often neglected in favor of complex tools and methodologies that are never idiot-proof?

I still remember the comments from my first code reviewer, who went on to say the following: "Peer code reviews are like speed bumps on the highway, where the ultimate goal is not to impose a fine but to prevent speeding violations in general." Translating this to the way we code benefits the whole team. The significant gain is that the person whose code is being reviewed puts in that extra effort to check for missed signals in the sensitivity list and to add default states in FSMs when they know their code is going to be put under the spotlight in front of their peers. Many potential bugs get fixed even before the code gets to the review committee. Furthermore, this is the right forum to ensure that people are following the coding guidelines that should be in place. Not only does the code owner get feedback, the peers in the room generally apply the same lessons to their own code, resulting in an overall improvement and value add.
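
To make the two items above concrete, here is a minimal SystemVerilog sketch (with hypothetical module, signal and state names) of the kind of code a peer review encourages: the combinational block uses always_comb so no signal can be missed from a sensitivity list, and the case statement has a default branch so the FSM always has a defined next state.

    // Hypothetical FSM fragment: the kind of issue a peer review catches.
    module fsm_example (
      input  logic clk, rst_n, start, done,
      output logic busy
    );
      typedef enum logic [1:0] {IDLE, RUN, WAIT} state_t;
      state_t state, next_state;

      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next_state;

      always_comb begin                 // no sensitivity list to forget
        next_state = state;             // safe default assignment
        busy       = 1'b0;
        case (state)
          IDLE: if (start) next_state = RUN;
          RUN:  begin busy = 1'b1; if (done) next_state = WAIT; end
          WAIT: next_state = IDLE;
          default: next_state = IDLE;   // default state: recover from illegal encodings
        endcase
      end
    endmodule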

All said and done, the main problem is always time: code reviews consume significant resources and valuable productive hours. Even so, in any organization, peer code reviews have to be part of the methodology, be it design or verification.

Self code review is probably more important!

A problem with any code review is the lack of specific targets and the right audience. As said above, in a peer code review people learn from each other; however, it is generally not 100% clear what specific targets are to be achieved. Coding guidelines are the easiest of the possible targets, but they should not be the only ones. Based on my experience, code reviews should ideally come with spec and verification plan reviews. The reviewed spec should be complete and clear enough to define how to assure its correct implementation. The code review plan should be part of the verification plan, so that it is part of the integrated solution that assures the implementation's correctness. Theoretically, the verification plan should include a complete set of conceptual properties to verify, and this set should be complete enough to 100% assure the implementation's correctness. Some of the properties should be proven in code review and the rest should be proven with other methods. As an industry, we do not know how to create a theoretically complete spec, and we do not know how to create a theoretically complete verification plan. However, we should at least start taking some steps in the right direction.
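
As an illustration of what one of those "conceptual properties" might look like once it leaves the review room, here is a small, hypothetical SystemVerilog assertion sketch: the property (a request must be granted within a bounded number of cycles) can be proven by simulation or formal tools, while the surrounding coding-style questions remain review targets. The signal names and the bound are illustrative, not from any specific design.

    // Hypothetical handshake property: every req must see a gnt within 8 cycles.
    module req_gnt_checker (input logic clk, rst_n, req, gnt);

      property p_req_gets_gnt;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:8] gnt;
      endproperty

      a_req_gets_gnt: assert property (p_req_gets_gnt)
        else $error("req was not granted within 8 cycles");

      c_req_gets_gnt: cover property (p_req_gets_gnt);
    endmodule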

Acceleration And Emulation – Why HW/SW Integration Needs Both


Acceleration, emulation, and FPGA prototyping are much talked about these days, and each has a distinctive role to play. In our earlier post we reflected on the Cadence rollout of Palladium XP, a verification computing platform that unifies acceleration capabilities from the Incisive Xtreme product line with Incisive Palladium emulation, incorporating some of the strongest capabilities from each platform. You can read the press release here, but in his blog Richard Goering looks at the larger story behind the announcement. Why put acceleration and emulation in a single environment? What role does either play in hardware/software integration? And how do we define "acceleration" and "emulation," anyway?

Cadence Debuts Verification Computing Platform


Cadence Design Systems, Inc. has announced a fully integrated high-performance verification computing platform, called Palladium XP, that unifies simulation, acceleration and emulation into a single verification environment. Developed to support next-generation designs, the highly scalable Palladium XP verification computing platform lets design and verification teams bring up their hardware/software environment faster and produce better quality embedded systems in a shorter time.

Cadence Palladium XP supports design configurations up to 2 billion gates, delivering performance up to 4MHz and supporting up to 512 users simultaneously. The platform also provides unique system-level solutions, including low-power analysis and metric-driven verification.

"Our system-integration challenges require us to improve our tools and methodologies continuously. Cadence has kept pace with our requirements and provided us with an excellent verification computing platform," said Narendra Konda, Director of Engineering, NVIDIA. "Cadence Palladium XP helps us design, verify and integrate the hardware and software components of our advanced graphics processing unit (GPU) better than ever to stay at the top of our game."

The Palladium XP verification computing platform provides developers with a high-fidelity representation of their design so they can quickly and confidently locate and fix bugs, resulting in better-quality IP, subsystems, SoCs and systems. Design teams can "hot swap" simulation with acceleration and emulation in a scalable verification environment as needed, which speeds the verification process and enables early access for testing embedded software and evaluating the performance implications of different IP and/or system architectures.

"With the introduction of multicore IP platforms, ARM and our customers are facing new design requirements to integrate and run complex CPU sub-systems with software," said Dr. John Goodenough, Worldwide Director of Design Technology at ARM. "Like its predecessor, the Palladium XP verification computing platform will be a valuable validation tool for these advanced designs. Our initial trials have shown that the Palladium XP runs current ARM workloads out of the box, with the additional ability to trade off domain utilization for higher performance."

Availability:

The Palladium XP verification computing platform is available now worldwide. It is offered in two configurations: XL for design teams and GXL for enterprise-class global teams.

SDRAM Memory Systems: Architecture Overview and Design Verification


DRAM (Dynamic Random Access Memory) is attractive to designers because it provides a broad range of performance and is used in a wide variety of memory system designs for computers and embedded systems. This DRAM memory primer provides an overview of DRAM concepts, presents potential future DRAM developments and shows how memory designs can be improved through verification.

Enabling Rapid, Reliable Deployment of IP into System Designs


This webcast highlights The SPIRIT Consortium's new IP-XACT 1.4 specification, which expands the range of IP that can be used in an IP-XACT design environment and targets new applications, specifically those dealing with transactional modeling and advanced verification methodologies. IP-XACT 1.4 benefits include documentation of all aspects of IP in an XML databook format, documentation of models in a quantifiable and language-independent way, and the ability for designers to deploy specialist knowledge in their designs.

Evolving the Coverage-Driven Verification Flow


Over the past decade, coverage-driven verification has emerged as a means to deal with increasing design complexity and ever more constrained schedules. Among the benefits of the new methodology are a dramatically expanded set of verification metrics and tools that provide much improved visibility into the verification process. However, coverage-driven verification flows are still beset by challenges, including how to efficiently achieve coverage closure. Matthew Ballance describes how inFact's coverage-driven stimulus generator is helping to respond to these challenges with unique algorithms that simultaneously target multiple independent coverage goals.

Boosting RTL Verification with High-Level Synthesis


Instead of prolonging the painful process of finding bugs in RTL code, the design flow needs to be geared toward creating bug-free RTL designs. This can be realized today by automating the generation of RTL from exhaustively verified C++ models. If done correctly, high-level synthesis (HLS) can produce RTL that matches the high-level source specification and is free of the errors introduced by manual coding.

Diagnosing clock domain crossing errors in FPGAs


Clock domain crossing (CDC) errors in FPGAs are elusive, and locating them often requires good detective work and smart design as well as an understanding of metastability and other physical behaviors. This white paper discusses the nature of CDC errors and presents a powerful solution that aids in their detection and removal. Note: by clicking on the above link, this paper will be emailed to your TechOnline log-in address by Mentor Graphics.
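
For readers who want a concrete picture of the classic remedy such papers discuss, below is a minimal, hypothetical SystemVerilog sketch of a two-flip-flop synchronizer for a single-bit, quasi-static signal crossing into a new clock domain. Multi-bit buses and pulses need other structures (handshakes, FIFOs); the names here are illustrative only.

    // Hypothetical two-flop synchronizer for a single-bit, quasi-static signal.
    // The first flop may go metastable; the second gives it a full destination
    // clock cycle to resolve before the rest of the logic sees it.
    module sync_2ff (
      input  logic dst_clk,
      input  logic dst_rst_n,
      input  logic async_in,   // signal generated in another clock domain
      output logic sync_out    // safe to use in the dst_clk domain
    );
      logic meta;

      always_ff @(posedge dst_clk or negedge dst_rst_n) begin
        if (!dst_rst_n) begin
          meta     <= 1'b0;
          sync_out <= 1'b0;
        end else begin
          meta     <= async_in;
          sync_out <= meta;
        end
      end
    endmodule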

The Art of Debugging: Make it Fail


"Debugging" is the most valuable engineering skills, not taught in any formal setting, and often learned the hard way, by trial, error, experience, and long days and nights. These excerpted chapters, part1, part2, part3 from a book, Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems, published by AMACOM Books is informative and a logical introduction to the skill and art of begging electronic circuits and systems.

e Verification language is alive and well


According to Mitch Weaver, corporate vice president for front-end verification at Cadence, the e verification language is not only still alive but is thriving and growing. In this pre-DVCon interview, he answers questions about Cadence and industry support for the e language and the Specman/e verification environment, as provided by Incisive Specman Elite and Incisive Enterprise Simulator XL.[Via: Industry Insights]

Toyota Prius 2005: An Early Warning About Verification


In this article Richard Goering talks about a software bug in the 2005 Toyota Prius and how, five years on, even a thorough study of the issue by three Toyota engineers did not help the company learn any lessons. He also talks about the model-based verification discussed in the original presentation, and how a formalized verification methodology can help alleviate issues that may otherwise arise later in the field. Clearly a must-read article!

Motivation: What else can we talk about verification?


Many of you already know that verification efforts are as important as, or more important than, the design efforts themselves. They cannot be ignored or treated as an afterthought; they are part of the chip design process itself. You may also have known or learnt about the productivity gap, the significance of verification, the very famous Pentium bug, Moore's law, the fact that verification cannot be done by simulation alone, the benefits of formal verification and the complexities that tag along :-), the benefits of model checking, what equivalence checking is and how those algorithms work, etc. So, what part of the verification landscape are you yet to explore or understand? Is there something I can explain or detail? Do you want to contribute? Please comment!

Verification Sessions at DVcon 2010


Featured tutorial: Step-by-Step Guide to Advanced Verification, plus DVCon exhibits and product demos, DVCon paper presentations, and more at this link.

Clock-Domain Crossing Verification Module


This Mentor Verification Academy module directly addresses CDC issues by introducing a set of steps for advancing an organization's clock-domain crossing verification skills, infrastructure, and metrics for measuring success, while identifying process areas requiring improvement. The Clock-Domain Crossing Verification Module contains 7 sessions in total, including 1 demo.

Questa SV/AFV: Verification Methodology Kits


Mentor Graphics provides the methodology kit examples in open-source form under the Apache-2.0 license. These kits are available for Questa versions 6.2 to 6.4.

The world of HVLs and VIPs


Over the last decade, functional verification of ASIC systems has witnessed a paradigm shift in verification methodologies. Until the late 90's, a verification project's requirements were met by traditional testbenches written using an HDL (hardware description language, used for coding the design) or a software language like C. These test environments supported directed testing by generating a pre-calculated stream of input data, focusing only on the desired coverage points in the design. As design size and complexity grew exponentially, with million-gate designs becoming the norm, the directed verification approach became quite cumbersome. It is an almost impossible task to manually code the test vectors to exercise a million-gate design under all operating conditions. Ever more aggressive project schedules also pushed in favor of a powerful yet easy-to-develop test environment. Hardware verification languages (HVLs) came as a boon to solve all these problems. HVLs typically include features of high-level programming languages like C++ or Java, and in addition they support built-in constructs that help you develop the environment with fewer lines of code. It is much easier to generate random or constrained-random stimuli as needed, and the built-in HDL interface support lets you hook up the environment and the DUT seamlessly.
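
As a small illustration of what those built-in constructs buy you, here is a hypothetical SystemVerilog sketch of constrained-random stimulus: a transaction class with rand fields and constraints replaces the hand-written, pre-calculated vectors of a directed testbench. The class, field names and constraint values are illustrative only.

    // Hypothetical bus transaction with constrained-random fields.
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      rand bit        is_write;

      // Keep addresses inside a legal window and word-aligned;
      // bias the mix toward writes.
      constraint c_addr  { addr inside {[32'h0000_1000 : 32'h0000_1FFC]};
                           addr[1:0] == 2'b00; }
      constraint c_write { is_write dist {1 := 7, 0 := 3}; }
    endclass

    module tb;
      initial begin
        bus_txn txn = new();
        repeat (5) begin
          if (!txn.randomize()) $error("randomization failed");
          $display("addr=%h data=%h write=%0d", txn.addr, txn.data, txn.is_write);
        end
      end
    endmodule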

Of the handful of HVL choices available in the market, SystemVerilog and the e language (Specman) are the most popular and the most widely adopted by semiconductor companies. The reason these two languages are the preferred choice is not just the language features themselves but the accompanying methodologies prescribed by the EDA vendors (e.g. OVM/VMM for SystemVerilog and eRM for the e language). These methodologies provide guidelines for building modular, reusable and extensible verification modules that support plug-and-play development. With most silicon designs today adopting the SoC (System-on-Chip) approach, where several design IPs are integrated on a single chip, there is a corresponding requirement for verification IPs (VIPs). By taking advantage of the VIPs available in the market, a verification team can build a test environment for a complex SoC with minimal resource and time utilization. A standard verification methodology serves as a common platform for the various third-party VIP developers and their customers, so it is actually the verification methodology, rather than the language, that influences the decision on VIP selection. Some methodologies have made it possible to integrate verification modules written in different languages, so that you do not have to discard any legacy code you may have (you may have to make it methodology compliant, though). VIPs and verification methodologies are still relatively new concepts in the verification world and are being tried out in experimental phases by semiconductor companies. Not all of the industry leaders have adopted these methodologies completely, probably because of the overhead of training their current workforce and replacing legacy code. Here's hoping that the EDA vendors enhance their methodologies to make them more developer friendly.
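
To show the flavor of the plug-and-play reuse these methodologies prescribe, here is a minimal, hypothetical UVM-style sketch: a sequence item and a sequence that any environment containing a compatible sequencer and driver (for example, one shipped as a VIP) could run unchanged. The class and field names are illustrative.

    // Minimal UVM sketch: a reusable sequence item and sequence.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class pkt_item extends uvm_sequence_item;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      `uvm_object_utils(pkt_item)
      function new(string name = "pkt_item");
        super.new(name);
      endfunction
    endclass

    // This sequence depends only on the item type, so any agent/VIP whose
    // sequencer is parameterized on pkt_item can run it as-is.
    class pkt_write_seq extends uvm_sequence #(pkt_item);
      `uvm_object_utils(pkt_write_seq)
      function new(string name = "pkt_write_seq");
        super.new(name);
      endfunction
      task body();
        pkt_item item;
        repeat (8) begin
          item = pkt_item::type_id::create("item");
          start_item(item);
          if (!item.randomize()) `uvm_error("SEQ", "randomize failed")
          finish_item(item);
        end
      endtask
    endclass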

Verification Plan


An effective verification plan encompasses a detailed description of the complete hierarchical verification methodology at unit and full-chip level. It is important to consider at what verification phase directed versus random tests will be applied, and when to stop investing effort in building a standalone test environment that can provide greater coverage and instead migrate to full-chip level tests that deliver a more comprehensive understanding of the state of the chip.

A good verification plan addresses many questions, such as which tools can be used for standalone and full-chip verification, and for which specific types of tests. The creation of expected-result scenarios, along with the self-checking mechanism, should be detailed in order to improve automation and maximize the return on the verification effort. For each verification phase, testbench deliverables, dependencies such as RTL availability, milestones such as tests to be written or completed, and any assumptions need to be specified and understood thoroughly. Finally, upon completion of the verification plan, it has to be reviewed by both the design and verification teams, and a matrix has to be created to track test coverage and then used to measure completeness and progress. It is also important to know when and how to apply technologies such as emulation and formal methods, leveraging their key strengths and avoiding their weaknesses, to achieve high design quality from the verification effort.
Courtesy: Catherine Ahlshlager!
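
As one concrete way such a coverage-tracking matrix is often realized, here is a hypothetical SystemVerilog covergroup sketch: each coverpoint corresponds to an item in the verification plan, and the collected coverage numbers feed the completeness and progress measurement mentioned above. The signal names and bins are illustrative only.

    // Hypothetical coverage model tied to verification-plan items.
    module plan_coverage (input logic clk,
                          input logic [1:0] burst_len,   // plan item: all burst lengths
                          input logic       is_write,    // plan item: read and write paths
                          input logic [3:0] resp_delay); // plan item: slow-response corner

      covergroup cg_plan @(posedge clk);
        cp_burst : coverpoint burst_len;                    // automatic bins 0..3
        cp_dir   : coverpoint is_write { bins read  = {0};
                                         bins write = {1}; }
        cp_delay : coverpoint resp_delay { bins fast = {[0:3]};
                                           bins slow = {[4:15]}; }
        cross cp_burst, cp_dir;                             // plan item: burst x direction
      endgroup

      cg_plan cov = new();
    endmodule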

Dealing with clock jitter in DDR2/DDR3 based designs


For the past couple of days I have been part of a design that interfaces to a DDR/DDR2 memory. After a recent PLL model integration the whole scenario changed, and I found myself in the middle of a clock-jitter-related timing check failure. This led me to do some Google searching, and I found this interesting three-part article on the same topic:

Defining clock jitter
DDR2/DDR3 functionality
Clock jitter and statistics

SOC verification


At a glance, an ASIC (Application-Specific Integrated Circuit) can consolidate the work of many chips into a single, smaller, faster package, reducing manufacturing and support costs while boosting the speed of the device built with this technology. ASIC technology is so advanced that many functions traditionally implemented in software can be migrated into an ASIC. More specifically, ASICs let designers think in terms of hardware design connectivity and use the power of constantly improving silicon technology to improve timing and build devices targeted at specific functions, such as those in the wireless or consumer electronics domains.

ASIC technology can give a performance improvement of up to threefold compared with the same functions executed in software. Over the past decade, ASIC technology has seen major improvements in density and performance, driven by ever smaller silicon processes. Nowadays, most of the major companies developing ASIC solutions are targeting 65nm down to 45nm technology.

ASIC verification plays a vital role in proving that the product works in all modes of functionality. To address ASIC verification, teams tend to use language constructs like VHDL/Verilog for modeling the RTL behavior, and high-level verification language constructs like Specman's e language or SystemVerilog, which help in modeling BFMs as well as in generating scenarios in either a random or constrained manner.
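
To give a flavor of what "modeling a BFM" means in practice, here is a minimal, hypothetical SystemVerilog bus-functional-model sketch: a task-based driver that turns an abstract write request into pin-level activity on a simple valid/ready style interface. The interface and signal names are illustrative only.

    // Hypothetical bus-functional model for a simple valid/ready write interface.
    interface simple_bus_if (input logic clk);
      logic        valid;
      logic        ready;
      logic [31:0] addr;
      logic [31:0] data;
    endinterface

    module simple_bus_bfm (simple_bus_if bus);

      // Drive one write transaction at pin level; callers work at the
      // abstract (addr, data) level and never touch the wires.
      task automatic write(input logic [31:0] addr, input logic [31:0] data);
        @(posedge bus.clk);
        bus.valid <= 1'b1;
        bus.addr  <= addr;
        bus.data  <= data;
        do @(posedge bus.clk); while (!bus.ready);  // wait for the DUT to accept
        bus.valid <= 1'b0;
      endtask
    endmodule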