
Verilog2C++


Verilog2C++ is a Verilog-to-C++ translation program that generates a C++ class from a Verilog design, using a cycle-accurate representation of each net and register. Verilog2C++ is about 10 times faster than other commercial simulators, but offers only basic functionality.
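
As a rough illustration of what such a translation produces (the class layout and member names below are hypothetical, not Verilog2C++'s actual output), a cycle-accurate C++ model of a simple 8-bit counter could look like this:

    // Hypothetical sketch of a cycle-accurate C++ class such as a
    // Verilog-to-C++ translator might emit for an 8-bit counter with
    // synchronous reset. Names are illustrative only.
    #include <cstdint>
    #include <cstdio>

    class counter {
    public:
        // Nets and registers become plain members, one per Verilog reg/wire.
        uint8_t count = 0;      // current register value (Q)
        uint8_t next_count = 0; // value computed for the next clock edge (D)
        bool    reset = false;  // input net

        // Combinational phase: compute next-state values from current nets.
        void eval() { next_count = reset ? 0 : static_cast<uint8_t>(count + 1); }

        // Sequential phase: advance one clock cycle by committing next-state values.
        void cycle() { eval(); count = next_count; }
    };

    int main() {
        counter c;
        for (int i = 0; i < 5; ++i) c.cycle();
        std::printf("count after 5 cycles = %d\n", static_cast<int>(c.count)); // prints 5
        return 0;
    }

Each call to cycle() corresponds to one clock edge, which is what makes the model cycle-accurate rather than event-driven.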

Splint Annotation-Assisted Lightweight Static Checking


Splint is a tool for statically checking C programs for security vulnerabilities and coding mistakes. With minimal effort, Splint can be used as a better lint. If additional effort is invested in adding annotations to programs, Splint can perform stronger checking than can be done by any standard lint.
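
For a flavor of what annotation-assisted checking looks like, here is a minimal sketch (Splint targets C; the annotations are stylized comments, and the function names below are made up for illustration):

    // Minimal sketch of Splint-style annotations on a small C function.
    // The /*@null@*/ annotation documents that lookup() may return NULL,
    // so Splint warns if a caller dereferences the result without checking it.
    #include <stddef.h>
    #include <string.h>

    static const char *table[] = { "clk", "rst", "data" };

    static /*@null@*/ const char *lookup(const char *name) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; ++i)
            if (strcmp(table[i], name) == 0) return table[i];
        return NULL;  /* permitted: the return value is annotated as possibly null */
    }

    int has_signal(const char *name) {
        const char *s = lookup(name);
        return s != NULL;  /* the null check is what the annotation demands */
    }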

Source Navigator for Verilog


Source Navigator for Verilog is a full-featured tool for editing and navigating through large projects with many Verilog files. It parses Verilog code into a database that can be used to navigate files, trace connectivity, and find modules and signals in the design. It can even parse your files as you edit, so you don't launch those long compile scripts only to end up with a syntax error after 5 minutes of compiling.

Source Navigator was developed by Cygnus Software as a commercial IDE (Integrated Development Environment) for software engineers and was later released under the GPL by Red Hat. Source Navigator supports many languages, including C, C++, Tcl, Java, Fortran, and COBOL. There are many similar products for software engineers, but almost nothing is available for hardware engineers using languages such as Verilog. By adding a Verilog parser to Source Navigator, hardware engineers can now enjoy the same high-quality software.

Comit-TX Verilog Testbench Extractor


Comit-TX extracts a self-checking Verilog testbench for any module inside a design that has a system-level testbench. With the extracted testbench, Comit-TX enables the module, or its replacement, to be verified on a stand-alone basis in an environment identical to its final working environment, without having to simulate the entire system.

Incisive Conformal ASIC


With thousands of tapeouts, Conformal ASIC is the most widely supported equivalence checking tool in the industry. It is also production-proven with more physical design closure tools, advanced synthesis tools, ASIC libraries, and IP cores than any other formal verification tool. Many EDA tool vendors rely on Conformal ASIC as an independent standard within their regression suites to verify the results that their own tools produce.

Key Features:
    * Minimizes re-spin risk by providing complete verification coverage
    * Reduces verification time significantly by verifying multi-million-gate designs orders of magnitude faster than traditional gate-level simulation
    * Independent verification technology decreases the risk of missing critical bugs
    * Faster, more accurate bug detection and correction throughout the entire design flow
    * Provides capacity to handle designs of tens of millions of gates
    * Eliminates functional clock domain crossing problems before simulation
    * Easy to learn and easy to use
    * Structural checks perform bus checks for data conflicts, set-reset exclusivity checks, and multi-port latch contention checks
    * Clock domain checks perform synchronization validation and data stability checks
    * Full cross-highlighting between debug, schematic, and RTL source code windows

System Architect for micro-architecture performance analysis and optimization during functional simulation


System Architect comprises a set of powerful, on-demand, SystemC-compliant functions and analysis tools that enable micro-architecture performance analysis and optimization during functional simulation. The analysis provides a wide range of valuable information showing how to improve performance and power utilization. Seamlessly linked with Summit's Vista IDE, System Architect enables effective and rapid analysis of system performance and architectural tradeoffs using C and SystemC.

The System Architect API function set can be instrumented into any functional code to track data tokens and log states and attributes. Textual reports and visualization tools allow designers to analyze key performance metrics, such as bus contention, memory utilization, and software instruction distribution - all during standard functional simulation. These metrics are critical for analyzing micro-architecture bottlenecks, bandwidth limitations, and power tradeoffs.
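
The article does not show the API itself, so as a purely illustrative sketch of the instrumentation idea (the PerfTracker class and its methods below are hypothetical, not the System Architect API), functional C++ code could log token events against a shared resource and derive a utilization figure afterwards:

    // Hypothetical illustration of performance instrumentation inside
    // functional code: a tiny tracker logs when tokens occupy a shared
    // resource, then reports that resource's utilization.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct TokenEvent { std::string resource; double start_ns; double end_ns; };

    class PerfTracker {             // hypothetical helper, for illustration only
    public:
        void log(const std::string& resource, double start_ns, double end_ns) {
            events_.push_back({resource, start_ns, end_ns});
        }
        // Fraction of the simulated window during which the resource was busy.
        double utilization(const std::string& resource, double window_ns) const {
            double busy = 0.0;
            for (const auto& e : events_)
                if (e.resource == resource) busy += e.end_ns - e.start_ns;
            return busy / window_ns;
        }
    private:
        std::vector<TokenEvent> events_;
    };

    int main() {
        PerfTracker tracker;
        // A functional model would call log() wherever a bus transaction completes.
        tracker.log("bus", 0.0, 40.0);
        tracker.log("bus", 100.0, 180.0);
        std::printf("bus utilization: %.0f%%\n",
                    100.0 * tracker.utilization("bus", 200.0)); // prints 60%
        return 0;
    }
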
Key Features:

   * On-demand SystemC-compliant API functions
   * Advanced textual and graphical reports
   * Analysis of data throughput and communication latencies
   * Dynamic resource utilization analysis (such as memories and FIFOs)
   * Software task distribution and processor utilization reports
   * Hardware/Software tradeoff analysis

nECO for Verdi and Debussy debug systems


nECO is an integrated graphical netlist modification tool for the Verdi and Debussy debug systems. The Novas debug systems accelerate users' understanding of complex designs to improve design, verification, and debug productivity. nECO adds the ability to isolate logic that needs to be changed in a flattened schematic, make the necessary changes, and write the modified design to a new netlist file.

Functionally debug in RTL source using Identify RTL Debugger


The Identify RTL Debugger lets FPGA designers and ASIC prototyping designers functionally debug their hardware directly in their RTL source code. This allows functional verification of RTL designs 10,000 times faster than RTL simulators, and enables the use of in-system stimulus for applications like networking, audio and video, and HW/SW designs. Identify software allows designers to select signals and conditions for debugging directly in their RTL source code, and the results are viewed directly in the RTL source. The Identify tool can also save results in standard VCD format that can be used with most waveform viewers.

Key Features:
* Allows the designer to insert debug logic and view results directly in the RTL source code.
* Allows FPGA to run at normal design speed, but still allows debug access.
* Allows the designer to set triggers on signals and their values (data path), as well as trigger on RTL code branches such as CASE and IF statements.
* Allows the designer to view the captured data from the FPGA in almost any waveform display. Provides standard VCD output for results.
* Provides VHDL models for the waveform data, with all type information and data included, allowing the designer to view results in a waveform display complete with the VHDL type information they want to see.

GateVision for Netlist debugging


GateVision is a standalone graphical netlist analyzer that allows intuitive design navigation, schematic viewing, logic cone extraction, interactive logic cone viewing, and design documentation. GateVision's easy-to-read schematics and schematic fragments provide excellent debug support and accelerate the debug process.

Key Features:

   * On-the-fly schematic creation results in very high speed and capacity
   * Automatically extracts logic cones from user-defined reference points, and shows just the important portion of the circuit
   * Interactive logic cone navigation allows convenient signal path tracing through the complete design hierarchy
   * Search-and-show capability allows easy location of specific objects and shortens debug time
   * Design hierarchy browser provides easy navigation through the design hierarchy and gives compact hierarchy overview
   * Object cross-probing highlights selected objects in all design views (schematic, logic cone and HDL view) and shortens debug time
   * Context-sensitive menus and easy-to-use GUI
   * Verilog and EDIF netlist interfaces allow integration into almost any design flow
   * Userware API allows addition of custom features

Design Compiler 2010 Doubles Productivity of Synthesis and Place-and-Route


Synopsys, Inc. has introduced Design Compiler 2010, the latest RTL synthesis innovation within the Galaxy™ Implementation platform, which delivers a two-fold speed-up in the synthesis and physical implementation flow.

To meet aggressive schedules for increasingly complex designs, engineers need an RTL synthesis solution that enables them to minimize iterations to speed up physical implementation. To address these challenges, topographical technology in Design Compiler 2010 is being extended to produce "physical guidance" for Synopsys' flagship place-and-route solution, IC Compiler, tightening timing and area correlation to 5% while speeding up IC Compiler's placement phase by 1.5 times (1.5X). A new capability allows RTL designers to perform floorplan exploration within the synthesis environment to efficiently achieve an optimal floorplan. In addition, Design Compiler's new scalable infrastructure tuned for multicore processors yields 2X faster synthesis runtimes on four cores. These new Design Compiler 2010 productivity improvements will be highlighted today by users at the Synopsys Users Group (SNUG) meeting in San Jose, California.

"Cutting design time and improving design performance are essential to keep our competitiveness in the marketplace," said Hitoshi Sugihara, Department Manager, DFM & Digital EDA Technology Development at Renesas Technology Corp. "With the new physical guidance extension to topographical technology we are seeing 5 percent correlation between Design Compiler and IC Compiler, up to 2X faster placement in IC Compiler and better design timing. We are adopting the new technology innovations in Design Compiler to minimize iterations while meeting our design goals in shorter timeframes."

To alleviate today's immense time-to-market pressures, Design Compiler 2010 extends topographical technology to further optimize its links with IC Compiler, tightening correlation down to 5%. Additional physical optimization techniques are applied during synthesis, and physical guidance is created and passed to IC Compiler, streamlining the flow and speeding placement in IC Compiler by 1.5X. Design Compiler 2010 also provides RTL designers access to IC Compiler's floorplanning capabilities from within the synthesis environment. With the push of a button, designers can perform what-if floorplan exploration, enabling them to identify and fix floorplan issues early and achieve faster design convergence.

"For the last few years, we have used Design Compiler's Topographical technology to find and fix design issues during synthesis to give us predictable implementation," said Shih-Arn Hwang, Deputy Director R&D Center at Realtek. "We see Design Compiler 2010 synthesis results closely correlating to physical results, while accelerating placement in IC Compiler by 1.5X. This tight correlation between synthesis and layout, along with faster runtimes, is exactly what we need for reducing iterations and significantly shortening design schedules in 65 nanometer and smaller process technologies."

Design Compiler 2010 includes a new, scalable infrastructure designed to deliver significant runtime speed-up on multicore compute servers. It employs an optimized scheme of distributed and multithreaded parallelization techniques, delivering an average of 2X faster runtime on quad-core compute servers while achieving zero deviation of the synthesis results.

"We've focused Design Compiler improvements on helping designers shorten design cycles and improve productivity," said Antun Domic, Senior Vice President and General Manager, Synopsys Implementation Group. "Since the introduction of topographical technology, the impact of logic synthesis on accelerating design closure with physical implementation has grown significantly. Design Compiler 2010 continues this trend, delivering a significant decrease in iterations and reducing run times in physical implementation. We have achieved this while dramatically updating our software infrastructure to best utilize the latest microprocessor architectures."

Probabilistic Timing Analysis


Because of shrinking feature sizes and the decreasing faithfulness of the manufacturing process to design features, process variation has been a constant concern for IC designers as new process nodes are introduced. This article reviews the problem and proposes a "probabilistic" approach for analyzing and managing variability.

Process variation may be new in the digital design framework, but it has long been the principal worry of analog designers, where it is known as mismatch. Regardless of its causes, variation can be global, where every chip from a lot can be affected in the same way, or quasi-global, where wafers or dies may show different electrical characteristics. Such global variation has been relatively easy to model, especially if process modeling people have been able to characterize it with a single "sigma" parameter. Timing analyzers need to analyze a design under both worst-case and best-case timing conditions. Usually, two extreme values of "sigma" sufficed to provide these two conditions. With the new process nodes, however, not only is it necessary to have several variational parameters, but individual device characteristics on a chip can differ independently, a phenomenon known as on-chip variation (OCV).

At the device level, process variation is modeled by a set of "random" parameters which modify the geometric parameters of the device and its model equations. Depending on the nature of the variation, these may affect all devices on the chip, or certain types of devices, or they may be specific to each instance of the device. Because of this OCV, it is important that the correlation between the various variational parameters be accounted for. For example, the same physical effect is likely to change the length and width of a device simultaneously. If this is ignored, we may be looking at very pessimistic variation scenarios.

There are some statistical methods which try to capture correlations and reduce them to a few independent variables. Some fabs use parameters related to device geometries and model parameters. The number of such parameters may range from a few to tens, depending on the device. If one considers global and local variations, the number of variables can quickly get out of hand. Variation is statistically modeled by a distribution function, usually Gaussian. Given the value of a variational parameter and a delta-interval around it, one can calculate the probability that the device/process will be in that interval and will have specific electrical characteristics for that condition. Instead of having a specific value for a performance parameter such as delay, we have a range of values with specific probabilities depending on the variational parameters.
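
For reference, the probability of a Gaussian parameter p (mean mu, standard deviation sigma) landing in a small interval around a value p_0 is just the textbook integral of the density over that interval; nothing here is specific to any one tool:

    P\left(p_0 - \tfrac{\Delta}{2} \le p \le p_0 + \tfrac{\Delta}{2}\right)
      = \int_{p_0 - \Delta/2}^{\,p_0 + \Delta/2} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(p-\mu)^2 / (2\sigma^2)}\, dp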

To analyze the performance of digital designs, two approaches have emerged: statistical static timing analysis (SSTA) and multi-corner static timing analysis. SSTA tries to generate a probability distribution for a signal path from the delay distributions of the individual standard cells in the path. This is usually implemented by using variation-aware libraries, which contain a sampling of cell timing at various discrete values of the variational parameters. Because of the dependence on a discrete library, this approach is practically limited to only a few global systematic variables, with a very coarse sampling of the variation space. Since it is a distribution-based analysis, it depends on the shape of the distributions of the primary variables. It is generally assumed these are Gaussian, but there is no reason to assume this; in fact, most process models may not even be centered. In addition, it becomes difficult to do input-slope-dependent delay calculation. Assumptions and simplifications could quickly make this approach drift from the goal. Since it has the probability distributions, one can report a confidence level for a timing violation. Implicit in this approach is the assumption that any path has a finite probability of being critical.
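
As a simple illustration of the distribution-based bookkeeping, under the common (and not always valid) assumption that the cell delays are independent Gaussians, the path delay distribution follows from the cell distributions as:

    D_{path} = \sum_{i=1}^{N} d_i, \qquad
    \mu_{path} = \sum_{i=1}^{N} \mu_i, \qquad
    \sigma_{path}^2 = \sum_{i=1}^{N} \sigma_i^2

Correlated variations add cross terms to the variance, which is exactly why the correlation handling discussed above matters.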

Multi-corner timing analysis is a kind of Monte Carlo analysis in disguise, and has been gaining popularity as a brute-force method. Someone who knows what he or she is doing decides on a set of extreme corner conditions. These are instances of the process variables, and cell libraries are generated for these conditions. Timing analysis is then performed using these libraries. The number of libraries may be 10 to 20 or more. Naturally, this approach is still limited to a few global variational parameters. It is also difficult to ascertain the reliability of the timing analysis in terms of yield. The only way to increase the confidence level is by building more libraries and repeating the analysis with them. This process increases verification and analysis time, but does not guarantee coverage.

What we propose instead is probabilistic timing analysis. It can address both global and local variations, and we can place a lower confidence limit on the timing analysis results which can be controlled by the designer. This turns the problem upside down. Since timing analysis is interested in the worst-case and best-case timing conditions of a chip, we ask the same question for the individual cells making up a design: we want to find the best-case and worst-case timing condition of each cell. While doing this, we need to limit our search and design space. For example, the interval (-1,1) covers 68.268% of the area under the normal bell curve. If we search this interval for the sigma with maximum inverter delay and later use that value, we can only say that the probability that this value is the maximum delay is 0.68268. For the interval (-2,2), it is 0.95450. If we had searched a wider interval, our confidence level would go up even higher. If there were two process variables, and we had searched (-1,1)x(-1,1), our confidence would drop to 0.68268 x 0.68268, or 0.46605.
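
In terms of the standard normal variable z, the numbers quoted above are:

    P(-1 \le z \le 1) \approx 0.68268, \qquad P(-2 \le z \le 2) \approx 0.95450

    P\big(\text{two independent parameters both in } (-1,1)\big) \approx 0.68268^2 \approx 0.46605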

Although lower confidence limits are set by the initial search intervals, the actual probabilities may be much higher. If the maximum had occurred at the extreme corners, one could expect to see new maximum conditions as the search interval expands. On the other hand, if the maximum occurred at a point away from the corners, it is most likely the absolute maximum. Typically, only one of the parameters, the one most tightly coupled to threshold voltage, for example, takes on the extreme values, and most others take intermediate values. In these cases the result is effectively the same as if we had searched the interval (-inf, +inf). This behavior is consistent with the traditional approach, where a single parameter is used to control the best and worst timing corners.

One of the conceptual problems with our probabilistic approach is that each cell may have a different set of global variables, which contradicts the definition of such variables. A flip-flop may have different global variables than an inverter; even inverters of different strengths may have different sets. They are typically close to each other, however. There may be some pessimism associated with this condition.

It is easy to establish confidence levels on critical path timing. If, for example, the global variables have a confidence level of 0.9 and the local random variables have 0.95, the confidence level for a path of 10 cells is 0.9 x 0.95^10, or about 0.539. Since the local variations of each gate are independent of each other, the intersection rule of probability applies: the probability of having 0.95 coverage for two independent cells is 0.95 x 0.95, for three cells 0.95 x 0.95 x 0.95, and so on. In reality, though, the minimum and maximum conditions for local variations are clustered around the center, away from the interval end points, which brings the confidence level back up to 0.9, the confidence level for the global variations. Alternatively, one can expand the search interval to cover more of the process space. Also keep in mind that the variation range of "real" random variables is much narrower than (-inf, +inf).
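
Written out, the confidence bound for the 10-cell path combines one global term with ten independent local terms:

    P = P_{global} \times P_{local}^{10} = 0.9 \times 0.95^{10} \approx 0.9 \times 0.599 \approx 0.539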

Library Technologies has implemented this probabilistic approach in its YieldOpt product. The user defines the confidence levels he or she would like to see, and identifies the global and local random parameters for each device. Confidence levels are converted to variation intervals assuming a normal distribution; this is the only place we make an assumption about the shape of the distributions, so our approach has only a weak dependence on the probability distribution. In the probabilistic approach, we view the timing characteristics of a cell as functions of random process variables. For each variable, we define a search interval. The variables can be global or local random variables. Maximum and minimum timing conditions for each cell are determined for typical loads and input slopes. Two libraries are generated, one for each condition. Normally, we couple the worst process condition with high temperature and low voltage, and the best process condition with low temperature and high voltage.
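
For reference, the conversion from a two-sided confidence level c to a symmetric search interval (-k, k), in units of sigma, uses only the standard normal distribution (the quantiles below are textbook values, not tied to any particular tool):

    c = \operatorname{erf}\!\left(\frac{k}{\sqrt{2}}\right)
      \;\Longrightarrow\;
    k = \sqrt{2}\,\operatorname{erf}^{-1}(c), \qquad
    c = 0.90 \Rightarrow k \approx 1.645, \quad
    c = 0.95 \Rightarrow k \approx 1.960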

The timing analysis flow is the traditional flow. Depending on the number of random variables, searching for the extreme conditions becomes a very demanding task; we have developed methods and tools which can accomplish this task in a deterministic way. The YieldOpt product determines the appropriate process conditions for each cell and passes them on for characterization and library generation. Determining the worst/best case conditions may add about 0.1X to 2X overhead on top of characterization.

By Mehmet Cirit:
Mehmet Cirit is the founder and president of Library Technologies, Inc. (LTI). LTI develops and markets tools for design re-optimization for speed and low power with on-the-fly cell library creation, cell/memory characterization and modeling, circuit optimization, and process variation analysis, such as YieldOpt.

VN-Cover Emulator Coverage Analysis for HW Emulation


VN-Cover Emulator by TransEDA enables engineers to obtain coverage on their SoCs in a hardware-accelerated environment and reach a level of confidence similar to that achieved using VN-Cover with software simulators. Using VN-Cover Emulator speeds up the overall verification task by providing better visibility on what has been covered, what is left, and when to stop verification.

Key Features:

* Coverage for statement, branch, toggle and FSM state and arc
* Verilog, VHDL and mixed-language support
* Detailed code coverage reports and graphical display
* Automatic FSM extraction and analysis
* Support for Cadence Palladium and Cobalt, EVE Zebu, Mentor Graphics Celaro and Vstation, and Verisity Xtreme

VN-Cover Coverage Analysis


VN-Cover by TransEDA is a code and FSM coverage tool that identifies any unverified parts of a simulated HDL design. VN-Cover includes a comprehensive set of metrics, including line, statement, branch, condition, path, toggle, triggering, signal trace, and FSM state, arc, and path coverage. In addition, the tool offers advanced features such as the Deglitch and Coverability Analysis options, aimed at increasing the accuracy of the measured coverage.

VN-Cover works seamlessly with all leading simulators to measure coverage on VHDL, Verilog, SystemVerilog, and mixed-language designs. It is a vendor-neutral coverage tool that works across simulators, languages, and platforms, and can also be used with hardware-accelerated verification environments.
Key Features:

* Verilog, VHDL and mixed-language support
* Detailed code coverage reports and graphical display
* Automatic FSM extraction and analysis
* Advanced and unique glitch filtering capability
* Post-simulation coverability analysis option and results filtering
* Test-suite optimization facility
* Multi-platform, multi-simulator availability