Happy New Year 2009!

We wish all our readers a very Happy New Year 2009!

9 yr old Indian girl is the youngest Microsoft Certified Professional

A nine-year-old Indian girl named M. Lavinashree has passed the Microsoft Certified Professional Exam, becoming the youngest person ever to pull it off (smashing the record previously held by a 10-year-old Pakistani girl). The youngster has a long history of setting records in her short life -- including reciting all 1,300 couplets of a 2,000-year-old Tamil epic at the age of three -- and she's now cramming for the Microsoft Certified Systems Engineer Exam.

The person with the certificate in hand is none other than former Indian President Dr. APJ Abdul Kalam (missile scientist and former director of DRDO, India's Defence Research and Development Organisation).

[Via: TechNews]

Why Subscribe?

What Benefits Do I Get When Subscribing To The Digital Electronics Blog? Lots of them! :-)

Subscribers have reported that coffee tastes better, crime decreases, and property taxes decline. Okay, true, the last one I’m still investigating…. :)

Seriously, subscribing to The Digital Electronics Blog's plain and simple RSS gives you the edge over others in your niche.

An avid group of chip designers and researchers for about a decade, we’ve mastered the art of seeing straight through to the critical issues that challenge chip designers and entrepreneurs… and we present our solutions and thoughts here so creatively outside the box, you’ll never see them any other place online.

In other words….

Subscribing allows you to receive every new article as soon as it’s released…without any action on your part.

Get personalized answers to your questions.

Get answers to frequently asked and complex interview questions.

No typing!

What joy!

You don’t even have to return to the site to get the latest special tips and writings; it’s all sent directly to YOU. All the fresh content that we freely share every day is delivered to you at no cost, waiting to be read at your convenience.

That saves you time and makes learning easier than determining whether gravity is still operating as it should be (classic test - jump up. If you float to the ceiling, you have bigger problems (ahem, I mean, "opportunities for improvement") to deal with right now).

Currently, we offer two different subscription methods:

  • Website Subscriptions - New posts are sent to your subscription (feed) reader
  • Email Subscriptions - New posts are sent directly to your inbox.

That sounds very nifty! But…ummm….what’s a Subscription (feed) reader?

Glad you asked! A subscription feed reader is a reader (and no, I don’t mean the little old librarians who lower their glasses to the edge of their noses when reading your over-due list - we mean an actual program that runs on your computer) that says to itself:

"Okay, my computer user is sooo busy with everything else, I’ll just simply zoom out to all the sites they want to monitor, gather up all the thousands of headlines, categorize them into neat useful lists and display them whenever I’m asked."

Well, actually, yes. What readers do you recommend?

There are lots of super-duper feed readers out there - we personally use Google Reader. Other folks like:

  • FeedReader
  • Top RSS Readers

The choice is yours - everyone has their own particular favorites.

Sounds good! What if I want to get your updates via email?

Then make it so, it’s as easy as melting ice cream in an oven! Just click on the envelope below and you’ll be taken to a signup window - add your email address, type in the verification letters (you’ll see them there) and click "Complete Subscription Request".


You’ll receive a confirmation request in your inbox from FeedBurner (the place that delivers our articles). Click on the acceptance link and voila, you’ll be done.

Got any questions?

Please feel free to drop us a line and we’ll help out in any way we can.


The Digital Electronics Blog Team

The Economics of Test, Part - IV

Detecting a defective unit is often only part of the job. Another important aspect of test economics that must be considered is the cost of locating and replacing defective parts. Consider again the board with 10 integrated circuits. If it is found to be defective, then it is necessary to locate the part that has failed, a time-consuming and error-prone operation. Replacing suspect components that have been soldered onto a PCB can introduce new defects. Each replacement must be followed by retest to ensure that the component replaced was the actual failing component and that no new defects were introduced during this phase of the operation. This ties up both technician and expensive test equipment. Consequently, a goal of test development must be to create tests capable not only of detecting faulty operation but also of pinpointing, whenever possible, the faulty component. In actual practice, there is often a list of suspected components, and the objective must be to shorten that list as much as possible.

One solution to the problem of locating faults during the manufacturing process is to detect faulty devices as early as possible. This strategy is an acknowledgment of the so-called rule-of-ten. This rule, or guideline, asserts that the cost of locating a defect increases by an order of magnitude at every level of integration. For example, if it costs N dollars to detect a faulty chip at incoming inspection, it may cost 10N dollars to detect a defective component after it has been soldered onto a PCB. If the component is not detected at board test, it may cost 100 times as much if the board with the faulty component is placed into a complete system. If the defective system is shipped to a customer and requires that a field engineer make a trip to a customer site, the cost increases by another power of 10. The obvious implication is that there is tremendous economic incentive to find defects as early as possible. This preoccupation with finding defects early in the manufacturing process also holds for ICs [27]. A wafer will normally contain test circuits in the scribe lanes between adjacent die. Parametric tests are performed on these test circuits. If these tests fail, the wafer is discarded, since these test circuits are far less dense than the circuits on the die themselves. The next step is to perform a probe test on individual die before they are cut from the wafer. This is a gross test, but it detects many of the defective die. Those that fail are discarded. After the die are cut from the wafer and packaged, they are tested again with a more thorough functional test. The objective? Avoid further processing, and subsequent packaging, of die that are clearly defective.
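The escalation described by the rule-of-ten is easy to tabulate. A minimal sketch follows; the stage names and the normalized base cost are illustrative assumptions, not figures from the text:

```python
# Hypothetical sketch of the "rule-of-ten": the cost of finding a defect
# grows by an order of magnitude at each level of integration. Stage names
# and the normalized base cost are illustrative assumptions.

def rule_of_ten(base_cost, stages):
    """Return (stage, cost) pairs, multiplying the cost by 10 per stage."""
    return [(stage, base_cost * 10 ** i) for i, stage in enumerate(stages)]

stages = ["incoming inspection", "board test", "system test", "field repair"]
for stage, cost in rule_of_ten(1.0, stages):
    print(f"{stage:20s} {cost:6.0f} x N")
```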

About the Author:
Name: Joachim Bauer, Test Engineer
Experience: 13+ Yrs
Location: Nice, France

The Economics of Test, Part - III

However, if devices are tested, feature sizes can be reduced and more die will fit on each wafer. Even after the die are tested and defective die are discarded, the number of good die per wafer exceeds the number available at the larger feature sizes. The benefit in terms of increasing numbers of good die obtainable from each wafer far outweighs the cost of testing the die in order to identify those that are defective. Point B on the graph corresponds to a point where process yield is lower than the required quality level. However, testing will identify enough defective units to bring quality back to the required quality level. The horizontal distance from point A to point B on the graph is an indication of the extent to which the process capability can be made more aggressive, while meeting quality goals. The object is to move as far to the right as possible, while remaining competitive. At some point the cost of test will be so great, and the yield of good die so low, that it is not economically feasible to operate to the right of that point on the solid line.

We see therefore that we are caught in a dilemma: Testing adds cost to a product, but failure to test also adds cost. Trade-offs must be carefully examined in order to determine the right amount of testing. The right amount is that amount which minimizes total cost of testing plus cost of servicing or replacing defective components. In other words, we want to reach the point where the cost of additional testing exceeds the benefits derived. Exceptions exist, of course, where public safety or national security interests are involved.

Another useful side effect of testing that should be kept in mind is the information derived from the testing process. This information, if diligently recorded and analyzed, can be used to learn more about failure mechanisms. The kinds of defects and the frequency of occurrence of various defects can be recorded and this information can be used to improve the manufacturing process, focusing attention on those areas where frequency of occurrence of defects is greatest.

This test versus cost dilemma is further complicated by “time to market.” Quality is sometimes seen as one leg of a triangle, of which the other two are “time to market” and “product cost.” These are sometimes posited as competing goals, with the suggestion that any two of them are attainable [25]. The implication is that quality, while highly desirable, must be kept in perspective. Business Week magazine, in a feature article that examined the issue of quality at length, expressed the concern that quality could become an end in itself. The importance of achieving a low defect level in digital components can be appreciated from just a cursory look at a typical PCB. Suppose, for example, that a PCB is populated with 10 components, and each component is defect-free with probability 0.999 (a defect level of 0.1%). The likelihood of getting a defect-free board is (0.999)^10 ≈ 0.99004; that is, one of every 100 PCBs will be defective—and that assumes no defects were introduced during the manufacturing process. If several PCBs of comparable quality go into a more complex system, the probability that the system will function correctly goes down even further.
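The board-yield arithmetic above is a one-liner to reproduce; the values below are the ones from the example:

```python
# Probability that a PCB with n components is defect-free, when each
# component is good with probability p. Values match the example above.

def board_yield(p_good, n_components):
    return p_good ** n_components

print(round(board_yield(0.999, 10), 5))  # 0.99004: roughly 1 board in 100 is defective
```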


The Economics of Test, Part - II

The table depicted shows test cost broken down into four categories, some of which are one-time, nonrecurring costs, whereas others are recurring costs. Test preparation includes costs related to development of the test programs as well as some potential costs incurred during design of the DFT features.

DFT-related costs are directed toward improving access to the basic functionality of the design in order to simplify the creation of test programs. Many of the factors depicted in the Figure imply both recurring and nonrecurring costs. Test execution requires personnel and equipment. The tester is amortized over individual units, representing a recurring cost for each unit tested, while costs such as probe cards may represent a one-time, nonrecurring cost. The test-related silicon is a recurring cost, while the design effort required to incorporate testability enhancements, listed under test preparation as DFT design, is a nonrecurring cost.

The category listed as imperfect test quality includes a subcategory labeled as tester escapes, which are bad chips that tested good. It would be desirable for tester escapes to fall in the category of nonrecurring costs but, regrettably, tester escapes are a fact of life and occur with unwelcome regularity.

Lost performance refers to losses caused by increases in die size necessary to accommodate DFT features. The increase in die size may result in fewer die on a wafer; hence a greater number of wafers must be processed to achieve a given throughput. Lost yield is the cost of discarding good die that were judged to be bad by the tester.

The column in the Figure labeled “Volume” is a critical factor. For a consumer product with large production volumes, more time can be justified in developing a comprehensive test plan because development costs will be amortized over many units. Not only can a more thorough test be justified, but also a more efficient test—that is, one that reduces the amount of time spent in testing each individual unit. In low-volume products, testing becomes a disproportionately large part of total product cost and it may be impossible to justify the cost of refining a test to make it more efficient. However, in critical applications it will still be necessary to prepare test programs that are thorough in their ability to detect defects.

A question frequently raised is, “How much testing is enough?” That may seem to be a rather frivolous question since we would like to test our product so thoroughly that a customer never receives a defective product. When a product is under warranty or is covered by a service contract, it represents an expense to the manufacturer when it fails because it must be repaired or replaced. In addition, there is an immeasurable cost in the loss of customer goodwill, an intangible but very real cost, not reflected in the Figure, that results from shipping defective products.

Unfortunately, we are faced with the inescapable fact that testing adds cost to a product. What is sometimes overlooked, however, is the fact that test cost is recovered by virtue of enhanced throughput. Consider the graph in the Figure. The solid line reflects quality level, in terms of defects per million (DPM) for a given process, assuming no test is performed. It is an inverse relationship; the higher the required quality, the fewer the number of die obtainable from the process. This follows from the simple fact that, for a given process, if higher quality (fewer DPM) is required, then feature sizes must be increased. The problem with this manufacturing model is that, if the required quality level is too high, feature sizes may be so large that it is impossible to produce die competitively. If the process is made more aggressive, an increasing number of die will be defective, and quality levels will fall. Point A on the graph corresponds to the point where no testing is performed. Any attempt to shrink the process to get more units per wafer will cause quality to fall below the required quality level.


The Economics of Test, Part - I

What are the factors that influence the cost of test? Quality and test costs are related, but they are not the inverse of one another. As we shall see, an investment in a higher-quality test often pays dividends during the test cycle. Test-related costs for ICs and PCBs include both time and resources. As pointed out in previous sections, for some products the failure to reach a market window early in the life cycle of the product can cause significant loss of revenue and may in fact be fatal to the future of the product.


TCL for EDA - A repository of free TCL/TK tools and scripts for EDA...

The TCL for EDA project is an open-source repository of TCL/TK tools, applications, scripts and methodological articles. The TCL for EDA project targets different stages of chip design: from Verification to Project Management and up to Synthesis, Static Timing Analysis and Design-for-Test.

Some of their offerings:

  • Netedit - Verilog netlist editor/viewer
  • Netman - Verilog netlist manager/viewer
  • Pman - Project manager (allows navigation, viewing and editing of verilog files)
  • TCL-PLI - TCL pli library
  • Verilog Structural Integration Methodology and Scripts - Sounds interesting...
  • Lots of DC, Timing, DFT and verification scripts!

A very interesting site indeed!

Transport delay / Inertial Delay

A number of types of delays exist for describing circuit behavior. The two major hardware description languages, Verilog and VHDL, support inertial delay and transport delay.

Inertial delay is a measure of the elapsed time during which a signal must persist at an input of a device in order for a change to appear at an output. A pulse of duration less than the inertial delay does not contain enough energy to cause the device to switch. This is illustrated in the Figure attached, where the original waveform contains a short pulse that does not show up at the output.

Transport delay is meaningful with respect to devices that are modeled as ideal conductors; that is, they may be modeled as having no resistance. In that case the waveform at the output is delayed but otherwise matches the waveform at the input. Transport delay can also be useful when modeling behavioral elements where the delay from input to output is of interest, but there is no visibility into the behavior of delays internal to the device.
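The distinction can be sketched with a toy model that applies each delay to a list of (time, value) events. The event representation, and the simplification of comparing only adjacent input events, are assumptions made for illustration:

```python
# Toy sketch: transport delay shifts every event by d; inertial delay also
# drops events whose value does not persist for at least d. (A real
# simulator compares against scheduled output events; this is simplified.)

def transport_delay(events, d):
    return [(t + d, v) for t, v in events]

def inertial_delay(events, d):
    out = []
    for i, (t, v) in enumerate(events):
        next_t = events[i + 1][0] if i + 1 < len(events) else None
        if next_t is None or next_t - t >= d:  # value persists long enough
            out.append((t + d, v))
    return out

pulse = [(0, 0), (10, 1), (12, 0)]   # contains a 2-unit-wide pulse at t=10
print(transport_delay(pulse, 5))     # [(5, 0), (15, 1), (17, 0)] - pulse survives
print(inertial_delay(pulse, 5))      # [(5, 0), (17, 0)] - short pulse filtered out
```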

Cycle-based simulation

New design starts continue to grow in gate count, and the amount of CPU time required to simulate these designs tends to grow disproportionately to gate count, implying a growing need for simulation speed. A simple example helps to shed light on this situation.

Suppose a circuit has n functions and that, in the worst case, each function interacts with all of the others. Ignoring for the moment the complexity of the interactions, there are n × (n − 1)/2 potential interactions between the n functions.

Thus, in the worst case, the number of interactions grows proportional to the square of the number of functions. Handshaking protocols between functions also grow more complex. Internal status and mode control registers act as extensions to device I/O pins.
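The quadratic growth of the worst-case interaction count is easy to tabulate:

```python
# Worst-case pairwise interactions among n functions: n * (n - 1) / 2.

def interactions(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:5d} functions -> {interactions(n):7d} potential interactions")
```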

To verify the growing number of interactions requires more stimuli. In addition, the growing number of gates and functions in the circuit model generates more events that must be evaluated during each clock cycle. The combination of more functionality and more stimuli requires an exponentially growing amount of CPU time to complete the evaluations. A consequence of this is a growing difficulty in creating and simulating enough stimuli to verify design correctness. As a result, design errors are more likely to escape detection until after tape-out, at which time the discovery of errors requires another expensive iteration through the design cycle.

Cycle simulation is one of the answers to the growing need for greater verification power. Cycle simulation evaluates logic elements and functions across clock cycle boundaries without regard to intermediate values. Its purpose is to evaluate input stimuli as rapidly as possible. Designs are required to be synchronous so that every possible technique can be leveraged during simulation. Rank-ordering is used so that elements only need to be evaluated once during each clock period. Circuit delays are ignored, and the number of logic values is usually limited to three or four {0, 1, X, Z}. Internal representation of the circuit may be in terms of binary decision diagrams (BDDs), so intermediate values are totally obscured. To ensure that a circuit operates at its intended speed when fabricated, circuit delays are measured by timing analysis programs that are written specifically for that purpose and run independently of simulation.

The designer plays a role in this simulation mode by modeling circuits at the highest possible level of abstraction without losing essential details. A number of methods have been developed to speed up simulation while reducing the amount of workstation memory required to perform simulations.
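The rank-ordering mentioned above can be sketched for a tiny combinational netlist: gates are topologically sorted so each is evaluated exactly once per cycle, after all of its inputs. The netlist encoding and gate set below are invented for illustration:

```python
# Sketch of rank-ordered (levelized) evaluation as used in cycle simulation.
# The {output_net: (op, input_net, ...)} netlist encoding is illustrative.

from graphlib import TopologicalSorter

OPS = {"AND": lambda a, b: a & b,
       "OR":  lambda a, b: a | b,
       "NOT": lambda a: 1 - a}

def cycle_eval(netlist, inputs):
    """Evaluate one clock cycle: every gate exactly once, in rank order."""
    deps = {out: set(ins) for out, (op, *ins) in netlist.items()}
    values = dict(inputs)
    for net in TopologicalSorter(deps).static_order():  # rank order
        if net in netlist:
            op, *ins = netlist[net]
            values[net] = OPS[op](*(values[i] for i in ins))
    return values

netlist = {"n1": ("AND", "a", "b"),
           "n2": ("NOT", "n1"),
           "y":  ("OR", "n2", "c")}
print(cycle_eval(netlist, {"a": 1, "b": 1, "c": 0})["y"])  # 0
```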

Contribute to this Article: onenanometer[at]gmail.com
Please leave a comment!

Event-driven simulation/simulator

A latch or flip-flop does not always respond to activity on its inputs. If an enable or clock is inactive, changes at the data inputs have no effect on the circuit. As it turns out, the amount of activity within a circuit during any given timestep is often minimal and may terminate abruptly.

Since the amount of activity in a time step is minimal, why simulate the entire circuit? Why not simulate only the elements that experience signal changes at their inputs? This strategy, employed at a global level, rather than locally, as was the case with stimulus bypass, is supported in Verilog by means of the sensitivity list. The following Verilog module describes a three-bit state machine. The line beginning with “always” is a sensitivity list. The if-else block of code is evaluated only in response to a 1 → 0 transition (negedge) of the reset input, or a 0 → 1 transition (posedge) of the clk input. Results of the evaluation depend on the current value of tag, but activity on tag, by itself, is ignored.

module reg3bit(clk, reset, tag, reg3);
input clk, reset, tag;
output [2:0] reg3;
reg [2:0] reg3;

always @(posedge clk or negedge reset)
  if (reset == 0)
    reg3 <= 3'b110;
  else // rising edge on clock
    case (reg3)
      3'b110: reg3 <= tag ? 3'b011 : 3'b001;
      3'b011: reg3 <= tag ? 3'b110 : 3'b001;
      3'b001: reg3 <= tag ? 3'b001 : 3'b011;
      default: reg3 <= 3'b001;
    endcase
endmodule

When a signal change occurs on a primary input or the output of a circuit element, an event is said to have occurred on the net driven by that primary input or element. When an event occurs on a net, all elements driven by that net are evaluated. If an event on a device input does not cause an event to appear on the device output, then simulation is terminated along that signal path.
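A minimal event-driven kernel with exactly these properties might look like the sketch below; the netlist encoding and the unit gate delay are assumptions made for illustration:

```python
# Minimal event-driven simulation sketch: only gates whose input nets
# change are re-evaluated, and propagation stops along paths where an
# output does not change. Netlist encoding and unit delay are illustrative.

import heapq

def simulate(netlist, stimuli):
    """netlist: {out_net: (fn, in_net, ...)}; stimuli: [(time, net, value)]."""
    fanout = {}
    for out, (fn, *ins) in netlist.items():
        for i in ins:
            fanout.setdefault(i, []).append(out)
    values, queue = {}, list(stimuli)
    heapq.heapify(queue)
    while queue:
        t, net, v = heapq.heappop(queue)
        if values.get(net) == v:
            continue                      # no event: this path terminates
        values[net] = v                   # an event occurred on this net
        for out in fanout.get(net, []):   # evaluate only affected gates
            fn, *ins = netlist[out]
            new = fn(*(values.get(i, 0) for i in ins))
            heapq.heappush(queue, (t + 1, out, new))  # unit gate delay
    return values

netlist = {"y": (lambda a, b: a & b, "a", "b")}
print(simulate(netlist, [(0, "a", 1), (0, "b", 1)]))  # {'a': 1, 'b': 1, 'y': 1}
```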

Linting tools

Some of the tools used for design verification of ICs have their roots in software testing. Tools for software testing are sometimes characterized as static analysis and dynamic analysis tools. Static analysis tools evaluate software before it has run. 

An example of such a tool is Lint. It is not uncommon, when porting a software system to another host environment and recompiling all of the source code for the program, to experience a situation where source code that compiled without complaint on the original host now either refuses to compile or produces a long list of ominous-sounding warnings during compilation. The fact is, no two compilers will check for exactly the same syntax and/or semantic violations. One compiler may attempt to interpret the programmer’s intention, while a second compiler may flag the error and refuse to generate an object module, and a third compiler may simply ignore the error.

Lint is a tool that examines C code and identifies such things as unused variables, variables that are used before being initialized, and argument mismatches. Commercial versions of Lint exist both for programming languages and for hardware design languages. A lint program attempts to discover all fatal and nonfatal errors in a program before it is executed. It then issues a list of warnings about code that could cause problems. Sometimes the programmer or logic designer is aware of the coding practice and does not consider it to be a problem. In such cases, a lint program will usually permit the user to mask out those messages so that more meaningful messages don’t become lost in a sea of detail.

In contrast to static analysis tools, dynamic analysis tools operate while the code is running. In software, they detect such things as memory leaks, bounds violations, null pointers, and pointers out of range. They can also identify source code that has been exercised and, more importantly, code that has not been exercised. Additionally, they can point out lines of code that have been exercised over only a partial range of their variables.

White-box testing or black-box testing

When performing verification, the target device can be viewed as a white box or a black box. During white-box testing, detailed knowledge is available describing the internal workings of the device to be tested. This knowledge can be used to direct the verification effort. For example, an engineer verifying a digital circuit may have schematics, block diagrams, RTL code that may or may not be suitably annotated, and textual descriptions including timing diagrams and state transition graphs. All or a subset of these can be used to advantage when developing test programs. The logic designer responsible for the correctness of the design, armed with knowledge of the internal workings of the design, writes stimuli based on this knowledge; hence he or she is performing white-box testing.

During black-box testing, it is assumed that there is no visibility into the internal workings of the device being tested. A functional description exists which outlines, in more or less detail, how the device must respond to various externally applied stimuli. This description, or specification, may or may not describe behavior of the device in the presence of all possible combinations of inputs. For example, a microprocessor may have op-code combinations that are left unused and unspecified. From one release to the next, these unused op-codes may respond very differently if invoked. PCB designers, concerned with obtaining ICs that work correctly with other ICs plugged into the same PCB or backplane, are most likely to perform black-box testing, unless they are able to persuade their vendor to provide them with more detailed information.

Formal Verification or Equivalence Checking

Design verification must show that the design, expressed at the RTL or structural level, implements the operations described in the data sheet or whatever other specification exists.

Verification at the RTL level can be accomplished by means of simulation, but there is a growing tendency to supplement simulation with formal methods such as model checking. At the structural level the use of equivalence checking is becoming standard procedure. In this operation the RTL model is compared to a structural model, which may have been synthesized by software or created manually. Equivalence checking can determine if the two levels of abstraction are equivalent. If they differ, equivalence checking can identify where they differ and can also identify what logic values cause a difference in response.
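For a tiny block, the idea behind equivalence checking can be demonstrated by exhaustive comparison of two views of the same function. Production equivalence checkers use BDDs or SAT solvers rather than enumeration, and the two example functions below are invented for illustration:

```python
# Brute-force equivalence check of two views of one combinational function,
# returning a counterexample input vector when they differ. Real tools use
# BDDs/SAT; exhaustive enumeration only works for tiny examples like this.

from itertools import product

def equivalent(f, g, n_inputs):
    """Return (True, None) if f == g on every input, else (False, witness)."""
    for bits in product((0, 1), repeat=n_inputs):
        if bool(f(*bits)) != bool(g(*bits)):
            return False, bits            # counterexample input vector
    return True, None

rtl        = lambda a, b, c: (a and b) or c                      # "RTL" view
structural = lambda a, b, c: not ((not (a and b)) and (not c))   # gate-level view

print(equivalent(rtl, structural, 3))     # (True, None)
```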

Updates: 18th Dec 2008!
Another series of steps in equivalence checking goes beyond what was described above...

RTL to Pre-Synthesis Netlist!
Pre-Synthesis Netlist Vs Post Synthesis Netlist!

Verification or Validation

The purpose of design verification is to demonstrate that a design was implemented correctly. By way of contrast, the purpose of design validation is to show that the design satisfies a given set of requirements. A succinct and informal way to differentiate between them is by noting that Validation asks “Am I building the right product?” Verification asks “Am I building the product right?” Seen from this perspective, validation implies an intimate knowledge of the problem that the IC is designed to solve. An IC created to solve a problem is described by a data sheet composed of text and waveforms. The text verbally describes IC behavior in response to stimuli applied to its I/O pins. Sometimes that behavior will be very complex, spanning many vectors, as when stimuli are first applied in order to configure one or more internal control registers. Then, behavior depends on both the contents of the control registers and the applied stimuli. The waveforms provide a detailed visual description of stimulus and response, together with timing, that shows the relative order in which signals are applied and outputs respond.

Advertise on this Blog and see yourself grow!
For more information contact us at onenanometer[at]gmail.com


Avago Technologies (former HP group) Interview Questions

  • How do you minimize clock skew/ balance clock tree?
  • Given 11 minterms and asked to derive the logic function.
  • Given C1 = 10 pF and C2 = 1 pF connected in series with a switch in between; at t = 0 the switch is open, with one end at 5 V and the other at zero volts. Compute the voltage across C2 when the switch is closed.
  • Explain the modes of operation of a CMOS (Complementary Metal Oxide Semiconductor) inverter. Show the IO (Input-Output) characteristics curve.
  • Implement a ring oscillator.
  • How to slow down ring oscillator?


Hynix Semiconductor Interview Questions

  • How do you optimize power at various stages in the physical design flow?
  • What timing optimization strategies do you employ in pre-layout/post-layout stages?
  • What are process technology challenges in physical design?
  • Design divide by 2, divide by 3, and divide by 1.5 counters. Draw timing diagrams.
  • What are multi-cycle paths, false paths? How to resolve multi-cycle and false paths?
  • Given a flop-to-flop path with combinational delay in between and the output of the second flop fed back to the combinational logic: which is the fastest path to have a hold violation, and how will you resolve it?
  • What are RTL (Register Transfer Level) coding styles to adapt to yield optimal backend design?
  • Draw timing diagrams to represent the propagation delay, set up, hold, recovery, removal, minimum pulse width.


Hughes Networks Interview Questions

  • What is setup/hold? What are setup and hold time impacts on timing? How will you fix setup and hold violations?
  • Explain the function of a Muxed FF (Multiplexed Flip Flop) / scan FF (Scan Flip Flop).
  • What is tested in DFT (Design for Testability)?
  • In equivalence checking, how do you handle scanen signal?
  • In terms of CMOS (Complementary Metal Oxide Semiconductor), explain the physical parameters that affect propagation delay.
  • What are power dissipation components? How do you reduce them?
Short Circuit Power
Leakage Power Trends
Dynamic Power
Low Power Design Techniques

  • How is delay affected by PVT (Process-Voltage-Temperature)?
Answer: Process-Voltage-Temperature (PVT) Variations and Static Timing Analysis (STA)
  • Why are power signals routed in the top metal layers?


Qualcomm Interview Questions

  • In building the timing constraints, do you need to constrain all IO (Input-Output) ports?
  • Can a single port be multi-clocked? How do you set delays for such ports?
  • How is scan DEF (Design Exchange Format) generated?
  • What is purpose of lockup latch in scan chain?
  • Explain short circuit current.
Answer: Short Circuit Power
  • What are pros/cons of using low Vt, high Vt cells?
Multi Threshold Voltage Technique
Issues With Multi Height Cell Placement in Multi Vt Flow
  • How do you set inter clock uncertainty?

set_clock_uncertainty -from clock1 -to clock2
  • In DC (Design Compiler), how do you constrain clocks, IO (Input-Output) ports, maxcap, max tran?
  • What are differences in clock constraints from pre CTS (Clock Tree Synthesis) to post CTS (Clock Tree Synthesis)?

Difference in clock uncertainty values; Clocks are propagated in post CTS.
In post CTS clock latency constraint is modified to model clock jitter.
  • How is clock gating done?
Answer: Clock Gating
  • What constraints you add in CTS (Clock Tree Synthesis) for clock gates?

Specify the clock-gating cells as through pins.
  • What is trade off between dynamic power (current) and leakage power (current)?
Leakage Power Trends
Dynamic Power

  • How do you reduce standby (leakage) power?
Answer: Low Power Design Techniques
  • Explain the top-level pin placement flow. What are the parameters to decide?
  • Given block level netlists, timing constraints, libraries, macro LEFs (Layout Exchange Format/Library Exchange Format), how will you start floor planning?
  • With a net length of 1000 um, how will you compute RC values using equations/tech file info?
  • What do noise reports represent?
  • What do glitch reports contain?
  • What are CTS (Clock Tree Synthesis) steps in IC compiler?
  • What does the clock constraints file contain?
  • How to analyze clock tree reports?
  • What do IR drop Voltagestorm reports represent?
  • Where/when do you use DCAP (Decoupling Capacitor) cells?
  • What are various power reduction techniques?
Answer: Low Power Design Techniques
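Several of the questions above ask for concrete constraint commands. As a rough sketch in Synopsys-style SDC syntax, the clock/IO/max-cap/max-transition constraints might look like the following; all names (CLK, clk, CLK1, CLK2) and numeric values are illustrative assumptions, not values from any real design:

```tcl
# Illustrative sketch only: clock/port names and all numeric values are assumptions.
create_clock -name CLK -period 10 [get_ports clk]      ;# define the primary clock
set_clock_uncertainty 0.3 [get_clocks CLK]             ;# margin for skew + jitter
set_input_delay  2.0 -clock CLK [remove_from_collection [all_inputs] [get_ports clk]]
set_output_delay 2.0 -clock CLK [all_outputs]
set_max_capacitance 0.5 [current_design]               ;# max load on any net
set_max_transition  0.8 [current_design]               ;# max slew on any pin
# Inter-clock uncertainty between two (assumed) clock domains:
set_clock_uncertainty -from [get_clocks CLK1] -to [get_clocks CLK2] 0.5
```

Note that the clock port itself is excluded from the input-delay constraint, since constraining it against its own clock is meaningless.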


Texas Instruments (TI) Interview Questions

  • How are timing constraints developed?
  • Explain timing closure flow/methodology/issues/fixes.
  • Explain SDF (Standard Delay Format) back annotation/ SPEF (Standard Parasitic Exchange Format) timing correlation flow.
  • Given a timing path in multi-mode multi-corner, how is STA (Static Timing Analysis) performed in order to meet timing in both modes and corners, how are PVT (Process-Voltage-Temperature)/derate factors decided and set in the Primetime flow?
  • With respect to clock gate, what are various issues you faced at various stages in the physical design flow?
  • What are synthesis strategies to optimize timing?
  • Explain the ECO (Engineering Change Order) implementation flow. Given a post-routed database and functional fixes, how will you implement the ECO (Engineering Change Order), and what physical and functional checks do you need to perform?
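For the OCV/derate question above, a minimal PrimeTime-style setup might look like the sketch below; the derate numbers are illustrative assumptions, not recommendations:

```tcl
# Enable on-chip-variation analysis and apply derate factors (values are assumptions).
set_operating_conditions -analysis_type on_chip_variation
set_timing_derate -late  1.05   ;# scale late (max) delays up by 5%
set_timing_derate -early 0.95   ;# scale early (min) delays down by 5%
report_timing -delay_type max   ;# setup analysis report
report_timing -delay_type min   ;# hold analysis report
```

The late/early pair is what creates the pessimistic spread between launch and capture paths that OCV analysis relies on.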

Advertise on this Blog and see yourself grow!
For more information contact us at onenanometer[at]gmail.com

ST Microelectronics - Interview Questions

  • What were the challenges you faced in physical design, PAR (place and route), FV (Formal Verification)?
  • What was the average design cycle time for your designs?
  • What are your areas of interest in physical design?
  • Explain ECO (Engineering Change Order) methodology used in your projects.
  • Explain CTS (Clock Tree Synthesis) flow used in your projects.
  • What kind of routing issues did you face in your projects? Describe the most recent one.
  • How is STA (Static Timing Analysis) done under OCV (On Chip Variation) conditions?
  • How do you set OCV (On Chip Variation) in the tool you used?
  • How is timing correlation done before and after place and route?
  • If there are too many pins of the logic cells in one place within the core, what kind of issues would you face and how will you resolve them?
  • Define hash/ @array in perl.
  • Using TCL (Tool Command Language, Tickle) how do you set variables?
  • What is ICC (IC Compiler) command for setting derate factor/ command to perform physical synthesis?
  • What are nanoroute options for search and repair?
  • What were your design skew/insertion delay targets?
  • How is IR drop analysis done? What are various statistics available in reports?
  • Explain pin density/cell density issues and hotspots.
  • How will you relate routing grid with manufacturing grid and judge if the routing grid is set correctly?
  • What is the command for setting multi cycle path?
  • If a hold violation exists in the design, is it OK to sign off the design? If not, why?
Raj - Sequence Design
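Two of the questions above (setting TCL variables, the multicycle path command) have short command-level answers. Here is a hedged sketch; the pin names (regA/CLK, regB/D) and cycle counts are made-up placeholders:

```tcl
# Setting a variable in TCL:
set derate_value 1.05
puts "Derate = $derate_value"

# Multicycle path constraint (pin names and cycle counts are assumptions):
set_multicycle_path 2 -setup -from [get_pins regA/CLK] -to [get_pins regB/D]
set_multicycle_path 1 -hold  -from [get_pins regA/CLK] -to [get_pins regB/D]
```

The matching -hold adjustment is commonly applied so that, after relaxing the setup check by a cycle, the hold check stays at the original capture edge.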


Digital design interview questions

Search in this blog for all the available articles and resources!


The Great Divide – Technical Leadership and Project Management

[Via Coaching Excellence in IC Design Teams]
This is bitterly true.
Intel is one example of why it is #1.

47 CEOs for Cadence

[Via Deepchip]

I feel that even baiting a big-name CEO with an EDA background from the list will not help. This is more about the basics. They need a thinker who understands Cadence's past values and what could be done to bolster its true strengths. This is a point of debate, so let's debate!

Bottom line *grin* [Make me the CEO]

For the rest of this story see:

Unit delay simulation - an intermediate step in Gate level simulation!

This is an intermediate step during Gate level simulation!

Unit delay simulation operates on the assumption that all the elements in a circuit possess identical delays. Its advantage is that it can be set up early in the flow, when the post-layout netlist is ready but SDFs are not yet available, which may be because the design is not timing clean and is still in the process of being timing closed.

Primarily, unit delay simulations help iron out possible simulation/synthesis mismatches due to delta delay issues, and so they are widely used in the industry. But this kind of simulation should not be used to generate test stimuli; doing so gives a false sense of security, because the timing of the actual circuit will not resemble the results shown by the unit delay simulation. Another major disadvantage of unit delay simulation is that, since the elements have non-zero delay, the design cannot be rank-ordered for simulation, and hence elements may be evaluated unnecessarily several times in the same period.

Unit delay simulation is very useful for FPGAs and CPLDs, since these are fixed array circuits of rows and columns of identical elements, which may be NAND or NOR gates or a collection of resistors and transistors. Switching elements connected in this way usually have the same switching speed, in which case unit delay simulations become very meaningful. If the switching speeds are integral multiples of one another, unit delay simulations can still be effectively implemented.

Different types of simulations!

Functional simulation: Simulation of a design description. This is also called spec simulation or concept simulation. This is usually done at the highest level and in the beginning of the project.

Behavioral simulation: Simulation of a digital circuit described in HDLs like Verilog or VHDL. We simulate the behavior described in these language-based designs. This is the second step.

Static timing analysis: This answers the question "What is the longest delay in my circuit?" Timing analysis finds the critical path and its delay, but it does not find the input vectors that activate the critical path. Done after synthesis, this is the third step.

Gate-level simulation: In this type of simulation, the delays from the post-layout stage are back-annotated to the design using SDF and simulated. This gives performance close to that of the real chip. This is the final step.

Transistor-level or circuit-level simulation: Mainly for mixed-mode (mixed-signal) circuits, where we must verify the complete design at the transistor level. Whether this is an intermediate step depends on how the design and the flow are set up.
Simulation conclusion:
  1. Behavioral simulation can only tell you if your design will not work.
  2. Pre-layout simulation estimates your design's performance.
  3. Finding a critical path is difficult because you need to construct input vectors that exercise the right paths.
  4. Behavioral simulation and static timing analysis are the most widely used forms of simulation/analysis.
  5. Formal verification compares two different representations; it cannot prove your design will work.
  6. Switch-level simulation can check the behavior of circuits whose nodes may not always be driven, or that use logic that is not complementary.
  7. Transistor-level simulation is used when you need to know the analog, rather than digital, behavior of circuit voltages.
  8. There is a trade-off between accuracy and run time.

Get your Blog listed in our Google Custom Search Engine

To get your Blog listed in our Google Custom Search Engine please send us an e-mail with your Blog details to onenanometer [at] gmail [dot] com

Terms and Conditions apply!

Performance-Contingent Self Esteem

Don't be so committed to your profession that you reach a point where it might boomerang, leaving you depressed, anxious and fretful, warn experts. When you are a fresher or new to the job at hand and you place too much emotional weight on its successful accomplishment, you naturally tend to evaluate your self-worth solely on the outcomes of these assignments.

This is what psychologists term performance-contingent self-esteem (PCSE), and it's an unhealthy factor, mainly in your early career, that can adversely affect your future.

PCSE can trigger depression and anxiety during even the most minor or common incidents, such as a miscommunication, short spats over non-critical matters, or a critique of one's personality or abilities.