Happy New Year 2009!

We wish all our readers a very Happy New Year 2009!

9 yr old Indian girl is the youngest Microsoft Certified Professional

A nine-year-old Indian girl named M. Lavinashree has passed the Microsoft Certified Professional Exam, becoming the youngest person ever to pull it off (smashing the record previously held by a 10-year-old Pakistani girl). The youngster has a long history of setting records in her short life -- including reciting all 1,300 couplets of a 2,000-year-old Tamil epic at the age of three -- and now she's cramming for the Microsoft Certified Systems Engineer Exam.

The person with the certificate in hand is none other than former Indian President Dr. APJ Abdul Kalam, missile scientist and former director of DRDO (Defense Research and Development Organization, India).

[Via: TechNews]

Why Subscribe?

What Benefits Do I Get When Subscribing To The Digital Electronics Blog? Lots of them! :-)

Subscribers have reported that coffee tastes awesome, crime decreases, and property taxes decline. Okay, true, the last one I’m still investigating…. :)

Seriously, subscribing to The Digital Electronics Blog's plain and simple RSS gives you the edge over others in your niche.

As a group of chip designers and researchers with about a decade of experience, we’ve mastered the art of seeing straight through to the critical issues that challenge chip designers and entrepreneurs… and we present our solutions and thoughts so creatively outside the box, you’ll never see them anywhere else online.

In other words….

Subscribing allows you to receive every new article as soon as it’s released…without any action on your part.

Get personalized answers to your questions.

Get answers to frequently asked and complex interview questions.

No typing!

What joy!

You don’t even have to return to the site to get the latest special tips and writings; it’s all sent directly to YOU. All the fresh content that we freely share every day is delivered to you at no cost, waiting to be read at your convenience.

That saves you time and makes learning easier than determining whether gravity is still operating as it should be (classic test - jump up. If you float to the ceiling, you have bigger problems (ahem, I mean, "opportunities for improvement") to deal with right now).

Currently, we offer two different subscription methods:

  • Website Subscriptions - New posts are sent to your subscription (feed) reader
  • Email Subscriptions - New posts are sent directly to your inbox.

That sounds very nifty! But…ummm….what’s a Subscription (feed) reader?

Glad you asked! A subscription feed reader is a program (and no, I don’t mean the little old librarians who lower their glasses to the edge of their noses when reading your overdue list - we mean an actual program that runs on your computer) that says to itself:

"Okay, my computer user is sooo busy with everything else, I’ll just simply zoom out to all the sites they want to monitor, gather up all the thousands of headlines, categorize them into neat useful lists and display them whenever I’m asked."

Well, actually, yes. What readers do you recommend?

There are lots of super-duper feed readers out there - we personally use Google Reader. Other folks like:

  • FeedReader
  • Top RSS Readers

The choice is yours - everyone has their own particular favorites.

Sounds good! What if I want to get your updates via email?

Then make it so - it’s as easy as melting ice cream in an oven! Just click on the envelope below and you’ll be taken to a signup window - add your email address, type in the verification letters (you’ll see them there) and click "Complete Subscription Request".


You’ll receive a confirmation request in your inbox from FeedBurner (the place that delivers our articles). Click on the acceptance link and voila, you’ll be done.

Got any questions?

Please feel free to drop us a line and we’ll help out in any way we can.


The Digital Electronics Blog Team

The Economics of Test, Part - IV

Detecting a defective unit is often only part of the job. Another important aspect of test economics that must be considered is the cost of locating and replacing defective parts. Consider again the board with 10 integrated circuits. If it is found to be defective, then it is necessary to locate the part that has failed, a time-consuming and error-prone operation. Replacing suspect components that have been soldered onto a PCB can introduce new defects. Each replaced component must be followed by retest to ensure that the component replaced was the actual failing component and that no new defects were introduced during this phase of the operation. This ties up both technician and expensive test equipment. Consequently, a goal of test development must be to create tests capable not only of detecting faulty operation but also of pinpointing, whenever possible, the faulty component. In actual practice, there is often a list of suspected components and the objective must be to shorten, as much as possible, that list.

One solution to the problem of locating faults during the manufacturing process is to detect faulty devices as early as possible. This strategy is an acknowledgment of the so-called rule-of-ten. This rule, or guideline, asserts that the cost of locating a defect increases by an order of magnitude at every level of integration. For example, if it costs N dollars to detect a faulty chip at incoming inspection, it may cost 10N dollars to detect a defective component after it has been soldered onto a PCB. If the component is not detected at board test, it may cost 100 times as much if the board with the faulty component is placed into a complete system. If the defective system is shipped to a customer and requires that a field engineer make a trip to a customer site, the cost increases by another power of 10. The obvious implication is that there is tremendous economic incentive to find defects as early as possible. This preoccupation with finding defects early in the manufacturing process also holds for ICs. A wafer will normally contain test circuits in the scribe lanes between adjacent die. Parametric tests are performed on these test circuits. If these tests fail, the wafer is discarded, since these circuits are far less dense than the circuits on the die themselves. The next step is to perform a probe test on individual die before they are cut from the wafer. This is a gross test, but it detects many of the defective die. Those that fail are discarded. After the die are cut from the wafer and packaged, they are tested again with a more thorough functional test. The objective? Avoid further processing, and subsequent packaging, of die that are clearly defective.
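The escalation described by the rule of ten is easy to tabulate. A minimal sketch (the base cost N, the function name, and the level names are illustrative, not from the text):

```python
# Tabulating the "rule of ten": the cost of locating a defect grows by
# an order of magnitude at each level of integration.

def detection_cost(base_cost, level):
    """Cost of finding a defect at integration level `level`.

    level 0 = incoming inspection, 1 = board test,
    2 = system test, 3 = field service.
    """
    return base_cost * 10 ** level

for level, name in enumerate(["chip", "board", "system", "field"]):
    print(f"{name:>6}: {detection_cost(1, level):>5} x N")
```

Running this prints the 1x, 10x, 100x, 1000x progression the paragraph describes.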

About the Author:
Name: Joachim Bauer, Test Engineer
Experience: 13+ Yrs
Location: Nice, France

The Economics of Test, Part - III

However, if devices are tested, feature sizes can be reduced and more die will fit on each wafer. Even after the die are tested and defective die are discarded, the number of good die per wafer exceeds the number available at the larger feature sizes. The benefit in terms of increasing numbers of good die obtainable from each wafer far outweighs the cost of testing the die in order to identify those that are defective. Point B on the graph corresponds to a point where process yield is lower than the required quality level. However, testing will identify enough defective units to bring quality back to the required quality level. The horizontal distance from point A to point B on the graph is an indication of the extent to which the process capability can be made more aggressive, while meeting quality goals. The object is to move as far to the right as possible, while remaining competitive. At some point the cost of test will be so great, and the yield of good die so low, that it is not economically feasible to operate to the right of that point on the solid line.

We see therefore that we are caught in a dilemma: Testing adds cost to a product, but failure to test also adds cost. Trade-offs must be carefully examined in order to determine the right amount of testing. The right amount is that amount which minimizes total cost of testing plus cost of servicing or replacing defective components. In other words, we want to reach the point where the cost of additional testing exceeds the benefits derived. Exceptions exist, of course, where public safety or national security interests are involved.

Another useful side effect of testing that should be kept in mind is the information derived from the testing process. This information, if diligently recorded and analyzed, can be used to learn more about failure mechanisms. The kinds of defects and the frequency of occurrence of various defects can be recorded and this information can be used to improve the manufacturing process, focusing attention on those areas where frequency of occurrence of defects is greatest.

This test versus cost dilemma is further complicated by “time to market.” Quality is sometimes seen as one leg of a triangle, of which the other two are “time to market” and “product cost.” These are sometimes posited as competing goals, with the suggestion that any two of them are attainable. The implication is that quality, while highly desirable, must be kept in perspective. Business Week magazine, in a feature article that examined the issue of quality at length, expressed the concern that quality could become an end in itself. The importance of achieving a low defect level in digital components can be appreciated from just a cursory look at a typical PCB. Suppose, for example, that a PCB is populated with 10 components, and each component has a probability of 0.999 of being defect-free (a defect level DL = 0.001). The likelihood of getting a defect-free board is (0.999)^10 = 0.99004; that is, roughly one of every 100 PCBs will be defective—and that assumes no defects were introduced during the manufacturing process. If several PCBs of comparable quality go into a more complex system, the probability that the system will function correctly goes down even further.
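The board-level arithmetic above can be checked directly. A small sketch using the numbers from the paragraph (the function name is ours):

```python
# Checking the board-yield arithmetic from the text: 10 components,
# each with probability 0.999 of being defect-free.

def board_yield(component_good_prob, n_components):
    """Probability that every component on the board is defect-free."""
    return component_good_prob ** n_components

y = board_yield(0.999, 10)
print(f"defect-free boards: {y:.5f}")      # ~0.99004
print(f"defective boards:   {1 - y:.5f}")  # roughly 1 in 100
```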

About the Author:
Name: Joachim Bauer, Test Engineer
Experience: 13+ Yrs
Location: Nice, France

The Economics of Test, Part - II

The table depicted shows test cost broken down into four categories, some of which are one-time, nonrecurring costs whereas others are recurring costs. Test preparation includes costs related to development of the test programs as well as some potential costs incurred during design of the DFT features.

DFT-related costs are directed toward improving access to the basic functionality of the design in order to simplify the creation of test programs. Many of the factors depicted in the Figure imply both recurring and nonrecurring costs. Test execution requires personnel and equipment. The tester is amortized over individual units, representing a recurring cost for each unit tested, while costs such as probe cards may represent a one-time, nonrecurring cost. The test-related silicon is a recurring cost, while the design effort required to incorporate testability enhancements, listed under test preparation as DFT design, is a nonrecurring cost.

The category listed as imperfect test quality includes a subcategory labeled as tester escapes, which are bad chips that tested good. It would be desirable for tester escapes to fall in the category of nonrecurring costs but, regrettably, tester escapes are a fact of life and occur with unwelcome regularity.

Lost performance refers to losses caused by increases in die size necessary to accommodate DFT features. The increase in die size may result in fewer die on a wafer; hence a greater number of wafers must be processed to achieve a given throughput. Lost yield is the cost of discarding good die that were judged to be bad by the tester.

The column in the Figure labeled “Volume” is a critical factor. For a consumer product with large production volumes, more time can be justified in developing a comprehensive test plan because development costs will be amortized over many units. Not only can a more thorough test be justified, but also a more efficient test—that is, one that reduces the amount of time spent in testing each individual unit. In low-volume products, testing becomes a disproportionately large part of total product cost and it may be impossible to justify the cost of refining a test to make it more efficient. However, in critical applications it will still be necessary to prepare test programs that are thorough in their ability to detect defects.

A question frequently raised is, “How much testing is enough?” That may seem to be a rather frivolous question since we would like to test our product so thoroughly that a customer never receives a defective product. When a product is under warranty or is covered by a service contract, it represents an expense to the manufacturer when it fails because it must be repaired or replaced. In addition, there is an immeasurable cost in the loss of customer goodwill, an intangible but very real cost, not reflected in the Figure, that results from shipping defective products. Unfortunately we are faced with the inescapable fact that testing adds cost to a product. What is sometimes overlooked, however, is the fact that test cost is recovered by virtue of enhanced throughput. Consider the graph in the Figure. The solid line reflects quality level, in terms of defects per million (DPM) for a given process, assuming no test is performed. It is an inverse relationship; the higher the required quality, the fewer the number of die obtainable from the process. This follows from the simple fact that, for a given process, if higher quality (fewer DPM) is required, then feature sizes must be increased. The problem with this manufacturing model is that, if required quality level is too high, feature sizes may be so large that it is impossible to produce die competitively. If the process is made more aggressive, an increasing number of die will be defective, and quality levels will fall. Point A on the graph corresponds to the point where no testing is performed. Any attempt to shrink the process to get more units per wafer will cause quality to fall below the required quality level.

About the Author:
Name: Joachim Bauer, Test Engineer
Experience: 13+ Yrs
Location: Nice, France

The Economics of Test, Part - I

What are the factors that influence the cost of test? Quality and test costs are related, but they are not the inverse of one another. As we shall see, an investment in a higher-quality test often pays dividends during the test cycle. Test-related costs for ICs and PCBs include both time and resources. As pointed out in previous sections, for some products the failure to reach a market window early in the life cycle of the product can cause significant loss of revenue and may in fact be fatal to the future of the product.

About the Author:
Name: Joachim Bauer, Test Engineer
Experience: 13+ Yrs
Location: Nice, France

TCL for EDA - A repository of free TCL/TK tools and scripts for EDA...

The TCL for EDA project is an open-source repository of TCL/TK tools, applications, scripts and methodological articles. It targets different stages of chip design: from Verification to Project Management, and on to Synthesis, Static Timing Analysis and Design-for-Test.

Some of their offerings:

  • Netedit - Verilog netlist editor/viewer
  • Netman - Verilog netlist manager/viewer
  • Pman - Project manager (allows navigation, viewing and editing of verilog files)
  • TCL-PLI - TCL pli library
  • Verilog Structural Integration Methodology and Scripts - Sounds interesting..
  • Lots of DC, Timing, DFT and verifications scripts!

A very interesting site indeed!

Transport delay / Inertial Delay

A number of types of delays exist for describing circuit behavior. The two major hardware description languages, Verilog and VHDL, support inertial delay and transport delay.
Inertial delay is a measure of the elapsed time during which a signal must persist at an input of a device in order for a change to appear at an output. A pulse of duration less than the inertial delay does not contain enough energy to cause the device to switch. This is illustrated in Figure attached where the original waveform contains a short pulse that does not show up at the output.

Transport delay is meaningful with respect to devices that are modeled as ideal conductors; that is, they may be modeled as having no resistance. In that case the waveform at the output is delayed but otherwise matches the waveform at the input. Transport delay can also be useful when modeling behavioral elements where the delay from input to output is of interest, but there is no visibility into the behavior of delays internal to the device.
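The contrast between the two delay models can be illustrated with a toy waveform filter. This is a simplified sketch (waveforms as lists of (time, value) changes, function names ours), not how Verilog or VHDL simulators actually schedule events:

```python
# Transport delay: every input change reappears at the output, shifted
# by the delay. Inertial delay: a change is passed only if the value
# persists at the input for at least the device delay, so short pulses
# are swallowed. Times are in arbitrary units.

def transport_delay(changes, delay):
    """Shift every (time, value) change by `delay`."""
    return [(t + delay, v) for t, v in changes]

def inertial_delay(changes, delay):
    """Drop changes whose value does not persist for at least `delay`."""
    out = []
    for i, (t, v) in enumerate(changes):
        next_t = changes[i + 1][0] if i + 1 < len(changes) else None
        if next_t is None or next_t - t >= delay:
            out.append((t + delay, v))
    return out

wave = [(0, 0), (10, 1), (12, 0)]     # contains a 2-unit pulse at t=10
print(transport_delay(wave, 5))       # pulse survives, shifted by 5
print(inertial_delay(wave, 5))        # 2-unit pulse filtered out
```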

Cycle based simulation

New design starts continue to grow in gate count, and the amount of CPU time required to simulate these designs tends to grow disproportionately to gate count, implying a growing need for simulation speed. A simple example helps to shed light on this situation.

Suppose a circuit has n functions and that, in the worst case, each function interacts with all of the others. Ignoring for the moment the complexity of the interactions, there are n × (n − 1)/2 potential interactions between the n functions.

Thus, in the worst case, the number of interactions grows proportional to the square of the number of functions. Handshaking protocols between functions also grow more complex. Internal status and mode control registers act as extensions to device I/O pins.
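The quadratic growth of interactions is simple to compute. A quick sketch of the n × (n − 1)/2 formula (the function name is ours):

```python
# Worst-case pairwise interactions among n functions: n * (n - 1) / 2.

def pairwise_interactions(n):
    """Number of distinct pairs among n mutually interacting functions."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} functions -> {pairwise_interactions(n):>7} interactions")
```

A 100x growth in functions yields roughly a 10,000x growth in potential interactions, which is the point the text is making.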

To verify the growing number of interactions requires more stimuli. In addition, the growing number of gates and functions in the circuit model generate more events that must be evaluated during each clock cycle. The combination of more functionality and more stimuli requires an exponentially growing amount of CPU time to complete the evaluations. A consequence of this is a growing difficulty to create and simulate enough stimuli to verify design correctness. As a result, design errors are more likely to escape detection until after tape-out, at which time the discovery of errors requires another expensive iteration through the design cycle.

Cycle simulation is one of the answers to the growing need for greater verification power. Cycle simulation evaluates logic elements and functions across clock cycle boundaries without regard to intermediate values. Its purpose is to evaluate input stimuli as rapidly as possible. Designs are required to be synchronous so that every possible technique can be leveraged during simulation. Rank-ordering is used so that elements only need to be evaluated once during each clock period. Circuit delays are ignored, and the number of logic values is usually limited to three or four {0, 1, X, Z}. Internal representation of the circuit may be in terms of binary decision diagrams (BDDs), so intermediate values are totally obscured. To ensure that a circuit operates at its intended speed when fabricated, circuit delays are measured by timing analysis programs that are written specifically for that purpose and run independently of simulation.

The designer plays a role in this simulation mode by modeling circuits at the highest possible level of abstraction without losing essential details. A number of methods have been developed to speed up simulation while reducing the amount of workstation memory required to perform simulations.
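The rank-ordering idea can be sketched in a few lines: topologically sort the gates once, then evaluate each gate exactly once per cycle in that order. The three-gate netlist below is a hypothetical illustration, not a real design, and Python stands in for a simulator's internals:

```python
# Rank-ordered (levelized) cycle simulation sketch: sort gates once so
# that every gate's inputs are computed before the gate itself, then
# evaluate each gate exactly once per clock cycle.

from graphlib import TopologicalSorter

# gate name -> (evaluation function, input net names); x, y, z are
# primary inputs.
netlist = {
    "n1":  (lambda a, b: a & b, ("x", "y")),
    "n2":  (lambda a: a ^ 1,    ("n1",)),
    "out": (lambda a, b: a | b, ("n2", "z")),
}

# Rank-order once: each gate depends on its gate-driven inputs.
order = list(TopologicalSorter(
    {g: [i for i in ins if i in netlist] for g, (_, ins) in netlist.items()}
).static_order())

def simulate_cycle(primary_inputs):
    """Evaluate every gate once, in rank order, for one clock cycle."""
    nets = dict(primary_inputs)
    for gate in order:
        fn, ins = netlist[gate]
        nets[gate] = fn(*(nets[i] for i in ins))
    return nets["out"]

print(simulate_cycle({"x": 1, "y": 1, "z": 0}))  # n1=1, n2=0, out=0
```

Because the ordering guarantees inputs are ready before each gate is visited, no gate is ever re-evaluated within a cycle, which is where the speed of cycle simulation comes from.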

Contribute to this Article: onenanometer[at]gmail.com
Please leave a comment!

Event driven simulation/simulator

A latch or flip-flop does not always respond to activity on its inputs. If an enable or clock is inactive, changes at the data inputs have no effect on the circuit. As it turns out, the amount of activity within a circuit during any given timestep is often minimal and may terminate abruptly.

Since the amount of activity in a time step is minimal, why simulate the entire circuit? Why not simulate only the elements that experience signal changes at their inputs? This strategy, employed at a global level, rather than locally, as was the case with stimulus bypass, is supported in Verilog by means of the sensitivity list. The following Verilog module describes a three-bit state machine. The line beginning with “always” is a sensitivity list. The if-else block of code is evaluated only in response to a 1 → 0 transition (negedge) of the reset input, or a 0 → 1 transition (posedge) of the clk input. Results of the evaluation depend on the current value of tag, but activity on tag, by itself, is ignored.

module reg3bit(clk, reset, tag, reg3);
input clk, reset, tag;
output [2:0] reg3;
reg [2:0] reg3;

always @(posedge clk or negedge reset)
  if (reset == 0)
    reg3 <= 3'b110;
  else // rising edge on clock
    case (reg3)
      3'b110: reg3 <= tag ? 3'b011 : 3'b001;
      3'b011: reg3 <= tag ? 3'b110 : 3'b001;
      3'b001: reg3 <= tag ? 3'b001 : 3'b011;
      default: reg3 <= 3'b001;
    endcase
endmodule

When a signal change occurs on a primary input or the output of a circuit element, an event is said to have occurred on the net driven by that primary input or element. When an event occurs on a net, all elements driven by that net are evaluated. If an event on a device input does not cause an event to appear on the device output, then simulation is terminated along that signal path.
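The event-driven loop just described can be sketched as follows. The two-gate, zero-delay circuit is hypothetical; the point is that only gates on active signal paths are evaluated, and propagation stops when an output does not change:

```python
# Toy event-driven simulation loop: only gates whose inputs changed are
# evaluated, and a signal path terminates as soon as a value is
# unchanged (no event). Delays are ignored for simplicity.

from collections import deque

netlist = {
    "n1":  (lambda a, b: a & b, ("a", "b")),
    "out": (lambda a, b: a | b, ("n1", "c")),
}
fanout = {"a": ["n1"], "b": ["n1"], "n1": ["out"], "c": ["out"]}
nets = {"a": 0, "b": 1, "c": 0, "n1": 0, "out": 0}
evaluations = 0  # counts gate evaluations actually performed

def apply_event(net, value):
    """Propagate a change on `net`, evaluating only affected gates."""
    global evaluations
    queue = deque([(net, value)])
    while queue:
        n, v = queue.popleft()
        if nets[n] == v:
            continue          # no event: simulation stops on this path
        nets[n] = v
        for gate in fanout.get(n, []):
            evaluations += 1
            fn, ins = netlist[gate]
            queue.append((gate, fn(*(nets[i] for i in ins))))

apply_event("a", 1)   # a: 0->1 ripples through n1 and out
print(nets["out"], evaluations)
```

Applying a "change" to a net that already holds that value schedules no evaluations at all, which is exactly why event-driven simulators avoid touching quiescent parts of the circuit.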

Linting tools

Some of the tools used for design verification of ICs have their roots in software testing. Tools for software testing are sometimes characterized as static analysis and dynamic analysis tools. Static analysis tools evaluate software before it has run. 

An example of such a tool is Lint. It is not uncommon, when porting a software system to another host environment and recompiling all of the source code for the program, to experience a situation where source code that compiled without complaint on the original host now either refuses to compile or produces a long list of ominous sounding warnings during compilation. The fact is, no two compilers will check for exactly the same syntax and/or semantic violations. One compiler may attempt to interpret the programmer’s intention, while a second compiler may flag the error and refuse to generate an object module, and a third compiler may simply ignore the error.

Lint is a tool that examines C code and identifies such things as unused variables, variables that are used before being initialized, and argument mismatches. Commercial versions of Lint exist both for programming languages and for hardware design languages. A lint program attempts to discover all fatal and nonfatal errors in a program before it is executed. It then issues a list of warnings about code that could cause problems. Sometimes the programmer or logic designer is aware of the coding practice and does not consider it to be a problem. In such cases, a lint program will usually permit the user to mask out those messages so that more meaningful messages don’t become lost in a sea of detail.
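The kind of check Lint performs can be mimicked in a few lines. A toy sketch that flags assigned-but-never-read names, using Python's ast module as a stand-in for a real lint tool (and far less thorough than one):

```python
# Static analysis without executing the code: parse the source into an
# AST and flag names that are assigned but never read.

import ast

def unused_assignments(source):
    """Return names assigned somewhere in `source` but never read."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    return sorted(assigned - used)

code = "x = 1\ny = 2\nprint(x)\n"
print(unused_assignments(code))  # ['y']
```

Note the code being checked never runs; that is precisely the static-analysis property the paragraph describes.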

In contrast to static analysis tools, dynamic analysis tools operate while the code is running. In software this code detects such things as memory leaks, bounds violations, null pointers, and pointers out of range. They can also identify source code that has been exercised and, more importantly, code that has not been exercised. Additionally, they can point out lines of code that have been exercised over only a partial range of their variables.

White box testing or Black box testing

When performing verification, the target device can be viewed as a white box or a black box. During white-box testing, detailed knowledge is available describing the internal workings of the device to be tested. This knowledge can be used to direct the verification effort. For example, an engineer verifying a digital circuit may have schematics, block diagrams, RTL code that may or may not be suitably annotated, and textual descriptions including timing diagrams and state transition graphs. All or a subset of these can be used to advantage when developing test programs. The logic designer responsible for the correctness of the design, armed with knowledge of the internal workings of the design, writes stimuli based on this knowledge; hence he or she is performing white-box testing.

During black-box testing, it is assumed that there is no visibility into the internal workings of the device being tested. A functional description exists which outlines, in more or less detail, how the device must respond to various externally applied stimuli. This description, or specification, may or may not describe behavior of the device in the presence of all possible combinations of inputs. For example, a microprocessor may have op-code combinations that are left unused and unspecified. From one release to the next, these unused op-codes may respond very differently if invoked. PCB designers, concerned with obtaining ICs that work correctly with other ICs plugged into the same PCB or backplane, are most likely to perform black-box testing, unless they are able to persuade their vendor to provide them with more detailed information.

Formal Verification or Equivalence Checking

Design verification must show that the design, expressed at the RTL or structural level, implements the operations described in the data sheet or whatever other specification exists.

Verification at the RTL level can be accomplished by means of simulation, but there is a growing tendency to supplement simulation with formal methods such as model checking. At the structural level the use of equivalence checking is becoming standard procedure. In this operation the RTL model is compared to a structural model, which may have been synthesized by software or created manually. Equivalence checking can determine if the two levels of abstraction are equivalent. If they differ, equivalence checking can identify where they differ and can also identify what logic values cause a difference in response.

Updates: 18th Dec 2008!
Another series of steps in equivalence checking goes beyond what was described above...

RTL to Pre-Synthesis Netlist!
Pre-Synthesis Netlist Vs Post Synthesis Netlist!

Verification or Validation

The purpose of design verification is to demonstrate that a design was implemented correctly. By way of contrast, the purpose of design validation is to show that the design satisfies a given set of requirements. A succinct and informal way to differentiate between them is by noting that Validation asks “Am I building the right product?” Verification asks “Am I building the product right?” Seen from this perspective, validation implies an intimate knowledge of the problem that the IC is designed to solve. An IC created to solve a problem is described by a data sheet composed of text and waveforms. The text verbally describes IC behavior in response to stimuli applied to its I/O pins. Sometimes that behavior will be very complex, spanning many vectors, as when stimuli are first applied in order to configure one or more internal control registers. Then, behavior depends on both the contents of the control registers and the applied stimuli. The waveforms provide a detailed visual description of stimulus and response, together with timing, that shows the relative order in which signals are applied and outputs respond.

Advertise on this Blog and see yourself grow!
For more information contact us at onenanometer[at]gmail.com


Avago Technologies (former HP group) Interview Questions

  • How do you minimize clock skew/ balance clock tree?
  • Given 11 minterms and asked to derive the logic function.
  • Given C1= 10pf, C2=1pf connected in series with a switch in between, at t=0 switch is open and one end having 5v and other end zero voltage; compute the voltage across C2 when the switch is closed?
  • Explain the modes of operation of CMOS (Complementary Metal Oxide Semiconductor) inverter? Show IO (Input-Output) characteristics curve.
  • Implement a ring oscillator.
  • How to slow down ring oscillator?


Hynix Semiconductor Interview Questions

  • How do you optimize power at various stages in the physical design flow?
  • What timing optimization strategies do you employ in pre-layout/post-layout stages?
  • What are process technology challenges in physical design?
  • Design divide by 2, divide by 3, and divide by 1.5 counters. Draw timing diagrams.
  • What are multi-cycle paths, false paths? How to resolve multi-cycle and false paths?
  • Given a flop-to-flop path with combo delay in between and the output of the second flop fed back to the combo logic. Which path is the fastest path to have a hold violation and how will you resolve it?
  • What are RTL (Register Transfer Level) coding styles to adapt to yield optimal backend design?
  • Draw timing diagrams to represent the propagation delay, set up, hold, recovery, removal, minimum pulse width.


Hughes Networks Interview Questions

  • What is setup/hold? What are setup and hold time impacts on timing? How will you fix setup and hold violations?
  • Explain function of Muxed FF (Multiplexed Flip Flop) / scan FF (Scan Flip Flop).
  • What is tested in DFT (Design for Testability)?
  • In equivalence checking, how do you handle scanen signal?
  • In terms of CMOS (Complementary Metal Oxide Semiconductor), explain the physical parameters that affect propagation delay.
  • What are power dissipation components? How do you reduce them?
Short Circuit Power
Leakage Power Trends
Dynamic Power
Low Power Design Techniques

  • How is delay affected by PVT (Process-Voltage-Temperature)?
Answer: Process-Voltage-Temperature (PVT) Variations and Static Timing Analysis (STA)
  • Why is power signal routed in top metal layers?


Qualcomm Interview Questions

  • In building the timing constraints, do you need to constrain all IO (Input-Output) ports?
  • Can a single port have multiple clocks? How do you set delays for such ports?
  • How is scan DEF (Design Exchange Format) generated?
  • What is purpose of lockup latch in scan chain?
  • Explain short circuit current.
Answer: Short Circuit Power
  • What are pros/cons of using low Vt, high Vt cells?
Multi Threshold Voltage Technique
Issues With Multi Height Cell Placement in Multi Vt Flow
  • How do you set inter clock uncertainty?

set_clock_uncertainty -from clock1 -to clock2
  • In DC (Design Compiler), how do you constrain clocks, IO (Input-Output) ports, maxcap, max tran?
  • What are differences in clock constraints from pre CTS (Clock Tree Synthesis) to post CTS (Clock Tree Synthesis)?

Difference in clock uncertainty values; Clocks are propagated in post CTS.
In post CTS clock latency constraint is modified to model clock jitter.
  • How is clock gating done?
Answer: Clock Gating
  • What constraints you add in CTS (Clock Tree Synthesis) for clock gates?

Define the clock-gating cells as through pins.
  • What is trade off between dynamic power (current) and leakage power (current)?
Leakage Power Trends
Dynamic Power

  • How do you reduce standby (leakage) power?
Answer: Low Power Design Techniques
  • Explain top level pin placement flow? What are parameters to decide?
  • Given block level netlists, timing constraints, libraries, macro LEFs (Layout Exchange Format/Library Exchange Format), how will you start floor planning?
  • With net length of 1000um how will you compute RC values, using equations/tech file info?
  • What do noise reports represent?
  • What does glitch reports contain?
  • What are CTS (Clock Tree Synthesis) steps in IC compiler?
  • What does the clock constraints file contain?
  • How do you analyze clock tree reports?
  • What do IR-drop (VoltageStorm) reports represent?
  • Where/when do you use DCAP (Decoupling Capacitor) cells?
  • What are various power reduction techniques?
Answer: Low Power Design Techniques
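
For the 1000 um RC question above, a back-of-envelope sketch in TCL (the per-unit R and C values are hypothetical, not from any real tech file):

```tcl
set r_per_um 0.20     ;# ohm/um  (hypothetical per-unit resistance)
set c_per_um 0.0002   ;# pF/um   (0.2 fF/um, hypothetical)
set len      1000.0   ;# um

set R [expr {$r_per_um * $len}]   ;# 200 ohm total
set C [expr {$c_per_um * $len}]   ;# 0.2 pF total
# Elmore delay of a distributed RC line is about 0.5*R*C
set t [expr {0.5 * $R * $C}]      ;# 20 ps (ohm * pF = ps)
```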


Texas Instruments (TI) Interview Questions

  • How are timing constraints developed?
  • Explain timing closure flow/methodology/issues/fixes.
  • Explain SDF (Standard Delay Format) back annotation/ SPEF (Standard Parasitic Exchange Format) timing correlation flow.
  • Given a timing path in multi-mode multi-corner, how is STA (Static Timing Analysis) performed in order to meet timing in both modes and corners, how are PVT (Process-Voltage-Temperature)/derate factors decided and set in the Primetime flow?
  • With respect to clock gate, what are various issues you faced at various stages in the physical design flow?
  • What are synthesis strategies to optimize timing?
  • Explain the ECO (Engineering Change Order) implementation flow. Given a post-routed database and functional fixes, how will you implement the ECO, and what physical and functional checks do you need to perform?

Advertise on this Blog and see yourself grow!
For more information contact us at onenanometer[at]gmail.com

ST Microelectronics - Interview Questions

  • What were the challenges you faced in physical design, PAR (place and route), FV (Formal Verification)?
  • What was the average design cycle time for your designs?
  • What are your areas of interest in physical design?
  • Explain ECO (Engineering Change Order) methodology used in your projects.
  • Explain CTS (Clock Tree Synthesis) flow used in your projects.
  • What kind of routing issues did you face in your projects? Mention the most recent one.
  • How is STA (Static Timing Analysis) done under OCV (On Chip Variation) conditions?
  • How do you set OCV (On Chip Variation) in the tool you used?
  • How is timing correlation done before and after place and route?
  • If there are too many pins of the logic cells in one place within the core, what kind of issues would you face, and how would you resolve them?
  • Define a hash (%hash) and an array (@array) in Perl.
  • Using TCL (Tool Command Language, Tickle) how do you set variables?
  • What is ICC (IC Compiler) command for setting derate factor/ command to perform physical synthesis?
  • What are nanoroute options for search and repair?
  • What were your design skew/insertion delay targets?
  • How is IR drop analysis done? What are various statistics available in reports?
  • Explain pin density and cell density issues and hotspots.
  • How will you relate routing grid with manufacturing grid and judge if the routing grid is set correctly?
  • What is the command for setting multi cycle path?
  • If hold violation exists in design, is it OK to sign off design? If not, why?
Raj - Sequence Design
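
For the TCL-variable and multicycle-path questions above, typical one-liners look like this (the path endpoints and values are illustrative):

```tcl
# TCL: set a variable
set derate_factor 0.95

# SDC: allow a path 2 clock cycles for setup; pull the hold check
# back to the original capture edge
set_multicycle_path 2 -setup -from [get_pins regA/Q] -to [get_pins regB/D]
set_multicycle_path 1 -hold  -from [get_pins regA/Q] -to [get_pins regB/D]
```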


Digital design interview questions

Search in this blog for all the available articles and resources!


The Great Divide – Technical Leadership and Project Management

[Via Coaching Excellence in IC Design Teams]
This is painfully true.
Intel is one example of why it is #1.

47 CEOs for Cadence

[Via Deepchip]

I feel that even baiting a big-name CEO with an EDA background from the list will not help. This is more about the basics: Cadence needs a thinker who understands its past values and what could be done to bolster its true strengths. This is a point of debate, so let's debate!

Bottom line *grin* [Make me the CEO]

For the rest of this story see:

Unit delay simulation - an intermediate step in Gate level simulation!

This is an intermediate step during Gate level simulation!

Unit delay simulation operates on the assumption that all elements in a circuit possess identical delays. Its advantage is that it can be set up early in the flow, when the post-layout netlist is ready but SDFs are not yet available, typically because the design is not timing-clean and is still in the process of being timing closed.

Primarily, unit delay sims help in ironing out possible simulation/synthesis mismatches due to delta-delay issues, and so are widely used in the industry. But this kind of simulation should not be used to generate test stimuli; doing so gives a false sense of security, since the timing of the actual circuit will not resemble the unit delay results. Another major disadvantage is that, because the elements have non-zero delay, the design cannot be rank-ordered for simulation, so elements may be evaluated unnecessarily several times within the same period.

Unit delay simulation is very useful for FPGAs and CPLDs, since these are fixed arrays of rows and columns of identical elements, whether NAND or NOR gates or collections of resistors and transistors. Switching elements connected this way usually have the same switching speed, in which case unit delay sims become very meaningful. Even if the switching speeds are only integral multiples of one another, unit delay sims can still be effectively implemented.
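
As a sketch, unit-delay gate models simply give every primitive the same one-time-unit delay (the module names here are illustrative, not from any real library):

```verilog
// Unit-delay cell models: every gate gets #1, regardless of type,
// drive strength, or load.
module nand2_unit (output y, input a, b);
  nand #1 g1 (y, a, b);
endmodule

module nor2_unit (output y, input a, b);
  nor #1 g1 (y, a, b);
endmodule
```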

Different types of simulations!

Functional simulation: Simulation of a design description. This is also called spec simulation or concept simulation. This is usually done at the highest level and in the beginning of the project.

Behavioral simulation: Simulation of a digital circuit described in HDLs like Verilog or VHDL. We simulate the behavior described in these language-based designs. This is the second step.

Static timing analysis: This tells us "What is the longest delay in my circuit?" Timing analysis finds the critical path and its delay. Timing analysis does not find the input vectors that activate the critical path. Done after synthesis, this is the third step.

Gate-level simulation: In this type of simulation, the delays from the post-layout stage are back-annotated to the design using SDF and simulated. This gives performance close to that of the real chip. This is the final step.
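
The SDF back-annotation step is typically done with the `$sdf_annotate` system task in the testbench (the file and instance names below are illustrative):

```verilog
module tb;
  // clocks, stimulus, and the gate-level DUT instance go here
  chip dut (/* port connections */);

  initial begin
    // back-annotate post-layout delays onto the netlist instance
    $sdf_annotate("chip_postroute.sdf", tb.dut);
  end
endmodule
```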

Transistor-level or circuit-level simulation: Mainly for mixed-mode (mixed-signal) circuits, where the complete design must be verified at the transistor level. This is an intermediate step, depending on how the design and flow are set up.
Simulation conclusion:
  1. Behavioral simulation can only tell you if your design will not work.
  2. Pre-layout simulation estimates your design performance.
  3. Finding a critical path is difficult because you need to construct input vectors to exercise the right paths.
  4. Behavioral simulation and static timing analysis are the most widely used forms of verification.
  5. Formal verification compares two different representations. It cannot prove your design will work.
  6. Switch-level simulation can check the behavior of circuits that may not always have nodes that are driven or that use logic that is not complementary.
  7. Transistor level simulation is used when you need to know the analog, rather than the digital, behavior of circuit voltages.
  8. There is a trade-off between accuracy and run time.

Get your Blog listed in our Google Custom Search Engine

To get your Blog listed in our Google Custom Search Engine please send us an e-mail with your Blog details to onenanometer [at] gmail [dot] com

Terms and Conditions apply!

Performance-Contingent Self Esteem

Don't be so committed to your profession that you reach a point where it boomerangs, leaving you depressed, anxious and fretful, warn experts. When you are a fresher, or new to the job at hand, and you place too much emotional weight on its successful accomplishment, you naturally tend to evaluate your self-worth solely on the outcomes of these assignments.

This is what psychologists term performance-contingent self-esteem (PCSE), and it is an unhealthy factor, mainly in your early career, that can adversely affect your future.

PCSE can trigger depression and anxiety during even the most minor or common incidents, such as miscommunication, short spats over non-critical matters, or a critique of one's personality or abilities.

Mumbai Terror!

If you want to get in touch with your loved one in Mumbai...

follow this link

List of injured and deceased: click on the hyperlink.

A project challenge for a good cause!

[re-hashed]

If you're an engineer or a student interested in working toward a technological breakthrough, Tanya Vlach has a project for you.

After losing her eye in a 2005 car accident, the San Francisco artist is calling on engineers to design a prosthetic eye for her that also functions as a Web cam.

Vlach is seeking an "eye cam" that is more advanced than her acrylic prosthetic. She wants her new eye to be able to dilate with light changes and allow her to control camera functions like zoom, focus, and power by blinking.

When news of her quest hit the Internet via a blog, Kevin Kelly's Lifestream, she posted her own blog entry with details of her challenge and hoped engineers would respond. There has been interest in designing her eye cam and experts say the technology to achieve it is possible.

Roy Want, a senior principal engineer at Intel, believes it is possible to build a wireless camera to fit inside Vlach's prosthetic and link it to a smart phone that could transfer the video to another phone, a TV studio, or a computer.

While this technology would allow her to record her entire life from the unique perspective of her right eye, she is not yet sure what she would use the footage for, though she has some ideas.

On her Web site, Vlach calls the project an "experiment in wearable technology, cybernetics, and perception." In her blog post she writes, "I am attempting to recreate my eye with the help of a miniature camera implant in my prosthetic / artificial eye. While my prosthetic is an excellent aesthetic replacement, I am interested in capitalizing on the current advancement of technology to enhance the abilities of my prosthesis for an augmented reality." She also provides the dimensions of her prosthetic, the specifications she's looking for, and information about her ocularist, who supports the project.

The ultimate hope is that through this project and more advancements, the camera could help Vlach and others regain some sort of vision.

As her search has gained more attention, Vlach discovered she was not the only one working on this idea. Rob Spence, who injured his eye in a childhood accident and later had it removed, has been pursuing a similar goal as Vlach and plans to have a working prototype ready by Christmas. The Canadian documentary filmmaker has been in contact with Vlach and they may work together to achieve their goal.

If the project is a success, it could not only lead to medical advancements for the blind, but experts think it could create a widespread innovation to digitize everyone's memory by recording everything for them to refer back to.

Digital Logic Design - Interview Essentials!

Hello Readers,
Due to the recent spate of layoffs, a high amount of panic and interview-preparation frenzy has seeped into the industry. This has triggered a significant increase in the emails we get every day, mainly asking for interview preparation pointers, which I have compiled below. The compilation is a list of books and links that will help you kick-start your preparation.

Don't forget, this blog is also a treasure chest for information that you never thought you might find on the web.

Good luck with your preparations!!!

Some popular links on this Blog! - Before we head out to our recommendations.
  • Fundamentals of Digital Logic and Microcomputer Design - by M. Rafiquzzaman
    This book is an essential reference that provides you with the fundamental tools you need to design typical digital systems.
  • Digital Logic Circuit Analysis and Design - by Nelson, Nagle, Carroll, Irwin
    If you want to understand the basics of digital logic circuit analysis and design, start here.
  • Verilog HDL (2nd Edition) - by Samir Palnitkar
    A jump starter. Learn basic digital design paradigms and the necessary Verilog HDL constructs to build small digital circuits and run simulations.
  • Verilog HDL Synthesis, A Practical Primer - by Jayaram Bhasker
    Teaches Verilog-based synthesis techniques, showing the reader not only what hardware results from various Verilog constructs, but how to tailor the Verilog to get the desired hardware.
  • Circuit Design with VHDL - by Volnei A. Pedroni
    Teaches VHDL using system examples combined with programmable logic and supported by laboratory exercises. While other textbooks concentrate only on language features, Circuit Design with VHDL offers a fully integrated presentation of VHDL and design concepts by including a large number of complete design examples, illustrative circuit diagrams, a review of fundamental design concepts, fully explained solutions, and simulation results!

What Users Would Change About This Blog!

The best part about this survey is that you get to see what users are looking for, what's bothering them that they would like changed, and what's the top thing on their mind.

Please leave a detailed comment or you could alternatively e-mail us at onenanometer (at) gmail (dot) com!

Work/Life Balance - Current Implications and Solutions

You've probably already realized that the "slash" in the work/life balance equation is disappearing. Work has spilled over into life with our increasing accessibility to colleagues, customers and other stakeholders, and the things we value in life are gaining precedence in the workplace. Senior-level executives say they are satisfied by work they like to do, working with people they like, and a boss they like to work for. Compensation is further down the list, below their desire for adequate work/life balance and opportunities for leadership and professional development.

Just as employees take their work home with them, they are bringing electronic errands into the office space. Online banking, ordering flowers, buying birthday presents, and researching which new HDTV to buy are sometimes done from an office PC. A 2007 survey from CareerBuilder and Harris Interactive revealed that 30 percent of workers admitted to holiday shopping on company time, while 50 percent of their employers reportedly monitor Internet usage.

I agree that online shopping can drain productivity, but the ever-present BlackBerry can interrupt relaxation. The border between "work" and "life" is getting blurry, and the expectations of younger workers are going to help eradicate the boundaries altogether. Having grown up with the Internet, they are online all the time anyway, so it makes no difference to them if they are downloading music at 3:30pm, updating their Facebook page at 7:00am or crunching numbers on a work project they are enjoying at midnight.

For me, at this instant, with a hectic project schedule in which managers expect us to work weekends and skip festivals for the sake of making the deadline, it seems they want to ignore the fact that social events like festivals and holidays are significant in rejuvenating an employee to work more productively. This should not be compromised.

If you're the boss, consider relaxing the grip on your knowledge workers. When tasks allow, create flex-time policies that enable employees to work more entrepreneurially. As long as there isn't a drop in performance or productivity and the system isn't being abused, you may find a happier and more engaged workforce, and you'll be ahead of your competitors who still demand a stringent 9-to-5 model.

The Author of this article is a Senior Engineer in a Multi-National Semiconductor Company based in Bangalore, India.

Layoff Watch!

Announced Worldwide Layoffs As of 30th January 2009

Infineon - 3000
Qimonda - 3000 (Bankrupt: 23 Jan 09)
Renesas - 150
ST - 4500 (Updated)
TI - 3500
Corning - 4900(New)
NXP - 4000
Philips - 6000
ST+NXP(Wireless) - 500
Toshiba - 500
FreeScale - 400
Brooks - 10%+350 (New: Exact # not known)
AMD - 500
Broadcom - 300
Xilinx - 200
LSI Logic - 500
Trident - 9%
Entegris - 200
Cadence - 625
IBM - 200
NOVA M - 250
Anadigics - 100
Intersil - 9%
Nortel - 5000
Sun Microsystems - 6000
Sandisk - 3000

This information has been compiled from various news sources and its authenticity cannot be fully verified. It should be used only to get a feel for the overall job outlook; any analysis beyond that is the sole responsibility of the person embarking on it.

Intel will invest through recession!

Intel will continue to invest in products and technologies even though it sees that a U.S. financial meltdown is likely to affect the emerging markets that are crucial for its growth, its chairman said.

10 Gbps Wireless

Engineers at Battelle have come up with a way to send data through the air at 10 Gigabits per second using point-to-point millimeter-wave technology. They used standard optical networking equipment and essentially combined two lower bandwidth signals to produce a 10Gb signal from the interference. They say the technology could replace fiber optics around large campuses or companies or even deliver high-bandwidth streaming within the home.

VLSI Blogs

Based on the content and articles in this blog, do you really think it should be categorized under VLSI, ASIC, chip design, or semiconductor?
We appreciate your comments!

How to get a free link to your ASIC or VLSI Blog?

This is how you can get a free link to your blog. If you are not already a member of the ASIC/VLSI/Digital Electronics blogging community, join it now! Then write a post in your blog with at least 2 paragraphs and add an active link to this blog (homepage or individual post page, i.e., permalink). When you are done, comment on this post with a link to your post, and we will link to your blog. You can write anything: praise us, just describe us, or even criticise us. However, the post must be a permanent post, should make sense, and should not be deleted after you get your free credit.

It will be greatly appreciated if you can write more, especially if you find this blog helpful. I can tell this blog has been helpful from the comments I have received, the FeedBurner subscription count and the Google PageRank :-).

Note: Your blog should be a VLSI/ASIC/Digital Electronics related blog!
Anything else will be ignored.

Good luck!
Create your RupeeMail Account & refer your friends to earn launch referral bonus on every new registration.
Try this... http://www.rupeemail.in/rupeemail/invite.do?in=MTUyNjUlIyVSajZ5dlBocDhrU3ozWVZ3bTBhZWQyQ2ZF

Semiconductor forecast looks grim!

According to IC Insights, the market forecast for 2008 total worldwide IC market growth looks grim, with shrinking demand from fabless suppliers, a weak memory market, and pricing issues that continue to plague the semiconductor industry. After yesterday's failed US$700B bailout vote in the US House, it looks even grimmer: spending will drop drastically when measured against mostly weak currencies, including those of emerging markets.

Thanks to the reign of the capitalist US market for the past seven decades, technological growth and innovation breakthroughs are likely to be impacted further unless the emerging markets and powers pump money into their already confident markets to keep up the pace and shift the trend away from the US and Europe, toward a more robust and sustainable world economy.

[Featured Link: http://www.edn.com/article/CA6600386.html?nid=3351&rid=869353679]

Simple XVGA (1024x768) Controller in Verilog

module xvga(clk,hcount,vcount,hsync,vsync,rgb);
input clk; // 64.8 MHz pixel clock (nominally 65 MHz: 1344 x 806 x 60 Hz)
output [10:0] hcount;
output [9:0] vcount;
output hsync, vsync;
output [2:0] rgb;

reg hsync,vsync,hblank,vblank;
reg [10:0] hcount; // pixel number on current line
reg [9:0] vcount; // line number
reg [2:0] rgb;

wire hsyncon,hsyncoff,hreset,hblankon; // timing-signal generation
wire vsyncon,vsyncoff,vreset,vblankon;
wire next_hb = hreset ? 0 : hblankon ? 1 : hblank; // sync & blank
wire next_vb = vreset ? 0 : vblankon ? 1 : vblank;

// horizontal: 1344 pixels total, 1024 displayed per line
assign hblankon = (hcount == 1023); // turn on blanking
assign hsyncon = (hcount == 1047); // turn on sync pulse
assign hsyncoff = (hcount == 1183); // turn off sync pulse
assign hreset = (hcount == 1343); // end of line (reset counter)
// vertical: 806 lines total, 768 displayed
assign vblankon = hreset & (vcount == 767); // turn on blanking
assign vsyncon = hreset & (vcount == 776); // turn on sync pulse
assign vsyncoff = hreset & (vcount == 782); // turn off sync pulse
assign vreset = hreset & (vcount == 805); // end of frame

always @(posedge clk) begin
  hcount <= hreset ? 0 : hcount + 1;
  hblank <= next_hb;
  hsync <= hsyncon ? 0 : hsyncoff ? 1 : hsync; // active low
  vcount <= hreset ? (vreset ? 0 : vcount + 1) : vcount;
  vblank <= next_vb;
  vsync <= vsyncon ? 0 : vsyncoff ? 1 : vsync; // active low
end

// draw a one-pixel white frame around the 1024x768 display area
always @(posedge clk) begin
  if (vblank | (hblank & ~hreset)) rgb <= 0;
  else rgb <= (hcount==0 | hcount==1023 | vcount==0 | vcount==767) ? 7 : 0;
end

// alternative: vertical color bars (see chart below); use this block
// instead of the frame block above, since both drive rgb
// always @(posedge clk) begin
//   if (vblank | (hblank & ~hreset)) rgb <= 0;
//   else rgb <= hcount[8:6];
// end

Color chart:

RGB Color

000 black
001 blue
010 green
011 cyan
100 red
101 magenta
110 yellow
111 white

Example Pixel:


Coaching Excellence in IC Design Teams: Why Can't we get Rid of that Thorn in our Process?


Commenting on this article by Jorvig...

It is really true. Every company I have worked for, including my current one, suffers from the same problem. It's ironic that there is no solution in sight, but I am sure somebody reading this article will kick off a thought process and at least look at the issue, if they weren't already aware of it.