FMEA - Failure Mode and Effects Analysis - Part 1

Failure Mode and Effects Analysis (FMEA):
A procedure for analysis of potential failure modes within a system for the classification by severity or determination of the failure's effect upon the system. It is widely used in the manufacturing sector in various phases of the product life cycle. Failure causes are any errors or defects in process, design, or item especially ones that affect the customer, and can be potential or actual. Effects analysis refers to studying the consequences of those failures.

In this article we will see why we need FMEA in the semiconductor industry and how it can help save costs and retain customers while protecting the company's bottom line.

History & Background:
Failure mode: The manner by which a failure is observed; it generally describes the way the failure occurs.
Failure effect: The immediate consequences a failure has on the operation, function, or status of some item.
Local effect: The Failure effect as it applies to the item under analysis.
Next higher level effect: The Failure effect as it applies at the next higher indenture level.
End effect: The failure effect at the highest indenture level or total system.
Failure cause: Defects in design, process, quality, or part application, which are the underlying cause of the failure or which initiate a process which leads to failure.
Severity: The consequences of a failure mode. Severity considers the worst potential consequence of a failure, determined by the degree of injury, property damage, or system damage that could ultimately occur.
Indenture levels: An identifier for item complexity. Complexity increases as the levels get closer to one.

The FMEA process was originally developed by the US military in 1949 to classify failures "according to their impact on mission success and personnel/equipment safety". FMEA has since been used on the 1960s Apollo space missions. In the 1980s it was used by the Ford Motor Company to reduce risks after one model of car, the Pinto, suffered a design flaw that failed to prevent the fuel tank from rupturing in a crash, leading to the possibility of the vehicle catching fire.


In FMEA, Failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. An FMEA also documents current knowledge and actions about the risks of failures, for use in continuous improvement. FMEA is used during the design stage with an aim to avoid future failures. Later it is used for process control, before and during ongoing operation of the process. Ideally, FMEA begins during the earliest conceptual stages of design and continues throughout the life of the product or service.

The purpose of FMEA is to take actions to eliminate or reduce failures, starting with the highest-priority ones. It may be used to evaluate risk management priorities for mitigating known threat-vulnerabilities. FMEA helps select remedial actions that reduce cumulative impacts of life-cycle consequences (risks) from a systems failure (fault).

It is used in many formal quality systems such as QS-9000 or ISO/TS 16949. The basic process is to take a description of the parts of a system, and list the consequences if each part fails. In most formal systems, the consequences are then evaluated by three criteria and associated risk indices:

  • Severity (S),
  • Likelihood of occurrence (O), also often known as probability (P), and
  • Inability of controls to detect it (D)

A simple FMEA scheme assigns each of the three indices a value from 1 (lowest risk) to 10 (highest risk). For Detection, a 1 means the control is absolutely certain to detect the problem, while a 10 means the control is certain not to detect the problem (or no control exists). The overall risk of each failure, called the Risk Priority Number (RPN), is the product of the Severity (S), Occurrence (O), and Detection (D) rankings: RPN = S × O × D. The RPN (ranging from 1 to 1000) is used to prioritize all potential failures and decide upon actions to reduce the risk, usually by reducing the likelihood of occurrence and improving controls for detecting the failure.
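The RPN prioritization above can be sketched in a few lines of code. This is a minimal illustration, not a real FMEA worksheet: the failure modes and their S/O/D rankings below are invented for the example.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 rankings."""
    for r in (severity, occurrence, detection):
        assert 1 <= r <= 10, "rankings must lie in 1..10"
    return severity * occurrence * detection

# Hypothetical failure modes as (description, S, O, D).
failure_modes = [
    ("SerDes fails to lock at cold temperature", 8, 3, 4),
    ("SRAM read disturb at low Vdd",             9, 2, 7),
    ("Package solder-ball crack",                6, 2, 3),
]

# Work the highest-RPN failures first.
prioritized = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in prioritized:
    print(f"RPN {rpn(s, o, d):4d}  {desc}")
```

Running this lists the SRAM read-disturb mode first (RPN 126), making it the first candidate for corrective action.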


If used as a top-down tool, FMEA may only identify major failure modes in a system. Fault tree analysis (FTA) is better suited for "top-down" analysis. When used as a "bottom-up" tool FMEA can augment or complement FTA and identify many more causes and failure modes resulting in top-level symptoms. It is not able to discover complex failure modes involving multiple failures within a subsystem, or to report expected failure intervals of particular failure modes up to the upper level subsystem or system.

Additionally, the multiplication of the severity, occurrence and detection rankings may result in rank reversals, where a less serious failure mode receives a higher RPN than a more serious failure mode. The reason for this is that the rankings are ordinal scale numbers, and multiplication is not a valid operation on them. The ordinal rankings only say that one ranking is better or worse than another, but not by how much. For instance, a ranking of "2" may not be twice as bad as a ranking of "1," or an "8" may not be twice as bad as a "4," but multiplication treats them as though they are. See Level of measurement for further discussion.
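The rank-reversal problem is easy to demonstrate numerically. The two failure modes below are hypothetical, chosen only to show how a safety-critical failure can end up with a lower RPN than a nuisance failure.

```python
def rpn(s, o, d):
    # Multiplying ordinal rankings treats a "10" as ten times a "1",
    # a relationship the ranking scales never actually promised.
    return s * o * d

critical = ("Fuel-tank rupture in a crash",   10, 2, 3)  # RPN = 10*2*3 = 60
nuisance = ("Cosmetic panel misalignment",     3, 6, 6)  # RPN = 3*6*6 = 108

# The nuisance failure outranks the catastrophic one.
assert rpn(*critical[1:]) < rpn(*nuisance[1:])
```

This is why many practitioners review high-severity items separately regardless of their RPN, rather than relying on the product alone.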

Application in the semiconductor industry:

Why use FMEA?

  1. If the products come back, customers will not.
  2. FMEA checks if the product meets requirements before Tape Out or Delivery.
  3. Saves R&D costs and avoids re-designs.
Which type of FMEA?
  1. System FMEA
    • Customer interface - Talk to the customer and get as much info as possible about the product in question.
    • Project resources - Analyze the resources available at your disposal for execution and delivery.
    • Architecture - Study the architecture thoroughly, possibly with a group brainstorming session.
  2. Design or Technology
    • Mechanical or Electrical
    • Technology
    • Software
  3. Fab or Assembly
    • Mechanical
    • Electrical
    • Testing
What is a failure?
  1. A re-design
  2. A failed test case
  3. Any crash
  4. All errata sheets
  5. All bugs
  6. All change requests
  7. All field failures
  8. All FARs & RMAs
  9. All test programs that did not work right the first time.
  10. Any project started without requirements defined by Marketing or not accepted by the customer.
Teams & Classification:
  1. Facilitator - Typically the Quality Engineer
  2. Project Group - Architect, Programmers, RTL guys, Test, Quality, Marketing
  3. Support Group - Representatives from Test Departments, AEs, Sales.
Be Specific:
  1. Define scope of the FMEA - Make a picture
    • Serial IO
    • Refer to a block diagram of the product
    • Provide customer & market requirement docs by version
    • Provide BOM list for the product
    • Ensure the team includes all cross functional departments for efficiency.
  2. Create proper visual definitions for the scope of the FMEA - This would involve extensive whiteboard sessions!
  3. Start with the top 3 riskiest and the newest features added. Be as specific as possible by using a product brief or a data sheet, etc.
To be continued....

Low Voltage Is Key To Energy-Efficient Chip - Part II

I have managed to gather a real doc on the recently announced low voltage chip...

These are some of the important details...
  • This was a PhD Thesis for very low voltage operation
  • 8T SRAM to avoid erasing of content while reading
  • Supply voltage 0.3-0.6V (derived from 1.2V by DC-DC converter)
  • Variation, e.g. from random dopant fluctuations, required a redesign of the library
  • SubVt logic cell library was developed and used
  • 62 cells with limited fanin of 3
  • Small cells may double in size compared to regular library
  • e.g. Upsizing of keeper cells in flipflops and resizing of T-gates
  • Average area overhead ~1.7x
  • SubVt timing is slow!
  • Monte Carlo simulation of selected path
  • Handcrafted timing signoff methodology
Now here is the Link!

Mobile TV heats up with Broadcom's 65-nm SOC

The official news is out!
Broadcom has just announced a monolithic digital CMOS mobile TV receiver/demodulator that reduces power consumption by up to 40 percent and physical dimensions by up to 30 percent compared with current handset designs. Designed for global digital TV reception on portable devices, the BCM2940 chip supports VHF III, UHF IV and V bands. It also supports the EU/US L-bands and integrates a 4-Mb MPE-FEC SRAM to handle DVB-H parallel/consecutive streams and services. The chip offers various physical interfaces including SPI, SDIO and USB 2.0.

OK, that sounds great and appetizing, but is there really a market?
Some time ago TI came up with its own single-chip DVB-H receiver, called Hollywood, using 90-nm technology, but the company hasn't announced any major design wins.
ST put its project on the back burner more than a year ago.
No noise from NXP or Infineon yet!

Given Infineon's recent track record and acquisitions, I would not be surprised if they throw one in pretty soon :-) though that is more speculation than prediction.
Let's wait and watch. Read on... this article on EE Times.

TI reveals details of 45-nm process

Rumors had been rampant for the past 3 months or so that TI had stopped development on the 45-nm node and moved on to 40 nm, citing lower power advantages and poor performance scaling, until this article was published at EE Times.

"The first 45-nanometer chip to be designed by Texas Instruments, and fabricated by a foundry, uses new processing technology never before revealed by TI. The design details of the 45-nanometer process used to lower power by 63 percent and increase performance by 55 percent, compared with its 65-nanometer process, will be revealed Tuesday (Feb. 5) by TI at the International Solid-State Circuits Conference here."

Read on.. (Broken link, corrected now)

Low Voltage Is Key To Energy-Efficient Chip

News in from the International Solid State Circuits Conference in San Francisco of a new energy-efficient chip designed by researchers at MIT. It's said to be able to run on 1/10 the power of current chips. Texas Instruments worked with MIT on the design, which is maybe five years from production. The key to the chip's improved energy efficiency lies in making it work at a reduced voltage level, according to... a member of the chip design project team. Most of the mobile processors today operate at about 1 volt. The requirement for MIT's new design, however, drops to 0.3 volts.