
At-Speed and Advanced Fault Models for Achieving High Quality Test


With the increasing clock speeds and decreasing feature sizes of today's nanometer designs, at-speed testing is a requirement for achieving high-quality test results. In addition, new advanced fault models are available to improve defect detection and lower defects-per-million (DPM) rates. This paper describes advanced at-speed test capabilities and several of these new fault models.

Most Significant Papers on IC Test


Many important papers have been presented at the annual International Test Conference (ITC) over the past 35 years. The best of these are on-line at ITC Papers.

The Economics of Test, Part - IV


Detecting a defective unit is often only part of the job. Another important aspect of test economics that must be considered is the cost of locating and replacing defective parts. Consider again the board with 10 integrated circuits. If it is found to be defective, it is necessary to locate the part that has failed, a time-consuming and error-prone operation. Replacing suspect components that have been soldered onto a PCB can introduce new defects, so each replacement must be followed by a retest to ensure that the component replaced was the actual failing component and that no new defects were introduced during this phase of the operation. This ties up both technicians and expensive test equipment. Consequently, a goal of test development must be to create tests capable not only of detecting faulty operation but also, whenever possible, of pinpointing the faulty component. In practice there is often a list of suspected components, and the objective is to shorten that list as much as possible.

One solution to the problem of locating faults during the manufacturing process is to detect faulty devices as early as possible. This strategy is an acknowledgment of the so-called rule-of-ten. This rule, or guideline, asserts that the cost of locating a defect increases by an order of magnitude at every level of integration. For example, if it costs N dollars to detect a faulty chip at incoming inspection, it may cost 10N dollars to detect the defective component after it has been soldered onto a PCB. If the component is not detected at board test, it may cost 100 times as much once the board with the faulty component has been placed into a complete system. If the defective system is shipped to a customer and a field engineer must make a trip to the customer site, the cost increases by another power of 10. The obvious implication is that there is tremendous economic incentive to find defects as early as possible. This preoccupation with finding defects early in the manufacturing process also holds for ICs [27]. A wafer will normally contain test circuits in the scribe lanes between adjacent die, and parametric tests are performed on these test circuits. If these tests fail, the wafer is discarded: the test circuits are far less dense than the circuits on the die themselves, so a failure there implies that the denser die circuits are very unlikely to be functional. The next step is to perform a probe test on individual die before they are cut from the wafer. This is a gross test, but it detects many of the defective die, and those that fail are discarded. After the die are cut from the wafer and packaged, they are tested again with a more thorough functional test. The objective is to avoid further processing, and subsequent packaging, of die that are clearly defective.
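As a rough, back-of-the-envelope illustration of the rule-of-ten, the short Python sketch below tabulates how the cost of catching the same defect escalates at each level of integration. The base cost and the level names are illustrative assumptions, not figures from the text.

# Rough illustration of the "rule-of-ten": the cost of finding the same
# defect grows by roughly an order of magnitude at each level of integration.
# The base cost and the level names are illustrative assumptions only.

BASE_COST = 1.0  # cost N (arbitrary units) of catching a bad chip at incoming inspection
LEVELS = ["incoming inspection", "board test", "system test", "field repair"]

def escalation_costs(base_cost, levels):
    """Return the assumed detection cost at each successive level of integration."""
    return {level: base_cost * 10 ** i for i, level in enumerate(levels)}

for level, cost in escalation_costs(BASE_COST, LEVELS).items():
    print(f"{level:20s}: {cost:8.0f} x N")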

About the Author:
Name: Joachim Bauer, Test Engineer
Experience: 13+ Yrs
Location: Nice, France

The Economics of Test, Part - III


However, if devices are tested, feature sizes can be reduced and more die will fit on each wafer. Even after the die are tested and the defective die are discarded, the number of good die per wafer exceeds the number available at the larger feature sizes. The benefit, in terms of the increased number of good die obtainable from each wafer, far outweighs the cost of testing the die to identify those that are defective. Point B on the graph corresponds to a point where process yield is lower than the required quality level; testing, however, will identify enough defective units to bring outgoing quality back up to that level. The horizontal distance from point A to point B on the graph indicates the extent to which the process can be made more aggressive while still meeting quality goals. The objective is to move as far to the right as possible while remaining competitive. At some point the cost of test will be so great, and the yield of good die so low, that it is not economically feasible to operate to the right of that point on the solid line.
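To make the argument concrete, the sketch below compares good die per wafer at two feature sizes using a simple Poisson yield model, Y = exp(-A*D). The model choice, defect densities, die areas, and die-per-wafer counts are all assumptions invented for this illustration; the point is only that per-die yield can fall while total good die per wafer still rises.

import math

# Illustrative only: good die per wafer at two feature sizes under a simple
# Poisson yield model Y = exp(-A * D). All numbers below are assumed values,
# not figures from the article.

def good_die_per_wafer(die_area_cm2, die_per_wafer, defect_density):
    """Expected number of good die on one wafer under a Poisson yield model."""
    yield_per_die = math.exp(-die_area_cm2 * defect_density)
    return die_per_wafer * yield_per_die

# Larger feature size: bigger die, fewer per wafer, higher per-die yield.
large = good_die_per_wafer(die_area_cm2=2.0, die_per_wafer=150, defect_density=0.2)
# Smaller feature size: smaller die, more per wafer, but the finer geometry is
# assumed more defect-sensitive, so the effective defect density is higher.
small = good_die_per_wafer(die_area_cm2=1.0, die_per_wafer=300, defect_density=0.8)

print(f"good die/wafer, larger feature size:  {large:.0f}")   # ~100
print(f"good die/wafer, smaller feature size: {small:.0f}")   # ~135, despite lower yield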

We are therefore caught in a dilemma: testing adds cost to a product, but failure to test also adds cost. Trade-offs must be carefully examined in order to determine the right amount of testing: the amount that minimizes the total cost of testing plus the cost of servicing or replacing defective components. In other words, we want to test up to, but not beyond, the point where the cost of additional testing exceeds the benefit derived. Exceptions exist, of course, where public safety or national security interests are involved.
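One way to picture "the right amount of testing" is as the minimum of a total-cost curve: test cost rises with coverage while the cost of escapes falls. The toy sweep below uses invented cost functions purely for illustration; real curves would come from measured test-development cost and field-return data.

# Illustrative only: the "right amount of test" as a cost minimum.
# Both cost functions are invented for the example.

def test_cost(coverage):
    """Assumed cost of developing/applying tests; grows steeply near 100% coverage."""
    return 10.0 * coverage / (1.001 - coverage)

def escape_cost(coverage):
    """Assumed cost of servicing/replacing defective parts that tests miss."""
    return 2000.0 * (1.0 - coverage)

coverages = [i / 1000 for i in range(1000)]
best = min(coverages, key=lambda c: test_cost(c) + escape_cost(c))
print(f"coverage minimizing total cost: {best:.3f}, "
      f"total cost: {test_cost(best) + escape_cost(best):.1f}")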

Another useful side effect of testing that should be kept in mind is the information derived from the testing process. This information, if diligently recorded and analyzed, can be used to learn more about failure mechanisms. The kinds of defects and the frequency of occurrence of various defects can be recorded and this information can be used to improve the manufacturing process, focusing attention on those areas where frequency of occurrence of defects is greatest.

This test-versus-cost dilemma is further complicated by “time to market.” Quality is sometimes seen as one leg of a triangle, of which the other two are “time to market” and “product cost.” These are sometimes posited as competing goals, with the suggestion that any two of them are attainable [25]. The implication is that quality, while highly desirable, must be kept in perspective. Business Week magazine, in a feature article that examined the issue of quality at length, expressed the concern that quality could become an end in itself. Still, the importance of achieving a low defect level in digital components can be appreciated from a cursory look at a typical PCB. Suppose, for example, that a PCB is populated with 10 components, each of which has a 0.999 probability of being defect-free (i.e., a defect level of 0.1%). The likelihood of getting a defect-free board is (0.999)^10 ≈ 0.990; that is, about one of every 100 PCBs will be defective—and that assumes no defects were introduced during the board-manufacturing process itself. If several PCBs of comparable quality go into a more complex system, the probability that the system will function correctly goes down even further.
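The arithmetic above can be checked with a couple of lines of Python; the second call, with 100 components, is an invented extension that shows how quickly board yield erodes in a larger system.

# Probability that a board built from n components is defect-free, assuming
# each component independently has probability q of being good
# (q = 0.999 here, i.e., a 0.1% defect level).

def board_yield(q, n):
    """Probability that all n components on the board are good."""
    return q ** n

print(board_yield(0.999, 10))   # ~0.990: about 1 board in 100 is defective
print(board_yield(0.999, 100))  # ~0.905: nearly 1 in 10 defective for a larger system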


Some Testing Glossary Terms


Black box testing
not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing
based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing
the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
Incremental integration testing
continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing
testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing
black-box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System testing
black box type testing that is based on overall requirement specifications; covers all combined parts of a system.
End-to-end testing
similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing
typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression testing
re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing
final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing
testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing
term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing
term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing
testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing
testing of full, partial, or upgrade install/uninstall processes.
Recovery testing
testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing
testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing
testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing
often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing
similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
User acceptance testing
determining if software is satisfactory to an end-user or customer.
Comparison testing
comparing software weaknesses and strengths to competing products.
Alpha testing
testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing
testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing
a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected (see the sketch after this glossary). Proper implementation requires large computational resources.
Disclaimer: Sent to me by TestMedia Inc.
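To make the mutation-testing entry above concrete, here is a toy, hand-rolled sketch. The function under test, the single mutant, and the test cases are all invented for the example; real mutation tools (e.g., mutmut for Python or PIT for Java) generate and run mutants automatically.

# Toy illustration of mutation testing: deliberately plant a bug ("mutant")
# and check whether the existing test cases catch it.

def max_of_two(a, b):          # original code under test
    return a if a >= b else b

def max_of_two_mutant(a, b):   # mutant: ">=" deliberately changed to "<="
    return a if a <= b else b

test_cases = [((3, 3), 3), ((0, 5), 5), ((7, 2), 7)]

def run_tests(fn, cases):
    """Return True if fn passes every (inputs, expected) pair."""
    return all(fn(*args) == expected for args, expected in cases)

assert run_tests(max_of_two, test_cases)               # original passes
mutant_killed = not run_tests(max_of_two_mutant, test_cases)
print("mutant killed by the test set:", mutant_killed)  # True: the tests are useful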

Verification and Testing


Verification:
In order to verify the functional correctness of a design, one needs to capture a model of the design's intended behavior in a formal language, or to use the design itself as that model. In most commercial software development organizations there is no formal specification of the program under development, and formal verification is used routinely only by small pockets of the industrial software community, particularly in protocol verification and embedded systems. Where verification is practiced, the formal specifications of the system design (derived from the requirements) are compared to the functions that the code actually computes; the goal is to show that the code implements the specifications.
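As a lightweight stand-in for the spec-versus-code comparison described above, one can state the specification as an executable predicate and check the implementation against it over a bounded input domain. The sketch below only illustrates the idea and is not real formal verification (which would use a theorem prover or model checker); the spec, the implementation, and the input range are invented.

# Illustrative stand-in for "compare the specification to what the code
# computes": an executable spec checked exhaustively over a small domain.

def spec_abs(x):
    """Specification: the result is non-negative and equals either x or -x."""
    def holds(result):
        return result >= 0 and (result == x or result == -x)
    return holds

def abs_impl(x):               # implementation under verification
    return x if x >= 0 else -x

# Exhaustive check over a bounded input domain (a crude substitute for a proof).
assert all(spec_abs(x)(abs_impl(x)) for x in range(-1000, 1001))
print("implementation satisfies the specification on the checked domain")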

Testing:
Testing is clearly a necessary area for software or hardware validation. Typically, design reviews are held before the program is coded, and code inspections follow once the code is written, both as part of the static testing effort. Various other static analysis methods based on the source code can then be applied.

Typically, system testing targets key aspects of the product, such as recovery, security, stress, performance, hardware configurations, software configurations, etc. Testing during production and deployment typically involves some level of customer-acceptance criteria.

Load and Stress Testing


One of the most common but unfortunate misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this careless use of terms is usually that the system is neither properly load tested nor subjected to a meaningful stress test.

1. Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts) needed to process that load. The idea is to stress the system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but it is expected to behave (e.g., fail) in a decent manner (e.g., without corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, its consequences, and so on. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

2. Load testing is subjecting a system to a (usually) statistically representative load. The two main reasons for using such loads are to support software reliability testing and performance testing. The term "load testing" by itself is too vague and imprecise to warrant use: do you mean a "representative load," an "overload," a "high load," etc.? In performance testing, the load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

3. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.
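The performance-testing usage in items 2 and 3 can be pictured as a sweep over arrival rates: raise the offered load until response time exceeds an application-specific limit, and report the last rate that still met it. In the sketch below, the M/M/1 response-time formula, the service rate, and the delay limit are all assumptions chosen only for illustration.

# Illustrative load sweep: find the highest arrival rate at which mean response
# time stays under an application-specific limit. The queueing model and the
# numbers are assumptions for the example only.

SERVICE_RATE = 100.0      # transactions/second the system can complete (assumed)
MAX_RESPONSE_TIME = 0.2   # acceptable mean response time in seconds (assumed)

def mean_response_time(arrival_rate, service_rate=SERVICE_RATE):
    """Mean response time of an M/M/1 queue; unbounded once the queue saturates."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

max_sustainable = 0.0
rate = 1.0
while rate < SERVICE_RATE:
    if mean_response_time(rate) <= MAX_RESPONSE_TIME:
        max_sustainable = rate
    rate += 1.0

print(f"maximum sustainable load: ~{max_sustainable:.0f} transactions/second")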