- Black box testing
- not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
- White box testing
- based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
- Unit testing
- the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
- Incremental integration testing
- continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
- Integration testing
- testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
- Functional testing
- black-box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
- System testing
- black box type testing that is based on overall requirement specifications; covers all combined parts of a system.
- End-to-end testing
- similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
- Sanity testing
- typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
- Regression testing
- re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
- Acceptance testing
- final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
- Load testing
- testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
- Stress testing
- term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
- Performance testing
- term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
- Usability testing
- testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
- Install/uninstall testing
- testing of full, partial, or upgrade install/uninstall processes.
- Recovery testing
- testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
- Security testing
- testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
- Compatibility testing
- testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
- Exploratory testing
- often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
- Ad-hoc testing
- similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
- User acceptance testing
- determining if software is satisfactory to an end-user or customer.
- Comparison testing
- comparing software weaknesses and strengths to competing products.
- Alpha testing
- testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
- Beta testing
- testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
- Mutation testing
- a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
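The mutation-testing idea above can be made concrete with a minimal Python sketch (the `clamp` function, its single hand-planted mutant, and the test cases are all invented for illustration): run the same test cases against the original code and a deliberately buggy variant; a useful test suite is one that "kills" the mutant by failing on it.

```python
# Minimal mutation-testing sketch: a test suite is judged by whether
# it detects ("kills") deliberately introduced bugs.

def clamp(x, lo, hi):
    """Original implementation under test."""
    return max(lo, min(x, hi))

def clamp_mutant(x, lo, hi):
    """Mutant: min and max swapped -- a deliberately planted bug."""
    return min(lo, max(x, hi))

TEST_CASES = [
    ((5, 0, 10), 5),    # inside the range
    ((-3, 0, 10), 0),   # below the range
    ((42, 0, 10), 10),  # above the range
]

def suite_passes(fn):
    """Run every test case against fn; True if all pass."""
    return all(fn(*args) == expected for args, expected in TEST_CASES)

original_ok = suite_passes(clamp)               # the real code passes
mutant_killed = not suite_passes(clamp_mutant)  # the suite catches the bug
print(original_ok, mutant_killed)               # True True
```

Real mutation tools generate many such mutants automatically, which is why the entry above notes the large computational cost.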
Archives for October 2007
- Boolean logic Minimization
- State machine design
- Synchronous circuit timing, races, testability
- Pipelines and hazards
- Processor block diagrams, Cache architecture
- Microarchitecture techniques
- CMOS gates, complex gates, Latch and FF design
- Regions of operation of a MOSFET, IV characteristics in different regions, transistor IV curves
- Transistor cross-sections
- Charge storage and how it impacts certain circuits
- Capacitive coupling
- Dynamic logic
- Sources of capacitive load, capacitances between terminals of a MOSFET
- Various kinds of inverters, switching delay, gain, wire delays
- Transistor sizing
What is the total test time?
Sol: 20.8 Secs
If you implement their suggestions, and make no other changes, what effect will this have on power? (NOTE: Based on the information given, be as specific as possible.)
Sol: Reducing the short-circuit time from 1 ns to 0.85 ns means reducing the rise/fall time. Hence, the new short-circuit power is 85% of the original.
A group of wannabe performance gurus claim that the above optimization can be used to improve performance by at least 15%. Briefly outline what their plan probably is, critique the merits of their plan, and describe any affect their performance optimization will have on power.
Sol: The plan was probably to increase the clock speed by 15%. However, reducing Tshort by 0.15 ns can decrease the clock period by at most 2 × 0.15 = 0.30 ns, while the clock period is >> 1 ns. Therefore the plan does not work.
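The arithmetic behind both answers can be checked in a few lines. The 1 ns and 0.85 ns figures come from the problem; the proportionality of short-circuit power to rise/fall time is the standard first-order model:

```python
# Short-circuit power is (to first order) proportional to the input
# rise/fall time, so shrinking Tshort from 1 ns to 0.85 ns scales it.
t_short_old = 1.00e-9   # seconds
t_short_new = 0.85e-9

power_ratio = t_short_new / t_short_old
print(f"new short-circuit power = {power_ratio:.0%} of original")  # 85%

# The performance claim: the best-case clock-period saving is the
# reduction on both the rising and the falling transition.
period_saving = 2 * (t_short_old - t_short_new)
print(f"max clock-period saving = {period_saving*1e9:.2f} ns")  # 0.30 ns
# Since the clock period is >> 1 ns, 0.30 ns is nowhere near a 15% speedup.
```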
Transistors have gotten smaller, die size has remained roughly the same size or even increased, clock speeds are increasing.
Signals are travelling roughly the same distance as before, but driving smaller capacitive loads. Thus, wire delay is not decreasing much, but capacitive-load delay is decreasing.
The clock period is decreasing, so the wire delay is taking up a larger percentage of the clock period and capacitive load delay is taking up a smaller percentage.
Increase transistor size so as to increase threshold voltage. This will require an increase in supply voltage, which will likely increase total power.
Alternative: When increasing transistor size, keep supply voltage the same, but decrease performance.
Alternative: Change fabrication process and materials to reduce leakage current. This will likely be expensive.
Alternative: Use dual-Vt fabrication process.
Short-circuit power will increase because:
- As temperature increases, atoms vibrate more, and so have a greater probability of colliding with the electrons that carry the current.
- This increases resistivity, which increases delay.
- Signals will rise and fall more slowly, which will increase the short-circuit time, and hence increase the short-circuit power.
Your comments, feedback & suggestions will be greatly appreciated.
Chapter 2: Language Basics, Net & Register Data Types
Chapter 3: Verilog Simulations & Display Commands
Chapter 4: Continuous Assignments, Time Delays & Timescales
Chapter 5: Verilog Operators, Timing Controls, Decisions & Looping Statements
Chapter 6: RTL Models of Combinational Logic, Interactive Debugging
Chapter 7: RTL models of Sequential Logic, Behavioral Models of RAMs & ROMs
Chapter 8: Modeling Structural Designs
Chapter 9: Blocking & Non Blocking Assignments, State Machine Designs
Chapter 10: File I/O, Test benches, Introduction to Synthesis Design Flows
Chapter 11: Additional Behavioral Commands, Verilog Strength Handling
Chapter 12: Verilog Gate Primitives, user defined Primitives
Chapter 13: Specify Blocks, SDF Back Annotation
Chapter 14: Switch Primitives, Passive Device Modeling
Thanks to Jim Blake of Centillium, Raghav Santhanam of Synopsys & Pedro of Texas A&M University for the Tutorials and other numerous contributions.
What is your response to each question? What is the justification for your answer? What are the trade-offs between the two options?
- Should all projects use an asynchronous reset signal, or should all use a synchronous reset signal, or should each project choose its own technique?
- Should all projects use Latches, or should all use Flip Flops, or should each project choose its own technique?
- Should all chips have registers on the inputs and outputs or should chips have the inputs and outputs directly connected to combinational circuitry, or should each project choose its own technique? By "register" we mean either flip-flops or latches, based upon your answer to the previous question. If your answer is different for inputs and outputs, explain why.
- Should all circuit modules on all chips have flip-flops on the inputs and outputs, or should modules have their inputs and outputs directly connected to combinational circuitry, or should each project choose its own technique? By "register" we mean either flip-flops or latches, based upon your answer to the previous question. If your answer is different for inputs and outputs, explain why.
- Should all projects use tri-state buffers, or should all projects use multiplexers, or should each project choose its own technique?
Sol to 1:
Synchronous reset: Synchronous reset leads to more robust designs. With asynchronous reset, a flop is reset whenever the reset signal arrives. Due to wire delays, the signal will arrive at different flops at different times. If an asynchronous reset occurs at about the same time as a clock edge, some flops might be reset in one clock cycle and some in the next. This can lead to glitches and/or illegal values on internal state signals.
The tradeoff is that asynchronous reset is often easier to code in VHDL and requires less hardware to implement.
Sol to 2:
Flip flops lead to more robust designs than latches. Latches are level sensitive and act as wires when enabled. For a latch based design to work correctly, there cannot be any overlap in the time when a consecutive pair of latches are enabled. If this happens, the value on a signal will “leak” through the latch and arrive at the next set of latches one clock phase too early. Thus, latch based designs are more sensitive to the timing of clock signals. Another disadvantage of
latches is that some FPGAs and cell libraries do not support them. In comparison, D-type flip flops are (almost?) always supported.
The tradeoff is that latches are smaller and faster than flip flops. A common implementation
of a flip-flop is a pair of latches in a master/slave combination.
Sol to 3:
Putting flops on inputs and outputs will make the clock speed of the chip less dependent on the propagation delay between chips. Flops can also be used to isolate the internals of the chip from glitches and other anomalous behaviour that can occur on the board.
The tradeoff is that flops consume area and will increase the latency through the chip.
Sol to 4:
Each project should adopt a convention of either using flops on inputs of modules or outputs of modules. It is rarely necessary to put flops on both inputs and outputs of modules on the same chip. This is because the wire delay between modules is usually less than a clock period. Putting flops on either the inputs or outputs is advantageous because it provides a standard design convention that makes it easier to glue modules together without violating timing constraints. If modules were allowed to have combinational circuitry on both inputs and outputs, the maximum
clock speed of the design could not be determined until all of the modules were glued together.
The tradeoff is that flops add area and latency. Sometimes there will be two modules where the combinational circuitry on the outputs of one can be combined with the combinational circuitry on the inputs of the second without violating timing constraints. This discipline prevents that optimization.
Aside: Sometimes, to meet performance targets in situations such as this, a project will remove or move the flops between modules and do "clock borrowing" to fit the maximum amount of circuitry into a clock period. This is a rather low-level optimization that happens late in the design cycle. It can cause big headaches for functional validation and equivalence verification, because the specifications for modules are no longer clean and the boundaries between modules in the low-level design might be different from the boundaries in the high-level design.
Sol to 5:
Multiplexers lead to more robust designs. Tri-state buffers rely on analog characteristics of devices to work correctly, and can work incorrectly in the presence of voltage fluctuations or fabrication-process variations. Multiplexers work on a purely Boolean level and as such are less sensitive to changes in voltages or fabrication processes.
The tradeoff is that tri-state buffers are smaller and faster than multiplexers.
Instruction P : (i+j+k+l)*m
Instruction Q: a*b*((a*b)+(b*d)+e)
P's Frequency of occurrence is 75%
Q's Frequency of occurrence is 25%
2 i/p Mult - 40 ns
2 i/p Add - 25 ns
Register - 5ns
- There is a resource limitation of a maximum of 3 input ports, i.e., you can assume other inputs to be internal signals in the expression. (There are no other resource limitations.)
- You must put registers on the inputs not the outputs.
- The environment will directly connect your outputs (its inputs) to registers, so no additional register is counted.
- Each input value (a, b, d, e, i, j, k, l, m) can be input only once — if you need to use a value in multiple clock cycles, you must store it in a register.
- What is the fastest execution time (for the mixture of Instruction P and Instruction Q given above) that you can achieve for this design, and what clock period do you need to achieve it?
- Find a minimal set of resources that will achieve the performance you calculated.
Sol to 2: Registers 3, Adders 2, Multipliers 1
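The execution-time question above reduces to a weighted average over the instruction mix. The cycle counts below are illustrative placeholders (the real counts depend on the schedule you derive under the 3-input-port and register constraints, which the source does not spell out), but the formula is the one to apply. The clock period assumes at most one chained operator per cycle, so the slowest resource (the 40 ns multiplier) plus a 5 ns register sets it:

```python
# Weighted-average execution time for the instruction mix P (75%) / Q (25%).
T_MULT, T_ADD, T_REG = 40e-9, 25e-9, 5e-9

# Clock period: slowest operator plus a register, assuming one operator
# per cycle (no operator chaining within a cycle).
clock_period = T_MULT + T_REG  # 45 ns

# Hypothetical cycle counts -- replace with the counts from your schedule.
cycles_P = 4   # placeholder for (i+j+k+l)*m under the port limits
cycles_Q = 5   # placeholder for a*b*((a*b)+(b*d)+e)

avg_time = 0.75 * cycles_P * clock_period + 0.25 * cycles_Q * clock_period
print(f"average instruction time = {avg_time*1e9:.2f} ns")
```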
- All of the loads of apples must be carried using the same truck
- Elapsed time is counted from beginning to deliver first load to returning to the orchard after the last load
- Ignore time spent loading and unloading apples, coffee breaks, refueling, etc.
- Which truck will result in the least elapsed time and what percentage faster will
the elapsed time be?
- In planning ahead for next year, is there anything the farmer could do to decrease
his delivery time with little or no additional expense? If so, what is it, if not, explain.
Sol for 2: Use two drivers; use a combination of the small truck and the large truck to improve utilization.
That's for another day, though -- for now, onnentoivotus (best wishes)!
Skepticism is a method
Cynicism is a position!
All good companies have well thought-out processes
Don't let it get in the way of doing what is right!
If you are looking for free E-Books, you are in the wrong place!
We only feature some good E-Books which you can purchase!
To support this portability, module boundaries must be enforced at the power-domain level. That is, a given module should belong to a single power domain, not be split across several domains. Some tools and flows support assigning individual RTL processes to power domains, but this leads to much more complicated implementation and analysis. Clean visibility of the boundaries of a power-gated block is key to having a clean, top-down implementation and verification flow.
Although one can in theory nest power-gated modules arbitrarily within power-gated subsystems that are in turn nested on a shared switched power rail, there are considerable benefits in not creating multiple levels of power-switching fabric. Power gating is intrusive and adds some voltage drop and degradation of performance. Cascading multiple voltage drops can lead to unacceptable increases in delay. Even if the design is represented as hierarchical at the architectural level, the implementation is improved if this is mapped onto a single level of power gating.
- Map power-gated regions to explicit module boundaries.
- When partitioning a hierarchical power-gating design, ensure that the power-gating control terms can be mapped back to a flat switching fabric.
- Avoid control signals passing through power-gated or powered-down regions to other power regions that are not hierarchically switched with the first region.
- Avoid excessively fine power-gating granularity unless absolutely required for aggressive leakage-power management. Every interface adds implementation and verification challenges and complicates the system-level production-test challenges.
- Avoid a power-gating system of more than one or two levels.
To support power gating, we need to:
- Decide when and how the IP will be powered down and powered up.
- Decide which blocks will be power gated and which blocks will be always on.
- Design a power controller that controls the power up and power down sequence.
- Determine which signals need to be isolated during power down.
- Develop an initial strategy for clocks, resets, and the power-control signals.
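The sequencing the power controller in the list above must enforce can be sketched in software. The step names and this particular ordering (isolate, then save state, then gate clocks, then reset and cut power, with the reverse on wake-up) are one common convention rather than a universal rule; real flows vary:

```python
# Sketch of a power controller's power-down / power-up sequencing.
POWER_DOWN_SEQ = [
    "assert_isolation",   # clamp outputs before the rail collapses
    "save_state",         # retention flops capture the important state
    "gate_clocks",        # stop clocks to avoid glitches during switch-off
    "assert_reset",       # hold the block in reset across the power-off
    "power_off",          # open the power switches
]

# Power-up reverses the sequence, with each step's inverse action.
POWER_UP_SEQ = [s.replace("assert", "release")
                 .replace("save", "restore")
                 .replace("off", "on")
                 .replace("gate", "ungate")
                for s in reversed(POWER_DOWN_SEQ)]

def run(sequence):
    for step in sequence:
        print(step)  # in real hardware each step waits for an acknowledge

run(POWER_DOWN_SEQ)
run(POWER_UP_SEQ)
```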
SOWs are necessary for formal agreements
A good working relationship will surmount more barriers!
Let me try to explain each one of them based on my recent design experience.
For regular logic blocks, there are multiple ways to wake up faster without losing much information.
- Use retention flops to save the state of some important registers - for example, the state of the control block, which forms the heart of the whole system.
- If the chip is aimed at at-speed testing, the design's scan chains can be used to scan the data out to an external memory and scan it back in after wake-up. This may not be as fast as using retention flops.
- ….. there are many more possible methods.
Again, w.r.t. retention flops, there were questions about how many types of retention flops are available.
I have seen three types:
- Single save/restore pin retention latch (Slave latch being always on)
- Single pin balloon Latch
- Dual Pin balloon Latch
Advantages of Single Pin:
- Minimal area impact
- Single signal controls retention
Disadvantages of Single Pin:
- Performance Impact on the register
- Hold Time requirements for the input data
Advantages of Dual Pin:
- Minimal leakage power
- Minimal performance impact compared to the Single Pin design
- Minimal dependency on the clock for the control signals.
Disadvantages of Dual Pin:
- Area Impact
- More Complex System Design
- Larger buffer network and always-on (AON) network required.
While designing systems with DVFS techniques, we need to look at the impact of temperature inversion on the performance of the design. An important criterion when selecting voltages and frequencies for a design: one must choose a range over which delay consistently increases or decreases with voltage.
What does this mean?
We must always operate above the temperature inversion point.
Especially in low-power UDSM processes, the combined use of reduced VDD and high threshold voltages may greatly modify the temperature sensitivity of the design. Because of this, worst-case timing is no longer guaranteed at the highest temperature, so in order to guarantee correct behavior one has to verify the design at various PVT corners. This leads to a significant increase in the total turnaround time of the design.
In a nutshell, delay increases as temperature increases, but below a certain voltage this relationship inverts and delay starts to decrease as temperature increases. This is a function of threshold voltage (both threshold voltage and carrier mobility are temperature dependent). Due to this threshold-voltage dependency, we have observed non-critical paths suddenly become critical.
Having said this, once the voltage/delay relationship is no longer monotonic, voltage scaling becomes a nightmare to implement and verify.
Note: if both threshold voltage and carrier mobility decrease monotonically with increasing temperature, the operating-voltage range defines the performance of the design.
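The inversion described above can be illustrated with the alpha-power delay model, delay ∝ V/(V − Vt)^α, where Vt drops and mobility degrades as temperature rises. The parameter values below are made up purely to show the effect; they are not taken from any real process:

```python
# Alpha-power-law sketch of temperature inversion (illustrative numbers only).
def delay(vdd, temp_c, vt0=0.45, k_vt=8e-4, alpha=1.3, k_mu=2e-3):
    """Relative gate delay vs supply voltage (V) and temperature (deg C).

    vt0, k_vt, alpha, k_mu are invented illustrative parameters:
    Vt falls with temperature (helps speed), while carrier mobility
    falls with temperature (hurts speed).
    """
    vt = vt0 - k_vt * (temp_c - 25)          # threshold drops as T rises
    mobility = 1.0 - k_mu * (temp_c - 25)    # mobility degrades as T rises
    return vdd / (mobility * (vdd - vt) ** alpha)

# At a high supply, the mobility term dominates: hotter => slower.
assert delay(1.2, 125) > delay(1.2, 25)
# At a low supply near Vt, the Vt drop dominates: hotter => faster.
assert delay(0.6, 125) < delay(0.6, 25)
print("temperature inversion reproduced")
```

This is exactly why a path that has slack at the hot corner can become critical at the cold corner once the supply is scaled down, forcing sign-off at multiple PVT corners.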
My advice is: if you have a multilayer board, then fill the layers with copper and connect it to GND, unless you have some very sensitive HF analog ICs where feedback is absolutely unwanted. Use at least two vias to the nearest GND layer and add one via for every square centimeter of copper area you have. This copper fill will give you better shielding of your signals, the ICs will seem less noisy and less sensitive to EMI due to the tighter coupling to GND, and you will get better compliance with the EMC standards.
However keep the areas under the ICs currentless - don't try to use them for decoupling or anything like this.
Massaging the truth can get you out of an immediate crisis
Hidden agendas never remain hidden for long!
Technical competence is a must
It's the "passion" that makes the difference!
Watch out for more of these Recipes...
Choose your words with care; phrases like "…will never work" or "…is useless" will spoil your cooperation with the designer(s) far beyond your current project.
Never count the number of points in your feedback to the designer, or, even worse, return pages of long numbered lists. Never routinely label your points design "errors". If you need to be able to refer to the points on your lists, use alphabetically labeled lists. You don't get paid for proving the incompetence of the designer but for improving the quality of the final product, thereby saving your company thousands of dollars.
Remember: the designer is not stupid. Engineers make a lot of errors precisely when they are experienced and "know for sure" - and take things for granted.
If you try to insist on or argue the designer into your point of view, you will very soon have an enemy among your colleagues at work. By giving him less emphatic feedback, his own responsibility and curiosity will make him pick up the important points, and at the end of the day you "save his ass" and gain a friend.
Design review meetings are not a modern pillory. Only designer(s) and reviewer(s), all working on the project, should attend the meeting.
There are various voltage scaling approaches in use today:
Static Voltage Scaling: different blocks in the design operate at different fixed supply voltages.
Multi-level Voltage Scaling: an extension of static voltage scaling wherein different blocks are switched between two or more voltage levels.
Dynamic Voltage and Frequency Scaling: an extension of multi-level voltage scaling in which voltage levels are varied dynamically according to the workload of the block.
Adaptive Voltage Scaling: an extension of DVFS, and a closed-loop version of it - a power-controller block within the design adapts dynamically to varying workloads.
DVFS example: here is an outline of the tasks executed within a design to scale voltage and frequency dynamically. The controller first decides the minimum clock speed that meets the workload requirements; it then determines the lowest supply voltage that will support that clock speed. Given below is an example of the sequence that is followed when the target frequency is higher than the current frequency:
– Controller monitors the variance in work-load
– Controller detects the variation in workload and programs the device to operate at a different voltage
– Block under question continues operating at the current clock frequency until the voltage settles to the new value
– Controller then programs the desired pre-determined clock frequency
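The steps above (for a frequency increase: raise the voltage first, wait for it to settle, only then raise the frequency) can be sketched as a small controller. The operating-point table and the step names are hypothetical stand-ins for whatever register interface a real power controller exposes:

```python
# DVFS "frequency up" sequence: voltage must settle before the clock speeds up.
# (For a frequency decrease the order reverses: drop frequency first, then voltage.)

# Hypothetical operating-point table: target frequency -> minimum supply voltage.
OPP_TABLE = {100e6: 0.9, 200e6: 1.0, 400e6: 1.1}  # illustrative values only

def scale_up(current_hz, target_hz, log):
    """Record the voltage-then-frequency ordering for a frequency increase."""
    assert target_hz > current_hz, "this sequence is for frequency increases"
    target_v = OPP_TABLE[target_hz]           # lowest Vdd supporting target_hz
    log.append(("set_voltage", target_v))     # 1. program the new voltage
    log.append(("wait_voltage_settled",))     # 2. keep running at the old clock
    log.append(("set_frequency", target_hz))  # 3. only now raise the clock
    return log

steps = scale_up(100e6, 200e6, [])
print(steps)
```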
Varying clocks and voltages during operation is a new methodology and leads to many challenges in the design process:
– Identifying the optimal combination of Voltage/Frequency
– How to model the timing behavior
– Clock and Power Supply locking times.