VLSID 2010 Conference (Bangalore): Registrations open!

VLSID 2010, Asia’s premier technical conference on VLSI design, EDA and embedded systems, will be held at the NIMHANS Convention Center, Bangalore, India from January 3 - 7, 2010. The conference agenda is now live and can be found on the conference website!

VLSID 2010 features an exciting lineup of seventy technical papers, eight distinguished keynote speakers, and nine invited embedded tutorials/hot topic presentations on the three days of the conference (January 5 - 7). On the two days preceding the conference (January 3 - 4), there are eight tutorials, including one hands-on tutorial being organized for the first time in the history of the conference. Other events happening concurrently with the main conference include industry and education forums, exhibits, and a Design/EDA/Systems contest.

The conference call for participation is available for viewing!

You can now register online and avail of early bird registration rates. If you need any further information, please feel free to write to organizing committee member Mohammed Hussain at Mohammed.Hussain@synopsys.com

What should you do when people give negative feedback about you?

For most of us, the first reaction is either to defend ourselves with a cause or justification, or to simply acknowledge the feedback and move on. Sometimes we do not know how to respond at all, and we may not even be sure we have the capability to tackle it in a professional and meaningful way.
Regardless of whether or not you feel equipped to respond, here are some ways to look at it:

1. Is the feedback legitimate? There are many instances where the negativity actually has some merit. It's hard for everybody to have a pristine experience when issues pop up. In many instances, the negative feedback is not about the overall work quality or efficiency, but is an exception to the rule. If the negative feedback is legitimate, it does require some kind of response. Does it require a personal response in every instance? Not necessarily. As long as the response is communicated in a human, personal and mutually agreeable way, it can correct the course.

2. Is the person crazy? Don't laugh. It is possible. We've all read peer reviews and marvelled at how someone's review of a work package has no real attachment to the reality we all share. The world is full of crazy people who are just looking for a soapbox to be heard or a cause to take on. In this instance, you have to tread carefully. Responding may open up a can of worms that will see no end and no reason. Not responding might only aggravate the individual. These are special, case-by-case instances, and they might require something more traditional - like a phone call - to try and resolve the scenario. If you get a mixed bag of positive and negative feedback very frequently from the same person, it is just time to re-think your strategies.

3. Is apologizing an option? Apologies definitely go a long way. But be diplomatic, and don't give away too much!

4. Should you just forget about it and move on? There are many schools of thought on this. Some people say you have to respond to each and every piece of feedback (both positive and negative), some argue that you should only respond to feedback that really has some kind of impact, and then there is the group that simply sits back and lets it fly without ever responding. Your mileage may vary. The scenario, the type of feedback and the voices behind the noise will determine how best to respond. It is usually good to respond in some fashion so that your own point of view is - at least - part of the conversation.

5. Should you respond to everything? It's very easy to respond to the good stuff; it is hard (and time consuming) to respond to the negative with justifications. The answer to this one ties into #4. In a perfect world, yes - respond to everything (with the exception of the people in #2). In responding, you're not just answering one individual's gripe; you're better able to reflect on how your brand "lives" in people's minds, and I believe this will make you a better marketer, a better communications professional and a better brand.

Fedora Electronic Lab

Fedora Electronic Lab (FEL) sets out to fix one big problem in the opensource community: no one provides opensource EDA solutions for real-life design work. Although it is one problem, it is very complex in itself. In real life, designers use EDA software to design chips or circuit boards, so a designer requires a set of hardware design tools to design his/her chips. However, the same set of hardware design tools does not apply to every hardware design project.
FEL is the brainchild of Chitlesh Goorah (Interview @ http://fedoraproject.org/wiki/Interviews/FEL)...

FEL is:
* Fedora's EDA portfolio,
* an opensource EDA provider and
* opensource EDA community builder.

* Deployable in both development and production environments.
* No kernel patches are required, making it easy to deploy and use.
* No licenses required and it is free.

Main Highlights:
"Fedora Electronic Lab" targets mainly the Micro-Nano Electronic Engineering field. It introduces:
* a collection of Perl modules to extend Verilog and VHDL support.
* tools for Application-Specific Integrated Circuit (ASIC) Design Flow process.
* extra standard cell libraries supporting a feature size of 0.13 µm. (more than 300 MB)
* extracted spice decks which can be simulated with gnucap/ngspice or any spice simulators.
* interoperability between various packages in order to achieve different design flows.
* tools for embedded design and to provide support for ARM as a secondary architecture in Fedora.
* tool set for Openmoko development and other opensource hardware communities.
* a Peer Review Web-based solution coupled with Eclipse IDE for Embedded/Digital Hardware IP design.
* PLA tools, C-based design methodologies, simulators for 8051 and 8085 microcontrollers and many more ...

The FEL live CD can be downloaded here.

Did not find what you were looking for?

Did you try using our custom search box at the top of the page? Still not found? Please send us an email and we will author an article on that missing topic.

Flash Memories - Types

Although all flash memories use the same basic storage cell, there are a number of ways in which the cells can be interconnected within the overall memory array. The two most prominent architectures are known as NOR and NAND; these terms, derived from traditional combinatorial logic, indicate the topology of the array and the manner in which individual cells are accessed for reading and writing. Initially, there was a basic distinction between these two fundamentally different architectures, with NOR devices exhibiting inherently faster read times and NAND devices offering higher storage densities (because the NAND cell is about 40% smaller than the NOR cell). NOR Flash memories are considered to be the best choice for densities up to 256 Mbits, while NAND types are preferred for 512 Mbits and up. This is the best compromise between large data storage capacity and cell size - and consequently, final die size.

NOR Flash Memory:
NOR-type Flash memories are based on technologies that evolved largely from the first non-volatile memory technologies. They are typically organized as a number of blocks between 16 Kbytes and 128 Kbytes, each of which can be individually erased or programmed. The architecture can be either uniform, if all of the blocks are the same size, or asymmetrical, when the blocks vary in size. The array can be organized as a single piece of memory or split into dual or multiple banks, and in some cases one block (called the boot block), located at the top or the bottom of the address space, is dedicated to the storage of the boot code. NOR Flash memories usually offer random access for reading at byte/word level, and sometimes a page access mode that allows an entire page of 2 to 4 words to be read in one go. When very rapid read operations are required, the Flash memory is equipped with a burst read mode, which allows data to be transferred on every clock cycle.
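The block organization described above can be made concrete with a toy address-to-block lookup. The layout below is a minimal sketch with invented block sizes (one small boot block, one parameter block, then uniform main blocks); it is not taken from any particular device.

```python
# Hypothetical asymmetric NOR flash layout: a 16 KB boot block at the
# bottom of the address space, a 48 KB parameter block, then uniform
# 64 KB main blocks.  All sizes are illustrative only.
BLOCKS = [16 * 1024, 48 * 1024] + [64 * 1024] * 7   # 512 KB device total

def block_of(addr):
    """Return (block_index, block_base_address) for a byte address."""
    base = 0
    for i, size in enumerate(BLOCKS):
        if addr < base + size:
            return i, base
        base += size
    raise ValueError("address beyond end of device")

print(block_of(0x00000))  # (0, 0)      -> erasing here wipes the boot block
print(block_of(0x10000))  # (2, 65536)  -> first uniform 64 KB block
```

Since erasure works on whole blocks, a firmware update that touches any byte in a block forces an erase of that entire block, which is why small boot and parameter blocks are placed where frequently updated data lives.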

Parallel and Serial Interface:
Parallel buses were primarily used to interface flash memories with microcontrollers and microprocessors through an address bus, a data bus and a control bus. By default, the term "Flash memory" refers to a parallel interface memory. The data bus can be organized as x8 bits, x16 bits or x32 bits. In some cases, address and data buses can be multiplexed. They are available in densities of up to 128 Mbits. Because of their rapid read times, Flash memories are traditionally used for basic code or code-plus-parameter storage where greater flexibility compared to EPROM is more important than the additional unit cost. More recently, they have pervaded many new applications where their key function is to store both code and data. This was achieved through dual operations supported by dual or multiple bank architectures, which enable programming/erasing operations in one bank while reading from another bank.

The serial bus is used to connect a Flash memory to a microcontroller or an ASIC equipped with a serial bus. Serial buses are input/output interfaces supporting a mixed address/data protocol. The serial bus connectivity reduces the number of interface signals required. For example, the SPI bus, the most popular serial bus for serial Flash memories, requires only 4 signals (data in, data out, clock and chip select) compared to the 21 signals necessary to interface a 10-bit address parallel memory. As a result, the number of pins of the package (on both the memory and the bus master) is reduced, as is the number of PCB tracks. Consequently, a serial memory can fit into a smaller and less expensive package. However, serial Flash memories are available in lower densities than parallel Flash memories, and the communication throughput between a serial Flash memory and the master processor is lower. Consequently, the time to download code into the serial memory and execute it from the memory is longer.
As a result, serial Flash memories are usually used for small code storage associated with a cache RAM. This is called a code shadowing architecture. The executable code is first programmed into the memory, which is then write-protected. After power-up, the code is downloaded from memory to RAM, from where it is executed by the master processor.
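To make the mixed address/data protocol concrete, here is a minimal sketch (in Python, purely illustrative) of the byte sequence a bus master shifts out to start a read on a typical SPI NOR flash. The 0x03 READ opcode followed by a 24-bit address is common across many SPI NOR parts, but exact command sets vary by vendor, so always check the device datasheet.

```python
def spi_read_header(address):
    """Byte sequence that starts a basic SPI NOR flash READ transaction.

    0x03 is the classic READ opcode on many SPI NOR devices, followed by
    a 24-bit address sent most-significant byte first.
    """
    return bytes([
        0x03,                    # READ opcode
        (address >> 16) & 0xFF,  # address bits 23..16
        (address >> 8) & 0xFF,   # address bits 15..8
        address & 0xFF,          # address bits 7..0
    ])

# After these 4 bytes are shifted out on 'data in', the device streams the
# memory contents back on 'data out', one byte per 8 clocks, until chip
# select is deasserted.
print(spi_read_header(0x012345).hex())  # 03012345
```

Note how address and data share the same two wires in time, which is exactly why only 4 signals are needed instead of separate address, data and control buses.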

Flash Memories - Introduction

Flash memory is a type of electronic memory increasingly used in a wide range of communications, consumer, computer and peripheral, and automotive applications, but which relatively few semiconductor companies can produce in volume at the low cost equipment manufacturers require. Flash memory belongs to the class of semiconductor memories called non-volatile memories, of which it is the most dynamic driving force. Semiconductor memories can be divided into two different types: those that can only retain data stored in them while they are connected to a battery or some other source of electrical power (volatile), and those that retain their data even if their power supply is removed (non-volatile).

Flash memories can be electrically erased and it is not necessary to erase the whole memory array in order to store new data in part of it. Flash memory, EPROM and EEPROM devices all use the same basic floating gate mechanism to store data, but they use different techniques for reading and writing data.

In each case, the basic memory cell consists of a single MOS transistor (MOSFET) with two gates:
• a control gate connected to the read/write control circuitry
• a floating gate located between the control gate and the channel of the MOSFET (the part of the MOSFET through which electrons flow between the so-called Source and Drain terminals).

In a standard MOSFET, a single Gate terminal controls the electrical resistance of the channel: electrical voltage applied to the gate controls how much current can flow between the Source and Drain. The MOSFETs used in non-volatile memories include a second gate that is completely surrounded by an insulating layer of silicon dioxide, i.e., it is electrically isolated from the rest of the circuitry. Because the floating gate is physically very close to the MOSFET channel, even a small electric charge has an easily detectable effect on the electrical behavior of the transistor. By applying appropriate signals to the control gate and measuring the change in transistor behavior, it is possible to determine whether there is an electrical charge on the floating gate. Because the floating gate is electrically isolated from the rest of the transistor, special techniques are required to move electrons to and from the floating gate.

One method is to fill the MOSFET channel with high-energy electrons by making a relatively high current pass between the drain and the source of the MOSFET. Some of these "hot" electrons have sufficient energy to cross the potential barrier between the channel and the floating gate. When the high current in the channel is removed, these electrons remain trapped in the floating gate. This is the method used to program the memory cells in EPROM and Flash memories. This technique, known as Channel Hot Electron (CHE) injection, can be used to load an electrical charge onto the floating gate, but does not provide a way to discharge it. EPROM technology achieves this by flooding the entire memory array with ultra-violet light; the high-energy light rays penetrate the chip structure and impart enough energy to the trapped electrons to allow them to escape from the floating gate.

The second method of moving charge to and from the floating gate uses the quantum mechanical effect known as tunneling. In this method, a voltage applied between the MOSFET control gate and the source or the drain is large enough to cause electrons to 'tunnel' across the insulating oxide layer. The number of electrons that can tunnel across an insulating layer in a given time depends on the thickness of the layer and the value of the applied voltage. To meet realistic voltage levels and erase-time constraints, the insulating layer must be very thin, typically 7 nm (70 Angstroms).

EEPROM memories use tunneling to charge and discharge the floating gate according to the polarity of the applied tunneling voltage. A Flash memory can therefore be considered to be a memory device that is programmed like an EPROM and erased like an EEPROM, although there is much more to Flash technology than simply grafting the EEPROM erase mechanism onto EPROM technology.

The most important difference between EPROM and the other two processes lies in the thickness of the oxide layer that separates the floating gate from the source. In an EPROM, this is typically 20-25nm, but this is far too thick to allow tunneling to take place at an acceptable rate with a practical voltage level. For Flash memory, tunnel oxide thickness of around 10nm is required, and the quality of this oxide layer has a dramatic effect on the performance and reliability of the device. This is one of the reasons that relatively few semiconductor manufacturers have mastered Flash technology and even fewer have been able to reliably combine Flash technology and mainstream CMOS processes to build products such as microcontrollers with embedded Flash memory.

In the next post on this series, we will look at NAND, NOR, Parallel & Serial Flashes!

Researchers expand clinical study of brain implant

We are excited to see that the BrainGate Neural Interface System is moving to phase-II clinical testing.
BrainGate is:
A baby aspirin-size brain sensor containing 100 electrodes, each thinner than a human hair, that connects to the surface of the motor cortex (the part of the brain that enables voluntary movement), registers electrical signals from nearby neurons, and transmits them through gold wires to a set of computers, processors and monitors. The goal is for patients with brain stem stroke, ALS, and spinal cord injuries to eventually be able to control prosthetic limbs directly from their brains.

An earlier version of the BrainGate system helped a young tetraplegic named Matt Nagle control a mouse cursor and operate a very basic prosthetic hand. Erik Ramsey, a 25-year-old locked-in patient, is participating in the only other FDA-approved clinical trial of a brain-computer interface. Ever since a car accident nine years ago, the only part of Erik's body that has been under his control has been his eyeballs, and even those he can only move up and down. The hope is that he might someday use his neural implant to control a digital voice:
When Erik thinks about puckering his mouth into an o or stretching his lips into an e, a unique pattern of neurons fires--even though his body doesn't respond. It's like flicking switches that connect to a burned-out bulb. The electrode implant picks up the noisy firing signals of about fifty different neurons, amplifies them, and transmits them across Erik's skull to two small receivers glued to shaved spots on the crown of his head. Those receivers then feed the signal into a computer, which uses a sophisticated algorithm to compare the pattern of neural firings to a library of patterns Kennedy recorded earlier. It takes about fifty milliseconds for the computer to figure out what Erik is trying to say and translate those thoughts into sound.
Like the BrainGate sensor, Erik's neural implant was inserted into the motor cortex (in his case, the specific region that controls the mouth, lips, and jaw). But Erik's implant only has a single electrode, whereas the BrainGate has 100, which means it should, theoretically, be able to differentiate signals from a far greater number of neurons.
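For the curious, pattern matching of the kind described can be sketched as a nearest-template lookup. This is only an illustrative stand-in for the "sophisticated algorithm" mentioned above: the similarity measure, the phoneme labels and the firing-rate numbers below are all hypothetical, and the real decoder is far more elaborate.

```python
import math

def closest_pattern(firing, library):
    """Return the label of the stored pattern most similar to the observed
    firing-rate vector, using cosine similarity as a toy distance measure."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return max(library, key=lambda label: cosine(firing, library[label]))

# Hypothetical average firing rates (3 neurons shown; the article
# describes about fifty) recorded earlier for each intended phoneme.
library = {
    "o": [0.9, 0.1, 0.4],
    "e": [0.2, 0.8, 0.5],
}

print(closest_pattern([0.85, 0.15, 0.45], library))  # o
```

The key idea is the same as in the article: a new, noisy burst of neural activity is compared against a pre-recorded library, and the best-matching entry determines the output sound.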

Hot papers at 2009 VLSI Technology Symposium

Hot papers from this year's VLSI Technology Symposium include three nonvolatile memory advancements: Toshiba's BiCS Flash, Samsung's vertical-stacked transistor structures and Hitachi's PCRAM. Two papers on advanced logic processes include Intel's "High-k/Metal Gate Stacks" and IBM's "32nm SOI CMOS with High-k/Metal Gate."

Low-cost phones, emerging markets to drive handsets sector

With developed markets saturated and shifting mostly high-end handsets, and mid-tier phone providers continuing to struggle, market tracker Juniper Research suggests low-cost devices sold into emerging markets will be the only ray of hope in the short term.

Intel Eyes Smartphone Chip Market

Intel has been rather successful at carving out a larger percentage of the netbook market with its low power Atom processor. Moving forward, Intel's executives believe there is good potential to increase Atom's traction in adjacent markets by targeting its low-cost, energy-efficient chips at various multifunctional consumer gadgets, including smartphones and other portable devices that access the Internet. Code-named Moorestown, a new version of the chip will offer a 50x power reduction at idle and reportedly will deliver enough horsepower to handle 720p video recording and 1080p-quality playback. It is with this upcoming chip that Intel will begin targeting the smartphone market in 2011. Intel also plans to introduce an even smaller, less power-hungry version of the chip known as Medfield, which will be built on a 32nm process with a full solution comprising a PCB area of about half the size of a credit card. [Via Slashdot]

Nokia Developing Wireless, Accessory-Free Ambient Charging

Engineers at Nokia have hatched a plan for a system that'll charge phones using nothing more than ambient electromagnetic radiation, or, as you and I might put it, electricity sucked from thin air.

It sounds a little sci-fi at first, but it's not: RFID tags are powered by electrical signals converted from electromagnetic waves emitted by a nearby sensor machine, which is exactly how this system is said to work. The thing is, the amount of electricity involved here is tiny, and Nokia's system won't even have a base station—it'll draw from ambient electromagnetic waves, meaning Wi-Fi, cell towers and TV antennae. Nokia hopes to harvest about 50 milliwatts—not quite enough to sustain a phone, but enough to mitigate drain, and slowly charge a handset that's been switched off.
Current prototypes only gather about 5 milliwatts, which is essentially useless, and scientists and industry experts just don't see the technology maturing to the point that Nokia wants it to, at least in the near future. But the company's researchers are standing strong:
I would say it is possible to put this into a product within three to four years.
If you believe them, this is pretty exciting: maybe not as a primary charging mechanism, but as a battery extender. [Technology Review—Image from Technology Review]
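The gap between the 5 mW prototype and the 50 mW target is easy to quantify with a back-of-the-envelope calculation. The battery capacity below is our assumption (a typical ~1000 mAh handset cell of the era), not a figure from the article.

```python
battery_wh = 3.7            # assumed: ~1000 mAh battery at 3.7 V = 3.7 Wh
for harvest_mw in (5, 50):  # current prototype vs Nokia's stated target
    hours = battery_wh / (harvest_mw / 1000.0)
    print(f"{harvest_mw} mW -> about {hours:.0f} hours to fill from empty")
```

Even at the 50 mW target, a full charge would take about three days of continuous harvesting, which is why the realistic role is offsetting standby drain and trickle-charging a switched-off handset rather than replacing the wall charger.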

Moore's Law: 43 Years and counting

In 1965, Gordon Moore sat down to pen his article for Electronics Magazine, and it was then that he identified some fundamental drivers of integrated circuits. Little did he know how powerful his vision would be, or the longevity of what others would come to call a law. Forty years later, in celebration of the law's anniversary, the Semiconductor Industry Association devoted its annual report to Moore's law. They searched the world for the top two Moore's law scholars: one from industry and one from academia. They then commissioned these scholars to write two papers describing Moore's law, its history, its economics and its impact on the world.

Moore's original statement that transistor counts had doubled every year can be found in his publication "Cramming more components onto integrated circuits", Electronics Magazine 19 April 1965:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
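Moore's arithmetic in the quote is easy to check: a circuit of roughly 64 components in 1965, doubling every year for ten years, lands almost exactly on his 65,000 figure. (The 1965 starting count is our assumption, chosen because it is the power of two consistent with his projection.)

```python
components_1965 = 64               # assumed starting complexity, ~2**6
years = 1975 - 1965                # ten annual doublings
projection = components_1965 * 2 ** years
print(projection)                  # 65536, i.e. roughly 65,000
```

Ten doublings multiply the count by 1024, so any early-60s chip in the tens of components extrapolates to the tens of thousands by 1975, exactly as Moore predicted.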

Intel 8008 | Intel Pentium 4

Who are the Computer Architects?

The author (Mark Smotherman, Associate Professor, School of Computing, Clemson University) makes an effort to list the key computer architects, mainly to recognize their work. The listing comprises each instruction set architecture (ISA) and its architect(s), followed by implementations of that ISA and the associated microarchitect(s)/designer(s). The processors listed have been available for sale commercially and, in most instances, are categorized by company. The current list mainly includes late 1980s and 1990s ISAs and microprocessor implementations, and especially highlights the high-performance (i.e., high-risk) implementations.
The success and failure of high risk computer developments can quite often be traced to a single individual. It is not accidental that unique persons such as Gene Amdahl, Seymour Cray, Fred Brooks, and Bob Barton have become recognized leaders in the computer architecture and design field. Their reputations did not arise from a happy coincidence of being associated with a successful project; rather, they stand out because of their ability to generate a system wide concept, determine a course of action to get it implemented, make the necessary tradeoffs and finally drive through all obstacles to ensure completion of their vision.

Also read: Which Machines Do Computer Architects Admire?

RFIC Tutorials

Free Video Lectures

A Video Lecture from UC Berkeley on Analog-Digital Interfaces! Analysis and Design of VLSI Analog-Digital Interface Integrated Circuits
Check out the other lectures in the same category.

Sponsored Post: SpyGlass 4.2.0 Webinar from ATRENTA

In this SpyGlass(R) webinar, Atrenta experts will take you through the highlights of the new release and cover the following topics, including overviews of:

* Latest GuideWare™ methodology improvements for better productivity
* New Atrenta Console™ interface for improved ease of use (demo included)
* CDC Setup Manager (demo included)
* Multi-mode timing coverage report, including why it's valuable
* New at-speed test analysis
* CPF and UPF support

Title: SpyGlass 4.2.0 Webinar
Date: Wednesday, May 6, 2009
Time: 7:00 PM - 8:00 PM PDT

After registering you will receive a confirmation email containing information about joining the Webinar.
Space is limited. Reserve your Webinar seat now at: https://www1.gotomeeting.com/register/439111672

Effective communications skills for the Industry - Modulation skills

During a conversation, voice modulation plays a major role in setting the atmosphere and getting the message across. In this post we look at the various aspects of the voice that can be worked on during a conversation.

Pitch: Musical notes, as in "Do Re Mi So .." or "Sa Re Ga Ma ..", usually move from low pitch to high pitch.
Volume: How loud you can be.
Tone: The character/timbre of the sound (e.g. the rustle of paper or the tinkle of glass).
Pace: The speed of the conversation, or how long a sound lasts.

Varying the above aspects of your voice gives personality, variety and clarity to your conversation. To check and modulate the effectiveness of your own voice, focus your attention by following the steps below...

1. Cup your right hand around your right ear pulling it forward.
2. Cup your left hand around your mouth to direct sound into your ear.
3. Then talk and hear your voice. This is pretty much how people will hear you.

Use the following to control your voice (the PAMPERS of voice, if I may call it that ;-)):
P Projection
A Articulation
M Modulation
P Pronunciation
E Enunciation
R Repetition
S Speed

Earlier post:
Effective communications skills for the Industry - Being Politically Correct
Effective communications skills for the Industry - The Basics
Effective communications skills for the Industry - Introduction

Virtual Component Websites and Software

Virtual Component Exchange (VCX) is a web-based, regulated trading exchange for semiconductor virtual components or intellectual property (IP). The exchange was established as an outgrowth of the Alba Centre which was launched in 1997 by an economic development agency of the Scottish government. The main elements of the Alba Centre initiative were the Virtual Component Exchange, the Institute for System Level Integration and the development of the Alba Campus, the physical embodiment of the Alba vision. The Virtual Component Exchange has been acquired by Beach Solutions which continues to operate the website and to sell database solutions to both IP buyers and sellers. For additional information, VCX

A similar effort that was launched in France is the Design & Reuse (D&R) web portal which provides a secure catalog of 15,000 IP products from 150 suppliers with 37,000 registered users making 100,000 page views per month. D&R also sells a comprehensive intranet reuse infrastructure that provides an IP supply chain delivering external IPs as well as internal IPs from the designer site to the user site under the control of an IP management system hosting the corporate IP directory. For additional information, D&R

SIPAC, System Integration & Intellectual Property Authoring Center, is a non-profit organization located in South Korea that has an IP trading system providing a variety of IP related services. SIPAC has developed IP related guidelines for HDL coding and AMS design. It has also produced a web-based IP verification and evaluation system. For additional information SIPAC

Taiwan SoC Consortium (formerly named the Silicon IP Consortium) is a non-profit organization. Member companies include fabless design houses, integrated devices manufacturers, system vendors, foundries, EDA companies, design service companies and semiconductor research organizations. Its primary mission is to facilitate IP information sharing and IP exchange, especially for the companies in Taiwan. For additional information, TaiwanSoC


OpenFPGA is an emerging effort to foster and accelerate the adoption and incorporation of reconfigurable-computing-based solutions in high-performance computing and enterprise application environments. OpenFPGA will foster shared and open efforts to address challenges of portability, interoperability and intra-application communication for FPGA and reconfigurable applications in high-performance and enterprise computing environments.

Free Chip Estimation Tool

A free design automation tool for chip estimation has been developed by Giga Scale IC, Inc. The tool named "InCyte" estimates IC die size, power, leakage, yield, and cost, enabling designers to visualize both the technical and economic impacts of their design specification. The tool's rapid "what-if" analysis makes it easy for designers to visualize tradeoffs between key design metrics, and across technology nodes and process variants. Users can generate accurate and optimized chip estimates at the architectural stage of the design process, resulting in significantly shorter design times and lower design costs.

Teaching Embedded Systems using ARM and FPGAs

Prof. Saeid Nooshabadi of the University of New South Wales in Sydney, Australia, has an excellent website describing his course and the accompanying laboratory-based exercises. Emphasis is placed on interfacing the ARM processor to other programmable hardware devices. Students use GNU tools operating under Linux to compile and simulate C, C++ and assembly-language programs. FPGA development is performed using Xilinx ISE WebPack and ModelSim-XE operating under Microsoft Windows. Click for additional information!

Guides for Writing and Presentations and References

Top 500 Supercomputers

A list of the Top 500 performing supercomputers is available at: Top 500. Since 1993, the list has been updated twice a year to indicate the best performance on the Linpack benchmark. Except for one machine which uses Microsoft Windows, all of the top 500 machines employ either the Linux or Unix operating system.


SystemC

SystemC provides hardware-oriented constructs within the context of C++ as a class library implemented in standard C++. Its use spans design and verification from concept to implementation in hardware and software. SystemC provides an interoperable modeling platform which enables the development and exchange of very fast system-level C++ models. It also provides a stable platform for development of system-level tools.

The Open SystemC Initiative (OSCI) is an independent not-for-profit organization composed of a broad range of companies, universities and individuals dedicated to supporting and advancing SystemC as an open source standard for system-level design.

The specific purposes of this organization include:
1. Building a rich system-level design language and open source implementation based on C++ class libraries, called "SystemC",
2. Encouraging availability and adoption of intellectual property (IP), tools and methodologies based on SystemC,
3. Providing the mechanisms that enable the continued growth of the SystemC community,
4. Defining interoperability criteria for IP and tools based on SystemC,
5. Delivering updates to the SystemC Language Reference Manual (LRM) and open source implementation, and
6. Standardizing the SystemC language via the IEEE.

The open source proof-of-concept SystemC 2.1 library and the transaction-level modeling (TLM) library have been updated.
Access: SystemC

Success stories from ST Microelectronics, Intel, IBM, Qualcomm, Texas Instruments and Conexant were given in session 22 of the 2004 Design Automation Conference and can be accessed at: DAC

One of the major benefits of SystemC is the ability to model at the transaction level (TLM), where the use of simple function calls in communication modeling brings gains in both coding productivity and simulation speed. The TLM standard is now available and being adopted. The OSCI TLM interface standard extends the practical value of the SystemC class library by providing a standard modeling kit for the construction of TLM interfaces, thus reducing the work needed to construct new interfaces and increasing the opportunities for interoperability. At the same time, the release of version 2.1 of the SystemC class library has added new features which extend the utility of SystemC for transaction-level modeling.

A SystemC specification of the AMBA bus is now available free of charge. The specification is a fully cycle-accurate representation of the AMBA AHB protocol including the AHB-lite protocol that is widely adopted for high-performance bus-matrix architectures. The AMBA specification is an established, open methodology that serves as a framework for SoC designs, effectively providing the interconnect that binds IP cores together. The specification has been downloaded by more than 12,000 design engineers and implemented in hundreds of ASIC designs. The AMBA AHB cycle-level modeling specification is available now for download from: AMBA

Synthesis of SystemC can be performed using the Agility Compiler from Celoxica. The compiler synthesizes SystemC directly to high-density FPGA and programmable SoC logic and generates RTL VHDL and Verilog for SoC design. SoC designers using SystemC can maintain the C level of design abstraction throughout the entire SystemC design process while taking advantage of simulation speeds that are orders of magnitude faster than RTL. Thus, whole systems can now be verified using the same test-bench at all stages of the design process.

Forte Design Systems offers Cynthesizer, a synthesis tool that delivers an implementation path from SystemC to RTL, along with verification and co-simulation. Cynthesizer accelerates RTL delivery for leading-edge integrated circuits and systems-on-chip by automatically generating optimized RTL code from a C++ / SystemC algorithmic description. It can also be used to explore architectural trade-offs, e.g. area versus performance. Significant productivity and quality-of-results improvements are being realized. For additional information, access: ForteDS

CoWare, Inc. has partnered with Forte to provide the first integrated SystemC-based solution for electronic system-level (ESL) design to implementation. The tight integration of CoWare's SystemC-based ConvergenSC system-on-chip (SoC) design tools and Forte's Cynthesizer SystemC behavioral synthesis product unites system architecture, simulation, and synthesis in a first-of-its-kind flow. Users can explore and validate a design's system architecture in CoWare's ConvergenSC, then synthesize to RTL using Forte's Cynthesizer, and verify the RTL in a system context with the same SystemC model.

CoWare has also integrated its SPW digital signal processing application design tool with the Cadence Virtuoso custom design platform enabling wireless product design teams to reduce schedule risk through an evolutionary change of their methodology. SPW reference models have been instrumental in the successful tapeout of thousands of wireless designs to-date. The new flow enables broader reuse of the reference models for the RF and analog designer's benefit. By using SPW reference models throughout different design domains, wireless design teams can dramatically increase design efficiency and reduce risk. Starting from the SPW frontend, users can select the parts of the system that are used as the reference or testbench, and mark them for export. SPW automatically creates an optimized simulation model with interfaces based on SystemC signals and data types. SPW was enhanced with technology from the CoWare ConvergenSC platform design tool. Cadence's Virtuoso platform leverages the same SystemC technology based on the Cadence/CoWare technology alliance, readily importing the newly created SPW model. RF designers do not need to be familiar with SPW to benefit from the new flow. For additional information, access: CoWare

CoWare provides its tools to universities via its university program (UniversityProgram@CoWare.com). For U.S. universities, there is no charge for ConvergenSC and LISATek, and up to 300 licenses of SPW can be obtained for an annual fee of $500. For European universities and non-profit research, CoWare participates in the EUROPRACTICE program.

In India, IIT Delhi and IIT Kharagpur use ConvergenSC and LISATek to support courses on system level modeling, system synthesis and architecture design space exploration.

Free SystemC On-line Tutorial
Access: SCOTT

Most Significant Papers on IC Test

Many important papers have been presented at the annual International Test Conference (ITC) over the past 35 years. The best of these are on-line at ITC Papers.