LTE-Advanced Technology Introduction

Commercialization of LTE technology has been under way since 2010. At the same time, further enhancements of the technology are being worked on in order to meet the ITU's IMT-Advanced requirements. This application note summarizes these improvements, known as LTE-Advanced.

Show Me the Next-Generation HDMI

The first part of this white paper explores the basic concepts behind HDMI, the markets it serves and its leadership role in multimedia interfaces. This is followed by a tutorial on the new capabilities of HDMI 1.4 and their role in providing a richer, more straightforward user experience. Next, we'll explore a series of user case scenarios that illustrate how the HEAC feature can simplify cabling requirements between digital home multimedia devices. The last portion of this paper discusses the architectural considerations and technical details involved with incorporating the Ethernet and Sony/Philips Digital Interconnect Format (S/PDIF) standards into the HDMI system-on-chips (SoCs) to support the HEAC feature.

A Lifecycle Approach to Quality Management

Success in ever more competitive worldwide marketplaces demands continually smarter products and systems, which in turn lead to compounding complexity. Furthermore, while advanced functionality is an important competitive differentiator, quality has become part of the "price of entry" into the marketplace. Quality can no longer be considered an attribute that can be added into systems at the end of the development lifecycle. Controlling risks and costs needs to be a guiding principle that drives virtually all stages of the lifecycle, starting at the concept stage; building during analysis, design, deployment and acceptance; and continuing through service until end-of-life retirement. As a result, the management of quality must consider all key systems engineering disciplines, including requirements engineering, systems design and testing, and change and configuration management. This white paper examines the changing role of quality assurance and looks at approaches to effectively extend application lifecycle management (ALM) to include quality management and how this improves project outcomes.

Designing Low Power, Multi-Primary Technology for Mobile Phone Displays

Displays have always been a major power consumer in mobile devices. In the past, one easy way to keep power demands in check was to limit display performance in terms of luminance, color gamut, resolution and refresh rate. This way, users could get away without recharging their devices for several days. By contrast, today's power-hungry devices must be charged every day. A typical color display requires 400-500 mW, which is ~50% of the average power consumption of the device when running a multimedia application. Reducing the power consumption of a mobile device's display is a sure way to increase the time between recharges, but today it is unrealistic to limit display performance in order to achieve that. This article offers technical information for designers about a multi-primary display solution that reduces the power consumption of the display without limiting its performance.

Major Benefits of IEEE 1149.7 (Compact JTAG)

This paper summarizes the features and benefits of the new IEEE 1149.7 specification, commonly referred to as compact JTAG (cJTAG), which builds on the existing IEEE 1149.1 boundary-scan architecture.

Checklist for Success with Multicore

The benefits of multicore are numerous, but to realize them you must avoid the common pitfalls by planning carefully and selecting a platform with the required level of optimization, flexibility, and integration. Before you rush into multicore, spend some time with this checklist and make sure that the platform and vendor you choose have the breadth, depth, and quality you'll need at every level: from the multicore optimization and scalability of the software platform, to hardware optimization, to the virtualization solution, to the service and support expertise.

Fast, Easy, and Flexible Power for System Designers

Systems designers are having a difficult time developing power subsystems that supply all of their system's power needs due to varied and changing power requirements. A new type of power subsystem—the field-programmable power subsystem or FPPS—squarely addresses this issue by providing a flexible approach that costs no more than conventional switching power subsystems. This white paper discusses the advantages and benefits of field-programmable power subsystems and discusses the many ways they reduce system-design risks.

Processor Affinity or Bound Multiprocessing?

Migrating systems designed for single-core processors to multicore environments is still a challenge. Bound multiprocessing (BMP) can help with these migrations. It improves on SMP processor affinity by allowing developers to bind all threads (including dynamically created threads) in a process, or even a subsystem, to a specific processor without code changes.

Evolving the Coverage-Driven Verification Flow

Over the past decade, coverage-driven verification has emerged as a means to deal with increasing design complexity and ever more constrained schedules. Among the benefits of the new methodology are a dramatically expanded set of verification metrics and tools providing much improved visibility into the verification process. However, coverage-driven verification flows are still beset by challenges, including how to efficiently achieve coverage closure. Matthew Ballance describes how inFact's coverage-driven stimulus generator is helping to respond to these challenges with unique algorithms that simultaneously target multiple independent coverage goals.

Improve Project Success with Better Information

Are we on target? Are we within budget? Are my projects contributing effectively to my operational objectives? How well are these operational objectives helping to satisfy our business goals? Can we easily demonstrate compliance to customer requirements or industry standards? Are we ready to ship? This white paper discusses the difficulties organizations face as they gather critical information, and explores the use of automation to increase the efficiency and effectiveness of reporting and measurement. It also introduces IBM Rational solutions that can help organizations automate reporting and measurement to improve project, program and organizational management and decision makers' abilities to influence positive outcomes.

Challenges for the 28nm half node: Is the optical shrink dead?

A half-node process has routinely been used to deliver incremental improvements in process control and hardware availability in order to continue Moore's Law. Traditionally, due to the imaging requirements, parameters such as numerical aperture and partial coherence were not set to their maximum resolution settings, thus leaving room in hardware and RET recipes to accommodate incremental imaging requirements. However, as hardware availability and computational lithography methods are stressed to the maximum of their capabilities to deliver the next technology nodes, it is worth asking whether such optical shrinks continue to be viable moving forward. Already, 28nm layouts scaled down from the original 32nm layouts are starting to show signs of configuration limitations dictated by the available imaging hardware. This paper shows that two-dimensional features determine the feasibility of migrating successfully to the next half-node even when one-dimensional metrics suggest that such a migration should be possible.

Debug and Validation of High Performance Mixed Signal Designs

Modern embedded and computing systems have become progressively more powerful by incorporating high-speed buses, industry standard subsystems, and more integrated functionality in chips. They have also become more complex, more sensitive to signal quality, and more time consuming to troubleshoot. While standards exist for many technologies commonly used within high performance digital systems, a major test requirement is to ensure that all elements are synchronized and perform as a seamless, integrated whole. This application note discusses testing and tools that enable evaluation of not only a single element, but also an entire system.

SPARK: A Parallelizing Approach to the High-Level Synthesis of Digital Circuits

SPARK is a C-to-VHDL high-level synthesis framework that employs a set of innovative compiler, parallelizing-compiler, and synthesis transformations to improve the quality of high-level synthesis results. SPARK takes behavioral ANSI-C code as input, schedules it using speculative code motions and loop transformations, runs an interconnect-minimizing resource binding pass, and generates a finite state machine for the scheduled design graph. Finally, a backend code generation pass outputs synthesizable register-transfer level (RTL) VHDL. This VHDL can then be synthesized into an ASIC or mapped onto an FPGA using logic synthesis tools.

Patents database: An easy and cost effective way to perform reverse engineering for chip designers

Reverse engineering is a very common practice in the semiconductor business. Companies will barely admit to doing it, but nearly all of them do. Often the goal of reverse engineering is to determine how a chip works, or to learn the architectures or process technologies used in a competitor's products for benchmarking. However, patents provide the public with free, valuable, and reliable information about competitors' technologies.

Best Practices for Reducing Risk through Environmental Compliance Data Collection

Complying with the variety of environmental regulations has become a challenging task for electronics OEMs (original equipment manufacturers). There are frequent changes to RoHS regulations and exemptions; the REACH SVHC list may be revised up to twice a year; and large OEMs are developing their own unique requirements. This white paper describes GreenSoft's data collection services for RoHS/REACH/Full Disclosure Material Data. This overview illustrates some of the procedures and analysis tools used during the data collection process that can help your company reduce risk in compliance management for all your products.

Application Specific IP

One of the major barriers to Semiconductor IP commercialization is providing evidence of an IP's quality. A common approach by IP vendors is to prove the quality of their IP in a test chip. Usually the die contains the IP block separated from the System-on-a-Chip (SoC). It is, though, uncertain how the block will function in ASSP and ASIC products, potentially damaging its perceived commercial value. In Rosetta's methodology, the IP core is a block within a subsystem, integrated to enable the subsystem functionality and targeted at a specific market and application. By analyzing the specific requirements of the market and application, and by providing an IP package targeted at those requirements, we mitigate the IP quality problem.

Boosting RTL Verification with High-Level Synthesis

Instead of prolonging the painful process of finding bugs in RTL code, the design flow needs to be geared toward creating bug-free RTL designs. This can be realized today by automating the generation of RTL from exhaustively verified C++ models. If done correctly, high-level synthesis (HLS) can produce RTL that matches the high-level source specification and is free of the errors introduced by manual coding.

SDRAM Memory Systems: Architecture Overview and Design Verification

DRAM (Dynamic Random Access Memory) is attractive to designers because it provides a broad range of performance and is used in a wide variety of memory system designs for computers and embedded systems. This DRAM memory primer provides an overview of DRAM concepts, presents potential future DRAM developments and offers an overview for memory design improvement through verification.

Getting Started with Android Development for Embedded Systems

Android is an open source platform built by Google that includes an operating system, middleware, and applications for the development of devices employing wireless communications. This paper takes a look at the design of Android, how it works, and how it may be deployed to accelerate the development of a connected device. Along with basic guidelines for getting started with Android, the Android SDK and its available tools and resources are reviewed, and some consideration is given to applications for Android beyond conventional mobile handsets, such as medical devices, consumer electronics and military/aerospace systems.

Phase-locked loops (PLLs) Demystified

Over the past decade, phase-locked loops (PLLs) have become an integral part of modern ASIC design. PLLs provide the clocks that sequence the operation of the various blocks on an ASIC as well as synchronize their communications. There are various types of PLLs targeting specific applications. Read this white paper to learn more about the types of PLLs and how they work in certain technologies.

Diagnosing clock domain crossing errors in FPGAs

Clock domain crossing (CDC) errors in FPGAs are elusive, and locating them often requires good detective work and smart design as well as an understanding of metastability and other physical behaviors. This white paper discusses the nature of CDC errors and presents a powerful solution that aids in their detection and removal.
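As background on the kind of structure CDC tools expect at a crossing, here is a minimal Verilog sketch (module and signal names are illustrative, not taken from the paper) of the most common fix, the two-flop synchronizer:

```verilog
// Two-flop synchronizer: if the first flop goes metastable when sampling
// the asynchronous input, it gets a full destination-clock cycle to settle
// before the second flop presents the value to downstream logic.
module sync2 (
  input  wire clk_dst,   // destination-domain clock
  input  wire async_in,  // signal arriving from another clock domain
  output wire sync_out   // safe to use in the destination domain
);
  reg meta, stable;
  always @(posedge clk_dst) begin
    meta   <= async_in;  // may go metastable
    stable <= meta;      // settled value
  end
  assign sync_out = stable;
endmodule
```

Note this only works for single-bit signals; multi-bit buses need a handshake or an asynchronous FIFO, which is exactly the kind of structural check CDC verification tools automate.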

Fantastic failures

Most failures are not single-point; generally a single event does not entirely account for the failure. Often multiple problems interact to precipitate the failure. Multiple initiating events and unforeseen circumstances usually combine to force the system into regions of complexity that the human mind cannot easily predict or even comprehend. Non-obvious use and improper operation can lead to failure. This paper (a must read) looks at various events in history where multiple problems led to catastrophic accidents.

Digital Signal Processing: A Practical Guide

This book is intended for those who work in or provide components for industries that use digital signal processing (DSP). There is a wide variety of industries that utilize this technology. While the engineers who implement applications using DSP must be very familiar with the technology, there are many others who can benefit from a basic knowledge of its fundamental principles, which is the goal of this book: to provide a basic tutorial on DSP. Check out Part 1, Part 2, and Part 3, which are chapter extracts from the original book.

The Art of Debugging: Make it Fail

Debugging is one of the most valuable engineering skills: not taught in any formal setting, and often learned the hard way, by trial, error, experience, and long days and nights. These excerpted chapters (part 1, part 2, part 3) from the book Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems, published by AMACOM Books, are an informative and logical introduction to the skill and art of debugging electronic circuits and systems.

Single Chip Coherent Multiprocessing

Many embedded system-on-a-chip (SoC) designs make use of multiple processors, but do so in an application-specific or "loosely coupled" manner. Until recently, SoC design options for software-friendly multiprocessing were severely limited. But with the advent of SoC design components such as the MIPS32 1004K Coherent Processing System, on-chip multiprocessing under a single operating system has become a real design option. This paper explores a number of ways that multiprocessing can be exploited to achieve high SoC performance, especially for designs with parallel operations, high rates of I/O DMA traffic, or that are being re-designed to address converging applications.

Data Management for Hardware Design Teams

Hardware design data and design flows present unique requirements that are not met by software configuration management systems. This paper outlines how a system designed from the ground up to meet the unique requirements of hardware designers manages composite design objects, integrates into industry-standard design flows, understands design hierarchy, optimizes disk space usage for very large files, and facilitates collaboration across multiple sites.

e Verification language is alive and well

According to Mitch Weaver, corporate vice president for front-end verification at Cadence, the e verification language is not only still alive but is thriving and growing. In this pre-DVCon interview, he answers questions about Cadence and industry support for the e language and the Specman/e verification environment, as provided by Incisive Specman Elite and Incisive Enterprise Simulator XL. [Via: Industry Insights]

Broadcom's smartphone on a chip

Broadcom recently announced a single-chip HSUPA baseband processor that integrates key 3G mobile technologies and will improve smartphone performance and features. It also added that the processor is the first to combine high-speed uplink packet access with 3G connectivity and built-in graphics processing. Built with an integrated ARM11 core and capable of running Google's Android and Microsoft Windows Mobile, the BCM21553 baseband chip could pave the way for cheaper but much more sophisticated handsets. Other features include 5.8Mbps upstream and 7.2Mbps downstream bandwidth.

Intel meets its match in IBM

In this article by Brooke Crothers, the news is out that IBM has announced its new high-performance Power7 chip technology. By the time Intel had introduced its latest processor for servers, the Itanium 9300, on Monday, IBM had already stolen Intel's thunder with its new Power7 chip technology. According to reports, the Power7 is impressive. It has eight cores, while Intel's Itanium 9300 (PDF) has four. And each of the Power7's cores is capable of four threads, or tasks, compared to Itanium's two per core. Although both companies are touting dozens of other features--for example, better thread performance and improved scaling of workloads--IBM is taking a lead in marquee features for the lucrative high-end server market.

Formal Verification: Theorem proving

How many times in the course of a project have you heard the term formal verification? This relatively short article on theorem proving assumes that you know the basic definition of formal verification, while we try to see what theorem proving is in relation to formal verification.

Theorem proving techniques relate the specification and implementation as a theorem in logic to be proven in the context of a proof calculus, where the implementation acts as a provider of axioms and assumptions for the proof. As in the math classes we had in school, it involves proving lemmas and theorems by deduction, rewriting, and induction. This approach is mainly used for verification at a higher level of abstraction, usually in algorithmic verification, and it is most successful on complex problems and systems such as distributed algorithms and real-time systems.

The first tools for this method were available in the 1970s. Currently there exists a variety of tools that are more or less automatic, some of them being ACL2, NQTHM, PVS, HOL, Isabelle, STeP...

However, this technique requires strong user expertise and time-intensive guidance of the proof. The usual theorem prover tools are based on first-order predicate logic, higher-order predicate logic, recursive functions, induction, etc. For example, "The ACL2 logic is a first order logic of total recursive functions providing mathematical induction..."

Brush up of basics:
First-order predicate logic:
Predicate logic in which predicates take only individuals as arguments and quantifiers only bind individual variables.

Induction: A powerful technique for proving that a theorem holds for all cases in a large or infinite set. The proof has two steps, the basis and the induction step. Roughly, in the basis the theorem is proved to hold for the "ancestor" case, and in the induction step it is proved to hold for all "descendant" cases.
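As a small illustration of the basis/induction-step scheme (a sketch in Lean 4, not taken from any of the provers discussed here), the fact that 0 + n = n for every natural number n can be proved by induction on n:

```lean
-- Basis: for n = 0 the goal 0 + 0 = 0 holds by computation (rfl).
-- Induction step: from the hypothesis ih : 0 + k = k, rewrite the goal
-- 0 + (k + 1) = k + 1 using Nat.add_succ and then apply ih.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Mechanized provers automate exactly this bookkeeping: the tool generates the two proof obligations and checks that each step follows from the axioms and previously proven lemmas.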

Higher-order predicate logic: Predicate logic in which predicates take other predicates as arguments and quantifiers bind predicate variables. For example, second-order predicates take first-order predicates as arguments; order-n predicates take order-(n-1) predicates as arguments (n > 1).

In summary, theorem proving tools use a sophisticated combination of decision procedures, conditional rewriting, induction and recursion, and propositional simplification, with complex heuristics to orchestrate all of that. The tool learns over its lifetime by automatically making use of lemmas proven by the user.

Theorem proving is very powerful, mainly at the algorithmic level. However, you should go get an expert :-)

Toyota Prius 2005: An Early Warning About Verification

In this article Richard Goering talks about a software bug in the 2005 Toyota Prius, and how even a thorough study of the issue by three Toyota engineers five years later did not help the company learn any lessons. He also talks about the model-based verification that the original presentation covers, and how a formalized verification methodology can help alleviate issues that may arise later in the field. Clearly a must-read article!

Motivation: What else can we talk about verification?

Many of you already know that verification efforts are as important as, or more important than, the design efforts themselves. They cannot be ignored or treated as an afterthought; they are part of the chip design process itself. You may also have known or learnt about the productivity gap, the significance of verification, the very famous Pentium bug, Moore's law, why verification cannot be done by simulation alone, the benefits of formal verification and the complexities that tag along :-), the benefits of model checking, what equivalence checking is and how those algorithms work, etc. So, what part of the verification landscape are you yet to explore or understand? Is there something I can explain or detail? Do you want to contribute? Please comment!

Verification Sessions at DVcon 2010

Featured tutorial: a step-by-step guide to advanced verification; DVCon exhibits and product demos; DVCon paper presentations; and more at this link.

Clock-Domain Crossing Verification Module

This Mentor Verification Academy module directly addresses CDC issues by introducing a set of steps for advancing an organization's clock-domain crossing verification skills, infrastructure, and metrics for measuring success while identifying process areas requiring improvement. This Clock-Domain Crossing Verification Module contains 7 sessions in total, including 1 demo.

ModelSim PE Student Edition - Free HDL Simulation

ModelSim PE Student Edition is a free download of the industry leading ModelSim HDL simulator for use by students in their academic coursework.

ModelSim PE Student Edition Highlights
- Support for both VHDL and Verilog designs (non-mixed).
- Intelligent, easy-to-use graphical user interface with TCL interface.
- Project manager and source code templates and wizards.

ModelSim PE Student Edition Target Use and Upgrades
- ModelSim PE Student Edition is intended for use by students in pursuit of their academic coursework and basic educational projects.
- For more complex projects and advanced features of ModelSim, universities and colleges have access to ModelSim SE, the highest configuration of ModelSim, plus Questa and other Mentor Graphics products through the Higher Education Program.
- ModelSim PE Student Edition is not to be used for business use or evaluation.
- Please contact sales for a fully functioning evaluation version of ModelSim PE, DE or SE.

ModelSim PE Student Edition Performance
- Capacity: 10,000 lines of executable code
- Performance (up to capacity): 30% of PE
- Performance (exceeding capacity): 1% of PE (i.e., 100 times slower than PE).

ModelSim PE Student Edition Support Notice
- No customer support is provided for ModelSim Student Edition.
- Interact with other users and join the ModelSim Student Edition Discussion Forum.
- ModelSim PE Student Edition applies to x86/Windows platforms only.

ModelSim PE Student Edition Information
Important Information
- To download and install ModelSim PE Student Edition, you must complete all three of the following steps.
- You must be connected to the Internet for the entire download, installation and license request processes.

Step 1 - Download the Latest Software
- To begin the process, download the software from the FTP server by completing the ModelSim PE Student Edition license agreement form from the downloads tab.
- Select the FTP link provided on the confirmation page and download the .exe file.

Step 2 - Install the Software
- After the file downloads completely, double-click on the exe file to start the installation process.

Step 3 - Complete the License Request Form
- At the end of the installation process, select Finish and a browser window will open with the License Request form.
** Note - clicking on an existing license request link from your browser bookmark or from a link posted on the web - WILL NOT WORK.
- Complete all fields of the form and select Request Student Edition to submit it.
- Once you complete the license request form, your ModelSim PE Student Edition license file will be emailed to you, together with instructions for installing the license file.

Important Information about your Installation
- License files are valid only for the current installation of the software on the computer on which the software is installed.
- If you need to re-install the software on a computer, you must go through the entire process of download, installation and submission of the license request form.
- If for any reason you need a new license file, you must go through the entire process of download, installation and submission of the license request form.

Upgrade to Mentor Graphics' Higher Education Program
- For applications requiring the highest simulation performance and advanced verification capabilities, students may access Mentor Graphics' most advanced design and verification tools, including ModelSim SE and Questa Advanced Functional Verification, through their college's membership in Mentor Graphics' Higher Education Program.
- View the Higher Education Program Details and learn how your institute can apply.

Questa SV/AFV: Verification Methodology Kits

Mentor Graphics provides the methodology kit examples in open source form under the Apache-2.0 license. These kits are available for Questa versions 6.2 to 6.4.

Questa Compatibility Matrix: Versions of Questa SV/AFV that work with different versions of other Mentor and open-source verification products

This matrix illustrates (SupportNet access needed) the version compatibility between Questa SV/AFV and different versions of OVM, the Questa MVC Library, QVL, 0-In and Questa ADMS.

Design Verification Club (DVclub)

DVClub is a very interesting organization. With chapters in Austin, Bangalore, Boston, Dallas, Research Triangle Park, San Diego, and Silicon Valley, the club’s stated purpose is “to have fun while helping build the verification community through quarterly educational and networking events.” IC engineers can join for free, and events are free. Costs are picked up by sponsors, including Cadence.

Delivering synthesizable verification IP for testbenches

Case Study: SystemVerilog VMM vs. BSV for an Ethernet MAC test bench.
High-level verification languages and environments such as e/Specman, Vera and now SystemVerilog, as used in VMM or OVM, may be the state-of-the-art for writing test bench IP, but they are not synthesizable for developing models, transactors and test benches to run in FPGAs for emulation and prototyping. So engineers wishing to move verification assets onto FPGAs have been designing with RTL, the same old slow, resource-intensive and error-prone way.

But now, with the introduction of modern high-level languages for synthesizable verification IP, engineers can design test benches, models and transactors at a high level of abstraction and with extreme reuse, but they can also synthesize them onto FPGAs – and they can do this as easily as they do today in simulation-only verification environments. Imagine running your test benches, models and transactors at tens of MHz.

This White Paper outlines important attributes of, and the applications for, modern high-level synthesizable verification environments. Using the example of a test bench for an Ethernet MAC, the paper compares the implementation of a synthesizable test bench done with Bluespec’s BSV with a non-synthesizable reference test bench done with SystemVerilog VMM – and it demonstrates that a synthesizable test bench can be implemented with fewer lines of code than using state-of-the-art SystemVerilog.

- Features of an Ideal High-Level Synthesizable Verification Environment
- Applications for High-Level Synthesizable Verification
- Case Study: SystemVerilog VMM vs. BSV for an Ethernet MAC test bench

Download a free copy of this whitepaper!

Toyota's woes: More technology, more complexity

Today, cars can have as many as 70 electronic control units, or ECUs, based on microcontrollers (sometimes generically referred to as microprocessors). ECUs manage engines, doors, transmissions, seats, and entertainment and climate systems. Electronic throttle systems use an array of sensors, microcontrollers, and electric motors to control how the car is accelerated; gone are the old steel cables connecting the driver's foot to the engine. Because of all of this added complexity and the need for chips to talk to each other, a bus system was introduced--not unlike the Peripheral Component Interconnect, or PCI, bus used in virtually all PCs today. Called the controller area network, or CAN bus, it is designed to allow microcontrollers and devices to communicate with each other within a vehicle. Click the title for the main commentary!

Are latches really bad for a design?

It is not completely correct to say that we have to avoid latches in our designs. In one of our recent projects we went to great lengths debating only the disadvantages of using latches, and in the end we had to scrap all the latches because many people were not convinced. There are cases where you can safely use latches; you just have to be a little careful, and the designer needs to be really sure of the intended functionality. In this post let's look at the cases where latches are preferred and where they are not. If you think any of these statements are wrong, let's have a healthy debate.

The bad guys (please add more):
1. One of the main reasons latches are not used in designs is combinational loopback and the subsequent oscillation, which creates serious problems in simulation as well as simulation-versus-synthesis mismatches.
2. Due to controllability issues in latches, unlike FFs, they can significantly harm STA/DFT and thereby reduce coverage in both cases.
3. A major factor in the above uncontrollability is glitches on the enable pin of the latch, which can cause unrecoverable failures.
4. Last but not least.. :-) the biggest culprit of all: latches inferred after synthesis due to bad coding styles.
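Point 4 above can be shown with a minimal Verilog sketch (module and signal names are made up for illustration): an incomplete if in a combinational always block makes synthesis infer a latch, while a default assignment keeps the logic purely combinational:

```verilog
// BAD: q is not assigned when en == 0, so it must "remember" its old
// value -- synthesis infers a level-sensitive latch.
module latch_inferred (input wire en, input wire d, output reg q);
  always @* begin
    if (en)
      q = d;        // no else branch: latch inferred on q
  end
endmodule

// GOOD: every path through the block assigns q, so this synthesizes
// to a simple mux (pure combinational logic), not a latch.
module no_latch (input wire en, input wire d, output reg q);
  always @* begin
    q = 1'b0;       // default assignment covers the en == 0 path
    if (en)
      q = d;
  end
endmodule
```

In SystemVerilog, using always_comb (or always_latch when a latch really is intended) makes the tool flag such accidental inference for you.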

The good guys (please add more):
1. Latches consume less power and area compared to FFs.
2. If the design is not power aware, FFs are preferred, and almost all latches could be replaced by FFs without affecting functionality.
3. Remember that the decode of a latched address can begin before the latch is actually closed..:-)

Corrective Measures:
1. A glitch-free enable helps if latches are unavoidable.
2. If you are synthesizing the code that generates the enable, please consider the direct instantiation of the logic using gates that drives the enable to the latch. Now you can be double sure!
3. A good input hold time ensures that the data is held long enough before you close the latch. If your latch enable is derived from a clock, the latch will lag the clock, requiring the latch's D inputs to be held valid after the clock edge.

NOTE: The latch has the disadvantage of being "transparent" until the enable closes, passing the input value through to the output. The FF samples the input only on the rising or falling edge of the clock.

If you are not careful when developing synchronous digital designs, latches can generate asynchronous circuits that become unstable.

Did the 3G auction happen in India on 14Jan?

I have been waiting for the WiMAX and 3G auctions to happen, I don't know for how long. But the huge potential has yet to be realized due to repeated bureaucratic bungling. How could the Indian government be so incompetent and irresponsible?

iPad and the A4 chip

By now you would have digested enough info about the iPad from all the overflowing blogs and sites covering Apple's latest announcement. But do you know that Apple's A4 chip is an ARM Cortex-A9 with an ARM Mali GPU? Now I am curious about the 3G part and HSDPA, given the BOM list of the iPhone 3GS. Are the original vendors already out of the window with the entry of PA Semi?