
Femtocell-enabled home energy management solution


ip.access, the leading developer of femtocell and picocell solutions, and AlertMe.com, the pioneer in home energy management systems, have created a demonstration showing how femtocells can be integrated into smart home energy management solutions. 

With femtocell integration, the AlertMe Energy service can automatically detect when phones enter or leave the house, and can therefore power down lights, televisions and other home appliances when the house is empty. The service can also switch the appliances back on when the residents return.

The ip.access femtocell powered service also enables mobile phones to control electrical devices in different parts of the house using a series of commands and automatic triggers.

Smart energy metering and control is the subject of extensive EU regulation over the coming years, and services such as AlertMe Energy, which allows electrical appliances in the home to be controlled via the Internet, are set to become a part of everyone's lives in the near future.

ip.access has combined the AlertMe Energy service with its own femtocell technology. The solution works by allowing electrical appliances to switch on and off automatically in response to the presence or absence of phones in the home. The "presence" information is routinely gathered by the femtocell but is normally only used to route cellphone traffic and set tariffs.

The AlertMe integration enables this presence information to be used to set personalised light and power preferences which are activated automatically when a subscriber arrives at home. Pre-set electrical outlets can also switch off automatically to save energy a few minutes after the last person has left the house.

The demo also shows how supplementary services codes on the phone can be personalised through the femtocell when the phone is at home, allowing the phone to be used to switch appliances on and off remotely. For example, a subscriber could type *1# on their phone to switch off the downstairs lights and power after retiring upstairs to bed.

"One automatic trigger could be to switch the kettle on as soon as you arrive home," said Dr Andy Tiller, VP Marketing at ip.access. "But there is a lot more to this than just tea and convenience.

"Using a femtocell to personalise supplementary services codes is a new and unique idea," he said. "It enables the mobile phone to become a powerful controller for all kinds of applications in the home. And because it's a network-enabled feature, it works with any handset – there are no applications to install."

According to AlertMe.com founder Pilgrim Beart, "The mobile phone is increasingly the remote-control for your life. Most people carry their handset everywhere they go, making it an ideal control device for the AlertMe Energy service. And because everyone already has a mobile phone, there is no extra cost involved."

The demo also shows how the AlertMe Hub (the central device that receives instructions via the Internet and controls the electrical plugs in the home) can be integrated inside a femtocell Access Point, receiving its power and Internet connection through the femtocell. In this way, a mobile operator could offer a smart home energy management solution as an integrated option to its femtocell subscribers.

ip.access will be showing the demonstration at the Femtocells World Summit in London from 22-24 June.

RTL synthesis and other backend Interview Questions (with answers)


Q1: How would you speed up an ASIC design project by parallel computing? Which design stages can be distributed for parallel computing, which cannot, and what procedures are needed for maintaining parallel computing?
Ans: Mentioning the following important steps in parallel computing is essential:
1. Partitioning the design
2. Distributing partitioned tasks among multiple CPUs
3. Integrating the results


WHAT STAGES: The following answers are acceptable. Others may be accepted if accompanied by a reasonable explanation of why parallel computing can or cannot be used in a particular stage of the flow.
Can use parallel computing:
- Synthesis after partitioning
- Placement (hierarchical design)
- Detailed routing
- DRC
- Functional verification
- Timing Analysis (partition the timing graph)
Cannot use parallel computing:
- Synthesis before partitioning
- Floorplanning
- Flat Placement
- Global Routing
CONSTRAINTS: Mentioning that care must be taken to keep partition boundaries consistent when integrating the results back together.
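
To make the partition/distribute/integrate flow concrete, here is a minimal Python sketch; the partition names are hypothetical and run_synthesis() is only a stand-in for invoking a real synthesis tool on one partition.

# Minimal sketch: distributing per-partition synthesis jobs across CPUs.
# The partition names are hypothetical, and run_synthesis() is a stand-in for
# invoking a real synthesis tool on one partition.
from multiprocessing import Pool
import time

PARTITIONS = ["cpu_core", "mem_ctrl", "io_ring", "dsp_block"]  # hypothetical blocks

def run_synthesis(partition):
    """Stand-in for a per-partition synthesis run; each job is independent."""
    time.sleep(0.1)                # placeholder for the actual tool run time
    return partition, "ok"         # in a real flow: exit status, reports, netlist path

if __name__ == "__main__":
    # 1. Partitioning is assumed to have been done up front.
    # 2. Distribute the partitioned jobs over a pool of worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(run_synthesis, PARTITIONS)
    # 3. Integrate the results, checking that partition boundaries (ports,
    #    timing budgets) are still consistent before stitching netlists together.
    for partition, status in results:
        print(partition, status)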

Q2: What kinds of timing violations are in a typical timing analysis report? Explain!
Ans: Acceptable answers...
- Setup time violations
- Hold time violations
- Minimum delay
- Maximum delay
- Slack
- External delay
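
For concreteness, a toy setup/hold slack calculation in Python is shown below; all numbers are invented for illustration, and clock skew and uncertainty are ignored.

# Toy setup/hold slack calculation for a single flop-to-flop path.
# All numbers are illustrative; a real STA tool derives them from the
# library models and extracted parasitics.
clock_period = 2.0   # ns
t_setup      = 0.15  # ns, setup requirement of the capturing flop
t_hold       = 0.05  # ns, hold requirement of the capturing flop
t_clk_to_q   = 0.20  # ns, launch flop clock-to-Q delay
t_comb_max   = 1.70  # ns, longest (max) combinational delay
t_comb_min   = 0.10  # ns, shortest (min) combinational delay

# Setup (max-delay) check: data must arrive before the next capturing edge.
setup_slack = (clock_period - t_setup) - (t_clk_to_q + t_comb_max)
# Hold (min-delay) check: data must not change before the capturing flop
# has safely latched the previous value.
hold_slack = (t_clk_to_q + t_comb_min) - t_hold

print(f"setup slack = {setup_slack:+.2f} ns")  # negative => setup violation
print(f"hold slack  = {hold_slack:+.2f} ns")   # negative => hold violation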

Q3: List the possible techniques to fix a timing violation.
Ans: Acceptable answers...
- Buffering
Buffers are inserted in the design to drive a load that is too large for a logic cell to drive efficiently. If a net is too long, it is broken up and buffers are inserted to improve the transition time, which improves the data-path timing and reduces setup violations.
To fix hold violations, buffers are inserted to add delay on short data paths.
- Mapping - Mapping converts primitive logic cells found in a netlist to technology-specific logic gates from the library on the timing-critical paths.
- Unmapping - Unmapping converts the technology-specific logic gates in the netlist to primitive logic gates on the timing critical paths.
- Pin swapping - Pin swapping optimization examines the slacks on the inputs of the gates on worst timing paths and optimizes the timing by swapping nets attached to the input pins, so the net with the least amount of slack is put on the fastest path through the gate without changing the function of the logic.
- Wire sizing
- Transistor (cell) sizing - Cell sizing is the process of assigning a drive strength for a specific cell in the library to a cell instance in the design. If a low-drive-strength cell lies on the timing-critical path, it is replaced by a higher-drive-strength cell to reduce the violation (see the sizing sketch after this list).
- Re-routing
- Placement updates
- Re-synthesis (logic transformations)

- Cloning - Cell cloning is a method of optimization that decreases the load of a very heavily loaded cell by replicating the cell. Replication is done by connecting an identical cell to the same inputs as the original cell, dividing the fanout load between the two and improving the timing.
- Taking advantage of useful skew
- Logic re-structuring/Transformation (w/Resynthesis) - Rearrange logic to meet timing constraints on critical paths of design
- Making sure we don't have false violations (false path, etc.)
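
As a rough sketch of the cell-sizing idea referenced above, the Python snippet below upsizes the slowest cell on a critical path until the setup slack is met; the cell names and delay values are invented, and load-dependent effects (the upsized cell's larger input capacitance slowing its driver) are ignored.

# Toy illustration of cell sizing on a critical path: replace the slowest
# up-sizable cell with a higher drive strength until the setup slack is met.
# Cell names and delay numbers are invented for illustration only.
CELL_DELAY = {  # hypothetical cell delay (ns) for a fixed load
    "NAND2_X1": 0.30,
    "NAND2_X2": 0.20,
    "NAND2_X4": 0.14,
}
UPSIZE = {"NAND2_X1": "NAND2_X2", "NAND2_X2": "NAND2_X4"}

def fix_setup_by_sizing(path_cells, required_time):
    """Upsize the slowest up-sizable cell on the path until slack >= 0."""
    cells = list(path_cells)
    while True:
        arrival = sum(CELL_DELAY[c] for c in cells)
        slack = required_time - arrival
        if slack >= 0:
            return cells, slack
        candidates = [c for c in cells if c in UPSIZE]
        if not candidates:
            return cells, slack  # cannot fix by sizing alone
        worst = max(candidates, key=lambda c: CELL_DELAY[c])
        cells[cells.index(worst)] = UPSIZE[worst]

path = ["NAND2_X1", "NAND2_X1", "NAND2_X1"]
print(fix_setup_by_sizing(path, required_time=0.60))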

Q4: Give the linear time computation scheme for Elmore delay in an RC interconnect tree.
Ans: The following is acceptable...
- Elmore delay formula
T(s,t) = sum over all nodes i on the path from s to t of R_i * C_i, where R_i is the resistance of the edge into node i and C_i is the total capacitance of the subtree rooted at node i; or, alternatively, the sum over all node capacitances, each multiplied by the resistance shared between the path of interest and the path from the root to that node.
- Explaining terms in formula
- Mentioning something that shows that it can be done in linear time ("lumped" or "shared" resistances, "recursive" calculations, etc.)

Q5: Given a unit wire resistance "r" and a unit wire capacitance "c", a wire segment of length "l" and width "w" has resistance "l/w" and capacitance "cwl". Can we reduce the Elmore delay by changing the width of a wire segment? Explain your answer.
Ans: You needed to mention that by scaling different segments by different amounts you can reduce the delay (e.g. wider segments near the root and narrower segments near the leaves). For a single segment in isolation, the delay is independent of width because the "w" terms cancel in its own RC product (r*l/w times c*w*l); width only matters because the segment's resistance also drives downstream capacitance and its capacitance loads the upstream resistance and the driver.
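
As a small illustration (assuming a hypothetical driver resistance Rd and sink load CL, neither of which is given in the question), the Python snippet below shows that the width cancels only in a segment's own RC product; once a driver or downstream load is present, width does affect the Elmore delay.

# Toy example showing when wire width does and does not affect Elmore delay.
# Unit values r, c and the driver/load values Rd, CL are illustrative.
r, c = 1.0, 1.0      # unit wire resistance and capacitance
l = 10.0             # segment length
Rd, CL = 5.0, 2.0    # hypothetical driver resistance and sink load

def segment_delay(w, Rd=0.0, CL=0.0):
    """Elmore delay of one segment of width w, driven by Rd and loaded by CL."""
    R_wire = r * l / w          # wire resistance
    C_wire = c * w * l          # wire capacitance
    # The driver sees all the wire cap plus the load; the wire resistance sees
    # half of its own (distributed) cap plus the load.
    return Rd * (C_wire + CL) + R_wire * (C_wire / 2.0 + CL)

# Ideal driver, no load: the w terms cancel and delay = r*c*l^2/2 for any width.
print([round(segment_delay(w), 2) for w in (0.5, 1.0, 2.0)])
# Real driver and load: width now changes the delay.
print([round(segment_delay(w, Rd, CL), 2) for w in (0.5, 1.0, 2.0)])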

Q6: Extend the ZST-DME algorithm to embed a binary tree such that the Elmore delay from the root to each leaf of the tree is identical.
Ans: You needed to mention that a new procedure is needed for calculating the Elmore delay assuming that certain merging points are chosen, instead of just the total downstream wire-length. The merging segment becomes a set of points with equal Elmore delay instead of just equal path length. You could refer to the paper "Low-Cost Single-Layer Clock Trees With Exact Zero Elmore Delay Skew" by Andrew B. Kahng and Chung-Wen Albert Tsao.

Q7: IPO (sometimes also referred to as "In-Place Optimization") tries to optimize the design timing by buffering long wires, resizing cells, restructuring logic etc.
Explain how these IPO steps affect the quality of the design in terms of area, congestion, timing slack.
(a) Why is this called "In-Place Optimization" ?
(b) Why are the two IPO steps different ?
(c) Why are both used ?

Ans: IPO optimizes timing by buffer insertion and cell resizing. Important steps performed in IPO include fixing setup and hold time violations and maximum transition time violations. These operations optimize the timing slack along all arcs. The resulting increase in area and the improvement in timing slack depend upon the timing and IPO constraints.
(a) This step is referred to as "In-Place Optimization" because IPO performs resizing and buffer insertion in place (between cells in the existing rows). It does not re-run placement optimization in this step.
(b) The first IPO step (IPO1) is performed after placement. It runs trial route --> extraction --> timing analysis to determine the quality of the placement, and setup and hold time fixing is done according to the results of the timing analysis. The second IPO step (IPO2) is performed after clock tree synthesis (CTS). CTS inserts clock buffers to balance the skew among all flip-flops, and IPO2 then optimizes the timing paths between flip-flops taking the actual clock skew into account.
(c) If the IPO2 step is not performed after CTS, then the timing paths between flip-flops are not tuned for clock skew variation. Even though NanoRoute performs timing optimization, that is mostly buffer insertion on long interconnect to fix setup and hold violations.

Q8: Clocking and Place-Route Flow. Consider the following steps:
- Clock sink placement
- Standard-cell global placement
- Standard-cell detailed placement
- Standard-cell ECO placement
- Clock buffer tree construction
- Global signal routing
- Detailed signal routing
- Bounded-skew (balanced) clock (sub)net routing
- Steiner clock (sub)net routing
- Clock sink useful skew scheduling (i.e., solving the linear program, etc.)
- Post-placement (global routing based) static timing analysis
- Post-detailed routing static timing analysis
(a) As a designer of a clock distribution flow for high-performance standard-cell based ASICs, how would you order these steps? Is it possible to use some steps more than once, others not at all (e.g., if subsumed by other steps).
(b) List the criteria used for assessing possible flows.
(c) What were the 3 next-best flows that you considered (describe as variants of your flow), and explain why you prefer your given answer.

Ans: (a) My basic flow:
(1) SC global placement
(2) post-placement STA
(3) clock sink useful-skew scheduling
(4) clock buffer tree construction that is useful-skew aware (cf. associative skew)
(5) standard-cell ECO placement (to put the buffers into the layout)
(6) Steiner clock subnet routing at lower levels of the clock tree (following CTGen type paradigm)
(7) bounded-skew clock subnet routing at all higher levels of the clock tree, and as necessary even at lower levels, to enforce useful skews
(8) global signal routing
(9) detailed signal routing
(10) post-detailed routing STA
(b) Criteria:
(1) likelihood of convergence with maximum clock frequency
(2) minimization of CPU time (by maximizing incremental steps, minimizing "detailed" steps, and minimizing iterations)
(3) make a good trade-off between wiring-based skew control and wire cost (this suggests Steiner routing at lower levels, bounded-skew routing at higher levels).
[Comment 1. Criteria NOT addressed: power, insertion delay, variant flow for hierarchical clocking or gated clocking.
Comment 2: I do not know of any technology for clock sink placement that can separate this from placement of remaining standard cells. So, my flow does not invoke this step. I also don't want post-route ECOs.]
(c) Variants:
(1) introduce Step 11: loop over Steps 3-10 (not adopted because cost benefit ratio was not attractive, and because there is a trial placement + global routing to drive useful-skew scheduling, buffer tree construction and ECO placement);
(2) after Steps 1-4, re-place the entire netlist (global, detailed placement) and then skip Step 5 (not adopted because the benefits of avoiding ECO placement and leveraging a good clock skeleton were felt to be small, since the buffer tree will largely reflect the netlist structure, and re-placing can destroy assumptions made in Steps 3-4);
(3) can iterate the first 5 steps essentially by iterating: clock sink placement, (ECO placement for legalization), (incremental) standard-cell (global + detailed) placement (not adopted because I feel that any objective for standalone clock sink placement would be very "fuzzy", e.g., based on sizes of intersections of fan-in/fan-out cones of sequentially adjacent FFs)

Q9: If we migrate to the next technology node and double the gate count of a design, how would you expect the size of the LEF and routed DEF files to change? Explain your reasoning.
Ans: The LEF file will remain roughly the same size (the same richness of cell library, say between 500 and 1200 masters), modulo possible changes in conventions (e.g., CTLF used to be a part of LEF) and possible additional library model semantics (e.g., adding power modeling into LEF). The DEF file should at least double: the components and nets will double, and if there is extra routing complexity (more complex geometries, and more segments per connection due to antenna rules or badly scaling router heuristics), the DEF could grow significantly faster.

Wilder Technologies Launches Line of Test Adaptor Products for Test & Measurement Companies


Vancouver, WA -- Wilder Technologies has released a new family of test adaptor products for the test & measurement industry that allow users to test and validate their designs for compliance with individual high-speed serial protocol standards and to identify deficiencies in their products. Designed to deliver the highest electrical performance, Wilder’s products are used by design engineers, test and measurement engineers, validation engineers and compliance test engineers to detect defects through the use of the test fixtures and to establish failure criteria accurately.

Priced in the $2-3K range, Wilder’s products have demonstrated their effectiveness through outstanding S-parameter and TDR measurements, correlating 3D EM models to empirical measurements for true design accuracy. The test adaptors work with test and measurement companies in mechanical, electrical and signal integrity design.

“Our team has worked together in test and measurement companies with a combined experience of over 125 years in mechanical, electrical and Signal Integrity design, including extensive experience in field engineering, manufacturing, applications and customer support,” explained Wilder Technologies founder Paul Deringer.

Using a systematic approach to solving signal integrity challenges via high-speed, high-performance test adaptors, Wilder has initially released several test adaptor products, including DisplayPort, SATA, SAS and HDMI.

Located in Vancouver, WA, Wilder Technologies is committed to solving signal integrity challenges using time and frequency domain analysis, measurement techniques, and state of the art tools and design methodologies to develop metrological solutions for a broad range of companies in the telecommunications and other industries.

For more information, visit www.wildertechnologies.com.

Cambridge Wireless is Unlocking New Semiconductor Opportunities in Connectivity
20th May 2010, Cambridge, UK


Cambridge Wireless has announced the second meeting of its popular Semiconductor Special Interest Group (SIG), taking place on 20th May in Cambridge, where it is hosted by Bluetooth chip giant CSR. For more information, please visit www.cambridgewireless.co.uk/events

This discussion will examine current and future developments in semiconductors that will provide the data throughput required for new applications, and also the semiconductor devices that will support these applications on battery-powered mobile devices.

Peter Claydon of Silicon Southwest, a champion of this SIG, comments: “For this event we’ve gathered together an impressive set of speakers from many of the world’s most forward-thinking chip companies. I’m looking forward to some lively and informed debate on developments that will shape all our futures over the next ten years.”

The ability of networks to deliver high data throughput to users is in part governed by the development of more sophisticated modulation techniques and algorithms, but the ability to deploy these in the real world is wholly dependent on developments in the semiconductor arena that enable the implementation of these algorithms for an acceptable cost and power consumption. This applies both to terminal devices, which may be handsets, data cards, powerline adaptors or increasingly embedded in other devices, such as the Amazon Kindle, and to network devices that enable smaller and cheaper base stations, including femtocells and backhaul solutions.

It is a truism that the development of applications follows the availability of the underlying networks that support the required data throughput. For example, YouTube and Spotify would not be possible without widespread availability of wired broadband. So what are the applications that are driving the need for higher data throughputs and what semiconductor devices are being developed that will support these applications?

Attendees will hear from and debate with the following industry specialists:
• Raj Gawera, Vice President of HBU Marketing at CSR
• Mike Muller, CTO at ARM
• Gordon Lindsay, Associate Director at Broadcom Europe
• Ben Timmons, Senior Director of Business Development at Qualcomm
• Tim Fowler, Commercial Director at Cambridge Consultants

“This event highlights the ever changing demands faced by the semiconductor industry being driven by the need for efficient movement of higher and higher levels of data within acceptable levels of power consumption,” explained Eric Schorn, VP of Marketing, Processor Division, ARM. “ARM is investing to address these challenges and is a great supporter of innovation in this field which will enable us to help deliver the wired and handheld devices of tomorrow.”

This SIG is championed by Eric Schorn of ARM, Peter Claydon of Silicon Southwest and Carson Bradbury of Cre8Ventures.

VSIDE - VSDSP Integrated Development Environment


VLSI Solution has announced VSIDE, an integrated development environment for its 16/40-bit VSDSP digital signal processor family. VSIDE contains a complete set of development utilities, including an optimizing ANSI-C compiler, assembler, linker and profiler. All programs are integrated into a simple-to-use, easy-to-learn package running on a PC under Windows XP or Vista.

VSIDE supports emulator-based debugging using real hardware. It also contains several example projects to help users get started easily. The beta version of the tool has already been used successfully in the development of many audio products, such as echo cancellation for a Skype phone and pitch shifting of the audio source for a portable karaoke product. DSPeaker's (www.dspeaker.com) award-winning Anti-Mode™ algorithm was debugged in a short time using the powerful tools of VSIDE.

VSIDE currently supports VLSI Solution's audio codec chip VS1053 as well as VLSI's all-new digital signal processor circuit VS8053. Support for the low cost VS1000 audio system chip will be added by Q1/2011. VLSI Solution's current programming examples will gradually be ported to VSIDE.

Keeping with the spirit of VLSI Solution's openness policy, VSIDE can be downloaded for free at: http://www.vlsi.fi/en/support/software/vside.html

About VLSI Solution
VLSI Solution is an innovative technology creator that designs and manufactures integrated circuits. Within its 19 years of existence VLSI has built an extensive in-house IP library and has the capability to pull through complicated mixed-signal IC projects, ranging from digital audio to RF applications.

For more information, see http://www.vlsi.fi/

Press Release: Baolab creates nanoscale MEMS inside the CMOS wafer


Baolab Microsystems has announced a new technology that constructs nanoscale MEMS (Micro Electro Mechanical Systems) within the structure of the CMOS wafer itself, using standard, high-volume CMOS lines. This is much easier and quicker, with fewer process steps, than existing MEMS fabrication techniques that build the MEMS on the surface of the wafer, and it reduces the cost of a MEMS by up to two thirds, or even more if several different MEMS are created together on the same chip.

The Baolab NanoEMS™ technology uses the existing metal layers in a CMOS wafer to form the MEMS structure using standard mask techniques. The Inter Metal Dielectric (IMD) is then etched away through the pad openings in the passivation layer using vHF (vapour HF). The etching uses equipment that is already available for volume production and takes less than an hour, which is insignificant compared to the overall production time. The holes are then sealed and the chip packaged as required. As only standard CMOS processes are used, NanoEMS MEMS can be directly integrated with active circuitry as required.

"We have solved the challenge of building MEMS in a completely different way," explained Dave Doyle, Baolab's CEO. "Existing MEMS technologies are slow, expensive and require specialist equipment. They have to be either built on top of the wafer at a post production stage or into a recess in the wafer. By contrast, our new NanoEMS technology enables MEMS to be built using standard CMOS technologies during the normal flow of the CMOS lines."

Baolab has successfully created MEMS devices using standard 0.18um 8" volume CMOS wafers with four or more metal layers, and has achieved minimum feature sizes down to 200 nanometres. This is an order of magnitude smaller than is currently possible with conventional MEMS devices, bringing the new NanoEMS MEMS into the realm of nanostructures, with the additional benefits of smaller sizes, lower power consumption and faster devices.

Baolab will be making a range of discrete MEMS including RF switches, electronic compasses and accelerometers, along with solutions that combine several functions in one chip. The prototype stage has already proved the NanoEMS technology and evaluation samples will be available later this year. These are aimed at handset designers and manufacturers, and Power Amplifier and RF Front End Module markets.

For further information on Baolab Microsystems, please go to www.baolab.com.
e-mail: info[at]baolab[dot]com
Institut Politècnic del Campus de Terrassa, 08220 Terrassa, Spain.
Tel.: +34-93-394-17-70

Press contact for interviews and illustrations is Nigel Robson, Vortex PR.
e-mail: Nigel[at]vortexpr[dot]com
Tel: +44 1481 233080
NanoEMS is a trademark of Baolab Microsystems, S.L.