EMIB vs CoWoS
Two approaches to heterogeneous integration for AI and HPC silicon
EMIB — Intel Foundry Services
- **No large interposer:** a small silicon bridge (~5 × 4 mm) embedded in the substrate covers the die-to-die zone only
- **Higher yield potential:** a smaller bridge die means lower defect probability, and the substrate scales independently
- **Lower warpage risk:** the organic substrate's CTE is closer to the PCB's; no full-wafer silicon mismatch
- **Scalable multi-die layout:** multiple bridges can be placed wherever high-bandwidth die-to-die links are needed
CoWoS — TSMC Advanced Packaging
- **Full-wafer silicon interposer:** a passive Si interposer (CoWoS-S), or RDL-based (CoWoS-R) and local-silicon-bridge (CoWoS-L) variants, sits beneath all chiplets
- **Ultra-high interconnect density:** ~9 µm RDL pitch enables massive die-to-HBM bandwidth (up to 3.35 TB/s per GPU)
- **Higher warpage risk:** CTE mismatch between the large Si interposer and the organic substrate drives warpage at reflow
- **Capacity-constrained supply:** CoWoS capacity at TSMC is the primary bottleneck for AI accelerator supply in 2024–25
Technical parameters
| Parameter | Intel EMIB | TSMC CoWoS | Edge |
|---|---|---|---|
| Interconnect pitch (die-to-die) | ~55 µm | ~9 µm (RDL) | CoWoS |
| Package height (z-dimension) | Lower | Taller (extra Si layer) | EMIB |
| Silicon area efficiency (relative to die count) | High (bridge <0.2% of pkg) | Low (interposer ~100% of pkg) | EMIB |
| Max aggregate bandwidth (per package) | ~2 TB/s (Ponte Vecchio) | ~3.35 TB/s (H100 NVL) | CoWoS |
| Warpage risk (at reflow) | Lower | Higher | EMIB |
| Foundry dependency (packaging lock-in) | Intel internal / IFS | TSMC only | Context |
| Known silicon deployments (production 2023–2025) | Meteor Lake, Ponte Vecchio, Gaudi 3 | A100, H100, MI300X, Blackwell | CoWoS |
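The pitch row translates directly into routing density per millimeter of die edge. A rough sketch, treating both table values as a single-layer line pitch (a simplification: EMIB's ~55 µm is a microbump pitch, and real designs stack multiple RDL layers):

```python
def wires_per_mm(pitch_um: float) -> float:
    """Signal tracks per mm of die edge on one routing layer at a given pitch."""
    return 1000.0 / pitch_um

# Pitches from the comparison table above; single-layer simplification
emib = wires_per_mm(55)   # ~18 tracks/mm
cowos = wires_per_mm(9)   # ~111 tracks/mm
print(f"EMIB: {emib:.0f}/mm, CoWoS RDL: {cowos:.0f}/mm, ratio ~{cowos / emib:.1f}x")
```

A ~6x per-layer density edge is why CoWoS wins the raw-bandwidth rows even though EMIB wins on yield, height, and warpage.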
Intel EMIB — where it ships
- Meteor Lake (Core Ultra)
- Ponte Vecchio GPU
- Gaudi 3 AI accelerator
- Altera Agilex FPGAs
- Sapphire Rapids HBM
TSMC CoWoS — where it ships
- NVIDIA A100 / H100
- AMD MI300X / MI300A
- NVIDIA Blackwell (GB200)
- Apple M-series Ultra
- AWS Trainium 2
