Executive Summary
As we enter 2026, the artificial intelligence infrastructure build-out is hitting a physical wall: the "Copper Limit." While GPU performance has skyrocketed, the ability to move data between chips is lagging. This post outlines why Co-Packaged Optics (CPO) is shifting from a science experiment to a necessity for hyperscalers, who the winners will be in 2026, and how the unit economics of the data center are being rewritten.
1. The Perspective: CPO is No Longer Optional for AI Workloads
For the last decade, we relied on pluggable optics (those silver modules you plug into the front of a switch). They were flexible and easy to service. But in the era of trillion-parameter AI models, they are becoming a liability.
The Impact on AI: The primary bottleneck in AI training clusters (e.g., clusters of 100k+ GPUs) is not compute; it is I/O (Input/Output) bandwidth and power consumption.
The Power Problem: Moving data electrically over copper traces to the edge of a switch consumes substantial power: by some estimates, 30-40% of a switch's power budget goes to I/O alone. CPO moves the optical engine right next to the switch ASIC, eliminating the long copper traces and reducing interconnect power consumption by more than 50%.
The Density Problem: AI workloads require massive "scale-up" bandwidth (GPU-to-GPU). You physically cannot fit enough pluggable modules on a 1RU faceplate to support the 102.4T and 204.8T switches needed for next-gen clusters. CPO is the only way to achieve the required shoreline density.
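The shoreline and power arguments above are easy to sanity-check with round numbers. The sketch below is a back-of-envelope illustration, not vendor data: the 1RU cage count and the per-module wattages are assumptions chosen to match the typical ranges cited in the text.

```python
# Back-of-envelope: why pluggables run out of faceplate (and power budget)
# at 102.4T. All constants here are illustrative assumptions, not specs.

def ports_needed(switch_tbps: float, module_gbps: float) -> int:
    """Optical modules required to expose the switch's full bandwidth."""
    return int(switch_tbps * 1000 / module_gbps)

FACEPLATE_CAGES_1RU = 36  # assumed max OSFP/QSFP-DD cages on a 1RU faceplate

for switch_tbps, module_gbps in [(51.2, 800), (102.4, 800), (102.4, 1600)]:
    ports = ports_needed(switch_tbps, module_gbps)
    fits = "fits" if ports <= FACEPLATE_CAGES_1RU else "does NOT fit"
    print(f"{switch_tbps}T @ {module_gbps}G modules: {ports} ports -> {fits} in 1RU")

# Power side: assume ~15 W per 800G pluggable (DSP + driver) versus
# ~6 W for the equivalent CPO engine (the >50% reduction claimed above).
PLUGGABLE_W, CPO_W = 15.0, 6.0
ports = ports_needed(102.4, 800)  # 128 ports
print(f"102.4T I/O power: pluggable {ports * PLUGGABLE_W / 1000:.1f} kW "
      f"vs CPO {ports * CPO_W / 1000:.2f} kW")
```

Even with generous 1.6T modules, a 102.4T switch needs 64 ports, roughly double what a 1RU faceplate can hold under these assumptions, which is the shoreline problem in one line of arithmetic.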
Expert Take: We are seeing a bifurcation in the market. Standard cloud networking (email, web serving) will stick with pluggables for years. But AI "backend" networks (the fabric connecting GPUs) are aggressively moving to CPO because they simply cannot afford the power penalty of the old way.
2. The 2026 Leaderboard: Who is Positioned to Win?
In 2026, the market is moving from "PowerPoint slides" to "Production Silicon."
The 800lb Gorilla: Broadcom. Broadcom is forcing the market’s hand. With their Tomahawk series switches and Bailly CPO platform, they are the only player capable of delivering a fully integrated, mass-manufacturable solution today. They are effectively telling hyperscalers: "If you want the fastest switch, you are taking our optics."
The Ecosystem Builder: Marvell. Marvell is playing the "Android" to Broadcom's "Apple." They are focusing on open standards and merchant silicon, positioning themselves as the flexible alternative for hyperscalers (like Amazon or Google) who want to design their own custom interconnects using Marvell’s DSPs and optical drivers.
The Strategic Mover: Ciena. Traditionally a telecom/WAN player, Ciena is now a data center player. Their acquisition of Nubis Communications (more on this below) makes them a dangerous dark horse in the short-reach interconnect market.
The "Chiplet" Enablers: Keep a close eye on TSMC (packaging) and Ayar Labs (optical I/O). As designs move to chiplets, these firms provide the critical glue that makes CPO work.
3. Hyperscaler Milestones to Watch in 2026
2026 is the year of "The Pilot at Scale."
Design Wins: Expect public confirmation that a Tier-1 hyperscaler (likely Microsoft Azure or Meta) is deploying CPO-based switches for their primary AI training cluster.
Volume Production: While 2025 was about sampling, 2026 will see volume production of 51.2T and 102.4T switches with CPO. The driver is the transition to 200G/lane electrical signaling—copper struggles immensely at this speed, making optics the only viable path for reach >1 meter.
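The 200G/lane transition can be made concrete with simple lane-count arithmetic. A minimal sketch, with round numbers for illustration (actual SerDes counts vary by product):

```python
# Sketch: switch radix in SerDes lanes as lane rate scales.
# Round-number illustration; real products differ in detail.

def serdes_lanes(switch_tbps: float, lane_gbps: int) -> int:
    """Electrical lanes needed to carry the switch's full bandwidth."""
    return int(switch_tbps * 1000 / lane_gbps)

for tbps in (51.2, 102.4):
    for lane in (100, 200):
        print(f"{tbps}T @ {lane}G/lane -> {serdes_lanes(tbps, lane)} lanes")
```

The point of the exercise: a 102.4T switch at 200G/lane still carries 512 electrical lanes, and if copper cannot reliably carry 200G signaling beyond roughly a meter, every one of those lanes needs an optical path, which is exactly the volume driver described above.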
Reliability Data: The biggest hesitation has been serviceability (if the laser breaks, do I throw away the switch?). In 2026, look for white papers from hyperscalers proving that the Laser-as-a-Service (external laser source) model works, allowing lasers to be replaced without replacing the switch silicon.
4. M&A Spotlight: Ciena Buys Nubis
The News: Ciena acquired Nubis Communications.
The Insight: This is not just an acqui-hire. It is a strategic signal that the walls between "Telecom" (long-haul) and "Datacom" (inside the data center) are collapsing.
Why it matters: Nubis specialized in ultra-low latency, high-density optical engines specifically for CPO. By buying them, Ciena admits that the future of their growth isn't just connecting cities, but connecting racks.
Advancing the Market: This validates the "linear drive" approach (removing DSPs to save power). Nubis was a leader in Linear Pluggable Optics (LPO). Ciena can now offer a full coherent and short-reach portfolio, challenging Marvell and Broadcom directly inside the AI cluster.
5. The Unit Economics: TCO and the "Upgrade Tax"
Moving optics inside the switch fundamentally changes the financial model of the data center.
OpEx Wins: The Total Cost of Ownership (TCO) argument is unbeatable. Saving 5-10 Watts per link adds up to megawatts across a large cluster. In an AI data center, where power availability is the #1 constraint, power efficiency = revenue. Every watt saved on optics is a watt that can be used for a GPU.
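The "watts add up to megawatts" claim is worth quantifying. The sketch below uses the 5-10 W figure from the text; the link count per GPU and the per-GPU power draw are illustrative assumptions.

```python
# Back-of-envelope: what "5-10 W saved per link" means at cluster scale.
# GPU count matches the 100k+ clusters cited earlier; the links-per-GPU
# and per-GPU power figures are assumptions for illustration.

GPU_COUNT = 100_000
LINKS_PER_GPU = 4        # assumed optical links per GPU in the backend fabric
SAVED_W_PER_LINK = 7.5   # midpoint of the 5-10 W range
GPU_POWER_W = 1_000      # assumed per-GPU power draw

links = GPU_COUNT * LINKS_PER_GPU
saved_mw = links * SAVED_W_PER_LINK / 1e6
extra_gpus = int(links * SAVED_W_PER_LINK / GPU_POWER_W)

print(f"{links:,} links x {SAVED_W_PER_LINK} W = {saved_mw:.1f} MW saved")
print(f"Roughly {extra_gpus:,} additional {GPU_POWER_W} W GPUs "
      f"within the same power envelope")
```

Under these assumptions the savings come to about 3 MW, or a few thousand extra GPUs inside the same power envelope: the "every watt saved on optics is a watt for a GPU" argument in numbers.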
CapEx Shift: The unit cost of the switch increases significantly (because it now includes the optics), but the cost of the cabling infrastructure decreases.
The Upgrade Risk: The downside is the "Upgrade Tax." With pluggables, you could upgrade optical modules without changing the switch. With CPO, the optics and switch are married. This forces hyperscalers to synchronize their network silicon upgrades with their optical upgrades.
Assessment: For AI, this matters less than it seems. The silicon moves so fast that by the time the optics are obsolete, the switch ASIC is due for replacement anyway. Optics and network silicon upgrade cycles are now lock-step.
Conclusion & Next Step
The "Optics Wall" is the next great hurdle for AI scaling. 2026 will be the year the industry climbs over it. We are moving from a world of discrete components to a world of integrated silicon-photonics systems.
For Investors/Strategists: Watch the Broadcom (AVGO) earnings calls for specific mentions of "Tomahawk 6" attachment rates, and monitor Ciena (CIEN) for early signs of Nubis integration wins in AI clusters.