The AI infrastructure world has a bandwidth problem that most people do not talk about. While everyone focuses on GPU counts and model parameters, the real bottleneck often sits in the network fabric connecting those GPUs. Tower Semiconductor and Salience Labs just announced a partnership that could fundamentally change how AI data centers move data, and practitioners should pay attention.

The Problem With Current AI Networks
Today's AI data centers rely on electronic packet switching (EPS) to route data between GPUs, storage, and other components. This architecture requires constant optical-to-electronic-to-optical (OEO) conversion. Light signals come in from fiber optics, get converted to electrical signals for processing and switching, then get converted back to light for transmission.
Every conversion introduces latency. Every conversion consumes power. At the scale of modern AI training clusters, where thousands of GPUs need to exchange gradients and activations in real time, these small inefficiencies compound into serious performance constraints.
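The compounding effect is easy to see with a back-of-the-envelope calculation. The figures below are illustrative assumptions for the sketch, not measured specifications of any particular switch:

```python
# Back-of-the-envelope sketch of how per-hop OEO overhead compounds
# across a multi-tier fabric. All numbers are illustrative assumptions,
# not vendor specifications.

OEO_LATENCY_NS = 500   # assumed serdes + electronic switching delay per hop
OCS_LATENCY_NS = 10    # assumed fly-through delay for an all-optical hop
HOPS = 5               # assumed hop count (e.g. leaf/spine/core and back)

def path_latency_ns(per_hop_ns: float, hops: int) -> float:
    """Total one-way switching latency accumulated over a multi-hop path."""
    return per_hop_ns * hops

eps = path_latency_ns(OEO_LATENCY_NS, HOPS)
ocs = path_latency_ns(OCS_LATENCY_NS, HOPS)
print(f"EPS path: {eps:.0f} ns, OCS path: {ocs:.0f} ns, saved per traversal: {eps - ocs:.0f} ns")
```

With these assumed numbers, a five-hop electronic path accumulates microseconds of switching delay that an all-optical path largely avoids, and that delay is paid on every gradient exchange of a training run.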
The numbers tell the story clearly. Dell'Oro Group estimates that data center switch spending in AI back-end networks will exceed $100 billion by 2030. That spending reflects both the scale of the problem and the opportunity for better solutions.
What Optical Circuit Switches Actually Do
Optical Circuit Switches (OCS) take a fundamentally different approach. Instead of converting light to electricity for switching decisions, they route optical signals directly in the photonic domain. The light stays as light throughout the entire switching process.
The Tower and Salience Labs partnership focuses on manufacturing Photonic Integrated Circuit (PIC) based optical switches at scale. The collaboration leverages Tower's existing silicon photonics platforms, specifically PH18DA with integrated III-V lasers and TPS45PH with low-loss nitride waveguides.
Vaysh Kewada, CEO of Salience Labs, described the significance: "Tower is a key partner strengthening our ability to deliver optical switch technology optimized for the performance and power demands of AI data centers."
The key benefits include ultra-low latency (no OEO conversion delays), dramatically lower energy per bit across optical interconnects, and the ability to scale bandwidth without proportionally scaling power consumption.
Why This Matters for AI Workloads
AI training workloads have unique networking requirements that make optical switching particularly valuable. Distributed training across hundreds or thousands of GPUs requires frequent all-reduce operations, in which every GPU's gradients must be summed and the result redistributed to the entire cluster. These collective communication patterns create massive, bursty traffic that benefits from low-latency, high-bandwidth switching.
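The scale of that collective traffic is worth quantifying. For the common ring all-reduce algorithm, each GPU transmits roughly 2(N-1)/N times the gradient size per synchronization step, i.e. nearly twice the full gradient through every NIC. A minimal sketch, with an assumed model size and GPU count:

```python
# Why all-reduce stresses the fabric: per-GPU traffic for a ring
# all-reduce is 2*(N-1)/N times the gradient size, so each sync step
# pushes close to 2x the full gradient through every GPU's link.
# Model size and GPU count below are illustrative assumptions.

def ring_allreduce_bytes_per_gpu(gradient_bytes: int, n_gpus: int) -> float:
    """Bytes each GPU transmits in one ring all-reduce."""
    return 2 * (n_gpus - 1) / n_gpus * gradient_bytes

grad_bytes = int(70e9 * 2)  # assumed 70B-parameter model, 2-byte (fp16) gradients
per_gpu = ring_allreduce_bytes_per_gpu(grad_bytes, 1024)
print(f"~{per_gpu / 1e9:.0f} GB transmitted per GPU per gradient sync")
```

At these assumed sizes, every gradient synchronization moves hundreds of gigabytes through each GPU's network link, which is why per-hop latency and per-bit energy in the fabric dominate the cost of scaling out.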
Inference workloads at scale face similar challenges. Mixture-of-experts architectures route different parts of each request to different experts, creating complex traffic patterns. Agentic AI systems generate unpredictable communication between components. In all these cases, reducing network latency directly improves throughput and reduces cost per token.
The timing of this announcement is notable. Tower Semiconductor recently announced a separate partnership with NVIDIA to produce 1.6T data center optical modules designed for NVIDIA networking protocols. The convergence of photonics and AI infrastructure is accelerating across multiple fronts.
Manufacturing at Scale
One of the most significant aspects of this partnership is the emphasis on moving "from development into pre-production phase, driving product readiness and at-scale deployment." Silicon photonics has existed in research labs for years, but manufacturing at the volumes required for data center deployment remains challenging.
Tower Semiconductor brings established high-volume manufacturing capabilities. Dr. Ed Preisler, Tower's VP and General Manager, noted that "silicon photonics with integrated light sources is a key enabler for scaling next-generation optical connectivity." The integration of III-V laser sources directly into the photonics platform simplifies manufacturing and improves reliability.
Salience Labs, founded in 2021 and backed by research from the University of Oxford and the University of Münster, brings the design expertise for switches specifically optimized for AI workloads. Their photonic switch technology directs data optically without requiring transceivers at each switching stage.
Regional Implications
For those of us building AI infrastructure in the UAE and Middle East, developments in optical networking have particular relevance. The region is investing heavily in sovereign AI capabilities, and efficient data center infrastructure will determine how much compute those investments actually deliver.
Power consumption and cooling represent major operational costs in Gulf region data centers. Any technology that reduces the energy per bit of data transfer directly improves the economics of regional AI infrastructure, and optical switching arrives just as those regional buildouts accelerate.
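The energy-per-bit argument scales linearly with traffic, which makes it easy to sketch. The pJ/bit figures, traffic volume, and electricity price below are assumptions chosen for illustration, not measurements of any specific product:

```python
# Illustrative economics of energy-per-bit at fabric scale. The pJ/bit
# values, sustained traffic, and electricity price are all assumptions
# for the sketch, not figures from Tower or Salience Labs.

EPS_PJ_PER_BIT = 15.0   # assumed energy per bit through an electronic switch path
OCS_PJ_PER_BIT = 1.0    # assumed energy per bit through an optical path
FABRIC_TBPS = 500.0     # assumed sustained fabric traffic in terabits/s
USD_PER_KWH = 0.08      # assumed industrial electricity price

def annual_energy_cost_usd(pj_per_bit: float, tbps: float, usd_per_kwh: float) -> float:
    """Yearly electricity cost of switching a sustained traffic load."""
    watts = pj_per_bit * 1e-12 * tbps * 1e12   # (J/bit) * (bits/s) = W
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

eps = annual_energy_cost_usd(EPS_PJ_PER_BIT, FABRIC_TBPS, USD_PER_KWH)
ocs = annual_energy_cost_usd(OCS_PJ_PER_BIT, FABRIC_TBPS, USD_PER_KWH)
print(f"EPS: ${eps:,.0f}/yr, OCS: ${ocs:,.0f}/yr in switching energy alone")
```

The raw switching energy is only part of the bill: in a hot climate, every watt dissipated in the fabric also has to be removed by the cooling plant, so the true operational saving is larger than the direct electricity line item.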
The partnership also signals broader industry direction. Major hyperscalers are likely watching optical switching developments closely. As these technologies reach production readiness, they could become standard components in next-generation AI data centers globally.
What to Watch Next
Both companies will attend the OFC 2026 Conference in Los Angeles from March 17-19, where they will demonstrate their progress. For practitioners planning infrastructure investments, this timeframe matters: pre-production in early 2026 suggests commercial availability could arrive by late 2026 or early 2027.
The transition from electronic to optical switching will not happen overnight. The installed base of electronic switching in existing data centers is enormous, and any transition involves complex integration work. But the technical advantages of optical switching are compelling enough that adoption seems likely to accelerate as manufacturing scales.
This is the kind of infrastructure development that rarely makes headlines but quietly determines what AI applications become practical to build. Lower latency, higher bandwidth, and lower power consumption at the network layer enable larger training runs, faster inference, and more complex agentic architectures. The foundation matters as much as the models that run on top of it.