Apr 24, 2026

Data centers are entering an era where fiber optics are indispensable


Figure 1: Key advantages of Dense Wavelength Division Multiplexing Co-Packaged Optics (DWDMCPO). (Image credit: Scintil Photonics)

 

There is a saying in the data center design field: "Use copper cables wherever possible, and resort to optical technology only when fiber is absolutely necessary." For years, the industry's pragmatic approach has been to prioritize low-cost copper until the laws of physics force an upgrade. But today, as artificial intelligence (AI) computing clusters move toward "computing power factories" deploying millions of graphics processing units (GPUs), and data center cost-effectiveness comes under immense pressure, the industry is discovering that this "no alternative to optics" inflection point has arrived far sooner than anyone anticipated.

 

 

Network architects have long known that copper cables are severely distance-limited at high speeds, but few analyze the underlying causes from first physical principles. Engineers have pushed copper's reach and bandwidth to the limit, yet even the most ingenious designs cannot overcome the laws of physics. Understanding why explains the industry's urgent shift toward Co-Packaged Optics (CPO) technology.

 

The higher the frequency of the electrical signals transmitted through copper cables, the higher the symbol rate and bandwidth, and the more information they can carry. The problem is that as signal frequency increases, transmission distance decreases significantly. There are two primary causes of signal loss: the skin effect and dielectric loss.

 

Skin Effect: The "Surface Concentration" of Current

When an AC signal travels through a copper cable, the changing magnetic field induces eddy currents within the conductor. The magnetic field generated by these eddy currents cancels out the signal magnetic field at the center of the cable. The faster the signal magnetic field changes, that is, the higher the frequency, the stronger this cancellation effect becomes.

 

The direct result is that the current is "squeezed" into an extremely thin surface layer of the conductor; the thickness of this layer is known as the skin depth. Current signal transmission in AI data centers commonly uses frequencies around 53 GHz. At this frequency, the skin depth of copper cables is only 0.3 μm. Such a thin transmission layer utilizes less than 1% of the conductor's cross-sectional area, causing the cable's resistance to skyrocket, even exceeding its DC resistance by more than 100 times.
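These figures can be sanity-checked with the standard good-conductor formula δ = √(ρ / (π f μ)). The sketch below assumes copper's room-temperature resistivity (about 1.68 × 10⁻⁸ Ω·m) and, for the cross-section estimate, a hypothetical conductor radius of 0.25 mm; it is an order-of-magnitude check, not data for any specific cable:

```python
import math

def skin_depth_m(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth (m) of a good conductor: sqrt(rho / (pi * f * mu))."""
    mu = mu_r * 4e-7 * math.pi  # magnetic permeability (H/m)
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

delta = skin_depth_m(53e9)          # ~0.28 um at 53 GHz, matching the ~0.3 um above
radius = 0.25e-3                    # hypothetical conductor radius (m)
used_fraction = 2 * delta / radius  # thin-shell estimate of utilized cross-section
```

Under these assumptions `used_fraction` comes out below 1%, consistent with the claim above; halving the frequency only widens the skin depth by √2, so the problem eases far more slowly than bandwidth demand grows.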

 

Dielectric Loss: "Energy Dissipation" in the Insulation Layer

Another major culprit behind signal attenuation in copper cables is dielectric loss. In high-frequency scenarios at the gigahertz level, the molecules within the cable's insulating dielectric cannot keep pace with the rapid fluctuations of the electric field. This lag between the applied field and the molecular response converts the signal's electromagnetic energy into heat, dissipating it as loss.
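For a rough sense of scale, dielectric attenuation in a transmission line grows linearly with frequency: α_d ≈ π f √ε_r tan δ / c in nepers per meter (multiply by 8.686 for dB/m). The material values below (ε_r ≈ 2.1, tan δ ≈ 2 × 10⁻⁴, typical of a PTFE-like low-loss dielectric) are illustrative assumptions, not measurements of any particular cable:

```python
import math

C = 2.998e8  # speed of light in vacuum (m/s)

def dielectric_loss_db_per_m(freq_hz, eps_r=2.1, tan_delta=2e-4):
    """Dielectric attenuation: pi * f * sqrt(eps_r) * tan_delta / c, in dB/m."""
    alpha_np = math.pi * freq_hz * math.sqrt(eps_r) * tan_delta / C  # nepers/m
    return 8.686 * alpha_np  # convert nepers to decibels

loss_50ghz = dielectric_loss_db_per_m(50e9)  # ~1.3 dB/m under these assumptions
```

Because the loss is linear in frequency, every doubling of signal rate doubles this contribution, on top of the growing skin-effect loss.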

 

Dual Losses: The Fatal Weakness of Copper Cables

When the skin effect and dielectric loss act in concert, signal loss in copper cables climbs steeply as frequency rises: skin-effect loss grows roughly as the square root of frequency, and dielectric loss grows linearly with it.

 

To put numbers on it: at a frequency of 50 GHz, even with high-quality copper cables, the combined loss from these two effects consumes over 90% of the signal power budget within a 2-meter transmission distance. The physical properties of copper dictate an irreconcilable core contradiction: bandwidth and transmission distance cannot both be maximized.
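A toy model makes the trade-off concrete: model total attenuation as a √f conductor term plus a linear-in-f dielectric term. The coefficients below are made-up fitting constants, chosen only so the total reproduces the ballpark in the text (about 10 dB, i.e. more than 90% of power lost, over 2 m at 50 GHz):

```python
import math

def cable_loss_db(freq_ghz, length_m, a=0.5, b=0.03):
    """Illustrative total attenuation (dB): a*sqrt(f)*L (skin) + b*f*L (dielectric).
    a and b are hypothetical coefficients, not measured cable data."""
    return (a * math.sqrt(freq_ghz) + b * freq_ghz) * length_m

loss = cable_loss_db(50, 2)           # ~10 dB over 2 m at 50 GHz
power_remaining = 10 ** (-loss / 10)  # ~0.10, i.e. over 90% of power lost
```

The same model shows why shortening the cable only buys so much: loss scales linearly with length but keeps climbing with frequency, so each speed generation shrinks copper's usable reach.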

 

Dense Wavelength Division Multiplexing Co-Packaged Optics (DWDMCPO)

Today, the performance bottleneck for AI training clusters has shifted from floating-point operations per second (FLOPS) to bandwidth requirements.

In scale-up networks with coherent memory, a new generation of networking technology is needed to meet multiple demands at once:

 

| As power consumption becomes a limiting factor in data center deployments, lower power consumption per bit is required;

| Since floating-point operations are no longer the bottleneck, greater bandwidth must be provided for each processor;

| As physical space for fiber deployment becomes limited, higher integration density must be achieved;

| To enable cross-rack horizontal scaling, longer transmission distances must be supported;

| To maintain memory domain consistency and improve GPU utilization, ultra-low tail latency is required;

| High reliability and ease of maintenance are also essential.

 

When tail latency becomes a core constraint, meeting the above requirements will significantly boost GPU utilization (some model estimates indicate utilization can more than double), substantially reduce network power consumption, and improve end-to-end model performance. Currently, DWDMCPO is the only technical approach capable of simultaneously meeting these stringent requirements, and it will bring far-reaching cost-benefit transformations to hyperscale data center operators.

 

By transmitting multiple wavelengths of light over a single fiber, DWDMCPO technology provides each GPU with many parallel, lower-speed channels. Scaling the number of transmitted wavelengths from 1 to 8, 16, or even more multiplies bandwidth without raising per-lane speed, and this scaling pattern is poised to revolutionize AI networks just as DWDM technology revolutionized the Internet backbone 25 years ago.
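The scaling argument is simple multiplication: aggregate bandwidth per fiber is the wavelength count times the per-wavelength lane rate. A minimal sketch, assuming the roughly 50 Gbit/s per-wavelength rate the article cites:

```python
def aggregate_gbps(num_wavelengths, lane_rate_gbps=50):
    """Aggregate throughput of one DWDM fiber: wavelengths x per-lane rate."""
    return num_wavelengths * lane_rate_gbps

# Scaling the wavelength count scales bandwidth without raising lane speed:
for n in (1, 8, 16, 32):
    print(f"{n:2d} wavelengths -> {aggregate_gbps(n)} Gbit/s per fiber")
```

Sixteen 50 Gbit/s wavelengths already deliver 800 Gbit/s over one fiber, with each lane still slow enough to avoid the heavy signal processing that high serial rates demand.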

 

DWDM's single-channel transmission rate is approximately 50–64 Gbit/s, which is relatively low-speed by modern serdes standards. This allows engineers to simplify the data encoding scheme from PAM4 to NRZ, eliminating multiple costly and power-intensive signal processing stages and cutting both power consumption and latency by streamlining the signal transmission path.
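The encoding simplification shows up in the symbol-rate arithmetic: NRZ carries 1 bit per symbol, PAM4 carries 2. A sketch of the raw line-rate relationship (FEC and protocol overhead are deliberately ignored here):

```python
BITS_PER_SYMBOL = {"NRZ": 1, "PAM4": 2}

def symbol_rate_gbd(line_rate_gbps, modulation):
    """Symbol (baud) rate needed to carry a given line rate."""
    return line_rate_gbps / BITS_PER_SYMBOL[modulation]

# A 50 Gbit/s DWDM lane needs only a 50 GBd NRZ signal...
nrz = symbol_rate_gbd(50, "NRZ")     # 50.0 GBd
# ...whereas a 200 Gbit/s serial lane forces PAM4 at 100 GBd.
pam4 = symbol_rate_gbd(200, "PAM4")  # 100.0 GBd
```

PAM4's four amplitude levels shrink the eye opening and require equalization, heavy DSP, and stronger forward error correction; NRZ's two levels let a link drop those stages, which is where the power and latency savings come from.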

 

Tail latency is the "silent killer" eroding data center ROI. When GPU clusters process data tokens, they require a continuous, predictable bitstream input. If a single bit experiences transmission delay, the rest of the cluster becomes idle, significantly reducing processor utilization. As the number of processors in a cluster increases, the probability of p999-level tail bit latency rises significantly, and processor utilization declines accordingly.
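The compounding effect can be modeled with independent links: if each link has tail probability p per synchronized step, the chance that at least one link stalls the whole step is 1 − (1 − p)^N. A sketch under an independence assumption (real networks show correlated congestion, which this ignores):

```python
def cluster_stall_probability(p_tail, num_links):
    """P(at least one of num_links hits its tail) = 1 - (1 - p)^N,
    assuming statistically independent links."""
    return 1.0 - (1.0 - p_tail) ** num_links

# With a p999 tail (p = 0.001 per link per step), a 1,000-link cluster
# stalls on roughly 63% of steps; at 10,000 links it is near-certain.
p_1k = cluster_stall_probability(1e-3, 1_000)
p_10k = cluster_stall_probability(1e-3, 10_000)
```

This is why a per-link latency percentile that sounds excellent (99.9%) still cripples utilization at cluster scale: the synchronized step is only as fast as the slowest link.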

 

Without low-latency, low-power, long-range DWDMCPO technology, the scalability of networks would be limited, and the performance of large language models (LLMs) would consequently be constrained. Larger-scale, flatter, low-latency scale-up networks can support larger key-value caches, directly increasing the model's context window size and content relevance, thereby effectively expanding the LLM's working memory. Increased low-latency bandwidth also holds the potential to increase the number of transformer layers, enabling models to possess deeper thinking and reasoning capabilities. In short: breaking through the communication constraints between chips allows LLMs to hold more information in working memory and complete more reasoning steps without stalling.

 

Copper cabling was undoubtedly a great technology in its day, but its inherent physical limitations have created an urgent need for more advanced networking technologies to significantly improve the return on investment for hyperscale data centers and expand the capabilities of LLMs.

 

In a few years, AI data centers built entirely on copper cabling will become as unimaginable as long-distance internet connections relying solely on copper.

 

Operators of hyperscale data centers, investors, LLM developers, and other stakeholders in AI infrastructure must take this technological trend seriously: it will not only fundamentally reshape the cost-effectiveness landscape of data centers but also continuously expand the boundaries of AI capabilities. Enterprises that adopt DWDMCPO technology first will establish architectural advantages in terms of cost, power consumption, and performance; these advantages will continue to amplify as AI infrastructure scales up.
