
A year of continuous shifts within the sector — and familiar debates beyond it

As we close out 2025, the milestones of the past twelve months underscore just how quickly the industry is shifting beneath our feet. DeepSeek’s breakthrough reshaped assumptions about compute efficiency and cost; NVIDIA’s announcement of Blackwell Ultra signaled yet another leap in accelerator performance; the White House’s AI Action Plan formalized the policy stakes around national compute capacity; Stargate’s Abilene facility began operating at unprecedented scale, becoming a symbol of the AI‑era mega‑campus; debates around AI circular investments highlighted both the ambition and fragility of capital flows into frontier infrastructure — to name only a few of the key milestones of the past year.

These developments set the stage for a year that will balance continuity with disruption. For vendors and operators, 2026 will bring meaningful shifts in technologies, architectures, and competitive dynamics. Yet from the outside, the narrative may feel familiar. The same themes that began surfacing more prominently in recent years — and defined public debate throughout 2025 — will continue to dominate headlines, even as the underlying infrastructure evolves at a far faster pace.

 

What we’re not predicting — because everyone else already is

Power scarcity remains the defining constraint, with power availability continuing to be the single most important determinant of site selection for data center projects. Speculation about an AI‑driven investment bubble is expected to intensify, as trillions of dollars in critical infrastructure are deployed amid lingering uncertainty about long‑term monetization models. And public visibility of the sector will keep rising, bringing sharper community pushback, permitting resistance, and societal concerns ranging from energy affordability to the impact of AI on jobs, as well as growing scrutiny over the safe and responsible use of AI, particularly among young people — pressures made worse by the industry’s lack of coherent, accessible, and positive messaging about its value to communities and the broader economy.

Because these forces are so obvious and so deeply embedded in the industry’s trajectory, we will not include them among our predictions. Instead, this outlook focuses on the emerging dynamics that will shape vendors, operators, and the broader ecosystem in ways both expected and unexpected.

 

The easy ones: our highest-confidence expectations for 2026

These trends are already well underway, with early signals evident throughout 2025, reinforcing a trajectory that leaves little doubt about their momentum heading into 2026.

1. Consolidation and partnerships accelerate

The complexity of gigawatt‑scale data centers is pushing vendors to work together more closely, driving a surge in strategic partnerships that combine expertise across power, cooling, controls, and integration. Expect more joint reference architectures, co‑engineered solutions, and collaborative designs that extend well beyond any single vendor’s historical domain. We anticipate at least ten additional partnership announcements in 2026 as vendors align to meet the growing demands of AI‑era infrastructure.

In parallel, consolidation will continue as vendors with differentiated capabilities become acquisition targets — particularly in high-priority areas such as liquid cooling, solid-state power electronics, and global design and service expertise. These acquisitions will further accelerate the shift toward full-stack delivery models, with integrated chip-to-rack, rack-to-row, and row-to-hall solutions becoming a defining competitive strategy. We expect no fewer than five acquisitions or take-private transactions crossing the $1 billion threshold, underscoring the intensifying race to secure critical capabilities across the DCPI stack.

 

2. Real builds matter more than bold visions (and vanished ones)

Multi‑billion‑dollar and multi‑gigawatt campus announcements might continue to dominate headlines, but the center of gravity will shift toward execution rather than ideation. Operators will focus on translating these bold visions into reality — securing power, navigating permitting, sequencing construction, and commissioning facilities on time.

Source: OpenAI – OpenAI, Oracle, and SoftBank expand Stargate with five new AI data center sites

 

With the running backlog of public announcements now exceeding 70 GW of stated capacity, a meaningful share of these projects is likely to remain “braggerwatts” — aspirational declarations that never progress past land options, concept designs, or early‑stage filings. As economic, regulatory, and power‑availability constraints sharpen, attention will shift back to credible projects with clear pathways to completion and well‑defined delivery plans.

Today, several sites are on trajectories that suggest they could eventually cross the fabled 1 GW capacity threshold, but none have reached that milestone yet. By the end of 2026, however, we expect at least five sites worldwide to surpass 1 GW of operational capacity.

 

3. Divergence grows before convergence returns

Despite efforts toward convergence, 2026 is likely to bring even greater architectural divergence across power and cooling, a proliferation of design pathways rather than a narrowing of them. This is being fueled by rapid technological shifts that show no signs of slowing.

On the power side, even as clarity improves around 400 Vdc and 800 Vdc rack architectures, vendors will diversify rather than narrow their portfolios — developing new families of DC circuit breakers, power shelves, hybrid and supercapacitor‑based energy storage, and MV switchgear integrated with solid-state electronics in preparation for deployments expected in 2028/29.

Cooling will see similar diversification. A testing ground of novel technologies — including two‑phase direct liquid cooling (DLC), CDU‑less single‑phase DLC, and a wide variety of cold‑plate architectures — is expected to gain momentum, expanding the solution diversity of the ecosystem.

In this environment, initiatives like the Open Compute Project (and its collaborations with ASHRAE, Current/OS, and others) will become even more important in steering the industry, offering reference frameworks and shared direction to help channel innovation while reducing unnecessary fragmentation.

Watch closely: trends gaining momentum — but not yet locked in

Early signals suggest these trends could gain real traction — but timing, economics, and scale remain uncertain.

 

4. “Micro‑mega” edge AI deployments are on the rise

As compute density within a single rack skyrockets, many AI workloads will be able to operate on a single cabinet — or just a handful of them. These compact yet powerful clusters will increasingly sit alongside conventional compute to support hybrid workloads. Expect a wave of megawatt-class, ultra-dense AI racks for enterprise post-training and inference — small-scale AI factories — embedded within colocation sites, enterprise campuses, or telco edge facilities.

What makes this shift noteworthy is what it reveals about broader AI adoption: AI is moving beyond pilots and proofs‑of‑concept and into day‑to‑day business operations, requiring right‑sized, high‑density compute footprints placed directly where data and decision‑making occur.

Architecturally, this marks a meaningful shift. Instead of concentrating accelerated compute solely in hyperscale campuses or purpose‑built training clusters, enterprises and colocators will increasingly deploy AI directly into existing facilities. This proximity to business‑critical workflows will drive demand for modular, pre‑engineered AI systems that can be “dropped in” with minimal disruption, along with managed AI‑infrastructure services that oversee monitoring, lifecycle management, and performance optimization.

 

5. Air cooling strikes back

The novelty of liquid cooling has dominated industry discourse for the past three years, pushing vendors and operators to rapidly adapt — bringing new products to market, redesigning systems to accommodate liquid infrastructure, and upskilling operational teams to support deployments at scale. But as AI deployments move beyond frontier‑model training clusters and into enterprise environments, high‑density AI racks will more frequently appear in facilities not originally designed for liquid cooling.

This shift will prompt a resurgence in advanced air‑cooling solutions. Expect a proliferation of 40–80 kW air‑cooled racks supported by extremely high‑performance thermal systems, paired with 60–150 kW liquid‑cooled racks equipped with liquid‑to‑air sidecars. The result: hybrid thermal profiles within the same facility, introducing complex challenges for operators managing uneven heat loads and airflow dynamics.
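To give a rough sense of why racks in this range push air cooling to its limits, the short sketch below estimates the airflow needed to carry a rack's heat in air. It is a back-of-envelope illustration only; the air-side temperature rise and the air properties are assumptions, not measured figures.

```python
# Back-of-envelope airflow required to remove a rack's heat with air alone.
# Air properties assume roughly 25 C at sea level; the 15 C air-side
# temperature rise is an illustrative assumption.

AIR_CP_KJ_PER_KG_C = 1.005      # specific heat of air
AIR_DENSITY_KG_PER_M3 = 1.18    # density of air

def airflow_cfm(heat_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow (CFM) so that mass_flow * cp * dT removes heat_kw."""
    kg_per_s = heat_kw / (AIR_CP_KJ_PER_KG_C * delta_t_c)
    m3_per_s = kg_per_s / AIR_DENSITY_KG_PER_M3
    return m3_per_s * 2118.88   # cubic meters per second -> cubic feet per minute

for kw in (40, 60, 80):
    print(f"{kw} kW rack at a 15 C air-side rise: ~{airflow_cfm(kw, 15):,.0f} CFM")
```

Even at the low end of that range, the implied airflow is several times what a conventional enterprise rack moves, which is why these designs depend on extremely high-performance thermal systems.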

Far from being overshadowed by liquid cooling, air‑cooling systems are poised for incremental growth as operators seek flexible, retrofit‑friendly approaches to support heterogeneous rack densities across mixed‑use sites.

 

6. Immersion cooling re-emerges in modular form

After the hype cycle of recent years, immersion cooling is beginning to find its footing in more targeted, pragmatic applications. Rather than competing head‑on with DLC for hyperscale AI clusters, immersion vendors are shifting toward modular, compact systems that deliver differentiated value.

We expect growing traction in edge, telecom, and industrial environments, where immersion’s sealed‑bath architecture offers advantages in reliability, environmental isolation, and minimal site modification. These deployments will remain modest in scale, but meaningful in carving out a sustainable niche beyond today’s supercomputing and crypto segments.

To be clear, immersion cooling is not poised to displace DLC or become a dominant cooling technology. However, it is finally entering a phase where use‑cases align with its strengths — enabling vendors to build viable businesses around modular, ready‑to‑deploy immersion clusters that “drop in” alongside traditional IT and support workloads that benefit from simplified thermal management and rapid deployment.

 

7. Europe and China wake up — but in very different ways

Europe and China are both poised for stronger AI‑driven data‑center momentum in 2026, but their trajectories could not be more different. In power‑constrained Europe, growth will increasingly hinge on inference deployments located closer to population centers, to minimize network latency (even if compute latency remains the bigger challenge for AI services). This shift toward user‑proximate infrastructure will steer investment toward distributed, high‑density nodes rather than massive gigawatt-scale training campuses. Within this landscape of smaller facilities, a growing cohort of start‑up model builders will prioritize hyper‑efficient architectures that can extract maximum utility from these distributed fleets, for both inference and selective training workflows.

China, by contrast, faces no shortage of power. Its constraint is access to the latest generation of advanced accelerators. We expect operators to continue building at scale using a mix of domestic silicon and whatever Western supply remains available — iterating rapidly as local manufacturers improve capability generation by generation. Over the next few years, this mix‑and‑match strategy will help China bridge the gap until it achieves greater semiconductor self‑sufficiency, resulting in substantial expansion of AI data‑center capacity even under export controls.

The long shots: unlikely swings with outsized impact

Three low-probability but transformative developments, if they emerge, could reshape the data center landscape far more than their probability suggests.

 

8. U.S. government tightens regulation of the data center industry

A push in Washington to encourage investment in advanced cooling technologies — including a proposed bill aimed at accelerating liquid‑cooling adoption — could have unintended consequences. While well‑intentioned, efforts to steer technological choices risk drawing the federal government more directly into data center design decisions, increasing oversight and potentially making infrastructure requirements more rigid at a time when flexibility is essential.

We do not expect sweeping regulation to materialize in 2026. The current administration has closely aligned itself with AI as a pillar of economic competitiveness and will be wary of stymieing data center buildout, especially given its role in supporting GDP growth. Moreover, political attention will be dominated largely by the mid‑term elections, leaving little bandwidth for complex industry‑specific legislation.

However, affordability and household cost pressures are set to become highly charged political themes — and in that environment, data centers may attract negative scrutiny. As utilities grapple with rising demand and public concern around bills, the industry could face a wave of unfavorable headlines and heightened calls for transparency. To mitigate reputational risk, operators will need to invest more heavily in public engagement, clear messaging, and proactive demonstration of their contributions to reliability, economic growth, and community well‑being.

 

9. The first critical liquid-cooling leak hits the headlines

The early wave of liquid-cooled deployments often moved faster than the industry’s collective design and operational expertise. Many systems were installed without fully accounting for the nuances of coolant management, materials compatibility, monitoring, and routine maintenance — conditions that naturally elevate leak risk. Throughout 2025, we saw scattered reports of cluster-level shutdowns tied to liquid-handling failures, but nothing approaching the scale or societal visibility of a major cloud outage.

While we still believe high-profile failures are possible, their broader impact will likely be limited. Despite growing enterprise adoption, most AI systems are not yet embedded deeply enough in critical business processes to trigger widespread disruption. As a result, even a significant leak-related outage is unlikely to spark the kind of global headlines seen after the AWS outage — though it may accelerate industry efforts around standards, training, instrumentation, and risk-mitigation practices.

 

10. The GPU secondary market skyrockets

As hyperscalers and neocloud providers refresh their fleets, early generations of GPUs — notably Ampere- and Hopper-based accelerators — will increasingly face retirement to make room for newer, more efficient architectures. This raises a key question already weighing on investors: what is the real depreciation timeline for AI hardware on hyperscaler balance sheets?
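As a simple illustration of why that question matters, the sketch below applies straight-line depreciation to a hypothetical accelerated server under different useful-life assumptions; the purchase price and the lives themselves are illustrative, not figures from any operator's filings.

```python
# Straight-line depreciation of a hypothetical GPU server under different
# useful-life assumptions. The cost and the lives are illustrative only.

server_cost_usd = 250_000  # assumption: fully configured accelerated server

for useful_life_years in (3, 5, 6):
    annual_expense = server_cost_usd / useful_life_years
    book_value_after_3y = server_cost_usd - 3 * annual_expense
    print(f"{useful_life_years}-year life: ${annual_expense:,.0f}/year expense, "
          f"${book_value_after_3y:,.0f} still on the books after year 3")
```

Stretching the assumed life flatters near-term earnings but leaves more value on the balance sheet to be written down, or recovered in the secondary market, once the hardware is retired.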

We expect most older GPUs to shift into lower‑complexity inference workloads or the training of smaller, less compute‑intensive models. We believe it is still too early for widespread scrapping of full data centers built on these platforms, a move that could flood the secondary market with GPUs looking for another productive life elsewhere.

Enterprise IT environments and colocation providers will see growing volumes of these second‑hand GPUs entering their ecosystems, often at attractive price points. Integrating these “intruders” into general‑purpose, lower‑density compute environments will introduce new operational and thermal challenges. Operators will need to manage concentrated heat loads, non‑uniform rack densities, and power profiles that differ from their conventional estate.

 

The bubbling question we can’t avoid — even if we tried

Speculation about an AI “bubble” has increasingly dominated media narratives throughout 2025, and the conversation is unlikely to quiet down in 2026. It is true that many AI‑adjacent companies are trading at lofty valuations, buoyed by optimism around future adoption and monetization, an optimism that may not prove durable. There is a meaningful possibility that equity markets enter correction territory in 2026, bringing P/E ratios closer to historical norms.

Yet even in a cooling market environment, we do not expect the data‑center buildout to slow materially. Hyperscalers continue to generate ample cash flow to support aggressive infrastructure expansion, and their balance sheets remain low‑leveraged, giving them capacity to secure additional capital if needed. Strategic imperatives will outweigh short‑term market pressure: these companies are locked in a race to establish AI hegemony — or risk being left behind.

In other words, financial markets may wobble, but the underlying drivers of AI infrastructure investment remain intact. The bubble debate will rage on, but the buildout will continue.

 

Looking ahead: embracing another year of acceleration and uncertainty

As with every prediction cycle, only time will reveal which of these dynamics take hold and which fade into the background. What is certain, however, is that 2026 will yet again challenge our assumptions. The pace of AI‑driven infrastructure evolution shows no signs of slowing, and the industry will continue navigating a rare combination of technological disruption, supply‑chain reinvention, and unprecedented demand for capacity.

While we avoid grand year‑end platitudes, it is fair to say that much will change — and much will stay the same. Power will remain the currency of competitiveness, AI will continue to push infrastructure to its limits, and operators and vendors alike will be forced to adapt faster than ever. At Dell’Oro Group, we look forward to tracking, analyzing, and interpreting these shifts as they unfold.

Here’s to a 2026 that will undoubtedly keep all of us in the data‑center world busy — and to the insights that the next twelve months will bring!

AI capacity announcements are multiplying fast—but many overlap, repeat, or overstate what will realistically be built. Untangling this spaghetti means understanding when multiple headlines point to the same capacity and recognizing that delivery timelines matter as much as the billions of dollars and gigawatts announced.

AI is often hailed as a force set to redefine productivity — yet, for now, it consumes much of our time simply trying to make sense of the scale and direction of AI investment activity. Every week brings record-breaking announcements: a new model surpassing benchmarks, another multi-gigawatt data center breaking ground, or one AI firm taking a stake in another. Each adds fuel to the frenzy, amplifying the exuberance that continues to ripple through equity markets.

 

When AI Announcements Become “Spaghetti”

In recent weeks, the industry’s attention has zeroed in on the tangled web of AI cross‑investments, often visualized through “spaghetti charts.” NVIDIA has invested in its customer OpenAI, which, in turn, has taken a stake in AMD — a direct NVIDIA competitor — while also becoming one of AMD’s largest GPU customers. CoreWeave carries a significant investment from NVIDIA, while ranking among its top GPU buyers, and even leasing those same GPUs back to NVIDIA as one of its key compute suppliers. These overlapping stakes have raised questions about governance and prompted déjà vu comparisons with past bubbles. Morgan Stanley’s Todd Castagno captured this dynamic in his now‑famous spaghetti chart, featured in Barron’s and below, which quickly circulated among investors and analysts alike.

Source: Morgan Stanley

 

Why Venn Diagrams Matter More Than Spaghetti Charts

Yet while investors may have reason to worry about these tangled relationships, data center operators, vendors, and analysts should be paying attention to two other kinds of charts: Venn diagrams and Gantt charts.

In our conversations at Dell’Oro Group’s data center practice, we’re consistently asked: “How much of these announced gigawatts are double‑counted?” and “Can the industry realistically deliver all these GWs?” These are the right questions. For suppliers trying to plan capacity and for investors attempting to size the real opportunity, understanding overlap is far more important than tracking every new headline.

When all public announcements are tallied, the theoretical pipeline can easily stretch into the several‑hundred‑gigawatt range — far above what our models suggest will actually be built by 2029. This leads to the core issue: how do we make sense of all these overlapping (and at times even contradictory) announcements?

 

The OpenAI Example: One Company, Multiple Overlapping GW Claims

Consider OpenAI’s recent announcements. A longtime NVIDIA customer, the company committed to deploy 10 GW of NVIDIA systems, followed only weeks later by news of 6 GW of AMD‑based systems and 10 GW of custom accelerators developed with Broadcom. From a semiconductor standpoint, that totals roughly 26 GW of potential IT capacity.

On the data center construction side, however, the math becomes far less clear. OpenAI’s Stargate venture launched earlier this year with plans for 10 GW of capacity in the U.S. over four years — later expanded to include more sites and accelerated timelines.

Its flagship campus in Abilene, Texas, is part of Crusoe’s and Lancium’s Clean Campus development, expected to provide about 1.2 GW of that capacity. The initiative also includes multiple Oracle‑operated sites totaling around 5 GW (including the Crusoe-developed Abilene project, which Oracle will operate for OpenAI, and other sites developed with partners like Vantage Data Centers), plus at least 2 GW in leased capacity from neocloud provider CoreWeave. That leaves roughly 3 GW of U.S. capacity yet to be allocated to specific data center sites.

Assuming Stargate’s full 10 GW materializes domestically, OpenAI’s remaining 16 GW from its 26 GW of chip‑related announcements is still unallocated to specific data center projects. A portion of this may be absorbed by overseas Stargate offshoots in the U.A.E., Norway, and the U.K., generally developed with partners such as G42 and Nscale. These countries are already confirmed locations, but several additional European and Asian markets are widely rumored to be next in line for expansion.
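The arithmetic behind this reconciliation is simple enough to script. The sketch below tallies only the approximate figures cited above; the split of Oracle-operated capacity outside Abilene is inferred from the roughly 5 GW Oracle total and is an assumption, as is treating every announced gigawatt as directly comparable.

```python
# Reconciling OpenAI's chip-side announcements with site-side allocations,
# using only the approximate figures cited in this post. Illustrative only.

chip_announcements_gw = {
    "NVIDIA systems": 10.0,
    "AMD systems": 6.0,
    "Broadcom custom accelerators": 10.0,
}

stargate_us_allocated_gw = {
    "Abilene (Crusoe/Lancium-developed, Oracle-operated)": 1.2,
    "Other Oracle-operated sites (approx., ~5 GW Oracle total)": 3.8,
    "CoreWeave leased capacity (at least)": 2.0,
}

stargate_us_target_gw = 10.0

chips_total = sum(chip_announcements_gw.values())
us_allocated = sum(stargate_us_allocated_gw.values())

print(f"Chip-side announcements:          {chips_total:.1f} GW")
print(f"Stargate U.S. target:             {stargate_us_target_gw:.1f} GW")
print(f"  allocated to named sites:       {us_allocated:.1f} GW")
print(f"  yet to be allocated:            {stargate_us_target_gw - us_allocated:.1f} GW")
print(f"Announced chips beyond Stargate:  {chips_total - stargate_us_target_gw:.1f} GW")
```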

 

Shared Sites, Shared Announcements, Shared Capacity

While OpenAI‑dedicated Stargate sites draw significant attention, the reality is that most of the remaining capacity likely ties back to Microsoft — the model builder’s largest compute partner and major shareholder. Microsoft’s new AI factories, including the Fairwater campus in Wisconsin, have been publicly described as shared infrastructure supporting both Microsoft’s own AI models and OpenAI’s workloads.

Naturally, Microsoft’s multibillion‑dollar capex program has come under close investor scrutiny. But to understand actual capacity expansion, one must ask: how much of this spend ultimately supports OpenAI? Whether through direct capital commitments or via absorbed costs within Azure‑hosted AI services, a meaningful share of Microsoft’s infrastructure buildout will inevitably carry OpenAI’s workloads forward.

Given the size and complexity of these projects, it’s unsurprising that multiple stakeholders — chipmakers, cloud providers, developers, utilities, and investors — announce capacity expansions tied to the same underlying sites.

A clear example is Stargate UAE, which has been unveiled from multiple angles:

Each announcement, viewed in isolation, can sound like a separate multi‑gigawatt initiative. In reality, they describe different facets of the same underlying build. And importantly, this is not unique to Stargate — similar multi‑angle, multi‑announcement patterns are becoming increasingly common across major AI infrastructure projects worldwide. This layered messaging contributes to a landscape where genuine incremental expansion becomes increasingly difficult to differentiate from multiple narratives referring to the same capacity.

Source: Dell’Oro Group’s Analysis

 

Beware the Rise of “Braggerwatts”

If tracking real, shovel‑ready projects weren’t already challenging enough, a newer phenomenon has emerged to further distort expectations: “braggerwatts.”

These headline‑grabbing gigawatt declarations tend to be bold, aspirational, and often untethered from today’s practical constraints. They signal ambition more than bankability. While some may eventually break ground, many originate from firms without sufficient financing — or without the secured power required to energize campuses of this scale. In fact, the absence of power agreements is often the very reason these announcements become braggerwatts: compelling on paper, but unlikely to materialize.

 

Power is the Real Constraint—Not Chips

This leads directly to the most consequential source of uncertainty: power. As Microsoft CEO Satya Nadella put it on the BG2 podcast, “You may actually have a bunch of chips sitting in inventory that I can’t plug in … it’s not a supply issue of chips; it’s actually the fact that I don’t have warm shells to plug into.”

Recent reports from Santa Clara County, Calif., underscored this reality. Silicon Valley Power’s inability to energize new facilities from Digital Realty and STACK Infrastructure revealed just how fragile power‑delivery timelines have become. Developers, competing for scarce grid capacity, increasingly reserve more power across multiple markets than they ultimately intend to use. Nicknamed “phantom data centers” by the Financial Times, these speculative reservations may be a rational hedging strategy — but they also clog interconnection queues and introduce yet another form of double counting.

 

Gantt Charts and Reality Checks

Making sense of real data center capacity is challenging enough when announced timelines compress multi‑year build cycles into optimistic one‑ or two‑year horizons. An even bigger issue is that, while announcements are rich in dollars and gigawatts, they are often strikingly vague about when that capacity will actually be delivered. Several large AI‑era projects have publicized increasingly compressed “time‑to‑token” goals.
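As a simple illustration of why a Gantt-chart view matters, the sketch below phases a hypothetical 1 GW campus in over several years; the phase sizes and timings are invented for illustration and do not describe any specific project.

```python
# Hypothetical phase-in of an "announced" 1 GW campus. Phase sizes and
# timings are illustrative assumptions, not data on any specific project.

announced_gw = 1.0
phases = [  # (quarters after groundbreaking, GW energized in that phase)
    (4, 0.2), (6, 0.2), (8, 0.3), (10, 0.3),
]

energized = 0.0
for quarter, gw in phases:
    energized += gw
    print(f"Q+{quarter:>2}: +{gw:.1f} GW energized -> "
          f"{energized:.1f} of {announced_gw:.1f} GW announced")
```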

Recent mapping by nonprofit Epoch.AI, below, illustrates highly ambitious timelines to the first gigawatt of capacity. Yet the reality is far more measured. Most hyperscale and AI‑focused campuses are expected to phase in capacity over multiple years to manage engineering complexity, navigate permitting, and align with the risk tolerance of investors financing these developments.

Source: EPOCH AI

 

True Modeling Requires Ground-Truth Data—Not Hype

Ultimately, this creates a disconnect between what is announced and what is genuinely achievable. Understanding true data center growth requires cutting through overlapping announcements, aspirational gigawatt claims, and speculative power reservations. By grounding expectations in semiconductor shipment volumes, verifiable construction progress, and secured power commitments, the industry can move beyond headline noise and toward an accurate view of the capacity that is truly on the way.


Across hyperscalers and sovereign clouds alike, the race is shifting from model supremacy alone to infrastructure supremacy. The real differentiation now lies in how efficiently GPUs can be interconnected and utilized. As AI clusters scale beyond anything traditional data center networking was built for, the question is no longer “how fast can you train?” but “can your network keep up?” This is where Optical Circuit Switches (OCS) and Optical Cross-Connects (OXC), technologies used in wide area networks for decades, enter the conversation.

The Network is the Computer for AI Clusters

The new age of AI reasoning is ushering in three new scaling laws—spanning pre-training, post-training, and test-time scaling—that together are driving an unprecedented surge in compute requirements. At GTC 2025, Jensen Huang stated that demand for compute is now 100× higher than what was predicted just a year ago. As a result, the size of AI clusters is exploding, even as the industry aggressively pursues efficiency breakthroughs—what many now refer to as the “DeepSeek moment” of AI deployment optimization.

As the chart illustrates, AI clusters are rapidly scaling from hundreds of thousands of GPUs to millions of GPUs. Over the next five years, roughly 124 gigawatts of capacity is expected to be brought online, equivalent to more than 70 million GPUs. In this reality, the network will play a key role in connecting those GPUs in the most optimized, efficient way. The network is the computer for AI clusters.
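Those two figures imply an all-in power budget per deployed GPU that is worth a quick sanity check. In the sketch below, the PUE and the share of rack power attributable to the accelerator itself are illustrative assumptions, not Dell'Oro data.

```python
# Sanity-checking the figures above: ~124 GW of capacity for ~70 M GPUs.
# The PUE and the GPU share of IT load are illustrative assumptions.

capacity_gw = 124
gpus_millions = 70

watts_per_gpu_all_in = capacity_gw * 1e9 / (gpus_millions * 1e6)
print(f"All-in facility power per GPU: ~{watts_per_gpu_all_in:,.0f} W")

assumed_pue = 1.2             # assumption: facility overhead on top of IT load
gpu_share_of_it_load = 0.75   # assumption: remainder goes to CPUs, memory, network

watts_at_accelerator = watts_per_gpu_all_in / assumed_pue * gpu_share_of_it_load
print(f"Implied power at the accelerator itself: ~{watts_at_accelerator:,.0f} W")
```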

 

Challenges in Operating Large Scale AI Clusters

As shown in the chart above, the number of interconnects grows far faster than the number of GPUs. This rapid increase drives significant cost, power consumption, and latency. It is not just the number of interconnects that is exploding—the speed requirements are rising just as aggressively. AI clusters are fundamentally network-bound, which means the network must operate at nearly 100 percent efficiency to fully utilize the extremely expensive GPU resources.
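A rough way to see how quickly link counts climb is to count one link per endpoint per switching tier in a full-bisection folded-Clos fabric. The sketch below is an approximation under that assumption and ignores oversubscribed or rail-optimized designs.

```python
# Approximate link count for a non-blocking folded-Clos (fat-tree) back-end
# fabric: each switching tier adds roughly one link per endpoint, so
# links ~ tiers x GPUs. Oversubscription and rail-optimized designs ignored.

def estimate_links(num_gpus: int, tiers: int) -> int:
    """Host links plus inter-switch links for a full-bisection fabric."""
    return tiers * num_gpus

for gpus in (100_000, 500_000, 1_000_000):
    print(f"{gpus:>9,} GPUs: ~{estimate_links(gpus, 2):>9,} links (2-tier), "
          f"~{estimate_links(gpus, 3):>9,} links (3-tier)")
```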

Another major factor is the refresh cadence. AI back-end networks are refreshed roughly every two years or less, compared to about five years in traditional front-end enterprise environments. As a result, speed transitions in AI data centers are happening at almost twice the pace of non-accelerated infrastructure.

Looking at switch port shipments in AI clusters, we expect the majority of ports in 2025 will be 800 Gbps. By 2027, the majority will have transitioned to 1.6 Tbps, and by 2030, most ports are expected to operate at 3.2 Tbps. This progression implies that the data center network’s electrical layer will need to be replaced at each new bandwidth generation—a far more aggressive upgrade cycle than what the industry has historically seen in front-end, non-accelerated infrastructure.

 

 

The Potential Role of OCS in AI Clusters

Optical Circuit Switches (OCS) or Optical Cross-Connects (OXC) are network devices that establish direct, light-based optical paths between endpoints, bypassing the traditional packet-switched routing pipeline to deliver near-zero-latency connectivity with massive bandwidth efficiency. Google was the first major hyperscaler to deploy OCS at scale nearly a decade ago, using it to dynamically rewire its data center topology in response to shifting workload patterns and to reduce reliance on power-hungry electrical Ethernet fabrics.

A major advantage of OCS is that it is fundamentally speed-agnostic—because it operates entirely in the optical domain, it does not need to be upgraded each time the industry transitions from 400 Gbps to 800 Gbps to 1.6 Tbps or beyond. This stands in stark contrast to traditional electrical switching layers, which require constant refreshes as link speeds accelerate. OCS also eliminates the need for optical-electrical-optical (O-E-O) conversion, enabling pure optical forwarding that not only reduces latency but also dramatically lowers power consumption by avoiding the energy cost of repeatedly converting photons to electrons and back again.
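The power argument can be framed as a simple per-hop comparison. In the sketch below, the transceiver and per-port switch wattages, and the near-zero OCS hold power, are rough illustrative assumptions rather than vendor specifications.

```python
# Illustrative per-hop power comparison: traversing an electrical packet
# switch (O-E-O) vs. an all-optical OCS. All wattages are assumptions.

watts_per_800g_transceiver = 15.0   # assumption
switch_asic_watts_per_port = 6.0    # assumption: ASIC/SerDes share per port
ocs_watts_per_port = 0.1            # assumption: MEMS mirror hold power

electrical_hop_watts = 2 * watts_per_800g_transceiver + switch_asic_watts_per_port
print(f"Electrical hop (O-E-O): ~{electrical_hop_watts:.0f} W (two transceivers + ASIC share)")
print(f"OCS hop (all-optical):  ~{ocs_watts_per_port:.1f} W (no transceivers at the OCS)")
```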

The combined benefit is a scalable, future-proof, ultra-efficient interconnect fabric that is uniquely suited for AI and high-performance computing (HPC) back-end networks, where east-west traffic is unpredictable and bandwidth demand grows faster than Moore’s Law. As AI workload intensity surges, OCS is being explored as a way to optimize the network.

 

OCS is a Proven Technology

Using an OCS in a network is not new. It was, however, called by different names over the past three decades: OOO Switch, all-optical switch, optical switch, and optical cross-connect (OXC). Currently, the most popular term for these systems used in data centers is OCS.

OCS has been used in the wide area network (WAN) for many years to solve a similar problem set, and for many of the same reasons: tier-one operators worldwide, facing the strictest performance and reliability requirements, have relied on OCSs in their carrier networks for over a decade. Additionally, the underlying optical technologies, both MEMS and LCOS, have been widely deployed in carrier networks and have operated without fault for even longer. Stated another way, OCS is based on field-proven technology.

Whether used in a data center or to scale across data centers, an OCS offers several benefits that translate into lower costs over time.

To address the specific needs of AI data centers, companies have launched new OCS products. The following is a list of the products available in the market:

 

Final Thought

AI infrastructure is diverging from conventional data center design at an unprecedented pace, and the networks connecting GPUs must evolve even faster than the GPUs themselves. OCS is not an exotic research architecture; it is a proven technology that is ready to be explored and considered for use in AI networks as a way to differentiate and evolve them to meet the stringent requirements of large AI clusters.

From NVIDIA’s 800Vdc power architecture to the open Deschutes CDU standard, this year’s OCP Summit highlighted breakthroughs across the full spectrum of power, cooling, and rack technologies shaping AI data centers.

 

The Open Compute Project (OCP), founded in 2011 to promote open, efficient data center design, has become the leading forum shaping AI‑era infrastructure. Now a focal point for next‑generation discussions on power, cooling, and rack and server architecture, its annual Global Summit was held last week in San Jose, Calif., drawing more than 10,000 participants. The non‑profit’s reach continues to expand through new subprojects that broaden its scope across data center systems. The clearest signal of its growing influence came with the announcement that NVIDIA would join its board—a move underscoring how even the industry’s pace‑setter sees value in aligning more closely with the organization.

Among the most pivotal technological developments, NVIDIA provided deeper detail on its 800Vdc power distribution architecture for data centers, adding substance to a disruptive concept first hinted at in a May blog post. This triggered a wave of announcements from power and component suppliers: Vertiv previewed new products expected next year; Eaton introduced a new reference design; Flex expanded its AI infrastructure platform; Schneider Electric unveiled an 800Vdc sidecar rack; ABB announced new DC power products leveraging its solid‑state expertise; Legrand deepened its focus on OCP‑based power and rack solutions; and Texas Instruments introduced new power management chips.

Comparison between current (top) and proposed 800 Vdc power architecture (bottom) in May 2025 (Source: NVIDIA blog)

 

Comparison between current and proposed 800 Vdc power architectures in October 2025 (Source: NVIDIA blog)

 

After years of liquid cooling dominating headlines as the defining innovation in data center design, power distribution has now taken center stage. Roadmaps point to accelerated compute racks exceeding 500 kW per cabinet, introducing new challenges for delivering power efficiently to AI clusters. NVIDIA’s proposed solution marks a decisive break from conventional 415/480 V AC layouts, moving toward a higher-voltage DC (800 Vdc) bus spanning the whitespace and fed directly from a single step‑down switchgear integrated with a solid‑state transformer connected to utility and microgrid systems.
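The motivation for the higher-voltage bus is easiest to see in the current it must carry. The sketch below compares busbar current for a 500 kW rack under different distribution voltages; the unity power factor and the 54 Vdc in-rack comparison row are illustrative assumptions.

```python
# Busbar current for a ~500 kW rack at different distribution voltages,
# illustrating why higher-voltage DC eases conductor sizing. Unity power
# factor and the 54 Vdc comparison row are illustrative assumptions.

import math

rack_kw = 500.0

def three_phase_amps(kw: float, volts_line_to_line: float, pf: float = 1.0) -> float:
    return kw * 1000 / (math.sqrt(3) * volts_line_to_line * pf)

def dc_amps(kw: float, volts: float) -> float:
    return kw * 1000 / volts

print(f"{rack_kw:.0f} kW rack:")
print(f"  415 Vac three-phase: ~{three_phase_amps(rack_kw, 415):,.0f} A")
print(f"  800 Vdc bus:         ~{dc_amps(rack_kw, 800):,.0f} A")
print(f"  54 Vdc in-rack bus:  ~{dc_amps(rack_kw, 54):,.0f} A")
```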

This transition represents a major architectural shift, though it will unfold gradually. Hybrid deployments bridging existing AC systems with 800 Vdc designs are expected to dominate in the coming years. These transitional architectures will rely on familiar 415/480 Vac power distribution feeding whitespace sidecar units, which will step up and rectify voltage to 800 Vdc, in order to supply adjacent high‑performance racks.

Despite speculation that UPS systems, PDUs, power shelves, and BBUs may become obsolete, these interim designs will continue to sustain demand for such equipment for the foreseeable future. By 2027, when Rubin Ultra chips are expected to reach the market, greater clarity around the end‑state architecture should emerge, and collaboration across the ecosystem will bring novel solutions to market. Significant progress is expected in the design and scalable manufacturing of solid‑state transformers (SSTs), DC breakers, on‑chip power conversion, and other solutions enabling purpose‑built AI factories to fully capitalize on the efficiency of these new architectures.

Many of these technologies are already under development. ABB’s DC circuit breaker portfolio, while rooted in industrial applications, provides a solid foundation but must evolve to meet the needs of a new customer segment, alongside its solid‑state MV UPS offering. Vertiv and Schneider Electric—industry heavyweights whose announcements offered only high‑level previews of future solutions—are accelerating product development to address these evolving requirements and still have ample time to do so. Eaton stood out as one of the few vendors demonstrating a functional power sidecar unit at OCP, showcasing tangible progress in this emerging architecture and reinforcing its position through expertise in SSTs gained from the acquisition of Resilient Power.

While suppliers are expected to adapt swiftly to new demands, regulatory bodies responsible for guiding the design and safe operation of power solutions, such as the NFPA, often move at a slower pace than the market. Codes and standards will need to evolve accordingly, and uncertainty in this area could become a key obstacle to the broader adoption of cutting-edge higher-voltage designs.

Although power has dominated recent discussions, liquid cooling sessions remained highly popular at OCP. I even found myself standing in a packed room for what I assumed would be a niche discussion on turbidity and electrical conductivity measurements in glycol fluids. Yet, the most significant development in this area was the introduction of the open‑standard Deschutes CDU. With the new specification expected to attract additional entrants to the market, our preliminary research—initially counting just over 40 CDU manufacturers—has quickly become outdated, with over 50 companies now in our mapping. However, new entrants continue facing the same challenges: while a CDU may appear to be just pipes, pumps, and filters, the true differentiation lies in system design expertise and intelligent controls—capabilities that remain difficult to replicate.

CDUs following the Deschutes design showcased at OCP Global Summit ’25 by Boyd and Envicool (Source: Dell’Oro Group)

 

These trends underscore OCP’s growing role as the launchpad for the next generation of data center design, bringing breakthrough technologies to the forefront. This year’s discussions—from higher-voltage DC power to open liquid cooling—are shaping the blueprint for the next generation of AI factories. These architectures point toward a new model for hyperscale infrastructure, the result of collaboration among hyperscalers themselves, chipmakers, infrastructure specialists, and system integrators. Much remains in flux, with further developments expected leading into SC25 and NVIDIA GTC 2026. Stay tuned, and connect with us at Dell’Oro Group to explore our latest research or discuss these trends defining the data center of the future.


With around 40 vendors rushing into coolant distribution units, liquid cooling is surging—but how many players can the market sustain?

The AI supercycle is not just accelerating compute demand—it’s transforming how we power and cool data centers. Modern AI accelerators have outgrown the limits of air cooling. The latest chips on the market—whether from NVIDIA, AMD, Google, Amazon, Cerebras, or Groq—all share one design assumption: they are built for liquid cooling. This shift has catalyzed a market transformation, unlocking new opportunities across the physical infrastructure stack.

While the concept of liquid cooling is not new—IBM was water-cooling System/360 mainframes in the 1960s—it is only now, in the era of hyperscale AI, that the technology is going truly mainstream. According to Dell’Oro Group’s latest research, the Data Center Direct Liquid Cooling (DLC) market surged 156 percent year-over-year in 2Q 2025 and is projected to reach close to $6 billion by 2029, fueled by the relentless growth of accelerated computing workloads.

As with any fast-growing market, this surge is attracting a flood of new entrants, each aiming to capture a piece of the action. Oil majors are introducing specialized cooling fluids, and thermal specialists from the PC gaming world are pivoting into cold plate solutions. But one product category in particular has become a hotbed of competition: coolant distribution units (CDUs).

 

What’s a CDU and Why Does It Matter?

CDUs act as the hydraulic heart of many liquid cooling systems.

Sitting between facility water and the cold plates embedded in IT systems, these units regulate flow, pressure, and temperature, while providing isolation, monitoring, and often redundancy.

As direct-to-chip liquid cooling becomes a design default for high-density racks, the CDU becomes a mission-critical mainstay for modern data centers.
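A rough sense of the scale involved comes from the basic heat-balance relationship between flow rate and temperature rise. In the sketch below, the coolant properties are water-like and the loads and the 10 C rise are illustrative assumptions, not vendor specifications.

```python
# Rough CDU sizing from a heat balance: coolant flow required so that
# mass_flow * cp * dT removes the load. Water-like coolant assumed; the
# loads and the 10 C temperature rise are illustrative.

def required_flow_lpm(heat_kw: float, delta_t_c: float,
                      cp_kj_per_kg_c: float = 4.18,
                      density_kg_per_l: float = 1.0) -> float:
    kg_per_s = heat_kw / (cp_kj_per_kg_c * delta_t_c)
    return kg_per_s / density_kg_per_l * 60

for load_kw in (120, 500, 1500):   # roughly rack-, row-, and skid-scale loads
    print(f"{load_kw:>5} kW at a 10 C rise: ~{required_flow_lpm(load_kw, 10):,.0f} L/min")
```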

 

At Dell’Oro, we have been tracking this market from its early stages, anticipating the shift of liquid cooling from niche to necessity. Our ongoing research has already identified around 40 companies with CDUs within their product portfolios, ranging from global powerhouses to nimble specialists. The sheer number of players raises an important question: is the CDU market becoming overcrowded?

 

Who is currently in the CDU market?

The CDU market is being shaped by players from a wide variety of backgrounds. Some excel in rack system integration, others in high-performance engineering, and others in manufacturing and scalability prowess. The variety of approaches reflects the diversity of the players themselves—each entering the market from a different starting point, with distinct technical DNA and go-to-market strategies.

Below is a snippet of our CDU supplier map—only a sample of our research to be featured in Dell’Oro’s upcoming Data Center Liquid Cooling Advanced Research Report, expected to be published in 4Q 2025. Our list of CDU vendors is constantly refreshed—it has only been three weeks since the latest launch by a major player, with Johnson Controls announcing its new Silent-Aire series of CDUs.

Not all companies in this list have arrived here organically. The momentum in the CDU market has also fueled a wave of M&A and strategic partnerships. Unsurprisingly, the largest moves have been led by physical infrastructure giants eager to secure a position, as was the case with Vertiv’s acquisition of CoolTera in December 2023 and Schneider Electric’s purchase of Motivair in October 2024.

Beyond these headline deals, several diversified players have taken stakes in thermal specialists—for example, Samsung’s acquisition of FläktGroup and Carrier’s investment in two-phase specialist Zutacore. Private equity has also entered the fray, most notably with KKR’s acquisition of CoolIT. Together, these moves underscore the growing strategic importance of CDU capabilities, even if not every partnership is directly tied to them.

 

Who will win in the CDU market?

Our growth projections are robust, and there is room for multiple vendors to thrive. In the short to medium term, we still expect to see new entrants. Innovators are likely to emerge, developing technologies to address the relentless thermal demands of AI workloads, while nimble players will be quick to capture share in underserved geographies and verticals. Established names such as Vertiv, CoolIT, or Boyd will need to maintain their edge as data center designs and market dynamics evolve.

By the end of the decade, we expect the supply landscape to consolidate as the market matures and capital shifts toward other growth segments. Consolidation and exits are inevitable. We expect fewer than 10 vendors to ultimately capture the lion’s share of the market, with the remainder assessing the minimum scale needed to operate sustainably while meeting shareholder expectations—or exiting altogether.

Who will win? There is no single path to success, as data center operators and their applications remain highly diverse. For instance, some had forecast the demise of the in-rack CDU as a subscale solution misaligned with soaring system capacity requirements. Many operators, however, continue to find value in this form factor. Slightly lower partial power usage effectiveness (pPUE) can be offset by advantages in modularity, ease of off-site rack integration and commissioning, and containment of faults and leaks.
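For readers less familiar with the metric, partial PUE simply ratios the IT load plus the cooling power attributable to it against the IT load alone. The wattages in the sketch below are illustrative assumptions, not measurements of any product.

```python
# Partial PUE (pPUE) of a liquid-cooling subsystem: (IT + cooling) / IT.
# The pump/control wattages below are illustrative assumptions.

def ppue(it_kw: float, cooling_kw: float) -> float:
    return (it_kw + cooling_kw) / it_kw

rack_it_kw = 120.0
print(f"In-rack CDU:   pPUE ~ {ppue(rack_it_kw, cooling_kw=6.0):.3f}")
print(f"Row-level CDU: pPUE ~ {ppue(rack_it_kw, cooling_kw=4.0):.3f}")
```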

Similarly, liquid-to-air (L2A) systems were often described as a transitional technology destined to be quickly superseded by more efficient liquid-to-liquid (L2L) solutions. Yet L2A CDUs have maintained a role even with large operators—ideal for retrofit projects in sites heavily constrained by legacy design choices, with accelerated computing racks operating alongside conventional workloads.

In-rack CDUs, L2A solutions, and other design variations will continue to play a role in a market that is rapidly evolving. GPU requirements are rising year after year, and liquid cooling systems are advancing in step with the capacity demands of next-generation AI clusters. Amid this market flux, several factors are emerging as critical for success.

First, CDUs are not standalone equipment: they are an integral element of a cooling system. Successful vendors take a system-level approach, anticipating challenges across the deployment and leveraging the CDU as hardware tightly integrated with multiple elements to ensure seamless operation. Vendors with proven track records and large installed bases—spanning multiple gigawatts—enjoy an advantage in this regard, as their experience positions them to function as a partner and advisor to their customers, rather than a mere vendor.

Second, success is not just about having the right product—it is about understanding the problem the customer needs solved and developing suitable solutions. Operators face diverse challenges, and a single fleet may need everything from small in-rack CDUs to customized L2A units or even fully skidded multi-megawatt systems. Breadth of portfolio helps hedge across deployment types, but it is not the only path to success. Vendors with a sharp edge in specific technologies can also capture meaningful share.

Lastly, scale and availability are often decisive. As builders race to deliver more compute capacity, short equipment lead times can create opportunities for nimble challengers. Availability goes beyond hardware—it also requires skilled teams to design, commission, and maintain CDUs across global sites, including remote locations outside traditional data center hubs.

As the market evolves, one key question looms: which vendors will adapt and emerge as leaders in this critical segment of the AI infrastructure stack? The answer will shape not just the CDU landscape, but the broader liquid cooling market. We will be following this closely in Dell’Oro’s upcoming Data Center Liquid Cooling Advanced Research Report, expected in 4Q 2025, in which we provide deeper analysis into these dynamics and the broader liquid cooling ecosystem.