
Nokia has a plan to reverse its declining RAN revenue share trajectory—and NVIDIA is now a significant part of that plan. What does this mean for the RAN market? After an intense month of updates from NVIDIA’s GTC and Nokia’s Capital Markets Day (CMD), this is an opportune moment to review the scope of the Nokia–NVIDIA announcements, the potential RAN implications of their partnership, and Nokia’s broader RAN strategy.

A quick recap of NVIDIA’s entry into RAN: Based on the announcement and subsequent discussions, our understanding is that NVIDIA will invest $1 B in Nokia and that NVIDIA-powered AI-RAN products will be incorporated into Nokia’s RAN portfolio starting in 2027 (with trials beginning in 2026). While RAN compute—which represents less than half of the $30 B+ RAN market—is immaterial relative to NVIDIA’s $4+ T market cap, the potential upside becomes more meaningful when viewed in the context of NVIDIA’s broader telecom ambitions and its $165 B in trailing-twelve-month revenue.

Source: Nokia

Perhaps more importantly, both Nokia and NVIDIA appear aligned on the role that telecom networks and assets will play as we move deeper into the AI era. Both companies broadly believe that AI will transform society—enabling robots, self-driving cars, humanoids, and digital twins for manufacturing, among other use cases. NVIDIA envisions a future in which everything that moves will be autonomous. But achieving this requires transforming the network from a simple connectivity pipe into a distributed computing platform that functions as an AI grid.

Since this is not NVIDIA’s first attempt to enter the RAN market, it is worth noting that a key difference from prior efforts is a more pragmatic approach. Nokia is acutely aware of its customers’ risk profiles—operators cannot justify ROI based on unknowns. This time, the target is parity with its existing RAN in terms of performance, power, and TCO. Multi-tenancy and potential new revenue streams are certainly attractive, but they are not prerequisites—the ROI must stand on its own on a RAN-only basis.

Source: Nokia

 

Given the size of Nokia’s 1 M+ BTS installed base, there are currently three high-level paths to transition towards NVIDIA’s GPU/AI-RAN, listed here in order of importance/projected shares: 1) Purpose-built D-RAN (add card into existing AirScale slots), 2) D-RAN vRAN (COTS at cell site), 3) C-RAN vRAN (centralized COTS).

Considering that the macro-RAN market—including both baseband and radio—totals around $30 B annually and suppliers ship 1–2 M macros per year, it is clear that carriers have limited appetite to spend $10+ K on a GPU, even if the software model could yield additional benefits over time. NVIDIA and Nokia will likely provide more details on performance and hopefully pricing soon. For now, NVIDIA has indicated that the GPU optimized for D-RAN will be priced similarly to the ARC-Compact, while delivering roughly twice the capacity. Nokia, meanwhile, is targeting further margin improvement; during its CMD, the company stated that the new Mobile Infrastructure BU is aiming for a 48%–50% gross margin by 2028, up from 48% for the 4Q24–3Q25 period.
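To make the economics behind that "limited appetite" point concrete, the rough arithmetic below uses only the figures cited above; the 1.5 M shipment midpoint and the $10 K GPU price are assumptions for illustration, not Dell'Oro estimates.

```python
# Back-of-the-envelope arithmetic behind the "limited appetite" point, using
# rounded figures from this post. Illustrative only, not a forecast.

macro_ran_market_usd = 30e9   # annual macro-RAN market (baseband + radio)
macro_shipments = 1.5e6       # assumed midpoint of the 1-2 M macros shipped per year
gpu_price_usd = 10e3          # assumed GPU cost per baseband site ("$10+ K")

avg_revenue_per_macro = macro_ran_market_usd / macro_shipments   # ~ $20 K
gpu_share_of_site = gpu_price_usd / avg_revenue_per_macro        # ~ 50%

print(f"Average revenue per macro site: ~${avg_revenue_per_macro:,.0f}")
print(f"A $10 K GPU would add ~{gpu_share_of_site:.0%} on top of today's per-site spend")
```

Under these assumptions, a GPU priced at $10 K would represent roughly half of the average revenue currently generated by an entire macro site, which is why parity pricing and RAN-only ROI matter so much.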

If the TCO and performance-per-watt gap with custom silicon continues to narrow, this partnership could have meaningful implications across multiple RAN domains. Beyond strengthening Nokia’s financial position, it also provides momentum for both the AI-RAN and Cloud-RAN movements. While the AI-RAN train had already left the station—and was expected to scale significantly in the second half of the 5G cycle, propelling AI-RAN to account for around a third of RAN by 2029, even before this announcement—Nokia’s decision to lean further into GPUs will only reinforce this trend.

Since Nokia’s customers want to leverage their existing AirScale investments, the D-RAN option using empty AirScale slots is expected to dominate in the near term. At the same time, this partnership is unlikely to materially affect the C-RAN vs. D-RAN mix, Open RAN adoption, or the growth prospects for multi-tenancy RAN. The shift toward GPUs is also unlikely to alter the broader 6G trajectory.

However, it could influence vendor dynamics. Nokia remains optimistic that it can reverse its RAN share trajectory, which had been trending downward over an extended period until recently. During its November 2025 CMD, the company outlined plans to stabilize its RAN business in the near term and position itself for long-term growth. As we have highlighted in our quarterly RAN coverage, the market is becoming increasingly concentrated and polarized, and vendors must determine how best to maximize their chances of winning while navigating the inherent trade-offs (the top five suppliers accounted for 96% of the 1Q25–3Q25 RAN market).

Rather than chasing volume in markets that are open to all suppliers, Nokia plans to remain disciplined and focus on areas where it can differentiate and unlock value—particularly through software and faster innovation cycles via its recently announced partnership with NVIDIA. The company sees meaningful opportunities to capture incremental share in North America, Europe, India, and select APAC markets. And it is already off to a solid start—we estimate that Nokia’s 1Q25–3Q25 RAN revenue share outside North America improved slightly relative to 2024. Following this stabilization phase, Nokia is betting that its investments will pay off and that it will be well-positioned to lead with AI-native networks and 6G.

Source: Nokia

 

In other words, the objective is stability in the near term and growth over the long term. It is now up to Nokia and NVIDIA to execute.

AI capacity announcements are multiplying fast—but many overlap, repeat, or overstate what will realistically be built. Untangling this spaghetti means understanding when multiple headlines point to the same capacity and recognizing that delivery timelines matter as much as the billions of dollars and gigawatts announced.

AI is often hailed as a force set to redefine productivity — yet, for now, much of our time is consumed simply trying to make sense of the scale and direction of AI investment activity. Every week brings record-breaking announcements: a new model surpassing benchmarks, another multi-gigawatt data center breaking ground, or one AI firm taking a stake in another. Each adds fuel to the frenzy, amplifying the exuberance that continues to ripple through equity markets.

 

When AI Announcements Become “Spaghetti”

In recent weeks, the industry’s attention has zeroed in on the tangled web of AI cross‑investments, often visualized through “spaghetti charts.” NVIDIA has invested in its customer OpenAI, which, in turn, has taken a stake in AMD — a direct NVIDIA competitor — while also becoming one of AMD’s largest GPU customers. CoreWeave carries a significant investment from NVIDIA, while ranking among its top GPU buyers, and even leasing those same GPUs back to NVIDIA as one of its key compute suppliers. These overlapping stakes have raised questions about governance and prompted déjà vu comparisons with past bubbles. Morgan Stanley’s Todd Castagno captured this dynamic in his now‑famous spaghetti chart, featured in Barron’s and below, which quickly circulated among investors and analysts alike.

Source: Morgan Stanley

 

Why Venn Diagrams Matter More Than Spaghetti Charts

Yet while investors may have reason to worry about these tangled relationships, data center operators, vendors, and analysts should be paying attention to two other kinds of charts: Venn diagrams and Gantt charts.

In our conversations at Dell’Oro Group’s data center practice, we’re consistently asked: “How much of these announced gigawatts are double‑counted?” and “Can the industry realistically deliver all these GWs?” These are the right questions. For suppliers trying to plan capacity and for investors attempting to size the real opportunity, understanding overlap is far more important than tracking every new headline.

When all public announcements are tallied, the theoretical pipeline can easily stretch into the several‑hundred‑gigawatt range — far above what our models suggest will actually be built by 2029. This leads to the core issue: how do we make sense of all these overlapping (and at times even contradicting) announcements?

 

The OpenAI Example: One Company, Multiple Overlapping GW Claims

Consider OpenAI’s recent announcements. A longtime NVIDIA customer, the company committed to deploying 10 GW of NVIDIA systems, followed only weeks later by news of 6 GW of AMD-based systems and 10 GW of custom accelerators developed with Broadcom. From a semiconductor standpoint, that totals roughly 26 GW of potential IT capacity.

On the data center construction side, however, the math becomes far less clear. OpenAI’s Stargate venture launched earlier this year with plans for 10 GW of capacity in the U.S. over four years — later expanded to include more sites and accelerated timelines.

Its flagship campus in Abilene, Tex. is part of Crusoe’s and Lancium’s Clean Campus development, expected to provide about 1.2 GW of that capacity. The initiative also includes multiple Oracle‑operated sites totaling around 5 GW (including the Crusoe-developed Abilene project, which Oracle will operate for OpenAI, and other sites developed with partners like Vantage Data Centers), plus at least 2 GW in leased capacity from neocloud provider CoreWeave. That leaves roughly 3 GW of U.S. capacity yet to be allocated to specific data center sites.

Assuming Stargate’s full 10 GW materializes domestically, OpenAI’s remaining 16 GW from its 26 GW of chip‑related announcements is still unallocated to specific data center projects. A portion of this may be absorbed by overseas Stargate offshoots in the U.A.E., Norway, and the U.K., generally developed with partners such as G42 and Nscale. These countries are already confirmed locations, but several additional European and Asian markets are widely rumored to be next in line for expansion.
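To make the overlap arithmetic explicit, the short tally below restates the figures cited above. The groupings are simplifications of public announcements, and the values are the same rounded estimates used in this post, not a reconciled accounting.

```python
# Illustrative tally of OpenAI-related gigawatt announcements, using only the
# approximate figures cited in this post. Rounded public estimates, not forecasts.

# Chip-side announcements (GW of accelerator systems)
chip_announcements = {
    "NVIDIA systems": 10,
    "AMD systems": 6,
    "Broadcom custom accelerators": 10,
}
chip_total = sum(chip_announcements.values())          # ~26 GW

# Data-center-side allocations under Stargate U.S. (GW of facility capacity).
# Note the overlap: the ~1.2 GW Abilene campus already sits inside the ~5 GW
# of Oracle-operated sites, so it must not be counted a second time.
stargate_us_target = 10
allocated = {
    "Oracle-operated sites (incl. 1.2 GW Abilene)": 5,
    "CoreWeave leased capacity": 2,
}
allocated_total = sum(allocated.values())              # ~7 GW
unallocated_us = stargate_us_target - allocated_total  # ~3 GW

# Even if all 10 GW of Stargate U.S. is built, the chip-side announcements
# imply considerably more capacity that is not yet tied to named sites.
unallocated_chips = chip_total - stargate_us_target    # ~16 GW

print(f"Chip-side announcements:         {chip_total} GW")
print(f"Stargate U.S. still unallocated:  {unallocated_us} GW")
print(f"Chip GW with no named site:      {unallocated_chips} GW")
```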

 

Shared Sites, Shared Announcements, Shared Capacity

While OpenAI‑dedicated Stargate sites draw significant attention, the reality is that most of the remaining capacity likely ties back to Microsoft — the model builder’s largest compute partner and major shareholder. Microsoft’s new AI factories, including the Fairwater campus in Wisconsin, have been publicly described as shared infrastructure supporting both Microsoft’s own AI models and OpenAI’s workloads.

Naturally, Microsoft’s multibillion‑dollar capex program has come under close investor scrutiny. But to understand actual capacity expansion, one must ask: how much of this spend ultimately supports OpenAI? Whether through direct capital commitments or via absorbed costs within Azure‑hosted AI services, a meaningful share of Microsoft’s infrastructure buildout will inevitably carry OpenAI’s workloads forward.

Given the size and complexity of these projects, it’s unsurprising that multiple stakeholders — chipmakers, cloud providers, developers, utilities, and investors — announce capacity expansions tied to the same underlying sites.

A clear example is Stargate UAE, which has been unveiled from multiple angles by the various partners involved in the project.

Each announcement, viewed in isolation, can sound like a separate multi‑gigawatt initiative. In reality, they describe different facets of the same underlying build. And importantly, this is not unique to Stargate — similar multi‑angle, multi‑announcement patterns are becoming increasingly common across major AI infrastructure projects worldwide. This layered messaging contributes to a landscape where genuine incremental expansion becomes increasingly difficult to differentiate from multiple narratives referring to the same capacity.

Source: Dell’Oro Group’s Analysis

 

Beware the Rise of “Braggerwatts”

If tracking real, shovel‑ready projects weren’t already challenging enough, a newer phenomenon has emerged to further distort expectations: “braggerwatts.”

These headline‑grabbing gigawatt declarations tend to be bold, aspirational, and often untethered from today’s practical constraints. They signal ambition more than bankability. While some may eventually break ground, many originate from firms without sufficient financing — or without the secured power required to energize campuses of this scale. In fact, the absence of power agreements is often the very reason these announcements become braggerwatts: compelling on paper, but unlikely to materialize.

 

Power is the Real Constraint—Not Chips

This leads directly to the most consequential source of uncertainty: power. As Microsoft CEO Satya Nadella put it on the BG2 podcast, “You may actually have a bunch of chips sitting in inventory that I can’t plug in … it’s not a supply issue of chips; it’s actually the fact that I don’t have warm shells to plug into.”

Recent reports from Santa Clara County, Calif. underscored this reality. Silicon Valley Power’s inability to energize new facilities from Digital Realty and STACK Infrastructure revealed just how fragile power‑delivery timelines have become. Developers, competing for scarce grid capacity, increasingly reserve more power across multiple markets than they ultimately intend to use. Nicknamed “phantom data centers” by the Financial Times, these speculative reservations may be a rational hedging strategy — but they also clog interconnection queues and introduce yet another form of double counting.

 

Gantt Charts and Reality Checks

Making sense of real data center capacity is challenging enough when announced timelines compress multi-year build cycles into optimistic one- or two-year horizons. An even bigger issue is that, while announcements are rich in dollars and gigawatts, they are often strikingly vague about when this capacity will actually be delivered. Several large AI-era projects have publicized increasingly compressed “time-to-token” goals.

Recent mapping by nonprofit Epoch.AI, below, illustrates highly ambitious timelines to the first gigawatt of capacity. Yet the reality is far more measured. Most hyperscale and AI‑focused campuses are expected to phase in capacity over multiple years to manage engineering complexity, navigate permitting, and align with the risk tolerance of investors financing these developments.

Source: Epoch AI

 

True Modeling Requires Ground-Truth Data—Not Hype

Ultimately, this creates a disconnect between what is announced and what is genuinely achievable. Understanding true data center growth requires cutting through overlapping announcements, aspirational gigawatt claims, and speculative power reservations. By grounding expectations in semiconductor shipment volumes, verifiable construction progress, and secured power commitments, the industry can move beyond headline noise and toward an accurate view of the capacity that is truly on the way.


Across hyperscalers and sovereign clouds alike, the race is shifting from just model supremacy to infrastructure supremacy. The real differentiation is now in how efficiently GPUs can be interconnected and utilized. As AI clusters scale beyond anything traditional data center networking was built for, the question is no longer “How fast can you train?” but “Can your network keep up?” This is where emerging architectures like Optical Circuit Switches (OCS) and Optical Cross-Connects (OXC), a technology that has been used in wide area networks for decades, enter the conversation.

The Network is the Computer for AI Clusters

The new age of AI reasoning is ushering in three new scaling laws—spanning pre-training, post-training, and test-time scaling—that together are driving an unprecedented surge in compute requirements. At GTC 2025, Jensen Huang stated that demand for compute is now 100× higher than what was predicted just a year ago. As a result, the size of AI clusters is exploding, even as the industry aggressively pursues efficiency breakthroughs—what many now refer to as the “DeepSeek moment” of AI deployment optimization.

As the chart illustrates, AI clusters are rapidly scaling from hundreds of thousands of GPUs to millions of GPUs. Over the next five years, we expect about 124 gigawatts of capacity to be brought online, equivalent to more than 70 million GPUs deployed. In this reality, the network will play a key role in connecting those GPUs in the most optimized, efficient way. The network is the computer for AI clusters.
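As a rough sanity check on those two figures, the implied all-in power budget per deployed GPU can be derived directly. The sketch below uses only the rounded numbers above, and the result should be read as facility-level power per accelerator (host, networking, and cooling overhead included), not as a chip TDP.

```python
# Back-of-the-envelope check of the figures above. The result is an implied
# all-in facility figure per deployed GPU, derived from the rounded inputs.

capacity_gw = 124      # capacity expected to come online over the next five years
gpus_millions = 70     # equivalent GPUs deployed over the same period

watts_per_gpu = (capacity_gw * 1e9) / (gpus_millions * 1e6)
print(f"Implied all-in power budget: ~{watts_per_gpu / 1e3:.1f} kW per GPU")
# -> roughly 1.8 kW per deployed GPU, including overhead
```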

 

Challenges in Operating Large Scale AI Clusters

As shown in the chart above, the number of interconnects grows much faster than linearly with the number of GPUs. This rapid increase drives significant cost, power consumption, and latency. It is not just the number of interconnects that is exploding—the speed requirements are rising just as aggressively. AI clusters are fundamentally network-bound, which means the network must operate at nearly 100 percent efficiency to fully utilize the extremely expensive GPU resources.
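A simplified Clos (fat-tree) model helps illustrate why interconnect counts grow so quickly: once a cluster outgrows what two switching tiers can serve, another tier is required, and each added tier brings roughly one more link, and its associated optics, per GPU. The radix and tier logic below are illustrative assumptions, not any particular vendor's fabric design.

```python
# Simplified link-count model for a non-blocking Clos (fat-tree) fabric.
# Assumptions for illustration: every switch has the same radix, an L-tier
# fat-tree can serve up to 2 * (radix/2)**L endpoints, and a non-blocking
# design needs roughly one link per endpoint per tier.

def fabric_links(num_gpus: int, radix: int = 64) -> tuple[int, int]:
    """Return (tiers, total_links) needed to connect num_gpus endpoints."""
    half = radix // 2
    tiers = 1
    while 2 * half ** tiers < num_gpus:
        tiers += 1
    return tiers, tiers * num_gpus

for gpus in (1_000, 10_000, 100_000, 1_000_000):
    tiers, links = fabric_links(gpus)
    print(f"{gpus:>9,} GPUs -> {tiers} tiers, ~{links:>9,} links "
          f"({links // gpus} per GPU)")
```

Under these assumptions, a million-GPU cluster needs roughly twice as many links and optics per GPU as a thousand-GPU cluster, before even accounting for the per-link speed increases discussed below.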

Another major factor is the refresh cadence. AI back-end networks are refreshed roughly every two years or less, compared to about five years in traditional front-end enterprise environments. As a result, speed transitions in AI data centers are happening at almost twice the pace of non-accelerated infrastructure.

Looking at switch port shipments in AI clusters, we expect the majority of ports in 2025 will be 800 Gbps. By 2027, the majority will have transitioned to 1.6 Tbps, and by 2030, most ports are expected to operate at 3.2 Tbps. This progression implies that the data center network’s electrical layer will need to be replaced at each new bandwidth generation—a far more aggressive upgrade cycle than what the industry has historically seen in front-end, non-accelerated infrastructure.

 

 

The Potential Role of OCS in AI Clusters

Optical Circuit Switches (OCS) or Optical Cross-Connects (OXC) are network devices that establish direct, light-based optical paths between endpoints, bypassing the traditional packet-switched routing pipeline to deliver near-zero-latency connectivity with massive bandwidth efficiency. Google was the first major hyperscaler to deploy OCS at scale nearly a decade ago, using it to dynamically rewire its data center topology in response to shifting workload patterns and to reduce reliance on power-hungry electrical Ethernet fabrics.

A major advantage of OCS is that it is fundamentally speed-agnostic—because it operates entirely in the optical domain, it does not need to be upgraded each time the industry transitions from 400 Gbps to 800 Gbps to 1.6 Tbps or beyond. This stands in stark contrast to traditional electrical switching layers, which require constant refreshes as link speeds accelerate. OCS also eliminates the need for optical-electrical-optical (O-E-O) conversion, enabling pure optical forwarding, which not only reduces latency but also dramatically lowers power consumption by avoiding the energy cost of repeatedly converting photons to electrons and back again.

The combined benefit is a scalable, future-proof, ultra-efficient interconnect fabric that is uniquely suited for AI and high-performance computing (HPC) back-end networks, where east-west traffic is unpredictable and bandwidth demand grows faster than Moore’s Law. As AI workload intensity surges, OCS is being explored as a way to optimize the network.
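To give a directional feel for where the O-E-O savings described above come from, the sketch below compares the power attributable to one transit through an electrical switch (two pluggable optics plus switching silicon) against one pass through an optical switch. Every wattage figure is a placeholder assumption for illustration only, not a measured product figure; endpoint transceivers are still required in either case, so the comparison covers only the intermediate switching hop.

```python
# Directional comparison of per-hop power: an electrically switched hop
# (requiring O-E-O conversion) versus a purely optical OCS hop.
# All wattages below are placeholder assumptions for illustration only.

PLUGGABLE_OPTIC_W = 15         # assumed power of one high-speed pluggable transceiver
SWITCH_SILICON_PER_HOP_W = 20  # assumed switching-silicon power for one transit
OCS_PORT_W = 0.1               # assumed per-port power of a MEMS-based optical switch

def electrical_hop_watts() -> float:
    # O-E-O: ingress optic -> electrical switching -> egress optic
    return 2 * PLUGGABLE_OPTIC_W + SWITCH_SILICON_PER_HOP_W

def ocs_hop_watts() -> float:
    # Light is redirected in the optical domain; no conversion at the switch
    return OCS_PORT_W

hops = 100_000  # hypothetical number of switched paths in a large cluster
print(f"Electrical switching layer: ~{electrical_hop_watts() * hops / 1e6:.1f} MW")
print(f"OCS layer:                  ~{ocs_hop_watts() * hops / 1e6:.2f} MW")
```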

 

OCS is a Proven Technology

Using an OCS in a network is not new. It was, however, called by different names over the past three decades: OOO Switch, all-optical switch, optical switch, and optical cross-connect (OXC). Currently, the most popular term for these systems used in data centers is OCS.

The technology has been used in the wide area network (WAN) for many years to solve a similar problem set, and for many of the same reasons, tier-one operators worldwide have addressed that problem through the strategic use of OCSs. As a result, OCSs have been running in carrier networks operated by providers with the strictest performance and reliability requirements for over a decade. Additionally, the base optical technologies, both MEMS and LCOS, have been widely deployed in carrier networks and have operated without fault for even longer. Stated another way, OCS is based on field-proven technology.

Whether used in a data center or to scale across data centers, an OCS offers several benefits that translate into lower costs over time.

To address the specific needs of AI data centers, several companies have launched new OCS products.

 

Final Thought

AI infrastructure is diverging from conventional data center design at an unprecedented pace, and the networks connecting GPUs must evolve even faster than the GPUs themselves. OCS is not an exotic research architecture; it is a proven technology that is ready to be explored and considered for use in AI networks as a way to differentiate and evolve them to meet the stringent requirements of large AI clusters.


Part 3 of a 3-Part CNaaS Blog Series:

We should look at history to predict seismic shifts in the IT equipment industry.

Campus NaaS is poorly defined in the industry, leading to market confusion. In this series of blogs, Siân Morgan explores the differences and similarities of the offers on the market and proposes a set of definitions to help enterprises and vendors speak the same language.

In 1998, when Netflix shipped its first DVD (Beetlejuice), not many foresaw the impact that the company would have on the entertainment industry. It took nine years for Netflix to shift from DVD rental to streaming, and another 18 years for streaming to eclipse broadcast and cable TV viewing combined, in 2025. In September, KPop Demon Hunters became Netflix’s most watched title, with 325 million views in the first 91 days.

The cloud delivery model transformed television and has the potential to transform the LAN equipment industry (although ideally without any soul-sucking demons).

CNaaS has already outpaced Netflix’s nine-year runway. Some of the LAN-as-a-Utility vendors began developing hardware in stealth mode in 2015, and the first CNaaS revenue was recognized in 2018. In 2021, HPE brought the concept of a cloud-inspired LAN offer to the mainstream when it announced its deal with Home Depot. Figure 1 shows a few of the key milestones in the short history of CNaaS.

Like the streaming market in 2014, CNaaS is still a relatively new concept, and most enterprises can be forgiven for struggling to visualize how the offer may change their business.

In the first blog of this series, I introduced the characteristics that appear in cloud-inspired offers, including “as-a-Service” offers for LAN connectivity, which we label as CNaaS or Campus Network as a Service.  To different degrees, CNaaS offers are outcome-oriented, elastic, priced as opex, and maintenance-free.  In the second blog, I explained that the CNaaS offers can be classified into three categories:

    • Turnkey CNaaS, which are bespoke offers to large enterprises,
    • Enabler CNaaS for which vendors deliver enhancements to MSPs to expand the service providers’ addressable market, and
    • LAN-as-a-Utility CNaaS, for which vendors have developed LAN hardware to automate many operational tasks and to allow the vendors to provide ongoing network monitoring for enterprises.

There is another dimension by which available CNaaS offers vary:  by scope of technology.  CNaaS vendors include a mix of technologies such as LAN, WAN, data center connectivity, private cellular, firewalls, ZTNA, as well as higher-order applications, such as workflow management and software for smart-building management.

Can the CNaaS opportunity be quantified?

To quantify the opportunity for CNaaS vendors, we first reduce the service to a “core offering,” that is, the common elements present in every vendor’s service: Wireless LAN and Campus Switching. This allows market growth of the core to be baselined and tracked as it evolves. However, it will not represent the total opportunity for a vendor who includes other technologies such as firewalls, ZTNA, or an ISP marketplace, not to mention the professional services that can be bundled with the offers.

Next, this core CNaaS market must be defined in relation to other existing markets. We consider whether its trajectory will be independent, or whether it will evolve as a subset of a larger market, subject to the same prevailing trends as its parent.  All of the CNaaS on the market are constructed with Public Cloud-managed LAN technology, and so we consider CNaaS to be a subset of the Public Cloud-managed LAN market.  (Public Cloud-managed LAN is made up of LAN offers with management or control applications that are hosted in the cloud facilities of the vendor or a third party). Analyzing the manner in which the Public Cloud-managed LAN market grew and became widely adopted can provide valuable insights into the CNaaS trajectory.

Although it is a subset of the Public Cloud-managed LAN market, CNaaS has one very important difference: the way in which revenue is recognized by vendors.  Whereas Public Cloud-managed LAN vendors, such as Cisco, HPE, and CommScope already recognize a material amount of recurring software revenue, nearly all of their hardware revenue is recognized in the quarter in which it is shipped.

CNaaS offers reduce the revenue recognized in the early years of a contract; as vendors sell more CNaaS, the revenue for past shipments accumulates. Figure 2 compares the theoretical revenue profile of a vendor selling CNaaS with the revenue profile of the same equipment sold as capex.  This principle is captured by the “fish model” proposed by Thomas Lah in his blog.

Figure 2: Impact of CNaaS on Manufacturer Revenue Profile (Source: Dell’Oro Group)
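For readers who want to see the “fish model” dynamic in numbers, the minimal sketch below recognizes the same yearly bookings two ways: upfront at shipment (capex) versus ratably over a subscription term. The contract value, five-year term, and one-deal-per-year cadence are hypothetical choices for illustration, not figures from the report.

```python
# Minimal sketch of the "fish model": identical yearly bookings recognized
# upfront (capex sale) versus ratably over a subscription term.
# All figures are hypothetical, for illustration only.

CONTRACT_VALUE = 100   # value of the equipment sold each year (arbitrary units)
TERM_YEARS = 5         # assumed CNaaS subscription term
HORIZON = 8            # years to simulate

capex_revenue = [CONTRACT_VALUE] * HORIZON   # recognized in the quarter shipped

cnaas_revenue = [0.0] * HORIZON
for start in range(HORIZON):                 # one new 5-year deal signed each year
    for year in range(start, min(start + TERM_YEARS, HORIZON)):
        cnaas_revenue[year] += CONTRACT_VALUE / TERM_YEARS

for year, (capex, naas) in enumerate(zip(capex_revenue, cnaas_revenue), start=1):
    print(f"Year {year}: capex model = {capex:6.1f}   CNaaS model = {naas:6.1f}")

# Early years: the CNaaS model recognizes far less revenue than the capex model.
# By year 5 the accumulated subscriptions catch up to the capex run-rate.
```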

Finally, to quantify the CNaaS opportunity, we must consider whether it is accretive to the total market: will it result in an expansion of overall spending on LAN equipment, or is it a competitive tool to win share of existing market revenues?

I have identified three accretive opportunities related to CNaaS:

    • The CNaaS opportunity is expected to increase enterprise spending on AIOps software features designed to simplify and automate network operations. This will shift spending from human resources, growing overall LAN-related software revenue.
    • Simplification of the MSP workflow delivered by Enabler CNaaS will result in an expansion of MSP revenue from a broader segment of enterprises.
    • Vertical-specific CNaaS solutions blending different technologies and higher order applications (such as smart buildings, private wireless/WLAN networks, or the digitization of manufacturing) will drive new use cases and expand enterprise IT spending.

The CNaaS opportunity can be sized by considering the factors described above, alongside an analysis of shipment data, historical trends, and interviews with vendors, enterprises, and channel partners.  The conclusion of this analysis is that a shift in LAN delivery model can take several years, but the opportunity it brings is alluring.  Increased margins, a deeper, ongoing relationship with customers, and the future stability of a higher total contract value will drive vendors to develop rich CNaaS offers.  The commoditization of connectivity and the increasing complexity of networks will drive adoption by enterprises. I predict that in 2029, a mere 11 years after the offer became commercially available, it will represent over 10% of the Public Cloud-Managed LAN market.

CNaaS Opportunity 2029 Chart (Source: Dell’Oro Group)

With the broadest access to MSPs as a channel of delivery, we expect Enabler CNaaS to remain the largest category. LAN-as-a-Utility CNaaS will remain the fastest-growing variant, with growth driven by new entrants gaining scale.

In his explanation of the “fish model”, Thomas Lah laid out two critical steps for SaaS companies wanting to “swallow the fish”, and these recommendations also ring true for CNaaS vendors. First, make sure to align sales compensation to the subscription model, without inflating incentives and associated costs.  Second, focus on ensuring a high rate of subscription renewals. As Netflix is well aware, customer churn determines whether a recurring revenue business succeeds or fails.

Dell’Oro Group Tracks CNaaS Trends, Market Dynamics and Revenue Forecasts in the Advanced Research Report: CNaaS and Public Cloud-Managed LAN