[wp_tech_share]

At this year’s NVIDIA GTC, the narrative has moved decisively beyond the initial shift to accelerated computing. What stood out in 2026 is not just the continuation of that trend, but the expansion of AI infrastructure into a heterogeneous, domain-specific ecosystem.

As an analyst covering data center compute, the key takeaway is clear: the industry is entering its next phase—where optimization, not just scale, becomes the defining battleground.

 

From Retrieval to Generative—and Now to Reasoning Infrastructure

Hyperscaler workloads have evolved rapidly from retrieval-based systems toward generative AI, and now increasingly toward reasoning-driven architectures. Internal workloads such as search are being fundamentally re-architected around AI models, signaling a structural shift in how compute is deployed.

This transition continues to drive strong demand for accelerated computing. At Dell’Oro Group, we project global data center capex to exceed $1.7 T by 2030. These estimates could prove conservative given the scale of investment being signaled by hyperscalers, including multi-hundred-billion-dollar capex trajectories and long-term, large-scale infrastructure commitments.

 

The Emergence of LPUs: A Potential Inflection Point

LPUs (language processing units), particularly through NVIDIA’s partnership with Groq, represent one of the more strategically important developments. Their SRAM-based architecture is optimized for low latency and strong performance per watt, enabling lower cost per token for inference and reasoning workloads.

This introduces greater flexibility in infrastructure design. Different service tiers can be optimized independently, with throughput-oriented configurations for lower-cost services and latency-sensitive deployments for premium offerings. LPUs provide a mechanism to fine-tune this balance in ways that GPUs alone cannot fully achieve.
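A rough way to frame that tuning exercise is cost per token as a function of throughput, power draw, and amortized hardware cost. The sketch below is purely illustrative; every figure is a hypothetical placeholder rather than a measured benchmark for any GPU or LPU product.

```python
# Illustrative only: all numbers are hypothetical placeholders, not benchmarks.

def cost_per_million_tokens(tokens_per_sec, system_power_kw,
                            power_cost_per_kwh, amortized_capex_per_hour):
    """Rough serving-cost model: energy cost plus amortized hardware cost."""
    tokens_per_hour = tokens_per_sec * 3600
    energy_cost_per_hour = system_power_kw * power_cost_per_kwh
    total_cost_per_hour = energy_cost_per_hour + amortized_capex_per_hour
    return total_cost_per_hour / tokens_per_hour * 1e6

# Hypothetical throughput-oriented tier vs. latency-sensitive premium tier
bulk = cost_per_million_tokens(50_000, system_power_kw=120,
                               power_cost_per_kwh=0.08, amortized_capex_per_hour=400)
premium = cost_per_million_tokens(20_000, system_power_kw=60,
                                  power_cost_per_kwh=0.08, amortized_capex_per_hour=250)
print(f"bulk tier: ${bulk:.2f} / premium tier: ${premium:.2f} per million tokens")
```

Operators can sweep this kind of model across candidate GPU, LPU, and mixed configurations to decide where each accelerator class earns its place in the service portfolio.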

Early deployments suggest LPUs can be configured at meaningful density. For example, a single Groq LPU rack can integrate hundreds of processors, highlighting the degree of parallelism available for inference and reasoning workloads. In practice, such systems are likely to be deployed alongside GPU clusters, with the ratio depending on workload mix and service requirements.

If adoption reaches even modest levels, LPUs could expand the silicon TAM for domain-specific accelerators. At the same time, it remains unclear whether LPUs will primarily complement GPUs or displace portions of certain workloads as operators optimize for overall system efficiency. More broadly, LPUs underscore the growing importance of architectural specialization tailored to specific workload requirements.

 

GPU Roadmap: Density and Scale Continue to Accelerate

NVIDIA continues to push aggressively on GPU density and system integration. Platforms such as Vera Rubin Ultra demonstrate this trajectory, with multi-die architectures, massive HBM capacity reaching the terabyte scale per package, and highly dense, liquid-cooled rack designs.

Future platforms such as Feynman are expected to push these limits further, increasing both compute density and system complexity. However, this rapid scaling introduces new constraints around power, cooling, and system balance. As a result, complementary architectures and more specialized components will play a growing role in maintaining overall efficiency. With compute costs remaining elevated and data center capex scaling into the hundreds of billions annually, operators will need to strategically align infrastructure with domain-specific workloads to maximize efficiency and reduce total cost of ownership.

 

Interconnects: Balancing Standards and Proprietary Innovation

Interconnect strategy remains central to NVIDIA’s roadmap. The company continues to balance proprietary innovation with industry standards, investing in both InfiniBand and Ethernet for scale-out connectivity while advancing NVLink as the backbone of scale-up architectures.

As scale-up domains expand, NVLink will increasingly need to extend beyond the rack and, over time, into the optical domain. This evolution is necessary to support larger, more tightly coupled compute fabrics, but also introduces new technical challenges.

The expansion of scale-up capabilities naturally raises the question of whether they could displace portions of traditional scale-out networking. In practice, both architectures will need to evolve in parallel. Scale-up enables higher performance within tightly coupled systems, while scale-out remains essential for resilience, workload distribution, and efficient utilization across clusters. This is increasingly true not only for training but also for inference, where distributed workloads and service-level requirements demand flexibility.

NVIDIA is also reducing its reliance on PCIe-based x86 systems. With initiatives such as NVLink Fusion and the development of its own CPU roadmap, the company is positioning NVLink as a broader system fabric that could extend beyond GPUs.

 

Connectivity, Networking, and System-Level Optimization

Connectivity is rapidly emerging as one of the primary constraints in next-generation AI infrastructure. Current systems are largely built on 200 Gbps SerDes, but the industry is already looking ahead to 400 Gbps SerDes. However, the transition to 400 Gbps presents significant challenges in signal integrity, power consumption, and packaging complexity, making the timeline aggressive and execution uncertain.

In this context, NVIDIA’s vertically integrated approach provides a meaningful advantage. Its control over InfiniBand technology, including SerDes development, allows it to move ahead of standard Ethernet ecosystems when necessary, particularly when industry standards lag behind system requirements.

At the same time, networking is no longer just about bandwidth. Smart NICs and DPUs, particularly NVIDIA’s BlueField platform, are becoming increasingly central to system architecture, with the market projected to grow at a 30% CAGR over the next five years. DPUs are expanding into broader roles within AI infrastructure, managing data movement between compute, storage, and CPU domains while offloading networking and orchestration tasks from primary processors.
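For context, a 30% CAGR compounds quickly; over five years it implies a market roughly 3.7 times its current size, as a quick check shows.

```python
# Implied growth multiple from a 30% CAGR sustained for five years
cagr, years = 0.30, 5
print(f"{(1 + cagr) ** years:.2f}x")  # ~3.71x the starting market size
```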

Taken together, these trends point toward a broader shift to system-level optimization, where performance is increasingly determined by how effectively compute, networking, and storage are integrated across the entire infrastructure stack.

 

Expanding the Platform: Beyond GPUs to Full-Stack Infrastructure

While GPUs remain the foundation of AI infrastructure, NVIDIA is clearly extending its reach across the full data center stack. Beyond its focus on domain-specific accelerators, GTC 2026 also highlighted the dense Vera CPU platform optimized for orchestrating agentic AI workloads, as well as the STX platform designed for KV cache-based context memory. A central theme underpinning this expansion is the increasing importance of co-design—bringing together compute, networking, and storage disciplines into a unified, system-level architecture rather than optimizing each component in isolation.
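The KV-cache point is worth a quick illustration. Using the standard transformer KV-cache sizing formula (two tensors, keys and values, stored per layer, per KV head, per token), a single long context already consumes tens of gigabytes; the model dimensions below are hypothetical.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Approximate KV-cache footprint for one sequence: keys + values,
    per layer, per KV head, per token, at the given precision."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical large model: 80 layers, 8 KV heads (grouped-query attention),
# head dimension 128, FP16 cache, 128k-token context
per_context = kv_cache_bytes(num_layers=80, num_kv_heads=8,
                             head_dim=128, seq_len=128_000)
print(f"~{per_context / 1e9:.0f} GB of KV cache per 128k-token context")
```

Multiply that by thousands of concurrent long-context sessions and the appeal of a dedicated context-memory tier sitting outside GPU HBM becomes clear.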

Taken together, these developments signal a clear expansion of NVIDIA’s total addressable market—from GPUs alone to a broader, full-stack infrastructure opportunity spanning compute, networking, and storage.

 

From Scale to Optimization: The Path Forward

NVIDIA’s rapid innovation cadence raises important questions around long-term economics, particularly as systems become more complex and capital-intensive. Maintaining a strong return on investment will depend not only on hardware performance, but on how effectively these systems can be utilized over time.

Here, NVIDIA’s software ecosystem remains a key advantage. CUDA provides continuity across generations, allowing developers to extract incremental performance improvements and enabling mixed-generation deployments that improve overall total cost of ownership.
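At the node level, that mixed-generation reality is easy to surface. Below is a minimal sketch using PyTorch's standard CUDA device queries; the grouping policy itself is purely illustrative.

```python
import torch

# Group visible GPUs by compute capability so scheduling, kernel selection,
# or precision choices can account for the generation mix on a node.
devices = {}
for i in range(torch.cuda.device_count()):
    cap = torch.cuda.get_device_capability(i)   # e.g. (9, 0) for Hopper-class parts
    devices.setdefault(cap, []).append(torch.cuda.get_device_name(i))

for cap, names in sorted(devices.items()):
    print(f"compute capability {cap[0]}.{cap[1]}: {len(names)} x {names[0]}")
```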

More broadly, GTC 2026 makes it clear that the industry is moving beyond the initial phase of scaling AI infrastructure and into one defined by optimization and specialization. The shift toward heterogeneous architectures, combined with a growing focus on efficiency and workload-specific design, is reshaping how data centers are built and operated.

[wp_tech_share]

In a far-reaching ruling this week, the FCC added all consumer-grade routers produced in foreign countries to its existing Covered List, effectively blocking any new foreign-made router model from receiving FCC equipment authorization. Without FCC authorization, no new foreign-made routers can be imported or sold into the US market. Nearly 100% of consumer-grade routers are manufactured or assembled outside the United States, which means the FCC has placed significant limits on new router imports and sales until approvals or waivers are granted.

Previously authorized devices are not affected and can continue to be imported, sold, and used. Firmware support for these models is expected to continue through at least March 1, 2027, with a possible extension.

For broadband providers, existing CPE deployment and inventory remain in place. However, the policy introduces uncertainty around the timing and availability of next-generation equipment.

 

Cybersecurity Concerns Could Hurt Broadband Providers

In its decision, the FCC cited cybersecurity concerns, noting that foreign-made routers were implicated in the Volt, Flax, and Salt Typhoon campaigns targeting vital U.S. infrastructure. All of these were serious, concerted cyber-espionage efforts, and all have been tied back to China. The consumer routers targeted in these attacks came from multiple brands, including Cisco, D-Link, Netgear, Asus, and others, which generally split manufacturing and assembly among Taiwan, the Philippines, Malaysia, and Vietnam, among other countries. Some of these companies even have US-based corporate headquarters or major US sales offices. But for the FCC, the focus of its decision is not corporate nationality but the country of production.

For broadband operators, the implications could be meaningful. Many ISP-supplied gateways and mesh systems are assembled by global original design manufacturers (ODMs), including Sercomm, Arcadyan, Askey, Compal, and Wistron NeWeb. Because the vast majority of residential CPE and routers are manufactured abroad, there is currently not enough domestic manufacturing capacity to fill the supply gap ISPs now face.

Cable operators running managed Wi-Fi programs—such as Comcast’s xFi and Charter’s Spectrum Wi-Fi—are particularly exposed, since those programs depend on a steady pipeline of certified gateway hardware to provision new subscribers and replace aging CPE in the field. A freeze on new model authorizations could not only limit the availability of new DOCSIS 4.0 and Wi-Fi 7 units, but also limit the new revenue associated with the managed Wi-Fi services these operators provide.

The FCC established a Conditional Approval pathway, which may require disclosure of management structure, supply chain details, and potential plans to shift manufacturing to the US. However, there is no published timeline for how long that process takes, and no precedent for how many applications the relevant agencies can process in parallel.

Few, if any, brands known for consumer-grade routers currently build products stateside. Standing up domestic manufacturing lines—even for final assembly—is a capital-intensive, multi-year undertaking. Beyond the time it would take to get domestic manufacturing up and running, there is the cost of doing so. CPE margins are incredibly slim to begin with, which makes it highly unlikely that these companies would even consider onshoring manufacturing to a market where input costs are significantly higher than in Southeast Asia.

In the near term, the US residential router market will now stratify in ways that may not serve the underlying security objectives. Inventory of previously-authorized models will be rationed, prices will rise, and innovation cycles — particularly the transition to Wi-Fi 7 and Wi-Fi 8 — will slow in the U.S. market relative to the rest of the world. Whether that outcome makes American networks more secure, or simply more expensive, is an open question.

[wp_tech_share]

NVIDIA’s annual developer conference (San Jose, March 16–19) has become a bellwether for data center physical infrastructure (DCPI). This year was no exception. NVIDIA DSX took center stage — a full-stack platform for designing, building, and operating AI factories that now counts over 200 partners in its ecosystem. Several major DCPI vendors—including ABB, Eaton, Mitsubishi Electric, Schneider Electric, Siemens, Trane Technologies, and Vertiv—unveiled co-designed solutions in a tightly choreographed wave of announcements. It was a concrete expression of what CEO Jensen Huang declared in his keynote: “this conference is going to cover every single layer of the five-layer cake of artificial intelligence, from land, power, and shell (the infrastructure) to chips, to the platforms, the models, and, of course, the most important, and ultimately what’s going to get this industry to take off, is all of the applications.”

NVIDIA’s five-layer cake of AI

 

A Factory for Designing Factories

Among the DSX components, what particularly stood out was the Omniverse DSX Blueprint—a now generally available platform for modeling data center layouts, power topologies, and thermal behavior, using simulation-ready 3D models contributed by infrastructure partners in OpenUSD format. It is an ambitious vision at a time when the reality on the ground is that most data center design still relies on traditional CAD and BIM applications, and digital twin adoption is still in its infancy. This is NVIDIA being characteristically visionary—anticipating what will eventually become a necessity, even if today it can look like overkill.
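For readers less familiar with OpenUSD, the interchange format those contributed models use, a minimal sketch of inspecting such an asset with the open-source pxr Python bindings (the usd-core package) might look like this; the file name is hypothetical.

```python
from pxr import Usd, UsdGeom  # open-source OpenUSD Python bindings (pip install usd-core)

# Open a hypothetical simulation-ready asset contributed by an infrastructure vendor
stage = Usd.Stage.Open("vendor_cdu_asset.usd")

# Walk the prim hierarchy and report anything that carries transformable geometry
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Xformable):
        print(prim.GetPath(), prim.GetTypeName())
```

The same scene description can then be referenced into a larger facility layout, which is what makes OpenUSD attractive as a common denominator across design tools.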

The industry is moving from adding capacity in the teens of gigawatts a year to potentially 100GW+ in a decade or less. At that scale, without AI-assisted tools in design, construction, and commissioning, it is hard to see how projects come online at the pace required—particularly given well-known skilled labor shortages. Just as semiconductor design has become fundamentally dependent on AI tools, data center design at gigawatt scale may have no choice but to follow the same path. The Omniverse Blueprint is NVIDIA’s bet on removing the barriers to building AI factories at scale.

But while the Omniverse Blueprint captures the imagination, the conversations dominating the show floor among DCPI vendors were far more immediate. Five topics in particular stood out: the growing heterogeneity of inferencing cluster racks, the fast-approaching 800 VDC transition, the ramp-up of liquid cooling designs, the potential commoditization within the MGX ecosystem and, since no data center discussion is complete without it, power availability.

NVIDIA’s Vera Rubin DSX AI Factory Reference Design

 

The End of the One-Rack Era

For the past two NVIDIA generations, data center designers could plan around a single workhorse rack. The Hopper and then Blackwell platforms offered a largely homogeneous building block: one compute rack architecture, scaled across rows and halls, with relatively uniform power and cooling profiles. GTC 2026 broke that pattern decisively.

NVIDIA introduced not one but several rack configurations under the Vera Rubin umbrella. The NVL72 remains the flagship—72 Rubin GPUs and 36 Vera CPUs in a fully liquid-cooled, fanless, cableless enclosure exceeding 200 kW per rack. Alongside it, a CPX rack adds Rubin CPX accelerators to the Vera Rubin superchip trays, optimized for inference performance. A Vera CPU-only rack targets inference and data preprocessing without GPU acceleration. And the LPX rack with Groq’s LPUs debuts third-party silicon within NVIDIA’s own reference design.

This is a big departure. And it is also entirely expected. A single architecture serving every workload was only tenable while AI infrastructure was synonymous with large-scale training. As workloads diversify into a variety of fine-tuning and inferencing agentic AI applications, infrastructure must follow suit. Henry Ford was able to offer the Model T alone for only so long.

For DCPI vendors, the implications are immediate. Heterogeneous clusters mean managing mixed rack densities, uneven heat loads, and varying liquid cooling requirements coexisting on the same row. This is a design and operational challenge that will demand far more flexibility from infrastructure solutions than the relatively uniform AI halls of the Hopper and Blackwell era.
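To make the planning challenge concrete, a toy row-level check against power and cooling budgets might look like the sketch below. Only the roughly 200 kW NVL72 figure comes from the announcements above; every other rack profile and budget is a hypothetical placeholder.

```python
# Toy example: validate a mixed-rack row against power and liquid-cooling budgets.
# Only the ~200 kW NVL72 figure is disclosed; the rest are hypothetical placeholders.
RACK_PROFILES = {
    "NVL72":    {"power_kw": 200, "liquid_cooled": True},
    "CPX":      {"power_kw": 150, "liquid_cooled": True},   # hypothetical
    "Vera-CPU": {"power_kw": 60,  "liquid_cooled": False},  # hypothetical
    "LPX":      {"power_kw": 100, "liquid_cooled": True},   # hypothetical
}

def check_row(layout, row_power_budget_kw, liquid_loops_available):
    total_kw = sum(RACK_PROFILES[r]["power_kw"] for r in layout)
    liquid_racks = sum(RACK_PROFILES[r]["liquid_cooled"] for r in layout)
    return {"total_kw": total_kw,
            "power_ok": total_kw <= row_power_budget_kw,
            "cooling_ok": liquid_racks <= liquid_loops_available}

print(check_row(["NVL72", "NVL72", "CPX", "Vera-CPU", "LPX"], 800, 4))
```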

 

High Voltage, High Stakes

For the biggest disruption in data center power architecture in decades, 800 VDC power distribution received remarkably little attention in NVIDIA’s official channels. Absent from Jensen’s keynote and with no significant announcements since the technical blog and whitepaper released alongside last year’s OCP Global Summit—an event we covered in a previous blog—NVIDIA’s messaging on the architecture has been sparse.

The relevance of the discussion among vendors, however, could not have been more different. 800 VDC was the talk of the town. Multiple vendors showcased equipment and prototypes, and many dedicated sessions explored everything from semiconductor building blocks to rack-level power delivery and facility integration. Vendors like Delta Electronics, Texas Instruments, and STMicroelectronics focused their marquee March 16 announcements squarely on 800 VDC developments—an unusual departure from the lockstep of similar-themed announcements that have become the norm at GTC.

Schneider Electric’s Jim Simonelli session at GTC draws interest from audience

 

Such advancements are important and necessary, but many questions about the 800 VDC topology remain unanswered. In his GTC session entitled “A Safe, Efficient, and Scalable Approach to 800 VDC Architecture,” Eaton’s J.P. Buzzell referenced an OCP white paper expected in the coming weeks. The draft should bring more clarity to the architecture, but there is still a long way to go before engineers can fully specify an 800 VDC data hall. And even once the specification matures, supply chains for components will need to be stood up and safety guidelines codified before broad deployment can begin.
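Whatever shape the final specification takes, the underlying motivation is simple Ohm's-law arithmetic: for a fixed rack power, a higher distribution voltage means proportionally lower current and much lower resistive loss in the conductors. A back-of-the-envelope sketch, with an arbitrary 1-milliohm conductor path assumed purely for illustration:

```python
# Current and resistive loss for a 200 kW rack at different distribution voltages,
# assuming the same (arbitrary) 1 milliohm conductor path in every case.
def feeder(power_w, voltage, resistance_ohm=0.001):
    current = power_w / voltage                 # I = P / V
    loss_w = current ** 2 * resistance_ohm      # P_loss = I^2 * R
    return current, loss_w

for volts in (54, 415, 800):
    amps, loss = feeder(200_000, volts)
    print(f"{volts:>4} V: {amps:7.0f} A, {loss:8.0f} W lost in the same conductor")
```

At 800 V a 200 kW rack draws 250 A, while a 54 V busbar would need roughly 3,700 A for the same load, with conductor losses scaling with the square of that current.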

 

45 Degrees of Separation

Much like 800 VDC, another infrastructure shift that made waves in an earlier NVIDIA keynote received little airtime at GTC. At CES in January, Jensen highlighted the move toward 45°C warm-water inlet temperatures—a significant departure from the designs more commonly deployed today. Beyond Jensen’s brief nod to Vera Rubin’s 45°C specification, the topic received little attention at GTC.

NVIDIA remains committed to 45°C, but there is no sign of it doubling down or rushing to get there. The convergence toward 45°C architectures will take longer to play out. Facility-side infrastructure needs to be adapted, but operators might remain reluctant to optimize the cooling system if doing so carries any risk of reducing accelerator performance. In an age of highly constrained compute, every token counts. And the imperative to maximize throughput trumps facility-level efficiency optimization.

The water temperature debate, however, was far from the only liquid cooling story at GTC. On the show floor, the direction of travel for CDU capacity was unmistakable. As pod architectures scale and per-rack thermal loads climb, vendors responded with a new class of multi-megawatt CDUs. These are a step change from capacities that dominated the market just a year ago, and we expect this upward trend to continue as next-generation pod architectures push thermal envelopes further still.
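The step change in CDU capacity translates directly into coolant flow rates through the heat-transport relation Q = m_dot x c_p x delta_T. A quick sanity check, assuming a hypothetical 2 MW unit and a 10°C temperature rise across the IT loop:

```python
# Q = m_dot * c_p * dT  ->  water flow rate needed to carry a given heat load
def water_flow_lpm(heat_load_w, delta_t_c, cp_j_per_kg_k=4186, density_kg_per_l=1.0):
    mass_flow_kg_s = heat_load_w / (cp_j_per_kg_k * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l * 60   # litres per minute

# Hypothetical 2 MW CDU with a 10 degC coolant temperature rise
print(f"{water_flow_lpm(2_000_000, 10):.0f} L/min")  # roughly 2,900 L/min
```

Flow rates of that magnitude drive the pump sizing, piping diameters, and quick-disconnect ratings now appearing across vendor portfolios.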

Delta Electronics’ 800VDC CDU

 

An interesting product found on the exhibition floor was a direct-current CDU, able to be connected straight to the 800 VDC bus. It is a thoughtful choice that adds flexibility for operators designing next-generation whitespace, even if we expect most large units to be housed in mechanical galleries in the grey space—where traditional AC power distribution is likely to remain the standard for the foreseeable future. Either way, the convergence of power and cooling design choices is becoming impossible to ignore.

 

MGX and the March Toward Standardization

The growing specificity of NVIDIA’s reference architectures—from rack dimensions and cooling requirements to power topologies and simulation-ready digital models—raises an uncomfortable question for DCPI vendors: as NVIDIA defines more of the design, what room is left for differentiation?

The “MGX wall” on the show floor—displaying components from dozens of vendors side by side within the standardized MGX ecosystem—made this tension visible. By standardizing interfaces, form factors, and performance specifications across the infrastructure stack, MGX makes it easier for operators to mix and match components from multiple suppliers. That is a win for deployment speed and supply chain resilience. But it also compresses the space in which vendors can compete on anything other than price and availability—the classic hallmarks of a commoditizing market.

Quick disconnects from multiple vendors showcased at the “MGX wall”

 

Not all vendors will be affected equally. Those with deep system integration expertise, intelligent controls, service capabilities, or engineering and quality differentiation in mission-critical components will find ways to stay above the commoditization line. But for vendors whose value proposition rests primarily on the physical product itself, the tightening of NVIDIA’s specifications around their equipment is a trend worth watching closely.

 

Unlocking the Grid

Perhaps the most consequential launch at GTC came not from the chip announcements but from DSX Flex—NVIDIA’s software layer for connecting AI factories to grid services and orchestrating dynamic power adjustment. With NVIDIA’s order book continuing to grow, the math is simple: the gap between the power needed to energize forecast chip shipments and the pace of grid upgrades is too large to ignore. And the only near-term path to more power is not launching data centers into space, but tapping into existing grid capacity when it is not being used.

This was a point I raised directly with Jensen during the event. His response was unequivocal: data centers must change their relationship with the grid and be willing to accept less stringent SLAs in exchange for faster access to capacity. AI workloads will need to flex around supply constraints rather than demanding always-on, fully firm power. In a world where tokens per watt is becoming the defining metric for AI factory economics, accessing those watts and making the most of them becomes decisive. Startups like Emerald AI and Phaidra are building the technology to support this, but unlocking it at scale requires more than just engineering ingenuity. It depends on the willpower and aligned incentives of the primary gatekeepers involved—utilities, grid operators, and their regulators.
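A minimal sketch of the control loop this implies is shown below; the firm and flexible capacity split, the grid-stress signal, and the throttling policy are all hypothetical placeholders, and the real implementations being built by the startups mentioned are far more sophisticated.

```python
# Toy demand-response loop: scale the AI factory's flexible load down as grid stress rises.
def target_power_mw(grid_stress, firm_capacity_mw=50, flexible_capacity_mw=150):
    """Always keep the firm block; shed the flexible block in proportion to stress (0..1)."""
    flex_fraction = max(0.0, 1.0 - grid_stress)
    return firm_capacity_mw + flexible_capacity_mw * flex_fraction

for stress in (0.0, 0.5, 0.9):
    print(f"grid stress {stress:.1f} -> run the campus at {target_power_mw(stress):.0f} MW")
```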

 

What This Means for the DCPI Market

Dell’Oro Group’s latest DCPI market update, released during GTC week, showed the market reached $10.9 billion in 4Q 2025—up 20% year-over-year—with synchronized backlog surges across vendors in power and cooling. The AI supercycle continues to drive record investment, and GTC 2026 did nothing to dampen expectations. The tone was one of confident optimism—about the trajectory of AI, the scale of compute still to be built, and the opportunities ahead for data center vendors.

Regardless of whether that optimism proves fully warranted, GTC 2026 left little doubt: the DCPI market is entering its most consequential chapter yet. Stay tuned as we continue to track these shifts—and connect with us at Dell’Oro Group to discuss these trends as they unfold.

 


Vendor Press Releases

Accelsius:

Delta Electronics:

Eaton:

Foxconn:

Flex:

Hitachi:

LiteOn:

Schneider Electric:

STMicroelectronics:

Texas Instruments:

Trane Technologies:

Vertiv:

 

 

[wp_tech_share]

AI RAN is moving to center court. While operators have not fundamentally changed how they think about their RAN roadmaps—openness, intelligence, automation, and virtualization remain the core pillars of next-generation RAN platforms—the visibility and adoption of these technologies vary significantly. In the early phase of 5G, Open RAN and vRAN dominated the conversation. Today, AI RAN is the shiny object.

Events such as MWC 2026 Barcelona and NVIDIA GTC reinforced the message we have communicated for some time, namely that AI RAN is already happening. At the same time, the GPU conversation is shifting. Looking ahead, AI RAN is expected to see broad adoption across the RAN in the latter half of the 5G cycle and from the outset of 6G.

All roads lead to increased adoption of AI RAN. Differences will emerge across deployment models, compute architectures, hardware choices, functional splits, and underlying technologies.

AI RAN Segments - Dell'Oro

At present, the majority of the AI RAN market is driven by distributed AI-for-RAN solutions focused on improving performance and efficiency, often leveraging existing 5G infrastructure. Vendors such as Huawei and ZTE have collectively shipped more than 0.6 M AI-enabled boards/plug-ins, underscoring that AI RAN is already happening at scale.

One of the key takeaways from MWC Barcelona is that nearly all RAN roadmaps—across both large and smaller vendors—now incorporate AI RAN capabilities across the full RAN stack, with a focus on AI-for-RAN. And it is not just the baseband—suppliers are now bringing intelligence into every RAN layer, including the radios. Ericsson’s launch of ten AI-ready radios featuring in-house silicon with neural network accelerators is a case in point. The question is no longer if AI RAN and AI-RAN will happen, but rather how, what, where, and when.

Ericsson AI RAN
Source: Ericsson

 

Dell’Oro’s long-term view of next-generation RAN has remained broadly intact. Events like MWC 2026 and NVIDIA GTC have done little to alter the underlying trajectory. The likelihood that AI RAN, Cloud RAN, and multi-vendor RAN will play major roles in the second half of 5G and the early 6G era remains high, moderate, and low, respectively. According to our latest forecast update, AI RAN is expected to surpass $10 B and account for roughly one-third of the total RAN market by 2029 (this is a share of existing RAN spending rather than incremental revenue).

Within the AI RAN domain, the prospects for GPU-RAN (and AI-and-RAN) are improving—still small, but no longer negligible. This shift reflects both low starting expectations and a gradual change in sentiment. The conversation is moving from outright skepticism to cautious curiosity. Much of this momentum is being driven by NVIDIA’s continued push and its vision that the world’s ~10 million macro sites could evolve into more than just base stations. As Jensen Huang put it during his GTC keynote: “That base station…is going to become an AI infrastructure platform.”

Early operator progress—from T-Mobile, SoftBank, and Indosat—combined with Nokia’s recent reiteration of its AI-RAN roadmap, is reinforcing this shift. Samsung and 1Finity, meanwhile, are exploring whether GPUs could make sense to diversify their computing platforms.

Source: Nokia

 

Part of the renewed interest in AI RAN—and GPU RAN specifically—stems from a broader realization: technological change is accelerating at a much faster pace than during the 4G-to-5G transition. This shift is reshaping how the industry views the role of mobile networks, the distribution of AI inference, and the trade-offs between hardware-based and software-defined architectures.

At the same time, “physical AI” is becoming more tangible. Concepts that once felt like science fiction—such as robots assisting with cooking or walking children to school—are now increasingly plausible in the near term.

That said, operators remain cautious for now about GPU RAN and broad-based AI inference distribution, even as skepticism gradually eases as the ecosystem matures. The constraints are structural. RAN deployments operate under tight power budgets, strict cost controls, and massive scale requirements. These factors make it challenging to justify deploying power-intensive compute at every cell site.

Concerns therefore persist about the performance-per-watt gap between GPUs and custom silicon, as well as the practicality of, and need for, supporting non-telco workloads at both D-RAN and C-RAN sites—particularly in D-RAN deployments. For example, the SoftBank/Ericsson robot assistance demo at MWC operated with latency requirements of around 100 ms, which allows AI inference to be centralized, with compute resources located in a data center reached via the User Plane Function (UPF).
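The arithmetic behind that design choice is straightforward: light in fiber propagates at roughly 5 microseconds per kilometer one way, so even a few hundred kilometers of round trip consumes only a small slice of a 100 ms budget. The distances below are hypothetical.

```python
# Rough latency budget for centralized AI inference reached via the UPF
FIBER_US_PER_KM = 5  # ~5 microseconds per km of fiber, one way

def round_trip_ms(distance_km):
    return 2 * distance_km * FIBER_US_PER_KM / 1000

budget_ms = 100
for km in (50, 200, 500):
    rtt = round_trip_ms(km)
    print(f"{km:>3} km to the data center: {rtt:.1f} ms RTT, "
          f"{budget_ms - rtt:.1f} ms left for inference and processing")
```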

In other words, AI RAN is moving from hype toward reality. While trade-offs across AI inference distribution needs, flexibility, performance, energy efficiency, TCO, and TTM will shape adoption paths over the near-term and long-term, the overall direction is clear: AI will become an integral part of every layer of the RAN.

Base-case projections suggest that non-GPU RAN will dominate AI RAN over the forecast period, reflecting the ability to upgrade existing infrastructure, the constraints at the cell site, and the need for multi-purpose tenancy. This suggests NVIDIA still faces a meaningful challenge if it aims to position itself not only as the “inference king,” but also as the “AI RAN king.”

At the same time, the conversation is clearly evolving. Operators are no longer asking why GPUs might be relevant, but rather where and when they make sense. If NVIDIA succeeds in expanding the role of the RAN—from a single-purpose connectivity layer into a distributed AI platform—the long-term opportunity could be significantly larger than what is currently reflected in our base-case assumptions. As Amara’s Law suggests, the risk may not be overestimating the short-term impact of AI RAN, but underestimating the demand for more distributed intelligence over the long-term.

 

[wp_tech_share]

Open RAN has made significant progress since the O-RAN Alliance was formed in 2018 to “re-shape the RAN industry and ecosystem towards more intelligent, open, virtualized, and interoperable networks.” However, the results to date have been mixed. Open fronthaul (Open FH) is increasingly being specified as a baseline capability for next-generation RAN platforms. At the same time, supplier diversity has not improved. In fact, RAN market concentration is higher today than it was before the alliance was established. Also, uneven adoption across greenfield, early-adopting, and early-majority operators contributed to a sharp capex deceleration following the Open RAN peak in 2022. That slowdown fueled concerns about the movement’s momentum, even with single-vendor Open RAN.

Market conditions improved in 2025. Following the roughly 40 percent decline between 2022 and 2024, preliminary findings suggest worldwide Open RAN revenue grew at a double-digit rate in 2025. Virtualized RAN (vRAN) revenue also stabilized, although at a more modest pace. Several factors help explain this reversal, including easier year-over-year comparisons, more favorable RAN spending trends in regions with strong Open RAN exposure, and, to a lesser extent, increased activity among early-majority adopters.

Vendor rankings did not change significantly, but the broader RAN landscape evolved in ways that also affected the Open RAN and Cloud RAN ecosystems. Both Mavenir and NEC revised their RAN strategies. Mavenir is now focusing more on small cells and non-terrestrial networks (NTN), while NEC is prioritizing vRAN and Massive MIMO. Meanwhile, 1Finity moved up one spot in the Open RAN ranking.

The incumbent Western suppliers are fully hedged. Ericsson and Nokia continue to support Open RAN while maintaining integrated portfolios. According to Ericsson’s latest update, 160 radio models will be Open-RAN-proven by the end of 2026. Likewise, Nokia’s recently introduced AI-RAN-ready Doksuri radios include compatibility with Open fronthaul standards.

Looking ahead, the positive momentum is expected to continue into 2026, with both Open RAN and vRAN projected to grow this year. The longer-term outlook for Open RAN and Cloud RAN also remains favorable. We have not changed the long-term assumptions communicated in the most recent forecast update. To recap, near-term Open RAN revenue projections were revised downward, while long-term growth expectations strengthened.

Virtualization remains a key pillar of next-generation RAN platforms. At the same time, Cloud RAN projections were lowered in the most recent five-year forecast. Still, Cloud RAN is expected to account for roughly 15 to 20 percent of the total RAN market by 2030.

Although the narrative around Open RAN improving supplier diversity has clearly cooled, the emerging GPU-RAN and software RAN wave is reopening the conversation about non-traditional suppliers playing a larger role in the RAN ecosystem. That said, the base case outlook for mixing and matching vendors remains limited. Multi-vendor RAN is still expected to account for less than 5 percent of total RAN deployments by 2030.

For more information about our RAN and Open RAN coverage, please see https://www.delloro.com/advanced-research-report/openran/