[wp_tech_share]

Below are some of the MWC takeaways related to end-user drivers, AI RAN, 6G, and Open RAN.

 

It is all about the 0.06%

With mobile data traffic slowing and multiple data points suggesting the mobile network has significant excess capacity, the focus is shifting toward uplink (UL) traffic growth (Ericsson estimates it could grow by 3x every five years), network differentiation, and the RAN investment requirements needed to dimension networks for the AI era.
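For context, a compounding sketch of what that uplink estimate implies (the five-year horizon is from Ericsson; the steady-compounding assumption is mine):

```python
# Hedged back-of-envelope: if uplink traffic grows ~3x every five years,
# steady compounding implies the following annual growth rate.
annual_growth = 3 ** (1 / 5) - 1
print(f"Implied annual UL growth: {annual_growth:.1%}")  # ~24.6% per year
```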

Impressive demos featuring robots serving drinks, busy concerts, or the ability to look up information using smart glasses were less interesting than the 0.06%—a number that every investor, financial analyst, and 6G skeptic now seems to have memorized. Against a backdrop of low network utilization, this figure became one of the key focal points of the show.

For those who did not read Ericsson’s June 2025 Mobility Report, 0.06% is the share of total network traffic originating from GenAI. The concern is that while AI is proliferating rapidly, its impact on the mobile network—for now—is negligible.

 

Two different mobile network visions

As we have discussed in various reports and blogs, we believe there are, at a high level, two very different mobile network visions evolving.

The “telecom is dead” narrative is largely driven by the mobile data traffic growth trajectory. The key argument is that humans can only consume so much smartphone video in a day; therefore, mobile data traffic growth will soon plateau, reducing the need for new 6G spectrum.

The “telecom is alive” vision is built on the assumption that it is still early days in the AI era. Even with GenAI accounting for just 0.06% of total traffic, this camp is more optimistic about both human-driven and machine-driven traffic. On the human side, the expectation is that new devices—better suited for environments where data is continuously recorded, analyzed, and uploaded—will emerge and significantly change mobile traffic patterns. Smart glasses are a strong contender, though they are not ready yet.

Machines and Physical AI are also expected to have a profound impact on everyday life for both consumers and enterprises. Fixed networks/Wi-Fi will play an important role, but they will need to be complemented by high-performance cellular connectivity.

These two very different visions are important, as they will shape how operators approach capex, architectural shifts, the timing of 6G, and the need for AI RAN, among other things.

All roads lead to AI RAN

MWC reinforced what we have already communicated:

  1. AI RAN is happening (across all RAN layers)
  2. It will play a major role in the second half of 5G and from the outset with 6G
  3. The base case is that AI-for-RAN will dominate over the forecast period
  4. The GPU RAN conversation is evolving

Operators are no longer asking why GPUs might be relevant, but rather where and when they make sense. Expectations for GPU RAN are still modest, but the topic has clearly moved out of the noise.

AI RAN Segments - Dell'Oro

Please see the recently published AI RAN blog for more details.

 

Open RAN/vRAN takes a backseat, but remains important

AI RAN has moved into the driver’s seat, while Open RAN/vRAN/Cloud RAN was clearly pushed to the backseat at this year’s MWC. Still, even with reduced Open RAN marketing, most conversations and demos support our broader message:

  1. Open FH is increasingly being specified as a baseline capability for next-generation RAN platforms. Ericsson plans to have 160 Open-RAN-proven radios by the end of 2026. Similarly, Nokia’s recently introduced AI-RAN-ready Doksuri radios include compatibility with Open FH standards. Samsung, 1Finity, and NEC are already strong proponents of Open RAN/vRAN.
  2. Supplier diversity is not improving. RAN market concentration continues to increase, and Open RAN is most often deployed in single-vendor configurations.
  3. Multi-vendor Open RAN remains rare, although one European Tier-1 operator believes our forecast may be too pessimistic.
  4. Vendor strategies are shifting. Both Mavenir and NEC have recently revised their RAN strategies. Mavenir is focusing more on small cells and non-terrestrial networks (NTN), while NEC is prioritizing vRAN and Massive MIMO.
  5. The long-term outlook remains positive, though forecasts were revised downward in the January update.
  6. It is still not guaranteed that Open RAN will become part of the 3GPP standards.

Please see the recently published Open RAN blog for more details.

 

Massive MIMO – upside with 6G and FDD

Massive MIMO has been a major success. In addition to capacity gains, operators have used Massive MIMO to improve range and minimize incremental cell-site investments. We estimate that these higher-MIMO configurations accounted for roughly half of 5G RAN revenues between 2018 and 2025.

With upper mid-band now covering more than 55% of the global population (per the Ericsson Mobility Report), growth will become more challenging. So far, the focus has been on upper mid-band TDD, while FDD and 6.4 GHz+ remain largely untapped.

FDD Massive MIMO is gaining attention because:

  1. UL traffic is now growing faster than DL
  2. Technology improvements are helping reduce size and improve form factors

Because of the lower carrier frequencies, size remains a challenge in FDD spectrum. Huawei marketed its 28 kg 1.8+2.1 GHz FDD Massive MIMO unit as the industry’s lightest.

 

6G – no change to consensus outlook

6G was not as prominent as AI RAN, but the discussions and demos we saw reinforce our existing view:

  1. 6G is now inevitable—focus has shifted from if to how and when
  2. Technology ramp is expected around 2029/2030
  3. 4–8.4 GHz is emerging as the “golden spectrum”
  4. The existing macro grid will provide the foundation, with Massive MIMO playing a key role
  5. A more optimistic network vision is improving sentiment—6G is not just about capacity, but also about changing traffic patterns as machines account for a larger share of total traffic
  6. AI-native architecture will be central


OFC 2026 was held a couple of weeks ago, and since then, I have had a chance to reflect on what I saw from the perspective of an Optical Transport industry analyst. The simple conclusion is that the next direction for optical networking is to scale up in density.

100 Gbps ZR/ZR+ is officially a market

I should clarify this headline. 100 Gbps ZR/ZR+ QSFP28 pluggable optics have been shipping for revenue since 4Q 2023, and shipments have ramped nicely. However, the only DSP (Digital Signal Processor) available was a product co-developed by Coherent Corp and Adtran. So, technically, there was only one supplier. This changed during OFC, when two additional suppliers—Cisco Acacia and Arycs Technologies—announced plans to begin shipping their 100 Gbps ZR/ZR+ pluggables with in-house DSPs in mid-2026. Now there are three DSP suppliers, introducing competition and giving customers a choice. It “feels” more like a market. I should add that in the IPoDWDM and Disaggregated WDM report, we forecast that 100 Gbps ZR/ZR+ optical pluggable modules will grow steadily for many years to come.

 

1.6 Tbps ZR/ZR+ pluggable optics were announced… before 800 Gbps ZR/ZR+ volume shipments started

This is the environment we are in right now—things are moving fast, and development cycles are shortening. The good news is that Cisco Acacia announced it has ramped 800 Gbps ZR/ZR+ DSP production, shipping 25,000 DSPs to date, which is a lot compared to other 800 Gbps ZR/ZR+ suppliers. But to put this in perspective, the cumulative industry shipments of 400 Gbps ZR/ZR+ pluggable optics to date, per our ZR/ZR+ optical pluggable shipment tracker, are closer to 1.7 million (FYI, Cisco Acacia has stated it shipped 750 thousand 400 Gbps DSPs to date, most of which were used for ZR/ZR+ optics). So, 800 Gbps ZR/ZR+ is just starting to ramp in production.

The vendors that announced plans to sell 1.6 Tbps ZR/ZR+ optical plugs in an OSFP form factor based on DSPs using a 2 nm foundry process were 1Finity, Ciena, Marvell, and Nokia. The timeline was a little vague, but I believe 1.6 Tbps ZR/ZR+ plugs will be generally available before the end of 2027, with samples as early as 4Q 2026. A couple of items to point out: 1) 1Finity will use a 3rd party DSP, and 2) Cisco Acacia did not make any announcements about 1.6 Tbps optics at OFC. So, I am guessing the company is waiting until ECOC in September.

 

Nokia laid out its anywhere, anyplace, and anybody product strategy

“Anywhere, anyplace, and anybody” is my own interpretation of Nokia’s optical product strategy after listening to the company’s announcements. Nokia held an analyst event at OFC where the company presented all the new products it has in the pipeline. Personally, I liked seeing all the products it has in development, but it could be overwhelming to hear it all at once in under 60 minutes. Luckily for me, the OFC analyst event was my fourth meeting with Nokia where these product ideas were presented, so I understood more of the details that the company executives didn’t have time to explain during the event. In summary, Nokia presented the following new products:

  • Four new coherent DSPs (Huron, Superior, Ontario, and Pacific) are planned. Not one or two, but four! The key here is that three of the four are being developed simultaneously, using the same 2 nm base structure and logic. In other words, the cost to develop two of them (Superior and Ontario) is a fraction of the cost to develop Huron. Nokia’s objective is to create cost- and performance-optimized DSPs for applications that include 3.2 Tbps coherent-lite for campus, 1.6 Tbps ZR/ZR+ pluggable for metro and DCI, and 2.4 Tbps high-performance for long-haul and subsea. And, while not explicitly stated, these DSPs appear designed to meet the differing needs of Nokia’s wide customer base (CSPs, cloud providers, enterprises/public). So, basically, DSPs for anywhere, anyplace, and anybody. There wasn’t much said about Pacific, but I believe it will be a high-performance, 3.2 Tbps-capable DSP operating at 400 Gbaud and will likely be productized later than the first three.
  • All the pluggable optics (QSFP and OSFP) and embedded line cards needed to house the new DSPs in different shapes and forms.
  • Double-sided pluggable transponder. The idea is simple: combine the client optical transceiver and the coherent optical transceiver into a single pluggable module. One use case for this is to convert CPO grey light to colored light.
  • Multi-rail in-line amplifier (ILA). It wasn’t clear what variations the company would offer, but they stated that the highest-density configuration would be 160 rails per rack. The system will begin sampling mid-2026.
  • A Full Band Transponder (also called a full spectrum transponder) that encloses all the client ports, coherent transponder components, and mux/demux into one line card module that fits in an existing GX chassis, delivering a single fiber output with multiple wavelengths. Nokia plans to offer variations of this module with different options and optical engines.

 

Ciena kept things at the system level

Ciena announced several products under development but kept much of the coherent DSP activity under wraps (probably to spread its announcements out between OFC and ECOC). The products included:

  • 1.6 Tbps ZR/ZR+ OSFP pluggable module for IPoDWDM. No comment was made on the DSP to enable it other than that it is a 2 nm DSP.
  • 3.2 Tbps coherent-lite plug for campus. This may leverage the same DSP developed for 1.6 Tbps ZR/ZR+ as Ciena did for its 1.6 Tbps coherent-lite plug.
  • RLS Hyper-Rail, which is a multi-rail ILA. The company plans to offer 300 mm and 600 mm versions, as well as a 5 RU size to fit in existing ILA huts.
  • Full Spectrum Transponder to house all the client ports, coherent transponder components, and mux/demux in a single unit that outputs all the wavelengths through a single fiber connector, enabling a rapid delivery for a full-fiber deployment.
  • Early work on xPO modules was shown. I was surprised, since the MSA was just announced, but I guess the advantage of xPO is that companies can reuse existing components to fill the xPO form factor in a tighter configuration, since xPO has liquid cooling.

 

Optical line systems are REALLY important

As transponder technology approaches Shannon’s Limit, spectral efficiency improvements do little to increase fiber capacity. The realization is that to add more capacity, more fibers will be needed, and each fiber pair requires an ILA every 80 km. In addition, cloud providers are building massive AI data centers that now need to scale across hundreds of kilometers between data centers to form a larger virtualized AI factory. My discussions with some folks at OFC pointed to a need for 20 Pbps of capacity to connect the back-end of one GPU data center to another. This would convert to 390 fiber pairs when connecting 800 Gbps ZR+ optics at each end. The answer to this is a multi-rail system. If a rack supports 128 rails, three racks of multi-rail ILAs will be needed at each site.
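The arithmetic behind this example can be sketched as follows; note that the 64-channels-per-fiber-pair figure is my own assumption, used only to reconcile the 20 Pbps requirement with the roughly 390 fiber pairs cited:

```python
# Rough sketch of the scale-across sizing above. The 64-channel DWDM
# assumption is mine; the article cites only the 20 Pbps requirement,
# the ~390 fiber pairs, and the 128-rails-per-rack density.
capacity_gbps = 20e6           # 20 Pbps of back-end capacity, in Gbps
wavelength_gbps = 800          # 800 Gbps ZR+ optics per wavelength
channels_per_fiber_pair = 64   # assumed DWDM channel count per fiber pair

fiber_pairs = capacity_gbps / (wavelength_gbps * channels_per_fiber_pair)
racks_per_site = fiber_pairs / 128  # multi-rail ILAs at 128 rails per rack

# roughly 390 fiber pairs and about three racks of ILAs per site
print(round(fiber_pairs), round(racks_per_site))
```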

During OFC, four companies announced multi-rail products: Coherent Corp., Ciena, Cisco, and Nokia. Three other companies (Molex, Ribbon, and Smartoptics) plan to look into developing a multi-rail system. Based on the timing of availability, I think commercial shipments of multi-rail products could begin in 2027.

 

Density is the Next Dimension

For decades, the method for scaling optical transport networks was to increase wavelength speeds (Mbps to Tbps) and the usable spectrum in a fiber (C-band to Super C and L-band). However, as we saw at OFC 2026, the next dimension is density—increasing the number of transponders and ILAs that fit in a cubic meter of space. This is the reason for some of the new product announcements:

  • 1.6 Tbps ZR/ZR+ optics
  • xPO form factor pluggable module
  • Full Spectrum Transponder
  • Multi-rail ILA

You can imagine how combining all four product features in an optical network could increase density by around 4 times:

  • Put multiple 1.6 Tbps coherent optics inside a full-band/spectrum transponder unit. Use xPO modules for the client interface instead of 1.6 Tbps OSFP plugs, saving 75% of the front-panel space.
  • Connect the fiber coming out of the full-band/spectrum transponder to a multi-rail ILA that is 75% smaller than a traditional ILA unit.
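A minimal sketch of where the roughly 4x figure comes from, assuming each 75% space saving shrinks its element to a quarter of its former volume (my simplification, not a vendor-stated calculation):

```python
# Illustrative only: mapping the two 75% space savings cited above
# to volumetric-density gains per element.
xpo_volume_fraction = 0.25        # xPO client optics vs. OSFP front panel
multirail_volume_fraction = 0.25  # multi-rail ILA vs. traditional ILA unit

transponder_density_gain = 1 / xpo_volume_fraction   # same Gbps in 1/4 the space
ila_density_gain = 1 / multirail_volume_fraction     # same reach in 1/4 the space

print(transponder_density_gain, ila_density_gain)  # 4.0 4.0
```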

Following OFC 2026, I think the new metric for an optical transport system’s efficiency will be volumetric density (Gbps per cubic meter) rather than spectral efficiency (Gbps per hertz).

 


Very little is written about Huawei’s optical DWDM technology, but that doesn’t mean the company hasn’t made some big waves in the industry. We had the chance to sit down with the Huawei optical team, led by Gavin Gu, at MWC 2026 to learn about their latest coherent DWDM technology. This is what we learned.

Huawei has started shipping its next-generation high-performance coherent DSP in the first quarter of 2026 as an embedded assembly in a muxponder with two ports of 2.0 Tbps coherent wavelengths. The client ports in the module include a mix of 100 Gbps, 400 Gbps, and 800 Gbps. These muxponders are housed in the company’s DWDM systems, namely the OSN 9800 K12 and K36. And of course, Huawei’s new module delivers wavelengths across the entire Super C-band and Super L-band, which is increasingly important as wavelength channels get wider.

As is common in the industry, the headline wavelength speed used to identify a coherent technology is just that: the highest speed it is capable of. One of the benefits of modern coherent line cards is that the symbol rate and modulation can be adjusted to deliver different wavelength speeds and performance. Huawei presented a few of those options in a chart (Figure 1) showing the unregenerated signal distance at different wavelength speeds. Maybe the most important speeds to look at are 2.0 Tbps and 800 Gbps. We say this because the maximum distance at 2.0 Tbps gives us a good sense of the technology, and the maximum distance at 800 Gbps tells us whether the muxponder will meet current customer requirements for unregenerated span lengths when they upgrade networks to 800 Gbps over the next few years.

How does Huawei’s new coherent wavelength technology compare to the rest of the industry? We did a simple comparison between the industry and Huawei (Figure 2). Specifically, we looked at high-performance coherent muxponders that are generally available as of 1Q 2026. Of course, this doesn’t give a deep assessment of Huawei’s technology or even that of the industry. But at a high level, we think it gives a good sense of where the company is at.

Figure 2: Currently Shipping High-Performance Coherent Line Cards

Two key differences show up in this comparison. The first is that Huawei’s DSP uses a larger semiconductor process node, while the industry is at 3 nanometers (nm). This difference puts Huawei at a slight disadvantage at the ASIC level, but the company can still deliver 2.0 Tbps at 80 km, which was proven in a live demonstration. Usually, a DSP using a larger process node would also consume more power. However, in Huawei’s newest muxponder, power consumption is lower at 0.1 Watts/Gbps, compared to the industry average of 0.125 Watts/Gbps. We believe this power advantage is created by Huawei’s extensive in-house development of every component inside a coherent optical module (tunable laser, receiver, TIA, driver, modulator, and DSP), along with its expertise in photonic packaging and manufacturing processes (Huawei has its own state-of-the-art manufacturing, assembly, and test facility for optical modules that we once visited).

Also, using an advanced InP-based modulator with a distributed electrode, internally designed and developed to achieve 30% lower parasitic capacitance, could give the company a power-consumption advantage at the module level, compensating for the DSP’s higher power consumption. Then, at the system level, Huawei also internally develops and manufactures the major components of its optical line systems, including its pump lasers and WSS modules, giving the company greater control over technology performance and time-to-market. As a result, Huawei is constantly innovating its optical system design, from the chip level to the system level.
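A quick sketch of what those per-Gbps figures imply at the wavelength level, using the 2.0 Tbps carrier discussed above (the arithmetic is mine; the W/Gbps figures are from the comparison):

```python
# Per-wavelength power implied by the W/Gbps figures in the comparison,
# evaluated at the 2.0 Tbps carrier speed discussed in the text.
huawei_w_per_gbps = 0.100
industry_w_per_gbps = 0.125
carrier_gbps = 2000

huawei_w = huawei_w_per_gbps * carrier_gbps      # 200 W per wavelength
industry_w = industry_w_per_gbps * carrier_gbps  # 250 W per wavelength
saving = 1 - huawei_w / industry_w               # 20% lower

print(huawei_w, industry_w, f"{saving:.0%}")  # 200.0 250.0 20%
```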

The 2 Tbps technology is now ready, with the first wave of deployments underway. During MWC 2026, Huawei highlighted six successful trials of its 2 Tbps-capable muxponders across Europe, Asia Pacific, the Middle East, and Latin America. The company also proudly announced that the first commercial deployment with a major European Tier-1 communication service provider (CSP) is currently in progress.


In a far-reaching ruling this week, the FCC added all consumer-grade routers produced in foreign countries to its existing Covered List, effectively blocking any new foreign-made router model from receiving FCC equipment authorization. Without FCC authorization, no new foreign-made routers can be imported or sold into the US market. Nearly 100% of consumer-grade routers are manufactured or assembled outside the United States, which means the FCC has significantly limited new router imports and sales until approvals or waivers are granted.

Previously authorized devices are not affected and can continue to be imported, sold, and used. Firmware support for these models is expected to continue through at least March 1, 2027, with a possible extension.

For broadband providers, existing CPE deployment and inventory remain in place. However, the policy introduces uncertainty around the timing and availability of next-generation equipment.

 

Cybersecurity Concerns Could Hurt Broadband Providers

In its decision, the FCC cited cybersecurity concerns, noting that foreign-made routers were implicated in the Volt, Flax, and Salt Typhoon campaigns targeting vital U.S. infrastructure. All of those were serious and very concerted efforts at cyber espionage, and all have been tied back to China. The consumer routers targeted in each of these attacks came from multiple brands, including Cisco, D-Link, Netgear, Asus, and others, all of which generally split manufacturing and assembly between Taiwan, the Philippines, Malaysia, and Vietnam, among other countries. Some of these companies even have US-based corporate headquarters or major US sales offices. But for the FCC, the focus of its decision is not on corporate nationality, but on the country of production.

For broadband operators, the implications could be meaningful. Many ISP-supplied gateways and mesh systems are assembled by global original design manufacturers (ODMs), including Sercomm, Arcadyan, Askey, Compal, and Wistron NeWeb. There is currently not enough domestic manufacturing capacity for residential CPE and routers to fill the supply gap ISPs now face, since the vast majority are manufactured in other countries.

Cable operators running managed Wi-Fi programs—such as Comcast’s xFi and Charter’s Spectrum Wi-Fi—are particularly exposed, since those programs depend on a steady pipeline of certified gateway hardware to provision new subscribers and replace aging CPE in the field. A freeze on new model authorizations could not only limit the availability of new DOCSIS 4.0 and Wi-Fi 7 units, but also limit the new revenue associated with the managed Wi-Fi services these operators are providing.

The FCC established a Conditional Approval pathway, which may require disclosure of management structure, supply chain details, and a potential plan for US manufacturing. However, there is no published timeline for how long that process will take, and no precedent for how many applications the relevant agencies can process in parallel.

Few, if any, brands known for consumer-grade routers currently build products stateside. Standing up domestic manufacturing lines—even for final assembly—is a capital-intensive, multi-year undertaking. Beyond the time it would take to get domestic manufacturing up and running is the cost of doing so. CPE margins are incredibly slim to begin with, which makes it almost impossible that these companies would even consider onshoring manufacturing, where input costs are significantly higher than in Southeast Asia.

In the near term, the US residential router market will now stratify in ways that may not serve the underlying security objectives. Inventory of previously-authorized models will be rationed, prices will rise, and innovation cycles — particularly the transition to Wi-Fi 7 and Wi-Fi 8 — will slow in the U.S. market relative to the rest of the world. Whether that outcome makes American networks more secure, or simply more expensive, is an open question.


AI RAN is moving to the center court. While operators have not fundamentally changed how they think about their RAN roadmaps—openness, intelligence, automation, and virtualization remain the core pillars of next-generation RAN platforms—the visibility and adoption of these technologies vary significantly. In the early phase of 5G, Open RAN and vRAN dominated the conversation. Today, AI RAN is the shiny object.

Events such as MWC 2026 Barcelona and NVIDIA GTC reinforced the message we have communicated for some time, namely that AI RAN is already happening. At the same time, the GPU conversation is shifting. Looking ahead, AI RAN is expected to see broad adoption across the RAN in the latter half of the 5G cycle and from the outset of 6G.

All roads lead to increased adoption of AI RAN. Differences will emerge across deployment models, compute architectures, hardware choices, functional splits, and underlying technologies.

AI RAN Segments - Dell'Oro

At present, the majority of the AI RAN market is driven by distributed AI-for-RAN solutions focused on improving performance and efficiency, often leveraging existing 5G infrastructure. Vendors such as Huawei and ZTE have collectively shipped more than 0.6 M AI-enabled boards/plug-ins, underscoring that AI RAN is already happening at scale.

One of the key takeaways from MWC Barcelona is that nearly all RAN roadmaps—across both large and smaller vendors—now incorporate AI RAN capabilities across the full RAN stack, with a focus on AI-for-RAN. And it is not just the baseband—suppliers are now bringing intelligence into every RAN layer, including the radios. Ericsson’s launch of ten AI-ready radios featuring in-house silicon with neural network accelerators is a case in point. The question is no longer if AI RAN will happen, but rather how, what, where, and when.

Ericsson AI RAN
Source: Ericsson

 

Dell’Oro’s long-term view of next-generation RAN has remained broadly intact. Events like MWC 2026 and NVIDIA GTC have done little to alter the underlying trajectory. The likelihood that AI RAN, Cloud RAN, and multi-vendor RAN will play major roles in the second half of 5G and the early 6G era remains high, moderate, and low, respectively. According to our latest forecast update, AI RAN is expected to surpass $10 B and account for roughly one-third of the total RAN market by 2029 (this is not new revenue).
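A back-of-envelope reading of those two figures (the implied market total is my inference from the numbers above, not a published forecast):

```python
# If AI RAN exceeds $10 B and accounts for roughly one-third of total
# RAN by 2029, the implied total RAN market is on the order of $30 B.
ai_ran_revenue_b = 10   # AI RAN revenue, $ billions
ai_ran_share = 1 / 3    # AI RAN share of the total RAN market

implied_total_ran_b = ai_ran_revenue_b / ai_ran_share
print(round(implied_total_ran_b))  # 30
```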

Within the AI RAN domain, the prospects for GPU-RAN (and AI-and-RAN) are improving—still small, but no longer negligible. This shift reflects both low starting expectations and a gradual change in sentiment. The conversation is moving from outright skepticism to cautious curiosity. Much of this momentum is being driven by NVIDIA’s continued push and its vision that the world’s ~10 million macro sites could evolve into more than just base stations. As Jensen Huang put it during his GTC keynote: “That base station…is going to become an AI infrastructure platform.”

Early operator progress—from T-Mobile, SoftBank, and Indosat—combined with Nokia’s recent reiteration of its AI-RAN roadmap, is reinforcing this shift. Samsung and 1Finity, meanwhile, are exploring whether GPUs could make sense to diversify their computing platforms.

Source: Nokia

 

Part of the renewed interest in AI RAN—and GPU RAN specifically—stems from a broader realization: technological change is accelerating at a much faster pace than during the 4G-to-5G transition. This shift is reshaping how the industry views the role of mobile networks, the distribution of AI inference, and the trade-offs between hardware-based and software-defined architectures.

At the same time, “physical AI” is becoming more tangible. Concepts that once felt like science fiction—such as robots assisting with cooking or walking children to school—are now increasingly plausible in the near term.

That said, operators remain cautious for now about GPU RAN and broad-based AI inference distribution, even as skepticism gradually eases as the ecosystem matures. The constraints are structural. RAN deployments operate under tight power budgets, strict cost controls, and massive scale requirements. These factors make it challenging to justify deploying power-intensive compute at every cell site.

So, concerns persist about the performance-per-watt gap between GPUs and custom silicon, as well as the practicality and need to support non-telco workloads at both D-RAN and C-RAN sites—particularly in D-RAN deployments. For example, the SoftBank/Ericsson robot assistance demo at MWC operated with latency requirements of around 100 ms, which allows for centralized AI inference, with compute resources located in a data center using the User Plane Function.

In other words, AI RAN is moving from hype toward reality. While trade-offs across AI inference distribution needs, flexibility, performance, energy efficiency, TCO, and TTM will shape adoption paths over the near-term and long-term, the overall direction is clear: AI will become an integral part of every layer of the RAN.

Base-case projections suggest that non-GPU RAN will dominate AI RAN over the forecast period, reflecting the ability to upgrade existing infrastructure, the constraints at the cell site, and the need for multi-purpose tenancy. This suggests NVIDIA still faces a meaningful challenge if it aims to position itself not only as the “inference king,” but also as the “AI RAN king.”

At the same time, the conversation is clearly evolving. Operators are no longer asking why GPUs might be relevant, but rather where and when they make sense. If NVIDIA succeeds in expanding the role of the RAN—from a single-purpose connectivity layer into a distributed AI platform—the long-term opportunity could be significantly larger than what is currently reflected in our base-case assumptions. As Amara’s Law suggests, the risk may not be overestimating the short-term impact of AI RAN, but underestimating the demand for more distributed intelligence over the long-term.