[wp_tech_share]

The use of intelligence in the RAN is not new—both 4G and 5G deployments rely heavily on automation and intelligence to replace manual tasks, automate the RAN, manage increasing complexity, enhance performance, and control costs. What is new, however, is the rapid proliferation of AI and generative AI, along with a shifting mindset toward leveraging AI in cellular networks. More importantly, the scope of AI’s role in the RAN is expanding, with operators now looking beyond efficiency gains and performance improvements, cautiously exploring whether AI could also unlock new revenue streams. In this blog, we will review the scope of AI RAN and the progress made to date.

AI RAN Vision

Considering the opportunities with AI RAN, its evolving scope, the proliferation of groups working on AI RAN, the challenges of measuring its gains, and the absence of unified frameworks in 3GPP, it’s not surprising that marketing departments have some flexibility in how they interpret and present the concept of AI RAN.

Still, some common ground exists even with multiple industry bodies (3GPP, AI-RAN Alliance, ETSI, NGMN, O-RAN Alliance, TIP, TM Forum, etc.) and key ecosystem participants working to identify the most promising AI RAN opportunities. At a high level, AI RAN is more about efficiency gains than new revenue streams. There is strong consensus that AI RAN can improve the user experience, enhance performance, reduce power consumption, and play a critical role in the broader automation journey. Unsurprisingly, however, there is greater skepticism about AI’s ability to reverse the flat revenue trajectory that has defined operators throughout the 4G and 5G cycles.

The 3GPP AI/ML activities and roadmap are mostly aligned with the broader efficiency aspects of the AI RAN vision, primarily focused on automation, management data analytics (MDA), SON/MDT, and over-the-air (OTA) related work (CSI, beam management, mobility, and positioning).

The O-RAN Alliance builds on its existing thinking and aims to leverage AI/ML to create a fully intelligent, open, and interoperable RAN architecture that enhances network efficiency, performance, and automation. This includes embedding AI/ML capabilities directly into the O-RAN architecture, particularly within the RIC/SMO, and using AI/ML for a variety of network management and control tasks.

Current AI/ML activities align well with the AI-RAN Alliance’s vision to elevate the RAN’s potential with more automation, improved efficiencies, and new monetization opportunities. The AI-RAN Alliance envisions three key development areas: 1) AI and RAN – improving asset utilization by using a common shared infrastructure for both RAN and AI workloads, 2) AI on RAN – enabling AI applications on the RAN, 3) AI for RAN – optimizing and enhancing RAN performance. Or from an operator standpoint, AI offers the potential to boost revenue or reduce capex and opex.

TIP is actively integrating AI/ML into its Open RAN vision, focusing on automating and optimizing the RAN using AI/ML-powered rApps to manage and orchestrate various aspects of the network, including deployment, optimization, and healing.

While operators generally don’t consider AI the end destination, they believe more openness, virtualization, and intelligence will play essential roles in the broader RAN automation journey.

What is AI RAN

AI RAN integrates AI and machine learning across various aspects of the RAN domain. For the broader AI RAN vision, the boundaries between infrastructure and services are not clearly defined, and interpretations vary. The underlying infrastructure (location, hardware, software, interface support, tenancy) varies depending on multiple factors, such as the latency and capacity requirements for a particular use case, the value-add of AI, the state of existing hardware, power budget, and cost.

AI-RAN, aka the AI-RAN Alliance version of AI RAN, is a subset of the broader AI RAN opportunity, reflecting AI RAN implementations utilizing accelerated computing and fully software-defined/AI-native principles. AI-RAN enables the deployment of RAN and AI workloads on a shared, distributed, and accelerated cloud infrastructure. It capitalizes on the demand for AI inferencing and converts the RAN infrastructure from a single-purpose to multi-purpose cloud infrastructure (NVIDIA AI-RAN Paper, March 2025).

While the ideal reference solution is AI-native and cloud-native, AI RAN capabilities can still be delivered before that vision is fully realized. The majority of AI RAN deployments to date are implemented using existing hardware.

Why integrate AI and RAN

With power and capex budget requirements rising on the RAN priority list, one of the fundamental questions now is where AI can add value to the RAN without breaking the power budget or growing capex. It is a valid question. After all, RAN cell sites have been around for 40+ years, and the operators have had some time to fine-tune the algorithms to improve performance and optimize resources. AI can make sense in the RAN, but given preliminary efficiency gains, it will not be helpful everywhere.

Topline growth expectations are muted at this juncture. However, operators are optimistic that integrating AI and RAN will produce a number of benefits:

  • Reduce opex/capex
  • Improve performance and experience
  • Boost network quality
  • Lower energy consumption

AI can help introduce efficiencies that lower the ongoing costs of deploying and managing the RAN. According to Ericsson, intelligent RAN automation can help reduce operator opex by 60%. AI will play an important role here, accelerating the automation transition, simplifying complexity, and curbing opex growth. Most greenfield networks are clearly moving toward new architectures that are more conducive to automation. Rakuten Mobile operates 350K+ cells with an operational headcount of around 250 people, and the operator claims an 80% reduction in deployment time through automation. China Mobile reported a 30% reduction in MTTR using Huawei’s AI-based O&M. Nokia has seen up to an 80% efficiency gain in live networks utilizing machine learning in RAN operations.

The RAN automation journey will likely take longer with the existing networks. The average brownfield operator today falls somewhere between L2 (partial autonomous network) and L3 (conditional autonomous network), with some way to go before reaching L4 (high autonomous network) and L5 (full autonomous network). Even so, China Mobile recently reported it remains on track to activate its L4 autonomous networking on a broader scale in 2025. Vodafone is exploring how AI can help to automate multi-vendor RAN deployments, while Telefonica is implementing AI-powered optimization and automation in its RAN network. According to the TM Forum, 61% of the telcos are targeting L3 autonomy over the next five years.

AI can help improve RAN performance by optimizing various RAN functions, such as channel estimation, resource allocation, and beamforming, though the upside will vary. Recent activity shows that operators can realize gains on the order of 10% to 30% when using AI-powered features, often with existing hardware. For example, Bell Canada, using Ericsson’s AI-native link adaptation, increased spectral efficiency by up to 10 percent and downlink throughput by up to 20 percent, improving capacity and connection reliability.

Initial findings from Smartfren’s (Indonesia) commercial deployment of ZTE’s AI-based computing show a 15% improvement in user experience. There could be more upside as well: DeepSig demonstrated at MWC Barcelona that its AI-native air interface, OmniPHY, running on the NVIDIA AI Aerial platform, could achieve up to 70% throughput gains in some scenarios.

With the RAN accounting for around 70% of energy consumption at the cell site and roughly 1% to 2% of global electricity consumption (ITU), the intensification of climate change, together with the current site power trajectory, explains the increased focus on energy efficiency and CO2 reduction. Preliminary findings suggest that AI-powered RAN can play a pivotal role in curbing emissions, cutting energy consumption by 15% to 30%. As an example, Vodafone UK and Ericsson recently showed at a trial site in London that daily 5G radio power consumption can be reduced by up to a third using AI-powered solutions. Verizon shared field data indicating 15% cost savings with Samsung’s AI-powered energy savings manager (AI-ESM). Similarly, Zain estimates that the AI-powered energy-saving feature provided by Huawei can reduce power consumption by about 20%, while Tele2 believes that smarter AI-based mobile networks can reduce energy consumption in the long term by as much as 30% to 40%, while simultaneously optimizing capacity.

AI RAN Outlook

Operators are not revising their topline growth or mobile data traffic projections upward as a result of AI growing in and around the RAN. Disappointing 4G/5G returns and the failure to reverse the flattish carrier revenue trajectory help explain the increased focus on what can be controlled — AI RAN is currently all about improving performance/efficiency and reducing opex.

Since the typical gains demonstrated so far are in the 10% to 30% range for specific features, the AI RAN business case will hinge crucially on the cost and power envelope—the risk appetite for growing capex/opex is limited.
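To see why the cost and power envelope dominates the business case, a back-of-the-envelope payback sketch helps. Everything below is an illustrative assumption, not operator data or a published model:

```python
def payback_years(incremental_capex: float,
                  annual_opex_savings: float,
                  annual_power_cost_delta: float) -> float:
    """Years to recoup an AI RAN investment (simplified, illustrative).

    annual_power_cost_delta > 0 means the added AI compute increases the
    power bill; a negative value means AI-driven energy savings dominate.
    """
    net_annual_benefit = annual_opex_savings - annual_power_cost_delta
    if net_annual_benefit <= 0:
        return float("inf")  # efficiency gains never cover the added cost
    return incremental_capex / net_annual_benefit


# Hypothetical site: $50K of extra capex, $20K/yr of opex savings from
# AI features, and $5K/yr of additional power cost for the new compute.
years = payback_years(50_000, 20_000, 5_000)
```

The point of the sketch is that modest 10% to 30% feature gains only pencil out when the incremental capex and power draw stay small, which is exactly why most deployments to date reuse existing hardware.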

The AI-RAN business case using new hardware is difficult to justify for single-purpose tenancy. However, if the operators can use the resources for both RAN and non-RAN workloads and/or the accelerated computing cost comes down (NVIDIA recently announced ARC-Compact, an AI-RAN solution designed for D-RAN), the TAM could expand. For now, the AI service provider vision, where carriers sell unused capacity at scale, remains somewhat far-fetched, and as a result, multi-purpose tenancy is expected to account for a small share of the broader AI RAN market over the near term.

In short, improving something already done by 10% to 30% is not overly exciting. However, suppose AI embedded in the radio signal processing can realize more significant gains or help unlock new revenue opportunities by improving site utilization and providing telcos with an opportunity to sell unused RAN capacity. In that case, there are reasons to be excited. But since the latter is a lower-likelihood play, the base case expectation is that AI RAN will produce tangible value-add, and the excitement level is moderate — or as the Swedes would say, it is lagom.

[wp_tech_share]

On May 14th, I had the opportunity to attend Fastly’s Xcelerate 2025 customer roadshow in Los Angeles. It was a full day of customer case studies, partner demonstrations, and executive briefings, all of which delivered a clear message: Fastly is in the midst of a transformation from a traditional content delivery network vendor to an integrated edge services vendor that aims to reduce operational friction and operating expenses while opening new avenues for adopting AI applications. The three most prominent themes follow.

A Software-Defined Edge Platform Enables Distributed Cloud Networking Strategies

At Dell’Oro, I’ve been championing Distributed Cloud Networking. It is an architecture that couples the user edge, the wide-area middle mile, and the application edge, using a software-defined control plane that spans multiple clouds and networks. Although still emerging, Distributed Cloud Networking aims to harmonize routing, security, and compute policies wherever applications run. Fastly’s platform vision aligns tightly with this model. Executives described a composable stack that integrates content delivery, DDoS mitigation, Web Application Firewall, bot controls, object storage, WebAssembly compute, and observability behind a single set of Terraform modules and APIs.

Customers emphasized the operational upside. For example, customers credited Fastly’s new production-equivalent “staging edge,” where they can trial configurations and code before promotion. This safeguard has virtually eliminated rollbacks, enabling WAF users to ship approximately one-third more features each year. Moreover, flexible deployment options—such as edge points of presence (POPs), Fastly-managed environments in Amazon Web Services, or on-premises agents—support data-residency mandates without disrupting toolchains.

However, risks revolve around platform dependence. Enterprises that prefer best-of-breed tools may find the breadth of APIs to be demanding and the exit costs uncertain. Competitor Akamai continues to expand into core cloud services, while Cloudflare layers networking and security features at speed. We see enterprises benchmarking onboarding friction, roadmap transparency, and contractual agility before entrusting mission-critical workloads to any single vendor.

Offloading AI Workloads Closer to Users for Better Performance and Cost

Artificial intelligence was front and center at Xcelerate—less an aspiration and more an everyday workload. In a joint demo, Google and Fastly demonstrated how a semantic-aware edge cache handles Gemini prompts, with the cached reply being returned in approximately half the time of a cold request and using noticeably fewer tokens. For enterprises, that means faster pages and smaller AI bills without involving origin GPUs.

What makes the example interesting is where it happens. By utilizing an intelligent fabric, Google and Fastly can direct traffic to the nearest inference node, then maintain popular responses in place. It is a textbook illustration of Distributed Cloud Networking’s promise: policies and data move together through a programmable cloud networking fabric, allowing application teams to gain speed while finance teams experience predictable costs.
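Mechanically, a semantic cache keys on embedding similarity rather than exact string matches, so a paraphrased prompt can still hit. The sketch below is a toy illustration, not Fastly’s or Google’s implementation: it substitutes a hashing-based bag-of-words vector for a real embedding model, and all names and prompts are hypothetical.

```python
import hashlib
import math


def embed(text: str, dims: int = 64) -> list:
    """Toy bag-of-words hashing vector; a stand-in for a real embedding model."""
    vec = [0.0] * dims
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))


class SemanticCache:
    """Return a cached reply when a new prompt is 'close enough' to a past one."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_reply)

    def get(self, prompt: str):
        query = embed(prompt)
        best_reply, best_sim = None, 0.0
        for vec, reply in self.entries:
            sim = cosine(query, vec)
            if sim > best_sim:
                best_reply, best_sim = reply, sim
        return best_reply if best_sim >= self.threshold else None

    def put(self, prompt: str, reply: str):
        self.entries.append((embed(prompt), reply))


# Hypothetical usage: a near-duplicate prompt is served from the cache,
# skipping the inference node entirely.
cache = SemanticCache()
cache.put("summarize q3 earnings call", "Revenue grew 8 percent ...")
hit = cache.get("summarize the q3 earnings call")  # similar enough to hit
```

A production system would use a real embedding model and an approximate nearest-neighbor index, but the economics are the same: every hit avoids a GPU inference round trip, which is where the latency and token savings in the demo come from.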

Shutterstock, the global stock-image and media marketplace, echoed the theme on the training side. Its video-analysis pipeline streams tens of millions of clips across AWS, Azure, and Google, while keeping preprocessing and vector embedding at edge points of presence. Running the heavy lifting in Fastly’s fabric enables Shutterstock to maintain steady throughput across clouds and avoid cross-region egress surprises—a real-world proof that Distributed Cloud Networking fabrics improve both performance and cost control for data-intensive AI jobs.

Challenges remain—semantic caching is young, model versions evolve quickly, and data-residency rules vary—but the direction is clear. Vendors, including Akamai and NVIDIA, are racing to offer similar edge-GPU overlays. Therefore, enterprises should pair any rollout with tight version control, automated rollback, and transparent governance to prevent the benefits from slipping away.

Edge Caching + Integrated Storage: Controlling Spend While Powering “The Best of the Internet”

Edge caching and integrated storage may not be as eye-catching as a software-defined edge-services platform or AI offload. Yet, when traffic surges and the finance team wants lower IT spend, their combination of uptime insurance and cost control often matters most.

For many customers, one of the most significant benefits of Fastly’s integrated object storage is the cost reduction it enables while serving massive amounts of data without interruption. Keeping hot data at the edge wipes out per-gigabyte cloud egress fees and shortens time-to-first-byte:

  • Fox Sports hit a 99.97% cache-hit ratio during Super Bowl 2025, offloading terabits from its origin and avoiding a game-day cloud-bill spike.
  • Shutterstock migrated 35 PB of images once and now serves them approximately 40% faster, while eliminating a six-figure monthly cloud egress line item.
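The arithmetic behind these savings is simple. The sketch below uses assumed prices and volumes (not any customer’s actual bill or Fastly’s pricing) to show how a high cache-hit ratio translates into avoided origin egress fees:

```python
def egress_savings_usd(total_gb: float, cache_hit_ratio: float,
                       origin_egress_per_gb: float) -> float:
    """Origin egress fees avoided because cache hits never leave the edge.

    Deliberately simplified: real invoices also include edge delivery,
    per-request charges, and storage, so treat this as directional only.
    """
    offloaded_gb = total_gb * cache_hit_ratio
    return offloaded_gb * origin_egress_per_gb


# Hypothetical event: 2 PB delivered at a 99.97% cache-hit ratio, with an
# assumed $0.05/GB origin egress rate (an illustrative list price).
saved = egress_savings_usd(total_gb=2_000_000, cache_hit_ratio=0.9997,
                           origin_egress_per_gb=0.05)
```

Even at this rough level, the model shows why cache-hit ratio is the variable operators obsess over: at multi-petabyte scale, each fraction of a percent of offload is real money.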

Cost efficiency is not reserved for media giants. Wildfire-alert nonprofit Watch Duty routinely saw incident spikes, ranging from 20,000 to 100,000 requests per second, during the devastating fires in Los Angeles in early 2025. Fastly provided Watch Duty capacity at a steep discount—an embodiment of the company’s aim to “Power the best of the internet.”

Whether it’s a global streaming platform or a community-safety service, the message was clear: every byte that stays in edge storage is one less byte paid for twice—first in bandwidth and then in user patience.

Conclusion

Fastly Xcelerate 2025 reinforced its commitment to an integrated edge platform that aligns with our vision for Distributed Cloud Networking. Customers repeatedly praised Fastly’s engineers for extracting every microsecond of performance and its high-touch support teams for restoring service stability when seconds mattered most—an operational culture evident in Fox’s Super Bowl war room and Watch Duty’s wildfire surge. We will continue tracking forthcoming roadmap milestones against the backdrop of our Distributed Cloud Network report, while evaluating Fastly tactically in our application security and delivery coverage within the quarterly Network Security report. Further developments deserve close observation.

[wp_tech_share]

Charter’s proposed $34.5 billion acquisition of Cox Communications reflects just how much the US broadband landscape has changed. The near-nationwide availability of fixed wireless access (FWA), combined with expanding fiber footprints, has put cable operators on the defensive as they struggle with net broadband subscriber losses. Back in September 2024, I detailed the situation in a blog titled, “US Telcos Betting on Convergence and Scale to End Cable’s Broadband Reign”:

Going forward, the 1-2 punch of FWA and fiber will allow the largest telcos to have substantially larger broadband footprints than their cable competitors. Combine that with growing ISP relationships with open access providers and these telcos can expand their footprint and potential customer base further. And by expanding further, we don’t just mean total number of homes passed, but also businesses, enterprises, MDUs (multi-dwelling units), and data centers. Fiber footprint is as much about total route miles as it is about total passings. And those total route miles are, once again, increasing in value, after a prolonged slump.

For cable operators to successfully respond, consolidation likely has to be back on the table. The name of the game in the US right now is how to expand the addressable market of subscribers or risk being limited to existing geographic serving areas. Beyond that, continuing to focus on the aggressive bundling of converged services, which certainly has paid dividends in the form of new mobile subscribers.

Beyond that, being able to get to market quickly in new serving areas will be critical. In this time of frenzied buildouts and expansions, the importance of the first mover advantage cannot be overstated.

So, maybe the specific combination of Charter and Cox was a surprise. But the notion that cable operators had to fight back by getting bigger was certainly not.

Network Upgrade Plans Likely to Stay the Same

Of course, there is no guarantee that this transaction will ultimately be approved. So, while the trade and legal reviews are getting underway, both operators still face competitors that are likely to accelerate their own marketing and sales initiatives designed to attract subscribers from the latest “corporate behemoth,” which only wants to stamp out competition and raise your broadband and mobile service prices. Charter and Cox, even though they have slightly different access network upgrade plans, will continue along their individual paths to raise speeds and improve signal quality across their HFC plant.

Fortunately for both operators, the long-term vision of their access networks is closely aligned, even if the timing differs slightly. It’s worth a quick look at how Charter and Cox are both similar and different when it comes to their broadband access network strategies:

  • Charter and Cox are moving forward with Distributed Access Architectures (DAA) using vCMTS and Remote PHY Devices. Charter is in the early stages of their RPD deployments, while Cox has converted nearly all of its existing optical node base to Remote PHY. Cox had historically relied on Cisco for its M-CMTS (Modular CMTS) platforms, an early precursor to Remote PHY, and subsequently took the next evolutionary step of homing RPDs to the existing CCAP installed base. While that did allow the operator to move to Ethernet transport between the headend and RPDs, the benefits of moving to a vCMTS architecture weren’t fully realized, which is why Cox is now working with Vecima’s vCMTS platform.
  • Both Charter and Cox believe in using the Extended Spectrum flavor of DOCSIS 4.0, though Charter expects to deploy DOCSIS 4.0 earlier than Cox. This is because Cox is already running the vast majority of its network at 1 GHz with a mid-split architecture, while Charter is in the process of upgrading its usable spectrum from 750 MHz to 1.2 GHz (using 1.8 GHz amplifiers running at 1.2 GHz) using a high-split architecture. According to Charter CEO Chris Winfrey, “In terms of the network, Cox is largely through an upgrade for what we would call a mid-split upgrade…There’s no rush for us to go try to harmonize that into a high split footprint.” Winfrey also said, “In our planning, the eventual conversion to DOCSIS 4.0 with DAA doesn’t take place for years and it’ll be done at a lower cost as a result of them having already completed their mid-split and because of the scale that we’ll have at the time that we’re completing our own DOCSIS 4.0 and DAA upgrades.” In other words, Cox has a longer runway with its current mid-split, 1 GHz architecture delivering 2 Gbps downstream speeds. So, should the merger go through, the Cox systems would be delivering similar downstream speeds as the upgraded Charter systems, but would likely have reduced upstream capacity relative to the upgraded, high-split systems.
  • Charter is also a proponent of GAP (Generic Access Platform) nodes and has begun deploying these modular nodes in its network to replace aging and discontinued units. Cox, on the other hand, has made no mention of GAP nodes and likely doesn’t need to in the short-term, given that it spent a good deal of capex years ago to upgrade to 1 GHz. Even Charter isn’t deploying GAP nodes universally across its network, as it will continue to source GAP and non-GAP nodes from multiple vendors.
  • When it comes to vCMTS, Charter has hinted at having cores from multiple vendors, though to date it has only publicly announced Harmonic as its vCMTS supplier. Meanwhile, Cox just recently announced its selection of Vecima’s Entra vCMTS, which makes sense given its deployment of Vecima RPDs. But Vecima RPDs are also being deployed at Charter. So, does that mean that Vecima stands to win a share of Charter’s vCMTS business, as well? Although RPD and vCMTS interoperability is expected and is in deployment at other operators, Charter has made note of some interoperability challenges within its network. Thus, it utilizes Falcon V’s software as a testbed for vCMTS and RPD interoperability, a relationship made more direct by Vecima’s acquisition of Falcon V.
  • When it comes to fiber deployments, Charter and Cox have different technology choices. Charter continues to use 10 Gbps DPoE (DOCSIS Provisioning over EPON) for both its RDOF-funded projects and its Greenfield fiber builds. In contrast, Cox was an early adopter of both GPON and the newer XGS-PON technology. As a result, Cox has a significantly higher percentage of PON (Passive Optical Network) connections compared to Charter in terms of total homes and businesses served.
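The upstream gap between the mid-split and high-split plants discussed above falls directly out of the band plans. The sketch below uses the standard DOCSIS upstream split boundaries (5–85 MHz for mid-split, 5–204 MHz for high-split); the spectral efficiency figure is an illustrative round number, not a measured plant value:

```python
def usable_capacity_gbps(spectrum_mhz: float, bits_per_hz: float) -> float:
    """Rough capacity estimate: usable bandwidth x spectral efficiency."""
    return spectrum_mhz * 1e6 * bits_per_hz / 1e9


# Standard DOCSIS upstream band edges (MHz); ~8.5 bit/s/Hz is an assumed
# OFDMA efficiency for illustration only.
EFFICIENCY = 8.5
mid_split_up_gbps = usable_capacity_gbps(85 - 5, EFFICIENCY)    # 5-85 MHz
high_split_up_gbps = usable_capacity_gbps(204 - 5, EFFICIENCY)  # 5-204 MHz
```

Under these assumptions the high-split plant has well over twice the raw upstream capacity of the mid-split plant, which is the crux of the Charter–Cox upstream asymmetry noted above.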

It goes without saying that there are many variables from a technology perspective surrounding this proposed transaction that are likely to have profound implications on the cable outside plant and headend vendor landscape. The combination of two of the largest cable operators in the world ultimately reduces the number of opportunities for unique vendors, thereby furthering consolidation among those vendors. Should this deal move forward, I fully expect there to be some consolidation among equipment vendors as they look to grow their share at the new combined company.

[wp_tech_share]

Earlier this month, San Francisco’s Moscone Center buzzed with energy as 45,000 security professionals convened for the RSAC 2025 Conference. Across scheduled briefings, product launches, and crowded corridors, one reality became clear. Enterprises are rebuilding their cyber defenses for a cloud-first era characterized by geopolitical tension, architectural complexity, and non-stop release cycles. Attack surfaces expand while budgets tighten, making every architectural bet consequential. Drawing on my 26 analyst meetings at RSAC 2025, this post distills three key forces that are guiding investments and supplier roadmaps. The conference floor affirmed that cybersecurity strategy is now inseparable from business resilience and national policy.

Sovereignty Moving Center Stage

Data location, once ranked low on vendor scorecards, is now becoming table stakes. Multinational buyers are increasingly demanding that security controls, telemetry, and even help-desk staff remain within chosen jurisdictions. Regulators are hardening their stance. The European Union Data Act, Japan’s amended APPI, and parallel proposals in Latin America will codify expectations of sovereignty and impose meaningful penalties for non-compliance.

Vendors are responding with dual-provider architectures, modular key-management offerings, and portals that verify locality compliance in real time. Another example is how security service edge (SSE), web application firewalls (WAF), and zero-trust services are providing or will provide options to pin policy engines to specific countries while routing inspection traffic only through approved data centers.

The net result is that early adopter enterprises are beginning to update their request-for-proposal templates. Jurisdictional flexibility will differentiate leading vendors from laggards, and late adopters risk costly retrofits as upcoming regulations become even stricter.

Security Becomes an Everywhere Fabric

Perimeter defense has dissolved. Protection now forms an enforcement fabric that spans top-of-rack switches, smart NICs, private cloud gateways, and microsegmentation agents embedded within every workload. We are on the verge of 800G networking systems that push line-rate policy checks into switching silicon, while lightweight software already extends native host filters for east-west inspection.

This convergence blurs product lines. The common objective is to deliver uniform policy logic at the nearest feasible hop, thereby reducing lateral movement risk without requiring expensive data center redesigns. Hardware offload further reduces latency and power consumption, enabling organizations to meet aggressive carbon reduction targets.
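“Uniform policy logic at the nearest feasible hop” means the same rule evaluation runs whether the enforcement point is a switch ASIC, a smart NIC, or a host agent. A minimal, vendor-neutral sketch of tag-based (rather than IP-based) first-match evaluation; all names and rules here are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    src_tag: str   # workload labels, not IPs: tags follow the workload
    dst_tag: str
    port: int
    action: str    # "allow" or "deny"


def evaluate(rules, src_tag: str, dst_tag: str, port: int,
             default: str = "deny") -> str:
    """First-match evaluation with a default-deny posture; the same logic
    can be compiled into switching silicon, offloaded to a smart NIC, or
    run by a host agent for east-west inspection."""
    for rule in rules:
        if (rule.src_tag, rule.dst_tag, rule.port) == (src_tag, dst_tag, port):
            return rule.action
    return default


# Hypothetical policy: the web tier may reach the database on 5432 only.
policy = [Rule("web", "db", 5432, "allow")]
```

Because the policy is expressed against workload tags, it can be enforced at whichever hop first sees the flow, which is what limits lateral movement without redesigning the data center.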

The rise of generative-AI workloads adds urgency. Vendors warned that 2-kilowatt GPUs, liquid cooling, and 800G links create new lateral movement paths, making switch-resident firewalls and host eBPF agents mandatory safeguards for model pipelines, vector databases, and inference gateways.

Operational complexity remains the hurdle. An everywhere fabric only works when application flows are mapped and kept up to date. Early adopters emphasized the importance of domain-specific language models and graph-based visualization in maintaining context as environments evolve. Vendors that supply open APIs, distributed telemetry lakes, and workflow integrations will win mindshare.

Consolidation and Managed Security Services Accelerate

Console fatigue is real. Chief information security officers described staff juggling dozens of dashboards, overlapping agents, and unpredictable subscription bills. With headcount flat, many organizations view platform consolidation or managed delivery as the only viable escape.

RSAC exhibitors leaned into that demand. Several vendors introduced unified licensing that bundles networking, cloud access, endpoint protection, and security operations into a single contract. Managed service providers unveiled outcome-based agreements promising defined detection times, integrated compliance reporting, and one-hour onboarding for new locations. New alliances between telecom carriers and hyperscale clouds aim to embed managed detection natively within connectivity bundles.

Economics also favors consolidation as volume commitments push scale advantages upstream into vendor roadmaps. During analyst sessions, suppliers acknowledged that cross-product telemetry lakes enhance threat-model accuracy more than isolated engines, further strengthening the business case.

Dell’Oro analysis highlights partner-delivered SASE (Secure Access Service Edge) as a key enabler for expanding the reach of SASE into smaller enterprises that lack the necessary technology expertise and personnel. Renewal cycles will prompt strategic platform pivots rather than incremental add-ons. Vendors offering transparent pricing, shared analytics, and structured migration tooling will capture a disproportionate share as enterprises rationalize portfolios.

Cellular 5G emerged as a surprise accelerant. Compact routers and slice-aware software, provided by several exhibitors, enable managed-service providers to extend SASE to pop-up branches, public safety fleets, and the long tail of small enterprises without requiring trenching of cable or fiber.

Conclusion

RSAC 2025 confirmed that the security industry stands at a strategic crossroads. Sovereign-ready architectures, AI-aware controls, 5G-enabled reach, and integrated delivery models now define a competitive advantage. Readers following these shifts should engage with Dell’Oro Group’s forthcoming Network Security, SASE/SD-WAN, and CNAPP reports and advisory services to benchmark against new imperatives and guide investment decisions.