Whether spurred by the looming rollouts of 5G services or the continued attrition of video subscribers and revenue, fixed broadband technologies, services, and business strategies have changed. Whereas operators were once focused on driving scale across multiple areas of their business, in many cases the focus is now squarely on fixed broadband. And why not? For most North American operators, service margins for residential fixed broadband hover between 60% and 70%, while video margins have seen steady declines of approximately 15% over the last five years, pushing average margins below 15% in some cases.

Scaling broadband services, however, is tricky, because achieving scale involves some combination of additional bandwidth, network platforms, CPE, and test and measurement equipment, as well as the personnel to support both the upgrades and ongoing maintenance. These challenges are faced by all broadband service providers and are certainly not limited to cable or telco operators alone.

Shared challenges, as well as the standards and technologies to overcome them, are a big reason why I decided to combine my perspectives on the recent Cable-Tec Expo and Broadband World Forum shows into a single article. No matter where you look, the nearly universal approach broadband service providers are taking to address their scaling challenges is virtualization. The topic occupied most of my discussions at both events and will only grow in importance as we progress through 2020 and beyond.

Cable’s Clear Use Case for Virtualization

Cable operators are intimately familiar with the challenges of scaling their broadband networks to support downstream bandwidth consumption CAGRs still hovering in the 25-35% range. To deliver more bandwidth, MSOs traditionally have had to split their optical nodes to reduce service group sizes. Each node split, however, requires more passive and active equipment, including splitters, combiners, receivers, and transmitters. More important from an opex standpoint is the need to increase the number of hardware-based CCAP platforms to support the additional bandwidth and service groups. The net result is a significant increase in space and power requirements in both headend and hub sites, as well as added complexity in fiber cabling.
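
To make that scaling pressure concrete, here is a minimal, illustrative sketch of how successive node splits multiply service groups and, with them, the CCAP capacity a hub site has to house. The node counts, the one-service-group-per-node mapping, and the ports-per-chassis figure are assumptions chosen for readability, not any operator's actual plant data.

```python
# Illustrative only: a rough model of how node splits multiply service groups
# and the hardware-based CCAP capacity needed to serve them. All figures are
# assumptions chosen for readability, not any operator's actual plant data.

def ccap_footprint(initial_nodes: int, split_rounds: int, ports_per_chassis: int) -> None:
    """Print service-group and chassis counts after each round of node splits."""
    nodes = initial_nodes
    for round_number in range(split_rounds + 1):
        # Assume one service group per node, each consuming one CCAP port.
        service_groups = nodes
        chassis_needed = -(-service_groups // ports_per_chassis)  # ceiling division
        print(f"split round {round_number}: {nodes:>5} nodes -> "
              f"{service_groups:>5} service groups, {chassis_needed:>3} CCAP chassis")
        nodes *= 2  # each split roughly doubles the node count

# Example: a hub site starting with 200 nodes, three rounds of splits, and an
# assumed 64 service groups per hardware CCAP chassis.
ccap_footprint(initial_nodes=200, split_rounds=3, ports_per_chassis=64)
```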

With operators' ultimate goal of delivering multi-gigabit services to subscribers, this traditional model of adding hardware to keep pace with consistent increases in overall bandwidth is simply unsustainable, especially when cable operators are also trying to reduce their real estate footprint by consolidating headend and secondary hub site facilities.

Comcast has clearly taken a lead role in pushing virtualization, and it provided an informative overview of its progress at the event. For me, there were three key benefits Comcast either explicitly or implicitly communicated about its virtualization efforts:

  1. Even if Comcast moves away from its plan of delivering full-duplex (symmetric 10 Gbps) services in a node + 0 environment, a virtualized CCAP core gives it the ability to scale at its own pace and at any location. Servers could still be located in existing headends or primary hub sites, or they could be deployed in centralized data centers. With workload balancing across its CCAP core servers, there are effectively no restrictions on where Comcast can grow its capacity.
  2. The virtual CCAP core all but eliminates the extended maintenance windows often required for software and firmware upgrades of traditional CCAP platforms. With increasing restrictions on service downtime, operators frequently push those limits when they have complex upgrades to complete across their entire CCAP footprint. The virtual CCAP core breaks those software and firmware upgrades into microservices, allowing them to be rolled out and completed without full reboots of the platform. The result is minimal downtime for subscribers. Even when there is downtime, it can be isolated to a service group of 250 homes or fewer (and declining), as opposed to the 100k to 250k subscribers potentially impacted when a traditional CCAP chassis goes down (see the sketch after this list).
  3. Comcast fully believes that other cable operators can benefit from its virtual CCAP core architecture, and it intends to license the architecture just as it has done with its X1 video platform. There are, of course, questions around just how that licensing model might work and how revenue might be distributed between Comcast and Harmonic, its vCCAP partner. But it is clear that Comcast is leaving the door wide open to profiting from its software development work. Obviously, this could have negative impacts on the traditional CCAP vendors, as the size of their addressable market shrinks. However, only a few operators have thus far licensed Comcast's X1 platform, and it stands to reason that an even smaller number would want to entrust the most important service in their portfolio to a fellow operator.
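
To put numbers behind the fault-domain point in item 2, the back-of-the-envelope comparison below contrasts the worst-case subscriber impact of a traditional chassis outage with that of a rolling, per-service-group upgrade. The subscriber counts are assumptions drawn from the ranges cited above, and the regional footprint is invented purely for illustration.

```python
# Illustrative only: worst-case subscriber impact of a maintenance event on a
# traditional CCAP chassis versus a virtualized CCAP core, where a rolling
# microservice upgrade touches one service group at a time. Figures are
# assumptions drawn from the ranges discussed above.

CHASSIS_SUBSCRIBERS = 150_000   # a single hardware chassis can serve 100k-250k subscribers
SERVICE_GROUP_HOMES = 250       # virtualized fault domain: one service group or fewer homes
REGIONAL_FOOTPRINT = 1_000_000  # invented footprint, used only to show relative impact

def report(label: str, affected: int, total: int) -> None:
    print(f"{label:<30} {affected:>7,} subscribers ({affected / total:.2%} of footprint)")

report("Traditional chassis outage", CHASSIS_SUBSCRIBERS, REGIONAL_FOOTPRINT)
report("Rolling microservice upgrade", SERVICE_GROUP_HOMES, REGIONAL_FOOTPRINT)
```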

Really, Comcast’s progress on virtualization is just the beginning. Yes, it satisfies a short-term requirement to be able to scale to support consistent increases in fixed broadband speeds. But the longer-term potential for supporting edge computing and processing for more complex IoT and 5G backhaul applications also requires this transition away from dedicated hardware platforms.

Multi-Vendor, Multi-Service Requirements Drive Telcos’ Virtualization Efforts

Multi-service support, which is still on the horizon for most cable operators, is a reality today for a number of operators who are moving forward with the virtualization of their access networks. That reality has been reflected in increased discussion of and focus on VOLTHA (Virtual OLT Hardware Abstraction), currently for XGS-PON deployments, but with an eye toward G.fast deployments as well.

At this point, VOLTHA is a well-known open-source project designed to simplify traditional PON architectures by abstracting PON-specific elements such as OMCI and GEM, allowing an SDN controller to treat each PON OLT as a programmable switch, independent of any vendor's hardware.
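
The abstraction idea is easier to see in code. The sketch below is purely conceptual and does not use VOLTHA's actual interfaces; every class and method name is invented for this example. It simply shows how vendor-specific details such as OMCI messaging and GEM port handling can sit behind a uniform interface, so a controller programs every OLT the same way.

```python
# Conceptual sketch of the VOLTHA idea, not the project's real API: vendor
# adapters hide OMCI/GEM details so a controller can treat every OLT as a
# generic programmable switch. All names here are hypothetical.

from abc import ABC, abstractmethod

class OltAdapter(ABC):
    """Vendor-specific adapter that translates a generic intent into vendor config."""

    @abstractmethod
    def provision_subscriber(self, onu_serial: str, bandwidth_mbps: int) -> None:
        ...

class VendorAOlt(OltAdapter):
    def provision_subscriber(self, onu_serial: str, bandwidth_mbps: int) -> None:
        # A real adapter would issue vendor A's OMCI messages and allocate GEM
        # ports here; this sketch just records the generic intent.
        print(f"[vendor A OLT] ONU {onu_serial}: {bandwidth_mbps} Mbps service flow")

class VendorBOlt(OltAdapter):
    def provision_subscriber(self, onu_serial: str, bandwidth_mbps: int) -> None:
        print(f"[vendor B OLT] ONU {onu_serial}: {bandwidth_mbps} Mbps service flow")

class SdnController:
    """Sees every OLT through the same abstract interface, regardless of vendor."""

    def __init__(self) -> None:
        self.olts: dict[str, OltAdapter] = {}

    def register(self, name: str, olt: OltAdapter) -> None:
        self.olts[name] = olt

    def provision(self, olt_name: str, onu_serial: str, bandwidth_mbps: int) -> None:
        self.olts[olt_name].provision_subscriber(onu_serial, bandwidth_mbps)

controller = SdnController()
controller.register("hub1-olt1", VendorAOlt())
controller.register("hub1-olt2", VendorBOlt())
controller.provision("hub1-olt1", "SERIAL0001", 1000)
controller.provision("hub1-olt2", "SERIAL0002", 500)
```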

Whereas cable operators are currently virtualizing to scale for more bandwidth, for telcos that is just one piece of the puzzle. They are certainly virtualizing to scale for bandwidth, but also for 4G and 5G backhaul, as well as enterprise PON and WiFi backhaul. In addition, telcos are looking to more easily manage multi-vendor and multi-technology environments, where physical layer technologies such as G.fast and GPON are all managed in a similar manner from a central location. In such cases, the elements associated with each physical layer technology are abstracted, allowing easy migration from one technology to the next, as well as a unified management and troubleshooting plan across all technologies.

During Broadband World Forum, discussions centered on actual deployments of virtualized, software-defined access networks were plentiful. This was a significant change from previous years when the technologies were still relegated to lab environments. Beyond an increase in the maturity of the technologies, the focus on virtualization has come about partially because of how service providers are either deploying or accessing fiber assets. In a growing number of cases, service providers are leasing fiber to fill in service area gaps, or they are partnering with other operators to share the costs of deploying fiber. In these cases, where service providers have equipment on their own fiber, on leased fiber lines, or even leased access to the fiber owner’s OLTs, virtualized infrastructure simplifies the management of these network elements by abstracting the specific PON elements of multiple vendors and enabling their provisioning and management from a single, centralized controller.

For many years, multi-vendor access network deployments were a stated goal of major network operators. However, very few ever became reality, due to unique management complexities associated with each vendor’s implementation. Virtualization finally makes this a reality by essentially treating each active network element as an equal node. One node could be an OLT, another could be a DSLAM or G.fast DPU, while another could be a fixed wireless access point. All can be provided by different vendors, while still being managed centrally by a software controller.
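
As a rough illustration of that "equal node" idea (again, the types, fields, and upgrade policy here are invented for this sketch, not any vendor's or operator's actual model), the snippet below keeps a single inventory of access nodes and applies one management policy across them, whether the underlying element is an OLT, a DSLAM, a G.fast DPU, or a fixed wireless access point.

```python
# Illustrative only: a central controller treating heterogeneous access nodes
# (OLT, DSLAM, G.fast DPU, fixed wireless AP) as interchangeable managed nodes.
# The types, fields, and upgrade policy are invented for this sketch.

from dataclasses import dataclass

@dataclass
class AccessNode:
    name: str
    vendor: str
    technology: str        # e.g. "XGS-PON", "VDSL2", "G.fast", "fixed wireless"
    software_version: str

class AccessController:
    """One management plane for every node type, regardless of vendor."""

    def __init__(self) -> None:
        self.nodes: list[AccessNode] = []

    def add_node(self, node: AccessNode) -> None:
        self.nodes.append(node)

    def nodes_needing_upgrade(self, target_version: str) -> list[AccessNode]:
        # A single upgrade policy applied uniformly across vendors and technologies.
        return [n for n in self.nodes if n.software_version != target_version]

controller = AccessController()
controller.add_node(AccessNode("central-olt-1", "vendor A", "XGS-PON", "2.1"))
controller.add_node(AccessNode("cabinet-dslam-4", "vendor B", "VDSL2", "1.9"))
controller.add_node(AccessNode("cabinet-dpu-7", "vendor B", "G.fast", "1.4"))
controller.add_node(AccessNode("tower-fwa-3", "vendor C", "fixed wireless", "2.1"))

for node in controller.nodes_needing_upgrade("2.1"):
    print(f"upgrade pending: {node.name} ({node.vendor}, {node.technology})")
```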

Though multi-vendor, multi-service environments remain the exception rather than the rule, progress toward making them a reality through virtualization will continue to ramp up through 2020 and beyond. We should expect to see some novel business models emerge next year, especially in open access networks, where ISPs lease virtual access through network slicing. These models are already emerging in Europe and Latin America, and we expect them to expand in both regions next year.