I would like to share some initial thoughts about the groundbreaking announcement that HPE has entered into a definitive agreement to acquire Juniper for $14 billion. My thoughts focus mostly on the switch businesses of the two firms. The WLAN and security aspects of the acquisition are covered by our WLAN analyst Sian Morgan and security analyst Mauricio Sanchez.
My initial key takeaways and thoughts on the potential upside and downside impact of the acquisition are:
Pros:
In the combined data center and campus switch market, Cisco has consistently dominated as the major incumbent vendor, with a 46% revenue share in 2022. HPE held the fourth position with approximately 5%, and Juniper the fifth with around 3%. A consolidated HPE/Juniper entity would hold roughly 8% of the market, solidifying the fourth position and trailing closely behind Huawei and Arista.
Juniper’s standout performer is undeniably their Mist portfolio, recognized as the most cutting-edge AI-driven platform in the industry. As AI capabilities increasingly define the competitive landscape for networking vendors, HPE stands to gain significantly from its access to the Mist platform. We believe that Mist played a pivotal role in motivating HPE to offer a premium of about 30% for the acquisition of Juniper. In other words, Juniper brings better “AI technology for networking” to the table.
In the data center space, HPE has focused predominantly on the compute side, with a relatively modest presence in the data center switch business (HPE's data center switch sales amounted to approximately $150 M in 2022, compared with Juniper's sales of more than $650 M). Consequently, we anticipate that HPE stands to gain significantly from Juniper's data center portfolio. That said, a notable contribution from HPE is its Slingshot fabric, which serves as a compelling alternative to InfiniBand for connecting large GPU clusters. In other words, HPE brings better “Networking technology for AI” to the table.
Juniper would definitely benefit from HPE's extensive channels and go-to-market strategy (about 95% of HPE's business goes through channels). Additionally, HPE has made great progress driving its as-a-service GreenLake solution. So far, however, GreenLake has been dominated mostly by compute. With the Juniper acquisition, we expect to see more networking components pushed through GreenLake.
In campus, and with the Mist acquisition in particular, Juniper has been focusing mostly on high-end enterprises, whereas HPE has been playing mainly in the commercial and mid-market segments. From that standpoint, there should be little overlap in the customer base and plenty of cross-selling opportunities.
Cons:
Undoubtedly, a significant challenge arises from the substantial product overlap, evident across various domains such as data center switching, campus switching, WLAN, and security. It will be intriguing to observe how HPE navigates the convergence of these diverse product lines. Ideally, the merged product portfolio should synergize to bolster the market share of the consolidated entity. Regrettably, history has shown that not all product integrations and consolidations achieve that desired outcome.
We've been participating in the Open Compute Project (OCP) Global Summit for many years, and while each year has brought pleasant surprises and announcements, as described in our previous OCP blogs from 2022 and 2021, this year stands out in a league of its own. 2023 marks a significant turning point, notably with the advent of AI, which many speakers referred to as a tectonic shift in the industry and a once-in-a-generation inflection point in computing and in the broader market. This transformation has unfolded within just the past few months, sparking a remarkable level of interest at the OCP conference. In fact, the conference was completely sold out this year, demonstrating the widespread eagerness to grasp the opportunities and confront the challenges that this transformative shift presents to the market. Furthermore, OCP 2023 featured a new track dedicated entirely to AI. This year marks the beginning of a new era in the age of AI. AI is here! The race is on!
This new era of AI is defined by the emergence of new generative AI applications and large language models. Some of these applications deal with billions and even trillions of parameters, and the number of parameters appears to be growing by 1,000X every two to three years.
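To put that growth rate in perspective, here is a rough, purely illustrative calculation. The one-billion-parameter starting point is our own assumption for the sake of the math, not a figure cited in any OCP presentation:

```python
# Illustrative only: the rough math behind "1,000X every two to three years".
start_params = 1e9       # assumed baseline: a 1-billion-parameter model
growth_factor = 1000     # 1,000X growth cited above
period_years = 2.5       # midpoint of the "two to three years" range

annual_growth = growth_factor ** (1 / period_years)
print(f"Implied compound growth: ~{annual_growth:.0f}X per year")

for years in (2.5, 5.0):
    params = start_params * growth_factor ** (years / period_years)
    print(f"After {years} years: ~{params:.0e} parameters")
# ~16X per year; a trillion parameters after one period, a quadrillion after two.
```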
The complexity and size of these emerging AI applications dictate the number of accelerated nodes needed to run them, as well as the scale and type of infrastructure needed to support and connect those accelerated nodes. Regrettably, as illustrated in the chart below presented by Meta at the OCP conference, a growing disparity exists between the requirements for model training and the available infrastructure to facilitate it.
This predicament poses the pivotal question: How can one scale to hundreds of thousands or even millions of accelerated nodes? The answer lies in the power of AI Networks purpose-built and tuned for AI applications. So, what are the requirements that these AI Networks need to satisfy? To answer that question, let's first look at the characteristics of AI workloads, which include but are not limited to the following:
Traffic patterns consist of a large portion of elephant flows
AI workloads require a large number of short remote memory accesses
Because all nodes transmit at the same time, links saturate very quickly
The progression of all nodes can be held back by any delayed flow. In fact, Meta showed last year that 33% of elapsed time in AI/ML is spent waiting for the network.
Given these unique characteristics of AI workloads, AI Networks have to meet certain requirements, such as high speed, low tail latency, and a lossless, scalable fabric. The short sketch below illustrates why tail latency in particular matters so much.
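This is a minimal simulation sketch, entirely our own illustration rather than anything shown at OCP; it assumes each flow in a synchronous training step has a 1% chance of being ten times slower than normal:

```python
# Why tail latency dominates synchronous AI training: every step waits for the
# slowest flow, so a single delayed flow can stall the whole job.
import random

random.seed(0)

def step_time(num_nodes, base_ms=1.0, slow_ms=10.0, p_slow=0.01):
    """Completion time of one synchronous step: the max over all flows."""
    flows = [slow_ms if random.random() < p_slow else base_ms
             for _ in range(num_nodes)]
    return max(flows)

for nodes in (16, 256, 4096):
    avg = sum(step_time(nodes) for _ in range(1000)) / 1000
    print(f"{nodes:>5} nodes: average step time ~{avg:.1f} ms")
# Small clusters rarely hit a straggler, but at thousands of nodes nearly every
# step does, which is consistent with Meta's observation that a third of AI/ML
# time can be spent waiting on the network.
```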
In terms of high-speed performance, the chart below, which I presented at OCP, shows that by 2027, we anticipate that nearly all ports in the AI back-end network will operate at a minimum speed of 800 Gbps, with 1600 Gbps comprising half of the ports. In contrast, our forecast for the port speed mix in the front-end network reveals that only about a third of the ports will be at 800 Gbps speed by 2027, while 1600 Gbps ports will constitute just 10%. This discrepancy in port speed mix underscores the substantial disparity in requirements between the front-end network, primarily used to connect general-purpose servers, and the back-end network, which primarily supports AI workloads.
In the pursuit of low tail latency and a lossless fabric, we are witnessing numerous initiatives aimed at enhancing Ethernet and modernizing it for optimal performance with AI workloads. For instance, the Ultra Ethernet Consortium (UEC) was established in July 2023, with the objective of delivering an open, interoperable, high-performance, full-communications-stack architecture based on Ethernet. Additionally, OCP has formed a new alliance to address significant networking challenges within AI cluster infrastructure. Another groundbreaking announcement at the OCP conference came from Google, which unveiled plans to open Falcon, its low-latency hardware transport, to the ecosystem through the Open Compute Project.
At OCP, there was a huge emphasis on adopting an open approach to address the scalability challenges of AI workloads, aligning seamlessly with the OCP 2023 theme: ‘Scaling Innovation Through Collaboration.’ Both Meta and Microsoft have consistently advocated, over the years, for community collaboration to tackle scalability issues. However, we were pleasantly surprised by the following statement from Google at OCP 2023: “A new era of AI systems design necessitates a dynamic open industry ecosystem”.
The challenges presented by AI workloads to network and infrastructure are compounded by the broad spectrum of workloads. As illustrated in the chart below showcased by Meta at OCP 2023, the diversity of workloads is evident in their varying requirements.
Source: Meta at OCP 2023
This diversity underscores the necessity of adopting a heterogeneous approach to build high-performance AI Networks and infrastructure capable of supporting a wide range of AI workloads. This heterogeneous approach will entail a combination of standardized as well as proprietary innovations and solutions. We anticipate that Cloud service providers will make distinct and unique choices, resulting in market bifurcation. In the upcoming Dell’Oro Group’s AI Networks for AI Workloads report, I delve into the various network fabric requirements based on cluster size, workload characteristics, and the distinctive choices made by cloud service providers.
Exciting years lie ahead of us! The AI journey is just 1% finished!
Save the date: a free OCP educational webinar on November 9 at 8 AM PT will explore AI-driven network solutions and their market potential, featuring Juniper Networks and Dell'Oro Group. Register now!
2023 witnessed a remarkable resurgence of the OFC conference following the pandemic. The event drew a significant turnout, and the atmosphere was buzzing with enthusiasm and energy. The level of excitement was matched by the abundance of groundbreaking announcements and product launches. Given my particular interest in the data center switch market, I will center my observations in this blog on the most pertinent highlights regarding data center networking.
The Bandwidth and Scale of AI Clusters Will Skyrocket Over the Next Few Years
It's always interesting to hear from different vendors about their expectations for AI networks, but it's particularly fascinating when Cloud Service Providers (SPs) discuss their plans and predictions for the growth of their AI workloads, because these workloads are expected to exert significant pressure on the bandwidth and scale of Cloud SPs' networks. At OFC this year, Meta shared its expectations of what its AI clusters may look like in 2025 and beyond. Two key takeaways from Meta's predictions:
The size and network bandwidth of AI clusters are expected to increase drastically in the future: Meta expects the size of its AI clusters to grow from 256 accelerators today to 4K accelerators per cluster by 2025. Additionally, the amount of network bandwidth per accelerator is expected to grow from 200 Gbps to more than 1 Tbps, a phenomenal increase in just about three years. In summary, not only is the size of the cluster growing, but the amount of compute-network bandwidth per accelerator is also skyrocketing (the quick arithmetic after these takeaways puts the combined growth in perspective).
The expected growth in the size of AI clusters and compute network capacity will have significant implications for how accelerators are connected today: Meta showcased the current and potential future state of the cluster fabric. The chart below, presented by Meta, proposes flattening the network by embedding optics directly in every accelerator in the rack, rather than connecting through a network switch. This tremendous increase in the number of optics, combined with the increase in network speeds, is exacerbating the power consumption issues that Cloud SPs have already been battling. We also believe that AI networks may require a different class of network switches, purpose-built and designed for AI workloads.
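To put the first takeaway in perspective, here is a quick back-of-the-envelope calculation, our own arithmetic using only the cluster sizes and per-accelerator bandwidths Meta cited:

```python
# Aggregate compute-network bandwidth per cluster, from the figures above.
today  = {"accelerators": 256,  "gbps_per_accel": 200}
future = {"accelerators": 4096, "gbps_per_accel": 1000}  # "more than 1 Tbps"

def aggregate_tbps(cluster):
    return cluster["accelerators"] * cluster["gbps_per_accel"] / 1000

now, later = aggregate_tbps(today), aggregate_tbps(future)
print(f"Today: ~{now:,.0f} Tbps per cluster")
print(f"~2025: ~{later:,.0f} Tbps per cluster ({later / now:.0f}x growth)")
# Roughly 51 Tbps today vs. about 4,096 Tbps (4 Pbps): an ~80x jump in about
# three years, before accounting for any redundancy or oversubscription.
```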
Pluggable Optics vs. Co-packaged Optics (CPOs) vs. Linear Drive Pluggable Optics (LPOs)
Pluggable optics will be responsible for an increasing portion of power consumption at the system level (more than 50% of switch system power at 51.2 Tbps and beyond), and as mentioned above, this issue will only get exacerbated as Cloud SPs build their next-generation AI networks. CPOs have emerged as an alternative technology that promises to reduce power and cost compared to pluggable optics. Below are some updates on the state of the CPO market:
Cloud SPs are still on track to experiment with CPOs: Despite rumors that Cloud SPs are canceling their plans to deploy CPOs due to budget cuts, it appears that they are still on track to experiment with this technology. At OFC 2023, Meta reiterated its plans to consider CPOs in order to reduce power consumption from 20 pJ/bit to less than 5 pJ/bit using direct-drive CPOs, which eliminate the digital signal processors (DSPs); the quick arithmetic at the end of this section translates those figures into watts. It is still unclear, however, where exactly in the network Meta plans to implement CPOs, or whether they will be used primarily for compute interconnect.
The ecosystem is making progress in developing CPOs, but a lot remains to be done: There were several exciting demonstrations and product announcements at OFC 2023. For example, Broadcom showcased a prototype of its Tomahawk 5-based 51.2 Tbps “Bailly” CPO system, along with a fully functional Tomahawk 4-based 25.6 Tbps “Humboldt” CPO system that was announced in September 2022. Additionally, Cisco presented the power savings achieved with its CPO switch populated with CPO silicon photonic-based optical tiles driving 64x400G FR4, as compared to a conventional 32-port 2x400G 1RU switch. During our discussions with the OIF, we were given an update on the various standardization efforts taking place, including standardization of the socket that the CPO module will go into. Our conversations with major players and stakeholders made it clear that significant progress has been made in the right direction. However, there is still much work to be done to reach the final destination, particularly in addressing the serviceability, manufacturability, and testability issues that remain unsolved. Our CPO forecast, published in the January 2023 edition of our 5-year Data Center Forecast report, takes all of these challenges into consideration.
LPOs present another alternative to explore: Andy Bechtolsheim of Arista has suggested LPOs as another alternative that may address some of the challenges of CPOs. The idea behind LPOs is to remove the DSP from pluggable optics, as the DSP drives about half of the power consumption and a large portion of the cost of 400 Gbps pluggable optics. By removing the DSP, LPOs would be able to reduce optic power by 50% and system power by up to 25%, as Andy portrayed in the chart below.
Additionally, other materials for electro-optic modulation (EOM) are being explored, which may offer even greater savings compared to silicon photonics. Although silicon photonics is a proven high-volume technology, it suffers from high drive voltage and insertion loss, so exploring new materials such as TFLN may help lower power consumption further. However, we would like to note that while LPOs have the potential to achieve power savings similar to CPOs, they put more stress on the electrical part of the switch system and require high-performance switch SerDes and careful motherboard signal-integrity design. We expect 2023 to be busy with measurement and testing activities for LPO products.
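To make these efficiency figures more concrete, here is our own arithmetic using only the numbers cited above (51.2 Tbps switch systems, 20 pJ/bit today, a sub-5 pJ/bit target, and a DSP that accounts for about half of pluggable-optic power):

```python
# Converting energy-per-bit figures into watts at the switch-system level.
switch_tbps = 51.2
bits_per_sec = switch_tbps * 1e12

def optics_watts(pj_per_bit):
    # power = energy per bit * bit rate
    return pj_per_bit * 1e-12 * bits_per_sec

print(f"20 pJ/bit at 51.2 Tbps: ~{optics_watts(20):,.0f} W of optics power")
print(f" 5 pJ/bit at 51.2 Tbps: ~{optics_watts(5):,.0f} W of optics power")
# About 1,024 W vs. 256 W: why CPOs and LPOs matter at these speeds.

# Sanity check on the LPO claim: if optics are roughly half of system power and
# the DSP is roughly half of optic power, dropping the DSP saves ~25% of system
# power, in line with the figures in the chart above.
optics_share_of_system = 0.5
dsp_share_of_optics = 0.5
print(f"Implied system-level saving: ~{optics_share_of_system * dsp_share_of_optics:.0%}")
```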
800 Gbps Pluggable Optics Are Ready for Production Volume and 1.6 Tbps Optics Are Already in the Making
While we are excited about the aforementioned futuristic technologies that may take a few more years to mature, we are equally thrilled about the products on display at OFC that will contribute to market growth in the near future, such as the 800 Gbps pluggable optical transceivers, which were widely represented at the event this year. The timing is perfect, as it aligns with the availability of 51.2 Tbps chips from various vendors, including Broadcom and Marvell. While 800 Gbps optics started shipping in 2022, more suppliers are currently sampling, and volume production is expected to ramp up by the end of this year, as indicated in the chart below from the January 2023 edition of our 5-year Data Center Forecast report. In addition, several 1.6 Tbps optical components and transceivers based on 200G per lambda were also introduced at OFC 2023, but we do not expect to see substantial volumes in the market before 2025/2026.
If you would like to hear more about our findings, please contact us at dgsales@delloro.com.
Happy New Year! Right before the holidays, we published our 3Q22 reports, which provided a good overview of market performance for the first nine months of 2022. Based on those results, and while vendors have not yet reported their 4Q results, the Campus Switch market is estimated to have achieved stellar double-digit growth, reaching a record revenue level in 2022.
Now the big question is what’s next and what does that mean for 2023 performance? Should we expect a market pull-back, especially in light of rising macroeconomic uncertainties? And what other trends should we watch in 2023?
1) Market Performance to Remain Healthy in 2023
Despite the remarkable performance in 2022 resulting in a tough comparison for sales in the new year, we project that the Campus Switch market will continue to grow in 2023. Our optimism is underpinned by the healthy backlog in the market. On the latest earnings calls, almost every switch vendor reported near record-level backlog, and most did not expect a return to normal within the next several quarters. As the supply situation continues to improve in the first half of 2023, it will help fulfill this backlog, cushioning market sales even as booking growth rates start to moderate. Furthermore, this backlog will be priced at a premium compared to what was shipped in 2022, as explained later in this blog.
However, as we head into the second half of 2023, we believe that improvement in the supply situation, combined with macroeconomic challenges, will put a brake on the panic-purchasing behavior that led to the extraordinary levels of backlog recorded so far in the market. We therefore expect a significant slowdown in bookings, followed shortly thereafter by a slowdown in revenue, as most of the backlog will have been fulfilled during the first half of the year.
2) Market Prices May Finally Start to Rise
As you know, almost every vendor had to increase its list prices by an average of 10-15% to protect margins by passing some of the increased supply-related costs on to customers. Those list price increases, however, have not yet started to impact recognized revenues, as most of the products shipped in 2022 came from orders placed ahead of the increases. As supply improves and this older backlog starts to get fulfilled in 2023, we expect the market to begin to benefit from the list price increases, although the benefit may be partially offset by regional, customer, and product mix dynamics.
3) Wide Discrepancy in Regional Performance
2023 is expected to be a wild and uncomfortable year from a geopolitical and macro perspective. The war in Europe, the global energy crisis, and inflation are expected to put pressure on market demand and curb enterprises' appetite for spending. However, we expect this slowdown in demand to be more severe in certain regions than in others. For instance, we expect the slowdown to be more severe in Europe than in the U.S. Additionally, China will be dealing with an increased rate of COVID infections following the end of its zero-COVID policy.
4) 2.5/5.0 Gbps Campus Switch Adoption to Accelerate
We predict 2.5/5.0 Gbps shipments will grow in excess of 50% in 2023, an accelerated growth rate compared to 2022. This accelerated ramp reflects improved supply but also increased demand. We expect a higher portion of Wi-Fi 6E and Wi-Fi 7 access points (APs) to ship with 2.5/5.0 Gbps uplinks and to drive the need for 2.5/5.0 Gbps switches. Additionally, as employees return to their offices, even on a part-time basis, network traffic will surge, requiring higher-speed Wi-Fi APs and switches. Last but not least, we expect this growth in 2.5/5.0 Gbps switch shipments to be diversified among a wide variety of vendors, unlike in prior years, when Cisco accounted for well over two-thirds of the shipments in the market.
5) Network-As-A-Service Offerings to Increase and Open the Door for Heated Competition in the Market
Perhaps one of the main questions we received in 2022, and expect to persist in 2023, is around Network-as-a-Service (NaaS) offerings. What is the definition of NaaS? What does it include? What is the target market? How are vendors charging for it? What is the delivery model? How are the different responsibilities divided to provision, maintain, and operate the network?
Given the complexity of the matter, we felt the need to address all of the questions above, and more, in an advanced research report planned for launch in 2023. Stay tuned!
Best Wishes for 2023! We would like to kick off the new year by reflecting on our 2022 predictions and sharing what we believe 2023 might have in store for us.
First, looking back at our 2022 prediction blog, we anticipated the following for 2022:
Data center switch market spotlight will continue to shine in 2022 if supply permits
200/400 Gbps adoption to accelerate beyond Google and Amazon
800 Gbps shipments may debut at Google
Silicon diversity will become more pronounced
AI-driven workloads to continue to shape data center network infrastructure
On our first prediction, 2022 was indeed a record year for data center switch sales as manufacturers’ ability to navigate supply challenges was remarkable.
On our second and third predictions, 200/400 Gbps shipments are estimated to have nearly tripled in 2022, driven by ongoing deployment at Google and Amazon as well as an accelerated adoption from Microsoft and Meta. We also started to report early 800 Gbps deployments at Google.
On our fourth prediction, needless to say, supply constraints actually accelerated the need for silicon diversity. The latest entrants to the merchant silicon market, such as Cisco, Intel (Barefoot), and Marvell (Innovium), have started to gain network footprint at the hyperscalers. Xsight Labs is another start-up trying to take a bite out of hyperscalers' network spending.
On our fifth prediction, we believe we have barely started to scratch the surface of the sea of innovation, disruption, and opportunity that AI workloads will bring to the market.
Now, with 2022 in the rearview mirror, most of the trends mentioned above will remain in focus, and we will continue to explore them. Additionally, I would like to highlight other trends that were overshadowed in 2022 and that we believe deserve to be brought back into the spotlight in 2023.
2023 Poised for Another Year of Strong Double-digit Growth and Record-Breaking Revenues
Despite all the concerns about the macroeconomic situation and a tough comparison with the year-ago period, we expect data center switch sales to grow by double digits and reach an all-time high in 2023. Most of this growth will be driven by the Cloud segment, most notably the hyperscalers, whose spending is usually less impacted by short-term macro conditions. In the meantime, we expect spending from enterprises to be sluggish, as was the case during prior market downturns.
In addition to this discrepancy in spending across various customer segments, we also expect a variance in market performance between the first and second halves of the year.
In the first half, we expect two tailwinds to drive revenue growth. We anticipate a strong backlog carried over from 2022. We also expect to see improvement in the supply situation that will help fulfill that backlog.
In the second half of 2023, we believe improving supply, combined with macroeconomic headwinds, will put a brake on the panic-purchasing behavior that resulted in the outstanding booking growth rates experienced so far in the market. We therefore expect a significant slowdown in orders, followed shortly thereafter by a slowdown in revenues, as most of the backlog will have been fulfilled in the first half of the year.
1) 200/400 Gbps Shipments to Nearly Double in 2023
2023 will mark a third major milestone in the adoption of 200/400 Gbps. The first was the early adoption spearheaded by Google and Amazon back in the 2019/2020 time frame. The second milestone was marked by the deployments at Meta and Microsoft in the 2021/2022 time frame. The third milestone is anticipated in 2023 and will be marked by accelerated adoption from Chinese Cloud Service Providers (SPs) and other Tier 2/3 Cloud SPs. This adoption by a wider set of customers, together with ongoing deployment at the hyperscalers, is expected to propel nearly triple-digit growth in 200/400 Gbps port shipments in 2023.
2) 800 Gbps Deployments May Start to Expand Beyond Google in 2023
The availability of 25.6T-based switch systems stimulated 800 Gbps adoption at Google in 2022. With the availability of 51.2T-based switches, currently slated for the end of 2023, we expect other hyperscalers to deploy those switch systems in the form of 64 ports of 800 Gbps (see the simple port math below). Of course, this prediction is contingent on the timing of volume availability of 800 Gbps optics.
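As a simple illustration of the port math behind this prediction (standard full-speed radix options for these ASIC generations; actual systems also offer breakout and mixed-speed configurations):

```python
# Port counts at full speed for recent switch-ASIC capacities (illustrative).
asic_capacities_tbps = [12.8, 25.6, 51.2]
port_speeds_gbps = [400, 800]

for capacity in asic_capacities_tbps:
    options = ", ".join(
        f"{int(capacity * 1000 // speed)}x{speed}G" for speed in port_speeds_gbps
    )
    print(f"{capacity} Tbps ASIC: {options}")
# 51.2 Tbps works out to 128x400G or 64x800G, which is why 51.2T silicon and
# volume 800 Gbps optics need to arrive together for this prediction to play out.
```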
3) SONiC is Ready for Prime Time
Over the past few years, we have witnessed increased interest in the SONiC ecosystem, but unfortunately this interest has been hindered by persistent challenges, mostly related to supportability.
Tier 2/3 Cloud SPs as well as enterprises have limited financial and engineering resources, compared to hyperscalers, and may not be able to manage the full lifecycle of a project like SONiC.
To address the supportability issues, we have seen several offerings from various incumbents, and with the additional rise of new start-ups such as Aviz Networks and Hedgehog, we expect increased adoption of SONiC over the coming years.
We currently predict that by 2026, nearly 10% of the switches deployed in enterprise networks will be running SONiC. We plan to provide an updated SONiC forecast in our upcoming 5-year data center switch forecast report.
4) AI-driven Workloads Will Take Center Stage in terms of Spending from Customers as well as Investments from the Ecosystem
This trend is not unique to 2023 but rather is expected to continue for the foreseeable future. Dell’Oro Group projects that half of the spending on servers by 2026 will be on accelerated compute nodes for AI/ML applications. However, AI/ML workloads have a unique set of requirements in terms of latency, bandwidth, and power consumption, just to name a few. We expect AI/ML workloads to drive a significant amount of innovation across different areas: servers, storage, networking, and physical infrastructure, each of which we track at Dell’Oro Group as part of our research coverage. To meet those requirements, innovations at both the system level and the component level will be needed. These innovations will be brought to market by incumbent vendors, but more importantly by new entrants, which we expect will enjoy a significant amount of funding. As industry analysts, we will be very excited to watch what kind of new product introductions and new network topologies are announced in 2023.