
800 Gbps adoption rate expected to be faster than 400 Gbps, comprising more than 25% of data center switch ports by 2025

Since the onset of COVID-19, we have predicted that the data center switch market will be for the most part resilient to the effects of the pandemic and that it will quickly recover from its low, single-digit revenue decline in 2020. We continue to believe that the Ethernet switch data center market will return to growth in 2021 and be able to exceed its 2019 pre-pandemic revenue level.

Following are key takeaways from the July 2021 Five-Year Ethernet Switch Data Center forecast:

    • Our interviews with end-users and system and component vendors suggest that the pandemic has amplified the importance of the network and accelerated multi-year digital transformation projects. These trends are expected to bring major changes to data center networks and potentially generate additional market revenue.
    • Despite our optimism, our interviews with major vendors revealed that a number of them are already operating at full manufacturing capacity and that supply challenges will continue through the remainder of the year with a potentially more pronounced impact on market performance and the pricing environment. If this is true, our forecast may prove to be too high, as it doesn’t currently take into consideration the impact of these various supply issues.
    • Through our latest interviews with the large Cloud service providers (SPs), we have learned of a number of changes that may impact network architectures when they migrate to next-generation speeds. These changes will be driven by a limited power budget and new AI/ML applications, which may require different network topologies. These hyperscalers will make different choices in terms of network chips, switch radix, number of network tiers, and –ultimately – network speeds. We expect this diversity to increase when Cloud SPs build next-generation networks, as some will focus more on latency improvements while others will focus on power. Ultimately, however, all SPs will focus on cost reduction. Additional discussion about these possible changes and their associated effects may be found in our forecast report.
    • Optics have always played an important role in enabling speed migration on data center switches. With the transition to 400 Gbps and beyond, however, the role played by optics will become even more crucial for a number of reasons. First, because of their increased price, optics for 400 Gbps speeds and higher are expected to comprise about 60% to 70% of network spending (compared with less than 50% for speeds lower than 400 Gbps). For this reason, some switch vendors are planning to use the optics opportunity to capture a higher portion of network spending. Second, optics may displace some dense wavelength division multiplexing (DWDM) transport systems for certain Data Center Interconnect (DCI) use cases. Last, but not least, while pluggable (as opposed to embedded) optics are currently the form factor of choice, they may potentially exhibit some thermal and density issues as we approach speeds of 1.6 Tbps and higher. All of these possible changes in optics and their corresponding impact on the data center switch market are addressed in greater detail in our report.
    • We predict that 800 Gbps adoption will be quick, surpassing 400 Gbps ports in 2024 (Figure). 800 Gbps deployments will be propelled by the availability of 100 Gbps SerDes and will not require an 800 GE MAC. As a reminder, our forecast reflects switch-port capacity, regardless of how the port is configured. We expect early 800 Gbps ports to be used in breakout mode, either as 8×100 Gbps or as 2×400 Gbps. (Breakout applications support many use cases, such as aggregation, shuffle, better fault tolerance, and a bigger radix.) The anticipated rapid adoption of 800 Gbps will be propelled by: 1) the availability of 800 Gbps optics with a significantly lower cost per bit than two discrete 400 Gbps optics; and 2) a lower cost per bit at the system level, as 800 Gbps will allow consuming 25.6 Tbps chips in a 1U form factor with 32 ports of 800 Gbps. These systems will have a better cost per bit than their 400 Gbps equivalents (which require a 2U chassis to fit 64 ports). Since economics drive adoption, we believe that 800 Gbps will be adopted more rapidly than 400 Gbps.
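The system-level cost-per-bit argument above can be checked with simple arithmetic. The sketch below uses hypothetical system prices (they are illustrative assumptions, not Dell'Oro forecast data) to show the mechanics: both configurations deliver 25.6 Tbps, but the 800 Gbps box does it in half the rack space.

```python
# Illustrative comparison of 25.6 Tbps delivered as 32 x 800G in 1U
# versus 64 x 400G in 2U. System prices below are hypothetical
# placeholders chosen only to demonstrate the calculation.

def cost_per_gbps(system_price, ports, port_speed_gbps):
    """Cost per Gbps of switching capacity for a fixed-form-factor system."""
    return system_price / (ports * port_speed_gbps)

price_400g_2u = 30_000   # 64 x 400G in 2U (assumed price)
price_800g_1u = 26_000   # 32 x 800G in 1U (assumed to cost less)

cpb_400 = cost_per_gbps(price_400g_2u, ports=64, port_speed_gbps=400)
cpb_800 = cost_per_gbps(price_800g_1u, ports=32, port_speed_gbps=800)

print(f"400G system: ${cpb_400:.3f}/Gbps for 25.6 Tbps in 2U")
print(f"800G system: ${cpb_800:.3f}/Gbps for 25.6 Tbps in 1U")
# Under these assumed prices the 800G configuration lowers cost per
# bit while halving rack space; 800G ports in 2x400G breakout mode
# also remain usable before an 800 GE MAC is needed.
```

The same per-bit logic applies to optics: a single 800 Gbps optic priced below two discrete 400 Gbps optics improves the economics further.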

To access the full report for details about revenue, units, pricing, speeds, regions, market segments, etc., please contact us at dgsales@delloro.com

 

About the Report

The Dell’Oro Group Ethernet Switch – Data Center Five Year Forecast Report provides a comprehensive overview of market trends, including tables covering manufacturers’ revenue, port shipments, and average selling prices for modular and fixed (managed and unmanaged) switches by port speed. We report on 1000 Mbps and the following Gbps port speeds: 10, 25, 40, 50, 100, 200, 400, and 800. We also provide forecasts by region and market segment, including Top-4 U.S. Cloud SPs, Top-3 Chinese Cloud SPs, Telco SPs, Rest of Cloud, Large Enterprises, and Rest of Enterprises.

[Figure: July 2021 5-Year Forecast, Data Center Switch Market]

In our latest Data Center Capex report published in June 2021, server spending, which accounts for more than 40% of data center capex, is forecast to grow 8% in 2021. We anticipate growth to come mostly from an increase in server average selling price (ASP), as vendors pass on higher commodity pricing and supply chain costs to customers amid recent global semiconductor shortages. We predict demand for servers to strengthen in 2H 2021, as the major Cloud service providers ease out of a digestion cycle and as enterprise spending unfreezes for certain sectors, which could further strain the supply chain.

We identify the following effects due to these ongoing supply chain constraints:

  • Data center equipment—such as servers, storage systems, and Ethernet switches—contains critical components that may be supply constrained due to the recent shortages. Examples of such components include CPUs and GPUs, network processors, storage and Ethernet controllers, and DRAM and NAND chips. Passive components on the motherboard, such as capacitors and resistors, have longer lead times. While it is unclear how long the current component shortages will persist, there is an expectation in the industry that the backlog for components could be relieved by late 2021. Therefore, we have weighted capex towards the second half of this year, with some of that capex rolling over into next year.
  • As system vendors scramble to increase component purchases to meet immediate and future demand, the supply chain will continue to tighten, resulting in higher component and logistics costs that will eventually be passed to end-users in the form of higher system ASPs. For 2021, we forecast server ASP to approach double-digit growth. While server ASP grew by an unprecedented rate of 15% in 2018, also due to a high-demand and tight supply environment, we do not expect 2021 ASP increases to approach those of 2018. Consequently, system vendors could see a lift in their topline revenue from these ASP increases.
  • Higher server ASPs could have several implications. First, given that the Cloud SPs purchase servers based on unit demand, higher server ASP could directly result in higher server capex. Second, given that enterprise IT budgets are usually fixed for the year, higher server ASP could translate to lower server unit purchases for the year. Thus, even though our 2021 server revenue forecast is relatively unchanged compared to our prior forecast, we have curtailed our projections for server unit growth.
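The unit-demand effect described in the last bullet is easy to quantify. The sketch below uses hypothetical budget and ASP figures (assumptions for illustration, not forecast values) to show why the same ASP increase lifts Cloud capex but cuts enterprise unit volume.

```python
# Illustrative effect of a higher server ASP under two purchasing
# models. All dollar figures are hypothetical assumptions.

budget = 1_000_000      # fixed annual enterprise server budget (assumed)
asp_prior = 8_000       # prior-year server ASP (assumed)
asp_growth = 0.09       # high-single-digit ASP growth (assumed)

asp_new = asp_prior * (1 + asp_growth)

# Enterprise: budget is fixed, so units fall as ASP rises.
units_prior = budget / asp_prior   # 125 servers
units_new = budget / asp_new       # ~114.7 servers

# Cloud SP: unit demand is fixed, so capex rises with ASP.
cloud_capex_new = units_prior * asp_new

print(f"Enterprise units at flat budget: {units_prior:.0f} -> {units_new:.0f}")
print(f"Cloud capex for the same {units_prior:.0f} units: ${cloud_capex_new:,.0f}")
# Enterprise revenue (the budget) is unchanged while unit volume drops
# roughly in line with ASP growth; Cloud capex instead rises with ASP.
```

This is why our 2021 server revenue forecast can stay roughly unchanged even as we curtail the unit-growth projection.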

In addition, we will watch out for other developments that could impact data center spending this year, such as the Intel Ice Lake CPU ramp and delays to Sapphire Rapids, Cloud demand and enterprise recovery, the proliferation of AI, and more. To access the full Data Center Capex report, please contact us at dgsales@delloro.com.


In our latest forecast published in January 2021, worldwide data center infrastructure capex is projected to grow at a 7% compound annual growth rate (CAGR) from 2020 to 2025, reaching $278 billion.
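The relationship between the CAGR and the 2025 endpoint can be sanity-checked with the standard compound-growth formula; note that the 2020 base derived below is implied by the arithmetic, not a reported figure.

```python
# Back out the implied 2020 base from the forecast endpoint:
#   capex_2025 = base * (1 + CAGR) ** years

cagr = 0.07
years = 5
capex_2025 = 278e9                      # $278 B projected for 2025

implied_2020 = capex_2025 / (1 + cagr) ** years
print(f"Implied 2020 capex: ${implied_2020 / 1e9:.0f} B")

# Conversely, given both endpoints, the CAGR is recovered as:
#   cagr = (end / base) ** (1 / years) - 1
recovered = (capex_2025 / implied_2020) ** (1 / years) - 1
print(f"Recovered CAGR: {recovered:.1%}")
```

The same formula applies to any of the per-segment growth rates discussed in the report.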

We anticipate the growth of data center capex to vary depending on the customer segment. The Cloud will continue to gain share over enterprise/on-premise data center deployments, with COVID-19 accelerating Cloud adoption to some extent. Edge computing deployed over Telco networks could emerge near the tail end of the forecast period.

Among all the technology areas that we track, we forecast the cumulative revenue growth of servers to surpass that of other technology areas such as Ethernet switch, network security, optical transport, and router. Higher capex on servers will be driven by a combination of higher server unit demand from the Cloud and increasing server average selling prices (ASP). We identify some notable trends in server architecture that could have the effect of lifting server ASP over our forecast horizon.

[Figure: Dell'Oro Group Data Center Infrastructure Revenue 2025]

 

The following are some additional highlights from the Data Center Capex 5-Year Forecast January 2021 Report:

  • CPU Refresh: New generations of servers are typically equipped with more memory, cores, and storage, and faster I/O, than the preceding generation. In 2018, server ASPs increased by 15%, partly due to the refresh of the Intel Xeon Scalable platform. We anticipate an uplift in ASP with Intel’s ramp of the Whitley server platform based on the 10 nm process.
  • Accelerated Computing: As the deployment of accelerated servers continues to grow, we expect that data centers will be better optimized to process AI and ML workloads with more powerful, denser, and costlier accelerated servers equipped with AI accelerator chips such as GPUs and FPGAs. Some of the Tier 1 Cloud service providers have deployed accelerated servers using internally developed AI chips. We estimate that the attach rate of servers with AI accelerators will reach double-digits by 2025.
  • Smart NICs: These specialized network cards typically have an on-board programmable processor and can be configured to offload specific network, storage, and security services from the CPU, providing flexibility for software-defined and converged networks. Smart NICs, which carry a 3 to 5X price premium compared to standard NICs, could further inflate server ASP if deployed at scale.

These advances to server architecture will correspondingly drive innovations in the data center, such as the need for more advanced cooling solutions, additional network capacity, etc. To learn more about Data Center Infrastructure and Server spending, or if you need to access the full report, please contact us at dgsales@delloro.com.

 

About the Report:

Dell’Oro Group’s Data Center Capex 5-Year Forecast Report details the data center infrastructure capital expenditures of each of the ten largest Cloud service providers, as well as the Rest-of-Cloud, Telco, and Enterprise customer segments. Allocation of the data center infrastructure capex for servers, storage systems, and other auxiliary data center equipment is provided. The report also discusses the market and technology trends that can shape the forecast.


2020 was a tumultuous year in which the industry had to reevaluate its data center deployment strategy. While COVID-19 and the ensuing recession did weigh down 2020 data center capex growth to just 2%, the slowdown in spending was not as severe as originally feared. Some Cloud service providers have continued to expand their infrastructure to support increased internet usage and work-from-home dynamics, while a great deal of uncertainty persists in other industry sectors. Our 2021 outlook is more optimistic, with a projection of 10% data center capex growth. We identify the following key trends that could shape the dynamics of data center capex in 2021.

Cloud spending to return to higher growth:

This may not be a surprise given the surge in demand for Cloud services throughout the pandemic. But we project that all of the Top 10 Cloud service providers will grow their data center capex by double digits in 2021 as they revert to an expansion cycle. Data center suppliers such as processor, memory, storage, and optics vendors have positive sentiment going into 2021 and have been proactively expanding capacity.

Soft Enterprise IT spending will persist:

Overall enterprise spending is forecast to see tepid growth in 2021. While high-end enterprises are likely to invest in a hybrid Cloud strategy, small and medium enterprises have been making a secular shift to Cloud computing. This trend has materialized simply because it is less expensive for smaller enterprises to rely on Public Cloud than to build and operate their own data centers. We expect this trend to accelerate in light of the macroeconomic uncertainties created by the pandemic.

System pricing expected to be higher, creating upside revenue growth for vendors:

While we may see some deflationary commodity pricing in 1H21, inflationary commodity pricing could return in 2H21 as global demand increases, driving system average selling prices higher. Furthermore, Intel’s new processor platform, Ice Lake, which will ramp in 2021, will enable deeper memory, more storage, and faster interconnects, and could drive up server cost. Accelerated compute servers, which can be many times the cost of a general-purpose server, should see greater adoption as well.

Accelerated computing further materializes:

As the number of artificial intelligence (AI) applications and use cases increases, so will the deployment of accelerated compute servers with specialized processors, such as GPUs, FPGAs, and custom ASICs, along with enhanced cooling designs such as liquid cooling. These specialized processors are designed to handle AI inference and training workloads much more efficiently than general-purpose processors such as CPUs. Smart NICs and data processing units (DPUs) are innovations that will nicely complement CPUs in increasing the efficiency and flexibility of the data center.

Intel will continue to dominate the data center CPU market, despite new entrants:

While AMD has made share gains and Intel’s data center business slipped in 2020, we project Intel to retain a strong leadership position going into 2021. Intel still has a commanding share among the Top 10 Cloud service providers, and this is a market that will undergo an expansion in 2021 with the ramp of the new Intel Ice Lake processor platform. Nevertheless, we expect all the major vendors to increase their offerings of both Intel- and AMD-based x86 enterprise servers. Furthermore, we believe that there are opportunities for ARM as well in niche markets and applications.


The 3rd AI Hardware Summit took place virtually earlier this month and it was exciting to see how quickly the ecosystem has evolved and to learn of the challenges the industry has to solve in scaling artificial intelligence (AI) infrastructure. I would like to share highlights of the Summit, along with other notable observations from the industry in the area of accelerated computing.

The proliferation of AI has emerged as a disruptive force, enhancing applications such as image and speech recognition, security, real-time text translation, autonomous driving, and predictive analytics. AI is driving the need for specialized solutions at the chip and system level in the form of accelerated compute servers optimized for training and inference workloads at the data center and the edge.

The Tier 1 Cloud service providers in the US and China lead the way in the deployment of these accelerated compute servers. While accelerated compute servers still represent a fraction of the Cloud service providers’ overall server footprint, this market is projected to grow at a double-digit compound annual growth rate over the next five years. Most accelerated server platforms shipped today are based on GPUs and FPGAs from Intel, Nvidia, and Xilinx, but the number of new entrants, especially for the edge AI market, is growing.

However, these Cloud service providers, or enterprises deploying AI applications, simply cannot increase the number of these accelerated compute servers without addressing bottlenecks at the system and data center level. I have identified some notable technology developments that need to be addressed to advance the proliferation of AI:

    • Rack Architecture: We have observed a trend of these accelerated processors shifting from a distributed model (i.e., one GPU in each server) to a centralized model consisting of an accelerated compute server with multiple GPUs or accelerated processors. These accelerated compute servers have demanding thermal dissipation requirements, oftentimes requiring unique solutions in form factor, power distribution, and cooling. Some of these systems are liquid-cooled at the chip level, as we have seen with the Google TPU, while more innovative solutions such as liquid immersion cooling of entire systems are being explored. As these accelerated compute servers become more centralized, resources are pooled and shared among many users through virtualization. NVIDIA’s recently launched Ampere-based A100 takes virtualization a step further, allowing a single A100 GPU to be partitioned into up to seven GPU instances.
    • CPU: The GPU and other accelerated processors are complementary and are not intended to replace the CPU for AI applications. The CPU can be viewed as the taskmaster of the entire system, managing a wide range of general-purpose computing tasks, with the GPU and other accelerated processors performing a narrower range of more specialized tasks. The number of CPUs also needs to be balanced with the number of GPUs in the system; adequate CPU cycles are needed to run the AI application, while sufficient GPU cores are needed to parallel process large training models. Successive CPU platform refreshes, either from Intel or AMD, are better optimized with processing inference frameworks and libraries, and support higher I/O bandwidth within and out of the server.
    • Memory: My favorite session from the AI Hardware Summit was the panel discussion on memory and interconnects. During that session, experts from Google, Marvell, and Rambus shared their views on how memory performance can limit the scaling of large AI training models. The abundance of data that needs to be processed in memory for large training models on these accelerated compute servers is demanding greater amounts of memory. More memory capacity means more modules and interfaces, which ultimately degrades chip-to-chip latencies. One proposed solution is the use of 3D stacking to package chips closer together. High Bandwidth Memory (HBM) also helps to minimize the trade-off between memory bandwidth and capacity, but at a premium cost. Ultimately, the panel agreed that there needs to be an optimal balance between memory bandwidth and capacity within the system, while adequately addressing thermal dissipation challenges.
    • Network Connectivity: As these accelerated compute nodes become more centralized, a high-speed fabric is needed to ensure the flow of huge amounts of unstructured AI data over the network to accelerated compute servers for in-memory processing and training. These connections can be server-to-server as part of a large cluster, using NVIDIA’s NVLink and InfiniBand (which NVIDIA acquired with Mellanox). Ethernet, now available up to 400 Gbps, is an ideal choice for connecting storage and compute nodes within the network fabric. I believe that these accelerated compute servers will be the most bandwidth-hungry nodes within the data center, and will drive the implementation of next-generation Ethernet. Innovations, such as Smart NICs, could also be used to minimize packet loss, optimize network traffic for AI workloads, and enable the scaling of storage devices within the network using NVMe over Fabrics.
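The memory bandwidth-versus-capacity trade-off the panel described can be illustrated with rough bandwidth-per-gigabyte arithmetic. The figures below are approximate public specs for an HBM2 stack and a DDR4-3200 channel, treated here as assumptions for illustration.

```python
# Rough bandwidth-per-GB comparison of an HBM2 stack versus a DDR4
# channel. Numbers are approximate public figures used only to
# illustrate the trade-off, not authoritative specs.

hbm2_stack = {"capacity_gb": 8, "bandwidth_gBps": 256}    # ~256 GB/s per 8 GB stack
ddr4_channel = {"capacity_gb": 64, "bandwidth_gBps": 25.6}  # DDR4-3200, one 64 GB DIMM

def bw_per_gb(mem):
    """Bandwidth (GB/s) available per GB of capacity."""
    return mem["bandwidth_gBps"] / mem["capacity_gb"]

print(f"HBM2: {bw_per_gb(hbm2_stack):.1f} GB/s per GB")     # high bandwidth, low capacity
print(f"DDR4: {bw_per_gb(ddr4_channel):.1f} GB/s per GB")   # high capacity, low bandwidth
# Adding DDR4 DIMMs grows capacity far faster than bandwidth, which is
# why large training models push designs toward HBM (at a cost premium)
# or toward 3D stacking to shorten chip-to-chip paths.
```

The same per-resource framing extends to the network bullet: as GPU counts per node grow, fabric bandwidth per accelerator becomes the next ratio to balance.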

I anticipate that specialized solutions in the form of accelerated computing servers will scale with the increasing demands of AI, and will comprise a growing portion of the data center capital expenditures. Data centers could benefit from the deployment of accelerated computing, and would be able to process AI workloads more efficiently with fewer, but more powerful and denser accelerated servers. For more insights and information on technology drivers shaping the server and data center infrastructure market, take a look at our Data Center Capex report.