As we enter a new decade, I would like to share my view on the key trends that will shape the server market at both the cloud and the edge. While various use cases for enterprises running workloads in on-premises data centers will persist, investments will continue to pour into the major public cloud service providers (SPs). Workloads will continue to consolidate to the cloud as cloud data centers scale, gain efficiencies, and deliver transformative services. Over the longer term, we forecast that compute nodes could shift from centralized cloud data centers to the distributed edge as new use cases arise that demand lower latency. The following are five technology and market trends in the areas of compute, storage, and network to watch:

  1. Evolution of Server Architecture

Servers continue to densify and increase in complexity and price. Higher-end processors, novel cooling techniques, accelerator chips, higher-speed interfaces, deeper memory, flash storage adoption, and software-defined architectures are expected to raise server price points. Data centers will continue to strive to run more workloads on fewer servers in order to minimize power consumption and footprint. Storage will continue to shift toward server-based, software-defined architectures, dampening demand for specialized external storage systems.

  2. Software-defined Data Centers

Data centers will continue to become increasingly virtualized. Software-defined architectures, such as hyperconverged and composable infrastructure, will be employed to drive higher degrees of virtualization. Disaggregation of server resources, such as GPUs, storage, and compute, will continue to rise, enabling enhanced resource pooling and hence driving higher utilization. IT vendors will continue to introduce hybrid/multi-cloud solutions and expand their consumption-based offerings, emulating a cloud-like experience in order to remain relevant.
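The utilization benefit of pooling disaggregated resources can be sketched numerically. In the toy example below (all demand figures are hypothetical, purely for illustration), dedicated provisioning must cover each node's individual peak, while a shared pool only needs to cover the combined peak, which is smaller whenever the peaks do not align:

```python
# Hypothetical illustration of why pooling disaggregated resources
# (e.g., GPUs) raises utilization.

# Per-node GPU demand over five time slots (hypothetical numbers).
demand = [
    [4, 1, 0, 2, 1],  # node A
    [0, 3, 4, 1, 0],  # node B
    [1, 0, 2, 4, 3],  # node C
]

# Dedicated: each node is sized for its own peak demand.
dedicated = sum(max(node) for node in demand)           # 4 + 4 + 4 = 12 GPUs

# Pooled: the shared pool is sized for the aggregate peak.
pooled = max(sum(slot) for slot in zip(*demand))        # max(5, 4, 6, 7, 4) = 7 GPUs

avg = sum(sum(slot) for slot in zip(*demand)) / len(demand[0])
print(f"dedicated GPUs: {dedicated}, pooled GPUs: {pooled}")
print(f"utilization dedicated: {avg/dedicated:.0%}, pooled: {avg/pooled:.0%}")
```

The same average load is served with fewer provisioned devices, which is the mechanism behind the higher utilization claimed above.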

  3. Cloud Consolidation

The major public cloud SPs – AWS, Microsoft Azure, Google Cloud, and Alibaba Cloud (in Asia Pacific) – will continue to gain share as the majority of small-medium enterprises and certain large enterprises embrace the cloud. Smaller cloud providers and other enterprises will inevitably migrate their IT infrastructure to the public cloud, given its greater flexibility, richer feature set, improving security, and strong value proposition. The major public cloud SPs continue to scale and drive toward higher efficiencies. Over the longer term, growth among the large cloud SPs is projected to moderate, due to ongoing efficiency improvements from the server rack to the data center, and consolidation of cloud data centers.

  4. Emergence of Edge Computing

Centralized cloud data centers will continue to drive the market within the forecast period of 2019 to 2024. At the end of this time frame and beyond, edge computing could become more impactful in driving IT investments: as new use cases emerge, it has the potential to shift the balance of power from cloud SPs to telecom SPs and equipment vendors. We anticipate that cloud SPs will respond by developing edge capabilities internally and externally, through partnerships or acquisitions, in order to extend their own infrastructure to the edge of the network.

  5. Advances in Server Network Connectivity

From a server network connectivity standpoint, 25 Gbps is expected to dominate the market and replace 10 Gbps across a wide range of applications. The large cloud SPs will strive to increase throughput, driving the SerDes technology roadmap and enabling Ethernet connectivity at 100 Gbps and 200 Gbps. New network architectures, such as Smart NICs and multi-host NICs, have the opportunity to drive higher efficiencies and streamline the network for scale-out architectures, provided that their price and power premiums over standard solutions are justified.

This is an exciting time, as increasing demand for cloud computing is driving the latest advances in digital interfaces, AI chip development, and software-defined data centers. In the transition from the enterprise to the cloud, some vendors came out ahead while others were left behind. We will watch closely to see how vendors and service providers capitalize on the transition to the edge.

In March, I attended the 2019 Open Compute Project (OCP) Global Summit at the San Jose Convention Center. The event continues to grow, with 3,600 participants this year, including a broad representation of the vendors and end users who make up the OCP community. We continue to see innovation in the server rack for hyperscale cloud, edge computing, and enterprise environments built on OCP-based designs.

Following are three key takeaways in server network connectivity:

1.  The OCP NIC 3.0 (Network Interface Card) specification continues to evolve and is Smart NIC-ready.

The OCP NIC 3.0 specification addresses shortcomings of the OCP NIC 2.0 specification in the areas of the thermal and mechanical profile, connector placement, and board space. Key members, including Broadcom, Facebook, Intel, and Mellanox, contributed to the 3.0 development process. As it currently stands, the OCP NIC 3.0 specification is defined in two form factors: SFF (small form factor) and LFF (large form factor). The LFF form factor is designed to accommodate accelerated processors, such as an ARM SoC or FPGA for Smart NIC applications.

A Smart NIC designed for OCP is a wise future-proofing strategy. In Dell’Oro Group’s January 2019 Controller and Adapter Market 5-Year Forecast report, I projected that Smart NICs will become a $500 M market by 2023, representing 20 percent of the total controller and adapter market. Furthermore, most of the early adopters of Smart NICs are hyperscale and telecom data centers, which are also expected to widely deploy OCP-based designs within the server rack.
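As a quick sanity check on the forecast figures cited above, the $500 M Smart NIC projection and its 20 percent share together imply a total controller and adapter market size:

```python
# Back-of-the-envelope check of the forecast figures: if Smart NICs
# reach $500 M and that is 20% of the total controller and adapter
# market, the implied total market size follows directly.

smart_nic_revenue_m = 500   # $M, Smart NIC segment by 2023 (from the forecast)
smart_nic_share = 0.20      # Smart NIC share of the total market

total_market_m = smart_nic_revenue_m / smart_nic_share
print(f"implied total controller/adapter market in 2023: ${total_market_m:,.0f} M")
# → implied total controller/adapter market in 2023: $2,500 M
```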

2.  The introduction of 56 Gbps PAM-4 NICs enables server connectivity to 400 GbE networks.

Another important development is the availability of Ethernet adapter products with 56 Gbps PAM-4 SerDes lanes from Broadcom (NetXtreme), Intel (800 series, Columbiaville), and Mellanox (ConnectX-6). All are available in the OCP 3.0 form factor. The SerDes lane transition from 28 Gbps NRZ to 56 Gbps PAM-4 will enable Ethernet connectivity up to 100 GbE (based on two SerDes lanes) or 200 GbE (based on four SerDes lanes). We see strong demand for server connectivity at 100 GbE and higher speeds, especially from Tier 1 cloud service providers, as this segment transitions to 400 GbE networking at the top-of-rack (ToR) switch over the next one to two years. (See Dell’Oro’s press release, “Cloud Service Providers Drove Demand Volatility of High-Speed Network Adapters.”)
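The lane arithmetic behind these speeds can be sketched as follows. PAM-4 carries two bits per symbol versus NRZ’s one, so at a comparable symbol rate each lane roughly doubles its data rate, and port speed is simply lane rate times lane count. The figures below are nominal data rates, ignoring encoding overhead:

```python
# Nominal Ethernet port speed from per-lane data rate and lane count.
def port_speed_gbe(lane_rate_gbps: float, lanes: int) -> float:
    return lane_rate_gbps * lanes

NRZ_LANE = 25    # Gbps nominal per 28 Gbps-class NRZ lane
PAM4_LANE = 50   # Gbps nominal per 56 Gbps-class PAM-4 lane

print(port_speed_gbe(NRZ_LANE, 4))    # 100 GbE: 4 x 25G NRZ
print(port_speed_gbe(PAM4_LANE, 2))   # 100 GbE: 2 x 50G PAM-4
print(port_speed_gbe(PAM4_LANE, 4))   # 200 GbE: 4 x 50G PAM-4
print(port_speed_gbe(PAM4_LANE, 8))   # 400 GbE: 8 x 50G PAM-4 at the switch
```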

3.  Multi-host NICs have the potential to streamline and densify server network connectivity.

It is exciting to see multi-host NICs gaining additional support from vendors. This technology has the potential to streamline the network by reducing ToR connections while enabling a dense compute rack architecture. Mellanox was first to market with multi-host NICs for Yosemite servers, which provide 50 Gbps Ethernet connectivity to four server nodes. At OCP, both Broadcom and Netronome announced network adapter products supporting multi-host connectivity for the Yosemite platform. Broadcom’s announcements are based on the NetXtreme series with the Thor chipset, which provides single- and multi-host connectivity up to 200 GbE with a PAM-4 solution. Netronome’s solution, the Agilio CX, is also a Smart NIC and provides connectivity up to 50 GbE.
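The ToR streamlining effect is easy to quantify. With one NIC, and hence one ToR port, shared by four server nodes as in the Yosemite platform, the cable and port count per rack drops proportionally (the rack size below is an illustrative assumption, not from a specific deployment):

```python
# Hypothetical sketch of the ToR cabling reduction from multi-host NICs.
nodes_per_rack = 48          # hypothetical dense compute rack
nodes_per_nic = 4            # Yosemite: four nodes share one multi-host NIC

single_host_tor_ports = nodes_per_rack                  # one cable per node
multi_host_tor_ports = nodes_per_rack // nodes_per_nic  # one cable per NIC

print(f"ToR ports: {single_host_tor_ports} -> {multi_host_tor_ports} "
      f"({single_host_tor_ports // multi_host_tor_ports}x fewer)")
```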

I believe that OCP will continue to grow in strength as the industry transitions from off-the-shelf equipment to open designs optimized to end-users’ technical and cost-of-ownership requirements.

Huawei Connect 2018 was held in Shanghai from October 10 to 12 and drew over 20,000 attendees from around the world. It was a fascinating week, led by key Huawei leaders sharing their Artificial Intelligence (AI) strategy and their vision of an AI-powered intelligent world. For this event, I was looking forward to seeing how Huawei is transforming itself from primarily a provider of IT hardware solutions into a provider of full-stack cloud services and applications.

Given that my interest lies in the areas of compute, server network connectivity, and cloud data center infrastructure, here are my main takeaways from the event:

AI Chips: Huawei launched the Ascend 910 and Ascend 310 at Huawei Connect 2018, aimed at accelerating AI workloads. The Ascend 910 is designed for the core data center, whereas the Ascend 310 is suited to low-power edge computing. Both chips are designed by HiSilicon, a company owned by Huawei. The Ascend announcement is groundbreaking because it is a rare instance of a manufacturer launching a viable alternative to accelerated processors, such as GPUs from NVIDIA or FPGAs from Intel or Xilinx, for AI workloads. Google, with its huge engineering resources, has also deployed its own accelerated processor, the TPU, in its data centers. However, Huawei claims that a cluster of Ascend 910 chips can outperform a comparable pod of TPU v3 by a factor of 2.5X in floating-point operations. More importantly, this is the first time a Chinese manufacturer has developed a seemingly competitive accelerated processor, which aligns with China’s long-term goal of becoming self-reliant in the IT hardware market. I believe the entry of another silicon vendor for accelerated chipsets, especially a foreign one, will drive additional innovation and adoption of AI technologies.

Smart NIC: Huawei announced a Smart NIC with an ASIC, also powered by HiSilicon, for applications such as offloading TCP/IP processing from the CPU. Initially, this Smart NIC will likely be deployed in Huawei’s own cloud servers, but it could eventually be sold alongside Huawei’s compute and storage portfolio to Huawei’s enterprise customers. The Smart NIC market started to heat up in 2018, with no fewer than six major network adapter vendors, including Intel, Broadcom, and Mellanox, announcing or qualifying new products. Smart NIC deployment is currently still fragmented and limited to several hyperscalers. I question whether the benefits of Smart NICs can outweigh their high price premium and power consumption, factors that inhibit more widespread deployment of Smart NICs in the data center. However, Huawei’s vertical integration efforts might justify the economics of deploying Smart NICs in its cloud data centers.
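The premium-versus-benefit question can be framed as a simple break-even calculation: the offload pays off when the value of the CPU cores it frees exceeds the NIC’s price premium plus its extra power cost over the server’s life. All figures below are hypothetical, purely for illustration:

```python
# A simple, hypothetical break-even sketch for Smart NIC economics.
nic_premium_usd = 300    # Smart NIC price premium over a standard NIC
extra_power_w = 25       # additional NIC power draw
power_cost_kwh = 0.10    # $/kWh, including cooling overhead
years = 4                # server depreciation period

cores_freed = 4          # CPU cores no longer burned on packet processing
core_value_usd = 150     # amortized value per core over the same period

power_cost = extra_power_w / 1000 * 24 * 365 * years * power_cost_kwh
total_cost = nic_premium_usd + power_cost
benefit = cores_freed * core_value_usd

print(f"cost: ${total_cost:.0f}, benefit: ${benefit:.0f}, "
      f"{'worth it' if benefit > total_cost else 'not worth it'}")
```

Under vertical integration, a cloud operator can push several of these inputs in its favor (cheaper silicon, better power management), which is why Huawei’s in-house approach might change the economics.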

Cloud Infrastructure: Huawei has been ramping and advancing its infrastructure to better compete against other public cloud providers, such as Alibaba Cloud. Huawei currently operates data centers worldwide and is developing state-of-the-art modular data centers with redundant availability zones to optimize utilization and improve efficiency. In terms of absolute scale, Huawei has a long way to go before catching up to other hyperscalers in capacity. However, I believe that Huawei is in a strong position to grow its public cloud business, given the company’s penetration of enterprise accounts and its standing as the only vendor with an integrated cloud platform, from accelerated processors to a global network of cloud data centers.

While the adoption of AI technologies is still nascent, its growth has been explosive, with numerous potential applications that could change our daily lives. Smart NICs are another area I am closely tracking. It remains to be seen whether Huawei’s internal development of its Smart NIC will pay off and drive a strong use case. At the next Huawei Connect event, I look forward to seeing advances in the development and deployment of Huawei’s own silicon solutions in the fabric of Huawei’s future generations of data centers.

To learn more about my current market research coverage: