The AI industry is at a significant inflection point, marked by the rapid rise of large language models and generative AI applications such as OpenAI’s ChatGPT. What sets these emerging AI applications apart is the sheer number of parameters they must manage. Some models deal with billions, or even trillions, of parameters, necessitating the use of thousands or tens of thousands of GPUs, TPUs, or other types of accelerated processors. Connecting these accelerated servers into large clusters requires a data center-scale fabric known as the AI back-end network, which differs from the traditional front-end network used to connect general-purpose servers.
Furthermore, AI workloads have characteristics that differ vastly from those of traditional general-purpose compute workloads, with important implications for the type of network required to run them.
The AI Networks for AI Workloads Advanced Research Report (ARR) aims to answer critical market questions such as:
- What are the unique requirements of AI networks?
- What are the various network design options and topologies to support AI workloads?
- What is the total market opportunity for AI networks?
- How big is the back-end network market vs. the front-end network market?
- What are the current and future shares of Ethernet vs. InfiniBand, and what use cases are driving each of these protocols?