
Very little is written about Huawei’s optical DWDM technology, but that doesn’t mean the company hasn’t made some big waves in the industry. We had the chance to sit down with the Huawei optical team, led by Gavin Gu, at MWC 2026 to learn about their latest coherent DWDM technology. This is what we learned.

Huawei has started shipping its next-generation high-performance coherent DSP in the first quarter of 2026 as an embedded assembly in a muxponder with two ports of 2.0 Tbps coherent wavelengths. The client ports in the module include a mix of 100 Gbps, 400 Gbps, and 800 Gbps. These muxponders are housed in the company’s DWDM systems, namely the OSN 9800 K12 and K36. And of course, Huawei’s new module delivers wavelengths across the entire Super C-band and Super L-band, which is increasingly important as wavelength channels get wider.

As is common across the industry, the headline speed used to identify a coherent technology is just that: the highest wavelength speed the hardware is capable of. One of the benefits of modern coherent line cards is that the symbol rate and modulation can be adjusted to deliver different wavelength speeds and performance. Huawei presented a few of those options in a chart (Figure 1) showing the unregenerated signal distance at different wavelength speeds. Perhaps the most important speeds to look at are 2.0 Tbps and 800 Gbps. We say this because the maximum distance at 2.0 Tbps gives us a good sense of the technology, and the maximum distance at 800 Gbps tells us whether the muxponder will meet current customer requirements for unregenerated span lengths when networks are upgraded to 800 Gbps over the next few years.
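To make that trade-off concrete, here is a minimal sketch of the arithmetic behind adjustable coherent line rates. The symbol rate, modulation formats, and overhead factor are illustrative assumptions, not Huawei’s published figures:

```python
# Illustrative only: approximate net data rate of a dual-polarization
# coherent wavelength. Symbol rate and overhead are assumed values.

def wavelength_rate_gbps(symbol_rate_gbaud, bits_per_symbol,
                         polarizations=2, overhead=0.15):
    """Net rate = symbol rate x bits/symbol x polarizations, less FEC/framing overhead."""
    raw = symbol_rate_gbaud * bits_per_symbol * polarizations
    return raw * (1 - overhead)

# Denser constellations raise the rate but shorten the unregenerated
# reach, which is exactly the trade-off Figure 1 charts.
for label, bits in [("QPSK", 2), ("16QAM", 4), ("64QAM", 6)]:
    print(label, round(wavelength_rate_gbps(240, bits)), "Gbps")
```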

How does Huawei’s new coherent wavelength technology compare to the rest of the industry? We did a simple comparison between the industry and Huawei (Figure 2), looking specifically at high-performance coherent muxponders that were generally available as of 1Q 2026. Of course, this doesn’t give a deep assessment of Huawei’s technology or even that of the industry. But at a high level, we think it gives a good sense of where the company stands.

Figure 2: Currently Shipping High-Performance Coherent Line Cards

Two key differences show up in this comparison. The first is that Huawei’s DSP uses a larger semiconductor process node, while the industry is at 3 nanometers (nm). This difference puts Huawei at a slight disadvantage at the ASIC level, but the company can still deliver 2.0 Tbps at 80 km, which it proved in a live demonstration. Usually, a DSP using a larger process node would also consume more power. However, in Huawei’s newest muxponder, power consumption is lower at 0.1 Watts/Gbps, compared to the industry average of 0.125 Watts/Gbps. We believe this power advantage comes from Huawei’s extensive in-house development of every component inside a coherent optical module (tunable laser, receiver, TIA, driver, modulator, and DSP), along with its expertise in photonic packaging and manufacturing processes (Huawei has its own state-of-the-art manufacturing, assembly, and test facility for optical modules, which we once visited).
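A quick back-of-the-envelope check shows what those per-bit figures imply at the module level. The loading assumption here (both 2.0 Tbps ports active) is ours, not Huawei’s:

```python
# Rough check of the per-bit power figures cited above; the loading
# assumption (both 2.0 Tbps ports active) is ours, not Huawei's.
line_rate_gbps = 2 * 2000  # two 2.0 Tbps coherent ports
for label, w_per_gbps in [("Huawei (reported)", 0.100),
                          ("Industry average", 0.125)]:
    print(f"{label}: {line_rate_gbps * w_per_gbps:.0f} W")
# The 0.025 W/Gbps gap works out to roughly 100 W per fully loaded muxponder.
```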

Also, Huawei’s advanced InP-based modulator with a distributed electrode, designed and developed internally to achieve 30% lower parasitic capacitance, could give the company a power-consumption advantage at the module level, compensating for the DSP’s higher power consumption. Then, at the system level, Huawei also internally develops and manufactures the major components of its optical line systems, including its pump lasers and WSS modules, giving the company greater control over technology performance and time-to-market. As a result, Huawei is constantly innovating its optical system design, from the chip level to the system level.

The 2 Tbps technology is now ready, with the first wave of deployments underway. During MWC 2026, Huawei highlighted six successful trials of its 2 Tbps-capable muxponders across Europe, Asia Pacific, the Middle East, and Latin America for different applications, including terrestrial backbone, data center interconnect (DCI), and submarine. To address the rising demand for submarine network capacity, Huawei has also launched a muxponder specifically for this application, as demonstrated in one of its trials, which achieved 28 Tbps over a single fiber of an undersea cable. In addition, the company proudly announced that the first commercial deployment, with a major European Tier-1 communication service provider (CSP), is currently in progress.


My first post-RSAC 2026 blog argued that the more important story was not who could assemble the broadest category slide, but where security architecture was actually consolidating. This second blog goes deeper into the meetings themselves. Across 30+ conversations and events, from the largest platforms to early specialists, the same pattern kept recurring: the market is not collapsing into one monolithic control plane, but it is consolidating around a smaller number of them inside the existing pillars of identity, endpoint, network, cloud, application, data, and security operations. What stood out most was not only where those control planes are getting stronger, but how unevenly product maturity is catching up to the architecture being described.

 

Why the Meeting Set Mattered

The breadth of the meeting set mattered because it helped separate conference noise from patterns that repeated across very different vendors. The conversations ranged from companies such as Microsoft, Cisco, Google, Palo Alto Networks, Fortinet, Netskope, Cloudflare, and Broadcom to smaller and earlier companies with narrower starting points, such as AppGate, Cloudbrink, Helmet Security, Neon Cyber, and Zenarmor. That range made it easier to see which themes were structural rather than promotional. It also reinforced that the market still maps back to the existing taxonomy. Identity remains the trust plane. Endpoint remains the local execution plane. Network Security remains the distributed enforcement plane, with SASE increasingly the most ambitious effort to unify that plane across multiple edges. Cloud Security remains the workload and infrastructure context plane, with Cloud-Native Application Protection Platform (CNAPP) increasingly central to prioritization and remediation. Application Security remains the software assurance and remediation plane. Data Security is becoming more central in the policy and governance plane. Security Operations remains the operating layer that turns all of that into action.

That broader structure also helps make sense of another shift that surfaced repeatedly during the week. For much of the past two decades, enterprise security could increasingly assume a user-to-cloud model: users and endpoints on one side, centralized applications and data on the other, with the network in between. That assumption is weakening. Applications, data, and increasingly AI execution are becoming distributed again across endpoints, browsers, branches, private clouds, public clouds, and SaaS. That makes the control-plane problem less about how users reach centralized resources and more about how trust, telemetry, policy, and enforcement remain coherent as both actors and execution environments become more distributed.

 

AI Is Becoming a Force Multiplier for Action Governance

The most consistent message from the meetings was not that AI has created a wholly separate security universe. It was that AI is accelerating a broader move toward action governance. The market is spending less time asking how to secure a model in isolation and more time asking who or what is acting, what it can access, how it is observed, and what policy should govern that behavior.

Microsoft framed that shift through agent identity, registry, observability, and the extension of existing controls across Entra, Defender, Purview, Intune, and Sentinel into agentic environments. Cisco described AI Defense less as a point feature than as a trust layer that can sit behind multiple enforcement points. Even smaller specialists used the same logic, though from a much earlier starting point. In that sense, AI is not the only reason the architecture is shifting. It is an accelerant in both directions: it expands the threat surface enterprises need to govern, and it improves what security platforms can do in threat hunting, investigation, and response.

That distinction matters for vendors and market watchers. The real competitive question is not who can attach the term “AI security” to the most products. It is who can connect authorization, observability, policy, and control into an operating model that enterprises can actually use. The stronger vendors increasingly sounded less focused on treating AI as an isolated layer and more focused on absorbing it into broader control planes.

 

Data Security Is Becoming More Central

If one pillar moved closer to the center of gravity during the week, it was Data Security. That does not mean Data Security replaces the other pillars. It means it increasingly supplies the policy logic that the others enforce. The taxonomy already points in that direction by describing Data Security as the system of record for sensitive-data policy, exposure, and misuse, with enforcement or informed action extending into SSE, CNAPP, Email Security, and AI-related controls. The meetings reinforced exactly that point.

Cyera made the argument most directly by repeatedly framing AI security as fundamentally a data problem. Netskope extended its AI-security story from its existing cloud-security and SASE base into guardrails, red teaming, and posture. Zscaler treated inline AI governance as a natural extension of its control path because that is where traffic is already inspected. Skyhigh tried to widen the conversation from SSE into a broader, data-centric platform story anchored in hybrid enforcement, unified policy, and regulated-industry fit. Even where vendors differed on packaging or scope, the broader direction was similar: data security is becoming more central because the enterprise increasingly needs a policy that follows data consistently across the web, cloud, endpoints, email, and AI-related interaction points.

That is one of the clearest bridges between the control-plane discussion and the tracked markets. SASE increasingly intersects with Data Security because distributed enforcement without a coherent data policy does not scale well. CNAPP increasingly intersects with Data Security because workload and infrastructure context alone are insufficient if the policy layer around sensitive data is disconnected. Data Security is not becoming the control plane for everything, but it is becoming more central to how the others coordinate.

 

Platform Claims Are Facing a Harder Test

The week also made the platform question more concrete. The real issue is no longer whether a vendor participates in several adjacent markets. The harder question is whether it has shared policy, telemetry, analytics, and workflows across multiple control points. That was already the pre-RSAC test, and the meetings gave it more substance.

Microsoft remains one of the clearer examples of a platform claim grounded in coordination across identity, data, endpoint, and SecOps. Cisco is trying to absorb more of its AI, browser, branch, firewall, and SSE logic into a more unified operating model. Broadcom is trying to refactor endpoint, network, and data controls into a tighter story around integration and lower-friction deployment. HPE is pursuing additive convergence by reusing enforcement and technology across its security and networking portfolio without forcing abrupt platform retirement. At the same time, other vendors were candid about what they are not. Akamai was more comfortable with “ecosystem” than “platform.” Cloudflare sounded stronger on composability and deployment simplification than on any claim to own every adjacent control plane. Those differences matter. The market is beginning to separate real cross-plane coordination from adjacency marketing.

This is also where SASE and CNAPP should be understood more precisely. SASE matters because it is emerging as the strongest effort to unify the distributed enforcement plane within Network Security. CNAPP matters because it is becoming the leading context and prioritization plane within Cloud Security. Neither has to become the entire security architecture to matter much.

 

The Architecture Is Moving Faster Than Adoption

If the direction of travel became clearer, the maturity gap also became harder to ignore. Repeated probing on general availability, product depth, and production readiness often produced a more cautious answer than the show-floor narrative suggested. F5 was unusually direct in stating that the market is behind the marketing and that many customers are still not ready. Skyhigh Security clearly distinguished the more stable employee-guardrail problem from the still-fluid agentic AI problem. Cloudflare was candid about the fact that some current controls are still fairly coarse. HPE described deeper prompt and file-level controls as still coming over the next several months. Broadcom made the point differently, arguing that customer readiness and trust, not missing technology alone, remain the gating issue.

That does not undercut the strategic importance of the shift. It clarifies the market’s near-term state. The more realistic progression remains discovery first, monitoring second, selective enforcement third, and only then broader operational trust. In other words, the architecture is moving faster than adoption. That matters not only for product planning, but for how investors and ecosystem participants judge which narratives are likely to monetize sooner and which remain further out on the curve.

 

What It Means for Vendors, Investors, and the Ecosystem

The most useful takeaway from RSAC 2026 is not that cybersecurity is collapsing into a single category, nor that enterprises will become fully autonomous next year. It is that the centers of gravity inside the existing pillars are becoming easier to identify. Identity is broadening. The endpoint is regaining weight as execution moves closer to the device. Network Security is converging toward distributed enforcement, with SASE as the most ambitious unifying model in that pillar. Cloud Security is converging around CNAPP as a context and prioritization plane. Data Security is becoming more central as the policy layer for the other planes. Security Operations remains the operating layer that determines whether those planes produce outcomes.

For vendors, that raises the standard. Participation in more adjacencies is not enough. What matters is whether a vendor can anchor a meaningful control plane, coordinate effectively with the others, and reduce operational burden rather than merely relocating it. For equity analysts and market watchers, it sharpens the filter between real platform progress and conference theater. For service providers, silicon suppliers, and hardware ecosystem participants, it suggests that distributed execution, hybrid placement, and enforcement locality are likely to matter more over time, not less. That is the clearer signal that emerged from the week.

The third and final installment in this RSAC 2026 series, written for current Dell’Oro clients, will take that one step further by examining what these signals mean for vendor positioning, market structure, and the watch items ahead.

 

Related RSAC 2026 blogs:

After RSAC 2026: Which Security Control Planes Are Taking Root

Beyond the Acronyms: What I Will Be Watching at RSAC 2026


At this year’s NVIDIA GTC, the narrative has moved decisively beyond the initial shift to accelerated computing. What stood out in 2026 is not just the continuation of that trend, but the expansion of AI infrastructure into a heterogeneous, domain-specific ecosystem.

As an analyst covering data center compute, the key takeaway is clear: the industry is entering its next phase—where optimization, not just scale, becomes the defining battleground.

 

From Retrieval to Generative—and Now to Reasoning Infrastructure

Hyperscaler workloads have evolved rapidly from retrieval-based systems toward generative AI, and now increasingly toward reasoning-driven architectures. Internal workloads such as search are being fundamentally re-architected around AI models, signaling a structural shift in how compute is deployed.

This transition continues to drive strong demand for accelerated computing. At Dell’Oro Group, we project global data center capex to exceed $1.7 trillion by 2030. These estimates could prove conservative given the scale of investment being signaled by hyperscalers, including multi-hundred-billion-dollar capex trajectories and long-term, large-scale infrastructure commitments.

 

The Emergence of LPUs: A Potential Inflection Point

LPUs (language processing units), particularly through NVIDIA’s partnership with Groq, represent one of the more strategically important developments. Their SRAM-based architecture is optimized for low latency and strong performance per watt, enabling lower cost per token for inference and reasoning workloads.

This introduces greater flexibility in infrastructure design. Different service tiers can be optimized independently, with throughput-oriented configurations for lower-cost services and latency-sensitive deployments for premium offerings. LPUs provide a mechanism to fine-tune this balance in ways that GPUs alone cannot fully achieve.

Early deployments suggest LPUs can be configured at meaningful density. For example, a single Groq LPU rack can integrate hundreds of processors, highlighting the degree of parallelism available for inference and reasoning workloads. In practice, such systems are likely to be deployed alongside GPU clusters, with the ratio depending on workload mix and service requirements.

If adoption reaches even modest levels, LPUs could expand the silicon TAM for domain-specific accelerators. At the same time, it remains unclear whether LPUs will primarily complement GPUs or displace portions of certain workloads as operators optimize for overall system efficiency. More broadly, LPUs underscore the growing importance of architectural specialization tailored to specific workload requirements.
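As a rough illustration of why per-token economics drive this specialization, the sketch below computes an energy-only cost per million tokens from power draw and throughput. Every number in it is hypothetical, chosen only to show the mechanics, not to benchmark any vendor:

```python
# Hypothetical numbers purely for illustration; not vendor benchmarks.
# Energy-only cost per million tokens = (power x electricity price) / throughput.

def usd_per_million_tokens(power_kw, tokens_per_sec, usd_per_kwh=0.08):
    cost_per_sec = power_kw * usd_per_kwh / 3600.0  # $ per second of operation
    return cost_per_sec / tokens_per_sec * 1_000_000

# A latency-optimized, SRAM-heavy configuration can win on energy per
# token even at lower absolute throughput, shifting tier economics.
print(usd_per_million_tokens(power_kw=120, tokens_per_sec=400_000))  # throughput tier
print(usd_per_million_tokens(power_kw=90,  tokens_per_sec=350_000))  # latency tier
```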

 

GPU Roadmap: Density and Scale Continue to Accelerate

NVIDIA continues to push aggressively on GPU density and system integration. Platforms such as Vera Rubin Ultra demonstrate this trajectory, with multi-die architectures, massive HBM capacity reaching the terabyte scale per package, and highly dense, liquid-cooled rack designs.

Future platforms such as Feynman are expected to push these limits further, increasing both compute density and system complexity. However, this rapid scaling introduces new constraints around power, cooling, and system balance. As a result, complementary architectures and more specialized components will play a growing role in maintaining overall efficiency. With compute costs remaining elevated and data center capex scaling into the hundreds of billions annually, operators will need to strategically align infrastructure with domain-specific workloads to maximize efficiency and reduce total cost of ownership.

 

Interconnects: Balancing Standards and Proprietary Innovation

Interconnect strategy remains central to NVIDIA’s roadmap. The company continues to balance proprietary innovation with industry standards, investing in both InfiniBand and Ethernet for scale-out connectivity while advancing NVLink as the backbone of scale-up architectures.

As scale-up domains expand, NVLink will increasingly need to extend beyond the rack and, over time, into the optical domain. This evolution is necessary to support larger, more tightly coupled compute fabrics, but also introduces new technical challenges.

The expansion of scale-up capabilities naturally raises the question of whether they could displace portions of traditional scale-out networking. In practice, both architectures will need to evolve in parallel. Scale-up enables higher performance within tightly coupled systems, while scale-out remains essential for resilience, workload distribution, and efficient utilization across clusters. This is increasingly true not only for training but also for inference, where distributed workloads and service-level requirements demand flexibility.

NVIDIA is reducing reliance on PCIe-based x86 systems by positioning NVLink as an alternative interconnect. With initiatives such as NVLink Fusion and the development of its own CPU roadmap, the company is positioning NVLink as a broader system fabric that could extend beyond GPUs.

 

Connectivity, Networking, and System-Level Optimization

Connectivity is rapidly emerging as one of the primary constraints in next-generation AI infrastructure. Current systems are largely built on 200 Gbps SerDes, but the industry is already looking ahead to 400 Gbps SerDes. However, the transition to 400 Gbps presents significant challenges in signal integrity, power consumption, and packaging complexity, making the timeline aggressive and execution uncertain.

In this context, NVIDIA’s vertically integrated approach provides a meaningful advantage. Its control over InfiniBand technology, including SerDes development, allows it to move ahead of standard Ethernet ecosystems when necessary, particularly when industry standards lag behind system requirements.

At the same time, networking is no longer just about bandwidth. Smart NICs and DPUs, particularly NVIDIA’s BlueField platform, are becoming increasingly central to system architecture, with the market projected to grow at a 30% CAGR over the next five years. DPUs are expanding into broader roles within the AI infrastructure, managing data movement between compute, storage, and CPU domains while offloading networking and orchestration tasks from primary processors.
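For context on that projection, a 30% CAGR compounds quickly. The one-liner below shows the implied five-year multiplier without assuming any particular base market size:

```python
# Five-year multiplier implied by a 30% CAGR (no base market size assumed).
cagr, years = 0.30, 5
print(f"{(1 + cagr) ** years:.2f}x")  # ~3.71x the starting market size
```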

Taken together, these trends point toward a broader shift to system-level optimization, where performance is increasingly determined by how effectively compute, networking, and storage are integrated across the entire infrastructure stack.

 

Expanding the Platform: Beyond GPUs to Full-Stack Infrastructure

While GPUs remain the foundation of AI infrastructure, NVIDIA is clearly extending its reach across the full data center stack. Beyond domain-specific accelerators, GTC 2026 also highlighted the dense Vera CPU platform optimized for orchestrating agentic AI workloads, as well as the STX platform designed for KV cache-based context memory. A central theme underpinning this expansion is the increasing importance of co-design—bringing together compute, networking, and storage disciplines into a unified, system-level architecture rather than optimizing each component in isolation.

Taken together, these developments signal a clear expansion of NVIDIA’s total addressable market—from GPUs alone to a broader, full-stack infrastructure opportunity spanning compute, networking, and storage.

 

From Scale to Optimization: The Path Forward

NVIDIA’s rapid innovation cadence raises important questions around long-term economics, particularly as systems become more complex and capital-intensive. Maintaining a strong return on investment will depend not only on hardware performance, but on how effectively these systems can be utilized over time.

Here, NVIDIA’s software ecosystem remains a key advantage. CUDA provides continuity across generations, allowing developers to extract incremental performance improvements and enabling mixed-generation deployments that improve overall total cost of ownership.

More broadly, GTC 2026 makes it clear that the industry is moving beyond the initial phase of scaling AI infrastructure and into one defined by optimization and specialization. The shift toward heterogeneous architectures, combined with a growing focus on efficiency and workload-specific design, is reshaping how data centers are built and operated.


In a far-reaching ruling this week, the FCC added all consumer-grade routers produced in foreign countries to its existing Covered List, effectively blocking any new foreign-made router model from receiving FCC equipment authorization. Without FCC authorization, no new foreign-made routers can be imported or sold into the US market. Nearly 100% of consumer-grade routers are manufactured or assembled outside the United States, which means the FCC has significantly limited new router imports and sales until approvals or waivers are granted.

Previously authorized devices are not affected and can continue to be imported, sold, and used. Firmware support for these models is expected to continue through at least March 1, 2027, with a possible extension.

For broadband providers, existing CPE deployment and inventory remain in place. However, the policy introduces uncertainty around the timing and availability of next-generation equipment.

 

Cybersecurity Concerns Could Hurt Broadband Providers

In its decision, the FCC cited cybersecurity concerns that foreign-made routers were implicated in the Volt, Flax, and Salt Typhoon campaigns targeting vital U.S. infrastructure. All of those were serious and very concerted efforts at cyber espionage, and all have been tied back to China. The consumer routers targeted in each of these attacks came from multiple brands, including Cisco, D-Link, Netgear, Asus, and others, which generally split manufacturing and assembly among Taiwan, the Philippines, Malaysia, and Vietnam, among other countries. Some of these companies even have US-based corporate headquarters or major US sales offices. But for the FCC, the focus of its decision is not on corporate nationality, but on the country of production.

For broadband operators, the implications could be meaningful. Many ISP-supplied gateways and mesh systems are assembled by global original design manufacturers (ODMs), including Sercomm, Arcadyan, Askey, Compal, and Wistron NeWeb. There is currently not enough domestic manufacturing capacity for residential CPE and routers to fill the supply gap ISPs now face, since the vast majority are manufactured in other countries.

Cable operators running managed Wi-Fi programs—such as Comcast’s xFi and Charter’s Spectrum Wi-Fi—are particularly exposed, since those programs depend on a steady pipeline of certified gateway hardware to provision new subscribers and replace aging CPE in the field. A freeze on new model authorizations could not only limit the availability of new DOCSIS 4.0 and Wi-Fi 7 units, but also limit the new revenue associated with the managed Wi-Fi services these operators are providing.

The FCC established a Conditional Approval pathway, which may require disclosure of management structure, supply chain details, and potential plans to shift manufacturing to the US. However, there is no published timeline for how long that process will take, and no precedent for how many applications the relevant agencies can process in parallel.

Few, if any, brands known for consumer-grade routers currently build products stateside. Standing up domestic manufacturing lines—even for final assembly—is a capital-intensive, multi-year undertaking. Beyond the time it would take to get domestic manufacturing up and running, there is the cost of doing so. CPE margins are incredibly slim to begin with, which makes it almost impossible that these companies would even consider onshoring manufacturing, where input costs are significantly higher than in Southeast Asia.

In the near term, the US residential router market will stratify in ways that may not serve the underlying security objectives. Inventory of previously authorized models will be rationed, prices will rise, and innovation cycles — particularly the transition to Wi-Fi 7 and Wi-Fi 8 — will slow in the U.S. market relative to the rest of the world. Whether that outcome makes American networks more secure, or simply more expensive, is an open question.


NVIDIA’s annual developer conference (San Jose, March 16–19) has become a bellwether for data center physical infrastructure (DCPI). This year was no exception. NVIDIA DSX took center stage — a full-stack platform for designing, building, and operating AI factories that now counts over 200 partners in its ecosystem. Several major DCPI vendors—including ABB, Eaton, Mitsubishi Electric, Schneider Electric, Siemens, Trane Technologies, and Vertiv—unveiled co-designed solutions in a tightly choreographed wave of announcements. It was a concrete expression of what CEO Jensen Huang declared in his keynote: “this conference is going to cover every single layer of the five-layer cake of artificial intelligence, from land, power, and shell (the infrastructure) to chips, to the platforms, the models, and, of course, the most important, and ultimately what’s going to get this industry to take off: all of the applications.”

NVIDIA’s five-layer cake of AI

 

A Factory for Designing Factories

Among the DSX components, what particularly stood out was the Omniverse DSX Blueprint—a now generally available platform for modeling data center layouts, power topologies, and thermal behavior, using simulation-ready 3D models contributed by infrastructure partners in OpenUSD format. It is an ambitious vision at a time when the reality on the ground is that most data center design still relies on traditional CAD and BIM applications, and digital twin adoption is still in its infancy. This is NVIDIA being characteristically visionary—anticipating what will eventually become a necessity, even if today it can look like overkill.

The industry is moving from adding capacity in the teens of gigawatts a year to potentially 100GW+ in a decade or less. At that scale, without AI-assisted tools in design, construction, and commissioning, it is hard to see how projects come online at the pace required—particularly given well-known skilled labor shortages. Just as semiconductor design has become fundamentally dependent on AI tools, data center design at gigawatt scale may have no choice but to follow the same path. The Omniverse Blueprint is NVIDIA’s bet on removing the barriers to building AI factories at scale.

But while the Omniverse Blueprint captures the imagination, the conversations dominating the show floor among DCPI vendors were far more immediate. Five topics in particular stood out: the growing heterogeneity of inferencing cluster racks, the fast-approaching 800 VDC transition, the ramp-up of liquid cooling designs, the potential commoditization within the MGX ecosystem, and—as no data center discussion can avoid it—power availability.

NVIDIA’s Vera Rubin DSX AI Factory Reference Design

 

The End of the One-Rack Era

For the past two NVIDIA generations, data center designers could plan around a single workhorse rack. The Hopper and then Blackwell platforms offered a largely homogeneous building block: one compute rack architecture, scaled across rows and halls, with relatively uniform power and cooling profiles. GTC 2026 broke that pattern decisively.

NVIDIA introduced not one but several rack configurations under the Vera Rubin umbrella. The NVL72 remains the flagship—72 Rubin GPUs and 36 Vera CPUs in a fully liquid-cooled, fanless, cableless enclosure exceeding 200 kW per rack. Alongside it, a CPX rack adds Rubin CPX accelerators to the Vera Rubin superchip trays, optimized for inference performance. A Vera CPU-only rack targets inference and data preprocessing without GPU acceleration. And the LPX rack with Groq’s LPUs debuts third-party silicon within NVIDIA’s own reference design.

This is a big departure. And it is also entirely expected. A single architecture serving every workload was only tenable while AI infrastructure was synonymous with large-scale training. As workloads diversify into a variety of fine-tuning and inferencing agentic AI applications, infrastructure must follow suit. Henry Ford was able to offer the Model T alone for only so long.

For DCPI vendors, the implications are immediate. Heterogeneous clusters mean managing mixed rack densities, uneven heat loads, and varying liquid cooling requirements coexisting on the same row. This is a design and operational challenge that will demand far more flexibility from infrastructure solutions than the relatively uniform AI halls of the Hopper and Blackwell era.

 

High Voltage, High Stakes

For the biggest disruption in data center power architecture in decades, 800 VDC power distribution received remarkably little attention in NVIDIA’s official channels. Absent from Jensen’s keynote and with no significant announcements since the technical blog and whitepaper released alongside last year’s OCP Global Summit—an event we covered in a previous blog—NVIDIA’s messaging on the architecture has been sparse.

The relevance of the discussion among vendors, however, could not have been more different. 800 VDC was the talk of the town. Multiple vendors showcased equipment and prototypes, and many dedicated sessions explored everything from semiconductor building blocks to rack-level power delivery and facility integration. Vendors like Delta Electronics, Texas Instruments, and STMicroelectronics focused their marquee March 16 announcements squarely on 800 VDC developments—an unusual departure from the lockstep of similar-themed announcements that have become the norm at GTC.

Schneider Electric’s Jim Simonelli’s session at GTC draws interest from the audience

 

Such advancements are important and necessary, but many pieces of the 800 VDC topology remain undefined. In his GTC session entitled “A Safe, Efficient, and Scalable Approach to 800 VDC Architecture,” Eaton’s J.P. Buzzell referenced an OCP white paper expected in the coming weeks. The draft should bring more clarity to the architecture, but there is still a long way to go before engineers can fully specify an 800 VDC data hall. And even once the specification matures, supply chains for components will need to be stood up and safety guidelines codified before broad deployment can begin.
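One reason the architecture draws this much attention is simple conductor physics: at a given rack power, current scales inversely with distribution voltage. The sketch below makes that concrete with assumed values, treating every option as simple DC for comparison:

```python
# Current per rack at different distribution voltages (I = P / V).
# Illustrative only: real AC distribution is three-phase and rack
# powers vary; everything is treated as simple DC for comparison.
rack_kw = 200  # in line with the 200+ kW racks discussed above
for volts in (54, 415, 800):
    amps = rack_kw * 1000 / volts
    print(f"{volts} V -> {amps:,.0f} A")
# 54 V needs roughly 3,700 A of busbar capacity; 800 V needs about
# 250 A, slashing copper cross-section and I^2R losses.
```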

 

45 Degrees of Separation

Much like 800 VDC, another infrastructure shift that made waves in an earlier NVIDIA keynote received little airtime at GTC. At CES in January, Jensen highlighted the move toward 45°C warm-water inlet temperatures—a significant departure from the designs more commonly deployed today. Beyond Jensen’s brief nod to Vera Rubin’s 45°C specification, the topic received little attention at GTC.

NVIDIA remains committed to 45°C, but there is no sign of it doubling down or rushing to get there. The convergence toward 45°C architectures will take longer to play out. Facility-side infrastructure needs to be adapted, but operators might remain reluctant to optimize the cooling system if doing so carries any risk of reducing accelerator performance. In an age of highly constrained compute, every token counts. And the imperative to maximize throughput trumps facility-level efficiency optimization.

The water temperature debate, however, was far from the only liquid cooling story at GTC. On the show floor, the direction of travel for CDU capacity was unmistakable. As pod architectures scale and per-rack thermal loads climb, vendors responded with a new class of multi-megawatt CDUs. These are a step change from capacities that dominated the market just a year ago, and we expect this upward trend to continue as next-generation pod architectures push thermal envelopes further still.

Delta Electronics’ 800 VDC CDU

 

An interesting product found on the exhibition floor was a direct-current CDU, which can be connected straight to the 800 VDC bus. It is a thoughtful choice that adds flexibility for operators designing next-generation whitespace, even if we expect most large units to be housed in mechanical galleries in the grey space—where traditional AC power distribution is likely to remain the standard for the foreseeable future. Either way, the convergence of power and cooling design choices is becoming impossible to ignore.

 

MGX and the March Toward Standardization

The growing specificity of NVIDIA’s reference architectures—from rack dimensions and cooling requirements to power topologies and simulation-ready digital models—raises an uncomfortable question for DCPI vendors: as NVIDIA defines more of the design, what room is left for differentiation?

The “MGX wall” on the show floor—displaying components from dozens of vendors side by side within the standardized MGX ecosystem—made this tension visible. By standardizing interfaces, form factors, and performance specifications across the infrastructure stack, MGX makes it easier for operators to mix and match components from multiple suppliers. That is a win for deployment speed and supply chain resilience. But it also compresses the space in which vendors can compete on anything other than price and availability—the classic hallmarks of a commoditizing market.

Quick disconnects from multiple vendors showcased at the “MGX wall”

 

Not all vendors will be affected equally. Those with deep system integration expertise, intelligent controls, service capabilities, or engineering and quality differentiation in mission-critical components will find ways to stay above the commoditization line. But for vendors whose value proposition rests primarily on the physical product itself, the tightening of NVIDIA’s specifications around their equipment is a trend worth watching closely.

 

Unlocking the Grid

Perhaps the most consequential launch at GTC came not from the chip announcements but from DSX Flex—NVIDIA’s software layer for connecting AI factories to grid services and orchestrating dynamic power adjustment. With NVIDIA’s order book continuing to grow, the math is simple: the gap between the power needed to energize forecast chip shipments and the pace of grid upgrades is too large to ignore. And the only near-term path to more power is not launching data centers into space, but tapping into existing grid capacity when it is not being used.

This was a point I raised directly with Jensen during the event. His response was unequivocal: data centers must change their relationship with the grid and be willing to accept less stringent SLAs in exchange for faster access to capacity. AI workloads will need to flex around supply constraints rather than demanding always-on, fully firm power. In a world where tokens per watt is becoming the defining metric for AI factory economics, accessing those watts and maximizing them becomes decisive. Startups like Emerald AI and Phaidra are building the technology to support this, but unlocking it at scale requires more than engineering ingenuity. It depends on the willpower and aligned incentives of the primary gatekeepers involved—utilities, grid operators, and their regulators.
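To illustrate the trade Jensen described, here is a toy model of annual energy under a fully firm interconnect versus a larger, curtailment-tolerant one. The capacities, firm fractions, and curtailment hours are all assumptions made for the sake of the example:

```python
# Toy model: softer SLAs in exchange for a larger interconnect.
# All capacities, fractions, and curtailment hours are assumed.

def annual_energy_mwh(grid_mw, firm_fraction, curtailed_hours):
    """Firm share runs year-round; flexible share pauses during curtailment."""
    hours = 8760
    firm = grid_mw * firm_fraction * hours
    flex = grid_mw * (1 - firm_fraction) * (hours - curtailed_hours)
    return firm + flex

fully_firm = annual_energy_mwh(grid_mw=100, firm_fraction=1.0, curtailed_hours=0)
flexible   = annual_energy_mwh(grid_mw=140, firm_fraction=0.6, curtailed_hours=400)
print(f"{fully_firm:,.0f} vs {flexible:,.0f} MWh/yr")  # flexible nets more energy
```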

 

What This Means for the DCPI Market

Dell’Oro Group’s latest DCPI market update, released during GTC week, showed the market reached $10.9 billion in 4Q 2025—up 20% year-over-year—with synchronized backlog surges across vendors in power and cooling. The AI supercycle continues to drive record investment, and GTC 2026 did nothing to dampen expectations. The tone was one of confident optimism—about the trajectory of AI, the scale of compute still to be built, and the opportunities ahead for data center vendors.

Regardless of whether that optimism proves fully warranted, GTC 2026 left little doubt: the DCPI market is entering its most consequential chapter yet. Stay tuned as we continue to track these shifts—and connect with us at Dell’Oro Group to discuss these trends as they unfold.

 


Vendor Press Releases: Accelsius, Delta Electronics, Eaton, Foxconn, Flex, Hitachi, LiteOn, Schneider Electric, STMicroelectronics, Texas Instruments, Trane Technologies, Vertiv