
Its absence dropped like a cannonball into a pond.  In Cisco’s August 2024 earnings call, CEO Chuck Robbins laid out the company’s investment strategy—with nary a mention of networking. Cisco was laying off up to 7% of its workforce, shifting resources into strategic areas. Robbins outlined his three-point list: AI, cybersecurity, and cloud.  Corporate focus was on integrating Splunk, the mega acquisition made in March.  Cisco had yet to announce Wi-Fi 7, a year and a half after competitors in China had taken the lead in the new technology.  It seemed that Cisco might be turning its back on campus networking, a market the company had dominated for decades.

What a difference 10 months can make.

Earlier this week, on June 10th, Chuck Robbins appeared on stage with Jeetu Patel, appointed by Robbins as Chief Product Officer just days after the August 2024 earnings call. Their message was that Cisco is an AI company with networking at its core, and that message was backed with a list of announcements so broad that Chief Marketing Officer Carrie Palin called it “bonkers”. Patel stole the show with a narrative designed to address the perception that a) Cisco is too complex and b) it missed the boat on AI.

The three focus areas outlined in August shifted. The word “cloud” morphed to “data center”, with a view that increasingly, AI workloads will be running on private infrastructure.  Cisco’s new strategy was infused with AI throughout, starting with a prediction that millions of AI agents will one day be introduced into the human workforce.  However, insisted Cisco executives, these agents will be “network-bound”.  To grow to its full potential, AI needs network infrastructure, but it also needs to be trusted.  To build trust, says Patel, security needs to be fused into the network, and Cisco, as a networking company, is best placed to make it happen.

To back up the proposition that the AI Era requires a robust underlying network, including inside the enterprise, Cisco revealed a slate of new developments for the campus network:

  • Two new Smart Switches for the campus (C9350 and C9610) with Silicon One coprocessors designed to run parallel workloads, such as Cisco’s Hypershield.
  • A cloud-native gateway, designed to help enterprises transition APs from controller-managed to cloud-managed architectures.
  • 19 new industrial switches, including small form factors intended to be installed on robots.
  • A behemoth of a Wi-Fi 7 AP, the CW9179F, weighing ten pounds, with front and back beam coverage, designed for large venue deployments. This is the latest addition to Cisco’s family of Wi-Fi 7 APs, the first of which was revealed in October 2024.
  • The addition of Ultra-Reliable Wireless Backhaul (URWB) on Wi-Fi 6E APs (IW9165 and IW9167), with a plan to introduce URWB to some Wi-Fi 6 APs as a software upgrade.
  • The unification of Meraki and Catalyst from a hardware, licensing, and management perspective.
  • An AI-fueled management platform, AI Canvas, with a multiplayer, dynamic user interface. The platform relies on an LLM purpose-built for networking (Deep Network) fed with live telemetry and Cisco’s vast array of TAC insights (alpha version expected in October).

The vision was compelling and impressive in scale.  The reality will reveal itself as the platforms become available over the next few months.  Only as customers begin ordering, deploying, and using these new products in earnest will we begin to get answers to the following questions:

Will enterprises be prepared to pay a premium for an additional DPU in a campus switch, a concept originally designed for the data center?  Will the adoption of these models be tied to the penetration of Cisco’s Hypershield security strategy, and how will that unfold?

Will the North American market finally shift over to Wi-Fi 7, or will piles of remaining Wi-Fi 6E inventory continue to be the main source of shipments?

Can Meraki and Catalyst really be converged from a management perspective?  How will Cisco address the challenge of feature parity between the two, and how will the new, converged platform be branded?

What will the fee structure be for AI Canvas? Can IT departments adjust from having their hands on the network to acting as humans on the loop?  Will the chosen licensing model for AI Canvas hamper its adoption?

While there might still be hiccups as products start rolling out (says Patel, products are never finished, they are either incomplete or obsolete), it is clear that Cisco has found a compelling vision to lay out for customers. The sheer breadth of the company’s portfolio is dizzying, and that breadth has previously worked against Cisco as a source of complexity.  But breadth can also be a selling point if it is positioned correctly.  Says Oliver Tuszik, Cisco’s newly appointed Chief Sales Officer, “When we combine two or three parts of Cisco, we are unbeatable… nobody in the market can build this solution.”

As for CEO Chuck Robbins, he called the June 2025 show the most important Cisco Live ever.  “I probably say that every year,” said Robbins.  “But this year, I mean it.”


I spent the past three days at Cisco Live 2025 and the adjunct Press & Analyst Conference in San Diego watching the company deliver a sweeping vision that fuses networking, security, observability, and silicon into one agent-ready platform. For example, Cisco framed its AI Canvas as a cross-domain cockpit and its Hypershield as distributed “micro-firewalls.” At the same time, programmable Cisco Silicon One and NVIDIA-aligned AI factories promised bandwidth without power blowouts. Building on that, customers like Hilton and Steve Madden validated the strategy with million-device Meraki rollouts and 30 percent tool consolidation. Furthermore, a “One Cisco” sales overhaul simplifies buying and seeds outcome-based services. Collectively, these moves signal an ambitious pivot from box vendor to AI platform orchestrator—an evolution explored below through the key themes of Cisco Live 2025.

Cisco Live 2025 Key Themes

Secure Network Platformization Meets Agentic AI

Building on last year’s tentative integrations, Cisco leaders unveiled a cohesive platform that grafts identity, policy, and APIs across networking, security, and observability. Meanwhile, AI Canvas surfaced as the centerpiece UI where humans and software agents co-create dynamic troubleshooting boards, auto-generate interface widgets, and execute rollback-safe remediations. DJ Sampath, Vice President & GM, AI Software Platforms, likened the experience to a “collaborative cockpit” that turns cross-domain chaos into deterministic workflows. Consequently, the company bundles Canvas into existing subscriptions, delaying direct monetization yet accelerating adoption across its million-customer base.

Likewise, Hypershield extends this vision by embedding layer-4 firewall enforcement in smart switches, endpoints, and Kubernetes nodes. Tom Gillis, EVP & GM, Security & Networking, Cisco, argued that “east-west traffic is where attackers hide; distributed firewalls light it up.” The policy plane lives in Security Cloud Control, allowing enforcement to roam without forklift upgrades. Two large banks are already piloting the technology, and regulated enterprises are slated to enter production within a year. While we see its architectural elegance, we question timeline realism and third-party interoperability.
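Cisco did not detail Hypershield’s enforcement internals on stage, but the distributed layer-4 pattern it describes is easy to picture: a central policy plane compiles intent into simple 5-tuple rules, and every enforcement point (smart switch, host agent, or Kubernetes node) evaluates those rules locally against the traffic it sees. The sketch below is a minimal, hypothetical illustration of that pattern; the class and rule names are mine, not Cisco APIs.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass(frozen=True)
class L4Rule:
    """One layer-4 rule as a central policy plane might compile it."""
    src: str        # source CIDR, e.g. "10.1.0.0/16"
    dst: str        # destination CIDR
    dst_port: int   # TCP/UDP destination port
    action: str     # "allow" or "deny"

class EnforcementPoint:
    """A switch, host agent, or Kubernetes node holding a local copy of policy."""
    def __init__(self, name: str):
        self.name = name
        self.rules: list[L4Rule] = []

    def sync(self, rules: list[L4Rule]) -> None:
        """Receive the latest compiled rules from the policy plane."""
        self.rules = list(rules)

    def evaluate(self, src_ip: str, dst_ip: str, dst_port: int) -> str:
        """First matching rule wins; default-deny for east-west traffic."""
        for r in self.rules:
            if (ip_address(src_ip) in ip_network(r.src)
                    and ip_address(dst_ip) in ip_network(r.dst)
                    and dst_port == r.dst_port):
                return r.action
        return "deny"

# The policy plane pushes the same compiled intent to every enforcement point.
policy = [L4Rule("10.1.0.0/16", "10.2.0.0/16", 443, "allow")]
tor_switch = EnforcementPoint("rack-12-smart-switch")
k8s_node = EnforcementPoint("k8s-worker-07")
for point in (tor_switch, k8s_node):
    point.sync(policy)

print(tor_switch.evaluate("10.1.4.20", "10.2.9.8", 443))   # allow
print(k8s_node.evaluate("10.3.0.5", "10.2.9.8", 3306))     # deny (no matching rule)
```

In Cisco’s design, that central role belongs to Security Cloud Control; the sketch simply shows why enforcement can “roam” once policy is expressed this way.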

Programmable Silicon and AI-ready Fabrics

Meanwhile, surging inferencing workloads are rewiring data-center economics, and Cisco is positioning Silicon One as the programmable answer to hyperscale ASIC lock-in. Kevin Wollenweber, Cisco’s SVP & GM, Data Center, Internet & Cloud, emphasized that runtime programmability “avoids 24-month retape-outs while incurring no power penalty.” Building on that, Martin Lund, Cisco’s EVP, Common Hardware Group, revealed co-packaged-optics prototypes targeting 400 Gb/s per lane, promising reduced loss and improved energy efficiency. Consequently, Cisco’s secure AI factory reference design, co-engineered with NVIDIA, bundles front-end and GPU back-end networks, zero-trust segmentation, and AI Defense model guardrails.

Furthermore, executives argued that network bandwidth, not GPU scarcity, will become the gating factor. The roadmap scales port speeds from today’s 800 Gb/s to 3.2 Tb/s. Although the silicon story resonates, Cisco still lacks its own GPU and must lean on allies like NVIDIA and AMD. Consequently, the company’s silicon agility will be scrutinized as enterprises demand latency budgets below 50 ms for interactive agents and sub-5 ms for robotics.

Go-to-Market Reinvention and Customer Momentum

Meanwhile, Cisco’s sales and marketing overhaul seeks to translate platform breadth into double-digit growth. Oliver Tuszik, Cisco’s EVP & Chief Revenue Officer, collapsed 14 specialist teams into a unified “One Cisco” motion, backed by 90,000 employees and AI-driven account intelligence. Building on that, Cisco’s Chief Marketing Officer, Carrie Palin, repositioned the brand around four outcome pillars: AI Infrastructure, Future-Proof Workplaces, Digital Resilience, and Secure Networking. We applaud the candor around perception gaps, yet caution that enablement depth and partner capacity will determine execution.

Customer narratives reinforce the pitch. Hilton has deployed 700,000 Meraki devices across 6,000 hotels, targeting 1 million by December. Steve Madden slashed standalone tools by 30 percent after standardizing on Meraki, Secure Access, and Splunk-fed XDR, while Grok jumped from 100 Gb/s to 800 Gb/s switching for inference clusters. These cases showcase simplified operations, supply-chain reliability, and AI-ready bandwidth—but also reveal remaining friction. Dan Wood, Hilton’s VP, Global Network Engineering, stated that full autonomy will follow only after “bringing the feeds together before trusting AI.” That cautious stance mirrors broader industry ambivalence toward agentic control.

Looking ahead, Cisco must prove that unified licensing and friction-free trials can convert marquee case studies into mainstream repeatability across partners and verticals.

Cisco’s New Vision in Today’s Market

Building on the thematic foundation, Cisco’s platformization strategy enters a contested arena where many others are vocalizing similar platform and AI-first themes. These include security juggernauts like Palo Alto Networks, Fortinet, and Zscaler; other network vendors such as Arista, Juniper Networks, and HPE; public cloud giants like AWS, Google, and Microsoft; and AI silicon behemoth NVIDIA. Meanwhile, Cisco leans on three differentiators—security-infused networking, programmable silicon, and Splunk-fueled telemetry processing—to outflank suite rivals and point players.

On the security front, Hypershield seeks to upset the immense network security foothold that Palo Alto Networks, Fortinet, and Zscaler enjoy today. Embedding security into every switch port could rewrite firewall price-performance curves and unlock new security value, yet it risks cannibalizing Cisco’s firewall appliance revenue if adoption outpaces upsell. Curiously, Cisco also announced its latest data center-focused 6100 Series firewall appliance. Conversely, Palo Alto Networks’ threat protection remains deeply respected by its customers, and Fortinet’s ASIC-accelerated FortiGate line still holds performance leadership in raw layer-4 deployments. Cisco must therefore convince buyers either that Hypershield’s threat protection is sufficient to displace Palo Alto Networks or that smart-switch elasticity outweighs Fortinet’s raw layer-4 throughput.

On the programmable silicon front, Cisco’s Silicon One positions it against Juniper’s and Broadcom’s silicon. Dynamic tuning with no power penalty offers future-proofing, but network-OS diversity may complicate software consistency and partner certification. Meanwhile, NVIDIA’s Spectrum-X fabric magnetizes hyperscale interest, prompting Cisco to co-develop secure AI factories rather than compete head-on for GPU boards. The alliance may grant Cisco optical sockets in trillion-dollar TAMs, yet deepens dependency on NVIDIA’s supply chain during component shortages.

In the fight for network and security telemetry processing, the Splunk federation underpins an “intelligent data fabric” that rivals Elastic and Datadog in observability. Offering no-charge log ingestion for Cisco firewalls shifts cost optics. The pricing gambit is a clever land grab, but deferred revenue recognition could pressure short-term financials if upsell velocity stalls. Meanwhile, Microsoft’s $20 billion security franchise looms as the benchmark; Cisco must match Microsoft’s cloud-native scale without forcing data migrations that customers resist.

Advantages dominate the early momentum. Customers cite tool-chain consolidation, supply-chain agility, and cross-domain visibility as primary wins. Hilton’s million-device ambition underscores vertical scalability; Grok’s 800 Gb/s backbone attests to silicon headroom; and CVS Health’s multi-billion AI investment validates trust at regulated scale. Moreover, Cisco’s open-API narrative draws startups that can build on the platform rather than be smothered by it, a contrast to so-called “rip-and-replace” incumbents.

Yet the disadvantages remain material. First, roadmap skepticism persists: Hypershield lacks the complete layer-7 threat protection that is standard in standalone firewall appliances, and Cisco did not share a specific timeline for adding it. Second, licensing complexity still confounds partners juggling many licenses spanning the entire portfolio across networking, security, and observability. Third, the “grandfather’s Cisco” perception endures; Cisco has a perception problem, and it knows it. Fourth, agentic ops raise governance alarms; early adopters demand deterministic rollback and audit trails before surrendering root privileges to generative models. Finally, execution risk surrounds a nine-month idea-to-product cadence. Such sustained velocity could strain quality assurance and a channel that, for decades, has worked with a far more conservative Cisco when it comes to shipping products.

Despite those drawbacks, the trajectory remains favorable. Cisco’s willingness to cannibalize hardware for recurring software revenue demonstrates strategic maturity, aligning economic incentives with customer outcomes. Consequently, the blend of programmable silicon, AI-mediated operations, and federated data fabrics positions Cisco to capture incremental spend as enterprises refresh data centers for persistent inference traffic. Meanwhile, hyperscaler collaboration and sovereign-AI localization diversify addressable markets, foreshadowing competitive realignment over the next 18 months.

Conclusion and Looking Forward

Cisco Live 2025 underscored the company’s intent to become an AI-native orchestrator that fuses security, telemetry, and silicon. Yet even as the many innovations announced during Cisco Live 2025 promise agent-guided automation, a Reddit thread titled “Discouraged at Cisco Live (2025)” reminds us that practitioners still weigh hype against day-to-day realities. One attendee joked that the show echoed nothing but “AI, AI, AI,” sparking gallows humor about whether network engineers will soon automate themselves out of a job. Such grassroots skepticism tempers vendor optimism, underlining the need for tangible wins—latency cuts, tool reduction, and simpler licenses—before narrative momentum becomes mainstream trust.

On a tactical level, I have three litmus tests that I’ll be watching over the next year as indicators of Cisco’s progress:

  • Secure branch revenue velocity per my recent blog.
  • General-availability uptake of Hypershield and its impact on firewall appliance refresh revenue.
  • Customer conversion rates from free Splunk firewall-log ingestion to paid data-fabric expansions.

If Cisco translates roadmap ambitions into measurable adoption and incremental ARR, the company will emerge not just AI-ready but AI-native, reshaping how enterprises perceive the intersection of networking, security, and silicon.


Cisco intensified the secure branch battle today. During Cisco Live 2025, the company’s annual customer conference, it unveiled three new branch-focused elements: Secure Routers, Secure Firewalls, and the Mesh Policy Engine within Cisco Security Cloud Control. Rival vendors already consolidate routing and security in fewer product lines, so Cisco’s strategy warrants close review. This post explains what Cisco shipped, what competitive forces it faces, and why its three-product play emerged.

What Cisco Introduced at Cisco Live 2025

Cisco launched five Secure Router 8000-Series models—8100, 8200, 8300, 8400, 8500—positioned as successors to the Catalyst 8000 Edge platforms. Each appliance merges IOS XE routing, Catalyst SD-WAN, post-quantum MACsec, and an embedded Layer-7 firewall. From Cisco’s security business, the new Secure Firewall 200-Series arrived in parallel, running Firepower Threat Defense with Snort 3, encrypted traffic analytics, and file sandboxing, and boasting up to 1.5 Gbps of throughput.

Hardware innovation pairs with two software pillars. First, the Mesh Policy Engine unifies rule objects across routers, firewalls, and cloud enforcement points. Cisco positions it as part of the broader Hybrid Mesh Firewall framework. Second, Cisco Security Cloud Control became generally available in May 2025. The SaaS portal onboards devices, provides analytics, and houses the Mesh Policy Engine.

Cisco now addresses three branch personas. Secure Router serves WAN teams demanding rich routing and “good-enough” firewalling. Secure Firewall targets security teams that require full-featured firewalling (richer threat protection, encrypted traffic analytics). Meraki MX continues as a cloud-managed option for lean IT staff. The company argues that a shared SaaS policy plane offsets the complexity of sustaining three hardware families.

What Is Cisco Up Against?

Figure 1 shows branch solution revenue—access routers, SD-WAN, and low-end firewalls—across Cisco, Fortinet, and Palo Alto Networks from 2019 to 2024. Cisco’s branch revenue edges from about $2.5 B in 2020 to $2.6 B in 2024, representing just a 1% five-year compounded annual growth rate (CAGR). Fortinet rises from $345 M to $919 M, delivering a 22% five-year CAGR. Palo Alto Networks expands from about $82 M to $625 M, a 50% five-year CAGR.
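For readers who want to reproduce the comparison, CAGR is simply the constant annual growth rate that carries a starting value to an ending value over n years: (end/start)^(1/n) − 1. A quick sketch using the approximate revenue figures cited above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compounded annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

# Approximate branch revenue figures cited above, in $M, over a five-year span.
print(f"Cisco:              {cagr(2500, 2600, 5):.0%}")  # ~1%
print(f"Fortinet:           {cagr(345, 919, 5):.0%}")    # ~22%
print(f"Palo Alto Networks: {cagr(82, 625, 5):.0%}")     # ~50%
```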

Fortinet competes with one hardware family—every FortiGate appliance ships with FortiOS, integrating NGFW, SD-WAN, ZTNA, and LAN control. Cloud or on-prem control planes push identical policy because the code base (FortiOS) is uniform across the portfolio.

Palo Alto Networks employs two product lines. PA-Series Strata firewalls run PAN-OS and focus on deep inspection. ION-Series devices, which trace their history to the acquisition of CloudGenix in 2020, integrate SD-WAN, PoE switching, optional 5G, and secure about 1 Gbps of traffic. Both device classes appear in Strata Cloud Manager, a single SaaS console introduced in late 2024. This platform also manages Prisma Access points of presence, offering one policy model across physical and cloud edges.

Why Cisco Launched What It Did

The revenue chart illustrates Cisco’s challenge. Its branch growth has not kept pace with the security-first drive that Fortinet and Palo Alto Networks are making into the branch. Cisco’s installed base remains large, yet procurement teams now evaluate converged platforms that collapse routing, security, and LAN control. Fortinet delivers that promise through one appliance. Palo Alto offers a unified policy across two tightly linked lines. Cisco’s trio of Secure Router, Secure Firewall, and Meraki must deliver differentiated value or risk revenue erosion.

Strategic logic explains the three-product play. Cisco cannot abandon IOS XE routing, a core competency that WAN engineers value. Hence, the Secure Router retains familiar CLI, BGP, voice DSP roadmaps, and high routed throughput. The embedded firewall is streamlined to avoid burdening routing silicon, matching most branch risk profiles. The Secure Firewall family preserves Snort feature depth demanded by SOC teams. Firepower Threat Defense offers machine-learning heuristics, encrypted-visibility analytics, and file trajectory inspection that the router cannot yet match. Keeping Firepower intact minimizes migration friction for customers with existing Firepower or ASA estates. Meraki MX remains critical for small IT shops and managed service providers. Its Dashboard UI orchestrates Wi-Fi, switching, cameras, and security from one tab. Removing MX would alienate a rapidly growing segment that values zero-touch deployment.

The Mesh Policy Engine and Security Cloud Control are Cisco’s unification layer. They promise consistent rule intent across three operating systems—IOS XE, FTD, and Meraki OS—while allowing personas to keep their native workflows. The approach avoids a forced rip-and-replace but introduces integration risk. Success hinges on seamless policy translation, co-termed licensing, and synchronized feature releases.
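Cisco has not published how the Mesh Policy Engine compiles intent, but the translation problem it takes on can be sketched in a few lines: one platform-neutral rule object is rendered into the native idiom of each operating system. The toy example below is purely illustrative; the intent schema and the per-platform field names are invented placeholders, not IOS XE, FTD, or Meraki syntax.

```python
from dataclasses import dataclass, asdict

@dataclass
class RuleIntent:
    """A platform-neutral rule object, as a unification layer might store it."""
    name: str
    src: str       # source zone or CIDR
    dst: str       # destination zone or CIDR
    port: int
    action: str    # "allow" or "deny"

def render(intent: RuleIntent, target: str) -> dict:
    """Render one intent into a per-target payload. Field names are invented
    placeholders, not actual IOS XE, FTD, or Meraki schemas."""
    base = asdict(intent)
    if target == "router":          # routing-centric OS: ACL-style rule
        return {"platform": "router", "acl_name": intent.name, **base}
    if target == "firewall":        # security-centric OS: policy-rule object with logging
        return {"platform": "firewall", "rule": base, "log": True}
    if target == "cloud_managed":   # dashboard-style layer-3 rule
        return {"platform": "cloud_managed", "comment": intent.name, **base}
    raise ValueError(f"unknown target: {target}")

intent = RuleIntent("branch-web-out", "BRANCH-LAN", "INTERNET", 443, "allow")
for target in ("router", "firewall", "cloud_managed"):
    print(render(intent, target))
```

The hard part, of course, is not the happy path shown here but the features that exist on one platform and not another, which is exactly where feature parity and synchronized releases come in.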

Cisco also needed throughput parity. The new Secure Router family closes performance gaps without sacrificing routing or security functions. The Secure Firewall 200-Series secures up to 1.5 Gbps, aligning with branch attack profiles where deep analytics outweigh raw speed.

Licensing complexity remains a concern. Cisco still sells DNA Advantage for SD-WAN, Threat Defense subscriptions for firewalls, and Meraki Enterprise licenses. The firm announced a Networking Subscription model for late 2025 that should co-term renewals. Whether this resolves budget headaches is an open question.

Cisco launched three discrete hardware lines because each maps to an entrenched persona, and because immediate unification would disrupt large customer bases. The Mesh Policy Engine aspires to hide complexity while preserving product heritage. Market data suggest the bet must succeed quickly to reclaim growth momentum.

Conclusion

Cisco refreshed its branch portfolio to confront accelerating competition. Secure Routers safeguard routing heritage, Secure Firewalls protect security parity, and Meraki maintains cloud simplicity. Fortinet and Palo Alto Networks leverage fewer product lines and show faster revenue expansion. The outcome now depends on Cisco’s ability to translate cross-platform policy seamlessly, simplify licensing, and deliver promised throughput gains.


On May 14th, I had the opportunity to attend Fastly’s Xcelerate 2025 customer roadshow in Los Angeles. It was a full day of customer case studies, partner demonstrations, and executive briefings, all of which delivered a clear message: Fastly is in the midst of a transformation from traditional content delivery network (CDN) vendor to integrated edge services vendor, one that aims to reduce operational friction and operating expenses while opening new avenues for adopting AI applications. The three most prominent themes follow.

A Software-Defined Edge Platform Enables Distributed Cloud Networking Strategies

At Dell’Oro, I’ve been championing Distributed Cloud Networking. It is an architecture that couples the user edge, the wide-area middle mile, and the application edge, using a software-defined control plane that spans multiple clouds and networks. Although still emerging, Distributed Cloud Networking aims to harmonize routing, security, and compute policies wherever applications run. Fastly’s platform vision aligns tightly with this model. Executives described a composable stack that integrates content delivery, DDoS mitigation, Web Application Firewall, bot controls, object storage, WebAssembly compute, and observability behind a single set of Terraform modules and APIs.

Customers emphasized the operational upside. For example, customers credited Fastly’s new production-equivalent “staging edge,” where they can trial configurations and code before promotion. This safeguard has virtually eliminated rollbacks, enabling WAF users to ship approximately one-third more features each year. Moreover, flexible deployment options—such as edge points of presence (POPs), Fastly-managed environments in Amazon Web Services, or on-premises agents—support data-residency mandates without disrupting toolchains.

However, risks revolve around platform dependence. Enterprises that prefer best-of-breed tools may find the breadth of APIs to be demanding and the exit costs uncertain. Competitor Akamai continues to expand into core cloud services, while Cloudflare layers networking and security features at speed. We see enterprises benchmarking onboarding friction, roadmap transparency, and contractual agility before entrusting mission-critical workloads to any single vendor.

Offloading AI Workloads Closer to Users for Better Performance and Cost

Artificial intelligence was front and center at Xcelerate—less an aspiration and more an everyday workload. In a joint demo, Google and Fastly showed how a semantic-aware edge cache handles Gemini prompts, with the cached reply returned in approximately half the time of a cold request while using noticeably fewer tokens. For enterprises, that means faster pages and smaller AI bills without involving origin GPUs.

What makes the example interesting is where it happens. By utilizing an intelligent fabric, Google and Fastly can direct traffic to the nearest inference node, then maintain popular responses in place. It is a textbook illustration of Distributed Cloud Networking’s promise: policies and data move together through a programmable cloud networking fabric, allowing application teams to gain speed while finance teams experience predictable costs.
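Neither company detailed the cache internals, but the underlying technique, usually called semantic caching, is straightforward to sketch: embed each prompt, compare it against embeddings of previously answered prompts, and reuse the stored answer when similarity clears a threshold. The embedding function below is a deliberately crude stand-in; a real deployment would use a production embedding model.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def embed(text: str) -> list[float]:
    """Crude stand-in embedding (letter frequencies); swap in a real model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isascii() and ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

class SemanticCache:
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []  # (prompt embedding, response)

    def get(self, prompt: str) -> str | None:
        """Return a cached response if a sufficiently similar prompt was answered before."""
        query = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(query, e[0]), default=None)
        if best is not None and cosine(query, best[0]) >= self.threshold:
            return best[1]
        return None

    def put(self, prompt: str, response: str) -> None:
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("What is the capital of France?", "Paris.")
# A lightly reworded prompt reuses the stored answer instead of a fresh inference call.
print(cache.get("what is the capital of france"))   # -> Paris.
```

The threshold is the operational knob: set it too low and users get stale or mismatched answers, set it too high and the cache rarely hits.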

Shutterstock, the global stock-image and media marketplace, echoed the theme on the training side. Its video-analysis pipeline streams tens of millions of clips across AWS, Azure, and Google, while keeping preprocessing and vector embedding at edge points of presence. Running the heavy lifting in Fastly’s fabric enables Shutterstock to maintain steady throughput across clouds and avoid cross-region egress surprises—a real-world proof that Distributed Cloud Networking fabrics improve both performance and cost control for data-intensive AI jobs.

Challenges remain—semantic caching is young, model versions evolve quickly, and data-residency rules vary—but the direction is clear. Vendors, including Akamai and NVIDIA, are racing to offer similar edge-GPU overlays. Therefore, enterprises should pair any rollout with tight version control, automated rollback, and transparent governance to prevent the benefits from slipping away.

Edge Caching + Integrated Storage: Controlling Spend While Powering “The Best of the Internet”

Edge caching and integrated storage may not be as eye-catching as a software-defined edge-services platform or AI offload. Yet, when traffic surges and the finance team wants lower IT spend, their combination of uptime insurance and cost control often matters most.

For many customers, one of the most significant benefits of Fastly’s integrated object storage is the cost reduction it enables while serving massive amounts of data without interruption. Keeping hot data at the edge wipes out per-gigabyte cloud egress fees and shortens time-to-first-byte, as the examples below and the rough savings sketch that follows them illustrate:

  • Fox Sports hit a 99.97% cache-hit ratio during Super Bowl 2025, offloading terabits from its origin and avoiding a game-day cloud-bill spike.
  • Shutterstock migrated 35 PB of images once and now serves them approximately 40% faster, while eliminating a six-figure monthly cloud egress line item.
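The economics behind those numbers reduce to simple arithmetic: origin egress is billed only on cache misses, so every additional point of cache-hit ratio removes a slice of the per-gigabyte bill. A rough sketch, using an assumed, purely illustrative egress price rather than any vendor’s actual rate:

```python
def monthly_egress_cost(total_tb: float, hit_ratio: float, price_per_gb: float) -> float:
    """Origin egress cost when only cache misses leave the origin."""
    miss_gb = total_tb * 1024 * (1 - hit_ratio)
    return miss_gb * price_per_gb

# Hypothetical month: 5 PB delivered, with an assumed $0.05/GB origin egress rate.
total_tb = 5 * 1024
for hit_ratio in (0.0, 0.90, 0.9997):
    cost = monthly_egress_cost(total_tb, hit_ratio, price_per_gb=0.05)
    print(f"cache-hit ratio {hit_ratio:7.2%}: ${cost:,.0f} in origin egress")
```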

Cost efficiency is not reserved for media giants. Wildfire-alert nonprofit Watch Duty routinely saw incident spikes, ranging from 20,000 to 100,000 requests per second, during the devastating fires in Los Angeles in early 2025. Fastly provided Watch Duty capacity at a steep discount—an embodiment of the company’s aim to “Power the best of the internet.”

Whether it’s a global streaming platform or a community-safety service, the message was clear: every byte that stays in edge storage is one less byte paid for twice—first in bandwidth and then in user patience.

Conclusion

Fastly Xcelerate 2025 reinforced the company’s commitment to an integrated edge platform that aligns with our vision for Distributed Cloud Networking. Customers repeatedly praised Fastly’s engineers for extracting every microsecond of performance and its high-touch support teams for restoring service stability when seconds mattered most—an operational culture evident in Fox’s Super Bowl war room and Watch Duty’s wildfire surge. We will continue tracking forthcoming roadmap milestones against the backdrop of our Distributed Cloud Network report, while evaluating Fastly tactically in our application security and delivery coverage within the quarterly Network Security report. Further developments deserve close observation.


Earlier this month, San Francisco’s Moscone Center buzzed with energy as 45,000 security professionals convened for the RSAC 2025 Conference. Across scheduled briefings, product launches, and crowded corridors, one reality became clear. Enterprises are rebuilding their cyber defenses for a cloud-first era characterized by geopolitical tension, architectural complexity, and non-stop release cycles. Attack surfaces expand while budgets tighten, making every architectural bet consequential. Drawing on my 26 analyst meetings at RSAC 2025, this post distills three key forces that are guiding investments and supplier roadmaps. The conference floor affirmed that cybersecurity strategy is now inseparable from business resilience and national policy.

Sovereignty Moving Center Stage

Data location, once ranked low on vendor scorecards, is now becoming table stakes. Multinational buyers are increasingly demanding that security controls, telemetry, and even help-desk staff remain within chosen jurisdictions. Regulators are hardening their stance. The European Union Data Act, Japan’s amended APPI, and parallel proposals in Latin America will codify expectations of sovereignty and impose meaningful penalties for non-compliance.

Vendors are responding with dual-provider architectures, modular key-management offerings, and portals that verify locality compliance in real time. Security service edge (SSE), web application firewall (WAF), and zero-trust services, for example, either already offer or soon will offer options to pin policy engines to specific countries while routing inspection traffic only through approved data centers.
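Under the hood, jurisdiction pinning is a routing constraint: the control plane simply refuses to steer a tenant’s inspection traffic to any point of presence outside its approved countries. The snippet below is a bare-bones, hypothetical illustration of that filter; the PoP inventory and field names are invented for the example.

```python
# Hypothetical PoP inventory; a real control plane would hold far richer metadata.
POPS = [
    {"id": "fra1", "country": "DE"},
    {"id": "par1", "country": "FR"},
    {"id": "iad1", "country": "US"},
]

def eligible_pops(allowed_countries: set[str]) -> list[str]:
    """Return only the inspection points inside the tenant's approved jurisdictions."""
    return [pop["id"] for pop in POPS if pop["country"] in allowed_countries]

# An EU-only tenant never has traffic steered through the US PoP.
print(eligible_pops({"DE", "FR"}))   # ['fra1', 'par1']
```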

The net result is that early-adopter enterprises are beginning to update their request-for-proposal templates. Jurisdictional flexibility will differentiate leading vendors from laggards, and late adopters risk costly retrofits as upcoming regulations become even stricter.

Security Becomes an Everywhere Fabric

Perimeter defense has dissolved. Protection now forms an enforcement fabric that spans top-of-rack switches, smart NICs, private cloud gateways, and microsegmentation agents embedded within every workload. We are on the verge of 800G networking systems that push line-rate policy checks into switching silicon, while lightweight software already extends native host filters for east-west inspection.

This convergence blurs product lines. The common objective is to deliver uniform policy logic at the nearest feasible hop, thereby reducing lateral movement risk without requiring expensive data center redesigns. Hardware offload further reduces latency and power consumption, enabling organizations to meet aggressive carbon reduction targets.

The rise of generative-AI workloads adds urgency. Vendors warned that 2-kilowatt GPUs, liquid cooling, and 800G links create new lateral movement paths, making switch-resident firewalls and host eBPF agents mandatory safeguards for model pipelines, vector databases, and inference gateways.

Operational complexity remains the hurdle. An everywhere fabric only works when application flows are mapped and kept up to date. Early adopters emphasized the importance of domain-specific language models and graph-based visualization in maintaining context as environments evolve. Vendors that supply open APIs, distributed telemetry lakes, and workflow integrations will win mindshare.

Consolidation and Managed Security Services Accelerate

Console fatigue is real. Chief information security officers described staff juggling dozens of dashboards, overlapping agents, and unpredictable subscription bills. With headcount flat, many organizations view platform consolidation or managed delivery as the only viable escape.

RSAC exhibitors leaned into that demand. Several vendors introduced unified licensing that bundles networking, cloud access, endpoint protection, and security operations into a single contract. Managed service providers unveiled outcome-based agreements promising defined detection times, integrated compliance reporting, and one-hour onboarding for new locations. New alliances between telecom carriers and hyperscale clouds aim to embed managed detection natively within connectivity bundles.

Economics also favors consolidation as volume commitments push scale advantages upstream into vendor roadmaps. During analyst sessions, suppliers acknowledged that cross-product telemetry lakes enhance threat-model accuracy more than isolated engines, further strengthening the business case.

Dell’Oro analysis highlights partner-delivered SASE (Secure Access Service Edge) as a key enabler for expanding the reach of SASE into smaller enterprises that lack the necessary technology expertise and personnel. Renewal cycles will prompt strategic platform pivots rather than incremental add-ons. Vendors offering transparent pricing, shared analytics, and structured migration tooling will capture a disproportionate share as enterprises rationalize portfolios.

Cellular 5G emerged as a surprise accelerant. Compact routers and slice-aware software, provided by several exhibitors, enable managed-service providers to extend SASE to pop-up branches, public safety fleets, and the long tail of small enterprises without requiring trenching of cable or fiber.

Conclusion

RSAC 2025 confirmed that the security industry stands at a strategic crossroads. Sovereign-ready architectures, AI-aware controls, 5G-enabled reach, and integrated delivery models now define a competitive advantage. Readers following these shifts should engage with Dell’Oro Group’s forthcoming Network Security, SASE/SD-WAN, and CNAPP reports and advisory services to benchmark against new imperatives and guide investment decisions.