PR Newswire
Published on: Mar 17, 2026
Enterprise AI has a scaling problem. Plenty of pilots, not enough production—and too many disconnected systems in between. Cisco and NVIDIA want to change that.
The two companies have announced a major expansion of their Secure AI Factory initiative, positioning it as a full-stack framework for deploying AI across core data centers and edge environments—with security baked in from silicon to software.
The goal: help enterprises move from experimentation to real-world deployment in weeks, not months, while avoiding the integration headaches that often stall AI projects.
One of the biggest shifts driving this update is where AI actually runs.
Inference—where models generate predictions or decisions—is increasingly happening outside centralized data centers, closer to where data is created. Think hospital floors, retail stores, or factory lines where latency matters and decisions can’t wait.
Cisco and NVIDIA are leaning into that reality with expanded edge capabilities:
Support for NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs across Cisco UCS and edge platforms
New reference architectures for service providers via the Cisco AI Grid
Integration with Cisco’s Mobility Services Platform for carrier-grade AI services
The pitch is simple: bring AI to the data, not the other way around.
That’s especially relevant as industries push toward real-time analytics, autonomous systems, and AI-driven operations—all of which depend on low-latency processing at the edge.
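The latency argument is easy to see with simple arithmetic: if an application's end-to-end deadline is tight, the round trip to a distant data center alone can exhaust the budget before the model even runs. The figures below are illustrative assumptions for the sketch, not measurements from the announcement:

```python
# Illustrative latency-budget check: can a workload meet its deadline
# if inference runs in a remote data center vs. on-site at the edge?
# All numbers are hypothetical assumptions.

def fits_budget(budget_ms: float, network_rtt_ms: float,
                inference_ms: float) -> bool:
    """True if network round trip plus model compute fits the deadline."""
    return network_rtt_ms + inference_ms <= budget_ms

BUDGET_MS = 50.0      # e.g., a real-time control loop on a factory line
INFERENCE_MS = 20.0   # model compute time, the same in either location

print(fits_budget(BUDGET_MS, 60.0, INFERENCE_MS))  # remote DC, 60 ms RTT -> False
print(fits_budget(BUDGET_MS, 2.0, INFERENCE_MS))   # on-site edge, 2 ms RTT -> True
```

With a 50 ms deadline, a 60 ms round trip fails regardless of how fast the model is, while a 2 ms edge hop leaves ample headroom, which is the whole case for moving compute to the data.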
Beyond edge computing, Cisco is also targeting one of the biggest bottlenecks in AI infrastructure: network performance and deployment complexity.
The expansion introduces new high-performance networking hardware, including:
Cisco N9100 switches delivering up to 102.4 Tbps throughput, powered by NVIDIA Spectrum-6 silicon
800G switching support for high-bandwidth AI workloads
Integration into Cisco Nexus One and Nexus Hyperfabric for simplified deployment
If that sounds like overkill, it’s not. Large-scale AI workloads—especially those involving distributed training or real-time inference—are incredibly network-intensive. Bottlenecks at the network layer can cripple performance.
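A rough back-of-envelope calculation shows why. In data-parallel training with ring all-reduce, each step moves roughly twice the gradient payload across the network, so link speed directly bounds step time. The model size and precision below are illustrative assumptions, not figures from the announcement:

```python
# Back-of-envelope: gradient synchronization time per training step
# under ring all-reduce, where each GPU sends and receives roughly
# 2x the gradient payload. Figures are illustrative assumptions.

def sync_time_seconds(params_billions: float, bytes_per_param: int,
                      link_gbps: float) -> float:
    """Approximate all-reduce transfer time for one step over one link."""
    payload_bytes = params_billions * 1e9 * bytes_per_param * 2  # ring ~2x
    link_bytes_per_sec = link_gbps * 1e9 / 8
    return payload_bytes / link_bytes_per_sec

# A hypothetical 70B-parameter model with fp16 gradients (2 bytes/param):
for gbps in (100, 400, 800):
    t = sync_time_seconds(70, 2, gbps)
    print(f"{gbps}G link: ~{t:.1f} s per sync")
# 100G link: ~22.4 s per sync
# 400G link: ~5.6 s per sync
# 800G link: ~2.8 s per sync
```

Even this crude model makes the point: moving from 100G to 800G links cuts synchronization time eightfold, which is why switch throughput shows up as a headline feature in AI infrastructure.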
Cisco’s approach is to treat networking as a first-class component of AI infrastructure, not an afterthought.
For organizations building large-scale AI environments—what NVIDIA often calls “AI factories”—Cisco is offering two validated deployment models:
A reference architecture aligned with NVIDIA’s Cloud Partner program
A Cisco-native cloud architecture built on its Silicon One platform
Both aim to reduce the need for custom integration, a common pain point for enterprises stitching together multi-vendor stacks.
This reflects a broader industry trend: pre-validated, modular architectures are becoming the default for AI deployments, replacing bespoke builds that are costly and slow to scale.
If there’s one theme running through this announcement, it’s security.
As AI systems become more autonomous—particularly with the rise of AI agents—attack surfaces expand. Models, data pipelines, and even agent-to-agent interactions introduce new risks.
Cisco is embedding security across multiple layers:
Infrastructure Security
Cisco Hybrid Mesh Firewall enforces policies across networks and workloads
Extended to NVIDIA BlueField DPUs for server-level threat blocking
Designed to stop threats before they reach sensitive data
AI Model and Agent Security
Cisco AI Defense adds vulnerability testing and model protection
Integration with NVIDIA NeMo Guardrails to manage AI behavior
New controls for securing agent interactions, especially at the edge
Agent Runtime Protection
Support for NVIDIA OpenShell runtimes
Continuous monitoring of agent actions to prevent misuse or unintended behavior
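Conceptually, runtime protection of this kind means checking every action an agent proposes against policy before it executes, and logging every attempt. A minimal sketch of the pattern, with all names hypothetical and no relation to Cisco or NVIDIA APIs:

```python
# Illustrative agent-action policy gate: each tool call an agent
# proposes is checked against an allowlist before it runs, and every
# attempt is written to an audit log. All names are hypothetical.

from typing import Any, Callable

ALLOWED_TOOLS = {"search_inventory", "read_sensor"}  # deny by default

def guarded_call(tool_name: str, tool_fn: Callable[..., Any],
                 audit_log: list[str], **kwargs: Any) -> Any:
    """Run a tool call only if policy allows it; log every attempt."""
    audit_log.append(f"attempt:{tool_name}")
    if tool_name not in ALLOWED_TOOLS:
        audit_log.append(f"blocked:{tool_name}")
        raise PermissionError(f"tool '{tool_name}' not permitted by policy")
    return tool_fn(**kwargs)

# Usage: an allowed call succeeds; a disallowed one is blocked and logged.
log: list[str] = []
print(guarded_call("read_sensor", lambda unit: f"{unit}: 21.5C",
                   log, unit="line-3"))
try:
    guarded_call("delete_records", lambda: None, log)
except PermissionError as err:
    print(err)
```

The design choice worth noting is deny-by-default: anything not explicitly permitted is blocked, which is the posture the announcement implies for autonomous agents whose behavior cannot be fully predicted in advance.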
The message is clear: in the “agentic AI” era, security can’t be bolted on later—it has to be embedded from the start.
The timing aligns with a broader shift in enterprise AI.
According to analyst firms such as IDC, companies are moving past the “what can AI do?” phase and into “how do we operationalize it?” That shift brings new challenges:
Scaling infrastructure efficiently
Managing distributed workloads
Securing increasingly autonomous systems
Avoiding vendor fragmentation
Cisco and NVIDIA are positioning Secure AI Factory as a solution to all four—essentially offering a blueprint for enterprise AI at scale.
Cisco isn’t alone in this push. Hyperscalers and infrastructure players—from AWS to Microsoft Azure—are also racing to provide end-to-end AI stacks.
What differentiates Cisco’s approach is its focus on networking, edge infrastructure, and security integration—areas where it already has deep enterprise penetration.
By partnering closely with NVIDIA, the dominant force in AI hardware, Cisco is strengthening its position in a market increasingly defined by full-stack ecosystems rather than standalone products.
Cisco and NVIDIA’s expanded Secure AI Factory is less about launching new hardware and more about reducing friction in enterprise AI adoption.
By combining high-performance networking, edge-ready infrastructure, and embedded security, the companies are trying to solve a persistent problem: turning AI from a promising pilot into a scalable, secure, production system.
For enterprises under pressure to show ROI on AI investments, that shift—from experimentation to execution—may be the most important upgrade of all.