Expanding AI’s Equity in the Telecom Networking Stratosphere

Cisco has officially announced the launch of several new solutions to accelerate secure, scalable AI across market segments.

According to reports, the lineup is headlined by the first NVIDIA partner-developed data center switch built on NVIDIA Spectrum-X Ethernet switch silicon, an innovation that enables Cisco to deliver an NVIDIA Cloud Partner-compliant reference architecture aimed at neocloud and sovereign cloud deployments.

Cisco also used the occasion to launch Secure AI Factory with NVIDIA, which strengthens protection and visibility across AI deployments through new security and observability integrations.

On top of that, Cisco, NVIDIA, and other partners unveiled the industry's first AI-native wireless stack for 6G. Taken together, these innovations promise to give neocloud, enterprise, and telecom customers the flexibility and interoperability required to efficiently build, manage, and secure AI infrastructure at scale.

“NVIDIA Spectrum-X Ethernet delivers the performance of accelerated networking for Ethernet,” said Gilad Shainer, SVP of Networking at NVIDIA. “Working with Cisco’s Cloud Reference Architectures and NVIDIA Cloud Partner design principles, customers can choose to deploy Spectrum-X Ethernet using the newest Cisco N9100 series or Cisco Silicon One based switches to build open, high-performance AI networks.”

Talking about the whole value proposition on a slightly deeper level, we begin with the promise of security and observability. Here, Cisco AI Defense now integrates with NVIDIA NeMo Guardrails to deliver robust cybersecurity for AI applications.

Digging deeper, the integration is currently orderable for on-premises data-plane deployment, a setup where it can empower security and AI teams to protect AI models and applications and, in turn, limit the sensitive data that leaves their organization's data centers.
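To give a sense of what the NeMo Guardrails side of that pairing involves, here is a minimal, hypothetical sketch of the open-source library wrapping an LLM application in programmable rails; the configuration path and prompt are purely illustrative and not drawn from Cisco AI Defense's actual integration.

```python
# Minimal NeMo Guardrails sketch (illustrative only; the ./guardrails_config
# directory and its policies are hypothetical, not Cisco AI Defense's setup).
from nemoguardrails import LLMRails, RailsConfig

# Load a rails configuration (YAML plus Colang policy files) from a local directory.
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Every prompt and response now passes through the configured input/output rails.
response = rails.generate(messages=[
    {"role": "user", "content": "Summarize our Q3 incident reports."}
])
print(response["content"])
```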

Complementing that is Splunk Observability Cloud, designed to help teams monitor the performance, quality, security, and cost of their AI application stack, including real-time insights into AI infrastructure health with Cisco AI PODs. Alongside it, users can also access Splunk Enterprise Security to protect their AI workloads.
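Since Splunk Observability Cloud ingests OpenTelemetry data, a rough sketch of how an AI service might export traces to it could look like the following; the endpoint, access token, and service name are placeholders rather than anything from Cisco's announcement.

```python
# Illustrative OpenTelemetry tracing sketch for an AI service exporting to
# Splunk Observability Cloud via OTLP; endpoint, token, and names are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "inference-gateway"}))
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(
    endpoint="https://ingest.us1.signalfx.com:443",   # placeholder realm endpoint
    headers=(("x-sf-token", "YOUR_ACCESS_TOKEN"),),   # placeholder access token
)))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("inference-gateway")
with tracer.start_as_current_span("model.generate") as span:
    span.set_attribute("model.name", "example-llm")   # hypothetical attribute
    # ... call the model here ...
```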

Next up, Cisco Isovalent is now validated for inference workloads on AI PODs, supporting enterprise-grade, high-performance Kubernetes networking. Meanwhile, Cisco Nexus Hyperfabric AI, boasting a new cloud-managed Cisco G200 Silicon One switch, can now be leveraged as a deployment option in AI PODs.
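Isovalent's networking layer is built on the open-source Cilium project, so, purely as an illustration (the namespace, labels, and ports below are hypothetical and not part of the validated design), a CiliumNetworkPolicy limiting which pods can reach an inference service might be applied like this:

```python
# Hypothetical sketch: applying a CiliumNetworkPolicy that only lets the API
# gateway reach the inference pods. Namespace, labels, and ports are made up.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "allow-gateway-to-inference", "namespace": "ai-pod"},
    "spec": {
        "endpointSelector": {"matchLabels": {"app": "inference-server"}},
        "ingress": [{
            "fromEndpoints": [{"matchLabels": {"app": "api-gateway"}}],
            "toPorts": [{"ports": [{"port": "8080", "protocol": "TCP"}]}],
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="cilium.io", version="v2", namespace="ai-pod",
    plural="ciliumnetworkpolicies", body=policy,
)
```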

In fact, the tech giant also confirmed that UCS 880A M8 rack servers are now formally part of AI PODs, bringing support for a wide range of workloads, including generative AI fine-tuning, inference, and more.

Another detail worth a mention is the wider availability of NVIDIA Run:ai, a solution that provides intelligent AI workload and GPU orchestration, while the Nutanix Kubernetes Platform (NKP) is now a supported Kubernetes platform.
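For a rough idea of how Run:ai slots into a Kubernetes-based AI POD, the sketch below hands a GPU workload to Run:ai's scheduler; the scheduler name, image, namespace, and resource sizes are assumptions made for illustration and do not reflect a validated Cisco configuration.

```python
# Hypothetical sketch: a fine-tuning pod handed to the Run:ai scheduler on a
# Kubernetes cluster. Scheduler name, image, namespace, and sizes are assumed.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="finetune-job", namespace="ai-pod"),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # assumed scheduler name for Run:ai queueing
        restart_policy="Never",
        containers=[client.V1Container(
            name="trainer",
            image="example.registry.local/llm-finetune:latest",  # placeholder image
            resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="ai-pod", body=pod)
```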

Hold on, we still have a few bits left to unpack: Nutanix Unified Storage (NUS) is now a supported storage option, with Nutanix Enterprise AI (NAI) serving as the interoperable software component that simplifies building and operating containerized inference services.

Rounding off the highlights is Cisco and NVIDIA's collective effort to advance NVIDIA AI Factory for Government, a full-stack, end-to-end reference design for AI workloads deployed in highly regulated environments.

“We’re at the beginning of the largest data center build-out in history,” said Jeetu Patel, President and Chief Product Officer, Cisco. “The infrastructure that will power the agentic AI applications and innovation of the future requires new architectures designed to overcome today’s constraints in power, computing, and network performance. Together, Cisco and NVIDIA are leading the way in defining the technologies that will power these AI-ready data centers in all their varieties, from emerging neoclouds, to global service providers, to enterprises, and beyond.”
