Business Wire
Published on: Apr 24, 2026
Netris has extended its network automation platform to support NVIDIA BlueField DPUs, enabling hardware-level multi-tenancy and network isolation for AI infrastructure—an increasingly critical requirement as enterprises scale GPU-intensive workloads.
As AI infrastructure scales, networking is emerging as a critical bottleneck—not just for performance, but for resource efficiency. Netris’ latest update to its Network Automation, Abstraction, and Multi-Tenancy (NAAM) platform reflects a growing industry focus on solving this challenge at the hardware level.
With version 4.7.0, Netris enables orchestration of NVIDIA BlueField DPUs alongside NVIDIA Spectrum-X switches within a unified Ethernet fabric. The result is a system that allows cloud providers and enterprise AI operators to implement granular, hardware-enforced tenant isolation—from entire GPU clusters down to individual GPUs within a server.
This level of granularity addresses a long-standing inefficiency in AI cloud environments. Traditionally, GPU resources are allocated at the server level, meaning that even small workloads often consume entire machines. This leads to underutilization, particularly when tenants require only a fraction of available compute capacity.
The introduction of concurrent multi-tenancy changes that dynamic. By enabling multiple tenants to share a single server while maintaining strict isolation, operators can significantly improve utilization rates and reduce idle capacity. However, achieving this in software alone introduces performance trade-offs, as CPU resources are diverted to manage networking and security functions.
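The utilization gain described above is easy to quantify with a back-of-the-envelope calculation. The sketch below compares exclusive server-level allocation against per-GPU sharing using a simple first-fit packing; the server size and tenant request figures are illustrative assumptions, not vendor-published numbers.

```python
# Back-of-the-envelope comparison: server-level vs per-GPU allocation.
# Figures are illustrative assumptions, not vendor-published numbers.

GPUS_PER_SERVER = 8
tenant_requests = [2, 1, 4, 1, 2, 3, 1]  # GPUs each tenant needs

# Server-level allocation: every tenant occupies a whole server.
servers_exclusive = len(tenant_requests)
util_exclusive = sum(tenant_requests) / (servers_exclusive * GPUS_PER_SERVER)

# Per-GPU allocation with hardware isolation: pack requests onto
# shared servers using simple first-fit decreasing bin packing.
servers: list[int] = []  # current GPU load per shared server
for need in sorted(tenant_requests, reverse=True):
    for i, load in enumerate(servers):
        if load + need <= GPUS_PER_SERVER:
            servers[i] += need
            break
    else:
        servers.append(need)  # no server had room; provision another

util_shared = sum(tenant_requests) / (len(servers) * GPUS_PER_SERVER)

print(f"exclusive: {util_exclusive:.0%} across {servers_exclusive} servers")
print(f"shared:    {util_shared:.0%} across {len(servers)} servers")
```

With these assumed numbers, 14 requested GPUs spread across seven dedicated servers yield 25% utilization, while packing the same tenants onto shared servers lifts utilization to roughly 88% on two machines.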
That’s where DPUs come into play. NVIDIA BlueField devices offload networking, storage, and security tasks from the CPU, executing them directly in hardware. This not only improves performance but also ensures consistent enforcement of policies such as tenant isolation and access control.
Netris’ contribution lies in orchestrating these hardware components into a cohesive system. By automating configuration across switches and DPUs, the platform creates a unified control plane that manages network segmentation, connectivity, and policy enforcement across the entire data center.
The underlying technologies—EVPN and VXLAN—are not new, but their automated application at scale is becoming increasingly important. Netris dynamically generates and maintains these configurations, allowing physical switch ports and DPU virtual functions to be assigned to the same tenant environment. This enables a mix of workloads, including bare-metal servers, virtualized applications, and edge devices, to coexist within a single virtual private cloud (VPC) while maintaining isolation.
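The single-source-of-truth pattern behind this kind of automation can be sketched in a few lines: a controller assigns each tenant VPC a unique VXLAN Network Identifier (VNI), and both physical switch ports and DPU virtual functions attach to the same VPC object. All class and method names below are hypothetical illustrations, not Netris' actual API.

```python
# Minimal sketch of tenant-to-VNI mapping in an EVPN/VXLAN fabric.
# Vpc, FabricController, attach(), render() are illustrative names,
# NOT the Netris API.

from dataclasses import dataclass, field

@dataclass
class Vpc:
    name: str
    vni: int                       # VXLAN Network Identifier, fabric-wide
    members: list[str] = field(default_factory=list)

    def attach(self, endpoint: str) -> None:
        """Attach a physical switch port or a DPU virtual function."""
        self.members.append(endpoint)

class FabricController:
    """Toy control plane: allocates one VNI per tenant and renders
    per-tenant config from a single source of truth."""
    BASE_VNI = 10000

    def __init__(self) -> None:
        self.vpcs: dict[str, Vpc] = {}

    def create_vpc(self, name: str) -> Vpc:
        vpc = Vpc(name, self.BASE_VNI + len(self.vpcs))
        self.vpcs[name] = vpc
        return vpc

    def render(self) -> dict:
        # One document drives config for both switches and DPUs.
        return {v.name: {"vni": v.vni, "members": v.members}
                for v in self.vpcs.values()}

ctl = FabricController()
tenant_a = ctl.create_vpc("tenant-a")
tenant_a.attach("leaf1:Ethernet12")   # bare-metal server's switch port
tenant_a.attach("dpu-host7:vf3")      # a single GPU's virtual function
print(ctl.render())
```

The point of the sketch is the mixed membership list: a bare-metal port and a per-GPU virtual function share one VNI, which is what lets heterogeneous workloads coexist in a single isolated VPC.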
From an enterprise perspective, this approach aligns with the shift toward composable infrastructure. Instead of fixed resource allocations, organizations can dynamically assemble compute, storage, and networking resources based on workload requirements. This flexibility is particularly valuable in AI environments, where training and inference workloads have different performance and scaling characteristics.
The platform also integrates with NVIDIA’s DOCA framework, enabling zero-trust configurations that restrict host-level access to networking controls. This is a critical feature in multi-tenant environments, where security boundaries must be enforced consistently across hardware and software layers.
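The zero-trust idea here is that the policy enforcement point lives on the DPU, out of the tenant host's reach: only the out-of-band fabric controller may change networking policy. The toy model below illustrates that control-flow split; it is a conceptual sketch, not the DOCA API, and every name in it is hypothetical.

```python
# Toy model of zero-trust network control: the DPU agent accepts
# policy changes only from the fabric controller, never from the
# tenant host it serves. Names are illustrative, not the DOCA API.

class DpuPolicyAgent:
    TRUSTED_ORIGIN = "fabric-controller"

    def __init__(self) -> None:
        self.rules: list[str] = []

    def apply_rule(self, rule: str, origin: str) -> bool:
        """Accept a rule only if it comes from the trusted controller."""
        if origin != self.TRUSTED_ORIGIN:
            return False  # host-initiated changes are rejected outright
        self.rules.append(rule)
        return True

agent = DpuPolicyAgent()
ok = agent.apply_rule("allow tenant-a vni 10000", "fabric-controller")
denied = agent.apply_rule("allow any", "tenant-host")
print(ok, denied)  # controller change accepted, host change rejected
```

Because the agent runs on the DPU rather than in the host OS, a compromised tenant workload has no code path to loosen its own isolation, which is the security boundary the article describes.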
The broader context is the rapid growth of AI infrastructure. According to IDC, spending on AI hardware and infrastructure is expected to grow at a double-digit rate through the decade, driven by enterprise adoption of machine learning and generative AI applications. As these deployments scale, efficient resource utilization and secure multi-tenancy become key operational priorities.
Cloud providers and enterprises alike are investing heavily in GPU clusters, often referred to as “AI factories.” These environments require not only compute power but also sophisticated networking to manage data flows, isolate workloads, and ensure consistent performance.
Netris positions its platform as a complement to higher-level orchestration tools, which typically operate above the network layer. While those tools manage compute and application workloads, they often rely on underlying network infrastructure to enforce isolation and connectivity. By providing a unified network control plane, Netris fills a gap that can otherwise lead to fragmentation and operational complexity.
The competitive landscape includes both traditional networking vendors and newer software-defined networking platforms. However, the integration of DPUs into network architectures is creating a new layer of differentiation. Vendors that can effectively orchestrate these components are likely to play a central role in next-generation data centers.
The implications extend beyond infrastructure teams. For organizations building AI-driven applications—including marketing analytics, customer data platforms, and real-time personalization engines—network performance and scalability directly impact user experience and business outcomes.
Technology leaders such as Amazon, Microsoft, and Google are already investing in similar architectures, integrating specialized hardware and software to optimize AI workloads at scale.
Looking ahead, the combination of DPUs, automated networking, and multi-tenancy is likely to become a standard feature of AI infrastructure. As organizations seek to maximize return on investment in GPU resources, solutions that enable fine-grained allocation and secure sharing will be increasingly valuable.
Netris’ latest release reflects this trend. By extending its platform to orchestrate NVIDIA BlueField DPUs within a unified fabric, the company is positioning itself at the intersection of networking and AI infrastructure—two domains that are becoming inseparable as enterprises scale their AI ambitions.
AI infrastructure is evolving toward highly optimized, composable architectures that integrate compute, networking, and storage at a granular level. The adoption of DPUs represents a significant shift, enabling hardware-level acceleration and security.
As enterprises and cloud providers build AI factories, the need for automated, scalable networking solutions is increasing. Platforms that can unify control across diverse hardware components are emerging as critical enablers of next-generation data centers.