Business Wire
Published on: Mar 5, 2026
The AI infrastructure race is accelerating—and networking may be the quiet bottleneck everyone’s trying to solve.
Netris, a vendor focused on network automation and multi-tenancy for AI infrastructure, says demand for its platform is surging. The company reported 622% year-over-year ARR growth in 2025, alongside rapid adoption from AI cloud operators building large-scale GPU infrastructure.
In the past 10 months alone, Netris says it has onboarded 15 AI cloud operators across more than 20 deployments, many spanning multiple data centers. The company claims this footprint now makes its platform the most widely deployed network automation and multi-tenancy layer for AI infrastructure.
That momentum reflects a broader shift in the AI cloud market: as organizations race to build GPU-heavy infrastructure, networking—particularly multi-tenant, automated networking—has emerged as a critical layer for delivering AI services at scale.
The scale of AI infrastructure investment is staggering. According to industry projections, global AI infrastructure spending could reach $758 billion by 2029, while AI-driven economic impact may exceed $22 trillion by 2030.
But the networking tools originally designed for traditional enterprise data centers weren’t built with AI workloads in mind.
Training clusters and GPU clouds demand extremely high bandwidth, dynamic resource allocation, and strict tenant isolation. At the same time, operators must deliver cloud-like functionality such as elastic networking, rapid provisioning, and secure multi-tenancy.
Legacy approaches struggle to keep up.
Enterprises that attempt to build network automation internally often face long development timelines and fragile results. Manual configuration errors remain common, and delays in provisioning infrastructure directly translate into lost revenue—particularly when expensive GPUs sit idle.
In AI infrastructure, every idle GPU is effectively money left on the table.
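A quick back-of-the-envelope calculation shows why. The figures below are illustrative assumptions, not Netris or market data: a hypothetical 1,024-GPU cluster, an assumed rental rate of $2.50 per GPU-hour, and a two-week provisioning delay.

```python
# Back-of-the-envelope cost of idle GPU capacity.
# All figures are illustrative assumptions, not Netris or market data.
def idle_cost(num_gpus: int, hourly_rate: float, idle_hours: float) -> float:
    """Revenue forgone while GPUs sit unprovisioned."""
    return num_gpus * hourly_rate * idle_hours

# Assume a 1,024-GPU cluster billed at $2.50 per GPU-hour,
# sitting idle for two weeks while networking is hand-configured.
lost = idle_cost(1024, 2.50, 14 * 24)
print(f"${lost:,.0f}")  # prints "$860,160"
```

Even under conservative assumptions, a provisioning delay measured in weeks translates into six- or seven-figure sums of forgone revenue.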
Netris positions its platform—known as NAAM (Network Automation and Abstraction for Multi-tenancy)—as a purpose-built control layer for AI infrastructure operators.
Instead of relying on legacy fabric managers or manually built automation scripts, the platform enables cloud operators to automate the entire lifecycle of AI networking, including provisioning, segmentation, and capacity allocation.
The result, the company argues, is the ability to launch GPU cloud services far faster than traditional approaches.
Among the capabilities Netris highlights:
Automated multi-tenancy: Tenants receive dedicated network isolation automatically when GPU resources are provisioned.
Dynamic GPU pool resizing: Operators can adjust cluster capacity without interrupting active AI workloads.
Elastic networking features: Capabilities like elastic IPs and load balancing that resemble hyperscale cloud infrastructure.
Reduced configuration errors: Automation helps eliminate manual networking mistakes that can disrupt customer workloads.
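The core idea behind the first capability can be sketched in a few lines. This is a hypothetical model of the concept only, not the Netris API: provisioning GPU resources for a tenant implicitly allocates a dedicated, isolated network segment (represented here by a VLAN ID), so isolation never depends on a manual configuration step.

```python
# Hypothetical sketch of automated multi-tenancy: allocating GPUs to a
# tenant automatically carves out a dedicated network segment for it.
# This models the concept only; it is not the Netris API.
from dataclasses import dataclass, field

@dataclass
class Fabric:
    next_vlan: int = 100
    tenants: dict = field(default_factory=dict)  # name -> (vlan, gpu_count)

    def provision(self, tenant: str, gpus: int) -> int:
        """Allocate GPUs; the tenant gets an isolated segment automatically."""
        if tenant not in self.tenants:
            self.tenants[tenant] = (self.next_vlan, 0)
            self.next_vlan += 1
        vlan, current = self.tenants[tenant]
        self.tenants[tenant] = (vlan, current + gpus)
        return vlan

fabric = Fabric()
print(fabric.provision("acme", 64))    # 100 -- new isolated segment
print(fabric.provision("globex", 8))   # 101 -- separate segment
print(fabric.provision("acme", 32))    # 100 -- same tenant, same segment
```

The point of the sketch is the coupling: tenant isolation is a side effect of provisioning, not a separate manual task that can be forgotten or fat-fingered.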
For AI cloud providers trying to monetize GPU infrastructure quickly, these features can make the difference between launching services in weeks versus years.
A key factor behind Netris’ growth appears to be its integration with NVIDIA’s expanding AI infrastructure ecosystem.
The company says it is the first independent software vendor validated by NVIDIA for AI network automation, with deployments supporting multi-tenant environments built on NVIDIA Spectrum-X Ethernet networking.
Using AI factory simulations in NVIDIA Air, Netris has extended integrations across several pieces of the AI networking stack, including:
Spectrum-X Ethernet networking
Quantum InfiniBand
NVL72 GPU architectures
BlueField DPUs
Edge and virtual networking components
That ecosystem alignment matters because many emerging AI cloud providers rely heavily on NVIDIA reference architectures to build GPU clusters.
Networking platforms that integrate seamlessly with those architectures can dramatically simplify deployment.
Beyond switch-level automation, Netris also introduced a new component called Softgate HS, designed as a horizontally scalable, multi-tenant edge gateway.
In practice, this fills a networking gap that traditional switching infrastructure doesn’t address.
While switches can provide segmentation and traffic management inside the data center fabric, cloud providers also need application-level networking capabilities such as tenant routing, edge services, and flexible connectivity.
Softgate aims to deliver those features as a software layer integrated with the Netris automation platform.
According to the company, 95% of customers running Netris-managed switch fabrics have adopted Softgate as well, suggesting operators see value in extending automation beyond the core network fabric.
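The "horizontally scalable" property of such a gateway can be illustrated with a minimal sketch. All names and addresses here are invented for illustration; this is not the Softgate implementation: elastic IPs map to tenant backends through a shared table, and because each gateway replica is stateless, any replica can serve any mapping.

```python
# Hypothetical sketch of a horizontally scalable, multi-tenant edge
# gateway: elastic IPs resolve to tenant backends via shared state, so
# stateless replicas can be added freely. Not the Softgate implementation.
class EdgeGateway:
    def __init__(self, shared_table: dict):
        self.table = shared_table  # elastic_ip -> (tenant, backend_ip)

    def route(self, elastic_ip: str) -> str:
        """Resolve a public elastic IP to a tenant's internal backend."""
        tenant, backend = self.table[elastic_ip]
        return f"{tenant}:{backend}"

# One shared mapping, many stateless replicas -> horizontal scaling.
table = {"203.0.113.10": ("acme", "10.0.0.5")}
replicas = [EdgeGateway(table) for _ in range(3)]
print({gw.route("203.0.113.10") for gw in replicas})  # every replica agrees
```

Keeping routing state out of the individual gateway is what lets capacity scale by simply adding replicas, rather than by upgrading a single chassis.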
The company’s customer base reflects several fast-growing segments in the AI infrastructure market.
One major category is neocloud providers—new entrants focused specifically on delivering GPU-based AI compute. Companies such as STN, Boost Run, and TensorWave have built AI cloud services using Netris as their networking foundation.
These providers compete with hyperscalers by offering highly specialized GPU clusters optimized for AI training and inference workloads.
Another major segment is sovereign AI infrastructure operators, which are building national AI capabilities in response to data sovereignty and geopolitical concerns.
Organizations including TELUS in Canada, DCAI in Denmark, and Yotta Data Services in India are deploying AI infrastructure designed to meet national compliance and security requirements.
In these environments, strict multi-tenancy and workload isolation are essential.
“Dedicated GPU isolation, compliance, and predictable performance are table stakes,” said Sabur Mian, founder and CEO of STN. “Netris provides the network-level abstraction and segmentation that makes secure, cloud-scale multi-tenancy possible.”
Alongside customer growth, Netris has also expanded its global footprint.
The company now operates teams in:
The United States
Taiwan
Australia
India
It plans to expand further in 2026 with new operations in the United Kingdom and Singapore.
That geographic spread reflects where AI infrastructure demand is emerging: not just in hyperscale markets but also in regional cloud ecosystems and sovereign AI initiatives.
Governments and enterprises increasingly want domestic AI capacity rather than relying entirely on global cloud providers.
The rapid growth Netris is reporting highlights a broader trend in the AI infrastructure stack.
Much of the industry’s attention has focused on GPUs, AI accelerators, and data center buildouts. But networking automation—particularly multi-tenant networking—has become a critical layer enabling AI infrastructure to function as a cloud service.
Without automation, GPU clusters are difficult to scale, expensive to operate, and slow to provision.
That’s why infrastructure vendors across the ecosystem—from networking companies to AI platform providers—are racing to build orchestration layers for GPU-heavy environments.
If Netris can maintain its current trajectory, it could become one of the defining control layers in the emerging AI cloud stack.
For now, the company’s pitch is straightforward: if AI infrastructure is the next trillion-dollar buildout, networking automation may determine who can actually deploy it at scale.
Netris says its partner ecosystem continues to expand as compute and platform vendors integrate with its networking automation stack.
The company is also deepening collaboration with NVIDIA and other infrastructure providers to support new GPU generations and AI networking architectures.
CEO Alex Saroyan frames the moment as an early phase in a much larger transformation.
“Building AI infrastructure is the opportunity of a generation,” he said. “The road ahead is even bigger as the industry enters its next phase of growth.”
Netris plans to showcase new capabilities and live demonstrations of its platform at the upcoming NVIDIA GTC conference in San Jose.
For AI cloud operators racing to build the next generation of infrastructure, networking automation may increasingly determine who wins the GPU cloud race.