
Genesys Empowers Aterian to Deliver Empathetic, AI-Powered Customer Experiences at Scale

customer experience management 20 Nov 2025

Genesys®, a global leader in AI-powered experience orchestration, has enabled Aterian — the multibrand consumer products powerhouse behind Squatty Potty, Healing Solutions, and more — to transform its customer experience (CX) operations. Leveraging the Genesys Cloud™ platform, Aterian has redefined how it manages interactions across marketplaces like Amazon, Walmart, eBay, Temu, and Shopify, achieving a 65% decrease in cost of ownership, greater efficiency, and more emotionally intelligent customer engagement.

Aterian’s CX Challenges Before Genesys

  • Rapid expansion across major ecommerce marketplaces increased operational complexity.

  • Over 70% of customer interactions were handled through asynchronous channels such as emails and buyer messages.

  • The company needed a unified CX foundation to personalize customer journeys without sacrificing emotional connection.

  • Fragmented tools made it difficult to scale, maintain consistency, or optimize agent performance.

How Genesys Cloud Transformed Aterian’s Customer Experience

1. Unified CX Infrastructure for Faster, More Personal Support

  • Aterian rebuilt its entire customer experience foundation using the Genesys Cloud platform.

  • Consolidated systems streamlined communication and simplified agent workflows.

  • The company created a cohesive, omnichannel experience, enabling agents to handle inquiries seamlessly across channels.

2. Intelligent Automation to Scale Empathy and Efficiency

  • Aterian integrated its own AI models with Genesys Cloud AI, creating a hybrid “human + AI” support environment.

  • Nearly 50% of all customer interactions are now automated, freeing agents for complex or emotionally sensitive cases.

  • Automation delivered consistent support quality while reducing handle time by almost 25%.

3. Empowered Agents Through AI-Driven Coaching and Real-Time Insights

  • AI-powered guidance is embedded natively in the Genesys interface — no additional dashboards or external tools.

  • Agents receive contextual insights and next-best-action recommendations during live interactions.

  • This resulted in a 33% increase in agent satisfaction, showcasing how AI can reduce friction, enhance performance, and support agent confidence.

4. Delivering Empathy at Scale: A Strategic Advantage

  • Genesys Cloud enabled Aterian to maintain its core brand philosophy — emotionally intelligent customer care — even as it scaled rapidly.

  • The platform ensures customers feel heard and valued, whether they interact with a human or a virtual agent.

  • The company now turns each interaction into an opportunity to strengthen loyalty and trust.

5. Supporting Aterian’s Diversifying Product Portfolio

  • The new CX foundation supports expansion into categories like:

    • Squatty Potty Flushable Wipes

    • Healing Solutions Tallow Skincare

  • Genesys Cloud ensures consistent brand experiences across all new product lines.

  • The platform prepares Aterian for smooth seasonal surges, especially during high-volume periods like the holidays.

 

Aterian’s partnership with Genesys marks a pivotal shift in how consumer brands approach customer experience at scale. By merging AI-powered orchestration with human empathy, the company has built a modern CX ecosystem that supports growth, consistency, and deeper customer connection. With intelligent automation, real-time agent coaching, and a unified platform, Aterian is positioned to deliver meaningful, emotionally resonant experiences across every touchpoint — today and as it expands into the future.

Get in touch with our MarTech Experts.

AVOXI’s AI-Powered Proactive Service Promises Faster, Smarter Voice Diagnostics for Global Contact Centers

audio technology 20 Nov 2025

Cloud voice platforms have long been judged on reliability, call quality, and how quickly they can untangle the messy business of diagnosing failures. Now AVOXI wants to rewrite that playbook with Proactive Service, an AI-enabled diagnostic tool that claims to spot voice issues before customers ever notice—97% of the time, according to the company.

For global contact centers juggling thousands of conversations per hour, that kind of foresight isn’t just convenient—it’s a competitive advantage.

A New Spin on Voice Reliability

Traditional voice troubleshooting typically leans on ticket queues, manual testing, and a fair amount of guesswork when outages or quality hiccups occur. AVOXI’s pitch is simple: replace the legacy break-fix workflow with continuous monitoring powered by automation and AI.

Proactive Service scans for call flow disruptions, number-level availability problems, and traffic anomalies. When it detects suspicious behavior, it automatically runs diagnostics, gathers context, and opens support cases—no human intervention needed. AVOXI says this approach cuts issue resolution time in half.
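The monitor-detect-diagnose-escalate loop described above can be sketched in miniature. The snippet below is a hypothetical illustration, not AVOXI's implementation: the names (`CallMetrics`, `SupportCase`, `detect_anomalies`) are invented, and a simple z-score against each number's historical answer rate stands in for whatever detection logic Proactive Service actually runs.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class CallMetrics:
    number: str
    answer_rate: float   # fraction of call attempts that connected
    mos: float           # mean opinion score, 1.0-5.0

@dataclass
class SupportCase:
    number: str
    reason: str
    diagnostics: dict = field(default_factory=dict)

def detect_anomalies(history, latest, z_threshold=3.0):
    """Open a case for any number whose answer rate drops far below baseline."""
    cases = []
    for m in latest:
        baseline = history.get(m.number, [])
        if len(baseline) < 5:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (mu - m.answer_rate) / sigma > z_threshold:
            # diagnostics gathered automatically -- no human intervention
            cases.append(SupportCase(
                number=m.number,
                reason="answer-rate drop",
                diagnostics={"baseline": round(mu, 3), "observed": m.answer_rate},
            ))
    return cases
```

In a real system the loop would run continuously against live telemetry and file the case into a support queue; the point here is only the shape of the workflow, not its scale.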

It’s a timely upgrade. According to the 2025 State of International Voice for the Contact Center Report, the industry is wrestling with a three-headed monster:

  • 80% cite voice security concerns

  • 77% struggle with call quality

  • 78% see gaps in global coverage

With contact centers expanding across regions and marketplaces, the pressure to maintain uptime and route calls intelligently has never been higher. Competitors like Genesys, Twilio, and Zoom have all been weaving AI into their communications fabrics, but diagnostics—particularly proactive diagnostics—remain an area where few have staked a definitive claim.

Replacing “Break/Fix” With “Predict/Prevent”

AVOXI frames Proactive Service as a shift from reactive to preventive support. Instead of waiting for angry customers to call about dropped connections, the platform preemptively identifies the underlying risk—whether it’s a misbehaving virtual number, a routing misconfiguration, or a traffic spike that hints at fraud.

For enterprises running contact centers in five or more countries, these early warnings can translate into avoided downtime, protected revenue, and fewer blown SLAs. And with call quality continuing to define customer experience, the stakes are only getting higher.

“Every second counts for enterprises that rely heavily on contact centers to engage with callers,” says CEO Barbara Dondiego. “Proactive Service sets a new standard for protecting global voice more actively and intelligently.”

Built Into AVOXI’s New Premium AI Cloud Package

Proactive Service is part of AVOXI’s new Premium AI Cloud SaaS package, designed for organizations that want deeper oversight and stronger threat resilience in their voice infrastructure. The suite combines analytics, call flow monitoring, issue triage, security insights, and automated case creation in one dashboard.

For companies operating across Amazon, Walmart, Walmart Connect, and emerging global marketplaces, a diagnostic tool that works quietly in the background—and works fast—could deliver a measurable edge.

In a market where AI is reshaping everything from workforce scheduling to fraud detection, AVOXI’s focus on automated diagnostics feels like a natural next step. Whether it becomes the new normal across cloud voice platforms will depend on how effectively Proactive Service scales—but the early numbers give it a strong opening argument.

KERV.ai Secures Series B to Push Interactive, Shoppable Video Into Its Next Growth Phase

advertising 20 Nov 2025

In a digital landscape where video is king and attention spans are the ultimate currency, KERV.ai is doubling down on its ambition to make every frame count—literally. The Austin-based startup just closed its Series B funding round, led by Coral Tree Partners, to accelerate its push into interactive, shoppable, and data-rich video experiences across online and connected TV (CTV).

KERV.ai has been building momentum for months, reporting record commercial and partnership growth. Now, with fresh capital in hand, the company wants to expand globally, pour more fuel into R&D, and build out its contextual commerce engine—the same engine quietly powering clickable product moments inside ads, shows, and creator content.

Video’s New Currency: Object-Level Metadata

While much of the industry talks about AI-powered video, KERV.ai’s pitch is more granular. Its platform parses videos frame-by-frame, identifying products, objects, scenes, and contextual cues with proprietary object-level metadata. That data then drives everything from shoppable overlays to dynamic creative optimization to first-party data targeting.
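To make "object-level metadata" concrete, here is a minimal sketch of how frame-indexed object tags could drive both shoppable overlays and contextual targeting. The structures (`ObjectTag`, `overlays_at`, `contextual_segments`) are hypothetical and are not KERV.ai's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectTag:
    frame: int          # frame index within the video
    label: str          # detected object, e.g. "handbag"
    product_id: str     # catalog item the object was matched to
    bbox: tuple         # (x, y, w, h) where the clickable overlay renders

def overlays_at(tags, frame):
    """Return the clickable overlays that should render on a given frame."""
    return [t for t in tags if t.frame == frame]

def contextual_segments(tags):
    """Derive coarse, privacy-safe targeting segments from on-screen content."""
    return sorted({t.label for t in tags})
```

Note that the targeting signal comes entirely from the content itself, which is why this kind of metadata sidesteps third-party cookies.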

In a world where advertisers are staring down the deprecation of third-party cookies and increasingly opaque attribution, KERV.ai’s approach offers something rare: actionable, privacy-safe intelligence extracted directly from content itself. Brands and publishers get smarter targeting and measurable outcomes; consumers get interactive moments that feel less like ads and more like discovery.

It’s a formula that’s resonating with agencies and CTV publishers searching for ways to improve performance without cramming more ads into their streams.

Investors See a Convergence Moment

Coral Tree Partners, known for backing companies at the intersection of media and technology, says KERV.ai is well-positioned to lead a long-overdue shift.

“KERV.ai has built a proprietary technology that combines creative storytelling, commerce activation, and data-driven performance,” said Coral Tree’s Alan Resnikoff. “This team is poised to lead the convergence of content, commerce and contextual intelligence.”

That convergence is already happening across the ecosystem. Amazon has been experimenting with shoppable streaming formats, Roku continues to invest in retail media tie-ins, and TikTok is pushing deeper into AI-powered product recognition. KERV.ai’s differentiation is its ability to apply these capabilities across all screens, not just its own walled garden.

CTV Growth, Ad-Supported Tiers, and the Commerce Flywheel

A big tailwind behind this raise is the explosive growth of ad-supported streaming. As more platforms—from Disney+ to Netflix—launch or expand AVOD tiers, the pressure is on to make ads more effective without increasing volume.

That’s where contextual commerce comes in.

Instead of relying on broad demographics or third-party segments, object-level metadata allows advertisers to target based on exact on-screen relevance. A character carries a particular handbag? A viewer can buy it. A cooking show features a specific spice blend? One tap takes you to checkout.

Publishers benefit too: interactive formats often deliver higher engagement and superior CPMs.

KERV.ai’s CEO Gary Mittman frames it as the start of a new era of performance video:
“Video remains the most powerful medium for connection, and KERV.ai is redefining how data, commerce and creativity come together,” he said. “With Coral Tree’s partnership, we’ll continue scaling our contextual commerce and AI video-intelligence solutions to drive measurable results for our clients.”

What Comes Next

With the new funding, KERV.ai plans to invest in:

  • Expanded R&D for advanced AI video intelligence

  • Global infrastructure and engineering talent

  • New strategic partnerships across retail media and CTV

  • Scalable tools for brands and agencies to build interactive creative

The company’s raise also underscores a broader industry trend: interactive video is becoming a competitive differentiator, not a novelty. As CTV continues its march toward retail media integration and AI personalization, expect more players to double down on contextual commerce.

KERV.ai—armed with fresh capital, growing demand, and a maturing tech stack—appears ready to push video deeper into the shoppable, measurable, data-enriched future marketers have been chasing.

Rivvit Launches AI Virtual Analyst to Turn Investment Data Into Instant Answers

artificial intelligence 20 Nov 2025

In the world of investment management, clean data has always been the dream; usable data, a luxury; and conversational data? Practically science fiction—until now. Rivvit Inc., known for its data management and reporting tools used by investment firms, is launching an AI-powered virtual analyst designed to let professionals query their portfolios, documents, and reports as casually as talking to a colleague.

If it works as advertised, Rivvit isn’t just bolting AI onto old infrastructure. It’s positioning itself as a pioneer of “explainable, governed AI” in an industry where messy data is often the single biggest obstacle to automation.

Why This Matters: AI Only Works If the Data Does

Generative AI has flooded nearly every corner of finance, but the industry’s biggest pain point hasn’t changed: garbage in, garbage out. Rivvit CEO Matt Biver is leaning directly into that problem.

“Data is the fuel for AI,” he says. “But AI only works when the data beneath it is clean, organized, and reliable.”

That’s where Rivvit’s long-standing pitch comes into play. The company already centralizes, validates, and governs investment data across portfolio management systems, custodians, internal documents, and reporting workflows. Now the same infrastructure powers a conversational layer capable of answering natural language questions.

This stands in sharp contrast to generic AI copilots that operate on loosely connected data lakes or static documents. Rivvit’s point of differentiation: a fully governed, institution-grade data backbone that ensures answers are trustworthy and traceable, not “AI guesses dressed up as facts.”

The Virtual Analyst: A New Interface for Investment Intelligence

Rivvit’s virtual analyst can handle a variety of investment tasks without requiring SQL skills, BI dashboard builds, or specialized reporting knowledge. Users simply ask:

  • “How has our allocation to global equities shifted over the last three quarters?”

  • “Explain the change in AUM for Fund X.”

  • “What are the emerging risk exposures across the portfolio?”

  • “Pull notable performance trends for tomorrow’s investment committee.”

The platform promises conversational intelligence layered over deterministic, governed data—something that’s rare even among modern data-focused fintech firms.

In practice, the system touches nearly every functional group in an investment organization:

  • Portfolio managers get allocation, attribution, and macro trend insights.

  • Risk teams get immediate explanations behind anomalies and performance swings.

  • Operations and accounting get fast reconciliation and AUM movement analysis.

  • Executives and committee members get instant briefings and narrative summaries.

The pitch, in essence: Why wait for next week’s reporting cycle when you could ask a question right now?

Filling the Gaps Left by BI and Reporting Tools

For years, asset managers have stitched together dashboards, spreadsheets, SQL queries, and static PDF reports. The result: fragmented visibility and heavy analyst workloads spent preparing (not analyzing) data.

Rivvit argues that the virtual analyst doesn’t replace analysts or BI tools—it eliminates the tedious layers between business questions and answers.

This marks the next step in the company’s five-stage data evolution:

1. Data foundation — unify and clean data
2. Reliable reports — provide validated, consistent output
3. Governance — track lineage, quality, and availability
4. Trusted queries — enable self-service exploration
5. AI intelligence — layer natural language understanding on top

Most vendors try to start at Stage 5, leaving clients to untangle their messy foundations. Rivvit is taking the opposite route: build the plumbing first, then build AI.

It’s a difference that institutional investors will not overlook.

Competitive Landscape: A Race Toward Explainable AI in Finance

Rivvit’s move comes as investment managers increasingly experiment with generative AI—JPMorgan is building investment copilots, BlackRock is investing heavily in AI models, and dozens of emerging fintechs promise AI-enabled insights. But many of these tools rely on static or incomplete data, and few integrate with existing pipelines deeply enough to guarantee reliability.

Rivvit’s strength is that it lives inside the data layer itself. It doesn’t just access data; it governs it.

That’s a meaningful differentiator in an industry where regulators expect explainability and firms expect precision.

The Future: AI as the Reward for Doing Data Right

Biver puts it bluntly:
“AI isn’t the end of the data journey. It’s the reward for doing data right.”

By that logic, Rivvit’s virtual analyst is less a feature launch and more a culmination of years of infrastructure work. It also signals a broader shift—investment firms no longer want analytics tools that require technical expertise. They want natural language, fast answers, and reliable data.

 

If Rivvit can deliver all three without compromising accuracy, it could set a new benchmark for AI-enabled data intelligence in financial services.

Delight.ai Wants to Be Your Brand’s AI Concierge—with a Memory That Doesn’t Forget

customer experience management 20 Nov 2025

Customer support bots are everywhere now—but most of them still suffer from goldfish-level memory. Sendbird wants to fix that. The company today launched Delight.ai, a branded AI concierge designed to remember every interaction, follow customers across channels, and actually act on behalf of a brand. Think of it as a customer support agent that doesn’t forget you the moment the chat window closes.

Sendbird, which already powers conversations for more than 300 million people per month, says Delight.ai is meant to be deployed anywhere customers communicate: in-app chat, voice, SMS, email, and social channels. The draw? Long-term memory that adapts, anticipates, and personalizes over time—something most AI agents don’t even attempt.

Why This Matters Now

Consumers have made their preferences clear: 62% now choose automated support over waiting for a human, and 75% of service leaders are increasing their AI budgets this year. If customer experience is a revenue engine—and for many brands it is—the AI servicing it can’t be amnesiac.

Most AI support systems are reactive, instantly forgetting conversation context and forcing users to repeat themselves across channels. Not only is that inefficient, it’s a fast track to customer churn. Sendbird argues that Delight.ai shifts the equation from transactional service to proactive, memory-driven engagement.

CEO John Kim doesn’t mince words: conventional AI agents “fail customers,” he says, limiting trust and revenue. By contrast, Delight.ai aims to deliver “personal, present and trustworthy” experiences—less chatbot, more concierge.

What’s New: How Delight.ai Works

Sendbird positions Delight.ai as the first branded AI concierge built on long-term memory, anchored around three strategic pillars:

1. Persistent Memory That Actually Persists

Instead of relying on static CRM records or short-lived session data, Delight.ai absorbs signals from every interaction—actions, preferences, behaviors—to build an evolving customer profile. The promise: personalization that matures over time rather than resetting with each ticket.
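A profile that accumulates signals instead of resetting per session can be sketched as follows. This is a toy model under invented names (`CustomerProfile`, `MemoryStore`), not Sendbird's architecture, which would persist profiles in durable storage rather than in-process.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    customer_id: str
    preferences: dict = field(default_factory=dict)
    channel_history: list = field(default_factory=list)

    def observe(self, channel, event, **signals):
        """Fold a new interaction into the long-lived profile."""
        self.channel_history.append((channel, event))
        self.preferences.update(signals)  # newer signals refine older ones

class MemoryStore:
    """Profiles persist across sessions instead of resetting with each ticket."""
    def __init__(self):
        self._profiles = {}

    def get(self, customer_id):
        return self._profiles.setdefault(
            customer_id, CustomerProfile(customer_id)
        )
```

The key property is in `MemoryStore.get`: asking for the same customer twice returns the same evolving profile, so context from an SMS exchange is already there when the customer reappears in chat.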

2. Omnichannel Continuity and Proactive Follow-Up

Switching from SMS to chat mid-conversation? Delight.ai carries context with you. Drop off halfway through a conversation? It proactively re-engages. This continuity is key for brands juggling multiple touchpoints—and tired customers who hate repeating themselves.

3. Enterprise-Grade Governance via Trust OS

Concerns about AI autonomy? Sendbird has an answer: Trust OS, a governance layer offering observability, policy controls, traceability, and guardrails. The pitch is clear—give your AI agent autonomy, but never let it color outside the brand lines.

A Real-World Proof Point

Hanssem Furniture, an early adopter, claims Delight.ai now nails 90% of first-touch engagements and delivers interactions that feel “natural,” according to CEO Eugene Kim. The metric that matters: customers “feel remembered”—a rarity in today’s fractured support landscape.

Where Delight.ai Fits in the Market

AI support tools like Intercom Fin, Zendesk’s AI agent, and Ada have pushed personalization and efficiency forward—but none emphasize persistent, customer-specific memory as a core feature. That’s where Sendbird is positioning its differentiator.

If Delight.ai delivers on its promise, it could redefine what brands expect from their AI agents—moving from fast responses to relationship-driven engagement that impacts lifetime value.

Who It’s For—and What Comes Next

Delight.ai is available now for mid-market and enterprise companies across retail, travel, on-demand services, SaaS, fintech, and healthcare. Because it can work across the full lifecycle—sales, marketing, support, loyalty—it’s pitched as a revenue driver, not just a support tool.

The bigger question is whether persistent-memory AI becomes the new standard in customer experience. If it does, Delight.ai may have arrived right on time.

Microsoft Fabric Gets a Data Quality Boost with Telmai’s Real-Time Observability

artificial intelligence 20 Nov 2025

In the era of agentic AI—where autonomous systems rely on constant, high-quality, contextual data—data observability isn’t a nice-to-have anymore. It’s survival gear. Telmai, the AI-powered data quality and observability platform, is stepping into that gap with a new partnership aimed squarely at Microsoft Fabric users.

The company announced that its data reliability engine now integrates natively with Microsoft OneLake, bringing real-time monitoring, validation, and trust signals directly into the heart of the Fabric ecosystem. The result: faster insight, fewer broken pipelines, and analytics models that don’t need a rescue mission every time the data shifts.

Why This Partnership Matters Now

Organizations building agentic AI and real-time analytics systems face a fundamental bottleneck: traditional data validation isn’t built for low latency, distributed architectures, or constant context shifts. Fabric users—many of whom are already grappling with data spread across domains—need observability that keeps pace with the speed of automation.

Telmai is positioning its platform as an answer to that shift. Rather than validating data downstream—after it hits dashboards or AI workflows—it monitors and checks data as soon as it lands in OneLake, across structured, semi-structured, and even unstructured formats.

CEO and co-founder Mona Rakibe puts it bluntly: “Ensuring data reliability is no longer optional—it’s table stakes.” For agentic AI, where decisions happen autonomously and instantly, bad data isn’t just costly; it’s dangerous.

What’s New: Real-Time, Source-Level Validation in Fabric

Telmai’s integration with OneLake brings a few capabilities that stand out:


1. Continuous Validation at the Source

Data is checked the moment it arrives in OneLake—catching anomalies before they propagate into dashboards, models, or downstream apps. This ensures Fabric users can maintain low-latency access to validated, contextualized data, eliminating blind spots that slow decision-making.


2. Custom Business Logic and Alerting

Telmai’s engine allows teams to configure their own validation rules, anomaly detection thresholds, and alerting policies. Rather than generic “something broke somewhere” notifications, users get targeted, actionable insights tied to business context.
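The difference between generic alerts and context-rich ones comes down to how rules are declared and what travels with each alert. The sketch below uses invented names (`ValidationRule`, `validate`) and is not Telmai's API; it only illustrates the pattern of user-defined checks producing alerts that carry their own business context.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ValidationRule:
    name: str
    check: Callable[[dict], bool]   # True means the record passes
    severity: str

def validate(records, rules):
    """Run every rule against every record; collect contextual alerts."""
    alerts = []
    for i, rec in enumerate(records):
        for rule in rules:
            if not rule.check(rec):
                alerts.append({
                    "rule": rule.name,         # which check fired
                    "severity": rule.severity,
                    "record_index": i,         # where it fired
                    "record": rec,             # context travels with the alert
                })
    return alerts
```

Because each alert names the rule, the severity, and the offending record, a recipient can act on it directly rather than decoding a generic "something broke somewhere" notification.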


3. AI-Powered Data Reliability Agents Across Fabric

Here’s where Telmai differs from traditional observability tools: its Data Reliability Agents allow both technical and non-technical users to query issues, troubleshoot anomalies, and deploy monitoring policies using plain-language commands.

This decentralized model is critical for Fabric’s domain-first architecture, reducing the burden on engineering teams and making data trust a shared—and accessible—capability.


4. Contextual Explanations for Root Causes

Instead of dumping a list of anomalies on data teams, Telmai provides explanations and supporting context about why issues occurred. Faster troubleshooting means shorter time-to-resolution and less operational drag on analytics pipelines.


The Fabric Landscape—and Telmai’s Place in It

Microsoft Fabric has quickly become a central hub for enterprises consolidating analytics, governance, and AI workloads. But this consolidation raises the bar for data quality: errors travel farther, faster, and into more systems.

Telmai’s integration signals Microsoft’s growing emphasis on vetted, explainable, production-ready data. Dipti Borkar, VP & GM of Microsoft OneLake & ISV Ecosystem, noted that accuracy and trust are “critical to the success of any analytics and AI project,” emphasizing that Telmai’s capabilities help users “quickly and easily build AI-ready, trusted data products.”

In a market filled with observability contenders—Monte Carlo, Bigeye, Soda, Databand—Telmai is carving out a space that leans heavily into AI explainability and domain-level trust, aligning closely with Fabric’s own architectural philosophy.

The Bottom Line

Agentic AI won’t tolerate laggy, inconsistent, or context-poor data. Telmai’s partnership with Microsoft is a strategic play to make Fabric not just a unified analytics platform, but a trusted one—with real-time validation baked in at the source.

For enterprises scaling AI-driven analytics, this integration may prove to be not just a convenience but a competitive necessity.

VertexOne Reshuffles Leadership to Sharpen Customer Experience in the Utility Tech Race

customer experience management 19 Nov 2025

VertexOne, long known for its customer-experience-first approach to utility and energy software, is reorganizing its top bench. The company announced a pair of strategic leadership changes designed to tune up delivery performance and unify the customer journey—a move that reflects how fiercely competitive the utility tech landscape has become.

Energy providers today face more pressure than ever: rising customer expectations, digital modernization mandates, and the operational complexity of distributed energy resources. Vendors in the space aren’t just selling software—they’re selling outcomes. And VertexOne is clearly betting that the right leadership alignment is the lever that drives those outcomes faster.

A New Ops Chief to Tighten the Machine

Keith Ahonen steps into the role of Executive Vice President, Operations, placing him squarely in charge of deployments and delivery across VertexOne’s client portfolio. For utilities, where timelines are tight and integrations are deep, consistency isn’t just nice to have—it's the whole mandate.

Ahonen arrives with 25 years of execution-heavy experience in the energy sector and a recent stint as COO of Accelerated Innovations, which VertexOne acquired in 2024. His task now: streamline internal processes, speed up deployments, and create a delivery organization that scales cleanly as the company grows.

In an industry where system replacements often resemble open-heart surgery for utilities, his focus on reliability and quality isn’t just operational cleanup—it’s a competitive differentiator.

A Chief Client Officer to Own the Entire Journey

While Ahonen sharpens the back end, Tina Santizo takes command of the front. Previously COO, she steps into VertexOne’s newly minted role of Chief Client Officer (CCO). The title signals something clear: VertexOne wants a single leader accountable for the full customer lifecycle, from onboarding to renewals.

It’s a position many tech companies have added in the last few years, especially as cloud vendors compete on lifetime value rather than one-time licensing. For VertexOne, the move formalizes what Santizo has already been known for internally—championing client advocacy and ensuring measurable ROI.

As utilities increasingly evaluate vendors based on delivered value, not just feature checklists, a unified customer-success strategy becomes a powerful retention engine.

Why This Matters for the Utility Tech Market

Across the industry, software vendors are consolidating and optimizing leadership to contend with evolving expectations from utilities. Customers want platforms that adapt quickly, integrate cleanly, and provide clarity on outcomes. VertexOne’s leadership realignment mirrors moves from competitors who are embedding customer success more deeply into product and operations strategy.

This shift also comes at a time when VertexOne is expanding its feature suite, including the recently launched VXconnect—a platform the company has pitched as a “game-changer” for personalized, omnichannel utility customer engagement. Strong operations plus a tightly organized client-experience team could become the backbone that accelerates adoption of such offerings.

The Bigger Theme: Experience is Becoming the Product

Utility software is no longer just about billing engines, outage modules, or portals. Increasingly, CX is the product. Whether a utility chooses Vendor A or Vendor B often comes down to deployment reliability, ongoing guidance, and the confidence that value won’t drop off after go-live.

By elevating ops and client success—two areas where software companies often struggle—VertexOne is signaling that long-term service quality is as central to its strategy as the products themselves.

Looking Ahead

These executive moves won’t instantly transform the company, but they create structural clarity at a time when utilities are demanding more accountability from vendors. With Ahonen refining the delivery engine and Santizo owning the customer journey end-to-end, VertexOne appears to be positioning itself for a market where CX maturity directly influences vendor selection.

The utility tech sector is tightening, expectations are rising, and VertexOne’s reorganization shows it plans to keep pace—not by adding louder marketing claims, but by reinforcing the operational backbone behind them.

WEKA Shatters the GPU Memory Wall With Augmented Memory Grid for AI at Scale

artificial intelligence 19 Nov 2025

At SC25, WEKA—best known for bringing high-performance data architectures to AI infrastructure—announced something that feels less like an upgrade and more like a pressure-relief valve for the entire AI industry. The company has taken its Augmented Memory Grid technology from concept to full commercial availability on NeuralMesh. And the timing could not be more relevant.

AI builders everywhere are running into the same wall: GPU memory. It’s fast, it’s precious, and it’s nowhere near large enough for the sprawling long-context models and agentic AI workflows that now dominate the market. The industry has thrown compute, distributed clusters, and clever caching at the problem—yet the wall remains.

WEKA’s answer: eliminate the wall entirely.

Validated on Oracle Cloud Infrastructure (OCI) and other major AI clouds, Augmented Memory Grid expands the available GPU memory footprint by 1000x, turning gigabytes into petabytes, while cutting time-to-first-token by up to 20x. Long-context inference, reasoning agents, research copilots, and multi-turn systems suddenly behave like they’ve been freed from a decade-old hardware ceiling.

It’s not an incremental improvement—it’s a structural rewrite of how AI memory can work.

The AI Memory Wall: Why GPU HBM Can’t Keep Up

The bottleneck isn’t theoretical. High-bandwidth memory (HBM) on GPUs is blisteringly fast but extremely small. System DRAM offers more space but only a fraction of the bandwidth. Once both tiers fill, inference workloads begin evicting their key-value (KV) cache, forcing GPUs to recompute previously processed tokens.

That recomputation is the silent killer: it burns GPU cycles, slows inference speeds, drives up power consumption, and breaks the economics of long-context AI.

As large language models move toward 100K-token and 1M-token context windows and agentic, continuously running interactions, the HBM-DRAM hierarchy collapses under its own constraints. And so far, no amount of clever software trickery has truly solved it.
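The recomputation penalty is easy to see with a back-of-the-envelope sketch. The numbers below are illustrative assumptions (only the 128K-token context comes from the OCI test described later); the function is a simplification, not WEKA's implementation:

```python
# Hypothetical sketch: token computations needed to answer a turn when the
# KV cache survives between requests versus when it is evicted and the
# entire context must be prefilled (recomputed) from scratch.

def tokens_computed(context_len, new_tokens, cache_hit):
    # With a warm KV cache, only the newly generated tokens are processed.
    # On a cache miss, the whole context is prefilled again first.
    prefill = 0 if cache_hit else context_len
    return prefill + new_tokens

context = 128_000   # 128K-token context, as in the OCI validation
reply = 500         # assumed tokens generated per turn

warm = tokens_computed(context, reply, cache_hit=True)
cold = tokens_computed(context, reply, cache_hit=False)

print(warm)          # 500
print(cold)          # 128500
print(cold / warm)   # 257.0 — the cache miss costs ~257x more GPU work
```

Every one of those wasted prefill tokens burns GPU cycles and power, which is why keeping the KV cache alive somewhere, anywhere, matters so much.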

WEKA’s approach: change the architecture.

Augmented Memory Grid: A New Memory Layer for AI

Instead of forcing GPUs to live inside the rigid boundaries of HBM, Augmented Memory Grid creates a high-speed bridge between GPU memory and flash-based storage. It continuously streams KV cache to and from WEKA’s “token warehouse,” a storage layer built for memory-speed access.

The important detail:
It behaves like memory, not storage.

Using RDMA and NVIDIA Magnum IO GPUDirect Storage, WEKA maintains near-HBM performance while letting models access petabytes of extended memory.

The result is that LLMs and reasoning agents can keep enormous context windows alive—no recomputation, no token wastage, and no cost explosions.
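The tiering idea can be illustrated with a toy cache manager: instead of discarding KV entries when the fast tier fills, it spills them to a large secondary tier and promotes them back on access. This is a minimal sketch under assumed names and sizes, not WEKA's API or data path:

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: a small 'HBM' tier spills least-recently-used
    entries to a large 'extended' tier instead of dropping them.
    Illustrative only; names and capacities are assumptions."""

    def __init__(self, hbm_slots):
        self.hbm = OrderedDict()   # fast tier, limited capacity
        self.extended = {}         # flash-backed tier, effectively unbounded
        self.hbm_slots = hbm_slots

    def put(self, seq_id, kv_blocks):
        self.hbm[seq_id] = kv_blocks
        self.hbm.move_to_end(seq_id)
        while len(self.hbm) > self.hbm_slots:
            victim, blocks = self.hbm.popitem(last=False)
            self.extended[victim] = blocks   # spill, don't discard

    def get(self, seq_id):
        if seq_id in self.hbm:
            self.hbm.move_to_end(seq_id)
            return self.hbm[seq_id], "hbm hit"
        if seq_id in self.extended:
            # promote back to the fast tier (stands in for RDMA streaming)
            self.put(seq_id, self.extended.pop(seq_id))
            return self.hbm[seq_id], "extended hit (no recompute)"
        return None, "miss: full prefill required"

cache = TieredKVCache(hbm_slots=2)
cache.put("session-a", ["kv0"])
cache.put("session-b", ["kv1"])
cache.put("session-c", ["kv2"])    # evicts session-a to the extended tier
print(cache.get("session-a")[1])   # extended hit (no recompute)
print(cache.get("session-x")[1])   # miss: full prefill required
```

The key property is the middle branch: an "extended hit" costs a data transfer rather than a full prefill, which is exactly the trade Augmented Memory Grid makes at petabyte scale over RDMA.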

“We’re bringing a proven solution validated with OCI and other leading platforms,” said WEKA CEO and co-founder Liran Zvibel. “Scaling agentic AI isn’t just compute—it’s about smashing the memory wall with smarter data paths. Augmented Memory Grid lets customers run more tokens per GPU, support more users, and enable entirely new service models.”

This isn’t “HBM someday.” It’s HBM-scale capacity today.

OCI Validation: The Numbers That Matter

The technology didn’t just run in a lab. OCI testing confirmed the kind of performance that turns heads:

  • 1000x KV cache expansion with near-memory speeds

  • 20x faster time-to-first-token when processing 128K tokens

  • 7.5 million read IOPS and 1 million write IOPS across an eight-node cluster

These aren’t modest deltas—they fundamentally change how inference clusters scale.

Nathan Thomas, VP of Multicloud at OCI, put it bluntly:
“The 20x improvement in time-to-first-token isn’t just performance—it changes the cost structure of running AI at scale.”

Cloud GPU economics have become one of the industry’s greatest pain points. Reducing idle cycles, avoiding prefill recomputations, and achieving consistent cache hits directly translate into higher tenant density and lower dollar-per-token costs.

For model providers deploying long-context systems, this is the difference between a business model that breaks even and one that thrives.

Why Long-Context AI Needed This Yesterday

As LLMs evolve from text generators into autonomous problem-solvers, the context window becomes the brain’s working memory. Coding copilots, research assistants, enterprise knowledge engines, and agentic workflows depend on holding vast amounts of information active simultaneously.

Until now, supporting those windows meant trading off between:

  • astronomical compute bills

  • degraded performance

  • artificially short interactions

  • forced summarization that loses fidelity

With Augmented Memory Grid, the trade-offs shrink dramatically. AI agents can maintain state, continuity, and long-running memory without burning GPU cycles on re-prefill phases.

Put differently:
LLMs get to think bigger, remember longer, and respond faster—without crushing infrastructure budgets.

A Broader Shift: AI Architecture Moves Beyond Compute

For the last five years, AI scaling strategies have focused overwhelmingly on compute—bigger GPUs, faster interconnects, more parallelization. Memory, by contrast, has been the quiet constraint no one could fix.

WEKA’s move highlights a turning point:
AI’s next leap forward won’t come from more FLOPs. It will come from smarter memory architectures.

NVIDIA’s ecosystem support—Magnum IO GPUDirect Storage, NVIDIA NIXL, and NVIDIA Dynamo—signals that silicon vendors recognize the same shift. Open-sourcing a plugin for the NVIDIA Inference Transfer Library shows WEKA wants widespread adoption, not a walled garden.

OCI’s bare-metal infrastructure with RDMA networking makes it one of the first clouds capable of showcasing the technology without bottlenecks.

This ecosystem convergence—cloud, GPU, and storage—suggests that memory-scaling tech will become a foundational layer of next-gen inference stacks.

Commercial Rollout and What Comes Next

Augmented Memory Grid is now available as a feature for NeuralMesh deployments and listed on the Oracle Cloud Marketplace. Support for additional clouds is coming, though the company hasn’t yet named which.

The implications for AI providers are straightforward:

  • Long-context models become affordable to run

  • Agentic AI becomes easier to scale and commercialize

  • GPU clusters become more efficient

  • New monetization models become viable (persistent assistants, multi-user agents, continuous reasoning systems)

WEKA has effectively repositioned memory—from hardware limitation to software-defined superpower.

If compute defined AI’s last decade, memory may define its next one.
