artificial intelligence 20 Nov 2025
In the era of agentic AI—where autonomous systems rely on constant, high-quality, contextual data—data observability isn’t a nice-to-have anymore. It’s survival gear. Telmai, the AI-powered data quality and observability platform, is stepping into that gap with a new partnership aimed squarely at Microsoft Fabric users.
The company announced that its data reliability engine now integrates natively with Microsoft OneLake, bringing real-time monitoring, validation, and trust signals directly into the heart of the Fabric ecosystem. The result: faster insight, fewer broken pipelines, and analytics models that don’t need a rescue mission every time the data shifts.
Organizations building agentic AI and real-time analytics systems face a fundamental bottleneck: traditional data validation isn’t built for low latency, distributed architectures, or constant context shifts. Fabric users—many of whom are already grappling with data spread across domains—need observability that keeps pace with the speed of automation.
Telmai is positioning its platform as an answer to that shift. Rather than validating data downstream—after it hits dashboards or AI workflows—it monitors and checks data as soon as it lands in OneLake, across structured, semi-structured, and even unstructured formats.
CEO and co-founder Mona Rakibe puts it bluntly: “Ensuring data reliability is no longer optional—it’s table stakes.” For agentic AI, where decisions happen autonomously and instantly, bad data isn’t just costly; it’s dangerous.
Telmai’s integration with OneLake brings a few capabilities that stand out:
Data is checked the moment it arrives in OneLake—catching anomalies before they propagate into dashboards, models, or downstream apps. This ensures Fabric users can maintain low-latency access to validated, contextualized data, eliminating blind spots that slow decision-making.
Telmai’s engine allows teams to configure their own validation rules, anomaly detection thresholds, and alerting policies. Rather than generic “something broke somewhere” notifications, users get targeted, actionable insights tied to business context.
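To make that concrete, here is a minimal, hypothetical sketch of what a declarative rule with an anomaly threshold and an alert policy could look like when applied to a freshly landed batch. The rule fields and helper below are illustrative only and are not Telmai's actual SDK or rule syntax.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

# Hypothetical rule definition: Telmai's real rule syntax and APIs differ;
# this only illustrates the shape of "threshold + alert policy" monitoring.
@dataclass
class ValidationRule:
    column: str
    max_null_ratio: float          # e.g. alert if more than 2% of values are missing
    zscore_threshold: float        # flag numeric outliers beyond N standard deviations
    alert_channel: str             # where targeted alerts are routed

def evaluate(rule: ValidationRule, rows: list[dict]) -> list[str]:
    """Return human-readable findings for one freshly landed batch."""
    values = [r.get(rule.column) for r in rows]
    findings = []

    null_ratio = values.count(None) / max(len(values), 1)
    if null_ratio > rule.max_null_ratio:
        findings.append(
            f"{rule.column}: {null_ratio:.1%} nulls exceeds "
            f"{rule.max_null_ratio:.1%} (notify {rule.alert_channel})"
        )

    numeric = [v for v in values if isinstance(v, (int, float))]
    if len(numeric) > 1 and pstdev(numeric) > 0:
        mu, sigma = mean(numeric), pstdev(numeric)
        outliers = [v for v in numeric if abs(v - mu) / sigma > rule.zscore_threshold]
        if outliers:
            findings.append(
                f"{rule.column}: {len(outliers)} values beyond "
                f"{rule.zscore_threshold} std devs (notify {rule.alert_channel})"
            )
    return findings

# Example: check an 'order_total' column in a just-landed batch.
rule = ValidationRule("order_total", max_null_ratio=0.02,
                      zscore_threshold=3.0, alert_channel="#orders-data-quality")
batch = [{"order_total": 42.0}, {"order_total": 39.5}, {"order_total": None}]
print(evaluate(rule, batch))
```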
Here’s where Telmai differs from traditional observability tools: its Data Reliability Agents allow both technical and non-technical users to query issues, troubleshoot anomalies, and deploy monitoring policies using plain-language commands.
This decentralized model is critical for Fabric’s domain-first architecture, reducing the burden on engineering teams and making data trust a shared—and accessible—capability.
Instead of dumping a list of anomalies on data teams, Telmai provides explanations and supporting context about why issues occurred. Faster troubleshooting means shorter time-to-resolution and less operational drag on analytics pipelines.
Microsoft Fabric has quickly become a central hub for enterprises consolidating analytics, governance, and AI workloads. But this consolidation raises the bar for data quality: errors travel farther, faster, and into more systems.
Telmai’s integration signals Microsoft’s growing emphasis on vetted, explainable, production-ready data. Dipti Borkar, VP & GM of Microsoft OneLake & ISV Ecosystem, noted that accuracy and trust are “critical to the success of any analytics and AI project,” emphasizing that Telmai’s capabilities help users “quickly and easily build AI-ready, trusted data products.”
In a market filled with observability contenders—Monte Carlo, Bigeye, Soda, Databand—Telmai is carving out a space that leans heavily into AI explainability and domain-level trust, aligning closely with Fabric’s own architectural philosophy.
Agentic AI won’t tolerate laggy, inconsistent, or context-poor data. Telmai’s partnership with Microsoft is a strategic play to make Fabric not just a unified analytics platform, but a trusted one—with real-time validation baked in at the source.
For enterprises scaling AI-driven analytics, this integration may prove to be not just a convenience but a competitive necessity.
customer experience management 19 Nov 2025
VertexOne, long known for its customer-experience-first approach to utility and energy software, is reorganizing its top bench. The company announced a pair of strategic leadership changes designed to tune up delivery performance and unify the customer journey—a move that reflects how fiercely competitive the utility tech landscape has become.
Energy providers today face more pressure than ever: rising customer expectations, digital modernization mandates, and the operational complexity of distributed energy resources. Vendors in the space aren’t just selling software—they’re selling outcomes. And VertexOne is clearly betting that the right leadership alignment is the lever that drives those outcomes faster.
Keith Ahonen steps into the role of Executive Vice President, Operations, placing him squarely in charge of deployments and delivery across VertexOne’s client portfolio. For utilities, where timelines are tight and integrations are deep, consistency isn’t just nice to have—it's the whole mandate.
Ahonen arrives with 25 years of execution-heavy experience in the energy sector and a recent stint as COO of Accelerated Innovations, which VertexOne acquired in 2024. His task now: streamline internal processes, speed up deployments, and create a delivery organization that scales cleanly as the company grows.
In an industry where system replacements often resemble open-heart surgery for utilities, his focus on reliability and quality isn’t just operational cleanup—it’s a competitive differentiator.
While Ahonen sharpens the back end, Tina Santizo takes command of the front. Previously COO, she steps into VertexOne’s newly minted role of Chief Client Officer (CCO). The title signals something clear: VertexOne wants a single leader accountable for the full customer lifecycle, from onboarding to renewals.
It’s a position many tech companies have added in the last few years, especially as cloud vendors compete on lifetime value rather than one-time licensing. For VertexOne, the move formalizes what Santizo has already been known for internally—championing client advocacy and ensuring measurable ROI.
As utilities increasingly evaluate vendors based on delivered value, not just feature checklists, a unified customer-success strategy becomes a powerful retention engine.
Across the industry, software vendors are consolidating and optimizing leadership to contend with evolving expectations from utilities. Customers want platforms that adapt quickly, integrate cleanly, and provide clarity on outcomes. VertexOne’s leadership realignment mirrors moves from competitors who are embedding customer success more deeply into product and operations strategy.
This shift also comes at a time when VertexOne is expanding its feature suite, including the recently launched VXconnect—a platform the company has pitched as a “game-changer” for personalized, omnichannel utility customer engagement. Strong operations plus a tightly organized client-experience team could become the backbone that accelerates adoption of such offerings.
Utility software is no longer just about billing engines, outage modules, or portals. Increasingly, CX is the product. Whether a utility chooses Vendor A or Vendor B often comes down to deployment reliability, ongoing guidance, and the confidence that value won’t drop off after go-live.
By elevating ops and client success—two areas where software companies often struggle—VertexOne is signaling that long-term service quality is as central to its strategy as the products themselves.
These executive moves won’t instantly transform the company, but they create structural clarity at a time when utilities are demanding more accountability from vendors. With Ahonen refining the delivery engine and Santizo owning the customer journey end-to-end, VertexOne appears to be positioning itself for a market where CX maturity directly influences vendor selection.
The utility tech sector is tightening, expectations are rising, and VertexOne’s reorganization shows it plans to keep pace—not by adding louder marketing claims, but by reinforcing the operational backbone behind them.
artificial intelligence 19 Nov 2025
At SC25, WEKA—best known for bringing high-performance data architectures to AI infrastructure—announced something that feels less like an upgrade and more like a pressure-relief valve for the entire AI industry. The company has taken its Augmented Memory Grid technology from concept to full commercial availability on NeuralMesh. And the timing could not be more relevant.
AI builders everywhere are running into the same wall: GPU memory. It’s fast, it’s precious, and it’s nowhere near large enough for the sprawling long-context models and agentic AI workflows that now dominate the market. The industry has thrown compute, distributed clusters, and clever caching at the problem—yet the wall remains.
WEKA’s answer: eliminate the wall entirely.
Validated on Oracle Cloud Infrastructure (OCI) and other major AI clouds, Augmented Memory Grid expands the available GPU memory footprint by 1000x, turning gigabytes into petabytes, while cutting time-to-first-token by up to 20x. Long-context inference, reasoning agents, research copilots, and multi-turn systems suddenly behave like they’ve been freed from a decade-old hardware ceiling.
It’s not an incremental improvement—it’s a structural rewrite of how AI memory can work.
The bottleneck isn’t theoretical. High-bandwidth memory (HBM) on GPUs is blisteringly fast but extremely small. System DRAM offers more space but only a fraction of the bandwidth. Once both tiers fill, inference workloads begin dumping their key-value cache (KV cache), forcing GPUs to recompute previously processed tokens.
That recomputation is the silent killer: it burns GPU cycles, slows inference speeds, drives up power consumption, and breaks the economics of long-context AI.
As large language models move toward 100K-token, 1M-token, and agentic, continuously-running interactions, the HBM-DRAM hierarchy collapses under its own constraints. And so far, no amount of clever software trickery has truly solved it.
WEKA’s approach: change the architecture.
Instead of forcing GPUs to live inside the rigid boundaries of HBM, Augmented Memory Grid creates a high-speed bridge between GPU memory and flash-based storage. It continuously streams KV cache to and from WEKA’s “token warehouse,” a storage layer built for memory-speed access.
The important detail:
It behaves like memory, not storage.
Using RDMA and NVIDIA Magnum IO GPUDirect Storage, WEKA maintains near-HBM performance while letting models access petabytes of extended memory.
The result is that LLMs and reasoning agents can keep enormous context windows alive—no recomputation, no token wastage, and no cost explosions.
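Conceptually, this is a tiering problem. The sketch below is purely illustrative (plain Python dicts, not WEKA's RDMA-backed implementation), but it shows why spilling evicted KV-cache entries into a larger, cheaper tier avoids the expensive recompute path: a lookup that misses the scarce "HBM" tier falls back to the "warehouse" tier instead of re-running prefill.

```python
from collections import OrderedDict

# Illustrative two-tier KV cache: a tiny "HBM" tier backed by a large
# "token warehouse" tier. Real systems stream tensors over RDMA/GPUDirect;
# here both tiers are plain dicts so the control flow stays visible.
class TieredKVCache:
    def __init__(self, hbm_capacity: int):
        self.hbm = OrderedDict()       # fast, scarce tier (LRU-evicted)
        self.warehouse = {}            # large, flash-backed tier
        self.hbm_capacity = hbm_capacity
        self.recomputes = 0

    def put(self, prefix_id: str, kv_blob: bytes) -> None:
        self.hbm[prefix_id] = kv_blob
        self.hbm.move_to_end(prefix_id)
        while len(self.hbm) > self.hbm_capacity:
            evicted_id, evicted_blob = self.hbm.popitem(last=False)
            self.warehouse[evicted_id] = evicted_blob   # spill, don't discard

    def get(self, prefix_id: str, recompute) -> bytes:
        if prefix_id in self.hbm:                       # hot hit
            self.hbm.move_to_end(prefix_id)
            return self.hbm[prefix_id]
        if prefix_id in self.warehouse:                 # warm hit: stream back in
            blob = self.warehouse.pop(prefix_id)
            self.put(prefix_id, blob)
            return blob
        self.recomputes += 1                            # cold miss: pay for prefill
        blob = recompute(prefix_id)
        self.put(prefix_id, blob)
        return blob

# Without the warehouse tier, every eviction would show up as a recompute.
cache = TieredKVCache(hbm_capacity=2)
for turn in ["session-a", "session-b", "session-c", "session-a"]:
    cache.get(turn, recompute=lambda pid: f"kv({pid})".encode())
print("prefill recomputes:", cache.recomputes)   # 3 cold misses, no re-prefill of session-a
```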
“We’re bringing a proven solution validated with OCI and other leading platforms,” said WEKA CEO and co-founder Liran Zvibel. “Scaling agentic AI isn’t just compute—it’s about smashing the memory wall with smarter data paths. Augmented Memory Grid lets customers run more tokens per GPU, support more users, and enable entirely new service models.”
This isn’t “HBM someday.” It’s HBM-scale capacity today.
The technology didn’t just run in a lab. OCI testing confirmed the kind of performance that turns heads:
1000x KV cache expansion with near-memory speeds
20x faster time-to-first-token when processing 128K tokens
7.5M read IOPS and 1M write IOPS across an eight-node cluster
These aren’t modest deltas—they fundamentally change how inference clusters scale.
Nathan Thomas, VP of Multicloud at OCI, put it bluntly:
“The 20x improvement in time-to-first-token isn’t just performance—it changes the cost structure of running AI at scale.”
Cloud GPU economics have become one of the industry’s greatest pain points. Reducing idle cycles, avoiding prefill recomputations, and achieving consistent cache hits directly translate into higher tenant density and lower dollar-per-token costs.
For model providers deploying long-context systems, this is the difference between a business model that breaks even and one that thrives.
As LLMs evolve from text generators into autonomous problem-solvers, the context window becomes the brain’s working memory. Coding copilots, research assistants, enterprise knowledge engines, and agentic workflows depend on holding vast amounts of information active simultaneously.
Until now, supporting those windows meant trading off between:
astronomical compute bills
degraded performance
artificially short interactions
forced summarization that loses fidelity
With Augmented Memory Grid, the trade-offs shrink dramatically. AI agents can maintain state, continuity, and long-running memory without burning GPU cycles on re-prefill phases.
Put differently:
LLMs get to think bigger, remember longer, and respond faster—without crushing infrastructure budgets.
For the last five years, AI scaling strategies have focused overwhelmingly on compute—bigger GPUs, faster interconnects, more parallelization. Memory, by contrast, has been the quiet constraint no one could fix.
WEKA’s move highlights a turning point:
AI’s next leap forward won’t come from more FLOPs. It will come from smarter memory architectures.
NVIDIA’s ecosystem support—Magnum IO GPUDirect Storage, NVIDIA NIXL, and NVIDIA Dynamo—signals that silicon vendors recognize the same shift. Open-sourcing a plugin for the NVIDIA Inference Transfer Library shows WEKA wants widespread adoption, not a walled garden.
OCI’s bare-metal infrastructure with RDMA networking makes it one of the first clouds capable of showcasing the technology without bottlenecks.
This ecosystem convergence—cloud, GPU, and storage—suggests that memory-scaling tech will become a foundational layer of next-gen inference stacks.
Augmented Memory Grid is now available as a feature for NeuralMesh deployments and listed on the Oracle Cloud Marketplace. Support for additional clouds is coming, though the company hasn’t yet named which.
The implications for AI providers are straightforward:
Long-context models become affordable to run
Agentic AI becomes easier to scale and commercialize
GPU clusters become more efficient
New monetization models become viable (persistent assistants, multi-user agents, continuous reasoning systems)
WEKA has effectively repositioned memory—from hardware limitation to software-defined superpower.
If compute defined AI’s last decade, memory may define its next one.
cloud technology 19 Nov 2025
Enterprise AI is booming, messy, and—more often than many leaders admit—dangerously inaccurate. OpenText thinks it knows why: organizations unleash AI on oceans of unstructured, unlabeled, poorly governed data, then act surprised when the models hallucinate, misinterpret, or leak sensitive information.
This week at OpenText World 2025, the company revealed its counterstrategy: the OpenText AI Data Platform (AIDP), an open, governed data layer engineered to give enterprise AI the one thing it consistently struggles with—context.
Where other vendors chase bigger models or flashier agents, OpenText is doubling down on its heritage: decades of document management, metadata discipline, and enterprise-grade information governance. In an era where half of AI-using organizations report at least one serious accuracy or risk failure (McKinsey’s numbers, not OpenText’s), the pitch hits close to home.
OpenText’s message is blunt: if the data is wrong, the AI will be wrong—no matter how impressive the model is.
OpenText has spent more than 30 years holding, securing, and classifying some of the world’s largest enterprise datasets. That experience underpins its thesis: AI agents only become useful when they understand where they are, what they’re allowed to see, and why a task matters.
Documents. Tickets. Commerce records. Security logs. Machine outputs. Human inputs.
All tagged, secured, governed, versioned, and compliant.
OpenText says enterprises must treat AI less like a chatbot experiment and more like a discipline rooted in data lineage, identity access control, retention policies, and contextual metadata. Otherwise, even the smartest models become highly efficient generators of confusion.
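As a rough illustration of what that discipline means at the record level, the hypothetical sketch below (not OpenText's schema) attaches lineage, classification, and retention fields to a piece of content and gates agent access on them:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical governed record: field names are illustrative, not OpenText's data model.
@dataclass
class GovernedRecord:
    content: str
    source_system: str                 # lineage: where the data came from
    classification: str                # e.g. "public", "internal", "restricted"
    retention_until: date              # retention policy boundary
    allowed_roles: set[str] = field(default_factory=set)

def agent_can_use(record: GovernedRecord, role: str, today: date) -> bool:
    """An agent only sees data it is entitled to, and only while it is retained."""
    if today > record.retention_until:
        return False                   # expired under the retention policy
    if record.classification == "restricted" and role not in record.allowed_roles:
        return False                   # identity and access control gate
    return True

invoice = GovernedRecord(
    content="PO-1043 approved for $12,400",
    source_system="SAP",
    classification="restricted",
    retention_until=date(2032, 1, 1),
    allowed_roles={"finance-agent"},
)
print(agent_can_use(invoice, role="support-agent", today=date(2025, 11, 19)))  # False
print(agent_can_use(invoice, role="finance-agent", today=date(2025, 11, 19)))  # True
```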
This foundation feeds directly into OpenText Aviator, the company’s enterprise AI engine, which can now orchestrate workflows through domain-aware agents.
OpenText insists it’s not building another AI walled garden. Aviator’s architecture leans heavily into openness:
Multi-cloud
Works across on-prem, cloud, hybrid, or multi-cloud deployments.
Multi-model
Compatible with any LLM or SLM—including “bring your own model.”
Multi-application
Built for deep integration with ERP, CRM, ITSM, security suites, and more.
In reality, this means OpenText wants its AI agents to plug into the daily arteries of enterprise work—from SAP order flows to Salesforce deals to Oracle records to Microsoft infrastructure.
“Everyone is chasing the mega-agent. But enterprises need armies of domain-specific agents,” said Savinay Berry, CPO & CTO at OpenText. “Accuracy through trusted data isn’t an IT feature—it’s a C-level mandate.”
A major announcement embedded in the platform launch is OpenText’s expanded partnership with Databricks. The companies will co-innovate on AIDP with deeper technical integrations, Delta Sharing, and a unified governance path.
OpenText already runs its Threat Detection and Response offering on the Databricks Data Intelligence Platform. Now the partnership widens into joint engineering.
The intent is clear:
Combine Databricks’ analytics engine with OpenText’s governed data fabric to deliver trustworthy, enterprise-ready AI.
If successful, this pairing could become a serious contender against Microsoft’s Fabric, Google’s Vertex-BigQuery pipeline, and Snowflake’s AI-ready enterprise stack.
At OpenText World, the company revealed a surprisingly detailed roadmap for the next six releases:
A unified data and AI framework with governance orchestration. Think of it as a control tower for every agent decision.
A no-code environment for building and governing enterprise AI agents—without requiring data scientists to hand-craft pipelines.
A metadata-first ingestion engine that transforms structured and unstructured data into AI-ready context.
A suite spanning privacy, tokenization, encryption, PII controls, redaction, AI readiness checks, and threat detection.
A professional services track to help enterprises move from AI experiments to production-grade agent deployments.
This aggressive roadmap signals OpenText’s belief that the battle for enterprise AI will be fought not in the model layer, but in the data and governance layer.
OpenText emphasized that Aviator is already live for real-world use cases like:
fraud detection
claims management
predictive maintenance
customer service automation
IT operations workflows
The company also announced that the Aviator entry-tier package will be included at no extra cost with upgrades to OT 26.1 for Content Management, Service Management, and Communications Management.
Better yet for risk-averse industries, Aviator will become fully available on-premises starting with OT 26.1 across multiple modules, including DevOps and Application Security.
For global enterprises navigating sovereignty laws, this on-prem push is a quiet but important differentiator.
OpenText is staking out a clear and contrarian position:
AI models do not matter unless the data behind them is governed, contextual, and trustworthy.
This philosophy diverges sharply from model-first players—hugging the foundational layers of enterprise information instead of competing in the model arms race. With model commoditization accelerating, that may prove to be a winning angle.
AIDP also signals a broader industry shift toward:
governed AI pipelines
enterprise-grade agent orchestration
model-agnostic architectures
contextual knowledge layers
compliance-integrated design
In short, OpenText is rewriting AI around the data source, not the model endpoint.
If other vendors follow, the next generation of enterprise AI may finally behave less like an unpredictable intern and more like a dependable colleague.
security 19 Nov 2025
When it comes to communication, federal agencies operate under an impossible paradox: they must modernize fast—without making a single mistake. Messaging apps like Teams, SMS, and WhatsApp have become the backbone of everyday collaboration, yet government environments remain bound by some of the strictest security and compliance rules in the industry.
That's the gap LeapXpert and Iron Bow Technologies are now aiming to close.
The two companies have announced a partnership to bring secure, compliant, audit-ready messaging solutions across U.S. government agencies—a move that feels less like a nice-to-have and more like long-overdue modernization.
Most agencies have already embraced modern collaboration platforms, but using them securely is a different challenge altogether. Communications need to be encrypted, discoverable, logged, and retained according to frameworks like NIST 800-53 and the increasingly important CMMC.
For federal IT leaders, the mandate is simple:
Modernize communication, but don’t break any laws while doing it.
LeapXpert’s platform is purpose-built for environments where messaging must be both convenient and controlled. It provides:
Secure communication across Teams, WhatsApp, SMS, and other channels
Full audit trails and message capture
Encryption and policy-driven retention
Compliance alignment for NIST, CMMC, and federal cybersecurity standards
In other words, the real-time flexibility employees want, with the accountability government regulators demand.
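To picture what captured, policy-retained messaging looks like at the data level, here is a minimal hypothetical sketch (not LeapXpert's actual format or pipeline) of the audit fields a compliance reviewer would expect on every message:

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

# Hypothetical capture routine: channel names and retention windows are
# illustrative; LeapXpert's real capture pipeline and schema differ.
RETENTION_DAYS = {"Teams": 365 * 7, "WhatsApp": 365 * 7, "SMS": 365 * 3}

def capture_message(channel: str, sender: str, recipient: str, body: str) -> dict:
    """Build an audit-ready record: timestamps, retention date, tamper-evidence hash."""
    captured_at = datetime.now(timezone.utc)
    record = {
        "channel": channel,
        "sender": sender,
        "recipient": recipient,
        "body": body,                                  # encrypted at rest in practice
        "captured_at": captured_at.isoformat(),
        "retain_until": (captured_at + timedelta(days=RETENTION_DAYS[channel])).isoformat(),
    }
    # A content hash gives reviewers a tamper-evidence check during discovery.
    record["sha256"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

print(capture_message("WhatsApp", "case.officer@agency.gov", "+1-555-0100",
                      "Site inspection confirmed for Friday."))
```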
“Government agencies need the same communication flexibility as the private sector, but with far greater accountability,” said Avi Pardo, Co-founder and CBO at LeapXpert. His point lands: the government can’t simply adopt consumer-grade tools and hope for the best.
Iron Bow Technologies isn’t new to federal modernization. The company has long been embedded in federal IT procurement, cybersecurity implementation, and mission-critical digital transformation initiatives.
Which is why the pairing makes sense. Iron Bow knows the compliance terrain; LeapXpert knows secure communication. Together, they remove one of the last barriers to complete digital collaboration inside agencies.
“LeapXpert stood out because they address one of the most urgent and often overlooked challenges in federal IT: enabling secure, modern messaging without sacrificing control or compliance,” said Rachel Murphy, General Manager for Federal Civilian Sales at Iron Bow.
With cloud adoption surging and agencies accelerating their cybersecurity modernization plans, the partnership arrives at a critical moment.
Agencies have been under pressure—political, operational, and regulatory—to digitize faster. But messaging has remained a stubborn blind spot with significant security implications.
This collaboration signals a broader industry shift:
Modern communication tools are no longer optional in government—they’re becoming core infrastructure.
Expect ripple effects. Rival collaboration providers will need to demonstrate similarly airtight compliance. Legacy communication setups that rely on rigid, siloed systems will face scrutiny. And as more agencies shift to multi-channel messaging, platforms that can secure every interaction—across every device—will have the upper hand.
LeapXpert and Iron Bow are providing agencies with something they haven’t had until now: a safe on-ramp to modern communication. The combination of LeapXpert’s compliance-driven tech and Iron Bow’s federal deployment expertise gives agencies a clear path to embrace messaging without compromising accountability or cybersecurity.
It’s modernization with guardrails—and in the federal world, that’s exactly the point.
artificial intelligence 19 Nov 2025
In the financial world, data may be abundant, but usable data—the kind analysts can actually trust—is another matter entirely. That’s where Rimes has staked its reputation. And now, it’s plugging that data expertise directly into Databricks, one of the fastest-rising players in enterprise AI.
Rimes has partnered with Databricks to make its Managed Data Services available natively on the Databricks Data Intelligence Platform, using Delta Sharing, the open-source protocol designed for secure data exchange. For investment teams that have been juggling data pipelines, governance obstacles, and latency headaches, this move could be a meaningful shift.
Traditionally, investment firms have had to replicate or manually pipe their structured datasets into analytics platforms—a costly endeavor that introduces delay and governance risk. By delivering Rimes’ curated datasets via Delta Sharing, clients can now connect directly to governed data without replication, eliminating a major bottleneck.
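Delta Sharing is an open protocol with a public Python connector, so the consumption pattern on the client side is short. The sketch below uses the real delta-sharing package; the profile file and the share, schema, and table names are placeholders rather than Rimes' actual catalog:

```python
import delta_sharing

# A Delta Sharing profile file is issued by the data provider and contains
# the endpoint plus a bearer token; the path here is a placeholder.
profile = "config.share"

# List the tables the provider has shared with this recipient.
client = delta_sharing.SharingClient(profile)
for table in client.list_all_tables():
    print(table.share, table.schema, table.name)

# Load one shared table straight into pandas, with no replication or ETL copy.
# The "#share.schema.table" coordinates below are illustrative placeholders.
table_url = f"{profile}#investment_data.reference.benchmark_constituents"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```

For recipients already working inside Databricks, shares of this kind typically surface through the workspace catalog as well, so notebooks and dashboards can query them without a separate connector.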
The benefits feel tailor-made for today’s AI-driven investment workflows:
Faster time-to-insight with low-latency access
A single governed source of truth
AI-ready data powering modeling, automation, and workflow optimization
Direct integration into Databricks notebooks, dashboards, and AI agents (including Agent Bricks)
In other words: the plumbing just got a lot smarter.
“Rimes has built a reputation for delivering the highest quality managed data and data governance capabilities,” said Vijay Mayadas, CEO of Rimes. “Through our partnership with Databricks, we’re enabling clients to accelerate their time to insight and unlock the full potential of their investment data.”
Databricks, fresh off its own acceleration in the enterprise AI race, sees the partnership as essential infrastructure for financial institutions hoping to build scalable AI applications.
“Enterprises are looking for ways to scale high-quality, trusted AI apps and agents on their own data,” said Dael Williamson, Field CTO, EMEA at Databricks. “By making Rimes’ Managed Data Services available via Delta Sharing, financial institutions can now access clean, curated, and timely investment data directly within their Databricks workspaces.”
Databricks’ pitch to Wall Street is clear: AI isn’t magic—it’s data quality, governance, and explainability. With Rimes feeding its platform, the data layer just got significantly more robust.
The partnership also marks an early milestone in Rimes’ post-Five Arrows investment expansion. As part of its long-term strategy, Rimes plans to add more datasets, broaden availability, and introduce AI-driven use cases built on top of its unified data layer.
The vision:
A seamless, interoperable data foundation that can power analytics, automation, compliance, and next-generation intelligent workflows across the financial ecosystem.
If the industry trend holds, investment firms are increasingly turning away from fragmented data estates and toward unified, governed platforms that can feed AI systems responsibly. With Databricks gaining momentum as the go-to open AI stack, Rimes’ deep domain expertise lands at exactly the right moment.
Rimes and Databricks aren’t just aligning technologies—they’re aligning philosophies: open, governed, trustworthy data as the backbone of financial innovation.
For financial institutions wrestling with AI adoption, messy data estates, and governance challenges, this partnership offers a cleaner, faster path forward. The combination of Rimes’ investment data pedigree and Databricks’ AI capabilities could reshape how firms build intelligence into their workflows.
customer experience management 19 Nov 2025
Mitel is doubling down on the future of customer experience, and this time it’s taking aim at one of the most persistent enterprise problems: fragmented, aging communications stacks. With the launch of Mitel CX 2.0, the company is rolling out what it calls an “AI-embedded, hybrid communications engine” designed to unify agents, supervisors, and back-office teams on a single workspace. Think of it as a modern contact center, stretched across the entire organization—minus the usual tangle of disconnected apps and clunky interfaces.
And if Mitel gets its way, the customer journey won’t just live inside the contact center anymore; it will live anywhere an employee interacts with a customer.
CX 2.0 expands on Mitel’s multi-cloud hybrid communications portfolio, blending private cloud control with modern AI workflows. It’s a response to a market that has clearly shifted: IDC data shows that two-thirds of enterprises now prefer hybrid communications for resiliency and flexibility, while Techaisle points to customer engagement as the leading driver behind communications investments.
The pitch? Enterprises shouldn’t have to choose between innovation and compliance, scalability and control. CX 2.0 tries to offer all of it at once.
“Mitel CX 2.0 gives enterprises the freedom to innovate without sacrificing control,” said Martin Bitzinger, SVP of Product Management. “We’re extending customer engagement beyond the walls of the contact center and giving every employee the AI tools to influence the customer journey.”
Mitel’s timing is convenient—and strategic. The company has been gaining traction in the CX market, earning recognition from Aragon Research as a Leader in the Intelligent Contact Center category, and scoring high marks from The Eastern Management Group in large enterprise evaluations. According to the firm, Mitel beats several competitors on reliability and management tools—two factors CIOs weigh heavily when modernizing CX.
“Mitel has consistently ranked among the top vendors,” noted John Malone, President and CEO at The Eastern Management Group. “Enterprises want flexibility and control, and Mitel delivers both.”
CX 2.0 builds directly on this momentum, adding AI depth, hybrid resiliency, and more enterprise-grade integration options.
CX 2.0’s biggest upgrade is its unified, AI-powered workspace. Instead of juggling separate tools for voice, messaging, digital channels, analytics, and coaching, employees can now manage everything in one place. Supervisors get real-time insights and performance tools, while agents can move fluidly across channels.
Behind the scenes, Mitel’s AI assistants work quietly but aggressively—summarizing interactions, suggesting responses, routing customers, and even taking autonomous actions.
The City of Baltimore is already seeing the benefits. “Our 458 agents can now work from anywhere, and our workflows have become dramatically simpler,” said Ron Gross, Deputy Director of Communications. “The GenAI automation built into Workflow Studio is a game-changer.”
Much of the real differentiation lies in Mitel’s deeper integration with Workflow Studio, its AI-ready orchestration platform. CX 2.0 ties directly into this layer, which lets enterprises build agentic workflows, automate actions, and connect communication data to business processes.
Key capabilities include:
Industry-Tailored AI Virtual Agents
Built via Workflow Studio, these agents can resolve routine inquiries, escalate complex cases, and tap both front-line and back-office teams.
Voice AI with Smart Handoff
When calls move from bots to humans, transcripts, context, and suggested responses travel with them, eliminating repetitive preamble and improving resolution speed (a conceptual sketch of such a handoff payload follows this list).
Agentic AI Workflows
These mini-agents automate actions—placing orders, generating tickets, sending alerts, processing approvals—reducing human workload and cutting delays.
Low-Code/No-Code Design Tools
Workflow Studio and the MCX Bot Builder let teams build GenAI-driven workflows without specialized development knowledge.
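The smart-handoff idea referenced above is easiest to see as a payload: when the virtual agent escalates, everything it has learned travels with the call. The shape below is purely hypothetical and is not Mitel's Workflow Studio format; it only illustrates the kind of context that removes the repeat-yourself step.

```python
from dataclasses import dataclass, field

# Hypothetical handoff payload: field names are illustrative only.
@dataclass
class HandoffContext:
    conversation_id: str
    customer_id: str
    intent: str                              # what the bot believes the caller wants
    transcript: list[str] = field(default_factory=list)
    collected_fields: dict = field(default_factory=dict)   # data already gathered
    suggested_responses: list[str] = field(default_factory=list)

def escalate_to_agent(ctx: HandoffContext) -> str:
    """Render a one-screen briefing so the human agent skips the repetitive preamble."""
    lines = [
        f"Caller {ctx.customer_id} | intent: {ctx.intent}",
        f"Already collected: {ctx.collected_fields}",
        "Last exchanges:",
        *ctx.transcript[-3:],
        "Suggested next steps:",
        *ctx.suggested_responses,
    ]
    return "\n".join(lines)

ctx = HandoffContext(
    conversation_id="c-7831",
    customer_id="ACCT-2291",
    intent="dispute duplicate charge",
    transcript=["Bot: I see two charges of $54.20 on Nov 12.",
                "Caller: Only one of those is mine."],
    collected_fields={"charge_date": "2025-11-12", "amount": 54.20},
    suggested_responses=["Offer provisional credit", "Open dispute case"],
)
print(escalate_to_agent(ctx))
```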
Ultimately, Mitel CX 2.0 isn’t just a new contact center release—it’s a shot at redefining enterprise engagement. The company’s approach is less about replacing agents and more about giving every team member access to AI-driven insights, automation, and communication tools.
In a market where competitors like Genesys, NICE, and Cisco are aggressively layering AI into their CX stacks, Mitel’s hybrid-first, workflow-oriented model stands out—especially for customers with complex compliance or on-prem requirements.
CX 2.0 positions Mitel not just as a contact center vendor, but as an enterprise-wide engagement orchestrator. And for businesses betting on hybrid operations for the long haul, that’s a compelling pitch.
artificial intelligence 19 Nov 2025
ScaleOut Software, known for its powerful enterprise caching and in-memory data grid solutions, has announced a major upgrade to its product line: the “Gen AI Release” of its ScaleOut Product Suite. At its core, this release injects generative AI into ScaleOut Active Caching™, allowing users—especially non-technical ones—to transform live, fast-moving data into real-time insights with natural-language prompts.
This isn’t just a UI facelift. ScaleOut is betting big on its distributed cache—not just as a place to store data, but as a live engine for operational intelligence. By embedding an LLM (OpenAI’s models, specifically) directly into the cache management layer, the platform now supports real-time analytics, charting, queries, and geospatial visualizations, all generated by users through plain English.
Traditionally, analytics on frequently changing data streams—like transactions, user behavior, or operational signals—has required complex ETL (extract, transform, load) pipelines, streaming frameworks, or even micro-batch systems. ScaleOut’s innovation flips that model: instead of moving data out, you analyze it where it lives.
With Active Caching now paired with generative AI, business users can ask questions like, “Show me a chart of order volume over the past hour”, or “Map customer clicks in our southeastern region”, and get immediate visual feedback. That means no waiting on data scientists to build dashboards, no painful BI setup, and far fewer handoffs.
For companies operating in sectors where real-time context matters—such as e-commerce, financial services, logistics, gaming, or cybersecurity—this is a potential game-changer. ScaleOut CEO Dr. William Bain frames it well: “Organizations of all sizes face the same need to respond quickly as conditions change… a combination of active caching with Gen AI-powered analytics enables customers to strengthen their operational intelligence, increase efficiency, and respond to changing conditions in real time.”
One of the most compelling aspects of this release is how ScaleOut lowers the technical bar for real-time analytics. Rather than requiring SQL knowledge, data modeling, or BI tool mastery, non-technical users can prompt the system in natural language.
Behind the scenes, the LLM parses these prompts and translates them into precise queries against JSON-encoded objects in ScaleOut’s cache. Then it generates chart specifications or map visualizations as needed—all on the fly.
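The general pattern behind that translation step can be sketched briefly. This is not ScaleOut's integration code: the cache here is a plain Python dict of JSON objects and the query schema is invented for illustration; only the OpenAI client call reflects a real, current API.

```python
import json
from openai import OpenAI

# Stand-in for the distributed cache: JSON-encoded objects keyed by ID.
cache = {
    "o-1": {"region": "southeast", "order_total": 120.0, "minute": 14},
    "o-2": {"region": "northwest", "order_total": 75.5, "minute": 22},
    "o-3": {"region": "southeast", "order_total": 210.0, "minute": 41},
}

SYSTEM = (
    "Translate the user's question into JSON with keys "
    "'filter_field', 'filter_value', 'metric', and 'chart_type'. "
    "Respond with JSON only."
)

def prompt_to_query(question: str) -> dict:
    """Ask the LLM for a structured query describing what to compute and plot."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

def run_query(query: dict) -> dict:
    """Apply the structured query to the cached objects and emit a chart spec."""
    rows = [obj for obj in cache.values()
            if obj.get(query["filter_field"]) == query["filter_value"]]
    total = sum(obj[query["metric"]] for obj in rows)
    return {"chart_type": query["chart_type"], "rows": len(rows), "total": total}

query = prompt_to_query("Show me a bar chart of order volume in the southeast region")
print(run_query(query))
```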
This democratization has notable implications:
Faster decision-making: Business leaders don’t have to wait for data teams to build dashboards.
Lower friction: Analytics becomes accessible across roles, not just to data scientists or BI specialists.
Real-time responsiveness: As live data changes, so do the visualizations and insights, keeping everyone aligned with current conditions.
In effect, ScaleOut is turning its distributed cache into an AI-powered front door for real-time operational intelligence.
Alongside the Gen AI features, ScaleOut has revamped its management UI. A redesigned object browser now allows administrators and users to search and filter cached objects more easily, tailored to modern usability expectations.
This is more than aesthetic—it addresses a real enterprise pain point: large in-memory caches can store millions of complex objects, and managing or exploring them can be tedious. With improved filtering, search, and navigation, users can jump directly to the data they care about, inspect it, and even tweak their analytics modules from within the same interface.
ScaleOut didn’t stop at analytics. The Gen AI Release also introduces support for Amazon Simple Queuing Service (SQS). This means ScaleOut’s distributed cache can directly subscribe to SQS message streams—making it possible to process queued events in real time. This is especially valuable for architectures where decoupling via message queues is common, like microservices, event-driven systems, or cloud-native pipelines.
By listening to SQS, ScaleOut can keep its cache fresh, respond to events instantly, and feed its AI-powered analytics engine with up-to-date data without additional glue code.
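Since ScaleOut says the subscription needs no extra glue code, the boto3 loop below only illustrates what that subscription amounts to conceptually; the queue URL is a placeholder and a plain dict stands in for the distributed cache.

```python
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-events"  # placeholder
cache: dict[str, dict] = {}  # stand-in for the distributed, in-memory object store

sqs = boto3.client("sqs", region_name="us-east-1")

def drain_once() -> int:
    """Long-poll the queue once and fold each event into the cache."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,          # long polling keeps the cache fresh cheaply
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        event = json.loads(msg["Body"])          # assume JSON-encoded events
        cache[event["object_id"]] = event        # upsert the latest state
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return len(messages)

if __name__ == "__main__":
    while True:                                  # analytics read `cache` concurrently
        drain_once()
```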
ScaleOut’s move comes in an era where real-time analytics and operational intelligence are increasingly prerequisites, not luxuries. Competitors like Redis (with RedisAI) and Hazelcast tout in-memory speed, but often rely on separate analytics or streaming platforms.
ScaleOut, on the other hand, aims to collapse that stack: caching, computation, LLM-based query interpretation, and analytics all live together. That unified model could deliver lower latency, simpler architecture, and fewer moving parts. For enterprises with high-speed workloads—fraud detection, live personalization, logistics optimization—this integrated approach could offer a smoother, more performant path forward.
Here are some concrete scenarios where ScaleOut’s new features could shine:
E-commerce Flash Sales
Retailers can monitor live customer behavior during flash sales—who’s hitting what product, where drop-offs are happening, and how demand is evolving—all through live visualizations. They can then tweak pricing, inventory, or messaging in real-time.
Financial Market Trading
Trade desks or quant teams can query for patterns in transactional data, streaming orders, or credit risk signals without waiting for batch jobs or overnight ETL runs.
Logistics & Operations
Supply chain operators can map real-time vehicle locations, process inventory updates as they arrive, and visualize geospatial trends dynamically.
Gaming & Online Services
Gaming platforms can track user engagement, in-game events, or server performance in real time and make automated adjustments or trigger alerts.
Security & Monitoring
Security teams can track anomaly detection outputs, suspicious events, or threat indicators as they're cached, and immediately visualize or escalate via automated workflows.
One of the biggest hurdles in real-time systems has always been making insights accessible to non-engineering teams. ScaleOut's Gen AI Release tackles this by bringing real-time data into the hands of business analysts, operations professionals, and domain leaders—not just engineers.
Ops leaders can spot and correct trends fast.
Business analysts can ask “what just changed?” without opening a BI tool.
Service managers can chart performance metrics on-the-fly.
Product teams can monitor usage behavior in real time and pivot quickly.
By reducing the friction between data and decision-makers, ScaleOut gives organizations a powerful lever to act fast—not just with data, but with understanding.
Naturally, injecting an LLM into fast-moving data systems isn’t without challenges:
Cost: Running LLM-backed analytics on high-throughput caches may be expensive, depending on scale.
Latency: While caching reduces data-access latency, prompt processing and LLM inference could introduce new delays.
Security and Privacy: Live data may contain sensitive information; ensuring secure prompt handling, encryption, and auditing becomes critical.
Accuracy: Generative AI systems can misinterpret prompts or mis-generate query syntax. Users will need guardrails, validation, and possibly human oversight.
Despite these risks, ScaleOut's architecture—bringing the AI directly into the cache rather than sitting downstream—positions it to mitigate some of them. Caching ensures speed, but the platform design still requires governance and thoughtful implementation.
ScaleOut’s Gen AI Release reflects a broader trend in enterprise IT: bringing intelligence closer to the data. Rather than shipping data off to dedicated analytics clusters, more organizations are embedding compute—and now, generative AI—into wherever data lives.
This shift has several implications:
Simplified architecture: fewer systems to integrate, less data movement.
Better performance: faster insights and lower operational latency.
Greater democratization: business users can self-serve, reducing demand on data teams.
Competitive differentiation: companies that act on real-time data gain a leg-up in responsiveness and agility.
ScaleOut is positioning itself as a pioneer in this space, not just as a cache vendor, but as a platform for real-time operational intelligence powered by AI.
Looking ahead, the company may push into other areas:
More LLM integrations: support for other models or private LLMs.
Expanded visualizations: richer dashboards, more chart types, custom layouts.
Workflow automation: coupling analytics with automated actions—alerts, triggers, business processes.
Deeper cloud integrations: beyond SQS, support for more message queues, event buses, and cloud-native services.
As real-time demands mount across industries—particularly in financial trading, e-commerce, and cybersecurity—ScaleOut's Gen AI Release could become a cornerstone for architecture designs that prioritize speed, insight, and action.
ScaleOut Software’s Gen AI Release for Active Caching isn’t just an incremental upgrade—it’s a shift in how enterprises think about in-memory data. By embedding generative AI directly into the cache, the company bridges the gap between raw, fast-changing data and actionable insight, all while making it accessible to non-technical users.
For organizations seeking real-time responsiveness and intelligence, particularly in high-velocity industries, this could be the nudge that pushes them from being data-rich to insight-rich. And in today’s world, that might be what defines competitive advantage.