

Authenticom Wins 2026 Merit Award for Experiential Marketing


marketing 15 Apr 2026

Experiential marketing is evolving into a measurable performance channel, blending immersive engagement with real-time data capture. Authenticom Group of Companies is the latest example of this shift, earning recognition at the 2026 Merit Awards for a campaign that combined physical activation, digital engagement, and revenue-focused execution.

Authenticom Group has been named a Silver Winner in the 2026 Merit Awards for Marketing & Communications, highlighting a growing trend in enterprise marketing: the transformation of experiential campaigns into structured, data-driven performance engines.

The award recognizes Authenticom’s activation at NADA Show 2026, where the company deployed a high-impact, Formula 1-inspired campaign designed to drive both visibility and measurable business outcomes within a compressed 48-hour window.

At a glance, the campaign leaned heavily on spectacle—a branded Formula 1 car positioned along a high-traffic convention skybridge ensured early-stage awareness. But beneath the visual layer, the initiative was architected as a full-funnel marketing system, integrating live demos, digital engagement, and CRM-linked sales qualification.

Experiential Marketing Meets Performance Infrastructure

What sets this campaign apart is how it redefines experiential marketing. Traditionally viewed as a top-of-funnel brand play, experiential activations are increasingly being engineered for mid- and bottom-funnel impact.

Authenticom’s booth extended beyond static displays into an interactive environment that combined:

  • Live product demonstrations tied to structured engagement flows
  • A WebAR-based “Pit Pass Experience” to capture digital interaction data
  • A content engine producing over 25 podcast sessions with industry experts

These elements were not isolated. They were connected through backend systems that enabled real-time lead qualification and pipeline generation, aligning experiential engagement directly with sales outcomes.

This approach mirrors broader enterprise marketing trends, where platforms like Salesforce and Adobe are enabling tighter integration between campaign execution and revenue attribution.

Data Experience Management Takes Center Stage

The campaign also served as the launchpad for Authenticom’s Data Experience Management (DXM) category—a framework designed to unify data integration, customer experience, and marketing execution.

DXM is built around five pillars: Connect, Drive, Transform, Measure, and Predict. In practice, this model positions data as the central layer connecting customer interactions across touchpoints, from initial awareness to post-engagement analysis.

From an answer engine optimization (AEO) perspective, Data Experience Management can be defined as an approach that integrates data infrastructure, customer experience systems, and marketing workflows to deliver measurable, end-to-end engagement outcomes.

This concept aligns with the growing importance of first-party data strategies, particularly as industries face increasing privacy regulations and signal loss in traditional digital advertising channels.

Amplifying Reach Through Multi-Channel Engagement

While the physical activation was central, Authenticom extended its reach through coordinated digital channels. Email campaigns distributed via NADA reached approximately 60,000 past attendees, driving pre-event awareness and booth traffic.

The integration of email, on-site engagement, and digital interaction reflects a shift toward omnichannel campaign design. Rather than treating each channel independently, enterprise marketers are orchestrating synchronized touchpoints that guide audiences through a structured journey.

This approach is increasingly critical as customer expectations evolve. According to Gartner, organizations that integrate online and offline customer data see up to a 30% improvement in campaign effectiveness. Meanwhile, McKinsey & Company notes that data-driven marketing strategies can increase ROI by 15–20%.

Why It Matters for Automotive and Beyond

Authenticom’s recognition highlights a broader shift within the automotive technology ecosystem. As dealerships, OEMs, and technology providers become more interconnected, the ability to manage and activate data across the lifecycle is becoming a competitive differentiator.

Through platforms like DealerVault® and MIX®, Authenticom already operates at the intersection of data integration and automotive retail. The NADA campaign extends this positioning into marketing execution, demonstrating how data infrastructure can directly support customer engagement and revenue generation.

For enterprise marketing leaders, the takeaway is clear: experiential marketing is no longer just about creating memorable moments. It is about building systems that capture intent, qualify leads, and feed actionable data into broader marketing and sales ecosystems.

The Rise of Measurable Experiences

The success of Authenticom’s campaign underscores a key industry evolution. As marketing budgets face increased scrutiny, every channel—including events—must demonstrate measurable impact.

This is driving the adoption of technologies such as WebAR, real-time analytics, and CRM integration, transforming how marketers design and evaluate campaigns. The result is a new category of experiential marketing—one that operates with the precision and accountability of digital performance channels.

In this context, awards like the Merit Awards are becoming more than recognition programs. They are indicators of how marketing itself is evolving, highlighting the convergence of creativity, technology, and data-driven execution.

Market Landscape

Experiential marketing is undergoing a transformation as enterprise organizations integrate it with digital and data-driven strategies. The convergence of martech, CRM systems, and customer data platforms is enabling marketers to measure engagement and attribute revenue across traditionally offline channels.

In industries like automotive, where complex ecosystems and long sales cycles are common, this integration is particularly valuable. Companies that can connect data, customer experience, and marketing execution are better positioned to drive growth and maintain competitive advantage.

Top Insights

  • Authenticom Group’s award-winning campaign demonstrates how experiential marketing is evolving into a performance-driven channel, integrating physical engagement with digital tracking and real-time CRM-based lead qualification.
  • The introduction of Data Experience Management (DXM) highlights a growing focus on unifying data, customer experience, and marketing execution into a single, measurable framework.
  • Multi-channel orchestration, including WebAR, email outreach, and live demos, enabled Authenticom to drive high-intent engagement and measurable pipeline within a 48-hour event window.
  • The campaign reflects broader enterprise trends toward integrating offline and online marketing data, improving attribution, engagement, and overall campaign effectiveness.
  • Recognition at the Merit Awards signals increasing industry emphasis on measurable outcomes in experiential marketing, not just creative execution or brand visibility.


Appdome Expands IDAnchor With Risk Intelligence APIs for AI Era


artificial intelligence 15 Apr 2026

As mobile fraud grows in scale and sophistication, enterprises are rethinking how risk intelligence is generated and consumed. Appdome has introduced a new set of Risk Intelligence APIs for its IDAnchor platform, positioning mobile threat data as a core input for backend decisioning and enterprise AI systems.

Appdome’s latest update to IDAnchor reflects a broader shift in cybersecurity and mobile infrastructure: moving from reactive threat detection to continuous, identity-driven risk intelligence.

The company has launched a suite of server-to-server Risk Intelligence APIs designed to deliver real-time threat data, device reputation, and identity verification signals directly into enterprise backends. The goal is to enable organizations to integrate mobile risk intelligence into fraud prevention systems, authentication workflows, and AI-driven decision engines.

At a foundational level, the new APIs transform mobile security from an app-level function into a cross-channel intelligence layer. This allows mobile-derived risk signals to influence decisions across systems, including fraud platforms, customer data pipelines, and enterprise analytics environments.

From Threat Detection to Risk Intelligence Infrastructure

Mobile ecosystems are generating unprecedented volumes of threat data. Appdome reports processing over 1.3 trillion mobile threat events per month, a scale that underscores the growing attack surface across mobile applications, devices, and user sessions.

The new APIs are designed to operationalize that data. Instead of limiting insights to in-app protections, organizations can now access verified threat histories and reputation signals within backend systems.

This shift aligns with enterprise architecture trends, where security and risk intelligence are increasingly integrated into platforms like Microsoft Azure and Amazon Web Services, enabling real-time decisioning across distributed systems.

From an AEO perspective, Risk Intelligence APIs can be defined as interfaces that provide verified threat data, device reputation, and behavioral risk signals to enterprise systems for automated security and fraud decision-making.

New Identity Layer for Mobile Trust

A key component of the update is the introduction of two new identity constructs: AppID and InstanceID.

AppID serves as a verified fingerprint for a mobile application, ensuring that the app has not been tampered with. InstanceID, meanwhile, provides a persistent identifier for each app installation, maintaining continuity across updates, upgrades, and even downgrades.

Together, these identifiers create a durable identity layer that links threat data to specific devices, applications, and sessions over time. This allows enterprises to track behavior patterns, detect repeat offenders, and correlate risk signals across multiple touchpoints.

In practice, this approach addresses a long-standing challenge in mobile security: the lack of persistent, trustworthy identifiers in dynamic environments where devices and apps frequently change state.
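The value of a durable identifier pair can be sketched in a few lines. This is a toy illustration only: the identifier formats, event strings, and storage shape are invented for this sketch, not Appdome's actual AppID/InstanceID constructs. The point it shows is that keying threat history on a version-independent (app, installation) pair keeps one longitudinal record even as the app itself changes state.

```python
# Minimal sketch of a persistent identity layer. All identifier formats and
# the event schema below are assumptions for illustration, not Appdome's
# actual AppID/InstanceID implementation.
from collections import defaultdict

# Threat history keyed by a durable (app_id, instance_id) pair, so records
# survive app updates that would reset a version-scoped identifier.
history: dict[tuple[str, str], list[str]] = defaultdict(list)

def record_event(app_id: str, instance_id: str, event: str) -> None:
    """Append one threat event to the installation's longitudinal record."""
    history[(app_id, instance_id)].append(event)

# The same installation reports events across two app versions; because the
# key ignores the version, both land in a single record for that install.
record_event("app-7f3", "inst-001", "v2.1: rooted device detected")
record_event("app-7f3", "inst-001", "v2.2: debugger attached")

print(len(history[("app-7f3", "inst-001")]))  # 2
```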

APIs Built for Real-Time Decisioning

The new Risk Intelligence APIs include several modules designed to support enterprise use cases:

  • DeviceMATCH™ verifies whether interactions originate from the same physical device
  • InstanceMATCH™ ensures app authenticity and lineage continuity
  • ThreatHISTORY™ provides longitudinal threat intelligence tied to identity
  • MobileRISK™ delivers AI-generated risk and reputation scores

These capabilities enable organizations to move beyond static risk scoring models. Instead of relying on isolated signals, enterprises can build dynamic risk pipelines that combine historical data, real-time events, and AI-driven analysis.

This is particularly relevant as fraud tactics evolve. Attackers increasingly use coordinated device farms, app manipulation, and account takeovers—methods that require cross-session and cross-device visibility to detect effectively.
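A backend consuming signals like the four modules above might gate a transaction roughly as follows. This is a hedged sketch, not Appdome's published API contract: the field names, thresholds, and three-outcome policy are all assumptions chosen to illustrate how identity, lineage, history, and score signals could combine into one decision.

```python
# Illustrative risk-gating sketch. Field names mirror the article's modules
# (DeviceMATCH, InstanceMATCH, ThreatHISTORY, MobileRISK) but the schema and
# thresholds are invented assumptions, not a real Appdome response format.
from dataclasses import dataclass

@dataclass
class RiskSignals:
    device_match: bool    # same physical device as prior sessions (DeviceMATCH-style)
    instance_match: bool  # app authenticity and lineage intact (InstanceMATCH-style)
    prior_threats: int    # count of historical threat events (ThreatHISTORY-style)
    risk_score: float     # 0.0 (safe) to 1.0 (high risk) (MobileRISK-style)

def decide(signals: RiskSignals) -> str:
    """Combine identity and reputation signals into allow / step_up / block."""
    if not signals.instance_match:
        return "block"        # tampered or repackaged app: reject outright
    if signals.risk_score >= 0.8 or signals.prior_threats > 5:
        return "block"        # known-bad reputation or repeat offender
    if not signals.device_match or signals.risk_score >= 0.4:
        return "step_up"      # unfamiliar device or elevated risk: extra auth
    return "allow"

print(decide(RiskSignals(True, True, 0, 0.1)))   # allow
print(decide(RiskSignals(False, True, 0, 0.5)))  # step_up
```

The layered checks are one way to express the article's claim about reducing false positives: a new-but-clean device steps up authentication instead of being blocked, while a verified repeat offender is rejected regardless of the current session's behavior.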

Powering the Enterprise AI Stack

One of the most significant aspects of the announcement is its focus on AI readiness. The APIs are designed to feed verified, high-quality data into enterprise AI models, supporting use cases such as fraud detection, anomaly detection, and adaptive authentication.

This aligns with a growing trend in enterprise AI development: the need for trusted, structured data pipelines. According to Gartner, over 70% of AI project failures are linked to poor data quality or lack of reliable data sources. Similarly, McKinsey & Company highlights that organizations with robust data pipelines are significantly more likely to achieve ROI from AI initiatives.

By exposing threat intelligence through APIs, Appdome is effectively positioning mobile security data as a foundational input for AI-driven decisioning systems.

Building a Continuous Risk Pipeline

The concept underpinning the release is what Appdome describes as a “continuous risk pipeline.” Rather than evaluating risk at a single point in time, enterprises can now track and update risk profiles across the entire lifecycle of a device, app, or user.

This enables a range of practical applications:

  • Detecting repeat fraud across devices and accounts
  • Identifying coordinated attacks and bot activity
  • Reducing false positives by distinguishing new users from known threats
  • Adapting authentication flows based on real-time risk signals

For security and fraud teams, this represents a shift from reactive defense to proactive orchestration. Risk intelligence becomes an ongoing process, integrated into every interaction rather than triggered by isolated events.

Why It Matters

For enterprise technology leaders, Appdome’s update highlights a critical evolution in mobile and cybersecurity strategy. As mobile becomes the primary interface for digital services, the ability to secure and analyze mobile interactions at scale is becoming essential.

At the same time, the convergence of security, data infrastructure, and AI is reshaping how organizations approach risk management. Platforms that can deliver verified, real-time intelligence across systems are likely to play a central role in this new architecture.

Appdome’s Risk Intelligence APIs signal a move toward that future—where mobile identity, threat data, and AI-driven decisioning operate as a unified system.

Market Landscape

The mobile security and fraud prevention market is rapidly evolving, driven by increased mobile usage, sophisticated attack vectors, and the rise of AI-driven applications. Vendors are shifting from standalone protection tools to integrated platforms that combine identity, data, and analytics.

This trend is closely tied to the growth of enterprise AI, where high-quality, real-time data is essential for model accuracy and decisioning. As a result, solutions that can bridge mobile environments and backend systems are becoming critical components of modern enterprise architecture.

Top Insights

  • Appdome introduced Risk Intelligence APIs for IDAnchor, enabling enterprises to integrate mobile threat data and device reputation directly into backend systems and AI-driven decision workflows.
  • New identity constructs like AppID and InstanceID provide persistent, verified identifiers, allowing organizations to track risk and behavior across app lifecycles and device interactions.
  • The APIs transform mobile security into a continuous risk intelligence pipeline, supporting real-time fraud detection, adaptive authentication, and cross-channel risk correlation.
  • Enterprise AI initiatives benefit from access to verified threat data, addressing a key challenge in building accurate, high-performing machine learning models.
  • The release reflects a broader shift toward integrating security, data infrastructure, and AI into unified enterprise decisioning systems.


Teradata Launches Analyst Agent on Microsoft Marketplace for AI Analytics


artificial intelligence 15 Apr 2026

Enterprise AI is moving beyond experimentation toward operational decision-making. Teradata has introduced its Analyst Agent on Microsoft Marketplace, aiming to bring conversational analytics and transparent AI governance directly into enterprise data environments powered by Microsoft Azure.

Teradata’s latest release signals a growing shift in enterprise analytics: the rise of agentic AI systems that not only generate insights but also explain, validate, and continuously improve their outputs.

The newly launched Teradata Analyst Agent allows business and data analysts to interact with enterprise data using natural language, eliminating the need for SQL queries or traditional business intelligence (BI) dashboards. Instead, users can ask questions conversationally, while the agent orchestrates complex queries, performs iterative analysis, and generates visual outputs.

At a high level, the Analyst Agent functions as an AI-powered interface layer on top of enterprise data platforms—bridging the gap between technical data systems and business users who need actionable insights quickly.
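The interface-layer pattern described above (natural language in, query and result out) can be shown with a deliberately tiny sketch. Everything here is invented for illustration: the routing rule, the schema, and the single recognized question shape. The real Analyst Agent performs iterative, orchestrated planning against live enterprise data rather than one regex match.

```python
# Toy illustration of the conversational-analytics pattern: translate a
# recognized natural-language question into SQL and run it. The router,
# table, and phrasing rule are all assumptions, not Teradata's product.
import re
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("East", 120.0), ("West", 80.0), ("East", 50.0)])

def ask(question: str) -> list[tuple]:
    """Map one recognized question shape to SQL and return the rows."""
    if re.search(r"total sales by region", question, re.IGNORECASE):
        sql = ("SELECT region, SUM(amount) FROM sales "
               "GROUP BY region ORDER BY region")
        return con.execute(sql).fetchall()
    raise ValueError("question not understood by this toy router")

print(ask("What are total sales by region?"))  # [('East', 170.0), ('West', 80.0)]
```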

Conversational Analytics Moves Into Production

Conversational analytics has been a growing trend, but many implementations have struggled with reliability and governance. Teradata’s approach addresses this by embedding the agent directly into existing data environments, rather than treating it as a standalone tool.

By launching on Microsoft Marketplace, Teradata is aligning with enterprise procurement and deployment workflows. Organizations can integrate the agent within their existing Azure infrastructure, reducing friction around adoption and scaling.

This integration reflects a broader ecosystem trend. Enterprise platforms like Microsoft Azure, Google Cloud, and Amazon Web Services are increasingly becoming distribution layers for AI applications, enabling faster deployment and tighter integration with data systems.

From an AEO standpoint, a conversational analytics agent is an AI system that allows users to query data in natural language, automatically generating queries, insights, and visualizations without requiring technical expertise.

Tackling the “Black Box” AI Problem

A key differentiator in Teradata’s offering is its focus on transparency through Agent Telemetry. One of the biggest challenges in enterprise AI adoption is the lack of visibility into how models generate outputs—often referred to as the “black box” problem.

Teradata’s telemetry framework captures:

  • Execution steps and orchestration logic
  • Query performance and estimated cost
  • Model usage and system behavior
  • User feedback and quality signals

This data allows organizations to audit, monitor, and optimize AI performance over time. More importantly, it enables enterprises to enforce governance standards—an increasingly critical requirement in regulated industries.

Users can also configure custom quality signals to detect issues such as hallucinated results, inefficient query loops, or weak prompts. This transforms AI systems from static tools into continuously improving platforms.
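A configurable quality signal of that kind might look like the following sketch. The telemetry record shape, field names, and thresholds are assumptions; the article does not publish Teradata's actual Agent Telemetry schema. The sketch flags two of the issue classes mentioned above: an inefficient query loop (the agent re-issuing the same SQL) and a run whose estimated cost exceeds a budget.

```python
# Hypothetical quality-signal scan over per-run telemetry. The "query" and
# "est_cost" fields and both thresholds are invented for illustration.
from collections import Counter

def quality_signals(steps: list[dict]) -> list[str]:
    """Scan one agent run's execution steps for configured quality issues."""
    flags = []
    # Inefficient query loop: the same SQL was issued three or more times.
    counts = Counter(s["query"] for s in steps if s.get("query"))
    if any(n >= 3 for n in counts.values()):
        flags.append("repeated_query_loop")
    # Cost guardrail: estimated spend across the whole run exceeds a budget.
    if sum(s.get("est_cost", 0.0) for s in steps) > 5.0:
        flags.append("cost_over_budget")
    return flags

run = [
    {"query": "SELECT region, SUM(sales) FROM orders GROUP BY region", "est_cost": 0.4},
    {"query": "SELECT region, SUM(sales) FROM orders GROUP BY region", "est_cost": 0.4},
    {"query": "SELECT region, SUM(sales) FROM orders GROUP BY region", "est_cost": 0.4},
]
print(quality_signals(run))  # ['repeated_query_loop']
```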

From BI Tools to Agentic Systems

The Analyst Agent represents a broader evolution in analytics technology. Traditional BI platforms required users to build dashboards and reports manually. Even modern self-service tools often depend on predefined data models and visualizations.

Agentic AI systems, by contrast, dynamically generate insights based on user queries. They can iterate on analysis, explore multiple hypotheses, and adapt to changing data contexts in real time.

This shift is particularly relevant for enterprise marketing, finance, and operations teams, where speed and accuracy of decision-making are critical. Instead of waiting for reports, teams can interact directly with data and receive immediate, contextual insights.

Enterprise Adoption and ROI Considerations

One of the barriers to AI adoption has been the gap between experimentation and production deployment. Teradata is addressing this with pre-built templates and integration frameworks designed to reduce implementation time and cost.

The Analyst Agent includes:

  • Pre-configured deployment for Azure environments
  • Multi-agent templates for extensibility
  • Integration with Teradata Enterprise MCP

These features are designed to accelerate time-to-value, enabling organizations to move from pilot projects to production systems more quickly.

According to Gartner, by 2027, more than 50% of business decisions will be augmented or automated by AI agents. Meanwhile, IDC reports that organizations investing in AI-driven analytics are seeing significant improvements in decision speed and operational efficiency.

Why It Matters for Enterprise Teams

For enterprise leaders, the launch highlights a critical transition point in AI adoption. The focus is shifting from building models to operationalizing them within business workflows.

Teradata’s Analyst Agent addresses several key enterprise requirements:

  • Accessibility: enabling non-technical users to interact with data
  • Governance: providing transparency and auditability
  • Integration: embedding AI within existing infrastructure
  • Scalability: supporting enterprise-wide deployment

This combination is essential for organizations looking to scale AI beyond isolated use cases.

In practical terms, the Analyst Agent allows enterprises to democratize data access while maintaining control over quality, cost, and compliance. This balance is likely to define the next phase of AI adoption across industries.

Market Landscape

The enterprise analytics market is rapidly evolving toward AI-native platforms. Vendors are integrating conversational interfaces, automation, and predictive capabilities into their offerings, blurring the lines between BI tools and AI systems.

Cloud marketplaces are emerging as key distribution channels, enabling enterprises to discover, deploy, and manage AI solutions within existing ecosystems. This trend is accelerating the adoption of agentic AI, particularly in organizations with mature data infrastructure.

As competition intensifies, differentiation is shifting toward transparency, governance, and integration—areas that are critical for enterprise-scale AI deployment.

Top Insights

  • Teradata’s Analyst Agent brings conversational analytics to Microsoft Marketplace, enabling enterprises to query data using natural language and generate insights without coding or traditional BI tools.
  • Agent Telemetry addresses the “black box” AI challenge by providing visibility into execution, performance, and quality signals, supporting governance and continuous optimization.
  • Integration with Microsoft Azure allows seamless deployment within existing enterprise infrastructure, reducing friction and accelerating AI adoption at scale.
  • The rise of agentic AI systems signals a shift from static dashboards to dynamic, real-time decision-making tools across enterprise functions.
  • Pre-built templates and multi-agent frameworks help organizations move from AI experimentation to production, improving time-to-value and operational efficiency.


Sprinklr Spring ’26 Release Advances AI-Native CX Platform


artificial intelligence 15 Apr 2026

Enterprise customer experience platforms are rapidly evolving into AI-native systems that combine automation, analytics, and governance at scale. Sprinklr’s Spring ’26 (26.4) release signals a deeper shift toward agentic AI, where copilots and autonomous systems are embedded across marketing, service, and insights workflows.

Sprinklr’s latest platform update introduces a broad set of AI-driven capabilities aimed at helping enterprises operationalize customer experience (CX) strategies across channels. The Spring ’26 release focuses on three core pillars: agentic automation, high-fidelity data intelligence, and enterprise-grade governance.

At its core, the update reflects a growing industry need: moving beyond isolated AI use cases toward fully integrated, scalable AI systems that deliver measurable outcomes across the customer lifecycle.

AI Copilots Expand Across CX Workflows

A major highlight of the release is the expansion of AI copilots across Sprinklr’s suite. These copilots are designed to simplify complex workflows by enabling users to interact with data and systems through conversational interfaces.

The Customer Feedback Copilot enhances voice-of-customer (VoC) capabilities by transforming raw feedback into structured insights, visual trends, and comparative analysis. This allows organizations to identify patterns and act on customer sentiment faster.

Similarly, the Marketing Copilot introduces conversational automation into campaign management, enabling marketers to explain performance fluctuations, generate reports, and build analytics dashboards without manual configuration.

This aligns with broader enterprise trends, where platforms like Adobe and Salesforce are embedding AI assistants into marketing and customer data workflows to improve speed and accessibility.

From an AEO perspective, AI copilots are intelligent assistants that help users analyze data, automate workflows, and generate insights using natural language interactions.

Agentic AI Moves Into Customer Service

The Spring ’26 release places significant emphasis on service operations, where AI agents are increasingly handling customer interactions autonomously.

Sprinklr introduces Autonomous Evaluation, a framework that provides transparent logs and test-backed validation for AI agent behavior. This addresses a key challenge in enterprise AI adoption: trust. Organizations need to understand how AI systems make decisions before scaling them across customer-facing operations.

Agent Copilot has also been enhanced to deliver proactive recommendations during live interactions. By offering real-time guidance, the system helps improve key service metrics such as first call resolution (FCR) and average handle time.

This shift toward explainable, testable AI reflects a broader industry movement. As AI becomes more deeply embedded in customer service, governance and observability are becoming just as important as performance.

Precision Listening and Unified Customer Intelligence

On the insights side, Sprinklr is focusing on improving signal quality and data unification. AI Topics now use generative AI to filter out irrelevant noise, ensuring that only meaningful conversations and mentions are surfaced.

This is critical in an era where brands must process vast volumes of social and conversational data. Without effective filtering, insights teams risk being overwhelmed by low-value signals.

The platform also introduces unified, governed customer profiles, consolidating feedback and interaction data across channels. This enables organizations to build a more complete view of each customer, supporting personalization and targeted engagement strategies.

Additionally, enhancements to web surveys—including localization and intelligent sampling—aim to improve data quality and representativeness at scale.

Marketing Automation Meets Creative Integration

Sprinklr is also expanding its marketing capabilities by integrating creative workflows and performance analytics.

New integrations with platforms like Canva streamline asset management, allowing teams to import and manage creative content while maintaining brand governance. Access to TikTok’s commercial music library further supports the creation of compliant, on-trend video content.

On the analytics side, the platform introduces automated root-cause analysis for campaign performance shifts, along with unified dashboards that compare pre- and post-boost metrics. This helps marketers move from observation to action more quickly.

Support for tracking seller performance on LinkedIn adds another layer of visibility, particularly for B2B organizations leveraging social selling strategies.

Governance and AI at Scale

A defining feature of the Spring ’26 release is its focus on governance. As enterprises scale AI adoption, the need for control, transparency, and compliance becomes critical.

Sprinklr’s AI+ Studio now includes bulk testing and telemetry capabilities, enabling organizations to evaluate AI performance at scale. Additional platform updates, such as integration management via the Sprinklr Marketplace and enhanced compliance controls (DRP 2.0), reinforce the platform’s enterprise readiness.

These features position Sprinklr as not just a CX platform, but a governed AI environment—one where organizations can deploy, monitor, and optimize AI systems safely.

According to Gartner, enterprises that implement strong AI governance frameworks are significantly more likely to achieve scalable, production-ready AI deployments. Meanwhile, Forrester notes that unified CX platforms can improve customer retention and operational efficiency when paired with advanced analytics and automation.

Why It Matters

For enterprise marketing, service, and CX leaders, Sprinklr’s Spring ’26 release underscores a key transition: AI is no longer a feature—it is the foundation of customer experience platforms.

The combination of copilots, agentic automation, and governance tools enables organizations to:

  • Automate complex workflows across marketing and service
  • Improve decision-making with real-time, high-quality insights
  • Maintain control and compliance as AI scales

In practical terms, this means faster execution, better customer experiences, and more measurable business outcomes.

As competition intensifies and customer expectations rise, platforms that can unify data, automation, and governance will define the next phase of enterprise CX innovation.

Market Landscape

The customer experience management market is evolving toward AI-native platforms that integrate marketing, service, and insights into a single ecosystem. Vendors are investing heavily in generative AI, automation, and data unification to differentiate their offerings.

This shift is driven by the need for real-time personalization, operational efficiency, and measurable ROI. As a result, enterprise buyers are prioritizing platforms that combine advanced AI capabilities with governance and scalability.

Top Insights

  • Sprinklr’s Spring ’26 release introduces agentic AI and copilots across marketing, service, and insights, enabling enterprises to automate workflows and improve decision-making at scale.
  • Autonomous Evaluation brings transparency and trust to AI agents, addressing enterprise concerns around explainability, governance, and continuous optimization of AI-driven customer interactions.
  • Enhanced AI Topics and unified customer profiles improve signal quality and data integration, helping organizations generate more accurate, actionable insights from large datasets.
  • Marketing updates, including Canva integration and automated performance analysis, streamline creative workflows while improving campaign measurement and optimization.
  • Platform-wide governance features position Sprinklr as a scalable, enterprise-ready AI environment, supporting safe deployment and management of AI systems across customer experience functions.


8x8 Launches AI Studio to Build Agentic AI for CX Natively


artificial intelligence 15 Apr 2026

The race to operationalize AI in customer experience is exposing a persistent gap between ambition and execution. 8x8, Inc. is aiming to close that gap with the launch of 8x8 AI Studio, a native environment that allows enterprises to build, test, and deploy AI agents directly within its CX platform using natural language.

8x8’s introduction of AI Studio reflects a broader industry shift toward embedded, agentic AI systems that are tightly integrated into enterprise infrastructure rather than layered on top of it.

The new offering enables business users—not just developers—to create AI agents that operate across voice and digital channels. By leveraging natural language instructions, teams can design workflows, automate interactions, and deploy agents without the need for specialized coding skills or external integration layers.

At a fundamental level, AI Studio is designed to eliminate one of the biggest barriers to AI adoption: complexity. Many organizations struggle with long implementation cycles, high costs, and fragmented tools. By embedding AI capabilities directly into its platform, 8x8 is positioning AI as a native function of customer experience operations.

AI Built Into the Infrastructure

Unlike traditional conversational AI tools that operate as overlays, 8x8 AI Studio is integrated directly into the 8x8 Platform for CX. This architecture gives AI agents direct access to real-time voice data, interaction context, and network telemetry.

The result is a more responsive and context-aware system. Without relying on intermediary layers such as transcription engines, the platform can reduce latency and improve the natural flow of conversations—an issue that has historically limited the effectiveness of AI-driven customer interactions.

This approach aligns with broader enterprise trends. Cloud ecosystems from Microsoft, Amazon, and Google are increasingly embedding AI capabilities directly into infrastructure, enabling real-time processing and tighter integration with business workflows.

In plain terms, agentic AI refers to AI systems capable of autonomously executing tasks, making decisions, and interacting with users across workflows without constant human intervention.

Democratizing AI Agent Development

A key differentiator in 8x8 AI Studio is accessibility. The platform includes a Builder that allows users to describe desired outcomes in plain language, automatically generating functional AI agents.

This lowers the barrier to entry for organizations that lack specialized AI development teams. According to the Metrigy Customer Experience Optimization 2025–26 report, nearly 75% of CX leaders prefer building their own AI agents, primarily due to trust and domain expertise concerns.

By enabling in-platform development, 8x8 allows enterprises to retain control over their data, workflows, and customer interactions—critical factors in regulated industries and complex service environments.

Real-World Use Cases in Production

Even in early availability, organizations are deploying AI Studio across a range of operational scenarios. These include:

  • Inbound customer interactions: AI agents manage intake, identity verification, and call routing with business-hours awareness
  • Outbound engagement: Automated follow-ups, appointment confirmations, and data collection
  • Sales qualification: Lead capture and qualification workflows integrated with Salesforce
  • Internal support: Helpdesk triage and ticket creation across enterprise systems
  • Employee productivity: Personal AI agents handling calls and after-hours interactions

These use cases illustrate how AI agents are evolving from experimental tools into core components of business operations. Instead of augmenting human agents, they are increasingly handling entire workflows end-to-end.
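The business-hours-aware routing described above can be sketched in a few lines. Everything here is hypothetical: the hours window, the intent labels, and the queue names are illustrative stand-ins, not 8x8 AI Studio's actual API.

```python
from datetime import datetime, time

BUSINESS_HOURS = (time(9, 0), time(17, 0))  # assumed 9am-5pm weekday window

def route_inbound(caller_verified: bool, intent: str, now: datetime) -> str:
    """Route an inbound interaction based on verification, intent, and hours."""
    open_t, close_t = BUSINESS_HOURS
    in_hours = open_t <= now.time() < close_t and now.weekday() < 5
    if not caller_verified:
        return "identity_verification"
    if not in_hours:
        return "after_hours_agent"      # AI agent handles the call end-to-end
    if intent == "sales":
        return "sales_qualification"    # e.g. lead capture pushed to CRM
    return "support_queue"

# Example: a verified support call on a Tuesday afternoon
print(route_inbound(True, "support", datetime(2026, 4, 14, 15, 30)))  # support_queue
```

In a real deployment this decision tree would be generated from a natural-language description rather than hand-coded, which is precisely the complexity AI Studio aims to hide.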

From Experimentation to Scalable Outcomes

One of the challenges in enterprise AI adoption has been moving from pilot projects to production-scale deployment. 8x8’s approach addresses this by removing the need for additional infrastructure, vendors, or contracts.

Because AI Studio is native to the platform, organizations can deploy agents using existing systems, data, and workflows. This reduces friction and accelerates time-to-value, allowing teams to test, iterate, and scale more efficiently.

According to Gartner, by 2027, AI-driven automation will handle a significant portion of customer interactions, particularly in contact centers. Meanwhile, Forrester emphasizes that organizations integrating AI into core CX platforms see higher efficiency gains compared to those using standalone tools.

Why It Matters for Enterprise CX Leaders

For enterprise CX and marketing leaders, the launch of AI Studio highlights a critical evolution in how AI is deployed.

The focus is shifting toward:

  • Embedded AI: integrated directly into operational platforms
  • User-driven development: enabling non-technical teams to build and manage AI agents
  • Real-time decisioning: leveraging live data for immediate action
  • Scalable automation: moving from isolated use cases to enterprise-wide deployment

In practical terms, this means organizations can move faster, reduce costs, and deliver more consistent customer experiences.

8x8’s strategy also reflects a larger competitive dynamic in the CX market. As vendors race to embed AI across their platforms, differentiation will increasingly depend on how seamlessly AI integrates with existing infrastructure and how effectively it delivers measurable outcomes.

Market Landscape

The customer experience platform market is rapidly converging with AI and communications infrastructure. Vendors are embedding AI capabilities into voice, messaging, and contact center systems to enable real-time, end-to-end automation.

This shift is driven by rising customer expectations for instant, personalized interactions and the need for operational efficiency. As a result, platforms that combine communications, data, and AI into a unified environment are gaining traction among enterprise buyers.

Top Insights

  • 8x8 AI Studio introduces a native AI development environment, enabling enterprises to build and deploy agentic AI directly within their CX platform using natural language inputs.
  • Embedded AI architecture eliminates integration complexity, providing direct access to real-time voice data and interaction context for improved performance and scalability.
  • Organizations are deploying AI agents across inbound, outbound, sales, and internal workflows, demonstrating the transition from experimental AI to operational automation.
  • The platform democratizes AI development, allowing non-technical users to create and manage agents, aligning with enterprise demand for greater control and flexibility.
  • Early adoption trends indicate a shift toward infrastructure-level AI, where automation is deeply integrated into core business systems rather than layered on top.

StackAdapt Launches Live Events Workflow for CTV Advertising

advertising 15 Apr 2026

Live sports streaming is rapidly reshaping programmatic advertising, but execution challenges remain. StackAdapt has introduced a new Live Events campaign workflow for connected TV (CTV), designed to help advertisers plan, activate, and optimize campaigns around high-demand, real-time sports moments.

StackAdapt’s latest release targets a growing gap in the CTV advertising ecosystem: the mismatch between always-on campaign infrastructure and the unpredictable, high-intensity nature of live sports events.

As streaming platforms expand access to live sports inventory, advertisers are increasingly drawn to these premium environments for their scale and engagement. However, traditional programmatic workflows—built for continuous delivery—struggle to handle the rapid spikes in demand, limited time windows, and inventory coordination required for live broadcasts.

The new Live Events campaign workflow aims to address these constraints with a purpose-built system tailored specifically for live sports advertising.

CTV Meets Real-Time Event Advertising

Connected TV has emerged as a critical channel for brand marketers, combining the reach of television with the targeting capabilities of digital advertising. Yet live sports introduces a different operational dynamic.

Unlike standard campaigns, live events require precise timing, rapid pacing adjustments, and careful frequency management to avoid overexposure. StackAdapt’s workflow introduces a dedicated campaign subtype that aligns with these requirements, allowing advertisers to activate campaigns specifically for live sports moments.

At a functional level, a live events CTV workflow is a campaign management system designed to optimize ad delivery during short, high-traffic windows, ensuring efficient spend, controlled frequency, and real-time performance visibility.

Centralized Planning and Inventory Access

A key component of the new offering is a centralized planning hub that aggregates live sports inventory and event schedules. Advertisers can use an integrated calendar to identify upcoming events and align campaigns accordingly.

This addresses a long-standing issue in programmatic advertising: fragmentation. Live sports inventory is often distributed across multiple broadcasters and streaming platforms, making coordination complex and time-consuming.

By consolidating discovery and planning into a single interface, StackAdapt is positioning its platform as an orchestration layer for live event advertising.

The move reflects broader industry trends, where platforms are evolving beyond media buying tools into full-stack orchestration systems—similar to how Google and Amazon have expanded their advertising ecosystems.

Managing Pacing in High-Demand Environments

One of the biggest challenges in live sports advertising is pacing—ensuring that budgets are spent effectively during unpredictable spikes in viewership.

StackAdapt introduces enhanced pacing controls that adapt to real-time demand fluctuations. This helps advertisers avoid underspending during peak moments or overspending too quickly at the start of an event.

Equally important are frequency controls, which prevent ad fatigue by limiting how often viewers see the same ad. In live sports environments, where audiences are highly engaged but exposure windows are short, maintaining a positive viewer experience is critical.
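The pacing and frequency logic described above can be illustrated with a toy controller. The even-spend rule, the 2x clamp, and the default cap of 3 exposures are assumptions for the sketch, not StackAdapt's published behavior:

```python
# Hypothetical even-pacing controller for a short live-event window.
def pacing_multiplier(spent: float, budget: float,
                      elapsed_min: float, window_min: float) -> float:
    """Scale bids up when behind the even-spend line, down when ahead."""
    target = budget * (elapsed_min / window_min)   # even-spend target so far
    if target <= 0:
        return 1.0
    ratio = spent / target
    # Clamp so a demand spike cannot swing bids more than 2x either way.
    return min(2.0, max(0.5, 1.0 / ratio)) if ratio > 0 else 2.0

def under_frequency_cap(impressions_seen: int, cap: int = 3) -> bool:
    """Skip a viewer once they have seen the ad `cap` times in the event."""
    return impressions_seen < cap

# Halfway through a 120-minute match with only 30% of budget spent,
# the controller bids more aggressively:
print(pacing_multiplier(spent=3000, budget=10000, elapsed_min=60, window_min=120))  # ~1.67
```

The clamp matters in live environments: without it, a sudden viewership spike at kickoff could exhaust the budget before halftime.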

Transparency and Performance Measurement

The platform also expands reporting capabilities, offering package-level insights and improved delivery visibility. This is particularly important in live event advertising, where performance measurement has traditionally been limited.

Advertisers can now track how campaigns perform across specific events, inventory packages, and audience segments. This level of transparency supports better optimization and more accurate ROI measurement.

According to Statista, global CTV ad spending continues to grow at double-digit rates, driven in part by increased demand for measurable, performance-driven advertising. Meanwhile, McKinsey & Company highlights that advertisers are prioritizing real-time data and attribution as key factors in media investment decisions.

Why Live Sports Is a Strategic Focus

The timing of StackAdapt’s launch aligns with a major shift in sports consumption. As audiences move from linear television to streaming platforms, live sports is becoming one of the most valuable—and competitive—advertising environments.

Digital live sports viewership in the U.S. is expected to grow significantly in the coming years, with major global events such as the FIFA World Cup 2026 likely to accelerate adoption.

For advertisers, this creates an opportunity to reach highly engaged audiences in real time. For platforms, it creates pressure to deliver tools that can handle the complexity of live event activation.

Implications for Enterprise Marketers

For enterprise marketing and media teams, StackAdapt’s Live Events workflow represents a step toward more specialized programmatic infrastructure.

The ability to plan campaigns around specific events, manage pacing dynamically, and access premium inventory without rigid constraints can significantly improve campaign effectiveness.

More broadly, the launch underscores a shift in AdTech: from generalized platforms to purpose-built solutions tailored to specific channels and use cases.

As CTV continues to mature, advertisers are likely to demand greater control, transparency, and flexibility—particularly in high-value environments like live sports.

Market Landscape

The CTV and programmatic advertising market is entering a new phase defined by premium inventory, real-time engagement, and performance accountability. Live sports is emerging as a key battleground, attracting both traditional broadcasters and digital platforms.

AdTech vendors are responding by developing specialized tools for event-based advertising, integrating planning, execution, and measurement into unified workflows. This evolution is expected to accelerate as streaming adoption grows and major global events drive audience scale.

Top Insights

  • StackAdapt’s Live Events workflow introduces a purpose-built system for managing CTV campaigns during live sports, addressing challenges around pacing, timing, and inventory coordination.
  • A centralized planning hub and event calendar simplify campaign alignment with live sports moments, reducing fragmentation and improving execution efficiency for advertisers.
  • Enhanced pacing and frequency controls help optimize spend during high-demand windows while maintaining a positive viewer experience and reducing ad fatigue.
  • Expanded reporting capabilities provide greater transparency into campaign performance, enabling better optimization and ROI measurement in live event environments.
  • The launch reflects a broader shift toward specialized AdTech solutions as CTV and live sports streaming become critical channels for enterprise marketers.

Globant, Autodesk Expand Digital Twin Push for Physical AI

artificial intelligence 14 Apr 2026

In a move that reflects the growing convergence of AI, infrastructure data, and enterprise software, Globant has been named an Autodesk Tandem Digital Twin Solution Provider, extending a 15-year collaboration with Autodesk. The announcement signals a deeper push into digital twin technology as enterprises look to connect physical assets with real-time intelligence systems.

From BIM to Real-Time Intelligence

At its core, the partnership focuses on accelerating adoption of Autodesk Tandem, a cloud-based digital twin platform designed to bridge design data with live operational systems. Digital twins—virtual replicas of physical environments—are increasingly being positioned as a foundational layer for what industry leaders are calling “Physical AI.”

Physical AI refers to systems that combine real-world data from infrastructure, sensors, and operations with machine learning models to enable predictive decision-making. In practical terms, this allows enterprises to move beyond static building information modeling (BIM) into continuously updated, intelligent operational environments.
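As a rough illustration of the idea, a digital twin can mirror one asset and apply a predictive-maintenance rule to live sensor data. The class, field names, and vibration threshold below are hypothetical, not Autodesk Tandem's data model:

```python
from collections import deque
from statistics import mean

class AssetTwin:
    """Minimal digital-twin sketch: mirrors one physical asset and flags
    maintenance when a rolling sensor average drifts past a limit.
    Thresholds and field names here are illustrative assumptions."""

    def __init__(self, asset_id: str, vibration_limit: float = 7.0, window: int = 5):
        self.asset_id = asset_id
        self.vibration_limit = vibration_limit
        self.readings = deque(maxlen=window)   # most recent sensor samples

    def ingest(self, vibration_mm_s: float) -> None:
        """Update the twin with a live IoT reading (e.g. from a BMS feed)."""
        self.readings.append(vibration_mm_s)

    def needs_maintenance(self) -> bool:
        """Predictive rule: rolling average exceeds the design limit."""
        return bool(self.readings) and mean(self.readings) > self.vibration_limit

pump = AssetTwin("AHU-01")
for sample in (6.2, 6.9, 7.8, 8.6, 9.4):   # vibration trending upward
    pump.ingest(sample)
print(pump.needs_maintenance())  # True: schedule maintenance before failure
```

The point of the sketch is the lifecycle shift: the same asset model that once sat idle after construction handover now consumes a continuous data feed and drives operational decisions.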

Globant’s role centers on implementation, system integration, and operational data enablement. Through its Digital Twins Practice, the company will help enterprises connect fragmented data sources—ranging from building management systems to IoT sensors—into unified digital environments powered by Autodesk Tandem.

This transition is significant. Traditional BIM systems have long been used in architecture and construction, but they often remain siloed after project completion. By contrast, digital twins extend the lifecycle value of those models, turning them into live operational tools.

Why This Matters for Enterprise Infrastructure

The partnership targets complex, high-value environments such as airports, smart buildings, manufacturing plants, and logistics hubs—settings where real-time visibility and predictive maintenance can directly impact efficiency and cost.

For enterprise teams, the value proposition is straightforward: digital twins enable organizations to monitor assets continuously, simulate operational scenarios, and anticipate failures before they occur. This has implications for industries ranging from energy and healthcare to retail and transportation.

The announcement also aligns with broader enterprise trends. Major technology ecosystems—including Google Cloud, Microsoft Azure, and Amazon Web Services—have been investing heavily in IoT, AI, and data infrastructure capabilities that support digital twin deployments. Autodesk Tandem fits into this landscape as a specialized layer focused on built environments and infrastructure intelligence.

Expanding Beyond Construction

One of the more notable aspects of the collaboration is its focus on expanding digital twin use cases beyond architecture and engineering workflows. While Autodesk has historically been associated with design and construction software, Tandem is increasingly positioned as an operational platform.

Globant’s Digital Twins Practice is targeting sectors such as manufacturing and logistics, smart retail environments, energy infrastructure, and life sciences. These industries share a common challenge: managing complex physical systems that generate large volumes of data but lack unified intelligence layers.

By integrating design data with operational systems, the partnership aims to enable predictive maintenance, energy optimization, and operational automation—key capabilities in modern enterprise digital transformation strategies.

Competitive Landscape and Differentiation

The digital twin market is becoming increasingly crowded. Enterprise platforms from Microsoft (Azure Digital Twins), Siemens, and Dassault Systèmes are all competing to define the category. Meanwhile, Salesforce and Adobe are approaching adjacent opportunities through customer data and experience platforms rather than physical infrastructure.

Autodesk Tandem differentiates itself through its deep integration with design workflows and BIM data, giving it a strong foothold in industries where infrastructure design is critical. Globant, on the other hand, brings system integration expertise and AI-driven delivery models, which are often required to operationalize digital twin initiatives at scale.

This combination positions the partnership as a bridge between design-centric platforms and enterprise IT environments—a gap that has historically slowed adoption.

Market Momentum Behind Digital Twins

The timing of the announcement aligns with rapid market growth. According to MarketsandMarkets, the global digital twin market is expected to expand from approximately $21.14 billion in 2025 to $149.81 billion by 2030. This surge is driven by increasing adoption across manufacturing, infrastructure, and logistics sectors.

Additional research from IDC suggests that by 2027, more than 40% of Global 2000 companies will be using digital twins to improve operational efficiency and resilience. Meanwhile, Gartner has identified digital twins as a key enabler of intelligent infrastructure and autonomous operations.

These projections underscore why companies like Globant and Autodesk are doubling down on the space: digital twins are moving from experimental deployments to core enterprise infrastructure.

What It Means for Enterprise Marketing and Data Teams

While digital twins are often discussed in the context of engineering and operations, their implications extend into marketing technology and customer experience.

For example, smart retail environments can use digital twins to analyze foot traffic patterns and optimize store layouts in real time. Event venues and smart buildings can personalize experiences based on occupancy data and behavioral insights. In logistics, real-time visibility can improve delivery experiences and supply chain transparency.

This convergence of physical and digital data creates new opportunities for marketing analytics, customer data platforms (CDPs), and AI-driven personalization—areas where companies like Salesforce and Adobe are already investing heavily.

In this sense, digital twins are not just an infrastructure innovation—they are becoming a data layer that feeds into broader enterprise decision-making ecosystems.

Looking Ahead: From Proof of Concept to Scale

Globant expects to roll out multiple proof-of-concept deployments by 2026, signaling a near-term focus on validating use cases across industries. The broader challenge will be scaling these deployments across enterprise environments that are often fragmented and legacy-heavy.

Success will depend on more than technology. Organizations will need to align data governance, integration strategies, and AI capabilities to fully realize the benefits of digital twins.

Still, the direction is clear. As enterprises seek to unify physical and digital operations, partnerships like this one are likely to play a central role in shaping the next phase of digital transformation—one where real-world environments become as measurable and optimizable as digital platforms.

Market Landscape

The digital twin ecosystem is evolving into a core layer of enterprise digital infrastructure, intersecting with AI, IoT, and cloud computing. Platforms from Microsoft, Siemens, and Autodesk are competing to define standards, while integration partners like Globant are enabling real-world deployment at scale.

As enterprises increasingly prioritize real-time data and predictive analytics, digital twins are emerging as a bridge between operational technology (OT) and IT systems—unlocking new efficiencies across industries.

Top Insights

  • Globant’s designation as an Autodesk Tandem solution provider strengthens enterprise adoption of digital twin platforms by combining system integration, AI capabilities, and operational data enablement across complex industries.
  • The partnership accelerates the shift from static BIM models to real-time, predictive digital twins, enabling enterprises to optimize infrastructure performance and reduce operational risks.
  • Rapid market growth, projected to reach nearly $150 billion by 2030, highlights increasing enterprise demand for AI-powered infrastructure intelligence across manufacturing, logistics, and smart environments.
  • By extending digital twins into retail, healthcare, and energy sectors, the collaboration expands use cases beyond construction into broader enterprise digital transformation initiatives.
  • Integration with cloud ecosystems and enterprise platforms positions digital twins as a foundational data layer for AI-driven decision-making and customer experience optimization.

Commvault Expands AI Resilience Platform for Agentic Enterprise

artificial intelligence 14 Apr 2026

As enterprises accelerate investments in artificial intelligence, a new challenge is emerging alongside innovation: maintaining control over data, AI agents, and increasingly complex automation systems. Commvault is positioning itself at the center of that shift with a set of new capabilities aimed at what it calls “agentic AI resilience.”

The company announced a series of enhancements to Commvault Cloud, introducing tools designed to help organizations safely deploy AI, govern autonomous agents, and recover from failures across AI-driven environments. The move reflects a broader industry push to operationalize AI while addressing rising concerns around data security, compliance, and system integrity.

Building a System of Record for AI

Commvault’s latest update introduces three core capabilities—Data Activate, AI Protect, and AI Studio—each targeting a different stage of the AI lifecycle. Together, they aim to create a unified control layer for enterprise AI operations.

At a high level, the technology is designed to solve a growing problem: enterprises are deploying AI models and agents faster than they can govern them. These systems often operate across hybrid cloud environments, interacting with sensitive data and critical infrastructure.

Commvault’s approach reframes the problem through the lens of resilience. In this model, AI systems are only as reliable as the data and infrastructure supporting them. If data is compromised or unrecoverable, AI outputs cannot be trusted.

This concept—AI as a “system of record”—echoes earlier enterprise technology waves, where platforms like ERP and CRM became foundational systems for operations and customer data.

What the New Capabilities Do

Data Activate focuses on preparing enterprise data for AI use. It allows organizations to extract, classify, and curate datasets from backup environments, transforming them into formats such as Apache Iceberg and Parquet for use in AI pipelines.

The key distinction is governance. Rather than feeding raw operational data into large language models, enterprises can filter out sensitive information—such as personally identifiable data—before activating datasets. This reduces the risk of exposing regulated data during AI development and deployment.
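That governance step can be pictured as a field-level scrub before activation. The denylist and record shapes below are assumptions for illustration; Data Activate's actual classification is policy-driven, not a hard-coded list:

```python
# Illustrative PII-scrubbing step before activating backup data for AI use.
PII_FIELDS = {"ssn", "email", "phone", "full_name", "dob"}  # assumed denylist

def curate(record: dict) -> dict:
    """Drop sensitive fields so only governed data reaches the AI pipeline."""
    return {k: v for k, v in record.items() if k.lower() not in PII_FIELDS}

backup_rows = [
    {"order_id": 1001, "email": "a@example.com", "total": 59.90, "region": "EU"},
    {"order_id": 1002, "ssn": "000-00-0000", "total": 24.50, "region": "US"},
]
curated = [curate(r) for r in backup_rows]
# Downstream, curated rows would be written out as Parquet/Iceberg tables
# for use in training or retrieval pipelines.
print(curated)
```

The ordering is the key design choice: classification and filtering happen before the data ever touches a model, rather than relying on the model to withhold sensitive values later.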

AI Protect addresses a different challenge: managing and recovering AI-driven environments. As organizations deploy more autonomous agents, the risk of unintended system changes increases. AI Protect is designed to identify vulnerabilities, track agent behavior, and enable full-stack recovery—including data, applications, and configurations.

This capability reflects a shift in enterprise resilience strategies. Traditional backup and recovery solutions focused on restoring data. In agentic environments, recovery must extend to entire systems and workflows.
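The contrast between data-only restore and full-stack recovery can be sketched as capturing data and configuration together so they roll back as one unit. The classes and fields are illustrative, not Commvault's implementation:

```python
import copy

class RecoveryPoint:
    """Sketch of full-stack recovery: snapshot data and config together so
    an agent's unintended change can be rolled back atomically.
    Structure and names are illustrative assumptions."""

    def __init__(self):
        self._snapshots = []

    def capture(self, data: dict, config: dict) -> int:
        """Deep-copy the whole stack state; return a snapshot id."""
        self._snapshots.append(copy.deepcopy({"data": data, "config": config}))
        return len(self._snapshots) - 1

    def restore(self, snapshot_id: int) -> dict:
        """Return an independent copy of the captured state."""
        return copy.deepcopy(self._snapshots[snapshot_id])

rp = RecoveryPoint()
sid = rp.capture({"rows": 120}, {"retries": 3})
# An autonomous agent later misconfigures the system...
state = rp.restore(sid)   # ...and data AND configuration roll back together
print(state["config"]["retries"])  # 3
```

A data-only backup would restore the rows but leave the agent's broken configuration in place, which is exactly the gap the sketch highlights.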

AI Studio rounds out the offering by enabling organizations to build and manage AI agents. It includes a repository of prebuilt agents and tools for creating custom workflows, integrated via the Model Context Protocol (MCP).

Together, these tools aim to give enterprises end-to-end visibility and control over AI systems—from data preparation to agent orchestration and recovery.

Why Agentic AI Changes the Risk Equation

Agentic AI refers to systems where autonomous agents can make decisions, execute tasks, and modify environments with minimal human intervention. While this unlocks efficiency gains, it also introduces new risks.

Unlike traditional applications, AI agents can interact with multiple systems simultaneously, creating cascading effects when something goes wrong. A single misconfiguration can propagate across data pipelines, infrastructure, and applications.

This is one of the key barriers to adoption. According to Deloitte, 60% of AI leaders cite risk, compliance, and legacy system integration as major challenges in deploying agentic AI.

Commvault’s strategy directly targets these concerns by embedding governance and recovery into the AI lifecycle, rather than treating them as afterthoughts.

Competitive Landscape: Where Commvault Fits

The announcement places Commvault in an increasingly competitive space where data platforms, cloud providers, and cybersecurity vendors are all vying to define enterprise AI infrastructure.

Cloud ecosystems from companies like Google Cloud, Microsoft Azure, and Amazon Web Services already offer AI development and deployment tools. Meanwhile, platforms such as Salesforce and Adobe are integrating AI into customer data and marketing workflows.

Commvault differentiates itself by focusing on resilience—specifically, the ability to control, govern, and recover AI systems at scale. This positions the company closer to the intersection of data protection, cybersecurity, and AI operations (AIOps).

The introduction of AI Studio also signals a move into territory traditionally occupied by automation and low-code platforms, suggesting that Commvault is expanding beyond its core backup and recovery roots.

Enterprise Impact: From IT to Marketing and Data Teams

While the announcement is rooted in infrastructure, its implications extend across enterprise functions, including marketing, analytics, and customer experience.

AI-driven marketing platforms increasingly rely on large datasets and automated workflows. Ensuring that these systems operate on trusted, governed data is critical for compliance and performance. Tools like Data Activate could help marketing teams safely prepare datasets for personalization and predictive analytics.

Similarly, AI Protect’s full-stack recovery capabilities are relevant for organizations running real-time marketing campaigns or customer engagement platforms, where downtime or data corruption can have immediate business impact.

For enterprises building modern martech stacks, the ability to govern AI agents and recover systems quickly could become a key differentiator.

Market Momentum and Future Outlook

The push toward AI resilience comes as enterprises scale their AI initiatives. IDC predicts that by 2026, over 75% of organizations will operationalize AI as part of core business processes, increasing the need for governance and risk management.

Gartner has also emphasized the importance of AI trust, risk, and security management (TRiSM), identifying it as a critical component of enterprise AI strategies.

Commvault’s roadmap suggests a future where resilience is embedded into every layer of AI infrastructure. The company plans to continue advancing its AI capabilities, with a focus on enabling organizations to manage agent ecosystems across on-premises, SaaS, and hybrid environments.

The broader implication is clear: as AI becomes more autonomous, the ability to control and recover it will be just as important as the ability to build it.

Market Landscape

Enterprise AI is shifting from experimentation to operational scale, driving demand for governance, security, and resilience platforms. Vendors across cloud, cybersecurity, and data infrastructure are converging to address agentic AI risks, with resilience emerging as a critical differentiator in enterprise adoption.

Top Insights

  • Commvault introduces Data Activate, AI Protect, and AI Studio to help enterprises safely deploy, govern, and recover AI systems across hybrid and multi-cloud environments.
  • The platform addresses growing risks in agentic AI, where autonomous agents can modify systems and data, requiring full-stack visibility, governance, and recovery capabilities.
  • With 60% of AI leaders citing risk and compliance barriers, the solution targets one of the biggest challenges in scaling enterprise AI adoption.
  • Commvault positions AI resilience as a “system of record,” aligning with enterprise platforms like ERP and CRM in managing mission-critical data and operations.
  • The expansion into agent orchestration and workflow automation signals Commvault’s move beyond backup into broader AI infrastructure and enterprise platform territory.
