Interviews | Marketing Technologies | Marketing Technology Insights

Interview

Perfecting Personalized Promotions: The secret to achieving advanced personalization, and what’s holding retailers back

marketing 25 Feb 2026

Jeff Baskin, Chief Revenue Officer, Eagle Eye

Promotions are a key performance area for all retailers, and effectively implementing truly personalized offers at scale has been a goal for enterprise retailers for decades. Eagle Eye’s CRO Jeff Baskin shares his thoughts on how technology is making this goal more attainable, the legacy approaches that are holding retailers back, and what impacts they can expect from genuine one-to-one promotional engagement.  


1. What are the biggest inefficiencies you see in traditional promotional models today and why are so many retailers still relying on broad, mass-discount approaches? 


The biggest flaw with traditional mass discounting is that it often incentivizes customers who would have purchased anyway, while failing to influence behavior where it matters. This inefficiency is largely driven by legacy systems and the inertia of “what’s always worked.” Most retail infrastructure was built to support blanket offers or, at best, broadly segmented campaigns, not individualized promotions at scale. For years, that was effective, but as competition intensifies and technology improves, more retailers are embracing approaches that enable one-to-one engagement. After all, dynamically creating an offer for the exact brand of organic snacks the individual customer is most likely to respond to at the exact discount level most likely to prompt them to action is inherently more effective – and efficient – than placing a generic discount on similar items in the weekly circular.    


2. Why has true one-to-one promotional personalization been so difficult to achieve? 


Manual processes, unstructured data, and legacy platforms are the main roadblocks to true one-to-one personalization and are what keep retailers relying on broad-based approaches. There are also two issues of scale: first, the volume of data (customer data, SKUs, multiple sales channel data) retailers must manage has increased exponentially; and second, the ability to deploy personalized offers at enterprise level across millions of transactions remains out of reach. Few incumbent promotional systems support the real-time decisioning or on-the-fly offer creation necessary to deliver unique promotions to individual customers across a multi-store network, let alone a portfolio of banners.  


3. What has changed (technologically or operationally) that is now making individualized promotions possible at enterprise scale? 


From a tech perspective, AI and machine learning models can now analyze behavior and generate custom offers in milliseconds. Cloud infrastructure handles the computational demands of real-time adjudication across millions of shoppers or loyalty members. Operationally, retailers now have the customer data and digital touchpoints necessary to identify individual shoppers and deliver offers at checkout, online and in-app. These two components are equally important; even the most advanced AI will deliver irrelevant promotions without data-based insights into what individual customers care about, and all the customer data in the world is useless without the technology to make it actionable. 


4. Retailers often struggle to know whether an offer is actually influencing behavior or simply rewarding shoppers who would have purchased anyway. How can retailers start measuring true incremental impact? 


Attribution at the individual shopper level is essential. You need systems that track each customer's baseline purchasing patterns, then measure how behavior changes when specific offers are delivered. Closed-loop reporting that connects offer allocation, redemption, and actual sales lift reveals which promotions are working. Of course, this requires technology that follows the complete customer journey from offer to purchase, across platforms and channels, and incorporating both marketing-exclusive systems (like retail media networks) and traditionally analog interaction points (like physical stores). 
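The closed-loop idea can be made concrete with a minimal sketch. The following is an illustrative calculation, not Eagle Eye's actual methodology: it compares the spend of shoppers who received an offer against a matched holdout group that did not, which is one common way to estimate true incremental lift.

```python
from statistics import mean

def incremental_lift(treated_spend, holdout_spend):
    """Estimate incremental impact by comparing average spend of shoppers
    who received an offer (treated) against a matched holdout group.

    Both inputs are lists of per-shopper spend over the campaign window.
    """
    treated_avg = mean(treated_spend)
    holdout_avg = mean(holdout_spend)
    lift = treated_avg - holdout_avg  # incremental spend per shopper
    pct = lift / holdout_avg * 100 if holdout_avg else float("nan")
    return {"treated_avg": treated_avg, "holdout_avg": holdout_avg,
            "lift_per_shopper": lift, "lift_pct": pct}

# Illustrative numbers: offer recipients averaged $54; holdout averaged $50.
result = incremental_lift([52, 58, 50, 56], [48, 52, 49, 51])
```

In practice the holdout must be genuinely comparable to the treated group (randomized or matched on baseline behavior); otherwise the "lift" simply reflects who was targeted.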


5. Boston Consulting Group has estimated that shifting even a portion of mass promotion spend into personalized offers can dramatically improve ROI. What does that tell us about how much promotional budget is currently being misallocated? 


BCG estimates enterprise retailers can generate over $100 million in topline impact from scaling personalized offer execution. That suggests that retailers’ current promotional spending is underperforming, delivering little incremental value for the budget. It tells us that when retailers offer undifferentiated incentives to customers with existing purchase intent, or offer deeper discounts than necessary to change behavior, they’re essentially paying for sales they already had. 


6. As shoppers’ expectations for immediate value increase, how are promotions emerging as a new competitive battleground for retailers? 


Customers now expect offers that reflect their actual shopping behavior; generic discounts feel irrelevant. In this way, promotions have become a de facto indicator of whether retailers truly understand their customers. Those who do can deliver timely, meaningful incentives, build stronger engagement and capture more share of wallet. Those who don't risk spending promotional dollars with little measurable return. In a marketplace with more choice than ever, relevance is a clear competitive advantage. 


7. When promotions are personalized at the individual level, how does that change the way shoppers engage with offers and deliver value at the right moment? 


Personalization ensures that customers receive offers that feel relevant and appropriate rather than random or excessive. When offers align with consumers’ actual preferences, purchase patterns and contextual cues, engagement naturally increases. Delivered through digital channels or at checkout in the moment of decision, personalized incentives create a higher-value experience that encourages repeat behavior and strengthens ongoing loyalty. They also drive results for retailers; Eagle Eye’s AI-powered Personalized Challenges, which creates personalized, incremental goals for each shopper based on their purchase history, has generated a 7:1 ROI for high-profile retailers that have implemented the solution.
AI’s Double-Edged Sword: Countering AI-Enabled Cyberattacks by Deploying Defensive AI Strategies

artificial intelligence 24 Feb 2026

By Dr. David Utzke, CEO and CTO at MyKey Technologies
 
Organizations are at an inflection point where AI is accelerating cybercrime at scale, as experts warn that it broadens the attack surface, creates new vulnerabilities, and introduces complex governance and compliance challenges.

 Like all AI systems, those deployed in cyberattacks continuously learn and evolve, enabling them to adapt, evade detection, and develop attack patterns that traditional security tools may fail to recognize.

 Furthermore, AI agents capable of operating autonomously are significantly increasing the scalability and sophistication of cyberattacks and fraud operations.
 

(Q) What makes AI-powered cyberattacks fundamentally different from traditional automated cyber threats?

 

This is a great question and one that I am frequently asked. I find it helpful to begin by defining a cyberattack. In cybersecurity, a cyberattack is an intentional, malicious attempt by an individual or organization to breach a computer network or system. These attacks aim to compromise the CIA triad: the Confidentiality, Integrity, or Availability of digital assets and information. The NIST (National Institute of Standards and Technology) CSRC (Computer Security Resource Center) Glossary officially defines it as an attempt to gain unauthorized access to system services or resources, or to compromise system integrity and availability.

 

So, working from this common definition of cyberattacks, AI-powered cyberattacks differ from traditional, non-AI attacks primarily through increased speed and volume of attacks, considerable automation that lowers the barrier to entry, and intelligent, real-time adaptation to avoid detection. While traditional attacks rely on manual, static methods, AI technologies enable autonomous scanning, evasive polymorphic (i.e., occurring in several different forms) malware, and highly personalized social engineering at scale, transforming the threat landscape from weeks of planning to near-instantaneous execution.

 

Some of the core advancements in AI technology-facilitated cyberattacks include:
 
o   Hyper-personalized social engineering
o   Synthetic media (deepfakes)
o   Autonomous vulnerability discovery
o   LLMjacking
o   Prompt Injection
o   AI Model data poisoning
 

(Q) How can autonomous AI agents amplify the speed, sophistication, and scale of modern cybercrime?

 

It is important to articulate that the term “autonomous AI agents technology” is considered partially accurate but often hyped, representing an emerging capability rather than a fully realized, foolproof technology as of the time of this interview. I have to laugh every time I see the ServiceNow ad on streaming. In the dialogue, when AI agents are brought up, it is clarified that they are not just “secret agents,” but rather “autonomous minions that you control” to handle routine, repetitive tasks. How can the minion (def.: underling of a powerful person) at the same time be autonomous? Get it? The hype!
 

So, here is another opportunity to define another frequently misunderstood term from the perspective of AI architecture. An “AI agent” is most commonly an LLM (Large Language Model) that can take actions to achieve specific, high-level goals with minimal human oversight – a step up from an AI bot. Unlike an AI bot, AI agents can break down complex tasks, use tools, and learn from experience. 
 

An AI agent is a coded system that can, to a limited extent, set its own sub-goals, plan, and take actions to achieve a high-level objective with little to no human intervention. However, most, if not all, current “autonomous” agents require human-in-the-loop for oversight (aka Human Agent), especially for high-stakes decisions, making them more “agentic” than fully autonomous.
 

The term “autonomous AI agents” is often used as a marketing buzzword that obscures the actual technology behind it. The core AI technologies involved in cyberattacks include:
 
o   ML (Machine Learning) and DL (Deep Learning)
o   GPTs (Generative Pre-trained Transformers) and LLMs (e.g., WormGPT and FraudGPT)
o   GANs (Generative Adversarial Networks) and NNs (Neural Networks)
o   NLP (Natural Language Processing): Voice-to-Text and Text-to-Voice
 

Given the advancements in ML, specifically DL, AI models can understand complex, nuanced language patterns. NLP is the driving force underpinning LLMs, enabling more accurate, context-aware, and human-like interactions to enact more sophisticated cyberattacks against cybersecurity frameworks, even if an organization deploys AI-enhanced cybersecurity systems.
 

It is for this reason that it is crucial for cybersecurity professionals to understand AI model architecture rather than treating AI as an impenetrable “singularity” or a magical black box. As AI models become deeply integrated into IT infrastructure, understanding the specific mechanisms, data pipelines, potential failure points of these systems, and how to audit AI models for vulnerabilities is essential for effective, proactive defense. Viewing AI as a “singularity,” or as a mysterious, all-knowing entity, leaves organizations vulnerable to unique, AI-based cyberattack threats.  
 

(Q) How can organizations detect AI-generated attacks that are specifically designed to evade conventional security tools?

 

When I teach grad students and CPE sessions on the topic of cybersecurity, I emphasize that the first necessary step for an organization is to have a well-established AI model and data governance framework. Implementing technology governance frameworks is no longer just a compliance task; it is a foundational strategic requirement for any organization. Having AI model and data governance frameworks is critical for organizations to ensure AI initiatives are reliable, ethical, secure, and compliant with emerging regulations. Without a governance framework, organizations face significant risks beyond cyberattacks that include biased models, inaccurate or harmful outputs, as well as suffering from potential reputational damage and legal penalties.
 

With the above noted, cybersecurity professionals can audit and detect AI-based cyberattacks, which often evade traditional defense mechanisms. But it requires moving from point-in-time, snapshot, random, or set periodic audits to a continuous monitoring approach. Continuous monitoring is crucial because it provides real-time, “always-on” visibility, allowing organizations to detect and remediate risks instantly rather than months later. It reduces security vulnerabilities and ensures continuous regulatory compliance (e.g., DORA, PCI DSS 4.0). 
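To illustrate the shift from periodic audits to continuous monitoring, here is a toy sketch (the metric, window size, and threshold are all hypothetical assumptions, not a production design): an always-on monitor that flags a security metric, such as a failed-login rate, the moment it deviates from its rolling baseline.

```python
from collections import deque

class ContinuousMonitor:
    """Minimal sketch of 'always-on' monitoring: keep a rolling window of a
    security metric and flag readings that spike above the recent baseline,
    instead of waiting for a periodic audit to surface them."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold  # allowed multiple of the baseline mean

    def observe(self, value):
        # Baseline is computed from readings seen before this one.
        baseline = sum(self.window) / len(self.window) if self.window else None
        self.window.append(value)
        if baseline is not None and baseline > 0 and value > self.threshold * baseline:
            return "ALERT"  # remediate now, not months later
        return "OK"

monitor = ContinuousMonitor()
for v in [2, 3, 2, 3, 2]:
    monitor.observe(v)        # normal traffic establishes the baseline
status = monitor.observe(30)  # a spike well above the baseline
```

A real deployment would feed this from log pipelines and pair alerts with an escalation path, but the structural point stands: evaluation happens on every reading, not once a quarter.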
 

(Q) In what ways can companies move from reactive incident response to predictive, AI-driven threat prevention?
 

To protect against AI-based cyberattacks, organizations need to adopt a ZTA (zero-trust architecture) and a defense-in-depth strategy that combines AI-driven security tools, robust AI governance, and enhanced human training. Key measures include deploying anomaly detection, behavioral biometrics, and automated AI-based security tools to counter rapid, automated attacks, while enforcing strict data validation to prevent data poisoning.
 
·       Defense-in-depth is a comprehensive cybersecurity strategy that layers multiple, heterogeneous security controls—covering people, technology, and operations—to protect assets, ensuring that if one defense fails, others contain the threat. Inspired by military, castle-style tactics (i.e., reinforced architecture), it aims to increase attacker complexity and prevent single points of failure.
 
·       ZTA is a cybersecurity framework based on “never trust, always verify,” treating all network traffic as hostile, regardless of origin. It removes implicit trust, focusing on strict IAM (Identity & Access Management) verification, least-privilege access, and microsegmentation (divides networks into small, isolated, and granular security zones) to contain breaches. Key components include continuous monitoring, MFA, and data encryption to secure distributed, modern, cloud-based environments.
 

(Q) How can resilient risk-based AI governance frameworks help organizations rebuild trust and accountability as AI-driven threats continue to escalate?
 

As AI-driven cyber threats, such as adversarial attacks, data poisoning, and model BS (imprecisely called hallucination), escalate, the need for governance frameworks to provide the necessary guardrails to ensure AI technologies are reliable, ethical, and secure becomes even more urgent.
 

Key ways that governance frameworks rebuild trust and accountability include:
 
·   Establishing Proactive Risk Management
·   Ensuring Transparency and Explainability
·   Enforcing Clear Accountability
·   Implementing Real-Time Monitoring and Control
·   Aligning with Ethical Standards
 

In addition, well-devised governance frameworks counteract escalating AI-driven threats by offering a structured approach to resilience: red-teaming and adversarial testing to uncover security gaps before deployment, Data Security Posture Management (DSPM) to protect sensitive data used in AI workloads, and continuous monitoring to identify vulnerabilities and potential threats in real time. 
 

Ultimately, these frameworks turn cyberattacks into a manageable risk governed by compliant processes, moving organizations from a posture of “control” to one of “confidence.”
 

As a final note, this interview is given with an eye on the near-term future of cybercrime as AI technologies converge with quantum computing. MyKey Technologies is researching the near-term (2026–2030) integration of Artificial Intelligence (AI) technologies with emerging quantum computing capabilities, which is set to fundamentally reshape the threat landscape, turning cybercrime into a highly automated, “agentic” ecosystem. While fully functional quantum attacks on encryption are anticipated closer to the 2030s, the immediate threat lies in the combination of AI-powered reconnaissance with the “harvest now, decrypt later” (HNDL) strategy.
 

So, balancing the immediate, “here-and-now” threat responses with attention given to near-term strategic planning is a critical, yet challenging endeavor for organizations. However, failing to do so can lead to a “whack-a-mole” cycle of endless crisis management. Effective approaches involve integrating short-term actions into a strategic vision, an approach known as strategic agility. 
Cancer Awareness Month Can Highlight Real Community Support

marketing 19 Feb 2026

Each February, the country turns pink. Landmarks glow, national campaigns launch, and stories of survival and resilience fill television screens and social feeds. The scale of support is both inspiring and necessary, reminding millions that they are not alone in the fight against cancer. Yet beyond the national spotlight, something quieter is happening.
 
In hospital waiting rooms, volunteers sit beside patients before chemotherapy begins. In community centers, local nonprofits coordinate rides so no one misses treatment. In church basements and neighborhood gathering spaces, families come together for support groups because healing is not only physical, but emotional. In kitchens across America, neighbors prepare meals for someone too exhausted to cook.
 
These moments rarely make headlines, but they form the backbone of the fight.
 
Cancer is deeply personal. It touches families street by street and house by house, and the organizations responding most immediately are often local and deeply rooted in the communities they serve. They know the names behind the diagnoses. They understand the practical barriers patients face. And they continue showing up long after awareness campaigns fade from view.
 
Cancer Awareness Month offers a powerful national platform. The opportunity before us is to extend that platform to the people doing this work closest to home.
 
Imagine if the storytelling strength that powers major national campaigns also illuminated the hospital down the road, the screening event at the high school gym, or the local survivor who turned personal hardship into community action. When people see their own community reflected back to them, something shifts. Engagement becomes personal. Support becomes immediate. Action feels tangible.
 
Today, we have the technology to elevate local organizations with the same creative quality and reach once reserved for large national causes. Through modern media channels, community-based nonprofits can share their stories at scale, connecting households to resources and reminding viewers that help is not abstract. It is nearby.
 
National momentum and local action do not compete with one another. They reinforce each other. Broad awareness drives conversation, while local visibility drives participation. Together, they create a stronger and more responsive support system for patients and families.
 
Awareness is most powerful when it becomes tangible, when it connects a household to a place they recognize, a service they can access, or a story they understand. It is measured not only in dollars raised or campaigns launched, but in rides provided, meals delivered, appointments kept, and hands held during uncertain moments.
 
The fight against cancer lives in communities, carried forward by neighbors, volunteers, caregivers, and local leaders who work tirelessly, often without recognition. This Cancer Awareness Month, as we honor the national movement, let us also make space to elevate the people doing the work closest to home. Their impact is real, immediate, and deeply human. They deserve to be seen.
Why longevity and adaptability after deploying agentic AI will define enterprise success in 2026

artificial intelligence 19 Feb 2026

Spokesperson: Adam Beavis, Country Manager Australia and New Zealand, Databricks.

Q1: Agentic AI has moved quickly from experimentation to deployment. What will separate organisations that succeed from those that fall behind after rollout? 


A: What separates organisations that win with agentic AI after rollout from those that stall is less about the tech — and more about the operating model and discipline. The key differentiators tend to be:


 


1. Data readiness: High performers invest heavily in clean, permissioned, continuously improving data and instrument agents with feedback loops. Without this, agent performance degrades quickly after initial rollout.


2. Strong guardrails and governance by design: Winning organisations bake in controls, auditability, escalation paths, and human-in-the-loop thresholds from day one. Those that fall behind treat governance as an afterthought—leading to trust issues, halted deployments, or regulatory friction.


3. Clear business ownership, not just tech ownership: Successful firms tie agents to specific business outcomes (cost, speed, risk reduction, revenue uplift) with accountable business unit owners. 

 

4. Cultural and behavioural change: The biggest gap is human, not technical. Leaders who succeed redesign roles around human–agent collaboration, and retrain employees to integrate AI into their daily work and oversee autonomous systems.
 


Q2: Many organisations feel they have “done” AI once it is deployed. What often goes wrong for organisations beyond that point?


A: The biggest misconception is treating deployment as a box-ticking exercise. Models that are trained on historical data can drift as inputs change, and without consistent and continuous evaluation, problems often surface too late. 

 

The solution is shifting from one-off checks to continuous evaluation in production. Just like humans need performance reviews, so do AI systems. Enterprises need systems that continuously measure performance against real tasks, retrain or adjust agents, and balance quality against cost. Many early deployments struggle because they were not designed with long-term operation in mind. 
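The “performance review” analogy can be sketched in a few lines. This illustrative drift check (the tolerance and data are hypothetical, not a Databricks API) scores an agent's recent production outputs against known-good answers and flags when accuracy falls below the level measured at deployment:

```python
def needs_retraining(baseline_accuracy, recent_results, tolerance=0.05):
    """Sketch of continuous evaluation: compare recent production accuracy
    against the accuracy measured at deployment, and flag drift when it
    falls more than `tolerance` below that baseline.

    `recent_results` is a list of (predicted, expected) pairs drawn from
    a labelled sample of real production tasks.
    """
    correct = sum(1 for pred, exp in recent_results if pred == exp)
    recent_accuracy = correct / len(recent_results)
    drifted = recent_accuracy < baseline_accuracy - tolerance
    return drifted, recent_accuracy

# Agent deployed at 92% accuracy; a recent sample scores 7 of 10 correct.
drifted, acc = needs_retraining(0.92, [("a", "a")] * 7 + [("a", "b")] * 3)
```

Real systems would also track cost per task and latency alongside accuracy, since the article's point is balancing quality against cost, not quality alone.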

 

Q3: Why is the transition from single agents to multi-agent orchestration important for enterprises?


A: Enterprise work rarely happens in a single step. A realistic workflow often includes retrieving from multiple data sources and validating data against business rules, compliance checks, and a final decision with explainability requirements. Expecting a single agent to handle all these tasks reliably and efficiently is unrealistic. 


In 2026 we will see broader adoption of multi-agent orchestration, where specialised agents handle distinct tasks and a supervising agent coordinates sequences, mirroring how human teams operate. While this delivers better performance, it also allows for improved governance. Each agent can be monitored and evaluated on its specific responsibility, modifications can be isolated, and the overall system remains transparent, auditable, and easier to troubleshoot.
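A minimal sketch of the supervising-agent pattern described above. The specialised agents here are hypothetical stubs standing in for real LLM-backed components; the point is the structure: each agent owns one responsibility, and the supervisor sequences them while keeping an audit trail.

```python
# Hypothetical specialised agents; in a real system each would wrap an
# LLM call, a retrieval service, or a rules engine.
def retrieval_agent(task):
    return {"task": task, "docs": f"records for {task}"}

def compliance_agent(state):
    state["compliant"] = True  # stub: a real agent would run rule checks
    return state

def decision_agent(state):
    state["decision"] = "approve" if state["compliant"] else "escalate"
    return state

def supervisor(task):
    """Supervising agent: route the task through specialised agents in
    sequence, logging each step so the pipeline stays auditable."""
    audit_log = []
    state = retrieval_agent(task)
    audit_log.append("retrieval")
    state = compliance_agent(state)
    audit_log.append("compliance")
    state = decision_agent(state)
    audit_log.append("decision")
    return state, audit_log

state, log = supervisor("claim #1234")
```

Because each stage is isolated, one agent can be swapped or re-evaluated without touching the others, which is exactly the governance benefit the answer describes.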

 

Q4: Many enterprises struggle to get AI agents and applications into production. What is causing the bottleneck and how is Databricks addressing it?


A: The bottleneck is not building a demo, it is making agents reliable in the real enterprise. In production, agents must consistently reason over complex, proprietary data, operate with guardrails and integrate with operational systems. 


General knowledge of AI is becoming a commodity, but AI that truly understands the proprietary data inside an enterprise remains elusive. Many AI agents fail in enterprise environments because they prioritise ease of use over accuracy, leading to inconsistent results or behaviour organisations cannot trust. 

 

Databricks is addressing this across a number of growth areas:

  1. The rise of AI-powered coding is changing how software is built. As developers create apps via natural language, those apps automatically need databases and agent backends. Databricks is seeing this first-hand, with over 80% of databases launched on Databricks now being created by AI agents rather than humans. We are enabling developers to rapidly build applications that run on Lakebase and are powered by agents, all within a unified and governed platform. 
  2. Enterprises need a modern transactional layer for AI-native apps. Traditional transactional databases have changed little for decades, so we launched Lakebase, which simplifies operational data workflows and is optimised for AI agents operating at machine speed.
  3. Agent Bricks then helps organisations build and deploy agents that can securely work within their own data, where most of the business value sits. It helps organisations build domain-specific agents that reason over their data, track quality with task-specific benchmarks and balance performance with cost over time. The aim is to make agent quality measurable and improvable in production, not assumed at deployment.

Together, this trifecta removes the common production blockers by combining an operational database layer, an agent-building platform that works with enterprise data and an application layer that helps ship faster, with reliability and governance built in. 
 

Q5: In Australia and New Zealand, how are organisations specifically adopting AI applications and how does that differ from earlier phases?


A: Across Australia and New Zealand, we’re seeing a pivot from general-purpose experimentation to domain-specific AI applications that are grounded in trusted enterprise data, with stronger attention to governance and sovereignty. As organisations shift from pilots into production to deliver real business outcomes, they are embedding AI into real workflows, from customer support and supply chain to finance and operations. 


For example, Suncorp needed to scale AI across the organisation to improve claims accuracy, reduce operational risk and support more automated digital customer experiences. Manual processes were creating additional load for staff, and employees often lacked instant access to complex policy and claims information when decisions were required. By building, deploying and scaling domain-specific AI directly into claims workflows with Databricks, the company has achieved 99% accuracy and saved more than 15,000 hours of manual workload.

Additionally, Atlassian uses Databricks AI/BI Genie to power on-demand insights in plain English through Atlassian Rovo, its AI assistant. This allows teams across the business to ask complex questions of their data and receive trusted, contextual answers directly within their existing workflows. 

These examples reflect a broader trend that we’re seeing across the region in 2026, where value increasingly comes from AI applications designed around the business, grounded in governed data, and operationalised end to end. 

Q6: How are AI applications, including AI agents, playing out across key verticals? What advice would you give to enterprise leaders in these sectors? 


A: Whether you lead in finance, pharma, media, CPG, or tech, the questions are converging: how do we use AI to improve business productivity? How do we balance industry regulation with AI innovation? How do we control costs without slowing adoption? The leaders who solve these challenges today will build faster, more resilient operations and gain a competitive edge. 
 

In the public sector, AI is connecting data across agencies to reduce administrative burden and support decisions from benefits assessment to emergency response. Success depends on strong governance, clear lineage and transparency so outputs can be trusted and audited. 
 

In marketing, AI applications are moving beyond content generation to orchestrating campaigns, analysing performance data and adapting strategies in near real time. Data Intelligence for Marketing allows organisations to centralise customer and campaign data, apply AI to drive more accurate decisions, and use AI agents to automate tasks at a scale human teams could not reach alone.
 

In cybersecurity, multi-agent systems are proving effective at validating threats and accelerating response times while keeping humans in the loop. Databricks’ Data Intelligence for Cybersecurity powers SecOps at scale with Agent Bricks by automating triage, enrichment, response and investigation, reducing alert fatigue and costs while boosting analyst productivity.
 

My advice for leaders is simple: 


●      Invest in your data and AI foundations with high-quality data and governance


●      Have clear business ownership and outcomes you want AI to accomplish


●      Scale what is already working. 
The Future of Digital Identity: How AI is Enhancing Authentication and Fraud Prevention

cybersecurity 18 Feb 2026

Whether you sell restricted products, provide banking services, or offer verification services, digital identities are at the heart of all of them. Everyone requires a digital identity that is secure and easily verifiable across systems. In recent years, digital identity fraud has been on the rise, and it will only increase as attackers leverage the latest technology and become smarter. 
 
The best way to secure your digital identity verification and creation process is to leverage AI in the process. In this article, we will look at how AI is enhancing authentication and fraud prevention as well as the future of digital identity. But before that, let's understand what digital identity is.

What is Digital Identity?

As the name suggests, a digital identity is a collection of information that can help verify the identity of anyone in the digital world. A digital identity will contain a user's personal information, biometrics, and other identification data that can help uniquely identify that user online. 
 
Now that we know what digital identities are, it is the right time to understand the issues with traditional verification systems and why we need AI-enabled digital identity in the future. 

Issues with Traditional Verification Systems

Weak Security

In traditional verification systems, people often resort to short, easy-to-guess passwords, which provide weak security. This makes the system vulnerable and easier for attackers to hack. 
 

Poor User Experience

Traditional verification systems have a lot of friction points and often deliver a poor user experience: complex verification workflows frustrate users. 
 

Centralized Data Storage

Traditional verification systems rely on centralized data storage, which can easily become a target for hackers. Once a hacker gets into the system, they can access all the verification data and misuse it, putting every user at risk. 
 
As we have discussed, traditional systems are neither safe nor user-friendly, so this is the perfect time to understand how digital identity is evolving with AI and what the future of identity verification looks like. 

How AI Enhances Digital Identity Verification

Biometric Authentication

AI models can support biometric authentication through facial recognition and liveness detection. These models can verify from a video feed that a user is physically present, and grant access only when the biometrics match and the liveness check passes. This lets you build systems that are highly secure and accessible only to real users.
 

Behavioral Analytics

Every real user behaves differently, and this characteristic is valuable when building digital identity verification. AI models can be trained to analyze typing patterns, mouse movements, device data, and other signals for any user. With robust behavioral analytics models, the system can also verify user behavior and grant access only when it matches a real user's profile.
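As a rough illustration of the idea (the feature names, baseline values, and scoring formula here are hypothetical, not taken from any real product), a behavioral check can be sketched as a similarity score between a stored per-user baseline and the current session:

```python
# Hypothetical sketch: comparing a session's behavioral features against a
# stored per-user baseline. Feature names and values are illustrative only.

def behavior_score(baseline: dict, observed: dict) -> float:
    """Return a 0..1 similarity score between a user's stored behavioral
    baseline and the behavior observed in the current session."""
    diffs = []
    for feature, expected in baseline.items():
        actual = observed.get(feature, 0.0)
        # Normalized absolute difference, capped at 1.0 per feature
        diffs.append(min(abs(actual - expected) / max(expected, 1e-9), 1.0))
    return 1.0 - sum(diffs) / len(diffs)

baseline = {"typing_ms_per_key": 180.0, "mouse_speed_px_s": 420.0}
genuine  = {"typing_ms_per_key": 175.0, "mouse_speed_px_s": 400.0}
imposter = {"typing_ms_per_key": 60.0,  "mouse_speed_px_s": 1500.0}

# The genuine session scores close to 1.0; the imposter scores far lower.
assert behavior_score(baseline, genuine) > behavior_score(baseline, imposter)
```

A production system would learn these baselines from many sessions and use a trained model rather than a hand-written distance, but the decision shape is the same: score the session, then gate access on the score.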
 

Continuous Verification

Traditional systems verify identity once and then trust the user, but modern threats demand better. Today's attackers are sophisticated, and combating them requires systems capable of continuous verification. AI models can continuously analyze user behavior and monitor actions on the platform, matching them against a behavioral model to check whether the session still belongs to a legitimate user.
 

Risk-based Adaptive Authentication

AI models can identify risky situations and make the authentication process stricter and more adaptive when they arise. You can collect contextual data such as device, location, time, and user behavior to categorize how risky an identity verification attempt is. If the model judges an attempt risky, it can adapt by performing deeper verification than usual, ensuring only safe users can access your platform.
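The adaptive part can be sketched as a small decision rule (the signals, weights, and thresholds below are invented for illustration; a real system would learn them from data):

```python
# Hypothetical sketch of risk-based adaptive authentication. The contextual
# signals, weights, and thresholds below are illustrative only.

def risk_score(context: dict) -> int:
    """Sum weighted risk points from contextual signals."""
    weights = {
        "new_device": 2,        # never-before-seen device fingerprint
        "unusual_location": 2,  # login far from the user's usual geography
        "odd_hour": 1,          # outside the user's normal activity window
        "behavior_mismatch": 3, # typing/mouse patterns don't fit the profile
    }
    return sum(w for signal, w in weights.items() if context.get(signal))

def auth_decision(context: dict) -> str:
    """Map the risk score to an adaptive verification path."""
    score = risk_score(context)
    if score >= 4:
        return "deny_or_manual_review"
    if score >= 2:
        return "step_up"  # e.g. require an extra biometric or OTP check
    return "allow"
```

A familiar device at a familiar hour passes straight through (`allow`), a single risk signal such as a new device triggers `step_up`, and several signals together escalate to manual review.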
 
AI enhances authentication in modern systems in many different ways, and it can also protect those systems from fraud. So let's look at how AI prevents fraud in modern digital identity verification systems.

How AI Prevents Fraud

Real-time Anomaly Detection

As you collect and process behavioral and user data on your platform, you can also develop machine learning models that excel at identifying patterns and highlighting suspicious behavior through real-time anomaly detection. These models can quickly surface anomalous activity so you can take remediation actions to safeguard your platform.
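To make the mechanism concrete, here is a deliberately minimal streaming detector based on a running z-score (real deployments would use trained ML models; this only illustrates the "flag values that deviate from history" idea):

```python
# Minimal sketch of streaming anomaly detection using a running z-score
# (Welford's online algorithm). Illustrative only, not a production model.
import math

class RunningAnomalyDetector:
    """Flags values that sit far from the running mean of prior observations."""
    def __init__(self, threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0           # sum of squared deviations
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        """Return True if x is anomalous relative to history, then update."""
        anomalous = False
        if self.n >= 10:        # need some history before judging
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = RunningAnomalyDetector()
for amount in [20, 22, 19, 21, 20, 23, 18, 22, 21, 20, 19, 21]:
    det.observe(amount)          # normal transaction amounts
flagged = det.observe(5000)      # a wildly out-of-pattern amount -> True
```

The same score-then-flag loop generalizes: swap the z-score for a trained model's anomaly score and the remediation action for a real-time alert.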
 

Pattern Recognition

Every fraudulent transaction follows a pattern, and given a large enough dataset, machine learning and AI models can learn to recognize those patterns. The model can then look for them in live transactions and block them, or route them through stricter processes, preventing fraud and keeping your platform safer for everyone.
 

Synthetic Fraud Prevention

Synthetic fraud is rising rapidly, with attackers creating fake digital identities to carry it out. While such fraud is hard to detect and prevent manually, it is not beyond AI models. They can prevent synthetic fraud by verifying data across multiple signals and producing a confidence score for each transaction; if the score falls below a threshold, the action is blocked to keep the platform safe.
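A toy version of such a cross-signal confidence score might look like the following (the signal names, weights, and the 0.7 threshold are all invented for illustration):

```python
# Hypothetical sketch: combining independent identity signals into a single
# confidence score with a blocking threshold. Weights are illustrative only.

SIGNAL_WEIGHTS = {
    "document_verified": 0.35,    # government ID checked out
    "phone_carrier_match": 0.20,  # phone number is tied to the claimed name
    "address_history": 0.25,      # address appears in prior records
    "email_age_ok": 0.20,         # email address isn't freshly created
}

def identity_confidence(signals: dict) -> float:
    """Weighted share of verification signals that passed (0..1)."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def allow_onboarding(signals: dict, threshold: float = 0.7) -> bool:
    return identity_confidence(signals) >= threshold

# A synthetic identity often passes a few checks but fails the cross-signal set:
synthetic = {"document_verified": True, "email_age_ok": True}   # score 0.55
legit = {name: True for name in SIGNAL_WEIGHTS}                 # score 1.0
```

The point of scoring across *multiple* signals is that a fabricated identity can usually fake one or two of them, but rarely all of them at once.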
 
Having covered how AI is enhancing authentication and preventing fraud on modern platforms, let's also look at the future of digital identity.

Future of Digital Identity

Passwordless Authentication

Passwordless authentication will become mainstream, replacing the need to enter a password for every verification. Instead, verification can be done through push notifications and biometric methods, which are much faster.
 

Decentralized Identity 

As attackers grow smarter and target centralized identity stores, platforms and users will start leveraging decentralized identity storage systems that store identity data securely and limit the damage from identity theft even when a system is compromised.
 

AI-powered Identity Wallets

AI-powered identity wallets will also be on the rise, helping users manage their identity data better and enforce stricter security and usage rules around it.
 
If you are unsure about the future of digital identity as AI advances, know this: AI models will make identity verification systems smarter, faster, and far more secure than traditional methods. By moving first, you can create a better experience for your users and beat your competitors to the latest AI-powered identity verification solutions.

The Content Bottleneck Has Shifted from Production to Operations

marketing 17 Feb 2026

Q1: This is now the fourth year of Canto’s State of Digital Content report. What stood out to you most in this year’s findings compared to previous years?


Just how tangible the cost of fragmentation in brands’ content and creative operations has become. We’ve been seeing teams acknowledging the problem, saying “yeah, our digital assets are scattered, our workflows are messy.” But this year the data puts real business consequences behind that. The survey found 44% of folks reporting employee burnout tied directly to poor asset management. Wasted budget, duplicated work, missed revenue, and delayed launches are no longer hypothetical risks of fragmentation, they are measurable business outcomes.


The other thing that struck me is product content and information as a major theme. Previous reports focused heavily on creative assets, but this year we saw just how much brands are struggling to manage product information alongside their digital assets. 88% of teams can’t keep product content consistent across channels, and more than half are still managing product data completely separately from the assets used to actually market and sell those products. That disconnect is creating real friction, especially as e-commerce demands keep growing.

 


Q2: The report found that 82% of content teams saw volume increase in the past year, with three in four attributing at least some of that growth to AI. How are teams actually keeping up with that pace, and where are they falling short?


AI is driving the content volume up, but it’s also the thing helping teams manage the surge. 75% of content professionals told us AI has increased their output, and 30% said that increase was significant. At the same time, about half of teams are already using AI to accelerate creation, tagging, and organization. So the same technology pushing volume higher is also becoming the relief valve.


A critical part of the data, though, shows how teams are falling short on the operational side. The volume is growing but the underlying systems and workflows haven’t yet caught up. Only 43% of teams describe their digital content workflows as standardized and automated. The rest are still dealing with manual processes, inconsistencies, and fragmented tools that slow everything down. You can produce content faster with AI, but if your team can’t find the right asset, doesn’t know which version is current, or has to manually push updates across channels, you’re just creating more chaos. The content bottleneck has shifted from production to operations.


Q3: One of the more striking data points is that teams with full connectivity between their digital assets and product information are more than four times as likely to report significant ROI improvements. Why is that gap so wide?
 


That 4x multiplier surprised even us, but when you think about what connectivity actually enables, it makes sense. When your product information and digital assets live in the same environment, you eliminate an enormous amount of duplicated effort. Teams aren’t hunting across systems to match the right image with the right product description, nor manually updating the same information in five different places every time something changes.


The data backs this up across the board. Teams with fully connected systems told us that tasks like locating assets, maintaining brand consistency, and collaborating across teams were “extremely easy” at rates two to three times higher than everyone else. 67% of fully connected teams said locating assets was extremely easy, compared to 21% of those without full integration. That speed and confidence compound across every campaign, every channel update, every product launch. Just as importantly, it all shows up directly in revenue. 65% of teams that can make real-time content updates reported significant revenue increases, versus just 16% of those with slower timelines.


Q4: Only 35% of respondents said they feel very confident that employees are using the most current, approved version of brand assets. What’s driving that confidence gap, and what does it cost organizations?
 


That number reflects the reality of how most brands manage their content today. When assets are spread across cloud drives, local desktops, email threads, and multiple platforms, it becomes almost impossible to guarantee that everyone is working from the same source of truth. 62% of teams are using cloud file storage, 44% have content on local servers, and 41% still rely on individual hard drives. That’s a lot of places where an outdated logo or last quarter’s product spec can be sitting around, ready to get used by mistake.


The cost shows up in a few ways. 30% of respondents reported publishing off-brand or inconsistent content as a direct consequence of poor asset management. You also see it in the 30% who flagged legal or compliance risk. When you’re operating in regulated industries or across global markets, using the wrong version of an asset can have real legal and financial consequences.


Beyond those specific risks, there’s the broader drag on team productivity. People spend time second-guessing whether they have the right file, chasing down approvals, or recreating something that already exists somewhere in the organization.

 

Q5: The data shows that 51% of teams still rely on spreadsheets to manage product information, and 56% manage product content separately from digital assets. What needs to change operationally before those numbers start to shift?


I think a lot of brands have grown into this situation organically. Spreadsheets are familiar, they’re flexible, and when you only have a handful of products or channels, they work fine. But when you’re managing hundreds or thousands of SKUs across e-commerce platforms, retail partners, marketplaces, and your own website, spreadsheets stop scaling. You end up with version control nightmares, no clear ownership, and inconsistencies that erode customer trust.
 

The shift really starts with recognizing that product content and creative assets are two sides of the same coin. A product image, its description, its specifications, its pricing…those all need to move together when something changes. When 78% of teams are using two or more separate solutions just to manage product content, every update becomes a multi-system coordination exercise. Teams told us the improvements that would deliver the most benefit include making product data easier to access across teams, eliminating duplicate or outdated information, and managing product data alongside creative assets. Those are all fundamentally about bringing things together rather than continuing to manage them in silos.
 


Q6: AI adoption for content is nearly universal at 96%, but only 30% of teams describe their use of AI as widespread. What’s holding back deeper adoption?

The adoption curve is real, and I think it’s actually healthy that most teams are taking a measured approach. 47% are using AI in limited ways, and 16% are planning to adopt. That’s a lot of momentum. But moving from experimentation to deep integration requires trust, infrastructure, and governance, and those things take time.

On the trust side, the news is actually encouraging. 81% of content professionals expressed confidence in AI’s accuracy for tagging and organizing assets, and that confidence gets even stronger with hands-on experience. Among teams using AI most extensively, 77% reported high confidence. The hesitation seems to be more about the operational layer. When we asked about top worries over the next two years, integrating new technologies like AI tied for the number one concern alongside security and access control, both at 30%. Teams want to adopt AI more broadly, but they want to do it in a way that doesn’t compromise brand consistency or introduce new security risks. Brands seeing the biggest returns are embedding AI into a centralized, governed environment rather than layering it on top of fragmented systems.
 


Q7: Teams with advanced, standardized workflows were dramatically more likely to see significant ROI gains - 48% versus 0% among teams with ad hoc processes. What does workflow maturity actually look like in practice for content and creative teams?

 

That 48% to 0% gap is one of the most compelling findings in the report, and it really underscores that operational maturity isn’t merely a nice-to-have.


In practice, workflow maturity starts with processes for creating, reviewing, approving, and distributing content that are documented and consistent across teams rather than reinvented every time. On top of that, the repetitive work (like tagging, metadata generation, format conversions, routing assets for approval) is automated instead of eating up people’s time. Additionally, the tools are connected so that updates flow through the system rather than requiring someone to manually copy information from one platform to another.


The teams doing this well are also much more likely to have invested in AI, analytics, and template-driven approaches. When we looked at what high-ROI organizations are doing differently, they’re significantly more likely to be introducing automation to reduce manual work, expanding template and modular content approaches, and measuring content performance to refine their processes.

 

Q8: The report highlights security, AI integration, and brand consistency as the top concerns about managing content at scale over the next two years. How should content leaders be prioritizing those?

I’d say these concerns are actually more interconnected than they might appear at first. Security and access control, AI integration, and brand consistency all improve when you centralize how content is managed and governed. If your assets are scattered across disconnected systems with inconsistent permissions, you have a security problem, a brand consistency problem, and a much harder time rolling out AI in a controlled way.

The practical starting point is getting your foundation right. That means establishing a centralized, structured environment where assets are governed, versioned, and accessible to the right people with the right permissions. Once that’s in place, you can layer in AI capabilities, things like smart tagging, visual search, and content recommendations, with confidence that the AI is working within guardrails rather than amplifying existing chaos. And brand consistency becomes much more manageable when there’s one source of truth rather than dozens of repositories where outdated files can linger.

 

I think content leaders should also be paying attention to the product content dimension. Managing storage costs at 26% was one of the top concerns, and that’s only going to grow as content volume keeps climbing. The teams managing costs most effectively can reduce duplication, improve reuse, and avoid recreating assets that already exist somewhere in the organization.

 

Q9: What’s one thing you’d want a marketing or content operations leader to take away from this year’s report and act on immediately?
 

Audit your fragmentation! Take an honest look at how many systems, folders, drives, and platforms your team is using to manage content and product information today. The data is really clear that fragmentation is the single biggest drag on performance. Teams using two or more systems to manage digital assets are significantly more likely to experience delays, missed revenue, burnout, and wasted budget compared to those working from a unified approach.


You don’t have to solve everything at once, but understanding the full scope of the problem is the first step. Once you can see where assets and product information are scattered, you can start making intentional decisions about what to centralize, what to connect, and where to apply AI and automation to eliminate the most painful bottlenecks.

“First-Party Data Isn’t Enough Anymore”

marketing 17 Feb 2026

By: Scott Kozub, VP, Product at Experian Marketing Services 


For years, first-party data has been positioned as the answer to nearly every challenge in digital advertising. Lose cookies? Build first-party relationships. Privacy gets more complicated? Lean into owned data. Measurement becomes murky? Go direct to the source.

That logic still holds, but only up to a point.


What many marketers are discovering in practice is that first-party data alone creates depth without scale. It offers rich insight into customers a brand already knows, but far less visibility into the audiences it still needs to reach. In a fragmented, privacy-conscious ecosystem, relying exclusively on first-party signals often results in limited reach, frequency challenges, and diminishing returns on prospecting.
 

The next phase of targeting will be defined by how well marketers combine first-party, third-party, contextual, and geographic signals to drive growth, improve efficiency, and strengthen customer relationships.


 Why first-party and third-party data are better together


The biggest challenge facing modern targeting is not the loss of identifiers. It is the growing fragmentation of signals across devices, channels, and environments. In that reality, identity does not disappear. It becomes more important as the connective layer that brings different data sources together for planning, activation, and measurement.
 

First-party data remains essential. It provides accuracy, consent, and a reliable foundation for personalization and measurement. But on its own, it reflects only a partial view of the market. Most first-party data sets skew toward existing customers, logged-in users, or known devices, leaving significant gaps in reach and understanding.


This is why third-party data is so valuable. Not as a standalone solution, but as a complementary layer that expands perspective beyond what first-party data can capture alone. Responsibly sourced third-party data adds demographic, behavioral, interest, and purchase context that helps marketers understand who they should be reaching next, especially in an environment shaped by privacy constraints and signal fragmentation.


First-party data on its own is limiting. Third-party data on its own is incomplete. The real power comes from connecting the two through identity, allowing marketers to plan, activate, and measure across fragmented environments with greater accuracy and confidence.
 

Contextual and geographic signals as privacy-safe extensions
 

Contextual and geographic targeting are not new tactics. They are proven approaches that have evolved alongside changes in technology, privacy expectations, and data availability.
 

Today, data-informed contextual targeting goes far beyond keywords or simple page adjacency. When contextual signals are combined with audience insights, they help marketers understand where high-indexing audiences naturally spend time, regardless of channel or environment. Certain content consistently attracts users with shared behaviors, demographics, or purchase intent. Identifying those patterns allows advertisers to reach relevant audiences in ways that are both effective and privacy-safe.


Geographic data functions in a similar way. People with similar lifestyles, needs, and behaviors often cluster in similar locations. When geographic signals are informed by behavioral and demographic data, rather than used as blunt radius targeting, they become a meaningful proxy for intent. This is especially important for categories like retail, CPG, and automotive, where location continues to influence decision-making.

These signals are not replacements for first- or third-party data. They are additional layers that strengthen a modern data strategy while supporting privacy-forward activation.


 

AI as decision intelligence in a fragmented ecosystem


Artificial intelligence plays an increasingly active role in making fragmented signals and multi-source data strategies manageable.

AI is not replacing targeting strategy. It is enabling it. By interpreting fragmented signals at scale, machine learning models help marketers connect identity, first-party data, third-party insights, contextual signals, and geographic information into actionable intelligence. Models trained on both structured and unstructured data can identify patterns across content, timing, device behavior, and location, then optimize delivery in real time.

This shift allows campaigns to move beyond static audience definitions and toward dynamic decisioning. As performance signals change, activation strategies can adapt accordingly, without relying on persistent identifiers or exposing sensitive personal data.

 

What this means for marketers in 2026
 

Marketers who want to create and activate campaigns more efficiently in 2026 will need integrated approaches that reflect how fragmented the ecosystem has become. Success will not come from betting on a single data type, but from building flexible systems that connect signals through identity and intelligence.

First-party data alone is no longer sufficient. Marketers who combine it with third-party, contextual, and geographic signals will be better positioned to plan, reach, and measure advertising in an environment defined by fragmentation, evolving privacy standards, and constant change.
"Returns Shouldn’t Be Tolerated — They Should Be a Strategic Differentiator"

marketing 13 Feb 2026

Your research shows returns are now a routine part of shopping, not a seasonal issue. What does the data reveal about how frequently consumers are returning items, and why should CX leaders care?


It’s true, what we uncovered with our survey is that returns are no longer a seasonal anomaly, but a meaningful brand interaction, a routine part of commerce, and a stepping stone to building lasting relationships. When our survey was conducted in early January, 55% of respondents had already made or planned to make a post-holiday return, and 21% of shoppers said they return an item as frequently as once a month. This means returns are a recurring touchpoint that happens across the customer lifecycle, not just in peak holiday periods. Given the volume of returns, even small inefficiencies become points of real friction, and that’s tied directly to loyalty and CSAT. CX leaders in retail and ecommerce should recognize returns as a high-value touchpoint and focus on making the process an opportunity for brand affinity and trust, not frustration.
 
More than half of shoppers say a bad returns experience could impact future purchases. Why do returns have such an outsized effect on loyalty compared to other post-purchase moments?

Returns matter because they’re consequential and emotional. While purchase experiences are driven by anticipation and reward, a return is triggered by disappointment. How a brand handles that disappointment fundamentally shapes trust. 57% of consumers say a bad return experience would influence whether they buy from that brand again, regardless of previous loyalty. It’s a high-stakes moment. If brands can’t resolve a problem quickly, transparently, and with a bit of empathy, they risk turning a one-time issue into long-term disengagement. 
 
More than 60% of consumers say they’d use an AI-powered agent to handle returns. What are shoppers actually hoping AI will fix at that moment?

Speed, clarity, and resolution are the top three things consumers expect from returns. While only a small percentage currently prefer chatbots (12%), 60% of respondents in our survey said they would use an AI-powered agent if it could instantly answer questions and process their return. This is customers signaling a desire for accurate, real-time assistance that gets the job done, with as little friction as possible. Only 36% of survey respondents say they are "very satisfied" with the returns process today, leaving significant room for improvement. AI, when done well, can eliminate many of the pain points consumers feel, including long wait times, confusing policies, and shipping hassles.

For retail leaders evaluating AI investments in 2026, why should returns be prioritized alongside acquisition and personalization efforts?

Trends in retail tech investment continue to focus on personalization and AI integrations to help the buyer build confidence. But what happens after the first purchase often determines whether the brand will get a second purchase, a third purchase, and so on. Returns are one of the few moments in the journey where customers are actively questioning their relationship with a brand, and that moment in time is where differentiation matters the most. AI investments in customer service are maturing quickly, proving that they can handle sensitive, complex situations with clarity and human-like empathy, all of which are critical to a successful returns process. But AI is not a “set and forget it” proposition. CX leaders must invest in training and empowering their teams to ensure their AI can grow, learn, and evolve alongside the needs of their customers. If a brand provides a strong purchase experience, but then loses the customer during a frustrating return experience, all those early investments in acquisition are at risk. 
 
Trust remains a major concern with AI. According to your research, what conditions make consumers comfortable using AI for returns?

Earning consumer trust will be an ongoing challenge for brands as they continue to integrate AI into their practices. Our recent survey took a deeper look into why consumers lack trust in AI currently. It found that consumers worry AI will be less efficient than a human, will have difficulty understanding their issue, or will provide inaccurate information. All of these concerns can be addressed by ensuring that the AI agent is given accurate customer data and policy information from the brand, and is overseen by well-trained ACX managers and teams.
 
 At Ada, we know this can be done well because our customers are seeing significant results from their AI investments today. One of our customers, IPSY, operates one of the largest beauty subscription networks in the world, serving more than 20 million community members across its brands. At that scale, customer experience isn’t just about support. It’s about relationship management, where every improvement compounds.
 

In just four months, IPSY’s GenAI agent, Glam Bot, which is built and managed through Ada’s ACX Platform, unlocked:


→ a 41% lift in CSAT,

→ a 943% ROI on their generative AI investment,

→ 64% increase in autonomous resolution, and

→ one of the largest AI deployments inside the company to date.
 

The key to ensuring consumers are comfortable with AI isn’t removing humans, but creating a seamless integration with humans, including transparent escalation paths. 
 

Returns should no longer be an interaction that consumers tolerate, but a strategic differentiator for brands using AI to turn problems into opportunities.

Looking ahead, how do you expect AI to reshape post-purchase CX over the next 12–24 months, particularly around returns?

In the next 12-24 months, AI will become increasingly agentic. This means it will do more than answer simple queries – it will automate increasingly complex tasks end-to-end with context, accuracy, and even empathy. This would include checking inventory at nearby stores for pickup, processing payments, and making repurchases of the same products easy. We will see AI become more deeply capable in policy, status updates, logic, and personal preferences, which can make returns virtually frictionless by default. Brands will also increasingly measure the success of their ACX investments not simply in resolution rates, but in revenue generation, both from cross-sell/upsell opportunities and in reduced customer churn. But this requires a thoughtful approach to AI management and adoption, as well as a team that’s empowered to grow and evolve their own agents. Brands that win will understand AI success isn’t just a technology deployment, it’s a management discipline. You cannot delegate your transformation to a vendor. 
 
   
