Martech Edge | Best News on Marketing and Technology


Why longevity and adaptability after deploying agentic AI will define enterprise success in 2026


artificial intelligence 19 Feb 2026

Spokesperson: Adam Beavis, Country Manager Australia and New Zealand, Databricks.

Q1: Agentic AI has moved quickly from experimentation to deployment. What will separate organisations that succeed from those that fall behind after rollout? 


A: What separates organisations that win with agentic AI after rollout from those that stall is less about the technology than about the operating model and discipline. The key differentiators tend to be:


 


1. Data readiness: High performers invest heavily in clean, permissioned, continuously improving data and instrument agents with feedback loops. Without this, agent performance degrades quickly after initial rollout.


2. Strong guardrails and governance by design: Winning organisations bake in controls, auditability, escalation paths, and human-in-the-loop thresholds from day one. Those that fall behind treat governance as an afterthought—leading to trust issues, halted deployments, or regulatory friction.


3. Clear business ownership, not just tech ownership: Successful firms tie agents to specific business outcomes (cost, speed, risk reduction, revenue uplift) with accountable business unit owners. 

 

4. Cultural and behavioural change: The biggest gap is human, not technical. Leaders who succeed redesign roles around human–agent collaboration, and retrain employees to integrate AI into their daily work and oversee autonomous systems.
 


Q2: Many organisations feel the job is done once AI is deployed. What often goes wrong beyond that point?


A: The biggest misconception is treating deployment as a box-ticking exercise. Models that are trained on historical data can drift as inputs change, and without consistent and continuous evaluation, problems often surface too late. 

 

The solution is shifting from one-off checks to continuous evaluation in production. Just as humans need performance reviews, so do AI systems. Enterprises need systems that continuously measure performance against real tasks, retrain or adjust agents, and balance quality against cost. Many early deployments struggle because they were not designed with long-term operation in mind.
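The review cycle described above can be sketched as a minimal monitoring loop. This is an illustrative Python sketch, not a Databricks feature; the pass/fail tasks, cost figures, and thresholds are assumed values.

```python
from dataclasses import dataclass, field

@dataclass
class EvalRecord:
    task_id: str
    passed: bool
    cost: float  # e.g. dollars per task (illustrative)

@dataclass
class ContinuousEvaluator:
    """Tracks agent quality and cost against live tasks, flagging drift."""
    quality_floor: float = 0.90   # minimum acceptable pass rate (assumed)
    cost_ceiling: float = 0.05    # maximum acceptable average cost (assumed)
    records: list = field(default_factory=list)

    def log(self, record: EvalRecord) -> None:
        self.records.append(record)

    def report(self) -> dict:
        n = len(self.records)
        pass_rate = sum(r.passed for r in self.records) / n
        avg_cost = sum(r.cost for r in self.records) / n
        return {
            "pass_rate": pass_rate,
            "avg_cost": avg_cost,
            # Either threshold breach should trigger retraining,
            # prompt adjustment, or human review.
            "needs_attention": pass_rate < self.quality_floor
                               or avg_cost > self.cost_ceiling,
        }

evaluator = ContinuousEvaluator()
for task_id, passed, cost in [("t1", True, 0.02), ("t2", True, 0.03),
                              ("t3", False, 0.04)]:
    evaluator.log(EvalRecord(task_id, passed, cost))
print(evaluator.report())  # pass rate 2/3 falls below the 0.90 floor
```

In practice the records would come from task-specific benchmarks run against production traffic, and a flagged report would trigger retraining or human review rather than a print statement.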

 

Q3: Why is the transition from single agents to multi-agent orchestration important for enterprises?


A: Enterprise work rarely happens in a single step. A realistic workflow often includes retrieving data from multiple sources, validating it against business rules, running compliance checks, and making a final decision that meets explainability requirements. Expecting a single agent to handle all these tasks reliably and efficiently is unrealistic.


In 2026 we will see broader adoption of multi-agent orchestration, where specialised agents handle distinct tasks and a supervising agent coordinates sequences, mirroring how human teams operate. Beyond better performance, this also improves governance: each agent can be monitored and evaluated on its specific responsibility, modifications can be isolated, and the overall system remains transparent, auditable, and easier to troubleshoot.
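The supervisor pattern described above can be sketched as follows. The agent names, hand-off order, and stubbed logic are hypothetical; real specialised agents would wrap model calls and tool use rather than these stubs.

```python
from typing import Callable

# Each specialised agent is a plain function: state dict in, state dict out.
def retrieval_agent(state: dict) -> dict:
    state["documents"] = ["policy_doc", "claims_record"]  # stubbed lookup
    return state

def validation_agent(state: dict) -> dict:
    state["valid"] = len(state.get("documents", [])) > 0
    return state

def compliance_agent(state: dict) -> dict:
    state["compliant"] = state.get("valid", False)  # stubbed rule check
    return state

def decision_agent(state: dict) -> dict:
    state["decision"] = "approve" if state.get("compliant") else "escalate"
    return state

class Supervisor:
    """Coordinates specialised agents and keeps a per-step audit trail,
    so each agent can be monitored and swapped out in isolation."""
    def __init__(self, steps: list[tuple[str, Callable[[dict], dict]]]):
        self.steps = steps
        self.audit_log: list[str] = []

    def run(self, state: dict) -> dict:
        for name, agent in self.steps:
            state = agent(state)
            self.audit_log.append(name)  # which agent acted, in order
        return state

supervisor = Supervisor([
    ("retrieve", retrieval_agent),
    ("validate", validation_agent),
    ("comply", compliance_agent),
    ("decide", decision_agent),
])
result = supervisor.run({"request": "claim-123"})
print(result["decision"], supervisor.audit_log)
```

The audit log is what makes the governance claim concrete: every decision can be traced back through the exact sequence of agents that produced it.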

 

Q4: Many enterprises struggle to get AI agents and applications into production. What is causing the bottleneck and how is Databricks addressing it?


A: The bottleneck is not building a demo, it is making agents reliable in the real enterprise. In production, agents must consistently reason over complex, proprietary data, operate with guardrails and integrate with operational systems.


General knowledge of AI is becoming a commodity, but AI that truly understands the proprietary data inside an enterprise remains elusive. Many AI agents fail in enterprise environments because they prioritise ease of use over accuracy, leading to inconsistent results or behaviour organisations cannot trust.

 

Databricks is addressing this through a number of growth areas:

  1. The rise of AI-powered coding is changing how software is built. As developers create apps via natural language, those apps automatically need databases and agent backends. Databricks is seeing this first-hand, with over 80% of databases launched on Databricks now created by AI agents rather than humans. We are enabling developers to rapidly build applications that run on Lakebase and are powered by agents, all within a unified and governed platform.
  2. Enterprises need a modern transactional layer for AI-native apps. Traditional transactional databases have changed little for decades, so we launched Lakebase, which simplifies operational data workflows and is optimised for AI agents operating at machine speed.
  3. Agent Bricks then helps organisations build and deploy agents that can securely work within their own data, where most of the business value sits. It helps organisations build domain-specific agents that reason over their data, track quality with task-specific benchmarks and balance performance with cost over time. The aim is to make agent quality measurable and improvable in production, not assumed at deployment.

Together, this trifecta removes the common production blockers by combining an operational database layer, an agent-building platform that works with enterprise data and an application layer that helps ship faster, with reliability and governance built in. 
 

Q5: In Australia and New Zealand, how are organisations specifically adopting AI applications and how does that differ from earlier phases?


A: Across Australia and New Zealand, we’re seeing a pivot from general-purpose experimentation to domain-specific AI applications that are grounded in trusted enterprise data, with stronger attention to governance and sovereignty. As they shift from pilots into production to deliver real business outcomes, organisations are now embedding AI into real workflows, from customer support and supply chain to finance and operations.


For example, Suncorp needed to scale AI across the organisation to improve claims accuracy, reduce operational risk and support more automated digital customer experiences. Manual processes were creating additional load for staff, and employees often lacked instant access to complex policy and claims information when making decisions. By building, deploying and scaling domain-specific AI directly into claims workflows with Databricks, the company has achieved 99% accuracy and saved more than 15,000 hours of manual workload.

Additionally, Atlassian uses Databricks AI/BI Genie to power on-demand insights in plain English through Atlassian Rovo, its AI assistant. This allows teams across the business to ask complex questions of their data and receive trusted, contextual answers directly within their existing workflows.

These examples reflect a broader trend that we’re seeing across the region in 2026, where value increasingly comes from AI applications designed around the business, grounded in governed data, and operationalised end to end. 

Q6: How are AI applications, including AI agents, playing out across key verticals? What advice would you give to enterprise leaders in these sectors? 


A: Whether you lead in finance, pharma, media, CPG, or tech, the questions are converging: how do we use AI to improve business productivity? How do we balance industry regulation with AI innovation? How do we control costs without slowing adoption? The leaders who solve these challenges today will build faster, more resilient operations and gain a competitive edge. 
 

In the public sector, AI is connecting data across agencies to reduce administrative burden and support decisions from benefits assessment to emergency response. Success depends on strong governance, clear lineage and transparency so outputs can be trusted and audited. 
 

In marketing, AI applications are moving beyond content generation to orchestrating campaigns, analysing performance data and adapting strategies in near real time. Data Intelligence for Marketing allows organisations to centralise customer and campaign data, apply AI to drive more accurate decisions, and use AI agents to automate tasks, extending the capacity of human teams.
 

In cybersecurity, multi-agent systems are proving effective at validating threats and accelerating response times while keeping humans in the loop. Databricks’ Data Intelligence for Cybersecurity powers SecOps at scale with Agent Bricks by automating triage, enrichment, response and investigation, reducing alert fatigue and costs while boosting analyst productivity.
 

My advice for leaders is simple: 


●      Invest in your data and AI foundations: high-quality data and strong governance.


●      Have clear business ownership and outcomes you want AI to accomplish.


●      Scale what is already working.
What scaling AI reveals about governing personalisation


artificial intelligence 12 Feb 2026

By Mark Drasutis, Head of Value, APJ, Amplitude
 
As brands increasingly seek to understand and act on customer behavior, they need to continuously analyse user journeys, identify patterns and friction, and recommend or execute next steps in real time to deliver true personalisation. AI is accelerating this shift, redefining personalisation by moving brands beyond static journeys to experiences that adapt dynamically to customer behaviour.
 
Australia’s National AI Plan sends a clear message to marketing and product teams: AI can only scale if it is safe, transparent and responsibly governed. Yet, while AI capabilities are advancing rapidly toward greater autonomy, most organisational governance remains manual and fragmented.
 
With conversational AI and agentic AI becoming the primary interfaces for digital experiences, governance needs to operate at the same speed and complexity as the systems it oversees. Brands need capability uplift and accountability in equal measure or they risk falling behind. 

The trust gap limiting AI-driven personalisation 


AI-driven personalisation is being held back not by technology but by trust and transparency – a gap driven by weak governance, unclear accountability and a lack of workflows to manage AI safely. This matters because trust in AI remains fragile in Australia. A University of Melbourne-led study found that while half of Australians already use AI regularly, only one in three feel confident trusting it.
 

That trust gap is widening as personalisation evolves. Traditional rules-based marketing, built on fixed segments, pre-defined journeys and manual triggers, is being replaced by real-time, generative personalisation where decisions are made continuously by AI. This shift demands new operating models, stronger governance frameworks and far greater visibility into how AI systems make decisions.

As agentic AI becomes more embedded in personalisation, teams are moving beyond static segmentation toward systems that can learn continuously from behaviour, test autonomously and adapt experiences in the moment. But even the most advanced systems will fail if customers don’t trust the intelligence behind them.


Australia’s National AI Plan reinforces that trust and transparency are not optional – they are the foundation for safe, scalable AI-driven personalisation. Done well, this approach lets brands deliver meaningful, adaptive experiences without compromising privacy, fairness or customer confidence.

AI governance needs to be built in, not bolted on 


As AI takes on a bigger role in shaping personalised customer experiences, the governance behind those systems becomes just as important as the technology itself. The rise of employees using AI tools independently outside formal approval channels creates security and compliance risks. Organisations cannot rely on ad hoc controls anymore – they need transparent systems that formalise how AI is accessed, monitored and governed so teams can innovate without losing control. Boards and executives are accountable for AI strategy, governance and ethical application, emphasising that oversight must be enterprise grade, not experimental.


Effective guardrails start with visibility. As AI drives personalised decisions, brands need full clarity on how those decisions are being made. Brands need to trace which data an AI model uses to make a decision, understand the prompts, models and parameters behind an output and maintain clear logs that show how AI shapes the paths customers take and the outcomes they experience. Without transparency, it becomes impossible to spot bias, drift or unintended behaviour. 
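One way to capture the lineage described above is an audit record per decision. This sketch uses only standard-library Python; the field names, model identifier, and data-source labels are illustrative assumptions, not a specific vendor's schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model: str, params: dict, prompt: str,
                 data_sources: list[str], output: str) -> dict:
    """Build an audit record tying an AI decision to its inputs:
    which data fed it, which model and parameters produced it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "params": params,
        "data_sources": data_sources,  # traceable data lineage
        # Hashing the prompt lets you verify it later without
        # storing customer text verbatim in the log.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    # In production this record would be appended to an immutable log store.
    return record

entry = log_decision(
    model="example-model-v1",  # hypothetical model name
    params={"temperature": 0.2},
    prompt="Recommend next best offer for segment A",
    data_sources=["crm.profiles", "web.events"],
    output="offer_42",
)
print(json.dumps(entry, indent=2))
```

With records like this in place, spotting bias or drift becomes a query over the log rather than guesswork about what the system did.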


What matters in practice is real-time visibility. When teams can see how AI-driven decisions influence user behaviour, conversion and retention, they can assess whether those decisions are delivering value or creating unintended consequences. This kind of visibility is what allows personalisation to move from experimentation to something dependable.


Some early adopters are putting this into practice. ZIP, an Australian fintech company, is already using AI agents on Amplitude’s MCP server to embed its domain knowledge directly into its LLM workflows, improving how personalised journeys are monitored and optimised. The result: a 60% increase in customers starting an additional repayment flow and the removal of more than 4,000 days of navigation friction.


This visibility makes it possible to intervene early, course-correct when required and prevent minor issues from scaling into larger problems. For marketing and product teams, this means AI-driven personalisation becomes safer, more predictable and more aligned with actual customer behaviour. AI governance cannot be patched on later. It must be embedded into the core of decisioning systems so AI operates safely, predictably and in line with both regulation and customer expectations.

Invest in continuous oversight for continuous experimentation
 
As the National AI Plan echoes, real-time personalisation means AI is constantly adapting, which requires continuous oversight rather than periodic manual checks.
 
When AI underpins the customer experience, risks compound quickly. Automation without continuous oversight can lock in incorrect decisions at scale. Continuous oversight is what keeps experimentation safe, explainable and aligned with customer expectations of personalisation.
 
AI agents are most effective when they work alongside humans, not in place of them. They can monitor customer behaviour, surface opportunities and support controlled experimentation at speed, while humans remain responsible for setting strategy, defining guardrails and approving customer-facing changes. A leading Australian bank currently using Amplitude’s AI Agents has advanced its data-driven experimentation, uncovering key customer behavioural patterns and traffic shifts with central human oversight. Autonomy can be adjusted over time as confidence grows, but accountability remains firmly with people.


This in-loop model ensures personalised experiences adapt based on real customer behaviour, while still reflecting brand intent, fairness standards and evolving privacy expectations. Products can optimise continuously, but only within approved parameters, keeping customer experience safety and performance aligned.  

AI has the potential to fundamentally reshape personalisation, but only when trust, transparency, and governance scale alongside the technology. Without them, AI accelerates risk and limits growth. With them, it becomes a powerful and defensible competitive advantage. 


The brands that succeed won’t be those that deploy the most AI, but those that govern it with intent and discipline. Now is the time to move beyond experimentation: strengthening oversight, embedding clear governance and building transparent data foundations that allow AI to scale safely and deliver personalised experiences customers genuinely trust.
How Mundial Media Uses AI to Decode Cultural Context


artificial intelligence 11 Feb 2026

Tony, there's a lot of talk about multicultural audiences being "important." Can you explain?

Multicultural audiences are no longer a segment; they’re the primary drivers of U.S. economic growth. Multicultural consumers are fueling most of the country's buying power. But reaching this deeply nuanced, diverse audience in a privacy-first ad technology environment has never been more difficult. Mainstream ad platforms weren’t built for this. 

You've been vocal about mainstream ad platforms becoming "too automated." What's the problem with automation?

Automation is powerful for handling large volumes, but it falls short when it ignores cultural layers. Culture shapes everything from how people interpret signals to what motivates them. For instance, a Puerto Rican millennial in New York and a Mexican American Gen Z in Texas could show similar online patterns, yet their cultural influences create distinct needs. That's why tools like Mundial Media’s proprietary Cadmus AI technology are designed to decode those deeper contexts.

How does Mundial Media achieve that precision at scale?

It starts with processing hundreds of millions of signals daily and pinpointing where audience interests and cultural shifts overlap. This enables us to effectively reach over 50 million users, balancing broad scale with targeted accuracy.

Privacy regulations are tightening, and third-party cookies are disappearing. How is Mundial Media navigating this shift?

We've long prioritized first-party data and contextual cues over invasive tracking. Cadmus AI, trained on over three years of compounded AI learnings, delivers precise, real-time cultural understanding, privacy-safe scale, and high-performing contextual targeting: the “right ad at the right moment” without outdated cookies, IDs, or legacy identity signals.

Mundial Media emphasizes that your team embodies the diversity of the audiences you serve. Why does "lived experience" matter in the technical world of ad tech?

Technical tools alone can't capture bias and subtleties; that's where personal insights come in. Our diverse team brings an innate grasp of what makes messaging authentic, which visuals resonate, and which stereotypes to avoid. This human element sharpens AI's effectiveness beyond raw data analysis.

You mentioned Cadmus AI has been trained on "over three years of compounded learnings." What does that continuous training look like?

It's an ongoing cycle where each campaign refines the system. Over time, this builds a smarter model for predicting what engages various segments and when to deliver messages for maximum relevance.

What does delivering "the right ad at the right moment" mean in a culturally nuanced context?

Delivering 'the right ad at the right moment' in a culturally nuanced context is about relevance rooted in understanding. It means knowing why a moment matters, who it matters to, and how a brand can show up in a way that feels natural and aligned with the audience's mindset.

With Cadmus AI, we know when brands want to target NFL football versus global football or soccer, and when Beyoncé has a major moment, it's a moment your brand should be part of. It's using cultural insight to match a message with the emotional and social context people are in at that exact moment, so the brand feels relevant to what they actually care about right then.


Can you give an example of how Mundial Media can help brands capitalize on major cultural moments?

The 2026 FIFA World Cup is the perfect example. We're talking about 6 billion viewers worldwide, over 29 million multicultural fans in the U.S. alone. This is arguably the decade's biggest multicultural marketing moment. The opportunity here goes beyond traditional sponsorship. What actually works is showing up authentically. Cadmus AI helps brands understand when and how to participate in ways that honor what these moments actually mean to different countries. Those are real emotional connections – hometown pride. Brands that respect that earn trust, and trust drives everything else.


What's the biggest misconception brands have about reaching multicultural audiences?

Many view it as a simple add-on, such as translating content or ticking diversity boxes, while seeing these groups as peripheral. In truth, they're central to modern culture: they command large consumer spending, drive trends and adopt early, making them essential for any brand eyeing long-term growth.

Looking ahead, how do you see AI and cultural understanding evolving in advertising?

With AI democratizing data processing, the edge will come from embedding cultural depth to handle nuance and authenticity. We're advancing both tech and expertise to merge these, creating systems that target precisely while respecting human contexts in an increasingly complex landscape.
How Marketing Agencies Can Protect Client Data in an Era of AI-Powered Threats


artificial intelligence 11 Feb 2026

Marketing agencies are uniquely positioned as custodians of client data across dozens of platforms. How has this role evolved in terms of security responsibility, and why is 2026 a critical year for agencies to address this?


Marketing agencies have fundamentally transformed from service providers into data custodians, often holding the keys to their clients' most valuable digital assets. A typical agency today manages credentials for 50+ client accounts across advertising platforms, analytics tools, social media, CRMs, and content management systems. Each login represents a potential entry point not just to the agency's infrastructure, but directly into client operations.


2026 marks a critical inflection point for three reasons. First, AI-powered attacks have made credential harvesting exponentially more sophisticated; attackers can now analyze user behavior patterns and craft targeted phishing campaigns that are nearly indistinguishable from legitimate communications. Second, regulatory frameworks around data protection are tightening globally, with agencies increasingly held liable for breaches originating from their access points. Third, clients are becoming more security-conscious in their vendor selection process. We're seeing RFPs that explicitly require agencies to demonstrate robust security protocols, including how they manage shared credentials. Agencies that can't articulate their security posture are losing contracts to competitors who can.

How can agencies transform their security practices from a checkbox requirement into an actual competitive advantage during pitches and contract renewals?


The agencies that win in 2026 are those positioning security as a core competency, not an afterthought. During pitches, leading agencies now include dedicated sections on their security infrastructure, demonstrating their zero-knowledge password management system, showing how they can onboard and offboard team members to client accounts in minutes rather than days, and explaining their audit trail capabilities.


The competitive advantage comes from trust. When an agency can tell a prospective client, "We use enterprise-grade password management with military-grade AES-256 encryption, and no one, not even our leadership, can access your credentials without proper authorization," that's powerful differentiation. We're working with agencies that have made their security protocol a key selling point in proposals. It demonstrates professionalism and shows they take their custodian role seriously. In an industry where one breach can destroy years of client relationships, that message resonates.

AI-powered phishing attacks are becoming increasingly sophisticated. Can you describe what modern social engineering attacks targeting marketing agencies actually look like in 2026, and what makes agencies particularly vulnerable to these AI-driven threats compared to other industries?


Today's AI-powered attacks targeting agencies are remarkably sophisticated. We're seeing threat actors create fake emails that perfectly mimic client communication styles, analyzing previous email threads to replicate tone, terminology, and timing patterns. An account manager might receive what appears to be an urgent request from their client's CMO asking for immediate access to campaign data or credentials, using language and formatting that's virtually identical to legitimate requests.


Agencies are particularly vulnerable for several reasons. First, they operate in a high-velocity environment where urgent client requests are routine, and attackers exploit this culture of responsiveness. Second, agencies typically have multiple team members accessing the same client accounts, creating more potential entry points. Third, the creative nature of agency work means employees regularly click on links to review creative assets, making them more susceptible to malicious links disguised as client deliverables or campaign previews.


The most dangerous attacks we're seeing involve AI tools that harvest credentials while appearing to provide legitimate services. An employee might install what seems like a helpful SEO analysis tool or content optimization app, not realizing it's designed to capture login credentials and monitor user behavior.

Beyond technical solutions, what role does human awareness and training play in defending against these evolving threats?


Technology provides the foundation, but human awareness is your critical last line of defense. The most sophisticated password management system in the world can be undermined by an employee who falls for a convincing phishing email or shares credentials via an unsecured channel.


Effective training goes beyond annual compliance modules. Agencies need ongoing security awareness that addresses real-world scenarios; what does a credential harvesting attempt actually look like? How do you verify an urgent request is legitimate? What are the red flags in AI-generated phishing attempts? The key is making security awareness part of the agency culture, not just an IT department concern.


We also emphasize the importance of establishing clear protocols for credential sharing and verification. When someone requests access to a client account, what's the verification process? Training employees to pause and verify, even when requests seem urgent, can prevent the majority of social engineering attacks. It's about creating a security-conscious culture where asking "Can you verify this request through a secondary channel?" is encouraged, not viewed as slowing down work.

How should agencies think about credential management differently when they're not just protecting their own data, but serving as the gateway to client accounts across platforms?


Agencies need to shift from thinking about passwords as individual assets to viewing credential management as an enterprise-wide access control system. When you're managing keys to client kingdoms across dozens of platforms, you need infrastructure that provides visibility, control, and accountability.


This means implementing a zero-knowledge architecture where credentials are encrypted at the source and can only be decrypted by authorized users. It means having granular access controls so team members only access the specific client accounts relevant to their projects. It means maintaining detailed audit trails so you can track exactly who accessed which credentials and when, which is essential for both security and client trust.


The critical shift is moving from reactive to proactive management. Rather than manually hunting for passwords when someone needs access or scrambling to change credentials when someone leaves, you need systems that allow instant onboarding and one-click offboarding. When a client relationship ends or a team member transitions, you should be able to revoke access immediately without requiring manual password changes across multiple platforms. This isn't just about security; it's about operational efficiency and demonstrating to clients that their data is managed with enterprise-level rigor.
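The access-control shift described above can be sketched as a small vault model. This is an illustrative sketch, not a real product's API: secrets are stored in plain text here for brevity, where a production system would encrypt them (e.g. with AES-256) under a zero-knowledge scheme.

```python
from datetime import datetime, timezone

class CredentialVault:
    """Sketch of per-client access control with an audit trail and
    one-step offboarding. Encryption and zero-knowledge decryption
    are stubbed out; real vaults never hold secrets in the clear."""

    def __init__(self):
        self._secrets: dict[str, str] = {}       # account -> credential
        self._grants: dict[str, set[str]] = {}   # user -> granted accounts
        self.audit: list[tuple[str, str, str]] = []  # (time, user, action)

    def _log(self, user: str, action: str) -> None:
        self.audit.append((datetime.now(timezone.utc).isoformat(),
                           user, action))

    def store(self, account: str, credential: str) -> None:
        self._secrets[account] = credential  # stub: encrypt before storing

    def grant(self, user: str, account: str) -> None:
        self._grants.setdefault(user, set()).add(account)
        self._log(user, f"granted:{account}")

    def fetch(self, user: str, account: str) -> str:
        # Granular access control: users see only accounts granted to them.
        if account not in self._grants.get(user, set()):
            self._log(user, f"denied:{account}")
            raise PermissionError(f"{user} has no access to {account}")
        self._log(user, f"accessed:{account}")
        return self._secrets[account]

    def offboard(self, user: str) -> None:
        """One-click offboarding: revoke every grant at once."""
        self._grants.pop(user, None)
        self._log(user, "offboarded")

vault = CredentialVault()
vault.store("client-a/ads", "s3cret")
vault.grant("alice", "client-a/ads")
assert vault.fetch("alice", "client-a/ads") == "s3cret"
vault.offboard("alice")  # access revoked instantly, audit trail preserved
```

The key property is that revocation is a single operation over the grant table; the credentials themselves never need to be rotated across dozens of platforms just because one person left.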

If you could recommend three immediate actions that agencies should take this quarter to strengthen their security posture, what would they be?


First, implement a business-grade password management solution immediately. This is your foundation; everything else builds from here. For less than $400 annually for a 20-person team, you eliminate the single biggest vulnerability in your security stack. Every day you continue managing client credentials through spreadsheets or browser-saved passwords is a day you're exposed to preventable breaches.


Second, conduct a Shadow IT audit. Require every team member to log every software tool and platform they're using into your password manager, sanctioned or otherwise. You cannot protect what you cannot see. This gives you a complete inventory of your software ecosystem and often reveals surprising security gaps where sensitive data is being stored in unapproved tools.


Third, establish and document your credential management protocols. Create clear written policies for how credentials are shared, how access is granted and revoked, and how urgent requests are verified. Make sure every team member understands these protocols and knows that following them isn't bureaucracy, it's protecting both the agency and your clients. Share these protocols with clients during onboarding and in annual reviews. It demonstrates professionalism and gives them confidence in your security practices.

For agencies that have historically viewed cybersecurity investments as cost centers, how should they reframe this thinking given the current threat landscape?


The calculation has fundamentally changed. A single credential breach can cost an agency a major client relationship, trigger regulatory penalties, and destroy years of reputation building. We've seen agencies lose six-figure accounts because they couldn't demonstrate adequate security controls. Conversely, agencies that position security as a strength are winning competitive pitches specifically because of their security infrastructure.


Consider the math: implementing enterprise-grade password management costs roughly $54 per user annually. Compare that to the cost of a single client breach: legal fees, notification requirements, lost business, reputation damage. Or consider the competitive advantage: if robust security protocols help you win just one additional mid-sized client per year, the ROI is exponential.


But beyond risk mitigation and competitive advantage, there's operational efficiency. How many hours does your team waste hunting for passwords, resetting forgotten credentials, or manually managing access when team members join or leave projects? Proper credential management eliminates this friction, making your team more productive and your operations more professional. This isn't a cost center, it's a revenue enabler and an efficiency multiplier.

Looking ahead through 2026, what emerging threats should agencies be preparing for now, even if they haven't fully materialized yet?


The intersection of AI and social engineering will become increasingly dangerous. We're already seeing early versions, but expect to see AI-powered attacks that can conduct real-time conversations, adapting their approach based on responses. Deepfake audio and video will make verification of urgent requests significantly more challenging. Imagine receiving a video call from a "client" requesting immediate credential access.


Watch for increased targeting of mobile devices. As remote work remains standard and team members access client accounts from personal devices, mobile endpoints become attractive targets. Agencies need to ensure their security infrastructure works seamlessly across devices without compromising security.


Finally, regulatory compliance will expand. More jurisdictions will implement data protection regulations that specifically address third-party access to client data. Agencies that can demonstrate compliance, showing encrypted credential management, detailed access logs, and clear data handling protocols, will have significant advantages in enterprise client relationships.


The agencies that thrive in 2026 won't be those that react to threats after they emerge, but those that build security into their operational DNA now. Password management as the first line of defense isn't just about protecting credentials; it's about demonstrating to clients that when they trust you with their digital assets, that trust is respected with enterprise-grade security at every level.
Redesigning Marketing Operations for the AI Era: Key Insights from Incubeta.


artificial intelligence 30 Jan 2026

1. How is AI changing the way marketing teams structure their creative and media workflows today?

AI is shifting workflows from linear handoffs to more connected, parallel processes. Instead of creative, media, and analytics operating in silos, teams increasingly work from a shared intelligence layer where insights, audience signals, and performance feedback flow continuously. At Incubeta, we see the biggest impact when AI accelerates iteration and personalization at scale, while strategic and creative decision-making remains firmly human-led.

 

2. What principles does Incubeta prioritize when helping brands redesign workflows around AI?

The first principle is that AI should augment human expertise, not replace it. We redesign workflows so AI handles speed, scale, and pattern recognition, especially in production and optimization, while people focus on strategy, creativity, and brand stewardship. The second principle is integration. AI delivers the most value when creative, media, and data systems operate as one connected workflow rather than separate layers.

 

3. Why is a human-centered approach still essential when applying AI across marketing operations?

AI is only as effective as the behaviors it’s designed to influence. A human-centered approach ensures AI-driven outputs reflect how people actually think, feel, and make decisions, rather than optimizing solely for short-term performance signals. At Incubeta, we use AI to support better human judgment and customer understanding, not to override them.

 

4. How do behavioral science frameworks like StoryVesting or the Bow Tie Funnel guide AI-driven marketing decisions?

Frameworks like StoryVesting and the Bow Tie Funnel give AI direction and purpose. They help ensure automation and personalization reinforce trust, relevance, and long-term value rather than simply increasing volume or efficiency. These frameworks also align internal teams around a shared customer logic, making AI-driven execution more consistent and easier to operationalize.

 

5. What does an AI-ready data workflow look like from Incubeta’s perspective?

An AI-ready data workflow is unified, accessible, and decision-oriented. It connects media, customer, and performance data into a single environment that supports real-time analysis and activation. At Incubeta, we approach this through a Data-as-a-Service mindset, where data is treated as a continuously available, governed layer that fuels planning, activation, attribution, and prediction. This allows teams to move from reporting what happened to anticipating what will happen next and acting with confidence.

 

6. How does AI improve attribution and predictive modeling in modern marketing organizations?

AI is fundamentally changing how attribution and predictive modeling support decision-making. Instead of forcing fragmented customer journeys into last-click or channel-based reports, AI-driven models account for multiple touchpoints, creative variables, and rapidly shifting behaviors to show what’s actually driving incremental impact.

 

Predictive modeling then builds on those signals to forecast outcomes, scenario-test media and creative investments, and evaluate trade-offs before decisions are made. As measurement systems become more advanced, marketers are moving away from trying to perfectly reconstruct a journey that no longer exists and instead using AI-driven modeling to plan what comes next with greater confidence, even as privacy constraints and signal loss accelerate.

 

The result is a move from reactive optimization to proactive, forward-looking planning, where reporting becomes a decision engine rather than a justification exercise.
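For contrast, here is one of the simple heuristics that AI-driven attribution moves beyond: a position-based (U-shaped) model that hard-codes credit to fixed journey positions. The 40/20/40 split, channel names, and journey data are illustrative assumptions, not Incubeta's methodology.

```python
# A minimal position-based (U-shaped) multi-touch attribution heuristic.
# 40% of credit goes to the first touch, 40% to the last, and the
# remaining 20% is split evenly across the middle touches.

def u_shaped_attribution(touchpoints: list[str]) -> dict[str, float]:
    """Distribute conversion credit across an ordered list of touchpoints."""
    if len(touchpoints) == 1:
        return {touchpoints[0]: 1.0}
    if len(touchpoints) == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[0]] += 0.4
    credit[touchpoints[-1]] += 0.4
    middle = touchpoints[1:-1]
    for tp in middle:
        credit[tp] += 0.2 / len(middle)
    return credit

# Hypothetical four-touch journey ending in a conversion.
journey = ["paid_search", "social", "email", "direct"]
print(u_shaped_attribution(journey))
# {'paid_search': 0.4, 'social': 0.1, 'email': 0.1, 'direct': 0.4}
```

The fixed weights are exactly the kind of assumption that model-based, AI-driven attribution replaces with weights learned from observed incremental impact.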

 

7. What role do platforms like Google Marketing Platform and Google Cloud play in enabling AI-powered decision-making?

Google Marketing Platform and Google Cloud provide the infrastructure needed to connect data, activate insights, and scale AI responsibly. Together, they enable advanced analytics, modeling, and automation while maintaining governance and transparency. Incubeta works closely within these ecosystems to help brands operationalize AI in ways that support both performance and accountability.

 

8. How is AI reshaping collaboration between creative, media, and analytics teams?

AI creates a shared language between teams by grounding decisions in common data and insights. Creative teams gain faster feedback, media teams gain clearer signals, and analytics teams can focus on higher-value modeling instead of manual reporting. The result is more cohesive collaboration and fewer disconnects between strategy, execution, and measurement.

 

9. What practical steps can marketing leaders take to govern AI usage across their organizations?

Effective AI governance is less about restriction and more about clarity. Marketing leaders need to define where AI is appropriate in the workflow, where human judgment is required, and how outputs are reviewed before activation. At Incubeta, we see the most progress when governance is built directly into everyday processes, so AI use feels intentional and repeatable rather than experimental or risky.

 

10. What signals or outcomes help demonstrate AI’s impact to executive leadership?

Executives respond best to outcomes tied to efficiency, effectiveness, and decision quality. This includes faster time to market, improved personalization at scale, and clearer links between marketing activity and business results. Framing AI as an operational and strategic advantage, rather than a standalone tool, helps make its value tangible to the C-suite.

 

11. Are there any exciting developments on the horizon at Incubeta in 2026?

As we kick off 2026, the excitement at Incubeta is palpable. One of the standout moments I’m particularly looking forward to is the launch of our new podcast, Digital Edge, in Q1. This podcast will bring together a dynamic range of voices, offering diverse perspectives from across industries on key topics like the future of AI, marketing effectiveness, and much more.

 

I’m honored to be a guest on an upcoming episode, where I’ll dive into AI architecture and share how organizations can set themselves up for success with AI. If you’re eager to gain actionable insights and hear from industry leaders on how they’re driving innovation in marketing and advertising, make sure to tune in!

 

 

AI Agents & Reinforcement Learning: The Future of Customer Engagement | Jojo Zieff, Braze


artificial intelligence 8 Sep 2025

1. What advantages does reinforcement learning offer over traditional A/B testing or rules-based personalization models? 
 
Traditional A/B testing remains a valuable tool for marketers—it allows for quick experimentation and helps identify which version of a message or experience performs best. But its scope is confined to testing a small number of fixed variants in isolation.

Reinforcement learning (RL) transforms static personalization into true relevance, tailoring the customer experience at the individual level. Instead of relying on static tests or rules-based systems, RL continuously learns from real-time customer behavior and adapts engagement strategies across multiple dimensions. This allows brands to optimize billions of decision points across the full customer journey and deliver increasingly relevant, 1:1 experiences at scale.
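The continuous-learning loop described here can be illustrated with the simplest RL setup, a multi-armed bandit. This is a toy sketch, not Braze's or OfferFit's actual system; the epsilon-greedy policy, variant names, and conversion rates are all assumptions for illustration.

```python
import random

# Toy epsilon-greedy bandit: unlike a fixed A/B test, it keeps learning
# after launch and shifts traffic toward whichever message variant is
# converting best, while still occasionally exploring alternatives.

class EpsilonGreedyBandit:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.values = {v: 0.0 for v in variants}  # running mean reward

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, variant, reward):
        # Incremental update of the running mean conversion rate.
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n

# Simulated example: "discount" truly converts at 12%, "free_shipping" at 8%.
true_rates = {"discount": 0.12, "free_shipping": 0.08}
bandit = EpsilonGreedyBandit(list(true_rates))
for _ in range(5000):
    v = bandit.choose()
    bandit.update(v, 1.0 if random.random() < true_rates[v] else 0.0)
print(max(bandit.values, key=bandit.values.get))  # usually "discount"
```

Production systems optimize far richer action spaces (channel, timing, creative, sequencing), but the feedback loop—act, observe, update, act again—is the same idea.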

More than just enhancing personalization, reinforcement learning helps marketers drive meaningful outcomes by aligning individual experiences with their most impactful business goals. It helps marketers create a deeply relevant experience for customers, while optimizing any marketer-defined goals. 

2. What kinds of behavioral or contextual data will be used to power more intelligent message optimization within your journeys? 
 
For many marketers, the challenge isn’t a lack of data—it’s making sense of it. Vast amounts of static data are of little value if they don’t translate into meaningful insights that can turn into action. And the complexity grows when trying to personalize and optimize experiences at scale across countless customers and touchpoints throughout the lifecycle.

This is where AI is a game changer. It leverages behavioral and contextual data, such as a unique user's loyalty and interaction history, to uncover insights that are not only actionable but also highly relevant to each individual. And with the rise of AI agents, we’re entering a new era where decisions about how and when to engage can be made intelligently and automatically—taking personalization efforts to the next level and delivering true business impact at scale.

3. In what ways will marketers retain creative control while letting AI automate experimentation and optimization?
 
Investment in generative AI assistants has empowered creative professionals and marketers to work more efficiently and collaboratively with AI. These tools have helped eliminate tedious tasks and bottlenecks in their process—freeing teams to focus on higher-impact work like strategy and creativity.

Now, with the rise of AI agents—systems that perceive their environment, make autonomous decisions, and take action to achieve specific goals—marketers can take creativity to the next level. These agents can run millions of simultaneous tests on creative messages, optimizing every dimension to support the most relevant experience for each individual, all at massive scale.

AI agents extend the capabilities of marketing teams by helping determine which creative components resonate most with each customer, while making sure we maintain the optimal levels of control. By putting guardrails in place, such as defining which channels or parts of the experience to optimize, and pairing agents with expert AI services that continuously fine-tune their decisions, marketers can maintain alignment with brand goals.

This balance allows marketers to stay focused on creativity and strategy, while AI dynamically experiments and personalizes content at the 1:1 level—turning great ideas into truly relevant experiences.

4. How do you see the role of AI agents evolving within customer engagement platforms over the next 2–3 years? 
 
AI agents are evolving from helpful assistants to autonomous decision-makers that will fundamentally change how marketers operate. In the future, working with agents will feel less like working with a tool, and more like working with a team of specialists: a brand strategist, copywriter, developer, data analyst and more—all ready to amplify relevant customer experiences. They will also help marketers derive insights and experiment with data at an unprecedented scale, and expand personalized experiences across millions of touchpoints. 

The evolution extends beyond reactive optimization to predictive engagement, where agents anticipate customer needs before they're expressed. This shift enables AI to handle tactical execution while marketers focus on strategy, creativity, and relationship building. The objective isn't increasing message volume, but helping marketers be more strategic and relevant about when and how they reach customers. 

This shift will elevate the marketer’s role to that of a strategic conductor, guiding AI to achieve business outcomes rather than executing manual tasks.

5. What impact should enterprise customers expect on KPIs like engagement rate, retention, and CLV from this enhanced AI decisioning? 
 
When every customer interaction is truly personalized, brands unlock reciprocal value. As AI continuously learns and adapts, it can orchestrate the experiences that are deeply relevant for each customer across touchpoints. This level of precision deepens relationships, strengthens loyalty, and positions your brand as an essential part of a customer lifecycle.

With the flexibility of AI decisioning, marketers can optimize for virtually any business goal—whether it’s increasing top-line revenue, boosting customer lifetime value, or driving more loyalty sign-ups. With Braze’s recent acquisition of OfferFit, Braze’s AI agents are already supporting millions of decisions every day—and the impact doesn’t stop there. AI agents can adapt to support whatever metric matters most to your brand, helping you move faster and smarter across channels and touchpoints.

6. How will OfferFit’s reinforcement learning capabilities reshape how you approach cross-channel customer engagement?

To resonate with consumers across both traditional and emerging channels, it’s no longer just about finding the right message for the right channel in the right moment—it’s about finding the most relevant, end-to-end experience: the right copy and creative, the right combination of messages, the right sequencing of channels, and the right moments to send each message across the customer’s journey. We see reinforcement learning as representing a fundamental shift within customer engagement. Successfully deploying machine-learning-driven and reinforcement-learning optimization is key to helping marketers achieve relevance at scale across the many dimensions of a customer's experience.

Cross-channel engagement becomes truly orchestrated rather than simply coordinated—the AI determines not only message content but also optimal channel selection, financial offers, engagement timing, and more. Each interaction teaches the system something new about that specific customer, creating a feedback loop that becomes more effective over time, and delivers more relevant experiences for customers.

We are excited by OfferFit’s capabilities and how they will shape our approach to customer engagement. OfferFit’s AI agents make 6.4B decisions per day, and millions of end users receive 1:1 personalized decisions daily, meaning marketers can orchestrate more deeply relevant experiences for their customers at scale.
 
AI-Driven Product Experiences: Personalization, Trust & Data Accuracy | Romain Fouache, Akeneo


artificial intelligence 8 Sep 2025

1. Given that nearly one-third of consumers complete purchases based on AI recommendations, how is your organization evolving its AI capabilities to influence decision-making across the customer journey?

Based on our data, we know that about 33% of consumers have completed a purchase based on AI recommendations, and 84% of them were satisfied with the purchase – a significant success rate. This tells us that most people benefit when recommendations are relevant and personalized to their needs. That's why we are always looking for ways to evolve our AI capabilities beyond the basics, such as "you previously purchased a similar item, so you might like…", and focus on ensuring that recommendations and product information are complete, consistent, and contextually relevant for every shopper, no matter where they are in their journey. It’s not just about nudging a sale; it’s about building and fostering a greater level of trust, reducing friction, and helping consumers feel more confident in their purchases.

2. How do you assess the current maturity of your product information systems to support AI-driven personalization across your digital commerce channels?

Product information maturity is a critical foundation for any successful AI strategy, especially when it comes to personalization. Akeneo helps brands assess this by providing the right foundation of technology, and through a unique blend of data audits, system diagnostics, and customer journey mapping to better understand where content is falling short. Most of the time, the challenge isn’t the lack of data; it’s that the data is siloed, inconsistent across channels, or doesn’t have the right context that AI needs. Looking at key indicators such as readiness, completeness, and consistency helps evaluate maturity. Once there is a baseline, we help customers move up the maturity curve and automate where possible to scale AI personalization efforts.
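The "completeness" indicator mentioned above can be sketched as a simple score over required attributes. The field names, threshold, and sample records below are invented for illustration; this is not Akeneo's actual API or scoring model.

```python
# Hypothetical product-data completeness check: score each record by the
# share of required attributes that are populated. Field names and sample
# data are assumptions, not Akeneo's PIM schema.

REQUIRED_FIELDS = ["name", "description", "price", "image_url", "category"]

def completeness(product: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if product.get(f))
    return filled / len(REQUIRED_FIELDS)

catalog = [
    {"name": "Trail Shoe", "description": "Waterproof trail runner",
     "price": 129.0, "image_url": "https://example.com/shoe.jpg",
     "category": "footwear"},
    {"name": "Rain Jacket", "price": 89.0},  # missing three required fields
]

for product in catalog:
    print(product["name"], f"{completeness(product):.0%}")
# Trail Shoe 100%
# Rain Jacket 40%
```

A baseline like this makes the maturity conversation concrete: low-scoring records are exactly where AI-driven enrichment (or human review) should be pointed first.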

3. How is your team measuring the impact of AI implementations on key metrics such as product return rates, customer satisfaction, and conversion efficiency?

AI isn’t valuable unless it drives business impact, so it’s important to track key metrics to ensure efficiency and accuracy. We always look to tie our implementations and product offerings to the success metrics that matter to our clients, and customer satisfaction, conversion efficiency, and return rates fall into that category. For example, when product information is incomplete, we know it leads to confusion and frustration, which in turn raises the likelihood of returns. So our AI tools automatically flag gaps, suggest improvements, scan reviews for common themes, and generate missing content, allowing brands to enrich their product content at scale.

4. With trust in AI-powered features still emerging, what measures is your organization taking to ensure transparency around how AI is used in customer interactions and data handling?

Increasing trust in AI is an issue every company is facing. Without trust, the technology will fall flat, so building it is top of mind. At Akeneo, our approach is a transparency-first mindset. That means we are crystal clear with our customers, and ultimately their customers, about how, when, where, and why AI is being used and incorporated into the product experience. For example, if an AI model is enriching product descriptions or recommending alternative options, we make sure users know it’s AI-driven and provide that context. And if AI is scanning reviews to highlight themes, we outline that clearly to consumers.

5. In what ways is your organization investing in improving product data accuracy and enriching descriptions to support AI applications such as improved search results, summaries, and personalized recommendations?

AI is only as smart as the data that it’s fed. For Akeneo, that means the product data that it’s given. A major aspect of our investment is going toward helping brands not only clean up their plethora of data and information, but also to ensure it’s AI-ready. Our PIM platform incorporates AI capabilities that can detect inconsistencies, suggest category-specific improvements, and generate richer, more contextual descriptions at scale. This is essential for powering better search results, more accurate summaries, and ultimately, recommendations. Because when marketers and product teams can collaborate and enrich the product data faster, they’re able to provide a strong customer experience.

6. How is your leadership balancing the pursuit of AI innovation with the need to establish ethical boundaries that prioritize user consent, data privacy, and transparent value exchange?

Our roots as an open-source company have instilled a deep commitment to transparency, openness, and user trust, which are values that continue to guide our approach to AI innovation. As we develop and integrate AI capabilities across our platform, we remain committed to upholding ethical principles, particularly around user consent, data privacy, and transparent value exchange. We believe that innovation should never come at the cost of trust, which is why we prioritize building AI features that are explainable, auditable, and respectful of customer data boundaries, while ensuring users understand how value is being created and shared. Our commitment to openness is the foundation for how we shape the future of AI at Akeneo.


Ethical AI in Marketing: Balancing Innovation and Trust | Sara Clodman, CMA


artificial intelligence 8 Sep 2025

1. What strategies should leaders employ to ensure their teams are adequately trained and prepared for AI integration?

The most critical strategy for AI integration is to treat it as a continuous process, not a one-time project. AI is evolving rapidly, and marketing teams need structured, sustained support to build confidence and competence. According to our recent Generative AI Readiness Survey, in collaboration with Twenty44, more than half (56 per cent) of marketers reported receiving either no training or ineffective training on AI tools. That's a clear signal that more investment is needed in practical, role-specific upskilling.

Leaders should start by setting clear expectations for how AI will be used, developing guidelines for what tools are approved, who reviews AI-generated content and how to manage privacy and consent. Training should help teams not only operate AI tools, but also review their outputs carefully. For example, AI-generated copy should be checked for accuracy, audience targeting should be monitored for fairness and organizations should ensure that customers understand when AI is being used.

To help organizations on this journey, the CMA has developed resources like the CMA Guide on AI for Marketers and the CMA Mastery Series of weekly playbooks. These resources provide practical advice on adopting AI tools, setting policies and reviewing outputs. By combining skills training with clear guidelines and review processes, leaders can help their teams use AI effectively and responsibly.

2. How can companies make their AI processes more understandable to consumers and stakeholders?

Making AI processes more understandable to consumers and stakeholders isn't just about disclosure statements; it's about designing transparency into the experience. Trust is more than a value: it's a strategic asset that determines how brands grow and endure.

Transparency means not only stating that AI is used, but helping people intuitively grasp when and how AI is playing a role in product recommendations, personalized content, and so forth.

One way to do this is by creating real-time touchpoints that signal AI involvement. For example, prompts like "Why am I seeing this?" in recommendation engines or "Reviewed by a human" tags in chatbots make AI more tangible, and more trustworthy.

Similarly, a simple note like "This content was generated with the help of AI" in emails or apps can manage expectations and build trust. Some companies are introducing "transparency hubs" or layered explanations where users can find out whether a piece of content or interaction was AI-assisted. These cues provide clarity and empower choice.

Internally, explainability dashboards help customer-facing teams respond to inquiries with confidence and provide insight into how decisions are made. Embedding explainability doesn't require revealing proprietary algorithms: it's about giving people enough information to understand how AI contributes to their experience, how targeting decisions were made, and ensuring teams are equipped to answer questions if concerns arise.

Ultimately, the brands that make their AI visible, relatable, and explainable will build trust and achieve greater success.

3. What lessons can be learned from international markets that are ahead in AI integration?

Strong governance creates a more predictable environment for innovators, encouraging responsible development and investment. It gives organizations the confidence to experiment, knowing the rules of the game. It also sets a higher bar for trust, which is increasingly a differentiator in competitive global markets.

The European Union (EU) has taken a bold and early lead in AI governance, offering a globally recognized reference point for responsible innovation with its General Data Protection Regulation (GDPR). Its emphasis on transparency, accountability, and fundamental rights has helped shape a culture of responsibility across industries and jurisdictions.

That said, being first doesn't always mean getting everything right. For example, the GDPR improved data protection rights and awareness for consumers, but its shortcomings – from interpretational ambiguity to over-compliance and operational strain – offer critical lessons for any nation developing its own framework.

Other countries, like the U.K. and Singapore, have pursued a more flexible, risk-based approach that aims to support innovation while safeguarding public trust.

Canada has the opportunity to evaluate what has, or has not, worked in other jurisdictions and to develop an approach that serves as a model for the world, while reflecting and supporting local conditions, practices and expectations.

The key lesson from these international approaches is that proactive governance builds trust. Canadian organizations can lead by embedding these principles now, without waiting for legislation:

• Establish pre-defined ethical checkpoints for all AI-powered marketing campaigns

• Use visible content labels such as "AI-generated" to maintain transparency

• Display confidence scores or "human approval" indicators in decision systems

• Conduct regular diversity and bias audits

• Publish internal reports on AI use to foster transparency

These measures build internal confidence and external trust.

4. How should marketing leaders balance innovation with ethical considerations to maintain consumer trust?

Ethics and innovation are not competing priorities; they are inextricably linked. The most durable innovations are built on an ethical foundation.

Companies have existing codes of conduct, ethics, privacy principles, and brand safety standards. But many of these were designed before the age of generative AI. Leaders should review existing ethics frameworks through an AI lens, ensuring they are updated to address issues like bias in automated targeting, transparency in AI-generated content, and accountability for machine-assisted decisions. This is not about reinventing governance — it's about evolving it to match today's reality.

An effective system ensures innovation and ethical responsibility reinforce each other.

This begins with integrating governance into AI-related decision-making from the start. Practical steps may include:

• Pre-launch ethical reviews of AI-generated content to identify bias, tone sensitivity, or fairness issues

• Ensuring inclusive representation in audience segmentation and flagging patterns that risk exclusion

• Providing clear opt-out options when AI is used for personalization

It’s also important to define accountability, which is best achieved by establishing a formal "human-in-the-loop" protocol. This approach goes beyond theory and answers the critical operational questions: Who is the designated person responsible for reviewing and approving AI outputs? Who has the authority to monitor for ethical compliance and the duty to intervene when something goes wrong? By embedding human oversight directly into the workflow, marketing leaders ensure that technology serves strategy, not the other way around.
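The accountability questions above can be made concrete as a small approval gate in the content workflow. This is a hypothetical sketch; the role names, statuses, and class design are invented to illustrate the human-in-the-loop idea, not any specific CMA or vendor tooling.

```python
# Hypothetical human-in-the-loop gate: AI-generated assets are held in a
# "pending_review" state until the designated, accountable reviewer
# approves them. All names and statuses here are illustrative.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class AIOutput:
    content: str
    status: str = "pending_review"
    reviewer: Optional[str] = None

@dataclass
class ReviewGate:
    approver: str                              # the named accountable person
    queue: List[AIOutput] = field(default_factory=list)

    def submit(self, content: str) -> AIOutput:
        item = AIOutput(content)
        self.queue.append(item)
        return item

    def approve(self, item: AIOutput, reviewer: str) -> None:
        if reviewer != self.approver:
            raise PermissionError("Only the designated approver may sign off.")
        item.status = "approved"
        item.reviewer = reviewer

gate = ReviewGate(approver="marketing_lead")
ad = gate.submit("AI-generated ad copy draft...")
gate.approve(ad, "marketing_lead")
print(ad.status)  # approved
```

The point is structural: nothing ships in the "approved" state without a named human attached to the decision, which is exactly the accountability the protocol calls for.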

Establishing these structures early helps translate values into action, making ethics a consistent part of the workflow, not an afterthought.

Organizations that treat ethics as operational, not optional, are better equipped to navigate complexity and earn lasting trust.

Integrity doesn't constrain innovation, it gives innovation staying power.

5. What emerging AI technologies do you foresee having the most significant impact on marketing strategies in the next five years?

Over the next five years, AI will evolve from a creative assistant into a dynamic co-pilot: able to personalize content, adapt journeys and optimize campaigns across channels with minimal human input. The most significant impact won't come from tools that merely automate tasks, but from intelligent systems that can think, learn, and act autonomously.

A major shift will be the rise of AI agents — intelligent systems that don't just recommend actions but autonomously execute them. These agents will manage complex tasks like campaign orchestration, budget adjustments, and real-time response to customer behaviour, enabling a move from reactive to proactive, autonomous marketing.

Predictive analytics and adaptive content engines will also play a growing role. Marketers will be able to tailor experiences based on real-time signals and audience context, while generative tools will scale voice, visual, and written creative across platforms.

Perhaps most importantly, AI is advancing ethical and inclusive marketing through tools that analyze social sentiment, generate accessible content like captions and translations, and adapt messaging for diverse communities.

The key differentiator won't be the tools themselves, but how responsibly they're deployed. The most successful marketers will use AI as a creative and analytical partner, maintaining human oversight to ensure alignment with brand values, ethics, and consumer trust.

The future belongs to marketers who design with both intelligence and intention—letting AI amplify their values, not just their velocity.

6. What role do industry associations play in guiding ethical AI adoption, and how can companies collaborate with such bodies to shape the future of marketing?

Industry associations provide an essential platform for setting standards, sharing knowledge and fostering collaboration as AI adoption grows. By offering guidance, convening expert voices and translating emerging regulations into actionable practices, associations help businesses navigate AI's complexities with more confidence.

Associations play a vital liaison role, ensuring the marketing industry's perspective is represented in policy discussions and regulatory development. They also help nurture best practices by developing shared frameworks, toolkits, and use cases that companies can adopt and scale. As educators, they elevate industry competence by upskilling marketers and leaders on the risks, opportunities, and operational realities of AI.

Companies can collaborate by participating in working groups, contributing to discussions about ethical guidelines, or sharing their own case studies and lessons learned. This collaboration not only helps shape the resources and standards that emerge but also ensures businesses stay connected to evolving best practices.

Associations also serve as a bridge between marketers, policymakers and technical experts. Engaging with these groups enables companies to anticipate regulatory changes, align with industry expectations and build AI strategies that balance innovation with accountability. By working together, the marketing community can help ensure AI delivers long-term value while protecting trust and fairness.


   
