What scaling AI reveals about governing personalisation

artificial intelligence, personalization


Published on 12th Feb, 2026

By Mark Drasutis, Head of Value, APJ, Amplitude
 
As brands increasingly seek to understand and act on customer behaviour, they need to continuously analyse user journeys, identify patterns and friction, and recommend or execute next steps in real time to deliver true personalisation. AI is accelerating this shift, redefining personalisation by moving brands beyond static journeys to experiences that adapt dynamically to customer behaviour.
 
Australia’s National AI Plan sends a clear message to marketing and product teams: AI can only scale if it is safe, transparent and responsibly governed. Yet, while AI capabilities are advancing rapidly toward greater autonomy, most organisational governance remains manual and fragmented.
 
With conversational AI and agentic AI becoming the primary interfaces for digital experiences, governance needs to operate at the same speed and complexity as the systems it oversees. Brands need capability uplift and accountability in equal measure, or they risk falling behind.

The trust gap limiting AI-driven personalisation 


AI-driven personalisation is being held back not by technology but by a gap in trust and transparency, driven by weak governance, unclear accountability and a lack of workflows to manage AI safely. This matters because trust in AI remains fragile in Australia. A University of Melbourne-led study found that while half of Australians already use AI regularly, only one in three feel confident trusting it.
 

That trust gap is widening as personalisation evolves. Traditional rules-based marketing, built on fixed segments, pre-defined journeys and manual triggers, is being replaced by real-time, generative personalisation where decisions are made continuously by AI. This shift demands new operating models, stronger governance frameworks and far greater visibility into how AI systems make decisions.
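To make that shift concrete, here is a deliberately simplified Python sketch: the first function is the old fixed-rule pattern; the second re-makes the decision on every interaction from a model scored against live behaviour. All segment names and the scoring stub are hypothetical, not any vendor's actual logic.

```python
# Old pattern: a fixed segment and a pre-defined trigger, evaluated on a schedule.
def rules_based_next_step(customer: dict) -> str:
    if customer["segment"] == "lapsed" and customer["days_inactive"] > 30:
        return "send_winback_email"
    return "no_action"


def score_next_action(customer: dict, recent_events: list) -> dict:
    """Stand-in for a real decisioning model; returns a score per candidate action."""
    engagement = min(len(recent_events) / 10, 1.0)
    return {
        "send_winback_email": 0.5 if customer["days_inactive"] > 30 else 0.1,
        "show_loyalty_offer": engagement,
        "no_action": 0.3,
    }


# New pattern: the decision is re-made continuously from live behaviour.
def ai_driven_next_step(customer: dict, recent_events: list) -> str:
    scores = score_next_action(customer, recent_events)
    return max(scores, key=scores.get)  # highest-scoring action wins
```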

As agentic AI becomes more embedded in personalisation, teams are moving beyond static segmentation toward systems that can learn continuously from behaviour, test autonomously and adapt experiences in the moment. But even the most advanced systems will fail if customers don’t trust the intelligence behind them.


Australia’s National AI Plan reinforces that trust and transparency are not optional: they are the foundation for safe, scalable AI-driven personalisation. Done well, AI-driven personalisation lets brands deliver meaningful, adaptive experiences without compromising privacy, fairness or customer confidence.

AI governance needs to be built in, not bolted on 


As AI takes on a bigger role in shaping personalised customer experiences, the governance behind those systems becomes just as important as the technology itself. Employees increasingly use AI tools independently, outside formal approval channels, creating security and compliance risks. Organisations can no longer rely on ad hoc controls; they need transparent systems that formalise how AI is accessed, monitored and governed so teams can innovate without losing control. Boards and executives are accountable for AI strategy, governance and ethical application, which means oversight must be enterprise-grade, not experimental.


Effective guardrails start with visibility. As AI drives personalised decisions, brands need full clarity on how those decisions are made: which data an AI model uses to reach a decision, the prompts, models and parameters behind an output, and clear logs that show how AI shapes the paths customers take and the outcomes they experience. Without that transparency, it becomes impossible to spot bias, drift or unintended behaviour.
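As one concrete illustration, the sketch below shows the kind of record such a log might capture, written in Python. The schema and the `log_decision` helper are hypothetical assumptions, not any vendor's actual format; the point is that every AI-shaped decision carries enough context to be reconstructed and audited later.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid


@dataclass
class AIDecisionRecord:
    """One auditable record per AI-driven personalisation decision (hypothetical schema)."""
    model: str              # versioned model identifier behind the output
    prompt: str             # the prompt, or a prompt-template ID
    parameters: dict        # e.g. temperature and other generation settings
    input_data_refs: list   # IDs of the customer data points the model used
    output: str             # the decision or content the model produced
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: AIDecisionRecord) -> None:
    """Append the record to an audit trail; real systems would use durable storage."""
    with open("ai_decision_audit.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```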


What matters in practice is real-time visibility. When teams can see how AI-driven decisions influence user behaviour, conversion and retention, they can assess whether those decisions are delivering value or creating unintended consequences. This kind of visibility is what allows personalisation to move from experimentation to something dependable.
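One way to make that visibility actionable is an automated guardrail that compares the AI-driven experience against a baseline and flags it for human review the moment key metrics slip below an agreed floor. A minimal sketch, with hypothetical metric names and thresholds:

```python
# Hypothetical guardrail: pause an AI-driven experience whose live metrics
# fall below agreed floors, so humans can review it before it scales further.
GUARDRAILS = {
    "conversion_rate": 0.90,  # must retain at least 90% of baseline conversion
    "retention_rate": 0.95,   # must retain at least 95% of baseline retention
}


def check_guardrails(ai_metrics: dict, baseline_metrics: dict) -> list:
    """Return a description of every metric where the AI variant breaches its floor."""
    breaches = []
    for metric, floor in GUARDRAILS.items():
        ratio = ai_metrics[metric] / baseline_metrics[metric]
        if ratio < floor:
            breaches.append(f"{metric} at {ratio:.0%} of baseline (floor {floor:.0%})")
    return breaches


breaches = check_guardrails(
    ai_metrics={"conversion_rate": 0.031, "retention_rate": 0.42},
    baseline_metrics={"conversion_rate": 0.036, "retention_rate": 0.44},
)
if breaches:
    print("Pause rollout and escalate for review:", breaches)
```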


Some early adopters are already putting this into practice. ZIP, an Australian fintech company, is using AI agents on Amplitude’s MCP server to embed its domain knowledge directly into its LLM workflows, improving how personalised journeys are monitored and optimised. The results: a 60% increase in customers starting an additional repayment flow and the removal of more than 4,000 days of navigation friction.
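For teams exploring a similar pattern, the sketch below shows the general shape of exposing domain knowledge to an LLM agent as a Model Context Protocol (MCP) tool, using the open-source `mcp` Python SDK. The tool name and the eligibility rule are illustrative assumptions only, not Amplitude’s or ZIP’s actual implementation.

```python
# Illustrative only: domain knowledge served as an MCP tool so that LLM agents
# can consult it during personalisation workflows. The tool and its rule are
# hypothetical, not any vendor's real server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("domain-knowledge")


@mcp.tool()
def repayment_flow_eligibility(balance: float, days_since_last_payment: int) -> str:
    """Encode a business rule the agent must respect when suggesting journeys."""
    if balance <= 0:
        return "ineligible: no outstanding balance"
    if days_since_last_payment < 7:
        return "eligible, but suppress prompts within 7 days of a payment"
    return "eligible: safe to surface the additional-repayment journey"


if __name__ == "__main__":
    mcp.run()  # serves the tool to MCP-compatible LLM clients over stdio
```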


This visibility makes it possible to intervene early, course-correct when required and prevent minor issues from scaling into larger problems. For marketing and product teams, this means AI-driven personalisation becomes safer, more predictable and more aligned with actual customer behaviour. AI governance cannot be patched on later. It must be embedded into the core of decisioning systems so AI operates safely, predictably and in line with both regulation and customer expectations.

Invest in continuous oversight for continuous experimentation
 
As the National AI Plan makes clear, real-time personalisation means AI is constantly adapting, which requires continuous oversight rather than periodic manual checks.
 
When AI underpins the customer experience, errors compound quickly: automation without continuous oversight risks locking in incorrect decisions at scale. Continuous oversight is what keeps experimentation safe, explainable and aligned with customer expectations of personalisation.
 
AI agents are most effective when they work alongside humans, not in place of them. They can monitor customer behaviour, surface opportunities and support controlled experimentation at speed, while humans remain responsible for setting strategy, defining guardrails and approving customer-facing changes. A leading Australian bank currently using Amplitude’s AI Agents has advanced its data-driven experimentation, uncovering key customer behavioural patterns and traffic shifts with central human oversight. Autonomy can be adjusted over time as confidence grows, but accountability remains firmly with people.
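A minimal sketch of that division of labour, assuming a simple confidence threshold and an approved-changes list (both hypothetical): agent proposals inside the guardrails and above the threshold apply automatically, and everything else waits for a person.

```python
# Hypothetical human-in-the-loop gate for agent-proposed personalisation changes.
# Raising AUTONOMY_THRESHOLD tightens oversight; lowering it grants the agent
# more autonomy as confidence in its decisions grows.
AUTONOMY_THRESHOLD = 0.9
APPROVED_CHANGE_TYPES = {"copy_variant", "journey_reorder"}  # agreed guardrails


def route_proposal(change_type: str, confidence: float) -> str:
    """Decide whether an agent's proposed change auto-applies or awaits a human."""
    if change_type not in APPROVED_CHANGE_TYPES:
        return "rejected: outside approved guardrails"
    if confidence >= AUTONOMY_THRESHOLD:
        return "auto-applied within approved parameters"
    return "queued for human approval"


print(route_proposal("copy_variant", 0.95))    # auto-applied within approved parameters
print(route_proposal("journey_reorder", 0.70)) # queued for human approval
print(route_proposal("pricing_change", 0.99))  # rejected: outside approved guardrails
```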


This human-in-the-loop model ensures personalised experiences adapt based on real customer behaviour, while still reflecting brand intent, fairness standards and evolving privacy expectations. Products can optimise continuously, but only within approved parameters, keeping customer experience, safety and performance aligned.

AI has the potential to fundamentally reshape personalisation, but only when trust, transparency and governance scale alongside the technology. Without them, AI accelerates risk and limits growth. With them, it becomes a powerful and defensible competitive advantage.


The brands that succeed won’t be those that deploy the most AI, but those that govern it with intent and discipline. Now is the time to move beyond experimentation: strengthen oversight, embed clear governance and build transparent data foundations that allow AI to scale safely and deliver personalised experiences customers genuinely trust.