Marketing agencies are uniquely positioned as custodians of client data across dozens of platforms. How has this role evolved in terms of security responsibility, and why is 2026 a critical year for agencies to address this?
How can agencies transform their security practices from a checkbox requirement into an actual competitive advantage during pitches and contract renewals?
AI-powered phishing attacks are becoming increasingly sophisticated. Can you describe what modern social engineering attacks targeting marketing agencies actually look like in 2026, and what makes agencies particularly vulnerable to these AI-driven threats compared to other industries?
Beyond technical solutions, what role does human awareness and training play in defending against these evolving threats?
How should agencies think about credential management differently when they're not just protecting their own data, but serving as the gateway to client accounts across platforms?
If you could recommend three immediate actions that agencies should take this quarter to strengthen their security posture, what would they be?
For agencies that have historically viewed cybersecurity investments as cost centers, how should they reframe this thinking given the current threat landscape?
Looking ahead through 2026, what emerging threats should agencies be preparing for now, even if they haven't fully materialized yet?
Predictive modeling then builds on those signals to forecast outcomes, scenario-test media and creative investments, and evaluate trade-offs before decisions are made. As measurement systems become more advanced, marketers are moving away from trying to perfectly reconstruct a journey that no longer exists and instead using AI-driven modeling to plan what comes next with greater confidence, even as privacy constraints and signal loss accelerate.
The result is a move from reactive optimization to proactive, forward-looking planning, where reporting becomes a decision engine rather than a justification exercise.
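To make the shift from reactive optimization to forward-looking planning concrete, here is a minimal sketch of a scenario test for media investment. It is not drawn from any specific vendor's model: the channel names, response-curve shape, and parameter values are all invented for illustration, standing in for whatever fitted curves a real measurement system would supply.

```python
# Minimal sketch of scenario-testing media spend with a saturating
# (diminishing-returns) response curve. Channels and parameters are
# illustrative assumptions, not figures from the article.

def predicted_conversions(spend: float, max_response: float, half_saturation: float) -> float:
    """Simple saturating response: conversions approach max_response as spend grows."""
    return max_response * spend / (spend + half_saturation)

# Hypothetical channels with assumed fitted-curve parameters.
channels = {
    "paid_search": {"max_response": 5000, "half_saturation": 40_000},
    "paid_social": {"max_response": 3000, "half_saturation": 25_000},
}

def scenario(budget_split: dict) -> float:
    """Forecast total conversions for a proposed budget allocation."""
    return sum(
        predicted_conversions(spend, **channels[name])
        for name, spend in budget_split.items()
    )

# Compare two plans before committing any spend.
plan_a = {"paid_search": 60_000, "paid_social": 20_000}
plan_b = {"paid_search": 40_000, "paid_social": 40_000}
print(f"Plan A forecast: {scenario(plan_a):.0f} conversions")
print(f"Plan B forecast: {scenario(plan_b):.0f} conversions")
```

The point of the sketch is the workflow, not the math: trade-offs are evaluated against a forecast before budgets move, rather than justified after the fact.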
I’m honored to be a guest on an upcoming episode, where I’ll dive into AI architecture and share how organizations can set themselves up for success with AI. If you’re eager to gain actionable insights and hear from industry leaders on how they’re driving innovation in marketing and advertising, make sure to tune in!
1. Given that nearly one-third of consumers complete purchases based on AI recommendations, how is your organization evolving its AI capabilities to influence decision-making across the customer journey?
Based on our data, about 33% of consumers have completed a purchase based on AI recommendations, and 84% of them were satisfied with the purchase – a significant success rate. That tells us most people genuinely benefit from recommendations that are relevant and personalized to their needs. It's why we keep evolving our AI capabilities beyond the basics of “you previously purchased a similar item so you might like…” and focus on ensuring that recommendations and product information are complete, consistent, and contextually relevant for every shopper, wherever they are in their journey. It's not just about nudging a sale; it's about building greater trust, reducing friction, and helping consumers feel more confident in their purchases.
2. How do you assess the current maturity of your product information systems to support AI-driven personalization across your digital commerce channels?
Product information maturity is a critical foundation for any successful AI strategy, especially when it comes to personalization. Akeneo helps brands assess it by providing the right technology foundation alongside a blend of data audits, system diagnostics, and customer journey mapping to pinpoint where content is falling short. Most of the time the challenge isn't a lack of data; it's that the data is siloed, inconsistent across channels, or missing the context AI needs. Key indicators such as readiness, completeness, and consistency help evaluate maturity. Once there is a baseline, we help customers move up the maturity curve and automate where possible to scale AI personalization efforts.
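As a rough illustration of how indicators like completeness and consistency can be quantified, here is a small sketch over hypothetical product records. The required fields, sample data, and scoring rules are assumptions for illustration, not Akeneo's actual maturity model.

```python
# Rough illustration of scoring product-content completeness and consistency.
# Field names and records are hypothetical; this is not Akeneo's scoring model.
REQUIRED_FIELDS = ["title", "description", "image_url", "size", "material"]

products = [
    {"sku": "A1", "title": "Wool sweater", "description": "Warm knit",
     "image_url": "https://example.com/a1.jpg", "size": "M", "material": "wool"},
    {"sku": "B2", "title": "Wool sweater", "description": "",
     "image_url": "https://example.com/b2.jpg", "size": "m", "material": None},
]

def completeness(product: dict) -> float:
    """Share of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if product.get(f))
    return filled / len(REQUIRED_FIELDS)

def consistency(records: list, field: str) -> float:
    """Share of records that use the most common exact value for a field."""
    values = [str(r.get(field) or "").strip() for r in records]
    if not any(values):
        return 0.0
    dominant = max(set(values), key=values.count)
    return values.count(dominant) / len(values)

for p in products:
    print(p["sku"], f"completeness={completeness(p):.0%}")
print("size consistency:", f"{consistency(products, 'size'):.0%}")
```

Even a toy score like this surfaces the typical problems described above: missing descriptions drag completeness down, and "M" versus "m" shows up immediately as a consistency gap.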
3. How is your team measuring the impact of AI implementations on key metrics such as product return rates, customer satisfaction, and conversion efficiency?
AI isn't valuable unless it drives business impact, so it's important to track key metrics to ensure efficiency and accuracy. We always look to tie our implementations and product offerings to the success metrics that matter to our clients, and customer satisfaction, conversion efficiency, and return rates fall squarely into that category. For example, when product information is incomplete, it leads to confusion and frustration, which makes returns more likely. Using AI to automatically flag gaps, suggest improvements, scan reviews for common themes, and generate missing content allows brands to enrich their product content and see those metrics move.
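As a toy example of the review scanning described here, the snippet below counts how often recurring themes appear across customer reviews. The theme keywords and sample reviews are invented for illustration; a production system would use far richer language models than keyword matching.

```python
# Toy example of scanning customer reviews for recurring themes.
# Theme keywords and sample reviews are invented for illustration only.
from collections import Counter

THEMES = {
    "sizing": ["runs small", "runs large", "too tight", "size up"],
    "quality": ["fell apart", "well made", "cheap material", "durable"],
    "shipping": ["arrived late", "fast delivery", "damaged box"],
}

reviews = [
    "Lovely sweater but it runs small, size up if in doubt.",
    "Cheap material, the seam fell apart after two washes.",
    "Runs small and arrived late, disappointing.",
]

theme_counts = Counter()
for review in reviews:
    text = review.lower()
    for theme, phrases in THEMES.items():
        if any(phrase in text for phrase in phrases):
            theme_counts[theme] += 1

# Surface the most common themes so content teams know what to fix first
# (e.g. add a "runs small" note to the size guide to reduce returns).
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} of {len(reviews)} reviews")
```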
4. With trust in AI-powered features still emerging, what measures is your organization taking to ensure transparency around how AI is used in customer interactions and data handling?
Increasing trust in AI is an issue every company is facing; without trust, the technology falls flat, so building it is top of mind. At Akeneo, our approach is transparency-first. That means we are crystal clear with our customers, and ultimately their customers, about how, when, where, and why AI is being used in the product experience. For example, if an AI model is enriching product descriptions or recommending alternative options, we make sure users know it's AI-driven and provide that context. If AI is scanning reviews to highlight themes, we state that clearly to consumers.
5. In what ways is your organization investing in improving product data accuracy and enriching descriptions to support AI applications such as improved search results, summaries, and personalized recommendations?
AI is only as smart as the data it's fed – for Akeneo, that means the product data it's given. A major part of our investment goes toward helping brands not only clean up their wealth of data and information, but also ensure it's AI-ready. Our PIM platform incorporates AI capabilities that can detect inconsistencies, suggest category-specific improvements, and generate richer, more contextual descriptions at scale. That is essential for powering better search results, more accurate summaries, and ultimately better recommendations. When marketers and product teams can collaborate and enrich product data faster, they can deliver a stronger customer experience.
6. How is your leadership balancing the pursuit of AI innovation with the need to establish ethical boundaries that prioritize user consent, data privacy, and transparent value exchange?
Our roots as an open-source company have instilled a deep commitment to transparency, openness, and user trust, which are values that continue to guide our approach to AI innovation. As we develop and integrate AI capabilities across our platform, we remain committed to upholding ethical principles, particularly around user consent, data privacy, and transparent value exchange. We believe that innovation should never come at the cost of trust, which is why we prioritize building AI features that are explainable, auditable, and respectful of customer data boundaries, while ensuring users understand how value is being created and shared. Our commitment to openness is the foundation for how we shape the future of AI at Akeneo.
1. What strategies should leaders employ to ensure their teams are adequately trained and prepared for AI integration?
The most critical strategy for AI integration is to treat it as a continuous process, not a one-time project. AI is evolving rapidly, and marketing teams need structured, sustained support to build confidence and competence. According to our recent Generative AI Readiness Survey, in collaboration with Twenty44, more than half (56 per cent) of marketers reported receiving either no training or ineffective training on AI tools. That's a clear signal that more investment is needed in practical, role-specific upskilling.
Leaders should start by setting clear expectations for how AI will be used, developing guidelines for what tools are approved, who reviews AI-generated content and how to manage privacy and consent. Training should help teams not only operate AI tools, but also review their outputs carefully. For example, AI-generated copy should be checked for accuracy, audience targeting should be monitored for fairness and organizations should ensure that customers understand when AI is being used.
To help organizations on this journey, the CMA has developed resources like the CMA Guide on AI for Marketers and the CMA Mastery Series of weekly playbooks. These resources provide practical advice on adopting AI tools, setting policies and reviewing outputs. By combining skills training with clear guidelines and review processes, leaders can help their teams use AI effectively and responsibly.
2. How can companies make their AI processes more understandable to consumers and stakeholders?
Making AI processes more understandable to consumers and stakeholders isn't just about disclosure statements; it's about designing transparency into the experience. Trust is more than a value: it's a strategic asset that determines how brands grow and endure.
Transparency means not only stating that AI is used, but helping people intuitively grasp when and how AI is playing a role in product recommendations, personalized content, and so forth.
One way to do this is by creating real-time touchpoints that signal AI involvement. For example, prompts like "Why am I seeing this?" in recommendation engines or "Reviewed by a human" tags in chatbots make AI more tangible, and more trustworthy.
Similarly, a simple note like "This content was generated with the help of AI" in emails or apps can manage expectations and build trust. Some companies are introducing "transparency hubs" or layered explanations where users can find out whether a piece of content or interaction was AI-assisted. These cues provide clarity and empower choice.
Internally, explainability dashboards help customer-facing teams respond to inquiries with confidence and provide insight into how decisions are made. Embedding explainability doesn't require revealing proprietary algorithms: it's about giving people enough information to understand how AI contributes to their experience, how targeting decisions were made, and ensuring teams are equipped to answer questions if concerns arise.
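To make these cues concrete, here is a minimal sketch of a recommendation record that carries its own plain-language explanation, so the same data can power a consumer-facing "Why am I seeing this?" prompt and an internal explainability view. The field names and example signals are assumptions for illustration, not any particular platform's API.

```python
# Minimal sketch: attach a "Why am I seeing this?" explanation to a recommendation
# so it can drive both a consumer-facing prompt and an internal review log.
# Field names and example signals are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    item_id: str
    score: float
    signals: list = field(default_factory=list)  # plain-language reasons
    ai_generated: bool = True                    # drives an "AI-assisted" label in the UI

    def why_am_i_seeing_this(self) -> str:
        reasons = "; ".join(self.signals) or "general popularity"
        return f"Recommended because: {reasons}."

rec = ExplainedRecommendation(
    item_id="sku-123",
    score=0.87,
    signals=["you browsed similar rain jackets this week", "popular in your region"],
)

print(rec.why_am_i_seeing_this())  # shown to the consumer on request
print({"item": rec.item_id, "score": rec.score, "signals": rec.signals})  # logged for internal review
```

Nothing proprietary is exposed here: the explanation is a human-readable summary of the signals used, which is usually enough to give people clarity and give teams something to point to when questions arise.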
Ultimately, the brands that make their AI visible, relatable, and explainable will build trust and achieve greater success.
3. What lessons can be learned from international markets that are ahead in AI integration?
Strong governance creates a more predictable environment for innovators, encouraging responsible development and investment. It gives organizations the confidence to experiment, knowing the rules of the game. It also sets a higher bar for trust, which is increasingly a differentiator in competitive global markets.
The European Union (EU) took a bold and early lead in data protection and digital governance with its General Data Protection Regulation (GDPR), which remains a globally recognized reference point for responsible innovation. Its emphasis on transparency, accountability, and fundamental rights has helped shape a culture of responsibility across industries and jurisdictions.
That said, being first doesn't always mean getting everything right. For example, the GDPR improved data protection rights and awareness for consumers, but its shortcomings – from interpretational ambiguity to over-compliance and operational strain – offer critical lessons for any nation developing its own framework.
Other countries, like the U.K. and Singapore, have pursued a more flexible, risk-based approach that aims to support innovation while safeguarding public trust.
Canada has the opportunity to evaluate what has, or has not, worked in other jurisdictions and to develop an approach that serves as a model for the world, while reflecting and supporting local conditions, practices and expectations.
The key lesson from these international approaches is that proactive governance builds trust. Canadian organizations can lead by embedding these principles now, without waiting for legislation:
• Establish pre-defined ethical checkpoints for all AI-powered marketing campaigns
• Use visible content labels such as "AI-generated" to maintain transparency
• Display confidence scores or "human approval" indicators in decision systems
• Conduct regular diversity and bias audits
• Publish internal reports on AI use to foster transparency
These measures build internal confidence and external trust.
4. How should marketing leaders balance innovation with ethical considerations to maintain consumer trust?
Ethics and innovation are not competing priorities; they are inextricably linked. The most durable innovations are built on an ethical foundation.
Companies have existing codes of conduct, ethics, privacy principles, and brand safety standards. But many of these were designed before the age of generative AI. Leaders should review existing ethics frameworks through an AI lens, ensuring they are updated to address issues like bias in automated targeting, transparency in AI-generated content, and accountability for machine-assisted decisions. This is not about reinventing governance — it's about evolving it to match today's reality.
An effective system ensures innovation and ethical responsibility reinforce each other.
This begins with integrating governance into AI-related decision-making from the start. Practical steps may include:
• Pre-launch ethical reviews of AI-generated content to identify bias, tone sensitivity, or fairness issues
• Ensuring inclusive representation in audience segmentation and flagging patterns that risk exclusion
• Providing clear opt-out options when AI is used for personalization
It’s also important to define accountability, which is best achieved by establishing a formal "human-in-the-loop" protocol. This approach goes beyond theory and answers the critical operational questions: Who is the designated person responsible for reviewing and approving AI outputs? Who has the authority to monitor for ethical compliance and the duty to intervene when something goes wrong? By embedding human oversight directly into the workflow, marketing leaders ensure that technology serves strategy, not the other way around.
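One way to picture such a protocol is as an approval gate in code: AI-generated content cannot be published until a named human reviewer has signed off. The sketch below is illustrative only; the roles, statuses, and checklist are assumptions, not a prescribed CMA workflow.

```python
# Minimal sketch of a human-in-the-loop approval gate for AI-generated marketing
# content. Roles, statuses, and review notes are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDraft:
    draft_id: str
    body: str
    status: str = "pending_review"        # pending_review -> approved / rejected
    reviewer: Optional[str] = None        # the designated, accountable person
    review_notes: list = field(default_factory=list)
    reviewed_at: Optional[datetime] = None

def review(draft: AIDraft, reviewer: str, approve: bool, notes: str = "") -> AIDraft:
    """Only a named human reviewer can move a draft out of pending_review."""
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    if notes:
        draft.review_notes.append(notes)
    draft.status = "approved" if approve else "rejected"
    return draft

def publish(draft: AIDraft) -> None:
    """Publishing is blocked unless a human has approved the draft."""
    if draft.status != "approved":
        raise PermissionError(f"Draft {draft.draft_id} has not been approved by a human reviewer.")
    print(f"Publishing draft {draft.draft_id}, approved by {draft.reviewer}.")

draft = AIDraft(draft_id="email-2026-03", body="Spring campaign copy (AI-generated)")
review(draft, reviewer="j.smith", approve=True, notes="Checked claims and tone; no targeting concerns.")
publish(draft)
```

The value is that accountability becomes explicit: the workflow records who approved what and when, and blocks publication when no one has.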
Establishing these structures early helps translate values into action, making ethics a consistent part of the workflow, not an afterthought.
Organizations that treat ethics as operational, not optional, are better equipped to navigate complexity and earn lasting trust.
Integrity doesn't constrain innovation, it gives innovation staying power.
5. What emerging AI technologies do you foresee having the most significant impact on marketing strategies in the next five years?
Over the next five years, AI will evolve from a creative assistant into a dynamic co-pilot: able to personalize content, adapt journeys and optimize campaigns across channels with minimal human input. The most significant impact won't come from tools that merely automate tasks, but from intelligent systems that can think, learn, and act autonomously.
A major shift will be the rise of AI agents — intelligent systems that don't just recommend actions but autonomously execute them. These agents will manage complex tasks like campaign orchestration, budget adjustments, and real-time response to customer behaviour, enabling a move from reactive to proactive, autonomous marketing.
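As a deliberately simplified picture of that observe-decide-act loop, the sketch below shifts budget from a weaker channel to a stronger one based on observed cost per acquisition. The channels, metrics, and thresholds are invented; a production agent would add guardrails, spend caps, and human approval steps before acting on its own.

```python
# Deliberately simplified illustration of an "agentic" budget adjustment loop:
# observe performance, decide, act. Channels, metrics, and the shift fraction
# are invented; real systems would add guardrails and human approval.
budgets = {"paid_search": 50_000.0, "paid_social": 50_000.0}
cost_per_acquisition = {"paid_search": 42.0, "paid_social": 65.0}  # observed signals

def rebalance(budgets: dict, cpa: dict, shift_fraction: float = 0.10) -> dict:
    """Shift a fraction of budget from the worst-performing channel to the best."""
    best = min(cpa, key=cpa.get)
    worst = max(cpa, key=cpa.get)
    if best == worst:
        return budgets
    shift = budgets[worst] * shift_fraction
    budgets[worst] -= shift
    budgets[best] += shift
    return budgets

print(rebalance(budgets, cost_per_acquisition))
# {'paid_search': 55000.0, 'paid_social': 45000.0}
```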
Predictive analytics and adaptive content engines will also play a growing role. Marketers will be able to tailor experiences based on real-time signals and audience context, while generative tools will scale voice, visual, and written creative across platforms.
Perhaps most importantly, AI is advancing ethical and inclusive marketing through tools that analyze social sentiment, generate accessible content like captions and translations, and adapt messaging for diverse communities.
The key differentiator won't be the tools themselves, but how responsibly they're deployed. The most successful marketers will use AI as a creative and analytical partner, maintaining human oversight to ensure alignment with brand values, ethics, and consumer trust.
The future belongs to marketers who design with both intelligence and intention—letting AI amplify their values, not just their velocity.
6. What role do industry associations play in guiding ethical AI adoption, and how can companies collaborate with such bodies to shape the future of marketing?
Industry associations provide an essential platform for setting standards, sharing knowledge and fostering collaboration as AI adoption grows. By offering guidance, convening expert voices and translating emerging regulations into actionable practices, associations help businesses navigate AI's complexities with more confidence.
Associations play a vital liaison role, ensuring the marketing industry's perspective is represented in policy discussions and regulatory development. They also help nurture best practices by developing shared frameworks, toolkits, and use cases that companies can adopt and scale. As educators, they elevate industry competence by upskilling marketers and leaders on the risks, opportunities, and operational realities of AI.
Companies can collaborate by participating in working groups, contributing to discussions about ethical guidelines, or sharing their own case studies and lessons learned. This collaboration not only helps shape the resources and standards that emerge but also ensures businesses stay connected to evolving best practices.
Associations also serve as a bridge between marketers, policymakers and technical experts. Engaging with these groups enables companies to anticipate regulatory changes, align with industry expectations and build AI strategies that balance innovation with accountability. By working together, the marketing community can help ensure AI delivers long-term value while protecting trust and fairness.