Martech Edge | Best News on Marketing and Technology



Perfecting Personalized Promotions: The secret to achieving advanced personalization, and what’s holding retailers back

marketing 25 Feb 2026

Jeff Baskin, Chief Revenue Officer, Eagle Eye

Promotions are a key performance area for all retailers, and effectively implementing truly personalized offers at scale has been a goal for enterprise retailers for decades. Eagle Eye’s CRO Jeff Baskin shares his thoughts on how technology is making this goal more attainable, the legacy approaches that are holding retailers back, and what impacts they can expect from genuine one-to-one promotional engagement.  


1. What are the biggest inefficiencies you see in traditional promotional models today and why are so many retailers still relying on broad, mass-discount approaches? 


The biggest flaw with traditional mass discounting is that it often incentivizes customers who would have purchased anyway, while failing to influence behavior where it matters. This inefficiency is largely driven by legacy systems and the inertia of “what’s always worked.” Most retail infrastructure was built to support blanket offers or, at best, broadly segmented campaigns, not individualized promotions at scale. For years, that was effective, but as competition intensifies and technology improves, more retailers are embracing approaches that enable one-to-one engagement. After all, dynamically creating an offer for the exact brand of organic snacks the individual customer is most likely to respond to at the exact discount level most likely to prompt them to action is inherently more effective – and efficient – than placing a generic discount on similar items in the weekly circular.    


2. Why has true one-to-one promotional personalization been so difficult to achieve? 


Manual processes, unstructured data, and legacy platforms are the main roadblocks to true one-to-one personalization and are what keep retailers relying on broad-based approaches. There are also two issues of scale: first, the volume of data (customer data, SKUs, multiple sales channel data) retailers must manage has increased exponentially; and second, the ability to deploy personalized offers at enterprise level across millions of transactions remains out of reach. Few incumbent promotional systems support the real-time decisioning or on-the-fly offer creation necessary to deliver unique promotions to individual customers across a multi-store network, let alone a portfolio of banners.  


3. What has changed (technologically or operationally) that is now making individualized promotions possible at enterprise scale? 


From a tech perspective, AI and machine learning models can now analyze behavior and generate custom offers in milliseconds. Cloud infrastructure handles the computational demands of real-time adjudication across millions of shoppers or loyalty members. Operationally, retailers now have the customer data and digital touchpoints necessary to identify individual shoppers and deliver offers at checkout, online and in-app. These two components are equally important; even the most advanced AI will deliver irrelevant promotions without data-based insights into what individual customers care about, and all the customer data in the world is useless without the technology to make it actionable.


4. Retailers often struggle to know whether an offer is actually influencing behavior or simply rewarding shoppers who would have purchased anyway. How can retailers start measuring true incremental impact? 


Attribution at the individual shopper level is essential. You need systems that track each customer's baseline purchasing patterns, then measure how behavior changes when specific offers are delivered. Closed-loop reporting that connects offer allocation, redemption, and actual sales lift reveals which promotions are working. Of course, this requires technology that follows the complete customer journey from offer to purchase, across platforms and channels, and incorporating both marketing-exclusive systems (like retail media networks) and traditionally analog interaction points (like physical stores). 
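The baseline-versus-lift logic described here can be sketched as a simple difference-in-differences calculation: compare how spend changes among offer recipients against how it changes in a holdout group over the same period. This is a hypothetical illustration with made-up figures, not Eagle Eye's actual measurement system.

```python
# Illustrative only: estimating incremental lift from an offer using a
# holdout control group and a pre-period baseline (difference-in-differences).
# All names and figures below are hypothetical.

def incremental_lift(treated_before, treated_after, control_before, control_after):
    """Per-customer incremental spend attributable to the offer.

    Each argument is a list of per-customer spend totals for that group
    and period. The control group's change estimates what would have
    happened anyway; subtracting it removes seasonality and the
    "would have purchased regardless" effect.
    """
    avg = lambda xs: sum(xs) / len(xs)
    treated_change = avg(treated_after) - avg(treated_before)
    control_change = avg(control_after) - avg(control_before)
    return treated_change - control_change

# Hypothetical weekly spend per customer, before and after an offer:
lift = incremental_lift(
    treated_before=[50, 60, 55],   # offer recipients, pre-offer period
    treated_after=[70, 80, 75],    # offer recipients, post-offer period
    control_before=[52, 58, 54],   # holdout group, same periods
    control_after=[57, 63, 59],
)
print(lift)  # treated rose $20, control rose $5, so $15 is incremental
```

A real closed-loop system would do this per offer and per customer segment, feeding the results back into offer allocation, but the core comparison is the same.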


5. Boston Consulting Group has estimated that shifting even a portion of mass promotion spend into personalized offers can dramatically improve ROI. What does that tell us about how much promotional budget is currently being misallocated? 


BCG estimates enterprise retailers can generate over $100 million in topline impact from scaling personalized offer execution. That suggests that retailers’ current promotional spending is underperforming, delivering little incremental value for the budget. It tells us that when retailers offer undifferentiated incentives to customers with existing purchase intent, or offer deeper discounts than necessary to change behavior, they’re essentially paying for sales they already had. 


6. As shoppers’ expectations for immediate value increase, how are promotions emerging as a new competitive battleground for retailers? 


Customers now expect offers that reflect their actual shopping behavior; generic discounts feel irrelevant. In this way, promotions have become a de facto indicator of whether retailers truly understand their customers. Those who do can deliver timely, meaningful incentives, build stronger engagement and capture more share of wallet. Those who don't risk spending promotional dollars with little measurable return. In a marketplace with more choice than ever, relevance is a clear competitive advantage. 


7. When promotions are personalized at the individual level, how does that change the way shoppers engage with offers and deliver value at the right moment? 


Personalization ensures that customers receive offers that feel relevant and appropriate rather than random or excessive. When offers align with consumers’ actual preferences, purchase patterns and contextual cues, engagement naturally increases. Delivered through digital channels or at checkout in the moment of decision, personalized incentives create a higher-value experience that encourages repeat behavior and strengthens ongoing loyalty. They also drive results for retailers; Eagle Eye’s AI-powered Personalized Challenges, which creates personalized, incremental goals for each shopper based on their purchase history, has generated 7:1 ROI for high-profile retailers that have implemented the solution.

Cancer Awareness Month Can Highlight Real Community Support

marketing 19 Feb 2026

Each February, the country turns pink. Landmarks glow, national campaigns launch, and stories of survival and resilience fill television screens and social feeds. The scale of support is both inspiring and necessary, reminding millions that they are not alone in the fight against cancer. Yet beyond the national spotlight, something quieter is happening.
 
In hospital waiting rooms, volunteers sit beside patients before chemotherapy begins. In community centers, local nonprofits coordinate rides so no one misses treatment. In church basements and neighborhood gathering spaces, families come together for support groups because healing is not only physical, but emotional. In kitchens across America, neighbors prepare meals for someone too exhausted to cook.
 
These moments rarely make headlines, but they form the backbone of the fight.
 
Cancer is deeply personal. It touches families street by street and house by house, and the organizations responding most immediately are often local and deeply rooted in the communities they serve. They know the names behind the diagnoses. They understand the practical barriers patients face. And they continue showing up long after awareness campaigns fade from view.
 
Cancer Awareness Month offers a powerful national platform. The opportunity before us is to extend that platform to the people doing this work closest to home.
 
Imagine if the storytelling strength that powers major national campaigns also illuminated the hospital down the road, the screening event at the high school gym, or the local survivor who turned personal hardship into community action. When people see their own community reflected back to them, something shifts. Engagement becomes personal. Support becomes immediate. Action feels tangible.
 
Today, we have the technology to elevate local organizations with the same creative quality and reach once reserved for large national causes. Through modern media channels, community-based nonprofits can share their stories at scale, connecting households to resources and reminding viewers that help is not abstract. It is nearby.
 
National momentum and local action do not compete with one another. They reinforce each other. Broad awareness drives conversation, while local visibility drives participation. Together, they create a stronger and more responsive support system for patients and families.
 
Awareness is most powerful when it becomes tangible, when it connects a household to a place they recognize, a service they can access, or a story they understand. It is measured not only in dollars raised or campaigns launched, but in rides provided, meals delivered, appointments kept, and hands held during uncertain moments.
 
The fight against cancer lives in communities, carried forward by neighbors, volunteers, caregivers, and local leaders who work tirelessly, often without recognition. This Cancer Awareness Month, as we honor the national movement, let us also make space to elevate the people doing the work closest to home. Their impact is real, immediate, and deeply human. They deserve to be seen.

The Content Bottleneck Has Shifted from Production to Operations

marketing 17 Feb 2026

Q1: This is now the fourth year of Canto’s State of Digital Content report. What stood out to you most in this year’s findings compared to previous years?


Just how tangible the cost of fragmentation in brands’ content and creative operations has become. We’ve been seeing teams acknowledging the problem, saying “yeah, our digital assets are scattered, our workflows are messy.” But this year the data puts real business consequences behind that. The survey found 44% of folks reporting employee burnout tied directly to poor asset management. Wasted budget, duplicated work, missed revenue, and delayed launches are no longer hypothetical risks of fragmentation, they are measurable business outcomes.


The other thing that struck me is product content and information as a major theme. Previous reports focused heavily on creative assets, but this year we saw just how much brands are struggling to manage product information alongside their digital assets. 88% of teams can’t keep product content consistent across channels, and more than half are still managing product data completely separately from the assets used to actually market and sell those products. That disconnect is creating real friction, especially as e-commerce demands keep growing.

 


Q2: The report found that 82% of content teams saw volume increase in the past year, with three in four attributing at least some of that growth to AI. How are teams actually keeping up with that pace, and where are they falling short?


AI is driving the content volume up, but it’s also the thing helping teams manage the surge. 75% of content professionals told us AI has increased their output, and 30% said that increase was significant. At the same time, about half of teams are already using AI to accelerate creation, tagging, and organization. So the same technology pushing volume higher is also becoming the relief valve.


A critical part of the data, though, shows how teams are falling short on the operational side. The volume is growing but the underlying systems and workflows haven’t yet caught up. Only 43% of teams describe their digital content workflows as standardized and automated. The rest are still dealing with manual processes, inconsistencies, and fragmented tools that slow everything down. You can produce content faster with AI, but if your team can’t find the right asset, doesn’t know which version is current, or has to manually push updates across channels, you’re just creating more chaos. The content bottleneck has shifted from production to operations.


Q3: One of the more striking data points is that teams with full connectivity between their digital assets and product information are more than four times as likely to report significant ROI improvements. Why is that gap so wide?
 


That 4x multiplier surprised even us, but when you think about what connectivity actually enables, it makes sense. When your product information and digital assets live in the same environment, you eliminate an enormous amount of duplicated effort. Teams aren’t hunting across systems to match the right image with the right product description, nor manually updating the same information in five different places every time something changes.


The data backs this up across the board. Teams with fully connected systems told us that tasks like locating assets, maintaining brand consistency, and collaborating across teams were “extremely easy” at rates two to three times higher than everyone else. 67% of fully connected teams said locating assets was extremely easy, compared to 21% of those without full integration. That speed and confidence compound across every campaign, every channel update, every product launch. Just as importantly, it all shows up directly in revenue. 65% of teams that can make real-time content updates reported significant revenue increases, versus just 16% of those with slower timelines.


Q4: Only 35% of respondents said they feel very confident that employees are using the most current, approved version of brand assets. What’s driving that confidence gap, and what does it cost organizations?
 


That number reflects the reality of how most brands manage their content today. When assets are spread across cloud drives, local desktops, email threads, and multiple platforms, it becomes almost impossible to guarantee that everyone is working from the same source of truth. 62% of teams are using cloud file storage, 44% have content on local servers, and 41% still rely on individual hard drives. That’s a lot of places where an outdated logo or last quarter’s product spec can be sitting around, ready to get used by mistake.


The cost shows up in a few ways. 30% of respondents reported publishing off-brand or inconsistent content as a direct consequence of poor asset management. You also see it in the 30% who flagged legal or compliance risk. When you’re operating in regulated industries or across global markets, using the wrong version of an asset can have real legal and financial consequences.


Beyond those specific risks, there’s the broader drag on team productivity. People spend time second-guessing whether they have the right file, chasing down approvals, or recreating something that already exists somewhere in the organization.

 

Q5: The data shows that 51% of teams still rely on spreadsheets to manage product information, and 56% manage product content separately from digital assets. What needs to change operationally before those numbers start to shift?


I think a lot of brands have grown into this situation organically. Spreadsheets are familiar, they’re flexible, and when you only have a handful of products or channels, they work fine. But when you’re managing hundreds or thousands of SKUs across e-commerce platforms, retail partners, marketplaces, and your own website, spreadsheets stop scaling. You end up with version control nightmares, no clear ownership, and inconsistencies that erode customer trust.
 

The shift really starts with recognizing that product content and creative assets are two sides of the same coin. A product image, its description, its specifications, its pricing…those all need to move together when something changes. When 78% of teams are using two or more separate solutions just to manage product content, every update becomes a multi-system coordination exercise. Teams told us the improvements that would deliver the most benefit include making product data easier to access across teams, eliminating duplicate or outdated information, and managing product data alongside creative assets. Those are all fundamentally about bringing things together rather than continuing to manage them in silos.
 


Q6: AI adoption for content is nearly universal at 96%, but only 30% of teams describe their use of AI as widespread. What’s holding back deeper adoption?

The adoption curve is real, and I think it’s actually healthy that most teams are taking a measured approach. 47% are using AI in limited ways, and 16% are planning to adopt. That’s a lot of momentum. But moving from experimentation to deep integration requires trust, infrastructure, and governance, and those things take time.

On the trust side, the news is actually encouraging. 81% of content professionals expressed confidence in AI’s accuracy for tagging and organizing assets, and that confidence gets even stronger with hands-on experience. Among teams using AI most extensively, 77% reported high confidence. The hesitation seems to be more about the operational layer. When we asked about top worries over the next two years, integrating new technologies like AI tied for the number one concern alongside security and access control, both at 30%. Teams want to adopt AI more broadly, but they want to do it in a way that doesn’t compromise brand consistency or introduce new security risks. Brands seeing the biggest returns are embedding AI into a centralized, governed environment rather than layering it on top of fragmented systems.
 


Q7: Teams with advanced, standardized workflows were dramatically more likely to see significant ROI gains - 48% versus 0% among teams with ad hoc processes. What does workflow maturity actually look like in practice for content and creative teams?

 

That 48% to 0% gap is one of the most compelling findings in the report, and it really underscores that operational maturity isn’t merely a nice-to-have.


In practice, workflow maturity starts with processes for creating, reviewing, approving, and distributing content that are documented and consistent across teams rather than reinvented every time. On top of that, the repetitive work (like tagging, metadata generation, format conversions, routing assets for approval) is automated instead of eating up people’s time. Additionally, the tools are connected so that updates flow through the system rather than requiring someone to manually copy information from one platform to another.


The teams doing this well are also much more likely to have invested in AI, analytics, and template-driven approaches. When we looked at what high-ROI organizations are doing differently, they’re significantly more likely to be introducing automation to reduce manual work, expanding template and modular content approaches, and measuring content performance to refine their processes.

 

Q8: The report highlights security, AI integration, and brand consistency as the top concerns about managing content at scale over the next two years. How should content leaders be prioritizing those?

I’d say these concerns are actually more interconnected than they might appear at first. Security and access control, AI integration, and brand consistency all improve when you centralize how content is managed and governed. If your assets are scattered across disconnected systems with inconsistent permissions, you have a security problem, a brand consistency problem, and a much harder time rolling out AI in a controlled way.

The practical starting point is getting your foundation right. That means establishing a centralized, structured environment where assets are governed, versioned, and accessible to the right people with the right permissions. Once that’s in place, you can layer in AI capabilities, things like smart tagging, visual search, and content recommendations, with confidence that the AI is working within guardrails rather than amplifying existing chaos. And brand consistency becomes much more manageable when there’s one source of truth rather than dozens of repositories where outdated files can linger.

 

I think content leaders should also be paying attention to the cost dimension. Managing storage costs was a top concern for 26% of respondents, and that pressure is only going to grow as content volume keeps climbing. The teams managing costs most effectively can reduce duplication, improve reuse, and avoid recreating assets that already exist somewhere in the organization.

 

Q9: What’s one thing you’d want a marketing or content operations leader to take away from this year’s report and act on immediately?
 

Audit your fragmentation! Take an honest look at how many systems, folders, drives, and platforms your team is using to manage content and product information today. The data is really clear that fragmentation is the single biggest drag on performance. Teams using two or more systems to manage digital assets are significantly more likely to experience delays, missed revenue, burnout, and wasted budget compared to those working from a unified approach.


You don’t have to solve everything at once, but understanding the full scope of the problem is the first step. Once you can see where assets and product information are scattered, you can start making intentional decisions about what to centralize, what to connect, and where to apply AI and automation to eliminate the most painful bottlenecks.

First-Party Data Isn’t Enough Anymore

marketing 17 Feb 2026

By: Scott Kozub, VP, Product at Experian Marketing Services 


For years, first-party data has been positioned as the answer to nearly every challenge in digital advertising. Lose cookies? Build first-party relationships. Privacy gets more complicated? Lean into owned data. Measurement becomes murky? Go direct to the source.

That logic still holds, but only up to a point.


What many marketers are discovering in practice is that first-party data alone creates depth without scale. It offers rich insight into customers a brand already knows, but far less visibility into the audiences it still needs to reach. In a fragmented, privacy-conscious ecosystem, relying exclusively on first-party signals often results in limited reach, frequency challenges, and diminishing returns on prospecting.
 

The next phase of targeting will be defined by how well marketers combine first-party, third-party, contextual, and geographic signals to drive growth, improve efficiency, and strengthen customer relationships.


 Why first-party and third-party data are better together


The biggest challenge facing modern targeting is not the loss of identifiers. It is the growing fragmentation of signals across devices, channels, and environments. In that reality, identity does not disappear. It becomes more important as the connective layer that brings different data sources together for planning, activation, and measurement.
 

First-party data remains essential. It provides accuracy, consent, and a reliable foundation for personalization and measurement. But on its own, it reflects only a partial view of the market. Most first-party data sets skew toward existing customers, logged-in users, or known devices, leaving significant gaps in reach and understanding.


This is why third-party data is so valuable. Not as a standalone solution, but as a complementary layer that expands perspective beyond what first-party data can capture alone. Responsibly sourced third-party data adds demographic, behavioral, interest, and purchase context that helps marketers understand who they should be reaching next, especially in an environment shaped by privacy constraints and signal fragmentation.


First-party data on its own is limiting. Third-party data on its own is incomplete. The real power comes from connecting the two through identity, allowing marketers to plan, activate, and measure across fragmented environments with greater accuracy and confidence.
 

Contextual and geographic signals as privacy-safe extensions
 

Contextual and geographic targeting are not new tactics. They are proven approaches that have evolved alongside changes in technology, privacy expectations, and data availability.
 

Today, data-informed contextual targeting goes far beyond keywords or simple page adjacency. When contextual signals are combined with audience insights, they help marketers understand where high-indexing audiences naturally spend time, regardless of channel or environment. Certain content consistently attracts users with shared behaviors, demographics, or purchase intent. Identifying those patterns allows advertisers to reach relevant audiences in ways that are both effective and privacy-safe.


Geographic data functions in a similar way. People with similar lifestyles, needs, and behaviors often cluster in similar locations. When geographic signals are informed by behavioral and demographic data, rather than used as blunt radius targeting, they become a meaningful proxy for intent. This is especially important for categories like retail, CPG, and automotive, where location continues to influence decision-making.

These signals are not replacements for first- or third-party data. They are additional layers that strengthen a modern data strategy while supporting privacy-forward activation.


 

AI as decision intelligence in a fragmented ecosystem


Artificial intelligence plays an increasingly active role in making fragmented signals and multi-source data strategies manageable.

AI is not replacing targeting strategy. It is enabling it. By interpreting fragmented signals at scale, machine learning models help marketers connect identity, first-party data, third-party insights, contextual signals, and geographic information into actionable intelligence. Models trained on both structured and unstructured data can identify patterns across content, timing, device behavior, and location, then optimize delivery in real time.

This shift allows campaigns to move beyond static audience definitions and toward dynamic decisioning. As performance signals change, activation strategies can adapt accordingly, without relying on persistent identifiers or exposing sensitive personal data.

 

What this means for marketers in 2026
 

Marketers who want to create and activate campaigns more efficiently in 2026 will need integrated approaches that reflect how fragmented the ecosystem has become. Success will not come from betting on a single data type, but from building flexible systems that connect signals through identity and intelligence.

First-party data alone is no longer sufficient. Marketers who combine it with third-party, contextual, and geographic signals will be better positioned to plan, reach, and measure advertising in an environment defined by fragmentation, evolving privacy standards, and constant change.
Returns Shouldn’t Be Tolerated — They Should Be a Strategic Differentiator

marketing 13 Feb 2026

Your research shows returns are now a routine part of shopping, not a seasonal issue. What does the data reveal about how frequently consumers are returning items, and why should CX leaders care?


It’s true, what we uncovered with our survey is that returns are no longer a seasonal anomaly, but a meaningful brand interaction, a routine part of commerce, and a stepping stone to building lasting relationships. When our survey was conducted in early January, 55% of respondents had already made or planned to make a post-holiday return, and 21% of shoppers said they return an item as frequently as once a month. This means returns are a recurring touchpoint that happens across the customer lifecycle, not just in peak holiday periods. Given the volume of returns, even small inefficiencies become points of real friction, and that’s tied directly to loyalty and CSAT. CX leaders in retail and ecommerce should recognize returns as a high-value touchpoint and focus on making the process an opportunity for brand affinity and trust, not frustration.
 
More than half of shoppers say a bad returns experience could impact future purchases. Why do returns have such an outsized effect on loyalty compared to other post-purchase moments?

Returns matter because they’re consequential and emotional. While purchase experiences are driven by anticipation and reward, a return is triggered by disappointment. How a brand handles that disappointment fundamentally shapes trust. 57% of consumers say a bad return experience would influence whether they buy from that brand again, regardless of previous loyalty. It’s a high-stakes moment. If brands can’t resolve a problem quickly, transparently, and with a bit of empathy, they risk turning a one-time issue into long-term disengagement. 
 
More than 60% of consumers say they’d use an AI-powered agent to handle returns. What are shoppers actually hoping AI will fix at that moment?

Speed, clarity, and resolution are the top three things consumers expect from returns. While only a small percentage currently prefer chatbots (12%), 60% of respondents in our survey said they would use an AI-powered agent if it could instantly answer questions and process their return. This is customers signaling a desire for accurate, real-time assistance that gets the job done, with as little friction as possible. Only 36% of survey respondents say they are "very satisfied" with the returns process today, leaving significant room for improvement. AI, when done well, can eliminate many of the pain points consumers feel, including long wait times, confusing policies, and shipping hassles.

For retail leaders evaluating AI investments in 2026, why should returns be prioritized alongside acquisition and personalization efforts?

Trends in retail tech investment continue to focus on personalization and AI integrations to help the buyer build confidence. But what happens after the first purchase often determines whether the brand will get a second purchase, a third purchase, and so on. Returns are one of the few moments in the journey where customers are actively questioning their relationship with a brand, and that moment in time is where differentiation matters the most. AI investments in customer service are maturing quickly, proving that they can handle sensitive, complex situations with clarity and human-like empathy, all of which are critical to a successful returns process. But AI is not a “set it and forget it” proposition. CX leaders must invest in training and empowering their teams to ensure their AI can grow, learn, and evolve alongside the needs of their customers. If a brand provides a strong purchase experience, but then loses the customer during a frustrating return experience, all those early investments in acquisition are at risk. 
 
Trust remains a major concern with AI. According to your research, what conditions make consumers comfortable using AI for returns?

Earning consumer trust will be an ongoing challenge for brands as they continue to integrate AI into their practices. Our recent survey took a deeper look into why consumers lack trust in AI currently. It found that consumers worry AI will be less efficient than a human, will have difficulty understanding their issue, or will provide inaccurate information. All of these concerns can be addressed by ensuring that the AI agent is given accurate customer data and policy information from the brand, and is overseen by well-trained ACX managers and teams.
 
 At Ada, we know this can be done well because our customers are seeing significant results from their AI investments today. One of our customers, IPSY, operates one of the largest beauty subscription networks in the world, serving more than 20 million community members across its brands. At that scale, customer experience isn’t just about support. It’s about relationship management, where every improvement compounds.
 

In just four months, IPSY’s GenAI agent, Glam Bot, which is built and managed through Ada’s ACX Platform, unlocked:


→ a 41% lift in CSAT,

→ a 943% ROI on their generative AI investment, and

→ a 64% increase in autonomous resolution.

Glam Bot remains one of the largest AI deployments inside the company to date.
 

The key to ensuring consumers are comfortable with AI isn’t removing humans, but creating a seamless integration with humans, including transparent escalation paths. 
 

Returns should no longer be an interaction that consumers tolerate, but a strategic differentiator for brands using AI to turn problems into opportunities.

Looking ahead, how do you expect AI to reshape post-purchase CX over the next 12–24 months, particularly around returns?

In the next 12-24 months, AI will become increasingly agentic. This means it will do more than answer simple queries – it will automate increasingly complex tasks end-to-end with context, accuracy, and even empathy. This would include checking inventory at nearby stores for pickup, processing payments, and making repurchases of the same products easy. We will see AI become more deeply capable in policy, status updates, logic, and personal preferences, which can make returns virtually frictionless by default. Brands will also increasingly measure the success of their ACX investments not simply in resolution rates, but in revenue generation, both from cross-sell/upsell opportunities and in reduced customer churn. But this requires a thoughtful approach to AI management and adoption, as well as a team that’s empowered to grow and evolve their own agents. Brands that win will understand AI success isn’t just a technology deployment, it’s a management discipline. You cannot delegate your transformation to a vendor. 
 
Navigating the New Era of Mobile User Acquisition: Inside Zoomd’s 20-Year Journey

Navigating the New Era of Mobile User Acquisition: Inside Zoomd’s 20-Year Journey

marketing 3 Feb 2026

Tell me about Zoomd’s business.


Zoomd is a mobile-first marketing solutions company that enables global advertisers to acquire new mobile app users cost-effectively while growing their businesses profitably. By combining our proprietary UA platform, a mobile DSP, content creators, and Albert.ai technologies, Zoomd delivers a full-funnel, holistic solution, running advertising campaigns that help our clients acquire new users across the gaming, entertainment, commerce, and fintech verticals.

Zoomd has been working in User Acquisition for a while.


Zoomd (originally founded in 2007 as a search startup and later merged with Moblin in 2012), has nearly two decades of mobile user acquisition experience. We’ve managed user acquisition campaigns through all of the changes in the industry, from the pivot to social media and then video, to the implementation of privacy legislation and operating system changes, which restructured user acquisition best practices.


Through all of the industry changes, we’ve focused on uncovering opportunities for our clients and partners to cost-effectively manage their user acquisition campaigns across channels and regions. This experience has made Zoomd proactive and more nimble, enabling us to anticipate the changes being made by big tech, marketing, and end-users, resulting in profitable user acquisition campaigns for our clients.


Today, as a publicly traded company, Zoomd Technologies Ltd. (TSXV: ZOMD) (OTC: ZMDTF), offers comprehensive and privacy-friendly user acquisition across the leading platforms as well as programmatic and direct channels through the open mobile web.

How is User Acquisition different today?


User acquisition on mobile devices has changed a lot since we started running user acquisition campaigns in 2007. The initial campaigns were across a non-programmatic open mobile web or in-app ads in other apps. Back then, Facebook was a desktop website.

Apple’s App Store launched in July 2008 with 500 apps. Google Play, which unified the Android Market, Google Music, Google Movies, and Google Books into one app store, only launched in March 2012.

Today, user acquisition is more competitive and complicated, with campaigns running across social media networks, in-app, and programmatically over the open web via Demand and Supply-Side Platforms and exchanges.

Privacy, which wasn’t an issue in 2007, became important in 2018 when the General Data Protection Regulation (GDPR) took effect in Europe, followed in 2020 by the California Consumer Privacy Act (CCPA), the first of many US state-based privacy laws. In 2021, Apple limited the Identifier for Advertisers (IDFA) when it rolled out the App Tracking Transparency (ATT) framework with the release of iOS 14.5. This required users to explicitly opt in before they could be tracked for advertising purposes, significantly limiting the targeting data available to advertisers. Google also rolled out enhanced privacy protocols, though they were less aggressive than Apple’s.


Having worked in user acquisition since there were just a few hundred apps, managing campaigns through all of the aforementioned changes in the industry has made Zoomd a stronger and more effective mobile marketing partner. We’ve literally seen and done it all, and are therefore ready to help companies manage the new changes and challenges that will happen in 2026 and beyond. For example, the EU’s Digital Markets Act (DMA) requires gatekeeper platforms like Apple’s App Store to allow European users to download apps from alternative app marketplaces and to use alternative payment systems outside of the app.

Today… is it all Google and Meta?


Though Google and Meta are important, there are many other channels that power successful and profitable user acquisition campaigns. Depending on the target audience and geographies, Zoomd runs campaigns across many social platforms, including Snapchat, Reddit, Pinterest, and Twitter, as well as on regional platforms like Kwai and Bigo Live. For example, in one European campaign, Snapchat was the platform that delivered the most revenue-generating users, which we had anticipated based on our team’s experience.

Beyond the platforms, we’re big believers in the open mobile web. Through programmatic channels, we’re able to convert large numbers of users cost-effectively across app categories and geographies using display and video ads, including user-generated content videos.

What Key Performance Indicators (KPIs) are important for User Acquisition?


The KPIs that advertisers monitor can vary widely based on the app’s maturity and function, but ultimately, they all share the same objective: driving growth and profitability. That’s why so many of our campaigns prioritize revenue-focused KPIs, such as first deposit amount or average order value. By centering user acquisition strategies around these profit-driven metrics, marketers can accurately measure campaign success and ensure they are acquiring valuable, high-quality users who contribute to the bottom line.


Vanity metrics, such as clicks and engagement rates, are also important for crunching data and gaining a better understanding of creative effectiveness, segmentation, etc. User acquisition creative must do more than generate clicks – it must drive prospective users into the funnel to download and install the app and take an action, like depositing money or placing an order.

What should a marketer new to User Acquisition understand before launching his or her first campaign?


A new marketer should first understand his or her company’s business model and the actions that deliver profitable users. That’s the marketing funnel and the north star that will lead marketing and user acquisition activity. Once the marketer understands the business, we’ll work together to set up the campaign based on the company’s business and our experience across similar geographies, target audiences, and product categories to ensure that the user acquisition campaign is on target and within the budget.


Closing thoughts on the future of User Acquisition?

As I said, there are changes happening in the app stores, like the opening up of alternative app stores and payment systems in Europe. App marketing is a dynamic market, which is why it’s important to work with an agile user acquisition partner with extensive experience across verticals, geographies, and platforms. Artificial Intelligence (AI) is now in every aspect of our work and lives, and it’s also necessary for effective user acquisition. Advertisers need a partner that moves fast, with the trends, to ensure that their brand doesn’t fall behind. After nearly 20 years of actively working in user acquisition, Zoomd is a trusted partner that understands the market and can profitably manage user acquisition campaigns.
Beyond the Click: Why Transforming B2B Attribution Starts with AI

Beyond the Click: Why Transforming B2B Attribution Starts with AI

marketing 3 Feb 2026

Over the past three decades, technology has transformed attribution from a rough art into an exact science, allowing businesses to peer through the darkness and attribute actual spend to campaigns. However, as B2B sales cycles continue to lengthen, are our current attribution strategies keeping up?

To find out, I recently caught up with Chris Golec, a martech industry veteran who pioneered the ABM category and founded Demandbase. Chris is now the Founder and CEO of Channel99, an online platform that enhances attribution and marketing decision-making using AI. 


To begin, why do you feel the current attribution standard (Click-Through Attribution, or CTA) is failing B2B marketers?


It all comes down to the increasing complexity of the B2B purchase process. Historically, to ensure proper attribution, we've expected a prospect to click an ad before purchasing. This "click-through" method is tidy and easy to track via UTM parameters.
 
The problem with this approach is that it captures only a tiny slice of the B2B sales funnel. Unlike selling a pair of sneakers to a consumer, the B2B process is long and involves multiple decision-makers. With an average of 266 touchpoints to close a B2B deal, relying solely on CTA means you are missing an entire forest of latent interest while chasing a few one-off clicks.


If CTA is only a "tiny slice," what does the rest of the B2B sales funnel look like? What is the alternative?


The alternative is View-Through Attribution (VTA for short). This approach assigns credit when a buyer sees an ad, video, or piece of content and later converts, even if they never clicked the ad directly.

The results speak for themselves. Marketers who screen for VTA see a nearly 79% jump in conversion compared to CTA alone. It uncovers a wealth of insights, often revealing a hidden "first touch"—like organic social or industry review sites—that CTA completely misses. In today’s world, data-backed results are a matter of survival for marketers. VTA allows us to identify these audiences early and often.
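To make the idea concrete, here is a toy sketch of view-through matching: credit the earliest ad exposure seen by the converting account inside a lookback window, even if it was never clicked. The field names, the 30-day window, and the first-touch rule are illustrative assumptions, not Channel99’s actual methodology.

```python
from datetime import datetime, timedelta

# Illustrative lookback window; real systems tune this per business.
LOOKBACK = timedelta(days=30)

def first_view_through_touch(exposures, conversion):
    """Return the earliest ad exposure for the converting account that falls
    inside the lookback window before the conversion, or None if no match."""
    candidates = [
        e for e in exposures
        if e["account"] == conversion["account"]
        and conversion["time"] - LOOKBACK <= e["time"] <= conversion["time"]
    ]
    return min(candidates, key=lambda e: e["time"]) if candidates else None

exposures = [
    {"account": "acme", "channel": "organic_social", "time": datetime(2026, 1, 3)},
    {"account": "acme", "channel": "display",        "time": datetime(2026, 1, 10)},
]
conversion = {"account": "acme", "time": datetime(2026, 1, 20)}

touch = first_view_through_touch(exposures, conversion)
print(touch["channel"])  # organic_social — a hidden first touch CTA would miss
```

Because matching happens at the account level rather than via a clicked UTM link, the unclicked organic-social exposure gets the first-touch credit that click-through attribution would never see.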


If the data is so much stronger, why are nearly 25% of marketers still relying solely on click-through?

Ease is often the greatest threat to progress. CTA is simple to measure and works with standard analytics dashboards. More than any other factor, these two traits have made it the industry standard.

But this convenience comes at a cost. While CTA measures click-based campaigns well, it falls woefully short when asked to evaluate any other types of campaigns. Impressions, brand exposure, and social interactions often have just as much influence on deal closing as a digital ad does, yet CTA leaves these data points untouched.

That’s not even the worst part, though. Without view-through data, marketers naturally craft campaigns to attract high click volumes, but those campaigns often don’t speak to the intricacies of B2B sales or connect with users in a position to buy. That’s why my #1 piece of advice to marketers is this: craft your campaigns and measure success with sales at the center, nothing else. When it comes to attribution, that means leveraging both VTA and CTA.


Let’s talk about implementation. For companies ready to modernize, what are the "traps" they need to avoid?


The biggest trap right now is privacy. Regulations like GDPR and CCPA have fundamentally reshaped what marketers can track. The third-party cookie is under threat, and consent requirements have already reduced attribution data volume by 30-40%.
To avoid a measurement crisis, businesses must invest in cookie-free view-through technology. These systems capture engagement signals at an account or company level without violating privacy standards.


In addition to privacy, are there any other issues that modernizing companies should look out for?

Absolutely. Once you solve the privacy piece, you still have to contend with company culture. The insights that VTA can provide are so substantial compared to CTA that a lot of old assumptions get called into question as soon as the new data comes in. Having a company-wide growth mindset and being willing to abandon time-tested strategies for a new, data-backed approach isn’t always easy, but it’s essential for success. 

For our readers who are ready to make the switch, how do they begin? Is there a roadmap?


Absolutely. For any organization that’s ready to go all-in on VTA, I would recommend these five steps:

  1. Build a new foundation: Adopt privacy-forward account identification. Leave third-party cookies behind and focus on identifying accounts, not just anonymous users.
  2. Unify your datasets: B2B buyers don't live in a single channel. You need to integrate tracking across every touchpoint—display, ABM, organic social, email, etc.—to connect exposures to revenue.
  3. Screen for data validity: New data streams are useless if they are full of noise. Incorporate rigorous ad verification to ensure you are measuring actual target audience impressions, not bot traffic.
  4. Recalibrate performance benchmarks: Be prepared to throw out the old click-based playbook. Reallocate budget toward tactics that are truly driving influence from your target accounts and addressable market; the rest is noise.
  5. Loop in the sales team: Don't keep these insights in the marketing silo. Share them promptly so both teams can understand which accounts are generating deals.

 Any final thoughts?

For the past two decades, our most widely used attribution metric started and ended with the cursor. But with the rise of AI, that reliance will be tested like never before.

By shifting to AI-powered, view-through attribution, a far more robust and complete strategy picture comes into focus for marketers. It’s past time to move beyond the click.
Beyond Static Floors: What Publishers Need to Know About Dynamic Flooring in Programmatic Advertising

Beyond Static Floors: What Publishers Need to Know About Dynamic Flooring in Programmatic Advertising

marketing 23 Jan 2026

1) When publishers talk about their “programmatic floor strategy,” what does that typically look like in practice today? How are most teams approaching this?

In practice, most publishers’ “floor strategy” is still largely manual and backward-looking.

Typically, teams set static CPM floors in GAM (via Unified Pricing Rules, or UPRs) or in their wrapper/Prebid configuration, segmented by broad buckets like geo, device, or placement. These floors are usually based on historical averages, past performance, or rough heuristics rather than real-time demand signals.

They revisit and tweak them periodically — weekly, monthly, or around seasonal moments — but the core approach rarely changes: fixed prices applied across highly variable auctions.

So most “floor strategies” today are really just static rule sets designed for a slower market. They are not built for the speed, variability, or complexity of modern programmatic auctions.


2) Modern programmatic auctions happen in milliseconds with DSPs constantly repricing based on real-time signals. How should that reality influence the way publishers think about setting price floors?

It should completely reshape how publishers think about pricing.

If buyers are making decisions in real time, then floors cannot be static. They need to reflect what is actually happening in that specific auction — not what happened last week or last month.

Instead of asking, “What should my average floor be?” publishers should be asking, “What is this impression worth right now, given who is in the auction and how they are behaving?”

That means treating floors as dynamic, responsive signals — not fixed thresholds — that react to live demand, bidder behavior, and competition in real time.
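One minimal way to sketch a "responsive" floor is to nudge it toward a low percentile of recent clearing prices for that ad unit, so it tracks live demand rather than last month’s average. The 25th-percentile target and the damping factor below are illustrative choices, not any vendor’s actual algorithm.

```python
from statistics import quantiles

def dynamic_floor(recent_clearing_cpms, current_floor, damping=0.5):
    """Move the floor part-way toward the lower quartile of recently observed
    clearing CPMs for this ad unit. Damping avoids over-reacting to noise."""
    if len(recent_clearing_cpms) < 4:
        return current_floor  # not enough signal; fall back to the static value
    target = quantiles(recent_clearing_cpms, n=4)[0]  # 25th percentile
    return current_floor + damping * (target - current_floor)

# A static $0.50 floor drifts upward once recent auctions clear well above it.
floor = dynamic_floor([1.20, 1.40, 0.90, 2.10, 1.60], current_floor=0.50)
```

In practice the window of clearing prices would be segmented by the same dimensions buyers see (geo, device, placement) and refreshed continuously, which is what turns the floor from a fixed threshold into a live signal.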


3) Many publishers measure price floor success by looking at CPM changes, often comparing this week to last week. Is this the most effective way to measure this?

No — it is one of the weakest ways to evaluate floors.

CPM alone is an incomplete metric. It only tells you the price of impressions that are sold. It says nothing about suppressed demand, lost bidders, or auctions that never cleared.

On top of that, before-and-after comparisons (this week vs last week) are fundamentally flawed because the market is constantly changing. You end up measuring market volatility more than pricing impact.

This is why floor changes often look “good” or “bad” depending on timing — not because of the floor itself, but because of shifting demand conditions.


4) What other metrics should publishers actually be tracking to understand whether their floor strategy is working?

The single most important metric is holistic RPM — revenue per thousand ad opportunities (requests) across all programmatic channels (Prebid, Amazon, AdX, Open Bidding).

This metric captures:

●      Price impact (CPM)

●      Volume impact (fill)

●      Buyer participation and routing effects

Crucially, this must be measured per ad unit first, then aggregated to site level. Site-wide averages hide too much.

Beyond holistic RPM, publishers should also track:

●      Bid density (bids per auction) — a proxy for competition

●      Win rates by bidder — to see which partners are reacting to floors

●      Timeouts / drop-offs — signs of demand suppression

●      Clearing price distributions — where auctions are actually settling

Together, these give a much clearer picture of whether floors are helping or hurting.
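The holistic RPM calculation itself is simple; what matters is computing it per ad unit before aggregating. The sample figures below are illustrative, but they show how a blended site-wide number can hide a large gap between placements.

```python
def holistic_rpm(revenue, ad_requests):
    """Revenue per thousand ad opportunities (requests), across all channels."""
    return (revenue / ad_requests) * 1000 if ad_requests else 0.0

# Hypothetical per-ad-unit totals across Prebid, Amazon, AdX, Open Bidding.
ad_units = {
    "leaderboard": {"revenue": 84.0, "requests": 120_000},
    "mpu":         {"revenue": 45.0, "requests": 50_000},
}

per_unit = {u: holistic_rpm(d["revenue"], d["requests"]) for u, d in ad_units.items()}
site_rpm = holistic_rpm(
    sum(d["revenue"] for d in ad_units.values()),
    sum(d["requests"] for d in ad_units.values()),
)
# leaderboard ≈ 0.70, mpu ≈ 0.90, while the site-wide figure (≈ 0.76)
# averages away the difference between the two placements.
```

Tracked this way, a floor change that lifts one unit while suppressing another shows up immediately, instead of washing out in the site-wide average.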


5) What happens at the auction level when a publisher sets a floor that doesn’t align with what DSPs are willing to bid in that moment?

Two things typically happen, depending on the direction of misalignment:

If the floor is too high, DSPs don’t negotiate — they exit. Bidders reduce participation, route budgets elsewhere, or stop bidding altogether. Fill drops, and auctions become less competitive.

If the floor is too low, auctions clear too easily. Bidders don’t need to bid aggressively, competition thins out, and you end up leaving meaningful value on the table — selling inventory below what buyers were actually willing to pay and losing yield in the process.

In both cases, revenue is lost — it just appears differently: either as lower fill or lower effective prices.


6) When publishers analyze floors using site-wide or monthly aggregated data, what critical dynamics are hidden from view?

A lot.

Aggregated data hides:

●      How different ad units respond to floors

●      How specific bidders behave in specific geos or devices

●      Time-of-day demand patterns

●      Differences between mobile vs desktop, app vs web, or browser types

You might see “healthy” site-wide CPMs, but underneath that some placements could be massively underpriced while others are choking demand.

Most auction-level behaviors — like bid density shifts or bidder pullback — get completely washed out in monthly rollups.


7) How do price floors influence which demand partners and campaigns even enter an auction? Is it just about setting a minimum price?

It’s more than just having a minimum price.

In modern auctions, price is a signal — but a static price is a weak one. Without a reliable, real-time signal, DSPs have to guess the likely clearing price, which makes pacing and confident bidding harder, even when the audience match is strong.


Dynamic floors strengthen that signal. By adjusting in real time, they give buyers a clearer view of where the market is clearing right now, which makes them more willing to participate and bid at the true value of the impression.

Dynamic floors can also help unlock demand. When a higher, real-time clearing price is visible, it can influence which campaigns DSPs prioritize for that impression.

Finally, dynamic floors improve how inventory moves through SSP and DSP throttling systems. Requests that are priced in line with real demand are more likely to clear — and therefore more likely to pass throttling and access available budgets.

In short, dynamic floors act as a real-time market signal that shapes participation, routing, and budget access — not just a guardrail on price.


8) If a Head of Programmatic wanted to properly test whether a floor change improved performance, what would that test need to look like?

They would need a true real-time A/B test, not a before-and-after comparison.

That means:

●      Splitting traffic into a floored cohort and a control cohort running simultaneously

●      Measuring holistic RPM per ad unit across both groups

●      Ensuring both cohorts see the same demand conditions

●      Tracking bidder behavior (bid density, win rates, drop-offs) alongside revenue

Only with this setup can you isolate the true impact of pricing from normal market volatility.
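The cohort split and the comparison can be sketched in a few lines. The seeded random split, cohort names, and sample numbers are illustrative assumptions; a production test would also stratify by ad unit, geo, and device.

```python
import random

def assign_cohort(request_id, treatment_share=0.5):
    """Deterministically split traffic at request time: the same request id
    always lands in the same cohort, and both cohorts run simultaneously."""
    random.seed(request_id)
    return "floored" if random.random() < treatment_share else "control"

def rpm(revenue, requests):
    return (revenue / requests) * 1000 if requests else 0.0

def rpm_lift(test_stats, control_stats):
    """Relative holistic-RPM lift of the floored cohort over control."""
    t = rpm(test_stats["revenue"], test_stats["requests"])
    c = rpm(control_stats["revenue"], control_stats["requests"])
    return (t - c) / c

# Hypothetical per-ad-unit totals gathered over the same time window.
lift = rpm_lift({"revenue": 58.0, "requests": 50_000},
                {"revenue": 50.0, "requests": 50_000})  # ≈ 0.16, a 16% lift
```

Because both cohorts face identical demand conditions at the same moment, any lift in holistic RPM can be attributed to the floor change rather than to market volatility.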


9) What patterns have you seen when publishers shift from static to more dynamic floor strategies? What typically changes in their auction outcomes?

The most consistent pattern is that auctions become healthier.

Typically we see:

●      Higher holistic RPM (often 10–16%)

●      Slightly higher CPMs (around 7–11%)

●      Stable or improved fill

In other words, pricing aligns better with real demand. Publishers capture more value without suppressing liquidity.

Perhaps most importantly, they stop seeing the wild performance swings that plague static floor setups.


10) Where do you see price floor strategies evolving as the programmatic ecosystem becomes increasingly algorithmic on both buy and sell sides?

On the web, pricing is becoming more holistic, unified, and visible inside the auction.

Two recent shifts are accelerating this:

●      Amazon now participates in Prebid as a bidder, meaning Prebid floor decisions can directly influence Amazon demand instead of pricing it in a separate silo.

●      Google now allows separate pricing for AdX demand in GAM, giving publishers far more control over how AdX is treated relative to other demand.

Prebid floors themselves aren’t new — they’ve existed since early header bidding. What’s changing is that more major demand sources now sit inside a common pricing perimeter.

As a result, I expect floors to become more holistic and consistent across channels, delivering clearer RPM impact and giving publishers greater control over how their inventory is positioned and monetized.

   
