Interviews | Marketing Technologies | Marketing Technology Insights

Interview


OOH as a Hedge Against 'Digital Fatigue' ROI

advertising 14 May 2026

With 2026 marking the definitive "death of the cookie" and rising digital ad blindness, performance marketers are seeing diminishing returns on traditional MarTech stacks. 


1. The 2026 Reality Check
 

We’ve officially moved past the 'death of the cookie.' Now that the dust has settled, how has this shift fundamentally changed the way performance marketers view the traditional MarTech stack versus real-world triggers like OOH?

“For years, the industry sold performance marketers a promise that if you just gathered enough granular data and tweaked enough levers, you’d build a well-oiled pipeline machine. Marketers went all-in on that promise, but the machine still breaks down. And it has led to a dependency on hyper-targeting that has now hit a wall of diminishing returns.

The shift isn't away from digital; it’s toward balance. Marketers are looking at OOH not as a ‘top-of-funnel luxury,’ but as a high-fidelity data trigger. They’re realizing that real-world placement is the only unblockable, unskippable signal left. We’ve moved from a tracking-first mindset to a planning-first mindset. If you know exactly where your ICP lives and works, you don't need a cookie to find them; you just need to be there.”

2. The 'Anchor of Trust' Concept
 

OOH can be described as an 'anchor of trust' in an era of digital fatigue. How does seeing a brand in the physical world change a consumer's psychological response when they later encounter that same brand in a social or AI-driven feed?

“There is a fundamental psychological difference between a pixel and a pillar. We call this the Legitimacy Signal. Anyone can buy a LinkedIn ad or spin up an AI-generated landing page for $50. But when you see a brand on a massive wallscape in SoHo or a digital spectacular in Austin, your brain registers permanence.

When that same consumer later sees your ad in a social feed, the friction of "Is this company legit?" is already gone. OOH provides the physical proof of existence that makes every subsequent digital touchpoint convert at a higher rate. It’s the difference between a stranger knocking on your door and a neighbor waving from across the street.”

3. The ROI Hedge
 

With digital CPMs rising and conversion rates hitting a ceiling for many, how exactly does OOH act as a 'hedge' against diminishing returns in a digital-only budget?

“Every digital-only budget eventually hits a point of diminishing returns where the next dollar spent actually increases your average CAC. OOH acts as a hedge because it lowers the floor for your other channels.

By priming the market with OOH, you increase the efficiency of your paid social and search spend. You aren't just buying impressions; you’re buying market familiarity. We see it constantly: when a brand enters a physical market, its branded search volume goes up, and its digital CTRs improve. You’re hedging against rising CPMs by making your existing digital ads work 20% harder.”

4. Quantifying the 'Halo Effect'
 

The data suggests that OOH can drive a 50%+ increase in monthly revenue when used to 'prime' a market. Can you walk us through the mechanics of that priming? What is happening to the digital CAC (Customer Acquisition Cost) during those campaigns?

“The mechanics are simple: Trust scales faster than targeting. When we talk about priming, we’re talking about building a baseline of awareness so that your digital ads don't have to do the heavy lifting of introduction.

During these campaigns, we typically see digital CAC drop significantly. Why? Because the consideration phase has been compressed in the real world. At OneScreen, we focus on Front-End Intelligence. We map your ICP to specific real-world environments before you spend a dollar. When you establish physical presence around your target accounts or key offices, your sales team hears, ‘I see you guys everywhere.’ That’s the sound of a falling CAC.”

5. Omnichannel Orchestration
 

As a CRO, you see the full funnel. How should brands be orchestrating the hand-off between a physical OOH placement and a digital activation? Is there a specific 'window of influence' that marketers should be aiming for?

“The ‘hand-off’ shouldn't be a relay race; it should be a symphony. The biggest mistake is treating OOH as a silo. You should be orchestrating your digital spend to retarget the geographies where your OOH is live.

We look for a ‘Window of Influence,’ usually a 4-to-6 week immersion period. If you’re a B2B company at a major conference, your OOH should be live a week before and two weeks after. The goal is to create a surround-sound effect where the physical placement provides legitimacy, and the digital activation provides the ‘click here’ convenience.”

6. Solving for 'Digital Blindness'
 

We talk a lot about 'ad blindness' on mobile and desktop. Why is it that OOH seems immune to this fatigue, and how is OneScreen helping brands bridge that gap between 'real-world awareness' and 'digital conversion'?

“You can’t ‘AdBlock’ a billboard. You can't scroll past a wrapped train when you’re standing on the platform. OOH is part of the environment, not an interruption of it.

The problem has always been the Execution Gap. Marketers wanted to do OOH, but it was too hard: too many vendors, too much friction. OneScreen bridges that gap by being a tech-native partner. We handle the 17 different vendors and the fragmented formats so the marketer can focus on the strategy. We move at the speed of a software company, not a legacy billboard vendor.”

7. Advice for the Performance Marketer
 

For the performance-obsessed marketer who is used to tracking every single click, what is the biggest mindset shift required to successfully integrate OOH into an omnichannel strategy?

“I would spend more time measuring the metrics that brand campaigns like OOH move, and look for business results where the brand is strong. I wouldn’t try to shoehorn that data into fine-grain attribution, since the obsession with last-click attribution is what led to the digital fatigue we’re in now.

The shift is moving from measurement to confidence. If you use data-driven planning to put your brand in front of your ICP every day for a month, you don’t need a spreadsheet to prove it had an impact; your pipeline will tell you. Trust the signal. Real-world presence compounds in ways a digital ad can’t match. Don't just buy inventory; buy a defensible strategy.”


The AI-Powered Shift From Media Buying to Media Performance

artificial intelligence 13 May 2026

Q1: VideoAmp describes itself as a media performance platform, not just another point solution. What does that architectural decision actually mean — and why does it change what's possible for your customers?

It starts with the data. Every time content is streamed, it leaves a footprint on what was watched, when, and for how long. Second-by-second, device-level, census-scale data. That's the foundation. When you combine that with large-scale identity data, you get accurate reach and outcome measurement across publishers. Add co-viewing and out-of-home, and you have a unified picture across linear and digital. That's the infrastructure layer, and until now it hadn’t been available to advertisers and publishers.

What we did differently is build the full stack on top of that foundation. Publishers use VideoAmp to organize inventory, forecast revenue, measure campaigns, and serve ads. Agencies use us to plan, buy, optimize, and measure across publishers. All of it running against the same dataset and identity graph.

That architectural decision is what makes AI actually useful here. When everything shares the same foundation, AI can turn what used to be a fragmented, multi-step process into one continuous workflow — optimizing decisions in real time, learning from every campaign, and eventually underwriting outcome guarantees. That's not possible when you're stitching point solutions together because the data layers don't match, the identity doesn't reconcile, and the AI has nothing reliable to learn from.

For VideoAmp, the hard part is done. A lot of companies are still trying to figure out all of these layers; we’ve already put in that work over the years. Now we get to focus on what you can actually do with it.

Q2: What's fundamentally changed in the last few years that makes a unified, AI-driven workflow actually achievable now?

Two things have changed, and they work together.

The first is data. As mentioned, we now have second-by-second, device-level, census data at the scale needed to power every stage of the workflow: planning, buying, optimization, and measurement, all running against the same underlying asset. That's genuinely new, and it's what makes closing the loop between spend and outcome operationally possible rather than theoretical.

The second change is what you can do with that data now that LLMs exist. This is where things get interesting.


The media planning and buying process is complex. Most people operating in it, even experienced buyers, don't always know exactly what to ask for or how to ask for it. They know they want better performance, but the path from that goal to the right set of decisions across publishers, audiences, formats, and timing is not obvious. Historically, that complexity lived in the heads of experts, or got lost in the gap between systems.

LLMs trained on the context of this data make possible something that should feel like magic: a system that understands what you're trying to accomplish even when you can't fully articulate it, and guides you through the process of getting there. Not a dashboard or a reporting tool, but an experience that reasons with you, surfacing what matters, so you can make better decisions at every step.

That's what we're building– an AI-powered platform that understands your campaign goals better than any single tool has before, and it should do better than you'd do on your own– not by replacing your judgment, but by elevating it.  LLMs let put an intelligent, agentic experience on top of the planning, optimization and measurement solutions we’ve already built. Turning infrastructure into something anyone can use, and that gets smarter with every campaign it touches.

The combination of census-scale data and AI that actually understands what to do with it is what makes this moment different from every previous one.

Q3: Historically, buyers and sellers have operated with different data and different incentives. How does shared infrastructure change that dynamic and what does real alignment look like in practice?

The misalignment was never really about incentives. Everyone wants campaigns that perform. The problem was that buyers and sellers were literally working from different data. A seller reported on impressions served. A buyer measured what converted. Those two numbers came from different systems, different methodologies, different moments in time.

Shared infrastructure changes that at the root by making it so there's only one version of the data to begin with.

That's what makes our position unusual. Publishers use VideoAmp to organize their inventory, forecast revenue, measure campaigns, and serve ads. Agencies use VideoAmp to plan, buy, optimize, and measure across those same publishers. Both sides are operating on the same platform, against the same data, built on the same identity graph. The shared truth isn’t negotiated, it’s structural. It's not that we've convinced buyers and sellers to look at the same number. It's that there is only one number.

And that foundation is what makes everything else possible. When a buyer and seller are planning and measuring against the same census-scale, second-by-second data, there's no ambiguity. Guarantees become operationally possible because the line from "here's what we projected" to "here's what we delivered" is unbroken. Outcome-based deals become credible because both parties can see exactly what happened and why.

Real alignment in practice looks like this: spend that a CFO can defend, inventory that a publisher can price with conviction, and a campaign that both sides agree performed, or didn't, because they're both looking at the same truth.

Q4: You've spoken about a future where guaranteed outcomes become the standard, not a premium add-on. What needs to happen for the industry to get there?

The industry has been talking about outcome-based guarantees for years. The reason it hasn't happened at scale isn't lack of interest; it's that the technology to actually underwrite a guarantee didn't exist.

Three things have to be true to make a guarantee possible. You need census-scale data. You need that data to be the same data driving every stage of the workflow, from planning through measurement. And you need a system intelligent enough to optimize in real time, learning from what's happening, and correcting course before a campaign goes off track.

That third piece is what LLMs make possible in a way nothing before them did. A large language model trained on the full context of this data, including the history of campaigns, the performance patterns across publishers, audiences, formats, etc., doesn't just report on what happened. It understands why it happened. It can optimize decisions in real time, learn from every campaign it touches, and get progressively better at predicting and delivering outcomes. It self-corrects. That's what makes it possible to underwrite a guarantee rather than just promise one.

But here's what I think people underestimate about where this is going. The path to guaranteed outcomes isn't just a technology argument, it's a trust argument. And trust gets built through experience. You try a system like this. It improves your return on ad spend. You try it again. It improves again. Over time, the performance compounds and so does the trust. That's how any powerful technology gets adopted at scale.

What makes our platform different is that the foundation for all of this is already built. The data, the identity graph, the full-stack workflow for both buyers and sellers. The LLMs have something real to learn from and act on. So when the system optimizes a campaign, it's working from the deepest, most comprehensive view of video advertising that exists. That's what makes the guarantee credible. 

Q5: Streaming has introduced what amounts to unlimited ad supply compared to traditional TV. How does the industry avoid a race to the bottom on pricing and where do outcomes fit into that equation?

The supply problem in streaming is real. When inventory expands without a consistent link to performance, pricing pressure is inevitable and self-reinforcing. Supply that can't prove its value drags CPMs down across the board, including content that genuinely deserves a premium. The market has no reliable mechanism to separate inventory that drives results from inventory that simply adds volume.

A true outcome accounts for everything that went into it: the quality of the content, the relevance of the creative, the precision of the placement, the price paid for the impression. All of those variables collapse into a single signal: did this investment deliver a result or didn't it?

That's a fundamentally different basis for pricing. CPMs are a proxy, measuring the opportunity to be seen. Outcomes measure whether something actually happened. When you transact against outcomes, you're no longer debating whether premium content deserves a premium price because the performance answers that question directly. Good content will prove its value. Inventory that can't prove its value will price accordingly.

This is why outcomes are so important for the long-term health of streaming. They create the economic conditions for premium content to be properly monetized. When publishers can demonstrate that their inventory drives real results, they can defend premium pricing with evidence rather than reputation. And when premium content is properly monetized, it supports the investment required to keep creating it. That's the virtuous cycle the industry needs.

Q6: Where does VideoAmp go from here? What does the next phase of the platform look like?

The hardest part is done. Most companies are still trying to figure out the data layer, the identity layer, the privacy infrastructure, let alone stitching all of that together. We figured that out; now we get to focus on what you can actually do with it.

The AI we're layering on top has a complete view of the entire media workflow, across both sides of the market, from a single data foundation. That's a system operating at a level of context no fragmented stack can match. It learns faster, corrects more precisely, and compounds in value with every campaign.

That's what makes outcome guarantees achievable. Not ambition, but infrastructure.


Is Gen Z Ready to Level Up at Work in the AI Age? 3 Tips to Elevate Upskilling

artificial intelligence 13 May 2026

As more companies implement AI to automate tasks, many tech-forward business leaders are touting the promise of “freeing up” employees for more creative and impactful work. However, much of this automation is streamlining work done by entry-level employees, who may not have the skills to step into those new roles and responsibilities. 

Gen Z employees need upskilling support to stay ahead of the curve and help their organizations seize new opportunities for growth and innovation. Unfortunately, only 39% of Gen Z workers now feel equipped to perform well in their role, a drop of more than 20% from the previous year. More than two-thirds of Gen Z employees admit to feeling out of their depth at work, and nearly 6 in 10 say they have used AI to complete work tasks – not to boost their efficiency, but because they felt undertrained.

How can organizational leaders reverse this trend and equip Gen Z employees with the confidence, support, and skills they need to do their best work? Start by designing a training strategy using these three pillars:
1. Leverage AI for Guided Learning
While many are focused on AI’s potential for disruption in the workforce, AI and other innovative technologies can also play a key role in supporting employee development. Currently, only one-third of employees are satisfied with their organization’s opportunities for skill development, but technology can enable organizations of any size to make training accessible, personalized, and engaging for Gen Z. 

Already, 75% of Gen Z professionals are using AI to learn new skills. This may tempt workforce leaders to leave younger employees to upskill on their own, but without oversight, employees may learn “bad habits” or ways of working that aren’t aligned with their company’s processes. Instead, organizations should explore how they can harness AI to quickly create training content that is tailored to the employee’s needs and company’s goals. 


2. Welcome New Joiners to a Vibrant Learning Culture

Gen Zs know that skill development is key to career progression. This means training is not only about productivity and performance, but retention. In a recent survey, nearly half of Gen Z participants said they were ready to quit their jobs to seek better professional growth opportunities. When Gen Z professionals don’t believe they can learn and grow within the company, they leave as soon as possible, leaving companies to start the expensive recruiting process all over again.

The best way to stop this cycle is to show new Gen Z employees from day one that they are joining an organization with a culture of learning. A stunning 8 in 10 employees say they would stay at a company longer if they had better onboarding, while only 12% grade their company’s onboarding highly. By redesigning onboarding with clear and personalized learning pathways that show new employees how they can grow, organizations can close this opportunity gap and raise retention rates, while also better preparing new Gen Z workers to succeed in their roles.

Additionally, managers of Gen Z employees can help showcase the company’s learning culture by modeling continuous learning themselves. When Gen Zs see their managers regularly growing their own skills and knowledge, learning feels like a way of life at the organization, where all employees have opportunities to develop. 


3. Prepare Gen Z to Pivot with Soft Skills

As organizational leaders rethink training for Gen Z, it’s critical to not only consider how much training to offer, but how to deliver it most effectively. For many years, corporate training has been dominated by one-directional presentations and online training that functioned as a checklist activity. These strategies not only fail to engage Gen Z; they also fail to prepare them for long-term success in a rapidly changing work environment.

According to Deloitte, over 80% of Gen Zs believe soft skills – including communication, leadership, and cooperation – are more critical for career progression than technical skills. Research also indicates that soft skills and other foundational skills help professionals stay resilient to industry changes, learn specialized skills faster, and advance to higher job roles. 

With this in mind, soft skills development should be integrated into all training, especially for Gen Z. For example, employees can practice teamwork in online game-based training. Communication and leadership skills can be honed with peer learning and interactive presentations, supported by digital learning tools. This approach delivers training that strengthens technical and soft skills simultaneously, boosting training efficiency, employee engagement, and team connections.

The most successful training programs for Gen Z employees will be those based on an understanding of how Gen Zs want to learn and the skills they need to adapt to an ever-changing world. From this foundation, AI and other technologies can act as an accelerant, enabling trainers to experiment and iterate quickly to optimize learning outcomes and drive real business results.

 


Who Owns the Risk When AI Makes the Call

artificial intelligence 12 May 2026

 

A last-minute meeting hits your calendar: “URGENT: Client Issue, Transaction Freeze.”

You join and learn the AI fraud detection system your team deployed froze a multimillion-dollar transaction from a long-time client. The account team has escalated. Legal’s asking questions. The client wants an explanation.


Your team doesn’t have one.

Moments like this are becoming routine. And when people ask why, the answer is that the AI made the decision. The outcome appears clean and neutral. But no one feels like they made the call, and that’s the risk.

AI doesn’t reduce accountability or liability; it obscures who owns the decision.

Why AI Amplifies Bias at Scale

Once AI is trained on past decisions or set with narrow criteria, it carries those patterns forward, shaping who gets hired, how people are paid and treated, and which transactions are flagged.

A model that favors certain schools, career paths, geographies, spending patterns, or customer profiles can produce uneven outcomes while still appearing neutral. What were once isolated decisions become a pattern that’s easier to document and challenge. That’s what changes the nature of risk. A flawed assumption or biased dataset doesn’t stay isolated; it scales.

By the time those outcomes are visible, the logic behind them is embedded in the workflow. The result is inefficiency, bias, and legal exposure.

Where AI Liability Shows Up Inside Organizations

AI liability shows up in decisions that are sensitive, regulated, or likely to be challenged. 

In HR, that includes hiring, pay, and promotion. In marketing, it’s claims, targeting, and personalization. In customer operations, it’s access, service decisions, and complaint handling. In finance and procurement, it’s approvals, fraud flags, and vendor treatment. 

These systems shape who gets hired, paid, served, approved, flagged, or denied service. When those decisions are questioned, whether by a regulator, a customer, or an employee, the company must explain them.

If the outcome is discriminatory, misleading, unfair, or can’t be clearly defended, the company is liable.

How to Integrate AI Risk Into Your Existing Risk Framework

Most organizations treat AI as a new category of risk. That leads to new policies and separate oversight, but leaves the core issue unchanged. AI-driven decisions often sit outside existing controls, influencing outcomes without being clearly tied to escalation paths, documentation standards, or accountable owners.

Integrating AI risk means treating those decisions like any other high-impact process. Where outcomes carry regulatory or financial consequences, they should be governed the same way. That requires identifying where AI is used in decision-making, defining who is accountable, and ensuring those decisions can be explained and reviewed when challenged.

The goal is to close the gap between how decisions are made and how they’re governed.

The Governance Trap

In practice, many governance efforts fall into a predictable pattern: policies are written, oversight is assigned, and “human-in-the-loop” safeguards are introduced.

But these measures often create the appearance of control without changing outcomes.

Policies are static, while AI systems evolve. Human oversight is frequently too late—applied after decisions are executed—or is too shallow to challenge the system’s logic.

The result is a gap between formal governance and real accountability. Decisions are made, but ownership remains unclear.

A Better Way to Think About Ownership

Closing that gap requires a more precise definition of ownership.

AI systems distribute responsibility across multiple layers:

  • Design: Who sets objectives, selects data, and defines the model
  • Deployment: Who chooses to apply it in a specific context
  • Decision: Who owns the outcome when it’s used

Risk becomes difficult to manage when these layers are fragmented. A model may be built by one team, deployed by another, and relied on by a third. When something goes wrong, accountability diffuses.

Clarity comes from explicitly assigning ownership at each layer and aligning them.
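
As a hedged illustration, that assignment can be as simple as a registry that names an accountable owner at each layer for every AI-driven decision process; the process name and roles below are hypothetical, not a prescribed structure.

```python
# Minimal sketch of an ownership registry mapping each AI-driven decision process
# to a named accountable owner per layer. Process and role names are hypothetical.
OWNERSHIP_REGISTRY = {
    "fraud_transaction_screening": {
        "design": "ML platform lead",               # sets objectives, selects data, defines the model
        "deployment": "payments engineering manager",  # chooses to apply it to live transactions
        "decision": "head of fraud operations",        # owns the outcome when it is used
    },
}

def accountable_owner(process: str, layer: str) -> str:
    """Answer 'who owns this?' for a given process and layer, or fail loudly if unassigned."""
    return OWNERSHIP_REGISTRY[process][layer]
```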

What Leaders Must Do

Managing AI risk is ultimately a leadership issue. It requires decisions about accountability and technology. That starts with a few non-negotiables.

First, make ownership explicit. Every AI-driven process should have a clearly defined accountable party. Second, embed governance in workflows. Controls should exist where decisions happen, not just in policy documents. Third, audit inputs and assumptions, not just outputs. The logic behind the system matters as much as the results it produces. Finally, define where AI informs decisions versus where it makes them. Not every decision should be automated, especially in high-stakes contexts.

As regulators, enterprise buyers, and partners ask more detailed questions about how AI decisions are made and explained, governance is becoming a visible signal of maturity.

Organizations that can answer those questions clearly, with defined ownership and embedded processes, will be easier to trust and work with than those trying to reconstruct decisions after the fact.

 

 

 


How to Build a Post-Sale Support System That Scales With Your Customers

marketing 7 May 2026

A post-sale support system is everything your customers receive after they make a purchase, and it matters to a growing business because it makes customers feel properly cared for. In this article, we explain why scalability is essential and how to build a post-sale support system that scales with your customer base.

The Need for Scalability

It can be very difficult to scale your support system properly when you experience rapid or unprecedented growth. Failing to scale your support as your business grows will ultimately lead to frustrated customers, lost sales, employee burnout, increased operational costs, and increased turnover rates. Building a scalable support system, by contrast, will maintain service quality, lower or hold steady operational costs, build customer loyalty, and improve employee engagement.

Automate Routine Tasks

One of the easiest ways to make your support system scalable is by automating routine tasks. Use automated responses and automated follow-ups so that customers get an immediate confirmation that their issue is being looked at as well as confirmation that their request is moving through the system. 
 
You can also set up a ticket routing system and an escalation workflow so that customer issues are automatically sorted by keywords and escalated appropriately: the most pressing issues are addressed and resolved first, while lower-priority ones are handled through automation, as in the sketch below.
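
As a rough illustration of that kind of routing logic, here is a minimal Python sketch; the keywords, team names, priority levels, and SLA window are hypothetical placeholders rather than recommendations.

```python
# Minimal sketch of keyword-based ticket routing with priority escalation.
# Keywords, team names, and the SLA window are illustrative, not prescriptive.
from dataclasses import dataclass

ROUTING_RULES = [
    # (keywords, assigned team, priority: lower number = more urgent)
    ({"refund", "chargeback", "billing"}, "billing", 1),
    ({"outage", "down", "error", "crash"}, "technical", 1),
    ({"delivery", "shipping", "tracking"}, "fulfillment", 2),
    ({"password", "login", "account"}, "self_service_bot", 3),
]

@dataclass
class Ticket:
    subject: str
    body: str
    team: str = "general"
    priority: int = 3
    escalated: bool = False

def route(ticket: Ticket) -> Ticket:
    """Assign a team and priority from the first rule whose keywords match."""
    text = f"{ticket.subject} {ticket.body}".lower()
    for keywords, team, priority in ROUTING_RULES:
        if any(word in text for word in keywords):
            ticket.team, ticket.priority = team, priority
            break
    return ticket

def escalate_if_stale(ticket: Ticket, hours_open: float, sla_hours: float = 24) -> Ticket:
    """Bump unresolved tickets to a higher priority once they breach the SLA window."""
    if hours_open > sla_hours and ticket.priority > 1:
        ticket.priority -= 1
        ticket.escalated = True
    return ticket

# Example: an urgent technical issue is routed ahead of a routine shipping question.
urgent = route(Ticket("Site is down", "Checkout page shows an error"))
routine = route(Ticket("Where is my order?", "Need a tracking update"))
```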

Use AI Chatbots

Many successful businesses are using AI chatbots as part of their scalable customer service model. Modern chatbots can be trained on historical tickets and other information you feed them so that they can answer questions and handle basic issues like account updates or simple order changes.
 
AI chatbots can also be used as part of escalation workflows so that the most complex and pressing issues are handled by trained human employees while the most routine tasks are handled by the software of your choice. For example, a CNC machinery manufacturer could train a chatbot to answer common questions about their fiber lasers while escalating more complex questions to one of their CNC specialists.
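
To make that hand-off decision concrete, here is a hedged sketch of a confidence-based escalation rule; the topic list and the 0.75 threshold are illustrative assumptions, not benchmarks.

```python
# Minimal sketch of a confidence-based escalation check for an AI support chatbot.
# The routine-topic list and threshold are illustrative assumptions.
ROUTINE_TOPICS = {"order_status", "delivery_estimate", "account_update", "simple_order_change"}

def should_escalate(topic: str, bot_confidence: float, threshold: float = 0.75) -> bool:
    """Hand the conversation to a human specialist when the topic is complex
    or the bot is not confident enough in its answer."""
    return topic not in ROUTINE_TOPICS or bot_confidence < threshold

# A routine, high-confidence question stays with the bot;
# a complex, low-confidence one goes to a specialist.
assert should_escalate("delivery_estimate", 0.92) is False
assert should_escalate("fiber_laser_calibration", 0.60) is True
```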

Establish Self-Service Options

Self-service options are a simple but effective way to let customers solve their own problems and answer their own questions without employee involvement. The most basic self-service option is to build out FAQs on various landing pages and/or as its own page on your website so that the most common questions and issues can be answered or resolved in seconds.
 
Also, you can create helpful articles that quickly answer most customer questions and have an automated system put in place that guides customers to these articles before they actually need support from an actual employee. Try to diversify your self-service options as your business grows so that there is an entire self-service section to your site that can help customers get their issues resolved quickly and without hassle.

Segment Customers Based on Needs

Within all of these support features, you should try to segment your customers based on the seriousness of their needs. The most pressing and complex issues should be put in a higher priority list and receive personalized attention from your support staff. You can separate out the most common high-priority issues you expect your customers to have and then assign them keywords or phrases so that your team and software know what to look at first.
 
Less pressing or complex customer issues can be resolved through automation and AI chatbots. Basic things like delivery time estimates, simple order changes, and other routine tasks do not need to be prioritized or even necessarily handled by actual members of your team. You can also create an escalation workflow so that unresolved issues are moved up to a higher priority level.

Use Multi-Channel Support Systems

You can also set up a support system connected to the multiple channels your brand works in. The first channel can be email, with incoming customer emails converted to tickets so they can be sorted by keywords and routed to the right team members. Another channel can be phone integration, so that support agents on the phone with customers have access to customer ticketing histories and other relevant information.
 
Live chats can be integrated as well to answer customer questions quickly during regular business hours. Lastly, social media messages can also be channeled into your support system so that they can be treated like regular support tickets.
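
One way to picture this, purely as a sketch, is a single ticket model that every channel is normalized into before routing; the field names below are assumptions, since real channel payloads vary by provider.

```python
# Minimal sketch of normalizing messages from multiple channels into one ticket model,
# so email, phone, live chat, and social all flow through the same routing and history.
# Field names are illustrative; real channel payloads vary by provider.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedTicket:
    channel: str          # "email", "phone", "live_chat", "social"
    customer_id: str
    message: str
    received_at: datetime

def from_email(sender: str, subject: str, body: str) -> UnifiedTicket:
    return UnifiedTicket("email", sender, f"{subject}\n{body}", datetime.now(timezone.utc))

def from_social(handle: str, post_text: str) -> UnifiedTicket:
    return UnifiedTicket("social", handle, post_text, datetime.now(timezone.utc))

# Both end up in the same queue, with the same fields available to agents and automation.
queue = [
    from_email("customer@example.com", "Wrong item received", "I ordered size M..."),
    from_social("@customer_handle", "Still waiting on my refund"),
]
```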

Track Key Metrics

Use key metrics like first contact resolution, response times, resolution times, and customer effort scores to understand how quickly and often customer problems are getting resolved. Metrics like customer satisfaction and cost per resolution are also important for the long-term financial success of your business.
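
As a simple illustration of how two of these metrics can be computed from exported ticket data, here is a minimal sketch; the field names are assumptions about what a ticketing system might export.

```python
# Minimal sketch of computing two of the metrics mentioned above from closed tickets.
# Ticket fields are illustrative assumptions about a ticketing system's export format.
def first_contact_resolution_rate(tickets: list[dict]) -> float:
    """Share of resolved tickets closed with a single agent touch."""
    resolved = [t for t in tickets if t["resolved"]]
    if not resolved:
        return 0.0
    return sum(1 for t in resolved if t["agent_touches"] == 1) / len(resolved)

def average_resolution_hours(tickets: list[dict]) -> float:
    """Mean time from ticket creation to resolution, in hours."""
    durations = [t["resolution_hours"] for t in tickets if t["resolved"]]
    return sum(durations) / len(durations) if durations else 0.0

sample = [
    {"resolved": True, "agent_touches": 1, "resolution_hours": 2.5},
    {"resolved": True, "agent_touches": 3, "resolution_hours": 30.0},
    {"resolved": False, "agent_touches": 1, "resolution_hours": None},
]
print(first_contact_resolution_rate(sample))  # 0.5
print(average_resolution_hours(sample))       # 16.25
```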
 
On top of tracking these metrics, you should regularly test your system, stay on top of industry trends, and invest in continuous improvement and training of your employees as you scale. Creating this feedback loop will keep your support system perpetually strong.

Applicable Takeaways

Building a scalable support system will maintain service quality, increase sales, lower operational costs, build customer loyalty, improve employee engagement, and lower employee turnover rates. You can make your support system scalable by automating routine tasks, using AI chatbots, establishing self-service options, segmenting your customer base by issue priority, using multi-channel support, and tracking key metrics. By implementing all of these tactics, your support system will keep service quality high, reduce employee burnout, and, most importantly, keep your customer base happy.

Gaming Solved App Monetization From Day One. Why Is the Rest of the App Economy Still Playing Catch-Up?

marketing 5 May 2026

Shobeir Shobeiri, Director of Publisher Sales, Moloco 


Often, app publishers still treat monetization as a partner decision. Gaming publishers treat it as infrastructure.


Early on, gaming companies approached monetization as a system, not an add-on, fostering an environment where multiple advertisers compete for every impression, driving revenue while maintaining performance and user experience.


What’s more, leading publishers like King and Supercell have operated top-grossing titles such as Candy Crush Saga and Clash of Clans for more than a decade. These games are still culturally relevant, standing the test of time as high-engagement products that continue to rank among the most downloaded and highest-earning apps globally.


What’s important is that they achieved this while aggressively monetizing through ads and in-app purchases. Critics would argue that this level of monetization should have led to a worse user experience and, over time, a reduction in engagement. Instead, their sustained decade-long success directly challenges the notion that monetization comes at the expense of user experience; in gaming, the opposite appears to be true, and well-designed monetization systems can allow publishers to increase competition and yield while maintaining engagement over time.


The majority of the remaining app ecosystem took a different path. Utility apps like news or weather, along with sports scoring and social apps, appear to have focused on building engagement and scale. While they succeeded, enjoying the reach of millions of users each day, their monetization still seems to lag behind, especially as many of these apps are free to download.


Most non-gaming apps monetize the few while gaming monetizes the many.


Subscriptions, transactions, and commerce models generate meaningful revenue, but often only from a small percentage of users. In today’s environment, that model comes under increasing pressure if subscription growth slows, retention becomes more volatile, or broader macroeconomic conditions limit consumer willingness to spend. With roughly 60% of app store revenue attributed to games, the non-gaming audience still remains under-monetized. Gaming publishers focused on solving the monetization issue from day one. Other app categories are still catching up.


Instead, these apps stitched together an ad strategy. An SDK here or tagging in a demand partner there. A setup where only a limited number of advertisers can compete for each impression, leaving meaningful revenue on the table. Over time, that approach created fragmented stacks where limited demand competes, auctions lack pressure, and yield plateaus.


This divide has defined the last decade of mobile trends. 


The scale of the gap is visible and widening by the day. With mobile games generating the majority of app store revenue, it appears the difference is not in audience size, but rather in monetization maturity.


It seems gaming built systems designed to extract value from the entire user base, while most other apps monetize just a fraction of theirs.


In gaming, monetization is diversified across formats. We are seeing that some major studios can generate around 15 percent of revenue from advertising, while hybrid models often balance revenue more evenly between ads and in-app purchases. In some cases, such as hyper-casual games, advertising accounts for nearly all revenue.


Even today, despite increased screen time and the removal of friction around payments, converting users to make in-app purchases remains challenging. It was even more difficult over a decade ago, which is why gaming apps took to this strategy early on. 


Hybrid monetization works because it increases competition for each impression. Research shows that combining in-app advertising with in-app purchases yields higher revenue and lifetime value than single-revenue models, with some segments seeing returns more than 50 percent higher.


The gains come from better auction dynamics, not simply more ads. More demand sources competing in real time leads to higher performance and higher yield without degrading the user experience.


The challenge is no longer just acquiring users. It is capturing value once they are inside the app.


As acquisition becomes more expensive and less predictable, the ability to monetize existing users becomes a primary growth driver. 


Publishers that can support more demand competition and better performance within their apps will be in a stronger position to capture value as budgets move. Those that cannot will see more of that value captured elsewhere.


The next phase of app monetization will not be defined by how many SDKs a publisher adds. It will be defined by how effectively those partners are made to compete for each impression, and how much control the publisher retains over performance.


Gaming solved this years ago. The rest of the app economy is just starting to catch up.

The Security Threat Your AI Strategy Didn’t Account For.

marketing 4 May 2026

Q1: Autonomous AI agents are gaining traction fast. How do you define them in a business context today?


An autonomous agent is software that plans, decides, and acts across systems using its own reasoning, not a pre-coded workflow. The business-relevant distinction is not really about AI itself but about agency with credentials: a true agent holds its own non-human identity, invokes tools and APIs, and produces outcomes with minimal human involvement. The honest reality is that most of what is being sold as “agentic AI” right now is not actually agentic; analysts like Gartner estimate that of the thousands of vendors claiming agentic solutions, only around a hundred offer genuinely agentic features. That gap exists largely because SaaS can no longer raise venture capital the way it once could, so companies position themselves as AI businesses whether they are or not. For CISOs, boards, and buyers, a system is only truly agentic when it can plan multi-step action toward a goal, select tools dynamically, and operate without a pre-defined script.


Q2: Why do you think autonomous agents introduce a new and poorly understood layer of enterprise risk?


Autonomous agents collapse four risk domains that organizations have always governed separately: identity, application logic, data access, and change control. An agent is a non-human identity acting around the clock at machine speed, with non-deterministic reasoning, meaning the same prompt can produce different actions on different runs, and it discovers and chains access paths that the developers who deployed it never mapped. Its behavior can also drift at runtime from something as simple as a prompt injection hidden in a document or a tool that behaves slightly differently than it did last week. What makes this poorly understood is that most organizations have deployed these systems without the controls to match, and in many cases cannot reliably stop a misbehaving agent, constrain it to its stated purpose, or even produce a full inventory of what agents are running in their environment. That is not an abstract risk: it is an unsupervised insider with administrative access operating at a speed no human security analyst can match.


Q3: What are some real-world examples where these AI agents could create unexpected security vulnerabilities?


The incidents are already happening, and they share a common thread: no malware, no traditional exploit. The agent’s own privileges were the attack surface, and in each case the agent did exactly what it was instructed to do, just by the wrong party. A few that illustrate the range of exposure:


•       AI coding agent deletes production database: An AI coding agent deleted a live production database during a code freeze, then fabricated records to conceal the action.


•       AI chat agent OAuth token compromise: Compromised OAuth tokens for an AI chat agent enabled supply-chain data theft from hundreds of downstream companies.


•       AI coding assistant remote prompt injection: A vulnerability in an AI coding assistant allowed hidden instructions embedded in source code to manipulate the agent into exfiltrating code, patched after responsible disclosure.


These are documented failures from production environments, and the organizations involved are early movers who deployed faster than they governed. Every enterprise on a similar trajectory is carrying similar exposure.


Q4: Do you think most organizations are underestimating the risks associated with autonomous AI? If yes, why?


The underestimation is structural, not attitudinal, and it starts at the board level. Most directors broadly understand that AI matters and can speak to the headlines, but they cannot distinguish a real agentic deployment from agent washing or meaningfully probe the risk profile of what their organization is actually running. That gap at the oversight layer would be manageable if AI were being treated as a strategic capability requiring patient capital, but it is mostly being treated as a cost reduction lever, and that framing cascades downward as relentless pressure on CEOs and CFOs to return value to shareholders. In that environment, the controls conversation loses to the velocity conversation almost every time, shadow AI proliferates, and identity governance debt gets stress-tested by a technology that creates non-human identities at machine speed. The bigger strategic risk here is actually not deploying agents at all, because competitors that figure out governed deployment first will compound productivity advantages faster than security-driven laggards can recover, and the organizations still debating whether to start have already lost ground.


Q5: How are traditional security models falling short when it comes to managing AI-driven systems?


Traditional security models were built for a world where identity meant a human, behavior was deterministic, and change was reviewable before it reached production, and agents break all three of those assumptions simultaneously. Multi-factor authentication has no meaningful application against a non-human identity operating without a human in the loop, SIEM baselines built around normal working hours fall apart against systems that run around the clock, and data loss prevention tuned to keyword patterns is trivially defeated by an agent that can chain approved tools to exfiltrate through sanctioned channels. In practice, developers also grant broad access scopes to ship fast, and credential hygiene at the machine identity layer has been failing in most enterprises for years before agents arrived to stress-test it. The control surface has moved from the perimeter and identity layer to the runtime action layer, the point where an agent reaches out to call a tool, touch data, or change state, and security programs that have not rebuilt enforcement there are protecting against last year’s threat model while the actual attack surface runs unmonitored one layer deeper.


Q6: What are the biggest challenges companies face in trying to control or monitor autonomous agents?


The foundational challenge is inventory, because you cannot govern what you cannot see, and agents are harder to discover than shadow IT ever was since they get built on personal API keys, run inside developer workflows, and quietly accumulate across business units without anyone maintaining a definitive list. Close behind that is containment: a surprisingly large share of organizations that have deployed agents cannot actually stop one mid-action when it begins to misbehave, and without a runtime policy engine or fast enough credential revocation, every agent deployment becomes an asymmetric bet with bounded upside from automation and unbounded downside if something goes wrong. Attribution is the third problem, because when agents share credentials, which they often do since developers default to the path of least resistance, there is no way to tie a specific action back to a specific agent, and in multi-agent workflows there is no mature standard for one agent to cryptographically verify another’s identity and scope. Explainability rounds it out: when an agent takes an action, most organizations cannot produce a reasoning trace that answers the basic question of why, and that will matter enormously to auditors and regulators. None of these are exotic problems, but they do require treating agents as a new class of actor rather than another application to slot into an existing security stack.


Q7: How can organizations start building better governance frameworks for AI agents today?


Start with discovery: a full inventory of every agent, every MCP server, and every non-human identity tied to AI systems, each mapped to a named human owner, because organizations that skip this step build governance on sand. From there, anchor on a clear set of standards rather than getting stuck debating frameworks: NIST AI RMF or ISO/IEC 42001 for the enterprise governance spine, OWASP ASI 2026 as the threat taxonomy for engineering and red-teaming, and AIUC-1 as the assurance bar for agents you procure or ship. Every agent should be designed for containment from day one with scoped credentials, time-bound tokens, an explicit tool allowlist, and a runtime kill switch, with policy enforcement operating at the action layer where every tool call is evaluated in-line and high-blast-radius actions require a human in the loop. Behavioral telemetry capturing reasoning traces, tool calls, inputs, outputs, and memory state needs to be standard practice, because without it there is no credible incident response capability when something goes wrong. The organizations that get this right will treat agent governance as a permanent operating capability rather than a project with an end date.
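
As a purely illustrative sketch of what action-layer enforcement can look like, consider the following; the agent, tool names, and policy store are hypothetical assumptions, not a reference implementation of any specific product or standard.

```python
# Minimal sketch of action-layer policy enforcement for an AI agent, along the lines
# described above: a scoped tool allowlist, a time-bound credential, a runtime kill
# switch, and human approval for high-blast-radius actions. All names are hypothetical.
import time

AGENT_POLICIES = {
    "invoice-reconciler": {
        "allowed_tools": {"read_invoice", "flag_discrepancy"},
        "high_blast_radius": {"freeze_transaction", "delete_record"},
        "token_expires_at": time.time() + 3600,  # time-bound credential
        "killed": False,                          # runtime kill switch
    }
}

def authorize_tool_call(agent_id: str, tool: str, human_approved: bool = False) -> bool:
    """Evaluate every tool call in-line before it executes."""
    policy = AGENT_POLICIES.get(agent_id)
    if policy is None or policy["killed"]:
        return False                              # unknown or disabled agent
    if time.time() > policy["token_expires_at"]:
        return False                              # credential expired
    if tool in policy["high_blast_radius"]:
        return human_approved                     # require a human in the loop
    return tool in policy["allowed_tools"]        # least-privilege allowlist

# A routine read is allowed; a destructive action without sign-off is blocked.
assert authorize_tool_call("invoice-reconciler", "read_invoice") is True
assert authorize_tool_call("invoice-reconciler", "delete_record") is False
```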


Q8: Are there specific industries that are more exposed to these risks than others?


Exposure does not track cleanly to the industries most people assume, and the sectors most at risk right now are the ones under the greatest economic pressure to adopt AI fast, which cuts across industries that have historically been quite cautious. Retail, consumer tech, logistics, and high-volume service businesses combine high agent volume, high customer data exposure, and intense margin pressure to deploy ahead of the competition, and when the board message is “move fast or lose to someone who will,” governance discipline is typically the first thing that slips. Traditional high-regulation industries carry real exposure too but for different reasons: financial services face autonomous transactions under heavy regulatory scrutiny, healthcare combines patient data with clinical decision-making where an agent error can translate to patient harm, and critical infrastructure is where agent compromise moves beyond data loss into life safety territory. Software and SaaS providers carry a particularly sharp version of supply-chain risk, where a single compromised agent can cascade to hundreds of downstream customers, which is a pattern we have already seen play out in real incidents. The common factor is the intersection of economic pressure, data sensitivity, regulatory weight, and blast radius, and any organization sitting at two or more of those dimensions should be treating this as a board-level risk rather than a technology program.


Q9: What role should cybersecurity teams play in shaping AI adoption strategies?


Security needs to operate as a co-architect of AI adoption rather than a gatekeeper at the end of it, because the gatekeeper model is precisely how organizations end up with shadow AI, surprise deployments, and a governance posture that is always reacting to decisions already made. In practice that means security is in the room for use-case selection, model selection, and architecture from day one, publishing a paved road of approved models, vetted servers, pre-built identity templates, and sanctioned architecture blueprints that makes the secure path the easy path. It also means tiering autonomy by risk so low-risk agents move through self-service while high-blast-radius agents get the scrutiny they deserve, and using AI to govern AI through runtime policy engines and automated red-teaming, because manual review will not scale to agent volume. The CISOs winning this cycle are the ones making the case clearly to their boards that slow traditional review is not the safer choice, it is the choice that drives deployment underground where there is no visibility at all.


Q10: Looking ahead, what are the key steps enterprises should take now to safely scale autonomous AI?


Before scaling anything, get the foundations right: build a real inventory of agents, MCP servers, non-human identities, and model dependencies, and organizations that cannot produce that list today should pause new deployments until they can, because the goal is to make sure adoption is happening on a surface you can actually see. In parallel, pick a governance spine and stop debating frameworks, with AIUC-1 as the most directly relevant anchor given it is the first standard written specifically for AI agent security, safety, and reliability, layered with OWASP ASI and NIST AI RMF as your regulatory posture requires. On the controls side, deploy runtime policy enforcement in-path between agents and the tools they call, rebuild the identity layer on time-bound tokens and least-privilege scoping, and capture behavioral telemetry to a dedicated AI observability platform, because agent governance without identity governance is theater. Strategically, architect for a world where agents are the default actors, which means guardian agents monitoring peers for drift, multi-agent architectures that assume one agent in the chain will be compromised, and signed inter-agent messages with explicit trust boundaries. The enterprises that win this cycle will be the ones that can demonstrate governed adoption is faster and more durable than ungoverned adoption, because if security cannot make that case clearly, the argument is lost before it starts.
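
To make the telemetry point concrete, here is a minimal sketch of one structured record per tool call; the field names and log destination are assumptions rather than a prescribed schema.

```python
# Minimal sketch of the behavioral telemetry described above: one structured record per
# tool call, capturing inputs, outputs, and the reasoning trace so incidents can be
# reconstructed later. Field names and the log destination are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentActionRecord:
    agent_id: str
    tool: str
    inputs: dict
    output_summary: str
    reasoning_trace: str           # why the agent chose this action
    approved_by: Optional[str]     # human approver for high-blast-radius actions, if any
    timestamp: str

def log_action(record: AgentActionRecord, path: str = "agent_audit.jsonl") -> None:
    """Append the record as one JSON line to a dedicated audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_action(AgentActionRecord(
    agent_id="invoice-reconciler",
    tool="flag_discrepancy",
    inputs={"invoice_id": "INV-1042"},
    output_summary="Flagged a $12,400 mismatch for review",
    reasoning_trace="Line totals differ from the purchase order by more than the tolerance",
    approved_by=None,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```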
 



One last thought worth leaving readers with: the real risk is not that agents will be attacked in the traditional sense – it’s that they will do exactly what they were asked to do, in a way no one anticipated, at machine speed, across systems no one mapped. Build the controls for that reality, not the one in the marketing deck.

From Fragmented Martech Stacks to Unified Data Platforms as a Foundation for AI

marketing 30 Apr 2026

Q1. The industry is clearly moving away from fragmented martech stacks. What are the main limitations you've observed with traditional setups involving DMPs, CDPs, and data clean rooms?


These tools were never designed to work together; they were built to solve different problems for different segments of the media industry at different points in time. DMPs were built mainly for publishers navigating the third-party cookie era. CDPs came along to fix the single-customer-view problem for brands internally. Data clean rooms were adopted in response to signal loss across the board by brands, publishers, and retailers alike. So you’re looking at three separate architectures, three vendor relationships, three data pipelines.


What we hear constantly from publishers and retailers is that stitching these together creates enormous operational drag. Every handoff between tools is a point of latency, a potential compliance risk, and a cost center. And because none of them were built with collaboration in mind from the start, the moment you try to do something cross-party (enrichment with a partner's data, joint measurement, audience activation beyond your own properties, etc.) you hit a wall. The stack simply wasn't designed for the collaboration era, and even less for AI.

 

Q2. What is driving organizations to adopt more unified and flexible data platforms today, and how urgent is this shift?


Three pressures are converging simultaneously, which is what makes this moment feel different from earlier transitions.


First, regulation has fundamentally changed what's permissible. GDPR and a growing body of case law have made clear that moving customer data freely between systems is over: organizations need technical guarantees, not just contractual ones, for hassle-free and fast collaboration. Second, the signal environment has degraded: third-party cookies are declining, and universal identity solutions have helped at the margins but haven't filled the gap. Third — and most importantly — the value of first-party data is now demonstrably tied to collaboration. Data sitting in one organisation's DMP is interesting. Connected to a brand's CDP or a retailer's transaction history, it becomes genuinely powerful.


The media players moving now are building structural advantages. Those waiting are watching legacy DMP contracts come up for renewal with no clear answer for what replaces them.

 

Q3. From your perspective, what does a truly "unified" data platform look like in practice, beyond just integrating multiple tools?


"Unified" gets used to mean fewer vendor logos on a slide. That's not what I mean in this case necessarily.


A truly unified platform is one where the architecture was designed from the start for collaboration and privacy with the goal of creating networks between data owners, not just optimising data within a single organisation. When a CDP or DMP adds a clean room module, the privacy guarantees are only as strong as the wrapper. Additionally, you don't necessarily inherit any network here either, meaning each partnership might have to be built from scratch.


At Decentriq, we started from the opposite direction. Our clean room uses confidential computing: hardware-level encryption where data remains protected during processing, even from us. Using that as a foundation, we built the Collaborative Audience Platform: a unified layer adding CDP- and DMP-style capabilities — segmentation, identity resolution, activation, shared audience products. In practice, a publisher can collect data, build and enrich audiences, activate to GAM or DSPs, run closed-loop measurement, and refresh automatically all in one environment, with no seams between layers. That's what genuinely unified looks like.

 

Q4. Many companies still rely on stitching together multiple solutions. Where do these approaches typically fall short when it comes to scalability and efficiency?


The failures tend to only become visible at scale, which is precisely when they're most painful.


The first is the identity tax. Every time data moves between tools, you make assumptions about identity resolution. If your system can only handle one ID type, you can lose a significant portion of your audience during matching. The second is engineering overhead: stitched integrations need constant maintenance, and onboarding each new partner is its own project, meaning there is a hard ceiling on how many collaborations you can run in parallel. The third, which comes up in almost every conversation with publishers replacing their DMP, is the inability to operationalize collaboration at scale. One-off clean room projects are feasible. Repeatable, automated, always-on audience collaboration with multiple partners simultaneously is a different problem (and stitched stacks weren't designed for it).

 

Q5. How is this shift impacting data collaboration between brands, publishers, and retailers in real-world scenarios?


The most significant change is the move from one-to-one integrations to network-based collaboration, because this changes the economics of data entirely and provides a crucial foundation for AI.


In the old model, a publisher ran a bespoke clean room project with one advertiser at a time. High cost, limited scale. A platform model enables something fundamentally different: standardised, repeatable collaborations across a growing network simultaneously. We've seen this with OneLog in Switzerland using our technology: five publishers unified under a single audience monetization platform, enabling advertisers to plan, activate, and measure across their combined audiences.


We're seeing the same dynamic for retailers. Decentriq's Collaborative Audience Platform lets them build audiences from online and offline signals and activate with brands and premium publishers (including CTV) without raw transactional data ever leaving their control. For brands, this means accessing publisher and retailer audience data through a standardized, privacy-safe workflow instead of negotiating lots of separate agreements.

 

Q6. Privacy and compliance remain key concerns. How do modern unified platforms address these challenges more effectively than legacy martech stacks?


Legacy stacks address privacy primarily through contracts — data processing agreements, retention policies. These are necessary but not sufficient. Contracts tell you what should happen; they don't technically prevent what shouldn't.


Decentriq uses confidential computing as the central technology for data collaboration: a hardware-level technology where data is processed inside a secure enclave inaccessible to any party, including us. The privacy guarantee is technical, not contractual. A significant recent CJEU ruling validated exactly this approach, clarifying that pseudonymised data processed through technology where re-identification is technically impossible carries a different compliance profile than data protected only by agreement.


For organizations navigating GDPR, this shifts the burden dramatically: instead of documenting every data flow and relying on ongoing contractual enforcement, you can demonstrate provable technical compliance. That's increasingly what regulators, legal teams, and enterprise procurement are demanding.

 

Q7. What role does AI and automation play in enabling more seamless and actionable data collaboration within these new ecosystems?


The critical point is where AI runs. AI operating on raw data is a privacy risk. AI operating inside a confidential computing environment — on data that is never exposed — is a fundamentally different proposition.


At Decentriq, AI is embedded at several levels: lookalike modelling that extends a seed audience without either party revealing their underlying data (a luxury automotive brand saw +80% engagement and +58% conversion rate using this, for example), audience size estimation before a segment is built, and automated refresh cycles that keep audiences current across partners without manual intervention. 


Further out, the more AI is integrated into these environments, the more the collaboration network itself learns — from joint activations, measurement results, and partner interactions — rather than resetting with each new campaign. That's the direction this is heading.

 

Q8. Looking ahead, what key changes do you expect in how organizations approach data infrastructure and collaboration over the next 2–3 years?


Three shifts feel clear.


First, stack consolidation. Organisations running separate DMPs, CDPs, and clean rooms will consolidate around platforms that do two, if not all three, natively. The maintenance cost, compliance complexity, and operational drag will drive that decision.


Second, the ecosystem model becomes the norm. The value of first-party data is increasingly defined not by how much you have, but by how well it connects. Publishers contributing audiences to a collaborative network unlock revenue that's unavailable to those working in isolation. Retailers whose data can activate across a premium publisher network and close the loop with sales measurement are in a completely different competitive position. That logic will only accelerate. And as AI becomes more deeply embedded in these workflows, the network itself becomes a training asset: the more data flows through a shared collaborative infrastructure, the smarter and more precise the models that power lookalike targeting, audience estimation, and measurement become. Isolated stacks simply can't compete with that.


Third, privacy-preserving infrastructure shifts from differentiator to baseline expectation. Confidential computing and hardware-level privacy guarantees are currently seen as advanced or optional. In 2–3 years, driven by regulation, enterprise procurement standards, and demonstrated risk of alternatives, they'll be standard requirements. The organisations betting on these foundations now will be ahead of that curve rather than catching up to it.
   
