advertising 14 May 2026
With 2026 marking the definitive "death of the cookie" and rising digital ad blindness, performance marketers are seeing diminishing returns on traditional MarTech stacks.
1. The 2026 Reality Check
We’ve officially moved past the 'death of the cookie.' Now that the dust has settled, how has this shift fundamentally changed the way performance marketers view the traditional MarTech stack versus real-world triggers like OOH?
“For years, the industry sold performance marketers a promise that if you just gathered enough granular data and tweaked enough levers, you’d build a well-oiled pipeline machine. Marketers went all-in on that promise, but the machine still breaks down. And it has led to a dependency on hyper-targeting that has now hit a wall of diminishing returns.
The shift isn't away from digital; it’s toward balance. Marketers are looking at OOH not as a ‘top-of-funnel luxury,’ but as a high-fidelity data trigger. They’re realizing that real-world placement is the only unblockable, unskippable signal left. We’ve moved from a tracking-first mindset to a planning-first mindset. If you know exactly where your ICP lives and works, you don't need a cookie to find them; you just need to be there.”
2. The 'Anchor of Trust' Concept
OOH can be described as an 'anchor of trust' in an era of digital fatigue. How does seeing a brand in the physical world change a consumer's psychological response when they later encounter that same brand in a social or AI-driven feed?
“There is a fundamental psychological difference between a pixel and a pillar. We call this the Legitimacy Signal. Anyone can buy a LinkedIn ad or spin up an AI-generated landing page for $50. But when you see a brand on a massive wallscape in SoHo or a digital spectacular in Austin, your brain registers permanence.
When that same consumer later sees your ad in a social feed, the friction of "Is this company legit?" is already gone. OOH provides the physical proof of existence that makes every subsequent digital touchpoint convert at a higher rate. It’s the difference between a stranger knocking on your door and a neighbor waving from across the street.”
3. The ROI Hedge
With digital CPMs rising and conversion rates hitting a ceiling for many, how exactly does OOH act as a 'hedge' against diminishing returns in a digital-only budget?
“Every digital-only budget eventually hits a point of diminishing returns where the next dollar spent actually increases your average CAC. OOH acts as a hedge because it lowers the floor for your other channels.
By priming the market with OOH, you increase the efficiency of your paid social and search spend. You aren't just buying impressions; you’re buying market familiarity. We see it constantly: when a brand enters a physical market, its branded search volume goes up, and its digital CTRs improve. You’re hedging against rising CPMs by making your existing digital ads work 20% harder.”
4. Quantifying the 'Halo Effect'
The data suggests that OOH can drive a 50%+ increase in monthly revenue when used to 'prime' a market. Can you walk us through the mechanics of that priming? What is happening to the digital CAC (Customer Acquisition Cost) during those campaigns?
“The mechanics are simple: Trust scales faster than targeting. When we talk about priming, we’re talking about building a baseline of awareness so that your digital ads don't have to do the heavy lifting of introduction.
During these campaigns, we typically see digital CAC drop significantly. Why? Because the consideration phase has been compressed in the real world. At OneScreen, we focus on Front-End Intelligence. We map your ICP to specific real-world environments before you spend a dollar. When you establish physical presence around your target accounts or key offices, your sales team hears, ‘I see you guys everywhere.’ That’s the sound of a falling CAC.”
5. Omnichannel Orchestration
As a CRO, you see the full funnel. How should brands be orchestrating the hand-off between a physical OOH placement and a digital activation? Is there a specific 'window of influence' that marketers should be aiming for?
“The ‘hand-off’ shouldn't be a relay race; it should be a symphony. The biggest mistake is treating OOH as a silo. You should be orchestrating your digital spend to retarget the geographies where your OOH is live.
We look for a ‘Window of Influence,’ usually a 4-to-6 week immersion period. If you’re a B2B company at a major conference, your OOH should be live a week before and two weeks after. The goal is to create a surround-sound effect where the physical placement provides legitimacy, and the digital activation provides the ‘click here’ convenience.”
6. Solving for 'Digital Blindness'
We talk a lot about 'ad blindness' on mobile and desktop. Why is it that OOH seems immune to this fatigue, and how is OneScreen helping brands bridge that gap between 'real-world awareness' and 'digital conversion'?
“You can’t ‘AdBlock’ a billboard. You can't scroll past a wrapped train when you’re standing on the platform. OOH is part of the environment, not an interruption of it.
The problem has always been the Execution Gap. Marketers wanted to do OOH, but it was too hard: too many vendors, too much friction. OneScreen bridges that gap by being a tech-native partner. We handle the 17 different vendors and the fragmented formats so the marketer can focus on the strategy. We move at the speed of a software company, not a legacy billboard vendor.”
7. Advice for the Performance Marketer
For the performance-obsessed marketer who is used to tracking every single click, what is the biggest mindset shift required to successfully integrate OOH into an omnichannel strategy?
“I would spend more time measuring the metrics that brand campaigns like OOH move, and look for business results where the brand is strong. I wouldn’t try to shoehorn that data into fine-grain attribution, since the obsession with last-click attribution is what led to the digital fatigue we’re in now.
The shift is moving from measurement to confidence. If you use data-driven planning to put your brand in front of your ICP every day for a month, you don’t need a spreadsheet to prove it had an impact; your pipeline will tell you. Trust the signal. Real-world presence compounds in ways a digital ad can’t match. Don't just buy inventory; buy a defensible strategy.”
artificial intelligence 13 May 2026
Q1: VideoAmp describes itself as a media performance platform, not just another point solution. What does that architectural decision actually mean — and why does it change what's possible for your customers?
It starts with the data. Every time content is streamed, it leaves a footprint of what was watched, when, and for how long. Second-by-second, device-level, census-scale data. That's the foundation. When you combine that with large-scale identity data, you get accurate reach and outcome measurement across publishers. Add co-viewing and out-of-home, and you have a unified picture across linear and digital. That's the infrastructure layer, and until now it hadn't been available to advertisers and publishers.
What we did differently is build the full stack on top of that foundation. Publishers use VideoAmp to organize inventory, forecast revenue, measure campaigns, and serve ads. Agencies use us to plan, buy, optimize, and measure across publishers. All of it running against the same dataset and identity graph.
That architectural decision is what makes AI actually useful here. When everything shares the same foundation, AI can turn what used to be a fragmented, multi-step process into one continuous workflow — optimizing decisions in real time, learning from every campaign, and eventually underwriting outcome guarantees. That's not possible when you're stitching point solutions together because the data layers don't match, the identity doesn't reconcile, and the AI has nothing reliable to learn from.
For VideoAmp, the hard part is done. A lot of companies are still trying to figure out the layers we've spent years building. Now we get to focus on what you can actually do with it.
Q2: What's fundamentally changed in the last few years that makes a unified, AI-driven workflow actually achievable now?
Two things have changed, and they work together.
The first is data. As mentioned, we now have second-by-second, device-level, census data at the scale needed to power every stage of the workflow, from planning and buying through optimization and measurement, all running against the same underlying asset. That's genuinely new, and it's what makes closing the loop between spend and outcome operationally possible rather than theoretical.
The second change is what you can do with that data now that LLMs exist. This is where things get interesting.
The media planning and buying process is complex. Most people operating in it, even experienced buyers, don't always know exactly what to ask for or how to ask for it. They know they want better performance, but the path from that goal to the right set of decisions across publishers, audiences, formats, and timing is not obvious. Historically, that complexity lived in the heads of experts, or got lost in the gap between systems.
LLMs trained on the context of this data make what should feel like magic possible. A system that understands what you're trying to accomplish even when you can't fully articulate it, and guides you through the process of getting there. Not a dashboard or a reporting tool, but an experience that reasons with you, surfacing what matters, so you can make better decisions at every step.
That's what we're building: an AI-powered platform that understands your campaign goals better than any single tool has before, and that should do better than you'd do on your own, not by replacing your judgment but by elevating it. LLMs let us put an intelligent, agentic experience on top of the planning, optimization, and measurement solutions we've already built, turning infrastructure into something anyone can use, and something that gets smarter with every campaign it touches.
The combination of census-scale data and AI that actually understands what to do with it is what makes this moment different from every previous one.
Q3: Historically, buyers and sellers have operated with different data and different incentives. How does shared infrastructure change that dynamic and what does real alignment look like in practice?
The misalignment was never really about incentives. Everyone wants campaigns that perform. The problem was that buyers and sellers were literally working from different data. A seller reported on impressions served. A buyer measured what converted. Those two numbers came from different systems, different methodologies, different moments in time.
Shared infrastructure changes that at the root by making it so there's only one version of the data to begin with.
That's what makes our position unusual. Publishers use VideoAmp to organize their inventory, forecast revenue, measure campaigns, and serve ads. Agencies use VideoAmp to plan, buy, optimize, and measure across those same publishers. Both sides are operating on the same platform, against the same data, built on the same identity graph. The shared truth isn’t negotiated, it’s structural. It's not that we've convinced buyers and sellers to look at the same number. It's that there is only one number.
And that foundation is what makes everything else possible. When a buyer and seller are planning and measuring against the same census-scale, second-by-second data, there's no ambiguity. Guarantees become operationally possible because the line from "here's what we projected" to "here's what we delivered" is unbroken. Outcome-based deals become credible because both parties can see exactly what happened and why.
Real alignment in practice looks like this: spend that a CFO can defend, inventory that a publisher can price with conviction, and a campaign that both sides agree performed, or didn't, because they're both looking at the same truth.
Q4: You've spoken about a future where guaranteed outcomes become the standard, not a premium add-on. What needs to happen for the industry to get there?
The industry has been talking about outcome-based guarantees for years. The reason it hasn't happened at scale isn't lack of interest, it’s because the technology to actually underwrite a guarantee didn't exist.
Three things have to be true to make a guarantee possible. You need census-scale data. You need that data to be the same data driving every stage of the workflow, from planning through measurement. And you need a system intelligent enough to optimize in real time, learning from what's happening, and correcting course before a campaign goes off track.
That third piece is what LLMs make possible in a way nothing before them did. A large language model trained on the full context of this data, including the history of campaigns, the performance patterns across publishers, audiences, formats, etc., doesn't just report on what happened. It understands why it happened. It can optimize decisions in real time, learn from every campaign it touches, and get progressively better at predicting and delivering outcomes. It self-corrects. That's what makes it possible to underwrite a guarantee rather than just promise one.
But here's what I think people underestimate about where this is going. The path to guaranteed outcomes isn't just a technology argument, it's a trust argument. And trust gets built through experience. You try a system like this. It improves your return on ad spend. You try it again. It improves again. Over time, the performance compounds and so does the trust. That's how any powerful technology gets adopted at scale.
What makes our platform different is that the foundation for all of this is already built. The data, the identity graph, the full-stack workflow for both buyers and sellers. The LLMs have something real to learn from and act on. So when the system optimizes a campaign, it's working from the deepest, most comprehensive view of video advertising that exists. That's what makes the guarantee credible.
Q5: Streaming has introduced what amounts to unlimited ad supply compared to traditional TV. How does the industry avoid a race to the bottom on pricing and where do outcomes fit into that equation?
The supply problem in streaming is real. When inventory expands without a consistent link to performance, pricing pressure is inevitable and self-reinforcing. Supply that can't prove its value drags CPMs down across the board, including content that genuinely deserves a premium. The market has no reliable mechanism to separate inventory that drives results from inventory that simply adds volume.
A true outcome accounts for everything that went into it: the quality of the content, the relevance of the creative, the precision of the placement, the price paid for the impression. All of those variables collapse into a single signal: did this investment deliver a result or didn't it?
That's a fundamentally different basis for pricing. CPMs are a proxy, measuring the opportunity to be seen. Outcomes measure whether something actually happened. When you transact against outcomes, you're no longer debating whether premium content deserves a premium price, because the performance answers that question directly. Good content will prove its value. Inventory that can't prove its value will price accordingly.
This is why outcomes are so important for the long-term health of streaming. They create the economic conditions for premium content to be properly monetized. When publishers can demonstrate that their inventory drives real results, they can defend premium pricing with evidence rather than reputation. And when premium content is properly monetized, it supports the investment required to keep creating it. That's the virtuous cycle the industry needs.
Q6: Where does VideoAmp go from here? What does the next phase of the platform look like?
The hardest part is done. Most companies are still trying to figure out the data layer, the identity layer, and the privacy infrastructure, let alone stitching all of that together. We've figured that out; now we get to focus on what you can actually do with it.
The AI we're layering on top has a complete view of the entire media workflow, across both sides of the market, from a single data foundation. That's a system operating at a level of context no fragmented stack can match. It learns faster, corrects more precisely, and compounds in value with every campaign.
That's what makes outcome guarantees achievable. Not ambition, but infrastructure.
artificial intelligence 13 May 2026
As more companies implement AI to automate tasks, many tech-forward business leaders are touting the promise of “freeing up” employees for more creative and impactful work. However, much of this automation is streamlining work done by entry-level employees, who may not have the skills to step into those new roles and responsibilities.
Gen Z employees need upskilling support to stay ahead of the curve and help their organizations seize new opportunities for growth and innovation. Unfortunately, only 39% of Gen Z workers now feel equipped to perform well in their role, down more than 20% from the previous year. More than two-thirds of Gen Z employees admit to feeling out of their depth at work, and nearly 6 in 10 say they have used AI to complete work tasks, not to boost their efficiency, but because they felt undertrained.
How can organizational leaders reverse this trend and equip Gen Z employees with the confidence, support, and skills they need to do their best work? Start by designing a training strategy using these three pillars:
1. Leverage AI for Guided Learning
While many are focused on AI’s potential for disruption in the workforce, AI and other innovative technologies can also play a key role in supporting employee development. Currently, only one-third of employees are satisfied with their organization’s opportunities for skill development, but technology can enable organizations of any size to make training accessible, personalized, and engaging for Gen Z.
Already, 75% of Gen Z professionals are using AI to learn new skills. This may tempt workforce leaders to leave younger employees to upskill on their own, but without oversight, employees may learn “bad habits” or ways of working that aren’t aligned with their company’s processes. Instead, organizations should explore how they can harness AI to quickly create training content that is tailored to the employee’s needs and company’s goals.
2. Welcome New Joiners to a Vibrant Learning Culture
Gen Zs know that skill development is key to career progression. This means training is not only about productivity and performance, but retention. In a recent survey, nearly half of Gen Z participants said they were ready to quit their jobs to seek better professional growth opportunities. When Gen Z professionals don’t believe they can learn and grow within the company, they leave as soon as possible, leaving companies to start the expensive recruiting process all over again.
The best way to stop this cycle is to show new Gen Z employees from day one that they are joining an organization with a culture of learning. A stunning 8 in 10 employees say they would stay at a company longer if they had better onboarding, while only 12% grade their company's onboarding highly. By redesigning onboarding with clear, personalized learning pathways that show new employees how they can grow, organizations can close this opportunity gap and raise retention rates, while also better preparing new Gen Z workers to succeed in their roles.
Additionally, managers of Gen Z employees can help showcase the company’s learning culture by modeling continuous learning themselves. When Gen Zs see their managers regularly growing their own skills and knowledge, learning feels like a way of life at the organization, where all employees have opportunities to develop.
3. Prepare Gen Z to Pivot with Soft Skills
As organizational leaders rethink training for Gen Z, it’s critical to not only consider how much training to offer, but how to deliver it most effectively. For many years, corporate training has been dominated by one-directional presentations and online training that functioned as a checklist activity. These strategies not only fail to engage Gen Z; they also fail to prepare them for long-term success in a rapidly changing work environment.
According to Deloitte, over 80% of Gen Zs believe soft skills – including communication, leadership, and cooperation – are more critical for career progression than technical skills. Research also indicates that soft skills and other foundational skills help professionals stay resilient to industry changes, learn specialized skills faster, and advance to higher job roles.
With this in mind, soft skills development should be integrated into all training, especially for Gen Z. For example, employees can practice teamwork in online game-based training. Communication and leadership skills can be honed with peer learning and interactive presentations, supported by digital learning tools. This approach delivers training that strengthens technical and soft skills simultaneously, boosting training efficiency, employee engagement, and team connections.
The most successful training programs for Gen Z employees will be those based on an understanding of how Gen Zs want to learn and the skills they need to adapt to an ever-changing world. From this foundation, AI and other technologies can act as an accelerant, enabling trainers to experiment and iterate quickly to optimize learning outcomes and drive real business results.
artificial intelligence 12 May 2026
A last-minute meeting hits your calendar: “URGENT: Client Issue, Transaction Freeze.”
You join and learn the AI fraud detection system your team deployed froze a multimillion-dollar transaction from a long-time client. The account team has escalated. Legal’s asking questions. The client wants an explanation.
Your team doesn’t have one.
Moments like this are becoming routine. And when people ask why, the answer is that the AI made the decision. The outcome appears clean and neutral. But no one feels like they made the call, and that's the risk.
AI doesn’t reduce accountability or liability; it obscures who owns the decision.
Why AI Amplifies Bias at Scale
Once AI is trained on past decisions or set with narrow criteria, it carries those patterns forward, shaping who gets hired, how people are paid and treated, and which transactions are flagged.
A model that favors certain schools, career paths, geographies, spending patterns, or customer profiles can produce uneven outcomes while still appearing neutral. What once were isolated decisions become a pattern that's easier to document and challenge. That's what changes the nature of risk. A flawed assumption or biased dataset doesn't stay isolated; it scales.
By the time those outcomes are visible, the logic behind them is embedded in the workflow. The result is inefficiency, bias, and legal exposure.
Where AI Liability Shows Up Inside Organizations
AI liability shows up in decisions that are sensitive, regulated, or likely to be challenged.
In HR, that includes hiring, pay, and promotion. In marketing, it’s claims, targeting, and personalization. In customer operations, it’s access, service decisions, and complaint handling. In finance and procurement, it’s approvals, fraud flags, and vendor treatment.
These systems shape who gets hired, paid, served, approved, flagged, or denied service. When those decisions are questioned, whether by a regulator, a customer, or an employee, the company must explain them.
If the outcome is discriminatory, misleading, unfair, or can’t be clearly defended, the company is liable.
How to Integrate AI Risk Into Your Existing Risk Framework
Most organizations treat AI as a new category of risk. That leads to new policies and separate oversight, but leaves the core issue unchanged. AI-driven decisions often sit outside existing controls, influencing outcomes without being clearly tied to escalation paths, documentation standards, or accountable owners.
Integrating AI risk means treating those decisions like any other high-impact process. Where outcomes carry regulatory or financial consequences, they should be governed the same way. That requires identifying where AI is used in decision-making, defining who is accountable, and ensuring those decisions can be explained and reviewed when challenged.
The goal is to close the gap between how decisions are made and how they’re governed.
The Governance Trap
In practice, many governance efforts fall into a predictable pattern: policies are written, oversight is assigned, and “human-in-the-loop” safeguards are introduced.
But these measures often create the appearance of control without changing outcomes.
Policies are static, while AI systems evolve. Human oversight is frequently too late, applied after decisions are executed, or too shallow to challenge the system's logic.
The result is a gap between formal governance and real accountability. Decisions are made, but ownership remains unclear.
A Better Way to Think About Ownership
Closing that gap requires a more precise definition of ownership.
AI systems distribute responsibility across multiple layers: the teams that build models, the teams that deploy them, and the teams that rely on their outputs.
Risk becomes difficult to manage when these layers are fragmented. A model may be built by one team, deployed by another, and relied on by a third. When something goes wrong, accountability diffuses.
Clarity comes from explicitly assigning ownership at each layer and aligning them.
What Leaders Must Do
Managing AI risk is ultimately a leadership issue. It requires decisions about accountability and technology. That starts with a few non-negotiables.
First, make ownership explicit. Every AI-driven process should have a clearly defined accountable party. Second, embed governance in workflows. Controls should exist where decisions happen, not just in policy documents. Third, audit inputs and assumptions, not just outputs. The logic behind the system matters as much as the results it produces. Finally, define where AI informs decisions versus where it makes them. Not every decision should be automated, especially in high-stakes contexts.
As regulators, enterprise buyers, and partners ask more detailed questions about how AI decisions are made and explained, governance is becoming a visible signal of maturity.
Organizations that can answer those questions clearly, with defined ownership and embedded processes, will be easier to trust and work with than those trying to reconstruct decisions after the fact.
OOH as a Hedge Against 'Digital Fatigue' ROI
Interview of: Pat Griffin
The AI-Powered Shift From Media Buying to Media Performance
Interview of: Tony Fagan
Is Gen Z Ready to Level Up at Work in the AI Age? 3 Tips to Elevate Upskilling
Interview of: Sean D'Arcy
Who Owns the Risk When AI Makes the Call?
Interview of: Shawn McIntire