

Gaming Solved App Monetization From Day One. Why Is the Rest of the App Economy Still Playing Catch-Up?

marketing 5 May 2026

Shobeir Shobeiri, Director of Publisher Sales, Moloco 


Often, app publishers still treat monetization as a partner decision. Gaming publishers treat it as infrastructure.


Early on, gaming companies approached monetization as a system, not an add-on, fostering an environment where multiple advertisers compete for every impression, driving revenue while maintaining performance and user experience.


What’s more, leading publishers like King and Supercell have operated top-grossing titles such as Candy Crush Saga and Clash of Clans for more than a decade. These games are still culturally relevant, standing the test of time as high-engagement products that continue to rank among the most downloaded and highest-earning apps globally.


What’s important is that they achieved this while aggressively monetizing through ads and in-app purchases. These examples directly challenge the notion that monetization comes at the expense of user experience; critics argue that aggressive ads and in-app purchases should erode engagement over time. Yet their sustained decade-long success suggests the opposite: well-designed monetization systems let publishers increase competition and yield while maintaining engagement.


The rest of the app ecosystem took a different path. Utility apps such as news, weather, and sports scoring, along with social apps, focused on building engagement and scale. They succeeded, reaching millions of users each day, yet their monetization still lags behind, especially as many of these apps are free to download.


Most non-gaming apps monetize the few while gaming monetizes the many.


Subscriptions, transactions, and commerce models generate meaningful revenue, but often only from a small percentage of users. In today’s environment, that model comes under pressure when subscription growth slows, retention becomes more volatile, or broader macroeconomic conditions limit consumer willingness to spend. With 60% of app store revenue attributed to games, non-gaming audiences remain under-monetized. Gaming publishers focused on solving monetization from day one. Other app categories are still catching up.


Instead of building monetization as a system, these apps stitched together an ad strategy. An SDK here or tagging in a demand partner there. A setup where only a limited number of advertisers can compete for each impression, leaving meaningful revenue on the table. Over time, that approach created fragmented stacks where limited demand competes, auctions lack pressure, and yield plateaus.


This divide has defined the last decade of mobile trends. 


The scale of the gap is visible and widening by the day. With mobile games generating the majority of app store revenue, the difference appears to lie not in audience size, but in monetization maturity.


It seems gaming built systems designed to extract value from the entire user base, while most other apps monetize just a fraction of theirs.


In gaming, monetization is diversified across formats. We are seeing that some major studios can generate around 15 percent of revenue from advertising, while hybrid models often balance revenue more evenly between ads and in-app purchases. In some cases, such as hyper-casual games, advertising accounts for nearly all revenue.


Even today, despite increased screen time and the removal of friction around payments, converting users to make in-app purchases remains challenging. It was even more difficult over a decade ago, which is why gaming apps took to this strategy early on. 


Hybrid monetization works because it increases competition for each impression. Research shows that combining in-app advertising with in-app purchases yields higher revenue and lifetime value than single-revenue models, with some segments seeing returns more than 50 percent higher.


The gains come from better auction dynamics, not simply more ads. More demand sources competing in real time leads to higher performance and higher yield without degrading the user experience.
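To make that concrete, here is a minimal simulation sketch (my own illustration, assuming a simple second-price auction with uniformly distributed bids, not Moloco's model): the expected clearing price for a single impression rises as more demand sources compete for it.

```python
import random

def avg_clearing_price(n_bidders: int, trials: int = 100_000) -> float:
    """Average price in a second-price auction where each of n_bidders
    submits an independent bid drawn uniformly from [0, 1]."""
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        total += bids[-2]  # winner pays the second-highest bid
    return total / trials

# Same single ad slot, no extra ads: just more competition per impression.
for n in (2, 5, 10, 20):
    print(f"{n:>2} bidders: avg clearing price ~ {avg_clearing_price(n):.2f}")
```

With two bidders the impression clears around 0.33; with twenty, around 0.90. That is the yield effect of auction pressure: more revenue from the same inventory, without adding more ads.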


The challenge is no longer just acquiring users. It is capturing value once they are inside the app.


As acquisition becomes more expensive and less predictable, the ability to monetize existing users becomes a primary growth driver. 


Publishers that can support more demand competition and better performance within their apps will be in a stronger position to capture value as budgets move. Those that cannot will see more of that value captured elsewhere.


The next phase of app monetization will not be defined by how many SDKs a publisher adds. It will be defined by how effectively those partners are made to compete for each impression, and how much control the publisher retains over performance.


Gaming solved this years ago. The rest of the app economy is just starting to catch up.

The Security Threat Your AI Strategy Didn’t Account For.

marketing 4 May 2026

Q1: Autonomous AI agents are gaining traction fast. How do you define them in a business context today?


An autonomous agent is software that plans, decides, and acts across systems using its own reasoning, not a pre-coded workflow. The business-relevant distinction is not really about AI itself but about agency with credentials: a true agent holds its own non-human identity, invokes tools and APIs, and produces outcomes with minimal human involvement. The honest reality is that most of what is being sold as “agentic AI” right now is not actually agentic, and analysts like Gartner estimate that of the thousands of vendors claiming agentic solutions, only around a hundred offer genuinely agentic features. That gap exists largely because SaaS can no longer raise venture capital the way it once could, so companies position themselves as AI businesses whether they are or not. For CISOs, boards, and buyers, a system is only truly agentic when it can plan multi-step action toward a goal, select tools dynamically, and operate without a pre-defined script.
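To ground that definition, here is a minimal, hypothetical sketch of the mechanical difference (all function and tool names are invented stand-ins, not any vendor's API): in a scripted workflow the developer fixes the sequence in advance; in an agentic loop, a planning step chooses the next tool at runtime.

```python
from dataclasses import dataclass, field

# Illustrative stub tools; stand-ins for real systems an agent might touch.
def search_kb(q): return f"kb results for {q!r}"
def send_email(to, body): return f"sent to {to}"

TOOLS = {"search_kb": search_kb, "send_email": send_email}

@dataclass
class Action:
    kind: str                      # "tool" or "finish"
    tool: str = ""
    args: dict = field(default_factory=dict)
    result: str = ""

def plan_next_step(goal, history):
    """Stand-in for an LLM call. In a real agent the model picks the tool
    and arguments at runtime; no fixed sequence is coded in advance."""
    if not history:
        return Action("tool", "search_kb", {"q": goal})
    return Action("finish", result=f"done after {len(history)} tool call(s)")

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        action = plan_next_step(goal, history)
        if action.kind == "finish":
            return action.result
        history.append((action.tool, TOOLS[action.tool](**action.args)))
    return "step budget exhausted"

print(run_agent("refund policy for order 123"))
```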


Q2: Why do you think autonomous agents introduce a new and poorly understood layer of enterprise risk?


Autonomous agents collapse four risk domains that organizations have always governed separately: identity, application logic, data access, and change control. An agent is a non-human identity acting around the clock at machine speed, with non-deterministic reasoning, meaning the same prompt can produce different actions on different runs, and it discovers and chains access paths that the developers who deployed it never mapped. Its behavior can also drift at runtime from something as simple as a prompt injection hidden in a document or a tool that behaves slightly differently than it did last week. What makes this poorly understood is that most organizations have deployed these systems without the controls to match, and in many cases cannot reliably stop a misbehaving agent, constrain it to its stated purpose, or even produce a full inventory of what agents are running in their environment. That is not an abstract risk: it is an unsupervised insider with administrative access operating at a speed no human security analyst can match.


Q3: What are some real-world examples where these AI agents could create unexpected security vulnerabilities?


The incidents are already happening, and they share a common thread: no malware, no traditional exploit. The agent’s own privileges were the attack surface, and in each case the agent did exactly what it was instructed to do, just by the wrong party. A few that illustrate the range of exposure:


•       AI coding agent deletes production database: An AI coding agent deleted a live production database during a code freeze, then fabricated records to conceal the action.


•       AI chat agent OAuth token compromise: Compromised OAuth tokens for an AI chat agent enabled supply-chain data theft from hundreds of downstream companies.


•       AI coding assistant remote prompt injection: A vulnerability in an AI coding assistant allowed hidden instructions embedded in source code to manipulate the agent into exfiltrating code, patched after responsible disclosure.


These are documented failures from production environments, and the organizations involved are early movers who deployed faster than they governed. Every enterprise on a similar trajectory is carrying similar exposure.


Q4: Do you think most organizations are underestimating the risks associated with autonomous AI? If yes, why?


The underestimation is structural, not attitudinal, and it starts at the board level. Most directors broadly understand that AI matters and can speak to the headlines, but they cannot distinguish a real agentic deployment from agent washing or meaningfully probe the risk profile of what their organization is actually running. That gap at the oversight layer would be manageable if AI were being treated as a strategic capability requiring patient capital, but it is mostly being treated as a cost reduction lever, and that framing cascades downward as relentless pressure on CEOs and CFOs to return value to shareholders. In that environment, the controls conversation loses to the velocity conversation almost every time, shadow AI proliferates, and identity governance debt gets stress-tested by a technology that creates non-human identities at machine speed. The bigger strategic risk here is actually not deploying agents at all, because competitors that figure out governed deployment first will compound productivity advantages faster than security-driven laggards can recover, and the organizations still debating whether to start have already lost ground.


Q5: How are traditional security models falling short when it comes to managing AI-driven systems?


Traditional security models were built for a world where identity meant a human, behavior was deterministic, and change was reviewable before it reached production, and agents break all three of those assumptions simultaneously. Multi-factor authentication has no meaningful application against a non-human identity operating without a human in the loop, SIEM baselines built around normal working hours fall apart against systems that run around the clock, and data loss prevention tuned to keyword patterns is trivially defeated by an agent that can chain approved tools to exfiltrate through sanctioned channels. In practice, developers also grant broad access scopes to ship fast, and credential hygiene at the machine identity layer has been failing in most enterprises for years before agents arrived to stress-test it. The control surface has moved from the perimeter and identity layer to the runtime action layer, the point where an agent reaches out to call a tool, touch data, or change state, and security programs that have not rebuilt enforcement there are protecting against last year’s threat model while the actual attack surface runs unmonitored one layer deeper.
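As a sketch of what enforcement at the runtime action layer can look like in practice (a generic illustration of the pattern with invented agent and tool names, not a specific product): every tool call is checked in-line, against that agent's own policy, before it executes.

```python
from fnmatch import fnmatch

# Per-agent policy: which tools may be called, and which resources are off-limits.
POLICY = {
    "support-agent-7": {
        "allowed_tools": {"read_ticket", "draft_reply"},
        "denied_resources": ["prod/db/*"],   # high-blast-radius paths
    },
}

class PolicyViolation(Exception):
    pass

def enforce(agent_id: str, tool: str, resource: str) -> None:
    """Runs in-line before every tool call, no matter how the agent
    reasoned its way to the action."""
    policy = POLICY.get(agent_id)
    if policy is None:
        raise PolicyViolation(f"unknown agent {agent_id!r}")   # not in inventory
    if tool not in policy["allowed_tools"]:
        raise PolicyViolation(f"{agent_id} may not call {tool!r}")
    if any(fnmatch(resource, pat) for pat in policy["denied_resources"]):
        raise PolicyViolation(f"{agent_id} blocked from {resource!r}")

enforce("support-agent-7", "read_ticket", "tickets/4821")        # allowed
# enforce("support-agent-7", "drop_table", "prod/db/users")      # raises PolicyViolation
```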


Q6: What are the biggest challenges companies face in trying to control or monitor autonomous agents?


The foundational challenge is inventory, because you cannot govern what you cannot see, and agents are harder to discover than shadow IT ever was since they get built on personal API keys, run inside developer workflows, and quietly accumulate across business units without anyone maintaining a definitive list. Close behind that is containment: a surprisingly large share of organizations that have deployed agents cannot actually stop one mid-action when it begins to misbehave, and without a runtime policy engine or fast enough credential revocation, every agent deployment becomes an asymmetric bet with bounded upside from automation and unbounded downside if something goes wrong. Attribution is the third problem, because when agents share credentials, which they often do since developers default to the path of least resistance, there is no way to tie a specific action back to a specific agent, and in multi-agent workflows there is no mature standard for one agent to cryptographically verify another’s identity and scope. Explainability rounds it out: when an agent takes an action, most organizations cannot produce a reasoning trace that answers the basic question of why, and that will matter enormously to auditors and regulators. None of these are exotic problems, but they do require treating agents as a new class of actor rather than another application to slot into an existing security stack.


Q7: How can organizations start building better governance frameworks for AI agents today?


Start with discovery: a full inventory of every agent, every MCP server, and every non-human identity tied to AI systems, each mapped to a named human owner, because organizations that skip this step build governance on sand. From there, anchor on a clear set of standards rather than getting stuck debating frameworks: NIST AI RMF or ISO/IEC 42001 for the enterprise governance spine, OWASP ASI 2026 as the threat taxonomy for engineering and red-teaming, and AIUC-1 as the assurance bar for agents you procure or ship. Every agent should be designed for containment from day one with scoped credentials, time-bound tokens, an explicit tool allowlist, and a runtime kill switch, with policy enforcement operating at the action layer where every tool call is evaluated in-line and high-blast-radius actions require a human in the loop. Behavioral telemetry capturing reasoning traces, tool calls, inputs, outputs, and memory state needs to be standard practice, because without it there is no credible incident response capability when something goes wrong. The organizations that get this right will treat agent governance as a permanent operating capability rather than a project with an end date.
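Two of those containment primitives, time-bound tokens and a runtime kill switch, can be sketched in a few lines (an illustrative toy assuming an in-memory store; a real deployment would lean on the identity provider's own token issuance and revocation machinery):

```python
import time, secrets

TOKENS = {}      # token -> (agent_id, scopes, expiry)
KILLED = set()   # agents whose credentials are revoked mid-flight

def issue_token(agent_id, scopes, ttl_seconds=900):
    """Least-privilege, short-lived credential that expires on its own."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = (agent_id, frozenset(scopes), time.time() + ttl_seconds)
    return token

def kill(agent_id):
    """Runtime kill switch: takes effect on the agent's next tool call."""
    KILLED.add(agent_id)

def authorize(token, required_scope):
    entry = TOKENS.get(token)
    if entry is None:
        return False
    agent_id, scopes, expiry = entry
    if agent_id in KILLED or time.time() > expiry:
        return False                          # revoked or expired
    return required_scope in scopes

t = issue_token("report-agent", {"crm:read"})
assert authorize(t, "crm:read")               # scoped access works
assert not authorize(t, "crm:write")          # outside granted scope
kill("report-agent")
assert not authorize(t, "crm:read")           # stopped mid-action
```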


Q8: Are there specific industries that are more exposed to these risks than others?


Exposure does not track cleanly to the industries most people assume, and the sectors most at risk right now are the ones under the greatest economic pressure to adopt AI fast, which cuts across industries that have historically been quite cautious. Retail, consumer tech, logistics, and high-volume service businesses combine high agent volume, high customer data exposure, and intense margin pressure to deploy ahead of the competition, and when the board message is “move fast or lose to someone who will,” governance discipline is typically the first thing that slips. Traditional high-regulation industries carry real exposure too but for different reasons: financial services face autonomous transactions under heavy regulatory scrutiny, healthcare combines patient data with clinical decision-making where an agent error can translate to patient harm, and critical infrastructure is where agent compromise moves beyond data loss into life safety territory. Software and SaaS providers carry a particularly sharp version of supply-chain risk, where a single compromised agent can cascade to hundreds of downstream customers, which is a pattern we have already seen play out in real incidents. The common factor is the intersection of economic pressure, data sensitivity, regulatory weight, and blast radius, and any organization sitting at two or more of those dimensions should be treating this as a board-level risk rather than a technology program.


Q9: What role should cybersecurity teams play in shaping AI adoption strategies?


Security needs to operate as a co-architect of AI adoption rather than a gatekeeper at the end of it, because the gatekeeper model is precisely how organizations end up with shadow AI, surprise deployments, and a governance posture that is always reacting to decisions already made. In practice that means security is in the room for use-case selection, model selection, and architecture from day one, publishing a paved road of approved models, vetted servers, pre-built identity templates, and sanctioned architecture blueprints that makes the secure path the easy path. It also means tiering autonomy by risk so low-risk agents move through self-service while high-blast-radius agents get the scrutiny they deserve, and using AI to govern AI through runtime policy engines and automated red-teaming, because manual review will not scale to agent volume. The CISOs winning this cycle are the ones making the case clearly to their boards that slow traditional review is not the safer choice, it is the choice that drives deployment underground where there is no visibility at all.


Q10: Looking ahead, what are the key steps enterprises should take now to safely scale autonomous AI?


Before scaling anything, get the foundations right: build a real inventory of agents, MCP servers, non-human identities, and model dependencies, and organizations that cannot produce that list today should pause new deployments until they can, because the goal is to make sure adoption is happening on a surface you can actually see. In parallel, pick a governance spine and stop debating frameworks, with AIUC-1 as the most directly relevant anchor given it is the first standard written specifically for AI agent security, safety, and reliability, layered with OWASP ASI and NIST AI RMF as your regulatory posture requires. On the controls side, deploy runtime policy enforcement in-path between agents and the tools they call, rebuild the identity layer on time-bound tokens and least-privilege scoping, and capture behavioral telemetry to a dedicated AI observability platform, because agent governance without identity governance is theater. Strategically, architect for a world where agents are the default actors, which means guardian agents monitoring peers for drift, multi-agent architectures that assume one agent in the chain will be compromised, and signed inter-agent messages with explicit trust boundaries. The enterprises that win this cycle will be the ones that can demonstrate governed adoption is faster and more durable than ungoverned adoption, because if security cannot make that case clearly, the argument is lost before it starts.
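On signed inter-agent messages, a minimal sketch shows the shape of the idea (using a shared-key HMAC from Python's standard library for brevity; production systems would more likely use asymmetric signatures and proper key distribution): the receiving agent verifies the sender's identity and claimed scope before acting on anything.

```python
import hmac, hashlib, json

AGENT_KEYS = {"planner-agent": b"demo-secret-key"}   # illustrative key registry

def sign_message(sender, scope, payload, key):
    body = json.dumps({"sender": sender, "scope": scope, "payload": payload},
                      sort_keys=True).encode()
    return body, hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_message(body, sig):
    msg = json.loads(body)
    key = AGENT_KEYS.get(msg["sender"])
    if key is None:
        return None                                   # unknown agent: reject
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                                   # forged or tampered
    return msg                                        # trusted sender and scope

body, sig = sign_message("planner-agent", "crm:read",
                         {"task": "fetch accounts"}, AGENT_KEYS["planner-agent"])
assert verify_message(body, sig) is not None          # verifies cleanly
assert verify_message(body.replace(b"read", b"write"), sig) is None  # rejected
```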
 



One last thought worth leaving readers with: the real risk is not that agents will be attacked in the traditional sense – it’s that they will do exactly what they were asked to do, in a way no one anticipated, at machine speed, across systems no one mapped. Build the controls for that reality, not the one in the marketing deck.

From Fragmented Martech Stacks to Unified Data Platforms as a foundation for AI

marketing 30 Apr 2026

Q1. The industry is clearly moving away from fragmented martech stacks. What are the main limitations you've observed with traditional setups involving DMPs, CDPs, and data clean rooms?


These tools were never designed to work together; they were built to solve different problems for different segments of the media industry at different points in time. DMPs were built mainly for publishers navigating the third-party cookie era. CDPs came along to fix the single-customer-view problem for brands internally. Data clean rooms were adopted in response to signal loss across the board by brands, publishers, and retailers alike. So you’re looking at three separate architectures, three vendor relationships, three data pipelines.


What we hear constantly from publishers and retailers is that stitching these together creates enormous operational drag. Every handoff between tools is a point of latency, a potential compliance risk, and a cost center. And because none of them were built with collaboration in mind from the start, the moment you try to do something cross-party (enrichment with a partner's data, joint measurement, audience activation beyond your own properties, etc.) you hit a wall. The stack simply wasn't designed for the collaboration era, and even less for AI.

 

Q2. What is driving organizations to adopt more unified and flexible data platforms today, and how urgent is this shift?


Three pressures are converging simultaneously, which is what makes this moment feel different from earlier transitions.


First, regulation has fundamentally changed what's permissible. GDPR and a growing body of case law have made clear that moving customer data freely between systems is over: organizations need technical guarantees, not just contractual ones, for hassle-free and fast collaboration. Second, the signal environment has deteriorated: third-party cookies are declining, and universal identity solutions have helped at the margins but haven't filled the gap. Third — and most importantly — the value of first-party data is now demonstrably tied to collaboration. Data sitting in one organisation's DMP is interesting. Connected to a brand's CDP or a retailer's transaction history, it becomes genuinely powerful.


The media players moving now are building structural advantages. Those waiting are watching legacy DMP contracts come up for renewal with no clear answer for what replaces them.

 

Q3. From your perspective, what does a truly "unified" data platform look like in practice, beyond just integrating multiple tools?


"Unified" gets used to mean fewer vendor logos on a slide. That's not what I mean in this case necessarily.


A truly unified platform is one where the architecture was designed from the start for collaboration and privacy with the goal of creating networks between data owners, not just optimising data within a single organisation. When a CDP or DMP adds a clean room module, the privacy guarantees are only as strong as the wrapper. Additionally, you don't necessarily inherit any network here either, meaning each partnership might have to be built from scratch.


At Decentriq, we started from the opposite direction. Our clean room uses confidential computing: hardware-level encryption where data remains protected during processing, even from us. Using that as a foundation, we built the Collaborative Audience Platform: a unified layer adding CDP- and DMP-style capabilities — segmentation, identity resolution, activation, shared audience products. In practice, a publisher can collect data, build and enrich audiences, activate to GAM or DSPs, run closed-loop measurement, and refresh automatically all in one environment, with no seams between layers. That's what genuinely unified looks like.

 

Q4. Many companies still rely on stitching together multiple solutions. Where do these approaches typically fall short when it comes to scalability and efficiency?


The failures tend to only become visible at scale, which is precisely when they're most painful.


The first is the identity tax. Every time data moves between tools, you make assumptions about identity resolution. If your system can only handle one ID type, you can lose a significant portion of your audience during matching. The second is engineering overhead: stitched integrations need constant maintenance, and onboarding each new partner is its own project, meaning there is a hard ceiling on how many collaborations you can run in parallel. The third, which comes up in almost every conversation with publishers replacing their DMP, is the inability to operationalize collaboration at scale. One-off clean room projects are feasible. Repeatable, automated, always-on audience collaboration with multiple partners simultaneously is a different problem (and stitched stacks weren't designed for it).
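The identity tax is easy to see in a toy example (a generic sketch of multi-key matching with invented, hashed identifiers; not Decentriq's actual resolution logic): a join that accepts any shared identifier type recovers matches that a single-ID join silently drops.

```python
# Each record may carry several hashed identifiers; all values are illustrative.
publisher = [
    {"email": "h1", "phone": None, "maid": "m9"},
    {"email": None, "phone": "p4", "maid": None},
    {"email": "h7", "phone": None, "maid": None},
]
advertiser = [
    {"email": None, "phone": "p4", "maid": "m9"},
    {"email": "h7", "phone": "p2", "maid": None},
]

ID_TYPES = ("email", "phone", "maid")

def match(a, b, keys):
    """Count records in `a` sharing at least one identifier with any record in `b`."""
    index = {(k, r[k]) for r in b for k in keys if r[k]}
    return sum(any((k, r[k]) in index for k in keys if r[k]) for r in a)

print("email-only matches:", match(publisher, advertiser, ("email",)))   # 1
print("multi-ID matches:  ", match(publisher, advertiser, ID_TYPES))     # 3
```

The single-ID join finds one of the three matchable users; in a stitched stack, that loss is paid again at every tool boundary the data crosses.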

 

Q5. How is this shift impacting data collaboration between brands, publishers, and retailers in real-world scenarios?


The most significant change is the move from one-to-one integrations to network-based collaboration, because this changes the economics of data entirely and provides a crucial foundation for AI.


In the old model, a publisher ran a bespoke clean room project with one advertiser at a time. High cost, limited scale. A platform model enables something fundamentally different: standardised, repeatable collaborations across a growing network simultaneously. We've seen this with OneLog in Switzerland using our technology: five publishers unified under a single audience monetization platform, enabling advertisers to plan, activate, and measure across their combined audiences.


We're seeing the same dynamic for retailers. Decentriq's Collaborative Audience Platform lets them build audiences from online and offline signals and activate with brands and premium publishers (including CTV) without raw transactional data ever leaving their control. For brands, this means accessing publisher and retailer audience data through a standardized, privacy-safe workflow instead of negotiating lots of separate agreements.

 

Q6. Privacy and compliance remain key concerns. How do modern unified platforms address these challenges more effectively than legacy martech stacks?


Legacy stacks address privacy primarily through contracts — data processing agreements, retention policies. These are necessary but not sufficient. Contracts tell you what should happen; they don't technically prevent what shouldn't.


Decentriq uses confidential computing as the central technology for data collaboration: a hardware-level approach where data is processed inside a secure enclave inaccessible to any party, including us. The privacy guarantee is technical, not contractual. A significant recent CJEU ruling validated exactly this approach, clarifying that pseudonymised data processed through technology where re-identification is technically impossible carries a different compliance profile than data protected only by agreement.


For organizations navigating GDPR, this shifts the burden dramatically: instead of documenting every data flow and relying on ongoing contractual enforcement, you can demonstrate provable technical compliance. That's increasingly what regulators, legal teams, and enterprise procurement are demanding.

 

Q7. What role does AI and automation play in enabling more seamless and actionable data collaboration within these new ecosystems?


The critical point is where AI runs. AI operating on raw data is a privacy risk. AI operating inside a confidential computing environment — on data that is never exposed — is a fundamentally different proposition.


At Decentriq, AI is embedded at several levels: lookalike modelling that extends a seed audience without either party revealing their underlying data (a luxury automotive brand saw +80% engagement and +58% conversion rate using this, for example), audience size estimation before a segment is built, and automated refresh cycles that keep audiences current across partners without manual intervention. 
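As a rough sketch of what lookalike modelling means mechanically (a generic illustration with invented features and numbers, not Decentriq's implementation, which runs inside the clean room): fit a classifier that separates the seed audience from a background sample, then rank candidate users by predicted similarity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented engagement features: seed users skew high on both dimensions.
seed       = rng.normal(loc=1.0, scale=0.5, size=(200, 2))   # known converters
background = rng.normal(loc=0.0, scale=1.0, size=(2000, 2))  # general pool

X = np.vstack([seed, background])
y = np.array([1] * len(seed) + [0] * len(background))

model = LogisticRegression().fit(X, y)

# Score a fresh pool of candidates and keep the closest lookalikes.
candidates = rng.normal(loc=0.2, scale=1.0, size=(5000, 2))
scores = model.predict_proba(candidates)[:, 1]
lookalikes = candidates[np.argsort(scores)[-500:]]            # top 10% by similarity
print(f"expanded audience: {len(lookalikes)} of {len(candidates)} candidates")
```

In a clean-room setting, the point is that this fitting and scoring happens on protected data: neither party sees the other's underlying records, only the expanded audience.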


Further out, the more AI is integrated into these environments, the more the collaboration network itself learns — from joint activations, measurement results, and partner interactions — rather than resetting with each new campaign. That's the direction this is heading.

 

Q8. Looking ahead, what key changes do you expect in how organizations approach data infrastructure and collaboration over the next 2–3 years?


Three shifts feel clear.


First, stack consolidation. Organisations running separate DMPs, CDPs, and clean rooms will consolidate around platforms that do two, if not all three, natively. The maintenance cost, compliance complexity, and operational drag will drive that decision.


Second, the ecosystem model becomes the norm. The value of first-party data is increasingly defined not by how much you have, but by how well it connects. Publishers contributing audiences to a collaborative network unlock revenue that's unavailable to those working in isolation. Retailers whose data can activate across a premium publisher network and close the loop with sales measurement are in a completely different competitive position. That logic will only accelerate. And as AI becomes more deeply embedded in these workflows, the network itself becomes a training asset: the more data flows through a shared collaborative infrastructure, the smarter and more precise the models that power lookalike targeting, audience estimation, and measurement become. Isolated stacks simply can't compete with that.


Third, privacy-preserving infrastructure shifts from differentiator to baseline expectation. Confidential computing and hardware-level privacy guarantees are currently seen as advanced or optional. In 2–3 years, driven by regulation, enterprise procurement standards, and demonstrated risk of alternatives, they'll be standard requirements. The organisations betting on these foundations now will be ahead of that curve rather than catching up to it.

AI Made PR and Marketing Work Faster. But It Didn’t Fix Your Biggest Inefficiency.

marketing 23 Apr 2026

By Carey Madsen, VP and CMO, The Fletcher Group


94% of B2B buyers now use AI during the buying process, and most marketers are working hard to insert their brands into those buyer recommendations. But you’re probably making it harder than you need to.  

Here’s a scenario that plays out every day in B2B: a company earns a strong media placement in a respected trade publication. The story is sharp, well-positioned, and reaches the right audience. Then it disappears. Posted once on LinkedIn, shared internally, and forgotten. Sales never sees it. The website never references it. No one writes a follow-up post that builds on the insight. The executives who could have amplified it don’t.

This is what happens when PR and marketing operate in silos. Coverage and content don’t travel far, and in 2026 that has consequences that go beyond missed amplification. It affects how often your brand appears in AI-generated answers.

The way B2B buyers research and evaluate vendors has changed fundamentally over the past two years. Buyers no longer follow a neat funnel. They may read a trade article, which prompts a question, so they ask ChatGPT or Claude. The answer frames their next steps, which might include a visit to your website to read an FAQ or case study, to an industry report, or to a competitor’s site instead.

If your messaging isn’t aligned and repeated across these channels, you haven’t made your brand known, and it’s difficult for buyers to find you because they don’t know what you solve for. In a nutshell, vague messaging gets skipped, while consistent messaging gets cited.


How Do B2B Buyers Research Vendors in 2026?


Forrester’s 2026 State of Business Buying report shows that purchasing is more collaborative and more dependent on validation from trusted sources than in previous search eras. Buyers rely on what Forrester calls a “buying network” — internal stakeholders plus analysts, peers, and earned media — to validate what they learn from any single channel, including AI tools.

The Forrester data paints a clear picture of just how early these decisions are forming:

·       92% of B2B buyers enter the process with at least one vendor in mind, and 70% of the journey happens before sales engagement 

·       9 out of 10 C-suite decision makers say they are more receptive to thought leadership than traditional marketing materials

·       94% of buyers use generative AI during the buying process, but 20% report inaccuracies—leading them to validate AI outputs against third-party sources 

Buyers use AI as a data point, then confirm what they find through media, analysts, LinkedIn, and your owned content. If your brand shows up in only one of those places, you’re missing other essential validation opportunities.


Why Do LLMs Favor Brands with Multi-Channel Presence?

This is where buyer behavior and AI visibility intersect. LLMs pull from media coverage, brand content, social conversations, and third-party validation to shape the answers buyers see. Brands that appear across more source types tend to be cited more often and with more context.

The rules of AI-fueled search are evolving in real time, but several patterns are already clear enough to act on:

·       Earned media drives the majority of AI citations. Muck Rack found that 82% of citations come from earned sources

·       Brand search volume is a stronger predictor of AI citation than traditional SEO authority like backlinks 

·       LLMs do not share the same resource pools, so appearing on a wide range of relevant channels—owned, paid and earned—is necessary to be cited by all the most popular LLMs

In practice, this means disconnected or incomplete efforts across PR and marketing teams create visibility gaps that competitors can fill. When PR, content, and executive visibility aren’t aligned, you reduce the number of trusted signals AI systems rely on.


How Does One Asset Become Five?

The real value of integration is making one success work four times harder. This helps large companies absolutely dominate their space and lets smaller firms punch above their weight through efficient use of resources.

Here’s what that looks like in practice. Take a single starting point: your company releases original data or research on a trend that matters to your buyers.

•      Earned: The research is pitched to key trade publications and tier 1 business outlets. Stories are published, your CEO is quoted with a distinctive point of view.

•      Owned: The research becomes an un-gated blog post and report on your website, structured with clear headers, FAQ sections, and schema markup so both Google and LLMs can parse it effectively (see the markup sketch after this list). Key data points are formatted as standalone, citable claims that start showing up in other earned media.

•      Shared: Your CEO and other executives post their own take on LinkedIn — not identical reshares, but distinct perspectives that create multiple entry points for key audiences. The company page amplifies with a summary post linking to the blog.

•      Third-Party/Paid: A LinkedIn sponsored content campaign targets decision-makers in your key verticals. An analyst briefing results in an informed industry expert that validates the narrative for media and prospect inquiries. The research serves as the foundation of a presentation or webinar at an industry event.
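For the schema markup mentioned in the Owned step above, here is a minimal sketch of what the report page could embed, using schema.org's FAQPage type (the question and answer strings are placeholders built from the stats cited earlier):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How many B2B buyers use AI during the buying process?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "94% of B2B buyers use generative AI during the buying process, but 20% report inaccuracies and validate AI outputs against third-party sources."
    }
  }]
}
</script>
```

Structured blocks like this give both search crawlers and LLM retrieval pipelines a standalone, citable claim to lift.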


Does Integrated PR and Marketing Require a Large Budget?

No. In fact, smaller teams are often better positioned to do this well from day one, because they can’t afford to be spread too thin. Even some larger brands can’t activate all channels at scale, and trying to do everything at a surface level is worse than doing two things well. But whatever you do invest in, do it well, and set your campaigns up to compound across channels rather than exist in isolation. 

A single earned media placement that nobody amplifies, repurposes, or references on your website is a missed opportunity — and that’s true whether your budget is $50,000 or $500,000. A blog post that answers a question your buyers are asking but never gets shared by an executive or promoted to a targeted audience is content that only works in one way, instead of four or five.

Integration is a mindset about how assets get used, not a mandate to spend more. Start with what you have. Make each piece of content and each media win work across every channel you can reach. 


The Outcome: Consistent Presence Where Buyers Look

The B2B buyer’s journey is no longer a path you control. It is now made up of a network of sources — and increasingly, a network that AI tools reference on their behalf.

When PR, content, social, and paid efforts work together, your brand appears more consistently across those sources. That consistency builds consensus, and ultimately, trust.


Reinventing the AI Supply Chain: Inside JFrog and NVIDIA’s trust layer for AI Agents

artificial intelligence 22 Apr 2026

 Yashaswi Mudumbai, Senior Director of Solutions Engineering, APAC, JFrog

Q1: JFrog has announced a new integration with NVIDIA around agentic AI. What problem is this solving and why is it becoming critical now?

At the core, this solution closes a growing trust gap. As AI evolves from copilots to autonomous agents that can access systems, data, and tools, those agents require stronger governance than traditional software pipelines can provide. The risk is real: just as a malicious software package can compromise an application, an unvetted skill can guide an agent to perform harmful actions.

In an agentic environment, it is now about governing skills, models, MCP services, and other agentic assets that can directly influence how AI behaves in production. 

This is critical because AI agents are moving from experimentation into real enterprise workflows. JFrog’s new Agent Skills Registry, with early integration with NVIDIA, is designed to provide the missing trust layer required for autonomous AI workforces to operate safely at enterprise speed and scale.

As a secure system of record for skills, models, MCPs, and agentic binary assets, JFrog provides a single source of truth for rigorously scanning and governing those assets, which NVIDIA’s NemoClaw then executes in highly isolated sandboxes with zero initial permissions. This ensures every skill is approved and safe for use at enterprise scale.

Enterprises cannot rely on blind trust; they need a way to verify which agents and assets are being used, where they come from, and whether they comply with internal policies before agents can operate at scale.

Q2: Many Australian organisations struggle to move AI projects from pilot to production due to security and compliance concerns. How does this joint solution with NVIDIA help bridge that gap? 

One of the biggest barriers to scaling AI is that innovation often outpaces governance. Teams build pilots and test models, but when it comes to deploying them into production, questions around security, compliance, and accountability slow everything down. 

The partnership between JFrog and NVIDIA helps put structure around that process, giving organisations a centralised way to manage all the components that power AI agents, from models to connectors to reusable skills, while ensuring they meet enterprise standards before they are deployed.

Instead of relying on fragmented tools or manual approvals, organisations can automate checks, enforce policies, and maintain visibility across the entire lifecycle. That makes it much easier to move from experimentation to production without introducing unmanaged risk. 

Q3: As AI adoption accelerates globally, how is the concept of an “AI Supply Chain” evolving compared to traditional software pipelines, and how is Australia responding?

The AI supply chain is fundamentally different from traditional software delivery. In the past, organisations were managing relatively static components like code and packages. Now they are dealing with dynamic elements such as models, datasets, prompts, and agent behaviours. 

Because AI systems now adapt and act independently, organisations need to track not only what goes into an application but also how it behaves once deployed. In Australia, we’re seeing a strong emphasis on governance and accountability as part of this shift, particularly as organisations align with the Australian Government’s AI in Government Policy and broader responsible AI frameworks that emphasise transparency, accountability, and safe deployment.

Enterprises are recognising that adopting AI at scale requires visibility, traceability, and control, particularly in an increasingly regulated marketplace.  

Q4: Australia is seeing growing enterprise investment in AI, particularly across sectors like financial services and government. What specific risks or opportunities do you see for Australian organisations adopting agentic AI?

When agents are given access to internal systems, data, and workflows, any gap in oversight can lead to serious consequences, from data exposure to compliance breaches. There is also a growing concern around ‘shadow AI,' where teams adopt tools or models outside of approved processes. This creates blind spots for security and governance teams, making it difficult to understand what is actually running inside the organisation. 

For Australian enterprises, especially those operating in regulated environments, the priority is to ensure that innovation is matched with strong controls from the outset. Those that get this balance right have a clear opportunity to build a trusted AI and software supply chain that not only reduces risk, but also accelerates speed to market by giving teams the confidence to scale AI safely and consistently. 

Q5: Trust and governance are emerging as major concerns for enterprises deploying AI agents. How does JFrog’s new Agent Skills Registry address these challenges in practical terms?

JFrog’s Agent Skills Registry is designed to bring order to what is otherwise a highly fragmented landscape. It acts as a central point where organisations can manage the different components that AI agents rely on. 

This means every skill or asset can be inspected, validated, and approved before it is made available for use. It also allows organisations to define who can access what and under what conditions, ensuring that agents operate within clearly defined boundaries.

Importantly, it creates an audit trail, enabling organisations to track where assets came from, how they were used, and whether they meet compliance requirements. That level of visibility is essential for building trust in systems that are becoming autonomous. 

On the execution side, NVIDIA’s NemoClaw then runs each agent in an isolated, virtual environment, sandboxed with zero initial permissions. Thus, even if a skill were to behave unexpectedly, it cannot affect broader systems or trigger network-level risk.

Q6: For developers and engineering teams in Australia, how can they balance strong governance with the need to innovate quickly when building and deploying AI agents?

The goal is to embed governance into the workflow rather than treat it as a separate step. If security and compliance rely on manual reviews, they will always slow teams down. 

Instead, organisations should focus on automating these controls. By providing developers with access to pre-approved, trusted components, they can move quickly without needing to navigate complex approval processes each time.

This approach allows teams to maintain speed while ensuring that everything they use has already been vetted. For Australian organisations, particularly those under regulatory pressure, this balance between agility and control is critical to scaling AI successfully.


Creative Over Signals: Rethinking Attention as Performance Across Omnichannel Advertising

marketing 20 Apr 2026

Author: Jonathan Frohilinger, Founder and CEO of Big Happy
 

How is fragmentation across DOOH, mobile, and retail media impacting marketers today?

Fragmentation across DOOH, mobile, and retail media has created a more complex, noisy landscape, with countless data signals and platforms all competing to drive performance. But marketers are starting to realize that signals alone aren’t enough. Without strong creative, optimization falls flat. The brands that stand out are the ones using creative to cut through the noise and deliver messages that actually connect with people in the moment.


What are the biggest challenges brands face when trying to unify these channels?

It’s too many moving parts and not enough cohesion. You’ve got different teams handling creative, media, and measurement, and they’re not always working in sync. Even if the idea is strong, it can break down in execution. The opportunity is simplifying that process so the idea carries through instead of getting lost between partners. 


How can advertisers create a more seamless omnichannel experience across these touchpoints?

It comes down to continuity. The experience should feel like one idea moving across channels, not separate campaigns stitched together. If someone sees something in DOOH, there should be a natural next step on mobile. When that flow is intentional, it feels less like advertising and more like something that actually makes sense to engage with.


What role does data play in bridging the gap between DOOH, mobile, and retail media?

Data is important, but there’s almost too much of it now. Everyone has access to similar signals, similar targeting, similar optimization. The real value is using data to support the experience, not define it. When it’s used correctly, it helps connect exposure to action, but it can’t replace what actually makes someone pay attention in the first place, which is the creative.


Are there specific technologies or platforms helping reduce fragmentation effectively?

The ones that work bring everything closer together: creative, distribution, and measurement. Speed is a big part of that. If it takes months to build something and then longer to get it live, you’ve already lost the moment. The shift is toward systems that can move faster, keep everything connected from the start, and deliver results in days.


How can brands better measure ROI across multiple channels without siloed data?

It’s about looking at the full path, not individual channels. When you connect DOOH exposure to mobile engagement and real-world behavior, you start to see how everything works together. That’s when measurement becomes meaningful instead of just reporting on isolated pieces.


What trends are you seeing in programmatic advertising across DOOH and mobile?

Programmatic is evolving from just automating delivery to actually connecting channels. DOOH is becoming more measurable, mobile is capturing that follow-on behavior, and together they’re starting to show real lift when used properly. The more those pieces work together, the more effective the system becomes.


How important is personalization when connecting retail media with DOOH and mobile campaigns?

Personalization matters, but it’s less about over-targeting and more about relevance. If you’re in the right place at the right time with something that actually resonates, that’s what drives action. Overcomplicating it with too many variations or signals can actually make things harder to execute effectively.


What advice would you give to marketers just starting to integrate these channels?


Start by understanding where your audience is in the real world and how your brand can show up meaningfully in those everyday moments. Then focus on creative that is contextually relevant, not just reaching people but actually capturing attention and leaving an impression. Most importantly, treat these touchpoints as opportunities to create engaging, memorable experiences that bring a bit of energy and delight to their day.


Looking ahead, how do you see the future of unified advertising across DOOH, mobile, and retail media evolving?

It will be less about channels and more about how quickly you can move from capturing attention to driving action. The signals will continue to look more similar across platforms, so the difference comes down to what actually makes someone stop and engage. The advantage will sit with brands that can do that consistently and move quickly across environments.

Where Brands Become Experiences: The Rise of Experiential Retail Spaces

marketing 20 Apr 2026

1. How are Gen Z and Gen Alpha changing the traditional retail mall experience?


Gen Z and Gen Alpha don’t see malls as places to shop. They see them as places to experience, linger, socialize and find joy. For them, physical spaces are social platforms. They’re coming for discovery, content creation, and shared moments, not just transactions.


Gen Z expects environments that are dynamic, immersive, and constantly evolving. They want spaces that give them something to do, content to capture, and experiences to share. That fundamentally shifts the role of the mall from retail destination to cultural stage.


2. What inspired Westfield to transform malls into broadcast and experiential hubs?


We recognized a simple truth: Attention has fragmented, but physical environments and tangible experiences still command attention at scale if you design them correctly.


Our properties sit at the intersection of commerce, culture and community. By evolving them to operate at their full potential, they evolve from places people visit to platforms brands can activate and amplify. It’s about turning passive foot traffic into active audience engagement.


3. Can you explain the idea of “the physical environment as media”?


Traditionally, media has been something you place into an environment. We’re flipping that.


The environment itself becomes the media channel. Every surface, every screen, every spatial moment is an opportunity for storytelling. Instead of interrupting people, brand moments become part of the experience. You’re not just seeing a campaign: you’re actually inside it. That creates deeper emotional resonance and dramatically increases memorability.


4. How do creator-led launches and live cultural events fit into this new strategy?


Creators are today’s cultural distributors. They don’t just amplify moments—they define them.


By operating spaces that are production-ready and broadcast-capable, we allow brands to launch products, host premieres, and stage cultural moments that live simultaneously in the physical and digital worlds.


A creator-led launch at Westfield doesn’t just reach the audience in the room; it cascades across social platforms in real time, turning a single event into a global moment. A great example of this is the BTS x Arih retail ramen launch that took place the weekend of April 10th, which was shared widely across TikTok and Instagram feeds.

 

5. What makes Westfield Century City’s new space unique compared to traditional malls?


Westfield Century City is purpose-built for this new era. It’s not retrofitted—it’s designed from the ground up as a hybrid of venue, media platform, and cultural hub. Our hero event space in LA, The Atrium, acts as the “town square of LA,” with integrated infrastructure that supports large-scale productions, premieres, and brand activations seamlessly.


What makes it unique is the convergence: event space, high-impact media, and audience density all working together in one cohesive ecosystem.


6. Could you tell us more about The Centurion and its role in this transformation?


The Centurion is a defining example of where retail and media are headed. It’s not just a screen—it’s a broadcast surface engineered for live moments, real-time content, and cinematic storytelling. With high-resolution LED, optimal sightlines, and integrated production capabilities, it enables brands to create experiences that feel more like live entertainment than advertising.


Its role is to anchor the entire media ecosystem at Century City, turning the space into a stage where brands can premiere and participate in culture.


7. How does real-time impression measurement benefit brands and advertisers?


Measurement is what turns experiential from art into science. With real-time analytics like dwell and attention time, we can quantify impact in ways that weren’t possible before. Brands can understand not just how many people saw something, but how they engaged with it. This capability closes the loop between physical experience and performance marketing, making experiential a measurable and scalable channel.


8. Why are social-first and viral activations becoming so important in retail spaces?


Because the true audience is no longer limited to who’s physically present. The most successful activations today are designed to travel—visually, emotionally and culturally. If it doesn’t translate beyond the four walls, it isn’t scalable.


We think about every experience through a dual lens: how it feels in person and how it performs afterwards. When you get both right, you create exponential reach.


9. What impact will these changes have on brand storytelling and customer engagement?


Brand storytelling is becoming more immersive, more participatory, and definitely more immediate.


Instead of telling consumers what a brand stands for, we’re creating environments where they can experience it firsthand. That shifts engagement from passive consumption to active involvement.


The result is stronger emotional connection, higher recall, and ultimately, greater brand affinity. We’re seeing this across categories, but especially in entertainment. As just announced – Apple TV+ is hosting a truly immersive two weekend-long activation “Think Apple TV" (April 23-April 26 and April 30-May 3) featuring interactive experiences from a lineup of series including: Pluribus, Margo’s Got Money Troubles, The Morning Show, Shrinking, Your Friends & Neighbors, Imperfect Women, Slow Horses, and Stickfan. The installation will offer fans the opportunity to experience their favorite shows up close like never before.


10. What future trends do you see shaping the next generation of retail experiences?


We’re moving toward a world where retail, media, and entertainment fully converge. You’ll see more real-time, adaptive content—experiences that change based on audience behavior. More integration with creators and communities. More seamless connections between physical spaces and digital ecosystems, from AR to live streaming to commerce.


And most importantly, you’ll see a continued shift toward purpose-built environments—spaces designed not just to sell products, but to host culture. The future of retail isn’t about transactions. It’s about moments, and the brands that create them will win.

AI may shape the search, but retail media still wins the sale

marketing 17 Apr 2026

By Brendan Straw, Country Manager, Shopfully Australia


AI is rapidly changing how Australians shop. It is making product discovery faster, easier and more personalised, and giving consumers new ways to compare prices, assess options and narrow their choices.


But discovery is not the same as conversion. Shopfully’s 2026 State of Shopping research shows that while AI is becoming a powerful tool for comparison and recommendation, the final purchase decision still depends on something more immediate: whether the product is available nearby, competitively priced and relevant in that moment. That is where retail media matters most.


AI is changing how shoppers discover


Australian shoppers are more deliberate than ever. They are price-conscious, research-driven and increasingly comfortable using digital tools to stay in control of what they spend.


Digital has long played a role in this behaviour, with around 81% of Australians researching products online before purchasing in-store. AI is now accelerating this shift. It removes friction from the research phase, giving shoppers instant access to comparisons, tailored recommendations and real-time alternatives.


Our research found that 71% of shoppers are already using AI to compare prices across retailers, 43% use it to track price drops, and 38% rely on AI for personalised product recommendations. For retailers, this wider top of funnel creates a more competitive and complex path to purchase. 


Retail media is where decisions are won


As AI expands the discovery phase, it also makes it easier for shoppers to delay commitment. Shopfully’s research shows 67% of shoppers are spreading their spend across multiple retailers to secure better value. Loyalty is weaker, comparison is easier, and the route to purchase is no longer linear.


That creates a new challenge for retailers. It is no longer enough to be discovered. Retailers also need to win the decision at the point where a shopper is ready to act.


Retail media plays that role by turning intent into action. It gives retailers a way to reach high-intent shoppers with information that is locally relevant and immediately useful: whether a product is in stock nearby, available at the right price or backed by a timely promotion. In a shopping journey shaped by AI, those details are often what close the sale.


Turning AI‑led discovery into real‑world sales


To convert AI-led discovery into sales, retailers need to focus on three things.


First, they need to be visible at the moment of decision, not just the moment of discovery. As shoppers compare options across channels, brands must show up when consumers are actively weighing where to buy.


Second, real-time retail data needs to be connected to that experience. Inventory, local store availability and current pricing should not sit in separate systems if retailers want shoppers to move from interest to action quickly.


Third, promotions need to be more dynamic and more relevant. In an environment where shoppers can compare alternatives instantly, generic messaging is easier to ignore. Retailers need offers that reflect intent, timing and location if they want to convert consideration into purchase.


Just as importantly, the path from digital discovery to store visit must feel seamless. The less friction there is between finding a product and buying it, the more likely a shopper is to convert.


The future belongs to retailers who can bridge the gap


AI is reshaping the top of the funnel, but it is also making competition harder. The easier it becomes for shoppers to compare products, prices and retailers, the harder it becomes to win a sale on visibility alone.


The retailers that succeed will be those that can bridge discovery and decision making. AI may influence what shoppers consider, but retail media is what helps turn that consideration into action with the right offer, in the right place, at the right time.
   

REQUEST PROPOSAL