Deloitte Launches Dot Good to Bring Practical AI to Resource-Strapped Nonprofits

artificial intelligence 9 Jan 2026

As artificial intelligence reshapes how organizations plan, operate, and measure impact, nonprofits are at risk of being left behind—not for lack of vision, but for lack of resources. Deloitte is betting it can help close that gap.

The consulting giant this week unveiled Dot Good, a new suite of services designed specifically to help nonprofits apply AI and other advanced technologies in practical, mission-aligned ways. The offering blends Deloitte’s social impact expertise with its growing AI, data, and technology muscle, packaged at discounted rates to make it more accessible to cash-constrained organizations.

The launch reflects a broader reality in the nonprofit sector: leaders see the promise of AI, but many simply can’t afford to experiment, scale, or hire the talent needed to do it responsibly.

AI ambition meets nonprofit reality

Dot Good didn’t emerge from a whiteboard exercise. Deloitte says it interviewed 50 nonprofit leaders while shaping the program, and a clear pattern emerged. Most leaders believe AI could significantly improve strategic decision-making, operational efficiency, and program impact. But they also acknowledged that limited budgets, talent shortages, and competing priorities make adoption difficult.

That tension—high expectations, low capacity—has become a defining theme across the nonprofit world. While large enterprises race ahead with generative AI pilots and agent-based automation, many nonprofits are still wrestling with legacy systems, manual processes, and data silos.

Dot Good is positioned as a response to that imbalance.

What Dot Good actually offers

Rather than a single product or platform, Dot Good is structured as a customized service model that can support nonprofits at different stages of their technology journey. Deloitte says engagements can include:

  • AI and technology strategy, helping organizations identify where advanced tools can realistically support their mission

  • Tech-focused human capital services, including workforce planning and change management

  • System customization and implementation, translating strategy into usable systems rather than abstract roadmaps

The key differentiator, Deloitte argues, is flexibility. Nonprofits can engage at an early exploratory stage or move directly into implementation, depending on readiness and resources.

Just as important, Deloitte is offering these services at discounted rates for the nonprofit market—an acknowledgment that traditional consulting price tags often put firms like Deloitte out of reach for social sector organizations.

Beyond consulting: pro bono AI education

To complement paid engagements, Deloitte is also rolling out a pro bono AI learning series for nonprofit professionals. The program is designed to meet organizations where they are, whether they’re just beginning to understand AI or preparing for more strategic deployments.

The idea is to raise baseline AI literacy across the sector, not just deliver one-off projects.

Dana O’Donovan, US Purpose leader at Deloitte Services LP, framed the initiative as a response to rapid technological change. “Technology is rapidly evolving, leaving many resource-constrained nonprofits struggling to keep up in today’s tech-driven world,” she said. Dot Good, she added, is meant to help nonprofits use advanced technologies to transform their organizations while staying focused on their core missions.

A people-first framing for AI

Deloitte is careful to position Dot Good as people-first, not automation-for-automation’s-sake. Nina Gonzalez, a principal at Deloitte Consulting LLP, emphasized that AI’s value lies in how it supports human decision-making and mission delivery.

Gonzalez said that by combining AI-driven insights, human capital solutions, and implementation support, Dot Good aims to improve operational value, unlock innovation, and enable transformative change without pulling nonprofits away from their purpose.

That framing aligns with a growing trend in the AI market. As skepticism rises around hype-heavy AI claims, organizations are increasingly focused on practical, ethical, and trust-based adoption, especially in sensitive sectors like healthcare, education, and social services.

How this fits into Deloitte’s AI push

Dot Good also serves as another showcase for Deloitte’s expanding AI ecosystem. Over the past decade, the firm has invested heavily in AI capabilities, including:

  • Its Generative AI practice

  • Zora AI™, an agentic platform offering ready-to-deploy digital workers

  • The Deloitte Ascend™ delivery platform, used to build and deploy AI solutions and agents

  • Its Trustworthy AI™ framework, designed to manage sector-specific risks and governance concerns

  • The Deloitte AI Academy™, which focuses on AI fluency and workforce training

While Dot Good isn’t about selling Zora AI or prebuilt agents directly, it benefits from the same underlying infrastructure and governance frameworks—an important consideration for nonprofits that must balance innovation with accountability and public trust.

Why this matters for the MarTech and AI ecosystem

At first glance, Dot Good may seem far removed from mainstream MarTech. But the implications ripple outward.

Nonprofits are increasingly digital-first organizations, relying on data, marketing automation, CRM systems, and analytics to fundraise, engage donors, and measure outcomes. AI-powered insights can influence everything from campaign targeting to impact reporting and resource allocation.

By lowering the barrier to AI adoption in the nonprofit sector, Deloitte is helping expand the addressable AI market beyond enterprises and into mission-driven organizations. That shift mirrors what’s happening in SMB MarTech, where vendors are racing to simplify AI tools for smaller teams with limited budgets.

It also puts pressure on rival consultancies and tech providers. If Dot Good gains traction, competitors may need to rethink how they package AI services for nonprofits—or risk ceding influence in a sector that, while not always lucrative, carries reputational and long-term strategic value.

A pragmatic step, not a silver bullet

Dot Good won’t magically solve the nonprofit sector’s funding or talent challenges. AI tools still require clean data, leadership buy-in, and ongoing change management—areas where many organizations struggle.

But by combining discounted consulting, tailored implementation, and free education, Deloitte is making a pragmatic bet: that nonprofits don’t need moonshot AI experiments, but practical, guided adoption that respects their constraints.

In a market saturated with AI promises, that grounded approach may be exactly what resonates.

VIAVI Brings Augmented Reality to RF Testing With RF Viewer on OneAdvisor 800

technology 9 Jan 2026

Radio frequency signals run modern networks—but they remain invisible, abstract, and notoriously difficult to interpret in real-world environments. VIAVI Solutions wants to change that.

The company has announced the integration of RF Viewer, a new augmented reality (AR) solution, into its OneAdvisor 800 Wireless test platform. The move signals a broader shift in how RF analysis is performed: away from dense charts and static measurements, and toward intuitive, visual, in-the-field understanding.

Developed in close collaboration with Verizon Wireless, RF Viewer overlays real-time RF signal strength directly onto a live video feed, allowing technicians to “see” RF emissions as they move through physical spaces. For telecom operators, smart building designers, and RF safety teams, the result is faster diagnostics, clearer decision-making, and improved on-site safety.

Making the invisible visible

Traditional RF testing tools require users to interpret spectrum graphs, signal metrics, and numeric readouts—skills that take years to master and are prone to error under time pressure. RF Viewer tackles that problem by translating RF data into a visual AR overlay, showing signal intensity, location, and distribution in real time.

Using a live camera feed, RF Viewer superimposes RF signal strength onto the physical environment, making it immediately clear where emissions are strongest, how they propagate, and where potential issues may exist. What once required educated guesswork now becomes visually obvious.

This approach is particularly valuable in dense RF environments, such as urban deployments, indoor venues, and smart buildings, where reflections, interference, and passive intermodulation (PIM) can degrade performance in hard-to-diagnose ways.
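
To picture the technique: below is a minimal sketch, assuming a grid of RF power readings (in dBm) already registered to the camera view, of how such readings can be blended over a live frame as a heatmap. This illustrates the general idea only, not VIAVI's implementation; the grid, display thresholds, and blend weights are invented.

```python
import numpy as np
import cv2  # OpenCV, used here to resize, colorize, and blend images

def overlay_rf_heatmap(frame: np.ndarray, dbm_grid: np.ndarray,
                       floor_dbm: float = -110.0, ceil_dbm: float = -40.0) -> np.ndarray:
    """Blend a coarse grid of RF power readings (dBm) over a BGR camera frame."""
    # Normalize readings into [0, 1], clamping outliers to the display range.
    norm = np.clip((dbm_grid - floor_dbm) / (ceil_dbm - floor_dbm), 0.0, 1.0)
    gray = (norm * 255).astype(np.uint8)
    # Upscale the sparse measurement grid to the frame's resolution.
    gray = cv2.resize(gray, (frame.shape[1], frame.shape[0]),
                      interpolation=cv2.INTER_CUBIC)
    # Colorize (warm colors = stronger signal) and blend semi-transparently.
    heat = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
    return cv2.addWeighted(frame, 0.6, heat, 0.4, 0.0)
```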

Built with operators, not just for them

VIAVI’s collaboration with Verizon Wireless played a key role in shaping RF Viewer’s design and use cases. According to Vikramjeet Singh, Associate Director of System Performance at Verizon Wireless, the AR-based approach has immediate operational benefits.

“This joint collaboration helps us promptly and efficiently locate PIM sources in a safe and effective manner,” Singh said. “RF Viewer enhances our ability to maintain optimal network performance while ensuring technician safety.”

That focus on safety is notable. RF safety assessments are often time-consuming and conservative by necessity. By visualizing RF exposure levels directly in the environment, RF Viewer can help teams identify high-exposure zones more quickly and plan mitigation strategies with greater confidence.

From expert-only tools to broader usability

While RF engineering has traditionally been the domain of highly specialized professionals, RF Viewer is designed to be accessible beyond expert users. VIAVI highlights a user-friendly interface that supports both seasoned RF engineers and less technical personnel who still need situational awareness of RF conditions.

Key features of RF Viewer include:

  • Live AR overlays showing RF signal strength and spatial distribution

  • Real-time diagnostics for troubleshooting, optimization, and interference identification

  • Intuitive interaction, reducing reliance on complex RF charts and manual interpretation

This democratization of RF insight aligns with a wider industry trend. As networks become more complex—spanning private 5G, IoT, smart buildings, and hybrid indoor-outdoor deployments—organizations need tools that reduce cognitive load and speed up decision cycles.

Strengthening the OneAdvisor 800 platform

RF Viewer is not a standalone product. It extends the capabilities of the VIAVI OneAdvisor 800 Wireless, an all-in-one test platform already used across telecom, enterprise, and public-sector deployments.

The OneAdvisor 800 Wireless combines functions such as:

  • Spectrum analysis

  • Interference detection

  • Transport network validation

  • End-to-end performance testing

By adding AR-driven RF visualization, VIAVI is enhancing the platform’s value for field teams who need to diagnose issues quickly without switching between tools or relying on remote experts.

The integration also reinforces OneAdvisor 800’s positioning as networks evolve toward 5G Advanced and early 6G architectures, where higher frequencies, denser deployments, and more complex propagation characteristics increase the difficulty of RF planning and maintenance.

AR meets next-generation networks

The timing of RF Viewer’s launch is not accidental. As operators roll out mid-band and mmWave 5G—and begin laying the groundwork for 6G—the industry is grappling with new RF challenges. Higher frequencies behave differently, with shorter ranges, increased sensitivity to obstacles, and more complex interference patterns.

AR-based tools like RF Viewer offer a glimpse into how network testing may evolve: blending physical context with digital intelligence to create situational awareness that static dashboards cannot match.

Competitors in the test and measurement space have explored AI-driven analytics and automation, but AR remains a relatively untapped interface. VIAVI’s move could pressure rivals to rethink how they present RF data, especially as technician shortages make ease of use a strategic advantage.

Beyond telecom: broader implications

While telecom operators are an obvious audience, RF Viewer’s use cases extend into smart buildings, enterprise wireless, and RF safety compliance. As offices, factories, and public venues deploy private wireless networks and dense IoT infrastructures, understanding RF behavior indoors becomes critical.

For designers and facility managers, being able to visually assess RF coverage and exposure could streamline planning, compliance, and optimization—areas that increasingly overlap with enterprise IT and digital transformation initiatives.

A more human interface for RF intelligence

“RF Viewer bridges the gap between invisible RF data and human perception,” said Ian Langley, Senior Vice President of VIAVI’s Wireless Business Unit. By combining AR with RF analytics, he noted, the company aims to help technicians and engineers make faster, smarter decisions in the field.

That statement captures the broader significance of the launch. As networks grow more complex, the challenge is no longer just collecting data—it’s making that data understandable and actionable in real-world conditions.

With RF Viewer, VIAVI is betting that augmented reality can become a practical interface for network intelligence, not just a futuristic add-on. If adoption follows, AR may soon be as common in RF testing as spectrum analyzers are today.

NRF Taps Insider One to Power AI-Driven Engagement for Retail’s Biggest Global Event

artificial intelligence 9 Jan 2026

When the National Retail Federation (NRF) prepares to host 40,000 retailers from around the world at its flagship Retail’s Big Show, the challenge isn’t visibility—it’s relevance. In an era where large-scale events compete in a crowded, digital-first attention economy, NRF is rethinking how it attracts, engages, and personalizes experiences for a massive and diverse audience.

To do that, the world’s largest retail trade association has standardized its customer engagement stack on Insider One, an AI-native omnichannel experience and customer engagement platform. The move reflects a broader shift in event marketing: away from generic campaigns and toward real-time, data-driven personalization at scale.

Why NRF needed a new engagement model

For years, major industry events relied on broad messaging, static websites, and email-heavy outreach to drive registrations. That approach no longer works—especially for global audiences with different roles, interests, regions, and expectations.

NRF’s audience spans retailers, brands, technology providers, analysts, and executives across multiple continents. Each group engages differently, consumes different content, and values different outcomes from the event. Treating them as a single cohort risks lower registrations, weaker engagement, and missed opportunities.

Insider One entered the picture as NRF sought to unify its fragmented digital ecosystem and activate real-time data across every touchpoint—from first website visit to post-event engagement.

One platform, unified data foundation

A key part of the partnership was consolidation. NRF integrated its Retail’s Big Show, NRF Protect, and NRF corporate websites into a single Insider One panel. This created a centralized foundation for managing experiences, campaigns, and experimentation.

By connecting Segment data, NRF embedded registration and behavioral insights directly into the personalization layer. Instead of siloed analytics and disconnected tools, engagement decisions are now informed by live user behavior and historical data.

This matters because scale introduces complexity. At tens of thousands of attendees, even small inefficiencies—irrelevant messaging, poorly timed nudges, or generic content—can significantly impact conversion and engagement metrics.

Driving registrations with AI-led segmentation

At the core of Insider One’s value for NRF is AI-powered segmentation and orchestration. Rather than manually defining static audience lists, NRF can activate dynamic segments based on behavior, intent signals, and engagement patterns.

That enables what marketers often promise but struggle to deliver: the right message, to the right audience, at the right moment.

For NRF, this translates into faster campaign execution, reduced friction in the registration journey, and higher-impact outreach across channels. Messaging can adapt in near real time as users interact with content, revisit pages, or show interest in specific themes such as AI, supply chain, or digital commerce.
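
The orchestration logic behind that promise can be pictured with a small sketch. Everything here is hypothetical rather than Insider's actual API: the segment names, signals, and thresholds are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Visitor:
    country: str
    pages_viewed: dict = field(default_factory=dict)  # topic -> view count
    registered: bool = False

def assign_segment(v: Visitor) -> str:
    """Route a visitor to a message track based on live behavior."""
    if v.registered:
        return "agenda-highlights"          # nurture registrants with content
    if v.pages_viewed.get("ai", 0) >= 3:
        return "ai-track-promo"             # strong AI intent signal
    if v.country != "US":
        return "international-travel-info"  # region-specific messaging
    return "registration-deadline"          # default conversion nudge

print(assign_segment(Visitor(country="DE", pages_viewed={"ai": 4})))  # ai-track-promo
```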

The result is a registration strategy that feels more like a guided journey than a broadcast campaign.

Expanding engagement beyond email

Email remains important, but it’s no longer sufficient on its own—especially for time-sensitive event communication. To deepen engagement, NRF introduced web push notifications as a new channel through Insider One.

The channel quickly built a highly engaged subscriber base, allowing NRF to deliver timely, personalized updates before, during, and after the event. Web push is particularly effective for reminders, deadline-driven messaging, and content highlights, areas where inbox fatigue often limits email's impact.

NRF has also rolled out a series of high-impact on-site experiences, including:

  • Countdown banners highlighting registration deadlines

  • Slide-out bars promoting key sessions and content tracks

  • An upcoming experience designed to drive mobile app downloads, extending engagement into the on-site and post-event phases

Together, these elements reflect a shift toward always-on engagement, where each interaction builds on the last rather than resetting with every campaign.

Personalization at scale through experimentation

Personalization at NRF isn’t static—it’s experimental by design. Using Insider One, the organization is actively testing and optimizing on-site experiences to understand what resonates with different audience segments.

Recent experiments include experiences tailored for international audiences, visibility for AI-focused content, and tests around blog placement and content discovery. Additional experiments are planned to refine both content performance and advertising outcomes.

This test-and-learn approach mirrors what leading retailers themselves practice. NRF isn’t just talking about innovation on stage; it’s applying the same principles to its own digital operations.

Smarter content discovery with AI recommendations

Looking ahead, NRF plans to deploy Insider One’s Smart Recommender to personalize content discovery across its blogs and digital properties.

Rather than serving the same articles to every visitor, Smart Recommender uses prior engagement and individual interests to surface relevant content. For an organization with a deep content library and diverse audience, this can significantly increase dwell time, repeat visits, and perceived value.
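
At its simplest, this kind of recommendation is content ranking by interest overlap. The sketch below is a toy version of that idea with invented fields, not the Smart Recommender itself.

```python
def rank_articles(read_topics: set, articles: list) -> list:
    """Order articles by overlap with the topics a reader has engaged with."""
    def score(article: dict) -> int:
        return len(read_topics & set(article["topics"]))
    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "AI in store operations", "topics": ["ai", "retail"]},
    {"title": "Supply chain outlook", "topics": ["supply-chain"]},
]
print(rank_articles({"ai", "retail"}, articles)[0]["title"])  # AI in store operations
```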

It also positions NRF to extend personalization well beyond event marketing—into year-round thought leadership, research, and member engagement.

What this signals for event marketing and MarTech

NRF’s adoption of Insider One reflects a larger trend reshaping the MarTech and event marketing landscape.

Large-scale events are increasingly treated like digital products, not one-off campaigns. That means continuous optimization, omnichannel engagement, and AI-driven decision-making are becoming table stakes rather than differentiators.

Platforms that unify data, orchestration, experimentation, and personalization are gaining ground over point solutions. At the same time, AI-native systems are reducing the manual effort required to manage complexity at scale.

For MarTech vendors, NRF’s move is also a signal. Even organizations with strong brands and guaranteed attendance are investing heavily in personalization—not to attract attention, but to deliver relevance.

Built for what’s next

As AI becomes a foundational layer of customer engagement, NRF is applying the same innovation mindset it promotes across the retail industry to its own operations.

With Insider One, NRF isn’t just promoting Retail’s Big Show. It’s demonstrating how large organizations can orchestrate personalized, data-driven experiences across massive audiences—without sacrificing speed or scale.

In doing so, NRF is setting a new benchmark for how global events are marketed and experienced in an AI-first world.

Vertiv’s Frontiers Report Maps How AI Is Rewriting the Data Center Playbook

artificial intelligence 9 Jan 2026

Artificial intelligence is no longer just another workload inside the data center—it’s reshaping the data center itself. That’s the central message of Vertiv’s new Frontiers report, which examines how macro forces tied to AI are driving fundamental changes in how facilities are designed, powered, cooled, and operated.

Drawing on expertise from across Vertiv’s engineering and technology teams, the report expands on the company’s annual data center trends outlook, offering a deeper look at the forces pushing the industry toward what Vertiv increasingly describes as “AI factories.” These facilities are denser, faster to deploy, and more tightly integrated than anything that came before them.

“The data center industry is continuing to rapidly evolve how it designs, builds, operates and services data centers, in response to the density and speed of deployment demands of AI factories,” said Scott Armul, Vertiv’s chief product and technology officer. According to Armul, cross-technology pressures—especially extreme densification—are accelerating shifts toward higher-voltage DC power, advanced liquid cooling, on-site energy generation, and digital twins.

Together, these changes point to an industry in the middle of a structural reset.

The macro forces reshaping data centers

At the heart of the Frontiers report is Vertiv’s identification of four macro forces that are redefining data center innovation.

First is extreme densification, driven primarily by AI and high-performance computing workloads. GPU-rich racks are pushing power densities far beyond what legacy facilities were designed to handle, stressing everything from power distribution to cooling systems.

Second is gigawatt scaling at speed. AI demand is forcing operators to deploy capacity faster—and at larger scale—than ever before. Hyperscale-style growth is no longer limited to cloud giants; enterprises, governments, and AI-native companies are now planning facilities measured in hundreds of megawatts or more.

Third is the idea of the data center as a unit of compute. In the AI era, facilities can no longer be treated as collections of loosely coupled systems. Power, cooling, IT, and software must be designed and operated as a single, tightly integrated platform.

Finally, silicon diversification is complicating infrastructure planning. Data centers must now support a growing mix of CPUs, GPUs, accelerators, and custom silicon, each with different power, cooling, and operational profiles.

These forces set the stage for five technology trends that Vertiv believes will define the next phase of data center evolution.

Powering up for AI

Power architecture sits at the center of the AI data center challenge. Most facilities today still rely on hybrid AC/DC power distribution, with multiple conversion stages between the grid and the IT rack. While proven, this approach introduces inefficiencies that become increasingly problematic as rack densities climb.

AI workloads are exposing those limits. According to Vertiv, the industry is moving toward higher-voltage DC power architectures, which reduce current, shrink conductor size, and eliminate some conversion stages by centralizing power conversion at the room level.

Hybrid AC/DC systems remain common, but as standards mature and equipment ecosystems develop, full DC architectures are expected to gain traction—especially in high-density environments. The shift is further reinforced by on-site generation and microgrids, which naturally align with DC-based distribution.
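
The physics behind the shift is simple: for a fixed power draw, current scales as I = P / V, and resistive conductor losses scale as I²R. Here is a back-of-the-envelope comparison; the rack load and bus voltages are illustrative assumptions, not figures from the report.

```python
def feeder_current_amps(power_kw: float, volts_dc: float) -> float:
    """Current needed to deliver a given power at a given DC bus voltage."""
    return power_kw * 1_000 / volts_dc

RACK_KW = 120  # hypothetical high-density AI rack
base = feeder_current_amps(RACK_KW, 48)
for volts in (48, 400, 800):
    amps = feeder_current_amps(RACK_KW, volts)
    # For a fixed conductor, I^2 R losses scale with the square of the current.
    print(f"{volts:>3} VDC: {amps:7.1f} A, I^2R losses at "
          f"{100 * (amps / base) ** 2:5.2f}% of the 48 VDC case")
```

Lower current is also what shrinks conductor size, which is why higher-voltage distribution pairs naturally with the densities described above.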

In practical terms, power design is becoming less about incremental efficiency gains and more about enabling scale. Without rethinking power delivery, gigawatt-class AI deployments simply won’t be feasible.

Distributed AI changes where compute lives

The first wave of AI investment focused heavily on centralized hyperscale data centers built to train and run large language models. But Vertiv’s report suggests the next phase will be more distributed.

As AI becomes mission-critical, organizations will make more nuanced decisions about where inference workloads run. Factors such as latency, data residency, security, and regulatory compliance are pushing some industries toward on-premises or hybrid AI environments.

Highly regulated sectors—including finance, defense, and healthcare—are prime examples. For these organizations, sending sensitive data to public clouds may not be an option, even if cloud-based AI services are readily available.

Supporting distributed AI requires flexible, scalable infrastructure, particularly high-density power and liquid cooling systems that can be deployed in new builds or retrofitted into existing facilities. This trend blurs the traditional line between hyperscale and enterprise data centers, bringing AI-class infrastructure closer to the edge.

Energy autonomy accelerates

On-site power generation has long been a staple of resiliency planning, but AI is changing the equation. Power availability, not just reliability, is becoming a limiting factor for new data center projects in many regions.

Vertiv notes that extended energy autonomy is emerging as a strategic priority, especially for AI-focused facilities. Investments in natural gas turbines and other on-site generation technologies are increasingly driven by grid constraints rather than backup requirements alone.

This shift is giving rise to strategies like “Bring Your Own Power (and Cooling)”, where operators design facilities around self-generated energy and tightly integrated thermal systems. While capital-intensive, these approaches offer more predictable scaling and faster time to deployment in power-constrained markets.

Energy autonomy also intersects with sustainability goals, forcing operators to balance capacity expansion with emissions considerations and long-term regulatory risk.

Digital twins move from planning tool to necessity

As AI infrastructure grows more complex, traditional design and deployment processes are struggling to keep up. Vertiv’s report highlights digital twin technology as a critical enabler for speed and scale.

By using AI-driven digital twins, operators can virtually model entire data centers—including IT, power, and cooling systems—before anything is built. These virtual environments allow teams to validate designs, optimize layouts, and integrate prefabricated modular components.

The payoff is speed. Vertiv estimates that digital twin-driven approaches can reduce time-to-token by up to 50%, a metric that matters deeply in competitive AI markets. Faster deployment means faster access to compute, which can translate directly into business advantage.

Digital twins also support the concept of the data center as a unit of compute, reinforcing tighter integration between physical infrastructure and AI workloads.

Adaptive, resilient liquid cooling

Liquid cooling has rapidly moved from niche to necessity as AI workloads push beyond the limits of air cooling. But Vertiv argues that cooling innovation isn’t stopping at adoption—it’s becoming smarter.

AI itself is now being applied to optimize liquid cooling systems, using advanced monitoring and control to predict failures, manage fluid dynamics, and improve overall resilience. In high-value AI environments, where downtime can be extraordinarily expensive, predictive cooling intelligence could significantly boost uptime and hardware longevity.

As liquid cooling becomes mission-critical, adaptive systems that learn and respond in real time may become a standard expectation rather than a premium feature.

Why this matters beyond data centers

While the Frontiers report is focused on infrastructure, its implications ripple outward into cloud strategy, AI economics, and even MarTech and AdTech ecosystems. AI-driven services—from personalization engines to real-time analytics—ultimately depend on the scalability and reliability of the underlying compute layer.

If AI factories struggle with power, cooling, or deployment speed, innovation at the application layer slows as well. Conversely, breakthroughs in infrastructure efficiency can lower costs and expand access to AI capabilities across industries.

Vertiv’s framing also underscores a broader industry truth: AI transformation isn’t just about models and software. It’s equally about electrons, heat, and physical space.

A data center industry in transition

Vertiv operates in more than 130 countries, spanning power management, thermal management, and IT infrastructure from the cloud to the edge. That breadth gives the company a wide-angle view of how infrastructure demands are changing—and how quickly legacy assumptions are being challenged.

The Frontiers report makes it clear that incremental upgrades won’t be enough. AI is forcing a rethinking of foundational design choices, from power architecture to cooling strategy to how facilities are conceptualized and delivered.

As Armul puts it, gigawatt-scale AI innovation depends on embracing these shifts. The data center, once a supporting actor, is now a central character in the AI story—and its evolution may determine how fast the next wave of AI progress arrives.

transcosmos and Priv Tech Launch Privacy Consulting to Unlock Data-Driven Marketing in a Post-Cookie Era

digital marketing 9 Jan 2026

As privacy regulations tighten and signal loss reshapes digital advertising, marketers face a growing paradox: they need more data to improve performance, but fewer ways to use it safely. transcosmos and Priv Tech, Inc. believe the solution lies at the intersection of privacy engineering and marketing execution.

The two companies have announced the joint launch of Privacy Consulting Services, set to roll out in December 2025. The new offering is designed to help companies leverage first-party data for digital marketing—particularly through Conversion APIs (CAPI)—while complying with complex privacy regulations in Japan and overseas.

The timing is deliberate. As platforms push server-side tracking and AI-driven optimization, outdated privacy policies have become a hidden bottleneck preventing marketers from fully activating their data.

Why privacy has become a growth constraint

For years, privacy compliance was treated as a legal checkbox. Today, it directly impacts marketing performance.

Regulations such as Japan’s Act on the Protection of Personal Information (APPI), the Telecommunications Business Act, GDPR, and CCPA have raised the bar for how companies collect, disclose, and activate user data. At the same time, advertising platforms increasingly rely on first-party conversion and event data—often transmitted via CAPI—to fuel AI learning and improve targeting accuracy.

The result is a growing gap. Many companies want to deploy CAPI to offset cookie loss and improve ROI, but their privacy policies, consent frameworks, and internal governance structures aren’t ready. Revising them has become slow, risky, and resource-intensive, especially for organizations operating across markets.

That’s the problem transcosmos and Priv Tech are aiming to solve.

A joint approach: privacy by design, performance by default

The new Privacy Consulting Services combine transcosmos’s strength in marketing execution and technology deployment with Priv Tech’s expertise in privacy protection and privacy-enhancing technologies.

Rather than treating privacy and performance as opposing forces, the partnership positions privacy as an enabler of modern digital marketing. By building compliant foundations first, companies can activate data with greater confidence and scale.

The service provides end-to-end support, spanning privacy strategy, policy revision, technology implementation, and marketing enablement. This approach is designed to help businesses move from regulatory uncertainty to operational readiness—without stalling their advertising initiatives.

What the service covers

At its core, the offering focuses on removing the friction that prevents companies from deploying CAPI and other data-driven marketing technologies.

Key service components include:

  • Support for revising privacy policies to align with new and evolving regulations

  • Compliance assistance for Japanese privacy laws, including APPI and the Telecommunications Business Act

  • Cookie policy development, including site scans and policy template creation

  • Consent management platform (CMP) deployment, ensuring proper user consent flows

  • Regulatory risk mitigation, reducing exposure to compliance failures and public backlash

By addressing both legal requirements and technical implementation, the service aims to shorten the path from compliance to activation.

Why CAPI is central to the strategy

Conversion APIs have become a critical infrastructure layer for digital advertising. As browser-based tracking degrades, server-side data transmission allows platforms like Meta and Google to receive higher-quality conversion signals, improving attribution and optimization.

But CAPI only works if companies can lawfully collect, process, and transmit user data—and clearly disclose those practices. Without compliant privacy policies and consent mechanisms, CAPI adoption can expose businesses to regulatory and reputational risk.
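
The underlying pattern is easy to sketch. The endpoint, payload fields, and helper below are placeholders rather than any platform's real schema (Meta's Conversions API, for example, defines its own, but likewise expects SHA-256-hashed identifiers); the consent gate is exactly the piece the consulting service is meant to make defensible.

```python
import hashlib
import json
import urllib.request

def send_conversion(email: str, order_value: float, consent_given: bool) -> None:
    """Server-side conversion event with hashed PII, sent only with consent."""
    if not consent_given:
        return  # privacy-first: no consent, no transmission
    event = {
        "event_name": "Purchase",
        "user_data": {
            # Identifiers are hashed before they leave the server.
            "em": hashlib.sha256(email.strip().lower().encode()).hexdigest(),
        },
        "custom_data": {"value": order_value, "currency": "JPY"},
    }
    req = urllib.request.Request(
        "https://ads-platform.example/v1/events",  # placeholder endpoint
        data=json.dumps({"data": [event]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```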

This is where the partnership’s positioning is notable. Instead of selling CAPI deployment in isolation, transcosmos and Priv Tech are framing it as part of a privacy-first data utilization strategy.

Expected outcomes for marketers

According to the companies, businesses that adopt the new service can expect several tangible benefits:

  • Stronger customer trust and brand value, driven by transparent data practices

  • More effective use of marketing data, improving targeting precision and ROI

  • Reduced privacy risk, including protection against compliance violations and online backlash

  • A competitive advantage, built on privacy as a differentiator rather than a constraint

In an environment where consumers are increasingly sensitive to how their data is used, these outcomes carry strategic weight beyond short-term performance metrics.

A broader signal for the MarTech market

The launch reflects a broader trend across MarTech and AdTech: privacy is no longer a downstream concern—it’s becoming a core capability.

As AI-driven marketing depends more heavily on high-quality first-party data, companies that can operationalize privacy at scale will move faster than those stuck in legal and technical gridlock. Consulting models that blend compliance, technology, and performance may become more common, particularly in markets like Japan where regulatory complexity is high.

For transcosmos, the partnership reinforces its role as a full-stack marketing and technology partner. For Priv Tech, it positions privacy engineering not just as risk management, but as a growth enabler.

Building sustainable growth through trust

transcosmos says it remains committed to helping businesses achieve sustainable growth by addressing real-world client challenges. In today’s digital marketing environment, few challenges are as pressing—or as intertwined—as privacy compliance and performance.

By launching Privacy Consulting Services together, transcosmos and Priv Tech are making a clear statement: the future of data-driven marketing belongs to companies that can balance trust, compliance, and AI-powered optimization—at the same time.

ThreatModeler Acquires IriusRisk to Create a Global Powerhouse in AI-Driven Threat Modeling

artificial intelligence 9 Jan 2026

Threat modeling has long been considered a “nice to have” in application security—valuable in theory, but hard to scale in practice. That’s changing fast. As AI accelerates software development and expands the attack surface, enterprises are being forced to rethink how security is embedded from day one.

Against that backdrop, ThreatModeler has announced the acquisition of IriusRisk, bringing together what the companies describe as the two leading enterprise threat modeling platforms. The deal positions ThreatModeler as a dominant force in a rapidly expanding $30 billion application security market, with ambitions to make secure-by-design practices continuous, scalable, and deeply integrated into modern development lifecycles.

The move is as much about timing as it is about technology.

Why this acquisition matters now

Enterprise security teams are under pressure from both sides. On one side, development velocity is increasing, driven by cloud-native architectures, microservices, and AI-assisted coding. On the other, cyber threats are becoming more automated, targeted, and sophisticated.

Threat modeling sits at the intersection of those forces. It helps organizations identify design-level risks before code is written or deployed—but only if it can be applied consistently and at scale. Historically, that’s been the challenge.

By acquiring IriusRisk, ThreatModeler is betting that consolidation, automation, and AI-native intelligence are the keys to unlocking threat modeling’s next phase.

“With the addition of IriusRisk, we’re building the global leader in the threat modeling market to meet rapidly expanding demand,” said Matt Jones, CEO of ThreatModeler. “Together, we deliver customers greater innovation, expanded support, and more scalable solutions that make secure-by-design a sustainable, continuous practice at enterprise scale.”

Two leaders, complementary strengths

While both companies operate in the same category, their strengths have historically been complementary rather than redundant.

ThreatModeler is known for its AI-driven threat modeling platform, designed to help security architects rapidly model threats across complex, enterprise-scale environments. Its focus has been on speed, automation, and consistency—critical for organizations managing hundreds or thousands of applications.

IriusRisk, by contrast, has built deep traction with development and architecture teams, emphasizing collaboration, education, and adoption. Over time, that approach has helped foster what is widely regarded as the industry’s most active professional threat modeling community.

Bringing these two approaches together creates a platform that spans both sides of the security equation: architectural rigor at the enterprise level and practical engagement at the developer level.

According to the companies, customers using the combined capabilities have already seen measurable gains, including building threat models twice as fast and scaling adoption by more than tenfold.

Scaling “secure by design” beyond theory

One of the most striking claims around the acquisition is its focus on democratization. Threat modeling has traditionally been the domain of specialized security experts—a bottleneck in organizations trying to move faster.

The combined ThreatModeler–IriusRisk organization says it is uniquely positioned to change that dynamic. With hundreds of customers, tens of thousands of threat models built, and the largest professional threat modeling communities, it aims to make secure-by-design practices accessible across entire enterprises.

That matters because most breaches aren’t caused by obscure zero-days. They’re the result of architectural oversights, misconfigurations, and design decisions made early—and rarely revisited.

By embedding threat modeling across the software lifecycle, the combined platform aims to help enterprises “virtually scale” their security teams, applying expert-level analysis without requiring expert-level headcount.

The AI angle: data, intelligence, and speed

AI is a recurring theme in the deal, and not just as a buzzword.

ThreatModeler emphasizes that the acquisition accelerates its vision of an AI-native security platform, powered by what it calls the industry’s largest proprietary threat modeling dataset. That dataset—now expanded with IriusRisk’s models, patterns, and community insights—forms the foundation for deeper intelligence and more automated decision-making.

“This milestone accelerates our vision to protect customers with an AI-native platform powered by the industry’s largest proprietary dataset,” said Archie Agarwal, Founder and Chief Innovation Officer of ThreatModeler. “By combining our teams and technology, we’re enabling faster innovation, deeper intelligence, and a security partner built to scale with our customers.”

In practical terms, this means more automated threat identification, smarter recommendations, and less reliance on manual expertise—all critical as AI both empowers developers and lowers the barrier for attackers.

A market ripe for consolidation

The threat modeling space has historically been fragmented, with a mix of open-source tools, consultancy-led approaches, and niche platforms. That fragmentation made it difficult for large enterprises to standardize practices globally.

This acquisition signals a shift toward consolidation, mirroring what has already happened in adjacent security markets such as application security testing and cloud security posture management.

Investors appear to agree. The combined company is majority owned by Invictus Growth Partners, with Paladin Capital Group, a long-standing investor in IriusRisk, remaining a shareholder. That continuity suggests confidence in the long-term growth of threat modeling as a core security discipline.

“Cybersecurity is a nonstop arms race, now accelerated by AI,” said John DeLoche, Co-Founder and Managing Partner at Invictus Growth Partners. “Threat modeling is essential for teams that want to proactively protect enterprise systems and applications. This acquisition unites leading threat-modeling expertise and creates the industry’s largest dataset, giving enterprises a decisive advantage in the AI era.”

Implications for enterprise security teams

For CISOs and application security leaders, the deal highlights a broader trend: design-time security is becoming non-negotiable.

As regulatory pressure increases and software supply chains grow more complex, organizations are being judged not just on how they respond to incidents, but on how well they prevent them. Threat modeling, once relegated to periodic reviews, is increasingly expected to run continuously alongside development.

By combining AI-driven automation with deep community adoption, ThreatModeler and IriusRisk are positioning themselves as a foundational layer in that shift.

Competitors will likely feel the pressure. Smaller vendors may struggle to match the scale, dataset depth, and enterprise reach of the combined platform, while larger security suites may look to strengthen their own design-time security capabilities through partnerships or acquisitions.

What comes next

While financial terms were not disclosed, the strategic intent is clear. ThreatModeler isn’t just expanding its footprint—it’s attempting to define what enterprise threat modeling looks like in an AI-first world.

“This is an exciting leap forward for the industry,” said Stephen de Vries, CEO of IriusRisk. “Both our companies share a passion for helping enterprises start left with their secure-by-design approach. By joining forces, we are better positioned to deliver on that shared mission.”

If successful, the acquisition could mark a turning point for threat modeling—from a specialist discipline practiced by a few, to an automated, AI-augmented capability embedded across every application and infrastructure layer.

In a security landscape where speed and foresight increasingly matter more than reaction, that shift could prove decisive.

Microsoft Unveils Agentic AI Suite to Power End-to-End Retail Automation

artificial intelligence 9 Jan 2026

Microsoft has announced a new suite of agentic AI solutions aimed at bringing intelligent automation across the entire retail value chain—from merchandising and marketing to fulfillment and store operations.

Designed to help retailers move faster and operate with greater precision, the new capabilities introduce a connected layer of intelligence that replaces fragmented workflows with coordinated, context-aware execution. Microsoft positions the offering as a foundation for a unified, intelligence-driven retail operating model built for speed, relevance, and resilience.

“The retailers that thrive will be the ones that unify their business with intelligence that reaches every corner of the value chain,” said Kathleen Mitford, Corporate Vice President of Global Industry at Microsoft. “With Microsoft’s agentic AI, retailers can automate what slows them down and amplify what sets them apart.”

Turning Conversations Into Conversions With Copilot Checkout

A central pillar of Microsoft’s retail push is Copilot Checkout, a new capability that allows shoppers to complete purchases directly within Copilot conversations—without being redirected to external websites.

The launch comes as AI-driven ecommerce traffic continues to surge. Adobe reports that AI-powered ecommerce visits during the 2025 holiday season increased 693% year over year, underscoring the growing importance of frictionless, intent-driven shopping experiences.

Copilot Checkout is now live in the U.S. on Copilot.com, with support from partners including PayPal, Shopify, and Stripe. Early participating brands include Urban Outfitters, Anthropologie, Ashley Furniture, and Etsy sellers.

For Shopify merchants, Copilot Checkout will be enabled by default following an opt-out period, allowing retailers to preserve their checkout experience while meeting customers inside AI-powered discovery flows.

Personalized Shopping Through Brand and Commerce Agents

Microsoft is also introducing Brand Agents for Shopify merchants and a personalized shopping agent template in Copilot Studio. These tools allow retailers to deploy conversational shopping experiences trained on their product catalogs and brand voice.

Brand Agents provide a turnkey option for answering product questions and guiding shoppers, while the customizable shopping agent template enables advanced experiences such as real-time recommendations, outfit building, and cross-channel discovery across web, mobile, and in-store environments.

Retailers including Kappahl Group are already exploring these tools to improve conversion rates and reduce returns by helping shoppers make more confident purchase decisions.

Catalog Enrichment and Smarter Product Discovery

To support discovery and personalization at scale, Microsoft is launching a catalog enrichment agent template in public preview. The agent automatically extracts product attributes from images, enriches listings with social and contextual insights, and streamlines onboarding, categorization, and error resolution.

Brands like Guess see catalog enrichment as a foundational layer for delivering real-time recommendations and cohesive shopping journeys across channels.

AI Agents for Store Operations and Frontline Staff

Microsoft is extending agentic AI beyond digital commerce into physical retail operations. The new store operations agent template, now in public preview, provides store associates and managers with natural-language access to inventory data, policies, and operational insights.

By combining internal data—such as sales trends and foot traffic—with external signals like weather and local events, the agent recommends staffing adjustments, flags exceptions, and suggests next-best actions in real time.
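
As a purely illustrative sketch (not Microsoft's agent; the weightings and signals are invented), combining such signals into a recommendation can be as simple as:

```python
def recommend_staff(base_staff: int, sales_trend: float,
                    expected_foot_traffic: int, local_event: bool,
                    rain_forecast: bool) -> int:
    """Blend internal and external signals into a staffing suggestion."""
    staff = base_staff * max(sales_trend, 0.5)  # scale with demand trend
    staff += expected_foot_traffic / 200        # ~1 associate per 200 visitors
    if local_event:
        staff += 2                              # nearby event: add coverage
    if rain_forecast:
        staff -= 1                              # weather dampens walk-ins
    return max(1, round(staff))

print(recommend_staff(base_staff=6, sales_trend=1.2,
                      expected_foot_traffic=800,
                      local_event=True, rain_forecast=False))  # -> 13
```

A production agent would learn such weights from data and explain its reasoning in natural language; the point here is the fusion of signals, not the arithmetic.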

Retailers such as Strandbags are using the solution to empower frontline teams while improving decision-making speed and consistency.

A Broader Shift Toward Agentic Retail

With this launch, Microsoft is signaling a broader shift toward agentic AI as the backbone of modern retail operations. By automating routine workflows across commerce, catalog management, and store operations, retailers can redirect resources toward strategy, innovation, and customer experience.

Microsoft says its approach combines deep enterprise integration with responsible AI development—positioning intelligent agents not just as tools, but as long-term operational partners for retailers navigating an increasingly competitive and AI-driven market.

Snowflake to Acquire Observe to Deliver AI-Powered Observability at Enterprise Scale

artificial intelligence 9 Jan 2026

Snowflake has signed a definitive agreement to acquire Observe, an AI-powered observability platform built natively on Snowflake, as the data cloud provider deepens its push into enterprise-scale reliability, operations, and AI-driven application management.

The acquisition positions Snowflake to deliver a new generation of AI-powered observability, designed for the scale, complexity, and economics of modern AI-native enterprises operating across distributed systems, autonomous agents, and data-intensive applications.

“As our customers build increasingly complex AI agents and data applications, reliability is no longer just an IT metric—it’s a business imperative,” said Sridhar Ramaswamy, CEO of Snowflake. “By bringing Observe’s capabilities directly into the Snowflake AI Data Cloud, we’re enabling enterprise-wide observability with open architecture and AI-powered troubleshooting at massive scale.”

From Monitoring to Agentic, AI-Driven Troubleshooting

Observe was built on Snowflake from its inception, making the integration a natural extension of the Snowflake AI Data Cloud. Together, the companies aim to move observability beyond reactive monitoring toward proactive, automated operations.

At the core of the offering is Observe’s AI-powered Site Reliability Engineer (SRE), which correlates logs, metrics, and traces through a unified context graph. By combining this intelligence with Snowflake’s trusted enterprise data, teams can detect anomalies earlier, identify root causes faster, and resolve production issues up to 10 times faster, according to the companies.
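
The context-graph idea can be shown with a toy example: assuming (as OpenTelemetry conventions allow) that logs, metrics, and spans carry a shared trace identifier, correlation becomes a join rather than three separate searches. This is a sketch of the concept, not Observe's implementation.

```python
from collections import defaultdict

logs    = [{"trace_id": "t1", "level": "ERROR", "msg": "payment timeout"}]
metrics = [{"trace_id": "t1", "name": "latency_ms", "value": 9800}]
spans   = [{"trace_id": "t1", "root_span": "POST /checkout"}]

# Group every telemetry record under its trace to form one context node.
graph = defaultdict(lambda: {"logs": [], "metrics": [], "spans": []})
for rec in logs:
    graph[rec["trace_id"]]["logs"].append(rec)
for rec in metrics:
    graph[rec["trace_id"]]["metrics"].append(rec)
for rec in spans:
    graph[rec["trace_id"]]["spans"].append(rec)

# A root-cause pass now walks one node instead of three silos.
node = graph["t1"]
print(node["spans"][0]["root_span"], "->", node["logs"][0]["msg"])
```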

This agentic approach is increasingly critical as enterprise systems become more distributed, autonomous, and AI-driven.

Open Standards and Telemetry at Scale

The acquisition also establishes a unified, open-standard observability architecture based on Apache Iceberg and OpenTelemetry, both of which Snowflake actively contributes to.

By treating telemetry as first-class data within the Snowflake AI Data Cloud, enterprises can manage terabytes to petabytes of logs, metrics, and traces using economical object storage and elastic compute. This approach allows observability data to be analyzed alongside business and operational data with consistent governance, analytics, and AI models.
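
On the instrumentation side, adopting the open standard is straightforward. The minimal OpenTelemetry (Python SDK) example below exports spans to the console; in the architecture described here, an OTLP exporter would ship the same spans toward Iceberg-backed storage instead.

```python
# Minimal OpenTelemetry tracing setup (pip install opentelemetry-sdk).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")
with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.value", 42.5)  # telemetry becomes queryable data
```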

Industry analysts see this as part of a broader shift away from specialized observability stacks toward lakehouse-style economics.

“Observability’s cost problem stems from treating telemetry as special-purpose data,” said Sanjeev Mohan, Principal Analyst at SanjMo. “Snowflake’s acquisition highlights how the lines between data platforms and observability platforms are blurring.”

Eliminating the Cost-Retention Tradeoff

As AI-driven applications generate unprecedented telemetry volumes, many organizations have been forced to rely on data sampling or short retention windows to control costs. Snowflake and Observe say their combined platform removes that tradeoff.

By unifying Observe’s AI-driven observability with Snowflake’s scalable data foundation, enterprises can retain high-fidelity telemetry data for longer periods while reducing overall observability costs—improving visibility across their entire data estate without sacrificing economics.

“Observability is fundamentally a data problem,” said Jeremy Burton, CEO of Observe. “By combining our AI-powered SRE with Snowflake’s AI Data Cloud, we can deliver faster insights, greater reliability, and dramatically better economics for operating the next generation of AI applications.”

Expanding Snowflake’s IT Operations Footprint

Beyond product integration, the acquisition expands Snowflake’s presence in the fast-growing IT operations management (ITOM) market. Gartner estimates the ITOM software market reached $51.7 billion in 2024, growing 9% year over year.

Following the close of the transaction—subject to regulatory approvals—Snowflake plans to deepen its focus on helping enterprises operate reliable AI agents and applications at scale. Observe’s developer-friendly approach is expected to complement Snowflake’s existing workload engines with real-time enterprise context, faster root-cause analysis, and AI-assisted troubleshooting.

Together, Snowflake and Observe aim to redefine observability as a core capability of the modern data platform—one built for AI-native systems where reliability, cost efficiency, and speed are inseparable.
