artificial intelligence 4 Mar 2026
Manufacturers have spent the past few years grappling with supply chain shocks and workforce volatility. Now, a new threat is looming larger: the slow but steady loss of institutional knowledge.
That’s the focus of a March 3, 2026 webinar from Intellect, hosted by CEO Heather Preu and featuring analysts Allison Kuhn and James Wells from LNS Research. The session, titled “Navigating the Chaos of Manufacturing in 2026,” zeroes in on a challenge that’s quickly becoming existential for industrial and life sciences firms: how to preserve operational expertise before it walks out the door.
The webinar arrives on the heels of what many in the industry describe as one of the toughest recall years in recent memory. In 2025, manufacturers faced mounting product recalls, compliance scrutiny, and operational disruptions—often tied not to a lack of data, but to disconnected systems and fractured processes.
As experienced operators, engineers, and quality leaders retire or transition roles, decades of tacit knowledge—how to troubleshoot a finicky line, how to interpret subtle quality deviations, how to navigate compliance gray areas—can vanish with them.
According to Intellect, this loss isn’t just a talent issue; it’s a structural risk to quality, compliance, and production continuity.
The March 3 discussion will examine how AI-driven platforms can capture frontline expertise and convert it into reusable, scalable operational intelligence. The premise: if knowledge can be codified, connected to execution systems, and embedded into workflows, it doesn’t disappear when a veteran worker does.
That’s a sharp departure from traditional manufacturing IT architectures, where Quality Management Systems (QMS), frontline worker tools, and operational data often sit in separate silos.
A central theme of the webinar is the convergence of QMS and Connected Frontline Worker technologies. Rather than treating compliance and execution as parallel tracks, modern manufacturers are increasingly seeking integrated systems that unify them.
Preu is expected to discuss how customer demand for this integration has shaped Intellect’s acquisition strategy—specifically its push to combine quality management and frontline execution into a single operational framework. The goal: link compliance data, production workflows, and institutional knowledge in one AI-enabled environment.
It’s a move that mirrors broader industry trends. As AI adoption accelerates in manufacturing, companies are moving beyond isolated predictive maintenance pilots and toward enterprise-wide operational intelligence. The focus is shifting from dashboards to decision-making—embedding insights directly into frontline workflows.
When quality events, deviations, and corrective actions are digitally connected to shop-floor execution, manufacturers gain more than traceability. They gain context.
The timing of this conversation is no accident.
Life sciences and industrial manufacturers are under intensifying regulatory pressure, while operating with thinner margins and more complex global supply networks. Recalls don’t just hurt financially—they damage brand equity and invite long-term scrutiny.
At the same time, digital transformation initiatives are entering a new phase. Early adopters have already digitized documentation and basic workflows. The next frontier is intelligence: systems that don’t just record events, but actively guide decisions.
That’s where AI becomes less about hype and more about resilience.
If knowledge from seasoned operators can be embedded into digital workflows—automating best practices, flagging risks, standardizing responses—new hires can ramp faster, quality deviations can be caught earlier, and compliance gaps can be closed before they trigger audits or recalls.
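To make that idea concrete, here is a deliberately simple sketch of what "codifying" operator knowledge can look like in software. This is not Intellect's implementation; the parameters, thresholds, and guidance strings are hypothetical, chosen only to show the pattern of turning tacit rules of thumb into executable checks.

```python
# Illustrative sketch (not Intellect's product): a veteran operator's
# tacit troubleshooting knowledge codified as executable rules, so the
# same guidance surfaces for a new hire automatically.

RULES = [
    # (parameter, deviation check, veteran's guidance) -- hypothetical values
    ("fill_weight_g", lambda v: abs(v - 500) > 5,
     "Drift beyond +/-5 g: recalibrate the dosing head before the next lot."),
    ("seal_temp_c", lambda v: v < 180,
     "Seal temperature low: inspect the heater band; weak seals invite recalls."),
]

def review(reading: dict) -> list[str]:
    """Return guidance for any reading that deviates from codified practice."""
    return [advice for name, is_deviation, advice in RULES
            if name in reading and is_deviation(reading[name])]

alerts = review({"fill_weight_g": 507.2, "seal_temp_c": 184.0})
```

The point of the pattern is that the expertise lives in data (the rule table), not in one person's head, so it can be versioned, audited, and attached to a workflow step.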
In other words, AI shifts from experimental to operational.
Kuhn and Wells are expected to provide independent analysis on how workforce transformation, AI adoption, and digital platform consolidation are reshaping performance expectations across manufacturing and life sciences.
Industry analysts have increasingly framed 2026 as a pivotal year: demographic shifts are accelerating, regulatory environments are tightening, and boards are demanding measurable ROI from digital investments.
For vendors, that means platform narratives must translate into tangible outcomes—fewer recalls, faster onboarding, improved yield, and lower compliance risk.
For manufacturers, it means rethinking architecture. Point solutions may still solve local problems, but disconnected systems create blind spots. Unified platforms promise continuity—of data, of processes, and of expertise.
While positioned as an educational event, the webinar also underscores Intellect’s broader market positioning: AI-native, unified, and purpose-built for regulated manufacturing.
The emphasis on integrating QMS and frontline execution places the company in direct conversation with larger enterprise software providers and niche connected worker vendors alike. The differentiator, according to Intellect, is natively linking quality data with real-time execution in one system rather than stitching tools together post-deployment.
Whether that approach becomes the dominant model remains to be seen. But as recalls mount and workforce churn persists, manufacturers are clearly reassessing legacy architectures.
The on-demand webinar will explore:
Strategies to capture and preserve institutional expertise
How AI can convert operational data into actionable intelligence
Approaches to reducing recall risk through unified platforms
The evolving role of QMS in a connected workforce environment
Manufacturing, quality, and operations leaders navigating digital transformation initiatives in 2026 may find the discussion particularly timely.
Because in today’s environment, the real chaos isn’t just external volatility—it’s what happens when critical knowledge is fragmented, disconnected, or gone entirely.
Get in touch with our MarTech Experts.
cloud technology 4 Mar 2026
Enterprises building AI apps often face an uncomfortable choice: hand sensitive data to a third-party managed service—or shoulder the operational burden of running infrastructure themselves.
Zilliz is betting that companies are done choosing.
The company behind the open-source vector database Milvus has announced general availability of Zilliz Cloud BYOC (Bring Your Own Cloud) on Microsoft Azure. With the move, Zilliz Cloud BYOC is now available across Amazon Web Services, Google Cloud Platform, and Azure—making Zilliz the first managed vector database provider to support BYOC on all three hyperscale clouds.
In a market crowded with vector database startups and AI infrastructure vendors, that’s more than a checkbox update. It’s a strategic signal about where enterprise AI infrastructure is heading.
Vector databases are foundational to modern AI workloads—from semantic search and recommendation engines to retrieval-augmented generation (RAG) pipelines. As enterprises scale generative AI pilots into production, they need search infrastructure that can index and query embeddings efficiently.
But they also need to meet strict compliance, governance, and data residency requirements—especially in regulated sectors like financial services, healthcare, and the public sector.
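For readers unfamiliar with the category, the core operation a vector database performs can be sketched in a few lines: store embeddings and return the ones nearest to a query vector. Real systems (Milvus included) add approximate-nearest-neighbor indexes, metadata filtering, and persistence at scale; the three-dimensional vectors below are toy values standing in for real embeddings.

```python
# Minimal sketch of the core of vector search: rank stored embeddings
# by cosine similarity to a query embedding. Toy data, not a real index.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

def search(query_vec, k=2):
    """Return the k stored documents most similar to the query embedding."""
    return sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)[:k]

top = search([0.85, 0.15, 0.05])
```

A RAG pipeline wraps exactly this retrieval step between "embed the user's question" and "feed the matches to the language model"—which is why the database's latency, governance, and residency properties end up mattering so much.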
Traditionally, managed services require customers to move data into the vendor’s cloud account. That simplifies operations but complicates compliance. Self-hosted deployments solve the data control problem but demand DevOps muscle many teams would rather apply elsewhere.
Zilliz Cloud BYOC attempts to split the difference. Instead of hosting customer data in its own environment, Zilliz deploys a fully managed vector database directly inside the customer’s cloud account. The company manages the service; the customer retains full control over the infrastructure and data perimeter.
“The AI infrastructure landscape is at an inflection point,” said Charles Xie, founder and CEO of Zilliz, in a statement accompanying the launch. “Enterprises need platforms that respect their security, compliance, and multi-cloud realities.”
Translation: speed without surrender.
The Azure launch is arguably the most strategic piece of the rollout.
Zilliz previously expanded BYOC support from AWS to GCP. Adding Azure closes the loop for enterprises standardized on Microsoft’s ecosystem—particularly those building AI apps alongside Azure-native services like Azure OpenAI and other components of the Azure AI stack.
Keeping vector search infrastructure inside the same cloud environment reduces cross-cloud data movement, simplifies billing, and avoids potential latency overhead. For enterprises running large-scale AI workloads, that’s not just a technical detail—it’s a cost and governance consideration.
Azure customers can also align BYOC deployments with existing enterprise agreements, reserved capacity commitments, and internal governance frameworks. That means procurement, compliance, and infrastructure teams don’t have to carve out exceptions to adopt a new AI component.
In short: fewer internal battles, faster production timelines.
Zilliz is also emphasizing operational maturity. With an official Zilliz Cloud Terraform Provider, enterprises can deploy and manage BYOC environments through infrastructure-as-code workflows—integrating vector search into existing CI/CD pipelines.
That’s critical for organizations operating at scale. AI infrastructure that can’t plug into established DevOps practices tends to stall in proof-of-concept purgatory.
The broader message here is that vector databases are no longer experimental add-ons. They’re becoming core infrastructure—expected to meet the same reliability, observability, and automation standards as traditional databases.
Zilliz enters a competitive field that includes managed-first vector database players and open-source alternatives.
The company is explicitly positioning BYOC as a differentiator against fully managed SaaS models from vendors like Pinecone, as well as open-source-centric players such as Qdrant and Weaviate. It also highlights migration paths from legacy systems like Elasticsearch, PostgreSQL, and OpenSearch.
That migration story matters. Many enterprises initially experimented with vector search inside general-purpose databases or search engines. As AI workloads grow, those stopgap solutions often hit scaling or performance limits, prompting a move to purpose-built vector infrastructure.
By offering BYOC across all major clouds, Zilliz is effectively telling CIOs: you don’t need to re-platform your cloud strategy to standardize on our database.
The implications extend beyond vendor bragging rights.
As generative AI moves from prototype to production, infrastructure decisions are becoming more strategic. Data sovereignty regulations are tightening globally. Boards are asking pointed questions about AI governance. And multi-cloud strategies remain common among large enterprises.
In that environment, the ability to:
Keep data inside a specific jurisdiction
Align with existing cloud billing and contracts
Avoid cross-cloud egress fees
Standardize vector infrastructure across teams
…isn’t a nice-to-have. It’s a prerequisite.
BYOC models reflect a broader shift in enterprise software: customers want managed convenience without losing architectural control.
With Azure support now generally available, Zilliz Cloud BYOC spans AWS, GCP, and Azure. Every deployment includes the full Zilliz Cloud feature set built on Milvus, along with tooling for migration from competing vector databases and search platforms.
For enterprises under pressure to accelerate AI adoption—without triggering compliance alarms or ballooning infrastructure complexity—that combination may prove compelling.
The next competitive frontier won’t just be performance benchmarks. It will be how seamlessly vector infrastructure fits into the messy, multi-cloud reality of enterprise IT.
Zilliz just made a clear bet on where that future is headed.
marketing 4 Mar 2026
Influencer marketing has officially left its “brand awareness only” era.
QYOU Media Inc. has launched QYOU Amplify, a new business unit aimed at transforming creator-led campaigns into performance-driven media engines. The move formalizes the company’s paid media, advanced targeting, and analytics capabilities—signaling that influencer marketing is now expected to compete with search, social, and programmatic on measurable ROI.
For brands increasingly shifting budgets into social and creator-aligned channels, the message is clear: engagement metrics alone won’t cut it.
As marketers embed paid distribution directly into creator campaigns, the playbook is evolving. Instead of relying purely on organic reach, brands are boosting and optimizing creator content through paid media to extend reach, refine targeting, and drive conversion.
QYOU Amplify is designed to support that shift.
Rather than treating influencer campaigns as standalone bursts of content, the unit integrates creator strategy, creative production, and media distribution from day one. That structure allows QYOU to identify breakout content early, redirect budget in real time, and scale what’s performing—before the cultural moment fades.
“Creator marketing has matured, and brands expect it to perform like every other core media channel,” said Glenn Ginsburg, president of QYOU, in the company’s announcement.
The subtext: if influencer marketing wants a permanent seat at the budget table, it has to deliver performance metrics that look a lot like paid social or digital video.
The bigger ambition behind QYOU Amplify is repositioning creator content as a full-funnel growth driver—supporting awareness, consideration, and conversion rather than just top-of-funnel buzz.
That’s a notable evolution. Historically, influencer campaigns were often judged on impressions, likes, and shares. Now, brands are asking tougher questions: Did it drive traffic? Did it convert? What was the blended CAC?
QYOU says its model uses precision targeting and continuous optimization to answer those questions in real time. By integrating certified media buyers and structured campaign measurement into its creator programs, the company aims to move from reporting on performance to actively steering it.
In practice, that means:
Boosting high-performing creator posts quickly
Pivoting creative or audience targeting mid-flight
Reallocating spend across platforms as signals emerge
Extending the lifecycle of culturally resonant content
It’s a data-led approach designed to move at what marketers often call “the speed of culture,” without sacrificing accountability.
The launch builds on QYOU’s recent designation as a Badged TikTok Agency Partner—a status tied to platform expertise and campaign performance on TikTok.
That alignment matters. As short-form video continues to dominate social consumption, brands are investing heavily in TikTok-native creator campaigns. But the platform’s algorithmic volatility means content can surge—or stall—quickly.
Having paid media capabilities embedded within the same team that works directly with creators gives QYOU a feedback loop that pure-play media agencies may lack. When performance signals surface, they can be acted on immediately.
To lead the new unit, QYOU has promoted Peggy Lin to General Manager of QYOU Amplify. Previously VP of Account Management and Planning, Lin will oversee the integrated paid practice and help drive the evolution of influencer marketing from an engagement play to a measurable ROI channel.
Her expanded remit underscores how central paid distribution has become to the company’s strategy.
“The platforms evolve daily, and so do client expectations,” Lin said in the announcement, pointing to the need for agility in deciding when to boost, pivot, or reallocate spend.
That agility is increasingly non-negotiable. Algorithms shift. Trends flare and fade. Creative fatigue sets in. In that environment, static campaign planning can quickly become sunk cost.
QYOU Amplify enters a market where creative agencies, influencer shops, and performance media firms are converging.
Large holding companies have been integrating influencer capabilities into performance media stacks. Meanwhile, influencer-native agencies are racing to build out data science and paid media arms to defend their budgets.
The result: a blending of disciplines that were once siloed.
QYOU’s differentiation lies in claiming integration from inception—creator strategy, creative development, and media execution operating under a unified model rather than stitched together post-campaign.
For mid-market and growth-focused brands, that may offer a simpler procurement and accountability structure. One partner. One performance narrative.
The launch of QYOU Amplify reflects a broader industry truth: influencer marketing is no longer experimental spend.
CMOs face mounting pressure to justify every dollar. As social commerce expands and attribution models improve, creator content is being evaluated against hard metrics—ROAS, CPA, LTV—not just sentiment and share of voice.
In that landscape, agencies that can combine cultural fluency with media math will likely win more budget share.
QYOU Amplify positions the company squarely in that camp—promising both cultural relevance and commercial impact within a single operating model.
Whether that integrated approach consistently delivers measurable lift will ultimately determine its staying power. But one thing is clear: influencer marketing’s era of soft metrics is ending.
And performance is the new creative brief.
cloud technology 4 Mar 2026
At this year’s Mobile World Congress, Tencent didn’t unveil a flashy consumer app. Instead, it made a quieter—but arguably more strategic—move: expanding its European cloud footprint.
Tencent Cloud announced the opening of a new availability zone in Frankfurt, bringing its total to three in Germany. The new zone, set to open to customers in Q2, strengthens the company’s infrastructure in Europe at a time when demand for AI and mission-critical cloud services is accelerating.
For a Chinese tech giant navigating global markets, infrastructure expansion in Germany is more than a capacity upgrade—it’s a signal of long-term commitment to Europe’s regulatory, performance, and data sovereignty requirements.
Frankfurt is already one of Europe’s most important cloud hubs, home to dense interconnection points and regional data centers operated by hyperscalers and regional players alike. By adding a third availability zone there, Tencent Cloud is boosting redundancy, resilience, and scalability for European customers.
The move also directly addresses a core enterprise concern: local data residency.
With data hosted in Germany and operated in alignment with local and EU regulations, the new zone is designed to reassure customers wary of cross-border data exposure. In a post-GDPR environment, that’s table stakes.
But Tencent is pairing infrastructure expansion with something more ambitious: AI-native services targeted at Europe’s fast-growing creator economy and fintech sector.
One of the headline services riding on the new capacity is Tencent Cloud’s HY 3D AI creation engine, which allows companies to generate high-quality 3D assets from multimodal inputs—text descriptions, image references, and more—in minutes.
Among its European adopters:
3D AI Studio in Germany, which is co-developing scalable generative 3D solutions to meet rising demand from designers and developers.
CGTrader, one of the world’s largest 3D model marketplaces, planning to introduce generative AI workflows for its global creator base.
Maxon, the German software developer known for Cinema 4D, which will integrate Tencent’s HY 3D model engine into its Academy Award-winning Cinema 4D tool on iPad and desktop, with availability planned for late 2026.
That last integration is particularly notable. Cinema 4D has long been a staple in motion graphics and VFX. Embedding AI-powered 3D generation directly into a production-grade tool suggests generative AI is moving from experimental add-on to embedded workflow layer.
For Europe’s creative and industrial design sectors, this could significantly compress production timelines—turning what once required hours of modeling into minutes of AI-assisted iteration.
Tencent Cloud’s expansion isn’t limited to creators.
The company has built the first European cloud-based production platform for iyzico BV, a leading Turkish payment provider. The platform is designed to support EU-wide scaling of its virtual payment services and currently handles transactions for more than 185,000 merchants.
Payments infrastructure is unforgiving. High availability, regulatory compliance, and security are non-negotiable. By anchoring iyzico’s European production platform in Germany, Tencent positions itself as a credible alternative to traditional hyperscalers for fintech workloads.
It’s also a strategic wedge into a sector where cloud providers often win long-term, sticky contracts.
Tencent Cloud is not operating in a vacuum. Europe is a battleground dominated by AWS, Microsoft Azure, and Google Cloud, alongside regional players emphasizing sovereignty and compliance.
Rather than attempting to out-hyperscale the hyperscalers, Tencent appears to be carving out a differentiated narrative: AI-native services combined with global ecosystem experience.
Fred Sun, General Manager of Tencent Cloud Europe, emphasized the company’s track record integrating AI across Weixin, internationally known as WeChat—one of the world’s largest digital ecosystems. The implication: Tencent doesn’t just build AI tools; it operates them at massive consumer scale.
Whether that experience translates seamlessly to Europe’s regulatory and competitive environment remains to be seen. But it provides a credibility anchor as enterprises assess AI partners.
Tencent also highlighted Palm AI, a contactless biometric technology already deployed in Asia across transport, retail, and enterprise settings. While not yet a mainstream European deployment, the reference signals Tencent’s intent to export AI use cases proven in Asian markets to European customers seeking innovation.
In parallel, Tencent Cloud is positioning itself as a bridge to Asia for European firms—offering infrastructure, technical expertise, and regional market knowledge to support international expansion.
For mid-sized European companies eyeing Asian growth, that two-way channel could become a meaningful differentiator.
The Frankfurt expansion follows broader global growth. Tencent Cloud now serves hundreds of thousands of enterprises across 30 industries and recently launched two availability zones in Saudi Arabia, further extending its international network.
In a world where cloud infrastructure decisions increasingly intersect with geopolitics, compliance, and AI strategy, scale and geographic diversity matter.
The real story behind Tencent’s Frankfurt expansion isn’t just capacity—it’s convergence.
AI workloads demand high-performance infrastructure. Regulators demand local control. Enterprises demand resilience. And developers demand integrated tools that move from experimentation to production seamlessly.
By expanding German infrastructure while pushing AI engines into mainstream creative and fintech workflows, Tencent Cloud is aligning itself with that convergence.
The question isn’t whether Europe needs more cloud capacity. It’s whether Tencent can convert infrastructure presence into long-term enterprise trust.
With three availability zones now in Germany and AI partnerships taking shape, the company has made its move. The European market will decide how far it goes.
marketing 4 Mar 2026
Retail investing has no shortage of noise. What it often lacks is context.
That’s the gap Benzinga and Bloom are aiming to close through a new data partnership announced this week. Under the agreement, Bloom will embed Benzinga’s real-time educational market intelligence directly into its investing app—layering professional-grade explanations over market activity for everyday investors.
The goal isn’t to push faster trades. It’s to help users understand why markets move in the first place.
Bloom now integrates two of Benzinga’s flagship tools:
Benzinga Why Is It Moving, which provides real-time explanations behind stock and broader market price movements.
Benzinga Earnings Calendar, which tracks company reporting dates and key performance indicators (KPIs), offering insight into how earnings events affect stock prices.
By embedding these tools natively into its app, Bloom is attempting something many fintech platforms struggle with: making education contextual rather than separate.
Instead of sending users to external blogs, YouTube explainers, or Reddit threads, Bloom places professional-grade explanations at the point of decision-making.
“Financial education today is scattered,” said Jae Hwang, CEO of Bloom, in the announcement. “We built Bloom so learning and investing happen in the same place, at the same moment.”
That framing highlights a broader industry shift. As retail investors mature in the wake of the post-2020 trading boom, platforms are recalibrating away from pure speculation and toward sustainable engagement models built around trust and transparency.
Bloom’s positioning centers on long-term wealth building and institutional visibility. Rather than gamifying trades or leaning into short-term volatility, the platform emphasizes how professional investors allocate capital, diversify portfolios, and manage risk across economic cycles.
That’s a notable pivot in a market where copy trading often skews toward high-frequency moves and headline-driven trades.
Bloom’s institutional copy trading model is designed less as “follow this hot stock” and more as “observe how disciplined portfolios are structured and adjusted over time.” With Benzinga’s real-time market intelligence layered in, users gain context around earnings cycles, macroeconomic shifts, and volatility events.
The combination is meant to demystify professional investing—making strategy, not speculation, the central narrative.
For Benzinga, the deal reflects a continued expansion of its data licensing footprint beyond traditional media distribution.
Founded on the premise of democratizing access to market-moving information, Benzinga has increasingly positioned itself as a backend data provider for fintech platforms looking to enrich user experience without building editorial and market intelligence teams from scratch.
“Benzinga was founded on the belief that everyone should be able to invest and have access to the same information as professionals,” said Andrew Lebbos, SVP of Data Licensing at Benzinga.
By plugging directly into Bloom’s user workflows, Benzinga strengthens its role not just as a news brand, but as infrastructure for contextual finance.
The partnership arrives at a pivotal moment for retail investing platforms.
After the pandemic-era surge in trading activity, many fintechs are grappling with user churn, regulatory scrutiny, and pressure to demonstrate responsible engagement. Encouraging informed, long-term investing may prove more sustainable than chasing short-lived trading frenzies.
Bloom’s differentiation lies in combining institutional copy trading with Benzinga’s real-time, contextual market intelligence.
That blend could appeal to younger investors seeking mentorship-like insight without paying for traditional wealth management services.
At the same time, it raises the bar for other platforms. Basic price charts and social feeds increasingly feel insufficient in a market where users expect richer context and credible data sources.
The broader implication is philosophical as much as technical.
Retail investing is evolving from event-driven speculation toward structured learning environments. Platforms that can merge content, data, and execution into a unified experience may have an advantage in building long-term trust.
By integrating Benzinga’s “why” into Bloom’s “how,” the two companies are betting that understanding the reasoning behind market moves fosters better investor behavior than reacting to price swings alone.
If successful, the model could signal a shift in how fintech platforms design their user journeys—from chasing clicks to cultivating comprehension.
In a market saturated with signals, context may be the ultimate differentiator.
advertising 4 Mar 2026
Connected TV has grown up fast. Now advertisers want it to behave like the rest of their performance media stack.
That’s the pitch behind a new integration between Smartly and Amazon DSP, announced today. The partnership allows brands to extend Smartly-managed video campaigns into Amazon’s premium CTV inventory—including Prime Video and Fire TV—as well as third-party publisher supply.
The integration is available globally, with additional features slated for rollout later in 2026.
Streaming ad spend is accelerating. Nearly 70% of marketers plan to increase CTV budgets over the next year, according to industry surveys. But with those rising investments comes higher scrutiny.
Advertisers no longer see CTV as an experimental reach play. They want personalization, agility, and measurable outcomes—standards already common in social and programmatic channels.
Historically, CTV has lagged in creative flexibility and real-time optimization. Production bottlenecks, siloed workflows, and fragmented reporting have made it harder to treat streaming like a true performance channel.
Smartly’s integration aims to close that gap.
The partnership brings Amazon streaming inventory directly into Smartly’s existing platform through three main pillars:
1. AI-Powered Creative Optimization
Smartly enables advertisers to personalize and optimize video creative across channels, extending AI-driven dynamic creative capabilities from social into streaming environments. The idea is to eliminate the traditional CTV production bottleneck that slows activation.
2. Unified Campaign Management
Advertisers can create, manage, and optimize Amazon DSP campaigns inside the same Smartly workflow used for social advertising. No separate platform hopping. No disconnected dashboards.
3. Cross-Channel Measurement and Budget Agility
The integration supports real-time performance visibility across social and CTV, enabling incremental reach analysis and faster budget reallocation based on live performance insights.
In practical terms, that means a high-performing social video can be adapted and deployed to Prime Video inventory quickly—without rebuilding creative or restructuring the campaign from scratch.
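The "budget agility" pillar boils down to a feedback loop: read live performance signals per channel, then shift spend toward what is converting efficiently. The sketch below is purely illustrative—not Smartly's actual optimization logic—using a naive proportional rule and hypothetical numbers, just to show the kind of reallocation the integration is meant to make faster.

```python
# Purely illustrative (not Smartly's algorithm): reallocate a budget
# across channels in proportion to live conversions-per-dollar.

def reallocate(budget: float, signals: dict) -> dict:
    """Split `budget` across channels proportionally to conversion efficiency."""
    efficiency = {ch: s["conversions"] / s["spend"] for ch, s in signals.items()}
    total = sum(efficiency.values())
    return {ch: round(budget * e / total, 2) for ch, e in efficiency.items()}

live = {
    "social":      {"spend": 10_000, "conversions": 300},  # hypothetical signals
    "prime_video": {"spend": 10_000, "conversions": 500},
}
plan = reallocate(20_000, live)  # next period's split, weighted by efficiency
```

A production system would layer in incrementality estimates, pacing constraints, and minimum-spend floors, but the shape of the loop—measure cross-channel, reweight, redeploy—is the same.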
Access to Amazon’s premium streaming supply is a key component of the deal. Prime Video and Fire TV represent some of the most coveted ad-supported streaming real estate globally, offering both scale and first-party commerce signals.
For brands already advertising on Amazon’s retail and DSP ecosystem, the integration adds a performance-oriented CTV layer without introducing additional operational complexity.
“Driving impactful business outcomes across channels is key for advertisers,” said Erin McGee, Director of Partner and Advertiser Growth Marketing at Amazon Ads. The integration, she noted, allows Smartly advertisers to extend proven social creative into premium streaming inventory “in a fraction of the time.”
Speed is more than convenience—it’s competitive leverage in a media environment where trends and performance signals evolve daily.
Smartly frames the move as part of a broader media convergence.
“As CTV matures, entertainment, commerce, and social are merging into one experience,” said Melissa Yang, SVP of Ecosystems & AI Applications at Smartly.
That convergence is reshaping expectations. Marketers now expect the same creative intelligence, personalization, and optimization in CTV that they rely on in paid social.
But streaming platforms have traditionally been walled gardens with limited creative iteration and slower feedback loops. By integrating directly with Amazon DSP, Smartly is effectively positioning itself as the connective tissue between social-style creative agility and premium streaming scale.
The CTV landscape is heating up.
Major DSPs, retail media networks, and adtech platforms are racing to position streaming as a full-funnel channel rather than a digital version of linear TV. Measurement improvements, shoppable ads, and retail data integration are accelerating that shift.
For Smartly, which built its reputation on creative automation and cross-channel execution, expanding deeper into CTV is a logical evolution. The company is betting that brands want one environment where creative insights, activation, and performance data are unified.
That unified model could prove especially valuable for global brands managing campaigns across social, retail media, and streaming—where fragmentation often leads to duplicated effort and inconsistent measurement.
The integration offers several potential advantages:
Faster CTV activation using existing social creative
AI-driven personalization adapted for streaming environments
Real-time cross-channel performance insights
Streamlined budget reallocation based on incremental reach
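To make the last two advantages concrete, here is a minimal sketch of what incremental-reach-based budget reallocation could look like. This is illustrative only: the channel names, user IDs, and proportional-split rule are assumptions for the example, not Smartly or Amazon DSP logic.

```python
# Hypothetical sketch: shift budget toward channels that add
# incremental (unduplicated) reach. All data here is made up.

def incremental_reach(channel_audiences: dict[str, set]) -> dict[str, int]:
    """Count the users each channel reaches that no other channel reaches."""
    incremental = {}
    for name, audience in channel_audiences.items():
        others = set().union(*(a for n, a in channel_audiences.items() if n != name))
        incremental[name] = len(audience - others)
    return incremental

def reallocate(budget: float, incremental: dict[str, int]) -> dict[str, float]:
    """Split a budget proportionally to each channel's incremental reach."""
    total = sum(incremental.values()) or 1
    return {name: budget * count / total for name, count in incremental.items()}

audiences = {
    "social": {1, 2, 3, 4, 5},
    "ctv":    {4, 5, 6, 7, 8, 9},
}
inc = incremental_reach(audiences)   # social adds 3 unique users, ctv adds 4
plan = reallocate(700.0, inc)
```

Real systems would use probabilistic reach estimates rather than exact ID sets, but the shape of the decision, measure unduplicated reach, then rebalance spend, is the same.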
As streaming budgets grow, marketers face mounting pressure to justify ROI. Tools that make CTV measurable, agile, and creatively flexible bring the channel closer to the performance standards long expected from digital advertising.
The bigger question is whether advertisers will treat CTV as just another line item—or as a fully integrated part of their omnichannel growth engine.
Smartly’s integration with Amazon DSP suggests the industry is leaning toward the latter.
Get in touch with our MarTech Experts.
artificial intelligence 4 Mar 2026
Marketers have spent the last decade stitching together audiences, dashboards, and media plans across disconnected tools. LiveRamp thinks AI agents can finally do that stitching automatically.
LiveRamp (NYSE: RAMP) has introduced a new suite of agentic AI capabilities designed to transform how brands plan, execute, measure, and optimize campaigns. The update includes agent-powered access to the LiveRamp platform, enabling specialized AI agents to autonomously collaborate with ecosystem partners inside a governed environment.
In short: what marketers currently do manually—building audiences, running experiments, measuring cross-media performance—AI agents can now handle at machine speed.
LiveRamp’s pitch is straightforward. Marketing execution is still too manual and fragmented. Even in sophisticated enterprises, audience creation, measurement, and optimization often involve multiple teams, exports, and reconciled reports.
“We’re making it possible for AI agents to do what marketers have been doing manually — build audiences, measure cross-media performance, and optimize spend — but faster and within the governed environment our customers already trust,” said Matt Karasick, Chief Product Officer at LiveRamp.

The company says customers can activate these capabilities in alignment with their internal AI policies and the usage rules of the underlying AI providers—an increasingly important consideration as enterprises formalize AI governance frameworks.
The first wave of live agent integrations includes:
Newton Research, which enables marketers to query LiveRamp’s Cross-Media Intelligence platform using natural language to unlock instant measurement insights.
SemantIQ, which allows health and life sciences marketers to build and activate healthcare provider audiences directly from the LiveRamp Clean Room.
These agents operate within LiveRamp’s identity and data collaboration framework, meaning marketers don’t have to move sensitive data outside controlled environments to benefit from AI automation.
John Hoctor, CEO and co-founder of Newton Research, framed the collaboration as a way to translate analytics into “quantifiable performance increases” through specialized intelligent agents.
LiveRamp says additional agent partnerships across audience planning, segmentation, optimization, and measurement categories are in development.
The real differentiator in LiveRamp’s agentic strategy may not be the agents themselves—but the identity infrastructure underneath them.
LiveRamp has long positioned its identity graph as the connective tissue across media channels. Now, that foundation becomes the operating system for AI agents.
Among the new capabilities:
Enhanced AI-powered lookalike modeling leveraging first-, second-, and third-party data
The ability to apply a single, identity-powered control group across channels for consistent performance measurement
Faster experimentation management with scalable, agent-assisted execution
This unified control group approach addresses a long-standing marketer pain point: inconsistent measurement across platforms. If AI agents can standardize experimentation frameworks across surfaces, cross-channel incrementality becomes easier to measure—and defend in budget conversations.
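The mechanics of a single shared control group are worth sketching. In the toy example below, one identity-resolved holdout is reused as the baseline for every channel, so each channel's lift is comparable. The user IDs and conversion data are invented for illustration; this is not LiveRamp's implementation.

```python
# Illustrative sketch of incrementality measurement against one shared
# holdout (control) group applied across channels. All data is made up.

def conversion_rate(users: set, converters: set) -> float:
    """Fraction of a user group that converted."""
    return len(users & converters) / len(users) if users else 0.0

def incremental_lift(exposed: set, control: set, converters: set) -> float:
    """Lift = exposed conversion rate minus control conversion rate."""
    return conversion_rate(exposed, converters) - conversion_rate(control, converters)

control = {100, 101, 102, 103}     # one holdout, never exposed on any channel
converters = {1, 2, 101}           # users who converted

exposed_social = {1, 2, 3, 4}
exposed_ctv = {1, 5, 6, 7}

# Because both lifts share the same baseline, they can be compared
# directly in budget conversations.
lift_social = incremental_lift(exposed_social, control, converters)
lift_ctv = incremental_lift(exposed_ctv, control, converters)
```

The alternative, a separate holdout per platform, produces lift numbers measured against different baselines, which is exactly the inconsistency the unified approach is meant to remove.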
Lookalike audiences are hardly new. But marketers often struggle with data fragmentation, workflow complexity, and activation hurdles.
Ananda Chakravarty, Research VP for Retail Insights at IDC, noted that friction typically stems from three areas: data access, workflow integration, and connectivity.
LiveRamp’s update attempts to address all three—sourcing from multiple data tiers, adapting to marketer workflows, and ensuring seamless activation across partners.
For retail media networks and large advertisers, that could streamline precision targeting without adding operational burden.
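For readers unfamiliar with the mechanics, lookalike modeling can be sketched very simply: profile a seed audience, then score candidates by similarity. The centroid-plus-cosine approach below is one common baseline, chosen here for illustration; the feature vectors and threshold are assumptions, and production systems use far richer signals and models.

```python
# Minimal lookalike sketch: score candidates by cosine similarity to the
# seed audience's average feature vector. All values are illustrative.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    """Component-wise average of the seed vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def lookalikes(seed, candidates, threshold=0.9):
    """Return candidate IDs whose features resemble the seed centroid."""
    c = centroid(seed)
    return [uid for uid, vec in candidates.items() if cosine(c, vec) >= threshold]

seed = [[1.0, 0.0, 1.0], [0.8, 0.2, 1.0]]              # known high-value users
candidates = {"u1": [0.9, 0.1, 1.0], "u2": [0.0, 1.0, 0.0]}
matches = lookalikes(seed, candidates)
```

The fragmentation problem the article describes shows up in where those feature vectors come from: assembling them across first-, second-, and third-party sources is typically harder than the scoring itself.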
Austin Leonard of DG Media Network and Thomas Atkins of MGM Resorts International both highlighted the role of identity and data security in building higher-quality AI models—underscoring how privacy-safe collaboration is becoming foundational rather than optional.
LiveRamp’s move aligns with a broader industry shift toward “agentic” systems—AI tools that don’t just generate insights but take action autonomously.
Instead of dashboards requiring human interpretation, agentic systems can:
Build and refine audience segments
Launch and adjust experiments
Reallocate media budgets
Generate performance summaries in natural language
The promise is substantial performance gains through automation. The risk, of course, lies in governance and transparency—especially when agents operate across premium data environments.
LiveRamp is betting that its clean room architecture and identity controls make it a safer home for AI-driven execution than fragmented adtech stacks.
As retail media networks, DSPs, and data clean rooms integrate AI layers into their platforms, the competition is shifting from raw data access to intelligent orchestration.
If agents can operate across LiveRamp’s partner ecosystem with governed permissions, the platform could evolve from a data collaboration hub into an autonomous marketing command center.
For enterprise marketers, the value proposition is clear: fewer manual handoffs, faster optimization cycles, and consistent cross-channel measurement—without compromising privacy compliance.
Whether agents fully replace traditional workflows remains to be seen. But the trajectory is clear.
Marketing is moving from assisted intelligence to autonomous execution. LiveRamp just handed the keys to the agents.
artificial intelligence 4 Mar 2026
The era of AI copilots suggesting code snippets may be ending. Postman wants AI to do the work instead.
Postman has introduced Agent Mode, a new AI-native assistant embedded directly inside its API collaboration platform. Unlike traditional coding copilots, Agent Mode doesn’t just suggest—it executes. Developers can describe what they want in natural language, and the agent designs, tests, documents, and monitors APIs accordingly.
It’s a shift from code-first tooling to intent-first execution.
AI-powered developer tools have surged over the past two years. Studies from firms like IDC and McKinsey suggest generative AI can significantly accelerate development cycles—reducing manual testing, improving code quality, and potentially doubling task completion speed.
Most of those tools, however, function as copilots. They suggest snippets, refactor code, or autocomplete functions. The developer still orchestrates the workflow.
Agent Mode goes further.
According to Postman, the assistant is connected to live APIs, understands context and state, and can execute multi-step flows across collections, tests, environments, and documentation. That means instead of asking, “How do I write this test?” developers can say, “Validate this API response schema and simulate edge cases,” and the agent handles the implementation.
“Agent Mode signifies a major shift in how developers interact with tools, APIs, and the systems they build,” said Abhijit Kane, co-founder of Postman. He described the launch as the first combination of enterprise-grade API tooling with an intelligent agent capable of delivering production-grade workflows on demand.
In practical terms, that’s an ambitious claim: turning weeks of manual API lifecycle work into hours.
Agent Mode operates as a conversational interface layered over Postman’s existing capabilities. Through natural language, developers can:
Create and manage collections and environments with proper variable handling and reusable configurations
Generate and run automated tests, including schema validation and edge-case simulation
Produce detailed API documentation once workflows are validated
Set up monitoring and observability, including health checks
Build reusable multi-step agents that automate recurring engineering or business tasks
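To ground the testing item above, here is the kind of check an agent might generate from a request like "validate this API response schema and simulate edge cases." This is a hypothetical illustration in Python, not Postman's actual output (Postman tests are typically JavaScript); the schema and cases are invented.

```python
# Hypothetical illustration: response-schema validation plus simulated
# edge cases, of the sort an agent could generate from a plain-language
# request. The schema, field names, and cases are assumptions.

def validate(response: dict, schema: dict) -> list:
    """Return a list of schema violations (an empty list means valid)."""
    errors = []
    for field, expected_type in schema.items():
        if field not in response:
            errors.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

schema = {"id": int, "email": str, "active": bool}

# Happy path plus simulated edge cases: a missing field and a wrong type.
cases = {
    "ok":         {"id": 1, "email": "a@example.com", "active": True},
    "missing":    {"id": 2, "active": False},
    "wrong_type": {"id": "3", "email": "b@example.com", "active": True},
}
results = {name: validate(resp, schema) for name, resp in cases.items()}
```

The point of the example is the division of labor: the developer states the intent, and the agent is responsible for enumerating and implementing the edge cases.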
Everything created by the agent lives inside Postman’s collaborative workspace, making outputs immediately accessible to developers, QA teams, product managers, and even non-technical stakeholders.
That cross-functional accessibility is part of the broader vision: expanding the pool of “builders” inside organizations by reducing reliance on manual developer intervention.
One of the more forward-looking aspects of Agent Mode is its roadmap for reusable agents.
Developers will be able to construct multi-step agents tailored to recurring API tasks—say, spinning up standardized testing workflows or generating compliance-ready documentation—and store them in shared workspaces. These agents can then be reused across teams and projects, compounding productivity gains over time.
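One way to picture a reusable agent is as a named, ordered pipeline of steps that can be stored and replayed. The sketch below is purely illustrative: the `Agent` class, step names, and context dictionary are assumptions for the example, not Postman's API.

```python
# Illustrative only: representing a reusable multi-step agent as data,
# so it can live in a shared workspace and be rerun by any team.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    steps: list = field(default_factory=list)

    def run(self, context: dict) -> dict:
        """Execute each step in order, threading a shared context through."""
        for step in self.steps:
            context = step(context)
        return context

# Two hypothetical steps for a standardized testing workflow.
def create_collection(ctx):
    return {**ctx, "collection": f"{ctx['api']}-tests"}

def run_tests(ctx):
    return {**ctx, "passed": True}

testing_agent = Agent("standard-testing", [create_collection, run_tests])
result = testing_agent.run({"api": "billing"})
```

Because the agent is data rather than ad hoc scripting, the same definition can be applied to a different API by changing only the input context, which is where the compounding productivity claim comes from.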
If executed well, that capability could move Postman from being a collaboration hub to becoming an automation layer for API-driven businesses.
Postman operates in a crowded developer tooling ecosystem, competing with API design and testing platforms as well as integrated development environments layering in AI assistance.
What differentiates Agent Mode is its embedded position within a platform reportedly used by more than 40 million developers worldwide. Instead of adding AI to a narrow slice of the workflow, Postman is inserting an execution agent across the entire API lifecycle—from design to monitoring.
The broader industry trend is clear: tools are evolving from passive environments to active systems.
In the same way that marketing platforms are adopting “agentic” AI to automate audience building and optimization, developer platforms are racing to embed agents that can autonomously manage complex workflows.
The stakes are high. APIs are now foundational to digital products, SaaS platforms, fintech ecosystems, and AI applications themselves. Any reduction in friction across the API lifecycle translates directly into faster product releases and quicker iteration cycles.
Speed-to-market is no longer a vanity metric. It influences revenue growth, competitive positioning, and customer satisfaction.
If Agent Mode can genuinely compress weeks of prototyping, testing, and documentation into days, it may materially alter how teams allocate resources. Instead of spending cycles wiring together collections and test cases, developers can focus on architecture and innovation.
At the same time, enterprise adoption will hinge on governance. AI agents that execute real actions inside production workflows must operate within clear policy boundaries and audit trails. Postman’s enterprise-grade positioning suggests it is targeting organizations that demand both speed and control.
Agent Mode reflects a deeper philosophical change in software development.
Rather than instructing tools step by step, developers increasingly describe outcomes. The system interprets intent, orchestrates the workflow, and delivers results.
It’s a move from syntax to semantics—from “how” to “what.”
If that paradigm holds, API development could become less about stitching together endpoints manually and more about defining business objectives in plain language.
Postman is betting that the future of APIs isn’t just collaborative—it’s autonomous.