artificial intelligence 25 Mar 2026
Idomoo today announced Strata, a generative AI foundation model that reimagines how brands create video content. Unlike traditional AI video models that output a single flat file, Strata generates fully layered compositions—including text, animation, footage, and actors—allowing enterprises to edit and personalize videos at virtually unlimited scale.
Layered video has long been a staple in professional production tools like Adobe After Effects, but generative AI has historically only produced uneditable flat videos. Strata changes that by generating structured video blueprints with independent layers for typography, motion, animation, and synchronized audio—all while enforcing brand guidelines automatically.
“Every other AI video model generates pixels: a flat file you can’t meaningfully edit,” said Danny Kalish, cofounder and CTO of Idomoo. “Strata generates structure…solving this makes AI video genuinely usable and scalable for enterprises for the first time.”
Template-based AI video solutions often force content into rigid layouts, limiting brand compliance, personalization, and studio-level quality. Strata, by contrast, creates custom blueprints for each video, ensuring production-ready quality and flexible editing for sales videos, onboarding content, eCommerce ads, and more.
Idomoo’s Lucas AI Video Agent analyzes a company’s approved content to learn its brand voice, visual style, and motion cues. This Brand DNA—covering design, narrative, and assets—is embedded into every video Strata generates, ensuring 360-degree brand compliance across all content.
Each video blueprint is fully personalizable: text, images, footage, and more can be adjusted layer by layer. This allows brands to create real-time, hyper-personalized video experiences for millions of viewers in any language, all while preserving brand integrity.
Strata is patent pending, with early access already in use by several of Idomoo’s largest customers. The model is available now through Lucas AI Video Agent, part of Idomoo’s AI video platform.
Get in touch with our MarTech Experts.
artificial intelligence 25 Mar 2026
Diffblue today announced the general availability of its Diffblue Testing Agent, an autonomous regression test generator designed to work alongside AI coding platforms such as GitHub Copilot and Claude Code. The agent automatically produces verified unit tests across entire codebases without developer intervention, addressing a longstanding challenge in enterprise software engineering: generating comprehensive, trustworthy test coverage at scale.
AI coding agents have transformed how developers write software, but creating robust regression tests has remained labor-intensive. Developers often spend hours iterating with AI coding assistants, manually verifying output, and coaxing adequate coverage—frequently achieving suboptimal results.
Diffblue’s Testing Agent introduces an orchestration and verification layer that autonomously scopes codebases, generates tests at the method and class level, validates compilation, verifies test results, rolls back failed tests, and even prepares pull requests—all without manual intervention.
“Our benchmark data shows that the developer effort for driving even the best AI coding agents reaches unaffordable levels quickly,” said Dr. Peter Schrammel, co-founder and CTO of Diffblue. “The Diffblue Testing Agent achieves 80%+ coverage autonomously—the difference between an AI experiment and an AI-enabled engineering workforce.”
In tests across eight real-world Java projects, Diffblue delivered:
The benchmarks highlight the platform’s ability to scale efficiently across hundreds or thousands of classes in a single run, turning AI coding tools from experimental assistants into reliable engineering collaborators.
The Diffblue Testing Agent integrates with existing AI coding platforms, delegating test generation to the underlying agent while orchestrating:
Initially available for Java and Python, Diffblue plans to expand to additional platforms and software quality domains, including test quality assessment, code review automation, large-scale refactoring, and requirements-driven test generation.
By automating regression test creation, Diffblue enables engineering teams to:
The Diffblue Agents platform, founded by researchers from the University of Oxford and backed by IP Group, Albion, Parkwalk, and Citi, is now available for enterprise evaluation.
Get in touch with our MarTech Experts.
artificial intelligence 24 Mar 2026
AI-powered presentation platforms are rapidly evolving, and AiPPT.com is the latest to expand its toolkit. The company has announced an update to its built-in AI image generator with the integration of the advanced model Nano Banana 2, giving users more flexibility to create presentation visuals without leaving the editor.
The update strengthens AiPPT.com’s position as a unified AI presentation maker that merges writing, layout creation, and media generation into a single workspace. With Nano Banana 2, the platform aims to streamline the design workflow by enabling users to generate slide-ready visuals directly within the presentation environment.
In a landscape where AI tools are increasingly reshaping how professionals produce content, the move reflects a broader shift toward integrated creative workflows—particularly for marketers, educators, and business teams who rely heavily on slide-based communication.
The addition of Nano Banana 2 expands the range of image generation models already available within AiPPT.com’s ecosystem. While users previously had access to earlier versions in the Nano Banana family—including Nano Banana and Nano Banana Pro—the latest model brings improved prompt comprehension and greater visual accuracy.
That improvement matters in presentation design, where images must quickly align with slide context and narrative flow. Instead of manually searching for stock visuals or leaving the presentation editor to generate graphics elsewhere, users can now create images through short prompts that correspond directly to slide themes.
For example, educators preparing lesson slides can generate diagrams or illustrative visuals on demand. Marketing teams can produce concept graphics to support campaign storytelling. Even abstract creative scenes can be generated to enhance visual storytelling within slides.
By keeping the image creation process embedded within the editor, AiPPT.com reduces friction in the design workflow—something many presentation tools still struggle with.
AiPPT.com’s approach reflects a broader trend among productivity platforms: integrating generative AI features rather than forcing users to rely on standalone tools. Many AI-powered presentation platforms already include text generation and layout automation, but fewer offer a robust ecosystem of image models accessible directly inside the editor.
In addition to the Nano Banana models, AiPPT.com supports other well-known AI image systems, including Flux models, Imagen models, and Seedream 4.0. The availability of multiple models allows users to experiment with different visual styles and output qualities depending on their presentation needs.
This model diversity mirrors developments in other AI design platforms, where creators increasingly expect the ability to switch between generation engines rather than relying on a single system.
Nano Banana 2 isn’t arriving in isolation. The update fits into AiPPT.com’s broader strategy of positioning itself as an end-to-end AI presentation environment.
Beyond image generation, the platform allows users to build slide decks from simple prompts, uploaded documents, or images. The system analyzes the input and generates structured outlines that can be converted into full presentations.
One particularly distinctive feature is the ability to download PPT files directly from web links. Users can paste a webpage URL into the platform, which then extracts key information, generates an outline, and converts the content into an editable slide deck.
This capability addresses a common productivity challenge—transforming long-form web content into concise presentation material. For professionals who frequently turn research or blog posts into slides, that feature can significantly reduce preparation time.
Complementing its AI generation features is a substantial design library. AiPPT.com provides access to more than 200,000 presentation templates, covering a wide range of use cases including business reports, academic lectures, marketing campaigns, and corporate training materials.
Templates remain an important part of presentation workflows even in the age of generative AI. While AI can generate content and visuals, structured layouts often provide the visual consistency and branding alignment that organizations require.
The large template selection therefore acts as a foundation on which AI-generated content can be layered, combining automation with established design frameworks.
The update from AiPPT.com highlights a growing trend across the productivity software industry: the rapid convergence of generative AI and traditional office tools.
Slide presentations remain one of the most widely used formats in business and education, yet creating them has historically been time-consuming. Writers must draft content, designers must arrange layouts, and creators must source or produce visuals.
AI tools are now collapsing those steps into a single workflow. Text, layouts, and imagery can all be generated from prompts, dramatically shortening production timelines.
For marketers and communications teams in particular, the ability to generate visual storytelling assets quickly is becoming increasingly valuable as content cycles accelerate.
AiPPT.com’s latest update also signals how competitive the AI presentation space has become. Platforms are racing to offer deeper integrations between generative models and productivity features.
Some tools focus primarily on automated slide creation, while others emphasize design or AI writing. AiPPT.com appears to be positioning itself as a hybrid solution that combines all three elements—content generation, visual creation, and structured presentation design.
By integrating Nano Banana 2 and expanding its image generation ecosystem, the platform strengthens its appeal to users who want more control over presentation visuals without leaving the workspace.
As AI models continue to evolve, presentation tools are likely to incorporate increasingly advanced capabilities—from more sophisticated image generation to automated data visualization and interactive slide elements.
For now, the addition of Nano Banana 2 marks another step in AiPPT.com’s push to make AI-assisted presentation creation faster and more flexible. By embedding image generation directly inside the editor, the company is betting that the future of presentation design lies in tightly integrated creative workflows rather than fragmented toolchains.
Whether building marketing decks, educational slides, or internal reports, users are increasingly looking for platforms that can turn ideas into polished presentations with minimal friction—and updates like this suggest that AI presentation makers are moving steadily in that direction.
Get in touch with our MarTech Experts.
artificial intelligence 24 Mar 2026
At the Gartner Digital Workplace Summit, TeamViewer unveiled Tia Reporting, a new conversational AI capability inside its TeamViewer DEX platform designed to transform how IT teams access and interpret operational data. The feature enables administrators to generate real-time dashboards simply by typing natural-language prompts—eliminating the need for manual data analysis or complex business intelligence tools.
The launch also coincided with the first activation of TeamViewer’s new global brand campaign, “Fix it before they feel it,” which underscores the company’s push toward Autonomous Endpoint Management (AEM) and improved Digital Employee Experience (DEX).
Together, the announcements highlight a broader industry shift: IT operations platforms are rapidly adopting generative AI interfaces to simplify data analysis and speed up decision-making across enterprise environments.
At its core, Tia Reporting is designed to remove the friction typically associated with IT analytics. Rather than navigating complicated dashboards or submitting requests to data teams, IT professionals can ask questions in plain language—such as querying device performance trends or application reliability—and receive instantly generated dashboards in response.
These dashboards pull from TeamViewer’s proprietary Digital Employee Experience data, which includes telemetry from devices, application performance metrics, and employee experience signals. The result is a consolidated view of the digital workplace that can be explored dynamically.
Administrators can further refine insights by adjusting filters, timeframes, and visualizations through an intuitive, no-code interface. This allows teams to drill deeper into issues in real time without relying on analysts or specialized reporting teams.
According to TeamViewer, the goal is to make operational insights accessible across IT roles, not just data specialists.
One of the longstanding challenges in IT operations has been the gap between data availability and actionable insight. Many organizations collect extensive telemetry from endpoints and applications but struggle to translate that data into meaningful decisions quickly.
Tia Reporting aims to close that gap by placing real-time analytics directly into the hands of frontline IT staff.
“IT teams are accountable for outcomes they have historically struggled to measure with speed and confidence,” said Adrian Todd, Vice President of Product Management at TeamViewer. “Tia Reporting changes that dynamic. It democratizes access to insight, empowering every IT professional to create their own reports and dashboards without relying on analysts or BI teams.”
The shift toward self-service analytics reflects a wider enterprise trend. As IT environments become more complex—with remote work, distributed devices, and cloud applications—organizations increasingly need faster ways to interpret operational signals.
The new reporting capability runs on top of TeamViewer’s Digital Employee Experience data infrastructure, which aggregates signals from multiple layers of the workplace technology stack.
This includes:
By combining these datasets, the platform aims to provide a holistic perspective on the digital workplace.
For IT administrators, this unified view can help pinpoint performance bottlenecks, detect emerging issues before they escalate, and measure the overall impact of technology on workforce productivity.
The introduction of Tia Reporting aligns closely with TeamViewer’s broader strategy around Autonomous Endpoint Management, a category that emphasizes automated remediation, predictive analytics, and proactive issue prevention.
Historically, IT teams have operated in reactive mode—responding to tickets after problems affect employees. With AI-powered analytics and real-time monitoring, vendors increasingly promise a shift toward proactive operations.
TeamViewer’s messaging around the “Fix it before they feel it” campaign reflects that philosophy. The idea is straightforward: if IT systems can detect early warning signals, organizations can resolve issues before employees even notice them.
For enterprises managing thousands of devices and applications, that proactive approach could translate into significant gains in productivity and employee satisfaction.
TeamViewer’s move comes amid a broader surge of AI-driven features across IT management platforms. Vendors are racing to embed conversational interfaces, automated analytics, and predictive insights into tools that were once heavily reliant on manual reporting.
The concept of natural-language reporting—where users query operational data as if they were chatting with an assistant—is becoming particularly popular. By lowering the technical barrier to analytics, these tools allow more employees to extract insights without needing deep expertise in data visualization or query languages.
For organizations adopting Digital Employee Experience platforms, the ability to translate telemetry into actionable intelligence quickly is becoming a key differentiator.
Tia Reporting is also part of TeamViewer’s broader AI roadmap, which will continue to evolve alongside the company’s Autonomous Endpoint Management ambitions.
The company has indicated that further capabilities tied to AEM are expected later this year, suggesting that conversational analytics could play a central role in how administrators interact with endpoint management systems going forward.
If the strategy succeeds, IT teams may increasingly rely on AI assistants not just to analyze data but also to recommend or even automate remediation actions.
For IT leaders, the promise of conversational analytics goes beyond convenience. Faster insights mean faster response times, improved digital workplace performance, and better visibility into how technology impacts employee productivity.
By embedding natural-language reporting directly into its Digital Employee Experience platform, TeamViewer is betting that the future of IT operations lies in AI-driven decision-making.
Whether that vision becomes the industry norm remains to be seen, but the direction is clear: enterprise software is steadily evolving toward interfaces that allow humans to ask questions—and receive answers—in real time.
Get in touch with our MarTech Experts.
marketing 24 Mar 2026
Enterprises are embracing generative AI at a rapid pace—but many are doing so without the safeguards needed to manage its risks. That’s the central finding of a new global study released by OpenText in partnership with the Ponemon Institute.
The report, “Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI,” reveals that 52% of organizations have already fully or partially deployed generative AI, yet security governance and risk management practices are lagging far behind adoption.
The research underscores a growing tension across the enterprise AI landscape: companies are racing to integrate AI into operations, but the governance frameworks required to ensure trust, compliance, and reliability are still catching up.
As AI systems become more embedded in business workflows—and increasingly autonomous—the stakes for managing these risks are rising quickly.
According to the study, only one in five enterprises has reached what researchers consider “AI maturity.” In practical terms, that means organizations where AI-driven cybersecurity systems are fully deployed and the risks associated with those systems are properly assessed and managed.
For the majority of enterprises, that maturity remains out of reach.
Nearly 79% of organizations have not yet achieved full AI maturity in cybersecurity, indicating that while AI adoption is widespread, the operational and governance infrastructure needed to support it is still developing.
The findings reflect a broader industry reality: implementing AI technology is often faster and easier than establishing the policies, oversight structures, and risk frameworks needed to manage it responsibly.
“AI maturity isn't just about adopting AI tools—it's about doing it responsibly,” said Muhi Majzoub, EVP of Product and Engineering at OpenText. “Security and governance are foundational to getting real value from AI.”
The study highlights several major gaps in how organizations are managing AI-related risks.
Among the most striking findings:
These numbers suggest that while enterprises recognize AI’s potential benefits, many are still struggling to integrate governance practices that address risks such as bias, misinformation, or security vulnerabilities.
In fact, 58% of respondents reported that prompt or input risks—such as misleading or harmful outputs—are extremely difficult to mitigate.
User behavior also introduces new challenges. More than half of organizations surveyed reported difficulty controlling how employees interact with AI systems, particularly when it comes to the unintended spread of inaccurate or misleading information.
Beyond governance concerns, organizations are also confronting technical limitations in AI systems themselves.
Nearly two-thirds of respondents (62%) say minimizing model bias is very or extremely difficult, raising concerns about fairness, reliability, and ethical AI use.
Operational challenges further complicate deployment:
These issues directly affect how well AI can perform in cybersecurity and threat detection scenarios.
While many organizations hope AI will accelerate security operations, the study suggests those gains remain uneven.
Just 51% of respondents say AI effectively reduces the time needed to detect anomalies or emerging threats, and only 48% believe AI meaningfully improves threat detection and analysis.
In other words, the technology is promising—but far from perfect.
One of the most ambitious visions for enterprise AI involves systems that can operate autonomously—analyzing threats, making decisions, and responding without human intervention.
But the study indicates that level of independence is still a long way off.
Fewer than half of organizations surveyed (47%) say their AI models are capable of learning behavioral norms and making safe decisions autonomously.
Because of these limitations, 51% of organizations say human oversight remains essential in AI governance—particularly as cyber attackers evolve their tactics and attempt to exploit AI systems themselves.
This reliance on human supervision highlights a fundamental paradox in AI adoption: the technology promises automation and efficiency, yet still requires careful monitoring to ensure reliability.
The study also points to a deeper issue affecting enterprise AI adoption: trust.
For AI systems to be widely accepted in critical business operations, they must be transparent and explainable. Organizations need to understand not only what decisions AI makes, but why it makes them.
Without that transparency, enterprises may hesitate to rely fully on automated systems—particularly in high-risk areas such as cybersecurity or regulatory compliance.
Industry experts increasingly argue that explainability, governance frameworks, and policy-based controls must be built into AI systems from the start, rather than added later as an afterthought.
The report’s findings are based on a global survey conducted by the Ponemon Institute in November 2025.
Researchers gathered responses from 1,878 IT and security professionals across North America, Europe, Asia-Pacific, the Middle East, Africa, and Latin America. Participants represented a wide range of industries including financial services, healthcare, technology, manufacturing, and energy.
The survey included executives, engineers, security specialists, compliance professionals, and other decision-makers involved in AI and cybersecurity strategy.
This broad sample provides a global perspective on how organizations are navigating the challenges of AI adoption.
The report arrives at a critical moment for enterprise technology. Generative AI adoption has accelerated dramatically over the past two years, and many organizations are now experimenting with more advanced systems such as agentic AI—models capable of performing complex tasks with minimal human direction.
But as AI systems grow more powerful, the risks associated with them also increase.
For companies hoping to unlock the full value of AI, the message from the study is clear: adoption alone isn’t enough. Governance, security frameworks, and responsible AI policies must evolve just as quickly.
Organizations that invest early in these foundations may gain a significant competitive advantage—not only by avoiding regulatory and security pitfalls, but by building AI systems that employees and customers can trust.
As Majzoub noted, the next generation of AI leaders will likely be those that combine innovation with transparency and control.
Get in touch with our MarTech Experts.
artificial intelligence 24 Mar 2026
As enterprises race to deploy AI across their organizations, security and governance challenges are quickly becoming a critical bottleneck. To address that gap, BeyondID and Nexera have announced a strategic partnership designed to help organizations scale AI adoption while maintaining strong identity governance, compliance, and operational security.
The collaboration aims to tackle a rapidly emerging risk in enterprise AI environments: the rise of non-human identities (NHIs)—including AI agents, automated workflows, and service accounts—which often operate without the same governance frameworks applied to human users.
As companies integrate AI platforms like Microsoft Copilot, Google Gemini, and tools from OpenAI and Anthropic, these automated identities are multiplying quickly, creating a new and often poorly managed attack surface.
The BeyondID–Nexera partnership aims to ensure enterprises can deploy AI rapidly without compromising security or compliance.
The core idea behind the partnership is a layered approach to AI deployment.
Nexera focuses on the Intelligence Layer, helping organizations design, build, and operate production-grade AI systems from early strategy to ongoing managed operations.
BeyondID, meanwhile, secures the Identity and Trust Layer, governing access across AI agents, models, and automated workflows using identity-first architecture and least-privileged access principles.
The goal is to ensure every AI-driven process—whether it’s a chatbot, automation workflow, or machine learning model—operates within a controlled identity framework that can be monitored and audited.
“Enterprises are under enormous pressure to deploy AI quickly, but speed without governance is a liability,” said Arun Shrestha, founder of BeyondID. “Nexera builds intelligent AI systems while BeyondID ensures every AI agent, model, and workflow is securely identified, governed, and monitored.”
In traditional IT environments, identity management primarily focuses on human users—employees, partners, and customers.
But AI is dramatically changing that model.
AI agents, automated pipelines, machine-learning models, and service accounts increasingly act as independent entities within enterprise systems. These non-human identities often interact with APIs, data platforms, and cloud infrastructure autonomously.
Without proper governance, they can create significant risks:
Security analysts have increasingly warned that unmanaged machine identities represent one of the fastest-growing threat vectors in modern enterprise environments.
The BeyondID–Nexera partnership is specifically designed to address this issue by embedding identity governance into AI architecture from the start.
As part of the partnership, the companies introduced four integrated go-to-market services designed to help enterprises move quickly from AI experimentation to secure production deployment.
This initial engagement focuses on assessing an organization’s AI readiness. It includes evaluating AI use cases, reviewing identity risks, selecting appropriate AI platforms, and developing a governance blueprint along with a 90-day execution roadmap.
The goal is to ensure enterprises establish a clear identity and access strategy before deploying AI systems at scale.
The second offering focuses on deploying production-grade AI agents with security embedded at the architecture stage.
This includes implementing identity controls, secrets management, monitoring capabilities, and compliance validation frameworks. The companies say this structured rollout approach allows organizations to launch secure AI agents within roughly three months.
Many enterprises are already rolling out AI tools internally, but often without sufficient governance controls.
This offering helps secure deployments of platforms such as Microsoft Copilot, Google Gemini, and other enterprise AI systems by implementing features such as:
The goal is to prevent unmanaged AI tools from becoming security vulnerabilities.
The final offering provides ongoing managed services that monitor both AI operations and identity governance.
Services include model drift monitoring, identity anomaly detection, agent access recertification, and continuous governance optimization.
This ensures AI systems remain secure as they evolve and scale.
One of the key differentiators highlighted by both companies is the execution-focused model behind the partnership.
Traditional system integrators often provide high-level AI strategy consulting but may lack deep expertise in identity governance.
The BeyondID–Nexera approach attempts to bridge that gap by combining AI engineering expertise with identity security architecture.
The companies say engagements can move from strategy to production deployment in as little as 90 days, with identity governance built directly into the AI infrastructure rather than layered on later.
For many enterprises, the biggest challenge with AI adoption isn’t just building models—it’s ensuring those systems can be trusted.
AI agents often interact with sensitive data, business processes, and critical infrastructure. Without strong identity controls, organizations risk exposing internal systems or violating compliance regulations.
According to Nexera CEO Tom Wisnowski, establishing trust in AI operations is essential for scaling enterprise deployments.
“AI is only as powerful as the trust placed in it,” Wisnowski said. “With BeyondID, we can now offer our clients the full stack—from intelligent systems to the identity infrastructure that makes those systems safe to operate at enterprise scale.”
The partnership reflects a broader shift happening across the enterprise technology landscape.
As AI systems become more autonomous and integrated into business operations, identity management is emerging as a critical foundation for AI governance.
Security experts increasingly argue that traditional identity frameworks—designed primarily for human users—must evolve to handle millions of machine identities across APIs, AI agents, and automated systems.
Vendors that can integrate AI capabilities with robust identity and access management frameworks may gain a significant advantage in the next phase of enterprise AI adoption.
For organizations eager to deploy AI quickly but safely, partnerships like this could provide a blueprint for balancing innovation with security.
Get in touch with our MarTech Experts.
artificial intelligence 24 Mar 2026
As generative AI rapidly becomes the new gateway for product discovery, brands are facing an unfamiliar challenge: how to maintain control over their identity inside AI-driven shopping environments.
To address that shift, DaVinci Commerce has launched DaVinci Agentic BrandStore, a new platform designed to create immersive, AI-native shopping experiences directly within large language model ecosystems.
The launch marks a significant step in the evolution of AI-powered commerce, enabling brands to embed curated product experiences and branded interactions into AI assistants and conversational interfaces.
The company also announced a strategic investment and global partnership with Accenture, aimed at helping enterprise brands deploy AI-driven shopping experiences at scale.
The innovation has already gained industry recognition, earning a spot among the Top 50 innovations at the 2026 Innovators Showcase during the National Retail Federation event, widely regarded as one of the retail industry's most influential technology showcases.
Consumer behavior is shifting rapidly as AI assistants become the first stop for product research and shopping decisions.
According to data from Adobe Analytics, traffic from generative AI platforms surged 693% year over year in 2025, while roughly 40% of consumers used AI tools for shopping assistance during the same period.
This shift is forcing brands to rethink how they appear in digital environments increasingly mediated by AI.
Without direct control over their presence in these systems, brands risk becoming indistinguishable data points in AI responses, where product recommendations may prioritize price and availability rather than brand identity or storytelling.
DaVinci Commerce aims to change that dynamic by transforming traditional brand assets—such as product feeds, reviews, websites, and digital media—into conversational shopping experiences designed specifically for AI ecosystems.
“AI is becoming the new storefront,” said Diaz Nesamoney, founder and CEO of DaVinci Commerce. “The commerce infrastructure currently available in AI platforms enables AI to transact, but brands need a way to compete and differentiate in these new environments.”
At the heart of the launch is what DaVinci calls a Commerce Experience Platform (CEP)—a new category designed to bridge traditional e-commerce infrastructure with emerging AI commerce environments.
The platform converts brand content into dynamic, AI-native storefronts that operate inside conversational interfaces powered by major LLM ecosystems.
These storefronts can interact directly with consumers through natural language conversations, providing product recommendations, answering questions, and guiding shoppers toward purchase decisions.
Initially, the Agentic BrandStore experience will launch as an application inside ChatGPT, with plans to expand across other LLM platforms such as Google Gemini and Claude.
To help companies create these experiences, the platform includes BrandStore Studio, a development environment where brands can configure how their AI storefront behaves.
Within the studio, brands can define:
The system also manages the DaVinci Commerce Answer Agent, which orchestrates conversations and determines how information should be presented to shoppers.
By combining curated content with conversational AI, the storefront becomes more than a simple chatbot—it acts as a guided shopping assistant tailored to each brand’s identity.
The DaVinci platform is built around four major AI components designed to manage discovery, content, and transactions.
This agent handles multi-turn conversations with shoppers, ensuring that responses remain consistent with brand voice and guidelines. It also guides customers through the buying journey from product discovery to purchase.
Content Agents transform brand materials—including product data, digital assets, and user reviews—into structured information that AI systems can interpret and present dynamically.
These agents pull content from systems like Product Information Management (PIM), Digital Asset Management (DAM), and product detail pages.
The Commerce Agent connects AI conversations to real purchasing options. These can include:
This allows AI storefronts to transition smoothly from conversation to transaction.
Finally, the platform includes a self-learning engine that analyzes shopper intent and continuously improves recommendations and experiences without manual intervention.
Over time, this system helps brands better understand customer preferences and refine their AI-powered shopping journeys.
One of the most significant risks in AI-driven commerce is the potential loss of brand control.
When AI assistants summarize product options, they may rely on fragmented or inconsistent data sources. That can lead to inaccurate claims, off-brand messaging, or recommendations that dilute brand differentiation.
DaVinci Commerce addresses this issue with a governance and compliance framework that allows brands to enforce rules around:
The system also supports omni-LLM deployment, allowing brands to create a single experience that can operate across multiple AI ecosystems without vendor lock-in.
To accelerate adoption, DaVinci Commerce has partnered with Accenture, integrating the platform into the consulting giant’s broader AI, commerce, and digital transformation services.
Through this partnership, Accenture will help enterprise clients deploy AI-native shopping experiences across LLM ecosystems including ChatGPT, Gemini, and Claude.
Ndidi Oteh, CEO of Accenture Song, emphasized that AI-driven discovery is rapidly reshaping how consumers interact with brands.
“As people increasingly rely on AI-assisted recommendations and begin delegating decisions to intelligent agents, being discoverable is no longer enough,” Oteh said. “Brands must be relevant, personable, and ready to transact in agent-led environments.”
The launch reflects a broader shift toward agentic commerce, where AI assistants play an active role in recommending, evaluating, and even purchasing products on behalf of consumers.
In this environment, brands must compete not only for consumer attention but also for algorithmic representation within AI systems.
Platforms like DaVinci’s Agentic BrandStore are attempting to give brands tools to shape those interactions—ensuring that AI-driven shopping experiences reflect brand identity rather than generic product listings.
If the trend continues, the next frontier of e-commerce may not be traditional websites or marketplaces, but conversational storefronts embedded directly inside AI assistants.
For brands navigating this shift, the question is no longer whether AI will influence commerce—it’s how much control they’ll have over the experience.
Get in touch with our MarTech Experts.
security 24 Mar 2026
As governments and regulated industries tighten requirements around data sovereignty, cybersecurity vendors are racing to deliver AI-powered protection without relying on the cloud.
SentinelOne is the latest to respond to that demand, unveiling an expanded portfolio designed to bring autonomous AI-driven security to on-premise and self-hosted environments, including air-gapped systems.
The new capabilities extend SentinelOne’s platform beyond endpoint protection to secure servers, private cloud infrastructure, and data pipelines, all while keeping threat detection and analysis entirely inside the customer’s environment.
For organizations in sectors such as national security, healthcare, and financial services, the move addresses a persistent challenge: adopting advanced AI security without sending sensitive data to external cloud services.
The cybersecurity industry has largely embraced cloud-native architectures for threat detection and response. While effective for many enterprises, that model can pose serious limitations for organizations that must maintain strict control over where their data resides.
SentinelOne’s expanded on-premises portfolio is designed to eliminate that trade-off.
By running its autonomous detection engines directly within customer infrastructure, the platform processes telemetry and threat intelligence locally—ensuring sensitive data never leaves the organization’s secure environment.
“Empowering global organizations with the certainty that their data stays in their control is more urgent than ever given the need to adopt AI without compromising privacy,” said Ana Pinczuk, President of Product and Technology at SentinelOne.
According to Pinczuk, highly regulated industries have long been forced to choose between AI-driven security innovation and full control over their data. SentinelOne aims to remove that compromise by delivering its advanced protection capabilities directly into customer hardware environments.
The launch arrives at a time when geopolitical pressures and regulatory requirements are reshaping cybersecurity strategies worldwide.
Critical infrastructure operators, government agencies, and defense organizations are increasingly adopting air-gapped systems—networks physically isolated from the internet—to prevent external access.
While these environments offer strong isolation, they also create challenges for traditional security platforms that rely on continuous cloud connectivity.
SentinelOne’s approach allows organizations to run multiple detection engines locally, enabling threat analysis and automated remediation even when systems operate completely offline.
This architecture allows customers to maintain full security coverage while keeping data confined within national or organizational boundaries.
SentinelOne already provides on-premises endpoint protection used across millions of devices worldwide. The new portfolio extends those capabilities across a broader set of infrastructure components.
The platform now delivers protection for:
All protections operate through a single lightweight agent, enabling organizations to standardize security policies across complex environments.
Security telemetry generated by the agent is streamed directly into the organization’s own monitoring systems, allowing internal teams to conduct threat hunting and investigations without relying on third-party cloud analytics.
Beyond endpoint protection, the new offering introduces advanced safeguards for data storage environments, integrating with enterprise infrastructure platforms such as NetApp and Dell Technologies.
These integrations allow organizations to automatically scan files for malware as they enter the system, quarantining threats before they can spread across internal networks.
Because the inspection process occurs locally, sensitive information remains inside the organization’s security perimeter during analysis and remediation.
For industries bound by strict compliance regulations—such as financial institutions and healthcare providers—this architecture helps maintain data privacy while still benefiting from modern AI-driven threat detection.
Another notable addition to the portfolio is Prompt Security On-Premise, a self-hosted security layer designed to protect enterprise AI environments.
As organizations increasingly deploy generative AI tools, new risks have emerged around data leakage, prompt injection attacks, and unauthorized AI usage—often referred to as “shadow AI.”
Prompt Security addresses these concerns by acting as a specialized firewall for AI applications.
The system can:
Crucially, these protections operate entirely within the organization’s environment, ensuring that no AI-related data is transmitted to external services.
SentinelOne also introduced a new AI Data Pipeline tailored specifically for on-premises deployments.
Security teams often face an overwhelming volume of telemetry data generated by modern IT environments. The new pipeline addresses that challenge through intelligent filtering that prioritizes relevant signals and reduces noise.
The system can enrich telemetry data, monitor the health of incoming data streams, and optimize how information flows between internal systems.
Organizations can also move data between endpoints, analytics tools, and generative AI models while sanitizing sensitive information—all without sending data to external cloud services.
This capability aims to help security teams reduce alert fatigue while lowering infrastructure costs associated with processing large volumes of security data.
SentinelOne’s expanded on-premises strategy reflects a broader shift in the cybersecurity market.
Governments around the world are increasingly implementing data residency and sovereignty regulations, requiring organizations to maintain strict control over where data is stored and processed.
At the same time, AI adoption is accelerating across sectors that handle highly sensitive information—from defense agencies to financial institutions and healthcare systems.
These organizations want the advantages of AI-powered security, but many cannot rely on public cloud services due to regulatory or operational constraints.
By delivering autonomous AI protections that operate entirely inside customer infrastructure, SentinelOne is positioning itself to serve that growing segment of the cybersecurity market.
As AI becomes a central component of cybersecurity strategies, the ability to deploy those systems in sovereign environments may become a key differentiator for vendors.
Organizations responsible for critical infrastructure, national security, and regulated industries increasingly demand platforms that combine advanced automation with strict data control.
SentinelOne’s latest expansion suggests that the future of enterprise security may not be exclusively cloud-based. Instead, it may involve hybrid and sovereign architectures where AI operates locally—bringing powerful automation to environments that must remain fully under customer control.
For enterprises navigating both regulatory pressure and evolving cyber threats, that balance between innovation and sovereignty is becoming essential
Get in touch with our MarTech Experts.
Page 22 of 1465