marketing 20 Mar 2026
In a move that underscores the growing shift toward warehouse-native analytics, Kubit has announced a new integration with Snowflake that promises to simplify how enterprises analyze customer behavior and business performance—without moving data out of their core systems.
The pitch is straightforward: stop copying data into fragmented tools and start running analytics directly where it already lives.
That idea isn’t new, but Kubit is betting that tighter execution inside Snowflake’s AI Data Cloud—combined with explainable AI—can finally make it practical at scale.
Enterprises have increasingly standardized on Snowflake as a “single source of truth.” The problem? Product analytics and BI tools haven’t kept up. They often rely on separate pipelines, creating duplicate datasets, inconsistent metrics, and governance headaches.
Kubit’s integration tackles this by querying data directly inside Snowflake environments. That means product, growth, and analytics teams can track customer journeys, behavioral events, and key business metrics—like revenue and lifetime value—without exporting or reshaping data.
In practical terms, this reduces the lag between question and answer. It also cuts down on the quiet chaos of mismatched dashboards—a common pain point in large organizations.
The more interesting angle is Kubit’s AI layer.
Instead of bolting on opaque AI tools, Kubit introduces AI agents that generate and execute SQL queries directly within Snowflake. These agents operate within existing access controls and use a shared semantic layer to keep metrics consistent.
The result:
Anomaly detection across product and business metrics
Root-cause analysis for sudden changes
Natural language report generation
Narrative summaries backed by live queries
That last point matters. In an era where “AI insights” often feel like black boxes, Kubit is leaning into transparency. Every insight ties back to a verifiable query running in Snowflake—something data teams and auditors alike will appreciate.
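To make that concrete, here is a minimal Python sketch of the pattern described above, not Kubit's actual implementation: an agent-generated SQL string runs through the standard Snowflake connector under an existing role, and the query text and query ID travel with the result so the numbers can be audited. Connection details, table names, and the metric definition are placeholders.

```python
# Illustrative sketch only, not Kubit's implementation. It shows the
# warehouse-native pattern the article describes: an agent-produced SQL
# string is executed inside Snowflake under the caller's existing role,
# and the query text plus Snowflake's query ID are kept so the resulting
# "insight" stays verifiable.
import snowflake.connector

def run_agent_query(generated_sql: str, conn_params: dict) -> dict:
    """Execute agent-generated SQL in Snowflake and return an auditable result."""
    conn = snowflake.connector.connect(**conn_params)
    try:
        cur = conn.cursor()
        cur.execute(generated_sql)      # runs under the connection's role and grants
        return {
            "sql": generated_sql,       # the exact query behind the insight
            "query_id": cur.sfqid,      # Snowflake query ID, useful for audit trails
            "rows": cur.fetchall(),
        }
    finally:
        conn.close()

# Hypothetical usage; account and credential values are placeholders, and the
# metric definition would normally come from a shared semantic layer so that
# "revenue" means the same thing in every report.
params = {"account": "<account>", "user": "<user>", "password": "<password>",
          "warehouse": "ANALYTICS_WH", "database": "PROD", "schema": "EVENTS",
          "role": "ANALYST"}
sql = ("SELECT DATE_TRUNC('day', event_ts) AS day, SUM(revenue) AS revenue "
       "FROM order_events GROUP BY 1 ORDER BY 1")
result = run_agent_query(sql, params)
print(result["query_id"], len(result["rows"]))
```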
Serko, a global travel tech provider, is already using Kubit with Snowflake to power product analytics for its Booking.com for Business platform.
Before Kubit, accessing insights reportedly took weeks. Now, product teams can self-serve analytics directly from governed warehouse data—without disrupting their Snowflake-first architecture.
That’s a telling example. The real value here isn’t just faster dashboards; it’s shifting analytics from a centralized bottleneck to a distributed capability across teams.
Kubit’s move lands at a time when the analytics stack is undergoing a quiet but significant transformation.
Traditional tools like Tableau and Looker helped define the modern BI era—but they often depend on extracted or modeled datasets. Meanwhile, newer players are pushing “warehouse-native” as the next evolution, aligning analytics directly with cloud data platforms.
Snowflake, for its part, has been steadily positioning itself not just as a storage layer, but as a full-fledged application and AI platform. Partnerships like this one reinforce that strategy.
The implication is clear: the center of gravity is shifting toward the data warehouse itself. Tools that don’t adapt risk becoming redundant layers.
Perhaps the most compelling aspect of Kubit’s approach is how it blends governance with AI.
Many organizations are racing to make data “AI-ready,” but struggle with trust, consistency, and compliance. By keeping AI execution inside Snowflake—and within existing controls—Kubit sidesteps a major barrier to enterprise adoption.
It’s a subtle but important shift. Instead of asking companies to trust new systems, Kubit extends the ones they already trust.
Kubit’s Snowflake integration isn’t flashy, but it hits on a real pain point: the fragmentation of analytics in modern data stacks.
If it delivers on its promise, the combination of warehouse-native analytics and transparent AI could help enterprises move faster without sacrificing control—a balance that’s been notoriously hard to achieve.
And in a market crowded with analytics tools, that might be the differentiator that actually sticks.
marketing 20 Mar 2026
In a bid to close one of enterprise security’s most persistent blind spots, Keeper Security has introduced KeeperDB, a vault-native database access feature designed to bring zero-trust principles directly to how teams interact with sensitive data.
Set for official debut at the RSA Conference 2026, KeeperDB extends the company’s Privileged Access Management (PAM) platform by embedding database access controls directly into its existing vault environment. The goal: eliminate risky workarounds and bring order to a notoriously fragmented part of enterprise infrastructure.
Despite years of investment in identity and access management, database access remains surprisingly outdated.
Developers and administrators often rely on a patchwork of desktop clients, shared credentials, and VPN tunnels to access production systems. These methods may be convenient, but they come with serious trade-offs: limited visibility, inconsistent policy enforcement, and a heightened risk of credential leaks or insider misuse.
That’s a problem when databases house some of the most sensitive assets an organization owns—from customer records to financial data.
KeeperDB’s premise is simple: if privileged access is already governed inside a vault, database access should be too.
KeeperDB embeds database session management directly into the Keeper Vault, allowing users to initiate connections without ever exposing credentials.
Instead of copying passwords into external tools, users can launch sessions from within the vault itself—via either a browser-based interface or command-line access. Initial support includes widely used systems like MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
The shift may sound incremental, but it addresses a core issue: credential sprawl. By keeping secrets contained and never revealed to endpoints, Keeper reduces one of the most common attack vectors in enterprise environments.
Where KeeperDB stands out is in how it applies zero-trust principles to database workflows.
Access is governed through centralized policies, with granular controls such as read-only permissions and regulated data transfers. Every session is fully recorded, creating an audit trail that security teams can review for compliance or incident response.
In effect, Keeper is treating database access the same way modern systems treat privileged infrastructure access—with strict verification, minimal exposure, and full visibility.
That’s increasingly critical as organizations face rising pressure to demonstrate compliance and prevent data exfiltration in real time, not after the fact.
Security tools often fail when they disrupt how teams actually work. KeeperDB tries to avoid that trap.
The platform offers a modern, browser-based interface for direct access, but also introduces KeeperDB Proxy for organizations that want to keep using existing database clients. The proxy routes connections through Keeper’s control layer, enforcing policies without forcing teams to abandon familiar tools.
It’s a pragmatic approach—one that acknowledges that security adoption depends as much on usability as it does on technical rigor.
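The proxy concept is easier to see in code. The sketch below is an assumption-laden illustration, not Keeper's API: a standard PostgreSQL client connects to a hypothetical local proxy listener using a short-lived session token, so the real database password never reaches the endpoint, and the proxy can enforce policy and record the session on the connection it opens to the database.

```python
# A minimal sketch, assuming a hypothetical local proxy endpoint and a
# short-lived session token. This is not Keeper's actual API; it only
# illustrates the pattern: the client keeps using a standard PostgreSQL
# driver but connects to the proxy rather than the database, and never
# sees the real database credentials.
import psycopg2

PROXY_HOST = "127.0.0.1"        # hypothetical local proxy listener
PROXY_PORT = 5432
SESSION_TOKEN = "<ephemeral-token-issued-by-vault>"   # stands in for a password

conn = psycopg2.connect(
    host=PROXY_HOST,
    port=PROXY_PORT,
    dbname="orders",
    user="audited-session",     # identifies the session, not a shared DB account
    password=SESSION_TOKEN,     # never the real database password
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT customer_id, total FROM invoices LIMIT 10")
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```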
KeeperDB enters a space where database access is typically handled by standalone tools or bolted onto broader platforms. Vendors like CyberArk and HashiCorp (via Vault) have tackled adjacent problems, particularly around secrets management and privileged access.
What Keeper is doing differently is collapsing those layers into a single, vault-native experience. Instead of integrating multiple tools, it’s positioning the vault itself as the control plane for database access.
That aligns with a broader industry trend: consolidating security functions to reduce complexity and improve governance.
KeeperDB reflects a growing recognition that securing identities isn’t enough—organizations also need to secure how those identities interact with data.
By embedding database access into a zero-trust, zero-knowledge architecture, Keeper is aiming to eliminate an entire class of risks tied to exposed credentials and unmanaged sessions.
For enterprises, the payoff could be significant: fewer tools to manage, stronger compliance posture, and reduced risk of breaches tied to database access.
But adoption will hinge on execution. If KeeperDB can deliver both airtight security and a frictionless user experience, it could reshape how organizations think about database access altogether.
customer relationship management 20 Mar 2026
Independent auto dealers—long underserved by enterprise-grade software—may be getting a meaningful upgrade. AutoRaptor has joined the member benefit program of the National Independent Automobile Dealers Association (NIADA), unlocking discounts of up to 35% on its AI-powered CRM platform.
The partnership aims to lower the barrier to entry for advanced sales and customer management tools, bringing capabilities typically reserved for large franchise groups to smaller, independent dealerships.
Independent dealerships operate under very different constraints than their franchise counterparts—lean teams, fast-moving used inventory, and customers who expect near-instant responses.
Traditional CRM systems, often designed for larger enterprises, don’t always translate well in this environment. They can be bloated, expensive, or simply too rigid.
AutoRaptor’s pitch is that it was built specifically for this segment. Its platform combines lead management, automated follow-ups, and AI-driven engagement tools into a single interface designed for speed and simplicity.
With the NIADA partnership, that value proposition becomes more accessible—especially for cost-conscious operators.
At the center of AutoRaptor’s platform is its AI Sales Assistant, which automates customer engagement across multiple channels, including voice, SMS, email, and web chat.
The idea is to help dealers respond faster and more consistently without increasing headcount—a key advantage in an industry where missed leads often translate directly into lost revenue.
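The underlying pattern is simple enough to sketch. The snippet below is purely illustrative, with hypothetical names, and is not AutoRaptor's code: if no human has replied to a new lead within a response-time target, an automated first-touch message is queued on the lead's channel.

```python
# Illustrative sketch of the general "never miss a lead" pattern the article
# describes, not AutoRaptor's implementation. All names and the SLA value
# are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Lead:
    name: str
    channel: str            # "sms", "email", "web_chat", or "voice"
    received_at: datetime
    human_replied: bool = False

FIRST_TOUCH_SLA = timedelta(minutes=5)   # assumed response-time target

def queue_followup(lead: Lead, now: datetime) -> str | None:
    """Queue an automated first-touch message if the lead is going stale."""
    if lead.human_replied or now - lead.received_at < FIRST_TOUCH_SLA:
        return None
    message = (f"Hi {lead.name}, thanks for reaching out. The vehicle you asked "
               f"about is still available. Want to set up a test drive?")
    # In a real system this would hand off to the channel-specific sender
    # (SMS gateway, email service, chat widget); here it is simply returned.
    return f"[{lead.channel}] {message}"

lead = Lead("Jordan", "sms", datetime(2026, 3, 20, 9, 0))
print(queue_followup(lead, datetime(2026, 3, 20, 9, 10)))
```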
Other features round out the offering:
Integrated desking and payment structuring tools with e-signature support
Unified communications across phone, email, web, and third-party listings
Real-time pipeline tracking with automated next steps
Integrations with dealer management systems like Dealertrack
Taken together, the platform aims to centralize operations that are often scattered across multiple tools or manual processes.
NIADA represents more than 13,000 independent dealers, making it a significant distribution channel for vendors targeting this market.
Its member benefit program acts as a curated marketplace, connecting dealers with vetted solutions at discounted rates. For AutoRaptor, inclusion in the program offers both credibility and reach.
For dealers, it reduces the risk of adopting new technology—especially in a category where implementation missteps can disrupt day-to-day sales operations.
This partnership reflects a wider trend: the digitization of independent auto retail.
Large dealership groups have spent years investing in sophisticated CRM, analytics, and digital retailing platforms. Independents, by contrast, have often relied on simpler—or fragmented—systems.
That gap is now narrowing, driven by cloud software, AI automation, and competitive pressure from online-first car marketplaces.
Vendors like AutoRaptor are betting that purpose-built, AI-first tools can help independents compete not just on inventory and price, but on customer experience.
AutoRaptor’s NIADA deal isn’t just about a discount—it’s about distribution and democratization.
By bundling AI-driven CRM capabilities into a more affordable package, the company is positioning itself as a go-to platform for independent dealers looking to modernize without overextending budgets.
If adoption follows, it could signal a broader shift in how smaller dealerships approach sales technology—moving from reactive workflows to more automated, data-driven operations.
artificial intelligence 20 Mar 2026
As enterprises push deeper into AI and cloud environments, the security conversation is rapidly shifting toward data itself. This week, Commvault announced a major expansion of its data and AI security capabilities within Commvault Cloud, extending visibility and governance into structured databases—including vector databases increasingly used in AI applications.
The new capabilities, enabled by Commvault’s recent acquisition of Satori, aim to close a growing gap in enterprise security: controlling how sensitive data is accessed and exposed across modern data environments.
Commvault has traditionally been known for backup and cyber resilience. But as AI models and analytics platforms consume more enterprise data, the company is positioning Commvault Cloud as a broader data security platform.
The update extends the platform’s existing Data Security Posture Management (DSPM) functionality—previously focused on unstructured data—into structured environments such as databases and cloud data warehouses.
That expansion matters because structured systems now hold some of the most critical assets feeding AI pipelines, including customer records, operational metrics, and regulated information.
Commvault’s approach unifies discovery, classification, risk analysis, and access governance across:
Structured data
Semi-structured data
Unstructured data
All of it spans hybrid and multi-cloud environments, reflecting how most enterprises actually manage data today.
One reason Commvault is pushing deeper into data governance: AI itself is becoming a new attack and exposure vector.
Vector databases—which store embeddings used by AI models—can inadvertently surface sensitive information if not properly governed. Legacy security tools weren’t designed with these systems in mind.
By adding real-time access governance for structured databases and vector data stores, Commvault aims to reduce the risk of data leakage, including exposure through generative AI applications.
The platform’s new capabilities include:
AI-driven discovery and classification of sensitive data
Identification of environments with high-risk data exposure
Monitoring and control of structured data access in real time
Integration with cyber resilience and recovery workflows
The end goal is to help organizations reduce risk before incidents occur—and recover more effectively if they do.
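Stripped to its essentials, the discovery-and-classification step looks something like the sketch below. Real DSPM tooling, Commvault and Satori included, relies on far richer detection than a handful of regular expressions, so treat this only as an illustration of the idea: sample column values, match them against sensitive-data patterns, and flag what turns up.

```python
# A deliberately simple sketch of sensitive-data classification over sampled
# column values. It illustrates the discovery/classification step the article
# describes, nothing more.
import re

PII_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_column(name: str, sample_values: list[str]) -> set[str]:
    """Return the sensitive-data categories detected in a column's sample."""
    found = set()
    for value in sample_values:
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(value):
                found.add(label)
    return found

# Hypothetical sampled data from a structured store (warehouse table, etc.)
samples = {
    "contact":  ["ana@example.com", "unknown"],
    "notes":    ["call back Tuesday", "card 4111 1111 1111 1111 on file"],
    "order_id": ["A-1001", "A-1002"],
}

for column, values in samples.items():
    hits = classify_column(column, values)
    if hits:
        print(f"{column}: flag as sensitive ({', '.join(sorted(hits))})")
```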
Commvault’s acquisition of Satori earlier this year appears to be central to the strategy.
Satori specialized in data access governance and real-time controls, technologies that complement Commvault’s strengths in backup, cyber recovery, and resilience.
The result is a platform that doesn’t just store and protect data copies—it also manages how live data is accessed, used, and potentially exposed.
That convergence is becoming increasingly important as CISOs look for unified security approaches rather than fragmented tooling.
Industry analysts have been warning that AI adoption is moving faster than security frameworks designed to govern it.
According to research cited by Commvault, a large percentage of organizations have sensitive data that could be surfaced by AI systems, while a significant share of breaches still involve personally identifiable information.
Analysts say this shift is forcing security leaders to rethink their approach. Tools that combine DSPM with cyber resilience could help organizations better manage AI-driven risk across expanding data ecosystems.
Some of the new capabilities are already available.
Real-time data access governance for structured environments can be accessed today through single sign-on within Commvault Cloud and is offered as an add-on across platform tiers.
Meanwhile, expanded structured data discovery and classification features are expected to reach general availability in late summer 2026.
Commvault plans to showcase the updates at the RSA Conference 2026 in San Francisco, where the company will highlight its broader push toward unified resilience across data security, identity protection, and cyber recovery.
Commvault’s latest update reflects a broader shift in enterprise security: protecting data wherever it lives—and however it’s used by AI.
As organizations deploy more AI-driven systems and vector databases, governance at the data layer is becoming non-negotiable. By extending DSPM and access controls into structured and AI data environments, Commvault is positioning itself at the intersection of cyber resilience and AI security.
That’s a strategic move in a market where the next generation of breaches may be driven less by infrastructure vulnerabilities—and more by exposed data fueling AI.
artificial intelligence 20 Mar 2026
Enterprise AI is often long on promise and short on measurable impact. But a new deployment from Emporix and ACR suggests that, at least in order management, automation is starting to deliver tangible results.
The companies announced the successful rollout of an AI-powered order automation solution that reduces purchase order processing times from roughly eight minutes to under 60 seconds—an improvement of up to 87% in early deployments.
That’s not just incremental efficiency. It’s the kind of operational shift that hints at where enterprise commerce is heading: toward fully autonomous execution.
Like many large distributors, ACR’s order workflows were a mix of structured and unstructured inputs.
While some transactions flowed through electronic data interchange (EDI), a significant portion still arrived via email as unstructured purchase orders—requiring manual entry into ERP systems. That created bottlenecks, increased error rates, and tied up customer service teams in repetitive tasks.
Emporix tackled this with an AI-driven orchestration layer capable of:
Interpreting unstructured purchase order documents
Validating business rules in real time
Triggering downstream ERP actions automatically
In short, the system replaces manual data entry with autonomous decision-making; orders flow through without human intervention.
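That intake-validate-trigger flow can be sketched in a few lines. The example below is illustrative only, not Emporix's implementation: the extraction step is stubbed with a regular expression where a document-AI model would sit, and the ERP hand-off is a placeholder.

```python
# A minimal sketch of the intake -> validate -> trigger flow the article
# describes. All names, rules, and the email format are hypothetical.
import re

def extract_order(email_body: str) -> dict:
    """Pull a structured order out of an unstructured purchase-order email."""
    sku = re.search(r"SKU[:\s]+([\w-]+)", email_body)
    qty = re.search(r"(?:qty|quantity)[:\s]+(\d+)", email_body, re.IGNORECASE)
    return {"sku": sku.group(1) if sku else None,
            "quantity": int(qty.group(1)) if qty else 0}

def validate(order: dict, catalog: dict) -> list[str]:
    """Apply simple business rules; return a list of violations."""
    errors = []
    if order["sku"] not in catalog:
        errors.append("unknown SKU")
    if not 0 < order["quantity"] <= 10_000:
        errors.append("quantity outside allowed range")
    return errors

def push_to_erp(order: dict) -> None:
    # Placeholder for the downstream ERP action (an API call or a message on
    # an integration bus in practice); printing stands in for that side effect.
    print(f"ERP order created: {order}")

email = "Please supply SKU: AC-4411, quantity: 250, deliver to plant 7."
order = extract_order(email)
problems = validate(order, catalog={"AC-4411": "compressor unit"})
if problems:
    print(f"Routed to a human for review: {problems}")
else:
    push_to_erp(order)
```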
The key differentiator here isn’t just automation—it’s orchestration.
Emporix combines composable commerce architecture with agentic AI, enabling workflows that don’t just execute tasks but make decisions within predefined business logic.
That’s a step beyond traditional robotic process automation (RPA), which typically mimics human actions without deeper contextual understanding.
The result for ACR:
Processing times reduced from minutes to seconds
Fewer manual errors and downstream corrections
Customer service teams freed up for higher-value interactions
For enterprises struggling to scale operations without scaling headcount, that’s a compelling proposition.
One of the more notable aspects of the deployment is what didn’t happen: a full platform overhaul.
ACR, which has grown through acquisitions, operates across a complex, multi-system environment. Instead of replacing core systems, Emporix integrated its orchestration layer into the existing stack using a headless, API-first approach.
That allowed the company to automate order intake without disrupting its ERP or broader IT architecture—a critical factor for large enterprises where replatforming can take years.
The implementation itself was completed in about six months, with a phased rollout designed to minimize risk and allow for iterative improvements.
This project is part of ACR’s broader enterprise AI strategy, led by its internal AI Framework Program and Center of Excellence.
The goal isn’t just efficiency—it’s building a foundation for autonomous commerce, where systems can manage operations end-to-end with minimal human input.
Emporix already supports several ACR capabilities, including:
Customer portals with real-time order and pricing visibility
Automated returns management
Centralized product catalogs and digital asset management
Next on the roadmap: deeper expansion into cart, checkout, and self-service features, along with broader use of AI agents across workflows.
ACR’s approach aligns with the “Business Orchestration and Automation Technologies” (BOAT) framework defined by Gartner. The concept brings together RPA, workflow automation, and integration platforms into a unified system.
What’s emerging now is the next layer: agent-driven orchestration.
Instead of simply automating predefined steps, AI agents can dynamically interpret inputs, make decisions, and execute actions across systems. That shift moves enterprises closer to true autonomy—where processes don’t just run faster, but run themselves.
Order processing may not sound glamorous, but it’s foundational to revenue operations. Delays, errors, and inefficiencies directly impact customer experience and margins.
By cutting processing times dramatically and reducing manual intervention, Emporix and ACR are demonstrating how AI can move beyond pilot projects into core business operations.
If replicated at scale, this model could reshape how enterprises handle everything from procurement to fulfillment—turning traditionally manual workflows into intelligent, self-optimizing systems.
Emporix’s deployment at ACR is a clear example of enterprise AI delivering measurable ROI—not in theory, but in day-to-day operations.
More importantly, it signals a shift from automation as a tool to automation as a foundation. As orchestration platforms evolve and AI agents become more capable, the line between human-managed and machine-managed commerce will continue to blur.
For enterprises aiming to scale without adding complexity, that future may arrive sooner than expected.
automation 20 Mar 2026
For many manufacturing executives, selling a business is a once-in-a-lifetime event—yet preparation often starts too late. Automation Alley is aiming to change that with the launch of its Exit Excellence for Industry Leaders Webinar Series, a 10-part program focused on helping owners build value and plan smarter exits.
Developed in partnership with GlobalAutoIndustry.com, the monthly series runs from March through December 2026 and targets small- to mid-sized industrial and manufacturing firms navigating mergers, acquisitions, and succession planning.
M&A guidance is often dense, expensive, or inaccessible until companies are already in the deal process. Automation Alley’s approach is to bring that expertise earlier—and make it actionable.
Each one-hour session breaks down key exit planning topics into digestible insights, covering everything from valuation fundamentals to post-sale transitions. The goal is to help leaders make informed decisions years before a transaction is on the table.
That timing matters. Early preparation can significantly influence valuation, deal structure, and long-term outcomes, especially in the competitive mid-market manufacturing segment.
The series is structured as a progressive roadmap, walking attendees through the full lifecycle of a potential sale.
Key sessions include:
Valuation fundamentals (March): How buyers assess manufacturing businesses, including EBITDA multiples and working capital considerations (see the worked example below)
Pre-sale preparation (April): An 18-month roadmap to clean up financials and strengthen positioning
Management team development (May): Reducing owner dependency to increase buyer confidence
Buyer landscape (June): Strategic buyers vs. private equity vs. family offices
Deal process (July): From initial outreach to closing timelines and common pitfalls
Tax strategies (August): Structuring deals to maximize after-tax returns
Due diligence readiness (September): Preparing for quality of earnings scrutiny
Value optimization (October): Driving EBITDA and operational improvements
Negotiation tactics (November): Understanding deal terms beyond price
Post-sale planning (December): Navigating life after exit
Sessions will be led by experienced M&A advisors, legal experts, and private equity professionals with hands-on experience in mid-market transactions.
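For readers unfamiliar with the March session's subject matter, the back-of-envelope arithmetic looks like the sketch below. The figures are invented for illustration and are not guidance from the series.

```python
# Illustrative arithmetic only; every figure here is made up. It shows the
# basic valuation logic the March session covers: enterprise value from an
# EBITDA multiple, adjusted for debt, cash, and any working-capital shortfall
# against the agreed target ("peg").
ebitda = 3_000_000           # trailing-twelve-month EBITDA
multiple = 5.0               # assumed mid-market manufacturing multiple

enterprise_value = ebitda * multiple          # 15,000,000

debt = 2_000_000
cash = 500_000
working_capital_delivered = 1_800_000
working_capital_peg = 2_000_000               # level the buyer expects at close

equity_proceeds = (enterprise_value
                   - debt
                   + cash
                   + (working_capital_delivered - working_capital_peg))

print(f"Indicative equity proceeds: ${equity_proceeds:,.0f}")   # $13,300,000
```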
The timing of the series aligns with broader shifts in the manufacturing sector.
A wave of ownership transitions is underway as aging business owners look toward retirement, while private equity firms continue to target industrial companies for consolidation and growth.
At the same time, economic uncertainty and supply chain volatility are pushing leaders to rethink how they build resilient, valuable businesses—whether or not a sale is imminent.
Programs like this reflect a growing recognition that exit readiness is no longer just about selling—it’s about running a better business today.
Automation Alley’s initiative also highlights a larger trend: the democratization of M&A knowledge.
Historically, deep transaction expertise was concentrated among advisors and large firms. By offering structured, affordable education—$30 per session or $195 for the full series—the organization is making that knowledge more accessible to smaller operators.
For many, that could translate into better deal outcomes—or the confidence to delay a sale until conditions are right.
Automation Alley’s Exit Excellence series isn’t just another webinar lineup—it’s a strategic play to equip manufacturing leaders with the tools to navigate one of the most critical decisions they’ll ever make.
By breaking down complex M&A concepts into practical guidance, the program helps shift exit planning from a reactive scramble to a proactive strategy.
And in a market where preparation can directly impact valuation and legacy, that shift could make all the difference.
artificial intelligence 20 Mar 2026
The “vibe coding” era is quickly colliding with enterprise reality—and Netlify wants to bridge the gap.
The company has unveiled Agent Runners, a new capability that lets developers and teams spin up fully functional web applications directly from AI prompts via netlify.new. By integrating leading coding agents like Claude Code, OpenAI Codex, and Gemini CLI, Netlify is positioning itself as a platform where AI-generated ideas can move straight into production—without the usual rebuilds.
AI-assisted coding tools have made it easy to generate quick prototypes. The problem? Most of them stop there.
Developers often face a painful second phase: rewriting or migrating AI-generated code into production-ready environments. Netlify’s pitch is that this step should disappear entirely.
With Agent Runners, projects created from prompts are deployed instantly as live web apps—on the same infrastructure teams can continue using as they scale. That means no handoffs between prototyping tools and production systems, and no duplicated effort.
It’s a subtle but important shift. Instead of treating AI as a starting point, Netlify is treating it as part of the full software lifecycle.
Netlify’s approach also reflects a broader change in how software teams collaborate.
Rather than forcing a choice between code-first and prompt-first workflows, the platform allows both to coexist within the same project. Developers can work directly in code, while non-technical team members iterate using prompts—on the same infrastructure and in the same environment.
That unified workflow could prove especially valuable for cross-functional teams, where product managers, designers, and marketers increasingly want to experiment with building tools themselves.
Another key differentiator is what comes prepackaged.
Projects created through Agent Runners automatically have access to Netlify’s core platform features, including:
Serverless functions
Identity and authentication
Data storage via Blobs
Forms and user input handling
AI Gateway for managing model interactions
In many cases, stitching these components together is what slows teams down after initial prototyping. By bundling them in from the start, Netlify is trying to eliminate that friction.
Netlify is also targeting enterprise adoption with a new Internal Builder seat, designed to bring AI-assisted development inside organizational guardrails.
The feature introduces role-based access and governance controls, allowing non-engineering teams to build internal tools using AI agents—while engineering maintains oversight of what gets deployed to production.
That’s a notable move. As AI lowers the barrier to building software, enterprises are grappling with how to enable experimentation without creating security or compliance risks.
Netlify’s answer is to keep everything inside its platform, rather than letting teams spin up disconnected tools and shadow IT projects.
Netlify isn’t alone in chasing the “prompt-to-app” future. Platforms like Vercel and Replit have also leaned into AI-assisted development workflows.
The difference here is emphasis.
While others focus heavily on rapid prototyping or developer experience, Netlify is doubling down on continuity—ensuring that what starts as an AI-generated idea can evolve into a production-grade application without switching platforms.
That positioning could resonate with teams tired of rebuilding early-stage projects once they gain traction.
The rise of AI coding agents is reshaping expectations around how quickly software can be created. But speed alone isn’t enough—especially for businesses that need reliability, scalability, and governance.
By integrating AI agents directly into a production-ready platform, Netlify is addressing a key friction point in the current toolchain: the disconnect between experimentation and execution.
If successful, this approach could help redefine what a “development platform” looks like in the AI era—less about writing code from scratch, and more about orchestrating humans and machines in a shared workflow.
Netlify’s Agent Runners signal a shift from AI-assisted coding to AI-integrated software delivery.
Instead of treating prompts as disposable experiments, the platform turns them into durable, scalable projects from day one. For developers, that means less rework. For enterprises, it offers a path to embrace AI-driven development without losing control.
The real test will be adoption—but the direction is clear: the future of software development won’t just start with AI. It will run on it.
marketing 20 Mar 2026
Cybersecurity has long treated employees as the weakest link—but also the hardest to scale. Now, KnowBe4 is betting that AI agents can finally fix that imbalance.
The company has introduced AIDA Orchestration, a new autonomous agent within its Artificial Intelligence Defense Agents (AIDA) suite, designed to automate and personalize phishing simulations and security awareness training at the individual level.
It’s a notable shift away from static, one-size-fits-all campaigns toward continuous, adaptive human risk management.
Traditional security awareness programs typically run on scheduled campaigns—quarterly phishing tests, annual training modules, and broad user segmentation.
The problem? Threats don’t operate on schedules, and neither do users.
AIDA Orchestration replaces that model with an always-on system that continuously assesses individual risk and dynamically adjusts training. Instead of grouping employees into broad categories, the platform tailors phishing simulations and learning paths based on real-time behavior.
That means a user who repeatedly clicks suspicious links might receive more frequent, targeted interventions, while lower-risk users see lighter, less frequent training.
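The adaptive loop is easy to caricature in code. The sketch below is a toy model with invented weights and thresholds, not KnowBe4's risk scoring: recent clicks and reports move a per-user score, and the score determines how often, and how hard, that user is tested next.

```python
# A toy sketch of behavior-driven training cadence. It only illustrates the
# adaptive loop the article describes; the scoring weights and tiers are
# assumptions for the example.
from dataclasses import dataclass

@dataclass
class UserBehavior:
    phish_clicks_90d: int      # simulated-phish clicks in the last 90 days
    reports_90d: int           # suspicious emails correctly reported
    overdue_trainings: int

def risk_score(b: UserBehavior) -> float:
    """Crude 0-100 score: clicking raises risk, reporting lowers it."""
    score = (20.0
             + 25.0 * b.phish_clicks_90d
             + 10.0 * b.overdue_trainings
             - 5.0 * b.reports_90d)
    return max(0.0, min(100.0, score))

def next_plan(score: float) -> dict:
    """Map a score onto a simulation cadence and difficulty tier."""
    if score >= 70:
        return {"cadence_days": 7, "difficulty": "targeted spear-phish"}
    if score >= 40:
        return {"cadence_days": 21, "difficulty": "standard templates"}
    return {"cadence_days": 60, "difficulty": "light refresher"}

user = UserBehavior(phish_clicks_90d=3, reports_90d=1, overdue_trainings=0)
print(next_plan(risk_score(user)))   # high score -> weekly, harder simulations
```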
One of the biggest selling points is operational efficiency.
Security teams often spend hours designing campaigns, segmenting users, and scheduling training. AIDA Orchestration automates those workflows entirely—generating, deploying, and managing programs in seconds rather than hours.
The system operates autonomously but within defined “Plans,” allowing administrators to set high-level policies and guardrails while the AI handles execution.
That balance—automation with oversight—is becoming a recurring theme in enterprise AI deployments.
AIDA Orchestration doesn’t operate in isolation. It connects with other agents in KnowBe4’s AIDA suite, including those focused on phishing template generation and remedial training.
Together, they form a coordinated system designed to:
Simulate increasingly sophisticated phishing attacks
Deliver targeted remediation based on user behavior
Continuously refine training strategies using real-time data
The goal is to create a feedback loop where training evolves alongside both user performance and emerging threats.
The timing of this launch is no coincidence.
According to KnowBe4’s own research, nearly half of cybersecurity leaders now rank AI-powered threats as their top concern. Generative AI has made it easier for attackers to craft highly convincing, personalized phishing messages at scale—removing many of the traditional red flags users relied on.
That escalation is forcing organizations to rethink human risk management. Static training programs can’t keep pace with dynamic, AI-driven threats.
By introducing continuous, adaptive training, KnowBe4 is aligning its platform with how modern attacks actually behave.
KnowBe4 has long been a leader in security awareness training, competing with platforms like Proofpoint and Cofense.
What sets this launch apart is the move toward agentic AI.
While many vendors incorporate AI into content generation or analytics, KnowBe4 is pushing toward autonomous systems that manage entire workflows—from simulation to remediation—without human intervention.
That’s a more ambitious vision, and one that reflects a broader shift in cybersecurity tooling toward automation at scale.
KnowBe4’s framing of “human risk management” is also evolving.
Rather than treating training as a compliance requirement, the company is positioning it as a continuous system—one that integrates data, behavior, and AI to reduce risk over time.
In that sense, AIDA Orchestration is less about training delivery and more about risk optimization.
AIDA Orchestration signals a turning point in how organizations approach the human side of cybersecurity.
By automating personalized training and making it continuous, KnowBe4 is attempting to close the gap between rapidly evolving AI threats and slower, manual defense strategies.
If it works as advertised, the result could be fewer successful phishing attacks—and a lot less manual effort for already stretched security teams.