Business Wire
Published on: Feb 20, 2026
AI experimentation is easy. AI you can trust at scale? That’s harder.
As enterprises move from pilot projects to business-critical AI deployments, the central question is no longer access to models. It’s oversight. Today, Dataiku is tackling that problem head-on with the launch of the 575 Lab, its new Open Source Office focused on building trust infrastructure for modern AI systems.
The initiative debuts with two open-source toolkits aimed at making enterprise AI more transparent, governable, and secure—particularly in the emerging world of agentic AI systems.
For the past two years, enterprises have raced to integrate large language models and AI agents into workflows. But as these systems take on more autonomous roles—triggering actions, making recommendations, and orchestrating multi-step processes—the governance challenge has intensified.
Open source, Dataiku argues, offers a structural advantage.
Hannes Hapke, Director of the 575 Lab, frames it succinctly: open source isn’t just a distribution model—it’s a trust model. When core components are inspectable and standardized, enterprises can verify how systems operate rather than relying on opaque assurances.
That philosophy underpins the lab’s first two projects.
The first toolkit focuses on agent explainability.
Modern AI agents often execute multi-step workflows—pulling data, reasoning over it, calling tools, and making decisions. While impressive, these layered actions can be difficult to trace.
Dataiku’s Agent Explainability Tools are designed to help teams:
Trace decision-making across multi-step agent workflows
Understand how conclusions were reached
Provide visibility for data scientists, compliance teams, and end users
In regulated industries, that traceability isn’t optional. Whether it’s financial services evaluating risk decisions or healthcare systems managing patient workflows, the ability to explain “why” is as important as the output itself.
As agentic ecosystems grow more complex, explainability tools could become foundational rather than supplementary.
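The announcement does not describe the toolkit's interfaces, but the core idea of an explainable agent run can be sketched in a few lines of Python. The classes and names below (AgentStep, AgentTrace, the loan-review example) are hypothetical illustrations, not Dataiku's actual API; they simply show what it means to record each step's inputs, output, and stated rationale so a reviewer can reconstruct the "why" after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class AgentStep:
    """One recorded step in an agent run: what was attempted, with what inputs, and what came back."""
    name: str               # e.g. "fetch_credit_report" or "call_llm" (illustrative names)
    inputs: dict[str, Any]
    output: Any
    rationale: str          # the agent's stated reason for taking this step
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class AgentTrace:
    """An ordered, inspectable record of a full multi-step agent workflow."""
    run_id: str
    steps: list[AgentStep] = field(default_factory=list)

    def record(self, step: AgentStep) -> None:
        self.steps.append(step)

    def explain(self) -> str:
        """Render a human-readable account of how the agent reached its conclusion."""
        lines = [f"Run {self.run_id}:"]
        for i, s in enumerate(self.steps, start=1):
            lines.append(f"  {i}. {s.name} at {s.timestamp}: {s.rationale} -> {s.output!r}")
        return "\n".join(lines)

# Example: a two-step workflow a compliance reviewer could audit after the fact.
trace = AgentTrace(run_id="loan-review-42")
trace.record(AgentStep(
    name="fetch_credit_report",
    inputs={"applicant_id": "A-1001"},
    output={"score": 712},
    rationale="Credit score is required before any risk assessment.",
))
trace.record(AgentStep(
    name="assess_risk",
    inputs={"score": 712, "policy": "tier-2"},
    output="approve_with_conditions",
    rationale="Score above the 680 threshold set by tier-2 policy.",
))
print(trace.explain())
```

However the real toolkit structures its traces, the value is the same: every autonomous step leaves an auditable record rather than disappearing inside a single opaque response.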
The second project tackles another enterprise tension: leveraging powerful closed-source models while protecting sensitive data.
Privacy-Preserving Proxies are designed to:
Protect sensitive data end-to-end
Enable safer interaction with closed-source models
Run locally within enterprise environments
Many organizations hesitate to send proprietary or regulated data into external AI APIs. By introducing proxy layers that sanitize and manage data flows, Dataiku aims to reduce that risk without sacrificing access to high-performing models.
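The release does not detail how the proxies are built, but the underlying pattern is straightforward: intercept outbound requests inside the enterprise environment, strip or tokenize sensitive values, and restore them only after the external model responds. The sketch below is an illustrative assumption, not Dataiku's implementation; REDACTION_RULES, sanitize, and proxy_call are hypothetical names, and the external model is a stand-in function rather than a real API call.

```python
import re
from typing import Callable

# Hypothetical redaction rules: patterns for data that should never leave the enterprise boundary.
REDACTION_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholders and keep a local mapping so the reply can be restored."""
    mapping: dict[str, str] = {}
    for label, pattern in REDACTION_RULES.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def proxy_call(prompt: str, external_model: Callable[[str], str]) -> str:
    """Sanitize locally, send only the redacted prompt out, then restore placeholders in the reply."""
    redacted, mapping = sanitize(prompt)
    reply = external_model(redacted)          # only redacted text crosses the boundary
    for placeholder, original in mapping.items():
        reply = reply.replace(placeholder, original)
    return reply

# Stand-in for a closed-source model API; in practice this would be an HTTPS call to the provider.
fake_model = lambda p: f"Draft response referencing {p}"
print(proxy_call("Contact jane.doe@example.com about SSN 123-45-6789", fake_model))
```

Because the placeholder mapping lives only in local memory, the sensitive values never leave the enterprise environment, while the high-performing external model still does the heavy lifting on the redacted text.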
This reflects a broader industry shift. Enterprises increasingly want hybrid AI stacks—combining open and closed models, internal tools, and external APIs. Governance layers that mediate those interactions are becoming critical infrastructure.
The 575 Lab builds on Dataiku’s decade of enterprise AI experience and extends its involvement in the open-source ecosystem. The company is a member of the Linux Foundation and the Agentic AI Foundation, signaling an intent to collaborate rather than operate in isolation.
Florian Douetteau, CEO and co-founder of Dataiku, emphasizes reusable building blocks as the goal. As enterprises construct increasingly complex agentic ecosystems, standardized control and inspection mechanisms will likely emerge as industry norms. By contributing these tools in the open, Dataiku hopes to help shape those standards.
The timing is strategic. As regulatory scrutiny intensifies globally, enterprises are under pressure to demonstrate responsible AI practices. Toolkits that support explainability, privacy, and governance may soon be prerequisites for large-scale deployments.
Enterprise AI platforms are rapidly adding governance features—model monitoring, bias detection, compliance reporting. What differentiates 575 Lab is its open-source orientation.
Rather than locking governance capabilities inside proprietary systems, Dataiku is pushing foundational components into the open. That approach may appeal to large enterprises wary of vendor lock-in and eager to align with emerging community standards.
At the same time, open-source governance tools can accelerate adoption by enabling cross-platform compatibility. In agentic AI environments where multiple vendors’ systems interact, interoperability matters.
If successful, 575 Lab could position Dataiku not just as an AI platform provider, but as a contributor to the trust infrastructure underpinning enterprise AI at large.
The 575 Lab is now open to AI specialists, data scientists, developers, and enterprise partners. Community members can follow the projects, contribute, and help shape what Dataiku describes as “open trust infrastructure” for AI at scale.
That community-driven approach aligns with the broader open-source ethos: transparency, collaboration, and shared accountability.
As AI systems become more autonomous and more consequential, enterprises need more than model access. They need visibility, control, and standards they can rely on.
With 575 Lab, Dataiku is betting that trust in AI will be built not just through performance benchmarks, but through open, inspectable foundations. In the race toward agentic enterprise systems, governance may prove to be the most valuable innovation of all.