artificial intelligence
Yashaswi Mudumbai, Senior Director of Solutions Engineering, APAC, JFrog
Q1: JFrog has announced a new integration with NVIDIA around agentic AI. What problem is this solving and why is it becoming critical now?
At the core, this solution closes a growing trust gap. As AI evolves from copilots to autonomous agents that can access systems, data, and tools, those agents require stronger governance than traditional software pipelines can provide. The risk is real: just as a malicious software package can compromise an application, an unvetted skill can guide an agent to perform harmful actions.
In an agentic environment, governance is no longer just about code and packages; it extends to skills, models, MCP services, and other agentic assets that can directly influence how AI behaves in production.
This is critical because AI agents are moving from experimentation into real enterprise workflows. JFrog’s new Agent Skills Registry, with early integration with NVIDIA, is designed to provide the missing trust layer required for autonomous AI workforces to operate safely at enterprise speed and scale.
By serving as a secure system of record for skills, models, MCPs, and agentic binary assets, JFrog provides a single source of truth for rigorously scanning and governing those assets, which NVIDIA’s NemoClaw then executes in highly isolated sandboxes with zero initial permissions. This ensures every skill is approved and safe for use at enterprise scale.
Enterprises cannot rely on blind trust; they need a way to verify which agents and assets are being used, where they come from, and whether they comply with internal policies before agents can operate at scale.
Q2: Many Australian organisations struggle to move AI projects from pilot to production due to security and compliance concerns. How does this joint solution with NVIDIA help bridge that gap?
One of the biggest barriers to scaling AI is that innovation often outpaces governance. Teams build pilots and test models, but when it comes to deploying them into production, questions around security, compliance, and accountability slow everything down.
The partnership between JFrog and NVIDIA helps put structure around that process, giving organisations a centralised way to manage all the components that power AI agents, from models to connectors to reusable skills, while ensuring they meet enterprise standards before they are deployed.
Instead of relying on fragmented tools or manual approvals, organisations can automate checks, enforce policies, and maintain visibility across the entire lifecycle. That makes it much easier to move from experimentation to production without introducing unmanaged risk.
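The automated checks described above can be sketched as a simple policy gate. This is a hypothetical illustration only; the asset fields, allow-list, and function names are assumptions for the example, not an actual JFrog or NVIDIA API.

```python
# Hypothetical sketch of an automated policy gate for agent assets.
# All names and fields here are illustrative, not a real registry API.

from dataclasses import dataclass, field

@dataclass
class AgentAsset:
    name: str
    source: str                 # registry the asset was pulled from
    signed: bool                # provenance signature verified
    scan_passed: bool           # security scan result
    approvals: list = field(default_factory=list)

APPROVED_SOURCES = {"internal-registry"}  # illustrative allow-list

def policy_violations(asset: AgentAsset) -> list:
    """Return a list of policy violations; an empty list means the asset may deploy."""
    violations = []
    if asset.source not in APPROVED_SOURCES:
        violations.append("unapproved source")
    if not asset.signed:
        violations.append("missing provenance signature")
    if not asset.scan_passed:
        violations.append("failed security scan")
    if not asset.approvals:
        violations.append("no recorded approval")
    return violations

skill = AgentAsset("invoice-reader", "internal-registry", True, True, ["sec-team"])
print(policy_violations(skill))  # prints [] — the skill clears the gate
```

Because the checks run automatically, a clean result lets the asset flow straight to production, while any violation blocks it without a manual review queue.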
Q3: As AI adoption accelerates globally, how is the concept of an “AI Supply Chain” evolving compared to traditional software pipelines, and how is Australia responding?
The AI supply chain is fundamentally different from traditional software delivery. In the past, organisations were managing relatively static components like code and packages. Now they are dealing with dynamic elements such as models, datasets, prompts, and agent behaviours.
Because AI systems now adapt and act independently, organisations need to track not only what goes into an application but also how it behaves once deployed. In Australia, we’re seeing a strong emphasis on governance and accountability as part of this shift, particularly as organisations align with the Australian Government’s AI in Government Policy and broader responsible AI frameworks that emphasise transparency, accountability, and safe deployment.
Enterprises are recognising that adopting AI at scale requires visibility, traceability, and control, particularly in an increasingly regulated marketplace.
Q4: Australia is seeing growing enterprise investment in AI, particularly across sectors like financial services and government. What specific risks or opportunities do you see for Australian organisations adopting agentic AI?
When agents are given access to internal systems, data, and workflows, any gap in oversight can lead to serious consequences, from data exposure to compliance breaches. There is also a growing concern around ‘shadow AI’, where teams adopt tools or models outside of approved processes. This creates blind spots for security and governance teams, making it difficult to understand what is actually running inside the organisation.
For Australian enterprises, especially those operating in regulated environments, the priority is to ensure that innovation is matched with strong controls from the outset. Those that get this balance right have a clear opportunity to build a trusted AI and software supply chain that not only reduces risk, but also accelerates speed to market by giving teams the confidence to scale AI safely and consistently.
Q5: Trust and governance are emerging as major concerns for enterprises deploying AI agents. How does JFrog’s new Agent Skills Registry address these challenges in practical terms?
JFrog’s Agent Skills Registry is designed to bring order to what is otherwise a highly fragmented landscape. It acts as a central point where organisations can manage the different components that AI agents rely on.
This means every skill or asset can be inspected, validated, and approved before it is made available for use. It also allows organisations to define who can access what and under what conditions, ensuring that agents operate within clearly defined boundaries.
Importantly, it creates an audit trail, enabling organisations to track where assets came from, how they were used, and whether they meet compliance requirements. That level of visibility is essential for building trust in systems that are becoming autonomous.
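The audit trail described above amounts to an append-only log of provenance events. The following is a minimal sketch of that idea; the event names, fields, and storage are assumptions for illustration, not the registry's actual data model.

```python
# Hypothetical sketch of an append-only audit trail for agent assets.
# Event names and fields are illustrative only.

import json
import time

audit_log = []  # in practice this would be durable, append-only storage

def record_event(asset: str, action: str, actor: str) -> dict:
    """Append one provenance event; entries are never mutated after being written."""
    event = {
        "asset": asset,
        "action": action,      # e.g. "published", "scanned", "approved", "used"
        "actor": actor,
        "timestamp": time.time(),
    }
    audit_log.append(event)
    return event

record_event("invoice-reader", "published", "dev-team")
record_event("invoice-reader", "scanned", "security-scanner")
record_event("invoice-reader", "approved", "governance-board")

# Answering "where did this asset come from, and how was it handled?"
history = [e["action"] for e in audit_log if e["asset"] == "invoice-reader"]
print(json.dumps(history))  # ["published", "scanned", "approved"]
```

Keeping the log append-only is what makes it useful for compliance: the full history of who published, scanned, and approved an asset can be replayed at any time.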
On the execution side, NVIDIA’s NemoClaw then runs each agent in an isolated virtual environment, sandboxed with zero initial permissions. As a result, even if a skill behaves unexpectedly, it cannot affect broader systems or trigger network-level risk.
Q6: For developers and engineering teams in Australia, how can they balance strong governance with the need to innovate quickly when building and deploying AI agents?
The goal is to embed governance into the workflow rather than treat it as a separate step. If security and compliance rely on manual reviews, they will always slow teams down.
Instead, organisations should focus on automating these controls. By providing developers with access to pre-approved, trusted components, they can move quickly without needing to navigate complex approval processes each time.
This approach allows teams to maintain speed while ensuring that everything they use has already been vetted. For Australian organisations, particularly those under regulatory pressure, this balance between agility and control is critical to scaling AI successfully.