artificial intelligence 22 Apr 2026
Yashaswi Mudumbai, Senior Director of Solutions Engineering, APAC, JFrog
Q1: JFrog has announced a new integration with NVIDIA around agentic AI. What problem is this solving and why is it becoming critical now?
At its core, this solution closes a growing trust gap. As AI evolves from copilots to autonomous agents that can access systems, data, and tools, those agents require stronger governance than traditional software pipelines can provide. The risk is real: just as a malicious software package can compromise an application, an unvetted skill can guide an agent to perform harmful actions.
In an agentic environment, governance is now about skills, models, MCP services, and other agentic assets that can directly influence how AI behaves in production.
This is critical because AI agents are moving from experimentation into real enterprise workflows. JFrog’s new Agent Skills Registry, with early integration with NVIDIA, is designed to provide the missing trust layer required for autonomous AI workforces to operate safely at enterprise speed and scale.
By serving as a secure system of record for skills, models, MCPs, and other agentic binary assets, JFrog provides a single source of truth where every asset is rigorously scanned and governed; NVIDIA's NemoClaw then executes approved skills in highly isolated sandboxes with zero initial permissions. This ensures every skill is approved and safe for use at enterprise scale.
Enterprises cannot rely on blind trust; they need a way to verify which agents and assets are being used, where they come from, and whether they comply with internal policies before agents can operate at scale.
Q2: Many Australian organisations struggle to move AI projects from pilot to production due to security and compliance concerns. How does this joint solution with NVIDIA help bridge that gap?
One of the biggest barriers to scaling AI is that innovation often outpaces governance. Teams build pilots and test models, but when it comes to deploying them into production, questions around security, compliance, and accountability slow everything down.
The partnership between JFrog and NVIDIA helps put structure around that process, giving organisations a centralised way to manage all the components that power AI agents, from models to connectors to reusable skills, while ensuring they meet enterprise standards before they are deployed.
Instead of relying on fragmented tools or manual approvals, organisations can automate checks, enforce policies, and maintain visibility across the entire lifecycle. That makes it much easier to move from experimentation to production without introducing unmanaged risk.
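The automated checks described above can be sketched in miniature. The snippet below is an illustrative, hypothetical policy gate (not JFrog's or NVIDIA's actual API): it approves an agent asset only if it comes from an approved source, is signed, and carries no critical scan findings. All names here are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAsset:
    """A model, skill, or MCP connector awaiting approval (illustrative)."""
    name: str
    kind: str                                  # e.g. "model", "skill", "mcp"
    source: str                                # registry the asset was fetched from
    scan_findings: list = field(default_factory=list)
    signed: bool = False

# Hypothetical policy inputs: only this registry is trusted.
APPROVED_SOURCES = {"internal-registry.example.com"}

def approve(asset: AgentAsset) -> tuple[bool, list[str]]:
    """Return (approved, reasons) under the example policy above."""
    reasons = []
    if asset.source not in APPROVED_SOURCES:
        reasons.append(f"untrusted source: {asset.source}")
    if not asset.signed:
        reasons.append("missing signature")
    if any(f == "critical" for f in asset.scan_findings):
        reasons.append("critical scan finding")
    return (not reasons, reasons)
```

The point of the sketch is that the gate runs automatically on every asset, replacing manual approval queues with machine-checkable policy.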
Q3: As AI adoption accelerates globally, how is the concept of an “AI Supply Chain” evolving compared to traditional software pipelines, and how is Australia responding?
The AI supply chain is fundamentally different from traditional software delivery. In the past, organisations were managing relatively static components like code and packages. Now they are dealing with dynamic elements such as models, datasets, prompts, and agent behaviours.
With AI systems now adapting and acting independently, organisations need to track not only what goes into an application but also how it behaves once deployed. In Australia, we’re seeing a strong emphasis on governance and accountability as part of this shift, particularly as organisations align with the Australian Government’s AI in Government Policy and broader responsible AI frameworks that emphasise transparency, accountability, and safe deployment.
Enterprises are recognising that adopting AI at scale requires visibility, traceability, and control, particularly in an increasingly regulated marketplace.
Q4: Australia is seeing growing enterprise investment in AI, particularly across sectors like financial services and government. What specific risks or opportunities do you see for Australian organisations adopting agentic AI?
When agents are given access to internal systems, data, and workflows, any gap in oversight can lead to serious consequences, from data exposure to compliance breaches. There is also a growing concern around ‘shadow AI’, where teams adopt tools or models outside of approved processes. This creates blind spots for security and governance teams, making it difficult to understand what is actually running inside the organisation.
For Australian enterprises, especially those operating in regulated environments, the priority is to ensure that innovation is matched with strong controls from the outset. Those that get this balance right have a clear opportunity to build a trusted AI and software supply chain that not only reduces risk, but also accelerates speed to market by giving teams the confidence to scale AI safely and consistently.
Q5: Trust and governance are emerging as major concerns for enterprises deploying AI agents. How does JFrog’s new Agent Skills Registry address these challenges in practical terms?
JFrog’s Agent Skills Registry is designed to bring order to what is otherwise a highly fragmented landscape. It acts as a central point where organisations can manage the different components that AI agents rely on.
This means every skill or asset can be inspected, validated, and approved before it is made available for use. It also allows organisations to define who can access what and under what conditions, ensuring that agents operate within clearly defined boundaries.
Importantly, it creates an audit trail, enabling organisations to track where assets came from, how they were used, and whether they meet compliance requirements. That level of visibility is essential for building trust in systems that are becoming autonomous.
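One common way to make such an audit trail tamper-evident is hash-chaining, where each entry embeds the hash of the previous one. The sketch below is a generic illustration of that technique, not a description of how JFrog's registry stores its records.

```python
import hashlib
import json
import time

def append_audit_event(log: list, event: dict) -> dict:
    """Append an event to a hash-chained audit log (illustrative).

    Each entry records the hash of the previous entry, so any
    retroactive edit to history breaks the chain and is detectable.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "prev": prev_hash, "ts": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps({"event": event, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

A compliance reviewer can then walk the chain from the newest entry back to the first and confirm every hash matches, which is what gives the trail its evidentiary value.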
On the execution side, NVIDIA’s NemoClaw runs each agent in an isolated virtual environment, sandboxed with zero initial permissions. Even if a skill behaves unexpectedly, it cannot affect broader systems or trigger network-level risk.
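The "zero initial permissions" model described above is a default-deny design. The toy sketch below illustrates the principle only; it is not NemoClaw's interface, and the class and capability names are invented for the example.

```python
class Sandbox:
    """Toy default-deny sandbox: a skill starts with zero permissions,
    and every capability must be granted explicitly before use."""

    def __init__(self):
        self.granted: set = set()

    def grant(self, capability: str) -> None:
        """Explicitly allow one capability, e.g. 'network' or 'fs:read'."""
        self.granted.add(capability)

    def invoke(self, capability: str, action):
        """Run `action` only if its capability was granted; fail closed."""
        if capability not in self.granted:
            raise PermissionError(f"capability '{capability}' not granted")
        return action()
```

Starting from an empty grant set means a misbehaving skill fails closed by default, and every capability it does hold traces back to an explicit, auditable grant.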
Q6: For developers and engineering teams in Australia, how can they balance strong governance with the need to innovate quickly when building and deploying AI agents?
The goal is to embed governance into the workflow rather than treat it as a separate step. If security and compliance rely on manual reviews, they will always slow teams down.
Instead, organisations should focus on automating these controls. By providing developers with access to pre-approved, trusted components, they can move quickly without needing to navigate complex approval processes each time.
This approach allows teams to maintain speed while ensuring that everything they use has already been vetted. For Australian organisations, particularly those under regulatory pressure, this balance between agility and control is critical to scaling AI successfully.