Published on: Oct 17, 2025
Oracle Cloud Infrastructure (OCI) is doubling down on performance and efficiency. The company has announced the upcoming general availability of A4 compute shapes, powered by AmpereOne® M, the newest generation of Ampere-based processors. With this launch, Oracle becomes the first cloud provider to bring AmpereOne® M-based instances to market, further cementing its leadership in scalable, sustainable cloud infrastructure.
Slated for a November release across key regions—including Ashburn, Phoenix, Frankfurt, and London—the new A4 compute shapes promise a major leap in AI workload efficiency, price-performance, and energy savings.
The A4 line builds on the success of OCI’s A1 and A2 compute shapes, which already serve more than 1,000 customers globally. With A4, Oracle is delivering up to 45% higher per-core performance on cloud-native workloads and 30% better price-performance compared to AMD EPYC-based OCI E6 shapes.
Available in both bare metal and virtual machine configurations, A4 instances feature 96 cores running at 3.6 GHz, 100G networking, and 12 channels of DDR5 memory, making them ideal for AI inference, data analytics, and high-performance enterprise applications.
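As a rough illustration of how such an instance would be provisioned, a launch via the standard OCI CLI might look like the sketch below. Note that the shape name `VM.Standard.A4.Flex`, the shape-config values, and the OCID placeholders are assumptions for illustration only; Oracle has not yet published the official A4 shape identifiers.

```shell
# Hypothetical sketch: launching an A4 VM with the OCI CLI.
# The shape name and all OCIDs below are illustrative placeholders,
# not confirmed identifiers.
oci compute instance launch \
  --availability-domain "Uocm:PHX-AD-1" \
  --compartment-id "ocid1.compartment.oc1..example" \
  --shape "VM.Standard.A4.Flex" \
  --shape-config '{"ocpus": 16, "memoryInGBs": 96}' \
  --subnet-id "ocid1.subnet.oc1..example" \
  --image-id "ocid1.image.oc1..example"
```

As with OCI's existing A1 flexible shapes, a flexible A4 shape would let customers dial OCPU and memory counts independently rather than choosing from fixed sizes.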
"Customers choose OCI for choice and flexibility," said Kiran Edara, VP of Compute at Oracle Cloud Infrastructure. "Our new Ampere A4 shape builds on what innovators like Uber and Oracle Red Bull Racing already achieve on OCI—delivering stronger price-performance and meaningful power savings."
Designed specifically for cloud and AI workloads, AmpereOne® M offers predictable performance and scalability without the overhead of legacy x86 architectures. It enables efficient compute performance across diverse workloads, from web services to machine learning inference.
"AmpereOne® M was designed from the ground up for cloud and AI," said Jeff Wittich, Chief Product Officer at Ampere. "Our collaboration with OCI gives customers access to the full potential of this latest processor."
With A4 shapes, enterprises can expect faster AI inferencing, reduced time-to-first-token, and higher tokens-per-second—key metrics for developers running small and mid-sized LLMs such as Llama 3.1 8B. OCI’s internal benchmarks suggest an 83% price-performance advantage compared to Nvidia A10-based systems, enabling cost-effective CPU-based AI deployment at scale.
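The two metrics named above are straightforward to derive from per-token timestamps. The sketch below (with invented timestamp values, purely for illustration) shows how time-to-first-token and decode tokens-per-second are typically computed:

```python
def inference_metrics(request_time, token_times):
    """Compute time-to-first-token (TTFT) and decode tokens-per-second
    from a request start time and per-token completion timestamps.

    request_time: wall-clock time (s) the request was issued.
    token_times:  ascending wall-clock times (s) each token arrived.
    """
    # TTFT: latency until the first generated token appears.
    ttft = token_times[0] - request_time
    # Decode throughput: tokens after the first, over the decode window.
    decode_tokens = len(token_times) - 1
    decode_window = token_times[-1] - token_times[0]
    tps = decode_tokens / decode_window if decode_window > 0 else 0.0
    return ttft, tps

# Example: request at t=0.0 s, five tokens arriving 125 ms apart
# starting at 0.25 s.
ttft, tps = inference_metrics(0.0, [0.25, 0.375, 0.5, 0.625, 0.75])
print(ttft)  # 0.25
print(tps)   # 8.0
```

Separating TTFT from throughput matters because the two stress different resources: the prefill phase that determines TTFT is compute-bound, while the decode phase that determines tokens-per-second is largely memory-bandwidth-bound, which is where A4's 12-channel DDR5 configuration is aimed.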
As AI adoption surges, enterprises face the dual challenge of scaling compute while managing costs and carbon footprint. The A4’s energy-efficient design tackles both, providing a sustainable alternative to power-hungry GPU clusters.
Ampere’s AI Playground, featuring optimized software libraries and pre-built demos on GitHub, is helping developers get started with LLM deployment and inference-ready applications quickly—accelerating adoption across industries.
Leading enterprises are already on board. Uber, which operates much of its infrastructure on OCI’s Ampere-based compute, plans to expand its workloads to A4, citing up to 15% more performance and lower power consumption.
Oracle Red Bull Racing, another early adopter, is leveraging A4 instances in its London data center for race strategy simulations and AI workloads, expecting a 12% performance boost.
Internally, Oracle is migrating key services like Fusion Applications and Block Storage to A4, enhancing SaaS performance and resilience. The company is also deploying memory tagging—a security feature unique to Ampere processors—to strengthen exploit detection with minimal overhead.
The A4 rollout highlights Oracle’s commitment to open, efficient cloud computing and reflects Ampere’s growing momentum across the industry. As enterprises modernize for AI-driven workloads, A4 delivers the balance of performance, sustainability, and scalability that traditional systems struggle to match.
The message is clear: the future of cloud performance is efficient, AI-optimized, and Ampere-powered.