International Tech Times

Alluxio Closes Strong Q2 with Customer Growth, Sub-Millisecond Latency Capability for AI Data & Record MLPerf Storage v2.0 Benchmark Results

SAN MATEO, Calif., Aug. 27, 2025 (GLOBE NEWSWIRE) -- Alluxio, the AI and data-acceleration platform, today announced strong results for the second quarter of its 2026 fiscal year. During the quarter, the company launched Alluxio Enterprise AI 3.7, a major release that delivers sub-millisecond TTFB (time to first byte) latency for AI workloads accessing data on cloud storage.

Alluxio also reported new customer wins across multiple industries and AI use cases, including model training, model deployment, and feature store query acceleration. In addition, the MLPerf Storage v2.0 benchmark results underscored Alluxio’s leadership in AI infrastructure performance, with the platform achieving exceptional GPU utilization and I/O acceleration across diverse training and checkpointing workloads.

"This was a phenomenal quarter for Alluxio, and I couldn’t be prouder of what the team has achieved,” said Haoyuan Li, Founder and CEO, Alluxio. “With Alluxio Enterprise AI 3.7, we’ve eliminated one of the most stubborn bottlenecks in AI infrastructure, cloud storage performance. By combining sub-millisecond latency with our industry-leading, throughput-maximizing distributed caching technology, we’re delivering even greater value to our customers building and serving AI models at scale. The strong customer momentum and outstanding MLPerf benchmark results further reinforce Alluxio’s critical role in the AI infrastructure stack.”

Key Features of Alluxio Enterprise AI 3.7

  • Ultra-Low Latency Caching for Cloud Storage – Alluxio AI 3.7 introduces a distributed, transparent caching layer that reduces latency to sub-millisecond levels when retrieving AI data from cloud storage. It achieves up to 45× lower latency than S3 Standard and 5× lower latency than S3 Express One Zone, plus up to 11.5 GiB/s (98.7 Gbps) of throughput per worker node, with linear scalability as nodes are added.
  • Enhanced Cache Preloading – The Alluxio Distributed Cache Preloader now supports parallel loading, delivering up to 5× faster cache preloading to ensure hot data availability for faster AI training and inference cold starts.
  • Role-Based Access Control (RBAC) for S3 Access – New granular RBAC capabilities allow tight integration with identity providers (OIDC/OAuth 2.0, Apache Ranger), controlling user authentication, authorization, and permitted operations on cached data.
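As a quick sanity check on the per-worker throughput figure above, the two units quoted (11.5 GiB/s and 98.7 Gbps) can be reconciled with a short calculation. This is only a unit-conversion sketch using the figures from the announcement; the conversion constants are the standard binary-gibibyte and decimal-gigabit definitions.

```python
# Reconcile the per-worker throughput quoted for Alluxio Enterprise AI 3.7:
# 11.5 GiB/s (binary gibibytes) expressed in Gbps (decimal gigabits).

GIB = 2**30          # bytes in one gibibyte (binary unit)
BITS_PER_BYTE = 8
GBIT = 10**9         # bits in one gigabit (decimal unit, as used for network rates)

gib_per_s = 11.5
bits_per_s = gib_per_s * GIB * BITS_PER_BYTE
gbps = bits_per_s / GBIT

print(f"{gib_per_s} GiB/s ≈ {gbps:.1f} Gbps")  # ≈ 98.8 Gbps, consistent with the ~98.7 Gbps quoted
```

The small difference between 98.8 and the quoted 98.7 comes down to rounding; the two figures describe the same rate.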

Customer Momentum in H1 2025

The first half of 2025 saw record market adoption of Alluxio AI, with customer growth exceeding 50% compared to the previous period. Organizations across tech, finance, e-commerce, and media sectors have increasingly deployed Alluxio’s AI acceleration platform to enhance training throughput, streamline feature store access, and speed inference workflows. With growing deployments across hybrid and multi-cloud environments, demand for Alluxio AI reflects rapidly rising expectations for high-performance, low-latency AI data infrastructure. Notable customers added in the half include:

  • Salesforce
  • Dyna Robotics
  • Geely

Substantial I/O Performance Gains Confirmed in MLPerf Storage v2.0 Benchmark

Alluxio’s distributed caching architecture demonstrated its ability to maximize GPU efficiency and AI workload performance in the MLPerf Storage v2.0 benchmarks:

  • Training Throughput
    • ResNet50: 24.14 GiB/s supporting 128 accelerators with 99.57% GPU utilization, scaling linearly from 1 to 8 clients and 2 to 8 workers.
    • 3D-Unet: 23.16 GiB/s with 8 accelerators, 99.02% GPU utilization, similarly scaling linearly.
    • CosmoFlow: 4.31 GiB/s with 8 accelerators at 74.97% GPU utilization, nearly doubling performance when scaling clients.
  • LLM Checkpointing
    • Llama3-8B: 4.29 GiB/s read and 4.54 GiB/s write (read/write times: 24.44s and 23.14s).
    • Llama3-70B: 33.29 GiB/s read and 36.67 GiB/s write (read/write times: 27.39s and 24.86s).
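The checkpointing figures above are internally consistent: multiplying each throughput by its elapsed time yields roughly the same implied checkpoint size for the read and write passes of each model. The following back-of-envelope sketch uses only the published throughput and time figures; the implied sizes are derived, not reported in the benchmark.

```python
# Implied checkpoint sizes from the MLPerf Storage v2.0 figures:
# size ≈ throughput (GiB/s) × elapsed time (s).

results = {
    # model: (read GiB/s, read s, write GiB/s, write s)
    "Llama3-8B":  (4.29, 24.44, 4.54, 23.14),
    "Llama3-70B": (33.29, 27.39, 36.67, 24.86),
}

for model, (r_tp, r_t, w_tp, w_t) in results.items():
    read_size = r_tp * r_t    # GiB read back during restore
    write_size = w_tp * w_t   # GiB written during checkpoint
    print(f"{model}: ~{read_size:.0f} GiB read, ~{write_size:.0f} GiB written")
```

For both models the read and write passes imply the same checkpoint size to within a fraction of a percent (~105 GiB for Llama3-8B, ~912 GiB for Llama3-70B), which is what one would expect when the same checkpoint is written and then read back.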

"The MLPerf Storage v2.0 results validate the core value of our architecture: keeping GPUs fed with data at the speed they require,” Li added. “High GPU utilization translates directly into faster training times, better throughput for large models, and higher ROI on infrastructure investments. These benchmarks show that Alluxio can deliver performance at scale, across diverse workloads, without compromising flexibility in hybrid and multi-cloud environments.”

Availability

Alluxio Enterprise AI version 3.7 is available here: https://www.alluxio.io/demo

About Alluxio

Alluxio is a leading provider of accelerated data access platforms for AI workloads. Alluxio’s distributed caching layer accelerates AI and data-intensive workloads by enabling high-speed data access across diverse storage systems. By creating a global namespace, Alluxio unifies data from multiple sources—on-premises and in the cloud—into a single, logical view, eliminating the need for data duplication or complex data movement.

Designed for scalability and performance, Alluxio brings data closer to compute frameworks like TensorFlow, PyTorch, and Spark, significantly reducing I/O bottlenecks and latency. Its intelligent caching, data locality optimization, and seamless integration with modern data platforms make it a powerful solution for teams building and scaling AI pipelines across hybrid and multi-cloud environments. Backed by leading investors, Alluxio powers technology, internet, financial services, and telecom companies, including 9 out of the top 10 internet companies globally. To learn more, visit www.alluxio.io.

Media Contact:
Beth Winkowski
Winkowski Public Relations, LLC for Alluxio
978-649-7189
beth@alluxio.com



