
Google Cloud Managed Lustre, Powered by DDN EXAScaler, Now Generally Available as a High-Performance File System for AI and HPC Workloads

Managed Lustre Launch with DDN

Chatsworth, Calif. — July 8, 2025 — DDN®, the global leader in AI and data intelligence solutions, today announced that Google Cloud Managed Lustre, a fully managed, high-performance parallel file system service powered by DDN’s industry-leading EXAScaler® technology, is now generally available.

Designed to accelerate the most demanding workloads in AI, high-performance computing (HPC), and data-intensive enterprise environments, Google Cloud Managed Lustre brings the power of Lustre — the world’s most scalable parallel file system — natively into the Google Cloud ecosystem. The service is available globally.

“By bringing our EXAScaler technology to Google Cloud customers as a fully managed service, we’re enabling organizations across industries to accelerate innovation without the overhead of managing complex infrastructure,” said Paul Bloch, Co-Founder and President at DDN.

“Partnering with DDN allows us to bring their industry-leading parallel file systems to Google Cloud as a deeply integrated, first-party service,” said Asad Khan, Senior Director, Product Management, Google Cloud Storage. “By combining DDN’s decades of expertise in high-performance Lustre with Google Cloud’s global infrastructure and AI ecosystem, we are delivering a foundational capability that removes storage bottlenecks and helps our customers solve their most complex challenges in AI and HPC.”

“Enterprises today demand AI infrastructure that combines accelerated computing with high-performance storage solutions to deliver uncompromising speed, seamless scalability and cost efficiency at scale,” said Dave Salvator, Director of Accelerated Computing Products, NVIDIA. “Google and DDN’s collaboration on Google Cloud Managed Lustre creates a better-together solution uniquely suited to meet these needs. By integrating DDN’s enterprise-grade data platforms and Google’s global cloud capabilities, organizations can readily access vast amounts of data and unlock the full potential of AI with the NVIDIA AI platform on Google Cloud — reducing time-to-insight, maximizing GPU utilization, and lowering total cost of ownership.”

Purpose-Built for Performance, Simplicity, and Scale

Google Cloud Managed Lustre delivers industry-leading performance of up to 1 TB/s read throughput with <1ms latency and can scale seamlessly from 18 TiB to 8 PiB+. With multiple performance tiers (125 MB/s/TiB to 1000 MB/s/TiB), organizations can tailor performance to meet the specific needs of their AI, simulation, or analytics workloads.
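As a rough illustration of how the per-TiB performance tiers translate into aggregate bandwidth, the sketch below multiplies provisioned capacity by the tier's per-TiB throughput. The intermediate tier values (250 and 500 MB/s/TiB) and the helper function are assumptions for illustration, not an official sizing tool; only the 125 and 1000 MB/s/TiB endpoints appear in the announcement.

```python
# Back-of-the-envelope throughput math for Managed Lustre performance tiers.
# Tier values between the announced 125 and 1000 MB/s/TiB endpoints are
# assumed for illustration; consult Google Cloud documentation for the
# actual tier lineup.
TIERS_MB_PER_S_PER_TIB = (125, 250, 500, 1000)

def aggregate_throughput_mb_s(capacity_tib: float, tier: int) -> float:
    """Aggregate read throughput scales linearly with provisioned capacity."""
    if tier not in TIERS_MB_PER_S_PER_TIB:
        raise ValueError(f"unknown tier: {tier}")
    return capacity_tib * tier

# A 1,000 TiB file system on the top tier reaches the headline ~1 TB/s:
print(aggregate_throughput_mb_s(1000, 1000))  # 1000000 MB/s, i.e. ~1 TB/s
# The minimum 18 TiB instance on the lowest tier:
print(aggregate_throughput_mb_s(18, 125))     # 2250 MB/s
```

Because throughput is provisioned per TiB, growing capacity grows bandwidth in lockstep, which is why the same tier can serve both the 18 TiB entry point and an 8 PiB+ deployment.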

The service offers:

  • Fully managed operations through the Google Cloud Console, CLI, and APIs
  • Native integration with Google Cloud services like Google Cloud Compute Engine, Google Kubernetes Engine (GKE), IAM, VPC Service Controls, and Google Cloud’s Vertex AI platform
  • Optimized performance for AI/ML training, financial modeling, rendering, genomic analysis, and more
  • Enterprise-grade reliability with a 99.9% SLA
  • Terraform support, bulk data movement with Google Cloud Storage, and a Managed CSI Driver for GKE

Unlocking Value Across Industries

From LLM training and risk analysis to climate research and drug discovery, Google Cloud Managed Lustre serves customers in financial services, life sciences, manufacturing, public sector, and media and entertainment.

“I consolidated the previous storage systems into one centralized DDN storage system using a global EXAScaler Lustre file system, which can deliver the needed performance and scalability. Having spent 20 years implementing DDN systems successfully, it was an easy decision for me to choose DDN for Helmholtz Munich’s infrastructure rehaul,” said Dr. Alf Wachsmann, Head of DigIT Infrastructure & Scientific Computing at Helmholtz Munich.

Availability and Getting Started

Google Cloud Managed Lustre is generally available. Customers can deploy instances directly through the Google Cloud Console or speak with Google Cloud or DDN representatives for tailored guidance and support.

To learn more, read the Google blog: https://cloud.google.com/blog/products/storage-data-transfer/google-cloud-managed-lustre-for-ai-hpc.

About DDN

DDN is the world’s leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN’s proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns.

Follow DDN: LinkedIn, X, and YouTube.

DDN Media Contact:

Amanda Lee
VP, Marketing – Analyst and Media Relations
amlee@ddn.com

Last Updated: Sep 8, 2025 12:51 AM

DDN’s Data Platform Propels xAI’s Colossus to World-Class Performance

With 100,000 NVIDIA GPUs, DDN’s high-efficiency data platform enables Grok to push the limits of natural language processing and AI inference at an unprecedented scale.

CHATSWORTH, Calif., Nov. 18, 2024 – DDN®, a leading force in AI data intelligence, proudly announces a collaboration with NVIDIA to drive xAI’s Project Colossus in Memphis, Tennessee. This collaboration is a cornerstone in xAI’s bold vision to expand AI’s potential, driving Grok. Initially fueled by a combination of 100,000 NVIDIA Hopper GPUs and the NVIDIA Spectrum-X Ethernet networking platform, the solution maintains a 95% data throughput efficiency level during massive AI training. Colossus will soon scale to 200,000 GPUs, cementing its place as one of the world’s most powerful AI supercomputers and advancing the limits of what AI can achieve.

The Memphis facility, now a true data metropolis stretching across multiple data halls, has been designed to satisfy Grok’s requirement for speed, scale, and raw computational power.  Think of this infrastructure as converting a high-rise into a bustling hub, fully optimized to support one of the world’s most powerful AI engines. At its core, DDN’s advanced AI data platform, turbocharged by the NVIDIA accelerated computing platform, combines the power of DDN’s EXAScaler and Infinia solutions. This setup delivers the scale and precision that cutting-edge AI demands—an engine fine-tuned for extreme efficiency and designed to handle intensive generative AI workloads.

DDN’s platform, designed for organizations to scale model training and inference, allows data to flow smoothly and efficiently, thanks to its streamlined DataPath technology. This setup maximizes data movement without the usual strain on hardware, power, cooling, or network resources, enabling xAI to expand Colossus’ training capabilities while keeping costs down and minimizing environmental impact. The result is a supercomputer that is as efficient as it is powerful.

Leaders on the Cutting Edge:

“By powering DDN’s platform with NVIDIA’s accelerated computing platform, we are equipping xAI with the technology needed to advance its most ambitious AI projects,” said Alex Bouzari, CEO and co-founder of DDN. “Our solutions are specifically engineered to drive efficiency at massive scale, and this deployment at xAI perfectly demonstrates the capabilities of our high-performance, AI-optimized technology.”

Elon Musk, CEO of xAI said on X: “Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months. Excellent work by the team, NVIDIA and our many partners/suppliers.”

“Powerful AI systems require cutting-edge performance and scalability to meet the increasing demands of frontier AI models,” said Dion Harris, director of accelerated data center product solutions at NVIDIA. “Complementing the power of 100,000 NVIDIA Hopper GPUs connected via the NVIDIA Spectrum-X Ethernet platform, DDN’s cutting-edge data solutions provide xAI with the tools and infrastructure needed to drive AI development at exceptional scale and efficiency, helping push the limits of what’s possible in AI.”

Unprecedented Training Power and Efficiency

Project Colossus, supercharged by DDN, sets a new benchmark in AI model training power and speed. Grok taps into the massive compute power of 100,000 GPUs, all seamlessly supported by DDN’s EXAScaler and Infinia solutions. DDN’s data platform drastically reduces training time, enabling rapid model iteration and greater flexibility for updates. With Colossus and DDN’s architecture, xAI can tackle larger datasets and increasingly complex model architectures, driving breakthrough performance in applications like natural language processing and conversational AI—all at a scale previously thought unachievable.

Powering Real-World AI Inference at Scale

Beyond training, DDN’s high-efficiency platform amplifies AI inference capabilities in Colossus, allowing xAI to deploy powerful models at scale. DDN’s streamlined data pathways boost inference speeds for real-time applications, ensuring Grok’s impact is felt directly by users across platforms like X. The enhanced performance Colossus achieves by leveraging DDN solutions primes Grok to become one of the most advanced AI systems available commercially, bringing AI-driven user experiences to new heights and setting the standard for speed and scalability in real-world applications.

DDN Enables AI Success at Three Critical Levels:

  • Data Center & Cloud Optimization: DDN solutions deliver end-to-end optimization across compute, network, and storage for GPU workloads, reducing overhead and inefficiencies by 75% compared to alternative approaches. For large language models (LLMs), DDN achieves a 10x cost benefit by optimizing data loading, checkpointing, and inference in generative AI (GenAI). This means faster AI results, at lower cost, in a smaller footprint.
  • AI Framework/LLM/GenAI Acceleration: DDN accelerates the analytics layer in AI workflows, often boosting LLM performance by up to 10x, even in constrained environments. This reduces GPU waste, speeds up training, and shortens time to market for AI products, providing a strong business advantage.
  • Data Orchestration and Movement Optimization: The DDN platform ensures efficient data flow across edge, data center, and multi-cloud environments. By minimizing latency and reducing unnecessary data transfer, it cuts costs and enhances scalability, creating a flexible, future-proof infrastructure for AI-driven innovation.
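The checkpointing optimization mentioned above matters because a naive training loop stalls the GPUs while checkpoint data is written out. A schematic sketch of the general technique (overlapping checkpoint I/O with compute) is shown below; this is an illustration of the concept, not DDN's implementation, and the function and parameter names are invented for the example.

```python
# Schematic illustration of async checkpointing: the training loop snapshots
# state and hands the write to a background thread, so checkpoint I/O no
# longer blocks the next (GPU-bound, in real systems) training step.
import threading

def train_with_async_checkpoints(steps, train_step, save_fn, every=2):
    """Run `train_step` repeatedly; every `every` steps, write a
    checkpoint in the background instead of blocking the loop."""
    state = {"step": 0}
    writer = None
    for i in range(steps):
        state = train_step(state)      # compute continues on the hot path
        if (i + 1) % every == 0:
            if writer is not None:
                writer.join()          # avoid overlapping two writes
            snapshot = dict(state)     # cheap in-memory copy of the state
            writer = threading.Thread(target=save_fn, args=(snapshot,))
            writer.start()             # I/O proceeds off the hot path
    if writer is not None:
        writer.join()                  # make sure the last write lands
    return state

saved = []
final = train_with_async_checkpoints(
    steps=4,
    train_step=lambda s: {"step": s["step"] + 1},
    save_fn=saved.append,
    every=2,
)
print(final["step"], len(saved))  # 4 2
```

The faster the storage layer absorbs each snapshot, the shorter the `join()` waits become, which is the mechanism by which high-throughput checkpoint storage translates into higher GPU utilization.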

A Legacy of Collaboration with NVIDIA

For over seven years, DDN has been working with NVIDIA on supercomputing innovations, starting with the renowned Selene supercomputer. This collaboration grew to include support for the Eos supercomputer and now extends to the latest NVIDIA Blackwell platform.

About DDN

DDN is the world’s leading data intelligence company, providing an unfair advantage to over 11,000 customers focused on unlocking real-time AI & HPC insights. The DDN Data Intelligence Platform supercharges more than 500,000 GPUs worldwide across a broad range of use cases, including autonomous driving, financial services, healthcare, research, and academia. Manage complex data, enhance performance, deliver cost savings, increase security, and accelerate your AI & HPC workloads at scale from edge to core to cloud.

Contact:

Press Relations at DDN
 sgilmore@ddn.com

©2024 All rights reserved. DDN is a trademark or registered trademark owned by DataDirect Networks. All other trademarks are the property of their respective owners.

Last Updated: Nov 27, 2024 4:52 AM

DDN Launches Enterprise AI HyperPOD, the DDN AI Data Platform Built on Supermicro, Accelerated by NVIDIA at GTC-DC

Turnkey AI Data Platform Delivers Unmatched Efficiency, Scale, and ROI for Enterprise and Sovereign AI

Washington, D.C. — GTC DC — October 28, 2025 — DDN, the global leader in AI and data intelligence, announced the launch of DDN Enterprise AI HyperPOD, built on Supermicro, accelerated by NVIDIA, a turnkey AI data platform engineered to redefine how enterprises and sovereign organizations deploy and scale artificial intelligence.

Designed for industries including healthcare and life sciences, manufacturing, financial services, automotive, neoclouds, and sovereign AI, the new platform is based on the NVIDIA AI Data Platform reference design. It integrates DDN’s Infinia Data Intelligence Platform; Supermicro’s AI-optimized hardware, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA BlueField-3 DPUs; and NVIDIA AI Enterprise software, including NVIDIA NIM and NVIDIA NeMo Retriever microservices, to deliver record-setting efficiency, density, and scalability for inferencing at every level, from entry to exascale.

How the Turnkey Solution Impacts Business

  • Turnkey Deployment: Ready on day one with modular configurations, allowing enterprises to deploy and run AI Inference in minutes instead of months.
  • Maximum ROI: DDN HyperPOD accelerates the enterprise inference pipeline and offers significant advantages over alternative solutions, speeding data ingestion by 22x and the KV cache stage of inference by 18x.
  • Sustainability at Scale: 10x power savings and industry-leading density—2PB in 1U—make AI both transformative and ESG-compliant.
  • Hybrid and Future-Proof: Built to seamlessly integrate with Oracle Cloud Infrastructure (OCI), Google Managed Lustre, and future NVIDIA technologies, ensuring organizations can innovate today and adapt tomorrow.

Executive Perspective

“AI success isn’t about how many GPUs you buy, it’s about how well you use them,” said Sven Oehme, CTO at DDN. “Enterprises are tired of wasted capacity and spiraling costs. With DDN Enterprise AI HyperPOD, built on Supermicro, accelerated by NVIDIA, we’re delivering a turnkey platform that keeps GPUs 95% busy, transforms infrastructure into outcomes, and scales from edge deployments to sovereign AI factories. This is about giving CEOs and CIOs the confidence to invest in AI with measurable business returns.”

“Organizations need a new class of AI-native storage to gain real-time insights from unstructured data,” said Justin Boitano, vice president of enterprise AI products at NVIDIA. “The DDN Enterprise AI HyperPOD, which integrates the full NVIDIA AI stack on Supermicro systems, gives enterprises a proven, secure, and high-performance foundation to harness the full potential of AI data at scale.”

“DDN’s Enterprise AI HyperPOD, based on NVIDIA’s AI Data Platform architecture and using Supermicro Hyper servers and NVIDIA-based Supermicro GPU servers, makes deploying an enterprise AI solution a turnkey experience when purchased through Supermicro,” said Vik Malyala, President and Managing Director, EMEA, and SVP, Technology & AI, Supermicro. “Supermicro’s integration of all of the technologies and software, validation, and support enables customers to receive a fully unified solution for their AI workflow.”

Customer Momentum

Singtel, a leading sovereign AI cloud provider, has adopted the solution as part of its next-generation AI services:

“Our goal is to support enterprises across industries in accelerating the adoption of agentic AI and large-scale inference with the right balance of performance, economics, security, and compliance,” said Manoj Prasana Kumar, CTO at Singtel Digital Infraco. “By bringing together the strengths of Singtel’s AI infrastructure, connectivity, and managed services with DDN’s high-performance data platform, customers gain a seamless foundation to scale AI initiatives faster, mitigate risk, and move from proof-of-concept to production with confidence.”

Configurations for Every Stage of AI

  • XS (4 GPUs, 0.5+PB): Instant deployment for inference and edge AI.
  • Small (32 GPUs, 1+PB): Cost-efficient AI scaling for enterprise workloads.
  • Medium (64 GPUs, 3+PB): Full-scale enterprise AI factories.
  • Large (256 GPUs, 12+PB): Exascale, sovereign-grade AI for global leaders.

Each configuration is pre-integrated with Supermicro hardware, NVIDIA AI Enterprise software, and DDN Infinia 2.3, including the new NVIDIA Dynamo engine, for 100x faster metadata queries, ensuring enterprises can move seamlessly from pilot projects to massive production environments.

Industry Impact

By aligning compute, networking, and data intelligence, DDN Enterprise AI HyperPOD, built on Supermicro, accelerated by NVIDIA, addresses one of the most critical challenges in enterprise AI: wasted infrastructure. Where enterprises today can lose up to 60% of compute capacity to inefficiencies, Infinia sustains near-perfect GPU utilization, transforming infrastructure investment into scalable outcomes.

Availability

DDN Enterprise AI HyperPOD, built on Supermicro, accelerated by NVIDIA, is available immediately through Supermicro as turnkey packages. Enterprises and sovereign organizations can learn more at: https://www.ddn.com/partners/supermicro/ and https://www.supermicro.com/en/solutions/ddn.

About DDN

DDN is a leading AI and data intelligence company, empowering organizations to maximize the value of their data with end-to-end HPC and AI-focused solutions. Its customers range from the largest global enterprises and AI hyperscalers to cutting-edge research centers, all leveraging DDN’s proven data intelligence platform for scalable, secure, and high-performance AI deployments that drive 10x returns. Follow DDN: LinkedIn, X, and YouTube.

Media Contact:
Amanda Lee
VP, Marketing – Analyst and Public Relations
Email: amlee@ddn.com

Last Updated: Oct 28, 2025 11:42 AM