The KR9298-E3 AI Server is a general-purpose heterogeneous computing platform designed for large language model (LLM) inference, AI cloud services, and scalable HPC workloads.
Equipped with dual AMD EPYC™ 9005 processors (socket SP5, up to 500W each with air cooling) and 8x AMD Instinct™ MI350X GPUs, it delivers exceptional compute density, ultra-high aggregate GPU memory capacity (2TB HBM3e), and 48 TB/s of memory bandwidth for fast training, inference, and data processing.
Main Application Areas:
- AI & LLM Inference: Optimized for GPT-class models, generative AI, and multimodal AI
- Cloud Computing Services: Multi-tenant AI and HPC clusters
- High-Performance Computing (HPC): Scientific research, simulations, and engineering workloads
- Data-Intensive Workloads: Real-time analytics and GPU-accelerated big data processing
The KR9298-E3 provides enterprise-grade reliability, scalability, and performance density for AI-driven data centers and next-generation HPC infrastructures.
Price on request; please confirm availability before ordering.