Specifications & Features
AI servers are specialized computing systems built to handle the demanding workloads of artificial intelligence (AI) and machine learning (ML) applications. They are optimized for tasks such as training deep learning models, running inference, and processing large datasets. The tables below summarize typical applications, end users, and representative server configurations suited to AI/ML workloads.
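As a rough illustration of the inference side of these workloads, the sketch below runs a small neural-network forward pass on a GPU. It assumes PyTorch with CUDA support is installed; the model, layer sizes, and batch size are placeholders, not taken from the configurations listed below.

```python
import torch
import torch.nn as nn

# Pick the accelerator if one is visible, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model: two fully connected layers standing in for a real network.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
batch = torch.randn(64, 1024, device=device)   # a batch of 64 dummy inputs

with torch.no_grad():                          # inference: no gradients needed
    logits = model(batch)
print(logits.shape, "on", device)
```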
Application
AI, Machine Learning, Inference, Automotive, High-Performance Computing
End Users
Nvidia, SMC, Dell, HP, Huawei, T-Head, Cambricon
| Model | CPU | Motherboard | RDIMM | GPU | SSD/HDD | Power | Network | Other Card | Size | Remarks |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Inspur H20 | Xeon 8558P *2 | | D5-5600 *32 | HGX-H20 141G *8 | 960G SATA *2 + 3.84T NVMe SSD *4 | | CX7 400G *4 | RAID card | | |
| SMC H200 | SPR 8558 48C 2.1G 330W *2 | SYS-821GE-TNHR | D5-5600 64G *32 | HGX H200 141G *8 | 960G NVMe PCIe 4.0 M.2 *2 | | IB 400G Gen5 *8 | Intel X550 10G RJ45 | | Retail pack |
| Dell H100 | Xeon Platinum 8468 *2, 2.1G 48C/96T 105M 350W | XE9680 6U with 8 GPU, 2.5" NVMe | D5-5600 *32 | HGX H100 SXM 80GB 700W *8 | M.2 960G *2 + 3.84T U.2 Gen4 *8 | | Mellanox 100G | Nvidia ConnectX-7 400G NDR *8, Broadcom 5720 dual port | | |
| QCT H200 D74H-7U | Platinum 8558 48C/96T 105M *2 | | D5-5600 *32 | HGX H200 141GB *8 | 3.84T *4 | 4000W | IB 400G | | | |
| Unknown | Xeon 6530 32C 2.1G *2 | | D5-4800 *16 | RTX 4090 48G *8 | Enterprise 480G SATA *2 + 3.84T NVMe *2 | | 25G | RAID card | | |
| VFG-SYS-821GE-TNHR | 8558 *2, 260MB 330W | 8U X13, 8 GPU | 64GB D5-5600 2R *4, x32 | HGX-H200 141GB *8 | 7.68T NVMe PCIe 4.0 *2, 960G NVMe PCIe 4.0 *2 | 3000W (3+3) | IB 400G | | | SMC |
| Part Number | Type | Remarks |
| --- | --- | --- |
| GH200-888K-A1 | SoC | H200 server GPU |
| GH100-885K-A1 | SoC | H100 server GPU |
| AD102-301-A1 | SoC | RTX 4090 |
| Ascend 910B | 910B | Huawei, HBM2e |
| Ascend 910C | 910C | Huawei, HBM3 |
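For checking that a delivered node matches the GPU column of the configurations above (for example, 8x HGX H200 141GB), a minimal inventory sketch is shown below. It assumes the nvidia-ml-py (pynvml) package and an NVIDIA driver are installed on the node; the package and field names are standard NVML bindings, not taken from the listings themselves.

```python
import pynvml

pynvml.nvmlInit()
try:
    print("Driver:", pynvml.nvmlSystemGetDriverVersion())
    count = pynvml.nvmlDeviceGetCount()
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB")
finally:
    pynvml.nvmlShutdown()
```

The printed GPU count and per-device memory can then be compared directly against the GPU column of the configuration table.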