AMD Instinct MI300X
The MI300X is AMD's challenge to NVIDIA's data centre GPU dominance. Released in December 2023, it carries 192GB of HBM3 memory — 2.4× the H100's 80GB — on a single package, enabling large model inference without pooling memory across multiple GPUs. Its memory bandwidth of 5.3 TB/s exceeds the H100's 3.35 TB/s by roughly 58%. On inference workloads where memory capacity or bandwidth is the bottleneck, the MI300X is competitive with or better than the H100. The limiting factor is software: NVIDIA's CUDA ecosystem has a decade-long head start, and AMD's ROCm platform, while improving, requires more engineering effort to reach comparable performance. Microsoft deployed MI300X accelerators in Azure in 2024. AICI tracks the MI300X as the most credible hardware threat to NVIDIA's near-monopoly in AI chips.
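The capacity and bandwidth claims above can be checked with back-of-envelope arithmetic. The sketch below estimates the FP16 weight footprint of a hypothetical 70B-parameter model (an illustrative size, not from this entry) and a bandwidth-bound ceiling on single-batch decode throughput, under the common simplifying assumption that generating each token streams all weights from HBM once and that KV cache and activations are ignored.

```python
def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Weight memory in GB at FP16 (2 bytes/param); ignores KV cache and activations."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def max_decode_tok_s(weights: float, bandwidth_tb_s: float) -> float:
    """Roofline upper bound on decode rate: each token reads all weights once."""
    return bandwidth_tb_s * 1e12 / (weights * 1e9)

w = weights_gb(70)                    # 140.0 GB for a 70B-parameter model
fits_mi300x = w <= 192                # True: fits on a single MI300X
fits_h100 = w <= 80                   # False: needs multi-GPU pooling on an H100
ceiling = max_decode_tok_s(w, 5.3)    # ~37.9 tokens/s theoretical ceiling
```

This is only an upper bound: real decode rates are lower once KV-cache reads, kernel overheads, and batching effects are accounted for, but it illustrates why bandwidth and capacity, not FLOPS, dominate this class of workload.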
Specifications
Architecture: CDNA3 | Process: TSMC 5nm | FP16: 1,307 TFLOPS | Memory: 192GB HBM3 | Bandwidth: 5.3 TB/s | TDP: 750W | Interconnect: Infinity Fabric