In a nutshell: JEDEC has announced the HBM3 standard. And, like any good revision to a memory standard, it features a minor decrease in voltage, a slew of added conveniences, and a doubling of all the performance-related specifications. Bandwidth? Doubled. Layers? Doubled. Capacity? Doubled.
In numbers, an HBM3 stack can reach 819 GB/s of bandwidth and have 64 GB of capacity. In comparison, the HBM2e stacks used by the AMD MI250 have half the bandwidth, 410 GB/s, and a quarter of the capacity, a mere 16 GB.
With its eight stacks, the MI250 totals 128 GB of capacity and 3277 GB/s of bandwidth. Eight stacks of HBM3 would total 512 GB with 6552 GB/s of bandwidth.
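Those totals are plain multiplication, which a quick sketch can verify. It uses the unrounded per-stack rates of 409.6 and 819.2 GB/s (the text rounds them to 410 and 819), which are spec values rather than figures quoted above:

```python
# Sanity check of the 8-stack totals quoted above, using the unrounded
# per-stack bandwidths from the specs (rounded to 410 and 819 GB/s in the text).

STACKS = 8

HBM2E_BW_GBPS = 409.6   # GB/s per HBM2e stack
HBM2E_CAP_GB = 16       # GB per stack on the MI250
HBM3_BW_GBPS = 819.2    # GB/s per HBM3 stack
HBM3_CAP_GB = 64        # GB per stack at the HBM3 maximum

print(f"MI250 (HBM2e): {STACKS * HBM2E_CAP_GB} GB, {STACKS * HBM2E_BW_GBPS:.0f} GB/s")
# -> MI250 (HBM2e): 128 GB, 3277 GB/s
print(f"8x HBM3:       {STACKS * HBM3_CAP_GB} GB, {STACKS * HBM3_BW_GBPS:.0f} GB/s")
# -> 8x HBM3:       512 GB, 6554 GB/s (the 6552 GB/s above uses the rounded 819 GB/s figure)
```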
| | HBM3 | HBM2e | HBM2 | HBM2 |
|---|---|---|---|---|
| Specification | JESD238 | JESD235C | JESD235B | JESD235A |
| Bandwidth (per stack) | 819 GB/s | 410 GB/s | 307 GB/s | 256 GB/s |
| Dies (per stack) | 4 – 16 layers | 2 – 12 layers | 2 – 8 layers | 2 – 8 layers |
| Capacity (per die) | 4 GB | 2 GB | 1 GB | 1 GB |
| Capacity (per stack) | 64 GB | 24 GB | 8 GB | 8 GB |
| Voltage | 1.1 V | 1.2 V | 1.2 V | 1.2 V |
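The bandwidth row falls out of simple interface math: every generation keeps a 1024-bit interface, so per-stack bandwidth is pin count times per-pin data rate. Here's a minimal sketch; the per-pin rates (2.0, 2.4, 3.2, and 6.4 Gb/s) are the published JEDEC figures, not numbers taken from the table above.

```python
# Per-stack bandwidth = interface width (bits) * per-pin data rate (Gb/s) / 8 bits per byte.
# The 1024-bit interface and the per-pin rates are assumed from the JEDEC specs.

INTERFACE_BITS = 1024

PER_PIN_RATE_GBPS = {
    "HBM2 (JESD235A)": 2.0,
    "HBM2 (JESD235B)": 2.4,
    "HBM2e (JESD235C)": 3.2,
    "HBM3 (JESD238)": 6.4,
}

for name, rate in PER_PIN_RATE_GBPS.items():
    print(f"{name}: {INTERFACE_BITS * rate / 8:.1f} GB/s")
# -> 256.0, 307.2, 409.6 and 819.2 GB/s, matching the table once rounded
```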
For now, there aren't any HBM2e parts on the market that reach the specification's maximum.
HBM3 also doubles the number of independent channels, from eight to 16. With two "pseudo-channels" per channel, that works out to support for up to 32 virtual channels.
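Put concretely, the same 1024 data pins get carved into more, narrower channels. A rough sketch of that arithmetic follows; the channel widths (128-bit for HBM2e, 64-bit for HBM3) are spec details assumed here rather than stated in this article.

```python
# How the 1024-bit interface divides into channels and pseudo-channels.
# Channel widths are assumed from the HBM2e/HBM3 specs, not from the text above.

INTERFACE_BITS = 1024
PSEUDO_PER_CHANNEL = 2

def channel_layout(channel_bits: int) -> tuple[int, int]:
    """Return (independent channels, virtual channels) for a given channel width."""
    channels = INTERFACE_BITS // channel_bits
    return channels, channels * PSEUDO_PER_CHANNEL

print("HBM2e:", channel_layout(128))  # (8, 16)  -> 8 channels, 16 pseudo-channels
print("HBM3: ", channel_layout(64))   # (16, 32) -> 16 channels, 32 pseudo-channels
```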
According to JEDEC, HBM3 additionally addresses the “market need for high platform-level RAS (reliability, availability, serviceability)” with “strong, symbol-based ECC on-die, as well as real-time error reporting and transparency.”
JEDEC expects the first generation of HBM3 products to appear on the market soon but notes that they won't hit the specification's maximums. A more realistic outlook, it says, would be 2 GB dies in 12-layer stacks, for 24 GB per stack.
Image credit: Stephen Shankland