HBM3E
Industry’s fastest, highest-capacity HBM to advance generative AI innovation
Generative AI opens a world of new forms of creativity and expression, like the image above, by using large language models (LLMs) for training and inference. Utilization of compute and memory resources makes the difference in time to deploy and response time. Micron HBM3E provides higher memory capacity that improves performance and reduces CPU offload for faster training and more responsive queries when inferencing LLMs such as ChatGPT™.
AI unlocks new possibilities for businesses, IT, engineering, science, medicine and more. As larger AI models are deployed to accelerate deep learning, maintaining compute and memory efficiency is important to address performance, cost and power to ensure benefits for all. Micron HBM3E improves memory performance while focusing on energy efficiency, increasing performance per watt and lowering the time to train LLMs such as GPT-4 and beyond.
Scientists, researchers and engineers are challenged to discover solutions for climate modeling, curing cancer, and renewable and sustainable energy resources. High-performance computing (HPC) accelerates time to discovery by executing very complex algorithms and advanced simulations that use large datasets. Micron HBM3E provides higher memory capacity and improves performance by reducing the need to distribute data across multiple nodes, speeding the pace of innovation.
Micron extends industry-leading performance across our data center product portfolio with HBM3E, delivering faster data rates, improved thermal response, and 50% higher monolithic die density within the same package footprint as the previous generation.
With advanced CMOS innovations and industry-leading 1β process technology, Micron HBM3E provides higher memory bandwidth that exceeds 1.2TB/s.1
With 50% more memory capacity2 per 8-high 24GB cube, HBM3E enables training at higher precision and accuracy.
Micron designed an energy-efficient data path that reduces thermal impedance, enabling greater than 2.5x improvement in performance per watt3 compared to the previous generation.
With increased memory bandwidth that improves system-level performance, HBM3E reduces training time by more than 30%4 and allows >50% more queries per day.5,6
Micron HBM3E is the fastest, highest-capacity high bandwidth memory to advance AI innovation. An 8-high 24GB cube delivers more than 1.2TB/s of bandwidth and superior power efficiency. Micron is your trusted partner for memory and storage innovation.
4 Based on internal Micron model referencing ACM publication, compared to current shipping platform (H100)
5 Based on internal Micron model referencing Bernstein’s research report, NVIDIA (NVDA): A bottoms-up approach to sizing the ChatGPT opportunity, February 27, 2023, compared to current shipping platform (H100)
6 Based on system measurements using commercially available H100 platform and linear extrapolation
With 1.2TB/s of bandwidth, 8-high 24GB HBM3E from Micron delivers superior power efficiency enabled by the advanced 1β process node.
Micron's Girish Cherussery, Sr. Director, High-Performance Memory, sits down with Patrick Moorhead and Daniel Newman from Six Five to discuss High Bandwidth Memory (HBM) and Micron's newest HBM3E product.
Micron is shipping the industry’s first DRAM manufactured on next-generation 1-beta process technology. It represents state-of-the-art innovation from Micron’s continued investment in R&D and process technology advancement. Micron’s 1-beta process technology allows development of memory products with increased performance, greater capacity, higher density, and lower relative power consumption than prior generations.
Learn more >