High Bandwidth Memory


High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially from Samsung, AMD and SK Hynix. It is used as RAM in upcoming CPUs and FPGAs, and in some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die which can include buffer circuitry and test logic. The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die can be stacked directly on the CPU or GPU chip. Within the stack the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. HBM technology is similar in principle to, but incompatible with, the Hybrid Memory Cube (HMC) interface developed by Micron Technology. The HBM memory bus is very wide in comparison with other DRAM memories such as DDR4 or GDDR5.


An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of eight channels and a width of 1024 bits in total. A graphics card/GPU with four 4-Hi HBM stacks would therefore have a memory bus with a width of 4096 bits. In comparison, the bus width of GDDR memories is 32 bits, with 16 channels for a graphics card with a 512-bit memory interface. HBM supports up to 4 GB per package. The larger number of connections to the memory, relative to DDR4 or GDDR5, required a new method of connecting the HBM memory to the GPU (or other processor). AMD and Nvidia have both used purpose-built silicon chips, called interposers, to connect the memory and GPU. This interposer has the added advantage of requiring the memory and processor to be physically close, shortening memory paths. However, as semiconductor device fabrication is significantly more expensive than printed circuit board manufacture, this adds cost to the final product.
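
The bus-width arithmetic above is easy to check. Here is a minimal Python sketch using only the channel and stack counts stated in the text; the constant names are purely illustrative, not from any specification:

    # Bus-width arithmetic from the paragraph above (a sketch, not vendor code).
    CHANNEL_WIDTH_BITS = 128  # each HBM channel is 128 bits wide
    CHANNELS_PER_DIE = 2      # two channels per DRAM die
    DIES_PER_STACK = 4        # a "4-Hi" stack
    STACKS = 4                # e.g. a GPU carrying four HBM packages

    stack_width = CHANNEL_WIDTH_BITS * CHANNELS_PER_DIE * DIES_PER_STACK
    print(stack_width)           # 1024 bits per 4-Hi stack (8 channels)
    print(stack_width * STACKS)  # 4096-bit memory bus across four stacks

    # GDDR5 comparison from the text: 16 channels of 32 bits each.
    print(32 * 16)               # 512-bit memory interface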


The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels. The channels are completely independent of one another and are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR). HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit), yielding an overall package bandwidth of 128 GB/s. The second generation of High Bandwidth Memory, HBM2, also specifies up to eight dies per stack and doubles pin transfer rates up to 2 GT/s. Retaining 1024-bit wide access, HBM2 is able to reach 256 GB/s memory bandwidth per package. The HBM2 spec allows up to 8 GB per package. HBM2 is expected to be especially useful for performance-sensitive consumer applications such as virtual reality. On January 19, 2016, Samsung announced early mass production of HBM2, at up to 8 GB per stack.
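
As a rough check of the quoted figures: peak package bandwidth is simply pin rate times bus width, divided by eight bits per byte. A minimal Python sketch (the helper name package_bandwidth_gbs is my own, for illustration only):

    def package_bandwidth_gbs(pin_rate_gts, bus_width_bits=1024):
        # Pin rate in GT/s (1 bit per transfer) times bus width,
        # divided by 8 bits per byte, gives GB/s per package.
        return pin_rate_gts * bus_width_bits / 8

    print(package_bandwidth_gbs(1.0))  # HBM:  1 GT/s * 1024 bits -> 128.0 GB/s
    print(package_bandwidth_gbs(2.0))  # HBM2: 2 GT/s * 1024 bits -> 256.0 GB/s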


In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) is now supported in the official specification, though products operating at this speed had already been available. Additionally, the update added support for 12-Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced their Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack. On August 12, 2019, SK Hynix announced their HBM2E, featuring eight dies per stack and a transfer rate of 3.6 GT/s, providing a total of 16 GB and 460 GB/s per stack. On July 2, 2020, SK Hynix announced that mass production had begun. In October 2019, Samsung announced their 12-layered HBM2E. In late 2020, Micron announced that the HBM2E standard would be updated, and alongside that unveiled the next standard, known as HBMnext (later renamed to HBM3).
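
The same arithmetic reproduces the per-stack figures quoted in this paragraph. Note the quoted values are rounded, and the 2.4 GT/s pin rate for the HBM2 update is inferred here from the 307 GB/s figure rather than stated in the text:

    def package_bandwidth_gbs(pin_rate_gts, bus_width_bits=1024):
        # Same sketch as above: pin rate (GT/s) * bus width / 8 bits per byte.
        return pin_rate_gts * bus_width_bits / 8

    print(package_bandwidth_gbs(2.4))  # 307.2 GB/s -> "307 GB/s" HBM2 update
    print(package_bandwidth_gbs(3.2))  # 409.6 GB/s -> Samsung Flashbolt "410 GB/s"
    print(package_bandwidth_gbs(3.6))  # 460.8 GB/s -> SK Hynix HBM2E "460 GB/s"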