GDDR5X vs. HBM vs. HBM2

Hi folks,

I've heard that "increasing the bandwidth won't improve game performance if you're exceeding the game's maximum bandwidth requirements". If this is true, could you please comment on the following:

  1. Which memory type is enough (in terms of its bandwidth) for 85% of games today and will stay enough for at least 50% (a rough estimate, of course!) in 3-4 years: GDDR5X (80-112 GB/s), HBM (128 GB/s), HBM2 (256 GB/s)?
  2. In terms of power consumption there's no difference between GDDR5X and HBM (both are 1.35 V); HBM2, however, is 1.2 V. Could you please explain this difference from a practical point of view? E.g., how much money can I save on electricity bills, or how much lower will my GPU's heat output be (= potential savings on the cooling solution), etc.?
Thank you for your comments!
 
The performance difference is negligible compared to a bigger GPU with more shaders. Infinity Cache on Big Navi makes a bigger difference than HBM vs GDDR, at least in gaming. The difference in power and heat is a few watts - negligible compared to the GPU.

Any GPU will be painfully slow in 4 years. You'd be better off selling it and buying a newer mid-range card.
 
Infinity Cache on Big Navi makes a bigger difference than HBM vs GDDR,
I'll need to go back and look - but the initial results I saw for Infinity Cache were that it wasn't all that game-changing.

More of an 'interesting idea, let's see what the industry does with it' thing than something that makes current titles go 'zing!'
 
Hi folks,

I've heard that "increasing the bandwidth won't improve game performance if you're exceeding the game's maximum bandwidth requirements". If this is true, could you please comment on the following:

  1. Which memory type is enough (in terms of its bandwidth) for 85% of games today and will stay enough for at least 50% (a rough estimate, of course!) in 3-4 years: GDDR5X (80-112 GB/s), HBM (128 GB/s), HBM2 (256 GB/s)?
  2. In terms of power consumption there's no difference between GDDR5X and HBM (both are 1.35 V); HBM2, however, is 1.2 V. Could you please explain this difference from a practical point of view? E.g., how much money can I save on electricity bills, or how much lower will my GPU's heat output be (= potential savings on the cooling solution), etc.?
Thank you for your comments!

Your memory type won't mean jack when the card itself cannot push games in 3-4 years.
 
Memory type shouldn’t really be your concern; just get the card that works for your needs. They’re all-in-one packages.

Regarding power consumption and electric bills, same deal. They’re all-in-one packages, so just look at the overall power consumption and go from there. From a cost standpoint, though, remember that this isn’t a refrigerator or an air conditioner. A few watts here and there might translate to a few dollars per year under normal use, but nothing that will blow up your annual budget. If you’re only getting a single card, it’s not a huge deal. I’d be more worried about it if you were buying multiples and running them 24/7.
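If you want to put rough numbers on that, here's a back-of-the-envelope sketch (the wattage delta, daily hours, and electricity price are all assumptions for illustration, not measurements of any particular card):

```python
# Rough annual cost of a small difference in memory power draw.
# Every input here is an assumption for illustration only.
watts_delta = 10        # assumed extra draw of one memory type vs. another
hours_per_day = 4       # assumed gaming hours per day
price_per_kwh = 0.15    # assumed electricity price in $/kWh

kwh_per_year = watts_delta / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> ${cost_per_year:.2f}/year")
# ~14.6 kWh/year -> ~$2.19/year, i.e. the "few dollars" mentioned above.
```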
 
HBM is too expensive compared to GDDR and no longer carries the bandwidth or power advantage it did in its earlier iterations. You're more likely to hit the performance cap of the GPU itself before being bottlenecked by memory bandwidth. There are few examples where bandwidth is the limiting factor right now, and even then it wouldn't be relieved simply by switching to HBM.
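For context on where those bandwidth numbers come from: total card bandwidth is just bus width times per-pin data rate, which is why HBM can run slow-and-wide while GDDR runs fast-and-narrow. A quick sketch using two well-known cards (the figures quoted in the first post look like per-stack numbers, not card totals):

```python
# Card-level memory bandwidth = bus width in bytes * per-pin data rate.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# GTX 1080: 256-bit GDDR5X at 10 Gbps per pin
print(bandwidth_gbs(256, 10.0))   # 320.0 GB/s
# R9 Fury X: 4096-bit HBM at 1 Gbps per pin
print(bandwidth_gbs(4096, 1.0))   # 512.0 GB/s
```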
 
OP, unless you are simply trying to broadly educate yourself on the topic of GPU design, you're better off realizing that none of these tech specs mean a damn thing to someone who is just going to play games (which your post implies you will be doing).

You are grossly overthinking this and focusing too much on the minutiae of individual component specs instead of looking at the bigger picture. As tired as car analogies are, you are basically going around asking for the differences between 3 different engines to inform your purchase decision, without telling us which fuggin car those engines are attached to, which would be a significantly more important detail to include.
 
Hi folks,

I've heard that "increasing the bandwidth won't improve game performance if you're exceeding the game's maximum bandwidth requirements". If this is true, could you please comment on the following:

  1. Which memory type is enough (in terms of its bandwidth) for 85% of games today and will stay enough for at least 50% (a rough estimate, of course!) in 3-4 years: GDDR5X (80-112 GB/s), HBM (128 GB/s), HBM2 (256 GB/s)?
  2. In terms of power consumption there's no difference between GDDR5X and HBM (both are 1.35 V); HBM2, however, is 1.2 V. Could you please explain this difference from a practical point of view? E.g., how much money can I save on electricity bills, or how much lower will my GPU's heat output be (= potential savings on the cooling solution), etc.?
Thank you for your comments!
Most newer cards actually have GDDR6. G5X was limited to the fancier nvidia 10 series cards, as far as I know.

The savings on power with HBM2 (or GDDR6, which is spec'd at 1.35V) is negligible. Over the life of the card, it might make a difference of a few dollars' worth of electricity. The cost difference between any one particular card versus another with the same GPU on it will likely be greater than this. The bigger benefit from HBM2 in terms of its power consumption is that it leaves a few more watts in the power budget for the core.

Something else to consider with HBM architectures is longevity. HBM is made as effective as it is by very tightly coupling the memory and the core using a component called the "interposer." The interposer tends to be very delicate, and it's not uncommon to see HBM cards with failed memory interfaces as a result. Once this happens, they're done. There's no repairing them.

Unless you're writing an application that needs to be very thoroughly optimized to run on one specific GPU and memory architecture, the deciding factors in which GPU model you buy are basically "how much are you prepared to spend?" and "do you mostly play a game that works better on AMD or nvidia?" If you're prepared to spend $500, then you're getting either a 3070 or a 6800. If you're prepared to spend $700-$800, then you're getting a 3080 or 6800XT, and so on. There's no magical insight that will allow you to get 3090 performance for 3070 money. If you don't know which architecture your games of choice work better on, then you're likely better off with nvidia, but not by so much that you need to worry about it. Price and availability will be the deciding factors here.

So, how much are you actually prepared to spend?
 
I'll need to go back and look - but the initial results I saw for Infinity Cache were that it wasn't all that game-changing.

More of an 'interesting idea, let's see what the industry does with it' thing than something that makes current titles go 'zing!'
That's because Big Navi uses slower GDDR than Ampere, and Infinity Cache makes up for it. If the VRAM had the same performance as Ampere's, Infinity Cache would probably deliver a bigger performance uplift.
 
Thank you all, guys - I never expected so many replies and this much useful information on one page. I appreciate your help a lot!

The interposer tends to be very delicate, and it's not uncommon to see HBM cards with failed memory interfaces as a result. Once this happens, they're done. There's no repairing them.
Good to know! If the VRAM bandwidth becomes "outdated" much later than the GPU itself and the difference in energy costs is minimal, the reliability factor simply can't be ignored. So GDDR it is.

Infinity Cache on Big Navi makes a bigger difference than HBM vs GDDR, at least in gaming.
I'll need to go back and look - but the initial results I saw for Infinity Cache were that it wasn't all that game-changing.
What are Infinity Cache and Big Navi? Sorry for a possibly noob question, but I only know what Infinity Fabric is.

you are basically going around asking for the differences between 3 different engines to inform your purchase decision, without telling us which fuggin car those engines are attached to, which would be a significantly more important detail to include.
If you don't know which architecture your games of choice work better on, then you're likely better off with nvidia
Well, I definitely haven't analyzed my future games from the architecture perspective (if that's what doubletake meant by "car"), but here are the ones I'd like to at least give a try: Mafia Definitive Edition, Red Dead Redemption 2, GTA 5, Witcher 3, Assassin's Creed Odyssey. I haven't dived deep into the games market yet, so these are just titles I've heard of somewhere, and based on their descriptions they seem to fit my interests, so to speak. :)

If you're prepared to spend $500, then you're getting either a 3070 or a 6800. If you're prepared to spend $700-$800, then you're getting a 3080 or 6800XT, and so on.
Any GPU will be painfully slow in 4 years. You'd be better off selling it and buying a newer mid-range card.
So, how much are you actually prepared to spend?
Well, as there should be a CPU/GPU balance, it makes sense to mention that I'm going to buy an AMD Ryzen 5000. It will be either a 5600X or a 5800X (or possibly a 5900X instead, with a better "cores per buck" ratio). As for the GPU, I'm initially planning to spend within $500, but the market is crazy these days, so in the hope that it settles down in a few months let's assume these are "old", pre-Covid $500. However, since you guys recommend renewing the GPU every 2 (?) years, it makes sense to choose the category that loses the least value (in %) so that I can sell it without significant losses. Again - I'm not talking about these days, when people sell for much more than they bought for.
I assume the $1000+ and <$200 price categories are a narrower secondary market (compared to the segments in between), so which price range do you guys recommend as being always in demand when resold?

Thank you!
 
What are Infinity Cache and Big Navi? Sorry for a possibly noob question, but I only know what Infinity Fabric is.
Infinity Cache is an additional level of cache memory on the GPU to move high priority data in and out of the frame buffer. It's a small amount of low latency memory, similar to how the ESRAM functioned on the original Xbox One. Big Navi is just a colloquialism for RDNA 2 coined by the community since the original Navi was a smaller, less powerful chip compared to that found in the 6800 and 6900. The chip is literally larger in size.
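As a toy model of why a large on-die cache helps stretch limited VRAM bandwidth (the hit rate and bandwidth figures below are made-up illustrations, not RDNA 2 specs):

```python
# Toy model: requests served from an on-die cache never touch VRAM,
# so the same VRAM bus can feed a larger total request stream.
# All numbers are illustrative assumptions.
def effective_bandwidth(vram_bw_gbs: float, hit_rate: float, cache_bw_gbs: float) -> float:
    # VRAM only sees the misses; total traffic is capped by the cache itself.
    return min(vram_bw_gbs / (1 - hit_rate), cache_bw_gbs)

print(effective_bandwidth(512, 0.0, 2000))   # 512 GB/s with no cache hits
print(effective_bandwidth(512, 0.5, 2000))   # 1024 GB/s if half the accesses hit
```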
Well, I definitely haven't analyzed my future games from the architecture perspective (if that's what doubletake meant by "car"), but here are the ones I'd like to at least give a try: Mafia Definitive Edition, Red Dead Redemption 2, GTA 5, Witcher 3, Assassin's Creed Odyssey. I haven't dived deep into the games market yet, so these are just titles I've heard of somewhere, and based on their descriptions they seem to fit my interests, so to speak. :)
You'll have no issue in those games no matter who you go with. NVIDIA is stronger at ray tracing currently, but its lasting advantage depends on how widely RT gets adopted in the future. Adoption is more likely now that the current-gen consoles support ray tracing. Ray tracing is also where you will run into some bandwidth limitations at higher resolutions (4K and above, in particular). AMD and NVIDIA are on roughly equal footing when it comes to traditional rasterization, which is what all the games you listed use, but NVIDIA has about double the ray tracing performance of AMD in the same price category.
 
OK, thank you guys for your help. I'm now totally persuaded that the GPU should be selected purely on a budget/affordability basis.
Armenius yep, ray tracing is a good thing, but the truth is that not many games support it (same with DLSS).

A couple of technical questions though (if you guys find time to comment):
  1. Is it true that a GPU running on a PCIe 4.0 bus performs the same as one on a PCIe 3.0 bus?
  2. To calculate the PSU wattage for the future build, there's no need to know how much the GPU draws specifically through the 12V rail - its overall power consumption is enough, right?
 
1. No, but the difference isn’t that big (rough link-bandwidth numbers are sketched at the end of this post). Get a PCIe 4.0 motherboard if you can, but don’t worry about it if your desired setup doesn’t support it.

2. It does matter, but new power supplies are made to provide most of their power on the 12v rails, so you don’t need to worry about it unless you’re looking at very old used parts. Which you shouldn’t do.

edit: just make sure you get a power supply that provides at least whatever the official requirement is, in watts. It doesn’t hurt to go 100 watts or so higher, especially if you’re also using a monster cpu.
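To show how much per-direction link bandwidth is actually on the table between the two generations, here's a quick sketch; whether any given game ever saturates a x16 link is a separate question:

```python
# Per-direction bandwidth of a x16 slot: transfer rate per lane (GT/s)
# times 128b/130b encoding efficiency, divided by 8 bits per byte.
def x16_bandwidth_gbs(gt_per_s: float, lanes: int = 16) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x16: {x16_bandwidth_gbs(8):.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe 4.0 x16: {x16_bandwidth_gbs(16):.1f} GB/s")   # ~31.5 GB/s
```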
 
1. No, but the difference isn’t that big. Get a pcie 4 motherboard if you can, but don’t worry about it if your desired setup doesn’t support it.

2. It does matter, but new power supplies are made to provide most of their power on the 12v rails, so you don’t need to worry about it unless you’re looking at very old used parts. Which you shouldn’t do.

Extending point 2: your GPU is only allowed to take a maximum of 10W from the 3.3V rail (via the PCIe slot) and nothing via 5V (unless it has a 4-pin Molex or SATA power connector for some reason). Several years ago one of the tech sites measured motherboard power draws, and only one of the GPU brands was using the 3.3V rail at all; the other limited itself to the 66W of 12V the slot is spec'd to provide. At this point I don't recall whether it was NVIDIA or AMD that was still using the legacy power rail, which was originally added to make porting parallel PCI cards easier by providing an identical power source (in addition to the newly added 12V from the PCIe standard).

On the power generation side, modern PSU designs create all their DC as 12V (and thus can deliver their rated power at that voltage, give or take rounding and shenanigans with auxiliary rails) and then use DC-DC converters to make 3.3V/5V as needed. Older designs, still used for some new units at lower efficiency tiers, might have their 12V max out at 100-150W below the headline number.
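A minimal sanity check along those lines, assuming the 12V-heavy loads are the GPU and CPU (every wattage below is an assumed figure, not taken from any particular PSU or GPU):

```python
# Quick 12V budget check: does the PSU's 12V capability cover the parts
# that actually draw from 12V?  All wattages are illustrative assumptions.
psu_12v_watts = 550      # assumed 12V capability of a 650W unit
gpu_watts = 320          # assumed GPU board power (slot + PCIe plugs)
cpu_watts = 140          # assumed CPU package power (EPS connector)
other_12v_watts = 30     # assumed fans, pump, drive motors

total = gpu_watts + cpu_watts + other_12v_watts
print(f"12V load: {total} W, headroom: {psu_12v_watts - total} W")
# 12V load: 490 W, headroom: 60 W
```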
 