Please Explain Memory Bus To Me - Why Only 256 bits for GTX1080?

Blackstone

So the GTX 780 has like a 384-bit bus.
The 980 has 256 bits.
The 980 Ti has 384 bits.
Now the 1080 is back to 256.

How does this make sense? Why are they going back to 256 for the 1080, and what does this mean for performance? I fail to see why a card better than the 980 Ti would have only a 256-bit bus.

Is there any scenario where a 1080 could be bottlenecked by the 256-bit bus?
 
The GTX 980 has 33% less bandwidth than a 780 Ti, yet it's faster most of the time. That is because the GTX 980 has bandwidth-saving techniques beyond those of the 780 Ti.

It's all about effectively using the bandwidth that the GPU has.
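
To put rough numbers on that: peak bandwidth is just bus width times effective data rate. A quick Python sketch using the reference-design specs; the 1.3x compression ratio at the end is purely an illustrative assumption, not a measured figure:

```python
# Peak bandwidth = (bus width in bits / 8) * effective data rate in GT/s.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

# Reference-design numbers: both cards ship with 7 GT/s GDDR5.
print(f"GTX 780 Ti: {peak_bandwidth_gbs(384, 7.0):.0f} GB/s raw")  # 336
print(f"GTX 980:    {peak_bandwidth_gbs(256, 7.0):.0f} GB/s raw")  # 224

# The 980 still wins in practice because delta color compression and
# friends reduce the traffic that actually crosses the bus. With a
# hypothetical average compression ratio of 1.3x, its effective
# bandwidth would look more like:
print(f"GTX 980 effective: {peak_bandwidth_gbs(256, 7.0) * 1.3:.0f} GB/s")  # ~291
```

224 vs. 336 GB/s is where the "33% less bandwidth" figure comes from.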
 
My concern is that there is a pattern developing where they follow up with a Ti variant with a larger bus. My thesis is that NVIDIA understands what kind of workloads and demands games over the next two years or so will put on these cards, and they intentionally build a reason to upgrade later into them. For example, only 4 GB of VRAM on the 980 makes even an SLI setup of that card insufficient at 1080p for some games like GTA V. Then they follow up with the Ti and add the extra VRAM.

Now we have 8 GB of VRAM and a 256-bit bus, and Battlefield 5. But is Battlefield 5 really going to need 12 GB of VRAM for all the anti-aliasing and all the stuff that we actually build these systems to do? You see my point. Say I want DSR and anti-aliasing in Battlefield 5. I spend $1,200 on two cards which ostensibly should blow the game out of the water, but oops, it only has a 256-bit bus. So maybe there is a bottleneck hidden in there. Then out comes the Ti variant...
 
Anti-aliasing has been moving away from MSAA for quite some time now, as MSAA doesn't do well with many shader effects, not to mention it doesn't work at all on shader aliasing. Shader-based AA is definitely the way to go, and shader-based AA doesn't use as much bandwidth as MSAA.
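
To see why MSAA is the bandwidth hog, compare the render-target footprint at 4K. This is back-of-envelope only: real hardware compresses identical samples, so the true MSAA cost is lower, but the multiplier is the point.

```python
# Rough storage (and thus bandwidth) cost of MSAA vs. shader-based AA at 4K.

pixels = 3840 * 2160
bytes_per_sample = 4 + 4  # 32-bit color + 32-bit depth per sample

for samples in (1, 4, 8):  # no AA / 4x MSAA / 8x MSAA
    mb = pixels * samples * bytes_per_sample / 1e6
    print(f"{samples}x: {mb:.0f} MB of render targets")

# Shader-based AA (FXAA/SMAA-style) works on the resolved 1x image,
# so it stays at the 1x cost plus one post-process pass.
```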

The Ti version will most likely come with a larger bus; P100 or variants of that chip will be bigger, so I do expect that.

PS: the Ti version is not what is coming out in a week or so ;). The 980 Ti is being EOLed because the new chip performs at least that well, so there is no reason for the 980 Ti anymore. Expect nV to do the same: a Titan version coming out in 6 months and the Ti version after, or at the same time, depending on what AMD is doing with Vega. That is because HBM2 volume production will be enough to sustain both their compute cards and their desktop cards.
 
I guess it sort of depends on Battlefield 5 and how it performs and what it needs. If they go back to World War Two, I am going to turn my man cave into a command bunker and rebuild my PC into a camouflaged case. THIS IS IMPORTANT.

I really need to study up on AA; I am still under the impression that MSAA is the best for image quality.
 
The GTX 1080 is rumored to be using GDDR5X RAM, which is roughly twice as fast as conventional GDDR5.

In theory its bus runs exactly 2x as fast per pin, but early GDDR5X chips don't run at as high a clock speed as equivalent GDDR5 does after several years on the market.

What you're getting is the equivalent of a 512-bit GDDR5 bus, for a probable 25-30% increase in memory bandwidth, but without the stupidly expensive PCB that a 512-bit bus would require.

Whether GP100 will come in a GDDR5X variant with a 384/512-bit bus or be exclusively HBM2 at 4096 bits is still TBD. (Rumors are that it'll support both RAM types, but if HBM2 is sufficiently available they might not launch any GDDR5X variants. OTOH, if there are major availability problems, most or all consumer GP100 cards might be GDDR5X, with HBM2 not going mainstream until a year or two later.)
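
Plugging the commonly cited data rates into the usual bandwidth formula shows what that GDDR5X trade looks like; treat the 10 GT/s launch figure as an assumption:

```python
# bandwidth (GB/s) = bus width in bits / 8 * effective data rate in GT/s

def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(384, 7.0))   # 980 Ti, 384-bit GDDR5:       336 GB/s
print(peak_bandwidth_gbs(256, 10.0))  # 1080, 256-bit GDDR5X:        320 GB/s
print(peak_bandwidth_gbs(512, 7.0))   # hypothetical 512-bit GDDR5:  448 GB/s

# GDDR5X doubles the per-pin transfer rate at a given clock, so a
# 256-bit GDDR5X bus has the signaling of a 512-bit GDDR5 bus; early
# chips just don't clock high enough yet to deliver the full doubling.
```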
 
What about the GTX 280's 512-bit bus versus the GTX 480's 384-bit bus? Bus width, cross-generationally speaking, doesn't mean too much when, as razor1 said, there are better texture/color compression techniques and better bandwidth management; it also depends on how bandwidth-starved the architecture is.
 
Same deal as this time. The 280 used GDDR3, the 480 GDDR5, with bandwidth at specsheet clocks going from 159 to 174 GB/sec. Faster memory allowed a narrower bus while still providing enough bandwidth to keep the GPU fed.


Memory bandwidth starvation is never a problem on high-end cards, because with them the performance goals write the specsheet and determine the manufacturing price. Where it does become an issue is on low-end cards, where price points are fixed and the GPUs are massively cut-down versions of flagship architectures instead of something tuned directly for the low end of the market (e.g., anything where regular DDR RAM is used instead of GDDR).
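
Working backward from those two specsheet figures shows how much of the work the memory type itself was doing; a quick sketch using the 159 and 174 GB/s numbers quoted above:

```python
# Implied per-pin data rate = bandwidth (GB/s) / (bus width in bits / 8)

cards = {
    "GTX 280 (512-bit GDDR3)": (512, 159),
    "GTX 480 (384-bit GDDR5)": (384, 174),
}

for name, (bus_bits, bandwidth_gbs) in cards.items():
    rate = bandwidth_gbs / (bus_bits / 8)
    print(f"{name}: ~{rate:.2f} GT/s per pin")

# GDDR5's roughly 46% higher per-pin rate let the bus shrink by 25%
# (512 -> 384 bits) while total bandwidth still went up.
```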
 
How much faster would a 980 be with a wider bus and would it be worth the additional complexity and cost? You can't really declare that 256-bit is not good enough without looking at the bigger picture.

Compression, larger L2 caches and other optimizations can go a long way to reducing raw bandwidth requirements.
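
For anyone curious what "compression" means here: the GPU schemes (delta color compression) are lossless and proprietary, but the core idea is easy to sketch. A toy version, storing a tile as one base value plus small per-pixel deltas; the tile layout and bit packing are invented for illustration:

```python
# Toy delta color compression: one 8-bit base value plus signed deltas.
# Real GPU schemes are lossless like this one, but the block sizes and
# encodings here are made up purely for illustration.

def compress_block(pixels):
    """pixels: 8-bit channel values for one tile. Returns (raw, packed) bits."""
    base = pixels[0]
    deltas = [p - base for p in pixels]
    bits_per_delta = max(abs(d) for d in deltas).bit_length() + 1  # +1 sign bit
    raw_bits = 8 * len(pixels)
    packed_bits = 8 + bits_per_delta * len(pixels)
    # Like the real hardware, fall back to the uncompressed tile when
    # packing wouldn't actually save anything (e.g. noisy tiles).
    return raw_bits, min(packed_bits, raw_bits)

smooth_tile = [100, 101, 102, 103, 104, 105, 106, 107]  # gentle gradient
raw, packed = compress_block(smooth_tile)
print(f"{raw} bits raw -> {packed} bits on the bus")  # 64 -> 40
```

Smooth regions (skies, UI, lit walls) dominate most frames, which is why this saves real bandwidth in practice.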
 
Exactly. For a given GPU, everything has to be looked at to see what is beneficial and what is not. This is how GPUs are designed, and very few GPUs have ever come out that were unbalanced.

Of course, as games evolve, things change; newer games will stress older-generation GPUs differently, but that is a totally different matter and is to be expected.
 
They never want to give so much away that it would be harder to beat their own tech in the next release cycle, and they want people to have to buy the new cards.
 
Bingo, exactly my point about bandwidth starvation being a low-end problem rather than a high-end one. =) Couldn't have written it better.
 
I've always wondered if this compression technology also impacts colour accuracy and reproduction. Would love to see some tests on that.

E.g., are they going to (numbers for example's sake) 6-bit and then extrapolating back up to 8- or 10-bit, etc.? Is information lost, or is it like a typical lossless compression scenario where it isn't? I just struggle to see how it's done in real time: megapixels times 60-144 frames per second...
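
For scale, a quick back-of-envelope on that last point, assuming a 4K, 8-bit RGBA framebuffer; real traffic is several times higher once you count overdraw, textures, and intermediate render targets:

```python
# Raw color-write traffic for a 4K framebuffer at common refresh rates.

width, height = 3840, 2160  # 4K
bytes_per_pixel = 4         # 8-bit RGBA

for fps in (60, 144):
    gb_per_s = width * height * bytes_per_pixel * fps / 1e9
    print(f"{fps} fps: {gb_per_s:.1f} GB/s of raw color writes")

# ~2.0 GB/s at 60 fps and ~4.8 GB/s at 144 fps: small next to a
# ~320 GB/s bus, so per-tile compression hardware sitting in the
# memory path has plenty of headroom to keep up in real time.
```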
 