AMD HBM High Bandwidth Memory Technology Unveiled @ [H]

Power to AMD, but I want to see this in action. AMD has said a number of times that they have something great, only for it to turn out to be meh.
 
This is another step towards something that will end up being better. The change from GDDR3 to GDDR5 was maybe not jaw-dropping at the start either, but as the technology matures, advances will push HBM forward.

Not too sure why people expect something radical. Gaming these days hardly pushes the GPU unless you are using close-to-the-metal drivers.
 
The best we have from GDDR5 is on par with the "worst" entry-level HBM ... it will only get better, whereas GDDR5 is near its cap.
 
The best we have from GDDR5 is on par with the "worst" entry-level HBM ... it will only get better, whereas GDDR5 is near its cap.

This is the takeaway that readers should get from this IMO.
 
Hear, hear! We shall be seeing lots of cool shit with graphics cards soon. This should effectively eliminate any memory bandwidth caps for... Quite a long time, I would think.

It will only move where the bottleneck is in the system and that will be the next priority.
 
The power usage gains mean more room in the TDP budget for clock-speed which is cool. Not that 20-30W is going to change the world but it's all progress! Should open up some cool options for Mini-ITX builds also. Always cool to see new tech coming through.
 
Zarathustra[H];1041611649 said:
HBM is undoubtedly the future memory tech, at the very least for Video cards.

I question how much of a practical performance benefit it will actually have in this generation of video cards, however.

Sure, it may MASSIVELY increase the available video RAM bandwidth, but we already know from overclocking that current-gen GPUs only marginally benefit from upping the memory bandwidth, and there is no reason to believe that this upcoming generation of GPUs from AMD will be any different.

Power savings have also been listed as a huge benefit of HBM. And sure, the 3x better performance per watt is impressive. Considering, however, how small a portion of overall video card power use goes to the RAM, this will have a rather marginal impact on high-performance products. (It will be huge on low-power mobile devices, though.)

Let's say you have a 300W video card with GDDR5 RAM, which uses 275W for the GPU and 25W for the RAM. Cut the RAM power use by a factor of 3 and you are now using 8.3W for the RAM, giving you an additional 16.7W to use for the GPU while still staying within the 300W power envelope.

So that's moving from 275W to 291.7W, an increase of ~6%. Not bad. Every little bit counts, but it's not enough to be a game changer.
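
For what it's worth, the same math as a quick Python sketch (the 300W/275W/25W split and the 3x figure are the assumptions from the example above):

```python
# Back-of-the-envelope TDP reallocation using the assumed split above:
# 300W board power, 275W for the GPU, 25W for GDDR5, HBM at ~3x the perf/watt.

BOARD_TDP_W = 300.0
GPU_W = 275.0
GDDR5_W = 25.0
HBM_PERF_PER_WATT_FACTOR = 3.0  # AMD's claimed advantage over GDDR5

hbm_w = GDDR5_W / HBM_PERF_PER_WATT_FACTOR   # ~8.3W for the memory
freed_w = GDDR5_W - hbm_w                    # ~16.7W handed back to the GPU
new_gpu_w = GPU_W + freed_w                  # ~291.7W, still inside the 300W envelope
gain = freed_w / GPU_W                       # ~6% more power budget for the GPU

assert new_gpu_w + hbm_w <= BOARD_TDP_W      # sanity check: still within the envelope
print(f"HBM power: {hbm_w:.1f}W, freed: {freed_w:.1f}W, "
      f"new GPU budget: {new_gpu_w:.1f}W ({gain:.1%} increase)")
```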

If anything, the limited supply from early HBM production will be the biggest impact of going HBM this gen, reducing the availability of these cards.

I hate to say it, but Nvidia's "wait and see" approach, going with HBM in its second generation instead, was likely the smarter approach, though I will be happy to be proven wrong on launch day :p

Over time HBM will be of huge importance, but its importance will only grow slowly, bit by bit, each generation. Those expecting an overnight change because "OMG HBM" are going to be hugely disappointed.

There is a simple misunderstanding that by simply adding a lot of something to a GFX card, it simply becomes faster.
Add 8GB to a 750 Ti? Is it going to play modern games at high res? No.
Simply put, most all GFX cards are designed to meet a level of performance. It is simple really.
Take a Titan X. Yes, it has some memory bandwidth to spare. Does cranking it up give linear performance? No. It simply does not. Why? Simple. It was designed and released to meet certain performance metrics. Cranking RAM speed simply doesn't make it a higher-level card. If it did, Nvidia wouldn't just sit on that wasted performance. They would have simply released it as a higher-performing part.
I hope you understand the simplicity of it all. Now back in the day (or in more simple times) there were many castrated cards. I remember a GeForce2 MX. It was SDR, and cranking up the memory speed made the card's performance blossom.
This can be seen even now in lower-level cards that have been held back. But full-part SKUs simply are not designed that way.
You will get the usual bump of 10-15% from cranking clocks, but it will use more power and make more heat. And that is the simple reason why reference clocks are usually the sweet spot.
Now, simply put, how do you think AMD will design a GPU with extremely high memory bandwidth? Are they simply going to make an architecture that sees no benefit?
I think the answer is fairly simple. Don't you?
 
The thermal dissipation is what really worries me. That could be a real mess. Here's to hoping AMD pulls this one out of the bag and maintains their viability.
 
More bandwidth is always good. However, I'm more concerned about what Ars Technica says about this. Their article (linked) says that a 4GB max has been confirmed?? That would mean the 390X is limited to competing with the 980 and isn't any competition for the Titan X. I'm a new owner of a Titan X, but I was still hoping for AMD to show up with some competition, as it's good for the market. A 4GB max would be very bad IMO.

http://arstechnica.com/information-...nfirms-4gb-limit-for-first-hbm-graphics-card/

Apples and oranges. This is newer and better tech. Wait for it to come. AMD is saying they can make it work with 4GB.
 
I can see the use in a GPU.

I am wondering what it would do if properly set up on my motherboard (well a future board).

Is it not true that one of the big bottlenecks on a main board is waiting for DRAM? If I remember right, level 2/3 caching takes up a fair bit of space on the CPU die, and if that could be moved off the CPU, wouldn't it seriously decrease heat and allow for higher clocks?

This doesn't go on the mobo. It and the *PU go on an interposer together, eliminating the complexity and space needed on the board. The connections can also be smaller and much more precise.
 
Hear, hear! We shall be seeing lots of cool shit with graphics cards soon. This should effectively eliminate any memory bandwidth caps for... Quite a long time, I would think.

Memory bandwidth isn't an issue currently; there's more than enough, based on over- (and under-) clocking the RAM on a Titan X. It doesn't truly affect net fps at all, even at 4K. I can clock mine down to 6000MHz or all the way up to 8000MHz, and the net fps change across that entire range is perhaps 2fps total.

So HBM, from both AMD and Nvidia, is a big yawn despite the hyperactivity we're seeing online from hopeful consumers looking for a revolution.

If anything DX12 is likely the much bigger story, and even that is fraught with peril in terms of actually being worth a damn given MS' shoddy history. Stacked VRAM? Mixing different mfg cards in some sort of bastardized SLI+Crossfire setup? I'll believe it when I see it.

We can only hope, but it will be interesting to revisit things in a few months.
 
How exactly do you pull heat from the layers underneath? I'm guessing this is why there's a voltage/clock nerf? The slide shows 4 stacks, which would be 400+ GB/s versus the traditional example they gave at 448 GB/s. I'm wondering what kind of room we have to move from here. Just more stacks on the interposer? What about overclocking?
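
For reference, here's a rough Python sketch of where numbers like that come from; the per-stack figures (a 1024-bit interface at ~1 Gbps per pin for first-gen HBM) are the commonly quoted specs, so treat them as assumptions:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Assumed first-gen HBM figures: 1024-bit interface per stack at ~1 Gbps per pin.
per_stack = bandwidth_gbs(1024, 1.0)      # 128 GB/s per stack
four_stacks = 4 * per_stack               # 512 GB/s, so the "400+" on the slide is conservative

# The 448 GB/s "traditional" example works out to a 512-bit GDDR5 bus at 7 Gbps.
gddr5_example = bandwidth_gbs(512, 7.0)   # 448 GB/s

print(f"per stack: {per_stack:.0f} GB/s, four stacks: {four_stacks:.0f} GB/s, "
      f"GDDR5 example: {gddr5_example:.0f} GB/s")
```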
 
SA's in-depth HBM article:
http://semiaccurate.com/2015/05/19/amd-finally-talks-hbm-memory/

That means the next generation of GPUs with HBM could not only be much smaller physically, everything other than the GPU+memory package can be much simpler and cheaper. Since you more or less only need to route PCIe in and video signals out, everything gets smaller, cheaper, and simpler. This is a win/win/win for almost everyone, and once volume comes up, it won’t cost any more than the ‘old way’.
 
Four gigs is confirmed as the limit for the first HBM cards, but given the performance, what exactly does that mean? How would it compare to a 6GB card in Eyefinity or at 4K?
 
How exactly do you pull heat from the layers underneath? I'm guessing this is why there's a voltage/clock nerf? The slide shows 4 stacks, which would be 400+ GB/s versus the traditional example they gave at 448 GB/s. I'm wondering what kind of room we have to move from here. Just more stacks on the interposer? What about overclocking?

TSVs and dummy bumps, probably around 10-15% for the dummy bumps, will evenly distribute the heat throughout the stack. This allows the heat to travel to the top die, where it is transferred to the heatsink.
 
Zarathustra[H];1041611649 said:
HBM is undoubtedly the future memory tech, at the very least for Video cards.

I question how much of a practical performance benefit it will actually have in this generation of video cards, however.

Sure, it may MASSIVELY increase the available video RAM bandwidth, but we already know from overclocking that current-gen GPUs only marginally benefit from upping the memory bandwidth, and there is no reason to believe that this upcoming generation of GPUs from AMD will be any different.
...
I hate to say it, but Nvidia's "wait and see" approach, going with HBM in its second generation instead, was likely the smarter approach, though I will be happy to be proven wrong on launch day :p.....

I think you've nailed it. Nvidia in particular have been talking about stacked DRAM since Fermi came out. So it seems to me, given Nvidia's larger R&D budget, the only reason AMD will bring HBM to market first is because Nvidia let them. As you noted, the real-world performance benefit probably won't be significant enough, at least on 28nm.

Kinda like how the 512-bit bus on their current cards isn't enough to give them an advantage over the competition.

Also, notice how those slides focus on efficiency: space saving, power saving, etc. There's little mention/promotion of benefits to the high-end market.
 
I think you've nailed it. Nvidia in particular have been talking about stacked DRAM since Fermi came out. So it seems to me, given Nvidia's larger R&D budget, the only reason AMD will bring HBM to market first is because Nvidia let them. As you noted, the real-world performance benefit probably won't be significant enough, at least on 28nm.

Kinda like how the 512-bit bus on their current cards isn't enough to give them an advantage over the competition.

Also, notice how those slides focus on efficiency: space saving, power saving, etc. There's little mention/promotion of benefits to the high-end market.

Yeah... because Nvidia had a choice in the matter... AMD co-developed HBM with Hynix. They have co-developed every new GDDR standard for the last ~10 years.

Real-world performance benefits will be good all around. The experience and knowledge gained, and all that they have already gathered from developing HBM, is priceless and gives them a huge advantage going forward. It is especially important to AMD since they plan to use HBM in every market within ~2 years.

Their 512bit bus was designed to meet their bandwidth target with the smallest and most power efficient memory controllers. Hawaii's IMC is a testament to AMD's engineering prowess.

GPUs need bandwidth like humans need oxygen.
 
Surely this will help the memory bandwidth starved APUs if you can put a half stack or full stack of 512MB or 1GB of RAM right there for it, yes? Really, how many of these could you fit onto an existing APU? Can you fit one of them?

And for AMD's next CPU, can this be used as well? Either as a replacement for L3 cache, or as an L4 cache before turning to the DDR3/4?
 
Surely this will help the memory bandwidth starved APUs if you can put a half stack or full stack of 512MB or 1GB of RAM right there for it, yes? Really, how many of these could you fit onto an existing APU? Can you fit one of them?

And for AMD's next CPU, can this be used as well? Either as a replacement for L3 cache, or as an L4 cache before turning to the DDR3/4?

I'd imagine there is plenty of space on the APU package for an additional HBM chip, but from the little I know about the tech, I think the processes are different enough that you won't see HBM on die.
 
Surely this will help the memory bandwidth starved APUs if you can put a half stack or full stack of 512MB or 1GB of RAM right there for it, yes? Really, how many of these could you fit onto an existing APU? Can you fit one of them?

And for AMD's next CPU, can this be used as well? Either as a replacement for L3 cache, or as an L4 cache before turning to the DDR3/4?

You could easily fit 2 stacks onto existing APUs using a reticle-limited interposer, ~832mm². www.amkor.com/go/Kalahari-Brochure.
Depending on the size of the main/host die and cost restrictions, you could potentially get up to 4 stacks, which is likely what we will see on the future HPC APUs from AMD.
I guess it could technically be used as an L4 cache, but it will more likely replace system RAM except in niche cases.
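
A rough area-only sketch in Python; the ~40mm² per-stack footprint (~5.5mm x 7.3mm for first-gen HBM) and the example host-die sizes are assumptions, and real placement and routing constraints would cut these numbers down:

```python
# Area budget for HBM stacks on a reticle-limited interposer (~832 mm^2).
# The per-stack footprint and host-die sizes below are assumptions, not confirmed
# figures, and this only counts area; actual placement/routing would allow fewer stacks.

INTERPOSER_MM2 = 832.0
HBM_STACK_MM2 = 5.5 * 7.3   # ~40 mm^2 per stack (assumed)

def stacks_that_fit(host_die_mm2: float, usable_fraction: float = 0.9) -> int:
    """How many stacks fit beside a host die, reserving ~10% of the interposer for keep-out."""
    usable = INTERPOSER_MM2 * usable_fraction - host_die_mm2
    return max(0, int(usable // HBM_STACK_MM2))

print(stacks_that_fit(250.0))   # APU-sized host die: area for 12 stacks, far more than needed
print(stacks_that_fit(550.0))   # large HPC-class die: still area for 4 stacks
```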

Zarathustra[H];1041613747 said:
I'd imagine there is plenty of space on the APU package for an additional HBM chip, but from the little I know about the tech, I think the processes are different enough that you won't see HBM on die.
We will eventually see HBM or something similar stacked on top of a host die, once the thermal roadblock is figured out.
The next step from 2.5D stacking is TOP-PIM, which we could potentially see next year on 16/14nm GPUs, at least from AMD.
 
Power to AMD, but I want to see this in action. AMD has said a number of times that they have something great, only for it to turn out to be meh.

AMD have, in the last 15 years, had a history of being ahead of the curve, launching tech that would become crucial and mainstream 5-6 years down the road, paving the way for their competitors.

AMD spends the money and time to develop it for a market that is not ready, and gets little market advantage from it. Then their competitors come along a few years later and profit off of the tech more than AMD does.

On-die memory controllers, 64-bit x86 CPUs, multi-core/many-core CPUs, heterogeneous unified memory access, HBM, you name it.

The market has benefited tremendously from their innovations. Too bad they haven't :(
 
Zarathustra[H];1041613758 said:
AMD have, in the last 15 years, had a history of being ahead of the curve, launching tech that would become crucial and mainstream 5-6 years down the road, paving the way for their competitors.

AMD spends the money and time to develop it for a market that is not ready, and gets little market advantage from it. Then their competitors come along a few years later and profit off of the tech more than AMD does.

On-die memory controllers, 64-bit x86 CPUs, multi-core/many-core CPUs, heterogeneous unified memory access, HBM, you name it.

The market has benefited tremendously from their innovations. Too bad they haven't :(

Yeah, yes... HyperTransport too. I don't know how widespread that is, though.
Honestly, lately AMD seems to be doing just some dead-cat bounces. I guess there is good R&D in there still... I hope they keep limping towards some profit soon.
I know they have the Xbox and the PS4 and whatnot, but I imagine that is low-margin stuff. ... Meh, good luck AMD, still in my PC, and will always be in my PC, I don't care how much Intel 'crushes' them.
 
AMD's problem is marketing. They have no idea at all. They want to put out a halo card with 4GB. They will sell it as 4K ready. 4GB vs. 12GB. There is a real disconnect there. They have already lost the marketing battle if they have to go into details about why 4GB of HBM is better than 12GB of GDDR5: more bandwidth, blah, blah, blah. Nobody cares about that. Can it hold 4K super-high-res textures with all post-processing enabled? Does it offer more frame rates than GDDR5? They are offering technology that is not mature (hence only 4GB right now) against mature technology that doesn't have that limitation. Their GCN cores are also less efficient: it takes about a third more cores (4096 vs. 3072) and 25% more power to equal a Titan X. You save $150 but lose 8GB of frame buffer. It'll be a hard sell to consumers that Fiji is somehow a better deal. Then there's the 980 Ti to undercut it in both memory and price. It's going to be ugly for AMD, again.
 
AMD's problem is marketing. They have no idea at all. They want to put out a halo card with 4GB. They will sell it as 4K ready. 4GB vs. 12GB. There is a real disconnect there. They have already lost the marketing battle if they have to go into details about why 4GB of HBM is better than 12GB of GDDR5: more bandwidth, blah, blah, blah. Nobody cares about that. Can it hold 4K super-high-res textures with all post-processing enabled? Does it offer more frame rates than GDDR5? They are offering technology that is not mature (hence only 4GB right now) against mature technology that doesn't have that limitation. Their GCN cores are also less efficient: it takes about a third more cores (4096 vs. 3072) and 25% more power to equal a Titan X. You save $150 but lose 8GB of frame buffer. It'll be a hard sell to consumers that Fiji is somehow a better deal. Then there's the 980 Ti to undercut it in both memory and price. It's going to be ugly for AMD, again.

Amazing! You know specs, price, performance, power consumption for 2 unreleased cards...
 
AMD's problem is marketing. They have no idea at all. They want to put out a halo card with 4GB. They will sell it as 4K ready. 4GB vs. 12GB. There is a real disconnect there. They have already lost the marketing battle if they have to go into details about why 4GB of HBM is better than 12GB of GDDR5: more bandwidth, blah, blah, blah. Nobody cares about that. Can it hold 4K super-high-res textures with all post-processing enabled? Does it offer more frame rates than GDDR5? They are offering technology that is not mature (hence only 4GB right now) against mature technology that doesn't have that limitation. Their GCN cores are also less efficient: it takes about a third more cores (4096 vs. 3072) and 25% more power to equal a Titan X. You save $150 but lose 8GB of frame buffer. It'll be a hard sell to consumers that Fiji is somehow a better deal. Then there's the 980 Ti to undercut it in both memory and price. It's going to be ugly for AMD, again.

Fair enough, I guess we will know with the [H]'s reviews that are sure to come.
One thing: although I am struggling a bit to understand the tech, it seems that it also makes certain parts of the GPU itself simpler. Might that help further with power/heat savings, and squeeze out more performance?
 
AMD Addresses Potential Fiji 4GB HBM Capacity Concern – Investing In More Efficient Memory Utilization

Joe Macri said:
If you actually look at frame buffers and how efficient they are and how efficient the drivers are at managing capacities across the resolutions, you’ll find that there’s a lot that can be done. We do not see 4GB as a limitation that would cause performance bottlenecks. We just need to do a better job managing the capacities. We were getting free capacity, because with [GDDR5] in order to get more bandwidth we needed to make the memory system wider, so the capacities were increasing. As engineers, we always focus on where the bottleneck is. If you’re getting capacity, you don’t put as much effort into better utilising that capacity. 4GB is more than sufficient. We’ve had to go do a little bit of investment in order to better utilise the frame buffer, but we’re not really seeing a frame buffer capacity [problem]. You’ll be blown away by how much [capacity] is wasted.

From article said:
According to Macri, GDDR5 fed GPUs actually have too much unused memory today. Because to increase GPU memory bandwidth, wider memory interfaces are used. And because wider memory interfaces require a larger amount of GDDR5 memory chips, GPUs ended up with more memory capacity than is actually needed.

Macri also stated that AMD invested a lot into improving utilization of the frame buffer. This could include on-die memory compression techniques which are integrated into the GPU hardware itself. Or more clever algorithms on the driver level.

I think it's interesting that people were criticizing NVIDIA for going with compression on the 256-bit Maxwell parts, and now AMD is doing the same. Some people are already parroting the line that "4GB is enough" because AMD said so. I wonder what people with 4K Eyefinity would say about that.

A lot of tough talk. We'll see how that plays out in the real world.
 
AMD Addresses Potential Fiji 4GB HBM Capacity Concern – Investing In More Efficient Memory Utilization

I think it's interesting that people were criticizing NVIDIA for going with compression on the 256-bit Maxwell parts, and now AMD is doing the same. Some people are already parroting the line that "4GB is enough" because AMD said so. I wonder what people with 4K Eyefinity would say about that.

A lot of tough talk. We'll see how that plays out in the real world.

What I don't get about texture compression is the following.

Is this just making up for poor game developer optimization?

If textures can be compressed without much (any?) quality loss, wouldn't it be better to do so one time up front, rather than have the GPU waste cycles doing it as the game is running?
 
Fair enough, I guess we will know with the [H]'s reviews that are sure to come.
One thing: although I am struggling a bit to understand the tech, it seems that it also makes certain parts of the GPU itself simpler. Might that help further with power/heat savings, and squeeze out more performance?

It will help, but memory is already only a small portion of the TDP budget. HBM will help, but so will having 66% less memory than the competitor. If I had to bet, it will still end up a 250-300 watt card, because unless their GCN 1.3 is markedly improved efficiency-wise over their older cards, it's still going to pull power and exhaust heat like crazy.
 
Zarathustra[H];1041613833 said:
What I don't get about texture compression is the following.

Is this just making up for poor game developer optimization?

If textures can be compressed without much (any?) quality loss, wouldn't it be better to do so one time up front, rather than have the GPU waste cycles doing it as the game is running?

Compression is a band-aid solution for limitations: Nvidia's answer to limited bandwidth and AMD's answer to insufficient RAM. Any time you throw in compression, you have to create ASICs for that task (I doubt they use the CPU for that), and you introduce latency. In the end, I doubt they can compress 8GB into 4GB, no matter how efficient their compression is.
 
Compression is a band-aid solution for limitations: Nvidia's answer to limited bandwidth and AMD's answer to insufficient RAM. Any time you throw in compression, you have to create ASICs for that task (I doubt they use the CPU for that), and you introduce latency. In the end, I doubt they can compress 8GB into 4GB, no matter how efficient their compression is.

Just get Pied Piper on that.

I think the increased bandwidth and lower power will be great for mobile phones.

They should learn a lot from this first generation of cards.

I'm concerned that a GPU sitting right next to these in the same package will fry them..

As far as the slides go, it seemed they were showing a performance comparison of two 290Xs, one with HBM and one with GDDR5... where are the last slides?
 
Surely this will help the memory bandwidth starved APUs if you can put a half stack or full stack of 512MB or 1GB of RAM right there for it, yes? Really, how many of these could you fit onto an existing APU? Can you fit one of them?

And for AMD's next CPU, can this be used as well? Either as a replacement for L3 cache, or as an L4 cache before turning to the DDR3/4?

It won't be much use as a cache, as it's still DRAM and won't have the lower latency that made external SRAM caches work. I would expect to see it as video memory and/or main memory for their APUs. I can see this enabling APUs to match the lower-end cards that used to beat them just because of a VRAM bandwidth advantage.

The lower power may be key for mobile (tablet and laptop) applications.
 
Compression is a band-aid solution for limitations: Nvidia's answer to limited bandwidth and AMD's answer to insufficient RAM. Any time you throw in compression, you have to create ASICs for that task (I doubt they use the CPU for that), and you introduce latency.

I've thought about it a little more though, since my post above.

VRAM has very many uses, but in a traditional graphics pipeline there are two primary purposes.

1.) Store the frame buffer.

Lately people have started to use frame buffer as a name for the entire VRAM. This is wrong. The frame buffer is the finished rendered frame, presumably in raw digital format, able to be dumped out to a digital panel (or into a RAMDAC)

Using 4K resolution, that's 3840 x 2160 x 32 bits (24-bit color + alpha), or roughly 32MB per frame.

Let's assume the worst case is three frames stored for triple buffering (I'm not sure if this is an accurate worst case, but it's a guess).

That means we have ~96MB used by the frame buffer in the worst case.

2.) Store the textures

So we should have total VRAM - 96MB = what's left for texture storage (and other miscellaneous stuff, like compute calculations, anti-aliasing buffers, etc.)

In the case of a 4GB card, that means ~4000MB.

We store the textures here so they are available immediately for rendering, and you don't have to go back over the relatively slow PCIe bus to system RAM (or even worse, disk) for textures when you suddenly have to start rendering...
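
That back-of-the-envelope, as a quick Python sketch (same assumptions as above: 4K, 32-bit pixels, triple buffering, a 4GB card):

```python
# Frame buffer vs. texture budget, per the back-of-the-envelope above.

WIDTH, HEIGHT = 3840, 2160      # 4K
BYTES_PER_PIXEL = 4             # 24-bit color + 8-bit alpha
BUFFERED_FRAMES = 3             # assumed worst case: triple buffering

frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 2**20    # ~32 MB per frame
framebuffer_mb = BUFFERED_FRAMES * frame_mb            # ~95 MB worst case

total_vram_mb = 4 * 1024                               # a 4GB card
texture_budget_mb = total_vram_mb - framebuffer_mb     # ~4000 MB left for textures and misc.

print(f"per frame: {frame_mb:.1f} MB, frame buffer: {framebuffer_mb:.1f} MB, "
      f"left for textures/misc: {texture_budget_mb:.0f} MB")
```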

In the end, I doubt they can compress 8GB into 4GB, no matter how efficient their compression is.

Take JPEG as an example. Compared to raw image data, a JPEG with little to no noticeable quality loss can be a tiny fraction of the original image size.

You'd probably need a faster algorithm than JPEG, as it would need to be done in real time, and a faster algorithm would likely compress less than JPEG, but it's certainly possible.

Back in the day, S3TC was the method used. Its algorithms were lossy, but not terrible, and resulted in a 4:1 compression ratio. (Whee, now we have 16000MB equivalence.)

JPEG can compress roughly 10:1 without the difference being noticeable to the typical eye, without zooming.

Go up to 100:1 compression with JPEG and artifacting is definitely noticeable, but the image is still usable.

And these are only textures, so limited artifacting would be less noticeable in the final output than when compressing a finished image.

So the appropriate level of compression (if using JPEG) would be somewhere in between these two, I'd imagine. If we use Flickr as an example (they are a photography-oriented site, and photographers tend to be concerned with quality), their JPEGs use roughly a 45:1 compression ratio without much in the way of visible artifacting or quality loss (at least not to my eyes).

The question is whether you can decompress textures coming out of VRAM fast enough, without using too much of the GPU power needed for rendering, to make JPEG effective. My guess is that even today, the answer would be no.

We certainly have faster GPUs than we did in the '90s when S3TC was first used. I'm not sure what algorithms are used today for something like this.
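
And the equivalence numbers, in the same sketch style; the ratios are the ones mentioned above, and since real-time GPU formats are fixed-rate block codecs rather than JPEG, treat these strictly as ballpark figures:

```python
# Effective texture capacity at the compression ratios discussed above.
# Illustrative only: real-time GPU texture formats are fixed-rate block codecs, not JPEG.

texture_budget_mb = 4000   # from the frame buffer sketch earlier in this post

for name, ratio in [("S3TC-style 4:1", 4), ("JPEG ~10:1", 10),
                    ("Flickr-style ~45:1", 45), ("JPEG 100:1", 100)]:
    equivalent_gb = texture_budget_mb * ratio / 1024
    print(f"{name}: ~{equivalent_gb:.0f} GB of raw-texture equivalence")
```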

I'm also not sure what baseline we are comparing it to. I mean, is any compression at all used on the baseline comparison GPU?

And how large can textures really be? I don't really play any of the newest games, so I don't know how large they are. I remember reading that GTAV was a 60GB download. How much of that 60GB is in textures? Also, how many of those textures are used at the same time, vs how many are loaded for specific scenes?
 
I love the idea of HBM from AMD. However, if it requires ANY driver/developer cooperation at all, it's going to be borked.
 
It won't be much use as a cache, as it's still DRAM and won't have the lower latency that made external SRAM caches work. I would expect to see it as video memory and/or main memory for their APUs. I can see this enabling APUs to match the lower-end cards that used to beat them just because of a VRAM bandwidth advantage.

The lower power may be key for mobile (tablet and laptop) applications.

Also, for a great number of people in this world, 4GB of system RAM is enough. Wouldn't an APU with 4GB of on-die RAM (shared with the GPU) be the perfect setup for a super small form factor unit? You need an onboard NIC, sound, and SATA and you're done.

Also, didn't AMD say that with the newest version of Mantle, Crossfire cards get to access the VRAM as a single pool? Wouldn't that mean a hypothetical X395 with 8GB of HBM would actually be a true 8GB? You could see a lot more dual solutions down the line, no? 385, 375, etc.
 
Zarathustra[H];1041613833 said:
If textures can be compressed without much (any?) quality loss, wouldn't it be better to do so one time up front, rather than have the GPU waste cycles doing it as the game is running?

The big change with Maxwell was NOT TEXTURE COMPRESSION, but improving the frame buffer color compression efficiency (moving from just compressing blocks to patterns):

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/3

AFAIK, the R9 285 has the same improvement, although how competitive it is with Nvidia's implementation I have no idea. But including these same improvements on the R9 390X to decrease frame buffer size and bandwidth is a no-brainer, so expect it.

http://www.anandtech.com/show/8460/amd-radeon-r9-285-review/3

This is frame buffer compression, not texture compression. It allows more free memory for higher levels of MSAA, higher resolutions, or more room in VRAM for assets.

Texture compression was the first major target for GPU compression (because it was easy). S3 was the first out of the gate, and with every manufacturer having their own incompatible format (like S3TC and FXT1), the only way to make it work was to compress on load. S3TC became a DirectX 6 standard because it was first, and games started to ship with compressed textures. It was extended by ATI with BC5 in DirectX 10, which compresses normal maps, and is potentially replaced by ARM's ASTC for DirectX 12, which could be quite impressive depending on how developers use it.
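
For a feel for what those fixed-rate block formats buy you, here's a minimal Python sketch assuming the usual 8-byte (BC1/DXT1) and 16-byte (BC3/DXT5) footprints per 4x4 pixel block:

```python
# Fixed-rate block compression footprints (assumed: BC1/DXT1 = 8 bytes and
# BC3/DXT5 = 16 bytes per 4x4 block, vs. 64 bytes of raw RGBA8).

def texture_bytes(width: int, height: int, bytes_per_block: int) -> int:
    """Size of a block-compressed texture, rounding dimensions up to whole 4x4 blocks."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * bytes_per_block

W, H = 4096, 4096                   # one large texture, no mipmaps
raw = W * H * 4                     # raw RGBA8: 64 MB
bc1 = texture_bytes(W, H, 8)        # 8 MB  (8:1 vs. raw RGBA8)
bc3 = texture_bytes(W, H, 16)       # 16 MB (4:1 vs. raw RGBA8)

for name, size in [("raw RGBA8", raw), ("BC1/DXT1", bc1), ("BC3/DXT5", bc3)]:
    print(f"{name}: {size / 2**20:.0f} MB")
```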
 