AMD HBM High Bandwidth Memory Technology Unveiled @ [H]

AMD Addresses Potential Fiji 4GB HBM Capacity Concern – Investing In More Efficient Memory Utilization
I think it's interesting that people were criticizing NVIDIA for going with compression on the 256-bit Maxwell parts, and now AMD is doing the same. Some people are already parroting the line that "4GB is enough" because AMD said so. I wonder what people with 4k Eyefinity would say about that.

A lot of tough talk. We'll see how that plays out in the real world.

It was already stated in an earlier post that 2 GTX 980's in SLI work fine for 4k gaming. 4 gigs is going to be fine. :rolleyes:
 
AMD Addresses Potential Fiji 4GB HBM Capacity Concern – Investing In More Efficient Memory Utilization

I think it's interesting that people were criticizing NVIDIA for going with compression on the 256-bit Maxwell parts, and now AMD is doing the same. Some people are already parroting the line that "4GB is enough" because AMD said so. I wonder what people with 4k Eyefinity would say about that.

A lot of tough talk. We'll see how that plays out in the real world.

It's not the 4GB that's the problem. 256 bit limits bandwidth (overall speed). The HBM on Fiji will have ~4X the bandwidth of the 980.
 
Again, we don't know what the actual performance gains from HBM are, so it's early to be grasping at negative results. 4GB is possibly limiting right now simply because of latency and the number of instructions in flight. 4GB of HBM can process 8 channels in parallel and is 3x faster, so isn't the 2x instructions figure the more important factor here when coupled with the 3x bandwidth bit?

What I mean to say is that 3x the bandwidth is probably overkill and will be hardware limited by the GPU...but...a 4GB HBM configuration has eight 128-bit channels per stack operating at this speed. The fill on that is much higher than 8GB of GDDR5: at least 50 percent faster in cases where you would be trying to max out the memory of an 8GB GDDR5 card, and substantially faster in situations that are not as taxing.
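If anyone wants to sanity-check the raw bandwidth side of this, here's a rough back-of-the-envelope sketch. The HBM1 per-stack figures (eight 128-bit channels at ~1 Gbps per pin) are from the published spec; the assumption that Fiji uses four 1GB stacks is just that, an assumption, and the GDDR5 numbers are GTX 980-class for comparison:

```python
# Rough peak-bandwidth arithmetic; Fiji's exact configuration is an assumption.

def peak_gb_per_s(bus_width_bits, gbps_per_pin):
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * gbps_per_pin

# GTX 980-class GDDR5: 256-bit bus at 7 Gbps effective.
gddr5 = peak_gb_per_s(256, 7.0)              # ~224 GB/s

# One HBM1 stack: 8 channels x 128 bits = 1024-bit at ~1 Gbps per pin.
per_stack = peak_gb_per_s(8 * 128, 1.0)      # ~128 GB/s

# Assumed four-stack (4 x 1GB) configuration.
hbm_total = 4 * per_stack                    # ~512 GB/s

print(f"GDDR5, 256-bit @ 7 Gbps: {gddr5:.0f} GB/s")
print(f"HBM1, 4 stacks (4096-bit @ 1 Gbps): {hbm_total:.0f} GB/s "
      f"(~{hbm_total / gddr5:.1f}x the 980)")
```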
 
....

A game console with HBM APU sounds awesome though, Nintendo?

Nintendo seem to have avoided playing the high end console game. Last time was the N64 really.

But damn if they're not gunna be in a prime position in a few years to bring out an Xbone/PS4 killer, fully exploiting HBM with a stunning new Mario64 style game.

I mean, that would like, shiiiiiiiiiit! :cool:
 
It's not the 4GB that's the problem. 256 bit limits bandwidth (overall speed). The HBM on Fiji will have ~4X the bandwidth of the 980.

There is absolutely nothing to suggest that current GPU's are starved for memory bandwidth.

If memory bandwidth were restricting performance on today's GPUs, one would expect to see a linear increase in framerate with memory clock increases. In other words, increase RAM clocks by 5%, get a 5% frame rate increase, and so on. As anyone who has overclocked a video card lately (or ever) has noticed, memory overclocks are usually a disappointment. This means current RAM tech is sufficient for generation-to-generation incremental performance improvements.
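To make that check concrete, here is a tiny sketch of the reasoning; the clock and framerate numbers are made up purely for illustration:

```python
# How bandwidth-sensitive is a card, judging by a memory-only overclock?
# All numbers below are hypothetical, just to illustrate the argument above.

def bandwidth_sensitivity(base_mem_mhz, oc_mem_mhz, base_fps, oc_fps):
    """Fraction of the memory-clock gain that shows up as framerate gain.

    ~1.0 means fully bandwidth-bound (5% clock -> 5% fps);
    near 0.0 means the GPU, not the VRAM, is the bottleneck.
    """
    clock_gain = oc_mem_mhz / base_mem_mhz - 1.0
    fps_gain = oc_fps / base_fps - 1.0
    return fps_gain / clock_gain

# Example: a +10% memory overclock that only buys +2% fps -> sensitivity ~0.2,
# i.e. mostly GPU-limited, which is the "disappointing memory OC" case above.
print(bandwidth_sensitivity(1750, 1925, 60.0, 61.2))  # ~0.2
```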

If anything the additional bandwidth offered by HBM will be wasted by the first few generations of GPU's using HBM. You can have all the memory bandwidth in the world, but if you are only using a small portion of it, who cares?

I'm not expecting HBM to have any noticeable impact on performance at all compared to incrementally increased GDDR5 on equivalent GPU's.

Long term, as memory bandwidth requirements grow generation over generation HBM will become absolutely crucial, but don't expect anything night and day, as the GPU is still going to be the bottleneck, not the VRAM.

Unfortunately we will likely never get to test this theory, as the need for an interposer makes it likely that we will never see equivalent GPU's, one with GDDR5 and one with HBM to make a real comparison.
 
It's like the "memory compression" technology being touted by both parties for the last set of releases: an interesting marketing point, but largely transparent functionally for the end user. Fiji will be what it is for the end user; HBM is not going to be some sort of added "secret" benefit.

AMD needs something akin to 2x a full Tonga to be in GM200 performance territory, based upon what is known of its current uarch. However, scaling up that way would result in an immense die size of 700mm^2+ along with a very high TDP. HBM allows space savings on the memory subsystem as well as power savings (among other factors). This is why the current information fits and suggests Fiji will have 64 CUs (which is 2x Tonga).

HBM is simply part of the building blocks that enable Fiji.
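As a rough sanity check on that "2x Tonga" sizing, here is a back-of-the-envelope sketch. Tonga's ~359 mm² / 32 CU figures are AMD's published numbers for the R9 285; the naive doubling and the HBM PHY savings range are assumptions for illustration only:

```python
# Back-of-the-envelope for the "2x Tonga" sizing argument above.
tonga_cus = 32
tonga_area_mm2 = 359.0          # AMD's stated die size for Tonga (R9 285)

target_cus = 64                 # "2x Full Tonga"
# Naive linear scaling (shared logic would not literally double, so this is
# an upper-bound style estimate, matching the 700 mm^2+ figure above).
naive_area = tonga_area_mm2 * target_cus / tonga_cus
print(f"Naive 2x scaling: ~{naive_area:.0f} mm^2")     # ~718 mm^2

# A wide GDDR5 PHY takes a meaningful slice of the die; HBM's PHY is much
# smaller. The 10-20% savings range here is purely a hypothetical assumption.
for phy_savings in (0.10, 0.20):
    print(f"With ~{phy_savings:.0%} of the die saved on the memory PHY: "
          f"~{naive_area * (1 - phy_savings):.0f} mm^2")
```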

Zarathustra[H];1041626119 said:
Unfortunately we will likely never get to test this theory, as the need for an interposer makes it likely that we will never see equivalent GPU's, one with GDDR5 and one with HBM to make a real comparison.

Actually, something interesting is that with certain lower end parts, and especially mobile parts, you do have otherwise identical GPUs configured with much less (near half) the memory bandwidth, using DDR3 as opposed to the GDDR5 they were designed for. The performance difference, however, is not half even in this type of extremely bandwidth-starved scenario.
 
Yeah, it was fun testing the performance difference GDDR5 brought to the HD 4870 versus the HD 4850.

But it was only about 3-7%, if I recall correctly.
 
Actually, something interesting is that with certain lower end parts, and especially mobile parts, you do have otherwise identical GPUs configured with much less (near half) the memory bandwidth, using DDR3 as opposed to the GDDR5 they were designed for. The performance difference, however, is not half even in this type of extremely bandwidth-starved scenario.

Well, yes, move down the memory bandwidth specs and you'll quickly see performance slow down, because now you have made your VRAM bandwidth the bottleneck.

What I am saying is the following:
  • HBM provides ~3x the bandwidth
  • Current high end GPU's are in most cases not bandwidth starved. We know this because increasing RAM clocks provide little to no benefit.
  • Future releases will require more bandwidth to keep up with GPU's
  • Fiji XT is very unlikely to be 3x faster than current top end GPU's, so chances are it won't need 3x more RAM bandwidth (see the rough numbers below).
  • Performance like that of Fiji XT could probably have been supported by GDDR5 for this generation, but not many more speed increases after that.
  • Over time new GPU's will require more and more bandwidth, and there will come a time when HBM will be pushed to its limits.

Essentially, all I am saying is that HBM won't be a big deal performance wise in this coming generation. Long term it will, but all HBM does now is increase VRAM bandwidth when the GPU - not the RAM - is already the bottleneck.
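Rough numbers for that argument, under the simple assumption that bandwidth demand scales with GPU performance; the 1.4x next-gen uplift is purely hypothetical:

```python
# Sketch: if bandwidth demand scales roughly with GPU performance, how much
# of HBM's headroom does the next generation actually need?
gddr5_baseline_gb_s = 224.0     # 256-bit GDDR5 @ 7 Gbps (GTX 980 class)
hbm_gb_s = 512.0                # four HBM1 stacks

assumed_perf_ratio = 1.4        # hypothetical next-gen uplift over today

needed_gb_s = gddr5_baseline_gb_s * assumed_perf_ratio
print(f"Roughly needed: ~{needed_gb_s:.0f} GB/s")
print(f"HBM provides {hbm_gb_s:.0f} GB/s, "
      f"about {hbm_gb_s / needed_gb_s:.1f}x that estimate")
```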
 
HBM for graphics (at the present technology node) seems more about ramping up interposers for a wide variety of future needs (I'm looking at you, mobile, server/virtualization, and APUs) where high-speed off-chip signals are going to benefit tremendously from die bonding as opposed to chip->PCB->chip. Start maturing this process now, so the transition is smoother. Plus, it gives an exit strategy for GDDR5, which is already working heroically (albeit more than sufficient presently).

AMD/Hynix/whoever else is contributing may benefit far more from the IP of this work than anything in terms of near-term GFX needs.

And, heck, you can probably do the interposers on polysilicon, which greatly reduces the cost of the silicon itself (which is a huge %age of the total cost, regardless of the process technology).
 
Just ignore Zara, he keeps repeating the same bullet points without being able to see the larger picture.

As far as cheapening the interposer, a passive silicon interposer that is reticle limited could easily get down to the ~$5-$10 range, and is likely to not be more expensive than that currently. Looking around at articles over the last year or two, they were in the $15-$20 neighborhood on 65nm, so that price should have dropped a bit since then.

They are also looking into organic and glass interposers. They each have their drawbacks, so I doubt we will see a move from silicon in the near future.
 
Just ignore Zara, he keeps repeating the same bullet points without being able to see the larger picture.

If you want to continue believing in fairy tales and unicorns, that's up to you. :p

There is no way on earth HBM will be a game changer in performance this generation. If you believe it will, you are delusional.

All HBM does is to allow the continued incremental speed increases we see generation over generation which otherwise would have been held back by GDDR5 within a couple of generations.
 
Zarathustra[H];1041627057 said:
If you want to continue believing in fairy tales and unicorns, that's up to you. :p

There is no way on earth HBM will be a game changer in performance this generation. If you believe it will, you are delusional.

All HBM does is to allow the continued incremental speed increases we see generation over generation which otherwise would have been held back by GDDR5 within a couple of generations.

Actually, it doesn't.
That is the issue with you posting without fully grasping the advantages that are being brought about by HBM.

HBM IS a gamechanger, it just won't be fully utilized this generation. That doesn't mean that there won't be tangible benefits due to HBM being introduced on Fiji.

If you want to learn more, please feel free to read my previous posts about HBM.
 
Actually, it doesn't.
That is the issue with you posting without fully grasping the advantages that are being brought about by HBM.

HBM IS a gamechanger, it just won't be fully utilized this generation. That doesn't mean that there won't be tangible benefits due to HBM being introduced on Fiji.

If you want to learn more, please feel free to read my previous posts about HBM.

I'm having trouble imagining where else this will benefit aside from GPUs (and the obvious benefit on mobile GPU miniaturization). That is one of the only cases I can think of where:

1. You need high bandwidth in a separate, independent memory space that is (mostly) unaffected by communication bottlenecks.
2. You need only 1-2 fixed sizes of ram for a product.

And of course that will benefit GPU Compute, where many computational workloads tend to be bandwidth-limited. But I can't see it benefiting any other aspect of compute.

In single-image multi-socket servers it would be nice to have this incredible level of bandwidth by putting DRAM on an interposer for each CPU die, but processor intercommunication will still limit the benefit. That, and I just can't see a cost-effective way to build multiple SKUs with different numbers of ram chips, because everyone's needs are different, and RAM is a considerable slice of your average high-performance server's price tag.

What am I missing here? Smaller, lower-power cheaper components, and faster GPU Compute will happen, but what else? Usually when someone calls a technology a "game changer," they're implying that it will open up entirely new markets for the tech, or create entirely new product lines.
 
I'm having trouble imagining where else this will benefit aside from GPUs (and the obvious benefit on mobile GPU miniaturization). That is one of the only cases I can think of where:

1. You need high bandwidth in a separate, independent memory space that is (mostly) unaffected by communication bottlenecks.
2. You need only 1-2 fixed sizes of ram for a product.

And of course that will benefit GPU Compute, where many computational workloads tend to be bandwidth-limited. But I can't see it benefiting any other aspect of compute.

In single-image multi-socket servers it would be nice to have this incredible level of bandwidth by putting DRAM on an interposer for each CPU die, but processor intercommunication will still limit the benefit. That, and I just can't see a cost-effective way to build multiple SKUs with different numbers of ram chips, because everyone's needs are different, and RAM is a considerable slice of your average high-performance server's price tag.

What am I missing here? Smaller, lower-power cheaper components, and faster GPU Compute will happen, but what else? Usually when someone calls a technology a "game changer," they're implying that it will open up entirely new markets for the tech, or create entirely new product lines.

Mobile. It wants the lower power and compact form factors, and APUs want the extra bandwidth so they can do both workloads on the same memory at the same time. Many mobile devices have limited RAM configurations, especially tablets and ultrabooks. No DIMM slot might be a hard sell for some, but not so much if you don't gouge for reasonable amounts of RAM.
 
Mobile. It wants the lower power and compact form factors, and APUs want the extra bandwidth so they can do both workloads on the same memory at the same time. Many mobile devices have limited RAM configurations, especially tablets and ultrabooks. No DIMM slot might be a hard sell for some, but not so much if you don't gouge for reasonable amounts of RAM.

Exactly. If they put 4-8 GB of HBM on a soldered-in SoC in an ultrabook, with no DIMM slots to make the book thinner, you could make a MacBook Air-thin ultrabook with the power and speed of a much bigger unit.
 
Actually, it doesn't.
That is the issue with you posting without fully grasping the advantages that are being brought about by HBM.
.

I was focused on the performance of the next generation.

There ARE other benefits as well (most notably space, power use and heat) but they weren't necessarily relevant to the point I was trying to make, so they weren't worth noting.
 
Cell phones are an order of magnitude more compact than an ultrabook, lest we forget. :)

Might be able to park a bunch of coprocessors and the main proc on a single interposer + memory, and maybe even a communication chip or two (even though it's not likely for bandwidth at this point as much as allowing for better holistic packaging). There's the added advantage of shrinking each respective chip by going to an interposer rather than driving a much longer trace on a PCB. Take what was a large %age of the phone's internal volume (it's impressive how well they pack those PCBs!) and shrink that by 20-30%. That's huge.

What I was thinking of was virtualized servers using an APU + local DRAM: push the TDP down a bit, shrink the daughterboard/cartridges down. I realize this article is old, but think about shrinking this part by 20-30% and dropping its TDP some. It's a big deal.

Yes, GFX benefits from the newfound bandwidth, but it also allows for a huge jump in density and a (hopeful) modest bonus in power. Other applications are chomping at the bit for that kind of integration.
 
Cell phones are an order of magnitude more compact than an ultrabook, lest we forget. :)

Might be able to park a bunch of coprocessors and the main proc on a single interposer + memory, and maybe even a communication chip or two (even though it's not likely for bandwidth at this point as much as allowing for better holistic packaging). There's the added advantage of shrinking each respective chip by going to an interposer rather than driving a much longer trace on a PCB. Take what was a large %age of the phone's internal volume (it's impressive how well they pack those PCBs!) and shrink that by 20-30%. That's huge.

What I was thinking of was virtualized servers using an APU + local DRAM: push the TDP down a bit, shrink the daughterboard/cartridges down. I realize this article is old, but think about shrinking this part by 20-30% and dropping its TDP some. It's a big deal.
Yes, GFX benefits from the newfound bandwidth, but it also allows for a huge jump in density and a (hopeful) modest bonus in power. Other applications are chomping at the bit for that kind of integration.

I am thinking more of where they should take ARM. The ARM server segment is not a huge market, yet they are devoting all of the ARM division to that area...

They could very well dive into tablet and phone SoCs, and I am sure there are hundreds of Chinese companies who would love to put an AMD SoC in cheap tablets. Toshiba, Samsung, and Sony might too...

Qualcomm seems to have soured their standing in everyone's eyes... due to the 810...
 
Exactly if they put 4-8 gb of hbm on a soldered in soc in an ultra book no dimm slots to make the book thinner you could make a mac book air thin ultra book with the power and speed of a much bigger unit.

So who's going to absorb the extra expense of having two to three different memory capacity SKUs for every single processor Intel makes? Or are you just going to ship everything with 16GB ram?

And really folks, 8GB ram soldered to the motherboard didn't stop the 2015 MacBook from being super tiny. If they made it any smaller/thinner it would not be able to keep the processor cool. And the keys already have shitty travel, which means it's already thin enough to compromise basic usability.

https://d3nevzfk7ii3be.cloudfront.net/igi/bgZDFB3MHBQaSVWC.huge

See those two tiny red chips on the end? That's all the space 8GB ram takes. You do get more bandwidth from HBM, but I've been trying to explain to you: in power-constrained places where space matters, you're going to hit the power wall before the extra bandwidth HBM brings is a benefit.

Again, I'm still not seeing this sea-change happening, aside from you wishing hard for companies like Intel to just give shit away for free. It could make a difference in handheld parts, but again it has to compete with LPDDR4 (already shipping). Can you show me proof that it uses less power than LPDDR4?
 
Zarathustra[H];1041626119 said:
There is absolutely nothing to suggest that current GPU's are starved for memory bandwidth.

If memory bandwidth were restricting performance on today's GPUs, one would expect to see a linear increase in framerate with memory clock increases. In other words, increase RAM clocks by 5%, get a 5% frame rate increase, and so on. As anyone who has overclocked a video card lately (or ever) has noticed, memory overclocks are usually a disappointment. This means current RAM tech is sufficient for generation-to-generation incremental performance improvements.

Actually, for the first time in who knows how long, overclocking the 980's memory in particular has been shown to significantly improve performance, especially in 4K scenarios.

I've been following this massive 980 overclocking thread for some time now, and everyone there has concluded that when overclocking the 980, you're looking to find the best balance between core and mem, and not throwing everything into the core at the sacrifice of mem speed (like pretty much every card prior).

It seems on the 980, the more you overvolt/overclock the core, the less you can overclock the memory, and thus we are starting to see mem bandwidth be an issue.
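A toy version of the comparison that thread is doing, with made-up framerate numbers; the point is only that a memory-only overclock that moves the needle is the tell-tale sign bandwidth is starting to matter:

```python
# Toy comparison of core-only vs. memory-only overclock results.
# All framerates below are hypothetical, just to illustrate the idea above.

def oc_gain(base_fps, oc_fps):
    """Relative framerate gain from an overclock."""
    return oc_fps / base_fps - 1.0

base_fps = 60.0
core_only_fps = 64.8   # e.g. after a +10% core clock
mem_only_fps = 63.0    # e.g. after a +10% memory clock

print(f"Core-only OC gain: {oc_gain(base_fps, core_only_fps):+.1%}")  # +8.0%
print(f"Mem-only OC gain:  {oc_gain(base_fps, mem_only_fps):+.1%}")   # +5.0%
# A non-trivial memory-only gain like this is the sign (especially at 4K)
# that the card is starting to lean on its memory bandwidth.
```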


That said, I totally agree with your posts about HBM not being a game changer for this gen. For AMD, I'm betting they're more concerned about cost/power saving than pure performance. That's how they'll be selling it to investors.
 
Actually, for the first time in who knows how long, overclocking the 980's memory in particular has been shown to significantly improve performance, especially in 4K scenarios.

I've been following this massive 980 overclocking thread for some time now, and everyone there has concluded that when overclocking the 980, you're looking to find the best balance between core and mem, and not throwing everything into the core at the sacrifice of mem speed (like pretty much every card prior).

It seems on the 980, the more you overvolt/overclock the core, the less you can overclock the memory, and thus we are starting to see mem bandwidth be an issue.


That said, I totally agree with your posts about HBM not being a game changer for this gen. For AMD, I'm betting they're more concerned about cost/power saving than pure performance. That's how they'll be selling it to investors.


Interesting.

I'll have to take some of that back then. I haven't had a chance to play with a 980 (or Titan X) personally yet.
 
Less space and power used for memory means more space and power free for actual GPU hardware.
At least that's what I am getting from all of this.
And I am fairly certain LPDDR4 is going to use slightly more power than HBM.
 
That said, I totally agree with your posts about HBM not being a game changer for this gen. For AMD, I'm betting they're more concerned about cost/power saving than pure performance. That's how they'll be selling it to investors.

Performance-wise, HBM is something that is fairly trivial for video cards at this point in time; what is important is the process. If this ends up working quite well, next generations will benefit from this more than anything else. When you know you can have more memory bandwidth, you can do a lot with it.

Maybe the first thing will be the new Nintendo console, which might have HBM and is not constrained by a design typical of what we see on the PS4/Xbox One.
 
4GB just seems lackluster; I think most of us were expecting 8 at launch. But maybe it will be better than what we all think.

The ram bus on my HD 7850 is 256-bits wide (4x64-bit x-bar, IIRC.) If you are tempted to think quadrupling that bus width is "lackluster" consider the difference between a card running on a 128-bit bus and the same exact card running on a 256-bit ram bus...the performance difference is quite substantial...;) It's similar to comparing a cheapo 64-bit card to a 256-bit card--the performance differential can be great--especially in high-resolution scenarios where the system is primarily gpu limited.

The main reason we haven't seen a lot more 512-bit-wide buses (or greater) is that the cost of doing it traditionally is very high, and the results are often a matter of diminishing returns because of the complexities of the traditional process. AMD's HBM is not limited to the traditional discrete GPU ram-bus manufacturing processes.
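To put rough numbers behind the bus-width point (the per-pin data rates below are typical published figures, used purely for illustration):

```python
# Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def peak_gb_per_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# GDDR5 at a typical 5 Gbps: doubling the bus width doubles the bandwidth.
for bus in (64, 128, 256, 512):
    print(f"GDDR5 {bus:4d}-bit @ 5 Gbps: {peak_gb_per_s(bus, 5.0):5.0f} GB/s")

# HBM takes the opposite trade: a very wide bus at a low per-pin rate.
print(f"HBM1  4096-bit @ 1 Gbps: {peak_gb_per_s(4096, 1.0):5.0f} GB/s")
```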
 
The ram bus on my HD 7850 is 256-bits wide (4x64-bit x-bar, IIRC.) If you are tempted to think quadrupling that bus width is "lackluster" consider the difference between a card running on a 128-bit bus and the same exact card running on a 256-bit ram bus...the performance difference is quite substantial...;) It's similar to comparing a cheapo 64-bit card to a 256-bit card--the performance differential can be great--especially in high-resolution scenarios where the system is primarily gpu limited.

The main reason we haven't seen a lot more 512-bit-wide buses (or greater) is that the cost of doing it traditionally is very high, and the results are often a matter of diminishing returns because of the complexities of the traditional process. AMD's HBM is not limited to the traditional discrete GPU ram-bus manufacturing processes.


You are comparing apples and oranges.

Bus bit width (and its resulting bandwidth) and capacity are two very very different things, and one does not make up for the other.

High bandwidth is great. You can move data back and forth to the VRAM fast and feed that GPU that needs it.

Having higher bandwidth does nothing to address capacity though.

The bandwidth of your VRAM becomes completely irrelevant the second you need more VRAM than you have.

The video card tries to intelligently swap stuff in and out of vram to make sure the textures, etc you need are available to the GPU when you need them.

If you don't have enough VRAM, it doesn't matter how much bandwidth you have, because once the GPU needs something and can't find it in VRAM, it has to go across the relatively slow PCIe bus and grab it from system RAM, which means it won't be immediately available and WILL cause stuttering, frame drops, lower frame rates, or worse.

No amount of additional vram bandwidth will solve a problem where you need more vram than you have.

Now, I am not convinced that 4GB is too small in this generation. It might actually be just fine. But if it isn't, all the VRAM bandwidth in the world won't make up for it in the slightest.
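To put a rough number on why spilling out of VRAM hurts so much, a quick sketch; the 100 MB spill is a made-up example, and the bus figures are the usual theoretical peaks:

```python
# Rough cost of fetching data that did not fit in VRAM, per the argument above.
vram_gb_s = 224.0        # 256-bit GDDR5 @ 7 Gbps
pcie3_x16_gb_s = 15.75   # PCIe 3.0 x16 theoretical peak (~985 MB/s per lane)

spill_mb = 100.0         # hypothetical data that has to come from system RAM

def transfer_ms(gb_per_s):
    """Time in milliseconds to move spill_mb at the given rate."""
    return spill_mb / 1024.0 / gb_per_s * 1000.0

print(f"From local VRAM:           {transfer_ms(vram_gb_s):5.2f} ms")
print(f"Over PCIe from system RAM: {transfer_ms(pcie3_x16_gb_s):5.2f} ms")
print("Frame budget at 60 fps:     16.67 ms")
# Burning several milliseconds of a 16.7 ms frame on one transfer is exactly
# the stutter / frame-drop scenario described above.
```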
 
@ Z...

Don't know why you think 4GB isn't a substantial amount of vram...nVidia launched two 3.5GB cards masquerading as 4GB cards, the 970 & 980, and although performance of those products drops by ~50% when they exceed that 3.5GB boundary, it doesn't seem to have hurt the popularity of either product. Even the Witcher 3 rarely if ever exceeds the 2GB vram level...it's just not an issue for 95%+, currently. I suppose it will be, though, 2-3 years from now...when 4k monitors are selling for $300...
 
@ Z...

Don't know why you think 4GB isn't a substantial amount of vram...nVidia launched two 3.5GB cards masquerading as 4GB cards, the 970 & 980, and although performance of those products drops by ~50% when they exceed that 3.5GB boundary, it doesn't seem to have hurt the popularity of either product. Even the Witcher 3 rarely if ever exceeds the 2GB vram level...it's just not an issue for 95%+, currently. I suppose it will be, though, 2-3 years from now...when 4k monitors are selling for $300...

I don't think you read my entire post. Try the last paragraph again :p
 
@ Z...

Don't know why you think 4GB isn't a substantial amount of vram...nVidia launched two 3.5GB cards masquerading as 4GB cards, the 970 & 980, and although performance of those products drops by ~50% when they exceed that 3.5GB boundary, it doesn't seem to have hurt the popularity of either product. Even the Witcher 3 rarely if ever exceeds the 2GB vram level...it's just not an issue for 95%+, currently. I suppose it will be, though, 2-3 years from now...when 4k monitors are selling for $300...

Only the 970 had the 3.5GB nonsense.

Generally as the cards increase in performance the amount of IQ you can do improves, therefore you need more VRAM. Like at 60FPS I can use anywhere from 4-5GB in the Witcher 3 with TiX SLI. If the new AMD cards match the TiX in performance and are 4GB they might be ok for single card in most situations, but crossfire would be an easy no go until 8GB show up.

HBM should be perfect for the high Hz guys where you need the bandwidth.
 
Only the 970 had the 3.5GB nonsense.

Generally as the cards increase in performance the amount of IQ you can do improves, therefore you need more VRAM. Like at 60FPS I can use anywhere from 4-5GB in the Witcher 3 with TiX SLI. If the new AMD cards match the TiX in performance and are 4GB they might be ok for single card in most situations, but crossfire would be an easy no go until 8GB show up.

HBM should be perfect for the high Hz guys where you need the bandwidth.

Just be careful in what conclusions you draw.

Just because you are using greater than 4GB doesn't mean you NEED that much for it to run well.

GPU's typically swap textures and other stuff out of VRAM only when something new needs that VRAM.

A large portion of the VRAM in use may just be old stuff kicking around not in active use.

We don't know if 4GB on FIJI is sufficient at this point, but time will tell.

Based on existing benchmarks, however, dual 980s in SLI seem to perform pretty well in GTAV, so my initial thought is that it won't be needed immediately. If you tend to hold on to your video cards for a long time, however, all bets are off.

In a lot of cases where settings have forced video cards into high RAM use situations where the lack of RAM is actually slowing things down, it has been at settings where the GPU isn't able to provide playable rates anyway, so it's a moot point.
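A minimal sketch of that allocated-versus-needed distinction; this is purely illustrative (a toy LRU cache, not how a real driver manages residency):

```python
# Toy model of the point above: VRAM beyond the frame's working set mostly
# holds "old stuff kicking around", so extra capacity changes nothing.
from collections import OrderedDict

def pcie_fetches(vram_mb, resource_sizes_mb, accesses):
    """Count fetches over PCIe for a stream of resource accesses, with LRU eviction."""
    resident = OrderedDict()   # resource id -> size, in LRU order
    used = 0
    fetches = 0
    for rid in accesses:
        if rid in resident:
            resident.move_to_end(rid)           # hit: already in VRAM
            continue
        fetches += 1                            # miss: pull it across PCIe
        size = resource_sizes_mb[rid]
        while used + size > vram_mb:            # evict least-recently-used
            _, freed = resident.popitem(last=False)
            used -= freed
        resident[rid] = size
        used += size
    return fetches

sizes = {i: 256 for i in range(32)}     # 32 textures of 256 MB (8 GB allocated in total)
working_set = list(range(14))           # but one frame only touches ~3.5 GB of them

for vram in (4096, 6144):               # a 4 GB card vs. a 6 GB card
    cold = pcie_fetches(vram, sizes, working_set)
    steady = pcie_fetches(vram, sizes, working_set * 2) - cold
    print(f"{vram} MB VRAM: {cold} cold fetches, then {steady} per frame")
```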
 
Zarathustra[H];1041641190 said:
....In a lot of cases where settings have forced video cards into high RAM use situations where the lack of RAM is actually slowing things down, it has been at settings where the GPU isn't able to provide playable rates anyway, so it's a moot point.

If I remember rightly, Nvidia pretty much said this themselves with the 780 'only' having 3GB. That if you had a game that truly USED 4GB, the GPU wouldn't be able to cope anyway. A 780 can only really use 3GB and likewise a 980 can only really use 4GB.

Fiji will be competing in this range and I think 4GB is easily enough, now and in the future, for that GPU.
 
So, this was posted over in the "coming soon few weeks" thread, but I haven't seen it over here yet, and it seems relevant.

https://twitter.com/dankbaker/status/609445259597213696

From a couple of days ago; didn't notice anyone posting it. He does kind of allude to 4GB of VRAM still just being 4GB of VRAM.

@IcnO Different things really. More VRAM past your working set size doesn't help much. Kind of like having extra seats on the bus

What I have been saying since day 1, only to be made fun of for supposedly not understanding how HBM magically changes the definitions of capacity and bandwidth...

Do I get to say "told you so" now? :p
 
Is AMD dead if they just release rebrands?

Nope.

As long as they provide lots of information and a firm release date on Fiji XT, they should be OK.

Also it really depends on what we are calling "release" these days.

My guess/best case hope for tomorrow is as follows:
  • R9 300 series release and immediate availability. All rebrands.
  • Lots of information about Fiji XT (Fury, or whatever they wind up calling it)
  • Future release date for Fiji XT (6/30 or maybe 7/31?)
  • Review samples of Fiji XT with major reviewers with NDA lifting after press release (this one may be wishful thinking)
  • Expect very limited availability of Fiji based boards, making them almost impossible to find for rest of year, due to everything we have been hearing about HBM availability.

If they fail to deliver full bench information, review samples to major publications and firm release dates for Fiji, there will probably be issues.

I'm actually kind of annoyed at all the people getting impatient and pissed off at AMD for not releasing Fiji sooner. They never promised anything. Just because someone not associated with AMD at all can post a forum thread online titled "coming soon few weeks" doesn't mean AMD has to, or even should, abide by it.

These things have development cycles that are multiple years long. You can't just release them because some fanboy online says so. They have to be done first.

In the immortal words of John Carmack, when will it be released? "When it's done"...
 
Zarathustra[H];1041666777 said:
My guess/best case hope for tomorrow is as follows:
  • R9 300 series release and immediate availability. All rebrands.
  • Lots of information about Fiji XT (Fury, or whatever they wind up calling it)
  • Future release date for Fiji XT (6/30 or maybe 7/31?)
  • Review samples of Fiji XT with major reviewers with NDA lifting after press release (this one may be wishful thinking)
  • Expect very limited availability of Fiji based boards, making them almost impossible to find for rest of year, due to everything we have been hearing about HBM availability.

Yes, 300 series release and availability.
Yes, lots of info on Fiji/Fury.
Release date of Fiji/Fury, 6/24 - 6/30.
Review sites get Fiji/Fury samples at or shortly after E3. NDA lifts likely on the same day as availability.
No limited availability. There are no HBM availability issues.
 