Fun Speculation-Fury refresh by when...

  • Thread starter Deleted member 93354
So with AMD's Deflated Expectations Gate now happening...

How long before AMD announces an 8GB, HDMI 2.0 Fury?

I predict late this Fall.

Edit: How long was it before the FX5700 was replaced by nVidia because of their fiasco? Wasn't it like a matter of months?
 
Last edited by a moderator:
8GB is impossible.

I would be interested in a GDDR5 Fury card for $400ish. Also impossible.
 
No dreams please. HBM1 can only do 4GB, so no 8GB HBM until next gen. Same with HDMI 2.0... it's hardly likely they'd suddenly add that feature, since they could already have done it with the 390/390X, or even Fury.

topic closed =).
 
There is no technical limitation with HBM1 that is restricting it to 4GB...

Yep. The problem is they designed for a smaller process than they were able to get manufactured in the end. Had they been able to get on the smaller process, space would've been left for another 4GB.
 
8GB is impossible.

I would be interested in a GDDR5 Fury card for $400ish. Also impossible.



^^This + Windows 10 + DX12 + VRAM Stacking + Price of $800 = Possible WIN ;)
 
Problem is that DX12 is not the magic sauce we all hope it to be; games have to be specifically programmed to take advantage of it.

So for games that don't support DX12, DX12 won't save them. It will probably be another few GPU gens before we see the fruits of DX12 (by which point we'd probably have GPUs with bigger VRAM than their storage devices, making VRAM stacking possibly moot).
 
I'll believe that whole VRAM stacking thing after I see more than one game doing it... being technically possible is irrelevant to me, especially with an $800 investment..... fuk that
 
I'll believe that whole VRAM stacking thing after I see more than one game doing it... being technically possible is irrelevant to me, especially with an $800 investment..... fuk that

VRAM stacking is snake oil and you realize that very quickly if you do even light reading on the technical realities of split frame rendering. If one GPU renders left half of screen and other GPU renders right half, do ya think those halves are going to have textures in common? Yeah, probably 99.999%. And the only way to get around that and have unique textures in each half frame would be to design a whole game around trying to maintain separate textures on each half of the screen. How many developers will do that? What would even be the sense in that?



Split frame rendering is the real deal and the smoothness is really apparent in a game like Civ Beyond Earth, but VRAM stacking = horseshit
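To put rough numbers on that (the figures below are made up purely for illustration, not measurements), here's a quick back-of-envelope sketch of why duplicated assets keep two 4GB cards at roughly a 4GB effective pool:

```python
# Back-of-envelope sketch (made-up numbers) of why "VRAM stacking" under
# split-frame rendering doesn't simply double usable memory: any asset
# both halves of the frame touch must be resident on BOTH GPUs.

vram_per_gpu_gb = 4.0      # e.g. one Fury-class card
shared_fraction = 0.999    # assumed fraction of the working set both halves need

# Each GPU holds every shared asset plus half of the unique ones:
#   shared*W + (1 - shared)*W/2 <= vram_per_gpu
# Largest working set W that still fits:
max_working_set_gb = 2 * vram_per_gpu_gb / (1 + shared_fraction)

print(f"Largest working set two 4GB GPUs can hold: ~{max_working_set_gb:.2f} GB")
# ~4.00 GB -- barely more than a single card, nowhere near 8GB
```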
 
There is likely some misunderstanding about VRAM stacking. People might be assuming that it is almost like a "switch" you can enable in DX12, with the actual technicalities essentially abstracted (mostly) away from the developer. Basically, that MS has done the work and it will be relatively easy for developers to achieve high VRAM scaling.

In reality, DX12 simply gives developers more low-level control (including over memory allocation), so they can in theory implement something like this themselves.

You can think about it like multi-threading support for applications. Actually achieving equal work distribution (and therefore performance scaling) via multi-threading is very difficult, so in practice the tasks that can do this are rather limited (especially once you factor in resource constraints). This will likely be even harder, especially once you factor in how inexperienced developers will be in this area at the onset and how resource-constrained they are (given the relatively small audience it will benefit).

TLDR: DX12 gives developers the tools/ability, but actual real-world implementation effectiveness is likely to be extremely iffy.
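To make the multi-threading analogy concrete, here's a minimal Amdahl's-law style illustration (the fractions are illustrative only, not measured from any game):

```python
# Amdahl's-law style sketch of the multi-threading analogy: the share of
# work that can't be split caps the speedup, no matter how many GPUs or
# threads you add. Fractions below are illustrative only.

def speedup(parallel_fraction: float, workers: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

for parallel in (0.5, 0.9, 0.99):
    print(f"{parallel:.0%} parallelisable, 2 GPUs -> {speedup(parallel, 2):.2f}x")
# Even at 90% parallelisable, two GPUs only buy ~1.82x; splitting memory
# across GPUs runs into the same kind of diminishing returns, on top of
# the developer effort needed to get there at all.
```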
 
Last edited:
Yes, I saw some people believing that this will be as simple as switching a few flags in the code because DX12 supports it.
 
I suspect no refresh anytime soon, and a price drop of at least $100 as soon as the initial cards stop selling out. They are just demanding a premium from fanboys and amateur reviewers who have to have the cards now. That's the only good reason I can see for the current pricing.
 
I believe they could do 8GB HBM but I do not see them adding HDMI 2.0 when active adapters will be on market by then.
 
they cannot, they are physically limited by the size of the interposer.
 
they cannot, they are physically limited by the size of the interposer.

Yes they can. They just need taller vertical stacks (more dies per stack) on the HBM from Hynix. But the word is that yield is already low, so taller stacks would lead to even more supply issues.

Once Hynix gets their @#$%@# fixed we should see 8 Gigs
 
Can we get an "AMD R9 Fury X refresh in a few weeks" thread? We better start that up now.
 
Yes they can. They just need taller vertical stacks (more dies per stack) on the HBM from Hynix. But the word is that yield is already low, so taller stacks would lead to even more supply issues.

Once Hynix gets their @#$%@# fixed we should see 8 Gigs


No they can't. Where would they put the stacks? They have to be connected to the interposer; that is what Mac is saying, there is no more space on the interposer. And even if they were able to fit them in there, you'd still need traces to the GPU, so the GPU would need to be redesigned, and since it's near the reticle limit, that most likely can't be done.
 
No dreams please. HBM1 can only do 4GB, so no 8GB HBM until next gen. Same with HDMI 2.0... it's hardly likely they'd suddenly add that feature, since they could already have done it with the 390/390X, or even Fury.

topic closed =).


Exactly. If they had to come in at $650 retail for their current-gen flagship card with subpar features, a model with better specs and more memory would be, what, $1300?
 
No they can't. Where would they put the stacks? They have to be connected to the interposer; that is what Mac is saying, there is no more space on the interposer. And even if they were able to fit them in there, you'd still need traces to the GPU, so the GPU would need to be redesigned, and since it's near the reticle limit, that most likely can't be done.

exactly.

However, I think he is implying more layers on the actual modules.

Which currently can't be done within the thermal envelope of the package.

Supposedly 2.0 will be 4-Hi (2GB per module).
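For reference, the capacity math behind those figures (the HBM1 numbers match what shipped on Fiji; the "2.0" line simply restates the 2GB-per-module claim above):

```python
# Stack capacity math for the configurations discussed in this thread.

def stack_capacity_gb(dies_per_stack: int, die_density_gbit: int) -> float:
    return dies_per_stack * die_density_gbit / 8  # 8 gigabits per gigabyte

# HBM1 as shipped on Fiji: four 4-Hi stacks of 2 Gbit dies
hbm1 = stack_capacity_gb(dies_per_stack=4, die_density_gbit=2)   # 1 GB per stack
print(f"HBM1:  4 stacks x {hbm1:.0f} GB = {4 * hbm1:.0f} GB total")

# The claimed "2.0" config above: 4-Hi at 2 GB per module implies 4 Gbit dies
hbm2 = stack_capacity_gb(dies_per_stack=4, die_density_gbit=4)   # 2 GB per stack
print(f"HBM2?: 4 stacks x {hbm2:.0f} GB = {4 * hbm2:.0f} GB total")
```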
 
yes there is, physical size.

There just isn't enough room to cram 4 more stacks on there.
Like I said, there is no technical limitation with HBM1 that would limit it to 4GB.

No they can't. Where would they put the stacks? They have to be connected to the interposer; that is what Mac is saying, there is no more space on the interposer. And even if they were able to fit them in there, you'd still need traces to the GPU, so the GPU would need to be redesigned, and since it's near the reticle limit, that most likely can't be done.
Again, they could have used a larger, more expensive interposer if they really wanted/needed 8GB. They are at the limit on what they can do with the interposer they chose, but again, that doesn't mean that it is a limitation of HBM.
 
The easiest thing to do would probably be to work on drivers and pay developers to use DP-heavy features. Although that would also indirectly help nvidia sell Pascal if they decide to put DP back in their next-gen cards.
 
Again, they could have used a larger, more expensive interposer if they really wanted/needed 8GB. They are at the limit on what they can do with the interposer they chose, but again, that doesn't mean that it is a limitation of HBM.

the interposer would have to be redesigned to accommodate the additional traces, as well as the interface logic on the GPU package itself.

The additional cost of the larger silicon interposer, additional HBM modules, and redesigned GPU package logic would put the cost way beyond reasonable market pricing expectations.

Again, it's a physical limitation.

And that's just assuming you could do all those things without exceeding the process reticle limit, as razor1 pointed out.
 
Last edited:
the interposer would have to be redesigned to accommodate the additional traces, as well as the interface logic on the GPU package itself.

The additional cost of the larger silicon interposer, additional HBM modules, and redesigned GPU package logic would put the cost way beyond reasonable market pricing expectations.

Again, it's a physical limitation.

And that's just assuming you could do all those things without exceeding the process reticle limit, as razor1 pointed out.

No, I'm not talking about redesigning Fiji now... I'm saying that if they went a different way when designing Fiji, they could have had 8GB.

An interposer doesn't have to be silicon-based... you can have much, much larger interposers that are not limited by reticle limits.

I'm simply stating that you are wrong in thinking that there is a technical limitation to 8GB of HBM. There is not.
That doesn't mean AMD went down the wrong path; they have to consider all the factors: cost, time to market, manufacturability, etc.
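To illustrate the tradeoff being argued here, a crude interposer area budget (every number below is a round-figure assumption, not a measured dimension):

```python
# Crude area-budget sketch (assumed, round-number footprints only) showing
# why doubling the stack count means a meaningfully larger interposer,
# not a different kind of HBM.

gpu_die_mm2 = 600          # assumed Fiji-class GPU footprint
hbm_stack_mm2 = 40         # assumed footprint per HBM stack
routing_overhead = 1.2     # assumed extra area for spacing and traces

def interposer_area_mm2(stacks: int) -> float:
    return (gpu_die_mm2 + stacks * hbm_stack_mm2) * routing_overhead

print(f"4 stacks (4GB): ~{interposer_area_mm2(4):.0f} mm^2")
print(f"8 stacks (8GB): ~{interposer_area_mm2(8):.0f} mm^2")
# More stacks = a noticeably bigger interposer, which is why the argument
# turns on larger or non-silicon interposers rather than on HBM itself.
```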
 
Yes they can. They just need taller vertical stacks (more dies per stack) on the HBM from Hynix. But the word is that yield is already low, so taller stacks would lead to even more supply issues.

Once Hynix gets their @#$%@# fixed we should see 8 Gigs


It was rumored months ago that they would jerry-rig HBM 1.0 to make 8GB cards possible, thus kind of making it somewhat of an HBM 2.0 product. However, I'm assuming, as you put it already, that the juice wasn't worth the squeeze. HBM 2.0 isn't far off from mass production, so there is little incentive to do such a thing when A) it isn't really needed as of now and B) HBM 2.0 isn't that far off and would roll perfectly with both sides' next-gen products.

Officially there are no HBM 1.0 products beyond 4GB. It was designed to have 4 x 1GB stacks that can be configured anywhere in between. Anything above that would be out of spec for the 1.0 standard.
 
Officially there are no HBM 1.0 products beyond 4GB. It was designed to have 4 x 1GB stacks that can be configured anywhere in between. Anything above that would be out of spec for the 1.0 standard.
No. There is no "standard" to how many stacks you use. The only standard is the specifications of the stacks themselves.
That would be like saying you can only use 4 GDDR5 ICs on a PCB or only 4 DDR4 ICs on a stick.
 
I think what most people will find is that HBM stacks will be used as a form of L4 cache and GDDR will still be used on the PCB. Imagine if Fury had 8GB of GDDR5 and 4GB of HBM combined. I reckon we'll see that become a natural evolution.

It's a bit like SSDs. Ten years ago, people thought SSDs would take over from HDDs as a storage medium, when really we all still use lots of HDDs; we just now have SSDs for applications and the OS, with no sign of that changing anytime soon. We have 6, 8 and even 10 TB HDDs trickling down into the consumer space, while SSDs are only able to hit 1TB. Doesn't mean we don't use SSDs, it just means they add something to the overall system plan.
 
I think what most people will find is that HBM stacks will be used as a form of L4 cache and GDDR will still be used on the PCB. Imagine if Fury had 8GB of GDDR5 and 4GB of HBM combined. I reckon we'll see that become a natural evolution.
That would completely remove some of the key benefits of HBM.
Power savings - You are now adding ~50% more power by having both, rather than decreasing power by ~50%.
PCB complexity - You are now combining the increased package size of a GPU with HBM with the increased PCB size of a GDDR5 arrangement.
Memory controller/PHY complexity - You are adding two very different types of memory controllers and increasing the complexity of the PHY, while potentially becoming pad-limited.

It makes no sense.
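Putting assumed wattages behind those percentages (illustrative figures only, not measured board power):

```python
# Illustration of the power-savings point with assumed, round-number
# wattages (not measurements): adding HBM on top of GDDR5 raises memory
# power, whereas replacing GDDR5 with HBM roughly halves it.

gddr5_subsystem_w = 30.0   # assumed power of a GDDR5 memory subsystem
hbm_subsystem_w = 15.0     # assumed power of an equivalent HBM subsystem

hbm_only_saving = 1 - hbm_subsystem_w / gddr5_subsystem_w
hybrid_increase = (gddr5_subsystem_w + hbm_subsystem_w) / gddr5_subsystem_w - 1

print(f"HBM instead of GDDR5: ~{hbm_only_saving:.0%} less memory power")
print(f"HBM + GDDR5 together: ~{hybrid_increase:.0%} more than GDDR5 alone")
```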
 
Just because something is "technically possible" doesn't make it reasonable, cost-effective, or a good use of engineering resources. It is "technically possible" that I could create a mini wormhole and choke the shit out of AMD fanboys who are looking for anything they can use to downplay the fiasco of the Fury X's performance vs. its price point.
 
Just because something is "technically possible" doesn't make it reasonable, cost-effective, or a good use of engineering resources. It is "technically possible" that I could create a mini wormhole and choke the shit out of AMD fanboys who are looking for anything they can use to downplay the fiasco of the Fury X's performance vs. its price point.

That is right. You don't have anything to disprove my statements. Nice strawman.
 
the interposer would have to be redesigned to accommodate the additional traces, as well as the interface logic on the GPU package itself.

The additional cost of the larger silicon interposer, additional HBM modules, and redesigned GPU package logic would put the cost way beyond reasonable market pricing expectations.

Again, it's a physical limitation.

And that's just assuming you could do all those things without exceeding the process reticle limit, as razor1 pointed out.

You could have said the same about the fiasco that was the FX5700. But look how quickly nVidia killed it and had it replaced for a similar amount of money.

Redesigning pins and board substrates (the interposer) isn't a big deal. Thermal limits, signal issues, and silicon redesign are more of a hurdle. As soon as Hynix gets a die shrink we'll see 2GB x 4 chip packages. And from what I understand they are already sampling the next node.

The memory industry (along with Intel) has been able to scale quite well compared to AMD, Samsung, etc...
 
Last edited by a moderator:
I think you've missed the point.

On the current process it would be cost-prohibitive, feasible or not.

No one is going to pay what they would have to sell it for.
 
That is right. You don't have anything to disprove my statements. Nice strawman.

The posts above mine did DISPROVE your statements. Just because I didn't repeat the same thing again that you refuse to believe doesn't make it any less true. Or is that your argument... ignore posts that might refute your position?

Again, I could make a 100 MW Silicon Carbide (SiC) converter that would be more efficient than anything out there...doesn't make it smart...in reality it would be downright stupid.
 
I think you've missed the point.

On the current process it would be cost-prohibitive, feasible or not.

No one is going to pay what they would have to sell it for.

Pure speculation without proof.

I would think it would be more costly to keep manufacturing a card that doesn't sell.

"The most expensive room on a ship/hotel is the empty one"
 
Not speculation; based on my EE degree and knowledge of manufacturing processes, logistics, and BOM costing.

Your insistence that it's doable and cost-effective is pure speculation.
 