390X coming soon, in a few weeks

FYI..Good job everyone, we did go to 300 pages (of mostly speculation, with a hint of truth and an overdose of hallucinogenic rumors)!
 
No idea on furries.

 
I love how the canucks article hopes for non-rebrands & retail availability on Tuesday when we already know the exact opposite is happening. Sounds passive aggressive as fuck.
Whole article sounds like a future tense "AMD is about to fuck up royally" under the guise of hopeful prediction...
 
I love how the canucks article hopes for non-rebrands & retail availability on Tuesday when we already know the exact opposite is happening. Sounds passive aggressive as fuck.
Whole article sounds like a future tense "AMD is about to fuck up royally" under the guise of hopeful prediction...

Based on some retailers "accidentally" selling 300 series cards early, we know that these are rebrands with only slight improvements.

We know nothing about benchmarks or retail availability of Fiji-based cards yet.

We have some pretty reasonable suspicions on it all, but no facts. We don't "know" anything about this yet. We just suspect.

The constant inability of people to sort fact from rumor in this thread has been ridiculous.
 
So, I apologize if this has been stated already, but I can't find it.

Does anyone know what time tomorrow AMD's E3 presentation is?
 
You will see benchmarks on release day.

Review samples are in the hands of the bigger sites. I'm sure kyle and crew have one already.

They cannot confirm it of course, but by tomorrow evening the picture should be painted for everyone to see.
 
You will see benchmarks on release day.

Review samples are in the hands of the bigger sites. I'm sure kyle and crew have one already.

They cannot confirm it of course, but by tomorrow evening the picture should be painted for everyone to see.

Source?
 
You will see benchmarks on release day.

Review samples are in the hands of the bigger sites. I'm sure kyle and crew have one already.

They cannot confirm it of course, but by tomorrow evening the picture should be painted for everyone to see.

haha nope.. being optimistic, we have to wait 1 full week (at least) for a complete review of the Fury cards, and that review will honestly be late on [H]OCP.. with the 980 Ti they only did a quick PREVIEW 2 weeks ago and nothing more.. yes, 2 weeks for a single card..
 
https://twitter.com/dankbaker/status/609445259597213696

From a couple days ago, didn't notice anyone posting it. He does kind of allude to 4GB of VRAM still just being 4GB of VRAM.

@IcnO Different things really. More VRAM past your working set size doesn't help much. Kind of like having extra seats on the bus

What I have been saying since day 1, but have been made fun of for not understanding how HBM magically changes the definitions of capacity and bandwidth...

Do I get to say "told you so" now? :p
 
Zarathustra[H];1041666495 said:
What I have been saying since day 1, but have been made fun of for not understanding how HBM magically changes the definitions of capacity and bandwidth...

Do I get to say "told you so" now? :p

Count me in as well for an "I told you so" on the vram spin AMD is trying to pull over on us.
 
Count me in as well for an "I told you so" on the vram spin AMD is trying to pull over on us.

I don't recall there being much intentional AMD spin.

I just remember them saying something to the effect of:

"I think you will be happy with the performance of 4GB of HBM *wink* *wink*"

...and certain people reading WAY too much into it.
 
Zarathustra[H];1041666495 said:
What I have been saying since day 1, but have been made fun of for not understanding how HBM magically changes the definitions of capacity and bandwidth...

Do I get to say "told you so" now? :p

I think the hope was that HBM could somehow swap shit in and out so fast that you wouldn't fill the 4GB of VRAM, making the 4GB limitation a non-issue. Sounding like it's still going to be an issue though.
 
Zarathustra[H];1041666495 said:
What I have been saying since day 1, but have been made fun of for not understanding how HBM magically changes the definitions of capacity and bandwidth...

Do I get to say "told you so" now? :p

Count me in as well for an "I told you so" on the vram spin AMD is trying to pull over on us.

I think "4GB means 4GB" will soon take on a whole new meaning :D
 
I think the hope was that HBM could somehow swap shit in and out so fast that you wouldn't fill the 4GB of VRAM, making the 4GB limitation a non-issue. Sounding like it's still going to be an issue though.

Two things:

1.) Sure, with more bandwidth you can swap things in and out more quickly, but where are you swapping it from (and to)? System RAM? In that case your bottleneck is going to be your PCIe bus, followed by your System RAM, not your VRAM.

2.) I'm still not convinced 4GB is going to be an issue. 980s in SLI perform well in today's most demanding 4K titles, suggesting that 4GB is sufficient for 4K performance. Sure, we have seen some utilization numbers that are high, but just because something is sitting in VRAM taking up space doesn't mean it is in active use. Typically, to save RAM cycles, data isn't flushed out of VRAM until that space is needed for something else.

I could picture level load times being a second or so longer, as more data has to be shuttled across to the VRAM at level load because it was flushed sooner than it would have been with more VRAM available, but I have not yet seen anything that suggests to me that 4GB won't be enough to play games at high frame rates at 4K. We will see, though; reviews ought to settle this.

It is possible 4GB may mean the Fiji cards won't have the same longevity they would with more RAM, but by then the GPU will probably be serving second-hand duty in your cousin's/brother's/stepson's 1080p rig, and it will be a moot point.
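To put rough numbers on point 1 above, here is a back-of-envelope sketch. The figures are assumptions, not confirmed specs: ~15.75 GB/s as a practical ceiling for PCIe 3.0 x16, and 512 GB/s for Fiji's aggregate HBM bandwidth. The point is just that refilling evicted textures from system RAM is gated by the PCIe bus, not by how fast the VRAM itself is:

```python
# Back-of-envelope: where the texture-swap bottleneck sits (assumed, round figures)
PCIE3_X16_GBPS = 15.75   # PCIe 3.0 x16 practical throughput, GB/s (assumption)
HBM_GBPS = 512.0         # Fiji HBM aggregate bandwidth, GB/s (assumption)

def transfer_ms(gigabytes, gbps):
    """Milliseconds to move `gigabytes` of data at `gbps` GB/s."""
    return gigabytes / gbps * 1000.0

# Re-uploading 1 GB of evicted assets from system RAM over PCIe:
print(round(transfer_ms(1.0, PCIE3_X16_GBPS), 1))  # ~63.5 ms
# Moving the same 1 GB within HBM:
print(round(transfer_ms(1.0, HBM_GBPS), 2))        # ~1.95 ms
```

Whatever the exact numbers, the ratio is what matters: the trip across PCIe is over an order of magnitude slower, so faster VRAM can't hide a working set that spills past 4GB.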
 
I won't read too much into a vague statement on Twitter and will wait for real benchmarks, because I'm guessing there are some usage scenarios (4K+) that HBM excels at, but I was hoping AMD would demonstrate how/why HBM was necessary and GDDR5 was suddenly bandwidth constrained. Because I wasn't really aware it was - the GPU is still the bottleneck even on a 980 Ti OC'd to its absolute limits.
I think it's a good forward-looking tech that should bear fruit eventually, but existing GDDR5 speeds seem appropriate for existing GPU power.

TL;DR ever notice how putting a big overclock on your GDDR5 doesn't make a whole hell of a lot of difference in FPS? Or underclocking it for that matter.
 
What is the latency of HBM vs GDDR5? Is there a big difference there or is it simply a bandwidth benefit?
 
I won't read too much into a vague statement on Twitter and will wait for real benchmarks, because I'm guessing there are some usage scenarios (4K+) that HBM excels at, but I was hoping AMD would demonstrate how/why GDDR5 had suddenly become bandwidth constrained. Because I wasn't really aware it was - the GPU is still the bottleneck even on a 980 Ti OC'd to its absolute limits.
I think it's a good forward-looking tech that will hopefully bear fruit eventually, but existing VRAM speeds seem perfectly appropriate for existing GPU power.

TL;DR ever notice how putting a big overclock on your GDDR5 doesn't make a whole hell of a lot of difference in FPS? Or underclocking it for that matter.


Agreed.

And I don't think anyone is suggesting that memory bandwidth would have been an issue with Fiji XT. It's a forward-looking technology that AMD is out ahead of the curve on.

Nvidia plans on using it next generation.

I don't think HBM will amount to a hill of beans difference in this generation, except for allowing a smaller AIO WC board, and lower VRAM power use.

Fiji XT may turn out to be an amazing GPU, beating the Titan X and 980 Ti, but if it does, it will be because of GPU improvements, not because of HBM VRAM.
 
Don't forget though... the 4096-bit bus (1024-bit to each stack...)
maybe GDDR5 would have performed better (when overclocked) on a wider bus...
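For what it's worth, the peak-bandwidth arithmetic is just bus width times per-pin data rate. The per-pin rates below are assumed round numbers for the parts of this generation, not confirmed specs:

```python
def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(4096, 1.0))  # HBM1: 4096-bit @ ~1 Gbps/pin -> 512.0 GB/s
print(bandwidth_gbs(384, 7.0))   # 384-bit GDDR5 @ 7 Gbps -> 336.0 GB/s
print(bandwidth_gbs(512, 6.0))   # 512-bit GDDR5 @ 6 Gbps -> 384.0 GB/s
```

So a wider GDDR5 bus does close some of the gap; matching HBM's 512 GB/s at 7 Gbps would take roughly a 585-bit bus, which is why HBM goes wide and slow instead.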
 
Honest question, as this was before I was really reading most of the news and forums: what were people's thoughts on GDDR3 being bandwidth constrained around the time GDDR5 was being developed and put into use?
 
Honest question, as this was before I was really reading most of the news and forums: what were people's thoughts on GDDR3 being bandwidth constrained around the time GDDR5 was being developed and put into use?

I really don't think it's the raw bandwidth, because there is plenty of memory bandwidth right now. The benefit may be the interconnect speed between the GPU and the HBM stacks. That may be why 4GB might be all that's necessary for now, and it should be enough to get by for a few years. But eventually you will see 8GB and beyond with HBM gen 2.
 
Who cares about the 3xx series? I mean, really? This thread is at 300+ pages and over 6,000 posts. I've learned about hotdogs, so I'm good.

Looking for the [H] review on EVERY card which AMD is about to announce, to be posted within 10 minutes of the announcement tomorrow. ;)
 
Honest question, as this was before I was really reading most of the news and forums: what were people's thoughts on GDDR3 being bandwidth constrained around the time GDDR5 was being developed and put into use?

I could be wrong, as my memory is a bit hazy, but as I recall, GDDR5 wasn't a big deal in the beginning either. It wasn't until a couple of generations later that we saw a huge performance difference between GDDR3 and GDDR5, as GPUs improved and made the old GDDR3 a bottleneck.

I expect something similar for HBM. No dramatic difference on launch, but a few years from now looking back and wondering how we ever got by without it :p
 
Memory bandwidth is only necessary to keep the GPU fed when it needs it; anything more than that is not going to do much. To put it another way: if you are the GPU, you put your hand out and a hot dog appears, and you eat it. Now if the difference between the two memory architectures is that with one you have a half-second wait before the hot dog appears, and with the other it appears in half that time, the latter is better, but it doesn't mean all that much if you are still trying to swallow the previous hot dog ;)
 
Memory bandwidth is only necessary to keep the GPU fed when it needs it; anything more than that is not going to do much. To put it another way: if you are the GPU, you put your hand out and a hot dog appears, and you eat it. Now if the difference between the two memory architectures is that with one you have a half-second wait before the hot dog appears, and with the other it appears in half that time, the latter is better, but it doesn't mean all that much if you are still trying to swallow the previous hot dog ;)

I like this hotdog analogy!
 
Zarathustra[H];1041666791 said:
I could be wrong, as my memory is a bit hazy, but as I recall, GDDR5 wasn't a big deal in the beginning either. It wasn't until a couple of generations later that we saw a huge performance difference between GDDR3 and GDDR5, as GPUs improved and made the old GDDR3 a bottleneck.

I expect something similar for HBM. No dramatic difference on launch, but a few years from now looking back and wondering how we ever got by without it :p
FYI- Early GDDR5 allowed AMD to compete with a massive G200 using only half the die size and half the bus width.
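Using assumed per-pin rates of that era (roughly 2.2 Gbps for the GTX 280's GDDR3 and 3.6 Gbps for the HD 4870's GDDR5; exact clocks not confirmed here), the half-width GDDR5 bus lands in the same bandwidth ballpark:

```python
def bandwidth_gbs(bus_bits, gbps_per_pin):
    # Peak memory bandwidth in GB/s: bus width in bits x per-pin rate in Gbps / 8
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(512, 2.2))  # GTX 280: 512-bit GDDR3 -> ~140.8 GB/s
print(bandwidth_gbs(256, 3.6))  # HD 4870: 256-bit GDDR5 -> ~115.2 GB/s
```

Doubling the per-pin rate is what let the 256-bit card compete with the 512-bit one, which is the same trade HBM inverts: much wider, much slower per pin.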

Memory bandwidth is only necessary to keep the GPU fed when it needs it; anything more than that is not going to do much. To put it another way: if you are the GPU, you put your hand out and a hot dog appears, and you eat it. Now if the difference between the two memory architectures is that with one you have a half-second wait before the hot dog appears, and with the other it appears in half that time, the latter is better, but it doesn't mean all that much if you are still trying to swallow the previous hot dog ;)

Depends on the application.
With gaming, yes there is a diminishing return for bandwidth.
With everything else, you want all the bandwidth you can get. GPUs are inherently good at latency hiding, which leaves them starved for bandwidth.
 
Memory bandwidth is only necessary to keep the GPU fed when it needs it; anything more than that is not going to do much. To put it another way: if you are the GPU, you put your hand out and a hot dog appears, and you eat it. Now if the difference between the two memory architectures is that with one you have a half-second wait before the hot dog appears, and with the other it appears in half that time, the latter is better, but it doesn't mean all that much if you are still trying to swallow the previous hot dog ;)

It's not too late to petition AMD to change the name from Fury to Kobayashi before they present!
 
Zarathustra[H];1041666495 said:
What I have been saying since day 1, but have been made fun of for not understanding how HBM magically changes the definitions of capacity and bandwidth...

Do I get to say "told you so" now? :p

Why are people viewing this as something bad? I read it as saying 4GB is plenty. How come no one posted the good news from that tweet?
Hmm... this new GPU I have totally rocks at 4K. Used to need 2 GPUs to do 4K well, now just need 1.
 