I don't think any review cards have gone out. 300-series or Fury.
Nobody has anything yet.
I love how the canucks article hopes for non-rebrands & retail availability on Tuesday when we already know the exact opposite is happening. Sounds passive aggressive as fuck.
Whole article sounds like a future tense "AMD is about to fuck up royally" under the guise of hopeful prediction...
Review samples were sent
http://www.purepc.pl/karty_graficzne/jak_to_jest_z_premiera_kart_graficznych_amd_radeon_r9_300
Not sure how much sense it will make after Google Translate, as you need to read between the lines, but it confirms samples were sent to selected reviewers while others were skipped.
I'm guessing they are extrapolating on this info for maximum clickbait?
Who has confirmed they have a review card?
You will see benchmarks on release day.
Review samples are in the hands of the bigger sites. I'm sure kyle and crew have one already.
They cannot confirm it of course, but by tomorrow evening the picture should be painted for everyone to see.
Zarathustra[H];1041665941 said: Source?

Noon eastern.

Zarathustra[H];1041665926 said: So, I apologize if this has been stated already, but I can't find it.
Does anyone know what time tomorrow AMD's E3 presentation is?
https://twitter.com/dankbaker/status/609445259597213696
From a couple days ago; didn't notice anyone posting it. He does kind of allude to 4GB of VRAM still just being 4GB of VRAM.
@IcnO Different things really. More VRAM past your working set size doesn't help much. Kind of like having extra seats on the bus
Zarathustra[H];1041666495 said:What I have been saying since day 1, but have been made fun of for not understanding how HBM magically changes the definitions of capacity and bandwidth...
Do I get to say "told you so" now?
Count me in as well for an "I told you so" on the vram spin AMD is trying to pull over on us.
Count me in as well for an "I told you so" on the vram spin AMD is trying to pull over on us.
I think the hope was that HBM could somehow swap shit in and out so fast that you wouldn't fill the 4GB of VRAM, making the 4GB limitation a non-issue. Sounding like it's still going to be an issue though.
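To put rough numbers on why fast swapping can't fully paper over a too-small VRAM pool, here's a quick back-of-envelope sketch in Python. All figures are made-up but plausible assumptions (PCIe 3.0 x16 at roughly 16 GB/s, a 60 fps frame budget of ~16.7 ms), not measurements:

```python
# Toy model: anything past VRAM capacity must cross PCIe every frame (best case).
# All numbers are illustrative assumptions, not measurements.
PCIE_GBPS = 16.0        # rough peak for PCIe 3.0 x16
FRAME_BUDGET_MS = 16.7  # 60 fps frame time

def spill_cost_ms(working_set_gb, vram_gb):
    """Per-frame time spent shuttling the overflow over the bus."""
    overflow_gb = max(0.0, working_set_gb - vram_gb)
    return overflow_gb / PCIE_GBPS * 1000.0

# A 5 GB working set on a 4 GB card: 1 GB crosses the bus each frame.
print(f"{spill_cost_ms(5.0, 4.0):.1f} ms")  # 62.5 ms -- blows a 16.7 ms budget
print(f"{spill_cost_ms(3.0, 4.0):.1f} ms")  # 0.0 ms -- fits, nothing to swap
```

Even with HBM on the card side, the bus is the choke point once the working set spills past 4GB, which is why faster on-card memory alone can't make the limit a non-issue.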
I won't read too much into a vague statement on twitter and will wait for real benchmarks, because I'm guessing there are some usage scenarios (4K+) that HBM excels at, but I was hoping AMD would demonstrate how/why GDDR5 had suddenly become bandwidth constrained. Because I wasn't really aware it was - GPU is still the bottleneck even on a 980 Ti OC'd to absolute limits.
I think it's a good forward-facing tech that will hopefully bear fruit eventually, but existing VRAM speeds seem perfectly adequate for existing GPU power.
TL;DR ever notice how putting a big overclock on your GDDR5 doesn't make a whole hell of a lot of difference in FPS? Or underclocking it for that matter.
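That intuition is easy to sketch: treat frame time as whichever is slower, shader work or memory traffic. The numbers below are hypothetical, just to show the shape of it:

```python
# Toy bottleneck model: the frame takes as long as the slower of two
# "stages" (compute vs. memory traffic). Illustrative numbers only.
def frame_time_ms(compute_ms, traffic_gb, mem_bw_gbps):
    mem_ms = traffic_gb / mem_bw_gbps * 1000.0
    return max(compute_ms, mem_ms)

# Compute-bound card: 10 ms of shader work, 1.5 GB moved per frame.
stock = frame_time_ms(10.0, 1.5, 336.0)  # roughly stock GDDR5 bandwidth
oced  = frame_time_ms(10.0, 1.5, 370.0)  # ~10% memory overclock
print(stock, oced)  # 10.0 10.0 -- the memory OC changes nothing
```

Only when the memory time climbs past the compute time does extra bandwidth show up in FPS, which matches what memory overclocks (and underclocks) actually do on current cards.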
Honest question, as it's before I was really reading most of the news and forums: what were people's thoughts on GDDR3 being bandwidth constrained around the time GDDR5 was being developed and put into use?
Memory bandwidth is only necessary to keep the GPU fed when it needs it; anything more than that is not going to do much. To put it another way: if you are the GPU, you put your hand out and a hot dog appears, and you eat it. Now say the difference between the two memory architectures is that with one, you have a half-second wait before the hot dog appears, and with the other it appears in half that time. The latter is better, but it doesn't mean all that much if you are still trying to swallow the previous hot dog.
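The hot-dog picture reduces to one line of arithmetic: steady-state throughput is set by the slower stage, and speeding up the faster one buys nothing. A minimal sketch, with timings invented to match the analogy:

```python
# Two-stage pipeline: "serving" (memory) feeds "eating" (GPU compute).
# Steady-state rate is set by the slower stage. Timings are made up.
def hotdogs_per_second(serve_s, eat_s):
    return 1.0 / max(serve_s, eat_s)

print(hotdogs_per_second(0.50, 1.0))  # 1.0 -- eating is the bottleneck
print(hotdogs_per_second(0.25, 1.0))  # 1.0 -- serve twice as fast, same rate
print(hotdogs_per_second(0.50, 0.4))  # 2.0 -- only now does serving limit you
```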
FYI- Early GDDR5 allowed AMD to compete with a massive G200 using only half the die size and half the bus width.

Zarathustra[H];1041666791 said: I could be wrong, as my memory is a bit hazy, but as I recall, GDDR5 wasn't a big deal in the beginning either. I don't think it was until a couple of generations later, when we saw a huge performance difference between GDDR3 and GDDR5, as GPUs improved and made the old GDDR3 a bottleneck.
I expect something similar for HBM. No dramatic difference on launch, but a few years from now looking back and wondering how we ever got by without it
hmm... this new GPU I have totally rocks at 4k. Used to need 2 GPUs to do 4k well, now just need 1.