390X coming soon, in a few weeks

I'm with you.

The one that killed me was the guy claiming that some other company would rise up to take on nVidia if AMD completely imploded, because obviously there are upstart companies with millions or billions to invest trying to break into this market all the time.

Honestly, with AMD doing so poorly, I wish someone bigger would buy them and whip them into shape. Why can't some stupidly rich playboy buy the company for his teenage son who likes consoles or something...
 
Honestly, with AMD doing so poorly, I wish someone bigger would buy them and whip them into shape. Why can't some stupidly rich playboy buy the company for his teenage son who likes consoles or something...

The only company I could see buying AMD/ATI would be Intel, which would be bad because then we would have a CPU monopoly.
 
Someone on Tomshardware wrote an op-ed about AMD being a perfect purchase for Samsung. It had several very good points.
 
I love new hardware and all, and this does look exciting. However, since I already have an R9 290 flashed to a 290X, I think I will stick with that, since I would not see enough of a difference to justify spending the money on it. I will probably keep my 290 at least 3 years, but this 380X and 390X stuff does look interesting.

Since you mention that, I noticed my XFX card has the same BIOS switch the 290Xs have. I haven't tried anything or really looked into it too hard, but do you think mine could be a 290X rather than a 290, or at least capable of being flashed to a 290X?
 
Since you mention that, I noticed my XFX card has the same BIOS switch the 290Xs have. I haven't tried anything or really looked into it too hard, but do you think mine could be a 290X rather than a 290, or at least capable of being flashed to a 290X?

I am not sure about yours, but mine is a reference card that I bought back in November 2013. I did not actually flash it until May 2014 because I did not want to risk bricking my card.

I used the actual XFX R9 290X reference BIOS, though, to avoid any issues. The key is to go to overclock.net, look in the AMD video card section for the 290 unlock thread, and run the utility found there. It will tell you whether your card is unlockable or not.

Still, it is a risk so be very, very careful nonetheless.
 
Letting nV suck up market share for another 4 months surely won't help things.

I'm sure they are doing the best they can with a far more limited budget.
Bulldozer must have cost them a lot of coin, and they can't afford another mistake like that.

It is a bit disconcerting not to see anything really new for a good while.
 
The only company I could see buying AMD/ATI would be Intel, which would be bad because then we would have a CPU monopoly.

They are the only ones that could; if anyone else bought them, AMD's x86 license would be kaput.

Part of why the nVidia-AMD "merger" could never happen.
 
I am not sure about yours, but mine is a reference card that I bought back in November 2013. I did not actually flash it until May 2014 because I did not want to risk bricking my card.

I used the actual XFX R9 290X reference BIOS, though, to avoid any issues. The key is to go to overclock.net, look in the AMD video card section for the 290 unlock thread, and run the utility found there. It will tell you whether your card is unlockable or not.

Still, it is a risk so be very, very careful nonetheless.

Never would have thought to look into it except for that switch. Damn temptation.
 
High expectations, for sure.
Even for AMD this is a really long gap between releases. I hope it's not a side-effect of their financial troubles.

I'd imagine it has more to do with HBM. That has to be the biggest news in GPUs in a dog's age. They don't want to chance having issues. Just as in the Nvidia 970 issue thread: if it were AMD, they'd have the pitchforks out.
 
Yeah, the 3.5GB issue is a real kick in the nuts to new owners of the GTX 970.
If the 3x0 series really has HBM, then it's going to make that situation look even worse for Nvidia.
 
That might go backwards if the 970 really does have a hardware issue, and it gets recalled.

Pretty serious right now, since it seems almost all 970s are affected.

Yep, even the new revised ones. Returned mine immediately when I noticed it.
 
That might go backwards if the 970 really does have a hardware issue, and it gets recalled.

Pretty serious right now, since it seems almost all 970s are affected.

Would seem to me it is a total non-issue.

The cards perform just as all the review samples did on launch prior to people buying them.

People got what they were paying for, and what they were paying for was one hell of a bang for a buck card.

How Nvidia accomplishes that performance architecturally internal to the card is irrelevant.

Besides, their response seems to suggest the issue is just how the figures are being reported, and doesn't impact actual performance anyway.

If I were shopping for a mid to high end system today, the GTX970 would still be on my shopping list, and I would have no qualms recommending it to others.
 
Zarathustra[H];1041382971 said:
Would seem to me it is a total non-issue.

The cards perform just as all the review samples did on launch prior to people buying them.

People got what they were paying for, and what they were paying for was one hell of a bang for a buck card.

How Nvidia accomplishes that performance architecturally internal to the card is irrelevant.

Besides, their response seems to suggest the issue is just how the figures are being reported, and doesn't impact actual performance anyway.

If I were shopping for a mid to high end system today, the GTX970 would still be on my shopping list, and I would have no qualms recommending it to others.

So the people with stuttering are lying?
 
Have had the same thoughts. This would make a great partnership.

If Samsung bought them it would be to get the IP. I don't see Samsung being interested in getting into the dGPU business. Not enough volume/profit.
 
Zarathustra[H];1041382971 said:
Would seem to me it is a total non-issue.
(Snipped for sanity)

Actually, I tested my 970 and can see stuttering when going over 3.5GB. So I might be lying, as a 970 owner.
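
For anyone who wants to check their own card, here's a rough sketch of the kind of chunked-allocation test people have been running. This assumes PyCUDA is installed; the chunk size, repeat count, and the copy-between-halves trick are my own choices for illustration, not any official benchmark:

```python
# Rough VRAM probe: allocate the card in 128 MB chunks, then time a
# device-to-device copy inside each chunk. On an affected GTX 970 the
# chunks that land in the last ~0.5 GB reportedly show a sharp drop.
import pycuda.autoinit          # creates a CUDA context on the default GPU
import pycuda.driver as drv

CHUNK = 128 * 1024 * 1024       # 128 MB per allocation (arbitrary choice)
REPS = 10

chunks = []
while drv.mem_get_info()[0] > 2 * CHUNK:   # leave headroom for the driver
    chunks.append(drv.mem_alloc(CHUNK))

for i, buf in enumerate(chunks):
    half = CHUNK // 2
    start, end = drv.Event(), drv.Event()
    start.record()
    for _ in range(REPS):
        # copy the first half of this chunk onto its second half
        drv.memcpy_dtod(int(buf) + half, buf, half)
    end.record()
    end.synchronize()
    seconds = start.time_till(end) / 1000.0     # time_till() returns ms
    gbps = REPS * CHUNK / seconds / 1e9         # half read + half written
    print(f"chunk {i:2d}: {gbps:6.1f} GB/s")
```

If the last few chunks print dramatically lower numbers than the rest, that's the slow segment people are talking about.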
 
Maybe this 390X means I'll be able to pick up a 290X cheaper ;)

I'm really curious if there's a drawback to HBM memory. New technology is rarely perfect in revision 1.0. So here's hoping it is.
 
Latest rumor is that AMD is so confident this new GPU family is going to wipe the floor with Nvidia that at the very beginning of Q2 they are only releasing the R380X GPU, which sits just under their flagship. Expectations are that even this chip will wipe the floor with the Titan X (Titan 2, etc.). They are also planning to price it under $500, which should kick Nvidia in the groin given their rumored $1350 pricing. Finally, sometime at the beginning of Q3, or possibly the end of Q2, they are releasing their R390X flagship champion for $649.99, which will take everything's lunch money. :D
 
I'm really curious if there's a drawback to HBM memory. New technology is rarely perfect in revision 1.0. So here's hoping it is.

I've been wondering this too. From what I've seen, it looks like the potential is there for higher latencies, but since the bandwidth gain is so large it's not going to make much difference. If the specs I saw a while back were accurate, a single chip stack should be able to push 250 GB/s, and we would expect at least two of those paired with a decent GPU.
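
Back-of-envelope on that 250 GB/s figure, assuming the HBM1 layout of 8 independent 128-bit channels per stack (the per-pin rates here are my guesses at what the spec allows, not confirmed product numbers):

```python
# Theoretical bandwidth of one HBM stack: channels x bus width x pin rate.
channels = 8          # independent channels per HBM1 stack
bus_bits = 128        # data bus per channel (per JESD235)
for pin_gbps in (1.0, 2.0):              # DDR at a 500 MHz or 1 GHz clock
    gb_s = channels * bus_bits * pin_gbps / 8
    print(f"{pin_gbps} Gb/s per pin -> {gb_s:.0f} GB/s per stack")
# 1 Gb/s/pin -> 128 GB/s; 2 Gb/s/pin -> 256 GB/s, in line with ~250 GB/s
```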
 
I've been wondering this too. From what I've seen, it looks like the potential is there for higher latencies, but since the bandwidth gain is so large it's not going to make much difference. If the specs I saw a while back were accurate, a single chip stack should be able to push 250 GB/s, and we would expect at least two of those paired with a decent GPU.

Where did you get those impressions from?

I'd assume the latency should be better, as HBM can be situated on the same module, effectively acting as eDRAM.

The configuration currently being mentioned most often is 4 stacks of 4 dies each.

Speculatively, the limitations would be cost and availability. Also, the currently available product information and leaked roadmaps suggest capacity would be limited to 4GB at 4x4.
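
The 4GB cap falls straight out of that 4x4 configuration if you assume first-gen 2 Gb dies (the die density is an assumption from the early HBM material, not a confirmed spec):

```python
# Capacity of a 4x4 HBM setup with assumed 2 Gb (0.25 GB) dies.
stacks = 4
dies_per_stack = 4
gb_per_die = 2 / 8             # 2 gigabits = 0.25 gigabytes
total = stacks * dies_per_stack * gb_per_die
print(f"{total:.0f} GB total")  # 4 GB, matching the leaked roadmaps
```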
 
http://www.jedec.org/standards-documents/docs/jesd235
The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels. Each channel is completely independent of one another. Channels are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. The HBM DRAM uses differential clock CK_t/CK_c. Commands are registered at the rising edge of CK_t, CK_c. Each channel interface maintains a 128b data bus operating at DDR data rates.
Not sure if this narrows it down a whole lot, but it is the JEDEC specification (above).

[Image: hbm.png]

http://tech4gamers.com/high-bandwidth-memory-hbm/
As we can see in the picture above of AMD's proposed layout for the memory chips, the stacks would sit alongside the CPU/GPU core rather than on top of it, an arrangement called 2.5D, unlike the proposed 3D approach of stacking layer after layer vertically. This means even the first implementations of HBM will offer bandwidth at least equivalent to current top-of-the-range graphics cards, while greatly improving latencies due to the proximity to the core.

Still, what we will see is a vast improvement in the theoretical bandwidth achievable using HBM, which is in the 128-256GB/s range, although that is not the only improvement, as we also get better latency, lower design complexity, and less power consumption. Analysts believe that HBM will be around 9X faster than current GDDR5.

The bandwidth alone is mouthwatering. And when they add this to an APU (the second pic in the first photo), that whole bandwidth-starved naysaying will be a thing of the past.
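
To put numbers on the APU point, compare a typical dual-channel DDR3 setup against even a single HBM stack (both configurations are illustrative assumptions):

```python
# System RAM vs. one HBM stack, peak theoretical bandwidth.
ddr3 = 2 * 64 * 1600e6 / 8 / 1e9   # dual-channel, 64-bit, DDR3-1600: ~25.6 GB/s
hbm = 8 * 128 * 1e9 / 8 / 1e9      # one stack, 8x128-bit at 1 Gb/s/pin: 128 GB/s
print(f"DDR3: {ddr3:.1f} GB/s, HBM stack: {hbm:.0f} GB/s ({hbm/ddr3:.0f}x)")
```

That is roughly a 5x jump in memory bandwidth for the integrated graphics before you even add a second stack.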
 
So the people with stuttering are lying?

I hadn't seen any stuttering reports, but if they are caused by this issue then they are a concern.

That being said, this very site and many others tested them at 4K resolutions, which should have gone over 3.5GB of VRAM use at launch...
 
Are there rumors that the 390X is stuttering? If you want to complain about Nvidia scamming their customers with cheap hack-job GPUs, then I'm sure there are plenty of people crying on the Nvidia forum already.

I don't think we need any more of it here. But that's a good point: the 380X and 390X will probably have at least 4GB of fully functioning VRAM, which already puts them a mile ahead of their competition.
 
The same way Nvidia did not lead with big Maxwell: it gives them a test run to optimize the new architecture before they commit, and keeps an ace in the hole.
 
The 3.5GB RAM thing, from what I read, is not a problem: 1-3% performance loss when using 3.5GB+. Anyway, the new AMD cards look exciting and will hopefully be good enough for a single card to run 1440p as well as a single 970 runs 1080p; that would be 35-40% more performance than the 970, enough to make it a worthwhile upgrade. FreeSync also needs to be good and not overpriced like G-Sync, and then they will have a winner.
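
For what it's worth, a quick pixel-count check on that 1440p target (the 35-40% figure implicitly assumes performance doesn't need to scale linearly with pixel count):

```python
# Raw pixel counts: 1440p pushes ~78% more pixels per frame than 1080p.
px_1080p = 1920 * 1080
px_1440p = 2560 * 1440
print(f"{px_1440p / px_1080p:.2f}x")   # ~1.78x; real games scale sublinearly
                                       # with resolution, hence 35-40% may do it
```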
 
http://www.jedec.org/standards-documents/docs/jesd235
Not sure if this narrows it down a whole lot, but it is the JEDEC specification (above).
(Snipped for sanity)
The bandwidth alone is mouthwatering. And when they add this to an APU (the second pic in the first photo), that whole bandwidth-starved naysaying will be a thing of the past.

I would worry about thermal dissipation. Unless there are cooling channels passing between the chips, the thermal dissipation may be too much. Hopefully they have solved that. It could also become an issue with overclocking. But yes, that bandwidth is certainly mouthwatering.

This is all pure speculation until the rubber meets the road.
 
I would worry about thermal dissipation. Unless there are cooling channels passing between the chips, the thermal dissipation may be too much. Hopefully they have solved that. It could also become an issue with overclocking. But yes, that bandwidth is certainly mouthwatering.

This is all pure speculation until the rubber meets the road.

I believe the TSVs will spread the heat evenly throughout the stack.
Individual GDDR5 modules don't need to be cooled, so why would something that uses ~40% less power have cooling issues?
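
Rough numbers on why both concerns can be true at once. The 30 W memory-subsystem figure and the chip counts below are pure assumptions for illustration; only the ~40% saving comes from the discussion above:

```python
# Total memory power drops with HBM, but it concentrates in fewer footprints.
gddr5_watts, gddr5_chips = 30.0, 16    # assumed: 512-bit card, 16 packages
hbm_watts = gddr5_watts * 0.6          # the ~40% power saving cited above
hbm_stacks = 4
print(f"GDDR5: {gddr5_watts / gddr5_chips:.1f} W per package")  # ~1.9 W
print(f"HBM:   {hbm_watts / hbm_stacks:.1f} W per stack")       # ~4.5 W
```

So less heat overall, but each stack dissipates more than an individual GDDR5 package would, which is presumably where the TSV heat-spreading comes in.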

Edit- They have been working on this technology for a long time, at least 6 years.
Fottemberg found some other interesting hints.
 
Where did you get those impressions from?

I'd assume the latency should be better, as HBM can be situated on the same module, effectively acting as eDRAM.

The configuration currently being mentioned most often is 4 stacks of 4 dies each.

Speculatively, the limitations would be cost and availability. Also, the currently available product information and leaked roadmaps suggest capacity would be limited to 4GB at 4x4.

I read something a while back that mentioned serial links between the GPU and the memory controller on the RAM stack, and I thought there was some implication of *potentially* higher latencies, but that was a while ago. It seems that this silicon interposer tech would actually improve both bandwidth and latencies.

Very cool stuff:
http://i.imgur.com/48WZ7je.jpg
 
I'll post this here too:
https://www.youtube.com/watch?v=ZQE6p5r1tYE

Some games are worse than others. I didn't get the rainbow artifacts, but I did have the annoying stuttering in COD:AW.

Holy shit!
That looks like someone playing on an old Atari 2600 who yanked the cartridge out mid-game.

NVidia is going to have a dumbed-down software fix, I'm sure: a setting that will automatically stop the card from using over 3.5GB of memory.

Looks like AMD is going to be picking up some NVidia users in the next few weeks.
 