GTX 980 May Lose The Performance Crown To AMD’s R9 380X in February 2015

Unknown-One

[H]F Junkie
Joined
Mar 5, 2005
Messages
8,905
Furthermore, people aren't going to upgrade for power efficiency. "Great, I just saved $4 on my electric bill by switching to Nvidia" ... said no one ever!
People ARE going to upgrade for efficiency, but saving money on their electric bill has NOTHING to do with it.

There's a whole sub-community of enthusiasts that actually care about noise. An inefficient card that runs hot is harder to keep quiet than an efficient card that runs cool.
More efficient cards also mean you can do crazy multi-GPU setups without needing a crazy-expensive power supply.
More efficient cards also mean the massive cooler someone had to buy in an attempt to keep a much hotter card under-control is now MASSIVE overkill for the new card. Plenty of headroom for overclocking / silent running.

As a personal annoyance, I'm not a huge fan of massive plumes of hot air wafting out from under my desk while gaming. More efficient cards help prevent that as well, and keep my entire office cooler.
 

Araxie

Supreme [H]ardness
Joined
Feb 11, 2013
Messages
6,451
People ARE going to upgrade for efficiency, but saving money on their electric bill has NOTHING to do with it.

There's a whole sub-community of enthusiasts that actually care about noise. An inefficient card that runs hot is harder to keep quiet than an efficient card that runs cool.
More efficient cards also mean you can do crazy multi-GPU setups without needing a crazy-expensive power supply.
More efficient cards also mean the massive cooler someone had to buy in an attempt to keep a much hotter card under-control is now MASSIVE overkill for the new card. Plenty of headroom for overclocking / silent running.

As a personal annoyance, I'm not a huge fan of massive plumes of hot air wafting out from under my desk while gaming. More efficient cards help prevent that as well, and keep my entire office cooler.

Sticky this please, mods! This should be in the main Video Card section. :D
 
Joined
Jul 20, 2013
Messages
524
I agree, power efficiency is great, but let's not forget AMD released the 285, which matches Kepler's efficiency, so Bermuda will be even more efficient than Kepler and may even match Maxwell. When the 390X hits with a 200-watt TDP, ain't no one going to care about 25 watts... and the 390X will be faster.
 

Terpfen

Supreme [H]ardness
Joined
Oct 29, 2004
Messages
6,079
I agree, power efficiency is great, but let's not forget AMD released the 285, which matches Kepler's efficiency, so Bermuda will be even more efficient than Kepler and may even match Maxwell. When the 390X hits with a 200-watt TDP, ain't no one going to care about 25 watts... and the 390X will be faster.

You just speculated yourself into a knot.
 

BroHamBone

[H]ard|Gawd
Joined
Apr 6, 2013
Messages
2,030

Lmao, been doing the same thing w/ this thread. :D
 

Hulk

Supreme [H]ardness
Joined
Nov 4, 2005
Messages
5,973
I hope that it loses its crown ASAP so I can buy it super cheap! :)
 

Ruoh

Supreme [H]ardness
Joined
Sep 16, 2009
Messages
5,858
Performance-per-watt is more important, these days. Or should be.
 

Liger88

2[H]4U
Joined
Feb 14, 2012
Messages
2,657
Bro, you'll be seeing those 980s on eBay for like <$100 once the 390X arrives

#nvidiabacktohondalawnmowers



Don't bring the WCCFTech/Videocardz fanboyism here of all places, man. It isn't necessary, and it just encourages the Red vs. Green zealots to keep acting like fools. You don't want to be one of them, do you?
 

DejaWiz

Fully [H]
Joined
Apr 15, 2005
Messages
20,608
I'm going to love coming back to this thread when the 390X comes out and AMD takes the GPU lead again...for about a month or three until nVidia releases their Ti variants and die shrink. :p
 

Unknown-One

[H]F Junkie
Joined
Mar 5, 2005
Messages
8,905
I'm going to love coming back to this thread when the 390X comes out and AMD takes the GPU lead again...for about a month or three until nVidia releases their Ti variants and die shrink. :p
Given all the headroom on the 980, I doubt it would take Nvidia a month to release a competing card.

Uncap the power target a bit, up the clocks, double the density of the RAM to make it an 8GB card... boom! GTX 980 Ti.

It's going to be interesting seeing what happens with 980 overclocking when we can finally get a custom BIOS on it that doesn't have a TDP limit.
 

Lord_Exodia

Supreme [H]ardness
Joined
Sep 29, 2005
Messages
7,008
Given all the headroom on the 980, I doubt it would take Nvidia a month to release a competing card.

Uncap the power target a bit, up the clocks, double the density of the RAM to make it an 8GB card... boom! GTX 980 Ti.

It's going to be interesting seeing what happens with 980 overclocking when we can finally get a custom BIOS on it that doesn't have a TDP limit.

QFT. An unbound GTX 980 is a good example of what a GTX 980 Ti or Titan 2 might be. Even castrated by the 165-watt TDP, these cards are overclocking to an extra 25% performance, which works out to 30-35% faster than a 780 Ti and 35-40% faster than a stock R9 290X. Wait until the ankle locks come off; it might be even more powerful (much less energy efficient at the same time, but meh, who cares).

Still, though, I think Pirate Islands will take the crown back, and if it takes 6 months to come out, then it should beat it; it would be horrible if it didn't. Kind of like a Bulldozer-esque letdown.
 

tajoh111

Limp Gawd
Joined
Jan 26, 2012
Messages
211
This is a general tip for predicting the performance of products on the same architecture and the same manufacturing node.

Take a product's performance per watt and scale it up to wherever you think a manufacturer is willing to place a product, and you've got your future product's performance.

E.g., take the GTX 680. How much power does a GTX 680 consume? About 160 to 180 watts. How much does a GTX 780 Ti consume? About 250-270 watts. How big is the performance difference between the two cards? The GTX 780 Ti is almost exactly 50% faster at 2560x1600, where it is least likely to encounter a CPU bottleneck.

Let's take another example: the R9 260X and Hawaii. The R9 260X consumes about 90 watts; the R9 290X generally around 260 to 300 watts, a pretty big range looking at reviews. How much faster is the 290X? At 2560x1600 it is about 3x faster.

The same holds remarkably well for the GTX 750 Ti and the GTX 980. The GTX 980 consumes about 3x the power of a GTX 750 Ti, and it is almost exactly 3x faster. Look at TechPowerUp for this information.

Where this didn't pan out so well is with the 7870 and the 7970. The 7870 has significantly better performance per watt than the 7970, but it is generally agreed that the 7970 was overvolted (something AMD hasn't done since), which skewed its performance per watt. It's why the 7970 overclocks so well without any extra voltage.

This brings us to Tonga and the R9 285. Tonga is the first GPU to use AMD's newest version of GCN. The R9 285 generally uses similar power to the GTX 980, maybe a tad less on average.

What happens if you take the R9 285's power consumption and performance and scale them up to something along the lines of the 290X and GTX 780 Ti, i.e. increase the power envelope by 1.5x?

http://www.techpowerup.com/reviews/Sapphire/R9_285_Dual-X_OC/25.html

You actually get performance very similar to the 290X.

So what happens if you're AMD, and an upscaled version of Tonga only gets you back to square one, with performance along the lines of the 290X?

You throw heat and power consumption out the window, because cooling a power-hungry chip has a quick fix in terms of the product development cycle. AMD doesn't have the time to re-engineer a new chip to compete with Maxwell, but what they can do is put on a cooling solution that can handle 600 watts of heat.

So what they can do is increase the size of the chip as well as the chip frequency, since they can't double the chip size.

So, scaling up by double, what do we get from this TechPowerUp chart?

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980/26.html

We get a card that in total consumes about 340-370 watts (which is why we need the hybrid water solution), but is about 14% faster than a GTX 980 at 2560x1600. That's a decent performance difference and people would be willing to pay for it. But that's a lot of power to pay for that performance.

The problem is what happens when you do the same to the GTX 980, but scale it only 50% since Nvidia wants to stay on air cooling?

You basically have a card that is 50% faster than a GTX 980 (which is what current rumors suggest), but only uses about 250-270 watts. I say "only" because you can still cool this with a reference air solution.

This is where performance per watt matters at the extreme end. It's difficult to see AMD catching up without doing something drastic. They need to improve or change their architecture, because stuff like HBM and 14/16nm FinFET is stuff Nvidia can use too.
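The scaling heuristic described in the post above can be restated in a few lines of Python. This is only a sketch of the back-of-envelope rule, and the wattage figures are the rough numbers quoted in the thread, not measurements:

```python
# Back-of-envelope heuristic: within one architecture on one manufacturing
# node, performance scales roughly linearly with board power.

def projected_performance(base_perf, base_power_w, target_power_w):
    """Scale a known card's relative performance to a larger power envelope."""
    return base_perf * (target_power_w / base_power_w)

# GTX 680 (~170 W) scaled to GTX 780 Ti power (~260 W) predicts ~1.53x,
# close to the observed ~50% gap at 2560x1600.
print(round(projected_performance(1.0, 170, 260), 2))  # 1.53

# An R9 285-class chip scaled to a 1.5x power envelope lands at roughly
# R9 290X-class performance, which is the post's argument.
print(round(projected_performance(1.0, 190, 285), 2))  # 1.5
```

The rule obviously breaks down when a part is overvolted (the 7970 case mentioned above), since that skews the baseline performance per watt.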
 

The Mac

Supreme [H]ardness
Joined
Feb 14, 2011
Messages
4,492
Nvidia is locked out of HBM for at least a year; it's AMD/Hynix-developed IP.
 

tajoh111

Limp Gawd
Joined
Jan 26, 2012
Messages
211
Nvidia is locked out of HBM for at least a year; it's AMD/Hynix-developed IP.

The problem and bottleneck that HBM alleviates shouldn't really be felt at 28nm. Right now, shader power this generation isn't quite there to demand it. So waiting a year is fine for Nvidia.

Considering the new color compression technology, we have enough effective bandwidth for anything on 28nm.

E.g., look at the effective bandwidth of the GTX 980. Around 220 GB/s raw, but with the color compression technology it's effectively around 280 or 290 GB/s. And it shows: we get a card that outperforms all other cards even at 4K. Nvidia can still push out a card with a 384-bit bus, which would increase bandwidth by 50%. If bandwidth were really a bottleneck, the GTX 980 would perform slower than the GTX 780 Ti, but it doesn't.
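As a sanity check on those numbers: the GTX 980's raw bandwidth follows from its 256-bit bus and 7 Gbps GDDR5, and treating delta color compression as a flat ~25-30% effective multiplier (an assumed average, not a measured figure) reproduces the 280-290 GB/s range quoted above:

```python
# Raw memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps) = GB/s.
def raw_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

raw = raw_bandwidth_gbs(256, 7.0)   # GTX 980: 224 GB/s raw
# Delta color compression modeled as a flat ~25-30% gain (an assumption;
# the real benefit varies with workload).
print(raw, raw * 1.25, raw * 1.30)  # 224.0, 280.0, ~291.2
```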

The problem is when we get a die shrink next gen and shader power doubles but bandwidth stays the same.

For performance, HBM is absolute overkill at 28nm, but it won't be for the next generation. So Nvidia should be okay until Pascal, when HBM will be incorporated.

At 28nm, HBM is pointless aside from the power-consumption savings, and the added cost of building the card more than offsets those. HBM needs to improve performance as well as power consumption before it can justify its presence in cards.
 

The Mac

Supreme [H]ardness
Joined
Feb 14, 2011
Messages
4,492
Since AMD's next card will be 20nm, this was my point.

Nvidia will have to use GDDR5 on their 20nm card, as they won't have access to the HBM IP yet.
 

tajoh111

Limp Gawd
Joined
Jan 26, 2012
Messages
211
Since AMD's next card will be 20nm, this was my point.

Nvidia will have to use GDDR5 on their 20nm card, as they won't have access to the HBM IP yet.

I am certain AMD's next card is going to be 28nm.

Not only is current 20nm not suitable for GPUs (a card released in February, plus Apple consuming all the 20nm wafers), it goes against the only confirmed thing at this point: the hybrid water-cooling solution.

Would AMD need a hybrid water-cooling solution if they made their next card on 20nm?

Heck no. It would be pointless unless AMD was releasing a 450+mm2 card at 20nm as their first card, and that would be the worst idea ever. Releasing a monolithic chip on a process that almost doubles the manufacturing cost per mm2 compared to the last gen would be ridiculously stupid. The yields would be terrible, and AMD doesn't have the professional market to pay for such a risky endeavor. Nvidia skipped 20nm for a reason.

Considering the hybrid cooling is supposed to be released to AMD sometime in the first half of next year, it makes no sense to release anything but small 20nm chips, which are far easier to manufacture and a better way to learn the growing pains of 20nm manufacturing.

Anything released on 20nm at 220mm2 or larger is likely to compete in the same performance space as this large ~500mm2 28nm, hybrid-cooled design that is AMD's next card.

That results in a redundancy in performance which prevents AMD from recouping the R&D costs of developing such cards.

E.g., say AMD releases the hybrid-cooled 28nm chip at 599-650 dollars in March.

What would be the point of releasing a 20nm chip 3-6 months later with the same performance or even greater?

It would kill the price of AMD's big 28nm chip and prevent it from recouping its investment cost.

The only source for this 20nm chip in February is WCCFTech, and they don't use logic to analyze any rumor.

I am pretty sure AMD's lineup for 2015 is going to consist of the following:

Tonga derivatives (AMD still has several cards to make from it, and needs time to sell them to recover the R&D costs), this Fiji chip and its derivatives, and lastly, if AMD does make 20nm chips, small lower-end market chips. None of which need HBM.

I have a feeling the first AMD products to use HBM will be their APUs. APUs are really bandwidth-starved.
 
Last edited:

The Mac

Supreme [H]ardness
Joined
Feb 14, 2011
Messages
4,492
I should rephrase:

AMD's next FLAGSHIP card will be 20nm with HBM: the 390 and 390X specifically, by mid-2015.

So the rumors go.

There is also a rumor there could be a mid-range 28nm part with HBM in february.

http://www.tweaktown.com/news/40339...early-next-year-20nm-with-hbm-tech/index.html

http://wccftech.com/amd-20nm-r9-390x-feautres-20nm-hbm-9x-faster-than-gddr5/#ixzz3Eq9tZBeo

http://www.redgamingtech.com/amd-r9-380x-february-r9-390x-370x-announced/

Google around; there are lots of sites carrying this rumor.
 
Last edited:
Joined
Jul 20, 2013
Messages
524
i'd be worried for intel's discrete gpu...perhaps one day, AMD and nvidia will unite to face a common foe...
 

tajoh111

Limp Gawd
Joined
Jan 26, 2012
Messages
211
I should rephrase:

AMD's next FLAGSHIP card will be 20nm with HBM: the 390 and 390X specifically, by mid-2015.

So the rumors go.

There is also a rumor there could be a mid-range 28nm part with HBM in february.

http://www.tweaktown.com/news/40339...early-next-year-20nm-with-hbm-tech/index.html

http://wccftech.com/amd-20nm-r9-390x-feautres-20nm-hbm-9x-faster-than-gddr5/#ixzz3Eq9tZBeo


http://www.redgamingtech.com/amd-r9-380x-february-r9-390x-370x-announced/

Google around; there are lots of sites carrying this rumor.

I'll bump this come February.

Multiple websites posting the same story from the same source, with the same content, doesn't make it any more true.

AMD just recently released its midrange. That's Tonga, the R9 285. The chip is a little bigger than Tahiti and is definitely not below midrange.

They could release a full Tonga chip by February, but Tonga doesn't need HBM. It's simply not powerful enough. Heck, even a GTX 980 doesn't need HBM, and it schools anything built from Tonga chips.

If AMD manages to release anything on 20nm this year, it won't be the high end.
 

Liger88

2[H]4U
Joined
Feb 14, 2012
Messages
2,657
i'd be worried for intel's discrete gpu...perhaps one day, AMD and nvidia will unite to face a common foe...


lol, the day Intel can do GPUs and do them well is the day I eat my own shoe. That day is a long way off, and if history is any indication, it'll never happen. Even IBM learned in its heyday that no matter how big you are, there are some things others do better.
 

limitedaccess

Supreme [H]ardness
Joined
May 10, 2010
Messages
7,587
Intel actually does compete against AMD and Nvidia GPUs in various market segments. While most are probably familiar with Intel's IGPs competing against low-end discrete GPUs (and shifting upwards slightly with Intel's Iris Pro line), they also compete at the ultra high end (higher than consumer). Intel's Xeon Phi line competes against AMD's FireStream and Nvidia's Tesla lines in the HPC market. Intel's upcoming Knights Landing coprocessor has already been formally announced as using a stacked-memory configuration, MCDRAM, which is a variant of HMC.

With this in mind, you can also see why the delta color memory compression introduced by Nvidia and AMD is not a complete substitute for actual higher bandwidth, depending on the workload. Granted, for a gaming workload this might not be as important to the end users on here.

So, onto HBM. HBM is actually a JEDEC standard, not a technology specific to Hynix or AMD. It also isn't the only next-generation memory technology that is emerging, or even the only stacked-memory technology.

Hynix also lists their HBM in two speed configurations: 128 GB/s or 102 GB/s for a 4-Hi stack. So a 4x4 configuration can deliver 512 GB/s or 408 GB/s. By comparison, the 290X has 320 GB/s, and in theory a 512-bit bus running at 7 GHz would net 448 GB/s (more further down on why we don't, or won't, necessarily see something like this). The original hype around stacked memory stemmed from the 1 TB/s of bandwidth on 4 stacks posted on a slide by Nvidia; however, it seems that will not be achievable until 2nd-gen HBM (or stacked-memory implementations in general).

Also keep in mind this is the maximum speed of the memory and may not be the maximum speed of the implementation. You'll notice many current video cards run their 6 Gbps-rated GDDR5 below that speed, for example. I'm not familiar with the complexities involved on the memory-controller side, so I can't really say whether implementations will be able to handle the full bandwidth (or what other issues may arise).

The other interesting thing in what Hynix has presented so far relates to actual capacity. According to what Hynix currently lists, you'd only get 2GB from a 4x4 configuration (the datasheet PDF lists only 1Gb, i.e. 128MB, dies as available), while Gen1 seems to top out at 4GB for a 4x4 configuration according to the leaked PDFs (2Gb dies).
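Worked through, the bandwidth and capacity figures above are straightforward arithmetic. The per-stack speeds and die densities are the numbers quoted from Hynix in this post, not independently verified:

```python
# Aggregate bandwidth: per-stack GB/s times number of stacks.
def aggregate_bandwidth_gbs(per_stack_gbs, stacks):
    return per_stack_gbs * stacks

# Capacity: die density (Gbit) * dies per stack * stacks, converted to GB.
def capacity_gb(die_gbit, dies_per_stack, stacks):
    return die_gbit * dies_per_stack * stacks / 8

print(aggregate_bandwidth_gbs(128, 4))  # 512 GB/s for four fast 4-Hi stacks
print(aggregate_bandwidth_gbs(102, 4))  # 408 GB/s for the slower bin
print(capacity_gb(1, 4, 4))             # 2.0 GB with 1 Gb dies (listed now)
print(capacity_gb(2, 4, 4))             # 4.0 GB with 2 Gb dies (Gen1 max)
```

For comparison, a hypothetical 512-bit GDDR5 bus at 7 Gbps works out to 512 / 8 * 7 = 448 GB/s, which is the figure cited above.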

As for the market impact (and I've posted this quite a few times regarding the 970/980): you always take a risk when buying the "lead" product, so to speak. This is why the staggered release schedule from both vendors is bad for consumer decision making. Especially with the 980 priced rather aggressively high due to its current performance premium, there is a very real risk of a significant value drop. The 970 is somewhat buffered by the reverse situation of being priced aggressively low. You saw this play out problematically for consumers with the 7970 and 680, for example.
 

BurntToast

2[H]4U
Joined
Jun 14, 2003
Messages
3,674
Why the hell are we so concerned about the holiday season? We aren't talking about a console, we are talking about graphics cards. Wouldn't you say the GPU "holiday season" is more in tune with tax season?
 

harmattan

Supreme [H]ardness
Joined
Feb 11, 2008
Messages
4,603
...

AMDs next FLAGSHIP card will be 20nm with HBM. 390 and 390x specificially by mid-2015

...

At which point nV will release a 980 Ti beating the 390/390X in power efficiency, and likely in performance.

The Maxwell architecture has a hell of a lot of upward scaling left in it; Tonga does not.
 

harmattan

Supreme [H]ardness
Joined
Feb 11, 2008
Messages
4,603
Why the hell are we so concerned about the holiday season? We aren't talking about a console, we are talking about graphics cards. Wouldn't you say the GPU "holiday season" is more in tune with tax season?

Yes, it's specifically more in line with the fiscal (quarterly reporting) calendar whereas console releases are all about holiday sales.
 

The Mac

Supreme [H]ardness
Joined
Feb 14, 2011
Messages
4,492
At which point nV will release 980 ti beating 390/390x in power efficiency, and likely performance.

Maxwell architecture has a hell of a lot of upwards scaling left in it, Tonga does not.

It won't be Tonga (Volcanic Islands), it will be Bermuda (Pirate Islands).
 
Last edited:

fanboy

[H]ard|Gawd
Joined
Jul 4, 2009
Messages
1,057
AMD is the king of disinformation-to-no-information, and for all we know the cards may be here next week and Nvidia has no idea what's coming, so the 980 Ti could just be a waste of PCB...

But all you have to do is look back at the jump in performance from 4870 > 5870 > 6970 > 7970 > 290X > 390X: the 980 Ti would need to be 50% faster than the 290X just to have a chance against the 390X, if AMD thinks the 380X can handle the GTX 980.
 

timmay

n00b
Joined
Jul 30, 2014
Messages
34
4 months is an eternity in this industry...
Agreed.

nVidia has released a great product in the 980, and all AMD can do is promise a competitive card in four months or more.

Not good. nVidia has been allowed to keep card prices high for too long.

Remember when the 5850 walked all over Team Green, and nVidia released their biggest bang-for-the-buck card ever as a response, the GTX 460 (150 euros for the 1GB version, 120 for the 768MB)? The 560 Ti already dared to be more expensive, and by the time of the 660 Ti the price was back up to the 250-euro point.
 

DejaWiz

Fully [H]
Joined
Apr 15, 2005
Messages
20,608
AMD is the king of disinformation-to-no-information, and for all we know the cards may be here next week and Nvidia has no idea what's coming, so the 980 Ti could just be a waste of PCB...

But all you have to do is look back at the jump in performance from 4870 > 5870 > 6970 > 7970 > 290X > 390X: the 980 Ti would need to be 50% faster than the 290X just to have a chance against the 390X, if AMD thinks the 380X can handle the GTX 980.

If (a big if) the 380X can take on the GTX 980, and the 390X is 50% faster than the 290X (it would have to be, if what you stated about the 980 Ti needing to be 50% faster than the 290X turns out to be true), then NVidia will likely counter by bringing some cool stuff to market once the die-shrink refresh happens: wider memory bus, more IPC, more CUDA cores, more ROP units, higher core clocks, higher vram clocks, the lowest-power-draw dual-GPU card ever, etc.

Then AMD will release their R3xx refresh to counter the Maxwell refresh and, hopefully, the price war that NVidia just started with the vastly lower MSRP of the freshly-released 9xx series will continue.

It's a great time to be a GPU buyer. :)
 

The Mac

Supreme [H]ardness
Joined
Feb 14, 2011
Messages
4,492
Agreed.

nVidia has released a great product in the 980 and all AMD can do is swear to a competitive card in four months or more.

Not good. nVidia has been allowed to keep card prices high for too long.

Remember when the 5850 walked all over Team Green and nVidia released their biggest bang-for-the-buck card ever as a response, the GTX 460 (150 Euro for the 1 Gig version, 120 for 768 MB) ? The 560 Ti was already daring to be more expensive and by the time of the 660 Ti the price was back to the 250 Euro price point.

Bullshit, it's always 4-6 months between competitive releases. Sometimes more.
 