Is the 390 the better card for longevity over the 970?

The 960 comparison was clearly a joke, but if someone intends to heavily overclock their card then a 970 may be a better choice. But I think the % of the discrete-GPU-buying public that actually overclocks their cards is less than 10%. Does anyone have stats on that?

Not sure how accurate any info could be on that. Even 3DMark sometimes doesn't show my OC, especially on CPUs when you use FSB and don't change the Multi from stock.
 
Async looks to be great. But the bulk of you guys' argument, or debate rather, is over the lack of its existence so far, at least in any discernible way. I loved AMD's presentation of what it does and how it works: very dumbed down, so to speak, in layman's terms so anyone could understand. It has some serious potential. Consoles are making good use of it, and that will only help propel them to great performance standards. Hopefully we can soon see some real use of it on PC.

Correct me if I am wrong here, but won't the majority of Gaming Evolved titles like Deus Ex, Tomb Raider, and Battlefield 5 be async-heavy?
 
I don't have any special knowledge; I am a total layperson, and these are just impressions gleaned from piecing together information I've read on the web. I could be mistaken on some of the implications, but I am going off of the commentary of people who know more than I do, like David Kanter. That said, layperson =/= completely incapable of coming to some sort of reasonable conclusion based on reading. I don't have to have gone into all the details and minutiae of healthcare policy to come away with the impression that an employer-based system is intrinsically inferior, in its results, to scores of other UHC systems around the world.

And the two results currently reported work either better or with lower latency on AMD cards: concurrent async compute is much better supported on GCN, and cheaper context switching seems to give GCN cards a latency edge over their Maxwell counterparts. In games where these are not large factors, or where Nvidia can lean on code paths that are not as reliant on these capabilities, they can and likely will perform as well or better in general. But I don't expect that to hold over time with newer titles. I could be wrong, but I expect to see more utilization of concurrent compute/graphics workloads in more modern games/engines, because I suspect both AMD AND Nvidia cards released in 2016 will be more effective at that than Maxwell. These are merely impressions based on what I have read and my own expectations about where the market is likely to head, but they could be wrong.


In the end though, this can only be settled with actual game data and results.


In future dx12 titles and vr titles, which cards perform better? Which cards produce lower latency in vr? How much async are the upcoming dx12 games actually using? Are there tangible benefits of using more concurrent graphics/compute vs the more serialized nvidia approach assuming code was optimized for either architecture? This is why I want to see examples of the same engine and the same game with heavier use of async/compute for amd and less of that for nvidia, to see what can be achieved with each.

This will probably never happen due to time and resource reasons, but people make assumptions about a card's capabilities without a full analysis of how the software was made to take advantage of the hardware. I still want to see how a DX12 Witcher would perform on AMD/Nvidia hardware. Ditto for HairWorks and TressFX 3.0. But what we typically get, especially with a lot of the GameWorks titles, are rendering effects that have been designed with an Nvidia architecture in mind first and foremost. Initial benchmarks come out, and lo and behold, the Nvidia cards trounce the AMD cards. Must mean AMD makes sh*t cards or has sh*t drivers. I've read comments on such articles, like the Batman benchmarks with GameWorks, from people saying those results made their decision to break toward a 980 over, say, a Fury, or a 970 over a 390. This stuff matters, and the software stack can be used to stack the deck in one vendor's favor by cherry-picking techniques that are written to run better on that vendor's particular hardware strengths rather than its competitor's.


There is no such thing as "concurrent async compute"; don't make things up.

And now you are throwing GameWorks into this. What, you can't stick to what you started with?

I think you should just say what you really think and be done with it, because at this point it's obvious you are grasping at straws.

Games will show that async compute is fine on Maxwell, and you can already see it working well in the AOTS benchmarks. Fable as well, to a lesser degree. In AOTS there are separate paths for each vendor's hardware, and I think I know your response: that it's not doing async. But it's getting the same performance increases as Fiji, and the cards are right about where they should be relative to each other. So if it's not doing async and it's still getting the same gains as the Fiji-based chips, then what is it doing? You can't say async is the future for these chips and then say that in AOTS, without using async, nV's hardware gets the same performance increases as AMD, can you?
 
Not sure I agree that the 290X is the obvious choice out of the three. AMD fanboy here; I picked up a 970 to replace my 7970 and I've been quite happy, enjoying the improved temperatures, power consumption, and noise... I was willing to pay an extra $30 over the 290X price to get those categories improved.
 
Also, just out of humor: I realize that I mentally phased out the entire argument with Razor without even intending to. Does this mean my brain has been trained to ignore dumb forum arguments? I'm evolving :D
 
If you're willing to buy used wait for someone to unload a 980 on the FS/FT here. Bet you can get one if you stretch the budget just a bit. Warranty and all.

If this thread has devolved into a nonsensical screaming match, as it appears to have, forgive my attempt to make a relevant suggestion. Have a nice day.
 
Correct me if I am wrong here, but won't the majority of Gaming Evolved titles like Deus Ex, Tomb Raider, and Battlefield 5 be async-heavy?

Can't say for sure, but that was the talk. But look at AOTS: it was the talk, and come to find out it wasn't using a lot; a bit, but not at the level we thought. It comes down to how much they want parity in the release. As most reasonable posters here have stated, GE titles tend to be more impartial, so if we keep with the original assumption that AMD is greatly favored by async, then we can likely assume they will curb the amount for more parity.

Actually, I wonder exactly what benefit async would give in some of these titles. What I gathered from the massive number of threads and articles is a greater number of objects and such on screen per frame.
 
There is no such thing as "concurrent async compute"; don't make things up.

And now you are throwing GameWorks into this. What, you can't stick to what you started with?

I think you should just say what you really think and be done with it, because at this point it's obvious you are grasping at straws.

Games will show that async compute is fine on Maxwell, and you can already see it working well in the AOTS benchmarks. Fable as well, to a lesser degree. In AOTS there are separate paths for each vendor's hardware, and I think I know your response: that it's not doing async. But it's getting the same performance increases as Fiji, and the cards are right about where they should be relative to each other. So if it's not doing async and it's still getting the same gains as the Fiji-based chips, then what is it doing? You can't say async is the future for these chips and then say that in AOTS, without using async, nV's hardware gets the same performance increases as AMD, can you?

I'm talking about concurrent async compute + graphics workloads, and GCN's superior ability to handle both types of workloads in tandem.

As for async working fine on Maxwell, the Oxide guys mentioned that their implementation of async compute "pales in comparison" to what some of the console guys are doing. I expect to see bigger boosts going forward once more game devs start to transfer that over to the PC. Fable Legends reportedly uses even less async compute than AOTS, so little that I think they might be able to schedule the compute workloads serially in with the graphics work and not take much of a hit.

But yes, in both of the games where the devs have spelled out openly that their engines/games are not making heavy use of async compute mixed in with the graphics workloads, Nvidia performance is right up there with AMD (after a good deal of delay; I wonder whether a radically increased effort is needed to manually prop their cards up to just BARELY get parity at most GPU tiers in titles with minimal mixed workloads, and whether we might see a reversal of day-one performance for Maxwell cards if that trend holds). Do you expect that performance parity to hold for Maxwell going forward? I don't.

How's that 780 Ti holding up?

I remember Nvidia launching it to blunt the dominance of the 290X, and it worked, except that today it lags behind. No legs.

With the way Maxwell handles concurrent async/graphics workloads, and a shiny new Pascal released in 2016, I don't expect Maxwell to age well at all. And this is why it is leading people off a cliff to suggest a 970 is just as wise a purchase as a 390. For someone looking to hold the graphics card for a year or less, it's probably fine.

Most people are still breaking toward the 970s, as the Amazon sales charts show, but they made a poor decision imo, just like all those people that traded in 780s and 290X models for 780 Tis. But that's one of the perks of being the Apple of the GPU world: Nvidia does not have to actually be better, they just need to get people to think they're better.

And most people do, and all it costs is their quality of experience over time, and being forced to upgrade sooner to maintain decent gameplay in newer titles.
 
I'm talking about concurrent async compute + graphics workloads, and GCN's superior ability to handle both types of workloads in tandem.

As for async working fine on Maxwell, the Oxide guys mentioned that their implementation of async compute "pales in comparison" to what some of the console guys are doing. I expect to see bigger boosts going forward once more game devs start to transfer that over to the PC. Fable Legends reportedly uses even less async compute than AOTS, so little that I think they might be able to schedule the compute workloads serially in with the graphics work and not take much of a hit.

But yes, in both of the games where the devs have spelled out openly that their engines/games are not making heavy use of async compute mixed in with the graphics workloads, Nvidia performance is right up there with AMD (after a good deal of delay; I wonder whether a radically increased effort is needed to manually prop their cards up to just BARELY get parity at most GPU tiers in titles with minimal mixed workloads, and whether we might see a reversal of day-one performance for Maxwell cards if that trend holds). Do you expect that performance parity to hold for Maxwell going forward? I don't.

How's that 780 Ti holding up?

I remember Nvidia launching it to blunt the dominance of the 290X, and it worked, except that today it lags behind. No legs.

With the way Maxwell handles concurrent async/graphics workloads, and a shiny new Pascal released in 2016, I don't expect Maxwell to age well at all. And this is why it is leading people off a cliff to suggest a 970 is just as wise a purchase as a 390. For someone looking to hold the graphics card for a year or less, it's probably fine.

Most people are still breaking toward the 970s, as the Amazon sales charts show, but they made a poor decision imo, just like all those people that traded in 780s and 290X models for 780 Tis. But that's one of the perks of being the Apple of the GPU world: Nvidia does not have to actually be better, they just need to get people to think they're better.

And most people do, and all it costs is their quality of experience over time, and being forced to upgrade sooner to maintain decent gameplay in newer titles.

390 will age well.
970 is dead in water.
 
I'm talking about concurrent async compute + graphics workloads, and GCN's superior ability to handle both types of workloads in tandem.
There is no such thing as "concurrent async". There is concurrent execution of compute and graphics kernels, and there is async compute; those are two different things. Concurrent execution of compute and graphics kernels can be done on both Maxwell and GCN hardware. Async compute can be done on Kepler, Fermi, Maxwell, and GCN hardware.

As for async working fine on Maxwell, the Oxide guys mentioned that their implementation of async compute "pales in comparison" to what some of the console guys are doing. I expect to see bigger boosts going forward once more game devs start to transfer that over to the PC. Fable Legends reportedly uses even less async compute than AOTS, so little that I think they might be able to schedule the compute workloads serially in with the graphics work and not take much of a hit.
Oh, so the original argument that Maxwell doesn't work / sucks at async is no more? Now it's that you can do some async, but later games with more async will hurt it? Come on, have a backbone and stick with your original argument and try to prove it.
But yes, in both of the games where the devs have spelled out openly that their engines/games are not making heavy use of async compute mixed in with the graphics workloads, Nvidia performance is right up there with AMD (after a good deal of delay; I wonder whether a radically increased effort is needed to manually prop their cards up to just BARELY get parity at most GPU tiers in titles with minimal mixed workloads, and whether we might see a reversal of day-one performance for Maxwell cards if that trend holds). Do you expect that performance parity to hold for Maxwell going forward? I don't.
Async compute doesn't need to be mixed into the graphics workload if the latency of the graphics workload is already low enough. And we have seen this with GPUView in a number of games. If you like, take any game, with or without async, and just let me know; I'll do the same and we can test it out.

How's that 780 Ti holding up?
The 780 Ti has much less shader power than the 980 series, so yeah, it's expected to fall behind.
I remember Nvidia launching it to blunt the dominance of the 290X, and it worked, except that today it lags behind. No legs.
The 980 and Fury generations were stopgaps because of the 20nm flop, so now we are dragging yet another card into this. If a true next-gen chip had come out, which it would have if the 20nm node had been good, both the 780 Ti and the 290 would have been put to rest. Is that supposed to prove your point? Come on, stick with what you know; let's just throw the kitchen sink in there too, why not.

With the way Maxwell handles concurrent async/graphics workloads, and a shiny new Pascal released in 2016, I don't expect Maxwell to age well at all. And this is why it is leading people off a cliff to suggest a 970 is just as wise a purchase as a 390. For someone looking to hold the graphics card for a year or less, it's probably fine.
Oh, back to this: no, there is no "concurrent async". It's concurrent execution, or async compute; get that through your head, otherwise it will get confusing.

Maxwell is very good at both; just go over to nV's website and download the utilization documents for Maxwell. They go through the latency differences.
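For anyone trying to follow the terminology argument above, here is a minimal D3D12 sketch of what "async compute" looks like at the API level (assumptions: Windows 10 SDK, linking d3d12.lib, error handling stripped). The app creates a separate compute queue next to the graphics (direct) queue and synchronizes them only where a real dependency exists; whether the GPU then actually overlaps the two queues is up to the hardware and driver, which is exactly the GCN-vs-Maxwell point being argued in this thread.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One DIRECT queue for graphics, one COMPUTE queue for "async" compute work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // Recorded command lists would be submitted to each queue independently, e.g.
    //   gfxQueue->ExecuteCommandLists(...);      // shadow maps, g-buffer, etc.
    //   computeQueue->ExecuteCommandLists(...);  // light culling, post-processing
    // Submitting on two queues only expresses the *opportunity* for overlap.

    // A fence expresses the one ordering constraint the app actually needs.
    // Making the graphics queue wait on every compute submission would serialize
    // the two queues, which is roughly the "schedule it serially" fallback
    // mentioned earlier for engines that barely use async compute.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);  // compute results are ready
    gfxQueue->Wait(fence.Get(), 1);        // graphics work that consumes them waits here
    return 0;
}
```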
 
Also, just out of humor: I realize that I mentally phased out the entire argument with Razor without even intending to. Does this mean my brain has been trained to ignore dumb forum arguments? I'm evolving :D


You were phased out even before that; it was obvious from your initial responses. Must be some good purple haze. :)
 
Why do you defend Nvidia so much? I am asking out of genuine curiosity.


It's not defending; it's correcting assumptions based on crap marketing and false information. Two different things.

General consumers aren't expected to know in-depth information about architecture, but people who post about these types of things should know what they are posting about.

Yes, I do see issues in the future where doing more async code in *certain ways* will hurt Maxwell, just as with GCN architectures, but it's highly code-dependent. GCN might have a short-lived advantage because it's being used in consoles, but with the way Fury is set up we already see that the same code that runs well on GCN doesn't deliver all of its benefits on Fury, because Fury is getting bottlenecked somewhere. Is that future coming up soon? Oh yeah, because next-generation architectures are going to be quite different from what we have now. The shader array structures of both AI and Pascal are going to be different; these are not renamed parts or similar parts with more units.

Edit: Logically speaking, if you want to buy something based on async performance: market share is what game companies base their optimizations and coding preferences on, just like in any other industry, so nV cards, with much more market share, should be the better buy. But I'm not saying that; I think AMD will recapture some market share (because their loss was due to availability and delayed releases, not an inability to compete on performance). So either of these cards is good for the future; it just comes down to price.
 
It's not defending; it's correcting assumptions based on crap marketing and false information. Two different things.

General consumers aren't expected to know in-depth information about architecture, but people who post about these types of things should know what they are posting about.

Yes, I do see issues in the future where doing more async code in *certain ways* will hurt Maxwell, just as with GCN architectures, but it's highly code-dependent. GCN might have a short-lived advantage because it's being used in consoles, but with the way Fury is set up we already see that the same code that runs well on GCN doesn't deliver all of its benefits on Fury, because Fury is getting bottlenecked somewhere. Is that future coming up soon? Oh yeah, because next-generation architectures are going to be quite different from what we have now. The shader array structures of both AI and Pascal are going to be different; these are not renamed parts or similar parts with more units.

Edit: Logically speaking, if you want to buy something based on async performance: market share is what game companies base their optimizations and coding preferences on, just like in any other industry, so nV cards, with much more market share, should be the better buy. But I'm not saying that; I think AMD will recapture some market share (because their loss was due to availability and delayed releases, not an inability to compete on performance). So either of these cards is good for the future; it just comes down to price.

Actually, I think what he is alluding to is that thus far, in this thread and many others, you have not had one negative thing to say about Nvidia, even when it is a genuinely negative aspect or point. Take the 3.5GB of VRAM. What bothers me most is the lie, and I mean lie, not oversight. The performance impact is negligible for most, but that doesn't excuse Nvidia's obvious attempt not to be outdone on the box against the 290/290X. That is one. Or maybe GameWorks, or tessellation (akin to GameWorks somewhat). Granted, none of these warrant the mob mentality some portray, but they do warrant some caution.

At any rate, you are very knowledgeable when it comes to Nvidia. Though sometimes it looks like you believe whatever Nvidia prints and comments. This is likely what he refers to. Don't take it too personally; it is just what some others see, even though I am sure it is not what you intend.
 
When you compare GPUs and want to decide which one to buy, the only capability you need is for your brain to be able to read two numbers and tell which one is greater (FPS).
Anything else is irrelevant; there could be a magic gnome under that heatsink or a potato rendering those pixels, and what technology, year, etc. they use is not relevant to the common user.
So posting these things just confuses the OP (any OP) and defeats the point of him/her asking for your opinion on GPUs.
 
Actually, I think what he is alluding to is that thus far, in this thread and many others, you have not had one negative thing to say about Nvidia, even when it is a genuinely negative aspect or point. Take the 3.5GB of VRAM. What bothers me most is the lie, and I mean lie, not oversight. The performance impact is negligible for most, but that doesn't excuse Nvidia's obvious attempt not to be outdone on the box against the 290/290X. That is one. Or maybe GameWorks, or tessellation (akin to GameWorks somewhat). Granted, none of these warrant the mob mentality some portray, but they do warrant some caution.

At any rate, you are very knowledgeable when it comes to Nvidia. Though sometimes it looks like you believe whatever Nvidia prints and comments. This is likely what he refers to. Don't take it too personally; it is just what some others see, even though I am sure it is not what you intend.


I don't have anything negative to say about the 390X outside of power consumption, which we haven't even discussed yet. So what the fuck is that? You want me to talk about power consumption comparisons too? I don't see much difference between the 390X and the GTX 970. One has more RAM, which probably won't come in handy as bottlenecks in newer games shift toward shader performance; we have seen this time and time again. We can't really discuss async because one of the parties that wants to discuss it tends to be clueless on the basic terminology, let alone have an actual understanding of it, so that person should defer to someone who knows what the hell is going on instead of using generalized marketing statements that are in no way correct, or misreading technical documents that are incomprehensible to him because he doesn't even know the terminology.

And I'm very knowledgeable on both architectures, but I still defer to others who know more than me when the time arises, which I haven't seen the people who want to throw async out there as a buying factor do. They should really know what they are talking about before they talk about it.

Hence my first post in this thread
http://hardforum.com/showpost.php?p=1041984019&postcount=2

I don't think there will be any longevity difference between these two cards. Possibly the 8GB of RAM might give you a bit more, but outside of that, nope. The 970 you can get for around $250 on sale; I don't think you will be able to get the same price on the 390, though you may be able to get them for $290-ish.
It all comes down to price. What the fellow posters want me to say is that async is screwed up on nV hardware because X, Y, Z told us so in an article, which is not correct, because the authors of those articles didn't have the correct information and didn't have the technical background to ask the proper questions to get those answers. If you go back to my discussion of this async topic with Mahigan, who brought it to light, I was asking questions and reading documentation to figure out what was going on, because there wasn't enough information, or the information wasn't laid out straightforwardly in one document. So it took a few days to put the information together with other people's input on the subject.
 
The 960 comparison was clearly a joke, but if someone intends to heavily overclock their card then a 970 may be a better choice. But I think the % of the discrete-GPU-buying public that actually overclocks their cards is less than 10%. Does anyone have stats on that?

Yeah, but most 970s are sold pre-overclocked.

People can hit 1450+ on a pre-overclocked 970 without touching anything.

Regarding stuttering: on a few occasions when I haven't rebooted for many weeks, I have seen stutter in games that is fixed by a reboot. I believe this happens because the VRAM gets fragmented over time and eventually the only large free block of VRAM is in the slow 0.5GB segment.

Otherwise I have been lucky not to have any stuttering caused by my 970, and even with Nvidia being asswipes, it's a good card.

I also think Nvidia will eventually back down on FreeSync, but they will milk G-Sync for as long as possible. Just remember FreeSync isn't on all monitors; for some reason it's been sold as a premium feature.
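If you want to sanity-check the VRAM theory rather than guess, DXGI can report how much local video memory a process has committed versus the budget the OS gives it. A minimal sketch, assuming Windows 10 and DXGI 1.4 (link dxgi.lib); note that it only reports total local VRAM for the calling process, so it can't see the 970's fast 3.5GB versus slow 0.5GB split directly, but watching usage creep toward the full 4GB tells you when the slow segment is likely in play:

```cpp
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);   // adapter 0: the primary GPU

    ComPtr<IDXGIAdapter3> adapter3;        // IDXGIAdapter3 adds the memory queries
    adapter.As(&adapter3);

    // LOCAL = dedicated VRAM. Budget is what the OS currently allows this process
    // to use; CurrentUsage is what the process has actually committed.
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    printf("VRAM usage: %.2f GB of %.2f GB budget\n",
           info.CurrentUsage / 1073741824.0, info.Budget / 1073741824.0);
    return 0;
}
```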
 
Take the 3.5GB of VRAM. What bothers me most is the lie, and I mean lie, not oversight. The performance impact is negligible for most, but that doesn't excuse Nvidia's obvious attempt not to be outdone on the box against the 290/290X.

This lie must bother you even more, since it actually resulted in a lawsuit.
 
Razor is obviously putting up a defense of his preferred card vendor, Nvidia. He keeps shifting the goalposts: the big flap was about GCN being able to handle both graphics and async compute workloads concurrently. Now he's trying to laser-focus attention on just the compute part being able to execute fine, as if that were the issue. The entire point of the flap was that Maxwell cards have to engage compute or graphics workloads more serially than AMD GCN cards are capable of. Everything else he says on this point is hand-waving to try to save face for Nvidia, like the attempts at disqualification he engages in by saying that anyone without intimate technical knowledge of the inner workings understands nothing, so stay silent.

Talk about Nvidia getting similar performance using other code paths that do NOT rely as heavily, or at all, on mixed compute/graphics workloads is a dodge to divert attention away from Maxwell's handicap.

Talk about market share and the lower likelihood of game devs implementing more advanced mixed compute/graphics effects is another dodge, more attention taken away from the handicap. He never mentions Maxwell's higher cost of context switching, which gives GCN cards lower latency in VR; it's all diversion, diversion, diversion.

Most of his time is spent running cover for Nvidia. For my part, I spend a lot of time offering SOME defense and cover for AMD, in part because I'm one of their fans who wants to see them do well, but more importantly because I want to counteract the fear and uncertainty over AMD products that is rampant all over the net. Most of the reason the sales of certain Nvidia cards are so lopsided is this kind of Nvidia boosting all over the place.
 
Yeah, but most 970s are sold pre-overclocked.

People can hit 1450+ on a pre-overclocked 970 without touching anything.

Regarding stuttering: on a few occasions when I haven't rebooted for many weeks, I have seen stutter in games that is fixed by a reboot. I believe this happens because the VRAM gets fragmented over time and eventually the only large free block of VRAM is in the slow 0.5GB segment.

Otherwise I have been lucky not to have any stuttering caused by my 970, and even with Nvidia being asswipes, it's a good card.

I also think Nvidia will eventually back down on FreeSync, but they will milk G-Sync for as long as possible. Just remember FreeSync isn't on all monitors; for some reason it's been sold as a premium feature.


Like I said earlier, for people that intend to heavily overclock their cards, the 970 is probably a better bet. I don't think that is anywhere close to most people, but those that want to can go that route.

On the FreeSync side, I think Nvidia will refuse to support FreeSync for as long as they have the kind of dominant market share they have now. They will milk the additional fees from G-Sync monitors both for increased revenue and for the greater lock-in/switching cost it creates against a competing brand like AMD. A new AMD card does better in a certain price bracket? But oh wait... that expensive G-Sync monitor won't give any benefits, so better stick with an Nvidia card.

It's quite transparent. Unless and until Nvidia pays a price for limiting their cards to G-Sync displays, they won't change. And I don't expect them to need to anytime soon, based on the Apple-like devotion of so many of their fans.

But there will come a time when building an AMD system from scratch will cost up to hundreds less for similar performance once the price difference between a G-Sync display and a FreeSync display is factored in. Or, for the same total budget, AMD will let a person spend less on the monitor and more on the GPU.

Those who choose the AMD path will get better performance and better experiences everywhere but at the top end, where price is no issue. Nvidia users on a budget will suffer. Will that be enough to make them care? It hasn't so far.
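To put rough numbers on that budget argument, here is a tiny worked example (purely illustrative, made-up prices; actual monitor and GPU prices obviously move around with sales):

```cpp
#include <cstdio>

int main() {
    // Hypothetical figures only: a comparable-tier GPU from each vendor plus a
    // variable-refresh monitor, with the G-Sync module carrying a ~$200 premium.
    double amdGpu = 310.0, freesyncMonitor = 450.0;
    double nvGpu  = 330.0, gsyncMonitor    = 650.0;

    double amdTotal = amdGpu + freesyncMonitor;
    double nvTotal  = nvGpu  + gsyncMonitor;

    printf("AMD GPU + FreeSync display : $%.0f\n", amdTotal);
    printf("NV  GPU + G-Sync display   : $%.0f\n", nvTotal);
    printf("Gap at similar performance : $%.0f\n", nvTotal - amdTotal);
    return 0;
}
```

With numbers like these, the same total budget either saves a couple of hundred dollars or frees that money to move up a GPU tier on the AMD side; that is the whole point being argued above.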
 
Cores of the past never even had FP units. The lawsuit is without merit; consumer FX processors have up to 8 integer cores. Just running simple tests with multi-threaded programs/benchmarks like Cinebench, using Windows affinity to run 4 threads on four modules and then 8 threads on 4 modules (8 cores), will show about an 80% increase in performance. If those were not separate cores, that performance gain would be impossible. AMD should sue this idiot for a frivolous lawsuit.
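For anyone who wants to repeat that affinity experiment, here is a rough Win32 sketch of how it can be scripted. The module-to-logical-processor mapping (cores 0/1 in module 0, 2/3 in module 1, and so on) and the "benchmark.exe" name are assumptions for illustration; you would point it at Cinebench or whatever benchmark you use and compare the scores of the 4-thread and 8-thread runs. The same masks can also be applied by hand from Task Manager's "Set affinity" dialog.

```cpp
#include <windows.h>
#include <cstdio>

int main(int argc, char** argv) {
    // Assumed FX-style layout: 4 modules x 2 integer cores = logical processors 0..7,
    // with cores 0/1 sharing module 0, 2/3 sharing module 1, etc.
    //   0x55 = 01010101b -> one core per module (4 threads, no shared front end)
    //   0xFF = 11111111b -> both cores of every module (8 threads)
    DWORD_PTR mask = (argc > 1 && argv[1][0] == '8') ? 0xFF : 0x55;

    char cmd[] = "benchmark.exe";            // placeholder: Cinebench or similar
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    if (!CreateProcessA(nullptr, cmd, nullptr, nullptr, FALSE,
                        CREATE_SUSPENDED, nullptr, nullptr, &si, &pi)) {
        printf("could not launch benchmark\n");
        return 1;
    }
    SetProcessAffinityMask(pi.hProcess, mask);  // pin it before any threads run
    ResumeThread(pi.hThread);
    WaitForSingleObject(pi.hProcess, INFINITE);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```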
 
To all those people quoting COD - https://www.youtube.com/watch?v=rkAiTFm1aGE

The 390 isn't getting any extra FPS because of its VRAM here; the 970 and 390 are about the same.
The 390 has an edge because of more horsepower here, not VRAM! Misinformation spreads faster than Ebola.

Nobody was "quoting" COD. A benchmark was posted to show that some newer games like BO3 have exhibited higher VRAM usage.

And that video is of the last-gen title, which uses around 3.4GB of VRAM at 1080p if you look at benchmarks. So it doesn't make sense to use it to argue there is no difference, as that amount wouldn't bottleneck either the 970 or the 390.
 
Dude, just last week people could have gotten the 970 for $250 with rebates while the cheapest 390 was around $310 (with rebates). That's not a few bucks, that's a 20% difference. Now the same 970 is $280 with 2 games (with discount, no rebate)...

Dude, just today a deal came across Slickdeals for an R9 390 for $225.
Three days ago there was one on sale for $230.

The kicker is that while you are going on about how last week there was an amazing deal for a 970 at $250, there was also a short-lived deal for a 390 at $201...

People really are acting like the 970 is the only one that goes on sale.

That said, I would think about it. The VRAM difference is debatable. I'm willing to bet the 970 will perform admirably at 1080p for quite a while to come. At 1440p it's a different matter, especially going into the future, and especially if you plan on going SLI/CF later on; then that VRAM becomes even more of a difference.
 
Nobody was "quoting" COD. A benchmark was posted to show that some newer games like BO3 have exhibited higher VRAM usage.

And that video is of the last-gen title, which uses around 3.4GB of VRAM at 1080p if you look at benchmarks. So it doesn't make sense to use it to argue there is no difference, as that amount wouldn't bottleneck either the 970 or the 390.

3.4GB could very easily bottleneck the 970 and its 3.5GB of fast VRAM the second that usage tips over 3.5GB.
 
It all comes down to price. What the fellow posters want me to say is that async is screwed up on nV hardware because X, Y, Z told us so in an article, which is not correct, because the authors of those articles didn't have the correct information and didn't have the technical background to ask the proper questions to get those answers. If you go back to my discussion of this async topic with Mahigan, who brought it to light, I was asking questions and reading documentation to figure out what was going on, because there wasn't enough information, or the information wasn't laid out straightforwardly in one document. So it took a few days to put the information together with other people's input on the subject.

It comes down to more than price. With games already maxing out the VRAM on the 970, 2016 could crush that card into obsolescence, especially at higher resolutions.
 
http://wccftech.com/amd-r9-fury-x-nano-price-cuts/
Add massive price cuts across the entire AMD lineup into the debate :). The R9 390 is now $279 ($259 with rebate).
[Image: R9 390 price cuts]


Update: seems like the Slickdeals effect happened and the XFX R9 390 sold out at Newegg.
 
Razor is obviously putting up a defense of his preferred card vendor, Nvidia. He keeps shifting the goalposts: the big flap was about GCN being able to handle both graphics and async compute workloads concurrently. Now he's trying to laser-focus attention on just the compute part being able to execute fine, as if that were the issue. The entire point of the flap was that Maxwell cards have to engage compute or graphics workloads more serially than AMD GCN cards are capable of. Everything else he says on this point is hand-waving to try to save face for Nvidia, like the attempts at disqualification he engages in by saying that anyone without intimate technical knowledge of the inner workings understands nothing, so stay silent.

Talk about Nvidia getting similar performance using other code paths that do NOT rely as heavily, or at all, on mixed compute/graphics workloads is a dodge to divert attention away from Maxwell's handicap.

Talk about market share and the lower likelihood of game devs implementing more advanced mixed compute/graphics effects is another dodge, more attention taken away from the handicap. He never mentions Maxwell's higher cost of context switching, which gives GCN cards lower latency in VR; it's all diversion, diversion, diversion.

Most of his time is spent running cover for Nvidia. For my part, I spend a lot of time offering SOME defense and cover for AMD, in part because I'm one of their fans who wants to see them do well, but more importantly because I want to counteract the fear and uncertainty over AMD products that is rampant all over the net. Most of the reason the sales of certain Nvidia cards are so lopsided is this kind of Nvidia boosting all over the place.

Spot on.
 
Like I said earlier, for people that intend to heavily overclock their cards, the 970 is probably a better bet. I don't think that is anywhere close to most people, but those that want to can go that route.

On the FreeSync side, I think Nvidia will refuse to support FreeSync for as long as they have the kind of dominant market share they have now. They will milk the additional fees from G-Sync monitors both for increased revenue and for the greater lock-in/switching cost it creates against a competing brand like AMD. A new AMD card does better in a certain price bracket? But oh wait... that expensive G-Sync monitor won't give any benefits, so better stick with an Nvidia card.

It's quite transparent. Unless and until Nvidia pays a price for limiting their cards to G-Sync displays, they won't change. And I don't expect them to need to anytime soon, based on the Apple-like devotion of so many of their fans.

But there will come a time when building an AMD system from scratch will cost up to hundreds less for similar performance once the price difference between a G-Sync display and a FreeSync display is factored in. Or, for the same total budget, AMD will let a person spend less on the monitor and more on the GPU.

Those who choose the AMD path will get better performance and better experiences everywhere but at the top end, where price is no issue. Nvidia users on a budget will suffer. Will that be enough to make them care? It hasn't so far.

It's already happening.

As someone who in the past 8 months has owned a GTX 980, a 980 Ti, and now an R9 Fury Pro, I can say that the 980 is pretty useless as a 4K card. You simply have to drop far too many graphical settings compared to the other two cards. Obviously the 980 Ti is faster than an R9 Fury, but not by as much as you would imagine, even when overclocked, especially with the recent AMD driver improvements and voltage unlocks.

The R9 Fury and Fury X are nowhere near the "overclocker's dream" claimed by AMD (shame on them for that BS), but with voltage control I am getting a stable OC of 1140 on my core and 550 on the VRAM. Though to be honest, I only went AMD because I wanted adaptive sync (G-Sync/FreeSync) on a 4K, 21:9, or 32"-or-above monitor.

Overall I would say my R9 Fury is around 15% slower than my 980Ti (1450 core 8GHz VRAM) OC vs OC at 4K.

Yeah, I think I'm going to go AMD next year, when they get the new cards out with more than 4GB of VRAM (I tend to buy an entire system and then wait 2 or 3 years before upgrading, so I didn't want to jump on a Fury just in case it lacked future-proofing on VRAM), and they'll have the die shrink too for bigger performance increases.

This monitor is just awesome: IPS, FreeSync, and 4K.
https://www.overclockers.co.uk/lg-2...g-widescreen-led-monitor-black-mo-138-lg.html

You can't get anywhere near that with Nvidia G-Sync without paying nearly double the price. I've never been too bothered about huge FPS (as long as I can maintain 60 or so I don't need the 144Hz stuff), but yeah, with the next series of Fury cards and a good FreeSync monitor, nothing Nvidia has could offer anywhere near the same value or experience without paying way over the odds. Their monitors and GPUs tend to cost quite a bit more, but like you said, not that big a difference in performance tbh, so I'm going to keep an eye on these good components and get a really nice rig next year. If AMD learns from this pricing next year too, then we could see them claw back some market share, if people actually notice what is on offer.

That is an excellent spec monitor for the price.

I had no intention of selling my 980 Ti until I saw that the range of G-Sync monitors had nothing that met my needs. I ended up with this and could not be happier, as I even got a hacked driver that increases the FreeSync range to 33-60 Hz. A single R9 Fury is running most of my games at high-to-ultra settings. Of course faster is always better, but FreeSync (and G-Sync) definitely makes even 33 FPS very playable.

https://www.overclockers.co.uk/sams...g-widescreen-led-monitor-black-mo-217-sa.html

Yeah, it's pretty telling that when you want G-Sync you've got the choice of either going down a resolution (like 1440p) or paying an extra 200 or 300 to get a similar spec. If it were just graphics cards then Nvidia would have some edge, but looking at the full package (if getting a new monitor), it costs far more to get something similar from Nvidia.

I think Nvidia are going to keep riding the massive sales, but at least there are some good deals to be had if people look around. I was considering going to Nvidia's side too, but when looking at the range of monitors I had the same issue as you: nothing worth the price or fitting what I wanted. I think Nvidia are charging too high a premium at the moment, and a lot of people seem happy to swallow it, but when you look at it objectively, Nvidia aren't going to offer the same experience for what you can get on certain budgets.

http://forums.overclockers.co.uk/showthread.php?t=18704498&page=2
 
Dude, just today a deal came across Slickdeals for an R9 390 for $225.
Three days ago there was one on sale for $230.

The kicker is that while you are going on about how last week there was an amazing deal for a 970 at $250, there was also a short-lived deal for a 390 at $201...

People really are acting like the 970 is the only one that goes on sale.

That said, I would think about it. The VRAM difference is debatable. I'm willing to bet the 970 will perform admirably at 1080p for quite a while to come. At 1440p it's a different matter, especially going into the future, and especially if you plan on going SLI/CF later on; then that VRAM becomes even more of a difference.

Most 970s come with 2 free games and are priced around $280 after rebate, so if even one of those games is something the consumer would have bought anyway, they are effectively getting the card for ~$230, and this has been happening for the past 2 or 3 months; the 970 has been less expensive than the 390 on average. Yeah, there was a Black Friday sale on the 390 for $201, but it was done in a few hours. I got an email from a friend, but by that time it was over.

So right now the 390 and 970 are equally priced, but with the 970 you still get those games; if those games are something the consumer will play, it's more money in their pocket.

Adding SLI or CrossFire later will not be a good plan for this generation. The reason is that Pascal and AI will have double the theoretical shader performance (and actual utilization of the shader ALUs will go up based on architecture) with double the VRAM per card; high-end cards are going to be coming with 16GB of RAM and double the single-precision FLOPS. Neither the 390 nor the 970 will fare well in next-gen games at higher resolutions, even in multi-card solutions. The 390 with 8 gigs will have a better chance than the 970, I guess, but it's not something I would bet on. This is a major upgrade in graphics cards, which should have happened at 20nm, so it is going to look like what happened with the G80 release: it pretty much rendered 7900 and X1900 XT multi-card solutions obsolete overnight. This gen will probably be even more dramatic, because these companies are jumping two node changes (we can look at it as 1.5 because of the half node).

These two node changes are important, because both of these companies are going to try to outdo the other. They aren't going to make small chips and save money; they are going to put as much as they can into these next-gen GPUs.

When you are gambling on the future, you had better look at everything that is going on instead of just the past, otherwise you will get burned. I got burned with 7800 GT SLI, and that was the last time I ever touched multi-GPU solutions, partially because it wasn't future-proof and mostly because of issues with multiple adapters.
 
Not surprised to see AMD lowering the price of the 390; the 970 easily outperforms it and was already priced lower.
 
All these discounts NV and AMD are starting are telling me one thing:

Get rid of the old stock for the new 14/16nm tech coming out.

I would hold out unless you need a card now.
 
All these discounts NV and AMD are starting are telling me one thing:

Get rid of the old stock for the new 14/16nm tech coming out.

I would hold out unless you need a card now.


Yep was going to say the same thing.
 
http://wccftech.com/amd-r9-fury-x-nano-price-cuts/
Add massive price cuts across the entire AMD lineup into the debate :). The R9 390 is now $279.
See my response:
https://www.reddit.com/r/Amd/commen...l_gpu_prices_including_r9_fury_x_fury/cxdmegw

In summary... they are Black Friday sales. The same sales are also being applied to Nvidia GPUs, which are in some cases cheaper.

http://wccftech.com/nvidia-cuts-prices-900-serie/

It also seems WCCF edited their headline for those articles. The original was something like "AMD issues price cuts". It now simply says "Holiday price cuts".
 
Ended up grabbing a 970. The Nvidia experience has always been better for me, and EVGA has always treated me well, so it's win-win!
 
Ended up grabbing a 970. The Nvidia experience has always been better for me, and EVGA has always treated me well, so it's win-win!

bookmarked


It will be interesting to see how these cards stack up over 2016. For now, the "Apple of the GPU world" wins.
 