390 the better card for longevity over 970?

Those things take forever to come out and often aren't made for a generation. There isn't yet a Fury X Lightning but the 980 Classi was out within a month, IIRC.

I simply pointed out top end cards exist for AMD which is contrary to your claim. Somehow we went from "AMD has no top end cards" to "well they take forever to come out".

But in any case, Lightning cards typically come out 3-5 months after the reference board launch (for both teams), which is pretty consistent with the Classified's release schedule, give or take a month. Yes, the 980 Ti Classified released very quickly, but it's an exception and not the norm, and it has more to do with EVGA receiving a lot of flak for releasing the 980 Classified too late for too high a price, and not providing enough BIOS support. Then there's also the ridiculous 980 (non-Ti) Kingpin, which released literally 1 month before the Titan X launch and cost $800, which made a lot of owners upset because, again, too little too late with too high a price tag. Plus there was a debacle with EK waterblocks when EVGA failed to inform anyone that the retail version had a slightly changed PCB layout (one of the caps got moved, I believe), and thus anyone who bought the initial batch of waterblocks was SOL, and EVGA simply told them "tough luck" until EK put pressure on them.

980 Ti Classified is also "only" $50 more than the ref card, unheard of for a Classified card. So again, it's very much an odd exception.

Fury X is AMD's Titan so it's unlikely to get anything besides reference treatment. MSI wasn't even one of the AIBs that got seeded Fiji chips so of course no Lightning there either.
 
Those things take forever to come out and often aren't made for a generation. There isn't yet a Fury X Lightning but the 980 Classi was out within a month, IIRC.

There will most likely never be an aftermarket designed Fury X:

In case it isn’t obvious yet, the Fury X uses a very unique design. So unique, in fact, that AMD’s add-in board partners (like Asus, MSI, and Sapphire) won’t be able to customize the card with their own cooling solutions. The Fury X will be reference design-only, though AIBs will be able to tinker with the air-cooled Radeon R9 Fury released in July.

That means all Fury X cards will be physically similar no matter which manufacturer you buy from.

The same as the Nvidia Titan.
 
For all those saying "8GB is more future proof", that point is totally moot in my view. Hawaii Pro is never going to have the oomph to need more than 4GB, even at 1440p. I'd say 3.5-4GB is all the VRAM the 970 or 390 (Hawaii Pro) will ever need. When considering 390 vs 970, I'd go 970 at 1080p because of the much higher overclocking headroom, and 390 at 1440p due to the larger memory bus and a bit more usable VRAM.

That said, you should be looking at a 290x (full Hawaii) which trumps both 390 and 970 in horsepower, and still doesn't need >4GB.

On a side note, did I really just read through 3 pages of bickering? Can I get the last 5 minutes of my life back please?
 
For all those saying "8GB is more future proof", that point is totally moot in my view. Hawaii Pro is never going to have the oomph to need more than 4GB, even at 1440p. I'd say 3.5-4GB is all the VRAM the 970 or 390 (Hawaii Pro) will ever need. When considering 390 vs 970, I'd go 970 at 1080p because of the much higher overclocking headroom, and 390 at 1440p due to the larger memory bus and a bit more usable VRAM.

That said, you should be looking at a 290x (full Hawaii) which trumps both 390 and 970 in horsepower, and still doesn't need >4GB.

On a side note, did I really just read through 3 pages of bickering? Can I get the last 5 minutes of my life back please?

You do know that there are games out today that use more than 4GB of VRAM @ 1080p?
 
For all those saying "8GB is more future proof", that point is totally moot in my view.

Current games are already pushing 4GB+ at 1080p.
http://www.guru3d.com/articles-page...-graphics-performance-benchmark-review,9.html

As for OP, the 390 is the better choice. The 970 has a lot of drama surrounding it. There's the 3.5GB+0.5GB issue, rumors that it will have future issues due to the async compute limitations, and G-Sync is starting to look like overpriced proprietary hardware with recent announcements of the advancements to Freesync.

The 970 also had stuttering issues in quite a few games once they started pushing past 3.5GB of VRAM, last time I read.
 
Lol @ people quoting COD for performance. It's a bad port, it still looks like 2008, and the people who play it fall into the segment who can't point out which part of their computer is the RAM. It's a yearly refresh with the exact same graphics and a 1-hour campaign.

The guy is pretty much right that Hawaii doesn't really need 8GB of RAM unless CFed. RAM will always come second to raw horsepower; a GTX 970 with 1.5GB of RAM will be better than a GTX 960 with 10GB.
 
Current games are already pushing 4GB+ at 1080p.
http://www.guru3d.com/articles-page...-graphics-performance-benchmark-review,9.html

As for OP, the 390 is the better choice. The 970 has a lot of drama surrounding it. There's the 3.5GB+0.5GB issue, rumors that it will have future issues due to the async compute limitations, and G-Sync is starting to look like overpriced proprietary hardware with recent announcements of the advancements to Freesync.

The 970 also had stuttering issues in quite a few games once they started pushing past 3.5GB of VRAM, last time I read.


VRAM usage changes based on the card; many engines do this now with persistent, streamed levels. In other words, as the character moves through the world, the engine recognizes, based on the character's location, the assets around that character and dynamically loads them based on how much memory is available to it. Originally this was made for MMOs, but many if not all games are using it now.

Also, nV has been doing this with drivers for the 970 on a per-game basis, as is AMD with the Fury X. You end up losing a little bit of frame rate because the assets still have to be processed through the GPU as they're loading from system memory into VRAM, but on the other side of that you don't get stuttering, as long as it's not too much over the max VRAM size. There will be a point where a system like this fails, and it will be different for different cards.
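
To make that concrete, here's a rough, hypothetical sketch of the budget idea (my own, not any particular engine's code): sort assets by distance to the player, keep what fits in the card's VRAM budget resident, and stream the rest from system RAM on demand.

```cpp
// Hypothetical sketch of budget-based asset residency; illustrative only,
// not any real engine's code.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Asset {
    std::string name;
    uint64_t    sizeBytes;
    float       distanceToPlayer;  // nearby assets get priority
};

// Keep the nearest assets resident in VRAM up to the card's budget; anything
// that doesn't fit stays in system RAM and is copied over PCIe on demand
// (a small frame-rate cost, but no hard stutter as long as the overflow is small).
std::vector<Asset> PickResidentSet(std::vector<Asset> assets, uint64_t vramBudgetBytes) {
    std::sort(assets.begin(), assets.end(),
              [](const Asset& a, const Asset& b) { return a.distanceToPlayer < b.distanceToPlayer; });

    std::vector<Asset> resident;
    uint64_t used = 0;
    for (const Asset& a : assets) {
        if (used + a.sizeBytes <= vramBudgetBytes) {
            used += a.sizeBytes;
            resident.push_back(a);
        }
    }
    return resident;
}

int main() {
    std::vector<Asset> scene = {
        {"terrain",      1ull << 30, 10.0f},   // 1 GiB
        {"buildings",    2ull << 30, 50.0f},   // 2 GiB
        {"distant_city", 2ull << 30, 900.0f},  // 2 GiB
    };
    // On a card with a ~4 GiB budget, the distant city is streamed instead of resident.
    for (const auto& a : PickResidentSet(scene, 4ull << 30))
        std::cout << a.name << " resident in VRAM\n";
}
```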
 
You do know that there are games out today that use more than 4GB of VRAM @ 1080p?


Do you know that it depends on the graphics card and how much VRAM it has, and that it changes based on the engine a game uses?
 
Wow didn't mean to start a flame war lol. I actually grabbed a 970 today, but I had to cancel the order so now I have a chance to back out as I have been wavering back and forth lol.
 
Just enjoy it! It's a great card! (Unless you can get a better deal or some games with it, etc...)
 
Wow didn't mean to start a flame war lol. I actually grabbed a 970 today, but I had to cancel the order so now I have a chance to back out as I have been wavering back and forth lol.

You can get 390Xs for $350.... :cool:
 
Lol @ people quoting COD for performance. It's a bad port, it still looks like 2008, and the people who play it fall into the segment who can't point out which part of their computer is the RAM. It's a yearly refresh with the exact same graphics and a 1-hour campaign.

The guy is pretty much right that Hawaii doesn't really need 8GB of RAM unless CFed. RAM will always come second to raw horsepower; a GTX 970 with 1.5GB of RAM will be better than a GTX 960 with 10GB.

Regardless of what you say, the game can fill 4GB VRAM, period. It's designed like this if you bothered to read the first sentence of the linked article. You can argue about optimization and laugh all you want, but you're not adding anything of value to the discussion by trying to make insults a part of your point. Grow up.

And there are plenty of other games that push 4GB at 1080p, for instance Shadow of Mordor from 2014 and Far Cry 4.

And lastly, let me make this point. The VRAM usage will increase with time as game engines make advancements and take advantage of existing hardware, this is proven over time. Look at the 8800GT that came in variants of 512mb and 256mb in 2007. Does anyone remember how the 512mb was considered overkill for 1080p and was only recommended for higher tier resolutions? Look at VRAM usage now.

The 970 isn't as good a future-proof card if you don't upgrade frequently; the 390 is a wiser choice. And this is coming from a 970 owner.
 
Regardless of what you say, the game can fill 4GB VRAM, period. It's designed like this if you bothered to read the first sentence of the linked article. You can argue about optimization and laugh all you want, but you're not adding anything of value to the discussion by trying to make insults a part of your point. Grow up.

And there are plenty of other games that push 4GB at 1080p, for instance Shadow of Mordor from 2014 and Far Cry 4.

And lastly, let me make this point. The VRAM usage will increase with time as game engines make advancements and take advantage of existing hardware, this is proven over time. Look at the 8800GT that came in variants of 512mb and 256mb in 2007. Does anyone remember how the 512mb was considered overkill for 1080p and was only recommended for higher tier resolutions? Look at VRAM usage now.

The 970 isn't a good future-proof card if you don't upgrade frequently; the 390 is a wiser choice. And this is coming from a 970 owner.


Neither of these cards is "future proof." If this were the 290X vs the 780 then yes, you could hold out longer with an 8GB card (there wasn't one at the time, though), but next gen is going to have double the shader performance and double the VRAM. Games will need both shader performance and VRAM, not just one or the other.

There is no such thing as future proofing for the most part. The only time graphics cards have really lasted a long time is when one company or the other screwed up and made a product that wasn't competitive enough to stay in the race, AKA R600 and NV30; both were not as good as their competitors (let's leave market share out of this because it's hard to quantify, but there will be an effect from it as well), and then the competing cards lasted as long as they did. You can say the 290X lasted a long time too, but that all happened because the 20nm node didn't make it viable for next-gen chips, so backup plans were initiated by both companies (we can say both companies screwed up, but it wasn't their fault); hence we got what we see now, and it would have been hard to foresee this with the 290X, since these companies will not talk about products in preproduction. Also, we have to note the 290X/390X did drop a tier performance-wise; the Fury line surpassed it, so in essence it's now marketed as a mainstream card, not really an enthusiast card.
 
^ Which is why it's best to buy the card that will play the games you want to play right now the best and not sweat the rest of it!
 
Neither of these cards is "future proof." If this were the 290X vs the 780 then yes, you could hold out longer with an 8GB card (there wasn't one at the time, though), but next gen is going to have double the shader performance and double the VRAM. Games will need both shader performance and VRAM, not just one or the other.

Sure they are. If you consider the context I've stated, being 1080p, both of those cards will last a very long time. I just think the 390 will age better than the 970, especially with FreeSync becoming an industry standard over DisplayPort. I also don't think the 3-5 fps advantage the 970 has in some games over a 390 is worth sacrificing the extra VRAM offered by the 390. Meanwhile, the 390 can show gains of 10 fps in games that use a lot of VRAM, as shown here in Shadow of Mordor, which is documented as pushing 6GB of VRAM. Then you have the whole upcoming async compute fiasco that NVIDIA is rumored to not have proper support for.

If we go above 1080p, I can agree, it becomes harder to label something future-proof.

^ Which is why it's best to buy the card that will play the games you want to play right now the best and not sweat the rest of it!

Maybe you should read the topic title.
 
Sure they are. If you consider the context I've stated, being 1080p, both of those cards will last a very long time. I just think the 390 will age better than the 970, especially with FreeSync becoming an industry standard over DisplayPort. I also don't think the 3-5 fps advantage the 970 has in some games over a 390 is worth sacrificing the extra VRAM offered by the 390. Meanwhile, the 390 can show gains of 10 fps in games that use a lot of VRAM, as shown here in Shadow of Mordor, which is documented as pushing 6GB of VRAM.
If you say this, then you are also saying the Fury line is in the same boat as any other card with 4GB or less? I don't think that is correct, and this is because of what I stated before: developers will push more pixels if they can, just as they will push more VRAM if it's available. Engines are developed for cards with less memory by shifting assets from system RAM to VRAM. It does drop frames a little because the GPU has to process the assets as they are being shifted, and at a certain point, if there is too much transfer, it will hurt; but if the engine is coded properly it can handle a fairly significant amount, around 2 or 3 gigs more, and it also has to saturate the available bandwidth, which is all controlled by the software and drivers.

Then you have the whole upcoming async compute fiasco that NVIDIA is rumored to not have proper support for.
Should I really comment on this, lol? I think you should go read my paragraphs of posts on this matter (which is many, many posts, and in-depth architectural posts). First off, async isn't an issue; that isn't what the difference between nV and AMD hardware is, lol. And at this juncture, anyone who throws out async as a problem just doesn't know what the hardware is doing.
 
If you say this, then you are also saying the Fury line is in the same boat as any other card with 4GB or less? I don't think that is correct, and this is because of what I stated before: developers will push more pixels if they can, just as they will push more VRAM if it's available.

OP is asking about 970 against a 390.

Should I really comment on this, lol? I think you should go read my paragraphs of posts on this matter. First off, async isn't an issue; that isn't what the difference between nV and AMD hardware is, lol. And at this juncture, anyone who throws out async as a problem just doesn't know what the hardware is doing.

I will disregard the topic title comment because that is what we are talking about, isn't it? Kind of a snide remark when these posts seem to be geared toward it.

If you want, go ahead, but I won't bother reading it. If you read what I said, it's rumored. Nobody yet knows how async compute will play out at this point until games start utilizing it and benchmarks are published, as NVIDIA has announced they will be implementing async compute support into their drivers. We've had one published already by Oxide Games that didn't favor NVIDIA, and they fell behind pretty badly to AMD, which sparked the async compute debate.
 
Lol @ people quoting COD for performance. It's a bad port, it still looks like 2008, and the people who play it fall into the segment who can't point out which part of their computer is the RAM. It's a yearly refresh with the exact same graphics and a 1-hour campaign.

The guy is pretty much right that Hawaii doesn't really need 8GB of RAM unless CFed. RAM will always come second to raw horsepower; a GTX 970 with 1.5GB of RAM will be better than a GTX 960 with 10GB.

Yeah, you're not going to last long here.
 
OP is asking about 970 against a 390.

I still disagree, based on my knowledge of how newer game engines work.
If you want, go ahead, but I won't bother reading it. If you read what I said, it's rumored. Nobody yet knows how async compute will play out at this point until games start utilizing it and benchmarks are published, as NVIDIA has announced they will be implementing async compute support into their drivers. We've had one published already by Oxide Games that didn't favor NVIDIA, and they fell behind pretty badly to AMD, which sparked the async compute debate.
I don't care about the millions of debates among people who don't know what they are talking about; I do care about information from developers in the industry who have stated what I have been stating since Oxide opened up this can of worms.

No, it's not async, lol. Async has nothing to do with what the actual differences are...

That's the issue. Async is a DX spec; all DX12-compliant cards can do async compute, even Kepler and, to a lesser degree latency-hiding-wise, Fermi, and they are good at it. Maxwell is even better at it, to a degree better than AMD's GCN hardware; we can see this with compute-based applications, where latency and utilization of nV's shader array is close to 50% better than GCN's. The application doesn't have real control over how concurrent instructions are dispatched to the different queues, but it can give guidance, and this is where the code is sensitive to different hardware. Concurrent kernel and instruction execution is where the difference between AMD and nV hardware lies, which is not the same as async, or multi-engine. You can say this is like hyper-threading: hyper-threaded code on CPUs has its upsides and downsides based on architecture as well.

Graphics cards are inherently highly parallel, so when executing concurrent kernels, which Maxwell is capable of doing, it does so differently than AMD's hardware from a hardware point of view; hence both behave differently based on how the application is coded. Both are sensitive to different code; even generational differences within AMD hardware show sensitivity, e.g. Fiji vs. the R3xx or R2xx series.

Back to async compute: async compute is used to fill in gaps where the shader array units are idle, hiding latency. Now, if the program is well made for parallelism and the hardware is already good at utilizing its units, which Maxwell is excellent at (that is why it can do better than GCN with half the flops), this shouldn't be an issue, because the latency between which kernel and instruction is being processed should be limited. *old adage: it's not how big you are, it's how you use it :D*

The whole IPC comparison between AMD and Intel is about this as well. If you have a chip with great IPC, it will function faster even though it can run fewer threads.

Now, with nV having 80% of the market, how do you think developers will code concurrent execution? And this is why we shouldn't really even be talking about it. There is nothing to talk about.


See the difference?

This is just the overview of it; I can get into much more detail if you like... PM me and I will.
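
For anyone who wants to see what the "multi engine" part looks like in practice, here's a bare-bones, hypothetical D3D12 snippet (my own sketch, not from nV's or AMD's docs): you create a direct (graphics) queue and a separate compute queue, and whether the driver actually overlaps the work is exactly the hardware/driver question being argued about here.

```cpp
// Hypothetical sketch: D3D12 multi-engine setup with a graphics queue and a
// separate compute queue. Overlap is up to the driver/hardware scheduler.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute, and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    // Compute queue: compute and copy only. Submitting compute work here instead
    // of on the direct queue is what gives the driver the chance to interleave it
    // with graphics work rather than serializing everything on one queue.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return SUCCEEDED(device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue)));
}
```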
 
That's a lot of work to try to say async compute does not matter. The fact remains that currently, AMD has a better implementation of async compute. Who cares if nvidia maxwell cards can perform some async calculations quickly if you have to WAIT for that to be finished to perform graphics work?

You?

Why?

To prop up your Nvidia love? Get over it. Nvidia's tech in this regard is inferior. Period. They will probably fix it next year, but it's behind. And pointing to market share is a neon sign saying that you agree it's technically inferior at doing its intended job (increasing performance - getting latency down and context switching lower), but never mind that because no one will actually CODE for the superior technology since Nvidia has the larger market share.


Well guess what, async is on the consoles, so every god damn cross platform console game dev that uses async to get more performance there will have some knowledge of how it works. If they choose to hold back that kind of optimization for the pc because nvidia has an inferior implementation, that is not a perk, it's a disgrace to both the game devs, and nvidia for holding the market back.
 
That's a lot of work to try to say async compute does not matter. The fact remains that currently, AMD has a better implementation of async compute. Who cares if nvidia maxwell cards can perform some async calculations quickly if you have to WAIT for that to be finished to perform graphics work?

You?

Why?

To prop up your Nvidia love? Get over it. Nvidia's tech in this regard is inferior. Period. They will probably fix it next year, but it's behind. And pointing to market share is a neon sign saying that you agree it's technically inferior at doing its intended job (increasing performance - getting latency down and context switching lower), but never mind that because no one will actually CODE for the superior technology since Nvidia has the larger market share.


Well guess what, async is on the consoles, so every god damn cross platform console game dev that uses async to get more performance there will have some knowledge of how it works. If they choose to hold back that kind of optimization for the pc because nvidia has an inferior implementation, that is not a perk, it's a disgrace to both the game devs, and nvidia for holding the market back.


What? Tell me why it's inferior and then we can talk, lol, because until you can, you are just a mouthpiece working off really bad information, information that wasn't available to the general public at the time because it was quite new. I have been saying what an Intel engineer has been saying (if you want, I can post the link to his post too); many of the programmers at B3D have concurred with him on it as well. Come on...

Context switching should not be used in async compute. Context switching is when you need to stop one kernel, either graphics or compute, and switch to the other. Multi-engine is when you interleave compute instructions into the graphics pipeline; both kernels have to be running! This is why I stated you have the wrong information at hand, or read it wrong, lol. Async compute is when you take instructions from one compute queue and put them into another compute queue, which doesn't require context switching because you are still running the compute kernel. GCN does have better context switching, btw, since it also has fine-grained context switching. But this has nothing to do with async compute.

And this isn't optimization. Optimization is how you speed up an algorithm by changing it so that it functions faster on a given piece of hardware with the same or similar output. Async is about how the hardware reads the code and reduces latency based on what it does internally.

Oh, and coming from a hardware perspective, please leave out the number of ACE units and things of that nature, because that on its own, without a basis of what the code is doing and how the code is written, can't really be talked about. It will just put stupid numbers into a conversation that can't really be quantified without the background of the program.
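
And just to illustrate the queue-to-queue point above with a hypothetical D3D12 setup (again my own sketch, not vendor code): the graphics queue waits on a fence signaled by the compute queue, both queues keep their own work, and nothing has to be stopped or context-switched to hand the result over.

```cpp
// Hypothetical sketch: cross-queue synchronization with a fence in D3D12.
// The fence tells the graphics queue when compute output is ready; neither
// queue's work is preempted to make that happen. Names are illustrative.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* computeQueue,
                 ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandList* const* computeLists, UINT numComputeLists,
                 ID3D12CommandList* const* graphicsLists, UINT numGraphicsLists,
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    // Kick off the async compute pass (e.g. culling or particles) on its own queue.
    computeQueue->ExecuteCommandLists(numComputeLists, computeLists);
    computeQueue->Signal(fence, ++fenceValue);

    // The graphics queue waits on the GPU (not the CPU) for that fence value,
    // then consumes the compute results and carries on rendering.
    graphicsQueue->Wait(fence, fenceValue);
    graphicsQueue->ExecuteCommandLists(numGraphicsLists, graphicsLists);
}
```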
 
Maybe you should read the topic title.

Or perhaps I gave the most sensible answer I could. If the question is misguided, is it better to steer the person in the right direction or speculate wildly on a future? How well has that worked for AMD, who, at least recently, have been relatively mismatched in hardware to software demands?

Speculating on consumer electronics is fun to its own end but makes for poor buying decisions.
 
I still disagree, based on my knowledge of how newer game engines work.
I don't care about the millions of debates among people who don't know what they are talking about; I do care about information from developers in the industry who have stated what I have been stating since Oxide opened up this can of worms.

No, it's not async, lol. Async has nothing to do with what the actual differences are...

That's the issue. Async is a DX spec; all DX12-compliant cards can do async compute, even Kepler and, to a lesser degree latency-hiding-wise, Fermi, and they are good at it. Maxwell is even better at it, to a degree better than AMD's GCN hardware; we can see this with compute-based applications, where latency and utilization of nV's shader array is close to 50% better than GCN's. The application doesn't have real control over how concurrent instructions are dispatched to the different queues, but it can give guidance, and this is where the code is sensitive to different hardware. Concurrent kernel and instruction execution is where the difference between AMD and nV hardware lies, which is not the same as async, or multi-engine. You can say this is like hyper-threading: hyper-threaded code on CPUs has its upsides and downsides based on architecture as well.

Graphics cards are inherently highly parallel, so when executing concurrent kernels, which Maxwell is capable of doing, it does so differently than AMD's hardware from a hardware point of view; hence both behave differently based on how the application is coded. Both are sensitive to different code; even generational differences within AMD hardware show sensitivity, e.g. Fiji vs. the R3xx or R2xx series.

Back to async compute: async compute is used to fill in gaps where the shader array units are idle, hiding latency. Now, if the program is well made for parallelism and the hardware is already good at utilizing its units, which Maxwell is excellent at (that is why it can do better than GCN with half the flops), this shouldn't be an issue, because the latency between which kernel and instruction is being processed should be limited. *old adage: it's not how big you are, it's how you use it :D*

The whole IPC comparison between AMD and Intel is about this as well. If you have a chip with great IPC, it will function faster even though it can run fewer threads.

Now, with nV having 80% of the market, how do you think developers will code concurrent execution? And this is why we shouldn't really even be talking about it. There is nothing to talk about.


See the difference?

This is just the overview of it; I can get into much more detail if you like... PM me and I will.

http://wccftech.com/nvidia-devs-computegraphics-toggle-heavyweight-switch/

https://developer.nvidia.com/dx12-dos-and-donts
 
It seems like AMD cards get better over time with drivers.

I have to buy the card from Newegg, so a used 970 is not an option. So in that case the 390 is a little cheaper.

This has pretty much been the case since ATI and the Radeon 8500, if anyone remembers that. I still feel that holds true somewhat. They seem to invest in the hardware and improve the drivers over time. It's really why I've never had a problem considering them; I know exactly what I'm buying.
 

Context switching is not to be used with async; the DX12 dos and don'ts apply to best practices for DX12 programming.

Hallock stated that; they should have asked if it should be used for async. It shouldn't be, and it wasn't being used for the first queue of instructions on Maxwell, but for some reason the code forced a context switch which didn't need to be done. This is probably the driver issue nV has/had to fix. Now, if you read the dos and don'ts doc, it even states in there not to use context switching for certain things, lol.

And now PM me if you want to discuss this further, because its OT.
 
The 970 will start running out of VRAM long before the 390 does, so the 390 is objectively more future-proof.
How much it will actually matter is anybody's guess.
 
The 970 will start running out of VRAM long before the 390 does, so the 390 is objectively more future-proof.
How much it will actually matter is anybody's guess.

He pretty much says it: it doesn't matter that much, but at the same price, why wouldn't you get the extra RAM?


But it's not fun without flame wars.
 
He pretty much says it: it doesn't matter that much, but at the same price, why wouldn't you get the extra RAM?


But it's not fun without flame wars.


Yep, and it all comes down to price: if the 970 and 390 are the same price, then the 390 is the better buy. But with the 290 in the mix, neither of these two cards will beat the price of the 290.
 
I picked up a 390 to replace a 780 I sold because of the recent deals lately. Card seems good... no problem switching camps.

I doubt the extra ram will ever matter... but maybe I'll get lucky!
 
What? Tell me why it's inferior and then we can talk, lol, because until you can, you are just a mouthpiece working off really bad information, information that wasn't available to the general public at the time because it was quite new. I have been saying what an Intel engineer has been saying (if you want, I can post the link to his post too); many of the programmers at B3D have concurred with him on it as well. Come on...

Context switching should not be used in async compute. Context switching is when you need to stop one kernel, either graphics or compute, and switch to the other. Multi-engine is when you interleave compute instructions into the graphics pipeline; both kernels have to be running! This is why I stated you have the wrong information at hand, or read it wrong, lol. Async compute is when you take instructions from one compute queue and put them into another compute queue, which doesn't require context switching because you are still running the compute kernel. GCN does have better context switching, btw, since it also has fine-grained context switching. But this has nothing to do with async compute.

And this isn't optimization. Optimization is how you speed up an algorithm by changing it so that it functions faster on a given piece of hardware with the same or similar output. Async is about how the hardware reads the code and reduces latency based on what it does internally.

Oh, and coming from a hardware perspective, please leave out the number of ACE units and things of that nature, because that on its own, without a basis of what the code is doing and how the code is written, can't really be talked about. It will just put stupid numbers into a conversation that can't really be quantified without the background of the program.


Why?

The ability to more easily handle workloads from compute/graphics concurrently is an advantage. It means the game engine / rendering system is intrinsically more flexible and able to handle changes on the fly. That's the latency advantage for amd cards.

Nvidia got caught with their britches down, and people like you can't stand it. You're trying to run defense and spread uncertainty about any potential benefits, shifting the conversation to no one REALLY taking advantage of it in software. Copouts and attempts to put lipstick on the pig of Nvidia's technical choices as they relate to longevity.

And remember, THAT was the point of this thread. How will the 970 and 390 age over time? We have already learned that the way AMD can handle concurrent compute/graphics workloads is superior to Nvidia's, and EVEN IF no one uses that to any large extent today, or writes separate code paths for Nvidia due to them having a larger PC install base, going forward, what do you expect to see the market shift towards? The current, more serialized, designed-for-DX11 Nvidia style, or the GCN focus?

Performance differences are not large today, it's mostly a wash, but EVEN WITH that, you STILL get VASTLY superior performance with context switching and vr latency, so tell me, which card seems like it will have more legs?

the 970 or the 390?


Anyone suggesting the 970 is either confused or lying based on the current state of the evidence we have.


Nvidia does not win this trade, deal with it.
 
Context switching is not to be used with async; the DX12 dos and don'ts apply to best practices for DX12 programming.

Hallock stated that; they should have asked if it should be used for async. It shouldn't be, and it wasn't being used for the first queue of instructions on Maxwell, but for some reason the code forced a context switch which didn't need to be done. This is probably the driver issue nV has/had to fix. Now, if you read the dos and don'ts doc, it even states in there not to use context switching for certain things, lol.

And now PM me if you want to discuss this further, because its OT.

Dos and don'ts specifically relating to the limitations of Maxwell, where the lower power usage Nvidia fans used to bludgeon AMD users over the head with had an ACTUAL hidden cost: the removal of the additional hardware schedulers that help lower the performance hit from context switching and compute.


That advice does not apply to GCN in the same way, because GCN was not technically crippled in favor of greater DX11 performance at the expense of longer-tail DX12/VR performance.
 
Why?

The ability to more easily handle workloads from compute/graphics concurrently is an advantage. It means the game engine / rendering system is intrinsically more flexible and able to handle changes on the fly. That's the latency advantage for amd cards.

Nvidia got caught with their britches down, and people like you can't stand it. You're trying to run defense and spread uncertainty about any potential benefits, shifting the conversation to no one REALLY taking advantage of it in software. Copouts and attempts to put lipstick on the pig of Nvidia's technical choices as they relate to longevity.

And remember, THAT was the point of this thread. How will the 970 and 390 age over time? We have already learned that the way AMD can handle concurrent compute/graphics workloads is superior to Nvidia's, and EVEN IF no one uses that to any large extent today, or writes separate code paths for Nvidia due to them having a larger PC install base, going forward, what do you expect to see the market shift towards? The current, more serialized, designed-for-DX11 Nvidia style, or the GCN focus?

Performance differences are not large today, it's mostly a wash, but EVEN WITH that, you STILL get VASTLY superior performance with context switching and vr latency, so tell me, which card seems like it will have more legs?

the 970 or the 390?


Anyone suggesting the 970 is either confused or lying based on the current state of the evidence we have.


Nvidia does not win this trade, deal with it.


Not in all cases; it's dependent on the type of instructions vs. dependencies vs. the amount of latency between the instructions in the graphics pipeline or compute pipeline. If the latency is already lower than what's needed to feed an instruction into the pipeline, or based on other dependencies, doing async compute will hurt performance, not help it.

And this is what I'm thinking happened with what the Oxide dev stated about async not working: the code was telling nV hardware to do async when nV's latency was already very low, and it had an adverse effect by forcing nV cards to do context switching when they didn't need to.

Hey, if you want to call this lipstick on a pig, I will call you what you are: a person who talks about something he has no clue about. If you know about this stuff, post that you know about it, because I have, in many, many posts.

Tell me how async functions based on architecture. And why don't you deal with this: you don't know what you are talking about. You might as well put lipstick on and prance around like a pig.
 
Dos and don'ts specifically relating to the limitations of Maxwell, where the lower power usage Nvidia fans used to bludgeon AMD users over the head with had an ACTUAL hidden cost: the removal of the additional hardware schedulers that help lower the performance hit from context switching and compute.


That advice does not apply to GCN in the same way, because GCN was not technically crippled in favor of greater DX11 performance at the expense of longer-tail DX12/VR performance.


No, only one part of that document is specific to Maxwell, and it's at the end of the document; please reread. If you want, I can parallel that document with videos and documents from AMD which say the same thing. If you have problems reading, I don't even know how you can understand what we have been talking about ;)
 
Where are people getting the impression that the 970 and 390 even compete with each other? It's been well documented that the majority of 970s can OC to the point where they perform on par with 390Xs and 980s, all while being $100 cheaper than a 390 and $200 cheaper than a 390X. The 390 is pretty much just a really poor raw-performance competitor to cards like the 960 at this point.
 
No, only one part of that document is specific to Maxwell, and it's at the end of the document; please reread. If you want, I can parallel that document with videos and documents from AMD which say the same thing. If you have problems reading, I don't even know how you can understand what we have been talking about ;)

I don't have any special knowledge; I am a total layperson, just with impressions gleaned from piecing together information I've read on the web. I could be mistaken on some of the implications, but I am going off some of the commentary of people who know more than I do, like David Kanter. That said, layperson =/= completely incapable of coming to some sort of reasonable conclusion based off readings. I don't have to have gone into all the details and minutiae of healthcare policy to come away with the impression that an employer-based system is intrinsically inferior to scores of other UHC systems around the world in their results.

And two results currently reported work either better or with lower latency on AMD cards: concurrent async compute is much better supported on GCN, and lower context-switching cost seems to give GCN cards a latency edge over their Maxwell counterparts. In games where these are not large factors, or where Nvidia can lean on code paths that are not as reliant on these metrics, they can and likely will perform as well or better in general. But I don't expect that to hold over time with newer titles. I could be wrong, but I expect to see more utilization of concurrent compute/graphics workloads in more modern games/engines, because I suspect both AMD AND Nvidia cards released in 2016 will be more effective at that than Maxwell. These are merely impressions based off what I have read and my own expectations about where the market is likely to head, but they could be wrong.


In the end though, this can only be settled with actual game data and results.


In future dx12 titles and vr titles, which cards perform better? Which cards produce lower latency in vr? How much async are the upcoming dx12 games actually using? Are there tangible benefits of using more concurrent graphics/compute vs the more serialized nvidia approach assuming code was optimized for either architecture? This is why I want to see examples of the same engine and the same game with heavier use of async/compute for amd and less of that for nvidia, to see what can be achieved with each.

This will probably never happen due to time and resource reasons, but people make assumptions about a card's capabilities without a full analysis of how the software is made to take advantage of the hardware. I still want to see how a DX12 Witcher would perform on AMD/Nvidia hardware. Ditto for HairWorks and TressFX 3.0. But what we typically get, especially with a lot of the GameWorks titles, are rendering effects that have been designed with an Nvidia architecture in mind first and foremost. Initial benchmarks come out and, lo and behold, the Nvidia cards trounce the AMD cards. Must mean AMD makes sh*t cards or has sh*t drivers. I've read comments in such articles, like the Batman benchmarks with GameWorks, that made their decision break towards a 980 over, say, a Fury, or a 970 over a 390. This stuff matters, and the software stack can be used to stack the deck in one vendor's favor by cherry-picking techniques that are written to run better on their particular hardware's strengths over their competitor's.
 
Where are people getting the impression that the 970 and 390 even compete with each other? It's been well documented that the majority of 970s can OC to the point where they perform on par with 390Xs and 980s, all while being $100 cheaper than a 390 and $200 cheaper than a 390X. The 390 is pretty much just a really poor raw-performance competitor to cards like the 960 at this point.

The 960 comparison was clearly a joke, but if someone intends to heavily overclock their card then a 970 may be a better choice. But I think the % of the discrete-GPU-buying public that actually overclocks their cards is less than 10%. Does anyone have stats on that?
 
Async looks to be great. But what's creating the bulk of you guys' argument, or debate rather, is the lack of its existence yet, at least in a discernible way. I loved AMD's presentation of what it did and how it worked; very dumbed down, so to speak, in layman's terms so anyone could understand. It has some serious potential. Consoles are making good use of it, and that will only help propel them to great performance standards. Hopefully we can soon see some real use of it on PC.
 