Nvidia publicly bashing Stardock developer over an ALPHA level game

Yeah, that's the thing. I had no end of trouble trying to get my 590 to work in Surround, and it had crashing issues as well. I switched to my 7970, and I still have the occasional bug in niche software and driver crashes here and there. Ultimately the experience is better on my 7970, but the general public hears:

"blah blah, faulty 590 blah, blah blah AMD DRIVER CRASH ALL THE TIME blah blah."

I think it's just too easy to default to the "AMD DRIVERS SUXLOLOLOL" response when you know at least 2/3 of the audience would be in agreement. The worst part about that is, the majority of those who carry that sentiment actually don't have a clue what the specific sucky issues are, which only serves to perpetuate the perception that AMD "just sucks".
 
I think it's just too easy to default to the "AMD DRIVERS SUXLOLOLOL" response when you know at least 2/3 of the audience would be in agreement. The worst part about that is, the majority of those who carry that sentiment actually don't have a clue what the specific sucky issues are, which only serves to perpetuate the perception that AMD "just sucks".

Yep. It's a shame, because the graphics (and CPU) markets saw the BEST products and the BIGGEST jumps when the competition between chip vendors was strong. Now we get 10%-20% increases and virtually no price drops because it's a one-sided battle.
 
Thread bookmarked for a revisit in 6 months.

This thread will make an excellent example in the future... on how not to use pre-gold software.
 
Serious question.

Why is the i7 so much faster in DX12 than the 8370?

I thought DX12 would make games less CPU-dependent?


[Image: ashesheavy-r9390x.png]


Is something not being scheduled efficiently on the AMD CPUs? Even the i3 posts better numbers with the 390X. Something seems off; does IPC still matter more than core count? What's going on?
 
Serious question.

Why is the i7 so much faster in DX12 than the 8370?

I thought DX12 would make games less CPU-dependent?


[Image: ashesheavy-r9390x.png]


Is something not being scheduled efficiently on the AMD CPUs? Even the i3 posts better numbers with the 390X. Something seems off; does IPC still matter more than core count? What's going on?

I think that test pushes so hard it makes the CPU the bottleneck.
 
Serious question.

Why is the i7 so much faster in DX12 than the 8370?

I thought DX12 would make games less CPU-dependent?


Is something not being scheduled efficiently on the AMD CPUs? Even the i3 posts better numbers with the 390X. Something seems off; does IPC still matter more than core count? What's going on?

It's a CPU-heavy game, since it's running a lot of simulation for units and it's not an FPS / third-person shooter.
 
It depends on whether the API is truly as "low level" as everyone is wanking on about (I doubt it is). But if it is, then the game code has to be written to match the underlying hardware architecture. E.g., CUDA: if you write a CUDA program for Fermi, it's not necessarily (in fact it's unlikely to be) optimal for Kepler or Maxwell, and vice versa. You have to put in separate hooks to run slightly different code for the different architectures... if you want maximum performance.
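Very rough sketch of what that looks like in practice (the launch_path_* functions below are hypothetical stand-ins, not code from any actual game): query the device's compute capability at runtime and branch to the path tuned for that architecture.

Code:
// Host-side C++ using the CUDA runtime API. The launch_path_* functions are
// hypothetical placeholders for code paths tuned to each architecture.
#include <cuda_runtime.h>
#include <cstdio>

void launch_path_fermi()   { std::puts("Fermi path   (compute 2.x)"); }
void launch_path_kepler()  { std::puts("Kepler path  (compute 3.x)"); }
void launch_path_maxwell() { std::puts("Maxwell path (compute 5.x and up)"); }

int main()
{
    cudaDeviceProp prop{};
    // Query the compute capability of device 0.
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) return 1;

    if      (prop.major >= 5) launch_path_maxwell();
    else if (prop.major >= 3) launch_path_kepler();
    else                      launch_path_fermi();
    return 0;
}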

That's why I suggested earlier that the regression points to a fault somewhere in the software stack (the game, the drivers, or possibly DX12 itself), rather than the less likely scenario of an inherent weakness in the hardware architecture. Smaller relative gains due to the hardware architecture are a possibility, but an actual, significant regression (at least in terms of how DX12 has been presented so far)?

Serious question.

Why is the i7 so much faster in DX12 than the 8370?

I thought DX12 would make games less CPU-dependent?

Is something not being scheduled efficiently on the AMD CPUs? Even the i3 posts better numbers with the 390X. Something seems off; does IPC still matter more than core count? What's going on?

DX12 only addresses the overhead of a portion of the entire pipeline. That advantage could also be offset in practice, depending on developer decisions, resulting in essentially no net gain. Also, the threading gains were only ever advertised as spreading (somewhat more, though still not perfectly) evenly across 4 threads.

So what would be possible explanations? Physics, simulation, AI, etc. still take up significant CPU cycles, and these are not as well threaded. You still do not have enough load distribution to significantly leverage more than 4 cores (if at all), at least for this particular benchmark (maybe the AI, for example, can thread perfectly per AI player, meaning an actual 8-AI match would show more scaling on 8 cores).
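Rough back-of-the-envelope illustration of that point (the 40% serial share is a made-up number, not measured from the game): if a chunk of each frame's CPU work stays on one thread, Amdahl's law caps the benefit of extra cores pretty quickly.

Code:
// Toy Amdahl's-law calculation. Assumes 40% of per-frame CPU work (AI,
// simulation, etc.) does not thread at all; the figure is purely illustrative.
#include <cstdio>

int main()
{
    const double serial = 0.40;                  // assumed serial fraction
    const int core_counts[] = {1, 2, 4, 6, 8};
    for (int cores : core_counts) {
        double speedup = 1.0 / (serial + (1.0 - serial) / cores);
        std::printf("%d cores -> %.2fx\n", cores, speedup);
    }
    return 0;  // going from 4 to 8 cores only adds ~16% here
}

With those assumed numbers, 4 cores get you roughly 1.8x and 8 cores only about 2.1x, which would explain why the benchmark barely rewards the extra AMD cores.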
 
Its a CPU heavy game since its a lot of simulations for units and not a FPS / 3rd person shooter.

I get that CPU-heavy games normally see larger gains coming from DX11, because DX11 has to rely on a single master thread to do a bunch of the scheduling, but I thought DX12 made it easier for multiple CPU threads to talk to the GPU in tandem?


So while Intel has an IPC advantage, the i3 has a core-count deficiency that should be compensated for by a more parallel-capable API like DX12... except it seems like IPC is the true metric over everything else. Even among the i7 models, the 6700 with fewer cores does slightly better than the 5xxx series, since I guess it has a slight IPC edge being based on Skylake. But again, I thought the greater capacity to spread the workload across more CPU cores made that less of a constraint?
 
...

DX12 only addresses the overhead of a portion of the entire pipeline. Also, the threading gains were only ever advertised as spreading (somewhat more, though still not perfectly) evenly across 4 threads.

So what would be possible explanations? Physics, simulation, AI, etc. still take up significant CPU cycles, and these are not as well threaded. You still do not have enough load distribution to significantly leverage more than 4 cores (if at all), at least for this particular benchmark (maybe the AI, for example, can thread perfectly per AI player, meaning an actual 8-AI match would show more scaling on 8 cores).

OK, that might be part of the issue: workloads that are more serialized and can't be spread as easily.


More reasons to upgrade to Zen next year, along with a new GPU.
 
I get that CPU-heavy games normally see larger gains coming from DX11, because DX11 has to rely on a single master thread to do a bunch of the scheduling, but I thought DX12 made it easier for multiple CPU threads to talk to the GPU in tandem?


So while Intel has an IPC advantage, the i3 has a core-count deficiency that should be compensated for by a more parallel-capable API like DX12... except it seems like IPC is the true metric over everything else. Even among the i7 models, the 6700 with fewer cores does slightly better than the 5xxx series, since I guess it has a slight IPC edge being based on Skylake. But again, I thought the greater capacity to spread the workload across more CPU cores made that less of a constraint?

There are still limitations. And DirectX 12 has actually consistently been advertised as scaling well to 4 threads, not more.

Look at the extremely specialized 3DMark test, for instance (so there is extremely limited CPU usage for anything else a game would require) - http://www.pcper.com/reviews/Graphics-Cards/3DMark-API-Overhead-Feature-Test-Early-DX12-Performance

Scaling essentially stops at 6 cores. Even then, the gain from 4->6 cores is much lower, and that is not considering the in-between scenario of Hyper-Threading (Intel's 4c/8t CPUs).

Or Star Swarm; note the scaling from 4->6 cores (or lack thereof) - http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4
 
I get that CPU-heavy games normally see larger gains coming from DX11, because DX11 has to rely on a single master thread to do a bunch of the scheduling, but I thought DX12 made it easier for multiple CPU threads to talk to the GPU in tandem?


So while Intel has an IPC advantage, the i3 has a core-count deficiency that should be compensated for by a more parallel-capable API like DX12... except it seems like IPC is the true metric over everything else. Even among the i7 models, the 6700 with fewer cores does slightly better than the 5xxx series, since I guess it has a slight IPC edge being based on Skylake. But again, I thought the greater capacity to spread the workload across more CPU cores made that less of a constraint?

If you are trying to calculate 1000 things per second, even 8 cores won't handle it efficiently. ;)

So since Intel's per-core performance is so much better, each core can do a lot more per second.

So yes, DX12 is better for multithreading, as you can see from it almost doubling the FPS, but CPU core "quality" is still the limiting factor in CPU-heavy workloads.
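To put toy numbers on that (the throughputs below are invented, not benchmark data): if the engine only spreads across about 4 threads, per-core speed sets the frame time and the extra slow cores mostly sit idle.

Code:
// Illustrative comparison of a fast 4-core chip vs a slow 8-core chip when
// the workload only scales to 4 threads. All numbers are made up.
#include <algorithm>
#include <cstdio>

int main()
{
    const double work_per_frame = 100.0;   // arbitrary units of CPU work per frame
    const int    usable_threads = 4;       // assumed scaling limit of the engine

    struct Chip { const char* name; int cores; double per_core; };
    const Chip chips[] = {
        { "fast 4-core (high IPC)", 4, 10.0 },
        { "slow 8-core (low IPC)",  8,  6.0 },
    };

    for (const Chip& c : chips) {
        int used = std::min(c.cores, usable_threads);
        double frame_time = work_per_frame / (used * c.per_core);
        std::printf("%-24s -> frame time %.2f units\n", c.name, frame_time);
    }
    return 0;  // the 4-core chip wins because only 4 threads get used anyway
}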

Now, in GPU-heavy games, the weaker cores on AMD processors won't matter as much, because there will be less overhead and more of the cores will be put to work.

Also remember the pricing on these; even setting aside the lower cost of AMD motherboards, the chips are:

Intel:
5960X - $1050
6700K - $350ish?
4430 - $185

AMD:
8370 - $199
6300 - $99
 
Pricing doesn't mean much because of the lack of CPU competition; Intel just prices based on AMD's performance and goes from there.
 
Serious question.

Why is the i7 so much faster in DX12 than the 8370?

I thought DX12 would make games less CPU-dependent?


[Image: ashesheavy-r9390x.png]


Is something not being scheduled efficiently on the AMD CPUs? Even the i3 posts better numbers with the 390X. Something seems off; does IPC still matter more than core count? What's going on?

Comparing the 6700K to the 5960X, it doesn't look like it currently scales beyond 4 cores.
 
Nvidia writes bad code and drivers, blames someone else, and gets it wrong again.
See the pattern with that company?

DX12 is a welcome addition.
 
FWIW

I have had a GTX 970 for the last few months (since maybe April/May). I have had more crashes with this thing than with any other card I've owned over the last few years.

Including...
GTX 680
7970
7970 GHz
280x
290
290x

I'm not sure where this legendary driver quality for NV comes from; I had to DITCH their WHQL driver and use a beta driver since it was crashing so often.

NVIDIA dropped the ball big time the past 3 months or so with drivers.
 
[Image: DX12-Batches-1080p-4xMSAA.png]

Pretty much what we expected... a good showing for the underdogs (for now). I wonder how the older-series cards perform?
 
Oh well, I guess. I was talking about the 200 series and 7000 series, like what I use. :D
 
I haven't used Nvidia since they could not release Crysis drivers on the first day. It took them half a week to get their shit together...

Man. It must have been TERRIBLE for you!

:rolleyes:
 
I think that test pushes so hard it makes the CPU the bottleneck.

And, if it is (as has been said) in Alpha stage, you're probably dealing with a lot of debugging code too. So we could see an efficiency increase in the game down the road as the debugging code is disabled/removed.
 
I'm going to be unfair here and not even read the article, but still take Nvidia's side because Brad Wardell seems like an asshole.
 
I'm going to be unfair here and not even read the article, but still take Nvidia's side because Brad Wardell seems like an asshole.

You probably should read the article then... and see that Nvidia actually screwed up...
 
Basically they blamed Oxide in the media without bothering to check first, then realized it was their drivers after they had already opened their mouths.

I'm not sure how anyone can take Nvidia's side on this one.
 
Basically they blamed Oxide in the media without bothering to check first, then realized it was their drivers after they had already opened their mouths.

I'm not sure how anyone can take Nvidia's side on this one.

Ignorance is bliss, right?
 
Basically they blamed Oxide in the media without bothering to check first, then realized it was their drivers after they had already opened their mouths.

I'm not sure how anyone can take Nvidia's side on this one.

Because fanboys are typically idiots when it comes to actually understanding the pros and cons of technology?
 
One can hope Nvidia gets performance up, but if it turns out that the same hacks they had going in their DX11 drivers won't work in DX12, then there shall be salt.
 
Pricing doesn't mean much because of the lack of CPU competition; Intel just prices based on AMD's performance and goes from there.

I brought up pricing because it matters. When you buy the 8-core AMD for the same price as a 4-thread i3 and the performance is the same... well, that's what matters. You can't compare a $200 part to a $1,000 one and wonder why the $1,000 one is twice as fast.
 
Brilliance is often mistaken for arrogance.

Yep, his portfolio is truly amazing:

https://en.wikipedia.org/wiki/Brad_Wardell

And Oxide as a company has such an amazing record of:

- releasing a custom-made benchmark to showcase Mantle
- announcing 3 Mantle games in development; the games were in such advanced production that two of them still don't have titles, and the first one just entered the Alpha stage as a DX12 game
 
Yep, his portfolio is truly amazing:

https://en.wikipedia.org/wiki/Brad_Wardell

And Oxide as a company has such an amazing record of:

- releasing a custom-made benchmark to showcase Mantle
- announcing 3 Mantle games in development; the games were in such advanced production that two of them still don't have titles, and the first one just entered the Alpha stage as a DX12 game

Did you mean to list the games they have done here, or just the ones they haven't released since forming a new company in 2013?

As designer

Galactic Civilizations for OS/2 (1994)
Star Emperor (1996)
Entrepreneur (1997)
The Corporate Machine (2001)
Galactic Civilizations (2003)
The Political Machine (2004)
Galactic Civilizations II (2006)
Elemental: War of Magic (2010)
Galactic Civilizations III (2015)


As Executive Producer

Havok (1995)
Trials of Battle (1996)
Avarice (1996)
Stellar Frontier (1997)
Links Golf for OS/2 (1998)
Sins of a Solar Empire (2008)
Demigod (2009)
Sins of a Solar Empire: Rebellion (2011)
Elemental: Fallen Enchantress (2012)
Elemental: Fallen Enchantress - Legendary Heroes (2013)
 
Did you mean to list the games they have done here, or just the ones they haven't released since forming a new company in 2013?

This just shows that people will turn over whatever rock they want to, and when presented with something under that rock, they can hide it.

Kind of a terrible analogy.
 
Yep, his portfolio is truly amazing:

https://en.wikipedia.org/wiki/Brad_Wardell

And Oxide as a company has such an amazing record of:

- releasing a custom-made benchmark to showcase Mantle
- announcing 3 Mantle games in development; the games were in such advanced production that two of them still don't have titles, and the first one just entered the Alpha stage as a DX12 game

So all they have done is Mantle work? Nothing suspicious here, nothing at all.

 

If only you could actually read articles, your incredible bias might change. That would require you to learn, though.
 
I've said my piece on this subject already, but guys, the one thing I ask in this thread is not to fall for, or quote, the obvious trolling.

Personal attacks... Not worth reading.

FYI, Brad Wardell has been around a looooong time in one form or another.
 