Nvidia publicly bashing Stardock developer over an ALPHA level game

Nothing was disproved, that's a direct quote from the developer.

I'm adding you to the ignore list like everyone else has.

This makes me think of a zombie complaining about a ghoul eating brains.
 
That solves absolutely nothing, just makes the issue worse.

Stop trying to make PC gaming like Consoles with exclusives and features only meant for certain hardware.

I want to play game X, you'll need GPU X. You want to play game Y, you need GPU Y.
 
That solves absolutely nothing, just makes the issue worse.

Stop trying to make PC gaming like Consoles with exclusives and features only meant for certain hardware.

PC gaming is already exclusive. You need Windows and DX.

I said nothing about being exclusive. A game company could use both libraries if it so wishes.
 
Wow, I don't expect many good things to come out of this thread...

[Image: oh lawd, help this thread]
 
Tessellation in Crysis 3 is not related to GameWorks. Crysis 3 was an AMD-tied game and not even related to Nvidia.

He probably meant to say Crysis 2, which was a TWIMTBP title, and had heavy and unnecessary tessellation.
 
I like how your stance swings 180 depending on whether it suits your narrative or not.

Another thing about this particular user (who shall not be named lest ye receive an infraction) is that she/he NEVER responds to direct queries or comments. So it's almost useless pointing out hypocrisy, almost.
 
That's because it's already been proven many times. If the developer wants access to GameWorks source, they have to pay Nvidia, and the licensing agreement prohibits them from sharing the source code with AMD/Intel to optimize it.

I've posted the links a dozen times so I don't feel like doing it again; please feel free to view any of the other posts I made to get to them.

Yes, and why is that NVIDIA's fault? Why is it not AMD's fault for being incompetent at optimizing via binary builds? You know, the practice that has been around ever since the dawn of software? It's not like they're being denied access to the whole game and are optimizing blind.

NVIDIA forbidding the sharing of source code is also common practice in the industry; not sure why people are complaining. If you don't do that, you can't protect yourself against IP theft.

From a dev's standpoint, what NVIDIA does is perfectly understandable. It's too bad that AMD can't offer an alternative to NVIDIA's GameWorks. What's keeping AMD?
 
Yes, and why is that NVIDIA's fault? Why is it not AMD's fault for being incompetent at optimizing via binary builds? You know, the practice that has been around ever since the dawn of software? It's not like they're being denied access to the whole game and are optimizing blind.

NVIDIA forbidding the sharing of source code is also common practice in the industry; not sure why people are complaining. If you don't do that, you can't protect yourself against IP theft.

From a dev's standpoint, what NVIDIA does is perfectly understandable. It's too bad that AMD can't offer an alternative to NVIDIA's GameWorks. What's keeping AMD?

I keep saying this time and time again.

Nvidia's NDA for Gameworks Partners only relates to Gameworks source-code. All other code and compiled binaries are A-Okay for the Dev to share with whoever the hell they feel necessary.
 
Yes, and why is that NVIDIA's fault? Why is it not AMD's fault for being incompetent at optimizing via binary builds? You know, the practice that has been around ever since the dawn of software? It's not like they're being denied access to the whole game and are optimizing blind.

NVIDIA forbidding the sharing of source code is also common practice in the industry; not sure why people are complaining. If you don't do that, you can't protect yourself against IP theft.

Wow, AMD is incompetent? Then what does that make Nvidia when they do receive source code for the game?

"We are aware of performance and stability issues with GeForce GPUs running Tomb Raider with maximum settings. Unfortunately, NVIDIA didn’t receive final game code until this past weekend which substantially decreased stability, image quality and performance over a build we were previously provided. We are working closely with Crystal Dynamics to address and resolve all game issues as quickly as possible," read a Nvidia statement.

"Please be advised that these issues cannot be completely resolved by an NVIDIA driver. The developer will need to make code changes on their end to fix the issues on GeForce GPUs as well. As a result, we recommend you do not test Tomb Raider until all of the above issues have been resolved."

"In the meantime, we would like to apologize to GeForce users that are not able to have a great experience playing Tomb Raider, as they have come to expect with all of their favorite PC games.”

- See more at: http://www.gamewatcher.com/news/201...-raider-stability-issues

From a dev's standpoint, what NVIDIA does is perfectly understandable. It's too bad that AMD can't offer an alternative to NVIDIA's GameWorks. What's keeping AMD?

Funny, AMD gives out their Gameworks alternatives as open source:

http://developer.amd.com/tools-and-sdks/graphics-development/

http://developer.amd.com/tools-and-sdks/graphics-development/amd-radeon-sdk/
 
Mental Gymnastics:
The Thread.

We already knew that Jen-Hsun Huang was the reigning champion, but it's still amazing to see how his team is made up of aces who avoid reading even the literal statements made officially by their own team leaders.
 
I don't think that was ever in question, though? It was always an issue of GameWorks-specific features causing games to run much slower on non-Nvidia (or even non-latest-gen Nvidia) hardware.

Maybe not for you specifically (I don't think my comment was ever remotely directed at a specific individual, much less you)? But in general, yes, there are a lot of incorrect associations that occur (well, this is quite a common phenomenon). Just look at some of the commentary in this thread. As an example, several posters have associated the inability to share GameWorks source, or comments by developers about hindrances in optimizing GameWorks features, with the entire rest of the game.

Or when AC: Unity was out (this was a high-profile example). AC: Unity had performance issues (more so on AMD hardware) even without GameWorks features enabled (in general it is an extremely demanding game, no point in picking at the rest of the details here). There was rampant association that it was simply due to GameWorks, regardless of whether the features were used or not.


I'm not sure what you are trying to say here. Yes, reviewers can turn off GameWorks features and do so in reviews, especially since they are the major causes of slowdowns.
Your comment, I believe, was suggesting that a negative of GameWorks is the impact on reviews. I don't believe that is the case, or relevant.


Yes, the features are optional, and they are often a huge selling point for the game. Hell, look at Batman: they plaster all of the special effects on every trailer or demo shown. So why wouldn't gamers expect to run those settings if they have high-end systems?


-----


The main issue here isn't the ability to disable GameWorks; we already know that is an option. The issue is that non-Nvidia hardware cannot be optimized for GameWorks features because those vendors cannot access the source.

Here is how Nvidia felt gamers should act when they weren't able to get TressFx working at launch:



Yet now they charge developers to be able to optimize GameWorks for other vendors, and prevent those developers from working with the vendors to optimize.

If that isn't hypocrisy, and if you can't see how that is bad for all gamers, I don't know how else to explain it.

Also, to try to get off the issues with GameWorks and back on topic: look at how this developer is willing to work with all companies (Intel, Microsoft, Nvidia and AMD) and openly share their source code to make sure it runs as well as possible on all hardware and platforms. This is how gaming will succeed.

I'm not the one who initially brought up GameWorks.

Honestly, I don't really care about the GameWorks debate. I think this is the first extensive set of posts I've ever made on the controversy of the subject, and I already feel like I've wasted time pointlessly. To me it is a pretty simple situation from a pragmatic standpoint: if you like it, you can try to use it; if not, it doesn't affect you and it doesn't matter. PhysX GPU-accelerated effects didn't have this level of controversy, and that was more or less hard-locked to one vendor.
 
Yes, and why is that NVIDIA's fault? Why is it not AMD's fault for being incompetent at optimizing via binary builds? You know, the practice that has been around ever since the dawn of software? It's not like they're being denied access to the whole game and are optimizing blind.

NVIDIA forbidding the sharing of source code is also common practice in the industry; not sure why people are complaining. If you don't do that, you can't protect yourself against IP theft.

From a dev's standpoint, what NVIDIA does is perfectly understandable. It's too bad that AMD can't offer an alternative to NVIDIA's GameWorks. What's keeping AMD?

TressFX is something that has been used in the past, and it ships with source for any vendor.
Nvidia is trying to polarize the market by making these things exclusive. In the end, Nvidia has made this their way of binding people to their hardware: PhysX, CUDA and now GameWorks.

Makes you wonder why they need all this to make a good product work. If CUDA were worthwhile, everyone would have an Nvidia card already; the same goes for PhysX. The sad reality of things is that it is not working for Nvidia; whatever they come up with, it is never enough.
 
I keep saying this time and time again.

Nvidia's NDA for Gameworks Partners only relates to Gameworks source-code. All other code and compiled binaries are A-Okay for the Dev to share with whoever the hell they feel necessary.

Exactly. It only forbids the sharing of source code of GameWorks and nothing else.

TressFX is something that has been used in the past, and it ships with source for any vendor.
Nvidia is trying to polarize the market by making these things exclusive. In the end, Nvidia has made this their way of binding people to their hardware: PhysX, CUDA and now GameWorks.

Makes you wonder why they need all this to make a good product work. If CUDA were worthwhile, everyone would have an Nvidia card already; the same goes for PhysX. The sad reality of things is that it is not working for Nvidia; whatever they come up with, it is never enough.

It's fortunate that TressFX is now widely used and has become a go-to FX package. Oh wait, that's GameWorks.

Shipping with source has no advantages for the provider, and huge risks. It can be a huge blow to your company and your brand if the code is badly optimized and underperforms or is buggy. Meanwhile, you get nothing in return. So then why should you ship source to vendors?

It's also very fortunate that OpenCL is the de facto language for compute. No wait, that's CUDA.

Apparently many people think CUDA is much more worthwhile than OpenCL ;)

http://create.pro/blog/open-cl-vs-cuda-amd-vs-nvidia-better-application-support-gpgpugpu-acceleration-real-world-face/

https://www.google.com/trends/explore#cat=0-5&q=opencl,cuda

http://www.simplyhired.com/k-cuda-jobs.html vs http://www.simplyhired.com/search?q=openCL

And finally, it seems nearly everybody already has NVIDIA cards ;)
According to Jon Peddie Research, NVIDIA dominates the workstation graphics market: it has an 81% market share, while AMD has 18%.
http://marketrealist.com/2014/11/nvidia-quadro-key-growth-area/

[Chart: AMD's GPU market share drops again, even with the Fury release]
 
I think the main reason Nvidia has the bigger market share is inept management at team Red. As in, too little, too late..

And also the worst presentation ever.. "this card will be an overclocker's dream" - yeah, right!

I don't think GameWorks, PhysX or CUDA rocks that boat.
 
It's also very fortunate that OpenCL is the de facto language for compute. No wait, that's CUDA.
Apparently many people think CUDA is much more worthwhile than OpenCL ;)
http://create.pro/blog/open-cl-vs-cuda-amd-vs-nvidia-better-application-support-gpgpugpu-acceleration-real-world-face/
https://www.google.com/trends/explore#cat=0-5&q=opencl,cuda
http://www.simplyhired.com/k-cuda-jobs.html vs http://www.simplyhired.com/search?q=openCL
And finally, it seems nearly everybody already has NVIDIA cards ;)
http://marketrealist.com/2014/11/nvidia-quadro-key-growth-area/
[Chart: AMD's GPU market share drops again, even with the Fury release]

Yeah, somehow this is a little weird; it's a flashback to when reviews of Nvidia cards had to list "PRO: CUDA, PhysX", and if the website was good to Nvidia they would review AMD cards with a negative for not having these, thus securing their next review sample....

If the workstation market is such a success according to Nvidia (or whichever researcher they paid off), why did they dump Tesla cards, which were going for $3000, into the consumer market for a lot less?

One thing Nvidia is good at is "marketing", even though what they are saying is not something I would ever take for granted....

And especially in this case, Oxide is right.....
 
One thing Nvidia is good at is "marketing", even though what they are saying is not something I would ever take for granted....

In one year they went from 62% market share to 82% market share. They must be good at something more than just marketing.
 
In one year they went from 62% market share to 82% market share. They must be good at something more than just marketing.

That is the power of marketing. Hell, even here Nvidia already stated it was a problem with their drivers, and the developers said that drivers would evolve, yet tons of Nvidia fans attacked the developers and tried to put all of the blame on them.
 
Yeah, somehow this is a little weird; it's a flashback to when reviews of Nvidia cards had to list "PRO: CUDA, PhysX", and if the website was good to Nvidia they would review AMD cards with a negative for not having these, thus securing their next review sample....

If the workstation market is such a success according to Nvidia (or whichever researcher they paid off), why did they dump Tesla cards, which were going for $3000, into the consumer market for a lot less?

Lol which sites do you go to for your reviews?

As for your Tesla argument, well, that's due to supply and demand:
Industry watcher Jon Peddie Research reported that the industry shipped approximately 912.4 thousand workstations in Q1'15, equating to a sequential decline of 11.7% and a more modest 3.5% year-over-year decline. The professional GPU industry, primarily composed of the Nvidia/AMD duopoly, saw similarly glum results, with worldwide shipment total of approximately 1.014 million units in the first quarter

http://jonpeddie.com/publications/workstation_report

I'm sure even the Titan line moves many more units than that.

That is the power of marketing. Hell, even here Nvidia already stated it was a problem with their drivers, and the developers said that drivers would evolve, yet tons of Nvidia fans attacked the developers and tried to put all of the blame on them.

Yeah, and meanwhile I see AMD fans waving the results about like it's the truth.

The masses are usually both dumb and ignorant. Just see how the polls are going over in the States. :p
 
We are talking about GameWorks-specific features here, not the whole game. Developers pay Nvidia to be able to try to optimize the code to add in optimizations for AMD/Intel, and are not allowed to directly work with them on those optimizations. How is that not anti-competitive?

Not to mention that those code changes are then game-specific, so every developer has to pay Nvidia for every game they want to optimize for AMD/Intel hardware. That's a lot of free money to Nvidia just to make the game work well on competitor hardware.

This is so harmful to the gaming community.

That's a really interesting point, and I wonder if it's true.

Say developer X is working on game X and they notice poor performance on AMD cards. So they get the source code from Nvidia and optimize it for AMD. The code works great, everyone is happy.


Does Nvidia implement this into the GameWorks DLLs? Or does the next developer need to re-optimize the same code as developer X?
 

After reading that post I'm even less concerned about this than I was when I saw the benchmarks yesterday.

AMD is using DX12 to put massively parallel idle CPU cores to work feeding more draw calls into their massively parallel shader execution system.

Nvidia is using those same idle CPU cores under DX11 to "re-compile and replace shaders which are not fine-tuned to their architecture on a per-game basis." These are then executed without bottleneck by their GPU.

It looks to me like Nvidia is able to keep up with GCN when they put their mind to it. I'm pretty sure that they're betting that most games won't be issuing these kinds of outrageous draw call counts for at least a few years. And TODAY we have exactly ONE outrageous draw-call game, so we're a long way from "problem" territory yet for Maxwell :D

If they win the bet, nobody cares that they cut this out of Maxwell. If they lose the bet, then they work overtime optimizing their drivers for the next couple of years. It's not a dead-end for current Maxwell owners, any way you look at it, because optimized DX11 can still be competitive!

That said, I will have to keep this in mind when recommending Maxwell purchases in the future on this forum. The later in product lifetime that you buy Maxwell, the more this could potentially hurt you.
 
Edit: rofl ninja'd by TS and didn't even notice, oh well

I don't usually enjoy cross forum reposting, but this guy's posts seemed too informative to pass up. Last few paragraphs in bold especially pertinent to discussion. Wall of text + picture heavy so the background bits are in spoilers.

Well, I figured I'd create an account in order to explain what you're all seeing in the Ashes of the Singularity DX12 benchmarks. I won't divulge too much of my background information, but suffice it to say that I'm an old veteran who used to go by the handle ElMoIsEviL.

First off, nVidia is posting their true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and that's an area where, although Maxwell 2 does better than previous nVIDIA architectures, it is still inferior compared to the likes of AMD's GCN 1.1/1.2 architectures. Here's why...

Maxwell's Asynchronous Thread Warp can queue up 31 Compute tasks and 1 Graphics task. Now compare this with AMD GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 Compute tasks for a total of 64, coupled with 1 Graphics task by the Graphics Command Processor. See below:

[Diagram: asynchronous compute queues, Maxwell vs GCN ACEs]
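To make the queue description concrete, here is a minimal C++ sketch of my own (not from the quoted poster) showing how a DirectX 12 application creates the one graphics queue plus an extra compute queue discussed above. It assumes a D3D12-capable machine and omits error handling and cleanup; whether work on the two queues actually overlaps on the GPU is up to the hardware scheduler, which is exactly the Maxwell-vs-GCN difference being described here.

Code:
// Sketch: one graphics (direct) queue plus one compute queue in D3D12.
// Hardware with more independent queues behind the scenes (e.g. GCN's 8 ACEs)
// can service the compute queue concurrently with graphics instead of serializing it.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter, minimum feature level 11_0 (error handling omitted).
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The "1 Graphics" queue: accepts draw, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> graphicsQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&graphicsQueue));

    // An additional compute-only queue: work submitted here may run asynchronously
    // with the graphics queue, depending on how the GPU schedules its hardware queues.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}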


Each ACE can also apply certain Post Processing Effects without incurring much of a performance penalty. This feature is heavily used for Lighting in Ashes of the Singularity. Think of all of the simultaneous light sources firing off as each unit in the game fires a shot or the various explosions which ensue as examples.

[Chart: asynchronous shading performance (LiquidVR)]


This means that AMD's GCN 1.1/1.2 is better adapted to handling the increase in draw calls now being made by the multi-core CPU under DirectX 12.

Therefore, in game titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD GCN 1.1/1.2 should do very well, provided they do not hit a Geometry or Rasterizer Operator bottleneck before nVIDIA hits their draw call/parallelism bottleneck. The picture below highlights the draw call/parallelism superiority of GCN 1.1/1.2 over Maxwell 2:

[Chart: draw call throughput, GCN 1.1/1.2 vs Maxwell 2]


A more efficient queueing of workloads, through better thread parallelism, also enables the R9 290x to come closer to its theoretical compute figures, which just happen to be ever so slightly shy of those of the GTX 980 Ti (5.8 TFlops vs 6.1 TFlops respectively), as seen below:

[Chart: compute throughput, R9 290x vs GTX 980 Ti]


What you will notice is that Ashes of the Singularity is also quite hard on the Rasterizer Operators, highlighting a rather peculiar behavior. That behavior is that an R9 290x, with its 64 ROPs, ends up performing nearly the same as a Fury-X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):

[Graph: ROP-limited scaling, R9 290x vs Fury-X (courtesy of Beyond3D)]

As for the folks claiming a conspiracy theory: not in the least. The reason AMD's DX11 performance is so poor under Ashes of the Singularity is that AMD literally did zero optimizations for that path. AMD is clearly looking to sell Asynchronous Shading as a feature to developers because their architecture is well suited for the task. It doesn't hurt that it also costs less in terms of research and development of drivers. Asynchronous Shading allows GCN to hit near full efficiency without even requiring any driver work whatsoever.

nVIDIA, on the other hand, does much better at serial scheduling of workloads (when you consider that anything prior to Maxwell 2 is limited to serial scheduling rather than parallel scheduling). DirectX 11 is suited for serial scheduling, therefore naturally nVIDIA has an advantage under DirectX 11. In this graph, provided by AnandTech, you have the correct figures for nVIDIA's architectures (from Kepler to Maxwell 2), though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):

[Table: queue counts per architecture, via AnandTech (GCN figures understated)]


People wondering why Nvidia is doing a bit better in DX11 than DX12? That's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to be able to better draw all those objects on the screen. Therefore what we're seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12.

Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders which are not fine-tuned to their architecture on a per-game basis. nVIDIA's driver is also multi-threaded, making use of the idling CPU cores in order to recompile/replace shaders. The work nVIDIA does in software, under DX11, is the work AMD does in hardware, under DX12, with their Asynchronous Compute Engines.

But what about poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is suited towards parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck, on AMD hardware, under DX11 at low resolutions (say 1080p, and even 1600p for Fury-X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics etc). Replacing shaders or re-compiling shaders is not a solution for GCN 1.1/1.2, because AMD's Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-process workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel... in come Mantle, Vulkan and DirectX 12.


People wondering why Fury-X did so poorly in 1080p under DirectX 11 titles? That's your answer.
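For anyone wondering what "feeding the GPU in parallel" looks like from the application side, here is a rough C++ sketch of my own (not code from Oxide or the quoted poster): under D3D12 each CPU thread records its own command list, and everything is then handed to the GPU in one submission. The lists here are recorded empty just to keep the sketch short; a real engine would record thousands of draw calls per thread.

Code:
// Sketch: several CPU threads each record their own D3D12 command list,
// then all lists are submitted with a single ExecuteCommandLists call.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    const unsigned threadCount = std::thread::hardware_concurrency();
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < threadCount; ++i) {
        // One allocator and one command list per thread; allocators are not safe to share.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([&lists, i] {
            // A real engine would record SetPipelineState / DrawIndexedInstanced calls here.
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Single submission of everything that was recorded in parallel.
    // (A real application would signal and wait on an ID3D12Fence before teardown.)
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());

    return 0;
}

Under DX11 the application has no equally effective equivalent of this (deferred contexts existed but rarely helped much); the driver's own worker threads have to do the heavy lifting, which is exactly the shader-replacement, multi-threaded-driver work described above.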

A video which talks about Ashes of the Singularity in depth: https://www.youtube.com/watch?v=t9UACXikdR0

PS. Don't count on better DirectX 12 drivers from nVIDIA. DirectX 12 is closer to the metal, and it's all on the developer to make efficient use of both nVIDIA's and AMD's architectures.

The AMD logo is there because the developers first started to program their game using the AMD Mantle API. The game they wanted to build was pretty much impossible without Mantle. They built their game on AMD Mantle only to port it over to Direct X 12 afterwards (Mantle and Direct X 12 being incredibly similar).

The developer also worked closely with both nVIDIA and AMD. That's why you see nVIDIA's rather impressive DX11 performance. nVIDIA has had access to the code for over a year now (as have AMD). All of this is verifiable on the Developers blog: http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

I see what you guys are saying and I think I've figured out something new that didn't come to me before...

nVIDIA mentioned that Maxwell 2 is capable of queuing 32 Compute, or 1 Graphics and 31 Compute. Everyone agrees with this. Now the question becomes... what are these queues tied to?

I assumed that Maxwell 2 had no Asynchronous Compute Engines because nVIDIA has been quite quiet about this. I assumed that they scheduled these queues from within their SMM Units (Compute Units). This would account for the performance deficit.

Now I'm thinking that something else could account for this performance deficit. That is if Maxwell 2 has only a single Asynchronous Compute Engine tied to 32 Queues.

So I began to read up on the Kepler/Maxwell 2 CUDA documentation, and I found what I was looking for. Basically, Maxwell 2 makes use of a single ACE-like unit. nVIDIA names this unit the Grid Management Unit. The CPU's various cores send parallel streams to the Stream Queue Management. The Stream Queue Management sends streams to the Grid Management Unit (parallel to serial thus far). The Grid Management Unit can then create multiple hardware work queues (1 Graphics and 31 Compute, or 32 Compute), which are then sent in a serial fashion to the Work Distributor (one after the other, i.e. in serial, based on priority). The Work Distributor, in a parallel fashion, assigns the workloads to the various SMMs. The SMMs then assign the work to a specific array of CUDA cores. nVIDIA calls this entire process "HyperQ".
Here's the documentation: http://docs.nvidia.com/cuda/samples/6_Advanced/simpleHyperQ/doc/HyperQ.pdf

GCN 1.1/1.2, on the other hand, works in a very different manner. The CPU's various cores send parallel streams to the Asynchronous Compute Engines' various queues (up to 64). The Asynchronous Compute Engines prioritize the work and then send it off, directly, to specific Compute Units based on availability. That's it.
HyperQ is thus bottlenecked at the Grid Management Unit and then Work Distributor segments of its pipeline.

nVIDIA's HyperQ can thus be deemed "in order". HyperQ contains only a single pipeline.

AMD's Asynchronous Compute Engine implementation can thus be deemed "out of order". AMD's implementation contains eight pipelines.

AMD's implementation incurs less latency as well as being superior in terms of efficiently utilizing the available compute resources.

This explains why Maxwell 2 (GTX 980 Ti) performs so poorly under Ashes of the Singularity. Asynchronous Shading kills its performance compared to GCN 1.1/1.2 (in this case an R9 290x) which barely loses any performance.

GCN 1.1/1.2 are clearly being limited elsewhere, and the explanation I gave (regarding the Peak Rasterization Rate or Gtris/s) still stands as the most plausible cause.

I've been away from this stuff for a few years so I'm quite rusty but Direct X 12 is getting me interested once again.

http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/390_30#post_24321843
http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/390_30#post_24322004
http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/480_30#post_24325746

/thread for real now? :p
 


From the last link, I found this especially insightful:

Ashes of the Singularity also demands that a gigantic number of units be drawn onto the screen at once. Each unit is independent of the others. Each unit requires a triangle setup. What we're seeing is AMD GCN 1.1/1.2 hitting its peak rasterization rate (expressed in Gtris/s). AMD are bottlenecked, in this game, by their rather limited array of RBEs (Render Back Ends).

Both architectures are limited in Ashes performance by one shortcoming or another.

And that shortcoming is going to change from one game to another. Guess we'll have to get used to that with DX12 :D
 
Edit: rofl ninja'd by TS and didn't even notice, oh well



/thread for real now? :p

Would you stop making sense. Certain people you can't mention without getting banned will ignore your post and argue about something else!
 
Notice I mentioned the majority of implementations. I don't really follow Project Cars (or care much at all about racing games), so I will refrain from specific commentary regarding that situation. However, I will point out that CPU PhysX had been used in the past, prior to the GameWorks branding, rather transparently and without controversy. It was used in both Metro 2033 and Metro: Last Light, as an example. It seems like work from both sides (the game developer and AMD's drivers) was able to drastically improve AMD's performance relative to Nvidia (GCN against Kepler) in spite of PhysX.

Tessellation in Crysis 3 is not related to GameWorks. Crysis 3 was an AMD-tied game and not even related to Nvidia.

The GTX 980 Ti was the first wide release post-Witcher 3 (Nvidia's latest GameWorks showcase, with HairWorks). Here are two sample reviews from major tech sites that used The Witcher 3 (both of which have received accusations of Nvidia bias; granted, the bias accusation is somewhat easily thrown out when opinions differ) -

http://www.hardocp.com/article/2015...x_980_ti_video_card_gpu_review/3#.VdXIzZdSI8I
http://techreport.com/review/28356/nvidia-geforce-gtx-980-ti-graphics-card-reviewed/5

PCPer, the site currently most in vogue to rail against for being pro-Nvidia, did not even use The Witcher 3 for their 980 Ti review (they actually had 4 games with AMD ties vs 1 with Nvidia ties...).

PCGamesHardware.de is typically one of the early sources to run game-specific tests. Despite the language difference, they are still commonly used as a "first" in community discussions because of this - http://www.pcgameshardware.de/The-Witcher-3-Spiel-38488/Specials/Grafikkarten-Benchmarks-1159196/

Yes, I'm sure there probably are some sources out there that are problematic, but if individuals do not properly critique and apply critical thinking to their sources, that is the fault of the individual.

Also, I'd like to point out that one of the initial reasons for the controversy regarding GameWorks and HairWorks is when CDPR specifically referred to HairWorks regarding optimization for AMD hardware. This was never some sort of big secret or conspiracy.

Sorry, a mistype. It was Crysis 2, but did you really not know what game I was referring to? The rest of your post doesn't really dispute what I said. It just seems to make excuses for it.
 
Edit: rofl ninja'd by TS and didn't even notice, oh well

I don't usually enjoy cross forum reposting, but this guy's posts seemed too informative to pass up. Last few paragraphs in bold especially pertinent to discussion. Wall of text + picture heavy so the background bits are in spoilers.

http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/390_30#post_24321843
http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/390_30#post_24322004
http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/480_30#post_24325746

/thread for real now? :p

Thank you for posting this. I thought it was something along these lines, but not in as detailed a way as this. (I understand what he is saying, but I could not ever be a programmer to save my life. :D) I do appreciate it, but also, I am now considering buying this game just because of Nvidia's whining about it.

The game does look interesting and I do remember Stardock from my OS/2 days and Object Desktop for OS/2 Warp. It looks like a game that I could play for a few minutes in passing if I want or more if I have the time.
 
Edit: rofl ninja'd by TS and didn't even notice, oh well

I don't usually enjoy cross forum reposting, but this guy's posts seemed too informative to pass up. Last few paragraphs in bold especially pertinent to discussion. Wall of text + picture heavy so the background bits are in spoilers.

http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/390_30#post_24321843
http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/390_30#post_24322004
http://www.overclock.net/t/1569897/...gularity-dx12-benchmarks/480_30#post_24325746

/thread for real now? :p

Buy AMD, enjoy DX12 and Win 10 gaming.
:D
 
Buy AMD, enjoy DX12 and Win 10 gaming.
:D

No, it's more like:

"BUY AMD, enjoy this particular DX12 game."

We have no idea what other games will push the hardware to do, since there are no limits on the programmers. If you actually read the linked threads, you would know that:

In Ashes, the AMD hardware hits its triangle setup limit drawing all those units before it can fully exercise its massive parallel shader power advantage (roughly 2x). This is why the scores are so close.

In Ashes, the Nvidia hardware hits its parallel shader power wall in all those draw calls before it can exercise its massive triangles/second advantage.

Summary: both pieces of hardware have severe limits in their compromised designs, and in this FIRST case the AMD design is slightly better optimized. The next game to come along may use more polys than draw calls. But we won't know until that happens, now will we?
 
Edit: rofl ninja'd by TS and didn't even notice, oh well

I don't usually enjoy cross forum reposting, but this guy's posts seemed too informative to pass up. Last few paragraphs in bold especially pertinent to discussion. Wall of text + picture heavy so the background bits are in spoilers.

Good read. Everything is in line with what we experienced with Mantle games. Basically, Mantle was always part of AMD's choice and strategy to favor parallelism, and now the mainstream is finally catching up to this vision. :)

This also makes Nvidia's taking public action towards Stardock even worse in light of this information, which they surely knew, yet they tried to blame and shame the developer anyway. :mad:
 
The actual cases where games are going to use a trillion draw calls per second are going to be very, very rare. Sometimes I think some of these "games" are just made up to make a statement.

However, there should be a pretty large improvement over DX11 when you consider that it didn't have a means for bindless textures, multi-draw indirect, command lists, and all that other AZDO stuff (DX11 is old). If we look at the difference in performance between core OpenGL 4.0 and 4.5 + AZDO, we should be seeing a similar improvement between DX11 and DX12 if the same concepts are being applied by the game developer. If DX12 is "low-level" enough to give you all of that just through memory management and such, then we should still be seeing the improvement.
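As a side note, here is roughly what that AZDO-style path looks like in OpenGL 4.3+. This is my own C++ sketch, not tied to any particular engine: it assumes a current GL context with a loader such as GLAD already initialized and a VAO with vertex/index buffers already bound, and it only shows the indirect-draw plumbing.

Code:
// Sketch of AZDO-style batching with glMultiDrawElementsIndirect (OpenGL 4.3+).
#include <glad/glad.h>
#include <vector>

// Layout required by the GL spec for indirect indexed draws.
struct DrawElementsIndirectCommand {
    GLuint count;         // number of indices for this sub-draw
    GLuint instanceCount; // instances (1 = plain draw)
    GLuint firstIndex;    // offset into the bound element array
    GLint  baseVertex;    // added to each index
    GLuint baseInstance;  // can be used to look up per-draw data
};

void drawManyUnits(const std::vector<DrawElementsIndirectCommand>& cmds)
{
    // Upload all per-unit draw parameters to a GPU-side buffer once...
    GLuint indirectBuf = 0;
    glGenBuffers(1, &indirectBuf);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 cmds.size() * sizeof(DrawElementsIndirectCommand),
                 cmds.data(), GL_STATIC_DRAW);

    // ...then issue a single API call instead of cmds.size() separate draw calls.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,                        // commands come from the bound buffer
                                static_cast<GLsizei>(cmds.size()),
                                0);                             // 0 = tightly packed commands

    glDeleteBuffers(1, &indirectBuf);
}

One glMultiDrawElementsIndirect call replaces what would otherwise be one draw call per unit, which is the same batching idea that DX12 command lists (and ExecuteIndirect) are aiming at.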
 
If we are going to accept that (for nvidia at least) DX12 performance will be the same as DX11 performance, then we would have to accept as myth the idea that developers need a "low-level" API to get the most performance from the hardware. Instead it would show that that could just as well be accomplished through the driver implementations.
 
The actual cases where games are going to use a trillion draw calls per second are going to be very, very rare. Sometimes I think some of these "games" are just made up to make a statement.

However, there should be a pretty large improvement over DX11 when you consider that it didn't have a means for bindless textures, multi-draw indirect, command lists, and all that other AZDO stuff (DX11 is old). If we look at the difference in performance between core OpenGL 4.0 and 4.5 + AZDO, we should be seeing a similar improvement between DX11 and DX12 if the same concepts are being applied by the game developer. If DX12 is "low-level" enough to give you all of that just through memory management and such, then we should still be seeing the improvement.

At least developers can now do things instead of being stuck at 10K-15K batches. Also, the Nitrous engine is somewhat of a different engine, suited to their goal (an RTS with lots of units).
There will be different engines with different goals.

If we are going to accept that (for nvidia at least) DX12 performance will be the same as DX11 performance, then we would have to accept as myth the idea that developers need a "low-level" API to get the most performance from the hardware. Instead it would show that that could just as well be accomplished through the driver implementations.

Wow, you still don't get it. If you run a small number of batches, the difference wouldn't be there, because under DX11 running 10K batches would be nearly the same as running 10K batches under DX12. This point is moot, but if engines scale above 40K+ batches, there won't be anything but a slideshow under DX11.
 
Pascal might feature similar bottlenecks, so it might be a good area for AMD to stretch their legs.
 
Wow, you still don't get it. If you run a small number of batches, the difference wouldn't be there, because under DX11 running 10K batches would be nearly the same as running 10K batches under DX12.

Umm, it shouldn't be the same. You seem to think that the only room for improvement is increasing draw call count. You know there is a whole world out there beyond just draw calls.

It is true that this benchmark seems to be all about draw calls and that's it, which gives credence to Nvidia's argument that it is not a good representation of what DX12 can do.
 
Pascal might feature similar bottlenecks, so it might be a good area for AMD to stretch their legs.
Not all DX12 games will care so much about draw calls; AotS and 3DMark both use way more calls than your average console game, which, like it or not, is what games tend to start out as first. It's like this: a game with heavy tessellation or geometry favors Nvidia; a game with a heavy amount of draw calls favors AMD's arch.
 