It's official: Nvidia will not support Mantle

Khronos has announced that they are accepting proposals for a ground-up rewrite of OpenGL; work hasn't actually started on the coding. Microsoft has modified DirectX with DX12, but that's not a ground-up rewrite at all.

DX10 was their rewrite. OpenGL has been rumored for a rewrite forever, and I really don't think it will happen.
 
Perhaps standing on top of a mountain with the view distance extended out much farther, or perhaps a scene with exotic spells and effects flying around and a MASSIVE real-time battle with hundreds upon hundreds, if not thousands, of soldiers running around. That last would make perfect sense in the Dragon Age setting, and might finally be technically achievable.
They're both technically achievable today, without Mantle or other low-level APIs. Extraordinary view distance is the domain of displacement maps for terrain and texture virtualization (tiled textures), and large-scale battles with many units are the domain of instancing and texture atlases. You can render 1,000 identical soldiers with one draw call, and thousands of soldiers with tens or hundreds of geometry and texture variations with a small handful of them (if approached cleverly). The key is, simply, designing for performance.
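To make the one-draw-call point concrete, here's a minimal sketch using OpenGL instancing (the soldier mesh's VAO, the shader, and a buffer of per-instance transforms are all assumed to be set up already; the names are mine, not from any engine):

```cpp
// One draw call submits 1,000 copies of the same soldier mesh; the
// vertex shader places each copy by using gl_InstanceID to index
// into a buffer of per-instance transforms.
#include <GL/glew.h>

void drawSoldierArmy(GLuint soldierVao, GLsizei indexCount) {
    glBindVertexArray(soldierVao);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount,
                            GL_UNSIGNED_INT, nullptr, /*instances=*/1000);
}
```

Add a texture atlas plus a per-instance material index and the same trick covers the "tens or hundreds of variations" case with a handful of calls.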

One thing that Mantle could assist with is implementing fast virtualized geometry, but I think geometry shaders actually end up providing all the capability you would need for that at reasonable performance levels. The approach could probably be faster (and neater) in Mantle, but I don't think it's in any way intractable in D3D/OGL.
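For a rough idea of what I mean, here's a hypothetical geometry shader (GLSL in a C++ source string, not taken from any real renderer) that amplifies every incoming triangle on the GPU. An actual virtualized-geometry scheme would be far more elaborate, but the mechanism is the same:

```cpp
// A geometry shader receives whole primitives and can emit new ones,
// so extra detail geometry is generated on the GPU with no additional
// draw calls or CPU-side work.
const char* kAmplifyGS = R"glsl(
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

void main() {
    // Pass the original triangle through unchanged.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();

    // Emit one extra, shrunken copy per input triangle -- a stand-in
    // for real subdivision/displacement logic.
    vec4 center = (gl_in[0].gl_Position + gl_in[1].gl_Position +
                   gl_in[2].gl_Position) / 3.0;
    for (int i = 0; i < 3; ++i) {
        gl_Position = mix(center, gl_in[i].gl_Position, 0.5);
        EmitVertex();
    }
    EndPrimitive();
}
)glsl";
```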
 
Took both the entrenched standards and uprooted them? Hardly. The entrenched standard (DirectX) is STILL the entrenched standard :rolleyes:

And Microsoft is not doing a ground-up rewrite of DirectX (in fact, Microsoft has had branches of DirectX with low-level hardware access for about 12 years; just take a look at the Xbox SDK some time).

Also, again, only an 8% performance improvement in a CPU-limited scenario (which is where Mantle is supposed to help THE MOST)? Yawn.

You can deny it all you want to. Doesn't change what is actually happening.
 
You can deny it all you want to. Doesn't change what is actually happening.
Deny what? I'm stating exactly what the posted graph shows, which is that when a CPU-limited scenario was introduced to both APIs, there was only an 8% performance difference between Mantle on the AMD card and DX11 on the Nvidia card.

DX12 is not a total rewrite of the API, exactly as I stated.

DirectX is still the entrenched standard, exactly as I stated.
 
Unknown-one

Why don't you do this for me: use whatever driver you want and play BF4 multiplayer at max settings, 4xAA, 1080p on a 64-player map and record your results. Then rerun the test with your CPU at 2GHz and record your results. I'll do the same with my 290 and we can compare results.

Up for the challenge???

No, he's not. It would be too much hard data to continue talking nonsense. But it was fun reading the expected refusal; the only question was what he would make up.
 
If people only knew what current PC GPUs are capable of, there would be no need to upgrade GPUs every 2 years... ohhhh crap, agents!!! *runs*
 
No, he's not.
Incorrect, I'm up for it, but not with his dissimilar hardware configuration.

I already pointed out this problem in a previous post. He has a faster CPU from a newer generation with support for twice as many threads. That will skew results when we're talking about testing CPU-limited scenarios.

It would be too much hard data to continue talking nonsense. But it was fun reading the expected refusal
What nonsense? I didn't post the graph showing minimal performance gain in a CPU-limited scenario under Mantle, I merely pointed out the inconvenient truth that the numbers represent :rolleyes:

And what refusal? I didn't refuse, I questioned his testing methodology. Learn to read please.

the only question was what he would make up.
And what, exactly, did I make up? Again, I just read the graph that trick0502 posted (and so far, nobody has actually refuted my conclusion).

When both APIs were CPU-bottlenecked in the above test, there was only an 8% performance difference between the two. It's there, plain as day.

No CPU bottleneck: The faster GPU performs faster (this is expected and obvious)
Introduce CPU bottleneck: The more efficient API performs faster (this is also expected and obvious)

But there's only an 8% difference in the latter situation. Where's the "wow" factor that this low-overhead API is supposed to offer?
 
Incorrect, I'm up for it, but not with his dissimilar hardware configuration.

I already pointed out this problem in a previous post. He has a faster CPU from a newer generation with support for twice as many threads. That will skew results when we're talking about testing CPU-limited scenarios.


What nonsense? I didn't post the graph showing minimal performance gain in a CPU-limited scenario under Mantle, I merely pointed out the inconvenient truth that the numbers represent :rolleyes:

And what refusal? I didn't refuse, I questioned his testing methodology. Learn to read please.


And what, exactly, did I make up? Again, I just read the graph that trick0502 posted (and so far, nobody has actually refuted my conclusion).

When both APIs were CPU-bottlenecked in the above test, there was only an 8% performance difference between the two. It's there, plain as day.

No CPU bottleneck: The faster GPU performs faster (this is expected and obvious)
Introduce CPU bottleneck: The more efficient API performs faster (this is also expected and obvious)

But there's only an 8% difference in the latter situation. Where's the "wow" factor that this low-overhead API is supposed to offer?

The part you can't seem to / don't want to understand is that the Nvidia GPU's performance dropped 50% and the AMD GPU's 10%. And if you look at the graph again, the Nvidia GPU was 40% faster than the AMD GPU when using the $1000 CPU, but then using the $50 CPU the Nvidia GPU was 10% slower than the AMD GPU.
 
Incorrect, I'm up for it, but not with his dissimilar hardware configuration.

I already pointed out this problem in a previous post. He has a faster CPU from a newer generation with support for twice as many threads. That will skew results when we're talking about testing CPU-limited scenarios.


What nonsense? I didn't post the graph showing minimal performance gain in a CPU-limited scenario under Mantle, I merely pointed out the inconvenient truth that the numbers represent :rolleyes:

And what refusal? I didn't refuse, I questioned his testing methodology. Learn to read please.


And what, exactly, did I make up? Again, I just read the graph that trick0502 posted (and so far, nobody has actually refuted my conclusion).

When both APIs were CPU-bottlenecked in the above test, there was only an 8% performance difference between the two. It's there, plain as day.

No CPU bottleneck: The faster GPU performs faster (this is expected and obvious)
Introduce CPU bottleneck: The more efficient API performs faster (this is also expected and obvious)

But there's only an 8% difference in the latter situation. Where's the "wow" factor that this low-overhead API is supposed to offer?

You're twisting conclusions from those graphs and avoiding tests that can simply be made comparable, but you are not interested in doing that. No matter what, you'll always come up with an excuse.
 
It's official: the Mantle vs DX11/12/OGL argument will continue in this thread until 2017

Among the same 5 posters
 
The part you can't seem to / don't want to understand is that the Nvidia GPU's performance dropped 50% and the AMD GPU's 10%.
What you don't seem to understand is that this fact is irrelevant.

The Nvidia GPU being tested was vastly superior to the AMD GPU being tested. Of course it offered better performance when no CPU limit was present. What, exactly, is your point?

I already stated exactly this fact; it's obvious. Here's a direct quote from my previous post:
"No CPU bottleneck: The faster GPU performs faster (this is expected and obvious)
Introduce CPU bottleneck: The more efficient API performs faster (this is also expected and obvious)"

And if you look at the graph again, the Nvidia GPU was 40% faster than the AMD GPU when using the $1000 CPU
And again, what's your point? The Nvidia GPU in question was only a GTX 750 Ti. It was severely GPU-limited when paired with such an expensive CPU.

but then using the $50 CPU the Nvidia GPU was 10% slower than the AMD GPU.
Only 8% slower.

And that just backs up my point, once again. When CPU-limited (which is where Mantle is supposed to help most, right?), Mantle was only 8% faster. What's the big deal? Where's the wow-factor?
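Side note on the 8%-vs-10% quibble: both readings can come from the same two bars depending on which card you take as the base. With made-up frame rates, purely for illustration:

```latex
% Hypothetical numbers: AMD at 55 fps, Nvidia at 50 fps.
\[
\frac{55-50}{50} = 10\%\ \text{(AMD faster, Nvidia as the base)}
\qquad
\frac{55-50}{55} \approx 9\%\ \text{(Nvidia slower, AMD as the base)}
\]
```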

Total rewrite of what API, DX11 or Mantle?

So you have developer access to DX12 now?
I simply said DX12 isn't a total rewrite [of DX11]. Most of it is going to look near-identical to DX11.

I don't need developer access to DX12 to know that this is an almost total certainty, it's extremely obvious. The Xbox version of DirectX already has low-level access, and it also looks EXTREMELY similar to DX11.
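For reference, this is the DX11 "shape" I'm talking about: bind state on the device context, then draw. A trivial D3D11 snippet (error handling omitted, names mine):

```cpp
#include <d3d11.h>

// Classic DX11 flow: set input-assembler state on the immediate
// context, then issue the draw.
void drawMesh(ID3D11DeviceContext* ctx, ID3D11Buffer* vb,
              UINT stride, UINT vertexCount) {
    UINT offset = 0;
    ctx->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->Draw(vertexCount, 0);
}
```

Low-level extensions change where and how that work gets submitted, not the basic look of the calls.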

You're twisting conclusions from those graphs and avoiding tests that can simply be made comparable, but you are not interested in doing that. No matter what, you'll always come up with an excuse.
What was twisted, exactly?

The posted graph shows only an 8% performance gap between the two APIs in a CPU-limited scenario. I'm not twisting anything, that's EXACTLY what the graph shows.

And I'm not avoiding any tests. I'll gladly run benchmarks against a COMPARABLE system. As I said, the system I was asked to test against was, in no way, directly comparable.
 
Only 8% slower.

And that just backs up my point, once again. When CPU-limited (which is where Mantle is supposed to help most, right?), Mantle was only 8% faster. What's the big deal? Where's the wow-factor?

Yes, the GPU that was 40% faster became 8% slower when the $1000 CPU was removed.
 
Yes, the GPU that was 40% faster became 8% slower when the $1000 CPU was removed.
And I ask again, what's your point? I already explained what's going on there, and why it's totally irrelevant.

Seriously, of COURSE the faster GPU performs faster when there's no CPU limit... that doesn't really prove anything except that the GTX 750 Ti has more raw horsepower than the 260X. I think we all already knew that.

The important factor is what happens when both GPUs are hamstrung by a low-end CPU. The performance of BOTH solutions is now limited by the CPU (meaning the GPU is mostly irrelevant as long as it's "fast enough" to keep up with the bottleneck). All else being equal, the performance difference when CPU-limited should come down to the CPU overhead exhibited by the API that's being used.
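Here's a toy model of that argument with made-up per-frame costs (nothing measured, just illustration): each frame costs some CPU submission time and some GPU render time, and the larger of the two sets the frame rate.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const double gpu_ms        = 8.0;   // hypothetical GPU work per frame
    const double cpu_dx11_ms   = 20.0;  // hypothetical DX11 submission cost
    const double cpu_mantle_ms = 18.5;  // hypothetical lower-overhead cost

    // Once the CPU is the bottleneck, frame time ~= CPU time, so the
    // API's overhead is the entire difference; the GPU barely matters.
    const double fps_dx11   = 1000.0 / std::max(cpu_dx11_ms, gpu_ms);
    const double fps_mantle = 1000.0 / std::max(cpu_mantle_ms, gpu_ms);
    std::printf("DX11: %.1f fps, Mantle: %.1f fps (%.0f%% apart)\n",
                fps_dx11, fps_mantle,
                100.0 * (fps_mantle - fps_dx11) / fps_dx11);
    return 0;
}
```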

So, once again, only an 8% difference between Mantle and DX11 when CPU-limited. What am I supposed to be impressed by here? The ONE scenario where Mantle really matters on that graph, and the performance improvement is minuscule.
 
And I ask again, what's your point? I already explained what's going on there, and why it's totally irrelevant.

Seriously, of COURSE the faster GPU performs faster when there's no CPU limit... that doesn't really prove anything except that the GTX 750 Ti has more raw horsepower than the 260X. I think we all already knew that.

The important factor is what happens when both GPUs are hamstrung by a low-end CPU. The performance of BOTH solutions is now limited by the CPU (meaning the GPU is mostly irrelevant as long as it's "fast enough" to keep up with the bottleneck). All else being equal, the performance difference when CPU-limited should come down to the CPU overhead exhibited by the API that's being used.

So, once again, only an 8% difference between Mantle and DX11 when CPU-limited. What am I supposed to be impressed by here? The ONE scenario where Mantle really matters on that graph, and the performance improvement is minuscule.


My head hurts. Save $950 on the CPU, and spend maybe half as much on your GPU, and lose 8% performance. Most people will gladly do that.
 
My head hurts. Save $950 on the CPU, and spend maybe half as much on your GPU, and lose 8% performance. Most people will gladly do that.

***fine print

in 3 games that you may or may not (PROBABLY WON'T) like; last I checked, Mantle covered about .0001% of all games. Look, Mantle is great and all, but this argument is absurd. Once Mantle covers 100% of games (i.e. never) you might have a compelling argument, but those Mantle games will vary in performance and most games will not have Mantle.

Also, save $950 on the CPU? Who on earth buys an x960X? Just get a 4670K/4790K like most normal people and enjoy the best performance, or within 1% of it, in all games for only 2 or 3 benjamins (i5 or i7). Not like CPUs cost that much... I mean, Pentiums in the late 90s cost upwards of $700 at release. In the grand scheme of things, CPU prices have gone down over time, with more choices, not up. Unless you're just nuts over the HEDT platform, which is overkill for just gaming. Spending $1k on a gaming CPU, unless you're using your system as a workstation (the intended use of the HEDT platform), is just stupid, unless you have tons of spare cash and just don't care. Or if you want quad SLI or something like that. And let's face facts: if you're considering quad SLI, then you have tons of spare cash and don't care.

Time to dipset again, I'm acting like UO now.
 
...
The Nvidia GPU being tested was vastly superior to the AMD GPU being tested. Of course it offered better performance when no CPU limit was present. What, exactly, is your point?
...
And that just backs up my point, once again. When CPU-limited (which is where Mantle is supposed to help most, right?), Mantle was only 8% faster. What's the big deal? Where's the wow-factor?
...

Both of your paragraphs are referring to the same test, so no "out of context" excuse here. Care to comment?
 
There is no wow factor, because as Graham Sellers (AMD) has said, you will hit hardware limits before a modern API bottlenecks.

But Mantle makes for great press.

Yup, no idea why a ground-up redesign is in the pipeline - absolutely pointless.

nobody actually wants this and it's really being done for literally no reason.
 
Yup, no idea why a ground-up redesign is in the pipeline - absolutely pointless.

nobody actually wants this and it's really being done for literally no reason.

You seem to have difficulty grasping that it's only AMD's low-level graphics API that the argument applies to. Microsoft and Khronos (OpenGL), which coincidentally Nvidia also supports, are ZOMG awesome. This is also a case where coming up with something 2 years later is a plus. Not that I can remember there ever being any other time that logic applied. Imagine if, after the Wright Bros. flew, someone came along and said that in 2 years they would have powered flight as well. Then imagine that there would be people who would say: that's great, I'd rather take the train or a boat and wait, because I'm sure theirs will be awesome and I really don't have to fly yet. :cool:
 
Sad thing is, our APIs used to be much closer to the metal than they are now.
So in reality, Mantle is more of a return to what our APIs once were, and not quite as revolutionary as we make it out to be.
The NT kernel's application layer and reparse points kind of screwed us out of a lot of potential performance from our GPUs.
But hey, I'm just thrilled with Mantle. I have an Nvidia card right now (but I'm usually an AMD guy) and can't use it, but it's pushing MS and Khronos to improve their APIs.
 
I simply said DX12 isn't a total rewrite [of DX11]. Most of it is going to look near-identical to DX11.

I don't need developer access to DX12 to know that this is an almost total certainty, it's extremely obvious. The Xbox version of DirectX already has low-level access, and it also looks EXTREMELY similar to DX11.

You know that we already pointed out that it looks more like Mantle: http://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/

And Mantle uses almost the same shader programming method as DX11.
Still waiting for you to show some sources on this DX12 issue....

It's official: the Mantle vs DX11/12/OGL argument will continue in this thread until 2017

Among the same 5 posters


Congrats, this post is as useless as the other ones you made in this thread. :)
 
There is no wow factor, because as Graham Sellers (AMD) has said, you will hit hardware limits before a modern API bottlenecks.

But Mantle makes for great press.
Bingo. My point exactly.

Still waiting for you to show some sources on this DX12 issue....
Still waiting on a reason you need a source for something that's basically a given.

Why on earth would DX12 need to be a total rewrite? How does a total rewrite of the API make ANY sense whatsoever? Why would they choose to reinvent the wheel?

Microsoft already has a version of DX11 with low-level extensions on the Xbox One... why throw out all that work and do a ground-up rewrite? That would also effectively mean fully re-implementing DX11 from the ground up to support legacy applications. Why the HELL would they do that instead of just keeping existing code around? Explain how that even enters into your thought process.

Both of your paragraphs are referring to the same test, so no "out of context" excuse here. Care to comment?
Comment on what? You didn't actually point anything out.

The two sentences you left in the quote-block are not contradictory in any way. The first sentence was discussing a non-CPU-limited scenario, the second sentence was discussing a CPU-limited scenario. This was made very clear.
 
Last edited:
Why on earth would DX12 need to be a total rewrite? How does a total rewrite of the API make ANY sense whatsoever? Why would they choose to reinvent the wheel?

Microsoft already has a version of DX11 with low-level extensions on the Xbox One... why throw out all that work and do a ground-up rewrite? That would also effectively mean fully re-implementing DX11 from the ground up to support legacy applications. Why the HELL would they do that instead of just keeping existing code around? Explain how that even enters into your thought process.

You are answering a question with a question. Where is your proof? Do you have access to a DX12 development kit?
 
You are answering a question with a question. Where is your proof? Do you have access to a DX12 development kit?
I already told you, I don't need access to the DX12 dev kit to know it's not a total rewrite. It's common sense :rolleyes:

There's ZERO reason for them to throw out all the code they've built for DX11; it's still needed for legacy purposes and for almost all of the driver-interface layer. They won't rewrite it for the sake of rewriting it.

Do you have any proof that they're rewriting portions of the API that will have identical functionality to DX11? No, you don't, because they'd be stupid to waste time doing that.

Proposing that DX12 will be a full ground-up rewrite is, frankly, preposterous. It's like all the people that cry about how Microsoft should rewrite Windows from the ground up every few years. It WON'T happen, because mountains of code are reusable and have already been bug-checked and security-hardened. There's NO reason to throw it out.
 
New software doesn't have to be completely rewritten from the ground up in order to be something completely new or much better. For example, Unreal Engine 4 still uses code written in 1998.
 
There's ZERO reason for them to throw out all the code they've built for DX11; it's still needed for legacy purposes and for almost all of the driver-interface layer. They won't rewrite it for the sake of rewriting it.

Do you have any proof that they're rewriting portions of the API that will have identical functionality to DX11? No, you don't, because they'd be stupid to waste time doing that.

Are the very large swaths of stuff that changed not good enough to count in whatever point you're trying to make?
 
Are the very large swaths of stuff that changed not good enough to count in whatever point you're trying to make?
The point I'm making is that DX12 won't be a total rewrite of the API.

They're certain to reuse large chunks of the code they've already written for DX11... it doesn't matter how much new stuff they add; keeping any amount of old code means it's not a total rewrite. Simple as that.
 
I already told you, I don't need access to the DX12 dev kit to know it's not a total rewrite. It's common sense :rolleyes:

There's ZERO reason for them to throw out all the code they've built for DX11; it's still needed for legacy purposes and for almost all of the driver-interface layer. They won't rewrite it for the sake of rewriting it.

Do you have any proof that they're rewriting portions of the API that will have identical functionality to DX11? No, you don't, because they'd be stupid to waste time doing that.

Proposing that DX12 will be a full ground-up rewrite is, frankly, preposterous. It's like all the people that cry about how Microsoft should rewrite Windows from the ground up every few years. It WON'T happen, because mountains of code are reusable and have already been bug-checked and security-hardened. There's NO reason to throw it out.

It's laughable that you keep insisting it is not a total rewrite. Why don't you find people who can explain low-level APIs to you on your level, then realize DX11 is a high-level API?

There have been two sources saying that DX12 looks more like Mantle: SemiAccurate and JA's (DICE) Twitter account.
 
It's laughable that you keep insisting it is not a total rewrite.
Not laughable at all, because it's not a total rewrite. End of story.

Why don't you find people who can explain low-level APIs to you on your level, then realize DX11 is a high-level API?
Not sure why anything about low-level APIs needs explanation in this instance. Both Mantle and DX12 still use abstraction and, as such, are not true bare-metal APIs. They'll both have some low-level access to the hardware, but aren't even CLOSE to running on bare metal.

DX12 is going to be largely based on DX11, with additional low-level extensions. This is straightforward common sense.

There have been two sources saying that it looks more like Mantle: SemiAccurate and JA's (DICE) Twitter account.
Neither of which claims it's a total rewrite either. They don't dismiss the notion of code being shared from previous versions of the API.

What's more likely, that Microsoft will throw out ALL of the code that they've written and re-implement their entire 3D subsystem (including a full re-implementation of legacy DirectX to support legacy applications on top of the new API)... or that they'll simply upgrade DX11 with new features and low-level access, up the version number, and call it good?
 
Legacy doesn't need to be supported; it has its own DLLs.

Each version of DX has its own runtime libraries; they are not included in the libraries of subsequent versions.

Even subversions have separate libraries.

Mantle and OGL can coexist with DX for just this reason: neither relies on the DX libraries in order to function.

So yes, they can do a complete rewrite without losing a single iota of legacy compatibility.
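This is easy to see for yourself. A quick sketch (Windows-only, obviously) that loads two DirectX runtimes side by side in one process:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Separate runtime DLLs: neither depends on nor replaces the other.
    HMODULE d3d9  = LoadLibraryA("d3d9.dll");   // DX9 runtime
    HMODULE d3d11 = LoadLibraryA("d3d11.dll");  // DX11 runtime
    std::printf("d3d9:  %p\nd3d11: %p\n",
                static_cast<void*>(d3d9), static_cast<void*>(d3d11));
    if (d3d9)  FreeLibrary(d3d9);
    if (d3d11) FreeLibrary(d3d11);
    return 0;
}
```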
 
So yes, they can do a complete rewrite without losing a single iota of legacy compatibility.
I never said you couldn't; I said it was HIGHLY impractical to do so.

It also makes no sense to do so from a time or effort standpoint. Why reinvent the wheel when low-level extensions can just be added to their existing and mature API?

They've ALREADY demonstrated exactly this modus operandi with the Xbox SDK, which includes a branch of DirectX with low-level extensions added (rather than a total rewrite of DirectX).
 
This whole debate is pointless. We don't know — beyond any doubt — that D3D12 is or isn't a ground-up, from-scratch API. Yet the involved parties seem to parade as though they can speak with absolute definitiveness on the subject.

A lot of that seems to go on in AMD-related threads, I find. The "root for the underdog" mentality and its antithesis seem to instill that quality in people.
 
The point I'm making is that DX12 won't be a total rewrite of the API.

They're certain to reuse large chunks of the code they've already written for DX11... it doesn't matter how much new stuff they add; keeping any amount of old code means it's not a total rewrite. Simple as that.

Looks like a pretty fucking large reimagining to me.

http://channel9.msdn.com/Events/Build/2014/3-564

Maybe you know better than Microsoft though?
 