It's official, Nvidia will not support Mantle

AMD spent millions getting Mantle into BF4 and it was all a waste.

[Attached image: benchmark graph]
 
Mantle is the future. Nvidia will accept Mantle once they are outclassed in future AAA titles. Performance will only increase from here on out with Mantle. It's going to surpass anything DX can do. No matter how much driver tweaking happens under DirectX, it's not going to beat a souped-up, finalized, optimized Mantle setup. Everyone will know then and there that it's time to accept defeat. Integrate or become obsolete.
 
Mantle is the future. Nvidia will accept Mantle once they are outclassed in future AAA titles. Performance will only increase from here on out with Mantle. It's going to surpass anything DX can do. No matter how much driver tweaking happens under DirectX, it's not going to beat a souped-up, finalized, optimized Mantle setup. Everyone will know then and there that it's time to accept defeat. Integrate or become obsolete.

3dfx and Glide disagree with you.
 
1. I'm not an NV shill. I like both companies and don't have a favorite or any video-card-company evangelical agenda. I just call them how I see them.
2. I've seen you call people shills, so I guess NV must be doing something right and that makes you an AMD troll.

Pointing out what an industry insider said in an interview is hardly being a shill, so I have to assume you are just trolling.
 
AMD spent millions getting Mantle into BF4 and it was all a waste.

[Attached image: benchmark graph]


According to this graph, Mantle still has a higher performance increase than just drivers alone. Oh sure, a GTX-780ti can beat a Radeon R9 290X, but with its price premium, it damn well better.

Nice try.
 
Mantle is the future. Nvidia will accept Mantle once they are outclassed in future AAA titles. Performance will only increase from here on out with Mantle. It's going to surpass anything DX can do. No matter how much driver tweaking happens under DirectX, it's not going to beat a souped-up, finalized, optimized Mantle setup. Everyone will know then and there that it's time to accept defeat. Integrate or become obsolete.

Actually, it does matter how much better DX becomes. Even if DX12 isn't as good in pure performance, it will likely be good enough. DX already has the market share going for it: it has the developer support, it has a large user install base, and it has support from AMD, Nvidia, and Qualcomm (for mobile). Add to that the fact that CPU-limited games really aren't an issue, and I don't see how anyone who's looking at this from an objective perspective, and who knows what they're talking about, can claim that Mantle is the future. It has a whole heck of a lot to overcome before that is even a possibility.
 
Actually, it does matter how much better DX becomes. Even if DX12 isn't as good in pure performance, it will likely be good enough. DX already has the market share going for it: it has the developer support, it has a large user install base, and it has support from AMD, Nvidia, and Qualcomm (for mobile). Add to that the fact that CPU-limited games really aren't an issue, and I don't see how anyone who's looking at this from an objective perspective, and who knows what they're talking about, can claim that Mantle is the future. It has a whole heck of a lot to overcome before that is even a possibility.

Heh, depends how MS decides to release DX12. If it's only available to Windows 9+ users, it might not have that large of an install base for quite some time.
 
Heh, depends how MS decides to release DX12. If it's only available to Windows 9+ users, it might not have that large of an install base for quite some time.

It'll have more than Mantle in a very short time.
 
Heh, depends how MS decides to release DX12. If it's only available to Windows 9+ users, it might not have that large of an install base for quite some time.

AMD has maybe 30% of the market. Once you factor in that Mantle only supports GCN and Windows, that number gets even smaller.

Bottom line: once AMD's checks start bouncing, Mantle will go the way of Glide and AMD will go the way of 3dfx.

Mantle is nothing more than marketing fluff to try and push their APUs. Benchmarks clearly show that it's irrelevant to high-end gaming.

Maybe VIA will buy AMD.
 
Nvidia talks the talk, but are they walking the walk? If low-level drivers didn't benefit them, why would they support DX12?
Who said Nvidia wouldn't benefit from low-level drivers?

They managed to squeeze more performance out of DX11 by implementing threading and shader caching, but that doesn't mean they can't reduce overhead even further.

So where is this magical DX11 driver that will beat the pants off Mantle on Nvidia hardware?
That driver was released months ago.

According to this graph, Mantle still has a higher performance increase than just drivers alone.
That could simply mean that AMD's DX11 implementation is sub-par, inflating the size of the performance increase.

As far as I'm aware, AMD still hasn't fully implemented threading for DX11 (missing support for command lists), and they don't do anything like Nvidia's disk-cache for dynamically compiled shaders. That would pretty obviously lead to AMD's DX11 implementation under-performing.
 
So they won't give it to Intel, and NVIDIA does not want it. Mantle's popularity is dropping faster than AMD's stock. Mantle was just a marketing gimmick anyway, so nothing of value is lost.

Lol.

They said they won't give it to Intel EARLY.

When it comes out of beta, then Intel can have it.

Also lol at Mantle being a gimmick.

Let's see: it's better than OpenGL, it's better than DirectX, and it's not OS-bound. And it's easier to develop for than either OpenGL or DirectX.
 
Also lol at Mantle being a gimmick.

Let's see: it's better than OpenGL, it's better than DirectX, and it's not OS-bound. And it's easier to develop for than either OpenGL or DirectX.
Define "better"

OpenGL works on all graphics cards and also isn't OS-bound, that makes it "better" for a lot of scenarios.
OpenGL is also mature and available on all of those platforms and graphics cards RIGHT NOW. Mantle is still GCN-only and Windows-only. That fact could also make OpenGL the "better" choice.

Performance isn't everything. As it stands, right at this moment, Mantle works on fewer systems than even DirectX.
 
Lol.

They said they won't give it to Intel EARLY.

When it comes out of beta, then Intel can have it.

Open would mean everyone could work on it and add to it from the start. You really don't believe their marketing BS, do you?

Also lol at Mantle being a gimmick.

Let's see: it's better than OpenGL, it's better than DirectX, and it's not OS-bound. And it's easier to develop for than either OpenGL or DirectX.

So far it has not been developed for; it's only been ported to from DirectX. That's not easier, it's a pain-in-the-ass extra step that no developer will do for free. NVIDIA has proven they can reduce overhead and beat Mantle using DX11.

It's hard to support AMD while they spread so much fertilizer.
 
How is that beside the point?

If Mantle is ported to other platforms, given the choice, do you really think developers will choose OGL?

I think not.
 
if mantle is ported to other platforms, given the choice, do you really think developers will choose OGL?
The problem is they won't have the choice: Mantle works only on a limited subset of AMD hardware. Do you genuinely believe Linux and OS X developers are simply going to eschew ~85-90% of the install base on those platforms by not providing an OpenGL path?

What other viable options do you believe they have on non-Windows platforms?
 
Ask any developer about OpenGL; it's a complete nightmare to develop for.

That is not a universal opinion.

I am assuming we're staying within the context of game developers here of course. Obviously CAD, etc. industries aren't even interested in anything other than OGL.

And if you know software developers you know that they like what they're familiar with. This is all in the same league of the endless C# vs. Java debates. They never go anywhere and there's never really any substance there.
 
How is that beside the point?
Already said. If OpenGL is the only option on your chosen platform, it doesn't matter how much it sucks, you'll use it.

If Mantle is ported to other platforms, given the choice, do you really think developers will choose OGL?

I think not.
You just restated the exact problem I pointed out previously. OpenGL is already available (and might be the only option depending on the hardware and OS), Mantle isn't.
 
That was the 337.50s.

Remember the single-GPU performance claims?

This is what AnandTech really found out.

The change in those drivers involved making command lists more efficient, so obviously the performance gains were targeted towards CPU-limited scenarios and ones with heavy draw calls -- you know, the exact same target as Mantle.

That AnandTech then went and made a bunch of GPU-limited benchmarks at 1440p is asinine. And AT knew better than to do that. But then again, the site is sponsored by AMD so we can't expect everything.
 
D3D12 is not the only API. There's also OpenGL.

Mantle's improvements will probably make it into a new version of OpenGL. But the need for a better API is there.

We all know that the current APIs were not designed for computer games. OpenGL, for example, is all about CAD software, and what OpenGL 4 should have been could not happen because some companies used their control in the Khronos Group to steer OpenGL into their camp; that is, "do not change anything so our old software doesn't break; otherwise we would have to spend money helping our clients port their software to the new version."

An OpenGL "Game Profile" would not be a bad idea.

D3D, well, it's only for Windows, and there are a lot of companies wanting to do OpenGL games for GNU/Linux (and SteamOS), Mac OS X, Android, and iOS (Apple can dream with their new Metal API; it only hides their shitty OGL implementation). And, why not, for the web with WebGL.

Driver optimizing is not the way, however much NVIDIA pretends it is. No, NVIDIA, no. I won't be paying you to put a pair of engineers inside my game development team to apply NVIDIA-specific patches that break compatibility with other OGL implementations, and then have to add less-tested code paths for those other drivers.

A better API with more control is what is needed, and Mantle brings that. Performance is to be improved by clever programming, not by driver-specific patches.
 
Does Nvidia control DX12? No...but Mantle did originate with, and is controlled by, their competitor.

I see three APIs that developers can use in the immediate future:
1. DX11/12
2. OpenGL
3. Mantle

Which one costs more? Which leverages a larger installed base? Which one will bring in a higher revenue stream to the developer?

As a consumer, I like competition. I have 3 rigs; 2 AMD video, one Nvidia. I am agnostic. But, the possibilities of Mantle sound enticing.
 
According to this graph, Mantle still has a higher performance increase than just drivers alone. Oh sure, a GTX-780ti can beat a Radeon R9 290X, but with its price premium, it damn well better.

Nice try.

Since when did price matter on Hard?
No, really, I should dig up all the Hard Gold winners where everyone on the forums bitched about the price.
Killer NIC, anyone?
 
Since when did price matter on Hard?
No, really, I should dig up all the Hard Gold winners where everyone on the forums bitched about the price.
Killer NIC, anyone?

You can't be serious. When does price matter ever? If you're spending more money on a video card, its performance should justify its price premium, correct? All I was doing was giving a rebuttal of PRIME1's graph. Yes, the GTX-780ti is faster than the R9. It costs a lot more too, so it SHOULD perform better.

If you can't see this, I hear that Nvidia has a GTX-Titan-Z that they'd be willing to sell you.
 
We all know that the current APIs were not designed for computer games. OpenGL, for example, is all about CAD software, and what OpenGL 4 should have been could not happen because some companies used their control in the Khronos Group to steer OpenGL into their camp; that is, "do not change anything so our old software doesn't break; otherwise we would have to spend money helping our clients port their software to the new version."

That's only a problem for GPU vendors who have to maintain compatibility with older versions of the API.

To the developer it is of no consequence.

Every framework deals with this. If you're writing your own JVM, you have to handle the AWT toolkit paths. But if you're a developer writing a GUI, you don't need to use a single line of AWT, and you won't. You can use Swing or JavaFX or whatever the cool kids are using these days to build UIs.

And frankly CAD, with its heavy reliance on physics simulations, is leading the gaming industry right now. All of the new and good stuff is going into OGL years before it makes it into D3D and that's not happening because GPU makers are bored, but because those features are demanded by people working in that industry.

An OpenGL "Game Profile" would not be a bad idea.

We already have an OpenGL ES that's supposed to be a small subset of the core and even still you have all the morons in the media running around talking about how Metal is da bomb because it's not "bloated" like "heavy" GLES.

At what point are we just going to stop and acknowledge that most crap you read in the press is written by people who are full of chit and don't know their asses from holes in the ground? If it's the case with the daily news, it sure isn't any different in this industry.
 
You can't be serious. When does price matter ever? If you're spending more money on a video card, its performance should justify its price premium, correct? All I was doing was giving a rebuttal of PRIME1's graph. Yes, the GTX-780ti is faster than the R9. It costs a lot more too, so it SHOULD perform better.

If you can't see this, I hear that Nvidia has a GTX-Titan-Z that they'd be willing to sell you.

If I had the income I would look at the Titan Z.
I would also be running an 8-core Xeon and 3x 30" monitors.
 
We all know that the current APIs were not designed for computer games. OpenGL, for example, is all about CAD software, and what OpenGL 4 should have been could not happen because some companies used their control in the Khronos Group to steer OpenGL into their camp; that is, "do not change anything so our old software doesn't break; otherwise we would have to spend money helping our clients port their software to the new version."
That's actually not true at all. The Compatibility profile was introduced specifically for this purpose, allowing the Core profile to evolve on an entirely different trajectory. What you're suggesting and what's actually true are different things.

Out of curiosity, though: what, specifically, should OpenGL 4.0 have been that it isn't, due to legacy interests? What extensions, specifically, were precluded from inclusion? Do you have a list?

An OpenGL "Game Profile" would not be a bad idea.
Covered already. OpenGL ES covers a good 90-95% of the bases, eschewing features so far only utilized by a very limited number of developers for their AAA titles, and is the most minimal strict subset useful for any kind of real-time application.

The "Game Profile", of course, is called "Core".

Apple can dream with their new Metal API; it only hides their shitty OGL implementation
The robustness of Apple's implementation is among the best in the mobile industry, so I'm not sure where you're coming from with this comment.
 
Yes, the GTX-780ti is faster than the R9. It costs a lot more too, so it SHOULD perform better.
Which circles back to Prime1's point: we didn't need Mantle to get better performance... we just needed AMD and Nvidia to better implement DX11's optional performance-boosting features.

All else remaining equal, if you swap to a faster graphics card and performance minimums improve, you weren't CPU-limited (and these low-level APIs won't do much for you).

I'd really like to see AMD optimize their DX11 implementation so they can close the performance gap like Nvidia has, but AMD has given themselves an incentive NOT to optimize for DX11 by releasing their own competing API.
 
We do need lower-level access to exploit greater efficiency from current hardware. That is not the question. Whether that needs to come in the form of a vendor-specific API is the question.
 
We do need lower-level access to exploit greater efficiency from current hardware.
"Greater" efficiency? No, its been proven time and time again that greater efficiency can be garnered without low-level APIs. Nvidia has been iterating fairly quickly on squeezing greater efficiency out of DirectX11 on current hardware.

"Maximal" efficiency? That we'll need a low-level API for. But, and here's the important question, will it be worth the extra effort?

Whether that needs to come in the form of a vendor-specific API is the question.
DX12 will solve the problem for most PC gamers. It's compatible with all DX11-class hardware, doesn't care about vendor, and will provide low-level access through an updated version of the DirectX SDK (which developers are already very familiar with).

It's locked to a specific OS, but that's nothing new for DirectX. It might also benefit from easy porting from consoles (Microsoft specifically mentioned that the Xbox One would be getting some version of DX12 as well).
 
I'd really like to see AMD optimize their DX11 implementation so they can close the performance gap like Nvidia has, but AMD has given themselves an incentive NOT to optimize for DX11 by releasing their own competing API.

You mean boost thermals and update SLI profiles?
 
By your own logic, has NVIDIA not already solved the problem with driver optimizations?
How would that be the case, by my logic?

I said they had managed to attain greater efficiency with the current API.
I said that maximal efficiency would require a low-level API.

By my own logic, Nvidia has NOT solved the problem (they have, however, gotten current-gen tech closer to an optimally performing configuration WITHOUT a low-level API).

Closer is good, but it's NOT perfect yet.

You mean boost thermals and update SLI profiles?
Not sure I understand your question. Are you claiming that Nvidia has tinkered with how Boost works in order to increase performance?

Nvidia was pretty clear about what they did to attain the posted figures. They fully implemented DX11 threading and implemented shader caching.
 
The problem is they won't have the choice: Mantle works only on a limited subset of AMD hardware. Do you genuinely believe Linux and OS X developers are simply going to eschew ~85-90% of the install base on those platforms by not providing an OpenGL path?

What other viable options do you believe they have on non-Windows platforms?

It has been said before that Mantle is easy to port to from DX12 and vice versa. That said, you seem to be saying developers can't do both, which is weird, because given that portability it is not a big task to make it work on Linux. If developers have a Mantle Windows version, what exactly changes for Linux? OpenGL can also be supported; there is no drawback in creating and maintaining separate versions.
 
Which circles back to Prime1's point: we didn't need Mantle to get better performance... we just needed AMD and Nvidia to better implement DX11's optional performance-boosting features.

All else remaining equal, if you swap to a faster graphics card and performance minimums improve, you weren't CPU-limited (and these low-level APIs won't do much for you).

I'd really like to see AMD optimize their DX11 implementation so they can close the performance gap like Nvidia has, but AMD has given themselves an incentive NOT to optimize for DX11 by releasing their own competing API.

Somehow the difference between a low-level API and a high-level API is lost on you.

When you can get a minor performance increase from optimizing DX11 drivers versus a huge performance increase with Mantle, what is better for consumers: a 2% increase or a 20% increase?
 
How is that the case, by my logic?

I said they had managed to attain greater efficiency with the current API.
I said that maximal efficiency would require a low-level API.

By my own logic, Nvidia has NOT solved the problem (they have, however, gotten current-gen tech closer to an optimally performing configuration WITHOUT a low-level API).

Closer = good.
Closer != perfect.


Not sure I understand your question. Are you claiming that Nvidia has tinkered with how Boost works in order to increase performance?

Nvidia was pretty clear about what they did to attain the posted figures. They fully implemented DX11 threading and implemented shader caching.


They boosted thermals and updated SLI profiles and called that "API efficiency improvements." Look at independent reviews of those drivers and you will see the results. They did nothing to improve API efficiency.
 
By my own logic, Nvidia has NOT solved the problem (they have, however, gotten current-gen tech closer to an optimally performing configuration WITHOUT a low-level API).
So you acknowledge that the inability to exploit maximum efficiency of available hardware is a problem?

It has been said before that Mantle is easy to port to from DX12 and vice versa. That said, you seem to be saying developers can't do both.
No. It's easily possible to leverage all graphics APIs through their respective codepaths, but what value is derived from implementing a D3D12 renderer for Linux and OS X applications? It cannot be used on those platforms.

You are aware that Direct3D is available only on Microsoft platforms, correct? Was the term "non-Microsoft platforms" in any way unclear?
 