AMD Mantle Performance Preview in Battlefield 4 @ [H]

I think AMD will pull a PhysX: claiming other vendors are free to use it, and then blaming them for not supporting it.

Nvidia's "fee requirement" to allow PhysX support was that AMD had to open up and allow access to their proprietary Catalyst drivers. No competitive company in their right mind would do that. Another one of those NV misdirects.
 
10% is the kind of per-app performance gain I would expect from a merely adequate driver release. It's not all that uncommon to see per-app improvements twice as great from 'simple' driver optimizations.
There are very few driver releases that have legitimately given across the board 10%+ performance increases. Usually those performance claims are for specific configurations/scenarios and not applicable to everyone. It looks like Mantle offers at least 10% for almost everyone and much higher gains than that in specific scenarios. I think that's a better improvement than you are giving them credit for.
 
Nvidia's "fee requirement" to allow PhysX support was that AMD had to open up and allow access to their proprietary Catalyst drivers. No competitive company in their right mind would do that. Another one of those NV misdirects.

AFAIK that's not true, but even if it was, why would AMD do differently?
 
Mantle is painted in a better light when tested in its intended situations: on weak CPUs (preferably the latest APU) with these supported GPUs. I guess this is a way of saying it gives a tiny bit of a boost on hardcore equipment...?

Don't forget multi-GPU setups... those that have got it working are seeing very good results. Scaling appears very high, and reducing the CPU load imposed by multi-GPU systems is allowing for big performance increases... one guy at OC.net is reporting a 95% increase going from DX to Mantle on a tri-fire 290 system... so it's not just for APUs.
 
AFAIK that's not true, but even if it was, why would AMD do differently?

Yes, it's true... don't confuse AMD's open philosophy with Nvidia's "we keep it proprietary, and why should we give it away for free" mentality. They both want to make money, but the two go about it differently, that's all.
 
I think you all are forgetting that the CPU charts in the Oxide demo showed long periods of rest where the CPU had nothing to do when running Mantle. So if your 4930K is suddenly idle for long periods, it can do other computations. Maybe in the next salvo of Mantle games, owners of a high-end CPU can get increased physics capabilities. Maybe the new PhysX 3.2, which runs on the CPU, would have more resources to calculate positioning and car physics in games better. I could see Star Citizen doing something like this, where if you are running Mantle the explosions look more realistic because more particles are being processed with a finer level of precision in the physics engine, instead of the usual "poof, more flares and sparks" we have today.

Then there are the multiple GPU situations that are held back by even a 4930K.
 
Instead of guessing what happens with slower cpus, why not use an actual slower cpu or underclock the cpu used? Throw an i3 or slower i5 in there and see. Running higher or lower resolutions lets you guess, but you don't KNOW unless you use a slower cpu.

This is the sort of thing hardocp rightfully slams other reviewers for. If you're going to make claims about cpu limited conditions, then throw in a slow cpu and prove it.
 
Lower end CPUs are able to breathe and not hold faster GPUs back, and high end CPUs get more free time that can be tapped into and used for better AI or anything a developer finds useful. I think Mantle is a great thing for future games.
 
There are very few driver releases that have legitimately given across the board 10%+ performance increases. Usually those performance claims are for specific configurations/scenarios and not applicable to anyone.
I'm not sure you meant to say "not applicable to anyone" here. If you did, then I'd obviously have to take some issue with that claim.

I'm not disputing that Mantle delivers greater benefits for other configurations. I'm just pointing out that performance improvements of that relatively small magnitude aren't uncommon in driver optimizations.
 
Yes, it's true... don't confuse AMD's open philosophy with Nvidia's "we keep it proprietary, and why should we give it away for free" mentality. They both want to make money, but the two go about it differently, that's all.

What AMD "open philosophy"?
 
I'm not sure you meant to say "not applicable to anyone" here. If you did, then I'd obviously have to take some issue with that claim.

I'm not disputing that Mantle delivers greater benefits for other configurations. I'm just pointing out that performance improvements of that relatively small magnitude aren't uncommon in driver optimizations.
Sorry, I meant "everyone" not anyone.
 
Considering Mantle is Windows-only at this time, they have no compelling reason to compete.

Hmm, I'm sure MS would love to become even more marginalised in yet another market where they currently have quite a bit of influence.

If they don't need your APIs, then what else do they need? Mantle then goes to Linux, etc.

MS and the markets have seen the results of waiting and delaying. It's not pretty.

Plus, a reboot of DirectX is a long time coming.
 
Nice preview.

So it doesn't help as much when the CPU is powerful. It would appear that it might be said to "offload some processing from the CPU to the GPU" (as a general statement).

If that is partially or wholly true, wouldn't it hurt the max possible performance of the GPU? It seems that in CPU-limited situations it's a good tradeoff: even if the GPU is handling a bit of the work that was traditionally the CPU's job, the net result is better performance. The improvement for users running more powerful desktops will be less, or even potentially a performance hit?

Can't wait to see real-world testing at the acceptable performance levels.

The frame time results look like big improvements for AMD GPUs. Could this improvement just mean that the GPU architecture was not very compatible with DX11? Or perhaps DX was more skewed towards Nvidia GPU architectures? Or that DX just hasn't been updated sufficiently? Perhaps the very nature of DX just requires CPU intervention to function. In this day and age of thousands of stream processors on today's GPUs, it seems apparent that DirectX needs to catch up with reality.

Bring on the full-fledged reviews, [H]. Glad something exciting is happening with PC gaming and GPUs.
 
So it doesn't help as much when the CPU is powerful. It would appear that it might be said to "offload some processing from the CPU to the GPU" (as a general statement). If that is partially or wholly true, wouldn't it hurt the max possible performance of the GPU?
Generally speaking, Mantle provides a lower-friction path for getting data to the GPU, rather than moving tasks that are typically done on the CPU over to the GPU. Graphics APIs can be thought of as pure overhead, which Mantle simply attempts to lessen by providing more direct paths through fewer abstractions.
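
To make that "pure overhead" point concrete, here's a rough sketch in C++ of the difference between a high-level path that validates and translates every draw call and a lower-level path that records a reusable command buffer and submits it in one go. The types and calls are hypothetical stand-ins, not actual Mantle or DirectX code:

// Illustrative only: a hypothetical pseudo-API, not actual Mantle or DirectX code.
#include <vector>

struct DrawCall { int meshId; int materialId; };
struct CommandBuffer { std::vector<DrawCall> recorded; };

// "High-level" style: every draw goes through the runtime, which validates state
// and translates it before the driver sees it, so the CPU pays a cost per call.
void drawSceneHighLevel(const std::vector<DrawCall>& scene) {
    for (const DrawCall& dc : scene) {
        // the runtime would validate state, patch resources, and translate to
        // driver commands here, once per draw call
        (void)dc;
    }
}

// "Lower-level" style: the application records a command buffer once, in a form
// close to what the hardware consumes, and hands it over in a single submission.
void drawSceneLowLevel(const std::vector<DrawCall>& scene, CommandBuffer& cmd) {
    cmd.recorded.assign(scene.begin(), scene.end());  // record once; reusable, can be built on worker threads
    // a thin submit of cmd would go here, instead of thousands of translated calls
}

int main() {
    std::vector<DrawCall> scene = {{1, 1}, {2, 1}, {3, 2}};
    CommandBuffer cmd;
    drawSceneHighLevel(scene);
    drawSceneLowLevel(scene, cmd);
    return 0;
}

The point is just that the second style pays the API cost once per command buffer rather than once per draw, which is roughly where the CPU-side savings in the preview are supposed to come from.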
 
Good description.

It seems that the evolution of GPUs has resulted in designs that have exceeded the current DirectX design/architecture.
 
Let's break down what Mantle is. Feel free to correct me where my logic is wonky.

At the simplest level, Mantle is an Application Programming Interface (API), or a language that game developers can use to write code that creates the beautiful graphics on your screen.
So it is an API, which puts it in direct competition with the established APIs.

In its current iteration, the Mantle API uniquely leverages the hardware in the Graphics Core Next architecture (GCN) of modern AMD Radeon™ GPUs for peak performance.
I'm going out on a limb here and saying this suggests it is closely tied to AMD's GPU architecture.

More broadly, Mantle is functionally similar to DirectX® and OpenGL, but Mantle is different in that it was purpose-built as a lower level API.
Confirms it's an API, but I smell something and it isn't apple pie. It could be high, low, medium, or anywhere in between; an API is just that. It still translates, and so there is the context-switch CPU cost and the run-time CPU cost. That's all explained in Andrew S. Tanenbaum's "Modern Operating Systems", a core text for any programmer's degree.
Let's assume Mantle is so "low level" that it's practically bare metal and the API just passes the call to the driver with little or no translation. Haven't we just bypassed the HAL of the OS? Again, refer to Tanenbaum.

By "lower level," it’s meant that the language of Mantle more closely matches the way modern graphics architectures (like AMD’s own GCN) are designed to execute code.
Alarms should be going off here if anyone still thinks it's not proprietary/tailored to their own hardware.

The primary benefit of a lower level API is a reduction in software bottlenecks, such as the time a GPU and CPU must spend translating/understanding/reorganizing code on-the-fly before it can be executed and presented to the user as graphics.
This I can wholeheartedly agree with.

Mantle comes in contrast to the "high level API," which offers broader compatibility with multiple GPU architectures, but does so at the expense of lower performance and efficiency.
"high level API" = broader compatibility (NV?)
Mantle = AMD GCN
What happens when there is an architecture change?
 
I was thinking it might be bypassing the HAL as well. Which, if true, means a less stable OS. And given the buggy nature of AMD drivers for eons now (in computer years), this may not be all it is cracked up to be. And it might also allow games to crash the OS in ways we have been protected from by the HAL.

I am definitely in a wait and see mode.

The better frame times and delays are good for AMD users, but the Nvidia ones already look like that with DX. It's almost like AMD's drivers/programmers can't play nice with the DX API. One reason could be that DX is somehow more Nvidia-friendly, or it could simply be that AMD doesn't have the same manpower or expertise in their driver team; I don't know.
 
Mantle is an API. It is a lower-level API than DX, not a bare-metal API.

There is no reason Nvidia could not create its own driver to execute the exact same commands on its own architecture through whatever means it wishes.

An API still has a language: required functions and parameters, just like any other language.

As long as you have a complete list of said API functions and their syntax, writing a driver is just a matter of translating those API calls to your own hardware. This is the function of a driver.

In a sense, this is what a wrapper does, although it does it through interpretation to DirectX rather than being compiled directly to the hardware through a driver.
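
As a toy illustration of that "translation is the driver's job" idea (all names here are made up, not a real Mantle or DirectX interface), a wrapper or driver front-end is really just re-expressing one API's call in terms of whatever backend it actually has:

// Illustrative only: hypothetical names, not a real Mantle or DirectX wrapper.
#include <cstdio>

// The call as the "foreign" API specifies it.
struct DrawCmd { int vertexCount; int firstVertex; };

// Stand-in for the vendor's own driver path.
void backendDraw(int first, int count) {
    std::printf("draw %d vertices starting at %d\n", count, first);
}

// What a wrapper/driver front-end does: pure translation, no new semantics.
void wrapperCmdDraw(const DrawCmd& cmd) {
    backendDraw(cmd.firstVertex, cmd.vertexCount);
}

int main() {
    wrapperCmdDraw({36, 0});  // e.g. a cube's worth of vertices
    return 0;
}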

@GoodBoy, it's less about not playing nice with DX, and more about bypassing the onerous restrictions the DX API forces on developers. The developers (well, DICE anyway) are pissed at DX because it hampers their ability to do the things they want from an artistic standpoint, as it forces the artists as well as the code monkeys to work within constrained resource budgets. Mantle frees up those budgets, at the cost of requiring more granular optimization.
 
There is no reason Nvidia could not create its own driver to execute the exact same commands on its own architecture through whatever means it wishes.

True. But why pay the cost of developing a driver for an API that is optimized for the competition's architecture, and so will probably never match its performance even with comparably performing hardware, and that has very little adoption by devs anyway?

M$ even said they are (unsurprisingly) sticking with DX for their console, the Xbox One, which would have been the prime situation in which to adopt it. I've not read anything about the PS4 and Mantle.
 
There is no reason Nvidia could not at least match, maybe even improve on, AMD's performance.
 
What a poor comparison.
Glide didn't have to deal with multi-core CPUs, multiple threads, and multiple GPUs.
 
What a poor comparison.
Glide didn't have to deal with multi-core CPUs, multiple threads, and multiple GPUs.
Hahah, wow. The entire reason 3dfx became dominant in the 90s was that you could link multiple Voodoo cards together in SLI.
 
Hahah, wow. The entire reason 3dfx became dominant in the 90s was that you could link multiple Voodoo cards together in SLI.

Not really.

They were dominant because the first Voodoo product was years ahead of its competitors.
 
Not really.

They were dominant because the first Voodoo product was years ahead of its competitors.

Well... sort of. Back in the day 3dfx was pretty much the only choice. But once Nvidia came onto the scene, it quickly caught up and then surpassed it.
 
And Nvidia didn't really offer a competitive product until the TNT2, which was years after the original Voodoo card (and after Voodoo2 as well). Hence my original post. :)
The TNT2 released in 1999, the Voodoo2 released in 1998. It wasn't that big of a gap. While the original Voodoo cards were years ahead of the competition, competitors started to catch up quickly. One of the biggest advantages to the Voodoo2 was that in SLI you could run resolutions that were otherwise unavailable. So it was a pretty big selling point of the card. There wasn't much that could touch the Voodoo2 cards at their time. But this is getting off topic now...
 
One of the biggest advantages to the Voodoo2 was that in SLI you could run resolutions that were otherwise unavailable.

That was one advantage of the Voodoo2, but not its biggest (and its competitors could run at higher resolutions; that limit was largely a Voodoo thing). Its BIGGEST advantage at the time was that it ran a proprietary API called GLIDE!
 
"high level API" = broader compatibility (NV?)
Mantle = AMD GCN
What happens when there is an architecture change?

I get the feeling AMD will be running with GCN derivatives for a while. I mean, GPUs have kind of hit a wall. Err, the way we use them has, at least. It's true, they are so fast and so advanced now that the single-core-minded DirectX, with its restrictive communication, is holding things back. And CPUs really have been begging for better multi-core use, from top to bottom. It makes a lot more sense to look at ways to use GPUs efficiently and more completely, rather than cranking out the next $500 chip, and to look at ways to make CPUs and GPUs talk to each other better.
Thanks to the next gen console releases, we will finally start seeing games actually take advantage of the current architectures. AMD sees an opportunity to really leverage the situation and get some of that lower level, tighter focus onto PC. It makes a lot of sense. If they stick with GCN for a couple more years, it's pretty feasible that most AMD users will have some type of GCN chip in their computers. And at some point, MS/Windows/PC gaming in general is going to say DirectX 11 only. That's a much narrower spread than 10 years of graphics cards to support, and if nearly everything has the same base architecture, it will be that much easier to predict performance and make an engine scale well to the hardware. AMD and Nvidia have both essentially been on the same architecture (unique of course to their own brand) for three refresh cycles already.

Another eventual goal for Mantle is that once it's basically fully worked out, the idea of having to get new drivers for each game should disappear. One of the longer term goals for Mantle is to shift GPU<->engine optimization over to the devs themselves, rather than the devs having to send requests to Nvidia/AMD driver teams and then hoping for the best in drivers that eventually come out months later.
 
I get the feeling AMD will be running with GCN derivatives for a while. I mean, GPUs have kind of hit a wall. Err, the way we use them has, at least. It's true, they are so fast and so advanced now that the single-core-minded DirectX, with its restrictive communication, is holding things back. And CPUs really have been begging for better multi-core use, from top to bottom. It makes a lot more sense to look at ways to use GPUs efficiently and more completely, rather than cranking out the next $500 chip, and to look at ways to make CPUs and GPUs talk to each other better.
Thanks to the next gen console releases, we will finally start seeing games actually take advantage of the current architectures. AMD sees an opportunity to really leverage the situation and get some of that lower level, tighter focus onto PC. It makes a lot of sense. If they stick with GCN for a couple more years, it's pretty feasible that most AMD users will have some type of GCN chip in their computers. And at some point, MS/Windows/PC gaming in general is going to say DirectX 11 only. That's a much narrower spread than 10 years of graphics cards to support, and if nearly everything has the same base architecture, it will be that much easier to predict performance and make an engine scale well to the hardware. AMD and Nvidia have both essentially been on the same architecture (unique of course to their own brand) for three refresh cycles already.

Another eventual goal for Mantle is that once it's basically fully worked out, the idea of having to get new drivers for each game should disappear. One of the longer term goals for Mantle is to shift GPU<->engine optimization over to the devs themselves, rather than the devs having to send requests to Nvidia/AMD driver teams and then hoping for the best in drivers that eventually come out months later.
From what I have read on Mantle, it's agnostic enough at the hardware level that even if AMD shifts architectures, it should still be possible to write Mantle drivers for compatibility.

I think the wall right now with GPUs is just their sheer size and the limitations of the process node they are on. They need to get down to 20nm before we are going to see any improvements. NVIDIA's GK110 is 7.1 billion transistors. AMD's Hawaii is over 6 billion. For comparison, Intel's 6-core 4960X processor is 1.8 billion. GPUs are complex, and they aren't going to be able to keep the heat output reasonable until there is a die shrink.
 
From what I have read on Mantle, it's agnostic enough at the hardware level that even if AMD shifts architectures, it should still be possible to write Mantle drivers for compatibility.

I think the wall right now with GPUs is just their sheer size and the limitations of the process node they are on. They need to get down to 20nm before we are going to see any improvements. NVIDIA's GK110 is 7.1 billion transistors. AMD's Hawaii is over 6 billion. For comparison, Intel's 6-core 4960X processor is 1.8 billion. GPUs are complex, and they aren't going to be able to keep the heat output reasonable until there is a die shrink.

Yeah I mean, I'm sure there are ways to make a better GPU.

But AMD is saying there are better ways to use existing GPUs and CPUs. Our games could run better on the same hardware, or we could build better looking games and play them well, now, not wait for the next $500 card to brute-force its way through archaic ways of doing things. Consoles have shown some of what is possible. I mean, Metro Last Light just came up free on PSN+ for PS3. I cannot believe how good that game looks on PS3. Ignoring the resolution delta, it looks remarkably similar to "high" on PC.


**ooh here we go, Metro Last Light PS3 VS. PC at what is said to be "max" settings:

https://www.youtube.com/watch?v=q08SWib1rus
 
From what I have read on Mantle, it's agnostic enough at the hardware level that even if AMD shifts architectures, it should still be possible to write Mantle drivers for compatibility.

I think the wall right now with GPUs is just their sheer size and the limitations of the process node they are on. They need to get down to 20nm before we are going to see any improvements. NVIDIA's GK110 is 7.1 billion transistors. AMD's Hawaii is over 6 billion. For comparison, Intel's 6-core 4960X processor is 1.8 billion. GPUs are complex, and they aren't going to be able to keep the heat output reasonable until there is a die shrink.

AMD is betting the farm on this, and from the looks of it they will lose it.
This is worthless once you hit over 1080p.
All they can do is add more shaders and bump the clocks.
They can't touch the base architecture of the chip or it breaks the API; just go back and look at the posts on page 1.

Unless this lets you bump the settings up over using DX, there is no point.
WOO, a whole 10fps more when I'm already at 60+... meh, and it's little help to the 120Hz people.
OpenGL does everything this does and does it just as well if you don't want to use Windows.
How are those AMD Linux drivers coming along anyway... and you think they can maintain an API?
 
Yeah I mean, I'm sure there are ways to make a better GPU.

But AMD is saying there are better ways to use existing GPUs and CPUs. Our games could run better on the same hardware, or we could build better looking games and play them well, now, not wait for the next $500 card to brute-force its way through archaic ways of doing things. Consoles have shown some of what is possible. I mean, Metro Last Light just came up free on PSN+ for PS3. I cannot believe how good that game looks on PS3. Ignoring the resolution delta, it looks remarkably similar to "high" on PC.


**ooh here we go, Metro Last Light PS3 VS. PC at what is said to be "max" settings:

https://www.youtube.com/watch?v=q08SWib1rus
I don't disagree that Mantle is a vast improvement on the software side at least in theory (very few have seen the SDK so I'll take DICE's word on it) but there is still a lot of room left on the GPU side to innovate, even if it's just brute force processing power. Intel is working on 14nm CPUs, and we still have 28nm GPUs. The next die shrink should be interesting. If we can combine that with a leaner, cleaner API as well then fantastic!

I also think you are discounting how huge of a resolution difference exists between the PS3 and a modern computer. Modern console games are not written as close to the metal as you might think. Yes, they do have a more efficient API that cuts overhead. But the PS3 runs Metro last light at 1152x640 (sub 720p) which is 737,280 pixels. I game on my computer at 2560x1440 which is 3,686,400 pixels. That's literally a 5x increase in resolution, and I can play at higher settings to boot. At 720p, pretty much any discrete GPU >$100 can run MetroLL @ High.
 
Mr. Bennett, are you saying that Mantle will help midrange, budget-minded builders squeeze more performance out of their hardware?
 
I don't disagree that Mantle is a vast improvement on the software side at least in theory (very few have seen the SDK so I'll take DICE's word on it) but there is still a lot of room left on the GPU side to innovate, even if it's just brute force processing power. Intel is working on 14nm CPUs, and we still have 28nm GPUs. The next die shrink should be interesting. If we can combine that with a leaner, cleaner API as well then fantastic!

I also think you are discounting how huge of a resolution difference exists between the PS3 and a modern computer. Modern console games are not written as close to the metal as you might think. Yes, they do have a more efficient API that cuts overhead. But the PS3 runs Metro last light at 1152x640 (sub 720p) which is 737,280 pixels. I game on my computer at 2560x1440 which is 3,686,400 pixels. That's literally a 5x increase in resolution, and I can play at higher settings to boot. At 720p, pretty much any discrete GPU >$100 can run MetroLL @ High.

Well, I did say "ignoring the resolution delta". I'm aware of what resolution the PS3 runs Metro at. And I didn't say all console games are written direct to the metal; plenty of them use several middlewares patched together. The PS3, in terms of GPU (Nvidia 7900-derived), VRAM (256MB), and system RAM (less than 256MB; I think the PS3 OS needs 50MB or something), is way below the minimum spec for the PC version of Metro Last Light. Yet, with hardware-specific optimization and an efficient OS and APIs, they were able to get a version of the game running that looks very much like what Metro should look like, with good settings.

So, getting to the point I'm circling around: we don't need the next GPU.

There is a lot of merit in a more efficient API; I mean, that should be obvious. But there is also a lot of merit in focusing on improving how we use existing hardware. So, not just improving the API, but making specific customizations for specific CPU and GPU architectures, which is in the plans for Mantle and the GCN architecture, and also some other more general things like tiled resources and streaming assets. And quad cores have been out forever; let's get stuff threaded out more, with more parallelism. Our GPUs have compute on them that is begging for parallel code, and they've had it for at least two years.

If it weren't for the fact that the PS4 has the same 'compute' structure as a 290x, I doubt we'd see much real use of GPU compute in games. (the PS4 has 64 compute queues, just like the 290x. In comparison, a 7870 has 4 queues).

I'm kind of going out on a limb here, because AMD really hasn't said anything about this that I know of. But with their push for GCN, I really think that they will be using derivatives of it for a while. Certainly the next step will be an improved GCN, but it will be GCN, and then AMD will push for better specific use of that architecture.

As you said, you can run current games at 2K resolution, with great settings and framerate. OK, cool. Before Octoberish last year, current games were decidedly "last gen". Multiplatform games were designed with old consoles and older hardware in mind. It's no wonder that a GPU that costs as much or more than a whole PS3 with 7-year-old hardware can run PS3-derived games at sky-high settings and resolution, even with dense OS and API overhead.
I mean, these are games which, even after adding higher-res texture packs, are averaging 1000MB of VRAM use at 1080p, with only some of them creeping towards 1500-1700MB, and they basically only need two CPU threads. It wasn't until Octoberish that multiplatform games finally started swinging more in line with current PC hardware.

But we still aren't there yet. Part of that is due to the nature of developing multiplatform and having to pick a base goal, that goal being the consoles, and those consoles are brand new.

The other part is that only about 1/3 of Steam users (and presumably of all PC gamers) have a GPU that makes their computer roughly as good as, or maybe a little better than, a PS4. The number of Steam users with a GPU that is clearly better than a PS4's barely registers in the percentages.

So we have a split: 2/3 of PC gamers are swinging under the consoles (some of them are more in line with an Xbone, most are less), and then the upper third are as good as or better than a PS4.

But that's not all; only 2% of Steam users have a display capable of 1080p or better.

So, higher rendering resolution isn't the big dream it seems to be. I mean, it's cool. But even a lot of people who might be capable of doing it can't, due to their display being the limitation.

So instead of flogging the model where graphics companies flood us with waves of incremental GPUs, let's use what we've already got. Because they aren't even close to tapped out, especially if we focus on the resolutions that most people actually have in a display. I mean, we have these graphics cards with 2+ GB of VRAM, tons of shading power, seemingly limitless ability for texturing. But the games are not really living up to the hardware, and neither is the performance.

I think this is the medium term goal for Mantle. Improve relative performance and encourage better use of hardware.

RANT:
I'd love to see a shift in focus, and I think that's what Mantle is. Maximize the hardware right now, and then maybe see a shift in focus from pumping resolution to pumping up asset detail, texture detail, and richness of shaders. Thread stuff out, get it more parallel. Use compute to offload high-end lighting, compression, and other stuff. As far as I'm concerned there's absolutely no reason why we shouldn't have a game engine that looks as good as or better than 3DMark's Fire Strike demo and runs at a minimum of 30fps at 1080p on single 7-series cards.
And let's start streaming assets, like a console. 2 or 3GB of VRAM is plenty. There's no reason we need to be holding every last bit of a level in RAM all at once; that's ridiculous. Tile it out to system RAM, if available. Otherwise, stream it off the HDD. With smart LOD, nobody will notice. The consoles have proven it is possible. Again, the PS3 does a hell of a lot with a laughable amount of RAM.

And I cannot figure out why MLAA and SMAA aren't a standard option in every single game. Why are we still using MSAA? It's expensive on performance and has to be specifically coded to work with a deferred engine. It's really only feasible for high end cards to actually use MSAA.
SMAA doesn't even need to be worked into the engine. It can be dropped in right at the end of the pipeline, just before the HUD is rendered. It has a very low performance cost, and SMAA 2.0 resolves sub-pixel motion way better than just about any other mainstream AA method. It also treats the whole screen as a 2D field, so it works with all types of effects; you don't have to worry about deferred effects, alpha to coverage, etc. Again, the PS3 has been running with MLAA and its derivatives for quite some time, to great result.
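
To illustrate the "drop it in at the end of the pipeline" point, here's a minimal sketch of a frame loop with hypothetical function names, assuming a typical deferred renderer: a screen-space AA pass like SMAA just filters the finished image before the HUD is drawn, so the geometry and lighting passes don't need to know it exists.

// Illustrative frame ordering only; every function name here is hypothetical.
struct Framebuffer {};

void renderGeometry(Framebuffer&) {}   // G-buffer / forward passes
void resolveLighting(Framebuffer&) {}  // deferred lighting, transparency, etc.
void applyPostAA(Framebuffer&) {}      // screen-space AA filter (an SMAA-style pass)
void drawHud(Framebuffer&) {}          // UI drawn after AA so it stays crisp
void present(const Framebuffer&) {}

void renderFrame() {
    Framebuffer scene;
    renderGeometry(scene);
    resolveLighting(scene);
    applyPostAA(scene);  // operates on the finished 2D image; no engine changes needed
    drawHud(scene);
    present(scene);
}

int main() { renderFrame(); return 0; }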
 
Yeah, some games try to hide or divert attention away from medium-to-low-res textures with costly effects; I'd rather have high-res textures. In addition, sometimes said effects don't add much to the game save for drastically lowering FPS. Sure, I do like well-done lighting.
 
So, getting to the point I'm circling around: we don't need the next GPU.

I am going to have to disagree with you right there... This is [H], for one.

... But the games are not really living up to the hardware, and neither is the performance.

Two points. 1: what are you playing? Because today's games are pushing GPUs to the limits. Hello Kitty Online might be good with a 4-year-old GPU... 2: game developers aren't going to push the limits if there isn't hardware that can run it.
 