Mantle pushes the FX-8350 beyond i7-4960X performance.

Once again, Nvidia's latest drivers are faster than Mantle in Star Swarm.

Maybe you want to check the draw batch / ship count before making wild statements.
(Star Swarm isn't an FPS benchmark.) And Nvidia just can't touch Mantle's minimum FPS :cool:
 
You're not making sense here, mate. Clock speed helps, of course... but even though the 8350 is higher-clocked, it's still slower than the i7 in question.

Look at the graph. Obviously what you are posting is wrong. In this workload the 8350 is not the slower processor. Live with it.

People can hypothesize, but nobody here can give you a definitive answer. Any possible reason people give, you instantly dismiss, and then simply repeat yourself again, and again, and again. If you really want to know, find out for yourself. You are making zero contribution to this thread; instead, you're simply crapping all over it.

If you doubt the results are legit, then take some of the monumental amount of time you dedicate to this thread and use it to find proof that the 8350 is indeed slower in this scenario.
 
lol, I really like AMD cards and their drivers as well... but their CPUs are a joke... no offense to people that own them, but my 5-6 year old system can still run circles around even the newest overclocked AMD CPUs... Mantle will never make me consider buying an AMD CPU... they're just too weak in 99% of everything else.
 
lol, I really like AMD cards and their drivers as well... but their CPUs are a joke... no offense to people that own them, but my 5-6 year old system can still run circles around even the newest overclocked AMD CPUs... Mantle will never make me consider buying an AMD CPU... they're just too weak in 99% of everything else.


Unless you're talking about a heavily overclocked hex-core, then no.
 
Regardless of whether the processor is from Intel or AMD, Mantle has shown us the inefficiencies in how gaming graphics are done today through DirectX. Hopefully DX12 brings a more efficient API.

When I first read this, the first thing that came to mind is that Mantle allows 4960X-equivalent frame rates. That says a lot about the AMD processor, because it needs a near bare-metal API to achieve this. On the other hand, do this in DirectX and the tables are turned, which shows both how CPU-inefficient DX is and how limited the performance of the Bulldozer-style modules is. Come 2016, when AMD moves to a new architecture, there is a possibility this may change.

However, Mantle shows that freeing up the CPU as much as possible allows it to do other things like physics or AI. It's still up to the game developer to showcase this possibility. So far we've only seen tech demos and a handful of games that show how much faster Mantle is.
 
Consider just how long Kepler has been out now. Many people here own one or two and Maxwell will be shipping soon.

Now ask yourself... how many games take advantage of one of Kepler's best performance / efficiency features: bindless multidraw indirect?

Probably close to... yeah, zero.
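
For anyone wondering what the feature actually does: here's a minimal sketch of the multi-draw indirect path it builds on (core since GL 4.3; Kepler's GL_NV_bindless_multi_draw_indirect layers bindless vertex/index buffers on top). It assumes a GL 4.3+ context with a shader and indexed VAO already bound, and shipCount / indicesPerShip are made-up parameters for illustration.

[CODE]
// Minimal sketch: batching many draws into one API call with
// glMultiDrawElementsIndirect (OpenGL 4.3 core). Assumes a GL 4.3+
// context is current and an indexed VAO is already bound.
#include <GL/glew.h>
#include <vector>

// Matches the command layout the GL spec defines for indirect indexed draws.
struct DrawElementsIndirectCommand {
    GLuint count;         // indices per draw
    GLuint instanceCount; // instances per draw
    GLuint firstIndex;    // offset into the index buffer
    GLuint baseVertex;    // offset added to each index
    GLuint baseInstance;  // first instance ID
};

void submitBatchedDraws(GLsizei shipCount, GLuint indicesPerShip)
{
    // One command per ship, filled out CPU-side in a single array.
    std::vector<DrawElementsIndirectCommand> cmds(shipCount);
    for (GLsizei i = 0; i < shipCount; ++i)
        cmds[i] = { indicesPerShip, 1, 0, 0, static_cast<GLuint>(i) };

    // Upload the command array to the indirect buffer...
    GLuint ibo;
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, ibo);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 cmds.size() * sizeof(cmds[0]), cmds.data(), GL_STREAM_DRAW);

    // ...and issue thousands of draws with a single call. The NV bindless
    // variant (glMultiDrawElementsIndirectBindlessNV) additionally lets each
    // command switch vertex/index buffers by GPU address, removing even the
    // bind overhead between objects.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, shipCount, 0);
    glDeleteBuffers(1, &ibo);
}
[/CODE]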


Something to think about when forking over $500 for a vid card.


There are some fundamental problems in the PC gaming space right now and Valve seems to be the only one with the balls to try to fix them.
 
Consider just how long Kepler has been out now. Many people here own one or two and Maxwell will be shipping soon.

Now ask yourself... how many games take advantage of one of Kepler's best performance / efficiency features: bindless multidraw indirect?

Probably close to... yeah, zero.


Something to think about when forking over $500 for a vid card.


There are some fundamental problems in the PC gaming space right now and Valve seems to be the only one with the balls to try to fix them.

So that means developers don't appreciate it, at least not yet. I wonder why? Too bad Maxwell isn't going to be fully DX12 compatible. That would have been nice.
 
lol, I really like AMD cards and their drivers as well... but their CPUs are a joke... no offense to people that own them, but my 5-6 year old system can still run circles around even the newest overclocked AMD CPUs... Mantle will never make me consider buying an AMD CPU... they're just too weak in 99% of everything else.

Sad but true. If AMD ditched a bit of L3 cache and released a 100 W 10-core Steamroller chip, I think we would see a little competition. But in all honesty, AMD doesn't give a damn about high-end CPUs anymore. They re-release an overclocked version of a two-year-old CPU and call that 'high end'...

But you are right: their GPUs are boss.
 
lol, I really like AMD cards and their drivers as well... but their CPUs are a joke... no offense to people that own them, but my 5-6 year old system can still run circles around even the newest overclocked AMD CPUs... Mantle will never make me consider buying an AMD CPU... they're just too weak in 99% of everything else.

And this is why this is so interesting: on old code, your old Intel will run circles around the new FX series. But when you get software developers who are serious about optimizing for multiple cores, where does that take you?

When there is serious progress to be made in gaming engines, the only solution is more cores. This isn't out of the blue; everyone knows it.
When old software hinged on a bogged-down API, the only thing that mattered was IPC. And even then it was stupendously hard to get more cores working well for you.

This has been coming for more than a few years. With the Frostbite engine here to stay, Mantle or DX12 can do a lot more, so within 2 to 5 years developers will do more with the "idle" CPU, giving us more to enjoy in gaming.
 
And this is why this is so interesting: on old code, your old Intel will run circles around the new FX series. But when you get software developers who are serious about optimizing for multiple cores, where does that take you?

When there is serious progress to be made in gaming engines, the only solution is more cores. This isn't out of the blue; everyone knows it.
When old software hinged on a bogged-down API, the only thing that mattered was IPC. And even then it was stupendously hard to get more cores working well for you.

This has been coming for more than a few years. With the Frostbite engine here to stay, Mantle or DX12 can do a lot more, so within 2 to 5 years developers will do more with the "idle" CPU, giving us more to enjoy in gaming.
So my takeaway from this whole thing is that all along AMD has only been developing Mantle to sell more AMD CPUs... :cool: The funniest thing was seeing people clamouring for AMD APUs and pairing an R9 290X with one after Mantle's release :D.
 
So my takeaway from this whole thing is that all along AMD has only been developing Mantle to sell more AMD CPUs... :cool: The funniest thing was seeing people clamouring for AMD APUs and pairing an R9 290X with one after Mantle's release :D.

If you had checked this thread, you could have spotted someone with an old Core 2 Duo at 2.4 GHz playing BF4 at decent framerates.

But what I am talking about is the difference in CPU architecture that is now more or less paying off.

If anything, you need an AMD graphics card; what CPU you have matters less at this point in time.
 
So my takeaway from this whole thing is that all along AMD has only been developing Mantle to sell more AMD CPUs... :cool: The funniest thing was seeing people clamouring for AMD APUs and pairing an R9 290X with one after Mantle's release :D.

Oh, that works great too, since the CPU doesn't have that big of a load. Remember, Mantle keeps the CPU idle a great deal of the time, as the GPU is doing most of the work.

Grab a $40 motherboard, a cheap AMD APU, and a 290X, and game on the same settings as everyone else. So you want to render video and think you need a bigger CPU? Use AMD VCE to render directly on the video card, and utilize the APU's GPU section to make it even faster.

So yes, for a budget system an AMD APU + R9 290X does make sense. Glad to see that others had mentioned it before, according to your post. :)
 
If you had checked this thread, you could have spotted someone with an old Core 2 Duo at 2.4 GHz playing BF4 at decent framerates.

But what I am talking about is the difference in CPU architecture that is now more or less paying off.

If anything, you need an AMD graphics card; what CPU you have matters less at this point in time.
I did see that, and it is configurations of that type for which low-level APIs like Mantle and OpenGL see the most benefit.
Oh, that works great too, since the CPU doesn't have that big of a load. Remember, Mantle keeps the CPU idle a great deal of the time, as the GPU is doing most of the work.

Grab a $40 motherboard, a cheap AMD APU, and a 290X, and game on the same settings as everyone else. So you want to render video and think you need a bigger CPU? Use AMD VCE to render directly on the video card, and utilize the APU's GPU section to make it even faster.

So yes, for a budget system an AMD APU + R9 290X does make sense. Glad to see that others had mentioned it before, according to your post. :)
While that may be true, it seems odd to me that someone would build a new system with such a high-end GPU and an APU with mid-range performance like Kaveri. Not saying Kaveri is a bad piece of hardware by any stretch of the imagination, but I think they are better suited to an HTPC or SFF PC. I would definitely not pair the 290X with something like a cheap Richland or Trinity. Personally, if I were considering an R9 290X for a new system, I would go balls-out on the rest of it to get the best performance possible.

But I realize most people can't spend four figures on a PC, so if you need to build around a budget, it would make sense to build toward the strength of Mantle. With Mantle, you could get away with a cheap CPU and have money left over for a bitchin' GPU.
 
I did see that, and it is configurations of that type for which low-level APIs like Mantle and OpenGL see the most benefit.
While that may be true, it seems odd to me that someone would build a new system with such a high-end GPU and an APU with mid-range performance like Kaveri. Not saying Kaveri is a bad piece of hardware by any stretch of the imagination, but I think they are better suited to an HTPC or SFF PC. I would definitely not pair the 290X with something like a cheap Richland or Trinity. Personally, if I were considering an R9 290X for a new system, I would go balls-out on the rest of it to get the best performance possible.

But I realize most people can't spend four figures on a PC, so if you need to build around a budget, it would make sense to build toward the strength of Mantle. With Mantle, you could get away with a cheap CPU and have money left over for a bitchin' GPU.

Exactly! And if you want a cheap, tiny, silent box in your living room, it also makes sense. I'm with you, though, on pairing a 290X with a badass CPU; it just makes you feel better that you have all bases covered. But if you decide to go with an APU, you don't lose much in my opinion, though that is up to the individual. And the price-to-performance ratio is off the charts! Link to the pricing of the motherboard + APU bundles.
 
And the price-to-performance ratio is off the charts! Link to the pricing of the motherboard + APU bundles.

Yep. The fact that there are still people on this forum complaining about a $129.99 AMD 8-core CPU for the last couple of years proves to me how jealous they are that Intel doesn't make a budget 8-core to compete with it. Wah wah wah. Kyle Bennett, come ban me, bro. I know you don't want people around who like a certain brand. :rolleyes:
 
I asked Dan Baker of Oxide:

https://twitter.com/dankbaker/status/484047075697360897
@dykebeard Sure, if you are a single core, but we are looking at 8-16 cores. OpenGL needs to tackle threading correctly at some point
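
To illustrate what "tackling threading correctly" buys you on 8-16 cores, here's a toy sketch of the model Mantle-style APIs use: every worker thread records its own command list in parallel, and one thread does a cheap submit at the end. The DrawCall / CommandList types are hypothetical stand-ins, not any real API.

[CODE]
// Toy model of parallel command recording: N threads record privately,
// one thread submits. Classic OpenGL funnels all of this through a
// single context thread instead.
#include <cstdio>
#include <thread>
#include <vector>

struct DrawCall    { int meshId; };
struct CommandList { std::vector<DrawCall> calls; };

int main()
{
    const int numThreads   = 8;      // "we are looking at 8-16 cores"
    const int drawsPerList = 10000;

    std::vector<CommandList> lists(numThreads);
    std::vector<std::thread> workers;

    // Record in parallel; each thread owns its own list, so no locks
    // are needed during recording.
    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back([&lists, t, drawsPerList] {
            for (int i = 0; i < drawsPerList; ++i)
                lists[t].calls.push_back({ t * drawsPerList + i });
        });
    for (auto& w : workers) w.join();

    // Single cheap submission step, analogous to a queue submit.
    std::size_t total = 0;
    for (const auto& cl : lists) total += cl.calls.size();
    std::printf("submitted %zu draws from %d threads\n", total, numThreads);
}
[/CODE]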

Alternatively, Graham Sellers of AMD:

http://forum.beyond3d.com/showpost.php?p=1813740&postcount=7

Overall, I think that making each and every API call as cheap as possible and setting things up so that you can run on multiple CPU cores and such is fundamentally not a forward looking approach to high performance graphics.

In my experiments on AMD hardware, I've been able to easily hit a range of 10 to 20 million draws per second (enough for at least 200K draws per frame at 60Hz) using stock OpenGL (no extensions that aren't in core). To hit this rate, the draws have to be really pretty light weight, otherwise you end up bumping into other limits (generally in the front end). Once you put real, physical state changes in there (the kind that hits registers), the hardware just doesn't go that fast no matter how you drive it.
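
To make the quoted experiment concrete, the measurement is essentially this shape (a sketch, not Sellers's actual code; it assumes a current GL context with a trivial program and VAO already bound):

[CODE]
// Rough sketch of a draw-rate measurement: issue a huge number of
// deliberately lightweight draws with no state changes in between,
// then compute draws per second.
#include <GL/glew.h>
#include <chrono>
#include <cstdio>

void measureDrawRate()
{
    const int numDraws = 1000000;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < numDraws; ++i)
        glDrawArrays(GL_TRIANGLES, 0, 3);  // tiny, state-change-free draw
    glFinish();  // wait for the GPU so we time actual completion
    auto t1 = std::chrono::steady_clock::now();

    double sec = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%.1f million draws/sec\n", numDraws / sec / 1e6);

    // Put a real state change (e.g. glUseProgram) inside the loop and
    // the rate collapses: that's the front-end limit he mentions.
}
[/CODE]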
 
Alternatively, Graham Sellers of AMD:

http://forum.beyond3d.com/showpost.php?p=1813740&postcount=7

Overall, I think that making each and every API call as cheap as possible and setting things up so that you can run on multiple CPU cores and such is fundamentally not a forward looking approach to high performance graphics.

In my experiments on AMD hardware, I've been able to easily hit a range of 10 to 20 million draws per second (enough for at least 200K draws per frame at 60Hz) using stock OpenGL (no extensions that aren't in core). To hit this rate, the draws have to be really pretty light weight, otherwise you end up bumping into other limits (generally in the front end). Once you put real, physical state changes in there (the kind that hits registers), the hardware just doesn't go that fast no matter how you drive it.

And so here we are, with the people writing the actual engines saying it's not the way they want to go.
 
So, what some here are saying is that they prefer inefficient APIs that require top-of-the-line CPUs to get good game performance, instead of an efficient API that allows you to game on less expensive CPUs? And if AMD gives us a more efficient API, it's merely a conspiracy to make their CPUs more marketable?
 
Alternatively, Graham Sellers of AMD:

http://forum.beyond3d.com/showpost.php?p=1813740&postcount=7

Overall, I think that making each and every API call as cheap as possible and setting things up so that you can run on multiple CPU cores and such is fundamentally not a forward looking approach to high performance graphics.

In my experiments on AMD hardware, I've been able to easily hit a range of 10 to 20 million draws per second (enough for at least 200K draws per frame at 60Hz) using stock OpenGL (no extensions that aren't in core). To hit this rate, the draws have to be really pretty light weight, otherwise you end up bumping into other limits (generally in the front end). Once you put real, physical state changes in there (the kind that hits registers), the hardware just doesn't go that fast no matter how you drive it.

The thing is, it's lipstick on a pig. You can tinker with OpenGL and do a lot with it, but what you cannot do is make it work as a low-level API.

The biggest problem is that you still need a driver: features have to be developed on both sides before programmers can use them. Whereas Mantle allows programmers to build their own features and take responsibility for bugs in their own game, instead of tracking down problems in the drivers.

AMD is still doing OpenGL because people are using it. Same goes for DX.

You are seeking solutions that are not really worth pursuing for a software developer who needs total control over what they are working on.
 
So, what some here are saying is that they prefer inefficient APIs that require top-of-the-line CPUs to get good game performance, instead of an efficient API that allows you to game on less expensive CPUs? And if AMD gives us a more efficient API, it's merely a conspiracy to make their CPUs more marketable?

Yes, because Nvidia and Intel.
 
So, what some here are saying is that they prefer inefficient APIs that require top-of-the-line CPUs to get good game performance, instead of an efficient API that allows you to game on less expensive CPUs?
No, people are saying they prefer hardware-agnostic APIs that are not bound to a specific vendor or architecture.

The fact that the current leading hardware-agnostic APIs (DirectX and OpenGL) have additional CPU overhead is an unwanted side-effect.

And if AMD gives us a more efficient API, it's merely a conspiracy to make their CPUs more marketable?
Conspiracy? Not sure I'd label it as such, but it does make their CPUs more viable by making games less CPU-dependent.
 

Well, since you can't get enough of it: it says Battlefield 4.
But you are welcome to prove that DICE gets paid for extra sales of AMD CPU/APU hardware.
Last time around you couldn't get past the same innuendo you are posting now. Wasn't it the thing you mentioned last time, that they paid for Mantle?

Boring...
 
He keeps posting that as if it means something other than the bundle deal they got with the game, plus the advertisements of their hardware in the game itself.

A shitty troll is a shitty troll...
 
The fact that the current leading hardware-agnostic APIs (DirectX and OpenGL) have additional CPU overhead is an unwanted side-effect.
And is a side-effect of their ability to run ubiquitously on hardware and offer high-level facilities for developers (like managed resource creation and destruction). That they're not able to adequately exploit draw calls across threads is a shortcoming of their respective design legacies, and is unfortunate, but certainly doesn't kill the vast majority of games in the crib performance-wise.

Most developers, in fact, seem to readily accept their games performing poorly. Low-level APIs give them opportunities to do a better job of exploiting hardware, but just as many opportunities to shoot themselves squarely in the feet. Suffice it to say that low-level APIs and high-level APIs will happily co-exist for some time to come.
 
And is a side-effect of their ability to run ubiquitously on hardware and offer high-level facilities for developers (like managed resource creation and destruction). That they're not able to adequately exploit draw calls across threads is a shortcoming of their respective design legacies, and is unfortunate, but certainly doesn't kill the vast majority of games in the crib performance-wise.
Exactly what I was hinting at :)
 
The thing is, it's lipstick on a pig. You can tinker with OpenGL and do a lot with it, but what you cannot do is make it work as a low-level API.

The biggest problem is that you still need a driver: features have to be developed on both sides before programmers can use them. Whereas Mantle allows programmers to build their own features and take responsibility for bugs in their own game, instead of tracking down problems in the drivers.

AMD is still doing OpenGL because people are using it. Same goes for DX.

You are seeking solutions that are not really worth pursuing for a software developer who needs total control over what they are working on.

Frankly, that's hogwash. No software developer wants to get anywhere near writing a device driver, or even near touching one as complicated as a GPU driver. And it's all moot anyway, because GPU drivers are closed and nobody is going to be getting device drivers into an OS like Windows.

Mantle is an API into a driver just like everything else. It may be "lighter" than D3D only in the sense that it's newer and therefore exposes modern hardware features in a better way, but that's also true of OGL.
 
That's absolutely NOT true of OGL.

OGL has the same abstraction level as DX.

The only way to expose proprietary hardware functionality is through extensibility.

This changes nothing about threading, CPU bindings, and draw call limitations.
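
For what it's worth, here's how that extensibility actually surfaces to a programmer: runtime feature detection, per vendor, with fallbacks. A minimal sketch assuming a GL 3.0+ context (GLEW included only for the prototypes); the extension named is the Kepler one from earlier in the thread.

[CODE]
// Runtime extension detection: walk the driver's extension list and
// pick a code path per vendor.
#include <GL/glew.h>
#include <cstdio>
#include <cstring>

bool hasExtension(const char* name)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char* ext =
            reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

void chooseDrawPath()
{
    // NVIDIA-only fast path; everyone else takes the fallback. This
    // per-vendor branching is exactly the divergence being argued about.
    if (hasExtension("GL_NV_bindless_multi_draw_indirect"))
        std::puts("using bindless multi-draw path");
    else
        std::puts("falling back to core multi-draw indirect");
}
[/CODE]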
 
And this is why this is so interesting: on old code, your old Intel will run circles around the new FX series. But when you get software developers who are serious about optimizing for multiple cores, where does that take you?

When there is serious progress to be made in gaming engines, the only solution is more cores. This isn't out of the blue; everyone knows it.
When old software hinged on a bogged-down API, the only thing that mattered was IPC. And even then it was stupendously hard to get more cores working well for you.

This has been coming for more than a few years. With the Frostbite engine here to stay, Mantle or DX12 can do a lot more, so within 2 to 5 years developers will do more with the "idle" CPU, giving us more to enjoy in gaming.


Mantle = 3DNow! 2.0...

And as nice as 3DNow! OpenGL was for Quake 2, it was the only game to use it.
This is the same thing AMD tried back in the '90s, and it didn't do them much good until they came out with an all-new CPU.
Fact is, there is no "old code", just code that's not using everything the CPU supports.
If you used everything the latest i7 supports, it would blow everything away even more than it does now.
But you can't make software that only 1% of people can run.

That's absolutely NOT true of OGL.

OGL has the same abstraction level as DX.

The only way to expose proprietary hardware functionality is through extensibility.

This changes nothing about threading, CPU bindings, and draw call limitations.

Amusingly, 3DNow! OpenGL extensions would like a word with you.
 
The point was that the post was about studios' belief in Mantle; any so-called preference is muddied by the fact that AMD is paying EA to put games out there with Mantle support.

As quotes of quotes aren't nested, it's all starting from this statement:

Originally Posted by Relayer
So, what some here are saying is that they prefer inefficient APIs that require top-of-the-line CPUs to get good game performance, instead of an efficient API that allows you to game on less expensive CPUs? And if AMD gives us a more efficient API, it's merely a conspiracy to make their CPUs more marketable?


So yes, AMD paying EA is pertinent to the argument, unlike the link you posted.
 
Mantle = 3DNow! 2.0...

And as nice as 3DNow! OpenGL was for Quake 2, it was the only game to use it.
This is the same thing AMD tried back in the '90s, and it didn't do them much good until they came out with an all-new CPU.
Fact is, there is no "old code", just code that's not using everything the CPU supports.
If you used everything the latest i7 supports, it would blow everything away even more than it does now.
But you can't make software that only 1% of people can run.



Amusingly, 3DNow! OpenGL extensions would like a word with you.

What are you even talking about?

3DNow! was an x86 instruction set, like MMX.
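
For context on the comparison: 3DNow! was CPU-side SIMD (packed floats in the MMX registers), nothing like a graphics API. A tiny sketch with the old intrinsics, purely illustrative; it needs an ancient AMD CPU and a compiler flag like GCC's -m3dnow to actually build and run.

[CODE]
// 3DNow! in a nutshell: two packed floats per MMX register, CPU math only.
#include <mm3dnow.h>

// Adds two pairs of floats; pointers are assumed 8-byte aligned.
void add2(const float* a, const float* b, float* out)
{
    __m64 va = *reinterpret_cast<const __m64*>(a);
    __m64 vb = *reinterpret_cast<const __m64*>(b);
    *reinterpret_cast<__m64*>(out) = _m_pfadd(va, vb);  // PFADD
    _m_femms();  // FEMMS: clear MMX state before normal FP code
}
[/CODE]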
 
Mantle = 3DNow! 2.0...

Amusingly, 3DNow! OpenGL extensions would like a word with you.

What's that? EXTENSIONS would like a word with me?

They can go fuck themselves...

Extensions are junk...

Anyone with even a modicum of coding experience can tell you about managing proprietary extensions... they are a rat's nest.

It's been said over and over again by coders in the gaming industry: OGL on the desktop is pure shite...

It's worse than DX.

There is a reason Valve is the only major publisher using it (Gabe is a cheap fuck and refuses to be at the mercy of a licensed tech)...

It's interesting, really: all this BS from the Nvidia fanbois will mean nothing when all the new games come out supporting Mantle; they will just find another way to marginalize it.

And those of us with the tech will just enjoy it.

Personally, I'm looking forward to DA...
 
lol... In terms of pushing the limits, the CAD industry has been outpacing the gaming industry on the PC for a long time now.

PC games are basically Xbox retreads, with a decade-old programming philosophy to match decade-old hardware designs. Those are the developers you're appealing to: people stuck in a technology that precedes Android 1.6 Donut by about five years.
 
We don't live in that world; I doubt there are more than a handful here who even care...
 