Much like AMD, Mantle is an epic failure.
Irrelevant and non-contributing.
You keep telling yourself that sunshine...
I'll keep playing Mantle games with fantastic performance...
glNext will just be the AZDO stuff plus the new command list thingy from nvidia, and a re-do of how the shaders are going to be done and probably an easier programming API since everybody complains about that. I also suspect they're going to merge all the different GLs like WebGL, GLES, etc. into one.
The two Valve employees presenting glNext at GDC are actually former nvidia devs.
TDP = Thermal Design Power
This represents how much heat the cooling solution is designed to dissipate, in watts. The GTX 980 has a cooler that can handle 165 W of heat, while the R9 290 needs a cooler that can deal with 275 W of heat.
The GTX 980 is an efficient card, and doesn't throw out as much heat (even when drawing similar amounts of power).
No, that's not even close to right.
First of all, let's assume they ran the power consumption test at Extreme settings, because they didn't specify.
The DX12 performance increase of the GTX 980 is 30% over the 290X. This means the CARD is doing 30% more work, which means that the GTX 980 is running at a much higher voltage/frequency.
Since dynamic power consumption scales with frequency * voltage^2, and since the highest frequencies always require the biggest bump in voltage, the Nvidia card is likely at full tilt (165 W) while the AMD card (assuming it's stock) is running at around 700-800 MHz, which should drop its voltage as well, so the power is way down from the typical 250 W you get in games.
I would expect that both companies have aggressive frequency/voltage throttling based upon graphics load, so it's not hard to believe that the AMD card is under 200w.
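The f * V^2 relationship this argument leans on can be sketched numerically. A rough illustration only: the capacitance constant and the frequency/voltage pairs below are invented, not measurements of either card.

```python
# Dynamic switching power scales roughly as P ~ C * f * V^2, so dropping
# clocks (and the voltage the top clocks require) cuts power faster than
# linearly. C and the frequency/voltage pairs are made-up numbers.
def dynamic_power(c, freq_mhz, volts):
    return c * freq_mhz * volts ** 2

C = 0.001                                   # hypothetical capacitance constant
full_tilt = dynamic_power(C, 1200, 1.20)    # top clock needs a voltage bump
throttled = dynamic_power(C, 750, 0.95)     # ~750 MHz allows a lower voltage

# Frequency fell by 1.6x, but power fell by ~2.6x thanks to the V^2 term.
print(full_tilt, throttled, full_tilt / throttled)
```

This is why a throttled-back card can land far below its typical gaming power draw even though the clock reduction alone looks modest.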
Wrong, just because the core is doing more work doesn't mean the voltage has to scale up with it. Overclocking with the 980 shows this time and again.
Have you used CrossFire recently? When I had 5870s it was a disappointing experience, but the frametimes and compatibility on my 290X CF setup have been fantastic. Crossfire is arguably smoother than SLI right now. NVIDIA will probably need to drop the bridge soon too.

If I were AMD, I'd take the Mantle funding and throw it at their hardware. Get xfire to work as well as a single card and I'll give them a few grand next year.
It's too bad this review didn't look at tri-SLI 980s and whether there's better scaling with more cores.
Mantle isn't open source, how is that good for the gaming community? Windows is the dominant gaming platform and it matters the most. A tiny subset who play games on OS X or Linux is largely irrelevant.
It's a ground-up rewrite from what they have stated. It pretty much has to be. The current version of OpenGL's biggest strength, and its Achilles' heel, is the massive amount of legacy support it provides. They have to ditch that to really provide performance improvements on the level of what Mantle and DX12 are going to offer. I'm going to call it: not a chance. It's going to be a massive API change, or that will be all she wrote.
This is not about YOU overclocking your GTX 980. This is about the automatic power control built into all modern GPUs.
The automatic power control has several different speed/voltage settings that it can step between in a heartbeat based upon load. In a STOCK card (from the article), the user has NO CONTROL over these voltage levels. When you overclock the card (without modifying the voltage), it simply adds a higher frequency to the top-end power block.
If you don't believe me, why don't you talk to all the people having problems with overclocking SLI GTX 980s because the cards are using different voltages?
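The stepping described above can be sketched as a toy model. The state table and the load-to-state mapping here are invented for illustration; real cards use driver/firmware tables the user never sees.

```python
# Toy model of automatic GPU power control: the card steps between fixed
# frequency/voltage states based on load. All numbers are invented.
STATES = [          # (MHz, volts), ordered from idle to full tilt
    (300, 0.85),
    (700, 0.95),
    (1000, 1.05),
    (1200, 1.20),   # an overclock would add a higher-MHz entry here
]

def pick_state(load):
    """Map a 0.0-1.0 GPU load onto one of the fixed power states."""
    idx = min(int(load * len(STATES)), len(STATES) - 1)
    return STATES[idx]

print(pick_state(0.2))    # light load -> low clock and low voltage
print(pick_state(0.95))   # heavy load -> top state
```

The point of the sketch: the card, not the user, chooses the voltage at any instant, which is why two cards under different loads can sit at different voltages even with identical settings.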
AMD has stated that it will be open once Mantle is out of beta. Obviously this is one where we'll have to wait and see, but I can't remember a time they have promised something like that and then lied.
Who cares where the idea for DX12 came from? It's a much better alternative than mantle because it's not locked to a single video card vendor. If AMD was the catalyst then I say thank you AMD. If not, then thank you MS. Time to get W10 out and some games working with DX12!!
Man. The AMD cards really benefit from DX12/Mantle over DX11 in those tests.
I wonder why there's such a disparity in the performance improvement for the AMD cards vs the NVIDIA ones? The 290X numbers go from 8.3 FPS to 42.9 FPS, an increase of over 400%. But the 680 shows a 50% increase, since it's already pulling 23 FPS and hits 36. Likewise the 980, about a 150% increase.
It's also odd that the 285 and 260X both had framerates right around 8FPS in DX11. I wonder if there's room for optimization in the DX11 driver path on the AMD cards still, or if that was a bug in the current driver for Win10 that's putting some other kind of limitation in place?
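The gains quoted above check out arithmetically. The FPS figures are the ones in this post; the helper function is just for illustration.

```python
# Percent improvement going from one framerate to another.
def percent_gain(before_fps, after_fps):
    return (after_fps - before_fps) / before_fps * 100

print(round(percent_gain(8.3, 42.9)))  # 290X, DX11 -> DX12: ~417%
print(round(percent_gain(23, 36)))     # 680: ~57%
```

The disparity is mostly the denominator: starting from 8.3 FPS, almost any improvement looks enormous in percentage terms.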
Maybe I read a different preview article than some of you, but I do not see this as anything other than a win for AMD. Sure, their Windows 10 drivers could use some work in the "legacy" D3D11 department, but otherwise Mantle is more efficient than DX12 for their hardware (which may change with further-optimized drivers). You cannot compare those results to current Win 7 D3D11 numbers; if you are, then you did not read their test setup and what OS they are using.
Perhaps the biggest thing that many of you are forgetting is that AMD has been trying to push their APU business. This preview shows that you can now push some top-notch GPUs with them even though they have weaker CPU cores. The market has been moving to laptops over desktops for a while now and has lately been trending toward tablets and other portables. APUs are the more ideal chips for those places and for the general user. We power users are a very small piece of the pie.
MS gains from this with Xbone due to the AMD APU inside. They also get to decrease fragmentation in the Windows user base (ask any Android dev what this means).
What did Nvidia gain? Their current GPUs stay in the same performance positions they currently have in Win 7 and D3D11. Their mobile SoCs are generally garbage. However, K1 shows some promise; if they can get it to run actual Windows, it may gain from DX12. So really they just get a bump in current GPU performance, which AMD will get as well.
That right there is an example of really looking hard for the silver lining in the cloud. Mantle is dead on arrival, and AMD has piss-poor drivers in DX11, as shown by this test.
Actually, it took me about 30 seconds to see the big picture. Sad that so many others just see poor Windows 10 drivers and rush to bash AMD over an OS that is not even released, other than a preview. GPUs and ARM SoCs are all Nvidia has, and so far only one of those markets can gain from DX12. AMD has GPUs, CPUs, and APUs, all three of which can benefit.

If the GPU market were all gravy, then Nvidia would not be trying so hard to break into mobile, which is where the majority of "gamers" are these days. I would rather have three market segments get an improvement in their value rather than one; perhaps you disagree? If the tables were turned, I would be saying the exact same thing in Nvidia's favor. Simple math: 3 > 1.
How is Mantle dead on arrival? It was/is in games and showed an improvement, yes? Are there more games coming out with it? Yes. Will DX12 launch with tons of games using it? No. Will it take off faster than Mantle? Yes. In the meantime, AMD users can get that boost without an unreleased Windows 10 and non-beta drivers; can Nvidia users? Overall, Mantle is about as useful as PhysX: it's an item that boosts one camp's appeal but does not really give it total market dominance, due to slow adoption. While one is an API and the other is a feature set, the principle behind why they were created is the same: provide value to their brand (AMD states they did it to further the market, but nobody is 100% altruistic in business).
Just looked at the CPU used, and again it's a very expensive high-end Core i7-4960X. There is more to it than just threads, IPC, cache, and other aspects, so just reducing how many cores are used and lowering clock speeds is not an accurate way of emulating other CPUs' scaling at all. They should at least have used an AMD CPU as well, because you simply cannot use an Intel CPU to represent it.
When the hell are we going to get UE4 games?
Yeah, how's AMD's bottom line these days with their awesome APUs and Mantle? Oh, that's right, they've got one foot in the grave. And Mantle is DOA, and even with it in BF4, the 290X still loses to the 980.
Again how is Mantle DOA? You have provided no evidence.
I too believe that with DX12 and glNext, AMD's Mantle will be redundant. There's just no way Mantle can compete with two APIs that work independent of vendor.

Perhaps Mantle pushed the industry to put the development of DX and OpenGL into high gear, and we can all be grateful if it did. But there's simply no reason for devs to continue (or start) working with Mantle when you have APIs that will work everywhere in terms of hardware vendor.
Not false, TDP = heat related, not a direct measure of power consumption.

False. We already discussed this...
The Nvidia Maxwell TDP number is the average power consumption in specific gaming situations, cherry-picked by Nvidia, at the base clocks, aka without boost.
When it draws similar amounts of power, it is dissipating the same amount of heat. The coolers are designed for ~200-225w.
Power consumption does NOT directly equal the heat output of a graphics card.

No, that's not even close to right. POWER = HEAT, didn't they learn you anything in skool? It is a more efficient card, so it DOES MORE WORK for the same unit of power.
Not necessary.

Well darn. Better tell my buddy to increase the size of his power supply then.
Neither card does much work. The fan is the only thing doing any work of any substance.
Yes, I know. This was also already discussed.
And a card can throw out more heat than its cooler is designed for (TDP) for short periods of time, because the sheer mass of the cooler can soak up non-sustained temperature spikes and then dissipate them over time. The heatsink acting as a temporary heat soak is accounted for when determining TDP.
Basically, as long as heat-output averages below the TDP (without the cooler reaching thermal saturation), the cooling solution is good to go and the rating is fine.
TDP is NOT power consumption.
TDP is NOT the maximum heat output of the components.
TDP IS how much heat the cooling system is designed to deal with.
Power consumption does NOT directly equal the heat output of a graphics card.
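The heat-soak argument above can be sketched with a toy model. The 165 W figure matches the GTX 980 TDP quoted earlier in the thread; the per-second power trace and the one-second time step are invented for illustration.

```python
# Toy heatsink model: excess heat (above what the cooler can shed, i.e. its
# TDP) accumulates in the heatsink's mass, then drains once power drops.
# As long as the average stays below TDP, the spike is absorbed.
TDP = 165.0                                  # watts the cooler sheds continuously
power = [165] * 10 + [200] * 5 + [125] * 5   # invented per-second draw: spike, then dip

stored = 0.0        # joules of excess heat currently held in the heatsink
peak_stored = 0.0
for p in power:
    stored = max(0.0, stored + (p - TDP))    # 1-second steps
    peak_stored = max(peak_stored, stored)

avg = sum(power) / len(power)
print(avg, peak_stored, stored)   # average below TDP; stored heat drains back to zero
```

So a brief 200 W spike on a 165 W-rated cooler is fine: the average draw stays under TDP, and the heatsink never reaches thermal saturation.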
If the heat from the gfx card isn't responsible for the removal of the power used, where and how does the power used get dissipated?
Perhaps it's one of those kilowatt LEDs attached to the gfx card?
Ah yes I forgot.
Red/Silver stripes are great, multiple 0.15mm thick with 20nm spacing work best.
Mantle is garbage, waste of time and money... Both of which are in short supply at AMD.
What were they thinking? Total failure on their part.
It's comforting to know every game I play will run ~20% slower than the equivalent Nvidia GPU, and AMD is too busy tweaking Mantle rather than fix their DX11/12 performance.
Thanks AMD.
The GTX 980 is an efficient card, and doesn't throw out as much heat (even when drawing similar amounts of power).
GTFO. My junk 2x 780s would like to have a word with you.