AMD GPU 14 Product Showcase Livestream

I'm worried we'll end up back in the same mess we were in in the '90s
with one good render path and the rest total shit
IF they even bother to put a DX path in the game
One big difference between now and the early/mid '90s is that there are only 2 major IHVs that really need any support from game and middleware developers. There were lots more IHVs back then; heck, even VIA was a serious player.

The other big one is that most developers don't write their games from the ground up anymore. They use 3rd party engines for the game itself, physics, audio, even stuff like trees.

These 3rd-party middleware developers will do most if not all of the work to support Mantle and TrueAudio, both for the consoles and the PC. It may very well end up being that for consoles the default API they target is Mantle, and then they'll port as necessary to DX11.x for general PC use...but still include the Mantle renderer, a la BF4, for AMD GPUs.

edit:
nV doesn't have their hardware in all of the consoles, though. Or any of them, for that matter. They have something like 60% of the PC add-in GPU market, which is nowhere near enough to push their own API. AMD has nearly all of the remaining 40% of the PC add-in GPU market, plus all the consoles.
 
It's a big deal for now because Frostbite 3 defaults to Mantle on AMD cards, and all AAA EA games going forward are going to use the Frostbite 3 engine.

I would expect that Nvidia will shortly team up with another engine publisher and announce a similar deal.
 
I'm worried we'll end up back in the same mess we were in in the '90s
with one good render path and the rest total shit
IF they even bother to put a DX path in the game

Nah, the main difference is that DICE and other developers wanted this to happen; they wanted it, so AMD gave it to them.
If you want it, then you're going to use it.

BF4 will likely achieve a nice boost with Mantle.
 
You too would feel fear if you knew where your little dream scenario was really going.



Obviously you have no idea what you're talking about. The whole point of a thin-to-the-metal API is that you remove abstraction! They can't really update the API without breaking all software using it!

Choices:

a) Push out a new, incompatible version of the API each hardware generation. Essentially, each game only works (on that path) on hardware from the generation when the game came out. Got a Pentium II running Windows 95 and a 3dfx card? Congratulations, you can play on the Glide path!

b) Keep future hardware generations close to GCN. Basically, set hardware progress back to keep existing software from breaking. Sounds great, right? Feeling the fear yet?

c) Update the driver layer to translate between hardware generations. Sacrifice performance to keep existing software from breaking. We're now in "this seemed like a good idea, but now we have another giant driver to maintain and why did we do this, again?" territory.

d) Abandon Mantle after a few years. It was fun guys, we're moving on.



Yeah, that's what I'm afraid of you dimwit! You people think I'm some sort of nVidia fanboy that's afraid AMD will pull ahead? You couldn't possibly be more wrong if you set up a fucking commission to determine how wrong someone could be.



Yeah, that sounds like a fucking dystopian hell-hole to me.

Really good post that points out the negatives.

What you're not taking into account, though, is that the developers are already using these tools and hardware, or something incredibly similar to them, on the consoles when developing anyway. It's not a case of "this is completely new to you" but rather "you have the option of using the same approach for the PC too." That's exactly what they've been asking for.

The architectural/hardware benefits are going to depend on just how low-level we're talking, aren't they? If AMD truly built something GCN 1.1-specific, then those with GCN 1.0 will likely still get something in the way of benefits, while those on VLIW4 are in the same position as nVidia: 'no difference.'

Given the nature of microarchitecture, it's very likely AMD has the next 3 generations of GPUs already mapped out and would like to see Mantle as being unified around the basic principles of GCN architecture. Whether that's true or not, I don't know. It's pure conjecture on my part, but I reckon it makes some sense. GPU architectures don't drastically change overnight, but they do tend to change more than CPU architectures (which at this point are just becoming wider or scaling up in core count/frequency).

This all depends on how AMD structured Mantle. If it's truly derived from the console low level API (or perhaps even the same thing?) then it's very likely that AMD will make adjustments and require developers to do a little bit more with each generation until they abandon GCN and therefore abandon Mantle.

It also highlights why I think this will in some ways be both 'not a big deal' and potentially quite big. On one hand, if entire engines are built with this in mind, then AMD GCN chips will get a natural performance bump compared to non-GCN cards in games developed on those engines (DICE's Frostbite). But given the limited scope of GCN on the desktop (nVidia still owns the lion's share of the discrete market and Intel owns the biggest portion of GPUs overall), most developers likely aren't going to bother. We'll get some games and engines that use Mantle past the console stage, but it's never going to take off like wildfire unless AMD owns an Intel-like stake in both the discrete and overall GPU markets.

I still see OpenGL as the go-to API here. It's clear that MS is dropping the ball and leaving gamers and enthusiasts out to dry, and their relevance in computing has taken a tumble. For a small minority of 'incentivized' developers, stretching Mantle past the console stage will mean a small portion of titles that run on all forms of hardware but just run better on AMD's GCN stuff.

This announcement isn't going to make high level APIs like OpenGL or DirectX disappear. The only way that AMD would get an 'AMD hardware only' game out is if AMD made that game themselves.
 
From the sound of it, since Mantle is compatible with all GCN hardware, I'd assume AMD is planning on building on the GCN architecture going forward; they would have to maintain some backwards compatibility to avoid breaking games.
 
What you're not taking into account, though, is that the developers are already using these tools and hardware, or something incredibly similar to them, on the consoles when developing anyway. It's not a case of "this is completely new to you" but rather "you have the option of using the same approach for the PC too." That's exactly what they've been asking for.

Apples and oranges. Console hardware doesn't change for ~7-8 years at a time. GPU architectures change much faster than that, every 2-3 years, similar to Intel's tick-tock cadence of architecture and die-shrink changes.

This seems like a short-term (one-generation) win for AMD and the engines written to it, and a longer-term failure from compatibility issues or lock-in.

Why not drive improvements in OpenGL and Direct3D instead? Why not take a multi-vendor standards approach to shortcutting driver abstractions for critical memory copy operations?
 
So I'm wondering -- with my 7970s -- will Mantle work? Since it's GCN.

With a driver update, and the software to take advantage of it, is there any reason this newfangled low-level stuff wouldn't work with my 7970?
 
So I'm wondering -- with my 7970s -- will Mantle work? Since it's GCN.

With a driver update, and the software to take advantage of it, is there any reason this newfangled low-level stuff wouldn't work with my 7970?

Depends on how low a level it is; if it's tuned for the new chip, it may not work at all on older hardware.
One of the drawbacks of a low-level API is that any major change in the hardware breaks the API.

Glide, while great on the Voodoo1, held 3dfx back down the road, as they had to keep their hardware close enough to the Voodoo1 that it would still work.

Mantle may be great while AMD rules the console roost, but if AMD either loses out next gen after this or gets forced by MS to make a major change to its GPUs, that will break the API and any chance of running Mantle-based games on future GPUs.
 
Apples and oranges. Console hardware doesn't change for ~7-8 years at a time. GPU architectures change much faster than that, every 2-3 years, similar to Intel's tick-tock cadence of architecture and die-shrink changes.

This seems like a short-term (one-generation) win for AMD and the engines written to it, and a longer-term failure from compatibility issues or lock-in.

Why not drive improvements in OpenGL and Direct3D instead? Why not take a multi-vendor standards approach to shortcutting driver abstractions for critical memory copy operations?

Apples and oranges :)

OpenGL and DirectX are compatible with a variety of hardware, but to get there, a significant amount of performance has to be left on the table. It's the reason a PC needs more powerful hardware to get FPS equivalent to a console at the same settings. The low-level APIs in consoles extract performance much more easily. There are fewer abstraction layers in a closer-to-metal approach, meaning fewer performance penalties.
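
To make the "fewer abstraction layers" point concrete, here's a purely hypothetical sketch; none of these types are real Mantle or DirectX interfaces. The idea is that a high-level API pays validation costs on every draw call, while a thin API lets the application record a command buffer once and submit it wholesale:

```cpp
// Hypothetical illustration of thick vs. thin API submission paths.
#include <cstdio>
#include <vector>

struct DrawCall { int meshId; int materialId; };

// High-level API style: the driver re-validates state on every call.
void submitHighLevel(const std::vector<DrawCall>& draws) {
    for (const DrawCall& d : draws) {
        // imagine state validation, shader patching, residency checks... per call
        std::printf("validated + submitted draw (mesh %d)\n", d.meshId);
    }
}

// Thin API style: the app records a command buffer once; submission is cheap.
struct CommandBuffer { std::vector<DrawCall> recorded; };

CommandBuffer record(const std::vector<DrawCall>& draws) {
    return CommandBuffer{draws};  // validation cost paid once, at record time
}

void submitThin(const CommandBuffer& cb) {
    std::printf("submitted %zu draws in one go\n", cb.recorded.size());
}

int main() {
    std::vector<DrawCall> frame = {{1, 1}, {2, 1}, {3, 2}};
    submitHighLevel(frame);     // N trips through the driver's slow path
    submitThin(record(frame));  // one cheap hand-off
}
```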

You can only improve DirectX and OpenGL so far (really, only OpenGL; Microsoft has pretty much given up on DirectX at this point). Ultimately, a lower-level API will always beat them with respect to performance.

Considering game development targets consoles first, having access to the same low-level API means developers create better console ports. Remember that consoles are using PC hardware anyway, so it's understandable that developers want access to the same tools on the PC.

The "lock in" starts at the console. It's always been like this, but it also has to be. You can't expect developers to use OpenGL and DirectX to create games when those two APIs were never built to squeeze out every last bit of performance from a given set of hardware anyway. If they were to use them, PC ports and console games both would look like utter shit

http://www.maximumpc.com/amd_r9_290x_will_be_much_faster_titan_battlefield_4

With AMD unveiling its new series of GPUs, many gamers want to know how well it performs, namely against Nvidia’s flagship GeForce GTX Titan graphics card.

We had a chance to sit down with AMD Product Manager Devon Nekechuck to see how AMD's new top dog R9 290X stacks up against the green team's best single-GPU offering. According to Nekechuck, even though the R9 290X uses a 438 mm² die, which is significantly smaller than the Titan's 550 mm² GK110 offering, it "will definitely compete with the GTX 780 and Titan." When we asked what this means in real-world terms, he stated, "with Battlefield 4 running with Mantle (AMD's new graphics API), the card will be able to 'ridicule' the Titan in terms of performance."

When we asked him what he meant by “ridicule,” he simply stated that it will run Battlefield 4 “much faster than the Titan.” Again, this is provided that you run the game using AMD's Mantle API, which is set to launch in December.

While no pricing has been announced for AMD's high-end GPU just yet, it is worth mentioning that Nekechuck did confirm to us that the company does not plan to release single-GPU cards in the $1,000 price range, because AMD thinks that is such a small, niche market. When you consider that the GeForce Titan runs for one grand, it's safe to assume that the R9 290X will be a fair bit cheaper than $1K. Rumors around the event are that the R9 290X will retail at around the 780 price point, which currently hovers around $650. Might we see Titan+ performance from AMD for less than the 780's price? Only time will tell.

This is an AMD marketer speaking so take this with copious amounts of salt, but the potential for the performance increase is very much real.
 
That's the big question now. It looks like the 290X will be ~$600, so if the 290 comes in $100 less, there's a big hole in the $300-$500 range. If they have a rabbit, that would be a good one to pull out: price it at $400, the GTX 770 becomes a "don't care," and AMD could own that space for a long time.



Does it look like the 290X will be $600, though? Does AMD really want that big of a gap in pricing? Based on all their other prices, they look to be targeting a card at every $100 price point. It would be a lot more elegant and seamless if they just sold the cards for $399/$499 respectively. The die is a good chunk smaller than Nvidia's offering on the top end; they have the economic breathing room to still make similar if not greater profit per unit AND move more units. It would be foolish not to take advantage of both pricing and relative performance, since they can afford to with lower base chip costs.

But perhaps that makes too much sense.
 
Does it look like the 290X will be $600, though? Does AMD really want that big of a gap in pricing? Based on all their other prices, they look to be targeting a card at every $100 price point. It would be a lot more elegant and seamless if they just sold the cards for $399/$499 respectively. The die is a good chunk smaller than Nvidia's offering on the top end; they have the economic breathing room to still make similar if not greater profit per unit AND move more units. It would be foolish not to take advantage of both pricing and relative performance, since they can afford to with lower base chip costs.

But perhaps that makes too much sense.

That's not the way economics works: when you have a hot new product the competition can't touch for six months minimum, you exploit the hell out of it.

It's like if Porsche came out with a new 911 that blows the doors off its direct competitor; they know they can sell it for $120k, but then suddenly decide to sell it for $99k. What sense would that make?

Of course I'd love the bleeding edge for $499 -- I doubt that's going to happen, though. I'm in for one at $599.

Honestly, though, if that Mantle rollout to BF4 is compatible with 7970s... I might just hold off.
 
Does it look like the 290X will be $600, though? Does AMD really want that big of a gap in pricing? Based on all their other prices, they look to be targeting a card at every $100 price point. It would be a lot more elegant and seamless if they just sold the cards for $399/$499 respectively. The die is a good chunk smaller than Nvidia's offering on the top end; they have the economic breathing room to still make similar if not greater profit per unit AND move more units. It would be foolish not to take advantage of both pricing and relative performance, since they can afford to with lower base chip costs.

But perhaps that makes too much sense.

The leaks indicated that the card would perform in DX11 games similarly to the Titan (trading blows, basically). That isn't taking Mantle into account, which has the potential to increase performance by 10x, but surely Mantle will be relegated to Glide-type status (i.e., very few games will use it). With that being the case (trading blows with the Titan in terms of performance), a $600 price tag is easily justifiable. Keep in mind that there is also a standard 290 SKU (as opposed to the 290X), which will be in the $400-500 price range.
 
Apples and oranges :)

OpenGL and DirectX are compatible with a variety of hardware, but to get there, a significant amount of performance has to be left on the table. It's the reason a PC needs more powerful hardware to get FPS equivalent to a console at the same settings. The low-level APIs in consoles extract performance much more easily. There are fewer abstraction layers in a closer-to-metal approach, meaning fewer performance penalties.

You can only improve DirectX and OpenGL so far (really, only OpenGL; Microsoft has pretty much given up on DirectX at this point). Ultimately, a lower-level API will always beat them with respect to performance.

You completely ignored my point, pitting low-level hardware access against the current state of DirectX or OpenGL. You're either ignoring, or are completely ignorant of, other possible approaches to providing abstractions while minimally affecting performance. I suggest you read up on the many breakthroughs that have been made in hardware virtualization that are allowing close-to-bare-metal performance for CPU, memory IO, network IO, even GPU IO. We're talking 10% overhead, not the 10x that the Mantle:DirectX comparison implies.
 
That's not the way economics works: when you have a hot new product the competition can't touch for six months minimum, you exploit the hell out of it.

It's like if Porsche came out with a new 911 that blows the doors off its direct competitor; they know they can sell it for $120k, but then suddenly decide to sell it for $99k. What sense would that make?

Of course I'd love the bleeding edge for $499 -- I doubt that's going to happen, though. I'm in for one at $599.

Honestly, though, if that Mantle rollout to BF4 is compatible with 7970s... I might just hold off.


4870, 5870, 6970?
All three were within striking distance of their competition yet cost considerably less.
 
4870, 5870, 6970?
All three were within striking distance of their competition yet cost considerably less.

I'm not in charge of AMD's financial or market strategy division -- they could surprise us all and release the top-of-the-line 290X for $500-550... I highly doubt that will happen, but they COULD.

Nobody knows what AMD's long-term goal is -- they could pull a power play and rake in the cash if they know they have a hit on their hands. Or they could play it cool and sell for a lesser price to try to make a big dent in market penetration. Nobody knows what direction they will take.

If you know you have the most badass card on the block... you can charge whatever you like and get away with it (that last part is the tricky part).

I'd love a 290X -- but if my 7970s are able to work with the Mantle API (which at this point I'm guessing yes, since AMD's slides said "works with all GCN chips"), then I might wait and just enjoy BF4 with my top-end 7000 series.
 
I'm not in charge of AMD's financial or market strategy division -- they could surprise us all and release the top-of-the-line 290X for $500-550... I highly doubt that will happen, but they COULD.

Nobody knows what AMD's long-term goal is -- they could pull a power play and rake in the cash if they know they have a hit on their hands. Or they could play it cool and sell for a lesser price to try to make a big dent in market penetration. Nobody knows what direction they will take.

If you know you have the most badass card on the block... you can charge whatever you like and get away with it (that last part is the tricky part).

I'd love a 290X -- but if my 7970s are able to work with the Mantle API (which at this point I'm guessing yes, since AMD's slides said "works with all GCN chips"), then I might wait and just enjoy BF4 with my top-end 7000 series.

At some point they stated they wanted to release smaller, more efficient chips for less and use more of them (multi-GPU) for the super high end.
 
You completely ignored my point, pitting low-level hardware access against the current state of DirectX or OpenGL. You're either ignoring, or are completely ignorant of, other possible approaches to providing abstractions while minimally affecting performance. I suggest you read up on the many breakthroughs that have been made in hardware virtualization that are allowing close-to-bare-metal performance for CPU, memory IO, network IO, even GPU IO. We're talking 10% overhead, not the 10x that the Mantle:DirectX comparison implies.

Hardware virtualization is just another penalty atop the broad brush strokes that DirectX takes. How is adding yet another abstraction layer working around the inherent difficulties of DirectX if you've still got to use DirectX?

"AMD has an interesting opportunity with Mantle because of their dual console wins, but I doubt Sony and MS will be very helpful," tweeted John Carmack, before adding, "Considering the boost Mantle could give to a Steambox, MS and Sony may wind up being downright hostile to it."

The ramifications could potentially go well beyond upsetting console platform holders. AMD has traditionally championed open source over proprietary code (think OpenCL vs. CUDA) but in the case of Mantle, the firm has been very specific about the fact that Mantle is designed around its own Graphics Core Next (GCN) architecture. In theory, this puts arch-rival Nvidia in a very difficult position. Potentially, we could see key games on AMD graphics cards significantly out-performing the same software running on more expensive Nvidia products. Even if Nvidia produces its own API, we have to wonder if there would be the appetite - or the budget - to support it. By our reckoning, developing Mantle versions won't be cheap, and will probably be limited to big budget games and middlewares like Unreal Engine 4.

Indeed, we could even see DirectX itself under threat. Indie game makers are more likely to target OpenGL as their API of choice as it allows them to port more easily across to Mac, SteamOS, iOS and Android. Competition from Mantle in the triple-A space could make things highly uncomfortable for Microsoft. AMD sources told us today that part of the problem they have faced historically, and which helped drive Mantle development, is that Microsoft is so focused on developing its operating systems, that DirectX development has been sluggish as a consequence.

http://www.eurogamer.net/articles/digitalfoundry-could-amd-mantle-revolutionise-pc-gaming

The timing is interesting. This very much sounds like what AMD purpose-built for the consoles being expanded to encompass the PC landscape, bypassing both DirectX and OpenGL in the process. Unlike OpenGL, DirectX is strictly MS-only and has lagged behind in development. This should have MS shivering in their booties. For lots of folks, that 'gaming issue' was the last hurdle on the way to Linux or OS X. I hope Microsoft takes note and realizes that they can't dangle DirectX carrots to sell their crappy operating systems anymore.
 
That isn't taking Mantle into account, which has the potential to increase performance by 10x,

10x? I don't think so. It's supposed to increase draw calls up to 9x, but draw calls are hardly the only work the CPU and GPU are doing.
 
10x? I don't think so. It's supposed to increase draw calls up to 9x, but draw calls are hardly the only work the CPU and GPU are doing.
Not to mention DirectX 10 and 11 also have features that let you uncap draw calls, which may be easier to implement than programming for bare metal.

DirectX 10 and 11 also allow multithreading of draw calls, so if you have a quad core with hyper-threading you're already pretty well off even if a developer abuses them...
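
For reference, a minimal sketch of that multithreaded recording in D3D11 via deferred contexts (this is the documented mechanism, but the snippet assumes a device and immediate context created elsewhere, and omits all pipeline-state setup):

```cpp
// Worker threads record command lists on deferred contexts; the main
// thread replays them on the immediate context.
#include <windows.h>
#include <d3d11.h>

// Worker thread: record draw commands into a deferred context.
ID3D11CommandList* RecordWorkerCommands(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return nullptr;

    // ... bind pipeline state and issue Draw()/DrawIndexed() calls here ...

    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList); // bake the recording
    deferred->Release();
    return commandList; // handed back to the main thread
}

// Main thread: replay each worker's command list in submission order.
void SubmitWorkerCommands(ID3D11DeviceContext* immediate,
                          ID3D11CommandList* commandList)
{
    immediate->ExecuteCommandList(commandList, TRUE);
    commandList->Release();
}
```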

And then there's DX10's instancing, where drawing multiple copies of the same object doesn't actually cost you additional draw calls. This allows you to do things like leave bullet casings / shells everywhere with no performance penalty, draw a room full of crates, a forest of trees, etc. Anything where the same model is repeated a lot = no additional overhead due to DirectX.
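
Roughly, that looks like one call in D3D10/11. A sketch (the function and parameter names are mine, and it assumes the mesh and per-instance transform buffers are already bound):

```cpp
// One instanced draw replaces N separate draw calls for identical meshes.
#include <windows.h>
#include <d3d11.h>

void DrawCasings(ID3D11DeviceContext* ctx,
                 UINT indexCountPerCasing, UINT casingCount)
{
    // Without instancing: casingCount separate DrawIndexed() calls, each
    // paying full runtime/driver overhead. With instancing: one call; the
    // GPU pulls a per-instance world matrix from an instance vertex buffer.
    ctx->DrawIndexedInstanced(indexCountPerCasing, // indices per copy
                              casingCount,         // number of copies
                              0,                   // start index
                              0,                   // base vertex
                              0);                  // start instance
}
```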
 
The hardware virtualization is just another penalty atop the broad brush strokes that DirectX takes. How is adding yet another abstraction layer working around the inherent difficulties with DirectX if you've still got to use DirectX?

Re-read what I wrote. I didn't suggest that there should be another abstraction on top of the DirectX abstraction. I suggested that the breakthroughs being used in hardware virtualization (VT-x/VT-d, IOV, etc.) represent a possible path forward, possibly as extensions of DirectX and/or OpenGL.

Abstraction layers are clumsy at call queueing and multi-copy memory operations (like between GPU and main mem). By defining standard structures for DMA and queueing that are supported across platforms, much of the really expensive overhead of the abstractions can be circumvented for high-throughput or high-ops scenarios, while maintaining all the compatibility advantages of high level abstractions for the low-throughput or low-ops functions. From a GPU implementation standpoint, vendor x can still maintain their architecture intact with minor changes to a limited number of inputs on their front side, OR implement a translation shim in hardware.
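
As a purely hypothetical sketch of what such "standard structures for DMA and queueing" could look like (no such cross-vendor structures exist today; every name here is invented): a memory-mapped submission ring that the application writes into directly, which the GPU front end consumes natively or through a thin vendor shim:

```cpp
// Hypothetical vendor-neutral submission ring: the fast path is a plain
// memory write, bypassing the per-call overhead of the high-level API.
#include <atomic>
#include <cstdint>

// One standardized entry: a DMA copy or a packet of GPU commands.
struct QueueEntry {
    enum class Op : uint32_t { DmaCopy, CommandPacket };
    Op       op;
    uint64_t srcAddr;     // source (GPU or pinned system memory)
    uint64_t dstAddr;     // destination
    uint32_t sizeBytes;   // payload size
    uint32_t fenceValue;  // completion fence the app can wait on
};

// Shared ring: the app advances writeIndex; the hardware/shim advances readIndex.
struct SubmissionRing {
    static constexpr uint32_t kCapacity = 1024;
    QueueEntry            entries[kCapacity];
    std::atomic<uint32_t> writeIndex{0};
    std::atomic<uint32_t> readIndex{0};
};

// App-side enqueue: no driver transition, just a write plus an index bump.
inline bool Enqueue(SubmissionRing& ring, const QueueEntry& e) {
    uint32_t w = ring.writeIndex.load(std::memory_order_relaxed);
    if (w - ring.readIndex.load(std::memory_order_acquire) >= SubmissionRing::kCapacity)
        return false;  // ring full; fall back to the normal API path
    ring.entries[w % SubmissionRing::kCapacity] = e;
    ring.writeIndex.store(w + 1, std::memory_order_release);
    return true;
}
```

Low-throughput work would stay on the existing high-level path; only the hot loops would use the ring.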

Have you ever written a device driver?
 
Re-read what I wrote. I didn't suggest that there should be another abstraction on top of the DirectX abstraction. I suggested that the breakthroughs being used in hardware virtualization (VT-x/VT-d, IOV, etc.) represent a possible path forward, possibly as extensions of DirectX and/or OpenGL.

We're at a point where GPU virtualization is still essentially one GPU per client. The rumblings of 'multiple clients per GPU' are essentially still in alpha. You'll have to excuse my doubts here, given that we've seen very little activity on this front from either company, and that's something virtualization has been in dire need of for years.

Abstraction layers are clumsy at call queueing and multi-copy memory operations (like between GPU and main mem).

This is already complicated as AMD pushes towards a unified memory architecture. Unless I'm mistaken, both consoles are using the same approach: main memory and GPU share the same pool.

By defining standard structures for DMA and queueing that are supported across platforms, much of the really expensive overhead of the abstractions can be circumvented for high-throughput or high-ops scenarios, while maintaining all the compatibility advantages of high level abstractions for the low-throughput or low-ops functions. From a GPU implementation standpoint, vendor x can still maintain their architecture intact with minor changes to a limited number of inputs on their front side, OR implement a translation shim in hardware.

You're talking about standardizing, and releasing details of, a significant portion of their architectures. No offense, but this is a pipe dream. Even on the CPU side we see that that's not the case -- do FMA3/4 and SSE2 ring a bell? (And that's WITH a cross-licensing agreement.)

I do wish it came to this and some sort of standard was adopted such that we could simply do away with as many abstraction layers as possible, but that's asking a lot of 3 corporations that hate each other, never mind the onslaught of ARM manufacturers.

In some respects, AMD and others have already taken some significant strides



And there are some really big names that have joined them in pushing HSA, so there are at least certain hardware aspects that can be bridged via a unified approach. Frankly, HSAIL has more potential to do what you're hoping for than any dumb low-level unified API between AMD and nVidia. But that's not what's in the consoles and therefore not what we'll see via Mantle.

Have you ever written a device driver?

No. Don't ever intend to :)
 
We're at a point where GPU virtualization is still essentially one GPU per client. The rumblings of 'multiple clients per GPU' are essentially still in alpha. You'll have to excuse my doubts here, given that we've seen very little activity on this front from either company, and that's something virtualization has been in dire need of for years.

Agreed, although I was using IOV as a figurative example rather than a literal one. Virtualization for the purpose of multi-client use doesn't benefit gaming AFAIK, although the performance speedup of hybrid DMA+abstraction even for single-client applications is significant (up to 3x) versus fully abstracted. Still kind of tangential.

This is already complicated as AMD pushes towards a unified memory architecture. Unless I'm mistaken, both consoles are using the same approach: main memory and GPU share the same pool.

Agree, and Intel is pushing towards unification as well. High-level APIs will have to address it soon.

You're talking about standardizing, and releasing details of, a significant portion of their architectures. No offense, but this is a pipe dream. Even on the CPU side we see that that's not the case -- do FMA3/4 and SSE2 ring a bell? (And that's WITH a cross-licensing agreement.)

Actually, no. Just standardizing on a few frontend inputs doesn't expose their architectures in significant ways -- open-source drivers have been exploring these frontend inputs anyway, and it's not a critical leak. Some IHVs already document their proprietary frontend inputs.

I do wish it came to this and some sort of standard was adopted such that we could simply do away with as many abstraction layers as possible, but that's asking a lot of 3 corporations that hate each other, never mind the onslaught of ARM manufacturers.

In some respects, AMD and others have already taken some significant strides

And there are some really big names that have joined them in pushing HSA, so there are at least certain hardware aspects that can be bridged via a unified approach. Frankly, HSAIL has more potential to do what you're hoping for than any dumb low-level unified API between AMD and nVidia. But that's not what's in the consoles and therefore not what we'll see via Mantle.

Agree, especially on ARM-related GPU licensees and cross-platform support. SoCs are another unified-architecture story developing quickly.

I think we mainly agree that "it would be good" to get some virtualized GPU ISA going. Of course we're not seeing that in Mantle; I'm not sure why you explicitly pointed that out. I'm not sure HSAIL is the right community to make it happen, though -- it seems to be very mobile-focused and largely ignores console and PC. Maybe we'll see two stages: a new version of DirectX+DirectCompute / OpenGL+OpenCL which exposes more of the frontend, then later a merge with the mobile world. That would be a shame, but preferable to fracturing into separate AMD, NV, Intel, Imagination, Apple implementations.
 
Agree, especially on ARM-related GPU licensees and cross-platform support. SoCs are another unified-architecture story developing quickly.

I think we mainly agree that "it would be good" to get some virtualized GPU ISA going. Of course we're not seeing that in Mantle; I'm not sure why you explicitly pointed that out. I'm not sure HSAIL is the right community to make it happen, though -- it seems to be very mobile-focused and largely ignores console and PC. Maybe we'll see two stages: a new version of DirectX+DirectCompute / OpenGL+OpenCL which exposes more of the frontend, then later a merge with the mobile world. That would be a shame, but preferable to fracturing into separate AMD, NV, Intel, Imagination, Apple implementations.

The console APUs are very close to full-fledged SoCs themselves.

I see those two roads converging far sooner than others do. Intel has been clearly focused on mobile while AMD seems to be spearheading gaming/GPU. Oddly enough, OpenCL was initially Apple's endeavor :eek: It's nVidia that's really looking to be the odd man out in the grander scheme of things. The only discrete GPU maker that can challenge them with a proprietary CUDA-like ISA and accompanying hardware to leverage it is AMD, and they're fully backing OpenCL.
 
Two posts from Beyond3D, one directly from an AMD spokesman, should clear up quite a bit:

Andrew Lauritzen

I don't believe it, or rather I'm sure there was a miscommunication/misinterpretation involved. Even if AMD intends to publicly disclose the API/specification, they have made it very clear that it is designed to target GCN... not older AMD architectures, definitely not other hardware vendor's architectures. Besides, if they were really intending to make it portable to other hardware there would be no reason to not propose it to Microsoft/Khronos and try to do it through changes to DX/GL first. To my knowledge, they haven't done anything of the sort.

Feel free to correct me if I'm wrong Dave, but I doubt it.

Dave Baumann (from AMD)

To your knowledge.

We are taking the first step with Mantle, albeit one that already has direct target implementations with developers. HSA started life as something very specific to AMD's Fusion direction and now has fairly broad industry support.

So this looks like it is, in fact, the start of a much broader lower-level API approach that could likely entail other hardware vendors. (nVidia?)

Secondly, with respect to "open," they meant open for developers to use and cross-platform, not open with respect to other vendors' hardware. Mantle is GCN-specific, therefore any sort of licensing would be fruitless -- which makes sense considering this is likely what we're seeing the consoles, and the developers for the consoles, using to make the games. An 'nVidia Mantle' isn't going to happen. A 'Mantle in place of DirectX' or 'Mantle in place of OpenGL,' stretching across various OSes/platforms, is pretty certain, I'd say.
 
You completely ignored my point, pitting low-level hardware access against the current state of DirectX or OpenGL. You're either ignoring, or are completely ignorant of, other possible approaches to providing abstractions while minimally affecting performance. I suggest you read up on the many breakthroughs that have been made in hardware virtualization that are allowing close-to-bare-metal performance for CPU, memory IO, network IO, even GPU IO. We're talking 10% overhead, not the 10x that the Mantle:DirectX comparison implies.

You can type all you want, but it doesn't make it so.

MANTLE = thin-layer API.

You can pretend that you can make those optimizations work under OpenGL or DirectX, but they will never be there. The last few revisions of DX have been lackluster to say the least, and all MS wants to do is pretend that you need Windows 8 for it to work.

With a company at the helm of DirectX that really does not care about a thin layer, why would they start now all of a sudden?

Like it or not, as soon as it starts paying off, it will only change things for the few games that need it; the rest can use DX or OpenGL.

It is purely a situational approach, not something that would dominate the market. Two birds, one stone, so to speak: build for the console, workable on the PC. AMD could not have done it any better.
 
AMD, and they're fully backing OpenCL.

AMD is "fully backing OpenCL". And that's why they implemented TressFX w/ DirectCompute. Another example of how actions divide from bullet-point slides.

So this looks like it is, in fact, the start of a much broader lower-level API approach that could likely entail other hardware vendors. (nVidia?)

Secondly, with respect to "open," they meant open for developers to use and cross-platform, not open with respect to other vendors' hardware. Mantle is GCN-specific, therefore any sort of licensing would be fruitless -- which makes sense considering this is likely what we're seeing the consoles, and the developers for the consoles, using to make the games. An 'nVidia Mantle' isn't going to happen. A 'Mantle in place of DirectX' or 'Mantle in place of OpenGL,' stretching across various OSes/platforms, is pretty certain, I'd say.

I don't take his words too seriously. Well, he does confirm what we already know: that Mantle is directly tied to GCN, so the idea of other vendors just hopping into place is silly talk. But him talking about "ohh yeah... in the future though... it'll be a lot broader"? Ohh sure, when other vendors go down the same path and roll their own solutions. But "a Mantle in place of OpenGL / Direct3D" doesn't really make any sense. Those are abstract APIs that are implementation-agnostic (at the architecture level). The concept is antithetical to a low-level API like Mantle.

If they want something lighter, just lean out OpenGL. But otherwise these are two different animals.

And you have to understand how AMD uses various terms. To them, TressFX was "open". So open that it runs on one OS: Windows.
 
TressFX, Mantle, and the TrueAudio API are open in the sense that paying AMD for a license -- or any licensing for that matter -- is not required. Any other IHV could put out hardware that is compatible with them and take advantage of games that use them.

Unlike PhysX, which requires money plus some sort of licensing deal with nV for another IHV to use. As a consumer, that means you're forced to buy nV hardware. That is not necessarily true of the hardware features and APIs that AMD is introducing here.

Sure, for now it's AMD-only, but nV could produce a GPU that is compatible with Mantle if they wanted. Creative or ASUS could put out a compatible audio controller if they wanted to as well. There might be some practical issues with that, since those add-in cards would have to work over the PCIe bus, but still, that isn't AMD's fault at all.
 
Sure, for now it's AMD-only, but nV could produce a GPU that is compatible with Mantle if they wanted. Creative or ASUS could put out a compatible audio controller if they wanted to as well. There might be some practical issues with that, since those add-in cards would have to work over the PCIe bus, but still, that isn't AMD's fault at all.

The shortest turnaround for Nvidia to pump out a GCN-based card... five years.
Maxwell is already in the pipe, along with whatever comes after it plus its respin.
Intel also can't just turn on a dime; look how long it took them to move off of NetBurst.

More likely NV will make their own API, if anything; more likely still, they'll push OpenGL harder, since they are already teaming up with Valve on that front.
 
If AMD plays its cards right, they could own the graphics market for the next several years. I'm OK with that after some of the shit Nvidia has pulled. That new API is truly impressive, if they can pull it off.
 
AMD is "fully backing OpenCL". And that's why they implemented TressFX w/ DirectCompute. Another example of how actions divide from bullet-point slides.

AMD was the founding member of the HSA Foundation, a group whose goal is to unify a good portion of what we're seeing in Mantle and DirectX/OpenGL across not just GPU vendors but ARM SoC vendors as well.

And I'd love for TressFX to work well with OpenGL, but the reality is that the most popular high-level API is DirectX, and it will be for the (hopefully short) foreseeable future.

I don't take his words too seriously. Well, he does confirm what we already know: that Mantle is directly tied to GCN, so the idea of other vendors just hopping into place is silly talk. But him talking about "ohh yeah... in the future though... it'll be a lot broader"? Ohh sure, when other vendors go down the same path and roll their own solutions. But "a Mantle in place of OpenGL / Direct3D" doesn't really make any sense.

It does. It's exactly what developers have been asking AMD for, and now that AMD has all three consoles (and plays a part in SteamOS), you have to figure that it makes perfect sense. Porting to the PC can't be made any easier than when developers are dealing with the same PC hardware/IP blocks and (roughly) the same tools. That makes a low-level API that they've been using on the consoles a real possibility on the PC side as well, and consequently a good portion of the PC gaming audience that uses GCN will benefit from it.

The reason Glide didn't take off was mostly too many vendors and no unifying theme. There were 5+ GPU makers, each with their own vastly different architecture and nothing to tie them all together. That's not the case today, though. Today there are only two discrete GPU makers, and only one of them targets the actual development platform and primary audience: the consoles. Furthermore, the go-to API of choice is a high-level API, DirectX. That won't disappear. Developers aren't going to suddenly start coding for Mantle and forget everybody else.

(Most of the above isn't directed at you, but rather at the folks who still think Mantle will wholly replace DirectX or OpenGL. If they still believe that, then I've got a few investment opportunities they might want to take a look at.)

Those are abstract APIs that are implementation-agnostic (at the architecture level). The concept is antithetical to a low-level API like Mantle.

If they want something lighter, just lean out OpenGL. But otherwise these are two different animals.

This isn't possible, as you're still dealing with IHVs that you'd need to kick out in order to 'lean it out' in the first place. You can make OpenGL better, but it takes an entire consortium, and hardware compatibility is a must. What's the point, then?

They serve different purposes. A low-level API like Mantle is there to target a very specific hardware configuration. DirectX is there for Microsoft to sell Windows licenses :p OpenGL is a high-level API, like DirectX, that's there to unify development across a plethora of hardware configurations (even mobile), but at the cost of a pretty significant performance penalty.

Now if you're in AMD's and the developers' shoes:
- You see that DirectX development is being dangled like a carrot and that there will be no DirectX 12 (per a Microsoft internal email circulated last year)
- You've already got the consoles bagged. It's your hardware that's being developed for over the next 5+ years.
- PC gamers are getting the short end of the stick. Using a strictly high-level API to target the same hardware found in the consoles eradicates any potential gains developers could have seen otherwise. That really sucks.
- OpenGL, though it ties a lot of strings together, doesn't offer anywhere near the performance and flexibility required for the consoles.


Why not give developers and PC gamers what they want and extend this low level API beyond the consoles? You're using roughly the same hardware anyway.

There's no way we're going to see a lot of titles support Mantle. Instead it'll likely be a "Gaming Evolved+" situation, where there are slightly more titles and they get more of a performance boost. It'll take a lot of effort on AMD's part to get developers to do some extra work, despite that work now being a lot less than at any point in console/PC porting history. Targeting the game engines, like Frostbite, is the right way to go about it if they want to see larger adoption. Something like 10+ games are being built atop that engine, so it's not a huge stretch to say we'll see 10+ titles utilizing Mantle.

nVidia has lost here. In order for developers to use tools similar to AMD's for the PC, nVidia will have to throw an insane amount of money at the developers - and even then the developers still have to use AMD's tools/hardware for the consoles. That's a huge problem that I'm not sure they're going to tackle. Convincing developers to do a little bit more work (Mantle) that's close to the work they've already been doing (consoles) is difficult enough, but convincing them to use a completely different set of hardware and a different API that's completely optional and makes no impact on console sales whatsoever is like asking Sisyphus if he needs a break because you'd like to cover for him.

And you have to understand how AMD uses various terms. To them, TressFX was "open". So open that it runs on one OS: Windows.

"Open" was meant as in cross-platform. They stated this on their slides. It's Windows initially (again, DirectX is still king unfortunately), but because it's a very low level API that means it can be ported to various OSes without too much effort. Certainly a lot less effort than it would take DirectX to be ported to another OS.

For you nVidia/Intel folks and AMD folks that don't use a GCN card, nothing changes. You'll still use DirectX or OpenGL just like you did before. For people with a GCN card, you can download a patch or change the setting in-game to "Mantle" and reap some free benefits. For those that date back to pre-2000, you'll remember when we had to pick our API of choice in-game.
 
For you nVidia/Intel folks and AMD folks that don't use a GCN card, nothing changes. You'll still use DirectX or OpenGL just like you did before. For people with a GCN card, you can download a patch or change the setting in-game to "Mantle" and reap some free benefits. For those that date back to pre-2000, you'll remember when we had to pick our API of choice in-game.

Waiting for benchmarks with Mantle is killing me.
;)
 
AMD is "fully backing OpenCL". And that's why they implemented TressFX w/ DirectCompute. Another example of how actions divide from bullet-point slides.



I don't take his words too seriously. Well, he does confirm what we already know: that Mantle is directly tied to GCN, so the idea of other vendors just hopping into place is silly talk. But him talking about "ohh yeah... in the future though... it'll be a lot broader"? Ohh sure, when other vendors go down the same path and roll their own solutions. But "a Mantle in place of OpenGL / Direct3D" doesn't really make any sense. Those are abstract APIs that are implementation-agnostic (at the architecture level). The concept is antithetical to a low-level API like Mantle.

If they want something lighter, just lean out OpenGL. But otherwise these are two different animals.

And you have to understand how AMD uses various terms. To them, TressFX was "open". So open that it runs on one OS: Windows.

Good post. Also, sure, PhysX is "proprietary," which I hear many people complain about, but it's not like you can't just run a game with software PhysX and enjoy it the exact same way, just without some extra eye candy. Try that if a Mantle-only game comes out ;).
 
In all seriousness, if you think developers are going to target only <3% of the entire PC gaming market, then I'd recommend you get your head checked. It sounds like you may have taken a severe blow to the head.
 
mrmandude, how much is AMD paying you?
It's funny you show up right when this all comes out.
 
mrmandude, how much is AMD paying you?
It's funny you show up right when this all comes out.

I haven't read any of his posts, but raghu76 appeared around a different launch referring to AMD as "my employer" and "our company"; he's been posting on various forums since then, after quickly ditching that terminology, under the guise of an educated fan defending a corporation from unjust words :p. He hangs out here, on OCN, and a couple of other places...
 
mrmandude, how much is AMD paying you?
It's funny you show up right when this all comes out.

They aren't, actually. I'm buds with a developer ;) And I try to ignore these forums as they tend to be filled with people who know little but spread a lot of uneducated opinions as if they were fact (read: you). People here are emotionally invested in shiny plastic, and I find that to be highly disturbing.
 