RX490 Speculation - Vega 10, not Polaris.

Assignment is defined by the programmer for each and every task that requires it.

You're not addressing the question. How will the programmer know which task to assign to which GPU? You can use the very simple example I laid out earlier.

In reality there are hundreds of tasks and dozens of hardware configurations and resolution/quality settings to deal with. As a programmer, are you going to specify an allocation for every possible scenario?
 
You're not addressing the question. How will the programmer know which task to assign to which GPU? You can use the very simple example I laid out earlier.

In reality there are hundreds of tasks and dozens of hardware configurations and resolution/quality settings to deal with. As a programmer, are you going to specify an allocation for every possible scenario?
And not all of them are time sensitive, and since it's basically up to the engine what does what exactly, I can't possibly guess where the priorities lie; in different engines it can be different tasks. This is nothing new - this was already discussed ages ago in the Mantle Q&A session at GPU13 ....
Where they used an AMD APU with a dedicated GPU as an example and said that the APU would do excellent post-processing (from Johan Andersson (DICE)).
 
You're not addressing the question. How will the programmer know which task to assign to which GPU? You can use the very simple example I laid out earlier.

In reality there are hundreds of tasks and dozens of hardware configurations and resolution/quality settings to deal with. As a programmer, are you going to specify an allocation for every possible scenario?
If your model is multiple chips acting more like one chip, transparent to the application, for example:
  • Two or more GPUs on an interposer with massive bandwidth/communications between each other (why else would you really need 1TB/s+ bandwidth?)
  • Liken it to the four ACEs, but the four ACEs become four separate chips on one memory bus
  • Since programmers do not have to program differently for smaller GPUs with fewer or more stream processors, having multiple chips on a common memory configuration could look virtually the same to the program, whether it is one chip or multiple chips
Currently dual GPUs are really not well connected and are separate entities - future designs could be transparent in operation and well connected.
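For what it's worth, DX12 already has a software-side cousin of that idea: the linked-node adapter, where several physical GPUs sit behind a single ID3D12Device and work is steered with node masks. A minimal sketch of how that could look to the program (error handling omitted, and the per-node queue loop is purely illustrative):

```cpp
// Sketch: several linked GPUs exposed as one logical D3D12 device.
// Assumes the GPUs are configured as a linked-node adapter; error handling omitted.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);                 // first hardware adapter

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One device, possibly several physical GPUs ("nodes") behind it.
    UINT nodeCount = device->GetNodeCount();

    // Work is steered to a particular GPU with a node mask (one bit per node).
    for (UINT node = 0; node < nodeCount; ++node) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node;                      // this queue lives on GPU 'node'
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        // ...record and submit per-node work here...
    }
    return static_cast<int>(nodeCount);
}
```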
 
I still speculate Vega 11 will be a 14nm updated Hawaii, and Vega 10 will be a 14nm updated Fiji with HBM2.
 
If your model is multiple chips acting more like one chip, transparent to the application, for example:
  • Two or more GPUs on an interposer with massive bandwidth/communications between each other (why else would you really need 1TB/s+ bandwidth?)
  • Liken it to the four ACEs, but the four ACEs become four separate chips on one memory bus
  • Since programmers do not have to program differently for smaller GPUs with fewer or more stream processors, having multiple chips on a common memory configuration could look virtually the same to the program, whether it is one chip or multiple chips
Currently dual GPUs are really not well connected and are separate entities - future designs could be transparent in operation and well connected.

That's one way to crack the problem. Essentially NUMA for GPUs. This is what nVidia is trying to do with NVLink but bandwidths are still pretty low.

It would also require OS and API support which is nowhere in sight.

For now we will have to rely on developers taking on the almost impossible task of writing engines that can scale consistently over multiple generations of GPUs from multiple vendors. Very few will even attempt to do so.
 
Ha, you must be from the future.
Some days it feels that way, I could swear I had done the exact same thing on those days some time in the past (minus a few discrepancies). D:
Sounds like Navi (what's after Vega): dispatching to multiple small rendering units using a new hybrid memory (from what I read)

I sat down over lunch and tried to draw the logic blocks of the pipe and how the hardware would be laid out, plus the problems with concurrent requests and synchronizing them.

All of that would have to be handled via some complex scheduler.
Sounds interesting. Is that your hobby or work? I'd like to work on stuff like that, but despite my sound logic, my dyslexia always seems to get in the way.
 
An interesting take on a series of evidence but when strung together by WTF Tech... We'll see.
 
Their last dual card was the Radeon Pro Duo; not too sure which user segment this card would benefit. Don't see AMD selling "this" card for $350.

That would pretty much make it stillborn as it would put it up against the 1070 at >$350. And we already saw the CF RX480 against a 1080 and how badly it lost.
 
That would pretty much make it stillborn as it would put it up against the 1070 at >$350. And we already saw the CF RX480 against a 1080 and how badly it lost.
So negative... Not sure what garners so much hate for you to constantly complain so. Dual-GPU cards are never high volume and only really serve as a talking piece. But for a great deal of those with single-slot mobos or constrained spaces, these cards are somewhat appealing. And even though not really the most efficient purchase, some people buy just to buy.
 
So negative... Not sure what garners so much hate for you to constantly complain so. Dual-GPU cards are never high volume and only really serve as a talking piece. But for a great deal of those with single-slot mobos or constrained spaces, these cards are somewhat appealing. And even though not really the most efficient purchase, some people buy just to buy.

Not negative. Just sound reasoning.
 
Don't ever believe anything WCCFTECH posts. I call it WTFtech.com. Seriously, I go there to laugh at their articles and to see how they just put their imagination down on paper and make things up as they go. They are a perfect example of shooting in the dark and believing you will eventually hit your target! I am sure they get 1 out of 20 stories right.

The main guy quit posting for a bit after he got called out and was put to shame over his constant hyping of the RX 480. No wonder AMD tried to get him to shut up and sign an NDA and gave him a card. The guy was posting anything and everything to get hits.
 
Raja said some pretty interesting things about mGPU in interviews. It's entirely feasible they will go this path sooner than later.
 
Raja said some pretty interesting things about mGPU in interviews. It's entirely feasible they will go this path sooner than later.

Well yes and no

Rise of the Tomb Raider, Explicit DirectX 12 MultiGPU, and a peek into the future

The problem in the past was that they left it up to the drivers to handle multiple GPUs. Let me give you an example of how this is problematic.

There are multiple multi-GPU rendering techniques. The most common are AFR and ALR; split-ratio rendering is still in use but not as common.

So let's take the first case. GPU 1 handles frame 1. GPU 2 handles frame 2.

Well, let's say you implement some crazy fun post-processing effects like motion blur. (Post-process is one way to handle it, but it isn't the best.) How does motion blur work? By comparing the current frame's data to the previous frame's data. Problem is, card 1 doesn't have access to card 2's data and vice versa. So the driver has to know when these effects take place and manually intervene. Reverse engineering code is no fun, let me tell ya.
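Rough sketch of that AFR dependency in code; everything here (Frame, renderScene, motionBlur) is a made-up name, purely to show where the cross-GPU read happens:

```cpp
// Toy AFR loop: even/odd frames alternate between two GPUs, but the motion blur
// pass on frame N needs frame N-1, which the *other* GPU rendered.
struct Frame { int ownerGpu; /* color + velocity buffers would live here */ };

Frame renderScene(int gpu, int frameIndex) {
    (void)frameIndex;
    return Frame{gpu};                                   // buffers live in this GPU's memory
}

Frame motionBlur(int gpu, const Frame& cur, const Frame& prev) {
    // Needs prev's buffers. If prev.ownerGpu != gpu, that data sits in the other
    // card's memory, and someone (a driver profile, or the app) must copy it over first.
    (void)gpu; (void)prev;
    return cur;
}

int main() {
    Frame previous{-1};
    for (int i = 0; i < 4; ++i) {
        int gpu = i % 2;                                 // GPU 0, GPU 1, GPU 0, GPU 1, ...
        Frame current = renderScene(gpu, i);
        Frame blurred = motionBlur(gpu, current, previous);  // the cross-GPU dependency
        previous = current;                              // becomes next frame's history
        (void)blurred;                                   // a real renderer would present this
    }
    return 0;
}
```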

So this is why SLI and CrossFire generally suck.

So Microsoft is going, "Look, this is a pain in the duckass. So we're going to create low-level extensions to access the cards directly." You can know whether there are multiple GPUs and access the resources on each accordingly. So if I need frame 1's data on GPU 2... no problem. This is also why you can treat memory as unified across multiple GPUs.
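In rough code terms, the "access the cards directly" part under DX12 explicit multi-adapter looks something like this - just a sketch, with error handling and the actual cross-adapter copies (shared heaps + fences) left out:

```cpp
// Sketch: DX12 explicit multi-adapter - one ID3D12Device per physical GPU,
// so the app (not the driver) decides which GPU gets which work.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;   // skip WARP/software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);                           // one device per GPU
    }

    // With separate devices, frame N-1's buffers on GPU 0 can be copied to GPU 1
    // explicitly and on the app's schedule, instead of the driver guessing when.
    return static_cast<int>(devices.size());
}
```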

Problem is, this puts the onus on game devs to implement. Is it that bad to implement? It makes the overall process easier and scales better, BUT most game houses don't want to put in the extra work necessary for such a small audience. (That's my humble opinion.)

That said, there is a silver lining in that it is possible to write a really robust game engine that harnesses mGPU really well behind a custom wrapper API with very little end-developer work.
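Something along these lines - purely hypothetical names and a deliberately dumb round-robin policy, just to show the shape such a wrapper could take:

```cpp
// Hypothetical engine-side wrapper: game code submits work, the engine decides
// whether one or several GPUs actually run it.
#include <cstddef>
#include <cstdint>
#include <vector>

struct RenderTask { std::uint32_t id; /* draw lists, targets, etc. */ };

class IGpuBackend {                        // engine-internal, one per physical GPU
public:
    virtual ~IGpuBackend() = default;
    virtual void submit(const RenderTask& task) = 0;
    virtual void waitIdle() = 0;
};

class MultiGpuWrapper {                    // what the end developer actually sees
public:
    // Assumes at least one backend is supplied.
    explicit MultiGpuWrapper(std::vector<IGpuBackend*> gpus) : gpus_(std::move(gpus)) {}

    // Game code calls this; it neither knows nor cares how many GPUs exist.
    void submit(const RenderTask& task) {
        gpus_[next_++ % gpus_.size()]->submit(task);     // naive round-robin distribution
    }

    void finishFrame() {
        for (IGpuBackend* gpu : gpus_) gpu->waitIdle();  // keep the GPUs in step per frame
    }

private:
    std::vector<IGpuBackend*> gpus_;
    std::size_t next_ = 0;
};
```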
 
What I'm more interested in is Navi in this regard. I'm still trying to work out in my head how the pipeline works.

Multiple concurrent accesses are great for split CUs/SPs/ROPs. For the scheduler it's a nightmare. GPU 1 does pixels 1 to 100. GPU 2 does pixels 101 to 200. But GPU 2 needs pixel 100's data. Normally this is just a small latency on a ring bus in a GPU with a cache. But add in multiple mini renderers and you have large latency issues.
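A toy, single-machine version of that pixel split, just to show where the boundary dependency bites (the "two GPUs" here exist only in the comments):

```cpp
// Toy illustration of the split above: "GPU 1" owns pixels 1-100, "GPU 2" owns
// pixels 101-200, but the filter for pixel 101 also reads pixel 100.
#include <array>
#include <cstdio>

int main() {
    std::array<int, 201> pixels{};                    // indices 1..200 used
    for (int p = 1; p <= 200; ++p) pixels[p] = p;     // pretend both halves are rendered

    // "GPU 2" runs a neighbour-dependent pass over its half (pixels 101-200).
    for (int p = 101; p <= 200; ++p) {
        int left = pixels[p - 1];                     // for p == 101 this is pixel 100,
                                                      // which lives in "GPU 1's" memory
        pixels[p] = (left + pixels[p]) / 2;           // on one chip: a cheap ring-bus/cache hop;
                                                      // across chips: a slow inter-GPU fetch
    }
    std::printf("pixel 101 after filtering: %d\n", pixels[101]);
    return 0;
}
```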
 
Frankly this always should have been on the devs to implement. No real way around it.

That said, All we really need is the 4 major engines (UE4, CryEngine, Frostbite, Unity) to bake this support in, and the rank and file who use those engines will simply have it available to them to use.
 
Frankly this always should have been on the devs to implement. No real way around it.

That said, All we really need is the 4 major engines (UE4, CryEngine, Frostbite, Unity) to bake this support in, and the rank and file who use those engines will simply have it available to them to use.

As long as it's a switch for them to flip, then there shouldn't be an issue. Every game developer wants to patch their games after launch anyways as the early adopters are Beta testers. Let them test the mGPU support while they are testing the rest of the engine.
 
Frankly this always should have been on the devs to implement. No real way around it.

That said, All we really need is the 4 major engines (UE4, CryEngine, Frostbite, Unity) to bake this support in, and the rank and file who use those engines will simply have it available to them to use.

I'm curious about this viewpoint.

Let's say you as a developer have a hypothetical 100-man-hour budget for optimization, under which multi-adapter support would be lumped. How much of that should you devote to multi-adapter support versus focusing on optimizations that would purely benefit single-GPU users?

The other side of this is that it isn't inherently up to game developers to help push hardware.
 
I'm curious about this viewpoint.

Let's say you as a developer have a hypothetical 100-man-hour budget for optimization, under which multi-adapter support would be lumped. How much of that should you devote to multi-adapter support versus focusing on optimizations that would purely benefit single-GPU users?

The other side of this is that it isn't inherently up to game developers to help push hardware.


I don't think you exactly understand my point. If it is implemented in the major engines, then the developers would be able to devote less time to it. But from a technology standpoint, having it baked into the engines themselves has always been the optimal solution. At which point, for developers who use those engines, mGPU just becomes another tool to use, rather than something that needs to be hard coded into a game from scratch.

At that point, you could sub mGPU for async shaders, or even hairworks, or any other feature set developers would want to implement. And my point is rather than SLI and Crossfire, mGPU should have always been that way in the first place.
 
I don't think you exactly understand my point. If it is implemented in the major engines, then the developers would be able to devote less time to it. But from a technology standpoint, having it baked into the engines themselves has always been the optimal solution. At which point, for developers who use those engines, mGPU just becomes another tool to use, rather than something that needs to be hard coded into a game from scratch.

At that point, you could sub mGPU for async shaders, or even hairworks, or any other feature set developers would want to implement. And my point is rather than SLI and Crossfire, mGPU should have always been that way in the first place.

But I don't understand why it should be the developers that shoulder the burden to showcase hardware when they themselves do not inherently benefit from hardware sales?

This is a bit of a generalization, but what I find interesting is that people want both -

1) Developers to do "last mile" type optimizations/features that benefit a relative minority of their user base to showcase the products of hardware vendors.

but also

2) Dislike hardware vendors developing partnerships with developers to implement said optimizations/features. Even though they can actually benefit from the resource investment.

This isn't really directed at multi-GPU support but discussion of responsibility of feature support and optimizations in general.
 
But I don't understand why it should be the developers that shoulder the burden to showcase hardware when they themselves do not inherently benefit from hardware sales?

This is a bit of a generalization, but what I find interesting is that people want both -

1) Developers to do "last mile" type optimizations/features that benefit a relative minority of their user base to showcase the products of hardware vendors.

but also

2) Dislike hardware vendors developing partnerships with developers to implement said optimizations/features. Even though they can actually benefit from the resource investment.

This isn't really directed at multi-GPU support but discussion of responsibility of feature support and optimizations in general.

Under DX12 it's a Microsoft thing; not an AMD or Nvidia thing. It's adhering to basic DX12 rules so that your engine is ready for multiple GPUs. Can AMD and Nvidia help a developer to understand how DX12 will see GPUs and what is necessary to turn on features? Sure. But in the end it is a developer's desire to implement DX12 / Microsoft code into their engine.

Crossfire and SLI are dead technologies when it comes to DX12, as Microsoft Windows should be handling mGPU. For example, Nvidia doesn't have SLI fingers on the GTX 1060. So no SLI support.

Doesn't mean that you can't toss 2, 3, 4 of them into a system as Ashes of the Singularity can use all of them in DX12 mode.
 
Users want eye candy, but they also want smooth and fast rendering without bugs. If mGPU can give them that where SLI/CF couldn't, then that's an incentive for developers to implement it. If it ends up a buggy mess like SLI/CF often (but not always) was, then it probably won't be used much. AotS has shown that it can be done (in RTS-type games, at least); we'll see if it catches on or not.
 
Under DX12 it's a Microsoft thing; not an AMD or Nvidia thing. It's adhering to basic DX12 rules so that your engine is ready for multiple GPUs. Can AMD and Nvidia help a developer to understand how DX12 will see GPUs and what is necessary to turn on features? Sure. But in the end it is a developer's desire to implement DX12 / Microsoft code into their engine.

Crossfire and SLI are dead technologies when it comes to DX12, as Microsoft Windows should be handling mGPU. For example, Nvidia doesn't have SLI fingers on the GTX 1060. So no SLI support.

Doesn't mean that you can't toss 2, 3, 4 of them into a system as Ashes of the Singularity can use all of them in DX12 mode.

Exactly this - couldn't have said it better.
 
So, no, you don't.

If you ask for proof for something that you can just look up in 10 seconds on Google I'm not going to do it. I'm not responsible for spoon-feeding you, that's your mommy's job.
 
If you ask for proof for something that you can just look up in 10 seconds on Google I'm not going to do it. I'm not responsible for spoon-feeding you, that's your mommy's job.

Maybe you should learn to post links to the articles you reference when YOU are trying to make a point. My statements were speculative, not inclusive of further evidence such as yours. So, no, it is not the job of others to do your investigative work.

Also, take your childish personal attacks elsewhere. They're not welcome at the [H].
 
If you ask for proof for something that you can just look up in 10 seconds on Google I'm not going to do it. I'm not responsible for spoon-feeding you, that's your mommy's job.

If you make a claim, you should be able to back it up. The fact that you CAN'T back it up is telling.
 
If you ask for proof for something that you can just look up in 10 seconds on Google I'm not going to do it. I'm not responsible for spoon-feeding you, that's your mommy's job.

Man up. You made the statement now back it up. Even though with your maturity I doubt you can man up.
 
Users want eye candy, but they also want smooth and fast rendering without bugs. If mGPU can give them that where SLI/CF couldn't, then that's an incentive for developers to implement it.
A developer should consider how many extra sales they will get by implementing mGPU support. If the number of potential buyers using mGPU is small (and it probably is), and/or some number of those with mGPU would buy the game regardless of whether it supports mGPU or not (probably true), then implementing mGPU is a waste of the developer's time. They'd be better off focusing on getting the single-GPU version of the game out on time with fewer bugs.

mGPU used to be a way for NVidia and AMD to compete. Once mGPU becomes a generic Microsoft thing, who has the incentive to pour money into it anymore? Even Microsoft doesn't; they may be doing it just to remove another place a client's money might be spent other than with Microsoft. Hell, they may abandon it once NVidia and AMD drop their proprietary solutions, perhaps to kill another technology that made PCs better than their proprietary XBOX console.

Always look to the money. Once DX12 mGPU support has killed SLI and CF, who has a financial motive, and what is that motive, to spend money on mGPU support?
 
If you make a claim, you should be able to back it up. The fact that you CAN'T back it up is telling.

If I had to guess, he fell for the same placeholder on that VideoCardz site (or whatever it was) that I did.

Only difference is, when it was pointed out to me that it was a placeholder, I actually took that critique at face value rather than picking a fight trying to win a stupid internet argument...
 
I would not be surprised to see more AMD dual-GPU cards coming out sooner rather than later, especially given that Microsoft has said they are on the verge of putting out very basic EMA support for DX12.

Developers won't need to code for the base level of multi-GPU support. Therefore combining GPUs into a single card, provided that EMA doesn't suffer from micro-stutter and some of the other issues common to multi-GPU configurations, may be how AMD deals with Nvidia's future cards.

MS is not "putting out a basic level of EMA support" -- that is all already there. The developers still need to and ALWAYS WILL HAVE TO code for EMA support in their game. They can fall back to IMA, but EMA will always require developers to code for it. What MS is releasing now/soon is EXAMPLE CODE onto github on how to implement basic EMA. But the devlopers of games will still need to use that stuff. It's not just automatic or built in.
 
MS is not "putting out a basic level of EMA support" -- that is all already there. The developers still need to and ALWAYS WILL HAVE TO code for EMA support in their game. They can fall back to IMA, but EMA will always require developers to code for it. What MS is releasing now/soon is EXAMPLE CODE onto github on how to implement basic EMA. But the devlopers of games will still need to use that stuff. It's not just automatic or built in.


Can someone else confirm? Sounds plausible, but I am not a programmer.

And I am not sure people are clear on what I am saying.


I know EMA is in DX12. I know that some level of coding will be required by developers for EMA. My current understanding is that when Microsoft - in their own words - implements an abstraction layer in DX12 (which, excuse my ignorance, sounds a lot more involved than "puts example code out on GitHub"), then developers will be able to make use of mGPU the same way they would make use of any other feature of DX12, and it would put mGPU about on par with implementing some of the other technologies available to them.

Also, my understanding is that the UE4 engine development team is already in the process of implementing EMA into the engine itself, which makes a lot of this rending of clothes and gnashing of teeth over developers implementing it seem kind of silly to me, because, again admittedly based on my non-programming understanding, if mGPU were enabled in UE4, then wouldn't someone who leased the engine also have access to EMA/mGPU, and further have to do a lot less work to implement it, since the UE4 engine development team already did the heavy lifting?


I don't mind being corrected if I am getting what is real information, but more often than not, I find that when it comes to technology, you are just as likely to find people who can talk the talk and sound convincing, but really don't know shit and are pushing their own pre-conceived notions - they just know how to dress it up and make it sound authoritative.

So please, if my own thoughts are wrong on this, educate me. I'm willing, but I warn people, I am not gullible ;)
 
Can someone else confirm? Sounds plausible, but I am not a programmer.

EMA isn't easy to do. It's not a switch. Tasks must be divided up between the GPUs present, and those tasks must be completed in sync. This takes a lot of planning, so even if an engine has support baked in, it still leaves a lot on the programmer. If this were easy to do, we'd hear of more games using it.
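For a sense of what "completed in sync" means in DX12 terms, it mostly comes down to fences between command queues. A minimal sketch, assuming 'device', 'queueA' and 'queueB' already exist (one queue per GPU node, or per adapter with a shared fence); error handling omitted:

```cpp
// Sketch: keeping dependent work on two D3D12 queues in sync with a fence.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void syncQueues(ID3D12Device* device, ID3D12CommandQueue* queueA,
                ID3D12CommandQueue* queueB) {
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // queueA: ...submit the first chunk of the frame's work here...
    queueA->Signal(fence.Get(), 1);   // fence reaches 1 when GPU A's work is done

    // queueB must not start its dependent pass until the fence reaches 1.
    queueB->Wait(fence.Get(), 1);     // GPU-side wait; the CPU is not blocked
    // queueB: ...submit the pass that consumes GPU A's results here...
}
```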
 
EMA isn't easy to do. It's not a switch. Tasks must be divided up between the GPUs present, and those tasks must be completed in sync. This takes a lot of planning, so even if an engine has support baked in, it still leaves a lot on the programmer. If this were easy to do, we'd hear of more games using it.

See though, that's contrary to what I heard, and more fits an Implicit Multi Adapter, which is what Crossfire and SLI were, based on my understanding. With the DX12 Adapter being Explicit, my understanding is that DX12 then does more of the legwork for the feature.

In fact, the whole marketing thing behind EMA was that it was easier to implement and smoother than the IMA SLI and Crossfire solutions.
 