Daniel_Chang ([H]ard|Gawd) - Joined: Jan 4, 2016 - Messages: 1,313
I"m not your mommy, I don't have to do your work for you.
You made the claim. It's your work, not his.
I"m not your mommy, I don't have to do your work for you.
I"m not your mommy, I don't have to do your work for you.
Assignment is defined by the programmer for each and every task that requires it.
And not all of them are time sensitive, and since it's basically up to the engine what does what exactly, I can't possibly guess where the priorities lie; in different engines it can be different tasks. This is nothing new, this was already discussed ages ago in the Mantle Q&A session at GPU13 ....
You're not addressing the question. How will the programmer know which task to assign to which GPU? You can use the very simple example I laid out earlier.
In reality there are hundreds of tasks and dozens of hardware configurations and resolution/quality settings to deal with. As a programmer, are you going to specify an allocation for every possible scenario?
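Purely to make that disagreement concrete (this is not from either poster, and every name below is invented for illustration): under explicit multi-adapter, "the programmer assigns it" ends up meaning a policy roughly like this sketch, which has to cover every task type and every hardware layout the game might meet.

```cpp
#include <cstddef>

// Hypothetical task categories a renderer might dispatch each frame.
enum class GpuTask { ShadowMaps, GBuffer, Lighting, PostProcess, AsyncCompute };

// Hypothetical assignment policy: with explicit multi-adapter the programmer
// has to decide, per task and per hardware layout, which GPU gets the work.
std::size_t AssignGpu(GpuTask task, std::size_t gpuCount)
{
    if (gpuCount < 2)
        return 0;                   // single GPU: nothing to decide

    switch (task)
    {
    case GpuTask::PostProcess:      // latency-tolerant work can be offloaded
    case GpuTask::AsyncCompute:
        return 1;                   // send it to the secondary GPU
    default:
        return 0;                   // keep latency-critical passes on the primary GPU
    }
}
```

A real engine would also have to weigh relative GPU speed, resolution, and quality settings, which is exactly the "hundreds of tasks, dozens of configurations" objection raised above.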
If your model is multiple chips acting more like one chip transparent to the application, for example:
Currently dual GPU's are really not connected well and are separate entities - future designs could be transparent in operations and well connected.
- Two or more GPU's on an interposer with massive bandwidth/communications between each other (why else would you really need 1Tb+ bandwidths?)
- Liken it to the 4 ACE's, but the 4 ACE's become four separate chips on one memory bus
- Since programmers do not have to program differently between smaller GPU's with fewer streaming processors and larger ones, having multiple chips on a common memory configuration could virtually look the same to the program, whether it's one chip or multiple chips
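For what it's worth, the nearest thing DX12 exposes today to "multiple chips that look like one to the program" is linked-node multi-adapter, where a single ID3D12Device presents each physical GPU as a node selected by a bitmask. A minimal sketch, assuming the device was created on a linked adapter (a CrossFire/SLI-style board); it still isn't fully transparent, since the application picks the node:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch only: `device` is assumed to already exist and to have been created
// on a linked adapter (e.g. a dual-GPU board exposed as one adapter with two nodes).
ComPtr<ID3D12CommandQueue> CreateQueueOnSecondNode(ID3D12Device* device)
{
    ComPtr<ID3D12CommandQueue> queue;
    if (device->GetNodeCount() < 2)     // 1 on a normal card, 2+ when GPUs are linked
        return queue;                   // no second node to target

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    desc.NodeMask = 1u << 1;            // bit 1 selects the second GPU node
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```

Resources and command lists carry the same NodeMask, so the application still chooses which node does what; hardware that hid even that behind one memory system, as described above, would go a step further.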
Vega 10 leaked from linkedin. V10 is 4096 ALU
I still speculate Vega 11 will be a 14nm updated Hawaii, and Vega 10 will be a 14nm updated Fiji with HBM2.
Some days it feels that way, I could swear I had done the exact same thing on those days some time in the past (minus a few discrepancies). D:
Ha, you must be from the future.
Sounds interesting. Is that your hobby or work? I'd like to work on stuff like that, but despite my sound logic, my dyslexia always seems to get in the way.
Sounds like Navi (what's after Vega): dispatching to multiple small rendering units using a new hybrid memory (from what I read).
I sat out over lunch and tried to draw the logic blocks of the pipe and how the hardware would be laid out, plus the problems with concurrent requests and synchronizing them.
All of that would have to be handled via some complex scheduler.
same as fiji
Vega 10 leaked from linkedin. V10 is 4096 ALU
Dual GPU? AMD spills the beans?
AMD Lists The Radeon RX 490 Flagship - Polaris based Dual GPU Graphics Card For 4K Ready Gaming
Their last dual card was the Radeon Pro Duo; not too sure which user segment this card would benefit. Don't see AMD selling "this" card for $350.
so negative... Not sure what garners so much hate for you to constantly complain so. Dual GPU cards are never high volume and only serve as a talking piece really. But for a great deal of those with single slot MoBos or constrained spaces these cards are somewhat appealing. And even though not really the most efficient purchase, some people buy just to buy.
That would pretty much make it stillborn as it would put it up against the 1070 at >$350. And we already saw the CF RX480 against a 1080 and how badly it lost.
so negative... Not sure what garners so much hate for you to constantly complain so. Dual GPU cards are never high volume and only serve as a talking piece really. But for a great deal of those with single slot MoBos or constrained spaces these cards are somewhat appealing. And even though not really the most efficient purchase, some people buy just to buy.
Dual GPU? AMD spills the beans?
AMD Lists The Radeon RX 490 Flagship - Polaris based Dual GPU Graphics Card For 4K Ready Gaming
Raja said some pretty interesting things about mGPU in interviews. It's entirely feasible they will go this path sooner than later.
Frankly this always should have been on the devs to implement. No real way around it.
That said, All we really need is the 4 major engines (UE4, CryEngine, Frostbite, Unity) to bake this support in, and the rank and file who use those engines will simply have it available to them to use.
Frankly this always should have been on the devs to implement. No real way around it.
That said, All we really need is the 4 major engines (UE4, CryEngine, Frostbite, Unity) to bake this support in, and the rank and file who use those engines will simply have it available to them to use.
I'm curious about this viewpoint.
Let's say you as a developer have a hypothetical 100 man-hour budget for optimization, under which multi-adapter support would be lumped. How much of that should you devote to multi-adapter support versus focusing on optimizations that would purely benefit single-GPU users?
The other side of this is that it isn't inherently up to game developers to help push hardware.
I don't think you exactly understand my point. If it is implemented in the major engines, then the developers would be able to devote less time to it. But from a technology standpoint, having it baked into the engines themselves has always been the optimal solution. At which point, for developers who use those engines, mGPU just becomes another tool to use, rather than something that needs to be hard coded into a game from scratch.
At that point, you could sub mGPU for async shaders, or even hairworks, or any other feature set developers would want to implement. And my point is rather than SLI and Crossfire, mGPU should have always been that way in the first place.
But I don't understand why it should be the developers who shoulder the burden of showcasing hardware when they themselves do not inherently benefit from hardware sales.
This is a bit of a generalization, but what I find interesting is that people want both:
1) Developers to do "last mile" type optimizations/features that benefit a relative minority of their user base to showcase the products of hardware vendors,
but also
2) Dislike hardware vendors developing partnerships with developers to implement said optimizations/features. Even though they can actually benefit from the resource investment.
This isn't really directed at multi-GPU support but discussion of responsibility of feature support and optimizations in general.
Under DX12 it's a Microsoft thing; not an AMD or Nvidia thing. It's adhering to basic DX12 rules so that your engine is ready for multiple GPUs. Can AMD and Nvidia help a developer to understand how DX12 will see GPUs and what is necessary to turn on features? Sure. But in the end it is a developer's desire to implement DX12 / Microsoft code into their engine.
Crossfire and SLi are dead technologies when it comes to DX12 as Microsoft Windows should be handling mGPU. For example Nvidia doesn't have SLi fingers on the GTX 1060. So no SLi support.
Doesn't mean that you can't toss 2, 3, 4 of them into a system as Ashes of the Singularity can use all of them in DX12 mode.
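To illustrate the "DX12 sees every GPU, SLI fingers or not" point, here is a minimal sketch of the explicit multi-adapter starting point: enumerate whatever DXGI reports and create a device per adapter. Error handling is omitted and the names are mine, not from any particular engine.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Minimal sketch of how DX12 "sees" GPUs under explicit multi-adapter:
// any adapter DXGI enumerates can get its own ID3D12Device, no SLI/CrossFire
// bridge required.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerGpu()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)    // skip the WARP/software adapter
            continue;

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);                  // one device per physical GPU
    }
    return devices;   // from here on, dividing the frame across them is the game's job
}
```

This is roughly what a title like Ashes of the Singularity has to do before it can spread work across whatever mix of cards is installed.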
So, no, you don't.
If you ask for proof for something that you can just look up in 10 seconds on Google I'm not going to do it. I'm not responsible for spoon-feeding you, that's your mommy's job.
A developer should consider how many extra sales they will get by implementing mGPU support. If the number of potential buyers using mGPU is small (and it probably is) and/or some number of those with mGPU would buy the game regardless of whether it supports mGPU or not (probably true), then implementing mGPU is a waste of the developer's time. They'd be better off focusing on getting the single-GPU version of the game out on time with fewer bugs.
Users want eye candy, but they also want smooth and fast rendering without bugs; if mGPU can give them that where SLI/XF couldn't, then that's an incentive for developers to implement it.
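As a purely hypothetical back-of-envelope illustration of that cost/benefit argument (the numbers are made up, not from anyone in this thread): if roughly 1% of a 500,000-copy audience runs mGPU, that's 5,000 users, and if perhaps half of those would only buy the game because it scales across their GPUs, the feature is worth about 2,500 extra sales; at $40 a copy that is around $100,000 of revenue to weigh against the engineering and QA time the feature would consume.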
If you make a claim, you should be able to back it up. The fact that you CAN'T back it up is telling.
I would not be surprised to see more AMD dual-GPU cards coming out sooner rather than later, especially given that Microsoft has said they are on the verge of putting out very basic EMA support for DX12.
Developers won't need to code for the base level of multi-GPU support. Therefore combining GPUs onto a single card, provided that EMA doesn't suffer from micro stutter and some of the other common multi-GPU configuration problems, may be how AMD deals with Nvidia's future cards.
MS is not "putting out a basic level of EMA support" -- that is all already there. The developers still need to, and ALWAYS WILL HAVE TO, code for EMA support in their game. They can fall back to IMA, but EMA will always require developers to code for it. What MS is releasing now/soon is EXAMPLE CODE on GitHub showing how to implement basic EMA. But the developers of games will still need to use that stuff. It's not just automatic or built in.
Can someone else confirm? Sounds plausible, but I am not a programmer.
EMA isn't easy to do. It's not a switch. Tasks must be divided up between the GPUs present, and those tasks must be completed in sync. This takes a lot of planning, so even if an engine has support baked in, it still leaves a lot on the programmer. If this were easy to do, we'd hear of more games using it.
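For a flavor of the synchronization involved, here is a minimal sketch of keeping two independent DX12 devices in step with a shared cross-adapter fence. The device and queue variables are assumed to already exist (for example, from enumerating adapters as sketched earlier) and error handling is omitted.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Assumed to already exist: one device + direct queue per GPU.
// devicePrimary/queuePrimary consume results; deviceSecondary/queueSecondary
// render their share of the frame.
void SyncSecondaryIntoPrimary(ID3D12Device* devicePrimary,  ID3D12CommandQueue* queuePrimary,
                              ID3D12Device* deviceSecondary, ID3D12CommandQueue* queueSecondary)
{
    // A fence created on one device, shared across adapters, opened on the other.
    ComPtr<ID3D12Fence> fencePrimary, fenceSecondary;
    devicePrimary->CreateFence(0,
        D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
        IID_PPV_ARGS(&fencePrimary));

    HANDLE sharedHandle = nullptr;
    devicePrimary->CreateSharedHandle(fencePrimary.Get(), nullptr, GENERIC_ALL,
                                      nullptr, &sharedHandle);
    deviceSecondary->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&fenceSecondary));
    CloseHandle(sharedHandle);

    // Per frame: the secondary GPU signals when its work is done,
    // and the primary GPU's queue waits before it consumes the results.
    const UINT64 frameFenceValue = 1;
    queueSecondary->Signal(fenceSecondary.Get(), frameFenceValue);
    queuePrimary->Wait(fencePrimary.Get(), frameFenceValue);
    // ...the primary GPU's copy/composite work would be submitted after the Wait...
}
```

Multiply that by every pass you split across GPUs, plus the cross-adapter resource copies themselves, and it's clear why this planning doesn't come for free even when an engine has the plumbing baked in.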