Concentric
[H]ard|Gawd
- Joined
- Oct 15, 2007
- Messages
- 1,028
My understanding of SLI and Crossfire is that the individual application (e.g. a game) is presented with separate GPUs and has to be coded to split its rendering tasks between them.
This means compatibility and performance are flaky and unpredictable: some devs do it well, others don't, and some don't bother at all.
I don't understand why Nvidia and AMD don't implement the load balancing themselves, in the hardware and in the driver, so that from the application's perspective it doesn't need to know or care how many GPUs there are.
Is there a good reason why it's not done this way?
And before someone says "Who needs multi-GPU anyway, just buy a 3090", that's not the point. Asking a specific question because I'm interested in the technical reasons.
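For what it's worth, the drivers did historically do transparent splitting in the DX9/DX11 era, usually via alternate-frame rendering (AFR): the driver hands whole frames to the GPUs round-robin. Here's a toy Python sketch of that scheduling idea (the class and function names are illustrative, not any real driver API), which also hints at why it breaks down:

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    # Illustrative stand-in for a physical GPU; `frames` records
    # which frame indices this GPU was assigned.
    name: str
    frames: list = field(default_factory=list)

def afr_schedule(num_frames, gpus):
    """Round-robin whole frames across GPUs (alternate-frame rendering)."""
    for frame in range(num_frames):
        gpus[frame % len(gpus)].frames.append(frame)
    return gpus

gpus = afr_schedule(6, [GPU("GPU0"), GPU("GPU1")])
for g in gpus:
    print(g.name, g.frames)
# GPU0 gets frames [0, 2, 4], GPU1 gets [1, 3, 5]
```

The catch is inter-frame dependencies: if frame N reads data produced in frame N-1 (temporal AA, screen-space reflections, etc.), that data lives in the *other* GPU's memory and has to be copied across the bridge/PCIe before rendering can proceed. Modern engines are full of those dependencies, which is a big part of why the driver can't split the work transparently for every game.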