Will we ever need more than 1 top end card from now on? Will SLI ever be seamless hardware-wise?

I was hoping that SLI would gain traction again with RT.

It seems RT would benefit a lot from mGPU, but there's no support at all.

Nvidia is paying people to implement RT. They aren't paying anyone to implement mGPU.
 
I think GPGPU will continue to be able to use more than one card. But mGPU from the standpoint of playing games... eh, I'm not so sure. With a product stack that ranges from $150-$1000 it doesn't really make sense for anyone to go mGPU. Just buy a faster card; it's easier to support and will give more consistent frame times, if for no other reason than lower complexity. For the people at the ultra-high end willing to buy 2x $1000 GPUs, I guess they're shit out of luck. But the percentage of people in that category isn't worth nVidia or AMD supporting. It's already incredibly rare for people to own even a single 2080 Ti (based on hardware stats). Killing themselves over code bases to make mGPU transparent isn't worth what they would earn from the incredibly small increase in GPU sales. In other words, it costs more than they would make. They don't have any incentive to do so.

EDIT: I think as graphics and compute in general become increasingly parallel, eventually mGPU may come back. But I think it will be at scale. There's a greater chance of having mGPU on a single card at that point rather than simply filling more card slots, especially considering that PCIe 4.0 and 5.0 will have the bandwidth to support those kinds of setups. My response in general was for the immediate future. 5-10 years out, who can say? Maybe the only way GPUs will be able to get faster at that point is to go more parallel. But I certainly can't see that far.
 
And? I'm assuming that you don't mind playing at 60 Hz then (and prefer eye candy over framerate?). I mean, those old LCDs had some of the highest input lag we've ever seen! You can buy your new 120 Hz 4K monitor, then turn on adaptive sync (to make your 60 Hz experience smooth).

You can get that kind of framerate from an RTX 2080 Ti, on Ultra, in the majority of games. Unless you suddenly grew a hard-on for 120 Hz, I don't see what the problem is.

And all of you whiny bitches will be satisfied in another two years (when Ampere's successor ships). It should have more than enough horsepower for 120 Hz 4K (in every game), and 75 Hz RTX.

BUT IT WON'T BE CHEAP. Because demanding gamers (like yourselves) are a corner case of a corner case.

Actually the ZR30W from 2009 had really low input lag.
 
I think mGPU will come back with chiplet designs. They have to make all the chiplets work as one.

AFAIK Intel is working on that.
 
To answer the two questions that make up your thread title:

1. Will we ever need more than 1 top end card from now on?

...it's not so much a matter of "need" as a matter of "want", and it's purely subjective. If one top-tier GPU doesn't deliver the performance someone wants, then it may behoove them to purchase a second one.

2. Will SLI ever be seamless hardware-wise?

...perhaps. It's been rumored since AMD Mantle (which evolved into much of the current iterations of Khronos Vulkan and Microsoft D3D12) that mGPU could be treated as a single GPU device, with the capability to even stack each card's VRAM into a single pool (example: separate 8GB + separate 8GB = a single 16GB).
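For anyone curious what that looks like in practice, here's a minimal sketch (my own, assuming Vulkan 1.1, where device groups were promoted to core) of how the "many GPUs presented as one logical device" idea is actually exposed to developers:

```cpp
// Minimal sketch, not production code: list the device groups Vulkan 1.1 reports.
// A group with more than one physical device (e.g. an NVLink/SLI pair) can be turned
// into a single VkDevice whose queues span all of its members.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

void listDeviceGroups(VkInstance instance) {
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (const auto& g : groups) {
        std::printf("device group: %u GPU(s), subsetAllocation=%u\n",
                    g.physicalDeviceCount, g.subsetAllocation);
    }
}
```

As I understand it, though, the VRAM stacking isn't automatic even here: allocations within a device group are still tied to specific physical devices, so treating 8GB + 8GB as one 16GB pool is something the engine has to manage explicitly.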
 
I'm actually curious why a newer version of SCANLINE Interleave hasn't been attempted. I would think it would provide more consistency than even AFR if properly implemented. NV owns all that IP too. They probably just don't care at this point, but I wonder about it from time to time. I'm guessing that with increasing resolutions over analog CRT connections during the 3DFX > Nvidia transition era, there could have been artifacts caused by this (and maybe other things I'm not thinking of, like RAMDAC issues, not to mention the external cable link, etc.), but it seems like it could be very viable with current technology, done digitally.
 
I tried to make SLI work for me when the Nvidia 7600 GT came out in the mid-2000s. It wasn't great.
I tried to make SLI work last generation with dual GTX 1080s, and it was OK for the 2-3 titles it worked in, on par with a single 1080 in most titles, and worse than a single 1080 in others.
I tried one more time with dual RTX 2080 Tis, and even though Nvidia released a brand-new high-bandwidth bridge (NVLink) and boasted about how it was no longer a bottleneck, it was the same story as with the 1080s.

I'm not buying SLI for the next generation. I wouldn't be surprised if Nvidia just got rid of the ability to do it on their non-Quadro cards.
 
The most recent multi-GPU attempt I did was a GTX 295 and also dual GTX 285s around the same time. At that point in time, with the games available, I had very little if any trouble. PhysX had also just moved to NV, if I remember correctly (at least within the same year or so). I believe I played a few games at the time that worked very well with mGPU + PhysX. Maybe Mirror's Edge or something like that. Can't quite remember. I do remember being quite impressed with my 295, though. After a while, single cards started blowing it away, so I just moved on.
 
I'm actually curious why a newer version of SCANLINE Interleave hasn't been attempted.

1. 3dfx SLI didn't scale perfectly in every game. Sorry, that's a myth.

2. SLI in a modern system would mean each card would have to waste time re-calculating the geometry, but no 3dfx card ever did that. The CPU did it, so (in theory) it scaled perfectly.

AFR was the easiest hack to get around these limitations. Split-frame rendering should have identical scaling to 3dfx SLI.
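For anyone who hasn't dug into the difference, here's a toy illustration (my own, not 3dfx or NVIDIA code) of how the three schemes split work between two GPUs. The point above is that with scanline interleave or split-frame, both GPUs still have to process the full frame's geometry, while AFR sidesteps that at the cost of latency and frame pacing:

```cpp
// Toy work-distribution rules for two GPUs; purely conceptual.
#include <cstdint>

// 3dfx-style scanline interleave: even lines on GPU 0, odd lines on GPU 1.
int gpuForScanlineSLI(uint32_t y) { return y % 2; }

// Alternate-frame rendering: whole frames alternate between GPUs.
int gpuForFrameAFR(uint64_t frameIndex) { return frameIndex % 2; }

// Split-frame rendering: top half on GPU 0, bottom half on GPU 1
// (real implementations move the split line based on per-half load).
int gpuForScanlineSFR(uint32_t y, uint32_t height) { return y < height / 2 ? 0 : 1; }
```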
 
1. 3dfx SLI didn't scale perfectly in every game. Sorry, that's a myth.

2. SLI in a modern system would mean each card would have to waste time re-calculating the geometry, but no 3dfx card ever did that. The CPU did it, so (in theory) it scaled perfectly.

AFR was the easiest hack to get around these limitations. Split-frame rendering should have identical scaling to 3dfx SLI.

It didn't scale perfectly, but it did very well, and a modern implementation would probably do better.

Also, geometry isn’t the biggest bottleneck on a GPU these days, and where things weren’t massively parallel before, they are now.

I just think it could be worth revisiting. Then again I’m not a GPU engineer. :p
 
DX12 can do split-frame rendering. They did it with Ashes of the Singularity. The problem is that devs need to implement it themselves, not the GPU makers.

Agreed. Anything that isn't basically free to the devs has no guarantee of compatibility. Something that can be implemented in hardware cheaply enough, at the driver level, would be an improvement. Even if it didn't scale 100%, something that just worked would be better.
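For what it's worth, here's roughly what that developer-side work looks like. This is a hand-written sketch against Vulkan 1.1 device groups (Ashes used D3D12's explicit multi-adapter, but the idea is the same): the engine, not the driver, decides which GPU renders which part of the frame.

```cpp
// Rough sketch, assuming a logical VkDevice created from a two-GPU device group and a
// pipeline with dynamic scissor state. The scene's draw calls are recorded twice, once
// under each device mask, each pass clipped to its half of the frame; the halves then
// have to be composited before present.
#include <vulkan/vulkan.h>

void recordSplitFrame(VkCommandBuffer cmd, uint32_t width, uint32_t height) {
    // GPU 0 rasterizes the top half.
    vkCmdSetDeviceMask(cmd, 0x1);
    VkRect2D top = { {0, 0}, {width, height / 2} };
    vkCmdSetScissor(cmd, 0, 1, &top);
    // ... scene draw calls here ...

    // GPU 1 rasterizes the bottom half.
    vkCmdSetDeviceMask(cmd, 0x2);
    VkRect2D bottom = { {0, (int32_t)(height / 2)}, {width, height - height / 2} };
    vkCmdSetScissor(cmd, 0, 1, &bottom);
    // ... same scene draw calls again ...
}
```

Even in this sketch it's on the engine to balance the split, replicate resources to both GPUs, and copy the second half back for presentation, which is presumably exactly the kind of work most studios won't sign up for.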
 
SLI can go the way of the dino if you ask me.

Checkerboard rendering is the new thing! nVidia is working on it. Checkerboard rendering can be read about here:

It is different from split-frame - it is better.

nV CFR

I’m also fine with it going away. If id can make the Doom games fly like they do on modest(ish) hardware and scale up from there, then other devs should work a bit harder to get their games looking good on a similar range of hardware. I guess they can’t all be id though, can they :D

Checkerboard... hmm... sounds like PowerVR’s tile rendering. :D That was always a good idea, just never implemented well until the fancier mobile chipsets, really. It never really hit its stride on PC platforms. Of course, I need to read the article. Could be nothing like it. I just thought of PowerVR when I read that.
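A toy sketch (mine, not NVIDIA's actual CFR implementation) of the checkerboard idea as it's been described: the frame is cut into small tiles and alternating tiles go to alternating GPUs, so both cards touch every region of the scene and the load stays naturally balanced, unlike a fixed top/bottom split:

```cpp
// Conceptual tile assignment for two GPUs; the tile size is an assumption, purely illustrative.
#include <cstdint>

constexpr uint32_t kTileSize = 64;

int gpuForPixel(uint32_t x, uint32_t y) {
    uint32_t tileX = x / kTileSize;
    uint32_t tileY = y / kTileSize;
    return (tileX + tileY) % 2;  // checkerboard: neighbouring tiles land on the other GPU
}
```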
 
Just to clarify, when you say Hz, you really mean FPS.
While it's technically correct (I guess), Hz is more commonly used for monitor refresh rate, while FPS is used for frame rate.

But I really don't want to nitpick, because there is variable refresh rate and then it just gets more confusing.

just my $.02

I'm not correcting you - just adding to your description.

FPS and Hz are two different things.

Hz is how many times per second an LCD refreshes its own panel.

FPS is the number of full-screen images rendered by the GPU per second. An LCD can be running at 100 Hz while a GPU is rendering at 150 frames per second. The monitor's refresh frequency has no negative effect on the rendering power and speed of your GPU. Only when a VRR link is established between the GPU and screen will the GPU slow its roll to keep up with your much slower LCD, unless you're rocking super-fast LCD tech.
 
Where VRR comes into play is as a communication link between the monitor's control board and the GPU, whereby the GPU can synchronize the monitor's refresh rate with its own output of rendered frames so that there is no perceived difference between the two.
Not to disagree, but to add another description: VRR does V-Sync in reverse. Where V-Sync has the GPU wait for the next monitor refresh, VRR has the monitor wait for the next frame.
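A toy model (mine, just to illustrate the point above) of that difference: with V-Sync the frame is held until the display's next fixed tick, while with VRR the display scans out as soon as the frame is ready, within the panel's supported refresh range:

```cpp
// Illustrative timing only, not a real driver model.
#include <algorithm>
#include <cmath>

// Fixed refresh + V-Sync: the frame appears at the next refresh tick after it is ready.
double displayTimeVSync(double frameReadyMs, double refreshPeriodMs) {
    return std::ceil(frameReadyMs / refreshPeriodMs) * refreshPeriodMs;
}

// VRR: the display refreshes when the frame is ready, no sooner than the panel's minimum
// refresh interval and no later than its maximum (where it has to refresh anyway).
double displayTimeVRR(double frameReadyMs, double lastRefreshMs,
                      double minIntervalMs, double maxIntervalMs) {
    double t = std::max(frameReadyMs, lastRefreshMs + minIntervalMs);
    return std::min(t, lastRefreshMs + maxIntervalMs);
}
```

For example, a frame that finishes at 20 ms on a 60 Hz panel gets held until about 33.3 ms with V-Sync, but scans out right around 20 ms with VRR.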
 
Not to disagree, but to add another description: VRR does V-Sync in reverse. Where V-Sync has the GPU wait for the next monitor refresh, VRR has the monitor wait for the next frame.

You should have waited to quote me ... "Only when a VRR link is established between the GPU and screen will the GPU slow its roll to keep up with your much slower LCD, unless you're rocking super-fast LCD tech."

I already said that. What you quoted was before I edited it and before you posted your reply. But yes.
 
I have actually held off on the 4K/120Hz new-monitor purchase because the 2080 Ti is almost there for 1440p/144Hz, IMO. That's with no RTX consideration. Hopefully the 3080 Ti can do 4K/120Hz with RTX enabled.
 
I have actually held off on the 4K/120Hz new-monitor purchase because the 2080 Ti is almost there for 1440p/144Hz, IMO. That's with no RTX consideration. Hopefully the 3080 Ti can do 4K/120Hz with RTX enabled.
Only in Fantasyland will the 3080 Ti be doing 120 FPS with ray tracing at 4K, especially in next-generation titles.
 
Yeah, I'm struggling with RTX at 1080p (well, ultrawide, but still). I can get into the 90 fps range, which is decent enough for G-Sync.

At 4K, you'd have to be smoking some shit to get 120 fps.
 
I have actually held off on the 4K/120Hz new-monitor purchase because the 2080 Ti is almost there for 1440p/144Hz, IMO. That's with no RTX consideration. Hopefully the 3080 Ti can do 4K/120Hz with RTX enabled.

The 3080 Ti would literally have to be over 100% faster than the 2080 Ti to pull that off. In other words, a generational performance leap that has quite literally never happened, at least not since I've been gaming.
 
The 3080 Ti would literally have to be over 100% faster than the 2080 Ti to pull that off. In other words, a generational performance leap that has quite literally never happened, at least not since I've been gaming.
LOL, only 100% faster? You're not even remotely close, as it would have to be many times faster to get 120 FPS at 4K with ray tracing in demanding games. And next-generation titles will be a hell of a lot more demanding.
 
LOL, only 100% faster? You're not even remotely close, as it would have to be many times faster to get 120 FPS at 4K with ray tracing in demanding games. And next-generation titles will be a hell of a lot more demanding.

I did say "over" 100% ;)

But I think we're on the same page... It's not gonna happen with the 3000 series.
 
The 3080 Ti would literally have to be over 100% faster than the 2080 Ti to pull that off. In other words, a generational performance leap that has quite literally never happened, at least not since I've been gaming.

Well it's a good thing I don't give a rat's ass about RTX then I guess! :)
 
The only way to get SLI working seamlessly is to take driver and game support out of it. Make it look like one card to the OS and let the hardware handle everything. When SLI was introduced back in the days of 3DFX, there were no tweaks to do. You ran your damn VGA snake loop, let the hardware sort it out, and it was pretty damn fast at the time.

Now it's all done in the software layer for games to use, and it's stupid expensive and a pain in the ass to configure and run application by application.

They need to go back to doing it all in hardware and make it just freaking work.
 
Well it's a good thing I don't give a rat's ass about RTX then I guess! :)
Well, you were the one that mentioned 120 FPS at 4K with ray tracing, though. And even without ray tracing, there's no way demanding games are going to run at 120 FPS at 4K when some games now can't even average 60 FPS on a 2080 Ti.
 
Well, you were the one that mentioned 120 FPS at 4K with ray tracing, though. And even without ray tracing, there's no way demanding games are going to run at 120 FPS at 4K when some games now can't even average 60 FPS on a 2080 Ti.

Yeah, but if it's close I'd be okay. Chances are I'll just hold on to my 1440p monitor longer. Best thing I bought several years ago.
 
So there are two main cases where people would do multi-GPU:

1) They are buying two of the top-of-the-line cards and want better performance (for example at 4K, HFR, triple-head, etc.).

The issue here is that something like a 2080 Ti can still run great on those setups. Maybe not max everything, but with tweaked settings, you can definitely have something running well, and beyond just playable. It's not like these people can't play the game.

2) Users with mid-range cards who (maybe 1 or 2 years later) want to buy a second used card on the cheap to extend the life of the system. I've done this myself in the past, and when SLI worked well it was a nice option.

However, these days it is better to just sell the old card and get something new with comparable performance. This is totally possible, and the used market is pretty good for sellers, so this is probably the better play today.

The main issue is that GPU makers don't make money from used sales, so #2 is useless to them from a business standpoint. For developers, it's probably a burden to support and test multi-GPU systems when it isn't a popular configuration.

And #1 is probably such a small market that there is no ROI for developers. I mean, a single 2080 Ti is only at 0.72% of gamers according to Steam. So the people that have *two* of them, I don't know, you are talking about a small fraction of 1%. Not worth it for anyone.
 
Maybe not max everything, but with tweaked settings, you can definitely have something running well, and beyond just playable. It's not like these people can't play the game.
This is something everyone seems to ignore. You can generally lower some settings and have the quality remain virtually the same in motion. It's even hard to tell in screenshots. Add in DLSS 2.0 and ray tracing, and I bet you could still hit 120 fps in new games at 4K next gen. You don't need full Ultra to play a game.

See Borderlands 3 as a recent prime example of having a hard time finding differences in screenshots between settings that can affect performance by 30 to 40 percent! Let alone trying to find them in motion ;).
 
Correct. Resident Evil 2 is another prime example.

You can turn settings down and get a visually almost identical picture with a 30% or more performance boost (in fact, one or two settings actually look better turned down, but that's a matter of taste).

 
I would like to see AMD allow Polaris to live on and evolve, given all the past work on CrossFire (CX). I still have a pair of RX 570s on my old X58 board with the Xeon X5660, and it can even run an RX 580 with an RX 570 in CX without a CrossFire ribbon, which I needed back when I bought the board new and ran 5850s in CX.
 
The shift to DX12 and Vulkan means the game developers and game engines are in control of the GPUs now. Even back in the DX11 and older days, though, nVidia and AMD had to work with game developers to optimize SLI/CrossFire profiles in their drivers to ensure maximum scaling.
I'm pretty disappointed that DOOM doesn't support multi-GPU despite the heavy push in Vulkan.
 
I'm pretty disappointed that DOOM doesn't support multi-GPU despite the heavy push in Vulkan.

Yeah, but it runs ridiculously well on just about anything already. I'm not sure what we'd get out of it if it did support it. In a recent video, id was talking about how they've seen 400 fps in Doom Eternal on some of their internal hardware, with an upper limit (obviously in the future) of 1000. Let's assume for a moment that on top-end consumer hardware (single GPU) we can get 200 fps. What more could you ask for in a game that looks that good?
 
Let's assume for a moment that on top-end consumer hardware (single GPU) we can get 200 fps. What more could you ask for in a game that looks that good?
I'd ask them to remove the 200 fps frame cap so I can run it on my 240 Hz monitor, but they've done that for DOOM Eternal.
 
I'd ask them to remove the 200 fps frame cap so I can run it on my 240 Hz monitor, but they've done that for DOOM Eternal.

Except there is no 200 fps cap. I was using that as a rough number. The cap is 1000 according to the video. They've seen 400. So I halved that to illustrate a point about just what "good enough" might look like.

For me, 60 is "good enough" and 120 is optimal. Over 120, sure, great, but no longer interesting to me. I don't want to get into what "humans can see" or other discussions like that. I'm just saying that for me, 120 is spot on, 60 is fine, below 60 I'm not happy, and above 120 is cool but not something I care about.

This is [H], so yeah, push those boundaries if that's your thing. :D At some point, though, I think everyone should have a "this looks good enough for me to enjoy without questioning it" level. If your sole purpose in life, or your biggest hobby, is to just push things as far as they go, that's cool too. I'm just a bit more moderate. I'm kind of an anomaly on this site. I like to push things, but I have parameters that, if met, leave me happy.

I'm not a resolution guy past a certain point. I'm fine with 1080. I think 4K is beautiful. (I have an 87" 4K TV, and love it.) However, I'm happy to play games at 1080. I'm also not an anti-aliasing nut. I like some, but I've never been one to push those settings unless they had no impact on performance (at the level I enjoy). I like my frames synced. I accept a touch of latency for consistency. If a setting improves the image in a way I find pleasing, I want it pushed to the maximum (effects, shaders, lighting, shadows, that sort of thing). So, if I can max graphical settings and play at 1080 at 60 or 120, then I'm totally happy. Anything above that is cool, and I appreciate it, but it's not a requirement (which is not like most people on this site, I think :D ). I find ray tracing to be the most interesting thing now, because it has the most dramatic effect on the actual image. However, it's early. That means it's interesting to watch the industry, dabble in the tech, and try to push it (like we used to back in the earlier 3DFX, PowerVR, Nvidia, ATI days).

I'm like this with other things too. I like fast cars. My car has 300 HP. There are faster cars with 400-900 HP, but I have no desire to own them because I'm happy with the way my 300 HP car performs.
 
I was talking about DOOM 2016. There is an engine-locked 200 fps limit. But most people probably don't notice; I didn't until I got a 240 Hz monitor.

The game still runs great, and no question 200 fps is plenty, but I can't say I wasn't at least a bit disappointed. It would have been the best showcase for 240 Hz, as one of the few modern games that can reach those levels.
 