Bad news for Eyefinity / multi-7970 users

Vega

An update to my original 7970 installation woes: somewhere between uninstalling some AMD beta drivers (I think they were 8.94), rebooting, and installing the 7000-series driver, it completely hosed my Windows 7 install. No matter whether I used the ATI-Man uninstaller, Driver Sweeper, or anything else, the new cards fell flat on their faces with the new drivers.

The good news: after a five-hour Win 7 installation marathon (I am picky about my Win 7 installs and getting them just right), these two 7970s are running BF3 in 120Hz Eyefinity beautifully!

All of the stuttering, slowdowns, and hiccups of my 6990s are completely gone! This new architecture completely blows away the previous series. My hat is off to AMD.

Now I need to hear about some people's Eyefinity experiences with quad-CrossFire, as I think expanding my setup to four 6GB 7970s when they release, and five 120Hz screens in portrait Eyefinity, may be in order!

Time for the 6990 and 580 to hit eBay...
 
I have run 3x 5870, 3x reference 6970, and 2x Asus DC II 6970 on an Eyefinity setup, so as you can see I know the problems with multi-GPU/multi-monitor gaming, and believe me, both sides, AMD and Nvidia, have problems.
If I can offer advice: the best way to play games on an Eyefinity/Surround setup with no stuttering is to stay with two GPU cores.
Three or four cores cause problems depending on the game.
I played the whole CoD series fine, but in DiRT 2/3 and BF3 I had stuttering with tri-fire.
A friend of mine had the same problem with 580 3-way SLI, but he preferred to play with vsync on to work around it.
Another friend has your setup, 6990 CF, and plays BF3 fine on 3x1 monitors, but when he went to 5x1 the quad-GPU setup caused problems.
I don't know if you prefer to play with vsync, but I don't.
I want all the power from my GPUs going to my screen, and no mouse lag.
Both companies, AMD and Nvidia, have their pluses and minuses.
 
When using my 2x 6990s in 3x1 portrait, turning on vsync did not make the stuttering any better. With this report, my 5x1 Eyefinity plans are cancelled and I will be going with a single 7990 on my 3x1 setup.
 
Your 6990s play BF3 much better than Nvidia's cards do.
Just look at the drops/stuttering, and believe me, it is not only from having less VRAM.

[attached: BF3 framerate chart from the Mars II quad-SLI review linked below]

http://www.hardocp.com/article/2011/11/08/asus_rog_mars_ii_gtx_580_quad_sli_video_card_review/3
 
That result is not valid. With that resolution and Ultra settings, BF3 needs way more than 1.5 GB of VRAM. Look at that 19 FPS minimum.

But that result also doesn't show stuttering, which is the Achilles' heel of AMD multi-GPU/Eyefinity. The FPS counter could be showing 100+ but feel like 20, as happens to me.
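
To put rough numbers on that "shows 100+, feels like 20" effect, here's a toy Python calculation with invented frame times (illustrative only, not measured data):

```python
# Toy illustration (made-up frame times): why a high average FPS can still
# feel slow when frame delivery is uneven.
frame_times_ms = [5] * 9 + [55]  # nine quick frames, then one long hitch

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)  # -> 100 FPS
# The eye waits out the longest gaps, so the "felt" rate tracks the worst
# frame time rather than the average.
felt_fps = 1000 / max(frame_times_ms)  # -> ~18 FPS

print(f"counter says {avg_fps:.0f} FPS, feels more like {felt_fps:.0f} FPS")
```

An FPS counter averages over a whole second, so the hitches disappear into the mean; your eyes don't average.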
 
I have experienced this with every multi-GPU setup I've owned in the past (up to 5870 CF), both from Nvidia and AMD.

I stick with single GPUs only now. I prefer a consistent FPS; even if it's lower, it's better than the choppiness at high FPS that multi-GPU sometimes exhibits.

My last CF setup (5870s) was pretty terrible; even at 100+ frames in L4D2 the game felt jerky. Enabling vsync fixed it but introduced mouse lag. I switched to a 6970 and it's silky smooth even with vsync off.
 
Has anyone bothered to screw around with the AFR modes, now that you can edit/make custom profiles in CCC 12.1, to see if it might fix the problem?
 
5040x1050, lol. They should be doing that at 5760x1200 or higher! Unfortunately Nvidia has always scaled better than ATI, but even so, I cannot play a lot of games in tri-screen because of this very problem. It's the same with Nvidia cards, I am afraid. :(
 
Disappointed :(

Is this problem due to CrossFire, or is it a bandwidth problem? Would PCIe 3.0 help the stuttering at all?

I was planning on CrossFired 7970s eventually, but not if performance is really that horrible. Hope we get some 7990 news soon.
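
For reference, the raw per-direction bandwidth is easy to work out from the spec (standard figures, nothing measured in this thread): PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, PCIe 3.0 at 8 GT/s with 128b/130b. A quick Python sketch:

```python
# Back-of-the-envelope usable PCIe bandwidth per direction, after line-code
# overhead. Spec numbers; real-world throughput is lower still.
def pcie_gbs(gt_per_s: float, payload_bits: int, line_bits: int, lanes: int) -> float:
    return gt_per_s * (payload_bits / line_bits) * lanes / 8  # bits -> bytes

for gen, gt, payload, line in [("2.0", 5.0, 8, 10), ("3.0", 8.0, 128, 130)]:
    for lanes in (16, 8, 4):
        print(f"PCIe {gen} x{lanes}: {pcie_gbs(gt, payload, line, lanes):5.2f} GB/s")
# PCIe 2.0 x16 -> 8.00 GB/s; PCIe 3.0 x16 -> ~15.75 GB/s
```

So 3.0 roughly doubles the usable bandwidth per lane; whether the stuttering is bandwidth-bound at all is the open question.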
 
I dunno; I had tri-fire and never once noticed slowdown with my Eyefinity setup. I also always run with vsync on.

I think it's a user-to-user issue; some people are more sensitive to stuttering than others (personally, I cannot stand stuttering).
 
Not sure if I'm reading it right, but was the reviewer doing this tri-fire test on a P67 Sabertooth board? While I'm not claiming this is an explanation (I don't have enough experience in this side of things at all), I have to wonder.
 
Not sure if I'm reading it right, but was the reviewer doing this tri-fire test on a P67 Sabertooth board? While I'm not claiming this is an explanation (I don't have enough experience in this side of things at all), I have to wonder.

That's a good point... why choose P67?

I want to see some X79 or Big Bang mobos doing CrossFire.
 
I dunno; I had tri-fire and never once noticed slowdown with my Eyefinity setup. I also always run with vsync on.

I think it's a user-to-user issue; some people are more sensitive to stuttering than others (personally, I cannot stand stuttering).

I never experienced actual stuttering on a multi-GPU setup; it was just that the FPS didn't match what I saw on screen. For example, 30 FPS would feel more like 20. With L4D2, 60+ FPS felt like 30 unless I turned vsync on.
 
I never experienced actual stuttering on a multi-GPU setup; it was just that the FPS didn't match what I saw on screen. For example, 30 FPS would feel more like 20. With L4D2, 60+ FPS felt like 30 unless I turned vsync on.

Did you have triple buffering enabled?
 
Not sure if I'm reading it right, but was the reviewer doing this tri-fire test on a P67 Sabertooth board? While I'm not claiming this is an explanation (I don't have enough experience in this side of things at all), I have to wonder.

I'm hoping/thinking that. According to Asus' website, the P67 Sabertooth runs "2 x PCIe 2.0 x16 (x16 or dual x8)". I'm going to guess that x8 is going to bottleneck the cards, but I don't know if it would be the source of the stuttering.
 
I'm hoping/thinking that. According to Asus' website, the P67 Sabertooth runs "2 x PCIe 2.0 x16 (x16 or dual x8)". I'm going to guess that x8 is going to bottleneck the cards, but I don't know if it would be the source of the stuttering.

This is actually what I meant: the Sabertooth board doesn't even have an NF200, so tri-fire will be running x8/x8/x4 in the best case, unless I'm missing something. I have to think that could exacerbate microstuttering.
 
I've had my three 1.5 GB 580s for 14 months now. It's the best multi-GPU setup I've ever had in terms of fluid gaming. Multi-GPU isn't perfect, but it tends to work well more often than not, at least in my current setup, especially with updated drivers and game patches.

2011 was a fucking incredible gaming year for me. Solid games, lots of 3D, multi-monitor, and multi-GPU support; so much worked so well that I've simply never had such a trouble-free year getting everything running the vast majority of the time. I'm just scared to go all-out with Radeons these days. Very good hardware, and the 7970 looks impressive, but it seems to me that Nvidia has the edge on the software side, plus AMD is well behind in 3D support.
 
No, the reviewer tested on X58 with a 980X; it's right in the article.

That means the cards were running at x16/x8/x8, yet he says that with tri-7970, BF3 felt like 20 FPS in Eyefinity. :(
 
No, the reviewer tested on X58 with a 980X; it's right in the article.

That means the cards were running at x16/x8/x8, yet he says that with tri-7970, BF3 felt like 20 FPS in Eyefinity. :(

Hm, I see now where he mentions the 980X, but further down in the article he references using a P67 Sabertooth as the test platform:

"we could make measurements with two graphics cards, our test platform (P67, Asus Sabertooth) is not expected to support tri-GPU:"

It doesn't help clarity that this is going through Google Translate, but the 980X is probably right, as it's listed under the earlier test protocol.
 
CrossFire without stuttering just wouldn't be CrossFire now, would it? ;)
As always, I'll hope AMD gets their shit together driver-wise while enjoying my super-fast single-GPU card, until I get the itch to buy it a friend.
 
The 5970 I had before felt the same way in BF3. On one monitor the FPS was in the 90s, but it felt like 25-30 FPS.
Now I have a GTX 570 and it's butter at 50 FPS. The higher FPS on AMD in CF just doesn't feel like it when playing BF3.
 
I have a hunch this is happening because of the ZeroCore power-saving tech. The only time I experienced stuttering on my cards was when I was on a single screen for some testing and my second GPU fell asleep in ULPS. When I launched a game the card woke up, but I was getting crazy microstutter. I end-tasked the game, launched it again, and all was good. As many know, in Eyefinity your second GPU doesn't go idle; its lowest core clock drops to 500 MHz instead of 250 MHz or fully idle. That changes with the 6990, though: in quadfire I believe two GPUs go into ULPS, one stays at 500 MHz, and one at 250 MHz. I believe the primary GPU on card 1 stays at 250 MHz and its secondary goes into ULPS, while the primary on card 2 stays at 500 MHz and its secondary goes into ULPS as well. Vega can confirm.

My theory is they are not waking properly when a game is launched. Disabling ULPS may fix the stuttering issue. You can do this with MSI Afterburner by setting "enable unofficial overclocking" = 2, or with Sapphire TriXX by simply disabling ULPS.

Maybe I'm off here, but it's what I've noticed on my dual-card setup, and I'd imagine it would be worse in tri and quad setups. I'd imagine ZeroCore would only make the problems worse.
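
If you'd rather see what those tools actually touch, they're reported to flip a registry value. Here's a minimal Python sketch that just reads it out; it assumes the commonly reported EnableUlps DWORD under the display-adapter class key, which can vary by driver version, so back up your registry and treat this as illustrative:

```python
# Sketch: list EnableUlps values on AMD display adapters (Windows, run as
# admin). Assumes the commonly reported registry layout for Catalyst-era
# drivers; back up the registry before changing anything.
import winreg

DISPLAY_CLASS = (r"SYSTEM\CurrentControlSet\Control\Class"
                 r"\{4d36e968-e325-11ce-bfc1-08002be10318}")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DISPLAY_CLASS) as cls:
    i = 0
    while True:
        try:
            sub = winreg.EnumKey(cls, i)  # "0000", "0001", ...
        except OSError:
            break  # no more adapter subkeys
        i += 1
        try:
            with winreg.OpenKey(cls, sub) as adapter:
                value, _ = winreg.QueryValueEx(adapter, "EnableUlps")
                print(f"{sub}: EnableUlps = {value}")
                # To disable ULPS, reopen with winreg.KEY_SET_VALUE and:
                # winreg.SetValueEx(adapter, "EnableUlps", 0,
                #                   winreg.REG_DWORD, 0)
        except OSError:
            continue  # subkey without an EnableUlps value
```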
 
I have a hunch this is happening because of the ZeroCore power-saving tech. The only time I experienced stuttering on my cards was when I was on a single screen for some testing and my second GPU fell asleep in ULPS. When I launched a game the card woke up, but I was getting crazy microstutter. I end-tasked the game, launched it again, and all was good. As many know, in Eyefinity your second GPU doesn't go idle; its lowest core clock drops to 500 MHz instead of 250 MHz or fully idle. That changes with the 6990, though: in quadfire I believe two GPUs go into ULPS, one stays at 500 MHz, and one at 250 MHz. I believe the primary GPU on card 1 stays at 250 MHz and its secondary goes into ULPS, while the primary on card 2 stays at 500 MHz and its secondary goes into ULPS as well. Vega can confirm.

My theory is they are not waking properly when a game is launched. Disabling ULPS may fix the stuttering issue. You can do this with MSI Afterburner by setting "enable unofficial overclocking" = 2, or with Sapphire TriXX by simply disabling ULPS.

Maybe I'm off here, but it's what I've noticed on my dual-card setup, and I'd imagine it would be worse in tri and quad setups. I'd imagine ZeroCore would only make the problems worse.

Hmm, I heard AMD's new thing on the 7000 series is that in CrossFire they will shut the second GPU down to keep power/heat low.

You think it's possible this is causing CrossFire issues?
 
Hate to break this to you, boys, but any time you have external chip-to-chip communication, there are going to be instances of microstutter. Doesn't matter if it's AMD or Nvidia...
 
I have a hunch this is happening because of the ZeroCore power-saving tech. The only time I experienced stuttering on my cards was when I was on a single screen for some testing and my second GPU fell asleep in ULPS. When I launched a game the card woke up, but I was getting crazy microstutter. I end-tasked the game, launched it again, and all was good. As many know, in Eyefinity your second GPU doesn't go idle; its lowest core clock drops to 500 MHz instead of 250 MHz or fully idle. That changes with the 6990, though: in quadfire I believe two GPUs go into ULPS, one stays at 500 MHz, and one at 250 MHz. I believe the primary GPU on card 1 stays at 250 MHz and its secondary goes into ULPS, while the primary on card 2 stays at 500 MHz and its secondary goes into ULPS as well. Vega can confirm.

My theory is they are not waking properly when a game is launched. Disabling ULPS may fix the stuttering issue. You can do this with MSI Afterburner by setting "enable unofficial overclocking" = 2, or with Sapphire TriXX by simply disabling ULPS.

Maybe I'm off here, but it's what I've noticed on my dual-card setup, and I'd imagine it would be worse in tri and quad setups. I'd imagine ZeroCore would only make the problems worse.

I have disabled ULPS with my 2x 6990s, and with the GPUs locked at their set 3D clocks I still got the stuttering. It wasn't micro-stuttering either; it was full-blown stuttering at 120+ FPS.

Hate to break this to you, boys, but any time you have external chip-to-chip communication, there are going to be instances of microstutter. Doesn't matter if it's AMD or Nvidia...

That is why I would advise waiting for a single 7990 if people want Eyefinity. My single 6990 works a crap-ton better than 2x 6990s. The question is: will PCIe 3.0 help at all when more than one GPU has to communicate over the PCIe bus?
 
When you can't read French, you shouldn't quote a French article.

I'm French, and they're not saying that AMD is worse than Nvidia at all. All they're saying is that it's a problem with both AMD and Nvidia in multi-GPU, multi-screen setups.

Vega, stop whining about everything AMD. Go back to Nvidia. You probably don't have enough money, but go buy back four overpriced 3GB 580s, please.

You're constantly whining and complaining on every forum... Get a grip.
 
CrossFire without stuttering just wouldn't be CrossFire now, would it? ;)
As always, I'll hope AMD gets their shit together driver-wise while enjoying my super-fast single-GPU card, until I get the itch to buy it a friend.

Same goes for Nvidia...
 
Hate to break this to you, boys, but any time you have external chip-to-chip communication, there are going to be instances of microstutter. Doesn't matter if it's AMD or Nvidia...

Yup, that's because of the extra latency traversing from one card to the other.

It will vary from card to card and manufacturer to manufacturer. Most likely it would take an effort from both Nvidia and AMD to rethink how inter-card communication works and make the GPU workload more efficient.

I believe Techreport had a very good article regarding microstutter. I will have to find the link again, but in their tests they found that the faster GPUs/cards had the least microstutter, i.e. delay between rendered frames. On Nvidia-based cards the latencies were less pronounced than on AMD-based cards, if I recall.

What Techreport found is also mentioned in an [H] user's previous post here: http://hardforum.com/showpost.php?p=1032646751&postcount=1

It doesn't matter what card it is; either Nvidia or AMD will be affected by it. As with Techreport and MrWizard's post, to nearly eliminate it you need a card able to produce framerates faster than the monitor's refresh rate. In Techreport's article, the faster the GPU, the less microstutter. This same conclusion is also backed up by Tom's Hardware's article regarding microstutter.

At the moment, I'm guessing we're looking at driver optimization problems with the Catalyst 12 drivers, which might be fixed in the next update.

But if microstuttering were to be fixed or eliminated completely, I can only think of the following as possible solutions:
- Higher intercommunication speeds between multiple cards via the SLI/CrossFire bridges. If I recall, graphics data is sent to all cards via PCIe, with the bridges used to coordinate which card renders what. Maybe eliminate the bridges completely and have the cards communicate over the faster PCI Express lanes, or increase the transfer rate of the SLI/CrossFire bridges.

Hopefully PCIe 3.0 reduces this delay. PCIe 2.x uses 8b/10b encoding, which carries about 20% overhead; PCIe 3.0 switches to 128b/130b encoding, cutting the overhead to approximately 1.5%.

- Unified memory between multiple cards. For example, if two cards have 2 GB of VRAM each, each GPU would see 4 GB total, not just the 2 GB dedicated to it.
Maybe Nvidia and AMD can learn something from how multiple server processors communicate in SMP systems. They all see the same data and the same amount of RAM, and many multiprocessor systems have dedicated high-speed links between CPUs. For example, Power7 processors have a dedicated high-speed SMP link which can reach transfer rates of a couple of TB/s when the multi-chip modules (MCMs) are maxed out. In a way, the processors don't see the latencies between each other; it's as if they were cores on the same die. For Intel, it's QPI, the QuickPath Interconnect, running at 25.6 GB/s dedicated to each CPU.

Something similar might be the saving grace of SLI/CrossFire in my opinion. Treat the GPUs like server CPUs and give them dedicated high-speed links to eliminate, or nearly eliminate, the latencies between them. And since faster GPUs show less microstutter, for slower GPUs have a way to cache finished rendered frames between the graphics cards so that there is always something to send to the monitor without waiting for the next frame to finish rendering.
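
If anyone wants to quantify the "delay between rendered frames" themselves, here's a small sketch in the spirit of Techreport's frame-time method. The file name and the one-column-of-milliseconds format are placeholders for whatever your frame-time logger exports:

```python
# Sketch: average FPS vs. 99th-percentile frame time from a per-frame log.
# Assumes a hypothetical one-column CSV of frame times in milliseconds.
import csv

def analyze(path: str):
    with open(path, newline="") as f:
        times_ms = [float(row[0]) for row in csv.reader(f) if row]
    avg_fps = 1000 * len(times_ms) / sum(times_ms)
    times_ms.sort()
    p99 = times_ms[int(0.99 * (len(times_ms) - 1))]  # 99th-percentile frame time
    return avg_fps, p99

avg_fps, p99 = analyze("bf3_frametimes.csv")  # placeholder file name
print(f"average: {avg_fps:.1f} FPS")
print(f"99th-percentile frame time: {p99:.1f} ms "
      f"(~{1000 / p99:.0f} FPS during the worst 1% of frames)")
```

Two setups with identical average FPS can look completely different on the percentile number, which is exactly the "high FPS that feels low" complaint.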
 
But by the time you go through all that... look, there's a simpler way of accomplishing the same (or better) performance gains, and it's already in common use.

What are you really after when you add a second graphics card? The additional cores on the second GPU. The rest of that second card is redundant hardware that exists to facilitate access to those extra cores; it's nothing but life support.

So, let's simplify: just put all those cores in the original GPU to begin with. All those multi-GPU problems go right out the window.

GPU manufacturers already do this when it's financially and technically feasible to create dies that large. Look at the HD 7970: it has 2048 cores on one GPU, and I can guarantee you it is far more predictable and consistent (not to mention faster) than two hypothetical 1024-core cards in CrossFire would be.

Multi-GPU solutions are just hack-job stopgaps. A single GPU with that many cores doesn't exist yet, so they slap two together to get similar performance gains as best they can. If you want a real, proper upgrade, wait for the next beastly single-GPU card.
 
But by the time you go through all that... look, there's a simpler way of accomplishing the same (or better) performance gains, and it's already in common use.

What are you really after when you add a second graphics card? The additional cores on the second GPU. The rest of that second card is redundant hardware that exists to facilitate access to those extra cores; it's nothing but life support.

So, let's simplify: just put all those cores in the original GPU to begin with. All those multi-GPU problems go right out the window.

GPU manufacturers already do this when it's financially and technically feasible to create dies that large. Look at the HD 7970: it has 2048 cores on one GPU, and I can guarantee you it is far more predictable and consistent (not to mention faster) than two hypothetical 1024-core cards in CrossFire would be.

Multi-GPU solutions are just hack-job stopgaps. A single GPU with that many cores doesn't exist yet, so they slap two together to get similar performance gains as best they can. If you want a real, proper upgrade, wait for the next beastly single-GPU card.

You have a very good point. That's why I've always preferred a more powerful single-card solution over two cards: I already know the problems associated with running two or more cards in SLI/CrossFire. Besides microstutter, you have to worry about game compatibility and support, and that varies widely between games; one game may not show significant improvement with an additional card while another does.

The closest thing is something like the GTX 590, 6990, or upcoming 7990 (or GTX 790/690, or whatever Nvidia labels it). At least there the GPUs are on the same PCB, but that still isn't enough to eliminate microstutter or inter-GPU latencies.

When the Radeon 8970 (?) is out, we'll probably be looking at closer to 3000 or 4000 GCN (or successor) cores on the same die. Just like CPUs, GPUs are limited by how small transistors can get and how many can be squeezed onto a single die. Intel is looking for ways forward with 3D transistors and different materials to keep shrinking transistors, moving to smaller process nodes: from 45 nm to 32 nm to 22 nm, and to 14 or 10 nm in the future.

It would be great to see a single 4096-core GCN 7990 instead of two 2048-core GPUs on the same PCB. However, the technology isn't there yet to fit that many cores on one die and eliminate every instance of microstutter or other latencies.
 
Yup, that's because of the extra latency traversing from one card to the other.

It will vary from card to card and manufacturer to manufacturer. Most likely it would take an effort from both Nvidia and AMD to rethink how inter-card communication works and make the GPU workload more efficient.

I believe Techreport had a very good article regarding microstutter. I will have to find the link again, but in their tests they found that the faster GPUs/cards had the least microstutter, i.e. delay between rendered frames. On Nvidia-based cards the latencies were less pronounced than on AMD-based cards, if I recall.

What Techreport found is also mentioned in an [H] user's previous post here: http://hardforum.com/showpost.php?p=1032646751&postcount=1

It doesn't matter what card it is; either Nvidia or AMD will be affected by it. As with Techreport and MrWizard's post, to nearly eliminate it you need a card able to produce framerates faster than the monitor's refresh rate. In Techreport's article, the faster the GPU, the less microstutter. This same conclusion is also backed up by Tom's Hardware's article regarding microstutter.

At the moment, I'm guessing we're looking at driver optimization problems with the Catalyst 12 drivers, which might be fixed in the next update.

But if microstuttering were to be fixed or eliminated completely, I can only think of the following as possible solutions:
- Higher intercommunication speeds between multiple cards via the SLI/CrossFire bridges. If I recall, graphics data is sent to all cards via PCIe, with the bridges used to coordinate which card renders what. Maybe eliminate the bridges completely and have the cards communicate over the faster PCI Express lanes, or increase the transfer rate of the SLI/CrossFire bridges.

Hopefully PCIe 3.0 reduces this delay. PCIe 2.x uses 8b/10b encoding, which carries about 20% overhead; PCIe 3.0 switches to 128b/130b encoding, cutting the overhead to approximately 1.5%.

- Unified memory between multiple cards. For example, if two cards have 2 GB of VRAM each, each GPU would see 4 GB total, not just the 2 GB dedicated to it.
Maybe Nvidia and AMD can learn something from how multiple server processors communicate in SMP systems. They all see the same data and the same amount of RAM, and many multiprocessor systems have dedicated high-speed links between CPUs. For example, Power7 processors have a dedicated high-speed SMP link which can reach transfer rates of a couple of TB/s when the multi-chip modules (MCMs) are maxed out. In a way, the processors don't see the latencies between each other; it's as if they were cores on the same die. For Intel, it's QPI, the QuickPath Interconnect, running at 25.6 GB/s dedicated to each CPU.

Something similar might be the saving grace of SLI/CrossFire in my opinion. Treat the GPUs like server CPUs and give them dedicated high-speed links to eliminate, or nearly eliminate, the latencies between them. And since faster GPUs show less microstutter, for slower GPUs have a way to cache finished rendered frames between the graphics cards so that there is always something to send to the monitor without waiting for the next frame to finish rendering.

Unfortunately, PCIe 3.0 won't cure the microstutter. Even though the chips share the same reference clock signal, the traces to each chip are ever so slightly different (electrically speaking). Additionally, you have to factor in the delays in chip-to-chip communication. Pile on some memory errors and card-to-PC communication, and voilà, issues arise. Hiding it behind the monitor's refresh rate does not "fix" the issue; it just masks it...

The only real way to cure this is with a faster single-chip card, or by packaging the two dies together in the same package (which is expensive) to minimize chip-to-chip communication delays.
 
First, this has absolutely nothing to do with the 7970 specifically, as it affects all AMD cards, and, however true, it makes the thread title misleading.

Second, this has absolutely nothing to do with 7970s and everything to do with AFR on absolutely any card configuration.
 
Multi-GPU solutions are just hack-job stopgaps. A single GPU with that many cores doesn't exist yet, so they slap two together to get similar performance gains as best they can. If you want a real, proper upgrade, wait for the next beastly single-GPU card.

Not to sound rude, but who here doesn't realize this?

Of course, if they offered a single-GPU card that could do the same as quad-SLI/CrossFire of the most current card, I think most people would be all for it, for a variety of reasons. Unfortunately, they don't, because that's how tech progresses and what product demand warrants; yet people want more, and have reasons and ways to utilize that additional power. Hence, SLI/CrossFire have a purpose. It may not be for everyone, but it's definitely for some.
 
Multi-GPU solutions are just hack-job stopgaps. A single GPU with that many cores doesn't exist yet, so they slap two together to get similar performance gains as best they can. If you want a real, proper upgrade, wait for the next beastly single-GPU card.

I would disagree with that assessment, as multi-GPU cards are an effective way, from a space point of view, to get multiple GPUs into a system, not to mention a way of increasing performance.
 