Possible fix for GTX 970/980 Voltage Discrepancy & Driver Bug thread

I'm starting to believe the voltage discrepancy has as much to do with ASIC quality as with wonky drivers.

Before I RMA'd my dolphin Gigabyte 970s, one had ASIC of 64%, and the other 66%, and at stock one card ran 1.212V, and the other 1.206V.

Now with the cards I got back from RMA, one has an ASIC of 74.4% and the other 66.1%, and lo and behold, one runs at 1.156V stock while the other sits at 1.206V. Can you guess which one runs at 1.156V? I think it'd be a good idea to have people who are seeing major voltage discrepancies list the ASIC quality of their cards. I'm willing to bet there's a >5% difference in ASIC quality between the cards.

And btw the cards I got back from RMA are rev 1.1 instead of the 1.0 cards I had before RMA. Haven't had time to do much testing yet but thus far no apparent differences apart from a different BIOS version.
 
Last edited:
I'm starting to believe the voltage discrepancy has as much to do with ASIC quality as with wonky drivers.

Before I RMA'd my dolphin Gigabyte 970s, one had ASIC of 64%, and the other 66%, and at stock one card ran 1.212V, and the other 1.206V.

Now with the cards I got back from RMA, one has an ASIC of 74.4% and the other 66.1%, and lo and behold, one runs at 1.516V stock while the other sits at 1.206V. Can you guess which one runs at 1.516V? I think it'd be a good idea to have people who are seeing major voltage discrepancies list the ASIC quality of their cards. I'm willing to bet there's a >5% difference in ASIC quality between the cards.

And btw the cards I got back from RMA are rev 1.1 instead of the 1.0 cards I had before RMA. Haven't had time to do much testing yet but thus far no apparent differences apart from a different BIOS version.

Holy Hell!! 1.516V! Keep an ABC fire extinguisher close, bro :eek: :eek::eek:

In all seriousness, I believe ASIC quality and flaky drivers are both related to this problem; it's a combination of the two. I think drivers normally compensate for ASIC discrepancies, but this driver is not doing that. Most drivers probably look at a card's shortcomings and boost voltage where needed. If one card is running 30MHz slower and requiring xyz voltage, the driver will up that card's voltage to match the better card and bring the core clock down 30MHz on the good card, etc. I think these drivers are failing to do the expected.
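Here's roughly what I mean, as a toy model (the numbers and the voltage formula are completely made up just to illustrate the idea; this is not NVIDIA's actual algorithm):

```python
# Toy model of how a driver *could* compensate for ASIC spread in SLI.
# The voltage formula and all numbers are hypothetical, for illustration only.

def required_voltage(asic_pct, clock_mhz):
    """Pretend V/F relationship: a lower-ASIC chip needs more volts
    for the same clock."""
    return round(1.000 + clock_mhz / 10000 + (100 - asic_pct) * 0.002, 3)

def sli_match(cards):
    """Match both cards to the slower card's clock, then give each card
    the voltage *it* needs at that clock."""
    common = min(c["max_clock"] for c in cards)
    return [{"clock": common,
             "voltage": required_voltage(c["asic"], common)} for c in cards]

cards = [{"asic": 74.4, "max_clock": 1380},   # the "good" chip
         {"asic": 66.1, "max_clock": 1354}]   # the weaker chip
matched = sli_match(cards)
for result in matched:
    print(result)  # same clock for both, lower voltage on the stronger chip
```

With the bug, it's as if that second step (per-card voltage) misfires and one card ends up with a voltage that doesn't match what it actually needs.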
 
ROFL goddamn typos

I meant 1.156V sorry :D Yeah if I saw 1.516V I would've rang up Gigabyte and cussed them out.

Also noticed the boost clock went up from 1354MHz to 1366MHz on the master and 1367MHz on the slave. Very interesting. Maybe all they did with rev 1.1 was provide a new BIOS. And yeah, I think you're right. With my mobile Kepler cards there's also a 5% discrepancy in ASIC quality, but they run at the same voltage and boost clock, so there's definitely a driver component in there as well.

On the plus side, because the master can now run full boost at only 1.156V, temps dropped by 5C, which really helps since in an SLI config the top card basically sucks up all the hot air from the bottom card. The top card now maxes at 67C in Heaven after 15 minutes, while the previous card that ran 1.212V sat at a cozy 72C.
 
Last edited:
OK, so this is some really funny stuff. Using MSI AB, I set the top GPU (lower-voltage card) at +110MHz and the bottom card (higher-voltage card) at +140MHz, and Firestrike shows both cards boosting to 1506MHz, but now the top card is running @ 1.225V and the bottom card still 1.206V. So basically a role reversal, where the top card now overshoots the bottom card.

However, if I only let the top card run 20MHz instead of 30MHz slower than the bottom card (+120 top card, +140 bottom card), both cards still boost to 1506MHz, but the top card only sips a cool 1.200V, while the bottom card is still at 1.206V.

And just for kicks, I made the lower-voltage card run 20MHz faster than the higher-voltage card; Firestrike insta-crashed and gave an irrecoverable red screen that needed a hard reset.

This is just mind-boggling for 2 reasons:

1. You'd think you'd need to let the higher-voltage card lag behind the lower-voltage card, since you'd expect more headroom on the lower-voltage card, but no, it's the opposite.

2. Somehow having the lower-voltage card run too slow (-30MHz) actually causes it to overvolt (1.225V) and overshoot the higher-voltage card. But have it run only slightly slower (-20MHz), and it now sips less juice (1.200V) and matches the bottom card.

So basically do everything opposite of what common sense would dictate lol.
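For what it's worth, my understanding is that Boost moves in roughly 13MHz bins and each bin pulls a voltage from the card's V/F table, so landing one bin higher or lower can put you on a different voltage step. A rough sketch (the table values here are invented, not a real Maxwell V/F table):

```python
BIN_MHZ = 13  # GPU Boost steps clocks in roughly 13 MHz bins

# Hypothetical (max clock, voltage) points -- NOT a real Maxwell V/F table.
VF_TABLE = [(1480, 1.175), (1506, 1.200), (1519, 1.212), (1532, 1.225)]

def voltage_for(clock_mhz):
    """Return the lowest table voltage whose bin covers the requested clock."""
    for max_clock, volts in VF_TABLE:
        if clock_mhz <= max_clock:
            return volts
    return VF_TABLE[-1][1]

print(voltage_for(1506))            # 1.2
print(voltage_for(1506 + BIN_MHZ))  # one bin up -> 1.212
```

That would explain why a 10-20MHz difference in target clock can flip a card onto a noticeably different voltage.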
 
Btw this is completely fucked:

http://www.3dmark.com/compare/fs/2938077/fs/3056893

Care to guess which result belongs to which set of cards? Man I just can't get a fucking break! :mad:

EDIT: So after a very quick test it seems the lower voltage card is bogging down the entire SLI setup, including itself rofl.

With SLI disabled and +120 core/+300 mem, boost increased to 1524MHz and voltage jumped to 1.225V running Firestrike. Now keep in mind the top card is 20MHz behind the bottom card (the higher-volt card). At +140 core/+300 mem, boost further increased to 1545MHz and voltage stayed put at 1.225V.

So what this means is that if everything worked properly, I should see a boost of 1524MHz in Firestrike. But I don't; instead I see 1506MHz, because the top card is downvolting to 1.200V instead of 1.225V, so it simply does not have enough juice to eke out another boost bin or two. Even with the exact same OC, the set of cards with comparable ASICs and voltages worked much better than the other set. Clearly the "mismatched set" is losing about 40MHz of core clock even though on paper everything appeared to be the same, which would explain the 3% loss in Firestrike score.
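Quick sanity check on that clock-deficit math (just the arithmetic from the numbers above):

```python
# Checking the "~40 MHz lost explains the ~3% score gap" estimate.
solo_boost = 1545  # MHz, card on its own at +140 core
sli_boost = 1506   # MHz, same card held back in SLI
deficit = (solo_boost - sli_boost) / solo_boost
print(f"{solo_boost - sli_boost} MHz lost = {deficit:.1%} core clock deficit")
```

That comes out to about a 2.5% core clock deficit, which is in the ballpark of the ~3% Firestrike gap (the benchmark isn't perfectly clock-bound, so an exact match isn't expected).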

On the bright side, +120/+140 core and +300 mem (=1506 boost 7600 mem) is completely game stable without even touching volts, and cards stay at 1.200V/1.206V. So basically stock 980 level performance without needing to add any juice for this new set of Gigabyte 970s. I guess I really shouldn't complain.

But still, GODDAMN NVIDIA GET YOUR SHIT TOGETHER AND UNBORK THE DRIVERS KTHX.
 
Last edited:
Btw this is completely fucked:

http://www.3dmark.com/compare/fs/2938077/fs/3056893

Care to guess which result belongs to which set of cards? Man I just can't get a fucking break! :mad:

EDIT: So after a very quick test it seems the lower voltage card is bogging down the entire SLI setup, including itself rofl.

With SLI disabled and +120 core/+300 mem, boost increased to 1524MHz and voltage jumped to 1.225V running Firestrike. Now keep in mind the top card is 20MHz behind the bottom card (the higher-volt card). At +140 core/+300 mem, boost further increased to 1545MHz and voltage stayed put at 1.225V.

So what this means is that if everything worked properly, I should see a boost of 1524MHz in Firestrike. But I don't; instead I see 1506MHz, because the top card is downvolting to 1.200V instead of 1.225V, so it simply does not have enough juice to eke out another boost bin or two. Even with the exact same OC, the set of cards with comparable ASICs and voltages worked much better than the other set. Clearly the "mismatched set" is losing about 40MHz of core clock even though on paper everything appeared to be the same, which would explain the 3% loss in Firestrike score.

On the bright side, +120/+140 core and +300 mem (=1506 boost 7600 mem) is completely game stable without even touching volts, and cards stay at 1.200V/1.206V. So basically stock 980 level performance without needing to add any juice for this new set of Gigabyte 970s. I guess I really shouldn't complain.

But still, GODDAMN NVIDIA GET YOUR SHIT TOGETHER AND UNBORK THE DRIVERS KTHX.

Yea, that's what's stock for me. I usually boost to 1506 in game and 7600 on the memory (although sometimes I'll randomly get lower boost clocks of 1493).

Really is a pain in the ass right now.
 
Well, I tried running DP + DP + DVI (instead of DVI + DVI + DP)

No dice, DSR still vanishes as soon as I enable Surround. :(

Is there anything else I can try, or should I just wait and hope the next driver fixes it?
 
Well, I tried running DP + DP + DVI (instead of DVI + DVI + DP)

No dice, DSR still vanishes as soon as I enable Surround. :(

Is there anything else I can try, or should I just wait and hope the next driver fixes it?

Yeah, looks like it will hopefully be fixed in a driver soon. Same issue for people with G-Sync and some people with SLI. I guess they only implemented it so that it works enough for us to get our feet wet. All the other niche configurations will get some love later, e.g. my config: SLI Surround 120Hz. :(
 
Btw this is completely fucked:

http://www.3dmark.com/compare/fs/2938077/fs/3056893

Care to guess which result belongs to which set of cards? Man I just can't get a fucking break! :mad:

EDIT: So after a very quick test it seems the lower voltage card is bogging down the entire SLI setup, including itself rofl.

With SLI disabled and +120 core/+300 mem, boost increased to 1524MHz and voltage jumped to 1.225V running Firestrike. Now keep in mind the top card is 20MHz behind the bottom card (the higher-volt card). At +140 core/+300 mem, boost further increased to 1545MHz and voltage stayed put at 1.225V.

So what this means is that if everything worked properly, I should see a boost of 1524MHz in Firestrike. But I don't; instead I see 1506MHz, because the top card is downvolting to 1.200V instead of 1.225V, so it simply does not have enough juice to eke out another boost bin or two. Even with the exact same OC, the set of cards with comparable ASICs and voltages worked much better than the other set. Clearly the "mismatched set" is losing about 40MHz of core clock even though on paper everything appeared to be the same, which would explain the 3% loss in Firestrike score.

On the bright side, +120/+140 core and +300 mem (=1506 boost 7600 mem) is completely game stable without even touching volts, and cards stay at 1.200V/1.206V. So basically stock 980 level performance without needing to add any juice for this new set of Gigabyte 970s. I guess I really shouldn't complain.

But still, GODDAMN NVIDIA GET YOUR SHIT TOGETHER AND UNBORK THE DRIVERS KTHX.

Wait, are you bent because of 20MHz, or am I misreading?
 
Posted this in the 980 OC SLI review, thought I would resurrect this thread with it as well.
http://hardforum.com/showpost.php?p=1041224721&postcount=14

Apparently NVIDIA's solution to the problem was to add a note in the release notes for the 344.65 driver that differing voltages in SLI is normal behavior...
Page 10 of 344.65 Release Notes said:
Differing GPU Voltages in SLI Mode
When non-identical GPUs are used in SLI mode, they may run at different voltages. This occurs because the GPU clocks are kept as close as possible, and the clock of the higher performance GPU is limited by that of the other. One benefit is that the higher performance GPU saves power by running at slightly reduced voltages.

An end-user gains nothing by attempting to raise the voltage of the higher performance GPU because its clocks must not exceed those of the other GPU.
Moderators on GeForce forums are apparently now deleting posts that go against this narrative in the master thread started by GoldenTiger.
 
Yeah I just posted on there so they'll probably delete my post. I posted the following:

Lord Exodia said:
MONKEYpatch said:
Have you overclocks been stable since and would you mind sharing your results?

Yes, my overclocks have been stable: 1520MHz core, 8000MHz VRAM, 1.275V, no throttling in games and only slight throttling in benchmarks/stress tests, since both cards have ample voltage to run the clocks I am throwing at them. The cards are running the way they should have been from day one now. This debacle was clearly a BIOS screw-up/mixup by engineers at NVIDIA. They will silently begin providing a corrected BIOS to AIB partners while continuing to deny a mistake ever happened via damage-control comments such as "It's normal for them to run at different voltages".

skywalker99 said:
...and I can confirm that your semi baked statement doesn't make sense :)

I have the most up to date bios version and still experience voltage issues

It's not a matter of updating your BIOS, but a matter of modifying your BIOS. The voltage issue can be manually fixed by modifying the BIOS and re-flashing it to your card(s). Maybe in a month or two a new BIOS will be available from your manufacturer to fix the issue, if they dare come out and offer it. Going against Nvidia can be costly to AIB partners.

DirtySouthWookie said:
I think what he is saying is that there is no up to date bios. You have to flash a custom bios from OCN to fix this.

We have a winner! You got it, sir; exactly what I meant to say. However, you don't need a BIOS from OCN; you can extract your own BIOS, modify it slightly (takes less than 5 minutes), and flash it right back onto your card to correct the problem.
 
Wow fuck nVidia. This voltage issue, broken drivers and broken SLI really makes me want to jump ship to AMD.
 
Bump??? Is nvidia just going to ignore this problem? Seems stupid as heck, considering you can't even run two cards in SLI at STOCK frequencies without instability. You got guys dropping down to 1.5v, which isn't even enough volts to run at STOCK frequency.

Shame on NV to say this is normal....absolutely ridic
 
Bump??? Is nvidia just going to ignore this problem? Seems stupid as heck, considering you can't even run two cards in SLI at STOCK frequencies without instability. You got guys dropping down to 1.5v, which isn't even enough volts to run at STOCK frequency.

Shame on NV to say this is normal....absolutely ridic

Yeah, it's completely ridiculous and a black eye for nvidia with how they've handled this issue.
 
Yeah, it's completely ridiculous and a black eye for nvidia with how they've handled this issue.

Yea, looks like it's happening with the 700 series now too... Pathetic, Nvidia.

If AMD pulled this shit people would have been making a big deal, like the Brown Screen of Death issue AMD had with the 5870. If it wasn't for Kyle emailing AMD about the issue, it might never have been fixed.

Maybe we can get Kyle to do the same thing with Nvidia.
 
Yea, looks like it's happening with the 700 series now too... Pathetic, Nvidia.

If AMD pulled this shit people would have been making a big deal, like the Brown Screen of Death issue AMD had with the 5870. If it wasn't for Kyle emailing AMD about the issue, it might never have been fixed.

Maybe we can get Kyle to do the same thing with Nvidia.

Nvidia definitely screwed this situation up. Saying that this is normal when it clearly isn't has really destroyed consumer confidence. I once looked at them as a company that listened to and cared about their loyal customers' concerns, but now they seem to be a PR media machine sweeping things under the rug to avoid any negative publicity.

Proof that the cards shouldn't behave this way is that the GTX 7xx series is now behaving the same way, which shows this was a glitch introduced by the driver team, likely for some power optimization they wrote into the drivers. Either that, or the 7 series operated "out of spec" for over a year.

I'll vote with my wallet when the R390 series comes out. Nvidia really needs to get knocked off its high horse. The GPU industry could stand to have them go hungry for a little while and struggle while AMD flourishes, so that they can refocus on what's important: the things that got them to where they are. Customer loyalty, rock-solid drivers (remember those days?), and amazing products.

So how about that HBM on the 390X's? ;)
 
Bump??? Is nvidia just going to ignore this problem? Seems stupid as heck, considering you can't even run two cards in SLI at STOCK frequencies without instability. You got guys dropping down to 1.5v, which isn't even enough volts to run at STOCK frequency.

Shame on NV to say this is normal....absolutely ridic

That's weird. My two STRIX 970s run at 1.200 and 1.212 volts in SLI with no issues.
 
Bump??? Is nvidia just going to ignore this problem? Seems stupid as heck, considering you can't even run two cards in SLI at STOCK frequencies without instability. You got guys dropping down to 1.5v, which isn't even enough volts to run at STOCK frequency.

Shame on NV to say this is normal....absolutely ridic

You are dramatizing this a bit too much. Even with the voltage dip, SLI usually runs "fine" at stock settings. The people really complaining about this are the overclockers and people that play with MSI Afterburner more than they play games.
 
You are dramatizing this a bit too much. Even with the voltage dip, SLI usually runs "fine" at stock settings. The people really complaining about this are the overclockers and people that play with MSI Afterburner more than they play games.

Sorry I noticed a mistake. I meant to say 1.15v, not 1.5v

And no, I don't think I am dramatizing too much. Try asking your single 1400MHz stock-boost-clock Maxwell to run at 1.15V (or even less), and you'll see what I mean. It's not MEANT to run at that voltage. Try asking your Sandy Bridge to run at stock clocks but put the voltage at 1.05V; I'm sure most samples are unable to do this. So no, I don't think me and the thousands of others are dramatizing this too much. If it does not work at stock clocks, it's clearly a broken product, whether software or hardware. In this case, it's a software problem, because both GPUs, run individually, boost the voltage correctly.
 
Last edited:
Guys, I was upset at first about the voltage issues, but after hours and hours of testing, having the cards at the same voltages yields me no advantages in benchmarks, FPS in games, or higher stable overclocks. I can't complain about this anymore; I was wrong, it's just a placebo thing having them match.
 
Guys, I was upset at first about the voltage issues, but after hours and hours of testing, having the cards at the same voltages yields me no advantages in benchmarks, FPS in games, or higher stable overclocks. I can't complain about this anymore; I was wrong, it's just a placebo thing having them match.

MY GOD, you must be trolling!!!!!
 
That's weird. My two STRIX 970s run at 1.200 and 1.212 volts in SLI with no issues.


Duh. I am simply telling you it's not happening for me. Pretty sure that's not trolling, it's evidence.

After ~15 minutes of BF4 64-multiplayer:

 
I bet a round of beer that your cards have similar ASICs, probably not more than 3% apart.
 
Duh. I am simply telling you it's not happening for me. Pretty sure that's not trolling, it's evidence.

After ~15 minutes of BF4 64-multiplayer:


Well good for you. Apparently just about the whole internet reports otherwise.

The way you replied to me earlier...it just sounded like, "well it doesn't happen to me so it must be untrue." That's why I replied back with the trolling part.
 
I never said it was untrue, friend, just that it wasn't happening to me. Why so tense?
 
I never said it was untrue, friend, just that it wasn't happening to me. Why so tense?
This really upsets some people, so they were being irrational and reactionary toward your post.

Some people have very small discrepancies like in your case, while others have a 50mV or greater discrepancy. A small discrepancy like 12mV may not make much of an impact, but the larger one in my case did, with one card running at 1.200V and the other at 1.150V at stock clocks.

I bet a round of beer that your cards have similar ASICs, probably not more than 3% apart.
Even if they were, a user on the NVIDIA GeForce forums debunked this assumption through his own testing before the post was deleted.
DirtySouthWookie said:
Long Read Warning: Cliffs notes= "Nvidia you're wrong about ASIC voltages!"

The feedback I sent:

Is the SLI voltage issue mentioned in the driver notes officially being blamed on asic quality?

Out of 8 EVGA GTX 980 cards I own, I have matched TWO of them with identical 80% ASIC quality. The problem persists even though the ASIC quality is identical. Furthermore, a lower ASIC quality card will draw MORE voltage regardless of the PCIe slot or SLI being enabled.

These cards perform with the proper voltages only when hooked up without SLI.

For testing purposes, I used my lowest ASIC quality card (out of 8), which was 65%, in the top PCIe SLI slot. I paired this with my highest card, which is 85%, and ran the 85% card in the lower PCIe slot.

The 65% ASIC quality card ran at the proper voltage, 1.21V, during load, while the 85% ASIC quality card ran at 1.15V.

If Nvidia's theory about ASIC quality is correct, once I swap the cards (top to bottom) the voltages should stay the same on each card due to their individual ASIC quality.

The result is quite the opposite. No matter what the card's ASIC value, the top (primary) card will always run at the correct voltage (1.21V) and the second card will run at the lower voltage (1.15V), regardless of their individual ASIC quality. This causes instability and is being complained about across all of your GeForce forums.

I'm looking forward to resolving this issue. Advanced users report that using a custom BIOS resolves it. I do not believe the customer should have to flash a custom user BIOS, because this voids any warranties.

This issue will determine whether I need to return these 8 GTX 980s and go back to the 700 series, which DID hold a constant matching voltage regardless of ASIC %.

I do not understand how ASIC quality was ignored for years and never caused problems, but now it is being blamed for SLI instability on new flagship cards.

My 660, 680, 770, 780, and 780 Ti cards did not suffer any of these problems, and none of them had ASIC % numbers anywhere close to matching.
https://forums.geforce.com/default/...n-the-other-driver-bug-/post/4361150/#4361150

I have a sneaking suspicion it may actually affect users differently depending on their platform. I'm on a Z87 motherboard from Gigabyte and have a 50mV discrepancy. Starrbuck on X99 only has a 12mV discrepancy.

UPDATE: Just saw someone in the master thread at NVIDIA say they have a 60mV discrepancy with an X99 motherboard. I guess that theory isn't sound...
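The swap test in that quote boils down to a tiny model of the reported (buggy) behavior: voltage follows the slot, not the chip. A toy sketch using the voltages he reported:

```python
# Toy model of the behavior reported in the quoted swap test: in SLI, the
# primary slot always gets ~1.21V and the secondary ~1.15V, regardless of
# which card (which ASIC %) sits in which slot. Voltages are his readings.
def observed_sli_voltage(slot, asic_pct):
    # ASIC % is deliberately ignored -- that is the whole finding.
    return 1.21 if slot == "primary" else 1.15

# Swap the 65% and 85% ASIC cards between slots:
run1 = (observed_sli_voltage("primary", 65), observed_sli_voltage("secondary", 85))
run2 = (observed_sli_voltage("primary", 85), observed_sli_voltage("secondary", 65))
print(run1, run2)  # identical -- the voltages never follow the cards
```

If ASIC quality were really driving the voltages, swapping the cards would swap the voltages; it doesn't.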
 
Last edited:
All good info and opinion. All the discrepancies may be making it very hard for NVIDIA to track down.
 
I never said it was untrue, friend, just that it wasn't happening to me. Why so tense?

Not tense at all. And I never said that you said it was untrue. It's just the way you approached this thread; it makes you sound like a troll. It's the same thing as if I went into the huge GTX 970 coil whine thread and said, "well that's weird, my two GTX 970s have zero coil whine and perform with no issues," and then just walked away and said nothing else.
 
Not tense at all. And I never said that you said it was untrue. It's just the way you approached this thread; it makes you sound like a troll. It's the same thing as if I went into the huge GTX 970 coil whine thread and said, "well that's weird, my two GTX 970s have zero coil whine and perform with no issues," and then just walked away and said nothing else.
In my opinion he was just adding quantifiable data. Oftentimes threads like these can turn into echo chambers, but you have to put things into the larger perspective. There are probably tens of thousands of Maxwell owners out there at this point, and we would be lucky if 1% of them said anything online. Out of that 1%, how many are going to be SLI users?

People with issues are always going to be the loudest, and in my opinion the breaking up of the monotonous echo is good for discussion. Also remember that many people may not correctly portray their tone in text, so it's important when dealing with anything on the internet to not take it personally.
 
In my opinion he was just adding quantifiable data. Oftentimes threads like these can turn into echo chambers, but you have to put things into the larger perspective. There are probably tens of thousands of Maxwell owners out there at this point, and we would be lucky if 1% of them said anything online. Out of that 1%, how many are going to be SLI users?

People with issues are always going to be the loudest, and in my opinion the breaking up of the monotonous echo is good for discussion. Also remember that many people may not correctly portray their tone in text, so it's important when dealing with anything on the internet to not take it personally.

Oh, absolutely. I totally understand and get that it can be very difficult to read someone through text sometimes. And good thing he did come back with a screenshot, because if he didn't, then he would be considered trolling in my book. I mean, you just don't walk into a thread and act like if it doesn't happen to you, then everything must be fine.

Anyway, back on topic. If anybody has any relevant information, please post.
 
Duh. I am simply telling you it's not happening for me. Pretty sure that's not trolling, it's evidence.

After ~15 minutes of BF4 64-multiplayer:


It's not happening to you because you overclocked your cards; the voltage does sort of even out if you play with the clocks, and the voltages are still off going by the pic you posted.

Run both cards at stock clocks and you will see that one card runs at a much lower voltage.

I do think you are aware of the issue. That poster is right: you could be a trolling employee for some company trying to sweep this fuck-up under the carpet.
 
I wish I would have seen this thread before I ordered my second 970 last night. Ugh. Has anyone tried using custom bios to see if it does anything for this?
 
I'd rather try to OC the lower-voltage card before I ever mess with a GPU's BIOS, lest I brick it. From what I've seen in the first post, it seems to be a very simple 'fix' (even easier than flashing a custom BIOS).
 
I wish I would have seen this thread before I ordered my second 970 last night. Ugh. Has anyone tried using custom bios to see if it does anything for this?

I started the thread, and can confirm that you can fix this via a BIOS update. All the major BIOSes you can get out there have the fix implemented, e.g. Game Stable and No Limits by Zoson @ overclockers.net
 
I started the thread, and can confirm that you can fix this via a BIOS update. All the major BIOSes you can get out there have the fix implemented, e.g. Game Stable and No Limits by Zoson @ overclockers.net

Good to hear. Thanks for chiming in.

I have played around with his BIOSes and my card. They crank the volts pretty high; I wonder how the temps would be with the high voltage in an SLI setup. I guess I could mod the BIOS to run a lower voltage and a less aggressive clock.
 
I'd rather try to OC the lower-voltage card before I ever mess with a GPU's BIOS, lest I brick it. From what I've seen in the first post, it seems to be a very simple 'fix' (even easier than flashing a custom BIOS).
It's the opposite. You have to OC the higher-voltage card to bring the lower-voltage one up to par. It forces the lower-voltage card to use more power to keep up, since the hardware and software are designed to match clocks in SLI.
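A sketch of why that works, as I understand it (toy V/F numbers, not real firmware data): because SLI holds both cards to one clock target, raising the offset on the higher-voltage card raises the shared target, which pushes the undervolting card up into a higher voltage bin.

```python
# Toy illustration of the workaround: raise the shared SLI clock target so
# the undervolting card is forced up a voltage bin. V/F points are invented.
VF_TABLE = [(1480, 1.175), (1506, 1.200), (1532, 1.225)]

def voltage_at(clock_mhz):
    for max_clock, volts in VF_TABLE:
        if clock_mhz <= max_clock:
            return volts
    return VF_TABLE[-1][1]

shared_clock = 1506
print(voltage_at(shared_clock))       # undervolting card held at 1.2
print(voltage_at(shared_clock + 26))  # two ~13MHz bins higher -> 1.225
```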

Good to hear. Thanks for chiming in.

I have played around with his BIOSes and my card. They crank the volts pretty high; I wonder how the temps would be with the high voltage in an SLI setup. I guess I could mod the BIOS to run a lower voltage and a less aggressive clock.
I would find out what voltage the card runs under load at stock and start from there, like 1.200V with the STRIX 970. I agree that I wouldn't be comfortable running a downloaded custom BIOS without tweaking it, for fear of frying my card.
 
I do think you are aware of the issue, that poster is right, you could be a trolling employee for some company who is trying to keep this fuk up under the carpet.

I'm actually feeling sorry I said anything, and I'll try harder to keep my experiences and opinions to myself here. It's not worth getting called a troll, liar, and company shill over.

Y'all have a Happy Thanksgiving. :)
 
It's the opposite. You have to OC the higher-voltage card to bring the lower-voltage one up to par. It forces the lower-voltage card to use more power to keep up, since the hardware and software are designed to match clocks in SLI.

I see, thanks, I must have misunderstood it.

But how well does that fix the problem? Will it be a good enough fix?

Oh, absolutely. I totally understand and get that it can be very difficult to read someone through text sometimes. And good thing he did come back with a screenshot, because if he didn't, then he would be considered trolling in my book. I mean, you just don't walk into a thread and act like if it doesn't happen to you, then everything must be fine.

Anyway, back on topic. If anybody has any relevant information, please post.

Not tense at all. And I never said that you said it was untrue. It's just the way you approached this thread; it makes you sound like a troll. It's the same thing as if I went into the huge GTX 970 coil whine thread and said, "well that's weird, my two GTX 970s have zero coil whine and perform with no issues," and then just walked away and said nothing else.

Just read this; I totally did not expect that kind of post would be considered trolling...
 
Last edited: