[H] users 7970 Overclock Results thread

I got 00000174:000037a0.

I have no idea how to translate that.

I'm learning... ;)
0x37A is 890 in decimal, thanks to a Google search,
so 890/1023 ≈ 87%.
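Here's the same math as a quick Python sketch, assuming the rule really is "drop the last hex digit, divide by 0x3FF (1023)":

value = 0x000037a0                # the register value above
quality = (value >> 4) / 0x3FF    # drop the trailing hex digit, scale against 1023
print(f"{quality:.1%}")           # prints 87.0%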

That sound right?

It looks like that is close to mine but a little higher, mine being 883/1023= 86.3%
 
Diamond 7970
EK Acetal/EN Nickel Block

Ran CCC and maxed both the Core and Memory sliders with no problem at 1.174V (Afterburner 2.2 Beta 11).

I unlocked everything in AB, but my Core slider won't go any higher than 1125. What am I missing?

Edit: Never mind. I haven't run an AMD card for 6 years and didn't know about changing the EULA line in the CFG. :)
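For anyone else who's forgotten: the lines live in MSIAfterburner.cfg, under the [ATIADLHAL] section if I remember right (the exact EULA sentence Afterburner expects may differ by version, so double-check it for your build):

UnofficialOverclockingEULA = I confirm that I am aware of unofficial overclocking limitations and fully understand that MSI will not provide me any support on it
UnofficialOverclockingMode = 1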
 
Last edited:
0x3700 → 880, so 880/1023 = 86%?

Still trying to figure out why Anno 2070 keeps locking up my system so quickly at core OCs above 1100 when everything else seems fine at much higher clocks. I noticed there are new drivers I can try when I get home, though not sure if that'll help anything.
 
Last edited:
Mine is 699/1023...

68%... maybe I missed it somewhere earlier in the thread, but what is the 1023 number?

I assume 68% in this case is a bad thing. It would make sense, because I haven't been able to OC this card very much at all.
 
Just punched this card up to my maximum overclock of 1340/1775 and got 10402 in 3DMark11:
http://3dmark.com/3dm11/2606439

These are clocks I'd never run every day, simply because I think overvolting the vRAM will shorten its life significantly, but it's still nice to see what these cards are capable of.

Those of you having trouble overclocking - try changing one variable at a time to see what helps. Run your fans at 100% and keep the cards as cool as possible to see if it's a heat issue. Try more voltage. Always isolate your clocks: do the core first, then the RAM.
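If it helps to see that written down as a procedure, here's a rough sketch (purely illustrative: the prompt stands in for whatever stress test you actually run, and the starting clocks/steps are just example numbers):

def find_max_stable(label, start_mhz, step_mhz):
    clock = start_mhz
    while True:
        answer = input(f"Set {label} to {clock} MHz, run your stress test. Stable? [y/n] ")
        if answer.strip().lower() != "y":
            return clock - step_mhz           # last clock that passed
        clock += step_mhz

core = find_max_stable("core", 1050, 25)      # leave memory at stock while you do this
mem = find_max_stable("memory", 1375, 25)     # then lock the core at the value you just found
print(core, mem)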
 
Running heaven, looking at statue

1.218 V in Trixx (1.115 V in GPU-Z): 1180MHz will crash at 87 degrees (fan 57%)
1.193 V in Trixx (1.152 V in GPU-Z): 1180MHz will crash at 90 degrees (fan 54%)
1.170 V in Trixx (1.068 V in GPU-Z): 1180MHz will artifact at 95 degrees (fan 50%) - took longer to heat up

At 1.170 V in Trixx:
1240MHz: artifacts occur at 71 degrees
1230MHz: artifacts occur at 73 degrees
1220MHz: artifacts occur at 75 degrees
1210MHz: artifacts occur at 80 degrees
1180MHz: artifacts occur at 95 degrees


So the main thing I notice before a crash: the VDDC current changes drastically.
When the card starts out, the VDDC current is around 140 amps.
As the card heats up it keeps climbing, eventually to around 190; one GPU-Z recording log even captured an odd 250 (followed by two readings of 180, then the crash).
The load is always the same (standing still, looking at the statue). It seems that once the temps rise, the voltage circuitry needs to be kept cooler; raising the voltage only hindered my OC efforts, resulting in crashes at lower temps.
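Napkin math on those readings (treating the ~1.1 V GPU-Z load reading as the VDDC voltage, which is an assumption):

# rough VDDC power from the GPU-Z current readings above
for amps in (140, 190, 250):
    print(f"{amps} A x ~1.1 V = ~{round(1.1 * amps)} W")
# ~154 W cool, ~209 W hot, ~275 W at that odd spike right before the crash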


A) volts x amps + high temp = VDDC current out of control = crashes
B) lower volts x amps = higher temps before the VDDC current goes out of control
C) higher fan speed = insane warhammer behind the computer


OC tools:
CCC, Sapphire Trixx, left mouse button
GPU-Z 0.5.8 (new release with full voltage monitoring for the 7xxx series)

NICE post.
With all the people QQ'ing about their cards being defective because they can't reach 1125, it's finally nice to see a very helpful and insightful post again.

I can verify your findings directly.
My 1175mv (stock) card is rock stable at 1150MHz core and 1700MHz RAM, still at 1175mv. I tested it in GPUTool, with no artifacts up to 86C, where I stopped the test.

However, at 1200MHz it seems that the artifacts start happening around 76C.
What's weird is that it doesn't seem to matter what the voltage is. I tried 1.215v (which was about 1.13v under load) up to 1.250v (which was about 1.16v under load), and all it did was raise the artifact threshold by 1C total... basically it didn't seem to matter at all.

I do know that these GPUTool artifacts don't really tell the whole story, since Battlefield 3 is fully stable at 1.231v and 1200MHz, but I'll see weird artifact lines on the planes in chase-cam view at around 1.21v. It seems like in games, at least, raising the voltage does have some effect, so it's really hard to say what is causing the artifacts in GPUTool versus what is causing them in games.

Similar results on the VDDC current. I saw that doing a Furmark test at 1215MHz and 1.25v: I watched the amps climb slowly to around 250 (!) as the card reached 82C, and then the card itself *shut down*. The screen turned black, the fan speed went way down, and the system was locked hard. That black-screen video card crash didn't happen at 1200MHz, but artifacts appeared at 86C.

I think we need more time to find out what's really going on here. And I don't even know if a vdroop mod would do anything or not for this...
 
Got to 1125 and ran it through Furmark; didn't see any artifacts or instability. Then I fired up Skyrim and got an insta-crash (of the "display driver stopped responding" type). Is Skyrim super sensitive to OCs or something?
 
So here is the question then: if some of these cards use 0.125V less at stock (1.05 vs 1.175), how much less power are they pulling? This would be interesting to see, as that is roughly an 11% decrease in voltage.
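Back-of-the-envelope, assuming dynamic power scales with V squared at the same clocks (leakage drops with voltage too, so the real saving is probably a bit larger):

v_low, v_high = 1.05, 1.175
print(f"{1 - v_low / v_high:.1%} lower voltage")              # ~10.6%
print(f"{1 - (v_low / v_high) ** 2:.1%} less dynamic power")  # ~20% at the same clocks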
 
NICE post.
With all the people QQ'ing about their cards being defective because they can't reach 1125, it's finally nice to see a very helpful and insightful post again.

I can verify your findings directly.
My 1175mv (stock) card is rock stable at 1150MHz core and 1700MHz RAM, still at 1175mv. I tested it in GPUTool, with no artifacts up to 86C, where I stopped the test.

However, at 1200MHz it seems that the artifacts start happening around 76C.
What's weird is that it doesn't seem to matter what the voltage is. I tried 1.215v (which was about 1.13v under load) up to 1.250v (which was about 1.16v under load), and all it did was raise the artifact threshold by 1C total... basically it didn't seem to matter at all.

I do know that these GPUTool artifacts don't really tell the whole story, since Battlefield 3 is fully stable at 1.231v and 1200MHz, but I'll see weird artifact lines on the planes in chase-cam view at around 1.21v. It seems like in games, at least, raising the voltage does have some effect, so it's really hard to say what is causing the artifacts in GPUTool versus what is causing them in games.

Similar results on the VDDC current. I saw that doing a Furmark test at 1215MHz and 1.25v: I watched the amps climb slowly to around 250 (!) as the card reached 82C, and then the card itself *shut down*. The screen turned black, the fan speed went way down, and the system was locked hard. That black-screen video card crash didn't happen at 1200MHz, but artifacts appeared at 86C.

I think we need more time to find out what's really going on here. And I don't even know if a vdroop mod would do anything or not for this...
I saw your post in my thread over at Guru3D (I'm Dr. K6 over there). Unwinder says there's no vdroop, just the sensors reading incorrectly, and the newest MSI AB adjusts for this. I believe what you're seeing above is a temperature wall. Temperature increases leakage in the IC, so the current draw climbs at the same voltage, and at some point I believe the OCP kicks in and shuts down the card. Keep the card cool and it will clock better. At ~45C, I hit 1340MHz on the core with 1.3V.

Got to 1125 and ran it through Furmark; didn't see any artifacts or instability. Then I fired up Skyrim and got an insta-crash (of the "display driver stopped responding" type). Is Skyrim super sensitive to OCs or something?
Furmark is useless now since both AMD and NVIDIA have protection in their cards against power viruses. The program can't max out the card, therefore you can't produce the full load needed to test for stability. You need to test your overclocks in games. Games that run very high FPS (older Source games like Left 4 Dead are awesome here) work the best IMO. I've found running Crysis 2 benches works well too.
 
Still debating watercooling, so just want to make sure I understand this correctly:

Since I have a 1.050V card, this gives me less headroom regarding how high I can raise the voltage past 1.175V before OVP/OCP kicks in. Is that right?

Still, would be nice to get my temps to where they're not occasionally hitting 89C @ 1.175V in normal game usage. I could use a custom fan profile, but the fan already gets pretty loud.
 
Still debating watercooling, so just want to make sure I understand this correctly:

Since I have a 1.050V card, this gives me less headroom regarding how high I can raise the voltage past 1.175V before OVP/OCP kicks in. Is that right?

Still, would be nice to get my temps to where they're not occasionally hitting 89C @ 1.175V in normal game usage. I could use a custom fan profile, but the fan already gets pretty loud.

I'm definitely getting a waterblock as soon as I can afford one. I hate the sound of this fan!! I don't know for sure about your OC headroom, but my card (1.05v) is stable in 3DMark11 running 1220/[email protected] so far (haven't tested this setting in games yet; I expect to have to back them down a little) using a custom fan curve in AB (temps around 68-70°C with the fans maxing around 60-70%). It sounds like a plane taking off; my wife makes fun of me right now because of it, and that's unacceptable ;)
 
I'm definitely getting a waterblock as soon as I can afford one. I hate the sound of this fan!! I don't know for sure about your OC headroom, but my card (1.05v) is stable in 3DMark11 running 1220/[email protected] so far (haven't tested this setting in games yet; I expect to have to back them down a little) using a custom fan curve in AB (temps around 68-70°C with the fans maxing around 60-70%). It sounds like a plane taking off; my wife makes fun of me right now because of it, and that's unacceptable ;)
Even if you have to back them down a bit, I think you have a better card than I do. At 1.175V everything looked fine at 1175 (core) until I noticed some serious artifacting in BF3 after a while, and then discovered my system locks up in Anno 2070 at anything past 1100. So I'm curious if getting my temps down will help.

My fan gets pretty loud on auto as it is, and I'm usually not one to complain about noise. The default custom fan curve that's automatically set in Afterburner will keep it cool at around 74C, but that's reminiscent of my old Vantec 92mm Tornado (which was too much even for me). I could mess around with it, but I'm not sure what the best settings are, and I'm not entirely sure how much more noise I want to deal with on a regular basis. :(

So, the whole watercooling thing is pretty tempting. Still, there are a lot of other things I could use that money for. Decisions...
 
This is kinda weird. I just reinstalled Windows and now MSI Afterburner is limiting my memory to 1790 MHz. :(

Only thing I can think of that is different is I downloaded the latest driver from AMD, RC11.

I installed TriXX which allows me to overclock to 1825 again, but then it doesn't have the monitoring window that Afterburner does.
 
Even if you have to back them down a bit, I think you have a better card than I do. At 1.175V everything looked fine at 1175 (core) until I noticed some serious artifacting in BF3 after a while, and then discovered my system locks up in Anno 2070 at anything past 1100. So I'm curious if getting my temps down will help.

My fan gets pretty loud on auto as it is, and I'm usually not one to complain about noise. The default custom fan curve that's automatically set in Afterburner will keep it cool at around 74C, but that's reminiscent of my old Vantec 92mm Tornado (which was too much even for me). I could mess around with it, but I'm not sure what the best settings are, and I'm not entirely sure how much more noise I want to deal with on a regular basis. :(

So, the whole watercooling thing is pretty tempting. Still, there are a lot of other things I could use that money for. Decisions...

I think watercooling is definitely the way to go if you can. When I watercooled my 6950's, my temps came way down, as well as the voltage! The voltage dropped significantly for a given overclock. I don't know how those compare to these new cards, but I would expect better temps and performance, even on a less extravagant loop.

I played a little bit in just a few games (Crysis, Metro 2033, Deus Ex) at, I think, 1175/1600 @ ~1.12, and everything seems stable, granted it was just a short playthrough and messing around. I'm not sure what'll happen in extended sessions of, say, 2-3 hours...

I have 2 monitors hooked up right now, and my card is at 48C @41% fan. I can hear that more distinctly over my 6 rad fans on full, and it's driving me nuts lol.


This is kinda weird. I just reinstalled Windows and now MSI Afterburner is limiting my memory to 1790 MHz. :(

Only thing I can think of that is different is I downloaded the latest driver from AMD, RC11.

I installed TriXX which allows me to overclock to 1825 again, but then it doesn't have the monitoring window that Afterburner does.

My MSI AB limits the memory to 1790 also. Weird because I thought I heard some pushing 1800, but then again they may have been doing it on trixx
 
I saw your post in my thread over at Guru3D (I'm Dr. K6 over there). Unwinder says there's no vdroop, just the sensors reading incorrectly, and the newest MSI AB adjusts for this. I believe what you're seeing above is a temperature wall. Temperature increases leakage in the IC, so the current draw climbs at the same voltage, and at some point I believe the OCP kicks in and shuts down the card. Keep the card cool and it will clock better. At ~45C, I hit 1340MHz on the core with 1.3V.


Furmark is useless now since both AMD and NVIDIA have protection in their cards against power viruses. The program can't max out the card, therefore you can't produce the full load needed to test for stability. You need to test your overclocks in games. Games that run very high FPS (older Source games like Left 4 Dead are awesome here) work the best IMO. I've found running Crysis 2 benches works well too.

Unwinder didn't say that. He said he's adding the "Target" voltage back as the default, due to too many noobs crying about the voltage reading incorrectly. That's what he said.
There is very definitely vdroop. You can use the latest version of GPU-Z and notice weird stuff happening with the volts and amps, especially in Furmark... really bizarre, but that's how it is.

You can unlock the real measured voltage in AB (to make it like Beta 10) by editing the profiles folder after installing AB and rebooting.

Put

VDDC_CHL8228_VIDReadback = 0
MVDDC_CHL8228_VIDReadback = 0

in the [Settings] section of the VEN... .cfg profile in the Profiles folder (not in the main MSIAfterburner.cfg file).
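To be explicit about placement (the Profiles path and the full VEN_ file name vary per card and system; this is just to show where the lines go):

MSI Afterburner\Profiles\VEN_1002&DEV_... .cfg

[Settings]
VDDC_CHL8228_VIDReadback = 0
MVDDC_CHL8228_VIDReadback = 0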
 
(warhammer)
1.218 V in Trixx (1.115 V in GPU-Z): 1180MHz will crash at 87 degrees (fan 57%)
1.193 V in Trixx (1.152 V in GPU-Z): 1180MHz will crash at 90 degrees (fan 54%)
1.170 V in Trixx (1.068 V in GPU-Z): 1180MHz will artifact at 95 degrees (fan 50%) - took longer to heat up

At 1.170 V in Trixx:
1240MHz: artifacts occur at 71 degrees
1230MHz: artifacts occur at 73 degrees
1220MHz: artifacts occur at 75 degrees
1210MHz: artifacts occur at 80 degrees
1180MHz: artifacts occur at 95 degrees


I saw your post in my thread over at Guru3D (I'm Dr. K6 over there). Unwinder says there's no vdroop, just the sensors reading incorrectly, and the newest MSI AB adjusts for this. I believe what you're seeing above is a temperature wall. Temperature increases leakage in the IC, so the current draw climbs at the same voltage, and at some point I believe the OCP kicks in and shuts down the card. Keep the card cool and it will clock better. At ~45C, I hit 1340MHz on the core with 1.3V.


Dear K6,

Saying 1.3 volts and 45C is nice. BUT, what would be your hard and fast rule for NEW overclockers running on STOCK cooling?
Over 70% fan is too loud, my stable overclocks are best achieved at stock volts, and raising the volts on my stock cooler helps nothing. We have plenty of people buying these cards who raise the volts because that's what others have said, CRASH, and then want to RMA their cards because they won't OC to 1300MHz...

Edit: also, how does the temp wall affect artifacts? I understand the higher amps at higher temps = crash, but why does it artifact at the higher temperature?
 
Last edited:
Furmark is useless now since both AMD and NVIDIA have protection in their cards against power viruses. The program can't max out the card, therefore you can't produce the full load needed to test for stability. You need to test your overclocks in games. Games that run very high FPS (older Source games like Left 4 Dead are awesome here) work the best IMO. I've found running Crysis 2 benches works well too.

I found Heaven useful for finding the card's artifact point.
3DMark 11 would freeze sometimes, but since it's a short run it doesn't heat the card enough.
BF3 runs at a good temperature; I'd consider an hour of it a good test. One older game that only puts about 60% load on the GPU actually crashed more easily than BF3, but that was temperature related, not GPU-usage related...

BF3 runs with no issues at 1200MHz GPU clock, at 65-70% fan speed, default volts.
BF3 runs with no issues at 1070MHz GPU clock, at 49% fan speed, default volts (automatic factory fan profile).
This may explain why AMD set the GPU at 925MHz: keep the fan speed down and reviewers are happy campers, then come Nvidia crunch time, re-release it at 1050/1150MHz and call it a 7975/7980. It also keeps temps down two years from now for people who install a graphics card and then don't clean the fan for the next decade, preventing crashes.
 
Last edited:
I find it a little strange that everyone is posting max RAM OCs.
Running 3DMark 11, I can max out the RAM too......
but over 1450 on the RAM the score goes down, so I set it to 1420.
Has everyone been testing their 3DMark 11 scores in 10MHz increments until the score goes down? I know it takes time, but your two-mega-jillion-hertz RAM overclocks don't seem right to me....
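In other words, keep score notes per clock and take the peak, not the highest clock that happens to survive. Trivial sketch with made-up numbers, just to show the bookkeeping:

# made-up example results: memory clock (MHz) -> 3DMark11 graphics score
runs = {1375: 9800, 1425: 9950, 1450: 9930, 1500: 9850}
best = max(runs, key=runs.get)
print(best, runs[best])   # keep the clock with the best score, not the last one that didn't crash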
 
Furmark is useless now since both AMD and NVIDIA have protection in their cards against power viruses. The program can't max out the card, therefore you can't produce the full load needed to test for stability. You need to test your overclocks in games. Games that run very high FPS (older Source games like Left 4 Dead are awesome here) work the best IMO. I've found running Crysis 2 benches works well too.

Ah, I figured. Just gonna leave it stock for now. Don't have the energy to do it right.
 
Unwinder didn't say that. He said he's adding the "Target" voltage back as the default, due to too many noobs crying about the voltage reading incorrectly. That's what he said.
There is very definitely vdroop. You can use the latest version of GPU-Z and notice weird stuff happening with the volts and amps, especially in Furmark... really bizarre, but that's how it is.
I wouldn't trust Furmark anyway because, as I said above, both AMD and NVIDIA specifically code their drivers to kill power-virus-type programs. It was my understanding that Unwinder changed the coding simply to show what people wanted to see, not what the sensors actually read; my understanding is that the sensors report a percentage relative to the VID (1.175V) as a maximum, rather than a direct measurement. If I'm wrong, that's fine, but my card is as stable as can be, with no issues overclocking. Also note that when I go from stock to 1.3V applied, my power consumption goes up by some 100W, so it's definitely taking more voltage. The difference here is that I'm on water cooling, which is why I think it's a cooling issue.

You can unlock the real measured voltage in AB (to make it like Beta 10) by editing the profiles folder after installing AB and rebooting.

Put

VDDC_CHL8228_VIDReadback = 0
MVDDC_CHL8228_VIDReadback = 0

in the [Settings] section of the VEN... .cfg profile in the Profiles folder (not in the main MSIAfterburner.cfg file).
This doesn't change anything for me, I'm still reading the same (1.17 @ 1.3V applied, typically 1.12V load, and 0.85V idle), and my voltage never fluctuates or changes, in GPUZ or Afterburner.

Dear K6,

Saying 1.3 volts and 45C is nice. BUT, what would be your hard and fast rule for NEW overclockers running on STOCK cooling?
Over 70% fan is too loud, my stable overclocks are best achieved at stock volts, and raising the volts on my stock cooler helps nothing. We have plenty of people buying these cards who raise the volts because that's what others have said, CRASH, and then want to RMA their cards because they won't OC to 1300MHz...

Edit: also, how does the temp wall affect artifacts? I understand the higher amps at higher temps = crash, but why does it artifact at the higher temperature?
The first rule of overclocking is that there are no hard/fast settings :D. Cheekiness aside, my point there was that once cold, these cards can clock. If you're looking for a place to start, I would find a fan setting you can enjoy your card at, and then leave it there the whole time during your testing. Then start upping your clocks, watch your temps, and hope for the best. If you crash out, raise your voltage a little and try again. These cards are designed to run at 90C, but the colder you keep the IC, the higher the frequency you can push at the same voltage.

Artifacts happen as a byproduct of the chip becoming unstable, meaning it's running too fast (too high a frequency) for the current temperature and voltage. You start to see artifacts as the chip produces logic errors and draws garbage instead of what was called for. Therefore, you either have to lower the frequency, lower the temperature, or raise the voltage.

As an offshoot, maybe that's the problem with these things: in order to get high clocks, you need to keep them cold, and since AMD designs them to run at 90C, they had to keep the stock clocks very low to pass certification. Just a thought. I think Falkentyne posted about the different leakage chips, and I believe he may be onto something there.

I find it a little strange that everyone is posting max RAM OCs.
Running 3DMark 11, I can max out the RAM too......
but over 1450 on the RAM the score goes down, so I set it to 1420.
Has everyone been testing their 3DMark 11 scores in 10MHz increments until the score goes down? I know it takes time, but your two-mega-jillion-hertz RAM overclocks don't seem right to me....
I use 25MHz increments, but yeah, my score actually increases until the RAM flat out crashes. Even if ECC is kicking in earlier, my FPS and score keep going higher. People with high RAM clocks are probably overvolting the vRAM to 1.7V (stock is 1.6V). At stock, my vRAM tops out at 1625MHz, but with 1.7V I can get 1775MHz stably. However, I don't recommend overvolting the RAM for very long, as vRAM has typically been very sensitive to voltage and can get killed quickly. Unless of course that's changed with these cards, but I'm not going to be the one to test it :D.
 
Okay, I had my card crash at 1175 and 1125...so I'm jumping back down to stock :( I wonder why my system can't handle this

^^
I was having problems overclocking my first time through, and found out it's because I had Overdrive enabled while using Trixx to overclock. I'm currently pulling 1.2/1600 at stock voltage with the card hanging around 70C. I'm testing it in SWTOR maxed out at 5760x1080, and so far no problems and no artifacts.
 
lol, mine is the worst of the worst....

Reg 00000174 : 000025f0.......... it's like... 59% or something.

But for the record, I currently have it at 1090/1375 at 1.174 volts.
The highest I can go is around 1190 at 1.28 volts (but it only gives me like 3-5 fps more in Battlefield 3, so I said screw it lol, not worth it). I initially had 1575 memory, BUT what I noticed in Battlefield 3 was that if I enable "render.perfoverlayvisible 1" in the console and raise my memory clock, it causes more spikes in the graph (= bad). But I'm currently getting around 85-100+ fps (depending on the map) with MSAA x2, so I'm very pleased lol.

But it makes me jealous to see people at 1175 with stock volts. BTW, Furmark etc. are rubbish lol; I ran them at my clocks with no problems, then I play Battlefield 3 and the driver stops responding - always use the game you're playing as the benchmark, in my opinion.
 
XFX 7970 BEDD

Core: 1125
Memory: 1600
Volts: 1.17

Heaven: 642
Max: 57.8
Min: 15.0

Temp: ~72 C

I pushed the core up to 1150 and ran the benchmark; it started artifacting on the desktop afterwards. The Heaven bench came back at 650, so I said the hell with that. I could probably squeeze more out of the core with volts, but during the run at 1150 it was already pushing ~76C, so I think I'm pretty happy. Maybe I'll go water in the not-too-distant future.

I run my fans at 1:1 from 30 to 80 degrees, then from 80 to 90 degrees they scale from 80% to 100%. It idles around 50 degrees in my Silverstone FT02, which is pretty quiet compared to my XFX 5970 BE. The fan noise is more of a "static" sound with the 7970 BEDD, while the 5970 was a "whooosh".

I'm using TRIXX to set core/memory/volts, and afterburner with everything but monitoring disabled to view status (I have a custom Logitech G19 profile that polls and graphs the Afterburner statistics realtime).
 
Hmm, why are people doing 200+ MHz overclocks with auto fan or without additional voltage? You probably need a manual fan speed. It's gonna be loud. You can solve that with water or aftermarket cooling.
 
Last edited:
same

either that, or final fantasy VII


FF VII = 3rd, and 4th longest gaming sessions. 30+ hrs

Mainly breeding Golden Chocobos and fighting the Weapons. It was easy to come home from work on Friday evening and sit down to start gaming, then notice it was early Sunday morning...

The EQ2 XPAC where the fae were introduced was the second longest session at a little over 35hrs. I was grinding up my Fae Dirge to the level cap.
 
I find it a little strange that everyone is posting max RAM OCs.
Running 3DMark 11, I can max out the RAM too......
but over 1450 on the RAM the score goes down, so I set it to 1420.
Has everyone been testing their 3DMark 11 scores in 10MHz increments until the score goes down? I know it takes time, but your two-mega-jillion-hertz RAM overclocks don't seem right to me....

@warhammer and @K6
My RAM keeps giving an increase in 3DMark 11 up to 1725MHz. The biggest increases are up to 1650MHz, with smaller gains after that. I also verified this with Vantage. So I settled on 1700MHz.
I haven't increased the RAM voltage, either.
 
^^
I was having problems overclocking my first time through, and found out it's because I had Overdrive enabled while using Trixx to overclock. I'm currently pulling 1.2/1600 at stock voltage with the card hanging around 70C. I'm testing it in SWTOR maxed out at 5760x1080, and so far no problems and no artifacts.

^^ possibly a BAD mistake, although YMMV here. You need to run benches to verify....

You can do 1200MHz because you disabled Overdrive, which ALSO sets PowerTune to 0% (what we want is for PowerTune to be completely ignored/disabled... but that just isn't happening). This means that when the card gets heavily stressed, it will throttle the clocks. You can actually see the throttling if you run Furmark and the latest GPU-Z at the same time (MSI Afterburner doesn't show this). I just finished testing this. You'll see the core go down from 1200MHz to 750-850MHz....
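If you don't want to sit and stare at the GPU-Z window, you can skim its sensor log afterwards. Rough sketch below; I'm assuming the default "GPU-Z Sensor Log.txt" file name and a comma-separated log with a column containing "Core Clock" in its header, so check your own log before trusting it:

import csv

SET_CLOCK = 1200                  # whatever you set in Trixx/CCC
LOG = "GPU-Z Sensor Log.txt"      # default log name (assumption - check yours)

with open(LOG, newline="", encoding="utf-8", errors="ignore") as f:
    reader = csv.DictReader(f, skipinitialspace=True)
    clock_col = next(c for c in reader.fieldnames if "Core Clock" in c)  # column name guessed from the header
    for row in reader:
        try:
            clock = float(row[clock_col])
        except (TypeError, ValueError):
            continue
        if clock < SET_CLOCK - 50:   # well below the set clock = PowerTune pulling it down
            print(f"throttled sample: {clock} MHz")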

I'm not sure if powertune gets completely disabled (which is what would be nice) if you overclock with AB, though (haven't tested that).
The throttling may only get noticed in heavy GPU stress though. Probably no effect in SWTOR.

Something you can test is whether disabling overdrive (which sets powertune to 0%) gives you higher benchmark and game scores than powertune at 20%. I think you will find that your scores won't be as high (assuming you can pass the test without crashing). I didn't see a difference in 3dmark '11. But furmark throttled them clocks instantly, even at 50C.

My only issues with Overdrive enabled are: 1) making sure I don't have multiple monitoring programs running at the same time, and 2) sometimes changing a 3D settings profile causes the clocks (but not the fan or PowerTune) to reset back to stock, so I have to go back into Trixx and reload it.

I'm fully stable at 1150/1700 with overdrive enabled and using Trixx to overclock.

There ARE some issues where you will get artifacts even at stock (after an overclock run, usually after using different monitoring programs). I'm not sure if that's related to interactions between multiple running monitoring programs and Overdrive, but if it happens, you can go into S3 sleep mode and it's fixed. However, if you get horrible artifacting while MSI Afterburner's OSD is running, you -have- to reboot the computer.
 
Last edited:
I was using GPU-Z 0.5.7. It reported my Sapphire 7970 as having 1.175 volts (with Battlefield 3 playing in windowed mode). I was using the drivers from the Sapphire CD. I did get some minor artifacts, but I was able to play with all 3 sliders maxed (in the ATI Overdrive tool). I tried the 1/9 drivers; I could no longer run the 3 sliders maxed (without BF3 crashing to the desktop randomly). I can do about 1100/1575. I've since reformatted, however, and I am using the latest 7970 drivers (came out a couple of days ago). GPU-Z also released a new version, and I am using it. This GPU-Z is showing 1.07 volts with BF3 in windowed mode. SO WTH?

My stock voltage was apparently 1.175, but now it is 1.07 (under load)?
 
I was using GPU-Z 0.5.7. It reported my Sapphire 7970 as having 1.175 volts (with Battlefield 3 playing in windowed mode). I was using the drivers from the Sapphire CD. I did get some minor artifacts, but I was able to play with all 3 sliders maxed (in the ATI Overdrive tool). I tried the 1/9 drivers; I could no longer run the 3 sliders maxed (without BF3 crashing to the desktop randomly). I can do about 1100/1575. I've since reformatted, however, and I am using the latest 7970 drivers (came out a couple of days ago). GPU-Z also released a new version, and I am using it. This GPU-Z is showing 1.07 volts with BF3 in windowed mode. SO WTH?

My stock voltage was apparently 1.175, but now it is 1.07 (under load)?

Old gpuz didn't monitor voltage correctly on the 7970. The new 0.5.8 does monitor correctly. I originally thought my voltage was 1175mv but it was actually 1050mv.
 
Old gpuz didn't monitor voltage correctly on the 7970. The new 0.5.8 does monitor correctly. I originally thought my voltage was 1175mv but it was actually 1050mv.

So that's why the maxed 3 sliders are giving artifacts. I will have to up the voltage to 1.175. So 1.175 is supposed to be the default voltage (as in perfectly safe, like 100% safe)? I know some cards come with 1.05-1.175, and I'm not sure why that is. Why can't they all be 1.175? I guess I ain't a hardware technician that makes silicon shit, so I wouldn't know the reason.
 
So that's why the maxed 3 sliders are giving artifacts. I will have to up the voltage to 1.175. So 1.175 is supposed to be the default voltage (as in perfectly safe, like 100% safe)? I know some cards come with 1.05-1.175, and I'm not sure why that is. Why can't they all be 1.175? I guess I ain't a hardware technician that makes silicon shit, so I wouldn't know the reason.

Yeah, indeed, I wonder about that too: if you put 1.175 on a 1.05v or 1.125v card, will it be bad? I hope someone can explain whether it is. Personally I think all the cards are the same, so 1.175 should be safe on all of them; if I'm wrong, please say so :)
 
Yeah, indeed, I wonder about that too: if you put 1.175 on a 1.05v or 1.125v card, will it be bad? I hope someone can explain whether it is. Personally I think all the cards are the same, so 1.175 should be safe on all of them; if I'm wrong, please say so :)

Yep, this is what I want to know ^^^^^^^. Anyone?
 
Currently running my 73% 7970 at 1300MHz core, 1790MHz memory, 1.3v core and 1.7v memory, on my FW900 at 2304x1440. Played a couple of hours of Crysis 2 with DX11 maxed and hi-res textures and she was stable as a rock. Heaven 2.5 is currently at about the 2-hour mark, looping without a single artifact.

The best part is how cool the stock cooler at 100% keeps the card: 62C max. Zero noise for the win! You could not run the card at 100% fan if the computer were near you, though.

I doubt my 83% 7970 will be able to clock this high. I will test that later tonight.
 
So that's why the maxed 3 sliders are giving artifacts. I will have to up the voltage to 1.175. So 1.175 is supposed to be the default voltage (as in perfectly safe, like 100% safe)? I know some cards come with 1.05-1.175, and I'm not sure why that is. Why can't they all be 1.175? I guess I ain't a hardware technician that makes silicon shit, so I wouldn't know the reason.

If your card is a 1.05v card, then you shouldn't need to go as high as 1.175v to max CCC. I think I maxed CCC with 1.1v...:confused:
:D
 
If your card is a 1.05v card, then you shouldn't need to go as high as 1.175v to max CCC. I think I maxed CCC with 1.1v...:confused:
:D

The irritating thing is that if you set 1.175, it will still only be around 1.12-1.114 under load. Also, does anyone have something that will set the voltage to 0.9 in 2D? I'm using Trixx and can't find anything that does this. I'm happy with Trixx though, as it applies PowerTune +20% without using CCC. Afterburner seemed to bug out all the time.
 
If anyone is interested I made a quick video of my nice air-cooled 1300Mhz 7970 overclock. I cannot believe how well the stock cooler holds up under this load!

http://www.youtube.com/watch?v=KRh3P3GJZWI&feature=youtu.be

Anyone else reach 1300Mhz on air?

Well, if I put a lot of volts on it, I have a feeling I could go even higher. But why would I :p I don't want to run my GPU at 1.3v. At 1.12-1.14v load I can run 1220, so maybe 1300 would be doable at 1.2v.
 
Well, if I put a lot of volts on it, I have a feeling I could go even higher. But why would I :p I don't want to run my GPU at 1.3v. At 1.12-1.14v load I can run 1220, so maybe 1300 would be doable at 1.2v.

It doesn't work like that. Very few 7970's will reach anything near 1300MHz no matter what volts you give them. 1.3v is fine for the GPU if you keep temps under control. Give it a try.

I'll try when I get my cards under water. 70%+ fan is just too loud for me, and 1250+ overclocks require that on my end ;)

Ya, water should be nice. I don't care about the fan speed as I am not co-located with my computer so noise isn't a factor and running 100% fan is fine for my use.
 