ccityinstaller · Supreme [H]ardness · Joined: Feb 23, 2007 · Messages: 4,236
Hey guys,
I swapped my old ASRock Z77 Extreme4 out for a Z77 MPower since I am going to go Tri-Xfire (paid for with mining), and I am having some issues with the build. I am posting in this subforum since the issue(s) seem to be more related to the GPU than the CPU. I am running the latest CAT 14.2 BETA drivers (I also had the old 13.12, and upgraded hoping it was a funky driver issue).
I have the CPU running @ 5.026GHz (50x100 plus the little MSI round-up on the BCLK) @ 1.4V (she is a golden chip). I have tested the CPU through 25+ passes of IBT on Extreme mode with Max Memory, and everything is rock solid. I have verified the CPU Vcore via multimeter and it stays right at 1.407V; it never flickers a single digit when loaded 100%. CPU core temps are 59, 60, 63, and 61C.
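(For anyone wondering where the odd 5.026 number comes from: it is just the 50x multiplier times MSI's slightly rounded-up BCLK. Quick napkin math:)

```python
# Back out the effective BCLK from the clock the board reports.
core_mhz = 5026        # reported core clock
multiplier = 50        # CPU multiplier
bclk_mhz = core_mhz / multiplier
print(f"Effective BCLK: {bclk_mhz:.2f} MHz")   # 100.52 MHz, vs. the nominal 100
```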
I have my first 290 running @ 1075MHz/1325MHz (also tested at stock) @ its default voltage (1.19V, with a 50% board power increase via TriXX), and I have run it through 3+ hours of FurMark without a single issue. GPU core peaks @ 35C and VRM1 peaks @ 52C.
Now that I have explained that each component works quite well under stress testing, we get to the issue I am having: running them both balls to the wall. I am a firm believer that a component isn't 100% stable (if it is in a single loop) unless every device in the loop can produce its maximum heat load without any issues.
My problem is that whenever I load both the CPU with Prime95 (small FFTs) and the GPU with FurMark, everything runs fine for 10~20 minutes, and then I get a random reboot. Nothing crashes, no black screens, just a reboot. I have also noticed that when both the CPU and GPU are loaded, the GPU core clock wants to drop from 1075MHz down to ~1036MHz. This happens ONLY when both are loaded 100%.
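Some napkin math on the combined draw, since the reboot only shows up when both are maxed. Every wattage below is a guess on my part, not a measurement:

```python
# Rough estimate of combined system draw under Prime95 + FurMark.
# All component wattages here are assumed ballpark figures, NOT measured values.

def estimated_draw_watts():
    cpu = 220          # guess: Ivy quad @ 5.0 GHz / 1.407 V under small-FFT Prime95
    gpu = 290 * 1.5    # guess: 290's stock board power with the +50% power limit in TriXX
    rest = 80          # guess: board, RAM, pump, fans, drives
    return cpu + gpu + rest

print(f"Estimated combined draw: {estimated_draw_watts():.0f} W")
```

Worth comparing that number against the PSU's rated continuous output, since neither test on its own gets anywhere near it.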
Anyone have any ideas? The things I have tried so far include:
1) Bumped the CPU Vcore from 1.39 (actual) to 1.407. I only went up a tiny amount since the CPU alone was 100% stable @ the 1.39V setting.
2) Bumped CPU I/O from default (1.05) to 1.2V.
3) Tested the CPU Vcore and the 12V rail of the PSU (reads 12.07V under load) with a higher-end Fluke multimeter.
4) Upgraded the GPU drivers from CAT 13.12 to the latest 14.2 BETA.
I am ready to tear my hair out over this. I have had to tear the loop down twice already due to one stubborn, sneaky-ass leak that ended up coming from a tiny crack (yes, a fucking crack) in one of my Koolance QDCs. I just want the damn thing to pass the stability tests so I can resume mining/gaming and know that everything is tuned properly.
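If it helps anyone dig into the clock droop, I can leave GPU-Z logging to a CSV during the run and scan the log afterwards with a quick script. The column names below are guesses based on my GPU-Z version; check the header line of your own log file:

```python
# Scan a GPU-Z sensor CSV log for moments where the core clock dropped
# below the 1075 MHz target. The "Date" and "GPU Core Clock [MHz]" column
# names are assumptions; adjust them to match your log's header.
import csv

def find_clock_droops(path, target_mhz=1075.0, tolerance=10.0):
    """Return (timestamp, clock) pairs where the core clock sagged."""
    droops = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            clock = float(row["GPU Core Clock [MHz]"])
            if clock < target_mhz - tolerance:
                droops.append((row["Date"].strip(), clock))
    return droops

if __name__ == "__main__":
    for when, mhz in find_clock_droops("gpuz_sensor_log.csv"):
        print(f"{when}: core clock fell to {mhz:.0f} MHz")
```

Matching the droop timestamps against when the reboot hits should at least show whether the clock sag and the reboot line up.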