I'm running out of ideas (GTX 1070)

Stoly

Supreme [H]ardness
Joined: Jul 26, 2005
Messages: 6,713
I have a GTX 1070 which I had in my home PC. Since I had little time to game and wanted to try mining, I moved it to a PC at work, where it mined for about 4 months.

Since ETH became much less profitable and I now have more time to game, I moved it back to my home PC, where I gamed at 1080p for a while on my 4K TV. (The desktop is at 1080p since 4K scaling really sucks.) I wanted to try 4K for gaming, so I figured I should try Battlefront at 4K and see how it did.

That's where the problems started. Upon switching resolutions, the screen would either go black or show severe pixel corruption, as if the memory was severely overclocked. I tried FIFA 16 and SC2 with the same results. I switched the desktop to 4K and, yup, there was the pixel corruption. Switching back to 1080p and everything was fine.

Then I had the brilliant idea to try desktop resolutions between 1080p and 4K to see if only 4K was affected. So I started with 4K: pixel corruption. 1600p: pixel corruption. 1440p: pixel corruption. And finally 1080p: PIXEL CORRUPTION.

From that moment on, no matter what resolution I choose, I get pixel corruption or a blank screen.

So the easy answer is "you fool, you broke the card by mining". Well, no: the card works perfectly on the PC at work at 1080p and under. (I don't have 4K at work.)

OK, so then the OS is borked. Nope, I tried fresh Windows 10 (with the Fall update) and Windows 7 installations on a separate drive. Exact same results.

Things I've tried:

Several drivers on fresh Win 10 and Win 7 installations, from 384.94 (the one I originally installed, which worked) to the latest. They all fail.
Updated Windows 10 to the latest Fall update. Same result.
Tried different memory modules. Same result.
Updated the mobo BIOS. Same result.
Tried a GT 630. Didn't work, but I had little time to troubleshoot, so WIP.
Tested on 2 PCs at work. Works like a charm. Didn't try games, but tested with 3DMark and it works fine.

To do:
Swap the PSU
Further testing with the GT 630
Try a different HDMI cable
Try a DVI-to-HDMI cable
Try a different HDMI port on the TV
I have a Core i7 920 and mobo I can use.
I think I have another Ivy Bridge mobo I can test.

Note that the PC does boot and doesn't freeze. It will even run games; it just either shows a black screen or pixel corruption.
 
Sounds like it's RMA time, man. I know that sucks, but you can always buy yourself a 1070 Ti to hold you over and then sell the 1070 when you get a new one.
 
Sounds like it's RMA time, man. I know that sucks, but you can always buy yourself a 1070 Ti to hold you over and then sell the 1070 when you get a new one.
I really don't want to get a new card and still have problems. The 1070 Ti does sound really tempting, though. Too bad I can't afford it right now.
 
"you fool, you broke the card by mining"

If you weren't running 100% fan while mining completely memory-hard ETH, then you are the only person to blame for burning out your GDDR5 and/or your IMC.

1070s (generally) don't have temperature sensors on either of these components. That means if you were running on auto-fan, you would have been cooking your GDDR5 and your IMC for 4 months straight, 24/7, at most likely 105°C+ and maximum amperage, since you were using the full memory bandwidth of the IMC/GDDR5 the entire time. Meanwhile the card was regulating the fans based on the GPU core temperature, which doesn't properly reflect memory-hard use: ETH has fairly low GPU utilization compared to a proper GPU algo like the old Vertcoin algo, because ETH relies almost entirely on memory-hardness for its ASIC resistance.

If you were using one of the cards with memory cooling inferior to the stock reference blower cooler (which is most of them), then this scenario is even more likely, as the delta between the GPU core temperature and the IMC and GDDR5 temperatures would be even greater.
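For anyone curious what the card itself actually reports, here's a minimal sketch (my own, assuming the pynvml Python bindings for NVIDIA's NVML are installed). It can only read the core temperature and fan duty cycle, because a consumer Pascal card like the 1070 doesn't expose a memory-junction sensor through NVML, which is exactly the blind spot described above: the auto-fan curve only ever sees the core number.

Code:
# Minimal sketch, assuming the pynvml package is installed (pip install pynvml).
# Logs GPU core temperature and fan speed once a minute while the card works.
# There is no GDDR5/IMC temperature to read on a GTX 1070 -- the fan curve
# is driven by the core temperature shown here.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        core_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        fan_pct = pynvml.nvmlDeviceGetFanSpeed(handle)
        print(f"core: {core_c} C   fan: {fan_pct} %   (memory temp: not exposed)")
        time.sleep(60)
finally:
    pynvml.nvmlShutdown()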
 
The memory was at 9.1 GHz. The fan was at 80% and the card never went over 70°C. But let's pretend I fried the video card's memory. Why does it work on 2 different PCs, even when overclocked?
 
The memory was at 9.1 GHz. The fan was at 80% and the card never went over 70°C. But let's pretend I fried the video card's memory. Why does it work on 2 different PCs, even when overclocked?

I don't think you quite understand how GDDR5 works.

It doesn't take many bad/marginal bits for a game to go crazy. Games occupy more of the VRAM than a random desktop application. Games heat up the GDDR5/IMC more than a random desktop application.

The fact that you see this problem "start" when you switch to a higher resolution in your game, and when you increase the relative load on your GPU, and that it doesn't seem to go away after you close the game, maps exactly to how this phenomenon works.

Your game used up a bunch of VRAM, so the applications you open while the game is still running are now using a different portion of VRAM than they would be using had you never opened the game.
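If you want to see whether the upper part of VRAM actually holds data correctly, here's a minimal fill-and-verify sketch (mine, assuming PyTorch with CUDA is available; any GPU array library would do). It fills most of the free VRAM with a known pattern and counts bytes that come back wrong, the point being that a light desktop session never reaches those cells, while a 4K game does.

Code:
# Minimal sketch, assuming PyTorch with CUDA support is installed.
# Fills ~90% of free VRAM with a known byte pattern, copies it back,
# and counts mismatches. Light desktop use never touches this much VRAM,
# which is why marginal cells can hide until a game (or 4K) fills it.
import torch

free_bytes, _total = torch.cuda.mem_get_info()
n = int(free_bytes * 0.9)                 # leave some headroom for the driver

buf = torch.full((n,), 0xA5, dtype=torch.uint8, device="cuda")
torch.cuda.synchronize()

readback = buf.cpu()                      # copy the whole buffer back to system RAM
bad = int((readback != 0xA5).sum())
print(f"tested {n / 2**20:.0f} MiB, mismatched bytes: {bad}")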

Your overclocking of the VRAM would obviously have exacerbated the problem, due to a little thing called electromigration:

https://en.wikipedia.org/wiki/Electromigration

Thermal effects
In an ideal conductor, where atoms are arranged in a perfect lattice structure, the electrons moving through it would experience no collisions and electromigration would not occur. In real conductors, defects in the lattice structure and the random thermal vibration of the atoms about their positions causes electrons to collide with the atoms and scatter, which is the source of electrical resistance (at least in metals; see electrical conduction). Normally, the amount of momentum imparted by the relatively low-mass electrons is not enough to permanently displace the atoms. However, in high-power situations (such as with the increasing current draw and decreasing wire sizes in modern VLSI microprocessors), if many electrons bombard the atoms with enough force to become significant, this will accelerate the process of electromigration by causing the atoms of the conductor to vibrate further from their ideal lattice positions, increasing the amount of electron scattering. High current density increases the number of electrons scattering against the atoms of the conductor, and hence the speed at which those atoms are displaced.
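For reference (this isn't in the excerpt above, just the standard back-of-the-envelope model), electromigration lifetime is usually estimated with Black's equation, which makes the current-density and temperature dependence explicit:

$$\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{k_B T}\right)$$

where J is the current density, T the absolute temperature, E_a an activation energy, k_B Boltzmann's constant, and n is typically around 2. Higher current density and higher temperature both cut the expected lifetime sharply, which is the point being made about running the GDDR5/IMC flat-out and hot for months.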

You done fucked up your card. If the company lets you RMA it, don't repeat your mistakes.

Things like this are literally what the graphics card AIBs are complaining about when they talk about "miners abusing RMA".

Go to eBay and you will see tons of people selling cards "never used for mining" that are sold as-is due to having this exact problem.

Those people are exactly like you except they weren't able to RMA their card.
 
I don't think you quite understand how GDDR5 works.

It doesn't take many bad/marginal bits for a game to go crazy. Games occupy more of the VRAM than a random desktop application. Games heat up the GDDR5/IMC more than a random desktop application.

The fact that you see this problem "start" when you switch to a higher resolution in your game, and when you increase the relative load on your GPU, and that it doesn't seem to go away after you close the game, maps exactly to how this phenomenon works.

Your game used up a bunch of VRAM, so the applications you open while the game is still running are now using a different portion of VRAM than they would be using had you never opened the game.

Your overclocking of the VRAM would obviously have exacerbated the problem, due to a little thing called electromigration:

https://en.wikipedia.org/wiki/Electromigration

Thermal effects
In an ideal conductor, where atoms are arranged in a perfect lattice structure, the electrons moving through it would experience no collisions and electromigration would not occur. In real conductors, defects in the lattice structure and the random thermal vibration of the atoms about their positions causes electrons to collide with the atoms and scatter, which is the source of electrical resistance (at least in metals; see electrical conduction). Normally, the amount of momentum imparted by the relatively low-mass electrons is not enough to permanently displace the atoms. However, in high-power situations (such as with the increasing current draw and decreasing wire sizes in modern VLSI microprocessors), if many electrons bombard the atoms with enough force to become significant, this will accelerate the process of electromigration by causing the atoms of the conductor to vibrate further from their ideal lattice positions, increasing the amount of electron scattering. High current density increases the number of electrons scattering against the atoms of the conductor, and hence the speed at which those atoms are displaced.


Thanks for the lecture. Still, why does it work on 2 different PCs? I have OC'd cards since my ATI 3D Rage Pro. This would be my first-ever RMA on a video card. All my cards have lasted for years and none of them has died on me. (OK, one did, but I baked it back to life.)

BTW, I'm using it right now on the PC at work. I have to push the memory upwards of 9.5 GHz to start seeing pixel corruption, and it won't replicate what my home PC shows at stock.
 
The information is all there; re-read it if you are not getting certain portions of the explanation.
 
All the mumbo jumbo you posted doesn't explain why it works on the other 2 PCs. The card even mines at the same settings it had before. The miner would crash within seconds if the memory was bad.
 
All the mumbo jumbo you posted doesn't explain why it works on the other 2 PCs. The card even mines at the same settings it had before. The miner would crash within seconds if the memory was bad.

This isn't an argument; every post I have made here is pure spoon-feeding.

The very fact that you are saying what I am spoon-feeding you is "mumbo jumbo" means you have no clue what you are talking about.

RMA your card. If you cannot, don't bother trying to scam someone with your card on eBay.

This is my last response.
 
This is my last response.

You should have started with this.

RMA is easy, but for me it's the last resort, simply because I don't think it's the card. If it had failed on either of the other 2 PCs I've tested it in, then I'd RMA. I'll try it with another Ivy Bridge mobo and then with an Intel Core i7 920/mobo. If it fails, then RMA; if not, I might keep it with the i7 920.
 
You should have started with this.

RMA is easy, but for me it's the last resort, simply because I don't think it's the card. If it had failed on either of the other 2 PCs I've tested it in, then I'd RMA. I'll try it with another Ivy Bridge mobo and then with an Intel Core i7 920/mobo. If it fails, then RMA; if not, I might keep it with the i7 920.
Actually, it explains it quite well. I'm not going to say it is definitely the issue, but it explains your symptoms perfectly.

The GPU itself is fine. 3DMark is going to load the GPU more than the graphics card's RAM, so when you're running 3DMark, it will pass. Once you start loading things into memory and it hits the bad memory, that's when you're seeing the artifacting. When you're on the desktop below 4K, it is the same thing: you have fewer pixels to load into memory.
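Just for a rough sense of scale (my numbers, assuming 32-bit color and a single buffer; real usage is far higher once you add double buffering, render targets and game assets), the raw framebuffer roughly quadruples going from 1080p to 4K:

Code:
# Rough framebuffer sizes, assuming 32-bit color and a single buffer.
for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    mib = w * h * 4 / 2**20
    print(f"{name}: {w}x{h} -> {mib:.1f} MiB per buffer")
# -> roughly 7.9, 14.1 and 31.6 MiB respectively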

While you're mining, you likely get a lower hash rate because you are performing calculations with corrupted data, but because there's no image for you to see with your eyes, the card seems "fine."

Like I said in the beginning, I'm not saying this is 100%, absolutely the issue, but Communism's explanation describes your issue well.
 
OK, things just got weirder.

Just testing with a DVI-to-HDMI cable, and it works flawlessly at 4K. (Is that even possible? I thought DVI topped out at 1080p.)

Gotta buy a new HDMI cable.
 
OK, things just got weirder.

Just testing with a DVI-to-HDMI cable, and it works flawlessly at 4K. (Is that even possible? I thought DVI topped out at 1080p.)

Gotta buy a new HDMI cable.
It's not possible to output a native 4K signal over DVI. Single-link tops out at 1200p, while dual-link will get you to 1600p.
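The rough math behind those limits (my numbers, using the 165 MHz per-link TMDS clock DVI allows and ignoring blanking overhead):

Code:
# Rough pixel-rate math behind the DVI limits above.
# Single-link DVI allows a 165 MHz TMDS clock, dual-link 2 x 165 MHz;
# the real pixel clock also has to cover blanking, so the budget is tighter.
modes = [("1920x1200@60 (single-link ceiling)", 1920, 1200, 60),
         ("2560x1600@60 (dual-link ceiling)",   2560, 1600, 60),
         ("3840x2160@60 (4K)",                  3840, 2160, 60)]
for name, w, h, hz in modes:
    print(f"{name}: {w * h * hz / 1e6:.0f} Mpixels/s active")
# -> roughly 138, 246 and 498 Mpixels/s: 4K60 blows well past even
#    dual-link DVI's ~330 MHz budget, which is why it wants HDMI 2.0 or DP.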
 
Finally, I found a 3rd HDMI cable and now it works.

I knew it was not the card.

Odd: I tried the other 2 HDMI cables that failed with the GTX 1070, and both work with my Shield console and Xbox 360 on my kids' TV.

Anyway, I'm happy. No more testing.

You see, RMA is not the only choice. I just saved PNY from replacing a FULLY WORKING GTX 1070 just because some guy thought mining is a NO-NO.
 
It's not possible to output a native 4K signal over DVI. Single-link tops out at 1200p, while dual-link will get you to 1600p.

I don't know what to tell you; the TV and the NVIDIA CP say it's 4K. Maybe it was 30 Hz, it didn't feel right.
 
Actually, it explains it quite well. I'm not going to say it is definitely the issue, but it explains your symptoms perfectly.

The GPU itself is fine. 3DMark is going to load the GPU more than the graphics card's RAM, so when you're running 3DMark, it will pass. Once you start loading things into memory and it hits the bad memory, that's when you're seeing the artifacting. When you're on the desktop below 4K, it is the same thing: you have fewer pixels to load into memory.

While you're mining, you likely get a lower hash rate because you are performing calculations with corrupted data, but because there's no image for you to see with your eyes, the card seems "fine."

Like I said in the beginning, I'm not saying this is 100%, absolutely the issue, but Communism's explanation describes your issue well.

Reading is fundamental. Especially this line:

Then I had the brilliant idea to try desktop resolutions between 1080p and 4K to see if only 4K was affected. So I started with 4K: pixel corruption. 1600p: pixel corruption. 1440p: pixel corruption. And finally 1080p: PIXEL CORRUPTION.

The pixel corruption was ON THE DESKTOP, not just when gaming, so all the mumbo jumbo Communism posted would only be true if it failed on ALL PCs, not just one. That's how I knew it was not the card.
 
So I had to go back to the DVI-to-HDMI cable, as the HDMI cable is too short.

Lo and behold, it is indeed 4K/60.

[Photos of the TV and the NVIDIA Control Panel showing the output at 4K/60]


Sorry, my cell camera is the worst.
 