Does mining really lower the lifespan of GPU cards?

Any use of a GPU will affect its lifespan. What you meant to ask is: does mining have more of an effect than "normal" use? The fact of the matter is you cannot tell unless you have lots of data about how the card was used during its mining period. It could be no worse than gaming every day at 1080p, or it could be vastly worse. It also depends on the quality of the card itself and the power that was supplied to it. In general, the hours put on the card are going to be much higher than in your typical usage scenario, so for the sake of argument, mining typically "uses up" a card faster than a non-mining usage scenario.
 
Does it? At what rate? 1 day more? 2? Or does OC and running it hot gaming make it worse?
 

There is no definitive answer, even if you had all the usage data, since the lifespan itself is an unknown variable to begin with. But it stands to reason that increased usage of any kind is going to push the card toward failure, simply because the components have a limited lifespan that is impacted by usage. Most often, if no defects are present and no negative events like surges or static discharges occur, the capacitors will eventually cease to suppress ripple, causing a partial failure from uncorrectable errors. Capacitors are the limiting factor in most electronics when it comes to lifespan. They even fail if you leave them on the shelf; non-electrolytic capacitors last a lot longer, but still eventually fail.
 
That is the question. Which does the most/least damage, and by how much: running at 100% and close to 80C+ for 4 hours a day, overclocking and overvolting at 80C+ for 4 hours a day, or mining at 55% power below 50C 24/7?
 

I have read that a capacitor's lifespan doubles for every 10C drop in temperature, so if you are gaming with an OC and hitting 70C+ for 8 hours, it would be similar to 24 hours at 60C. There are similar numbers you can get when looking at voltage.

http://jianghai-america.com/uploads/technology/JIANGHAI_Elcap_Lifetime_-_Estimation_AAL.pdf
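That 10C doubling rule is easy to turn into a back-of-envelope estimate. A minimal sketch, assuming a generic electrolytic rating of 2000 h @ 105 C (a common spec, not a figure for any particular card):

```python
# Rough capacitor lifetime estimate using the "10 degree rule":
# lifetime roughly doubles for every 10 C below the rated temperature.
# The rating and operating temps below are illustrative, not real specs.

def cap_lifetime_hours(rated_hours, rated_temp_c, actual_temp_c):
    """Estimated lifetime at actual_temp_c, given a rating like
    2000 h @ 105 C. Doubles for each 10 C below the rating."""
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A 2000 h @ 105 C cap running at 65 C: 2000 * 2^4 = 32000 h (~3.7 years 24/7)
print(cap_lifetime_hours(2000, 105, 65))   # 32000.0
# The same cap held at 55 C doubles that again:
print(cap_lifetime_hours(2000, 105, 55))   # 64000.0
```

This also shows why the 8 h @ 70C vs 24 h @ 60C comparison above is "similar": the cooler card lasts twice as long per hour but accumulates three times the hours.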
 
The effects of electromigration from high-density current and cycling temperature lead to more rapid failure compared to a monolithic (steady) loading condition. Not sure why there are 3 pages on this issue. Thermal cycling is what you should be concerned with, and even then, cards are designed to last at least as long as the warranty, even with heavy temperature cycling.
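The thermal-cycling fatigue on solder joints is often approximated with a Coffin-Manson relation: cycles to failure scale with the temperature swing raised to a negative exponent. A hedged sketch; the exponent and reference swing are generic textbook-style assumptions, not measurements for any GPU:

```python
# Coffin-Manson style comparison of thermal-cycling fatigue:
# N_f is proportional to (dT)^-m, with m ~ 2 as a generic value
# for solder joints. All numbers are illustrative assumptions.

def relative_cycles_to_failure(delta_t, ref_delta_t=60.0, m=2.0):
    """Cycles-to-failure relative to a reference swing of ref_delta_t C."""
    return (delta_t / ref_delta_t) ** -m

# A gaming session cycling ~30 C -> 80 C (dT = 50) vs a mining rig
# that stays hot and only drifts ~10 C with ambient:
print(relative_cycles_to_failure(50))  # ~1.44x the reference cycle life
print(relative_cycles_to_failure(10))  # ~36x the reference cycle life
```

Under this model each small swing does far less damage than a full heat-up/cool-down, which is why steady 24/7 load is not automatically worse than daily on/off cycling.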
 
There are 3 pages because nobody has provided facts about how much mining lowers a GPU's lifespan. Just guesses about what it might do.
 
Again, there are too many variables to give a conclusive response. The fact remains that any usage will reduce lifespan, so it stands to reason mining is worse than gaming; how much worse cannot be determined.
 
Out of curiosity, how many folks here have had a GPU actually die on them? I'm not talking about a manufacturer defect (where the card dies less than a year from purchase) or a simple fan failure. I mean a known-good GPU, in any use case, having an actual silicon failure that isn't the result of abuse. If this happened to you, how old was the card when it died?
 

I had a GeForce4 Ti 4400 die on me; well, technically it died on my friend, who I gave the card to after I had upgraded. I did start having issues with it while I had it, and it died completely a few months after I gave it away.

I also had a GeForce 7600 or 7600 GT (forget the actual model) blow a cap while it was sitting on a shelf. I heard the pop and had no idea what it was until many weeks, perhaps a couple of months later, when I went to use it in a spare-parts build I was throwing together.

A couple personal friends of mine each had a Radeon 9800 Pro die on them.
 
I had a 5770 go out on me... it started artifacting, and then a few months later, kaput. I re-pasted the heatsink, which bought it those few extra months, but it was not enough. That card was maybe 4 years old?
 
A long time ago I had the original GeForce 256 DDR, a refurb. It started producing visual artifacts after 1-2 years of light use.
 
Other than extra wear on the fans from running at higher speeds all the time, no different than any other used card. They all throttle if they hit a certain temp to stay safe, so it's not like they've overheated; sure, they run hot 24/7, but still within spec.

Took this off the MSI site for the 1080 Ti:
  • 10 years long lifetime under full load.
So let's say it does shorten the lifespan by a huge amount, like 50% huge... in 5 years no one will care about a 1080 Ti anyway.
 
Depends on who the miner was. For the most part, the GPUs will be set up and run as-is for long periods of time without breaks or any extra cooling whatsoever.

I'd think breaks in cooling would be harder on things than constant heat. Heating and cooling mean expansion and contraction, and that is probably harder on the components than just staying hot, but within spec.
 
Other than extra wear on the fans from running at higher speeds all the time, no different than any other used card. They all throttle if they hit a certain temp to stay safe, so it's not like they've overheated; sure, they run hot 24/7, but still within spec.

Took this off the MSI site for the 1080 Ti:
  • 10 years long lifetime under full load.
So let's say it does shorten the lifespan by a huge amount, like 50% huge... in 5 years no one will care about a 1080 Ti anyway.

Temps and throttling only apply to the GPU core on most cards, and modern cards don't run the fans until the GPU hits a certain temp. Usually that temp is near throttle temps, and the VRM suffers for it. It's why I took the shroud off my 1060 and put a double 120mm fan bracket below it, running at 100% at all times.
 
I'd think breaks in cooling would be harder on things than constant heat. Heating and cooling mean expansion and contraction, and that is probably harder on the components than just staying hot, but within spec.

That's true for the silicon, but not for the VRM components, especially the capacitors.
 

Eh, the aluminum solid caps are rated for 10 years @ 100% load. I don't believe anyone uses the old electrolytic-style caps anymore, which, yes, don't take well to heat or time... they were the downfall of Abit.

https://www.gigabyte.com/webpage/8/article_02_all_solid.htm


From MSI - All Solid Capacitors

  • 10 years long lifetime under full load.

The solder joints are what's going to suffer from the expansion and contraction. As for the fans not kicking in till a certain temp... we're talking about mining here; if it's mining, the GPU will no doubt be up to temp and the fans running almost full tilt, hence the additional wear on the fans.
 
What matters is the card temp itself... a GPU can do 122-160 degrees F (50-71C) without issue. I cringe when I think about the working temp of the old AMD 290s, around 90-95C.

90-95C is hot, no doubt about that, but as long as the VRMs were not "cooking" even higher (they likely were), at least the AMD cards spec higher-temp-rated VRM/main caps at 115-125C, vs NV generally using 85-105C instead.

Put another way, IMO I'd rather have something rated for a temp it is likely to never come close to vs one that often comes within spitting distance of its "good enough" spec (not counting the ones that run crazy hot out of the gate, such as the 290s were). But even then, if the VRM is actively cooled and maxing out at this temp, it is hot but well within spec limits ^.^

As for mining killing cards, well, in my experience any "engine," electrical or combustion based, prefers to be in the ~25-75% actual load range. Too low and it is not as efficient as it can be; too much load, on the other hand, wears it out quicker (higher temps for longer periods shorten the lifespan of capacitors, after all). Keep it warmed up and the fluids keep doing what they should, so to speak. If the cooling (of the design in question) was always within temp spec limits, then it would not be an out-of-the-norm situation, vs running outside or in a hot-box type condition (which many do, of course).

++++++++++++++++++++++
_________________

Going on this train of thought, I kind of wish AIBs (ASUS etc.) did not do the zero-RPM silent fan designs, because it is VERY hard on the bearings to go from no/small % on a fan, ramp up a great deal, then drop again a few seconds later. I've had quite a few GPU fans burn out way quicker than they should have because of this... it'd be better if they always started at, say, 30%, where there is next to zero noticeable noise (but still cooling), vs no noise at all and cooking things before the fans jet-engine alive.

++++++++++++++++++++++
_______________

Of course it depends on the miner, how they used it, how hard it was run, and in what conditions. Basically, unless you know the individual, you really are taking a chance. If anything might be said: the card functions, which is a good sign; likely they got rid of the lemons and kept the better ones, so there is that. But as others have said, beyond the aging bit, the fans themselves tend to wear out much sooner than the card itself (not counting some older-gen NV specifically that had major solder issues).
 
Out of curiosity, how many folks here have had a GPU actually die on them? I'm not talking about a manufacturer defect (where the card dies less than a year from purchase) or a simple fan failure. I mean a known-good GPU, in any use case, having an actual silicon failure that isn't the result of abuse. If this happened to you, how old was the card when it died?

I had 3 EVGA GTX 570's die on me.
The first one lasted about 2 years; I RMA'ed it, and the replacement lasted about 2 more years; I RMA'ed that one, and its replacement lasted about 3 years.
They sent me back a GTX 960 the last time. One could argue that the 570's track record for longevity wasn't exactly stellar. Thank the gods for a lifetime warranty!

I have come across plenty of dead video cards in first-time clients' machines though, but they mostly happen to be low-end budget cards such as passive 7300 LEs, 9500 GTs, etc.
 
The only dead cards I have come across in 20 years in a PC shop and then as a sysadmin were pretty much always either DOA or had bad memory out of the box (graphical corruption). Beyond the new-out-of-the-box issues, it was failed fans that had been dead for some time, generally low-end crap with tiny 40mm fans and a tiny heatsink: a bit of dust gets in the tiny fan, it doesn't have the torque to spin up, and then it bakes the plastic to where the fan hardly moves even using your hand.


Wonder how many guys on here remember these days, lol:
https://images-na.ssl-images-amazon.com/images/I/41PVq5nIRtL.jpg
https://images.anandtech.com/old/video/tnt2comp799/cl/board.jpg
 
Temps and throttling only apply to the GPU core on most cards, and modern cards don't run the fans until the GPU hits a certain temp. Usually that temp is near throttle temps, and the VRM suffers for it. It's why I took the shroud off my 1060 and put a double 120mm fan bracket below it, running at 100% at all times.

I'm confused, why wouldn't you just make a custom fan curve where it never goes to 0%? Did your card come with a really shitty HSF?
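For context, a custom fan curve is just a piecewise mapping from GPU temperature to fan duty with a nonzero floor; tools like MSI Afterburner let you draw one. A minimal sketch of the idea (the temperature/duty points are made-up examples, not a recommended profile):

```python
# A fan curve is a temp -> duty% mapping with a nonzero minimum duty,
# so the fans never fully stop. The points below are illustrative.

CURVE = [(0, 30), (50, 30), (70, 60), (85, 100)]  # (temp C, duty %)

def fan_duty(temp_c):
    """Linear interpolation over the curve, clamped at both ends."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]

print(fan_duty(40))   # 30.0 -- idle: fans still spin, quietly
print(fan_duty(60))   # 45.0 -- halfway between the 30% and 60% points
print(fan_duty(90))   # 100  -- clamped at max
```

The 30% floor is exactly the "never goes to 0%" idea being asked about: the bearings never stop/start, and the VRM always sees some airflow.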
 
Not to go too far off topic, but I'm getting sick of running tons of junk software just to make the fans spin on my equipment. If I pay $400-something for a 1600-watt power supply, or twice that for a high-end GPU, I'm probably not putting it in a dead-silent build, and I probably don't want components running 25C higher for zero gain. I'm ready for this fad to end, but it probably won't, because manufacturers don't benefit from long component life.
 

Covered above: junk software I don't want. It's an MSI Gaming X, an excellent sink indeed, and even better with two 120mm Scythe Kaze Flex fans pumping air constantly over it.
 
Eh, the aluminum solid caps are rated for 10 years @ 100% load. I don't believe anyone uses the old electrolytic-style caps anymore, which, yes, don't take well to heat or time... they were the downfall of Abit.

https://www.gigabyte.com/webpage/8/article_02_all_solid.htm


From MSI - All Solid Capacitors

  • 10 years long lifetime under full load.

The solder joints are what's going to suffer from the expansion and contraction. As for the fans not kicking in till a certain temp... we're talking about mining here; if it's mining, the GPU will no doubt be up to temp and the fans running almost full tilt, hence the additional wear on the fans.

Even solid caps suffer from higher temps and dirty power, just not nearly as badly as electrolytics.
 
Out of curiosity, how many folks here have had a GPU actually die on them? I'm not talking about a manufacturer defect (where the card dies less than a year from purchase) or a simple fan failure. I mean a known-good GPU, in any use case, having an actual silicon failure that isn't the result of abuse. If this happened to you, how old was the card when it died?
My 280X was a "lightly" used miner card. It popped 1 month and 3 days after the 3-year warranty expired. I had it for less than two years.
 
Personally, I think it has to do with the miner's personal care towards the GPU: the cooling system and maintenance.
 
Covered above: junk software I don't want. It's an MSI Gaming X, an excellent sink indeed, and even better with two 120mm Scythe Kaze Flex fans pumping air constantly over it.

MSI's Afterburner is the best of the bunch, IMO. I've done the same thing for a 980, but mainly because it had 3 small fans that could get fairly loud. The MSI Gaming 10-series cards are great though; even the default fan profile does a good job, I've found.
 
Unfortunately, you have a lot of self-shills obfuscating and deflecting in the mining argument, trying to prop up the resale market for their cards since they want to dump them for as much as possible down the line -- which is not in itself wrong, but undeniably comes at the expense of non-mining gamers.

Kinda reminds me of some other current debates with vested interests at play. I feel like this place is becoming more politicized every day, as if the brand wars weren't bad enough.
 

So... you complain about "politicized" debates,

but... you open your post by calling those you disagree with "self-shills" who are essentially lying about their cards.

 
IMO, running anything 24/7 will obviously wear it out faster than if it's run only a few hours a day.

Also IMO, a better question would be: what is the average power-on-hours lifespan of a GPU before failure? Then you could make an educated guess on how much running a GPU 24/7 affects its remaining lifespan.

But unlike with SSDs, I don't believe I've ever seen such a test done.
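If such a power-on-hours number existed, a first-pass way to use it would be a constant-failure-rate (exponential) model: compare how fast each usage pattern burns through the mean lifetime. A sketch under that strong assumption; the 50,000-hour MTBF is purely made up for illustration:

```python
# Back-of-envelope comparison: assuming a constant failure rate
# (exponential lifetime model), estimate survival odds for 24/7
# mining vs ~4 h/day gaming. The MTBF figure is made up.
import math

MTBF_HOURS = 50_000.0  # hypothetical mean power-on hours before failure

def survival_probability(hours_per_day, years):
    """P(card still alive) after running hours_per_day for `years` years."""
    total_hours = hours_per_day * 365 * years
    return math.exp(-total_hours / MTBF_HOURS)

print(f"{survival_probability(24, 3):.2f}")  # mining 24/7 for 3 years
print(f"{survival_probability(4, 3):.2f}")   # gaming 4 h/day for 3 years
```

The model deliberately ignores temperature, power level, and thermal cycling, which is exactly why, as the thread keeps saying, real data would be needed to do better.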
 
Yeah, but there are other variables like power level, heat, etc. Some people run their cards at 70C+ or 80C+, while some are well below that.
 
I think we can all agree on that.
It's amazing nobody has ever even attempted to answer this question about average GPU lifespan, that I know of (regardless of mining or not).

Edit: I mean, we see tests of SSDs running all-out 24/7 to determine average lifespan, yet I've never seen a similar test of GPUs.
I guess there are just too many variables, and all we have is everyone's "guess" with no real data to back any theory up.

I'd be interested to know how many people who would buy a used mining GPU would also buy a used mining PSU... hmmm...
 
In this thread we find the non-miners trying to argue with the miners about failure.

It is entirely possible that the miners have a larger sample size to draw from than the non-miners. Since, you know, they have a shit ton of video cards.

But gamers, go ahead and tell us how unreliable our cards are.
 
I will! My gaming card went out after 4 months; the pump gave out. EVGA's RMA rocked, and I had a new one within a few weeks. It was just one of those things.
I think my mining cards will last a good while since they are running at like 40C or less 24/7.
 
Hey, you know, the last time I killed a card during operation, it was from overclocking and benchmarking. Pushed it too hard. Magic smoke came out.

I did also kill a mining card. Turned the rig off, moved cards around for my nice new set of 1070's, and managed to kill a 1070 during the shuffle. It never worked again after the move.
 
In this thread we find the non-miners trying to argue with the miners about failure.

It is entirely possible that the miners have a larger sample size to draw from than the non-miners. Since, you know, they have a shit ton of video cards.

But gamers, go ahead and tell us how unreliable our cards are.
That's a very good point too.
Actually, it could be argued both ways about mining GPUs.
If a GPU has a defect, it will show up and fail when running 24/7 mining, so you'd know it's a "bad one."
But if a GPU can survive mining 24/7 for months, then it could be argued it's a "good one."

So we can logically assume that any GPU that has survived mining 24/7 is a "good one" and free of defects.

So the next logical question would be: what is the average power-on/full-load lifespan of a GPU, to determine how much more (if any) mining wears a GPU out vs gaming a few hours a day?
Until that question can be answered with actual repeatable data... we don't know.

My personal opinion?
Would I buy a used mining GPU? ...Well, that all depends on how much you want for it vs a new one! (Same as buying any used parts.) ;)
 