Is the mining thing slowing down?

razor1 reached out to me. He said he has over 440 GPUs mining, and only one has failed so far.

Now consider if 440+ unique gamers owned these GPUs instead. I'd expect a much higher failure rate because of incompetent cooling arrangements, incompetent installs, unreliable PSUs, and lack of care (dust, handling, etc.).
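
A rough back-of-the-envelope comparison (the run time and the consumer failure rate here are my assumptions for illustration, not razor1's numbers):

Code:
# Rough annualized failure-rate comparison. Everything except the
# 1-in-440 figure is an assumption for illustration.
mining_cards = 440          # cards in razor1's operation
mining_failures = 1         # failures reported so far
months_running = 12         # assumed run time; not stated in the thread

# Observed failure rate, annualized under the assumed run time
mining_afr = (mining_failures / mining_cards) * (12 / months_running)

# Hypothetical annualized failure rate for consumer-owned cards
# (poor cooling, dust, flaky PSUs) -- purely an assumed figure
consumer_afr = 0.03

print(f"mining AFR:   {mining_afr:.2%}")    # ~0.23% with these assumptions
print(f"consumer AFR: {consumer_afr:.2%}")  # 3.00% (assumed)

Even a sample of 440 cards over one assumed year only pins the rate down so far, so treat this as a sanity check, not proof either way.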
 
1. Yes, wear and tear: things can only operate for a finite number of hours before they fail. Electronics are rated in hours of use, and the more hours on them, the more likely a critical failure becomes.

2. Mining on a card is more extreme due to the hours of use. Most gaming cards are run at stock by gamers; only some enthusiasts push their cards to extremes. Also, unless you're cooling to sub-ambient, you're not doing an extreme hot/cold cycle. And your 60C is the processor, not everything else on the board, which will not be cooled properly without the needed airflow.

3. I've been to a mining farm with rack after rack; trust me, they didn't care as long as the card was stable and at max production, and most were running 80C or so. We're talking much larger operations than a guy running a single card or two. They simply don't have the time to care as much as a guy running a rig or two.

4. Running the card 24 hours a day shortens its lifespan; I don't see how that is hard to understand. A card has only so many hours it can run before the manufacturer figures it will fail. Based on the fact that there are far more threads about dead video cards than dead CPUs, a video card is far more likely to fail. Take a car engine and run it at freeway speeds just cruising and it will last a long time, because it's not being pushed; keep that same engine running hard and it won't go the same distance, due to the increased wear from the higher demand placed on it. (A rough sketch of the hours math follows this post.)

5. It's extremely volatile, and most people pay more for electricity in the summer months than in the winter months. I expect it to decline, but who knows.
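
On points 1 and 4, here is a minimal sketch of the "rated hours" argument, assuming a constant-failure-rate (exponential) model; the MTBF figure is a placeholder, since nobody in this thread actually knows the real rating for any given card:

Code:
import math

# "Rated hours" argument under a constant-failure-rate (exponential) model.
# The MTBF value is an assumed placeholder, not vendor data.
MTBF_HOURS = 100_000

def failure_probability(hours_of_use: float) -> float:
    """Probability of at least one failure within hours_of_use."""
    return 1.0 - math.exp(-hours_of_use / MTBF_HOURS)

gaming_hours_per_year = 3 * 365    # ~3 h/day of gaming (assumed)
mining_hours_per_year = 24 * 365   # 24/7 mining

print(f"gaming, 1 yr: {failure_probability(gaming_hours_per_year):.1%}")  # ~1.1%
print(f"mining, 1 yr: {failure_probability(mining_hours_per_year):.1%}")  # ~8.4%

The only solid part of the claim is that 24/7 use racks up hours roughly eight times faster than a few hours of gaming a day; how much that actually matters depends entirely on the real MTBF.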

I'm familiar with the concept of MTBF, but I look at it this way: in 5 years it won't matter anyway, because the card will be obsolete. I disagree with your "extreme due to hours of use" argument. Lots of people have bought datacenter CPUs that run 24/7 for years and don't have problems. OK, I'll grant you that the use of the word "extreme" is suspect since I didn't state "sub-ambient," but the point remains: 80C to 20C to 80C to 20C, over and over again, is different from a constant 60C. Mining farm? I've never been to one, but the average household miner with, say, 10 cards can't really dissipate the heat generated by running every card at 80C.

Running anything 24 hours a day will shorten its lifespan. I get it. But if instead of running for 100 years it runs for 10 years, who cares? Plus, I would argue that the threads on "dead video cards" are generally due to heat fluctuation cracking the solder joints (hence baking them will bring them back to life), not mining failures.

I think your car analogy is actually MY argument instead of yours :p. Mining is exactly like running at freeway speed.
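
On the 80C-to-20C cycling point above: the usual way to reason about solder-joint fatigue is a Coffin-Manson style relation, where cycles to failure scale roughly with (delta T)^-q. The exponent and the reference swing below are generic assumptions, not measured values for any particular card:

Code:
# Coffin-Manson style comparison of thermal-cycling fatigue.
# Cycles to failure ~ C * (delta_T)**-q for solder joints.
# The exponent q is a generic assumption (commonly quoted around 2).
Q = 2.0

def relative_cycles_to_failure(delta_t_kelvin: float, reference_delta_t: float = 60.0) -> float:
    """Cycles to failure relative to a 60 K swing (roughly 20C <-> 80C)."""
    return (reference_delta_t / delta_t_kelvin) ** Q

print(relative_cycles_to_failure(60.0))  # 1.0  -- big-swing baseline (gaming on/off)
print(relative_cycles_to_failure(10.0))  # 36.0 -- a small ripple survives ~36x more cycles

This is the mechanical side of the argument (cracked solder joints, the "bake it back to life" trick); it says nothing about electromigration, which depends on current and temperature rather than on temperature swings.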
 
Are you guys done? WHO GIVES A FUCK? If you don't want to buy a used mining card, then don't! It's that fucking simple. My 11 cards have been going for 7 months without failure... OF COURSE THAT MEANS ONE THING: I AM RIGHT.
 
You're so logical. So if I can't give you the exact damage figure from electromigration, it isn't happening? Is that right?

If I can't tell you exactly how many times you'll be able to dart across a busy street before getting run over, does that also mean it'll never happen?

Electromigration isn't a theory. The increased rate of electromigration with increased current isn't a theory either. Do you know how to use Google?

Try again, this time think before posting.

You're right. Electromigration isn't a theory, but your argument that mining is causing fatal electromigration within the useful lifespan of the card is.
 
Datacenter CPUs aren't typically overclocked, and they aren't typically at 100% load 24/7.

Apples and oranges.
 
You're right. Electromigration isn't a theory, but your argument that mining is causing fatal electromigration is.

My argument is that it increases the rate of electromigration. Which is a fact.

You aren't decreasing your VRAM voltage, and you are increasing clocks to a level that cannot handle a single hour of gaming. These are cards that are marketed as gaming cards, and when you run them so far out of spec that they can't even handle that for a single hour, then yes, I think you are certainly increasing the failure rate, or more accurately decreasing the lifespan of the card, specifically the VRAM in this example.
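
For what it's worth, the standard way the "more current means faster electromigration" claim gets quantified is Black's equation, MTTF = A * J^-n * exp(Ea / kT). The exponent and activation energy below are generic textbook-style placeholders, not figures for any specific GPU or memory IC:

Code:
import math

# Black's equation for electromigration: MTTF = A * J**-n * exp(Ea / (k*T)).
# n and Ea are generic placeholder values, not vendor data.
K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K
N = 2.0                     # assumed current-density exponent
EA = 0.7                    # assumed activation energy, eV

def relative_mttf(j_ratio: float, temp_c: float, ref_temp_c: float = 60.0) -> float:
    """MTTF relative to operation at the reference temperature and current density."""
    t, t_ref = temp_c + 273.15, ref_temp_c + 273.15
    return (1.0 / j_ratio) ** N * math.exp((EA / K_BOLTZMANN_EV) * (1.0 / t - 1.0 / t_ref))

print(relative_mttf(j_ratio=1.10, temp_c=60.0))  # ~0.83 -- 10% more current, ~17% shorter MTTF
print(relative_mttf(j_ratio=1.00, temp_c=80.0))  # ~0.25 -- same current at 80C instead of 60C

The scaling direction is physics; the absolute lifetime (the A term) is design-dependent, which is exactly where the disagreement in this thread lives.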
 
My argument is that it increases the rate of electromigration. Which is a fact.

You aren't decreasing your VRAM voltage, and you are increasing clocks to a level that cannot handle a single hour of gaming.

Maybe I missed something about bumping memory speeds to something ludicrous because I never mentioned it.

Let me give you an example: Sandy Bridge CPUs. They're 7 years old at this point, and many are running in the high 4.8+ GHz range with extra voltage. Only now are people starting to run into electromigration issues at the higher speeds, and by reducing to near-stock levels they have no issues. So instead of running for 30 years at stock speeds, a chip will last 15 after being highly overclocked for half of its life. Even with electromigration, the useful life of the card will be long past before you run into problems, generally speaking.
 
I haven't seen any appreciable difference in memory clocks when making voltage modifications to the GPU, and neither have you. So the better question is: why would you assume VRAM voltage is decreasing? What method are you using to decrease VRAM voltage? If the voltage is dynamic, again, what makes you think the voltage is decreasing while you're pushing +800MHz? You're literally making things up as you go at this point.

How am I making things up? Please tell me specifically what I've made up so far. Here's my reasoning: I'm lowering the voltage so it's ~0.800V. If I don't use any power limiting, the card goes to 1.00V+. So if I have a memory OC on, what makes you think the card is dynamically applying more voltage than the limit I set? Are you saying the NVIDIA engineers would allow the card to apply unsafe voltages to the memory?
 
Maybe I missed something about bumping memory speeds to something ludicrous because I never mentioned it.

Let me give you an example: Sandy Bridge CPUs. They're 7 years old at this point, and many are running in the high 4.8+ GHz range with extra voltage. Only now are people starting to run into electromigration issues at the higher speeds, and by reducing to near-stock levels they have no issues. Even with electromigration, the useful life of the card will be long past before you run into problems, generally speaking.


This debate started when I mentioned that miners who try to sell off their mining cards always talk about how they babied the GPU by setting a lower TDP or voltage, but almost never mention how high they pushed the VRAM clocks. +500-800MHz isn't uncommon. With most cards at these speeds you can't even run a game, because the game will crash and/or artifact like crazy.

You can argue about how much of an effect electromigration has, and that's a fair debate to have. My issue really boils down to sales tactics and pretending to have full disclosure when you really don't. This doesn't apply to everyone; some people will tell you everything, but many won't.
 
How am I making things up? Please tell me specifically what I've made up so far. Here's my reasoning: I'm lowering the voltage so it's ~0.800V. If I don't use any power limiting, the card goes to 1.00V+. So if I have a memory OC on, what makes you think the card is dynamically applying more voltage than the limit I set? Are you saying the NVIDIA engineers would allow the card to apply unsafe voltages to the memory?

I never said anything about unsafe voltages. Seems you joined in pretty late if you're asking me this question, so instead of rehashing what has already been said, I'll just suggest you go back to the beginning.
 
I never said anything about unsafe voltages. Seems you joined in pretty late if you're asking me this question, so instead of rehashing what has already been said, I'll just suggest you go back to the beginning.

I'm just trying to understand your reasoning. You said I made stuff up, but I don't believe I've made anything up. Are you seriously saying that a high memory OC is going to damage the card?
 
I'm just trying to understand your reasoning. You said I made stuff up, but I don't believe I've made anything up. Are you seriously saying that a high memory OC is going to damage the card?

Again, you're asking questions that have already been addressed. But yes, a high OC damaging and/or reducing the lifespan of an IC certainly does happen.

Are you seriously saying that a high overclock has no effect?
 
This debate started when I mentioned that miners who try to sell off their mining cards always talk about how they babied the GPU by setting a lower TDP or voltage, but almost never mention how high they pushed the VRAM clocks. +500-800MHz isn't uncommon. With most cards at these speeds you can't even run a game, because the game will crash and/or artifact like crazy.

You can argue about how much of an effect electromigration has, and that's a fair debate to have. My issue really boils down to sales tactics and pretending to have full disclosure when you really don't. This doesn't apply to everyone; some people will tell you everything, but many won't.

Fair enough, I won't prolong the debate on my end because I'm a fan of full disclosure when buying and selling stuff.
 
Again, you're asking questions that have already been addressed. But yes, a high OC damaging and/or reducing the lifespan of an IC certainly does happen.

Are you seriously saying that a high overclock has no effect?

Yes, I am saying that, similar to what others have said above. If you have a CPU that you OC but run under-volted compared to what it would normally go to if, say, it were set to "auto" voltage, I can't see how that would have a material effect on the lifespan of the CPU.
 
What they don't tell you is how high they crank up the VRAM clocks. They pretend like they're doing you a favor by selling you a mining card that was under-volted, but they were also pushing memory speeds that would be nowhere near stable during gaming.


ughh... so?

This is basically how this started. I didn't say the card is doomed or on the verge of imminent failure. I said they don't disclose "the bad," i.e., how high they pushed the memory. They'll gladly tell you how kind they were to the GPU, though, and they'll gladly tell you it never went above 60C, because that's what buyers want to hear, and you can see the response. Archaea went on to say that basically if it's stable while mining there's no need to disclose the memory clocks because it's not hurting anything... which I personally find to be a very disingenuous sales tactic.
 
Yes, I am saying that, similar to what others have said above. If you have a CPU that you OC but run under-volted compared to what it would normally go to if, say, it were set to "auto" voltage, I can't see how that would have a material effect on the lifespan of the CPU.

And you'd be wrong. You aren't under-volting the memory. You can keep using CPU under-volting as an example; it won't change anything. It wasn't accurate the first time you used it, it's not accurate now, and it won't be accurate the next time you use it.

It's very simple: if you really believe you are doing no harm at all, don't stop at "run at 65% power limit, GPU kept at or below 60C." By all means tell us the good, but don't forget to add "VRAM set to +800MHz for 12 months." If you want to sugarcoat it by saying you had 100% stability while mining, go right ahead, but don't stop at how you babied the GPU when, at the same time, you pushed the VRAM as high as you could and backed down by 50MHz just to ensure mining stability, not longevity of the card.
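
As an aside, the tuning loop being described (push the VRAM until mining breaks, then back off 50MHz) looks roughly like this; is_stable_while_mining is a hypothetical stand-in for however the miner actually checks stability (crashes, rejected shares, hashrate drops):

Code:
# Sketch of the miner tuning loop described above.
# is_stable_while_mining() is hypothetical -- in practice it is
# "did the miner crash or reject shares over some test window".

def is_stable_while_mining(mem_offset_mhz: int) -> bool:
    raise NotImplementedError("stand-in for a real stability check")

def find_mining_mem_offset(step_mhz: int = 50, max_offset_mhz: int = 1000,
                           backoff_mhz: int = 50) -> int:
    """Raise the memory offset until mining becomes unstable, then back off."""
    offset = 0
    while offset + step_mhz <= max_offset_mhz and is_stable_while_mining(offset + step_mhz):
        offset += step_mhz
    return max(offset - backoff_mhz, 0)

The objection being made is that "stable" here means stable for the mining workload only; the same offset may well artifact or crash in a game, which is exactly the detail a buyer would want disclosed.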
 
And you'd be wrong. You aren't under-volting the memory. You can keep using CPU under-volting as an example; it won't change anything. It wasn't accurate the first time you used it, it's not accurate now, and it won't be accurate the next time you use it.

Why is it not accurate? Also, where do you draw the line on memory clocks that are "too high," and what do you base that number on?
 
This is basically how this started. I didn't say the card is doomed or on the verge of imminent failure. I said they don't disclose "the bad," i.e., how high they pushed the memory. They'll gladly tell you how kind they were to the GPU, though, and they'll gladly tell you it never went above 60C, because that's what buyers want to hear, and you can see the response. Archaea went on to say that basically if it's stable while mining there's no need to disclose the memory clocks because it's not hurting anything... which I personally find to be a very disingenuous sales tactic.

Way to both paint a one-sided argument and reinvent the argument in the middle of said argument. :rolleyes:

First off, you misquoted me:




I told you subsequently that I'd have no problem telling my buyer how I ran the cards, including my precise memory OC, and I almost always relayed the exact clock speeds, overclock, and power target settings that I used to my buyers!!! And I told you that I'd have no problem buying from someone who ran their cards as I do. I also mentioned that, just last week, I bought 9 used 1070 cards that had been used for mining, that they had no issues at all, and that I'm not afraid of whatever mining settings the previous owner applied during that time, because I know he was like me in that he had about 60 cards mining and had learned from experience what he was doing along the way. I participate in a text group of about 13 crypto miners here in Kansas City. We all run our mining cards similarly -- about a 550-600MHz memory overclock for NVIDIA and about a 65-75% power target. None of us has encountered anything remotely supporting the degradation concerns you are masquerading as fact.

I told you that in my 20 years of overclocking experience with CPUs and GPUs, your pushed hypothesis is nonsense as it relates to real-world use. So yeah, I'm still of the same opinion -- "ughhh so"
 

Attachment: derp.JPG
Your last several posts are all asking things we've already hit in this thread. I'll save my next reply to you for when you ask something new or make a valid point. You and I have personally already hit your very first question.
 
Way to both paint a one-sided argument and reinvent the argument in the middle of said argument. :rolleyes:

I told you I'd have no problem telling anyone who was my buyer how I ran the cards; I almost always relayed the exact clock speeds, overclock, and power target settings that I used to my buyers!!! And I told you that I'd have no problem buying from someone who ran their cards as I do. I also mentioned that, just last week, I bought 9 used 1070 cards that had been used for mining, that they had no issues at all, and that I'm not afraid of whatever the previous owner did to them during that time.

I told you that in my 20 years of overclocking experience with CPUs and GPUs, your pushed opinion is nonsense. So yeah, I'm still of the same opinion -- "ughhh so"

Reinvent? Those are quotes, and you just ended this post the same way you started the other one. That's not a reinvention, it's a reminder.
 
I'm familiar with the concept of MTBF, but I look at it this way: in 5 years it won't matter anyway, because the card will be obsolete. I disagree with your "extreme due to hours of use" argument. Lots of people have bought datacenter CPUs that run 24/7 for years and don't have problems. OK, I'll grant you that the use of the word "extreme" is suspect since I didn't state "sub-ambient," but the point remains: 80C to 20C to 80C to 20C, over and over again, is different from a constant 60C. Mining farm? I've never been to one, but the average household miner with, say, 10 cards can't really dissipate the heat generated by running every card at 80C.

Running anything 24 hours a day will shorten its lifespan. I get it. But if instead of running for 100 years it runs for 10 years, who cares? Plus, I would argue that the threads on "dead video cards" are generally due to heat fluctuation cracking the solder joints (hence baking them will bring them back to life), not mining failures.

I think your car analogy is actually MY argument instead of yours :p. Mining is exactly like running at freeway speed.

Mining is nothing like running at freeway speeds; you're putting the card under high demand, and most are overclocking the RAM. Processors are rated for 10 years, and pushing one out of spec will shorten that... by how much is anyone's guess. Datacenters are not running at their max constantly; most run at 50% to leave room for higher demand and growth. Also, the temperature of the card has nothing to do with the waste heat: it's still 200 watts of heat being dumped into the room. A blowtorch won't heat a room faster than a heater putting out more therms, despite the blowtorch being hotter. I don't want to drag this on, so suffice to say we just don't agree on the wear and tear and how long things last.
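
The waste-heat point is just arithmetic; the card count and per-card draw below are taken from the numbers thrown around in this thread (a 10-card household rig at roughly 200 W per card):

Code:
# Waste heat is set by power draw, not by die temperature.
# Card count and per-card draw are figures used earlier in the thread.
cards = 10
watts_per_card = 200   # board power, whether the die sits at 60C or 80C

total_watts = cards * watts_per_card       # 2000 W dumped into the room
btu_per_hour = total_watts * 3.412         # ~6824 BTU/h, space-heater territory
kwh_per_day = total_watts * 24 / 1000      # 48 kWh/day of electricity

print(total_watts, round(btu_per_hour), kwh_per_day)

Which is the blowtorch point: a hotter die does not mean more heat into the room; the room only ever sees the wattage.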
 
Way to both paint a one-sided argument and reinvent the argument in the middle of said argument. :rolleyes:

I told you I'd have no problem telling anyone who was my buyer how I ran the cards; I almost always relayed the exact clock speeds, overclock, and power target settings that I used to my buyers!!! And I told you that I'd have no problem buying from someone who ran their cards as I do. I also mentioned that, just last week, I bought 9 used 1070 cards that had been used for mining, that they had no issues at all, and that I'm not afraid of whatever the previous owner did to them during that time.

I told you that in my 20 years of overclocking experience with CPUs and GPUs, your pushed opinion is nonsense. So yeah, I'm still of the same opinion -- "ughhh so"

I think it's clear he just thinks there is some poor engineering on these cards that would allow the memory to be dangerously over-volted if you increase the memory clocks. I'd actually agree there is some cause for concern on older cards where people ran custom BIOSes that modified things beyond stock. I did that myself on my 770 (only used for gaming).
 
I think it's clear he just thinks there is some poor engineering on these cards that would allow the memory to be dangerously over-volted if you increase the memory clocks. I'd actually agree there is some cause for concern on older cards where people ran custom BIOSes that modified things beyond stock. I did that myself on my 770 (only used for gaming).

The only clear thing here is you're skipping posts left and right and reading what isn't there. I never said anything about over-volting. In fact, this is the 2nd time I've mentioned that I never said anything about over-volting to you specifically, but somehow you're still seeing it. What I said was you are not UNDER-volting yet you keep translating that to over-volting.

People like Archaea are trying to play both sides. "i'll gladly disclose everything about how I ran my cards, but if I don't, that's ok too"
 
You're so logical. So if I can't give you the exact damage figure from electromigration, it isn't happening? Is that right?

If I can't tell you exactly how many times you'll be able to dart across a busy street before getting run over, does that also mean it'll never happen?

Electromigration isn't a theory. The increased rate of electromigration with increased current isn't a theory either. Do you know how to use Google?

Try again, this time think before posting.

I'm no EE, but your argument pretty much dies within 10 seconds of googling.

Wiki said:
In modern consumer electronic devices, ICs rarely fail due to electromigration effects. This is because proper semiconductor design practices incorporate the effects of electromigration into the IC's layout.[5] Nearly all IC design houses use automated EDA tools to check and correct electromigration problems at the transistor layout-level. When operated within the manufacturer's specified temperature and voltage range, a properly designed IC device is more likely to fail from other (environmental) causes, such as cumulative damage from gamma-ray bombardment.

Nevertheless, there have been documented cases of product failures due to electromigration. In the late 1980s, one line of Western Digital's desktop drives suffered widespread, predictable failure 12–18 months after field usage. Using forensic analysis of the returned bad units, engineers identified improper design-rules in a third-party supplier's IC controller. By replacing the bad component with that of a different supplier, WD was able to correct the flaw, but not before significant damage to the company's reputation.

Electromigration due to poor fabrication processes was a significant cause of IC failures on Commodore's home computers during the 1980s. During 1983, the Commodore 64 computer for a time had a nearly 50% customer return rate.

Electromigration can be a cause of degradation in some power semiconductor devices such as low voltage power MOSFETs, in which the lateral current through the source contact metallisation (often aluminium) can reach the critical current densities during overload conditions. The degradation of the aluminium layer causes an increase in on-state resistance, and can eventually lead to complete failure.
 
The only clear thing here is you're skipping posts left and right and reading what isn't there. I never said anything about over-volting. In fact, this is the 2nd time I've mentioned that I never said anything about over-volting to you specifically, but somehow you're still seeing it. What I said was you are not UNDER-volting yet you keep translating that to over-volting.

People like Archaea are trying to play both sides. "i'll gladly disclose everything about how I ran my cards, but if I don't, that's ok too"

Nah, you're just being unreasonable because you don't want to lose an internet argument. That's fine; I see people dig in their heels all the time, and who really cares? It's obvious no one is going to change your mind. You keep getting caught up on semantics like over-volting vs. under-volting without acknowledging my point that the card decides how much voltage is applied; the only thing we know is that we are setting a lower overall voltage and assuming it's not applying anything higher than normal voltage to the memory. On top of that, where do you draw the line on an OC that's too high? 100, 200, 400, 600 MHz? As I've mentioned earlier, I've never heard anyone talking about those values being too high (unless you encounter instability).
 
Nah, you're just being unreasonable because you don't want to lose an internet argument. That's fine; I see people dig in their heels all the time, and who really cares? It's obvious no one is going to change your mind. You keep getting caught up on semantics like over-volting vs. under-volting without acknowledging my point that the card decides how much voltage is applied; the only thing we know is that we are setting a lower overall voltage and assuming it's not applying anything higher than normal voltage to the memory. On top of that, where do you draw the line on an OC that's too high? 100, 200, 400, 600 MHz? As I've mentioned earlier, I've never heard anyone talking about those values being too high (unless you encounter instability).

You're not reading. Still. If you had been, you'd see you're actually agreeing with me. You do encounter instability. Try to play a game at the same clocks miners regularly set their VRAM to and see what happens. If you had bothered reading the posts prior to your engagement in this thread, you'd have known this.
 
You're not reading. Still. If you had been, you'd see you're actually agreeing with me. You do encounter instability. Try to play a game at the same clocks miners regularly set their VRAM to and see what happens. If you had bothered reading the posts prior to your engagement in this thread, you'd have known this.

The "game" is mining and they're not encountering instability, if they were the mining app would crash and they would change settings like I and others have mentioned time and time again earlier. Still waiting for what memory OC you consider too high.
 
I'm no EE, but your argument pretty much dies within 10 seconds of googling.

Where in that article does it account for higher current pushed through the IC due to overclocking? It seems to be referring to the normal usage scenario, in which case I agree absolutely. Keep googling.
 
The "game" is mining and they're not encountering instability, if they were the mining app would crash and they would change settings like I and others have mentioned time and time again earlier. Still waiting for what memory OC you consider too high.

Well, we can narrow down a few things:

1) You disagree with the physics that overclocking causes more wear.

2) You think it's fine if the card can't do what it was designed to do (play a game) due to overclocking the VRAM, and you justify this by changing the definition of what is and isn't a game.

You just said it's fine as long as it's stable, and it's not stable. The card cannot reliably do what it was designed to do.

Do you normally write your own dictionary when you come out on the losing end of a debate?
 
You're not reading. Still. If you had been, you'd see you're actually agreeing with me. You do encounter instability. Try to play a game at the same clocks miners regularly set their VRAM to and see what happens. If you had bothered reading the posts prior to your engagement in this thread, you'd have known this.
Where in that article does it account for higher current pushed through the IC due to overclocking? It seems to be referring to the normal usage scenario, in which case I agree absolutely. Keep googling.

Umm, 99% of the time mining cards are underclocked, not overclocked. You have no idea what the ICs are rated for to begin with, so you're just taking a shot in the dark and claiming it as fact.
 
Umm, 99% of the time mining cards are underclocked, not overclocked. You have no idea what the ICs are rated for to begin with, so you're just taking a shot in the dark and claiming it as fact.

Not the memory, smart guy. You suffer from the same issue as styles: coming in at the end of a discussion thinking you've got it figured out. Like I said, keep googling.

It's very clear what they are NOT rated for when you launch a game and get a hard lock, driver crash, or artifacts. So yes, I do have an idea. Don't make the mistake of thinking I'm as clueless as you are.
 
Well, we can narrow down a few things:

1) You disagree with the physics that overclocking causes more wear.

2) You think it's fine if the card can't do what it was designed to do (play a game) due to overclocking the VRAM, and you justify this by changing the definition of what is and isn't a game.

You just said it's fine as long as it's stable, and it's not stable. The card cannot reliably do what it was designed to do.

Do you normally write your own dictionary when you come out on the losing end of a debate?

Yep, as expected, lol. Will not be swayed by any evidence to the contrary.
 
Pretty sure I am going to overclock my intoxication level over this thread.
 
Yep, as expected, lol. Will not be swayed by any evidence to the contrary.

As expected is right. You are, after all, the guy who reinvented the definition of gaming when your own metric of what is and isn't OK backfired on you.
 
Pretty sure I am going to overclock my intoxication level over this thread.

I miss these threads. The lack of competition from AMD in recent years has really dampened the flame war threads. Things are looking good once again on the CPU front though.
 
Not the memory, smart guy. You suffer from the same issue as styles: coming in at the end of a discussion thinking you've got it figured out. Like I said, keep googling.

It's very clear what they are NOT rated for when you launch a game and get a hard lock, driver crash, or artifacts. So yes, I do have an idea. Don't make the mistake of thinking I'm as clueless as you are.

Just because the memory is clocked at 5000MHz doesn't mean that's the limit it's rated for; a lot of the time higher-rated memory is used. Again, you have no idea; you're just guessing.

Additionally, even if it's rated for 5500MHz and it's running at 6000MHz, how much of an impact is there on its life expectancy? Do you know? Or, again, are you just guessing?
 
As expected is right. You are, after all, the guy who reinvented the definition of gaming when your own metric of what is and isn't OK backfired on you.

Nah, as I said earlier, it's obvious that you don't care what anyone else posts; you're just digging those heels in because you don't want to lose an argument. You'll dig a hole 10 ft deep, it doesn't matter. Keep telling people to go Google stuff while providing no evidence of your own. It's laughable, but fun.
 
Just because the memory is clocked at 5000MHz doesn't mean that's the limit it's rated for; a lot of the time higher-rated memory is used. Again, you have no idea; you're just guessing.

Additionally, even if it's rated for 5500MHz and it's running at 6000MHz, how much of an impact is there on its life expectancy? Do you know? Or, again, are you just guessing?

Is there a reason you’re pretending to be this dumb?

If you play a game at default clocks and everything works great, then you increase your VRAM by 500-800MHz and games crash, are you seriously stupid enough not to figure out what the cause was? Or is that a "clue" that maybe your VRAM was clocked too high?
 