So, apparently MOSTLY everyone needs to return their RTX 3080 / 3090 because of "cheap" components. I'm def returning my EVGA 3090.

I love how there are so many armchair engineers chiming in on a very technical analysis with zero expertise. I don't mean to be rude, but unless you are actually familiar with the design of these systems, I don't think you are qualified to determine which design is "good" or "bad", because there's way more going on with the power regulation than simply "MLCC vs POSCAP".

An actual EE chimed in on this on Reddit. There are more than 100,000 MLCC caps for sale on Digikey and over 10,000 POSCAPs, with all sorts of varying attributes. Designing these power regulation systems is a full-time job. You can't second-guess the design based on speculation posted on the internet.
 
so going forward what is the optimal solution?...all 3080/3090 cards have to be re-manufactured?
That’s the expensive and less likely scenario. The more likely scenario is that a firmware update will be applied. The firmware update's notes will say something along the lines of “increase system stability” when in reality the boost clock will be limited and reduced. This scenario is essentially free.
 
If you're concerned about the launch cards right now - don't buy one. At this point, given the scarcity, the incompatibility issues with various waterblocks, and this reported problem, I'm waiting to see what AMD offers up and also what EVGA brings to the table with the 3080 Hydro Copper.
 
That’s the expensive and less likely scenario. The more likely scenario is that a firmware update will be applied. The firmware update's notes will say something along the lines of “increase system stability” when in reality the boost clock will be limited and reduced. This scenario is essentially free.

what about with higher end/more expensive custom variants of the 3080?...EVGA, Gigabyte, MSI will have multiple tiers of 3080 cards...so will the higher end ones have better designs to deal with the capacitor/power issues?
 
what about with higher end/more expensive custom variants of the 3080?...EVGA, Gigabyte, MSI will have multiple tiers of 3080 cards...so will the higher end ones have better designs to deal with the capacitor/power issues?
Not sure. It’s better to wait and see how things unfold over the coming weeks. Everything is still in the early stages of problem investigation (at least outside of Nvidia/AIBs).
 
If you're concerned about the launch cards right now - don't buy one. At this point, given the scarcity, the incompatibility issues with various waterblocks, and this reported problem, I'm waiting to see what AMD offers up and also what EVGA brings to the table with the 3080 Hydro Copper.

anyone and everyone thinking about getting an Ampere should be concerned about this...it's not something only electrical engineers should be discussing...and Buildzoid (who I'm guessing is an EE) posted his analysis of the 3080...

 
I love how there are so many armchair engineers chiming in on a very technical analysis with zero expertise. I don't mean to be rude, but unless you are actually familiar with the design of these systems, I don't think you are qualified to determine which design is "good" or "bad", because there's way more going on with the power regulation than simply "MLCC vs POSCAP".

An actual EE chimed in on this on Reddit. There are more than 100,000 MLCC caps for sale on Digikey and over 10,000 POSCAPs, with all sorts of varying attributes. Designing these power regulation systems is a full-time job. You can't second-guess the design based on speculation posted on the internet.
934 POSCAPs, unless you include tantalum-polymer capacitors from other brands.
 
so going forward what is the optimal solution?...all 3080/3090 cards have to be re-manufactured?
No, they will just drop boost speeds so they run right... They are boosting higher than advertised, so lowering boost clocks still keeps them out of legal trouble since it's not false advertising. Then day-one benchmarks won't represent actual cards in circulation, but they are still "as advertised", just not "as benchmarked". I would be very surprised if there was any sort of recall or swap unless they are crashing at advertised speeds. So far it's all been > advertised speeds, so just dropping clocks is "acceptable". It will probably piss a few people off, but hey, everyone who wants one is still going to buy their cards, then come back and make fun of AMD driver issues and how you should stick with Nvidia and never buy AMD because only AMD has release issues. They both have issues; neither is immune. We'll see how they respond, but their silence thus far is deafening.
 
No, they will just drop boost speeds so they run right... They are boosting higher than advertised, so lowering boost clocks still keeps them out of legal trouble since it's not false advertising. Then day-one benchmarks won't represent actual cards in circulation, but they are still "as advertised", just not "as benchmarked". I would be very surprised if there was any sort of recall or swap unless they are crashing at advertised speeds. So far it's all been > advertised speeds, so just dropping clocks is "acceptable". It will probably piss a few people off, but hey, everyone who wants one is still going to buy their cards, then come back and make fun of AMD driver issues and how you should stick with Nvidia and never buy AMD because only AMD has release issues. They both have issues; neither is immune. We'll see how they respond, but their silence thus far is deafening.

GTX 970 3.5GB VRAM...I received some sort of $30 check from that class action lawsuit even though Nvidia denied any wrongdoing
 
anyone and everyone thinking about getting an Ampere should be concerned about this...it's not something only electrical engineers should be discussing...and Buildzoid (who I'm guessing is an EE) posted his analysis of the 3080...

I'm not saying no one should be discussing the problem; I very intentionally said I don't think you are qualified to determine which design is "good" or "bad". Because that is the truth. You can't look at the design and determine that "4+2 is better than 5+1 is better than 6 POSCAPs" because you don't have any of the underlying technical information. There are cheap MLCC caps and expensive ones, and these are very complicated power delivery systems with hundreds of components beyond just these particular caps.
 
Yes, POSCAP is a brand, Panasonic's brand of tantalum-polymer capacitors, but there are also other competitors with similar designs. But that's kind of the point. POSCAPs vs MLCC is like saying Ford vs truck, as the Reddit post stated. It's a pretty good read.

https://www.reddit.com/r/hardware/comments/j09yj5/poscap_vs_mlcc_what_you_need_to_know/
Yeah, except none of the 30xx cards use POSCAPs–they use SP-Caps (aluminum polymer)...and there are quite a few more of them than POSCAPs (or their like).
 
anyone and everyone thinking about getting an Ampere should be concerned about this...it's not something only electrical engineers should be discussing...and Buildzoid (who I'm guessing is an EE) posted his analysis of the 3080...



From what I could understand of his analysis, it seems the FE is very well built, so I don't see anything to be concerned about here.
 
Just reinforces my opinion that AIB == cheap, gimmicky garbage these days.
Remember when AIBs actually had custom/beefier circuitry, components, and even PCBs? Pepperidge Farm remembers.

You're absolutely right. I've been buying reference cards for a few years now, with a few EVGA GPUs sprinkled here and there. It's actually sad when the reference design is so much better than whatever AIBs have to offer.
 
From what I could understand of his analysis, it seems the FE is very well built, so I don't see anything to be concerned about here.
Yep, best and at the cheapest price point. Too bad they are hard to get. I think I will wait for 20GB versions and/or AMD cards.
 
It could have been a 12GB GPU...
Yes, it could have used GDDR6 with a 384-bit bus: 12GB with equivalent overall memory bandwidth and 2GB more memory, as well as being cheaper overall to make with lower power requirements. What Nvidia did here makes little sense. They could have saved the GDDR6X stuff for the 3090 and 3080 Ti versions.
 
From what I could understand of his analysis, it seems the FE is very well built, so I don't see anything to be concerned about here.

The impression I got from that video is that the FE cards are almost "over-engineered"...no cheap parts used 🤷‍♂️
 
Yes, it could have used GDDR6 with a 384-bit bus: 12GB with equivalent overall memory bandwidth and 2GB more memory, as well as being cheaper overall to make with lower power requirements. What Nvidia did here makes little sense. They could have saved the GDDR6X stuff for the 3090 and 3080 Ti versions.

NVIDIA likely has their reasons for going GDDR6X; they wouldn't have spent the engineering resources designing and implementing it if it didn't benefit the overall performance.
 
Yeah, except none of the 30xx cards use POSCAPs–they use SP-Caps (aluminum polymer)...and there are quite a few more of them than POSCAPs (or their like).
Looks like the 3080 Founders Edition uses POSCAPs to me.
 
Double post.
 

NVIDIA likely has their reasons for going GDDR6X; they wouldn't have spent the engineering resources designing and implementing it if it didn't benefit the overall performance.
Two memory controllers are turned off on the GA102, hence the 320-bit bus; adding two more memory chips and turning those memory controllers on with the crossbar design allows any memory controller to talk to the others and supply data. Basically, 16Gbps GDDR6 with a 384-bit bus will give roughly the same bandwidth as 19.5Gbps with a 320-bit bus. Maybe GA102 cannot use GDDR6 and it has to be GDDR6X, hence the weird configuration.
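For anyone who wants to sanity-check that claim, peak memory bandwidth is just the per-pin data rate times the bus width. A rough back-of-the-envelope comparison (assuming a hypothetical 16Gbps GDDR6 part and the 19Gbps GDDR6X the 3080 actually ships with; these numbers are mine, not from the post above):

```python
# Peak memory bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
gddr6_384bit = 16 * 384 / 8    # 768 GB/s -- hypothetical 16Gbps GDDR6 across the full 384-bit bus
gddr6x_320bit = 19 * 320 / 8   # 760 GB/s -- 19Gbps GDDR6X on the 3080's 320-bit bus
                               # (the post's 19.5Gbps figure works out to 780 GB/s)
print(gddr6_384bit, gddr6x_320bit)
```

So on paper the two configurations land within roughly 1-2% of each other, which is the point being made.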
 
You're fucking kidding me????? Can you take pics? NOW I AM REALLY pissed and worried. $1800 is a chunk of bread.

My 3080 OC Ventus has 5+1 and, knock on wood, no crashing issues, and it OCs past 2000MHz like a dream.
But those EVGA FTW3 3090s are much juicier cards and @ $1800 buckaroonies should have all six caps of the good stuff!
 
I had a few multi-hour gaming sessions since getting the MSI 3080 Ventus 3X OC (5+1 cap); I think it's the same model l88bastard got. Haven't seen any CTDs yet, but the card mainly runs around 1950MHz at stock settings at around 73C (ambient room around 27C). The highest I've seen it hit is 2025MHz (MS Flight Simulator 2020, Project Cars 2, Forza 4 and Overwatch). So maybe 2GHz is the max the release chips can do without some firmware magic.
 
You're absolutely right. I've been buying reference cards for a few years now, with a few EVGA GPUs sprinkled here and there. It's actually sad when the reference design is so much better than whatever AIBs have to offer.
Umm... I think you have this confused. The AIBs are building the reference design cards... Nvidia chose not to. So, I'm not sure how the reference design is better than AIB offerings when they are one and the same.
 
Two memory controllers are turned off on the GA102, hence the 320-bit bus; adding two more memory chips and turning those memory controllers on with the crossbar design allows any memory controller to talk to the others and supply data. Basically, 16Gbps GDDR6 with a 384-bit bus will give roughly the same bandwidth as 19.5Gbps with a 320-bit bus. Maybe GA102 cannot use GDDR6 and it has to be GDDR6X, hence the weird configuration.
They used GDDR6X because it can run at higher speeds than GDDR6. It uses completely different signalling, so they are not compatible. They would have had to design a different (or reuse their old) memory controller in order to utilize GDDR6. They aren't compatible, so once they decided on GDDR6X, that's all they could use.
 
I had a few multi-hour gaming sessions since getting the MSI 3080 Ventus 3X OC (5+1 cap); I think it's the same model l88bastard got. Haven't seen any CTDs yet, but the card mainly runs around 1950MHz at stock settings at around 73C (ambient room around 27C). The highest I've seen it hit is 2025MHz (MS Flight Simulator 2020, Project Cars 2, Forza 4 and Overwatch). So maybe 2GHz is the max the release chips can do without some firmware magic.
Yeah, and it probably has to do with some binning as well. Obviously not all dies are created equal, so some may be able to hit 2.1GHz and others 2.0GHz (or w/e), so some may see the issue and others may not, even given the same design.
 
Two memory controllers are turned off on the GA102, hence the 320-bit bus; adding two more memory chips and turning those memory controllers on with the crossbar design allows any memory controller to talk to the others and supply data. Basically, 16Gbps GDDR6 with a 384-bit bus will give roughly the same bandwidth as 19.5Gbps with a 320-bit bus. Maybe GA102 cannot use GDDR6 and it has to be GDDR6X, hence the weird configuration.

Nah, it's actually planned obsolescence. Even they can't predict the future that accurately; otherwise, we wouldn't see a screw-up every so many generations. Basically, they need their customers to keep upgrading.
 
Umm... I think you have this confused. The AIBs are building the reference design cards... Nvidia chose not to. So, I'm not sure how the reference design is better than AIB offerings when they are one and the same.

I meant Founders Edition... what a fancy name.
 
I meant Founders Edition... what a fancy name.
Ok, that makes much more sense now :). Yeah, either way though, the reference design should not be crashing even if it doesn't clock as well as some higher-end models. I expect MSRP cards to at least work properly. I expect higher-end models to be better binned and clock higher and/or use less power, whether it's FE or not. This cycle NVIDIA priced the FE cards lower than they normally do, but so far the stock of them has been abysmal. We'll see if that improves or if it was a decision to generate hype @ MSRP prices and then not actually be able to obtain that performance for MSRP.
 
I had a few multi-hour gaming sessions since getting the MSI 3080 Ventus 3X OC (5+1 cap); I think it's the same model l88bastard got. Haven't seen any CTDs yet, but the card mainly runs around 1950MHz at stock settings at around 73C (ambient room around 27C). The highest I've seen it hit is 2025MHz (MS Flight Simulator 2020, Project Cars 2, Forza 4 and Overwatch). So maybe 2GHz is the max the release chips can do without some firmware magic.

Yea, I have the same exact card.....and it hits 2055MHz in MSFS2020, which is the highest I have seen it go....
But it usually cruises around 1950 like yours.

I run the fan at 100%, and bought a brand new EVGA SuperNOVA 1000 G3 to power it, with each 8-pin connector on its own cable to the PSU.
 
You're fucking kidding me????? Can you take pics? NOW I AM REALLY pissed and worried. $1800 is a chunk of bread.
Yep, runs like a champ too. Pretty messed up how EVGA made the same models with slight differences.

Usually hovers around 1920-1950MHz.
 

Yep, runs like a champ too. Pretty messed up how EVGA made different models.

Usually hovers around 1920-1950MHz.
It seems they don't have issues until about 2000MHz, which is why it seems to happen more on 3080s. At least that is what I've seen.
 
My problem with this is, why are AIBs charging us more for lower quality than the FE cards? Remember the days when OC cards were a few bucks more and real custom cards ranged around ~$50 more...

Wasn't that long ago. With my GTX 970, base models were actually $330. The good MSI Gaming version I had, with a good OC and a two-fan cooler, was $350. $360 got you a slightly better OC.

These days, they charge $50-90 more, it seems.
 
if that's the case, everyone, EVERYONE needs to return their shit.
they wouldn't be lowering the advertised boost clock, both rather the boost table. It would effectively make all the GPUs perform as advertised, but more like a lower bin than a higher bin. It seems like crossing 2000MHz is when the issues start, and that is still ~300MHz faster than advertised boost.
 
if that's the case, everyone, EVERYONE needs to return their shit.
On what grounds? All of the cards easily boost past their guaranteed boost clock. Read the specs. EVGA's highest-end card only advertises an 1800MHz boost clock.
 
On what grounds? All of the cards easily boost past their guaranteed boost clock. Read the specs. EVGA's highest-end card only advertises an 1800MHz boost clock.

Yeah, I thought it odd that the "boost" clock for most of these cards was around 1740-ish, but reviews were saying they were boosting to 1950+. I think maybe they went a little fast and loose with the boost clocks beyond the advertised spec.
 