So, apparently MOSTLY everyone needs to return their RTX 3080 / 3090 because of "cheap" components. I'm def returning my EVGA 3090.

My EVGA 3090 FTW has been a little crashy (comparatively), but who's to say that isn't driver-related? Supposedly that model follows Nvidia's FE design and isn't affected. I'm not returning anything until some driver updates arrive and I'm certain this isn't simply software. Performance on this thing is out of this world and exactly what I wanted, so I'm not about to return something after 2 days that might be fixable via drivers or firmware.

When I used Precision X for my 3090 it gave me a firmware update. Not sure if you did that or not.
 
So, apparently MOSTLY everyone needs to return their RTX 3080 / 3090 because of "cheap" components. I'm def returning my EVGA 3090.

No way IN HELL am I going to accept this on an $1800 (with tax) video card.

But, then again, my card is very stable. I've not had any crashing, but will my components degrade over time?

Wait, yes, I do have a strange crash. Time Spy crashes right off the jump. Let me go turn off my overclocking tool within GeForce Experience.

Trust me, watch this video.



If you have an array of six cheapies for power delivery, then you've got a problem. On the 3080, a 1 and 5 split seems fine, though that might be questionable for the increased power needs of a 3090. The Nvidia FE cards for the 3090 have a 2 and 4 arrangement. If you have at least a 2 and 4 arrangement, you should be fine, presumably for the reasonable life of the card, since that is what Nvidia itself is doing. I think all the AIBs need to come clean and announce their power delivery arrangements PER MODEL so that users can know what they have.
 
Thing is, the minimum spec should be 100% stable, or it's not really a minimum spec, is it? Also, there's no way all the AIBs, even EVGA, would make a card that wouldn't work. My bet is the Nvidia test cores they sent out worked 100%, or were capped in boost clocks for testing; then when they got the production cores, something changed, or they upped the clocks. The fact that it's not all cards but only some suggests to me that something has a defect, making it more sensitive to which caps are used.

But for all we know, it could be people just using lame PSUs/cables, or drivers.
I completely agree. I was responding to someone who said they were building under-specced cards. I was pointing out that they were building cards in spec; even if they were building to the lower side of spec, it's still spec and should still 100% work (at stock speeds).

Who knows what happened, or who knew what and when; I'm patiently waiting for Nvidia to issue a statement, but I think they're just going to leave the AIBs hanging at this point, lol. What a crap-tastic launch this was. This is something that would happen to a struggling AMD with no budget... not something that should be happening to the market leader with some of the highest earnings ever. I don't know what is going on, but if AMD doesn't completely botch their launch, they'll look like friggin' all-stars, lol. But hey, if anyone can follow up this fiasco and make it look normal, it'll be AMD's graphics division. I have a bit more faith in them now that Raja is gone, though.
 
Shitty for those of you who bought cards with problems.

Seems like all AMD has to do is have cards in stock that are faster than a 3080 and don't crash.

That's a pretty low bar AMD. You can do it!
Yeah, here's to hoping they don't trip over their own feet and land on the bar :). Seriously, wtf is going on. I really hope nvidia gets this sorted quickly, which they tend to, but this is like they were really trying to rush a product that wasn't fully ready.
 
I had 3 x RTX 3080 and all were fine. Sold two and made some cash, but the RTX 3080 is fast and works fine.
It can even play most games at 4K 60 FPS Ultra, which is amazing, and even some games at 8K 120 FPS.
 
Funny how you guys always say only AMD has these kinds of launches ;) Think I will be staying away until Nvidia can prove they have this problem fixed.
 
I remember when the 2080 Ti space invader problem cropped up. Nvidia stepped up and had a replacement at my door within 3 days of my reporting the problem, and that was before I sent my card back. I would give them the benefit of the doubt to get this sorted, although from what I read, it is the aftermarket cards that have the issues.
 
It has been a week and my RTX 3080 and LG OLED problems are not solved. Nvidia always seems to get a pass. If it was AMD, there would be murder in the streets.
 
IT CAN'T GET ANY WORSE GUYS!!!!!!!!!!
Nvidia-GTX-1080-EVGA-FTW-Catches-Fire.jpg
 
Yeah, here's to hoping they don't trip over their own feet and land on the bar :). Seriously, wtf is going on. I really hope nvidia gets this sorted quickly, which they tend to, but this is like they were really trying to rush a product that wasn't fully ready.

Watch miners snatch up all the 6x00 GPUs.
 
This is why I'm never the guinea pig and never buy new cards as soon as they come out. Waiting for AMD, but also for all the issues to be worked out. Some people are just not patient.
Nirad9er Same for me. I have never wanted to be the first to buy something not tested by real people, and yet SOOOO many people are willing to do it just so they can brag they got the latest and greatest! I guess we owe them a thanks for suffering through this every time (you would think they would learn eventually?).

Even once AMD cards are out at retail, it will probably be 1-2 months before I buy anything, assuming there is any inventory.
 
C'mon, Dan, the video host wouldn't lie to me!

It could always get worse, and I've seen some pretty big blunders in the hardware world: the Pentium FDIV bug, the i820 MTH issue, and the NVIDIA nForce 680i SLI reference board issues. Similarly, I remember bad capacitors on GeForce 4 GPUs. We had dozens of GeForce 4 Ti 4200 GPUs at work, which all died within months to a year or so. They had shoddy VRMs that burnt out, plus multiple capacitor failures. Also, speaking of which, the capacitor plague that hit the entire industry lasted for years. I remember Apple continuing to use shitty caps in iMacs more than a year after everyone else stopped.

Also, let's not forget the NVIDIA space invaders issues. The list goes on and on.
 
It could always get worse, and I've seen some pretty big blunders in the hardware world: the Pentium FDIV bug, the i820 MTH issue, and the NVIDIA nForce 680i SLI reference board issues. Similarly, I remember bad capacitors on GeForce 4 GPUs. We had dozens of GeForce 4 Ti 4200 GPUs at work, which all died within months to a year or so. They had shoddy VRMs that burnt out, plus multiple capacitor failures. Also, speaking of which, the capacitor plague that hit the entire industry lasted for years. I remember Apple continuing to use shitty caps in iMacs more than a year after everyone else stopped.

Also, let's not forget the NVIDIA space invaders issues. The list goes on and on.
This. Bad releases happen. Bad designs happen. This may or may not be one, we’ll see. But they do happen.
 
I would not accept less than MLCC 2 + 4 capacitor array. The FE cards are reliable according to extensive testing. There's a reason for that.

The really shameful manufacturers are Gigabyte and Zotac, though. Six poscaps on their 3090 models? WTF. A nearly $2000 graphics card with that much corner cutting is unacceptable!!
 
I would not accept less than MLCC 2 + 4 capacitor array. The FE cards are reliable according to extensive testing. There's a reason for that.

The really shameful manufacturers are Gigabyte and Zotac, though. Six poscaps on their 3090 models? WTF. A nearly $2000 graphics card with that much corner cutting is unacceptable!!
Yeah, those stupid cheap bastards, how dare they follow and respect Nvidia's spec!
 
So much FUD gets spread when a new product launches. It's amazing. Don't fall into the "I hate my new product because someone posted a bad review" trap. Test it. If there is a true issue, it can be remedied, but don't write the card off before you have it in hand.
 
Yeah, those stupid cheap bastards, how dare they follow and respect Nvidia's spec!

Nvidia spec DID include MLCC caps. It's the cheapo AIBS that decided, *shrug* we'll just skip this one and shave off 5 bucks on our bottom line margin. Let's just go all POSCAPS cos it's close enough LULZ!

Minimum reference design would have been 1x MLCC + 5x poscaps. Nvidia went slightly above that spec with FE 2+4 design.

From reports, a 1+5 'should' work, but 2+4 seems to be the safest, most stable design choice. And the 6-poscap designs are the most problematic.
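For intuition on why the mix matters, here's a back-of-envelope sketch comparing the combined high-frequency impedance of the three layouts being debated. The parasitic values (ESR, ESL) are invented but plausible numbers, not measured data, and a simple series-RLC capacitor model is assumed:

```python
import math

F = 10e6  # 10 MHz: above self-resonance, where parasitic inductance dominates

def z_mag(f, c, esr, esl):
    """|Z| of one capacitor position, modeled as a series RLC."""
    x = 2 * math.pi * f * esl - 1 / (2 * math.pi * f * c)
    return math.sqrt(esr**2 + x**2)

# One board position populated either with a single 470 uF SP-cap or with
# ten 47 uF MLCCs in parallel (C adds; ESR and ESL divide by ten).
# All parasitic values below are illustrative guesses.
Z_SP = z_mag(F, 470e-6, 6e-3, 3e-9)
Z_MLCC = z_mag(F, 470e-6, 0.2e-3, 0.05e-9)

def layout_z(n_mlcc_arrays, n_sp_caps):
    """Six positions in parallel: admittances add, so combined |Z| drops."""
    return 1 / (n_mlcc_arrays / Z_MLCC + n_sp_caps / Z_SP)

for name, groups in (("6 poscap", (0, 6)), ("1+5", (1, 5)), ("2+4", (2, 4))):
    print(f"{name:>8}: {layout_z(*groups) * 1e3:.2f} mOhm at 10 MHz")
```

Under these assumptions the 6-poscap layout has roughly 10-20x the 10 MHz impedance of the 2+4 layout, which would line up with the reports that all-poscap boards are the most sensitive to fast transients. Real boards also differ in layout, VRM design, and binning, so this is only a sketch of the mechanism.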
 
Nvidia spec DID include MLCC caps. It's the cheapo AIBS that decided, *shrug* we'll just skip this one and shave off 5 bucks on our bottom line margin. Let's just go all POSCAPS cos it's close enough LULZ!

How do you explain the FE cards having issues, then? It doesn't seem like it's 100% limited to poscap vs. MLCC issues.
 
Nvidia spec DID include MLCC caps. It's the cheapo AIBS that decided, *shrug* we'll just skip this one and shave off 5 bucks on our bottom line margin. Let's just go all POSCAPS cos it's close enough LULZ!

Minimum reference design would have been 1x MLCC + 5x poscaps. Nvidia went slightly above that spec with FE 2+4 design.

From reports, a 1+5 'should' work, but 2+4 seems to be the safest, most stable design choice. And the 6-poscap designs are the most problematic.
Poscaps are more expensive than MLCCs; so no, actually, they didn't cheap out on the hardware, they just may not have tested under the right loads. The two designs are for different purposes, not more expensive or cheaper.
 
Poscaps are more expensive than MLCCs; so no, actually, they didn't cheap out on the hardware, they just may not have tested under the right loads. The two designs are for different purposes, not more expensive or cheaper.

Are ten MLCCs cheaper than one Poscap? (Serious question)

Edit: changed six to ten
 
Poscaps are more expensive than MLCCs; so no, actually, they didn't cheap out on the hardware, they just may not have tested under the right loads. The two designs are for different purposes, not more expensive or cheaper.

Incorrect. Buildzoid did a video discussing the Poscap vs MLCC issue on 3080s. To recap:

1. To be technically correct these are SP-caps and not Poscaps.

2. MLCC arrays are more expensive than SP-Caps.

3. To create an MLCC array such as the one diagrammed by Nvidia, you need an array of TEN of the chips. On TOP of that, pick-and-place for 10 chips takes 10x as long, so just placing them on the card is inherently more expensive than placing 1 big SP-cap.

Now if you go and read EVGA's letter describing the problem: they had to delay their 3000 series cards because they caught the stability issues with using 6x SP-caps, and fixed it by placing 20 MLCCs, which equals 2x MLCC arrays. That means more expense, but the cards were stable after the hardware revision.

To be FAIR, Nvidia is partially responsible for not being more specific in their reference guidelines and saying, YO AIBs, you need at LEAST one MLCC array to make these cards stable at higher frequencies. This isn't a suggestion, this is a requirement!

P.S. On top of this, there are different levels of SP-caps in terms of performance/price: 470, 330, 270, etc. If you examine the Nvidia FE cards, they use 2x MLCC arrays, 2x 470 SP-caps, and 2x 270 SP-caps. This variety in capacitors probably helps handle a wider variety of transient load states. In any case, the FE cards seem to be very reliable.
 
Yeah I caught that and was fixing it as you posted. Too many 5’s 1’s 4’s 2’s and 6’s in this thread.
No problem, I hate replying on my tablet. For you or me buying at full retail, one MLCC with this rating is just under $1.50 or so, while an SP-cap from Panasonic is $1.83, just literally by looking on Google Shopping.
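To put rough numbers on that, here's a quick sketch using the one-off retail prices quoted above (~$1.50 per MLCC, $1.83 per Panasonic SP-cap). Actual AIB volume pricing would be far lower for both part types, but the ratio between layouts is the point:

```python
# Back-of-envelope part-cost comparison of the six-position cap layouts,
# using the one-off retail prices quoted in this thread (not volume pricing).
MLCC_UNIT = 1.50      # approx. retail price of one suitable MLCC
SP_CAP_UNIT = 1.83    # approx. retail price of one Panasonic SP-cap
CHIPS_PER_ARRAY = 10  # ten MLCCs fill one SP-cap position

def layout_cost(mlcc_arrays, sp_caps):
    """Capacitor cost of a six-position layout mixing MLCC arrays and SP-caps."""
    return mlcc_arrays * CHIPS_PER_ARRAY * MLCC_UNIT + sp_caps * SP_CAP_UNIT

print(f"6 SP-caps:          ${layout_cost(0, 6):.2f}")  # $10.98
print(f"2 MLCC + 4 SP-caps: ${layout_cost(2, 4):.2f}")  # $37.32
print(f"6 MLCC arrays:      ${layout_cost(6, 0):.2f}")  # $90.00
```

At these retail numbers the all-poscap board is by far the cheapest in caps, which supports the point that the MLCC-heavy boards were not the ones cutting corners on this particular part.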
 
Incorrect. Buildzoid did a video discussing the Poscap vs MLCC issue on 3080s. To recap:

1. To be technically correct these are SP-caps and not Poscaps.

2. MLCC arrays are more expensive than SP-Caps.

3. To create an MLCC array such as the one diagrammed by Nvidia, you need an array of TEN of the chips. On TOP of that, pick-and-place for 10 chips takes 10x as long, so just placing them on the card is inherently more expensive than placing 1 big SP-cap.

Now if you go and read EVGA's letter describing the problem: they had to delay their 3000 series cards because they caught the stability issues with using 6x SP-caps, and fixed it by placing 20 MLCCs, which equals 2x MLCC arrays. That means more expense, but the cards were stable after the hardware revision.

To be FAIR, Nvidia is partially responsible for not being more specific in their reference guidelines and saying, YO AIBs, you need at LEAST one MLCC array to make these cards stable at higher frequencies. This isn't a suggestion, this is a requirement!
I've seen 6 arrays of MLCCs on the ASUS TUF cards.
 
I've seen 6 arrays of MLCCs on the ASUS TUF cards.
That's because ASUS doesn't care; they will pass the cost on to you. Their "mil-spec" reputation comes from not substituting down when given a reference design.
 
That's because ASUS doesn't care; they will pass the cost on to you. Their "mil-spec" reputation comes from not substituting down when given a reference design.
The TUF cards I've seen are going for MSRP or MSRP + $100. Basically still one of the cheaper AIB cards.
 
I've seen 6 arrays of MLCCs on the ASUS TUF cards.

Buildzoid commented that the ASUS 3080 seemed to use six arrays of the 470 MLCCs he prefers, so he seemed to find that design "very neat". Some would call that type of card "overengineered", but apparently it makes a difference on the 30 series. Maybe it's because this video card draws more power than ever; maybe it's the high frequencies. But the MLCCs definitely make a difference over the designs that only use SP-caps (the ones being called poscaps).

It seems like ASUS cards have the highest-quality design of any AIB for the RTX 3000 series.
 
The TUF cards I've seen are going for MSRP or MSRP + $100. Basically still one of the cheaper AIB cards.
Yeah, and the reason for that is how badly the TUF brand was perceived before this series. They've moved it up to the level of the previous generation's Strix going forward. You won't see a TUF product cutting corners like their Dual or Turbo lines.
 
Did some testing last night on the 3090 using my CX TV. Played a good solid 3-4 hours of COD:MW and did not have one crash. Boost speeds were around 1950-1980 MHz most of the time.

Really does make me wonder if it's a driver issue, though. I know if I enable G-Sync on the CX TV, it will drop its signal.
 