GeForce RTX 3080 sees increasing reports of crashes in games

Mate, you do realize there is usually only one 12v rail inside a PSU? :)
 
The homegrown geek in taco just couldn't resist, mate😁👍

What I was trying to say is... typically in the past I would run one cable from the PSU with two 8-pin connectors daisy-chained off it and just plug it in. On this one I ran two separate cables from different sections of the PSU, one per 8-pin connector. I don't know if that helps or is voodoo economics... but all seems very nice with my MSI 3080, as I have run 2 GHz clocks in lots of games with zero crashing issues.
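For what it's worth, the two-cable approach isn't pure voodoo: splitting the load halves the current each cable has to carry, which halves the resistive voltage drop along it. A rough back-of-the-envelope sketch in Python, where all the wattage figures are assumptions rather than measurements:

# Rough per-cable current estimate, assuming a ~320 W RTX 3080
# with up to ~75 W supplied through the PCIe slot and the rest
# through the 8-pin cables. All figures are ballpark assumptions.

TOTAL_DRAW_W = 320.0   # assumed card power draw
SLOT_W = 75.0          # PCIe slot supplies up to ~75 W
RAIL_V = 12.0          # single 12 V rail

cable_load_w = TOTAL_DRAW_W - SLOT_W   # ~245 W via the 8-pin connectors

# One daisy-chained cable carries the whole connector load:
print(f"one cable:  {cable_load_w / RAIL_V:.1f} A")            # ~20.4 A

# Two separate cables split it roughly in half:
print(f"two cables: {cable_load_w / 2 / RAIL_V:.1f} A each")   # ~10.2 A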

Then again my GPU is boosted by Chiiiiiiuahahahahahaha power!

The 3080 is a beast of a card, and 10-bit RGB at 4K 120 Hz is thoroughly impressive on the LG 48CX.
 
Well, if it is truly multiple rail, that might in theory pose a problem if there are slight fluctuations in output and they both feed the same circuit on the GPU. But if nothing exploded, probably gravy, mate.

I'm biting a finger and it doesn't even feel like my finger. Feel like taco gona womit.

Egh g'night mates.

Hmmm easy enough to switch back....I got cable extenders so just unplug and then plug into the other...will test out.

I am having weird flickering issues on the desktop after playing BFV & MW... like a Halloween horror house... a reboot solves it.
 
Most of the reason for canceling the Ventus was because the small sampling of "reviews" I have seen show it running hotter than I would like. I didn't cancel anything over the crashing issue specifically.

Yeah, happy I went with my gut instinct and canceled my pre-order for the Ventus last Saturday and pre-ordered the Asus TUF instead. It will be a long wait, but at least for the right card.
 
So AIBs are not going to re-manufacture the cards and instead will just release BIOS updates that reduce clocks? ...sounds like a bad option... so Asus and Founders Edition cards seem like the best choice.
 
So AIBs are not going to re-manufacture the cards and instead will just release BIOS updates that reduce clocks? ...sounds like a bad option... so Asus and Founders Edition cards seem like the best choice.
At the moment there is no clear indication that is the route they’re going to take. Jayz only suggests that is a likely possibility since it’s simple and free.
 
If they do that they are signing their death warrants. The AIBs are responsible for this. They could have charged $10 more for each card with the proper hardware and all would be well. I don't understand why they are trying to take such shortcuts. They know NV cards are money makers so why bring discredit on your company by trying to make an extra couple of dollars?
The AIBs are NOT responsible for this. Well, not completely. All parts on that board are part of the spec. If a cheaper component is guaranteed to fail, then it shouldn't be part of the spec. This is basic engineering.
 
Is taco missing something here? Didn't EVGA catch the trouble and fix it in time?

Yes, the XC3 series is 5+1 and the FTW is 4+2. 5+1 should be fine, but people are scared because it has fewer MLCC groups than the FE model, which is 4+2.
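For anyone wondering why the MLCC groups matter at all: an array of small MLCCs keeps a much lower impedance at the high frequencies a boosting GPU core generates than one big polymer cap (POSCAP/SP-CAP) does. Here's a toy Python sketch using the standard series R-L-C capacitor model; the component values are illustrative guesses, not datasheet numbers for any actual card:

# |Z| = sqrt(ESR^2 + (2*pi*f*ESL - 1/(2*pi*f*C))^2) for the usual
# series R-L-C model of a real capacitor. All values are invented.
import math

def z_mag(f_hz, c_f, esr_ohm, esl_h):
    # Reactance of the series L and C, plus the resistive ESR term.
    x = 2 * math.pi * f_hz * esl_h - 1 / (2 * math.pi * f_hz * c_f)
    return math.hypot(esr_ohm, x)

# One big polymer cap (assumed: 330 uF, 6 mOhm ESR, 1.5 nH ESL).
poscap = dict(c_f=330e-6, esr_ohm=6e-3, esl_h=1.5e-9)

# Bank of 10 small MLCCs in parallel (assumed: 47 uF, 2 mOhm, 0.5 nH each).
n = 10
mlcc_bank = dict(c_f=n * 47e-6, esr_ohm=2e-3 / n, esl_h=0.5e-9 / n)

for f in (1e6, 10e6, 100e6):
    print(f"{f / 1e6:>5.0f} MHz: poscap {z_mag(f, **poscap) * 1e3:8.2f} mOhm, "
          f"mlcc bank {z_mag(f, **mlcc_bank) * 1e3:8.2f} mOhm")

The higher the frequency, the more the single big cap's inductance dominates and the bigger the MLCC bank's advantage gets.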
 
The AIBs are NOT responsible for this. Well, not completely. All parts on that board are part of the spec. If a cheaper component is guaranteed to fail, then it shouldn't be part of the spec. This is basic engineering.

If NV said they could substitute with lesser parts then NV is at fault. If the AIBs substituted to save money they are at fault.
 
If NV said they could substitute with lesser parts then NV is at fault. If the AIBs substituted to save money they are at fault.
The lesser parts ARE part of the spec. That's why NV is at fault.

When you engineer a product, you provide what's called a spec list. It includes every part and what is acceptable to use for guaranteed operation. This may include cheaper parts. You provide this spec list precisely to prevent failures. You are providing parts that meet the specification for operation.

When you begin manufacturing, you have to monitor and test for weeks to guarantee everything is operating within tolerance. If you don't, well, it looks a lot like this. AIBs, generally speaking, would have caught this. But with only one month? That's crazy tight. There's no allowance for a mistake. So do AIBs share some blame? Yes. But not the majority. Far from it.
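A minimal sketch of what that kind of production gate looks like, assuming a made-up ripple spec; the limit and the "measurements" here are invented for illustration only:

# Toy production tolerance check: "measure" each board, compare
# against a spec limit, and track the fallout rate over the run.
import random

RIPPLE_LIMIT_MV = 50.0   # hypothetical maximum allowed rail ripple
random.seed(42)

def measure_ripple_mv():
    # Stand-in for a real bench measurement; normal-ish spread around 40 mV.
    return random.gauss(40.0, 6.0)

results = [measure_ripple_mv() for _ in range(1000)]
fails = sum(1 for r in results if r > RIPPLE_LIMIT_MV)
print(f"fallout: {fails} / {len(results)} boards out of tolerance")

Skip weeks of that kind of monitoring and out-of-tolerance boards sail straight into retail, which is what this launch looks like.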
 
The lesser parts ARE part of the spec. That's why NV is at fault.

When you engineer a product, you provide what's called a spec list. It includes every part and what is acceptable to use for guaranteed operation. This may include cheaper parts. You provide this spec list precisely to prevent failures. You are providing parts that meet the specification for operation.

When you begin manufacturing, you have to monitor and test for weeks to guarantee everything is operating within tolerance. If you don't, well, it looks a lot like this. AIBs, generally speaking, would have caught this. But with only one month? That's crazy tight. There's no allowance for a mistake. So do AIBs share some blame? Yes. But not the majority. Far from it.

Please show me the spec sheets. If they show the configuration and parts used then I'll buy your story. Otherwise this is pure conjecture.
 
Please show me the spec sheets. If they show the configuration and parts used then I'll buy your story. Otherwise this is pure conjecture.
It's literally in the article from Igor, which includes the spec. He says they should have caught it. That's correct. But the people who told them they could use those parts is Nvidia, which he also mentions. Nvidia engineers it; the AIBs build it.
 
I thought 5 cheap ones and 1 expensive one was the reference design, not 6 cheap ones... those are the ones with problems, I think.
 
I thought 5 cheap ones and 1 expensive one was the reference design, not 6 cheap ones... those are the ones with problems, I think.
That kind of warning would usually be in the spec. It would be if you wanted to prevent failures. All of the cards are different because Nvidia didn't specify.
 
I think the simple fact is that AIBs just got very little time to design, build, and test compared to Nvidia's own special FE-brew design. All had to rush it a bit to get something out on launch day; some have good enough QA departments to catch the issue, others were not as "lucky".

Really marvellous launch, Nvidia! :hungover:
 
It's literally in the article from Igor, which includes the spec. He says they should have caught it. That's correct. But the people who told them they could use those parts is Nvidia, which he also mentions. Nvidia engineers it; the AIBs build it.

I read it again and NV has some blame because they didn't exclude lesser components, but the AIBs in question knowingly used lesser components for nothing other than cost reasons. Here's the money shot...

Sometimes things are so obvious that you really have to look several times to see them. But once you have understood it, many things suddenly go from nebulous to plausible. NVIDIA, by the way, cannot be blamed directly, because the fact that MLCCs work better than POSCAPs is something that any board designer who hasn’t taken the wrong profession knows. Such a thing can even be simulated if necessary.

Anyway, there's enough blame to go around. At least it was found quickly and will be remedied.
 
I read it again and NV has some blame because they didn't exclude lesser components, but the AIBs in question knowingly used lesser components for nothing other than cost reasons. Here's the money shot...

Sometimes things are so obvious that you really have to look several times to see them. But once you have understood it, many things suddenly go from nebulous to plausible. NVIDIA, by the way, cannot be blamed directly, because the fact that MLCCs work better than POSCAPs is something that any board designer who hasn’t taken the wrong profession knows. Such a thing can even be simulated if necessary.

Anyway, there's enough blame to go around. At least it was found quickly and will be remedied.
There is enough blame to go around but the point is you wouldn't even have been able to identify what a lesser component was if it wasn't in Nvidia's spec. So putting that all on AIBs? Oh hell no.
 
There is enough blame to go around but the point is you wouldn't even have been able to identify what a lesser component was if it wasn't in Nvidia's spec. So putting that all on AIBs? Oh hell no.

There may be other issues...

 
There may be other issues...

No, it's not all on the caps. The problem isn't hard to identify. With only one month of production, there's only a certain number of chips that meet Nvidia's design spec. The 3080 and the 3090 ARE Titan cards given their size. Nvidia went all-in on a lesser design node because they had to; AMD is on a better process. To mitigate that, Nvidia is running with a much larger chip than they wanted, which would not be a problem if they had more than one month of production. One month of production for Nvidia before launch is 100% ABSURD! They are like the Intel of GPU design.

So some chips use more power than others. Bingo bango, that's why you have a variety of cards failing. Blaming it "on drivers" shouldn't EVER be uttered when it comes to Nvidia by ANY reviewer, given the crap they have spewed.
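Here's a toy model of that argument: per-chip transient power at a given boost clock varies (silicon lottery), and a weaker output filter tolerates a smaller spike before the core misbehaves. Every number below is invented purely to show the interaction, not measured from any card:

# Toy "silicon lottery meets filtering" model: chips whose transient
# spike exceeds what the filter network can absorb cause a crash.
import random

random.seed(1)

def crash_rate(spike_tolerance_w, n=100_000):
    crashes = 0
    for _ in range(n):
        # Assumed spread of per-chip transient spikes at ~2 GHz boost.
        spike_w = random.gauss(80.0, 15.0)
        if spike_w > spike_tolerance_w:
            crashes += 1
    return crashes / n

print(f"strong filter (absorbs 120 W spikes): {crash_rate(120.0):.2%}")
print(f"weak filter   (absorbs 100 W spikes): {crash_rate(100.0):.2%}")

A small difference in what the board can absorb turns a fraction of a percent of chips into several percent, which would look exactly like "a variety of cards failing".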
 
No, it's not all on the caps. The problem isn't hard to identify. With only one month of production, there's only a certain number of chips that meet Nvidia's design spec. The 3080 and the 3090 ARE Titan cards given their size. Nvidia went all-in on a lesser design node because they had to; AMD is on a better process. To mitigate that, Nvidia is running with a much larger chip than they wanted, which would not be a problem if they had more than one month of production. One month of production for Nvidia before launch is 100% ABSURD! They are like the Intel of GPU design.

Dude, chill. Your inner AMD isn't hard to discern.
 
Dude, chill. Your inner AMD isn't hard to discern.
Not my inner AMD. I just don't like BS. I have done software design and engineering, so when someone blames the builders while the spec has the inferior part IN IT, that's a problem.

As for what cards I usually buy, it's a 10:1 ratio, and that's not leaning toward AMD when it comes to GPUs. Now, CPUs, sure. GPUs, nope, because of many factors.
 
I find it hard to believe that EVGA (explicitly stating it) and ASUS (no statement, but caps changed) discovered issues with the caps prior to launch, so close to launch that images showing the older cap config had already been released, and yet Hardware Unboxed is to be believed that it wasn't the issue. Like, come on now. There are likely other issues causing crashes apart from the hardware ones the manufacturers identified.
 
I find it hard to believe that EVGA (explicitly stating it) and ASUS (no statement, but caps changed) discovered issues with the caps prior to launch, so close to launch that images showing the older cap config had already been released, and yet Hardware Unboxed is to be believed that it wasn't the issue. Like, come on now. There are likely other issues causing crashes apart from the hardware ones the manufacturers identified.

I'm gonna go out on a limb and say that because Nvidia used a custom PCB this time around and told the rest to use a separate "reference" design, something happened. I'd like to know if Nvidia actually tested the reference design in house before giving it to the AIB partners.
 
No, it's not all on the caps. The problem isn't hard to identify. With only one month of production, there's only a certain number of chips that meet Nvidia's design spec. The 3080 and the 3090 ARE Titan cards given their size. Nvidia went all-in on a lesser design node because they had to; AMD is on a better process. To mitigate that, Nvidia is running with a much larger chip than they wanted, which would not be a problem if they had more than one month of production. One month of production for Nvidia before launch is 100% ABSURD! They are like the Intel of GPU design.

So some chips use more power than others. Bingo bango, that's why you have a variety of cards failing. Blaming it "on drivers" shouldn't EVER be uttered when it comes to Nvidia by ANY reviewer, given the crap they have spewed.
This is not a node issue; it is a design flaw that should have been caught but wasn't. The option of using six SP-CAPs shouldn't have been on the table. I suspect there is another underlying issue that amplifies the signal sensitivity.
 
I find it hard to believe that EVGA (explicitly stating it) and ASUS (no statement, but caps changed) discovered issues with the caps prior to launch, so close to launch that images showing the older cap config had already been released, and yet Hardware Unboxed is to be believed that it wasn't the issue. Like, come on now. There are likely other issues causing crashes apart from the hardware ones the manufacturers identified.
Exactly. If people are catching crap at the last minute, then it's NOT the AIBs. Nvidia cards are their biggest sellers. People really think they would screw their entire launch over $0.15? You're crazy.
 
This is not a node issue; it is a design flaw that should have been caught but wasn't. The option of using six SP-CAPs shouldn't have been on the table. I suspect there is another underlying issue that amplifies the signal sensitivity.
We aren't talking about a design flaw. It's not design. It's a production problem and the willingness to go forward with chips that don't meet spec.
 
We aren't talking about a design flaw. It's not design. It's a production problem and the willingness to go forward with chips that don't meet spec.
They do meet spec, though? Or, then, did Nvidia green-light out-of-spec cards?
 
We aren't talking about a design flaw. It's not design. It's a production problem and the willingness to go forward with chips that don't meet spec.
I don't think you understand the difference between production and design. Since you say that this is a production problem, what variable was added to cause this issue, from initial materials to final assembly?
Are we hearing about contaminated materials? Faulty components? Worker error? Static damage?
 
I don't think you understand the difference between production and design. Since you say that this is a production problem, what variable was added to cause this issue, from initial materials to final assembly?
When you fab chips, not all of them meet the specification. You don't know this?
 
They do meet spec, though? Or, then, did Nvidia green-light out-of-spec cards?
What I think happened is a twofer:
1) A late push to increase clocks.
2) Not enough chips that can hit those clocks within the predetermined power-delivery spec.
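On point 1, a quick sketch of why even a modest late clock bump isn't free, using the rule of thumb that dynamic power scales roughly with f * V^2; the clock and voltage figures are assumptions for illustration only:

# Dynamic power scales roughly as P ~ f * V^2, and higher clocks
# generally need higher voltage. All figures below are assumed.

f_old, v_old, p_old = 1.90, 0.950, 300.0   # GHz, V, W (assumed baseline)
f_new, v_new = 2.00, 1.000                 # assumed late boost push

p_new = p_old * (f_new / f_old) * (v_new / v_old) ** 2
print(f"estimated new draw: {p_new:.0f} W (+{p_new - p_old:.0f} W)")   # ~350 W

So a 100 MHz bump plus the voltage to hold it can add tens of watts of load that the power-delivery spec was never validated against.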
 