lowering boost clocks in vbios is most likely the fix....

so going forward, what is the optimal solution?...all 3080/3090 cards have to be re-manufactured?
That’s the expensive and less likely scenario. The more likely scenario is that a firmware update will be applied. The firmware update notes will say something along the lines of “increase system stability” when in reality the boost clock will be limited and reduced. This scenario is essentially free.
what about with higher end/more expensive custom variants of the 3080?...EVGA, Gigabyte, MSI will have multiple tiers of 3080 cards...so will the higher end ones have better designs to deal with the capacitor/power issues?

Not sure. It’s better to wait and see how things unfold over the coming weeks. Everything is still in the early stages of problem investigation (at least outside of Nvidia/AIBs).
If you're concerned with the launch cards right now - don't buy one. At this point, given the scarcity, the incompatibility issues with various waterblocks, and this reported problem, I'm waiting to see what AMD offers up and also what EVGA brings to the table with the 3080 Hydro Copper.
I love how there are so many armchair engineers chiming in on a very technical analysis with zero expertise. I don't mean to be rude, but unless you are actually familiar with the design of these systems, I don't think you are qualified to determine which design is "good" or "bad", because there's way more going on with the power regulation than simply "MLCC vs POSCAP".

An actual EE chimed in on this on Reddit. There are more than 100,000 MLCC caps for sale on Digikey and over 10,000 POSCAPs, with all sorts of varying attributes. Designing these power regulation systems is a full-time job. You can't second-guess the design based on speculation posted on the internet.

934 POSCAPs, unless you include tantalum polymer capacitors from other brands.
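For what it's worth, one reason the cap choice isn't one-dimensional is that a real capacitor behaves like a series R-L-C: its ESR and ESL matter as much as its capacitance. Here's a toy sketch, with hypothetical ballpark component values (not taken from any actual 3080/3090 board), showing how a big polymer bulk cap and a small MLCC trade places as frequency rises:

```python
import math

def cap_impedance(f_hz, c_farads, esr_ohms, esl_henries):
    """Impedance magnitude of a real capacitor modeled as a series R-L-C."""
    x = 2 * math.pi * f_hz * esl_henries - 1 / (2 * math.pi * f_hz * c_farads)
    return math.sqrt(esr_ohms ** 2 + x ** 2)

# Hypothetical ballpark parts, NOT values from any actual 3080/3090 board:
polymer = dict(c_farads=470e-6, esr_ohms=0.006, esl_henries=3e-9)   # big bulk cap
mlcc    = dict(c_farads=47e-6,  esr_ohms=0.002, esl_henries=0.5e-9) # small ceramic

for f in (1e5, 1e6, 1e7, 1e8):  # 100 kHz to 100 MHz
    print(f"{f/1e6:8.1f} MHz: polymer {cap_impedance(f, **polymer)*1000:8.2f} mOhm, "
          f"MLCC {cap_impedance(f, **mlcc)*1000:8.2f} mOhm")
```

The polymer cap wins at low frequency, the MLCC wins at high frequency, which is why boards mix both and why you can't judge a design from the package type alone.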
No, they will just drop boost speeds so they run right... They are boosting higher than advertised, so lowering boost clocks still keeps them out of legal trouble since it's not false advertising. Then day 1 benchmarks won't represent actual cards in circulation, but they are still "as advertised", just not "as benchmarked". I would be very surprised if there was any sort of recall or swap unless they are crashing at advertised speeds. So far it's all been > advertised speeds, so just dropping clocks is "acceptable". It'll probably piss a few people off, but hey, everyone who wants them is still going to buy their cards, then come back and make fun of AMD driver issues and how you should stick with Nvidia and never buy AMD because only AMD has release issues. They both have issues; neither is immune. We'll see how they respond, but their silence thus far is deafening.
Yes, POSCAP is the brand (Panasonic's brand of tantalum-polymer capacitors), but there are also other competitors with similar designs. But that's kind of the point. POSCAPs vs MLCC is like saying Ford vs Truck, as the Reddit post stated. It's a pretty good read.
anyone and everyone thinking about getting an Ampere should be concerned about this...it's not something only electrical engineers should be discussing...and Buildzoid (who I'm guessing is an EE) posted his analysis of the 3080...

I'm not saying no one should be discussing the problem; I very intentionally said I don't think you are qualified to determine which design is "good" or "bad". Because that is the truth. You can't look at the design and determine that "4+2 is better than 5+1 is better than 6 POSCAPs", because you don't have any of the underlying technical information. There are cheap MLCC caps and expensive ones, and these are very complicated power delivery systems with hundreds of components beyond just these particular caps.
Yeah, except none of the 30xx cards use POSCAPs; they use SP-CAPs (aluminum polymer)...and there are quite a few more of them than POSCAPs (or their like).
https://www.reddit.com/r/hardware/comments/j09yj5/poscap_vs_mlcc_what_you_need_to_know/
Just reinforces my opinion that AIB == cheap, gimmicky garbage these days.
Remember when AIBs actually had custom/beefier circuitry, components, and even PCBs? Pepperidge Farm remembers.
From what I could understand of his analysis, seems the FE is very well built so I don't see anything to be concerned with here.
Yep, best and at the cheapest price point. Too bad they are hard to get. I think I will wait for 20GB versions and/or AMD cards.
It could have been a 12GB GPU...
Yes, it could have used GDDR6 with a 384-bit bus: 12GB with equivalent overall memory bandwidth and 2GB more memory, as well as being cheaper overall to make with lower power requirements. What Nvidia did here makes little sense. They could have saved the GDDR6X stuff for the 3090 and 3080 Ti versions.
Looks like the 3080 Founders Edition uses POSCAPs to me.
NVIDIA likely has their reasons for going GDDR6X; they wouldn't have spent the engineering resources on designing and implementing it if it didn't benefit the overall performance.

Two memory controllers are turned off on the GA102, hence the 320-bit bus. Adding two more memory chips and turning those memory controllers on works with the crossbar design, which allows any memory controller to talk to any other and supply data. Basically, 16Gb/s GDDR6 on a 384-bit bus will give about the same bandwidth as 19.5Gb/s on a 320-bit bus. Maybe GA102 cannot use GDDR6 and it has to be GDDR6X, hence the weird configuration.
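The bandwidth equivalence is simple arithmetic: peak bandwidth is roughly the per-pin data rate times the bus width, divided by 8. A quick sketch using the figures from the post above (treat them as illustrative, not official shipping specs):

```python
# Peak memory bandwidth in GB/s from per-pin rate (Gb/s) and bus width (bits).
def peak_bw_gb_s(rate_gbit_s: float, bus_bits: int) -> float:
    return rate_gbit_s * bus_bits / 8

print(peak_bw_gb_s(16.0, 384))   # 768.0 GB/s - hypothetical 12GB GDDR6 config
print(peak_bw_gb_s(19.5, 320))   # 780.0 GB/s - the GDDR6X figure quoted above
```

So the two configurations do land within a couple percent of each other, which is the poster's point.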
You're fucking kidding me????? Can you take pics? NOW I AM REALLY pissed and worried. $1800 is a chunk of bread.
You're absolutely right. I've been buying reference cards for a few years now, with a few EVGA GPUs sprinkled here and there. It's actually sad when the reference design is so much better than whatever AIBs have to offer.

Umm.. I think you have this confused. The AIBs are building the reference design cards... Nvidia chose not to. So I'm not sure how the reference design is better than AIB offerings when they are one and the same.
They used GDDR6X because it can run at higher speeds than GDDR6. It uses completely different signalling, so they are not compatible. They would have had to design a different (or reuse their old) memory controller in order to utilize GDDR6. They aren't compatible, so once they decided on GDDR6X, that's all they could use.
I had a few multi-hour gaming sessions since getting the MSI 3080 Ventus 3X OC (5+1 cap), I think the same model l88bastard got. Haven't seen any CTDs yet, but the card mainly runs around 1950MHz at stock settings at around 73C (ambient room around 27C). Highest I've seen it hit is 2025MHz (MS Flight Simulator 2020, Project Cars 2, Forza 4 and Overwatch). So maybe 2GHz is the max the release chips can do without some firmware magic.

Yeah, and it probably has to do with some binning as well. Obviously not all dies are created equal, so some may be able to hit 2.1GHz, and others 2.0GHz (or w/e), so some may see the issue and others may not, even given the same design.
I meant Founders Edition... what a fancy name.

Ok, that makes much more sense now. Yeah, either way though, the reference design should not be crashing even if it doesn't clock as well as some higher end models. I expect MSRP cards to at least work properly. I expect higher end models to be better binned and to clock higher and/or use less power, whether it's FE or not. This cycle NVIDIA released the FE cards lower than they normally do, but so far the stock of them has been abysmal. We'll see if that improves or if it was a decision to generate hype @ MSRP prices and then not be able to actually obtain that performance for MSRP.
lowering boost clocks in vbios is most likely the fix....

if that's the case, everyone, EVERYONE needs to return their shit.
Yap, runs like a champ too. Pretty messed up how EVGA made the same models with slight differences.
It seems they don't have issues until about 2000MHz, which is why it seems to happen more on 3080s. At least that is what I've seen.
Usually hovers around 1920-1950MHz.
My problem with this is: why are AIBs charging us more for lower quality than the FE cards? Remember the days when OC cards were a few bucks more and real custom cards ran about $50 extra...
They wouldn't be lowering the advertised boost clock, but rather the boost table. It would effectively make all the GPUs perform as advertised, but more like a lower bin than a higher bin. It seems like crossing 2000MHz is when the issues start, and that is still ~300MHz faster than the advertised boost.
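To make that distinction concrete, here's a toy illustration (hypothetical numbers and structure, not the real VBIOS format) of why capping the boost table doesn't touch the advertised spec:

```python
# Toy illustration: capping the boost table leaves the advertised clock intact.
ADVERTISED_BOOST_MHZ = 1710   # reference 3080 advertised boost clock

# Hypothetical opportunistic boost bins the card may use beyond the spec.
boost_table_mhz = [1710, 1800, 1890, 1965, 2010, 2055]

# A "stability" firmware update could simply clamp the top bins below 2000MHz...
capped_table_mhz = [min(f, 1980) for f in boost_table_mhz]

# ...and the card still meets everything that was actually advertised.
assert max(capped_table_mhz) >= ADVERTISED_BOOST_MHZ
print(capped_table_mhz)   # [1710, 1800, 1890, 1965, 1980, 1980]
```

The card loses a few opportunistic bins, the spec sheet stays true, and only day 1 benchmarks are left overstating what shipping cards do.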
On what grounds? All of the cards easily boost past their guaranteed boost clock. Read the specs. EVGA's highest end card only advertises an 1800MHz boost clock.