RTX 4xxx / RX 7xxx speculation

is this the first time Nvidia is pushing an old gen card alongside their new ones?...they placed the 3090 (or 3090ti) alongside the 4000 series on some of their slides...so it's part of their marketing...I'm guessing the 3000 series will be the hot ticket this gen as far as price/performance...the 4090 is the best of the new cards but the $1600 price is crazy
 
is this the first time Nvidia is pushing an old gen card alongside their new ones?...they placed the 3090 (or 3090ti) alongside the 4000 series on some of their slides...so it's part of their marketing...I'm guessing the 3000 series will be the hot ticket this gen as far as price/performance...the 4090 is the best of the new cards but the $1600 price is crazy
Wouldn't be shocked if this gets more common, particularly with parts shortages the way they've been. Apple has been doing this awhile.
 
HDMI 2.1 does that.
I didn't realize that DSC could push it that far; I think nearly 80 Gbps is needed for this (https://k.kramerav.com/support/bwcalculator.asp puts it around 72 Gbps at 10 bits per pixel).

You seem to be right:
https://www.tomshardware.com/features/displayport-vs-hdmi-better-for-gaming

8K@120Hz and 4K@240Hz are available at 24 bpp with DSC; there are even monitors out there doing it:

https://www.samsung.com/ca/monitors/gaming/odyssey-neo-g8-g85nb-32-inch-ls32bg852nnxgo/
https://www.rtings.com/monitor/reviews/samsung/odyssey-neo-g8-s32bg85

The monitor only has:
https://news.samsung.com/ca/samsung...0hz-4k-gaming-monitor-odyssey-neo-g8-globally
DisplayPort 1.4 (1EA),
HDMI 2.1 (2EA)
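For anyone curious how that ~72 Gbps figure falls out, here's a rough back-of-the-envelope sketch (the ~20% blanking overhead, the ~42.6 Gbps effective HDMI 2.1 payload, and the ~3:1 DSC ratio are my own assumptions, not numbers from the calculator):

```python
# Back-of-the-envelope bandwidth for 4K @ 240 Hz at 10-bit colour (30 bpp),
# to show why DSC is needed on an HDMI 2.1 (48 Gbps) link. The blanking
# overhead, effective-payload figure, and DSC ratio are rough assumptions.

h, v, hz, bpp = 3840, 2160, 240, 30          # 10 bits per channel, RGB
blanking = 1.20                              # assume ~20% extra for blanking

uncompressed = h * v * hz * bpp * blanking   # bits per second
hdmi21_payload = 42.6e9                      # approx. usable part of 48 Gbps FRL
dsc = 3.0                                    # DSC typically run near 3:1 here

print(f"uncompressed : {uncompressed / 1e9:5.1f} Gbps")       # ~71.7 Gbps
print(f"HDMI 2.1 cap : {hdmi21_payload / 1e9:5.1f} Gbps")
print(f"with ~3:1 DSC: {uncompressed / dsc / 1e9:5.1f} Gbps") # ~23.9 Gbps
```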
 
Do you doubt those claims from AMD?
View attachment 512729

Given their recent track record, they get the full benefit of the doubt on not exaggerating, especially with the confidence of using ">50%" instead of "~50%".

Or do you think that a 50% boost (with some room to increase power if they need it) will not be enough?

If by "going to get slaughtered" you mean that they will again sell less than 30% of discrete video cards this fall and next year, maybe yes. But in terms of a competitive video card for the price? What exactly are you basing this on?
If they keep prices in check, a 50% uplift will be very attractive. As you mention, they haven't been prone to exaggerate lately in their marketing.

And better, if they can keep power reasonable, that's another pull. If you had asked me five years ago about power requirements, I wouldn't have cared; now, power efficiency is definitely a consideration for me.
 
Can't afford one, and even if I could I probably wouldn't buy it. Well, maybe if I made 3-400k a year I wouldn't care, but below that amount I don't think I'd spend more than 800 bucks on a GPU. Also, this forum is not representative of PC gaming at large. Prices jumping this much makes me feel like PC gaming is going to price almost everyone out of the market. What's a 4060 going to cost, 600? These prices make the future look pretty dire. I'd rather us have a large user base than be a niche; it'll mean less software coming to PC 10 years down the line when no one can afford it any longer.
A GPU is not a child's plaything. And people talk like they have an EVGA Kingpin all the time.
The future is nice because of the supremacy of TSMC N4P. And EVGA as a board manufacturer should've blamed TSMC as well.
 
Finally some non-DLSS benchmarks seem to pop up:
https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

A 4090 seems to be 65% more efficient than an MSI RTX 3090 Ti SUPRIM X and 67% more performant, while running much, much cooler, by 20 degrees C.

If that's common, the $1,599 price could be an actually reasonable one relative to the 3090, which launched at the equivalent of $1,715 USD in 2022 dollars; it remains to be seen whether cards near or at that MSRP will exist and be purchasable.

I imagine a raytracing-less scenario would not have that "one of the best increases ever" type of 70% performance gain, but not many RTX-less titles will be a big issue to run if you have even just a 50% boost over an overclocked 3090 Ti, one could imagine.

It could be a good gen-on-gen upgrade at the very top, and it makes you wonder why the rest of the launch is not at least more in that vein.
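For what it's worth, that "$1,715 in 2022 dollars" claim lands roughly where a simple CPI adjustment of the 3090's $1,499 launch MSRP puts it; a quick sketch (the CPI index values below are approximate):

```python
# Rough inflation adjustment of the RTX 3090's $1,499 launch MSRP (Sep 2020)
# into 2022 dollars. The CPI-U index values are approximate.

msrp_2020 = 1499
cpi_sep_2020 = 260.3   # approx. US CPI-U, September 2020
cpi_sep_2022 = 296.8   # approx. US CPI-U, September 2022

adjusted = msrp_2020 * cpi_sep_2022 / cpi_sep_2020
print(f"3090 launch MSRP in 2022 dollars: ~${adjusted:,.0f}")   # ~$1,709
print("4090 launch MSRP:                  $1,599")
```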
 
while running much, much cooler, by 20 degrees C.
The 10-pound 600W cooler is going to pay off at stock settings. It's going to be interesting seeing it turned up to 600W.

"With DLSS 3 disabled, the NVIDIA GeForce RTX 4090 graphics card consumed 461 Watts on average with a performance per watt (Frames/Joule) of 0.135 points. As soon as DLSS 3 was enabled, the GPU saw the wattage drop to 348W or a 25% reduction. This also increased the perf per watt to 0.513, an increase of 3.8x."
 
Finally some non-DLSS benchmarks seem to pop up:
https://wccftech.com/nvidia-geforce...k-2077-dlss-3-cuts-gpu-wattage-by-25-percent/

A 4090 seems to be 65% more efficient than an MSI RTX 3090 Ti SUPRIM X and 67% more performant, while running much, much cooler, by 20 degrees C.

If that's common, the $1,599 price could be an actually reasonable one relative to the 3090, which launched at the equivalent of $1,715 USD in 2022 dollars; it remains to be seen whether cards near or at that MSRP will exist and be purchasable.

I imagine a raytracing-less scenario would not have that "one of the best increases ever" type of performance gain, but not many RTX-less titles would be a big issue if you have just a 50% boost over an overclocked 3090 Ti.

It could be a good gen-on-gen upgrade at the very top, and it makes you wonder why the rest of the launch is not at least more in that vein.
That is very promising. A 70% increase without DLSS; that is more realistic. DLSS 3 does look like a game changer for performance and power efficiency if this is true. I am impressed with the temperature as well as the power draw. Only a few watts more than a 3090 Ti when we thought it was going to be 100-150 watts more.
 
What's everyone's thoughts on being able to buy a 4080/4090 the second they're available?

Do we need bots this time around? Or do you think the majority of us who don't do this bot crap have a fighting chance to click add to cart and then buy once they're available?
 
What's everyone's thoughts on being able to buy a 4080/4090 the second they're available?

Same as it always goes: wait for the competition to launch their cards. Even if it's just to drive down pricing and increase availability. Early adopting is just beta-testing with extra steps.
 
What's everyone's thoughts on being able to buy a 4080/4090 the second they're available?

Do we need bots this time around? Or do you think the majority of us who don't do this bot crap have a fighting chance to click add to cart and then buy once they're available?
Only the RTX 4090 will be available before AMD's launch on Nov 3rd. But there will not be as many miners wanting to buy everything up to farm, there is a surplus of older cards, and there will probably be a lot of used cards for cheap. A lot of people needed GPUs the last few years due to Covid, and that's not the case anymore. The market has changed.

If Nvidia reduces or delays chips from TSMC in order to get rid of their 3XXX GPUs, there might be an artificial shortage for a while. If you need it day 1, then none of the above matters; just use a bot, just in case.

Otherwise, if you can wait, chances are that prices will be lower and you will avoid the early adopter tax.

If AMD comes in strong with good prices, we might see another overnight price drop from Nvidia, as they did with the GTX 280 if I remember correctly.
 
There is more 4090 stock moving into North America than I have ever seen for a top-end video card before. Nvidia bought an awful lot of those wafers before the market went to shit.
And they pushed the release back a good amount, building stock for longer; combined with a price high enough to keep scalping away, mining down, and nothing similar to the previous buzz of 2020... many variables all going in the same direction in terms of possible availability this time.
 
And they pushed the release back a good amount, building stock for longer; combined with a price high enough to keep scalping away, mining down, and nothing similar to the previous buzz of 2020... many variables all going in the same direction in terms of possible availability this time.
Yeah, Nvidia might not even be able to keep the initial launch price, depending on AMD's next move. It's not like previous years, where we just had to eat the cost since there were no other choices.
 
Yeah, Nvidia might not even be able to keep the initial launch price, depending on AMD's next move. It's not like previous years, where we just had to eat the cost since there were no other choices.
I do not expect AMD to try and fight a perf war with the 4090. I think AMD will focus on efficiency and cost in the current economy, going for the meaty part of the TAM. The super high end does not generate a high percentage of revenue. I do not see Nvidia selling out of the 4090 cards in retail given current inventories. I think Nvidia is in serious danger of collapsing its product stack. These next two quarters might be a watershed moment for Nvidia. There is going to be such a huge amount of inventory out in the channel, and I just don't believe it is going to move as quickly as needed. Nvidia, unlike AMD, does not have the ability to move its wafer commitments with TSMC to other products easily.
 
I do not expect AMD to try and fight a perf war with the 4090. I think AMD will focus on efficiency and cost in the current economy, going for the meaty part of the TAM. The super high end does not generate a high percentage of revenue. I do not see Nvidia selling out of the 4090 cards in retail given current inventories. I think Nvidia is in serious danger of collapsing its product stack. These next two quarters might be a watershed moment for Nvidia. There is going to be such a huge amount of inventory out in the channel, and I just don't believe it is going to move as quickly as needed. Nvidia, unlike AMD, does not have the ability to move its wafer commitments with TSMC to other products easily.
Just curious, have you heard anything about what power target AMD is shooting for with the 7900xt?
 
We will have to wait for the reviews, and depending on the game mode you play, I feel like a 3090 Ti could be quite the horizontal move when we look at this:
735763_21100644451l.jpg

* With DLSS frame generation available when applicable; whether DLSS performance mode was used on the 3090 Ti when available is not super clear, but NVIDIA's own slides show scenarios with a modest gap between a 3090 Ti and a 4080 16GB. The 3090 Ti was a $2,000 card just 3 months or so ago, you can see massive gains in some contexts (yet to see what the experienced value is), and you have path tracing.

If you can sell that 3090 Ti to someone who prefers the 24 GB of RAM, the price of your 4080 16GB could, as a gamer, be well worth it.

This generation could need a killer app (a game for which path tracing does change the experience), or for that frame insertion to be quite nice, to be worth it for 3090/3090 Ti owners at anything below that 4090 model.
Sorry, where did that picture and graph come from? There is no such thing on the Nvidia page.
Therefore, some future games could benefit greatly from the RTX 4000 series.

On the Nvidia site there are RTX ON and RTX OFF pictures from some games, and what I see is a huge difference when RTX is on; it looks much, much better with RTX on than off. It's not a small difference.
https://www.nvidia.com/en-eu/geforce/graphics-cards/40-series/
 
Been waiting on verification of the 4N process being 5nm, as NVIDIA spec sheets were incorrect on "4N" being 4nm. I have been told this for a while, but had not seen anything solid on it.

Furthermore, HKEPC reports that NVIDIA also clarified what TSMC 4N really means, which is not to be confused with N4. This process is a die shrink of TSMC 5N process, but it is still a 5nm architecture. https://twitter.com/hkepcmedia/status/1573176032475348992

Edit: a few more backing sources.
https://en.overclocking.com/nvidia-ada-lovelace-rtx-40xx-on-4n-tsmc/

From TSMC's 5nm page: In addition, TSMC plans to launch 4nm (N4) technology, an enhanced version of N5 technology. N4 provides further enhancement in performance, power and density for the next wave of N5 products. The development of N4 technology is on schedule with good progress, and volume production is expected to start in 2022.
https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_5nm
 
Looks like 4090 options are limited if you don’t have 350mm+ clearance. I will likely buy an NVIDIA card since it’s closer to 300mm.
 
Sorry, where did that picture and graph come from? There is no such thing on the Nvidia page.
https://www.nvidia.com/en-us/geforce/news/rtx-40-series-community-qa/

On the Nvidia site there are RTX ON and RTX OFF pictures from some games, and what I see is a huge difference when RTX is on; it looks much, much better with RTX on than off. It's not a small difference.
Do you have a source? The one being Portal has more of a difference than just RTX on or off, I think; it seems to have gone through their asset augmentation system as well.

There are some games with a significant difference with RTX on, but sometimes it is quite subjective whether it actually looks better (versus more realistic, which are 2 different things).
 

Dumb video is dumb. There are adapters with the sense pins. The sense pins don't fucking talk, lol. They're literally just signaling at all times how much power the PSU will supply (it's not dynamically changing with load or something). They can be wired to ground on the PCIe 8-pins on the other end, no problem.

Literally everything he says could be done right now by people being stupid with the way they wire their video cards with splitters, etc. Right now you could take molex->PCIe 6-pin->dual 6-pin->dual 6-pin->6-pin-to-8-pin adapters on all the 6-pins and try to power a 3090 off a single molex plug. That's a problem with being stupid, not with PCIe 8-pins, lol.
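A minimal sketch of what those sense pins actually do, since this keeps coming up: they're static straps read once at power-up, not a data bus. The pin-state-to-wattage table below is illustrative only; the authoritative assignments are in the PCIe CEM 5.0 / ATX 3.0 spec:

```python
# Minimal sketch of the 12VHPWR sideband "sense" pins: they are read once
# (static straps, not a communication bus) and simply advertise how much
# power the PSU/adapter is willing to supply. The pin-state-to-wattage
# mapping below is illustrative; consult the PCIe CEM 5.0 / ATX 3.0 spec
# for the actual assignments of the mixed states.

GROUND, OPEN = 0, 1   # a grounded sense pin reads low, a floating one reads high

ILLUSTRATIVE_LIMITS = {
    (GROUND, GROUND): 600,   # both pins tied to ground -> full 600 W
    (GROUND, OPEN):   450,   # mixed states map to the intermediate tiers...
    (OPEN,   GROUND): 300,   # ...(this ordering is an assumption, check the spec)
    (OPEN,   OPEN):   150,   # nothing connected -> card limits itself to 150 W
}

def advertised_power_limit(sense0: int, sense1: int) -> int:
    """Return the cable/PSU-advertised power limit in watts."""
    return ILLUSTRATIVE_LIMITS[(sense0, sense1)]

# An adapter that hard-wires both sense pins to ground on the 8-pin side is
# simply claiming "600 W available" at all times:
print(advertised_power_limit(GROUND, GROUND))  # 600
```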
 
exactly what I was asking earlier...



the issues of running an older power supply on a new 40 series graphics card with adaptors...there's a risk of fire
I think the issue is more in the connector itself. In the video from GN where they display the report from PCI-SIG, the measurements are taken on the PSU side with a native 12+4 pin connector. On an older PSU, load will be balanced through 3X or 4X 8-pin connectors from the PSU side, so the adapter might be the problem then. JayZ's adapter was only rated for 30 cycles. The 4 sense pins only tell the GPU how much juice the PSU is capable of delivering.
Failure was also tested with very high wattage (a single pin, pin 2, had a load of 436.8W, reaching a cable hotspot of 180C). This might explain why a claimed 600W 4090 was in the works, as Kyle Bennett said here, but it released at 450W.

Note that the slides from PCI-SIG are from 8/25/2022:

Root_causes.png
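To put that 436.8W single-pin number in perspective, here's the quick arithmetic (assuming the nominal 12V rail and the connector's six current-carrying pins): normal operation is well under 10A per pin, while the fault test pushed over 36A through one pin:

```python
# Quick per-pin current check for the numbers in the PCI-SIG slides.
# A 12VHPWR connector carries +12 V on six pins (with six returns), so:

volts = 12.0
power_pins = 6

normal_per_pin = 600 / volts / power_pins   # full 600 W spread evenly
fault_per_pin = 436.8 / volts               # the slide's single-pin load

print(f"600 W shared across 6 pins: {normal_per_pin:4.1f} A per pin")    # ~8.3 A
print(f"436.8 W on one pin:         {fault_per_pin:4.1f} A on that pin") # ~36.4 A
```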

 
https://www.nvidia.com/en-us/geforce/news/rtx-40-series-community-qa/


Do you have a source? The one being Portal has more of a difference than just RTX on or off, I think; it seems to have gone through their asset augmentation system as well.

There are some games with a significant difference with RTX on, but sometimes it is quite subjective whether it actually looks better (versus more realistic, which are 2 different things).
So this slider, what they put on their page: with RTX on it is much better and, as you say, more realistic; water really looks like water, and everything is much better, more detailed, and brighter with RTX on.
Ray tracing is the right thing and raises the graphics to a whole new dimension. The other thing is that ray tracing does not work on its own, because I guess it is very demanding and you need to have DLSS on at the same time.
If it's going to be like that in every game, then it's a great thing for RTX 4090 owners.
 
the issues of running an older power supply on a new 40 series graphics card with adaptors...there's a risk of fire
It might come down to the PSU manufacturer; Corsair says their PSUs will be fine. My assumption is their PSUs won't run out of spec. This will limit some people on what they can buy; I myself will be limited to the 4080 16GB unless I want to buy a 1000W/1200W PSU, which I likely wouldn't because I'd have to scrap my SFF build.

https://www.corsair.com/newsroom/pr...atibility-for-nvidia-40-series-graphics-cards

"The official CORSAIR 12VHPWR 600W PSU cable is available to order immediately from the CORSAIR webstore. Orders will ship beginning September 29th.
To check if your PSU is compatible with the CORSAIR 12VHPWR 600W PSU cable, please check https://www.corsair.com/12vhpwr
*600W load requires a 1200W rated CORSAIR PSU or higher. 450W load requires 1000W or higher. 300W load requires 750W or higher."


https://www.corsair.com/us/en/psu-cable-compatibility
https://www.corsair.com/us/en/Categ...0-12VHPWR-Type-4-PSU-Power-Cable/p/CP-8920284
 
It might come down to the PSU manufacturer; Corsair says their PSUs will be fine. My assumption is their PSUs won't run out of spec. This will limit some people on what they can buy; I myself will be limited to the 4080 16GB unless I want to buy a 1000W/1200W PSU, which I likely wouldn't because I'd have to scrap my SFF build.

https://www.corsair.com/newsroom/pr...atibility-for-nvidia-40-series-graphics-cards

"The official CORSAIR 12VHPWR 600W PSU cable is available to order immediately from the CORSAIR webstore. Orders will ship beginning September 29th.
To check if your PSU is compatible with the CORSAIR 12VHPWR 600W PSU cable, please check https://www.corsair.com/12vhpwr
*600W load requires a 1200W rated CORSAIR PSU or higher. 450W load requires 1000W or higher. 300W load requires 750W or higher."


https://www.corsair.com/us/en/psu-cable-compatibility
https://www.corsair.com/us/en/Categories/Products/Accessories-|-Parts/PC-Components/Power-Supplies/600W-PCIe-5-0-12VHPWR-Type-4-PSU-Power-Cable/p/CP-8920284
Nonsense. First of all, all RTX 4000 cards should come with these adapters in the package; all manufacturers should provide that.
And secondly, all power supplies, even those over 10 years old, should be sufficient for RTX 4000 as long as they have enough watts and enough of those PCIe 8-pin connectors.
Is there a risk? There is, but if the power supplies are of good quality, there shouldn't be any problems.
And hardly anyone running an RTX 3000 and some AMD 5000 or Intel processor has a power supply more than 5 years old anyway.
YouTubers talk nonsense.
You only need a good case to cool everything well, and everyone has those today.
And whoever buys an RTX 4000 or RTX 3000 shouldn't have a problem buying a completely new power supply anyway. And that's probably how it will be. Whoever doesn't want to mess with adapters and cables will buy a new PCIe 5.0 PSU and that's it.
 
I think the issue is more in the connector itself. In the video from GN where they display the report from PCI-SIG, the measurements are taken on the PSU side with a native 12+4 pin connector. On an older PSU, load will be balanced through 3X or 4X 8-pin connectors from the PSU side, so the adapter might be the problem then. JayZ's adapter was only rated for 30 cycles. The 4 sense pins only tell the GPU how much juice the PSU is capable of delivering.
Failure was also tested with very high wattage (a single pin, pin 2, had a load of 436.8W, reaching a cable hotspot of 180C). This might explain why a claimed 600W 4090 was in the works, as Kyle Bennett said here, but it released at 450W.

Note that the slides from PCI-SIG are from 8/25/2022:

View attachment 513210


It blows my mind that someone was OK with only 30 cycles. That should be a huge red flag to everyone; even non-technical people should have said, "wait, what?"

Hopefully this blowback gets it changed by the time I need a GPU lol.
 
It blows my mind that someone was OK with only 30 cycles. That should be a huge red flag to everyone; even non-technical people should have said, "wait, what?"

Hopefully this blowback gets it changed by the time I need a GPU lol.
Who other than a reviewer unplugs/replugs the power on their GPU more than 30 times in its life?

If the problem is insertions of the 12VHPWR connector...why is anyone thinking new PSUs are the answer? You know what's on the end of the cable on those new PSUs? ...The same 12VHPWR connector...

How many insertions are current 8pin power connectors rated for?
 
Who other than a reviewer unplugs/replugs the power on their GPU more than 30 times in its life?

If the problem is insertions of the 12VHPWR connector...why is anyone thinking new PSUs are the answer? You know what's on the end of the cable on those new PSUs? ...The same 12VHPWR connector...

How many insertions are current 8pin power connectors rated for?

It's more that a normal production process will cause that to vary. Say the manufacturer screws up and it's 30 with a sigma of 10; a lot of people will have fires.

Or just natural variances on the GPU and PSU sides could essentially take that 30 much lower. Or how is it affected by time?

Just the general robustness of it.
 
It's more that a normal production process will cause that to vary. Say the manufacturer screws up and it's 30 with a sigma of 10; a lot of people will have fires.

Or just natural variances on the GPU and PSU sides could essentially take that 30 much lower. Or how is it affected by time?

Just the general robustness of it.
The published reliability data isn't gonna be 30 if the median reliability is 30. They're going to account for production variances, go with the low end of that, and then subtract some more.

But faulty parts happen, that isn't anything new, and there's no reason an 8pin connector couldn't be faulty and catch on fire too. Even in the alarmist wccftech article about the adapters they measured the temperature on the connectors under load and they're all completely fine.
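To make the variance argument concrete, here's a toy sketch; the distribution and every number in it are made up purely for illustration, not measured data:

```python
# Toy sketch of the variance argument: model cycles-to-failure as a normal
# distribution. Every number here is made up purely for illustration.
from statistics import NormalDist

mean_cycles, sigma = 30, 10        # the "manufacturer screwed up" scenario above
dist = NormalDist(mean_cycles, sigma)

for n in (5, 10, 20):
    print(f"fraction failing within {n:2d} cycles: {dist.cdf(n):.1%}")
# ~0.6% within 5, ~2.3% within 10, ~15.9% within 20 -- a lot of melted plugs.

# Conversely, a published rating is normally a derated floor (roughly mean
# minus a few sigma), so a spec that *says* 30 cycles implies the typical
# part survives considerably more than 30.
published_floor = mean_cycles - 3 * sigma
print(f"mean - 3*sigma 'floor' for this toy distribution: {published_floor}")
```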
 
It blows my mind someone was ok with only 30 cycles. That should be a huge red flag to everyone, even non-technical people should have said, “wait, what?”

Hopefully this blowback gets it changed by the time I need a GPU lol.
People should at least be wary about this. The tested data was submitted by Nvidia, and it's in their best interest to push for this to be solved. I have no doubt Nvidia has been testing it against current and new cards and finding it within acceptable limits, so I'm not trying to be alarmist about this.

But it seems flimsy, with a 30-cycle limit and also being sensitive to bending. I like to cable manage, so things can get a little bent or be under some tension sometimes. Not to mention that Nvidia would never have raised the issue to begin with if this was not of concern. I have a strong feeling this might be one of the reasons we haven't heard more about 600W cards or BIOSes, even though the coolers presented seem to have been made with higher power consumption in mind.

Also, considering it looks like it's the connector itself that is flimsy, and people use their PSUs over many generations, this should be resolved before they roll out native ATX 3.0/PCIe 5 PSUs. You get a new adapter with the GPUs, but you don't necessarily get a new PSU cable when you change PSUs.

Currently, I haven't heard of it being a widespread problem with today's GPUs, but it seems wattage is increasing with each generation...

Some of the slides from Nvidia's testing. Remember, this document is from only 1 month ago and was not sent as a press release (it was just sent to too many people, so Gamers Nexus got hold of it):

background.png

test conditions.png

test conditions 2.png

observations.png


So, from the observations: if it's not bent and hasn't seen a lot of cycles, it didn't fail. Still, flimsy enough to be a concern, especially when doing cable management.
 

And better, if they can keep power reasonable, that's another pull. If you had asked me five years ago about power requirements, I wouldn't have cared; now, power efficiency is definitely a consideration for me.

It definitely is for me. I'm still rocking a 1080ti, the 20 series didn't have enough performance uptick to justify an upgrade for me, considering demand/prices, with the tiny exception of pre-30 series announcement sell-off which I missed. 30 series prices were outrageous so I held off. I'm very interested to try the RT features, which I have only seen in videos thus far, however, I've moved full time to my off-grid cabin, where I produce or harness all my own electricity, and subsequently have become extremely cognizant of power consumption for everything from the smallest light to the largest machine tool I run. I'm currently seriously considering jumping on a 3090 as an upgrade, 40 series would have to have massive supply and sub-MSRP prices for me to even consider them atm, however, I'm on the fence even with the 30 series pricing (second hand) being quite decent at the moment, based on AMD's efficiency statements. If some offering of the upcoming generation of their GPUs offered a 30%+ performance per watt advantage over a 3090, at roughly the same price as a used 3090 now, with similar performance, even slightly less, and closed the gap considerably in terms of RT, then to me, it would be an unequivocal no-brainer. Right now the only reason I'm not hunting a 6900xt is my above stated, somewhat unquantified interest, in experiencing RT features.

Energy costs have skyrocketed even in the US which has typically been extremely cheap compared to the rest of the world, but even the average consumer here is becoming more conscious of their electrical bill. All signs for the future point to this trend increasing, potentially, in more aggressive spurts, if recent history is any indicator. So personally, I think AMD is *very smart* to focus on increased power efficiency. They have already shown that apart from somewhat niche (though admittedly popular/useful) features like RT and DLSS that they can easily compete in raw performance, and currently trounce in performance per watt. If they can catch up with RT while still pushing efficiency, it'll be a no-brainer.

I think people really underestimate this, and give AMD kudos for seeing the big picture/long view. I'm curious to see if nvidia continues to just throw electricity and transistors at the problem while banking on process refinement to maintain the crown going forward, and where the line with consumers will be drawn in terms of energy consumption. Right now most consumers don't seem to really care, and it's somewhat understandable due to the deferred nature of à la carte electrical consumption the energy industry enjoys, but will people still be willing to pay more upfront for a card with 10-20% total FPS potential at an ongoing cost of operation in terms of energy consumption that's double, hypothetically? I doubt it. Those are just off-the-cuff numbers without any real basis, of course; I'd be curious to hear others' thoughts on the matter.
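A quick back-of-the-envelope on the running-cost angle; the wattages, hours, and the $0.15/kWh rate are just assumptions for illustration:

```python
# Back-of-the-envelope running-cost comparison between two hypothetical cards.
# Hours, price, and wattages below are assumptions purely for illustration.

hours_per_day = 4
days_per_year = 365
price_per_kwh = 0.15          # USD; many European rates are 2-3x this

def yearly_cost(watts: float) -> float:
    kwh = watts / 1000 * hours_per_day * days_per_year
    return kwh * price_per_kwh

for watts in (250, 450):
    print(f"{watts} W card: ~${yearly_cost(watts):6.2f} per year")

# 250 W -> ~$54.75/yr, 450 W -> ~$98.55/yr at these assumptions; the gap
# scales linearly with hours played and with the local electricity rate.
```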




One aside to all this that occurs to me, however, is heat loss, though I don't think anybody is actually testing this in any meaningful way to draw conclusions from: you can have increased performance per watt but still have higher losses in terms of energy lost to heat, though it would make the performance-per-watt increase all the more impressive; in the summer, it would be offset by the additional cost of cooling a space (not cooling the card itself). I would love to see some metrics on this. I notice the top-end AMD cards atm run hotter than a lot of their Nvidia equivalents; however, that may have little to do with heat loss and everything to do with cooling. If the actual BTUs of heat loss are similar, it becomes irrelevant how cool the card runs with the cooling solution if it doesn't affect longevity. The natural assumption of higher performance per watt is that there would be less actual electrical loss to heat, but I don't see anyone quantifying this conclusively.
 
Who other than a reviewer unplugs/replugs the power on their GPU more than 30 times in its life?

If the problem is insertions of the 12VHPWR connector...why is anyone thinking new PSUs are the answer? You know what's on the end of the cable on those new PSUs? ...The same 12VHPWR connector...

How many insertions are current 8pin power connectors rated for?


Me for one. GPU power cables are notoriously in the way of everything no matter what you do, due to their location in a system. Any change in hardware almost guarantees unplugging them just for sanity but also to not risk physically jostling the heavy ass modern GPUs the PCIE slots can barely support. If you ever move your machine for any reasonable distance, I highly recommend removing the GPU entirely, especially for any kind of car ride, as the sheer size and weight of them can cause all sorts of damage if you hit any bumps let alone have to deal with the sort of roads I have. I've killed a couple of GPUs going back and forth to my cabin which has a mile long 4wd only rough mountain two-track road.

Some of you guys may just build a system, plug everything in one time and be done forever, but I imagine for most of us here, that's not the case. Any single significant hardware event causing a machine not to post is likely to merit 30 cycles of plugging, in my experience. Especially if you suspect a short to ground somewhere that ends up removing every peripheral and re-seating the MB while ruling out each component one by one, which is pretty standard practice in diagnostic repair, no? FWIW most electronics connectors in my experience over many years are rated in thousands of plugging cycles, though I'm not speaking to the PC market, which I admit I have no data on, but yeah, 30 is a pretty damn big red flag to me, especially when the potential for a dead short could fry the single most expensive component in most systems, and dead shorts are common with electrical connector failures. Lacking any particular data however, I did work for a very long time in this industry, from doing PC repair as a teenager for local shops, where we would routinely unplug connectors dozens of times in diagnostic assessment, to building some of the largest clusters of servers in the world at the time for various ecoms, b2bs, corps, and governments (admittedly almost 20 years ago), and I can promise you if any connector or component had such a low cycle life, I'd have killed it dozens of times over, so empirically I can't fathom this sort of cycle life being anywhere near normal, or commonly accepted.

Just my 2c, take it for what it's worth. 🤷‍♂️
 
Me for one. GPU power cables are notoriously in the way of everything no matter what you do, due to their location in a system. Any change in hardware almost guarantees unplugging them just for sanity but also to not risk physically jostling the heavy ass modern GPUs the PCIE slots can barely support. If you ever move your machine for any reasonable distance, I highly recommend removing the GPU entirely, especially for any kind of car ride, as the sheer size and weight of them can cause all sorts of damage if you hit any bumps let alone have to deal with the sort of roads I have. I've killed a couple of GPUs going back and forth to my cabin which has a mile long 4wd only rough mountain two-track road.

Some of you guys may just build a system, plug everything in one time and be done forever, but I imagine for most of us here, that's not the case. Any single significant hardware event causing a machine not to post is likely to merit 30 cycles of plugging, in my experience. Especially if you suspect a short to ground somewhere that ends up removing every peripheral and re-seating the MB while ruling out each component one by one, which is pretty standard practice in diagnostic repair, no? FWIW most electronics connectors in my experience over many years are rated in thousands of plugging cycles, though I'm not speaking to the PC market, which I admit I have no data on, but yeah, 30 is a pretty damn big red flag to me, especially when the potential for a dead short could fry the single most expensive component in most systems, and dead shorts are common with electrical connector failures. Lacking any particular data however, I did work for a very long time in this industry, from doing PC repair as a teenager for local shops, where we would routinely unplug connectors dozens of times in diagnostic assessment, to building some of the largest clusters of servers in the world at the time for various ecoms, b2bs, corps, and governments (admittedly almost 20 years ago), and I can promise you if any connector or component had such a low cycle life, I'd have killed it dozens of times over, so empirically I can't fathom this sort of cycle life being anywhere near normal, or commonly accepted.

Just my 2c, take it for what it's worth. 🤷‍♂️

From a risk perspective people look at two factors, frequency and impact.

I'd assign frequency at n=30 as "high". The impact is also "high" since the risk is it bursts into flames. It boggles my mind that people try to justify this as an acceptable business decision.

Personally, I’d plug it in, put my PC someplace non-flammable, check the temp under load and call it a day. From a spec / nVidia standpoint I’d be horrified.
 
From a risk perspective people look at two factors, frequency and impact.

I'd assign frequency at n=30 as "high". The impact is also "high" since the risk is it bursts into flames. It boggles my mind that people try to justify this as an acceptable business decision.

Personally, I’d plug it in, put my PC someplace non-flammable, check the temp under load and call it a day. From a spec / nVidia standpoint I’d be horrified.
I mean, it's Intel that made the spec for the connector, not Nvidia. If there's a problem with insertions of a 12VHPWR connector, it isn't an adapter vs. ATX 3.0 PSU thing; they are the same connector. But I want to see how many insertions PCIe 8-pins are rated for (I did some googling and didn't immediately find anything). Nothing is new about the way the pins go in with the 12VHPWR connector compared to PCIe 6/8-pin or EPS connectors that would make this suddenly be a brand new problem.

Also, incidentally, Nvidia didn't report any problems with the connection to cards. You can see what they say linked above on this page; they talk about heat issues where the cables connect to PSUs (not that it makes any difference that I can imagine, unless some PSU makers have lower-quality connectors on their end than the cards have).
 