$2,500 RTX 5090 (60% faster than 4090)

0.05 V (1050 mV to 1100 mV, about a 4.76% increase). Why did they start limiting the max voltage to 1070 mV then?
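(For anyone following along, here's the arithmetic behind those figures, just my own back-of-the-envelope math in Python, nothing official:)

```python
# Back-of-the-envelope math behind the figures above (my own arithmetic, nothing official).
old_vcore_mv = 1050
new_vcore_mv = 1100
delta_v = (new_vcore_mv - old_vcore_mv) / 1000            # 0.05 V
pct = (new_vcore_mv - old_vcore_mv) / old_vcore_mv * 100  # ~4.76 %
print(f"{delta_v:.2f} V step, about {pct:.2f}% above {old_vcore_mv} mV")
```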

If the silicon quality and architecture are efficient enough to hit the desired boost clocks at the lower voltage, why not lower it? I think this has far more to do with process maturity than with catastrophic voltage. If 1.1 V kills it, 1.07 V is going to kill it a short while later.
 
Yeah, I agree with that; guess my GPU failed for another reason. I do think they may have lowered it to reduce the risk of degradation as well, but IDK.
 
It makes a difference in frametime consistency (when GPU limited). In general I agree, it's only like a 1-2% performance difference. The only game I've played that actually pulled nearly 600 W was Metro Exodus Enhanced Edition, which was about 570 W. I have seen games like Cyberpunk 2077 pull around 500 W at times. When games actually use that much power, it's only for short bursts when it's needed.

Not sure if this is what killed my 4090; I think the 1.1 V might have killed it, honestly. The reason I think this is that they started releasing 4090s with a 1.07 V limit shortly after launch, IIRC.

From my testing I've seen no evidence of better frametime consistency with increased power limits either and I meticulously monitor it using tools like RTSS. You can see down below that I am GPU limited at 100% usage yet my frametime plot is flatter than a deceased person's heartbeat. The only time I get bad frame times in games is either due to shader compilation/traversal stuttering, or it's due to a CPU bottleneck, but never due to lack of power to the GPU.

[attached screenshot: RTSS overlay showing 100% GPU usage with a flat frametime plot]
 
It's likely not a load greater than 450 W, so you won't see a difference if it stays under 450 W.

The boosting algorithm works on an "is power available?" and "is it within temperature range?" basis when applying voltage to the core. If the power is locked down to 450 W, the GPU will downclock when the power limit is hit. Raising the power limit allows the GPU to maintain a higher clock for the duration of loads exceeding the default 450 W. The same happens when the temperature limit is hit; the default temperature limit for lowering the clock speed is 83 °C. The 4090s typically have very good coolers, so this doesn't play much of a role in the boosting algorithm, but it may start downclocking in the upper 70 °C range. You can check the precise point it will downclock in the RTSS temperature tab in the curve editor; I don't have a 4090 on me right now to check myself lol.
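To make that concrete, here's a toy Python sketch of that kind of limit-based clock behavior. It's not NVIDIA's actual GPU Boost code; the 15 MHz bin size, the linear power-vs-clock assumption, and the function name are my own simplifications, just to show why a higher power limit lets the clock stay up during >450 W moments.

```python
# Toy model of GPU Boost-style clock limiting (not NVIDIA's real algorithm).
# Assumptions: clocks move in 15 MHz bins and power scales roughly linearly
# with clock, which is a simplification but enough to show the effect.

def effective_clock_mhz(requested_mhz: float,
                        estimated_power_w: float,
                        power_limit_w: float,
                        temp_c: float,
                        temp_limit_c: float = 83.0,
                        bin_mhz: float = 15.0) -> float:
    clock = requested_mhz
    # Power limit: scale the clock down until the power estimate fits the limit.
    if estimated_power_w > power_limit_w:
        clock = requested_mhz * (power_limit_w / estimated_power_w)
    # Temperature limit: shave boost bins once the limit is reached.
    if temp_c >= temp_limit_c:
        clock -= bin_mhz * (temp_c - temp_limit_c + 1)
    # Snap down to the nearest boost bin.
    return max(0.0, bin_mhz * (clock // bin_mhz))

# A ~570 W moment (like Metro Exodus EE) against a 450 W limit forces a clock drop;
# with the limit raised to 600 W, the requested clock is left alone.
print(effective_clock_mhz(2700, estimated_power_w=570, power_limit_w=450, temp_c=65))  # ~2130
print(effective_clock_mhz(2700, estimated_power_w=570, power_limit_w=600, temp_c=65))  # 2700
```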

If you want to maximize consistency, you can do so by increasing the power limit. If you only play games that don't exceed the 450 W limit, just leave it there, since it wouldn't really matter. Having it at 600 W will perform the same in those loads without drawing any additional power; it just provides more consistency in your voltage/clock speeds in the moments the card needs to exceed 450 W. But if you're playing games that do exceed the 450 W limit, it's recommended to have as much power available as possible to maintain consistency.

Frame times are mostly affected by six things: GPU performance, CPU performance, RAM, the GPU driver, event timers (HPET, RTC, PIT), and the game's optimization state. Other things can also affect frametime consistency, such as your network (if it's an online game), HDD/SSD (if it's too slow it can cause stutters when traversing game worlds), and cooling.
 

I mean, I've played a ton of games on my 4090, including fully path-traced ones like Alan Wake 2 and CP2077, so surely something had to be pulling more than 450 watts at some point. I just never noticed any substantial difference between the 80% power limit and greater, no matter how many times I switch back and forth and test. Sure, there is probably a measurable difference if one were to use performance monitoring tools to record and track performance between the 80% and 133% power limits, but in terms of actually perceiving a noticeable difference while gaming in person, I can't say I noticed anything.
 
Going from 80% to 133% in AW2/CP2077 would net around a 7-11% performance gain. Yeah, it's probably not enough to change the experience drastically unless you're supersampling the image.

I tend to use DLAA and DLDSR frequently, so I notice this much more than people who solely use DLSS. When I use DLSS, the lowest I go is 80% scaling with profile E, including in games like AW2/CP2077; I don't like aliasing at all. With the earlier DLSS profiles I would always force DLAA because I didn't like the upscaling, but it's been improving, so I'm using it a bit more often.

Here's an example with Metro Exodus from my spreadsheets, unfortunately I can't find my spreadsheets for CP2077:
[attached chart: Metro Exodus power-limit comparison from my spreadsheets]
 

The chart doesn't show 1% lows though, which are more representative of frametime consistency. The avg FPS is 10% higher with the much greater power limit, but is it actually more consistent as well?
 
It demonstrates the performance difference between 80% and 133%. But yes, it is more consistent as well, due to the nature of GPU Boost: if the GPU needs power above the 80% PT (360 W), it will pull clocks down and naturally make the frametimes less consistent. The same goes for the power needed exceeding 600 W; it would reduce the framerate. So the 1% low would be approximately 7-11% higher in a GPU-limited scenario at 600 W, compared to a 360 W (80% PT) 4090.

In CPU limited scenarios it wouldn't change.
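Since 1% lows keep coming up: this is roughly how they get derived from a frametime log (the kind of data RTSS or CapFrameX can export). The exact definition varies a bit between tools; this uses one common convention, and the trace numbers below are made up just to show how a few dips hit the 1% low much harder than the average.

```python
# Rough sketch of average FPS vs "1% low" FPS from a frametime trace in milliseconds.
# Tools differ slightly in how they define 1% lows; this uses the common
# "average of the slowest 1% of frames" convention.
import statistics

def summarize(frametimes_ms):
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)
    slowest_1pct = slowest[:max(1, len(slowest) // 100)]
    low_1pct_fps = 1000.0 / statistics.mean(slowest_1pct)
    return round(avg_fps, 1), round(low_1pct_fps, 1)

# Made-up example: a flat ~8.3 ms trace vs the same trace with a handful of 20 ms dips,
# the kind of thing a power-limit-induced clock drop would show up as.
smooth = [8.3] * 1000
dippy  = [8.3] * 990 + [20.0] * 10
print(summarize(smooth))  # avg and 1% low nearly identical
print(summarize(dippy))   # avg barely moves, 1% low drops to 50 fps
```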
 

I agree that there is a measurable difference; if you collect data and do the comparison, then of course the GPU being fed more than 50% more power will see some performance benefit. It only makes sense, right? I'm just saying that, as to whether I can actually perceive the difference in real life, I cannot. Gaming is my one and only hobby and I want the best experience possible; it's why I invest in things like a 4090 and 4K 240 Hz OLED screens. If I felt like I was getting a substantially worse experience by limiting my GPU to 80% power and ruining my frame times, I would immediately juice it up. But honestly I haven't noticed any performance degradation to the point where I'm like, yeah, I need to up the power limit on my 4090 because I'm getting a worse gaming experience. And believe me, I game A LOT lol.

 
Yeah, back onto the thread topic: what created this discussion was the rumored power limit of 500 W. I do hope they allow for a 600 W limit. I game a lot myself, but like I said before, I usually use DLAA or supersampling, so I need as much power as possible. I bet the card will meet most users' needs with lower power, but it's always good to have options.
 

I'm sure they will. Going backwards and capping it at 500 W seems like a weird move IMO; it's probably going to be a 500 W default now instead of 450 W, with a 600 W or greater max limit.
 
I was under the impression that they were setting the base at 500W instead of 450W, not capping it at 500W.
 
If Seasonic isn't guessing with their power figures it will be a 500W base. But at this time I think it is a guess, not an actual leak.
 
If 500 W is the baseline for a stock card, I would expect OC versions to have a higher TDP. Also, just because a card's rated TDP is some figure does not mean it will operate there 100% of the time; it could well be limited by the CPU during large parts of a game and just be cruising, or limited by the options you set, as in: why waste power on FPS that your monitor can never display? Hence frame limiters, VSync, etc., unless latency is key to one's success in a multiplayer game.
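On the frame-limiting point, here's a toy sketch of the basic idea: sleeping off the time your monitor can't display anyway, which is where the power savings come from. Real limiters (driver-level, RTSS, in-engine) pace frames far more carefully than this; the function and numbers here are purely illustrative.

```python
# Toy frame limiter: sleep off whatever time is left in the frame budget so the
# GPU isn't rendering frames the monitor can never display. Real limiters
# (driver, RTSS, in-engine) pace frames far more carefully than this.
import time

def run_capped(render_frame, cap_fps: float = 120.0, frames: int = 600):
    budget = 1.0 / cap_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                       # stand-in for the game's render work
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)     # idle time = power the GPU doesn't burn

# Example with a dummy 2 ms "render" workload:
run_capped(lambda: time.sleep(0.002), cap_fps=120, frames=10)
```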

I am way more interested in new technology or techniques that Nvidia may introduce; since they are the AI masters at the moment, I expect some useful new AI features to push the technology forward for gaming and other stuff. I still think Nvidia needs to keep a large mind share, as in the majority of gaming PCs having Nvidia cards, for them to continue to be successful and even more lucrative with their other endeavors. Plus, if I get a 240 Hz 4K monitor, I will definitely want something that can push performance higher, but currently my performance with my 7900 XTX seems to be fine on my 120 Hz OLED TV.
 
I looked at those Seasonic numbers and my best guess is that they're advice on what's safe if you're buying a PSU. So it's not "the card is going to need X"; it's "if you get this much, you should be OK."
 
I wonder how long they will be out of stock before you can just go buy a 5090 at Micro Center or online.
 
Hi, so no RTX 5090 in 2024? Or is there any chance for 2024? I am currently on an Xbox Series X and waiting for the new RTX. I don't want to buy a 4090 if the 5090 will be released shortly after. Oh, and the new Intels.
 
You don't know until you know. Ampere was announced in late August / early September 2020 (Jensen's kitchen). Lovelace was announced at GTC in late September 2022. So look for announcements around that time, and if you don't get any, then you'll know.
 
You are going to be waiting almost another year, dude. You don't even know what stock is going to look like, how expensive it's going to be, or how long you're going to have to wait after release before anyone can "buy one." You are better off getting a cheap 4090 and selling it when you have the 5090 in your hand. But hey, you do you. AND you want Intel? LOL
 
No upgrade this year? :(

"NVIDIA GeForce RTX 50 “Blackwell” Gaming GPUs Reportedly Launching At CES 2025"

https://wccftech.com/nvidia-geforce-rtx-50-blackwell-gaming-gpus-launch-ces-2025/
They did the exact same thing with the 40xx release to clear old Ampere stock (they've been doing this going back as far as Curie, if I recall). nV leaked in July that we'd not be seeing the 40xx until 2023, Ampere retail prices were dropped marginally, then they announced in September and launched in October.

If us old-timers have no memory of these shenanigans, there's no hope for anyone else ;)
 
Hi, is there any chance this year?
 
No. The RTX 5090 is never coming out. :p No more graphics cards ever! NVidia is going to change their branding strategy. Only AI accelerators now. Of course consumer AI accelerators will also be able to drive monitors and render games.

Yes, I'm just talking out of my ass and have no actual information on this happening. I do, however, think it's totally possible.

Intel Core i# 15th gen is never coming either. Next desktop releases are second gen Core Ultra in the fall, then Core (cheap seats - possibly another Raptor Lake refresh) next year.
 
So buy a 4090 and wait for the new Intel 15th gen?
If it were me I would, because you can get most of your money back from that 4090. Even more so if the 5090 is too much: once the 4090 is no longer being sold, people who would be happy with a 4090 but can't get one will give you the most for your card. I would not go Intel, but that is me.
 
Wait for 5090 for the love of god. This is the worst time to buy a 4090. The worst. This is the time for shooting the shit in the forums, waiting it out. Whatever 5090 will or won’t be, one thing it is sure to be is worth waiting for.

Isn’t this whole forum one big emotional support group for people waiting for the next GPU?
 
Oki, thank you, I will wait.
 
To add a bit more color and depth, it's also the point where you should...

a) Sell your 4090 now (and by now I mean right now). From mid-August until the 5090 launch, it will be a fire sale for 4090s, driving used prices down. I wouldn't be surprised if the average used 4090 price drops to $1200 at points.

b) Buy a used 4090 over the next two months if you are looking at a 5080. There will be deals to be had, and the 4090 will almost certainly still outperform a 5080 (with better 4K performance and more VRAM). Remember the used 2080 Ti prices, where they could be had for $500, then the marginally slower 3070 launched with a $600 street price? That's the dynamic we'll see with the 4090/5080.

c) If you are patient and do sit on your 4090, you can wait a few weeks after the 5090 launch to sell, and used sale prices will normalize as people realize nV will be charging more for less performance in the 5xxx series. I can envisage a scenario where they launch the 5080 at $1200 ($1300 street price) performing 5-10% below a 4090.

d) Do not, in any case, buy a new 4090 now. Even with the small price drops arriving, the value might not be there.
 
If the 5090 launches in spring 2025 and is really hard to get at first, it could be a long wait...
 
As much as I love my 4090, I'm ready for an upgrade. Two years is a long time in the tech world, and it's simply no longer the same beast for the games of 2024 as it was for the games of 2022. If I max out all the details without resorting to upscaling and FG, I would actually be below 60 fps in many titles.
 