Watts needed


NeghVar
What wattage would you suggest for this equipment? I tried some online calculators, but they all show less than 500W.
  • Ryzen 7 5800X
  • RTX 2060
  • X570 chipset ATX motherboard
  • 3x M.2 SSD
  • 2x 16GB DDR4 4000
Are there any other components that should be counted as significant power draws?
 
Other factors to consider would be:

Are you going to run your CPU and GPU stock or will you do any kind of overclocking on either?
Do you have any mechanical drives in your system?
How many fans are there in your system? What kind of fans? (high-CFM fans can pull a good bit of power)
What are you using to cool your CPU? Aftermarket heatsink or AIO? (If AIO, pump uses power also and should be considered)
Are you going to upgrade your system in the future? For example, if you are planning to eventually swap that 5800X for a 5950X, or swap that 2060 for a 3090, you would be better off buying a bigger PSU now than replacing it later.

The quality of the PSU will be a big factor too. A high-end 500W unit could probably run that fine, but a cheap unit would probably die or cause system instability. Personally, I would shoot for a decent 750W or a high-end 650W.
 
Fans: Not sure. Depends on the case
No OC, stock settings, and cooling
No HDD
Planning on an 80 Plus Gold PSU
Upgrade plans are unlikely for a long time. My current system was built in 2013 and its performance is still satisfactory IMO, even in RDR2.
 
No OC, stock settings, and cooling

The 5800X doesn't come with a stock heatsink FYI, so you'll have to choose an aftermarket cooler one way or another. Any decent cooler at this point is probably going to use at least one if not two 120mm fans (think CM Hyper 212 Evo, etc).
 
The minimum PSU for a normal 2060 is 500W, so I'd say go 600W or higher, especially if you intend to upgrade down the road.
 
OK, thanks for the info.
These are the plans so far.
  • AMD Ryzen 7 5800X Vermeer 8-Core 3.8 GHz
  • GIGABYTE X570 AORUS MASTER
  • ASUS GeForce RTX 2060 Overclocked 6G GDDR6
  • G.SKILL Ripjaws V Series 32GB (2 x 16GB) 288-Pin DDR4 4000
  • WD BLACK SN850 NVMe M.2 2280 500GB PCI-Express 4.0 x4 3D NAND
  • 2x WD BLACK SN850 NVMe M.2 2280 2TB PCI-Express 4.0 x4 3D NAND
OK, so technically there is some OC (the factory-overclocked card), but nothing manual.
 
Those components (including the motherboard's VRMs, audio, ethernet, wifi, etc.) add up to around 400 watts, plus whatever spikes from the GPU. I would go with 650 watts. Even if you don't plan on upgrading, I wouldn't want to be that close to the wire. You might change your mind about upgrading, and you might even have to replace a failed component years from now with something that consumes more power.
 
A general rule I have heard, ever since I built my Celeron 300A w/ 440BX chipset :), is to take your estimated need and add 33%. So 650 sounds like the sweet spot.
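For anyone who wants to plug in their own numbers, here's a minimal sketch of that rule of thumb (the load figure is just a placeholder, not a measurement):

```
# Rough PSU sizing with the "estimated need + 33%" rule of thumb.
def recommended_psu_watts(estimated_load_w: float, headroom: float = 0.33) -> float:
    """Suggested PSU capacity for a given estimated peak system load."""
    return estimated_load_w * (1.0 + headroom)

estimated_load_w = 490  # placeholder: a ~490W worst-case estimate
print(f"Suggested PSU: ~{recommended_psu_watts(estimated_load_w):.0f}W")
# -> Suggested PSU: ~652W, i.e. 650W lands right at the sweet spot
```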
 
Those components (including the motherboard's VRMs, audio, ethernet, wifi, etc.) add up to around 400 watts, plus whatever spikes from the GPU. I would go with 650 watts. Even if you don't plan on upgrading, I wouldn't want to be that close to the wire. You might change your mind about upgrading, and you might even have to replace a failed component years from now with something that consumes more power.
This, at least a 650W.
If you get a good-quality PSU, it will likely have a 5-year or longer warranty.
So it will still be in use when you next upgrade, and that could easily mean a higher-power graphics card.

I did this 10 years ago and still use the same 750W PSU (with 7yr warranty) in my 3090 gaming PC.
Getting a really good quality PSU ended up cheaper in the long run!
 
Purely anecdotal, but I'd stick around 850W Gold for that build. It gives you some wiggle room for a GPU upgrade someday.
 
A 5800X as a system (including RAM, motherboard, and normal peripherals) uses about 200 watts. An RTX 2060 at stock peaks at 170 watts and averages 150 watts (overclocked models can peak at 190 watts). Your planned peripherals add no more than 20-40 watts to a basic system. That is why the online calculators recommend a 500 watt PSU minimum: your planned system is really a 400 watt system. You will only hit maximum power draw in workstation tasks that fully max out the CPU and GPU simultaneously, and it takes certain types of instructions (usually video transcoding) to get peak power draw from a CPU. Gaming workloads are not that type of instruction, and you're likely never to see more than 140 watts of CPU power draw in gaming.

That said, there is nothing wrong with oversizing your PSU. It's your money to choose what to do with it. However, it is far better to get a smaller capacity higher quality PSU than it is to get a higher capacity lower quality PSU.
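As a back-of-the-envelope check on those figures (a sketch using the estimates quoted above, not measurements):

```
# Summing the approximate peak-draw figures cited above.
peak_draw_w = {
    "5800X system (CPU, RAM, board, peripherals)": 200,
    "RTX 2060, stock peak (OC models ~190)": 170,
    "three NVMe SSDs and misc. extras": 40,
}
total_w = sum(peak_draw_w.values())
print(f"Estimated peak system draw: ~{total_w}W")  # ~410W
# A 500W unit already covers this with margin, matching the online calculators.
```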
 
A 5800X as a system (including RAM, motherboard, and normal peripherals) uses about 200 watts. An RTX 2060 at stock peaks at 170 watts and averages 150 watts (overclocked models can peak at 190 watts). Your planned peripherals add no more than 20-40 watts to a basic system. That is why the online calculators recommend a 500 watt PSU minimum: your planned system is really a 400 watt system. You will only hit maximum power draw in workstation tasks that fully max out the CPU and GPU simultaneously, and it takes certain types of instructions (usually video transcoding) to get peak power draw from a CPU. Gaming workloads are not that type of instruction, and you're likely never to see more than 140 watts of CPU power draw in gaming.

That said, there is nothing wrong with oversizing your PSU. It's your money to choose what to do with it. However, it is far better to get a smaller capacity higher quality PSU than it is to get a higher capacity lower quality PSU.

It definitely doesn't hurt to have some extra. Case in point, I had a machine with a Pentium G3258 and a 750TI. I originally used an ancient 300W PSU and it couldn't handle the load of that GPU at all. It was old school and half of that 300W was on the 5V rail. So I bought a 430W EVGA, and it worked perfectly. Then I got some parts for free and upgraded it to a 4690K and a GTX 960. Even though it should still be well under 300 watts, the 12V rail now sags to about 11.7 while gaming. I don't get the best feelz from that. Now I have a 980TI I can throw in it, but I'm pretty sure the PSU would completely lose its shit at that point. I wish I had just spent another $30 and gotten a 650 in the first place, because now I need one anyway.
 
It definitely doesn't hurt to have some extra. Case in point, I had a machine with a Pentium G3258 and a 750TI. I originally used an ancient 300W PSU and it couldn't handle the load of that GPU at all. It was old school and half of that 300W was on the 5V rail. So I bought a 430W EVGA, and it worked perfectly. Then I got some parts for free and upgraded it to a 4690K and a GTX 960. Even though it should still be well under 300 watts, the 12V rail now sags to about 11.7 while gaming. I don't get the best feelz from that. Now I have a 980TI I can throw in it, but I'm pretty sure the PSU would completely lose its shit at that point. I wish I had just spent another $30 and gotten a 650 in the first place, because now I need one anyway.

The voltage drop speaks more about the quality of the PSU rather than the capacity. Getting a larger capacity low quality PSU to drive a lesser load will land you in the same position when you start using more of the capacity. Get a high quality PSU and you don't need to worry about hitting capacity or even going slightly over, at least for the first few years of the PSU's life.
 
With triple 4.0 SSDs, the chipset will have to divide its links: you'll either have to knock the GPU down to a lower PCIe level or give up some of the SSDs' peak sequential bandwidth. Keep that in mind.
 
With triple 4.0 SSDs, the chipset will have to divide its links: you'll either have to knock the GPU down to a lower PCIe level or give up some of the SSDs' peak sequential bandwidth. Keep that in mind.

The first PCIe x16 slot (aka the GPU) and at least the first M.2 slot should be using PCIe lanes from the CPU and won't be shared or divided with anything.
 
The first PCIe x16 slot (aka the GPU) and at least the first M.2 slot should be using PCIe lanes from the CPU and won't be shared or divided with anything.

I don't think you read the post properly. He's planning to add THREE 4.0 SSDs; in that case the chipset will switch lanes and cut down on one end or another. I know my X570 can't do that with my PCI-E SSDs, and the same applies to his case. I suggest looking at the max lanes available on the X570 chipset diagram.
 
I don't think you read the post properly. He's planning to add THREE 4.0 SSDs; in that case the chipset will switch lanes and cut down on one end or another. I know my X570 can't do that with my PCI-E SSDs, and the same applies to his case. I suggest looking at the max lanes available on the X570 chipset diagram.

You do know that not all PCIe 4.0 lanes go through the chipset, right? Many of the lanes come directly from the CPU. This applies to at least the first PCIe slot and usually at least the first M.2 slot on almost all motherboards. So worst case, his 2nd and 3rd SSDs are using lanes from the chipset and potentially sharing bandwidth with each other, but that won't affect his GPU or his primary SSD at all, and chances are he's not going to be using all three of his SSDs at the same time anyway...

Since you like diagrams...
[x570_PCIe.jpg: X570 chipset PCIe lane diagram]
 
You do know that not all PCIe 4.0 lanes go through the chipset, right? Many of the lanes come directly from the CPU. This applies to at least the first PCIe slot and usually at least the first M.2 slot on almost all motherboards. So worst case, his 2nd and 3rd SSDs are using lanes from the chipset and potentially sharing bandwidth with each other, but that won't affect his GPU or his primary SSD at all, and chances are he's not going to be using all three of his SSDs at the same time anyway...

You're buying a 4.0 SSD for those (rather) pointless sequential speeds. If it's getting knocked down to 3.0 speeds, why on earth are you buying even ONE of them in the first place? Just buy a 3.0 SSD then. Your argument is unfounded.
 
You're buying a 4.0 SSD for those (rather) pointless sequential speeds. If it's getting knocked down to 3.0 speeds, why on earth are you buying even ONE of them in the first place? Just buy a 3.0 SSD then. Your argument is unfounded.

In no way would a PCIe 4.0 SSD be knocked down to PCIe 3.0 speeds. If you had two PCIe 4.0 SSDs connected through the chipset, and were running benchmarks on both of those drives simultaneously (a contrived unrealistic scenario), then they would be forced to share the PCIe 4.0 x4 bandwidth from the chipset, but that would almost never happen in real-world usage. And chances are that the "best" SSD is going to be the one using the CPU lanes anyway (aka not sharing any bandwidth with anything connected to the chipset).

You clearly have zero clue what you are talking about, but please continue, this is kind of entertaining, like watching a drunk guy try to cross a freeway on foot.
 
The board will run out of lanes: it'll either knock the SSD down to 4.0 x2 or the GPU down to x8. I suggest you research a triple M.2 configuration on X570.

It won't shift its link speeds up and down; that's not how the switch works. Once it detects an SSD in every slot, it will cut down the bandwidth on at least one device.

Since you didn't do your research on the chipset and insisted on insulting me, you deserve a "special kid of the night" award.
 
The board will run out of lanes: it'll either knock the SSD down to 4.0 x2 or the GPU down to x8. I suggest you research a triple M.2 configuration on X570.

It won't shift its link speeds up and down; that's not how the switch works. Once it detects an SSD in every slot, it will cut down the bandwidth on at least one device.

Do you actually know what the difference is between PCIe lanes from the CPU and PCIe lanes from the chipset? Or in your mind do all lanes just come from "the board"?

According to the Aorus Master manual, the only compromise for having all three M.2 slots populated is that 2 out of the 6 SATA ports will be disabled:
https://download.gigabyte.com/FileList/Manual/mb_manual_x570-aorus-master_1102_e.pdf
Page 32
 
The plan was to RAID 0 the two 2TBs. Looking through the manual, it sounds like as long as nothing is attached via SATA (protocol or port), there will be no significant performance issue. M2A will be the OS drive; the two 2TB SSDs would be on M2B & M2C. So that would be fine, since they both run through the chipset. I was planning on two 2TB drives vs one 4TB due to price.
Anyway, thank you all for the PSU advice.
 
That just shows how the SATA ports work. It tells you zero about exactly how many lanes the other ports get. Way to not do your research and cherry-pick the wrong link. Also, to shed light on one of your sentences about how "nothing will be switched or anything", I'll show you this:

[Screenshot_1.jpg: PCIe link status for the poster's two-SSD setup]


Ignore the 2.0s; that's probably link power-saving management or wrong reporting. As you can see, the chipset already uses its switching algorithm to arrive at this solution for my setup with two SSDs. When you put in a third, you actually run out of lanes somewhere, because the SATA controller(s), ethernet, USB, etc. also need their own lanes.

RAID 0 with 4.0 SSDs will give you no benefit; the whole point of even 3.0 in the first place was higher peak sequential speeds. RAID does not increase the small-file 4K performance that matters most, so you're just putting your files at more risk. I'd also like to mention what a colossal waste of money that whole setup is when DDR5 is right around the corner (like paying $300 for that board and all those SSDs, including a 500GB 4.0 one). The 2060 is not worth it either at current pricing.
 
That just shows how the SATA ports work. It tells you zero about exactly how many lanes the other ports get. Way to not do your research and cherry-pick the wrong link. Also, to shed light on one of your sentences about how "nothing will be switched or anything", I'll show you this:

[Attachment 382174: the PCIe link status screenshot above]

Ignore the 2.0s; that's probably link power-saving management or wrong reporting. As you can see, the chipset already uses its switching algorithm to arrive at this solution for my setup with two SSDs. When you put in a third, you actually run out of lanes somewhere, because the SATA controller(s), ethernet, USB, etc. also need their own lanes.

RAID 0 with 4.0 SSDs will give you no benefit; the whole point of even 3.0 in the first place was higher peak sequential speeds. RAID does not increase the small-file 4K performance that matters most, so you're just putting your files at more risk. I'd also like to mention what a colossal waste of money that whole setup is when DDR5 is right around the corner (like paying $300 for that board and all those SSDs, including a 500GB 4.0 one). The 2060 is not worth it either at current pricing.
What did you use to show your current PCI-E lane usage? That would be nifty...
 
That just shows how the SATA ports work. It tells you zero about exactly how many lanes the other ports get. Way to not do your research and cherry-pick the wrong link.

Cherry-picked? I linked the entire manual. Feel free to point out which page I should look at to support your argument. Or do you think they just forgot to include something that important in their 104-page manual?

Also, to shed light on one of your sentences about how "nothing will be switched or anything"

In no post did I ever use the word "switched" or "switch". The fact that you have to make up fake quotes says a lot about your argument, and even about you as a person.

Here is what I did say:
The first PCIe x16 slot (aka the GPU) and at least the first M.2 slot should be using PCIe lanes from the CPU and won't be shared or divided with anything.
And chances are that the "best" SSD is going to be the one using the CPU lanes anyway (aka not sharing any bandwidth with anything connected to the chipset).

Both quotes are true.

I'll show you this:

[Attachment 382174: the PCIe link status screenshot above]

Ignore the 2.0s; that's probably link power-saving management or wrong reporting. As you can see, the chipset already uses its switching algorithm to arrive at this solution for my setup with two SSDs. When you put in a third, you actually run out of lanes somewhere, because the SATA controller(s), ethernet, USB, etc. also need their own lanes.

What I see in your picture is that your GPU and your primary SSD are using CPU PCIe lanes that aren't "switched" or shared with anything. I see one of your SSDs using chipset lanes, which would share bandwidth (the upstream PCIe 4.0 x4 link between the chipset and the CPU) with other components also connected through the chipset. It doesn't mean that any individual component would be limited, or that you would necessarily even be limited when running multiple components connected through the chipset at the same time, unless their combined throughput can actually max out the PCIe 4.0 x4 upstream link to the CPU. Even if that were the case, again, it would have exactly zero impact on anything connected to CPU PCIe lanes.
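To put rough numbers on that sharing, here's a sketch of the usual X570 topology (lane counts per AMD's chipset diagram; the per-lane figure is the approximate usable bandwidth of PCIe 4.0):

```
# Rough model of the common X570 layout: the Ryzen CPU exposes 24 PCIe 4.0
# lanes (x16 GPU slot + x4 first M.2 + x4 chipset uplink); extra M.2 slots
# hang off the chipset and share that single x4 uplink with SATA/USB/LAN.
PCIE4_GBPS_PER_LANE = 1.97  # ~16 GT/s per lane with 128b/130b encoding

cpu_lanes = {"GPU slot": 16, "M.2 #1": 4, "chipset uplink": 4}
chipset_devices = ["M.2 #2", "M.2 #3", "SATA", "USB", "LAN"]

uplink_gbps = cpu_lanes["chipset uplink"] * PCIE4_GBPS_PER_LANE
print(f"Shared chipset->CPU uplink: ~{uplink_gbps:.1f} GB/s for {chipset_devices}")
# Each chipset M.2 slot still links at 4.0 x4; contention only appears when
# chipset devices push traffic simultaneously. The GPU and M.2 #1 sit on CPU
# lanes and are never affected.
```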

RAID0 with 4.0 SSDs will give you no benefit, the whole point of even 3.0 in the first place was higher peak sequential speeds. RAID does not increase your foremost small file 4k performance, therefore you're just putting your files to more risk. I'd actually like to mention how much of a colossal waste of money that whole setup is when DDR5 stuff is right out of the corner (like paying 300 for that board and all those SSDs including a 500gb 4.0 one). 2060 is not worth it either at the current pricing.

It seems like you've shifted to just being bitter and trying to shit on all aspects of this thread now. Pretty sad, and not very helpful. While I would agree that RAID 0 for SSDs is pretty pointless, the higher sequential transfer speeds might eventually come into play once DirectStorage takes off, as it's currently the old APIs that prevent games from actually making use of fast sequential speeds. Maybe the OP simply wants the storage to appear as one contiguous block, which would be fine.
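For what it's worth, a quick sketch of why striping the two chipset-attached drives buys little sequential speed anyway (assuming ~7 GB/s peak reads per SN850, per WD's spec sheet):

```
# Why RAID 0 across the two chipset-attached SN850s gains little here.
SN850_SEQ_READ_GBPS = 7.0   # WD's advertised peak (approximate)
UPLINK_GBPS = 4 * 1.97      # the chipset's PCIe 4.0 x4 link to the CPU, ~7.9

raid0_on_paper = 2 * SN850_SEQ_READ_GBPS          # ~14 GB/s theoretical
raid0_ceiling = min(raid0_on_paper, UPLINK_GBPS)  # capped by the shared uplink
print(f"RAID 0 ceiling through the chipset: ~{raid0_ceiling:.1f} GB/s "
      f"vs ~{SN850_SEQ_READ_GBPS:.1f} GB/s for one drive")
# 4K random performance doesn't scale with striping, so the practical win is
# mostly having one contiguous volume.
```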
 