2x8 or 4x4

B770

All things being equal (timing-wise), is 4 strips of 4 gigs better than 2 strips of 8? Never really thought about it, and don't know if it matters.
 
If the same speed, I'd go 2x8: it leaves you expansion options and takes less power, though it may cost a little more depending on whether you're going new or used.
 
If they are the same timings, how could there be a difference, other than how many slots are used up and resale value in a few short years?
 

4 DIMMs are harder on a CPU's memory controller than 2, and there can be compatibility issues depending on the motherboard/RAM. If not overclocking, there is no difference in stock performance.
 
True, especially on Ryzen. But based on the OP's question, it seems like they were asking about the advantage of using 4 rather than 2 sticks?
So far it seems all of us would recommend 2 sticks (2x8).
 
All things being equal, use the minimum number of modules you need to reach your desired capacity while still filling all channels. More modules means more loading on the VRMs and memory controller, which means less stability at a given set of timings.
 
The only time I would recommend 4x4 is if your system has quad-channel memory, where filling all four channels doubles the available memory bandwidth and yields significant performance gains in a lot of CPU-bound tasks. Otherwise, 2x8GB is preferred on dual-channel systems.
 
The memory controller has to supply the clock and data signaling to every chip on each DIMM that's installed. If you're running two DIMMs, the memory controller will not have to provide as much power to maintain a clean signal as it would if you were running four.

To get around this problem in servers where 8 to 16 or more DIMMs are needed, registered memory is used so the memory controller(s) only has to talk to one chip per DIMM instead of the 8 to 18 for unbuffered memory.
 

I did not know that
 
So then why do board manufacturers put 4 DIMM slots on motherboards?
Provide as much power... how much power can 4 memory modules consume vs 2 that power supplies can't handle or would have an issue with?

But given that the controller is in the CPU, is the "rated" power not 95W when 4 memory modules are used? Would it jump to what?

Clean signal? What sort of weak/altered/distorted "signal" are we talking about when 2 modules are used vs 4? The kind that would make your system crash?

It's not about the power supply, it's about the memory controller/chipset being able to supply stable power and keep everything synced.
 
So then why do board manufacturers put 4 DIMM slots on motherboards?
Because people want expandability. However, there are plenty of motherboards out there with a reduced number of DIMM slots (in fact, my Threadripper system has only four DIMM slots instead of the more popular eight). Also, a majority of laptops only have one or two slots.

Provide as much power... how much power can 4 memory modules consume vs 2 that power supplies can't handle or would have an issue with?
This isn't about the modules consuming the power. They have their own supply (though 4 DIMMs will consume twice the power of two DIMMs).

It's about the memory controller consuming more power to drive the signals to those modules with four DIMMs instead of two. That difference is pretty negligible from an overall system perspective, but a few extra milliwatts through nanoscale components can have effects that need to be accounted for in design. Whether or not the memory controller can handle it depends on how well it was designed and how close to the edge you're running.

But given that the controller is in the CPU, is the "rated" power not 95W when 4 memory modules are used? Would it jump to what?

A CPU's power rating is determined according to its design. If a 95W-rated processor is designed to run at 3.4GHz, 1.3V, with up to four DIMMs installed, then the 95W rating covers running at 3.4GHz, 1.3V, with four DIMMs installed.

The right question would be how much power could you save using two DIMMs instead of four. The answer to that is probably that it wouldn't be noticeable beyond a margin of error.

Clean signal? What sort of weak/altered/distorted "signal" are we talking about when 2 modules are used vs 4? The kind that would make your system crash?

Adding more devices affects capacitance on the shared buses feeding the memory slots. Changes in capacitance affect how fast a signal can switch, how long that signal takes to stabilize so that it's useful, and how much power is required to drive that signal (P = C*V^2*f). If you try to drive a signal too fast, don't wait long enough for the signal to stabilize, or don't provide enough voltage to drive a stable signal, you won't get valid data into the memory chips at the right addresses. Bad data to the chips means incorrect or illegal instructions, access violations, and corrupted data.
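To put rough numbers on that P = C*V^2*f relationship, here is a back-of-the-envelope sketch. The per-DIMM capacitance figure is an illustrative assumption, not a measured value for any real memory bus:

```python
# Back-of-the-envelope dynamic switching power: P = C * V^2 * f.
# The ~10 pF load per DIMM per signal line is an assumed figure
# for illustration only, not a datasheet value.

def switching_power(capacitance_f, voltage_v, frequency_hz):
    """Dynamic power (watts) needed to drive a capacitive load."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# DDR3-class example: 1.35 V, 1600 MHz signaling (assumed).
two_dimms = switching_power(2 * 10e-12, 1.35, 1600e6)
four_dimms = switching_power(4 * 10e-12, 1.35, 1600e6)

print(f"2 DIMMs: {two_dimms * 1e3:.1f} mW per line")   # ~58.3 mW
print(f"4 DIMMs: {four_dimms * 1e3:.1f} mW per line")  # ~116.6 mW
```

Whatever the exact capacitance turns out to be, doubling the load doubles the drive power at the same voltage and frequency, which is why the controller either slows down, waits longer, or needs more voltage with four DIMMs.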



One thing to note: it's not the DIMM count that really matters, it's the number of ranks. Four single-rank DIMMs will have nearly the same effect on the memory controller as two dual-rank DIMMs (or one quad-rank DIMM). Back in the Athlon 64 days, installing more than four ranks would drop the memory speed from DDR400 to DDR333. This was because the memory controller couldn't provide enough power to drive the buses at the higher speed.
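The point about ranks can be sketched with trivial arithmetic; the configurations below are illustrative examples on a dual-channel system:

```python
# What the memory controller "sees" per channel is ranks, not DIMMs.
# Example configurations below are illustrative assumptions.

def ranks_per_channel(dimms_per_channel, ranks_per_dimm):
    """Electrical load on one channel, counted in ranks."""
    return dimms_per_channel * ranks_per_dimm

# Dual-channel board, 4 single-rank DIMMs -> 2 DIMMs per channel:
four_single_rank = ranks_per_channel(2, 1)
# Dual-channel board, 2 dual-rank DIMMs -> 1 DIMM per channel:
two_dual_rank = ranks_per_channel(1, 2)

print(four_single_rank, two_dual_rank)  # both load each channel with 2 ranks
```

Both layouts present two ranks per channel, which is why they stress the controller about equally.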
 
One thing to note: it's not the DIMM count that really matters, it's the number of ranks. Four single-rank DIMMs will have nearly the same effect on the memory controller as two dual-rank DIMMs.
This can vary depending on the quality of the MB.
A cheap MB with a thin PCB won't have good spacing between the lanes from the RAM to the CPU, and the lanes may not be equal in length, leading to crosstalk/interference and small differences in signal latency between the sticks, which all adds up to reduced stability at high speeds, especially when running 4 sticks.

A MB with a high-quality PCB can actually reach higher speeds with four single-rank sticks than two dual-rank. Meanwhile, a quality MB with only two RAM slots will typically OC a little higher with two dual-rank sticks than one with four slots would.
 
All things being equal (timing-wise), is 4 strips of 4 gigs better than 2 strips of 8? Never really thought about it, and don't know if it matters.

There are timing and loading differences between using 1 stick/channel and 2 sticks/channel.

The loading difference comes from capacitance. 1 stick/channel has less capacitance than 2 sticks/channel; the extra capacitance comes from having to bridge across DIMM slots to reach the 2nd stick in the channel. The lower the capacitance, the easier a time the memory controller has accessing rows of memory. You may also find that running at XMP spec with all sticks in place is difficult, and you have to provide a bit more voltage to get them stable.

The timings are all about the command rate. This is the number of clock cycles needed to start executing a command from the memory controller (the actual execution itself takes some number of clock cycles depending on your other latencies before it completes). When you use a single stick/channel, you can usually set this to 1T, which means it only needs a single clock cycle to start executing. When you load up the DIMM slots of each bank, more often than not the command rate gets set to 2T. It seems a minor difference, but it is a 1-clock-cycle latency penalty on every command issued. It adds up.
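A quick bit of arithmetic shows how that one extra cycle accumulates. The DDR4-3200 clock figure below is an assumed example:

```python
# Illustrative arithmetic for the 1T vs 2T command rate penalty.
# The clock frequency is an assumption for a DDR4-3200 kit.

memory_clock_hz = 1600e6          # DDR4-3200 I/O clock runs at 1600 MHz
cycle_ns = 1e9 / memory_clock_hz  # duration of one clock cycle in ns

# 2T spends one extra cycle per command compared to 1T.
extra_ns_per_command = cycle_ns * (2 - 1)
print(f"Extra latency per command at 2T: {extra_ns_per_command:.3f} ns")

# Over millions of commands, the overhead accumulates:
commands = 10_000_000
total_ms = extra_ns_per_command * commands / 1e6
print(f"Added over {commands:,} commands: {total_ms:.2f} ms")
```

Each individual command only loses a fraction of a nanosecond, but a memory-heavy workload issues commands constantly, so the penalty shows up as a small but measurable drop in effective latency and bandwidth.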
 
The memory controller has to supply the clock and data signaling to every chip on each DIMM that's installed. If you're running two DIMMs, the memory controller will not have to provide as much power to maintain a clean signal as it would if you were running four.

To get around this problem in servers where 8 to 16 or more DIMMs are needed, registered memory is used so the memory controller(s) only has to talk to one chip per DIMM instead of the 8 to 18 for unbuffered memory.


This is why many AMD chips have higher official clocks when two sticks of RAM are installed than when four are installed.

It should also be easier to overclock two sticks to higher stable clocks than four.

If you get two, and leave two open slots, you also leave yourself an easy path for future upgrades.

In our modern Window case, RGB LED, custom paint era of PC building, many people obsess about not having empty RAM slots as they think it looks ugly, but in reality it is the better configuration.
 

I bought a kit that came with RAM slot fillers, USB/fan pin covers, and PCIe slot covers of various sizes to fill in the slots for dust protection and aesthetics. It was pretty cheap too, if I remember correctly.
 