clayton006
[H]ard|Gawd
Joined: Jan 4, 2005
Messages: 1,087
So this may seem like a simple question, but so far I haven't been able to answer it with my own testing.
Using the rig in my sig (socket 2011), I was experimenting with different RAM amounts. I needed 64GB for all of the VM stuff I was doing at the time (and, well, could use it again now). I noticed that when I ran Sandra memory bandwidth benchmarks with 64GB installed, I got almost exactly half the bandwidth I got with only 32GB installed.
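For reference, this is roughly the kind of measurement Sandra is doing (a minimal STREAM-style triad sketch in C, not Sandra's actual test; the array size and rep count here are arbitrary placeholder choices, and a single run like this won't be as tuned as Sandra's):

// Minimal STREAM-style triad: sustained memory bandwidth check,
// useful for comparing a 1 DPC vs 2 DPC configuration on the same box.
// Build: gcc -O2 -fopenmp triad.c -o triad
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (64 * 1024 * 1024)  // 64M doubles = 512 MB per array, well past any cache
#define REPS 10

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) { perror("malloc"); return 1; }

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++) {
        // Triad: 2 reads + 1 write per element; threads help saturate
        // all four channels on a quad-channel socket 2011 platform.
        #pragma omp parallel for
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = (double)REPS * 3.0 * N * sizeof(double);  // total bytes moved
    printf("Triad bandwidth: %.2f GB/s (check: %f)\n",
           bytes / secs / 1e9, a[N / 2]);  // print a value so the loop isn't optimized away

    free(a); free(b); free(c);
    return 0;
}

Running the same binary with 32GB (1 DIMM per channel) and then 64GB (2 DIMMs per channel) installed should show whether the drop is real or just a Sandra quirk.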
At first I was surprised, but then I realized it may have something to do with running 2 DIMMs per channel instead of 1 DIMM per channel: could populating more than one DIMM per channel reduce memory bandwidth?
I thought this was also a concern on Sandy Bridge (and later) Xeons, given how their memory controllers handle much larger amounts of RAM.
Does anyone know the answers to these questions, or has anyone observed the same behavior?