64GB of RAM in Skylake

SomeGuy133
Skylake supports 64GB of RAM, and I wanted to know: how bad of an idea is that? This is, of course, non-ECC.

What factual documentation exists on the issues with non-ECC RAM? Is newer tech making non-ECC RAM better, retiring the old "anything above 32GB must be ECC" rule, or are we heading towards a world where all RAM will need to be ECC?

I am looking for objective data comparing ECC vs. non-ECC, not hearsay.

Thanks
 
"above 32GB must be ECC" is just a pile of horseshit. Back in the 90's people used to say the same about 64MB.
 
Depends on how critical the application is, at the sizes we typically play with.

E.g. some boxes I use to run shows seen by millions of people use ECC with only 2x4GB sticks.
 
Depends on how critical the application is, at the sizes we typically play with.

E.g. some boxes I use to run shows seen by millions of people use ECC with only 2x4GB sticks.

Did you notice there was an issue, or was that done for safety reasons... I mean, just to be on the safe side?
 
See for yourself:
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf

However, I don't believe a general-purpose gaming rig would really suffer drastically from the occasional memory glitch. A scientific computation where the data is valuable, on the other hand...

What about a computer used for Photoshop that has encrypted drives? I am wondering if memory errors would be an issue for the encrypted OS.

It'll be a week or so before I can read that whole thing, but it looks a bit old. No DDR3/DDR4, but does that make a difference? Do you have any opinion or summary of that article you'd like to share?
 
"above 32GB must be ECC" is just a pile of horseshit. Back in the 90's people used to say the same about 64MB.

Might have to do with the electrical load when you have that much RAM... as newer boards come out that can better handle the loading, this should become less of an issue...
 
Might have to do with the electrical load when you have that much RAM... as newer boards come out that can better handle the loading, this should become less of an issue...

I thought the error rate was just inherent to how the system works and not due to load. As in, you get X errors per GB, and each additional GB contributes more errors. That's what I read somewhere in the past: it was said to be a fixed number of errors per GB (on average). It was in some sort of article; I forget where.
 
Well, electrical loading can cause RAM to become unstable in large quantities...
 
Well, electrical loading can cause RAM to become unstable in large quantities...

Yeah, I get what you mean, but what I meant was that the article I read (not sure if it is accurate or relevant anymore... it was quite old, DDR1/DDR2 era) talked about the error rate per period per GB and said the errors scale linearly. As in, per day/month, X errors naturally occur per GB, hence why ECC was important for large systems: the errors became more prevalent. Instead of an error maybe once a month, it became daily or several times a day.

As I said, I don't know how accurate it was or if it is even relevant anymore. That's why I am trying to understand if Skylake with 64GB is a bad idea even though it is supported.
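
For what it's worth, the Google paper linked above puts numbers on that linear claim: it reports correctable-error rates of roughly 25,000-75,000 FIT per Mbit (1 FIT = one failure per 10^9 device-hours). Here's a back-of-envelope sketch of what that implies per month; keep in mind those are fleet-wide averages dominated by a minority of bad DIMMs, so a healthy system sees far fewer, and the takeaway is the linear scaling with capacity:

```python
# Back-of-envelope: expected correctable errors per month at the
# fleet-wide rates from the Google study linked above
# (25,000-75,000 FIT per Mbit; 1 FIT = 1 failure per 10^9 device-hours).
# These averages are skewed by a minority of bad DIMMs, so treat the
# absolute numbers loosely -- the linear scaling with capacity is the point.

HOURS_PER_MONTH = 30 * 24

def errors_per_month(capacity_gb, fit_per_mbit):
    mbits = capacity_gb * 8 * 1024                  # GB -> Mbit
    errors_per_hour = mbits * fit_per_mbit / 1e9    # FIT definition
    return errors_per_hour * HOURS_PER_MONTH

for gb in (8, 32, 64):
    lo = errors_per_month(gb, 25_000)
    hi = errors_per_month(gb, 75_000)
    print(f"{gb:>2} GB: ~{lo:,.0f} to ~{hi:,.0f} correctable errors/month")
```

Doubling capacity doubles the expected count, which is exactly the linear behavior the old article described.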
 
Did you notice there was an issue, or was that done for safety reasons... I mean, just to be on the safe side?

To be on the safe side: one big mistake could cost future business. Something as simple as an energetic particle could cause a problem. Not a die I wish to roll if I can avoid it :)
 
To be on the safe side: one big mistake could cost future business. Something as simple as an energetic particle could cause a problem. Not a die I wish to roll if I can avoid it :)

Yeah, that's cool. I doubt speed and latency were major issues for you anyway. For me, I want the fastest possible, but I don't know if it's foolish to use 64GB. I just wish ECC was fast :/

EDIT: Yeah, I just checked, and high-performance non-ECC DDR4 has 50% more bandwidth and ~40% less latency: 3200 MT/s vs 2133 MT/s, and ~8ns vs ~14ns -_- That is painfully slower.

EDIT: BTW, according to this, they observed that Skylake's memory controller does better with 4 sticks than with 2:

http://www.tomshardware.com/reviews/gskill-trident-z-ddr4-4000-memory,4362-2.html
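
For anyone who wants to sanity-check those latency figures: first-word latency is the CAS latency in clock cycles divided by the I/O clock, and DDR transfers twice per clock. A minimal sketch of the arithmetic, assuming typical timings of CL14 for the DDR4-3200 kit and CL15 for the DDR4-2133 ECC kit (the CL values are my assumption, not from the posts):

```python
# First-word latency from CAS latency: DDR transfers twice per clock,
# so the I/O clock in MHz is (MT/s) / 2, and
#   latency_ns = CL / clock_MHz * 1000.
# The CL values below are assumed typical timings.

def cas_latency_ns(mt_per_s, cl):
    clock_mhz = mt_per_s / 2        # DDR: two transfers per clock cycle
    return cl / clock_mhz * 1000

print(f"DDR4-3200 CL14: {cas_latency_ns(3200, 14):.2f} ns")  # ~8.75 ns
print(f"DDR4-2133 CL15: {cas_latency_ns(2133, 15):.2f} ns")  # ~14.07 ns
```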
 
Might have to do with the electrical load when you have that much RAM... as newer boards come out that can better handle the loading, this should become less of an issue...

In the olden days, the memory controller used to be on the motherboard. For a long time now, since at least the Athlon 64 and the Core line, the controller has been integrated into the CPU (the IMC). So now it's a question of the CPU handling the load of fully populated, high-density modules...
 
Like most of the K-series CPUs, Skylake-K may not overclock memory as well when it is fully populated with 64GB of RAM. When the CPU cannot handle the "load", it's not going to cause undetectable single-bit errors -- more like a BSOD. Whatever the configuration, it's a good idea to run Prime95, IntelBurnTest, or similar tools that can tell you if subtle math errors are being made, indicating an unstable system. Also, scan all RAM with MemTest86+ for a day or two to make sure none of the modules are bad.

The old 32GB myth also arose because consumer Intel processors couldn't address high-density modules (with 8Gbit or 16Gbit chips) correctly, due to lack of technical foresight, restricting them to 32GB at most with four slots. Now they can address 16GB modules correctly, and there's no inherent reason those modules are any more prone to single-bit errors than the 8GB ones before them.
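
To illustrate what a MemTest86+-style pattern test actually does, here's a toy sketch: write a known byte pattern into a buffer, read it back, and count mismatches. It's illustrative only -- a user-space script can't target specific physical addresses or test all of RAM, which is why the real tool boots from its own media and runs many patterns for hours:

```python
# Toy version of a memory pattern test: fill a buffer with a known
# byte pattern, read it back, and report any bytes that changed.
# Real testers (MemTest86+) walk physical addresses with many patterns;
# this only demonstrates the write/read/verify idea.

def pattern_test(size_mb=64, patterns=(0x00, 0xFF, 0x55, 0xAA)):
    n = size_mb * 1024 * 1024
    for p in patterns:
        buf = bytearray([p]) * n            # write the pattern
        mismatches = n - buf.count(p)       # read back and verify
        status = "OK" if mismatches == 0 else f"{mismatches} corrupted bytes!"
        print(f"pattern 0x{p:02X} over {size_mb} MB: {status}")

pattern_test()
```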
 
In the olden days, the memory controller used to be on the motherboard. For a long time now, since at least the Athlon 64 and the Core line, the controller has been integrated into the CPU (the IMC). So now it's a question of the CPU handling the load of fully populated, high-density modules...

The traces are still on the board...
 
So long as you run the modules at STOCK (2133), I can't see anything else causing errors for you. The memory controllers aren't the source of most DRAM errors.

Errors in memory are mostly caused by cosmic rays, which increase in intensity as the air gets thinner. If you live in Denver, you might still want ECC, but even then you don't start to see serious error rates until you go above 30k feet. So for most of us living near sea level, errors aren't much of an issue.

As long as you're not loading memory with critical data a lot of the time, you're not likely to see serious memory errors. Most memory errors affect data that you're not modifying (read-only caches of things like applications and assets), so the worst that can happen is a system crash -- and only if it hits application data. Typically, you only have to worry about memory errors if you keep large quantities of dynamic data in memory (i.e. a database or a large compute data set, like the stuff Google reported on).

If you don't modify it and store it back, it can't corrupt your data. And while you COULD increase the chance of a system crash with more RAM, we're talking about doubling an incredibly tiny number. You won't notice :D
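
To put a number on "doubling an incredibly tiny number": if each GB has some small independent chance p of an uncorrected error hitting live data in a given month, the chance of at least one such event across n GB is 1 - (1 - p)^n, which for tiny p is roughly n*p. A sketch with an arbitrary placeholder probability, not a measured rate:

```python
# Chance of at least one bad hit per month across n GB, assuming each GB
# independently has probability P_PER_GB of an error hitting live data.

P_PER_GB = 1e-5  # arbitrary placeholder, NOT a measured error rate

def p_any(p, n_gb):
    return 1 - (1 - p) ** n_gb

for gb in (32, 64):
    print(f"{gb} GB: {p_any(P_PER_GB, gb):.4%} per month")
# 64 GB comes out almost exactly 2x the 32 GB figure: double a tiny number.
```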

And I'm sure encryption will have enough redundancy and checking/voting to catch an error. Otherwise, it would be unusable.

Also, the L1 and L2 caches on processors are already ECC-protected. This means that most of the time, the vast majority of your working set is ECC-protected. The exception would be cases where you're operating on large data sets (databases or computation).
 
For the new Skylake laptops I just ordered, I went with just 32GB, using only 2 of the 4 slots so I can upgrade later to the full 64GB.
 
Look up Linus on YouTube; he did a video testing 128GB of RAM. Check out the results.
 