Do you have numbers, or are you just speculating?
I still remember my dual-CPU ABIT motherboard with dual P4s on it. Man... I was king of the world back then, lol. Loved that system... but I ran into a lot of issues with that mobo and RMA'ed it twice. I've never looked into dual-CPU systems since then. I thought they were only used in production houses, very large render farms, and stuff like that.
Well, that was a productive response.
If there's a romantic or emotional attraction to having multiple sockets, that's fine. I totally get it -- we all do things that serve entertainment or emotion rather than more tangible needs. Point is, SMP doesn't mean you have multiple sockets; that's one implementation, but multicore systems can be SMP, too. (OTOH, funny thing is that modern multi-socket systems are NUMA and not SMP.)
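For anyone curious whether their own box presents as NUMA, Linux exposes the topology under sysfs. A minimal sketch, assuming a Linux machine (the sysfs paths are standard, but a single-socket desktop will typically show just one node):

```python
import glob
import os

# On Linux, each NUMA node appears as /sys/devices/system/node/nodeN.
# A modern dual-socket board shows one node per socket (hence NUMA,
# not classic SMP); most single-socket desktops show only node0.
nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
for node in nodes:
    # cpulist holds the CPUs local to this node, e.g. "0-5,12-17".
    cpus = open(os.path.join(node, "cpulist")).read().strip()
    print(f"{os.path.basename(node)}: CPUs {cpus}")
```

Tools like `numactl --hardware` report the same information with inter-node distances included.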
Over at 2cpu there is the same thread going on.
- high price for SMP CPUs with high clock speeds, and you need two of them => $$$
- more heat and noise, plus some HSF placement problems
- no overclocking unless you go all out
So you get many cores but slow per-core speed unless you spend big money.
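That trade-off (many slow cores vs. fewer fast ones) can be eyeballed with Amdahl's law. A rough sketch; the clock speeds, core counts, and parallel fractions below are made-up illustrative numbers, not real parts:

```python
def effective_speed(clock_ghz, cores, parallel_fraction):
    # Amdahl's law: speedup over a single core at the same clock,
    # then scaled by the clock so different parts can be compared.
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return clock_ghz * speedup

# Hypothetical parts: a 12-core 2.4 GHz SMP setup vs. a 6-core 3.9 GHz desktop.
for p in (0.5, 0.8, 0.95):
    smp = effective_speed(2.4, 12, p)
    desktop = effective_speed(3.9, 6, p)
    print(f"parallel fraction {p:.2f}: SMP {smp:.1f} vs desktop {desktop:.1f}")
```

With these numbers the fast 6-core wins until the workload is very close to perfectly parallel, which is exactly why the slow-but-many-core route only pays off for embarrassingly parallel work.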
Traditional benefits include:
- much more memory capacity, because you have 2-4x the number of memory slots and they can take modules twice as large
- ECC support that always works (whether registered or unbuffered)
- the larger number of RAM slots often lets you get a good amount of RAM more cheaply, because you can use smaller modules
- quality of non-SMP boards gets spottier and spottier. You can even buy bad Gigabyte boards these days
- some removal of features from consumer CPUs, e.g. some virtualization functionality
- ECC functionality killed on non-Xeon CPUs (but not on non-SMP Xeons), and spotty availability with AMD
Myself, I actually run more SMP boards now. The main reason is that AMD doesn't perform well for my tasks anymore, so I can't get ECC on consumer boards at all, and once I need a server board and a Xeon, overclocking is gone anyway. More of my workload uses multiple CPUs, and I need a lot more RAM, so an SMP system works out right now -- although I curse the slow CPU every time RawTherapee writes out some colorspace that GIMP has to convert.
Ever consider a single processor board with a Xeon E5-1650 or E5-1660? They both support ECC, large amounts of memory and are fully unlocked. The E5-1680 V2 8 core CPU has been announced, although the jury is still out on whether or not it'll end up being unlocked.
Also, Francois Piednoel at Intel is surveying the enthusiast audience to see if there's demand for an unlocked 12 core Extreme CPU. Check it out at the link below and be sure to express your interest if such a processor does interest you.
The E5-1660 has an unlocked multiplier?
And I don't think the non-SMP Xeons support registered memory, so the maximum RAM capacity is the same as the desktops', no?
4 cores + hyperthreading are enough for me. Memory capacity, I don't even know -- I suppose 48 GB is fine-ish for now, but I just ran out of 48 in a particular project.
Indeed, the E5-1660's multiplier (and BCLK straps) are fully unlocked. It also supports registered ECC memory, as well as regular unbuffered ECC and unbuffered non-ECC memory, up to 256 GB total. The IMC of these processors is Romley-EP based. The only downside is that they're limited to single-processor operation.
Right, I forgot about the single-socket Sandy Bridge-E and its quad-channel memory. Last time I looked at it, it did indeed look attractive. Are you positive that a random Supermicro single-socket LGA 2011 board will do registered memory?
Actually, modern dual-socket systems are NUMA, not SMP. Modern desktop operating systems handle either situation just fine.
I beg to differ. In this thread, we discuss the upcoming IBM POWER8 CPU. In one post I cover server CPUs vs. desktop CPUs, cache, threading, performance, etc. In another post I discuss the poor scalability of Linux and talk about SMP, NUMA, and HPC.
I hopped into this forum to satiate my curiosity and learn something. I've considered going forward with a dual-CPU build several times. Being in a forum as active as this, and seeing the activity level (or lack thereof) in this section, is certainly cause for concern.
Cost aside, why so little interest or activity? Or is this section of the forums just super new? Wouldn't running a dual-CPU rig potentially eliminate the CPU bottleneck a lot of our newer GPUs are seeing, for example?
I've heard of these all the way back since the Tyan mobo days, so it's certainly not new tech. As enthusiasts we have latched on to extremes and have successfully mainstreamed dual-channel memory, dual GPUs, and dual hard disks -- what's the hurdle with dual CPUs? Seems like a fairly logical "next" step for enthusiast-level performance.
Just curious ^^
Maybe when threading becomes easier and more software can take advantage of it... But even now I can do a seemingly endless number of things on my "lowly" 6-core desktop.