WoW! are dual cpu systems THAT uncommon?

klepp0906

Gawd
Joined
Jul 1, 2013
Messages
594
I hopped into this forum to satiate my curiosity and learn something. I've considered several times going forward with a dual CPU build. Being in a forum as active as this, and seeing the activity level (or lack thereof) in this section, is certainly cause for concern.

Cost aside, why so little interest or activity? Or is this section of the forums just super new? Wouldn't running a dual CPU rig potentially eliminate the CPU bottleneck a lot of our newer GPUs are seeing, for example?

I've heard of these going all the way back to the Tyan mobo days - so it's certainly not new tech. As enthusiasts we have latched on to extremes, and we have successfully mainstreamed dual channel memory, dual GPUs, and dual hard disks, so what is the hurdle with dual CPUs? Seems like a fairly logical "next" step for enthusiast-level performance.

Just curious ^^
 
Dual CPU --> SMP. I think most desktop operating systems were not designed around this / aren't optimal for it.

Still, it would be cool to have dual, triple, or quad 3930Ks :D
 
The issue is really that current operating systems are limited by how processes are handled in terms of thread and memory partitioning. This isn't necessarily the OS's fault, but because modern computing was founded upon (and still is) a linear execution of instructions across a pipeline, the focus has always been on single-threaded performance.

You essentially have two types of processes: user-level and kernel-level.

You have kernel-level processes, which are maintained directly by the OS. The OS directly controls these processes and maintains the threads inside them. The OS can schedule any of the threads in this type of process across multiple cores (not virtualized). This is true core threading. The biggest problem is that in order to do any sort of thread manipulation inside one of these processes (be it switching, creating, or deleting), you take a serious performance hit (up to 10x in some cases) because the system has to switch modes to the kernel (because the kernel operates the thread). This is your only shot at true SMP.

On the other hand, you have user-level processes, which can basically operate threads with no slowdown. The only (huge) problem is that the OS sees this process as literally one process, regardless of how many threads it is running. As such, these processes CANNOT schedule threads on different physical cores. So, best case, you are left in an SMT situation.

On top of all of that, because threads share the same memory space in a process (user-level or kernel-level), you have to waste resources to ensure that the threads are synced properly when manipulating the shared memory space in the process image.

So, as you can see, computing is still essentially bottlenecked until we can figure out some creative way to truly parallelize processes and unleash threads. You can try to be crafty and schedule the appropriate workloads in either user-level or kernel-level processes, but at the end of the day it's still a compromise, because your program will still contain kernel-level processes and by extension will not be fully SMP-parallelized.
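
As a rough illustration, here's a minimal C++11 sketch (nothing OS-specific, and the thread/iteration counts are just for the demo): each std::thread below is backed by a kernel-schedulable thread, so the OS can spread them across physical cores, and the mutex is exactly the kind of shared-memory sync cost I mentioned.

Code:
#include <algorithm>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    // One software thread per hardware thread (at least two for the demo).
    const unsigned n = std::max(2u, std::thread::hardware_concurrency());

    long long counter = 0; // shared state inside the single process image
    std::mutex m;          // sync overhead paid because threads share memory

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) {
        // Each std::thread maps to a kernel-schedulable thread, so the OS
        // is free to place these on different physical cores (true SMP).
        pool.emplace_back([&counter, &m] {
            for (int j = 0; j < 100000; ++j) {
                std::lock_guard<std::mutex> lock(m); // serializes every update
                ++counter;
            }
        });
    }
    for (auto& t : pool) t.join();

    std::cout << n << " threads -> counter = " << counter << "\n";
    return 0;
}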
 
Last edited:
If you want some more information on dual/quad/eight-processor machines, you can head over to the Distributed Computing section of the forum. There are lots of different systems there. Multi-processor systems are limited in the gaming world mostly because of the clock speeds that single-processor systems can achieve through overclocking.
 
Easier to just put them all on one die. So what is the benefit of having 2 discrete CPU locations?

Well, 16 cores instead of 8 for now ... extra DIMM slots for up to 512GB of memory.

For a consumer app (client), especially a game, there is no need for that ... yet (and there's the power consumption).
 
Cost aside, why so little interest or activity? Or is this section of the forums just super new? Wouldn't running a dual CPU rig potentially eliminate the CPU bottleneck a lot of our newer GPUs are seeing, for example?

Why? Because I don't need dual CPUs. And neither do you. Why do you want to waste your money? When I joined this forum, computers were slow, latency was apparent in anything you did, and I, like many here, upgraded often. Now I use a moderately overclocked i5 2500K with an SSD and a bunch of RAM, and honestly there's nothing that can be improved on the hardware side of my PC right now that would matter, except maybe a Mushkin Enhanced Scorpion Deluxe PCI-E SSD, and even that would be a minor upgrade. Far from being worth the $500+ such a thing would cost.

I can still be a PC enthusiast without feeling the need to waste every spare dollar I have on my PC, as can you.

We're in the age of software now. It's currently software that limits what we can do, not hardware.
 
Last edited:
I think dual CPU systems are sparse in the enthusiast market because most applications only use 1-4 threads; anything that uses more is beyond the scope of even a fairly diverse power user who likes to do Photoshop/CAD and even run a VM or two. They generally don't need 12-32 cores when they can get 8 cores on the cheap.
 
Easier to just put them all on one die. So what is the benefit of having 2 discrete CPU locations?

For a consumer app (client), especially a game, there is no need for that ... yet (and there's the power consumption).

How about 80 lanes of PCIe 3.0 connectivity?

As for power consumption, I have a 2P E5 running 24/7 - it pulls 248W from the wall at full load with 16c/32t.

As for overclocking, well, it certainly seems possible judging from this HWBOT score: http://hwbot.org/benchmark/cinebench_r11.5.

In all seriousness - your best bet for info on these systems is the DC subforum. Lots of different systems and lots of knowledge in there to help you.
 
Last edited:
So you can judge what computing needs I have? Can tell me what I need? I guess I don't need my SR2, or my 2P 2011 ...... sorry ...... That, or people have specific use considerations where a two-processor motherboard, with two CPUs, is a valid use case.....


As for the OS argument, that is a bad one: Windows 8 Pro and Windows 7 Pro (and up) can all handle two CPUs without issues. The reason for the lack of them in the enthusiast market is the lack of OC'able 2P motherboards - to date there have been only a handful (recently only the SR-X and its Asus alternative, along with the SR2) - as well as the cost-prohibitive nature when a 1P machine will often do the job nearly as fast.

Why? Because I don't need dual CPUs. And neither do you. Why do you want to waste your money? When I joined this forum, computers were slow, latency was apparent in anything you did, and I, like many here, upgraded often. Now I use a moderately overclocked i5 2500K with an SSD and a bunch of RAM, and honestly there's nothing that can be improved on the hardware side of my PC right now that would matter, except maybe a Mushkin Enhanced Scorpion Deluxe PCI-E SSD, and even that would be a minor upgrade. Far from being worth the $500+ such a thing would cost.

I can still be a PC enthusiast without feeling the need to waste every spare dollar I have on my PC, as can you.

We're in the age of software now. It's currently software that limits what we can do, not hardware.
 
So you can judge what computing needs I have? Can tell me what I need? I guess I don't need my SR2, or my 2P 2011 ...... sorry ...... That, or people have specific use considerations where a two-processor motherboard, with two CPUs, is a valid use case.....

Nope, it's your misunderstanding. It would be ridiculous to claim that nobody in the entire world needs more than a 2500K. But if you're a home user and you need to ask, you don't need 2 CPUs. And yes, I would say most likely you don't. Maybe you do. Who knows. Who cares.

It's certainly annoying how few PCI-E lanes 1155/1150 has. Even 24 would be a big improvement over 20, IMO. But it's still not a huge deal for the vast majority of people (including many of those who claim it is important).
 
Last edited:
For me, because I work with Photoshop, Lightroom, and x264/MeGUI/related tools, more horsepower and moar RAM are definitely useful. I have 32GB and I have run into many situations where 32GB isn't enough. 64GB would be awesome, but I fear I would hit limits occasionally with that too. :( SMP-wise, does Lightroom even support that? o_O

In the future I want to get the SSD cache add-on module for my RAID card so that I can get an Intel DC S3500 800GB for a 500GB SSD cache setup, add a couple more 2TB enterprise-grade HDDs to the RAID6, and have a few 4TB enterprise-grade HDDs for compressed backups using Acronis. I would also like to replace my AMD GPU with an NVIDIA one, mainly since Adobe seems to have a preference for NVIDIA + CUDA (I also have a sneaking suspicion that this is why Premiere Pro final renders perform at <1.0 FPS :( ), and replace my 128GB SSD OS drive with two RAID1 512GB SSDs.

Yesh, overkill, but the performance is worth it for what I'm doing. I wish the software I used were designed more efficiently in terms of multithreading/multicore/RAM/64-bit/etc. :| My 3930K is wonderful <3 I wish I could have more 3930Ks and be able to effectively harness their horsepowa. As far as computer games go, the GPU is definitely the bottleneck. :D
 
Well, they do exist, but a lot of people use them for 'work' purposes as opposed to gaming. They are also very expensive rigs for enthusiasts who like to play with things until they fail. For reliability's sake, most people aren't going to be inclined to push things to the verge of breaking. Then there are the others, like me ;)

An advantage I can see for gaming would be to use one of the dual CPU motherboards with 4 x double-wide 16-lane PCIe 3.0 slots. You wouldn't have to get 2 x 8-core CPUs - stick with some higher-clocked 4- or 6-core Xeons and you could have an impressive gaming rig of 4 x GTX Titans running at full PCIe 3.0 bandwidth.

I personally use my machine for 3D animation and video rendering. The software I use absolutely takes full advantage of all available cores, and although some things are heading towards GPGPU rendering, the CPU is still king for a lot of tasks.
 
Well, they do exist, but a lot of people use them for 'work' purposes as opposed to gaming. They are also very expensive rigs for enthusiasts who like to play with things until they fail. For reliability's sake, most people aren't going to be inclined to push things to the verge of breaking. Then there are the others, like me ;)

An advantage I can see for gaming would be to use one of the dual CPU motherboards with 4 x double-wide 16-lane PCIe 3.0 slots. You wouldn't have to get 2 x 8-core CPUs - stick with some higher-clocked 4- or 6-core Xeons and you could have an impressive gaming rig of 4 x GTX Titans running at full PCIe 3.0 bandwidth.

I personally use my machine for 3D animation and video rendering. The software I use absolutely takes full advantage of all available cores, and although some things are heading towards GPGPU rendering, the CPU is still king for a lot of tasks.
Heh heh heh... it's a wonderful feeling to see all 6 cores + HT in Task Manager go to 80% usage. :D
 
Dual-socket systems aren't common because Intel virtually holds a monopoly on the workstation side of them and charges a mint for legit dual-QPI Xeons, which are about as locked down as a chip can get. Fact is, most enthusiasts in the sense you speak of are gamers, and even a small overclock pushes the 3930K past the fastest 8-core 2687W in games and benches, negating that "enthusiast level performance" you were looking for.

Xeons are great for work; they do exactly what you need them to, giving you many threads at a decent clock speed, locked to a specific TDP to keep heat to a minimum. The overclocking boards were created before manufacturers knew the chips would be locked - that's why the SR-X is dead already. I've owned the Z9PE-D8 myself, and it was a supreme hassle to set up, very picky about RAM speeds and other settings.

Here is what happens when you saddle a decent GPU with a pair of average Xeons:

http://www.3dmark.com/3dm11/4099244
 
Easier to just put them all on one die. So what is the benefit of having 2 discrete CPU locations?

Well, 16 cores instead of 8 for now ... extra DIMM slots for up to 512GB of memory.

For a consumer app (client), especially a game, there is no need for that ... yet (and there's the power consumption).

This

Times have changed; CPUs with multiple cores and threads have taken the place of the need for dual socket systems. Now of course, the hardcore video and CAD crowd, and those who do a lot of virtualization, may have a need for them. But even the [H]ardest of the [H]ard do not need more than a single i7... which is still more than is probably 'needed'.
 
Multiple cores on one chip greatly reduced interest in dual CPU workstations/desktops. I used to build only dual CPU machines after I got out of college and could afford them. Then multi-core CPUs hit, and I no longer had any reason to. I'm a programmer and having 2 CPUs was really really nice back in the day, but now there's just no way for me to justify the cost unless I come up with something that needs it. So far, I haven't.

This section of the forum more or less hit its peak when dual core CPUs were first coming out. All the gamers with shiny new dual core CPUs came here to talk about what to do with multiple CPU cores. In time multiple cores became standard, and the discussion moved back to the Intel and AMD sections where it now belongs.

There is a lot more multi-CPU discussion on [H] than you see here though. Most of it's in the distributed computing subforum.

Dual (or more) CPUs are alive and well in server land. I've got a test rack full of them at work. I don't really get to bring work home, though, so that doesn't result in me needing big machines at home. I do have a couple of quads... cast-off Socket F Opteron machines with 16 cores that work didn't want anymore and gave away to the employees.
 
Why? Because I don't need dual CPUs. And neither do you. Why do you want to waste your money? When I joined this forum, computers were slow, latency was apparent in anything you did, and I, like many here, upgraded often. Now I use a moderately overclocked i5 2500K with an SSD and a bunch of RAM, and honestly there's nothing that can be improved on the hardware side of my PC right now that would matter, except maybe a Mushkin Enhanced Scorpion Deluxe PCI-E SSD, and even that would be a minor upgrade. Far from being worth the $500+ such a thing would cost.

I can still be a PC enthusiast without feeling the need to waste every spare dollar I have on my PC, as can you.

We're in the age of software now. It's currently software that limits what we can do, not hardware.

How do you know I'm not some high-level researcher crunching algorithms for cancer research? =P LOL but no, you're absolutely right. I don't "need" it, but then again, does the guy who pours thousands into his big block for 700 RWHP "need" more than 200 HP? Nope.. but we're Americans, we love excess regardless of what it is. For me, PCs are my thing, and it would certainly be cool :p

Gotta waste your money somewhere, and when it comes to PCs I'd rather have too much than too little >.>
 
The issue is really that current operating systems are limited by how processes are handled in terms of thread and memory partitioning. This isn't necessarily the OS's fault, but because modern computing was founded upon (and still is) a linear execution of instructions across a pipeline, the focus has always been on single-threaded performance.

You essentially have two types of processes: user-level and kernel-level.

You have kernel-level processes, which are maintained directly by the OS. The OS directly controls these processes and maintains the threads inside them. The OS can schedule any of the threads in this type of process across multiple cores (not virtualized). This is true core threading. The biggest problem is that in order to do any sort of thread manipulation inside one of these processes (be it switching, creating, or deleting), you take a serious performance hit (up to 10x in some cases) because the system has to switch modes to the kernel (because the kernel operates the thread). This is your only shot at true SMP.

On the other hand, you have user-level processes, which can basically operate threads with no slowdown. The only (huge) problem is that the OS sees this process as literally one process, regardless of how many threads it is running. As such, these processes CANNOT schedule threads on different physical cores. So, best case, you are left in an SMT situation.

On top of all of that, because threads share the same memory space in a process (user-level or kernel-level), you have to waste resources to ensure that the threads are synced properly when manipulating the shared memory space in the process image.

So, as you can see, computing is still essentially bottlenecked until we can figure out some creative way to truly parallelize processes and unleash threads. You can try to be crafty and schedule the appropriate workloads in either user-level or kernel-level processes, but at the end of the day it's still a compromise, because your program will still contain kernel-level processes and by extension will not be fully SMP-parallelized.

Tyvm for the detailed explanation. It definitely shed plenty of light on the reasoning. I'm gonna head over to the dist comp forum and see what I find over there =) I always knew they weren't ideal on a consumer level, but never knew the OS was much of the limitation. Makes sense though, as the server environments where you see a lot of these in use run a server OS as well. I assume those are better equipped to take advantage of it?
 
So you can judge what computing needs I have? Can tell me what I need? I guess I don't need my SR2, or my 2P 2011 ...... sorry ...... That, or people have specific use considerations where a two-processor motherboard, with two CPUs, is a valid use case.....


As for the OS argument, that is a bad one: Windows 8 Pro and Windows 7 Pro (and up) can all handle two CPUs without issues. The reason for the lack of them in the enthusiast market is the lack of OC'able 2P motherboards - to date there have been only a handful (recently only the SR-X and its Asus alternative, along with the SR2) - as well as the cost-prohibitive nature when a 1P machine will often do the job nearly as fast.

That's what got me thinking about it - noticing the SR2. Interesting to know they can be overclocked and are of "some" use with today's consumer OS. I had assumed it would be similar to SLI and nowhere near 1:1 scaling. Like an analogy I used earlier, 5 radiators isn't really necessary either, but when I build a PC once every 5 years or so.. I like to go a little bit overboard when I can =)
 
This

Times have changed; CPUs with multiple cores and threads have taken the place of the need for dual socket systems. Now of course, the hardcore video and CAD crowd, and those who do a lot of virtualization, may have a need for them. But even the [H]ardest of the [H]ard do not need more than a single i7... which is still more than is probably 'needed'.

To play devil's advocate to my own post: I did read that next year's Haswell-E will have 8 physical cores. Of course that's overkill until they start writing apps/games to take advantage - then I'd say, what about 2 Haswell-Es again :p
 
I have 5 2P systems and 2 4P systems, so there are people running multiprocessor systems. One is my file server and the rest are folding boxes.

In the DC subforum, we have some crazy builds. :)
 
Which is exactly where I'm heading. That's crazy lol. Someone has the big bucks =P

Have you done any tests, folding or otherwise, that let you see the productivity difference between successive CPUs?
 
Depending on the program, most Folding or WCG programs will scale linearly (or very close to it) with core count. However, many are leaning towards GPUs, which offer more parallelism.
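
If you want to sanity-check that near-linear scaling yourself, here's a rough C++ sketch - the arithmetic loop is just a hypothetical stand-in for a work unit, not a real folding client - that times a fixed amount of work at increasing thread counts:

Code:
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for a folding-style work unit: pure CPU, no shared state.
static double burn(long iters) {
    double x = 0.0;
    for (long i = 0; i < iters; ++i) x += 1.0 / (1.0 + (i % 97));
    return x;
}

int main() {
    const long total = 100000000; // fixed total work, split across the threads
    const unsigned maxThreads = std::max(1u, std::thread::hardware_concurrency());

    for (unsigned n = 1; n <= maxThreads; n *= 2) {
        auto start = std::chrono::steady_clock::now();

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back([=] { volatile double r = burn(total / n); (void)r; });
        for (auto& t : pool) t.join();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        // Near-linear scaling shows up as the time roughly halving per doubling.
        std::printf("%2u threads: %lld ms\n", n, static_cast<long long>(ms));
    }
    return 0;
}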

If you're strictly gaming, there's very little benefit over a modern 4770K or the like. Most of the bottleneck is GPU-related.

Lastly, more CPUs = more money, more heat, more size. Learned that the hard way, very quickly. Haha
 
Which is exactly where I'm heading. That's crazy lol. Someone has the big bucks =P

Have you done any tests, folding or otherwise, that let you see the productivity difference between successive CPUs?

Definitely crazy; we've reserved "insane" for the OC'd, watercooled 4P Opteron rig that one of our members has :eek::D

As for tests, nothing formal - we are all too busy building more rigs and chatting in IRC.
 
As a programmer I ran dual CPU systems at home from 1994 to 2008. They worked just fine through many versions of Windows and Linux. In 2008 I replaced two dual-core Opterons with a single quad-core that was significantly faster and used less than half the power.

How do you know I'm not some high-level researcher crunching algorithms for cancer research

That is actually one reason for me to consider these; however, for the most part (unless I am debugging) I do not process cases at home. A second reason is compiling code. When I update my libraries I need to build 7 or so million lines of code. This takes quite a few hours even on a 12-thread processor. On top of that, Visual Studio does not scale well for all types of code. I mean, at times I see all 12 cores building, but then when the dependencies increase, the utilization decreases to the point that only one or two cores are building. I have noticed (even on RAM disks) that core utilization greatly decreases when the number of include folders for a project increases greatly. Projects with 50-plus include folders tend not to make good use of the processors. I believe the problem is caused by switching to kernel mode to get the directory listing.
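
If anyone wants to test that hunch, here's a small Windows-only C++ sketch (the folder paths are made up - swap in your own include directories) that times raw directory enumeration with FindFirstFileA/FindNextFileA:

Code:
#include <windows.h>
#include <chrono>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Hypothetical include folders; substitute the ones from your project.
    std::vector<std::string> folders = {
        "C:\\dev\\libA\\include",
        "C:\\dev\\libB\\include",
    };

    auto start = std::chrono::steady_clock::now();
    long entries = 0;

    for (const auto& dir : folders) {
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA((dir + "\\*").c_str(), &fd); // kernel-mode transition
        if (h == INVALID_HANDLE_VALUE) continue; // missing folder, skip it
        do {
            ++entries;
        } while (FindNextFileA(h, &fd)); // batched internally, so not every call crosses over
        FindClose(h);
    }

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("%ld directory entries across %zu folders in %lld us\n",
                entries, folders.size(), static_cast<long long>(us));
    return 0;
}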
 
To play devil's advocate to my own post: I did read that next year's Haswell-E will have 8 physical cores. Of course that's overkill until they start writing apps/games to take advantage - then I'd say, what about 2 Haswell-Es again :p

With the new consoles, we should finally start to see games using threads much more efficiently than before. The developers are being forced to utilize more threads, and in a more efficient fashion, on the new 8-core AMD APU systems, since those only run at ~1.6 GHz.
 
Some day lol. When I am rich I might get one to dedicate to folding. Or buy an old one for a cool desktop replacement...
Except I would end up wanting to watercool it, and it would be just a ridiculous waste of money.
 
Yea, the new consoles are impressive. They're actually much closer spec-wise to gaming PCs than previous generations were. Couple that with the fact that they're dedicated to that and that only, plus the larger pool of games to draw from, and they're looking pretty attractive this time around. Completely 'nother topic altogether though >.>
 
I've got an office full of dual CPU machines, and I hope to replace my dual E5 workstation with quad E5s soon enough. In the financial world, this sort of thing is alive and well. My workstation chugs along at 50GB of memory usage most of the time (due to keeping some very large datasets in memory), and I run a number of applications that will use every thread I can throw at them. More speed in what I do has real economic value.
 
Back in the day, 2 single-core CPU systems were so much more responsive. But now we have 8 threads on a die, or even 12 for SB-E/IB-E, and maybe 16 for HW-E sometime soon. Multi-socket just makes for a large motherboard, even for the average enthusiast. But there will always be a niche market for those with the need.
 
Many duallie motherboards are artificially limited: lack of overclocking ability, lack of SLI support, and expensive processors. The benefits are extra I/O support, much higher memory capacity, and a fair bit more multi-threaded grunt than the typical uniprocessor system. But, as was mentioned before, the increasing core count per socket is making an actual duallie more and more of a niche product, unless one truly needs the unique features mentioned above that a duallie brings to the table. Today, when overclocking a uniprocessor system, one can narrow the performance gap considerably between it and a locked duallie. An unlocked duallie can be very valuable in many areas, considering even top SKUs can add at least 1 GHz to their clock speed, and in a dual CPU situation the gains are doubled. But, alas, with all new dual-capable CPUs being hard locked, an overclockable duallie is becoming as rare as hen's teeth... :(
 
The DC forum is also a great resource for 2P and 4P info.
 
The DC forum is also a great resource for 2P and 4P info.

And I see that half the comments about use cases are from fellow DCers.
Need is relative. Games don't see much use beyond i5s unless you are at 4K or 1600p and need to run multiple graphics cards and actually have the PCIe lanes for them.

I run dual 2680s in my daily driver and you are right... the BF4 beta was hitting 15% CPU usage. But rendering and folding will use everything I throw at them. As do my other 4P systems....
 
I hopped into this forum to satiate my curiosity and learn something. I've considered several times going forward with a dual CPU build. Being in a forum as active as this, and seeing the activity level (or lack thereof) in this section, is certainly cause for concern.

Cost aside, why so little interest or activity? Or is this section of the forums just super new? Wouldn't running a dual CPU rig potentially eliminate the CPU bottleneck a lot of our newer GPUs are seeing, for example?

I've heard of these going all the way back to the Tyan mobo days - so it's certainly not new tech. As enthusiasts we have latched on to extremes, and we have successfully mainstreamed dual channel memory, dual GPUs, and dual hard disks, so what is the hurdle with dual CPUs? Seems like a fairly logical "next" step for enthusiast-level performance.

Just curious ^^

Simple. Ever since higher-clocked multi-core systems became popular, the need for multiple sockets has been fading, as it should.
A system with multiple sockets isn't a feature but rather a workaround for a limitation (that of not being able to put more cores on a socket).
With time, that limitation is being lifted.

I still run dual-CPU systems. However, this has only been because I need more RAM in my systems, and running multiple CPUs was the most affordable way to get it. ;)
 
Dual CPU --> SMP. I think most desktop operating systems were not designed around this / aren't optimal for it.
Actually, modern dual socket systems are NUMA, not SMP. Modern desktop operating systems handle either situation just fine.
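
For the curious, here's a minimal Windows C++ sketch (Linux would use libnuma instead) that asks the OS how the cores are split across NUMA nodes - on a 2P box you should see two processor masks:

Code:
#include <windows.h>
#include <cstdio>

int main() {
    ULONG highest = 0;
    if (!GetNumaHighestNodeNumber(&highest)) {
        std::printf("NUMA query failed: error %lu\n", GetLastError());
        return 1;
    }

    // Typically one NUMA node per socket; a 1P desktop reports node 0 only.
    for (ULONG node = 0; node <= highest; ++node) {
        ULONGLONG mask = 0;
        if (GetNumaNodeProcessorMask(static_cast<UCHAR>(node), &mask))
            std::printf("node %lu: processor mask 0x%llx\n", node, mask);
    }
    return 0;
}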
 
Simple. Ever since higher-clocked multi-core systems became popular, the need for multiple sockets has been fading, as it should.
A system with multiple sockets isn't a feature but rather a workaround for a limitation (that of not being able to put more cores on a socket).
With time, that limitation is being lifted.

I still run dual-CPU systems. However, this has only been because I need more RAM in my systems, and running multiple CPUs was the most affordable way to get it. ;)

Yeah, RAM is the biggest issue, I think, that causes people to look at multiprocessor systems. Even a Sandy/Ivy-E will give you basically only 64GB of RAM, which sounds like a lot, but you can blow through that fast setting up big environments in ESX/Hyper-V.
 
Yeah, RAM is the biggest issue, I think, that causes people to look at multiprocessor systems. Even a Sandy/Ivy-E will give you basically only 64GB of RAM, which sounds like a lot, but you can blow through that fast setting up big environments in ESX/Hyper-V.

Yup... I certainly missed the RAM when I switched to a 1P SB-E from a dual-proc AMD G34.... going from 16 slots down to 4... that sucked.

Now I am back to dual socket SB-E running 64GB half populated, with 8GB DIMMs...
 