Pictures Of Your Dually Rigs!

Two 8C/16T Ivy Bridge Xeon E5-2650 v2's with 256GB of ECC RAM. This would have been quite the workstation back when it was new :p


But just to illustrate that that was quite a long time ago :p

2650v2_CinebenchR23.PNG
 
I lied.

New server is up and running, and I couldn't bring myself to decommission my dually, so I used it to upgrade my testbench build.

The server fans were a little much though, so I grabbed some 92mm Noctuas to replace them.

View attachment 623718

Two 8C/16T Ivy Bridge Xeon E5-2650 V2's with 256GB of ECC RAM. This would have been quite the workstation back when it was new :p

All the PCIe lanes definitely help with all of my testbench/backup/imaging work though. And the fact that it has ECC makes me feel better about using ZFS on it.

The question is what I do with the old Sandy Bridge-E x79 Workstation board I was using in the testbench. It has been with me since 2011. I almost get a little misty-eyed at the thought of it no longer being in service somehow.

View attachment 623719

It was my main desktop from 2011 to 2019. I bought it when Bulldozer sucked at launch, and used it until I upgraded to my Threadripper in 2019; then it went into the testbench, where it has been ever since.


Also, here's a reminder that if you install Windows (or simply move a Windows install from an older system) on a system with a large amount of RAM, unless you have a corresponding very large drive, Windows WILL take over your entire drive with hiberfil.sys :p

256GB of RAM = a 256GB hiberfil.sys + swapfile on a 400GB (~380GB usable) Intel SSD 750 PCIe drive (the only NVMe drive I've ever found with an OPROM that loads during POST and allows you to boot on non-NVMe-aware motherboards), which I partitioned as 100GB for Linux and 280GB for Windows. That doesn't leave much free :p

"Why is the drive full? I don't remember storing stuff on this drive or filling it with programs... ...oh"

And now we have disabled hibernation and swap. We won't get that fancy fast booting hibernation stuff, but I don't care.
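The back-of-the-envelope space math looks roughly like this (the pagefile size here is an assumed illustrative value; Windows sizes it dynamically):

```python
# Rough space math for the scenario above (all sizes in GB, approximate).
# The pagefile size is an assumed value for illustration; the real default varies.
ram = 256
windows_partition = 280

hiberfil = ram            # hiberfil.sys defaults to roughly the size of RAM
pagefile = 16             # assumed for illustration

free_space = windows_partition - hiberfil - pagefile
print(f"Free space left for Windows and programs: ~{free_space}GB")
```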
 
The question is what I do with the old Sandy Bridge-E x79 Workstation board I was using in the testbench. It has been with me since 2011. I almost get a little misty-eyed at the thought of it no longer being in service somehow.

View attachment 623719

It was my main desktop from 2011 to 2019. I bought it when Bulldozer sucked at launch, and used it until I upgraded to my Threadripper in 2019; then it went into the testbench, where it has been ever since.
That's still a beautiful board that will do great for someone's first NAS build, with its native 8x SATA ports and x16 slots for 10Gb networking. It's still got a long life ahead of it...somewhere.
 
I lied.

New server is up and running, and I couldn't bring myself to decommission my dually, so I used it to upgrade my testbench build.

The server fans were a little much though, so I grabbed some 92mm Noctuas to replace them.

View attachment 623718

Two 8C/16T Ivy Bridge Xeon E5-2650 V2's with 256GB of ECC RAM. This would have been quite the workstation back when it was new :p

All the PCIe lanes definitely help with all of my testbench/backup/imaging work though. And the fact that it has ECC makes me feel better about using ZFS on it.

The question is what I do with the old Sandy Bridge-E x79 Workstation board I was using in the testbench. It has been with me since 2011. I almost get a little misty-eyed at the thought of it no longer being in service somehow.

View attachment 623719

It was my main desktop from 2011 to 2019. I bought it when Bulldozer sucked at launch, and used it until I upgraded to my Threadripper in 2019; then it went into the testbench, where it has been ever since.
But just to illustrate that that was quite a long time ago :p

View attachment 623721

I'm tempted to - for fun - go on eBay and see if I can pick up the fastest this generation of Xeon E5's had to offer. (Ivy Bridge is where this platform maxes out, because after that they switched to DDR4)

I'm torn. Do I go for max clock speed, something like a pair of E5-2667 v2's or 2687W v2's, which both have 8C/16T and can boost to 4.0GHz, or do I go in the opposite direction and maximize core count, something like a pair of E5-2697 v2's, which have 12C/24T and can boost up to 3.5GHz...

It will probably come down to what is cheapest. This is totally just a "because I am curious" project, with little to no budget or need to justify it.


I mean, for my use case, higher clock speeds are probably going to be more beneficial, but we are only talking going from a max boost of 3.4GHz to 4.0GHz, a whopping 17.6% clock speed increase.
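That percentage, spelled out:

```python
# Relative boost-clock gain going from the 2650 v2's 3.4GHz to 4.0GHz.
slow_boost_ghz = 3.4
fast_boost_ghz = 4.0
gain_pct = (fast_boost_ghz - slow_boost_ghz) / slow_boost_ghz * 100
print(f"{gain_pct:.1f}% higher peak clock")  # ~17.6%
```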

From a "10 minutes of fun, running and posting a Cinebench benchmark" perspective, the 12 core models would probably win :p

Looking at the options, a pair of E5-2687W v2's are the most expensive option, surprisingly still going for almost $100. 8 cores, 3.4GHz base, 4.0GHz max. A whopping 150W TDP. That might be a bit much for the little 92mm 4U-compatible CPU coolers.

The E5-2667 v2's are almost identical. Same core count (8), same boost clock (4.0GHz), but a 100MHz lower base clock (3.3GHz vs 3.4GHz). A slightly more reasonable 130W TDP. A pair of these will run me $47.99, which is a more reasonable budget for this fun little project.

A pair of 12C/24T E5-2697 v2's are about the same price. We are talking a base of 2.7GHz and a max clock of 3.5GHz, so a 100MHz increase on both base and boost clocks over my current 2650 v2's, but 50% more cores...

Decisions, decisions :p

I've eliminated the 2687W's. Too expensive for a fun little curiosity project. That, and the 150W TDP is a little much.

So it's 8C/16T (total of 16C/32T across two sockets) E5-2667 v2 at 4GHz OR 12C/24T (total of 24C/48T across two sockets) E5-2697 v2 at 3.5GHz. Marginally better everyday responsiveness for the testbench/imaging/IT type workloads I use it for, OR a 2013-era Cinebench queen, so I can say "yeah, that box over there in the corner imaging that drive is 24C/48T over two sockets" :p


As I was typing this, it struck me that there was a missed opportunity here to have asymmetrical multi-socket systems. You know, put a high clocking, few core CPU in one socket, to handle peak low threaded loads, and a many core low clocked CPU in another for threaded stuff. It would have required the OS scheduler to be aware, which it definitely wasn't in 2013, but it would have resulted in some fun solutions.
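A toy sketch of what an asymmetry-aware scheduler policy might have looked like (the socket specs and the routing rule are made up for illustration; no 2013-era OS scheduler did anything like this):

```python
# Toy model: route narrow, bursty jobs to the high-clock socket and wide
# parallel jobs to the many-core socket. Purely illustrative.
FAST_SOCKET = {"name": "8C @ 4.0GHz", "cores": 8}
WIDE_SOCKET = {"name": "12C @ 3.5GHz", "cores": 12}

def pick_socket(job_threads: int) -> dict:
    """Naive policy: a job goes to the fast socket if it has few enough threads."""
    return FAST_SOCKET if job_threads <= FAST_SOCKET["cores"] else WIDE_SOCKET

print(pick_socket(2)["name"])   # light desktop work -> fast socket
print(pick_socket(20)["name"])  # wide batch work -> many-core socket
```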
 
So it's 8C/16T (total of 16C/32T across two sockets) E5-2667 v2 at 4GHz OR 12C/24T (total of 24C/48T across two sockets) E5-2697 v2 at 3.5GHz. Marginally better everyday responsiveness for the testbench/imaging/IT type workloads I use it for, OR a 2013-era Cinebench queen, so I can say "yeah, that box over there in the corner imaging that drive is 24C/48T over two sockets" :p

So which one would you guys go for? :p
 
I'm tempted to - for fun - go on eBay and see if I can pick up the fastest this generation of Xeon E5's had to offer. (Ivy Bridge is where this platform maxes out, because after that they switched to DDR4)

I'm torn. Do I go for max clock speed, something like a pair of E5-2667 v2's or 2687W v2's, which both have 8C/16T and can boost to 4.0GHz, or do I go in the opposite direction and maximize core count, something like a pair of E5-2697 v2's, which have 12C/24T and can boost up to 3.5GHz...

It will probably come down to what is cheapest. This is totally just a "because I am curious" project, with little to no budget or need to justify it.


I mean, for my use case, higher clock speeds are probably going to be more beneficial, but we are only talking going from a max boost of 3.4GHz to 4.0GHz, a whopping 17.6% clock speed increase.

From a "10 minutes of fun, running and posting a Cinebench benchmark" perspective, the 12 core models would probably win :p

Looking at the options, a pair of E5-2687W v2's are the most expensive option, surprisingly still going for almost $100. 8 cores, 3.4GHz base, 4.0GHz max. A whopping 150W TDP. That might be a bit much for the little 92mm 4U-compatible CPU coolers.

The E5-2667 v2's are almost identical. Same core count (8), same boost clock (4.0GHz), but a 100MHz lower base clock (3.3GHz vs 3.4GHz). A slightly more reasonable 130W TDP. A pair of these will run me $47.99, which is a more reasonable budget for this fun little project.

A pair of 12C/24T E5-2697 v2's are about the same price. We are talking a base of 2.7GHz and a max clock of 3.5GHz, so a 100MHz increase on both base and boost clocks over my current 2650 v2's, but 50% more cores...

Decisions, decisions :p

I've eliminated the 2687W's. Too expensive for a fun little curiosity project. That, and the 150W TDP is a little much.

So it's E5-2667 v2 vs E5-2697 v2. Slightly better everyday responsiveness for the testbench/imaging/IT type workloads I use it for, OR a 2013-era Cinebench queen, so I can say "yeah, that box over there in the corner imaging that drive is 24C/48T over two sockets" :p


As I was typing this, it struck me that there was a missed opportunity here to have asymmetrical multi-socket systems. You know, put a high clocking, few core CPU in one socket, to handle peak low threaded loads, and a many core low clocked CPU in another for threaded stuff. It would have required the OS scheduler to be aware, which it definitely wasn't in 2013, but it would have resulted in some fun solutions.
For funsies like this, I will usually use the passmark results to help figure out what cpu and then ebay for the pricing:
https://www.cpubenchmark.net/compar...-E5-2697-v2-vs-Intel-i5-2500K-vs-Intel-i5-680

What's interesting is that the 'faster' CPUs really won't feel much faster in my experience, as I can barely tell the difference between the i5-680 and i5-2500K that I put in the comparison, and their single-thread scores are also about 200 apart. So, unless you are doing something that will truly utilize the faster IPC over number of cores (like gaming), then more cores probably make sense--especially when the core count on a dual 2697 setup is going to be 24 vs 16, which is a whopping 50% more cores even if each core is 13% slower.

In fact, we can compare the performance between these using some rudimentary math, with the 2687W as a baseline. If '1' represents the performance of a single core on this setup, then a dual 2687W setup has a performance of '16'. If the 2697 is 13% slower, then a single core in a 2697 will be '0.87', and the performance of a dual 2697 will be 24 × 0.87 = '20.88'. And since 20.88 > 16, the slower-IPC dual 2697 setup is actually faster when all the cores are maxed out.
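That estimate as code (per-core speed of the 2687W normalized to 1.0, ignoring turbo and memory effects):

```python
# Aggregate throughput estimate from the post above: cores x relative per-core speed.
def aggregate_throughput(cores: int, sockets: int, per_core_speed: float) -> float:
    return cores * sockets * per_core_speed

dual_2687w = aggregate_throughput(8, 2, 1.0)    # baseline: 16.0
dual_2697 = aggregate_throughput(12, 2, 0.87)   # 24 x 0.87 = 20.88
print(dual_2687w, dual_2697)  # the slower-per-core 2697 pair wins when fully loaded
```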
 
So which one would you guys go for? :p
Whatever you think you'll enjoy the most.

I use older systems for DC and run them under Windows, so I would go for a blend of many/fast cores, up to 16C/32T each. Windows gets stupid on some workloads with more than 64 threads and underutilizes my 20C/40T dual systems unless I micromanage them. I don't have time for that.

I have a single socket x79 - 2011 using an E5-2697v2. I chose this particular CPU because I wanted to see how it compared to my dual-processor EVGA SR2 running a pair of Xeon X5690's. So, a faster, but older, 2P running 12/24 against a slightly newer, but slower, 1P running 12/24.

No surprises, the older faster system is faster, slightly, and uses much more power. But, having just powered them on and pulled these SS's made me happy.

So, whatever makes you happy.

1704392788814.png
1704393301440.png

1704393591183.png
1704393624824.png
 
For funsies like this, I will usually use the passmark results to help figure out what cpu and then ebay for the pricing:
https://www.cpubenchmark.net/compar...-E5-2697-v2-vs-Intel-i5-2500K-vs-Intel-i5-680

What's interesting is that the 'faster' CPUs really won't feel much faster in my experience, as I can barely tell the difference between the i5-680 and i5-2500K that I put in the comparison, and their single-thread scores are also about 200 apart. So, unless you are doing something that will truly utilize the faster IPC over number of cores (like gaming), then more cores probably make sense--especially when the core count on a dual 2697 setup is going to be 24 vs 16, which is a whopping 50% more cores even if each core is 13% slower.

In fact, we can compare the performance between these using some rudimentary math, with the 2687W as a baseline. If '1' represents the performance of a single core on this setup, then a dual 2687W setup has a performance of '16'. If the 2697 is 13% slower, then a single core in a 2697 will be '0.87', and the performance of a dual 2697 will be 24 × 0.87 = '20.88'. And since 20.88 > 16, the slower-IPC dual 2697 setup is actually faster when all the cores are maxed out.

Agreed. It's not quite that straightforward though. For one, faster cores improve all workloads, whereas more cores only improve some workloads.

That, and while the few-core max boost on the 2697 v2's is 3.5GHz, the all-core turbo speed is reportedly limited to only 3GHz.

Still, as you say, the difference is unlikely to be noticeable in anything I do. This is strictly for funsies. And since that is the case, I think the 2697 v2 might be more fun :p

More cores, more fun?


View: https://www.youtube.com/watch?v=EbXSbP-wEFU&t=27s
 
More cores is the way to go on LGA 2011 stuff. The single-core performance at this point is horrific compared to modern gear, so it's nice to have a few extra cores for workloads that can use them.
 
Agreed. It's not quite that straightforward though. For one, faster cores improve all workloads, whereas more cores only improve some workloads.

That, and while the few-core max boost on the 2697 v2's is 3.5GHz, the all-core turbo speed is reportedly limited to only 3GHz.

Still, as you say, the difference is unlikely to be noticeable in anything I do. This is strictly for funsies. And since that is the case, I think the 2697 v2 might be more fun :p

More cores, more fun?


View: https://www.youtube.com/watch?v=EbXSbP-wEFU&t=27s

Yep, more cores, more fun. :) Just gotta exercise them all *cough* Cinebench *cough* :D
 
My desktop used to always be a multi-CPU rig, but with a 10-core 2011-3 and then a 3900X, I think I've been converted. The single core is just too slow, especially with security mitigations. All my servers and desktops run ESXi.

The two multi processing systems I still use are:

A dual E5-2670 rig with many SAS HDDs, used as a backup server. I've had these 2670s for years and originally used them in my desktop with an EVGA SRX motherboard.
20240104_130100.jpg


old SRX setup circa 2017-
4lglKop.jpeg


I also recently put together this 2U box. A dual E5645 rig on a Supermicro X8DTN motherboard, that I managed to install and run a 1TB NVMe SSD on. 1366 is probably my favorite socket and is still reasonably capable. I would love to find an EVGA SR2 at some point.
961497_20240107_172218.jpg

961499_20240107_172006.jpg
 
My desktop used to always be a multi-CPU rig, but with a 10-core 2011-3 and then a 3900X, I think I've been converted. The single core is just too slow, especially with security mitigations. All my servers and desktops run ESXi.

The two multi processing systems I still use are:

A dual E5-2670 rig with many SAS HDDs, used as a backup server. I've had these 2670s for years and originally used them in my desktop with an EVGA SRX motherboard.
View attachment 625457

old SRX setup circa 2017-
View attachment 625446

I also recently put together this 2U box. A dual E5645 rig on a Supermicro X8DTN motherboard, that I managed to install and run a 1TB NVMe SSD on. 1366 is probably my favorite socket and is still reasonably capable. I would love to find an EVGA SR2 at some point.
View attachment 625443
View attachment 625444

Nice case. :D
 
But just to illustrate that that was quite a long time ago :p

View attachment 623721
Agreed. It's not quite that straightforward though. For one, faster cores improve all workloads, whereas more cores only improve some workloads.

That, and while the few-core max boost on the 2697 v2's is 3.5GHz, the all-core turbo speed is reportedly limited to only 3GHz.

Still, as you say, the difference is unlikely to be noticeable in anything I do. This is strictly for funsies. And since that is the case, I think the 2697 v2 might be more fun :p

More cores, more fun?


View: https://www.youtube.com/watch?v=EbXSbP-wEFU&t=27s


Well, there we go.

CinebenchR23_E5-2697v2.PNG


I have to say, that was slightly disappointing. Maybe Windows Update was screwing me in the background or something.

Going to have to re-run it at some point.

That said, this is probably pretty close to the ceiling for Ivy Bridge, at least at stock.

I could probably get a little bit more performance out of it by switching in some non-ECC UDIMM's at desktop speeds instead of the 16x16GB Registered ECC DDR3-1600. (I want to say it supports up to 1866MHz)

Also, I wonder how much faster it would have been before the Spectre/Meltdown mitigations....

Anyway, we are talking about $1000 for the motherboard, $2600 each for the two CPU's, and another ~$1000 for the RAM back in 2013. That's ~$7,200 only to ~tie a Ryzen 5 7600X :p
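The 2013 sticker-price math:

```python
# Approximate 2013 launch pricing quoted above, in USD.
motherboard = 1000
cpu_each = 2600
ram = 1000
total = motherboard + 2 * cpu_each + ram
print(f"~${total:,}")  # roughly what this combo cost new
```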
 
Well, there we go.

View attachment 626087

I have to say, that was slightly disappointing. Maybe Windows Update was screwing me in the background or something.

Going to have to re-run it at some point.

That said, this is probably pretty close to the ceiling for Ivy Bridge, at least at stock.

I could probably get a little bit more performance out of it by switching in some non-ECC UDIMM's at desktop speeds instead of the 16x16GB Registered ECC DDR3-1600. (I want to say it supports up to 1866Mhz)

Also, I wonder how much faster it would have been before the Spectre/Meltdown mitigations....

Anyway, we are talking about $1000 for the motherboard, $2600 each for the two CPU's, and another ~$1000 for the RAM back in 2013. That's ~$7,200 only to ~tie a Ryzen 5 7600X :p

I have a few 26xx-V2 set-ups, can confirm, with the right RAM config, will run 1866.
 
Well, there we go.

View attachment 626087

I have to say, that was slightly disappointing. Maybe Windows Update was screwing me in the background or something.

Going to have to re-run it at some point.

That said, this is probably pretty close to the ceiling for Ivy Bridge, at least at stock.
All things considered it's fairly impressive that, albeit with 50% more cores and threads, it still manages to keep up with a first generation Threadripper system. Bonus points for that being a NUMA CPU when run in 16c mode.
 
Anyway, we are talking about $1000 for the motherboard, $2600 each for the two CPU's, and another ~$1000 for the RAM back in 2013. That's ~$7,200 only to ~tie a Ryzen 5 7600X :p
My HP DL380 G5 MSRP'd for over $7000 brand new. I got it for $65 with an installed license for Windows Server and Storage over 5 years ago. :D It may not be the fastest dual-CPU box, but it was built well for sure. :)
 
I have an ancient Supermicro 1U dually server with an X7DWU MB with an add-in IPMI card, 2 E5474 CPU's, 32GB of RAM and 4 Seagate 300GB SAS 10K HDs. It serves no useful purpose but performs flawlessly. It spends its time in retirement doing nothing. It's a power hog and I might fire it up once a year for S and G's.
 
Well, there we go.

View attachment 626087

I have to say, that was slightly disappointing. Maybe Windows Update was screwing me in the background or something.

Going to have to re-run it at some point.

That said, this is probably pretty close to the ceiling for Ivy Bridge, at least at stock.

I could probably get a little bit more performance out of it by switching in some non-ECC UDIMM's at desktop speeds instead of the 16x16GB Registered ECC DDR3-1600. (I want to say it supports up to 1866Mhz)

Also, I wonder how much faster it would have been before the Spectre/Meltdown mitigations....

Anyway, we are talking about $1000 for the motherboard, $2600 each for the two CPU's, and another ~$1000 for the RAM back in 2013. That's ~$7,200 only to ~tie a Ryzen 5 7600X :p

Actually, come to think of it, this scaled almost perfectly with TDP.

The two 95W 2650 v2's put out 59.3 points per watt.

The two 130W 2697 v2's put out 59.6 points per watt.

I think what is telling is that while the 2697 v2's can boost to 3.5GHz, they are limited to 3GHz when all cores are loaded.
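Working those efficiency numbers backwards from rated TDP (the total scores here are implied by the stated points-per-watt figures, not direct measurements):

```python
# Points-per-watt comparison above, using rated TDP of each two-socket pair.
# Total scores are back-calculated from the stated points/W, not measured.
def pair_tdp(tdp_each_w: int) -> int:
    return 2 * tdp_each_w

ppw_2650v2 = 59.3   # points per watt, dual E5-2650 v2 (95W each)
ppw_2697v2 = 59.6   # points per watt, dual E5-2697 v2 (130W each)

implied_score_2650v2 = ppw_2650v2 * pair_tdp(95)    # ~11,267 points
implied_score_2697v2 = ppw_2697v2 * pair_tdp(130)   # ~15,496 points
print(implied_score_2650v2, implied_score_2697v2)
```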
 
I have an ancient Supermicro 1U dually server with a X7DWU MB with a add in IPMI card, 2 E5474 CPU's, 32GB of RAM and 4 Seagate 300GB SAS 10K HD's. It servers no useful purpose but performs flawless. It's spends it's time in retirement doing nothing. It's a power hog and I might fire it up once a year for S and G's.
Wouldn't hurt to find some use for it that only it can do. I know some legacy stuff runs better on the older hardware than emulated. :)
 
Actually, come to think of it, this scaled almost perfectly with TDP.

The two 95W 2650 v2's put out 59.3 points per watt.

The two 130W 2697 v2's put out 59.6 points per watt.

I think what is telling is that while the 2697 v2's can boost to 3.5GHz, they are limited to 3GHz when all cores are loaded.
Makes sense since they're in the same generation.
 
Wouldn't hurt to find some use for it that only it can do. I know some legacy stuff runs better on the older hardware than emulated. :)

Yea, it's not going anywhere. It's all decked out and even includes a SM IPMI daughter card which works flawlessly. Its performance is acceptable but it's a power hog.
 
Actually, come to think of it, this scaled almost perfectly with TDP.

The two 95W 2650 v2's put out 59.3 points per watt.

The two 130W 2697 v2's put out 59.6 points per watt.

I think what is telling is that while the 2697 v2's can boost to 3.5GHz, they are limited to 3GHz when all cores are loaded.

When researching on upgrading a dual 2670 rig to 2697 v2, I discovered that 2696v2 boosts to 3.1 all core with a lower TDP
https://en.m.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ivy_Bridge-based)#Xeon_E5-2696_v2
 
When researching on upgrading a dual 2670 rig to 2697 v2, I discovered that 2696v2 boosts to 3.1 all core with a lower TDP
https://en.m.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ivy_Bridge-based)#Xeon_E5-2696_v2

Interesting. If all-core performance is the priority, those are the ones to get. But 3.1GHz vs 3.0GHz is only going to be a 3.33% improvement at all-core loads. I think the higher 3.5GHz peak for low-threaded bursty loads is better in my application.
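For scale:

```python
# All-core turbo comparison above: 2697 v2 at 3.0GHz vs 2696 v2 at 3.1GHz.
gain_pct = (3.1 - 3.0) / 3.0 * 100
print(f"{gain_pct:.2f}% faster at all-core load")  # ~3.33%
```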

Too bad they have the wrong specs for the 2696v2 listed there so not sure I would trust "their" benchmarks either.
Yeah, those summary pages aren't worth the second it takes to load them. cpubenchmark/cpumonkey/userbenchmark/etc./etc. are generally pretty useless.

Sadly, for more exotic parts they are often all the info there is.

I wouldn't be surprised if they are just AI summaries of other peoples review/data from the internet, and as such can't be trusted at all.

For this one though, I don't blame them for getting it a little wrong. The 2696 v2 was an OEM-only part, so detailed info is a little tougher to come by.
 
Well, there we go.

View attachment 626087

I have to say, that was slightly disappointing. Maybe Windows Update was screwing me in the background or something.

Going to have to re-run it at some point.

That said, this is probably pretty close to the ceiling for Ivy Bridge, at least at stock.

I could probably get a little bit more performance out of it by switching in some non-ECC UDIMM's at desktop speeds instead of the 16x16GB Registered ECC DDR3-1600. (I want to say it supports up to 1866Mhz)

Also, I wonder how much faster it would have been before the Spectre/Meltdown mitigations....

Anyway, we are talking about $1000 for the motherboard, $2600 each for the two CPU's, and another ~$1000 for the RAM back in 2013. That's ~$7,200 only to ~tie a Ryzen 5 7600X :p


Side note,

This machine came in very handy this week. I'm in the process of upgrading my pfSense router, and it turned out to be more complicated than I had expected. Not wanting to be without internet during the process, I imaged the pfSense drive, wrote it to a ZFS block device on the testbench machine, then installed KVM and virt-manager, passed through two NICs to it, and used my existing pfSense install virtualized on it.

Not the most efficient router ever, but it is only temporary, and is doing a good job of keeping the house internet up.

I forgot how much I loved the convenience of virtualization.

I might just install Proxmox on the testbench, using it as a non-24/7 cluster member, sticking two GPU's in it and passing them through, one to a Windows install, another to a Linux install, so I can have them running at the same time, and also have enough resources for the occasional temporary VM needs like this.

It's too bad Spectre and Meltdown make VM's less secure on these older Intel chips, but as I understand those vulnerabilities, they are really only a huge problem on systems where you have non-trusted 3rd parties as guests. In my case, I am the only guest, and I tend to trust myself...

At least in this capacity.
 
Interesting. If all core performance is the priority, those are the ones to get. but 3.1Ghz vs 3.0Ghz is only going to be a 3.33% improvement at all core loads. I think the higher 3.5Ghz peak for low threaded bursty loads is better in my application.

The 2696v2 peaks to 3.5 as well though...that's the kicker.
 
Might want to look at it again and break out the calculator ;)

Nothing to do with calculators. The page here is simply wrong, and that is what I was reviewing when I was researching these chips:

1707688518431.png


Now that I know better, I'd edit it to fix it, but Wikipedia will never let me edit anything due to my VPN. (Using a VPN is not a crime nor an indication that you are up to no good. Fuck all of the sites that block VPN users)

Interestingly, the dedicated Xeon page has the correct information.

I guess you can chalk that up to a reminder that it is never a good idea to rely on Wikipedia too much. I usually go to the source (in this case ark.intel.com) but there does not seem to be a page for the E5-2696 v2 for some reason. Presumably because it was an OEM-only variant.

With ark.intel.com out of the picture, not sure which site would have actually been the best authoritative source for specs for this chip.
 
Ahh, I was going off the link I posted with the turbo/core speeds and thought you were going off that.
 
When researching on upgrading a dual 2670 rig to 2697 v2, I discovered that 2696v2 boosts to 3.1 all core with a lower TDP
https://en.m.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ivy_Bridge-based)#Xeon_E5-2696_v2
Xeon Haswell can be turbo-boosted on all cores with a BIOS hack, though the clock will be reduced when hitting the TDP limit. I had all threads at 2.9GHz easily on the 4669v3 (18C/36T) when running a non-AVX BOINC workload. The 2696v3 is a beast if you can run all 18C/36T anywhere from 3.1 to 3.6GHz, but it really depends on the type of workload. Note the 4669v3 TDP is only 10W lower than the 2696v3 (135W vs 145W)
 
I'm having a lot of fun with the dually testbench.

I decided to mess around with BIOS modding to see if I could get NVMe boot to work. It was a little tricky for someone who has never edited a BIOS file before, but in the grand scheme of things really easy, and it succeeded. Picked up a used x16 4-way Asus Hyper M.2 card for a fraction of the new price I paid for the ones I use in my main server, and popped it in there with four spare 256GB Inland Premium M.2 drives I had laying around that I had decommed from the main server.

I also added the old 6GB Titan in it.

1708377636440.png


That Asus M.2 card chokes the air intake for the Titan. Going to have to move those around at some point to give it more air. Only problem is the first x16 slot won't work, as a long GPU interferes with the RAM if you put it in that slot.

1708379219811.png


It doesn't look like it would in this diagram, but it definitely would not go in the slot without hitting the last RAM stick when I tried it.

I'm thinking I can probably stick the Titan in the last x16 slot (CPU1 Slot2), but then I have to move the LSI 9300-16i SAS HBA. I might do that after I get a different SAS card. The 9300-16i runs very hot. I added a fan to mine, which makes it way too wide. I'm probably going to grab a 9400-16i, which uses much less power and runs much cooler. Once I do that I'll be able to move them around and make it work.

The intent?

I am going to install Proxmox on it, add it to the cluster with the other two nodes as a "back up" server if I need it.

I installed Proxmox in the ZFS equivalent of RAID10 on the four m.2 drives, and it booted perfectly. It's kind of depressing how easy it is to get NVMe boot to work on older motherboards. The manufacturers should have just included it in their BIOS updates, but as far as I know, none of them ever did. (I'm ready to stand corrected if some brand decided to do it)

BUT I am also going to install both a Windows and a Linux VM on it with passed through GPU's so I can continue using it locally as my testbench/backup workstation machine.

Passing through Nvidia GPU's is known to be tricky, but I think I remember that Nvidia actually allowed it on professional cards, so I presumed the Quadro 2000 and the Titan would work.

My plan was to pass through the Quadro 2000 to the Linux VM and the Titan to the Windows VM. It would be fun to see if it could even support some light vintage gaming. Maybe even a multiplayer Sid Meier's Civilization session!

Well, I struggled for several hours with the Quadro 2000 without getting it to work, until I finally had an epiphany. The Quadro 2000 is a Fermi card. GPU pass-through requires EFI, and that wasn't supported until the next gen, Kepler.

Well, luckily the Titan is Kepler, so I moved on to trying the Titan in Linux. I can't seem to get the BIOS to dump off of it (which many say is a requirement for Nvidia GPU's) but it seems to work anyway. I get a few error messages when the Linux VM boots, but once it is booted everything just works. In order to put some load on it, I downloaded the Linux version of Unigine Heaven, and it ran perfectly and performed really well!

The plan is still to use the Titan in a Windows VM for light occasional gaming, but that will have to wait. I went on eBay looking for a single-slot GPU that supports EFI which I could use to replace the Quadro 2000 for the Linux VM. I decided I wanted to try to keep things contemporary. The motherboard and CPU's are 2013 era, as are the X520 10gig NIC and the Titan.

The most capable contemporary GPU I could find that both has EFI support and is single slot was a Quadro K4000. As luck would have it I found one for $19 on eBay.

Once the K4000 comes in, I'm going to assign that to the Linux VM and move the Titan to the Windows VM. I figure I'll give the Windows VM one full CPU's worth of cores (12C/24T). The Linux VM can make do with fewer, maybe 6C/12T. That leaves 6C/12T for everything else.
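The planned core budget adds up like so:

```python
# Core split plan for the dual 12C/24T (24C/48T total) Proxmox box.
total_cores = 24
windows_vm = 12    # one full CPU's worth
linux_vm = 6
leftover = total_cores - windows_vm - linux_vm
print(f"{leftover}C/{leftover * 2}T left for the host and temporary VMs")
```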

1708378726013.png


I never knew how much fun it could be to just tinker with older hardware, especially when you can pick up stuff dirt cheap on eBay when you need it.

I'm going to need to pick up a cheap dual monitor two-way USB KVM switch so I can switch between the Windows VM and the Linux VM. Anyone have any suggestions on one that works and doesn't break the bank?
 