
Upgrades - Dual Xeon E5-2697v2 socket 2011

Deimos ([H]ard|Gawd) · Joined Aug 10, 2004 · Messages: 1,276
I've been rocking this server since new after I made the mistake of buying an Intel (LSI) RAID card that would ONLY work in the Intel S2600CP motherboard. Oops, I guess I had to build a server...

I can't remember the original specs. I purchased a couple of cheap 8-core Xeons off eBay a while ago to upgrade a little; this is the current setup:

Intel S2600CP4 (Quad Intel 1Gb NICs on board)
Dual Intel Xeon E5-2670 - 8 cores @ 2.6GHz (I think the original config was dual 4- or 6-core 2.1GHz CPUs) + Cooler Master Hyper 212 heatsinks
64GB RAM (8x8GB 1600MT/s)
GeForce GTX 1080 (a recent addition for transcoding duties)
USB3 HBA (backup duties)
Intel RAID Module RMS25PB080 + 4 x 18TB Seagate drives in RAID5 + other storage for backups and OS (originally 8x2TB drives)
Seasonic 1200W PSU
Silverstone TJ07 case

I'm not sure how much longer this beast will last. Might as well go all out since I can get cheapish second hand parts.
I decided to see what was available on eBay and found the best CPU this board will take for a pretty reasonable price: the E5-2697 v2, a 12-core/24-thread CPU @ 2.7GHz with a 3.5GHz boost clock. The v2 also supports faster RAM, so I ordered a 256GB (16x16GB 1866MT/s) kit and finally an Intel X550 dual 10Gb NIC. I went for the X550 chipset instead of the cheaper X520 for 2.5Gb support. I'm not sure if I can source the Ubiquiti 10Gb SFP+ modules right now as all the local suppliers are out of stock.

I will post pictures when I install everything.
 
Very nice. I recently did a build with a W-2191 for my main NAS/media server/NVR. Then I took my old 2011 E5-2696 v3 and relegated it to my offsite backup/NVR machine. My problem with it was I could only get 64GB of RAM with it... other than that, it was fine.
 
CPUs arrived about an hour after I made this post, looking very used. The substrate on one is very slightly chipped on one edge; hopefully it won't be an issue. I'll install them this evening and see how it goes.
 
Nice! Although I would have gone with 32GB modules since they're cheap these days too, for 512GB or 256GB+. It will be interesting to see how much of the 10Gb link your setup saturates. :)
 
CPUs installed and running. Somehow it feels much snappier, which I was not expecting. The new CPUs have 10MB of extra L3 cache as well as the extra 4 cores at a 100MHz higher base clock.

I have a new problem now. The new 'v2' processors spawned a whole lot of unknown devices in Device Manager and I don't know how to resolve it. Also, I didn't notice this before, but the GTX 1080 is actually not working as Windows has decided the driver signature 'has a problem'. Great.

(Installation photos attached.)
 
I'm still running Windows Server 2012 R2. If I can find money in the IT budget I might consider upgrading.
On the off chance that someone else has the same issues as me, here is how I fixed them.
For the GTX 1080 I installed driver version 466.11 (the oldest I could find in the NVIDIA database) and there is no longer a driver signing issue. My guess is that MS keeps an online database of driver signatures for 2012 R2 and the OS checks in to see if a signature is valid; MS probably stopped updating the DB after a certain date.
As for the unknown 'base system device' and system counter entries, the vendor and product IDs indicated these new devices were part of the new CPUs (Intel's vendor ID is 8086; that's a nice easter egg). I ended up having to download an Intel chipset driver from Dell (thanks, Intel, FFS). After installing that, everything was peachy.
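For anyone chasing the same 'base system device' entries, here is a rough Python sketch of the lookup described above: list the devices reporting a problem code and pull the PCI vendor/device IDs out, flagging Intel's 8086. It assumes the third-party wmi package is installed on the box; the parsing and output are illustrative only, not a polished tool.

```python
import re
import wmi  # third-party: pip install wmi (pulls in pywin32)

# Walk the PnP device list and flag anything reporting a problem code
# (ConfigManagerErrorCode != 0 is what shows as a yellow bang in Device Manager).
conn = wmi.WMI()
for dev in conn.Win32_PnPEntity():
    if not dev.ConfigManagerErrorCode:
        continue  # device is healthy
    m = re.search(r"VEN_([0-9A-Fa-f]{4})&DEV_([0-9A-Fa-f]{4})", dev.DeviceID or "")
    if not m:
        continue  # not a PCI device
    vendor, device = m.group(1).upper(), m.group(2).upper()
    tag = "  <- Intel (vendor ID 8086)" if vendor == "8086" else ""
    # The VEN/DEV pair is what you paste into a search engine to hunt for a driver
    print(f"{dev.Name}: PCI\\VEN_{vendor}&DEV_{device}{tag}")
```

The printed hardware ID is the same string you would search for, per the advice in the next reply.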
 
CPUs installed and running. Somehow it feels much snappier, which I was not expecting. The new CPUs have 10MB of extra L3 cache as well as the extra 4 cores at a 100MHz higher base clock.

I have a new problem now. The new 'v2' processors spawned a whole lot of unknown devices in Device Manager and I don't know how to resolve it. Also, I didn't notice this before, but the GTX 1080 is actually not working as Windows has decided the driver signature 'has a problem'. Great.
The 2697 v2 is a significant upgrade in terms of single-thread performance, so that's a kick you'll feel:
https://www.cpubenchmark.net/compare/1220vs2009/Intel-Xeon-E5-2670-vs-Intel-Xeon-E5-2697-v2

As far as the new devices go, what I typically do is find their PCI vendor/device line, then search for it and try the various drivers that pop up. Usually if it's a Dell/HP/Lenovo driver directly from them, it's safe.
 
I'm still running Windows Server 2012 R2. If I can find money in the IT budget I might consider upgrading.
On the off chance that someone else has the same issues as me, here is how I fixed them.
For the GTX 1080 I installed driver version 466.11 (the oldest I could find in the NVIDIA database) and there is no longer a driver signing issue. My guess is that MS keeps an online database of driver signatures for 2012 R2 and the OS checks in to see if a signature is valid; MS probably stopped updating the DB after a certain date.
As for the unknown 'base system device' and system counter entries, the vendor and product IDs indicated these new devices were part of the new CPUs (Intel's vendor ID is 8086; that's a nice easter egg). I ended up having to download an Intel chipset driver from Dell (thanks, Intel, FFS). After installing that, everything was peachy.
Nice--looks like you got it all handled. :)
 
The 2697 v2 is a significant upgrade in terms of single-thread performance, so that's a kick you'll feel:
https://www.cpubenchmark.net/compare/1220vs2009/Intel-Xeon-E5-2670-vs-Intel-Xeon-E5-2697-v2
That is crazy; I did not look into this at all. I just looked up the best CPU my board could take and went on an eBay hunt. Everything feels much faster, including the Emby server that is running on the host.

The memory has landed in NZ. I'm not expecting much of a bump in performance going from 1600 to 1866MT/s and it will definitely be nice not having to worry about the memory budget anymore with a few VMs.

I've probably spent way too much time messing about with this today. Just got done updating the on-board NIC drivers; the latest version includes support for SR-IOV in a team (this board has 4 x 1Gb ports). I've given some more cores and memory to my download VM, and after the network driver update the download VM can now max out my 1Gb/s internet connection. I was at least 300-400Mb/s shy before the upgrades/updates; the NIC driver update alone bumped performance by 150Mb/s.
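If anyone wants to verify that kind of saturation from inside the VM rather than trusting the download client, here is a minimal Python sketch that samples the NIC's receive counter once a second and prints the rate in Mb/s. It assumes the psutil package is installed and uses a hypothetical interface name of "Ethernet"; substitute whatever psutil reports for your adapter.

```python
import time
import psutil  # third-party: pip install psutil

NIC = "Ethernet"  # hypothetical name; check psutil.net_io_counters(pernic=True).keys()

prev = psutil.net_io_counters(pernic=True)[NIC].bytes_recv
while True:
    time.sleep(1)
    cur = psutil.net_io_counters(pernic=True)[NIC].bytes_recv
    mbps = (cur - prev) * 8 / 1e6  # bytes/sec -> megabits/sec
    print(f"{mbps:8.1f} Mb/s received on {NIC}")
    prev = cur
```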
 
Solid upgrades all the way around. :) This is why, when people on the internet call older machines like this junk/e-waste/power hogs/etc., I publicly post about how they are dead wrong. So many boards like yours have been trashed as 'e-waste' by fools listening to that FUD. :mad: That's a sin IMO, considering what stuff like this can still do!
 
That is crazy; I did not look into this at all. I just looked up the best CPU my board could take and went on an eBay hunt. Everything feels much faster, including the Emby server that is running on the host.

The memory has landed in NZ. I'm not expecting much of a bump in performance going from 1600 to 1866MT/s and it will definitely be nice not having to worry about the memory budget anymore with a few VMs.

I've probably spent way too much time messing about with this today. Just got done updating the on-board NIC drivers; the latest version includes support for SR-IOV in a team (this board has 4 x 1Gb ports). I've given some more cores and memory to my download VM, and after the network driver update the download VM can now max out my 1Gb/s internet connection. I was at least 300-400Mb/s shy before the upgrades/updates; the NIC driver update alone bumped performance by 150Mb/s.
I love having a crap ton of drives with 10GbE. I set up a big ol' iSCSI "drive" for my machine to house my game library for myself, and then one for my son. Writes aren't super fast, but reads... saturate that 10GbE link, so the performance is going to be comparable to having an SSD in there (big files vs little and all).

Our systems sound pretty similar in principle. I took my system and put Proxmox on it with two VMs. One is TrueNAS and it just handles the storage. The other is Ubuntu Server and it has my GTX 1080 Ti. It does my Emby tasks, and then I also installed Immich, so my photos go onto the server. Immich can also use the 1080 Ti to do facial recognition. And if that isn't enough, I'm also running Frigate NVR on there, using the pool to house the security recordings, and the good ol' 1080 Ti sits and helps Frigate with the video stuff there too.

Most of it was new-to-me hardware, but still, not too bad for seven or eight year old hardware.
 
RAM arrived; turns out they are Samsung modules. Installed, but the board did not auto-detect 1866 so I set it manually. I was a little suspicious, so I looked up the part numbers and they are genuine 1866 memory, so everything is looking good so far.

Feels roomy.
 
I know you got the CPUs already, but for others: look into the 2696 v2 instead of the 2697 v2, as it has a higher all-core turbo and a lower TDP.

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ivy_Bridge-based)#Xeon_E5-2696_v2
It looks like that Wikipedia entry is wrong. The max turbo on the 2696 v2 is 3.5GHz, not 3.3GHz.

Someone posted over on TechPowerUp that the 2696 v2 may not work on all boards due to being an OEM CPU. I'm not sure how accurate that is, and I wonder if I would have any issues on an Intel board. A couple of people confirmed a turbo boost of 3.5GHz and an all-core turbo of 3.1GHz on the 2696 v2. I just tested my 2697 and can confirm the all-core turbo is 3GHz running small-FFT Prime95.

https://www.techpowerup.com/forums/...el-2697-v2-cpu-performance-comparison.285827/
 
I don't see 3.3GHz, I see 3.5GHz... 2.5 base with 10 bins as the max, for 3.5GHz. Look more closely at the different clocks for the number of cores used and you'll see the difference between the two.

As for whether it will work on your S2600 MB, I have no clue. I was researching an ASRock server MB at the time and ran across the better 2696 v2 option, which was actually cheaper to get. If you have the 2697 v2 already I wouldn't sweat it.
 
I don't see 3.3GHz, I see 3.5GHz... 2.5 base with 10 bins as the max, for 3.5GHz. Look more closely at the different clocks for the number of cores used and you'll see the difference between the two.

As for whether it will work on your S2600 MB, I have no clue. I was researching an ASRock server MB at the time and ran across the better 2696 v2 option, which was actually cheaper to get. If you have the 2697 v2 already I wouldn't sweat it.
Oh, I don't know what the hell I'm looking at; it says 2.5GHz there. My brain isn't working today. Can you explain what those turbo numbers mean? I couldn't figure it out.

One guy in the TechPowerUp forum posted a screenshot of their 2696 v2 @ 3.4GHz with all cores loaded... (although it looks like a desktop motherboard, so it probably has 'cheats' enabled).

The prices on those 2696 v2s are nuts; I found a seller offloading them for $58 including shipping (for a pair, and after NZ's sales tax).

I don't know, man; for that price it's pretty tempting for a (possible) 3.4GHz all-core boost. That isn't insignificant.
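For what it's worth, those turbo numbers are usually read as "bins": each bin is +100MHz over the base clock, and the bin count depends on how many cores are active. Here is a small worked example in Python using the 2696 v2 figures from this thread (2.5GHz base, 10 bins for the 3.5GHz single-core max, and the roughly 3.1GHz all-core turbo reported on TechPowerUp); the in-between steps are assumed for illustration, not taken from Intel's documentation.

```python
# Worked example of reading Xeon "turbo bins": each bin is +100 MHz over the base
# clock, and how many bins you get depends on how many cores are active.
# Only the 2.5 GHz base, the 10-bin single-core max (3.5 GHz), and the ~3.1 GHz
# all-core figure come from this thread; the in-between steps are assumed.
BASE_GHZ = 2.5
BIN_GHZ = 0.1

# active cores -> turbo bins (assumed taper from 10 bins down to 6 with all 12 cores busy)
bins_by_active_cores = {1: 10, 2: 10, 4: 8, 8: 7, 12: 6}

for cores, bins in bins_by_active_cores.items():
    freq = BASE_GHZ + bins * BIN_GHZ
    print(f"{cores:2d} active cores: {freq:.1f} GHz ({bins} bins above {BASE_GHZ} GHz base)")
```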
 
Received the 10G NIC today and installed it. Since it is all Intel, the PROSet software will team the two 10Gb links with the four 1Gb links. The only problem is that UniFi only supports LACP (no static teaming), which means I cannot prioritise the 10Gb links over the 1Gb links. For some reason LACP 'likes' the 1Gb connections more than the 10Gb link.

I disabled the 1Gb links to check performance and my PC (the fastest client, with 2.5Gb) is maxed out copying data from the RAID5 array. I'm reasonably happy with this.

I bought a second 10Gb NIC for my pfSense router as well, so I'm ready for 'hyperfibre': 2Gb and 4Gb symmetrical internet speeds for a premium. I'm considering going onto the 2Gb plan so I can download keyboard drivers faster. I'm currently on a 950/500Mb plan.
 
I decided to upgrade the OS on this server. What a nightmare. I downloaded a few images to try from massgrave. My suggestion? Don't. Not one single image linked on massgrave worked. I ended up downloading a 2025 eval from MS directly, and that was the only ISO I could get to install.

Also, WTF is up with ACPI errors? I was tinkering in the BIOS and changed a few settings, and then nothing would boot. It took me hours to figure out WTF was going on, and it turns out it was USB legacy emulation; when I turned it off, Windows would not cooperate.
 
Wow, what a shit-show. The NIC teaming built into Server 2025 is broken. Also, Windows went into a bluescreen loop after installing the LSI RAID card (I had to remove it to install Windows). I managed to find the only forum post on the entire internet on how to fix it: you have to delete the default drivers that Windows has for the card and install it as 'legacy hardware' before you put the card back in; you can then boot and update the drivers properly.

https://www.reddit.com/r/homelab/comments/s5o39w/lsi_raid_card_bsod_issues/?rdt=53151

After I finally got everything 'working', the display output was stuck at 800x600 (it was running fine at 1600x1200 earlier). I recall the onboard video being a Matrox G200, but the most recent drivers are for Server 2003. The GTX 1080 drivers installed flawlessly, so I think I'll just disable the onboard VGA.

I finally finished at around 6am. I'm afraid to reboot this thing now in case something else breaks.
 
I'm probably complaining too much; I fully did not expect Server 2025 to work at all on this ancient hardware. The major limitation in this system is probably disk speed, which is pretty difficult to get around. I purchased an activation key for the extra SAS/SATA controller and moved my JBOD over for a little speed boost. The software upgrade did actually make the system more responsive, though, and there was a decent uplift in internet speed, though still not quite enough grunt to max out my connection.

(Speed test screenshot from the host.)
Edit: Whoa, somehow the VM is faster than the host!

(Speed test screenshot from the VM.)
 
I love having a crap ton of drives with 10GbE. I set up a big ol' iSCSI "drive" for my machine to house my game library for myself, and then one for my son. Writes aren't super fast, but reads... saturate that 10GbE link, so the performance is going to be comparable to having an SSD in there (big files vs little and all).

Our systems sound pretty similar in principle. I took my system and put Proxmox on it with two VMs. One is TrueNAS and it just handles the storage. The other is Ubuntu Server and it has my GTX 1080 Ti. It does my Emby tasks, and then I also installed Immich, so my photos go onto the server. Immich can also use the 1080 Ti to do facial recognition. And if that isn't enough, I'm also running Frigate NVR on there, using the pool to house the security recordings, and the good ol' 1080 Ti sits and helps Frigate with the video stuff there too.

Most of it was new-to-me hardware, but still, not too bad for seven or eight year old hardware.
I wanted to do 10GbE and did for a while, but the modules got so hot under sustained load that I switched the runs under 60 meters to SFP+, because it runs cool, and derated the longer ones to 5Gbps with multigig Ethernet modules.
 
I wanted to do 10GbE and did for a while, but the modules got so hot under sustained load that I switched the runs under 60 meters to SFP+, because it runs cool, and derated the longer ones to 5Gbps with multigig Ethernet modules.
I find it interesting that SFP+ gets hot but doesn't draw as much power as the Base-T. I own a couple of Supermicro router boards that are cheap surplus now. A couple of months ago I got a fully built one for $180 after shipping. This one I built from a board off eBay with a 40W processor. It runs IPFire and my network nicely with six 10Gb ports:
(Photo of the router build.)
 
It was EOFY, so I made some room in the budget for upgrades. I put one of these in the server with a Kingston NV3 1TB NVMe drive, just to see if it would even work.
(Photo of the M.2 adapter card.)


This silly Frankenstein of a machine was all good: it instantly recognised the disk, so I formatted it and copied the virtual machines to it. Bye-bye SATA bottleneck. The VMs feel very snappy now. If I can be bothered I might have a crack at making it the boot disk, but first I'll see if there is a better card that supports multiple NVMe drives for RAID 0 (this one only supports one; the other slot is keyed for SATA).

Edit: So it turns out disk speed is my major bottleneck for internet speeds. After upgrading to an NVMe drive I can now max out the upload.
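If you want to put a number on the SATA-versus-NVMe difference, here is a crude Python sketch that times a sequential write and read on whatever volume you point it at. The path is hypothetical, and with this much RAM the read figure will be flattered by the OS cache unless you use a much larger test file; treat it as a sanity check, not a benchmark.

```python
import os
import time

TEST_FILE = r"D:\bench.tmp"  # hypothetical path: point it at the volume you want to test
SIZE_MB = 2048               # ~2 GB; with lots of RAM the read pass is still cache-flattered
CHUNK = b"\0" * (1024 * 1024)

# Sequential write
t0 = time.time()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    os.fsync(f.fileno())
print(f"write: {SIZE_MB / (time.time() - t0):.0f} MB/s")

# Sequential read (the OS cache will inflate this unless the file is larger than RAM)
t0 = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(1024 * 1024):
        pass
print(f"read:  {SIZE_MB / (time.time() - t0):.0f} MB/s")

os.remove(TEST_FILE)
```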
 
This silly Frankenstein of a machine was all good: it instantly recognised the disk, so I formatted it and copied the virtual machines to it. Bye-bye SATA bottleneck. The VMs feel very snappy now. If I can be bothered I might have a crack at making it the boot disk, but first I'll see if there is a better card that supports multiple NVMe drives for RAID 0 (this one only supports one; the other slot is keyed for SATA).
Does your motherboard support bifurcation? It should be under the PCIe settings. If it does, that opens up a lot of possibilities for inexpensive cards that support multiple NVMe drives.
 
Edit: So it turns out disk speed is my major bottleneck for internet speeds. After upgrading to an NVMe drive I can now max out the upload.
It can be, but the real factor in effective bandwidth is software IRQ allocation along with memory buffers. However, if the connections are not fast enough, you can get buffer bloat in memory when the buffers are set too high. These buffers are set low on clients for the best latency, but in specialized applications such as gateways/routers and web servers they are increased for a higher transfer rate with multiple connections.
 
Does your motherboard support bifurcation? It should be under the PCIe settings. If it does, that opens up a lot of possibilities for inexpensive cards that support multiple NVMe drives.
No. Apparently there are cards that can do multiple NVMe drives on a single slot with a PCIe 'switch'.
https://forums.servethehome.com/ind...apters-that-do-not-require-bifurcation.31172/

I might look into it if I have time. I found another thread where people were having some success installing Windows on NVMe, so that gives me hope.

I still have one slot spare; I'll just need to get a ribbon cable as it is currently blocked by the GPU.

Edit:

Purchased one of these.
https://www.amazon.com/dp/B0D47B2W75
 
No. Apparently there are cards that can do multiple NVMe drives on a single slot with a PCIe 'switch'.
https://forums.servethehome.com/ind...apters-that-do-not-require-bifurcation.31172/

I might look into it if I have time. I found another thread where people were having some success installing Windows on NVMe, so that gives me hope.

I still have one slot spare; I'll just need to get a ribbon cable as it is currently blocked by the GPU.

Edit:

Purchased one of these.
https://www.amazon.com/dp/B0D47B2W75
Yeah the cards with a PLX switch are an option and should work on any board which is nice, but they're a lot more expensive since it's a more complex device than a bifurcation card. Still a great way to breathe new life into an old system though. Hopefully you can get the NVMe boot working!
 
I'm probably complaining too much; I fully did not expect Server 2025 to work at all on this ancient hardware. The major limitation in this system is probably disk speed, which is pretty difficult to get around. I purchased an activation key for the extra SAS/SATA controller and moved my JBOD over for a little speed boost. The software upgrade did actually make the system more responsive, though, and there was a decent uplift in internet speed, though still not quite enough grunt to max out my connection.

(Speed test screenshot from the host.)
Edit: Whoa, somehow the VM is faster than the host!

(Speed test screenshot from the VM.)
This one is receive-buffer related, because the download ping is high while the upload ping equals the idle ping.
If it were hard-drive related, upload would be affected too.
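That idle-ping-versus-loaded-ping comparison is easy to reproduce by hand. Below is a minimal Python sketch of the same idea: average a few pings while the line is idle, start a large download in the background, then ping again and compare. The download URL is a placeholder (use any large file on a fast server), and it assumes the Windows ping syntax; it's a rough illustration of the diagnostic, not the method anyone in this thread used.

```python
import re
import subprocess
import threading
import urllib.request

TEST_URL = "https://example.com/largefile.bin"  # placeholder: any big file on a fast server
PING_HOST = "1.1.1.1"

def avg_ping_ms(count=10):
    # Windows ping syntax; on Linux/macOS swap "-n" for "-c"
    out = subprocess.run(["ping", "-n", str(count), PING_HOST],
                         capture_output=True, text=True).stdout
    times = [int(t) for t in re.findall(r"time[=<](\d+)ms", out)]
    return sum(times) / len(times) if times else float("nan")

idle = avg_ping_ms()

# Kick off a large download in the background, then ping again while it runs
dl = threading.Thread(target=lambda: urllib.request.urlretrieve(TEST_URL, "dl.tmp"), daemon=True)
dl.start()
loaded = avg_ping_ms()

print(f"idle ping   : {idle:.0f} ms")
print(f"loaded ping : {loaded:.0f} ms  (a big jump here points at buffering, not the disk)")
```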
 