L5520 Best PPD w/o OC?

Syran

Gawd
Joined
Apr 26, 2007
Messages
683
It's been a semi-interesting year for me so far. Had the opportunity to ramp up my folding for a short time and load-test a few servers.

First up, I have a pair of IBM 7870 blades with 48 GB (and 40 GB) of RAM and a pair of L5520s in each. I configured them per Musky's Ubuntu Linux guide, using the 10.04.4 LTS release. I set one to normal SMP and tried bigadv (the -smp 16 flag) on the other. It couldn't finish an 8101 within the deadline, so I switched it to standard SMP as well. Both are hitting 28-30k PPD. Any chance these processors can handle bigadv or bigbeta these days, or should I just let them run a couple of weeks of standard SMP? Can't OC them due to the infrastructure of the IBM blades.
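For reference, with the classic v6 console client the difference between the two setups is just the launch flags. A sketch (binary name and working directory are assumptions, not from the post):

```
# Standard SMP on one blade (fah6 = the classic v6 console client):
./fah6 -smp

# bigadv attempt on the other, forcing 16 threads:
./fah6 -smp 16 -bigadv
```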

I also have a pair of Cisco UCS servers with dual hex-core 1.8GHz Xeons that are happily chewing through bigadv at 120k PPD. I'm going to be sad when I have to pull them off load testing, but might as well enjoy it while it lasts. :)
 
Dual L5520s aren't going to be able to do bigadv. Just leave them on SMP and let them crunch.
 
Dual L5520s aren't going to be able to do bigadv. Just leave them on SMP and let them crunch.

Can confirm this. I tried with the L5520s in the C6100; SMP only. :-/ The new server I'm building this weekend will have two nodes with dual L5638s. Want to see how those do.
 
I have dual E5620s: 4 cores with HT, Westmere, 2.4GHz with turbo. Runs SMP 16.

While very efficient folders (255W), they can't do 8101, not even close.

To have a 10% safety margin with Westmere twin 6-core machines, you are going to need 2.9GHz worth. 2.8GHz will be marginal; 2.6GHz won't make it all the time.

I also have a dual X5650 (2.6 hex-core), but I had to overclock to 3.0 to give me some room on P8101. Luckily, I did not have to bump the voltage to do it, so it runs cool, at 450W max. At 3.9GHz, it chews 680W and runs hot.

Make sure to use the ext3 file system, or an SSD if running ext4. There is a huge hit at the end of bigadv projects when using HDDs with ext4.
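If you're not sure which filesystem a box ended up with, it's quick to check. A sketch with a sample /proc/mounts line inlined (device and mount options here are illustrative; on a real machine just run `df -T /` or `grep ' / ' /proc/mounts`):

```shell
# The third field of a /proc/mounts entry is the filesystem type.
# Sample line inlined; on a real box: grep ' / ' /proc/mounts
echo '/dev/sda1 / ext4 rw,errors=remount-ro 0 0' | awk '{ print $3 }'
# → ext4
```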
 
Last edited:
EDIT:

If you are looking for max performance, don't run more sticks of RAM than channels. If the 5500s are triple channel, run only 3 sticks per CPU, following the motherboard's instructions on slot placement (normally furthest slot, skip one, next slot, skip one, etc.).
 
Last edited:
Note:

If you are looking for max performance, don't run more sticks of RAM than channels. If the 5500s are dual channel, run only 2 sticks per CPU, per motherboard instructions.

5500 series is triple channel. The issue comes when you use 6+ DIMMs per CPU and the memory clocks drop.
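An easy way to confirm how many DIMMs are actually populated per board is to count the sized entries in dmidecode's memory table. A sketch with sample output inlined (on a real box, pipe `sudo dmidecode -t memory` in instead):

```shell
# Each populated slot reports "Size: <N> MB"; empty slots report
# "Size: No Module Installed". Sample dmidecode-style output:
sample='Size: 4096 MB
Size: No Module Installed
Size: 4096 MB
Size: 4096 MB'

printf '%s\n' "$sample" | grep -c '^Size: [0-9]'
# → 3
```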
 
DOH!! I knew the 56xx were triple channel, but I forgot the 55xx were triple also.
 
Can confirm this. I tried with the L5520s in the C6100; SMP only. :-/ The new server I'm building this weekend will have two nodes with dual L5638s. Want to see how those do.

Should be right around 40k PPD on Linux, no bigadv. :(
 
I have dual E5620s: 4 cores with HT, Westmere, 2.4GHz with turbo. Runs SMP 16.

While very efficient folders (255W), they can't do 8101, not even close.

To have a 10% safety margin with Westmere twin 6-core machines, you are going to need 2.9GHz worth. 2.8GHz will be marginal; 2.6GHz won't make it all the time.

I also have a dual X5650 (2.6 hex-core), but I had to overclock to 3.0 to give me some room on P8101. Luckily, I did not have to bump the voltage to do it, so it runs cool, at 450W max. At 3.9GHz, it chews 680W and runs hot.

Make sure to use the ext3 file system, or an SSD if running ext4. There is a huge hit at the end of bigadv projects when using HDDs with ext4.

The Cisco UCS servers I'm playing with (since Cisco gave them to us as part of a package deal) have dual E5-2420s @ 1.9GHz (a little up from the 1.8 I thought they were). They seem to finish an 8101 with 3-8 hours to spare. I believe the 3-hour one is the one that has ext4 instead of ext3 for the main partition. I might see if I can recarve the partitions and mount an ext3 store for folding without having to reinstall Ubuntu (not that it takes long, it's just annoying to do). I was tired and just trying to figure out the UCS stuff when I was installing it, and forgot to manually partition the drive.

I must admit, I think it's funny that Cisco sold us 6 blades (4 of them as VM hosts, 2 as bare metal), and the VM servers each have E5-2650s in them, with a whole 2 cores being used by actual VMs... Alas, production machines, so I cannot touch them.
 
You don't need to recarve or reinstall. Just edit the fstab file and add barrier=0 to the mount options for the ext4 partition.
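For example, if the ext4 data partition's line in /etc/fstab looks like the sample below (the UUID and mount point are made up), appending barrier=0 to the fourth field is all it takes. A sketch of the transformation:

```shell
# Sample /etc/fstab line for the ext4 partition (identifiers illustrative):
line='UUID=1234-abcd /home ext4 defaults 0 2'

# Append barrier=0 to the mount options (4th field):
printf '%s\n' "$line" | awk '{ $4 = $4 ",barrier=0"; print }'
# → UUID=1234-abcd /home ext4 defaults,barrier=0 0 2

# After editing the real file, apply without rebooting:
#   sudo mount -o remount /home
```

Worth noting: barrier=0 disables write barriers, so a power loss mid-write can corrupt the filesystem; it trades safety for the speedup.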

H.
 