Dragging me back, slowly

EvilAlchemist

Fired up my old F@H rig and got BOINC going.

Not much since it's CPU-only, but maybe it'll help the team out.

Specs:

CPU type: GenuineIntel Intel(R) Xeon(R) CPU L5640 @ 2.27GHz
Number of processors: 12
Coprocessors: AMD Radeon HD 6350/6450/7450/7470/R5 230 series (Caicos) (1024MB), driver 1.4.1848, OpenCL 1.2
Operating system: Microsoft Windows Server 2012 R2
Memory: 47.99 GB
 
Right now the team in general is just building up our standing with the Marathon portion of Formula-BOINC. A few of us are also running PrimeGrid for the Tour de Primes challenge that goes on throughout February. Formula-BOINC sprints don't start until March. Pentathlon is in May. Also, if you want to join us in Slack, let me know and I will shoot you the invite link.
 
Little update: I have been impressed so far with how my older hardware is crunching through these S@H work units.

My server is averaging about 4,100.45 credits per day (12 CPU cores, HT off, 75% workload).
My desktop is averaging about 4,086.82 credits per day (6 CPU cores, HT off, 100% workload, plus 1 ATI 280X GPU).

So both are getting about the same amount of work done per day. Not too bad so far.
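
For anyone wanting to reproduce those load settings, the usual place is BOINC's global_prefs_override.xml in the data directory; something like this should do it (I'm treating the 75% workload as the CPU-time throttle; swap the values if you meant the core-count limit instead):

Code:
<global_preferences>
    <!-- use up to 100% of the cores BOINC sees (physical only, since HT is off) -->
    <max_ncpus_pct>100.0</max_ncpus_pct>
    <!-- throttle CPU tasks to ~75% CPU time -->
    <cpu_usage_limit>75.0</cpu_usage_limit>
</global_preferences>

After saving, "Read local prefs file" in the BOINC Manager (or boinccmd --read_global_prefs_override) applies it without a restart.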
 
If that is SETI@home, are you running the optimized applications? Also, is there a reason to run with HT off?
 
If that is SETI@home, are you running the optimized applications?

I am running the standard BOINC Client.
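
(For reference, the optimized builds, e.g. the Lunatics installers for SETI@home, work through BOINC's anonymous-platform mechanism: an app_info.xml dropped into the project directory tells BOINC to run your own executable instead of the stock one. A bare-bones sketch, with placeholder app/file names:

Code:
<app_info>
    <app>
        <name>setiathome_v8</name>
    </app>
    <file_info>
        <!-- placeholder name: point this at the actual optimized binary -->
        <name>optimized_app.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <version_num>800</version_num>
        <file_ref>
            <file_name>optimized_app.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

The installers normally generate this file for you; editing it by hand is only needed for custom setups.)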

Also, is there a reason to run with HT off?

Hyper-Threading is great for some applications, but not for work like this. Each core's FPU is 95-98% saturated with these work units.
Having HT on might give you a 5-10% boost in throughput, but it also requires more power and creates more heat on the CPU.

Back in my folding days, I ran a lot of tests across my systems (I had more machines back then) with HT on and off, measuring each system's power draw against its performance.
I found that leaving HT off on dedicated systems gave the best performance-to-cost/heat ratio.
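
If you want to A/B this without a trip into the BIOS each time, you can at least cap the core count BOINC uses with a cc_config.xml in the data directory. It's not a perfect stand-in for HT off, since the OS can still co-schedule two tasks on one physical core, but it's close enough for a quick power-draw comparison:

Code:
<cc_config>
    <options>
        <!-- act as if the host had 6 CPUs: roughly one task per physical core
             on a 6-core / 12-thread chip with HT enabled -->
        <ncpus>6</ncpus>
    </options>
</cc_config>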

The only test I have not yet performed in detail, but plan to, is measuring what the GPU client actually needs from the CPU cores during processing.
My preliminary data with my Radeon 280X is that a physical core is needed, not a virtual one. I saw a 25% increase in GPU WU speed when I allowed it to have its own CPU core.

I also noticed that it used about 35-50% of that CPU core. That leaves just enough room for the OS to do its thing while the rest churn out WUs.
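
If anyone wants to reserve a core for the GPU the same way, the standard knob is an app_config.xml in the project directory. The cpu_usage value is a scheduler budget rather than core pinning, but at 1.0 it makes BOINC run one fewer CPU task, which leaves a full core free to feed the GPU (app name is a placeholder):

Code:
<app_config>
    <app>
        <name>setiathome_v8</name>
        <gpu_versions>
            <!-- run one task per GPU -->
            <gpu_usage>1.0</gpu_usage>
            <!-- budget a full CPU core for each GPU task -->
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>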
 