fastgeek's Virtual Box Question

phoenicis

[H]ard|DCer of the Year 2018
Joined: Oct 3, 2010
Messages: 590
Answer moved here when I realised it may contain information that our competitors might find useful. Didn't want anybody shouting at me at dawn in the US :D

Now I'm wondering... have I been screwing myself over in other projects by not using the VM-enabled BOINC option? In the case of the current Pentathlon competition, am I leaving points on the table by not going that route? Inquiring minds want to know! :) Thanks!

I only ran Cosmology on a G34 4P for a short while before LHC started, but I'll give this a go while the experts are sleeping.
There didn't appear to be a great points advantage for the planck (VirtualBox) tasks over the non-VirtualBox single-thread work (camb_legacy), but what I liked was that the tasks finished much quicker. Some of the legacy tasks took days, compared to an average of about 30 minutes per task using the settings below. That matters now that we only have a couple of days or so left.

If you decide to switch any boxes over, another important consideration is the task configuration: with the planck tasks left at their defaults, BOINC tried to use 32 cores of my 48-core box for a single task, which for some reason ran at less than 50% CPU utilisation. After some experimentation the optimal number of cores turned out to be 4 per task, so my app_config.xml was:

<app_config>
  <app>
    <name>lsplitsims</name>
    <max_concurrent>12</max_concurrent>
  </app>
  <app_version>
    <app_name>lsplitsims</app_name>
    <plan_class>vbox64_mt</plan_class>
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>

You will, of course, have to make adjustments to account for the number of threads on your box and to allow for the legacy tasks to finish.
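For anyone scaling this to a different box, here's a rough sketch of how I'd adjust it for, say, a hypothetical 16-thread machine — the numbers are only an example, not something I've tested. The idea is simply that max_concurrent × avg_ncpus shouldn't exceed your thread count (my settings above work out to 12 × 4 = 48), and you'd drop max_concurrent a notch if you want to leave a core or two free for the remaining legacy tasks:

<app_config>
  <app>
    <name>lsplitsims</name>
    <!-- 16 threads / 4 per task = 4 concurrent tasks; use 3 to leave spare cores -->
    <max_concurrent>4</max_concurrent>
  </app>
  <app_version>
    <app_name>lsplitsims</app_name>
    <plan_class>vbox64_mt</plan_class>
    <!-- CPUs BOINC will budget for each VirtualBox multi-threaded task -->
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>

Remember that BOINC only picks up app_config.xml changes when you re-read config files in the manager or restart the client.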

YMMV and good luck!
 
I do recall, when I was running YAFU's multi-threaded apps several months back, that quads seemed to be the sweet spot. Other projects, like Collatz, have been open about their multi-threaded apps actually being less efficient than running single-instance apps side by side. So make sure to do your own testing in advance when possible. Results can also differ by system: full cores vs hyper-threading, or even Intel vs AMD.

As for why you will sometimes see low utilization on a multi-threaded application: there are points where the other threads have to wait for one portion of the work to finish before they can continue, because not all applications scale equally well at all times. So a task may be using only one thread out of 32 and then suddenly spike to all 32. For example, if a task spends half its run time in a single-threaded phase, even 32 cores can only roughly double its speed, and most of them sit idle in the meantime. This is why you will often see better utilization at smaller core counts, even though the work units may take longer.
 