Collatz

I already switched the main ones out. I might set up one of the single-GPU rigs with the #53 settings to see what happens.

How can I tell that they're all working? With so many different GPU types my overall production might go up, but how can I tell whether one of the GPUs isn't still holding back further progress?
 
Gilthanis, if these settings don't put me back in #1 production for the team I'm gonna really need to borrow those 970s you have laying around. :LOL::ROFLMAO:

For real though.
 
Use the GPU-Z program to see what your GPU utilization is for each card. If they are all at or near 100%, you are probably pushing it to the max. If not, you can try adjusting those config settings. If you have them pretty well maxed, that is when I would look into more than one work unit per card.


The downside is that any time Slicker makes a new application, you would have to fine-tune again.
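For anyone who gets to the point of trying more than one work unit per card: in BOINC that is normally done with an app_config.xml dropped into the Collatz project folder. A minimal sketch, assuming the short application name is collatz_sieve (it matches the config file name mentioned later in the thread) and using 0.5 purely as an example value rather than a tuned recommendation:

    <app_config>
      <app>
        <name>collatz_sieve</name>
        <gpu_versions>
          <!-- 0.5 means two tasks share one GPU; example value only -->
          <gpu_usage>0.5</gpu_usage>
          <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
      </app>
    </app_config>

If I recall correctly, BOINC Manager can re-read this from the advanced view (Read config files) without restarting the client.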
 
Wow. Just wow. Either 1080 Ti's are that much faster or those settings are phenomenal.

I think it's mostly the settings. I noticed a massive improvement in PPD when I changed from the defaults to the above settings. I think you will be back where you belong, i.e. bestowing tyre marks on the masses.

How can I tell that they're all working? With so many different GPU types my overall production might go up, but how can I tell whether one of the GPUs isn't still holding back further progress?

I did it by quickly comparing the run times and points on the latest work units to the previous ones on the project website, then recalculating the PPD in Excel. Bit of a pain if you have multiple GPUs in the same rig, but it should be fine.
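If anyone else wants to do the same comparison, the arithmetic is just credit per task divided by run time, scaled up to a day: PPD ≈ (credit per WU ÷ run time in seconds) × 86,400 per GPU. As a purely made-up example, a WU paying 6,000 credits in 1,200 seconds works out to 432,000 PPD, while the same credit in 800 seconds is 648,000 PPD. The credit figure is only illustrative; it's the before/after run times that matter.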
 
Yes... if you have multiple different cards in the same rig... it is going to be an issue unless you do some multi-client magic. I would either set the cards to work on other projects by excluding them via cc_config, or just tune for your fastest cards and hope the slower ones don't puke on you.
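For the exclusion route, the client-wide cc_config.xml (in the BOINC data directory) takes one exclude_gpu block per card you want to keep off a project. A minimal sketch with placeholder values; the URL has to be the Collatz project URL exactly as it appears in BOINC Manager, and device_num is the GPU index the event log reports at startup:

    <cc_config>
      <options>
        <exclude_gpu>
          <url>http://example.org/collatz/</url>
          <device_num>1</device_num>
        </exclude_gpu>
      </options>
    </cc_config>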
 
Use the GPU-Z program to see what your GPU utilization is for each card. If they are all at or near 100%, you are probably pushing it to the max. If not, you can try adjusting those config settings. If you have them pretty well maxed, that is when I would look into more than one work unit per card.


The downside is that any time Slicker makes a new application, you would have to fine-tune again.

I don't mind changing settings. It's the troubleshooting I just can't do. It takes too much time, so I have to depend on my teammates to walk me through it, hopefully.


I think it's mostly the settings. I noticed a massive improvement in PPD when I changed from the defaults to the above settings. I think you will be back where you belong, i.e. bestowing tyre marks on the masses.

LOL Love it!



I did it by quickly comparing the run times and points on the latest work units to the previous ones on the project website, then recalculating the PPD in Excel. Bit of a pain if you have multiple GPUs in the same rig, but it should be fine.

Yeah, that's just not going to happen. Editing the config files on each rig is a big task for me, much less manually calculating and comparing tasks per rig against the previous ones. Yeah... I'm already typing too much just thinking about all the work.
 
Yes... if you have multiple different cards in the same rig... it is going to be an issue unless you do some multi-client magic. I would either set the cards to work on other projects by excluding them via cc_config, or just tune for your fastest cards and hope the slower ones don't puke on you.

Cards are all grouped together in multi-GPU rigs. 3x 980 Ti, 2x 970, and 2x 980 Ti are my three multi-GPU setups... for now.
 
Team [H] is currently in the top 5 for points today on this project since phoenicis joined. If I see a similar increase in production with the configs, I think the [H] might take #1 for the daily.
 
I used the savings on electricity for the last five years as my excuse. I was amazed when the wife accepted this logic.
Talking about saving electricity: outside of challenges/competitions, I run my card with the power limit set to 50-60% for greater GPU crunching efficiency. Even a 10% electricity saving over a year is a big saving. This is good for projects that don't award a bonus for quick turnaround. It might also help if the wife complains about the cost of electricity even with the 1080 Ti :p

BTW, I haven't yet used Pascal's ability to tweak the frequency/voltage curve to find the optimum power efficiency while crunching.
 
applejacks

Edit the configs on your GPUs for this project. It will help with your daily production tremendously.
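For anyone who missed the earlier posts: the tuning file is a plain key=value text file in the Collatz project folder, named after the application and plan class (e.g. the collatz_sieve_1.21_windows_x86_64__opencl_nvidia_gpu.config mentioned further down). The sketch below only shows the shape of it; the numbers are placeholders, not the tuned values from post #53 / #69, and the only keys shown are the two discussed in this thread:

    threads=8
    lut_size=17

Whatever values you settle on, each GPU application/plan class appears to need its own matching .config file (see the x86 vs x86_64 discussion further down).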
 
I did the stuff from post 53, I think it was... this computer has one GTX 1060.
It's made the computer really slow :p
Not sure what settings would be good for a GTX 1060 or 1070.
I have another box with two 1060s in it; not sure if the run times went down at all.
upload_2017-5-31_13-39-59.png
 
applejacks you'll need to log onto your account page and view the completed/validated tasks. Since you probably made the changes a while back, you'll have to search through until you find the date you made them and see if the tasks are returning faster since then.

phoenicis we made top 2 so far.
top-3.jpg
 
phoenicis we made top 2 so far.

Nice! Although the 1 and 2 seem to be the wrong way round :p

In an attempt to correct this injustice I've done some more testing.

The 1080 Ti cards were fluctuating between 95-100% load, so to address this I tried increasing the lut_size to 18 (bumped PPD 5%) and doubling up à la Einstein (another 5% bump). It was working well until one rig went haywire an hour later. I suspect that either the driver crashed or an x86 task arrived for which there was no matching config file. I'll see how it behaves overnight.

I did the stuff from post 53, I think it was... this computer has one GTX 1060.
It's made the computer really slow :p

Give the settings from post #69 a try, but change the threads setting to 8. Watch out for the heat, but you should be able to almost double your output.
 
Give the settings from post #69 a try, but change the threads setting to 8. Watch out for the heat, but you should be able to almost double your output.
OK, I changed to the settings in #69. Will this only run one WU at a time?
 
applejacks I had a little bit of free time, so I took a look at your 1070 rig since you have your tasks viewable.

This screenshot shows some of your first valid tasks returned.
1st1070.jpg


You can see you're averaging around 1150 seconds per task.

These are your most recent valid results.
2nd1070.jpg


You can see that whatever you did before didn't make a difference, as they're still returning in 1150 seconds on average. However, I do see one at 627 seconds and two around 450 seconds, so whatever changes you made a few minutes ago are working now.
 
Nice! Although the 1 and 2 seem to be the wrong way round :p

In an attempt to correct this injustice I've done some more testing.

The 1080 Ti cards were fluctuating between 95-100% load, so to address this I tried increasing the lut_size to 18 (bumped PPD 5%) and doubling up à la Einstein (another 5% bump). It was working well until one rig went haywire an hour later. I suspect that either the driver crashed or an x86 task arrived for which there was no matching config file. I'll see how it behaves overnight.

ohnoes.gif
 
Yeah, I just now touched this box, that's the 1070, so I was watching the ~450 second times come in. I have two computers out there with 1060s that just errored out a few hundred WUs when I attached them to Collatz, until I was able to go to those boxes and reset the project. Two of those boxes have the settings from post 53 set and are running dual WUs at a time.
 
I don't think running 2 tasks at a time will benefit you in this project. I briefly looked at the returned tasks for your 1060s. On the single-1060 boxes they're 1600+ seconds per WU, and I've seen some tasks as high as 2200 seconds. I do not think running 2 WUs at a time on those boxes is benefiting you. My 970s are returning single work units at ~800 seconds; prior to my edits they were seeing ~1500 second returns. I think your 1060s can do better than that if you just let them run 1 WU at a time.

As for the dual-1060 rig, I am seeing ~3400 seconds per WU. Something is terribly wrong here.

Are you running other GPU projects on these rigs as well? That might explain the higher return times.
 
phoenicis Oh noes! Your edits might have given you an edge. This last update I was only 18,467 points ahead! Oh noes!

ohnoes.jpg
 
Nope, not sure what's up with the boxes with the 1060s. They seem to be running pretty slow, and they're only doing GPU on Collatz and CPU on TN-Grid. I think the config is running dual WUs on the one box and quad WUs on the box with 2x 1060s in it. I'll have to check tomorrow.
 
Running 2 tasks per GPU is hurting your production I think.
 
phoenicis Oh noes! Your edits might have given you an edge. This last update I was only 18,467 points ahead! Oh noes!

Lol, I suppose this is the point where you roll out another GPU into your plethora of rigs and wave me goodbye.

Anyway, way past my bedtime. GL guys.
 
I'm not gonna lie. I looked on eBay for any decent GPU deals that might be available for purchase tonight. lol
 
Well, I have messed around with 2 of my boxes with GTX 1060s and I cannot get them to do any better. On the one at home I got it down to the 975-1000 second run time range. The other ones are still sitting at 1600-1700, even 3k, with the same settings I used at home.
Should I reset the project from the BOINC Manager?
 
If you reset the project, you will most likely have to redo the config files, as the reset will delete them. But if you are fine with that... go for it.
 
Well, they seem to be doing better now. Before, both boxes had the collatz_sieve_1.21_windows_x86_64__opencl_nvidia_gpu.config and the same again but x86-only. After the reset it only put back the x86 OpenCL one... and I just edited that config, and both machines seem to be happy.
 
Yesterday we were the #3 most productive team in the world.

Today, Un4given has raised his production a lot. Though I doubt we will ever get to #2, we might be able to replicate another #3 finish today.
 
It looks like 2 of my boxes stopped on Collatz. Probably overheated... the AC has been out where those boxes are.
 
This might be the last month until Fall that I can continue running this project. I'll be shutting down more of my rigs because of the heat real soon.
 
Haha nah.

Most of my GPUs are under water already anyway, so that's not the problem. It's that they heat up the house and make the AC work overtime, which costs $$$. My last electric bill was $629. I turned the AC off as soon as I got that bill. If it's even remotely close this month when I get the bill, they're going off. I might leave a single 4P rig running.

I'll just fire them up for the 3-day sprints to help the team if needed.
 
I'm wondering if I should continue to build this 2P Xeon machine. I'm really only lacking the motherboard at this point. Or should I see what Threadripper or dual Threadripper (is that Naples?) has to offer? I have 2x 2680 v3 to put to use, but if Threadripper is going to be much faster, should I wait? Sell off the Xeons and probably save money... Decisions, decisions :coffee:
 
Chances are the Intel rig will still be faster than the newer AMD rig.

With that being said, how much would you sell your Xeons for? I'm looking to do a 2P Intel rig towards the end of the summer.
 
You really think the Intels will be faster? Hmmm. Well, I'm gonna hold off just for a little bit... we are in June, so hopefully they start hitting the testing market. I was holding out on buying an ASUS Z10PE-D8 WS motherboard, but it still seems pretty pricey, seeing that the board came out in September 2014.
 
Depends on the project. Projects that are optimized for instruction sets that Intel has will destroy AMD's offerings.
 
And don't forget Intel's i9 series being released. But yes, Intel is typically the better route for DC dedicated 24/7.
 
Yeah, true. I'll just keep saving up for that ASUS board.
 