Joining the Horde and having an issue

Cyberrad
Limp Gawd · Joined Sep 12, 2008 · Messages: 327
I just set up the SMP client from Stanford's website and I don't seem to be able to download any cores.

Here is what I am getting from the client:
Code:
[16:56:04] - Error: HTTP GET returned error code 403
[16:56:04] + Error: Could not download core
[16:56:04] + Core download error (#2), waiting before retry...
I received this same error when I tried notfred's VM on one of my ESX boxes as well.

The network here is Comcast Business -> pfSense -> Cisco Business switch -> machine in question. The pfSense box has Snort and HAVP installed. Looking through the Snort alerts, there aren't any entries that relate to Stanford IP addresses.
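For anyone hitting the same thing, the error codes in a log like the one above can be pulled out programmatically, which makes repeated 403s from a filtering proxy easy to spot across a long log. A minimal Python sketch; the regex assumes the exact v6 client log wording quoted above:

```python
import re

# Sample log lines copied from the post above.
LOG = """\
[16:56:04] - Error: HTTP GET returned error code 403
[16:56:04] + Error: Could not download core
[16:56:04] + Core download error (#2), waiting before retry...
"""

def http_errors(log: str) -> list[int]:
    """Return every HTTP error code mentioned in the log text."""
    return [int(code) for code in re.findall(r"HTTP GET returned error code (\d+)", log)]

print(http_errors(LOG))  # [403]
```

A run of nothing but 403s points at something rewriting or rejecting the GETs (a proxy/AV layer) rather than at Stanford's servers being down.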
 
Well, the servers are up and working, so it would seem there is a network configuration issue. Try adding that one machine's IP to the DMZ and run Wireshark to watch which ports it's trying to connect to and see if it works. If it does, then we know pfSense is blocking a particular port; Wireshark should catch it and you can open that port.
 

I followed the install guide thanks.


According to the F@H website, only ports 80 and 8080 outgoing are used. I have a feeling it is the HAVP server. I added the stanford.edu website to the whitelist, but I am still not having any luck.
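Since only outbound 80 and 8080 are supposed to be needed, a quick way to rule the firewall in or out is a plain TCP probe run from the folding box itself. A minimal Python sketch, assuming nothing F@H-specific; point it at whatever server IPs show up in your client log:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage: substitute a real assignment-server address from your log, then
# check both ports the F@H site lists.
# for port in (80, 8080):
#     print(port, can_connect("assignment.server.from.log", port))
```

If both ports connect from the DMZ'd machine but the core download still 403s, the block is happening above the TCP layer (i.e. the proxy/AV is intercepting the HTTP itself), which matches the HAVP suspicion.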
 
Only when there is a new version, which isn't very often. I've uploaded the current one for you here, so you can download it and just stick it into your F@H directory: http://rapidshare.com/files/407328865/FahCore_a3.exe

My real concern is that the same issue may prevent you from downloading and uploading work. If that is the case, then there's really nothing you can do.
 
It might. Is there a way to tell which IP(s) I will be getting assignments from? I think I need to add them to the whitelist.
 
Welcome to the team. Glad to have ya. The one thing I can think of that has not been mentioned is an uninstall/reinstall of the client. I don't have any port issues here, and I have my network behind a firewall as I am sure most peeps here do, so a broken install could be it.

Fold On !!
Fish :cool:

 
I don't recommend notfred's. It's outdated and incompatible with SMP2 last I checked. You'd be better off using linuxrouter's folding distro, though the best way to go in general is always to just set up your own OS and configure F@H the way you like it.
 
I created my own as you suggested: 8 vCPUs on the dual L5410 machine in my sig. It is performing just like my Q6600 rig. Any suggestions on pulling the numbers up? I would have thought this setup should have done at least 75% better than the Q6600.

Currently I am using -verbosity 9 -smp 8 -forceasm.
 
Are you really loading all 8 cores? I seem to recall that some VMware products could not run an 8-core VM, so I am wondering if you are really only using 4 cores. That would make sense based on your Q6600 comparison.

Also, welcome to the team!
 
I am using ESX 4. According to vSphere Client I am using up all of my processing power. All 8 cores are pegged.
 

VMware version 3.0.0 is the only one that can run all 8 cores using LinuxFAH's image. A shame, really, that they have two newer versions and literally turned off support for more than 4 cores.
 
Is the Q6600 overclocked or at stock? If it is at stock, then your Xeon machine should absolutely be performing better. Since you've got 8 cores, however, I would recommend that you run -bigadv units. Just add in the -bigadv flag along with the rest and see if you get better performance out of it that way.
 

I guess that information would help, huh. The Q6600 is OC'ed to 3GHz, so it really isn't that tall of an OC. I'll try the -bigadv flag and see what I get.
 
Just an update on the other issue I had: I changed HAVP to not intercept all traffic, instead pointing clients that need the AV protection at the proxy. Now all of the clients can connect to Stanford with no problem.

I also started GPU2 on my GTX280. The points are rolling in now.
 
The L5410 is a great processor, but at stock speeds it likely won't make the bonus deadline running P2684 WUs; they're clocked at only 2.33GHz. My dual [email protected] was processing P2684 WUs with a TPF of ~52 minutes. I doubt even the P2685 will make the bonus deadline at stock L5410 speeds, unfortunately.

Now, if the system is OCed, that's a different story, but it depends to what extent. My E5410 has a massive OC and I am getting lousy TPFs compared to the A2 -bigadv WUs, thanks to Stanford's nice code rewrite for A3 -bigadv... :rolleyes: :mad:
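To put the deadline question in numbers: a work unit is 100 frames, so total runtime is simply TPF x 100. A rough Python sketch using the ~52 minute TPF quoted above; the 96-hour deadline is illustrative only, not Stanford's actual preferred deadline for P2684:

```python
def wu_hours(tpf_minutes: float, frames: int = 100) -> float:
    """Total work-unit runtime in hours, given time-per-frame in minutes."""
    return tpf_minutes * frames / 60.0

tpf = 52.0                       # the ~52 min TPF quoted above
print(round(wu_hours(tpf), 1))   # 86.7 hours for the whole WU
print(wu_hours(tpf) < 96.0)      # True against the illustrative 96 h deadline
```

Plug in your own TPF and the real deadline for the project you're running to see how much headroom (if any) a stock-clocked rig has.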
 

Woah... that makes me feel better about my L5410s. They are currently doing 5:45 TPF. The E5320s are doing 6:36 TPF. The Q6600 is doing 6:55 TPF, with GPU2 running as well on the GTX280 doing 45 TPF.
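For comparison's sake, those m:ss TPFs convert to frames per hour easily enough. A small Python sketch using the CPU figures quoted above (labels are just shorthand for the rigs in this thread):

```python
def tpf_to_minutes(tpf: str) -> float:
    """Convert an m:ss time-per-frame string to minutes."""
    minutes, seconds = tpf.split(":")
    return int(minutes) + int(seconds) / 60.0

# TPFs quoted in the post above.
rigs = {"dual L5410": "5:45", "dual E5320": "6:36", "Q6600 @ 3GHz": "6:55"}

for name, tpf in rigs.items():
    mins = tpf_to_minutes(tpf)
    print(f"{name}: {mins:.2f} min/frame, {60 / mins:.1f} frames/hour")
```

Lower minutes per frame means more frames per hour, so the L5410 box is clearly out in front even before any bonus multiplier is considered.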
 

Outstanding! The addiction has started.

Fold On!
 
What motherboards are you running? I don't recognize the products in your sig. These are complete Supermicro systems, correct?
 

The rig the E5320s are in is a Supermicro barebones (SuperServer). The L5410s are just in a Supermicro chassis with a Tyan i5100W.
 